2020-01-26 11:47:14
Source: https://medium.com/@xamat/the-year-in-ai-2019-ml-ai-advances-recap-c6cc1d902d5
the Turing award. You might think that after a few years of breakneck innovation, this kind of recognition might signal that we are nearing some sort of plateau. Well, think again: that is nowhere in sight yet. It is true that the hot areas have clearly shifted; while a few years ago image recognition was all the rage, this year we have seen more impressive advances in language. I will go into all of this below, but let me start by summarizing the biggest AI headlines of 2019, in my own very biased opinion:
- scarily good)
- AI becoming good at creating synthetic content has some serious consequences
- The biggest theoretical controversy continues to be how to incorporate innate knowledge or structure into machine-learned models. There has been little practical progress toward this end, and little progress toward any other theoretical breakthrough.
- The revolution may get unsupervised at some point, but for now we can make it self-supervised
- Computers continue to get better at playing games and can now collaborate in multi-agent scenarios
- Other areas like Healthcare and Recommender Systems continue to see advances from Deep Learning, although some of these advances are being questioned
- The war between frameworks continues, with a major TensorFlow release and big moves in the PyTorch arena.
But, let’s get right into it and dive into each of these fascinating 2019 headlines.
The year of the Language Models
Facebook seems to have really bought into the Language Model revolution.
Allen Institute’s Aristo AI passed an eighth-grade science test. In fact, looking at the SQuAD leaderboard, it seems nowadays anyone can surpass human-level reading comprehension by combining some of these known approaches (see image below).
Stop Thinking With Your Head” where he shows how for many tasks a simple LSTM model can perform almost as well as the most complicated Transformer.
Combining knowledge/structure with deep learning
In 2019 we continued to hear loud voices advocating for AI not to get stuck in a Deep local maximum. According to many, myself included, we should be able to combine data-intensive deep learning approaches with more knowledge-intensive methods that add some form of innate structure. While it is true that there is a lot of work to be done in that space, we did see many examples of research combining deep learning with more “traditional” AI.
Wizard of Wikipedia: Knowledge-Powered Conversational Agents is Facebook’s response in that same space.
syntax trees can be directly inferred from such models.
The self-supervised revolution
Self-training with Noisy Student improves ImageNet classification”. All of these approaches improve on SOTA supervised methods while using much less labeled data.
Learning from the experts” where we sidestep the need for costly and noisy labeling of medical data by generating synthetic training data.
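The self-training loop behind approaches like Noisy Student can be sketched in a few lines: a teacher trained on a small labeled set pseudo-labels unlabeled data, and a student is then trained on the combined set. The toy 1-D nearest-centroid "model" and the confidence threshold below are my own illustrative assumptions, not the actual pipeline from the paper (which trains large noised CNNs on ImageNet).

```python
# Minimal sketch of self-training (teacher -> pseudo-labels -> student).
# The nearest-centroid classifier and margin threshold are illustrative
# stand-ins for the real models used in Noisy Student.

def train_centroids(points, labels):
    """Fit a nearest-centroid classifier: one mean per class."""
    centroids = {}
    for cls in set(labels):
        cls_points = [p for p, l in zip(points, labels) if l == cls]
        centroids[cls] = sum(cls_points) / len(cls_points)
    return centroids

def predict(centroids, x):
    """Return (label, margin); margin = gap between the two nearest centroids."""
    dists = sorted((abs(x - c), cls) for cls, c in centroids.items())
    label = dists[0][1]
    margin = dists[1][0] - dists[0][0] if len(dists) > 1 else float("inf")
    return label, margin

# Small labeled set and a larger unlabeled pool.
labeled_x, labeled_y = [0.0, 1.0, 9.0, 10.0], [0, 0, 1, 1]
unlabeled_x = [0.5, 1.5, 8.5, 9.5, 5.0]

teacher = train_centroids(labeled_x, labeled_y)

# Pseudo-label only confident points (margin above a threshold),
# so ambiguous points like 5.0 are skipped.
pseudo_x, pseudo_y = [], []
for x in unlabeled_x:
    label, margin = predict(teacher, x)
    if margin > 3.0:
        pseudo_x.append(x)
        pseudo_y.append(label)

# The student is retrained on labeled + pseudo-labeled data.
student = train_centroids(labeled_x + pseudo_x, labeled_y + pseudo_y)
print(predict(student, 2.0)[0])  # → 0
```

In the real method the student is made larger than the teacher and trained with noise (dropout, data augmentation), which is what pushes it past SOTA supervised baselines.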
Other miscellaneous research advances
The year also came with other advances that don’t neatly fit into the main trends of combining knowledge with deep learning, or self-supervision. What follows are some of my favorite highlights in this miscellaneous category.
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks” introduces an approach to uniformly scale all dimensions in a CNN.
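The compound-scaling idea is simple enough to sketch: instead of scaling depth, width, or resolution independently, all three grow together as powers of a single coefficient. The base coefficients below are the grid-searched values reported in the paper; the baseline dimensions (18 layers, 64 channels, 224 px) are my own illustrative assumptions.

```python
# Sketch of EfficientNet-style compound scaling: depth, width, and
# input resolution are scaled jointly by one coefficient phi.
# alpha, beta, gamma come from the paper; the baseline network
# dimensions here are illustrative, not EfficientNet-B0's exact config.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # chosen so alpha * beta**2 * gamma**2 ≈ 2

def compound_scale(base_depth, base_width, base_resolution, phi):
    """Scale all three network dimensions uniformly by phi."""
    return (
        round(base_depth * ALPHA ** phi),       # number of layers
        round(base_width * BETA ** phi),        # channels per layer
        round(base_resolution * GAMMA ** phi),  # input image size
    )

# Each increment of phi roughly doubles FLOPs, since
# alpha * beta**2 * gamma**2 ≈ 2.
print(compound_scale(18, 64, 224, phi=0))  # baseline: (18, 64, 224)
print(compound_scale(18, 64, 224, phi=3))  # a larger family member
```

The constraint alpha * beta**2 * gamma**2 ≈ 2 is what makes the FLOP budget predictable: width and resolution enter compute quadratically, depth linearly.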
according to Chip Huyen, it’s the most commonly asked question during interviews).
Learning from the experts: From expert systems to machine-learned diagnosis models” already proposed a combination of these two techniques.
Open Set Medical Diagnosis”.
$1M deep-fake detection challenge. Clearly, detecting fake content is going to be a huge deal in the future, and it is good to see that we are already putting effort into this.
Let’s keep playing
StarCraft competition with AlphaStar. Both of these advances show that algorithms can not only master complicated but highly structured games like Go, but also adapt to fuzzier strategic goals that may even require collaboration.
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model, where DeepMind again shows how a combination of search and a learned model can achieve superhuman performance not just in a single game, but across a range of games.
Is there room for Deep Learning outside of text and image?
Of course, the Deep revolution is having an impact way beyond text and images. I will focus on the two areas I follow most closely: recommender systems and healthcare. Interestingly, I have seen a similar pattern in both areas this year (warning: you should know that we “scientists” see patterns all around us).
a paper that questions most of the recent advances in using deep learning approaches and shows how simpler methods obtain similar or better results.
Prototypical Clustering Networks for Dermatological Disease Diagnosis”). However, when applied to more complex data like Electronic Health Records (EHR), we show that much simpler models perform just as well as deep neural networks (see our upcoming “The accuracy vs. coverage trade-off in patient-facing diagnosis models”).
The Framework/Platform war
according to some data, PyTorch continues to win the research battle, while TensorFlow dominates in production-ready systems.
Chainer merging into PyTorch.
framework for dialogue system research.
What to expect in 2020
Thanks for making it this far! I know this is a long post. Plus, I am always much better at explaining the past than predicting the future. So, I won’t keep you here much longer. I don’t have risky predictions for what 2020 will bring, but I am sure of a few things:
- There will be more advances in NLP, some of which will be categorized as breakthroughs
- AI will get better at faking all kinds of content, and we will see efforts on how to avoid the possible negative side effects
- Aspects such as uncertainty modeling, out-of-distribution modeling, meta-learning, and interpretability will continue to be top of mind for many.
- AI will continue to hugely impact broad application areas such as Healthcare
- We won’t see self-driving Teslas on the roads
- We won’t solve AGI
Hope you enjoyed the post, and looking forward to your feedback and comments!
Thanks to Anitha Kannan for feedback on an early version of this post.