The New Bitter Lesson - an Update
What Yann LeCun, Geoffrey Hinton, and others are now grokking in AI
The original bitter lesson, by Richard Sutton, the father of reinforcement learning, is a deeply thoughtful essay on the supremacy of search and learning, the two methods that have demonstrably scaled AI, over symbolic AI or any approach where researchers leverage their human knowledge of the domain, which always hits limits. Many in the AI scene have internalized the bitter lesson in this age of LLMs and deep learning, so it is an important read.
A new lesson is in order now. While AI like ChatGPT has already exceeded many expectations and will only get smarter, the hallucination problem is like the smile of the Cheshire cat: a big tell that we have missed something big in our very conception of intelligence. There is no known solution to the hallucination problem in the current paradigm; deep learning is a black box. There was an implicit assumption that on the pathway to AGI or the singularity the hallucination problem would melt away, yet the lack of progress suggests otherwise.
More AI researchers are realizing this. Geoffrey Hinton, a father of backpropagation (the core algorithm behind deep learning and LLMs), has since leaving Google focused on an alternative to his own brainchild: the forward-forward algorithm. Yann LeCun of Meta recently outlined his work on a “new architecture” to overcome AI’s key limitations. Even commercially, Sam Altman of OpenAI has talked about potentially missing something foundational in their approach, and has said that the age of large models is over. Perhaps most tellingly, Google’s Sundar Pichai admitted on 60 Minutes that his company’s AI is prone to hallucinations.
The new bitter lesson is the death of the old bitter lesson, as it becomes more obvious that hyper-parameterized AI is not enough to fully represent what intelligence can be.
We (the AI industry) now need to go back to using our brains in our design of thinking machines. We can’t naively hope that more compute and data will solve all of our problems, nor can we return to the past of symbolic AI or AI built on human knowledge, which, as Sutton argued, is played out.
At AO Labs, we’ve been working on AI from first principles, starting from the simplest examples of animal intelligence, from what came far before human knowledge, to build bottom-up AI whose understanding we can relate to our own: AI we can trust. We’ve been at it since 2018, while Hinton, LeCun, et al. are only now starting out, with much to unlearn first.
Developer or AI curious?