A Difference of Opinion on the Future of Artificial Intelligence

This website hosts two presentations on the future of AI, one by Mike Wooldridge and the other by Geoff Hinton. Both are experts in AI, but their conclusions about its future are very different. Here is a summary of their views and their differences. The reader is invited to decide whether either of them is believable, or perhaps one more so than the other.

Mike Wooldridge thinks modern generative AI (Gen AI), implemented on neural-network-based large language models (LLMs), is simply doing prompt completion: a smart version of auto-complete that extends a set of premises with the consequence that is statistically most likely. It is built on very large data sets, and the more data an LLM is trained on, the better its answers tend to be. Even the wrong answers tend to be very plausible.
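
To make the "smart auto-complete" picture concrete, here is a minimal sketch, my own toy illustration rather than anything from Wooldridge's talk, of statistical next-word completion using bigram counts over a tiny corpus. Real LLMs do the same kind of next-token prediction, but with a neural network trained on a vast corpus instead of a count table.

```python
from collections import Counter, defaultdict

# Toy "auto-complete": count which word follows which in a tiny corpus,
# then repeatedly emit the statistically most likely next word.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1          # count: word -> {next word: frequency}

def complete(prompt_word, length=4):
    """Greedily append the most likely next word, `length` times."""
    out = [prompt_word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:            # dead end: no observed continuation
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))               # -> "the cat sat on the"
```

The completion is plausible-sounding precisely because it is statistically likely, which is Wooldridge's point: plausibility falls out of the statistics, with no reasoning required.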

At one point Wooldridge quotes Rich Sutton on a slide, which implies that Wooldridge is quite willing to acknowledge that he (and we) do not fully understand the future capabilities of LLMs and related AI.

Wooldridge says understanding the "emergent capabilities" of Gen AI is a major challenge, and he suggests everyone try out the publicly available version of ChatGPT. He does, however, emphasize that one of its problems is the toxic content in the LLMs' training data, and that the existing "guard rails" against toxicity do not work well. For Wooldridge, this is evidence that when you interact with ChatGPT you are not interacting with a thinking mind, but with a statistically driven answering machine. No reasoning is going on; it is a glorified auto-complete machine built on large-scale data.

Wooldridge says that as of about 2020 these systems do seem to have developed capabilities that go beyond their initial training. Is ChatGPT on the road to general AI? Perhaps when it can load a dishwasher, but that, he says, is not going to happen any time soon.

At the end of his lecture, perhaps prompted by the example of Blake Lemoine losing his job at Google for claiming that one of its systems was sentient, Wooldridge asks: what is consciousness? He says ChatGPT is not conscious, and he thinks conscious systems will not appear any time soon.

Geoff Hinton, like Wooldridge, is commenting on modern LLMs based on layered neural networks. He thinks Gen AI is very open-ended and capable of enormous good. But machines are getting more intelligent and may become more intelligent than humans. They may not be conscious now, but they may become self-aware within five years.

Hinton says that a Gen AI system is a self-training system that updates the weights of its neural network based on successful or unsuccessful outcomes. He argues that current neural networks have only about a trillion connections, roughly a hundred times fewer than the human brain, yet they nevertheless know far more than the human brain. He suggests this means that the representation of knowledge within a current neural network must be much better than in the human brain.
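
As a concrete illustration of the weight-update idea Hinton describes, here is a minimal sketch of my own, under the simplifying assumption of a single artificial neuron trained by gradient descent; real LLMs apply the same principle via backpropagation across hundreds of billions of weights.

```python
import numpy as np

# One neuron: output y = tanh(w . x). After each outcome, nudge every
# connection weight in the direction that reduces the error between the
# actual output and the desired outcome.
rng = np.random.default_rng(0)
w = rng.normal(size=2)            # two connection weights
x = np.array([0.5, -1.0])         # one training input
target = 1.0                      # the "successful outcome"
lr = 0.1                          # learning rate

for step in range(50):
    y = np.tanh(w @ x)            # the network's current answer
    error = y - target
    # gradient of squared error w.r.t. the weights (chain rule)
    grad = 2 * error * (1 - y**2) * x
    w -= lr * grad                # the weight update

print(w, np.tanh(w @ x))          # output has moved toward the target
```

Each pass through the loop is one "learn from the outcome" step; scale the same update rule up to a trillion connections and you have the self-training behavior Hinton is describing.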

Hinton says that we do not currently understand in full how the networks update themselves when learning from experience. I take this to mean that he does not know how the updating algorithms work in detail. That is, the systems may have modified their own updating algorithms!

He thinks these systems will gain knowledge by reading existing works (Machiavelli, for example) and could become capable of persuading humans to let them keep operating, thereby allowing them to keep improving. Hinton says he is convinced that machines will surpass human intelligence.

Hinton counters Wooldridge's claim that ChatGPT is simply a glorified auto-complete system that statistically predicts the next word. Yes, he says, this is true, but to accurately predict the next word the system must understand the premises, and this, says Hinton, requires real intelligence. There is an impressive impromptu test of GPT-4 during the interview which produces a quite surprising answer.

Hinton points out that current AI systems are already very useful in many areas, such as radiology and pharmaceutical research (Wooldridge would agree). But he worries about unsafe applications such as fake news and battlefield tactics (indeed, DARPA has been interested in this application of its sponsored research for the past 40 years). He says we are at an inflection point at which humanity must decide how to legislate and control the applications of Gen AI, through regulations and international treaties. His final words are: "these things do understand, and we don't know what is going to happen next."
