Is An AI Safety Summit a Waste of Time?

by David Luckham

British prime minister Rishi Sunak organized an “AI Safety Summit”, held a week ago at Bletchley Park.[1] Yes, he chose as the venue the wartime home of British codebreaking, the place where Alan Turing and others worked on breaking the Nazi codes during World War II. The “summit” was well attended: the US vice-president, Kamala Harris; the European Commission president, Ursula von der Leyen; computer scientists and executives from all the leading AI companies; and, of course, Elon Musk.

Why? Well, there seems to be a growing opinion that current developments in AI technology have increased the risk of its being misused. This is best described as the “dark side” of AI. An immediate fear is the use of AI to flood political elections with fake news and disinformation. Longer-term fears of some prominent AI researchers include the use of AI for unauthorized surveillance, identity theft, the creation of realistic fake videos or audio recordings, the development of autonomous weapons systems, malicious cyberattacks, and the manipulation of stock prices. The list of fears is endless!

The upshot of this summit was the signing of an international declaration recognizing the need to address the risks posed by AI development, and the announcement of another AI safety summit to be held in France in 2024.

But nowhere at this summit does there appear to have been any discussion of what AI is and is not. A vague description of AI as “a computer system that can perform tasks typically associated with intelligent beings” seems to have been as far as it went.

Indeed, what seems to have been signed by default is an international declaration that any smart or unusual computer program should be subject to review for potential misuse. For example, the latest programs that beat the best humans at games like chess and Go could be included.

This is a step too far. Any laws governing AI must specify what properties a program must have in order to be subject to AI rules and regulations.

Of course, the attendees did not intend such a broad mandate! But had they tried to tackle the problem of defining AI, what would they have come up with?

The crux of the matter is to give a test for “AI-ness” precise enough to decide whether a computer program should be subject to the AI regulations. An AI program presumably has some rather difficult-to-define properties: the ability to deviate from its creators’ specifications, to create new behaviors, or to think for itself. Just try coming up with a precise test for those kinds of properties! Such a set of tests would require an enormous number of inputs, plus automated analysis of the outputs for aberrant behavior (which is itself a challenging problem).
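To make the difficulty concrete, here is a minimal sketch of what such an automated audit might look like: run the candidate program over a large bank of test inputs and count how many outputs an automatic judge flags as aberrant. The names and scale here (run_behavioral_audit, violates_policy, a thousand prompts) are assumptions for illustration, not any real regulator’s procedure; the sketch mainly shows where the hard problems sit, namely generating enough inputs and building the judge itself.

```python
# Hypothetical sketch of an automated behavioral audit. The function names
# and the toy judge below are invented for illustration only.

from typing import Callable, Iterable

def run_behavioral_audit(
    system: Callable[[str], str],                  # the program under test
    prompts: Iterable[str],                        # in practice, millions of inputs
    violates_policy: Callable[[str, str], bool],   # the hard part: an automatic judge
) -> float:
    """Return the fraction of outputs flagged as aberrant."""
    total = flagged = 0
    for prompt in prompts:
        output = system(prompt)
        total += 1
        if violates_policy(prompt, output):
            flagged += 1
    return flagged / max(total, 1)

# Toy usage: a trivially well-behaved system and a keyword-based judge.
if __name__ == "__main__":
    prompts = [f"question {i}" for i in range(1_000)]
    system = lambda p: f"answer to {p}"
    judge = lambda p, o: "forbidden" in o          # stand-in for real output analysis
    print(run_behavioral_audit(system, prompts, judge))
```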

Another issue that was ignored at the summit is the process of training an AI program. Training itself can reward bad behavior. In fact, the whole area of training AI systems is a black box, since the details released by the AI companies are vague. The published examples of a training method called Reinforcement Learning from Human Feedback (RLHF) are toy examples and do not go into detail. Sam Altman (CEO of OpenAI) has stated that the cost of training GPT-4 was more than $100 million.[2] Clearly, training is a major component of the development of modern AI systems.
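To illustrate, very loosely, how training can reward bad behavior, here is a toy preference-learning loop. Everything in it (the two canned “actions”, the reward function, the simple policy-gradient-style update) is a simplification invented for this sketch; real RLHF pipelines are far more elaborate and, as noted above, largely undisclosed. The point is only that if the reward signal prefers confident-sounding answers over accurate ones, training will steadily push the system toward overconfidence.

```python
# Toy illustration (not a real RLHF pipeline) of how a misspecified reward
# signal can reinforce undesirable behavior. All names and numbers are
# invented for illustration.

import math
import random

ACTIONS = ["hedged, accurate answer", "confident, possibly wrong answer"]

def misspecified_reward(action: str) -> float:
    # Assumption: raters (or a reward model trained on them) prefer
    # confident-sounding answers, regardless of accuracy.
    return 1.0 if action.startswith("confident") else 0.3

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps: int = 5000, lr: float = 0.05):
    prefs = [0.0, 0.0]                        # policy preferences over ACTIONS
    for _ in range(steps):
        probs = softmax(prefs)
        i = random.choices([0, 1], weights=probs)[0]
        reward = misspecified_reward(ACTIONS[i])
        baseline = sum(p * misspecified_reward(a) for p, a in zip(probs, ACTIONS))
        prefs[i] += lr * (reward - baseline)  # policy-gradient-style update
    return softmax(prefs)

if __name__ == "__main__":
    # After training, the "confident" action dominates the policy.
    print(dict(zip(ACTIONS, train())))
```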

So it may be assumed that any summit on the safety of AI systems should consider regulating training as a way of guarding against bad behavior.

What has happened recently, and what led to the summit, is that large language models (LLMs) have enabled the construction of a new class of AI programs. These programs, using heuristics and access to the data in the LLMs, can produce results that are not predictable and that surprise their creators. For example, since the release of GPT-4, OpenAI has been adding capabilities to the system through “plugins”, giving it the ability to look up data on the open web, plan holidays, and even order groceries. But the company has to deal with the resulting unpredictability: its own systems are more powerful at release than it knows.
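A rough sketch of why plugins compound the unpredictability: the model itself decides, step by step, whether and which tool to call, and each tool’s output feeds back into the next decision. The tool names and the stand-in `model` function below are assumptions for illustration; this is not OpenAI’s plugin mechanism, just the general shape of such a loop.

```python
# Minimal sketch of a plugin-style tool loop. Tool names and the fake
# `model` function are hypothetical stand-ins for illustration only.

from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "web_search": lambda q: f"(search results for {q!r})",
    "order_groceries": lambda items: f"(order placed for {items})",
}

def model(context: str) -> str:
    # Stand-in for an LLM. A real model decides, unpredictably from the
    # developer's point of view, whether and which tool to invoke.
    if "weather" in context and "(search results" not in context:
        return "CALL web_search: weather this weekend"
    return "FINAL: here is my answer"

def run(prompt: str, max_steps: int = 5) -> str:
    context = prompt
    for _ in range(max_steps):
        reply = model(context)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        tool_name, _, arg = reply.removeprefix("CALL ").partition(": ")
        result = TOOLS.get(tool_name, lambda a: "(unknown tool)")(arg)
        context += f"\n{reply}\n{result}"    # tool output feeds back into the model
    return "stopped: step limit reached"

print(run("What's the weather this weekend?"))
```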

The latest AI programs led to the recent, famous letter signed by a thousand researchers and executives calling for a pause in AI development. The letter calls for a pause so that the capabilities and dangers of AI systems can be properly studied and mitigated. But the real worry is that a future AI program could have the potential to “think for itself” and might decide to annihilate humanity. That is where the real fear lies!

Not everyone agrees. Nick Clegg, president of global affairs at Meta and an attendee, said AI was caught in a “great hype cycle” and warned that new technologies inspire a mixture of excessive zeal and excessive pessimism. He said that predictions, such as the claim that a highly powerful form of AI with revolutionary consequences could emerge within years, often do not turn out as feared.

One real problem with this summit concerns the countries that did not fully take part, notably Russia and China. AI has many applications and can give a country an advantage in the quality or performance of its products and exports. Would the USA, or any of the summit attendees, really pause its AI development in the face of international competition?

So this summit, and those that follow it, are likely a waste of time, in that no country is going to pause its AI development. The reasons are obvious:

  1. The lack of a precise definition of AI that any regulation would apply to.
  2. Uncertainty over what system properties and behaviors AI regulations would test for, and the costs involved.
  3. The lack of discussion of regulating the training of AI systems.
  4. The time and investment involved in testing an AI system.
  5. The difficulty of deciding whether an AI testing system is good enough.
  6. The absence of some very important countries from the list of signatories.



[1] https://amp.theguardian.com/technology/2023/nov/02/five-takeaways-uk-ai-safety-summit-bletchley-park-rishi-sunak

[2] https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
