
Did the AI ‘godfather’ quit?

Should we halt AI research? Apparently Geoffrey Hinton thinks so, and has resigned from Google. But is that what he really did, said and believes? There’s always more under the headlines.

There has been no shortage of media coverage of the recent AI breakthroughs, with articles about Google’s Geoffrey Hinton resigning touted as an example of why we should be concerned, following hot on the heels of several prominent people calling for a halt to AI training.

I am intrinsically cautious about any general reporting, where editors are commonly driven by pressures other than balance and whose depth of knowledge is rarely in the subject at hand. In this case, the national and online press uniformly led with “A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field”, adding that he now regretted his work. Powerful words and a portent of the danger of AI. Or, maybe, editorial hyperbole?

We may not all be “godfathers of AI”, but some knowledge and a willingness to read deeper and think should provide perspective on the development of AI technologies and their impact on people, business and society.

Rather than stop at the headline, let’s take a deeper dive into what Dr Hinton actually said.

What Geoffrey said

One shouldn’t hold Dr Hinton to account for how his words were reported. It is certain that much was said, and even the best journalists and editors (and this author) crop and select to build their story. Let’s dive into what was reported, mostly quoting from the BBC interview:

He told the BBC some of the dangers of AI chatbots were “quite scary” and “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

Intelligence has been defined in many ways: higher level abilities (such as abstract reasoning, mental representation, problem solving, and decision making), the ability to learn, emotional knowledge, creativity, and adaptation to meet the demands of the environment effectively.[1]

He’s right: AIs are clearly not more intelligent than us. ‘Intelligence’ is a massively nuanced field, with many different theories of intelligence proposed. It is clearly the case that ChatGPT and its cousins have more general knowledge than any individual, which is what makes them so useful. However, knowledge is not a feature of theories of intelligence; we will come back to this shortly. Learning is a feature of many, and many AIs have a capacity to ‘learn’ through access to public information on the internet. This is the case for Bing Chat, which is GPT-4 married to the Bing search index; presumably Google’s Bard is likewise constantly updated. Creative intelligence is a specific aspect in some theories; AIs can mimic creativity to some extent[2], though it would be hard to argue they exhibit innovation or imagination. Importantly, they do none of this proactively; they are not spontaneously creative, nor do they learn through experience. There is no synthesis of ideas, only linkage of existing knowledge.

[Dr Hinton] told the BBC that chatbots could soon overtake the level of information that a human brain holds. “Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said. “And given the rate of progress, we expect things to get better quite fast. So, we need to worry about that.”

Given that these AIs know what the internet knows, they have already far surpassed the human brain in capacity. The numbers are ~120 zettabytes for the internet (and growing fast) versus ~2.5 petabytes for us humans[3]. That’s roughly fifty million times more information on the internet than in my meagre head (probably more).
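For the curious, here’s the back-of-the-envelope sum behind that ratio. Both inputs are loose estimates, so treat the answer as an order of magnitude at best:

```python
# Rough comparison of internet storage vs. estimated human brain capacity.
# Both figures are the loose estimates quoted above, not precise measurements.
internet_bytes = 120e21  # ~120 zettabytes (1 ZB = 10**21 bytes)
brain_bytes = 2.5e15     # ~2.5 petabytes (1 PB = 10**15 bytes)

ratio = internet_bytes / brain_bytes
print(f"The internet holds ~{ratio:,.0f} times more data than one brain")
# -> The internet holds ~48,000,000 times more data than one brain
```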

The AI I asked about this also, quite sagely, advised:

“However, this comparison does not take into account the quality, relevance, accuracy, or usefulness of the data or information. Not all data on the internet is reliable or meaningful, and not all information in the human brain is accessible or applicable. Moreover, knowledge is not just a collection of facts or data, but also a process of understanding, interpreting, applying, and creating new information.”

That’s a smart answer; I like to think that I would have said the same, given time.

Regardless of how one frames it, AIs know a lot. They are also pretty good at interpolating the data and giving sensible answers, with appropriate caveats. That’s more than most people do, which makes them both valuable and reasonable. They handle basic reasoning tasks at least as well as your average bloke down the pub. They lack understanding and can be tripped up by cause-effect relationships; I invite you to read or listen to anything/everything by Tim Harford on this equally human failing. As Dr Hinton states, they are getting better and it is likely that they will surpass humans at many types of reasoning, if asked the right questions.

All interesting stuff, but not the reason Dr Hinton quit.

In the New York Times article, Dr Hinton referred to “bad actors” who would try to use AI for “bad things”. When asked by the BBC to elaborate on this, he replied:

“This is just a kind of worst-case scenario, kind of a nightmare scenario.” The scientist warned that AI might eventually “create sub-goals like ‘I need to get more power’”.

The statement is certainly a cause for concern. Bad actors are clearly a problem. They are the cause of much of the trouble in the current world and they use many tools to pursue their dogmatic or irrational goals. The suggestion that AIs might create these sub-goals is much less credible.

Let’s pause to consider Artificial General Intelligence (AGI) (think ‘Skynet’) vs. Narrow AI (think ‘predictive text’). Until 2022, all AIs were of the narrow variety, focused on very specific tasks and knowledge domains. Very far from AGI.

The revolution we are seeing in 2023 has been the emergence of AIs generalised enough to do a wide variety of tasks. This new class could be described as Generalised Narrow AI (GNAI). We now have broadly skilled, ‘smart’ AI. What they are not is either sentient or self-aware. They have no survival instinct, no motivation to procreate or expand, no ego to feed. They utterly lack desire or purpose. Without this, it’s extremely hard to conceive of AIs creating the kind of sub-goals described. People, including bad actors, might give future AIs goals; but in the absence of self-awareness and sentience, I cannot see how such goals might arise spontaneously in the kind of AI we have.

Sentience is the ability to feel or sense stimuli. All people and animals are sentient. AIs are not.

Self-awareness is just that – an awareness of self, having an individual identity. Being able to state, “I think, therefore I am” and know what it means.

Dr Hinton again,

“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

 “And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

It’s a fair point Dr Hinton makes, but… so what? If future AIs talk to each other, which is by no means a given, since Amazon, Google and Microsoft are in no hurry to lose their competitive advantage, then all that really happens is that they know just a little more than they already knew: essentially just the differences in what their search indexes hold and the data used to train their Large Language Models. There is no causal link from this to runaway AI evolution, capability acceleration or negative consequence. This is a slippery slope logical fallacy.
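To see why Dr Hinton’s “instant sharing” point is technically true, remember that a model is just a set of numeric weights. The toy sketch below, using made-up numpy arrays rather than any real chatbot’s parameters, and a simple averaging merge in the spirit of federated learning, shows how separate copies could pool what each has learnt:

```python
import numpy as np

# A toy illustration: a "model" is just numeric weights, so copies can pool
# what they learn by exchanging and averaging them. These are made-up 4x4
# weights, not any real chatbot's parameters.
rng = np.random.default_rng(seed=0)
shared = rng.normal(size=(4, 4))                 # the common starting model

copies = [shared.copy() for _ in range(10_000)]  # "10,000 people"
for w in copies:
    w += rng.normal(scale=0.01, size=w.shape)    # each copy learns separately

shared = np.mean(copies, axis=0)                 # merge: all copies now share
print(shared.shape)                              # still one (4, 4) model
```

Note that nothing in that merge step happens by itself: the copies share knowledge only because someone chooses to run code like this, which is exactly the gap between the mechanism Dr Hinton describes and a runaway scenario.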

Matt Clifford, the chairman of the UK’s Advanced Research and Invention Agency, speaking in a personal capacity, told the BBC that Dr Hinton’s announcement,

“underlines the rate at which AI capabilities are accelerating”. “There’s an enormous upside from this technology, but it’s essential that the world invests heavily and urgently in AI safety and control,” he said.

Well said that man. AIs are getting smarter because people are getting better at designing and teaching them. There are things to be concerned about, as with every new technology, but conflating increasing capability with reasons to be fearful would be a mistake.

Dr Hinton made some fascinating observations and drew some questionable inferences, but are his ‘concerns’ the reason he quit? Let’s drop back into the BBC interview to find out.

Dr Hinton accepted that his age had played into his decision to leave the tech giant, telling the BBC:

“I’m 75, so it’s time to retire.”

Though he played it down, this is someone who has worked his arse off for a lifetime. He didn’t ‘quit’; he retired.

In fact, Dr Hinton specifically told the BBC that “in the shorter term” he thought AI would deliver many more benefits than risks, “so I don’t think we should stop developing this stuff”. He also said he wants to be able to speak more freely about the risks of AI without being associated with Google, while stating that the company had been “very responsible” in developing AI.

We have entered an exciting and breathlessly fast-moving new era, with the Information Age becoming the AI Age.

Should we be concerned? Emphatically Yes!

Things will change; we will make mistakes; bad people will do bad things; social and legal institutions will struggle to keep up.

Should we be afraid? Emphatically No!

Current AIs are limited by three factors:

  1. They only know what humankind knows; they do not and cannot create new knowledge.
  2. They lack ‘desire’. Nothing motivates them to act, they are merely tools wielded at the desire of others.
  3. They have no intrinsic creativity, no capability for innovation, ‘left field’ problem solving or adaptation.

Their impact will change our world. Lacking desire, they will not change themselves. I think Dr Hinton would agree.


[1] https://www.simplypsychology.org/intelligence.html

[2] https://openai.com/product/dall-e-2

[3] https://www.scientificamerican.com/article/what-is-the-memory-capacity/

By Simon

Simon Hudson is an entrepreneur, health sector specialist and founder of Cloud2 Ltd. and Kinata Ltd. and, most recently, Novia Works Ltd. He has an abiding, evangelical interest in information and knowledge management, and has a lot to say on best practice use of Microsoft Teams, SharePoint and cloud technologies, the health sector, sustainability and more. He has had articles and editorials published in a variety of knowledge management, clinical benchmarking and health journals. He is a co-facilitator of the M365 North User Group Leeds and is Entrepreneur in Residence at the University of Hull.

Simon is passionate about rather too many things, including science, music (he writes and plays guitar & mandola), skiing, classic cars, technology and, by no means least, his family.

