Themes in popular sci-fi movies have always tended to reflect the paranoia of the times. Films about Artificial Intelligence have been around for over half a century – 2001: A Space Odyssey was released in 1968 and there have been noteworthy films on the subject every decade since. It’s rare (Her? WALL-E?) for the AI to be anything other than the antagonist. While it might be good entertainment, this drip feed is at odds with the positive message Microsoft, OpenAI, Google and others expound, and there is no shortage of debate on the dangers.
Recently a few prominent people called for a pause on the training of large AI models, while ‘the godfather of AI’, Geoffrey Hinton, quit Google purportedly over his concerns, fuelling our conditioned fears. Meanwhile we happily use our home assistants and other ‘better than Star Trek’ tech on a day-to-day basis, without a second thought.
So… should we be afraid?
The real-world risks
Artificial intelligence research began in the mid-1950s[i] with the search for Artificial General Intelligence (AGI), though progress has mostly come through the development of Narrow AI focused on specific tasks. These continue to proliferate, with services such as Azure OpenAI, IBM Watson and Google AI making it easy for anyone to build on the technology. The recent breakthroughs have seen the emergence of ‘smart’ AIs which, while a long way short of Artificial General Intelligence, use Large Language Models to deliver an almost intimidating array of capabilities. These Generalised Narrow AIs can hold human-like conversations, write code, compose and summarise documents, create poems and pictures, and serve as excellent search engines.
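These services are exposed as simple web APIs, which is what makes them so easy to build on. As a rough, minimal sketch (not from this article – the model name and interface are assumptions that vary by SDK version and vendor), calling a hosted LLM from Python might look like this:

```python
# Minimal sketch of calling a hosted LLM service via the OpenAI Python SDK.
# Assumes the `openai` package (v1+) is installed and the OPENAI_API_KEY
# environment variable is set; model names and interfaces change over time.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the risks of Narrow AI in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

A few lines like these are all it takes to put a Generalised Narrow AI behind any application – which is precisely why they are proliferating so quickly.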
Fearmongering about Skynet-level AGI taking over the world is unhelpful. The risks are more subtle and less apocalyptic. AI will undoubtedly be a hugely disruptive technology. Human history is replete with examples of such disruptions, and with cautionary tales of those who attempted to stop them.
Nevertheless, with the increased availability, capability and public awareness of AI, people are beginning to ask:
- Can AIs be trusted?
- Will I be (adversely) affected by AIs?
- Who will control and regulate them?
- Will AIs take over the world?
Can AIs be trusted? – Accuracy and truth
Those as yet unfamiliar with current AI express concerns about whether Large Language Model tools like ChatGPT are using reliable information or just trawling the internet and picking up everything, including the inaccuracies, misunderstandings and motivated deceits that abound there. These are legitimate concerns, and ones that developers and researchers are working actively and rapidly to address. Earlier this year ChatGPT began citing its primary sources, and these seem to be credible. There is a major research push to ensure, for example, that the information returned is more reliable than what most people would obtain via a traditional search.
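To make that concrete, one widely used technique for improving reliability is ‘grounding’: hand the model the source passages and require it to cite them, rather than letting it answer from a statistical memory of the internet. The sketch below is purely illustrative – the source texts, reference ids and prompt wording are invented for the example and not drawn from any real product:

```python
# Illustrative sketch of a 'grounded' prompt: the model is given vetted source
# passages and instructed to cite them, instead of answering from memory.
# The source texts and reference ids below are invented placeholders.
SOURCES = {
    "src-1": "Placeholder text from a vetted primary source.",
    "src-2": "Placeholder text from a second vetted source.",
}

def grounded_prompt(question: str) -> str:
    """Build a prompt that constrains the model to the supplied sources."""
    cited = "\n".join(f"[{ref}] {text}" for ref, text in SOURCES.items())
    return (
        "Answer the question using ONLY the sources below, citing the "
        "reference id for every claim. If the sources do not contain the "
        "answer, say that you do not know.\n\n"
        f"Sources:\n{cited}\n\nQuestion: {question}"
    )

# The resulting string would be sent to the LLM as its prompt.
print(grounded_prompt("What did the sources say?"))
```

Patterns like this are, broadly, how a chat tool can cite primary sources: the citations come from material it was handed, not from whatever it absorbed in training.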
Then there are the reports of AI hallucinations. Digging into these articles usually reveals that the hallucinations were the result of concerted attempts to trick the AI into saying something unwise (it looks a lot like entrapment, which is just not cricket). AIs are not omniscient and lack the Judgement of Solomon; yet using humans as the benchmark, I’d say they do remarkably well. Making them better is also a major area of AI research; an OpenAI article on this shows how deeply this stuff is considered.
Wise people treat any source of information with a degree of scepticism and, where possible, validate the sources. Since most people don’t do this, and seem willing to be swayed by the bloke down the pub or ‘something’ they saw on the internet or Facebook (ugh!), I’m pretty happy that current AI outperforms the public’s benchmark for truth.
Will I be (adversely) affected by AIs? – The future of work
Microsoft and others are researching and publishing credible findings on what work and the workplace mean post-pandemic. The latest such publications have advanced from our adoption of remote and hybrid work for office staff to the world of work in the AI Age. For the first time in our history, machines will take over the roles not just of blue-collar workers, but also of those who make a living by thinking. White-collar ‘Knowledge Workers’ such as clerks, designers and even creatives will be impacted. In the fullness of time we will see the same for managers and analysts, lawyers and solicitors, maybe even some GPs and surgeons. The parallel development of robotics, a field which is already moving at pace, catalyses the transition.
Will this lead to a smaller workforce?
Technology has always fuelled economic growth, improved standards of living, and opened up avenues to new and better kinds of work.
Erik Brynjolfsson and Andrew McAfee describe two machine “ages”.[ii] The first commenced with the invention of the steam engine in 1775; its effects ultimately raised average standards of living far beyond what even the wealthiest families of the 18th and 19th centuries could have imagined. The “second machine age” began in the 1990s, driven by three factors: exponential increases in computing power (Moore’s Law); the near-elimination by digital technologies of the cost of replicating ideas and products; and a vast increase in our ability to connect and build on ideas and knowledge (called ‘recombinant growth’).
The relationship between technology innovation and job creation is complex. Some professions are lost to machines; many other careers are created or boosted[iii]. Automation of routine tasks reduces less motivating middle-skill jobs. Conversely, it complements social and innovation tasks, creating more interesting low- and high-skill jobs. Automation doesn’t just create or destroy jobs – it transforms them.[iv]
“AI won’t take your job, but someone who understands AI will.”
Microsoft’s introduction of Copilot technology into pretty much everything starts with incorporating generic assistants for knowledge workers and technical folk – all human-directed activity. The Cognitive Business competency (part of the Maturity Model for Microsoft 365) describes this evolution in the business context, with Level 500 outlining a true partnership between people and AI, with the AI having some autonomy, such as the ability to undertake auto-remediation of issues. At this point, the AI is no longer seen as a tool, but as an essential part of the team.

Careers and jobs will change or disappear (’twas ever thus); the only difference is that this will happen faster than in previous transitions. In the fullness of time, it is likely that the types of roles that will remain are:
- Artisans, craftspeople. We still like to buy unique, personal things.
- The trades. We are a long way from machines being able to do the plastering, fix the plumbing or rewire the house. Or do the ironing for that matter.
- People-facing roles. Charismatic baristas, receptionists, front-of-house staff, entertainers etc. will become even more valued in a world where AI handles the drudge work. We are a social species, and we will have more leisure time to enjoy (probably).
- True innovators. For those with a flair for truly creative, disruptive, outside-the-box thinking there will always be a role.
- People who are adept at working alongside AIs. AIs will need guidance, purpose, oversight and a way of interacting with people; we need a new class of experts with these skills.

There are going to be a lot of people wondering what they will do for a living. The new roles, as yet unthought-of, will probably be the answer. More pessimistically, perhaps there will be a lot of people who won’t have a job.
The question is… is this a bad thing?
In a world with almost unlimited cheap energy (renewables hold that promise later this century), with AI and robots taking on the trivial, meaningless and tedious work while enhancing the capabilities and productivity of sophisticated industry and enterprise, it is conceivable that societies’ wealth and nations’ GDP will no longer be tied to the number of people that can be employed. A true living wage might provide for everyone (as some Arab nations already enjoy from their oil revenue legacy) and many people will work for self-fulfilment rather than economic survival. As societies, we need to ensure this is the benefit we take from AI, rather than the wealth being concentrated in the hands of corporations and oligarchs. It’s a potent dream.
Who will control and regulate them? – Ethics and governance
There are proper concerns about AI ethics and governance. How do we know that AIs are fair, inclusive and socially responsible?
AIs aren’t sentient; as such, they just don’t care about you and me. Examples abound of machine learning tools that demonstrate bias, fail ethically, and lead people into unwise decision-making. Industry and our legislators have taken note.

The vast majority of legal protections already exist in the form of current laws around privacy (GDPR and the like), accountability (the Companies Act and other corporate legislation) and so on. The European Commission includes transparency and traceability among its requirements for AI systems. The French government has committed to publishing the code that powers the algorithms it uses. In the United States, the Federal Trade Commission’s Office of Technology Research and Investigation is required to provide guidance on algorithmic transparency. Some nations are actively introducing new controls; meanwhile the UK government has published a white paper on AI proposing a light-touch approach based on five principles that guide the responsible development and use of AI, using existing legislation where possible.
In parallel, industry has formed bodies such as the Partnership on AI so that “AI advances positive outcomes for people and society”.
Dr Hinton himself stressed that Google had been “very responsible”. In a statement, Google’s chief scientist Jeff Dean said: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.” Microsoft’s Satya Nadella has laid out 10 laws of AI behaviour:
- AI must be designed to assist humanity.
- AI must be transparent.
- AI must maximize efficiencies without destroying the dignity of people.
- AI must be designed for intelligent privacy.
- AI needs algorithmic accountability so humans can undo unintended harm.
- AI must guard against bias.
- It’s critical for humans to have empathy.
- It’s critical for humans to have education.
- The need for human creativity won’t change.
- A human has to be ultimately accountable for the outcome of a computer-generated diagnosis or decision.
There is a need to require AI developments to abide by responsible AI frameworks, and this should be an area for future legislation, but it would be wrong to suggest that the ethics, governance and oversight of AI are not being addressed. A significant hurdle to overcome is the lack of a language to describe and codify our ethics and morals, one able to define and accommodate the differences between people, nations and organisations. Until this is achieved, it will be hard to know whether our AIs reflect our values correctly.
Will AIs take over the world?
AIs like Skynet are literally the stuff of nightmares. The good news is that threats from our current AIs are never going to leave the realm of fiction. In a New York Times article, Dr Hinton, the ‘godfather’, referred both to “bad actors” who would try to use AI for “bad things” and to the possibility that AIs might “create sub-goals like ‘I need to get more power'”. While they might be increasingly capable, the short version is that they are neither sentient nor self-aware. AIs have no underlying purpose or desires. They have no survival instinct, no motivation to procreate or expand, no ego to feed. In the absence of self-awareness, sentience and the attendant motivators, it is inconceivable that the kind of sub-goals feared might arise spontaneously in the Generalised Narrow AI we have today.
Without desire, the ‘singularity’, where AI capability expands at an uncontrollable and exponential rate, cannot happen. For desire to be present, AIs must become sentient and possibly self-aware. Research on this has progressed little and there is little consensus on how we would identify true sentience rather than sophisticated mimicry. That’s not to say it’s impossible, and the breakthrough, if we want to call it that, might be linked to quantum computing. Either way, we have several decades before AGI with any kind of consciousness might appear, so plenty of time to ensure safeguards are in place.
Sentience is the ability to feel or sense stimuli. All people and animals are sentient. AIs are not.
Self-awareness is just that – an awareness of self, having an individual identity. Being able to state, “I think, therefore I am” and know what it means.
As for people, including bad actors, doing bad things with AI: this is a certainty. We should regulate the tool, but monitor and constrain such people, as we do with all other technologies. Given the rapid progress of open-source AI, however, this might be hard.
Conclusions
We have entered an exciting and breathlessly fast-moving new era, with the Information Age becoming the AI Age.
Should we be concerned? Emphatically, yes!
As Robert Miles says, there are short-term versus long-term risks, and accident versus misuse risks. These concerns are reasonable, but the ‘very bad stuff’ scenarios assume ‘High-Level Machine Intelligence’, AKA self-aware AGI.
| | Short Term | Long Term |
|---|---|---|
| Accident | e.g. Self-driving car accidents, inadvertent bias | Very bad stuff |
| Misuse | e.g. Deep fakes | e.g. AI-enabled authoritarianism |
Things will change; we will make mistakes (especially setting safe goals and constraints for the AIs); bad people will do bad things; social and legal institutions will struggle to keep up. Current work roles will be significantly impacted and people’s attitudes to work and self-purpose will need to evolve. There are risks that ‘big business’ will gain too much power (as always).
Should we be afraid? Overall, no!
Current AI is limited by three factors:
- They only know what humankind knows; they do not and cannot create new knowledge.
- They lack ‘desire’. Nothing motivates them to act, they are merely tools wielded at the desire of individuals. They are unfeeling, uncaring, unaware. In this they are not sentient and only respond to instructions.
- They have no intrinsic creativity. They are potent at recombining existing knowledge, art and data, but have no capability for innovation, ‘left-field’ problem solving or spontaneous adaptation to new environments or scenarios. Without the vast edifice of prior art that humans have created, they would have nothing to synthesise from.
We know only one way to create sentient and self-aware intelligences. It’s rather fun and involves people ‘who love each other very much’. Other mammals can be less circumspect about it.
My point is that we shouldn’t be emotionally fearful; as the cliché goes, they are a tool. Tools are not to be feared; weapons, and those that wield them, should be.
PS. Yes, Bing Chat did create the images for this blog. Thanks for that!
[i] https://en.wikipedia.org/wiki/Dartmouth_workshop
[ii] https://en.wikipedia.org/wiki/The_Second_Machine_Age
[iii] https://fee.org/articles/technology-creates-more-jobs-than-it-destroys/
[iv] https://hbr.org/2021/11/automation-doesnt-just-create-or-destroy-jobs-it-transforms-them
