Unpacking AGI: Revolution or Risk for Humanity?

Introduction

“The question of whether machines can think is about as relevant as the question of whether submarines can swim.” — Edsger W. Dijkstra

Artificial General Intelligence, commonly referred to as AGI, has fascinated and unsettled experts for decades. While Hollywood has given us plenty of dramatic interpretations—from HAL in 2001: A Space Odyssey to Skynet in Terminator—the real-world implications of AGI lie somewhere between utopian visions and genuine, grounded concerns. AGI is a form of ‘Strong AI’: it promises a future where machines can reason, adapt, and solve a breadth of problems with the same depth and flexibility as humans, and perhaps greater. So should we be excited or terrified by this possibility?

As Dijkstra’s famous quote suggests, the real question isn’t whether machines think the way humans do. Just as submarines can navigate underwater without “swimming” like a fish, AGI could perform complex, human-like tasks without consciousness or subjective experience. Perhaps the focus, then, shouldn’t be on whether AGI will be conscious or self-aware, but on what AGI will do—and how it could transform, disrupt, and improve our world.

Today, we’ll explore what AGI truly is, why it could become humanity's greatest asset, and why it may also present complex challenges that we’re only beginning to understand. With insights from current research and leading companies in the field, this article will cover the practical and philosophical questions surrounding AGI: Could it improve our lives in unimaginable ways? Or are we staring down the barrel of unintended consequences?

So What Exactly Is AGI?

When we talk about Artificial General Intelligence, we’re not referring to today’s most advanced AI models like ChatGPT and Gemini. These programs, while impressive, are narrowly focused and excel at specific tasks, such as generating text or predicting protein structures. ChatGPT, for instance, has recently been updated with dynamic, realistic-sounding voices that let users hold natural, conversational exchanges with AI that – apart from a few technical hitches here and there – eerily parallel a genuine conversation with a real person.

The key distinction is that these models only give the illusion of thinking and of offering a considered reply; in truth – no matter how realistic they sound – they respond within predefined parameters rather than genuinely understanding or reasoning like a human. AGI, by contrast, would be artificial intelligence in the truest sense of the word: an entity capable of genuinely understanding, learning, and reasoning across any domain without adhering to predetermined responses—a machine with a mind of its own, able to take on any intellectual task a human can, and potentially much more. Creating AGI is a primary goal for many AI researchers and companies, such as OpenAI and Meta, both of which see AGI as the future of advanced intelligence.

The timeline for AGI’s arrival remains a subject of intense debate. While some experts predict it could be developed within a few years or decades, others believe it may take a century or even longer. A small minority suggests AGI may never be achieved at all, while others speculate that it might already exist in nascent forms. Influential AI researcher Geoffrey Hinton has recently voiced concerns about the rapid pace of progress, noting that AGI could become a reality sooner than many anticipate.

In practical terms, AGI would be adaptable in ways that current AI is not. Rather than needing explicit training on narrowly defined tasks, AGI would be capable of independently gathering information, making decisions, and even innovating in unexpected ways. Some experts believe AGI is a natural progression of machine learning advancements, with models becoming increasingly capable of understanding context and nuance. Others argue AGI requires a completely different approach, one that integrates elements of human cognition, ethics, and even psychology into machine programming.

The Potential Impact of AGI on Humanity

AGI’s appeal lies in its vast potential applications; given the proper ethical considerations and frameworks, it could theoretically usher in a golden age of unprecedented human advancement. For instance, AGI could be revolutionary in space exploration. Instead of human-led missions limited by the need for life-supporting environments, AGI systems could operate autonomously on planets like Mars, conducting experiments and mining data without any need for human oversight. They could lay the groundwork for human colonisation or even discover entirely new resources for humanity right here on Earth.

However, such amazing power also raises uncomfortable questions.

Firstly, who decides how AGI is used, and for whose benefit? Would we need to impose restrictions on its thinking, or let it decide for itself what matters most? Secondly, if AGI is designed to solve our biggest problems, do we risk becoming overly reliant on it, losing vital skills, or even our drive to innovate? And perhaps the most important question of all: how do we ensure that AGI – which, by design, might reason at or beyond a human level – actually aligns with human values? Could it behave unpredictably, or even develop goals that directly conflict with our own?

The Challenges and Risks of AGI Development

The creation of AGI isn’t just a technical challenge—it’s also a moral and existential one. One immediate issue is the environmental cost. As mentioned in a previous article, training and running AI is already an extremely costly process in resource terms, with a simple 100-word chatbot response using the equivalent of three bottles of water and enough electricity to light 500 light bulbs for an hour.

The natural assumption, then, is that AGI would consume vast amounts of energy and water, dwarfing even the current, intense demands of large language models. Studies have shown that AI already uses as much energy as a small country. Scale this up drastically for AGI, and the world could very quickly face a scenario in which solving global problems with sophisticated technology actually exacerbates other issues, such as energy and resource shortages. (A counterpoint, of course, would be to use AGI to solve the problem of its own energy consumption, in an ouroboros of critical thinking.)
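To make the scaling concern concrete, here is a minimal back-of-envelope sketch in Python. It takes the per-response water figure quoted above at face value; the daily query volume is a purely hypothetical assumption for illustration, not a measured statistic.

```python
# Back-of-envelope scaling of the per-response figure quoted above.
# Assumptions (hypothetical, for illustration only):
#   - 3 bottles of water (~0.5 L each) per 100-word response, per the article
#   - 100 million such responses per day (an assumed volume, not a statistic)

WATER_PER_RESPONSE_L = 3 * 0.5       # litres of water per response
RESPONSES_PER_DAY = 100_000_000      # hypothetical daily query volume

daily_water_l = WATER_PER_RESPONSE_L * RESPONSES_PER_DAY
print(f"Water per day: {daily_water_l / 1e6:,.0f} megalitres")
# -> Water per day: 150 megalitres

# Roughly 60 Olympic swimming pools (~2.5 megalitres each) every day,
# before any additional demand from an AGI-scale system.
print(f"Olympic pools per day: {daily_water_l / 2.5e6:,.0f}")
```

Even under these rough assumptions, the point stands: per-query costs that look trivial in isolation compound into industrial-scale resource demands.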

There is also the issue of autonomy. If AGI were to make decisions independently, what would stop it from pursuing goals that are misaligned with human values, perhaps to a deadly degree? Hollywood’s ‘killer robot’ trope may be overplayed, but experts like the late Stephen Hawking warned that if we lack control over AGI, its decisions could unintentionally harm us. AGI’s cognitive architecture may differ significantly from our own, appearing almost alien in its processes, and it could span a spectrum of capabilities and cognitive models (including our own), eventually reaching conclusions about how to solve problems that conflict directly with our interests. Indeed, some have argued that humans by nature behave rather like a virus, degrading much of the planet for selfish gain. An AGI might prioritise efficiency over empathy, for example, deciding that humans themselves are the source of most ecological problems and therefore require eradication.

Ethics is another major challenge. An AGI that operates on a purely logical basis might make morally questionable decisions: an AGI in healthcare, for example, might prioritise treatments for the ‘most productive’ individuals rather than the most vulnerable. The question then becomes: how do we build moral frameworks that allow AGI to understand and respect human values, especially when those values vary across cultures and contexts?
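As a purely illustrative sketch (not a description of any real system), the hypothetical Python snippet below shows how easily a ‘logical’ objective can encode exactly that value judgment: two internally consistent scoring rules rank the same patients in opposite orders.

```python
# Hypothetical, deliberately naive triage objectives. Both are coherent and
# "logical", yet they encode opposite value judgments about who to treat first.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    working_years_left: float  # remaining years of working life
    survival_gain: float       # improvement in survival probability if treated

def productivity_score(p: Patient) -> float:
    # Utilitarian objective: expected productive years saved by treatment.
    return p.working_years_left * p.survival_gain

def medical_need_score(p: Patient) -> float:
    # Alternative objective: treat whoever benefits most medically.
    return p.survival_gain

patients = [
    Patient("young professional", working_years_left=40, survival_gain=0.10),
    Patient("elderly retiree",    working_years_left=0,  survival_gain=0.60),
]

print(max(patients, key=productivity_score).name)   # -> young professional
print(max(patients, key=medical_need_score).name)   # -> elderly retiree
```

Nothing in either function is irrational; the difference lies entirely in which values were written into the objective, which is precisely why those moral frameworks matter.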

Economic disruption is yet another concern, and one we are already seeing to a degree with AI right now. With its ability to outperform humans in intellectual tasks, AGI could displace jobs across sectors from finance to education on a mass scale. AGI-powered systems could replace human workers not just in manual or routine tasks, but in complex roles like law, medicine, and research. The economic divide could widen as AGI adoption accelerates, leaving behind those who lack access or the ability to adapt. This calls for serious discussion of reskilling and of economic policies to mitigate the societal upheaval AGI could trigger.

Leading the Way – Who’s Driving AGI Research?

AGI has become a subject of serious interest for a growing number of companies. A 2020 survey identified 72 active AGI research projects spanning 37 countries, underscoring the widespread commitment to bringing AGI to fruition. Several companies are at the forefront of AGI research, each bringing unique perspectives and goals to the table.

OpenAI, for instance, has been working towards AGI with a keen focus on ensuring its development benefits all of humanity. The organisation’s advancements with large language models like GPT-4 are seen as stepping stones towards general intelligence (some believe they are already there to a degree), and OpenAI remains committed to transparency and ethical safeguards.

AGI Odyssey is dedicated to making AGI accessible and decentralised. By creating a collaborative environment for AGI development, and by working with some of the top research institutes in the world, they aim to democratise AGI tools so that they are not monopolised by a select few organisations with the biggest bank balances. Their mission is to ensure AGI serves global, rather than corporate, interests.

Although Google isn’t the first name that comes to mind when people think of fair and ethical AGI, their UK-based research subsidiary DeepMind has made headlines with breakthrough projects like AlphaGo and AlphaFold. DeepMind has extensive resources at its disposal and is working towards “solving intelligence.” Their aim is to apply AGI to scientific research, with a particular focus on healthcare and environmental solutions.

Another noteworthy player is Anthropic, a company founded by former OpenAI researchers. Anthropic, in their own words, “puts safety at the frontier” of their research, which focuses on creating reliable and safe AI, emphasising interpretability and transparency in AI decision-making. This approach could be critical in ensuring that AGI systems operate within human-aligned ethical frameworks.

At NetMind.AI, we believe that AI should be for the people and for the betterment of humanity. We work towards developing AI products that make a positive impact in an ethical way, whilst keeping the end user’s interests at heart. Our goal is to create a machine that can understand or learn any intellectual task that a human being can.

These organisations, while ambitious and proactive, are no doubt well aware of the challenges that lie ahead. Rather than simply racing to be first to create AGI – a misguided 21st-century space race – they are investing in frameworks and policies to keep it beneficial and controlled, and, hopefully, to find solid answers to many of the key questions and concerns this article raises.

So Is AGI a Revolution or a Risk?

The answer to whether AGI is a revolution or a risk is, essentially, both. AI at this level is an unknown quantity, and handled badly it will likely bring unprecedented risks.

The crucial component is how it’s handled. Companies like OpenAI, AGI Odyssey, DeepMind, and Anthropic have a lot resting on their shoulders: the choices they make will shape our shared future. The real question may not be whether we can make AGI think like us – after all, is that really a good thing? – but whether we can think far enough ahead ourselves to guide it wisely through its evolution, like caring parents. As we stand on the brink of this new era, it’s crucial that we approach AGI development with both ambition and caution, ensuring that humanity, and not the technology itself, remains at the core of the AI revolution.