
AI: The emerging Artificial General Intelligence debate

  • 2 wk ago

Since Google’s artificial intelligence (AI) subsidiary DeepMind published a paper a few weeks ago describing Gato, a generalist agent that can perform a wide range of tasks with a single trained model, and claimed that artificial general intelligence (AGI) can be achieved through sheer scaling alone, a heated debate has ensued within the AI community. The question may seem academic, but if AGI really is just around the corner, our society (including our laws, regulations, and economic models) is not ready for it.

Indeed, using a single trained model, the generalist agent Gato can play Atari games, caption images, chat, or stack blocks with a real robot arm. Based on its context, it also decides whether to output text, joint torques, button presses, or other tokens. This makes it a far more versatile AI model than the popular GPT-3, DALL-E 2, PaLM, or Flamingo, which are becoming extremely good at very narrow, specific tasks such as writing natural language, understanding language, or generating images from text descriptions.

This led DeepMind scientist and University of Oxford professor Nando de Freitas to claim that “It’s all about scale now! The Game is Over!” and to argue that AGI can be achieved just via sheer scaling (i.e., larger models, larger training datasets, and more computing power). But what ‘game’ is de Freitas talking about, and what is the debate all about?

The AI debate: strong vs weak AI

Before discussing the debate’s specifics and its implications for wider society, it is worth taking a step back to understand the background.

The meaning of the term ‘artificial intelligence’ has changed over the years, but at a high level it can be defined as the field of study of intelligent agents: any system that perceives its environment and takes actions that maximize its chance of achieving its goals. This definition purposely leaves aside the question of whether the agent or machine actually ‘thinks’, as that has been the object of heated debate for a long time. British mathematician Alan Turing argued back in 1950, in his famous paper ‘Computing Machinery and Intelligence’ (which introduced the ‘imitation game’), that rather than asking whether machines can think, we should focus on “whether or not it is possible for machinery to show intelligent behavior”.
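The textbook agent definition above (perceive the environment, act to maximize the chance of achieving a goal) can be sketched in a few lines. Everything here is illustrative, not from the article: the thermostat environment, the function names, and the utility function are all assumptions chosen to make the loop concrete.

```python
# Minimal sketch of the "intelligent agent" definition: the agent receives a
# percept and picks the action with the highest expected utility for its goal.
# All names and the toy thermostat domain are illustrative assumptions.

def choose_action(percept, actions, utility):
    """Pick the action with the highest utility given the current percept."""
    return max(actions, key=lambda a: utility(percept, a))

# Toy environment: a thermostat agent whose goal is a 21-degree room.
def thermostat_utility(temp, action):
    effect = {"heat": +1, "cool": -1, "idle": 0}[action]
    return -abs((temp + effect) - 21)  # closer to 21 degrees is better

action = choose_action(19, ["heat", "cool", "idle"], thermostat_utility)
```

A room at 19 degrees leads the agent to choose "heat"; at 23 degrees it would choose "cool". The point is only that ‘intelligence’ in this definition is operational (goal-directed behavior), with no claim about thinking.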

This distinction leads, conceptually, to two main branches of AI: strong and weak AI. Strong AI, also known as artificial general intelligence (AGI) or general AI, is a theoretical form of AI whereby a machine would possess intelligence equal to that of humans. As such, it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. This is the most ambitious definition of AI, the ‘holy grail of AI’—but, for now, it remains purely theoretical. The approach to achieving strong AI has typically centered on symbolic AI, whereby a machine forms an internal symbolic representation of the ‘world’, both physical and abstract, and can therefore apply rules and reasoning to learn further and make decisions.
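The symbolic-AI approach described above (explicit symbols plus rules applied to them) is commonly demonstrated with forward chaining: applying if-then rules to a set of known facts until no new facts can be derived. The sketch below is a minimal illustration of that idea; the rule format and the example facts are assumptions, not anything from the article.

```python
# Minimal sketch of symbolic AI via forward chaining: the 'world' is a set of
# symbolic facts, and rules of the form (premises -> conclusion) are applied
# repeatedly until the set of known facts stops growing. Facts are illustrative.

def forward_chain(facts, rules):
    """Derive all facts reachable from the initial facts via the given rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_human", "socrates_is_mortal"}, "socrates_will_die"),
]
derived = forward_chain({"socrates_is_human"}, rules)
```

Starting from the single fact `socrates_is_human`, the reasoner derives the chained conclusions. Classic expert systems of the symbolic-AI era worked on essentially this principle, at far larger scale.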

While research continues in this field, it has so far had limited success in resolving real-life problems, as the internal or symbolic representations of the world quickly become unmanageable with scale.
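The scaling problem noted above has a simple combinatorial core: a symbolic world model with n independent true/false properties admits 2^n distinct world states, so exhaustive symbolic reasoning over the full state space grows exponentially. A quick back-of-the-envelope illustration (the numbers are generic, not from the article):

```python
# Why symbolic world models become unmanageable with scale: with n boolean
# facts about the world, there are 2**n distinct possible world states.

def num_states(n_boolean_facts):
    return 2 ** n_boolean_facts

# 10 facts give 1,024 states; 100 facts give more than 10**30 states,
# far beyond what any reasoner could enumerate exhaustively.
```

Real symbolic systems mitigate this with heuristics and structured representations, but the exponential baseline is a key reason hand-built world models struggled on open-ended, real-life problems.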

