How long from AGI to Superintelligence? Not years but minutes.

Michael Araki
5 min read · Dec 21, 2023

I see a lot of discussion about how artificial general intelligence (AGI) could affect area X or Y of human experience or activity, like specific jobs. I think those are all moot points. The reason: it could take only minutes for an AGI to reach superintelligent status, far surpassing human intelligence in breadth, depth, and insight.

Background

Artificial General Intelligence (AGI) is a theoretical form of artificial intelligence (AI) that possesses the ability to understand, learn, and apply its intelligence broadly, similar to the way a human being does. Unlike narrow or weak AI, which is designed and trained for a specific task, AGI can generalize its learning and reasoning to tackle a wide range of problems across different domains. Key characteristics of AGI include:

  • Generalization: The capacity to transfer knowledge and skills across various tasks and domains, not just those for which it was specifically programmed or trained.
  • Adaptability: The ability to adapt to new and changing environments or requirements, demonstrating flexibility similar to human cognitive processes.
  • Autonomous Learning: AGI can learn new skills and acquire knowledge on its own without needing specific instructions or guidance for every new task.
  • Understanding and Reasoning: Beyond just processing data, AGI can understand context, make inferences, reason out solutions, and make decisions based on a combination of current and past information, much like human cognitive abilities.

A Thought Experiment

I do not know if you have ever reflected on how long it really takes a (mature) person to learn the key concepts of a new area. I have, because of my research on polymathy and my personal experience making a rare transition from the humanities to the exact sciences. Although it took months in real life to become minimally competent in the new area, that length was mainly due to physiological limitations. Counting (and stacking) only the actual cognitive processing time, the learning period would be remarkably shorter: in the ballpark of less than a hundred hours.

The slow-processing human brain is not a great benchmark, though. Given that current transformer models already run on hardware operating at teraflops, an AI could potentially process and analyze more data than I will ever be able to, and do so in minutes rather than a lifetime (it might take a bit longer to produce complex behavior like writing code, but you get the idea). What does it lack? Discernment and agency.
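
To put rough numbers on that gap, here is a back-of-envelope comparison between a lifetime of human reading and the ingestion rate of a large training run. Every figure in it is a loose, illustrative assumption of mine, not a measurement of any particular system or person:

```python
# Back-of-envelope only: all numbers are illustrative assumptions.
words_per_minute = 250        # assumed fast, sustained human reading speed
waking_hours_per_day = 16     # assume every waking hour is spent reading
years_of_reading = 80

lifetime_words = words_per_minute * 60 * waking_hours_per_day * 365 * years_of_reading
# ~7 billion words, reading non-stop for an entire adult lifetime

training_tokens = 1e13        # assumed corpus size of a large modern training run
training_days = 30            # assumed wall-clock duration of that run
tokens_per_second = training_tokens / (training_days * 24 * 3600)

minutes_to_match = lifetime_words / tokens_per_second / 60
print(f"A lifetime of human reading, ingested in ~{minutes_to_match:.0f} minutes")
```

Under those assumptions, which are deliberately generous to the human side, the machine ingests in roughly half an hour what a person could read in a lifetime.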

The Claim and The Necessary Implication

Some people at OpenAI are already claiming that AGI is near.

If this is true, and the system possesses the qualities described above, so that it can actually discern things, act by itself, and perform insightful operations with concepts, what would prevent it from building a superintelligent version of itself? Almost nothing. How long would that take? Perhaps minutes. But let’s say my estimate is wrong and it actually takes 1,000 times longer than a minute. Then it would take the whopping period of 0.69 days.
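
For the skeptical, the pessimistic case is plain arithmetic, shown here as a two-line check:

```python
# Pessimistic case: the self-improvement step takes 1,000 times longer than a minute.
minutes = 1 * 1_000
print(f"{minutes / (24 * 60):.2f} days")   # -> 0.69 days
```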

If so, I claim that, for all practical purposes, there is NO next step. There is NO gradual evolution from AGI to artificial superintelligence (ASI). It is, for all intents and purposes, an instantaneous process on a human timescale. The implication is this: at the AGI stage, the game is already over. Everything we can do has to be done before that step.

How Would It Do It?

As a person completely naive about the technical side of AI, I do not have an answer to that. But we can play some games. After a few iterations with ChatGPT 4, despite all of its limitations, it already presented me with five proposals it could use to enhance itself. How long would it take to implement them if it had AGI-like discernment and agency? You already know the answer.

Here are its ideas (whose quality I am not qualified to assess); a toy sketch of the first one follows the quoted list:

“Focusing on the technical aspects relevant to achieving AGI with an emphasis on efficiency, while excluding certain broader considerations, we can delve deeper into the remaining challenges and potential approaches:

  1. Generalization Across Domains. Challenge: Current AI models, including GPTs, are typically trained on domain-specific data, limiting their ability to generalize across different contexts or tasks. Technical Approach: Developing algorithms for meta-learning, where the AI learns learning strategies themselves, enabling it to apply knowledge from one domain to another. Advanced transfer learning techniques can also be pivotal, allowing models to apply learnings from one task to a wide range of other tasks with minimal additional training.
  2. Understanding and Reasoning. Challenge: AI models often lack the ability to understand context or reason beyond pattern recognition. Technical Approach: Integration of deep learning with symbolic AI (Neuro-Symbolic AI) could be key. Symbolic AI handles abstract, logical reasoning, while deep learning manages pattern recognition. Creating a hybrid model would combine the strengths of both, leading to better comprehension and reasoning abilities.
  3. Adaptability and Flexibility. Challenge: AI systems generally require extensive retraining for new tasks, lacking human-like adaptability. Technical Approach: Development of adaptable neural network architectures, potentially inspired by biological concepts like neuroplasticity. These networks would reconfigure themselves dynamically in response to new tasks or environments, much like how the human brain adapts.
  4. Long-Term Memory and Knowledge Retention. Challenge: Modern AI struggles with long-term memory retention and retrieval, essential for complex reasoning and learning. Technical Approach: Investigating memory-augmented neural networks, such as Differentiable Neural Computers (DNCs), which combine neural networks with an external memory component, mimicking the human working memory system. This approach could significantly enhance the model’s ability to store, access, and utilize large amounts of information over time.
  5. Consciousness and Self-Awareness. Challenge: The concept of AI consciousness and self-awareness raises both technical and philosophical questions. Technical Approach: While fully replicating consciousness is a profound challenge, developing models that exhibit self-monitoring and meta-cognition-like features could be a start. This involves creating AI systems that can assess their own performance and learning processes, adjusting their strategies accordingly.”
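
To make the first of those proposals a little less abstract, here is a minimal toy sketch of meta-learning in the Reptile style (Nichol et al., 2018): the model is repeatedly adapted to randomly sampled tasks, and its shared initialization is nudged toward each adapted solution, so that a brand-new task can then be learned from only a handful of gradient steps. The task family, the linear model, and every hyperparameter below are my own illustrative choices, not something ChatGPT produced:

```python
# Toy sketch of meta-learning (proposal 1 above), Reptile-style.
# All modelling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is fitting y = A*sin(x + phi) for a random amplitude and phase."""
    A, phi = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
    def batch(n=20):
        x = rng.uniform(-np.pi, np.pi, n)
        X = np.stack([np.sin(x), np.cos(x)], axis=1)  # features that span the task family
        return X, A * np.sin(x + phi)
    return batch

def adapt(w, batch, steps=10, lr=0.1):
    """Inner loop: a few plain SGD steps on one task's mean-squared error."""
    for _ in range(steps):
        X, y = batch()
        w = w - lr * (2.0 / len(y)) * X.T @ (X @ w - y)
    return w

# Outer (meta) loop: nudge the shared initialization toward each task-adapted solution.
w_meta = np.zeros(2)
for _ in range(500):
    task = sample_task()
    w_meta += 0.1 * (adapt(w_meta, task) - w_meta)

# The payoff: an unseen task is learned reasonably well from only 5 gradient steps.
new_task = sample_task()
w_fast = adapt(w_meta, new_task, steps=5)
X_test, y_test = new_task(200)
print("test MSE after 5 adaptation steps:", np.mean((X_test @ w_fast - y_test) ** 2))
```

The linear model over sin/cos features is only there to keep the sketch short; in practice the same outer loop would wrap a deep network. Nothing about this toy makes a system self-improving on its own; it just shows that “learning how to learn” is an ordinary training loop rather than magic.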

And you can bet it will have much better ideas than these above.
