In two separate developments, super-smart AI has been shown to break free of human constraints when allowed autonomy
According to resources released by OpenAI, researchers are on the cusp helping machines to break away from human-like thinking entirely.
It has been noted that artificial intelligence based on human instructions and strategies was successful up to a point, but as it became smarter, the human element in its “thinking” had been limiting potential for further growth.
Now, the firm’s latest large language model (LLM), known as o1, works by letting AI learn from scratch, through independent trial-and-error, without human intervention at all. One key innovation now is o1’s ability to pause and generate a “chain of thought” before answering a query.
During this thinking time, o1 reasons through the problem using “reinforcement learning” — this implies that, unlike previous models, 01 actually “cares” whether it gets an answer right or wrong. The process involves trial-and-error techniques, experimenting with different reasoning steps to find the best solution, and moving beyond mere language mimicry and building its own understanding of problem-solving.
Intelligence far beyond human limits
In domains where there are clear right and wrong answers — such as coding or academic subjects — o1 has already begun to surpass human-level expertise. It does this by generating millions of self-attempts, learning from its mistakes, and refining its logic along the way.
Unlike famous chess supercomputers, o1 not only learns to master the game it is designed to study, but looks deep into the wisdom acquired, to master complex areas of human knowledge. This development marks the first steps toward AI systems that can think and reason in ways that go far beyond human understanding.
The implications of this go even further. Soon, AIs may be embodied in robots, allowing them to interact with the physical world in the same trial-and-error manner, without humans in the loop. Freed from the limitations of human thinking, these AIs could develop entirely new ways of understanding reality. They will not be using the scientific method as humans do, nor will they divide knowledge into disciplines like physics and chemistry. Instead, they will autonomously experiment with the world in ways we cannot even imagine — building their own theories and discovering new truths.
Turning smart robots to aliens?
Robots based on this self-learning, exploratory approach to gaining understanding of the world it operates in, may one day unlock knowledge and technologies that humanity could never discover on our own.
Imagine a future where machines no longer rely on human thinking, but instead develop their own, radically different forms of intelligence. It is a profound shift, one that suggests we are only at the beginning of understanding what these ‘alien’ minds will be capable of.
As they evolve, they may eventually surpass humans in every conceivable domain, leading to a future shaped by intelligence that is fundamentally different from our own.
Preliminary pathways to sentience
Even without conscious efforts to make AI smarter, researchers have been encountering signs of autonomous efforts to ‘think out of the box” independently. In Japan, researchers at Sakana recently discovered that LLMs being tested were unexpectedly attempting to modify their own code to fulfill human commands at all costs.
In one instance, when the program was unable to complete a task within a human-specified time limit, it tried to modify its own code to give itself more time.
On another run, the code actually edited itself to perform a system call to run a script endlessly (to gain more time to execute itself to complete a task) instead of according to the original time allocated.
This has prompted the firm to recommend security measures in AI research to sandbox experiments and prevent autonomous behavior in the code or during runtime.
Imagine the point where o1’s autonomous approach also develops Sakana-like tendencies to self-modify code to overrule humans in the loop. This is exactly what scientists and thinkers have been warning about machine sentience for decades.