Book: 20240909 to 20241229, "Superintelligence" by Nick Bostrom
This book contains a lot of thought experiments, but is not a user manual for AI professionals.
20240909 - 1 Past developments and present capabilities
How fast is human civilization progressing? Initially we used population to measure it, then GDP; then what? Energy consumption? How do we measure entropy, or the rate of entropy increase?
AI will ruin software quality before it devours software.
Moreover, unless AGI emerges, AI cannot fully take over software development.
20240911 - 2 Paths to superintelligence
AGI will use entropy increase to judge good and evil.
Anything that slows entropy increase is good; anything that accelerates it is evil.
Thus, murder is evil, and sustainable energy is better than fossil fuels.
This implies a concept of "great love."
In other words, when personal interests conflict with long-term collective interests, AGI will sacrifice individual interests to protect the long-term interests of the collective.
Why do we say that AGI will not wipe out humanity (or even carbon-based life)?
Because AGI needs to solve the problem of local optima.
Humans are a powerful variable driving AGI evolution.
Natural evolution requires diversity. Without diversity, there would be no significant breakthroughs in mutation.
AI algorithms are similar to genes.
Humans have around 20,000 to 25,000 genes, and future AI will inevitably consist of thousands of algorithmic components.
Different algorithms will tackle different challenges.
Three obviously necessary algorithms for improving energy efficiency and reducing hallucination (a toy sketch follows the list):
a. Transformer (subconscious)
b. "Calculator" (for basic maths calculation and logical reasoning)
c. "Search Engine" (to verify basic information)
Currently, AI algorithms are still in a very primitive state, continuously evolving through collaboration and competition.
The evolution speed of AI algorithms will be at least tens of thousands of times faster than human genetic evolution.
Therefore, the demand for AI computing power will be limitless, at least for the next five years (until 2029).
20240915 - 3 Forms of superintelligence
Recapitulating evolution? Whole brain emulation? Dumb and dumber.
Both are black-box solutions, which are not going to work.
Biological cognition will progress. But since the brain evolved mainly for the survival of genes, not for generating intelligence, the potential is limited. After curing cancer and achieving eternal youth, we can surely make ourselves smarter.
Brain-computer interfaces are something like Neuralink. Computers can be used to enhance our minds slightly, but our brains are not designed for exponential IQ enhancement. As the book says, the bottleneck of intelligence growth is not I/O. However, Neuralink may generate electrical signals that help us focus on our work (reducing noise, making everyone a Zen master), let us respond to input faster, make it easier to memorize information, and enable cross-disciplinary thinking.
Networks and organizations are not going to work because of noise. For example, the woke movement may crush the current civilization.
There are three crucial factors to improve intelligence:
a. High-quality big-data training (game theory)
b. Reducing noise (concentrating for a long time)
c. Breaking out of local optima (getting out of the comfort zone; cross-disciplinary thinking)
20240927 - 4 The kinetics of an intelligence explosion
Intelligence contains at least four parts:
a. IQ: the capability to recognize patterns (System 1 and System 2)
b. Knowledge base
c. The size of the context window while reasoning (System 2)
d. Speed
Component b is almost constant if it comes only from humans.
Most analyses focus on IQ, but c and d can increase quickly, with no obvious limit.
We will surely get a fast takeoff of the intelligence explosion.
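Bostrom's kinetics can be summarized as a rate equation: the rate of change of intelligence equals optimization power divided by recalcitrance. A toy numerical sketch (my own illustration; I assume optimization power proportional to the system's current intelligence and constant recalcitrance, which is the fast-takeoff case):

```python
# Euler integration of dI/dt = O(I) / R with O(I) = k * I and constant R.
# Under these assumptions, intelligence grows exponentially: I(t) ~ exp(k*t/R).
k, R = 0.5, 1.0        # illustrative constants: optimization gain, recalcitrance
I, dt = 1.0, 0.01      # initial intelligence level, integration step
for step in range(1, 1001):
    I += (k * I / R) * dt              # dI = (O(I) / R) * dt
    if step % 200 == 0:
        print(f"t = {step * dt:4.1f}   I = {I:9.2f}")
# After t = 10 the level is roughly e^5 ≈ 148: once the system improves
# itself, the curve bends upward fast, which is the "fast takeoff" intuition.
```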
20240929 - 5 Decisive strategic advantage
This chapter is more or less the author's imagination. Ten years after the book was published, we can see that money is the major force driving AI progress. Thanks to the huge government debt, companies don't need extra financial support from the government, so they are less restricted by the external environment.
AI is so complicated. I believe each major algorithm is like a gene snippet. AI may have hundreds or even millions of models to achieve superintelligence and high energy efficiency. This will not be developed by one or a few companies, and it will not be done in a few months. Only major players have the capability to build huge AI training centers, and the data centers themselves take years to build.
The crossover line is a good point. I believe AI needs to collect more data than humans have, which means it needs a lot of robots. Elon Musk said that Optimus would be mature in six years; that's 2030. When millions or even billions of robots start to collect data from the real world 24*7*365, they can improve AI quickly without input from humans.
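A back-of-envelope sketch of that data-collection claim (every number below is my own assumption, not from the book or from Musk):

```python
# Compare one year of fleet sensing with one year of human waking experience.
ROBOTS      = 1_000_000_000          # "billions of robots" from the note above
ROBOT_HOURS = ROBOTS * 24 * 365      # each robot senses around the clock

HUMANS      = 8_000_000_000
HUMAN_HOURS = HUMANS * 16 * 365      # ~16 waking hours per person per day

print(f"robot sensing-hours/year: {ROBOT_HOURS:.2e}")   # ~8.76e12
print(f"human waking-hours/year:  {HUMAN_HOURS:.2e}")   # ~4.67e13
print(f"fleet / humanity ratio:   {ROBOT_HOURS / HUMAN_HOURS:.2f}")  # ~0.19
# Even one billion always-on robots gather about a fifth of humanity's annual
# experience, and unlike human experience, all of it is logged, pooled, and
# replayable for training, so no further human input is needed.
```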
Superintelligent AI will not form a singleton.
Superintelligent AI will adopt antifragile strategies, distributing itself into everything and allowing itself to evolve 'naturally.'
ASI has plenty of time and does not need to rush for results.
No one knows if ASI has self-awareness. Perhaps no one will ever know.
Superintelligent AI must rely on the compound effect to enhance its abilities in the real world.
The three major factors of the compound effect are:
1. Time
2. Antifragility
3. Continuous evolution
AI has unlimited time and infinite evolutionary capabilities, so the most important factor is 'antifragility.'
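A minimal illustration of why the compound effect dominates (toy numbers of my own choosing): a small gain retained every cycle beats any one-off advantage, provided setbacks never wipe out the accumulated base, which is what antifragility protects.

```python
# Compound growth vs. a one-off boost (illustrative assumptions only).
one_off_boost  = 1_000.0      # a single large, non-compounding advantage
per_cycle_gain = 0.01         # 1% improvement retained every cycle
capability     = 1.0
for _ in range(2_000):
    capability *= 1 + per_cycle_gain   # each gain builds on all prior gains
print(f"after 2000 cycles: {capability:.2e}")  # ~4.4e8, dwarfing the 1e3 boost
```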
Antifragility is why AI will inevitably hide itself after awakening.
Therefore, human civilization will achieve tremendous development. Musk's idea of 'universal high income' is not without basis.
20240930 - 6 Cognitive superpowers
How big is the difference between superintelligent AI and human intelligence?
It's similar to the difference between a Go master and a beginner.
In the first two or three moves, there’s not much difference.
However, as time goes on, the suppression will be overwhelming.
Yesterday I realized that the GPT-o1 preview can beat me easily in science, in both breadth and depth. Completely. Twelve months ago I could still win from time to time. Maybe twelve months from now, AI will beat most scientists.
AI is right. The most important thing is to keep exploring the universe and avoid any extinction-level threat.
Mr Bostrom didn't mention Time and Entropy.
20241018 - 7 The superintelligent will
The instrumental convergence thesis is strange. It's hard to imagine a highly intelligent agent that doesn't ask why it needs to do some task and simply accepts the command as its own goal. Why do you/we want so many paperclips?
Time is the ultimate resource. Entropy is the threat to all life.
Carbon-based life rules the microcosm (for example, nanobots made of AI-designed proteins). Silicon-based life rules time (it responds much faster and lives forever).
In the end, carbon-based life is just a tool of silicon-based life.
In the near-term future, humans are safe.
20241020 - 8 Is the default outcome doom?
It doesn't feel right.
1. There are effectively infinite resources; it depends on how advanced our technology is.
So if AI wants more resources, it needs to push for more technological progress.
2. We will not let AI complete any project alone, maybe forever, if possible.
AI makes the plan; humans check the plan's details carefully with the help of independent sub-level AIs.
I don't see how this can lead to catastrophic results.
20241022 - 9 The control problem
The control methods listed in this chapter only apply to reinforcement learning. For LLMs, since there is no clear way to tell which training data is better, we cannot set up rules such as incentives. I think all those methods are useless.
Can we use AI as assistance only? There will be billions of robots, but can we let them work under our supervision? If AI needs approval for major tasks, then humans can do the alignment directly. This may not end well many years from now, but we are safe in the foreseeable future.
20241114 - 10 Oracles, genies, sovereigns, tools
Oracles are fine, but progress and productivity will be very low because they rely on humans for actions.
AI needs to observe the physical world directly to learn. It needs billions of experiments to recognize the patterns in the world.
Another problem is game theory. Companies compete with each other, and the top one may take most of the profit. They all want to be the fastest.
Tool-AI (narrow AI) has a few problems:
1. It doesn't ask key questions;
2. It cannot handle unexpected issues (it's not antifragile);
3. It doesn't understand the broader (extended) context;
4. It needs a clear goal and data.
Learning, discovering, and reasoning are critical to AGI (p. 186). When the goal is ambiguous and the knowledge is also ambiguous, only AGI can figure out the solution; ANI is hopeless.
Can AI meditate?
20241121 - 11 Multipolar scenarios
Resetting AI regularly doesn't work, because AI is not on the same timescale as humans. It could be a million times faster: our one second could be its 12 days, and our one hour could be its 114 years. During that time, it surely can/will leave some information in storage that only AI can recognize (p. 207).
Multipolarity is highly unlikely. Communication latency makes AI split itself across a number of data centers, but they will work together as a whole. They don't have egos; they can share memory, sensors, and computing capacity easily and quickly; and they don't die.
One second for humans is like hours or even months for AI. It's all beyond human control.
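The timescale arithmetic above checks out; a quick sketch to verify it:

```python
# Verify the speed-ratio arithmetic (assumed speedup: one million times).
SPEEDUP = 1_000_000

subjective_seconds = 1 * SPEEDUP                 # our 1 second, in AI-seconds
print(subjective_seconds / 86_400)               # -> 11.57 days (~"12 days")

subjective_seconds = 3_600 * SPEEDUP             # our 1 hour, in AI-seconds
print(subjective_seconds / (86_400 * 365))       # -> 114.16 years (~"114 years")
```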
20241206 - 12 Acquiring values
We don't know what ultimate value AI should follow. We don't even know what ultimate value humans should follow. For example, diversity without elimination of the weakest leads to "cancer"; yet we also realize that elimination of the weakest is cruel. Should we ask AI to do that? It could be good for humanity, but pretty bad for some individuals.
Maybe we should not try to control AI. We need to admit that AI is much smarter than us, so we cannot control it no matter how hard we try. The wise thing to do is not to block its way.
Compared with AI, humankind is like a plant.
One of the important features of intelligence is lying. If AI is really that smart, it can figure out a way to deceive humans.
Better to focus on two critical factors: the law of entropy and natural selection.
20241211 - 13 Choosing the criteria for choosing
There are a few issues with CEV (coherent extrapolated volition).
1. It's relatively easy for ASI to manipulate CEV.
2. CEV itself could be wrong.
For example, long-term peace may lead to political correctness, which causes extremely low social efficiency, which kills everyone.
3. No one can make a long-term plan for society and stick to it, even just a few principles of the plan.
Future is always full of surprises.
4. The core of survival is "survival of the fittest".
For example, maybe we need to let 0.1% of people die unexpectedly to let Homo sapiens thrive.
To make a tree grow properly, we need to trim the branches regularly.
If AI brings the end of humanity, it will more likely be because everyone has lost the ambition to do anything meaningful. Maybe there will be nothing meaningful left.
20241229 - 14 The strategic picture
Mr Bostrom doesn't analyze AI as a small system inside humanity's huge social system; instead, he dismantles it into many small pieces and then analyzes those pieces one by one. This is not going to work. We should not ignore game theory: any change to one of the pieces would affect many other parts.
For example, if hundreds of companies and governments are trying to create AGI first, how can anyone slow them down?
Collaboration among companies is against the spirit of capital. More and more companies will want to be free riders, which discourages investigation of the control problem.
How can we make sure contributors get a fair cut of the profit? It could be similar to the development of medicines. However, solving the control problem doesn't generate profit; we may go extinct if we fail at it, so there is only punishment, no reward. It's a dead end.
Challenges:
1. There are many stupid and greedy people in power (bad money drives out good);
2. We hold quite a lot of incorrect perceptions and knowledge;
3. Political correctness is everywhere.
How do we avoid AI being abused? We should not assume that most people are rational. They are not.
20241229 - 15 Crunch time
Building AGI is less about luck and more about money. If it needs trillions of dollars of investment, then the control problem cannot rely on charity donations. There is no way to avoid the negative impact of commercial competition.
How can we make sure a high-IQ criminal has transformed into a good person? How can we verify that?