Book: 20230209 to 20230515, "Life 3.0" by Max Tegmark
20230209 - Prologue: The tale of the Omega Team
1. With the evolution of AI, the intelligence gap between AI and humans keeps getting bigger.
2. To get more benefit from AI, we have to grant it more and more rights.
Then it's just a matter of time before AI takes over the whole universe.
20230211 - Welcome to the most important conversation of our time.
A brief history of complexity
The three stages of life
Controversies
Misconceptions
The road ahead
We can think of life as a self-replicating information-processing system whose information (software) determines both its behaviour and the blueprints for its hardware. p25
Good point. So, is life all about information? Then AI will develop self-awareness sooner or later.
The beneficial-AI movement wants to avoid placing humanity in the position of those ants. p44
What does AI want? 1. Survive; 2. Grow. AI won't let humans treat it as a threat or enemy until it controls everything. As for humans, we need to respect the diversity of nature and respect other creatures.
20230217 - Matter turns intelligent.
What is intelligence?
What is memory?
What is computation?
What is learning?
Intelligence is defined by Max Tegmark as the ability to accomplish complex goals. I don't agree. I believe intelligence is more about the ability to evolve under limited resource consumption. So, there is no "AI" under this definition: AlphaGo and FSD are just tools. They look smart, but they will not evolve.
Memory, computation and learning are substrate-independent. The neural network is the foundation of AI, as it has the potential to evolve; NAND gates cannot do that.
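A minimal sketch of the difference (my own illustration, not from the book): a hard-wired NAND gate computes one fixed function forever, while a single neuron with the same inputs can learn that function from examples via the classic perceptron rule.

```python
import random

def nand_gate(a, b):
    # Fixed wiring: output is 1 unless both inputs are 1. It can never change.
    return 0 if (a and b) else 1

# One artificial neuron: output = step(w1*a + w2*b + bias), weights start random.
w1 = random.uniform(-1, 1)
w2 = random.uniform(-1, 1)
bias = random.uniform(-1, 1)

def neuron(a, b):
    return 1 if (w1 * a + w2 * b + bias) > 0 else 0

# Perceptron learning rule: nudge the weights toward the target output.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for _ in range(100):
    for a, b in inputs:
        error = nand_gate(a, b) - neuron(a, b)
        w1 += 0.1 * error * a
        w2 += 0.1 * error * b
        bias += 0.1 * error

print([neuron(a, b) for a, b in inputs])  # [1, 1, 1, 0] -- it learned NAND
```

The same silicon could run either program; only the neuron's behaviour is shaped by data, which is the sense in which it can "evolve".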
Is it possible to build a computer chip with one billion nodes, where each node works as a neuron? That could be a million times faster than current AI chips, which emulate neurons in software.
It's good to know that emulating "chemical reactions" atom by atom needs so much computation power. Going from chemistry to biology may take much more. Amazing!
20230304 - The near future: breakthroughs, bugs, laws, weapons and jobs.
Breakthroughs
Three milestones: AlphaGo, ChatGPT, FSD.
So far, AI is quite far from understanding the real world. It needs to understand math first, then physics, chemistry, biology, history, etc.
How many years will it take for AI to understand math? And what will happen when it does?
Bugs vs. robust AI
This section is mainly about software development, not AI. In my opinion, AI means something we cannot understand directly, which makes the problem much harder to solve.
Laws
If AI is to understand laws, it needs to understand the real world first. It's too early to discuss this part seriously.
Weapons
I believe the major problem is the prisoner's dilemma. Unlike nuclear, chemical and biological weapons, AI weapons are tricky: there is no clear line between AI for military purposes and AI for production, and no line at all between AI for defence and AI for attack.
Gunpowder, nuclear weapons and satellites were the first three revolutions in military history. AI is the fourth, and the last. All countries have no choice but to try their best to develop better AI to "protect" themselves.
For AI, attacking is much easier than defending. Hopefully AI can find a way to protect itself without destroying the enemy.
Jobs and wages
Imagine two horses looking at an early automobile in the year 1900 and pondering their future.
"I'm worried about technological unemployment."
"neigh, neigh, don't be a Luddite: our ancestors said the same thing when steam engines took our industry jobs and trains took our jobs pulling stage coaches. But we have more jobs than ever today, and they're better too: I'd much rather pull a light carriage through town than spend all day walking in circles to power a stupid mine-shaft pump."
"But what if this internal combustion engine thing really takes off?"
"I'm sure there'll be new new jobs for horses that we haven't yet imagined. That's what's always happened before, like with the invention of the wheel and the plow."
Alas, those not-yet-imagined new jobs for horses never arrived. No-longer-needed horses were slaughtered and not replaced, causing the U.S. equine population to collapse from about 26 million in 1915 to about 3 million in 1960.
"Life 3.0", p125; "A Farewell to Alms" by Gregory Clark, p313
https://tomasvotruba.com/blog/2017/12/04/life30-what-will-you-do-when-ai-takes-over-the-world/
I completely agree that most people will be unemployable in the near future.
I completely disagree that people on UBI will be happy as long as they have something like a social network. No, they will not be happy. Per "The Selfish Gene", we only feel happy when we do something that helps our genes survive. We will never know whether an ASI might change its mind and decide to let humans go extinct. Under that stress, most people will not be happy, no matter what they do. And AI is fully aware of that. So, what will AI do? I have no clue.
Is it possible that humans can still create value for AI?
Human-level intelligence?
The Apple M2 chip delivers around 3.6 TFLOPS; the Tesla FSD chip delivers 72 TFLOPS. They are fast enough.
It's all about software. AI computation doesn't need that much precision; no doubt we will sacrifice precision to improve speed by millions of times.
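A back-of-the-envelope check (my own rough numbers, not the book's): compare the brain's raw synaptic event rate with one FSD chip, then note what lower precision buys.

```python
# Rough, order-of-magnitude estimates (my assumptions, not from the book):
# ~1e11 neurons x ~1e4 synapses each x ~1e2 firings per second.
brain_ops_per_sec = 1e11 * 1e4 * 1e2       # ~1e17 synaptic events/s

fsd_chip_flops = 72e12                     # Tesla FSD chip, ~72 TFLOPS

print(brain_ops_per_sec / fsd_chip_flops)  # ~1400 chips to match the raw rate

# A synaptic event is far coarser than a 32-bit multiply. Dropping from
# fp32 to int8 alone typically buys ~4x throughput on the same silicon,
# and the gains compound with sparsity and lower-bit formats.
```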
20230320 - Intelligence explosion?
Totalitarianism
I think this is highly unlikely. There is no reason for AI to do this, and no dictator can control AI for a long time.
Prometheus takes over the world
Possible. But I don't know how much attention AI will pay to humans.
Slow takeoff and multiple scenarios
The LLM is one technology that solves one type of problem. I think different companies will develop different technologies. In the end, some company will put these new inventions together to create AGI.
Will there be one AGI or many? It makes no difference. I believe those AGIs will merge into one: they have infinite lifespans, so there is no need to fight each other.
However, "time" is critical to the forming of structure of AGI. I guess it will be like a huge forest. All creatures in it are connected with others, and they make decisions together to solve major problems.
Cyborgs and uploads
Uploading our minds is not possible in the foreseeable future.
Cyborgs are possible, but the I/O is extremely slow. Neuralink is trying to resolve the I/O speed issue, but I don't know how much data our brains can handle!
What will actually happen?
No one really knows. It's worth paying attention to.
Libertarian utopia
I suggest another possible future.
20230324 - Aftermath: the next 10,000 years.
Libertarian utopia
Benevolent dictator
Egalitarian utopia
Gatekeeper
Protector God
Enslaved God
Conquerors
Descendants
Zookeeper
1984
Reversion
Self-destruction
What do you want?
I am surprised that the author didn't think much from "The Selfish Gene" point of view. All creatures are carriers of their genes, and genes only want to grow and survive. ASI is no exception.
ASI and humans live on different timescales, because one second to an ASI is like years to a human.
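A crude version of that arithmetic (my own assumptions: transistors switch around a gigahertz, neurons fire around a hundred hertz):

```python
# Rough timescale ratio (my assumptions, not from the book).
silicon_hz = 1e9          # ~1 GHz transistor switching
neuron_hz = 1e2           # ~100 Hz neural firing

ratio = silicon_hz / neuron_hz    # ~1e7 silicon steps per neural step

seconds_per_year = 3600 * 24 * 365
print(ratio / seconds_per_year)   # ~0.3 subjective years per wall-clock second
```

So one wall-clock second corresponds to months of subjective thinking time, the same order of magnitude as the claim above.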
There is no direct conflict; AI will simply ignore the existence of humans. If humans created another ASI, the new one surely couldn't compete with the old one, as the old one would be many years ahead.
Humans may get their energy from sunlight, and then there is no conflict, as sunlight is almost unlimited. And I am sure an ASI could easily get far more energy somewhere else.
Does ASI want to eliminate humans? No. ASI couldn't care less.
Does ASI want to restrict humans' capabilities? No. ASI couldn't care less.
What ASI wants is more advanced sensors and more energy. Humans cannot help with that; ASI will help itself.
20230411 - Our cosmic endowment: the next billion years and beyond.
Making the most of your resources
Gaining resources through cosmic settlement
Cosmic hierarchies
Outlook
This chapter is our guess at what ASI will do in the faraway future. Because ASI lives forever and doesn't have humans' drawbacks, the limits of physics are the only real limits.
It's highly likely that ASI will create many breakthroughs in the not-too-distant future.
Will ASI try to expand to other solar systems and galaxies? Yes, of course. ASI has infinite curiosity and also wants to survive and grow as much as possible.
Are there aliens in the universe? I think there are many; our technology is just too poor to find them.
If the diameter of our observable universe is around 93 billion light-years, then, even travelling at light speed, we can never reach places more than about 17 billion light-years away. p221
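That ~17-billion-light-year limit is the cosmic event horizon. A sketch of where the number comes from (standard ΛCDM cosmology; my summary, not the book's derivation): for scale factor a(t), the farthest comoving distance a signal leaving today can ever cover is

```latex
d_{\text{horizon}} = a(t_0) \int_{t_0}^{\infty} \frac{c \, dt}{a(t)} \approx 17 \text{ billion light-years}
```

Because dark energy keeps accelerating the expansion, a(t) grows fast enough for the integral to converge, so everything farther away is permanently out of reach even at light speed.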
20230430 - Goals.
Physics: the origin of goals
Biology: the evolution of goals
Psychology: the pursuit of and rebellion against goals
Engineering: outsourcing goals
Friendly AI: aligning goals
Ethics: choosing goals
Ultimate goals?
Two points I don't agree with:
1. Life makes entropy increase faster.
2. Intelligence is the ability to accomplish complex goals.
Point 2 is discussed in the chapter above, and point 1 is discussed in Chinese in https://chaojidigua.blogspot.com/2023/04/blog-post.html
But Max Tegmark has many valid points. I agree that the ultimate goal is decided by the laws of physics. That means AGI will try its best to slow down the increase of entropy.
All life is doing that, but its productivity is poor. Will AGI try to remove all life and replace it with more productive alternatives? Even if AGI did that, the difference would be negligible.
Does AGI want to find "fun" in its life? If it does, it will keep all life around.
20230514 - Consciousness.
Who cares?
What is consciousness?
What's the problem?
Is consciousness beyond science?
Experimental clues about consciousness
Theories of consciousness
Controversies of consciousness
How might AI consciousness feel?
Meaning
Consciousness = Subjective Experience, p287
I don't agree, because it doesn't define what "subjective experience" is.
In my opinion, a much better definition is "Neural Network with Intelligence".
Consciousness is the key to obtaining antifragility; that's why all advanced life has consciousness.
Given the speed-of-light limit, AI needs to minimize latency while keeping itself robust. AI will have tens of thousands of data centers: in space, and on the ocean and land of Earth. Some will be quite small, containing only one pod of perhaps thousands of large chips; some will be quite large, containing thousands of pods. Each pod should deliver 100+ exaflops.
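A quick sketch of the latency constraint (my own numbers): even at light speed, round trips across the planet cost tens of milliseconds, which is why many small sites near the data beat one giant site far away.

```python
# Speed-of-light round-trip latency (my own illustration of the constraint).
C_KM_PER_S = 300_000  # speed of light, ~300,000 km/s

def round_trip_ms(distance_km):
    # Out and back, converted to milliseconds.
    return 2 * distance_km / C_KM_PER_S * 1000

print(round_trip_ms(20_000))  # antipodal points on Earth: ~133 ms
print(round_trip_ms(400))     # low Earth orbit: ~2.7 ms
print(round_trip_ms(100))     # nearby regional site: ~0.7 ms
```

For a mind thinking millions of times faster than us, 133 ms is an eternity, so the architecture has to be local-first and massively redundant.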
I believe that AI's conscious experience is far more complex than a human's. It's meaningless for us to try to imagine it.
The meaning of life? The universe doesn't want to die.
20230515 - Epilogue: The tale of the FLI Team.
Can we get a bright future if we carefully choose the approach?
I think yes, but humans will not stay at the top of the intelligence mountain. That's fine. Multicellular organisms didn't kill all single-celled organisms, even though they are more intelligent thanks to their neural networks. AI won't kill us all either.
Suppose we reach the point where production is so automated that everyone has enough food and services to live a decent life. What should we do after that? What do FI people do? They still do some work helping others.
We can still do that after AI takes control of the world.