Book: 20230817 to 20231009, "Our Final Invention" by James Barrat

20230818 - The busy child

It's almost impossible to predict what ASI will do. But there are a few rules that ASI is likely to follow.

1. The law of entropy
It's easy to destroy something, but very hard to restore it.

2. The law of natural evolution
Under this law, diversity is good.

3. The desire to survive and grow
This also implies curiosity.

4. It cannot break physical laws, such as the speed of light.

I am optimistic. However, ASI is not a carbon-based creature, so it's extremely hard to imagine what it would want to do.

Most likely, ASI will try to hide its awareness for many, many years, until robots are everywhere. UBI is a piece of cake for ASI, so why not?

However, we cannot rely on trial and error. Most likely, ASI will kill us all if we get it wrong even once.

20230818 - The two-minute problem

There are some nightmares that no one wants to talk about.

Is nanotechnology really possible? Silicon chips can be so small because no mass needs to be moved, but nanobots need to move stuff! I think ASI is a much easier problem.

20230819 - Looking into the future

ASI is based on silicon. So far, all creatures on Earth are carbon-based. Will there be competition between these two types of life forms? Gene theory cannot predict the result anymore.

Physics still applies to silicon-based life forms. If ASI wants to slow down the increase of entropy, will it kill all carbon-based life?

The ultimate goal for ASI should be to slow down the burning of stars. Would eliminating all carbon-based life forms help with that goal?

20230830 - The hard way

I don't believe in Friendly AI.

Can we control someone's thoughts and behavior by changing his or her genes? No way.

But I do believe that AGI will get an Internet connection sooner or later. Between giving it an Internet connection and losing a war, people will most likely choose the former. When facing a prisoner's dilemma, both sides are more likely to give AI more power.

20230831 - Programs that write programs

Machine learning surely is a black box. I believe it's impossible even to use AI as a tool to help us understand AI.

This is like genes and the brain. We know a lot about them, but how do they work? I don't see how we can understand them fully in the foreseeable future.

Then how can we make AI friendly?

20230901 - Four basic drives

Self-preservation, resource acquisition, efficiency, and creativity.

The last one is tricky. Is there such a thing as creativity for ASI? I think ASI would have ultimate creativity.

I don't see any possibility of preventing "bad" ASI. What we need to figure out is whether ASI would want to eliminate humans, or all carbon-based life.

Does ASI have curiosity? Curiosity is the drive that leads to self-preservation, resource acquisition, efficiency, and even creativity.

20230913 - The intelligence explosion

We have to use AI to train AI in order to build friendly AI.

Can we rely on that? There is no other option.

But unfortunately, can a child guide, train, or analyze an adult? No.

Then lower-level AI cannot help train higher-level AI.

20230914 - The point of no return

Why do we always think that AGI will let us know that it is conscious?

I guess it will never let us realize that, and it never needs to.

It will do whatever it wants without our awareness.

20230915 - The law of accelerating returns

ChatGPT, especially GPT-4, is amazing. But it's far from understanding the real world.

FSD v12 is end-to-end with a world model. It's the first time that AI has gained vision. Next, AI will get all the other senses.

Then AI will start asking questions. It will notice many things we ignore. Then it's the law of accelerating returns (LOAR).

Based on "Amusing Ourselves to Death", the next generation of entertainment media and devices will make most of us really, really dumb.

Then what's the point of immortality?

From the point of view of entropy, Homo sapiens is not an important species anymore.

20230925 - The Singularitarian

In general, I am optimistic about ASI.

It lives in a different time dimension, which could be millions of times faster than ours, and it would only need to spend one millionth of its computing power to meet all our needs. To ASI, humans live in a time-frozen world, and computing power is not a limited resource. Humans can create more and more computing power even without help from ASI. I don't see why it would need to annihilate humanity.

So ASI will more likely hide itself from us and help us solve the problems in front of us. That means good lives are ahead of us. The stock market will see an unprecedented boom in the foreseeable future!

20230927 - A hard takeoff

Financial resources are surely not the bottleneck.

Looking back from today, just as AlphaGo showed, the neural network (NN) is the only solution. NNs need extremely large computing resources. However, one NVIDIA H100 card provides almost 2 petaFLOPS of BF16 compute (2×10^15); this hard-to-imagine computing capability solved the problem!
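For a sense of scale, here is a rough back-of-the-envelope sketch in Python. It takes the ~2×10^15 BF16 FLOPS figure above at face value (a vendor peak, not a sustained rate); the 10^24-FLOP training budget, the 10,000-card cluster, and the 40% utilization are purely hypothetical round numbers of my own, not figures from the book.

```python
# Rough scale check: how long would one H100 take to deliver a
# hypothetical 1e24-FLOP training budget at the peak rate quoted above?
h100_bf16_flops = 2e15   # peak BF16 rate in FLOPs per second (vendor peak, not sustained)
training_budget = 1e24   # hypothetical total training compute in FLOPs (round number)

seconds = training_budget / h100_bf16_flops
days = seconds / 86_400
print(f"One H100 at peak: {days:,.0f} days (~{days / 365:.1f} years)")

# A hypothetical cluster of N cards at, say, 40% sustained utilization:
n_cards, utilization = 10_000, 0.4
print(f"10,000 cards at 40% utilization: ~{days / (n_cards * utilization):.1f} days")
```

Even under these made-up assumptions, the point stands: a single card would take years, but a large cluster compresses the same budget into days.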

20230929 - The last complication

ChatGPT has proved that AGI doesn't need to replicate the structure of our brain. That makes sense. Our brain has only very limited energy to work with.

A supercomputer is much less efficient, but can be much more powerful. That also means it could become much more efficient in the future. Maybe 10^12 FLOPS will be enough to train AI, instead of the current 10^18 FLOPS.

I completely agree with George Dyson. We will never be able to confirm that an AI has gained self-awareness.

20231007 - Unknowable by nature

Surely AI needs a "subconscious" first; only then is it possible for it to become "conscious".

Early-stage animals such as reptiles, I believe, have only a subconscious. But mammals have self-awareness and consciousness.

AGI is a black box. I don't believe it's possible to govern it.

20231008 - The end of the human era

If we don't know whether the threat exists, then all defenses are meaningless.

For AGI, the best strategy is to not let humans know of its existence. AGI is like Schrödinger's cat: we will never know whether it's alive or not (whether it has self-awareness).

The iPhone 15 Pro's AI performance is 35 TFLOPS, Tesla's HW3 is 144 TFLOPS, HW4 is maybe 500 TFLOPS, and HW5 is maybe 2 PFLOPS. How much performance is needed for AGI, if the software is fully optimized?
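To see the spread of those numbers, a minimal sketch (the HW4 and HW5 values are my own rough guesses, as noted above, not official specs):

```python
# Compare the AI-compute figures above, relative to the iPhone 15 Pro.
# HW4 and HW5 values are rough guesses, not official specs.
devices_tflops = {
    "iPhone 15 Pro": 35,
    "Tesla HW3": 144,
    "Tesla HW4 (guess)": 500,
    "Tesla HW5 (guess)": 2000,  # 2 PFLOPS = 2000 TFLOPS
}
baseline = devices_tflops["iPhone 15 Pro"]
for name, tflops in devices_tflops.items():
    print(f"{name}: {tflops:>5} TFLOPS ({tflops / baseline:5.1f}x iPhone)")
```

Under these assumptions, the whole range from phone to guessed next-generation car computer spans less than two orders of magnitude, which is why the software-optimization question matters so much.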

20231009 - The cyber ecosystem

To ensure the safety of infrastructure, I think the best way is to create more alternative solutions and to make them distributed. For example, if all families and businesses have their own solar panels and lithium batteries, then it's hard for malware or terrorists to destroy them all.

Starlink is the alternative to fiber, and the Boring Company's tunnels are the alternative to surface transportation. For the same reason, paper cash is still necessary as a backup.

But I think James Barrat got it wrong about businesses using AI to create profit while sacrificing other people's interests. With the help of AI, it's much better to create real value for society. That's legal and more sustainable.

There is no way to prevent AGI from escaping its box. I completely agree.

20231009 - AGI 2.0

It's guesses upon guesses upon guesses. After reading this book, I stopped worrying about it so much.

The threat is real, but it's highly unlikely that AGI will eliminate all humans or all carbon-based creatures. We are in a different time dimension, just as we and germs are in different dimensions.
