Why an AGI Cold War will be disastrous for humanity
Dr Waku

Published on Jun 22, 2024

As we get closer to AGI (artificial general intelligence), the military applications of AI are becoming increasingly obvious. To begin navigating this terrain, OpenAI added the former director of the NSA, Paul Nakasone, to its board. We discuss the possible ramifications of this move.

In a recent essay, Leopold Aschenbrenner (who left OpenAI's superalignment team) outlines his predictions for the future, including rough timeline estimates for scaling and thus for AGI development. The essay also highlights the US government's likely growing military interest in AGI, and the potential for a national effort, akin to the Manhattan Project, to develop militarized AGI.

This would likely lead to an all-out race with China: essentially a new Cold War to develop the next era of superweapons. We discuss why this would be a terrible outcome for humanity, given the increased risk of losing control of AGI or creating non-aligned superintelligence. Let's hope a coalition of the major players can avoid such a Moloch outcome.

#agi #superintelligence #espionage

OpenAI appoints former top US cyberwarrior Paul Nakasone to its board of directors
https://apnews.com/article/openai-nsa...

AI Safety Summit Talks with Yoshua Bengio
• AI Safety Summit Talks with Yoshua Be...

SITUATIONAL AWARENESS: The Decade Ahead
https://situational-awareness.ai/

SITUATIONAL AWARENESS IIIb. Lock Down the Labs: Security for AGI
https://situational-awareness.ai/lock...

SITUATIONAL AWARENESS II. From AGI to Superintelligence: the Intelligence Explosion
https://situational-awareness.ai/from...

SITUATIONAL AWARENESS IIIa. Racing to the Trillion-Dollar Cluster
https://situational-awareness.ai/raci...

International Scientific Report on the Safety of Advanced AI
https://www.gov.uk/government/publica...

The AI Revolution: Our Immortality or Extinction
https://waitbutwhy.com/2015/01/artifi...

0:00 Intro
0:27 Contents
0:33 Part 1: Militarization
1:03 Leopold Aschenbrenner's Situational Awareness
1:56 State intervention in AGI development
2:19 Military funding for research
3:18 OpenAI's board adds former NSA director
3:49 Who to hire to defend against attacks
4:16 Why else would Nakasone join the board?
4:55 Part 2: The San Francisco project
5:17 The Manhattan Project
5:38 As we get closer to AGI, the government will take notice
6:02 Possibly nationalizing labs
6:42 The first group to reach AGI wins a lot
7:09 Rivalry between US and China
7:39 China will likely hack into AI companies
8:04 Intelligence agencies and Edward Snowden
8:52 Technical capabilities of intelligence agencies
9:30 Security at startups is the worst
9:57 Cakewalk to infiltrate AI companies
10:20 Part 3: The Doomsday project
10:33 Other possible shapes to the future
11:10 Essay hasn't gotten much attention
11:32 Plan ends at the development of superintelligence
12:10 Strong act to prevent new superintelligence
12:43 Example: marbles analogy
13:18 Example: black marble candidates
13:48 You can't keep superintelligence in a box
14:11 Extinction or immortality
14:28 An AI race pushes us towards extinction
15:13 What can we do about this problem?
16:08 Darwinian race off a cliff
16:24 Conclusion
17:10 Situational Awareness is a doomsday prediction
17:57 Book and reading recommendations
18:19 Outro
