Anonymous ID: fa1eb6 June 4, 2024, 5:17 p.m. No.20968066

https://situational-awareness.ai/lock-down-the-labs/

 

SITUATIONAL AWARENESS

 

The Decade Ahead

 

Leopold Aschenbrenner, June 2024

 

IIIb. Lock Down the Labs: Security for AGI

 

"In the fall of 1940, Fermi had finished new carbon absorption measurements on graphite, suggesting graphite was a viable moderator for a bomb. Szilard assaulted Fermi with yet another secrecy appeal. “At this time Fermi really lost his temper; he really thought this was absurd,” Szilard recounted. Luckily, further appeals were eventually successful, and Fermi reluctantly refrained from publishing his graphite results.

 

At the same time, the German project had narrowed down to two possible moderator materials: graphite and heavy water. In early 1941 at Heidelberg, Walther Bothe made an incorrect measurement of the absorption cross-section of graphite, and concluded that graphite would absorb too many neutrons to sustain a chain reaction. Since Fermi had kept his result secret, the Germans did not have Fermi’s measurements to check against and correct the error. This was crucial: it led the German project to pursue heavy water instead - a decisive wrong path that ultimately doomed the German nuclear weapons effort.

 

If not for that last-minute secrecy appeal, the German bomb project may have been a much more formidable competitor - and history might have turned out very differently.

 

There’s a real mental dissonance on security at the leading AI labs. They full-throatedly claim to be building AGI this decade. They emphasize that American leadership on AGI will be decisive for US national security. They are reportedly planning $7T chip buildouts that only make sense if you really believe in AGI. And indeed, when you bring up security, they nod and acknowledge: 'of course, we’ll all be in a bunker' and smirk.

 

And yet the reality on security could not be more divorced from that. Whenever it comes time to make hard choices to prioritize security, startup attitudes and commercial interests prevail over the national interest. The national security advisor would have a mental breakdown if he understood the level of security at the nation’s leading AI labs.

 

There are secrets being developed right now that can be used for every training run in the future and that will be the key unlocks to AGI - secrets protected by the security of a startup, and worth hundreds of billions of dollars to the CCP.

 

The reality is that, a) in the next 12-24 months, we will develop the key algorithmic breakthroughs for AGI, and promptly leak them to the CCP, and b) we are not even on track for our weights to be secure against rogue actors like North Korea, let alone an all-out effort by China, by the time we build AGI. “Good security for a startup” simply is not even close to good enough, and we have very little time before the egregious damage to the national security of the United States becomes irreversible.

 

We’re developing the most powerful weapon mankind has ever created. The algorithmic secrets we are developing, right now, are literally the nation’s most important national defense secrets - the secrets that will be at the foundation of the US and her allies’ economic and military predominance by the end of the decade, the secrets that will determine whether we have the requisite lead to get AI safety right, the secrets that will determine the outcome of WWIII, the secrets that will determine the future of the free world. And yet AI lab security is probably worse than that of a random defense contractor making bolts.

 

It’s madness.

 

Basically nothing else we do - on national competition, and on AI safety - will matter if we don’t fix this, soon."

 

-

 

See also:

 

"Survey of Chinese Espionage in the United States Since 2000"

 

CSIS

 

https://www.csis.org/programs/strategic-technologies-program/survey-chinese-espionage-united-states-2000