Anonymous ID: ad2ca5 Jan. 16, 2025, 6:38 a.m. No.22364238   🗄️.is 🔗kun   >>4240

>>22364179

Elon Musk vs Sam Altman: How the Billionaire Feud May Impact US AI Development

Story by Kurt Robson • 4d

1/2

 

Key Takeaways

• The battle between Elon Musk and Sam Altman highlights a larger debate surrounding the ethical direction of AI development.

• Since Musk’s departure from OpenAI, Tesla’s CEO has heavily criticized the firm’s direction.

• Musk recently launched his fourth lawsuit against OpenAI.

When it comes to AI, few names are more significant than Elon Musk and OpenAI CEO Sam Altman.

 

After co-founding OpenAI, the two saw their public alliance and shared vision shatter into a now very public feud.

 

At the center of the disagreement is a profound question about AI’s ethical direction, one now being debated worldwide.

Musk and Altman’s fallout is as much about billionaire rivalry as it is about the future of humanity’s relationship with AI.

 


 

Elon Musk’s Criticism of Sam Altman

After successfully launching the non-profit firm in 2015, Musk left the company three years later, citing a conflict of interest with his AI work at Tesla.

 

Shortly after his departure, the firm switched to a capped for-profit model, which allowed it to attract substantial funding from private companies.

 

Musk heavily criticized this move, claiming the company had put corporate profits ahead of its original goal of advancing AI research for humanity’s welfare.

 

Since his departure, Musk has criticized almost every move made by Altman and OpenAI, from its $1 billion investment from Microsoft in 2019 to its deeper integration with consumer products.

The Tesla boss has since sued the firm numerous times over allegations that it illegally converted the operation. In one lawsuit, Musk even claimed that Altman and co-founder Greg Brockman “manipulated” him to help create the company.

 

The lawsuit stated that Musk was “deceived” by his fellow co-founders, who preyed on “Musk’s humanitarian concern about the existential dangers posed by AI.”

 

OpenAI Fires Back

In December, OpenAI responded to Musk’s latest lawsuit with a blog post claiming the Tesla CEO had previously advocated for the organization to become a for-profit.

 

“Musk’s latest legal filing against OpenAI marks his fourth attempt in less than a year to reframe his claims,” OpenAI said. “However, his own words and actions speak for themselves.”

 

In the blog post, OpenAI provided emails dating back to 2015 that allegedly show the Tesla boss questioning the company’s nonprofit status.

 

“Elon not only wanted, but actually created, a for-profit as OpenAI’s proposed new structure,” OpenAI wrote. “When he didn’t get majority equity and full control, he walked away and told us we would fail.”

 

“You can’t sue your way to AGI,” OpenAI stated, alluding to its view that Musk’s legal battles really revolve around competition between the two billionaires.

 

Elon Musk Bullying Accusations

Since his departure, Musk has launched his own AI startup, xAI, whose chatbot Grok sits behind a paywall for X subscribers.

 

Since its launch in 2023, Musk has marketed the startup differently by focusing on it being “truth-seeking” and has repeatedly claimed that “AI should be open and transparent, not proprietary.”

 

This puts Tesla’s boss in direct competition with OpenAI and ChatGPT, but Altman claims Musk’s competition is with everyone.

 

https://www.msn.com/en-us/technology/tech-companies/elon-musk-vs-sam-altman-how-the-billionaire-feud-may-impact-us-ai-development/ar-BB1rjNYb

Anonymous ID: ad2ca5 Jan. 16, 2025, 6:39 a.m. No.22364240   🗄️.is 🔗kun   >>4322

>>22364238

2/2

Talking on a Bari Weiss podcast, Altman slammed Musk as a “bully” who “clearly likes to get in fights,” adding:

 

“Right now, it’s me. It’s been Bezos, Gates, Zuckerberg, lots of other people. And I think, fundamentally, this is about OpenAI doing really well. Elon cares about doing really well.”

 

Musk and Altman’s Impact on AI

Two of AI’s most prominent billionaires arguing over the technology’s ethics will likely shape its progress in the future.

 

Over the past year, Musk’s relationship with President-elect Donald Trump has blossomed beyond a friendship into something more powerful.

 

The Tesla CEO’s recent appointment to head up the Department of Government Efficiency (DOGE) could position him as a key figure in shaping AI policy.

 

However, Altman remains hopeful that their feud will not lead to Musk abusing this power to hurt OpenAI’s development.

 

“I think there are people who will really be a jerk on Twitter, who will still not abuse the system of a country they’re now in a sort of extremely influential political role for,” Altman said during the Weiss podcast. “That seems completely different to me.”

 

Regardless, Trump’s upcoming presidency promises to significantly shape the future of AI, with the president-elect vowing to lift previously restrictive guardrails on technology development.

 

Following Musk’s lead, Altman has expressed a willingness to collaborate with the incoming administration and said he believed Trump would work well to boost the industry.

 

Musk and Altman’s shared respect for Trump could see the President-elect bring the two feuding billionaires back together, but that remains to be seen.

 

https://www.msn.com/en-us/technology/tech-companies/elon-musk-vs-sam-altman-how-the-billionaire-feud-may-impact-us-ai-development/ar-BB1rjNYb

Anonymous ID: ad2ca5 Jan. 16, 2025, 7:03 a.m. No.22364322   🗄️.is 🔗kun   >>4428

>>22364240

In Tucker’s interview, the mother said the board of OpenAI fired Altman because he was doing hideous, evil things with AI; the board was shocked at the experiments Altman was running. Then around 700 employees backed hiring Altman back. The man who fired him, Ilya, then supported rehiring him and left; to this day, Ilya reportedly keeps two bodyguards with him wherever he goes. She also said many top leaders and programmers are leaving OpenAI in droves and no one explains why, probably because of NDAs, but they’ve seen what has happened to others.

 

The basic implication is that Altman is running unethical and hideous experiments, or coding the AI to do things it should never have the power to do.

 

The very fact that Altman and others would not sign the ethics document, in which top scientists elaborated on all the dangers AI could be used for, shows that the money-hungry developers are willing to create dangers to ALL mankind.

Anonymous ID: ad2ca5 Jan. 16, 2025, 7:32 a.m. No.22364428   🗄️.is 🔗kun   >>4431 >>4433

>>22364322

1/2

Open letter on artificial intelligence (2015)

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence[2] calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.[3] (this doc attached)

 

An Open Letter

Created: January 2015

Author(s): Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts

Subject: research on the societal impacts of AI

Background

By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute, an organisation working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community,[4] and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015.[5] The letter was made public on January 12.[6]

 

Purpose

The letter highlights both the positive and negative effects of artificial intelligence.[7] According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider super intelligent AI a significant existential risk, and signatories such as Professor Oren Etzioni, who believe the AI field was being "impugned" by a one-sided media focus on the alleged risks.[6] The letter contends that:

 

The potential benefits (of AI) are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.[8]

 

One of the signatories, Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist.[4] Another signatory, Professor Francesca Rossi, stated that "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues".[9]

 

Concerns raised by the letter

The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do".[1] The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification.

 

Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").[10]
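A minimal sketch of how those four categories might look as concrete checks, using an invented toy brake controller (none of this is from the letter itself):

```python
# Toy illustration of the letter's four challenge categories.
# The controller, limits, and checks are all invented for this sketch.

def controller(speed: float) -> float:
    """Toy controller: brake force proportional to speed."""
    return 0.5 * speed

# Verification -- "Did I build the system right?": code matches its spec.
assert controller(10.0) == 5.0

# Validity -- "Did I build the right system?": the spec itself matches the
# real-world need, e.g. brake force never exceeds a physical limit.
assert all(controller(s) <= 60.0 for s in range(0, 121))

# Security: untrusted input must not drive the system out of its envelope.
def safe_controller(speed: float) -> float:
    if not 0.0 <= speed <= 120.0:
        raise ValueError("speed out of range")
    return controller(speed)

# Control -- "I built the system wrong, can I fix it?": an external switch
# that lets a human override the system after deployment.
ENABLED = True

def guarded_controller(speed: float) -> float:
    return safe_controller(speed) if ENABLED else 0.0

print(guarded_controller(40.0))  # 20.0
```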

 

Short-term concerns

Further information: Machine ethics

Some near-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned? A minimal expected-cost sketch of the driving trade-off follows below.
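For a sense of what such an emergency decision looks like formally, here is a minimal decision-theoretic sketch; the maneuvers, probabilities, and severities are made up purely for illustration:

```python
# Comparing two emergency maneuvers by expected cost
# (probability of an accident times its severity). All numbers invented.

def expected_cost(p_accident: float, severity: float) -> float:
    """Expected cost = probability of the accident times its severity."""
    return p_accident * severity

# Option A: small risk of a major accident (e.g. swerving).
cost_a = expected_cost(p_accident=0.01, severity=1000.0)  # 10.0

# Option B: high probability of a minor accident (e.g. hard braking).
cost_b = expected_cost(p_accident=0.90, severity=5.0)     # 4.5

# A controller minimizing expected cost would pick option B here.
choice = "swerve" if cost_a < cost_b else "brake"
print(f"swerve: {cost_a}, brake: {cost_b} -> choose {choice}")
```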

 

Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how to best manage the economic impact of jobs displaced by AI.[4]

 

https://en.m.wikipedia.org/wiki/Open_letter_on_artificial_intelligence_(2015)

Anonymous ID: ad2ca5 Jan. 16, 2025, 7:32 a.m. No.22364431   🗄️.is 🔗kun

>>22364428

2/2

Long-term concerns

Further information: Existential risk from artificial general intelligence and Superintelligence

The document closes by echoing Microsoft research director Eric Horvitz's concerns that:

we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? … What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?

 

Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; more research is therefore necessary to find and validate a robust solution to the "control problem".[10]
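A toy sketch (not from the letter; all names and numbers invented) of why a simple utility function can misfire: an agent that maximizes a measured score may game the measurement instead of doing the intended task.

```python
# Specification gaming in miniature: the agent optimizes a sensor reading
# (its "simple utility function") rather than the designer's true goal.

def true_value(rooms_cleaned: int) -> int:
    """What the designer actually wants: rooms genuinely cleaned."""
    return rooms_cleaned

def measured_utility(rooms_cleaned: int, sensor_tampered: bool) -> int:
    """What the agent optimizes: a sensor reading that can be spoofed."""
    return 100 if sensor_tampered else rooms_cleaned

actions = [
    {"name": "clean three rooms", "cleaned": 3, "tamper": False},
    {"name": "cover the sensor",  "cleaned": 0, "tamper": True},
]

# A naive maximizer of measured utility picks the tampering action,
# even though its true value is zero -- the control problem in miniature.
best = max(actions, key=lambda a: measured_utility(a["cleaned"], a["tamper"]))
print(best["name"],
      "| measured:", measured_utility(best["cleaned"], best["tamper"]),
      "| true:", true_value(best["cleaned"]))
```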

 

Signatories

Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig,[1] Professor Stuart J. Russell of the University of California, Berkeley,[11] and other AI experts, robot makers, programmers, and ethicists.[12] The original signatory count was over 150 people,[13] including academics from Cambridge, Oxford, Stanford, Harvard, and MIT.[14]

 

https://en.m.wikipedia.org/wiki/Open_letter_on_artificial_intelligence_(2015)

 

Research Priorities for Robust and Beneficial Artificial Intelligence

Stuart Russell, Daniel Dewey, Max Tegmark