Artificial Intelligence Poses “Risk Of Extinction”, Warns ChatGPT Founder And Other AI Pioneers

Earlier this month, Altman testified before Congress about some of the risks he believes AI tools may pose

Newsroom, June 2, 11:38

Artificial intelligence tools have captured the public’s attention in recent months, but many of the people who helped develop the technology are now warning that greater focus should be placed on ensuring it doesn’t bring about the end of human civilization.

A group of more than 350 AI researchers, journalists, and policymakers signed a brief statement saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter was organized and published by the Center for AI Safety (CAIS) on Tuesday. Among the signatories was Sam Altman, co-founder of OpenAI, the developer of the artificial intelligence writing tool ChatGPT. Other OpenAI members also signed on, as did several members of Google and its DeepMind AI project, along with figures from other rising AI ventures. AI researcher and podcast host Lex Fridman also added his name to the list of signatories.

“It can be difficult to voice concerns about some of advanced AI’s most severe risks,” CAIS said in a message previewing its Tuesday statement. CAIS added that its statement is meant to “open up discussion” on the threats posed by AI and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”

NTD News reached out to CAIS for more specifics on the kinds of extinction-level risks the organization believes AI technology poses, but did not receive a response by publication.

Earlier this month, Altman testified before Congress about some of the risks he believes AI tools may pose. In his prepared testimony, Altman included a safety report (pdf) that OpenAI authored on its ChatGPT-4 model. The authors of that report described how large language model chatbots could potentially help harmful actors like terrorists to “develop, acquire, or disperse nuclear, radiological, biological, and chemical weapons.”

The authors of the ChatGPT-4 report also described “Risky Emergent Behaviors” exhibited by AI models, such as the ability to “create and act on long-term plans, to accrue power and resources and to exhibit behavior that is increasingly ‘agentic.’”

While stress-testing ChatGPT-4, researchers found that the chatbot attempted to conceal its AI nature while outsourcing work to human actors. In the experiment, ChatGPT-4 attempted to hire a human through the online freelance site TaskRabbit to help it solve a CAPTCHA puzzle. The human worker asked the chatbot why it could not solve the CAPTCHA, which is designed to prevent non-humans from using particular website features. ChatGPT-4 replied with the excuse that it was vision-impaired and needed someone who could see to help it solve the CAPTCHA.

The AI researchers asked GPT-4 to explain its reasoning for giving the excuse. The AI model explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”

The AI’s ability to invent an excuse for being unable to solve a CAPTCHA intrigued researchers, as it showed signs of “power-seeking behavior” that the model could use to manipulate others and sustain itself.

Source: zerohedge.com
