OpenAI working on ‘next level’ of creating artificial intelligence ‘smarter than humans’ and launches ‘safety committee’

OPENAI has created a Safety and Security Committee as it works to make its artificial intelligence platforms more advanced than ever, amid growing safety concerns.

The move to add a safety committee comes as OpenAI recently announced it is working to create AGI which will be “the next level” of AI systems. 

ChatGPT is a chatbot run by OpenAI that has gained major traction since it became available to the public in 2022. Credit: Getty

OpenAI is a leading contributor to AI technology. Credit: Getty

It is unclear when AGI might be released; OpenAI says it is still only on the path toward creating it.

“If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility,” OpenAI said in a release in February.  

“AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.”

However, OpenAI highlighted how if AGI is successfully created there could possibly be the issue of “serious misuse, drastic accidents, and societal disruption.”

The advancements that would come with AGI would be so great that OpenAI does not want to halt its creation out of fear, but rather to pursue it responsibly.

“Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right,” OpenAI added. 

OpenAI has also recently begun training a new frontier model to succeed GPT-4o, the system that currently powers ChatGPT.

SAFETY COMMITTEE 

The AI research and deployment company named four leaders to the committee: board chair Bret Taylor, directors Adam D'Angelo and Nicole Seligman, and CEO Sam Altman, OpenAI said in a release on Tuesday.

“This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations,” the release said. 

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.”


The AI giant wants to open a discussion about current concerns with AI so the committee can weigh them and respond.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the release continued.

“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days.”

Once the committee has gathered its findings, it will present recommendations for maintaining the safety of the platform.

What is AGI?

  • Definition: AGI refers to a type of artificial intelligence that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a human level or beyond.
  • Human-like Cognitive Abilities: Unlike narrow AI, which is designed for specific tasks, AGI possesses general cognitive abilities similar to those of humans, such as reasoning, problem-solving, and abstract thinking.
  • Adaptability: AGI can adapt to new and unforeseen situations, making decisions and solving problems without needing to be specifically programmed for each task.
  • Learning and Understanding: AGI can learn from experience and improve its performance over time, understanding and processing complex information similarly to a human brain.
  • Cross-Domain Functionality: AGI is capable of performing a wide variety of tasks across different domains, from scientific research to artistic creation, without needing task-specific adjustments.
  • Potential Impact: The development of AGI has the potential to transform industries, improve decision-making, and address complex global challenges by providing intelligent solutions that are beyond current capabilities.
  • Ethical and Safety Considerations: The creation and deployment of AGI raise significant ethical and safety concerns, necessitating careful regulation and governance to ensure beneficial outcomes and prevent misuse.
  • Current Status: As of now, AGI remains a theoretical concept and a goal for future AI research, with existing AI systems being specialized (narrow AI) rather than general.

“At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board,” the release said. 

“Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

AI ADVANCEMENTS

OpenAI says it is dedicated to consistently improving AI to empower humanity.

In the announcement about AGI, OpenAI went over key points as to why it believes it is important to keep working at AI in terms of upgrades and safety. 

The stated goal is to help humanity thrive and flourish with the help of technology.

OpenAI said it doesn’t expect a perfect utopia but that it wants to maximize positive outcomes and minimize negative ones, making AGI an enhancer of human potential.


It aims for the benefits, access, and governance of AGI to be widely and fairly distributed.

The company insisted it is working to successfully navigate significant risks.

OpenAI's full statement on the Safety and Security Committee

Today, the OpenAI Board formed a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.

OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.

A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.

OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee.

Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials, Rob Joyce, who advises OpenAI on security, and John Carlin.

Credit - OpenAI