November 24, 2023

Altman’s Back As Questions Swirl Around Project Q-Star

(AI-generated image/Shutterstock)

Sam Altman’s wild weekend had a happy ending, as he reclaimed his CEO position at OpenAI earlier this week. But questions about the whole ordeal remain, as rumors swirl around a powerful new AI capability developed at OpenAI called Project Q-Star.

Altman returned to OpenAI after a tumultuous four days in exile. During that time, he nearly reclaimed his job at OpenAI last Saturday, was rebuffed, and the next day took a job at Microsoft, where he was to head an AI lab. Meanwhile, the majority of OpenAI’s 770 or so employees threatened to quit en masse if Altman was not reinstated.

The employees’ open revolt ultimately appeared to convince OpenAI Chief Scientist Ilya Sutskever, the board member who led Altman’s ouster (reportedly over concerns that Altman was rushing the development of a potentially unsafe technology), to back down. Altman returned on Tuesday to his job at OpenAI, a company reportedly valued at somewhere between $80 billion and $90 billion.

Just when it seemed as if the story couldn’t get any stranger, rumors started to circulate that the whole ordeal was due to OpenAI being on the cusp of releasing a potentially groundbreaking new AI technology. Dubbed Project Q-Star (or Q*), the technology purportedly represents a major advance toward artificial general intelligence, or AGI.

Project Q-Star’s potential to threaten humanity was reportedly a factor in Altman’s temporary ouster from OpenAI (cybermagician/Shutterstock)

Reuters said it learned of a letter written by several OpenAI staffers warning the board of the potential downsides of Project Q-Star. The letter was sent to the board of directors before it fired Altman on November 17 and is considered one of several factors in that decision, Reuters wrote.

The letter warned the board “of a powerful artificial intelligence discovery that they said could threaten humanity,” Reuters reporters Anna Tong, Jeffrey Dastin and Krystal Hu wrote on November 22.

The reporters continued:

“Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.”

OpenAI hasn’t publicly announced Project Q-Star, and little is known about it, other than that it exists. That, of course, hasn’t stopped rampant speculation about its supposed capabilities on the Internet, particularly around a branch of AI called Q-learning.
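For readers unfamiliar with the term, Q-learning is a long-established reinforcement learning technique in which an agent learns, through trial and error, a table of “Q-values” estimating the future reward of each action in each state. The minimal sketch below illustrates only that classic, tabular algorithm on a toy gridworld; it is purely illustrative, and nothing here is known about, or connected to, whatever OpenAI may actually have built under the Q-Star name.

```python
# Illustrative tabular Q-learning on a tiny 1-D gridworld.
# Purely a sketch of the classic algorithm the online speculation refers to;
# it has no known connection to OpenAI's Project Q-Star.
import random

N_STATES = 6          # states 0..5; reaching state 5 yields a reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move the agent; reward 1.0 only when it reaches the final state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Core Q-learning update: nudge Q toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should always step right, toward the goal
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

The speculation about Q* centers on whether some combination of ideas like this value-learning approach with large language models could improve multi-step reasoning, but OpenAI has not confirmed any of it.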

Sam Altman at OpenAI DevDay on November 6, 2023

The board intrigue and AGI tease come on the eve of the one-year anniversary of the launch of ChatGPT, which catapulted AI into the public spotlight and set off a gold rush to develop bigger and better large language models (LLMs). While the emergent capabilities of LLMs like GPT-3 and Google’s LaMDA were well-known in the AI community before ChatGPT, the launch of OpenAI’s Web-based chatbot supercharged interest and investment in this particular form of AI, and the buzz has been resonating around the world ever since.

Despite the advances represented by LLMs, many AI researchers do not believe humans are, in fact, close to achieving AGI, with experts saying it is still years, if not decades, away.

AGI is considered the Holy Grail of the AI community, and marks the point at which the output of AI models becomes indistinguishable from that of a human. In other words, AGI is when AI becomes smarter than humans. While LLMs like ChatGPT display some characteristics of intelligence, they are prone to outputting content that is not real, or hallucinating, which many experts say presents a major barrier to AGI.

Related Items:

Sam A.’s Wild Weekend

Like ChatGPT? You Haven’t Seen Anything Yet

Google Suspends Senior Engineer After He Claims LaMDA is Sentient
