No, the ChatGPT CEO’s Fear of What He’s Creating Is Not Funny

“People should be happy” that OpenAI is “a little scared” of the potential of artificial intelligence.

Apr 6, 2023


“I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

Sam Altman used these words to describe his prepping habits around a campfire at a Y Combinator party in 2016.

“I am preparing for survival,” Altman said, citing two scenarios that could end civilization: a lab-modified, super-contagious virus released on the world’s population and an attack on humanity by artificial intelligence.

Several years have passed since that party, and while some would swear the first scenario has already come to pass, an attack by an artificial superintelligence still seems far off.

Yet artificial intelligence continues to cause concern for Altman, who has admitted that he is “a little bit scared” of his creation because of the tangible consequences the technology could have in the near future.

Following the mockery this nervousness has garnered in the past few weeks, the OpenAI CEO reiterated that people should not make fun of him but rather “be happy” about it.

“I think it’s weird when people think it’s like a big dunk that I say, I’m a little bit afraid,” Altman told podcaster Lex Fridman. “And I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”

The nature of the concern

“The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we’re prepared for,” Sam Altman said. “And that doesn’t require superintelligence.”

Months after ChatGPT’s public release, anyone who has used it needs little imagination to see how the technology could harm society at varying levels of severity.

Automation-driven job loss, privacy violations, deepfakes, socioeconomic inequality, and weapons automation are just a few of the potential consequences.

Even in its current state, artificial intelligence has the potential to erode many cornerstone principles of democracy and to further widen the economic gap between social classes.

Economic repercussions

Altman’s main concerns relate to the magnitude of the economic transition and the number of jobs that artificial intelligence could replace.

An economic transition is already taking shape and will be increasingly evident as these systems become more and more powerful.

Altman acknowledges that many people may lose their jobs but argues that better jobs may emerge in their place.

This metamorphosis, a transition on an unprecedented historical scale, will not be painless.

“People need time to update, to react, to get used to this technology,” Altman said.

Looking at what is happening right now, ChatGPT has already marked out a new way of working for many developers, serving as a “copilot” for programmers. Any profession may soon have an equivalent tool.

“We can have that for every profession, and we can have a much higher quality of life, like standard of living,” Altman said. “But we can also have new things we can’t even imagine today — so that’s the promise.”

Authoritarian governments

“We do worry a lot about authoritarian governments developing this,” Altman said.

Well before the war in Ukraine, Kremlin leader Vladimir Putin said that whoever leads in artificial intelligence will become “the ruler of the world.”

How authoritarian governments might use these tools is worrying both domestically and internationally. ChatGPT has already proven it can become a disinformation superspreader for Chinese and Russian propaganda efforts.

Generative AI significantly reduces the cost and time of producing false information on various topics, from science to politics, from economics to history.

Whether in domestic or foreign policy, generative AI is the ideal tool for authoritarian countries to carry out mass manipulation, consolidate power, and curtail personal freedoms.


Competition and the associated risks

Altman is not concerned about the technologies available to the public to date, but rather about what is simmering beneath the surface. “While they’re not very scary, I think we’re potentially not that far off from those being potentially scary.”

Despite this apprehension, Sam Altman’s name does not appear among the more than 1,000 technology leaders and researchers who signed an open letter calling for a pause on the development of AI models more powerful than GPT-4.

The letter, a subject of discord in recent days, calls on artificial intelligence laboratories and experts to halt AI development for six months “to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter states that advanced AI systems, those more powerful than GPT-4, should be developed only once there is a certain degree of confidence that “their effects will be positive” and “their risks will be manageable.”

Earlier this year, Demis Hassabis, founder of the renowned artificial intelligence lab DeepMind, told TIME that “when it comes to very powerful technologies, and obviously AI is going to be one of the most powerful ever, we have to be careful.”

Antithetical to this sentiment was Microsoft’s statement at the launch event for its AI-powered Bing. “The race starts today, and we’re going to move and move fast,” Satya Nadella, CEO of Microsoft, said.

“Race” and “rush” have been recurring words in news outlets in recent months, and their connotation is anything but positive.

In a multi-trillion-dollar industry, Microsoft is chasing a technological edge it has lacked for years, Google is accelerating to make up for lost ground, and Meta, for its part, does not want to sit on the sidelines and watch. In this scenario, Chinese companies such as Baidu, which held a launch event for its chatbot Ernie, are playing their cards.

Even before the release of ChatGPT, Demis Hassabis urged calm. “It’s important NOT to move fast and break things for a technology as important as AI,” he wrote on Twitter in September.

Blind, unregulated competition to produce ever more advanced AI, with no external or self-imposed limits, carries a significant risk of escalating into something unwanted and harmful to society.

The Silicon Valley philosophy of throwing ideas against the wall and seeing what sticks is not a path to developing a technology that experts call potentially “more dangerous than the atomic bomb.”

“Not everybody is thinking about those things,” Hassabis said. “It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”

A promise or a conviction

Sam Altman repeatedly seems to be laying the groundwork for future justifications should something accidentally go wrong.

“I wish that all generations would treat previous generations with indulgence,” he wrote. “Humanity is deeply imperfect. Our grandparents did horrible things; our grandchildren will understand that we did horrible things we don’t yet understand.”

“The people who came before us are a complete package of good and bad,” he continued, “and collectively they pushed the world forward; it’s important to view the moral progress of society as an ongoing joint project we are all responsible for. Now it’s our turn. I hope the people of the future will view us the same way.”

If the ultimate goal is to develop an AGI, the clear priority is to agree on how to proceed with its development from here on and what preventive measures to take.

So far, contrary to industry fears about releasing overly advanced tools to an unprepared society, building in public has been OpenAI’s hallmark.

The startup sees it as necessary to build in public to bring out all the use cases, identify all the negatives and positives, and give people, the economy, and institutions time to react and readjust.

In Altman’s view, minimizing the downsides while letting society reap the big upsides requires continuous deployment in the real world, even if that means making a few mistakes along the way.

“This idea that we have, that we have an obligation and society will be better off for us to build in public, even if it means making some mistakes along the way, I think that’s really important,” Altman said.

This perspective can make sense, but only if all parties agree on what counts as a “mistake” and are aware of the implications.

Subscribe for more in-depth insights into future trends and nascent waves of innovation.

Eugenio De Lucchi