That Microsoft deal isn’t exclusive, video is coming, and more from OpenAI CEO Sam Altman
Connie Loizos
OpenAI co-founder and CEO Sam Altman sat down for a wide-ranging interview with this editor late last week, answering questions about some of his most ambitious personal investments, as well as about the future of OpenAI.
There was much to discuss. The now eight-year-old outfit has dominated the national conversation in the two months since it released ChatGPT, a chatbot that answers questions like a person. OpenAI’s products haven’t just astonished users; the company is reportedly in talks to oversee the sale of existing shares to new investors at a $29 billion valuation despite its comparatively nominal revenue.
Altman declined to talk about the company’s current business dealings, firing a bit of a warning shot when asked a related question during our sit-down.
He did reveal a bit about the company’s plans going forward, however. For one thing, in addition to ChatGPT and the outfit’s popular digital art generator, DALL-E, Altman confirmed that a video model is also coming, though he said that he “wouldn’t want to make a competent prediction about when,” adding that “it could be pretty soon; it’s a legitimate research project. It could take a while.”
Altman made clear that OpenAI’s evolving partnership with Microsoft — which first invested in OpenAI in 2019 and earlier today confirmed it plans to incorporate AI tools like ChatGPT into all of its products — is not an exclusive pact.
Further, Altman confirmed that OpenAI can build its own software products and services, in addition to licensing its technology to other companies. That’s notable to industry watchers who’ve wondered whether OpenAI might one day compete directly with Google via its own search engine. (Asked about this scenario, Altman said: “Whenever someone talks about a technology being the end of some other giant company, it’s usually wrong. People forget they get to make a counter move here, and they’re pretty smart, pretty competent.”)
As for when OpenAI plans to release the fourth version of GPT, the sophisticated language model on which ChatGPT is based, Altman would only say that the hotly anticipated product will “come out at some point when we are confident that we can [release] it safely and responsibly.” He also tried to temper expectations regarding GPT-4, saying that “we don’t have an actual AGI,” meaning artificial general intelligence, or a technology with its own emergent intelligence, versus OpenAI’s current deep learning models that solve problems and identify patterns through trial and error.
“I think [AGI] is sort of what is expected of us” and GPT-4 is “going to disappoint” people with that expectation, he said.
In the meantime, asked when he expects to see artificial general intelligence, Altman posited that it’s closer than one might imagine but also that the shift to “AGI” will not be as abrupt as some expect. “The closer we get [to AGI], the harder time I have answering because I think that it’s going to be much blurrier and much more of a gradual transition than people think,” he said.
Naturally, before we wrapped things up, we spent time talking about safety, including whether society has enough guardrails in place for the technology that OpenAI has already released into the world. Plenty of critics believe we do not, including worried educators who are increasingly blocking access to ChatGPT owing to fears that students will use it to cheat. (Google, very notably, has reportedly been reluctant to release its own AI chatbot, LaMDA, over concerns about its “reputational risk.”)
Altman said here that OpenAI does have “an internal process where we kind of try to break things and study impacts. We use external auditors. We have external red teamers. We work with other labs and have safety organizations look at stuff.”
At the same time, he suggested, the tech is coming from OpenAI and elsewhere, and people need to start figuring out how to live with it. “There are societal changes that ChatGPT is going to cause or is causing. A big one going on now is about its impact on education and academic integrity, all of that.” Still, he argued, “starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just put out what the whole industry will have in a few years with no time for society to update.”
In fact, educators — and perhaps parents, too — should understand there’s no putting the genie back in the bottle. While Altman said that OpenAI and other AI outfits “will experiment” with watermarking technologies and other verification techniques to help assess whether students are trying to pass off AI-generated copy as their own, he also hinted that focusing too much on this particular scenario is futile.
“There may be ways we can help teachers be a bit more likely to detect output of a GPT-like system, but honestly, a determined person is going to get around them, and I don’t think it’ll be something society can or should rely on long term.”
It won’t be the first time that people have successfully adjusted to major shifts, he added. Observing that calculators “changed what we test for in math classes” and Google rendered the need to memorize facts far less important, Altman said that deep learning models represent “a more extreme version” of both developments. But he argued the “benefits are more extreme as well. We hear from teachers who are understandably very nervous about the impact of this on homework. We also hear a lot from teachers who are like, ‘Wow, this is an unbelievable personal tutor for each kid.'”
For the full conversation about OpenAI and Altman’s evolving views on the commodification of AI, regulations, and why AI is going in “exactly the opposite direction” that many imagined it would five to seven years ago, it’s worth checking out the clip below.
You’ll also hear Altman address best- and worst-case scenarios when it comes to the promise and perils of AI. The short version? “The good case is just so unbelievably good that you sound like a really crazy person to start talking about it,” he said. “And the bad case — and I think this is important to say — is, like, lights out for all of us.”