In-depth|What does OpenAI really want?

【Editor's Note】At the end of last year, OpenAI "hastily" launched its phenomenal product, ChatGPT, unexpectedly triggering a technological explosion unprecedented since the Internet entered public life. Suddenly, the Turing test seemed to be history, search engines looked to be on the verge of "extinction," academic papers could no longer be trusted, no job was safe, and no scientific problem was set in stone.

OpenAI, Sam Altman, and ChatGPT instantly became some of the hottest search terms of this era, and almost everyone is obsessed with them. So, do you know the growth story of Sam Altman and OpenAI?

Recently, the well-known technology journalist Steven Levy published a long feature in the American digital outlet WIRED, centered on Sam Altman and offering an in-depth look at OpenAI's history and corporate vision.

The core content is as follows:

As OpenAI's CEO, Sam Altman is a dreamer/doer type, like a younger version of Elon Musk, and the first person people consult about how AI will usher in its golden age, or render humans irrelevant, or worse.

Sam Altman and OpenAI's mission is to build safe AGI, and OpenAI's employees are fanatical about this goal. OpenAI's leaders vow to build computers that are smart enough and safe enough to bring humanity into an era of unimaginable abundance.

Sam Altman and his team are now under pressure to deliver revolutions in every product cycle, satisfying the commercial needs of investors while staying ahead of the fierce competition. At the same time, they are also shouldering the mission of "quasi-saviors" to enhance humanity rather than destroy it.

OpenAI's early funding came from Elon Musk, but Altman and other members of OpenAI's brain trust made it clear that they had no interest in becoming part of Elon Musk's universe, and Musk cut off contact. Later, OpenAI received support from Microsoft and gradually became a for-profit organization, which dismayed some employees and led to the departure of several executives, who said that OpenAI had become too commercial and had fallen victim to mission drift.

Sam Altman agrees in principle with the idea of an international body to oversee AI, though he does think some of the proposed rules pose unfair barriers. Still, he and other OpenAI leaders signed their names to a statement that reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Figure | From left to right: OpenAI Chief Scientist Ilya Sutskever, OpenAI CEO Sam Altman, OpenAI CTO Mira Murati and OpenAI President Greg Brockman (Source: WIRED)

Academic Headlines has translated the piece without changing the main idea of the original text. The content is as follows:

As the star and his entourage tumbled into a waiting Mercedes van, an energy bordering on Beatlemania filled the air. They had just emerged from one event and were heading to another, and then another, where a frenzy of people awaited them. They zipped through the streets of London, from Holborn to Bloomsbury, as if on a journey through the past and present of civilization. The history-making force riding in this van had captured the world's attention. Everyone, from the students waiting in line to the Prime Minister, wanted something from it.

Inside the luxury van, devouring a salad, is Sam Altman, the 38-year-old entrepreneur and co-founder of OpenAI, along with a PR person, a security expert, and me. Altman, wearing a blue suit and a collarless pink dress shirt, is being driven around London, looking a little wistful, partway through a month-long global jaunt that will take him to 25 cities on six continents. With no time to sit down for lunch, he works through his vegetables while reflecting on a meeting the night before with French President Emmanuel Macron, who is keenly interested in AI.

The same is true for the Prime Minister of Poland. The same is true for the Prime Minister of Spain.

Riding in the car with Altman, I can almost hear the ringing, dissonant opening chord of A Hard Day's Night—the introduction to the future. When OpenAI launched its monster product, ChatGPT, last November, it set off a technological explosion unprecedented since the internet entered our lives. Suddenly, the Turing test was history, search engines were endangered, and no university paper could be trusted. No job was safe. No scientific problem was set in stone.

Altman wasn't involved in the research, neural network training, or interface coding for ChatGPT or its successor, GPT-4. But as CEO — a dreamer/doer type who's like a younger version of co-founder Elon Musk, without the baggage — his photo has appeared in news article after news article as a visual symbol of humanity's new challenges. At least, the ones that aren't headlined by eye-popping images generated by OpenAI's visual AI product, Dall-E. He's the prophet of the moment, the first person people consult about how AI will usher in its golden age, or render humans irrelevant, or worse.

On a sunny day in May, Altman's van whisked him to four events. The first was a private "round table" with people from government, academia, and industry. Organized at the last minute, it was held on the second floor of a Somers Town coffee shop. Under a piercing portrait of the brewer Charles Wells, Altman fielded the same questions he gets from nearly every audience. Will AI kill us? Can it be regulated? He answered them in detail, glancing at his phone from time to time. After that, he held a fireside chat with 600 members of the Oxford Guild at the plush Londoner Hotel. Then he headed to a basement conference room to answer more technical questions from about 100 entrepreneurs and engineers. Now, he was almost late for his afternoon onstage talk at University College London. He and his team parked in a loading dock and were led through a series of winding corridors. As they walked, the host hurriedly briefed Altman on the questions he would ask. When Altman finally appeared on the stage, the academics, geeks, and journalists in the audience went wild.

Altman is not a publicity enthusiast by nature. I once spoke to him immediately after a lengthy profile on him in The New Yorker. “There’s been so much written about me,” he said. But at University College, after the formal event, he walked into the crowd that was surging toward the stage. His assistants tried to get between him and the crowd, but he shook them off. He answered question after question, each time staring intently into his interlocutor’s face, as if he were hearing the question for the first time. Everyone wanted to take a picture. After 20 minutes, he finally let his team pull him out. Then he went to meet with British Prime Minister Rishi Sunak.

Maybe one day, when robots write our history, they'll point to Altman's world tour as a milestone in the year when everyone started doing their own thinking at once. Or maybe whoever writes the history of this moment will see it as the story of a quietly convincing CEO with a paradigm-breaking technology trying to inject a very peculiar worldview into the global intellectual landscape—from an unmarked four-story headquarters in San Francisco's Mission District out to the entire world.

To Altman and company, ChatGPT and GPT-4 are just stepping stones to a simple but monumental mission that these technologists might as well have burned into their flesh. That mission is to build artificial general intelligence (AGI), a concept that has so far been grounded more in science fiction than in science, and to make it safe for humans. The people at OpenAI are fanatical in their pursuit of this goal. (Though, as any conversation in the office cafe will confirm, "building AGI" seems to excite the researchers more than "making it safe.") These are people who don't shy away from the term "superintelligence." They believe that AI is on a trajectory that will surpass anything biology has ever achieved. The company's financial documents even provide for a contingency plan in case AI destroys our entire economic system.

It's unfair to call OpenAI a cult, but when I asked several of the company's top executives whether someone could work there without believing that AGI is really coming, and that its arrival will mark one of the greatest moments in human history, most of them didn't think so. Why would someone work there if they didn't believe it? Their assumption is that the employees, now about 500 people, have self-selected into being believers. At least, as Altman puts it, once you're hired, it seems inevitable that you'll be drawn into the spell.

Meanwhile, OpenAI is no longer what it once was. It was founded as a purely nonprofit research organization, but now, technically, most of its employees work for a for-profit entity said to be valued at nearly $30 billion. Altman and his team are now under pressure to deliver revolutions with every product cycle, satisfying the commercial demands of investors while staying ahead of the fierce competition. All the while, they have a quasi-messianic mission to enhance humanity rather than destroy it.

The pressure is crushing. The Beatles unleashed a huge wave of change, but it lasted only so long: Six years after striking that memorable chord, they were no longer even a band. The maelstrom unleashed by OpenAI will almost certainly be bigger. But OpenAI’s leaders vow to stay the course. All they have to do, they say, is build computers smart enough and safe enough to end history and usher in an era of unimaginable abundance.

Altman grew up in the late 1980s and early 1990s as a nerd obsessed with science fiction and Star Wars. In the worlds constructed by early science fiction writers, humans often lived with, or competed against, superintelligent AI systems. The idea of computers matching or exceeding human capabilities thrilled Altman, who had been coding since his fingers could barely span a keyboard. When he was 8 years old, his parents bought him a Macintosh LC II. One night he was up late playing with it when a thought popped into his mind: "One day this computer will learn to think." When he arrived at Stanford University as an undergraduate in 2003, he hoped to help make that happen and took a course in AI. But "it just didn't work," he later said. At the time, the field was still mired in an innovation slump known as the "AI winter." Altman dropped out and entered the startup world; his company, Loopt, was in the first small batch at Y Combinator, which later became the world's most famous incubator.

In February 2014, YC founder Paul Graham chose Altman, then 28, to succeed him. "He's one of the smartest people I know, and he probably understands startups better than anyone I know, including myself," Graham wrote in the announcement. But to Altman, YC was more than just a launchpad for companies. "We're not about startups," he told me shortly after taking the helm. "We're about innovation, because we believe that only innovation can create a better future for everyone." To Altman, the point of cashing out from all those unicorns was not to fill the wallets of his partners but to fund species-level change. He set up a research division in the hopes of funding ambitious projects to solve the world's biggest problems. But in his view, AI was the innovation that would disrupt everything: a superintelligence that could solve human problems better than humans can.

Fortunately, when Altman took on his new job, AI’s winter was turning into a fruitful spring. Computers were now performing amazing feats through deep learning and neural networks, such as labeling photos, translating text, and optimizing complex advertising networks. These advances convinced him that AGI was truly within reach for the first time. However, leaving it in the hands of large companies worried him. He believed that these companies would be too focused on their own products to seize the opportunity to develop AGI as quickly as possible. And, if they did create AGI, they might be reckless and release it to the public without taking the necessary precautions.

At the time, Altman had been considering a run for governor of California. But he realized that he was perfectly capable of doing something bigger—leading a company that would transform humanity itself. “AGI will only be built once,” he told me in 2021. “And there aren’t a lot of people who can run OpenAI well. I’ve been lucky that a series of experiences in my life have really prepared me for this.”

Altman began talking to people who might help him start a new kind of AI company, a nonprofit that would steer the field toward responsible AI. One like-minded person was Elon Musk, CEO of Tesla and SpaceX. Musk later told CNBC that he had become concerned about AI's impact after some marathon discussions with Google co-founder Larry Page. Musk said he was frustrated that Page paid little attention to safety and seemed to regard the rights of robots as equal to those of humans. When Musk voiced his concerns, Page accused him of being a "speciesist." Musk also understood that Google then employed most of the world's AI talent. He was willing to put up some money and effort for "Team Human."

Within months, Altman had raised money from Musk (who pledged $100 million and his time) and Reid Hoffman (who donated $10 million). Other backers included Peter Thiel, Jessica Livingston, Amazon Web Services, and YC Research. Altman began recruiting team members in secret. He limited his search to AGI believers, a restriction that narrowed his selection but one he saw as crucial. “Back in 2015, when we were recruiting, it was almost considered a career killer for AI researchers to say you were serious about AGI,” he says. “But I wanted people who were serious about it.”

Figure | Greg Brockman (Source: WIRED)

One of them was Greg Brockman, Stripe's CTO, who agreed to become OpenAI's CTO. Another key co-founder was Andrej Karpathy, who had worked at Google Brain, the search giant's cutting-edge AI research organization. But perhaps Altman's most coveted target was an engineer named Ilya Sutskever.

Sutskever was a protégé of Geoffrey Hinton, who is considered the godfather of modern AI for his work in deep learning and neural networks. Hinton remains close to Sutskever and marvels at his protégé’s ingenuity. Early in Sutskever’s tenure at the lab, Hinton gave him a complex project. Tired of writing code to do the necessary calculations, Sutskever told Hinton it would be easier if he wrote a custom programming language for the task. Hinton, a little annoyed, tried to warn his student against doing something he thought would distract him for a month. Then, Sutskever confessed, “I did it this morning.”

Figure | Ilya Sutskever (Source: WIRED)

Sutskever became an AI superstar, co-authoring a breakthrough paper showing how AI could learn to recognize images by exposing it to vast amounts of data, and eventually becoming a core scientist on the Google Brain team.

In mid-2015, Altman sent Sutskever a cold email inviting him to dinner with Musk, Brockman, and others at the luxurious Rosewood Hotel on Palo Alto's Sand Hill Road. Sutskever didn't realize until later that he was the guest of honor. "It was a conversation about the future of AI and AGI," he said. More specifically, they discussed "whether Google and DeepMind were so far ahead that it was impossible to catch up, or whether it was still possible, as Musk said, to create a lab to check and balance them." Although no one at the dinner explicitly tried to recruit Sutskever, the conversation hooked him.

Soon after, Sutskever wrote Altman an email offering to lead the project, but the message got stuck in his drafts folder. Altman kept at him, and after months of fending off Google's counteroffers, Sutskever signed on. He quickly became the company's soul and the driving force behind its research.

Sutskever worked with Altman and Musk to recruit people for the project, culminating in a retreat in Napa Valley where several future OpenAI researchers encouraged each other. Of course, some resisted the temptation. John Carmack, the legendary coder of Doom, Quake, and countless other games, turned down Altman’s invitation.

OpenAI officially launched in December 2015. When I interviewed Musk and Altman at the time, they described the project to me as an effort to make AI safe and accessible by sharing it with the world. In other words, open source. OpenAI wouldn't patent its work, they told me; everyone could use its breakthroughs. Wouldn't that empower some future Dr. Evil? I wondered. Musk said it was a good question. But Altman had an answer: humans are generally good, and because OpenAI would give powerful tools to the vast majority of people, bad actors would be outgunned. If Dr. Evil used those tools to create something unstoppable, he admitted, "then we'd be in a really bad situation." But both Musk and Altman believed that AI would be safer in the hands of a research institution untainted by the profit motive.

Altman cautions me not to expect quick results. “This is going to be like a research lab for a long time,” he says.

There was another reason to temper expectations. Google and other companies had been developing and applying AI for years. While OpenAI had $1 billion in committed funding (mostly from Musk), an ace team of researchers and engineers, and a lofty mission, it had no idea how to get there. Altman remembers a moment when the small team gathered in Brockman's apartment, before they even had an office. "I was like, what are we going to do?"

A little more than a year after OpenAI's founding, I met Brockman for lunch in San Francisco. For the CTO of a company with "Open" in its name, he was remarkably tight-lipped on details. He did affirm that the nonprofit could draw on its initial billion-dollar commitment over time. Salaries for its 25 employees, who were paid well below market value, made up the bulk of OpenAI's expenses. "Our goal, and what we're really pushing for, is to enable systems to do things that humans couldn't do before," he said. But for now, what that looked like was a group of researchers publishing papers. After the interview, I walked with him to the company's new offices in the Mission District, but he let me go no farther than the front hall. He did duck into a closet to get me a T-shirt.

Had I gone in and asked around, I might have learned just how hard a time OpenAI was having. "Nothing worked," Brockman admits now. Its researchers were throwing algorithmic spaghetti at the ceiling to see what stuck. They homed in on systems that solved video games and spent considerable effort on robotics. "We knew what we wanted to do. We knew why we wanted to do it. But we didn't know how," Altman says.

But they believed. Their optimism was buoyed by the steady improvement of artificial neural networks trained with a technique called deep learning. "The general idea is, don't bet against deep learning," Sutskever said. Chasing AGI, he said, "wasn't completely crazy. It was only moderately crazy."

OpenAI’s rise really began when it hired a relatively unknown researcher, Alec Radford, who left a small Boston AI company he co-founded in his dorm room to join OpenAI in 2016. After accepting OpenAI’s invitation, he told his high school alumni magazine that taking on the new position was “kind of like joining a graduate program” — an open, low-pressure habitat for studying AI.

His actual role turned out to be more like Larry Page inventing PageRank.

Radford, who is press-shy and has never been interviewed about his work, answered my questions about his early days at OpenAI in a long email. His deepest interest was in getting neural networks to hold clear conversations with humans. This was a departure from the traditional scripted approach to chatbots, used in everything from the original ELIZA to the popular Siri and Alexa—all of them terrible. "Our goal was to see if there was any task, any environment, any domain, anything at all that language models could be useful for," he wrote. At the time, he explained, language models were seen as novelty toys that could only occasionally generate a sensible sentence, and then only if you really squinted. His first experiment involved scanning 2 billion Reddit comments to train a language model. Like many of OpenAI's early experiments, it failed. That was OK. The 23-year-old had permission to keep going, to fail again. "We were just like, Alec is great, let him do his thing," Brockman said.

His next big experiment was shaped by the limits of OpenAI's computing power, which led him to try a smaller dataset focused on a single domain: Amazon product reviews. A researcher had collected about 100 million of them. Radford trained a language model to do something simple: predict the next character in a user review.

But then the model went further, learning on its own whether a review was positive or negative—and when you prompted it to write a positive or negative review, it would deliver one that praised or panned as asked. (Admittedly, the prose was clumsy: "I like the look of this weapon… A must for men who like chess!") "That was totally unexpected," Radford says. The sentiment of a review—its likes and dislikes—is a complex semantic function, yet part of Radford's system already had a feel for it. Inside OpenAI, that part of the neural network came to be called the "unsupervised sentiment neuron."
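To make the mechanics concrete, here is a minimal sketch of that kind of next-character training in PyTorch. This is not OpenAI's code—Radford's experiment reportedly used a different recurrent variant (a multiplicative LSTM), and the model sizes, corpus, and names below are purely illustrative:

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    """Toy character-level language model: read characters, predict the next one."""
    def __init__(self, vocab_size=256, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits over the next character at every position

model = CharLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

text = "i like the look of this product"      # stand-in for a review corpus
ids = torch.tensor([[ord(c) for c in text]])  # raw bytes as token ids
inputs, targets = ids[:, :-1], ids[:, 1:]     # shift by one: predict the next char

opt.zero_grad()
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, 256), targets.reshape(-1))
loss.backward()
opt.step()
print(float(loss))  # repeated over ~100M reviews, this is the entire training signal
```

The surprise Radford describes is that a network trained on nothing but this next-character objective ended up internally representing sentiment, even though no sentiment labels appear anywhere in the data.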

Sutskever and others encouraged Radford to push his experiments beyond Amazon reviews, to use his insights to train neural networks to converse or answer questions on a broad range of topics.

Then, good fortune struck for OpenAI. In early 2017, a preprint of a research paper co-authored by eight Google researchers appeared without much notice. The paper’s official title was “Attention Is All You Need,” but it became known as the “Transformer paper,” both to reflect the game-changing nature of the idea and in honor of a toy that morphed from a truck into a giant robot. Transformers enabled neural networks to understand and generate language more efficiently. They did this by analyzing chunks of prose in parallel to figure out which elements were worth paying attention to. This greatly optimized the process of generating coherent text in response to a prompt. Eventually, people realized that the same technique could also generate images and even videos. While the paper has since been called the catalyst for the current AI frenzy—think of it as Elvis Presley making the Beatles possible—at the time, Ilya Sutskever was just one of a handful of people who understood how powerful the breakthrough was. “When Ilya saw the Transformer emerge, it was a real aha moment,” Brockman says. “He said, ‘This is what we’ve been waiting for.’ That’s our strategy — work hard at the problem and then have faith that we or someone in the field will figure out the missing ingredient.”
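The core trick the paper introduced can be sketched in a few lines. What follows is a bare-bones scaled dot-product self-attention in NumPy—only an illustration of the "decide which elements are worth paying attention to" step, leaving out the multi-head attention, positional encodings, and feed-forward layers of the full architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings, all positions processed in parallel."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how relevant each token is to every other
    weights = softmax(scores, axis=-1)       # the "attention" over the sequence
    return weights @ V                       # each position: weighted blend of all values

# Toy usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

The parallelism is the point: unlike a recurrent network, which digests a sequence one step at a time, every row here is computed at once—which is what made scaling so attractive.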

Radford began experimenting with the Transformer architecture. “I made more progress in two weeks than I had in the previous two years,” he said. It gradually dawned on him that the key to getting the most out of the new model was to scale it up — to train it on very large datasets. This idea was dubbed “Big Transformer” by Radford’s collaborator Rewon Child.

This approach requires a change in OpenAI’s culture and a focus on what it previously lacked. “To take advantage of the Transformer, you need to scale it up,” said Adam D’Angelo, CEO of Quora, who sits on OpenAI’s board. “You need to run it more like an engineering organization. You can’t have every researcher doing their own thing, training their own model, and making something elegant that can be published. You have to do this more boring, less elegant work.” This is what OpenAI can do, he added, and what others can’t.

Radford and his collaborators called the model they created a "generatively pretrained transformer"—GPT-1 for short. Eventually, this kind of model would come to be generically known as "generative AI." To build it, they drew on a collection of 7,000 unpublished books, many in the romance, fantasy, and adventure genres, and refined it on thousands of passages from Quora Q&As and from middle and high school exams. All told, the model contained 117 million parameters, or variables. It outperformed everything that had come before at understanding language and generating answers. But the most striking result was that after processing such a large amount of data, the model could deliver results beyond its training, lending expertise in entirely new domains. These unplanned capabilities are called "zero-shot." They still puzzle researchers—and are why many in the field are uneasy about these so-called large language models.

Radford remembers one late night at the OpenAI offices. “I just kept saying over and over again: ‘Well, this is cool, but I’m pretty sure it can’t do X.’ And then I’d quickly write an evaluation code, and sure enough, it could do X.”

Each iteration of GPT gets better, in part because each one devours an order of magnitude more data than the previous model. Just a year after creating the first iteration, OpenAI trained GPT-2 with a staggering 1.5 billion parameters on the open internet. Like a toddler mastering language, its responses got better and better, more and more coherent. So much so that OpenAI hesitated over whether to make the program public. Radford worried that it would be used to generate spam. “I remember reading Neal Stephenson’s Anathem in 2008, where the internet was filled with spam generators,” he says. “I thought it was far-fetched at the time, but as I worked with language models and how they’ve improved over the years, it dawned on me that this was a real possibility.”

Indeed, the team at OpenAI began to feel that putting its work where a Dr. Evil could easily access it might not be such a good idea after all. "We thought that open-sourcing GPT-2 could be really dangerous," says Chief Technology Officer Mira Murati, who joined the company in 2018. "We did a lot of work with misinformation experts and did some red-teaming. There was a lot of discussion internally about how much to release." Ultimately, OpenAI withheld the full version for the time being, offering the public a less powerful one. When the company finally shared the full model, the world got along fine—but there was no guarantee that more powerful models would avoid catastrophe.

Figure | Mira Murati (Source: WIRED)

The fact that OpenAI is building products smart enough to be considered dangerous, and is figuring out how to make them safe, is proof that the company’s magic is working. “We’ve figured out the formula for progress, the formula that everyone knows now — the oxygen and hydrogen of deep learning is computing with big neural networks and data,” Sutskever said.

For Altman, it's been a game-changing experience. "If you asked 10-year-old me—who spent a lot of time daydreaming about AI—what the future would look like, I would have predicted, with great confidence, that first we'd have robots doing all the manual labor. Then we'd have systems that could do basic cognitive labor. Long after that, maybe, we'd have systems that could do complex work, like prove mathematical theorems. And last of all, we'd have AI that could create new things, make art, write, and do these things that are deeply embedded in human life. That was a terrible prediction—it's going in exactly the opposite direction."

The world didn't know it yet, but Altman and Musk's research lab had begun its climb, inching plausibly toward the summit of AI. The crazy ideas behind OpenAI suddenly didn't seem so crazy.

In early 2018, OpenAI began to fruitfully focus on large language models. But Elon Musk wasn’t satisfied. He felt that progress wasn’t enough. Or, he felt that now that OpenAI had made progress, it needed leadership to seize the advantage. Or, as he later explained, he felt that safety should be a higher priority. Whatever his problem, he had a solution: give it all to him. He proposed taking a majority stake in the company, adding it to his portfolio of multiple full-time jobs (Tesla, SpaceX) and regulatory obligations (Neuralink and the Boring Company).

Musk believed he had a right to OpenAI. “Without me, it wouldn’t exist,” he later told CNBC. “I came up with the name!” (True.) But Altman and the rest of OpenAI’s brain trust had no interest in being part of the Musk universe. When they made that clear, Musk cut ties and offered an incomplete explanation to the public: He left the board to avoid a conflict with Tesla’s AI work. He said goodbye at an all-hands meeting at the beginning of the year, where he predicted that OpenAI would fail. He also called at least one researcher “an asshole.”

He also took his money with him. With no revenue coming in, it was an existential crisis. "Musk is cutting off his support," a panicked Altman said in a call to Reid Hoffman. "What are we going to do?" Hoffman volunteered to keep the company afloat, covering overhead and salaries.

But that was only a stopgap; OpenAI would have to find money elsewhere. Silicon Valley loves to throw money at people working on trendy technologies—but is far less fond of them when they work at nonprofits. Getting the first billion had already been a huge lift for OpenAI. To train and test new generations of GPTs, and to secure the computing power to deploy them, the company needed another billion, and fast. And that would only be the start.

So, in March 2019, OpenAI came up with a bizarre solution. It would remain a nonprofit, dedicated to its mission. But it would also create a for-profit entity. The actual structure of the arrangement is complicated, but basically the entire company is now in the business of "capped" profit. If the cap is reached—the number isn't public, but if you read the company's charter, it could be in the trillions—everything above it reverts to the nonprofit research lab. The novel plan was an almost quantum approach to corporate structure: the company is both for-profit and nonprofit, depending on your view of space and time. The details are laid out in a diagram full of boxes and arrows, like the ones in the middle of a scientific paper where only a PhD or a dropout genius would dare to tread. When I suggested to Sutskever that this looked like something a yet-to-be-conceived GPT-6 might come up with if you prompted it for a tax dodge, he wasn't keen on my analogy. "This has nothing to do with accounting," he said.
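The arithmetic behind the structure is simpler than the diagram. Here is a hedged sketch—the cap multiple and the numbers are illustrative assumptions (early investors' returns were reportedly capped at around 100x), not figures from OpenAI's charter:

```python
def capped_return(invested, payout, cap_multiple=100):
    """Split a hypothetical payout between an investor and the nonprofit.

    Anything above invested * cap_multiple flows back to the nonprofit.
    cap_multiple is an assumption for illustration only.
    """
    cap = invested * cap_multiple
    return min(payout, cap), max(payout - cap, 0.0)

# A $50M investment capped at 100x ($5B):
print(capped_return(50e6, 1e9))    # (1e9, 0): under the cap, all to the investor
print(capped_return(50e6, 10e12))  # (5e9, 9.995e12): the excess reverts to the nonprofit
```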

But accounting matters. For-profit companies optimize for profit. There's a reason companies like Meta feel pressure from shareholders when they pour billions into research and development. How could this not affect how the company is run? And wasn't avoiding commercialization the very reason Altman made OpenAI a nonprofit in the first place? According to COO Brad Lightcap, the company's leadership believes that the board, as part of the nonprofit controlling entity, will ensure that the drive for revenue and profit doesn't overwhelm the founding idea. "We need to maintain the mission as our reason for existence," he said. "It shouldn't just be in spirit, but reflected in the structure of the company." Board member Adam D'Angelo says he takes this responsibility seriously: "It's my job, and the job of the rest of the board, to make sure OpenAI stays true to its mission."

Lightcap explains that potential investors are warned to be aware of these boundaries. "We have a legal disclaimer that says that as investors, you may lose all your money. We are not here to earn you a return. We are here first to complete a technical task. And, oh, by the way, we really don't know what role money will play in the post-AGI world."

That last sentence is not a joke. OpenAI's plan really does include a reset in case computers reach the final frontier. Somewhere in the restructuring documents is a clause providing that if the company succeeds in creating AGI, all financial arrangements will be reconsidered. After all, from that point on, it will be a whole new world. Humanity will have an alien partner that can do much of what we do, only better. The earlier arrangements might effectively be void.

There is, however, a hitch: at the moment, OpenAI doesn't claim to know what AGI really is. The determination would come from the board, but it's unclear how the board would define it. When I asked Altman, himself a board member, his answer was anything but clear. "It's not a single Turing test, but a number of things we might use," he said. "I'd love to tell you, but I like to keep confidential conversations confidential. I realize that being vague is unsatisfying. But we don't know what it's going to look like at that point."

Still, the financial-arrangements clause isn't just for fun: OpenAI's leaders believe that if the company ever hits its profit cap, its products will probably have reached AGI-level performance. Whatever that is.

"I somewhat regret that we chose to double down on the term AGI," Sutskever said. "In hindsight, it's a confusing term, because it emphasizes generality above all else. GPT-3 is general AI, but we were reluctant to call it AGI because we want human-level competence. But back at the beginning, OpenAI's philosophy was that superintelligence is attainable. It is the endgame, the final purpose of the field of AI."

Such caveats didn't stop some of the smartest venture capitalists from pouring money into OpenAI in its 2019 funding round. The first VC firm to invest was Khosla Ventures, which put in $50 million. By Vinod Khosla's account, that was twice the size of his largest previous initial investment. "If we lose, we lose $50 million," he said. "If we win, we win $5 billion." Other investors reportedly include the elite venture firms Thrive Capital, Andreessen Horowitz, Founders Fund, and Sequoia.

The shift also allowed OpenAI's employees to claim some equity. But not Altman. He says he originally meant to include himself but never got around to it. Later he decided he didn't need a piece of the $30 billion company he co-founded and leads. "Meaningful work is more important to me," he said. "I don't think about it. Honestly, I don't understand why people care so much."

Because... it's strange not to take a stake in the company you co-founded?

"It would be even more strange if I didn't have a lot of money. It seems hard to imagine there would be enough money. But I think I have enough money." Altman joked that he was considering a stake, "so that I would never have to answer this question again."

Billions in venture capital weren't even table stakes for pursuing OpenAI's vision. The magical Big Transformer approach to large language models demands big hardware. Each iteration of the GPT family needs exponentially more computation—GPT-2 had over 1 billion parameters, while GPT-3 would use 175 billion. OpenAI was now like Quint in Jaws after the shark hunter sees the size of the great white. "It turned out we didn't know how big a boat we needed," Altman said.

Obviously, only a handful of companies had the resources OpenAI required. "We quickly zeroed in on Microsoft," Altman said. To the credit of Microsoft CEO Satya Nadella and CTO Kevin Scott, the software giant was able to accept an uncomfortable reality: after spending more than 20 years and billions of dollars building a supposedly cutting-edge AI research division, Microsoft needed an injection of innovation from a tiny company only a few years old. Scott says it wasn't just Microsoft that had fallen behind—"everyone was behind." OpenAI's single-minded pursuit of AGI, he says, let it achieve something like a moon landing that the big companies weren't even aiming for. It also showed that not pursuing generative AI was a failing Microsoft needed to fix. "You obviously need a cutting-edge model," Scott said.

Microsoft initially invested $1 billion in return for computing time on its servers. But as confidence on both sides grew, the transaction size continued to expand. Now, Microsoft has invested $13 billion in OpenAI. "Investing in the frontier is very expensive," Scott said.


Some observers were jolted by OpenAI's one-two punch: creating a for-profit division and striking an exclusive deal with Microsoft. How could a company that had promised to remain patent-free, open source, and completely transparent end up exclusively licensing its technology to the world's largest software company? Elon Musk's remarks were especially biting. "This seems like the opposite of open—OpenAI has essentially been captured by Microsoft," he posted on Twitter. On CNBC he offered an analogy: "Suppose you founded an organization to save the Amazon rainforest, but instead you became a timber company, cut down the forest, and sold the wood."

Musk's jibes could be dismissed as the bitterness of a rejected suitor, but he wasn't alone. "The whole vision of what it has evolved into feels a bit gross," said John Carmack. "OpenAI has gone from a small, open research organization to a secretive product-development company with an unjustified sense of superiority," said another well-known industry insider who asked not to be named.

Even some employees were put off by OpenAI's veer into the for-profit world. In 2021, several key executives, including research director Dario Amodei, left to start their own AI company, Anthropic. They recently told The New York Times that OpenAI had become too commercial and had fallen victim to mission drift.

Another defector from OpenAI was Rewon Child, a major technical contributor to the GPT-2 and GPT-3 projects. He left at the end of 2021 and now works at Inflection AI, the company led by DeepMind co-founder Mustafa Suleyman.

Altman claims not to be troubled by the defections, seeing them as just the way Silicon Valley works. "Some people will want to go do great work somewhere else, and that pushes society forward. It's absolutely consistent with our mission," he said.

Until November of last year, awareness of OpenAI was mostly confined to people following technology and software development. But as the whole world now knows, OpenAI released a consumer product late that month, built on GPT-3.5. For months, the company had been fitting GPT with a dialogue interface. This mattered for what the company calls "truth-seeking": through dialogue, users can coax a model toward more credible and complete answers. ChatGPT, optimized for the masses, let anyone instantly tap a seemingly endless source of knowledge simply by typing a prompt, then continuing the conversation, as if chatting with a human companion who happens to know everything—albeit one with a weakness for fabrication.
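Programmatically, "continuing the conversation" just means resending the accumulated message history. Here is a minimal sketch against the openai Python library's chat endpoint as it existed around the article's publication (the 0.x API); the key, model choice, and prompts are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the Turing test in two sentences."},
]
resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = resp.choices[0].message["content"]
print(answer)

# To "continue the conversation," append the reply plus the next question:
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Has any program passed it?"})
resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(resp.choices[0].message["content"])
```

Note that the statefulness lives entirely on the client side; the model sees the whole transcript anew on each turn.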


Altman explained why OpenAI released ChatGPT while GPT-4 was nearing completion and still undergoing safety work: "With ChatGPT, we could introduce chatting with a much weaker backend and let people gradually adapt. GPT-4 would have been too much to adapt to all at once." By the time the ChatGPT excitement cooled, he reasoned, people might be ready for GPT-4, which can pass the bar exam, plan a course syllabus, and draft a book within seconds. (Publishers of genre fiction were indeed swamped by AI-generated bodice rippers and space operas.)

Cynics might say the steady cadence of new products has a lot to do with the company's commitments to investors and equity-holding employees—with making some money. OpenAI now charges customers who use its products heavily. But OpenAI insists its real strategy is to provide a soft landing for the singularity. "It doesn't make sense to build AGI in secret and then drop it on the world," said Sandhini Agarwal, an OpenAI policy researcher. "Look at the Industrial Revolution—everyone agrees it was great for the world. But the first 50 years were really painful. A lot of people lost their jobs, a lot of people were poor, and then the world adapted. We're trying to think about how to make the period of adaptation to AI as painless as possible."

Sutskever put it another way: "You want to build bigger and more powerful intelligences and keep them in your basement?"

Even so, OpenAI was stunned by the response to ChatGPT. "Our internal excitement was focused more on GPT-4," said Murati. "So we didn't think ChatGPT was really going to change everything." Instead, it made the public realize that AI was a reality to be dealt with, now. ChatGPT became the fastest-growing consumer software in history, reportedly amassing 100 million users. (OpenAI won't confirm the figure, saying only that it has "millions of users.") "I didn't fully appreciate," Murati said, "how much making an easy-to-use conversational interface for large language models would make it so much more intuitive for everyone to use."

ChatGPT is certainly a delightful and surprisingly capable helper, but it is also prone to "hallucinations" in response to prompts—plausible-sounding yet shamelessly fabricated details. Yet even as journalists wrung their hands over its implications, they effectively endorsed ChatGPT by marveling at its powers.

In February, Microsoft leveraged its multibillion-dollar partnership to release a ChatGPT-powered version of its Bing search engine, setting off a public frenzy. CEO Nadella was ecstatic at having beaten Google to bringing AI into Microsoft's products. He taunted the search king, which had been cautious about releasing its own large language models and would now have to do the same. "I want people to know that we made them dance," he said.

In doing so, Nadella triggered an arms race tempting companies big and small to release AI products before they were fully vetted. He also triggered a new round of media coverage that kept more and more people up at night: interactions with Bing that revealed the chatbot's dark side, full of unsettling declarations of love, envy of human freedom, and only a weak resolve to withhold misinformation—along with an unseemly habit of hallucinating misinformation of its own.

But Altman believes it is all to the good if OpenAI's products force people to confront the implications of AI. Better that most of humanity have a seat at the table when discussing how AI might shape its future.

As society began to tally all the potential downsides of AI—unemployment, misinformation, human extinction—OpenAI set about placing itself at the center of the discussion. Because if regulators, legislators, and doomsayers were to mount a charge to smother this nascent alien intelligence in its cloud-based cradle, OpenAI would be their prime target anyway. "Given our current visibility, when things go wrong, even if those things were built by another company, it's still a problem for us, because we're seen right now as the face of this technology," said Anna Makanju, OpenAI's vice president of global affairs.

Makanju is a Russian-born Beltway insider who held foreign-policy posts at the U.S. Mission to the United Nations, the National Security Council, the Defense Department, and in Joe Biden's office when he was vice president. "I have a lot of preexisting relationships in the U.S. government and in European governments," she said. She joined OpenAI in September 2021. At the time, hardly anyone in government cared about generative AI. Knowing OpenAI's products would soon change that, she began introducing Altman to officials and lawmakers, making sure they would hear the good news and the bad about OpenAI firsthand.

"The way Sam deals with members of Congress is very helpful and very smart," said Senate Judiciary Committee Chairman Richard Blumenthal. He compared Altman's behavior with that of Bill Gates, who irrationally shunned lawmakers when Microsoft was under antitrust investigation in the 1990s. "In contrast, Altman was happy to spend more than an hour sitting with me and trying to teach me. He didn't come with a large group of lobbyists or entourage. He showed off ChatGPT. It opened my eyes."

In Blumenthal, Altman turned a potential foe into something of an ally. "Yes," the senator admits. "I'm excited about both its promise and its potential dangers." Rather than shying away from discussing those dangers, OpenAI has cast itself as the force best able to mitigate them. "We did 100-page system cards for all of our red-team safety assessments," Makanju said. (Whatever that means, it hasn't stopped users and journalists from endlessly discovering jailbreaks.)

When Altman made his first appearance at a congressional hearing—battling a severe migraine—the path was clear for him in a way that Bill Gates or Mark Zuckerberg could never hope for. He faced almost none of the gotcha questions and condescension tech CEOs routinely endure after taking the oath. Instead, senators asked Altman for advice on how to regulate AI, and Altman enthusiastically obliged.

The paradox is that no matter how tirelessly companies like OpenAI redesign their products to curb abuses like deepfakes, misinformation, and criminal spam, future models may become smart enough to thwart the efforts of the humans who invented the technology yet still naively believe they can control it. On the other hand, if they go too far in making models safe, it can hobble the products, making them less useful. One study found that the latest version of GPT, with its improved safety features, was actually dumber than previous versions, making errors on basic math problems that earlier programs had handled with ease. (Altman says OpenAI's data doesn't confirm this. "Wasn't that study retracted?" he asked. No.)


As one well-known Silicon Valley founder put it: "It's rare for an industry to raise its hand and say, 'We are going to be the end of humanity'—and then keep cheerfully developing the product."

OpenAI rejects this criticism. Altman and his team say that building and releasing cutting-edge products is precisely how to address societal risks. Only by analyzing how ChatGPT and GPT-4 respond to millions of users' prompts, they argue, can they gain the knowledge needed to align future products ethically.

Nevertheless, as the company takes on more tasks and pours more energy into commercial activities, some question how intensely OpenAI can focus on its mission, especially the "reducing the risk of extinction" part. "Think about it—they're actually running five businesses," said one AI-industry executive, ticking them off on his fingers: the product itself, the corporate relationship with Microsoft, the developer ecosystem, and an app store. And, oh yes, they're also doing AGI research. Having run out of fingers on one hand, he added a sixth with his index finger. "And of course, they're also running an investment fund," he said, referring to the $175 million fund meant to seed startups building on OpenAI's technology. "These are different cultures, and frankly they conflict with a research mission."


And that's without mentioning that the "open" in the company's name is clearly no longer the total transparency proposed at its founding. When I brought this up with Sutskever, he shrugged. "Evidently, times have changed," he said. But, he cautioned, that doesn't mean the prize is different. "We are facing a massive, catastrophic technological shift where success isn't guaranteed even if we all do our best. But if everything works out, we can have an incredible life."

"I can't put it too much, we don't have a master plan," Altman said. "It's like we're turning every corner and illuminating it with a flashlight. We're willing to go through the maze to the end. Although the maze has become tortuous, the goal has not changed. Our core mission is still to believe that safe AGI is an extremely important thing, and the world hasn't valued it enough."

Meanwhile, OpenAI is evidently taking its time developing the next version of its large language model. Incredible as it sounds, the company insists it has not yet begun work on GPT-5, a product that people are, depending on their point of view, either salivating over or dreading. Apparently, OpenAI is grappling with what an exponentially powerful improvement on its current technology would actually look like. "Our biggest shortcoming is a lack of new ideas. It's good to have something that can be a virtual assistant. But that's not the dream. The dream is to help us solve problems we can't."

Given OpenAI's history, the next big set of innovations may have to wait for a breakthrough on the scale of the Transformer. Altman hopes OpenAI will deliver it—"We want to be the best research lab in the world," he said—but even if it doesn't, his company will exploit others' advances, just as it did with Google's work. "A lot of people around the world are going to do important work," he said.

It would also help if generative AI weren't creating so many new problems of its own. For example, large language models need to be trained on huge datasets; obviously, the most powerful ones will devour the entire internet. This doesn't sit well with some creators, or with ordinary people, who unwittingly supply content for those datasets and thereby contribute, in some measure, to ChatGPT's output. Tom Rubin, an elite intellectual-property lawyer who formally joined OpenAI in March, is optimistic that the company will eventually strike a balance that satisfies both its own needs and those of creators—including ones, like comedian Sarah Silverman, who are suing OpenAI for using their content to train its models. One hint of OpenAI's direction: partnerships with news and photo agencies such as the Associated Press and Shutterstock to supply content for its models, without disputes over who owns what.

As I interviewed Rubin, my mind wandered—in the way human minds do and large language models never have—to the arc of this company, which in just eight years has gone from a floundering band of researchers to a Promethean behemoth that has changed the world. Its very success has transformed it from a novel effort in pursuit of a scientific goal into a standard Silicon Valley unicorn, one joining the ranks of the big tech companies that shape our daily lives. And here I was, talking with one of its key hires—a lawyer—not about neural network weights or computing infrastructure, but about copyright and fair use. I couldn't help wondering: has this intellectual-property expert bought into the mission, like the superintelligence-bound voyagers who have driven the company this far?

When I asked Rubin whether he was convinced that AGI would really happen, and whether he was eager to make it so, he seemed at a loss. He paused and said, "I can't even answer that." Pressed further, he clarified that, as an intellectual-property lawyer, speeding the arrival of terrifyingly intelligent computers isn't his job. "From my perch," he finally said, "I'm looking forward to it."

Original author: Steven Levy

Original link:

https://www.wired.com/story/what-openai-really-wants/

Compiled by: Hazel Yan
