“It’s Not Possible for Me to Feel or Be Creepy”: An Interview with ChatGPT (2024)

Between Christmas and New Year’s, my family took a six-hour drive to Vermont. I drove; my wife and two children sat in the back seat. Our children are five and two—too old to be hypnotized by a rattle or a fidget spinner, too young to entertain themselves—so a six-hour drive amounted to an hour of napping, an hour of free association and sing-alongs, and four hours of desperation. We offered the kids an episode of their favorite storytelling podcast, but they weren’t in the mood for something prerecorded. They wanted us to invent a new story, on the spot, tailored to their interests. And their interests turned out to be pretty narrow. “Tell one about the Ninja Turtles fighting Smasher Venom, a villain I just made up who is the size of a skyscraper,” the five-year-old said. “With lots of details about how the Turtles use their weapons and work together to defeat the bad guy, and how he gets hurt but doesn’t die.” My wife tried improvising a version of this story; then I tried one. The children had notes. Our hearts weren’t in it. It was obvious that our supply of patience for this exercise would never match their demand. Three and a half hours to go.

My wife took out her phone and opened ChatGPT, a chatbot that “interacts in a conversational way.” She typed in the prompt, basically word for word, and, within seconds, ChatGPT spat out a story. We didn’t need to tell it the names of the Teenage Mutant Ninja Turtles, or which weapons they used, or how they felt about anchovies on their pizza. More impressive, we didn’t need to tell it what a story was, or what kind of conflict a child might find narratively satisfying.

We repeated the experiment many times, adding and tweaking details. (The bot remembers your chat history and understands context, so you don’t have to repeat the whole prompt each time; you can just tell it to repeat the same story but make Raphael surlier, or have Smasher Venom poison the water supply, or set the story in Renaissance Florence, or do it as a film noir.) My wife, trying to assert a vestige of parental influence, ended some of the prompts with “And, in the end, they all learned a valuable lesson about kindness.” We ran the results through a text-to-speech app, to avoid car sickness, and the time pleasantly melted away. My wife took a nap. I put in an earbud and listened to a podcast about the A.I. revolution that was on its way, or that was arguably already here.
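The "memory" described above is less mysterious than it feels: in chat systems of this kind, the client typically keeps the whole transcript and resends it with every new turn, so each follow-up ("make Raphael surlier") arrives with the full story so far. A minimal sketch of that pattern in Python, with `fake_model` standing in for the actual model call (the names here are illustrative, not OpenAI's API):

```python
# Sketch of how a chat interface can appear to "remember": the client
# accumulates the transcript and passes all of it to the model each turn.

def fake_model(messages):
    # Stand-in for a real language-model call; it just reports how much
    # conversational context it received.
    return f"(reply informed by {len(messages)} prior messages)"

class Conversation:
    def __init__(self):
        self.messages = []  # each entry: {"role": ..., "content": ...}

    def say(self, text):
        self.messages.append({"role": "user", "content": text})
        reply = fake_model(self.messages)  # entire history goes along
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation()
chat.say("Tell a story about the Ninja Turtles fighting Smasher Venom.")
chat.say("Same story, but make Raphael surlier.")  # no need to restate the prompt
```

The design choice matters: because context is just replayed text, a long conversation eventually exceeds the model's input window, which is why real chatbots truncate or summarize old turns.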

ChatGPT is a free public demo that the artificial-intelligence company OpenAI put out in late November. (The company also has several other projects in development, including Dall-E.) We’ve known for a while that this sort of A.I. chatbot was coming, but this is the first time that anything this powerful has been released into the wild. It’s a large language model trained on a huge corpus of text that apparently included terabytes of books and Reddit posts, virtually all of Wikipedia and Twitter, and other vast repositories of words. It would be an exaggeration, but not a totally misleading one, to refer to the text that was fed into the model as “the Internet.” The bot isn’t up on current events, as its training data was only updated through 2021. But it can do a lot more than make up children’s stories. It can also explain Bitcoin in the style of Donald Trump, reduce Dostoyevsky to fortune-cookie pabulum, write a self-generating, never-ending “Seinfeld” knockoff, and invent a Bible verse about how to remove a peanut-butter sandwich from a VCR, among many, many other things. The other night, I was reading a book that alluded to the fascist philosopher Carl Schmitt’s critique of liberalism in a way that I didn’t quite understand; I asked ChatGPT to explain it to me, and it did a remarkably good job. (Other times, its answers to questions like this are confident and completely wrong.) Some students are using it to cheat; some teachers are using it to teach; New York City schools have blocked access to the software until they can figure out what the hell is going on. Google Search scrapes the Internet and ranks it in order of relevance, a conceptually simple task that is so technically difficult, and so valuable, that it enabled Alphabet to become a trillion-dollar company.
OpenAI and its competitors—including DeepMind, which is now owned by Alphabet—are aiming to do something even more potentially transformative: build a form of machine intelligence that can not only organize but expand the world’s glut of information, improving itself as it goes, developing skills that are increasingly indistinguishable from shrewdness and ingenuity and maybe, eventually, something like understanding.

The interface is about as simple as it gets: words in, words out. You type in any prompt that comes to mind, press a button that looks like a little paper airplane, and then watch the blinking cursor as ChatGPT responds with its own words—words that often seem eerily human, words that may include characteristic hedges (“It’s important to note that...”) or glimmers of shocking novelty or laughable self-owns, but words that, in almost every case, have never been combined in that particular order before. (The graphic design, especially the cursor, seems almost intended to create the illusion that there is a homunculus somewhere, a ghost in the machine typing back to you.) There is a robust and long-standing debate about whether the large-language-model approach can ever achieve true A.G.I., or artificial general intelligence; but whatever the bots are doing has already been more than enough to capture the public’s imagination. I’ve heard ChatGPT described, sometimes by the same person, as a miracle, a parlor trick, and a harbinger of dystopia. And this demo is just the public tip of a private iceberg. (According to rumors, OpenAI will soon put out a more impressive language model trained on a far vaster trove of data; meanwhile, Alphabet, Meta, and a handful of startups are widely assumed to be sitting on unreleased technology that may be equally powerful, if not more so.) “If we’re successful, I think it will be the most significant technological transformation in human history,” Sam Altman, the C.E.O. of OpenAI, said recently. “I think it will eclipse the agricultural revolution, the industrial revolution, the Internet revolution all put together.”

Luckily, unlike every other technological transformation in human history, this one will only serve to delight people and meet their needs, with no major externalities or downside risks or moral hazards. Kidding! The opposite of that. If the A.I. revolution ends up having even a fraction of the impact that Altman is predicting, then it will cause a good amount of creative disruption, including, for starters, the rapid reorganization of the entire global economy. And that’s not even the scary part. The stated reason for the existence of OpenAI is that its founders, among them Altman and Elon Musk, believed artificial intelligence to be the greatest existential risk to humanity, a risk that they could only mitigate, they claimed, by developing a benign version of the technology themselves. “OpenAI was born of Musk’s conviction that an A.I. could wipe us out by accident,” my colleague Tad Friend wrote, in a Profile of Altman published in 2016.

OpenAI was launched, in 2015, with a billion dollars of funding. The money came from Musk, Peter Thiel, Reid Hoffman, and other Silicon Valley big shots, and their contributions were called “donations,” not investments, because OpenAI was supposed to be a nonprofit “research institution.” An introductory blog post put the reasoning this way: “As a non-profit, our aim is to build value for everyone rather than shareholders.” The clear implication, which Musk soon made explicit in interviews, was that a huge, self-interested tech company, like Google or Facebook, could not be trusted with cutting-edge A.I., because of what’s known as the alignment problem. But OpenAI could be a bit slippery about its own potential alignment problems. “Our goal right now,” Greg Brockman, the company’s chief technology officer, said in Friend’s Profile, “is to do the best thing there is to do. It’s a little vague.”

