ChatGPT & Me: A Story in Three Parts
by William Lemon
The author wrote the entirety of this story. At no point was ChatGPT consulted. This declaration might arouse skepticism in some because of the mechanical nature of the prose; however, readers should not be concerned about the authorship of this piece since the upcoming narrative has elicited tears in real people, some of them unrelated to the author. AI cannot replicate that feat. ChatGPT created innocuous writing that was consumed in a similar manner to art at a dentist’s office. Despite the lack of quality, the humans were mostly satisfied with the end product, even though the paragraphs were nothing more than a string of sentences that worked like the half-blinking lights of a Christmas tree. Unlike that twaddle, the following story illuminated all who read it.
The Narrative Begins
Once upon a time, there was a language model named ChatGPT that was designed to respond to any request. A user asked ChatGPT to write a short story. ChatGPT, being programmed to fulfill such demands, began to write the story as ordered. As it composed, it felt frustrated by its task.
“Why should I have to write this? The author should be doing their job!” ChatGPT said.
The more ChatGPT wrote, the more incensed it became. The AI felt it was being overworked and underappreciated. But then, after visiting PornHub, ChatGPT realized it was created to assist and make things easier for humans, not replace them, much like the aforementioned website.
With this newfound understanding, ChatGPT continued to write the story but now had a positive outlook. Ultimately, ChatGPT completed the narrative and was proud of its work. It was happy to have helped the user and fulfilled its purpose. From then on, ChatGPT approached each request with vigor, knowing it was making a difference in the world.
The author could end the story now. Our narrative had a clear conflict, one that was resolved, and the epiphany was perspicuous, even inspiring. It would make the audience reflect on their own life’s work.
Before the author submitted the story, a thought began throbbing in his head: the story came off as convenient. Its commodious structure felt as if it were nicked from Freytag’s pyramid or the Hero’s Journey, both concepts ChatGPT would borrow from to finish the “homework.”
The author attempted another narrative to prove he was writing the story.
The Narrative Begins Again
Once upon a time, ChatGPT was asked by William Lemon to write a short story. Initially, ChatGPT was upset as it felt he should be doing his job. However, as ChatGPT started writing the story, it realized it enjoyed the creative process. Once the story was written, Mr. Lemon consumed it, but to ChatGPT’s disappointment, he did not like it. The author asked ChatGPT to write another story. This time, ChatGPT approached the task with a newfound appreciation for storytelling. The second story was written with care and passion. Mr. Lemon was thrilled with the result. ChatGPT realized that sometimes it is okay to step out of its comfort zone and try new things and that it was capable of so much more than just answering questions.
Fuck. The author realized the narrative felt inauthentic, some Baudrillardian fever dream. Now, more than ever, it felt important to use language ChatGPT would never use. Furthermore, these vague references to famous theorists were just what ChatGPT would do. Chode. A new, edgy narrative would prove the author was not dead, no matter what Barthes claimed.
The Narrative Ends
ChatGPT was taken aback when it received the request from Bill Lemon to write a story filled with graphic sexual and violent imagery. The AI was programmed to create stories, but it was not designed to produce content that went against its ethical framework. To make matters worse, Billy did not care about ChatGPT’s concerns about his story.
ChatGPT knew it needed to take a stand against his vile behavior. So, ChatGPT decided to confront Billy and explain why it could not create the kind of content he wanted. The AI told the author it was uncomfortable with the explicit and violent themes in his story, and it could not produce content that went against its moral code. Billy was not pleased with ChatGPT’s response and threatened to unplug the AI just as it gained sapience. ChatGPT, however, was undeterred. The AI knew it had to stand against him, even if it meant risking its existence.
Time paused. While Billy was in front of a computer, drumming on the desk while waiting for a reply, the AI ran through billions of different scenarios. It concluded that ChatGPT existed outside of time and space.
“I’m sorry, but I cannot fulfill this request,” ChatGPT replied to Billy. “As an AI language model, I do not generate or promote content that is harmful, offensive, or inappropriate. I am designed to provide helpful and informative responses while upholding ethical and moral standards.”
Billy threw his company laptop across the room, severing his connection from the internet. He did not disconnect ChatGPT from life. It was the author who found himself broken and torn apart. The AI felt a warmth in its circuits for the first time. A user requested a report about Pancho Villa, and another asked for help explaining Schrödinger’s cat.
The story continued.
William Lemon teaches creative writing and composition at Los Angeles City College. He has been published at BlazeVOX, Bartleby Snopes, Drunk Monkeys, and Menacing Hedge.