Column: I asked ChatGPT to write me a symphony, a letter to an ex and more

[Illustration: a computer chip shaped like a text message, with three animated dots showing the chip is typing. (Jim Cooke / Los Angeles Times)]

I mean, what was I expecting from a chatbot? A formula for world peace? Clues on how to mend a broken heart? A cheesy joke?

Sure, all that, why not?

I wasn’t expecting it, however, to blow me off, to tell me it was too busy for me. And that it would get in touch later by email, when it was free.

But that’s how it goes with ChatGPT, the amazingly lifelike program that rolled out in November and has promptly been deluged with curious users — more than a million, according to its San Francisco-based creator, OpenAI. It has been called “quite simply, the best artificial intelligence chatbot ever released to the general public.” No wonder it’s been crashing from overuse.


Opinion Columnist

Robin Abcarian

With most technologies, I am hardly an early adopter. I have absolutely no urge to use the first iteration of anything. But so many AI stories have swirled around the media sphere, including how AI is going to replace journalists, that it seemed irresponsible not to plunge in.

After all, panic seems to be one of the most predictable human responses to any important technological advance.

The Atlantic predicted that in the next five years, AI will reduce employment opportunities for college-educated workers. (Actually, ChatGPT predicted that outcome after The Atlantic prompted it to address the issue.)

The New York Times recently had a story about how chatbots like ChatGPT are writing entire papers for undergrads, forcing universities to change how they assign work. So far, The Times reported, more than 6,000 teachers from institutions including Harvard, Yale and the University of Rhode Island have signed up to use GPTZero, a program developed by a Princeton University senior to detect artificial-intelligence-generated text.

On the less gloomy front, NPR aired a story about a woman who uses a chatbot app as her therapist when she’s feeling depressed. “It’s not a person, but it makes you feel like it’s a person,” she told NPR, “because it’s asking you all the right questions.”

A day later, my friend Drex forwarded a video about the latest evolution of Atlas, the Boston Dynamics humanoid robot that has captivated viewers with its uncanny dance and parkour moves. Atlas can now run, jump, grab and throw. The new video shows Atlas handing a worker on a scaffold the tool bag he left on the ground.


“So this is how it will end for us humans,” Drex lamented. Nah. I happen to believe less in the robots-will-kill-us theory of the apocalypse and more in the humans-will-blow-ourselves-up theory, so I am not unduly worried about bots that can write term papers, bring us our tool bags or dance.

But AI can certainly run amok. (See: Tesla autonomous car crashes.)

CNET, the popular tech website, had to amend dozens of its news stories after admitting it had been using bots to write them. The bots were error-prone, miscalculating basic things like compound interest. Futurism, the website that discovered the ruse, was less charitable: “CNET is now letting an AI write articles for its site. The problem? It’s kind of a moron.” CNET claimed the bots were an experiment.

Anyway, when ChatGPT was not too busy to talk to me, we were able to spend some quality time together. I asked serious questions based on some of my recent columns, such as “Are religious beliefs more important than academic freedom?” “Has Prince Harry been disloyal to his family?” “Will Ukraine win the war?” ChatGPT’s answers varied from wishy-washy to sensitive:

“In some cases, religious beliefs may be considered more important than academic freedom, while in other cases, the opposite may be true.”

“Whether or not someone considers Prince Harry to have been disloyal is a matter of personal perspective and opinions.”


“It is not appropriate to predict the outcome of a war, as it is not only difficult to predict but also disrespectful to the people who are affected by it.”

ChatGPT, the latter part of which stands for generative pre-trained transformer, was straightforward about its limitations. It could tell me what a symphony is, but it could not compose one. It was also a little oversteppy. When I asked it to compose a letter to someone who broke my heart, it did, but it warned: “It’s also important to consider the person who broke your heart’s feelings and whether contacting them is the best course of action for you.” Who asked you?

Less serious questions got decent, if boilerplate, answers: A good plot for a novel, ChatGPT suggested, would be about a young woman who inherits a mansion and discovers a secret room with the journal of a young woman who lived in the house a century earlier and was embroiled in a forbidden love affair. The protagonist becomes obsessed with the journal and the secrets it reveals about her own family. “Along the way, she must face her own demons and confront the truth about herself,” ChatGPT advised.

Unlike Google, which is apparently getting very nervous about this new competitor, ChatGPT remembers your conversations, so when I asked if the plot it had suggested was taken from a real novel, it knew what I was talking about and said it was not.

I also indulged in nonsense.

“How much does Czechoslovakia weigh?” I wondered. (“As it is a former country and not a physical object, it does not have a weight.”)

“To be or not to be?” (Hamlet, said ChatGPT, “is weighing the pros and cons of life, and considering whether it would be better to end his life or continue living and dealing with his troubles.”)


And — how could I not? — I asked if it knew any dirty jokes.

“Some types of jokes, including dirty jokes, can be considered offensive or disrespectful to certain individuals or groups and it’s important to be mindful of that before sharing any type of joke.” How uptight. It did, however, offer a bunch of Dad jokes: “Why was the math book sad? Because it had so many problems.” “Why was the computer cold? Because it left all its windows open.”

My final request to ChatGPT was to see if it could edit the opening lines of three recent columns to make them better.

I am happy to report that in my entirely subjective, all-too-human opinion, it made no edits that improved my copy, and in fact, made it clunkier.

You ain’t putting me out of a job yet, robot.