Questions & Answers

Attack of the Bots?

What are the ethical implications of ChatGPT? Professor Deb Donig, cofounder of the Ethical Technology Initiative, explains.

By Gabby Ferreira
A sci-fi robot types on a laptop in a plant-filled coffee shop.
This image was generated by the AI platform Midjourney with the prompt, “A robot typing on a laptop in a bright and plant-filled coffee shop, digital art style, infographic style, line art.”

If science fiction can be believed, the robots are coming for all of us — but the latest advances in AI technology are starting to make fiction seem like reality. Cal Poly Magazine turned to English Professor Deb Donig, cofounder of the Ethical Technology Initiative at Cal Poly, to share insights into the ethical implications of AI, specifically ChatGPT.  


Professor Deb Donig.

What is ChatGPT, and what does ChatGPT do?

ChatGPT is what’s called an LLM, or large language model. We’ve actually been dealing with large language models for a very long time — think of Siri on your iPhone or predictive text when you’re writing an email in Gmail.  

ChatGPT has access to open-source information on the web, which includes everything that is not protected by intellectual property rights. It can draw on what's on the internet, and from all of that text it builds these large language models.

It's fairly easy for ChatGPT to consume all of that information, take a question, and write out predictable, connected thoughts in something that looks like human speech. That's because it has been trained to assimilate lots of different, separate pieces of writing and language into something that looks coherent.
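The "predictive text" idea mentioned above can be illustrated with a toy sketch: count which word tends to follow each word in some training text, then predict the most frequent follower. This is only a bigram model for illustration; the corpus and function names here are invented, and real systems like ChatGPT use neural networks trained on vastly more data.

```python
from collections import Counter, defaultdict

# A tiny, made-up training corpus for illustration only.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The same principle, scaled up enormously and generalized beyond simple word counts, is what lets a large language model produce text that reads like human writing.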

What can’t ChatGPT do?

What ChatGPT does not have access to is one important dimension of what we think of as human creativity, which is the ability to create something new.  

My partner and I had a joke when ChatGPT came out. He would say, “Write Deb a poem in the style of a Grecian ode,” and it would give me a perfectly crafted poem in the style of a Grecian ode. I would say, “Write him a love song in the style of the Temptations,” and it would give him a perfectly crafted, Temptations-style song.  

But the Temptations weren’t great because they copied what somebody else did. The Temptations were great because they added something new to what we thought of as music. And that is what ChatGPT cannot do.  

What are the potential implications for these types of AI content generators?

One of the first things is that we will start to devalue the kinds of interactions we have with writing.  

What happens if you just assume that most of what you see on social media is written by a bot? None of us are going to be all that interested in responding to it. A large amount of our engagement and interactions online right now take place through a text medium. Will that change? We don't know.

Additionally, I'm already seeing people who write marketing material being let go from their jobs, then rehired by the same employers, at a much lower cost, to edit ChatGPT-generated writing that is based on material those writers had already produced.

Finally, there is the question of the devaluing of writing overall. When ChatGPT can generate writing, how many people want to get a degree in literature or in journalism? How many universities are going to continue to employ professors of English literature or of writing, or have writing requirements when there is no longer a value to the most basic forms of writing?  

I think that there will always be a high value placed on good, important, innovative writing — or at least I hope so. But I wonder about the devaluing of writing, broadly speaking, when there’s no market value for the basic forms of writing. Artistic creators of writing have to start somewhere.  

Are there reasons to be optimistic about this kind of AI?

I don’t want to ignore the negative consequences — we should grapple with those — but I also want to explore the possibilities. What if I am writing a legal analysis, as I currently am, and I could just ask ChatGPT to go through each of the individual legal arguments I’m looking at?  

If I had that basic literature review, which doesn't really require intellectual labor in terms of abstract thinking or creating new ideas, I'd potentially have more time to expand my reach and capacity. I don't think that it has to be either-or: either generate everything from scratch or all of our writing is devalued.

ChatGPT and large language models cannot ideate. They take existing thought and remix it. If I define writing as the production of ideas, not just a report or record of ideas, then one of the values of writing is in actually producing ideas.

I am not so arrogant or pessimistic as to think that we have already thought of all of the great ideas out there. I think that there are new ideas waiting for people to invent, and we won’t get there if we just remix the past. We have to produce new thoughts and transmit them somehow. If we think of writing in those terms, then writing is as important as it’s always been to our culture.

Read more about Donig and her innovative course exploring the intersections of literature, technology and humanity.