If Ronald Reagan were alive to read my column this week, he’d undoubtedly shake his head, smile, and utter his famous debate phrase, “There you go again.”
I have to admit that I’m fascinated by the phenomenon of artificial intelligence. Our understanding of computers diminished greatly back in the day when Microsoft introduced Windows and hid the operating system from PC users. It made PCs immensely more user-friendly, but it vaulted computing into the realm of the magical for the average user, and it’s only gotten more mysterious and elusive since.
Artificial intelligence is just as magical to me these days in how it can analyze immense amounts of data and independently decide what to do with it.
Most recently my attention has been focused on ChatGPT, the artificial intelligence chatbot that debuted last fall that can compose emails, blog posts, lists, essays, computer code and more from a simple user direction or question.
It’s taken the computing world by storm. Almost every time I try to log in to the ChatGPT server during working hours I get the message that the server is at capacity, a common occurrence these days for the millions of people trying it out. With persistence one can get in during the day, but I find it easier to log in late at night.
It’s caused quite a stir among those who depend on written text to conduct their chosen pursuits. Educators fear, with good reason, that the text it generates will be used by students as a substitute for their own work. Seventeen percent of Stanford University students reported using ChatGPT for their classwork last fall, mostly to generate outlines for papers or conduct basic research, but some did report using the text for essays. Colleges are racing to revise their student codes of conduct to define how it can and cannot be used in higher education. Public schools are doing the same.
Techies who get industry news from the website CNET found out this past week that many of the articles they’ve been reading in recent months were generated entirely by ChatGPT and not by CNET human reporters. The subterfuge was discovered when people started noticing errors in the articles, something ChatGPT is prone to because of how it was developed. For the moment, CNET has stopped the practice of using AI-generated articles.
I’ve found ChatGPT to be quite adept at writing those “Ten reasons why …” type of lists often found on blog sites or in popular magazines. I’ve given it numerous such queries and have generated samples that match or exceed the quality of many blogs I’ve visited. Bloggers, and freelancers who generate blog posts to sell, could definitely see some competition from ChatGPT in the days ahead.
True to the magical nature of computing, many of the reviews I’ve read of ChatGPT don’t seem to have a grasp on what it is. I recently read an article by a sportswriter about how ChatGPT won’t ever replace him because it doesn’t know anything about current sports teams; he gave many examples of its inability to answer questions about players, games, and statistics. He’s a good illustration of someone with a fundamental misunderstanding of what ChatGPT is.
ChatGPT is a language model that has been trained on a huge amount of language input to respond in a human-like manner to a human’s questions and directions. Its neural network was trained on 300 billion words fed into the system from books, web-based text, articles, and other writing sources. The neural network analyzes a request from a human and creates a contextual response from all of the data it has, data that only goes up to the year 2021.
Given the level of use, ChatGPT is making millions of decisions every second about which words and phrases to use to craft its responses, which usually take only a few seconds to start appearing. It doesn’t evaluate facts; it evaluates the language patterns it was trained on. So ChatGPT generates human-like responses that are sometimes nonsensical or inaccurate, much like some of the internet language it learned from.
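For readers curious what “predicting the next word from patterns” means in practice, here’s a deliberately tiny sketch. This is not ChatGPT’s actual mechanism (which uses a neural network with billions of parameters); it’s a toy bigram model I’ve invented for illustration, which simply records which word followed which in a scrap of “training” text and then generates sentences by sampling from those observed continuations:

```python
import random

# Toy "training data" -- a real model ingests hundreds of billions of words.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Build a table of which words have been seen following each word.
follows = {}
for current, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(current, []).append(nxt)

def generate(start, length=5):
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # no continuation ever observed for this word
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Note that nothing in this process checks whether the output is true; it only checks whether each word is a statistically plausible successor. That’s the heart of why a language model can sound fluent while being wrong.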
What ChatGPT is not, as the sportswriter and CNET found out, is a program that searches the internet in real time to get information for what it writes. If one asked ChatGPT to write an article about the trials and tribulations of Kevin McCarthy’s recent bid to become Speaker of the House, it could not produce an accurate response, because it does not do a real-time search of articles written about that. It can point to websites in response to someone’s query, but it isn’t actively reading those sites for data to use in its answers.
But despite some of its limitations, ChatGPT is a formidable piece of artificial intelligence. It knows enough to pass the MBA exam at the Wharton School of Business at the University of Pennsylvania. It’s a whiz at generating computer code. And when I asked it to describe five drawbacks to copper-nickel mining, it made me wonder if some of Marshall Helmberger’s writing was included in its database. It wasn’t nearly as good or detailed as Marshall’s writing, but it was accurate as a basic overview.
I’ll admit I was tempted to have ChatGPT write my column this week and just clue you in at the end what I’d done. It might not have had my style, but I suspect it would have been good enough to make you think you were getting the genuine Colburn and not an artificially generated one.
And ChatGPT is already generating spin-offs, such as a site that promises to integrate current information into what ChatGPT writes. Another company is touting its ability to take the “best-selling book” that you “write” using ChatGPT and turn it into an e-book for sale. One could even choose to illustrate the book with AI-generated art from a site like DALL-E or Midjourney.
It seems AI technology like ChatGPT is racing ahead of our readiness to deal with it, and as with most technology there are practical and ethical matters to address. If you “write” a book or a poem on ChatGPT, who owns it for copyright purposes? Much of the written material ChatGPT was trained on was copyrighted material, so do the original copyright owners of a phrase get a stake in what it produces? Is it ethical to produce a book and list yourself as the author when it was actually written and illustrated by artificial intelligence? Does ChatGPT have an ethical responsibility to make sure the material it produces is factual? The questions are many, complex, and largely unanswered right now.
And the same can be said for artificial intelligence in general as it wriggles its way into ever more of our daily lives. A recurring question posed by advances in artificial intelligence is this: if we can do it with AI, should we do it?
Artificial intelligence is rapidly expanding through our society, making decisions humans used to make and using its immense computing capacity to make decisions humans can’t. It’s moving faster than most of us realize, with implications we can’t fully fathom. As with all technology, there will be benefits and there will be pitfalls. Is it moving too fast for us to keep up? Only time will tell.