AI versus writers: Weighing both sides and ChatGPT's startling response
I am only the umpteen-thousandth writer to write about the impact of AI on writing as a profession. You might even ask what took me so long to pipe up about it.
But the widespread adoption of AI over the last several months was not something anyone could have predicted in detail. Now some data and some real-world impacts are starting to come in, and I can make some educated guesses.
First of all, many writers are aghast at the ability of AI generators to produce life-like text in seconds on just about any topic imaginable. There has been a vehemently negative response to AI in online writers’ groups and I’ve seen more than one writer who admitted to using AI viciously excoriated by other writers. The majority opinion, at least among human writers, is definitely anti-AI.
And there are good reasons for that.
While AI-produced text is banal and prone to factual errors, it is the work of a few minutes for a skilled writer to take an AI text and spruce it up dramatically. One of the most immediate impacts of AI writing tools is to significantly speed up technical or corporate writing. Work that used to require ten writers can now be done by one writer, who mainly just edits, checks facts and adds a bit of flair here and there.
A similar thing happened in automobile manufacturing a few decades ago. Technological advances increased production per worker so dramatically that only a few workers retained their jobs, which left large industrial cities looking like scenes out of a post-apocalyptic novel. The same could easily happen to the large section of white-collar jobs where writing plays a major role.
Jobs like marketing staff, copywriters, copyeditors, technical writers, medical or scientific editors, webpage content providers and speechwriters are all likely to be hollowed out. Perhaps one in ten will be left in five years, and most of those jobs that do survive will involve a lot of checking over vast amounts of AI-generated text—not the kind of job most writers want.
It’s happening already. I personally know several people in technical writing who have already lost their jobs or seen their business gutted because of AI. Most of my editing jobs still involve assisting writers for whom English is a second language. It’s high-level work, where the author isn’t able to accurately check the text themselves, and the nuance required is beyond current AI capabilities. So, my gigs aren’t yet under direct threat, but even that is likely to become ever more rare, especially as language-translation AI improves.
Even so, there are writers who extol the virtues of AI. Some have made money by producing dozens or even hundreds of AI books and posting them on Amazon. Even if no individual AI book garners many sales, someone inevitably gets sucked in, and with enough hooks in the water, there are apparently substantial returns, for the moment. The problem with that strategy is that Amazon and other self-publishing platforms will likely cease to be places anyone goes to find good books, because the quality of these auto-generated books tends to be very low. The more a platform is overwhelmed with AI and other low-quality offerings, the less it will attract readers.
Another way to get the other side of this story is to ask an AI generator.
When asked, “What will be the positive and negative impacts of AI for people in writing professions?” ChatGPT divides its answer into two categories:
Positive impacts: “AI will streamline research, provide content suggestions, and assist in editing, saving time and enhancing productivity for writers. It can also generate personalized content tailored to specific audiences, potentially increasing engagement and reach.”
Negative impacts: “AI may lead to job displacement for some writers as automation becomes more advanced, and there's a risk of overreliance on AI tools, diminishing creativity and human touch in writing. Additionally, concerns about AI-generated content authenticity and ethical implications may arise, challenging the integrity of written work.”
I’ll take that as four separate points: 1. Labor-saving and enhanced productivity, 2. Personalized content to pinpoint marketing, 3. Job displacement and over-use, and 4. Ethical and authenticity concerns.
I’ve got to hand it to ChatGPT for being reasonably even-handed in its criticism of itself. Now, let’s break it down.
1. Labor-saving and enhanced productivity
Few writers today can credibly argue that AI isn’t a helpful assistant in spellchecking manuscripts. We’ve been using that level of automation for decades and most of us would be lost without it. Today, writers can use AI to quickly scan for repetitive words, clichés, passive voice, inconsistencies in usage and clunky sentences. Few writers would argue that this type of AI use is unethical or much different from spellchecking, but it is going to cut deeply into the livelihoods of people who have made a career in copyediting.
The downside of this handy type of AI is likely to be that both students and teachers may rely on these tools to such an extent that fewer people will make it over the rather steep learning hump that most previous generations sweated over in high school composition classes. If you can just put down a general idea and let AI fix it—or even write most of it—how many high school or college students are going to actually know how to write an essay for themselves?
You might say that’s just fine. Millennials and even a lot of Gen-Xers are far worse at spelling than our forebears, thanks to spellcheck applications, and it hasn’t really hurt us much. We don’t need to spell very well in the modern world. However, the higher levels of editing—and even generating an essay in the first place—encroach much more on the thinking part of writing than the pure technicality of a spellcheck.
This blurs the line between this “positive” point and the “negative” point about overuse. To be fair, AI is offering tools. Those tools can be used for some good things, like spellchecking and checking for inconsistencies or repetitive text.
Advanced levels of AI, generally in paywalled categories at present, can help with checking academic or scientific sources, both for correct citations and to ensure no inadvertent “plagiarism” occurs when a writer uses phrasing too similar to that of the original source. Conversely, those tasked with reviewing such work, whether as university instructors or for peer-reviewed publications, can use these same tools to require exacting citation standards. That may be a net positive, though it will mean cuts in the jobs of graduate students who previously assisted with much of this tedious work. There again, the line is blurred between the positive aspects and the negative of job displacement, which I’ll get to below.
Still, it’s hard to argue that these checking tools are harmful, and modern AI has made such tools better. So, I’ll give ChatGPT the point on the positive side.
2. Personalized content to pinpoint marketing
ChatGPT doesn’t word it like this, but I’m translating “personalized content tailored to specific audiences, potentially increasing engagement and reach” into regular English here. This means advertising and marketing of every variety, including that of a blog post, which is meant to help a lowly author like myself gain the interest of readers.
What if I didn’t just write my blog post, but I had AI data on you as an individual reader and I could make sure that the post you see agrees with your personal opinions and plays to interests you have personally expressed online? The post you see from Arie Farnam might be significantly different from the post someone else sees, if I used this type of “personalized content AI” to “enhance” my posts. (Don’t worry. I won’t.)
What the bot means when claiming this as a “positive” aspect of AI is that much of the advertising and marketing text we see is likely to become even more targeted to one’s specific tastes. Once advertisers know that you are a customer who prioritizes durability, for example, every product you encounter will likely be marketed as “durable” and “well-made” and “long-lasting,” whether it is or not. Marketing will likely reflect even less of what the product is and more what the customer personally wants to hear.
While I have to agree with ChatGPT that this might be a “good thing” for corporations who want to make money at all costs by misleading customers, it is definitely not a good thing for most actual humans.
As an example, I recently purchased a “Sporting Goods Protection Plan” for a $200 fishing pole. The ad popped up as I was checking out on Amazon and offered a warranty against accidental damage and broken items. Because this pole was going to be used by teenagers, I happily shelled out an extra 10 percent of the product price for a two-year warranty. But when it came time to make a claim for a broken pole nine months later, the insurance company, Asurion, said the “Sporting Goods Protection Plan” I purchased only covers phones and tablets, not fishing poles.
The ad had automatically tailored the text to what I was likely to want to see, so I paid for an insurance plan for a non-existent phone while buying a fishing pole. I did actually read some of the terms and conditions, which stated that the plan covered “cracked, broken and scratched portable items.” Asurion now says a “portable item” refers only to phones and tablets and I should have known that.
Asurion’s representative went so far as to google “Is a fishing pole a portable item?” to try to prove to me that I was wrong. Of course, Google, at this date, still agrees that a fishing pole is portable, so Asurion had to take another tack to continue denying my claim. But what happens when Google provides “personalized” answers to questions like that according to what corporate partners need at the moment? You can see why Asurion would see this as a “good” aspect of AI.
3. Job displacement and over-use
We are already seeing the effects of job displacement. Just as automation and robotic assembly lines increased the productivity of manufacturing workers so dramatically that most of them lost their jobs over the past several decades, the same thing is now happening to that stratum of the workforce where writing is a large part of the job. Projects and agendas that used to need ten staff with writing skills now need only one. What a writer previously needed ten hours to write, that same writer can now check and correct in one hour.
It’s less rewarding work, and because so many similar people are losing their jobs, those who still have one are likely to be under immense pressure to process huge amounts of AI-generated text in ever shorter time periods and without complaint. It is a terrible outcome both for those who lose their livelihoods and for the few who may still hang on. At least ChatGPT agrees that this is negative.
Of course, it isn’t just negative for the writers. It’s also a disaster for those who read such texts. A text written by a human writer and spellchecked with AI is like customized furniture built by hand by a carpenter using power tools. Today’s AI writing is like a house built on an automated assembly line. We all know the difference between the work of a skilled artisan with power tools and the stiflingly boring, poor-quality work of modern prefab homes. We also know how such craftspeople have become scarce, as their jobs are subsumed by corporate automation.
And that’s where the “overuse” part comes in. While letting AI write a boring essay for you “just this once” may feel utterly harmless, especially since you’re an experienced writer and you can check the essay over and touch it up after AI is done, multiplied by thousands and millions, that “just this once” will mean that most writers may never truly gain the needed skills for themselves. Writing student essays, ad copy, website copy and even obituaries was a mind-numbingly boring rite of passage for most of today’s writers.
But that’s the thing. No one actually goes from being a young creative with stories swimming in their head to an experienced and skilled writer without that workaday journeyman stage of cranking out copy. Pure talent is a myth believed only by people outside the writing profession and by some early beginners.
AI is taking over that training ground. And even though an individual writer could—through perseverance and tenacity—still learn the craft by writing every essay in school the old way, they will never again get a job writing basic copy fresh out of college. Those jobs that most of today’s writers cut their teeth on are truly a thing of the past, thanks to AI. My first writing internship was editing obituaries for my tiny hometown paper. My first paid writing gig was writing repetitive restaurant reviews. I didn’t love either of these jobs, but when I consider that the next generation will likely have no such toeholds to start climbing the steep wall of the writing profession, I feel grateful to have been born early.
4. Ethical and authenticity concerns
The number of ethical issues that have come up in the other points is already a bit overwhelming, from students using AI to fake their essays to the massive loss of good quality jobs, from the potential for abuse of “personalization” in marketing and customer service to the loss of craftsmanship and authenticity in writing. But there are still a few I haven’t yet mentioned.
One of the biggest ethical issues of AI is that the companies profiting from it are profiting off of stolen intellectual property. Today’s AI engines were trained on vast amounts of published online text, much of it in copyrighted works and periodicals. They may not copy them word for word as in traditional plagiarism but the basic concept of plagiarism still remains. AI copies vast amounts of text and then reshuffles it to create its own text and to determine what is correct usage.
For instance, I published articles in publications such as The Christian Science Monitor and Business Week back in the 2000s. It is very likely those articles and those of my colleagues were part of the vast supply of published texts that AI copied in order to produce its own texts. Neither the publications nor the authors have been compensated in any way. In fact, if we want to use AI for tasks like checking documents for mistakes, we have to pay the companies that pirated our work to do it. That seems pretty unethical to me.
So, out of four sentences listing positive and negative impacts, ChatGPT gave me two true negative statements, one true positive and one sort-of positive that turns out to be a negative for most people. I’m not convinced that there are really two sides to this issue.
But let me try one last thing and give this post to ChatGPT to get its response…
Well, that took a dark turn. It appears I have now been blocked from ChatGPT. I can no longer use it without logging in, as it recognizes my computer, and when I do log in, I simply cannot send any inquiries. The send button is permanently grayed out. I guess that’s response enough.
I am concerned about the impact of AI on writing, but I’m also very concerned about its impact on most aspects of public life. It is becoming ever easier to simply silence any voice of dissent.