GenAI Forces a Writing Quality Reckoning

Jun 27, 2024 3:05 pm



Howdy hey,


I’m a sucker for a good opening line. In book writing, your first line is the most important thing to get right. And journalists know the power of a good lede.


Probably my favorite opening line is from George Orwell’s 1984:


It was a bright cold day in April, and the clocks were striking thirteen.


Why? From the jump, it pulls you into another reality. It’s familiar, then suddenly foreign, much like the world of Oceania proves to be throughout the novel. And it doesn’t thump you over the head with that message — the unease builds, tiny at first, then monstrous by the final page.


Of course, the first line is but a fraction of what makes a book/article/story “good.” You could also think this first line sucks, illuminating the subjectivity inherent in qualifying something as “good.”


Welcome to the eternal challenge of defining “high-quality writing.” You’ve entered the gladiator pit where I’ve spent my professional writing career. Maddeningly, after many battles, a common rubric for defining “great writing” remains elusive.


But, in business writing, we need something to go off of. After all, we have KPIs and client expectations and tight deadlines. Company leaders need a quantifiable version of “good.”


The desire for quantification has led folks too far into relying on the tactical, copy-edit level of writing to determine quality: Clear the red spellcheck squiggles and ship it.


Mechanical, structural, and copy edits do matter, but only partially. We dump extraneous resources into managing the tactical elements and forgo the strategic side — the developmental components undergirding truly great writing.


AI is eating our lunch on the tactical front. So it’s high time we get more strategic in defining and assessing “high-quality writing.”


AI Detection Shows Us How Wrong We’ve Been

Nothing exemplifies AI’s tactical encroachment better than the emerging cottage industry of AI writing detection software. While most folks don’t understand how generative AI works, they can use its output to cobble together a publishable article (and publishable ≠ good).


Because AI models follow mathematical principles, some companies surmise they can review these texts and determine whether a human or a machine wrote them. AI detection relies on its own AI models to review content against predictable mechanical and structural patterns that may signal machined writing. That includes diction (e.g., an overuse of “delve”), sentence structure, and paragraph construction.


It’s not too divorced from the copy edits high school English teachers would ding you for. PR Daily offers a similar list for catching AI copy, including:

  • Cliched phrasing
  • Extraneous adjectives and adverbs
  • Subject-verb agreement, passive voice, & overused verbs
  • Conciseness and precision
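To make the surface-level nature of these checks concrete, here's a toy sketch of that kind of pattern matching. This is my own illustration, not how any real detector works (commercial tools use statistical language models, not regex lists); it just tallies a few of the "tells" described above.

```python
import re

# Toy illustration only: naive counters for a few surface-level
# "AI tells." Real detectors rely on statistical language models,
# not hand-written patterns like these.
AI_TELLS = {
    "overused diction": re.compile(r"\b(delve|tapestry|landscape|leverage)\b", re.I),
    "passive voice": re.compile(r"\b(is|are|was|were|been|being)\s+\w+ed\b", re.I),
    "cliche": re.compile(r"\bin today's fast-paced world\b", re.I),
}

def count_tells(text: str) -> dict[str, int]:
    """Count how often each surface-level pattern appears in the text."""
    return {name: len(pat.findall(text)) for name, pat in AI_TELLS.items()}

sample = "In today's fast-paced world, we must delve into the tapestry of data."
print(count_tells(sample))
# {'overused diction': 2, 'passive voice': 0, 'cliche': 1}
```

Note what this sketch can't do: it flags patterns, but it has no way to know whether a human or a machine produced them, which is exactly the weakness discussed below.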


AI copy is rife with these sloppy mechanical and structural errors; that's true. But…

  1. The models were trained on what's available online, so much of their writing dataset (read: most published content) reflects lazy language and sloppy writing.
  2. These errors are exactly what tools like Grammarly, ProWritingAid, Hemingway, and basic Google Docs/Microsoft Word spellcheck have highlighted for years, yet these errors have persisted.


AI copy isn't creating new errors; it's industrializing the common errors human writers make. AI detectors can't make that distinction, so they guess based on assessed probabilities: “Human writers probably wouldn't use that much passive voice, so it's more likely to be ChatGPT.”


Hinging a successful determination on guesswork would be annoying but tolerable if this tech weren’t resulting in human writers being fired over false “AI-generated copy” accusations. (And brace yourselves for the AI arms race as other companies promise tech to “humanize” AI content and cheat detectors.) 


AI detection codifies the use of tactical denotations to not only pass judgment on “human or robot” but also set the terms of what “quality work” is. Yet this focus on mechanical and structural components misses the more important question: If you’re so worried a writer used AI to generate your content, did you have anything worth saying in the first place? 


Better Thinking = Better Quality

Bad writing originates from bad thinking. If you don't begin the writing process pondering the stuff that makes writing interesting (your experiences, internal data, a unique stance or opinion) and couple it with the freedom to tell expressive stories, you get pablum from humans and AI alike.


Most readers intuit this. You can just kinda tell when a story is good or bad. But that doesn’t really help from a rubric-making perspective, does it?


When I was in my most recent content leadership seat, I had to assess my staff’s writing capabilities and quality. Like many leaders, I started with quantitative metrics:

  • How many pieces per week did they produce?
  • How long did it take to produce each piece?
  • What was the spelling/grammar/style error rate that tools like Grammarly reported?
  • How many days did content sit between the first and final drafts?


That data mattered for quantity planning but served as a poor proxy for quality. Maybe you write a piece in three days, but if your client hates it, or a media outlet passes on it, then the piece's speed-to-market wasn't relevant.


That's true for tactical edits, too. I can make Grammarly happy by accepting its basic and Premium™ suggestions. But if the client disagrees with the central thesis, or if a final reader's eyes drift after the first paragraph, my reduced error rate is meaningless in answering, “Is this piece good or not?”


Those quantitative metrics are what GenAI promises to improve. Write more stuff, faster. Tidy up those tactical errors with Grammarly and a watchful editor’s eye, and numerically, we should crush our high-quality content goals.


But my assessment of writing staff went beyond these data points. I’ve broken this down before, but quickly, every content piece I reviewed had to answer three questions. Does the article:

  • Meet the needs of our target audience?
  • Teach them something new?
  • Enrapture them in a compelling narrative?


Writing that fulfills those questions feels “good,” even though it’s harder to show on a tactical, copy-edit level. But that’s the stuff that got clients to say, “Yes, I love this! Write more for us, please.”


In those questions lie better proxies for understanding high-quality writing — proxies we need to strategically activate across brand and comms teams to survive the quantity onslaught GenAI will invite. 


How Do We Activate Quality Proxies?

“Three questions, Alex? Really? If it’s that simple, why hasn’t everyone fixed this problem?” 


In a word? 


Cost.


Specifically, it’s expensive to spend time, energy, and effort to accomplish what those questions reflect: 

  • Coherent content, marketing, and brand strategies 
  • High-quality information developed from trustworthy sources
  • Deep understanding of the developmental editing process


(tl;dr: It’s simply cheaper to click “Accept all of Grammarly’s suggestions” and be done.)


The more expensive option is better long-term for our teams and brands (and personal sanity). But it requires investing in the developmental components that form a great story's foundation and giving the story ample time to flourish. I've discussed these developmental elements previously, but they include things like:

  • Deeply researching topics of interest to gather nuances and form complex, interesting opinions
  • Gathering primary and secondary (statistically significant) research and data to qualify your opinions
  • Incorporating information and feedback from subject matter and writing experts to polish rough edges
  • Protecting a story’s beating heart (aka the most interesting content) from overediting as it proceeds through organizational reviews


As a result, our content reviews should spend less energy resolving spelling errors and more on understanding how the writing makes us feel and think. We must analyze the directions and magnitudes in which the message pulls us — or unpack why we feel nothing. 


And we need more time in the editing process to advocate for heavier, time-consuming changes and additions to achieve more meaningful outcomes. I can’t rewrite entire sections when the piece is due at 5 PM today after it sat on someone’s desk for a week. 


We don't benefit from strategic editing without respecting the costs a high-quality editing process exacts. But when you read that memorable, powerful piece, everyone realizes the juice was worth the squeeze.


Quality Doesn’t Stop at the Spellcheck Squiggle

This comes with specific concerns and caveats; we don't always need a “clocks striking thirteen” opener in our blog post about managing data infrastructure, and you could spend forever on developmental edits and lose your moment. It's a highly personalized balancing act.


But GenAI has forced us from the shallow end of the pool. We can no longer suffer content lacking that certain je ne sais quoi.


Writers still need expertise with tactical elements — if they can’t recognize and edit poorly constructed sentences, AI will devour them wholesale. But mechanical and structural competencies are table stakes, a minimum benchmark for high-quality writing.


Writers need to develop the competencies to transform vast stores of information into compelling stories that fit larger strategic goals. And leaders need to provide writers the time, space, and resources to accomplish this successfully.


This forms the foundation to build a more strategic, nuanced perspective on what qualifies as “good writing,” rubric or not. You’ll start recognizing those “clocks striking thirteen” moments in your company and industry and be ready to activate them meaningfully.


§


Stellar content about content

The Spectacular Failure of the Star Wars Hotel

by Jenny Nicholson




This video is four hours long and worth every minute you'll spend watching it (even The New York Times agrees with me!). It is a phenomenal takedown of a neat but poorly executed Star Wars Hotel concept at Walt Disney World (which failed after 18 months in operation).


For the marketing-minded, Jenny's video spends a while on how the messaging and the reality were incredibly misaligned and how that shone through in the marketing collateral. Watch Part II of her video for more, including something you'll never unsee: how paid Disney speakers/influencers don't talk like regular people when sharing their experiences. (Who calls the Cars ride "Radiator Springs Racers???")


§


What is ChatGPT Doing...And Why Does It Work?

by Stephen Wolfram, Founder & CEO of Wolfram Research



Stephen Wolfram is a famous mathematician, physicist, and entrepreneur — if you've been anywhere around math-y stuff, you've seen his name. I've had one of his apps, Wolfram Alpha, on my phone since I owned a Samsung Moment (with flip-out keyboard) in high school.


I've always enjoyed how he breaks down complex mathematical topics, and he does that with his detailed analysis of how ChatGPT works. Since I've been on an "understanding generative AI" kick, this article was well-timed for additional research. It's long (like, novella-long), but interesting.


§


PAN Communications acquires BLASTmedia

by Jess Ruderman, PRWeek



I haven't dropped breaking news in an email newsletter since my TechPoint days. As you may know, I cut my content team teeth at BLASTmedia. PAN Communications just announced they acquired my old firm and are transforming it into PANBlast. Congrats to both teams!


§


Content from my pocket of the galaxy

🎥 It's Time for Deeper Generative AI Discussion


A BBC article recently made the rounds with another "ChatGPT took my job" story.


While it accurately highlights the issues and threats writing talent faces today, it also illustrates the worsening GenAI hype-and-doom cycle.


Despite so much noise around generative AI, it feels like discourse has stalled.


Click here to watch the video.


§


🎥 What Does "High-Quality Writing" Mean?


"You need to write better." Great, what does that even mean?


Agencies and their writers must deliver written work that meets or exceeds their clients' quality expectations.


But you must understand what fuels a qualitative perception and the right levers to pull to adjust and improve.


Click here to watch the video.


§


I Discuss the Unsexy Side of Entrepreneurship on UNHIRED



I'm late to put this in my email (again), but I've seen and heard from a fair number of folks lately who are trapped in the entrepreneurial cycle of suck. Media influencers push the "hurry up and make millions today" type of hype marketing, which grates on real entrepreneurs' psyches.


Allison Nordenbrock Brown's podcast, UNHIRED, is such a good antidote to the cycle of suck. I have my episode here, but go listen to the other wonderful people sharing their experiences.


§


See you soon,

Alex
