The good news first
I want to start with the good news, because for me, it matters.
AI has genuinely improved my quality of life.
Living with physical disabilities means some everyday tasks take far more energy than they should. Tools like voice-to-text, intelligent editing, and organizational assistance have given me back time and focus I simply didn’t have before.
I use AI as a personal assistant, not a replacement for my voice.
It helps me:
- dictate and clean up text when my hands won’t cooperate
- organize outlines and notes for my book
- keep track of ideas that would otherwise get lost
- fine-tune graphics, especially text placement, which is difficult for me due to spatial and visual processing challenges
These tools don’t write for me. They help me do what I was already trying to do.
That distinction matters.
What I deliberately do not use AI for
Just as important is what I don’t use AI for.
I don’t use it to:
- write emotionally manipulative stories
- manufacture “feel-good” content to pull people toward ad-heavy websites
- pretend a computer-generated story is a lived human experience
I also don’t use AI to create personal testimony, spiritual reflection, or emotionally charged narratives for the sole purpose of provoking a reaction.
If something carries my name or voice, it comes from my lived experience, reflection, and intention.
AI can assist the process. It should not impersonate the person.
How I personally use AI, and where I draw the line
I’m very intentional about how I use AI when I do use it.
I deliberately instruct tools like ChatGPT to strip out added content, not expand it. My goal is clarity, not emotional amplification. I ask it to stay within the parameters of my grounded writing voice, not to invent one for me.
Anything AI helps me with is gone over with a fine-tooth comb before I publish it.
I reread everything. I remove anything that doesn’t sound like me. I make sure the tone, wording, and intent are my own.
If I wouldn’t say it out loud, it doesn’t stay on the page. Those who know me well can read my writing and say, “That sounds just like Kath.”
AI helps me edit and organize. It does not decide what I think, feel, or say. I remain the writer, the editor, and the one responsible for what is shared.
That distinction is not incidental. It’s the difference between assistance and impersonation.
Where the concern comes in
Over the past while, I’ve noticed a sharp rise in so-called “wholesome” good-news stories circulating online, especially on Facebook, Instagram, and Threads, and in Reels.
At first glance, they look harmless. Encouraging, even.
But many of these stories are:
- entirely AI-generated
- published on anonymous websites
- designed to drive clicks, ads, and data collection
- sometimes bundled with aggressive tracking or malware
They aren’t written to inform or uplift. They’re written to extract attention.
What makes this tricky is that they’re often tailored to specific age groups or demographics.
Stories about well-known musicians, beloved public figures, military reunions, hospital miracles, or animals are chosen very deliberately. A story that mentions someone like Joni Mitchell or Betty White, for example, is far more likely to land emotionally with people in my age range than with a younger audience.
That’s not accidental. It’s targeted.
Why “wholesome” has become a red flag
One small but telling detail is language.
In everyday English, we don’t actually use the word “wholesome” the way these stories do. Not repeatedly. Not as a headline hook. Not as a moral stamp of approval.
When I see “wholesome” stacked alongside words like “heartwarming,” “inspiring,” or “faith in humanity restored,” with no concrete details underneath, it gives me pause.
Real stories can be verified. Synthetic ones rely on tone.
What this series is — and isn’t
This isn’t about shaming anyone for sharing something with good intentions.
Most people pass these stories along because they want goodness to win. I understand that impulse.
This is simply about discernment, especially in a digital landscape that has changed very quickly.
I’ll be posting at least one more article to help you spot AI-crafted content that’s deliberately manufactured to take advantage of you rather than genuinely offer something of value. If you’re curious and want to look a little closer at what’s crossing your screen, stay tuned.
Until next time,
©2025 Katherine Walden

