> 📗 **Note:** This story is a bit of a prequel to [Liminal](https://eleanorkonik.com/liminal/) %% ( [[2021-09-29 Liminal (DRAFT)]] ) %%, which itself takes place several centuries before [Barnacles](https://eleanorkonik.com/barnacles/) %% ( [[2022-05-18 Barnacles (MF) (DRAFT)]] ) %%. It does not, however, require any prior knowledge of previous works to enjoy. If you'd prefer to skip to the Afterword, this week's topic is the unexpected usefulness of bad AI.
Drums beat and wild boars screamed as Stef began the sacrifices to imbue her new ship with the strength to stand against bog rot and woodsucker sprites. Her whole body tense, she watched as four acolytes brought blood and myrtle to the dockside altar in a cauldron of glassy black; an heirloom, ancient and precious, but no less sturdy for its age.
A fierce smile split her face as she took the cauldron, muscles bulging with the strain, and splashed the contents across the elevated hull. "Spite!" she screamed, loud and piercing as a raven scout spotting carrion, and named her ship for the reminder it would serve; for why else but rage against her gainsayers would she be planning to cross the sea at the head of a fleet?
_Need_ was too weak a word.
## Afterword
It's interesting to me that people get so upset about AI suggestions in writing tools. Not the ethics of it (I understand the ethical debates), but just the sheer annoyance of the suggested correction. That little underline or tooltip about a misplaced comma really gets to some people, myself included at times. So I can understand why it would be frustrating to have a computer constantly trying to tell you how to do something, especially if it's something you're already good at and the computer is not.
### When the computer is wrong, and you are right
Frustrating or not, though, it can be helpful for reasons I hadn't expected when I first got access to autocomplete suggestions in my email.
It's probably strange, but whenever I see a really bad AI suggestion, it motivates me to keep writing. I can't stand seeing something wrong and not doing anything about it. I'm hardly unique in this; one of the most obnoxious (but true) pieces of advice I've ever seen is that if you want to get a question answered on the internet, and merely asking doesn't get a response, say something on the topic that you know is wrong – then people will leap to correct you, and you'll get an answer to your original question much more easily.
I've never quite been able to bring myself to do this – the reputational damage seems like more trouble than an answer is worth, and I'm certainly not going to go through the trouble of making a sockpuppet account in order to get an answer... but it's often on my mind.
Perhaps XKCD said it best; [duty calls](https://xkcd.com/386/) when someone is wrong on the internet ;)
To be clear, I don't think this happens because people are unhelpful or anything. Often people won't answer your questions because they feel like they are missing information. But if they see that you've said something very wrong, and they know it's wrong, then they feel motivated to point out that you are wrong.
That's how I feel whenever the computer is wrong.
### When the computer is right, and you are wrong
That said, I think the other big reason people get so irritated by AI suggestions is that they feel like the computer is telling them they're doing something wrong. This is annoying even if the computer is correct! If you're using a writing tool and the AI keeps suggesting changes, it can feel like the computer is saying your writing is bad. Being constantly nagged about low-stakes things is annoying whether it's a computer doing it, or a spouse, or a child.
I can imagine that it feels a bit like getting cut off when you're talking and someone interrupts you to finish the sentence. Some people (like me) like that, because it makes them feel like the other person is engaged, listening, paying attention, and on the same wavelength. Other people (like my husband 😅) hate it because it throws off their train of thought and feels a little insulting, like the other person didn't want to hear what they had to say. So I imagine that preferences with these tools work in much the same way as conversational preferences.
Some people probably respond better to AI assistants than others – in the same way that different people respond to stress and trauma differently.
Some highly successful celebrities report that a big reason they kept pushing past the point of reason was a desire to spite someone who told them they wouldn't make it in their field. I am ... not that kind of person. And I don't think it's healthy in relationships to motivate people by making them want to spite you, despite all the sports and war movies I've seen where coaches and sergeants seem to use that method as a matter of habit.
Computers don't care if we like them, though, so they just keep blindly pushing until we turn off the annoying feature.
But I try to resist the desire to turn off the annoying thing. I think the AI suggestions can be helpful, even if they're irritating; they help me to focus and to stay on task, if only because fixing each little problem is an easier task than starting from scratch, even if what I wind up with bears absolutely no resemblance to the original suggestion my computer shoved at me 😅
---
A note for the curious: I did not use any autocompletion stuff to write _Spite –_ I usually don't even type my first drafts, I handwrite them. I did, however, dictate the original draft of this article while on a walk, and then run it through Hyperwrite to try to save myself some time fixing the wall of text transcription. As I [mentioned on Twitter](https://twitter.com/EleanorKonik/status/1590022152799260674), this method was not very effective, but it did help me get over some of [the difficulties of task initiation](https://hallowelltodaro.com/blog-raw-feed/2021/8/4/task-initiation-tips-and-tricks-for-getting-started), even though I rewrote the whole thing twice 😂