We are going live with this Thursday at 5 PM ET. Join us on the Substack app!
https://open.substack.com/live-stream/53794?r=5jw33r&utm_medium=ios
More and more, AI-generated content litters the net.
It’s not clear to me how many people know that.
Neither of us is opposed to AI content as such. Sometimes its output is excellent; typically, it is mediocre. The fact that clearly AI-generated newsletters succeed on Substack seems like an indictment of everyone’s taste.
Would these letters have so many readers if people could tell within five seconds that they were AI-generated? We suspect not.
Ideally, we’d have accounts of how writers are using (or not using) AI tools.
So here’s ours.
In short, we do not generate Stoa Letters from AI.
Instead, we use AI as an editor. Occasionally, we may use a sentence or two that sounds better to our ears. Mostly, however, we use it as another filter to catch typos, odd sentences, and the like.
Both of us use AI as a “thought partner” for generating and critiquing ideas. It can be useful to pass ideas to an LLM and ask for sharp criticism and feedback. It’s not as good as a thoughtful human, but some of the more advanced models can be exceptionally useful.
Apart from the letter itself, we use AI to generate ideas for titles and subject lines for some emails. One last thing: our artwork is often generated with AI. That said, if you’re interested in making art for us, reach out.
We both believe that relying too much on external input, human or AI, will make you a worse writer. Outsourcing is often fine: if you hire someone to do a task for you, you may get worse at that task, yet the trade is often worth it. Writing is different. Because writing closely mirrors the content of our thought, outsourcing too much of it makes you a worse writer, and by extension a worse thinker.
And yet we clearly do outsource some of our thinking to others. So refusing to use human or AI editors at all would also be a mistake.
As the technology develops, we'll continue to think about how best to incorporate AI (or not!) into the letter, as well as into our own lives and work. Let us know what you think in the comments, and stay tuned for more developments on this front.
This is an interesting thought in the sense that one of the things that I've started doing is using AI to help me cognitively distance (to use a term I learned from Donald Robertson) when dealing with my ex. I also use it as a way to log her behavior over the long term. To balance out these thoughts, I also reflect by hand to gauge whether using AI got a favorable response. So basically I use it as a thought partner. I've also used it as an editor for my serialized memoir in podcast form that I post here. It doesn't edit my words or anything; it mostly gives me suggestions for editing the overall episode. I'd say I take about three to five suggestions out of the twelve or so it gives me. Then I write my response as marginalia there on why I followed it or not. Basically, I treat it like it's my editor and explain why I am keeping what I have the way I have it.
That being said, I am concerned about its impact on the environment and on people's ability to think critically. I teach in the English department at a Big 10 university, which just partnered with OpenAI on its edu model, so I'm generally cautiously curious about how it will impact me and the students I teach. Whatever I use it for, I make sure I spend twice as much time actually writing by hand, because that's where the good stuff lives.
AI as a critic is the best use; AI as a writer that thinks for you is a slippery slope.
DOAC has a recent podcast episode on this: “ChatGPT Brain Rot Debate”