Dear Worldcon,
I have no vested interest in the 2025 Worldcon, nor am I a Hugo finalist. I consider myself objective in this whole LLM debacle — to the extent that one can be objective in what has essentially become a baseless, ideological culture war.
I want to speak on behalf of the science fiction authors who are not afraid of Large Language Models, Gen AI, Machine Learning, or the ethical application of these technologies, for we are many. Sadly, we live in a time when sentiment mining can be nearly impossible due to variance in engagement and audience. Vocal minorities can become quite loud, drowning out the truth of the big picture.
That said, I want to make it clear that hatred and fear of AI are not universal among us. I, myself, am not only a sci-fi writer, but a technology writer. I’ve been in this space for over 20 years, and I’ve worked with AI firms and have trained LLM agents.
It’s clear to me that the widespread disdain for AI is largely fueled by ignorance — and to some extent, a well-deserved amount of fear. Unfortunately, the former tends to fuel the latter beyond what is deserved, or even reasonable.
I’m very aware of the origins of AI. I’m aware of how various models have been trained, and I’m extremely cognizant of how LLMs could be used to deprive writers of work. In fact, I’ve lost clients and tens of thousands of dollars of revenue over the past two years, because someone in an office decided that ChatGPT could do what I do. Yet I still advocate for the ethical use of AI.
Many of us have the clarity to separate our emotions from the truth. We understand that LLMs power modern search engines, and that prompting an LLM to assist with search is one very narrow remove from executing a Google search. We don’t see scripting or agent training as evil or malicious. We know the risks associated with Gen AI — and we know those risks can be mitigated or avoided altogether.
In short, we’re sci-fi writers who understand computer science, technology, and progress. (Imagine that.) We don’t fear Gen AI, because we recognize that it is a tool.
History repeats itself. I’m certain that the hordes of people who are attacking you are equally ‘guilty’ of many of the following:
- Using cameras, which harms painters and illustrators
- Further, using digital cameras, which annihilated the film photography industry
- Even further, using a camera phone, which harms camera manufacturers and photographers, and has pretty much eliminated the photojournalist career as a whole
- Using YouTube, which harms the environment (and has been doing so for a long time — still beating out AI)
- Watching streaming services, which harms network television (and also harms the environment more than AI)
- Using a search engine, most of which leverage evil, evil LLMs
- Using any number of tools designed to assist writers, which use iterations or variations of ML that are cousins of LLMs to do so
I could go on until the end of time making a case for the hypocrisy required to fight technological progress.
To sum this up, we’re not all coming for you with pitchforks and torches, and I do not wish to be even passively lumped into the fearful Luddite category simply because I’m an author. Yes, I’m a creative. I’ve been impacted by AI, both negatively and positively. But I’d be a blind fool to fight against using a tool as a tool.
As long as we’re fighting against AI like zealots — a fight the anti-AI camp can’t possibly win — we’re taking attention away from a far more important issue: the ethical use of AI.
In short, it isn’t going away, so let’s focus on making sure it’s used in ways that don’t actually harm us. In that regard, it bothers me to see so many raging and rending their garments over the use of an LLM for a bloody clerical task.
Tune out the hypocrites and zealots who radicalize at the mere mention of a word, and know that there are plenty of us who do not share in their rage.
-J