The more I follow the culture war around AI, the more I’m convinced that the “against it” crowd believes that AI is some singular entity, like SkyNet, lurking in the darkness and feeding — somehow — off of authors’ unearned royalties.
This latest bit of news hasn’t changed my mind. In short, Seattle Worldcon used ChatGPT to automate a process, and some people are losing their shit over it. This is my attempt at counterpointing their collective madness.
The Public Statement
I can’t deep dive into this situation without first giving you the backstory. Since I’m lazy and eager to get into the details, here’s an excerpt from an article on File770 that goes into it. They also link to the original public statement:
Seattle Worldcon 2025 Chair Kathy Bond today issued a public statement attempting to defend the use of ChatGPT as part of the screening process for program participants. The comments have been highly negative.
…We received more than 1,300 panelist applicants for Seattle Worldcon 2025. Building on the work of previous Worldcons, we chose to vet program participants before inviting them to be on our program. We communicated this intention to applicants in the instructions of our panelist interest form.
In order to enhance our process for vetting, volunteer staff also chose to test a process utilizing a script that used ChatGPT. The sole purpose of using this LLM was to automate and aggregate the usual online searches for participant vetting, which can take up to 10–30 minutes per applicant as you enter a person’s name, plus the search terms one by one. Using this script drastically shortened the search process by finding and aggregating sources to review.
Specifically, we created a query, including a requirement to provide sources, and entered no information about the applicant into the script except for their name. As generative AI can be unreliable, we built in an additional step for human review of all results with additional searches done by a human as necessary. An expert in LLMs who has been working in the field since the 1990s reviewed our process and found that privacy was protected and respected, but cautioned that, as we knew, the process might return false results.
The results were then passed back to the Program division head and track leads. Track leads who were interested in participants provided additional review of the results. Absolutely no participants were denied a place on the program based solely on the LLM search. Once again, let us reiterate that no participants were denied a place on the program based solely on the LLM search.
Using this process saved literally hundreds of hours of volunteer staff time, and we believe it resulted in more accurate vetting after the step of checking any purported negative results….
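Before we get to the outrage, let’s demystify what “a script that used ChatGPT” even means, because I suspect a lot of the anger comes from imagining something far more sinister. Worldcon hasn’t published their code, so everything below (the prompt wording, the model choice, the function name) is my own guess at the general shape of such a tool, not their actual script:

```python
# Hypothetical sketch -- Worldcon has NOT published their script; this is
# my guess at the shape of it. Assumes the OpenAI Python SDK
# (pip install openai) and an API key in the OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

VETTING_PROMPT = (
    "You are assisting with convention panelist vetting. Report any public "
    "controversies, code-of-conduct issues, or harassment reports involving "
    "the person named below. Cite a source URL for every claim, and state "
    "explicitly if you find nothing.\n\nName: {name}"
)

def vet_applicant(name: str) -> str:
    """Return raw, UNVERIFIED aggregate results for one applicant."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model is a guess; the statement only says "ChatGPT"
        messages=[{"role": "user", "content": VETTING_PROMPT.format(name=name)}],
        temperature=0,  # keep the output as stable as possible
    )
    # A plain chat model can't browse the live web; truly aggregating
    # "the usual online searches" would need a search-capable tool on top.
    return response.choices[0].message.content

# Every result still goes to a human reviewer -- the extra step Worldcon
# says they built in:
# print(vet_applicant("Jane Q. Applicant"))
```

That’s the terrifying machine at the heart of all this: a prompt, a name, and a pile of links handed off to a human.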
The Comments on Bluesky (and Elsewhere)
The File770 article then illustrates the negativity with a few screengrabs of Bluesky posts. I’m going to address these, as they are as loaded with misconception and twisted logic as we’ve all come to expect from the vehement anti-AI crowd*.
*Look, it’s okay to be against AI. You can have your reasons. But why not choose to have good reasons? It’s not helping anything to keep harping on points that have been disproven or flat out make zero sense. Outrage just bogs the conversation down with stupidity. More on that in a minute.
The following are not in the same order they were posted in the article (you can see that for yourself at the link I provided), but that doesn’t change a thing. I’ve also broken them up a bit so I can add my comments:
I hope this comment was meant to be cheeky, not serious, for two reasons.
- Comparing any modern use case of LLMs to an experience from 2023 shows a total lack of understanding of how quickly LLMs evolve. Today’s models are vastly improved over the models of two months ago, let alone two years ago.
- The generalization of how GPT was used is, frankly, ignorant. There’s likely zero comparison between the prompting used by this person in 2023 and the prompting used by Seattle Worldcon. This is like saying “I ate the rotting flesh of a dead fish I found on the side of the road and almost died, so there’s NO REASON why that restaurant should be serving salmon.” It’s absurd, because the connections are so absolutely tenuous.
The first one here is just hateful vitriol. It’s loaded with erroneous understanding of both how LLMs work and how this particular LLM was being used. ChatGPT is not a ‘plagiarism machine’ by any stretch. It’s incapable of plagiarism. This is a myth that the anti-AI crowd clings to, but it holds no water for anyone who even remotely understands how LLMs work.
The thing about it ‘destroying the planet’ is another grasp at straws by the anti-AI crowd. Yes, AI uses a lot of processing power, thereby consuming a lot of electricity and cooling resources. BUT…
These folks always seem to forget how AI is not the first or only thing that consumes shitloads of resources. It’s also not the most frivolous thing, by a long shot. You know what self-absorbed, unnecessary, and probably harmful technology consumes a metric boatload of resources, too? Social media.
Especially YouTube. It will take AI a few years to catch up with the environmental impact YouTube has accrued. Yet, why aren’t these people screaming for an end to YouTube? Instead, they hypocritically use the platform to talk about how harmful AI is to the environment. Makes no sense, unless you accept that this isn’t about facts — it’s about feelings. It’s about a grift.
And guess what? AI is actually being used to reduce the environmental impact of other things, like supply chains and power grids. Hell, AI is being used to reduce its own environmental impact! Meanwhile, YouTube is doing nothing to improve its footprint. What it is doing is continuing to promote consumption, grow, and use more resources.
Hm. Perhaps these people should shift their anger toward something else?
And the fourth post? The one about ‘FUCK YOU’ or whatever? That’s a very angry person who, again, didn’t read the article and has some kind of zealot’s eye for the term ‘AI’. The LLM didn’t make the decisions. Humans did.
If they’d read on instead of shitting their pants and rushing into their own spiteful, angry rant, they’d have known that. People who react this emotionally to simply seeing the word ‘ChatGPT’ will likely Luddite themselves out of the conversation very quickly. Don’t be one of them…learn. Understand.
I’m not going to obscure the name of this person, because they’re the only commenter who made any sense (EDIT: I’ve since found other commenters who made sense, but this one stood out when I was first capturing screengrabs. Thanks D.O. for contributing!):
Good job, Chris!
Now I’m going to remove the other names from the other comments, because these people will be pissed when I point out how wrong they are:
Wrong. Perplexity is an LLM-driven search tool that beats the living hell out of anything else out there. You can set it up to run extremely specific, structured searches and cite its sources. (I wouldn’t use ChatGPT to do this kind of querying, so if I do have any gripe against what Worldcon did, it’s possibly using the wrong model for the job.)
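For the curious, here’s what that kind of structured, source-cited querying looks like in practice: a minimal sketch assuming Perplexity’s OpenAI-compatible API endpoint. The key, the model name, and the query are all placeholders, so check their current docs before borrowing any of this:

```python
from openai import OpenAI

# Assumption: Perplexity exposes an OpenAI-compatible chat endpoint at
# https://api.perplexity.ai with web-search-backed "sonar" models.
client = OpenAI(
    api_key="pplx-...",  # placeholder -- your Perplexity API key
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar",  # one of Perplexity's online-search models
    messages=[
        {"role": "system",
         "content": "Answer only from live web results. Cite a URL for every claim."},
        {"role": "user",
         "content": "Summarize any public convention-related reports involving <applicant name>."},
    ],
)
print(response.choices[0].message.content)  # answer with inline source URLs
```

Same clerical aggregation job, but with live search grounding and citations baked in, which is exactly what you want for something like vetting.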
No, you shouldn’t. Either they queried on a public model that doesn’t train longitudinally based on user prompts, or it was within a private agent that operates independently of everyone else’s agents. Learn how LLMs work so that you don’t get upset and anxious unnecessarily. It will be good for you!
There’s nothing in their statement about using bots. In fact, their statement claims the exact opposite. Jumping to the conclusion that AI bots took over the entire process is just evidence of anti-AI delusion. Next, you’ll be talking about how ChatGPT might order a drone strike on your house because someone chicken-pecked the wrong prompt into their Dell.
The Bigger Problem Here
Look, I can pick apart anti-AI ragebaiters all day long. I’ve done it in person, at events, and anywhere that someone foolishly tries to convince me that their feelings are more correct than my facts.
And that’s the kind of road we’re going down here. So many people believe that their feelings are correct. There’s this absurd, annoying problem where society at large has devalued the idea of ‘opinion’ such that everyone feels the need to justify their feelings as correct.
But feelings aren’t correct any more than they’re incorrect. That’s the thing with feelings. And that’s why it’s okay to dislike AI. I will gladly bow to anyone who says they just don’t like AI. They can even dislike people who advocate for the ethical use of AI, like me. Boycott my books because I don’t think AI is evil — even though I don’t use AI to write books. That’s all fine.
I won’t even ask these people to explain themselves, because it’s their opinion and they’re entitled to it.
But it seems like there’s a widespread need to step over that line of feelings and grasp for arguments. It’s not enough to just feel that AI is bad, now they want or need to throw ‘facts’ and ‘logic’ behind their opinion.
That’s where it breaks down. Because I have yet to see a ‘fact’ used by the anti-AI side that can’t be dismantled in five seconds. (See above.)
What I’m getting at is that I don’t see anti-AI winning anything by using hard logic in their fight. In fact, it makes no sense, because hard logic is pretty much on the side of technology, economics, and all of the other drivers behind AI.
Anti-AI folks claim to be in favor of humanity, so that’s where they need to keep the fight. Human things like feelings. Opinions. Stick to disliking AI for the sake of it, and you might be heard. As soon as you start quoting erroneous claims, brewing conspiracies about leaked data, and harping about environmental impact, you’re going to be shut down by actual facts and reason.
Don’t Be So Reactive!
The simple truth is that the above comments are evidence of reactionary, knee-jerk impulses. Most of these people clearly didn’t even read the article they’re firing off about.
We’re dealing with a group that sees the term ‘AI’ and, without any context or understanding, flips their fucking wigs and goes on the warpath.
Again, this isn’t wise. It isn’t wise because this raw, emotional response doesn’t allow you to pick your battles at all. It turns the anti-AI side into radical zealots and far, far fewer people will listen to them once that happens.
Strive instead to understand each of these use cases. Understand how the AI works and where it actually does create ethical problems. Because it does! There should be concern in those cases. But drawing attention to the mundane, defensible uses of AI does nothing but muddy the waters.
Pick your battles. Continue fighting for ethical AI. I’m all for it. But stop flying off the handle and looking the fool whenever you see those two letters smashed together. It doesn’t help the ethical AI cause at all.
Most importantly, it’s good to realize and accept that hatred of AI, distrust of AI — whatever — these are feelings. You have a right to them, but other people have the right to feel their own feelings. Attacking, boycotting, or cancelling people just because they don’t feel the way you do about AI (or about anything) is awful. It’s not justifiable. It’s bullying. It’s important for you to acknowledge that when you lash out with anger and accusations at someone for using AI, you’re choosing to be a bully. You’re choosing to use rage and threats to control someone else.
And that just sucks.
EDIT (5/3/25): I’ve noticed that this response is ranking fairly high on search engines since I posted it (probably because our SkyNet overlords put it there. I’m an ally!) so I figured I’d better add some additional notes…
Part Two: More Seattle Worldcon Anti-AI Comments Refuted
Seems like this hokum controversy is gaining steam, and the sheer weight of the virtue signaling is oppressive at this point.
Yes, we get that creatives feel personally attacked by AI. It’s a thing, and I understand. But I’m still seeing so many folks drawing comparisons between an AI prompt used for boring-ass clerical tasks and AI that could, conceivably, plagiarize someone’s novel. I’m not seeing the connection.
That’s why this is virtue signaling, writ large. It’s not about how the AI was used; it’s about this weird desire everyone has to flip their shit as soon as AI is mentioned. Again, it’s folks self-identifying as misinformed and reactionary, and it does nothing to push the conversation toward ethical use of AI.
BUT WAIT!
I’m ready for the folks who say ‘it can never be ethically used because it was created in an unethical way’, and I’ll be damned if this isn’t some kind of Millennial reasoning at work. It’s the kind of thinking that removes Columbus Day from the calendar and wants people to feel guilty about things their ancestors did hundreds of years before they were born.
Sorry, but you can’t cancel everything you don’t like. Some things, like technologies with massive upside potential, are not going to yield to people’s emotions. That’s why our duty is to chaperone the implementation of AI — that’s how we have a positive impact on all of this. Not by pretending it doesn’t exist. Not by trying to bully it out of the public consciousness.
Anyway, here are some more clips:
Sigh. I don’t know how many times it needs to be clarified that humans were involved in this vetting process. Nothing in Worldcon’s statement said “we typed in a basic-ass prompt and let the AI decide everything while we went to lunch.”
And here’s the thing: the above post is just wrong. I’ll restate that I’m a technology writer, and I’ve been one for two decades or so. I write thousands of words a week on tech, and AI-assisted research is one of the most useful advances in internet technology that I’ve ever beheld. If you know how to use it, AI searches beat the hell out of Googling. It will cite sources and make it incredibly easy to trace information gathering and fact check the work.
That’s the rub, though. You have to know how to use these tools properly. People who just scream and break down whenever they see “Gen AI” do not know how to use them, because they’re too far up on their high horse to learn.
It’s a lot more complex than just typing a prompt into Gemini.
I caution you: AI is not going to replace everyone. It’s going to replace people who refuse to accept that AI exists. Such people are willfully embracing obsolescence, and as a sci-fi author and technologist, I have no sympathy for that mindset. History has proven a jillion times over that when we adapt to technology and learn to wield it, we prosper. When we fight it, we bury ourselves.
Make your choice wisely.
I missed this one last time, probably because it makes so little sense that my mind edited it out of existence.
I tried several times to write an amusing metaphor to illustrate how off base this is, but the remark is so removed from reality that I can’t even land on something that does it justice.
Here’s the TL;DR: The broader AI model isn’t training itself on individual users’ prompts. Think about how much of a clusterfuck it would be if it did. End of story.
Well, yeah. I think we’d all ask people to be factual when it comes to making decisions. Especially when that decision is “I’m gonna rant and scream and boycott and tell everyone how naughty Seattle Worldcon is!” We’re talking about people’s lives and reputations, and knee-jerk emotional reactions to AI are not more important than civility, common sense, and a bit of due diligence. Again, this feels like something we’ve forgotten as a society. We’ve been led down this bullshit path of believing that if you feel something hard enough, it’s true. (Curse you, ‘The Secret’!)
Ah! And the last paragraph of the above clipping is a great illustration of my point (and I’d even forgotten it was there). “They all think that AI weeded them out…” So, there’s no proof? At all? AI is just the boogieman, so blame it for everything. Got it!
I thought this one was interesting because it appears to be complaining that ChatGPT won’t be critical enough of the people it’s asked to research. That’s probably a good thing, and it’s the reason humans were always involved in the process. Remember how they openly stated that?
We’ve been promised more information about the process on Tuesday, so we’ll probably find out, but I suspect the agent wasn’t asked to conduct rigorous background checks on every individual. AI is typically implemented from the ground up. What’s the easiest, most mundane task we can automate? Okay, so what would the bored intern be doing with the application slush pile? (Probably checking to see who has the largest social media followings, tbh.) Then that process kicks the MOST LIKELY candidates up the chain for manual review.
I mean, I don’t know at all how they mapped their process, but that’s an example of how it could easily work without being set to remove, exclude, or disqualify anyone. After all, it’s a selection process, not a rejection process.
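Here’s that same speculation as toy code. Again: my guess at the shape of the pipeline, not Worldcon’s actual process. Notice the key property: the whole pool goes in and the whole pool comes out. The machine only reorders and annotates; humans do all the deciding.

```python
# Toy triage -- pure speculation, not Worldcon's actual process.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    followers: int       # the hypothetical "bored intern" metric from above
    llm_notes: str = ""  # unverified search aggregate, for human eyes only

def triage(pool: list[Applicant]) -> list[Applicant]:
    """Return the SAME pool, ordered for human review. Nothing is removed."""
    for a in pool:
        a.llm_notes = f"(unverified LLM search results for {a.name})"  # stand-in for the earlier sketch
    return sorted(pool, key=lambda a: a.followers, reverse=True)

pool = [Applicant("A. Author", 120), Applicant("B. Blogger", 50_000)]
for applicant in triage(pool):
    print(applicant.name, "->", applicant.llm_notes)  # track leads review every entry
```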
If this is your concern, be glad they didn’t use Grok! But seriously, I’m wondering how identity politics came into this. Is it really necessary to double-fist your ragebait topics?
This person might be on the verge of enlightenment. You see, they’re realizing that they’re causing themselves a ton of unnecessary stress and internal conflict for no reason. They’re right there…I can feel it.
Look! You can see the sparks behind their eyes! They’re realizing that all of those verbs used to describe their problem are their verbs. They’re creating the anxiety, not AI! They’re putting themselves in a cage because they are choosing a weird hill to die on.
Any minute now…just give it time…
All joking aside, this is madness. For someone to shoot themselves in the foot over this is absolutely fucking absurd. I’ll tell you this: all the virtue signalers who are jumping ship and screaming from the mountaintops because of this pseudo-controversy should broaden their perspectives a bit.
They’d see that most of the world does not react that way to LLMs. They’d see that going apeshit and decrying an entire organization for doing a freakin’ LLM-assisted search is not some kind of moral victory…it’s a toddler’s tantrum.
LLMs, Gen AI, ML, RPA…these things are everywhere. Saying that you’re going to keep your work safely tucked away from these technologies is absolutely untenable. Did you do any Google searches to research your book? Sorry, you already used ‘AI’. Are you using social media to promote your book? You’re benefiting from ‘AI’.
Chances are you’ve already compromised that stalwart, hardline stance against AI, so why keep drawing arbitrary lines in the sand?
Image, I’m guessing. Optics.
So many people are bandwagoning this because it looks like a fast road to kudos and backslapping. But I believe some rude awakenings are a-comin’. Look beyond the insular groups of anti-AI types and you’ll see what I mean.
The rest of the world — including other authors and creatives — doesn’t see moral superiority and anti-AI identity heroics. They just see childish white-knighting by people who refuse to take the time to learn about what’s fueling all that rage.
Yes, I said other authors and creatives. Because I also speak to a lot of writers, artists, and so forth who don’t fear AI. We understand it. We want to guide it in the right direction. And there are a lot of us, too.
You just don’t know it because most of us are rightfully afraid of being cancelled simply for having an opinion on AI that doesn’t fit the grift. I’ve stopped giving a damn because the longer this fight goes on, the more I see who’s coming out of it smelling like roses.
And it ain’t the folks who can’t talk about AI without soiling themselves and leaving the room in a huff.
I’ve written a lot about AI in the creative space, and this recent article expands on my takes including my super fun AI Agreement: The Dumbest Anti-AI Diss I’ve Heard Yet (They Brought Cyberpunk Into It 🙄)
And here’s my open letter to the Worldcon folks: Open Letter to Seattle Worldcon Regarding ChatGPT and AI