Everyone Is Still Thinking About AI the Wrong Way (But That’s Normal)

Another day, another meeting with people in the tech field. Not all of my meetings are about AI, but this one most certainly was.

Without going into the boring details, I’m glad to report that my predictions and general remarks about AI implementation continue to hold up:

  • Most technical-minded people are aware that AI can’t be trusted to perform tasks without human intervention.
  • Developers and implementers alike are not looking to clear-cut the labor force; we talk a lot about creating new jobs.


We also agreed on another point: people are still thinking in entirely the wrong way about implementing AI.

Same Crap, Different Tech

When Henry Ford introduced the assembly line, it put a lot of people out of work. It also created new jobs suited to the new paradigm — and those jobs paid better.

Like I say all the time, AI is morally neutral. The future will be determined by how it is used. (And that’s why I wish people would stop wasting their time moaning about Gen AI being used to make images and focus on AI being used in places where it really will screw us all over — like in government, law enforcement, and surveillance.)

What it all comes down to right now is that stage nearly every technology stumbles through during its rise: the transition from injection to integration to transformation.


Injection

In many ways, Gen AI is still in the “injection” part of its lifecycle. This is where most people don’t really know how to use it, and almost no one knows how to use it best. In a mad grab for relevance, companies begin “AI-washing” their products by slapping on whatever AI-assist feature they can.

You can tell when someone is stuck in this immature stage of thinking because they still imagine AI in terms of replacing people. Like a one-for-one swap. That’s not what’s going to happen in the long run.


Integration

In some applications and industries, AI is already in the “integration” stage. This is where people have figured out how to layer AI assistance on top of what they’re already doing — at least with some degree of success.

Some sectors and users aren’t here yet, but it’s an inevitable part of the journey. The problem is that the tech still isn’t operating within a reality it has redefined — it’s a matter of shoving a new idea into old processes, and this cumbersome phase never yields the best results.


Transformation

It takes some time to get here — and AI isn’t remotely near this point — but this is the stage when a new technology has transformed its surroundings rather than being haphazardly shoved into place wherever people think it might fit.

Trust me…even in the deep recesses of the tech space, most people are still thinking about AI in legacy terms. They see it as a plug-in, a feature, a module.

In reality, AI is a power plant and a chassis. We shouldn’t be strapping it to existing systems; we should be building entire systems on top of it.

As I’ve said many times before, it’s like someone invented the internal combustion engine and everyone is trying to figure out how to strap it to a horse. 

When true transformation happens, it will look so completely different from what we’re used to that I can only theorize about it. AI will no longer change the way things are done; the very way things are done will change because of AI.

So when you’re imagining the future — whether you’re okay with AI or against it — you’re doing yourself a disservice by thinking of modern systems with AI slapped on top. That’s not the best, or even correct, way to visualize AI’s true potential.

I’ll give you an example.

In a previous contract, I was consulting with a company that was developing a brand new piece of software for managing tasks within a business. We had many discussions about how to integrate AI into this product. 

I would sit back and listen to (legacy) developers and management talk…

…let’s put an AI chat bot on the customer service tickets!

…is there a way to build AI into the accounting feature so it can generate data visualization on labor costs month over month?

…can the AI track user engagement and send messages to people who go their whole workday without logging in?

That’s the kind of stuff they were throwing around. It was the absolute embodiment of AI-washing, because 90% of the tasks they wanted to “AI-enhance” could have easily been handled by legacy automation.

Finally, when I spoke up, I told them they were not thinking far enough ahead. I told them they needed to tear down their entire idea for this product and start over before they got too buried in the legacy development mindset.

And what I suggested was met with equal measures of outrage and appreciation.

I pointed to the mockups of their software’s UI. It looked like any other business application you’ve ever seen. Menu along the left. Tabs. A section for accounting, a section for sales, a section for service tickets, and so on.

“You want this to be AI-forward and innovative, right?” I asked. “So go back to what your software is supposed to be doing for the end user.”

It was being designed to help IT companies manage their clients, their techs, their vendors…all sorts of things. It was business management for a very specific sector, essentially.

“So, the menus and graphics aren’t integral to doing that,” I said. “That’s just what software is supposed to look like now. If you want AI-forward and cutting edge, you get rid of all that. No more sections. No more modules. You log in and have a big, white screen. And a button that lets you either type or speak your command.”

Naturally, a few people got confused and assumed I meant we had to build an Alexa or Siri. Some kind of “voice assistant” instead of a business management platform. But that was only partially right. 

My point was that the strength of AI is not computation or generating reports. It’s really coming into its own as a means of collating and reconciling data from different sources. What I was getting at is that they should simply build an AI agent that’s designed to tap into all these integrations (data feeds from other sources like vendor software) and work with that data — freeform.

In other words, all the developers should have worried about was a) giving the model access to the right datastores; and b) training a model that behaves like a top-tier business consultant.

No need to click through menus or activate features. You just talk to the software. Tell it what you need. Hell, you could even say “I’m struggling to figure out why I’m losing money on client X, can you help?” and this software could answer you.

No buttons. No hard coding.
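To make the “big white screen” idea concrete, here’s a minimal sketch of that architecture: one freeform entry point that collates data from every integration instead of routing users through menus and modules. Everything here — the integration names, the data feeds, the figures — is hypothetical illustration, not the actual product, and the final LLM step is stubbed out.

```python
# Sketch of a freeform, integration-first agent entry point.
# All names and data are made up for illustration.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Integration:
    """A data feed the agent can tap (e.g. vendor software, ticketing)."""
    name: str
    fetch: Callable[[str], Dict]  # pulls data relevant to a query


def vendor_costs(query: str) -> Dict:
    # Stand-in for a real vendor-software feed.
    return {"client_x_spend": 12_400}


def service_tickets(query: str) -> Dict:
    # Stand-in for a ticketing-system feed.
    return {"client_x_open_tickets": 37, "client_x_hours_logged": 310}


INTEGRATIONS = [
    Integration("vendors", vendor_costs),
    Integration("tickets", service_tickets),
]


def handle_command(command: str) -> Dict:
    """The single entry point: collate data from every integration,
    then hand the combined context to the model. (Here the model step
    is omitted; in practice you'd pass `context` plus the command to
    an LLM trained to reason like a business consultant.)"""
    context: Dict = {}
    for integration in INTEGRATIONS:
        context.update(integration.fetch(command))
    return context


# "I'm struggling to figure out why I'm losing money on client X"
print(handle_command("why am I losing money on client X?"))
```

The point of the sketch is what’s *missing*: no per-feature UI code, no hard-coded report types — just data access plus a model capable of working with it freeform.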

This is a very specific example, but I hope it illustrates my point. We haven’t crossed the threshold where enough people fully understand how AI changes the game — they’re stuck on where to plug it into the obsolete ways they’re doing things now.

And that’s why the doom and gloomers have so much to doom and gloom over. When the tech is in the early, poorly-envisioned stage, most people tend to use it like they’re waving around a loaded gun. Whether it’s from ignorance of the bigger picture, greed, or a need to race to the market, a lot of mistakes get made.

And don’t get me wrong — even as an AI implementer and advocate for its ethical use, I know there are going to be some big mistakes in the future. I don’t suspect…I know. And I dread the possibilities.

But on the bright side, history tells us that the only people who will be displaced are those who refuse to accept change. That’s why it pains me to see so many creatives refusing to be in the same room with AI and complaining about unemployment at the same time. 

It hurts no one to adapt a little. To learn a little. And I’ve found that the more someone learns about the realities of Gen AI (and LLMs, ML, RPA, etc.), the less scared and angry they are about it. That’s a win-win for everyone.
