It’s important to remember that ChatGPT has been on the public’s radar since around late November/early December of 2022. It’s still a while, a long while in the AI hype cycle, before we’re at three full years of AI, but we’re getting close to two and a half years and have seen multiple model updates across all the major LLM and LLM-inclusive AI products (Gemini, Claude, etc.) during that time.
I’d venture that over that roughly two and a half years everyone who is interested and who has access to the internet has played around with AI, even if they did so only briefly two years ago. I suspect that many had an experience similar to mine: amazement at first, confusion next (Why does it make things up?/That’s crazy), then frustration (Why can’t it just reformat my resume the way I asked?), and finally exhaustion (Why am I wasting my time on this?).
Of course, the AI acolytes say, you were using the wrong model, or didn’t ask the right questions, or if you had only asked again it’d be right this time, or the new model is so much better, or did you know it can write code, or… Ultimately though, despite what I’m sure is growing use of all major AI models (funny how hype leads to use), I don’t see much value added to the average somewhat-online person’s life outside of—let’s call it—recreational use.
I’m not going to address enterprise uses here, since I don’t know anything about them except what I have experienced from “customer service” AI (which, in my experience, has actually made customer service even worse than it was with pre-AI chatbots), and I’m not going to say anything about coding uses of AI because, again, I know nothing about that. I am more than willing to admit that, at least in the first case, there has been surprisingly wide adoption of AI tools. I say surprisingly simply because companies are essentially beta-testing AI products on their businesses and customers, hoping for a breakthrough (one that I expect isn’t actually coming). It’s a big risk to widely implement something that doesn’t actually work all that well (see Google’s AI-generated search results) and hope for the best. But behind the scenes perhaps lots of money is being saved, or there’s a hope that it will be, or efficiency is improved, or something. If that money saved or efficiency gained comes at the expense of people’s jobs (and it will if it hasn’t already), well, see my last post on that.
I’m also not going to say that AI is useless—AlphaFold has already proven its possibilities. But I am going to say that for the average email writer, phone user, social media account holder, amateur researcher, news consumer, and streaming media watcher, the introduction of LLMs has made little to no impact. Of course there are heavy users, as there have been for Apple’s Siri and so forth, and I have no doubt that some regular folks have found some value in working with AI. But for a technology that was supposed to change the world, these are, I think, quite low bars.
And for the public, I’d even go so far as to say that LLM AIs have made the online world not better but notably worse. I won’t go into detail on this, but we all know what’s been happening to Google results since they went full-throttle on Gemini, and I’d be interested to know how many Apple users have either turned off “Apple Intelligence” or find its impact on their device use negligible to somewhat annoying. The analogy for the latter would be Siri or Alexa or predictive texting/autocorrect, which many of us keep on because it can be helpful but often curse because it is almost constantly screwing up. Sounds like ChatGPT, at least in my experience.
Indeed, if the major success story of public-facing LLMs has been how well they help high school and college students cheat/not read, and I (admittedly biasedly) think it probably is, then we ought to at least ask whether this is in fact revolutionary technology. I think we can all agree that spending less time writing work emails (the other big public use, I’d imagine) is appealing. But the cost-benefit analysis here doesn’t require GPT-4o-level reasoning to work out. The proliferation of factual errors, hallucinations, convincing scams, and misinformation (intentionally or accidentally produced) vs. the actual use value of LLMs to the average internet/tech user/consumer seems pretty clearly weighted toward the former.
To return to where we began, it’s been 2.5ish years. That’s not a lot of time (unless you live in OpenAI press release land), and algorithmic AI isn’t going anywhere. It will get better, but for the reasons I note here I think better is likely to mean less bad rather than amazing. But considering the sheer amount of use public-facing LLMs have gotten—“use” here could be translated to “paying to beta test,” Cyberpunk 2077-style—in those 2.5 years, and the widespread integration of LLM AIs we’ve seen, I think the verdict as of February 27, 2025 is in, and it’s reflected in the title of this post.