AI or not to AI 🥸
Jun 13, 2025 4:16 pm
Happy Friday!
I've made the mistake of spending more time on LinkedIn, and my feed is full of three things. One is people talking about basic programming techniques, like how to write a loop in TypeScript compared to JavaScript. Another is folks talking about the insanity of the job market (for those of you who don't know or remember, I wrote a book on this). Finally, and most exhausting, is the hype around AI.
I am going to share a bit of my assessment of AI, a longer article I wrote on the subject, and my predictions, so you have a little more to consider before making any major decisions.
LLMs Cannot Think
This is the most important thing to remember, and many people are ignoring it: LLM-based AI works on probabilities, not understanding. This is tricky because many results from AI look as though they came from a place of consideration and understanding, but they are an approximation and nothing more.
Want a great example of this? For a long time, you could ask ChatGPT how many "R"s were in the word "strawberry" and it would get it wrong. If you think in terms of how it works, just choosing a response based on probability, it makes sense why it cannot count those letters. This specific example was recently fixed.
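For contrast, a few lines of ordinary code count those letters exactly, because they work on the actual characters rather than on probability-weighted tokens:

```python
# Deterministic code sees the characters themselves,
# so counting letters is trivial and always correct.
word = "strawberry"
count = word.lower().count("r")
print(count)  # prints 3
```

The gap between "trivially correct for a loop" and "historically wrong for a chatbot" is a useful intuition pump for how differently these systems work.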
But here's an article where someone pitted AI against an Atari 2600 chess game on easy, and the AI got destroyed.
A Probable Response Isn't Probably Correct
Since the technique under the hood of the AI hype is built around probabilities, you might look at the responses and think probability lends itself neatly to correctness, but that is not quite right either.
Imagine memorizing a set of phrases in a foreign language that you also learn to pronounce perfectly. Which phrases you memorize is based on the probability of what people will say in that conversation. There are comedy sketches built around this. When the other speaker follows those probabilities, things seem perfect, but one slight misstep and suddenly it all goes wrong.
This is, in a weird way, how this type of AI works. It simply calculates the probability of what comes next and gives it to you. It doesn't matter if the result is right, wrong, or nonsense.
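A toy sketch makes the idea concrete. The table below is a made-up, hypothetical frequency table, not a real model, but the mechanism is the same in spirit: pick the next word by weighted chance, with no notion of truth anywhere.

```python
import random

# Hypothetical "what tends to come next" table with made-up probabilities.
# There is no understanding here, just lookups and weighted coin flips.
next_word = {
    "the":    [("cat", 0.5), ("moon", 0.3), ("answer", 0.2)],
    "cat":    [("sat", 0.7), ("is", 0.3)],
    "moon":   [("sat", 0.6), ("is", 0.4)],
    "answer": [("is", 1.0)],
    "sat":    [("on", 1.0)],
    "on":     [("the", 1.0)],
    "is":     [("the", 1.0)],
}

def generate(start, length=6, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        choices, weights = zip(*next_word[words[-1]])
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output reads like plausible English because the table encodes plausible continuations, but nothing in the code checks whether the sentence is correct, meaningful, or even about anything. Scale that table up enormously and you have the intuition behind the fluency of an LLM.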
This means AI, at this stage, cannot be trusted to be correct. You still need a human to edit AI's output.
It's Not All Bad
I'm not writing this to say that AI today is bad or should be avoided, but I do want to caution anyone against believing it is smarter or more capable than it is. AI's strength is its speed. If you accept that it can be fast and sometimes wrong, you can develop a set of processes that correct the mistakes and keep the speed benefit.
That means putting these tools in a position to augment, not replace.
Now, if you'd like to read more about my predictions of where AI will go and how to more specifically and safely introduce AI at work, check out this article I wrote.
Sincerely,
Ryan