There’s no substitute for empathy—not in the doting sense of the term, but in the ability to perceptively inhabit other people’s psyches, and thus imagine what various stimuli will feel like to them.
Brian Beutler
I have been digging deep into AI for the last 20 months or so: learning, reflecting, evaluating. I understand how it works (well, somewhat better than in broad strokes) and how the models are built. Most importantly, I have developed an understanding of its capabilities and limitations.
In short, it is a prediction machine, built on probabilities derived from a massive amount of data, predicting what the next word or pixel will be. Depending on how the model’s temperature is calibrated, the output ranges from “close to the ground” to highly inventive (what some complain about as hallucinations). This is the flesh and soul of AI, really: it is not a truth machine, but a generator of the most likely string of words.
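To make the temperature idea concrete, here is a minimal sketch in plain Python (with a made-up three-word vocabulary and illustrative scores, not any real model’s values) of how temperature rescales next-token probabilities before sampling: a low temperature almost always picks the most likely word, while a high temperature spreads probability toward unlikely ones.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from raw scores after temperature scaling.

    Lower temperature sharpens the distribution ("close to the ground");
    higher temperature flattens it, making rarer tokens more likely.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]        # softmax over the scaled scores
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy vocabulary scores, e.g. for "the", "a", "zebra" as the next word
logits = [4.0, 2.0, 0.5]
cautious = sample_next_token(logits, temperature=0.2)   # almost always index 0
inventive = sample_next_token(logits, temperature=5.0)  # far more varied
```

The same mechanism scales up to a vocabulary of tens of thousands of tokens; the only thing temperature changes is how strongly the model commits to its top predictions.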

Realize that AI is not a truth-telling machine, but a data-processing tool.
Let me reiterate: it helps us make sense of data. What it cannot do (at least not in a way that makes it useful) includes:
- generate data that we can use instead of real data (see so-called “synthetic users”)
- be genuinely curious and empathetic, and follow hunches
- deal with ‘subtle’ data from humans – data that is not encoded in clear, unambiguous language but expressed in analogies, humor, and references, i.e., linguistic output that is not accessible to AI
- interpret human tonal variation and facial expressions. This may change in the near future, but for now humans beat AI.
As a researcher, I live “in the data” – kind of like a method actor. As I connect with people, I develop theories that I probe on the spot, in the middle of the conversation, rather than just repeating the same set of questions. The AI tools I have tried lack that kind of curiosity and gut feeling. They can do basic usability testing, for sure – collect and tabulate data, essentially a spoken survey – but no more than that. The follow-up questions are as shallow as a dried-out stream. For a human to open up, they need to be engaged in a conversation and feel that the person across the table truly relates to them and to their experience.
Where AI can make a UX researcher more efficient
We UX researchers thrive on data (qual and/or quant). And a tool that can help process data – why, what a godsend. Once you (the human UX researcher) have engaged in some lively conversations, AI excels at:
- resurfacing themes
- summarizing interviews
- answering your questions – be they low level (how many people misunderstood a label), or high level (what is common to people appreciating a feature)
- pulling out quotes to support findings
- organizing your report
- creating visuals
For 99.99999999% of human history …
… we evolved to connect with our fellow humans through conversations. With fellow humans, and no other entities (and whenever we did converse with something else, we projected human qualities onto it). As a result, we automatically assume, without thinking, that all connected words come from intelligent beings who tell us the truth, unless they are motivated to hide something from us.
Not even the world-famous Clever Hans, the horse that could seemingly communicate with people and do arithmetic, could persuade us that he had human-like intelligence. We understood his limitations because he did not use words.
So most of us fall for this text-producing AI trick and accept what Claude or ChatGPT displays as truth – but this is where we need to be vigilant. AI does not feel our pain; it does not get frustrated, angry, or satisfied. It has no lived experience on which we can base decisions to help our fellow human beings have a better experience.