
Travels with Maja
In 1960, the American novelist John Steinbeck set out on a road trip with his poodle Charley to rediscover America. The USA was going through a period of rapid change at the time, and Steinbeck had failed to keep up. This article was originally posted on Eirik Sjåholm Knudsen's own Substack/blog: molekyl.no.
The original text is available on molekyl.io.
Following the launch of ChatGPT in late 2022, the world, too, entered a period of rapid change. A period in which many of us struggle to keep up with everything that is happening and its implications.
One day when I was walking my own dog, a poodle mix called Maja, it struck me that she might be helpful in making sense of some of these changes. Just like the standard poodle Charley did for the changing America back in the 1960s.
So my travels with Maja to understand AI began. Not in a camper van like Steinbeck, but on regular walks. Early and late. In all kinds of weather. While thinking about what a dog can teach us about AI.
It turns out, she can teach us more than I expected.
A dog’s perspective on AI
Maja entered our family two years ago, and ever since she has been one of two intelligent alien life forms in my life. The other being AIs, primarily in the form of large language models (LLMs).
At first glance, it’s not obvious what Maja’s 8 kilos of energy and chaos trapped in a black furry coat could possibly reveal about large language models. She is a mammal just like me, but her brain is both smaller than mine and wired for very different purposes. This, combined with her different sensory strengths (nose and ears) and weakness (sight), makes her see the world very differently than you and me.
And this is where I think Maja can help us see AI from a different perspective. Just like dogs, AIs are intelligent alien life forms with distinct strengths and weaknesses that see the world very differently than we do.
By reflecting on key similarities and differences between the familiar dog and the more unfamiliar AIs, interesting insights start to emerge.
Super powers/super dumb
At some point on every walk with Maja, she suddenly switches from casual sniff mode to full alert. Her body stiffens, her tail shoots into the air, and her nose goes into intense search mode. She has noticed something I have not.
Dogs are up to 100,000 times better than humans at discriminating between smells, and up the road or around the corner I might see what it was she noticed. A cat, a hedgehog hiding in the bushes, or a bird. Her nose is truly impressive, a superpower by any measure.
But if I tell her to sit five minutes later, she might just as well lie down instead. That level of understanding impresses no one. From superpower to super dumb in a few minutes.
Just like dogs, AIs have so-called jagged frontiers: they are incredibly good at some things and surprisingly bad at others. A frontier LLM can easily rewrite your homework as a Shakespearean sonnet, but it took serious compute, and arguably a scientific breakthrough, before LLMs could reliably count the number of "r"s in the word "strawberry".
The challenge with jagged frontiers, in both dogs and AIs, is that they easily lead us to both overestimate and underestimate capabilities.
If our first encounters with AI land on the dumb end of the scale, we quickly dismiss its potential. When we see its best side, we can slip into trusting it too much. Either way, we tend to blame the technology for problems that result from our own misjudgments.
For dogs it's different. We intuitively accept their combination of superpowers and flat-out stupid behaviour, and adjust our expectations accordingly. When we get it wrong with dogs, we know it was often on us. After all, the dog is just wired this way.
Many would probably get much more out of their LLM of choice simply by adopting a perspective similar to the one we take toward dogs. That is, changing the baseline assumption from "if it doesn't work, it's the AI" to "if it doesn't work, it's me".
Living in the moment
Unlike me and the other humans in my house, Maja is fully immersed in every situation. She can have the best time of her life with a bone on the couch, and seconds later she has the best time of her life outdoors on a walk. She is like an embodiment of the philosophical view that neither the past nor the future exist. Only the present.
Maja doesn't do this by choice, but because the fragmented default mode network and short working memory of her brain wire her to rapidly shift all her attention from one situation to the next.
LLMs also live very much in the present, or rather within each context window. They are born anew in every new chat unless we help them carry over context.
As with my dog, this makes AIs very bad at long-term planning but very good at context shifting. You can have a deep conversation with an LLM about old philosophers and flip to whatever else you want to discuss seconds later. It will be just as immersed in that topic.
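For readers who like to see this mechanically, here is a minimal sketch (a hypothetical stand-in, not any real chat API or model) of why LLMs are "born anew" in every chat: a chat model behaves like a pure function of the messages it is handed, so nothing persists between calls unless the caller resends the history itself.

```python
# Hypothetical stand-in for a stateless chat endpoint: the "model" can
# only see what is in the messages list it receives on this one call.
def chat(messages):
    seen = " / ".join(m["content"] for m in messages)
    return f"I can see {len(messages)} message(s): {seen}"

# A fresh call starts with a blank slate...
print(chat([{"role": "user", "content": "My dog is called Maja."}]))

# ...so to make the model "remember", the caller must carry the earlier
# conversation forward and resend it along with the new message.
history = [
    {"role": "user", "content": "My dog is called Maja."},
    {"role": "user", "content": "What is my dog called?"},
]
print(chat(history))
```

The model never changes between calls; only the input does. That is all "memory" features do under the hood: they quietly stuff earlier context back into the window.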
Often, this context switching and intense presence bias can be frustrating. Having to repeat context to an LLM in chat after chat often makes me long for a model that can just remember that thing I mentioned a few weeks back.
While it is frustrating when my dog forgets things I showed her minutes ago, I have also noticed that it is kind of nice that she doesn't remember every small thing that happens in our daily lives. In fact, I think I am better off with a dog that doesn't hold grudges against me for not sharing cheese from my lunch or for yelling too loudly that time she dug into our new couch.
Maybe the same weakness in LLMs can also be turned into a strength? Having an intelligent conversation partner that forgets your fumbling and stupid questions from last week, and lets you start afresh with a blank sheet every time, can be a blessing. I think it is, which is why I have not turned on the memory feature of ChatGPT.
The communication trap
While Maja and my LLMs conceptually have much in common, there are also clear differences. One of them is their mode of communication.
With Maja, I face the same challenge Steinbeck had with his dog Charley. She doesn’t speak any of my languages, and we instead do our best to communicate by interpreting each other’s body language, tone of voice and observed behaviours.
With the AIs, communication is far easier, as I can simply write or talk as I would with any human. While practical, this also masks features that are obvious with Maja: AIs don't think about and understand the world around us in the same way we do. An obvious fact that human language makes easy to forget.
This can lead to issues like shadow thinking, where we believe we've thought something through when really the AI did the thinking, and to inadvertently outsourcing our judgment and decision-making.
It also makes it easy to forget that LLMs, just like dogs, read between the lines more than most humans do. They constantly seek subtle clues about what we really want from a conversation, the better to help us. Clues that can turn conversations and answers in directions we did not intend, or are not even aware of.
If an LLM interprets my request for feedback on a final paper draft as "he really just needs confirmation", it might very well try to "help" me by giving inflated feedback. If I instead tell the AI that the best way to help is to be tough as nails and give hard, direct, constructive feedback, I tend to get something very different.
Any dog owner knows that dogs are attuned to the emotional signals we send. With dogs, this feels obvious, and we naturally adjust our communication accordingly. With AIs, we tend to expect them to just figure us out, without us doing any work.
Adaptation is on us
The overarching point that emerges from all of the above is that bringing a new intelligent life form into our lives usually requires adaptation on the human's part to get the most out of the collaboration.
When we picked up Maja from the breeder, we didn’t expect her to be fully functional as a family dog. We knew that she came with some innate features from her breeding and early social learning, but that it was on us to take it from there.
LLMs also come with a default skill set: knowledge and behaviours acquired through pre-training, fine-tuning, and reinforcement learning from human feedback. But even though it is likewise up to us users to take it from there with LLMs, many seem to forget this, or never see it this way. It's as if we expect the AI to be delivered potty trained, with a bag of skills specific to how each of us works and lives.
The value of an LLM emerges through interactions, and we need to figure out how to make these interactions work for our own specific purposes. In doing so, we simply cannot expect an AI to magically work as our companion if we are not open to adjusting how we work ourselves.
Any dog owner knows that getting a dog is just as much about training and adjusting the humans in the house as it is about training the dog. It's much the same with LLMs. If I am flexible, curious, and willing to adjust my own behaviour and processes to the peculiarities of the models I work with, I get so much more out of them.
Creative adjustments
I have made plenty of adjustments as a result of both the dog and AIs entering my life. One interesting example is how a new creative routine has emerged almost organically, involving both my dog and AIs.
After getting Maja, I spend a lot of time walking. And walking time is thinking time. My mind starts to wander, and I come up with new ideas, or solve problems I didn't manage to solve earlier that day in my office. It's often on dog walks that I now have the gravitational collapses where scattered thoughts suddenly find their shape.
But the ideas I have when walking the dog can be fleeting. To avoid them slipping away, I write down any promising ones in my old-fashioned notebook when I return. And later, I often turn to an AI to dive deeper into exploring any idea that still seems promising.
So one of my personal creative processes now often looks like this: dog walk → idea → notebook → AI exploration → dog walk → idea refinement in notebook → AI. And so forth.
Maja provides the wandering space for ideas to form, the AI provides the patient exploration space to develop them.
What did Maja teach us?
When Steinbeck returned from his travels with Charley, he had learned that the USA was both different and familiar at the same time. My travels with Maja reveal something similar about AI. It is indeed an intelligent alien life form, but one with interesting parallels to another intelligent life form we have learned to live with just fine. Dogs.
The bigger point is that bringing new intelligent life forms into our house, ones wired very differently from ourselves, will never work unless we also change.
While this point is obvious when we look at adding a dog to our lives, it isn’t with AIs. Instead, we often expect the AI to elegantly slip into our existing idiosyncratic workflows without requiring any adjustments on our part. And when the AI doesn’t live up to our expectations, we blame the technology and wait for the engineers at the AI labs to fix the problem with the next model release.
So maybe the biggest thing Maja can teach us is that AI problems are actually human problems disguised as technology problems. And that we should spend less time waiting for the engineers at the AI labs to solve our problems, and more time pulling ourselves together and making the adjustments needed to become better AI users.
Because just as when a dog misbehaves, it's often the human in the loop who is the reason things don't go as intended.