I recently overheard a conversation about someone teaching their cat to talk using AIC – Augmented Interspecies Communication – and the concept caught my attention. As both the owner of a very intelligent shepherd and someone who’s worked with machine learning in the past it tickles many of my interests (and then there are treatments of the idea in fiction, such as in David Brin’s excellent Uplift series). So if like me you hadn’t encountered AIC before, here’s an entertaining introduction to the topic:
I like this video just as much for its final comments:
I think a lot of it feels like ego, to be perfectly honest. We want to hear our dogs say things that we know they’re feeling, or that we assume they’re feeling, but we want to hear it in our language. I would love for the greatest takeaway to be not that our dogs can talk, but that they’ve already been saying it all along and we just haven’t been listening.

– Alexis Devine
My dog gets bored, worried, boisterous, hungry, thirsty, sick and tired, and he communicates these all the time (or I assume he does!). Having lived with humans all his life, he’s become adept at getting our attention, and we’ve become equally competent at meeting him halfway to address his needs – just as with our children when they were infants. It’s not a huge stretch, then, to imagine we might teach a dog a slightly different (but equally accessible) communication method to use with its people.
Occasionally, I’ll find conversations steer toward much wider claims of interspecies sentience or rational thought using such communication as its lever. It’s perhaps an understandable leap, especially when we seem to be so good at anthropomorphising while interpreting animal responses. Way back when Koko’s signing was doing the rounds, it really felt like Penny was just interpreting what the audience wanted to hear. This example from when Koko was being interviewed by an AOL group might seem extreme, but maybe that’s the point:
AOL: Question: Do you like to chat with other people?
(Source: https://web.archive.org/web/20070206214118/http://www.koko.org/world/talk_aol.html)
PENNY: Koko, do you like to talk to people?
KOKO: Fine nipple.
PENNY: Yes, that was her answer. ‘Nipple’ rhymes with ‘people,’ OK? She doesn’t sign people per se, so she may be trying to do a ‘sounds like…’ but she indicated it was ‘fine.’
We’re human, and I’d suggest that a tendency to be influenced by some combination of confirmation bias and the Barnum effect puts us in an awkward position when evaluating a conversation. How much of our perception of a conversation is just us wanting to be talked with?
We see another compelling example of this when conversing with a contemporary machine-learning-driven chatbot.
Modern general-purpose chatbots (like Google’s LaMDA) are typically driven by probability engines trained on vast multi-disciplinary datasets gathered online. Given that, how do we rationally evaluate a conversation with a chatbot when its output comes from a complex pattern-matching algorithm working from millions of conversations across a library of topics? We enter our questions, our input feeds into the pattern-matching algorithm (along with the rest of the conversation we’ve had thus far), and the engine generates a response that fits the pattern. Those patterns follow natural language, so the responses look like natural language.
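To make that idea concrete, here’s a deliberately tiny sketch – a bigram Markov chain, nothing like the neural networks actually behind LaMDA – that “learns” which word tends to follow which in some training text, then generates a reply by sampling from those patterns. The training corpus and function names are invented for illustration; the point is only that statistically plausible word sequences can emerge from pattern matching with no understanding anywhere in the loop.

```python
import random
from collections import defaultdict

# Toy training text: the "millions of conversations" reduced to a few lines.
corpus = (
    "the dog wants to go out . the dog wants a treat . "
    "the cat wants a treat too . i want to talk to the dog ."
).split()

# Count which words follow each word in the training text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8, seed=0):
    """Sample a chain of words, each drawn from what followed the last."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every adjacent pair in the output appears somewhere in the training text, so the result reads vaguely like language – which is exactly the trap: fluent-looking output is evidence of good pattern matching, not of a mind.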
Is that a conversation?
From a purely objective point of view, perhaps so: It looks like a conversation, talks like a conversation, and smells like a conversation, ergo it is a conversation.
But I don’t know that I’m going to use the same logic to decide my conversation partner has a soul just because the chatbot’s topic of conversation touches on matters of religion or philosophy, regardless of how compelling the conversation may seem.
To be fair, I tend to stay away from those topics with my dog, too.