
The recent buzz around chatbots should remind us of a human weakness already widely exploited: We readily bond with machines, including the large profit machines known as corporations, often to our detriment.
One technically clued-up journalist this week told in the New York Times of his discomfort at being the target of an attempted seduction by a pre-release version of Microsoft’s new Bing search bot. It is doubtless alarming when silicon gives us the come-on. But we should be uncomfortable too when we sense a brand’s personality or a corporation’s kindness.
We put faces on everything, like a graffiti artist doodling smileys on fire hydrants. This urge to personify means we can be readily persuaded to become emotionally engaged with objects and machines, ascribing to them a personhood into which we read personality, motives and morals. It is harmless to see a goofy grin on a fire hydrant, but dangerous when an illusion of personhood is at odds with reality.
Human intelligence relies on a signal processor immensely more complex than that of any silicon-based chatbot. And our signal processors are deeply embedded in bodies which, in turn, are deeply embedded in the world. No chatbot is ever going to come close to mimicking the output of such an astronomically complex system.
Corporations will not match any personification we make of them either. They exist solely to generate profit for shareholders, with no other motive; not even the greediest person is so single-minded. Employees of corporations, though fully human, are paid to return profit for shareholders, whether or not their methods have any parallel in normal human behaviour. Other employees are there simply to put a human face on it all.
So, while we should not worry that chatbots are getting close to genuine human intelligence, we should worry that we can be persuaded otherwise. We should examine this tendency to personify. It underlies our current vulnerability to machine romance: to branding, to public relations and to the charming chatbots attached to profit machines.
We should see machines for what they are: machines, not persons or pseudo-persons to love, to hate, or anything in between. Such emotions are wasted. Instead we need to know how the machines that serve us operate, where they succeed and where they fail. Knowing this, we can decide on rules that would make them work better, based on how they work rather than on how we feel about them.
A hard-headed approach may one day be useful in navigating a world full of charming chatbots. We can warm up by finding ways not to be duped and manipulated by legions of highly sophisticated profit machines. ■