FREEDOM AND SAFETY
I was standing in the kitchen not too long ago, cooking dinner, and needed to set a timer. “Hey Google,” I said. “Could you please set a timer for five minutes?”
A distinctly pleasant trill announced the response, followed by my Google Assistant’s comment: “What a classy way to ask!”
This feature, announced by Google in November and aptly called Pretty Please, is designed to encourage us to interact with our devices in more polite ways. Using Google’s voice print technology, it can even prompt certain users - children, for example - to use “the magic word” when placing a request.
On the surface, this might seem like a novelty. However, since my own Google Assistant started responding this way over the past couple of months, I’ve noticed a distinct change in the way we interact. I say please more often, I use a gentler tone, and I feel compelled to say “thank you” after a request is completed, even though the speaker isn’t listening.
Our technology can incentivize behavior in nearly every interaction we have. The feedback from tapping a ❤ on Instagram, the whistles and dings that accompany each text, the attention-grabbing vibrations (which we sometimes imagine even when they’re not there): all of these trigger Pavlovian responses that shape our ongoing behavior.
But my Google Assistant got me thinking a lot more about how these incentives can be used not just to make us more addicted consumers, but to make us inherently better at being human.
When I picked up the Muse meditation headset for the first time and found myself in a period of focus, a bird chirped in my ear to provide positive feedback. When I drift a bit too far out of the lane in my car, the steering wheel vibrates to gently put me back on course.
And now my kitchen timer thanks me for being polite.
We’re living through a period of intense technological change that is upending many facets of our world, and the debate around ethics and control in the development of artificial intelligence is complicated and convoluted. After all, we know just how easy it is to coerce certain types of behavior and emotion through our interactions with technology.
I’ve written before to propose a thought experiment that approaches the development of AI as one might approach raising a child. I suggest that we could apply to AI the same social structures that guide children into adulthood - albeit upgraded and reimagined - to help AI develop along a path that’s advantageous for humans.
Perhaps there’s another side to this as well, where our technologies bring us face to face with ourselves. And as we strive to teach them right from wrong, polite from improper, they can also encourage those behaviors in us.