Just when you thought it couldn’t get any creepier: At the I/O conference at the beginning of this month, Google debuted a new add-on to its Google Assistant program. Dubbed Duplex, this feature is billed as making heavily digitized personal lives mesh more easily with those who haven’t caught up — by using sophisticated voice recognition and natural-sounding speech synthesis so your computer can talk to humans who don’t realize it’s a computer.
I cannot emphasize enough how freaky this is. Check out a clip of the keynote here, where Google CEO Sundar Pichai demos real conversations between Duplex (strategically deploying “um”s, uptalk, and informal syntax) and the poor, obsolete, flesh-and-blood humans answering the phone at a hair salon and a restaurant. Duplex finagles reservations or information out of both and then messages the user with the updated info. It even adds the successfully scheduled event to the user’s calendar.
As handy as this might be for some of us, Google’s not doing an awful lot to address a specific concern that arose almost immediately after the announcement. Google has a lot of information on each of us. Considering the issues the company has with keeping that info safe (as well as questions surrounding why on earth it needs it to begin with), a deceptively helpful feature like Duplex could end up doing more harm than good to a user.
“It knows everything you browse on Chrome, and the places you go on Google Maps. If you’ve got an Android device, it knows who you call. If you use Gmail, it knows how regularly you skip chain emails from your mom. Giving an AI that pretends to be human access to all that information should terrify you.
A bad actor could potentially cheat information out of the Duplex assistant in a phone call. Or use the Duplex assistant to impersonate you, making calls and reservations in your name. It’s also, just, you know, an AI that KNOWS YOUR ENTIRE LIFE.”
We have long held up our end of the bargain — we have given companies access to our data and metadata in exchange for fleeting fun or profit. I think the time has come for us to get a lot smarter about how we interact with potentially mercenary or exploitable tech… Because it looks like it’s just about ready to become smarter than us.