From counter-top tortilla makers to fridges you can tweet from, so-called “smart” appliances seem to be getting smarter.
But over at Fast Co. Design, writer Mark Wilson posits that the gadgets in our lives are exhibiting the wrong kind of smart — exemplified by his frustrating test drive of the June, “the intelligent convection oven.”
The June boasts an in-oven camera, a temperature probe, app connectivity to your phone, and WiFi. It strives to take the guesswork out of cooking by fully automating the whole process: from preheating with a button on your touchscreen to pinging your device when your baked salmon is ready, the June takes control of every aspect of the home cooking experience. This is not good, says Wilson, as it treats the learning and practice of a fundamental life skill (or fun hobby!) as yet another tiresome task that we’re better off handing over to a machine. And to top it all off, this problematic philosophical worldview is packaged in a clunky, buggy shell.
“[T]he June’s fussy interface is archetypal Silicon Valley solutionism. Most kitchen appliances are literally one button from their intended function. […] The objects are simple, because the knowledge to use them correctly lives in the user. […] The June attempts to eliminate what you have to know, by adding prompts and options and UI feedback. Slide in a piece of bread to make toast. Would you like your toast extra light, light, medium, or dark? Then you get an instruction: ‘Toast bread on middle rack.’ But where there once was just an on button, you now get a blur of uncertainty: How much am I in control? How much can I expect from the oven? I once sat watching the screen for two minutes, confused as to why my toast wasn’t being made. Little did I realize, there’s a checkmark I had to press — the computer equivalent of ‘Are you sure you want to delete these photos?’— before browning some bread.”
Wilson’s “buyer beware” about letting the June into our lives can be read as a larger, more ominous warning about holding onto our human intelligence and autonomy in the face of technological convenience. It’s interesting to consider how much of that intelligence and autonomy we’re willing to surrender to our devices, from phones on up. What is your limit?
Despite some experts’ developing theories on the impossibility of our ever creating “strong A.I.” — that is, the kind of robot intelligence that we need to worry about getting away from us and eliminating us as threats to itself (ahem, Skynet, ahem) — scientists out there are still plugging away at this fascinating issue.
One way to potentially solve the problem of achieving human-like consciousness is to overhaul the way machines learn, making it more like the method used by human babies and children. At the moment, many machines learn rigidly, systematically testing new input against a vast amount of information already stored. Flexibility in learning, however, leads to very fast gains in intelligence, as anyone who’s ever observed a child grow from birth to age four would know! Researchers are now quantifying this human process — a statistical approach called Bayesian learning — and applying it to A.I., attempting to reduce the mass of data and time required to gain the same knowledge about the world.
“The new AI program can recognize a handwritten character just about as accurately as a human can after seeing just one example. Using a Bayesian program learning framework, the software is able to generate a unique program for every handwritten character it’s seen at least once before. But it’s when the machine is confronted with an unfamiliar character that the algorithm’s unique capabilities come into play. It switches from searching through its data to find a match, to employing a probabilistic program to test its hypothesis by combining parts and subparts of characters it has already seen before to create a new character — just how babies learn rich concepts from limited data when they’re confronted with a character or object they’ve never seen before.”
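The Bayesian machinery behind this can be sketched in a few lines. Here is a toy illustration, in Python, of how a learner’s prior belief over candidate characters collapses onto one hypothesis after a single observation. The hypotheses and probability values are my own invented numbers; this is not the actual Bayesian program learning system described above.

```python
# Toy Bayesian updating: start with a prior over hypotheses, observe one
# example, and revise belief with Bayes' rule. All numbers are made up
# for illustration.

def bayes_update(prior, likelihood):
    """Return the posterior P(hypothesis | data) via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Uniform prior over three candidate characters.
prior = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}

# Likelihood of the single observed stroke pattern under each hypothesis
# (invented values: the stroke looks much more like an "a").
likelihood = {"a": 0.8, "b": 0.15, "c": 0.05}

posterior = bayes_update(prior, likelihood)
# After just one example, belief concentrates heavily on "a".
print(posterior)
```

Real Bayesian program learning replaces these hand-set likelihoods with generative programs that compose strokes and sub-parts into characters, but the underlying update rule is the same.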
The University of Auckland’s Bioengineering Institute is taking this trend in a startling direction with “BabyX,” an A.I. interface that takes the form of a 3D animated blonde baby who can interact with researchers through a screen, demonstrating the real thinking and learning process of the machine intelligence behind it. The interface is essentially one big metaphor for the learning machine, with a bundle of fibre-optic cables as a “spinal cord” connecting outside input to its “brain.” BabyX thus learns by responding to its user the way a real baby responds to a parent.
“‘BabyX learns through association between the user’s actions and Baby’s actions,’ says [project leader and Academy Award-winning animator Mark] Sagar. ‘In one form of learning, babbling causes BabyX to explore her motor space, moving her face or arms. If the user responds similarly, then neurons representing BabyX’s actions begin to associate with neurons responding to the user’s action through a process called Hebbian learning. Neurons that fire together, wire together.’”
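Sagar’s line about neurons that “fire together, wire together” describes Hebbian learning, which is simple to sketch. In this toy version (the learning rate, activity values, and neuron labels are my own invention, not BabyX’s actual model), a connection strengthens whenever the two units it links are active at the same time:

```python
# Minimal Hebbian learning sketch: the weight between two neurons grows
# in proportion to their correlated activity. Numbers are illustrative.

def hebbian_update(weight, pre_activity, post_activity, rate=0.1):
    """Strengthen the connection in proportion to co-activation."""
    return weight + rate * pre_activity * post_activity

# A "BabyX smiles" neuron and a "user smiles back" neuron, initially unlinked.
w = 0.0
for _ in range(10):  # ten co-activations of the two neurons
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)

print(round(w, 2))  # the association has grown from 0.0 to 1.0
```

If the user never smiles back (`post_activity=0.0`), the product is zero and the weight never grows — which is exactly why BabyX needs a responsive human on the other side of the screen.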
All this work really goes to show that something so natural and seemingly simple — the infant human learning process — is actually very complicated, and hard to replicate in a machine. It will be very interesting to see how BabyX, and this new kind of A.I., “grows up” with us.
On Friday, I was happily taking care of the dishes while half listening to a local radio station playing in the next room. Suddenly, over the sound of the running water, I heard the most unholy buzzing screech. I had to turn off the tap and run to the centre of my home to locate it and figure out what it was — Was it an air raid siren? The carbon monoxide detector?!
It took a good couple seconds for me to realize it was coming from my radio, and signaled an active Ontario-wide Amber Alert for a missing Welland girl. When the robot voice started giving the details, I relaxed — but then got to thinking about the effectiveness of the noise in getting me to drop everything and pay attention!
Turns out there are people out there whose job it is to design alarms: not just to sound so freaky that you freeze and listen, but to make us understand the nature and urgency of the thing we’re being warned about. Plus, they have to circumvent our highly developed brains’ instinct to ignore or disable alarms we have decided are false or too annoying — sometimes with fatal results. The design specifications are very precise:
“The faster an alarm goes, the more urgent it tends to sound. And in terms of pitch, alarms start high. Most adults can hear sounds between 20 Hz and 20,000 Hz — [designer Carryl] Baldwin uses 1,000 Hz as a base frequency, which is at the bottom of the range of human speech. Above 20,000 Hz, she says, an alarm ‘starts sounding not really urgent, but like a squeak.’
Harmonics are also important. To be perceived as urgent, an alarm needs to have two or more notes rather than being a pure tone, ‘otherwise it can sound almost angelic and soothing,’ says Baldwin. ‘It needs to be more complex and kind of harsh.’ An example of this harshness is the alarm sound that plays on TVs across the U.S. as part of the Emergency Alert System. The discordant noise is synonymous with impending doom.
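The design knobs Baldwin describes — repetition speed, a base pitch around 1,000 Hz, and a complex tone rather than a pure one — can be sketched in code. This toy Python snippet synthesizes a short burst of a “harsh” alarm as the average of two clashing sine tones; the second frequency, sample rate, and burst length are my own invented values for illustration:

```python
# Toy alarm-tone synthesis: summing two discordant sine waves produces
# the rough, non-"angelic" quality the quote describes. All parameters
# except the 1,000 Hz base are invented for this sketch.
import math

SAMPLE_RATE = 8000  # samples per second (an assumption for the sketch)

def alarm_samples(duration_s, freqs=(1000.0, 1730.0)):
    """Average of sine tones at the given frequencies: a harsh, non-pure tone."""
    n = int(SAMPLE_RATE * duration_s)
    return [
        sum(math.sin(2 * math.pi * f * t / SAMPLE_RATE) for f in freqs) / len(freqs)
        for t in range(n)
    ]

# A quarter-second burst; shorter, faster-repeating bursts sound more urgent.
burst = alarm_samples(0.25)
print(len(burst))  # 2000 samples
```

A pure tone would pass a single frequency in `freqs`; adding a second, deliberately clashing one creates the beating, discordant texture that Baldwin says an urgent alarm needs.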
The Emergency Alert System (have a listen here!) has similarities to the Ontario Amber Alert alarm. But I find the differences really point up what the listener’s reaction should be: to me, the former spells, well, “impending doom,” so my instinct is to sit calmly and absorb all instructions; the latter makes me want to get up and do something — like find a child.
The next big project facing alarm designers is forming an “alarm philosophy”: a way of organizing multiple alarms in an environment so that the most important don’t get drowned out or ignored. Meanwhile, the Amber Alert for Layla Sabry of Welland has officially been called off, but she is still missing: familiarize yourself with her case here. And keep your ears open for the next terrifying screech from your radio — someone worked hard to bring it to you!