From counter-top tortilla makers to fridges you can tweet from, so-called “smart” appliances seem to be getting smarter.
But over at Fast Co. Design, writer Mark Wilson posits that the gadgets in our lives are exhibiting the wrong kind of smart — exemplified by his frustrating test drive of the June, “the intelligent convection oven.”
The June boasts an in-oven camera, a temperature probe, app connectivity to your phone, and WiFi. It strives to take the guesswork out of cooking by fully automating the whole process: from pre-heating at the touch of a button on your touchscreen to pinging your device when your baked salmon is ready, the June takes control of every aspect of the home cooking experience. This is not good, says Wilson, because it treats the learning and practice of a fundamental life skill (or fun hobby!) as yet another tiresome task that we’re better off handing over to a machine. And to top it all off, this problematic philosophical worldview is packaged in a clunky, buggy shell.
“[T]he June’s fussy interface is archetypal Silicon Valley solutionism. Most kitchen appliances are literally one button from their intended function. […] The objects are simple, because the knowledge to use them correctly lives in the user. […] The June attempts to eliminate what you have to know, by adding prompts and options and UI feedback. Slide in a piece of bread to make toast. Would you like your toast extra light, light, medium, or dark? Then you get an instruction: ‘Toast bread on middle rack.’ But where there once was just an on button, you now get a blur of uncertainty: How much am I in control? How much can I expect from the oven? I once sat watching the screen for two minutes, confused as to why my toast wasn’t being made. Little did I realize, there’s a checkmark I had to press — the computer equivalent of ‘Are you sure you want to delete these photos?’— before browning some bread.”
Wilson’s “buyer beware” about letting the June into our lives can be read as a larger, more ominous warning about holding onto our human intelligence and autonomy in the face of technological convenience. It’s worth considering how many of us are willing to surrender those things to our devices, from phones on up. What is your limit?
Some experts have developed theories on the impossibility of our ever creating “strong A.I.” (that is, the kind of robot intelligence that we need to worry about getting away from us and eliminating us as threats to itself; ahem, Skynet, ahem). Despite this, scientists out there are still plugging away at this fascinating issue.
One way to potentially solve the problem of achieving human-like consciousness is to overhaul the way machines learn, making it more like the method used by human babies and children. At the moment, many machines learn rigidly, systematically testing new input against a vast amount of information already stored. Flexibility in learning, however, leads to very fast gains in intelligence, as anyone who’s ever observed a child grow from birth to age four would know! Researchers are now quantifying this human process using a statistical approach called Bayesian learning, and applying it to A.I., attempting to reduce the mass of data and time required to gain the same knowledge about the world.
“The new AI program can recognize a handwritten character just about as accurately as a human can after seeing just one example. Using a Bayesian program learning framework, the software is able to generate a unique program for every handwritten character it’s seen at least once before. But it’s when the machine is confronted with an unfamiliar character that the algorithm’s unique capabilities come into play. It switches from searching through its data to find a match, to employing a probabilistic program to test its hypothesis by combining parts and subparts of characters it has already seen before to create a new character — just how babies learn rich concepts from limited data when they’re confronted with a character or object they’ve never seen before.”
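The part-combination idea in that quote can be sketched as a toy in a few lines of Python. To be clear, this is purely illustrative: the part names, the likelihood score, and the uniform prior below are my own inventions, not the actual Bayesian program learning framework.

```python
# Toy sketch of one-shot classification: combine known "parts" into
# candidate "programs" and score them Bayesian-style against a single
# observed example. Everything here is invented for illustration.
from itertools import combinations

known_parts = ["arc", "bar", "dot", "hook", "loop"]

def candidate_programs(max_parts=3):
    """Hypotheses: every way of combining up to max_parts known parts."""
    hyps = []
    for k in range(1, max_parts + 1):
        hyps.extend(frozenset(c) for c in combinations(known_parts, k))
    return hyps

def likelihood(hypothesis, observed):
    """Score a hypothesis by how well it matches one observed example."""
    match = len(hypothesis & observed)
    mismatch = len(hypothesis ^ observed)
    return (1 + match) / (1 + match + 2 * mismatch)

observed = frozenset({"arc", "dot"})   # a single example of a new character
hyps = candidate_programs()
prior = 1.0 / len(hyps)                # uniform prior over compositions
posterior = {h: prior * likelihood(h, observed) for h in hyps}
best = max(posterior, key=posterior.get)
print(sorted(best))                    # -> ['arc', 'dot']
```

The point of the toy is the shape of the computation: instead of searching stored data for an exact match, the program proposes compositions of familiar pieces and weighs them probabilistically against one example.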
The University of Auckland’s Bioengineering Institute is taking this trend in a startling direction, as it works with “BabyX.” BabyX is an A.I. interface that is a 3D animated blonde baby, who can interact with researchers through a screen, demonstrating the real thinking and learning process of the machine intelligence behind it. The interface is essentially one big metaphor for the learning machine, with a bundle of fibre-optic cables as a “spinal cord,” connecting outside input to its “brain.” So BabyX learns by responding to its user as a real baby would to a parent.
“BabyX learns through association between the user’s actions and Baby’s actions,” says [project leader and Academy Award-winning animator Mark] Sagar. “In one form of learning, babbling causes BabyX to explore her motor space, moving her face or arms. If the user responds similarly, then neurons representing BabyX’s actions begin to associate with neurons responding to the user’s action through a process called Hebbian learning. Neurons that fire together, wire together.”
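Sagar’s one-liner describes a rule simple enough to sketch in code. Below is a minimal, hypothetical Hebbian update in Python; it is a cartoon of the principle, not BabyX’s actual architecture, and the two-neuron setup is invented.

```python
# A minimal sketch of Hebbian learning ("neurons that fire together,
# wire together"). Purely illustrative -- not BabyX's real wiring.

def hebbian_update(weights, pre, post, rate=0.1):
    """Strengthen connections between co-active pre/post neurons."""
    return [[w + rate * a * b for b, w in zip(post, row)]
            for a, row in zip(pre, weights)]

# Two "BabyX action" neurons (pre) and two "user action" neurons (post).
weights = [[0.0, 0.0],
           [0.0, 0.0]]

# BabyX babbles (pre neuron 0 fires) and the user mimics (post neuron 0
# fires); repeat the pairing five times.
for _ in range(5):
    weights = hebbian_update(weights, pre=[1, 0], post=[1, 0])

print(weights)  # only the weight linking the two co-active neurons grew
```

After five co-activations, only the connection between the babbling neuron and the mimicking neuron has strengthened, which is exactly the association Sagar describes.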
All this work really goes to show that something so natural and seemingly simple — the infant human learning process — is actually really complicated, and hard to replicate for a machine. It will be very interesting to see how BabyX, and this new kind of A.I., “grows up” with us.
On Friday, I was happily taking care of the dishes while half listening to a local radio station playing in the next room. Suddenly, over the sound of the running water, I heard the most unholy buzzing screech. I had to turn off the tap and run to the centre of my home to locate it and figure out what it was — Was it an air raid siren? The carbon monoxide detector?!
It took a good couple seconds for me to realize it was coming from my radio, and signaled an active Ontario-wide Amber Alert for a missing Welland girl. When the robot voice started giving the details, I relaxed — but then got to thinking about the effectiveness of the noise in getting me to drop everything and pay attention!
Turns out there are people out there whose job it is to design alarms: not just to sound so freaky that you freeze and listen, but to make us understand the nature and urgency of the thing we’re being warned about. Plus, they have to circumvent our highly developed brains’ instinct to ignore or disable alarms we have determined to be false or too annoying (with sometimes fatal results). The design specifications are very precise:
“The faster an alarm goes, the more urgent it tends to sound. And in terms of pitch, alarms start high. Most adults can hear sounds between 20 Hz and 20,000 Hz — [designer Carryl] Baldwin uses 1,000 Hz as a base frequency, which is at the bottom of the range of human speech. Above 20,000 Hz, she says, an alarm ‘starts sounding not really urgent, but like a squeak.’
Harmonics are also important. To be perceived as urgent, an alarm needs to have two or more notes rather than being a pure tone, ‘otherwise it can sound almost angelic and soothing,’ says Baldwin. ‘It needs to be more complex and kind of harsh.’ An example of this harshness is the alarm sound that plays on TVs across the U.S. as part of the Emergency Alert System. The discordant noise is synonymous with impending doom.
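Those rules (fast repetition, a base around 1,000 Hz, two or more notes, harsh harmonics) are concrete enough to try out in code. Here’s a sketch that writes a short alarm to a WAV file; everything beyond the 1,000 Hz base — the second note, the tempo, the harmonic mix — is my own guess at the parameters, not Baldwin’s specification.

```python
# Sketch of an "urgent" alarm per the quoted design rules: fast
# alternation of two notes around a 1,000 Hz base, with added
# harmonics for harshness (a pure sine would sound "angelic").
# All parameter values other than the 1,000 Hz base are guesses.
import math
import struct
import wave

RATE = 44100  # samples per second

def tone(freq, seconds, harmonics=(1, 2, 3)):
    """A multi-harmonic (deliberately harsh) tone."""
    n = int(RATE * seconds)
    return [sum(math.sin(2 * math.pi * freq * h * i / RATE)
                for h in harmonics) / len(harmonics)
            for i in range(n)]

# Fast alternation between two notes reads as urgent.
samples = []
for _ in range(6):
    samples += tone(1000, 0.12) + tone(1300, 0.12)

with wave.open("alarm.wav", "wb") as f:
    f.setnchannels(1)    # mono
    f.setsampwidth(2)    # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(32000 * s))
                           for s in samples))
```

Play the resulting `alarm.wav` at your own risk; slowing the alternation or dropping the harmonics makes it noticeably less alarming, which is Baldwin’s point.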
The Emergency Alert System (have a listen here!) has similarities to the Ontario Amber Alert alarm. But I find the differences really highlight what the listener’s reaction should be: to me, the former spells, well, “impending doom,” so my instinct is to sit calmly and absorb all instructions; the latter makes me want to get up and do something — like find a child.
The next big project facing alarm designers is forming an “alarm philosophy”: a way of organizing multiple alarms in an environment so that the most important don’t get drowned out or ignored. Meanwhile, the Amber Alert for Layla Sabry of Welland has officially been called off, but she is still missing: familiarize yourself with her case here. And keep your ears open for the next terrifying screech from your radio — someone worked hard to bring it to you!
The Verge has published a moving meditation on a close friendship torn apart by death – and how the friend left behind has memorialized the other in a very 21st century way.
Eugenia Kuyda and Roman Mazurenko became acquainted in Moscow, where he was a cultural mover and shaker and she wrote for a lifestyle magazine. As the years passed and they grew closer, they fed off each other’s entrepreneurial spirit: Roman founded Stampsy, and Eugenia created an A.I. startup called Luka. Roman was well loved in the arts and culture scene, with a bright future ahead of him – until he was struck and killed by a speeding car as he stepped into a Moscow crosswalk.
When she felt other methods of memorialization didn’t suit Roman’s personality, or the scale of the grief felt by his friends, Eugenia sought a unique solution. With their permission, she fed the lightly edited text messages and online conversations between Roman and ten friends and family members into a specially built neural network, creating a chatbot that could respond in Roman’s authentic voice. This unexpectedly filled a particularly modern need:
“[In a Y Combinator application before he died] Mazurenko had identified a genuine disconnection between the way we live today and the way we grieve. Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this ‘digital estate’ could form the building blocks for a new type of memorial.”
Many of Roman and Eugenia’s friends had never experienced the death of someone close to them. But they soon began to engage with the Roman bot in a format they had often used to communicate with the living Roman: the bot matched statements from Roman’s archive to the content it detected in each query. The resulting conversations are really beautiful.
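That matching step can be sketched as a toy retrieval bot. To be clear, Luka’s real system was a neural network trained on Roman’s messages; the sketch below is just keyword overlap, and the sample lines are invented, not Roman’s actual words.

```python
# Toy sketch of "match the reply to the content of the query" via word
# overlap. Luka's actual bot was a trained neural network; the sample
# statements below are invented for illustration.
import re

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def best_reply(query, statements):
    """Return the stored statement sharing the most words with the query."""
    qw = words(query)
    return max(statements, key=lambda s: len(qw & words(s)))

statements = [
    "I miss the winters in Moscow.",
    "Stampsy was the hardest thing I ever built.",
    "Let's get dinner when you're back in town.",
]

print(best_reply("tell me about building Stampsy", statements))
```

Even this crude version shows why the format feels like texting the person: the reply is always something they genuinely once said, surfaced by the content of what you asked.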
Reactions were mixed – some of Roman’s friends were disturbed and refused to interact with the bot at all, while others found that reading their deceased friend’s turns of phrase anew was quite comforting.
As a “digital monument,” the Roman bot continues to be a presence – indeed, anyone who downloads Luka can talk to him in English or Russian. And he – or it – can also be held up as a case study of how our plugged-in society can find new ways to mourn.
It took me a good couple days to believe this in-development appliance was real, and not from The Onion – but once I did, I was staggered by it. The tubby little box follows a proven kitchen appliance formula: it lives on your counter, accepts single-use pods into its top, and in a few seconds, dispenses a fresh, hot … tortilla?
Dubbed the Flatev, the tortilla maker was developed by Carlos Ruiz, who was confronted by a dearth of authentic Mexican cuisine when he moved to Switzerland. He attempted to make tortillas by hand with his mother’s recipe, with Infomercial-Fail-like results. A fan of the Keurig, Carlos envisioned a tortilla maker that would take the mess and labour out of the whole process, much like the revolutionary coffeemaker.
Unlike the Keurig, the Flatev’s website assures us, the used dough pods will be recyclable. In addition, the dough inside the pods will be organic and preservative-free – which, the developers enthuse, produces a significant difference in flavor from pre-made supermarket tortillas.
Gee-whiz factor aside, I have a bit of a mental block about this. This is a gadget for a very specific sub-set of the population: people who a) eat lots of tortillas, b) are hung up about the freshness of said tortillas, and c) have enough money to not just purchase this one-purpose machine, but to afford a home with a kitchen big enough that it doesn’t take up all the counter space! (I may be biased, I also loathe Keurigs.)
But I also think of what users of the Flatev might miss out on by letting a machine take over. Progress is great. But it’s one thing to have your weekly trip to the creek with a washboard and a lump of soap replaced by a washing machine; it’s another thing entirely to miss out on the fascination of trying a new skill, and the pride when your unbalanced, weird-looking tortillas start getting better and better!
However, I would not turn down a tortilla, whether hand- or machine-made. Here’s hoping the Flatev crew gets lots of pre-orders: It would be worth seeing if this little guy lives up to the dream.
As the days grow shorter and the wind blows colder, thoughts turn to how the heck to keep from freezing every time I leave my cozy, wood stove heated house. I sometimes think of the animals who don’t have it so lucky, and have to stay outside over the winter in dens or burrows, or especially anywhere near water.
Turns out, I should save my pity! Science has finally quantified how well the fur of animals like beaver and otter traps air to keep them warm in their semi-aquatic adventures – with an eye to developing tech that will keep us warm too.
The team of MIT engineers is particularly interested in creating wetsuits for surfers, whose amphibious sporting lifestyle requires a wetsuit that stays warm in the water but sheds water quickly once they leave it. They were intrigued by this problem because it had never been properly measured: the actual mechanics of air trapping in aquatic mammals, surmised to be the work of long “guard hairs” protecting the downier fur underneath, had never been directly observed. Once this process was cracked, the team would be one step closer to producing “furry,” heat-retaining surfaces artificially.
The experiments first took a rigorous form of trial and error:
“To make hairy surfaces, [lead author and grad student Alice] Nasto first created several molds by laser-cutting thousands of tiny holes in small acrylic blocks. With each mold, she used a software program to alter the size and spacing of individual hairs. She then filled the molds with a soft casting rubber called PDMS (polydimethylsiloxane), and pulled the hairy surfaces out of the mold after they had been cured. […]
[The] researchers mounted each hairy surface to a vertical, motorized stage, with the hairs facing outward. They then submerged the surfaces in silicone oil — a liquid that they chose to better observe any air pockets forming.”
From their observations of the different amounts of air trapped by different hairy surfaces, the team then constructed a model that described the air trapping mathematically. They turned these results into a scalable equation: hair density, hair length, and diving speed can now be used to determine air-trapping – and heat-saving – capabilities. (This precision means a “Cookie Monster” level of hairiness isn’t needed to maintain warmth.)
While I’m no surfer, I am a fan of aquatic mammals, and I’m fascinated at the science behind this innovation! I can’t wait to see the eventual application – in wetsuits and other fields, like industrial dip-coating. We still have so much to learn from Nature, and this just proves it.
A team out of the University of Illinois at Urbana-Champaign has done a serious assessment of studies of brain training games, like those featured in Lumosity and Learning Rx. They hoped to get a final answer on whether playing these skill-specific games can result in general memory and cognition strengthening, as these companies assert and some scientists maintain. The results don’t bode well: many of the studies examined did not “adhere to what we think of as best practices,” says project leader Daniel Simons, professor of psychology at UIUC – thus casting doubt on the assertion that overall brain power can be improved by several rounds of Word Bubbles.
Some of the studies’ sample sizes were too small, or they didn’t have a control group; others didn’t consider the placebo effect. And the studies that did have sound methodologies showed that task-based brain training games do indeed improve your brain’s function – but only when later performing that exact task.
“‘You can practice, for example, scanning baggage at an airport and looking for a knife,’ [Prof. Simons] says. ‘And you get really, really good at spotting that knife.’
But there was less evidence that people got better at related tasks, like spotting other suspicious items, Simons says. And there was no strong evidence that practicing a narrow skill led to overall improvements in memory or thinking.
That’s disappointing, Simons says, because ‘what you want to do is be better able to function at work or at school.’”
Some scientists are holding out hope that a longer term use of brain training games, which hasn’t yet been studied in depth, may lead to overall improvement in brain functioning, and stave off age-related decline.
Until then, Lumosity and Learning Rx have been knocked back on their heels by fines levied by the U.S. Federal Trade Commission, which found their advertising of general cognitive improvement through gameplay to be unsupported. We’ll just have to see how this shakes out. I will be playing Word Bubbles while I wait – because it’s fun.
Schadenfreude, an originally German term, describes a very human emotion — translated by a certain hit Broadway musical as “happiness at the misfortune of others.” Now, a team of neuroscientists who were actually studying neurons associated with observational learning has found the particular brain cells that activate when we see someone else fail — which may reveal the physical manifestation of this complex emotion.
The phenomenon of observational learning — taking lessons from others’ experiences so we don’t have to undergo them — is central to human cognition. The behaviour of “schadenfreude neurons” adds an interesting extra dimension to this action.
“For the study, ten epileptic patients who had electrodes implanted deep in their brains — standard procedure for epileptic studies — were asked to play a card game in which they would draw a card from one of two decks. The odds were stacked against them in one deck, so that they only had a 30 percent chance of winning. The other deck was rigged in their favor […]
The researchers noticed a change in the firing of brain cells deep in the frontal lobe—specifically in a brain area associated with decision-making, emotion, and social interactions—that corresponded to whether the players thought their opponents would win or lose. Furthermore, the cells responded differently after players learned whether their prediction was correct or not. […]
The most surprising observation was that those cells also showed increased firing activity whenever a player won, or his or her opponents lost, and decreased activity when a player lost and the opponents won. That’s the basic definition of schadenfreude: we experience pleasure when we win, but also when others lose.”
Proving the connection between neuronal activity and human feeling has proven notoriously difficult for scientists, so this study is a step in an exciting direction, even if it emphasizes a sometimes-nasty impulse. But we’re complicated: “There’s a Fine, Fine Line” between good and bad in us, and the next time someone you know thinks “What Do You Do With A B.A. In English?” or why “It Sucks To Be Me,” or how much they miss “My Girlfriend Who Lives In Canada,” and you feel good about it, imagine the wonders your neurons are busy performing! “For Now” at least.
Sometimes, the foods that make you sickest are the ones that passed the (literal and figurative) sniff test. In determining food safety, there can be more guesswork than is comfortable. But soon, standing in front of your fridge wondering whether that package of frozen hamburgers, whose batch number isn’t quite the one that was recalled, is worth the risk will be a thing of the past.
Scientists have developed a new weapon for the food contamination detection arsenal, one that makes the presence of the bacterium E. coli too obvious to miss. (E. coli, usually found harmlessly in the guts of healthy people, includes strains that can cause severe gastric distress if ingested. It can be found in ground beef, unpasteurized milk, and produce, like lettuce or spinach, that may have been exposed to farm runoff.) The tool is a bacteriophage (a virus that infects only bacteria) engineered to carry NanoLuc, a light-producing luciferase enzyme derived from Oplophorus gracilirostris, a deep-sea shrimp that glows blue. The phage is designed to infect only E. coli, and when it does, the bacteria glow visibly, much like the shrimp. That “makes contaminant detection as simple as turning off the lights”!
Researchers are now busy creating a virus that will infect Salmonella, another cause of food-borne illness. An invention like this can not only prevent the personal discomfort of a bout of food poisoning, but also save food companies the money usually spent on giant recalls — as well as the medical system’s time and resources. (E. coli infections can be life-threatening.) Delicious!
Earlier this week, we (meaning the dogs and humans living here) had some excitement in the backyard. It started when Jill hit (no, pounced) on the back door to go outside. Both she and Samson started some serious barking and jumping. I looked outside to see what all the excitement was about, and it was a BIG PORCUPINE!!! Instead of opening the door to let them out (because, electric fence or not, I believe they both would have gone after that prickly rodent in our yard), I locked the door and called out to my husband. (I think Jill would have gotten the door open.) My husband took out his shotgun and played Elmer Fudd, except he didn’t want to hunt “wabbits”; he just wanted to scare away the porcupine. Porcupines seem to march to their own drummer, so when the shotgun was discharged over the porcupine’s head, all it did was raise its quills and continue plodding its way into a bush at the edge of the woods. Porcupines must be stupid, or extremely confident, to act so nonchalantly when a shotgun is fired over them.
Bees + Elephants = Conservation
I have always thought of elephants as the gentle giants of the animal world. Turns out, if you live alongside them in Africa, they can often be the opposite of gentle: destroying crops and pushing over trees in their effort to get enough food to power their massive bodies.
Farmers and researchers in communities in Africa have long sought a way to limit elephants’ destructive tendencies in a way that doesn’t compromise the elephants’ well being. They seem to have found it, and are now steering one of the largest animals on earth away from damaging behaviour by harnessing their natural reaction to a very small animal: African bees!
Elephants are terrified of these particularly relentless insects, specifically because they can fly up their trunks and sting them from inside. So they will quickly evacuate an area where they hear the sound of an active beehive. This has led farmers and community members to put up hives along elephants’ routes towards tasty crops — when the bees are disturbed, they buzz, and the elephants turn right around!
Similarly, as conservationist and National Geographic Emerging Explorer Paula Kahumbu explains in this video for the venerable magazine, elephants are also repelled by the bee-like sound of drones. So drones can be used to “chase” elephants away from places they can damage — as well as from the resulting conflict with humans that could end up injuring them.
I love how nature and low-impact technology are being used to help these beautiful, intelligent creatures live their lives as they battle back from the terrible effects of poaching. There’s a lesson for humans in here too: there are ways to work with many animals, even when what they’re doing harms our way of life.