If you’ve ever investigated the culture of business or self-improvement, you’ve likely heard tell of the marshmallow test: the study done at Stanford in the 1960s on young kids attending the university’s preschool program. Run by professor of psychology Walter Mischel, the test involved placing a marshmallow in front of a preschooler and telling them that, if they could successfully hold off eating that marshmallow for AN ETERNITY (fifteen minutes), they’d get a second marshmallow as a reward — then leaving them alone to rely on their own willpower. Mischel and team then tracked the kids into later life, and found that the ability to delay gratification was associated with a higher incidence of “good” personality traits, like confidence — as well as, critically, higher SAT scores.
A new study out of NYU is now calling this classic test into question. The new team ran the test again with an expanded subject pool, larger both in number (900 subjects versus the original 90) and in diversity of factors like race/ethnicity, parents’ education level, and household income. With this more representative sample, they discovered that self-control is less an individual choice and more a product of social and economic background — and that background, in turn, is the stronger predictor of later success.
“[A]mong kids whose mothers did not have college degrees, those who waited did no better than those who gave in to temptation, once other factors like household income and the child’s home environment at age 3 […] were taken into account. For those kids, self-control alone couldn’t overcome economic and social disadvantages.
The failed replication of the marshmallow test does more than just debunk the earlier notion; it suggests other possible explanations for why poorer kids would be less motivated to wait for that second marshmallow. For them, daily life holds fewer guarantees: There might be food in the pantry today, but there might not be tomorrow, so there is a risk that comes with waiting. And even if their parents promise to buy more of a certain food, sometimes that promise gets broken out of financial necessity.”
In short, context is key! In science and, most importantly, in life. Sometimes, personal responsibility means taking ownership of your context as well as your actions — and recognizing when your actions alone can’t overcome your context. Now, as an adult, I’m off to snack on as many marshmallows as my heart desires!
I’m no Brassica-phobe: I love broccoli. Steamed, roasted, in a crunchy salad or creamy soup: its flexibility and high vitamin C content make it one of my favourite green veggies.
But I draw the line at a new way of consuming broccoli developed in Australia, land of the weird — where it is dehydrated, pulverized, and sprinkled in espresso-based drinks.
Hort Innovation developed the powder to combat two unfortunate vegetable trends: the fact that the average Australian is not consuming their recommended 5–7 daily servings of vegetables, and that a staggering amount of produce is turfed before reaching stores because it is too “ugly” to sell. But Australians love their coffee — which has led to this unholy piggybacking of veggie upon java.
“The production process involves pre-treatment before drying and powdering the vegetable, to retain as much of the original colour, flavour and nutrients as possible.
The result may even be healthier than stir-fried florets — according to recent research, the best way to maximize the health benefits of broccoli is to chop it up as finely as possible to trigger myrosinase activity (although the CSIRO hasn’t mentioned whether myrosinase survives the drying process, so the jury is still out).
To make broccoli coffee, the powder is added to the cup after the espresso shot has been pulled. Steamed milk is added, and more broccoli powder is sprinkled on top.”
Apparently the broccoli powder does not seamlessly incorporate into the beverage: the taste is still a bit… cruciferous. But the creators are now investigating ways of bringing the powder to the home market — I imagine you’d have an easier time getting it down in a green smoothie or on top of a salad. (Though, if you’re already consuming smoothies and salads, Australia is probably not worried about you getting your seven daily servings…) I think we can agree that the concept — increasing veggie consumption while reducing food waste — is great. It’s just a matter of finding a more palatable execution!
And here I thought the biggest problem with NASA’s Voyager probes, launched for deep space exploration in 1977, was that they came with a handy map of how to get to Earth for whoever in the universe might be tempted to pop by and eat us. But a pair of scientists have taken a closer look at Voyager 1 and 2’s “Golden Records” — two duplicate plaques containing 115 images of life on our planet, natural and human-made sounds, and greetings in 55 languages. And what they hypothesize here is a failure to communicate.
Movies like Arrival epitomize recent trends in thinking about possible alien contact. There is no reason to expect humanoids with forehead ridges to show up; so who says that a civilization that might come across a Golden Record would have any reference point for what we’re trying to tell them?
Rebecca Orchard and Sheri Wells-Jensen of Ohio’s Bowling Green State University say that if aliens’ senses don’t include sight or hearing — let alone if they have a completely different way of organizing outside inputs — they will miss out on a good chunk of the shiny goodwill message from our humble planet.
“Orchard and Wells-Jensen went through the material on the record and considered what an alien civilization with a different suite of senses might make of it. The barrage of greetings ‘pile up in a way that could be construed as arguing’, said Orchard, in a language that has ‘no grammatical congruity’. That is, if they can hear.
The 12-inch gold-plated copper disc has audio on one side and images on the other, and this could lead to further misunderstandings, the researchers believe. If an alien civilization tried to match sounds to their objects, life on Earth can look very strange. ‘What if you pair the image of an open daffodil with the roar of a chainsaw?’ said Orchard.”
The Golden Record does take on darker shades when we think of accidentally confusing entities we might never begin to understand.
Perhaps our only hope is for one of the Voyagers to evolve itself into a more powerful, bionic being that can amplify and translate our peaceful message! Until then, you can find me preparing to greet our alien visitors by marinating myself in barbecue sauce.
We at DFC love elegant solutions — especially when they open up new experiences to folks underserved by the status quo. This is why we join most of the gaming community in a giant “w00t!” in response to the just-announced Xbox Adaptive Controller (XAC).
Created to address the accessibility challenges of the standard controller that ships with Microsoft’s popular Xbox family of consoles, the XAC is a streamlined flat white oblong that boasts but a few key inputs. Besides the two menu buttons and the d-pad, two hand-sized black buttons — the “A” and “B” keys — take up most of the real estate on the face of the device. These buttons can be reprogrammed with the Xbox Controller app, but the ease and variety of ways a gamer with different mobility requirements can hit them (wrist! elbow! foot!) don’t change.
But the really cool functionality of this device is unveiled when it is turned on its side. Arrayed there are nineteen 3.5mm jacks, all ready for a prospective user to plug in as many pieces of equipment as they need to make up their unique gaming setup. The XAC becomes a hub for joysticks of all kinds, foot pedals, cheek- or head-operated buttons, sip/puff switches, and more! From Ars Technica’s incredibly comprehensive report on the XAC’s debut:
“I watched MikeTheQuad, a member of the Warfighter Engaged community of disabled veteran gamers, test the XAC out. As a tetraplegic, Mike has some range of arm and hand motion, but his individual fingers are not up to the burden of holding a controller and pressing all its buttons. […]
Mike used a standard Xbox gamepad alongside the XAC, plus a few large buttons plugged into the unit to rest near his wrists for easier access. That positioning flexibility is no small perk. XAC’s combo of wireless protocols, 20-hour battery, and mounting brackets means someone like Mike can pretty much put the hub wherever is most convenient.
Mike also quite frequently flicked his wrist at the XAC’s two big ‘dumb’ buttons to access controls like crouching or weapon swaps. As I watched Mike flick at the XAC with the same speed I might move my thumb from the ‘A’ button to the ‘Y’ button, I thought for the first time in my life about what a privilege it is to quickly tap around all of a gamepad’s buttons.”
While the XAC team does consider themselves a bit late to the overall accessible gaming scene, they are happy that their new device brings unprecedented flexibility to the table for an equally accessible price ($99.99 USD; other controllers on the market can go for $300 and change). And they profess the belief that anyone else — Nintendo, Sony — should be able to look to the XAC for inspiration for their own platforms. The object is to get as many gamers as possible having as much fun as possible, without barriers. And who doesn’t love fun? At least as much as elegant solutions!
As we considered back in February, blue is everywhere on the Internet, and as such has a strong case for being its official colour. I’d bet we can therefore consider it the most modern colour, too!
Design arguments aside, Science Alert has more scientific evidence that bolsters blue’s avant-garde status. There is linguistic evidence that blue has been recognized as its own colour — and therefore “seen” by human eyes — only relatively late in human history.
Writer Fiona MacDonald parses studies, some up to 200 years old and spanning a variety of cultures, which show that most of ancient humanity (at least those with written histories) lacked a distinct word for the colour blue, while faring well with black, white, red, and yellow. The first human culture with a recorded word for blue was actually the ancient Egyptians, who had invented a mass-producible blue dye. The colour blue was important to the Egyptians, who lived along the Nile and revered the river for its religious and agricultural significance.
But does the lack of blue in a natural environment necessarily mean it went literally unseen? More recent research shows that it’s more a matter of the definition of “blue” than of, say, certain cones being absent from the eye.
The Himba community of Namibia was tested by a team out of Goldsmiths, University of London in 2006. The Himba do not have a word for blue in their language — but they have many words for “green”. In visual tests that showed one blue square in a circle of green squares, the Himba subjects were unable to identify which of the squares was blue; they saw them all as the same colour. (In contrast, the researchers ran the test again with one square in a slightly different shade of green than the others. The Himba subjects spotted that square immediately, while English speakers could not!)
“Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) and dark blue (siniy), can discriminate between light and dark shades of blue much faster than English speakers.
This all suggests that, until they had a word for it, it’s likely that our ancestors didn’t actually see blue.
Or, more accurately, they probably saw it as we do now, but they never really noticed it.”
There’s something lyrical about the idea of a colour coming into being for us only if we know we’re looking at it. It makes me wonder how much else we are missing out on, simply because we don’t have words for it. Perhaps our language will evolve to show us, as it has before. Then we — and our Internet — will never be the same.
Just when you thought it couldn’t get any creepier: At the I/O conference at the beginning of this month, Google debuted a new add-on to its Google Assistant program. Dubbed Duplex, the feature is billed as helping heavily digitized personal lives mesh more easily with those who haven’t caught up — by using sophisticated voice recognition and natural-sounding speech so your computer can talk to humans who don’t realize it’s a computer.
I cannot emphasize enough how freaky this is. Check out a clip of the keynote here, where Google CEO Sundar Pichai demos real conversations between Duplex (strategically deploying “um”s, uptalk, and informal syntax) and the poor, obsolete, flesh-and-blood humans answering the phone at a hair salon and a restaurant. Duplex finagles reservations or information out of both, then messages the user with updated info. It even adds the successfully scheduled event to the calendar.
As handy as this might be for some of us, Google isn’t doing an awful lot to address a specific concern that arose almost immediately after the announcement. Google has a lot of information on each of us. Considering the issues the company has with keeping that info safe (as well as questions surrounding why on earth it needs it to begin with), a deceptively helpful feature like Duplex could end up doing more harm than good to a user.
“It knows everything you browse on Chrome, and places you go on Google Maps. If you’ve got an Android device it knows who you call. If you use Gmail it knows how regularly you skip chain emails from your mom. Giving an AI that pretends to be human access to all that information should terrify you.
A bad actor could potentially cheat information out of the Duplex assistant in a phone call. Or use the Duplex assistant to impersonate you, making calls and reservations in your name. It’s also, just, you know, an AI that KNOWS YOUR ENTIRE LIFE.”
We have long held up our end of the bargain — we have given companies access to our data and metadata in exchange for fleeting fun or profit. I think the time has come for us to get a lot smarter about how we interact with potentially mercenary or exploitable tech… Because it looks like it’s just about ready to become smarter than us.
At DFC, communication is both our business and our obsession. We strive for the perfect balance of simplicity and effectiveness in each solution we provide. That’s why I am bowled over with admiration for a unique method of inter-village communication devised by the Bora people of the Peruvian, Brazilian, and Colombian Amazon. Recently studied for the first time in depth by linguist Frank Seifart of the University of Cologne, the Bora “public address” system uses drumbeats to send messages across large distances. But instead of requiring a separate code or language, drummers and their drums represent the tones and timing of spoken Bora — resulting in messages that are easily understood by community members kilometres away.
This style of communication has been common for centuries among cultures with tonal languages, including speakers of Yoruba and Chin. Bora has two tones, low (coded as female) and high (coded as male) — so two drums made of hollowed tree trunks (called manguaré) are required.
Seifart and team undertook their study in collaboration with five drummers and their drums in the Bora region, collecting a staggering amount of specific data on a practice that has been handed down for generations.
“As predicted, the tones of the 169 drummed messages matched the high and low tones of spoken Bora. Words appeared in a formulaic order, and nouns and verbs were always followed by a special marker. […]
When the team compared the drumbeats to the words they represented, they found a second pattern: The intervals between beats changed in length depending on the sounds that followed each vowel. If a sound segment consisted of just one vowel, the time after the beat was quite short. But if that vowel was followed by a consonant, the time after the beat went up an average of 80 milliseconds. Two vowels followed by a consonant added another 40 milliseconds. And a vowel followed by two consonants added a final 30 milliseconds.”
This slight difference in rhythm keeps completely different drummed messages (“go fishing” vs. “bring firewood”) thoroughly intelligible — and may, Seifart and co. theorize, be transferable to spoken, non-tonal languages too. Linguists have long been stumped by the “cocktail party problem”: how the human brain makes sense of words spoken in noisy contexts (like a conversation with a friend in a loud bar). The human awareness of small changes in rhythm, even if unconscious, may point to a fascinating new direction for research!
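The timing pattern the study reports can be sketched as a toy encoder. To be clear, this is an illustrative sketch, not the actual Bora drumming system: the segment-shape labels, the baseline gap, and the message format are my own assumptions — only the millisecond increments come from the study as quoted above.

```python
# Toy sketch of the reported drum-timing pattern (NOT the real Bora system).
# Average extra gap (ms) after a beat, by the shape of the sound segment:
#   V   = lone vowel (shortest gap)
#   VC  = vowel + consonant        (+80 ms)
#   VVC = two vowels + consonant   (+40 ms more)
#   VCC = vowel + two consonants   (+30 ms more)
EXTRA_GAP_MS = {"V": 0, "VC": 80, "VVC": 120, "VCC": 150}

BASE_GAP_MS = 100  # hypothetical baseline gap after any beat


def drum_beats(syllables):
    """Turn (tone, segment_shape) pairs into (tone, gap_ms) beat instructions.

    tone is "H" (high drum) or "L" (low drum), mirroring spoken Bora's two tones.
    """
    return [(tone, BASE_GAP_MS + EXTRA_GAP_MS[shape]) for tone, shape in syllables]


# Two messages with identical tone sequences can still differ in rhythm:
print(drum_beats([("H", "V"), ("L", "VC")]))    # [('H', 100), ('L', 180)]
print(drum_beats([("H", "VC"), ("L", "VCC")]))  # [('H', 180), ('L', 250)]
```

The point of the sketch is just that tone alone is ambiguous; the inter-beat gaps carry the extra information that lets listeners tell similar messages apart.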
Like you, we at DFC have no time for Luddites: technology is here to stay, and it’s important for us to grapple with what kind of effect it will have on us, rather than sticking our heads in the sand and hoping it goes away.
This is easy to do when examples of said technology are obvious and ridiculous — I have been dining out on Juicero jokes for months now. But there are plenty of other insidious interventions that are either too scary to look at directly, or too well-hidden by the bad actors behind them.
I include in this category the fallout from Big Data — the general term for the use of predictive analytics and other rigid algorithms to crunch masses of information. The results often have an impact on human lives, in ways that automated processes can’t take into account. For example, a faceless algorithm’s notion of who you are online can funnel ads and news to you that exclude alternate viewpoints: editing (or virtually censoring!) the world before you can make a decision about it. I’ve put mathematician Cathy O’Neil’s book Weapons of Math Destruction on my reading list, and have been researching in preparation for diving in. What I’ve found worries me:
“Like the dark financial arts employed in the run-up to the 2008 financial crisis, the Big Data algorithms that sort us into piles of “worthy” and “unworthy” are mostly opaque and unregulated, not to mention generated (and used) by large multinational firms with huge lobbying power to keep it that way. ‘The discriminatory and even predatory way in which algorithms are being used in everything from our school system to the criminal justice system is really a silent financial crisis,’ says O’Neil. […]
Indeed, O’Neil writes that WMDs punish the poor especially, since ‘they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.’ Whereas the poor engage more with faceless educators and employers, ‘the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.’”
The supposed impartiality that Big Data dangles in front of us flawed humans is definitely attractive. It’s attractive because it’s aspirational; we are flawed. But we can’t forget that it’s precisely our nature to use or interpret Big Data in ways that are biased or prejudiced. Our reliance on technology doesn’t absolve us of moral responsibility, to the people we know directly, or to the greater society. We’re all in this together… You can’t say that about algorithms!
We at DFC live a pretty rural life. While we have an excellent emergency response system, if we do say so ourselves, we still feel a bit more remote from others than we did in the ’burbs. I’ve often wondered what would happen to Jill and Samson if either of their humans were suddenly incapacitated. I like to think that they would help us — but, according to National Geographic, we may need to have a talk with them about that.
Contributor Erika Engelhaupt has looked at a 2015 study, as well as 63 separate cases of pet owners dying or becoming otherwise incapacitated alone in their homes, only to have their beloved animals eat part of them. She lays to rest several assumptions about this behaviour that the public has perhaps formed from gruesome reports or legends. For example, the stereotype that cats are soulless hunters who would gladly eat the faces of their pitiful owners doesn’t hold up by the numbers: dogs appear most often in reports as the ones who caved and, uh, chowed down.
Also, pets might not eat their owners out of malice, or as a last resort:
“In 24 percent of the cases in the 2015 review, which all involved dogs, less than a day had passed before the partially eaten body was found. What’s more, some of the dogs had access to normal food they hadn’t eaten.
The pattern of scavenging also didn’t match the feeding behavior of canines in the wild. When dogs scavenged dead owners indoors, 73 percent of cases involved bites to the face, and just 15 percent had bites to the abdomen. […]”
And they may be listening to a deeper, wilder voice from their evolutionary past — one that overrides their more recent bond with their owner:
“‘One possible explanation for such behavior is that a pet will try to help an unconscious owner first by licking or nudging, […] but when this fails to produce any results the behavior of the animal can become more frantic and, in a state of panic, can lead to biting.’
From biting, it’s an easy jump to eating, [forensic anthropologist Carolyn] Rando says: ‘So it’s not necessarily that the dog wants to eat, but eating gets stimulated when they taste blood.’”
To which I say, “Not on my watch!” I am sure we can train Jill (who is smart enough to open doors for herself) or Samson (who knows how to keep a cool head in an emergency) to call for help if we fall down the well. They will be the exception, of course.
I had a productive conversation with a friend this week, in which we potayto–potahto-ed over the world’s most controversial herb, cilantro. I love it, and welcome its delicate flavour in anything from curry to scrambled eggs. He loathes it, swears it tastes like “metal soap,” and would willingly launch every last ounce of it into the sun.
Science has shown that my violently anti-cilantro friend shares a genetic trait with up to 14% of the world’s population, one that makes them sensitive to aldehydes in cilantro that are chemically similar to aldehydes produced as byproducts of soapmaking. (Soapy = poisonous makes a compelling reason to avoid the herb!) Since I can’t detect the soapiness, I find myself in the population whose bodies won’t reject cilantro as possible poison, making me… a dinosaur?
Perhaps literally, says a new study from evolutionary psychologists at the University of Baltimore! Turns out that, with similar, foolishly self-destructive tastebuds, dinosaurs may have contributed to their own demise by persisting in chowing down on harmful angiosperms, not realizing their danger. From Phys.org:
“‘Learned taste aversion’ is an evolutionary defense seen in many species, in which the animal learns to associate the consumption of a plant or other food with negative consequences, such as feeling ill. […]
The first flowering plants, called angiosperms, appear in the fossil record well before the asteroid impact and right before the dinosaurs began to gradually disappear. [Study leaders Gordon Gallup and Michael] Frederick claim that as plants were evolving and developing toxic defenses, dinosaurs continued eating them despite gastrointestinal distress. Although there is uncertainty about exactly when flowering plants developed toxicity and exactly how long it took them to proliferate, Gallup and Frederick note that their appearance coincides with the gradual disappearance of dinosaurs.”
While climate change due to the asteroid impact definitely had an effect, an overall weakening of the dinosaurs through diet would neatly explain why their extinction took so long — playing out over millions of years both before and after the asteroid hit. That’s a cosmic timeline that brings me comfort: at least I can still enjoy ALL THE CILANTRO for the more human-scale time I personally have left!