As we considered back in February, blue is everywhere on the Internet, which gives it a strong case for being its official colour. Might that also make it the most modern colour?
Design arguments aside, Science Alert offers further support for blue’s avant-garde status: linguistic evidence suggests that blue was only recognized as its own colour — and therefore “seen” by human eyes — in the relatively modern era.
Writer Fiona MacDonald parses studies, some up to 200 years old and covering a variety of cultures, showing that most of ancient humanity (at least the part with written histories) lacked a distinct word for the colour blue, while faring well with black, white, red, and yellow. The first culture with a recorded word for blue was actually ancient Egypt, which had invented a mass-producible blue dye. The colour was important to the Egyptians, who lived along the Nile and revered the river for its religious and agricultural significance.
But does the lack of blue in a natural environment necessarily mean it went literally unseen? More recent research suggests it is more a matter of how “blue” is defined than of, say, certain cones being absent from the eye.
The Himba community of Namibia was tested by a team out of Goldsmiths, University of London in 2006. The Himba do not have a word for blue in their language, but they do have many words for “green”. In visual tests that showed one blue square in a circle of green squares, the Himba subjects were unable to identify which of the squares was blue; they saw them all as the same colour. (In contrast, when the researchers ran the test again with one square in a slightly different shade of green than the others, the Himba subjects spotted it immediately, while English speakers could not!)
“Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) and dark blue (siniy), can discriminate between light and dark shades of blue much faster than English speakers.
This all suggests that, until they had a word for it, it’s likely that our ancestors didn’t actually see blue.
Or, more accurately, they probably saw it as we do now, but they never really noticed it.”
There’s something lyrical about the idea of a colour coming into being for us only if we know we’re looking at it. It makes me wonder how much else we are missing out on, simply because we don’t have words for it. Perhaps our language will evolve to show us, as it has before. Then we — and our Internet — will never be the same.
Just when you thought it couldn’t get any creepier: at the I/O conference at the beginning of this month, Google debuted a new add-on to its Google Assistant program. Dubbed Duplex, the feature is billed as helping heavily digitized personal lives mesh more easily with those of people who haven’t caught up — by using sophisticated voice recognition and natural-sounding recordings so your computer can talk to humans who don’t realize it’s a computer.
I cannot emphasize enough how freaky this is. Check out a clip of the keynote here, where Google CEO Sundar Pichai demos real conversations between Duplex (strategically deploying “um”s, uptalk, and informal syntax) and the poor, obsolete, flesh-and-blood humans answering the phone at a hair salon and a restaurant. Duplex finagles reservations or information out of both and then messages the user with the updated info. It even adds the successfully scheduled event to the calendar.
As handy as this might be for some of us, Google’s not doing an awful lot to address a specific concern that arose almost immediately after this announcement. Google has a lot of information on each of us. Considering the issues the company has with keeping that info safe (as well as questions surrounding why on earth it needs it to begin with), a deceptively helpful feature like Duplex could end up doing more harm than good to a user.
“It knows everything you browse on Chrome, and places you go on Google Maps. If you’ve got an Android device it knows who you call. If you use Gmail it knows how regularly you skip chain emails from your mom. Giving an AI that pretends to be human access to all that information should terrify you.
A bad actor could potentially cheat information out of the Duplex assistant in a phone call. Or use the Duplex assistant to impersonate you, making calls and reservations in your name. It’s also, just, you know, an AI that KNOWS YOUR ENTIRE LIFE.”
We have long held up our end of the bargain — we have given companies access to our data and metadata in exchange for fleeting fun or profit. I think the time has come for us to get a lot smarter about how we interact with potentially mercenary or exploitable tech… Because it looks like it’s just about ready to become smarter than us.
At DFC, communication is both our business and our obsession. We strive for the perfect balance of simplicity and effectiveness in each solution we provide. That’s why I am bowled over with admiration for a unique method of inter-village communication devised by the Bora people of the Peruvian, Brazilian, and Colombian Amazon. Recently studied in depth for the first time by linguist Frank Seifart of the University of Cologne, the Bora “public address” system uses drumbeats to send messages across large distances. But instead of requiring a separate code or language, the drummers and their drums reproduce the tones and timing of spoken Bora — resulting in messages that are easily understood by community members kilometres away.
This style of communication has been common for centuries among cultures with tonal languages, including Yoruba and Chin. Bora has two tones, low (coded as female) and high (coded as male) — so two drums made of hollowed tree trunks (called manguaré) are required.
Seifart and his team undertook their study in collaboration with five drummers and their drums in the Bora region, and collected a staggering amount of data on a practice that has been handed down for generations.
“As predicted, the tones of the 169 drummed messages matched the high and low tones of spoken Bora. Words appeared in a formulaic order, and nouns and verbs were always followed by a special marker. […]
When the team compared the drumbeats to the words they represented, they found a second pattern: The intervals between beats changed in length depending on the sounds that followed each vowel. If a sound segment consisted of just one vowel, the time after the beat was quite short. But if that vowel was followed by a consonant, the time after the beat went up an average of 80 milliseconds. Two vowels followed by a consonant added another 40 milliseconds. And a vowel followed by two consonants added a final 30 milliseconds.”
This slight difference in rhythm makes completely different drummed messages (“go fishing” vs. “bring firewood”) thoroughly intelligible — and may, Seifart and co. theorize, be transferable to spoken, non-tonal languages too. Linguistics experts have long been stumped by the “cocktail party problem”, lacking an explanation for how the human brain can make sense of words spoken in noisy contexts (like a conversation with a friend in a loud bar). The human awareness of small changes in rhythm, even if unconscious, may point to a fascinating new direction for research!
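Being the communication nerds we are, we couldn’t resist sketching out how that timing rule might work in practice. Here’s a toy Python sketch: the two-drum tone mapping and the +80/+40/+30 ms increments come straight from the findings quoted above, but the baseline pause and the example “word” are invented placeholders of ours, not real Bora.

```python
# Toy model of the drummed-Bora timing rule described above.
# The +80/+40/+30 ms increments come from the quoted findings;
# BASE_GAP_MS and the example word are our own invented placeholders.

BASE_GAP_MS = 100  # hypothetical "quite short" pause after a plain-vowel beat

# Extra pause after a beat, keyed by the sound segment the beat encodes.
EXTRA_MS = {
    "V": 0,          # a single vowel: short pause only
    "VC": 80,        # vowel followed by a consonant: +80 ms on average
    "VVC": 80 + 40,  # two vowels then a consonant: another +40 ms
    "VCC": 80 + 30,  # vowel then two consonants: a final +30 ms
}

def drum_pattern(segments):
    """Turn (tone, segment) pairs into (drum, pause_ms) beats.

    Low tones are played on the "female" drum, high tones on the "male" one.
    """
    beats = []
    for tone, segment in segments:
        drum = "female (low)" if tone == "low" else "male (high)"
        beats.append((drum, BASE_GAP_MS + EXTRA_MS[segment]))
    return beats

# A made-up three-segment word: the rhythm alone would distinguish it
# from another word drummed with the very same tone sequence.
for drum, pause in drum_pattern([("low", "V"), ("high", "VC"), ("low", "VVC")]):
    print(f"beat the {drum} drum, then pause {pause} ms")
```

Two messages drummed with identical tone sequences but different syllable structures would come out with audibly different gaps between beats, which, if we read the study right, is exactly what keeps “go fishing” and “bring firewood” distinct.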
Like you, we at DFC have no time for Luddites: technology is here to stay, and it’s important for us to grapple with what kind of effect it will have on us, rather than sticking our heads in the sand and hoping it goes away.
This is easy to do when examples of said technology are obvious and ridiculous — I have been dining out on Juicero jokes for months now. But there are plenty of other insidious interventions that are either too scary to look at directly, or too well-hidden by the bad actors behind them.
I include in this category the fallout from Big Data — the general term for the use of predictive analytics and other rigid algorithms to crunch masses of information. The results often have an impact on human lives in ways that automated processes can’t take into account. For example, a faceless algorithm’s understanding of who it thinks you are online can funnel ads and news to you that exclude alternate viewpoints: editing (or virtually censoring!) the world before you can make a decision about it. I’ve added mathematician Cathy O’Neil’s book Weapons of Math Destruction to my reading list, and have been researching in preparation for diving in. What I’ve found worries me:
“Like the dark financial arts employed in the run-up to the 2008 financial crisis, the Big Data algorithms that sort us into piles of “worthy” and “unworthy” are mostly opaque and unregulated, not to mention generated (and used) by large multinational firms with huge lobbying power to keep it that way. ‘The discriminatory and even predatory way in which algorithms are being used in everything from our school system to the criminal justice system is really a silent financial crisis,’ says O’Neil. […]
Indeed, O’Neil writes that WMDs punish the poor especially, since ‘they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.’ Whereas the poor engage more with faceless educators and employers, ‘the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.’”
The supposed impartiality that Big Data dangles in front of us flawed humans is attractive precisely because it’s aspirational. But we can’t forget that it’s in our nature to use or interpret Big Data in ways that are biased or prejudiced. Our reliance on technology doesn’t absolve us of moral responsibility to the people we know directly or to greater society. We’re all in this together… You can’t say that about algorithms!
We at DFC live a pretty rural life. While we have an excellent emergency response, if we do say so ourselves, we still feel a bit more remote from others than we did in the ’burbs. I’ve often wondered what would happen to Jill and Samson if either of their humans became suddenly incapacitated. I like to think that they would help us — but, according to National Geographic, we may need to have a talk with them about that.
Contributor Erika Engelhaupt has looked at a 2015 study, as well as 63 separate cases of pet owners who died or became otherwise incapacitated alone in their homes and whose beloved animals then ate part of them. She lays to rest several assumptions about this behaviour that the public has perhaps formed from gruesome reports or legends. For example, the stereotype that cats are soulless hunters who would gladly eat the faces of their pitiful owners doesn’t hold up by the numbers: dogs appear in reports most often as the ones who caved and, uh, chowed down.
Also, pets might not eat their owners out of malice, or as a last resort:
“In 24 percent of the cases in the 2015 review, which all involved dogs, less than a day had passed before the partially eaten body was found. What’s more, some of the dogs had access to normal food they hadn’t eaten.
The pattern of scavenging also didn’t match the feeding behavior of canines in the wild. When dogs scavenged dead owners indoors, 73 percent of cases involved bites to the face, and just 15 percent had bites to the abdomen. […]”
They may also be listening to a deeper, wilder voice from their evolutionary past — one that overrides the more recent influence of their owners:
“‘One possible explanation for such behavior is that a pet will try to help an unconscious owner first by licking or nudging, […] but when this fails to produce any results the behavior of the animal can become more frantic and in a state of panic, can lead to biting.’
From biting, it’s an easy jump to eating, [forensic anthropologist Carolyn] Rando says: ‘So it’s not necessarily that the dog wants to eat, but eating gets stimulated when they taste blood.’”
To which I say, “Not on my watch!” I am sure we can train Jill (who is smart enough to open doors for herself) or Samson (who knows how to keep a cool head in an emergency) to call for help if we fall down the well. They will be the exception, of course.