
Climate Change: Toxic in More Ways than One

We at DFC have been really lucky. As we’ve been expanding our sauce and condiment business, we’ve developed fantastic relationships with some really lovely food folks — the Ormsbee’s Mercantile team who provide us with maple syrup, and the garlic gurus at the Bowness Family Farm, to name just two! We’ve been learning more about organic, local growing practices that are in touch with the natural cycles of the earth — and often result in better, more delicious products!

But, this new awareness of our interconnectedness has brought a stark realization with it. Our freaky winters and parched summers can wreak havoc with our friends’ harvests, and their — and our — livelihood.

Climate change is upon us. And, as I’ve recently read in a fascinating take in Vice, the most devastating effect it could have (floods and ice storms aside) may be on the world’s food supplies. According to scientists, we are already seeing higher amounts of toxins in certain foodstuffs. Cassava is a big one: not only is lack of water allowing (in this case, naturally occurring) hydrogen cyanide to concentrate, but the scarcity of other things to eat due to drought forces people to eat more of the hardy root — and more of its cyanide.

But there are many other ways climate change could start to poison our food.
 
“Plants try to protect themselves in the face of a changing climate too, and the ways they do can be harmful to humans. They use a compound called nitrate to grow and convert it into other molecules like amino acids and proteins. When crops like barley, maize or millet are faced with drought, they slow down or stop this conversion, which leads to a nitrate buildup. […] If a human eats large amounts of nitrate, it can ‘stop red blood cells from transporting oxygen in the human body,’ Yale360 reported. […]

In the opposite direction, heavy rains can lead to a toxic buildup of hydrogen cyanide or prussic acid in foods like flax, maize, sorghum, arrow grass, cherries and apples. […] With flooding, there can be an increase in fungal growth and mycotoxins on crops.”
 
All these toxins wreak havoc on the human body. And, as is the unfortunate case with most terrible natural disasters, the people hardest hit will be the most disadvantaged — the ones who can’t pay their way to safety.
 
This planet is the responsibility of all of us, though. And we in Canada can’t content ourselves that we’ll be (short term) fine as the planet warms, nor can we out-of-sight-out-of-mind our fellow food eaters. I don’t know what it’ll take to turn this climate change ship around, but I sure hope it doesn’t require something as intimate as changing our very sustenance into poison.

Machine Learning and Human Health: Decoding New Antibiotics

Long-time readers of this newsletter can corroborate: We’re always interested in the development of AI through machine learning. We’ve seen bits of computer intelligence fool university students, teach English to Japanese schoolkids, name kittens, and sort Lego.
 
While it is fun to think about a computer dubbing a baby cat Snox Boops, how well does machine learning work with less frivolous data? Well, a team from MIT has found out, by challenging an AI to pore through thousands of pharmaceutical compounds and come up with a working antibiotic. And it has succeeded — unbelievably well.
 
“To find new antibiotics, the researchers first trained a ‘deep learning’ algorithm to identify the sorts of molecules that kill bacteria. To do this, they fed the program information on the atomic and molecular features of nearly 2,500 drugs and natural compounds, and how well or not the substance blocked the growth of the bug E. coli.
 
Once the algorithm had learned what molecular features made for good antibiotics, the scientists set it working on a library of more than 6,000 compounds under investigation for treating various human diseases. Rather than looking for any potential antimicrobials, the algorithm focused on compounds that looked effective but unlike existing antibiotics. This boosted the chances that the drugs would work in radical new ways that bugs had yet to develop resistance to.”
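For the curious, the two-step logic in that quote (learn what “active” looks like, then screen for compounds that score well but don’t resemble existing drugs) can be sketched in a few lines of Python. Everything below is invented for illustration: the toy fingerprints, the compound names, and the crude bit-counting “model” standing in for MIT’s deep neural network. Only the shape of the idea matches the real study.

```python
from collections import Counter

# Toy "training set": each compound is a fingerprint (a set of structural
# bits) plus a label for whether it blocked bacterial growth. All hypothetical.
training = [
    ({1, 2, 3, 7}, True),
    ({1, 2, 4, 7}, True),
    ({5, 6, 8, 9}, False),
    ({5, 6, 9, 10}, False),
]

# Learn a weight per bit: bits common in actives score positive, bits common
# in inactives score negative. (A crude stand-in for deep learning.)
weights = Counter()
for fp, active in training:
    for bit in fp:
        weights[bit] += 1 if active else -1

def activity_score(fp):
    """Predicted antibacterial activity: sum of learned bit weights."""
    return sum(weights[bit] for bit in fp)

def tanimoto(a, b):
    """Standard fingerprint similarity: shared bits over total bits."""
    return len(a & b) / len(a | b)

known_antibiotics = [fp for fp, active in training if active]

def novelty(fp):
    """1 minus similarity to the closest known antibiotic."""
    return 1 - max(tanimoto(fp, k) for k in known_antibiotics)

# Hypothetical screening library.
library = {
    "cand_A": {1, 2, 3, 7},    # potent-looking but identical to a known drug
    "cand_B": {1, 2, 11, 12},  # potent-looking AND structurally novel
    "cand_C": {5, 6, 9, 11},   # resembles the inactives
}

# The key filter from the study: keep compounds that look effective
# but UNLIKE existing antibiotics, hinting at a novel mechanism.
picks = [name for name, fp in library.items()
         if activity_score(fp) > 0 and novelty(fp) > 0.5]
print(picks)
```

In this toy run only cand_B survives: cand_A looks potent but is a near-copy of a known drug, and cand_C resembles the duds. That “effective but unfamiliar” filter is what raised the odds of finding something bugs haven’t seen before.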
 
The AI found a stellar combo, which the researchers cheekily named “halicin,” after the meddling computer HAL 9000 in 2001: A Space Odyssey. But the antibiotic itself is far more helpful to humans than anything HAL was responsible for: In tests, it has cleared the bacterium behind tuberculosis and C. difficile, as well as a host of other, equally drug-resistant bugs.
 
Its creators are hoping to work with a non-profit or pharmaceutical company to bring halicin to the market in the near future. Until then, their concept proven, they will continue to throw molecules at their trusty AI — who knows what medical wonders will come out the other side!

Chilly Canadian Data Key in Teaching Self-Driving Cars

We at DFC spend a lot of time on the road, from visiting clients to dropping in on family, to ferrying our barbeque sauces to market. We’ve seen our share of good, fair, and poor drivers — but what we haven’t seen yet are cars with no drivers at all.
 
While I’ve been watching developments in autonomous cars keenly, what hadn’t occurred to me was that they’re all being tested in California and other temperate climes for reasons that have nothing to do with proximity to Silicon Valley. It’s primarily because the weather there is nice — and in rugged wintry Canada, it’s, well… not so much.
 
But this has led to a bias in the AI used in autonomous cars, where the data set of road conditions in sunny SoCal is perfect — too perfect. This spells danger in the Great White North. According to an account in Wired, professor Krzysztof Czarnecki, who built his own self-driving car in 2018 and attempted to train it in snowy Waterloo with a data set from more temperate Germany, nearly didn’t make it out alive. He quickly figured out why.
 
“Inclement conditions are challenging for autonomous vehicles for several reasons. Snow and rain can obscure and confuse sensors, hide markings on the road, and make a car perform differently. Beyond this, bad weather represents a difficult test for artificial intelligence algorithms. Programs trained to pick out cars and pedestrians in bright sunshine will struggle to make sense of vehicles topped with piles of snow and people bundled up under layers of clothing.

‘Your AI will be erratic,’ Czarnecki says of the typical self-driving car faced with snow. ‘It’s going to see things that aren’t there and also miss things.’”
 
Czarnecki is surprised that big industry players aren’t trying to tackle the harsh weather issue, especially considering the autonomous vehicle industry is pretty well-tested in ideal conditions and could use the challenge. I guess capitalism drives (pun intended!) everything: perhaps there’s not enough of an audience in self-sufficient Canada to make the innovation worthwhile? What do you think the reasons are, dear reader? And, would you even trust a driverless car in some of our wackiest weather?

Hearing a Voice from the Grave — Through Science!

Boris Karloff’s immersive acting technique ain’t got nothing on the determined researchers from Royal Holloway, University of London, the University of York, and Leeds Museum. They were able to scan the preserved vocal cords of a 3,000-year-old mummy, and 3D-printed a version that was then paired with an established invention called the Vocal Tract Organ. Then, they “played” the scanned vocal cords — allowing us to hear a time-traveling vowel sound straight from the throat of an ancient Egyptian priest!

“Professor David Howard, from Royal Holloway, said: ‘I was demonstrating the Vocal Tract Organ in June 2013 to colleagues, with implications for providing authentic vocal sounds back to those who have lost the normal speech function of their vocal tract or larynx following an accident or surgery for laryngeal cancer.

‘I was then approached by Professor John Schofield who began to think about the archaeological and heritage opportunities of this new development. […]

Professor Joann Fletcher, of the department of archaeology at the University of York, added: ‘Ultimately, this innovative interdisciplinary collaboration has given us the unique opportunity to hear the sound of someone long dead by virtue of their soft tissue preservation combined with new developments in technology.’”

(You can hear Nesyamun’s voice from the grave here.)

For me, the most satisfying aspect of this recreation is that it aligns with Nesyamun’s own beliefs: in his religious practice, to speak the name of the dead is to make them live again. Nesyamun has done one better — he is speaking for himself. And we’re hearing his story through our modern technology!

What the Cuttlefish Saw: 3D Hunting and the Structure of the Brain

If the octopus is the mastermind of the sea, then I consider the cuttlefish its tough, canny cousin — a cephalopod enforcer with a literal backbone (not really: it’s an internal shell), a Joe “Pesce,” if you will.
 
Okay, okay, I’ll stop… But a team of scientists from the University of Cambridge and the University of Minnesota won’t: won’t stop trying to understand the cuttlefish predation process using unusual and hilarious means, that is! In an experiment conducted at the Woods Hole Oceanographic Institute, the team outfitted cuttlefish with 3D glasses — the classic, monster-movie, red-and-blue ones — in an effort to find out how they hunt their especially skittish aquatic prey. Turns out, it’s a delicate proposition: cuttlefish use their long dual feeding tentacles to snag dinner, and they have to be just the right distance. If not, they risk scaring the doomed shrimp or crab away, or even missing it entirely. Humans use stereopsis, or binocular vision, as the basis of our depth perception — but do cuttlefish?
 
“To test how the cuttlefish brain computes distance to an object, the team trained cuttlefish to wear 3D glasses and strike at images of two walking shrimp, each a different color displayed on a computer screen […]

The images were offset, allowing for the researchers to determine if the cuttlefish were comparing images between the left and the right eyes to gather information about distance to their prey. […] Depending on the image offset, the cuttlefish would perceive the shrimp to be either in front of or behind the screen. The cuttlefish predictably struck too close to or too far from the screen, according to the offset.
 
‘How the cuttlefish reacted to the disparities clearly establishes that cuttlefish use stereopsis when hunting,’ said Trevor Wardill, assistant professor at the Department of Ecology, Evolution and Behavior in the College of Biological Sciences. ‘When only one eye could see the shrimp, meaning stereopsis was not possible, the animals took longer to position themselves correctly. When both eyes could see the shrimp, meaning they utilized stereopsis, it allowed cuttlefish to make faster decisions when attacking. This can make all the difference in catching a meal.’”
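The offset trick in that quote is really just triangle geometry: each eye’s sight line passes through its own (color-filtered) image on the screen, and the brain places the shrimp where the two lines cross. Here’s a minimal sketch of that calculation. The numbers are purely illustrative, not measurements from the study:

```python
def perceived_depth(eye_sep, screen_dist, left_x, right_x):
    """Distance at which the two sight lines cross (similar triangles).

    Eyes sit eye_sep apart, screen_dist away from a screen showing the
    left-eye image at left_x and the right-eye image at right_x.
    """
    disparity = left_x - right_x  # "crossed" disparity is positive
    return eye_sep * screen_dist / (eye_sep + disparity)

# Hypothetical numbers: eyes 1 cm apart, screen 10 cm away.
# Crossed offset: shrimp appears IN FRONT of the screen (depth < 10).
print(perceived_depth(1.0, 10.0, left_x=0.25, right_x=0.0))
# Uncrossed offset: shrimp appears BEHIND the screen (depth > 10).
print(perceived_depth(1.0, 10.0, left_x=0.0, right_x=0.25))
# No offset: shrimp appears on the screen itself.
print(perceived_depth(1.0, 10.0, left_x=0.0, right_x=0.0))
```

That’s exactly why the cuttlefish “predictably struck too close to or too far from the screen, according to the offset”: an animal using stereopsis has no choice but to aim at the crossing point.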
 
While this experiment uncovers one point where cuttlefish and human vision dovetail, that is where the similarities end. Cuttlefish process stereoscopic images differently than humans do, due to their vastly different brains. Unlike us, they don’t have an occipital lobe (a part of the brain specifically dedicated to processing visual stimuli). That means that stereopsis in humans (and other vertebrates) and cuttlefish developed independently. The next step is for researchers to dissect cuttlefish brain circuitry, to see if they can pin this fascinating difference down!
 
It’s staggering that brains as different as those of humans and cuttlefish can develop the exact same skill. We humans can learn so much from the natural world — not least the fact that despite our advancements we are animals too.

From Beds to the Podium: Recycling at the 2020 Olympics

Olympic and Paralympic officials in Tokyo are scoring a point for sustainability in the design of athletes’ accommodations for the summer Games this July and August. Specifically, the bedframes that the competitors will be sleeping on between matches, races, or bouts in the Athletes Village will be made of a sturdy but recyclable cardboard.
 
As anyone who has ever tried to collapse a shipping box to go in the blue bin knows, corrugated cardboard can be flimsy on its sides, but tenaciously durable along its folded edges. The Tokyo bedframes are constructed out of several folded modules that seem to take advantage of that fact. (Takashi Kitajima, general manager of the Athletes Village, has stated that the cardboard bedframes are stronger than wood.) After the Olympics and Paralympics, the organizers envision the bedframes being fully recycled into a variety of paper products, and the plastic-based mattresses into plastic items.
 
“‘The organizing committee was thinking about recyclable items, and the bed was one of the ideas,’ Kitajima explained, crediting local Olympic sponsor Airweave Inc. for the execution.

Organizers say this is the first time that the beds and bedding in the Athletes Village have been made of renewable materials.

The Athletes Village being built alongside Tokyo Bay will comprise 18,000 beds for the Olympics and be composed of 21 apartment towers. Even more building construction is being planned in the next several years.

Real estate ads say the units will be sold off afterward, or rented, with sale prices starting from about 54 million yen—or about $500,000—and soaring to three or four times that much.”

Japan is a very recycling-conscious society; trust them to come up with such a staggering plan, and follow through with it! They are also a practical culture and assure athletes their recyclable beds are guaranteed to support a sleeping weight of 200kg — though they can’t guarantee they’ll hold up under a celebratory gold-medal bed-jumping party, or any other particularly vigorous, um, sport that athletes at high-level competitions are notorious for. Regardless, we at DFC wish all Olympic and Paralympic competitors the absolute best and look forward to watching their (well-rested!) efforts this summer.
 

Playing at Work: Toys and Brainstorming Creativity

Playmobil, that Lego-complementary, German bastion of many a plastic childhood fantasy world, is poised to enter the workforce!
 
Jason Wilson at the Washington Post trialed the innovation, called Playmobil Pro: a pared-down play set that can be used in play-therapy-like business brainstorming sessions.
 
The Playmobil Pro set neutralizes the human figurines by making them all blank white (you can write identifiers on them with dry-erase marker), and provides heavily symbolic accessories — a jester hat, a superhero cape, flowers, a megaphone. Wilson visits Legoland Windsor, to contrast the Playmobil Pro experience with that of industry leader Lego Serious Play. He participates in an exercise where he decorates a figure that represents how he thinks others see him.
 
“I figured I may as well be honest and wrote ‘Troublesome’ on my figure. I attached a winter hood, suggesting that I’m too closed-off, a suitcase for all my baggage, and a tuba, representing that I often have too loud of a voice. I guess I was trying to be cheeky, at first, but I’d also arrived at some truth.

I found it strange how simply adding tiny accessories to a blank Playmobil figure had caused a level of introspection. Yet everyone else at the table was equally, surprisingly self-critical. […]

During the reflection period, there was great excitement about the applications for Playmobil Pro. ‘I had my reservations that Playmobil Pro might not have the same opportunities for riffing,’ [certified Lego facilitator Ben] Mizen said. ‘But wow, this is great for role play.’ […]

The only note of skepticism came from [fellow participant Greg] Stadler, who said, ‘This is great. But at what point do you put down the toys and start working?’”
 
Wilson’s experience shows that play is important for childhood development, but is difficult to translate into the adult experience. We don’t play the way kids do; our brains no longer require it. So, it may feel weird sitting in a circle of grownups slapping accessories on faceless figures, and trying to shoehorn a clunky metaphor into our choices.
 
But who’s to say we shouldn’t try? Feeling like we shouldn’t — being told to “grow up” by authority — is how we lost that skill to begin with! Tell us, dear reader: Would you feel self-conscious, or creatively unleashed, playing with Playmobil Pro with your boss?

Busy Bees and the Mystery of Lyme

In this newsletter, we have encountered the many wonders of bees. But Texas Monthly has an account of another we can — possibly — add to the list: Lyme disease therapy??
 
Unfortunately, the science is murky, but that seems par for the course for anything involving Lyme — so named after a town in Connecticut where, in the early 1970s, residents began feeling feverish, achy, and intensely fatigued after being bitten by deer ticks carrying the bacterium Borrelia burgdorferi. A course of antibiotics usually clears up infections caught early, but symptoms can persist afterward as “post-treatment Lyme disease syndrome”. Major controversy starts when people feel chronic fatigue and pain, without a conclusive B. burgdorferi infection in their past, and attribute their experience to “chronic Lyme.” Many experts don’t think chronic Lyme exists. But that doesn’t mean the pain and fatigue and disruption sufferers experience don’t.
 
One such sufferer was Tricia Gschwind, who believed she had chronic Lyme after spotting a bullseye rash on her ankle in 2009. Therapies from established doctors did nothing to alleviate her symptoms, so she approached alternative therapies. Soon, she was deliberately stinging herself with honey bees (!) — and, incredibly, started feeling better.
 
“It is possible — some would say probable — that these individuals are promoting a technique whose success is based more on psychology than pharmacology. There is little science to substantiate a cure by stinging. There have been two clinical studies investigating the link between bee venom and Lyme, and though they are compelling, they are confined to petri dishes. […]

Yet these in vitro experiments don’t translate to the human body. For starters, bees don’t carry enough venom to have more than a local effect. This is a good thing, experts argue, because if the venom did reach the bloodstream in a dose large enough to be effective against the bacteria, it could kill the patient. ‘Many things are antibiotic — like bleach,’ said Sam Robinson, a venom researcher at the University of Queensland, in Brisbane, Australia. ‘Bleach is effective at killing any microbe, but you can’t use it as a drug.’”
 
Apitherapy — the use of bee products in human health management — has been part of folk medicine for millennia. How the bee-sting protocol operates for followers is up in the air: after all, the placebo effect actually works. I’d personally steer clear of any treatment whose side effects include sudden anaphylaxis, as well as honey bee murder! But that’s easy to say if I’m not at the end of my rope.

Puffin Tool Use Scratches Deeper into Animal Intelligence

Humans are great at a lot of things — but one that we excel at most is being unconsciously biased towards how we see the world!
 
As such, we have created lots of so-called “intelligence” tests, designed to gauge how the non-human animals with whom we share our planet measure up in the smarts department. But, as we’ve explored in this space before, these sorts of tests often favour human-like behaviour as the gold standard. For example, the classic mirror self-recognition test is a cinch for species that are heavily visual in their information processing — that is, us, and not, say, notoriously nose-smart dogs.
 
But even with that caveat, every once in a while an animal will beat us at our own (heavily rigged) intelligence game. Like the Atlantic puffin recently observed by Annette L. Fayet of Oxford University, only the second puffin the scientist had observed using our most sacrosanct of intelligence markers: a tool.
 
“This time, the action unfolded in front of a camera: The bird spots a stick and grasps it with a cartoon-bright beak. The bird makes a burbling sound. It turns, as if to face the lens. And then it scratches its chest feathers with the stick’s pointy end.

This was not some nesting behavior gone awry. Puffins collect soft grass for their nests, then hurry into their burrows with beaks full of bedding. The puffin in Iceland dropped the stick after it finished scratching. Hours later, the camera recorded the stick, still discarded, on the ground.”
 
Puffins now join the 1 per cent of species worldwide that have been observed using tools, a group which includes their feathered brethren New Caledonian crows and keas (a New Zealand parrot). And they also join us, showing that not only are our attempts to wall ourselves off from our fellow animals completely arbitrary, but that species-specific necessity breeds species-specific invention. Smart — “for an animal” — really is smart!

Smart Headphones, Safe Head

Happy New Year, and welcome to the future!
 
With the holidays over and done with, I’m sure plenty of folks are showing off what Santa brought them. The especially lucky may have found 2019’s trendy audio accessory, a pair of wireless earbuds, in their stocking.
 
But the proliferation of low-profile earbuds and headphones has some safety experts worried. Between the music pumped into our ears, phone screens hijacking our eyes, and winter hoods blocking our periphery, being a pedestrian in the winter can be fatal.
 
But a team out of the Data Science Institute at Columbia University is hoping to change the frightening statistics, by creating “smart” headphones. With (standard) ’phones in, many folks can no longer hear a honked horn or the whoosh of an approaching vehicle: The proposed headphones would sense those external cues, and then impose a warning sound right overtop of the user’s playlist or podcast of choice, in-ear.
 
“The research and development of the smart headphones is complex: It involves embedding multiple miniature microphones in the headset as well as developing a low-power data pipeline to process all the sounds near to the pedestrian. It must also extract the correct cues that signal impending danger. The pipeline will contain an ultra-low power, custom-integrated circuit that extracts the relevant features from the sounds while using little battery power.

The researchers are also using the most advanced data science techniques to design the smart headset. Machine-learning models on the user’s smartphone will classify hundreds of acoustical cues from city streets and nearby vehicles and warn users when they are in danger. The mechanism will be designed so that people will recognize the alert and respond quickly.”
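The core job that quote describes (pick the dangerous sounds out of a noisy street and trigger an alert) can be sketched very simply. The toy detector below flags any audio frame whose energy jumps well above a running ambient baseline. To be clear, this is my own illustrative stand-in, not the Columbia team’s design: their system uses trained machine-learning models and multiple microphones, and the frame size, threshold, and sample values here are all made up.

```python
import math

def rms(frame):
    """Root-mean-square energy of a frame of audio samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def detect_alerts(samples, frame_size=4, threshold=3.0):
    """Return indices of frames whose energy leaps above the running
    ambient baseline: a crude stand-in for a learned acoustic-cue model."""
    baseline, alerts = None, []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        level = rms(samples[i:i + frame_size])
        if baseline is not None and level > threshold * baseline:
            alerts.append(i // frame_size)
        # Smooth the ambient estimate so a brief spike doesn't reset it.
        baseline = level if baseline is None else 0.9 * baseline + 0.1 * level
    return alerts

# Invented "street audio": quiet ambience, one loud honk, quiet again.
quiet = [0.1, -0.1, 0.1, -0.1]
honk = [2.0, -2.0, 2.0, -2.0]
street = quiet * 3 + honk + quiet * 2
print(detect_alerts(street))
```

Only the honk frame trips the detector; the real system’s harder problems (doing this classification on a low-power chip, and distinguishing an approaching car from, say, a passing bus) are exactly what the quoted pipeline work is about.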
 
Of course, the crank in me pipes up around now, demanding why we can’t just look up and pay attention to our surroundings to keep ourselves safe. But human beings are not perfect actors — and if we can rely on tech to remind our fallible brains that there’s a baby in the back seat, why not with walking distracted? (Besides, sometimes a driver just doesn’t see you; and in a battle between a 2-tonne chunk of metal and a human body, guess who loses.)

Averting tragedy is a net good that has no moralizing value. And, really, walking is so much better with a soundtrack! Why not make tech work for us pedestrians this winter?