New in fascinating nature updates! Everyone’s favourite adorable wrinkly horror, the naked mole rat, has been found to defy everyone’s least favourite law of mortality, the Gompertz law — so say scientists at the lifespan-research company Calico Life Sciences LLC. This means that naked mole rats do not age in a typical way. Besides being a trippy addition to this mammal’s already extraordinary list of cool attributes, the finding could have an impact on human health.
Among the fun facts about naked mole rats: they spend their lives underground in eusocial colonies of up to 300 individuals, with a dominant female “queen” and just a few select male mole rats involved in breeding. They have evolved beyond the need to see very well or to regulate their body temperatures. They are (almost) immune to cancer, and don’t seem to experience pain. And now:
“The team collected what they describe as 3,000 points of data regarding the lifespan of the naked mole rat, and found that many had lived for 30 years. But perhaps more surprisingly, they found that the chance of dying for the mole rats did not increase as they aged. All other mammals that have been studied have been found to conform to what is known as Gompertz’s mortality law, which states that the risk of death for a typical mammal grows exponentially after it reaches sexual maturity — for humans, that means the odds of dying double every eight years after reaching age 30. This, the researchers claim, suggests that mole rats do not age — at least in the conventional sense. They do eventually die, after all.”
The scientists involved in this study state that what “determines naked mole-rat lifespans in captivity [i.e. without predation skewing the numbers] is currently unknown,” and needs a great deal more research. Given that investigation into naked mole rats’ imperviousness to pain has shed light on human congenital pain insensitivity, could this research lead to a greater understanding of human aging? Here’s hoping these sweet, kind-of-horrifying, adorable little old men of the animal kingdom may soon let us in on more of their secrets!
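For the numerically curious, the Gompertz pattern described above is easy to sketch. This is a minimal illustration, not the study’s actual model: the only figure taken from the article is that human mortality risk roughly doubles every eight years after age 30; the baseline risk value is a made-up placeholder.

```python
import math

DOUBLING_TIME = 8.0                       # years for human mortality risk to double
GROWTH_RATE = math.log(2) / DOUBLING_TIME # exponential growth rate implied by that
BASELINE_RISK = 0.001                     # hypothetical annual risk of death at age 30

def gompertz_hazard(age, baseline=BASELINE_RISK, b=GROWTH_RATE, start=30.0):
    """Annual mortality risk under a Gompertz-style exponential model."""
    return baseline * math.exp(b * (age - start))

def mole_rat_hazard(age, baseline=BASELINE_RISK):
    """What the study reports for naked mole rats: risk stays flat with age."""
    return baseline

# Under Gompertz, an 8-year gap doubles the risk (0.001 -> ~0.002);
# for the mole rat, a 3-year-old and a 25-year-old face the same odds.
human_risk_30 = gompertz_hazard(30)
human_risk_38 = gompertz_hazard(38)
mole_risk_young = mole_rat_hazard(3)
mole_risk_old = mole_rat_hazard(25)
```

The contrast is the whole story: the human curve climbs exponentially, while the mole rat’s stays a flat line.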
It’s been so long since I’ve seen a flower out in the wild; I’m having a hard time believing they’ll ever return! But, of course, they will, fulfilling a genetic legacy that has made flowering plants (or angiosperms) the dominant plant type on Earth.
Flowering plants edged out former first-place gymnosperms (which include conifers and ginkgo) around 150 million years ago, but until a recent joint study by teams from San Francisco State University and Yale, scientists were stumped (pun intended!) as to how the small, colourful plants did it. It turns out it likely had to do with their very smallness — specifically, the smallness of their genome.
The team analyzed data held by the Royal Botanic Gardens, Kew (in London, UK), and compared genome sizes to physical properties like the number of leaf pores, and rates of leaf water loss and photosynthesis. The researchers found that the rise and continued success of flowering plants on Earth is a result of what they termed “genome downsizing.” From the BBC:
“By shrinking the size of the genome, which is contained within the nucleus of the cell, plants can build smaller cells.
In turn, this allows greater carbon dioxide uptake and carbon gain from photosynthesis, the process by which plants use light energy to turn carbon dioxide and water into glucose and oxygen.
Angiosperms can pack more veins and pores into their leaves, maximizing their productivity.
The researchers say genome-downsizing happened only in the angiosperms, and this was ‘a necessary prerequisite for rapid growth rates among land plants.’”
Plants with smaller genomes were therefore far more efficient at, well, being plants, which has led to their dominance. Their aesthetic appeal to humans is a lucky side effect!
This landmark discovery has resulted in further, refined questions that scientists should have fun answering: like, why have ferns, ginkgo, and conifers still survived, if, by the metric set by this research, they are no longer the “fittest” plants? While I’m sure the answer will be fascinating, I confess I’m happy with all plants… Though at this moment deep in February, I’m not going to blame myself for treasuring crocuses or daffodils just a tiny bit more!
My two grandsons are still in the visual-entertainment-for-distraction phase (hello there, animated Wheels on the Bus or Paw Patrol for the forty-seventh time). But when they grow old enough to grasp plot and theme, we’re looking forward to introducing them to all the modern film classics, including Star Wars (or “The Space Hero’s Journey”) and The Lion King (or “Animal Hamlet”).
But a recent article by Isabel Fattal in The Atlantic has shed light on a phenomenon that occurs in both these movies, one that I honestly hadn’t clocked — and that would be good to keep in mind when the grandkids graduate to a real movie night. Building on their initial 1998 study, sociolinguist Calvin Gidney and professor Julie Dobrow (both of Tufts University) have shown that many “bad guys” in American children’s entertainment speak with foreign accents or non-standard English dialects. This marks them as “other” in their narratives — creating an Us-vs.-Them conflict that potentially imparts a harmful subliminal message to kids about diversity.
The study found that the accent coded as most “evil” in the entertainments studied was British: In The Lion King, main villain Scar is voiced by Jeremy Irons, who trained at the Bristol Old Vic. But:
“German and Slavic accents are also common for villain voices. Henchmen or assistants to villains often spoke in dialects associated with low socioeconomic status, including working-class Eastern European dialects or regional American dialects such as ‘Italian-American gangster’ (like when Claude in Captain Planet says ‘tuh-raining’ instead of ‘training.’) None of the villains in the sample studied seemed to speak Standard American English; when they did speak with an American accent, it was always in regional dialects associated with low socioeconomic status.
(Interestingly, the preponderance of German, Slavic, and Russian accents sported by villains points to an inherited bias in American culture: The study authors say it’s likely a holdover from World War II and the Cold War — a time still within the memory of many creators of children’s entertainment. Though world conflicts have changed over the past 70 years, no new accents have knocked the above from their second-place “evil” spot behind British.)
Isabel Fattal goes into detail about the cognitive repercussions of these lessons, absorbed by kids who, on the surface, are “just” watching a cartoon. Not only do children use television to sort out the concept of ethnic identities and where they themselves fit, they also (like adults) use linguistic cues to make judgments on the perceived intelligence and education of a speaker — and use those assumptions to determine how they treat others.
Fattal concludes that this problem seems deeply entrenched in our culture and entertainment, but that it can be turned around and used to educate kids when they watch these shows and movies with adults who can discuss the issue with them. So, I’m actually looking forward to hanging out with my grandkids in a way that involves more media awareness — and a lot less Wheels on the Bus.
“At least it’s summer in Australia,” I thought this morning while staring out my window at my frost-covered yard, wearing two sweaters, two pairs of socks, and cradling my second cup of hot coffee.
But though (like all of us right now I’m sure!) I crave the cheery presence of the July sun, I also recognize that with summer’s heat certainly come summer’s dangers. Last week, two boys in New South Wales tasted that danger when they ran into trouble in the waves off Lennox Beach. Thankfully, they were rescued by a 21st-century saviour — a drone — on its very first day of service with the beach’s lifeguards.
When bystanders saw the swimming boys in distress, they called the lifeguard station, which was located a kilometre away from the action. Instead of losing precious time battling the waves, the lifeguards fired up their Little Ripper Lifesaver, developed after company founder Kevin Weldon witnessed drone technology being used to locate and help people in the streets of New Orleans after Hurricane Katrina.
Little Rippers can be customized with a variety of lifesaving technologies for land, snow, and sea, including defibrillators, thermal blankets, and automatically inflating flotation devices. The last of these was dropped down to the two boys at Lennox Beach, after the operator used the drone’s camera to locate them, a process that took about a minute.
The boys were able to catch hold of the flotation devices and swim themselves to shore. Had they been injured or unconscious, the more traditional services of the lifeguarding team would also have been required.
Lifeguarding is a complicated calling with no room for error. The pairing of career and technology seems perfect in this case, with the drone enhancing human skills rather than rendering them obsolete. Now, I just have to insert the obligatory joke about how everything in Australia will kill you — but not the ocean off Lennox Beach, thanks to the lifeguards and their drone! (And just for fun, watch this video of kangaroos boxing in the middle of a quiet Australian street.)
Does Sophia the robot inspire revulsion in you? You know, Sophia: the humanoid robot developed by Hanson Robotics and activated in 2015, who has a title at the UN and has been granted Saudi Arabian citizenship? The one whose cranium is transparent, and who cracks jokes about destroying all humans? Yeah, her.
If Sophia and other humanoid robots like her give you the willies, you are experiencing the well-documented effects of the Uncanny Valley: the phenomenon of near-universal human distaste for things that appear human-like, but are somehow off. (This includes examples like Sophia, some computer animation, and bunraku puppets.)
I say near-universal because some fascinating research has come out of the University of Michigan recently. This research shows that this unique sense of disgust, long thought to be a natural form of pathogen avoidance (a fellow human looking weird = sick or dying, to our lizard brains), might not be inherent in human experience, but actually learned.
Professor of psychology Henry Wellman and doctoral candidate Kimberly Brink studied a group of 240 children aged 3 to 18. The children were shown videos of three robots of varying humanity: “very human-like,” machine-like, and mostly machine but with human qualities (described as a combination of lovable movie robots Baymax and EVE). The researchers then asked the children whether they thought each kind of robot had feelings, whether it could experience hunger or pain, whether it could think on its own, and whether it could tell right from wrong. The subjects were also asked if any of the robots creeped them out.
Interestingly, the children younger than 9 reported no feelings of weird loathing — but the children older than 9, like adults, were more horrified the more human the robots appeared.
“So what does this mean about the Uncanny Valley? One feature of our results provides a clue: Children’s attributions of mind to the robots affect how children feel. The younger children preferred a robot when they believed the robot could think and make decisions. For them, the more mind the better. This is in contrast to adults and older children for whom the more robots seemed to have minds (and especially minds that could produce and house human-like feelings and thoughts) the creepier that made them. For adults and older children, a machine-like mind is fine, but a robot with a human-like one is out of bounds. Perceived creepiness is related to a perceived mind.”
So this takes the Uncanny Valley effect out of the evolutionary realm, and locates it in the more complicated territory of individual brain development. As robots become more integrated into our everyday lives, the researchers propose that we think about designing them differently for humans at different ages. That is, a Sophia might be all right assisting kids in a kindergarten class, but would be a deeply unpleasant sight for everyone in a nursing home. Like it or not, we do have to think about these things: as Sophia’s popularity alone has shown us, the robots are coming!
With certain personal items, it’s quite obvious when it’s appropriate to swap them out and wash them. Socks? Every day, of course! Bed sheets? Weekly, you spend like 60 hours total in them, for Pete’s sake. Bath towel? Well, considering you’re squeaky clean (in theory) when you dry yourself off post-shower, you’re… probably good for a while, right?
Wrong, says NYU School of Medicine microbiologist Philip Tierno. As clean as you think your shower gets you, your bath towel is covered in bodily secretions, fungal spores, bits of dead skin — plus any other free-floating oogies that are present in your bathroom. (Six-foot toilet plume, anyone?) Also, your towel is damp and warm: perfect conditions for trouble.
“Your cellular debris and other deposits from the air serve as food for the microbes, and the moisture supplies water at a neutral pH. […]
[If] you share your towel with others, you could potentially come into contact with organisms that your body isn’t used to dealing with – such as Staphylococcus aureus, Tierno said, ‘which may give rise to a boil, or a pimple, or an infection.’”
According to expert Tierno, the maximum number of times you should use your bath towel before washing it is three — and that’s only if you manage to completely dry it out between uses… Yeugh.
This fascinating tale makes me glad we live in a time when we can know what’s crawling around on us and on our bath towels, for the sake of both interest and cleanliness! Now, if you’ll excuse me, I’m off to throw everything I own into the washing machine. See you next week!
Back in November, muon tomography of the Great Pyramid at Giza revealed a previously undiscovered large void, deep within the last standing Wonder of the Ancient World. Naturally, scientists are super enthused about exploring this new mystery — but how to do so in a way that won’t break through doors or walls, compromising the state in which the mystery was left? Besides the loss of scientific clues due to damage, explorers could also cause a collapse of part of the structure, or might bring down an ancient curse. (Kidding about that last one!)
To get around these challenges, French research institutes Inria and CNRS are working on a brand-new kind of remote robot: an autonomous blimp. They hope the robot will open up the Great Pyramid void to science, while leaving it as sealed as possible in practical terms.
The deployment procedure would involve drilling a hole approximately 3.5 cm wide into the outer wall of the chamber, and sliding the cylindrical robot inside, nestled into its rod-like dock. Once in the chamber, the robot would unfold and inflate its 80 cm helium envelope and take off into the void. Equipped with 50 g worth of sensors, lights, and motors, the blimp would investigate the secrets of the space before returning to the dock, folding back up, and being withdrawn through the hole. (Check out the design video here.)
The robot is designed but not yet prototyped — one of the puzzles the creators are still working through is how exactly the robot will fold up its deflated envelope again. But the rest of the concept is well hashed out, and represents a great improvement over other, traditional methods of exploration:
“[T]here are quite a few good reasons why it would be better than other types of ground robots with wheels, tracks, or legs, or drones with rotors. A blimp doesn’t have to worry about stairs, rocks, ramps (or traps). You get a much better perspective from a blimp, and you can also cover more area more quickly. Blimps can also harmlessly bounce off of obstacles and are less likely to crash than a conventional rotorcraft, and you have to figure that a blimp crash (if it does occur) would be much more pillowy in nature.”
I cannot wait to see how cutting-edge tech unveils more of the mysteries of the human past, in a way that is as respectful as possible of the mystery itself! I’m sure the chances of finding something as immediately gratifying as, say, treasure are not high. But there is huge value in any information the blimp might uncover — and besides, if there’s no treasure, there’s no Boris Karloff look-alike to hassle you over it!
On this New Year’s Day, I’m excited to tackle the challenges the future will bring! But this exhilaration is tempered a bit by the realization that the holiday-mandated relaxation period will soon be over. During these holidays, I’ve been unwinding by watching nature documentaries — particularly BBC selections narrated by the incomparable Sir David Attenborough.
I’m fascinated most by episodes about the oceans, and I particularly love hearing about my favourite sea creatures, jellyfish! We solid bipeds might only think of them as either goopy blobs on beaches, or venomous menaces that ruin your swim first by causing stinging pain, then by obliging someone to urinate on you. (The efficacy of that strategy is a myth, by the way!) The Conversation has a breakdown of some of the coolest jellyfish facts that you can bust out around the water cooler when you return to work — including, my personal fave, the fact that some species of jellyfish are effectively immortal.
“Many jellies have evolved unique abilities, some of which seem almost supernatural. […] The pièce de résistance is surely their second chance at youth. When conditions are unfavourable, certain species including compass, barrel, and moon jellyfish can reverse their development and effectively turn back into jelly-children in order to wait out the hard times.”
This youth-cycling happens on a cellular level, and is actually right up there with stem cells as a possible avenue for understanding the human aging process. Hopefully, this remains a fun fact for you, rather than a fervently wished-for dream resulting from overindulgence last night! Happy New Year, and here’s to a 2018 filled with natural marvels and technological breakthroughs.
Between the holidays bringing people closer together and cold weather forcing them together, often in enclosed spaces, now is the time that illness-causing bacteria and viruses start jumping from host to host, having the time of their short, hedonistic lives.
One of the best ways of minimizing their rampage, and the chances of getting sick, is by keeping our hands clean. Thanks to our hygiene-obsessed culture, we have at our disposal a whole arsenal of anti-germ tactics, from fancy hand sanitizers to good old soap and water. But how do they stack up against each other? Microbiologist Michelle Sconce Massequoi has some opinions about it — and also about handwashing technique, which is often the weakest link in the illness prevention chain.
The first main strategy is to reduce the number of bad bacteria or viruses on our hands:
“Studies have shown that effectively washing with soap and water significantly reduces the bacterial load of diarrhea-causing bacteria.
The second strategy is to kill the bacteria. We do this by using products with an antibacterial agent such as alcohols, chlorine, peroxides, chlorhexidine or triclosan.
[…]However, there’s a problem. Some bacterial cells on our hands may have genes that enable them to be resistant to a given antibacterial agent. This means that after the antibacterial agent kills some bacteria, the resistant strains remaining on the hands can flourish.”
While there does appear to be some extra benefit to having antibacterial properties in the soap, regular soap and elbow grease go a long way towards knocking out hand-based bacteria, as well as avoiding the development of more dastardly superbugs. Sudsing up all the surfaces of your hands (including wrists, if needed!) for between 15 and 30 seconds — “about the time to sing ‘Happy Birthday’ twice,” says Sconce Massequoi — is key. (A recent NIH study of a college town population’s hand-hygiene habits found a sobering average scrub length of six seconds. While the study’s narrow focus didn’t extend to subsequent illness rates, I think we can all agree that such a comparatively light rinse sounds gross.)
Dear reader, as you gather your family around you this winter, or press an elevator button, or muscle your way through a subway trip beside some dude who unrepentantly sneezes on you four times, I wish you all good luck in keeping colds and viruses at bay. It’s heartening to remember that we are not powerless against them — our best tools to stay healthy are literally in our hands!
We at DFC are about to take a strategic news holiday, having overdosed a little on politics updates from our southern neighbour. There’s a lot of atypical decision making going on down there — and most of these choices are united by the same underlying lack of empathy.
To risk a sweeping generalization, it seems to be easy for folks in power to impose troublesome policies on people who are, well, not them. It’s likely you’ve even seen this change on a personal or business level, where a colleague or friend gets promoted, and gradually loses the ability to see things from the perspectives of others.
The Atlantic has a fascinating recent breakdown of behavioural and neurocognitive research into the phenomenon. It outlines how a study out of Berkeley coined the term “power paradox” to describe it. And how, most recently, a team from McMaster University has looked to brain imaging for answers: Scans showed that, in the brains of people who felt powerful, “mirroring” (the lighting up of sympathetic areas of the brain when we witness another person take an action) is physically impaired.
“Was the mirroring response broken? More like anesthetized. None of the participants possessed permanent power. They were college students who had been “primed” to feel potent by recounting an experience in which they had been in charge. The anesthetic would presumably wear off when the feeling did — their brains weren’t structurally damaged after an afternoon in the lab. But if the effect had been long-lasting — say, by dint of having Wall Street analysts whispering their greatness quarter after quarter, board members offering them extra helpings of pay, and Forbes praising them for “doing well while doing good” — they may have what in medicine is known as “functional” changes to the brain.”
All is not lost, though, with either colleagues or politicians — the effect of the power paradox can be reduced! The trick is, the affected individual has to cease actually feeling powerful. This can happen consciously, if a CEO reminds themselves of how they felt at a time when they weren’t in authority. And (I’m sure) it can also happen involuntarily, when, say, a voting population decides to remind a politician who works for whom…