As I write this, we are experiencing a heat wave in most of Ontario. With temperatures hitting around 40°C (with humidex), it’s way too hot to even think about wearing something like leather!
But, if you do manage to bypass that particular mental block when considering this week’s topic, you may find another one waiting for you. Fashion student Tina Gorjanc has created a line of clothing and accessories – called “Pure Human” – that she presents as being made from the cloned skin of late designer Alexander McQueen.
The collection is inspired by McQueen’s own physical foray into his design work: when he graduated from Central Saint Martins College of Art and Design, as Gorjanc is doing now, his first collection featured locks of his hair sewn into the garments’ labels. Gorjanc’s collection is currently a thought experiment – the pieces are actually made out of pig skin, with freckles and McQueen’s tattoos carefully copied for verisimilitude. But, she remains committed to one day cloning “hide” out of the follicles of McQueen’s hair, having made contact with both a lab willing to culture the skin, and the owner of one of the hair-endowed pieces from McQueen’s first collection, which can provide the “seed”.
Gorjanc is including a political complication in this artistic endeavour as well, presenting Pure Human as
“An exploration of the intersection between luxury and biology. […] Skin related biotechnologies seem to have caught the interest of the luxury industry. Major fashion and cosmetic companies have already signed research collaboration agreement[s] with bioengineering institutes. Those collaborations are enabling the development of existing skin technologies that were firstly designed for specific medical problems into enhancement of normal human functions and the extension of one’s self beyond its body. […] The [Pure Human] project is projecting the shift that is happening in the field of ethics and security regarding the tissue engineering technologies.”
To me, the collection seems the right kind of creepy — the uncanny kind that allows us to have critical distance from something so “homelike” as human skin (after all, we’re all covered in it!) and lets us see the darker, societal repercussions underneath. I look forward to the weather cooling, if only to see if it is just the heat weirding me out about this project!
Call it confirmation bias, but it seems like nearly everything we investigate in these blogs circles back to that good old microbiome!
Everyone’s favourite bacterial colony (which, by some estimates, accounts for about 90% of the cells in, well, everybody) is just starting to be studied in real depth. As we become aware that the health of the bacteria that live in us and on us is closely related to our own health, we’re starting to trace solutions to previously mysterious problems. Researchers from Cornell University have found, in the state of subjects’ gut microbiomes, a possible source for the pernicious and inscrutable Chronic Fatigue Syndrome. In a recent study published in the journal Microbiome, they lay out their findings:
“‘Our work demonstrates that the gut bacterial microbiome in chronic fatigue syndrome patients isn’t normal, perhaps leading to gastrointestinal and inflammatory symptoms in victims of the disease,’ said Maureen Hanson, the Liberty Hyde Bailey Professor in the Department of Molecular Biology and Genetics at Cornell and the paper’s senior author. ‘Furthermore, our detection of a biological abnormality provides further evidence against the ridiculous concept that the disease is psychological in origin.’”
While still far from a smoking gun as to the source of CFS, the team is confident that narrowing the search down to somewhere in the microbiome can only mean we’re getting closer to a full understanding of a condition that defies pinning down, and is estimated to affect up to 3% of the world’s population. And, if we can solve this mystery, who knows what else we can uncover in the depths of the bacterial colonies that call our bodies home?
There are some amazing things happening at the genetic level nowadays – beyond the usual controversial modifications to increase crop yield or make plants glow. Researchers have now devised a method to rewrite the DNA of living bacteria, encoding information into them like microscopic hard drives.
This feat was accomplished through the use of CRISPR, a defense mechanism present in many kinds of bacteria, which records the genes of invading viruses in order to recognize them when they attack again. This talent has been handily repurposed into what is being called a “genome editing tool:”
“‘We write the information directly into the genome,’ said [co-author] Jeff Nivala, part of the team from Harvard. ‘While the overall amount of DNA data we have currently stored within a genome is relatively small compared to the completely synthetic DNA data storage systems, we think genome-based information storage has many potential advantages.’ These advantages, he says, could include higher fidelity and the capability to directly interface with biology. For example, a bacterium could be taught to recognize, provide information, and even kill other microorganisms in its midst, or provide a record of genetic expression.
‘Depending on how you calculate it, we stored between about 30 to 100 bytes of information,’ said Nivala. ‘Which is quite high compared to the previous record set within a living cell, which was ~11 bits.’”
Similar storage feats had been achieved in earlier experiments, but there both the DNA and its encoding were manufactured from scratch rather than written into an organism’s natural genome. In this new experiment, importantly, the edited information appears to be heritable, passed down to the next generation. This could bode very well, not only as an information storage solution, but also for the understanding of genetic disorders in creatures great and small!
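For the curious, here is a toy sketch, entirely my own rather than the Harvard team’s actual scheme (theirs piggybacks on the CRISPR machinery described above), of how arbitrary bytes can be packed into DNA at two bits per base. At that density, the roughly 100 bytes Nivala mentions would occupy only a few hundred nucleotides:

```python
# Illustrative only: a toy mapping between bytes and DNA bases, two bits per base.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a DNA-style string of A/C/G/T."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Recover the original bytes from the nucleotide string."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"hello"          # 5 bytes becomes 20 nucleotides
strand = encode(message)
print(strand)               # CGGACGCCCGTACGTACGTT
assert decode(strand) == message
```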
DARPA, an agency of the U.S. Department of Defense, has been putting its research muscle behind a new way to use real muscle: arm muscle, that is! Specifically, the arm muscle of volunteer Johnny Matheny, whose left arm amputation has made him an excellent subject for the development of a new prosthetic: one that features unprecedented connection between brain and limb.
The prosthetic, dubbed the “Modular Prosthetic Limb,” is capable not only of receiving signals from Matheny’s brain, but also of transmitting them back. This extraordinary communication is mediated by wireless Myo bands worn on Matheny’s upper arm, which detect muscular signals and transmit them via Bluetooth to the computer inside the prosthetic, which then issues commands to the arm to move. (Basically, Matheny’s collaboration with Myo and DARPA is slowly turning him into a cyborg.)
“Years ago, [Matheny] received targeted muscle reinnervation (TMR), a surgical procedure that reassigns nerves in a residual limb to make better use of a prosthetic replacement. In the spring of 2015, Matheny became the first American with TMR to undergo osseointegration, another surgical procedure that allows him to connect prosthetic devices directly to the bone of his upper arm. […]
And all of this is just the beginning. Pointing to his robotic fingertips, Matheny explained that they already contain tactile sensors capable of detecting texture, pressure, and temperature. But in order for Matheny to feel what his prosthetic arm feels, those signals have to reach his brain. In the not-too-distant future, another surgical procedure may enable this.”
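To make that band-to-Bluetooth-to-limb pipeline a little more concrete, here is a deliberately simplified, hypothetical sketch. None of these names come from Myo or DARPA, and the real system relies on trained pattern-recognition models rather than a “loudest channel wins” rule:

```python
# Hypothetical sketch of the signal path: muscle readings in, movement commands out.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class EmgSample:
    channels: List[float]   # one activation reading per sensor on the band

def classify_gesture(sample: EmgSample) -> str:
    """Toy classifier: choose a command based on which channel fires hardest."""
    strongest = max(range(len(sample.channels)), key=lambda i: sample.channels[i])
    commands = ["open_hand", "close_hand", "rotate_wrist", "rest"]
    return commands[strongest % len(commands)]

def control_loop(stream: Iterable[EmgSample]) -> None:
    """Consume samples as they arrive (e.g. over Bluetooth) and issue arm commands."""
    for sample in stream:
        command = classify_gesture(sample)
        if command != "rest":
            print(f"send to prosthetic: {command}")   # stand-in for the real actuator call

# Three fake samples from a four-channel band:
control_loop([EmgSample([0.9, 0.1, 0.0, 0.2]),
              EmgSample([0.1, 0.8, 0.1, 0.1]),
              EmgSample([0.0, 0.1, 0.1, 0.9])])
```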
It’s great to see a defense-related agency using its expansive mandate to create a fascinating new technology that can improve the lives of veterans — and others — who have experienced an amputation. This news also means we are one step closer to my personal dream, the Singularity. Definitely an interesting time to be alive!
As we have moved through the 20th century and into the 21st, and computer processing power has increased exponentially, popular culture has been progressively gripped by the what-if scenario of computing machines becoming sentient. Fictional examples run from saviours to annihilators, but all characterizations hinge on one assumption: that computing machines can become sophisticated enough to accurately replicate the processes of the human brain – and therefore spring to ineffable internal life. Some thinkers, including Stephen Hawking, see the transition happening so soon that they have started publicly warning us away from developing this “Strong A.I.”
But over at Psychology Today, cognitive neuroscientist Bobby Azarian argues that strong A.I. — that is, the kind that we puny humans need to worry about taking over the world and wiping us out — is a non-starter; what he describes as a “myth.” And it has everything to do with what plagued Jill Watson in the tale we related last week: the gulf between the ability to replicate, which computers can do quite well, and the ability to understand.
This gulf has been proposed to be technical in origin, due to the binary decision-making process that is the deepest foundational building block of even the most sophisticated computing:
“[A] strict symbol-processing machine can never be a symbol-understanding machine. […]
[Physicist and public intellectual Richard] Feynman described the computer as ‘A glorified, high-class, very fast but stupid filing system,’ managed by an infinitely stupid file clerk (the central processing unit) who blindly follows instructions (the software program). Here the clerk has no concept of anything—not even single letters or numbers. In a famous lecture on computer heuristics, Feynman expressed his grave doubts regarding the possibility of truly intelligent machines, stating that, ‘Nobody knows what we do or how to define a series of steps which correspond to something abstract like thinking.’
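To see what that “infinitely stupid file clerk” looks like in practice, here is a toy example of my own (not Feynman’s or Azarian’s): a program that produces sensible-looking answers by pure symbol lookup, with no grasp whatsoever of what the symbols mean.

```python
# A toy "filing clerk": it matches symbols against a lookup table and nothing more.
filing_cabinet = {
    "what is the capital of france": "Paris",
    "what is two plus two": "4",
}

def clerk(question: str) -> str:
    key = question.lower().strip("?! .")
    # The clerk blindly follows one instruction: fetch the filed card, if there is one.
    return filing_cabinet.get(key, "no card on file")

print(clerk("What is the capital of France?"))   # "Paris", yet nothing here knows what a capital is
print(clerk("What is the capital of Fraunce?"))  # "no card on file"; one symbol off and the illusion breaks
```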
While a powerful computer could potentially replicate the physical processes of the human brain exactly in a virtual space, something would always be missing, preventing it from coming into full, self-aware life of its own. Scientists speculate that this missing part may be electrochemical, unique to the organic machines that we call our brains. Which means there is hope for humanity’s survival yet!
It’s usually a shock to the system when someone transitions to a higher level of education. With a more competitive atmosphere and higher student-faculty ratio, it’s easy for basic questions to go unanswered, and connections to professors — and subjects — to be lost.
But Ashok Goel, instructor of the Knowledge Based Artificial Intelligence masters-level course at Georgia Tech, refused to let that happen to his students. And, at the same time, he gave them an object lesson about the capabilities of the sorts of artificial intelligence they were studying.
Goel and his eight teaching assistants created a program to answer the most common questions students posted on the KBAI class forum, and disguised the program with the human name Jill Watson. Jill, presented to the class as the ninth TA, interacted with the students online for the rest of the semester. While a few students suspected something, no one fully realized she was a bot until Goel and team announced it at the end of term!
They had to start her off slowly, as she was excellent at recognizing content, but terrible at context:
“‘Initially her answers weren’t good enough because she would get stuck on keywords,’ said Lalith Polepeddi, one of the graduate students who co-developed the virtual TA. ‘For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons — same keywords — but different context. […]’
After some tinkering by the research team, Jill found her groove and soon was answering questions with 97 percent certainty. When she did, the human TAs would upload her responses to the students. By the end of March, Jill didn’t need any assistance: She wrote the class directly if she was 97 percent positive her answer was correct.”
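Goel’s team hasn’t published Jill’s inner workings, but the confidence gate described above is easy to picture. Here is a rough sketch in which every name, and the placeholder “model,” is my own invention:

```python
# Rough sketch of a confidence-gated forum bot: low-confidence drafts go to a human
# TA for review, while high-confidence ones are posted directly.
CONFIDENCE_THRESHOLD = 0.97

def draft_answer(question: str):
    """Stand-in for the real QA model: return a (draft, confidence score) pair."""
    # A real system would score candidate answers against past forum Q&A.
    return f"See the course syllabus regarding: {question}", 0.98

def handle_forum_post(question: str, forum: list, review_queue: list) -> None:
    draft, confidence = draft_answer(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        forum.append(draft)                        # the bot replies to the class directly
    else:
        review_queue.append((question, draft))     # a human TA vets the draft first

forum, review_queue = [], []
handle_forum_post("When is assignment 2 due?", forum, review_queue)
print(forum, review_queue)
```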
In April, when Goel and the other TAs revealed all, the students loved it — subsequently forming alumni groups to keep studying Jill, and to attempt to recreate her. Thanks to Jill’s success, the KBAI class will have another robo-TA next year (under a different name, of course!), to help instruct the students in more ways than one.
I love trying out new recipes: as long as I have a sharp knife and a good source for unusual produce, I feel well equipped to give almost anything a shot! But the requirements for this new sandwich gave me pause, as I’d need more than my kitchen — I’d need a lab.
The “sandwich” in question is a supercapacitor that is designed to power ingestible electronics, and is therefore made out of food. Yes, literal food! Ingredients include cheese, nori, gold leaf, and Gatorade. From Smithsonian Magazine:
“The steps for making the supercapacitors—the recipe, if you will—go like this: researchers mix a bit of egg white with carbon pellets (activated carbon, sometimes called ‘activated charcoal,’ is used in some digestive medicines), then add water and more egg white. They apply the mixture to a bit of edible gold foil. They then layer together a slice of cheese and a sheet of gelatin with the egg- and carbon-covered gold foil. On top of that they add a square of dried seaweed, the type used to roll sushi, which has been soaked with drops of energy drink. They stack more of the same materials together, and seal them in a sealing machine.”
The ingredients in this supercapacitor, when combined as above, can store and conduct electricity just as well as traditional components made out of indigestible graphene or polymers. But they have the added advantage of not needing to be passed from a subject’s system — they are just plain eaten, and experience the same fate as a regular mouthful of food. The only downside is that they need a bit more development in order to make them smaller: current prototypes are about the size of a ketchup packet, and in order to work, the supercapacitors have to be swallowed whole.
What I find really fascinating about this invention is that it foregrounds the mechanical nature of the human body, and how fuel for us can easily double as fuel for, say, a small camera taking pictures of your stomach lining. I look forward to the day when a cheesy, seaweed-y snack can do more for me than entertain my taste buds — it can help monitor the state of my insides!
Now that Victoria Day, the traditional date in Southern Ontario that marks the end of the chances of frost, has passed, I’m allowing myself to get attached to the plants in my garden. And boy, are they (and their friends in the woods around us) really starting to do their thing!
While the happy spring plants that appear almost overnight can seem to be magic to winter-weary eyes, we all know they’re actually the result of a no less stunning scientific process: photosynthesis. Nature has perfected this energy transference system, and researchers have been striving to replicate it for our own purposes. Until now, we have had less-than-efficient results.
But a team out of Harvard University and Harvard Medical School has gotten the closest to true artificial photosynthesis yet, publishing their results in Science. They call their innovation the “bionic leaf,” and it uses solar energy to split water molecules and feed the resulting hydrogen to hydrogen-eating bacteria, which turn it into the kind of fuel useful to humans.
The design is an improvement on the previous version, which used the process to create isopropanol. But it did so at the expense of the bacteria that were central to the operation, when they were attacked by a byproduct of the catalyst used to produce their own dang hydrogen. (It’s not easy being bacteria!) The high voltages required to circumvent this problem rendered the process too inefficient for widespread use.
The new version, with its non-bacteria-toxic cobalt-phosphorus alloy catalyst, allows for lower voltages, increasing efficiency to a stunning 10%. (The most eager plants out there hit a rate of 1% efficiency.)
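To put those percentages in perspective, here is a quick back-of-the-envelope comparison. The 200 watts per square metre of average sunlight is my own round-number assumption, not a figure from the paper:

```python
# Back-of-the-envelope comparison of fuel energy captured per day per square metre.
AVG_SOLAR_W_PER_M2 = 200      # assumed round-number day/night average for a sunny region
SECONDS_PER_DAY = 86_400

def fuel_energy_mj_per_day(efficiency: float, area_m2: float = 1.0) -> float:
    """Solar energy converted to fuel, in megajoules per day."""
    return AVG_SOLAR_W_PER_M2 * area_m2 * efficiency * SECONDS_PER_DAY / 1e6

print(f"bionic leaf at 10%: {fuel_energy_mj_per_day(0.10):.2f} MJ/day")   # ~1.73 MJ
print(f"eager plant at 1%:  {fuel_energy_mj_per_day(0.01):.2f} MJ/day")   # ~0.17 MJ
```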
“‘The beauty of biology is it’s the world’s greatest chemist — biology can do chemistry we can’t do easily,’ [Prof. Pamela Silver, one of the lead authors] said. ‘In principle, we have a platform that can make any downstream carbon-based molecule. So this has the potential to be incredibly versatile.’”
Not only is this innovation just plain cool, it paves the way for fuel creation of the future: when we can finally untether ourselves from oil, and rely on the far more dependable nuclear generator in the sky for all our needs! The bionic “leaf” has it right: Mother Nature really does do it better.
As loyal readers of this newsletter know, we at DFC are advocates of making your workplace where you already are. Our “where” happens to be a cabin in eastern Ontario, but we look forward to the day when folks all over can use technological interventions to bring their workplaces to them. Until that happens, we fully recognize that most people have to bring themselves to work instead! But that means braving the dreaded commute. (*Organ riff, thunderclap*)
One strategy to help with the commute conundrum is currently being revived after making the viral rounds a couple of years ago. Engineer Song Youzhou has presented a working model of his “straddling bus” concept, heretofore only existing in animated form, at the recent 19th China Beijing International High-Tech Expo. The idea behind this bus is ambitiously neat: as wide as two lanes of traffic and two storeys tall, it’s elevated off its roadbed rails by its elongated sides. This allows cars to pass underneath the bus, or the bus to overtake cars on the road, regardless of traffic conditions. Check out the 2012 concept video, still in play, here.
Downsides include the fact that only personal-sized vehicles, like cars and SUVs, could fit under the bus — trucks will have to find another route. Also, as BoingBoing’s Cory Doctorow points out, the concept video fudges the bus’s physics: where a real-live straddling bus’s turn radius would make it impossible to corner at most intersections, the artistic rendering conveniently bends parts of the bus that shouldn’t bend, to make it work. Both issues are major (ahem) roadblocks to real-world use.
But Song Youzhou is already addressing some of the problems: the new physical model employs more articulations to make those pesky turns easier. However, only time (and more prototypes) will tell if the “land airbus” will ever take to the streets. Until then, we can enjoy the daydream of a peaceful, traffic-free glide into work — if our work isn’t already in our living rooms, that is!
There are certain things generally accepted as separating humankind from the animals: empathy, our ability to accessorize, and, in my opinion, our tendency to procrastinate! I don’t think there’s a person alive (or dead) who hasn’t battled that demon of “Do-It-Later”.
As we learn more and more about the brain, an answer to why procrastination happens, and how we can circumvent it, should naturally be closer than ever. But as Stuart Langfield and Marco Patricio relate in their video “How to Overcome Procrastinating: Why it Happens & How You Can Avoid It,” answers are proving difficult to find.
This is due to the fact that we know very little about how the brain actually functions. The crossover between regions and their strengths can be hard to trace. The experts quoted in Langfield and Patricio’s video agree: all we have are theories. Dr. Tim Pychyl’s leading theory on the action of procrastination goes something like this:
“There’s one part of your brain that’s purely instinctual called the Limbic System. It’s your emotions, your fight or flight. All it cares about is keeping you alive.
Then, over here, there’s this other part that’s kind of wiser and more rational. It’s responsible for your goals, your dreams, your plans for the future. That’s your prefrontal cortex.
And the theory is that when you get that feeling of not wanting to do something your instinctual part springs into action right away. It doesn’t think about the future. It just tells you to avoid the task. And you listen.
The other side, the rational side, is slower to act. It thinks things through. So you procrastinate until that part can remind you that you’re not dying — you’re just trying to do something that’s really hard.”
So it seems the duel between limbic system and prefrontal cortex that results in procrastination is over which kind of happiness wins out: short term or long term.
Thankfully, we don’t have to be trapped in this limbic/prefrontal cortex tug-of-war, as the brain is a changeable organ. The principle by which we can change our cognition is “neuroplasticity,” and research is pointing to mindfulness meditation as a way of effecting that change. Dr. Pychyl cites studies in which mindfulness meditation changed the procrastination balance by literally shrinking the amygdala (part of that pesky limbic system), and adding more grey matter to the prefrontal cortex!
Unfortunately, the takeaway is that there is no easy way to stop procrastinating. One can use meditation to ultimately make it easier, but that itself takes time and effort. But, as another great wordsmith (and, as a human, likely procrastinator!) once said, “Whatever is worth doing at all, is worth doing well.” And I for one am going to (try to) get started right away!