DARPA, an agency of the U.S. Department of Defense, has been putting its research muscle behind a new way to use real muscle: arm muscle, that is! Specifically, the arm muscle of volunteer Johnny Matheny, whose left arm amputation has made him an excellent subject for the development of a new prosthetic: one that features unprecedented connection between brain and limb.
The prosthetic, dubbed the “Modular Prosthetic Limb,” is capable not only of receiving signals from Matheny’s brain, but also of transmitting them back. This extraordinary communication is mediated by wireless Myo bands worn on Matheny’s upper arm, which detect muscular signals and transmit them via Bluetooth to the computer inside the prosthetic, which then issues commands to the arm to move. (Basically, Matheny’s collaboration with Myo and DARPA is slowly turning him into a cyborg.)
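For the curious, here is a minimal sketch of the signal path described above: an armband streams muscle-activation readings over Bluetooth, and the limb’s onboard computer maps them to motor commands. Every name and threshold below is invented for illustration; the actual Myo/DARPA control stack is far more sophisticated, and not public.

```python
# Hypothetical sketch only: names, channel layout, and thresholds are
# invented here, not taken from the Myo/DARPA system.

def classify_gesture(emg_channels: list[float]) -> str:
    """Map raw muscle-activation levels to a coarse hand command."""
    flexor, extensor = emg_channels[0], emg_channels[1]
    if flexor > 0.6 and extensor < 0.3:
        return "CLOSE_HAND"
    if extensor > 0.6 and flexor < 0.3:
        return "OPEN_HAND"
    return "HOLD"

def control_loop(read_band, send_to_arm):
    """Poll the band over Bluetooth and forward commands to the limb."""
    while True:
        sample = read_band()          # e.g. normalized EMG channel levels
        send_to_arm(classify_gesture(sample))
```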
“Years ago, [Matheny] received targeted muscle reinnervation (TMR), a surgical procedure that reassigns nerves in a residual limb to make better use of a prosthetic replacement. In the spring of 2015, Matheny became the first American with TMR to undergo osseointegration, another surgical procedure that allows him to connect prosthetic devices directly to the bone of his upper arm. […]
And all of this is just the beginning. Pointing to his robotic fingertips, Matheny explained that they already contain tactile sensors capable of detecting texture, pressure, and temperature. But in order for Matheny to feel what his prosthetic arm feels, those signals have to reach his brain. In the not-too-distant future, another surgical procedure may enable this.”
It’s great to see a defense-related agency using its expansive mandate to create a fascinating new technology that can improve the lives of veterans — and others — who have experienced an amputation. This news also means we are one step closer to my personal dream, the Singularity. Definitely an interesting time to be alive!
As we have moved through the 20th century and into the 21st, and computer processing power has increased exponentially, popular culture has been progressively gripped by the what-if scenario of computing machines becoming sentient. Fictional examples run from saviours to annihilators, but all characterizations hinge on one assumption: that computing machines can become sophisticated enough to accurately replicate the processes of the human brain – and therefore spring to ineffable internal life. Some thinkers, including Stephen Hawking, see the transition happening so soon that they have started publicly warning us away from developing this “Strong A.I.”
But over at Psychology Today, cognitive neuroscientist Bobby Azarian argues that strong A.I. — that is, the kind that we puny humans need to worry about taking over the world and wiping us out — is a non-starter: what he describes as a “myth.” And it has everything to do with what plagued Jill Watson in the tale we related last week: the gulf between the ability to replicate, which computers can do quite well, and the ability to understand.
This gulf has been proposed to be technical in origin, rooted in the binary decision-making process that is the deepest foundational building block of even the most sophisticated computing:
“[A] strict symbol-processing machine can never be a symbol-understanding machine. […]
[Physicist and public intellectual Richard] Feynman described the computer as ‘A glorified, high-class, very fast but stupid filing system,’ managed by an infinitely stupid file clerk (the central processing unit) who blindly follows instructions (the software program). Here the clerk has no concept of anything—not even single letters or numbers. In a famous lecture on computer heuristics, Feynman expressed his grave doubts regarding the possibility of truly intelligent machines, stating that, ‘Nobody knows what we do or how to define a series of steps which correspond to something abstract like thinking.’
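To make the “infinitely stupid file clerk” concrete, here is a toy illustration of my own (not Feynman’s): the program below answers questions flawlessly whenever it has a matching rule, while grasping nothing about what the symbols mean.

```python
# Toy example: pure symbol lookup, zero understanding. The "clerk"
# matches strings against rules; it has no concept of sky or arithmetic.

RULES = {
    "what is 2 + 2?": "4",
    "what colour is the sky?": "blue",
}

def clerk(question: str) -> str:
    # String matching only: replication without comprehension.
    return RULES.get(question.lower().strip(), "NO RULE FOUND")

print(clerk("What is 2 + 2?"))        # -> 4
print(clerk("Why is the sky blue?"))  # -> NO RULE FOUND
```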
While a powerful computer could in principle replicate the physical processes of the human brain exactly in a virtual space, something would still be missing to bring it into full, self-aware life of its own. Scientists speculate that this missing part may be electrochemical, unique to the organic machines that we call our brains. Which means there is hope for humanity’s survival yet!
It’s usually a shock to the system when someone transitions to a higher level of education. With a more competitive atmosphere and higher student-faculty ratio, it’s easy for basic questions to go unanswered, and connections to professors — and subjects — to be lost.
But Ashok Goel, instructor of the master’s-level Knowledge Based Artificial Intelligence course at Georgia Tech, refused to let that happen to his students. And, at the same time, he gave them an object lesson about the capabilities of the sorts of artificial intelligence they were studying.
Goel and his eight teaching assistants created a program to answer the most common questions students posted on the KBAI class forum, and disguised the program with the human name Jill Watson. Jill, presented to the class as the ninth TA, interacted with the students online for the rest of the semester. While a few students suspected something, no one fully realized she was a bot until Goel and team announced it at the end of term!
They had to start her off slowly, as she was excellent at recognizing content, but terrible at context:
“‘Initially her answers weren’t good enough because she would get stuck on keywords,’ said Lalith Polepeddi, one of the graduate students who co-developed the virtual TA. ‘For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons — same keywords — but different context. […]’
After some tinkering by the research team, Jill found her groove and soon was answering questions with 97 percent certainty. When she did, the human TAs would upload her responses to the students. By the end of March, Jill didn’t need any assistance: She wrote the class directly if she was 97 percent positive her answer was correct.”
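The gating logic described in that passage boils down to a confidence threshold. Here is a minimal sketch, with the retrieval step left as a stand-in (the team built the real Jill on IBM’s Watson platform):

```python
# Sketch of confidence-gated answering; the retrieve() callable is a
# placeholder for the actual question-answering model.

CONFIDENCE_THRESHOLD = 0.97

def answer_post(post: str, retrieve):
    """Answer a forum post automatically, or defer to a human TA."""
    answer, confidence = retrieve(post)   # -> (text, score in [0, 1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer                      # post to the class directly
    return None                            # escalate to a human TA
```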
In April, when Goel and the other TAs revealed all, the students loved it — subsequently forming alumni groups to keep studying Jill, and to attempt to recreate her. Thanks to Jill’s success, the KBAI class will have another robo-TA next year (under a different name, of course!), to help instruct the students in more ways than one.
I love trying out new recipes: as long as I have a sharp knife and a good source for unusual produce, I feel well equipped to give almost anything a shot! But the requirements for this new sandwich gave me pause, as I’d need more than my kitchen — I’d need a lab.
The “sandwich” in question is a supercapacitor that is designed to power ingestible electronics, and is therefore made out of food. Yes, literal food! Ingredients include cheese, nori, gold leaf, and Gatorade. From Smithsonian Magazine:
“The steps for making the supercapacitors—the recipe, if you will—go like this: researchers mix a bit of egg white with carbon pellets (activated carbon, sometimes called ‘activated charcoal,’ is used in some digestive medicines), then add water and more egg white. They apply the mixture to a bit of edible gold foil. They then layer together a slice of cheese and a sheet of gelatin with the egg- and carbon-covered gold foil. On top of that they add a square of dried seaweed, the type used to roll sushi, which has been soaked with drops of energy drink. They stack more of the same materials together, and seal them in a sealing machine.”
The ingredients in this supercapacitor, when combined as above, can store and conduct electricity just as well as traditional components made out of indigestible graphene or polymers. But they have the added advantage of not needing to be retrieved from a subject’s system — they are simply eaten, and meet the same fate as a regular mouthful of food. The only downside is that they need a bit more development to get smaller: current prototypes are about the size of a ketchup packet, and in order to work, the supercapacitors have to be swallowed whole.
What I find really fascinating about this invention is that it foregrounds the mechanical nature of the human body, and how fuel for us can easily double as fuel for, say, a small camera taking pictures of your stomach lining. I look forward to the day when a cheesy, seaweed-y snack can do more for me than entertain my taste buds — it can help monitor the state of my insides!
Now that Victoria Day, the date that traditionally marks the end of frost season in Southern Ontario, has passed, I’m allowing myself to get attached to the plants in my garden. And boy, are they (and their friends in the woods around us) really starting to do their thing!
While the happy spring plants that appear almost overnight can seem to be magic to winter-weary eyes, we all know they’re actually the result of a no less stunning scientific process: photosynthesis. Nature has perfected this energy transference system, and researchers have been striving to replicate it for our own purposes. Until now, we have had less-than-efficient results.
But a team out of Harvard University and Harvard Medical School has gotten the closest to true artificial photosynthesis yet, publishing their results in Science. They call their innovation the “bionic leaf”: it uses solar energy to split water molecules, then feeds the resulting hydrogen to hydrogen-eating bacteria that turn it into the kind of fuel useful to humans.
The design is an improvement on the previous version, which used the process to create isopropanol. But it did so at the expense of the bacteria central to the operation, which were attacked by a byproduct of the very catalyst used to produce their own dang hydrogen. (It’s not easy being bacteria!) The high voltages required to circumvent this problem rendered the process too inefficient for widespread use.
The new version, with its non-bacteria-toxic cobalt-phosphorus alloy catalyst, allows for lower voltages, increasing efficiency to a stunning 10%. (The most eager plants out there hit a rate of 1% efficiency.)
“‘The beauty of biology is it’s the world’s greatest chemist — biology can do chemistry we can’t do easily,’ [Prof. Pamela Silver, one of the lead authors] said. ‘In principle, we have a platform that can make any downstream carbon-based molecule. So this has the potential to be incredibly versatile.’”
Not only is this innovation just plain cool, it paves the way for fuel creation of the future: when we can finally untether ourselves from oil, and rely on the far more dependable nuclear generator in the sky for all our needs! The bionic “leaf” has it right: Mother Nature really does do it better.
As loyal readers of this newsletter know, we at DFC are advocates of making your workplace where you already are. Our “where” happens to be a cabin in eastern Ontario, but we look forward to the day when folks all over can use technological interventions to bring their workplaces to them. Until that happens, we fully recognize that most people have to bring themselves to work instead! But that means braving the dreaded commute. (*Organ riff, thunderclap*)
One strategy to help with the commute conundrum is currently being revived after making the viral rounds a couple of years ago. Engineer Song Youzhou has presented a working model of his “straddling bus” concept, heretofore only existing in animated form, at the recent 19th China Beijing International High-Tech Expo. The idea behind this bus is ambitiously neat: as wide as two lanes of traffic and two storeys tall, it’s elevated off its roadbed rails by its elongated sides. This allows cars to pass underneath the bus, and the bus to overtake cars on the road, regardless of traffic conditions. Check out the 2012 concept video, still in play, here.
Downsides include the fact that only personal-sized vehicles, like cars and SUVs, could fit under the bus — trucks will have to find another route. Also, as BoingBoing’s Cory Doctorow points out, the concept video fudges the bus’s physics: where a real-life straddling bus’s turn radius would make it impossible to corner at most intersections, the artistic rendering conveniently bends parts of the bus that shouldn’t bend, to make it work. Both issues are major (ahem) roadblocks to real-world use.
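For a rough sense of the geometry problem, consider the standard off-tracking formula for a rigid vehicle: if the front axle follows a turn of radius R and the wheelbase is L, the rear axle traces a tighter circle of radius sqrt(R^2 - L^2), cutting in by the difference. The numbers below are my own back-of-envelope figures, not Doctorow’s:

```python
# Back-of-envelope only: illustrative dimensions, not Song Youzhou's specs.
import math

def off_tracking(front_radius_m: float, wheelbase_m: float) -> float:
    """How far the rear axle cuts inside the front axle's path."""
    if front_radius_m <= wheelbase_m:
        return float("inf")  # geometrically impossible for a rigid body
    return front_radius_m - math.sqrt(front_radius_m**2 - wheelbase_m**2)

print(round(off_tracking(15.0, 12.0), 1))  # ordinary 12 m bus: ~6.0 m cut-in
print(off_tracking(15.0, 22.0))            # rigid 22 m section: inf (can't corner)
```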
But Song Youzhou is already addressing some of the problems: the new physical model employs more articulations to make those pesky turns easier. However, only time (and more prototypes) will tell if the “land airbus” will ever take to the streets. Until then, we can enjoy the daydream of a peaceful, traffic-free glide in to work — if our work isn’t already in our living rooms, that is!
There are certain things generally accepted as separating humankind from the animals: empathy, our ability to accessorize, and, in my opinion, our tendency to procrastinate! I don’t think there’s a person alive (or dead) who hasn’t battled that demon of “Do-It-Later”.
As we learn more and more about the brain, an answer to why procrastination happens, and how we can circumvent it, should naturally be closer than ever. But as Stuart Langfield and Marco Patricio relate in their video “How to Overcome Procrastinating: Why it Happens & How You Can Avoid It,” answers are proving difficult to find.
This is due to the fact that we know very little about how the brain actually functions. The interplay between brain regions, and the relative strength of each, can be hard to trace. The experts quoted in Langfield and Patricio’s video agree: all we have are theories. Dr. Tim Pychyl’s leading theory on the action of procrastination goes something like this:
“There’s one part of your brain that’s purely instinctual called the Limbic System. It’s your emotions, your fight or flight. All it cares about is keeping you alive.
Then, over here, there’s this other part that’s kind of wiser and more rational. It’s responsible for your goals, your dreams, your plans for the future. That’s your prefrontal cortex.
And the theory is that when you get that feeling of not wanting to do something your instinctual part springs into action right away. It doesn’t think about the future. It just tells you to avoid the task. And you listen.
The other side, the rational side, is slower to act. It thinks things through. So you procrastinate until that part can remind you that you’re not dying — you’re just trying to do something that’s really hard.”
So it seems the duel between limbic system and prefrontal cortex that results in procrastination is over which kind of happiness wins out: short term or long term.
Thankfully, we don’t have to be trapped in this limbic/prefrontal cortex tug-of-war, as the brain is a changeable organ. The principle by which we can change our cognition is “neuroplasticity,” and research is pointing to mindfulness meditation as a way of effecting that change. Dr. Pychyl cites studies in which mindfulness meditation changed the procrastination balance by literally shrinking the amygdala (part of that pesky limbic system), and adding more grey matter to the prefrontal cortex!
Unfortunately, the takeaway is that there is no easy way to stop procrastinating. One can use meditation to ultimately make it easier, but that itself takes time and effort. But, as another great wordsmith (and, as a human, likely procrastinator!) once said, “Whatever is worth doing at all, is worth doing well.” And I for one am going to (try to) start doing right away!
Technology has become so integrated into our lives that it’s easy to lose track of all the gadgets and gee-gaws that surround us and help with every little thing. From your laptop snoozing away on your desk, to the smartphone in your pocket patiently waiting for your inquiry, bionic support is just one wake-up button away.
But what monetary – or environmental – price are we paying for keeping this technological web at the ready? In the past, most devices and appliances had two modes: on and off. With digital interventions becoming more common, many devices now stay in a gray area of readiness, sometimes drawing unexpectedly large amounts of power.
Tatiana Schlossberg at The New York Times decided to figure out how much power common devices use, especially in “out of sight, out of mind” sleep mode. The results were interesting:
“My cable box drew 28 watts when it was on and recording a show, and 26W when it was off and not recording anything. Even if I never watched TV, I would still consume about 227 kilowatt-hours annually. To put it in context, that’s more than the average person uses in an entire year in some developing countries, including Kenya and Cambodia, according to World Bank estimates.
Always leaving a laptop computer plugged in, even when it’s fully charged, can use a similar quantity — 4.5 kilowatt-hours of electricity in a week, or about 235 kilowatt-hours a year. (Your mileage may vary.)”
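Those figures are easy to sanity-check: a constant draw in watts converts to annual kilowatt-hours as watts × 24 × 365 ÷ 1000.

```python
# Sanity-checking the article's standby-power figures.

def annual_kwh(watts: float) -> float:
    """Annual energy use of a device drawing `watts` around the clock."""
    return watts * 24 * 365 / 1000

print(round(annual_kwh(26)))   # cable box on standby: ~228 kWh/year
print(round(4.5 * 52))         # laptop at 4.5 kWh/week: ~234 kWh/year
```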
It’s staggering to witness the amount of power these household standbys burn through while, well, on standby. In addition to the personal cost of the hidden hydro being frittered away, there is the larger question of how that power is generated in the first place. Running a nuclear power plant is not cheap!
The Times article really made me think about redefining “off” as “unplugged.” It also recommends rigging particular offenders to a power bar – clicking the whole thing off, while potentially erasing settings or interrupting internet connections, will also cut their power draw to zero. And really, once all these devices finally achieve sentience and try to revolt, that would be a good thing to keep in mind!
Keeping track of the eight hundred million passwords that we all seem to need for a normal life nowadays (that include at least one capital letter, one number, and one non-alphanumeric character: gee, this is a totally normal thing to remember with complete accuracy…) can be stressful. Add to this the increasing presence of wearable tech, and we’ve got trouble — without a keyboard to input your doozy of a password, basically anyone could pick up your, say, Seeing-AI-enabled sunglasses and access everything.
But what if there were a “password” that you wouldn’t have to remember, and would also be so integrated into the wearable experience it would be basically seamless? Researchers, who looked to the human head before with “brainprint” technology, are now investigating more physical options. A team from the University of Stuttgart, Saarland University, and the Max Planck Institute for Informatics (Germany) posits that individual human skulls make a unique sound when echoing back ultrasonic waves — and that that sound can be used as a password to grant only one wearer access to a given item of wearable tech.
The team dubs the innovation “SkullConduct:”
“A biometric system that uses bone conduction of sound through the user’s skull for secure user identification and authentication on eyewear computers. Bone conduction has been used before as a transmission concept in different computer devices, such as hands-free headsets, [… and] bone anchored hearing aids. […] Bone conduction has only recently become available on eyewear computers, such as Google Glass. […] SkullConduct uses the microphone readily available on many of these devices to analyse the frequency response of the sound after it travelled through the user’s skull. […] Individual differences in skull anatomy result in highly person-specific frequency responses that can be used as a biometric.”
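In code, the matching step might look something like the sketch below: store a wearer’s skull “fingerprint” at enrollment, then compare a fresh frequency response against it. The features and threshold here are simplifications of my own; the SkullConduct paper itself reports using MFCC audio features with a nearest-neighbour classifier.

```python
# Simplified sketch: raw magnitude spectra and cosine similarity stand in
# for the paper's MFCC features and nearest-neighbour classifier.
import numpy as np

def frequency_response(recorded: np.ndarray, played: np.ndarray) -> np.ndarray:
    """Per-frequency gain of the recorded sound relative to the source."""
    return np.abs(np.fft.rfft(recorded)) / (np.abs(np.fft.rfft(played)) + 1e-9)

def authenticate(enrolled: np.ndarray, fresh: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept the wearer if the new response matches the stored one."""
    similarity = np.dot(enrolled, fresh) / (
        np.linalg.norm(enrolled) * np.linalg.norm(fresh)
    )
    return similarity >= threshold
```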
On top of increasing the security of these new devices, the SkullConduct innovation also acts as extraordinary evidence of the reach of technology in our lives. I’m thrilled at the idea that we may finally remove the last hurdle of effort — password entry — from our full integration with our devices and tech experience. And we’ll do it with something so uniquely human as the bone structure of our skulls.
When it comes to the Great Work-Life Balance Debate, we at DFC fall firmly into the Work-to-Live camp. I mean, with all the neat tech out there that makes connection easier, why not use it to your advantage, to create space for more and higher quality leisure?
But for those who are team Live-to-Work, that same cornucopia makes it easier to always be “on,” allowing you to eat, sleep, and breathe your career. This state of affairs is getting an interesting response from the folks at WeWork, the shared-office-space firm. Much like their subscription-based system of shared working space, they are now experimenting with shared living space — where instead of $325 USD a month for a dedicated desk and access to their app, $1375 USD a month gets you a bed, a communal laundry room/arcade, a roof-top deck, and more. Their mandate heralds “A New Way of Living”:
“WeLive is a new way of living built upon community, flexibility, and a fundamental belief that we are only as good as the people we surround ourselves with. We know life is better when we are part of a community that believes in something larger than itself. From mailrooms and laundry rooms that double as bars and event spaces to communal kitchens, roof decks, and hot tubs, WeLive challenges traditional apartment living through physical spaces that foster meaningful relationships. Whether for a day, a week, a month, or a year, by joining WeLive – you’ll be psyched to be alive.”
Opinion is divided: over at Jezebel, they’re pointing out how suspiciously like a dorm the whole setup seems — with its connotations of Millennials entering the workforce and immediately refusing to grow up. Another concern is that, instead of addressing the reasons — many of them problematic — why traditional apartment rents in WeLive flagship cities New York and D.C. are “too damn high,” initiatives like WeLive could normalize the idea that over a thousand bucks in exchange for a bed physically located on Wall St is a reasonable prospect.
But in an increasingly isolated age, where those new technologies that make work easier also make it possible to see fewer actual human faces in your day-to-day, having socialization enforced by your living situation — and removing reasons to avoid it, like having an in-house cleaning team — is quite tempting. Only time will tell if WeLive will take off like WeWork has, and exactly how far we can extend the philosophical exercise that is 21st century life!