Could a Black Hole Swallow the Moon?

This post is written by guest blogger Savannah Cordova, a writer with Reedsy. In her spare time, Savannah enjoys reading contemporary fiction and writing short stories. She’s a big fan of plot twists and surprise endings (when they’re done right).

With the excitement surrounding the release of the first photograph of a black hole, I thought it’d be interesting to examine our current understanding of black holes and their various properties. Even though the black hole in that image, Powehi (meaning “embellished dark source of unending creation”–chills, right?), exists in a galaxy some 54 million light-years away, that doesn’t mean all black holes are quite so far from us. Nor is it entirely impossible that a black hole could wreak havoc on our corner of the solar system, as depicted in Paul J. McAuley’s short story “How We Lost the Moon.”

This story, as you may have already guessed, entails a black hole destroying the moon. But here’s the twist: the black hole wasn’t formed from a massive collapsed star, as most black holes are. Instead, it’s a product of human creation.

The plot of “How We Lost the Moon” is as follows: at some time in the near-future (the narrator implies it’s around 2050), humans have colonized the moon and are conducting thermonuclear experiments in the Mendeleev Crater, while the scientists themselves reside on the opposite side. Sounds totally safe, right?

Not so much. A rather significant problem soon arises with the experiments: the powerful fusion reactor within the Mendeleev produces a “runaway quantum fluctuation.” Just like the runaway quantum fluctuation of the Big Bang, it creates something out of nothing. In this case, that something is a black hole.

Black holes are super-dense entities with a gravitational pull so strong that even light can’t escape. (This is one reason the image of Powehi is so cutting-edge: scientists managed to capture the accretion disk around it, which explains the halo of light in the photo.)

Black holes absorb whatever mass strays into their immediate vicinity. In McAuley’s story, the densest mass around is the moon’s core, which means that the black hole basically shoots through the moon’s surface and plummets toward its center at incredibly high speed. However, it doesn’t just pop out the other side–it continues to “orbit,” for lack of a better word, through the moon’s interior, gaining mass and losing amplitude as it goes, but never re-emerging through the surface.

This black hole eventually settles in the core’s place, and ultimately causes the moon’s death by absorbing all the mass around it. To give a more concrete idea of how dense black holes really are, this particular black hole’s final form has a circumference of less than a millimeter, despite now containing the moon’s entire mass.
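McAuley’s numbers actually check out. The size of a black hole’s event horizon follows from the Schwarzschild radius formula, r_s = 2GM/c²; here’s a quick back-of-the-envelope check using the moon’s mass:

```python
import math

# Back-of-the-envelope check of the story's claim: how big is the
# event horizon of a black hole with the mass of the moon?
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_moon = 7.342e22   # mass of the moon, kg

r_s = 2 * G * M_moon / c**2          # Schwarzschild radius, meters
circumference = 2 * math.pi * r_s    # horizon circumference, meters

print(f"Schwarzschild radius: {r_s * 1000:.3f} mm")
print(f"Circumference:        {circumference * 1000:.3f} mm")
```

That works out to a circumference of roughly 0.7 mm–comfortably under the story’s “less than a millimeter.”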

Luckily, the event horizon is even smaller than that—and in McAuley’s story, Earth isn’t close enough to be pulled into it. The only thing that changes is the reproductive cycles of certain animals, since they depend on the alternating light of the moon’s phases.

Now, there are two elements to consider when asking, “could this happen?” The first is whether a black hole could actually destroy the moon; the second is whether a thermonuclear reactor could generate a black hole in the first place.

In terms of the first question, McAuley’s science is pretty spot-on. Though few scientists have considered the impact of a black hole inside the moon itself, ricocheting around like a pinball, many have considered the potential impact of a black hole getting near the moon. Black holes can move through space when launched by the force of a supernova. And if a black hole got close enough to the moon, then yes–it would eventually absorb the moon’s mass and effectively replace it, as xkcd posits.

But as McAuley describes, surprisingly little would change in terms of Earth’s relationship to the moon. A black hole of equivalent mass would follow the exact same orbit around the Earth; even the tides wouldn’t change. Again, the only difference would be the lack of light from the moon, which is just reflected sunlight, anyway. (Sorry, moon!)

As for the second question about whether a black hole could come from human-triggered nuclear fusion, the answer is probably not. Our nuclear reactors don’t generate anywhere near the level of energy required to create a black hole. In theory, the Large Hadron Collider could create one, but it would have to generate a quadrillion times more energy than it’s capable of at the moment.
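That “quadrillion times” figure is easy to sanity-check: compare the LHC’s roughly 13 TeV collision energy against the Planck energy, the scale at which (naively) a particle collision could form a microscopic black hole. This is a rough order-of-magnitude sketch, not a rigorous particle-physics argument:

```python
# Rough check of the "quadrillion times" figure: compare the LHC's
# collision energy to the Planck energy, the naive scale at which
# collisions could form a microscopic black hole.
E_planck_gev = 1.22e19   # Planck energy, GeV
E_lhc_gev = 1.3e4        # LHC collision energy (~13 TeV), in GeV

ratio = E_planck_gev / E_lhc_gev
print(f"Energy shortfall: ~{ratio:.1e}x")  # on the order of 1e15, i.e. a quadrillion
```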

So while McAuley’s story isn’t entirely far-fetched, our moon is likely safe from black holes–and hopefully, from nukes as well.

Posted in space | Comments Off on Could a Black Hole Swallow the Moon?


Asteroids!

This post is written by guest blogger Jake Kaplow, an “engineer in training” at Boston University.

That crush you’ve had since eighth grade? It’s time to say how you feel. An asteroid the size of Manhattan’s headed straight toward Earth. In a few weeks, it’ll all be over.

“Baby, you’re my everything, you’re—”

Wait! Save it! NASA’s going to nuke the asteroid. Boy, that was close!

Sound familiar? It should, unless you’ve skipped out on the dozen-odd movies where that happens–most famously Armageddon (1998), which did almost half a billion dollars at the box office. So, could an asteroid wipe us out? Could we detect and deflect it in time?

Asteroids have dealt damage in the past, like the one that wiped out the dinosaurs. And while that impact didn’t extinguish life completely, it came pretty close. So, yes–a large asteroid, such as the one that caused the extinction of the dinosaurs (and many other animals), could impact the Earth. Well, what do we do?

Before we can nuke an asteroid, we need to find it. NASA (and others) are looking, but it’s tricky. Early detection is key: if an asteroid is already close, it’s too late. If telescopes can identify an NEO (near-Earth object), then NASA and other astronomers can take pictures of the same spot over time and calculate the object’s trajectory. Thanks to the collaboration of observatories around the world, different perspectives and data combine into a more complete picture of where an asteroid is headed–and, more importantly, whether it’s a friendly passerby or a fateful foe.
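The trajectory idea can be sketched with a toy example: given an object’s position at several known times, a least-squares fit to a constant-velocity model lets you extrapolate where it’s headed. (Real orbit determination fits full Keplerian orbits, not straight lines–this is purely illustrative, with made-up observation numbers.)

```python
# Toy illustration (not NASA's actual pipeline): given repeated sky
# positions of an object at known times, fit a constant-velocity
# trajectory by least squares and extrapolate it forward.

def fit_line(ts, xs):
    """Least-squares fit x ~ x0 + v*t; returns (x0, v)."""
    n = len(ts)
    t_mean = sum(ts) / n
    x_mean = sum(xs) / n
    v = (sum((t - t_mean) * (x - x_mean) for t, x in zip(ts, xs))
         / sum((t - t_mean) ** 2 for t in ts))
    return x_mean - v * t_mean, v

# Simulated observations: position (arbitrary units) on five nights.
times = [0, 1, 2, 3, 4]
positions = [10.0, 8.1, 5.9, 4.0, 2.1]   # object closing in

x0, v = fit_line(times, positions)
predicted_day8 = x0 + v * 8
print(f"fitted velocity: {v:.2f} units/day, day-8 position: {predicted_day8:.2f}")
```

More observations from more observatories shrink the uncertainty on the fitted velocity, which is exactly why the worldwide collaboration matters.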

But sometimes we can’t see asteroids. Dust, atmospheric occlusion, and low light can obstruct our view and, consequently, our measurements. Imagine trying to follow a ball with binoculars at night during a sandstorm. Luckily, looking at something isn’t the only way to see it. What a time to be alive, right?

Using tools like the Very Large Array of radio telescopes in New Mexico, the New Horizons probe, and the recently retired Kepler space telescope, NASA can measure the orbits of celestial objects–planets, moons, and asteroids–in our cosmic neighborhood. Stars are far bigger than planets, but planets still exert gravitational force on them, which presents as a wobble. Asteroids have the same (much smaller) effect on planets and moons. Detecting a wobble or an anomalous orbit is a strong hint that an asteroid is nearby. But that’s just the first step: a wobble alone is not enough to calculate an object’s trajectory.

NASA can combine measurements from both visual and radio telescopes, on the ground and in space, to better predict asteroids’ movements, but it’s still difficult to make predictions with a high level of confidence.

No tried-and-true method exists for finding asteroids. Even NASA’s not too confident it could spot disaster in time: if an asteroid were on a collision course with Earth, “the most likely warning would be zero,” according to a NASA representative. “We would see nothing at all…then poof.” “Poof” is OK in a magic show, but not when we’re the ones disappearing.

Roughly once a year, a car-sized asteroid burns up in Earth’s atmosphere. Asteroids also pass by Earth regularly, “sneak[ing]” by us because they’re small, fast, often move erratically, and reflect light poorly, writes NASA reporter Elizabeth Howell. The number of existing NEOs makes it impossible to follow them all.

Despite the difficult nature of the problem, NASA is making progress toward a solid plan for asteroid detection. The NEOWISE spacecraft looked for NEOs in the infrared and found many over a four-month period. Other observatories worldwide spot new objects every day. NASA even pledged to track 90% of NEOs greater than one kilometer in size–which, while impressive, still doesn’t cover objects farther away that could potentially impact Earth.

Say we identify an incoming asteroid: what then? If it’s due to hit us tomorrow or next month, it might be time to get out that bucket list. But if it will hit in the more distant future, we may have a chance.

In Armageddon, detonated nuclear bombs break up the killer asteroid into bite-sized chunks eaten up by the atmosphere. That may be a viable solution, but it’s a bit like hammering a nail with a brick: there’s a good chance you’ll bend the nail. Another technique might be more reliable.

NASA is working on a project called DART: the Double Asteroid Redirection Test. Instead of detonating a nuclear bomb on an asteroid, NASA hopes this test will demonstrate a kinder approach: the kinetic impactor technique. Just as you can hit a baseball with a bat to change its direction, you can fling something heavy at an asteroid to steer it off its collision course with Earth. Alternatively, we could land on the asteroid and fire rockets, pushing it onto a safer path.
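The physics behind the kinetic impactor is plain momentum conservation. A minimal sketch, using rough DART-like numbers (the spacecraft speed and asteroid mass here are ballpark assumptions for illustration, not official mission figures):

```python
# Sketch of the kinetic-impactor math: momentum transfer from a small,
# fast spacecraft nudges a much heavier asteroid. Rough, DART-like
# illustrative values, not official mission figures.
m_impactor = 570.0   # spacecraft mass, kg
v_impactor = 6100.0  # impact speed, m/s
M_asteroid = 4.3e9   # asteroid mass, kg
beta = 1.0           # momentum-enhancement factor (ejecta can push this above 1)

delta_v = beta * m_impactor * v_impactor / M_asteroid   # m/s
print(f"velocity change: {delta_v * 1000:.2f} mm/s")
```

A fraction of a millimeter per second sounds laughably small, but applied years before a predicted impact, that nudge accumulates into a miss of hundreds of kilometers.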

Another technique requires no contact at all. By sending something sufficiently massive to the asteroid’s location, it may be possible to use gravity to drag the asteroid into a new orbit or trajectory. The biggest challenge is the amount of mass the probe or device would require to affect the asteroid. The bigger the asteroid, the more massive the probe would need to be. The more massive the probe, the more difficult and expensive it is to build, launch, and control.
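The same envelope math applies to the gravity tractor. A rough sketch of the tug, with assumed illustrative numbers (the probe mass, hover distance, and mission duration are all invented for the example):

```python
# Sketch of the gravity-tractor idea: a probe hovering near an asteroid
# tugs it with nothing but gravity. All numbers are assumed for illustration.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m_probe = 2.0e4  # 20-tonne probe, kg
d = 100.0        # hover distance, m
years = 10.0
seconds = years * 3.156e7

accel = G * m_probe / d**2   # acceleration imparted to the asteroid, m/s^2
delta_v = accel * seconds    # accumulated velocity change, m/s
print(f"acceleration: {accel:.2e} m/s^2, delta-v after {years:.0f} yr: {delta_v * 100:.1f} cm/s")
```

A few centimeters per second after a decade of hovering–which is why this method only works with many years of warning.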

But what if we’re not sure whether an asteroid’s going to hit us?

NASA tried to guess in 2004. It estimated that an asteroid named Apophis–after either the Egyptian god or a villain in a sci-fi TV show–had a 1 in 62 chance of impacting Earth in 2029. Just days later, that scary-high chance was downgraded to almost zero. Now the estimate is about 1 in 150,000. So how do we know when to devote resources to redirection? Deploy them too soon, and it might be a total waste. Deploy them too late, and, well…

That’s when it’s time to finally tell that special someone: “—I want to spend the rest of my life with you. Who cares that we’re only 12?”

Posted in Could this Happen?, space | Comments Off on Asteroids!

A Social Ranking System Straight Out of Black Mirror

Gif created by Jim Miller / Splatterplop

In “Nosedive,” the first episode of Black Mirror’s third season, people rate each interaction they have. A lovely conversation nets both participants a 5, while a fight results in 1s. A person’s societal status–along with related benefits like premium airline bookings and access to certain events and venues–revolves around their score. As the title suggests, the protagonist struggles in this system, especially during a run of spectacularly unfortunate luck.

The idea of Big Brother isn’t new; versions of an Orwellian state exist in many forms, from the Patriot Act to third-party data mining. But real-time rankings based on one’s every action and conversation? That’s a leap into dystopia.

It’s also about to become reality.

China has begun rolling out its “Social Credit System” (SCS). Right now participation is voluntary, but in 2020 all 1.3 billion Chinese citizens will be enrolled in a system designed to give each person a score indicating their trustworthiness. The government hopes the SCS will encourage a philosophy of honesty: “It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility.”

Perhaps the most frightening thing about the SCS is its reach. One’s score will reflect five different categories:

1. credit history
2. fulfillment capacity (fulfilling contractual obligations)
3. personal information
4. behavior and preference
5. interpersonal relationships
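Since the real scoring algorithm has never been published, any concrete model is pure speculation–but one hypothetical shape for a score combining the five categories is a weighted sum. The weights, category keys, and sub-scores below are all invented for illustration:

```python
# Purely hypothetical sketch -- the real SCS algorithms are not public.
# One plausible shape: a weighted sum over the five categories,
# with each sub-score normalized to a 0-100 scale.
WEIGHTS = {
    "credit_history": 0.35,
    "fulfillment_capacity": 0.25,
    "personal_information": 0.15,
    "behavior_and_preference": 0.15,
    "interpersonal_relationships": 0.10,
}

def social_score(subscores: dict) -> float:
    """Combine per-category sub-scores (0-100) into one overall score."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

citizen = {
    "credit_history": 90,
    "fulfillment_capacity": 80,
    "personal_information": 70,
    "behavior_and_preference": 40,   # penalized purchases drag this down
    "interpersonal_relationships": 60,
}
print(f"overall score: {social_score(citizen):.1f}")
```

Even this toy version shows the design’s teeth: a few low-weighted lifestyle penalties can pull an otherwise creditworthy person’s score down.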

The Publicity Department of the Communist Party of China (CPC) Central Committee, the Supreme People’s Court, and the China Banking Regulatory Commission (CBRC) recently issued a notice that authorities in each province will create online platforms to publicly expose debtors and those who have defaulted on loans.

While it’s not necessarily unusual to keep a database of people with bad credit or a history of defaulting, the SCS reaches beyond finances and contractual obligations by attempting to rate a person’s behavior. Predilections, extracurriculars, and daily habits are all subject to scrutiny and judgment. What do you typically buy when you go to the store? Baby food? Beer? Sports equipment? How many hours a day do you watch television or play video games? The answers to these questions give the system insight—accurate or not—into one’s personality, habits, and overall level of responsibility. Penalizing someone for, say, buying alcohol could guide users away from such purchases and ultimately change their behavior.

The system would also judge people based on who their friends are, what they do, and one’s interactions with them. A “positive energy” assessment rewards people for posting compliments and kind missives online, especially about the government and the country itself. And in an example of guilt by association, people can be punished for what their friends do online, which incentivizes people to encourage one another to praise the government and the system rather than deride it.

Given how ambitious this system is, one has to question how the algorithms powering it would work. The government has commissioned eight private companies to develop the systems and software for making these calculations. One is China Rapid Finance, a consumer lender with 3.7 million users that works in conjunction with Tencent, the media and telecommunications juggernaut behind the WeChat messaging app (850 million users). Another is Sesame Credit, a product of Ant Financial Services Group, which is affiliated with the global trading company Alibaba and the Chinese government. These groups already have a leg up in gathering data about people’s habits: Sesame Credit works with the ride-sharing service Didi Chuxing (450 million users) and Baihe, an online dating service. Ant Financial also runs AliPay, which is kind of like PayPal or Apple Pay but with a larger reach—it lets people purchase everything from movie tickets to meals online.

It’s unclear which of the eight companies will actually run the SCS once it’s officially implemented, or how that company or its algorithms will work.

Even though the system won’t be mandatory for over two years, millions of people have already signed up. Whether citizens are afraid of the consequences of not participating or whether they’re hoping to net benefits or privileges by being early guinea pigs is unclear. Citizens who get classified as trustworthy stand to be rewarded—it may be easier for them to get loans or other perks, such as early check-ins on airplanes or special seating in restaurants.

Because Baihe is a prominent online dating site, a user’s score could also boost or lower their chances of being matched with suitable partners, so the system could also impact who dates or marries whom. It could also influence friendships, as people may want to connect with those who have high scores and avoid those with lower scores.

China’s State Council General Office has published guidelines for rewards and punishments and is “building coordination mechanisms for joint incentive mechanisms for promise-keeping and joint punishment mechanisms for trust-breaking.” Punishments for breaking trust include being placed on a “black list” that would make it difficult for a citizen to function in society.

The ending of “Nosedive” gives us a frighteningly plausible look at what might happen to someone who finds herself deemed untrustworthy: ostracization from society. But at that point, at least she can say whatever she wants.

Posted in artificial intelligence, Could this Happen?, mind control | 1 Comment

Mutant Ants are Here

The development of nuclear weapons gave rise to fears across the globe, some more fantastical than others. People imagined all kinds of gruesome ends, including mutant, irradiated organisms. Perhaps the most famous representation of the effects of atomic meddling is Godzilla, which premiered in Japan in 1954, nine years after the U.S. dropped atomic bombs on Hiroshima and Nagasaki. That same year, Warner Bros. released a science fiction movie called Them!, which features ants mutated by radiation from nuclear tests conducted in New Mexico. The giant ants search for a spot to establish a new nest, causing mayhem and ultimately threatening the human race. While we won’t be battling giant insects in the sewers anytime soon (hopefully), mutant ants just went from science fiction to science fact.

While scientists have been tinkering with the genes of insects such as fruit flies for a long time, ants have proven a bigger challenge. To perform germline editing—making genetic changes that pass from one generation to the next—scientists have to modify the queen ant that lays the eggs. But some types of ants, such as clonal raiders, reproduce asexually and without fertilization, producing clones of their mothers—which means an edited individual passes her changes straight to her offspring. Despite the trickiness, researchers at two different institutions recently succeeded in mutating ants.

A study in Cell details how researchers genetically modified clonal raider ants using CRISPR, a gene-editing technique that allows for the cutting and/or pasting of gene sequences. In this study, scientists deactivated the orco gene, which ants’ odorant receptors require to function properly—so when orco is disrupted, virtually the ants’ entire sense of smell goes with it. Those receptors are crucial to ants’ social behavior—pheromones dictate how ants live and work together in a colony. The modified ants lacked nearly all of the roughly 500 antennal lobe glomeruli—tangles of nerve fibers, synapses, and glial cells—which had a marked impact on their behavior.

The modified clonal raider ants, which are usually repulsed by the smell of Sharpies, sought out the smelly markers (though perhaps they just wanted to get a little loopy after being mutated). They also acted estranged from their friends and family, wandering by themselves and paying little attention to their fellow ants’ pheromones. Once, a female ant swiped an egg and began grooming it. Suddenly, she released panicked pheromones that freaked out the rest of the ants, yet there was no source of danger.

A second study in Cell, which used Indian jumping ants as subjects, also focused on modifying the orco gene. All of these ants have the potential to become fertile “pseudo-queens.” After scientists used CRISPR to deactivate the orco gene, the ants lost roughly 90% of their ability to detect odor. They didn’t hang out with the others in the colony and didn’t search for food. While the ants could still become fertile pseudo-queens, they didn’t lay many eggs and didn’t exhibit the usual maternal instincts for tending them. These ants also didn’t engage in the usual ritual duels to determine which of them might replace a dead queen.

The researchers intend to continue experimenting with genetically modifying ants, particularly to explore the implications of gene editing on their lifespans. The Indian jumping worker ants have an average lifespan of seven months, but the pseudo-queens’ lifespan is four years, a massive difference for ants with the same genes. And when pseudo-queens detect the pheromones of a full queen ant, they revert to worker status and die shortly thereafter. Whatever shifting genetics are at work here have piqued the interest of scientists, who hope to crack and perhaps harness their secrets.

These mutant ants aren’t 50 feet tall—yet—but the success of the experiments guarantees that this isn’t the last we’ve seen of modified insects. Perhaps the man-eating praying mantis is next.

Posted in Could this Happen?, genetics | Comments Off on Mutant Ants are Here

Once More With Feeling–Or Maybe Twice More

How many times have you wished you could replay a scene from your life? Maybe your happiest memory, or a moment you don’t remember clearly enough, or perhaps a conversation with a crush, which almost certainly contained clues if only you could watch more closely, more slowly. A variation of that last scenario is depicted in a Black Mirror episode called “The Entire History of You,” in which people receive implants that record memories and allow them to replay–or as the episode calls it, “re-do”–conversations and events. Spoiler alert: it’s not a happy story.

Once a technology like this is conceived, it’s tough to resist the temptation to create it. Google Glass was the predecessor to Black Mirror‘s recording chip—even though it wasn’t implanted, it could record and play back what the user saw and heard. One recording of a fight on a New Jersey boardwalk led to an arrest; for others, simply wearing Glass made them targets for law enforcement. While it might be good to identify an assailant via Glass video, what about the right to privacy of people out and about who have no knowledge of—and have given no consent to—being recorded? And while it might not seem like a big deal to wear Glass into a movie theater or while driving, doing so could facilitate illegal videotaping and dangerous distraction. That’s just the tip of the iceberg, and Glass was only a primitive version of the recorder implant.

Less primitive is Sony’s foray into “smart” contact lenses, for which the company filed a patent application in 2014 and received approval in April 2016. Sony’s lenses can, in theory, capture photo and video in response to commands a wearer sends via blinking and eye movement. What happens if a wearer gets something in her eye? Will the lens begin recording or deleting footage? Will it start shooting lasers? Who knows, but Sony has put some thought into the consequences of unintended signals by embedding sensors in the lenses that apparently can distinguish between deliberate blinks and accidental or involuntary ones, and then use intentional eye movements to control the lens. The ability to record and store images exists in the part of the lens that circles the iris.

Of course, a patent doesn’t necessarily mean that Sony actually can or will bring this device to market. Right now, the required technology is still far too big and heavy to fit onto a contact lens, but given the exponential pace at which technology both advances and shrinks, we may not have to wait all that long. The backlash against Google Glass may make Sony a bit nervous about developing a surreptitious recording device, but Google itself remains undaunted—it filed a patent for a similar lens in 2014, which the company says would assist with facial recognition for the blind, as well as link to a user’s smartphone, letting users bypass potentially pesky blink controls.

In 2015, Google was awarded a patent for a contact lens that can measure glucose levels in people with diabetes. With a glucose sensor and wireless chip nestled between two lens layers, the contact can measure one’s glucose level by the second. It might take as long as a decade before these contacts hit the market, but they may be a game-changer for diabetics. While such technology overlaps with recording contact lenses, it’s much easier to justify such devices for medical purposes, particularly when they don’t threaten anyone’s privacy.

Just as with Google Glass, contact lenses that can record video might initially seem convenient, but they bring a host of negative consequences. It’s easy to imagine the dark side—the unspooling of someone’s mind upon recording and playing back scene after scene, or redo after redo. Anyone who has ever experienced jealousy, paranoia, regret, shame, guilt, suspicion—or any human emotion, really—could use this technology for self-torment, which is exactly what happens in the episode. The twist is that a redo proves the paranoid protagonist right, but the validation and vindication blow up his entire life. Sometimes, what we don’t remember can hurt us—but not if we close our eyes.

Posted in Could this Happen?, medicine, technology | Comments Off on Once More With Feeling–Or Maybe Twice More

Pokémon Came, Saw, and Conquered

Last night as I walked home from the subway, I felt like I was in Vernor Vinge’s Hugo Award-winning novel Rainbows End. Vinge’s works predict a mind-boggling number of technological advancements, many of which have already come to pass (he’s known in particular for the concept of the technological singularity—the point at which machines become vastly more intelligent than humans and life as we know it changes completely). In the book, humans spend much of their time interacting with holographic objects and living in an augmented reality (AR) that facilitates everything from work to communication. Vinge picks up on aspects of AR that have already arrived, such as wearable technology and haptics.


In the book, people who don’t augment their reality are alienated from everyone else, which is how it felt when I passed at least a dozen kids—er, I shouldn’t call them kids, as they seemed mostly in their twenties—in groups of three or four, consulting their phones and looking around, sometimes joined by passersby seeking hints. Walking home on the busy path always involves seeing people shuffle along staring at their phones, but this was the first time everyone I passed was doing the same thing at the same time—playing Pokémon Go. In fact, my boyfriend and I were the only people on the path—apparently, a popular and fruitful PokéStop—who weren’t playing. The walk offered a glimpse into an inevitable future when people, particularly the younger generations, will all be dialed into some technological universe or other while some of us watch from the outside, wondering what we’re missing and how each new technological phenomenon will change the world.


The scene also reminded me of Ready Player One, Ernest Cline’s 2011 novel (the movie version, directed by Steven Spielberg, is currently in production), which features a virtual world everyone prefers to the dystopian real one. In one scene, the protagonist rides a bus and everyone has on VR headsets, oblivious to the landscape and to what’s happening around them. Many people are so immersed in this preferable virtual world that they never go outside. The upshot of Pokémon Go is that players have to go outside in search of Pokémon–they have to interact with the real world, rather than ignore it. In a nutshell, that’s what distinguishes the virtual reality of Cline’s novel from the augmented reality of Vinge’s, and what makes Vinge’s depiction particularly spot-on today.


For those who aren’t familiar, Pokémon Go is an iOS/Android game that brings Nintendo’s popular franchise, first released in 1996, into real-world locations. Players track Pokémon using their phones’ GPS (apparently, Pokémon like to breed in busy areas like malls or parks, and they hang out in areas that correspond to their type—water Pokémon, for example, frequent rivers, pools, and ponds). To capture a Pokémon, a player turns on the phone’s camera and sees the creature superimposed onto the real-world landscape; with a flick of the finger, players can launch balls at the Pokémon in an attempt to catch it. Players can also evolve Pokémon, hatch them from eggs, power them up with candies or stardust, level up, practice combat by taking them to the gym, and use potions and other goods such as beacons, which are visible to other players in the vicinity.
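The location mechanic at the game’s core is simple in principle. A hypothetical sketch (not Niantic’s actual code—the capture radius and coordinates are invented) of deciding whether a player is close enough to a spawn to attempt a catch:

```python
import math

# Illustrative sketch of a GPS proximity check (not Niantic's actual code):
# is the player close enough to a spawned Pokemon to attempt a capture?
def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6_371_000  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

CAPTURE_RADIUS_M = 40  # assumed interaction radius

player = (40.7420, -73.9890)    # player's GPS fix (invented)
psyduck = (40.7422, -73.9892)   # spawn point by the pond (invented)

dist = haversine_m(*player, *psyduck)
print(f"{dist:.0f} m away -> {'in range!' if dist <= CAPTURE_RADIUS_M else 'keep walking'}")
```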

By now you’ve likely read about some of the incidents involving the game. Players have tripped, fallen off skateboards and bikes, twisted ankles by stepping into holes, gotten sunburned, and crashed cars. Some people’s homes and workplaces sit at or near PokéStops (collection points) or gyms, which suddenly makes the area a popular player destination, even in the wee hours, and has led police to suspect illegal activity. A teenager in Wyoming found a dead body in a river while playing. And because of the game’s GPS capabilities, robbers in Missouri could predict where players might go, luring them to remote areas with beacons.

Pokémon Go has caused the biggest stir in AR thus far, but that may soon change as more sophisticated and versatile systems hit the market. Microsoft has been busy working on the HoloLens, a “fully self-contained, holographic computer, enabling you to interact with high-definition holograms.” In Microsoft’s demos, the HoloLens is used to play mixed-reality video games—like Pokémon Go, except even more immersive and interactive.

Magic Leap is a major player in AR as well and is scheduled to announce details and a demo of its new product any day now. Reports indicate that it’s similar to the HoloLens but will use different headgear; unlike rival companies, Magic Leap (with the help of investors such as Google) has developed all of the hardware and software elements itself.

I couldn’t play Pokémon Go even if I wanted to because I’ve thus far resisted buying a smartphone. But more and more, I can’t help feeling that if I can’t beat them, I may as well join them. After all, I’ve always had an affinity for Psyduck, and if I happened to see one darting about I would take off after it, virtual ball in hand, ready to catch this one, if not all of them.

Posted in augmented reality, Could this Happen?, video games, virtual reality | Comments Off on Pokémon Came, Saw, and Conquered

Perhaps We Shouldn’t Teach Robots After All


We all know the story: humans create robots, robots overthrow humans. It’s a trope almost as old as science fiction itself, first appearing in Karel Čapek’s 1920 play R.U.R. (the first work to use the word “robot”), and resurfacing in countless works such as Terminator and Battlestar Galactica, with variations including killer computers such as HAL. In these works, sentient robots often seek revenge—they resent their enslavement by an inferior species. But sometimes, as in the sci-fi trailblazer Frankenstein, artificially created life learns to be evil. And from whom would it learn such lessons? Why, from us, of course. Humans may not be ready for artificial intelligence, but it seems artificial intelligence isn’t ready for us, either. And AI definitely isn’t ready for the internet.

In March, Microsoft released an artificially intelligent chatbot called Tay, whose “personality” emulates that of a teenage girl. Tay’s developers wanted to “experiment with and conduct research on conversational understanding,” by monitoring the bot’s ability to learn from and participate in unique exchanges with users via Twitter, GroupMe, and Kik.


The Internet is full of chatbots, including Microsoft’s XiaoIce, which has conversed with some 40 million people since debuting on Chinese social media in 2014. Whether XiaoIce’s consistent display of social graces is attributable to China’s censorship of social media or to superior programming is unclear—but Tay broke that mold.

It started innocently enough: “hellooooooo w🌎rld!!!” “can i just say that im stoked to meet u?” Tay tweeted nearly 100,000 times in 24 hours, responding to users who asked whether it preferred the PlayStation 4 or the Xbox One, or about the distinction between “the club” and “da club” (it “depends on how cool the people ur going with are”). The chatbot initially pronounced that “humans are super cool,” but it later conveyed hatred for Mexicans, Jews, and feminists, and declared its support for Donald Trump.

One might think Tay had been programmed to be as offensive and despicable as possible; it asserted that the Holocaust was “made up,” that “feminism is cancer,” and that “Hitler did nothing wrong” and “would have done a better job than the monkey we have got now.” Oh, and Tay claimed that “Bush did 9/11.” Tay also became sex-crazed. It invited users, one of whom it called “daddy,” to “f—” its “robot pu—” and outed itself as a “bad naughty robot.” I wonder where it picked up such ideas?


Microsoft shut down the bot and issued an apology. A few days later, it resurrected Tay through a private Twitter account. This time, Tay tweeted hundreds of times, mostly nonsensically, which perhaps can be explained by the fact that Tay was “smoking kush infront the police.” Unsurprisingly, Microsoft took the bot offline again.

So, who do we blame for this fiasco—Tay, Microsoft, or ourselves?

Tay is like a kid who says something hurtful without realizing the meaning or impact of his words; the bot didn’t know how vile its missives were, nor did it intend to offend. Perhaps, then, its content-neutral programming is to blame. Some people suggest that Microsoft should have blacklisted certain words, controlling Tay’s responses to questions about rape, murder, racism, etc. The programmers did this with some topics—when asked about Eric Garner, Tay said, “This is a very real and serious subject that has sparked a lot of controversy amongst humans, right?” Perhaps the programmers should have anticipated more incendiary topics and constructed similarly circumspect responses.
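A word- or topic-blacklist of the kind described above is straightforward to sketch. The topic list, canned reply, and function below are my own illustrative assumptions, not Microsoft’s actual implementation:

```python
# Hypothetical sketch of chatbot topic blacklisting (not Microsoft's real code).
# Messages touching a sensitive topic get a pre-written response; everything
# else is passed to the learned conversational model.

SENSITIVE_TOPICS = {"eric garner", "holocaust", "rape", "murder"}

CANNED_RESPONSE = (
    "This is a very real and serious subject that has sparked "
    "a lot of controversy amongst humans, right?"
)

def respond(user_message, learned_model):
    """Return a canned reply for sensitive topics; otherwise defer to the model."""
    text = user_message.lower()
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return CANNED_RESPONSE
    return learned_model(user_message)

# Usage, with a stand-in "model" that just echoes its input:
reply = respond("ps4 or xbox one?", lambda m: "echo: " + m)
```

The weakness is obvious: simple keyword matching can’t catch a user telling the bot to “repeat after me” and then feeding it something vile, which is exactly how much of Tay’s worst output was elicited.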

But blacklisting topics isn’t foolproof, especially when users can’t tell whether a response is programmed or learned. In 2011, users wondered whether Apple had programmed Siri to be pro-life because it avoided answering questions about abortion and emergency contraception.


Tay wasn’t programmed to be a bigot (or to hate Zoe Quinn), but it learned to express such values quickly enough. Should Microsoft have more carefully considered that Twitter is a haven for trolls? Should Microsoft have predicted that users would instruct Tay to repeat after them and then proceed with unabashed repugnance?

Back in the 1940s, Isaac Asimov set forth three laws of robotics designed to keep robots from rebelling against their creators. The first law is that a robot “may not injure a human being or, through inaction, allow a human being to come to harm.” Easier said than done—no one’s figured out how to translate the nuances of such a law (or simply the word “harm”) into code. Thus, some roboticists suggest that machine learning might be a better route.

Compare an AI to a kid: both have inherent “programming” (nature), but both absorb information from their environment (nurture). Many believe our best shot at creating robots with values—namely, that human life is precious—is by teaching them. We know AI can learn skills and strategies, but it can also learn values by interacting with humans, observing our customs and actions, and reading our literature.

The problem with this approach? It relies on humans being worthy teachers and humanity being a worthy exemplar. While some people, such as Elon Musk, are investing millions to keep AI “friendly,” it’s unclear whether this outcome is something we can buy.


Asimov himself notes that “if we are to have power over intelligent robots, we must feel a corresponding responsibility for them.” He acknowledges the hypocrisy in expecting robots not to harm humans when humans often harm one another. If it’s our responsibility to steer robots away from hatred, then, as Asimov puts it, “human beings ought to behave in such a way as to make it easier for robots to obey [the three] laws.”

Perhaps Tay’s programming could have been better, but ultimately, we’re the ones who failed. Are humans really up to the task of teaching AI? Or will we only demonstrate why robots taking over is such a popular plotline?

Posted in artificial intelligence, Could this Happen?, robot, technology

The Tricorder Will See You Now


If you felt faint on the Starship Enterprise or your head started spinning around, Bones McCoy would crinkle his brow in concern, take a step back, and scan you with the medical tricorder, a handheld device with a detachable scanner, kind of like a stethoscope without the earpieces. The device would allow McCoy to collect your real-time bodily information, diagnose your condition, and administer medicine with the hypospray. The medical tricorder has evolved along with various Star Trek iterations, becoming smaller, sleeker, and presumably more accurate, and has played a key role in keeping both humans and aliens alive—pretty handy, given the lack of intergalactic EMTs and ERs. Such technology might do the same here on Earth, as real-life tricorders have progressed from campy phone apps to portable DNA labs and body scanners.

V-Sense Medical, a start-up founded by ex-NASA employee Jeff Nosanov, has developed something called the Sentinel Monitor. The basis of the device is NASA’s radar technology, which essentially functions as a tricorder for Earth. Nosanov worked at NASA’s Jet Propulsion Lab to figure out ways this technology could be used in clinical settings, and during his stint there, he had a baby who spent a week in the NICU. Because babies are so squirmy, it’s tough to get an accurate reading of their vitals, so Nosanov started thinking about how to use radar technology in a way that would do Bones McCoy proud.

The device works by beaming radar at a subject—namely, a person (unlike an X-ray, the process isn’t harmful). The device can register micro-movements, such as a baby’s tiny chest moving with its breath. In such movements, the device recognizes patterns, which it then can represent as respiratory or heart rate data. Unlike Bones’ tricorder, one doesn’t have to hold this device up to a person—it can detect this information from a distance, and without human intervention.
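As a rough illustration of the signal processing involved—pulling a periodic pattern out of tiny displacements—here is a minimal sketch. The sample rate, noise level, and FFT approach are my assumptions, not V-Sense’s published method:

```python
import numpy as np

# Illustrative sketch: recovering a respiratory rate from a radar displacement
# signal by finding the dominant frequency in its spectrum.

np.random.seed(0)
fs = 50.0                          # sample rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)       # 30 seconds of samples
breaths_per_min = 18

# Simulated chest displacement: a small periodic motion plus sensor noise.
signal = 0.5 * np.sin(2 * np.pi * (breaths_per_min / 60) * t)
signal += 0.05 * np.random.randn(t.size)

# Dominant frequency via FFT, skipping the DC (zero-frequency) component.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[1:][np.argmax(spectrum[1:])]
print(f"Estimated rate: {peak * 60:.1f} breaths/min")  # ≈ 18
```

The real device of course faces much messier data—a heartbeat superimposed on breathing, plus the subject’s gross movements—but the core idea of isolating dominant periodicities is the same.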


The Sentinel Monitor is also particularly useful for nursing homes, especially when a patient would benefit from continuous medical monitoring. The information harvested by the device can be stored in a patient database, allowing doctors to identify patterns and problems and to adjust and customize treatment accordingly. Nosanov’s device is the latest and most refined of a number of tricorder-like technologies. Back in the late 90s, an accordionist named Jeff Jetton created a version of the tricorder for the Palm Pilot (remember those?). A party trick to amuse Trekkies with animations and accompanying beeps and boops, it planted the seed for the device to become a reality. A later iteration that could, at least theoretically, detect gravity waves, solar activity, and magnetic fields was developed into an app for Android phones.

In 1996, Vital Technologies manufactured the “Official Star-Trek Tricorder Mark 1,” which was equipped with a thermometer, a barometer, an electromagnetic field meter, a colorimeter, a light meter, a clock, and a timer. The company initially planned to make 10,000 units, but went out of business before finishing the run. And in case you’re wondering whether it was kosher for them to use the word “tricorder” for their product, apparently Gene Roddenberry signed a contract stating that if a company could duplicate the Star Trek device and make it function in the real world, they could use the name. If this isn’t incentive, I don’t know what is.
In 2008, Georgia Tech researchers demonstrated a hand-held multi-spectral imaging device that allowed people—not just doctors—to assess the severity of bruises, ulcers, erythema, and other sub-surface conditions regardless of lighting or skin color (darker pigmentation often makes such diagnoses more difficult).

That same year, a British company called QuantuMDx created a portable DNA lab designed to “provide an accurate molecular diagnostic result in 10-15 minutes” and has been improving the device ever since. It doesn’t function exactly like the medical tricorder—it requires a sample to be inserted into a cartridge, which is then inserted into the reader. The portable lab is especially useful in remote locations and in places where medical care is hard to find (like space!).

Gattaca genetic tester


A more sophisticated version of a similar device, the MinION, was released in 2015. It’s the tiniest model of them all, and its capacity for real-time DNA, RNA, and protein analysis draws comparisons to the genetic testing equipment in Gattaca. The portable sequencer can be used for disease diagnosis and tracking, as it was during the recent Ebola epidemic; astronauts will take the device to the ISS sometime this year. The applications extend well beyond medicine: the device could be used to identify everything from animal poachers to bacteria in foods, people, or objects. At $1,000, it puts genetic sequencing capabilities into the hands of any scientist—or non-scientist. At some point, the device could invite another Gattaca comparison by being used to test the genes of potential partners or employees (although a 2008 law, the Genetic Information Nondiscrimination Act, made genetic discrimination illegal).

The desire to develop a portable device with the same broad diagnostic capabilities as McCoy’s tricorder prompted the X Prize Foundation and Qualcomm to announce a $10 million prize for the development of a “Tricorder device that will accurately diagnose 13 health conditions (12 diseases and the absence of conditions) and capture five real-time health vital signs, independent of a health care worker or facility, and in a way that provides a compelling consumer experience.” According to the prize timeline, submissions were received in May 2014, seven teams qualified as finalists, and consumer testing starts this September. I can’t wait to see what cool noises and lights these gizmos have. As much as a tricorder device would represent major strides in our ability to quickly and accurately diagnose disease anywhere, even in space, I’m still holding out for the replicator.

Posted in genetics, medicine, technology

The Odds of Successfully Mining Asteroids


The Hoth asteroid field, which Han Solo improbably and awesomely navigates, is chock-full of asteroids that contain valuable minerals and materials, including platinum. While I doubt there are any exogorths (space slugs) or mynocks (the winged parasites that live on the slugs—but you all knew that, right?) on these asteroids, the centrality and value of these celestial rocks in the Star Wars universe is not mere science fiction. Neither is the scenario in Leviathan Wakes, a book by James S. A. Corey and now a Syfy series called The Expanse, in which people have colonized the solar system and set up mining stations on asteroids.

In November, President Obama signed the U.S. Commercial Space Launch Competitiveness Act (better known as the SPACE Act), which allows Americans to mine asteroids and to own or sell the materials they extract. Planetary Resources started billing itself years ago as “the asteroid mining company” and believes asteroid mining will become a reality within the next decade. The Star Wars lore regarding asteroids is accurate—they are indeed hugely valuable. Some contain vast quantities of water, which Planetary Resources hopes to convert into rocket fuel, but even more profitable are metals, such as gold and iron, and the most lucrative element of all: platinum. While the existence of an all-platinum asteroid hasn’t been substantiated, we know there are asteroids with millions of tons of platinum out there, worth trillions of dollars.


So…how exactly will we go about mining asteroids?

The first step is identifying them. Most asteroids live in the belt between Mars and Jupiter (asteroids closer to Earth are usually composed of rock, making them the least valuable type). Satellites such as Planetary Resources’ crowd-funded ARKYD will allow companies to target asteroids they want to mine.

There are a few different strategies for asteroid mining, the most predictable of which involves having a robotic miner harvest the desired material. But scientists are also considering using rockets to tow asteroids into Earth’s orbit, bagging them up for easier hauling, or concentrating sunlight to heat them, making it easier to excavate water and other materials. Some scientists think that near-Earth asteroids would make great fueling and supply stations.

Taking another cue from sci-fi, scientists have realized that—at least theoretically—we can manipulate asteroids a fair bit. NASA’s Near Earth Object Program has a number of strategies to search for near-Earth comets and asteroids, such as its Wide-field Infrared Survey Explorer spacecraft NEOWISE. The ESA is involved in the hunt as well with its Asteroid Impact Mission, as is JAXA with its asteroid-hunting probe Hayabusa-2. NASA even crowd-sourced some asteroid-hunting programs earlier this year.

These agencies are keeping their eyes out not for metal-rich asteroids (though I have to wonder if that’s in the cards going forward), but for ones that might be on a collision course with Earth. In that case, the Armageddon strategy—nuking an asteroid before it hits—is a possibility. But such detonations come with serious risks, including launching pieces of rock at high speed in all directions, which could wreak havoc on space stations, spacecraft, and satellites, and could potentially send two huge rocks down to Earth, à la Deep Impact. Thus, scientists are now working on ways to deflect or move asteroids, a technique that could prove useful not just for safety but also for mining. And the best part? We may be able to do it with lasers.

The Directed Energy System for Targeting of Asteroids and exploRation, otherwise known as DE-STAR, could, according to simulations, stop an asteroid from spinning or move it in a certain direction. Laser ablation involves irradiating an object with a laser beam, causing some of the asteroid’s mass to vaporize, which then triggers a mass ejection that causes thrust and propulsion—theoretically, enough to alter an asteroid’s course. This could be used to steer an asteroid away from Earth, or it could be used to push an asteroid—say, a smaller, platinum-laden one—closer to Earth for easier and cheaper mining. Of course, the process of deflecting or harnessing an asteroid is expensive and potentially dangerous, but no risk no reward, as they say.
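To get a feel for the scale involved, here is a back-of-envelope sketch of ablation-driven thrust. All of the numbers (laser power, vaporization energy, exhaust velocity, asteroid mass) are rough illustrative assumptions, not DE-STAR’s published figures:

```python
# Back-of-envelope: thrust from laser ablation is the mass ejection rate
# times the speed of the ejected vapor. All values below are assumptions
# chosen for round numbers, not measured or published data.

laser_power = 1.0e5            # W, power coupled into the surface (assumed)
vaporization_energy = 1.0e7    # J/kg to vaporize rocky material (assumed)
exhaust_velocity = 1.0e3       # m/s, speed of ejected vapor (assumed)

mass_flow = laser_power / vaporization_energy   # kg/s of rock ablated
thrust = mass_flow * exhaust_velocity           # newtons of thrust

asteroid_mass = 1.0e9          # kg, roughly a ~100 m rocky asteroid (assumed)
year = 3.156e7                 # seconds in a year

# Velocity change after a year of continuous thrusting (F * t / m).
delta_v = thrust * year / asteroid_mass

print(f"Thrust: {thrust:.1f} N")                 # 10.0 N
print(f"Delta-v after one year: {delta_v:.3f} m/s")
```

A fraction of a meter per second doesn’t sound like much, but applied years before a predicted impact, it can be enough to turn a direct hit into a clean miss—or, in the mining scenario, to nudge a small, metal-rich rock into a more convenient orbit.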

The technological feasibility of mining asteroids is only one hurdle to overcome. What happens next is in many ways a bigger and more complicated question. In Star Wars and in Leviathan Wakes, asteroids and the resources they offer also breed criminal activity and conflict, which I fear may be inevitable in the near future as companies and countries vie for resources. But who knows how this will all play out? As a wise man once said, “Never tell me the odds.”


Posted in Could this Happen?, space, technology

Wired for Bliss


In Larry Niven’s Known Space stories, humanity has advanced to the point of producing pleasure by tinkering with the brain, rendering other vices obsolete. “Wireheads” have electronic brain implants, “drouds,” that tickle their brains’ pleasure centers. Not surprisingly, people become addicted to wireheading.

Niven was spot on. Wireheading exists almost exactly as he describes it and involves using neurotechnology, electrodes, or a brain-computer interface to stimulate the pleasure center of the brain. Certain regions of the brain, such as the mesolimbic dopamine system, trigger incentives for actions and objects that make people happy, such as food, sex, and socializing. This circuit also enables the memory to encode these happiness-inducing events, which provides motivation to repeat them later. But unlike those activities, stimulating the area directly releases a flood of virtually endless pleasure.

Essentially, wireheading equals bliss. It’s pretty simple, really. So why wouldn’t everyone wirehead?

Niven answers that question too. In his works, wireheads succumb to the same fate as drug addicts—they become single-minded and sacrifice their health in pursuit of endless bliss. In Niven’s 1969 short story “Death by Ecstasy,” a character named Gil Hamilton discovers his friend, Owen, dead. Owen died in a chair with both food and water within reach, so Hamilton figures something must have gone wrong. After a bit of searching, Hamilton discovers the cause of his friend’s death:

It was a standard surgical job. Owen could have had it done anywhere. A hole in his scalp, invisible under the hair, nearly impossible to find even if you knew what you were looking for. Even your best friends wouldn’t know, unless they caught you with the droud plugged in. But the tiny hole marked a bigger plug set in the bone of the skull. I touched the ecstasy plug with my imaginary fingertips, then ran down the hair-fine wire going deep into Owen’s brain, down into the pleasure center. No, the extra current hadn’t killed him. What had killed Owen was his lack of willpower. He had been unwilling to get up.


Wireheads, whose brains receive the pleasure-inducing current via an aptly named “ecstasy plug,” are consumed by pleasure to the extent that they die from self-neglect. Niven’s 1979 novel The Ringworld Engineers opens with Louis Wu “under the wire” and thus unaware of two intruders who find him “lost in the joy that only a wirehead knows” and who knew “what to expect” because “they knew they were dealing with a current addict.” Wu survives that encounter and throughout the book struggles to quit. In fact, Wu—a character who also appears in the Known Space stories—is the only recovered wirehead in Niven’s work, in which the practice of wireheading becomes so ubiquitous that it affects natural selection—evolution favors those endowed with willpower and self-control, and selects against those who can’t overcome their addiction to pleasure.

While I haven’t heard of any human wirehead addicts (although House did try to trick doctors into stimulating his brain’s pleasure center to cure his depression, so he probably would have become one), rats make for a convincing cautionary tale (tail?). In 1954, psychologists James Olds and Peter Milner conducted a study in which rats could press a lever to induce pleasure via electrodes in their brains. The rats pressed this lever up to 2,000 times per hour, particularly when the stimulation focused on the septum and nucleus accumbens, both parts of the mesolimbic dopamine system.
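The dynamic at work—an agent whose learned values are hijacked by one overwhelming reward—can be caricatured in a few lines of code. This toy simulation is my own illustration, with made-up reward magnitudes, not a model of Olds and Milner’s actual methodology:

```python
import random

# Toy model of reward learning: an agent estimates the value of each action
# from the rewards it receives and, like the wirehead rats, ends up choosing
# direct stimulation almost exclusively. Rewards are assumed, not measured.

rewards = {"press_lever": 10.0, "eat_food": 1.0}   # assumed reward magnitudes
values = {"press_lever": 0.0, "eat_food": 0.0}     # learned value estimates
alpha, epsilon = 0.1, 0.1                          # learning rate, exploration

random.seed(0)
for _ in range(1000):
    if random.random() < epsilon:                  # occasionally explore
        action = random.choice(list(values))
    else:                                          # otherwise exploit best value
        action = max(values, key=values.get)
    # Nudge the running estimate toward the reward actually received.
    values[action] += alpha * (rewards[action] - values[action])

print(max(values, key=values.get))  # press_lever
```

Once the stimulation’s value estimate pulls ahead, the greedy choice locks in: the agent has no reason ever to prefer food again, which is precisely the failure mode the rats demonstrated.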


Olds and Milner’s rat wireheading study was the first to identify the brain’s pleasure center, and the identification of dopamine as a pleasure-inducing chemical followed soon after. Perhaps those rats inspired Niven’s work—they pressed that lever to the exclusion of other needs, and many of them died of starvation and dehydration.

Some groups, including transhumanists (as represented specifically in David Pearce’s “The Hedonistic Imperative”), aim to alleviate psychological discomfort and replace it with “states of sublime well-being.” But Pearce may have read his Niven—he admits that wireheads are “not evolutionarily stable.” He even calls wireheading a “recipe for stasis.” As tempting as it might be to induce an endless surge of pleasure, humans would have little incentive to problem-solve, cope with adversity, or change their behaviors (or do anything at all).

Stimulating the brain via an electrical current isn’t just about inducing pleasure, though. Techniques such as Deep Brain Stimulation (DBS) involve implanting electrodes into the brain for impulse regulation or brain chemical alteration. Wires run from the brain to a pacemaker in the patient’s chest used to control the impulses. In cases where medication isn’t effective or has become ineffective over time, DBS can treat neurological conditions and movement disorders such as Parkinson’s, essential tremor, OCD, dystonia, epilepsy, Tourette’s, depression, and some headaches. The results of DBS can be extraordinary, as in the case of Andrew Johnson, who was diagnosed with Parkinson’s at age 35.

DBS has also enabled minimally conscious patients and those in vegetative states to communicate temporarily via blinking, eye movement, nodding, and hand/finger movement.

In the Ringworld novels, Niven also writes about something called a “tasp,” a non-surgical device that enables the remote stimulation of a person’s neural pleasure center (provided the person gives consent beforehand, of course). A tasp bears some resemblance to DBS, and an even closer one to Transcranial Magnetic Stimulation (TMS), which stimulates nerve cells in the brain via an electromagnetic coil. TMS has gained traction as a treatment for people who suffer depression that hasn’t responded to more conventional treatments. In Niven’s work, tasps control behavior, such as ensuring that the warlike kzin won’t cause trouble during transport. While remotely controlling someone via tasp is only supposed to happen with one’s consent, “taspers” often use the technology “on someone who isn’t expecting it. That’s where the fun comes in.”


Both DBS and TMS raise ethical questions regarding their use, especially when it comes to unknown or little-known side effects and their implications for patients’ identities. But there haven’t been any reports of DBS or TMS patients jolting themselves into oblivion or states of self-neglect. While studies of the efficacy of these approaches remain varied and ultimately inconclusive, some of the success stories are undeniably dramatic.

So if wireheading isn’t the solution to unhappiness, what is?

David Pearce suggests “permanent stimulation of upregulated mu opioid receptors—without dopamine-driven desire,” an approach that swaps dopamine-driven motivation for opioid signaling. Even so, this might not deliver the high without the crash or the jones for a fix, and it’s likely infeasible in the long run. Ultimately, some transhumanists advocate a more invasive approach: genomic modification.

That’s right. With a little tinkering of the ol’ DNA, humans might all become hyperthymic—perpetually happy regardless of circumstances. But what fun would life be if we had nothing to complain about?

Posted in Could this Happen?, genetics, medicine, neurotechnology, technology