If that concerns you, then there is something worse in store for the coming decades too. In future, with the emerging field of synthetic biology, we will be able to build life forms from scratch, programmed to do useful things for us. But what if these techniques (creating genetically altered viruses or microbes, say) are used by terrorist groups to spread disease? If they got the formulation right, perhaps a trained evildoer could create an airborne Ebola strain that could infect the world in days.
Unlike other doomsday weapons, such as nuclear bombs, biological technology is easily available, and anyone with a makeshift research lab could do global harm. Martin Rees, former president of the Royal Society, has remarked that a million lives might be lost because of a bioterror or “bioerror” event.
“It’s scary as hell,” said Drew Endy of Stanford University, a pioneer in synthetic biology, during an interview with the New Yorker magazine on the potential harm of misapplied technology. “It’s the coolest platform science has ever produced, but the questions it raises are the hardest to answer.”
What is GM? Can it go wrong?
Depending on who you talk to, genetic modification of crops is a panacea for our problems or a Pandora’s box. Some argue that it is crucial in solving the food shortages that will no doubt result from the rising human population and the ravaging effects of climate change on the world. Detractors believe that playing with genes is an untested danger, and that scientists are using the world as their laboratory. Unlike in a lab, though, if things go wrong with their experiments, all of us will suffer.
Roger Beachy, head of the US National Institute of Food and Agriculture, is in the former camp. GM crops, he says, are an important tool in keeping farming sustainable and have already reduced the use of harmful pesticides and herbicides and the “loss of soils because they promote no-till methods of farming. Nevertheless, there is much more that can be done.”
He points out that agriculture and forestry account for approximately 31 percent of global greenhouse gas emissions, more than the 26 percent from the energy sector. “Agriculture is a major source of emissions of methane and nitrous oxides and is responsible for some of the pollution of waterways because of fertilizer runoff from fields. Agriculture needs to do better. We haven’t reached the plateau of global population and may not until 2050 or 2060. In the interim, we must increase food production while reducing greenhouse gas emissions and soil erosion and decrease pollution of waterways. That’s a formidable challenge. With new technologies in seeds and in crop production, it will be possible to reduce the use of chemical fertilizers and the amount of irrigation while maintaining high yields. Better seeds will help, as will improvements in agricultural practices.”
In the UK, similar arguments are bringing GM back to the political and scientific table, a decade after the “Frankenstein foods” debacle, in which environmental groups successfully branded the technology irresponsible and harmful.
* * *
GM CROPS WORLDWIDE
USA 42%
Brazil 16%
Argentina 15%
India 6%
* * *
Today, GM technology is licensed for use in several parts of the world. The US is king when it comes to growing these crops, accounting for almost half of the world’s production in 2009. Brazil is the second-biggest fan, at 16 percent of the world’s production in the same year.
In Uganda, farmers are trying out bananas modified (by the addition of a gene from green peppers) to resist a bacterial disease called banana Xanthomonas wilt, which has destroyed crops across central Africa and costs farmers half a billion dollars every year. The EU is more cautious. In 2010 it approved the cultivation of Amflora, a GM potato that contains a form of starch better suited for industrial use in making paper, adhesives and textiles. Before that, in 1998, officials had granted biotechnology company Monsanto permission to grow Mon 810 corn, which is resistant to the corn borer bug.
Release into the wild
In 2010, a genetically modified crop was found growing in the wild in the US for the first time. The plant, a type of rapeseed, was found in North Dakota, and according to some scientists the discovery highlighted a lack of proper monitoring and control of GM crops.
Cynthia Sagers, an ecologist at the University of Arkansas in Fayetteville, led the researchers who found two types of transgenic canola in the wild: one resistant to Monsanto’s Roundup herbicide (glyphosate), the other to Bayer Crop Science’s Liberty herbicide (glufosinate). She also found plants that were resistant to both chemicals, showing that the GM plants had managed to interbreed. More intriguing still, these wild populations were not confined to the edges of large farms, but were found growing significant distances away.
In 2004, Swiss biotech company Syngenta announced that it had mistakenly labeled and sold the seed for an unapproved GM corn, Bt-10, as the GM corn approved for sale in the US, Bt-11. Both strains contain a gene from the soil bacterium Bacillus thuringiensis, which helps them to create their own pesticides against the corn borer. The Bt-10 strain is a laboratory version that is kept for research purposes, while Bt-11 is licensed for animal feed in the US.
Stories like this justifiably cause worry among consumers, though it is worth pointing out that there is little robust evidence so far that any GM crops are actually harmful when eaten.
In the UK, naturalists worry not only about the safety of the crops per se, but also about the broad-spectrum herbicides that come with them. These are so effective at killing everything other than the protected crop that they might strip out all the minor plants and seeds that farmland animals need to survive. When the populations of tiny bugs and grubs in soil and on plants fall, so do those of skylarks, partridges and corn buntings, among others. In any farmland ecosystem, all the different forms of life compete for light and nutrients. With GM, the farmer has a formidable opportunity to outcompete everything else and use up all the resources for his crop alone.
In 2007, Emma Rosi-Marshall, an ecologist at Loyola University Chicago in Illinois, found that the larvae of caddis flies that fed on Bt corn debris grew only half as fast as those eating unmodified corn. In addition, caddis flies that ate pollen from Bt corn died at twice the rate of those feeding on normal pollen. Rosi-Marshall concluded that widespread planting of Bt corn might have “negative effects” on local wildlife and perhaps “unexpected ecosystem-scale consequences.”
* * *
These herbicides are so effective at killing everything other than the protected crop that they might strip out all the minor plants and seeds that farmland animals need to survive.
* * *
Michael Antoniou, head of the nuclear biology group at Guy’s Hospital in London and once an adviser to the UK government on GM foods, says that the problems with modification center on unintended consequences. “It’s a highly mutagenic process,” he told the Observer newspaper in 2008. “It can cause changes in the genome that are not expected ... These crops that have come along seem to be doing what they claimed they would be doing. The question is what else has been done to the structure of that plant? You might inadvertently generate toxic effects.”
Doug Parr, chief scientist at Greenpeace UK, says that if GM products were to get out into the environment, there would be no containing them. “With the environment you could even create the problem simply by testing them.” The response of some nature groups has been direct action—ripping up plants from test fields.
And there might be a point to this behavior, however unproven the science is. When genes from Bt corn were found in the wild plants of Mexico, experts were puzzled, because all genetically modified corn is illegal in that country. Corn is thought to have originated in Mexico, and the genetic biodiversity there is important. If superweeds and superpests from biotechnology somehow managed to affect the wild plants, the genetic storehouse of this precious crop would be gone forever.
Biotech to order
Accidental leaks of GM crops into the environment are one thing, but there is not much evidence yet that they have done any damage. The technology that underlies GM, though, has been improving in recent decades, to the point where simple modification of DNA is not the only thing possible. Nowadays, scientists can create genes and life forms from scratch.
Craig Venter is the pioneer in this regard. In 2010, he created the world’s first artificial organism, based on a bacterium that causes mastitis in goats. His team produced the genes from chemicals in the laboratory, assembled them into a complete genome and inserted it into an existing bacterium, which then used the synthetic DNA to operate. According to Venter, the technology could be used to program bacteria to make environmentally friendly biofuels or soak up harmful pollution from oil spills or the atmosphere.
Of course, it probably won’t take long for someone to synthesize existing viruses or bacteria too, using publicly available genome information, or to design a new, rapidly expanding bug to which no human or important crop species has any immune resistance. Building and unleashing something like that on to an unsuspecting world would be catastrophic.
* * *
It probably won’t take long for someone to synthesize existing viruses or bacteria too, using publicly available genome information, or to design a new, rapidly expanding bug to which no human or important crop species has any immune resistance.
* * *
Safety first
The answer to the potential danger is not to ban or hide from the new genetic technologies. Regulation and oversight are key, but can they work?
“There is very little about the history of human activities involving living organisms that provides confidence that we can keep new life forms in their place,” says Arthur Caplan, a bioethicist at the University of Pennsylvania. He points out that for hundreds of years, people have been introducing new life forms into places where they create huge problems. “Rabbits, kudzu, starlings, Japanese beetles, snakehead fish, smallpox, rabies and fruit flies are but a short sample of living things that have caused havoc for humanity simply by winding up in places we do not want them to be.”
When it comes to GM technology, scientists could build in failsafes—perhaps genes that prevent the crop from reproducing or that make it require particular compounds to grow properly. Unfortunately, this also leads to questions around control. When biotech company Monsanto did something like this with genetically modified seed containing “terminator” genes, it was accused of enslaving poor farmers, who would have to buy new seeds every year from the multinational company.
Another idea, perhaps, is that synthetic life forms could be engineered to use a different amino acid code from natural organisms, so that they could be recognized in the wild and would be unable to interbreed with their natural counterparts.
Banning the technology outright, though, or applying more moratoriums, is a non-starter. The solutions to any dangers posed by biotechnology will come from biotechnology itself. And rather than being caught unawares, it is surely better to keep our knowledge and capabilities up to speed for tackling any apocalyptic problems that might arise.
Nanotech Disaster
* * *
It is 2087, and there’s been an oil spill off the coast of Alaska. A tanker carrying millions of gallons of crude oil has run aground, threatening the local environment with catastrophe. Fortunately, the authorities have a tried and tested weapon to fight the spill: a flotilla of tiny oil-munching robots that can break down hydrocarbons, rendering the spill harmless.
The machines, each one no wider than a human hair, can create more of themselves as they go along, so that there are always enough of them to deal with any size of oil spill.
But this time, something unexpected happens when the robots are dropped on to the spill. One of them has an error in its programming—instead of eating only hydrocarbons, this robot starts to eat anything with carbon in it. In other words, it sets itself to consume any living thing, along with its meal of oil. It doesn’t take long before everything on Earth is consumed by the proliferating mass of robots. Life, as we know it, is gone.
* * *
Is it just a nightmare?
The end of the world brought about by self-replicating robots was an idea first put forward by Eric Drexler in his 1986 book Engines of Creation.
In the book, Drexler talked of the great possibilities and benefits of examining the world at the nano level. But he also warned of something more sinister. “Plants” with “leaves” no more efficient than today’s solar cells, he said, could outstrip real plants, crowding the biosphere with an inedible foliage. “Tough omnivorous ‘bacteria’ could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.”
The result: a world turned into a featureless “grey goo”, just a mass of these tiny robots rearranging atoms into copies of themselves. Science fiction writers picked up on the idea, most notably Michael Crichton in Prey, in which nanobots run wild. Even Martin Rees, the UK Astronomer Royal and former president of the Royal Society, highlighted them as a potential cause of humankind’s extinction.
Drexler’s nightmare world would not take long to happen. “Imagine such a replicator floating in a bottle of chemicals, making copies of itself ... the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight,” he said. “At the end of ten hours, there are not 36 new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined—if the bottle of chemicals hadn’t run dry long before.”
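Drexler’s numbers are straightforward exponential doubling and are easy to check. The Python sketch below is an illustration only: the 1,000-second doubling time comes from the quote above, but the mass of a single replicator is an assumed round figure, since the passage does not give one.

```python
# Back-of-envelope check of Drexler's replicator arithmetic.
# The 1,000-second doubling time is from the quote above; the mass of a
# single replicator is an assumed round figure (roughly bacterium-sized),
# since the passage gives no number for it.

REPLICATION_TIME_S = 1_000           # seconds per doubling (Drexler's figure)
REPLICATOR_MASS_KG = 1e-15           # assumed mass of one replicator
TONNE_KG = 1_000
EARTH_MASS_KG = 5.97e24

def count_after(seconds: float) -> int:
    """Replicators present after a given time, starting from one."""
    return 2 ** int(seconds // REPLICATION_TIME_S)

def hours_to_reach(target_mass_kg: float) -> float:
    """Hours of doubling needed for the swarm to reach a target mass."""
    doublings = 0
    while 2 ** doublings * REPLICATOR_MASS_KG < target_mass_kg:
        doublings += 1
    return doublings * REPLICATION_TIME_S / 3600

print(f"After ten hours: {count_after(10 * 3600):,} replicators")             # 68,719,476,736
print(f"One tonne after about {hours_to_reach(TONNE_KG):.0f} hours")          # ~17 hours
print(f"Earth's mass after about {hours_to_reach(EARTH_MASS_KG):.0f} hours")  # ~37 hours
```

Under those assumptions the swarm passes a tonne in under a day and the mass of the Earth in well under two, which is exactly the shape of Drexler’s warning; in reality, as he notes, the supply of raw material would run dry long before.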
What is nanotechnology?
Drexler’s vision is worrying stuff. A 2003 editorial in the scientific journal Nature pondered whether nanotechnology was inherently dangerous, arguing that public calls for regulation of this rapidly developing and diverse discipline seemed to imply that it was. In response, the editorial’s authors seemed to think that nanotechnology could become a topic of public unease, “and that the resulting debate will take place in an informational vacuum that will quickly be filled with hot air and hysteria.”
Later that year, the UK’s Royal Society and Royal Academy of Engineering launched an investigation into the possible benefits and risks of nanotechnology, after Prince Charles had approached them with concerns about the technology.
When launching the Royal Society investigation, the UK’s then science minister David Sainsbury acknowledged the difficulty of the task ahead: “Nanotechnology could cover an enormous area. It is a bit like asking a committee when the first computer was designed to say: what is the impact of computers and IT going to be on the world in the future? The ability to predict far ahead is quite limited.”
* * *
In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined.
* * *
Feynman’s challenge
It is worth stepping back at this point and working out what nanotechnology really is. In December 1959, the great physicist Richard Feynman gave a lecture to the American Physical Society titled “There’s Plenty of Room at the Bottom.” He talked about how to fit the entire 24 volumes of the Encyclopedia Britannica on to the head of a pin, and calculated that it would be possible to write all the books in the world into a cube 1/200th of an inch wide, if they were encoded into strings of 1s and 0s, just as in computers. “Computing machines are very large, they fill rooms. Why can’t we make them very small, make them of little wires, little elements—and by little, I mean little.”
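Feynman’s cube is an order-of-magnitude estimate, and the arithmetic behind it can be reconstructed in a few lines. The Python sketch below is illustrative only: the 1/200th-of-an-inch answer is Feynman’s, but the inputs used here (roughly 10^15 bits for all the world’s books, 125 atoms per stored bit, atoms spaced about a quarter of a nanometre apart) are assumed round numbers rather than figures taken from the lecture.

```python
# Rough reconstruction of Feynman's "all the world's books in a tiny cube"
# estimate. These inputs are assumed round figures, not taken from the
# lecture: about 1e15 bits for all books, 125 atoms (a 5 x 5 x 5 block)
# per stored bit, and atoms spaced roughly 0.25 nanometres apart.

TOTAL_BITS = 1e15            # assumed information content of all the world's books
ATOMS_PER_BIT = 125          # assumed: a 5 x 5 x 5 block of atoms encodes one bit
ATOM_SPACING_M = 0.25e-9     # assumed spacing between atoms, ~0.25 nm
INCH_M = 0.0254

total_atoms = TOTAL_BITS * ATOMS_PER_BIT
atoms_per_edge = total_atoms ** (1 / 3)       # cube root: atoms along one edge
edge_m = atoms_per_edge * ATOM_SPACING_M

print(f"Cube edge: about {edge_m * 1e6:.0f} micrometres")     # ~125 micrometres
print(f"That is roughly 1/{INCH_M / edge_m:.0f} of an inch")  # ~1/200 of an inch
```

With those assumptions the cube comes out at roughly 125 micrometres on a side, within a few percent of Feynman’s 1/200th of an inch.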
He finished his lecture with a challenge. “It is my intention,” he said, “to offer a prize of $1,000 to the first guy who makes an operating electric motor which is only 1/64th inch cubed.”
And this is how, in part, nanotechnology got its start. It began as a way of making computers and machines as small as possible, building them by rearranging individual atoms or molecules. One of the consequences of this drive for miniaturization is the fingernail-sized microchip in your computer, with wiring that is only 80 nanometres thick and which contains hundreds of millions of transistors.
Modern nanotechnology has blossomed into a mix of subjects that few could have predicted. It encompasses disparate fields, from medicine to space science to telecommunications, united only by scale. Researchers using the “nano” prefix might all be working on objects that are a few millionths of a millimeter across, but they will all be doing very different things: fashioning materials never before seen in nature, tweaking molecular “machines” found in bacteria or simply investigating the basic physics of what happens at really small scales.
Nanorobots, though, faded as a real research endeavor after the initial burst of interest. The closest anyone came to a miniature robot was the man who ended up winning Feynman’s $1,000 challenge, fellow Caltech scientist Bill McLellan. He spent five months in the early 1960s building a motor that was less than half a millimeter across, its wires only 1/80th of a millimeter wide, thinner than a human hair. It was not quite nanotechnology, merely microtechnology, and it burned out after a few uses.