Biotech Archives - Future of Life Institute
https://futureoflife.org/category/biotech/

Biotechnology
https://futureoflife.org/cause-area/biotechnology/ (17 May 2022)

Modern biotechnology refers to powerful tools kickstarted in the 20th century which can shape and repurpose the properties of living cells, plants and animals. These tools include DNA sequencing, synthetic biology, recombined or ‘recombinant’ DNA, and genome editing.

All these tools being developed to extend and save lives could wind up ending them – whether through unintended consequences or by malicious intent. Ebola, Zika and COVID-19 may have been of natural origin, but that doesn’t mean the next outbreak will be, nor that the right steps are being taken to prevent a lab leak. And while the development, production and stockpiling of bioweapons have been banned since 1972, nations rich and poor retain the resources and know-how to produce them. It may soon even be affordable to ‘print’ deadly proteins or cells at home. Many scientists are calling for a moratorium on forms of gene-editing technology until we have a better idea of what we are dealing with. One type of research, gain-of-function research, is particularly concerning: it studies how pathogens can be enhanced, for example by increasing their transmissibility, virulence or host range. Given the high risks and unclear benefits, many have argued for a ban on gain-of-function research.

DNA sequencing, which involves determining the order of the four building blocks in a given DNA strand, has fallen dramatically in price since the complete human genome was published in 2003. This enables major medical breakthroughs, such as real-time epidemic tracking or designing new anti-cancer drugs targeted at specific DNA mutations. However, genetic analysis of this kind also aids the waging of biowarfare; meanwhile, governments and corporations are increasingly collecting individuals’ and families’ DNA information, threatening privacy and freedom.

Synthetic biology – that is, building bacterial genomes from scratch and injecting them into living cells – also surges ahead. Goals here include making custom cells for medicines, human cells immune to all viruses, and DNA-based data storage. Professor George Church has even proposed ‘de-extincting’ old friends like the woolly mammoth or the Neanderthal. These possibilities raise ethical questions, and risk democratizing the ability to create bioweapons, among other harms. A virologist has already managed to produce the horsepox virus (a relative of the virus that causes smallpox) with DNA ordered online, for instance.

Recombinant DNA, so-called ‘rDNA’, impacts much of modern society, from biomedical research and best-selling drugs, to the clothes you wear. With this tool, researchers can remove a useful protein from its original context and mass-reproduce it, or transplant it into new species. This has propelled many medical advances, including a growing number of vaccines. The risk is that recombinant viruses, or rDNA derived from drug-resistant bacteria, could escape from the lab and infect the public. rDNA also offers yet another way for rogue scientists or terrorists to produce bioweapons.

Finally, there’s genome editing. The scientists behind one gene-editing technology called CRISPR/Cas9 – also known as ‘DNA scissors’ – believe it may be the key to curing genetic diseases, mutation-based diseases like cancer, and even viral infections. Inserting Cas9 into a patient’s cells could fix the mutations that cause disease. But trials of CRISPR/Cas9 in human cells have produced disturbing results, with mutations showing up in parts of the genome that shouldn’t have been targeted for DNA changes. What’s more, genome editing in reproductive cells could lead to dangerous mutations passing down to future generations. Some fear unethical uses, such as parents choosing their children’s traits (the ‘designer baby’ idea) – though no straightforward links between one’s genes and traits like intelligence or appearance have yet been found. A gene drive, despite possibly minimizing the spread of certain diseases, could also create immense harm, as it is intended to kill or modify an entire species.

We must ensure that biotechnology remains associated with life-saving new developments, like the rapid COVID-19 vaccine successes, rather than terrifying weapons which have the potential to blur the lines between war and peace, or change what it means to be human.

Gene Drives: Assessing the Benefits & Risks
https://futureoflife.org/recent-news/gene-drives-assessing-the-benefits-risks/ (5 Dec 2019)
By Jolene Creighton

Most people seem to understand that malaria is a pressing problem, one that continues to menace a number of areas around the world. However, most would likely be shocked to learn the true scale of the tragedy. To illustrate this point, in 2017, more than 200 million people were diagnosed with malaria. By the year’s close, nearly half a million people had died of the disease. And these are the statistics for just one year. Over the course of the 20th century, researchers estimate that malaria claimed somewhere between 150 million and 300 million lives. Even the lowest figure exceeds the combined death tolls of World War I, World War II, the Vietnam War, and the Korean War.

Although its pace has slowed in recent years, according to the World Health Organization, malaria remains one of the leading causes of death in children under five. However, there is new hope, and it comes in the form of CRISPR gene drives. 

With this technology, many researchers believe humanity could finally eradicate malaria, saving millions of lives and, according to the World Health Organization, trillions of dollars in associated healthcare costs. The challenge isn’t so much a technical one. Ethan Bier, a Distinguished Professor of Cell and Developmental Biology at UC San Diego, notes that if scientists needed to deploy CRISPR gene drives in the near future, they could reliably do so.

However, there’s a hitch. In order to eradicate malaria, we would need to use anti-malaria gene drives to target three specific species (maybe more) and force them into extinction. This would be one of the most audacious attempts by humans to engineer the planet’s ecosystem, a realm where humans already have a checkered past. Such a use sounds highly controversial, and of course, it is. 

However, regardless of whether the technology is being deployed to try and save a species or to try and force it into extinction, a number of scientists are wary. Gene drives will permanently alter an entire population. In many cases, there is no going back. If scientists fail to anticipate properly all of the effects and consequences, the impact on a particular ecological habitat — and the world at large — could be dramatic.

Rewriting Organisms: Understanding CRISPR

CRISPR/Cas9 editing technology enables scientists to alter DNA with enormous precision.

This degree of genetic targeting is only possible because of the unification of two distinct gene editing technologies: CRISPR/Cas9 and gene drives. On their own, each of these tools is powerful enough to alter a gene pool dramatically. Together, they can erase that pool entirely.

The first part of this equation is CRISPR/Cas9. More commonly known as “CRISPR,” this technology is most easily understood as a pair of molecular scissors. CRISPR, which stands for “Clustered Regularly Interspaced Short Palindromic Repeats,” was adapted from a naturally occurring defense system found in bacteria. 

When a virus invades a host bacterium and is defeated, the bacterium is able to capture the virus’ genetic material and merge snippets of it into its genome. That genetic material is then used to make guide RNAs, which target and bind to complementary genetic sequences. When a new virus invades, a guide RNA finds the complementary sequence on the attacking virus and attaches to that matching portion of its genome. From there, an enzyme known as Cas9 cuts it apart, and the virus is destroyed. 

Lab-made CRISPR allows humans to accomplish much the same thing — cutting any region of the genome with relatively high precision and accuracy, often disabling the cut sequence in the process. However, scientists have the ability to go a step further than nature. After a cut is made using CRISPR, scientists can use the cell’s own repair machinery to add or replace an existing segment of DNA with a customized DNA sequence, i.e. a customized gene. 
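
To make the find-cut-repair logic above concrete, here is a minimal, purely illustrative Python sketch. It is not a bioinformatics tool: the genome, guide, and insert sequences are invented, and real guide RNAs bind the complementary strand and require a PAM site, details this toy model ignores.

```python
# Toy model of CRISPR/Cas9 editing: a guide sequence locates its matching site
# in a genome string, Cas9 "cuts" there, and the cell's repair step is mimicked
# by splicing in a custom sequence. All sequences are invented examples.

def find_target(genome: str, guide: str) -> int:
    """Return the index where the guide's target site begins, or -1 if absent."""
    return genome.find(guide)

def cut_and_insert(genome: str, guide: str, insert: str) -> str:
    """Cut at the guide's target site and splice a custom sequence in."""
    site = find_target(genome, guide)
    if site == -1:
        return genome  # no match: genome left unchanged
    cut_point = site + len(guide) // 2  # cut roughly mid-target
    return genome[:cut_point] + insert + genome[cut_point:]

if __name__ == "__main__":
    genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # made-up sequence
    guide = "GGGCCGCTGA"                               # made-up 10-base target
    payload = "TTTT"                                   # custom insert
    print("before:", genome)
    print("after: ", cut_and_insert(genome, guide, payload))
```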

If genetic changes are made in somatic cells (the non-reproductive cells of a living organism) — a process known as “somatic gene editing” — it only affects the organism in which the genetic changes were made. However, if the genetic changes are made in the germline (the sequence of cells that develop into eggs and sperm) — a process known as “germline editing” — then the edited gene can spread to the organism’s offspring. This means that the synthetic changes — for good or bad — could permanently enter the gene pool. 

But by coupling CRISPR with gene drives, scientists can do more than spread a gene to the next generation — they can force it through an entire species.

Rewriting Species: Understanding Gene Drives

Most species on our planet carry two copies of each of their genes. During the reproductive cycle, one of these two copies is selected to be passed on to the next generation. Because this selection process occurs randomly in nature, there’s about a 50/50 chance that any given copy will be passed down. 

Gene drives change those odds by increasing the probability that a specific gene (or suite of genes) will be inherited. Surprisingly, scientists have known about gene drive systems since the late 1800s. They occur naturally in the wild thanks to something known as “selfish genetic elements” (“selfish genes”). Unlike most genes, which wait patiently for nature to randomly select them for propagation, selfish genes use a variety of mechanisms to manipulate the reproductive process and ensure that they get passed down to more than 50% of offspring. 

One way this can be achieved is through segregation distorters, which bias the transmission process so that one gene sequence is passed on more frequently than its counterpart. Transposable elements, on the other hand, allow genes to copy themselves into additional locations in the genome. In both instances, the selfish genes use different mechanisms to increase their presence in the genome and, in so doing, improve their odds of being passed down. 
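
Whatever the underlying mechanism, the effect of biased inheritance can be illustrated with a toy simulation. The sketch below is deliberately oversimplified: individuals are treated as simple carriers or non-carriers, and fitness costs, diploid genetics, and resistance alleles are all ignored. It only shows how a transmission probability of roughly 95% lets a gene sweep through a population while the Mendelian 50% leaves its frequency essentially flat.

```python
import random

# Illustrative only: compare how a gene spreads when each carrier parent passes
# it on with Mendelian probability (0.5) versus a drive-biased probability
# (assumed to be 0.95 here). Fitness costs and resistance are ignored.

def simulate(transmission_prob, pop_size=10_000, initial_fraction=0.01,
             generations=15, seed=1):
    random.seed(seed)
    fraction = initial_fraction
    history = [fraction]
    for _ in range(generations):
        carriers = 0
        for _ in range(pop_size):
            # each parent is a carrier with probability `fraction`, and a
            # carrier parent passes the gene on with `transmission_prob`
            from_mother = random.random() < fraction * transmission_prob
            from_father = random.random() < fraction * transmission_prob
            if from_mother or from_father:
                carriers += 1
        fraction = carriers / pop_size
        history.append(fraction)
    return history

if __name__ == "__main__":
    mendelian = simulate(transmission_prob=0.50)
    gene_drive = simulate(transmission_prob=0.95)
    for gen, (m, d) in enumerate(zip(mendelian, gene_drive)):
        print(f"generation {gen:2d}: Mendelian {m:6.1%}   gene drive {d:6.1%}")
```

The only difference between the two runs is the transmission probability, which is the essence of what a drive changes.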

In the 1960s, scientists first realized that humanity might be able to use a gene drive to dictate the genetic future of a species. Specifically, biologists George Craig, William Hickey, and Robert Vandehey argued that a mass release of male mosquitoes with a gene drive that increased the number of male offspring could reduce the number of females, the sex that transmits malaria, “below the level required for efficient disease transmission.” In other words, the team argued that malaria could be eradicated by using a male-favoring gene drive to push female mosquitoes out of the population. 

However, gene editing technologies hadn’t yet been invented. Consequently, gathering a mass of mosquitoes with this specific gene drive was impossible, as scientists were forced to rely on time-consuming and imprecise breeding techniques. 

Then, in the 1970s, Paul Berg and his colleagues created the first molecule made of DNA from two different species, and laboratory-based genetic engineering was born. And not too long after that, in 1992, Margaret Kidwell and José Ribeiro proposed attaching a malaria-resistance gene to selfish genetic elements in order to drive it through a mosquito population, making the population resistant to the parasite. 

But despite the theoretical ingenuity of these designs, when it came to deploying them in reality, progress was elusive. Gene editing tools were still quite crude. As a result, they caused a number of off-target edits, where portions of the genome were cut unintentionally and segments of DNA were added in the wrong place. Then CRISPR came along in 2012 and changed everything, making gene editing comparatively fast, reliable, and precise. 

It didn’t take scientists long to realize that this new technology could be used to create a remarkably powerful, remarkably precise human-made selfish gene. In 2014, Kevin M. Esvelt, Andrea L. Smidler, Flaminia Catteruccia, and George M. Church published their landmark paper outlining this process. In their work, they noted that by coupling gene drive systems with CRISPR, researchers could target specific segments of a genome, insert a gene of their choice, and ultimately ensure that the gene would make its way into almost 100% of the offspring. 

Thus, CRISPR gene drives were born, and it is at this juncture that humanity may have acquired the ability to rewrite — and even erase — entire species.

The Making of a Malaria-Free World

To eradicate malaria, at least three species of mosquitos would have to be forced into extinction.

Things move fast in the world of genetic engineering. Esvelt and his team only outlined the process through which scientists could create CRISPR gene drives in 2014, and researchers have had working prototypes for nearly as long. 

In December of 2015, scientists published a paper announcing the creation of a CRISPR gene drive that made Anopheles stephensi, one of the primary mosquito species responsible for the spread of malaria, resistant to the disease. Notably, the gene drive was just as effective as earlier pioneers had predicted: although the team started with just two genetically edited males, after only two generations of cross-breeding they had over 3,800 third-generation mosquitoes, 99.5% of which expressed the genes indicating that they had acquired the malaria-resistance genes. 
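
A rough back-of-the-envelope model shows how a figure like 99.5% can arise. If a ‘homing’ drive converts the unmodified copy of the gene in a carrier’s germline with efficiency e, the carrier passes the drive to roughly (1 + e)/2 of its offspring, so an efficiency of about 99% yields about 99.5% carriers. This is illustrative arithmetic, not the study’s actual analysis:

```python
# Back-of-the-envelope model of a homing gene drive (illustrative only): a
# heterozygous carrier passes the drive to half its offspring by Mendelian
# inheritance, plus roughly half more when homing converts the other
# chromosome with efficiency e.

def expected_carrier_fraction(e: float) -> float:
    """Approximate fraction of a carrier parent's offspring inheriting the drive."""
    return (1 + e) / 2

for e in (0.50, 0.90, 0.99):
    print(f"homing efficiency {e:.0%} -> "
          f"about {expected_carrier_fraction(e):.1%} of offspring carry the drive")
```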

However, in wild populations, it’s likely that the malaria parasite would eventually evolve resistance to the anti-malaria genes the drive spreads. Thus, other teams have focused their efforts not on malaria resistance, but on making the mosquitoes themselves extinct. As a side note, it must be stressed that no one was (or is) suggesting that we should exterminate all mosquitoes to end malaria. There are over 3,000 mosquito species, but only 30 to 40 of them transmit the disease, and scientists think that we could eradicate malaria by targeting just three of them. 

In September of 2018, humanity took a major step towards realizing this vision, when scientists at Imperial College London published a paper announcing that one of their CRISPR gene drives had wiped out an entire population of lab-bred mosquitoes in less than 11 generations. If this drive were released into the wild, the team predicted that it could propel the affected species into extinction in just one year. 

For their work, the team focused on Anopheles gambiae and targeted genes that code for proteins that play an important role in determining an organism’s sexual characteristics. By altering these genes, the team was able to create female mosquitoes that were infertile. What’s more, the drive seems to be resistance-proof. 

There is still some technical work to be done before this particular CRISPR gene drive can be deployed in the wild. For starters, the team needs to verify that it is, in fact, resistance-proof. The results also have to be replicated in the same conditions in which Anopheles mosquitoes are typically found — conditions that mimic tropical locations across sub-Saharan Africa. Yet the researchers are making rapid progress, and a CRISPR gene drive that can end malaria may soon be a reality. 

Moving Beyond Malaria

Aside from eradicating malaria, one of the most promising applications of CRISPR gene drives involves combating invasive species. 

For example, in Australia, the cane toad has been causing an ecological crisis since the 1930s. Originating in the Americas, the cane toad was introduced to Australia in 1935 by the Bureau of Sugar Experiment Stations in an attempt to control beetle populations that were attacking sugar cane crops. However, the cane toad has no natural predators in the area, and so has multiplied at an alarming rate. 

Since its 1935 introduction to Australia, the poisonous cane toad has decimated populations of native species that attempt to prey on it. A gene drive could be used to eliminate its toxicity.

Although only 3,000 were initially released, scientists estimate that the cane toad population in Australia is currently over 200 million. For decades, the toads have been killing a number of native birds, snakes, and frogs that prey on them and inadvertently ingest their lethal toxin; the population of one monitor lizard species dropped by 90% after the cane toad spread to its area. 

However, by genetically modifying the cane toads to keep them from producing these toxins, scientists believe they might be able to give native species a fighting chance. And because the CRISPR gene drive would only target the cane toad population, it may actually be safer than traditional pest control methods that involve poisons, as these chemicals impact a multitude of species. 

Research indicates that CRISPR gene drives could also be used to target a host of other invasive pests. In January of 2019, scientists published a paper demonstrating the first concrete proof that the technology also works in mammals — specifically, lab mice. 

Another use case involves deploying CRISPR gene drives to alter threatened or endangered organisms in order to better equip them for survival. For instance, a number of amphibian species are in decline because of the chytrid fungus, which causes a skin disease that is often lethal. Esvelt and his team note that CRISPR gene drives could be used to make organisms resistant to this fungal infection. Currently, there is no resistance mechanism for the fungus, so this specific use case is just a theoretical application. However, if developed, it could be deployed to save many species from extinction. 

The Harvard Wyss Institute suggests that CRISPR gene drives could also be used “to pave the way toward sustainable agriculture.” Specifically, the technology could be used to reverse pesticide resistance in insects that attack crops, or it could be used to reverse herbicide resistance in weeds.     

Yet, CRISPR gene drives are powerless when it comes to some of humanity’s greatest adversaries. 

Because they spread through sexual reproduction, gene drives can’t be used to alter species that reproduce asexually, meaning they can’t be used in bacteria or viruses. Gene drives also don’t have practical applications in humans or other organisms with long generation times, as it would take several centuries for any impact to be noticeable. The Harvard Wyss Institute notes that, at these timescales, someone could easily create a reversal drive to remove the trait, and any number of other unanticipated events could prevent the drive from propagating. 

That’s not to say that reverse gene drives should be considered a safety net in case forward gene drives are weaponized or found to be dangerous. Rather, the primary point is to highlight the difficulty in using CRISPR gene drives to spread a gene throughout species with long generational and gestational periods. 
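
That generation-time point is easy to put in rough numbers. The sketch below uses assumed, order-of-magnitude values (about 20 generations for a drive to sweep, and typical generation lengths for each species) purely to illustrate why insects are measured in months or years while humans would be measured in centuries.

```python
# Rough illustration of why generation time dominates how fast a gene drive
# could spread. Every number here is an assumed, order-of-magnitude value.

GENERATIONS_TO_SPREAD = 20  # assumed number of generations for a drive to sweep

generation_time_years = {
    "mosquito": 3 / 52,   # assume ~3 weeks per generation
    "mouse": 10 / 52,     # assume ~10 weeks per generation
    "human": 25,          # assume ~25 years per generation
}

for species, years in generation_time_years.items():
    total_years = GENERATIONS_TO_SPREAD * years
    print(f"{species:8s}: ~{total_years:,.1f} years for {GENERATIONS_TO_SPREAD} generations")
```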

But, as noted above, when it comes to species that have short reproductive cycles, the impact could be profound in extremely short order. With this in mind — although the work in amphibian, mammal, and plant populations is promising — the general scientific consensus is that the best applications for CRISPR gene drives likely involve insects. 

Entering the Unknown

Before scientists introduce or remove a species from a habitat, they conduct research in order to understand the role that it plays within the ecosystem. This helps them better determine what the overall outcome will be, and how other individual organisms will be impacted. 

According to Matan Shelomi, an entomologist who specializes in insect microbiology, scientists haven’t found any organisms that will suffer if three mosquito species are driven into extinction. Shelomi notes that, although several varieties of fish, amphibians, and insects eat mosquito larvae, they don’t rely on the larvae to survive; in fact, most of the organisms that have been studied prefer other food sources, and no known species live on mosquitoes alone. The same, he argues, can be said of adult mosquitoes. While a number of birds do consume mature mosquitoes, none rely on them as a primary food source. 

Shelomi also notes that mosquitoes don’t play a critical role in pollination — or any other facet of the environment that scientists have examined. As a result, he says they are not a keystone species: “No ecosystem depends on any mosquito to the point that it would collapse if they were to disappear.” 

At least, not as far as we are aware. 

Because CRISPR gene drives cause permanent changes to a species, virologist Jonathan Latham notes that it is critical to get things right the first time: “They are ‘products’ that will likely not be able to be recalled, so any approval decision point must be presumed to be final and irreversible.” However, we have no way of knowing if scientists have properly anticipated every eventuality. “We certainly do not know all the myriad ways all mosquitoes interact with all life forms in their environment, and there may be something we are overlooking,” Shelomi admits. Due to these unknown unknowns, and the near irreversibility of CRISPR gene drives, Latham argues that they should never be deployed.

Every intervention has consequences. To this end, the important thing is to be as sure as possible that the potential rewards outweigh the risks. For now, when it comes to anti-malaria CRISPR gene drives, this critical question remains unanswered. Yet the applications for CRISPR gene drives extend far beyond mosquitoes, making it all the more important for scientists to ensure that their research is robust and doesn’t cause harm to humanity or to Earth’s ecosystems. 

Risky Business, Designed for Safety

Although some development is still needed before scientists would be ready to release a CRISPR gene drive into a wild insect population, the most pressing issues that remain are of a regulatory and ethical nature. These include: 

Limiting Deployment and Obtaining Consent 

For starters, who gets to decide whether or not scientists should eradicate a species? The answer most commonly given by scientists and politicians is that the individuals who will be impacted should cast the final vote. However, substantial problems arise when it comes to limiting deployment and determining the degree to which informed consent is necessary. 

Todd Kuiken, a Senior Research Scholar with the Genetic Engineering and Society Center at NC State University and a member of the U.N.’s technical expert group for the Convention on Biological Diversity, notes that, “in the past, these kinds of applications or introductions were mostly controlled in terms of where they were supposed to go.” Gene drives are different, he argues, because they are “designed to move, and that’s really a new concept in terms of environmental policy. That’s what makes them really interesting from a policy perspective.” 

The highly globalized nature of the modern world would make it near-impossible to geographically contain the effects of a gene drive.

For example, if scientists release mosquitoes in a town in India that has approved the work, there is no practical way to contain the release to this single location. The mosquitoes will travel to other towns, other countries, and potentially even other continents.

The problem isn’t much easier to address even if the release is planned on a remote island. The nature of modern life, which sees people and goods continually traveling across the globe, makes it extremely difficult to prevent cross-contamination. A CRISPR gene drive deployed against an invasive species on an island could still decimate populations in other places — even places where the species is native and beneficial. 

The release of a single engineered gene drive could, potentially, impact every human on Earth. Thus, in order to obtain the informed consent from all affected parties, scientists would effectively need to ensure that they had permission from everyone on the planet. 

To help address these issues of “free, prior, and informed consent,” Kuiken notes that scientists and policymakers must establish a consensus on the following: 

  1.   What communities, organizations, and groups should be part of the decision-making process? 
  2.   How far out do you go to obtain informed consent — how many concentric circles past the introduction point do you need to move? 
  3.   At which decision stage of the project should these different groups or potentially impacted communities be involved? 

Of course, in order for individuals to effectively participate in discussions about CRISPR gene drives, they will have to know what the technology is. This also poses a problem: “Generally speaking, the majority of the public probably hasn’t even heard of it,” Kuiken says. 

There are also questions about how to verify that an individual is actually informed enough to give consent. “What does it really mean to get approval?” Kuiken asks, noting, “the real question we need to start asking ourselves is ‘what do we mean by [informed consent]?’” 

Because research into this area is already so advanced, Kuiken notes that there’s an immediate need for a broad range of endeavors aimed at improving individuals’ knowledge of, and interest in, CRISPR gene drives. 

Expert Inclusion 

And it’s not just the public that needs schooling. When it comes to scientists’ understanding of the technology, there are also serious and significant gaps. The degree and depth of these gaps, Kuiken is quick to point out, vary dramatically from field to field. While most geneticists are at least vaguely familiar with CRISPR gene drives, some key disciplines are still in the dark: one of the key findings of this year’s upcoming IUCN (International Union for Conservation of Nature) report is that “the conservation science community is not well aware of gene drives at all.” 

“What concerns me is that a lot of the gene drive developers are not ecologists. My understanding is that they have very little training, or even zero training, when it comes to environmental interactions, environmental science, and ecology,” Kuiken says. “So, you have people developing systems that are being deliberately designed to be introduced to an environment or an ecosystem who don’t have the background to understand what all those ecological interactions might be.” 

To this end, scientists working on gene drive technologies must be brought into conversations with ecologists. 

Assessing the Impact 

But even if scientists work together under the best conditions, the teams will still face monumental difficulties when trying to assess the impact and significance of a particular gene drive. 

To begin with, there are issues with funding allocation. “The research dollars are not balanced correctly in terms of the development of the gene drive versus what the environmental implications will be,” Kuiken says. While he notes that this is typically how research funds are structured — environmental concerns come in last, if they come at all — CRISPR gene drives are fundamentally about the environment and ecology. As such, the funding issues in this specific use case are troubling. 

Alterations made to a single species can have a detrimental impact on an entire ecosystem.

Yet, if proper funding were secured, it would still be difficult to guarantee that a drive was safe. Even a small gap in our understanding of a habitat could result in a drive being released into a species that has a critical ecological function in its local environment. As with the cane toad in Australia, this type of release could cause an environmental catastrophe and irreversibly damage an ecosystem.

One way to help prevent adverse ecological impacts is to gauge the effect through daisy-chain gene drives. These are self-limiting drives that grow weaker and weaker and die off after a few generations, allowing researchers to effectively measure the overall impact while restricting the gene drive’s spread. If such tests determine that there are no unfavorable effects, a more lasting drive can subsequently be released. 

Kill-switches offer another potential solution. The Defense Advanced Research Projects Agency (DARPA) recently announced that it was allocating money to fund research into anti-CRISPR proteins. These could be used to prevent the expression of certain genes and thus counter the impact of a gene drive that has gone rogue or was released maliciously. 

Similarly, scientists from North Carolina State University’s Genetic Engineering and Society Center note that it may be beneficial to establish a regulatory framework requiring immunizing drives, which spread resistance to a specific gene drive, to be developed alongside drives that are intended for release. These could be used to immunize related species that aren’t being targeted, or kept at the ready in case of any unanticipated occurrences. 

An Uncertain Future

But even if scientists do everything right, and even if researchers are able to verify that CRISPR gene drives are 100% safe, it doesn’t mean they will be deployed. “You can move yourself far in terms of general scientific literacy around gene drives, but people’s acceptance changes when it potentially has a direct impact on them,” Kuiken explains. 

To support his claims, Kuiken points to the Oxitec mosquitoes in Florida.

Here, teams were hoping to release male Aedes aegypti mosquitoes carrying a “self-limiting” gene. These are akin to, but distinct from, gene drives. When these edited males mate with wild females, the offspring inherit a copy of a gene that prevents them from surviving to adulthood. Since they don’t survive, they can’t reproduce, and there is a reduction in the wild population. 
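
A toy model helps show why a self-limiting gene suppresses a population only while releases continue. In the sketch below, every mating between a wild female and a released male produces no surviving offspring; the population size, release numbers, and growth factor are invented for illustration and are not Oxitec's figures.

```python
# Toy model of a self-limiting release (in the spirit of the Oxitec approach,
# but with invented numbers): offspring fathered by released males do not
# survive to adulthood, so only matings between wild males and wild females
# contribute to the next generation.

def simulate(wild_pop=100_000, released_males_per_gen=200_000,
             growth_per_gen=1.0, generations=6):
    history = [wild_pop]
    for _ in range(generations):
        wild_males = wild_pop / 2
        # fraction of wild females that happen to mate with a wild male
        wild_mating_fraction = wild_males / (wild_males + released_males_per_gen)
        wild_pop = int(wild_pop * growth_per_gen * wild_mating_fraction)
        history.append(wild_pop)
    return history

if __name__ == "__main__":
    for gen, n in enumerate(simulate()):
        print(f"generation {gen}: about {n:,} wild mosquitoes")
```

In this toy model the suppression fades once releases stop, which is exactly what distinguishes a self-limiting gene from a gene drive.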

After working with local communities, Oxitec put the release up for a vote. “The vote count showed that, generally speaking, if you tallied up the whole area of South Florida, it was about a 60 to 70 percent approval. People said, ‘yeah, this is a really good idea. We should do this,’” Kuiken said. “But when you focused in on the areas where they were actually going to release the mosquitoes, it was basically flipped. It was a classic ‘not in my backyard’ scenario.”

That fear, especially when it comes to CRISPR gene drives, isn’t really too hard to comprehend. Even if every scientific analysis showed that the benefits of these drives outweighed all the various drawbacks, there would still be the unknown unknowns. 

Researchers can’t possibly account for how every single species — all the countless plants, insects, and as yet undiscovered deep sea creatures — will be impacted by a change we make to an organism. So unless we develop unique and unprecedented scientific protocols, no matter how much research we do, the decision to use or not use CRISPR gene drives will have to be made without all the evidence. 

Dr. Matthew Meselson Wins 2019 Future of Life Award
https://futureoflife.org/recent-news/dr-matthew-meselson-wins-2019-future-of-life-award/ (9 Apr 2019)

On April 9th, Dr. Matthew Meselson received the $50,000 Future of Life Award at a ceremony at the University of Colorado Boulder’s Conference on World Affairs. Dr. Meselson was a driving force behind the 1972 Biological Weapons Convention, an international ban that has prevented one of the most inhumane forms of warfare known to humanity. April 9th marked the eve of the Convention’s 47th anniversary.

Meselson’s long career is studded with highlights: proving Watson and Crick’s hypothesis on DNA structure, solving the Sverdlovsk Anthrax mystery, ending the use of Agent Orange in Vietnam. But it is above all his work on biological weapons that makes him an international hero.

“Through his work in the US and internationally, Matt Meselson was one of the key forefathers of the 1972 Biological Weapons Convention,” said Daniel Feakes, Chief of the Biological Weapons Convention Implementation Support Unit. “The treaty bans biological weapons and today has 182 member states. He has continued to be a guardian of the BWC ever since. His seminal warning in 2000 about the potential for the hostile exploitation of biology foreshadowed many of the technological advances we are now witnessing in the life sciences and responses which have been adopted since.”

Meselson became interested in biological weapons during the 60s, while employed with the U.S. Arms Control and Disarmament Agency. It was on a tour of Fort Detrick, where the U.S. was then manufacturing anthrax, that he learned the motivation for developing biological weapons: they were cheaper than nuclear weapons. Meselson was struck, he says, by the illogic of this — it would be an obvious national security risk to decrease the production cost of WMDs.


Do you know someone deserving of the Future of Life Award? If so, please consider submitting their name to our Unsung Hero Search page. If we decide to give the award to your nominee, you will receive a $3,000 prize from FLI for your contribution.

The use of biological weapons was already prohibited by the 1925 Geneva Protocol, an international treaty that the U.S. had never ratified. So Meselson wrote a paper, “The United States and the Geneva Protocol,” outlining why it should do so. Meselson knew Henry Kissinger, who passed his paper along to President Nixon, and by the end of 1969 Nixon renounced biological weapons.

Next came the question of toxins — poisons derived from living organisms. Some of Nixon’s advisors believed that the U.S. should renounce the use of naturally derived toxins, but retain the right to use artificial versions of the same substances. It was another of Meselson’s papers, “What Policy for Toxins,” that led Nixon to reject this arbitrary distinction and to renounce the use of all toxin weapons.

On Meselson’s advice, Nixon had resubmitted the Geneva Protocol to the Senate for approval. But he also went beyond the terms of the Protocol — which only ban the use of biological weapons — to renounce offensive biological research itself. Stockpiles of offensive biological substances, like the anthrax that Meselson had discovered at Fort Detrick, were destroyed.

Once the U.S. adopted this more stringent policy, Meselson turned his attention to the global stage. He and his peers wanted an international agreement stronger than the Geneva Protocol, one that would ban stockpiling and offensive research in addition to use and would provide for a verification system. From their efforts came the Biological Weapons Convention, which was signed in 1972 and is still in effect today.

“Thanks in significant part to Professor Matthew Meselson’s tireless work, the world came together and banned biological weapons, ensuring that the ever more powerful science of biology helps rather than harms humankind. For this, he deserves humanity’s profound gratitude,” said former UN Secretary-General Ban Ki-Moon.

Meselson has said that biological warfare “could erase the distinction between war and peace.” Other forms of war have a beginning and an end — it’s clear what is warfare and what is not. Biological warfare would be different: “You don’t know what’s happening, or you know it’s happening but it’s always happening.”

And the consequences of biological warfare can be greater, even, than mass destruction; attacks on DNA could fundamentally alter humankind. FLI honors Matthew Meselson for his efforts to protect not only human life but also the very definition of humanity.

Said Astronomer Royal Lord Martin Rees, “Matt Meselson is a great scientist — and one of very few who have been deeply committed to making the world safe from biological threats. This will become a challenge as important as the control of nuclear weapons — and much more challenging and intractable. His sustained and dedicated efforts fully deserve wider acclaim.”

“Today biotech is a force for good in the world, associated with saving rather than taking lives, because Matthew Meselson helped draw a clear red line between acceptable and unacceptable uses of biology,” added MIT Professor and FLI President Max Tegmark. “This is an inspiration for those who want to draw a similar red line between acceptable and unacceptable uses of artificial intelligence and ban lethal autonomous weapons.”

To learn more about Matthew Meselson, listen to FLI’s two-part podcast featuring him in conversation with Ariel Conn and Max Tegmark. In Part One, Meselson describes how he helped prove Watson and Crick’s hypothesis of DNA structure and recounts the efforts he undertook to get biological weapons banned. Part Two focuses on three major incidents in the history of biological weapons and the role played by Meselson in resolving them.

The Future of Life Award is a prize awarded by the Future of Life Institute for a heroic act that has greatly benefited humankind, done despite personal risk and without being rewarded at the time. This prize was established to help set the precedent that actions benefiting future generations will be rewarded by those generations. The inaugural Future of Life Award was given to the family of Vasili Arkhipov in 2017 for single-handedly preventing a Soviet nuclear attack against the US in 1962, and the 2nd Future of Life Award was given to the family of Stanislav Petrov for preventing a false-alarm nuclear war in 1983.

Benefits & Risks of Biotechnology
https://futureoflife.org/biotech/benefits-risks-biotechnology/ (14 Nov 2018)

“This is a whole new era where we’re moving beyond little edits on single genes to being able to write whatever we want throughout the genome.”

-George Church, Professor of Genetics at Harvard Medical School

What is biotechnology?

How are scientists putting nature’s machinery to use for the good of humanity, and how could things go wrong?

Biotechnology is nearly as old as humanity itself. The food you eat and the pets you love? You can thank our distant ancestors for kickstarting the agricultural revolution, using artificial selection for crops, livestock, and other domesticated animals. When Edward Jenner invented vaccines and when Alexander Fleming discovered antibiotics, they were harnessing the power of biotechnology. And, of course, modern civilization would hardly be imaginable without the fermentation processes that gave us beer, wine, and cheese!

When he coined the term in 1919, the agriculturalist Karl Ereky described ‘biotechnology’ as “all lines of work by which products are produced from raw materials with the aid of living things.” In modern biotechnology, researchers modify DNA and proteins to shape the capabilities of living cells, plants, and animals into something useful for humans. Biotechnologists do this by sequencing, or reading, the DNA found in nature, and then manipulating it in a test tube – or, more recently, inside of living cells.

In fact, the most exciting biotechnology advances of recent times are occurring at the microscopic level (and smaller!) within the membranes of cells. After decades of basic research into decoding the chemical and genetic makeup of cells, biologists in the mid-20th century launched what would become a multi-decade flurry of research and breakthroughs. Their work has brought us the powerful cellular tools at biotechnologists’ disposal today. In the coming decades, scientists will use the tools of biotechnology to manipulate cells with increasing control, from precision editing of DNA to synthesizing entire genomes from their basic chemical building blocks. These cells could go on to become bomb-sniffing plants, miracle cancer drugs, or ‘de-extincted’ woolly mammoths. And biotechnology may be a crucial ally in the fight against climate change.

But rewriting the blueprints of life carries an enormous risk. To begin with, the same technology being used to extend our lives could instead be used to end them. While researchers might see the engineering of a supercharged flu virus as a perfectly reasonable way to better understand and thus fight the flu, the public might see the drawbacks as equally obvious: the virus could escape, or someone could weaponize the research. And the advanced genetic tools that some are considering for mosquito control could have unforeseen effects, possibly leading to environmental damage. The most sophisticated biotechnology may be no match for Murphy’s Law.

While the risks of biotechnology have been fretted over for decades, the increasing pace of progress – from low cost DNA sequencing to rapid gene synthesis to precision genome editing – suggests biotechnology is entering a new realm of maturity regarding both beneficial applications and more worrisome risks. Adding to concerns, DIY scientists are increasingly taking biotech tools outside of the lab. For now, many of the benefits of biotechnology are concrete while many of the risks remain hypotheticals, but it is better to be proactive and cognizant of the risks than to wait for something to go wrong first and then attempt to address the damage.

How does biotechnology help us?

Satellite images make clear the massive changes that mankind has made to the surface of the Earth: cleared forests, massive dams and reservoirs, millions of miles of roads. If we could take satellite-type images of the microscopic world, the impact of biotechnology would be no less obvious. The majority of the food we eat comes from engineered plants, which are modified – either via modern technology or by more traditional artificial selection – to grow without pesticides, to require fewer nutrients, or to withstand the rapidly changing climate. Manufacturers have substituted petroleum-based ingredients with biomaterials in many consumer goods, such as plastics, cosmetics, and fuels. Your laundry detergent? It almost certainly contains biotechnology. So do nearly all of your cotton clothes.

But perhaps the biggest application of biotechnology is in human health. Biotechnology is present in our lives before we’re even born, from fertility assistance to prenatal screening to the home pregnancy test. It follows us through childhood, with immunizations and antibiotics, both of which have drastically improved life expectancy. Biotechnology is behind blockbuster drugs for treating cancer and heart disease, and it’s being deployed in cutting-edge research to cure Alzheimer’s and reverse aging. The scientists behind the technology called CRISPR/Cas9 believe it may be the key to safely editing DNA for curing genetic disease. And one company is betting that organ transplant waiting lists can be eliminated by growing human organs in chimeric pigs.

What are the risks of biotechnology?

Along with excitement, the rapid progress of research has also raised questions about the consequences of biotechnology advances. Biotechnology may carry more risk than other scientific fields: microbes are tiny and difficult to detect, but the dangers are potentially vast. Further, engineered cells could divide on their own and spread in the wild, with the possibility of far-reaching consequences. Biotechnology could most likely prove harmful either through the unintended consequences of benevolent research or from the purposeful manipulation of biology to cause harm. One could also imagine messy controversies, in which one group engages in an application for biotechnology that others consider dangerous or unethical.

1. Unintended Consequences

Sugarcane farmers in Australia in the 1930s had a problem: cane beetles were destroying their crop. So, they reasoned that importing a natural predator, the cane toad, could be a natural form of pest control. What could go wrong? Well, the toads became a major nuisance themselves, spreading across the continent and eating the local fauna (except for, ironically, the cane beetle).

While modern biotechnology solutions to society’s problems seem much more sophisticated than airdropping amphibians into Australia, this story should serve as a cautionary tale. To avoid blundering into disaster, the errors of the past should be acknowledged.

  • In 2014, the Centers for Disease Control and Prevention came under scrutiny after repeated errors led to scientists being exposed to Ebola, anthrax, and the flu. And a professor in the Netherlands came under fire in 2011 when his lab engineered a deadly, airborne version of the flu virus, mentioned above, and attempted to publish the details. These and other labs study viruses or toxins to better understand the threats they pose and to try to find cures, but their work could set off a public health emergency if a deadly material is released or mishandled as a result of human error.
  • Mosquitoes are carriers of disease – including harmful and even deadly pathogens like Zika, malaria, and dengue – and they seem to play no productive role in the ecosystem. But civilians and lawmakers are raising concerns about a mosquito control strategy that would genetically alter and destroy disease-carrying species of mosquitoes. Known as a ‘gene drive,’ the technology is designed to spread a gene quickly through a population by sexual reproduction. For example, to control mosquitoes, scientists could release males into the wild that have been modified to produce only sterile offspring. Scientists who work on gene drives have performed risk assessments and equipped their drives with safeguards to make the trials as safe as possible. But, since a man-made gene drive has never been tested in the wild, it’s impossible to know for certain the impact that a mosquito extinction could have on the environment. Additionally, there is a small possibility that the gene drive could mutate once released in the wild, spreading genes that researchers never planned for. Even armed with strategies to reverse a rogue gene drive, scientists may find gene drives difficult to control once they spread outside the lab.
  • When scientists went digging for clues in the DNA of people who are apparently immune to HIV, they found that the resistant individuals had mutated a protein that serves as the landing pad for HIV on the surface of blood cells. Because these patients were apparently healthy in the absence of the protein, researchers reasoned that deleting its gene in the cells of infected or at-risk patients could be a permanent cure for HIV and AIDS. With the arrival of the new tool, a set of ‘DNA scissors’ called CRISPR/Cas9, that holds the promise of simple gene surgery for HIV, cancer, and many other genetic diseases, the scientific world started to imagine nearly infinite possibilities. But trials of CRISPR/Cas9 in human cells have produced troubling results, with mutations showing up in parts of the genome that shouldn’t have been targeted for DNA changes. While a bad haircut might be embarrassing, the wrong cut by CRISPR/Cas9 could be much more serious, making you sicker instead of healthier. And if those edits were made to embryos, instead of fully formed adult cells, then the mutations could permanently enter the gene pool, meaning they will be passed on to all future generations. So far, prominent scientists and prestigious journals are calling for a moratorium on gene editing in viable embryos until the risks, ethics, and social implications are better understood.

2. Weaponizing Biology

The world recently witnessed the devastating effects of disease outbreaks, in the form of Ebola and the Zika virus – but those were natural in origin. The malicious use of biotechnology could mean that future outbreaks are started on purpose. Whether the perpetrator is a state actor or a terrorist group, the development and release of a bioweapon, such as a poison or infectious disease, would be hard to detect and even harder to stop. Unlike a bullet or a bomb, deadly cells could continue to spread long after being deployed. The US government takes this threat very seriously, and the threat of bioweapons to the environment should not be taken lightly either.

Developed nations, and even impoverished ones, have the resources and know-how to produce bioweapons. For example, North Korea is rumored to have assembled an arsenal containing “anthrax, botulism, hemorrhagic fever, plague, smallpox, typhoid, and yellow fever,” ready in case of attack. It’s not unreasonable to assume that terrorists or other groups are trying to get their hands on bioweapons as well. Indeed, numerous instances of chemical or biological weapon use have been recorded, including the anthrax scare shortly after 9/11, which left five dead after anthrax spores were sent through the mail. And new gene editing technologies are increasing the odds that a hypothetical bioweapon targeted at a certain ethnicity, or even a single individual like a world leader, could one day become a reality.

While attacks using traditional weapons may require much less expertise, the dangers of bioweapons should not be ignored. It might seem impossible to make bioweapons without plenty of expensive materials and scientific knowledge, but recent advances in biotechnology may make it even easier for bioweapons to be produced outside of a specialized research lab. The cost to chemically manufacture strands of DNA is falling rapidly, meaning it may one day be affordable to ‘print’ deadly proteins or cells at home. And the openness of science publishing, which has been crucial to our rapid research advances, also means that anyone can freely Google the chemical details of deadly neurotoxins. In fact, the most controversial aspect of the supercharged influenza case was not that the experiments had been carried out, but that the researchers wanted to openly share the details.

On a more hopeful note, scientific advances may allow researchers to find solutions to biotechnology threats as quickly as they arise. Recombinant DNA and biotechnology tools have enabled the rapid invention of new vaccines which could protect against new outbreaks, natural or man-made. For example, less than 5 months after the World Health Organization declared Zika virus a public health emergency, researchers got approval to enroll patients in trials for a DNA vaccine.

The ethics of biotechnology

Biotechnology doesn’t have to be deadly, or even dangerous, to fundamentally change our lives. While humans have been altering genes of plants and animals for millennia – first through selective breeding and more recently with molecular tools and chimeras – we are only just beginning to make changes to our own genomes (amid great controversy).

Cutting-edge tools like CRISPR/Cas9 and DNA synthesis raise important ethical questions that are increasingly urgent to answer. Some question whether altering human genes means “playing God,” and if so, whether we should do that at all. For instance, if gene therapy in humans is acceptable to cure disease, where do you draw the line? Among disease-associated gene mutations, some come with virtual certainty of premature death, while others put you at higher risk for something like Alzheimer’s, but don’t guarantee you’ll get the disease. Many others lie somewhere in between. How do we determine a hard limit for which gene surgery to undertake, and under what circumstances, especially given that the surgery itself comes with the risk of causing genetic damage? Scholars and policymakers have wrestled with these questions for many years, and there is some guidance in documents such as the United Nations’ Universal Declaration on the Human Genome and Human Rights.

And what about ways that biotechnology may contribute to inequality in society? Early work in gene surgery will no doubt be expensive – for example, Novartis plans to charge $475,000 for a one-time treatment of their recently approved cancer therapy, a drug which, in trials, has rescued patients facing certain death. Will today’s income inequality, combined with biotechnology tools and talk of ‘designer babies’, lead to tomorrow’s permanent underclass of people who couldn’t afford genetic enhancement?

Advances in biotechnology are escalating the debate, from questions about altering life to creating it from scratch. For example, a recently announced initiative called GP-Write has the goal of synthesizing an entire human genome from chemical building blocks within the next 10 years. The project organizers have many applications in mind, from bringing back wooly mammoths to growing human organs in pigs. But, as critics pointed out, the technology could make it possible to produce children with no biological parents, or to recreate the genome of another human, like making cellular replicas of Einstein. “To create a human genome from scratch would be an enormous moral gesture,” write two bioethicists regarding the GP-Write project. In response, the organizers of GP-Write insist that they welcome a vigorous ethical debate, and have no intention of turning synthetic cells into living humans. But this doesn’t guarantee that rapidly advancing technology won’t be applied in the future in ways we can’t yet predict.

What are the tools of biotechnology?

1. DNA Sequencing

It’s nearly impossible to imagine modern biotechnology without DNA sequencing. Since virtually all of biology centers around the instructions contained in DNA, biotechnologists who hope to modify the properties of cells, plants, and animals must speak the same molecular language. DNA is made up of four building blocks, or bases, and DNA sequencing is the process of determining the order of those bases in a strand of DNA. Since the publication of the complete human genome in 2003, the cost of DNA sequencing has dropped dramatically, making it a simple and widespread research tool.

Benefits: Sonia Vallabh had just graduated from law school when her mother died from a rare and fatal genetic disease. DNA sequencing showed that Sonia carried the fatal mutation as well. But far from resigning herself to her fate, Sonia and her husband Eric decided to fight back, and today they are graduate students at Harvard, racing to find a cure. DNA sequencing has also allowed Sonia to become pregnant, since doctors could test her eggs for ones that don’t have the mutation. While most people’s genetic blueprints don’t contain deadly mysteries, our health is increasingly supported by the medical breakthroughs that DNA sequencing has enabled. For example, researchers were able to track the 2014 Ebola epidemic in real time using DNA sequencing. And pharmaceutical companies are designing new anti-cancer drugs targeted to people with a specific DNA mutation. Entire new fields, such as personalized medicine, owe their existence to DNA sequencing technology.

Risks: Simply reading DNA is not harmful, but it is foundational for all of modern biotechnology. As the saying goes, knowledge is power, and the misuse of DNA information could have dire consequences. While DNA sequencing alone cannot make bioweapons, it’s hard to imagine waging biological warfare without being able to analyze the genes of infectious or deadly cells or viruses. And although one’s own DNA information has traditionally been considered personal and private, containing information about your ancestors, family, and medical conditions, governments and corporations increasingly include a person’s DNA signature in the information they collect. Some warn that such databases could be used to track people or discriminate on the basis of private medical records – a dystopian vision of the future familiar to anyone who’s seen the movie GATTACA. Even supplying patients with their own genetic information has come under scrutiny, if it’s done without proper context, as evidenced by the dispute between the FDA and the direct-to-consumer genetic testing service 23andMe. Finally, DNA testing opens the door to sticky ethical questions, such as whether to carry to term a pregnancy after the fetus is found to have a genetic mutation.

2. Recombinant DNA

The modern field of biotechnology was born when scientists first manipulated – or ‘recombined’ – DNA in a test tube, and today almost all aspects of society are impacted by so-called ‘rDNA’. Recombinant DNA tools allow researchers to choose a protein they think may be important for health or industry, and then remove that protein from its original context. Once removed, the protein can be studied in a species that’s simple to manipulate, such as E. coli bacteria. This lets researchers reproduce it in vast quantities, engineer it for improved properties, and/or transplant it into a new species. Modern biomedical research, many best-selling drugs, most of the clothes you wear, and many of the foods you eat rely on rDNA biotechnology.

Benefits: Simply put, our world has been reshaped by rDNA. Modern medical advances are unimaginable without the ability to study cells and proteins with rDNA and the tools used to make it, such as PCR, which helps researchers ‘copy and paste’ DNA in a test tube. An increasing number of vaccines and drugs are the direct products of rDNA. For example, nearly all insulin used in treating diabetes today is produced recombinantly. Additionally, cheese lovers may be interested to know that rDNA provides ingredients for a majority of hard cheeses produced in the West. Many important crops have been genetically modified to produce higher yields, withstand environmental stress, or grow without pesticides. Facing the unprecedented threats of climate change, many researchers believe rDNA and GMOs will be crucial in humanity’s efforts to adapt to rapid environmental changes.

Risks: The inventors of rDNA themselves warned the public and their colleagues about the dangers of this technology. For example, they feared that rDNA derived from drug-resistant bacteria could escape from the lab, threatening the public with infectious superbugs. And recombinant viruses, useful for introducing genes into cells in a petri dish, might instead infect the human researchers. Some of the initial fears were allayed when scientists realized that genetic modification is much trickier than initially thought, and once the realistic threats were identified – like recombinant viruses or the handling of deadly toxins – safety and regulatory measures were put in place. Still, there are concerns that rogue scientists or bioterrorists could produce weapons with rDNA. For instance, it took researchers just 3 years to make poliovirus from scratch in 2002, and today the same could be accomplished in a matter of weeks. Recent flu epidemics have killed over 200,000 people, and the malicious release of an engineered virus could be much deadlier – especially if preventative measures, such as vaccine stockpiles, are not in place.

3. DNA Synthesis

Synthesizing DNA has the advantage of offering total researcher control over the final product. With many of the mysteries of DNA still unsolved, some scientists believe the only way to truly understand the genome is to make one from its basic building blocks. Building DNA from scratch has traditionally been too expensive and inefficient to be very practical, but in 2010, researchers did just that, completely synthesizing the genome of a bacterium and injecting it into a living cell. Since then, scientists have made bigger and bigger genomes, and recently, the GP-Write project launched with the intention of tackling perhaps the ultimate goal: chemically fabricating an entire human genome. Meeting this goal – and within a 10-year timeline – will require new technology and an explosion in manufacturing capacity. But the project’s success could signal the impact of synthetic DNA on the future of biotechnology.

Benefits: Plummeting costs and technical advances have made the goal of total genome synthesis seem much more immediate. Scientists hope these advances, and the insights they enable, will ultimately make it easier to make custom cells to serve as medicines or even bomb-sniffing plants. Fantastical applications of DNA synthesis include human cells that are immune to all viruses or DNA-based data storage. Prof. George Church of Harvard has proposed using DNA synthesis technology to ‘de-extinct’ the passenger pigeon, woolly mammoth, or even Neanderthals. One company hopes to edit pig cells using DNA synthesis technology so that their organs can be transplanted into humans. And DNA is an efficient option for storing data, as researchers recently demonstrated when they stored a movie file in the genome of a cell.

Risks: DNA synthesis has sparked significant controversy and ethical concerns. For example, when the GP-Write project was announced, some criticized the organizers for the troubling possibilities that synthesizing genomes could evoke, likening it to playing God. Would it be ethical, for instance, to synthesize Einstein’s genome and transplant it into cells? The technology to do so does not yet exist, and GP-Write leaders have backed away from making human genomes in living cells, but some are still demanding that the ethical debate happen well in advance of the technology’s arrival. Additionally, cheap DNA synthesis could one day democratize the ability to make bioweapons or other nuisances, as one virologist demonstrated when he made the horsepox virus (related to the virus that causes smallpox) with DNA he ordered over the Internet. (It should be noted, however, that the other ingredients needed to make the horsepox virus are specialized equipment and deep technical expertise.)

4. Genome Editing

Many diseases have a basis in our DNA, and until recently, doctors had very few tools to address the root causes. That appears to have changed with the recent discovery of a DNA editing system called CRISPR/Cas9. (A note on terminology – CRISPR is a bacterial immune system, while Cas9 is one protein component of that system, but both terms are often used to refer to the protein.) It operates in cells like a pair of DNA scissors, opening sites in the genome where scientists can insert their own sequence. While the capability of cutting DNA wasn’t unprecedented, Cas9 dusts the competition with its effectiveness and ease of use. Even though it’s a biotech newcomer, much of the scientific community has already caught ‘CRISPR-fever,’ and biotech companies are racing to turn genome editing tools into the next blockbuster pharmaceutical.
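To make the ‘DNA scissors’ picture concrete, here is a minimal Python sketch that treats a genome as a string of letters, cuts it at a chosen target site, and splices in a new sequence. The target and inserted sequences are invented for illustration, and real editing depends on the cell’s own repair machinery, which this toy model ignores.

```python
# Toy model of the "DNA scissors": cut a genome string at a chosen target site
# and splice in a new sequence. The target and insert are invented examples.

def edit_genome(genome: str, target: str, insert: str) -> str:
    """Cut at the first occurrence of `target` and paste `insert` into the gap."""
    cut_site = genome.find(target)
    if cut_site == -1:
        raise ValueError("target sequence not found; no edit made")
    cut_point = cut_site + len(target) // 2   # cut roughly in the middle of the target
    return genome[:cut_point] + insert + genome[cut_point:]

genome = "ATGGCGTACCTTGACGGATCCTTAGCAAGT"
print(edit_genome(genome, target="CTTGACGGATCC", insert="GGGTTT"))
```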

Benefits: Genome editing may be the key to solving currently intractable genetic diseases such as cystic fibrosis, which is caused by a single genetic defect. If Cas9 can somehow be inserted into a patient’s cells, it could fix the mutations that cause such diseases, offering a permanent cure. Even diseases caused by many mutations, like cancer, or caused by a virus, like HIV/AIDS, could be treated using genome editing. Just recently, an FDA panel recommended a gene therapy for cancer, which showed dramatic responses for patients who had exhausted every other treatment. Genome editing tools are also used to make lab models of diseases, cells that store memories, and tools that can detect epidemic viruses like Zika or Ebola. And as described above, if a gene drive, which uses Cas9, is deployed effectively, we could eliminate diseases such as malaria, which kills nearly half a million people each year.

Risks: Cas9 has generated nearly as much controversy as it has excitement, because genome editing carries both safety issues and ethical risks. Cutting and repairing a cell’s DNA is not risk-free, and errors in the process could make a disease worse, not better. Genome editing in reproductive cells, such as sperm or eggs, could result in heritable genetic changes, meaning dangerous mutations could be passed down to future generations. And some warn of unethical uses of genome editing, fearing a rise of ‘designer babies’ if parents are allowed to choose their children’s traits, even though there are currently no straightforward links between one’s genes and their intelligence, appearance, etc. Similarly, a gene drive, despite possibly minimizing the spread of certain diseases, has the potential to create great harm since it is intended to kill or modify an entire species. A successful gene drive could have unintended ecological impacts, be used with malicious intent, or mutate in unexpected ways. Finally, while the capability doesn’t currently exist, it’s not out of the realm of possibility that a rogue agent could develop genetically selective bioweapons to target individuals or populations with certain genetic traits.


Organizations

The organizations above all work on biotechnology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.

Special thanks to Jeff Bessen for his help researching and writing this page.

Genome Editing and the Future of Biowarfare: A Conversation with Dr. Piers Millett https://futureoflife.org/recent-news/genome-editing-and-the-future-of-biowarfare-a-conversation-with-dr-piers-millett/ Fri, 12 Oct 2018 00:00:00 +0000 https://futureoflife.org/uncategorized/genome-editing-and-the-future-of-biowarfare-a-conversation-with-dr-piers-millett/ In both 2016 and 2017, genome editing made it into the annual Worldwide Threat Assessment of the US Intelligence Community. (Update: it was also listed in the 2022 Threat Assessment.) One of biotechnology’s most promising modern developments, it is now consistently deemed a danger to US national security. All of which raises the question: what, exactly, is genome editing, and what can it do?

Most simply, the phrase “genome editing” represents tools and techniques that biotechnologists use to edit the genome – that is, the DNA or RNA of plants, animals, and bacteria. Though the earliest versions of genome editing technology have existed for decades, the introduction of CRISPR in 2013 “brought major improvements to the speed, cost, accuracy, and efficiency of genome editing.”

CRISPR, or Clustered Regularly Interspaced Short Palindromic Repeats, is actually an ancient immune mechanism that bacteria use to recognize and cut up the DNA of invading viruses. In the lab, researchers have discovered they can replicate this process by creating a synthetic RNA strand that matches a target DNA sequence in an organism’s genome. The RNA strand, known as a “guide RNA,” is attached to an enzyme that can cut DNA. After the guide RNA locates the targeted DNA sequence, the enzyme cuts the genome at this location. DNA can then be removed, and new DNA can be added. CRISPR has quickly become a powerful tool for editing genomes, with research taking place in a broad range of plants and animals, including humans.
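That search-and-cut process can be sketched in a few lines of code: a guide sequence is matched against the genome, and a cut is made only where the match sits next to the short ‘PAM’ motif (commonly written NGG) that Cas9 requires. The sequences below are invented for illustration, and real guide design also has to weigh off-target matches, which this sketch ignores.

```python
# Minimal sketch of guide-RNA target search: report positions where the guide
# sequence is immediately followed by an NGG PAM motif (required by Cas9).
# All sequences here are invented for illustration.

def find_cut_sites(genome: str, guide: str):
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        protospacer = genome[i : i + len(guide)]
        pam = genome[i + len(guide) : i + len(guide) + 3]
        if protospacer == guide and pam[1:] == "GG":   # the 'N' can be any base
            sites.append(i)
    return sites

guide = "CTAGCTAGGCTAACG"   # shortened for readability; real guides are ~20 bases
genome = "TTACG" + guide + "AGG" + "CCATG" + guide + "ATTCCA"
print(find_cut_sites(genome, guide))   # only the first copy is followed by an NGG PAM
```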

A significant percentage of genome editing research focuses on eliminating genetic diseases. However, with tools like CRISPR, it also becomes possible to alter a pathogen’s DNA to make it more virulent and more contagious. Other potential uses include the creation of “‘killer mosquitos,’ plagues that wipe out staple crops, or even a virus that snips at people’s DNA.”

But does genome editing really deserve a spot among the ranks of global threats like nuclear weapons and cyber hacking? To many members of the scientific community, its inclusion felt like an overreaction. Among them was Dr. Piers Millett, a science policy and international security expert whose work focuses on biotechnology and biowarfare.

Millett wasn’t surprised that biotechnology in general made it into these reports: what he didn’t expect was for one specific tool, genome editing, to be called out. In his words: “I would personally be much more comfortable if it had been a broader sentiment to say ‘Hey, there’s a whole bunch of emerging biotechnologies that could destabilize our traditional risk equation in this space, and we need to be careful with that.’ …But calling out specifically genome editing, I still don’t fully understand any rationale behind it.”

This doesn’t mean, however, that the misuse of genome editing is not cause for concern. Even proper use of the technology often involves the genetic engineering of biological pathogens, research that could very easily be weaponized. Says Millett, “If you’re deliberately trying to create a pathogen that is deadly, spreads easily, and that we don’t have appropriate public health measures to mitigate, then that thing you create is amongst the most dangerous things on the planet.”

Biowarfare Before Genome Editing

A medieval depiction of the Black Plague.

Developments such as CRISPR present new possibilities for biowarfare, but biological weapons caused concern long before the advent of gene editing. The first recorded use of biological pathogens in warfare dates back to 600 BC, when Solon, an Athenian statesman, poisoned enemy water supplies during the siege of Krissa. Many centuries later, during the 1346 AD siege of Caffa, the Mongol army catapulted plague-infested corpses into the city, which is thought to have contributed to the 14th century Black Death pandemic that wiped out up to two thirds of Europe’s population.

Though the use of biological weapons was internationally banned by the 1925 Geneva Protocol, state biowarfare programs continued and in many cases expanded during World War II and the Cold War. In 1972, as evidence of these violations mounted, 103 nations signed a treaty known as the Biological Weapons Convention (BWC). The treaty bans the creation of biological arsenals and outlaws offensive biological research, though defensive research is permissible. Each year, signatories are required to submit certain information about their biological research programs to the United Nations, and violations reported to the UN Security Council may result in an inspection.

But inspections can be vetoed by the permanent members of the Security Council, and there are no firm guidelines for enforcement. On top of this, the line that separates permissible defensive biological research from its offensive counterpart is murky and remains a subject of controversy. And though the actual numbers remain unknown, pathologist Dr. Riedel asserts that “the number of state-sponsored programs has increased significantly during the last 30 years.”

Dual Use Research

So biological warfare remains a threat, and it’s one that genome editing technology could hypothetically escalate. Genome editing falls into a category of research and technology that’s known as “dual-use” – that is, it has the potential both for beneficial advances and harmful misuses. “As an enabling technology, it enables you to do things, so it is the intent of the user that determines whether that’s a positive thing or a negative thing,” Millett explains.

And ultimately, what’s considered positive or negative is a matter of perspective. “The same activity can look positive to one group of people, and negative to another. How do we decide which one is right and who gets to make that decision?” Genome editing could be used, for example, to eradicate disease-carrying mosquitoes, an application that many would consider positive. But as Millett points out, some cultures view such blatant manipulation of the ecosystem as harmful or “sacrilegious.”

Millett believes that the most effective way to deal with dual-use research is to get the researchers engaged in the discussion. “We have traditionally treated the scientific community as part of the problem,” he says. “I think we need to move to a point where the scientific community is the key to the solution, where we’re empowering them to be the ones who identify the risks, the ones who initiate the discussion about what forms this research should take.” A good scientist, he adds, is one “who’s not only doing good research, but doing research in a good way.”

DIY Genome Editing

But there is a growing worry that dangerous research might be undertaken by those who are not scientists at all. There are already a number of do-it-yourself (DIY) genome editing kits on the market today, and these relatively inexpensive kits allow anyone, anywhere to edit DNA using CRISPR technology. Do these kits pose a real security threat? Millett explains that risk level can be assessed based on two distinct criteria: likelihood and potential impact. Where the “greatest” risks lie will depend on the criterion.

“If you take risk as a factor of likelihood of impact, the most likely attacks will come from low-powered actors, but have a minimal impact and be based on traditional approaches, existing pathogens, and well characterized risks and threats,” Millett explains. DIY genome editors, for example, may be great in number but are likely unable to produce a biological agent capable of causing widespread harm.

“If you switch it around and say where are the most high impact threats going to come from, then I strongly believe that that requires a level of sophistication and technical competency and resources that are not easy to acquire at this point in time,” says Millett. “If you’re looking for advanced stuff: who could misuse genome editing? States would be my bet in the foreseeable future.”
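Millett’s framing amounts to a simple risk matrix: score each type of actor on likelihood and on potential impact, and the two orderings he describes fall out of which column you sort by. A minimal sketch of that idea follows; the actor categories and scores are invented placeholders, not estimates drawn from the interview.

```python
# Illustrative risk matrix for the two orderings Millett describes.
# Actor categories and 0-10 scores are invented placeholders.

actors = {
    "DIY genome editor": {"likelihood": 7, "impact": 1},
    "Terrorist group":   {"likelihood": 4, "impact": 4},
    "State program":     {"likelihood": 1, "impact": 9},
}

def ranked(score):
    """Return actor names sorted from highest to lowest under a scoring rule."""
    return sorted(actors, key=lambda name: score(actors[name]), reverse=True)

print("Most likely sources of misuse: ", ranked(lambda a: a["likelihood"]))
print("Highest potential impact:      ", ranked(lambda a: a["impact"]))
print("Likelihood x impact:           ", ranked(lambda a: a["likelihood"] * a["impact"]))
```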

State Bioweapons Programs

Large-scale bioweapons programs, such as those run by states, pose a double threat: there is always the possibility of accidental release alongside the potential for malicious use. Millett believes that these threats are roughly equal, a conclusion backed by a thousand page report from Gryphon Scientific, a US defense contractor.

Historically, both accidental release and malicious use of biological agents have caused damage. In 1979, there was the accidental release of aerosolized anthrax from the Sverdlovsk bioweapons production facility in the Soviet Union – a clogged air filter in the facility had been removed, but had not been replaced. Ninety-four people were affected by the incident and at least 64 died, along with a number of livestock. The Soviet secret police attempted a cover-up and it was not until years later that the administration admitted the cause of the outbreak.

More recently, Millett says, a US biodefense facility “failed to kill the anthrax that it sent out for various lab trials, and ended up sending out really nasty anthrax around the world.” Though no one was infected, a 2015 government investigation revealed that “over the course of the last decade, 86 facilities in the United States and seven other countries have received low concentrations of live spore samples… thought to be completely inactivated.”

These incidents pale, however, in comparison with Japan’s intentional use of biological weapons during the 1930s and 40s. There is “a published history that suggests up to 30,000 people were killed in China by the Japanese biological weapons program during the lead up to World War II. And if that data is accurate, that is orders of magnitude bigger than anything else,” Millett says.

Given the near-impossibility of controlling the spread of disease, a deliberate attack may have accidental effects far beyond what was intended. The Japanese, for example, may have meant to target only a few Chinese villages, only to unwittingly trigger an epidemic. There are reports, in fact, that thousands of Japan’s own soldiers became infected during a biological attack in 1941.

Despite the 1972 ban on biological weapons programs, Millett believes that many countries still have the capacity to produce biological weapons. As an example, he explains that the Soviets developed “a set of research and development tools that would answer the key questions and give you all the key capabilities to make biological weapons.”

The BWC only bans offensive research, and “underneath the umbrella of a defensive program,” Millett says, “you can do a whole load of research and development to figure out what you would want to weaponize if you were going to make a weapon.” Then, all a country needs to start producing those weapons is “the capacity to scale up production very, very quickly.” The Soviets, for example, built “a set of state-based commercial infrastructure to make things like vaccines.” On a day-to-day basis, they were making things the Soviet Union needed. “But they could be very radically rebooted and repurposed into production facilities for their biological weapons program,” Millett explains. This is known as a “breakout program.”

Says Millett, “I believe there are many, many countries that are well within the scope of a breakout program … so it’s not that they necessarily at this second have a fully prepared and worked-out biological weapons program that they can unleash on the world tomorrow, but they might well have all of the building blocks they need to do that in place, and a plan for how to turn their existing infrastructure towards a weapons program if they ever needed to. These components would be permissible under current international law.”

Biological Weapons Convention

This unsettling reality raises questions about the efficacy of the BWC – namely, what does it do well, and what doesn’t it do well? Millett, who worked for the BWC for well over a decade, has a nuanced view.

“The very fact that we have a ban on these things is brilliant,” he says. “We’re well ahead on biological weapons than many other types of weapons systems. We only got the ban on nuclear weapons – and it was only joined by some tiny number of countries – last year. Chemical weapons, only in 1995. The ban on biological weapons is hugely important. Having a space at the international level to talk about those issues is very important.” But, he adds, “we’re rapidly reaching the end of the space that I can be positive about.”

The ban on biological weapons was motivated, at least in part, by the sense that – unlike chemical weapons – they weren’t particularly useful. Traditionally, chemical and biological weapons were dealt with together. The 1925 Geneva Protocol banned both, and the original proposal for the Biological Weapons Convention, submitted by the UK in 1969, would have dealt with both. But the chemical weapons ban was ultimately dropped from the BWC, Millett says, “because that was during Vietnam, and so there were a number of chemical agents that were being used in Vietnam that weren’t going to be banned.” Once the scope of the ban had been narrowed, however, both the US and the USSR signed on.

Millett describes the resulting document as “aspirational.” He explains, “The Biological Weapons Convention is four pages long, whereas the [1995] Chemical Weapons Convention is 200 pages long, give or take.” And the difference “is about the teeth in the treaty.”

“The BWC is…a short document that’s basically a commitment by states not to make these weapons. The Chemical Weapons Convention is an international regime with an organization, with an inspection regime intended to enforce that. Under the BWC, if you are worried about another state, you’re meant to try to resolve those concerns amicably. But if you can’t do that, we move onto Article Six of the Convention, where you report it to the Security Council. The Security Council is meant to investigate it, but of course if you’re a permanent member of the Security Council, you can veto that, so that doesn’t happen.”

De-escalation

One easy way that states can avoid raising suspicion is to be more transparent. As Millett puts it, “If you’re not doing naughty things, then it’s on you to demonstrate that you’re not.” This doesn’t mean revealing everything to everybody. It means finding ways to show other states that they don’t need to worry.

As an example, Millett cites the heightened security culture that developed in the US after 9/11. Following the 2001 anthrax letter attacks, as well as a large investment in US biodefense programs, an initiative was started to prevent foreigners from working in those biodefense facilities. “I’m very glad they didn’t go down that path,” says Millett, “because the greatest risk, I think, was not that a foreign national would sneak in.” Rather, “the advantage of having foreign nationals in those programs was at the international level, when country Y stands up and accuses the US of having an illicit bioweapons program hidden in its biodefense program, there are three other countries that can stand up and say, ‘Well, wait a minute. Our scientists are in those facilities. We work very closely with that program, and we see no evidence of what you’re saying.’”

Historically, secrecy surrounding bioweapons programs has led other countries to begin their own research. Before World War I, the British began exploring the use of bioweapons. The Germans were aware of this. By the onset of the war, the British had abandoned the idea, but the Germans, not knowing this, began their own bioweapons program in an attempt to keep up. By World War II, Germany no longer had a bioweapons program. But the Allies believed they still did, and the U.S. bioweapons program was born of such fears.

What now?

Asked if he believes genome editing is a bioweapons “game changer”, Millett says no. “I see it as an enabling technology in the short to medium term, then maybe with longer-term implications, but then we’re out into the far distance of what we can reasonably talk about and predict,” he says. “Certainly for now, I think its big impact is it makes it easier, faster, cheaper, and more reliable to do things that you could do using traditional approaches.”

But as biotechnology continues to evolve, so too will biowarfare. For example, it will eventually be possible for governments to alter specific genes in their own populations. “Imagine aerosolizing a lovely genome editor that knocks out a specifically nasty gene in your population,” says Millett. “It’s a passive thing. You breathe it in.” And then it “retroactively alters” the population’s DNA.

A government could use such technology to knock out a gene linked to cancer or other diseases. But, Millett says, “what would happen if you came across a couple of genes that at an individual level were not going to have an impact, but at a population level were connected with something, say, like IQ?” With the help of a genome editor, a government could make their population smarter, on average, by a few IQ points.

Millett says that reliable economic studies support the statistical importance of average IQ. “The GDP of the country will be noticeably affected if we could just get another two or three percent IQ points. There are direct national security implications of that. If, for example, Chinese citizens got smarter on average over the next couple of generations by a couple of IQ points per generation, that has national security implications for both the UK and the US.”

For now, such an endeavor remains in the realm of science fiction. But technology is evolving at a breakneck speed, and it’s more important than ever to consider the potential implications of our advancements. That said, Millett is optimistic about the future. “I think the key is the distribution of bad actors versus good actors,” he says. As long as the bad actors remain the minority, there is more reason to be excited for the future of biotechnology than there is to be afraid of it.

Dr. Piers Millett holds fellowships at the Future of Humanity Institute, the University of Oxford, and the Woodrow Wilson Center for International Policy and works as a consultant for the World Health Organization. He also served at the United Nations as the Deputy Head of the Biological Weapons Convention Implementation Support Unit.

Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More https://futureoflife.org/podcast/podcast-martin-rees-on-the-prospects-for-humanity-ai-biotech-climate-change-overpopulation-cryogenics-and-more/ Thu, 11 Oct 2018 00:00:00 +0000 https://futureoflife.org/uncategorized/podcast-martin-rees-on-the-prospects-for-humanity-ai-biotech-climate-change-overpopulation-cryogenics-and-more/ The Future of Humanity Institute Releases Three Papers on Biorisks https://futureoflife.org/biotech/future-humanity-institute-releases-three-papers-biorisks/ Fri, 29 Sep 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/future-humanity-institute-releases-three-papers-biorisks/ Click here to see this page in other languages:  Russian 

Earlier this month, the Future of Humanity Institute (FHI) released three new papers that assess global catastrophic and existential biosecurity risks and offer a cost-benefit analysis of various approaches to dealing with these risks.

The work – done by Piers Millett, Andrew Snyder-Beattie, Sebastian Farquhar, and Owen Cotton-Barratt – looks at what the greatest risks might be, how cost-effective they are to address, and how funding agencies can approach high-risk research.

In one paper, Human Agency and Global Catastrophic Biorisks, Millett and Snyder-Beattie suggest that “the vast majority of global catastrophic biological risk (GCBR) comes from human agency rather than natural resources.” This risk could grow as future technologies allow us to further manipulate our environment and biology. The authors list many of today’s known biological risks but they also highlight how unknown risks in the future could easily arise as technology advances. They call for a GCBR community that will provide “a space for overlapping interests between the health security communities and the global catastrophic risk communities.”

Millett and Snyder-Beattie also authored the paper, Existential Risk and Cost-Effective Biosecurity. This paper looks at the existential threat of future bioweapons to assess whether the risks are high enough to justify investing in threat-mitigation efforts. They consider a spectrum of biosecurity risks, including biocrimes, bioterrorism, and biowarfare, and they look at three models to estimate the risk of extinction from these weapons. As they state in their conclusion: “Although the probability of human extinction from bioweapons may be extremely low, the expected value of reducing the risk (even by a small amount) is still very large, since such risks jeopardize the existence of all future human lives.”

The third paper is Pricing Externalities to Balance Public Risks and Benefits of Research, by Farquhar, Cotton-Barratt, and Snyder-Beattie. Here they consider how scientific funders should “evaluate research with public health risks.” The work was inspired by the controversy surrounding the “gain-of-function” experiments performed on the H5N1 flu virus. The authors propose an approach that translates an estimate of the risk into a financial price, which “can then be included in the cost of the research.” They conclude with the argument that the “approaches discussed would work by aligning the incentives for scientists and for funding bodies more closely with those of society as a whole.”

Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer https://futureoflife.org/podcast/podcast-choosing-career-tackle-worlds-biggest-problems-rob-wiblin-brenton-mayer/ Fri, 29 Sep 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/podcast-choosing-career-tackle-worlds-biggest-problems-rob-wiblin-brenton-mayer/ FHI Quarterly Update (July 2017) https://futureoflife.org/biotech/fhi-quarterly-update-july-2017/ Thu, 06 Jul 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/fhi-quarterly-update-july-2017/ The following update was originally posted on the FHI website:

In the second quarter of 2017, FHI has continued its work as before, exploring crucial considerations for the long-run flourishing of humanity in our four research focus areas:

  • Macrostrategy – understanding which crucial considerations shape what is at stake for the future of humanity.
  • AI safety – researching computer science techniques for building safer artificially intelligent systems.
  • AI strategy – understanding how geopolitics, governance structures, and strategic trends will affect the development of advanced artificial intelligence.
  • Biorisk – working with institutions around the world to reduce risk from especially dangerous pathogens.

We have been adapting FHI to our growing size. We’ve secured 50% more office space, which will be shared with the proposed Institute for Effective Altruism. We are developing plans to restructure to make our research management more modular and to streamline our operations team.

We have gained two staff in the last quarter. Tanya Singh is joining us as a temporary administrator, coming from a background in tech start-ups. Laura Pomarius has joined us as a Web Officer with a background in design and project management. Two of our staff will be leaving in this quarter. Kathryn Mecrow is continuing her excellent work at the Centre for Effective Altruism where she will be their Office Manager. Sebastian Farquhar will be leaving to do a DPhil at Oxford but expects to continue close collaboration. We thank them for their contributions and wish them both the best!

Key outputs you can read

A number of co-authors including FHI researchers Katja Grace and Owain Evans surveyed hundreds of researchers to understand their expectations about AI performance trajectories. They found significant uncertainty, but the aggregate subjective probability estimate suggested a 50% chance of high-level AI within 45 years. Of course, the estimates are subjective and expert surveys like this are not necessarily accurate forecasts, though they do reflect the current state of opinion. The survey was widely covered in the press.

An earlier overview of funding in the AI safety field by Sebastian Farquhar highlighted slow growth in AI strategy work. Miles Brundage’s latest piece, released via 80,000 Hours, aims to expand the pipeline of workers for AI strategy by suggesting practical paths for people interested in the area.

Anders Sandberg, Stuart Armstrong, and their co-author Milan Cirkovic published a paper outlining a potential strategy for advanced civilizations to postpone computation until the universe is much colder, thereby gaining up to a 10^30 multiplier of achievable computation. This might explain the Fermi paradox, although a future paper from FHI suggests there may be no paradox to explain.

Individual research updates

Macrostrategy and AI Strategy

Nick Bostrom has continued work on AI strategy and the foundations of macrostrategy and is investing in advising some key actors in AI policy. He gave a speech at the G30 in London and presented to CEOs of leading Chinese technology firms in addition to a number of other lectures.

Miles Brundage wrote a career guide for AI policy and strategy, published by 80,000 Hours. He ran a scenario planning workshop on uncertainty in AI futures. He began a paper on verifiable and enforceable agreements in AI safety while a review paper on deep reinforcement learning he co-authored was accepted. He spoke at Newspeak House and participated in a RAND workshop on AI and nuclear security.

Owen Cotton-Barratt organised and led a workshop to explore potential quick-to-implement responses to a hypothetical scenario where AI capabilities grow much faster than the median expected case.

Sebastian Farquhar continued work with the Finnish government on pandemic preparedness, existential risk awareness, and geoengineering. They are currently drafting a white paper in three working groups on those subjects. He is contributing to a technical report on AI and security.

Carrick Flynn began working on structuredly transparent crime detection using AI and encryption and attended EAG Boston.

Clare Lyle has joined as a research intern and has been working with Miles Brundage on AI strategy issues including a workshop report on AI and security.

Toby Ord has continued work on a book on existential risk, worked to recruit two research assistants, ran a forecasting exercise on AI timelines and continues his collaboration with DeepMind on AI safety.

Anders Sandberg is beginning preparation for a book on ‘grand futures’.  A paper by him and co-authors on the aestivation hypothesis was published in the Journal of the British Interplanetary Society. He contributed a report on the statistical distribution of great power war to a Yale workshop, spoke at a workshop on AI at the Johns Hopkins Applied Physics Lab, and at the AI For Good summit in Geneva, among many other workshop and conference contributions. Among many media appearances, he can be found in episodes 2-6 of National Geographic’s series Year Million.

AI Safety

Stuart Armstrong has made progress on a paper on oracle designs and low impact AI, a paper on value learning in collaboration with Jan Leike, and several other collaborations including those with DeepMind researchers. A paper on the aestivation hypothesis co-authored with Anders Sandberg was published.

Eric Drexler has been engaged in a technical collaboration addressing the adversarial example problem in machine learning and has been making progress toward a publication that reframes the AI safety landscape in terms of AI services, structured systems, and path-dependencies in AI research and development.

Owain Evans and his co-authors released their survey of AI researchers on their expectations of future trends in AI. It was covered in the New Scientist, MIT Technology Review, and leading newspapers and is under review for publication. Owain’s team completed a paper on using human intervention to help RL systems avoid catastrophe. Owain and his colleagues further promoted their online textbook on modelling agents.

Jan Leike and his co-authors released a paper on universal reinforcement learning, which makes fewer assumptions about its environment than most reinforcement learners. Jan is a research associate at FHI while working at DeepMind.

Girish Sastry, William Saunders, and Neal Jean have joined as interns and have been helping Owain Evans with research and engineering on the prevention of catastrophes during training of reinforcement learning agents.

Biosecurity

Piers Millett has been collaborating with Andrew Snyder-Beattie on a paper on the cost-effectiveness of interventions in biorisk, and the links between catastrophic biorisks and traditional biosecurity. Piers worked with biorisk organisations including the US National Academies of Science, the global technical synthetic biology meeting (SB7), and training for those overseeing Ebola samples among others.

Funding

FHI is currently in a healthy financial position, although we continue to accept donations. We expect to spend approximately £1.3m over the course of 2017. Including three new hires but no further growth, our current funds plus pledged income should last us until early 2020. Additional funding would likely be used to add to our research capacity in machine learning, technical AI safety and AI strategy. If you are interested in discussing ways to further support FHI, please contact Niel Bowerman.

Recruitment

Over the coming months we expect to recruit for a number of positions. At the moment, we are interested in applications for internships from talented individuals with a machine learning background to work in AI safety. We especially encourage applications from demographic groups currently under-represented at FHI.

GP-write and the Future of Biology https://futureoflife.org/biotech/gp-write-future-biology/ Fri, 12 May 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/gp-write-future-biology/ Imagine going to the airport, but instead of walking through – or waiting in – long and tedious security lines, you could walk through a hallway that looks like a terrarium. No lines or waiting. Just a lush, indoor garden. But these plants aren’t something you can find in your neighbor’s yard – their genes have been redesigned to act as sensors, and the plants will change color if someone walks past with explosives.

The Genome Project Write (GP-write) got off to a rocky start last year when it held a “secret” meeting that prohibited journalists. News of the event leaked, and the press quickly turned to fears of designer babies and Frankenstein-like creations. This year, organizers of the meeting learned from the 2016 debacle. Not only did they invite journalists, but they also highlighted work by researchers like June Medford, whose plant research could lead to advancements like the security garden above.

Jef Boeke, one of the lead authors of the GP-write Grand Challenge, emphasized that this project was not just about writing the human genome. “The notion that we could write a human genome is simultaneously thrilling to some and not so thrilling to others,” Boeke told the group. “We recognize that this will take a lot of discussion.”

Boeke explained that the GP-write project will happen in the cells, and the researchers involved are not trying to produce an organism. He added that this work could be used to solve problems associated with climate change and the environment, invasive species, pathogens, and food insecurity.

To learn more about why this project is important, I spoke with genetics researcher John Min about what GP-write is and what it could accomplish. Min is not directly involved with GP-write, but he works with George Church, another one of the lead authors of the project.

Min explained, “We aren’t currently capable of making DNA as long as human chromosomes – we can’t make that from scratch in the laboratory. In this case, they’ll use CRISPR to make very specific cuts in the genome of an existing cell, and either use synthesized DNA to replace whole chunks or add new functionality in.”

He added, “An area of potentially exciting research with this new project is to create a human cell immune to all known viruses. If we can create this in the lab, then we can start to consider how to apply it to people around the world. Or we can use it to build an antibody library against all known viruses. Right now, tackling such a project is completely unaffordable – the costs are just too astronomic.”

But costs aren’t the only reason GP-write is hugely ambitious. It’s also incredibly challenging science. To achieve the objectives mentioned above, scientists will synthesize, from basic chemicals, the building blocks of life. Synthesizing a genome involves slowly editing out tiny segments of genes and replacing them with the new chemical version. Then researchers study each edit to determine what, if anything, changed for the organism involved. Then they repeat this for every single known gene. It is a tedious, time-consuming process, rife with errors and failures that send scientists back to the drawing board over and over, until they finally get just one gene right. On top of that, Min explained, it’s not clear how to tell when a project transitions from editing a cell, to synthesizing it. “How many edits can you make to an organism’s genome before you can say you’ve synthesized it?” he asked.
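As a rough caricature of the cycle Min describes, the sketch below swaps in one synthetic segment at a time, ‘tests’ the result, and retries any failures. The genome segments and the viability test are placeholders; in reality each round represents weeks of lab work rather than a function call.

```python
# Caricature of the edit-and-test cycle described above: replace one segment at
# a time with its synthetic version, check the cell, and redo failed attempts.
# Genome segments and the viability "test" are placeholders, not real biology.
import random

random.seed(0)

genome_segments = [f"seg{i:02d}" for i in range(10)]   # stand-in for native gene segments

def cell_is_viable(segments):
    """Placeholder for the lab work of checking whether the edited cell still functions."""
    return random.random() > 0.3   # pretend roughly 30% of edits fail and must be redone

synthetic_genome = list(genome_segments)
for i, segment in enumerate(genome_segments):
    attempts = 0
    while True:
        attempts += 1
        candidate = list(synthetic_genome)
        candidate[i] = "synthetic_" + segment   # swap in the chemically made version
        if cell_is_viable(candidate):
            synthetic_genome = candidate
            break
    print(f"{segment}: accepted after {attempts} attempt(s)")
```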

Clyde Hutchison, working with Craig Venter, recently came closest to answering that question. He and Venter’s team published the first paper depicting attempts to synthesize a simple bacterial genome. The project involved understanding which genes were essential, which genes were inessential, and discovering that some genes are “quasi-essential.” In the process, they uncovered “149 genes with unknown biological functions, suggesting the presence of undiscovered functions that are essential for life.”

This discovery tells us two things. First, it shows just how enormous the GP-write project is. To find 149 unknown genes in simple bacteria offers just a taste of how complicated the genomes of more advanced organisms will be. Kris Saha, Assistant Professor of Biomedical Engineering at the University of Wisconsin-Madison, explained this to the Genetic Experts News Service:

“The evolutionary leap between a bacterial cell, which does not have a nucleus, and a human cell is enormous. The human genome is organized differently and is much more complex. […] We don’t entirely understand how the genome is organized inside of a typical human cell. So given the heroic effort that was needed to make a synthetic bacterial cell, a similar if not more intense effort will be required – even to make a simple mammalian or eukaryotic cell, let alone a human cell.”

Second, this discovery gives us a clue as to how much more GP-write could tell us about how biology and the human body work. If we can uncover unknown functions within DNA, how many diseases could we eliminate? Could we cure aging? Could we increase our energy levels? Could we boost our immunities? Are there risks we need to prepare for?

The best assumption for that last question is: yes.

“Safety is one of our top priorities,” said Church at the event’s press conference, which included other leaders of the project. They said they expect safeguards to be engineered into research “from the get-go,” and part of the review process would include assessments of whether research within the project could be developed to have either positive or negative outcomes, known as Dual Use Research of Concern (DURC).

The meeting included roughly 250 people from 10 countries with backgrounds in science, ethics, law, government, and more. In general, the energy at the conference was one of excitement about the possibilities that GP-write could unleash.

“This project not only changes the way the world works, but it changes the way we work in the world,” said GP-write lead author Nancy J. Kelley.

Podcast: FLI 2016 – A Year In Review https://futureoflife.org/podcast/11239/ Fri, 30 Dec 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/11239/ Why 2016 Was Actually a Year of Hope https://futureoflife.org/nuclear/2016-actually-year-hope/ Fri, 30 Dec 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/2016-actually-year-hope/ Just about everyone found something to dislike about 2016, from wars to politics and celebrity deaths. But hidden within this year’s news feeds were some really exciting news stories. And some of them can even give us hope for the future.

Artificial Intelligence

Though concerns about the future of AI still loom, 2016 was a great reminder that, when harnessed for good, AI can help humanity thrive.

AI and Health

Some of the most promising and hopefully more immediate breakthroughs and announcements were related to health. Google’s DeepMind announced a new division that would focus on helping doctors improve patient care. Harvard Business Review considered what an AI-enabled hospital might look like, which would improve the hospital experience for the patient, the doctor, and even the patient’s visitors and loved ones. A breakthrough from MIT researchers could see AI used to more quickly and effectively design new drug compounds that could be applied to a range of health needs.

More specifically, Microsoft wants to cure cancer, and the company has been working with research labs and doctors around the country to use AI to improve cancer research and treatment. But Microsoft isn’t the only company that hopes to cure cancer. DeepMind Health also partnered with University College London’s hospitals to apply machine learning to diagnose and treat head and neck cancers.

AI and Society

Other researchers are turning to AI to help solve social issues. While AI has what is known as the “white guy problem” and examples of bias cropped up in many news articles, Fei-Fei Li has been working with STEM girls at Stanford to bridge the gender gap. Stanford researchers also published research that suggests artificial intelligence could help us use satellite data to combat global poverty.

It was also a big year for research on how to keep artificial intelligence safe as it continues to develop. Google and the Future of Humanity Institute made big headlines with their work to design a “kill switch” for AI. Google Brain also published a research agenda on various problems AI researchers should be studying now to help ensure safe AI for the future.

Even the White House got involved in AI this year, hosting four symposia on AI and releasing reports in October and December about the potential impact of AI and the necessary areas of research. The White House reports are especially focused on the possible impact of automation on the economy, but they also look at how the government can contribute to AI safety, especially in the near future.

AI in Action

And of course there was AlphaGo. In January, Google’s DeepMind published a paper, which announced that the company had created a program, AlphaGo, that could beat one of Europe’s top Go players. Then, in March, in front of a live audience, AlphaGo beat the reigning world champion of Go in four out of five games. These results took the AI community by surprise and indicate that artificial intelligence may be progressing more rapidly than many in the field realized.

And AI went beyond research labs this year to be applied practically and beneficially in the real world. Perhaps most hopeful was some of the news that came out about the ways AI has been used to address issues connected with pollution and climate change. For example, IBM has had increasing success with a program that can forecast pollution in China, giving residents advanced warning about days of especially bad air. Meanwhile, Google was able to reduce its power usage by using DeepMind’s AI to manipulate things like its cooling systems.

And speaking of addressing climate change…

Climate Change

With recent news from climate scientists indicating that climate change may be coming on faster and stronger than previously anticipated and with limited political action on the issue, 2016 may not have made climate activists happy. But even here, there was some hopeful news.

Among the biggest news was the ratification of the Paris Climate Agreement. But more generally, countries, communities and businesses came together on various issues of global warming, and Voice of America offers five examples of how this was a year of incredible, global progress.

But there was also news of technological advancements that could soon help us address climate issues more effectively. Scientists at Oak Ridge National Laboratory have discovered a way to convert CO2 into ethanol. A researcher from UC Berkeley has developed a method for artificial photosynthesis, which could help us more effectively harness the energy of the sun. And a multi-disciplinary team has genetically engineered bacteria that could be used to help combat global warming.

Biotechnology

Biotechnology – with fears of designer babies and manmade pandemics – is easily one of the most feared technologies. But rather than causing harm, the latest biotech advances could help to save millions of people.

CRISPR

In the course of about two years, CRISPR-Cas9 went from a new development to what could become one of the world’s greatest advances in biology. Results of studies early in the year were promising, but as the year progressed, the news just got better. CRISPR was used to successfully remove HIV from human immune cells. A team in China used CRISPR on a patient for the first time in an attempt to treat lung cancer (treatments are still ongoing), and researchers in the US have also received approval to test CRISPR cancer treatment in patients. And CRISPR was also used to partially restore sight to blind animals.

Gene Drive

Where CRISPR could have the most dramatic, life-saving effect is in gene drives. By using CRISPR to modify the genes of an invasive species, we could potentially eliminate the unwelcome plant or animal, reviving the local ecology and saving native species that may be on the brink of extinction. But perhaps most impressive is the hope that gene drive technology could be used to end mosquito- and tick-borne diseases, such as malaria, dengue, Lyme, etc. Eliminating these diseases could easily save over a million lives every year.
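The reason a gene drive spreads so aggressively is that it breaks the usual 50/50 odds of inheritance: offspring of a carrier almost always inherit the drive. The minimal simulation below, with made-up parameters, shows how that bias lets a rare drive allele sweep through a population within a handful of generations, whereas an ordinary allele stays rare.

```python
# Minimal model of gene-drive spread: drive carriers pass the drive to nearly
# all offspring instead of the usual 50%. Parameters are illustrative only.

def next_generation_freq(p, transmission):
    """Carrier frequency in the next generation, assuming random mating.

    p:            current fraction of drive carriers
    transmission: chance a carrier x non-carrier pairing yields a carrier
                  (0.5 for ordinary Mendelian inheritance, near 1.0 for a drive)
    """
    carrier_x_carrier = p * p             # offspring is always a carrier
    carrier_x_wild    = 2 * p * (1 - p)   # carrier with probability `transmission`
    return carrier_x_carrier + carrier_x_wild * transmission

for label, transmission in [("Mendelian (50%)", 0.5), ("Gene drive (95%)", 0.95)]:
    p = 0.01                              # start with 1% of the population carrying the allele
    freqs = []
    for _ in range(10):
        p = next_generation_freq(p, transmission)
        freqs.append(round(p, 3))
    print(label, freqs)
```

Under ordinary inheritance the carrier fraction stays flat at 1%, while the biased transmission pushes it toward fixation within about ten generations in this toy model.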

Other Biotech News

The year saw other biotech advances as well. Researchers at MIT addressed a major problem in synthetic biology in which engineered genetic circuits interfere with each other. Another team at MIT engineered an antimicrobial peptide that can eliminate many types of bacteria, including some of the antibiotic-resistant “superbugs.” And various groups are also using CRISPR to create new ways to fight antibiotic-resistant bacteria.

Nuclear Weapons

If ever there was a topic that does little to inspire hope, it’s nuclear weapons. Yet even here we saw some positive signs this year. The Cambridge City Council voted to divest their $1 billion pension fund from any companies connected with nuclear weapons, which earned them an official commendation from the U.S. Conference of Mayors. In fact, divestment may prove a useful tool for the general public to express their displeasure with nuclear policy – and growing awareness of the nuclear weapons situation is itself a cause for hope, since it could help stigmatize the new nuclear arms race.

In February, Londoners held the largest anti-nuclear rally Britain had seen in decades, and the following month MinutePhysics posted a video about nuclear weapons that’s been seen by nearly 1.3 million people. In May, scientific and religious leaders came together to call for steps to reduce nuclear risks. And all of that pales in comparison to the attention the U.S. elections brought to the risks of nuclear weapons.

As awareness of nuclear risks grows, so do our chances of instigating the change necessary to reduce those risks.

The United Nations Takes on Weapons

But if awareness alone isn’t enough, then recent actions by the United Nations may instead be a source of hope. As October came to a close, the United Nations voted to begin negotiations on a treaty that would ban nuclear weapons. While this might not have an immediate impact on nuclear weapons arsenals, the stigmatization caused by such a ban could increase pressure on countries and companies driving the new nuclear arms race.

The U.N. also announced recently that it would officially begin looking into the possibility of a ban on lethal autonomous weapons, a cause that’s been championed by Elon Musk, Steve Wozniak, Stephen Hawking and thousands of AI researchers and roboticists in an open letter.

Looking Ahead

And why limit our hope and ambition to merely one planet? This year, a group of influential scientists led by Yuri Milner announced Breakthrough Starshot, a plan to send a fleet of tiny, light-propelled space probes to Alpha Centauri, our nearest star system. Elon Musk later announced his plans to colonize Mars. And an MIT scientist wants to make all of these trips possible for humans by using CRISPR to reengineer our own genes to keep us safe in space.

Yet for all of these exciting events and breakthroughs, perhaps what’s most inspiring and hopeful is that this represents only a tiny sampling of all of the amazing stories that made the news this year. If trends like these keep up, there’s plenty to look forward to in 2017.

Artificial Photosynthesis: Can We Harness the Energy of the Sun as Well as Plants? https://futureoflife.org/recent-news/artificial-photosynthesis/ Fri, 30 Sep 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/artificial-photosynthesis/ Click here to see this page in other languages : Russian 

In the early 1900s, the Italian chemist Giacomo Ciamician recognized that fossil fuel use was unsustainable. And like many of today’s environmentalists, he turned to nature for clues on developing renewable energy solutions, studying the chemistry of plants and their use of solar energy. He admired their unparalleled mastery of photochemical synthesis—the way they use light to synthesize energy from the most fundamental of substances—and how “they reverse the ordinary process of combustion.”

In photosynthesis, Ciamician realized, lay an entirely renewable process of energy creation. When sunlight reaches the surface of a green leaf, it sets off a reaction inside the leaf. Chloroplasts, energized by the light, trigger the production of chemical products—essentially sugars—which store the energy such that the plant can later access it for its biological needs. It is an entirely renewable process; the plant harvests the immense and constant supply of solar energy, absorbs carbon dioxide and water, and releases oxygen. There is no other waste.

If scientists could learn to imitate photosynthesis by providing concentrated carbon dioxide and suitable catalyzers, they could create fuels from solar energy. Ciamician was taken by the seeming simplicity of this solution. Inspired by small successes in chemical manipulation of plants, he wondered, “does it not seem that, with well-adapted systems of cultivation and timely intervention, we may succeed in causing plants to produce, in quantities much larger than the normal ones, the substances which are useful to our modern life?”

In 1912, Ciamician sounded the alarm about the unsustainable use of fossil fuels, and he exhorted the scientific community to explore artificially recreating photosynthesis. But little was done. A century later, however, in the midst of a climate crisis, scientists armed with improved technology and growing knowledge have finally delivered a major breakthrough toward his vision.

After more than ten years of research and experimentation, Peidong Yang, a chemist at UC Berkeley, successfully created the first photosynthetic biohybrid system (PBS) in April 2015. This first-generation PBS uses semiconductors and live bacteria to do the photosynthetic work that real leaves do—absorb solar energy and create a chemical product using water and carbon dioxide, while releasing oxygen—but it creates liquid fuels. The process is called artificial photosynthesis, and if the technology continues to improve, it may become the future of energy.

How Does This System Work?

Yang’s PBS can be thought of as a synthetic leaf. It is a one-square-inch tray that contains silicon semiconductors and living bacteria, which Yang calls a semiconductor-bacteria interface.

In order to initiate the process of artificial photosynthesis, Yang dips the tray of materials into water, pumps carbon dioxide into the water, and shines a solar light on it. As the semiconductors harvest solar energy, they generate charges to carry out reactions within the solution. The bacteria take electrons from the semiconductors and use them to transform, or reduce, carbon dioxide molecules and create liquid fuels. In the meantime, water is oxidized on the surface of another semiconductor to release oxygen. After several hours or several days of this process, the chemists can collect the product.
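
As a rough sketch of the chemistry at work (assuming acetate, one of the reported products, as a representative example; the bacteria's actual metabolism is considerably more involved than two half-reactions), the two halves of the process can be written as:

$$2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \quad \text{(water oxidation at the semiconductor surface)}$$

$$2\,\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- \rightarrow \mathrm{CH_3COOH} + 2\,\mathrm{H_2O} \quad \text{(carbon dioxide reduction by the bacteria)}$$

$$\text{Net: } 2\,\mathrm{CO_2} + 2\,\mathrm{H_2O} \rightarrow \mathrm{CH_3COOH} + 2\,\mathrm{O_2}$$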

With this first-generation system, Yang successfully produced butanol, acetate, polymers, and pharmaceutical precursors, fulfilling Ciamician’s once-far-fetched vision of imitating plants to create the fuels that we need. This PBS achieved a solar-to-chemical conversion efficiency of 0.38%, which is comparable to the conversion efficiency in a natural, green leaf.
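
For a rough back-of-envelope sense of scale (assuming peak sunlight of roughly 1,000 watts per square meter, a standard reference value not given in the article), a 0.38% conversion efficiency corresponds to

$$0.0038 \times 1000\ \mathrm{W/m^2} \approx 3.8\ \mathrm{W/m^2}$$

of energy stored in chemical bonds for every square meter of illuminated collector.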

A diagram of the first-generation artificial photosynthesis system, with its four main steps.

Describing his research, Yang says, “Our system has the potential to fundamentally change the chemical and oil industry in that we can produce chemicals and fuels in a totally renewable way, rather than extracting them from deep below the ground.”

If Yang’s system can be successfully scaled up, businesses could build artificial forests that produce the fuel for our cars, planes, and power plants by following the same laws and processes that natural forests follow. Since artificial photosynthesis would absorb and reduce carbon dioxide in order to create fuels, we could continue to use liquid fuel without destroying the environment or warming the planet.

However, in order to ensure that artificial photosynthesis can reliably produce our fuels in the future, it has to be better than nature, as Ciamician foresaw. Our need for renewable energy is urgent, and Yang’s model must be able to provide energy on a global scale if it is eventually to replace fossil fuels.

Recent Developments in Yang’s Artificial Photosynthesis

Since the major breakthrough in April 2015, Yang has continued to improve his system in hopes of eventually producing fuels that are commercially viable, efficient, and durable.

In August 2015, Yang and his team tested his system with a different type of bacteria. The method is the same, except instead of electrons, the bacteria use molecular hydrogen from water molecules to reduce carbon dioxide and create methane, the primary component of natural gas. This process is projected to have an impressive conversion efficiency of 10%, which is much higher than the conversion efficiency in natural leaves.
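
Treating the bacterial metabolism as a single net reaction (a simplification; the real methane-producing pathway has many steps), this second-generation chemistry can be sketched as:

$$2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \quad \text{(water splitting at the semiconductor)}$$

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} \quad \text{(methane production by the bacteria)}$$

$$\text{Net: } \mathrm{CO_2} + 2\,\mathrm{H_2O} \rightarrow \mathrm{CH_4} + 2\,\mathrm{O_2}$$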

A conversion efficiency of 10% could potentially be commercially viable, but since methane is a gas, it is more difficult to use than liquid fuels such as butanol, which can be transferred through pipes. Overall, a new generation of the PBS still needs to be designed and assembled in order to achieve a solar-to-liquid-fuel efficiency above 10%.

A diagram of this second-generation PBS that produces methane.

In December 2015, Yang advanced his system further by making the remarkable discovery that certain bacteria could grow the semiconductors by themselves. This development did away with the separate two-step process of first growing the nanowires and then culturing the bacteria in them. The improved semiconductor-bacteria interface could potentially be more efficient in producing acetate, as well as other chemicals and fuels, according to Yang. And in terms of scaling up, it has the greatest potential.

A diagram of this third-generation PBS that produces acetate.

In the past few weeks, Yang made yet another important breakthrough in elucidating the electron transfer mechanism at the semiconductor-bacteria interface. This sort of fundamental understanding of the charge transfer at the interface will provide critical insights for the design of the next-generation PBS with better efficiency and durability. He will be releasing the details of this breakthrough shortly.

With these important breakthroughs and modifications to the PBS, Yang clarifies, “the physics of the semiconductor-bacteria interface for the solar driven carbon dioxide reduction is now established.” As long as he has an effective semiconductor that absorbs solar energy and feeds electrons to the bacteria, the photosynthetic function will initiate, and the remarkable process of artificial photosynthesis will continue to produce liquid fuels.

Why This Solar Power Is Unique

Peter Forbes, a science writer and the author of Nanoscience: Giants of the Infinitesimal, admires Yang’s work in creating this system. He writes, “It’s a brilliant synthesis: semiconductors are the most efficient light harvesters, and biological systems are the best scavengers of CO2.”

Yang’s artificial photosynthesis relies only on solar energy. But it creates a more usable source of energy than solar panels, which are currently the most popular and commercially viable form of solar power. While the semiconductors in solar panels absorb solar energy and convert it into electricity, in artificial photosynthesis the semiconductors absorb solar energy and store it in “the carbon-carbon bond or the carbon-hydrogen bond of liquid fuels like methane or butanol.”

This difference is crucial. The electricity generated from solar panels simply cannot meet our diverse energy needs, but these renewable liquid fuels and natural gas can. Unlike solar panels, Yang’s PBS absorbs and breaks down carbon dioxide, releases oxygen, and creates a renewable fuel that can be collected and used. With artificial photosynthesis creating our fuels, driving cars and operating machinery becomes much less harmful. As Katherine Bourzac nicely puts it, “This is one of the best attempts yet to realize the simple equation: sun + water + carbon dioxide = sustainable fuel.”

The Future of Artificial Photosynthesis

Yang’s PBS has been advancing rapidly, but he still has work to do before the technology can be considered commercially viable. Despite encouraging conversion efficiencies, especially with methane, the PBS is not durable enough or cost-effective enough to be marketable.

In order to improve this system, Yang and his team are working to figure out how to replace bacteria with synthetic catalysts. So far, bacteria have proven to be the most efficient catalysts, and they also have high selectivity—that is, they can create a variety of useful compounds such as butanol, acetate, polymers and methane. But since bacteria live and die, they are less durable than a synthetic catalyst and less reliable if this technology is scaled up.

Yang has been testing PBS’s with live bacteria and synthetic catalysts in parallel systems in order to discover which type works best. “From the point of view of efficiency and selectivity of the final product, the bacteria approach is winning,” Yang says, “but if down the road we can find a synthetic catalyst that can produce methane and butanol with similar selectivity, then that is the ultimate solution.” Such a system would give us the ideal fuels and the most durable semiconductor-catalyst interface that can be reliably scaled up.

Another concern is that, unlike natural photosynthesis, artificial photosynthesis requires concentrated carbon dioxide to function. This is easy to do in the lab, but if artificial photosynthesis is scaled up, Yang will have to find a feasible way of supplying concentrated carbon dioxide to the PBS. Peter Forbes argues that Yang’s artificial photosynthesis could be “coupled with carbon-capture technology to pull CO2 from smokestack emissions and convert it into fuel”. If this could be done, artificial photosynthesis would contribute to a carbon-neutral future by consuming our carbon emissions and releasing oxygen. This is not the focus of Yang’s research, but it is an integral piece of the puzzle that other scientists must provide if artificial photosynthesis is to supply the fuels we need on a large scale.

When Giacomo Ciamician considered the future of artificial photosynthesis, he imagined a future of abundant energy where humans could master the “photochemical processes that hitherto have been the guarded secret of the plants…to make them bear even more abundant fruit than nature, for nature is not in a hurry and mankind is.” And while the rush was not apparent to scientists in 1912, it is clear now, in 2016.

Peidong Yang has already created a system of artificial photosynthesis that out-produces nature. If he continues to increase the efficiency and durability of his PBS, artificial photosynthesis could revolutionize our energy use and serve as a sustainable model for generations to come. As long as the sun shines, artificial photosynthesis can produce fuels and consume waste. And in this future of artificial photosynthesis, the world would be able to grow and use fuels freely, knowing that the same natural process that created them would recycle the carbon at the other end.

Yang shares this hope for the future. He explains, “Our vision of a cyborgian evolution—biology augmented with inorganic materials—may bring the PBS concept to full fruition, selectively combining the best of both worlds, and providing society with a renewable solution to solve the energy problem and mitigate climate change.”

If you would like to learn more about Peidong Yang’s research, please visit his website at http://nanowires.berkeley.edu/.

]]>
The Federal Government Updates Biotech Regulations https://futureoflife.org/biotech/federal-government-updates-biotech-regulations/ Thu, 22 Sep 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/federal-government-updates-biotech-regulations/

By Wakanene Kamau

This summer’s GMO labeling bill and the rise of genetic engineering techniques to combat Zika — the virus linked to microcephaly and Guillain-Barre syndrome — have cast new light on how the government ensures public safety.

As researchers and companies scramble to apply the latest advances in synthetic biology, like the gene-editing technique CRISPR, the public has grown increasingly wary of embracing technology that they perceive as a threat to their health or the health of the environment. How, and to what degree, can the drive to develop and deploy new biotechnologies be reconciled with the need to keep the public safe and informed?

Last Friday, the federal government took a big step in framing the debate by releasing two documents that will modernize the 1986 Coordinated Framework for the Regulation of Biotechnology (Coordinated Framework). The Coordinated Framework is the outline for the network of regulatory policies that are used to ensure the safety of biotechnology products.

The Update to the Coordinated Framework, one of the documents released last week, is the first comprehensive review of how the federal government presently regulates biotechnology. It provides case-studies, graphics, and tables to clarify what tools the government uses to make decisions.

The National Strategy for Modernizing the Regulatory System for Biotechnology Products, the second recently released document, provides the long-term vision for how government agencies will handle emerging technologies. It includes oversight by the Food and Drug Administration (FDA), the U.S. Department of Agriculture (USDA), and the Environmental Protection Agency (EPA).

These documents are the result of work that began last summer when the Office of Science and Technology Policy (OSTP) announced a yearlong project to revise the way biotechnology innovations are regulated. The central document, The Coordinated Framework for the Regulation of Biotechnology, was last updated over 20 years ago.

The Coordinated Framework was first issued in 1986 as a response to a new gene-splicing technique that was leaving academic laboratories and entering the marketplace. Researchers had learned to take DNA from multiple sources and splice it together to create recombinant DNA, or rDNA. This recombined DNA opened the floodgates for new uses that expanded beyond biomedicine and into industries like agriculture and cosmetics.

As researchers saw increasing applications for use in the environment, namely in genetically engineering animals and plants, concerns arose from a variety of stakeholders calling for attention from the federal government. Special interest groups were wary of the effect of commercial rDNA on public and environmental health; outside investors sought assurances that products would be able to legally enter the market; and fledgling biotech companies struggled to navigate regulatory networks.

This tension led the OSTP to develop an interagency effort to outline how to oversee the biotechnology industry. The culmination of this process created a policy framework for how existing legislation would be applied to various kinds of biotechnology. It coordinated across three responsible organizations: the Food and Drug Administration (FDA), the U.S. Department of Agriculture (USDA), and the Environmental Protection Agency (EPA).

Broadly, the FDA regulates genetically modified food and food additives, the USDA oversees genetically modified plants and animals, and the EPA tracks microbial pesticides and engineered algae. By 1986, the first iteration of the Coordinated Framework was finalized and issued.

The Coordinated Framework was updated in 1992 to more clearly describe the scope of how federal agencies would exercise authority in cases where the established rule of law left room for interpretation. The central premise of the update was to look at the product itself and not the process by which it was made. The OSTP and federal government did not see new biotechnology methods as inherently risky but recognized that their applications could be.

However, since 1992, there have been a number of technologies that have raised new questions on the scope of agency authority. Among these are new methods for new applications, such as bioreactors for the biosynthesis of industrially important chemicals or CRISPR-Cas9 to develop gene drives to combat vector-borne disease.  Researchers are also increasingly using new methods for old applications, such as zinc finger nucleases and transcription activator-like effector nucleases, in addition to CRISPR-Cas9, for genome editing to introduce beneficial traits in crops.

But what kind of risks do these innovations create and how could the Coordinated Framework be used to mitigate them?

In theory, the Coordinated Framework aligns a new innovation with the federal agency that has the most experience working in its respective field. In practice, however, making decisions between agencies with overlapping interests and experience has been difficult.

The recent debate over the review of a genetically modified mosquito developed by the UK-based start-up Oxitec to combat the Zika virus shows how controversial the subject can be. Oxitec genetically engineered a male Aedes aegypti mosquito (the primary vector of Zika, along with the dengue, yellow fever, and chikungunya viruses) to carry a gene lethal to any offspring it has with wild female mosquitoes. The plan would be to release the genetically engineered male mosquitoes into the wild, where they can mate with native female mosquitoes and crash the local population.
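
The population-suppression logic can be illustrated with a toy, discrete-generation model. This is a hedged sketch: the numbers below are purely illustrative assumptions rather than Oxitec data, and the real dynamics (mating competitiveness, migration, density dependence) are far more complex.

```python
# Toy model of mosquito population suppression via released engineered males.
# All parameters are illustrative assumptions, not Oxitec's actual figures.

def simulate(wild_pop=10_000, released_males_per_gen=50_000,
             growth_rate=1.1, generations=8):
    """Track a wild population when engineered males are released each generation.

    Assumes a 1:1 wild sex ratio, random mate choice among all males present,
    and that matings with engineered males produce no surviving offspring.
    """
    history = [wild_pop]
    pop = wild_pop
    for _ in range(generations):
        wild_males = pop / 2
        total_males = wild_males + released_males_per_gen
        viable_fraction = wild_males / total_males  # matings yielding live offspring
        pop = pop * growth_rate * viable_fraction
        history.append(round(pop))
    return history

if __name__ == "__main__":
    # Under these assumptions the population collapses within a few generations.
    print(simulate())
```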

Had Oxitec used older genetic techniques, this process would have needed approval from the USDA, which has extensive experience with insecticides. However, because the new method is akin to a “new animal drug,” its oversight fell to the FDA. And the FDA created an uproar when it approved field trials of the Oxitec technology in Florida this August.

Confusion and frustration over who is, and who should be, responsible in cases like this one have brought an end to the 20-year silence on the measure. In fact, the need for greater clarity, responsibility, and understanding in the regulatory approval process was reaffirmed last year: the OSTP sent a Memo last summer to the FDA, USDA, and EPA announcing the scheduled update to the Coordinated Framework.

Since the Memo was released, the OSTP has organized a series of three “public engagement sessions” (notes available here, here and here) to explain how the Coordinated Framework presently works, as well as to accept input from the public. The release of the Update to the Coordinated Framework and the National Strategy are two measures of accountability. The Administration will accept feedback on the measures for 40 days following a notice of request for public comment to be published in the Federal Register.

While scientific breakthroughs have the potential to spur wide-ranging innovations, it is important to ensure due respect is given to the potential dangers those innovations present.

You can sign up for updates from the White House on Bioregulation here.

Wakanene is a science writer based in Seattle, Wa. You can reach him on twitter @ws_kamau.

 

]]>
Podcast: Could an Earthquake Destroy Humanity? https://futureoflife.org/recent-news/earthquake-existential-risk/ Mon, 25 Jul 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/earthquake-existential-risk/ Earthquakes as Existential Risks

Earthquakes are not typically considered existential or even global catastrophic risks, and for good reason: they’re localized events. While they may be devastating to the local community, rarely do they impact the whole world. But is there some way an earthquake could become an existential or catastrophic risk? Could a single earthquake put all of humanity at risk? In our increasingly connected world, could an earthquake sufficiently exacerbate a biotech, nuclear or economic hazard, triggering a cascading set of circumstances that could lead to the downfall of modern society?

Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of FLI consider extreme earthquake scenarios to figure out if there’s any way such a risk is remotely plausible. This podcast was produced in a similar vein to Myth Busters and xkcd’s What If series.

We only consider a few scenarios in this podcast, but we’d love to hear from other people. Do you have ideas for an extreme situation that could transform a locally devastating earthquake into a global calamity?

This episode features insight from seismologist Martin Chapman of Virginia Tech.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

]]>