20110227

The Paradox of Dual-Use Research in the 21st Century

Predrag Bokšić | perceptron
A few days ago, I attended a short conference called Dangerous Liaisons held at the Biodesign Institute. Speakers included researchers in genetics and synthetic biology, the chains of the National Science Advisory Board for Biosecurity, the senior FBI agent for WMD threats, and a AAAS fellow in biosecurity. The subject of the talk was dual-use research, and how it can be controlled. The problem is that while genetics and synthetic biology offer tremendous benefits for health and new chemical products, at the same time these technologies might empower criminals and terrorists, or even lead to an accidental bio-disaster. How can we regulate dual-use technologies for the safety of mankind?

(As a historical aside, it's only recently that dual-use has taken on these negative connotations. Dual-use used to be good. "Oh, you mean we can use these rockets to kill commies and explore the solar system? Awesome!" But civilian technologies with clear military implications are a relatively new phenomenon.)

The primary concern of all the presenters was that whatever form the regulations take, they not impede 'good science.' There were several good justifications for this: regulations that are too stringent will be disliked and evaded by the community, the science is advancing too quickly for central bodies to monitor and control, and impairing biology will leave America at a disadvantage both economically and in terms of responding to an actual incident.

The core problem of dual-use, as identified by the NSABB, is research that might make biological agents more deadly or transmissible. Specific examples include reconstituting the 1918 flu, or improving the deadliness of the mousepox virus, research which could easily be transferred to weaponizing smallpox. In the NSABB's view, the benefits of such research must be carefully balanced against the risks, and such weighing should be carried out at the most basic level, by researchers designing experiments and by existing Institutional Review Boards. The role of groups like the NSABB is to coordinate and develop guidelines.

The NSABB's guidelines might help protect against the accidental release of a bioweapon, but what about deliberate attackers? Much of the talk focused on creating a "culture of security" in biology labs. Background checks for work with select agents may miss many danger signs, and with new techniques, even non-select organisms might be dangerous. All the presenters spoke about the need for scientists to be alert for dangers in the lab. Special Agent Edward You, in particular, described his job not as catching potential bioterrorists, but as creating a framework so that scientists know who to call at the FBI if they see something. A second side of the culture of security is getting private gene synthesis firms to check orders against known pathogen genomes, and not synthesize smallpox genomes, for example, something that firms have so far volunteered to do.

On the one hand, this kind of voluntary regulation is the best, and maybe the only, workable option. On the other hand, I have real concerns about what it means for the actual day-to-day practice of lab work. Quite literally, it requires that PIs monitor their students and make sure that they're not spies, terrorists, or psychopaths. Is this really a fair burden to place on scientists, or is it a rerun of the "Red Scare"? One attendee asked quite penetrating questions about whether or not he should let Iranian PhD students work in his lab. The universality of science, the concept that scientists should be judged by the merit of their ideas and not their personal background or place of origin, is not compatible with these kinds of concerns.

While private monitoring among firms is an option now, as the technology becomes cheaper and more widespread (and it will), how can the industry regulate itself against the existence of "grey hat," fly-by-night companies? I'm reminded of the situation with "research chemicals," synthetic hallucinogens which are structurally similar to banned substances but not covered by law, and their production by various shady chemical firms. The developing world in particular, where intellectual property restrictions are routinely evaded, may offer a fertile breeding ground for these malefactors.

So, is there hope for the future? Dr. Kathleen Bailey has stated that graduate students with $10,000 in equipment could synthesize substantial quantities of a biological agent. (Although it is worth noting that synthesizing an agent is not the same as carrying out an attack; many of the more difficult challenges in biological warfare involve distributing an agent, not producing it.) Whatever the exact resources required, on the spectrum from "the Unabomber in his cabin" to "the Iranian nuclear program," bioterrorism trends towards the lower end. However, while terrorist groups including al-Qaeda have enthusiastically pursued bioweapons, biological and chemical attacks have so far been extremely disappointing. The Aum Shinrikyo nerve gas attack on the Tokyo subway killed only 13 people, likely fewer than a conventional bomb would have. I agree with the presenters that the best defense against dual-use research is, ironically, actively pursuing this kind of research in order to develop counter-measures against an attack. Despite media hype, terrorists and lone wolves have not shown even the minimal organization necessary to carry out a bioweapons attack. We can, at least for the moment, trust the biology community.
