nanoHUB-U Biological Engineering: Cellular Design Principles/L6.2: Ethics, Technology & Society ======================================== [Slide 1 L6.2] Welcome back. I'm Professor Rickus. Now that we've done some forward thinking about what the future might look like, it's important to ground ourselves in some ideas around ethics, technology, and society. And that's what we're going to be doing in this lecture. [Slide 2] So first we're going to take a look at some of the common discussion topics and concerns that are often raised when we talk about biotechnology in general and cellular engineering in particular. We're going to talk a little bit about technology in society, and introduce some ethics frameworks for approaching some of these questions and issues. [Slide 3] So here are some of the most common risks and concerns that immediately come to mind when people start talking about what could go wrong with biotechnology and cellular engineering. These include things like biosecurity: bioterrorism and war, weaponizing the technology. On the flip side of that is biosafety, or "bioerror," as it's often called. This is the unintentional negative consequence: that a modified or synthetic organism will escape and cause unintended harm. In other fields this could be an artificial intelligence, the Terminator kind of world scenario, right? That it will escape and outthink and destroy us. Or it could be some accidental exposure to a biological organism or chemical entity that causes harm. Then, of course, there are environmental concerns. Because we're engineering biological systems, there's the argument that biology is unpredictable and could have unintended consequences on our environment if we're not careful. And there's the idea that the more we engage in biotechnology, the more it might further separate humans from nature.
And then the other common risk and concern is a little harder to nail down, but people often feel very strongly about it. And that's this idea that it's unnatural, or that cellular engineering creates abominations, okay. So let's think about that. [Slide 4] So biotechnology is not unique in that it can be considered a dual use technology. We have many of what are called dual use technologies. These include things like nuclear, right? Nuclear technology can be used for energy and power; it can also be used for weapons of mass destruction. More and more in the conversation now are things like unmanned aerial vehicles, which can be used for agriculture and commerce, but which can also be weaponized to create killer drones, for example. And chemicals are part of this dual use picture: they can be used for consumer products, for agriculture, for medicine, but they can also be weaponized and used as chemical weapons. So nearly all of our technology can be weaponized in some way and can be considered dual use. [Slide 5] One example we mentioned is the UAVs. Right, so here's an example: we could be using, and we do use, UAVs for high tech crop management systems. For example, to optimally monitor crop utilization in resource limited conditions, right? This may be a very important technology for addressing some of our grand challenges in feeding the world. But at the same time, right, these technologies can be co-opted by terrorist groups. For example, recently in the news, weaponized and exploding drones. [Slide 6] So this concept of technology and destruction, both the reality and the fear of it, is not new. And so if we look a little more broadly across human progress, there are some trends that others have observed that we can think about.
So with human progress, it turns out that violence has actually decreased; in other words, we are less likely to die at the hands of another human as technology has advanced our society. Okay, so we'll put a green check for that one, that's a good thing. But on the flip side, while we may be less likely to die at the hands of another human, our technology, and I like this phrase I have in quotes here, can also increase the lethality of our hatred. So when there is conflict, right, a few are more capable of killing many, and that is something our technology enables. Okay, so this may be a negative influence. New technology, such as some of the work we've been talking about that falls into areas of synthetic biology, the brain computer interfaces we've mentioned, or things like artificial intelligence that people are having conversations about, gives us new ways to kill each other, right, and ourselves. But we also can't forget that there is risk to not developing technology. We can't forget natural possibilities for extinction; for example, cell and DNA based biotechnology may very well be our best hope for fighting infectious disease. And we often see in the news things like drug resistant bacteria and viruses; the Zika virus has been a very big deal lately. These can have catastrophic consequences for the human population. Lest we forget, the 1918 Spanish Influenza infected about one out of every five people in the world's population, right? So doing nothing, as a choice, also carries risk. And that's an important part of our risk/benefit assessment as well, one that can't be forgotten. So we'll put a green check there. So there's no clear answer on technology and destruction, good or bad, but these are all things, I think, to think about as we're having the conversation, both broadly as a society and as individuals.
As engineers, we should be thinking about the kinds of technologies that we're developing and, more importantly, how they're used. And so I think, from this, it's very useful in risk and ethics assessment to take a technology agnostic approach. [Slide 7] That is, to assess the product or the application, not the underlying enabling technology or technique itself. Taking the view that the technology is neutral; the analogy often used is that a hammer could be used to build a house or it could be used to kill your neighbor, right? So I think using the technology agnostic approach to ethics and risk assessment often exposes the less obvious impacts of the application and the product and how we're using the technology, and thus helps us maximize the benefits of our technology and minimize the possible negative outcomes, okay? [Slide 8] So here I also want to talk about something called technological determinism, and particularly the myth of technological determinism. Technological determinism is the idea that technology is the primary governing force for societal change. That it shapes society's values, social structure, and history. That social progress is driven by technological innovation and therefore follows an inevitable course. This framework of thinking actually runs counter to the agnostic view of the technology itself. And it has largely been debunked, or has fallen out of favor, among the historians and social scientists who study humanity's progress. However, it has really persisted in areas such as business and policy. So let's look at some of the issues with it. [Slide 9] The myth of technological determinism is implicitly embedded in our collective memory and in how history is taught. So you probably, from your elementary school education, if you were educated in the United States for example, have studied things like Eli Whitney and the cotton gin, a very classic example, right?
That Eli Whitney invented the cotton gin, and out of the cotton gin slavery grew and led to the Civil War. That Gutenberg invented the printing press, which put the Bible in the hands of the many and fueled the Reformation. Okay, so in this framework of thinking and looking at history, technology suddenly appears and important things happen. The focus is on some single genius inventor and the consequences, rather than on the genesis of the invention, okay. So the problem with this is that it decouples the average citizen from technology development. It also ignores the societal forces that drove the technology, and that's a really important piece. Does technology drive society, or does society drive technology? And it leaves us with a feeling that outcomes are inevitable, right? That the single event made this big, huge consequence in our history inevitable. And this framework of thinking persists broadly in society, in business, and among policy makers, although it has somewhat been debunked and fallen out of favor, as I said, among the experts who study this and within the sciences. I think scientists inherently know that invention very rarely happens in an individual, miraculous moment. [Slide 10] So really we need to take the broader view and not just focus on technology influencing society, but also on society driving technology. And so there are some actions that we, as engineers and developers of technology, need to think about and be a part of. One is improving the technical literacy of citizens, so that all people can be a part of this cycle, right? So that they're not decoupled from some imaginary single genius inventor. Another action is working on reducing the hatred while keeping technology moving forward. Again, focusing on the technology and the good things we can do with it, and minimizing the societal drivers that will push technology into negative or destructive realms.
And so cell engineers, I think, need to understand and have an awareness of the societal forces that are driving the technology they're working on. And that's one of my goals, and the importance of having a lecture like this in a cellular engineering course, okay? [Slide 11] Another thing to think about, and we raised it in our last lecture on synthetic life, is this question: does synthetic equal unnatural? I've found in my experience that people often feel strongly about this but cannot always articulate why. And one of the things that drives your view on this, I think, is that people have different views of the place of humans in the natural world. In other words, do you see humans as an equal member and a part of nature? Or do you see humans as somehow unique, dominant, and external to nature? And this is one of the many things that contributes, I think, to different viewpoints on this spectrum: is cellular engineering and re-engineering cells a natural thing to do, or an unnatural thing to do? One extreme view is that technology such as cellular engineering is a human evolutionary advantage. It's what we naturally have evolved to do to survive. Therefore, all synthetic technology is natural, okay? That's one extreme. The other extreme is to think that we, as humans, are arrogant to think we have any right to engineer life, right? And this would lead to the strong conclusion that all synthetic life technology is unnatural. Now, where individuals fall on this spectrum, many things feed into that: their religious, scientific, and personal backgrounds and their education inform this world view. It's important to note, however, that this is not a clear science versus religion dichotomy.
You can actually make both scientific and religious arguments for either extreme, and there are many gradations that exist between these extremes as well. And so, you know, as you're listening to this, take a moment to reflect on where you think you fall on this spectrum, as well as where others, perhaps family members and friends, people you know, might fall on it. [Slide 12] So how do we then approach, with a framework, the ethics and assessment of our particular applications of technologies, the things that we might be working on and doing? Well, Western bioethics is primarily based on something called principlism. The four most common principles in this ethical framework are beneficence, which is basically to do good; non-maleficence, which is basically to do no harm; justice; and autonomy. Justice is essentially treating all people fairly, and autonomy is respecting the views and choices of the individual, okay? And so this framework is the primary foundation for medical ethics in the Western world, as well as for much of bioethics and views on biotechnology. [Slide 13] In the US, for example, President Obama commissioned the Presidential Commission for the Study of Bioethical Issues to look at synthetic biology, the field that includes some of the topics we've been looking at: gene circuits, engineering cells at the level of DNA, and, more broadly, cellular systems. The commission published its report in 2010. And this report, and I have the link here if this is something you want to look up, and I encourage you to, is really based in this framework of principlism that I just discussed a minute ago. And so this is one of the most common ways.
It's certainly not the only way to approach some of these issues, but it's one I want you to be familiar with, because it's one of the strongest practical foundations that we use in bioethics. And so with that, [Slide 14] I also want to bring up a few additional topics that are recommended for your future exploration, things we haven't really talked about today. One is cognitive illusions, and this is very important in risk assessment. Because it turns out that things that are very memorable in our minds are often erroneously perceived as more probable. And they can, on an emotional level, cause us to overestimate risk, okay. It's part of how our human brains work. And this can often influence risk assessment of technologies and decision making, so it's something to be aware of and something to perhaps consider. Another is the global oversight of technology. There are differences around the world in the frameworks we use to approach ethics, as well as in laws and regulations, many of which may be federal. But increasingly, the impact of the technology we develop is global. So managing those levels of regulation, from individual countries to groups of countries to the world, is a very complicated and important issue that needs to be sorted out as we move forward with any technology. And then, interestingly, there's moral status. As we become more and more sophisticated in our ability to modify and create synthetic organisms, and to modify the human body with things like brain-computer interfaces and neural implants, which we mentioned with our biohybrid devices, we need to start thinking about the autonomy and moral status of biohybrid robots and organisms.
And so the point of this lecture is really to plant some seeds and point you in the right direction toward some further reading, such as that Presidential Commission report. And I hope, as you progress in your cellular engineering, you will do that. [Slide 15] So coming up, we're going to summarize and review some of the topics that we've covered, and sort of look across the entire course and see where we're at and where you are now, okay? So I'll see you next in our wrap up and summary.