Sunday, March 4, 2012

Technology and the Limits of Ethics

Everyone--or almost everyone--knows the Golden Rule. It's some variant of: do unto others as you'd have others do unto you. There's the negative form from Confucius: what you do not wish for yourself, do not do to others. The Ten Commandments lay it out in a bit more detail with their well-known thou shalts. In fact, many people have tried to show that all ethics is really rooted in the principle of reciprocity. But even if that's true, it doesn't tell us whether ethics should be about reciprocity. Why shouldn't ethics evolve over the course of history, much the way science has? Is there anything technology can teach us about ethics, or ethics about technology?

Philosopher of technology Hans Jonas argues that, despite the vast differences among religious beliefs and traditions, ethical principles typically share the following four features. See if you agree.

  • First, they are usually anthropocentric. The Golden Rule is explicitly about humans and their relation to other humans. Any consideration of animals, ecosystems, geographic formations, or interstellar bodies must be for the sake of humans.
  • Second, ethical principles typically refer to immediate actions and the problems we currently have. They don't mention future generations or even our future selves. The Ten Commandments say to honor thy father and thy mother, but not thy children or children's children.
  • Third, ethical principles are about the things we do, not the beings that we are. You cannot turn the Golden Rule into a statement about being, such as "become whom others would want you to be."
  • Finally, ethical principles treat technology as ethically neutral or ignore it entirely. Technologies are simply means to ends. It is the ends that matter--the means can be ignored.

Jonas argues that recent technological developments have called all of these assumptions into question, for two reasons. First, technology has extended the range of consequences far beyond immediate human interactions. For this reason, knowledge has become essential to being ethical. Today many ethical arguments start with questions like, "Did you know that your chicken comes from..." or "Did you know those clothes are manufactured by people in..." or "Did you know that such-and-such company gives funding to..." It is no longer sufficient simply to consider the situation at hand.

Second, technology has made the future of humanity uncertain. We can eliminate human life forever, or we can extend it indefinitely (at least in principle). We will soon be able to create genetic superpeople. Even everyday actions like driving or eating implicate thousands of people around the globe and may have consequences for future generations. And it's not just about people: it is no longer a given that the Earth can heal itself no matter what we do.

What would ethics grounded in the reality of today look like? It's hard not to think that the future of ethics is politics--both state-centered politics involving legislation and treaties and decentralized collective decision-making about the world we want to leave for the future. The latter is already happening at the water cooler and on blogs. Is this enough? Is ethics fine the way it is? Or do we need a prophet with a new set of commandments for the Internet Age?


  1. I think ethics is something that is (or should be) constantly evolving to factor in different relationships and causal connections, and how these change over time. I have never considered ethics as something involving reciprocity, which makes me wonder if I sometimes conflate ethics with morality. I consider ethics related to the standards or codes that we expect from a group, and therefore from an individual belonging to it, and I suppose there is a quid pro quo component to that.

    At any rate, it seems to me that many of the same concepts that have led to the development of ethical rules for certain professions (e.g., doctors, lawyers) also apply to the technology realm. Those rules are based on the notion that people are not always dealing at arm's length with certain professionals, and the codes of ethical conduct are meant to ensure a certain level of transparency, honesty, and professionalism, and a lack of self-interest. But, as you've suggested, these traditional models also wouldn't work, because the class of people to whom one might owe a fiduciary (or other) duty might not even exist yet. Interesting stuff.

  2. Lots of ideas here! A few responses:

    I tried to stay away from the difference between ethics and morality, since everyone defines them differently. I like Bernard Williams's distinction, in which ethics is the broader category and morality is about principles. He argues that the Greeks thought of ethics as a way of life, but Enlightenment philosophers like Kant tried to turn the question of correct conduct into a set of rules. That is, they reduced ethics to morality.

    Maybe I've been reading too much psychology lately, but psychologists like Jonathan Haidt think about morality and ethics in terms of reciprocity. Still, I think most moral principles have something to do with reciprocity. If everybody treats everyone else in a moral way, the world will be a better place. And you might get into heaven.

    Now, professional ethics does usually take the form of a set of principles. For instance, as a data professional I'm sworn not to share any sensitive information. An interesting question for me is whether it makes any sense to have professional ethics in the expanded Greek sense Williams talks about. Professional ethics as a way of life?

    So, in the end, I want to know whether we need to rethink either ethics or morality in light of new technologies. Are there new principles we should live by? Are there new ways of life we should forge? I suspect both.
