Sunday, October 23, 2011

Gödel, Kuhn, and Human/Computer Co-evolution

Kurt Gödel showed that any consistent formal system powerful enough to express arithmetic is incomplete, meaning that there are truths that can be stated within the system that cannot be demonstrated within it. This means that truth cannot be reduced to axiomatic principles and, therefore, that there is no single system to which all truths could be reduced, whether it be physics, mathematics, logic, or any other system of abstraction.

These insights have become widely accepted within mathematical and scientific communities. After Thomas Kuhn's The Structure of Scientific Revolutions, it has become common to think of domains of knowledge as paradigms which are never proven or disproven but which are eventually abandoned in favor of new, more helpful, or more interesting paradigms. What makes a paradigm valuable cannot be shown within the paradigm itself but by the problems it helps solve or the vistas it opens up.

I've been trying to think through the implications of these insights for technology ever since reading Douglas Hofstadter's Gödel, Escher, Bach. As I understand it, the book's main point is that any attempts to create artificially intelligent systems that are grounded solely in deductive logic are doomed to failure. Human thought involves deduction, but it involves much more. As Hofstadter writes in his introduction to Ernest Nagel and James Newman's Gödel's Proof, the goal of AI research should be to devise systems "guided by visual imagery, the associative patterns linking concepts, and the intuitive processes of guesswork, analogy, and esthetic choice that every [person] uses."

The kind of AI we are most familiar with is Google. It is probably a stretch to say that the Google engine thinks, but it is certain that the algorithms it uses to filter and aggregate trillions of bits of information are guided by analogies and associations if not esthetic choice. Google gets feedback from users in the form of click-throughs, so it can better predict what sites to show in the future. Feedback loops like this are at the center of thought and learning. The more the Google engine can modify itself instead of relying on engineers to tweak algorithms, the more it can be said to think.
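The feedback loop described above can be sketched as a toy score update. To be clear, the page names, scores, and learning-rate rule below are my own illustration, not Google's actual ranking algorithm; the point is only to show how repeated clicks can reorder results without an engineer touching the code:

```python
# A toy click-through feedback loop (an illustration, not Google's algorithm):
# each click on a result nudges its relevance score upward, so the ranking
# adapts to user behavior on its own.

def rank(scores):
    """Return page ids ordered by current relevance score, highest first."""
    return sorted(scores, key=scores.get, reverse=True)

def record_click(scores, page, learning_rate=0.1):
    """Move the clicked page's score a little toward 1.0 (the feedback step)."""
    scores[page] += learning_rate * (1.0 - scores[page])

scores = {"page_a": 0.5, "page_b": 0.5, "page_c": 0.5}

# Users repeatedly click page_b; the system "learns" to rank it first.
for _ in range(5):
    record_click(scores, "page_b")

print(rank(scores))  # page_b now outranks the others
```

Even in this trivial form, the loop has the character the post describes: the users' choices become part of the system's state, and the system's state in turn structures the users' next choices.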

But the computer side of this loop is only one half of the picture. Just as computer systems learn from us, we learn from the systems. It's a dynamic, co-evolutionary process, and we need to think about the kinds of choices that computers make available to us. Eli Pariser has coined the term 'filter bubble' to describe the ways that applications like Google and Facebook filter information for us and thus structure the choices we make. This is not necessarily a bad thing, since we desperately need ways of filtering out information that is not relevant to our purposes. However, it will be a co-evolution that requires careful attention, for, unlike the bee and the flower, we can control our collective destiny.

This brings me back to Kuhn, since he suggests that the terms we currently use to think about our co-evolution are not necessarily the best for the job. One of the interesting things about computing is the way its concepts and language have been used to understand other things. Metaphors from other walks of life permeate computing, like 'the cloud', 'friend', or 'stream', but computing has also shaped the ways we understand each other and the world. It's not simply a matter of words like 'Google', 'text', and 'filter' becoming commonplace, because everything is either a computer or something to be computed today. For example, it's hard not to think of human minds (or 'wetware') along computational lines now. Not only do we 'process data' and 'filter out noise', but we act on information that has been computed so that we can better operate computers. Commercials tell us that our very personhood is threatened if our personal data are lost.

I wonder if we're seeing the waning of a paradigm, or the waxing of a new one. Will the next years of computing see more of the same, only faster? Or will we come to understand thought, action, art, ethics, and even humanity in an entirely different, techno-saturated vein? Are computers just machines (with which we have long been familiar--we are, after all, eating machines, sex machines, and pooping machines), or are they something different?

See also: Jaron Lanier's You Are Not a Gadget


  1. This is great--I like the co-evolutionary model for thinking about AI. This makes sense to me, since intelligence itself is never self-enclosed and always relational. Deleuze's metaphor of the wasp and orchid seems to me another touch point for this way of thinking--pollination instead of mimicry...

  2. Thanks! I really just want to pose the question now: are we thinking about this relationship in the right way? A lot of people think technology is inherently good or bad. This is wrong. A lot of people think technology is just a tool--it can be used for good or bad purposes. This is not quite as wrong, but still not the whole picture, which involves understanding technology as shaping us as we shape it.

  3. Maybe the relationship with food provides an analogy for thinking about how to construct an ethical relationship with technology. An oldie but goodie:

  4. Great suggestion, and a great article. Our relationship with technology is so complex that it's tempting to throw out the question, or to try to answer it with a much smaller question, as 'nutritionists' have done with food. I wonder if it is possible to come up with a simple (but not easy) slogan for technology like "Eat food. Not too much. Mostly plants."

  5. People have a natural tendency to take large, complex systems and break them down into component parts. The downside to this decomposition is that we often end up losing the connections between those components. Working with software, I'm sure you see this all the time on projects.

    As you stated in the very beginning, it's impossible to fully describe the relationship; the relationship itself is a system, and therefore it is more than the sum of its parts.

    I do think that we need to be paying much closer attention to the impact technology has on our lives. Prime examples of this impact can be seen in various social phenomena, like cyber-bullying, flash mobs (both positive and negative ones), and the various uprisings in the Middle East. My concern with much of this technology is that it is becoming pervasive in our lives. Personally, I don't want to be in a state of perpetual connectivity and accessibility. I want to unplug from the rest of the world sometimes. I think it's important to not lose sight of our own inner selves.

  6. I think your concern about the pervasiveness of technology is right-on. Usually it's the big uses of technology that grab our attention, like the Arab Spring or Hiroshima. But I think it's perhaps more important to think about the everyday uses of technology, like email. Email seems boring, but that's precisely because it's become such a part of our everyday lives. The problem then is that it's hard to evaluate something as massively important as email. Individually, we can make choices about plugging in or out. But it doesn't really make sense to say something like, "Email is good, in general."
