I've finished reading two interesting books, David Friedman's Future Imperfect and Edward Castronova's Exodus to the Virtual World. Both are attempts to forecast what the future will look like, and I would say that both have a time horizon of about thirty years, although Friedman is explicit in that estimate, whereas Castronova is vague about the amount of time involved.
Future Imperfect tries to work out the implications of a variety of technological changes that will occur, starting with changes in privacy-related technology online--encryption, electronic snooping, etc. I found this part of the book somewhat dull. The implications are sometimes interesting: Will the world become a place of zero privacy, where everyone knows what everyone else is doing? Will this be an oppressive place where the state can easily control everyone (as in 1984), or will the ability to spy on the police, too, make it a better place? The second part of the book is much more interesting, and deals with changes in nanotechnology and biotechnology. What happens if we can build tiny machines that eat and sequester carbon? Could this eliminate global warming? Could we make machines to go into the body and repair it? What are the implications of medical innovations that allow immortality, or the uploading of the contents of our brains into computers or new bodies? It could lead to a wonderful paradise. On the other hand, self-replicating nanomachines could mutate, consuming everything they encounter and converting the world into inanimate, lifeless matter--the "grey goo scenario."
Castronova's book looks at the burgeoning world of Massively Multiplayer Online Role-Playing Games, such as World of Warcraft and EverQuest (the latter now being a dated reference). His view of the future is almost completely positive. Only at the very end of the book does he consider the possibility that the use of artificial worlds could be pathological--that plugging into an experience machine could be unhealthy. Friedman discusses this possibility as well, and comes to the conclusion that, given Nozick's choice between existing in the real world or a slightly better imaginary world, he would choose the real world. He notes that he plays a good deal of World of Warcraft himself, though--I wonder at what price he would be willing to switch to the imaginary world. What if it were not just slightly better, but a great deal better, more fun, and more rewarding?
What I take away most from these books, though, can be summarized in the following graph (for which I am to blame; it's not in Friedman or Castronova):
The vertical axis measures well-being, perhaps in per-capita GDP, a happiness index, or some other measure. "Business as usual" means continued economic growth of around two percent per year over the next thirty years. This is the view most economists take, and radical technological change aside, it's pretty nice. Two percent growth per year for the next thirty years means that the average American will be making a salary well over $80,000 in real terms (i.e., adjusted for inflation). That's a pretty amazing standard of living.
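The arithmetic behind that figure is just compound growth. Here's a quick back-of-the-envelope check; the $45,000 starting salary is my own assumption for illustration, not a number from either book:

```python
# Compound real growth: salary * (1 + rate) ** years
start_salary = 45_000   # assumed current average salary, in today's dollars
growth_rate = 0.02      # two percent real growth per year
years = 30

future_salary = start_salary * (1 + growth_rate) ** years
print(f"${future_salary:,.0f}")  # roughly $81,500
```

Swap in your own starting figure; anything much above $45,000 today compounds to well over $80,000 in thirty years at two percent.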
What's really striking, though, is how much better or worse things could be, and how uncertain that is. These technological changes could push things strongly in one direction or another. Medical advances that eliminate disease could push our well-being far above what ordinary economic growth would suggest. On the other hand, if those advances are based on a technology that ultimately wipes out all life, that's pretty much the worst outcome possible. Seen in this light, global warming seems like a pretty small problem. Even the worst-case scenario of a twenty-foot sea-level rise pales compared to the extinction of all life. Furthermore, if nanomachines really do pay off (and don't kill us), they could eliminate the problem of global warming altogether, either by eating the stuff that causes it or by addressing its effects (imagine an army of nanomachines that reduce ocean acidification, for example). We really don't have any idea how this will work out, and the range of possibilities, from disastrous to heavenly, is staggering.
It's not clear that policy can allow us to avoid the undesirable outcomes. Even if the U.S. government were to strictly regulate nanotechnology, it would be unlikely to be even as (un)successful at that as it has been at preventing the development of nuclear weapons. The U.S. government can't regulate everyone everywhere. Whatever is going to happen is going to happen, and there's not a lot "we" can do about it through government.
Personally, my view is optimistic, but I'm not sure that there is good reason for that. My optimism is heavily based on the last two hundred years of technological change, which has mostly made people better off (granted, it has also killed millions of people along the way through improved military technology, but that must be weighed against the lives saved and made better). If Friedman is to be believed, the next thirty years are going to bring changes so radical that the last two hundred years will pale by comparison.
Update: Thanks to David Friedman for a correction in the comments, and for linking to the online version of the book.