Future Studies

Nick Bostrom is the Director of the Future of Humanity Institute at Oxford University.
I try to think about the future in a different way than that traditionally associated with "future studies". I don't know how to describe what is different about my approach; I can only illustrate it by presenting you with some of my papers.
- How Unlikely is a Doomsday Catastrophe?
  Examines the risk from physics experiments and natural events to the local fabric of spacetime. Argues that the Brookhaven report overlooks an observation selection effect, and shows how this limitation can be overcome by using data on planet formation rates (see the sketch below). [With Max Tegmark] [Expanded version of original in Nature, 2005, 438, 754] [pdf]
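A simplified sketch of the observation-selection idea (my own notation and simplifying assumptions, not the paper's full derivation): suppose sterilizing catastrophes strike a given region as a Poisson process with characteristic timescale \tau. Mere survival is uninformative, since observers can only find themselves where no catastrophe has occurred. But the formation time t of an observer's planet remains informative: conditional on observers existing, its density is the planet formation rate weighted by the survival probability,

    p(t \mid \text{observers}) \propto f_{\mathrm{p}}(t)\, e^{-t/\tau}

A short timescale \tau would concentrate observers on the earliest-formed planets; since Earth's formation (roughly 9 Gyr after the Big Bang) is not unusually early, short timescales are disfavored.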
- Technological Revolutions: Ethics and Policy in the Dark
  Technological revolutions are among the most important things that happen to humanity. This paper discusses some of the ethical and policy issues raised by anticipated technological revolutions, such as nanotechnology. [In Nanotechnology and Society, eds. Nigel M. de S. Cameron and M. Ellen Mitchell (John Wiley), 2007] [pdf]
- The Future of Human Evolution
  This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being worth caring about. We then discuss how such outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated effort to control human evolution by adopting social policies that modify the default fitness function of future life forms. [In Death and Anti-Death, ed. Charles Tandy (Ria University Press, 2005)] [pdf | html]
- Anthropic Bias: Observation Selection Effects in Science and Philosophy
  Failure to consider observation selection effects results in a kind of bias that infests many branches of science and philosophy. This book presents the first mathematical theory of how to correct for these biases. (The core principle is sketched below.)
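The book's central principle, the Self-Sampling Assumption (SSA), can be roughly paraphrased as follows (my wording, not a quotation): one should reason as if one were a random sample from the set of all observers in one's reference class. In a possible world w containing N(w) such observers, this assigns

    P(\text{I am observer } o \mid w) = \frac{1}{N(w)}

to each observer o in w, a credence assignment that is then combined with ordinary Bayesian conditionalization on one's evidence.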
- Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards
  Existential risks are ways in which we could screw up badly and permanently. Remarkably, relatively little serious work has been done in this important area. The point, of course, is not to welter in doom and gloom but to better understand where the biggest dangers are so that we can develop strategies for reducing them. [Journal of Evolution and Technology, 2002, vol. 9] [html | pdf]
- Astronomical Waste: The Opportunity Cost of Delayed Technological Development
  Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives... [Utilitas, 2003, Vol. 15, No. 3, pp. 308-314] [html | pdf]
- Ethical Issues in Advanced Artificial Intelligence
  Some cursory notes; not very in-depth. [In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17] [html | pdf]
- Are You Living in a Computer Simulation?
  This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching the posthuman stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations (or variations) of its evolutionary history; (3) we are almost certainly living in a computer simulation. It follows that the naïve transhumanist dogma that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. (The fraction at the heart of the argument is sketched below.) [Preprint, Philosophical Quarterly, 2003, Vol. 53, No. 211, pp. 243-255] [pdf | html]
  Also a Reply to Brian Weatherson's comments [Philosophical Quarterly, Vol. 55, No. 218, pp. 90-97]
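The argument turns on the expected fraction of human-type experiences that are simulated (my reconstruction of the paper's notation):

    f_{\mathrm{sim}} = \frac{f_P\,\bar{N}\,\bar{H}}{f_P\,\bar{N}\,\bar{H} + \bar{H}} = \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}

where f_P is the fraction of human-level civilizations that reach a posthuman stage, \bar{N} is the average number of ancestor-simulations run by a posthuman civilization, and \bar{H} is the average number of individuals who lived in a civilization before it reached posthumanity. Because a posthuman civilization that ran ancestor-simulations at all could run astronomically many, f_P\,\bar{N} is plausibly either near zero (propositions 1 or 2) or very large, in which case f_{\mathrm{sim}} is close to one (proposition 3).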