11 Futures studies
Philosophy of the future—Where are we going?
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
11.1 Risk
11.1.1 Limits to growth
- The Limits to Growth (1972)
- O’Neill, G.K. (1974). The colonization of space. 1
1 O’Neill (1974).
See also:
- Ecology in the Outline on Ethics
11.1.2 Existential threats
- Bostrom, N. (2013). Existential risk prevention as global priority. 2
- Climate change
- WMDs
- Pandemics
- Asteroids
- AI
- …
- Doomsday argument
- Hanson, R. (1998). Critiquing the doomsday argument.
- Baum, S.D. et al. (2019). Long-term trajectories of human civilization. 3
- Bostrom, N. (2019). The vulnerable world hypothesis. 4
- Aschenbrenner, L. (2020). Existential risk and growth.
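The Doomsday argument listed above (and critiqued by Hanson) turns on a simple Bayesian update: if you treat yourself as a uniformly random sample from all humans who will ever live, learning your birth rank shifts probability toward smaller total populations. A toy sketch of that update, with an illustrative uniform prior and made-up candidate totals (not figures from any of the cited papers):

```python
# Toy Bayesian update behind the Doomsday argument.
# Hypotheses: candidate totals for how many humans will ever live.
# Likelihood of observing birth rank r given total N is 1/N (for r <= N),
# i.e. you are equally likely to be any human in the sequence.

def doomsday_posterior(rank, totals, prior=None):
    """Posterior over candidate total populations given one's birth rank."""
    if prior is None:
        prior = [1.0 / len(totals)] * len(totals)  # uniform prior (illustrative)
    likelihoods = [(1.0 / n if rank <= n else 0.0) for n in totals]
    joint = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

# Roughly 100 billion humans have been born so far; compare a "doom soon"
# total of 200 billion against a "doom late" total of 200 trillion.
rank = 100e9
totals = [200e9, 200e12]
post = doomsday_posterior(rank, totals)
print(post)  # nearly all mass lands on the smaller total
```

The update concentrates almost all posterior mass on the smaller total despite equal priors, which is exactly the move Hanson's critique targets.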
11.1.3 Fermi paradox
- Fermi paradox
- Freitas, R.A. (1983). Extraterrestrial intelligence in the solar system: Resolving the Fermi paradox.
- Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy. 5
- Sandberg, A., Drexler, E., & Ord, T. (2018). Dissolving the Fermi paradox. 6
- Hanson, R., Martin, D., McCarter, C., & Paulson, J. (2021). If loud aliens explain human earliness, quiet aliens are also rare. 7
- Wong, M.L. & Bartlett, S. (2022). Asymptotic burnout and homeostatic awakening: A possible solution to the Fermi paradox? 8
5 Bostrom (2002).
6 Sandberg, Drexler, & Ord (2018).
7 Hanson, Martin, McCarter, & Paulson (2021).
8 Wong & Bartlett (2022).
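Sandberg, Drexler & Ord's "dissolution" is that multiplying point estimates in the Drake equation overstates confidence: propagating our actual uncertainty through each factor yields a wide distribution for N, with substantial probability that we are alone even when the mean is large. A minimal Monte Carlo sketch of that idea, using illustrative log-uniform ranges rather than the paper's fitted distributions:

```python
import random, math

# Monte Carlo Drake equation: instead of multiplying point estimates,
# sample each factor from an uncertainty range (log-uniform here, as a
# stand-in for the fitted distributions in Sandberg, Drexler & Ord 2018)
# and look at the distribution of N, the number of communicating
# civilizations in the galaxy.

def log_uniform(lo, hi):
    """Sample uniformly in log-space between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_N():
    R  = log_uniform(1, 100)      # star formation rate (stars/yr)
    fp = log_uniform(0.1, 1)      # fraction of stars with planets
    ne = log_uniform(0.1, 10)     # habitable planets per such star
    fl = log_uniform(1e-30, 1)    # fraction developing life (huge uncertainty)
    fi = log_uniform(1e-3, 1)     # fraction developing intelligence
    fc = log_uniform(1e-2, 1)     # fraction that emit detectable signals
    L  = log_uniform(1e2, 1e8)    # signaling lifetime (yr)
    return R * fp * ne * fl * fi * fc * L

random.seed(0)
samples = [sample_N() for _ in range(100_000)]
mean_N = sum(samples) / len(samples)
p_alone = sum(s < 1 for s in samples) / len(samples)
print(f"mean N = {mean_N:.3g}, P(N < 1) = {p_alone:.2f}")
```

The point of the exercise: the mean of N can be large (driven by the upper tail) while P(N < 1) remains substantial, so "they should be everywhere" and "we are probably alone" are compatible once uncertainty is propagated.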
Stanley Kubrick interviewed by Playboy Magazine in 1968:
I will say that the God concept is at the heart of 2001—but not any traditional, anthropomorphic image of God. I don’t believe in any of Earth’s monotheistic religions, but I do believe that one can construct an intriguing scientific definition of God, once you accept the fact that there are approximately 100 billion stars in our galaxy alone, that each star is a life-giving sun and that there are approximately 100 billion galaxies in just the visible universe. Given a planet in a stable orbit, not too hot and not too cold, and given a few billion years of chance chemical reactions created by the interaction of the sun’s energy on the planet’s chemicals, it’s fairly certain that life in one form or another will eventually emerge. It’s reasonable to assume that there must be, in fact, countless billions of such planets where biological life has arisen, and the odds of some proportion of such life developing intelligence are high. Now, the sun is by no means an old star, and its planets are mere children in cosmic age, so it seems likely that there are billions of planets in the universe not only where intelligent life is on a lower scale than man but other billions where it is approximately equal and others still where it is hundreds of thousands of years in advance of us. When you think of the giant technological strides that man has made in a few millennia—less than a microsecond in the chronology of the universe—can you imagine the evolutionary development that much older life forms have taken? They may have progressed from biological species, which are fragile shells for the mind at best, into immortal machine entities—and then, over innumerable eons, they could emerge from the chrysalis of matter transformed into beings of pure energy and spirit. Their potentialities would be limitless and their intelligence ungraspable by humans. 9
9 Kubrick (1968).
11.2 Technological growth
11.2.1 Future of computing
- Feynman, R.P. (1959). There’s plenty of room at the bottom. 10
- Vinge, V. (1993). The coming technological singularity. 11

11.2.2 Future of the internet
- Clegg, N. (2022). Making the metaverse: What it is, how it will be built, and why it matters.

11.2.3 Simulation argument
- Simulation argument 12 and patch 13
- Simulation hypothesis
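The core of Bostrom's argument is a single fraction: among observers with human-type experiences, the simulated share is f_sim = f_p·N / (f_p·N + 1), where f_p is the fraction of civilizations that reach a posthuman stage and run ancestor-simulations, and N is the average number of such simulations each runs (each comparable in size to one real ancestral population). A sketch of how the trilemma falls out of that formula:

```python
# Bostrom (2003): fraction of human-type observers that are simulated.
#     f_sim = (f_p * N) / (f_p * N + 1)
# f_p: fraction of civilizations that become posthuman AND run
#      ancestor-simulations; N: average simulations each such one runs.

def simulated_fraction(f_p, n_sims):
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Even a tiny f_p pushes f_sim toward 1 once N is large,
# which is why rejecting the conclusion forces one of the
# first two disjuncts (f_p ~ 0, i.e. extinction or abstention):
print(simulated_fraction(1e-6, 1e9))  # ~0.999
print(simulated_fraction(0.0, 1e9))   # 0.0
```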
11.3 Artificial intelligence
11.3.1 Outlook
- Good, I.J. (1965). Speculations concerning the first ultraintelligent machine. 14
- Feynman, R.P. (1985). Talk: Can machines think?
- Russell, S. & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. 15
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. 16
- Armstrong, S., Sotala, K., & Ó hÉigeartaigh, S. S. (2014). The errors, insights and lessons of famous AI predictions–and what they mean for the future. 17
- Urban, T. (2015). The AI Revolution: The Road to Superintelligence. 18
- Urban, T. (2015). The AI Revolution, Part 2: Our Immortality or Extinction
- Marcus, G. (2018). Deep learning: A critical appraisal. 19
- Gwern. (2020). The scaling hypothesis.
- Zhang, D. et al. (2021). The AI Index 2021 Annual Report. 20
- Zhang, D. et al. (2022). The AI Index 2022 Annual Report. 21
- Marcus, G. (2022). Deep learning is hitting a wall: What would it take for artificial intelligence to make real progress?
- Benaich, N. & Hogarth, I. (2022). State of AI Report 2022.
- Steinhardt, J. (2023). Forecasting ML Benchmarks in 2023.
- Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. 22
- Maslej, N. et al. (2023). The AI Index 2023 Annual Report. 23
- Maslej, N. et al. (2024). The AI Index 2024 Annual Report. 24
- Zuckerberg, M. (2024). Open source AI is the path forward.
- Perrault, R. et al. (2025). Artificial Intelligence Index Report 2025. 25
- Silver, D. & Sutton, R.S. (2025). Welcome to the Era of Experience.
14 Good (1965).
15 Russell & Norvig (1995).
16 Bostrom (2014).
17 Armstrong, Sotala, & Ó hÉigeartaigh (2014).
18 Urban (2015).
19 Marcus (2018).
20 Zhang, D. et al. (2021).
21 Zhang, D. et al. (2022).
22 Bubeck, S. et al. (2023).
23 Maslej, N. et al. (2023).
24 Maslej, N. et al. (2024).
25 Perrault, R. et al. (2025).
11.3.2 Risks
- Jeremy Howard warning about the abilities of models like GPT-2:
We have the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter. 26
26 Vincent (2019).
- Carroll, S. & Russell, S. (2020). Video: Stuart Russell on making artificial intelligence compatible with humans. Mindscape 94.
- Russell: AI gives us the Midas touch
- Kokotajlo, D. (2021). What 2026 looks like.
- Karnofsky, H. (2022). AI could defeat all of us combined.
- Cotra, A. (2022). Two-year update on my personal AI timelines.
- Future of Life Institute. (2023). Pause giant AI experiments: An open letter.
- Future of Life Institute. (2023). Policymaking in the Pause: What can policymakers do now to combat risks from advanced AI systems?
- Altman, S. (2023). Planning for AGI and beyond.
- Finnveden, L., Riedel, J., & Shulman, C. (2023). AGI and lock-in.
- Aschenbrenner, L. (2024). Situational Awareness: The decade ahead. 27
- Amodei, D. (2024). Machines of loving grace.
- Gabriel, I. et al. (DeepMind). (2024). The ethics of advanced AI assistants. 28
- Longpre, S. (2024). Consent in crisis: The rapid decline of the AI data commons. 29
- Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025). AI 2027.
- Bengio, Y. et al. (2025). International AI Safety Report. 30
- Sadeddine, Z., Maxwell, W., Varoquaux, G., & Suchanek, F.M. (2025). Large language models as search engines: Societal challenges.
11.4 Transhumanism
- Nietzsche, F. (1883). Thus Spoke Zarathustra.
- Übermensch
- Haldane, J.B.S. (1924). Daedalus; or, Science and the Future.
- Russell, B. (1924). Icarus, or the future of science.
- Huxley, Julian (1957). Transhumanism.
- Moravec, H. (1998). When will computer hardware match the human brain?. 31
- Bostrom, N. (2005). The fable of the dragon-tyrant. 32
11.5 My thoughts
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
11.6 Annotated bibliography
11.6.1 Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy.
- TODO.
11.6.1.1 My thoughts
- TODO.
11.6.2 Bostrom, N. (2003). Are You Living in a Computer Simulation?
- TODO.
11.6.2.1 My thoughts
- TODO.
11.6.3 Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.
- TODO.
11.6.3.1 My thoughts
- TODO.
11.6.4 More articles to do
- TODO.