Future studies
Philosophy of the future—Where are we going?
TODO.
Contents
- Risk
- Technological growth
- Artificial intelligence
- Transhumanism
- My thoughts
- Annotated bibliography
- Links and encyclopedia articles
- References
Risk
Limits to growth
See also:
- Ecology in the Outline on Ethics.
Existential threats
- Bostrom, N. (2013). Existential risk prevention as global priority.1
- Climate change
- WMDs
- Pandemics
- …
- Doomsday argument (a compact formulation is sketched after this list)
- Hanson, R. (1998). Critiquing the doomsday argument.
- Baum, S.D. et al. (2019). Long-term trajectories of human civilization.2
- Bostrom, N. (2019). The vulnerable world hypothesis.3
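The Doomsday argument item above is easier to weigh with one concrete formulation in view. The version below is Gott's delta-t argument (the Carter-Leslie version reasons over birth rank rather than elapsed time); it is included as background, not as a summary of Hanson's critique. If we assume the fraction of humanity's total lifespan that has already elapsed, r = t_past / t_total, is uniformly distributed on (0, 1), then with 95% probability 0.025 < r < 0.975, which bounds the remaining lifespan t_future = t_past (1 - r) / r:

$$
P\!\left(\frac{t_{\text{past}}}{39} \;<\; t_{\text{future}} \;<\; 39\, t_{\text{past}}\right) \;=\; 0.95 .
$$

Taking an illustrative species age of about 200,000 years, that interval runs from roughly 5,000 to 7.8 million further years. Both the argument's force and its weaknesses live in the assumption that we may treat ourselves as a random sample.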
Fermi paradox
- Fermi paradox
- Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy.4
- Sandberg, A., Drexler, E., & Ord, T. (2018). Dissolving the Fermi paradox.5 (their approach is sketched after this list)
- Hanson, R., Martin, D., McCarter, C., & Paulson, J. (2021). If loud aliens explain human earliness, quiet aliens are also rare.6
- Wong, M.L. & Bartlett, S. (2022). Asymptotic burnout and homeostatic awakening: A possible solution to the Fermi paradox?7
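For the Sandberg, Drexler, and Ord (2018) entry, the central move is to push scientific uncertainty through the Drake equation as probability distributions rather than multiplying point estimates. The Python sketch below only illustrates that method; the log-uniform ranges are invented for this example and are not the distributions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def log_uniform(lo, hi, size):
    """Sample uniformly in log10-space between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

# Illustrative uncertainty ranges (assumptions for this sketch, not the paper's).
R_star = log_uniform(1, 100, n)    # stars formed per year in the galaxy
f_p    = log_uniform(0.1, 1, n)    # fraction of stars with planets
n_e    = log_uniform(0.1, 10, n)   # habitable planets per planet-bearing star
f_l    = log_uniform(1e-30, 1, n)  # fraction on which life arises (deeply uncertain)
f_i    = log_uniform(1e-3, 1, n)   # fraction of those that develop intelligence
f_c    = log_uniform(1e-2, 1, n)   # fraction that become detectable
L      = log_uniform(1e2, 1e8, n)  # years a civilization stays detectable

# Drake equation: expected number of detectable civilizations in the galaxy.
N = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"median N: {np.median(N):.3g}")
print(f"P(N < 1): {np.mean(N < 1):.2f}")  # fraction of parameter draws implying N < 1 (likely alone)
```

With ranges this wide, much of the probability mass for N sits far below 1 even when its mean is large, so "no visible aliens" stops being surprising; that gap between the mean and the full distribution is the paper's point.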
Technological growth
Future of computing
- Feynman, R.P. (1959). There’s plenty of room at the bottom.8
- Vinge, V. (1993). The coming technological singularity.9
See also:
Future of the internet
- Clegg, N. (2022). Making the metaverse: What it is, how it will be built, and why it matters.

Simulation argument
- Simulation argument10 and patch11 (the core formula is restated after this list)
- Simulation hypothesis
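A compact way to read the "Simulation argument and patch" entries: Bostrom (2003) bounds the fraction of observers with human-type experiences who live in simulations, and the trilemma follows from the bound. In lightly simplified notation, with f_P the fraction of human-level civilizations that reach a posthuman stage and N̄ the average number of ancestor-simulations such a civilization runs,

$$
f_{\text{sim}} \;=\; \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}.
$$

Unless f_P N̄ is very small, f_sim is close to 1; hence at least one of the three disjuncts holds: almost no civilizations reach a posthuman stage, almost none that do run ancestor-simulations, or almost all observers like us are simulated.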
Artificial intelligence
Outlook
- Feynman, R.P. (1985). Talk: Can machines think?
- Russell & Norvig12
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.13
- Armstrong, S., Sotala, K., & Ó hÉigeartaigh, S. S. (2014). The errors, insights and lessons of famous AI predictions – and what they mean for the future.14
- Urban, T. (2015). The AI Revolution: The Road to Superintelligence.15
- Marcus, G. (2018). Deep learning: A critical appraisal.16
- Gwern. (2020). The scaling hypothesis. (the underlying scaling laws are restated after this list)
- Carroll, S. & Russell, S. (2020). Video: Stuart Russell on making artificial intelligence compatible with humans. Mindscape 94.
- Russell: AI gives us the Midas touch
- Zhang, D. et al. (2021). The AI Index 2021 Annual Report.17
- Zhang, D. et al. (2022). The AI Index 2022 Annual Report.18
- Marcus, G. (2022). Deep learning is hitting a wall: What would it take for artificial intelligence to make real progress?
- Benaich, N. & Hogarth, I. (2022). State of AI Report 2022.
- Steinhardt, J. (2023). Forecasting ML Benchmarks in 2023.
- Future of Life Institute. (2023). Pause giant AI experiments: An open letter.
- Future of Life Institute. (2023). Policymaking in the Pause: What can policymakers do now to combat risks from advanced AI systems?
- Altman, S. (2023). Planning for AGI and beyond.
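On the Gwern (2020) entry: the empirical backbone of the scaling hypothesis is the observation (Kaplan et al., 2020) that language-model test loss falls as a smooth power law in model size, dataset size, and training compute, so further capability can be bought fairly predictably with scale. A commonly quoted form, given here as background rather than as Gwern's own notation:

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},\qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C},
$$

where N, D, and C are parameter count, dataset size, and compute, and the fitted exponents are small (roughly in the 0.05 to 0.1 range in the original fits), so loss improves slowly but steadily across many orders of magnitude.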
See also:
Risks
- Jeremy Howard, warning about the abilities of models like GPT-2:
We have the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter.19
Transhumanism
- Haldane, J.B.S. (1924). Daedalus; or, Science and the Future.
- Russell, B. (1924). Icarus, or the future of science.
- Huxley, Julian (1957). Transhumanism.
- Bostrom, N. (2005). The fable of the dragon-tyrant.20
My thoughts
TODO.
Annotated bibliography
Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy.
- TODO.
My thoughts
- TODO.
Bostrom, N. (2003). Are You Living in a Computer Simulation?
- TODO.
My thoughts
- TODO.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.
- TODO.
My thoughts
- TODO.
More articles to do
- TODO.
Links and encyclopedia articles
SEP
IEP
Wikipedia
Others
Videos
References
Bostrom (2013).↩︎
Baum, S.D. et al. (2019).↩︎
Bostrom (2019).↩︎
Bostrom (2002).↩︎
Sandberg, Drexler, & Ord (2018).↩︎
Hanson, Martin, McCarter, & Paulson (2021).↩︎
Wong & Bartlett (2022).↩︎
Feynman (1959).↩︎
Vinge (1993).↩︎
Bostrom (2003).↩︎
Bostrom (2011).↩︎
Russell & Norvig (1995).↩︎
Bostrom (2014).↩︎
Armstrong, Sotala, & Ó hÉigeartaigh (2014).↩︎
Urban (2015).↩︎
Marcus (2018).↩︎
Zhang, D. et al. (2021).↩︎
Zhang, D. et al. (2022).↩︎
Vincent (2019).↩︎
Bostrom (2005).↩︎