Futures studies
Philosophy of the future—Where are we going?
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Contents
- Risk
- Technological growth
- Artificial intelligence
- Transhumanism
- My thoughts
- Annotated bibliography
- Links and encyclopedia articles
- References
Risk
Limits to growth
- The Limits to Growth (1972)
- O’Neill, G.K. (1974). The colonization of space.1
See also:
- Ecology in the Outline on Ethics
Existential threats
- Bostrom, N. (2013). Existential risk prevention as global priority.2
- Climate change
- WMDs
- Pandemics
- …
- Doomsday argument (the standard worked form is sketched after this list)
- Hanson, R. (1998). Critiquing the doomsday argument.
- Baum, S.D. et al. (2019). Long-term trajectories of human civilization.3
- Bostrom, N. (2019). The vulnerable world hypothesis.4
- Aschenbrenner, L. (2020). Existential risk and growth.
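For orientation, the simplest (Gott-style) statement of the doomsday argument that Hanson (1998) critiques: treat your birth rank n among all humans who will ever exist as a uniform draw from 1 to N; then

$$ P\!\left(\frac{n}{N} \ge 0.05\right) \approx 0.95 \quad\Rightarrow\quad P\!\left(N \le 20\,n\right) \approx 0.95 $$

With roughly 10^11 humans born so far, that would cap N at about 2 × 10^12. Critiques such as Hanson's target the sampling and prior assumptions rather than the arithmetic.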
Fermi paradox
- Fermi paradox
- Freitas, R.A. (1983). Extraterrestrial intelligence in the solar system: Resolving the Fermi paradox.
- Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy.5
- Sandberg, A., Drexler, E., & Ord, T. (2018). Dissolving the Fermi paradox.6 (A sketch of the paper's core move follows this list.)
- Hanson, R., Martin, D., McCarter, C., & Paulson, J. (2021). If loud aliens explain human earliness, quiet aliens are also rare.7
- Wong, M.L. & Bartlett, S. (2022). Asymptotic burnout and homeostatic awakening: A possible solution to the Fermi paradox?8
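The core move in Sandberg, Drexler & Ord (2018) is to propagate order-of-magnitude uncertainty through the Drake equation instead of multiplying point estimates. A minimal sketch of that idea, with placeholder parameter ranges chosen for illustration rather than the distributions fitted in the paper:

```python
# Illustrative Monte Carlo in the spirit of Sandberg, Drexler & Ord (2018):
# propagate wide (order-of-magnitude) uncertainty through the Drake equation.
# The ranges below are placeholders, not the paper's fitted distributions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # number of Monte Carlo samples

def log_uniform(low, high, size):
    """Sample uniformly in log10-space between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

R_star = log_uniform(1, 100, n)      # star formation rate (stars/year)
f_p    = log_uniform(0.1, 1, n)      # fraction of stars with planets
n_e    = log_uniform(0.1, 10, n)     # habitable planets per such star
f_l    = log_uniform(1e-30, 1, n)    # fraction on which life arises
f_i    = log_uniform(1e-3, 1, n)     # fraction developing intelligence
f_c    = log_uniform(1e-2, 1, n)     # fraction becoming detectable
L      = log_uniform(1e2, 1e8, n)    # longevity of detectable phase (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L  # detectable civilizations now

print(f"median N: {np.median(N):.3g}")
print(f"P(N < 1), i.e. we are plausibly alone: {np.mean(N < 1):.2f}")
```

Because the factors multiply, a few highly uncertain terms (especially f_l) can put a large share of the probability mass below N = 1 even when the mean of N is huge; that, roughly, is the paper's resolution of the apparent paradox.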
Technological growth
Future of computing
- Feynman, R.P. (1959). There’s plenty of room at the bottom.9
- Vinge, V. (1993). The coming technological singularity.10
Future of the internet
- Clegg, N. (2022). Making the metaverse: What it is, how it will be built, and why it matters.
Simulation argument
- Simulation argument11 and patch12 (the key quantity is given after this list)
- Simulation hypothesis
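For reference, the quantity the original argument turns on (Bostrom, 2003): with f_P the fraction of human-level civilizations that reach a posthuman stage, N̄ the average number of ancestor-simulations such a civilization runs, and H̄ the average number of pre-posthuman individuals per civilization, the fraction of all observers with human-type experiences who are simulated is

$$ f_{\mathrm{sim}} \;=\; \frac{f_P\,\bar{N}\,\bar{H}}{f_P\,\bar{N}\,\bar{H} + \bar{H}} \;=\; \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}. $$

The trilemma follows: f_sim is close to 1 unless f_P · N̄ is close to zero, i.e. unless almost no civilizations reach a posthuman stage or almost none of those that do run ancestor-simulations.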
Artificial intelligence
Outlook
- Feynman, R.P. (1985). Talk: Can machines think?
- Russell, S. & Norvig, P. (1995). Artificial Intelligence: A Modern Approach.13
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.14
- Armstrong, S., Sotala, K., & Ó hÉigeartaigh, S.S. (2014). The errors, insights and lessons of famous AI predictions – and what they mean for the future.15
- Urban, T. (2015). The AI Revolution: The Road to Superintelligence.16
- Marcus, G. (2018). Deep learning: A critical appraisal.17
- Gwern. (2020). The scaling hypothesis.
- Carroll, S. & Russell, S. (2020). Video: Stuart Russell on making artificial intelligence compatible with humans. Mindscape 94.
- Russell: AI gives us the Midas touch
- Zhang, D. et al. (2021). The AI Index 2021 Annual Report.18
- Zhang, D. et al. (2022). The AI Index 2022 Annual Report.19
- Marcus, G. (2022). Deep learning is hitting a wall: What would it take for artificial intelligence to make real progress?
- Benaich, N. & Hogarth, I. (2022). State of AI Report 2022.
- Cotra, A. (2022). Two-year update on my personal AI timelines.
- Steinhardt, J. (2023). Forecasting ML Benchmarks in 2023.
- Future of Life Institute. (2023). Pause giant AI experiments: An open letter.
- Future of Life Institute. (2023). Policymaking in the Pause: What can policymakers do now to combat risks from advanced AI systems?
- Altman, S. (2023). Planning for AGI and beyond.
- Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4.20
- Maslej, N. et al. (2023). The AI Index 2023 Annual Report.21
- Maslej, N. et al. (2024). The AI Index 2024 Annual Report.22
- Aschenbrenner, L. (2024). Situational Awareness: The decade ahead.23
- Zuckerberg, M. (2024). Open source AI is the path forward.
- Amodei, D. (2024). Machines of loving grace.
Risks
- Jeremy Howard, warning about the abilities of models like GPT-2:
We have the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter.24
- Gabriel, I. et al. (DeepMind). (2024). The ethics of advanced AI assistants.25
- Longpre, S. (2024). Consent in crisis: The rapid decline of the AI data commons.26
Transhumanism
- Nietzsche, F. (1883). Thus Spoke Zarathustra.
- Übermensch
- Haldane, J.B.S. (1924). Daedalus; or, Science and the Future.
- Russell, B. (1924). Icarus, or the future of science.
- Huxley, Julian (1957). Transhumanism.
- Bostrom, N. (2005). The fable of the dragon-tyrant.27
My thoughts
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Annotated bibliography
Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy.
- TODO.
My thoughts
- TODO.
Bostrom, N. (2003). Are You Living in a Computer Simulation?
- TODO.
My thoughts
- TODO.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.
- TODO.
My thoughts
- TODO.
More articles to do
- TODO.
Links and encyclopedia articles
SEP
IEP
Wikipedia
Others
Videos
References
O’Neill (1974).↩︎
Bostrom (2013).↩︎
Baum et al. (2019).↩︎
Bostrom (2019).↩︎
Bostrom (2002).↩︎
Sandberg, Drexler, & Ord (2018).↩︎
Hanson, Martin, McCarter, & Paulson (2021).↩︎
Wong & Bartlett (2022).↩︎
Feynman (1959).↩︎
Vinge (1993).↩︎
Bostrom (2003).↩︎
Bostrom (2011).↩︎
Russell & Norvig (1995).↩︎
Bostrom (2014).↩︎
Armstrong, Sotala, & Ó hÉigeartaigh (2014).↩︎
Urban (2015).↩︎
Marcus (2018).↩︎
Zhang et al. (2021).↩︎
Zhang et al. (2022).↩︎
Bubeck et al. (2023).↩︎
Maslej et al. (2023).↩︎
Maslej et al. (2024).↩︎
Aschenbrenner (2024).↩︎
Vincent (2019).↩︎
Gabriel et al. (2024).↩︎
Longpre (2024).↩︎
Bostrom (2005).↩︎