Futures studies

Philosophy of the future—Where are we going?

Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Contents

  1. Risk
    1. Limits to growth
    2. Existential threats
    3. Fermi paradox
  2. Technological growth
    1. Future of computing
    2. Future of the internet
    3. Simulation argument
  3. Artificial intelligence
    1. Outlook
    2. Risks
  4. Transhumanism
  5. My thoughts
  6. Annotated bibliography
    1. Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy.
    2. Bostrom, N. (2003). Are You Living in a Computer Simulation?
    3. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.
    4. More articles to do
  7. Links and encyclopedia articles
    1. SEP
    2. IEP
    3. Wikipedia
    4. Others
    5. Videos
  8. References

Risk

Limits to growth

See also:

Existential threats

Fermi paradox

Technological growth

Future of computing

See also:

Future of the internet

Figure 1: ChatGPT has had faster user growth than any other app (source: Yahoo! Finance, Feb. 2023).

Simulation argument

Artificial intelligence

Outlook

See also:

Risks

We have the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter.24
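To make the scale of that concern concrete, below is a minimal sketch of how little code it takes to mass-produce plausible, context-conditioned text. It assumes the Hugging Face transformers library and the publicly released GPT-2 model discussed in the footnoted article; the prompt and parameters are illustrative, not anyone's actual pipeline.

    # Sketch: several plausible continuations of an arbitrary prompt.
    # Assumes `pip install transformers torch`; the model choice (gpt2) is illustrative.
    from transformers import pipeline, set_seed

    set_seed(42)  # fixed seed so the example is reproducible
    generator = pipeline("text-generation", model="gpt2")

    prompt = "In response to your comment about local zoning policy,"
    outputs = generator(
        prompt,
        max_new_tokens=60,       # short, reply-sized completions
        num_return_sequences=3,  # three distinct "voices" from one prompt
        do_sample=True,
    )

    for i, out in enumerate(outputs, 1):
        print(f"--- variant {i} ---")
        print(out["generated_text"])

Looping something like this over scraped threads is essentially the whole generation step; as the footnoted article argues, the hard problem is detection and filtering, not production.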

Transhumanism

My thoughts

Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Annotated bibliography

Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy.

  • TODO.

My thoughts

  • TODO.

Bostrom, N. (2003). Are You Living in a Computer Simulation?

  • TODO.

My thoughts

  • TODO.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.

  • TODO.

My thoughts

  • TODO.

  • TODO.

References

Armstrong, S., Sotala, K., & Ó hÉigeartaigh, S. S. (2014). The errors, insights and lessons of famous AI predictions – and what they mean for the future. Journal of Experimental & Theoretical Artificial Intelligence, 26, 317–342. https://www.fhi.ox.ac.uk/wp-content/uploads/FAIC.pdf
Aschenbrenner, L. (2024). Situational Awareness: The decade ahead. https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf
Baum, S.D. et al. (2019). Long-term trajectories of human civilization. Foresight, 21, 55–83.
Bostrom, N. (2002). Anthropic Bias: Observation selection effects in science and philosophy. Routledge.
———. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53, 243–255.
———. (2005). The fable of the dragon-tyrant. Journal of Medical Ethics, 31, 273–277. https://www.nickbostrom.com/fable/dragon.html
———. (2011). A patch for the simulation argument. Analysis, 71, 54–61.
———. (2013). Existential risk prevention as global priority. Global Policy, 4, 15–31. https://www.existential-risk.org/concept.pdf
———. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
———. (2019). The vulnerable world hypothesis. Global Policy, 10, 455–476. https://nickbostrom.com/papers/vulnerable.pdf
Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. https://arxiv.org/abs/2303.12712
Feynman, R. P. (1959). There’s plenty of room at the bottom. https://calteches.library.caltech.edu/1976/1/1960Bottom.pdf
Gabriel, I. et al. (2024). The ethics of advanced AI assistants. https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/ethics-of-advanced-ai-assistants/the-ethics-of-advanced-ai-assistants-2024-i.pdf
Hanson, R., Martin, D., McCarter, C., & Paulson, J. (2021). If loud aliens explain human earliness, quiet aliens are also rare. The Astrophysical Journal, 922, 182. https://iopscience.iop.org/article/10.3847/1538-4357/ac2369
Longpre, S. (2024). Consent in crisis: The rapid decline of the AI data commons. https://www.dataprovenance.org/Consent_in_Crisis.pdf
Marcus, G. (2018). Deep learning: A critical appraisal. https://arxiv.org/abs/1801.00631
Maslej, N. et al. (2023). The AI Index 2023 Annual Report. https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
———. (2024). The AI Index 2024 Annual Report. https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf
O’Neill, G. K. (1974). The colonization of space. Physics Today, 27, 32–40. https://pubs.aip.org/physicstoday/article/27/9/32/429507/The-colonization-of-spaceCareful-engineering-and
Russell, S. & Norvig, P. (1995). Artificial Intelligence: A modern approach (1st ed.). Prentice Hall.
Sandberg, A., Drexler, E., & Ord, T. (2018). Dissolving the Fermi paradox. https://arxiv.org/abs/1806.02404
Urban, T. (2015). The AI Revolution: The Road to Superintelligence. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Vincent, J. (2019). OpenAI’s new multitalented AI writes, translates, and slanders: A step forward in AI text-generation that also spells trouble. https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-openai-gpt2
Vinge, V. (1993). The coming technological singularity. https://edoras.sdsu.edu/~vinge/misc/singularity.html
Wong, M. L. & Bartlett, S. (2022). Asymptotic burnout and homeostatic awakening: A possible solution to the Fermi paradox? Journal of the Royal Society Interface, 19, 20220029. https://royalsocietypublishing.org/doi/full/10.1098/rsif.2022.0029
Zhang, D. et al. (2021). The AI Index 2021 Annual Report. Human-Centered Artificial Intelligence, Stanford University. https://arxiv.org/abs/2103.06312
———. (2022). The AI Index 2022 Annual Report. Human-Centered Artificial Intelligence, Stanford University. https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf

  1. O’Neill (1974).↩︎

  2. Bostrom (2013).↩︎

  3. Baum et al. (2019).↩︎

  4. Bostrom (2019).↩︎

  5. Bostrom (2002).↩︎

  6. Sandberg, Drexler, & Ord (2018).↩︎

  7. Hanson, Martin, McCarter, & Paulson (2021).↩︎

  8. Wong & Bartlett (2022).↩︎

  9. Feynman (1959).↩︎

  10. Vinge (1993).↩︎

  11. Bostrom (2003).↩︎

  12. Bostrom (2011).↩︎

  13. Russell & Norvig (1995).↩︎

  14. Bostrom (2014).↩︎

  15. Armstrong, Sotala, & Ó hÉigeartaigh (2014).↩︎

  16. Urban (2015).↩︎

  17. Marcus (2018).↩︎

  18. Zhang et al. (2021).↩︎

  19. Zhang et al. (2022).↩︎

  20. Bubeck et al. (2023).↩︎

  21. Maslej et al. (2023).↩︎

  22. Maslej et al. (2024).↩︎

  23. Aschenbrenner (2024).↩︎

  24. Vincent (2019).↩︎

  25. Gabriel et al. (2024).↩︎

  26. Longpre (2024).↩︎

  27. Bostrom (2005).↩︎