Resolving Human Concerns about AI and Technology with Non-Axiomatic Reasoning Systems
Abstract
This article discusses some of the challenges humanity experiences with modern technologies and proposes potential ways to address them with AI. These challenges include our lack of access to technologies, our lack of trust in them, our inadequate understanding of why they behave as they do, and inequalities related to technology. We discuss how the Non-Axiomatic Reasoning System (NARS), an AI model capable of general-purpose reasoning, can address these issues in a trustworthy and explainable way.
Copyright (c) 2025 Christian Hahm, Russell Suereth

This work is licensed under a Creative Commons Attribution 4.0 International License.