Considerations for the Safety Analysis of AI-Enabled Systems
Abstract
This study explores the applicability of hazard analysis techniques to Artificial Intelligence/Machine Learning (AI/ML)-enabled systems, a growing area of concern in safety-critical domains. The study evaluates 127 hazard analysis techniques described in the System Safety Society’s System Safety Analysis Handbook (1997) for their relevance to the unique challenges posed by AI-enabled systems. A qualitative, criteria-based assessment framework was employed to systematically analyze each technique against key AI-specific considerations, including complexity management, human-AI interaction, dynamic and adaptive behavior, software-centric focus, probabilistic and uncertainty handling, and iterative development compatibility. The evaluation process involved defining criteria that address the distinctive characteristics of AI/ML systems, assessing each method's applicability, and ranking techniques by their alignment with AI-related challenges. Findings indicate that Fault Tree Analysis (FTA) and Human Reliability Analysis (HRA) are highly relevant for performing safety analysis on AI-enabled systems, while other techniques, such as What-If Analysis, require adaptation to address emergent behaviors. The study provides a framework for selecting and tailoring hazard analysis methods for AI-enabled systems, contributing to the development of robust safety assurance practices in an increasingly intelligent and autonomous era.
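As a minimal illustrative sketch (not taken from the paper), a criteria-based evaluation and ranking of this kind could be organized as below. The 0–2 scoring scale, the equal-weighting scheme, and the per-technique scores are hypothetical placeholders used only to show the structure of the assessment; the criterion names follow those listed in the abstract.

```python
# Hypothetical sketch of a criteria-based ranking of hazard analysis techniques.
# Scores and weights are illustrative placeholders, not the study's data.

CRITERIA = [
    "complexity_management",
    "human_ai_interaction",
    "dynamic_adaptive_behavior",
    "software_centric_focus",
    "probabilistic_uncertainty_handling",
    "iterative_development_compatibility",
]

# Each technique is scored per criterion on a 0-2 scale
# (0 = not applicable, 1 = applicable with adaptation, 2 = directly applicable).
techniques = {
    "Fault Tree Analysis (FTA)":        {c: 2 for c in CRITERIA},
    "Human Reliability Analysis (HRA)": {c: 2 for c in CRITERIA},
    "What-If Analysis":                 {c: 1 for c in CRITERIA},
}

def rank(techniques, weights=None):
    """Return techniques sorted by weighted total score, highest first."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    totals = {
        name: sum(weights[c] * scores[c] for c in CRITERIA)
        for name, scores in techniques.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(techniques):
    print(f"{name}: {score:.1f}")
```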
References
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems In AI Safety. arXiv preprint. https://arxiv.org/abs/1606.06565
Carter, H. G., Chan, A., Vinegar, C., & Rupert, J. (2022). Proposing The Use Of Hazard Analysis For Machine Learning Data Sets. Journal of System Safety, 58(2). https://doi.org/10.56094/jss.v58i2.253
Cummings, M. L. (2024). A Taxonomy For AI Hazard Analysis. Journal of Cognitive Engineering and Decision Making, 18(4), 327–332. https://doi.org/10.1177/15553434231224096
Dobbe, R. (2022). System Safety And Artificial Intelligence. In FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (p. 1584). https://doi.org/10.1145/3531146.3533215
Department of Defense (DOD). (2012). MIL-STD-882E: Department of Defense Standard Practice: System Safety.
Ericson, C. A. (2005). Hazard Analysis Techniques For System Safety (2nd ed.). John Wiley & Sons.
Garvin, T., & Kimbleton, S. (2021). Artificial Intelligence As Ally In Hazard Analysis. In American Institute of Chemical Engineers 2020 Spring Meeting and 16th Global Congress on Process Safety (August 16–20, 2020).
Johnson, B. (2022). Metacognition For Artificial Intelligence System Safety – An Approach To Safe And Desired Behavior. Safety Science, 151, 105743. https://doi.org/10.1016/j.ssci.2022.105743
Martelaro, N., Smith, C. J., & Zilovic, T. (2022). Exploring Opportunities In Usable Hazard Analysis Processes For AI Engineering. arXiv preprint. https://arxiv.org/abs/2203.15628
Popović, V. M., & Vasić, B. (2008). Review Of Hazard Analysis Methods And Their Basic Characteristics. FME Transactions, 36(4).
System Safety Society. (1997). System Safety Analysis Handbook (2nd ed.).
Yampolskiy, R. V. (2019). Artificial Intelligence Safety And Security. CRC Press/Taylor & Francis Group. https://doi.org/10.1201/9781351251389
DOI: https://doi.org/10.53889/citj.v3i2.670
Copyright (c) 2025 Cybersecurity and Innovative Technology Journal

This work is licensed under a Creative Commons Attribution 4.0 International License.

