AI Trust Framework and Maturity Model: Improving Security, Ethics and Trust in AI
Abstract
This article develops an AI Trust Framework and Maturity Model (AI-TFMM) to improve trust in AI technologies used by Autonomous Human Machine Teams & Systems (A-HMT-S). The framework establishes a methodology for quantifying trust in AI technologies. Key areas of exploration include security, privacy, explainability, transparency, and other requirements AI technologies must meet to be ethical in their development and application. A maturity model approach to measuring trust is applied to address gaps in quantifying trust and its associated evaluation metrics. Finding the right balance among performance, governance, and ethics also raises several critical questions about AI technology and trust. The research examines the methods needed to develop the AI-TFMM and validates the framework against a popular AI technology, ChatGPT. OpenAI's GPT (Generative Pre-trained Transformer) is a deep learning language model that generates human-like text by predicting the next word in a sequence based on a given prompt. ChatGPT is a version of GPT tailored for conversation and dialogue; it has been trained on a dataset of human conversations to generate responses that are coherent and relevant to the context. The article concludes with the results of testing the AI-TFMM against this AI technology. Based on these findings, the paper highlights gaps that future research could fill to improve the accuracy, efficacy, application, and methodology of the AI-TFMM.
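To make the maturity-model approach concrete, the sketch below scores an AI technology across five illustrative trust pillars (security, privacy, explainability, transparency, ethics) on a 1-5 maturity scale and aggregates a weighted overall score. The pillar names, weights, and five-level scale are assumptions chosen for illustration, not the AI-TFMM's actual dimensions or metrics.

```python
# Minimal sketch of a maturity-model trust score. The five pillars,
# their weights, and the 1-5 scale are hypothetical illustrations,
# not the AI-TFMM's actual definitions.
from dataclasses import dataclass

MATURITY_LEVELS = {1: "Initial", 2: "Developing", 3: "Defined",
                   4: "Managed", 5: "Optimized"}

@dataclass
class PillarRating:
    name: str      # trust pillar being assessed
    level: int     # maturity level on the 1-5 scale
    weight: float  # relative importance; weights sum to 1.0

def trust_score(ratings: list[PillarRating]) -> float:
    """Weighted average maturity level, normalized to 0-100."""
    assert abs(sum(r.weight for r in ratings) - 1.0) < 1e-9
    raw = sum(r.level * r.weight for r in ratings)  # in [1.0, 5.0]
    return (raw - 1.0) / 4.0 * 100.0                # in [0, 100]

ratings = [
    PillarRating("security",       4, 0.25),
    PillarRating("privacy",        3, 0.20),
    PillarRating("explainability", 2, 0.20),
    PillarRating("transparency",   3, 0.20),
    PillarRating("ethics",         3, 0.15),
]
avg_level = round(sum(r.level * r.weight for r in ratings))
print(f"Overall trust maturity: {trust_score(ratings):.1f}/100 "
      f"({MATURITY_LEVELS[avg_level]})")
```

A weighted average is only one possible aggregation; a framework could equally gate the overall rating on the weakest pillar so that, for example, low explainability caps the achievable trust level.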
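The abstract's description of GPT as predicting the next word in a sequence can likewise be illustrated with a toy autoregressive loop. The bigram probability table below is invented purely for demonstration; a real GPT learns next-token distributions over subword tokens with a transformer network rather than a lookup table.

```python
# Toy illustration of autoregressive next-word prediction.
# The bigram probabilities are invented for demonstration only.
import random

# P(next_word | current_word); "<eos>" ends the sequence.
BIGRAMS = {
    "trust":        {"in": 0.7, "matters": 0.3},
    "in":           {"ai": 0.8, "automation": 0.2},
    "ai":           {"systems": 0.6, "requires": 0.4},
    "systems":      {"requires": 0.5, "<eos>": 0.5},
    "requires":     {"transparency": 1.0},
    "automation":   {"<eos>": 1.0},
    "matters":      {"<eos>": 1.0},
    "transparency": {"<eos>": 1.0},
}

def generate(prompt: str, max_words: int = 8) -> str:
    """Extend the prompt one word at a time by sampling from the
    conditional next-word distribution, as a GPT-style model does."""
    words = prompt.lower().split()
    for _ in range(max_words):
        dist = BIGRAMS.get(words[-1])
        if dist is None:
            break
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<eos>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("trust"))  # e.g. "trust in ai systems requires transparency"
```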
DOI: https://doi.org/10.53889/citj.v1i1.198
Copyright (c) 2023 Cybersecurity and Innovative Technology Journal

This work is licensed under a Creative Commons Attribution 4.0 International License.