AI for SMEs – a safe implementation guide from the National Institute of Standards and Technology
Artificial intelligence, machine learning and AI tools occupy an important position in the strategic development of small and medium-sized companies. Even if suitable use cases have not yet been found everywhere, AI technology is here to stay. This makes it all the more important to approach the topic strategically now and to weigh up costs, benefits and risks before any projects are launched. This article provides an overview of the considerations that should precede the introduction of AI in companies. We base it on the nearly 50-page AI Risk Management Framework from the National Institute of Standards and Technology.
AI for SMEs – how do you get started?
The right AI strategy and a comprehensive understanding of potential use cases are the starting point for a meaningful corporate AI strategy. Take the necessary time to really consider all perspectives and weigh up the benefits and risks of AI in your own company. It is advisable to form a committee of employees from the specialist departments and work out together the pros and cons as well as the wishes and challenges surrounding the introduction of AI.
A checklist for further project design should cover the following topics:
- Which use cases do we want to implement and with what priority?
- Who should work with AI in the company?
- Which AI usage rules do we want to adopt in our company?
- Which providers of AI solutions should be used in our company? Do we want to license a large LLM – such as ChatGPT – or would we prefer to use specialized, smaller models?
- How can we involve and train our employees in the introduction of AI?
- Do we have a budget for the introduction of AI software – including a consulting budget for AI compliance and AI risk management?
- Do we have our own training data, or is our data available in a structured form that we can make usable for AI applications? Do we first need to decommission legacy systems in order to be AI-ready?
- In what time frame do we want to use the new technology productively?
- Who in the company will take on the role of project manager, who will be an internal ambassador?
- Which key figures should determine the success or failure of AI deployment? How will these be collected, and do we already have baseline data for them?
What does a proper AI strategy for SMEs look like?
Once the basic roadmap for the use of AI in the company has been defined, a corresponding strategy is an important cornerstone for implementing the project.
A strategy should always consist of the following pillars:
- Objectives/targets: In addition to the KPI targets that determine success and failure, there are also soft target factors such as productivity growth, increased employee motivation, etc. Above all, important milestones should be defined to accompany the project.
- Vision: Your company’s AI vision is a combination of the desired goals (KPIs) and use case outcomes. Ideally, a vision is formulated in such a way that it describes the state of success as already achieved.
- Strategy: The strategy defines the framework conditions within which the envisaged vision is to be achieved.
- Capabilities: Do we have the right specialists in-house to implement the project, and the management culture needed for the accompanying change management? Are external resources required, and do we already have the right partners?
- Architecture: Is our infrastructure – especially IT and data flows relevant to AI – transparent, secure and accessible? Do we have the necessary prerequisites, including personnel, to avoid creating a bottleneck here?
- Roadmap: Is there a clear schedule of milestones for the project?
- Projects and Programs: Are there specific action-related projects aimed at implementing the strategy?
The accompanying graphic follows the well-known Lippitt-Knoster change management model, which shows that only the combination of all necessary building blocks leads projects to success. The adaptation shown here is an interpretation by IT expert Jeff Winter.
Why is it important to implement risk management in the field of AI?
AI has the potential to revolutionize entire industries and have a lasting impact on the way we work. Many journalists and opinion leaders are already proclaiming a new era, comparing the introduction of AI to the invention of the PC or the advent of the internet.
However, such a powerful technology also harbors risks and downsides. Even leading experts cannot always say what exactly happens between input and output in an AI model – and the fundamental problems of AI algorithms remain unsolved: models hallucinate, misinterpret data or simply make false statements. An AI model is often only as good as the data available for its training.
In addition, there are many political uncertainties regarding the regulation of AI systems. With the EU AI Act, there are already regulations that need to be considered today before companies invest resources in the implementation of AI projects.
The National Institute of Standards and Technology has published a voluntary guideline on AI risk management in response to this situation.
This AI Risk Management Framework (AI RMF for short) deals comprehensively, over nearly 50 pages, with the potential risks of AI technology and how to counter them.
What is NIST and how did the NIST framework come about?
Photo credit: J. Stoughton/NIST
- The National Institute of Standards and Technology (NIST) is a federal agency of the United States of America.
- NIST acts as the central US large-scale research institution for standardization processes and the metrology of physical-technical constants, similar to Germany’s Physikalisch-Technische Bundesanstalt (PTB). Today, however, NIST also deals with technologies outside of physics, including computer technologies such as artificial intelligence. The Risk Management Framework for AI (AI RMF for short) was developed by a NIST research group in collaboration with researchers and industry.
The first version of the NIST AI RMF (AI RMF 1.0) was published on January 26, 2023. It is a voluntary guideline, not a binding legal document. However, because it provides a very comprehensive overview of the aspects to consider when introducing AI technology, AI experts and companies in Europe also consult it.
What is the basis for solid AI risk management in the company?
One of the biggest risks when using AI is that the novelty of the technology makes risks hard to measure and assess correctly, which complicates quantitative and qualitative risk management. Sound risk management therefore starts with a clear awareness of the three basic attitudes to risk in the company:
- Risk tolerance: In a fast-moving technological development such as AI, not every risk can be avoided one hundred percent. A certain tolerance and willingness to take risks is therefore important. However, special attention must always be paid to relevant “red lines” such as regulatory or business-critical risks.
- Risk prioritization: AI risk assessment should take its place alongside risks outside of AI implementation – such as cybersecurity or regulation in highly regulated markets – rather than replacing them. There must always be a clear prioritization of risks.
- Risk management resources: The risks in the field of AI are manifold. It is often important to involve several stakeholders in the company in the risk management working group.
An important basis for the use of AI is modernizing legacy systems – find out how we do this in our case study.
What framework conditions can be used to determine whether an application is trustworthy?
According to NIST, a trustworthy and therefore low-risk AI application is characterized by the following properties:
- Accuracy, robustness and reliability: These are critical to the quality of AI systems. Ongoing testing and monitoring are necessary to confirm that AI systems work as intended under different conditions and use cases (a minimal sketch of such a recurring check follows this list). This includes challenging any AI software vendor on known issues such as hallucination.
- Safety: AI systems should not pose a risk to human life, health, property or the environment under defined conditions. Responsible design, development and deployment practices and clear information for users and end users are critical to ensuring AI safety. The resource-efficient use of computing capacity is just as much a part of this as the currently still very important “human in the loop” approach.
- Reduced bias: Bias can become deeply embedded in automated systems and has the potential to amplify and perpetuate harm on an unprecedented scale. Dealing with systemic, computational and human cognitive biases is critical to ensuring fairness in AI. Fair AI deployment – such as avoiding the perpetuation of racial bias in the processing of personal data – is not only required by law but is also in the interest of democratic societies. The first sketch after this list therefore also compares model quality across groups.
- Resilience: AI systems and their ecosystems should be able to withstand unexpected negative events or changes, maintain their function and structure, and be dismantled safely if necessary. This includes taking into account security aspects such as adversarial examples (demonstrated in the second sketch after this list), harmful data manipulation, and the extraction of models, training data or intellectual property by third parties (so-called AI hacking).
- Transparency: In the field of AI, transparency primarily refers to data transparency. This means providing an appropriate level of information about an AI system, including its design, training data, model structure, planned use cases and deployment decisions. This transparency is necessary in order to be able to take corrective action in the event of incorrect or negative AI results.
- Explainability: This refers to understanding the underlying mechanisms of how an AI system works, while interpretability involves understanding the meaning and context of system outputs. Together, these properties help users, operators and regulators gain deeper insight into the functionality and trustworthiness of an AI system.
- Data protection compliance: AI systems require huge amounts of data in order to function and train well. In data protection-sensitive environments such as Europe, there is often a tension between improving the AI model and complying with data minimization requirements. In case of doubt, data protection should be taken particularly seriously in AI implementation, as it is often impossible to get data out of a system once it has been fed in.
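To make “accuracy” and “reduced bias” tangible, here is a minimal sketch of what a recurring evaluation of an AI model could look like. It assumes a hypothetical predict function and a small evaluation set with a protected attribute; all names, thresholds and the metric choice are our illustrative assumptions, not prescriptions from the NIST AI RMF.

```python
# Minimal sketch: recurring accuracy and fairness check for an AI model.
# All names (predict, eval_data, the groups "A"/"B") are illustrative
# assumptions, not part of the NIST AI RMF itself.
from collections import defaultdict

ACCURACY_THRESHOLD = 0.90  # illustrative "red line" from risk management
MAX_GROUP_GAP = 0.05       # max. tolerated accuracy gap between groups

def evaluate(predict, eval_data):
    """eval_data: list of (features, label, protected_group) tuples."""
    correct_by_group = defaultdict(int)
    total_by_group = defaultdict(int)
    for features, label, group in eval_data:
        total_by_group[group] += 1
        if predict(features) == label:
            correct_by_group[group] += 1

    acc_by_group = {g: correct_by_group[g] / total_by_group[g]
                    for g in total_by_group}
    overall = sum(correct_by_group.values()) / sum(total_by_group.values())
    gap = max(acc_by_group.values()) - min(acc_by_group.values())

    return {"overall_accuracy": overall,
            "accuracy_by_group": acc_by_group,
            "group_gap": gap,
            "accuracy_ok": overall >= ACCURACY_THRESHOLD,
            "fairness_ok": gap <= MAX_GROUP_GAP}

if __name__ == "__main__":
    # A trivial stand-in "model" and a tiny evaluation set.
    dummy_model = lambda features: int(sum(features) > 1)
    eval_data = [([1, 1], 1, "A"), ([0, 0], 0, "A"),
                 ([1, 1], 1, "B"), ([1, 0], 1, "B")]
    print(evaluate(dummy_model, eval_data))
```

In practice, such a check would run continuously – for example in CI or as a monitoring job – so that quality degradation or a growing gap between groups is noticed before it reaches users.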
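The adversarial examples mentioned under resilience can be demonstrated in a few lines. The following numpy-only sketch applies the well-known fast gradient sign method (FGSM) to a toy logistic regression with fixed weights; the model and all numbers are purely illustrative.

```python
# Minimal FGSM sketch: a small, deliberately crafted perturbation flips
# the prediction of a toy logistic-regression "model". Purely illustrative.
import numpy as np

w = np.array([2.0, -3.0, 1.0])   # fixed toy weights
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x):
    return sigmoid(w @ x + b)

x = np.array([0.5, 0.2, 0.3])    # an input classified as positive
y = 1.0                          # its true label

# Gradient of the cross-entropy loss w.r.t. the input:
# d(loss)/dx = (sigmoid(w.x + b) - y) * w
grad_x = (predict_proba(x) - y) * w

eps = 0.3                          # perturbation budget per feature
x_adv = x + eps * np.sign(grad_x)  # FGSM step: move along the loss gradient

print(f"clean prob:       {predict_proba(x):.3f}")      # > 0.5 -> class 1
print(f"adversarial prob: {predict_proba(x_adv):.3f}")  # pushed below 0.5
```

The point of the demonstration: a perturbation that is small in every single feature is enough to flip the model’s decision – which is why robustness testing belongs in AI risk management.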
How do I implement functioning AI risk management in my company?
The NIST AI RMF presents a structured approach designed to make it easier to manage the risks associated with AI deployment effectively. It comprises four core activities, corresponding to the framework’s functions Govern, Map, Measure and Manage:
- Establish company guidelines: Good AI risk management begins with clear company policies on the selection, use and deployment of AI technologies. This includes establishing transparent policies and procedures, defining clear roles and responsibilities, prioritizing diversity and inclusion in the workforce, fostering a culture of critical thinking and safety, and engaging relevant stakeholders.
- Locating and mapping risks: A comprehensive mapping of the risks associated with AI systems provides a solid context for risk assessment. Ideally, a differentiated picture emerges of the complex interactions between the various actors and activities in the AI lifecycle. This enables companies to avoid negative outcomes and to make informed decisions about model management and the appropriateness of an AI solution.
- Assessing risks: An AI risk assessment uses quantitative, qualitative and mixed-method tools and techniques to analyze, evaluate, compare and monitor AI risks and their impacts. This includes rigorous software testing, performance evaluations, independent reviews and comprehensive documentation of results, as well as ongoing monitoring of cost-benefit effects. A minimal risk register sketch follows this list.
- Maintaining risk management: An initial set of risk maps and assessments alone is not sufficient. In addition to allocating risk resources to mapped and measured risks, there should be concrete options for implementing risk treatment plans, along with dedicated responsibilities and resources to respond to, recover from, and communicate about AI risk incidents.
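To illustrate how mapping and assessing can look in their simplest form, the sketch below keeps a small risk register and prioritizes entries by likelihood times impact. The scales, example risks and owners are our illustrative assumptions, not values prescribed by the NIST framework.

```python
# Minimal sketch of an AI risk register: map risks, score them by
# likelihood x impact (scales 1-5), and prioritize the result.
# Entries and scales are illustrative, not prescribed by the NIST AI RMF.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    likelihood: int           # 1 (rare) .. 5 (almost certain)
    impact: int               # 1 (negligible) .. 5 (critical)
    owner: str                # who responds if the risk materializes
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Model hallucinates in customer answers", 4, 3, "product team",
         ["human in the loop", "source citations"]),
    Risk("Personal data leaks into training data", 2, 5, "data protection officer",
         ["pseudonymization", "data minimization"]),
    Risk("Vendor lock-in with one LLM provider", 3, 2, "IT management",
         ["abstraction layer", "exit plan"]),
]

# "Measure" step: rank risks so treatment resources go where they matter most.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}  -> owner: {risk.owner}")
```

Even such a simple register forces the discussion the framework calls for: who owns a risk, how severe is it, and which mitigations and treatment plans exist.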
Outlook: AI is not for everyone, but everyone should get involved!
With the EU AI Act at the latest, the risks of AI applications must be weighed up even more carefully. A targeted classification of cost-benefit effects and the company-wide adaptation of structures require more than a small pilot project. Companies that want to implement AI profitably should position themselves strategically at an early stage. Positions such as Chief AI Officer are now being created to assign responsibility and training for this important future topic early on. We would be happy to discuss with you whether you, too, should create such a position.

For many companies, AI is (still) a field of research today – yet in view of the rapid developments, it is important to make strategic considerations now. Our insurance customers face the additional challenge of examining further regulatory requirements, such as the VAIT-Verordnung, with regard to the future use of AI. Even those who use AI in the cloud cannot avoid certain regulatory requirements. Regardless of how you approach machine learning and other AI technologies in the future, we recommend starting with a comprehensive strategy.
We are happy to help you identify the necessary steps with our AI assessment workshop and accompany you on your way to your individual AI future.
Please feel free to contact us.
Get strategic AI sparring now!
Your contact person: Paul Giordano Bruno