Understanding the Five R’s of AI Implementation

The development and implementation of artificial intelligence are rapidly gaining momentum, with Gartner predicting that the AI software market will total $62.5 billion in 2022. If accurate, this prediction would represent a 21.3% increase from 2021, a figure indicative of the current surge of interest in AI adoption. While organizations pursue the transformational benefits of the technology, it is vital that they understand the supporting structures, theories, and philosophies that are critical to successful deployments.

At BCG, we have found that the most successful pioneers of AI at scale put 70% of their investment into embedding the technology into business processes and agile ways of working. In other words, these AI leaders invest considerably more in their people and processes than in the technology itself. In most cases, these organizations have a strong understanding of what we call the five R’s: a set of foundational principles that support the successful integration of AI.

Responsible - We need to make AI do the right & humane thing

Responsible AI means ensuring that systems are designed to put humans at the center of technology development, preserving the agency and wellbeing of the individuals and communities that use or are influenced by AI systems. Accountability is a crucial component in this process, coupling with fairness, equity, and transparency to ensure that an organization’s people retain responsibility for the use of AI. The social and environmental impacts of using artificial intelligence should be carefully considered, ensuring that the technology is a net positive in these areas rather than a detriment.

Being responsible requires consistently reviewing a system’s safety measures, security controls, and data privacy protections to ensure they are enforced and ethical.

The use of AI presents significant opportunities and risks, requiring organizations to adopt specialized security and data protection capabilities and practices if they are to proceed responsibly.
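One concrete practice in this area is screening data for personally identifiable information before it is logged or sent to a model. The sketch below is a minimal illustration of the idea, assuming two hypothetical regex patterns and a simple redaction step; a production system would rely on vetted PII-detection tooling rather than hand-rolled patterns.

```python
import re

# A minimal sketch of a pre-processing privacy gate. The patterns and names
# here are assumptions for illustration; a production system would rely on
# vetted PII-detection tooling rather than hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with placeholders before the text is
    logged or sent to a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED_EMAIL], SSN [REDACTED_SSN].
```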

Retraceable - We need to know what it is doing

Maintaining control is of paramount importance when integrating AI technology into a business, and for that to be possible, systems need to be built in such a way that their activity can be retraced. At the heart of this is the ability to explain every part of an AI system, including its decisions, regardless of its complexity. A good example is the interlingua created by Google Translate to aid in translation between language pairs it was never directly trained on. The engineers were able to peer inside the neural network to understand what the system had created and how it was being used; in this case, the system was performing better than expected. Transparency plays an important role here, as teams need to be able to understand, explain, and follow the system in order to clearly determine the cause of issues and anomalies when they arise.

Teams must be able to go deeper and conduct forensics and root cause analysis in the event of incidents. The purpose of this is to establish causality and determine how to prevent an issue from recurring, subsequently enhancing the overall AI system. Organizations will have to be well-prepared and diligent in enforcing this approach, because identical inputs to AI and ML models may not always produce identical outputs.
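As a minimal sketch of what retraceability can look like in practice, the following (with illustrative names throughout) records each model decision together with the model version, a fingerprint of the inputs, and the random seed, so that forensics and root cause analysis can later reconstruct exactly what the system saw and did.

```python
import hashlib
import json
import time

# Minimal sketch of an audit trail for model decisions (illustrative names
# throughout). Each record captures enough context -- model version, input
# fingerprint, random seed -- for later forensics and root cause analysis.
def log_decision(model_version: str, seed: int, features: dict, output) -> dict:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "random_seed": seed,  # recorded because the same inputs may not
                              # otherwise reproduce the same outputs
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open("decision_audit.jsonl", "a") as f:  # append-only audit log
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a single (hypothetical) credit decision.
log_decision("credit-model-2.3.1", seed=42,
             features={"income": 52000, "tenure_months": 18},
             output="approve")
```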

Robust - We need to design AI systems to stand up to attack

Throughout history, humans have developed new systems slowly, allowing for limited-exposure testing, incremental improvements, and corrections before mass deployment. All systems are prone to errors and areas for improvement, and require corrections during their development. The slow rollout of these weak and untested systems limited the damage that could be done while errors and issues were addressed and defenses created. Recently, the pace at which we deploy technology has outstripped our ability to understand it and its attack surface, with negative consequences. Some spectacular failures have occurred when we did not ensure a robust and thorough testing process, one that also challenged the strength of our assumptions and the weaknesses in our own human systems.

For example, facial recognition systems are being implemented in many countries and used against the population in ways some people don’t expect or appreciate. What attacks can be leveraged against these systems? Masks? Photographs? Reflective materials that confuse the sensors? Another example of a system implemented without sufficient testing is the Northeast Blackout of 2003, a massive power outage triggered in part by a software bug that impacted roughly 55 million people across the Northeastern and Midwestern United States and Ontario, Canada. In these cases, systems were automated, implemented, and moved into the daily lives of millions of users without considering the implications they might have if not built in a safe, solid, and well-tested way.

Ultimately, the indirect users of new systems must also be considered when those systems are implemented. Strong defenses need to be built into and around them so they aren’t misused, disrupted, or attacked in ways that could be catastrophic to life and safety. These systems also need robust measures to identify, detect, and respond to errors and anomalies caused by system changes or situations not previously considered.

AI systems are being designed to process and operate at levels beyond human comprehension, which makes it vital for humans to remain constantly in the loop as part of a robust system of checks and balances. Not only do these systems need to be controlled and monitored carefully after implementation, but they must also be built from the ground up with precision and pre-emptive consideration, able to defend against attacks, detect anomalies, and ask for help.
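One common pattern for keeping humans in the loop is a confidence-and-anomaly gate: the system acts autonomously only when it is confident about familiar-looking input, and escalates to a human otherwise. The sketch below illustrates the idea; the threshold values and function names are assumptions for illustration, not a prescribed design.

```python
# Minimal sketch of a human-in-the-loop gate: the system acts on its own only
# when it is confident, and "asks for help" otherwise. Thresholds and names
# are assumptions for illustration.
CONFIDENCE_FLOOR = 0.90

def decide(prediction: str, confidence: float, anomaly_score: float):
    if anomaly_score > 0.5:
        return ("escalate", "input looks unlike training data; human review")
    if confidence < CONFIDENCE_FLOOR:
        return ("escalate", f"confidence {confidence:.2f} below floor")
    return ("act", prediction)

print(decide("approve", confidence=0.97, anomaly_score=0.1))  # ('act', 'approve')
print(decide("approve", confidence=0.72, anomaly_score=0.1))  # escalates
```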

Resilient - When degraded, it can recover and continue safely

While resiliency in an AI system may seem to be covered by responsibility and robustness, it is a nuanced point worth stressing because of the scale of the challenge. AI systems are set to be capable of far more than basic computing, and this much larger remit and ability to learn create a greater chance of negative developments and malicious manipulation. As a result, truly resilient AI systems will need to be developed for AI risk to be effectively managed.

When we discuss resilience, we refer to the ability of the system to detect anomalies in itself and attacks against it, and to continue producing expected and safe output. It could even mean that the system moves into a protected state, slows down operations, and tags certain decisions as requiring review. If these systems become life-critical (the electrical grid, health care, self-navigating vehicles), they should be able to respond to and recover from attacks quickly, or put themselves and anyone near them into a safe state until a solution is available. This applies not only to the code and AI models implementing the system, but also to the data that could cause the system to respond in an unexpected way.
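A simple way to picture this protected-state behavior is a circuit-breaker style monitor that tracks consecutive anomalies and downgrades the system’s operating mode as they accumulate. The following is a minimal sketch under assumed thresholds and state names, not a production design.

```python
import enum

# Minimal sketch of a resilience pattern: a circuit breaker that moves the
# system into a protected state when anomalies accumulate, rather than
# failing outright. Thresholds and states are illustrative assumptions.
class Mode(enum.Enum):
    NORMAL = "normal"        # full-speed autonomous operation
    PROTECTED = "protected"  # slowed down, decisions tagged for review
    SAFE_STOP = "safe_stop"  # halt and wait for a human

class ResilienceMonitor:
    def __init__(self, protect_after=3, stop_after=10):
        self.anomalies = 0
        self.protect_after = protect_after
        self.stop_after = stop_after

    def record(self, is_anomalous: bool) -> Mode:
        # Count consecutive anomalies; a clean observation resets the count.
        self.anomalies = self.anomalies + 1 if is_anomalous else 0
        if self.anomalies >= self.stop_after:
            return Mode.SAFE_STOP      # recover only after human sign-off
        if self.anomalies >= self.protect_after:
            return Mode.PROTECTED      # keep running, but flag every output
        return Mode.NORMAL

monitor = ResilienceMonitor()
for anomalous in [True, True, True]:
    mode = monitor.record(anomalous)
print(mode)  # Mode.PROTECTED after three consecutive anomalies
```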

At the outset, new AI systems must be instilled with the best representation of humankind possible, equipping them to challenge attempts to manipulate their capabilities and to resist developing in undesirable ways. The case of Tay, the Microsoft chatbot that learned to use hateful language, is a prime example of the kind of outcome that more resilient design can avoid. Training systems could have tagged certain behavior as unwanted or, as responses became aggressive, questioned and delayed them for review, as sketched below. When built and developed in a resilient way, AI systems would be capable of delivering exceptional results while operating with a more sophisticated understanding of their goals and of the external influences acting on them.
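As a rough illustration of that “tag and delay” idea, the sketch below holds any candidate response that trips an unwanted-behavior check in a review queue instead of publishing it. The blocklist and substring matching here are deliberate stand-ins; a real system would use a trained classifier.

```python
# Minimal sketch of the "tag and delay" idea: candidate responses that match
# unwanted-behavior checks are held for human review instead of being
# published. The blocklist terms and substring matching are stand-ins for a
# trained classifier.
BLOCKLIST = {"hate", "slur", "threat"}  # illustrative placeholder terms

def gate_response(response: str) -> dict:
    flagged = [w for w in BLOCKLIST if w in response.lower()]
    if flagged:
        # Tag the output and queue it for review rather than posting it.
        return {"status": "held_for_review", "reasons": flagged}
    return {"status": "published", "text": response}

print(gate_response("Thanks for the question!"))   # published
print(gate_response("This is a threat."))          # held_for_review
```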

Respect - Implementers must understand the power these systems can have over society

Closely intertwined with resilience is the theme of respect: not just respect for the powerful and transformative technologies we are implementing, but also for the people cautiously and diligently working on and learning from these technologies, and for the people who will feel the results of these systems. For perspective, consider a simple clerical error that flags someone as high risk in a credit decision: many people could be unfairly judged before anyone moves to question the AI system.

AI is set to have a potentially greater impact than any technology before it, with the ability to take on multitudes of tasks that currently require manual effort and to accelerate our progress dramatically across all industries. While we use AI to enhance our pursuit of breakthroughs in areas like healthcare and quantum computing, it is of the utmost importance that we never lose respect for the humans it will impact, and for the human operators staying in the loop and prioritizing human ethics and values.

Today, AI is still in its infancy, and we must tread lightly and carefully. It will be crucial to listen to the viewpoints of all those who work in the field, especially those offering minority viewpoints that could easily go unnoticed. As we head toward a future in which AI may change everything but the laws of physics, respect is the R that will help us maintain resilience, robustness, retraceability, and responsibility.
