Deploying AI solutions? Here are 5 risks to mitigate

Artificial Intelligence (AI) systems are proliferating across industries, moving beyond the buzz to industrialised implementations. Financial services, telecom, retail, logistics and media are examples of industries where AI is already embedded in mainstream applications. Fraud detection and fraud management, revenue assurance, and surfacing the right content through search all have AI built into them, often without us realising it.

However, as the technology matures and becomes easier to implement, and as industries get more adventurous with newer use cases, there come the accompanying risks of releasing untested AI systems. Businesses and regulators worldwide, whilst recognising the benefits, are also mindful of the potential risks and unintended consequences.

Here are 5 risks one needs to worry about:

Technology Risks

These are the risks brought to the fore by the diversity of platforms (Google AI, Azure AI, IBM Watson or Amazon AI), model marketplaces (Acumos and Kaggle, amongst others), and the lack of wide-scale expertise and knowledge of AI technologies.

Data Risks

AI deals with data, and the ability to ‘learn’ from existing data. In most cases, risks around data hinge on the integrity of the data, the sufficiency of data to train on, and possible biases in the available datasets. An example could be deploying AI systems trained on skewed samples, or on too few real-world examples. A simple pre-training check is sketched below.
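
As a minimal illustration, one might inspect the class balance of the training labels before training begins. This is a hypothetical sketch, not a prescribed method: the fraud-detection framing, the label names and the 3:1 threshold are all assumptions for the example.

```python
from collections import Counter

def check_class_balance(labels, max_ratio=3.0):
    """Warn when the training labels are badly skewed.

    max_ratio is a hypothetical threshold: the largest class may be at
    most max_ratio times the size of the smallest.
    """
    counts = Counter(labels)
    ratio = max(counts.values()) / min(counts.values())
    if ratio > max_ratio:
        print(f"Warning: skewed classes {dict(counts)} (ratio {ratio:.1f}:1)")
    return counts

# A fraud-detection style training set: far more 'legit' rows than 'fraud'
check_class_balance(["legit"] * 950 + ["fraud"] * 50)
```

A check this simple will not catch subtler biases, but running it routinely before training makes the most obvious skews visible early.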

Regulatory Risks

Data, in this day and age, needs to be carefully managed and used only for the right purposes. Data-use regulations (GDPR, for example) impose demands ranging from protection of privacy to restricting usage within geographical boundaries, and these bring their own set of risks to mitigate. For example, how applications use AI to make decisions in highly regulated environments like healthcare (diagnosis, for example) and finance (investment decisions, for example), and the legal or regulatory impact of those decisions, would need to be managed. One simple safeguard is sketched below.
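
As one illustrative safeguard for geographic restrictions, training data could be filtered by its region of origin before processing. This is a hypothetical sketch, assuming each record carries a region tag and that processing is only permitted for in-region data; the field names and the EU-only rule are assumptions, not legal guidance.

```python
# Hypothetical records, each tagged with the region it was collected in
records = [
    {"id": 1, "region": "EU", "amount": 120.0},
    {"id": 2, "region": "US", "amount": 80.0},
    {"id": 3, "region": "EU", "amount": 45.5},
]

ALLOWED_REGIONS = {"EU"}  # assumption: processing permitted only in-region

def filter_by_residency(rows, allowed=ALLOWED_REGIONS):
    """Drop records whose region of origin does not permit this processing."""
    kept = [r for r in rows if r["region"] in allowed]
    dropped = len(rows) - len(kept)
    if dropped:
        print(f"Excluded {dropped} record(s) collected outside {sorted(allowed)}")
    return kept

training_rows = filter_by_residency(records)  # only the two EU rows survive
```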

Model Behaviour Risks

When using pre-trained models, there are risks around the appropriateness of the underlying algorithms and their suitability to the business context, the accuracy & precision of the predictions, and possible biases creeping into the model that need to be mitigated. For example, a model pre-trained on data from one country may produce wrong outcomes when used in another. A pre-deployment validation gate, sketched below, is one way to catch such transfer failures.
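
As a minimal sketch of such a gate, the pre-trained model could be scored against a locally representative holdout sample before it is allowed into production. The prediction function, the toy examples and the 90% threshold are assumptions made for illustration.

```python
def validate_pretrained(predict_fn, local_examples, min_accuracy=0.9):
    """Gate deployment on accuracy over a locally representative sample.

    predict_fn: the pre-trained model's prediction function.
    local_examples: (input, expected_label) pairs drawn from the market
        where the model will actually run, not where it was trained.
    min_accuracy: a hypothetical acceptance threshold.
    """
    correct = sum(1 for x, y in local_examples if predict_fn(x) == y)
    accuracy = correct / len(local_examples)
    if accuracy < min_accuracy:
        raise RuntimeError(
            f"local accuracy {accuracy:.0%} is below {min_accuracy:.0%}; "
            "the model may not transfer to this context"
        )
    return accuracy

# Toy stand-in for a marketplace model: flags transactions above 100 as 'high'
model = lambda amount: "high" if amount > 100 else "low"
validate_pretrained(model, [(50, "low"), (150, "high"), (120, "high"), (30, "low")])
```

The point of the gate is that the acceptance data comes from the deployment context, so a model that performed well in its home market still has to prove itself locally.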

Privacy & Ethical Risks

In today’s world, there is an increasing concern that the insights these systems provide may be inherently biased and thus wrongly influence outcomes. An example of privacy abuse is using the faces of people who have not consented to their images being used to train facial recognition systems. An example of an ethical violation is racial or class bias creeping into the way the algorithms work because of skew in the training data. One common statistical red flag for such bias is sketched below.
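
One widely used red flag is the disparate impact ratio: the rate of favourable outcomes for one group divided by the rate for another, with values below roughly 0.8 (the "four-fifths" rule of thumb) treated as a warning sign. The sketch below is a minimal illustration; the loan-approval framing and group labels are assumptions.

```python
def disparate_impact(decisions, group_a, group_b):
    """Ratio of favourable-outcome rates between two groups.

    decisions: (group, favourable) pairs; favourable is True or False.
    Under the common 'four-fifths' rule of thumb, a ratio below 0.8 is
    a red flag for group bias in the model's outputs.
    """
    def rate(group):
        outcomes = [fav for g, fav in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    return rate(group_a) / rate(group_b)

# Toy loan-approval decisions split across two demographic groups
decisions = [("A", True), ("A", False), ("A", False),
             ("B", True), ("B", True), ("B", False)]
print(f"Disparate impact ratio: {disparate_impact(decisions, 'A', 'B'):.2f}")
```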

At Last Mile, we believe that before deploying AI systems, an effective test strategy must be put in place: one that encompasses and prioritises the impact of the above risks and tailors the testing needs accordingly.

About the Author

Diwakar Menon is the Co-founder of Last Mile Consultants and has, over his 30-year career, held various senior management positions in large multinational organisations like CMC, Alstom, Deutsche Bank, Dell and Tech Mahindra. He advises organisations on catalysing their test functions and improving their approaches to risk mitigation through effective process, application automation and delivery assurance practices.
