
4 key priorities for smart AI implementation in regulated industries


AI, and generative AI in particular, is top of mind for organizational leadership. The fear of falling behind is strong, and it is a rare organization whose CIO is not under intense pressure to deploy it. While it’s good advice for any organization to develop a thoughtful strategy that mitigates risk and maximizes business value before adopting any kind of AI, highly regulated industries like pharmaceuticals, healthcare, and financial services must be especially careful. After all, AI doesn’t get a free pass to violate the laws and regulations that govern how organizations use data and what they communicate to consumers.

Navigating this regulated landscape can be tricky, but it’s worth it for the value AI can bring. To realize that value, though, the entire leadership team must develop a detailed plan to achieve its goals without alienating customers and employees or running into regulatory trouble.

Alright, so how do you get started? In this blog post, we’ll discuss the challenges regulated industries face in implementing both traditional and generative AI—and provide four priorities to consider as companies in this space develop their AI strategy.

Start with a focus on customer and employee trust 

If the people who will interact with AI—patients, customers, doctors, brokers—don’t trust it, they won’t use it. And if the AI you deploy goes unused, you’ve wasted an enormous amount of time, resources, and money.

Employ change management

For customers and employees alike, strong change management is critical. For example, even in the 2020s, some hospital systems still use paper charts instead of electronic medical records (EMR), and even in systems that have made the transition, a small but significant number of physicians chose to retire rather than use EMR. An AI-powered digital assistant is an even bigger leap for a physician to make. Incorporating AI into one’s daily work is an enormous change, and it will require building trust to succeed.

Create a thoughtful communication plan

Introduce AI gradually and ensure there is a thoughtful, comprehensive marketing and sales plan to communicate how and why your organization is providing AI capabilities. Make sure to provide a human alternative, especially for activities, like customer service or speaking with a healthcare provider, that would previously have involved a human being. It will take time for customers and patients to become comfortable interacting with AI.

Demystify how AI and machine learning work

With employees, it’s important to demystify how AI and machine learning (ML) work. AI is often viewed as—and in some cases actually is—a black box: data goes in, results come out, and no one has any idea how the algorithm reached its conclusions. Wherever possible, employ transparent AI that enables end-users to follow its logic. When that isn’t possible, run workshops that apply the AI to situations where the outcome is already known, demonstrating its accuracy and building confidence.
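To make “transparent AI” concrete, here’s a minimal sketch in Python using scikit-learn: an interpretable model whose per-feature importances can be shown to end-users. The feature names and training data are invented for illustration; the point is that reviewers can see which inputs drive the model’s predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical feature names for a healthcare risk model.
feature_names = ["num_visits", "avg_claim_amount", "age", "chronic_conditions"]

# Stand-in training data; in practice this comes from your governed dataset.
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance estimates how much each feature drives predictions,
# giving end-users and auditors a way to follow the model's logic.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```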

Establish a training program and provide helpful information

Just as with customers, introduce AI gradually and, at each step, ensure everyone is well-informed and well-trained. Communication from all levels of the organization—especially from the C-Suite—must clearly and frequently explain the goals and benefits that AI will help achieve. 

Be clear that AI will support employees—not replace them

Additionally, don’t use AI to replace human beings—a move that is both unethical and ineffective—but rather to assist them. If people believe they’ll lose their jobs to AI, morale will plummet, and trust will be nonexistent.

Experiment with risk reduction tactics

AI, and generative AI especially, presents risks to highly regulated industries. In pharmaceuticals and healthcare, regulations strictly govern the claims, language, and even layout with which pharma companies can communicate about their products. HIPAA and other regulations protect the privacy of patient information, which can place limitations on how healthcare data may be used to train large language models (LLMs). And don’t forget that generative AI has a tendency to “hallucinate”—if it gives wrong information to customers or patients, people could be harmed or even die.


The risks are also stark in financial services, even if life and limb may not be at stake. Biased or hallucinating AI can make poor predictions or provide inaccurate information, leading employees to make bad investment decisions that cost the company millions of dollars—or cause customers to lose money and financial stability. 

To protect against these scenarios, organizations must:

  • Establish rigorous testing protocols that run both before deployment and on an ongoing basis.
  • Put strong guardrails in place so that results are accurate and stay within the strictures of relevant regulations (a minimal sketch follows this list).
  • Avoid letting AI run on its own without human intervention unless the stakes are very low. AI should assist human beings, who ultimately make the final decision.
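As one illustration of what a guardrail can look like, here’s a minimal sketch of an automated pre-send check that routes risky model output to a human reviewer. The prohibited-claim and PII patterns below are hypothetical placeholders; a real system would encode your compliance team’s actual rules and run alongside, not instead of, human review.

```python
import re

# Hypothetical patterns for claims that regulators prohibit or restrict.
PROHIBITED_CLAIMS = [
    r"\bguaranteed\s+returns?\b",   # financial-services compliance
    r"\bcures?\b",                  # unapproved pharmaceutical claim
]

# Hypothetical PII patterns that should never appear in output.
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",       # US Social Security number format
]

def review_required(draft: str) -> bool:
    """Return True if the draft must be routed to a human reviewer."""
    return any(
        re.search(pattern, draft, flags=re.IGNORECASE)
        for pattern in PROHIBITED_CLAIMS + PII_PATTERNS
    )

draft = "Our fund offers guaranteed returns of 12% a year."
if review_required(draft):
    print("Blocked: route to a human reviewer")  # a human makes the final call
else:
    print("Passed automated checks")
```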

Look for privacy-friendly alternatives

Beyond health information, personally identifiable information (PII) is governed by a wide array of state and national regulations, including the California Consumer Privacy Act (CCPA) and GDPR. Using PII and health data to train AI or, even worse, having PII turn up in responses could expose the organization to severe fines.

Anonymize all data from individuals before using it in training, and use only the data you need to obtain the desired result. Some organizations train on synthetic data instead: an algorithm creates an artificial dataset that replicates the statistical relationships within the original. This eliminates the problem of PII and health information, but extensive testing is required to ensure that results from such training are valid.
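As a simple illustration of the anonymization step, here’s a minimal sketch using pandas: direct identifiers are replaced with salted one-way hashes, and quasi-identifiers like age are generalized into bands. The column names, salt handling, and age bands are assumptions for the example; real de-identification should follow the standards your regulators require (for example, HIPAA’s Safe Harbor method).

```python
import hashlib
import pandas as pd

# Illustrative records; column names are hypothetical.
records = pd.DataFrame({
    "patient_name": ["Ada Smith", "Ben Jones"],
    "ssn":          ["123-45-6789", "987-65-4321"],
    "age":          [34, 67],
    "diagnosis":    ["J45", "E11"],  # ICD-10 codes
})

SALT = "rotate-me-per-project"  # in practice, keep salts out of source control

def pseudonymize(value: str) -> str:
    """One-way hash so records stay linkable without exposing identity."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

anonymized = pd.DataFrame({
    "patient_id": records["ssn"].map(pseudonymize),  # replace direct identifiers
    "age_band": pd.cut(records["age"], bins=[0, 18, 40, 65, 120],
                       labels=["0-17", "18-39", "40-64", "65+"]),  # generalize
    "diagnosis": records["diagnosis"],               # keep only what's needed
})
print(anonymized)
```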

Keep ethics in AI front-and-center

An AI model is only as good as the data on which it is trained, and data can contain hidden biases, which the model then learns and reproduces.

For example, a healthcare prediction algorithm used by US hospitals and insurers to identify patients who would need “high-risk care management” was found to significantly underestimate how sick Black patients were. An investigation found that the algorithm used healthcare spending as a key indicator of a patient’s medical needs, and because Black patients who were sick spent roughly the same amount as white patients who were healthy, the algorithm underestimated their healthcare needs.

In the financial services industry, instances of AI bias have been more difficult to prove, but bias is clearly pervasive within the industry, which often uses predictive and AI algorithms to assist with mortgage loan approvals. A study conducted by The Markup and distributed by the Associated Press found that lenders were much more likely to reject applicants of color than comparable white applicants: Black applicants were 80% more likely to be rejected, Native Americans 70% more likely, and Latinos 40% more likely.

Biased AI will produce inaccurate results and could expose the organization to liability. Employ a data scientist to ensure the data is as unbiased as possible. Rigorously test for bias before deployment and continually in production, adjusting the model as needed.
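One simple, widely used screening test is the disparate-impact ratio: compare rates of favorable outcomes across groups. Here’s a minimal sketch with made-up data; the 0.80 threshold comes from the “four-fifths rule” used in US employment guidelines, and it is a screening heuristic that flags a model for investigation, not a verdict of bias.

```python
import pandas as pd

# Hypothetical audit of model decisions; groups and outcomes are made up.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group, then the ratio of the lowest to the highest.
rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.80:  # the "four-fifths rule" screening threshold
    print("Potential bias: investigate the data and adjust the model")
```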

Be intentional with your AI strategy

Deploying AI in a highly regulated industry is tricky. It requires careful research and planning; change management that comes from the very top of the organization; strong guardrails to ensure accuracy and prevent hallucination; and extensive testing to ensure that the data and results aren’t biased or inaccurate. It’s not a minor effort, and the entire organization will need to get behind it, but in the end, it’s worth the investment to achieve the benefits of AI.

Is your organization standing up an AI strategy? Our team of experts understands the intricacies of compliance in regulated industries—we can help.

