Navigating AI Regulations in Customer Service Delivery

by Jennifer Lee, President and Co-CEO, Intradiem - April 1, 2025


The potential of AI to transform customer service is immense—promising smarter tools, quicker response times, and increasingly personalized customer interactions. But as these powerful capabilities evolve, so does scrutiny around their responsible use, especially in sensitive, high-trust environments like contact centers. Regulations around AI usage are beginning to take shape, yet the current landscape remains fragmented, inconsistent, and unclear, causing uncertainty and challenges for organizations.

Navigating the Risks of a Fragmented Regulatory Landscape

Right now, the United States lacks concrete federal regulations guiding the responsible use of AI. Many policymakers lean toward a lighter regulatory approach, believing it will foster innovation and maintain global competitiveness. While this stance has advantages, it also introduces significant concerns. Consider traffic laws as an example of why some regulation is important: we accept and understand uniform rules for road safety because consistent stop sign colors and clearly defined driving laws make travel seamless and safe. Now imagine if each state, city, or town decided independently what color stop signs should be—red here, blue there, yellow somewhere else. The confusion, inefficiency, and risks would be substantial.

This scenario isn't so different from what’s unfolding today with AI regulation. Without clear federal guidance, individual states are stepping in to create their own unique regulatory frameworks. But unlike physical roads, the digital infrastructure supporting AI-powered customer service crosses state lines effortlessly and continuously. For example, if your contact center is headquartered in Georgia but serves customers in California, your operations must adapt to comply with California’s more stringent data protection laws. Managing this complexity puts significant strain on resources, stifles agility, and ultimately slows down the pace of innovation.

Just as standardized traffic rules facilitate safe, efficient travel, cohesive and well-defined AI regulations are necessary to safely and effectively drive innovation forward. Until we have uniform national guidelines, companies must remain vigilant and proactive. It will be essential to closely monitor evolving regulations, anticipating and adapting quickly rather than scrambling later to interpret and comply with a patchwork of different requirements.

Oversight, Accountability, and AI Hype

The real challenge isn’t simply compliance—it’s building a system of trust. Contact centers handle sensitive customer data every day, and integrity, discretion, and confidentiality are essential. I know of a case in which a company implemented an AI hiring tool that, unbeknownst to them, had been trained exclusively on male resumes. As a result, female applicants were systematically filtered out. That wasn’t just a failure of compliance—it was a failure of oversight.

That’s why clear accountability structures are essential. Every organization that uses or plans to use AI should designate someone—whether it’s a Chief Data Officer, a Head of Compliance, or a specialized AI Governance Lead—to own this space. This isn’t a side project. It’s a full-time job. AI policies will need to be continually reviewed and adjusted as laws evolve and new tools enter the market.

Another issue is the hype that surrounds AI. I’ve spoken with a number of company leaders who are still not sure what truly qualifies as AI, or how to vet vendors, or even whether AI solutions can deliver what they promise. The confusion is understandable, because not all AI models are the same. 

Deterministic models, for example, provide a precise mapping from inputs to outputs, which is ideal for simple problems with clear-cut relationships. That’s different from probabilistic models like ChatGPT, which embrace uncertainty by generating distributions over potential outcomes. Because the market tends to lump everything together under the “AI” label, decision-makers are left to navigate an increasingly noisy, and often misleading, landscape. 
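The distinction can be made concrete with a small sketch. The example below contrasts a deterministic mapping (same input, same output, every time) with a probabilistic one that samples from a distribution over outcomes. The call-routing scenario, function names, and weights are illustrative assumptions, not any vendor's product:

```python
import random

# Deterministic model: a fixed mapping from input to output.
# Calling it twice with the same input always returns the same answer.
def deterministic_route(issue_type: str) -> str:
    routing_table = {"billing": "billing_queue", "outage": "tech_queue"}
    return routing_table.get(issue_type, "general_queue")

# Probabilistic model: the output is sampled from a distribution,
# so repeated calls with the same input can differ. (Hypothetical
# weights, standing in for a learned model's output probabilities.)
def probabilistic_route(issue_type: str) -> str:
    weights = {
        "billing": {"billing_queue": 0.9, "general_queue": 0.1},
        "outage": {"tech_queue": 0.8, "general_queue": 0.2},
    }
    dist = weights.get(issue_type, {"general_queue": 1.0})
    queues = list(dist.keys())
    return random.choices(queues, weights=list(dist.values()))[0]

print(deterministic_route("billing"))   # always "billing_queue"
print(probabilistic_route("billing"))   # usually, but not always, "billing_queue"
```

A buyer vetting an "AI" product can start by asking which of these two behaviors it exhibits—the answer determines how the tool should be tested, monitored, and explained to regulators.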

Transparency for Employees and Customers

AI is still intimidating for many. Employees wonder: “Is this replacing me?” or “What data is being used, and how?” Without clear, consistent internal messaging about intentions and usage, fear may take hold. That’s why transparency with employees is just as important as compliance with regulators. Companies must tell employees what they’re doing with AI—and what they’re not doing. Customers also deserve clarity. If AI is helping to answer their questions, route their call, or identify their issue, they should know that. Transparency builds trust. And trust, once lost, is nearly impossible to regain.

Alongside compliance and innovation, there’s a third, equally critical factor in the AI conversation: reputation. We’re seeing more and more examples of what I call the “customer experience cliff,” when companies chase efficiency so aggressively that they lose sight of what makes their brand trustworthy. If using AI for a few tasks increases efficiency and productivity, it seems logical that using it for more tasks will increase them further. That is true, but only up to a point. If customer service centers try to use AI as a substitute for human workers, they risk losing the human element that is at the core of customer service. In our digital age, a single poor customer experience can go viral in minutes. And rebuilding a damaged reputation takes years.

The good news is that AI can lead to much better outcomes when deployed thoughtfully. By automating low-value tasks, it frees agents to focus on high-stakes moments that require empathy, nuance, and real-time judgment. Tools like real-time sentiment analysis allow agents to receive prompts when a customer sounds frustrated or confused—so they can respond faster and more effectively. When used this way, AI becomes a co-pilot, rather than a substitute pilot.
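As a minimal sketch of how such a real-time prompt might work: the snippet below flags a possibly frustrated customer so an agent can intervene. Simple keyword matching stands in for a real sentiment model here; the cue list and prompt text are illustrative assumptions, not an actual sentiment-analysis API:

```python
import re

# Hypothetical cue words suggesting frustration (a real system would
# use a trained sentiment model, not a keyword list).
FRUSTRATION_CUES = {"frustrated", "ridiculous", "cancel", "unacceptable"}

def sentiment_alert(utterance: str):
    """Return a prompt for the agent if the customer sounds frustrated, else None."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    if words & FRUSTRATION_CUES:
        return "Customer may be frustrated: acknowledge the issue and offer options."
    return None

print(sentiment_alert("This is ridiculous, I want to cancel my account"))
```

The key design point survives the simplification: the system surfaces a suggestion to the human agent rather than acting on its own, keeping the person in the loop for the high-stakes moment.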

A Framework for Responsible Innovation

We need to be honest about the pace of regulation. The current iteration of AI is new, so agreeing on a consistent set of guardrails has been a painfully slow process. We’re still governing the internet using laws written three decades ago. If we wait for government to catch up before we act, we may need to wait a long time. That’s why many organizations are adopting a “move forward and course-correct” approach. Innovation can’t wait. But it must be done with care, intention, and integrity.

For contact center leaders, this means establishing internal policies even before external rules are finalized. That means defining your principles. Documenting your data practices. Creating escalation paths for when things go wrong. And above all, remaining flexible. As AI regulations mature, your ability to adapt will determine your long-term success.

The path forward isn’t about avoiding risk entirely. It’s about understanding risk, owning it, and communicating around it. Compliance shouldn’t be seen as a constraint, but rather as a framework that enables responsible innovation to flourish. And that innovation, when anchored in empathy, clarity, and trust, is what will set the best contact centers apart.

So don’t wait. Get your data in order. Designate someone to own AI governance. Be transparent about what you’re doing—and why you’re doing it. And most importantly, don’t lose sight of the human experience, which is, and will remain, at the heart of customer service.

About Jennifer Lee

Jennifer has 20 years’ experience in the contact center industry, including more than 15 years as a people leader. Throughout her career, she has served in a variety of roles in the contact center space, spanning operations, quality, workforce management, and client services. As President and Co-CEO, Jennifer leads the operations and people management of the organization. Prior to this role, she served as Chief Operating Officer and Chief Strategy Officer, and led the Customer Success organization.

Summary: The transformative potential of AI in customer service includes smarter tools, quicker responses, and personalized interactions. However, the fragmented and inconsistent regulatory landscape in the U.S. complicates compliance and innovation for organizations. Clear accountability structures and transparency with employees and customers are essential to build trust. The hype surrounding AI and the risks of losing the human element in customer service are important discussions for enterprises. Proactive internal policies and responsible innovation are necessary to navigate the evolving AI regulations effectively.
