
Our Take on AI


Federal Agencies Unite to Combat AI Violations: A New Era of Enforcement

In an era where “the use of automated systems, including those sometimes marketed as 'artificial intelligence' or 'AI,' is becoming increasingly common in our daily lives,” nine federal agencies have joined forces to address the potential risks and challenges posed by these technologies. The Department of Labor, the Federal Trade Commission, the Equal Employment Opportunity Commission, the Consumer Financial Protection Bureau, the Department of Justice, the Department of Housing and Urban Development, the Department of Education, the Department of Health and Human Services, and the Department of Homeland Security have pledged to "vigorously use [their] collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies."

The joint statement, signed by these agencies, highlights the significance of their collaborative effort in enforcing civil rights, non-discrimination, fair competition, consumer protection, and equal opportunity laws in the context of automated systems. While acknowledging the potential benefits of AI, the agencies also express concern about the "potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes."

The agencies' enforcement priorities include biases stemming from flawed data and datasets, the opacity of models and limited access to their inner workings, and problems in the design and use of AI systems. The agencies emphasize that "existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices," signaling their commitment to holding companies accountable for AI-related infractions.

For companies, this could mean increased scrutiny of AI-based decision-making processes and a heightened need to ensure compliance with existing laws and regulations. The FTC's recent ban on Rite Aid's use of AI-based facial recognition technology for surveillance purposes illustrates the potential consequences of AI-related violations.

Looking ahead, agencies are actively working on providing guidance and considering new regulations to address AI-related risks. For instance, the CFPB is "considering a rulemaking under the Fair Credit Reporting Act to prevent the misuse and abuse of Americans' sensitive data by data brokers" that use AI to make predictions and decisions. Additionally, HHS is collaborating with other federal agencies to promote responsible AI usage.

As the use of AI continues to expand, the joint statement serves as a reminder of the federal agencies' resolve to monitor the development and use of automated systems and promote responsible innovation while protecting individuals' rights. This could herald a new era of enforcement and regulatory oversight in the AI landscape, with far-reaching implications for companies and consumers alike.
