← KeepSanity
Apr 08, 2026

State Artificial Intelligence: How the U.S. States and the State Department Are Shaping AI Policy

The term “state artificial intelligence” in 2025–2026 carries two distinct meanings: AI systems deployed by the U.S. Department of State for diplomacy (like StateChat and NorthStar), and the growing patchwork of AI laws enacted by individual U.S. states.


Introduction: What “State Artificial Intelligence” Means in 2025–2026

The phrase “state artificial intelligence” means two very different things depending on who’s using it.

First, it refers to how the U.S. Department of State, the federal agency responsible for diplomacy and foreign affairs, is deploying artificial intelligence to modernize how America engages with the world. Second, it describes the regulatory and legislative activity happening across the 50 individual U.S. states, each developing its own framework to regulate AI within its borders.

This dual meaning isn’t just a semantic curiosity. It reflects a deeper constitutional and political fault line that’s reshaping technology policy in real time.

The key dates tell the story: President Biden signed his AI Executive Order on October 30, 2023, establishing early federal frameworks. Between late 2023 and 2025, more than one hundred state AI bills were introduced or enacted. Then, on December 11, 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” which explicitly sought to preempt conflicting state laws and create uniform federal authority over AI governance.

What was once a purely technical topic has become a core instrument of foreign policy, economic competitiveness, and domestic lawmaking. Innovators want national consistency so they don’t have to navigate 50 different compliance regimes. Civil-society groups and many states want strong safeguards to protect residents from algorithmic discrimination and other harms. Federal actors are trying to assert primacy while Congress remains gridlocked.

This article covers the State Department’s AI strategy, model state AI acts and their “Learning Laboratory” experiments, the federal preemption push, and the messy politics surrounding all of it. If tracking every micro-update sounds exhausting, you’re not alone; that’s exactly why KeepSanity AI exists as a weekly, curated source for the major shifts without the daily noise.


The U.S. Department of State’s Use of Artificial Intelligence

The Department of State views AI not as a marginal IT upgrade, but as a centerpiece of 21st-century diplomacy. Secretary of State Marco Rubio has stated explicitly: “Winning the AI race is nonnegotiable. America must continue to be the dominant force in artificial intelligence to promote prosperity and protect our economic and national security.”

This framing positions AI technology directly within U.S. competition with China and the European Union over AI norms, standards, and diplomatic influence. The stakes go beyond efficiency: they’re about who shapes the future of global governance.

AI helps the State Department process and synthesize enormous volumes of unstructured data that would overwhelm human analysts working alone.

Concrete AI Systems in Production

The department has moved beyond pilots to actual deployed systems:

StateChat is the department’s first generative AI chatbot, approved for use with Sensitive But Unclassified (SBU) data. Diplomats use it for routine drafting, summarization, and research support.

NorthStar is a large-scale open-source intelligence tool that ingests millions of global news articles daily. It applies AI clustering and priority-ranking to help analysts identify emerging narratives, regional trends, and early warning signals in strategically important regions.
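The clustering-and-ranking idea behind NorthStar can be illustrated with a toy version: group headlines whose word overlap exceeds a threshold, then surface the largest clusters first as candidate "emerging narratives." This is a simplified sketch, not the actual models or pipeline NorthStar uses:

```python
def tokens(text: str) -> set[str]:
    """Crude tokenizer: lowercase words, punctuation stripped."""
    return {w.lower().strip(".,") for w in text.split()}

def jaccard(a: set[str], b: set[str]) -> float:
    """Word-overlap similarity between two token sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b)

def cluster_headlines(headlines, threshold=0.3):
    """Greedy single-pass clustering: join a cluster if similar enough
    to its first member, otherwise start a new one."""
    clusters: list[list[str]] = []
    for h in headlines:
        for cluster in clusters:
            if jaccard(tokens(h), tokens(cluster[0])) >= threshold:
                cluster.append(h)
                break
        else:
            clusters.append([h])
    # Priority-rank: larger clusters suggest an emerging narrative
    return sorted(clusters, key=len, reverse=True)

headlines = [
    "Flooding displaces thousands in coastal region",
    "Coastal region flooding displaces thousands more",
    "Election results announced in capital",
]
for cluster in cluster_headlines(headlines):
    print(len(cluster), cluster[0])
```

A production system would use learned embeddings and far more signals, but the shape of the task is the same: millions of articles in, a short ranked list of storylines out.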

Both AI tools are framed as augmenting diplomats, not replacing them. Written guidance emphasizes that AI outputs are drafts and recommendations, not authoritative intelligence. Staff are reminded to apply their own expertise before relying on AI outputs for policy, negotiations, or public statements.

Key Risks the State Department Must Manage

Data Security: Preventing classified or diplomatic-sensitive data from leaking into AI systems.

Algorithmic Bias: Models trained on Western sources may skew analysis toward certain regions.

Over-Reliance: Risk of institutional deskilling if diplomats depend too heavily on AI summaries.

Adversarial Manipulation: Sophisticated actors could feed false information to trick AI detection systems.

For readers tracking whether these AI systems actually matter, KeepSanity AI selectively covers major State or White House AI governance moves, like new public frameworks or governance boards, rather than every minor pilot experiment.

Enterprise Data and AI Strategy at the State Department

On September 30, 2025, the State Department unveiled its “Enterprise Data and Artificial Intelligence Strategy for 2026,” a three-year roadmap to modernize how the department handles data and AI across all bureaus and embassies worldwide.

The strategy is explicitly aligned with two major federal directives:

  1. OMB Memorandum M-25-21 (released April 2025): Sets baseline standards for federal AI adoption and AI governance

  2. President Biden’s 2023 AI Executive Order: Established principles for responsible AI development and deployment in federal agencies

Three Strategic Pillars

The strategy organizes itself around three interrelated pillars that mirror OMB language:

Innovation and Rapid Experimentation: Creating mechanisms to rapidly pilot and test AI systems across different bureaus, learn from pilots, and scale successful approaches. The emphasis is on moving at the “speed of relevance” without analysis paralysis.

Governance and Institutional Accountability: Establishing clear roles, responsibilities, and oversight structures.

Public Trust and Civil Rights: Acknowledging that deploying AI systems affecting diplomatic outcomes, visa adjudication, or public communications requires protecting civil rights, maintaining privacy, and being transparent about how AI is used.

The strategy includes an internal Implementation Plan with concrete metrics, though specific numbers aren’t public. These likely cover the percentage of mission-critical processes with AI support, the reduction in manual analytic hours, and the number of AI systems inventoried and risk-assessed.

This strategy overlaps with broader U.S. diplomacy aims, including promoting democratic AI norms in multilateral forums like the G7 Hiroshima AI Process and the U.K. AI Safety Summit. Readers can expect these big diplomatic AI milestones, not internal minutiae, covered in KeepSanity’s weekly briefings.


Compliance with OMB Memorandum M-25-21 and Federal AI Governance

OMB Memorandum M-25-21 serves as the central White House directive setting standards for how all federal agencies, including the State Department, must inventory, manage, and govern AI systems.

The April 2025 White House guidance that built on M-25-21 set baseline expectations for how all federal agencies adopt, manage, and procure AI.

The guidance also emphasized “American-made” AI procurement preferences and prioritized faster, more interoperable AI acquisition.

The State Department’s Compliance Plan

The department’s Compliance Plan operationalizes M-25-21 through several concrete steps:

AI Inventory: Cataloging all AI systems with metadata including purpose, data inputs, vendor info, and risk classification.

Risk-Tiering: Classifying systems by impact level with appropriate governance gates.

Civil Rights Compliance: Testing for bias in systems affecting visa adjudication, security clearances, or enforcement.

Privacy Protection: Ensuring AI systems comply with the Privacy Act and classified information rules.

Red-Team Testing: Conducting adversarial evaluation to identify vulnerabilities before deployment.
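To make the inventory and risk-tiering requirements concrete, here is a minimal sketch of what a single inventory record might look like. The field names, tiers, and the example system are hypothetical illustrations, not the department’s actual schema or the categories M-25-21 defines:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Hypothetical impact levels; the real guidance uses its own categories
    LOW = "low"
    HIGH_IMPACT = "high-impact"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical agency AI inventory."""
    name: str
    purpose: str
    data_inputs: list[str]
    vendor: str
    risk_tier: RiskTier
    civil_rights_reviewed: bool = False

    def governance_gates(self) -> list[str]:
        """Higher-risk systems pass through more pre-deployment checks."""
        gates = ["privacy review"]
        if self.risk_tier is RiskTier.HIGH_IMPACT:
            gates += ["bias testing", "red-team evaluation"]
        return gates

# Hypothetical example record
record = AISystemRecord(
    name="VisaTriageAssistant",
    purpose="Prioritize consular caseload",
    data_inputs=["application metadata"],
    vendor="ExampleVendor Inc.",
    risk_tier=RiskTier.HIGH_IMPACT,
)
print(record.governance_gates())
# → ['privacy review', 'bias testing', 'red-team evaluation']
```

The point of structuring the inventory this way is that governance gates follow mechanically from the risk classification, rather than being negotiated system by system.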

The Compliance Plan emphasizes alignment with “American values”: fairness, nondiscrimination, and due process. It explicitly addresses concerns about algorithmic discrimination in immigration and consular systems, recognizing that AI models trained on historical data can perpetuate existing biases.

From a news-monitoring standpoint, KeepSanity AI focuses on the handful of major federal governance steps, like new OMB guidance or landmark Department of Justice and Federal Trade Commission actions, rather than every internal compliance deadline.

How Is the State Department Using AI in Practice?

Moving from strategy documents to the actual tools diplomats use at desks in Foggy Bottom and at posts abroad reveals a more concrete picture.

StateChat in Detail

StateChat operates with specific capabilities and constraints:

Core Capabilities:

Operational Guardrails:

NorthStar for Open-Source Intelligence

NorthStar functions as a “world brain” for situational awareness, continuously scanning global information flows.

The Indo-Pacific, Eastern Europe, and the Sahel are explicitly mentioned as regions benefiting from this capability: areas where early detection of emerging crises could give U.S. policymakers valuable lead time.

Other AI Use Cases

None of these AI systems currently make final sovereign decisions. They feed into human decision-makers, who remain accountable for policy.

In a typical week, dozens of small pilots launch across the federal government. Only a handful are transformative enough to feature in a curated digest like KeepSanity AI.

Model State Artificial Intelligence Acts and Learning Laboratories

Shifting from federal and foreign policy to domestic state-level experimentation reveals a different approach to AI governance.

Organizations like the American Legislative Exchange Council (ALEC) have promoted “model” state AI acts: template legislation that states can adopt as blueprints. These acts embody a light-touch, innovation-first philosophy.

This approach represents a conscious alternative to stricter regulatory models like Colorado’s algorithmic discrimination law.

Typical Structure of Model Acts

Definitions Section: Defines what counts as “artificial intelligence” for purposes of the law, identifies covered entities and sectors, and establishes the scope of application.

Office of Artificial Intelligence Policy: Creates a new office (usually within the attorney general’s or governor’s office) tasked with administering the state’s AI programs, including oversight of the Learning Laboratory.

State Agency Inventory Requirements: State agencies must maintain inventories documenting the AI systems they use.

These inventories give lawmakers a baseline picture of public-sector AI before regulating.

Learning Laboratories as Regulatory Sandboxes

The “Learning Laboratory” concept is central to these model acts. Here’s how they work in practice:

Participants: Selected startups, universities, and open-source projects.

Duration: Typically 12 months, with a possible 12-month extension.

Benefits: Lighter regulatory conditions and reduced penalties.

Requirements: Share data, best practices, and incident reports.

Oversight: Supervised by the Office of AI Policy, which holds revocation authority.

Participants agree to operate in specified geographic areas, share performance reports, implement specific safeguards, and allow state audits. In exchange, they receive relief from certain regulatory requirements, though core consumer protection, privacy, and civil-rights laws still apply.

Regulatory mitigation agreements are time-limited waivers tied to specific safeguards and revocable for violations. A program typically takes effect once the state agency approves the application and the participant signs the agreement.
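The term rules described above (12-month terms, at most one extension, revocation for violations) can be sketched as simple logic. The class and participant names here are hypothetical, and real statutes vary by state:

```python
from datetime import date, timedelta

class SandboxAgreement:
    """Hypothetical model of a regulatory mitigation agreement."""
    TERM_DAYS = 365       # one 12-month term
    MAX_EXTENSIONS = 1    # at most one 12-month extension

    def __init__(self, participant: str, start: date):
        self.participant = participant
        self.start = start
        self.extensions = 0
        self.revoked = False

    @property
    def end(self) -> date:
        """Expiry date grows with each granted extension."""
        return self.start + timedelta(days=self.TERM_DAYS * (1 + self.extensions))

    def extend(self) -> bool:
        """Grant a 12-month extension if one remains and the agreement stands."""
        if self.revoked or self.extensions >= self.MAX_EXTENSIONS:
            return False
        self.extensions += 1
        return True

    def revoke(self) -> None:
        """The Office of AI Policy may revoke for safeguard violations."""
        self.revoked = True

a = SandboxAgreement("ExampleLab", date(2026, 1, 1))
print(a.extend())  # True: one extension allowed
print(a.extend())  # False: extension already used
print(a.end)       # 2028-01-01
```

Encoding the waiver as a dated, revocable object mirrors how these programs work in practice: relief is conditional, bounded in time, and withdrawable at any point.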

The sandbox model is positioned as a pro-innovation alternative to strict rules, but whether it actually protects the American people while enabling AI development remains to be seen.

KeepSanity AI watchers should pay attention to which other states actually adopt these blueprints and with what results.


Federal Efforts to Preempt State AI Laws

By late 2025, dozens of differing state AI bills had produced a patchwork of compliance obligations that major tech firms and many policymakers saw as unsustainable for national AI competitiveness.

Large AI deployers faced a genuine problem: complying with 50 different standards for documentation, impact assessment, transparency, and risk management is enormously costly. A company deploying an AI hiring tool nationally might need different impact assessments, transparency disclosures, and decision algorithms in different states.

The December 2025 Executive Order

President Trump signed “Ensuring a National Policy Framework for Artificial Intelligence” (Executive Order 14365) on December 11, 2025. The order asserts federal supremacy over many aspects of AI regulation through several key provisions:

Policy Goals:

AI Litigation Task Force: The order directs the Department of Justice to establish a task force mandated to challenge state AI statutes that allegedly violate the Commerce Clause or conflict with federal directives. Primary targets include state algorithmic discrimination laws in Colorado and California.

Commerce Department Review: The Secretary of Commerce must, within approximately 90 days, publish a review of state laws that conflict with federal law and might trigger funding consequences.

Other Key Levers

BEAD Funding Conditions: Conditioning eligibility for Broadband Equity, Access, and Deployment funds on not enacting certain AI laws.

FCC AI Reporting: Encouraging the FCC to consider a federal AI reporting standard to preempt state requirements.

FTC Clarification: Directing the Federal Trade Commission to clarify when state mandates requiring “truthful output alteration” are preempted as deceptive practices.

The order also creates roles like a special advisor on AI to coordinate federal efforts and consult with stakeholders across the administration.

This is a bold, contested use of executive power rather than a bipartisan statute, one primed to trigger lawsuits by states and civil-society groups.

KeepSanity AI would cover these disputes only at key inflection points, such as landmark court rulings or major injunctions, not every filing or hearing.

Background: How We Reached the State–Federal AI Standoff

Federal legislative attempts to preempt state AI regulation stalled in 2025 after bipartisan resistance. A proposed 10-year moratorium championed by Senator Ted Cruz (R-TX) aimed to establish uniform federal authority but failed to advance.

Similar moratorium language appeared in other legislative vehicles during 2025, but those efforts were also stripped out during negotiations and government shutdown brinkmanship.

States Filled the Vacuum

In the absence of comprehensive federal AI legislation, several states moved forward with their own approaches:

Colorado: Algorithmic discrimination, impact assessments.

California: Consumer protection, transparency, child safety.

Texas: Employment AI, government procurement.

Tennessee: Deepfakes, creative industries.

New York: Hiring AI, biometric data.

This state-driven activity created genuine compliance barriers for large AI deployers: potentially 50 different standards for documentation, transparency, and risk assessment for businesses deploying AI systems nationally.

The December 2025 executive order is best understood as an attempt to reclaim initiative from Congress and the states, using executive power and agency actions while waiting for a durable legislative framework that still does not exist.

This pattern, with states moving first and the federal government reacting later, is common in tech policy.

KeepSanity AI’s weekly curation helps readers follow only the big structural moves rather than every committee hearing or amendment proposed.

The Politics of State AI Regulation and Federal Preemption

AI governance has scrambled traditional partisan lines, producing fractures inside both parties and unusual state-federal alliances.

Complicated Politics on Both Sides

Republican Divisions:

Democratic Divisions:

Cross-Cutting Concerns

Numerous states across party lines are pursuing legislation addressing AI discrimination, transparency, and child safety.

Public polling shows strong support for AI rules and skepticism toward bans on state action. Majorities of Americans across the political spectrum support limits on AI discrimination, transparency requirements, and child protections, suggesting aggressive federal preemption could carry significant political risks.

The information-overload problem is real: policymakers and innovators are bombarded with overlapping narratives and fear-mongering.

A focused, once-a-week digest like KeepSanity AI offers a sanity-preserving way to track what legislation actually passes and what actually affects AI development and deployment, without the daily noise that burns focus and energy.

Implications for Innovators, Policymakers, and the Public

For anyone building, deploying, or governing AI systems in America, the current landscape requires navigating three overlapping regulatory layers:

Federal Executive Branch Rules: OMB memoranda, FTC guidance, DOJ enforcement priorities, and agency-specific AI strategies carry binding or quasi-binding force. These set baseline expectations for responsible AI use across federal agencies and, increasingly, for companies seeking federal contracts.

Emerging Federal Legislation: As Congress debates comprehensive AI regulation, new requirements will emerge through standalone bills, amendments to existing statutes, or appropriations riders. The subject matter could include liability rules, safety standards, or data-protection requirements.

State Statutes and Sandboxes: A growing web of state AI laws, ranging from algorithmic discrimination requirements to child-safety protections, creates fragmented compliance obligations. Learning Laboratory programs in some states offer flexibility, but participation conditions impose their own governance costs.

Practical Guidance

For policymakers and staffers: “Model acts” and preemption EOs are starting points, not final answers. They will be re-interpreted by courts and reshaped by public opinion over the next 2–3 years. Assess rather than assume what any particular provision will mean in practice.

For businesses and innovators: How you document AI capabilities, conduct bias assessments, and establish governance structures now will affect compliance under whatever framework eventually emerges. Building for regulatory flexibility is essential.

For ordinary citizens: You’re primarily affected downstream, through the availability of AI-powered services, protections against algorithmic discrimination, deepfake and child-safety safeguards, and the stability of internet infrastructure funded by programs like BEAD that impose conditions on federal funds.

Because so much is in flux, it’s unrealistic for busy professionals to follow daily updates. Relying on curated, high-signal sources like KeepSanity AI that surface only the major shifts is a strategic advantage, not a shortcut.

The balance between state experimentation and federal uniformity will likely be one of the defining AI policy stories of the late 2020s. Staying calmly informed, rather than doom-scrolling, positions you to advance your work regardless of how circumstances evolve.


FAQ: State Artificial Intelligence in the United States

What is the difference between federal AI policy and state AI policy?

Federal AI policy is set mainly by Congress, the White House, and agencies like OMB, the FTC, DOJ, and Commerce. It governs interstate commerce, federal procurement, national security, and civil-rights enforcement at the national level. State AI policy, by contrast, is made by state legislatures and agencies and tends to focus on consumer protection, employment practices, education, and state-procured systems. When federal and state laws conflict, federal law often preempts state law under the Supremacy Clause, but the boundaries are actively contested. The December 2025 Executive Order is specifically designed to test and expand those boundaries through litigation and funding conditions, which is why the legal outcomes over the next few years will reshape what authority states actually retain.

How do state AI “Learning Laboratories” actually work for companies?

Learning Laboratories function like regulatory sandboxes where selected participants, such as startups, universities, or open-source projects, agree to share data about their AI systems, risks, and incidents in exchange for time-limited waivers or reduced penalties. Participants operate under supervision from a state Office of AI Policy, typically for 12-month terms that can be extended once. These programs are not blanket immunities from all regulation. Participants must still comply with core consumer-protection, privacy, and civil-rights laws, and can be removed from the program if they violate their regulatory mitigation agreement. For companies considering participation, the benefits include lighter compliance burdens during testing phases, but the transparency requirements and audit obligations create their own management overhead.

Could federal preemption stop states from protecting residents from AI harms?

Strong preemption language could limit certain kinds of state laws, especially those that directly regulate how AI models function across state lines or impose documentation requirements that burden interstate commerce. However, most proposals, including the Trump administration’s December 2025 EO, carve out specific areas where states retain authority, including child safety, state procurement rules, and infrastructure policy. Any sweeping attempt to ban state AI restrictions will likely face legal challenges from state attorneys general and civil-society groups, so the outcome will depend on court rulings and future congressional action. The durability of current state protections is genuinely uncertain, making this an area worth monitoring through curated sources rather than assuming any single EO represents final resolution.

How should a startup or AI team keep up with all these fast-moving rules without burning out?

Designate one owner for AI governance tracking rather than expecting everyone to follow everything. Lean on high-quality secondary sources like weekly, ad-free newsletters (KeepSanity AI being designed specifically for this purpose) that filter the signal from the noise. Watch for official guidance from key regulators, such as the FTC, OMB, and state attorneys general, and prioritize actual binding laws and enforcement actions over draft bills and op-eds. Chasing every hearing or rumor is counterproductive; focusing on the handful of real regulatory turning points per month is usually sufficient to stay compliant and strategically informed. Build your internal processes for flexibility so you can adapt when rules change rather than scrambling to catch up.

Where can I find official documents related to U.S. state artificial intelligence policy?

For federal materials, the White House website and Federal Register contain executive orders, OMB memos, and agency guidance. State legislative websites host bills and enacted statutes; search by state and “artificial intelligence” to find current provisions and their effective dates. Official agency pages (DOJ, FTC, Department of State) publish AI strategies and compliance plans. For further information on model legislation, organizations like ALEC publish their template bills publicly. Be careful to distinguish between official government texts, advocacy group “model bills,” and secondary commentary from law firms or media outlets; each has different authority and potential bias. Curated outlets like KeepSanity AI help readers avoid wading through hundreds of pages each week by surfacing only what actually matters.