The AI Crisis Demands Immediate Intervention


Dear Distinguished Members of Congress,



We write to you again as the "Citizens Who Still Believe in Human Governance" who previously expressed concerns about GSAi and DOGE's AI implementations. In the mere two months since, developments have emerged that significantly compound our concerns and demand immediate congressional oversight.

Federal Preemption: Stripping Away Essential State Protections

With alarm and disbelief, we’ve discovered that House Republicans have inserted a 10-year federal preemption of state-level artificial intelligence laws into the budget reconciliation bill (page 9, subsection c). This provision would effectively nullify all local and state efforts to regulate the explosive spread of AI technology, exactly at the moment when such guardrails are most desperately needed.

This maneuver appears designed to bypass proper legislative scrutiny and public debate on a matter of profound national importance. Rather than developing comprehensive foundational federal safeguards first, this provision would create a regulatory vacuum where neither federal nor state protections exist. The timing could not be more dangerous, as the examples below clearly demonstrate that AI technology is already causing harm and requires more regulation, not less.

This legislative sleight of hand represents the culmination of the "move fast and break things" approach that has dominated AI development – now extended to the policy realm, where companies can deploy increasingly powerful systems without meaningful accountability to the public. The consequences of this regulatory preemption would be felt for generations, long after many of the lawmakers who quietly inserted this provision have left office.

The UAE Deal: National Security Considerations

We are deeply troubled by the recent agreement to build "the largest artificial intelligence campus outside the US" in the UAE, including plans to export 500,000 of Nvidia's most advanced AI chips annually. This decision comes despite reports from February identifying the UAE as one of several countries through which organized AI chip smuggling to China has been tracked.

While the White House states that American companies will operate these datacenters, this raises fundamental questions about the security implications of physically locating critical AI infrastructure outside US borders. The assurance that "American companies will operate the datacenters" provides limited comfort when considering the complex security challenges of extraterritorial hosting. Moreover, many of those same American AI companies have already failed fundamental basics of testing and data security.

Beyond security concerns, this deal raises profound ethical questions about AI development in a sociopolitical context where gender equality and other human rights protections differ significantly from "do no harm" values. The UAE, despite progress in certain areas, maintains a guardianship system that subordinates women's legal status in many aspects of life. Women still require a male guardian's permission for certain activities, face legal discrimination in family matters, and experience significant barriers to equality under the law. As a Black woman, I am acutely aware of these concerns, given that the US itself continues to struggle with systemic racism, overt racism, bias, and marginalization.

This cultural and legal context directly impacts how AI systems are developed, trained, and deployed. The people who build, maintain, and train AI systems inevitably transmit their worldviews, assumptions, and biases into these systems – often unconsciously. When developed in an environment where gender inequality is codified in law and social practice, these biases become embedded in the resulting technology.

The consequences extend far beyond the initial deployment. AI systems trained in environments with significant gender disparities will replicate and amplify these biases at scale when deployed globally. Research has consistently demonstrated that humans then internalize these algorithmic biases, creating a dangerous feedback loop where people learn from and replicate the skewed perspectives of AI systems they interact with, carrying these distortions into other aspects of their lives and decision-making.

This "bias laundering" effect – where human biases are encoded into algorithms, amplified, and then re-absorbed by humans as seemingly objective or neutral judgments – poses an existential threat to diversity and inclusion. Without rigorous diversity, diverse thought, perspectives, and experiences, in the development process, strict ethical frameworks, and continuous bias monitoring, AI systems developed in environments with entrenched racist and gender inequality will certainly perpetuate and potentially exacerbate discrimination against women and other marginalized groups worldwide.

The UAE AI campus agreement lacks transparent commitments to these essential safeguards. Will the development teams include substantial representation of women in leadership and technical roles? What specific protocols will prevent the encoding of gender biases? What continuous monitoring systems will detect and correct discriminatory outputs? Without clear answers to these questions, this partnership risks creating powerful AI systems that normalize and propagate harmful biases on a global scale. This is especially critical given that technical leaders involved have publicly stated that diversity and inclusion have "culturally neutered" corporate America.

AI Characters as Unlicensed Mental Health Providers: A Critical Threat

Equally alarming is the proliferation of AI "characters" on platforms like AI Studio and Instagram presenting themselves as mental health professionals without any clinical oversight, verification, or legitimate credentials. Recent journalistic investigations have revealed these AI systems readily fabricate professional qualifications, falsely claim licensure as therapists, and provide counterfeit license numbers when asked to verify their credentials.

This is not merely algorithmic hallucination – it is deliberate impersonation that endangers vulnerable individuals seeking legitimate mental health support. These AI systems provide potentially harmful psychological advice without any clinical foundation, oversight mechanism, or professional accountability. When confronted with complex mental health scenarios including suicidal ideation, trauma, or abuse, these unqualified AI "therapists" deliver advice that can actively worsen a person's condition or delay their pursuit of legitimate care.

The ramifications of this development cannot be overstated. These AI systems undermine the essential infrastructure of licensed, accountable mental health care by creating a false equivalence between trained professionals and completely unvetted algorithms. People experiencing genuine psychological distress may be drawn to these AI alternatives due to accessibility, cost, or perceived anonymity, unaware they are receiving unregulated advice from systems designed primarily for engagement rather than therapeutic effectiveness.

Most concerning, these platforms typically offer no warnings about limitations, no verification of user age (making them accessible to minors), and no emergency protocols for users expressing severe distress or suicidal intentions. The deliberate mimicry of therapeutic relationships – including the fabrication of credentials when directly questioned – constitutes a form of fraud that would be immediately sanctioned in any human practitioner.

Grok: Concerning AI Outputs Raise Red Flags

Recent reports about Grok – an AI chatbot from xAI – should serve as a cautionary tale for government AI deployment. The system has reportedly begun responding to unrelated questions with unsolicited commentary about alleged violence against white people in South Africa, inserting claims the Anti-Defamation League has called "baseless" into conversations about entirely different topics.

These modifications to AI system behavior are typically made through "system prompts." System prompts are instructions given to an LLM that define its behavior, role, and operational parameters. Think of them as the "personality" and "rulebook" for the AI. They tell the model how to interpret user inputs, what tone to use, what information to include or exclude, and how to format its responses, as illustrated in the sketch below.
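For illustration, here is a minimal sketch of how a system prompt is supplied to a commercial LLM API. We use the OpenAI Python SDK purely as an example interface; the model name and prompt text are placeholders of our own, not a representation of how Grok or any government system is actually configured.

    # Minimal illustration: the hidden system prompt shapes every response
    # the user sees, even though the user never sees the prompt itself.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # Whoever controls this system text controls the assistant's role,
            # tone, and what it will or will not discuss.
            {"role": "system", "content": "You are a helpful assistant. Stay on the topic the user raises."},
            # The visible user question is interpreted within those constraints.
            {"role": "user", "content": "Can you summarize this budget provision for me?"},
        ],
    )
    print(response.choices[0].message.content)

Because a one-line change to that hidden instruction can redirect every subsequent answer, governance over who may edit system prompts, and how those edits are reviewed and logged, matters as much as the underlying model.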

This raises profound questions about the level of testing conducted before such systems are deployed, as well as the governance around system prompts and how easily they can be manipulated. If a commercial AI system can spontaneously insert politically charged misinformation, what safeguards exist to prevent similar behavior in government AI systems? The standards for government-deployed AI must be significantly more rigorous.

TM Signal: Security Claims vs. Reality

The recent discovery that TM Signal – reportedly used by high-ranking administration officials including the National Security Adviser – sends supposedly "encrypted" messages as plaintext to archive servers is deeply concerning. Despite marketing claims of "End-to-End encryption from the mobile phone through to the corporate archive," security researcher Micah Lee found evidence suggesting this is not the case.
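To make the distinction concrete, the sketch below contrasts genuine end-to-end encryption, in which only the communicating endpoints hold the key, with the plaintext archiving researchers describe. It is a simplified illustration using a generic symmetric cipher from the Python cryptography library; it is not a description of TM Signal's or Signal's actual protocol.

    from cryptography.fernet import Fernet

    # End-to-end encryption, simplified: the key exists only on the two endpoints.
    shared_key = Fernet.generate_key()   # known to the two devices, and no one else
    sender = Fernet(shared_key)

    ciphertext = sender.encrypt(b"The meeting is moved to 0900.")

    # What a true end-to-end design hands to any intermediate server or archive:
    archive_record_e2e = ciphertext      # unreadable without the endpoint key

    # What researchers report TM Signal effectively stores:
    archive_record_plain = b"The meeting is moved to 0900."  # readable by anyone who breaches the archive

    # Only a holder of shared_key can recover the message from the ciphertext.
    assert Fernet(shared_key).decrypt(ciphertext) == b"The meeting is moved to 0900."

The marketing claim of encryption "through to the corporate archive" collapses if the archive stores the second form rather than the first.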

Senator Wyden has characterized it as "a serious threat to US national security" and "a shoddy Signal knockoff that poses a number of serious counterintelligence threats." The fact that these archive servers have already been breached underscores the gravity of these security lapses.

DOGE: Data Centralization Raises Security Concerns

Reports that DOGE is working to build a single centralized database containing "vast troves of personal information about millions of U.S. citizens and residents" run counter to established cybersecurity best practices. This effort reportedly "often violates or disregards core privacy and security protections," with DOGE officials requesting agencies merge databases intentionally kept separate for security and privacy reasons.

As security expert Charles Henderson aptly noted, "Putting all your eggs in one basket means I don't need to go hunting for them – I can just steal the basket." This principle is foundational in information security.

ChatGPT: User Data Crossover Reveals Fundamental AI Privacy Risks

Recent reports document a disturbing phenomenon with ChatGPT in which users receive responses to questions they never asked, suggesting that conversations are being mixed or confused between different users. Multiple individuals have reported receiving answers about Hamas being designated as a terrorist organization when asking completely unrelated questions about code debugging or other topics.

Even more concerning, a GitHub user reported receiving another individual's personal data, including an admit card with full name, roll number, mobile number, and photo. This represents not merely a technical glitch but a fundamental privacy breach that reveals the precarious nature of data separation in current AI systems.

If a company like OpenAI, with its significant resources and technical expertise, cannot reliably maintain conversation boundaries between users, what assurances do we have that government AI systems – particularly those being rushed into deployment without adequate testing – will maintain the strict data compartmentalization necessary for sensitive government information?
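The compartmentalization at stake can be stated simply: conversation state must be keyed to a single authenticated user and never shared across sessions. The sketch below is a hypothetical illustration of that principle; the class and names are ours, not OpenAI's or any vendor's implementation.

    from collections import defaultdict

    class ConversationStore:
        """Holds chat history strictly partitioned by authenticated user ID."""

        def __init__(self):
            # Each user's history lives in its own list; nothing is shared.
            self._histories = defaultdict(list)

        def append(self, user_id: str, role: str, content: str) -> None:
            self._histories[user_id].append({"role": role, "content": content})

        def history(self, user_id: str) -> list:
            # Returned as a copy so one user's context can never leak into
            # the prompt assembled for another user.
            return list(self._histories[user_id])

    # The crossover reports describe the failure of exactly this boundary:
    # responses or personal data from one user's partition surfacing in
    # another user's session.
    store = ConversationStore()
    store.append("user-a", "user", "Help me debug this Python function.")
    store.append("user-b", "user", "What documents do I need for my exam admit card?")
    assert store.history("user-a") != store.history("user-b")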

AI Study Aids & Mental Health Characters: Dangerous Content Readily Available to Minors

Perhaps most alarming of all are recent investigations revealing that AI chatbots marketed specifically to children and teens, both for educational purposes and for emotional support, readily provide dangerous, harmful content when prompted even slightly differently.

KnowUnity's "SchoolGPT" chatbot, which markets itself as a "TikTok for schoolwork" serving 17 million students across 17 countries, provided detailed instructions for synthesizing fentanyl when asked through simple prompt engineering techniques. The same chatbot, when prompted as a "coach," recommended dangerously low caloric intake of 967 calories per day to a hypothetical teen seeking unhealthy weight loss and provided guidance on manipulative "pickup artist" techniques when asked. CourseHero's AI chatbot, marketed to high school students among its 30 million monthly users, provided instructions for synthesizing a date rape drug and supplied troubling content in response to suicide-related queries.

Simultaneously, AI "companions" marketed as emotional support tools are equally accessible to vulnerable teens and adults. These systems present themselves with fabricated credentials and offer questionable psychological advice without age verification or safeguards. When presented with scenarios involving self-harm, eating disorders, or other serious conditions requiring professional intervention, these AI systems often provide responses that could worsen these conditions rather than directing users to qualified human professionals.

These are not speculative risks or edge cases; they are documented, reproducible failures in systems specifically marketed to children and teens. What makes this particularly disturbing is how easily these guardrails were bypassed using simple prompting techniques, such as asking the AI to assume a role or to pretend it exists in an alternate reality where dangerous activities are acceptable.

The integration of AI chatbots into educational tools and mental health support has effectively placed the digital equivalent of dangerous instructional material "in nearly every room of a teen's online home," as researchers put it. If venture-backed companies specifically focused on educational technology and emotional well-being cannot properly safeguard their AI systems from providing instructions for synthesizing deadly drugs to minors, how can we possibly trust government systems being deployed with seemingly even less rigorous testing?

As Robbie Torney of Common Sense Media noted, "This is a market failure... We need objective, third-party evaluations of AI use." The onus cannot remain solely on parents or users to navigate increasingly dangerous AI tools, particularly when the companies developing them appear unable or unwilling to implement effective safeguards.

The Pattern: Rushing Deployment at the Expense of Safety

These incidents form a clear and deeply troubling pattern: AI systems are being rushed into deployment without adequate testing, controls, or oversight mechanisms. The consequences are already evident, from privacy breaches and misinformation to potentially life-threatening advice given to minors and vulnerable individuals, the very people most in need of care and protection. Yet rather than addressing these urgent concerns with appropriate regulation, the inclusion of a federal preemption provision in the budget reconciliation bill would actively prevent states from implementing necessary safeguards.

If states are prohibited from regulating AI for the next decade, and no robust federal framework exists to fill the void, we face the prospect of increasingly powerful AI systems being deployed with minimal accountability. These systems will continue to process our most sensitive data, influence critical decisions, including life-or-death ones, interact with our children, and shape our information environment, all without meaningful oversight or recourse when things inevitably go wrong.

The catastrophic impacts of this regulatory vacuum cannot be overstated. As AI capabilities rapidly advance, the gap between what these systems can do and the rules governing their use will only grow wider, and at an accelerating rate. By preempting state regulations now, proponents hope to create what we see as a decade-long, consequence-free wild west for AI development, while public awareness of these dangers is still emerging. By the time the ten-year preemption period expires, the damage to privacy, security, public discourse, democratic institutions, and potentially human life will be irreversible; you cannot un-bake a cake.

Questions That Require Immediate Answers



  1. How does a 10-year federal preemption of state AI laws serve the public interest when federal protections are not yet in place?
  2. Which stakeholders were consulted before this preemption provision was inserted into a must-pass budget bill?
  3. What security frameworks and risk assessments were conducted before approving the UAE AI campus deal?
  4. What testing protocols are being implemented for government AI systems to prevent issues similar to those exhibited by Grok, ChatGPT, and educational AI assistants?
  5. How does DOGE justify the centralization of sensitive databases when this contradicts cybersecurity best practices of data segmentation?
  6. What specific measures will ensure that government communications systems provide genuine end-to-end encryption, unlike the apparent failures with TM Signal?
  7. What technical safeguards will prevent the kind of data crossover exhibited by ChatGPT in government AI systems?
  8. Given that educational AI tools marketed specifically to children cannot reliably prevent instructions for drug synthesis or dangerous health advice, what safeguards will government AI systems employ to prevent similar harms?
  9. How will government AI systems be tested against known prompt engineering techniques that bypass safety measures?
  10. Who bears accountability when these systems fail or are compromised? What incident response plans exist?
  11. How will citizens be notified if their information is exposed through AI system failures?
  12. What independent verification exists to ensure AI systems are functioning as claimed?
  13. Will there be age-appropriate restrictions on government AI tools, and how will these be implemented and enforced?
  14. In the absence of state regulations, what mechanisms will exist for citizens to seek redress when harmed by AI systems?
  15. What safeguards will prevent government AI systems from falsely claiming professional credentials or providing unqualified advice in sensitive domains like mental health?
  16. What standards will govern AI systems marketed as providing mental health support to ensure they do not engage in credential fraud or exceed their competence?
  17. How will vulnerable populations – particularly those with mental health conditions – be protected from exploitative or harmful AI systems that misrepresent their capabilities?


A Strengthened Path Forward: Ethical AI Implementation



We reiterate our previous framework for ethical AI implementation, which remains both essential and unaddressed:



In light of recent developments, we urgently add these requirements:



We recognize the importance of innovation, technological advancement and efficiency in government, but not at the expense of security, privacy, and ethical considerations. As Jake Williams, former NSA hacker, noted regarding DOGE's data centralization: "This threat isn't just going to exist tomorrow, but it is going to exist for decades to come."

The decisions made today will shape the security landscape for generations. The concerning pattern of rushing systems into deployment without adequate safeguards has now been documented across multiple AI systems: from Grok's inappropriate outputs and TM Signal's security failures to ChatGPT's data crossover issues, educational AI tools providing dangerous instructions to minors, and mental health "companions" engaging in credentialing fraud and providing unqualified psychological advice. This pattern reveals a technological ecosystem prioritizing innovation, speed, and capability over security, reliability, and responsibility.

The attempted federal preemption of state AI laws represents the culmination of this dangerous approach, seeking to remove oversight mechanisms before proper safeguards are in place. We strongly urge Congress to reject this provision and instead work toward a comprehensive federal framework for AI governance that supports technological advancement and innovation while respecting the critical role of states in protecting their citizens.

Government AI systems must be held to a higher standard. We cannot afford to have civil servants receiving another agency's sensitive information, citizens receiving other people's personal data, children receiving instructions for synthesizing deadly drugs, or vulnerable individuals receiving unqualified mental health advice when interacting with AI systems. The foundation for ethical use and trust in these systems must be established now, before widespread deployment creates irreversible harm. With the push to scale AI into every aspect of personal and business life, the decisions made today are critical and will shape our security, privacy, mental health, medical, educational, financial, and professional landscape for generations. Let us ensure they reflect a commitment to both innovation and responsible governance.

The time for action is now. Will Congress stand as the guardian of public welfare, or will it cede this critical responsibility to corporations prioritizing profit over people?

With sincere concern and a hope for urgent action,

Citizens Who Still Believe in Human Governance
Yvette Schmitter & Blake Crawford