Democrats Demand Answers on DOGE’s Use of AI

Democrats on the House Oversight Committee fired off two dozen requests Wednesday morning pressing federal agency leaders for information about plans to install AI software throughout federal agencies amid the ongoing cuts to the government's workforce.

The barrage of inquiries follows recent reporting by WIRED and The Washington Post concerning efforts by Elon Musk’s so-called Department of Government Efficiency (DOGE) to automate tasks with a variety of proprietary AI tools and access sensitive data.

“The American people entrust the federal government with sensitive personal information related to their health, finances, and other biographical information on the basis that this information will not be disclosed or improperly used without their consent,” the requests read, “including through the use of an unapproved and unaccountable third-party AI software.”

The requests, first obtained by WIRED, are signed by Gerald Connolly, a Democratic congressman from Virginia.

The central purpose of the requests is to press the agencies into demonstrating that any potential use of AI is legal and that steps are being taken to safeguard Americans’ private data. The Democrats also want to know whether any use of AI will financially benefit Musk, who founded xAI and whose troubled electric car company, Tesla, is working to pivot toward robotics and AI. The Democrats are further concerned, Connolly says, that Musk could be using his access to sensitive government data for personal enrichment, leveraging the data to “supercharge” his own proprietary AI model, known as Grok.

In the requests, Connolly notes that federal agencies are “bound by multiple statutory requirements in their use of AI software,” pointing chiefly to the Federal Risk and Authorization Management Program, which works to standardize the government’s approach to cloud services and ensure AI-based tools are properly assessed for security risks. He also points to the Advancing American AI Act, which requires federal agencies to “prepare and maintain an inventory of the artificial intelligence use cases of the agency,” as well as “make agency inventories available to the public.”

Documents obtained by WIRED last week show that DOGE operatives have deployed a proprietary chatbot called GSAi to approximately 1,500 federal workers at the General Services Administration (GSA). The GSA oversees federal government properties and supplies information technology services to many agencies.

A memo obtained by WIRED reporters shows employees have been warned against feeding the software any controlled unclassified information. Other agencies, including the departments of Treasury and Health and Human Services, have considered using a chatbot, though not necessarily GSAi, according to documents viewed by WIRED.

WIRED has also reported that the United States Army is currently using software dubbed CamoGPT to scan its records systems for any references to diversity, equity, inclusion, and accessibility. An Army spokesperson confirmed the existence of the tool but declined to provide further information about how the Army plans to use it.

In the requests, Connolly writes that the Department of Education possesses personally identifiable information on more than 43 million people tied to federal student aid programs. “Due to the opaque and frenetic pace at which DOGE seems to be operating,” he writes, “I am deeply concerned that students’, parents’, spouses’, family members’ and all other borrowers’ sensitive information is being handled by secretive members of the DOGE team for unclear purposes and with no safeguards to prevent disclosure or improper, unethical use.” The Washington Post previously reported that DOGE had begun feeding sensitive federal data drawn from record systems at the Department of Education to analyze its spending.

Education secretary Linda McMahon said Tuesday that she was proceeding with plans to fire more than a thousand workers at the department, joining hundreds of others who accepted DOGE “buyouts” last month. The Education Department has lost nearly half of its workforce—the first step, McMahon says, in fully abolishing the agency.

“The use of AI to evaluate sensitive data is fraught with serious hazards beyond improper disclosure,” Connolly writes, warning that “inputs used and the parameters selected for analysis may be flawed, errors may be introduced through the design of the AI software, and staff may misinterpret AI recommendations, among other concerns.”

He adds: “Without clear purpose behind the use of AI, guardrails to ensure appropriate handling of data, and adequate oversight and transparency, the application of AI is dangerous and potentially violates federal law.”
