Federal CISO Urges Caution as Agencies Explore Generative AI

The U.S. government's federal chief information security officer, Chris DeRusha, has cautioned federal agencies to exercise prudence as they explore generative artificial intelligence (AI). Speaking at a FAIR Institute conference of cybersecurity and risk managers, DeRusha emphasized the need to address the risks associated with this technology before fully embracing it.

Generative AI tools, such as OpenAI's ChatGPT and Google's Bard, have increasingly caught the attention of companies and government agencies for their remarkable capacity to parse vast datasets and provide conversational responses. Where government use is concerned, however, DeRusha stressed the importance of proceeding carefully.

"We can't just break the rules and have use without understanding the risk," DeRusha stated. He emphasized that comprehensive federal policies and parameters for evaluating the cybersecurity risks linked to AI are on the horizon.

The Biden administration has previously expressed its intention to release a federal policy governing the use of generative AI in the fall, along with an executive order. DeRusha pointed out that this process is being overseen by the West Wing chief of staff's office, indicating the high priority it holds within the administration.

Federal agencies have submitted numerous proposals for employing generative AI and other forms of AI. The Energy Department, for instance, aims to use AI to build intuitive interfaces that give employees easier access to its extensive databases. The Department of Homeland Security, home to the Cybersecurity and Infrastructure Security Agency (CISA), seeks to harness AI to manage cybersecurity alerts.

DeRusha emphasized the necessity for agencies to gain a profound understanding of the data used to train AI systems and to conduct thorough testing for potential security vulnerabilities. He also stressed the importance of establishing a robust process for disclosing any identified vulnerabilities.

Eric Goldstein, the executive assistant director for cybersecurity at CISA, acknowledged that the work of understanding and securing AI is ongoing and challenging. Corporate cybersecurity executives are grappling with similar issues as the rapid proliferation of generative AI tools introduces new risks. Many of these tools are freely available, allowing employees to experiment with them outside the purview of their organization's cybersecurity department.

Kurt John, the chief security officer at Expedia Group, expressed his desire to leverage generative AI to interpret data from various sources and gain insights into broader cybersecurity trends within and outside the company.

Beyond potential vulnerabilities in systems built with AI, organizations must also weigh new risks, including the possibility that hackers could infiltrate networks and manipulate AI models or the data that fuels them. DeRusha urged organizations using AI to consider the potential consequences of AI making incorrect decisions.

Furthermore, he encouraged corporate security chiefs to move beyond thinking about cybersecurity risk in isolation. DeRusha emphasized the importance of CISOs being willing to disclose vulnerabilities they encounter, as these issues may be relevant to other entities, thus enhancing collective security. He concluded by saying, "We together are managing the nation's risk."