Four steps for successful generative AI governance


With the federal government clearly embracing the potential benefits of generative AI and the number of deployments across agencies skyrocketing, we can expect more debate between those who want to jump into the AI pool quickly, with both feet, to reap the benefits and those who are more concerned about the security and privacy implications and want to take a more cautious approach.

Both sides of the argument have their merits. Ultimately, agencies should proceed with their AI plans while taking steps to address the very real security concerns.

On one hand, it would appear the faction that wants to jump in quickly has the advantage. We recently saw the release of a White House AI Action Plan and some accompanying executive orders that may effectively lift the guardrails previously put in place by those advocating caution. Even more to the point, the Government Accountability Office recently reported a ninefold increase in generative AI use cases across federal agencies from 2023 to 2024, and the General Services Administration has awarded multiple deeply discounted contracts for generative AI solutions to OpenAI, Anthropic and others – indicating that this train has pretty much already left the station.

Still, legitimate security and privacy concerns abound. Many in government have raised concerns about how these AI tools capture federal government data and where that data goes, both internally and externally. Other issues revolve around the cost of storing all the data the platforms generate on the back end and the information overload that results from AI deployments.

It is clear that federal leadership is betting big on AI. While security and privacy concerns are always relevant, it’s important to reiterate that security is a business-enabling service line, one that maintains compliance and drives mission risk down to an acceptable level.

To strike that balance, agencies should take four steps to address security and privacy concerns when deploying new AI use cases.

Step 1: Understand the data, its sensitivity and its value. Agency leaders should conduct an inventory of all data that will touch AI tools on either the front or the back end. That understanding dictates how widely different data sets may proliferate and determines where they can and cannot safely go.
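To make this concrete, the sketch below shows one way such an inventory might be expressed in code. The asset names, sensitivity tiers and boundary rule are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CUI = 3         # Controlled Unclassified Information
    RESTRICTED = 4


@dataclass
class DataAsset:
    name: str
    owner: str
    sensitivity: Sensitivity
    touches_ai: bool  # will this asset flow to or from an AI platform?


# Hypothetical inventory entries; a real inventory would be far larger
inventory = [
    DataAsset("press_releases", "Public Affairs", Sensitivity.PUBLIC, True),
    DataAsset("benefits_case_files", "Program Office", Sensitivity.CUI, True),
]


def may_leave_boundary(asset: DataAsset) -> bool:
    """Illustrative rule: only public or internal data may reach an external AI platform."""
    return asset.sensitivity in (Sensitivity.PUBLIC, Sensitivity.INTERNAL)


for asset in inventory:
    if asset.touches_ai and not may_leave_boundary(asset):
        print(f"Review required before AI use: {asset.name} ({asset.sensitivity.name})")
```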

Step 2: Conduct a readiness review. This review should tie governance to intake, ensuring that platforms not only meet ethical and privacy guidelines, but also give policy enforcement points visibility and allow technical controls to be applied. Similar to the cloud readiness reviews agencies conducted when they began embracing cloud years ago, agencies should have a model for assessing generative AI risk rather than blocking entire platforms.

This step connects the mission goal and use case to how the project can be safely enabled. That can be done by scanning for sensitive data moving to and from the platform and by monitoring user activities on the platform such as “post,” “share,” “copy” or “edit.” The review should also take into account the large language models (LLMs) each platform uses, whether the platform has ever been breached, whether it has a presence in countries banned from a U.S. federal perspective, how the platform may use government-supplied data externally, and other factors.
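As a rough illustration of what such scanning might look like, the snippet below flags controlled user activities whose content matches a crude sensitive-data pattern. The event fields, activity list and pattern are assumptions for the example; real deployments would rely on a security platform’s managed data loss prevention profiles rather than ad hoc regular expressions.

```python
import re

# Hypothetical activity events from a policy enforcement point; field names are assumptions
events = [
    {"user": "analyst1", "platform": "gen-ai-tool", "activity": "post",
     "content": "Summarize FY25 budget themes"},
    {"user": "analyst2", "platform": "gen-ai-tool", "activity": "share",
     "content": "Case notes for SSN 123-45-6789"},
]

# Crude pattern for illustration only
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CONTROLLED_ACTIVITIES = {"post", "share", "copy", "edit"}


def review_event(event: dict) -> str:
    """Block controlled activities that would send sensitive data to the platform."""
    if event["activity"] in CONTROLLED_ACTIVITIES and SSN_PATTERN.search(event["content"]):
        return "block"
    return "allow"


for event in events:
    print(event["user"], event["activity"], "->", review_event(event))
```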

Step 3: Determine the future goals of the project. It’s important to keep in mind how AI use cases are likely to evolve in order to avoid “algorithmic lock-in” and a future need to rip and replace, which will be very difficult when it comes to AI. This should be accompanied by policies governing each instance of a particular platform, addressing not only where the boundaries are, but also whether the platform should reside in a completely air-gapped environment, how training models are brought in and out, how user activities are understood and controlled, and the sensitivity of the data that flows to and from the platform.
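One lightweight way to capture such policies is as a per-platform configuration that tooling can check before allowing an activity. The keys and values below are hypothetical and would need to reflect an agency’s actual boundaries and deployment model.

```python
# Hypothetical per-platform policy; keys and values are illustrative assumptions
platform_policy = {
    "gen-ai-tool": {
        "deployment": "agency-hosted",      # vs. "vendor-hosted" or "air-gapped"
        "max_data_sensitivity": "CUI",      # ceiling on data allowed in or out
        "allowed_activities": ["post", "edit"],
        "model_updates": "manual review",   # how training models move in and out
    }
}


def activity_permitted(platform: str, activity: str) -> bool:
    """Return True only if the platform has a policy that explicitly allows the activity."""
    policy = platform_policy.get(platform)
    return policy is not None and activity in policy["allowed_activities"]


print(activity_permitted("gen-ai-tool", "share"))  # False: sharing falls outside the boundary
```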

Step 4: Ask for help. The steps above may sound straightforward and easy to implement, but they require a lot of work and resources. Fortunately, private sector companies have experience with these initiatives and can handle much of the extensive manual work involved in risk assessment and management. Take advantage of these resources whenever possible.

The federal government is currently positioned to immerse itself fully in the AI revolution, even as it pulls back from it in some ways. Ultimately, the government can take advantage of the extraordinary benefits of these tools if it fully assesses the risks and proceeds carefully. In this way, agencies can take advantage of the technology now without increasing their risk in the future.

Mark Mitchell, enterprise security architect with Netskope, specializes in zero-trust and TIC 3.0 architectures, technical design and security modeling over multiple platforms including mobile devices and cloud services. Prior to joining Netskope, he served as an enterprise architect and enterprise security architect at the Office of the Comptroller of the Currency.




