Sep 23, 2024
The spotlight that generative AI has placed on advancing computing capabilities, and the business's need to consume them, has created an unbalanced pressure point for CISOs, CPOs, and CROs. These leaders are charged with ensuring the sustainability of operations by applying the controls, programs, and technologies that provide the compliance assurance necessary to do business and compete in an incredibly regulated world. The use of AI in your company will change digital business ecosystems financially, operationally, and organizationally. Security, risk, and privacy executives are at the tip of the spear in ensuring these changes can happen while your company remains secure.

The Bow Wave of AI & the CISO's Imperative

Not even a year ago, the idea of prioritizing precious cybersecurity and risk management resources for AI enablement would, at most companies, have received the eye roll that often accompanies R&D programs for quantum computing and encryption defense. But as the world of business and technology goes, that changed on a dime late last year with the advancement and availability of generative AI and large language model (LLM) processing. Outside of machine learning and AI companies, academia, and advanced technology firms, users in normal business operating environments were not using advanced AI to support business outcomes. That has all changed. The question for security, risk, and privacy practitioners in the coming months is not just what has changed, but which priorities demand our focus. Generative AI is not just the new super-fancy iPhone that users want to use; it truly has the capability to transform how they work and improve their effectiveness, efficiency, and output. Like the modern calculator in the late 1960s, users will flock to learn the technology and redefine how they work.
However, the business impact goes well beyond a user-centric embrace of any specific technology. The common availability of these capabilities, applied in any given business sector, can have a massive positive impact on our organizations. For some businesses, the initial capabilities may take the form of improved consumer or customer engagement, technology and product development, internal financial processing, fraud and financial crimes defense, marketing platforms, or even technology and security initiatives. For others, the changes could be starker, revolutionizing their entire go-to-market, product delivery stack, or supply chain. In all cases, for security, risk, and privacy programs, AI has gone from a below-the-line nice-to-have to an above-the-line priority: businesses must be able to embrace these formidable new technology opportunities and continue their digital initiatives. Just as important, businesses and agencies are turning to their C(I)SOs to be business enablers through oversight, controls, and thought leadership in this space, while continuing to protect the business by advancing their cybersecurity program capabilities. As businesses come to understand their needs around AI, and the ways their employees are starting to use generative AI, it is important to establish policy and guardrails to avoid catastrophic intellectual property loss, the creation of legal liabilities, or the introduction of significant security risks into technology development that can result in regulatory and compliance violations.
The following are some simple suggestions for implementing policy and oversight for the use of AI within your organization:

- Create a cross-functional oversight team that includes executive leadership, business stakeholders, technology, and security and privacy leadership. This is a cross-functional issue, not a technology issue, and should be run like other business initiatives.
- Consider a short policy that explicitly calls out the "must never do's" and provides guidance to help employees understand the company's position and the ways in which they are accountable.
- Consider bucketing AI use scenarios into categories such as:
  - OK to proceed without approval
  - Use caution and seek advice if necessary
  - Obtain approval before proceeding
  - Prohibited
- Enable bidirectional communication, education, and help-desk-like capabilities so users can ask questions and get fast answers, and so the business can begin to collect and understand broader usage patterns and considerations.
- Create an AI-specific progressive disciplinary policy so employees understand the consequences and the organization drives accountability.

Although policy and oversight are the foundational elements by which a business can move forward and enable its employees to operate these new technologies, the C(I)SO still has core responsibility and accountability for protecting business operations, even where cyber-defensive technologies are not yet available for new technology operating areas. There are five operational imperatives security, risk, and privacy organizations can use to get ahead of what will be one of the most significant technical advancements of their careers.

1. Education

The first is nothing technical at all. It's all about education.
How we educate our users on the challenges, the requirements, the limits, and their responsibilities is as important as all of the other end-user awareness programs organizations run to protect their digital businesses today. Additionally, security, risk, and privacy executives need to educate their cyber workforce and advance its skill sets to ensure their people have the capabilities to do the work necessary to protect and enable the business.

2. Catalog AI & ML in All Areas of Your Business

Second, organizations should focus on discovering and cataloging AI and machine learning in all areas of the business, including development, production, and user-centric needs. The catalog should cover the types of technologies being used; the systems, subsystems, and microservices used internally and in the cloud; and the applications or business systems utilizing these advanced technologies. This is no different from a CMDB effort, with a specific focus on understanding and articulating where AI is being used in your company. As the military saying goes, "you can't protect what you can't see," and the same applies to technology.

3. Apply Safeguards

Third, after understanding where the technology is being used, and on which infrastructure, apply to AI all of the normal safeguards we typically leverage to ensure protected, defensible, and resilient technology platforms, including good hygiene, patch management, and lifecycle controls assurance. Many of these AI use cases are found within R&D environments and should be migrated to segmented production environments; others are simply treated as test environments when in fact many components of production infrastructure are in use. By setting the standard, the tone, and the engagement of infrastructure defense for advanced technology, you will be able to quickly add next-generation protective services to these environments as they become available in the market.
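The cataloging and safeguard steps above amount to building an inventory and tracking control state per entry. A minimal sketch of what one catalog record might look like, assuming hypothetical field names (a real schema would follow your existing CMDB conventions):

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One hypothetical entry in an AI/ML catalog; all fields are illustrative."""
    name: str                      # system, subsystem, or microservice
    environment: str               # e.g. "R&D", "production", "cloud"
    technology: str                # e.g. "LLM API", "in-house ML model"
    owner: str                     # accountable business or technology owner
    data_classes: list = field(default_factory=list)  # data types it touches
    patched: bool = False          # basic hygiene / patch-management status
    segmented: bool = False        # isolated from general R&D networks?

# Illustrative catalog; discovery would populate this from real scans and surveys.
catalog = [
    AIAssetRecord("support-chat-llm", "cloud", "LLM API", "Customer Ops",
                  data_classes=["customer PII"], patched=True, segmented=True),
    AIAssetRecord("fraud-model-poc", "R&D", "in-house ML model", "Finance",
                  data_classes=["transaction data"]),
]

# Flag entries that still need safeguard attention (unpatched or unsegmented).
needs_attention = [a.name for a in catalog if not (a.patched and a.segmented)]
```

Even a simple structure like this makes the safeguard gap visible: the R&D proof of concept surfaces immediately as the record that has not yet been migrated behind production controls.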
4. Data Defense & Access Assurance

The fourth area is data defense and access assurance. Obviously, the magic behind LLMs and other integrated AI technologies is their use of data sets to train, process, and create output for the business. That data is no different from other information entrusted to your business and needs to be protected for your organization to have the confidence to use it in AI efforts. Basic questions (where is the data, who has access to it, what type of data is it, where did it come from, and where is it moving to?) all need to be answered before data is used within advanced technology platforms. The answers are critical for appropriate use, certification, and accreditation of your AI programs, and most likely for jurisdictional and legal regulations and requirements as well.

5. Threat & Risk Management for AI

The fifth and final area is purpose-built threat and risk management processes for your AI environments. From a risk perspective, the business will need to make decisions on appropriate use, legal implications, and market considerations. To be well informed, business leadership will need to understand the risks associated with the use of any given technology. Risk programs should consider creating risk models and assessments specifically focused on critical areas of AI risk such as intellectual property, data and privacy, and technology insertion/injection defenses. From a threat and incident response point of view, this will be a continuously changing environment as the technology grows, and threat management teams should have a specific focus on understanding, tracking, and educating the entire security organization on those threats. Monitoring and incident response teams should also create playbooks for enterprise, cloud, and supply chain issues or incidents associated with the business's AI efforts.
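The AI-specific risk areas named above (intellectual property, data and privacy, insertion/injection) can be tracked in a lightweight register so leadership sees the highest-scoring items first. The sketch below is one possible shape, not a prescribed model; the entries, category names, and the 1-5 likelihood-times-impact scoring are all assumptions for illustration:

```python
# A minimal AI risk register sketch. The categories come from the text above;
# the scoring scale and example entries are illustrative assumptions.
RISK_CATEGORIES = ("intellectual property", "data and privacy", "injection")

def risk_score(likelihood: int, impact: int) -> int:
    """Simple likelihood x impact scoring, each rated on a 1-5 scale."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

register = [
    {"risk": "training data contains customer PII",
     "category": "data and privacy", "score": risk_score(4, 5)},
    {"risk": "prompt injection against customer-facing chatbot",
     "category": "injection", "score": risk_score(3, 4)},
    {"risk": "proprietary code pasted into a public LLM",
     "category": "intellectual property", "score": risk_score(3, 5)},
]

# Sort so the highest-scoring risks surface first for leadership review.
register.sort(key=lambda r: r["score"], reverse=True)
```

A register like this also gives incident responders a natural index: each high-scoring entry is a candidate for one of the enterprise, cloud, or supply chain playbooks described above.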
These few suggestions are the tip of the iceberg of what organizations should consider for their AI initiatives. Depending on your industry, other clear focus areas could include changes to your security- and privacy-by-design practices, your platform resiliency programs, or even your supply chain defense. But by focusing on five simple areas that are already core to your existing operations, now with a focus on AI, you'll be better prepared to answer the needs of your business as it continues advancing into next-generation technology.