This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.
Black Hat USA 2024: Eight ways to achieve ‘Secure by Design’ AI
Balancing the need to innovate and develop at speed with the need for security is keeping many cyber folks awake at night, or at least it was preying on the minds of the speakers who addressed Black Hat’s inaugural AI Summit, which took place in Las Vegas last month.
Occurring just a couple of weeks after the global CrowdStrike IT outage, which ground airports to a halt and forced medical facilities to resort to pen and paper, it felt the right time to reflect as firms find themselves under pressure to adopt AI faster and release products before they are properly evaluated.
Lisa Einstein, senior AI advisor at the US Cybersecurity and Infrastructure Security Agency (CISA), compared what she called “the AI gold rush” to previous generations of software vulnerabilities that were shipped to market without security in mind.
“We see people not being fully clear about how security implications are brought in. With the CrowdStrike incident, no malicious actors were involved, but there was a failure in the design and implementation that impacted people globally.
“We need the developers of these systems to treat safety, security and reliability as a core business priority,” she added.
The Internet Security Alliance’s (ISA) president and CEO, Larry Clinton, put it more bluntly: “Speed kills — today we’re all about getting the product to market quickly — and that’s a recipe for disaster in terms of AI.”
He added: “Fundamentally, we need to reorientate the whole business model of IT, which is ‘Get to market quick and patch’. We need to move to a ‘Secure by Design’ model and to work with government partners so we are competitive and secure.”
Many of the event’s sessions, which featured speakers from WTT, Microsoft, CISA, Nvidia, as well as the CIA’s first chief technology officer, were focussed on how organisations might achieve ‘Secure by Design’ AI, which TechInformed has summarised in eight key takeaways.
1. Do the basics and do them well
“You can’t forget the basics,” stressed veteran CIA agent Bob Flores during one of the event’s panel sessions. “You have to test systems and applications and the connections between the applications, and you have to understand what your environment looks like,” he added.
Flores, who, towards the end of his CIA career, spent three years as the agency’s first enterprise chief technology officer, asked Black Hat’s AI conference delegates: “How many of you out there have machines that are attached to the internet that you don’t know about? Everyone’s got one, right?”
He also warned that, with AI, understanding what’s in your network needs to happen fast “because the bad guys are getting faster. They can overcome everything you put in place.”
And while enterprises might think it’s safer to develop their own LLMs rather than to rely on internet-accessible chatbots such as ChatGPT, Flores is concerned that they might not be building in security from the beginning. “It’s still an afterthought. As you build these LLMs, you must think, every step of the way, like a bad guy and wonder if you can get into this thing and exploit it.”
2. Architect it out
Bartley Richardson, cybersecurity AI lead at GPU giant NVIDIA, advised the Black Hat crowd to look at AI safety from an engineering perspective.
“When you put together an LLM application, don’t just look at every block you’ve architected there; look at the connections between those blocks and ask: ‘Am I doing the best possible security at each of those stages?’ ‘Is my model encrypted at rest?’ Are you putting safeguards in place for your prompt injections?’ This is all Security by Design. When you architect it out, these things become apparent, and you have these feedback loops where you need to put in security,” he explained.
3. Create a safe space to experiment
Matt Martin, founder of US cyber consulting firm Two Candlesticks and an AI Security Council member for Black Hat, advised that creating a controlled sandbox environment within which employees can experiment was important. “A lot of people want to use AI, but they don’t know what they want to do with it just yet – so giving them a safe space to do that can mitigate risk,” he said.
Martin added that it was important to understand the business context and how it was going to be applied. “Ensure someone in the company is in overall control of the projects. Otherwise, you’ll end up with 15 different AI projects that you can’t actually control and don’t have the budget for.”
4. Red team your products
Brandon Dixon, AI partner strategist at Microsoft, explained how the software giant is balancing advances in AI with security. “We’ve done that through the formation of a deployment safety board that looks at every GenAI feature that we’ve deployed and attaching a red teaming process to it before it reaches our customers,” he says.
Red teaming is an attack technique used in cybersecurity to test how an organisation would respond to a genuine cyber-attack.
Check out our healthcare cybersecurity tabletop coverage here
“We’ve also formed very comprehensive guidance around responsible AI both internally and externally, consulting experts, which has enabled us to balance moving very quickly from the product side in a way that doesn’t surprise customers,” he added.
5. Partnerships are paramount
According to CISA’s Lisa Einstein, ‘Secure by Design’ relies on public and private enterprise partnerships. She added that this is particularly important in terms of sectors that provide critical infrastructure.
To this end, in 2021, CISA established the Joint Cyber Defense Collaborative (JCDC). This public-private partnership aims to reduce cyber risk to the nation by combining the capabilities of the federal government with private sector innovation and insight.
Einstein told conference delegates: “CISA only succeeds through partnerships because more than 80% of critical infrastructure is in the private sector in the US.
“We have a collective and shared responsibility. I’m seeing organisations that didn’t think they were part of this ecosystem, not realising that they have part of the responsibility. Tech providers also need to help these enterprises become more secure and keep everything safe,” she said.
Partnerships with and between vendors were also emphasised at the event. Jim Kavanaugh, longtime CEO and technology guru of $20 billion IT powerhouse World Wide Technology, spoke on the benefits of the firm’s long-term partnership with chipmaker Nvidia, including advances with AI.
In March this year, WWT committed $500 million over the next three years to spur AI development and customer adoption. The investment includes a new AI-proving ground lab environment and a collaboration ecosystem that uses tools from partners, including Nvidia.
While former CIA agent Flores recognised that such partnerships were crucial, he also stressed the need for firms to conduct robust assessments before onboarding.
“Every one of your vendors is a partner for success, but there are also vulnerabilities. They must be able to secure their systems, and you must be able to secure yours. And together, you must secure whatever links them,” he noted.
6. Appoint an AI officer
The conference noted the rise of the chief AI officer, who oversees the safe implementation of AI in organisations. This appointment is now mandatory for some US government agencies following the Biden Administration’s Executive Order on the Safe, Secure and Trustworthy Development and Use of AI.
These execs are required to evaluate different ways to deploy robust processes for evaluating use cases and AI governance.
While it was not a requirement for CISA to appoint a chief AI officer, Lisa Einstein stepped up to the role last month as the organisation recognised that it was important to its mission beyond having an internal AI use case lead.
“CISA wanted someone responsible for coordinating those efforts to ensure we were all going in the same direction with a technically sound perspective and to make sure that the work we’re doing internally and the advice we are giving externally is aligned so that we can adapt and be nimble, “she explained.
While this doesn’t have to be a board-level appointment, Einstein added that the person needs to be in the room with an ever-expanding roster of C-Suit players: the CIO, the CSO, the legal and privacy teams, and the data officers when decisions and policies on AI are made.
Einstein added that, within ten years, the position should be redundant if she’s done her job well. “By then, what we do should be so ingrained in us that we won’t need the role anymore. It would be like employing a chief electricity officer. Everyone understands the role they must play and their shared responsibility for securing AI systems and using them responsibly.”
7. Weave AI into your business operations
For ISA chief Larry Clinton, Secure by Design starts with theory. For over a decade, his organisation has collaborated with the US National Association of Corporate Directors (NACD), the US Departments of Homeland Security, and the Board of Direct Justice on an annual handbook for corporate boards to analyse cyber risk.
According to Clinton, ISA is currently developing a version of this handbook specifically for working with AI, which will be released this fall.
Clinton claimed that enterprises need to bring three core issues to the board level.
“AI deployment needs to be done strategically. Organisations underestimate risks associated with AI and overestimate the ability of staff to manage those risks. This comes from an idiosyncratic adaptation of AI, which needs to be woven into the full process of business operations, not just added on independently to various projects,” he says.
The second issue, he said, was education and the need to explain AI impacts to board members rather than explaining the nuts and bolts of how various AI deployments work.
The third issue, he added, was communication. “It’s critical that we move AI out of the IT bubble and make it part of the entire organisation. This is exactly the same advice we give with respect to cybersecurity. AI is an enterprise-wide function, not an IT function.”
8. Limiting functionality mitigates risk
According to Microsoft’s Brandon Dixon, limiting the actions that an AI system is capable of is well within a human’s control and should, at times, be acted upon. The computer giant has done this with many of its first-generation copilot tools, he added.
“What we’ve implemented today is a lot of ‘read-only’ operations. There aren’t a lot of AI systems that are automatically acting on behalf of the user to isolate systems. And I think that’s an important distinction to make — because risk comes in when AI automatically does things that a human might do when it may not be fully informed. If it’s just reading and providing summaries and explaining results, these can be very useful and low-risk functions.”
According to Dixon, the next stage will be to examine “how we go from assertive agency to partial autonomy to high autonomy to full autonomy. At each one of those levels, we need to ask what safety systems and security considerations we need to have to ensure that we don’t introduce unnecessary risk.”
#BeInformed
Subscribe to our Editor's weekly newsletter