In this Commonfund Forum Spotlight, the speakers explore how artificial intelligence (AI) is being leveraged by both defenders and threat actors, referencing recent incidents involving Anthropic and emphasizing the need for secure-by-design principles in technology development. They discuss the importance of preparing for potential retaliatory cyberattacks and of building quality into technology products rather than relying solely on aftermarket cybersecurity solutions. The session concludes with an overview of RSA's role as a global cybersecurity community platform, including its conference, innovation ecosystem for startups, and membership community focused on collaborative cyber defense.
news, top of mind. And certainly, from a topic perspective, I have a litany of questions, but we're gonna start with the big breaking news, which is Iran. Yeah. Right? And from the positions that you've held and from CISA's perspective, what should we expect, those of us in cybersecurity and those of us out on the floor here? How do we protect ourselves, and should we expect to see some notifications or whatnot? I haven't seen anything yet. We subscribe to CISA, but we haven't seen anything yet. Yeah. So, you know, I mentioned this a little bit earlier. When we knew that Russia was going to invade Ukraine, and Russia is a pretty formidable cyber power, we had received some information that, given US support for Ukraine, Russian retaliatory attacks might blow back on US critical infrastructure and critical infrastructure around the world. That's when we launched the Shields Up campaign, and it's still on the CISA website: very specific guidance that can be acted on pretty urgently at the CEO level, at the board level, at the CIO level, at the CISO level, to actually help shore up defenses and infrastructure in a short period of time. And you know, I did a post on this recently, but frankly, we should be thinking the same way about Iran, because Iran is a pretty formidable cyber actor. One of the reasons financial institutions got really good at cyber before many other industries was that in the 2012 to 2014 time frame, Iran went after the banks with what are called distributed denial-of-service attacks, actually just affecting the web interfaces, not the deep infrastructure. But Iran has gotten more and more sophisticated, and we've seen them consistently go after targets. Certainly, when I was at CISA, we saw them go after things like water.
And so at the end of the day, it kind of goes back to my point about imagining and embracing that worst-case scenario. We have to prepare ourselves for potential retaliation given what's happening with Iran now. They have a formidable capability for cyber, but frankly, for physical attacks as well. And I have seen some warnings about things like potential terrorist attacks around the world. But everybody should be thinking about what we need to do in the near term to react to any retaliation that may come from Iran. So if we think about Iran, and then extend that out to China, yeah, North Korea, Russia, these are nation-state-sponsored bad actors training their populations to cause harm across the world. CISA certainly has the focus of trying to inform us and keep us ahead. Based on your time there, and what you're hearing from an institutional perspective, where are some of those focuses and attacks landing? I mean, we have lots of money in this room from a financial services perspective. But aside from that, what do you see? Well, finance will always be a big target. But, you know, one of the things we recognized when I was at Morgan Stanley was that you could invest hundreds of millions of dollars in your cybersecurity capability, but if you didn't have all the enabling infrastructure, right, if you didn't have communications, if you didn't have an ability to get people to work, if you didn't have access to water or power, it really didn't matter how much you invested in the native cybersecurity capabilities of the firm. So I think you have to look at this as potential impacts across the board. You just mentioned China. When we look at the big four, we think about Russia and China, with China really being the most formidable, kind of the pacing threat. But again, Iran has pretty formidable cyber capabilities.
And then, of course, North Korea. But think about the big China threat, which went after, very specifically, water and power and communications and transportation. The whole goal, and it sounds very creepy, but we saw it in the assessments from our intelligence community, was to incite societal chaos and panic across the US. And that, of course, would impact any business across the country. For example, folks will remember Colonial Pipeline, the ransomware attack on the big pipeline that happened in May 2021. That was just one pipeline, and shutting it down cut off gas to the eastern seaboard for about four days, but that caused a huge issue. So just think about that on a macro scale, with mass disruption. And, you know, we don't know what's happening with Iran and its potential allies, whether that's Russia or whether there are discussions going on with other big adversaries. But it's hard to imagine that there won't be some sort of backlash attacks that could impact American citizens and our businesses. So, as I've been saying for the past week or so, we should be prepared for it. Thank you. I'm gonna double-click, because you talked about AI in your presentation, and we had some news that came out last night regarding Anthropic. So we'll talk about that in a second. But bad actors leverage AI too, and probably better than we are using it as a society. So how do we defend ourselves? I always use the reference of whack-a-mole, and it's a Rubik's cube trying to solve these things. But now that AI is here, you know, are we more vulnerable? Yeah. I mean, the truth is that we are vulnerable as a society, right, at the end of the day. It goes back to less about the whack-a-mole of threat actors going after our infrastructure and more about, again, thinking differently about the quality of the products that we rely upon.
Because, you know, even as formidable as China is, and I think I made this point, they have a huge amount of resources invested in their hacking ability, yet they were not using exotic cyber weapons to get into our critical infrastructure. They were just taking advantage of common product defects in technology. Which is why, yes, we should be concerned about threat actors using AI, in particular with the massive amount of social media that everybody's put out there, because that helps tailor phishing emails to make them so realistic that you will click on them. But at scale, what we really need to do is demand more from the technology that we rely upon, and I actually think that's where AI can make a difference. If we can radically increase software quality, that will make it harder for those threat actors to actually have an impact on us. It will create an enormous amount of friction that has just never been there before, because we've never held vendors accountable for the quality of their products. So that's a good segue into Anthropic, and specifically where bad actors leveraged Anthropic recently, I think it was over the summer, in nefarious ways against companies that were utilizing Anthropic. And they never saw that their tool set could be used in that way. But then Dario came out and said, okay, now that we understand this, we're gonna protect against that. So is that more aligned with what you're saying? Yeah. I mean, you know, we talked about launching the Secure by Design campaign, which was all about making sure that these products, whether they're technology products, software products, or AI products, were developed with security at the design phase, which might sound kind of obvious, but that's not the way it's been for years and years. And you know this very well, yeah, as a chief technology officer.
You know, at the end of the day, the incentives were all about speed to market and cool features, not about security. Again, that's why we created the whole cybersecurity aftermarket. And I think part of the good news with what we're seeing from the AI labs, and Anthropic was the one who came out with the whole report about Chinese-related actors taking advantage of its capabilities to successfully hack into a whole bunch of critical infrastructure targets, is that we're seeing this relatively early. It's only been three years since the introduction of ChatGPT. So you think about the fact that companies now recognize the importance of safety and security in these capabilities, which is very different from when software was invented; we really weren't even talking about security for another 20 years after that. So I think that's one reason to be encouraged, but, you know, we could probably spend a whole other hour talking about AI and safety and security. But I go back to this quote that I think is really something to think about in today's age. It's by a guy named Edward O. Wilson, a pretty famous sociobiologist, and he said: the problem with humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology. And I think that really sums things up. I mean, particularly with AI, think about how powerful, how fast-moving, how unpredictable these tools are. The makers, the labs who build them, will tell you they can't actually predict exactly what they will do. And so we really do, as a society, need to think about the types of mechanisms we'll put in place to govern these capabilities and ensure their safety and security. And this tension is what we're seeing manifest overnight between the government and a company that has taken a stand on what they believe is the right thing to do from a safety and security perspective.
So we'll see what happens, but just from reading the press, the two things the company laid a red line down on were the use of these tools for mass domestic surveillance, which seems to me like something we'd want a red line on, and the use of them for autonomous weapons. And so I think this is a really, really important moment in time to pause and think about where these capabilities are gonna be in the next 20 years. Because, as I said, it was about 20 years ago that we saw the first iPhone, that Twitter came on the scene, that Facebook came on the scene, and I think we can probably agree, certainly anybody who's a parent out there can agree, that the move-fast, break-things culture of social media has not, on balance, been a great thing. And we just cannot move fast and break things with AI, the most consequential, powerful technology of our lifetimes. That's why, at the end of the day, I hope that at some point in time our Congress does look really hard at what we need to do to make sure we can still benefit from these magical, incredible capabilities that can lead to enormous economic prosperity and massive improvements in research and health, while also making sure that we are not taking on enormous risks that could ultimately damage humanity. Thank you. So I wanna end on your new position as CEO of RSA. Could you just talk for a few moments about your role with the RSA Conference and the goals that you're trying to achieve? Yeah. So RSAC is the largest global cybersecurity community platform. It's a company started a couple of years ago that was built around what's called the RSA Conference, which has been around for 35 years, rooted in cryptography: how do you secure data? And we built the company to do a couple of things. First of all, the conference is the largest and most innovative in the world.
Forty-five thousand people, incredible speakers, an incredible exchange of technical information and learning. But we also have, and this is the thing I'm probably most excited about, the innovation ecosystem. So we do a competition for startups, for the most innovative startup, and over the years, $18 billion of investment has come out of that, into some of the biggest cybersecurity names that you're probably well familiar with. So we do a lot of incubating of fantastic ideas in cybersecurity, in quantum, and in AI security. And we're taking that on the road with some of our international partners, so that's very cool. And then we're building a membership community, because at the end of the day, cybersecurity is a team sport. It's one that requires folks across industry, across the international world, across government to come together to build a trust platform that will help us effectively reduce risk. So we're building that membership community as well. And it's not that different from what I did at CISA as the head of America's Cyber Defense Agency, which was really building and catalyzing trusted relationships to help reduce risk to global cyberspace. So it's equally fun. Excellent. Thank you. And I know at Commonfund my colleagues get sick and tired of me and my emails, when I always say security is a shared responsibility. Hundred percent. So it's all of yours. Don't get sick of those emails. Yes. Read them. So with that, we'll conclude, and I thank you so very much for such an engaging session with us. Yeah. My pleasure. Thanks, everybody.