Let's Talk SoC

Beyond the Hype: What AI Really Means for Cybersecurity

Episode Summary

How is artificial intelligence (AI) changing the cyber security landscape? Will it reduce, or add to, your cyber security risk? In this episode, we explore how AI-based cyber security can be a powerful force for good. Join Tom Harrison, Secureworks Senior Security Operations Manager, to learn more about the intelligence behind AI. From machine learning to generative AI, you’ll hear how Secureworks uses AI to accelerate and enhance detection and response, and how it makes analysts more efficient. Delve into the ethics and explore the partnership between human and machine. You’ll also get a glimpse of what tomorrow holds, as we look ahead to AI’s role in the future of cyber security.

Episode Notes

Episode Transcription

Secureworks Interview with Tom Harrison

Sally : Hi everyone and a very warm welcome to Let's Talk SoC with today's guest, Tom Harrison, who is Senior Security Operations Manager for the MXDR program at Secureworks. Welcome Tom, great to have you here!

Tom : Thanks, happy to join!

Sally : No problem at all. And today our discussion is very much a deep dive, I would say, on all things artificial intelligence in this space. Perhaps we can start, Tom, by just introducing you a little bit more and what your role really means, and then let's explore that definition of AI and its relevance to the sector.

Tom : Absolutely. So as I said before, my name is Tom Harrison. I am a manager for the MXDR program over at Secureworks. We do managed detection and response. My day-to-day activities include leading our team of security analysts, interfacing with our customers, and working really closely with our engineering team to continue to grow and push our detection capabilities.

Sally : Fantastic. I love already what you were talking about there with the different teams coming together, because again, that's absolutely critical for shared responsibility around this area, so brilliant. And then drilling into our main topic area for the day: really leveraging AI in the best way possible. How are you doing that from a Secureworks perspective within your solutions, really baking that in by design? And I'd love to explore the evolution of this as well, because I think seeing where you've come from, the innovation and the trajectory ahead, and bringing those together can make a real difference.

Tom : Yeah, absolutely. So, in order to talk about that successfully, we need to define a few things, because it's been talked about a lot in the media lately and there are some terms that can be a little confusing. To start off, AI is a huge umbrella term. It includes machine learning, neural networks and natural language processing. Intelligence in general relates to insight and understanding. Artificial intelligence would be any application or technology where computer systems help us humans gather the insight and understanding that we need. For security operations, we deal with a massive amount of data. Applying AI, with things like machine learning, helps us gain insights from those large amounts of data. We can even automate actions based on those insights, making our responses faster and higher quality.
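To make that concrete, here is a minimal illustrative sketch of the kind of machine-learning insight Tom describes: an unsupervised anomaly detector scoring a large pile of telemetry so that only the most unusual events surface for an analyst. This is not Secureworks' actual detection pipeline; the feature names, data and thresholds are invented for illustration, and it assumes NumPy and scikit-learn are available.

```python
# Illustrative only: a toy anomaly detector over synthetic "login telemetry".
# The features, data, and thresholds are hypothetical, not Secureworks' pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic telemetry rows: [logins_per_hour, distinct_hosts_touched, MB_transferred]
normal = rng.normal(loc=[5, 2, 50], scale=[2, 1, 20], size=(5000, 3))
suspicious = rng.normal(loc=[60, 25, 900], scale=[5, 3, 50], size=(5, 3))
events = np.vstack([normal, suspicious])

# Fit an unsupervised model that learns what "normal" looks like at scale,
# then score every event; the lowest scores are the ones worth an analyst's time.
model = IsolationForest(contamination=0.001, random_state=0).fit(events)
scores = model.score_samples(events)

worst = np.argsort(scores)[:5]
for idx in worst:
    print(f"event {idx}: score={scores[idx]:.3f}, features={events[idx].round(1)}")
```

The point of the sketch is the shape of the workflow rather than the specific model: the machine does the filtering across thousands of events, and the human only sees the handful worth investigating.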

Sally : It's that active intelligence, isn't it? That agency to act and kind of

Tom : Mm-hmm.

Sally : filter through some of the noise of that data. And I think particularly when we look at so many different technologies coming together, very much it's age of convergence we're in right now, let alone with the evolution of adoption of 5G. We're just going to get more and more of this data. So being able to kind of manage that and use technology in the right way to filter to those nuggets and kind of get rid of the noise, so to speak, is so, so valuable. It's vital, I would say.

Tom : Over at Secureworks, AI has been part of us since we started. We've been incorporating machine learning into our detections and processes since the beginning. We've created tons of great features, like a hands-on-keyboard detector, and we have a new prioritization engine that really helps eliminate a lot of noise and false positives. More recently, we've been incorporating new generative AI-based utilities to do things like help our analysts improve their written investigations and explain complex command lines and data structures.

Sally : I love that. And again, that explainability is massively important too. And you mentioned there a little bit about the buzz. You know, I've just come back from an event where generative AI and ChatGPT were everywhere. But one of the most interesting things I found was a demonstration of how it could be used to get very granular, actually supporting that knowledge generation and getting that kind of specificity of answers to really support you from a knowledge point of view as well. So, I wonder what you're seeing there, getting beyond the hype, should we say, that we're seeing in some circles at the moment. What have you seen as being the biggest catalyst for this uplift in interest? And why is that moment now? What should people be looking at?

Tom : Well, people are crazy about it because it's the new thing. It's not really around AI in general, but specifically around generative AI, like ChatGPT; it's incredibly fascinating. It has the potential to really improve so many processes and things that we do, and to save so much time. Now, the other thing that plays into the media hype is that we've all seen a lot of dystopian sci-fi movies about some AI that wants to destroy the world, but those fears come from a lack of understanding. If we lack an understanding of large language models and generative AI, it's hard to tell what's fact and what's fiction. The fact of it is, it's an incredibly powerful and very interesting tool. With many technologies we come up with, we have a problem we're trying to solve and we develop a tool to solve that problem. With this... it's as if this new tool has landed out of nowhere. We've made a discovery of a new technology, and we don't even understand all of the potential things that it can do. It's a really intense and exciting time of pure discovery.

Sally : Exactly, exactly. And the fact that so many people can get involved and have the opportunity, because of democratized access frankly, to actually explore this as well is really exciting, but it also shows how much context and timing matter too. I remember being at a keynote session where I was one of the speakers, and we did a demonstration using a version of this back in, I think, early 2019, but certain things weren't quite ready for this to take off the way it has now, with all these different elements coming together and the speed and scope of change. So a really, really interesting area. And also, I think, looking at it from different perspectives: again, there are many flavors of AI. I kind of look at it as left-brain, right-brain AI in many ways, in terms of whether you're evaluating or doing decision making. So getting that awareness out there is absolutely key. And how do you think this fits into another really important area, responsibility? Again, whether we're talking from a consumer perspective or about our ecosystem partners, everybody now is talking far more about how to embed responsibility into the development of AI and make it more transparent, but even beyond that, more accountable, I would say too. What is your take on that? How is Secureworks getting involved? What are you seeing about the changes there, and how is this resonating, probably never more strongly?

Tom : Absolutely. So, being responsible and transparent with AI is probably one of the most important aspects of dealing with this technology. It is incredibly powerful, and at first glance it can seem like it could easily replace whole tasks and processes that we do. We need to understand that it is not our replacement, nor is it something so scary that we should abandon it entirely. It is a tool to be used alongside the human element. Along with that is our responsibility to make sure that we do not rely on it so much that it starts having unintended consequences. It's powerful and it can take us many places, but a human needs to have a hand on the wheel. Transparency is also really key. We'll find more and more applications for this, especially service-related applications, as we continue to develop the technology. Many studies have shown that when it's used in something like a chatbot, customers are actually more accepting of a well-functioning AI chatbot than we may have previously thought, but only if we clearly say that it is an AI chatbot and we're not trying to pass it off as a human. So, I actually think transparency really adds to the overall acceptance of the technology.

Sally : Definitely, I totally agree with that. And again, some research that came out very recently was looking at trust in AI development as well. And again, transparency about what your data is being used for, and whether it's a chatbot or a real person, is absolutely right up there. Even the bastions of trust have changed. Take the work by, say, Edelman, for example: they've benchmarked trust for over 17 years now, and big business has actually kind of overtaken, shall we say, NGOs and government as the bastion of trust for most consumers today. So that has raised expectations around delivering on this even further. And also, maybe in areas we don't talk about as much, there's the question of how different flavors of AI can be used together. For example, using some of the ChatGPT-style functionality to train a chatbot and give its answers more specificity, so you could actually ring-fence it to a certain area for certain types of data responses. I think there are interesting interplays that we can get into as well. So, such an important area, but absolutely, the ethics of this, responsibility, transparency and accountability, have to be right up there. But I love the fact that we're getting more data literate and AI literate, with understanding right across the population now, to catalyze the need to do that as well. So, I think that's really interesting too.

Tom : Yeah, along the lines of trust, we also have to make sure that when we're developing these AI technologies, we're making their outputs as accurate and as trustworthy as possible. I know that when it comes to generative AI, there's this broader issue of hallucinations: making sure the output is consistent and not so generative that it produces something that isn't grounded in the data we're feeding it. We're constantly getting better and better at that. We actually made our own little discovery when we were working on a specific generative AI application, where we developed a guardrail that we put within the prompts we send to it, and it has been eliminating our hallucination issues. We've been really, really excited about that.
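Tom doesn't spell out Secureworks' specific guardrail, but one common pattern along these lines is to constrain the model to answer only from supplied evidence and to refuse otherwise. The sketch below is a generic illustration of that pattern, not Secureworks' implementation; call_llm is a hypothetical placeholder for whichever model API an application actually uses.

```python
# Generic sketch of a prompt "guardrail" against hallucination.
# This is NOT Secureworks' implementation; call_llm() is a hypothetical
# placeholder for whatever LLM client the application actually uses.

GUARDRAIL = (
    "Answer using ONLY the evidence provided between <evidence> tags. "
    "If the evidence does not contain the answer, reply exactly with "
    "'INSUFFICIENT EVIDENCE'. Do not speculate or invent details."
)

def build_prompt(question: str, evidence: str) -> str:
    """Wrap the analyst's question and raw evidence in the guardrail instructions."""
    return f"{GUARDRAIL}\n\n<evidence>\n{evidence}\n</evidence>\n\nQuestion: {question}"

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to a model and return its reply."""
    raise NotImplementedError("wire this up to your model API of choice")

def answer_with_guardrail(question: str, evidence: str) -> str:
    reply = call_llm(build_prompt(question, evidence))
    # Refuse cleanly rather than pass along a possible hallucination.
    if reply.strip() == "INSUFFICIENT EVIDENCE":
        return "The model could not answer from the available evidence."
    return reply
```

The design idea is simply that the guardrail travels with every prompt and the application checks the reply before trusting it, rather than relying on the model to self-police.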

Sally : Oh, I love that. Absolutely brilliant. I've got to find out more about that myself; that sounds superb. Looking ahead, Tom, where are we going next? Now, this is a very difficult question. Just thinking about what's happened in the last five months with generative AI in particular is a really good example of the speed, scale and scope of change. But where do you think we're going next in terms of some of the key trends that AI is going to be at the heart of, the ones that are going to really impact security and cyber security, perhaps in the next year or so, just to keep that time window realistic in terms of making those educated assumptions going forward?

Tom : So it's going to do a few amazing things, things that I'm especially excited about because I take on the old-school programmer mindset of hating doing things more than once, so I will spend eight hours writing a script that saves me 15 seconds. AI is very good at small, focused, technical, tactical items. So, when you combine that with automation, it can really help us by tackling more of those time-consuming items, allowing the human element to focus on the strategic. Simple, single security tasks can be automated. The content of deliverables will improve, like our writing and our communications with customers. And most importantly, we're going to be able to react to the constantly changing threat landscape significantly faster. It's truly incredible, and I cannot wait to see what comes next.

Sally : I couldn't agree more. For me, I think AI is the catalyst, or the fuel, that is really bringing actualization to the convergence of technologies today, so I totally agree with that. And particularly, I've got this little phrase about agency to act. For me, all the things you were talking about there are the complementary strengths, aren't they, of technology and people working hand in hand, particularly in this space, and reducing the things that overburden teams too. We've got so many different threat signals, so much data coming through, and more pressure on teams as well. So getting this right supports well-being, for example, reduces churn in your organization, and also helps attract more people into the industry, because obviously there are quite a lot of supply and demand gaps in this space at the moment. Maybe that could be a final question. I've had a lot of feedback on previous Let's Talk SoC episodes about diversity and inclusion and getting more and more people into the space. So, if I could throw you a final question, if I may: what would your recommendation be to people looking at this sector now, maybe thinking of getting involved, but not quite sure if they've got the right background? Because I love to change the narrative and say... you know what? All backgrounds are really valuable. All different experiences can be absolutely what a team might be needing. So, I wonder if I could throw that to you as a final thought.

Tom : Absolutely. IT and security in general benefit from vast backgrounds, vast amounts of experience. Full confession: on top of my degrees in cybersecurity, I'm a former professional musician. I have a degree in playing the trombone. So having a different background is not a bad thing; it's good. It brings perspective, it brings experience from all sorts of different things, and it absolutely helps in most of what you're doing. We're also in a really amazing time right now, where there are so many learning resources and so many ways to gather the knowledge you need and start getting into the industry. It is an incredible time to start.

Sally : I couldn't agree more. Sometimes we focus a lot on how the barriers to entry for hackers and bad actors have come down. We've seen the price of an attack fall; for the cost of a few coffees over a week, you could possibly buy a ransomware kit. We've seen how that's happening, and we've seen how bad actors have come together. Let's pivot that around and show how we can come together as a sector to negate those threats, but also how many more opportunities there are to be involved. And like you said, so much skills training is available at no cost or very reduced cost, and there's so much out there that's publicly available that you can get in and try, hackathons and all sorts of different things. Just because you haven't come from a tech background previously doesn't mean your experience isn't valuable. I totally agree with you. And guitar is mine: from the music point of view, my love is the guitar, so I love to share that, because I think it helps to change the narrative about what a tech or cybersecurity career actually looks like. So, thank you, Tom, for spending a moment on that too.

Tom : My pleasure.

Sally : And thank you all. Honestly, there were so many insights in this session, I'd love to come back to it very soon, because I think we could do an even bigger, deeper dive. But thank you for now, Tom, for joining us on this special episode all around artificial intelligence in security on Let's Talk SoC.