Let's Talk SOC

The First 24 Hours After a Cyberattack

Episode Summary

With insights gained from front-line incident response to cybercrime, Lina Lau, Principal Incident Response Consultant at Secureworks, shares top tips to improve your incident preparedness — and the three most common mistakes she sees organizations make when they fall victim to a cyberattack. In this Let’s Talk SOC podcast, Lau provides actionable steps to improve the outcome of an incident and discusses the limitations of EDR. Learn about the increased value of responding to incidents with improved speed, expert insight, and security analytics.

Episode Notes

The First 24 Hours After a Cyberattack
Lina Lau, Principal Incident Response Consultant

What we will cover in this episode:

In this Let’s Talk SOC podcast, Lina Lau, Principal Incident Response Consultant at Secureworks, shares top tips to improve your incident preparedness. Learn the three most common mistakes Lau sees organizations make when they fall victim to a cyberattack, while getting her most-recommended steps to improve the outcome of an incident. Finally, listen in as Lau discusses the limitations of EDR — and how organizations like yours can experience the value of responding to incidents with improved speed, expert insight, and security analytics.

Episode Transcription

Host   

Hello and welcome to Let's Talk SOC, a podcast series brought to you by Secureworks, a leader in cybersecurity focused on empowering security and IT teams worldwide to better prevent, detect, and respond to cyber threats. I'm Professor Sally, your host, and joining me today is Lina from Secureworks. Welcome, Lina.  

Lina    

Hi, it's so nice to meet you.  

Host    

Perhaps a great place to start is just to share a little more with the audience about yourself and your role.

Lina    

Yeah, sure. So my name's Lina Lau, I am a principal incident response consultant at Secureworks. What that means is I basically lead and run incident response engagements across the APJ South region.  

Host  

Fantastic. So I'll drill down now into our main subject area for today, framing this around the first 24 hours, that critical time period when it comes to incident response. So perhaps we can open by looking at how the insights gained from responding to incidents on the front line inform your top tips for preparedness, and perhaps we could share some key takeaways there with our audience.  

Lina    

Yeah, sure. I think I've probably got around three top takeaways to share. The first one is to always make sure you have an incident response plan and playbooks built out. A great way to build out an incident response plan and playbooks is through running something called a tabletop exercise. These are basically simulated role plays where you get someone like me, or someone who only does tabletop exercises, to come in and come up with a simulated scenario, a pretend incident, and then we run through the incident from start to finish as if it were a real incident with the entire security team, potentially the C-suite, sometimes the board, depending on what the client's after. It's a really great way to allow clients to understand what kind of incidents they might be faced with and to identify any gaps that exist in their current incident response plan and playbooks. It's also a really good way to allow clients who haven't necessarily gone through an incident to get a real feel for the stressful nature of an incident and the kind of questions and issues that might come up that they haven't already pre-planned for.

The second tip I have is to always make sure that there's logging occurring in the organization. So not just in terms of having endpoint logs, but also logs from network devices and peripheral devices, and making sure all of those are logged appropriately and not just with the basic configuration. Most of the time organizations will have a default setup with out-of-the-box logging without any additional telemetry being collected, so I would always check that all the appropriate things are being logged. Now, this is really important: if an organization isn't collecting logs, that means there's nothing for us to analyze, which makes it really difficult to understand what occurred when an incident actually happens. We always say we recommend a SIEM for compliance and XDR for security. That's really the benefit of having a product like Taegis: responders can jump in and access the network logs, endpoint logs, and alerts all in one place, making analysis so much faster.

Now the third takeaway I have is to ensure that an organization always has predetermined incident roles. So the second an incident hits, who's running it? Now, who's running it is different from who's performing the analysis. When you're running an incident, there are major different stakeholders at play: the security team, the server team, the applications team, the C-suite, the board. All these different teams need to get informed and get potential tasks to do during the incident. If you are sitting there performing the analysis, you don't have time to manage all these various workstreams and ensure that there are no silos forming in the organization.  
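
To make the logging takeaway a little more concrete, here is a minimal, hypothetical sketch of the kind of check an organization could run. It assumes you can export a per-source "last event received" timestamp, for example as a CSV; the source names and the 24-hour threshold are illustrative assumptions, not Secureworks or Taegis specifics.

"""Minimal sketch: verify that the log sources you expect are still reporting.

Assumes a CSV export with columns: source,last_event_utc, where the
timestamp is ISO 8601 with an offset (e.g. 2024-05-01T08:30:00+00:00).
The source names below are illustrative, not a prescribed list.
"""
import csv
import sys
from datetime import datetime, timedelta, timezone

# Sources beyond out-of-the-box endpoint logging that we expect to see.
EXPECTED_SOURCES = {
    "windows-security", "sysmon", "firewall", "vpn",
    "dns", "proxy", "identity-provider", "email-gateway",
}
MAX_SILENCE = timedelta(hours=24)  # tolerate at most a day of silence


def check_log_sources(csv_path: str) -> int:
    """Print the status of each expected source; return the number of problems."""
    now = datetime.now(timezone.utc)
    seen = {}
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            seen[row["source"]] = datetime.fromisoformat(row["last_event_utc"])

    problems = 0
    for source in sorted(EXPECTED_SOURCES):
        last = seen.get(source)
        if last is None:
            print(f"[MISSING] {source}: no events received")
            problems += 1
        elif now - last > MAX_SILENCE:
            print(f"[STALE]   {source}: last event {last.isoformat()}")
            problems += 1
        else:
            print(f"[OK]      {source}")
    return problems


if __name__ == "__main__":
    sys.exit(1 if check_log_sources(sys.argv[1]) else 0)

Any MISSING or STALE source flagged by a check like this is a gap you would want closed before an incident, not discovered during one.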

Host    

Really great tips, Lina. Absolutely, and I love the way that you joined those different areas together as well. So kind of recapping: there was the simulation side of things, very much that learning by doing, from experience, which I love to see; the advanced logging with telemetry; and also that clarity around roles if and when, as in many, many cases, an incident should occur. So I couldn't agree more strongly. And again, we talk about shared responsibility so much, don't we, when it comes to incident management and security?

Lina

Yeah.

Host

To do that you need to have that shared experience; every department matters. The comms around an incident management issue are huge in themselves. So I think you really bring to the fore some of those different elements that come into play alongside things like skills uplift too. Perhaps we can have a look at some of the mistakes that people make. What would you say are the top three mistakes you commonly see? If an organization falls victim to a cyberattack, what do they most often do wrong in that scenario?  

Lina

Well, I would say the first thing that we always tend to see, and this generally depends on how mature an organization and its security function are, is that when a major incident happens a lot of organizations, or the security teams, freak out. They immediately skip to a step of the incident response that shouldn't be the first step: they jump straight to eradication. Their first thought is, okay, how do we evict the threat actor? This is extremely problematic because the first step is always identification. The problem with skipping straight to eradication, which is where clients will start resetting credentials or restoring hosts, is that it leads to the question of how you eradicate a threat when you have no idea how widespread it is, where the threat actors are, what they've accessed, and how they might get back in. If you start resetting passwords and shutting down hosts, you are playing cat and mouse without an understanding of the full scope of the incident. That makes it really difficult, depending on the sophistication of the threat actor. Every single time we've worked incidents concerning nation-state threat groups, we've seen that eradication steps like resetting passwords result in the threat actors coming back, and coming back in a more aggressive manner. So if you haven't performed the first step of working out where the threat actors have gone, what they've accessed, and what they've done, you run the risk of them intruding back into the environment at an extremely vulnerable, critical state, while you're still resetting hosts and trying to get trust back into the environment.

Now the second common mistake is kind of linked to the first: it's the restoration of hosts and changing of settings. The second clients see, oh no, there's something happening on this server, they shut it down and start rebuilding it, or they get rid of it completely. The issue with that is it impacts forensics and the state of the evidence that's preserved on the host. This will impact the investigation later because it might lead to loss of evidence, or to evidence being overwritten, especially if clients start running antivirus tools and deleting files. When clients start reacting like this, a lot of the time these actions aren't even being recorded, so it becomes even more difficult during a large-scale incident response when there are countless small actions happening across various teams and none of them are really being recorded.

Now the third mistake we see is that clients don't have a tool that grants visibility into the infrastructure. What this means is that there's no real-time tool collecting telemetry from all the endpoints and logs that allows an analyst or responder to see in real time what the threat actors are doing. We can't really identify anything if we have no visibility across everything. Which is why our first point during an incident is always to check that the client has some kind of XDR installed. If they don't, we recommend Taegis, and that's normally the first point.  
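
One way to avoid that second mistake, losing or overwriting evidence while nobody records what was done, is to make evidence preservation and action logging a habit before any remediation step. The helper below is a hypothetical sketch, not part of any vendor toolset: it hashes artifacts before a host is touched and appends every responder action to an append-only JSON-lines log.

"""Minimal sketch: preserve and record evidence before touching a host.

Hypothetical helper: hashes the artifacts you are about to collect and
appends every responder action to an append-only JSON-lines log, so
later analysis knows exactly what was changed, when, and by whom.
"""
import hashlib
import json
import socket
from datetime import datetime, timezone
from pathlib import Path

ACTION_LOG = Path("ir_action_log.jsonl")


def sha256(path: Path) -> str:
    """Hash a file in chunks so large artifacts don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()


def record_action(action: str, analyst: str, **details) -> None:
    """Append one timestamped responder action to the shared action log."""
    entry = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "host": socket.gethostname(),
        "analyst": analyst,
        "action": action,
        **details,
    }
    with ACTION_LOG.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")


def preserve_artifact(path: str, analyst: str) -> None:
    """Record the path and hash of an artifact before anything modifies it."""
    p = Path(path)
    record_action("preserve_artifact", analyst, path=str(p), sha256=sha256(p))


if __name__ == "__main__":
    # Example usage (illustrative path): log evidence collection and the
    # isolation decision before a suspect host is rebuilt.
    preserve_artifact("C:/Windows/System32/winevt/Logs/Security.evtx", "analyst1")
    record_action("isolate_host", "analyst1", reason="suspected C2 beaconing")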

Host   

I think you're absolutely right that making sure you understand what led up to that point is the way to learn, to prevent it happening again and again, and to share that learning and understanding. So vital, great points, and thank you so much. Perhaps we can also look at the next step in this, in terms of how we can get ahead with better awareness and better preparedness. Not just in security teams, but beyond that, because as we said already, this is a shared responsibility. It crosses boundaries across the IT and security areas of organizations right through to the board. So what more, in terms of tangible examples, could we give to people to help improve awareness and preparedness right across the organization?  

Lina    

I'm gonna sound like a broken record because I've already mentioned this, but honestly I think the best thing to do is run tabletop exercises. A lot of the time security teams are struggling to secure a budget, and a lot of the time the reason for that comes down to a lack of understanding as to the value of security tools or, you know, the various things that a security team needs in order to function and do their job correctly. That's exactly where tabletop exercises come in. Typically I've run them in the past with technical teams, but I've also run them with board members and C-suites, with the actual goal of allowing the board members and C-suite to understand exactly what happens during a security incident, why these tools are necessary, and the existing gaps in the company in terms of tooling or skill set or headcount that impact the company's ability to return to a normal state. These functional tabletop exercises are really handy because they identify silos and comms issues between the teams, because you'll often see board members and C-suites being confused. At what point are they looped in? What decisions are escalated all the way up to them? Little questions around how you loop in the legal team. At what point does the legal team get involved? How does that impact the security team's workflows? Little questions that you don't think about during a high-stress situation that really do need to be written down and fleshed out in a playbook. It also brings up questions like, what happens if it's a large-scale incident where the business might be down for several days? A ransomware incident, for example, ties different plans, like the business continuity plan and disaster recovery plan, in with the incident response plan, because when it comes to a large-scale incident, all of these processes are linked, and tabletop exercises are the way to go. Now you can also run other things to generate awareness and preparedness. For example, you can put out phishing simulation exercises through Microsoft 365 just to check whether or not the users in the company are actually paying attention to the phishing training that compliance puts out every year. You can also get to a stage where you're running adversarial exercises with red teams that are actively trying to break into the company, testing how you respond to their attack in real time.  

Host   

I love those examples there, just coming back to a couple of them. The way you've emphasized this clarity, really, in terms of steps but also roles and responsibilities, I think is huge. I always compare it a bit to a friend of mine who works for the NGSB, again a high-stress situation where you need to make decisions very quickly with very little time, and the different checklists they have in that cockpit environment, literally to deal with the issues you are talking about there under high stress. It can be easy to make the wrong decision, and having those guidelines and the steps, as I said, what matters first, whose responsibility it is, etc., makes all the difference in the world. So I think there's a good analogy there for how to manage situations like this. And the phishing one you mentioned, again, I think that's so, so key. There was an incident recently where a lot of attacks were being targeted at CEOs of organizations, partly using social media and fake profiles online, and that was being used to infiltrate conversations, and that went through to email, etc. They took it to such a strong level, it was so impactful, and it's such a learning opportunity. So I couldn't agree more strongly. Great examples, Lina, thank you. And again, going back to the question of roles: in terms of threat intelligence and context, what role do both of these play in incident response today, and how can we use them to improve the outcome of an incident too? It'd be great to talk about that in a little more detail.  

Lina  

I think threat intelligence is absolutely pivotal when it comes to an incident. It begs the question of how you are supposed to respond to a threat if you have no idea what you're planning for and who you are up against. This is why we always recommend that companies have a clear idea of the nature of the threats they're faced with. Starting with just the opportunistic threats that impact almost every business that's online, which basically is every business: network scanning, brute-force attacks, exploitation of external-facing assets, single-factor authentication, etc. Then it comes to the next stage, and this is where threat intelligence really shines. What sector is your business in? How large is your business? What information is the crown jewel of your business, what are you protecting? And then it begs the question of who targets you. Are they nation-state threat actors? Are they advanced persistent threats, or is it more commodity-style threat actors, cybercriminals? All of this aids the company's ability to plan and forecast risk. Who's going to target us? What kind of techniques are they going to use? It allows organizations to predict the next steps an adversary might take and it governs how the actual response is handled.

For example, with a ransomware threat, right before the ransomware hits there are telltale signs like C2 deployment, network scanning, lateral movement, and the specific tools that different threat groups use. Knowing that ransomware is likely coming, the company can take precautions to prevent the incident from escalating. We actually had a case a few months ago where a client was getting hit by BlackCat ransomware and we started to see the telltale signs of a ransomware deployment potentially coming. We urged the client to isolate the network and they did, and that actually resulted in them not getting hit by the BlackCat deployment at all, so they were not ransomed.

Threat intel is critically important, and the point of all of this is just to make sure that companies have security tools with the capability to detect the various techniques of the threat groups they're most likely to be targeted by. All of this threat intel gets rolled into tabletop exercises, where you get to run through playbooks on how to respond to these different threat actors, and it really helps a company uplift their security maturity.  
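
To illustrate how those ransomware telltale signs could drive an escalation decision, here is a small hypothetical sketch. The tag names, the technique-to-stage mapping, and the threshold are illustrative assumptions rather than any real detection model; the idea is simply that seeing several distinct precursor stages in a short window should prompt a decision such as isolating parts of the network.

"""Minimal sketch: flag ransomware-precursor activity from alert tags.

Illustrative only: the mapping and threshold are assumptions, not a
Secureworks or Taegis detection model.
"""
from collections import defaultdict
from typing import Iterable

# Illustrative mapping of observed behaviours to precursor stages.
PRECURSOR_STAGES = {
    "c2_beaconing": "command_and_control",
    "cobalt_strike": "command_and_control",
    "port_scan": "network_scanning",
    "smb_enumeration": "network_scanning",
    "psexec_usage": "lateral_movement",
    "rdp_burst": "lateral_movement",
    "shadow_copy_deletion": "impact_preparation",
}
ESCALATE_AT = 3  # distinct precursor stages observed before recommending isolation


def assess_alerts(alert_tags: Iterable[str]) -> dict:
    """Group alert tags into precursor stages and return a verdict."""
    stages = defaultdict(list)
    for tag in alert_tags:
        stage = PRECURSOR_STAGES.get(tag)
        if stage:
            stages[stage].append(tag)
    verdict = "recommend_isolation" if len(stages) >= ESCALATE_AT else "monitor"
    return {"stages_seen": dict(stages), "verdict": verdict}


if __name__ == "__main__":
    observed = ["port_scan", "c2_beaconing", "psexec_usage", "rdp_burst"]
    print(assess_alerts(observed))

Run as-is, the example input (scanning plus C2 plus lateral movement) prints a recommend_isolation verdict, which is the kind of call described in the BlackCat example above: act on the precursors before the encryptor ever runs.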

Host     

I like your point, Lina, about threat intelligence. It's the rise of active intelligence that you can act on, increasingly in real time, that filters out the noise of false alerts around attacks as well, again giving back more time to hard-pressed operations teams, etc. I love that link between using that threat intelligence and acting on it through the simulation play that we were discussing; I think you've linked those two so, so well together. But also, just going back to threat intelligence, the devil is in the detail, isn't it? The context really does matter. I was involved in something on DDoS recently, looking at the latest state of that particular threat and the contextual differences across regions, even countries within regions, and obviously different financial verticals, different sectors, etc. It really makes a difference to how you approach that particular attack and what might be coming next, and it really shows different growth trends across different regions and sectors, etc. So it makes a huge difference, and the more we can come together and share that, again, the good guys can come together rather than the bad actors I was talking about earlier. So I love that marrying together of threat intelligence and context there. Thank you, Lina, brilliant stuff. I know we're running outta time at the moment, but I'm gonna throw in one final question if I may. You've naturally teed this up earlier with some of your recommendations, but we all know there's a much bigger focus at the moment on things like MDR and XDR, but also EDR, and on solutions and software that can give a more effective response. As part of this, it's also important to acknowledge if there are any limitations. What are you seeing that can't be automated when it comes to cybersecurity, particularly incident response? Where does the strength of the human experts really come into play, right now and into the future?  

Lina  

Honestly, there's only so much an automated tool can do, and there's so much more to incident response than just the forensics and detection side, because an alert only shows you the tip of the iceberg. It doesn't tell you the entire context: how did that threat get there? What do you do with the threat? How do you remediate it? What could you be doing to detect it faster, earlier in the attack stage? These are all things that can't be automated, because that's the human touch. And during a high-severity, large-scale incident, the thing that makes an incident response team stand out, a really good incident response team, is their ability to communicate with the client; it's not just the forensic work. You might have really good forensic analysts, but if they're unable to communicate that to the C-suite, to the CISO, then the client's not going to understand the impact of the incident and how to react to it. Your job is to communicate what's occurred and allow the client to make decisions based on the information that you provide. That's really what's missing when it comes to automated tools.  

Host   

So I love that: the communication, but also the enablement, that choice that you're giving and that facilitation of these issues and challenges, etc. Couldn't agree more strongly. At the end of the day, I think like all of these things, it's human and technology in partnership, isn't it, that's how we get the best possible results, supported by those other pillars of culture and skills and the right type of process, the right type of change management, etc. as well. It's really that holistic, integrated approach that makes all the difference. Great examples, Lina, I really love that. I think it really brings the subject to life for the audience and also shows the dynamism in the space as well. On a different note, for everybody listening at the moment, if you're looking at a new career opportunity, whether you're at school at the moment or maybe an older adult looking to reskill or upskill, what an exciting, dynamic place this is to be, to help make a difference to challenges like this. So Lina, thank you so much for joining us, it's been an absolute pleasure.  

Lina    

Thank you so much, Sally. Thank you for having me.  

Host   

And thank you all for listening. Let's Talk SOC is a podcast series brought to you by Secureworks, a leader in cybersecurity helping organizations reduce their risk, maximize their existing security investments, and fill their talent gaps with their cloud-native security analytics platform, Taegis. They offer MDR and XDR solutions for better threat prevention, detection, and response. To learn more, visit secureworks.com.