December 14, 2023
Security pundits often say “humans are the weakest link,” and I agree – if the humans you’re talking about are security pundits with such narrow vision.
My epiphany came over a decade ago. Our prototype system had identified a geo-spatial security anomaly that immediately caught our attention: an employee logged in from Hong Kong, and then ten minutes later, they logged in from China.
Such black-and-white cases rarely happen in security; more often, my team would spend hours poring over activity logs, checking for C2s, and analyzing data across multiple systems just to infer whether there was foul play. But in this instance, our confidence in a conviction based on the geo-spatial anomaly was high.
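To make that concrete, here is a minimal sketch of the kind of “impossible travel” check a prototype like ours might rely on – the login fields, function names, and 900 km/h speed threshold are illustrative assumptions, not our actual system:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt


@dataclass
class Login:
    user: str
    timestamp: datetime
    lat: float
    lon: float


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def impossible_travel(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    """Flag consecutive logins whose implied travel speed exceeds a plausible airliner speed."""
    hours = (curr.timestamp - prev.timestamp).total_seconds() / 3600
    if hours <= 0:
        return True  # simultaneous or out-of-order logins are suspicious by default
    distance = haversine_km(prev.lat, prev.lon, curr.lat, curr.lon)
    return distance / hours > max_kmh
```

As our false positive showed, the geometry alone isn’t enough: a VPN egress point in Hong Kong made two logins from the same hotel look like a cross-border sprint, which is exactly why the employee’s context mattered more than the math.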
We kicked into gear, spinning up all the resources necessary to mitigate the threat and recover the employee’s account. Since we saw the event in real time, we could react in real time and gather significant data on the purported attacker’s tools, techniques, and procedures. However, once we contacted the employee to verify what we had observed, they described a reality that contradicted our assessment.
It turns out, the employee had actually performed both logins themselves. They were on a business trip in China and had first logged into the VPN from their hotel room, which routed its Internet traffic through Hong Kong. The second time, they had logged on from the conference hall wireless network on the ground floor of the hotel.
We had spun up our response resources to address the event based on the alert, without full context of the employee’s activity.
My team was spending so much time and so many resources running through our playbooks – Is the box compromised? Is there malware installed? Are there outbound communications? – just to determine whether an anomaly was a security threat or merely an employee’s accidental risky behavior. I always thought this was unnecessarily painful and time-consuming. Distinguishing insider risk from an external threat can be a challenge, and often we eventually have to risk it and simply ask the employee to fill in the blanks.
Obviously, if it is an insider threat, then you tip your hand. But barring a strong suspicion of malintent, reaching out to users is a great way to build context quickly. Not only will you move through the traditional identification phase quicker, but you’ll also avoid putting employees on the defensive.
Time and time again, I’ve seen how employees are our best frontline sensors. Humans have an uncanny ability to know when something is “off,” so much so that the best security trainings tell employees to always “trust their gut” if they feel that an email message or hyperlink isn’t quite right. Now imagine if security teams could tap into employees’ collective gut?
Okay, so that’s not the prettiest metaphor, but bear with me. What I’m getting at is, employees are the operators of every one of your business functions. They’re a lot closer to their operations than the security team. I know, for example, that a procurement manager will be much better at identifying someone trying to pass off a false invoice on their system than I will.
Even with all the security automations I have leveraged at my various places of employment, my employees are the best sensors at the procedural level of my business. And when they see my security team as a collaborative and helpful resource, they come to us with problems. This enables us to quickly get context when we notice suspicious behavior and to identify incidents faster based on their proactive observations.
For me, human-centric means acknowledging the capabilities of humans to be part of security solutions. The world of cybersecurity is highly complex, full of outliers, exceptions, and one-offs that are all driven by humans. This is only heightened by our increased reliance on remote workers, part-time employees, and contractors accessing our systems from across geographies and time zones.
Don’t get me wrong, I advocate for my fair share of automation. But automation can only tell you what is happening, not why it’s happening. There’s so much nuance in the “why,” and this has led me to reconsider how much confidence we place in the conviction capabilities of tools such as SOAR, DLP, and UEBA.
For example, a company might have 14,000 alerts for unpatched assets reported in a vulnerability management platform. If I were to enable whack-a-mole, event-triggered automations for every vulnerability, employees would be flooded with messages for each issue. I’m pretty sure my fellow employees would hate me at that point (and I wouldn’t blame them). Personal emotions aside, having that many alerts wouldn’t actually make us any safer; it would simply transfer the alert fatigue from my team to our peers. Employees would get overwhelmed by the interruptions and would probably end up creating a filter to avoid the constant notifications.
Instead of bombarding employees with alerts, a human-centric practice would be to send one message that explains the bigger picture. If an employee is behind on patching, for instance, the message could explain why patching matters to their job function, give straightforward instructions, and let them choose a convenient time to get it all done.
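As a rough sketch of what that consolidation could look like in practice – the alert fields, message wording, and “one-click updater” below are hypothetical, not any particular platform’s API – a batching step can roll per-vulnerability alerts into a single per-employee digest:

```python
from collections import defaultdict

# Illustrative alert records as they might be exported from a vulnerability
# management platform; the field names here are assumptions, not a real API.
alerts = [
    {"owner": "alice@example.com", "asset": "alice-laptop", "cve": "CVE-2023-1234", "severity": "high"},
    {"owner": "alice@example.com", "asset": "alice-laptop", "cve": "CVE-2023-2345", "severity": "medium"},
    {"owner": "bob@example.com", "asset": "bob-desktop", "cve": "CVE-2023-3456", "severity": "high"},
]


def build_digests(alerts):
    """Group per-vulnerability alerts into one contextual message per employee."""
    by_owner = defaultdict(list)
    for alert in alerts:
        by_owner[alert["owner"]].append(alert)

    digests = {}
    for owner, items in by_owner.items():
        high = sum(1 for a in items if a["severity"] == "high")
        assets = sorted({a["asset"] for a in items})
        digests[owner] = (
            f"Hi, {len(items)} pending patches ({high} high severity) affect {', '.join(assets)}.\n"
            "Unpatched software is the most common way attackers reach the systems you rely on.\n"
            "Run the one-click updater at a time that suits you this week; reply if you need a hand."
        )
    return digests


if __name__ == "__main__":
    for owner, message in build_digests(alerts).items():
        print(owner, message, sep="\n", end="\n\n")
```

The point isn’t the code itself but the shape of the message: one interruption, a plain-language “why,” and room for the employee to choose when to act.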
As threats have evolved, the security community has responded with technical and procedural controls, and we as security professionals have changed how we manage and respond to those threats. The problem is that most organizations haven’t significantly changed their approach to threat management: we’ve kept the old silos between those responsible for security monitoring and response and the rest of the employee base.
Security teams can no longer operate this way. Keeping organizations safe requires more than a yearly training or policy update in the employee handbook. Security has to be embedded in every function, so it’s not just about doing your job well, but doing it securely. That means empowering all employees to participate in security.
One of the core values of my current amazing security team is to build commitment over compliance. While we will happily take the win of getting a technical solution deployed, what we really celebrate is when our peers do the work without coercion or engage us proactively because they know we will do everything possible to enable them rather than gate them. It’s very similar to the “teach a man to fish” proverb – we will put in the work to get the long-term effect rather than immediate gratification.
Focusing only on the “what” in your messaging limits employees’ engagement to that specific tactical situation. Expanding your messaging to include the “why” opens up the possibility of employees applying that “why” to things you may not have considered. Coupling the “why” with a “to learn more” avenue can also help you identify and nurture security champions.
Employees have untapped potential for threat detection, threat modeling, tabletop exercises, and security tool optimization. With the right training, they can contribute across the various domains of your security program. Maybe you’ve trained employees not to click phishing links or to report when their laptop is stolen, but that’s only partially tuning your human detection system. Human-centric security enables you to fully tune it. And that’s part of our fundamental job function. I mean, you’d never tune your IDS only halfway, right?