

Learn to think like an attacker to stay safe online

October 6, 2015

It’s National Cyber Security Awareness Month, so you’ll be seeing a lot of encouraging words to “be aware” of cybersecurity. But how the heck are we supposed to do that? Cybersecurity done well is ideally something people don’t have to think about! A big part of cybersecurity awareness, however, involves being able to estimate common threats and plan for them accordingly. In the technical community, we call this threat modeling.

At a high level, threat modeling is a discipline in which we learn to think like an attacker. That can be counter-intuitive: if attackers are bad, why should we try to think more like they do? The answer is simple: putting yourself in the mindset of an attacker can be very valuable for protecting yourself, your data, and your devices. When we model threats, we ask questions like: What might an attacker’s motivations be? What resources might they have? What methods are they likely to use?

Humans engage in threat modeling all the time; we just may not realize it. We decide whether it’s safe to listen to our iPod at a particular bus stop. We decide whether we can leave our laptop unattended for a moment at the airport while we throw something away. In these scenarios, we make judgment calls about how attackers might act, then change our behavior accordingly.

In the computing context, we think about the same things: our potential attackers’ motivations, resources, and methods. When we think about motivations, we first ask ourselves: why might someone want to attack me? Monetary gain? Disrupting my ability to communicate? Infiltrating my computer to get access to other computers or to my contacts?

Attackers will operate differently depending on their motivations. For example, an attacker who is interested in espionage probably won’t seek to harm the systems they have infiltrated; they want to go unnoticed lest they tip off their target. They would also prefer the system continue to work as it had before they intervened so they can analyze its use.

Conversely, an attacker may just want to mooch or free-ride off of computing resources; an infected computer can be used to send out spam or to perform denial-of-service attacks. This kind of attacker doesn’t care much about which systems they compromise, and probably won’t spend expensive resources, such as zero-day vulnerabilities (previously unknown, unpatched bugs that can sell for six figures on the black market), to break in. Defending against them doesn’t require especially sophisticated measures; keeping your software up to date is probably sufficient.

The next question to ask is, what resources will the attacker have? Is this a government with dedicated, well-funded teams and the possibility of agents in the field capable of physically breaking in? Or, might it be a terrorist group that must hack you remotely, and be unlikely to have confidants near you? Or, could it be a kid in her parents’ basement down the block, who doesn’t have a lot of resources but may be quite smart? All of these attackers have different means, and different defenses would be needed for each.

Finally, what methods will our attackers use? Is the attacker a relatively low-skilled “script kiddie” who relies on programs and exploits written by others, or an “elite hacker” who crafts their own tools (say, a convincing email claiming to be from your boss) and takes over your computer when you open the infected attachment? Might the attacker you’re worried about be able to physically break in? Or is it a government with access to the supply chain for the manufacturing of your device or software, with agents who can physically access your computers?
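To make these three questions concrete, here is a brief sketch in Python. Everything in it, from the AttackerProfile structure to the two example profiles, is an illustrative assumption of ours rather than a formal taxonomy; the point is simply that a threat model amounts to structured answers to the motivation, resource, and method questions.

```python
# Illustrative sketch: a threat model as structured answers to the
# three questions above. The profiles are invented examples, not an
# authoritative taxonomy.
from dataclasses import dataclass

@dataclass
class AttackerProfile:
    name: str
    motivations: list[str]  # why might they attack me?
    resources: list[str]    # what can they bring to bear?
    methods: list[str]      # how are they likely to operate?

profiles = [
    AttackerProfile(
        name="resource moocher (spammer / botnet herder)",
        motivations=["free-ride on my computing resources"],
        resources=["commodity malware; no expensive zero-days"],
        methods=["mass phishing", "exploiting unpatched software"],
    ),
    AttackerProfile(
        name="espionage-focused government",
        motivations=["quiet, long-term access to my data"],
        resources=["funded teams", "field agents", "supply-chain access"],
        methods=["custom tools", "targeted phishing", "physical access"],
    ),
]

# Walking the profiles turns vague worry into a concrete checklist:
# for each attacker, which defenses actually matter?
for p in profiles:
    print(f"--- {p.name} ---")
    print("  motivations:", "; ".join(p.motivations))
    print("  resources:  ", "; ".join(p.resources))
    print("  methods:    ", "; ".join(p.methods))
```

Notice how the exercise pays off: the moocher profile is answered by routine software updates, while the government profile calls for much stronger measures. The structure, not the code, is what matters here.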

By asking these sorts of questions about themselves and their organizations, users can start to make better security decisions. Specific technologies like two-factor authentication are important, but a critical eye can spot many attacks that technology alone would miss.

Users should be alert for warning signs that someone is scamming them; we’ll call these “red flags”. When a self-identified reporter calls nine times demanding you open a PDF, that’s a red flag; they might be trying to attack you! Or maybe you receive a phone call from “card services” without any bank identified; if they ask you to verify your sensitive financial information, that’s a red flag! Or perhaps a construction worker smoking a cigarette outside your office walks in behind you after you swipe your security badge, getting into the building without an access card of their own. All of these are examples of situations where an alert user could notice nefariousness was afoot and take action.
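If it helps, you can think of this habit as running a quick mental checklist. The toy sketch below mirrors the pushy-reporter example; the function name, flags, and messages are all invented for illustration and are nowhere near a real phishing detector.

```python
# Illustrative red-flag checklist for an unexpected message. The flags
# here are assumptions for the sketch, not a complete or reliable test.
def email_red_flags(sender_known: bool, urgent_tone: bool,
                    has_attachment: bool, asks_for_credentials: bool) -> list[str]:
    flags = []
    if not sender_known:
        flags.append("sender is not someone you normally correspond with")
    if urgent_tone:
        flags.append("message pressures you to act immediately")
    if has_attachment and not sender_known:
        flags.append("unexpected attachment from an unfamiliar sender")
    if asks_for_credentials:
        flags.append("asks you to verify passwords or financial details")
    return flags

# The self-identified reporter who calls nine times demanding you open a PDF:
for flag in email_red_flags(sender_known=False, urgent_tone=True,
                            has_attachment=True, asks_for_credentials=False):
    print("red flag:", flag)
```

Real attacks won’t always trip these particular flags; the value is in the habit of checking, not in any piece of code.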

If you want to teach your organization how to engage in threat modeling, Tamara Denning, Batya Friedman, and Tadayoshi Kohno at the University of Washington have developed a Creative Commons-licensed threat modeling card game that can help train your staff.