Unexpected Blocks: Embracing the Human Element


This is Part 2 of a series about unexpected blocks. The first installment, “The Truth About Unexpected Blocks,” explains the difference between unexpected blocks and false positives. 

A quick definition: unexpected blocks

“Unexpected blocks” is an umbrella term for false positives, misunderstood indicators, and blocked malicious traffic on a site you weren’t expecting to be blocked. While many use the terms “false positive” and “unexpected block” interchangeably, false positives are only one kind of unexpected block — so an unexpectedly blocked connection shouldn’t automatically be allowed to pass through. 

Where we are currently with unexpected blocks

Wherever a security mechanism is filtering traffic, there will always be unexpected blocks. These almost always arrive in the form of end users trying to access something they expected to be safe (such as browsing to a local mom-and-pop restaurant’s menu and finding the site blocked). Unfortunately, because multi-homed servers and CDNs are so widely used, many of these unexpected blocks can’t be called false positives: they really do contain malicious traffic. 

Remember: like squares and rectangles, all false positives are unexpected blocks, but not all unexpected blocks are false positives. 
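The squares-and-rectangles relationship above can be sketched in code. This is purely illustrative — the category names below are my own labels for the kinds of unexpected blocks the article describes, not an established taxonomy:

```python
from enum import Enum, auto

class BlockCause(Enum):
    """Illustrative taxonomy of unexpected blocks (names are hypothetical)."""
    FALSE_POSITIVE = auto()           # safe traffic erroneously tagged as malicious
    MISUNDERSTOOD_INDICATOR = auto()  # the indicator was real, but was misread
    SHARED_HOST_MALICE = auto()       # genuinely malicious traffic on a "trusted" site

def is_false_positive(cause: BlockCause) -> bool:
    # Only one kind of unexpected block is a false positive.
    return cause is BlockCause.FALSE_POSITIVE

# All three causes are unexpected blocks, but exactly one is a false positive.
assert sum(is_false_positive(c) for c in BlockCause) == 1
```

The point of the enum: treating every unexpected block as a `FALSE_POSITIVE` — and unblocking it — silently ignores the other two categories.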

Since unexpected blocks are a way of life in this intelligence-driven world, all of this raises the questions: 

  • How do you decide what to let through, and what to keep blocked? 
  • How do you mitigate those risks? 

Decisions to make after an unexpected block

Don’t assume the traffic is safe.

Often the person encountering the unexpected block will be frustrated (because, of course, it is frustrating!) and ask – hopefully politely – for the request to be allowed through. Once this happens, decisions have to be made. 

Remember, not all unexpected blocks are safe traffic erroneously tagged as malicious. While these false positives do happen, threat actors love to target multi-homed servers and CDNs, because root access there means compromising many, many sites rather than just one. We saw a headline-making example of this in May 2023, when an exploited WordPress plugin resulted in the compromise of over a million websites. That attack was particularly notorious because the associated privilege-escalation risk meant any and all services on those WordPress hosting machines – both standalone and multi-homed – could easily become compromised.

Clearly, simply allowing the unexpectedly blocked traffic through without careful consideration isn’t the right call, as it could very well be malicious. (And that holds no matter how, ahem, “politely” the frustrated requester asks.) 

Ask questions about the blocked traffic. 

Once the investigation commences, a series of questions about the traffic will help inform the decision, such as: 

  • Where is the traffic from, and where is it going to? 
  • What services is it running on? (remember that multi-homed servers and CDNs are two common culprits) 
  • Why was this flagged as malicious traffic? 
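The answers to those triage questions can be captured in a simple record. This is a minimal sketch — the field names, helper method, and sample values are all hypothetical, not part of any particular product:

```python
from dataclasses import dataclass

@dataclass
class BlockTriage:
    """Answers to the triage questions above (field names are illustrative)."""
    source: str           # where the traffic is from
    destination: str      # where it is going
    services: list[str]   # what services it runs on
    flag_reason: str      # why the filter tagged it as malicious

    def on_shared_infrastructure(self) -> bool:
        # Multi-homed servers and CDNs are the two common culprits, so a hit
        # here means the block may be legitimate even if the site looks benign.
        return any(s.lower() in ("cdn", "multi-homed") for s in self.services)

ticket = BlockTriage(
    source="10.0.4.17",
    destination="menu.example-restaurant.com",
    services=["cdn"],
    flag_reason="destination IP appears on a threat-intel blocklist",
)
print(ticket.on_shared_infrastructure())  # True: warrants a closer look
```

A structured record like this also makes the eventual allow/deny decision auditable: whoever overrides the block can see exactly what was known at the time.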

If you’re lucky, these answers will make it easy to act confidently in either direction. However, and I say this with empathy, you won’t always be that lucky. 

AI can calculate risk, but must not assume it. 

At this point many might assume this whole process is run through some fancy AI that makes these decisions instantaneously. Unfortunately, it’s often not that simple. 

AI can generally calculate risk, but it must not assume it. At some point, a human has to decide exactly how much risk they and the organization are willing to take on. This is the idea behind Human-in-the-Loop: a human remains part of the AI-fueled decision-making process and owns the call on how much risk is too much. In the case of unexpected blocks, humans should be involved in any decision to override a block above a certain risk threshold. AI should never be relied upon exclusively.
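A Human-in-the-Loop gate can be as simple as a threshold check. In this sketch the threshold value, the function name, and the returned labels are all illustrative — the only real decision logic is the one the article describes: the model calculates the risk, but above the organization’s chosen threshold a person assumes it:

```python
# Org-chosen threshold: above this, a human owns the override decision.
HUMAN_REVIEW_THRESHOLD = 0.3  # illustrative value

def decide_override(risk_score: float) -> str:
    """Decide who rules on releasing an unexpectedly blocked connection."""
    if risk_score >= HUMAN_REVIEW_THRESHOLD:
        return "escalate-to-human"  # a person assumes the risk, not the AI
    return "auto-allow"             # low enough risk to release automatically

print(decide_override(0.05))  # auto-allow
print(decide_override(0.72))  # escalate-to-human
```

Note that the AI never auto-denies or auto-allows anything risky here; its only unilateral power is over traffic the organization has already agreed is low-risk.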

Educate and create trust with your end users

In the world of unexpected blocks, no matter how much AI you use and no matter how well trained your people are, there is always the possibility of a decision that ends up allowing in malicious and/or damaging traffic. And this is where building a good culture of security becomes crucial to your security strategy. 

After you allow the traffic through, acknowledge that your end users are acting in good faith and teach them what to look out for in case the traffic does wind up being malicious. Empower your users to watch for their devices doing unexpected things, such as extreme performance drops or random shutdowns, and encourage them to report those events if they happen, especially if they occur relatively soon after the potentially malicious (unexpectedly blocked) traffic was allowed through. And, of course, never make them feel ashamed of the situation (unless you never want to be informed about potential threats to your network, that is; and while peaceful, that is not a good security strategy). 

Unexpected blocks are not to be feared; they are a necessary evil. However, by combining machine learning with Human-in-the-Loop, we can make better and faster decisions about the levels of risk we can or cannot take on. 

In our next and final post (for now) about unexpected blocks, we’ll break down how to make the management of these unexpected blocks easier for everyone, so stay tuned!