Triaging new detection ideas is an important aspect of detection engineering, as it allows us to focus on the most important tasks and to optimize the use of our limited resources (both human and technological).
The process doesn't have to be perfect, but it needs to minimize the effect of personal preferences and the tendency to chase the most advanced or recent techniques. To do so, we need a way to assign qualitative scores to certain critical questions; the threshold can be adjusted to your needs and context.
The following diagram tries to summarize the most relevant questions to consider when deciding whether or not to implement a new detection idea. Note that if a detection does not meet the agreed threshold, it can be repurposed as a hunt or a scheduled report (the goal here is to scope the rules running in near real-time, which tend to consume more computing power).
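To make the thresholding concrete, the routing decision could be sketched as follows. The threshold values and category names here are hypothetical illustrations, not prescribed values; they would be tuned per environment:

```python
# Hypothetical sketch of routing a detection idea based on its total score.
# The threshold values below are assumptions for illustration only.
REALTIME_THRESHOLD = 50   # assumed cut-off for a near real-time rule
HUNT_THRESHOLD = 30       # assumed cut-off for a hunt / scheduled report

def route_detection_idea(detection_score: int) -> str:
    """Decide how to implement a detection idea given its qualitative score."""
    if detection_score >= REALTIME_THRESHOLD:
        return "near real-time rule"
    if detection_score >= HUNT_THRESHOLD:
        return "hunt or scheduled report"
    return "backlog / rejected"

print(route_detection_idea(57))  # -> near real-time rule
```

The point is not the exact numbers but that the decision becomes mechanical once the scores are assigned, which limits personal bias.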
Detecting all known common LOLBINs connecting to the internet seems, at first glance, like a good idea, but let's take it through the above process:
- Coverage width is high, since a variety of malware droppers tend to involve some kind of LOLBIN -> DS = 5 (having access to a malware sandbox helps with this point)
- Performance impact is medium: although the number of LOLBIN binaries is considerably high (at least 25 processes), we are still using only one type of event (network), with no correlation, and the logic is simple (if a network event is detected and the process is in lolbins_list, alert) -> DS = 10
- It is a critical technique (it matches Initial Access and Execution, and partially Defense Evasion too) -> DS = 15
- Triage experience and noise ratio: a considerable number of LOLBINs connect to the internet for legitimate activities; this makes quick assessment a little harder and likewise impacts the FP (false positive) rate -> DS = 15 (no change to the score)
- Resilience to bypass is low: renaming a LOLBIN process to something else takes little effort -> DS = 12 (15 - 3)
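The simple rule logic described in the performance bullet could be sketched as follows. The event shape and the (truncated) LOLBIN list are assumptions for illustration:

```python
# Minimal sketch of the rule: alert when a known LOLBIN makes a network connection.
# The event structure and this short LOLBIN list are assumptions for illustration;
# a real list would cover far more binaries.
LOLBINS_LIST = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe", "bitsadmin.exe"}

def should_alert(event: dict) -> bool:
    """Return True when a network event originates from a process in the LOLBIN list."""
    return event.get("type") == "network" and event.get("process", "").lower() in LOLBINS_LIST

events = [
    {"type": "network", "process": "certutil.exe", "dest": "203.0.113.10"},
    {"type": "network", "process": "chrome.exe", "dest": "198.51.100.7"},
]
alerts = [e for e in events if should_alert(e)]
print(alerts)  # only the certutil.exe event matches
```

Note that the rule matches on the process name alone, which is exactly why the resilience score above is low: a simple rename of the binary bypasses it.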