The major problem posed by botnets nowadays justifies the effort being put in by computer security firms, universities, CERTs, the police and others to develop and implement techniques to detect and prevent them. This involves both analysing infected systems and aiding the victims of their actions (those hit by denial-of-service attacks, spam and the like).
However, the fight against botnets is not an easily won battle: they are a constantly evolving challenge, and cyber-criminals never cease creating new techniques to evade current detection systems, trying to keep these networks of zombie computers running for as long as possible and with the greatest feasible effect. The use of techniques such as Fast Flux, dynamic domain name generation, encryption of communications, communication over P2P or the Tor network, and anti-reversing measures, among others, is an attempt to avoid or obstruct detection of this sort of malware.
Antibotnet Service of the Oficina de Seguridad del Internauta
There are various classifications of these kinds of techniques. However, in this document we will use the proposal made by ENISA in its report entitled "Botnets: Detection, Measurement, Disinfection and Defence". This report divides techniques into two groups: passive and active.
Main Techniques for Detecting Botnets
This category brings together any botnet detection systems or procedures based solely on observation, without interfering with or modifying any of the elements involved. Although these techniques suffer from limitations on the information they can gather, they are also harder for botnet owners to detect. The following describes the main characteristics and functions of the first of the passive techniques for detecting botnets, packet inspection:
This technique basically involves inspecting the packets in network traffic, looking for predefined patterns or characteristics intended to reveal possible threats. Examples of this approach would be checking the destination IP of packets against a blacklist of IPs classified as command and control centres for botnets, monitoring connections to ports that differ from those normally used, or searching for strings of special characters that might identify a threat.
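The header checks just described can be sketched in a few lines. This is only an illustrative model, not a real tool: the blacklist and port set are invented examples (the IPs come from the RFC 5737 documentation ranges), and a real deployment would match against live traffic and curated threat feeds.

```python
# Sketch of the header-level checks described above: compare a packet's
# destination IP and port against a blacklist and a set of suspicious ports.
# All entries here are illustrative, not real C&C data.

SUSPICIOUS_PORTS = {6667, 31337}  # e.g. IRC and a classic backdoor port
CNC_BLACKLIST = {"203.0.113.10", "198.51.100.7"}  # RFC 5737 example IPs

def flag_packet(dst_ip: str, dst_port: int) -> list[str]:
    """Return the list of reasons a packet header looks suspicious."""
    reasons = []
    if dst_ip in CNC_BLACKLIST:
        reasons.append("destination IP on C&C blacklist")
    if dst_port in SUSPICIOUS_PORTS:
        reasons.append(f"unusual destination port {dst_port}")
    return reasons

print(flag_packet("203.0.113.10", 6667))
```

A real IDS applies thousands of such rules per packet, which is why carelessly written rule sets cause the scalability problems discussed later in this section.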
Depending on the depth of the analysis undertaken, a distinction can be made between a more superficial inspection of packets, termed Stateful Packet Inspection (SPI), and a deeper inspection, termed Deep Packet Inspection (DPI). An SPI analysis concentrates exclusively on header data, such as IPs, ports or protocols, without examining the packet's actual payload. This sort of SPI check is what firewalls normally carry out.
A DPI analysis, on the other hand, checks not only the packet's header data but also its payload. This sort of investigation is of greater interest when attempting to detect botnets, as it combines analytic approaches based on signatures, statistical analysis and anomaly checking to detect possible threats on the network being monitored. Given appropriate parameters, these systems can detect not just known, identified threats but also new ones, by noticing anomalous patterns or traffic.
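The signature-matching part of DPI can be sketched as a scan of the payload bytes against known patterns. The signatures below are invented for illustration (the first echoes the `-2013.zip` pattern used in the SNORT rule shown later in this section); real signature databases are far larger and are matched with optimized multi-pattern algorithms rather than a simple loop.

```python
# Minimal DPI-style payload scan: look for known byte signatures inside the
# packet payload itself, not just the headers. Signatures are examples only.

SIGNATURES = {
    b"-2013.zip\r\n": "Zeus-style spam attachment name",
    b"GET /gate.php": "generic botnet check-in URI",
}

def scan_payload(payload: bytes) -> list[str]:
    """Return the labels of all signatures found in the payload."""
    return [label for sig, label in SIGNATURES.items() if sig in payload]

http = (b"HTTP/1.1 200 OK\r\n"
        b"Content-Disposition: attachment; filename=invoice-2013.zip\r\n\r\n")
print(scan_payload(http))
```

Note that this kind of matching fails outright on encrypted traffic, which is one of the limitations listed below.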
The kinds of systems or applications currently in existence that are aimed at performing DPI analyses can be classed as Intrusion Detection Systems (IDSs) and Intrusion Prevention Systems (IPSs). IDSs are passive systems that merely trigger alerts, whilst IPSs take active measures, such as shutting down connections that appear suspicious.
One example of this sort of IDS/IPS software is SNORT, developed by Sourcefire, which is the de facto standard at present. Another is Suricata, from the Open Information Security Foundation (OISF), and a third is Bro, currently under development by the International Computer Science Institute and the National Center for Supercomputing Applications. All three are open-source IDSs/IPSs whose purpose is to analyse network traffic against a large set of threat-detection rules and to minimize threats through configured actions.
Logos for the IDSs SNORT, SURICATA and BRO
These IDSs can be configured to detect one sort of event or another by means of rules. For instance, the following is a SNORT rule for detecting connections established by the Zeus botnet:
alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"MALWARE-OTHER Win.Trojan.Zeus Spam 2013 dated zip/exe HTTP Response - potential malware download"; flow:to_client,established; content:"-2013.zip|0D 0A|"; fast_pattern:only; content:"-2013.zip|0D 0A|"; http_header; content:"-"; within:1; distance:-14; http_header; file_data; content:"-2013.exe"; content:"-"; within:1; distance:-14; metadata:impact_flag red, policy balanced-ips drop, policy security-ips drop, ruleset community, service http; reference:url,www.virustotal.com/en/file/2eff3ee6ac7f5bf85e4ebcbe51974d0708cef666581ef1385c628233614b22c0/analysis/; classtype:trojan-activity; sid:26470; rev:1;)
Example of a Rule from SNORT for Detecting the Zeus Botnet
While this detection technique is useful, it does suffer from several problems:
- A lack of scalability, as in networks with high traffic loads the amount of information may create a bottleneck if the system’s rules are not carefully defined.
- A high rate of false positives.
- Difficulties in analysing packets if communications are encrypted, since the contents of the packets to be analysed would not be readable.
- Possible legal problems with the information sent in packets.
- Expensive initial installation, as the system must go through a configuration process to adapt it to each specific environment where it is to be used.
On the other hand, packet analysis does offer the following advantages:
- Fast reaction time, as a "simple" update to the rules allows a system to be protected against new threats.
- Flexibility, since the system is able to detect undefined threats by discovering anomalous behaviour patterns.
- The possibility of a "post-mortem" analysis if network traffic is recorded.
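A "post-mortem" analysis of the kind mentioned in the last point starts from a recorded capture file, typically in libpcap format. The sketch below synthesises a tiny two-record capture in memory so the example is self-contained; in practice you would read a real `.pcap` file recorded from the monitored network. The format constants follow the classic libpcap file layout (24-byte global header, 16-byte per-packet record header).

```python
import io
import struct

def make_pcap(payloads):
    """Build a minimal in-memory pcap capture from raw packet payloads."""
    buf = io.BytesIO()
    # Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype 1 (Ethernet)
    buf.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
    for p in payloads:
        # Per-packet record header: ts_sec, ts_usec, captured length, original length
        buf.write(struct.pack("<IIII", 0, 0, len(p), len(p)))
        buf.write(p)
    return buf.getvalue()

def count_packets(pcap_bytes):
    """Walk the capture record by record and count the packets it contains."""
    off = 24  # skip the global header
    n = 0
    while off < len(pcap_bytes):
        _, _, incl_len, _ = struct.unpack_from("<IIII", pcap_bytes, off)
        off += 16 + incl_len  # record header plus captured bytes
        n += 1
    return n

capture = make_pcap([b"\x00" * 60, b"\x00" * 98])
print(count_packets(capture))  # → 2
```

A real post-mortem workflow would go further, re-running updated signatures over the stored payloads to find infections that were unknown at capture time.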