In information security, "air gap" is the name given to a network security measure consisting of the physical isolation of one network from others that are considered insecure. This measure is widely used in environments such as industrial control systems and government or defence networks.
In fact, in industrial control systems air gaps were long one of the most basic protective measures, alongside security through obscurity. Then Stuxnet made its appearance. The air gap was bypassed with an infected USB pen-drive, compromising the isolated network and ultimately destroying some 20% of Iran's nuclear centrifuges. Once past the air gap, Stuxnet propagated itself independently until it found the industrial control systems (running Siemens' WinCC) in order to infect the programmable logic controller (PLC) devices that drove the centrifuges, deleting itself thereafter.
Stuxnet changed the assumption that systems could be effectively isolated; the security of an air gap became a myth. Manufacturers of industrial equipment have shifted their security recommendations from "keep networks isolated (air gaps)" to "segment networks and protect them adequately", having seen that in practice an air gap on its own provides no security. Defence in depth is the trend to follow: several individually weaker lines of defence instead of a single stronger one.
In fact, even if a true air gap existed, by itself it would be of little use in protecting a system, since cyber-attackers have long since gone beyond the limits of physical cabling. Some of the assets deployed in networks include Wi-Fi capabilities embedded at the microprocessor level that can be exploited either by cyber-criminals or by disgruntled workers. Moreover, as in the case of Stuxnet, the air gap can be compromised through the actions of a human, whether intentionally or accidentally.
In general, when the intention is to compromise a network (whether or not isolated) three problems have to be solved:
- How to get malware into an isolated network.
- How to send commands to the malware once it is inside the isolated network.
- How to get information out of an isolated network.
When considering isolated networks, the step of introducing malware into the network could be achieved by three possible routes, assuming the malware itself is unable to cross an air gap in the way that BadBIOS was claimed to do. The use of social engineering is a good bet if a worker with physical access to the isolated network can be tricked into some action that gets around the air gap. Another, similar vector would obviously be a malicious employee, who would voluntarily and intentionally circumvent the air gap to put the network at risk. A further possibility would be a supply chain attack: intercepting and manipulating a piece of equipment destined for the isolated network, so that it is already compromised when it is connected up to the network.
In respect of the problem of communicating with the malware once it is deployed, there are two basic requirements that ideally should be fulfilled. On the one hand, it should be possible to send commands in real time (or nearly so). On the other, the malware should be capable of being updated in some way, so as to improve its capabilities or to evade the network's security elements. There is a good number of imaginative pieces of research in this area, some of which even attempt to communicate with malware in an isolated network by modulating the temperature of the room to transmit commands.
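The idea behind such thermal signalling can be illustrated with a toy simulation. The sketch below is purely hypothetical (the temperature values, the encoding and the function names are all invented for illustration): each interval, the transmitter either heats the room (bit 1) or lets it idle (bit 0), and the receiver recovers the bits by thresholding the sensed temperature.

```python
# Toy simulation of a thermal covert channel: one bit per interval,
# encoded as "hot" (1) or "ambient" (0). All values are hypothetical.

AMBIENT = 22.0   # baseline room temperature in Celsius (assumed)
DELTA = 1.5      # rise produced by one "hot" interval (assumed)

def transmit(bits):
    """Return the simulated temperature reading for each interval."""
    return [AMBIENT + DELTA if bit else AMBIENT for bit in bits]

def receive(readings, threshold=AMBIENT + DELTA / 2):
    """Recover the bit stream by thresholding the sensed temperature."""
    return [1 if temp > threshold else 0 for temp in readings]

command = [1, 0, 1, 1, 0]   # e.g. an opcode destined for the implant
print(receive(transmit(command)))  # [1, 0, 1, 1, 0]
```

A real channel of this kind would of course be far noisier and slower, since rooms heat and cool over minutes rather than instants; the point is only that any physical quantity an attacker can influence and a sensor can read becomes a potential command channel.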
The third and final challenge for an attacker is how to get information out of the network, either via some communication channel that will not be noticed or through the people who have access to the isolated network. Among unnoticed means of communication the TEMPEST initiative should be highlighted, this being certified by NATO and also specified by the United States National Security Agency. It involves trying to obtain information from equipment through physical phenomena such as vibrations or unintentional emissions of radio-frequency signals, in so-called side-channel attacks. Apart from TEMPEST, other technologies are being developed to perform this same function. An example of current work in this area is AirHopper, a method proposed by researchers at Ben-Gurion University in Israel. This technique uses the graphics card of a compromised computer as an FM transmitter able to send information to an adapted mobile phone up to seven metres away. It is true that the transmission speed is not very high, at somewhere between thirteen and sixty bits per second, but even so only a quarter of an hour would be needed to send the information gathered by a keylogger over the course of a day.
Even if a perfect air gap, impregnable in theory, could be achieved, it would be impractical in day-to-day operations. A control system could simply be left without any connection to other networks, but the system itself would need to be accessed so often that the air gap would become unmanageable: there would be times when new software needed for operations had to be deployed, times when a major update was needed to counter a zero-day vulnerability, and so forth. Even if these adjustments were made exclusively with removable memory devices, that was precisely the way Stuxnet got in.
Hence, those responsible for this kind of system must ensure that its protection evolves away from air gaps towards proper segmentation and fortification of networks. Some of the preventive actions that might be taken would be along the following lines:
- Protecting the supply chain. As proposed by the Internet Security Alliance (ISA) in its document Securing the Supply Chain for Electronic Equipment: A Strategy and Framework, this would have four main strands. These are protection against interruption of the supply chain, protection against the insertion of malware into equipment, protection against the destruction of trust and protection against loss of control over information.
- Detection and prevention of malicious employees. This would involve measures to detect workers with malicious intentions, either by monitoring user behaviour or by checking on communications. It would also take steps to prevent harmful actions, such as classifying information properly and giving the minimum possible user permissions.
- Raising awareness and training of employees. This is in order to avoid their being tricked and thereafter used to compromise system security.
- Looking for covert communication channels www.sans.org/security-resources/idfaq/covert_chan.php (Link currently unavailable). This is without doubt the greatest challenge for defenders. A covert channel is a communications technique that transfers information between systems by means not intended for that purpose. For instance, header fields that are unused or rarely used could be employed, or a protocol such as the Domain Name System (DNS) might simply be used for communications, as some botnets have done. Some current intrusion detection systems (IDS) and intrusion prevention systems (IPS), such as Snort, now include mechanisms for discovering covert channels, but it is necessary to keep them up to date and to incorporate any new trends.
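To see why DNS lends itself to this misuse, consider how data can be packed into an ordinary-looking lookup. The sketch below is a minimal illustration, assuming a hypothetical attacker-controlled zone `exfil.example`; stolen bytes are hex-encoded into a subdomain label, and the attacker's authoritative name server recovers them from its query logs. No network traffic is generated here.

```python
# Illustration of DNS-based covert-channel encoding. The zone name
# "exfil.example" is a hypothetical attacker-controlled domain.

def encode_query(data: bytes, zone: str = "exfil.example") -> str:
    """Pack data into a DNS name (a label may hold at most 63 octets)."""
    label = data.hex()
    assert len(label) <= 63, "chunk too large for a single label"
    return f"{label}.{zone}"

def decode_query(name: str) -> bytes:
    """Recover the original bytes from the leftmost label."""
    label = name.split(".", 1)[0]
    return bytes.fromhex(label)

query = encode_query(b"passwd")
print(query)                          # 706173737764.exfil.example
print(decode_query(query))            # b'passwd'
```

This is exactly the kind of traffic a defender should hunt for: unusually long or high-entropy labels, and hosts issuing far more unique DNS queries than their role would justify.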
However, there is also a need to establish safe perimeters for enclaves. As there will always be connections to the system, it is best to run them all through a single link that can be properly monitored and controlled. Furthermore, it will be necessary to deploy one or more security devices to monitor and inspect traffic over the network.
Illustration 1: A Correctly Segmented ICS According to Siemens.
On the perimeter there will be a need to deploy at the very least a firewall, although further security equipment, such as IDS or IPS, unified threat managers (UTM), application monitors and so on, is also advisable depending on how critical the enclave is, as assessed under the CIP-005-5 guidelines (Cyber Security - Electronic Security Perimeter(s)) of the North American Electric Reliability Corporation (NERC). Security measures inside the network include all those in general use (up-to-date antivirus programs, HIDS, firewalls on terminals, whitelists of permitted applications and the like).