by Del Rodillas, Palo Alto Networks
One of the most interesting talks during DistribuTECH Conference & Exhibition was a case study involving sensitive data exfiltration from the operational technology (OT) environment of a North American electric utility.
We often hear about the dangers of a cyberattack taking down the grid, but public information about the loss of sensitive information, particularly from the OT environment, is rare. The confidential information in this case was the utility’s smart grid and metering R&D knowledge base: intellectual property that attackers could also use to compromise the smart grid.
The utility was short on engineers, so it hired third-party resources from East Asia to augment its work force. These supplementary employees were segregated in an off-premises enclave that had a virtual private network (VPN) connection into the operational network where the main R&D was conducted.
One day, the utility noticed that computers in the OT environment were sending unusual traffic to the network at seemingly random intervals. This activity continued for months, and based on its initial analysis, the utility concluded it was in the midst of a targeted attack by an advanced persistent threat (APT). Suspicion grew that the third-party workers were involved, but no hard evidence supported this belief.
The utility employed a range of security devices to gather forensic information over several months. Eventually, and consistent with suspicions, the traffic was traced back to the third-party enclave, where a 4G cellular “puck” was found that the spies had used to transmit sensitive information back to East Asia. The actors were caught, but only after confidential data was lost and considerable time and resources were spent on forensics. Analysis showed that the attackers were exploiting open ports and using a legacy high-speed token ring protocol, encapsulated in Internet Protocol (IP), as a means of stealthy communication. We don’t know all the details and can’t draw firm conclusions, but let’s look at some best practices and technologies that could have prevented, or at least mitigated, this event:
- A zero-trust model must always be in force. Organizations that operate critical infrastructure often focus on defending against external, Internet-borne threats. Insiders behind the information technology (IT)-OT perimeter, whether direct employees or third-party workers, may be assumed to be “trusted” and subjected to less auditing than traffic coming from, say, the business network. But many threats originate from within, so organizations must adopt a zero-trust approach: micro-segmentation of the OT network combined with the mindset that no network can be assumed safe. Over-segmentation is possible, for example, creating a security zone for each engineering workstation in a development environment staffed entirely by internal employees, but in practice OT networks tend to err toward being too flat rather than having too many points of segregation and inspection. VPNs provide authentication and encryption for allowed insiders and wall off the outside world; however, a VPN does not guarantee the security of the traffic carried within the private tunnel. As part of a zero-trust approach, organizations also need points of inspection and access control for the traffic that traverses a VPN connection.
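To make the default-deny idea concrete, here is a minimal sketch of a zone-based segmentation policy in Python. All zone names, application names and rules are hypothetical illustrations, not the utility’s actual configuration; the point is that a flow is denied unless an explicit rule permits it, even when it arrives over a trusted VPN tunnel.

```python
# Minimal zero-trust segmentation sketch: deny by default, allow only
# flows that match an explicit (source zone, destination zone, app) rule.
# Zone and application names below are invented for illustration.

from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    src_zone: str
    dst_zone: str
    app: str  # a positively identified application, not just a port


# The explicit allow list. Everything not listed here is denied,
# including traffic arriving through a "trusted" VPN connection.
ALLOW_RULES = {
    Rule("contractor-enclave", "rd-network", "ssh"),
    Rule("rd-network", "historian", "modbus"),
}


def is_allowed(src_zone: str, dst_zone: str, app: str) -> bool:
    """Zero trust: permit a flow only if a rule explicitly allows it."""
    return Rule(src_zone, dst_zone, app) in ALLOW_RULES
```

With this model, the attackers’ encapsulated token ring traffic would fail to match any allow rule and be dropped, regardless of which ports were open.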
- Application-layer visibility is no longer just a nice-to-have. Stateful-inspection firewalls that provide visibility and access control only at the port level remain common in industrial automation environments; however, as this and other high-profile APT attacks such as Havex and Stuxnet have shown, attackers exploit applications and even industrial control protocols at every stage of an attack: gaining the initial foothold in an organization, establishing command-and-control infrastructure and communications, propagating payloads and exfiltrating data. Application visibility, a core value proposition of our enterprise security platform, is crucial for definitively identifying network traffic. Any traffic that is not positively identified will show up as unknown TCP or unknown UDP and must be analyzed further to assess its validity.
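The following toy sketch illustrates the classification idea: identify applications by payload signatures rather than ports, and fall back to an “unknown-tcp”/“unknown-udp” label for anything that does not match. The signatures here are simplified illustrations, not how any production App-ID engine actually works.

```python
# Toy application-identification sketch: classify a flow by payload
# signature instead of port number. Unmatched traffic is labeled
# "unknown-tcp" or "unknown-udp" and flagged for further analysis.
# Signatures below are simplified examples.

SIGNATURES = {
    b"SSH-2.0": "ssh",          # SSH banner prefix
    b"GET ": "web-browsing",    # start of a plain HTTP request
}


def classify(transport: str, payload: bytes) -> str:
    """Return an application name, or unknown-<transport> if unmatched."""
    for prefix, app in SIGNATURES.items():
        if payload.startswith(prefix):
            return app
    return f"unknown-{transport}"  # candidate for manual investigation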
- Reduce the attack surface by controlling applications. One point repeated often during the DistribuTECH cybersecurity track is that compliance with the North American Electric Reliability Corp. (NERC) critical infrastructure protection (CIP) standards is a good baseline posture but doesn’t guarantee security. NERC CIP mandates the creation of electronic security perimeters (ESPs) where inbound and outbound access permissions are enforced and all other access is denied by default. This enforcement, however, happens only at the port and service level; as was the case in this attack, today’s advanced attacker likely will know which ports are open and stealthily exploit that open vector. The attack footprint shrinks dramatically when traffic is constrained at the application level rather than just the port level. The custom application used by the attacker likely would have shown up as unknown TCP or UDP and stood out as an anomaly against the positively identified applications. As mentioned, such suspicious unknown traffic should be investigated further and blocked if deemed malicious. Sometimes, however, unknown traffic turns out to be valid, as we saw in one of Palo Alto Networks’ free application visibility and risk report (AVR) assessments for a South American utility that was encapsulating a serial IEC protocol in TCP/IP. A useful feature of our next-generation firewall is the ability to create custom application signatures, which can be used to conclusively identify custom application traffic and then apply policy to positively allow it. In that case, the utility created a custom App-ID to ensure it could identify this valid application rather than being left to guess. In addition to whitelisting protocols and applications, a further layer of segmentation based on user or user group can be applied to reduce the attack footprint even more.
This role-based access control is a critical concept that can be implemented on Palo Alto Networks’ next-generation firewalls using User-ID technology.
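The layering of application whitelisting with user-group segmentation can be sketched as follows. This is a generic illustration in the spirit of combining App-ID-style application control with User-ID-style group mapping; the users, groups, applications and policy are all invented for the example.

```python
# Sketch of application whitelisting layered with user-group control.
# A flow is permitted only when BOTH the application is whitelisted AND
# the requesting user's group is authorized for it. All names invented.

# Directory-style mapping of users to groups (User-ID-like lookup).
USER_GROUPS = {
    "alice": "rd-engineers",
    "bob": "contractors",
}

# Each whitelisted application lists the groups allowed to use it.
APP_POLICY = {
    "smart-meter-dev": {"rd-engineers"},
    "source-control": {"rd-engineers", "contractors"},
}


def allowed(user: str, app: str) -> bool:
    """Deny unless the user's group is explicitly authorized for the app."""
    group = USER_GROUPS.get(user)  # unknown users map to None -> denied
    return group in APP_POLICY.get(app, set())
```

Under such a policy, a contractor account could reach shared tooling but not the smart grid R&D application, shrinking what an insider in the third-party enclave could touch.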
- Implement content inspection/blocking for sensitive data and file types. Another capability that might have helped stop this attack is content inspection technology, or Content-ID. The classification engine looks at the applications, the users and the payload of the traffic; all three parameters are inspected in parallel to ensure high performance, low latency and shared context. Content-ID enables users to implement policies that reduce the risks associated with the transfer of unauthorized files and data. The latest version of the NERC CIP standards includes CIP-011-1 Information Protection, whose purpose is “to prevent unauthorized access to BES Cyber System Information by specifying information protection requirements in support of protecting BES Cyber Systems against compromise that could lead to misoperation or instability in the BES.” The ability to inspect and control content at this level of detail helps address the requirements within CIP-011-1.
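A simplified sketch of the content-inspection idea: examine the payload itself, blocking disallowed file types (recognized by magic bytes) and payloads that carry a sensitive-data marker. The file signatures shown are real, well-known magic numbers; the sensitive markers and verdict strings are invented for the example.

```python
# Illustrative content inspection: block outbound payloads by file type
# (magic bytes) or by the presence of a sensitive-data marker.
# The ZIP and PDF magic numbers are standard; markers are invented.

BLOCKED_MAGIC = {
    b"PK\x03\x04": "zip archive",   # ZIP local-file-header signature
    b"%PDF": "pdf document",        # PDF header
}

SENSITIVE_MARKERS = [b"CONFIDENTIAL", b"SMART-GRID-RD"]


def inspect(payload: bytes) -> str:
    """Return a verdict string: 'block: <reason>' or 'allow'."""
    for magic, file_type in BLOCKED_MAGIC.items():
        if payload.startswith(magic):
            return f"block: {file_type}"
    if any(marker in payload for marker in SENSITIVE_MARKERS):
        return "block: sensitive data marker"
    return "allow"
```

Even if the attackers’ custom channel had been positively identified and allowed, a payload-level check like this gives one more chance to stop tagged R&D documents from leaving the network.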
One thing that wasn’t covered in the DistribuTECH session (or that I might have missed) is the role of malware and exploits in this breach. If exploits and malware were part of the attack, cloud-based threat intelligence can stop known and unknown threats that traverse the network, and Advanced Endpoint Prevention technology stops threats at endpoints such as HMIs, automation servers and engineering workstations. Access control reduces the attack vectors, but utilities also need threat prevention to stop malicious traffic that might have entered the control systems over otherwise valid channels or directly at endpoints via removable media.
Del Rodillas is senior security manager, SCADA and industrial control systems, at Palo Alto Networks.