A mini rant disguised as a cybersecurity taxonomy
Hacktivist / defacer / skidiot
These operators don’t usually look for a long dwell time, instead seeking media attention (or peer recognition). Skills vary widely, from the sophisticated Phineas Phisher (who discovered an 0day vulnerability in a router, built an exploit that they honed against third-party systems, and custom-assembled a post-exploitation toolkit to tunnel through the victim’s router) to the…enthusiastic LulzSec crew, who terrified corporations with such amazing feats as defacing a Sony subsidiary website using SQL injection.
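The details of the Sony defacement beyond “SQL injection” aren’t public, but the bug class itself is easy to sketch. A minimal, self-contained illustration (the table and input are invented): concatenating user input into a query lets the attacker rewrite the query’s logic, while a parameterised query treats the same input as inert data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (slug TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('home', 'Welcome')")

# Vulnerable: user input concatenated straight into the SQL string.
user_input = "x' OR '1'='1"
vulnerable = conn.execute(
    "SELECT body FROM pages WHERE slug = '" + user_input + "'"
).fetchall()

# Safe: a parameterised query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT body FROM pages WHERE slug = ?", (user_input,)
).fetchall()

print(vulnerable)  # the injected OR '1'='1' clause matches every row
print(safe)        # no row has this literal slug
```

The injected clause turns the WHERE condition into a tautology, which is all a defacement-grade SQLi needs.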
The defining characteristic of these operators is their Intent — their plans for the final operational phase, Exploitation. They attempt to quickly promote the success of their cyber operation and use that attention to highlight their agenda. Consequently, hacktivists, while they may be sophisticated and persistent, are primarily an image problem. They do not pose an existential threat to their victims (although I’m sure Phineas Phisher wishes they could hack their targets out of existence).
Penetration Testers / Red Teams
These are not operators seeking long dwell time; their Intent is benevolent. Although their sophistication and capabilities tend to skew upwards, they seldom present a realistic attack scenario (or, phrased differently: their cyber kill chain is unlikely to bear much resemblance to that of a real threat actor).
The pen tester’s Intent is to gain access, or enumerate vulnerabilities, within the boundaries of the scope (defined by the target) and write a report — fast. This is seldom the intent of a malicious threat actor. Consequently, the actions of penetration testers are not generally reflective of the actions of malicious actors.
Penetration testers typically don’t aim for high levels of covertness, though they’re seldom detected by the Silver Bullet Boxes or the SIEM, and they consistently refuse to use tools that should be detected by the “Tres Porsche” Threat Intel IOC stream. The pen testers also don’t use 0days against Office, phishing against the CFO, or autonomous cyber agents tunnelling in over a trusted partner’s network (“out of scope!”). Finally, those guys are fast. They have two weeks to work, and they hardly need it before they have DA.
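Why is refusing IOC-flagged tooling such a low bar? Most commercial IOC feeds boil down to matching known-bad artifacts, often just file hashes. A toy sketch (the blocklist entry is invented) shows how brittle that mechanism is: change a single byte and the hash no longer matches.

```python
import hashlib

# Toy hash-based IOC match, the mechanism behind many "Threat Intel
# IOC stream" products: flag any file whose SHA-256 appears on a
# blocklist of known-bad tool hashes. The blocklist here is made up.
ioc_blocklist = {hashlib.sha256(b"well-known hacking tool v1.0").hexdigest()}

def flagged(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in ioc_blocklist

print(flagged(b"well-known hacking tool v1.0"))   # True: the exact known sample
print(flagged(b"well-known hacking tool v1.0 "))  # False: one byte appended
```

Any actor who recompiles or repacks their tooling walks straight past this, which is why “no old IOCs” is the one respect in which pen tests do resemble real attacks.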
The final operational phase of a pen tester involves emailing a report that strongly recommends segmenting networks and hardening Domain Admin systems. This report can be given pride of place with all the previous reports saying the same thing… none of which bother to mention where to get the budget, headcount, time, or sign-off from corporate to drastically alter the “ain’t broken” critical business infrastructure.
The plus side is that the penetration test audit requirement is passed for another year, and maybe someday there’ll be the opportunity, resources, and mandate to fix the legacy intranet.
So, pen tests are not much like real attacks (except perhaps for that “no old IOCs” thing)… Clients who do not scope a project so that pen tester intent mimics malicious actor intent are not getting value from their services.
Cyber Criminals
The defining characteristic of the cyber criminal is how they intend to exploit operational success — by making a profit from all that hacking. There is no shortage of methods to convert illicit access to a company’s network into cash:
- Stealing and selling data.
- Ransoming the victim’s own data and systems back to them.
- Impersonating an angry vendor and demanding invoices be settled, or simply sending payment directives straight to Accounts Payable.
- Stealing authentication credentials from systems and plundering the bank accounts.
- Worst case, selling the access to another criminal to monetise.
It may be a slow process of leeching and selling CCs, or a near-instantaneous ransomware demand. The time it takes to monetise a breach, and the method of exploiting post-exploitation access, aren’t predictable, but the intent is: convert a compromise into cash.
A cyber criminal may need significant dwell time for their preferred money extraction scheme, or they may not. They absolutely need covertness until (and possibly during) their exploitation of the operation. This means that for an arbitrary period of time cyber criminals are exposed, sitting in a window of vulnerability. It is worth noting that there is another vulnerability once the operation has reached the execution phase — law enforcement. That vulnerability extends after the operational cycle is complete (for a pure CFAA violation, the statute of limitations is 5 years).
Criminals are frequently opportunistic and easily dissuaded when their existing capabilities prove insufficient. Unfortunately, they are motivated, strangely persistent, and their capabilities are usually sufficient. While their capabilities might not be particularly impressive, they will try every single one against every target. And when they get a new capability, they’ll go back and try it against everyone again.
They exploit trust relationships and poor compartmentation [Ashley Madison password dump reuse for altcoin theft]. They make effective use of manual attacks executed by an unwitting insider (such as the infamous “Enable Content” button in Microsoft Word, or running “Important.xlsx.exe”). The upside is that they can often be defeated in a capabilities arms race by simply hardening your security posture (e.g. apply patches, use safer tools, limit attack surface, reduce the amount of data available for theft, etc.).
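The “Important.xlsx.exe” trick relies on a document-looking extension hiding an executable one. A hypothetical mail-gateway check for that pattern might look like the sketch below; the extension lists are illustrative, not exhaustive, and a real gateway would inspect content types, not just names.

```python
# Hypothetical check: flag attachments that disguise an executable
# behind a document-looking double extension, e.g. "Important.xlsx.exe".
EXECUTABLE = {".exe", ".scr", ".js", ".vbs", ".bat", ".com"}
DECOY = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}

def is_double_extension(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # fewer than two extensions present
    decoy, real = "." + parts[1], "." + parts[2]
    return decoy in DECOY and real in EXECUTABLE

print(is_double_extension("Important.xlsx.exe"))  # True
print(is_double_extension("report.xlsx"))         # False
```

It is exactly this class of cheap, boring filter — not a next-gen box — that removes whole rungs from the criminal’s ladder.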
The Services
The primary characteristic that sets the Services teams apart is that they are state sanctioned and therefore, essentially, legal. Across the globe there are a number of threat actors in the Services with a wide disparity of skills, capabilities, objectives, tools, etc. There is chaotic Iran, standardized NSA, multiheaded hydra Russia, etc. There is a tendency to want to rank the Services, but this is not especially fruitful. More interesting is the culture of the Services teams: their nature, their agility, the problems the team is expected to address, whether they have internal capacity or rely on third parties, and so on. More relevant still is trying to understand what they exist to accomplish, how capable they are of doing that, how agile they are in terms of changing their MO, and if (and how well) they can accomplish other goals.
In most cases the Services need long dwell time, and they are extremely persistent about a target. Given their legal protection, and the necessity and self-righteous (“patriotic”) purpose driving their operations, they can be extremely flexible in deciding what is a legal target. Some take covertness very seriously as part of the operation, although they might apply that to different phases: some Services are super secretive after a breach, but not before; others are focussed on stealing the plans to “build a better washing machine” and don’t know what washing machine they’ll need to build in 10 years, so covertness is not worth the expense.
The ability to exploit a successful operation, primarily political or military, is often contingent upon covertness. An example: an NSA hack puts an implant into the FSB’s main network and allows monitoring of their internal comms — mission failure occurs once the implant is detected, as the FSB will then feed it false info or remove it.
Hybrid Teams
These teams are semi-formal, something like deputised civilians in a posse. They can be very unpredictable: highly skilled and extremely cautious, or simply repurposed penetration testers who lack the experience and therefore cannot operate covertly. Sometimes they’re freelancers, criminals who simply sell intelligence data they find by serendipity (“it fell off the back of a botnet…”).
There are problems with these sorts of privateers; for example, they don’t know the tasking requirements of the Services and thus may not know they have something of value. Frequently, documents that will become classified start out as documents exchanged and edited between non-government colleagues. Once the document, or the section of the document being worked on, has been polished, it is handed to a government agency, where it is classified.
This creates a window of opportunity where a compromised non-classified, non-government machine may expose data that will become valuable in the future. However, the privateers aren’t looking for these documents, and they wouldn’t know how. The Services can’t give them useful keywords to search for either — grepping for “Classified” won’t yield anything useful. Unless they know what to look for, proto-documents like this are destined to remain “unknown unknowns.”
This is an inherent problem with hybrid public/private teams — information sharing. While the private component will probably have superior skills and breadth and depth of operational experience, their lack of big picture understanding will prevent them from surfacing ideas, making connections, or otherwise providing insight to help advance the operation. Generally, involving the people actually doing the work in suggesting improvements is a good way to improve. Similarly, letting the people with wide access to botnet victims know what sort of data will get them paid will produce a greater volume of potentially interesting data.
The penetration testers of today very seldom operate or behave like typical cyber criminals, or even APT teams. They behave like pen testers. Not that there’s anything wrong with that; some of my best friends are pen testers. But companies that are looking for actual value, not just a checkbox to satisfy an auditor, would do well to talk with their cyber vendor to make sure that the scope and TTPs most accurately reflect the threats that they really face. Good pen test teams will rise to the challenge and supply the sort of assessment that actually generates value.
Criminals are dumb, persistent, and lucky
Cyber criminals tend to be scarier than they deserve because of the poor security posture of the victims they target. Take the recent Equifax hack, which used a public proof-of-concept exploit and then, because the attackers had difficulty operating any better, involved 30 web shells to exfiltrate data. These kids were able to succeed due to the poor cyber security posture of their victim, not because they were amazing hackers. Any company that buys mobile device management systems and advanced next-gen paradigm-shifting security solutions, yet doesn’t notice a) 30 web shells installed on their infrastructure for almost 5 months, and b) data for 143 million people going the wrong way out their pipe, is clearly lacking the basic fundamentals of cyber security. Cyber criminals get lucky, but they also try everything they’ve got against everyone. It’s easy to get lucky when you’re casting such a wide net. Importantly, it is easy to evade them as well — apply patches, least privilege architecture, applied stupidity monitoring, and eternal vigilance. Or at least, periodic vigilance.
The important things are always simple. The simple things are always hard. The easy way is always mined.
— Murphy’s Laws of Information Security
(Hint: buying the top right quadrant of the Gartner report every year is the easy way.)
The Services and Hybrid Teams
Although hybrid teams are fluid, hard to predict and analyse, and appear scarier than regular state-sponsored groups, they have problems too. There are inherent problems in how they are forced to function — a sort of Conway’s Law for cyber conflict. They aren’t following the best practices of intelligence collection and analysis advocated by R.V. Jones, and as a result they aren’t optimised for collection. They are optimised for scalable flexibility in capacity — just-in-time hacking. This is great for bringing in the best resources for the job, but it requires that the job have strict parameters and requirements up front, because the operators won’t know enough of the big picture to improvise. Hybrid teams face difficulty maximising their creative potential. They allow for on-demand highly technical skills, access, and capabilities, but they aren’t capable of delivering creative solutions to problems that they aren’t even aware exist.
There’s an awful lot of hackers out there.