I’ve a larger writing project afoot… putting this down to see how it looks in a different format.
Your most important systems are your least secure
Here’s an easy one. Do you think that your most critical computers – the ones that are most important to your company’s health, wealth, and well-being – are among your most secure? If you answered yes, you’re not alone in thinking so, but you’re probably wrong.
The roaring nineties saw the migration of wealth to the Internet – a land of seemingly endless opportunities – and spawned such capitalistic giants as Amazon, eBay, and Google. Venture capitalists poured hundreds of billions of dollars into innovative (and, in some cases, just plain crazy) ideas in an effort to profit from and tame this strange new landscape. Serious commercial and corporate computer security was still in its infancy, and the security product marketplace was just getting off the ground.
Despite the money flowing into the ether, very little was known about how good or bad security was in the wild. So in 1996 I conducted a sort of security health examination of a wide variety of high-profile systems in order to assess their security. Running a modified version of SATAN (a network security scanner I had co-authored with Wietse Venema), I tried to examine the most important and interesting sites on the Internet at the time, including online banks, newspapers, credit unions, government institutions, and one of the earliest profitable net ventures: porn.
It turned out that of the approximately 2,000 Internet-facing servers I examined, nearly two-thirds had significant security problems. About a third could have been broken into with almost no effort at all. All in all, about three-quarters of the surveyed sites could have been compromised if significant force and effort had been applied[1].
While bad in itself, what seemed even more curious was that if you compared these important sites to a set of random sites I had also scanned as a scientific control, the random systems had only about half as many problems as their critical cousins! It might be worth noting that the porn sites were better defended than the banks – perhaps at the time that was where the real online money was – but c’est la vie.
While initially counter-intuitive, the results held true for a variety of reasons. These busy and important computers simply did more, juggling web services, email, name service and other important data – keeping track of all of these metaphorical balls was a difficult task. Businesses and other organizations were coming to the net expecting turnkey solutions to their company needs that simply weren’t available at the time, and it showed. Combine this with a general ignorance about what matters in security and an emphasis on functionality and performance, and you have a situation that looks – well, pretty similar to what we have now.
In general the security of a computer degrades in proportion to how heavily the system is used. Run a vulnerability scan against your average big server and a laptop, and the scanner will uncover fewer security issues on the laptop than on the server. The laptop may seem to have paper-thin walls, but it is generally used to perform simple tasks and functions. Keeping all the different and complicated types of servers secure – with their cornucopia of operating systems, applications, configurations, owners, processes, and procedures – is an extremely difficult task. Laptops, on the other hand, are mostly used as provisioned by IT, are relatively identical, and can generally be kept in a much more consistent and secure state with central management.
Big iron, big problems
Fast-forwarding to today’s world and this book’s emphasis on internal networks, what does this mean? Internally – that is, behind your company’s firewall or external network defenses – it’s even worse. The odds are good that your SAP ERP, PeopleSoft, Oracle and other large applications are among the worst-secured systems in your company. But how could this be? After all, having spent tens of millions of dollars on the damn things[2], couldn’t they have put a decent lock on the door?
To start with, these large environments are complex and use non-standard applications that leverage highly customized (remember that professional services bill?) programs uniquely tailored to your environment. Because they were often conceived and created by third parties, your internal security and design experts had little or no input into the design or rollout of the application. And of course by now the original designers and system acolytes who really knew the system at the start have moved on or moved up, and they no longer have any operational duties or understanding of its current state. Secret backdoors that grant dangerous access could have been put in by the vendor (“so we can get back in and fix things in case of emergency”) or even by one of your own employees, and old accounts from bygone employees and support staff remain eternal because they’re not linked into the same account provisioning system as the rest of your business. This means that when you think you’ve disabled access for departed users, a variety of one-off accounts remain. And coordinating the communications and marching orders of all the teams involved – networking, operating systems, application, etc. – is a real challenge. Most of the maintenance is performed in reaction to operational issues that prevent the application from performing its duty – simply keeping it alive, not secure, is the order du jour.
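Those one-off orphaned accounts are at least one problem that is cheap to hunt for. Here’s a minimal sketch in Python of the sort of cross-check I have in mind – it assumes you can export the application’s local account list and your central directory as CSV files, and the file names and “account” column are hypothetical placeholders, not any vendor’s real format:

import csv

def load_accounts(path, column="account"):
    """Read one account name per row from a CSV export."""
    with open(path, newline="") as f:
        return {row[column].strip().lower()
                for row in csv.DictReader(f) if row.get(column)}

def find_orphans(app_csv, directory_csv):
    """Accounts present in the application but unknown to central provisioning."""
    return sorted(load_accounts(app_csv) - load_accounts(directory_csv))

if __name__ == "__main__":
    # e.g. a nightly job whose output gets mailed to the application owner
    for account in find_orphans("erp_accounts.csv", "central_directory.csv"):
        print("possible orphaned account:", account)

It won’t catch a cleverly hidden backdoor, but it will surface the consultant who left three years ago and still has a login.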
Often such applications are the culmination of several projects, and by the time the dust has settled years have passed before they are fully operational. Even if the designs and architecture were cutting edge when you started – and they weren’t, because you don’t use cutting-edge technologies with critical systems – by the time the project is complete, with the usual budget and time overruns, you have a system that’s nearly obsolete as it goes into production. All the change requests you made to add functionality or change the workflow have damaged the initial clean design and added bloat that is ill understood and introduces uncertainty. And as the years go by, the optimistic original date given to you by the consultants is a distant memory, and they’ve finally gotten it to work – well, by then you try to ignore the duct tape that holds it all together and are happy it runs at all. Not to mention you can’t replace it without spending many millions more, so you’re stuck with these decade-old dinosaurs that no one knows how to fix anymore and that are often past end of life for the vendors that originally sold you the system (if the vendors themselves haven’t been sold, gone out of business, or given up computers and gone into vacuum cleaner sales).
These initial details alone would be enough to sink the security of most systems – old, very complex, ill-understood, non-standard systems that are unsupported by the major vendors are a recipe for disaster. But unfortunately the business and operational problems are often a bit worse.
A little knowledge… doesn’t get you very far
So what do you think happens when the corporate security team finds a problem with one of these crucial applications (perhaps through a scheduled audit, network scan, or some other mechanism) and tells the owners there’s a serious security problem that must be fixed? In a production environment the number one goal for these critical components is to have them work when you need them – i.e. high uptime and availability. So naturally the application owners view anything that might jeopardize this ambition with deep suspicion, if not outright hostility.
Owners resist change mightily, and might feel justified in asking reasonable questions like “can you guarantee that this patch or change to the system won’t harm anything?” Or perhaps the dreaded “what’s the business justification?” Quotes along the lines of “it costs the company $1,000 per second this is down!” are common and somewhat reasonable pushbacks that security professionals encounter.
And because of the great expense in setting up one of these monoliths, there are often very limited or no test labs or QA environments that can be used for checking out proposed changes, making changes even more dangerous because you can’t study their impact in a risk-free environment. Even if the repair is deemed critical (or at least worthwhile), it’s going to take time – sometimes years – to fix issues in anything but extraordinary situations. And simply forget about fixing anything if the date is anywhere near the end of quarter, or the fiscal or calendar year, or – god forbid – if you’ve missed the maintenance window (it’s common for larger companies to have freeze or slowdown windows that consume 25-50% of the calendar year or more).
Even with off-the-shelf solutions vendors will issue dire warnings or void your warranty (and their liability, as tenuous as that ever is). This is especially true of – but not unique to – health-related or mission-critical systems: for instance SIEMENS, one of the largest manufacturers of industrial control systems, has very specific advice:
A running process control plant should never be checked with penetration test tools! The use of penetration test tools is always associated with the risk of permanent damage to the tested system (or the installation or configuration of the system)[3]
One might wonder whether, if it’s true that a scanner can cause permanent damage, the better answer would be to increase the system’s defenses rather than simply not test it. Of course you don’t want to damage something as critical as a Fizbit 970k that monitors the Gonzolar Zed/2 that helps safeguard the plutonium in your reactor. But with the increased exposure of all systems to the Internet it becomes more and more vital to ensure that security holds up in the real world. If a system is too dangerous to test, it might be too dangerous to run.
But even if security can find issues and convince the owners to fix them, the windows of vulnerability – the time during which the problems exist – are substantially longer than with more homogeneous computers, where patches and fixes can be rolled out in a more reasonable length of time. All this might prompt you to ask – why bother to fix the problems if they’re so hard to address in these ancient systems? Because intruders break into systems by exploiting flaws that are commonly known or exploitable with the punch of a button. At some point you have to ask – are all those potential buttons you’ve left on your critical systems worth fixing or not?
Of course getting consensus on what is really important and deserving of protection is not an easy task. But that pales in comparison to tracking down who is responsible for these business assets and who is in charge of ensuring that the various components making up the systems that run the applications or store key assets are in good shape. We can thank Google for making search such a tremendous tool that it’s easy to forget what a hard problem it once was. However, the difference between information on the Internet – where people mostly put it out to be found – and data that lives inside a large corporation – where most of the time business owners want only a small fragment of data to be discovered (e.g. their business unit landing page) – is large. And this doesn’t even begin to touch upon the difficulties that multinational companies have, nor the near-constant M&As that larger companies seem to thrive upon. How well do you know what goes on inside of your new acquisitions? I recall being at a company that couldn’t even install security software on its European branches because of EU privacy concerns.
On one hand it might seem striking that there are such disconnects, but in larger organizations these systems are insular by nature – generally outside the direct involvement of central IT, owned by a business unit that doesn’t have the resources or knowledge to run complex systems, let alone to integrate them into the processes and standards of the rest of the company. All this conspires to ensure that your crown jewels often rest on systems that are run in an ad hoc, insecure fashion, out of sight of security and the rest of the business.
But even assuming the security or audit teams know of the existence of a given application, actually discovering security weaknesses is yet another adventure. It’s pretty much a given that any computer that is not well managed, kept up to date with security patches, and protected with appropriate controls will turn up numerous findings when scanned by vulnerability scanners, configuration auditors, denial of service testers, application probers, and the like. Many of these tools are tried and true arrows in the security quiver; while aggressive, and certainly not perfect, they can provide hard data on how bad things are for a given application. But turning on the heat and running these aggressive products against even relatively stable computers is dangerous – by design, security probes steal or starve resources, impact performance, crash programs, and routinely take entire systems down. Worse still, application scanners can inject bad or random data into your systems and databases as part of their testing, potentially damaging, corrupting, or otherwise compromising the integrity of your data. None of these side effects will make the owner happy, who on a good day doesn’t want to look at their application too closely lest it keel over.
As a result many companies simply disallow the really heavy scanning or security testing of critical assets. While understandable – you don’t want your ERP system capsizing – do you really think that the bad folks will set their phasers on stun? It’s going to be an all-out assault, set to kill. And they will.
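If you can get the owners’ blessing to test at all, one middle ground is to throttle the tooling rather than abandon testing entirely. Here’s a minimal sketch that wraps nmap with deliberately gentle settings – the target name, port list, and rate limits are placeholder assumptions, and even “safe” checks should be tried against a test instance and agreed upon with the owner first:

import subprocess

def gentle_scan(target, ports="22,80,443"):
    """Run a deliberately throttled, non-intrusive nmap scan and return its output."""
    cmd = [
        "nmap",
        "-sV",               # service/version detection only, no exploit attempts
        "--script", "safe",  # restrict NSE scripts to the non-intrusive "safe" category
        "-T2",               # polite timing template to go easy on the target
        "--max-rate", "10",  # cap the probe rate (packets per second) as an extra brake
        "-p", ports,
        target,
    ]
    # capture the output so the findings can be logged and shared with the owner
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(gentle_scan("test-erp.example.internal"))

It won’t find everything an all-out assault would, but it gives you hard data to negotiate with instead of a shrug.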
In the end it’s a simple case of dueling metrics – do you keep the systems purring and happy, or introduce business risk by finding and fixing the problems? The rallying cry of “it’s never been a problem before!” rings through the halls of business as a talisman against change. And as a result there is an inverse correlation between the importance of a system and its security.
Give up hope?
I’m sure most of us would just love the security problem to go away. But it isn’t going anywhere, and it’s growing in import as our most crucial assets, services, and information become entrenched in the virtual world. And to be fair, some crucial applications are pretty well secured by nearly any measure. As for the rest, all is not lost. With a bit (sometimes a large bit!) of effort even the worst offenders can become productive and secure members of society. It’s easy to utter high-level statements such as “IT and security should serve the business’s needs, leverage a good governance framework, and direct resources and effort at high-value business assets that are at risk.” But how IT and security work isn’t something that can change overnight – or, if it could, you wouldn’t like the results. It’s more like steering an oil supertanker: you have to plan a fair bit in advance and wait for the results.
With so many choices to be made, paralysis is always an option. The common consensus is to use one or more of the popular large frameworks to guide your operations. ITIL, COBIT, ISO-27001, and others aren’t a panacea, but they are a nice structure to hang your proverbial hat on. Without the proper processes and business practices in place to respond to changes in the risk and threat landscape, though, you’ll quickly end up exactly where you started. The goal should be to not let the business owners mandate their own security – the holy trinity of business, process, and technology has to be congruent with the strategic aims of the rest of the organization. Treating these mission-critical assets as nothing unique or strange, and ensuring that they fit into your normal processes and best practices, is the key. Because of how they’re put together they naturally won’t be identical to other servers and hosts within your organization, but they absolutely must be well understood, documented, and integrated within your business and process framework.
Of course in extreme cases the owners might claim that any changes at all will sink the application in question. If it really is this brittle you have little choice but to start thinking about how to replace either the owners or the application itself; having a critical resource that could be destroyed by change is a serious risk to your business and should be addressed before disaster hits.
If at all possible your best architects and security professionals should be engaged as part of your normal design process – let them work alongside the business and third-party teams to create a set of resilient, redundant, and mature processes and technologies that dovetail with and amplify the business goals and success of the application. Baking security into the design from the start is not enough; integrating it into the ongoing monitoring and maintenance processes is just as vital.
To be sure, use suppliers and vendors with a long-standing history of stability and reliability, and require that they fully document any work so that this knowledge won’t be lost as time goes on (but try to engage in dialogue and talk some sense into suppliers like SIEMENS who don’t want you to even look at their systems!) If at all possible segregate mission-critical applications either virtually or physically from the rest of your environment; this will prevent security incidents in lesser systems from metastasizing to the big guns – but don’t lose track of them! Always document the purpose of such large systems and catalogue such basics as who the business owner is, how security is handled, what controls are in place, and how to monitor their health and wellbeing.
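That catalogue doesn’t have to be a grand CMDB project; even a small machine-readable register beats tribal memory. A minimal sketch of the kind of entry I mean, with hypothetical fields and example values:

from dataclasses import dataclass, field

@dataclass
class CriticalSystem:
    name: str                 # e.g. "ERP production cluster"
    purpose: str              # what the business loses if it goes down
    business_owner: str       # a person or role, not just a team alias
    technical_contact: str    # who actually administers it today
    segregation: str          # how it is isolated (VLAN, firewall zone, physical)
    controls: list = field(default_factory=list)  # patching, backups, access reviews...
    monitoring: str = ""      # where its health and security alerts land
    last_reviewed: str = ""   # when the entry was last confirmed accurate

registry = [
    CriticalSystem(
        name="ERP production cluster",
        purpose="order entry and invoicing; revenue stops if it is unavailable",
        business_owner="VP Finance",
        technical_contact="erp-ops team",
        segregation="dedicated firewall zone, no direct Internet exposure",
        controls=["quarterly access review", "nightly backups", "vendor patch cycle"],
        monitoring="central SIEM and the uptime dashboard",
        last_reviewed="2012-06-01",
    ),
]

The exact fields matter far less than the habit of keeping them current and reviewing them when people and vendors inevitably move on.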
Above all never let fear or uncertainty drive your actions – integrate an appropriate understanding of business goals into the equation and make decisions based on an ongoing survey of the processes and risks involved. The results might cost you money – but paralysis or indecision can cost much more.
[1] I personally didn’t compromise any systems; I essentially just ran some popular and very well-known security diagnostic tests, so the numbers were fairly conservative – any attackers would presumably be more aggressive. Large-scale non-intrusive surveys by others have produced surprisingly similar numbers no matter what methods were used.
[2] Even moderately sized companies (say, 1,000 people or more) routinely spend millions on large and important projects. The really expensive projects, such as an ERP implementation, can easily run orders of magnitude more. Large enterprise upgrade or retrofitting programs routinely exceed one hundred million dollars in hardware, software, services, and upgrade costs, while history is full of multi-billion dollar IT project failures.
[3] From the SIEMENS security whitepaper “Security concept PCS 7 and WinCC”.