Jan 10, 2012

I’ve been thinking about virtual systems, and about probing and prodding them. Virtualizing is sort of like sticking something in amber, except that instead of a dead or frozen system you get a place where you can run anything you want for as long as you want. It’s alive; and it can be exactly like a target you want to hit, analyze, tear apart.

To me, things get interesting in the virtual flytrap world because the baseline assumptions one makes are considerably different from the ones you’d make in the real world. For instance, there are a few items that are inherently large parts of any scanner/audit/probe tool (Metasploit and some other attack tools are a bit different, but I think it still holds true). A great deal of time, effort, and complexity is spent on such things as:

  • Targeting. Simply getting the machinery to do the right thing to the target you want.
  • Identity. Is that the same thing you scanned last time? Also – don’t scan the same thing twice in one run, etcetera (see the sketch just after this list).
  • Time. You’ve only so many hours in a day to scan something. This is particularly true en masse.
  • Intensity. If you hit something too hard and it breaks, that’s usually bad.
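
As a tiny illustration of the identity problem, here’s a minimal Python sketch that collapses a target list so the same host isn’t hit twice in one run. Resolving names to addresses is only a crude stand-in for identity; a real scanner needs far more than a DNS lookup to decide that two targets are the same machine.

    import socket

    def dedupe_targets(names):
        """Scan each host only once per run, even when it hides
        behind several names. DNS resolution is a crude identity
        test -- just enough to show the idea."""
        seen = set()
        unique = []
        for name in names:
            try:
                addr = socket.gethostbyname(name)
            except socket.gaierror:
                addr = name  # unresolvable: use the name itself as identity
            if addr not in seen:
                seen.add(addr)
                unique.append(name)
        return unique

    print(dedupe_targets(["www.example.com", "example.com", "10.0.0.5"]))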

There are others, I’m sure, but I can assure you that if those issues went away, attacking would be a lot easier! However… why do we scan things? Well, generally to see if there are weaknesses that can be compromised. The usual logic goes that no matter how good attackers are, they only have a certain amount of time to break something, because eventually things will change. And for a while I was thinking the same thing…

But actually… time isn’t really time if you can toss things into a virtual jail. As a matter of fact, things get kind of strange in there… it makes me think of black holes and time. What if an attacker had until the end of time to break in? You’d think they’d eventually get in.

Because if you can virtualize something, you have almost an eternity to test it, and you can throw attacks that simply aren’t feasible in the real world because of the resources it takes to break things. Part of that is the magic of parallelization, but also – well, that system isn’t going anywhere. You can test things for a year or more, on many copies… no one outside will ever know.
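
To make the parallelization half of that concrete, here’s a hedged sketch of fanning one frozen target image out to a pile of clones and attacking them all at once. The clone_vm() and run_attack() helpers are hypothetical stand-ins for whatever your hypervisor actually provides (libvirt, VBoxManage, and so on); only the shape of the loop matters.

    from itertools import cycle, islice
    from multiprocessing import Pool

    def clone_vm(base_image, i):
        """Hypothetical: clone the frozen target image, return a handle."""
        return f"{base_image}-clone-{i}"

    def run_attack(job):
        """Hypothetical: run one attack scenario against one clone."""
        vm, scenario = job
        return (vm, scenario, "survived")  # a real harness returns findings

    def attack_in_parallel(base_image, scenarios, n_clones=100):
        vms = [clone_vm(base_image, i) for i in range(n_clones)]
        jobs = list(zip(vms, islice(cycle(scenarios), n_clones)))
        with Pool() as pool:               # one worker per CPU core
            return pool.map(run_attack, jobs)

    if __name__ == "__main__":
        for result in attack_in_parallel("target.img", ["brute-ssh", "fuzz-httpd"])[:3]:
            print(result)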

Now you might say: well, we don’t clone people (yet!). Sure, but you will be able to create something like expert systems, or codify the approach that a skilled attacker would take. It doesn’t matter if it’s inefficient, because you have all the time in the world.

This is very different from the past, because while you could duplicate methods, you couldn’t duplicate the live target (well, obviously you could have in theory; we’re all just Turing machines) – and who cares if you exploit a system long after it’s changed in the meantime?

Attack tools have the notion of time moving forward (not a bad assumption, admittedly) baked into them now. This will change.

Forensics and system analytics will change as well, not to mention performance analysis and metrics. The ability to endlessly play with systems under exactly the same conditions is something that’s really striking.

I suspect there’s a whole class of probes, analytics, attacks, etc. that hasn’t really been considered because they simply take too long – but who cares if you run tests for some months on a system to try to break something… or the equivalent of 100 or 10,000 years of testing, split up over a bunch of machines that are all identical? What could you do to a machine if you had a thousand years to break it, or break in?
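
The arithmetic behind that is almost embarrassingly simple; here’s a back-of-envelope sketch (the numbers are made up):

    # How much attack time does a clone farm buy you?
    clones = 10_000              # identical copies of the frozen target
    wall_clock_years = 1         # how long you actually wait
    machine_years = clones * wall_clock_years
    print(f"{machine_years:,} machine-years of attacking per calendar year")
    # -> 10,000 machine-years of attacking per calendar year

    # Or flip it around: a thousand years of attacking on a mere 100 clones
    print(f"{1_000 / 100:.0f} calendar years to bank 1,000 machine-years on 100 clones")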

You can not only brute-force something; you can run any attacks you want for an essentially infinite amount of time. You can also have people attacking in parallel, so that an entire community or squadron is hitting something all at the same time. You can run essentially infinite attack scenarios. What could you do in a thousand years of attacking something? The flow of time changing is what I’m talking about.

Dodge This!

Fuzzing certainly comes to mind as a simple if dull example that people are working on (no disrespect to the fuzzy folks ;)). Sane folks generally never run fuzzing tools because they take so long to run (not to mention they probably rarely have truly actionable output, but I wouldn’t know, having never run one). But perhaps if you run one long enough you might actually come up with useful data.
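
For flavor, here’s what the dull version looks like: a minimal mutation fuzzer that flips random bytes in a seed input and feeds the result to a target until something dies. The target path is a placeholder, and real fuzzers are vastly smarter than this; the point is that on a frozen VM you can let this loop run for months.

    import random
    import subprocess

    def mutate(data, flips=8):
        """Flip a few random bytes in the seed input."""
        buf = bytearray(data)
        for _ in range(flips):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def fuzz(target_binary, seed, iterations=1_000_000):
        """Hammer target_binary with mutated inputs; keep what kills it."""
        crashes = []
        for i in range(iterations):
            case = mutate(seed)
            try:
                proc = subprocess.run([target_binary], input=case,
                                      capture_output=True, timeout=5)
            except subprocess.TimeoutExpired:
                crashes.append((i, case))    # hangs are interesting too
                continue
            if proc.returncode < 0:          # killed by a signal: a crash
                crashes.append((i, case))
        return crashes

    # e.g.: crashes = fuzz("/path/to/target", open("seed.bin", "rb").read())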

Some of the other attributes are also interesting… like intensity – how hard do you hit? The usual thing people do when something goes down is stop testing it ;) Or, as the old joke goes: “Doctor, I broke my leg in two places…” “Well then, stop going to those two places!” Of course you can hit virtual punching bags harder than real systems; you can just reboot them. So rather than stopping, mark what breaks things and keep going, cataloguing all the break points. That’s not often done in the real world, whatever that is.
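
In code, the keep-punching idea is just a loop with a revert in it. Everything hypervisor-flavored below (the FakeVM stub and its snapshot/revert methods) is made up, standing in for a real snapshot API; the point is that a knockout appends to a catalogue instead of ending the run.

    import random

    class FakeVM:
        """Stub standing in for a real hypervisor handle; just
        enough machinery to show the shape of the loop."""
        _up = True
        def snapshot(self, name): return name
        def revert(self, snap): self._up = True
        def hit(self, case): self._up = random.random() > 0.1  # ~10% knockouts
        def is_alive(self): return self._up

    def catalogue_break_points(vm, test_cases):
        break_points = []
        snap = vm.snapshot("baseline")       # freeze the known-good state
        for case in test_cases:
            vm.hit(case)                     # throw the punch
            if not vm.is_alive():
                break_points.append(case)    # note what broke it...
                vm.revert(snap)              # ...then reboot the punching bag
        return break_points                  # every break point, not just the first

    print(len(catalogue_break_points(FakeVM(), range(1000))))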

As an added bonus, you can run all those tools that only work on Windows that you’ve always wanted to run (or vice versa, the Linux ones, or whatever) – sure, some tools only run on Windows, some on Linux, some on Macs, etc. Rather than having a single platform to attack from, have an armada. It’s only memory, CPU time, and your imagination…
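
A sketch of the armada idea, with illustrative tool names and a stub Clone class standing in for the real VM plumbing: boot one attacker per platform and let each run the tools that only work there.

    class Clone:
        """Stub attacker VM; a real one would be booted from an image."""
        def __init__(self, platform): self.platform = platform
        def run(self, tool, target): print(f"[{self.platform}] {tool} -> {target}")

    ARMADA = {                               # illustrative tool lists
        "windows": ["win-only-scanner"],
        "linux":   ["nmap", "hydra"],
        "macos":   ["mac-only-tool"],
    }

    def attack_from_everywhere(target):
        for platform, tools in ARMADA.items():
            attacker = Clone(platform)       # in reality: boot a VM of that OS
            for tool in tools:
                attacker.run(tool, target)

    attack_from_everywhere("10.0.0.5")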

I suspect that the next rev of things is going to get more and more interesting. We shall see. Of course there’s probably already a distro that does all this ;)  But don’t let people get to your backups or get an image of your system ;)

Happy happy new year.
