There were two big surprises, for me and (I think) the rest of the team: First, the sheer mind-boggling number of exposed vulnerable services -- upwards of 40, I'm pretty sure -- and second, the inability to directly attack other teams.
Exploits had to be developed as Python scripts defining an "exploit" class implementing a certain interface. These scripts were to be submitted to the organizers, who would test them and, if the tests were successful, run them on a team's behalf against all the other teams. Scripts were to retrieve flags -- doing anything else was frowned upon. Damaging or backdooring servers was discouraged, and even if you could install a backdoor on a competitor's server, the VPN was structured so you couldn't access it, and any script designed to go after it would fizzle on the fresh image used for testing.
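For the curious, here's roughly the shape such a script took -- a minimal sketch from memory rather than the organizers' actual interface, so the class name, method names, flag format, and example payload below are all placeholders:

```python
# Hedged sketch of the kind of deliverable the organizers asked for.
# The class name, method names, flag regex, and payload are my guesses;
# the real interface was dictated by the competition framework.
import re
import socket

FLAG_RE = re.compile(r"FLG[0-9a-f]{16}")  # hypothetical flag format


class Exploit:
    """One self-contained attack against one service on one target."""

    def __init__(self, target_ip, target_port=4444):
        self.target_ip = target_ip
        self.target_port = target_port

    def run(self):
        """Connect, trigger the bug, and return any flags found."""
        with socket.create_connection((self.target_ip, self.target_port),
                                      timeout=5) as s:
            # Example payload: abuse an unauthenticated "admin" command.
            s.sendall(b"admin\nget_flag\n")
            data = s.recv(4096).decode(errors="replace")
        return FLAG_RE.findall(data)


if __name__ == "__main__":
    print(Exploit("10.0.0.2").run())
```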
At first, I thought (to be blunt) that this level of structure and restriction was just idiotic. In retrospect, though, I see that it was necessary. Considering the number of services exposed (many reused from previous years, with the same holes!!), getting pwned was a guarantee, and this system mercifully prevented everyone from having to revert to backups every five minutes. It was in some ways a disappointment, though, since we went into the competition blind but with a fair number of cool general-purpose CTF scripts at the ready, all of which turned out to be useless in this highly structured context.
One tricky angle was that not everyone on our team knew Python well enough to write the deliverable scripts required by this framework. We ended up, near the end of the competition, with several attacks we could run by hand but which nobody with the proper skills had time to write up. If we'd had more people and a more clearly defined group structure, we could have set up some sort of "assembly line": people hunting for bugs and putting them into a "queue" on a whiteboard, which could then be referenced both when patching our own server and when writing up attacks to launch against everyone else's. That alone would likely have meant a tangible boost in overall productivity.
I think that one of our main strengths was in setting priorities when looking for holes to exploit. Our first task, as soon as we got our VM image decrypted, was to see if we had recyclable exploits on hand for any of the services. As soon as we'd deployed those, we started looking through any service which had source code available. Some grepping around identified unprotected calls to exec() and its ilk, which yielded some more easy exploits that kept paying dividends throughout the competition. It turns out that this is a very easy process to automate, and in fact one of the few still-useful scripts of ours was focused on doing just that. Another tactic that paid big returns was looking for default passwords -- I was astounded by how many teams didn't seem to notice that bypasses for all their database protections were hardcoded into their web scripts!
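To give a flavor of what I mean (this isn't our actual script, and the pattern list is just a starting point), something along these lines will walk a source tree and flag suspicious call sites:

```python
# Rough sketch of automating the "grep for dangerous calls" pass.
# Not our actual competition script; the patterns and file extensions
# here are illustrative and would want tuning per service.
import os
import re
import sys

DANGEROUS = re.compile(r"\b(exec|eval|system|popen|passthru|shell_exec)\s*\(")


def scan(root):
    """Print file:line for every suspicious-looking call under root."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".py", ".php", ".pl", ".rb", ".cgi")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="replace") as f:
                    for lineno, line in enumerate(f, 1):
                        if DANGEROUS.search(line):
                            print(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                pass  # unreadable file; skip it


if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```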
We maintained (and continue to maintain... ssh...) a private GitHub repo which at first was just a place to put all the scripts we thought we'd have cause to use. As the different nature of the competition became clear, though, this repo quickly morphed into a different beast. Right off the bat I put in some template Python scripts which the less initiated could base automated exploits off of. As soon as any exploit of ours was accepted by the server, we added it to a folder inside the repo so it could be used as a reference. Several references (e.g. network-related commands, syntax for common tasks) were also maintained in here. Separately, we kept a Google doc where things like port mappings were recorded, and we communicated across computer labs and shared code snippets via WWU's Slack channel. All of these tools proved to be very useful.
Speaking of GitHub: about halfway through the competition, one of our team members rediscovered the competition's public repo and realized that its "services" directory contained full source for five of the services running on our boxes. Several backdoors turned out to have been removed in the versions we got, but many other holes remained -- and, remarkably, the docs on GitHub enumerated many of these holes, sometimes even with example exploit code!
Who doesn't love open source?
One nice thing about this particular competition image, with its nigh-uncountable number of vulnerable services, is that now, in the aftermath, we're left with a whole ton of toys to play with. I'm looking forward to spending my free time breaking as many of these as I can. The prospect of organizing internal mock CTFs using server images derived from this year's competition image, with different subsets of its services activated, is also an interesting one.
I'm curious to see the degree to which next year's competition will resemble this year's. Hopefully most of these lessons will transfer fairly well. After all, now we have a score to beat!