Cатсн²² (in)sесuяitу / ChrisJohnRiley

Because we're damned if we do, and we're damned if we don't!

Category Archives: Conference

[DeepSec 2014] A Myth or Reality – BIOS-based Hypervisor Threat – Mikhail Utin


A Myth or Reality – BIOS-based Hypervisor Threat – Mikhail Utin

The talk is a status report of BIOS-based hypervisor research.

Myths and reality often intersect and interchange… this is how life works.

A myth about a malicious hypervisor (the "Russian Ghost") appeared on a Russian hackers' website at the end of 2011. It has all the attributes of a myth: there were rumours around the post, and the storyteller described it as reality.

We believe that it was real or may still exist, and we possibly know where it was born and eventually escaped from.

This research follows three individual cases:

Case #1: Malicious BIOS Loaded Hypervisor – MBLH (released 2011)

Published in Russian on a Russian website.

A typical Russian computer science project to develop a high-performance computer system (not associated with information security). Troubleshooting issues from the project revealed that Chinese-made motherboards contained additional software modules embedded in the BIOS, and the standard analysis software didn't see them.

Although the boards were labelled as “assembled in Canada”, the majority of the components were of Chinese origin.

Chinese boards had two software systems working simultaneously – a malicious hypervisor embedded in the BIOS which utilizes the Intel CPU's hardware virtualization capability.

By comparing the execution time of system commands between boards from “China” and “Canada”, boards with the MBLH showed significantly higher execution times (up to 60x slower), allowing for detection of the hidden hypervisor.
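The timing check can be sketched as a simple comparison against a known-clean baseline (illustrative Python; the workload, run count, and 10x threshold factor are assumptions, not details from the talk):

```python
import time
import statistics

def median_runtime(workload, runs=50):
    """Median wall-clock time of `workload` over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def looks_hypervised(measured, clean_baseline, factor=10.0):
    """Flag the system if commands run `factor` times slower than a clean board."""
    return measured > clean_baseline * factor

# A board measuring 0.6s for a command that takes 0.01s on a clean board is flagged
print(looks_hypervised(measured=0.6, clean_baseline=0.01))  # True
```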

All attempts to bring this issue to light within Russia were dismissed… however the author was able to confirm (with some missing details) that the malicious hypervisor is embedded in the BMC BIOS.

Case #2: “SubVirt: Implementing Malware with Virtual Machines” –  University of Michigan and Microsoft Research

2005/2006 research paper – Virtual Machine Based Rootkit (VMBR)

“We demonstrated that a VMBR can be implemented on commodity hardware and can be used to implement a wide range of malicious services”

Installed as a shim between the BIOS and the operating system, the VMBR only loses control of the system during the window when the system reboots and the BIOS is in control.

This research was performed on systems that did not have hardware virtualization support.

Research timeline for Case #1 (2007-2010) starts straight after the SubVirt research was released (2006)

Case #3: Widespread Distribution of Malicious Hypervisor via IPMI vulnerability (2013)

“illuminating the security issues surrounding lights out server management” – University of Michigan

“IPMI malware carries similar threats to BIOS and is likely easier to develop, since many BMCs run a standard operating system… if widely used IPMI devices can be compromised remotely, they can be leveraged to create a large network of bots”

Attack scenarios highlighted in this research map (4 out of 5) to those seen in case #1.

These attacks cannot be defended against without vendor assistance, and it's not easy to detect an infection.

With a modern trend to move toward cloud services, this may affect overall information security.


This style of attack is dangerous and can infiltrate millions of servers worldwide

In theory these infections cannot be identified… but we still have a chance

There’s no protection against this, put your server in a dumpster – special thanks to IPMI

No security standard calls for secure management (IPMI) protection


[DeepSec 2014] Addressing the Skills Gap – Colin McLean



Addressing the Skills Gap – Colin McLean

Mark Weatherford of the US Department of Homeland Security has stated “The lack of people with cyber security skills requires urgent attention. The DoHS can’t find enough people to hire”. The United Kingdom’s National Audit Office has also stated “This shortage of ICT skills hampers the UK’s ability to protect itself in cyberspace and promote the use of the internet both now and in the future”.

It is evident that there is a world-wide cyber-security skills shortage but what can be done about it?

The University of Abertay Dundee in Scotland was the first university to offer an undergraduate “hacking” degree in the UK, starting in 2006. The course is now widely recognised in the UK as a vocational supplier of security testing graduates, with many of the graduates receiving several job offers before they’ve even completed the course.

This talk focuses on the experiences of running the course and examines how the cyber security skills shortage can be addressed. Some of the issues discussed will be:

Academia: There are many degrees with titles that sound like they may produce the right graduates; however, does the content match the type of skills required?

Industry: What can the security industry do to influence the content of academic courses so that the right type of graduate is produced?

Extent of the problem

What is the extent of the skills gap we're facing?

Both the UK and the USA state that they can't find enough people to fill InfoSec positions.

Current InfoSec workforce: 2.87 million, with 4.90 million required by 2017. That leaves a skills gap of roughly 2 million people by 2017 (source)

Academic solution

Lots of courses are popping up; however, they have their detractors.

A common complaint is a lack of real-world experience.

Academics teach theoretical classes, and companies blame academia for teaching too much theory. It's a blame game and nobody wants to back down.

Examining the problems companies are facing, many of them are vocational.

Vocational vs. Theoretical

Mathematical / Theoretical courses are being largely addressed.

Vocational courses are required, as theoretical solutions alone are not being adopted. Better, and more, vocational courses are needed; this side is not being dealt with as well as the theoretical one.

What skills are needed

  • Core technical knowledge
  • Core practical skills
  • Documentation

Often forgotten…

  • Business appreciation
  • REAL practical skills
  • business documentation
  • thinking out of the box
  • criminal mindset

How can you teach these often forgotten points? They aren't technical subjects; how do you teach a criminal mindset as an academic?

These CAN be catered for during a degree… using things like assessments and extra-curricular activities, as well as support from external partner companies. Student projects with 3rd-party companies (such as NCR) have given back to the students, the lecturers, and the company.

By moulding the class, external companies get the candidates they’re looking for. They can take students straight from the course and put them to use.

Ex-students now regularly come back to give talks at Abertay… not just to teach, but also to inspire.

Graduates are better for this interaction.

Do we need more degrees like Abertay's? Yes, but different: moulded to slightly different ends, industry-driven or guided towards what the industry will need in a few years.

Let students loose… we mustn’t stifle their enthusiasm

Attracting people

Students that are still part of the programme, or that have since left, talk at lots of conferences… this is a great way to attract people to the university. Abertay even runs its own conference now to attract the next generation of hackers.

Exchange with other universities helps exchange knowledge and build contacts.

“Women in security” initiatives try to bring more women into the fold.

Have to enthuse school kids that this is an interesting and possible career path.

Further initiatives

Ask for skills from companies to help teach the next generation

Companies should be approaching academia to try and model what THEY need.

Vocational CAN be academic! Adding research into a purely academic course can give valuable vocational skills

Companies need to work WITH universities… it has to be a partnership

Companies shouldn’t expect graduates to be experts… they will by definition be generalists because they have to cover everything.


[DeepSec 2014] The Measured CSO – Alex Hutton


The Measured CSO – Alex Hutton

One of the most significant changes technology has wrought over the last decade is the current movement to use data and quantification as a means to better our everyday lives. In both our work life and leisure life, almost no aspect of modern life has escaped our desire to become better using evidence, data, and quantitative methods.

This talk discusses one method to help a Security Department build a better understanding of historically amorphous goals like “effectiveness, efficiency, secure, and risk” using data and models.

Where are we as an industry?

“… when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.” – Lord Kelvin

This is the journey towards knowledge, and therefore security. We are at the point where we can no longer talk about risk as high, medium, or low. How would your investors feel if your CEO talked about profit as high, medium, or low? We need to talk about things in a different way.

CVSS… “I use it every day, and I’m about to bash it!”

Where we’re at with our risk calculations:

  • somewhat random fact gathering
  • interesting, trivial, irrelevant observations
  • little guidance to data gathering

First Mistake: Limiting ourselves

Treating security purely as an engineering issue… looking at security only as a piece of the OSI stack.

Second Mistake: Blind leading the blind

Example: mobile malware is trending… this must be what we focus on. The FUD factory

Using the DBIR you can pull out more targeted, industry-specific metrics that speak far more to the real threats. Looking at the DBIR, mobile malware accounts for less than 1%. What we should focus on as an “industry” is not what's hot right now!

mobile malware does not move the needle in our stats, as we focus on organizational security incidents as opposed to consumer device compromise

We're dealing with complex systems… you can't make point predictions in a complex system (Friedrich Hayek)

Correlation between CVSSv2 ratings and actual exploitations shows that even the highest rated CVSS vulnerabilities are not that widely exploited.

The measured CSO

The measured CSO must be more like W. Edwards Deming

The potential for improving the system is continuous and never-ending… there is no perfect system. The only people who know where the opportunities to improve are, are the workers themselves. There are countless ways for the system to go wrong.

Having workers and management speak the same language is important… having workers record and analyse statistical information helps to improve the system and evaluate changes easily. Everybody in the system has to be responsible for working towards improvement.

How many of us spend an hour doing statistical analysis on the other 38 hours of work we’ve done!

A measured CSO:

  • Relies on metrics, data, intel for good decisions
  • Invests in improvements to people, process and technology

To provide the best and least-cost security for shareholders, and continuity of employment for his workers

  • We as an industry know that “best” and “least-cost” are not necessarily contradictory
  • We also have a HUGE continuity issue

Extending something like VERIS to incorporate controls data can assist a measured CSO in understanding where they stand. Using MapReduce (Hadoop), this information can be modelled to look for IOCs. The key to this is enriching the data with as much metadata as possible.

Framework <–> Models <–> Data

The Metrics and models that “defend” against threat patterns

Mobile malware might not be an issue now, but we need to plan, build, and manage to ensure when it is an issue, we have things already in place.

A Micromort… a one in a million chance of death… we can apply that

We’re bad at combining all those metrics… overweight, on drugs, and doing something stupid.
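As a toy illustration of combining independent risks (not from the talk): micromort-style probabilities combine as one minus the product of the individual survival probabilities.

```python
def combined_risk(probabilities):
    """P(at least one event) for independent risks: 1 - prod(1 - p_i)."""
    survival = 1.0
    for p in probabilities:
        survival *= 1.0 - p
    return 1.0 - survival

# Three independent one-in-a-million risks (micromorts) combine to just under 3e-6
print(combined_risk([1e-6, 1e-6, 1e-6]))
```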

Becoming measured

What does that mean? What do we need?

Most metrics programs are gathering of some information without any context.

A metric is like a lego piece. It has no context until you build something with all the lego pieces you have.

How do you get context?

Goal, Question, Metric (GQM)

  • Execution: Define goals
  • Models: Question how this can be measured
  • Data: Define metrics that answer the question
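A hypothetical GQM breakdown might look like this (the goal, questions, and metrics below are invented for illustration, not taken from the talk):

```python
# goal -> questions -> metrics, top-down
gqm = {
    "goal": "Reduce exposure from unpatched internet-facing systems",
    "questions": {
        "How quickly are critical patches applied?": [
            "median days from patch release to deployment",
            "% of critical patches applied within SLA",
        ],
        "How many systems are exposed at any time?": [
            "count of internet-facing hosts running known-vulnerable services",
        ],
    },
}

for question, metrics in gqm["questions"].items():
    print(question)
    for metric in metrics:
        print("  -", metric)
```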

The measured CSO creates a scorecard of KRIs and KPIs that they can use to evaluate where they currently stand

Framework for GQM –> NIST CSF (Cyber Security Framework)


[SecTorCA] Reverse Engineering a Web Application – for fun, behavior & WAF Detection

Reverse Engineering a Web Application

For fun, behavior & WAF Detection

by Rodrigo “Sp0oKeR” Montoro (Sucuri Security)


Screening HTTP traffic can be something really tricky, and attacks on applications are becoming increasingly complex day by day. By analyzing thousands upon thousands of infections, we noticed that regular blacklisting is increasingly failing, so we started research on a new approach to mitigate the problem. We started by reverse engineering the most popular CMS applications such as Joomla, vBulletin and WordPress, which led us to create a way to detect attackers based on whitelist protection in combination with behavior analysis. Integrating traffic analysis with log correlation has resulted in more than 2500 websites now being protected, generating 2 to 3 million alerts daily with a low false positive rate. In this presentation we will share some of our research, our results and how we have maintained a WAF (Web Application Firewall) using very low CPU processes and high detection rates.

The presentation is based on WordPress / NGINX, but the concepts can be applied to any web application / CMS technology. The goal of this talk is to better protect CMSs with better performance (fewer rules is better), but also to protect against new vulnerabilities as they are released/discovered.


By reverse engineering common CMSs (in this case WordPress) it is possible to better understand how they work.

WAF Detection (breakdown):

  • Traffic Analysis
  • Application Structural Analysis
  • Behavior

Detection steps

Reverse Engineering Traffic

As we're talking about web applications, we're mostly talking about HTTP here. By breaking the traffic down into specific categories it's possible to better understand it. We include attributes such as source IP in this section.

Crawling the application

There are various ways to crawl the application from a blackbox perspective (Burp Suite for example). From a whitebox perspective there are various other options.

Looking at requests

By looking at the parameters used by the application, it's possible to identify where a parameter should only contain numbers or only letters. For example, a name field should not contain numbers. However, this could be problematic if you don't consider edge cases, like names with special characters.
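A minimal sketch of this kind of per-parameter whitelisting (the parameter names and patterns below are assumptions for illustration):

```python
import re

# Illustrative whitelist: each parameter only accepts its expected character class
PARAM_RULES = {
    "id": re.compile(r"^\d+$"),       # numeric only
    "name": re.compile(r"^[^\d]+$"),  # no digits; edge cases like O'Brien still pass
    "email": re.compile(r"^[^@\s]+@[^@\s]+$"),
}

def params_ok(params):
    """Return the list of parameter names that violate their whitelist rule."""
    violations = []
    for key, value in params.items():
        rule = PARAM_RULES.get(key)
        if rule and not rule.match(value):
            violations.append(key)
    return violations

print(params_ok({"id": "42", "name": "O'Brien"}))      # []
print(params_ok({"id": "42 OR 1=1", "name": "x123"}))  # ['id', 'name']
```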

Looking at the common headers, it's easy to identify header values that must fall within specific whitelists, e.g. HTTP/1.0 or HTTP/1.1. Anything else is either corrupted data or somebody fiddling with the data being sent.

With wordpress.com, the response contains an x-hacker header saying “if you should read this you should apply for a job…”

Brute-force attacks are on the rise, so if you can, compare users' passwords against a list of the top X passwords and inform the user if theirs is weak.
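A minimal sketch of such a check (the password list here is a tiny illustrative sample, not a real top-X list):

```python
# Warn users whose password appears in a list of the most common passwords
TOP_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111"}

def is_weak(password):
    """Case-insensitive lookup against the common-password list."""
    return password.lower() in TOP_PASSWORDS

print(is_weak("LetMeIn"))        # True
print(is_weak("correct horse"))  # False
```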

Malicious user-agent strings tend to be shorter than legitimate ones, while legitimate browsers tend to send more complete request headers (often over 8 headers). Also, you don't see normal browsers sending HTTP/1.0 requests anymore. Drop these simple cases. Checking that all expected parameters are sent is also important: a lot of attackers only send the parameters they need and ignore the others, which can be checked easily enough.
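The heuristics above can be sketched as a simple request scorer (the 19-byte and 8-header thresholds come from the talk; the request fields and expected parameter names are assumptions):

```python
EXPECTED_PARAMS = {"id", "title", "token"}  # hypothetical form fields

def suspicious(request):
    """Return the list of heuristic rules a request trips (empty = looks normal)."""
    reasons = []
    if len(request.get("user_agent", "")) < 19:
        reasons.append("short user-agent")
    if request.get("http_version") == "HTTP/1.0":
        reasons.append("legacy HTTP version")
    if len(request.get("headers", {})) <= 8:
        reasons.append("incomplete headers")
    if not EXPECTED_PARAMS.issubset(request.get("params", {})):
        reasons.append("missing expected parameters")
    return reasons

bot = {"user_agent": "curl/7.1", "http_version": "HTTP/1.0",
       "headers": {"Host": "example.com"}, "params": {"id": "1"}}
print(suspicious(bot))  # all four heuristics fire
```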

A regular user is also not going to request a whole load of pages that result in a 404. If there are a lot of requests that end in a 404, this looks more like an attack than a normal user's traffic.


Using a PCAP of real traffic and simple regex matching, it’s possible to test your logic to list what requests would normally be dropped BEFORE implementing something as a rule. You can then tweak the matching logic before going live.
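A dry run of this kind might look like the following (the log lines and the candidate rule are illustrative, not from the talk):

```python
import re

# Candidate blocking rule, tested against recorded traffic before going live
CANDIDATE_RULE = re.compile(r"HTTP/1\.0")

access_log = [
    '203.0.113.4 "GET /index.php HTTP/1.1" 200',
    '198.51.100.8 "GET /xmlrpc.php HTTP/1.0" 200',
    '192.0.2.9 "POST /wp-login.php HTTP/1.0" 401',
]

would_drop = [line for line in access_log if CANDIDATE_RULE.search(line)]
print(f"{len(would_drop)}/{len(access_log)} requests would be dropped")
```

Tweaking the regex and re-running against the same capture shows the impact of each change without touching production traffic.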

NGINX is meant to be quick, so it doesn't allow IF/ELSE, only IF statements.


if ($request_method != <something>) {
     return <status_code>;
}
WordPress has a lot of files (check the tarball for a full list), so we can slim that down a bit by removing things like initial config and setup files. The administration console (/wp-admin) is also something that can either be disabled or restricted to specific source addresses (think 2FA). Core files (wp-includes) are not meant to be externally accessible; the same goes for uploaded content (wp-content). WordPress also has an XML-RPC interface that allows somebody to perform specific actions (e.g. ping-backs, comments, user-auth, …). Redirecting these requests to a honeypot might be an option for you.

<IfModule mod_alias.c>
     Redirect 301 /xmlrpc.php <honeypot URL>
</IfModule>

Lots of brute-force attempts have been seen since June 2014 using xmlrpc.php. A similar rise in traffic was seen in the use of xmlrpc.php as a DDoS tool in March 2014. By looking at the logs it's easy to see spikes where there may be new attacks or new methods being tried out.

To secure things further, deny specific filetypes in directories where you may have user content or data (e.g. uploads, logs, …).

Mitigating the attack surface

Turn off the machine and remove the network cable –> Not really an option

OSSEC for real-time monitoring.

Monitor specific locations for alteration or addition of files to ensure you get visibility on the web application.
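A minimal sketch of that kind of file monitoring (OSSEC does far more; the helper names here are illustrative):

```python
import hashlib
from pathlib import Path

def snapshot(directory):
    """Map each file under `directory` to its SHA-256 digest."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(directory).rglob("*") if p.is_file()}

def diff(baseline, current):
    """Report files added or changed since the baseline snapshot."""
    added = set(current) - set(baseline)
    changed = {p for p in baseline.keys() & current.keys()
               if baseline[p] != current[p]}
    return added, changed
```

Take a baseline snapshot of the webroot after deployment, then alert whenever `diff(baseline, snapshot(webroot))` reports anything unexpected.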

Threshold ideas

Too many 404s –> somebody searching the web app

GET/POST per time for same IP source –> automated user hitting the site (not a normal user)

File specific: set files on Linux as immutable with chattr +i (view attributes with lsattr)
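The per-source thresholds above can be sketched as a sliding-window counter (the window size and event limit are assumptions for illustration):

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60   # sliding window length (assumed)
MAX_EVENTS = 30       # events allowed per window per source (assumed)

events = defaultdict(deque)  # source IP -> timestamps of recent events

def over_threshold(ip, now=None):
    """Record an event for `ip` and report whether it exceeds the window limit."""
    now = time.time() if now is None else now
    q = events[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_EVENTS

# simulate 31 rapid requests (e.g. 404s) from one source
hits = [over_threshold("198.51.100.7", now=float(t)) for t in range(31)]
print(hits[-1])  # True
```

The same counter works for "too many 404s" and "too many GET/POSTs per time from one source"; only the event being fed in changes.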

Statistical data

Useful for counter intelligence and to find behaviors, new trends and alerts.

Instead of blocking “user-agent: ABCD”, think about blocking connections from user-agents with < 19 bytes (maybe a few false positives, but less specific).

GEO-IP Blocking –> based on top countries, you could block specific countries if you don’t have business reasons to allow traffic from them. This may change week by week however.

Methods –> If your application only allows GET/POST, then drop everything else

HTTP Version –> If you only accept HTTP/1.1, then drop 1.0 and all malformed versions (stats from Sucuri show 1 million hits a week dropped by this rule alone)


This is a constant process, not set it and forget it.


  • Developers
  • plug-ins
  • Bad Code
  • languages

Next steps:

  • Integration with SCAP
  • open source PCAP parser tool
  • build rule-set for CMSs under OWASP banner