Cатсн²² (in)sесuяitу / ChrisJohnRiley

Because we're damned if we do, and we're damned if we don't!

[DeepSec 2014] Trusting Your Cloud Provider. Protecting Private Virtual Machines – Armin Simma


SECRETS: My talk is first and foremost about secrets.
Most people use the term “secrets” to refer to data at rest or data in motion. There are effective measures to protect such data, one of which is encryption. As you write in the CfP 2013: “..uses encryption, access control…”. Concerning (IaaS) clouds we have data IN EXECUTION. That is, the virtual image / virtual machine (VM) sent to the cloud provider is the secret to be protected. The problem is that this secret must execute on someone else’s system. Of course, we cannot simply encrypt the VM and send it to the provider. Homomorphic encryption would be a solution to this problem, but at the time of writing it is academic, i.e. not ready (or secure enough) to be used in real systems. In my talk (and our project) I want to show that it is possible to protect secrets (the VM of the cloud customer) running on the provider’s host system using Trusted Computing technology.
FAILURES: Root users (superusers) usually have full control over, and full access to, a system. In our case the root user at the cloud provider’s site has full access to the provider’s host system, and thus full access to the guest image (i.e. the VM of the customer). What if root performs a wrong or malicious action? He could gain insight into, or manipulate, the guest image. Herein lies a potential failure. In my talk I want to show how to keep root users from causing such failures.
VISIONS: In our project we were building a prototype to show that it is possible to build the proposed system. But the technical system is not enough. We need an “ecosystem” to bring our idea to real life. This is my vision: We have a trusted third party (I call it TTT trusted third tester) that vouches for a trustworthy (in that case thoroughly tested) system and publishes reference hash values to compare with the running system. The cloud customer can use these reference values plus attestation technology to check that a trustworthy system is running on the provider’s host. Using so-called sealing technology the VM will be decrypted on the provider’s site only if the provider’s system matches the reference hashes.
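The sealing idea described above can be sketched in a few lines of Python. This is a toy model only: a real TPM seals the key inside the chip, whereas here a key mask is simply derived from the reference PCR hashes (all function names are illustrative):

```python
import hashlib

def seal_key(vm_key: bytes, reference_pcrs: list[bytes]) -> bytes:
    """Bind (XOR-mask) the VM key to the expected PCR state.

    A real TPM seals the key in hardware; this toy version just
    derives a mask from the reference hashes.
    """
    digest = hashlib.sha256(b"".join(reference_pcrs)).digest()
    return bytes(k ^ d for k, d in zip(vm_key, digest))

def unseal_key(sealed: bytes, current_pcrs: list[bytes]) -> bytes:
    """Recover the key; only yields the correct key if the current
    PCR values match the reference values used for sealing."""
    digest = hashlib.sha256(b"".join(current_pcrs)).digest()
    return bytes(s ^ d for s, d in zip(sealed, digest))
```

If the provider's system has been modified, the PCR values differ and unsealing produces garbage rather than the VM decryption key, which is the property the ecosystem relies on.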

Cloud | Trust | Security

Security is the top inhibitor keeping companies from moving to the cloud

Problem Statement: Insider attacks within the cloud

There are a number of real-world examples of insider attacks on cloud-based systems (including a Salesforce.com leak of confidential information), dating back to 2007…

The DBIR classifies 18% of attacks as “insider attacks” – quoted from Alex Hutton’s DeepSec keynote

How can we ensure that a [malicious] insider doesn’t have access to the customer’s VM?


  • no physical access
  • cloud provider is “partially” trusted at least


Encrypt the VM

A simple idea, but not possible: the provider needs to decrypt the VM in some way in order to run it, so the decryption key must somehow be known to the provider.

Security based on trust

Assurance needs a kind of “proof”

Typically such “proofs” are given by service level agreements, but these rarely cover the security of the service. If any “proof” exists at all, it takes the form of an external audit or third-party certification of the cloud provider.

Suggested solution in a nutshell

Based on MAC (Mandatory Access Control) to allow more fine-grained policies and system-wide policies.

SELinux is an implementation of MAC for Linux

Dan Walsh gives many examples where SELinux would have prevented known vulnerabilities (e.g. Shellshock, CVE-2011-1751)

By using SELinux it is possible to create a configuration to:

  • Restrict root from copying snapshots and VM images
  • Restrict root from doing other malicious things
  • Prevent logs from being removed or manipulated

SELinux policy files can however become VERY complex over time…

The conceptual problem is that an administrator must be able to perform maintenance and install new patches, which could allow him to bypass protections. This can be mitigated by allowing patches to be applied only by a secondary root user, who must validate and log on physically with a hardware token, reducing the possible exposure.

Privileged Identity Management –  a number of standards exist and should be taken into consideration (especially when it comes to logging actions)

TPM (Trusted Platform Module)

The second piece of the puzzle… used to store and protect keys against external attack and theft.

Tamper resistant, and relatively cheap hardware ($1–2). Over 600 million TPM chips currently exist

Version 2 is on the horizon

Root of Trust (RoT) gives an initial anchor for a security system

TPM was used in this solution for:

  • authentication / identification of the machine
  • storage of hash values
  • integrity measurement
  • attestation

TPM measured boot:

Before any component is executed, it is measured (i.e. its hash is extended into a PCR) – Measure > Extend > Execute

After hashes are stored in the PCRs, a remote entity can implement reporting (attestation). The attestation can be compared against reference values (“golden values”) to ensure the boot process has not been altered.
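The measure–extend–attest flow can be sketched as follows. This is a simplified software model: a real TPM extends digests into hardware PCRs, and the hash algorithm and component list here are placeholders:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: PCR_new = H(PCR_old || H(component))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def measured_boot(components: list[bytes]) -> bytes:
    """Measure each component into the PCR before it would execute."""
    pcr = b"\x00" * 20              # PCRs reset to zeros at power-on
    for component in components:
        pcr = extend(pcr, component)  # Measure > Extend > (Execute)
    return pcr

def attest(reported_pcr: bytes, golden_pcr: bytes) -> bool:
    """Remote attestation check against the published 'golden value'."""
    return reported_pcr == golden_pcr
```

Because each extend operation chains over the previous PCR value, any altered component (or a change in ordering) yields a final PCR that no longer matches the golden value.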

A POC for this was developed over the past few years, but now seems to be superseded by the release of OpenAttestation

Overall ecosystem

A Trusted Third Tester (TTT) would need to publish signed integrity values for the “golden values” checks

A Cloud Attestor and Advisor (CAA) would need to have software on the customer site to perform the attestation

Related work

  • Intel
    • Mt. Wilson
    • Mystery Hill
  • Haven and SGX

Homomorphic Encryption

Main idea: perform operations on the ciphertext; only the person with the key can decrypt and view the results

Would resolve all these issues if it’s fast/reliable enough
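As a toy illustration of the idea, textbook (unpadded) RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts yields the encryption of the product. This is only a single operation, not the arbitrary computation fully homomorphic encryption aims for, and the tiny hard-coded key is for demonstration only:

```python
# Toy illustration: textbook (unpadded) RSA is multiplicatively
# homomorphic, so ciphertexts can be multiplied without the key.
# Tiny hard-coded primes for demonstration only -- never for real use.
p, q = 61, 53
n = p * q                           # modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# The provider multiplies ciphertexts without ever seeing 7 or 6:
c = (encrypt(7) * encrypt(6)) % n
assert decrypt(c) == 42             # only the keyholder sees the result
```

Real padded RSA deliberately destroys this property; schemes such as Paillier (additive) or lattice-based FHE constructions are what the "compute on encrypted VMs" vision would actually require.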

Problems with TPM-based solutions

  • Inefficiency problem
    • small micro-processor
  • Whitelist “golden values”
    • Systems are NOT always the same, and therefore need/have different “golden values”
    • Reference systems need to be 100% identical
  • Attacks on TPMs
    • Some are known (presented later at DeepSec)
    • Hardware and Software-based issues
    • Time-of-check / time-of-use (TOCTOU) issues
    • Encryption of VMI using sealing
  • Cloud provider must disclose details of their platform
    • Necessary for attestation


[DeepSec 2014] A Myth or Reality – BIOS-based Hypervisor Threat – Mikhail Utin



The talk is a status report of BIOS-based hypervisor research.

Myths and reality often intersect and interchange… this is how life works.

A myth about a malicious hypervisor (the “Russian Ghost”) appeared on a Russian hackers’ website at the end of 2011. It has all the attributes of a myth: there were rumors about the post, and the storyteller described it as reality.

We believe that it was real or may still exist, and we possibly know where it was born and eventually escaped from.

This research follows 3 individual cases

Case #1: Malicious BIOS Loaded Hypervisor – MBLH (released 2011)

Published in Russian on a Russian website

A typical Russian computer science project to develop a high-performance computer system (not associated with information security). Troubleshooting during the project revealed that Chinese-made motherboards contained additional software modules embedded in the BIOS, which the standard analysis software didn’t see.

Although the boards were labelled as “assembled in Canada”, the majority of the components were of Chinese origin

The Chinese boards had two software systems working simultaneously – a malicious hypervisor embedded in the BIOS which utilizes the hardware virtualization capability of the Intel CPU.

The hypervisor could be detected by checking the execution time of system commands on boards from “China” against boards from “Canada”.

Boards with the MBLH showed significantly higher execution times (around 60x slower), allowing detection of the hidden hypervisor
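A timing check along these lines could be sketched as follows (the threshold and decision rule are illustrative assumptions, not the authors' actual method):

```python
import statistics

def looks_infected(baseline_times, observed_times, factor=10.0):
    """Flag a possible hidden hypervisor when system commands run
    dramatically slower than on a known-clean reference board.

    `factor` is an illustrative threshold; case #1 reported roughly
    60x slowdowns on infected boards. Medians damp outliers caused
    by scheduling noise in the individual measurements.
    """
    clean = statistics.median(baseline_times)
    suspect = statistics.median(observed_times)
    return suspect > clean * factor
```

In practice the measurements would come from timing privileged instructions or system calls that a hypervisor must intercept, which is where the overhead concentrates.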

All attempts to bring this issue to light within Russia were dismissed… however, the author was able to confirm (with some details missing) that the malicious hypervisor is embedded in the BMC BIOS.

Case #2: “SubVirt: Implementing Malware with Virtual Machines” –  University of Michigan and Microsoft Research

2005/2006 research paper – Virtual Machine Based Rootkit (VMBR)

We demonstrated that a VMBR can be implemented on commodity hardware and can be used to implement a wide range of malicious services

Installed as a shim between the BIOS and the operating system, the VMBR only loses control of the system during the window when the system reboots and the BIOS is in control.

This research was performed on systems that did not have hardware virtualization support.

Research timeline for Case #1 (2007-2010) starts straight after the SubVirt research was released (2006)

Case #3: Widespread Distribution of Malicious Hypervisor via IPMI vulnerability (2013)

“illuminating the security issues surrounding lights out server management” – University of Michigan

“IPMI malware carries similar threats to BIOS and is likely easier to develop, since many BMCs run a standard operating system… if widely used IPMI devices can be compromised remotely, they can be leveraged to create a large network of bots”

Attack scenarios highlighted in this research map (4 out of 5) to those seen in case #1.

These attacks cannot be defended against without vendor assistance. It’s not easy to detect an infection

With a modern trend to move toward cloud services, this may affect overall information security.


This style of attack is dangerous and could infiltrate millions of servers worldwide

In theory these infections cannot be identified… but we still have a chance

There’s no protection against this, put your server in a dumpster – special thanks to IPMI

No security standard calls for secure management (IPMI) protection




[DeepSec 2014] Addressing the Skills Gap – Colin McLean




Mark Weatherford of the US Department of Homeland Security has stated “The lack of people with cyber security skills requires urgent attention. The DoHS can’t find enough people to hire”. The United Kingdom’s National Audit Office has also stated “This shortage of ICT skills hampers the UK’s ability to protect itself in cyberspace and promote the use of the internet both now and in the future”.

It is evident that there is a world-wide cyber-security skills shortage but what can be done about it?

The University of Abertay Dundee in Scotland was the first university to offer an undergraduate “hacking” degree in the UK, starting in 2006. The course is now widely recognised in the UK as a vocational supplier of security testing graduates, with many of the graduates receiving several job offers before they’ve even completed the course.

This talk focuses on the experiences of running the course and examines how the cyber security skills shortage can be addressed. Some of the issues discussed will be:

Academia: there are many degrees with titles that sound like they may produce the right graduates; however, does the content match the type of skills required?

Industry: what can the security industry do to influence the content of academic courses so that the right type of graduate is produced?

Extent of the problem

What is the extent of the skills gap we’re facing?

UK and the USA both state that they can’t find enough people to fill InfoSec positions.

There are currently 2.87 million InfoSec workers, with 4.90 million required by 2017 – a skills gap of roughly 2 million people by 2017 (source)

Academic solution

Lots of classes are popping up. However, they have their detractors.

A common complaint is the lack of real-world experience.

Academics teach theoretical classes; companies blame academia for teaching too much theory. It’s a blame game, and nobody wants to back down.

Examining the problems companies are facing, many of them are vocational.

Vocational vs. Theoretical

Mathematical / Theoretical courses are being largely addressed.

Vocational skills are required, as purely theoretical solutions are not being adopted. Better vocational courses are needed; this side is not being dealt with as well as the theoretical one.

What skills are needed

  • Core technical knowledge
  • Core practical skills
  • Documentation

Often forgotten…

  • Business appreciation
  • REAL practical skills
  • business documentation
  • thinking out of the box
  • criminal mindset

How can you teach these often-forgotten points? They are not purely technical subjects – how can you teach a criminal mindset academically?

These CAN be catered for during a degree, using things like assessments and extra-curricular activities, as well as support from external partner companies. Student projects with third-party companies (such as NCR) have given back to the students, the lecturers, and the company alike.

By moulding the class, external companies get the candidates they’re looking for. They can take students straight from the course and put them to use.

Ex-students now regularly come back to give talks at Abertay… not just to teach, but also to inspire.

Graduates are better for this interaction.

Do we need more degrees like Abertay’s? Yes, but different ones: moulded to slightly different ends, industry-driven or guided towards what the industry will need in a few years.

Let students loose… we mustn’t stifle their enthusiasm

Attracting people

Students who are still part of the program, or who have since left, talk at lots of conferences… this is a great way to attract people to the university. Abertay even runs their own conference now to attract the next generation of hackers.

Exchange with other universities helps exchange knowledge and build contacts.

“Women in security” initiatives try to bring more women into the fold.

Have to enthuse school kids that this is an interesting and possible career path.

Further initiatives

Ask for skills from companies to help teach the next generation

Companies should be approaching academia to try and model what THEY need.

Vocational CAN be academic! Adding research into a purely academic course can give valuable vocational skills

Companies need to work WITH universities… it has to be a partnership

Companies shouldn’t expect graduates to be experts… they will by definition be generalists because they have to cover everything.


[DeepSec 2014] The Measured CSO – Alex Hutton



One of the most significant changes technology has wrought over the last decade is the current movement to use data and quantification as a means to better our everyday lives. In both our work life and leisure life, almost no aspect of modern life has escaped our desire to become better using evidence, data, and quantitative methods.

This talk discusses one method to help a Security Department build a better understanding of historically amorphous goals like “effectiveness, efficiency, secure, and risk” using data and models.

Where are we as an industry?

“… when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.” – Lord Kelvin

This is the journey towards knowledge, and therefore security. We are at the point where we can no longer talk about risk as simply high, medium, or low. How would your investors feel if your CEO talked about profit as high, medium, or low? We need to talk about things in a different way.

CVSS… “I use it every day, and I’m about to bash it!”

Where we’re at with our risk calculations:

  • somewhat random fact gathering
  • interesting, trivial, irrelevant observations
  • little guidance to data gathering

First Mistake: Limiting ourselves

Treating security as only an engineering issue… looking at security as just a piece of the OSI stack.

Second Mistake: Blind leading the blind

Example: mobile malware is trending… this must be what we focus on. The FUD factory

Using the DBIR you can pull out more targeted and industry-specific metrics that speak much more to the real threats. Looking at the DBIR, mobile malware accounts for less than 1%. What we focus on as an “industry” should not just be what’s hot right now!

mobile malware does not move the needle in our stats as we focus on organizational security incidents as opposed to consumer device compromise

We’re dealing with complex systems… you can’t make point predictions in a complex system (Friedrich Hayek)

Correlation between CVSSv2 ratings and actual exploitations shows that even the highest rated CVSS vulnerabilities are not that widely exploited.

The measured CSO

The measured CSO must be more like W. E. Deming

The potential for improving the system is continuous and never-ending… there is no perfect system. The only people who know where the opportunities to improve are, are the workers themselves. There are countless ways for the system to go wrong.

Having workers and management speak the same language is important… having workers record and analyse statistical information helps to improve the system and evaluate changes easily. Everybody in the system has to be responsible for working towards improvement.

How many of us spend an hour doing statistical analysis on the other 38 hours of work we’ve done?

A measured CSO:

  • Relies on metrics, data, intel for good decisions
  • Invests in improvements to people, process and technology

To provide the best and least-cost security for shareholders, and continuity of employment for his workers

  • We as an industry know that “best” and “least-cost” are not necessarily contradictory
  • We also have a HUGE continuity issue

Extending something like VERIS to incorporate controls data can assist a measured CSO in understanding where they stand. Using map-reduce (Hadoop), this information can be modelled to look for IOCs. The key to this is enriching the data with as much metadata as possible.
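The counting step of such a pipeline can be sketched in miniature. The incident records and field names below are hypothetical simplifications of VERIS-style data, and a real deployment would run this over Hadoop rather than in-process:

```python
from collections import Counter
from functools import reduce

# Hypothetical, simplified VERIS-style incident records
incidents = [
    {"action": "hacking", "asset": "server"},
    {"action": "malware", "asset": "user device"},
    {"action": "hacking", "asset": "server"},
    {"action": "misuse",  "asset": "server"},
]

# "Map": emit a count per threat action; "reduce": merge the counts.
mapped = (Counter({rec["action"]: 1}) for rec in incidents)
totals = reduce(lambda a, b: a + b, mapped, Counter())
print(totals.most_common(1))   # hacking is the most common action here
```

The same map/reduce shape scales out trivially: the per-record mapping is embarrassingly parallel, and Counter addition is the associative merge that the reduce phase needs.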

Framework <–> Models <–> Data

The Metrics and models that “defend” against threat patterns

Mobile malware might not be an issue now, but we need to plan, build, and manage to ensure when it is an issue, we have things already in place.

A micromort… a one-in-a-million chance of death… we can apply that concept

We’re bad at combining all those metrics… overweight, on drugs, and doing something stupid.
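Combining independent risks is straightforward in principle: the chance that at least one occurs is one minus the product of the individual survival probabilities. A small sketch (the risk values are illustrative):

```python
def combined_probability(probabilities):
    """Probability that at least one independent risk occurs:
    1 - product of the individual survival probabilities."""
    survive = 1.0
    for p in probabilities:
        survive *= (1.0 - p)
    return 1.0 - survive

# Three independent 100-micromort risks (100-in-a-million each)
risks = [100e-6, 100e-6, 100e-6]
micromorts = combined_probability(risks) * 1e6
# slightly less than the naive sum of 300 micromorts
```

For tiny probabilities the result is close to the naive sum, but for larger ones the difference matters, which is one reason intuition about stacked risks goes wrong.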

Becoming measured

What does that mean? What do we need?

Most metrics programs gather information without any context.

A metric is like a Lego piece: it has no context until you build something with all the pieces you have.

How do you get context?

Goal, Question, Metric (GQM)

  • Execution: Define goals
  • Models: Question how this can be measured
  • Data: Define metrics that answer the question

The measured CSO creates a scorecard of KRIs and KPIs that can be used to evaluate where they currently stand

Framework for GQM –> NIST CSF (Cyber Security Framework)
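A GQM-style scorecard entry might be modelled minimally as follows (the goal, question, metric names, and thresholds are all hypothetical):

```python
# Hypothetical GQM chain: goal -> question -> metric, with
# illustrative values and thresholds only.
scorecard = {
    "goal": "Reduce exposure to known vulnerabilities",
    "question": "How quickly are critical patches deployed?",
    "metric": {"name": "median days to patch (KRI)", "value": 12, "target": 14},
}

def metric_status(metric):
    """Green when the KRI/KPI meets its target, red otherwise."""
    return "green" if metric["value"] <= metric["target"] else "red"

print(metric_status(scorecard["metric"]))   # -> green (12 <= 14)
```

The point of the structure is that every metric is anchored to a question, and every question to a goal, so a red status traces directly back to an executive objective rather than floating as a context-free number.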


