
How not to do incident response

Now as you probably already know, I’m not an experienced incident responder. Sure it interests me, but until recently I’ve never really had the cause, or the chance, to be on the front line handling a live incident. I know, I know, I must be the only person in IT Security who DIDN’T get into it because they got hacked. Anyway, I’m getting off-topic.

Back on 28th August the Apache Software Foundation suffered a hack. Most people know this already; it’s not front-page news. Why isn’t it front-page news? Because of their communications. Almost from the moment they took down their websites they had an open dialogue with the public, telling everything they knew about the problem. Even though their services were unavailable, and even though, as a group, it was embarrassing to suffer this kind of attack, they still did the right thing in communicating the full information to everybody who needed or wanted to know. As a result I’ve seen nothing but praise for the way the Apache Software Foundation and its engineers dealt with the hack and recovered. Not only did they put ALL the information out there, they were also good enough to publish an open review of what worked and didn’t work during their own incident response. Nobody could have asked for more. As a result, it’s not only the Apache Software Foundation that has benefited from reviewing their IR process. Everybody with an interest can review their notes and adapt their own in-house IR plans to avoid the problems Apache saw. Sure, we all wish they didn’t get hacked, but considering the issues you could almost say it was a good PR exercise for them. Although it would have been nice to see a post on their official Twitter account as well. With a follower list more than 1,300 strong, and a presence (although a little thin on the ground) since last December, it would have been a good additional source of micro-updates. Still, we can’t have it all, can we 😉

In contrast we have COLT Telecom, a European telecom provider that suffered what appears (at this time) to be a full outage of services at 12:00:03 on the 8th September. Just before 8pm that night (after 20 hours of downtime with no communication with customers) COLT registered a Twitter account called COLToutagenews and posted the following: “We’ve network issues in 8 cities. We are going through systematic process with urgency to identify, resolve & minimise customer impact”. This falls directly into something Martin McKeay covered in his FIRST 2009 presentation “Using Social Media in Incident Response“. This Twitter account was set up by COLT to serve as the ONLY communication with its customers, after the event had already happened. I’m sure you can see the problem with this. To date, the highest number of followers for this temporary account is 83.

[Graph: follower numbers for the COLToutagenews Twitter account]

That’s not many considering their network runs Europe-wide and the outage affected 8 cities. Does that make Twitter a bad communications medium for incident response? No, not really. However you can’t expect your customers to go to Twitter and search for new accounts to find your communications channel once you get around to setting one up. Social networks like Twitter will only work in this way if your company invests in a presence 24/7, and not just when the need serves the company. If people aren’t following your company account because it doesn’t exist, is nothing but 100 marketing messages, or never posts anything, then your incident response isn’t going to work as you imagined it. From the looks of things COLT are back up and running (at least from the feedback I’ve seen). However the reason for the outage still hasn’t been given, and proper communication with customers still hasn’t happened. Even the link provided by the COLToutagenews Twitter account points to nothing but a list of contact phone numbers.

How can you even start to compare these outages? The services these companies offer are very different, and I feel bad comparing them as equals (apples and oranges). I’ve also no doubt that a lot of good people worked (and are probably still working) around the clock on the COLT network issues. However the fault isn’t with the hack, or with whatever caused COLT’s downtime. The big issue here is the response, communication and open flow of information.

We can learn a lot from these two examples. I can see them appearing in IR training courses for a long while to come.


21st FIRST Conference – Day 5

Today is the last day of the conference. It’s been great so far, but it’s not over yet 😉

09:00 Security and the younger generation – Ray Stanton

This is the first time in history that many countries have 4 generations of people working alongside each other.

Who are the younger generation ?

  • Language
  • Culture
  • Demands
  • Speed of learning
  • Expectations
  • Pushing the boundaries

Generation Y refers to a specific group of people born between 1982 and 2000. The majority of Gen Y have spent a significant amount of time leading an online lifestyle.

Gen Y statistics (Junco and Mastrodicasa Study 2007)

  • 97% own a computer
  • 94% own a mobile
  • 76% use Instant Messenger (15% logged in 24/7)
  • 34% use websites as their primary source of news
  • 28% author a blog and 44% read blogs
  • 49% download music using peer-to-peer file sharing
  • 75% of college students have a Facebook account
  • 60% own some type of portable music and/or video device such as an iPod

Companies must make their security policies relevant to Gen Y. Language that is easily understood and interpreted by Gen X is interpreted differently by the newer generation.

Gen X are fast at adopting new technologies (they are also the largest group); however, Gen Y are the ones who push the boundaries and innovate.

Find ways to make it work

  • Moodles – styles of education
  • Listen
  • Engage
  • Never, ever, say No! They will just go around you
  • Participate
  • Embrace

10:15 Conficker Research project – Aaron Kaplan (CERT.AT)

The guys from CERT.at gave a nice graphical, geographical representation of Conficker infection rates using a Google Earth overlay.

By looking at the scale of infections over time it was possible to create an animation showing the clean-up process in various countries around the world. Top of the list of infections were China, Brazil, and Russia.
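Building that kind of overlay is simpler than it sounds: a Google Earth overlay is just a KML file, which is plain XML with one placemark per data point. A minimal Python sketch of the idea (the coordinates and counts below are my own placeholders, not CERT.at’s data) .:

    # Minimal KML generator for a Google Earth infection overlay.
    # Country coordinates and infection counts are placeholders.
    infections = {
        "China":  (35.0, 103.0, 310000),
        "Brazil": (-10.0, -52.0, 280000),
        "Russia": (60.0, 90.0, 190000),
    }

    placemarks = []
    for country, (lat, lon, count) in infections.items():
        placemarks.append(
            "<Placemark><name>%s: %d infections</name>"
            "<Point><coordinates>%f,%f,0</coordinates></Point></Placemark>"
            % (country, count, lon, lat))  # note: KML wants lon,lat order

    kml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>%s'
           '</Document></kml>' % "".join(placemarks))

    open("conficker.kml", "w").write(kml)

Feed a series of these files (one per time slice) into Google Earth and you have the animation.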

11:00 Show me the evil: A graphical look at online crime – Dave Deitrich (Team Cymru)

Bad neighborhoods (most infected systems): China, the US, Germany and Brazil top the list. However these are the locations with the largest number of online users; when looked at as a percentage of online users, these countries are way down the list.

Charts of infections by IP range and statistical information are available on Team Cymru’s website.

Bot-net controllers are concentrated in the more developed regions – US, Germany, Korea.

When tracking bot activity (Conficker), the number of bots reporting in dips on a Sunday due to systems in companies being powered off. The times at which systems report in also support this assumption, as most bots were reporting in within working hours (depending on the region of the bot).
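That day-of-week pattern is easy to check for yourself if you have sighting timestamps. A quick sketch (the timestamps are invented examples, not Team Cymru data) .:

    from collections import Counter
    from datetime import datetime

    # Bot check-in timestamps; these stand in for real sensor data.
    sightings = ["2009-06-28 14:02", "2009-06-29 09:15", "2009-06-29 10:40"]

    by_weekday = Counter()
    for stamp in sightings:
        when = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        by_weekday[when.strftime("%A")] += 1

    print(by_weekday)  # a dip on Sunday points at powered-off office machines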

DDoS tracking showed that targets in the US and Europe were most popular. However everybody is affected, as service providers spread the cost of DDoS protection across all of their customers.

The lack of data collection in areas such as Africa is a problem when forming statistics. If more data were available then a more complete picture could be put together.

11:30 Internet Analysis System (IAS): Module of the German IT Early Warning System – Martin Bierwirth/Andre Vorbach

Designed as part of a project to protect critical infrastructure. Passive sensors are located at partner sites and provide (filtered and anonymised) information on network traffic. These sensors are mostly in government networks, but are also installed in other partner networks.

Every five minutes a sensor transmits about 560 KB of data (roughly 160 MB per sensor per day).

IAS data privacy

  • Does not monitor data with personal reference
  • Does not reassemble TCP flows
  • Independent of IDS systems
  • Revoke context of a packet after building its counters

Manual research on the data is done through a program developed in co-ordination with German universities. Tracking the GET requests of outbound HTTP traffic allowed BSI to check the agent strings and see whether users are running insecure versions of software like Firefox. Charting showed versions from 1.0 through to the latest (at the time of the data) 3.0.x version.
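The agent-string check itself boils down to a regular expression and a counter. Roughly like this (the sample agents are made up, and real logs obviously need more careful parsing) .:

    import re
    from collections import Counter

    # Count Firefox versions seen in HTTP User-Agent strings.
    agents = [
        "Mozilla/5.0 (Windows; U) Gecko/2008 Firefox/1.0.4",
        "Mozilla/5.0 (X11; Linux) Gecko/2009 Firefox/3.0.11",
    ]

    versions = Counter()
    pattern = re.compile(r"Firefox/(\d+\.\d+[\d.]*)")
    for agent in agents:
        match = pattern.search(agent)
        if match:
            versions[match.group(1)] += 1

    for version, count in versions.most_common():
        print(version, count)  # anything below the current release is a worry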

IAS provides the data needed to ask the right questions. Using profiling, it is possible to automatically identify traffic patterns that are out of the norm for the network.

Example: a DNS traffic spike across 3 separate networks. Looking at the traffic, it appeared to be a large number of spoofed DNS requests for the “.” record. Source IP addresses were spoofed. It was discovered that the 3 networks were being used as part of a reflective DDoS attack. Tracking across multiple partners makes it easier to see when attacks are targeted; when monitoring only government networks it is easy to think that all attacks are targeted ones.
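Stripped down to its core, that kind of profiling is a baseline plus a threshold. A toy version (the counts and the 5-sigma threshold are my own illustration, not the IAS algorithm) .:

    from statistics import mean, stdev

    # DNS queries-per-minute from one sensor; the numbers are invented.
    baseline = [220, 240, 210, 230, 225, 215, 235]
    current = 1900

    avg, sd = mean(baseline), stdev(baseline)
    threshold = avg + 5 * sd  # 5 sigma; tune per network

    if current > threshold:
        print("DNS anomaly: %d qpm (baseline %.0f +/- %.0f)" % (current, avg, sd))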

Aggregated data extends the perspective of individual networks.

Prospects: Deploy more sensors, Automatic correlation of data

13:30 New Developments on Brazilian Phishing Malware – Jacomo Piccolini

Changes from 2008 to 2009

Malware levels have remained around the same: 30,000-40,000 unique samples per year. Although the amount has remained constant, the quality has increased. Less usage of Delphi and Visual Basic in programming; a move towards Java / C++.

Targeted attacks are rare. Most attacks spread through the use of spam email giving links to users (themed attacks).

Many malware samples use simple attacks that alter the local hosts file to redirect victims to phishing sites (password stealing, etc…). This is not rocket science; keep it simple. Most examples concentrate on adding hosts entries for Brazilian banks.
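Checking for this technique is about as simple as the attack itself: compare the hosts file against a watchlist of domains that should never resolve locally. A sketch (the path is the Windows default, and the watchlist entries are examples only) .:

    # Flag hosts-file entries that override domains on a watchlist.
    HOSTS = r"C:\Windows\System32\drivers\etc\hosts"
    WATCHLIST = {"bancodobrasil.com.br", "itau.com.br"}  # example entries

    for line in open(HOSTS):
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        ip, names = fields[0], fields[1:]
        for name in names:
            host = name.lower()
            if host.startswith("www."):
                host = host[4:]
            if host in WATCHLIST:
                print("Suspicious override: %s -> %s" % (name, ip))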

InfoSEG

Attacks against government information systems using malware. Access to the Brazilian database gives you access to all information on a citizen (car registration, job information, income, tax information, travel permits, visas, family ties, personal information, arrest history, picture, gun permit information, signature, etc…). The malware uses a simple overlay to steal a user’s logon. These logons were on sale on the streets of São Paulo, Brazil for $1,000.

This was covered in the media in Brazil. I can’t find the footage in English, however the original is available on YouTube.

Newer malware installs a BHO (Browser Helper Object). This routes all of your traffic through a single proxy when the traffic meets specific criteria (in most cases, access to bank websites). The proxy then redirects to another phishing server to steal the credentials. When this malware was run through VirusTotal, no AV vendor detected it.

There is also a stronger focus on extortion malware: blocking access to documents, pictures and a range of other files. The machine then warns the user to use a specific AV product to clean the system. For $10 you can buy the (scam) AV product to regain access to your files. Low-cost extortion. The malware “locks” the files through a “GetActiveWindow” call that blocks the applications; all files are still present, and if you copy them to another machine they are all available again.

DNS cache poisoning is also still an issue. On 11th April 2009 one of the biggest banks in Brazil suffered a DNS poisoning that redirected traffic to a phishing site. The issue was resolved in 7 hours (it was on a Sunday).

Brazilian Initiatives

Defensive Line website (www.linhadefensiva.org) –> a community blog that deals with end-user infections. It acts as a CSIRT (ARIS-LD) and also provides an anti-malware tool (bankerfix). The team is looking for assistance in developing their new software solution; please check the website if you can assist them.

Malware Patrol (www.malwarepatrol.net) –> provides blocking lists for many applications (MTA, proxy, DNS…). These are updated on an hourly basis and made available for any purpose. Some files tracked by Andre are still online after 4 years of tracking them.

Federal Police: Operation Trilha –> 691 law enforcement agents, 139 arrest warrants, 136 search warrants, 12 Brazilian states (28 cities). In addition to the arrests in Brazil, 1 arrest was made in the US.

Malware is an alternative source of income, and for some “just a job” – a social issue.

My take from this presentation is that the Brazilian malware economy is very internally focused: Brazilians specifically targeting other Brazilians, and Brazilian banks in particular. This is something to watch in the future however; as the malware authors become more proficient they are beginning to branch out into other markets.

FIN
Overall the FIRST conference was a great experience and gave me a different perspective from my normal conferences (mostly hacker-style cons like CCC, Blackhat, etc…). The fact it was in Japan just adds to the overall effect of course. I wish I had more time to look around and really take in the culture. Hopefully I can arrange some time next year to make a long tour of Japan with my lovely girlfriend. I’m accepting donations 😉 Here’s to next year’s FIRST conference in Miami.

21st FIRST Conference – Day 4

Today’s a short day due to the AGM taking place this afternoon. I’m hoping to make the most of the time and visit the Kyoto Imperial Palace or the craft market. Still, you never know what’ll happen here.

09:00 A Railway Operator’s Perspective on the lessons of the Great Hanshin-Awaji Earthquake – Takayuki Sasaki

Mr Sasaki (from JR West Railways) talked about how to manage an unexpected disaster recovery situation. The earthquake hit 7.3 on the Richter scale and caused in excess of 10 trillion yen in damages. 6,433 people lost their lives in the disaster.

3 rival railway companies (JR West, Hankyu, and Hanshin) teamed up to ensure that passengers were able to travel easily.

Crisis control methods put in place after the earthquake .:

  • Introduction of urgent earthquake detection and alarm system
  • Anti-seismic reinforcement work
  • Establishment of a second Shinkansen General Control Center

11:00 In The Cloud Security – Greg Day

Estimated in excess of $1 trillion lost through cybercrime and data loss in 2008.

Q1 2009 – 12 million new IPs zombied since January –> a 50% increase over 2008

Koobface – more than 800 new variants in March 2009

In 1990 Dr Solomon’s Antivirus had signatures for 296 viruses (+61 variants). Wouldn’t it be nice to go back to those days.

Historically AV has been designed to protect against single large threats. The new model is many smaller viruses and variants designed to be re-wrapped and re-used over and over again.

How the virus landscape has changed .:

  • 1987-2000 “Method” –> Viruses were all about proving that it was possible. Viruses were slow and not released often.
  • 2000-2003 “Speed”–> Speed of spread was the key. These viruses could be seen just by looking at the backbone and seeing the increase in traffic.
  • 2004-xxxx “Volume” –> The idea of hitting a target with many different variants until one bypassed your defenses.

Proactive behavioral controls work by examining processes to form a baseline of their memory usage and data flows. If larger amounts of data are seen that would cause a buffer overflow, then an alarm is triggered. Even in a standard environment it’s not easy to implement a complete lockdown of the system using this technique, which means reaching a mid-point between the newer techniques and the older style signature and change-control checks (monitoring registry changes, etc…).
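In pseudocode-ish Python the baseline idea looks something like the following. This is a sketch of the concept only (with an invented 1.5x slack factor), not any vendor’s engine .:

    class ProcessBaseline:
        # Learn a per-process memory ceiling, then alarm on breaches.
        def __init__(self, slack=1.5):
            self.peak = {}      # process name -> peak usage seen while learning
            self.slack = slack  # tolerated growth factor over the learned peak

        def learn(self, name, rss_bytes):
            self.peak[name] = max(self.peak.get(name, 0), rss_bytes)

        def check(self, name, rss_bytes):
            baseline = self.peak.get(name)
            if baseline and rss_bytes > baseline * self.slack:
                print("ALERT: %s at %d bytes (baseline %d)"
                      % (name, rss_bytes, baseline))

    monitor = ProcessBaseline()
    monitor.learn("svchost.exe", 24000000)
    monitor.check("svchost.exe", 90000000)  # fires: far above the learned peak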

The huge increase in malware in recent years has been caused by a move away from smart people writing and using their own malware. The people capable of writing malware are now moving to a better business model where they sell the tools for creating a virus/malware. Tools such as Shark can create many combinations of malware depending on the settings selected and packers used. This lowers the barrier to entry, so that anybody can have customized malware.

** 30 minutes in…. first mention of cloud (yes, he really did use the Final Fantasy character in his presentation)

The cloud can be used to improve response times by returning metadata on files seen, in order to track possible malicious traits. This information is sent up to the servers and compared to other gathered data. The existing DNS protocol is used to transfer the metadata to the server and carry the response. If the suspicious file is known to the AV vendor then they can alert you through a DNS response (signed and encrypted). All data sent is anonymous.
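Mechanically, the lookup is just a DNS query for a name derived from the file’s hash. A sketch of the pattern (the zone name is invented, and a real implementation signs and encrypts the exchange) .:

    import hashlib
    import socket

    def reputation_lookup(path):
        # Ask a (hypothetical) AV cloud zone about a file's hash.
        digest = hashlib.md5(open(path, "rb").read()).hexdigest()
        qname = "%s.avcloud.example.com" % digest  # invented zone name
        try:
            verdict = socket.gethostbyname(qname)  # response encodes a verdict
            return "known to the cloud: %s" % verdict
        except socket.gaierror:
            return "unknown to the cloud"

    print(reputation_lookup("sample.exe"))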

In the case of targeted attacks, the aggregation point in the cloud is used to map the fingerprint of possible attacks. Artemis clients send fingerprints ~2 hours before samples are received.

The trend has moved away from self-replicating malware and moved more to self-infecting (user infects themselves by visiting a website)

Using fingerprinting it’s also possible to recognise sites that are infected with hidden iframes, and develop blacklists on the fly from information gathered from all users. Once a threat fingerprint has been identified, it can be detected on any site that has been infected using the same method/injection. The same scenario can also be used with message reputation to improve existing spam mail filtering.
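A crude version of that fingerprinting: find hidden iframes, normalise them, and hash the result so the same injection can be recognised on any other site. A sketch only; the detection regex is deliberately naive .:

    import hashlib
    import re

    HIDDEN_IFRAME = re.compile(
        r'<iframe[^>]*(?:width="?0"?|height="?0"?|display:\s*none)[^>]*>',
        re.IGNORECASE)

    def fingerprint_injections(html):
        # One stable hash per hidden iframe found in the page.
        prints = []
        for tag in HIDDEN_IFRAME.findall(html):
            normalized = re.sub(r"\s+", " ", tag.lower())
            prints.append(hashlib.sha1(normalized.encode()).hexdigest())
        return prints

    page = '<iframe src="http://bad.example/x" width="0" height="0"></iframe>'
    print(fingerprint_injections(page))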

http://www.trustedsource.org –> Gives an overview of the data seen from these initiatives

The problem of gathering more intelligence: most customers request more intelligence on what is currently being seen, however they are not happy to be one of the sources providing information. This has hopefully been resolved by restricting the information sent to hashes of what is seen, and not files or other identifiable information.

13:30 Chinese Hacker Community and Culture, Underground Malware Industry – Zhao Wai (KnownSec)

Part 1: Chinese hacker culture
Part 2: Underground industry
Part 3: How do we fight back ?

Trends (Blackhats and Whitehats)

  • 1998-2003 : Server-Side
  • 2002-2007 : Client-Side (Image format, Office documents, IE)
  • 2006 – xxxx : 3rd Party –> For Profit

3rd Party applications are very weak on security and full of bugs.

Sometimes legitimate security researchers in China have their research accidentally released or used in attacks. This is because other hacking groups attack the researchers’ networks, or the researchers sell their exploits through untrustworthy services.

Blackhats in China

  • Age: Young (maybe not), Talented, and rich
  • Most are not in big cities
    • Why? Economic related ?
    • More fired engineers – more hackers ?
  • Blackhat Culture: Baidu zhidao forum, QQ (Chinese social networking)
  • Underground industry: everybody has a role
  • Not using IRC anymore. More often on public forums or QQ
  • International ? Not yet ?

There are currently 300 million internet users in China. People in China are scared to use e-commerce websites due to fear of being hacked and having their information stolen. Malware is not only written in China; it is also a problem for users in China.

China is not only the world’s factory, but also the world’s malware factory.

New Kanji and phrases have been created in the last few years to describe malware functions (e.g. GuaMa: Hooking Horse, injecting malcode into websites).

Many different teams are involved in the malware process. Separate teams deal with exploitation, sales, and various other parts of the process.

In 2006/2007 the majority of malware used known vulnerabilities in 3rd party applications. Currently 0-day attack code is used against various 3rd party products. Recent malware versions like to exploit logic bugs (Baidu toolbar, snapshot).

0-day market underground

  • They love client-side vulnerabilities
    • Easier to find
  • Price is better than ZDI
    • Researchers still prefer ZDI
  • Sometimes 0-days are leaked to the market
    • Security professionals
    • Professional whitehats

The KnownSec team talked last year at Xkungfoo (XCon) about the SNS worm plus drive-by download attacks. This year there is a worm spreading through the QQ social network.

The KnownSec team are trying to crawl Chinese websites to find and label the malicious ones. However, they’re not Google (more water vapour than cloud computing, and easy to DDoS). It was found that the majority of malicious servers are located in a small area of China; some large areas have no malicious servers at all. Within China, .com domains account for 52% of the malicious domains (followed closely by .cn with 32%). The team download around 858 downloaders per day, of which 16% are brand new. The average detection rate of these downloaders on VirusTotal is 60%. In tests, McAfee-GW-Edition, AntiVir and eSafe find the most samples; ClamAV finds only 10% of the samples. 50% of the malicious websites are not flagged by the Google Safe Browsing filters.

Some information on these figures can be found on the KnownSec website.

This research has been used to create a better filtering and protection engine. Webmon API –> Currently in private BETA

Day 4 was a little shorter than the others, but it means I might finally get some sleep. I should be over my jet-lag by the time I fly home, I’m sure 😉

21st FIRST Conference – Day 3

Today’s big thing is the Volatility training from Andreas Schuster. I’ll be trying to attend some talks after the workshop is over where possible.

09:00 Attacks Against the Cloud: Combating Denial-of-Service – Jose Nazario

Cisco anticipates that 90% of bandwidth usage will eventually be for video services. Historically the Internet has been a nice-to-have; as services such as communications move to the Internet, it becomes a need-to-have. Availability and quality of service become more serious issues. Telephony services are a key point: people need to communicate, especially in the event of an emergency, and 911 emergency services need to be available all the time.

The generic definition of the cloud – “If somebody else’s stuff breaks, your business grinds to a halt”

Internet attack sizes have grown from 200 Mbps in 1999 (based on Code Red traffic levels) to 50 Gbps in 2009. The trend appears to be a rough doubling of attack bandwidth year on year.
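As a quick sanity check on that claim: 200 Mbps to 50 Gbps is a 250x increase over ten years, which works out slightly under a strict doubling every year .:

    # Growth implied by 200 Mbps (1999) -> 50 Gbps (2009)
    factor = 50000 / 200            # 250x, both in Mbps
    annual = factor ** (1 / 10)     # ~1.74x per year
    print(round(annual, 2))         # a doubling roughly every 15 months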

Historical effects of Denial of Service attacks

  • 1999 –> Routers die under forwarding load
  • 2002 –> Servers die under load
  • 2005 –> Renewed targets in infrastructure (DNS, peering points)
  • 2008 –> Web services

Service providers are now able to cope with the attack traffic; in turn this has caused the problem to move on to the customer. With the advent of caching servers and load balancers, attackers have moved to methods designed to overload backend servers with hard-to-resolve requests. This bypasses the protections put in place in most organisations.

The change of Denial of Service from fun to criminal

  • 1999 –> IRC wars lead to widespread adoption of primitive DoS/DDoS tools
  • 2001 –> Worms: Code Red, Nimda, SQL Slammer
  • 2004 –> Rise of the IRC botnets: Online attacks go criminal
  • 2007 –> Estonia attacks cause governments around the world to worry about cyberwar
  • 2009 –> Iran election results lead to DDoS, Twitter, etc…

Providers have responded by protecting their own infrastructure and using manual blackhole routing as a protection measure (2001). Providers then started to offer DoS protection as a service to customers using BGP injections (2003). Finally, providers began protecting key converged services such as VoIP and IPTV using multi-Gbps inline filtering (2007).

09:30 I am a Kanji: Understanding security one character at a time – Kurt Sauer

Teaching somebody about security is not unlike teaching somebody how to understand Japanese kanji. There are around 50,000 kanji, and they can have multiple definitions. Not an easy task.

Great presentation. I’d suggest looking at the slides/video to get the full effect. Nothing ground breaking, but definitely worth the time.

11:00 Windows Memory Forensics with Volatility – Andreas Schuster

Like yesterday, this is more of a workshop than a presentation. I’ll post up any links / information that might be useful, however. You can find the slides and workshop information (along with other good information) on Andreas’ blog.

Part 1: Refresher – Memory Fundamentals, acquisition, kernel objects, analysis techniques
Part 2: Using Volatility – Volatility Overview, analysis w/ Volatility
Part 3: Programming – Developing plug-ins for volatility

PART 1: Refresher

Live Response

  • Focus on “time”
  • Acquisition and analysis in one step
    • Untrusted environment
    • Not repeatable
  • Tools tend to be obtrusive

Research from Aaron Walters and Nick Petroni (2006) details the percentage of RAM that changes when a system is in an idle state and during a memory dump –> the Blackhat DC presentation from 2007 covers this information. Research shows that 90% of freed process objects are still available after 24 hours of idle activity.

The focus of live response is on main memory, network status and processes. When performing memory acquisition, installing agents prior to the incident can help to minimize the impact.

Expert Witness Format – used primarily by EnCase. libewf project – Joachim Metz (http://sourceforge.net/projects/libewf/)

“powercfg /hibernate on” –> enables hibernation from the command line on Windows systems. More information on the powercfg command can be found on Microsoft TechNet.

Basic memory analysis: piping memory through strings and looking for interesting results –> remember to cover both ASCII/ANSI and UNICODE (a bare-bones sketch follows the list below)

  • Many false positives
  • Memory is fragmented
  • Conclusions hard to form (hard to prove connections between strings, discoveries might be misleading)
  • Obfuscation
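For reference, here’s that first pass in miniature: pulling printable ASCII and UTF-16LE (Windows UNICODE) strings out of a raw image. A sketch, with the dump filename assumed .:

    import re

    def dump_strings(path, min_len=6):
        # Yield printable ASCII and UTF-16LE strings from a memory image.
        data = open(path, "rb").read()
        ascii_re = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
        utf16_re = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len)
        for match in ascii_re.finditer(data):
            yield match.group().decode("ascii")
        for match in utf16_re.finditer(data):
            yield match.group().decode("utf-16-le")

    for s in dump_strings("memory.dmp"):
        if "http" in s:  # crude triage: look for URLs first
            print(s)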

List walking: find the initial pointer and traverse the doubly linked process list –> also applies to singly linked lists and trees

  • Easy to subvert (anti-forensics)

Scanning: define a signature and scan the entire memory for a match (a toy version is sketched below)

  • Slow
  • OS dependent (patches can break things)

There are also a number of hybrid methods used.
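A toy scanner makes the trade-off obvious: it has to touch every byte of the image, which is exactly why this approach is slow. The 4-byte tag below is a stand-in for illustration, not a real pool signature .:

    def scan(path, signature=b"Proc"):  # stand-in tag, purely illustrative
        # Report every offset where the signature occurs in the image.
        data = open(path, "rb").read()
        offset = data.find(signature)
        while offset != -1:
            print("hit at offset 0x%08x" % offset)
            offset = data.find(signature, offset + 1)

    scan("memory.dmp")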

PART 2: Using Volatility

Originally developed in 2006 as a tool called FATKit. This was then used as the basis for VolaTools in 2007. The VolaTools project was transferred to Microsoft through a company buyout, and the project was later restarted and completely reprogrammed as an open-source project – Volatility.

The SVN version is available from http://code.google.com/p/volatility/

Standard options for Volatility are .:

-h, --help            show this help message and exit
-f FILENAME, --file=FILENAME (required) XP SP2 image file
-b BASE, --base=BASE  (optional, otherwise best guess is made) Physical offset (in hex) of directory table base
-t TYPE, --type=TYPE  (optional, default="auto") Identify the image type (pae, nopae, auto)

For help on specific functions, volatility <function_name> -h

http://www.forensicswiki.org/wiki/List_of_Volatility_Plugins contains a list of published plug-ins for the framework

A number of functions within Volatility have been updated to improve speed and reliability. An example of this is the thrdscan/thrdscan2 function. The updated versions usually run faster than the originals. It may be best to compare output between the older/newer versions to ensure that you are receiving consistent output and not false positives/negatives.

volatility modules -f <memory_dump> –> outputs a list of loaded modules (driver files). The modscan2 function will give a more comprehensive list of loaded (and previously loaded) modules (if the metadata is still present in memory). The moddump function can be used to extract a module from the memory dump for further analysis.

Using scanning modules is very helpful as it will reveal information on not only what was loaded at the time of the memory dump, but also scan the entire memory for any matching signatures of what may have been loaded/unloaded prior to the dump. This is the case for not only modules, but also processes as well.

The pstree plug-in is useful to output a process tree (ASCII style).

However not all processes are listed. By using psscan2 with the “-d > output.file” option you can create a file that can be opened in ZGRViewer. This shows a full graphical output of the process list, including the start and end time of each process. ZGRViewer also offers very helpful search functionality to help find processes within the tree.

Once you’ve found the PID of a process you can then use the files and dlllist functions to find the open files and loaded DLLs for the process in question. By running getsids you can examine which account started the process, as well as whether it was interactive or not. You can also use regobjkeys to examine which registry keys were in use by each PID. By running the procdump function (with the PID of the suspicious executable) you can extract the process into a file for further examination.
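That per-PID routine chains together nicely. A rough automation sketch using the function names from the session; the output parsing is approximate, since column layouts differ between Volatility versions .:

    import subprocess

    DUMP = "memory.dmp"  # assumed image name

    def vol(*args):
        # Run a Volatility function against the dump and return its output.
        cmd = ["volatility"] + list(args) + ["-f", DUMP]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    # Pull PIDs from the process list, then gather per-process detail.
    for line in vol("pslist").splitlines()[2:]:  # skip headers (approximate)
        fields = line.split()
        if len(fields) < 3:
            continue
        pid = fields[2]  # PID column position varies by version
        open("pid_%s_dlls.txt" % pid, "w").write(vol("dlllist", "-p", pid))
        open("pid_%s_sids.txt" % pid, "w").write(vol("getsids", "-p", pid))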

Examining open connections and sockets is also simple. Volatility has the connections function (along with connscan/connscan2) and the sockets function (along with sockscan/sockscan2). These functions output the PID of the creating process so it can be mapped back to processes discovered in the process listing performed earlier. As with the majority of the scanning functions, they may find information on sockets/connections that existed but were no longer open when the memory dump was performed.

As part of the VolReg plug-in you can also output things like the LSA secrets, as well as password hashes. These rely on information from the hivescan/hivelist functions. Performing the hashdump is something that’s been covered before on various blogs. If you want a rundown of how to perform a complete dump of password hashes, I’d suggest checking out Chris Gates’ post on the carnal0wnage blog, or the forensiczone walkthrough.

Some interesting plug-ins to explore –> cryptoscan (to search memory for TrueCrypt passphrases), malfind (to search for possible injected code), dmp2raw/raw2dmp (for converting to/from Crash Dump format).

This was a great workshop; I’d love to have spent longer going through things. However the point was to demonstrate how to use the Volatility tool, not to teach n00bs like me how to do proper analysis. I’ll have to take the time to run through the examples and slides again to really get to know the process fully.

16:00 Incident Response and Voice Firewalls for Voice Services – Lee Sutterfield

Protection of dedicated VoIP services by using a specifically designed Voice Firewall alongside existing firewall technologies. Unlike a normal IP-based firewall, a Voice Firewall is designed to restrict the actions of a user within the policy of the organisation. This includes restrictions such as limiting the destinations that can be dialled from specific extensions (blocking long distance or international calls) and limiting the time and duration of calls. It also offers the ability to restrict incoming calls to prevent access to dial-up services from unwanted numbers. This can prevent war-dialling attacks by restricting based on the source of the call and the pattern of inbound calls.

A Voice Firewall is primarily designed to prevent things like toll fraud instead of defending the system from attack. It also offers better visibility into the usage of your voice network and provides alerts when certain trigger levels are exceeded.

Another usage of Voice Firewalls is blocking numbers to stop crank calls or active war-dialling attempts. Any calls from such a number are then blocked and the attempt is logged. Using this technology it is possible to protect against, and log attempts at, vishing attacks.
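Under the hood this kind of policy is just rule matching over call attributes. A simplified sketch of how such rules might be expressed (the fields and rules are invented for illustration, not taken from the talk) .:

    from datetime import time

    # Each rule: (description, predicate). First match blocks the call.
    RULES = [
        ("no international dialling from reception",
         lambda c: c["src_ext"] == "100" and c["dst"].startswith("+")),
        ("modem lines unreachable outside office hours",
         lambda c: c["dst_type"] == "modem"
                   and not time(8, 0) <= c["start"] <= time(18, 0)),
    ]

    def screen(call):
        for description, matches in RULES:
            if matches(call):
                return "BLOCK: " + description
        return "ALLOW"

    call = {"src_ext": "100", "dst": "+4312345678",
            "dst_type": "voice", "start": time(22, 30)}
    print(screen(call))  # blocked: international call from reception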

Voice Firewalls can be used to assist with the following security points .:

  • Unauthorised modems
  • Remote access (modem) attacks
  • Phone service misuse
  • Toll fraud
  • Harassing/Threatening calls
  • Malicious software (triggering calls to pay numbers)
  • Vishing attacks
  • Denial of Service (against VoIP services)
  • Bomb threats (call recording)
  • Improve voice uptime
  • Stop data leakage
  • Reduce telecom total cost
  • Baseline/plan/optimize (new VoIP deployments)
  • Policy driven call recording

Voice Firewall technology addresses age-old vulnerabilities and provides lock-down mechanisms. This increases your ability to deliver improved incident response and preventative services to the enterprise, as well as via Managed Security Services for Voice (MSSVs).

Well, we’ve hit the mid-point of the conference. It’s been great so far. Tonight we have the FIRST conference banquet. Hope they have sushi, or else there might be a riot 😉