Saturday, October 22, 2016

Exploit generation and JavaScript analysis automation with WinDBG

The material for our talk "Exploit generation and JavaScript analysis automation with WinDBG" with Miklos Desbordes-Korcsev can be found at the links below. The 2016 recording:

Tools we developed and used in the demos:

Slides will be available here:

and on Hacktivity's website. I will update this post when everything is online.


Saturday, September 24, 2016

Offensive Security - Advanced Web Attacks and Exploitation (AWAE) review

I had the opportunity to attend OffSec's AWAE training this year at BlackHat. The challenge started with the registration: having monitored past years' events, I knew that if I didn't sign up in the first 24 hours, I would have to wait one more year. I went for my employer's approval way ahead of the registration opening, and luckily I had it a few days before. As soon as I got the BH newsletter announcing that registration had opened, I threw away everything and ran to the computer to sign up. 21% of the course was already full!!! Luckily I could secure my place, and I later read that this year the course filled up in 8(!) hours. If you want to sign up, you have to be fast.

What background do you need?

I'm still a guy working in incident response, so I don't do much web application testing (I do exactly 0); all of my background knowledge comes from OSCP and OSCE. If you took those courses, you will be absolutely fine. What I missed was JavaScript coding experience. I can read JS but can't write it, which made things a bit harder, but it was still manageable. My advice is to learn some JS before this course.

The course:

There is one review of this course on OffSec's website with the name "Story telling with muts", but that link is no longer valid. The title, however, is 100% right. There are 10+ case studies in this course to walk you through interesting techniques, chains of exploits, etc., and muts has a story for each of them, which makes the course really interesting: you not only get some in-depth knowledge, but also a couple of cool tales :)

I can't really split the course up into particular days like I did with AWE; it's about the same level of difficulty through the entire 4 days. It does increase a bit, but overall there are no big spikes. Compared to AWE this course is lighter, and not in a negative sense. The fact that you don't need to build ROP chains manually, debug the kernel, and hunt for bits in memory makes it much more brain friendly, and you don't fall apart after day 2 like at AWE :) You will learn and see plenty of examples of real hacker mindset and out-of-the-box thinking. You will see vectors that maybe you didn't even think of before (e.g. XSS via SNMP), and some really cool exploit chains, where the exploits by themselves are not serious, but applied together they give you remote code execution. Again, I think this course's main strength is not the techniques you learn or the bunch of 0-days you get (yes, you leave the course with a handful of them), but the mindset: you will look at webapps differently after this course. The course is 100% hands-on; they build upon the basics, and some theory is covered on the fly, but it's fully practical, which you can't say about most other courses. A typical OffSec course, and you will have plenty of chances to practice, practice and practice. You will be much more comfortable testing web apps after these 4 days.

The bonus I got out of this training is that before it I hated playing with webapps; it simply didn't look interesting. This course changed my view: testing webapps can be really cool. :)

Mati (@muts) and Steven Seeley (@steventseeley) were the instructors, probably the best two people you can get for this kind of course.

The exam:

Well... it's not yet available.

Some closing thoughts:

I started my InfoSec journey back in 2012, quickly became aware of the Offensive Security trainings and exams, and after reading plenty of reviews and articles I knew even then that I wanted to be an OSCP and potentially go on with OSCE and the others. At that time all of this looked nearly impossible to achieve and felt far, far away in the big unknown, like a dream. Even the OSCP reviews freaked me out, not to mention the rest. I started with OSWP in 2012, and then every year I managed to do one more, slowly progressing towards the end: 2013 - OSCP, 2014 - OSCE, 2015 - OSEE. I did other courses over the years, but these were definitely the most rewarding ones, especially since this was my big dream when I started. Even without OSWE at the moment, I'm very happy, and it feels really good when you work hard (and in these cases really, really hard) towards some big goal and finally achieve it. This is not the end of the journey, but definitely a major milestone in my life.

Finally I want to say thank you:
1. To my family, who always supported me, and accepted the fact that I have less time for them when preparing for these courses / exams.
2. To Offensive Security for creating the trainings.
3. To my employer for paying for the courses.

Thursday, June 23, 2016

Defend against malware with fake debugger windows

This post will be quite similar to my previous post about making your desktop look like a VM. The idea builds on another trick malware commonly uses to detect a malware analyst's machine: enumerating the open windows and checking for titles such as "OllyDBG", "WinDBG", "Wireshark", etc.; it will frequently look for debuggers or various malware analysis tools.
I was wondering: what if I place an empty window with such a name, will malware detect it? I didn't want to hook into functions this time, but to place an actual window. My other criterion was that the window should be hidden, so it doesn't disturb people but can still serve its purpose.

First let's see how malware detects and enumerates window titles. As described here and in many other places, it uses the "FindWindow" Windows API call to look for names. This is a quite simple function with two parameters:

  HWND WINAPI FindWindow(
    _In_opt_ LPCTSTR lpClassName,
    _In_opt_ LPCTSTR lpWindowName
  );

As per MSDN, it will search for the string specified in lpClassName, which is the registered class name of the window, or if that is NULL, it will search based on the window title, which is the second parameter. Malware will typically search by class name. If you wonder how frequently this is used, here is a little help for finding malware which uses this technique. Cuckoo sandbox recently added some malware behaviour signatures to its feature set, and one of them looks for exactly this; it adds the description below to the analysis page. Sites that use Cuckoo sandbox to perform malware analysis are indexed by Google, so simply search for this string and you will get a bunch of samples:
"Checks for the presence of known windows from debuggers and forensic tools"
For example:
Various malware will behave differently if it finds the window: it might exit, try to close the window, etc.
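To make the check concrete, here is a minimal Python sketch of the same lookup malware performs, done via ctypes. It only works on Windows (elsewhere it just returns None), and while "OLLYDBG" is OllyDbg's well-known registered class name, "WinDbgFrameClass" is used here as an assumed example class name.

```python
import ctypes
import sys

def find_window(class_name):
    """Return a window handle for the given class name, or None.

    Uses the same FindWindow API malware calls; Windows-only,
    returns None on any other platform."""
    if sys.platform != "win32":
        return None
    hwnd = ctypes.windll.user32.FindWindowW(class_name, None)
    return hwnd or None  # FindWindow returns 0 when nothing matches

# Class names commonly associated with analysis tools
# ("OLLYDBG" is OllyDbg's registered class; the other is an assumption)
for name in ("OLLYDBG", "WinDbgFrameClass"):
    if find_window(name):
        print(f"{name}: found - analysis tool (or a fake window) present")
    else:
        print(f"{name}: not found")
```

A fake-window defender only has to make this call succeed; the caller cannot tell a hidden empty window from the real tool.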

With this, let's create our window.

I still can't really do Win32 API programming, so I used the power of copy-paste. I found two very good articles about making your first empty window in Windows:
You can read through these; the only thing I want to highlight is that the window will not be visible by default: you either create it with the appropriate flag or call ShowWindow, as specified in the example. What if we don't show it? Well, as you would expect, it's not visible; you don't see it on the tray or anywhere else. The program runs and you can find it in the task manager, but there is no window. What happens if we call FindWindow then? Based on numerous tests I did, it will always find it, regardless of whether you search by class name or title. This is great, because we can start a process which is completely hidden from the average user, consumes about 500 kB of memory and about 0% CPU, but which malware will find. Cool! Now we can hope that it will actually exit when it finds it.
The next question is: do we need multiple processes to create multiple windows with different names? Based on my tests: NO. You can create multiple windows with different names, hide all of them, and the FindWindow function will find all of them.
I also tried what happens if I start WinDBG, which will register the same class name, but apparently it doesn't matter. You can run both programs at the same time, close them, and it won't cause any conflicts.
There is another method, described here, to find windows based on the title with GetWindowText, and that also works: it will find the window as well.

If I were an AV developer, I would hook the FindWindow function and check if a process is explicitly looking for these windows; if yes, I would raise a flag, because it's definitely not normal. Not sure if anyone does this, but it would be a fairly easy thing to do.

I created a PoC code to create an OllyDBG and WinDBG window, and another one, which will use FindWindow to find them. Both of them are available from my Github page:

The related Visual Studio projects are:

  • FindWindow
  • FakeDebuggerWindows

Wednesday, June 8, 2016

About IOCs...

I usually stay away from blogging about my opinions, but I am so fed up with the IOC hype that I have to write this down. (My next topic might be threat intel, along the same lines.)

First about sharing:
There is a huge number of possible IOC types, like IPs, domains, modified registry keys, created files, etc. Still, what you most commonly find in any malware analysis paper, IOC feed, or generic sharing is: IPs, domains, filenames, hashes. No more. Now there are multiple issues when it comes to sharing:
Why can't it be shared in a standard format, like STIX? Typically if you read a report, you will have this information at the end of the article as text, which might be OK when whoever posts the details doesn't have the option to upload files, but when it comes to big security vendors who often publish IOCs in a separate PDF(!!!!) there is no excuse. I really don't understand why it can't be in a CSV as a minimum, or preferably in STIX XML format. Vendors should have the ability to generate these, and you could more easily feed them to other tools, without difficult copy-paste tricks from a PDF.
My second big headache is: why does almost every big vendor share only MD5 hashes of malware samples??? It's not just that MD5 is more and more subject to various collision attacks, but some logs you have might only contain SHA-256 hashes of executables seen in an environment, so you have no chance at all to search that data. Is it so much trouble to calculate additional SHA-1 and SHA-256 hashes besides the MD5 and share those? Why does it hurt anyone? As a backup you can hope that the sample gets uploaded to VirusTotal and you can get the hashes from there, but that's an incredibly big amount of additional, unnecessary work (even with a script) to get that information from another source - if you can at all.
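Computing the extra digests really is a one-liner per algorithm; a minimal Python sketch of hashing the same sample bytes all three ways:

```python
import hashlib

def all_hashes(data):
    """Return MD5, SHA-1 and SHA-256 of the same bytes.

    The two extra digests cost essentially nothing once the
    file has been read, so there is no excuse to share only MD5."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

sample = b"test"  # stand-in for a sample's file contents
print(all_hashes(sample))
```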
Why don't vendors share other IOCs in a summarized form? Like registry entries created, etc. You might find them if you read through the 20+ page article, but who has the time to read through every single malware report?
With that, my request to vendors who commonly share plenty of IOCs:
  1. Please share them in STIX format, or at a minimum in a CSV
  2. Please share SHA-1 and SHA-256 hashes as well beside the MD5 (share all 3, not just 1 of them)
  3. Please summarize other IOC information as well, not just IPs, domains, filenames and hashes
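Even the minimal CSV option takes almost no effort to produce. A hedged sketch (the field names and sample values here are my own choices for illustration, not any vendor's or standard's):

```python
import csv
import io

# Hypothetical IOCs, as a vendor report might list them
iocs = [
    ("domain", "evil.example.com"),
    ("ip", "203.0.113.10"),
    ("sha256", "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["type", "value"])  # simple header; STIX would carry far more context
writer.writerows(iocs)
print(buf.getvalue())
```

Anything machine-readable like this can be fed straight into other tools, unlike a PDF appendix.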
More on hashes:
Some tools allow you to search for files across your environment. Guess what!? Some products use proper SHA-256 and some use MD5, but most products can search only one of them! If you consider how sharing is done in the community (you only get one of the 3 popular hashes), this is a setup for failure. By design. I don't want to write more about this.

On usefulness:
I don't think IOCs are evil. They can be good, and you can potentially find some badness based on them, so they have their place in incident response, but they won't solve core security problems, and IOCs won't be the ultimate solution for everything. The problem comes when vendors start to rely 100% on this data. For example, calling something a 'hunting' module when it's only an advanced IOC search with a nice GUI is, I think, really bad, and something conceptually went wrong with the entire product.
These days vendors seem to think that IOCs will save the world and are extremely important, and everyone wants to sell you more and more IOCs for huge amounts of money. I really mean huge! I could go into how the importance of threat intel in general is overrated, but that might be another post.
Just think about how helpful these IOCs really are:
  • hashes - considering the speed of sample generation (half a million new samples / day), do people really think that 2-3 particular hashes are important? They are the oldest and dumbest signatures AVs can use. If you honestly think about it, hashes are basically a poor man's malware signatures.
  • IP - most of the time there are 100+ websites hosted on a single IP; if one site becomes infected and there is a popular harmless site on the same IP, you could immediately flag a significant part of your network traffic as malicious. Good luck figuring out which part might really be bad on a big network... you will give up as soon as you see the amount of data.
  • filenames - somewhat useful if the name is unique, but almost the same issue as with hashes; could be slightly better, however.
  • domain names - probably the most useful ones in general.
In summary, I think IOCs are just poor signatures, way too overrated by most vendors, especially when it comes to threat intel. They can be useful, but the hype around them these days is a shame. If you buy them, they are also way too expensive compared to standard AV signatures, and as noted above they are not better.

Tuesday, May 3, 2016

JavaScript deobfuscation: criminal case against you.wsf

A few months ago I came across a malware dropper which was a JavaScript inside a Windows Script File (WSF). The filename was "criminal case against you.wsf". Typical... I'm a bit fed up with the naming, but anyhow... The file format itself is somewhat interesting, because it can contain many types of scripts and run them in Windows if there is an interpreter. But this is not what I want to write about. The deobfuscation itself is not super hard, but after doing it I came across two really useful online tools which can do this in a matter of seconds, and this is why I'm making this quick post.

This one was new to me, and it is pretty handy:

I already knew about this, and it was useful in the last step:

I really recommend everyone checking them out.

For completeness, here is the file which contains the original JS and then each step of the decoding; it had 4 layers of obfuscation.
criminal case js deobfuscate.txt
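The layers followed common dropper patterns. As a purely illustrative sketch (not the actual sample's code), one popular trick is to hide strings as shifted character codes and rebuild them at runtime; reversing such a layer in Python is trivial:

```python
# Hypothetical obfuscated payload: each value is ord(char) + 3,
# a classic character-code shift seen in many JS droppers
codes = [90, 86, 102, 117, 108, 115, 119, 49, 86, 107, 104, 111, 111]
shift = 3

decoded = "".join(chr(c - shift) for c in codes)
print(decoded)  # -> WScript.Shell
```

Peeling one layer like this typically exposes the next eval/decode stage, which is why the online tools above can unwrap such samples in seconds.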

Friday, February 19, 2016

CVE-2015-8285 - QuickHeal webssx.sys driver DOS vulnerability

A few months back I decided to practice the skills I learned in the AWE course, in order to keep them fresh in my head as "active" knowledge. In general I don't have too much time these days, but I sacrificed some for this. I also wanted to find a new vulnerability instead of writing exploit code for an existing one, which didn't make things easier.
As I don't plan to do this kind of activity too often, I decided to look for bugs manually by reversing a kernel driver and looking for possibly vulnerable IOCTL codes. I was probably lucky, or these bugs are really frequent, but after some trials with a few products I found one in QuickHeal AV 16. There is a DoS vulnerability in the webssx.sys driver. Here is the document I made with all the details:

and here is my POC code:

Due to the reasons described in the document I didn't find a way to turn this into a privilege escalation exploit, so if someone sees a possibility, please let me know :) Even so, it was still a very good experience, and I definitely learned new stuff.

This is also my first ever bug and CVE. That part was also a very interesting journey: how to report a bug, get a CVE assigned, etc. It didn't go smoothly, and I had a few challenges initially contacting the vendor, but it all got sorted out in the end. It took about 3 months from my initial attempts at submitting the details to QuickHeal until they actually released a fix.

Sunday, October 25, 2015

Make your desktop a fake Virtual Machine to defend against malware

I had an idea about one and a half years ago. Various malware check for the presence of debuggers and check if they run inside a VM, and if they find any of those, they exit. I wondered why we can't use this to our benefit. I always run into solutions which try to hide the VM / debugger so the malware can be analyzed, but I never heard about the other way around. What if we make our regular desktop look like a VM, so that if malware detects it, it will simply exit without doing any harm? I know that the number of VM-aware malware samples is decreasing, but we could still stop a fairly good amount of bad stuff, which is always a benefit and, in my mind, fits perfectly into the "multi-layer defense" approach.
In the past one and a half years I ran into this idea twice, so luckily I'm not the only one who thought about it. Apparently talk about this is so rare that I have to highlight both:
  1. The HitmanPro Alert protection tool integrates this feature. I don't know to what extent and what exactly it fakes, but it is certainly a welcome approach.
  2. The Rapid7 article "Vaccinating systems against VM-aware malware" talks about this.
I always wondered why various AV solutions don't integrate this technique into their toolset like HitmanPro did; I think it shouldn't be that hard. It could probably raise some compatibility issues with fake services, files, registry keys, etc., but some of those could be done without any harm.
To prove my point, I wanted to develop something that could do this for me. The easy way would have been to actually place the files, registry keys, etc. in the system, like Rapid7 did, but I think that might not be easily rolled back, and I wanted to see if there is another way. I was always interested in how SSDT hooking works in rootkits, so I decided to go down that road: I could learn how SSDT hooking works at the code level and maybe also produce something useful. This or a similar approach could be integrated into AV software as well; Symantec, for example, already does a bunch of SSDT hooks on x86 systems, so why not add a few more?
I also wanted to see how complex this is to do. The last time I wrote a line of C code was back in 2002 when I was a university student, and it was basic C, nothing to do with the Windows API, so the challenge was given.
Luckily the Internet is full of examples, so it took 3 afternoons to develop the first working version of my SSDT hooking kernel driver. It took a couple more to come up with the final POC, which I will post on GitHub. The POC code can give false information about registry keys, files and devices when malware tries to look for them. I stopped here, because
  1. I think it proves my point
  2. I don't have time to develop it in more detail for other checks, and I also don't have the need
I know that there are plenty of other checks against VMs (special ports, MAC address, red pills, screen resolution, CPU cores, etc.) which could be much harder to fake, but many malware samples check for files and registry entries as well, and my goal wasn't to develop a complete solution.

With that, my question is: if I, who had "0" experience with developing kernel drivers or any Windows app in C, achieved what I will show below in only a few days, then a professional developer could do this much faster and probably easily add a whole lot of other features as well - so why does no one do it?

Let's look at my kernel driver (I will not go into the details of how to write a driver, etc.; you can find plenty of articles about that on Google). You can download it from my GitHub:

In order to install it you need to create a service:
sc create fakevm binPath= [path to your .sys file] type= kernel

After that, it can be started / stopped with the following commands:
sc start fakevm
sc stop fakevm

You need to have admin rights to do this.

Once it's started, it hooks three functions to alter the execution flow, and by default it starts giving false information about files, devices and registry keys. I added a few IOCTLs to the driver (just to learn a bit about that as well), so we can control it: turn hooks on/off, and choose whether to make our desktop a fake VMware VM or a VirtualBox VM (by default both are enabled). As I'm more comfortable with Python, I created the controller in that language; here is the usage of that script:

Usage: [options]

  -h, --help    show this help message and exit
  -w, --vmware  Switch fake VMware ON/OFF
  -x, --vbox    Switch fake VBox ON/OFF
  -o, --hook    Hook all functions
  -u, --unhook  Unhook all functions
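Under the hood a controller like this just opens the driver's device object and sends DeviceIoControl calls. Here is a hedged sketch of that pattern; the device path and IOCTL code below are made up for illustration (not the driver's real values), and on non-Windows systems the function simply returns False:

```python
import ctypes
import sys

DEVICE_PATH = r"\\.\fakevm"      # hypothetical device name
IOCTL_TOGGLE_VMWARE = 0x222004   # hypothetical control code

GENERIC_READ = 0x80000000
GENERIC_WRITE = 0x40000000
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = -1

def send_ioctl(code):
    """Open the driver's device and send one control code.

    Returns True on success, False on failure or on non-Windows."""
    if sys.platform != "win32":
        return False
    k32 = ctypes.windll.kernel32
    handle = k32.CreateFileW(DEVICE_PATH, GENERIC_READ | GENERIC_WRITE,
                             0, None, OPEN_EXISTING, 0, None)
    if handle == INVALID_HANDLE_VALUE:
        return False
    returned = ctypes.c_ulong(0)
    ok = k32.DeviceIoControl(handle, code, None, 0, None, 0,
                             ctypes.byref(returned), None)
    k32.CloseHandle(handle)
    return bool(ok)

print(send_ioctl(IOCTL_TOGGLE_VMWARE))
```

Each command-line switch of the real controller would map to one such IOCTL sent to the driver.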

I used pafish to get ideas about what to fake, and also to verify my driver. This is pafish's output without the driver being loaded (note that I used a VM for the tests, because I have a MacBook, plus if I had a Windows machine I wouldn't want to BSOD it with a poorly written kernel driver):

We can see that the VBox checks and most of the VMware checks didn't find anything. Of course a few VMware checks were successful, because I'm in a real VMware VM. This is what happens when I start the service:

We can see that many of the checks now report that VBox or VMware was found. In the meantime the Windows OS seems to work properly: I can browse the web, open files, etc.; it doesn't seem to cause any harm. In WinDBG we can see the SSDT hooks:

With the controller in place, this is what happens when I turn off VBox, for example:

The hooks are still there, but they become a bit more transparent (I still fake VMware indicators).

I haven't done extensive testing of my driver; it might have bugs (most likely it has), and I didn't prepare it for a whole lot of error scenarios, but I think overall it's quite stable, and if someone wants to, they can take it further. This is just a simple POC to prove that with a simple kernel driver we could defend against some amount of malware, and I really wish this concept were widely used by AV vendors, because it can certainly add to the protection level. Every single step counts.