Binary Ninja Licenses For Everyone

Last fall we ran an experiment where we gave away a $99 Binary Ninja personal license to every student in our Fuzzing For Vulnerabilities course. The feedback was very positive, and being able to visualize the execution path made the code coverage exercise much easier for students to understand.

After some discussion we’ve decided this is something we want to keep doing, because it both supports a great project and gives students something they can continue to use and learn with even after the training ends. Our next public offering of Fuzzing For Vulnerabilities will be at Blackhat USA 2017, and we look forward to seeing everyone there.

BlackHat USA 2017 Training

It seems like only a few months ago that Kyle and I were in Las Vegas teaching at Blackhat, but it’s already time to start registration for this year’s courses. We’re pleased to announce that we’ll be back for the 7th time teaching at a Blackhat event!

Each time we’ve taught Fuzzing For Vulnerabilities, we’ve incorporated student feedback to improve our material, and this offering is no exception. We’ve added several exercises and will be reworking the smart fuzzing exercise to make it easier for students new to fuzzing to wrap their heads around. Block-based fuzzing is a hard concept to demonstrate, especially the part about how an input is decomposed into smaller chunks. The original exercise used a custom fuzzer that I wrote and required the students to define each of the building blocks that make up the complex structure of an MPEG4 file. Ultimately this left more people scratching their heads than we would like.

Rather than redesign the fuzzer, we’re going to switch to BooFuzz (an updated and refactored version of Sulley). This allows us to focus on how an input is decomposed at a high level, and we’ll pair that with several examples showing how to define the building blocks that make the approach so powerful. Stay tuned for additional details.
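To give a flavor of what those building blocks look like, here’s a rough sketch using BooFuzz’s Sulley-style block syntax. The chunk layout and field names are made up for illustration (the class exercise uses the real MPEG4 structures), and the exact API details can differ between BooFuzz versions:

from boofuzz import *

# A toy length-prefixed chunk: [4-byte size][type][brand][version].
# Illustrative only -- not the MPEG4 definitions used in the exercise.
s_initialize("toy_chunk")
s_size("body", length=4, endian=">")   # length of the "body" block, big-endian
if s_block_start("body"):
    s_static("ftyp")                   # fixed chunk type the parser expects
    s_string("isom")                   # brand string BooFuzz will mutate
    s_dword(0x200, endian=">")         # 32-bit version field, also mutated
s_block_end("body")

print(s_get("toy_chunk").render())     # render one instance of the assembled chunk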

If you’re interested in registering for the course, you can find the registration information on the Blackhat site. If you would like us to come to your facility and give a private training, please contact me and we can set something up.

Free Binary Ninja License

When I updated my Fuzzing For Vulnerabilities course for Blackhat USA 2016, I switched the exercise virtual machine from Windows 7 x86 to Ubuntu 16.04 x86_64. I did this for several reasons, including licensing issues, better performance, and the general lack of fuzzing tools and OS support on Windows. This meant that I could now provide exercises and hands-on experience with tools like AFL and Radamsa, but one major downside was that the free demo version of IDA Pro does not support 64-bit executables. The course doesn’t heavily rely on reverse engineering, but it is helpful for some of the exercises and really fundamental to visualizing code coverage results. Until today I didn’t have a good answer to this problem.

After talking with the guys over at Vector 35, I’ve decided that not only am I going to integrate Binary Ninja into the course, I’m going to give every student a complimentary personal license! This way every student will be able to disassemble and inspect all of the exercise binaries and learn a little about writing scripts to visualize the results of their fuzzing. I’m really happy to see how far this tool has come and to support some great friends and colleagues.
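As a rough idea of what those scripts look like, the sketch below paints covered basic blocks green. The binary name and the coverage file format are hypothetical, and the exact Binary Ninja API calls may vary between versions:

from binaryninja import BinaryViewType, HighlightStandardColor

# Hypothetical inputs: the exercise binary and a text file with one covered
# address per line, as produced by whatever coverage tool you use.
bv = BinaryViewType.get_view_of_file("exercise_target")

with open("coverage.txt") as f:
    covered = [int(line, 16) for line in f if line.strip()]

for addr in covered:
    # Highlight every basic block that contains a covered address.
    for block in bv.get_basic_blocks_at(addr):
        block.set_user_highlight(HighlightStandardColor.GreenHighlightColor)

# Save the analysis database so the highlights persist in the UI.
bv.file.create_database("exercise_target.bndb")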

If you’re interested in registering for the course and claiming your free Binary Ninja license, please email me at cbisnett@gmail.com.

Fuzzing For Vulnerabilities - October 2016

After a great response on Twitter, I’ve scheduled another offering of Fuzzing For Vulnerabilities! This public offering is not part of a conference, which means we can offer it at a significant discount. After talking with a number of interested folks, we decided on a location in Columbia, Maryland. Hopefully this location works for most everyone. If you would like us to host the training in your area or at your company, please contact us.

Details

This is the same course we taught at Blackhat USA 2016 and we received great feedback. We’ve added a number of exercises including fuzzing with AFL (American Fuzzy Lop) and Radamsa.

For a complete list of topics, prerequisites, and trainer biographies, check out the course details page.

Day 1

Students start the course by learning fuzzing fundamentals. We find it’s best if everyone understands all the parts of a successful and scalable fuzzing framework. Once all of those parts have been discussed and set up, the students will write their first fuzzer (dumbfuzz). After running this fuzzer against the most recent release of VLC Media Player, students will find a 0-day vulnerability! All this on the first day.

Day 2

Day 2 picks up where we left off and continues to build upon what’s already been taught as we dive into format-aware (smart) fuzzers. The remainder of day 2 covers a number of advanced topics to get students on the path to mastery. We will discuss AddressSanitizer: how it works, how it can help find additional vulnerabilities, and how to set it up. We also cover a number of other topics including code coverage, corpus distillation, in-memory fuzzing, and differential fuzzing. Finally, we will discuss crash analysis and how to automatically triage thousands of crashes down to the unique vulnerabilities behind them.
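To give a taste of that last topic, a first pass at triage often just buckets crashes by hashing the top few symbolized stack frames. Here’s a minimal sketch of that idea (the crashes/*.log layout and the frame regex are assumptions for illustration, not course material):

import hashlib
import re
from collections import defaultdict
from pathlib import Path

# Match symbolized frames like "#0 0x7f00... in vlc_atomic_sub ..." from an
# AddressSanitizer or GDB backtrace.
FRAME_RE = re.compile(r"#\d+\s+0x[0-9a-f]+\s+in\s+(\S+)")

def bucket_id(log_text, depth=3):
    """Hash the top few frames so similar crashes land in the same bucket."""
    frames = FRAME_RE.findall(log_text)[:depth]
    return hashlib.sha1("|".join(frames).encode()).hexdigest()[:12]

buckets = defaultdict(list)
for log in Path("crashes").glob("*.log"):
    buckets[bucket_id(log.read_text(errors="ignore"))].append(log.name)

# Print the buckets, largest first, with a representative crash for each.
for bucket, crashes in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
    print("%s  %d crash(es)  e.g. %s" % (bucket, len(crashes), crashes[0]))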

Pricing

  • Location: ANRC Training Center, 6716 Alexander Bell Dr #100, Columbia, MD 21046
  • Dates: October 20th-21st
  • Seats: 20/20
  • Cost: $1950 (until 10/14); $2150 (after 10/14)
  • Schedule:
    • 0800 - Start
    • 1030 - Break
    • 1200 - Lunch (provided)
    • 1430 - Break
    • 1700 - End

Registration

To register or for additional information, please contact me at cbisnett@gmail.com or on Twitter at @chrisbisnett.

Bring Back Copy-Paste

Have you ever tried to copy and paste your password from your password manager (you do use a password manager, right?) only to find that the developers removed the ability to paste for “security reasons”? First, I’m not sure why pasting my password makes me less secure, but I can only assume this was invented to deter users from saving their passwords in passwords.txt files. In reality this doesn’t secure those users, since they will still use those files and will just choose passwords that are easier to type.

With the increase in popularity of password managers like LastPass, 1Password, and KeePass, users like myself are creating increasingly complex passwords that are very annoying to type but also more secure. Sometimes I still find myself needing to copy and paste the password because of poor or “interesting” design decisions on the part of the website.

I keep running into this, so I figured I would pass along my usual solutions for getting around this security anti-pattern. Both of these methods require you to use the Chrome console. To open the console with the textbox already selected, right-click on the control and select Inspect, then switch to the console view by pressing Esc or clicking the “Console” tab. The following commands assume the offending textbox is still selected.

Removing the onPaste() handler

Often the developer did something simple like setting a handler that gets called for the paste event. It can be something as simple as this:

<input type="text" value="" id="myInput" onpaste="return false" />

The easiest way to remove this is to clear the handler function. Type this command into the console:

$0.onpaste = null

Removing all event listeners

Sometimes the developer was extra evil and added multiple methods to keep the user from pasting. In these cases I usually just nuke all the event listeners. You can get a list of the registered listeners in the right-hand pane of the console. Each of those entries can be nuked with the same command as above; just remember to prefix the event name with on (for example, paste becomes onpaste).

If the site uses jQuery (and what site isn’t using it these days ;), then simply run $($0).unbind() and all of the jQuery-bound event listeners will be gone. I’ve had this backfire before, when one of the listeners was testing the password strength and would only let you submit the form if the password was good enough, so you may find that you have to do more hacking ;) or only nuke certain event listeners.

More evil?

I know there are likely even more evil tricks developers are using. If you come across one, let me know on Twitter @chrisbisnett. Maybe one day developers and organizations will realize this practice does not make their users more secure.

Try it out!

Just try to paste into this textbox, I dare you!

BlackHat USA 2016 Training

After the success of last year’s training, I’m pleased to announce that Blackhat is having us back to teach Fuzzing For Vulnerabilities at this year’s conference in Las Vegas! Kyle Hanslovan will be there again to offer his expertise in fuzzing and Python.

Last year we ran a “game” where all the students applied their fuzzing techniques against an extremely vulnerable application, written by yours truly. The students competed against each other to see who could find the most unique crashes, and the winner took home a prize. They really enjoyed the competition and the chance to practice their new skills. I’m going to update the application this year to add even more bugs >:)

If you’re interested in the class content, you can find all the details and register on the training page on the Blackhat site. I look forward to seeing everyone there.

Use-After-Free in VLC 2.1.x

tl;dr: I found a vulnerability in VLC while creating a training course on fuzzing. I reported it to the VLC maintainers, but they declined to fix it. I contend it’s a security vulnerability. Here is the evidence; you decide.

I’ve been meaning to write this post for a while. A year and a half ago I started developing a training course to teach a vulnerability discovery technique called fuzzing. As part of the course I wanted students to gain first-hand knowledge and find a vulnerability in a “real” deployed application. I feel that this helps validate the technique and prove its usefulness.

I started browsing recent CVE reports looking for a vulnerability that I could try to find using a fuzzer. This is not as easy as it seems, because there were two factors I had to keep in mind: first, I wanted the application to be something that people actually use, and second, it needed to be a bug that a beginner could find by fuzzing in a day or two. Yeah, I know, it’s unlikely I’ll find something I can use. I took a quick look and nothing popped out at me.

I did, however, find a target: VLC. Since VLC is a media player, it has to parse complicated audio and video formats from untrusted sources. Fuzzing a file can be as simple as replacing a percentage of the file’s bytes with random values.
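A mutation fuzzer along those lines fits in a few lines of Python. The file names, mutation ratio, and VLC command line below are illustrative, not the exact script from the next paragraph:

import random
import shutil
import subprocess

SEED_FILE = "sample.mp4"   # a known-good input (hypothetical name)
MUTATION_RATIO = 0.01      # overwrite roughly 1% of the bytes

def mutate(src, dst, ratio=MUTATION_RATIO):
    data = bytearray(open(src, "rb").read())
    for _ in range(max(1, int(len(data) * ratio))):
        data[random.randrange(len(data))] = random.randrange(256)
    with open(dst, "wb") as f:
        f.write(data)

for i in range(1000):
    mutate(SEED_FILE, "fuzzed.mp4")
    try:
        # Play the mutated file headlessly; a negative return code means the
        # player was killed by a signal (e.g. SIGSEGV), so keep that input.
        proc = subprocess.run(["cvlc", "--play-and-exit", "fuzzed.mp4"],
                              timeout=10, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
        if proc.returncode < 0:
            shutil.copy("fuzzed.mp4", "crash_%04d.mp4" % i)
    except subprocess.TimeoutExpired:
        pass  # hangs can be interesting too, but skip them here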

After writing a simple Python script to replace random bytes in input files, I got started fuzzing. It didn’t take long (a couple hundred iterations) for the fuzzer to start finding crashes. These are the details of one of them.

The Details

Opening the video renders corrupted playback for a few seconds, then the window resizes to a black background with the VLC icon for a few more seconds before VLC crashes and closes.

As I said before, I found this vulnerability in the fall of 2013, and at that time the current version was 2.1.4. I’ve tried building newer sources, but after many hours of fighting to get a successful build I’ve given up, so the following code excerpts are from the official 2.1.4 source release.

I reported this bug to the VLC maintainers but they declined to fix the vulnerability and instead downplayed it since the bug doesn’t affect the 2.2.x or 3.x branches. While it is true that it doesn’t affect the current 2.2.0 or 3.0.0 nightlies at the time of publishing, the 2.2.x branch was vulnerable when I reported it. From my perspective that doesn’t seem to matter much since the download page still serves up the 2.1.x binaries which are vulnerable.

The full bug report contains a complete AddressSanitizer output with symbols resolved. AddressSanitizer reported this crash as a Use-After-Free vulnerability. This is a stack trace for the faulting instruction:

=================================================================
==5667== ERROR: AddressSanitizer: heap-use-after-free on address
0x602a0005ea48 at pc 0x7f008f7971f7 bp 0x7f0077e74280 sp 0x7f0077e74278
WRITE of size 8 at 0x602a0005ea48 thread T36
    #0 0x7f008f7971f6 in vlc_atomic_sub vlc/src/../include/vlc_atomic.h:340
    #1 0x7f008f7618cc in vout_ReleasePicture vlc/src/video_output/video_output.c:436
    #2 0x7f008f6fc75d in vout_unlink_picture vlc/src/input/decoder.c:2435
    #3 0x7f008f704c97 in decoder_UnlinkPicture vlc/src/input/decoder.c:206
    #4 0x7f006e156a22 in ffmpeg_ReleaseFrameBuf vlc/modules/codec/avcodec/video.c:1081
    #5 0x7f006d56a433 in compat_free_buffer libav/libavcodec/utils.c:563
    #6 0x7f006ce52d6e in av_buffer_unref libav/libavutil/buffer.c:115
    #7 0x7f006d56a465 in compat_release_buffer libav/libavcodec/utils.c:570
    #8 0x7f006ce52d6e in av_buffer_unref libav/libavutil/buffer.c:115
    #9 0x7f006ce58a82 in av_frame_unref libav/libavutil/frame.c:285 (discriminator 2)
    #10 0x7f006d56c948 in unrefcount_frame libav/libavcodec/utils.c:1414
    #11 0x7f006d56cd0e in avcodec_decode_video2 libav/libavcodec/utils.c:1496
    <snip>

Looking at the stack trace it’s pretty clear this has something to do with releasing a picture instance. This is the structure of a picture_t:

struct picture_t
{
    video_frame_format_t format;
    plane_t         p[PICTURE_PLANE_MAX];     /**< description of the planes */
    int             i_planes;                /**< number of allocated planes */
    mtime_t         date;                                  /**< display date */
    bool            b_force;
    bool            b_progressive;          /**< is it a progressive frame ? */
    bool            b_top_field_first;             /**< which field is first */
    unsigned int    i_nb_fields;                  /**< # of displayed fields */
    picture_sys_t * p_sys;
    struct
    {
        vlc_atomic_t refcount;
        void (*pf_destroy)( picture_t * );
        picture_gc_sys_t *p_sys;
    } gc;
    struct picture_t *p_next;
};

Based on the functions in the stack trace and the faulting function vlc_atomic_sub, I assumed I could narrow this down to the refcount member since it’s the only atomic field in the structure. A little work with GDB proved this to be correct:

(gdb) p &((struct picture_t*)0)->gc.refcount
$1 = (vlc_atomic_t *) 0x128
(gdb) p sizeof(struct picture_t)
$2 = 328

This aligns with the AddressSanitizer output, which reported that the faulting instruction attempted to write 8 bytes at offset 296, or 0x128. This isn’t the instruction that attempted the write, but it’s related; I’ll explain more in a minute.

This also explains the maintainers’ comment that this causes an assertion to fail in the 2.1.x branch. While I didn’t hit any assertion (due to my build settings), I expect the assertion the maintainers are referring to is the one in PictureDestroy. This function is defined in src/misc/picture.c and is assigned to the pf_destroy field of the garbage collection structure when the picture_t structure is created.

static void PictureDestroy( picture_t *p_picture )
{
    assert( p_picture &&
            vlc_atomic_get( &p_picture->gc.refcount ) == 0 );

    vlc_free( p_picture->gc.p_sys );
    free( p_picture->p_sys );
    free( p_picture );
}

The developer is asserting that they are indeed freeing an instance whose reference count is zero, meaning there are no other references that still require access to this memory. A violation of this assertion means one of two things: either the instance is being freed while references to its memory still exist, or the instance has already been freed, in which case the reference count will be negative.

In this case I know it to be the second possibility, since AddressSanitizer tracks all allocations and frees. Looking at the AddressSanitizer output, we can see that the stack trace for the free also ends in PictureDestroy. I verified this by running the sample through GDB. You can see from the following output that at the time the vlc_atomic_sub function is executed, the reference count field is zero:

Breakpoint 2, __asan_report_error (pc=140737298543095, bp=140736845898368, sp=140736845898360,
    addr=105733505280584, is_write=true, access_size=8)
    at ../../../../libsanitizer/asan/asan_report.cc:628
628                          uptr addr, bool is_write, uptr access_size) {
(gdb) up
#1  0x00007ffff5053857 in __asan::__asan_report_store8 (addr=addr@entry=105733505280584)
    at ../../../../libsanitizer/asan/asan_rtl.cc:234
234 ASAN_REPORT_ERROR(store, true, 8)
(gdb) up
#2  0x00007ffff4afb1f7 in vlc_atomic_sub (v=1, atom=0x602a0005ea48) at ../include/vlc_atomic.h:340
340     return atomic_fetch_sub (&atom->u, v) - v;
(gdb) p *atom
$1 = {u = 0}
(gdb)

This atomic_fetch_sub call contains the instruction that reads the value at atom->u, subtracts 1, and writes the result back to memory. Writing the new value back to memory is what triggers the AddressSanitizer crash, since it’s writing to memory that was freed and has not yet been re-allocated.

We now have a picture_t structure which has been freed twice. Without AddressSanitizer this will crash due to heap corruption resulting from the double free.

Exploitation

I want to start this section by stating that I haven’t spent much time analyzing the exploitability of this vulnerability but for the sake of argument I’m going to walk through why I believe this vulnerability can be successfully exploited.

The sheer fact that the garbage collection structure contains a function pointer which will be invoked to free the instance is a huge step in the right direction. If an attacker is able to cause an allocation after the first free and before the second, they may be able to direct execution to a ROP gadget and pivot the stack to attacker controlled data. From that point it’s simply a matter of chaining ROP gadgets to call mprotect and return to the newly executable memory.

Conclusion

Yes, I understand that I just glossed over some very technical and often difficult-to-overcome challenges, but this blog post isn’t about exploiting this vulnerability, merely demonstrating its existence. I contend this is a security vulnerability based on the details provided above.

If you would like to have a look for yourself you can find the sample in my Dropbox. I’d be interested to know if anyone is able to turn this into a PoC or potentially a Metasploit module. Good luck and happy hunting.

Blackhat USA 2015 Training Announced

I’m excited to announce that a training course I’ve created called Fuzzing For Vulnerabilities has been selected for inclusion in BlackHat USA 2015 in Las Vegas! I’ve given this training previously at BlackHat Asia 2014 and at BlackHat trainings in Washington, DC, but never before in Vegas. Registration is open now and discounted until June 5th. Also, there are still spots open for the BlackHat Asia 2015 offering.

For now you can find the details of the class on the BlackHat training page. In the future I’m going to write a longer blog post with more details about the training.