Another Review, Another Pre-Review

Amazon.com just published my five-star review of Windows Forensic Analysis by Harlan Carvey. From the review:

I loved Windows Forensic Analysis (WFA). It's the first five-star book from Syngress I've read since early 2006. WFA delivered just what I hoped to read in a book of its size and intended audience, and my expectations were high. If your job requires investigating compromised Windows hosts, you must read WFA.

In the mail today I received a copy of Fuzzing by ninjas Michael Sutton, Adam Greene, and Pedram Amini. H.D. Moore even wrote the foreword, for Pete's sake. However, I have some concerns about this book. I performed a technical review, mainly from the perspective of someone who wants to know more about how to do fuzzing. The drafts I read seemed to be more about how to build a fuzzer. Those of you who are jumping to hit the comment button -- I don't want to hear about "you learn how to fuzz by building a tool." Give me a chance to learn how to walk before I try to invent a new method of transportation! We'll see how the book reads in printed form when I review it.

Comments

dre said…
Ok I won't say what you specifically asked me not to say.

What questions do you have?

Allow me to summarize since I've read this book a few times.

1) Fuzz testing triggers faults/errors/crashes either by injecting known metacharacters (defined as generation) into code inputs that take textual fields, or by feeding mostly random (though somewhat predetermined) binary data (defined as mutation) into code inputs that take binary streams (often TLVs). A toy sketch of both approaches follows after point 4.

2) Heuristics such as protocol dissection (properly automated with a proxy fuzzer), genetic algorithms, and bioinformatics usually make for better mutation-based fuzzers. Generation-based fuzzers rely on good pre-programmed lists (see the OWASP Testing Guide v2 for a fairly basic one).

3) Fuzzer tracking takes complexity metrics from source code or reversed binaries/bytecode (code and path coverage) and determines where to start fuzzing and when to stop. These are the same metrics QA gathers to show that a program's inputs have been tested, and to what percentage. As an example, Java programmers typically use EMMA or Clover to do this at build time (using Ant, similar to GNU make) and/or in their IDE (with, e.g., EclEMMA). See the coverage sketch after point 4.

4) Intelligent fault detection monitors the application under test while fuzz testing occurs and checks the responses. If a crash occurs, a fuzzer stepper will determine whether the crash was due to one fuzz test or to some combination of tests. To detect errors (any out-of-bounds exception) that do not cause a crash (such as off-by-ones), the application must be frozen in state while a debugger attaches and checks it before un-freezing. Additionally, if you check out the PaiMei crash binning routines, you'll note they can further automate and catalog faults by referencing stack unwind information from each recorded crash into a tree list. Each test case can then be classified by path as well as by exception address. Stack unwinding is most useful for stack-based buffer overflows, but with SBI/DBI it would be possible to check bounds on the heap, or possibly other areas of memory, as well. Dynamic binary instrumentation would additionally give insight into errors accumulating before a crash occurs, giving the fuzz tester a head start on finding the vulnerability. A toy monitor sketch follows below.
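
To make the generation/mutation split in (1) concrete, here's a toy Python sketch. The metacharacter list and the HTTP template are placeholders of my own, not anything from the book:

```python
import random

# Generation: inject known metacharacters into a textual field,
# one test case per injection.
METACHARS = ["'", '"', "%n", "%s", "../", "\x00", "A" * 1024]

def generate_cases(template, marker="{FUZZ}"):
    for meta in METACHARS:
        yield template.replace(marker, meta)

# Mutation: start from a valid binary sample and flip a few
# random bytes per test case.
def mutate_cases(sample, count=100, flips=4):
    for _ in range(count):
        buf = bytearray(sample)
        for _ in range(flips):
            buf[random.randrange(len(buf))] = random.randrange(256)
        yield bytes(buf)

if __name__ == "__main__":
    for case in generate_cases("GET /?q={FUZZ} HTTP/1.0\r\n\r\n"):
        print(repr(case))
```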
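
Point (3) is really just a stop condition: keep fuzzing while cases reach new code, stop when coverage plateaus. A rough sketch, with run_one() faked; in real life its coverage sets would come from EMMA/Clover reports or a DBI tool:

```python
import random

def run_one(case):
    # Stand-in for real instrumentation: pretend each case covers
    # a random handful of 500 basic blocks.
    return {random.randrange(500) for _ in range(20)}

def fuzz_until_plateau(cases, patience=200):
    covered = set()
    stale = 0
    executed = 0
    for case in cases:
        executed += 1
        new = run_one(case) - covered
        if new:
            covered |= new
            stale = 0
        else:
            stale += 1
        if stale >= patience:  # no new coverage lately: stop fuzzing
            break
    return covered, executed
```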
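
And for (4), the monitoring loop is the easy part; classification is the hard part. This toy monitor assumes the target reads each case on stdin, and it approximates crash binning by bucketing on the fatal signal plus a hash of stderr. Real crash binning (e.g. PaiMei's) walks the stack at the faulting address instead:

```python
import collections
import hashlib
import signal
import subprocess

def monitor(target_argv, cases, timeout=5):
    buckets = collections.defaultdict(list)
    for case in cases:
        try:
            proc = subprocess.run(target_argv, input=case,
                                  capture_output=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            buckets["hang"].append(case)  # possible infinite loop / DoS
            continue
        if proc.returncode < 0:  # POSIX: killed by a signal (e.g. SIGSEGV)
            sig = signal.Signals(-proc.returncode).name
            key = (sig, hashlib.sha1(proc.stderr).hexdigest()[:8])
            buckets[key].append(case)
    return buckets
```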

I didn't even need the book to gather most of this information, as the authors have already written an article for DDJ entitled "Requirements for Effective Fuzzing."

That's all the theory you really need to know.
