Before we announce the exciting keynotes for #FUZZING'24, we found some time to upload the recordings for the last two years by Abhishek Arya (@infernosec), @AndreasZeller, Cristian Cadar (@c_cadar), and Kostya Serebryany (@kayseesee).
//@lszekeres, @baishakhir, @yannicnoller.
FUZZING'22 Keynote by Abhishek Arya (Google) on "The Evolution of Fuzzing in Finding the Unknowns"
FUZZING'22 Keynote by Andreas Zeller (CISPA & Saarland U) on "Fuzzing: A Tale of Two Cultures"
FUZZING'23 Keynote by Cristian Cadar (Imperial College London) on "Three Colours of Fuzzing: Reflections and Open Challenges"
FUZZING'23 Keynote by Kostya "KCC" Serebryany (Google) on "Rich Coverage Signal and the Consequences for Scaling"
Recently modified code and sanitizer instrumentation seem to be among the most effective heuristics for target selection in directed #fuzzing, according to this recent SoK by Weissberg et al. LLMs show much promise for target selection, too.
But in an interesting twist, the authors find that choosing functions by their complexity might be even better at retrieving functions that contained vulnerabilities in the past.
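One way to read that finding: even a crude complexity proxy can rank likely-buggy functions ahead of trivial ones. A minimal sketch in Python, assuming we have function sources at hand (the branch count below is an illustrative proxy, not the metric used in the SoK):

```python
import ast
import textwrap

def branch_count(func_src):
    """Crude complexity proxy: count branching nodes in a function's AST.
    (Real tools compute cyclomatic complexity; this is only an illustration.)"""
    tree = ast.parse(textwrap.dedent(func_src))
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

def rank_targets(functions):
    # functions: {name: source}; most complex (highest proxy score) first
    return sorted(functions, key=lambda n: branch_count(functions[n]), reverse=True)

# Hypothetical example functions to rank:
funcs = {
    "parse_header": (
        "def parse_header(b):\n"
        "    if b and b[0] == 0x7f:\n"
        "        for x in b:\n"
        "            if x > 127:\n"
        "                return None\n"
        "    return b"
    ),
    "get_version": "def get_version():\n    return '1.0'",
}
print(rank_targets(funcs))  # parse_header ranks ahead of get_version
```

The branchy parser scores higher than the trivial getter, so a directed fuzzer would spend its budget there first.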
- Human artifacts (documentation) as oracles.
- How to infer oracles, e.g. from JavaDoc comments? What about false positives? Consider them as a signal for the user.
- The oracle problem impacts how well deduplication works.
- Metamorphic testing: explore it in other domains, e.g. performance testing!
- Mine assertions and use them in a fuzzer feedback loop
- Assertions are the best way to build oracles into the code
- hyperproperties are free oracles (differential testing)
- ML to detect vuln patterns. Use as oracles
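Several of these oracle ideas (assertions, metamorphic relations, differential testing) can be sketched in a few lines of Python; the function names below are hypothetical stand-ins, not any tool's actual API:

```python
import math

# Metamorphic oracle: no ground truth needed; instead we check a relation
# that must hold between outputs for related inputs.
def metamorphic_check(x):
    # sin(pi - x) == sin(x) must hold for any x
    assert math.isclose(math.sin(math.pi - x), math.sin(x), abs_tol=1e-9)

# Differential oracle: two independent implementations of the same function
# must agree; any divergence is a bug in at least one of them.
def reference_sort(xs):
    return sorted(xs)

def sort_under_test(xs):  # hypothetical implementation under test
    out = list(xs)
    out.sort()
    return out

def differential_check(xs):
    assert sort_under_test(xs) == reference_sort(xs)

# A fuzzer would generate inputs and feed them through these checks,
# treating any assertion failure as a bug report:
for i in range(100):
    metamorphic_check(i * 0.37)
    differential_check([(j * 7919) % 13 for j in range(i)])
```

The assertions double as the "oracles built into the code" mentioned above: a fuzzer that instruments them gets a crash signal for free whenever one fails.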
- Bugs as deviant behavior (Dawson)
- Bi-abductive symbolic execution
- Infer ran "symbolic execution" on the changed part of every commit/diff
- Post-land analysis versus diff-time analysis changed the fix rate from 0% to 70%. Why?
* Cost of context switch
* Relevance to developer
- Deploying a static analysis tool is an interaction with the developers.
- Devs would rather accept false positives and work with the team to "fit" the tool to the project.
- Audience matters!
* Dev vs SecEng
* Speed tolerance
* FP/FN tolerance
Security tooling
- ideal solution mitigates entire classes of bugs
- performance is important.
- adoption is critical!
- works with the ecosystem
Rewriting in a memory-safe language (e.g. Swift)
- View new code as green islands in a blue ocean of memory-unsafe code.
- Objective: Turn blue to green.
- We need solutions with low adoption cost.
Motivation
- Keeping dependencies up to date is not easy.
- Breaking changes are problematic for dependents.
- They are informally specified and difficult to check against your project.
- General tools don't assist with such changes.
Research challenges
- We fully trust the dependency ecosystem.
- The supply chain is reported to be full of vulnerabilities; how should a maintainer interpret this? 95% false positives?