Marcel Böhme 👨‍🔬
Software Security @maxplanckpress (#MPI_SP), PhD @NUSComputing, Dipl.-Inf. @TUDresden_de Research Group: https://t.co/BRnFNNgynB
Sep 16
Keynote by @halvarflake at #FUZZING'24 reflecting on the Reasons for the Unreasonable Success of Fuzzing.

Hacking culture in the 90s had very strong values. It had a value system outside of and different from normal society. Fuzzing was for the dumb kids.
Aug 22
Before we announce the exciting keynotes for #FUZZING'24, we found some time to upload the recordings for the last two years by Abhishek Arya (@infernosec), @AndreasZeller, Cristian Cadar (@c_cadar), and Kostya Serebryany (@kayseesee).

//@lszekeres, @baishakhir, @yannicnoller.

FUZZING'22 Keynote by Abhishek Arya (Google) on "The Evolution of Fuzzing in Finding the Unknowns"
May 29
Surprising facts about #fuzzing. A thread in slides 👇

Whitebox fuzzing is most effective because it can, in principle, *prove* the absence of bugs.

"Partition-based Regression Verification": mboehme.github.io/paper/ICSE13.p…
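To illustrate the "prove the absence of bugs" claim: here is a minimal sketch, in the spirit of (but much simpler than) the partition-based regression verification in the ICSE'13 paper linked above. It asks an SMT solver whether any input can distinguish two versions of a toy function; the toy function, the refactoring, and the use of the z3-solver package are my own illustrative assumptions.

```python
# Toy illustration (not the actual Partition-based Regression Verification
# algorithm): encode two versions of a small function symbolically and ask an
# SMT solver to *prove* that no input makes their outputs differ.
# Assumes the z3-solver package (pip install z3-solver).
from z3 import Int, If, Solver, unsat

def old_version(x):
    # old: clamp negative values to zero
    return If(x < 0, 0, x)

def new_version(x):
    # new: refactored, intended to be behavior-preserving
    return If(x >= 0, x, 0)

x = Int("x")
s = Solver()
# A counterexample to equivalence would be an input where the outputs differ.
s.add(old_version(x) != new_version(x))

if s.check() == unsat:
    print("Proved: no input exposes a regression between the two versions.")
else:
    print("Regression witness:", s.model())
```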
May 12
Recently modified code and sanitizer instrumentation seem to be among the most effective heuristics for target selection in directed #fuzzing, according to this recent SoK by Weissberg et al. LLMs show much promise for target selection, too. (A rough sketch of the code-churn heuristic follows after the links below.)

📝 mlsec.org/docs/2024c-asi…
More info about those two heuristics:
🦠 Sanitizer-guided Greybox Fuzzing: usenix.org/system/files/s…
♻️ Regression Greybox Fuzzing: mboehme.github.io/paper/CCS21.pdf
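As a rough sketch (my own illustration, not any specific tool's interface) of the "recently modified code" heuristic: rank candidate files by how often they were touched in recent git history, and point the directed fuzzer at the most recently churned code first.

```python
# Hypothetical sketch of the "recently modified code" heuristic for directed
# fuzzing: rank files by how often they changed among the last N commits, a
# simple proxy for recent churn. Function names and the ranking scheme are
# illustrative assumptions, not a specific tool's API.
import subprocess
from collections import defaultdict

def commits_touching_each_file(repo_path: str, max_commits: int = 500) -> dict:
    """Count how many of the last `max_commits` commits touched each file."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{max_commits}", "--name-only",
         "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts: dict[str, int] = defaultdict(int)
    for line in out.splitlines():
        if line.strip():
            counts[line.strip()] += 1
    return counts

def rank_targets(repo_path: str, candidates: list[str]) -> list[str]:
    """Order candidate source files: most recently/frequently changed first."""
    counts = commits_touching_each_file(repo_path)
    return sorted(candidates, key=lambda f: counts.get(f, 0), reverse=True)

if __name__ == "__main__":
    # Hypothetical candidate files; replace with your own fuzz targets.
    print(rank_targets(".", ["src/parser.c", "src/util.c", "src/net.c"]))
```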
Mar 28, 2023
After oracles for memory-safety, what's next?

- generic correctness properties
- dataflow-based properties
- "unusually large" resource consumption

* Program-specific vs generic oracles
* One input (e.g., crash) vs. distribution (e.g., performance)
* Reference implementation(s)

#Dagstuhl
- Human artifacts (documentation) as oracles.
- How to infer oracles, e.g., from JavaDoc comments? What about false positives? Consider them as a signal for the user.
- The oracle problem impacts how well deduplication works.
- Metamorphic testing. Explore it in other domains, e.g., performance testing! (A minimal sketch follows below.)
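A minimal sketch of the metamorphic-testing idea mentioned above, using a toy program and relation of my own choosing: even without knowing the correct output for any single input (the oracle problem), we can check a relation that must hold between the outputs of related inputs.

```python
# Minimal metamorphic-testing sketch. The program under test and the relation
# are illustrative assumptions, not drawn from any specific paper or tool.
import random

def program_under_test(xs: list[int]) -> int:
    # Example system under test: sum of the 3 largest elements.
    return sum(sorted(xs, reverse=True)[:3])

def metamorphic_relation_holds(xs: list[int]) -> bool:
    # Relation: permuting the input must not change the output.
    ys = xs[:]
    random.shuffle(ys)
    return program_under_test(xs) == program_under_test(ys)

if __name__ == "__main__":
    for _ in range(1000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(3, 20))]
        assert metamorphic_relation_holds(xs), f"Metamorphic violation on {xs}"
    print("No metamorphic violations found in 1000 random inputs.")
```

The same pattern plausibly carries over to performance testing, e.g., a relation like "processing a strict subset of the input should not take substantially longer than processing the whole input".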
Mar 28, 2023
Peter O'Hearn (@PeterOHearn12) on "Hits and Misses from a decade of program analysis in industry".

#Dagstuhl
- Bi-abductive symbolic execution
- Infer ran "symbolic execution" on the changed part of every commit/diff
- Switching from post-land analysis to diff-time analysis changed the fix rate from 0% to 70%. Why?
* Cost of context switch
* Relevance to developer
Mar 28, 2023
Anna Zaks on "From Bug Detection to Mitigation and Elimination".

- Static and dynamic analysis.
- Hard to ensure coverage at scale!

#Dagstuhl
Security tooling:
- an ideal solution mitigates entire classes of bugs
- performance is important
- adoption is critical!
- works with the ecosystem
Mar 28, 2023
Anders Møller (@amoellercsaudk) on "Dependencies Everywhere".

#Dagstuhl
Motivation
- Keeping dependencies up to date is not easy.
- Breaking changes are problematic for dependents.
- They are informally specified and difficult to check against your project.
- General tools don't assist with such changes.
Mar 27, 2023
Can we use LLMs for bug detection?
- compiler testing: generate programs
- "like" static analyzers:
* what is wrong, how to fix it?
* this is wrong, how to fix it?
- cur. challenge: limited prompt size
- reasoning power?
#Dagstuhl Q: Isn't it the *unusual* and the *unlikely* that makes us find bugs?
A: You can increase temperature. Make it hallucinate more.
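A sketch of the two prompting styles from the notes above, i.e., using an LLM "like" a static analyzer. The C snippet, the prompts, and the `ask_llm` placeholder are my own assumptions; no particular model or API is implied.

```python
# Hypothetical sketch of using an LLM "like" a static analyzer: either ask it
# to find a defect ("what is wrong?") or assert a defect and ask for a patch
# ("this is wrong, how to fix it?"). `ask_llm` is a placeholder for whatever
# model you have available; it is not a real API.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model of choice here")

SNIPPET = """
char buf[8];
strcpy(buf, user_input);   // suspected out-of-bounds write
"""

def what_is_wrong(snippet: str) -> str:
    return ask_llm(
        "You are a static analyzer. Point out any defect in this C code and "
        "suggest a fix:\n" + snippet
    )

def this_is_wrong(snippet: str) -> str:
    return ask_llm(
        "This C code contains a buffer overflow. Explain exactly where it "
        "occurs and rewrite the code without it:\n" + snippet
    )
```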
Mar 27, 2023
"Coverage-guided fuzzing is probably the most widely used bug finding tool in industry. You can tell by the introductory slides everyone presented this morning".
--Dmitry Vyukov

(A minimal sketch of the coverage-guided fuzzing loop is included after the list below.)

In the future, we need more practical, simple, and sound techniques for bug finding:
- Find bugs in production
- Find new types of bugs
- Develop better dynamic tools
- Develop better static tools
- Require less human time
- Report bugs in a way that improves the fix rate!
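For readers new to the topic, here is a toy sketch of the coverage-guided fuzzing loop mentioned in the quote; real tools like AFL and libFuzzer do this with compiler instrumentation and far better mutators. The toy target and the branch-set "coverage" are illustrative assumptions.

```python
# Toy coverage-guided fuzzing loop: mutate a corpus input, execute the target,
# and keep the mutant only if it exercises new coverage.
import random

def target(data: bytes) -> set[str]:
    """Toy target that returns the set of branches it executed."""
    covered = set()
    if len(data) > 0 and data[0] == ord("F"):
        covered.add("b1")
        if len(data) > 1 and data[1] == ord("U"):
            covered.add("b2")
            if len(data) > 2 and data[2] == ord("Z"):
                covered.add("b3")
                raise RuntimeError("bug reached")  # the "crash"
    return covered

def mutate(data: bytes) -> bytes:
    data = bytearray(data or b"\x00")
    data[random.randrange(len(data))] = random.randrange(256)
    if random.random() < 0.3:
        data.append(random.randrange(256))
    return bytes(data)

corpus, seen_coverage = [b"seed"], set()
for _ in range(100_000):
    candidate = mutate(random.choice(corpus))
    try:
        covered = target(candidate)
    except RuntimeError:
        print("crash found:", candidate)
        break
    if not covered <= seen_coverage:      # new branch covered?
        seen_coverage |= covered
        corpus.append(candidate)          # keep interesting inputs
```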
Apr 23, 2022
The FUZZING'22 Workshop is organized by
* Baishakhi Ray (@Columbia)
* Cristian Cadar (@imperialcollege)
* László Szekeres (@Google)
* Marcel Böhme (#MPI_SP)

Artifact Evaluation Committee Chair (Stage 2)
* Yannic Noller (@NUSComputing)

Baishakhi (@baishakhir) is well-known for her investigation of AI4Fuzzing & Fuzzing4AI. Generally, she wants to improve software reliability & developers' productivity. Her research excellence has been recognized by an NSF CAREER award, several Best Paper awards, and industry awards.
Apr 23, 2022
There are tremendous opportunities to improve the way we disseminate research in Computer Science. Our current approach is to ask three experts to decide: Accept or Reject.

Here is what's wrong with this publication model 🧵

1/n
1. Providing feedback when the work has already been completed is utterly ineffective. What do we do if reviewers point out flaws in the evaluation or experiment design? Cycle it through our conferences & journals until we get lucky. There is no consistency among reviewer expectations.

2/n
Apr 7, 2021
I asked #AcademicChatter about incentives & processes behind paper machines (i.e., researchers publishing top-venue papers at unusually high rates).

This is what I learned 🧵

TL;DR: Any incentive emerges from our community values. It is not "them" who need to change. It is us.

It was tremendously exciting to get so many perspectives from so many junior and senior researchers across different disciplines. This was only a random curiosity of mine, but it seemed to hit a nerve. I loved the positive, constructive tone in the thread.

Let's get started.
2/12
Mar 30, 2021
👇 Fuzzing Papers with Code 👇

In 2020, only 35 of 60 fuzzing papers published the code together with the paper. In 2021, let's do better! #reproducibility

Data from wcventure.github.io/FuzzingPaper/
Conferences: CCS, NDSS, S&P, USENIX Sec, ICSE, ESEC/FSE, ISSTA, ASE, ASIACCS, ICST.
1/5
Alphabetically,
* github.com/aflnet/aflnet
* github.com/aflplusplus/AF…
* github.com/andreafioraldi…
* github.com/assist-project…
* github.com/duytai/sfuzz
* github.com/fau-inf2/StarS…
* github.com/hexhive/USBFuzz
* github.com/hexhive/FuzzGen
* github.com/hexhive/retrow…
* github.com/hub-se/MoFuzz
2/5
Sep 5, 2020
[#Fuzzing Evaluation] How do we know which fuzzer finds the largest number of important bugs within a reasonable time in software that we care about?

A commentary on @gamozolabs' perspective.
(Verdict: Strong accept.)

YES! We need to present our plots on a log-x-scale. Why? mboehme.github.io/paper/FSE20.Em…
Two fuzzers. Both achieve the same coverage eventually. Yet, one performs really well at the beginning while the other performs really well in the long run. (What is a reasonable time budget? 🤔)
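A small plotting sketch (with made-up coverage curves; the numbers are purely illustrative) of why the log-scaled x-axis matters: on a linear time axis the early phase is invisible, while on a log axis you can see that one fuzzer wins early and the other wins in the long run.

```python
# Illustrative only: synthetic coverage curves for two hypothetical fuzzers,
# plotted once on a linear and once on a log-scaled time axis.
# Requires numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

t = np.logspace(0, np.log10(24 * 3600), 200)          # 1 s .. 24 h
fuzzer_a = 1000 * (1 - np.exp(-t / 60))               # fast start, plateaus
fuzzer_b = 1000 * (t / (24 * 3600)) ** 0.5            # slow start, keeps growing

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3), sharey=True)
for ax, scale in ((ax_lin, "linear"), (ax_log, "log")):
    ax.plot(t, fuzzer_a, label="Fuzzer A")
    ax.plot(t, fuzzer_b, label="Fuzzer B")
    ax.set_xscale(scale)
    ax.set_xlabel(f"time in seconds ({scale} scale)")
ax_lin.set_ylabel("branch coverage")
ax_lin.legend()
plt.tight_layout()
plt.show()
```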
Jul 2, 2020
For my new followers, my research group is interested in techniques that make machines attack other machines with maximal efficiency. All our tools are open-source, so people can use them to identify security bugs before they are exploited.

This is how it all started.

My first technical paper introduced a technique that could, in principle, *prove* that no bug was introduced by a new code commit [ICSE'13]. This was also the first of several symbolic-execution-based whitebox fuzzers [FSE'13, ASE'16, ICSE'20].

mboehme.github.io/paper/ICSE13.p…
Sep 24, 2019
Kostya's keynote: LibFuzzer hasn't found new bugs in <big software company>'s library. We didn't know why. Later we got a note that they are now using LibFuzzer during regression testing in CI and that it prevented 3 vulns from reaching production.

In Chrome, libFuzzer found 4k bugs and 800 vulns. In OSS-Fuzz, libFuzzer found 2.4k bugs (AFL found 500 bugs) over the last three years.

@kayseesee #fuzzing #shonan