Pieter Wuille
May 17
1/7 New write-up: github.com/sipa/writeups/…

Imagine you can encrypt all of an application's P2P network connections, but only some of them are deliberately made to specific peers, while others are just made randomly.

How private can you make an authentication protocol for this use case? Very!
2/7 The most surprising part: if you can make it so that a man-in-the-middle (MitM) cannot tell deliberate and random connections apart, the random ones get some protection too. A MitM can't selectively intercept when every connection could be an attempt to authenticate.
3/7 How to achieve that? All you need is to make sure responders don't learn which (unsuccessful) keys an attempt was for, and to treat failed authentication attempts the same as random connections. Then run with random keys whenever you don't desire authentication.
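To make tweets 2 and 3 concrete, here is a toy sketch of my own (not one of the write-up's protocols, which use asymmetric keys and achieve stronger properties): after an unauthenticated encrypted handshake, the responder always sends a MAC over the session transcript, keyed with a pre-shared authentication key if one is configured and with a fresh random key otherwise; the initiator checks the tag against the keys it would accept and silently continues as a random connection on failure. All function and variable names here are hypothetical.

```python
import hashlib
import hmac
import os

def auth_tag(key: bytes, transcript: bytes) -> bytes:
    # MAC over the handshake transcript, binding the tag to this session.
    return hmac.new(key, transcript, hashlib.sha256).digest()

def responder_tag(configured_key, transcript: bytes) -> bytes:
    # No key configured? Use a fresh random one: to anyone who doesn't hold a
    # matching key (including a MitM), the tag looks like any other attempt.
    key = configured_key if configured_key is not None else os.urandom(32)
    return auth_tag(key, transcript)

def initiator_check(expected_keys, transcript: bytes, tag: bytes) -> bool:
    # Try every key we would accept. On failure we just keep the connection
    # open as a random one, so nobody learns which keys we were looking for.
    return any(hmac.compare_digest(auth_tag(k, transcript), tag)
               for k in expected_keys)

# A deliberate connection authenticates; a random one silently doesn't.
transcript = os.urandom(32)  # stand-in for a real handshake transcript
k = os.urandom(32)           # key shared out-of-band with a deliberate peer
assert initiator_check([k], transcript, responder_tag(k, transcript))
assert not initiator_check([k], transcript, responder_tag(None, transcript))
```

Because every connection carries such a tag, and a tag under an unknown key is indistinguishable from one under a random key, the wire traffic of deliberate, failed, and random connections looks identical.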
4/7 Constructing protocols that achieve this, along with more far-reaching privacy properties, does not appear to be that hard. At the same time, it does seem to be a problem space that is so far unstudied.
5/7 The write-up is primarily intended as an introduction to the (interesting, IMO!) problem space and an informal description of the desired cryptographic properties. It also lists a few example protocols that achieve them, but proofs are a work in progress.
6/7 My interest is specifically in the context of P2P networks like Bitcoin's, where most connections are random, but people sometimes make deliberate connections (e.g. between their own nodes). We also want to avoid introducing discoverable identities here.
7/7 I do hope this can turn into a more formal publication at some point, but in the interest of having something to point to, this write-up is a start.

Thanks to all who contributed to the write-up and the ideas it is based on: Greg Maxwell, @real_or_random, and @murchandamus.


More from @pwuille

Jun 12, 2021
As of block 687284, Taproot signalling has reached 1815 blocks this period, meaning that absent very deep reorgs, it is guaranteed to lock in. Following that, it will activate at block 709632, probably around mid-November 2021. 🥕
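(For context, my understanding of where 1815 comes from: the Speedy Trial deployment requires 90% of the 2016 blocks in a retarget period to signal.)

```python
print(0.9 * 2016)  # 1814.4, so 1815 signalling blocks clear the 90% threshold
```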
It's been a long story, one that started in a diner in Los Altos, CA, where Greg Maxwell, Andrew Poelstra, and I had lunch sometime in January 2018.

While I had briefly left the table, they came up with a really cool idea to hide Merkle roots in P2PK-like outputs.
A few months later, a few people including me started writing a specification built around this idea as an upgrade to Bitcoin's script capabilities. There were so many ideas that we couldn't realistically include everything in one proposal.
Nov 26, 2020
@benthecarman @RubinhoISR The motivating example is hypothetical opcodes that are more expensive per byte than signature checking.

BIP342 replaces the sigops limit with a resource cost: everything is translated to bytes that you "pay" for. If a script executes N checksigs, it needs 50*N witness bytes.
@benthecarman @RubinhoISR That avoids the complex optimization problem for miners that exists in theory today (actual implementations just ignore it, though). Ideally they'd try to maximize both fee per weight and fee per sigop, which is much harder to do (and estimate) than optimizing just one metric.
@benthecarman @RubinhoISR Having a limit of 1 checksig per 50 WU of witness isn't a problem for useful scripts, as every checksig consumes at least a pubkey and a signature, together already 98 WU anyway.
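A minimal sketch of that accounting (my paraphrase of the budget rule described above, not a full tapscript validator; the names are made up):

```python
def within_sigop_budget(witness_size: int, checksig_count: int) -> bool:
    # Each input's budget starts at its witness size plus 50; every executed
    # signature check costs 50. So N checksigs need roughly 50*N witness bytes.
    return witness_size + 50 - 50 * checksig_count >= 0

# A single checksig always fits: its signature and pubkey alone already
# contribute about 98 WU of witness data.
assert within_sigop_budget(98, 1)
```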
Dec 14, 2019
I wrote this analysis of insertion, deletion, substitution, and swap errors in Bech32: gist.github.com/sipa/a9845b37c…
The context is github.com/sipa/bech32/is…, which discovered that if a valid Bech32 string ends in a 'p', it may be possible to insert or delete 'q' characters just before that final 'p' without invalidating the checksum.
This was a major oversight in Bech32's design, which I'm sorry to say was not discovered during design, or during review afterwards.

Bech32 focuses on detecting substitution errors, but no reasonable class of errors should have a detection failure rate worse than 1 in a billion.
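The weakness is easy to reproduce with the checksum code from the BIP173 reference implementation. The sketch below constructs a valid Bech32 string that happens to end in 'p' and shows that inserting a 'q' right before the final 'p' still passes verification (the example string is built here for illustration; it is not one of the originally reported cases):

```python
from itertools import product

CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def bech32_polymod(values):
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32_verify(s):
    hrp, data = s.rsplit('1', 1)
    values = [CHARSET.index(c) for c in data]
    return bech32_polymod(bech32_hrp_expand(hrp) + values) == 1

def bech32_encode(hrp, data):
    values = bech32_hrp_expand(hrp) + data
    polymod = bech32_polymod(values + [0] * 6) ^ 1
    checksum = [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]
    return hrp + '1' + ''.join(CHARSET[d] for d in data + checksum)

# Find a small payload whose encoding happens to end in 'p'.
s = next(bech32_encode('bc', [a, b]) for a, b in product(range(32), repeat=2)
         if bech32_encode('bc', [a, b]).endswith('p'))
mutated = s[:-1] + 'q' + s[-1]  # insert a 'q' just before the final 'p'
print(s, bech32_verify(s))              # valid
print(mutated, bech32_verify(mutated))  # still valid: checksum unchanged
```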
Aug 19, 2019
Just announced our Miniscript project website on the bitcoin-dev mailing list: bitcoin.sipa.be/miniscript/
In short, it's a way to write (some) Bitcoin scripts in a structured, composable way that allows various kinds of static analysis, generic signing, and compilation of policies.
Imagine a company wants to protect its cold storage funds using a 2-of-3 multisig policy among 3 executives. One of the executives, however, has a nice 2FA/multisig/timelock-based setup of his own. Why can't that entire setup be one of the multisig "participants"?
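As an illustration, in the policy language the compiler on the site accepts as input (the key names here are hypothetical), such a setup could be expressed as a single policy where the third "participant" is itself a key plus a 2FA-key-or-timelock fallback:

```
thresh(2,pk(exec1_key),pk(exec2_key),and(pk(exec3_key),or(pk(exec3_2fa_key),older(4032))))
```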
Mar 19, 2019
1) This question was clearly a bit underspecified, as some of the more creative responses showed. Despite that, my answer is (c) 5M-10M BTC. This includes all P2PK/raw multisig outputs, plus P2PKH outputs with known pubkeys, and P2SH/P2WSH outputs with known scripts.
2) That covers about 1.75M BTC in P2PK/raw multisig outputs, and over 4M BTC due to known pubkeys and scripts revealed in the Bitcoin blockchain. If you include forkcoin chains, another ~0.5M BTC becomes accessible to such a hypothetical machine.
3) I believe these numbers are evidence that the "public key hashes protect against a quantum computer" argument is (currently) bogus. Even when your own coins are encumbered with a PKH construction, you can't claim much security if 37% of the supply is at risk.
Feb 21, 2019
1/ The correct answer is (d), with a value of 2 weeks, 20 minutes, 1.19 seconds; that's a factor of 2016/2014 longer than 2 weeks.

The reason comes down to two different effects: one well known, and one pretty obscure.
2/ The first effect is that the observed "length" of the retarget window is only 2015 blocks: the window's first and last timestamps span just 2015 inter-block intervals. This means the retargeting logic is effectively aiming to make 2015 blocks take 2 weeks, so 2016 blocks will take 2016/2015 times two weeks, which is 10 minutes longer than 2 weeks. That's not all, however…
3/ In more detail, we can reformulate the question as "For which difficulty (or the block time corresponding to it, given constant hashrate) is the expected value for the difficulty adjustment value equal to 1?".
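As a quick sanity check on the headline number (my arithmetic, using the 2016/2014 factor from the first tweet):

```python
two_weeks = 14 * 24 * 3600          # 1209600 seconds
expected = two_weeks * 2016 / 2014  # expected retarget period at equilibrium
print(expected - two_weeks)         # ~1201.19 s, i.e. 20 minutes 1.19 seconds
```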
