
Dec 23
1/8 This was the final PGP* (Pretty Good Policy) for Crypto @pgpforcrypto meeting of the year, held last Wednesday in Washington, DC 🇺🇸: lightning talks + a Project Glitch fireside chat focused on where crypto policy, privacy tech, and compliance design are heading next.

Watch all of the updates here 🧵👇

Sessions:
• Policy Update — Lindsay Fraser @lindsayfraser0, @BlockchainAssn
• Enterprise Privacy — Paul Brody @pbrody, Principal & Global Blockchain Leader, Ernst & Young
• Privacy + Surveillance Policy — Tony Douglas Jr. @dao_officer, The Decentralization Research Center @TheDRC_
• Privacy Infrastructure + Compliance Frameworks — Remi Gai @remi_gai, Founder of Inco @inconetwork
• Digital ID + Anonymous Credentials — Ying Tong @therealyingtong, Independent Applied Cryptographer

Project Glitch Fireside Chat:
“Digital ID in 2026” — Wayne Chang, @wycdd, Founder & CEO of Spruce ID @SpruceID & Mike Orcutt @mike_orcutt, Founding Editor of @projectglitch_

Thank you to the sponsors of the PGP* for Crypto Briefing and Roundtable Series: @ZcashCommGrants, @hedera Hashgraph, Ernst & Young, The Digital Securities Initiative, USC VanEck Digital Assets Initiative @USC_VEDA, and @ElectricCoinCo (with additional support from DeFi Education Fund @fund_defi and Blockchain Association @BlockchainAssn).
2/8 Lindsay Fraser @lindsayfraser0, Chief Policy Officer, @BlockchainAssn — Policy Update

Highlights:
• Market structure work is converging, but key decisions remain open—especially around token classification, DeFi treatment, ethics language, and how authority is scoped across agencies.
• Lindsay flagged an emerging pressure point tied to stablecoins: whether “rewards/incentives” are permitted in secondary markets and third‑party platforms. Her point was that the GENIUS framework struck a deliberate balance (issuer yield restrictions, but not a blanket ban on downstream incentives), and that balance is now under pressure from lobbying and related regulatory activity.
• She also noted that agency staffing/quorum dynamics (including at the CFTC) are becoming part of the policy negotiation, not just a background detail.
3/8 Paul Brody @pbrody, Principal & Global Blockchain Leader, Ernst & Young — Institutions want onchain privacy, too

Highlights:
• Paul outlined EY’s two-track approach: Nightfall (for standardized, high‑volume private payments/transfers/swaps) and Starlight (for running customized business logic privately by translating Solidity contracts into zero‑knowledge circuits).
• He emphasized Nightfall’s design goals: scalability, EVM compatibility, and an open-source posture intended to support broader adoption and review.
• On progress and roadmap, he described multi‑year iteration since early prototypes, and referenced a Zcash-based version on testnet with a stated target of production deployment in 2026, alongside early ecosystem deployments/testnets (including Plume and Celo).
Dec 23
Give Me 21 Days, And I'll Give You A New Life:

[3 images]
Dec 23
🧵 SILVER STACKERS — READ THIS CAREFULLY 🥈

#PhysicalSilver #SilverInventory #COMEX
1/
Today’s COMEX silver report wasn’t “boring”.
It was quietly explosive.

Over 3 MILLION ounces of physical silver left the system.

No headlines. No panic.
Just… gone.
2/
Here’s the key part most people miss:
👉 ALL of it came from Eligible
👉 ZERO added to Registered

That’s not trading.
That’s owners removing their metal.
Dec 23
Hi Grok, can you give a (1) short and (2) long answer to: have doctors in the UK had a real-terms pay cut in the last 15-20 years, and if so, by whom? Thank you.

(1) Short answer
Yes, doctors in the UK (primarily in England, where most data focuses) have experienced a significant

1/N
**real-terms pay cut** over the last 15–20 years, estimated at around **15–25%** depending on the grade (e.g., junior/resident doctors ~20–26%, consultants ~15–27%) and inflation measure used.

The **UK government** is responsible,

2/N
as it sets NHS doctors' pay based on recommendations from the independent Doctors' and Dentists' Review Body (DDRB), often implementing below-inflation rises or freezes, especially during austerity periods.

3/N
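The "real-terms" arithmetic behind figures like these is easy to check: deflate nominal pay by a price index before comparing endpoints. A minimal sketch with hypothetical numbers, not official NHS or DDRB data:

```python
def real_terms_change(nominal_start, nominal_end, cpi_start, cpi_end):
    """Percent change in pay after deflating by a price index."""
    real_start = nominal_start / cpi_start
    real_end = nominal_end / cpi_end
    return (real_end - real_start) / real_start * 100

# Hypothetical: pay rises 30% over a period in which prices rise 60%.
# Despite the nominal rise, this is a real-terms cut of roughly 19%.
print(real_terms_change(100, 130, 1.00, 1.60))
```

This is how a decade of below-inflation awards compounds into the 15-25% figures quoted above: each sub-inflation rise shrinks the deflated value a little further.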
Dec 23
I’m sure the “protect our women and girls” brigade will have plenty to say about this👍🏼

There are indeed some parts of our society that don’t treat women properly…

Here is a former TORY Councillor (and 5 OTHER men) charged with the rape and sexual assault of HIS WIFE

🧵
1/10
Six men are charged with more than 60 sexual offenses, including rape, against the former councillor’s wife over a 13-year period.

Young himself is accused of 56 of them, including 11 rapes…

2/
11 counts of “administering a substance with the intent of stupefying or overpowering to allow sexual activity”

13 specific charges of voyeurism, alongside voyeurism on at least 200 other occasions

230 indecent images of CHILDREN, of which 139 are category A - the worst kind

3/
Dec 23
russia launched another large-scale attack on at least 13 Ukrainian regions — 650+ drones and 30+ missiles were sent to destroy energy systems, civilian infrastructure and residential areas, while killing civilians, including 2 children.

Help stop this:
u24.gov.ua/nafo-dark-nigh…
Dec 23
A few interesting documents in data set 8 of the #EpsteinFiles:

🧵

Jan 2020:

“..Donald Trump traveled on Epstein’s private jet many more times than previously has been reported (or that we are aware), including during the period we would expect to charge in the Maxwell case” ⤵️
Another email appears to include a redacted image with the words:

“I found an image of Trump and Ghislaine Maxwell on Bannon’s phone (see screenshot from Cellebrite, attached)…”
An interesting letter in support of the Maxwell bail application.

The author’s name is redacted, although a tiny piece of the signature remains unredacted ⤵️
Dec 23
🧵 THREAD: Silver vs Big Tech — the market cap crossover is closer than most think 🥈
1/

Two screenshots. Four days apart.
One quiet shift in market structure.

Silver’s market cap is accelerating — while Big Tech is stalling.
Let’s quantify it. 👇
2/
📅 Dec 19, 2025
• Silver: $3.799T
• Apple: $4.022T
• NVIDIA: $4.38T

📅 Dec 23, 2025
• Silver: $3.914T
• Apple: $4.021T
• NVIDIA: $4.472T
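The percentage moves implied by those two snapshots can be computed directly. A quick sketch using the thread's own figures (trillions of USD):

```python
def pct_change(old, new):
    """Percentage change from an old to a new value."""
    return (new - old) / old * 100

# Market caps in trillions of USD, Dec 19 -> Dec 23, 2025 (thread figures)
snapshots = {
    "Silver": (3.799, 3.914),
    "Apple": (4.022, 4.021),
    "NVIDIA": (4.380, 4.472),
}

for name, (old, new) in snapshots.items():
    print(f"{name}: {pct_change(old, new):+.2f}%")
# Silver comes out near +3.0% over four days, Apple essentially flat,
# NVIDIA near +2.1% — silver growing faster than either in that window.
```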
Dec 23
Who was behind this systemic discrimination against the Sami people in Scandinavia, and does it connect somehow to Canada’s issues? @AFN_Updates @NWAC_CA @MNC_tweets @MetisNationON @NCAI1944 @UN4Indigenous @NCTR_UM @Pontifex @USDOJ_Intl @MarkJCarney @hrw @interpol @ICIJorg @ICCT_TheHague
The Sami had our Metis sash and coats like Hudson Bay blanket fabric. They whaled and fished in Acadia, and their wampum-like dwelling is called a lavvu.
My family’s East Coast history included the same foods, cooking styles, seasonings, and methods of preserving fish as the Indigenous Sami.
Dec 23
STYLE-AWARE DRAG-AND-DROP INSERTION OF SUBJECTS INTO IMAGES

@Google's US20250378609A1 presents a style-aware drag-and-drop system that inserts subjects from one image into another while automatically adapting to the target's visual style. The method preserves subject identity, transforms appearance to match target aesthetics, and integrates the result with realistic shadows and reflections.

Generative AI image editing tools have seen explosive growth as users increasingly demand professional-quality results without specialized skills. Non-experts seek intuitive interfaces that handle complex transformations automatically, expecting results that previously required hours of manual work in professional software. The gap between user expectations and available capabilities continues to drive innovation in this space. Google's ongoing investment in generative AI positions this patent within a broader portfolio addressing creative workflows.

Traditional approaches to subject insertion rely on inpainting methods that prove computationally expensive and produce poor-quality outputs, particularly on smooth regions and boundaries ([0002], [0033]). These techniques often create visible artifacts where inserted subjects meet background elements. More fundamentally, existing methods struggle to balance identity preservation against style transformation, typically sacrificing one for the other ([0027]). Users face an unsatisfying choice between accurate subject representation and natural environmental integration.

The patent addresses this gap through a three-stage pipeline combining subject personalization, style transfer, and environment integration. A diffusion model first learns to represent the subject through auxiliary token embeddings and parameter adjustments ([0026]). Style information extracted from the target image then conditions generation to produce a style-matched version of the subject ([0027]). Finally, a subject insertion model places this transformed subject into the target environment with appropriate shadows, reflections, and occlusions ([0034]).

Key Breakthroughs:
◽Preserving subject identity through simultaneous token embedding and LoRA weight learning
◽Injecting target style via CLIP-IP-Adapter pathway without distorting learned representation
◽Adapting photorealistic insertion models to stylized domains through bootstrap self-filtering

[FIG. 8: Style-aware drag-and-drop results showing subjects inserted into target backgrounds with automatic style adaptation and realistic environmental integration]
1. Core Innovations

1️⃣ Dual-Space Subject Learning
◽ Technical Challenge: Representing a specific subject within a generative model requires balancing identity preservation against editability. Standard fine-tuning approaches that modify model weights risk overfitting, producing outputs that merely replicate training images rather than generating novel views ([0028]). The model memorizes pixel patterns instead of learning underlying subject characteristics. Conversely, lightweight embedding-only methods may fail to capture fine-grained details that distinguish one subject from another ([0030]). This tension between memorization and generalization has limited practical applications of personalized image generation.

◽ Innovative Solution: The patent introduces simultaneous optimization of token embeddings and low-rank adaptation weights. During fine-tuning, the system learns auxiliary input tokens [T1] and [T2] that condition the diffusion model's semantic space ([0030]). These tokens occupy positions in the prompt where natural language descriptors would normally appear. Concurrently, LoRA deltas modify frozen model parameters through rank decomposition matrices ([0031]). The joint optimization follows a denoising objective where both embedding vectors and weight adjustments train together ([0091], [0092]). This dual-space approach captures subject identity at multiple levels of abstraction within a unified training procedure.

◽ Competitive Advantage: Token embeddings encode high-level semantic identity while LoRA weights preserve fine-grained visual details. This complementary distribution achieves subject fidelity in fewer training iterations than weight-only methods ([0028]). The learned representation maintains pose and expression editability because the model avoids overfitting to specific training views, enabling generation of the subject in novel configurations while retaining distinctive characteristics.
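The dual-space idea, a learnable token embedding optimized jointly with low-rank weight deltas on a frozen base, can be illustrated with a toy model. Everything below (the 2x2 dimensions, the rank-1 LoRA, finite-difference gradients, and a quadratic loss standing in for the denoising objective) is a hypothetical sketch, not the patent's implementation:

```python
import random

# Toy dual-space learning: frozen weights W, a rank-1 LoRA delta (a b^T),
# and a learnable "token embedding" e are optimized jointly against a
# fixed target that stands in for the denoising objective.

def apply_model(W, a, b, e):
    # Effective weights: frozen W plus the low-rank delta a b^T.
    n = len(W)
    Weff = [[W[i][j] + a[i] * b[j] for j in range(n)] for i in range(n)]
    return [sum(Weff[i][j] * e[j] for j in range(n)) for i in range(n)]

def loss(W, a, b, e, target):
    out = apply_model(W, a, b, e)
    return sum((o - t) ** 2 for o, t in zip(out, target))

def train(W, target, steps=500, lr=0.05, eps=1e-4):
    random.seed(0)
    e = [random.uniform(-0.1, 0.1) for _ in range(2)]  # token embedding
    a = [random.uniform(-0.1, 0.1) for _ in range(2)]  # LoRA factor
    b = [random.uniform(-0.1, 0.1) for _ in range(2)]  # LoRA factor
    for _ in range(steps):
        for p in (e, a, b):  # joint update of embedding and LoRA factors
            for i in range(len(p)):
                # Central finite difference keeps the sketch dependency-free.
                p[i] += eps
                up = loss(W, a, b, e, target)
                p[i] -= 2 * eps
                down = loss(W, a, b, e, target)
                p[i] += eps
                p[i] -= lr * (up - down) / (2 * eps)
    return e, a, b

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weights
target = [0.5, -0.3]          # stands in for a denoising target
e, a, b = train(W, target)
print(loss(W, a, b, e, target))  # converges near zero
```

The point the toy makes is structural: the embedding and the low-rank deltas descend the same loss at the same time, rather than one being fit after the other.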

2️⃣ Adapter-Based Style Injection
◽ Technical Challenge: Transferring visual style from a target image onto a personalized subject risks corrupting the learned identity representation. Naive approaches that blend style information throughout the generation process cause subject features to drift toward generic stylistic patterns ([0027]). A character's distinctive face shape might morph toward the style's typical proportions. The fundamental conflict between style adoption and identity preservation has constrained previous methods to limited style ranges or compromised fidelity in one dimension or the other.

◽ Innovative Solution: The system employs a CLIP encoder to extract style embeddings from the target image, producing vector representations of visual characteristics ([0032], [0093]). This encoding captures color palettes, texture patterns, artistic techniques, and lighting qualities as numerical features. An IP-Adapter then injects these style features into a subset of UNet upsampling layers within the fine-tuned diffusion model ([0093]). This selective injection imposes style information on the generation process while the learned auxiliary tokens continue conditioning for subject identity. The architecture keeps style and identity pathways functionally separate through layer-specific routing.

◽ Competitive Advantage: Restricting style injection to layers responsible for surface details prevents style information from overwriting identity-critical features encoded earlier. The subject emerges in target style without losing distinguishing characteristics. Experimental results demonstrate improvements in both CLIP-based style metrics and DINO-based identity metrics compared to StyleAlign and InstantStyle baselines ([0036], [0107]), achieving results that sequential approaches cannot match.
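The layer-selective routing can be sketched abstractly: style conditioning touches only designated up-layers, so features computed earlier pass through unchanged. The toy "layers" and the linear blend below are hypothetical stand-ins; the patent's IP-Adapter conditions cross-attention rather than blending activations:

```python
# Toy layer-selective style injection: a style vector is blended in only
# at designated "up" layers, so identity features computed earlier are
# never overwritten by style information.

STYLE_LAYERS = {"up1", "up2"}  # inject style only here

def run_layer(name, features, style, weight=0.3):
    out = [f * 1.1 for f in features]  # stand-in for the layer's transform
    if name in STYLE_LAYERS:
        out = [(1 - weight) * o + weight * s for o, s in zip(out, style)]
    return out

def forward(layers, features, style):
    for name in layers:
        features = run_layer(name, features, style)
    return features

identity_features = [1.0, 0.0, -1.0]
style = [0.5, 0.5, 0.5]

# Full pass: style touches only the two up-layers.
styled = forward(["down1", "down2", "mid", "up1", "up2"],
                 identity_features, style)
# Truncated pass hitting no style layers: identity only gets transformed.
plain = forward(["down1", "down2", "mid"], identity_features, style)
print(styled, plain)
```

Comparing the two outputs shows the design choice: the early (identity-carrying) path is identical in both runs; only the late layers pull the features toward the style vector.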

3️⃣ Bootstrap-Enhanced Subject Insertion
◽ Technical Challenge: Subject insertion models trained on photorealistic data fail when processing stylized images. A model that learns to generate shadows and reflections from photographs produces artifacts when given cartoon or painted subjects ([0037], [0097]). The lighting physics and surface properties differ fundamentally between photorealistic and stylized domains. Collecting paired training data across every possible artistic style proves impractical due to the unbounded variety of visual styles. This domain gap between training distribution and deployment scenarios limits real-world applicability.

◽ Innovative Solution: The patent introduces bootstrap domain adaptation that extends insertion model capabilities without manual data collection. The pre-trained photorealistic model first attempts subject removal on stylized images, revealing how well it understands each image type ([0098]). A filtering mechanism identifies successful removals and discards failures, creating a curated training set of stylized examples the model can already handle partially ([0099]). The model then fine-tunes on this filtered data, gradually expanding its effective domain ([0100]). Multiple bootstrap iterations progressively improve coverage of stylized image types by repeatedly expanding the success boundary ([0101]).

◽ Competitive Advantage: This self-supervised approach eliminates dependence on paired stylized training data. Each iteration expands the model's competence boundary into new visual domains without human annotation. Filtering ensures training quality by excluding catastrophic failures, preventing error propagation. Results show substantial improvement in stylized image handling compared to photorealistic-only models ([FIG. 12]), generalizing across style types without style-specific training.
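The bootstrap loop reduces to: attempt, filter by quality, fine-tune on the survivors, repeat. A toy version with a scalar "competence" and per-style difficulty scores (the scoring rule, thresholds, and gain are all hypothetical, chosen only to make the expanding boundary visible) shows how each round extends coverage to harder styles:

```python
# Toy bootstrap self-filtering: the model handles styles below a
# difficulty threshold; each round it keeps only attempts that scored
# well, "fine-tunes" on them, and thereby extends its competence.

def attempt_quality(difficulty, competence):
    # Quality of a removal attempt: high when competence exceeds difficulty.
    return max(0.0, 1.0 - difficulty / competence)

def bootstrap(difficulties, competence=1.0, keep_above=0.3,
              rounds=3, gain=1.0):
    for _ in range(rounds):
        kept = [d for d in difficulties
                if attempt_quality(d, competence) >= keep_above]
        if not kept:
            break  # nothing usable survived the filter
        # Fine-tuning on the filtered successes pushes competence outward.
        competence += gain * max(kept)
    return competence

styles = [0.2, 0.4, 0.9, 1.5, 2.5]  # per-style "difficulty" scores
print(bootstrap(styles))  # competence grows round over round
```

Tracing the rounds: the first keeps only the two easiest styles, the second already passes difficulty 0.9, and the third passes 1.5, mirroring how filtered fine-tuning widens the success boundary without any paired stylized data.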

2. Architecture & Components

The system comprises three functional modules that process subject images through style transformation to final integration.

1️⃣ Subject Learning Module:
- Diffusion model 300 with UNet architecture serves as the generative foundation ([0026], [0032])
- Noisy image versions of subject provide training signal for denoising recovery ([0026])
- Auxiliary input 305 conditions output via learned token sequence "A [T1][T2]" ([0030])
- LoRA deltas modify frozen model parameters through rank decomposition matrices ([0031])
- Token embeddings e1 and e2 capture subject identity in semantic embedding space ([0091])
- Joint loss function optimizes both embedding and weight components simultaneously ([0092])

2️⃣ Style Transfer Module:
- CLIP encoder 310 extracts style representation from target image xt ([0027])
- Style embedding 313 encodes visual characteristics as conditioning vector ([0032])
- IP-Adapter transforms embedding into layer-specific conditioning signals ([0093])
- Selective injection targets UNet upsampling layers to preserve identity features ([0093])

3️⃣ Subject Integration Module:
- Segmentation 410 isolates transformed subject from generated output background ([0033])
- Composite generation places segmented subject onto target background at specified location ([0034])
- Subject insertion model 400 generates contextually appropriate shadows and reflections ([0034])
- Bootstrap training extends capabilities from photorealistic to stylized domains ([0098])

3. Operational Mechanism

The method executes through four sequential phases from input reception to final output generation.

1️⃣ Input Reception:
- System receives subject image xs containing the entity to transfer ([0067])
- System receives target image xt defining style and destination environment ([0067])
- Subject image enters personalization pipeline for identity learning ([0026])
- Target image proceeds independently to style extraction pathway ([0027])

2️⃣ Subject Personalization:
- Noise corruption at varying levels creates training variants of subject image ([0026])
- Diffusion model learns to recover original subject from noisy versions ([0091])
- LoRA deltas and token embeddings optimize jointly via denoising loss function ([0092])
- Training continues until model specifically represents subject identity with fidelity ([0028])
- Convergence produces fine-tuned model with learned auxiliary input tokens ([0030])

3️⃣ Style-Aware Generation:
- CLIP encoder processes target image to produce style embedding vector ([0093])
- IP-Adapter transforms style embedding into UNet conditioning signals ([0093])
- Style signals inject into upsampling layers of personalized diffusion model ([0093])
- Model executes conditioned on learned tokens, generating styled subject image ([0067])
- Output depicts subject with preserved identity rendered in target visual style ([0027])

4️⃣ Environment Integration:
- Styled subject segmented from generated image background using mask ([0033])
- Subject composited onto target image at user-specified location ([0034])
- Insertion model analyzes scene context to determine appropriate lighting effects ([0034])
- Model applies shadows, reflections, and occlusions matching target environment ([0034])
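Read end to end, the four phases compose into a single pipeline. The stubs below only mirror the data flow; every function body is a placeholder for the corresponding model described above, and the file names and coordinates are made up:

```python
# The four phases as one data-flow sketch; only the wiring is the point.

def personalize(subject_image):
    # Phase 2: learn auxiliary tokens + LoRA deltas for the subject.
    return {"tokens": "A [T1][T2]", "subject": subject_image}

def extract_style(target_image):
    # Phase 3a: CLIP-style embedding of the target's look.
    return {"style_of": target_image}

def generate_styled(personalized, style):
    # Phase 3b: run the personalized model conditioned on the style.
    return {"subject": personalized["subject"], "style": style["style_of"]}

def integrate(styled, target_image, location):
    # Phase 4: segment, composite, then add shadows/reflections/occlusions.
    return {"scene": target_image, "inserted": styled["subject"],
            "at": location, "style": styled["style"]}

def drag_and_drop(subject_image, target_image, location):
    personalized = personalize(subject_image)          # Phase 2
    style = extract_style(target_image)                # Phase 3
    styled = generate_styled(personalized, style)
    return integrate(styled, target_image, location)   # Phase 4

result = drag_and_drop("corgi.png", "watercolor_park.png", (120, 80))
print(result)
```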

4. Figures

[FIG. 3A/3B: Two-stage personalization process showing diffusion model fine-tuning with learned token input "A [T1][T2]" and style injection from target image 311 via embedding 313]
[FIG. 4: Subject insertion pipeline showing segmentation 410, composite generation, and integration model 400 producing output 411 with shadows and reflections]
[FIG. 6: Method flowchart depicting steps 610 through 640 from image reception through styled subject generation to final environment integration]

5. Key Advantages

✅ Objective improvements in subject fidelity metrics including DINO and CLIP-I scores relative to StyleAlign and InstantStyle baselines ([0036], [0107])
✅ Strong style adherence measured by CSD and CLIP-T metrics while maintaining low structural overfitting indicated by SSIM values ([0108])
✅ Reduced training iterations through simultaneous embedding and weight optimization versus sequential fine-tuning approaches ([0028])
✅ Identity preservation maintaining pose, expression, and semantic attributes during cross-style transformation ([0025])
✅ Computational efficiency surpassing traditional inpainting methods that struggle with smooth regions and produce boundary artifacts ([0033])
✅ Mobile deployment capability enabling local inference execution on cellphones and tablets after remote training completion ([0035])
✅ Bootstrap adaptation requiring minimal additional supervision to extend photorealistic models to diverse stylized domains ([0037])
6. Analogy

Consider commissioning a Renaissance portrait painter to create your likeness in the style of Caravaggio. You arrive at the studio with a photograph. The painter faces a fundamental challenge: capture your distinctive features while rendering everything in dramatic chiaroscuro technique with deep shadows and warm flesh tones.

A novice might attempt to simply apply sepia filters to your photograph. The result would look vaguely old but retain modern photographic qualities while losing your specific facial characteristics in the color manipulation. This corresponds to naive style transfer that corrupts identity.

An experienced portraitist takes a different approach. First, they study your face through multiple sketches, learning the proportions of your nose, the particular curve of your jawline, the way your eyes catch light. These preliminary drawings capture your identity at two levels: overall structure through quick gesture sketches, and fine details through careful studies. This dual learning mirrors token embeddings capturing semantic identity while LoRA weights encode visual specifics.

Next, the painter studies Caravaggio's technique separately. They analyze how he handled fabric folds, skin tones, and that signature spotlight-against-darkness contrast. This style analysis remains independent from their study of your features. When painting begins, they apply Caravaggio's methods to their learned understanding of your appearance. The CLIP-IP-Adapter pathway functions similarly, extracting and applying style without overwriting identity.

Finally, the painter must integrate your figure into a period-appropriate setting. Someone lounging in a Baroque interior needs shadows that match the scene's lighting, reflections in nearby surfaces, and appropriate occlusion by foreground objects. The painter practices rendering modern clothing in Renaissance oil technique, building competence through trial and refinement. This corresponds to bootstrap adaptation where the insertion model learns stylized rendering through iterative self-training.

The finished portrait shows unmistakably you, rendered in unmistakably Baroque style, sitting naturally in a candlelit chamber. Three distinct competencies combined: knowing your appearance, knowing the style, and knowing environmental integration.
Dec 23
If only some didn’t profit from wars:
Each religion that has existed and remains alive has a book. That book is a codex.
That codex has its own meaning key (a literal key).
If someone translated all religions using those keys, the result would be a book in a very simple language👇🏼
The name of that book would be the Bible.
But a meaningfully translated Bible.
And there is no chance of any promised one other than Jesus Christ.
Jesus Christ as the promised one exists in all Abrahamic religions, but in the other books they wrote other local names for him.👇🏼
They are all one, and that all comes true in one (Jesus Christ).
The King of Kings. The true one whose soul and God are united.

I just pray to the Lord and ask him to destroy those who make wars and make money out of wars and translate religions into Death.
Dec 23
SitRep - 22/12/25 - A large failed Russian mechanized attempt near Dobropillia

An overview of the daily events in Russia's invasion of Ukraine. During the day, Russia attempted to storm Ukrainian positions towards Dobropillia but massively failed.

REPOST=appreciated

1/X
As usual we start with Russian losses
