AzCDL has a lot to be thankful for this holiday season.
We are thankful to those that founded this organization 20 years ago for their wisdom, insight, and perseverance.
2/8 We are thankful to those early members and volunteers who had faith in our mission.
We are thankful to the legislators that have worked with us to give Arizona some of the best firearms laws in the country—78 pro-freedom bills signed into law by 4 governors! #AZLeg
3/8 We are thankful to our lobbyists who work full-time to educate, support, & encourage our elected officials in advancing the right to keep & bear arms in AZ.
We are thankful to our litigation partners who help us protect the legislative progress we've made.
🎁👻🚢 At Christmas, the 🇷🇺 ghost fleet's triple gift to France
On the 24th & 25th, 3 ghost tankers will be sailing as one through 🇫🇷 waters: LION 1, SEASONS 1, CAI YUN.
Our hypothesis: they form a single group, with the same Russian paramilitaries on board. 🧵
What do the LION I (IMO 9384069) and the SEASONS I (IMO 9308950) have in common?
Officially? Apart from carrying oil out of Russian ports, not much:
🔸️they belong to different shipowners
🔸️their commercial managers are different too.
Dig a little deeper, however, and their profiles and behaviour turn out to be similar:
🔸️No officially registered flag
🔸️Managed through offshore companies in the Seychelles
🔸️One is audited by China (independence rather relative), the other no longer even bothers getting certified.
In the first half of 2025, the Federal Government realised about half of its projected revenue, while debt servicing was 97.2% of the Federal Government's retained revenue.
Oil revenue contributed 36.2% (₦3.43tn) to FG’s retained revenue in the first half of 2025, while non-oil revenue made up 35.6% (₦3.38tn). Meanwhile, debt service was the highest expenditure item for the FG at ₦9.23tn, accounting for 60.4% of total expenditure.
In the first half of 2025, only 33% of budgeted oil revenue was realised, while non-oil revenue performed better at 76% of its target. On the spending side, only 13% of planned capital expenditure was implemented, while debt servicing exceeded budgeted projections by 29%.
Because most of my fellow Europeans haven't had the time nor stamina to fully acquaint themselves with the complete spectrum of insanity that is the current U.S. administration, I've taken it upon myself to offer a brief, honest introduction to its main characters.🧵
Often affectionately referred to as "pure f*cking evil" by her closest friends (and, really, by everyone who has ever met, seen, or heard her), Kristi Noem is the woman Trump tasked with deporting people to Salvadoran concentration camps after learning that she had shot a puppy.
Robert F. Kennedy Jr., the former heroin addict and self-proclaimed survivor of an imaginary brain worm, who once admitted to having eaten a dead bear he stumbled upon before burying the carcass in New York's Central Park at night, currently serves as the U.S. Secretary of Pestilence.
The NEW DONBAS LINE, a massive Ukrainian 🇺🇦 fortification program
This year, Ukraine built important fortifications that may be able to stop or slow down Russian 🇷🇺 forces. Visible from space, these fortifications have already proven useful.
🧵THREAD🧵1/25 ⬇️
This weekend, Ukraine published this video showcasing their new defensive line.
For the first time, they officially showed the results of months of digging. The video shows the most advanced defensive line, with only two gaps in 16 km!
This line is new because it does not consist only of anti-tank or anti-infantry obstacles; it combines all of them at the same time.
We can count 21 rows of barbed wire (4 lines, plus 3 lines at the bottom of the ditches), 3 rows of dragon's teeth, and 3 rows of anti-tank ditches.
Depending on how you look at it, growth in Q3 was very, very strong, or very strong, or possibly just merely strong. Annual rates:
GDP: 4.3%
Real final sales to domestic purchasers: 2.9%
Average of GDP & GDI: 3.4%
GDI: 2.4%
A big part of the story was consumer spending, up at a 3.5% annual rate. Consumers started the year looking weak, but new data and revisions now show them very strong.
Business fixed investment was a bit weaker but also very heterogeneous. Equipment investment and IPP were up, but non-residential structures were down for the seventh straight quarter.
They lied to you. At the table, it is not the one who "talks the most" who wins, but the one who "commands the silence."
Negotiation is not a war of words; it is chess played in the mind.
Here are the dark, deep mechanics of the art of "Invisible Persuasion" that most of you are missing. 🧵
First, let's settle who deserves to be called a "Master."
We are not just talking about people who buy dirt cheap or sell at a premium; we mean those whom both sides find "effective," whose record is clean, and who owe their success to a system rather than to luck.
These people's mental map works completely differently from the average person's. The difference begins already at the "preparation" stage of the negotiation.
1. The Law of Flexibility: Embracing Uncertainty
Amateurs write a script before entering a negotiation: "If they say A, I'll say B." That is foolish, because life does not follow your script.
Master negotiators, on the other hand, generate "options," not scripts. They consider fully twice as many alternatives as the average negotiator.
They do not lock onto a single "target point" (e.g., "I will buy this for 1,000 TL"). Instead, they set a "range" (e.g., "anywhere between 850 and 1,050 TL is reasonable").
Why? Because what is rigid breaks, while what is flexible bends but does not snap. They create room to maneuver in moments of uncertainty from the very start.
1/8 This was the final PGP* (Pretty Good Policy) for Crypto @pgpforcrypto meeting of the year, from last Wednesday in Washington, DC 🇺🇸: lightning talks + a Project Glitch fireside chat focused on where crypto policy, privacy tech, and compliance design are heading next.
Watch all of the updates here 🧵👇
Sessions:
• Policy Update — Lindsay Fraser @lindsayfraser0, @BlockchainAssn
• Enterprise Privacy — Paul Brody @pbrody, Principal & Global Blockchain Leader, Ernst & Young
• Privacy + Surveillance Policy — Tony Douglas Jr. @dao_officer, The Decentralization Research Center @TheDRC_
• Privacy Infrastructure + Compliance Frameworks — Remi Gai @remi_gai, Founder of Inco @inconetwork
• Digital ID + Anonymous Credentials — Ying Tong @therealyingtong, Independent Applied Cryptographer
Project Glitch Fireside Chat:
“Digital ID in 2026” — Wayne Chang, @wycdd, Founder & CEO of Spruce ID @SpruceID & Mike Orcutt @mike_orcutt, Founding Editor of @projectglitch_
Thank you to the sponsors of the PGP* for Crypto Briefing and Roundtable Series: @ZcashCommGrants, @hedera Hashgraph, Ernst & Young, The Digital Securities Initiative, USC VanEck Digital Assets Initiative @USC_VEDA , and @ElectricCoinCo (with additional support from DeFi Education Fund @fund_defi and Blockchain Association @BlockchainAssn).
Highlights:
• Market structure work is converging, but key decisions remain open—especially around token classification, DeFi treatment, ethics language, and how authority is scoped across agencies.
• Lindsay flagged an emerging pressure point tied to stablecoins: whether “rewards/incentives” are permitted in secondary markets and third‑party platforms. Her point was that the GENIUS framework struck a deliberate balance (issuer yield restrictions, but not a blanket ban on downstream incentives), and that balance is now under pressure from lobbying and related regulatory activity.
• She also noted that agency staffing/quorum dynamics (including at the CFTC) are becoming part of the policy negotiation, not just a background detail.
3/8 Paul Brody @pbrody, Principal & Global Blockchain Leader, Ernst & Young — Institutions want onchain privacy, too
Highlights:
• Paul outlined EY’s two-track approach: Nightfall (for standardized, high‑volume private payments/transfers/swaps) and Starlight (for running customized business logic privately by translating Solidity contracts into zero‑knowledge circuits).
• He emphasized Nightfall’s design goals: scalability, EVM compatibility, and an open-source posture intended to support broader adoption and review.
• On progress and roadmap, he described multi‑year iteration since early prototypes, and referenced a Zcash-based version on testnet with a stated target of production deployment in 2026, alongside early ecosystem deployments/testnets (including Plume and Celo).
Through the camera of a German-born physician: 30 portraits of women taken in Anatolia in the 1930s
Albert Eckstein, a pediatrician of Jewish descent who was forced to flee the rise of Nazi ideology, +++
came to these lands in 1935 at the invitation of the young Republic of Turkey. Taking charge of the Pediatric Health Clinic at Ankara Numune Hospital, Eckstein carried out extensive studies on maternal and infant health in many regions of the country.
A physician who was also drawn to the art of photography, Eckstein captured a great many frames during his fieldwork. This visual legacy was recently made public by the University of Cambridge.
Who Stole Father Christmas?
The true story of the Heist of the Relics of St. Nicholas
In which we travel with @SamDalrymple123 to the mysterious empty tomb of St. Nicholas of Myra in Lycia, modern Turkey
Long before Coca-Cola advertising gave him a nice red and white hat, Father Christmas was actually a real Byzantine saint: St Nicholas, Bishop of Myra, or ‘Santa Claus’ in Dutch. He was renowned for his generosity and gift-giving.
St Nicholas, or Nikolaos as he would have pronounced his name, was a Byzantine Bishop of Myra, capital of Lycia, now in southern Turkey, from 280 to 352 AD
We have seen that in this time of crisis over the lumpy skin disease (DNC) epidemic, the main troublemakers were neither the livestock farmers nor the peasants. Let's take one case: Vicky, who was notably mixed up in the fake story of the 70 suspended veterinarians.
Thread:
1/n: Who is Vicky Dehaene, very active on X during these days of crisis caused by the DNC epidemic? A genuine champion of the farming world, or an ordinary far-right conspiracist seizing the moment?
Well, that question is quickly answered…
For a start, that's not her in the video…
See the trick?
2/n: Let's keep it simple. Rewind a little, to the era of the Yellow Vests. Vicky was part of it. Or at least she supported it. You can find dozens of her posts defending the movement, but no real trace of her taking part in protests or blocking roundabouts.
Who was behind this systemic discrimination against the Sami people in Scandinavia, and does it somehow connect to Canada's issues? @AFN_Updates @NWAC_CA @MNC_tweets @MetisNationON @NCAI1944 @UN4Indigenous @NCTR_UM @Pontifex @USDOJ_Intl @MarkJCarney @hrw @interpol @ICIJorg @ICCT_TheHague
The Sami had our Métis sash and coats like Hudson Bay blanket fabric. They whaled and fished in Acadia, and their wampum-like dwelling is called the lavvu.
My family's East Coast history included the same foods, cooking styles, seasonings, and ways of preserving fish as the Indigenous Sami.
STYLE-AWARE DRAG-AND-DROP INSERTION OF SUBJECTS INTO IMAGES
@Google's US20250378609A1 presents a style-aware drag-and-drop system that inserts subjects from one image into another while automatically adapting to the target's visual style. The method preserves subject identity, transforms appearance to match target aesthetics, and integrates the result with realistic shadows and reflections.
Generative AI image editing tools have seen explosive growth as users increasingly demand professional-quality results without specialized skills. Non-experts seek intuitive interfaces that handle complex transformations automatically, expecting results that previously required hours of manual work in professional software. The gap between user expectations and available capabilities continues to drive innovation in this space. Google's ongoing investment in generative AI positions this patent within a broader portfolio addressing creative workflows.
Traditional approaches to subject insertion rely on inpainting methods that prove computationally expensive and produce poor-quality outputs, particularly on smooth regions and boundaries ([0002], [0033]). These techniques often create visible artifacts where inserted subjects meet background elements. More fundamentally, existing methods struggle to balance identity preservation against style transformation, typically sacrificing one for the other ([0027]). Users face an unsatisfying choice between accurate subject representation and natural environmental integration.
The patent addresses this gap through a three-stage pipeline combining subject personalization, style transfer, and environment integration. A diffusion model first learns to represent the subject through auxiliary token embeddings and parameter adjustments ([0026]). Style information extracted from the target image then conditions generation to produce a style-matched version of the subject ([0027]). Finally, a subject insertion model places this transformed subject into the target environment with appropriate shadows, reflections, and occlusions ([0034]).
Key Breakthroughs:
◽Preserving subject identity through simultaneous token embedding and LoRA weight learning
◽Injecting target style via CLIP-IP-Adapter pathway without distorting learned representation
◽Adapting photorealistic insertion models to stylized domains through bootstrap self-filtering
[FIG. 8: Style-aware drag-and-drop results showing subjects inserted into target backgrounds with automatic style adaptation and realistic environmental integration]
1. Core Innovations
1️⃣ Dual-Space Subject Learning
◽ Technical Challenge: Representing a specific subject within a generative model requires balancing identity preservation against editability. Standard fine-tuning approaches that modify model weights risk overfitting, producing outputs that merely replicate training images rather than generating novel views ([0028]). The model memorizes pixel patterns instead of learning underlying subject characteristics. Conversely, lightweight embedding-only methods may fail to capture fine-grained details that distinguish one subject from another ([0030]). This tension between memorization and generalization has limited practical applications of personalized image generation.
◽ Innovative Solution: The patent introduces simultaneous optimization of token embeddings and low-rank adaptation weights. During fine-tuning, the system learns auxiliary input tokens [T1] and [T2] that condition the diffusion model's semantic space ([0030]). These tokens occupy positions in the prompt where natural language descriptors would normally appear. Concurrently, LoRA deltas modify frozen model parameters through rank decomposition matrices ([0031]). The joint optimization follows a denoising objective where both embedding vectors and weight adjustments train together ([0091], [0092]). This dual-space approach captures subject identity at multiple levels of abstraction within a unified training procedure.
◽ Competitive Advantage: Token embeddings encode high-level semantic identity while LoRA weights preserve fine-grained visual details. This complementary distribution achieves subject fidelity in fewer training iterations than weight-only methods ([0028]). The learned representation maintains pose and expression editability because the model avoids overfitting to specific training views, enabling generation of the subject in novel configurations while retaining distinctive characteristics.
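To make the dual-space idea concrete, here is a minimal, hedged PyTorch sketch of jointly optimizing auxiliary token embeddings and LoRA deltas under a denoising loss; the toy denoiser, dimensions, and variable names are illustrative assumptions, not the patent's architecture.
```python
# Minimal sketch of the dual-space objective, assuming a toy denoiser in place of
# the UNet; every name and dimension here is illustrative, not from the patent.
import torch
import torch.nn as nn

D_TOK, D_HID, D_IMG = 32, 64, 128   # toy sizes

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank delta (B @ A)."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()

class ToyDenoiser(nn.Module):
    """Stand-in for the diffusion UNet: predicts noise from (noisy input, tokens)."""
    def __init__(self):
        super().__init__()
        self.img_proj = LoRALinear(D_IMG, D_HID)
        self.tok_proj = LoRALinear(D_TOK, D_HID)
        self.out = LoRALinear(D_HID, D_IMG)

    def forward(self, noisy, tok_emb):
        h = torch.relu(self.img_proj(noisy) + self.tok_proj(tok_emb.mean(dim=0)))
        return self.out(h)

# Auxiliary tokens [T1][T2] live in embedding space and train jointly with the
# LoRA deltas; the base weights stay frozen throughout.
aux_tokens = nn.Parameter(torch.randn(2, D_TOK) * 0.02)   # e1, e2
model = ToyDenoiser()
trainable = [aux_tokens] + [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-3)

subject = torch.randn(16, D_IMG)                 # toy "subject image" features
for step in range(200):
    noise = torch.randn_like(subject)
    noisy = subject + noise                      # toy noising process
    pred = model(noisy, aux_tokens)              # conditioned on the learned tokens
    loss = nn.functional.mse_loss(pred, noise)   # denoising objective
    opt.zero_grad(); loss.backward(); opt.step()
```
The point of the sketch is structural: the token embeddings and the low-rank deltas receive gradients from the same denoising loss in a single loop, while the base weights never move.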
2️⃣ Adapter-Based Style Injection
◽ Technical Challenge: Transferring visual style from a target image onto a personalized subject risks corrupting the learned identity representation. Naive approaches that blend style information throughout the generation process cause subject features to drift toward generic stylistic patterns ([0027]). A character's distinctive face shape might morph toward the style's typical proportions. The fundamental conflict between style adoption and identity preservation has constrained previous methods to limited style ranges or compromised fidelity in one dimension or the other.
◽ Innovative Solution: The system employs a CLIP encoder to extract style embeddings from the target image, producing vector representations of visual characteristics ([0032], [0093]). This encoding captures color palettes, texture patterns, artistic techniques, and lighting qualities as numerical features. An IP-Adapter then injects these style features into a subset of UNet upsampling layers within the fine-tuned diffusion model ([0093]). This selective injection imposes style information on the generation process while the learned auxiliary tokens continue conditioning for subject identity. The architecture keeps style and identity pathways functionally separate through layer-specific routing.
◽ Competitive Advantage: Restricting style injection to layers responsible for surface details prevents style information from overwriting identity-critical features encoded earlier. The subject emerges in target style without losing distinguishing characteristics. Experimental results demonstrate improvements in both CLIP-based style metrics and DINO-based identity metrics compared to StyleAlign and InstantStyle baselines ([0036], [0107]), achieving results that sequential approaches cannot match.
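To illustrate the selective routing, here is a minimal sketch under stated assumptions: a toy linear encoder stands in for CLIP, StyleAdapter for the IP-Adapter, and a two-half ToyGenerator for the UNet, with style added only in the second ("upsampling") half.
```python
# Minimal sketch of layer-selective style injection; the toy encoder stands in for
# CLIP and StyleAdapter for the IP-Adapter. All names and shapes are illustrative.
import torch
import torch.nn as nn

class StyleAdapter(nn.Module):
    """Projects a style embedding and adds it to one layer's features."""
    def __init__(self, d_style, d_feat):
        super().__init__()
        self.proj = nn.Linear(d_style, d_feat)

    def forward(self, feat, style_emb):
        return feat + self.proj(style_emb)

class ToyGenerator(nn.Module):
    """Stand-in UNet: style is injected only into the 'upsampling' half, so the
    identity features built in the earlier half are left untouched."""
    def __init__(self, d=64, d_style=32):
        super().__init__()
        self.down = nn.ModuleList([nn.Linear(d, d) for _ in range(2)])   # identity pathway
        self.up = nn.ModuleList([nn.Linear(d, d) for _ in range(2)])     # style-injected pathway
        self.adapters = nn.ModuleList([StyleAdapter(d_style, d) for _ in range(2)])

    def forward(self, x, style_emb):
        for layer in self.down:                              # no style here
            x = torch.relu(layer(x))
        for layer, adapter in zip(self.up, self.adapters):
            x = torch.relu(adapter(layer(x), style_emb))     # inject style per up-layer
        return x

style_encoder = nn.Linear(128, 32)               # toy stand-in for the CLIP image encoder
style_emb = style_encoder(torch.randn(1, 128))   # features of the target image
subject_latent = torch.randn(1, 64)              # latent already conditioned on [T1][T2]
styled = ToyGenerator()(subject_latent, style_emb)
```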
3️⃣ Bootstrap-Enhanced Subject Insertion
◽ Technical Challenge: Subject insertion models trained on photorealistic data fail when processing stylized images. A model that learns to generate shadows and reflections from photographs produces artifacts when given cartoon or painted subjects ([0037], [0097]). The lighting physics and surface properties differ fundamentally between photorealistic and stylized domains. Collecting paired training data across every possible artistic style proves impractical due to the unbounded variety of visual styles. This domain gap between training distribution and deployment scenarios limits real-world applicability.
◽ Innovative Solution: The patent introduces bootstrap domain adaptation that extends insertion model capabilities without manual data collection. The pre-trained photorealistic model first attempts subject removal on stylized images, revealing how well it understands each image type ([0098]). A filtering mechanism identifies successful removals and discards failures, creating a curated training set of stylized examples the model can already handle partially ([0099]). The model then fine-tunes on this filtered data, gradually expanding its effective domain ([0100]). Multiple bootstrap iterations progressively improve coverage of stylized image types by repeatedly expanding the success boundary ([0101]).
◽ Competitive Advantage: This self-supervised approach eliminates dependence on paired stylized training data. Each iteration expands the model's competence boundary into new visual domains without human annotation. Filtering ensures training quality by excluding catastrophic failures, preventing error propagation. Results show substantial improvement in stylized image handling compared to photorealistic-only models ([FIG. 12]), generalizing across style types without style-specific training.
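A hedged sketch of the bootstrap loop follows; remove_subject, quality_score, fine_tune, the threshold, and the image pool are all hypothetical stand-ins for the patent's components, not its implementation.
```python
# Sketch of the bootstrap self-filtering loop; remove_subject, quality_score, and
# fine_tune are hypothetical stand-ins, and all numbers are placeholders.
import random

def remove_subject(model, image):
    """Stand-in: the pre-trained insertion model attempts subject removal."""
    return {"image": image, "quality": model["skill"] * random.random()}

def quality_score(result):
    """Stand-in filter: scores how cleanly the subject was removed."""
    return result["quality"]

def fine_tune(model, curated):
    """Stand-in: fine-tuning on curated examples nudges competence upward."""
    model["skill"] = min(1.0, model["skill"] + 0.05 * len(curated) / 100)
    return model

model = {"skill": 0.4}                                # toy photorealistic model
stylized_pool = [f"stylized_{i}" for i in range(500)]
THRESHOLD = 0.5

for iteration in range(3):                            # several bootstrap rounds
    attempts = [remove_subject(model, img) for img in stylized_pool]
    curated = [a for a in attempts if quality_score(a) >= THRESHOLD]  # keep successes
    model = fine_tune(model, curated)                 # effective domain expands
    print(f"round {iteration}: kept {len(curated)} / {len(stylized_pool)}")
```
The design point is that each round trains only on examples the model already handles acceptably, so coverage expands gradually without ever fitting to catastrophic failures.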
2. Architecture & Components
The system comprises three functional modules that process subject images through style transformation to final integration.
1️⃣ Subject Learning Module:
- Diffusion model 300 with UNet architecture serves as the generative foundation ([0026], [0032])
- Noisy image versions of subject provide training signal for denoising recovery ([0026])
- Auxiliary input 305 conditions output via learned token sequence "A [T1][T2]" ([0030])
- LoRA deltas modify frozen model parameters through rank decomposition matrices ([0031])
- Token embeddings e1 and e2 capture subject identity in semantic embedding space ([0091])
- Joint loss function optimizes both embedding and weight components simultaneously ([0092])
2️⃣ Style Transfer Module:
- CLIP encoder 310 extracts style representation from target image xt ([0027])
- Style embedding 313 encodes visual characteristics as conditioning vector ([0032])
- IP-Adapter transforms embedding into layer-specific conditioning signals ([0093])
- Selective injection targets UNet upsampling layers to preserve identity features ([0093])
3️⃣ Subject Integration Module:
- Segmentation 410 isolates transformed subject from generated output background ([0033])
- Composite generation places segmented subject onto target background at specified location ([0034])
- Subject insertion model 400 generates contextually appropriate shadows and reflections ([0034])
- Bootstrap training extends capabilities from photorealistic to stylized domains ([0098])
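As a rough illustration of this module's compositing step, here is a sketch under stated assumptions, with NumPy arrays standing in for images and hypothetical segment()/insertion_model() helpers in place of components 410 and 400.
```python
# Compositing sketch for the integration module; NumPy arrays stand in for images,
# and segment()/insertion_model() are hypothetical stand-ins for components 410/400.
import numpy as np

def segment(styled_output):
    """Stand-in segmentation: a binary mask over the generated subject."""
    return (styled_output.mean(axis=-1, keepdims=True) > 0.5).astype(np.float32)

def insertion_model(composite):
    """Stand-in for insertion model 400: would add shadows/reflections/occlusions."""
    return composite

styled_subject = np.random.rand(256, 256, 3)     # output of style-aware generation
target_image = np.random.rand(256, 256, 3)       # destination environment
mask = segment(styled_subject)

# Paste the segmented subject onto the target (here over the whole frame for
# simplicity), then hand the composite to the insertion model for harmonization.
composite = mask * styled_subject + (1.0 - mask) * target_image
output = insertion_model(composite)
```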
3. Operational Mechanism
The method executes through four sequential phases from input reception to final output generation.
1️⃣ Input Reception:
- System receives subject image xs containing the entity to transfer ([0067])
- System receives target image xt defining style and destination environment ([0067])
- Subject image enters personalization pipeline for identity learning ([0026])
- Target image proceeds independently to style extraction pathway ([0027])
2️⃣ Subject Personalization:
- Noise corruption at varying levels creates training variants of subject image ([0026])
- Diffusion model learns to recover original subject from noisy versions ([0091])
- LoRA deltas and token embeddings optimize jointly via denoising loss function ([0092])
- Training continues until model specifically represents subject identity with fidelity ([0028])
- Convergence produces fine-tuned model with learned auxiliary input tokens ([0030])
3️⃣ Style-Aware Generation:
- CLIP encoder processes target image to produce style embedding vector ([0093])
- IP-Adapter transforms style embedding into UNet conditioning signals ([0093])
- Style signals inject into upsampling layers of personalized diffusion model ([0093])
- Model executes conditioned on learned tokens, generating styled subject image ([0067])
- Output depicts subject with preserved identity rendered in target visual style ([0027])
4️⃣ Environment Integration:
- Styled subject segmented from generated image background using mask ([0033])
- Subject composited onto target image at user-specified location ([0034])
- Insertion model analyzes scene context to determine appropriate lighting effects ([0034])
- Model applies shadows, reflections, and occlusions matching target environment ([0034])
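Read as pseudocode, the four phases chain together roughly as in the sketch below; every helper is a trivial, hypothetical stand-in for the corresponding stage, not the patent's implementation.
```python
# End-to-end flow sketch; every helper below is a trivial stand-in for the
# corresponding phase above, not the patent's implementation.
import numpy as np

def personalize(subject_image):                  # phase 2: returns (model, tokens)
    return (lambda tokens, style: subject_image * 0.5 + style.mean()), np.zeros(2)

def encode_style(target_image):                  # phase 3a: stand-in CLIP encoder
    return target_image.mean(axis=(0, 1))

def integrate(styled_subject, target_image):     # phase 4: naive composite stand-in
    mask = (styled_subject.mean(axis=-1, keepdims=True) > 0.25).astype(float)
    return mask * styled_subject + (1 - mask) * target_image

def style_aware_drag_and_drop(subject_image, target_image):
    model, tokens = personalize(subject_image)           # subject personalization
    styled = model(tokens, encode_style(target_image))   # style-aware generation
    return integrate(styled, target_image)               # environment integration

out = style_aware_drag_and_drop(np.random.rand(64, 64, 3), np.random.rand(64, 64, 3))
```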
4. Figures
[FIG. 3A/3B: Two-stage personalization process showing diffusion model fine-tuning with learned token input "A [T1][T2]" and style injection from target image 311 via embedding 313]
[FIG. 4: Subject insertion pipeline showing segmentation 410, composite generation, and integration model 400 producing output 411 with shadows and reflections]
[FIG. 6: Method flowchart depicting steps 610 through 640 from image reception through styled subject generation to final environment integration]
5. Key Advantages
✅ Objective improvements in subject fidelity metrics including DINO and CLIP-I scores relative to StyleAlign and InstantStyle baselines ([0036], [0107])
✅ Strong style adherence measured by CSD and CLIP-T metrics while maintaining low structural overfitting indicated by SSIM values ([0108])
✅ Reduced training iterations through simultaneous embedding and weight optimization versus sequential fine-tuning approaches ([0028])
✅ Identity preservation maintaining pose, expression, and semantic attributes during cross-style transformation ([0025])
✅ Computational efficiency surpassing traditional inpainting methods that struggle with smooth regions and produce boundary artifacts ([0033])
✅ Mobile deployment capability enabling local inference execution on cellphones and tablets after remote training completion ([0035])
✅ Bootstrap adaptation requiring minimal additional supervision to extend photorealistic models to diverse stylized domains ([0037])
6. Analogy
Consider commissioning a Renaissance portrait painter to create your likeness in the style of Caravaggio. You arrive at the studio with a photograph. The painter faces a fundamental challenge: capture your distinctive features while rendering everything in dramatic chiaroscuro technique with deep shadows and warm flesh tones.
A novice might attempt to simply apply sepia filters to your photograph. The result would look vaguely old but retain modern photographic qualities while losing your specific facial characteristics in the color manipulation. This corresponds to naive style transfer that corrupts identity.
An experienced portraitist takes a different approach. First, they study your face through multiple sketches, learning the proportions of your nose, the particular curve of your jawline, the way your eyes catch light. These preliminary drawings capture your identity at two levels: overall structure through quick gesture sketches, and fine details through careful studies. This dual learning mirrors token embeddings capturing semantic identity while LoRA weights encode visual specifics.
Next, the painter studies Caravaggio's technique separately. They analyze how he handled fabric folds, skin tones, and that signature spotlight-against-darkness contrast. This style analysis remains independent from their study of your features. When painting begins, they apply Caravaggio's methods to their learned understanding of your appearance. The CLIP-IP-Adapter pathway functions similarly, extracting and applying style without overwriting identity.
Finally, the painter must integrate your figure into a period-appropriate setting. Someone lounging in a Baroque interior needs shadows that match the scene's lighting, reflections in nearby surfaces, and appropriate occlusion by foreground objects. The painter practices rendering modern clothing in Renaissance oil technique, building competence through trial and refinement. This corresponds to bootstrap adaptation where the insertion model learns stylized rendering through iterative self-training.
The finished portrait shows unmistakably you, rendered in unmistakably Baroque style, sitting naturally in a candlelit chamber. Three distinct competencies combined: knowing your appearance, knowing the style, and knowing environmental integration.
Contrary to appearances, the best-selling product in the whole wide world is not sliced bread; it is the word: DEMOCRACY
But of course, what you order on AliExpress is one thing, and what actually arrives is quite another
2 Democracy meant "Government of the People." You have to admit it is a very pretty name, the best one
3 Nowadays everything is "Democracy," the myth of the 20th century and part of the 21st
The little word is now lodged in the psyche of the masses, and that is the great triumph of Liberalism
But don't get excited: liberalism is not what they tell us it is either
The US COMMISSION ON INTERNATIONAL RELIGIOUS FREEDOM is a self-proclaimed 'freedom of religion' watchdog.
According to USCIRF, Bangladesh is not even a "Country of Particular Concern" (CPC).
1/6
USCIRF labels India as a CPC and often selects specific cases to build a particular narrative.
Check these headlines.
2/
In their 2025 Annual Report, they indeed mentioned that Hindus are being targeted in Bangladesh more for their political affiliation than their religion.
Three German soldiers, who had been caught behind American lines in American uniforms, are tied to wooden stakes and shot near the village of Henri-Chapelle, Belgium. 1/5
The three German infiltrators, Unteroffizier Manfred Pernass, Oberfähnrich Günther Billing, and Gefreiter Wilhelm Schmidt, had been part of Operation Greif under the leadership of Otto Skorzeny, the man who had led the daring mission to rescue Mussolini in September 1943. 2/5
Skorzeny had just over a month to find suitable men (English speakers) and assemble enough captured Allied vehicles (some Panther tanks were disguised as tank destroyers; photo) to pass through enemy lines once the Battle of the Bulge was under way.
Their mission was to seize vital bridges over the river Meuse at Amay, Huy, and Andenne.
3/