1. Details on the completion of 2026 HBM supply negotiations
• This year, numerous external variables made it difficult to finalize not only supply volumes but also product mix, and as customer performance requirements evolved, negotiations took longer than initially expected.
However, discussions with customers have now been concluded, and supply contracts for next year’s HBM have been finalized. Since 2023, demand for AI infrastructure and the company’s strong product competitiveness have kept HBM fully sold out. Prices are currently set at a level that allows profitability to be maintained.
Given the rapid expansion of HBM demand driven by the AI market, it will be difficult for supply to catch up with demand in the near term. Growth is expected to outpace general DRAM. Even in 2027, supply is projected to remain tight relative to demand. The company will ensure timely delivery of products tailored to customer needs.
⸻
2. On customer requests for higher HBM4 specifications
• As the AI inference market expands, memory bandwidth has become increasingly important. Since the number of I/O channels in HBM4 has been set to double that of the previous generation, customer requirements have shifted toward higher-speed performance.
With the industry’s leading HBM technology, the company is meeting these enhanced specifications. It was among the fastest in the industry to deliver samples reflecting customers’ upgraded requirements and is now prepared for mass production.
As competition intensifies among AI chipmakers, memory performance has become the key bottleneck in technological progress. Performance requirements for next-generation memory, including HBM, will continue to rise. SK hynix plans to deliver next-generation HBM products in a timely manner that meet customer expectations.
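As a back-of-the-envelope illustration of why the doubled I/O width matters (the bus widths and the per-pin data rate below are generic assumptions, not figures from the call): an HBM stack's peak bandwidth scales as interface width times per-pin data rate.

```python
def stack_bandwidth_tbs(bus_width_bits, pin_rate_gbps):
    """Peak HBM stack bandwidth in TB/s:
    (bus width / 8) bytes per transfer x per-pin rate in Gb/s / 1000."""
    return bus_width_bits / 8 * pin_rate_gbps / 1000

# Doubling the interface width at the same per-pin rate doubles peak bandwidth:
# a 1024-bit stack at 8 Gb/s per pin peaks near 1 TB/s; a 2048-bit stack near 2 TB/s.
narrow = stack_bandwidth_tbs(1024, 8.0)
wide = stack_bandwidth_tbs(2048, 8.0)
```

This is why customers chasing inference throughput push on interface width and pin speed together: either lever raises the product.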
SK hynix 3Q25 Earnings Call – Key Q&A (2)
⸻
3. Achieving over KRW 10 trillion in quarterly operating profit—how is this memory cycle different from past booms?
• Contrary to earlier expectations, memory demand has surged this year, pushing the market into a super-boom. We believe the current cycle differs from the 2017/2018 cycle.
The key difference is that today’s demand is tied to a paradigm shift toward AI and extends across a wide range of applications. AI is creating demand by layering onto existing use cases, and over the medium to long term it is driving fundamental change by cultivating new applications such as autonomous driving and robotics.
Notably, as AI computing expands into inference, it is also stimulating demand for general servers. We expect server system shipments next year to grow in the high-teens percent. Server-bound demand will continue to lead overall DRAM demand.
On the supply side, because HBM’s larger dies and stacking losses consume disproportionate wafer capacity per bit, total output growth is inherently limited even as HBM production expands and more cleanroom space is secured. These characteristics structurally constrain DRAM industry supply growth, which underpins our view that the super-cycle will be prolonged.
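A toy model of that constraint (the numbers are illustrative assumptions, not company figures): if an HBM bit consumes roughly three times the wafer capacity of a commodity DRAM bit, shifting wafer starts to HBM shrinks total bit output even with cleanrooms full.

```python
def total_bits(total_wafers, hbm_share, bits_per_wafer=100.0, hbm_capacity_penalty=3.0):
    """Total bit output when a share of wafer starts moves to HBM,
    which yields fewer bits per wafer (larger dies, stacking yield loss)."""
    hbm_wafers = total_wafers * hbm_share
    commodity_wafers = total_wafers - hbm_wafers
    return commodity_wafers * bits_per_wafer + hbm_wafers * bits_per_wafer / hbm_capacity_penalty

# Moving 30% of wafers to HBM cuts total bit output by 20% in this toy model.
```

Under these assumptions, every point of wafer share that migrates to HBM tightens commodity DRAM supply, which is the structural effect described above.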
⸻
4. Basis for seeing structural strength in eSSD demand
• Background to rising NAND demand: With server customers ramping AI investments, build-outs for both AI servers and general servers are increasing. Demand for TLC products is rising. As AI-generated data such as images and videos surges, storage needs are climbing, creating HDD supply shortages. Hyperscalers are increasingly shifting to high-capacity QLC eSSDs. We view today’s demand changes as going beyond short-term supply-demand issues and driving a structural uptrend in eSSD demand.
To overcome the limits of the traditional LLM approach, the importance of RAG (Retrieval-Augmented Generation) is growing. Instead of relying solely on trained data, RAG retrieves documents related to a query from external data before generating the final answer, enabling higher accuracy by incorporating the latest and user-specific data. Implementing RAG requires building external vector databases, and eSSDs are essential for fast retrieval. We therefore expect storage demand to rise on the back of high-performance TLC and high-capacity QLC eSSDs.
In addition, as data throughput needs during inference surge, there is a growing need to offload data previously handled on the GPU to memory and storage for more efficient AI operations. By offloading the GPU’s KV cache down to SSDs, it is possible to improve throughput per watt and reduce user response latency. As AI use cases proliferate, eSSD demand is rising, and memory demand is broadening from DRAM into NAND.
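The retrieve-then-generate flow described above can be sketched in a few lines (a schematic toy, not a production system; the document vectors and names here are invented for illustration):

```python
import math

# Toy in-memory vector store; in production this index would live in a vector
# database whose underlying storage is the eSSD capacity discussed above.
DOCS = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.0, 1.0, 0.0],
    "doc3": [0.7, 0.7, 0.0],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=2):
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# The retrieved documents are prepended to the prompt before generation,
# letting the model answer with up-to-date, user-specific context.
```

The retrieval step is pure read traffic against a large index, which is why fast, high-capacity SSDs sit on the critical path.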
SK hynix 3Q25 Earnings Call – Key Q&A (3)
⸻
5. Company’s stance on the shift toward a “specialty market” characterized by pre-orders and post-sales
• Certain business segments, including HBM, have transitioned to a pre-order and post-sales model. Given strong customer demand, we have secured visibility and responded to customer needs through annual long-term contracts negotiated up front.
As a result, unlike in the past when the memory industry and our business showed high volatility, predictability has increased, and business stability has also expanded. Starting with HBM4E, custom HBM products are being developed jointly with customers from the early design stages of their GPUs or ASICs. Such long-term and strategic partnerships with specific customers are expected to have a positive impact on the stability and profitability of our memory business.
As memory suppliers prioritize capacity allocation for HBM expansion, constraints are emerging in general memory production. This has led to a shortage in general memory supply amid rising demand. Accordingly, more customers are seeking long-term supply agreements for general memory as well. Some customers are even issuing pre-purchase POs for 2026 to hedge against potential supply shortages.
Given the strength of customer demand and our current capacity, not only HBM but also DRAM and NAND capacities for next year are effectively sold out. We will continue to respond to customer demand with optimal production and sales strategies, and we will keep discussing with customers how HBM-driven market changes will affect general memory.
⸻
6. Outlook for 2026 CapEx
• With growing confidence in the expansion and monetization of the AI market, global AI companies are competitively investing, driving rapid demand growth across various memory products such as HBM, DDR5, and eSSD. To meet this demand, an increase in memory CapEx is inevitable. Our CapEx next year will rise significantly compared to this year.
Equipment move-in at M15X has begun in earnest, and the facility will be utilized to expand HBM supply. For general DRAM and NAND, we plan to accelerate migration to advanced process nodes to address demand. Additionally, considering the construction of the first phase of the Yongin fab and preparations for the Indiana plant, infrastructure investment will continue to increase.
Even with expanding investment, we will maintain strict CapEx discipline and aim to preserve a stable financial structure.
SK hynix 3Q25 Earnings Call – Key Q&A (4)
⸻
7. Ramp-up timeline and product mix by the end of next year?
• Our principle is to prioritize products with strong demand visibility and secured profitability. Newly added capacity will primarily be allocated to HBM products, for which supply agreements have already been finalized. For general DRAM and NAND, we will respond to demand changes by transitioning to advanced process nodes.
Development of the 1c node was completed last year, and mass production began this year. It will enter full-scale ramp-up next year, with over half of domestic conventional DRAM production planned to be on the 1c node by year-end. We aim to secure profitability by establishing a high-performance, cost-competitive lineup including 1c LPDDR5 and GDDR, ensuring timely supply.
For NAND, our focus remains on improving profitability rather than expanding capacity. Productivity will be enhanced through migration to more advanced nodes. We’ve been gradually transitioning from 176-layer → 238-layer → 321-layer, and next year, we will expand not only TLC but also QLC production, entering a full-scale ramp-up of 321-layer NAND. By the end of next year, we expect over half of our NAND output at the headquarters to be based on the 321-layer process.
⸻
8. Given the strong demand in Q3, inventory levels likely declined further—what are SK Hynix’s and customers’ inventory situations?
• In the previous quarter, customer purchasing demand exceeded expectations, raising concerns about potential inventory buildup across the memory supply chain. In practice, however, AI-driven investment has accelerated consumption, and server customers’ inventories have visibly declined.
Our own inventory also decreased quarter-over-quarter for both DRAM and NAND amid strong memory demand. DRAM inventory is now at an extremely low level, with newly produced chips being shipped to customers almost immediately.
We will continue maintaining healthy inventory levels for both DRAM and NAND to ensure stable and timely supply to meet customer demand without disruption.
SK hynix 3Q25 Earnings Call – Key Q&A (5)
⸻
9. Possibility of early shipments or preparations related to M15X and the Yongin fab (May 2027)
• Regarding new fab preparation plans: To meet the substantial wafer capacity requirements driven by HBM demand, we began investing in M15X at the end of 2023. After two years of construction, M15X was opened ahead of schedule, and equipment move-in has begun. We are preparing for M15X to contribute to HBM production growth starting next year.
As memory demand is rising more steeply than previously anticipated, M15X capacity expansion is also progressing rapidly. The first phase of the Yongin fab, construction of which began in earnest this year, is also being pushed forward to accelerate the schedule in response to the pace of future memory demand growth. By establishing cutting-edge infrastructure connecting M15X and the Yongin fab, we aim to secure capacity in advance and respond flexibly to AI-driven demand.
⸻
10. With expectations for AI market growth continuing to rise, is there potential for HBM market and customer base expansion?
• Even under conservative assumptions, with major Big Tech companies expanding their investments, the HBM market is expected to grow more than 30% over the next five years. Our recently signed large-scale DRAM supply LOI with OpenAI also demonstrates the growing importance of HBM-centered AI memory in the evolution of AI technologies.
We are collaborating with not only GPU customers but also major ASIC customers as a key supplier for next-generation product development, contributing to the advancement of the industry. We expect to maintain a high market share and expand supply to meet newly emerging HBM demand.
SK hynix 3Q25 Earnings Call – Key Q&A (6)
11. DRAM spot prices are rising. What is the outlook for profitability between general DRAM and HBM?
• Due to the recent sharp increase in general DRAM prices, the profitability gap between the two products is narrowing. However, HBM profitability still remains high. If supply and demand remain tight next year, it is possible that the profit margin of general DRAM could become similar to that of HBM. We will not immediately change our capacity mix due to temporary profitability fluctuations.
Since HBM products are typically based on long-term volume agreements with customers, it is important to ensure stable supply without disruption. We reflect profitability considerations comprehensively when negotiating these long-term supply agreements.
Recently, even for general memory products, customers have started discussing pre-purchase orders and multi-year long-term contracts. The company is striving to achieve optimal productivity, and we expect this to serve as a turning point in changing the nature of the memory business.
As a leading player in AI memory, we have achieved differentiated performance based on stable profitability. Going forward, we will continue to respond to customer demand and grow optimally together with the AI market from a long-term perspective.
⸻
12. The company turned net cash this quarter. If free cash flow continues to improve, could there be any change in shareholder return policy?
• Our financial soundness has improved as 2025 performance turned out better than expected. We achieved a net cash position in Q3. As mentioned in our current shareholder return policy, we aim to maintain an appropriate level of cash to ensure stable business operations and sufficient capex even during market fluctuations.
Given the recent improvement in memory market conditions, our capex scale is also increasing, so this appropriate cash level needs to reflect that. Considering the significant growth potential of the AI memory market and our profitability, shareholders will likely agree that maintaining capex discipline and investing when necessary is the right approach.
As this is the first year of our 3-year shareholder return policy, we are not currently considering additional shareholder returns. However, we will continue to explore ways to maximize shareholder value, reflecting future market outlook and internal and external conditions.
Citi: The Semiconductor Cycle Still Has Significant Upside Potential 🧵/1
Citi analyst Christopher Danely expects global semiconductor sales to rise 16% in 2025 to reach an all-time high of $731 billion. However, he emphasized that this revenue growth is entirely driven by pricing, while shipment volumes remain well below previous peaks.
This indicates that inventory levels are low and the industry still has substantial room for further growth.
According to Citi’s data, the current semiconductor industry’s revenue growth has been primarily driven by the surge in logic chip prices.
Citi noted that the average selling price (ASP) of logic computing chips (including AI accelerators) has risen 24% over the past three years, far exceeding the 2% growth rate of the previous decade. The share of logic computing chips in total semiconductor sales has also climbed from 27% in 2020 to 39% in 2025. 🧵/2
Within this segment, logic computing revenue is expanding rapidly at a compound annual growth rate (CAGR) of 53%, rising from about $29.6 billion in 2022 to approximately $106.4 billion in 2025, with its share of total semiconductor sales jumping from 5% to 15%. 🧵/3
The average selling price of logic computing chips is expected to soar from $7.80–$8.50 during 2018–2022 to $26.40 in 2025, representing a 47% compound annual growth rate.
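The growth rates quoted in this thread can be sanity-checked with the standard CAGR formula (revenue figures from the thread; the ASP check assumes the midpoint of the quoted 2018–2022 range as the starting value):

```python
def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# Revenue: $29.6B (2022) -> $106.4B (2025), 3 years => ~53% CAGR, as quoted.
revenue_cagr = cagr(29.6, 106.4, 3)

# ASP: assumed ~$8.15 midpoint of the 2018-2022 range -> $26.40 over 3 years
# => roughly 48%, close to the quoted 47% figure.
asp_cagr = cagr(8.15, 26.4, 3)
```

Both quoted rates are consistent with the underlying dollar figures under these assumptions.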
Citi identified NVIDIA’s rapid data center expansion as the key driver behind this transformation. The company’s data center business share of logic chip sales has surged from less than 10% in 2021 to 66% in 2025, while its share of total semiconductor sales has also increased from less than 3% in 2021 to 24% in 2025.
A Semiconductor Genius Who Rejected America’s Courtship, Produces “Terrifying Disciples” in China
– Korea JoongAng Ilbo 『China Innovation Report』 🧵/1
Sun Nan (孫楠), a professor at the University of Texas at Austin. Nicknamed a “semiconductor genius,” he is one of the top talents in the field. After graduating from Tsinghua University in China, he went to the U.S. to study abroad and earned his Ph.D. at Harvard. In 2020, at the age of 34, Professor Sun faced two choices.
“Should I stay in the U.S., or return to China….”
If he chose the latter, the opportunity cost was immense. At that time, his position at the University of Texas came with an average annual salary of $150,000 (about 200 million won). Appointed at just 26 years old, he also enjoyed the security of tenure. Boarding a plane to Beijing meant giving all that up.
The other option was his alma mater, Tsinghua University. The salary there would be about 1 million yuan (around 192.76 million won). Including elite talent funds, it would easily surpass what he earned in the U.S. Since his alma mater was earnestly reaching out to him, his heart began to sway. The school even promised to build him a research team.
What did he choose?
America or China. Like a scene from The Matrix, after much deliberation, the “red pill” Sun Nan swallowed was becoming a haigui (海龜 – a term for overseas-educated returnees). In 2020, Sun rejected America’s temptations and returned to China.
He was the kind of talent any country would covet. From 2002 to 2006, he studied at Tsinghua University, then went to the U.S. to master cutting-edge semiconductor technology. After just four years of research, he earned his Ph.D. in integrated circuit design from Harvard. He went on to secure tenure at the University of Texas. Between 2014 and 2024, he was the most prolific author in the world’s top semiconductor design journal, JSSC. From 2013 to 2020, he also consulted for companies such as AMD and TI, maintaining close ties with industry. To say he never considered staying in America would be a lie.
If you look at Professor Sun’s interview published in Tsinghua University’s campus newspaper, you can read what he was thinking at the time of his return.
“I went to the United States to learn advanced semiconductor technology. I achieved a lot there, but I believed that as far as my abilities reached, it was only right to contribute to China. That is what a Tsinghua graduate should do.” 🧵/2
The scent of “national pride” is unmistakable. Let’s listen to more of the interview.
“China manufactures and exports smartphones, home appliances, and other products. It is the world’s largest exporter. But to actually make those products, China also has to import more semiconductors than any other country. That was something I found regrettable.”
With a tangible goal before him, he decided to dedicate his life to China’s semiconductor self-sufficiency. Perhaps he phrased it that way because it was for a campus newspaper, but his sincerity still comes through clearly.🧵/3
Morgan Stanley: According to channel checks, the contract negotiations between SK Hynix and NVIDIA for HBM3E 12-hi remain in the price range of around $440 per die ($1.69 per Gb). However, the negotiations are still ongoing and have not yet been finalized.
Morgan Stanley: According to channel checks, HBM4 has been priced at $590–600 per die ($2.30–2.34 per Gb), but it is deemed unlikely that this pricing will be part of a full-year fixed contract.
Morgan Stanley: SK Hynix’s share of NVIDIA’s business is expected to decline from 85–90% in 2025 to just over 50% in 2026, driven by intensified competition from Samsung and Micron.
🧵1/ The HBM That Jensen Huang Called a Miracle Made with Samsung: What Happened Over the Past 9 Years? – JoongAng Daily
“The High-Bandwidth Memory (HBM)? It’s just a cheap knockoff of the Hybrid Memory Cube (HMC) we’re developing. I won’t bother debating HBM—arguing over it is like taking candy from a child.”
In August 2016, at the Hot Chips semiconductor symposium held in Cupertino, California, a senior engineer from Micron harshly criticized HBM, which had been on the market for just one year, and then triumphantly left the stage, saying, “The HMC that’s coming out soon is going to be spectacular.”
Shortly thereafter, Samsung Electronics announced plans to release a low-cost version of HBM that would reduce both performance and price. At that time, Samsung had already dominated the latest HBM2 market.
SK Hynix was the last to present. At that time, both performance and sales of SK Hynix’s HBM2 were a complete flop. Nevertheless, the presenter timidly struck back at Micron by unveiling plans for HBM3, saying, “A smaller kid can sometimes beat a bigger kid and snatch their candy.”
It’s hard to believe that HBM was once so ridiculed and doubted—now, in the era of artificial intelligence (AI), HBM is hailed as the “prince of memory.” Yet, only nine years ago, the three major memory companies—Samsung Electronics, SK Hynix, and Micron—viewed HBM in exactly this way. Their judgments then have determined today’s market landscape.
2/ Market Shares of HBM: 55 : 40 : 5. Those are the market shares of SK Hynix, Samsung Electronics, and Micron respectively (source: J.P. Morgan, 2024).
Since the 2000s, the semiconductor industry has been focused on developing next-generation memory to overcome the “memory wall.” The “memory wall” refers to a bottleneck phenomenon arising from the performance gap between the central processing unit (CPU) and memory. Even if the CPU swiftly completes its computations, it often has to wait idly for the next piece of data to arrive from memory. How much memory speed can be improved directly determines the speed of the entire system.
HBM was born as a “memory unlike any other in the world” in this context. The “18 years of HBM development” showcase the very nature of the advanced semiconductor industry—its competition and cooperation, chasing and overtaking drama.
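The memory-wall tradeoff described above can be made concrete with a roofline-style back-of-the-envelope calculation (the peak-compute and bandwidth figures below are hypothetical, chosen only for illustration):

```python
def bound_by(flops_per_byte, peak_flops=100e12, mem_bandwidth=1e12):
    """Classify a workload as memory- or compute-bound on a machine with the
    given peak compute (FLOP/s) and memory bandwidth (bytes/s)."""
    # FLOPs needed per byte fetched to keep the arithmetic units busy.
    ridge = peak_flops / mem_bandwidth
    return "memory-bound" if flops_per_byte < ridge else "compute-bound"

# With 100 TFLOP/s of compute but only 1 TB/s of bandwidth, any workload doing
# fewer than 100 FLOPs per byte fetched leaves the processor waiting on memory.
```

Raising memory bandwidth lowers that ridge point, which is exactly the lever HBM was designed to pull.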
3/ 1. When “Always-Second” AMD and SK Hynix Met
When a No. 1 company partners with another No. 1, success seems assured, and it is hard to imagine much coming of two perpetual No. 2s joining forces. But the semiconductor world is different.
In 2007, AMD—the perpetual second place in the graphics processing unit (GPU) market—embarked on a project to develop a GPU equipped with “a new type of memory called HBM.”
It was not foresight into the AI market. At that time, the concept of a “GPU for AI” did not exist. It was only in 2011 that researchers at Google, Stanford University, and New York University discovered that GPUs were suitable for deep learning computations, and in 2012 that Geoffrey Hinton’s team at the University of Toronto won an AI competition using GPUs (Hinton later received the Nobel Prize in 2024). The HBM project started simply because AMD wanted to increase its market share in gaming GPUs.
Back then, Hynix (now SK Hynix) had just emerged from five years of creditor-led workout (2001–2005), which followed Hyundai Group’s relinquishment of management control over the former Hyundai Electronics, and was often mentioned as a potential acquisition target. Its financial results had only just reached break-even. Nevertheless, Hynix joined AMD as an HBM partner in 2010. The interposer (middle substrate) was to be made by UMC in Taiwan, packaging by ASE in Taiwan and Amkor in the U.S., and manufacturing by TSMC.
🧵How China Took Over the LiDAR Industry: The Comeback of a "Fool's Tech"
While Elon Musk’s Tesla rejected LiDAR, China quietly took over the industry.
This thread unpacks how it all started—and where it stands now. 👇
The Eyes of Autonomy
LiDAR (Light Detection and Ranging) is a sensor that sweeps laser beams through a full 360 degrees to detect its surroundings quickly and accurately in 3D. Because the speed of light is constant (299,792,458 m/s), it can calculate distance from how long a laser pulse takes to bounce off an object and return.
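That time-of-flight calculation reduces to one line of arithmetic (a minimal sketch; the division by two accounts for the pulse traveling out and back):

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def lidar_distance_m(round_trip_seconds):
    """Distance to a target from a LiDAR pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 1 microsecond implies a target about 150 m away.
```

Real sensors repeat this measurement millions of times per second across many beam angles to build the 3D point cloud.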
Today, LiDAR is a key component in premium robotic vacuum cleaners and even in Apple iPhones—its scanner enables sharper night portraits. But LiDAR plays its most essential role in autonomous driving. Whether it's pitch dark, raining, snowing, or foggy, LiDAR helps vehicles detect the size and position of objects, making self-driving possible.
In 2023, about 1.6 million LiDAR sensors were installed in vehicles worldwide, ranging from fully autonomous robotaxis like Google Waymo to modern vehicles with advanced driver-assistance systems. For instance, the BMW i7 uses Israel-based Innoviz's LiDAR, and Volvo’s EX90 uses U.S.-based Luminar's.
LiDAR was first developed in the 1960s for military use in the U.S. But it wasn’t until the past decade—when sensing range exceeded 200 meters and devices shrank in size—that vehicle-grade LiDAR entered mass production. Automakers rushed to partner with LiDAR firms in hopes of achieving Level 3 autonomy (hands-free driving) by 2021 or 2022.
Back then, optimism fueled a boom in startups and IPOs. But Level 3 autonomy never arrived. LiDAR development proved extremely expensive, and as interest rates rose, the industry—already drowning in red ink—was hit hard. Massive layoffs followed.
In 2022, Ford and Volkswagen shut down their joint venture Argo AI. Germany’s Ibeo and America’s Quanergy Systems filed for bankruptcy. Even Velodyne, once an industry leader, merged with competitor Ouster in 2023.
Of the 80+ companies in the vehicle LiDAR space around 2020, fewer than 20 remain. Even survivors like Luminar—whose rise once made founder Austin Russell the youngest self-made billionaire—have suffered. Luminar’s stock, which peaked at $71.70 in December 2020, now trades at $3.63, a drop of roughly 95%. Innoviz has seen a similar collapse.
These days, it's no exaggeration to say that Samsung's stock price practically hinges on NVIDIA.
But did you know that back in the 2000s, there was a time when NVIDIA was desperately clinging to Samsung?
This thread covers it in detail.🧵
Jensen Huang founded NVIDIA in 1993. At the time, Samsung Electronics was ascendant, holding the No. 1 position in the global DRAM (memory semiconductor) market. Comparing the two companies back then would have been absurd.
But the relationship between Samsung Electronics and NVIDIA has completely reversed over the past 30 years. Today, NVIDIA is the world's No.1 company in terms of revenue, and Samsung has, for the first time, fallen behind SK Hynix — NVIDIA’s memory supplier — in DRAM market share.