Intel is planning a $100 million strategic investment in SambaNova Systems, an AI chip startup developing purpose-built hardware for large language model training and inference, according to reports published by financial news outlets on February 9, 2026, citing sources familiar with the matter. The investment forms part of new CEO Lip-Bu Tan’s broader strategy to reposition Intel as a meaningful competitor in an AI accelerator market currently dominated almost exclusively by Nvidia.
SambaNova was founded in 2017 by Stanford professor Kunle Olukotun alongside veterans of Sun Microsystems and Oracle, and has developed the Reconfigurable Dataflow Unit (RDU), an architecture optimised specifically for AI workloads. Unlike traditional GPUs, which are designed for parallel floating-point computation across a wide range of graphical applications, SambaNova’s RDUs are built around the dataflow patterns characteristic of neural network training and inference: the large matrix multiplications, attention computations, and high-bandwidth memory access that define modern transformer models.
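As a rough illustration (not drawn from the reporting), the sketch below estimates where the arithmetic in a single transformer decoder layer goes, using assumed, roughly Llama-7B-scale dimensions; the point is that a handful of large matrix multiplications account for nearly all of the compute, which is the kind of workload profile that dataflow-style accelerators target.

```python
def transformer_layer_flops(d_model: int, d_ff: int, seq_len: int) -> dict:
    """Approximate forward-pass FLOPs for one decoder layer (dense matmuls only)."""
    # Q, K, V and output projections: four (seq_len x d_model) @ (d_model x d_model) matmuls
    projections = 4 * 2 * seq_len * d_model * d_model
    # Attention scores and weighted sum: two matmuls that scale with seq_len squared
    attention = 2 * 2 * seq_len * seq_len * d_model
    # Feed-forward up and down projections: (seq_len x d_model) @ (d_model x d_ff) and back
    ffn = 2 * 2 * seq_len * d_model * d_ff
    return {"projections": projections, "attention": attention, "ffn": ffn}

# Assumed dimensions, loosely based on a 7B-parameter model at a 2,048-token context
flops = transformer_layer_flops(d_model=4096, d_ff=11008, seq_len=2048)
total = sum(flops.values())
for name, value in flops.items():
    print(f"{name:>12}: {value / 1e9:9.1f} GFLOPs  ({100 * value / total:4.1f}%)")
```

At these dimensions the projection and feed-forward matmuls dominate, which is why specialised AI accelerators concentrate on keeping large matrix units fed with data rather than on general-purpose graphics throughput.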
The investment signals Intel’s recognition that its current AI accelerator portfolio, centred on the Gaudi series of chips, has failed to meaningfully challenge Nvidia’s market dominance. Gaudi 3, released in 2025, achieved competitive performance on specific benchmarks but struggled to replicate the ecosystem advantages that make Nvidia’s GPUs the default choice: a mature CUDA programming framework with years of optimised libraries, extensive customer support infrastructure, and first-mover relationships with every major cloud provider and AI lab.
Rather than continuing to compete head-on with Nvidia through conventional GPU architectures where Nvidia’s advantages compound, Intel is pursuing a two-track strategy. The organic track continues developing Gaudi successors targeting the large enterprise market. The investment track, exemplified by the SambaNova deal, explores whether specialised AI architecture approaches can carve defensible niches in segments where general-purpose GPU programming overhead represents a genuine inefficiency.
Lip-Bu Tan, who took over as Intel CEO after Pat Gelsinger’s departure, has emphasised urgency in rebuilding Intel’s relevance in the AI era. His background as founding managing director of Walden International, a semiconductor-focused venture firm, suggests comfort with investment as a strategic tool alongside internal R&D. Under his leadership, Intel has also signalled intentions to develop data-centre GPUs from scratch using new architectural approaches rather than evolving existing designs.
The hyperscaler context matters enormously here. Amazon, Google, Microsoft, and Meta are all developing custom AI silicon to reduce Nvidia dependence and improve price-performance for their specific workloads. If Intel’s investment in SambaNova helps one or more hyperscalers deploy non-Nvidia hardware at meaningful scale, and if SambaNova’s chips are manufactured by Intel Foundry, it could generate substantial manufacturing revenue for Intel and shift the competitive dynamics of the AI chip market.
For SambaNova, the Intel investment provides capital, manufacturing partnership access through Intel Foundry Services, and distribution credibility that helps enterprise sales teams position the technology against well-funded competitors including Groq, Cerebras, and Tenstorrent.