<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet href="https://rss.buzzsprout.com/styles.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:psc="http://podlove.org/simple-chapters" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
  <atom:link href="https://rss.buzzsprout.com/2570635.rss" rel="self" type="application/rss+xml" />
  <atom:link href="https://pubsubhubbub.appspot.com/" rel="hub" xmlns="http://www.w3.org/2005/Atom" />
  <title>Semi Doped</title>

  <lastBuildDate>Fri, 13 Mar 2026 21:30:07 -0400</lastBuildDate>
  <link>http://semidoped.fm</link>
  <language>en-us</language>
  <copyright>© 2026 Semi Doped</copyright>
  <podcast:locked>yes</podcast:locked>
  <podcast:guid>e32e690f-adec-5ec2-816f-ec6fc4536c5b</podcast:guid>
  <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
  <itunes:type>episodic</itunes:type>
  <itunes:explicit>false</itunes:explicit>
  <description><![CDATA[<p>The business and technology of semiconductors. Alpha for engineers and investors alike.</p>]]></description>
  <generator>Buzzsprout (https://www.buzzsprout.com)</generator>
  <itunes:keywords>semiconductors, tech, ai, datacenter, optics, memory, infrastructure, computing, power</itunes:keywords>
  <itunes:owner>
    <itunes:name>Vikram Sekar and Austin Lyons</itunes:name>
  </itunes:owner>
  <image>
     <url>https://storage.buzzsprout.com/3x9iw6bg497vb5g37628m1y2nnx5?.jpg</url>
     <title>Semi Doped</title>
     <link>http://semidoped.fm</link>
  </image>
  <itunes:image href="https://storage.buzzsprout.com/3x9iw6bg497vb5g37628m1y2nnx5?.jpg" />
  <itunes:category text="Technology" />
  <podcast:person role="host" href="https://www.linkedin.com/in/austinlyons/" img="https://storage.buzzsprout.com/tihw7shkt8nlg7ki4sv153x14zot">Austin Lyons</podcast:person>
  <podcast:person role="host">Vikram Sekar</podcast:person>
  <item>
    <itunes:title>Meta&#39;s Inference Accelerator &amp; Applied Optoelectronics (AAOI)</itunes:title>
    <title>Meta&#39;s Inference Accelerator &amp; Applied Optoelectronics (AAOI)</title>
    <itunes:summary><![CDATA[Austin recaps moderating an agentic AI panel at Synopsys Converge, then gives an in-depth technical breakdown of Meta's MTIA custom silicon. Why they're building it, how chiplets let them ship a new chip every 6 months, and how the roadmap is shifting toward gen AI inference. Vik digs into Applied Optoelectronics (AAOI), the vertically integrated Texas laser shop whose stock went from $1.48 to $100+, and whether history is about to rhyme. ...]]></itunes:summary>
    <description><![CDATA[<p>Austin recaps moderating an agentic AI panel at Synopsys Converge, then gives an in-depth technical breakdown of Meta&apos;s MTIA custom silicon. Why they&apos;re building it, how chiplets let them ship a new chip every 6 months, and how the roadmap is shifting toward gen AI inference. Vik digs into Applied Optoelectronics (AAOI), the vertically integrated Texas laser shop whose stock went from $1.48 to $100+, and whether history is about to rhyme.<br/><br/>Austin Lyons: https://www.chipstrat.com<br/>Vik Sekar: https://www.viksnewsletter.com/<br/><br/>Topics covered:<br/>• Agentic AI in chip design — how it changes roles for junior and senior engineers<br/>• Optical circuit switching and what it means for Arista&apos;s business model<br/>• Meta&apos;s ad-serving pipeline: Andromeda, Lattice, and the GEM foundation model<br/>• Why custom silicon (MTIA) makes sense at Meta&apos;s scale<br/>• MTIA chiplet strategy — 4 generations in 2 years<br/>• AAOI&apos;s vertical integration, Amazon&apos;s $4B warrant deal, and the 2017 parallel<br/><br/>Chapters:<br/>0:00 Intro<br/>1:26 Synopsys Converge — Agentic AI Panel<br/>9:44 Vik&apos;s Article: Optical Circuit Switching &amp; Arista<br/>14:43 Meta MTIA — A New Chip Every 6 Months<br/>21:32 Why Custom Silicon Makes Sense for Meta<br/>27:22 MTIA Chiplet Strategy &amp; Roadmap<br/>33:56 Gen AI Fits Meta&apos;s Business Model<br/>36:31 How Meta Ships Chips So Fast<br/>40:30 Applied Optoelectronics (AAOI) Deep Dive<br/>45:02 Amazon&apos;s $4B Warrant Deal<br/>48:54 Can AAOI&apos;s Lasers Compete with Lumentum?<br/>53:16 AAOI&apos;s Aggressive Capacity Buildout<br/>55:35 History Rhymes: AAOI&apos;s 2017 Boom &amp; Bust<br/>1:00:55 Wrap-Up<br/><br/>#semiconductors #chips #tech #meta #MTIA #AAOI #optics #inference #AI</p>]]></description>
    <content:encoded><![CDATA[<p>Austin recaps moderating an agentic AI panel at Synopsys Converge, then gives an in-depth technical breakdown of Meta&apos;s MTIA custom silicon. Why they&apos;re building it, how chiplets let them ship a new chip every 6 months, and how the roadmap is shifting toward gen AI inference. Vik digs into Applied Optoelectronics (AAOI), the vertically integrated Texas laser shop whose stock went from $1.48 to $100+, and whether history is about to rhyme.<br/><br/>Austin Lyons: https://www.chipstrat.com<br/>Vik Sekar: https://www.viksnewsletter.com/<br/><br/>Topics covered:<br/>• Agentic AI in chip design — how it changes roles for junior and senior engineers<br/>• Optical circuit switching and what it means for Arista&apos;s business model<br/>• Meta&apos;s ad-serving pipeline: Andromeda, Lattice, and the GEM foundation model<br/>• Why custom silicon (MTIA) makes sense at Meta&apos;s scale<br/>• MTIA chiplet strategy — 4 generations in 2 years<br/>• AAOI&apos;s vertical integration, Amazon&apos;s $4B warrant deal, and the 2017 parallel<br/><br/>Chapters:<br/>0:00 Intro<br/>1:26 Synopsys Converge — Agentic AI Panel<br/>9:44 Vik&apos;s Article: Optical Circuit Switching &amp; Arista<br/>14:43 Meta MTIA — A New Chip Every 6 Months<br/>21:32 Why Custom Silicon Makes Sense for Meta<br/>27:22 MTIA Chiplet Strategy &amp; Roadmap<br/>33:56 Gen AI Fits Meta&apos;s Business Model<br/>36:31 How Meta Ships Chips So Fast<br/>40:30 Applied Optoelectronics (AAOI) Deep Dive<br/>45:02 Amazon&apos;s $4B Warrant Deal<br/>48:54 Can AAOI&apos;s Lasers Compete with Lumentum?<br/>53:16 AAOI&apos;s Aggressive Capacity Buildout<br/>55:35 History Rhymes: AAOI&apos;s 2017 Boom &amp; Bust<br/>1:00:55 Wrap-Up<br/><br/>#semiconductors #chips #tech #meta #MTIA #AAOI #optics #inference #AI</p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18843530-meta-s-inference-accelerator-applied-optoelectronics-aaoi.mp3" length="44641225" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/7vluncfb68jh4joovkekfp4p60pg?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18843530</guid>
    <pubDate>Fri, 13 Mar 2026 13:00:00 -0700</pubDate>
    <podcast:transcript url="https://www.buzzsprout.com/2570635/18843530/transcript" type="text/html" />
    <podcast:transcript url="https://www.buzzsprout.com/2570635/18843530/transcript.json" type="application/json" />
    <podcast:transcript url="https://www.buzzsprout.com/2570635/18843530/transcript.srt" type="application/x-subrip" />
    <podcast:transcript url="https://www.buzzsprout.com/2570635/18843530/transcript.vtt" type="text/vtt" />
    <itunes:duration>3716</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>13</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>The Great Optics-Copper Crossroads</itunes:title>
    <title>The Great Optics-Copper Crossroads</title>
    <itunes:summary><![CDATA[Austin and Vik break down the optics vs. copper debate that rocked semis this week. Nvidia dropped $4 billion on Lumentum and Coherent, Credo posted a blowout quarter betting on copper, and then Hock Tan shocked everyone by claiming 400G per lane works over copper in Broadcom’s labs — potentially pushing CPO out to 2030+. Plus, Vik’s 4D chess conspiracy theory on why Hock Tan is talking up copper when Broadcom is a CPO company. Like, subscribe, and drop your thoughts on the copper vs...]]></itunes:summary>
    <description><![CDATA[<p>Austin and Vik break down the optics vs. copper debate that rocked semis this week. Nvidia dropped $4 billion on Lumentum and Coherent, Credo posted a blowout quarter betting on copper, and then Hock Tan shocked everyone by claiming 400G per lane works over copper in Broadcom’s labs — potentially pushing CPO out to 2030+. Plus, Vik’s 4D chess conspiracy theory on why Hock Tan is talking up copper when Broadcom is a CPO company.<br/><br/>Like, subscribe, and drop your thoughts on the copper vs. optics debate in the comments!<br/><br/>Subscribe to our newsletters:<br/>* Chipstrat by Austin Lyons — chipstrat.com<br/>* Vik’s Semiconductor Newsletter by Vik Sekar — viksnewsletter.com</p><p>Chapters<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8'>00:00</a>) - Newsletter Plugs: Groq LPUs &amp; Broadcom’s Laser Business<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=195s'>03:15</a>) - Dynamo &amp; the Rise of Workload-Specific Hardware<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=484s'>08:04</a>) - Austin’s Broadcom Laser Deep Dive<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=593s'>09:53</a>) - The Week’s Whiplash: Optics Monday, Copper Wednesday<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1070s'>17:50</a>) - Why Nvidia Invested $4B: Geopolitics, Supply &amp; the HBM Playbook<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1455s'>24:15</a>) - CPO Lasers &amp; Optical Circuit Switches<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1576s'>26:16</a>) - Credo Earnings: 200% YoY Growth &amp; the Copper Bull Case<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1869s'>31:09</a>) - Reliability, AECs &amp; Oracle’s GPU Cluster Problem<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2148s'>35:48</a>) - Credo’s Optics Play: Micro-LED Active Cables &amp; the CPO Timing Risk<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2325s'>38:45</a>) - Broadcom Earnings: Hock Tan’s Copper Bombshell<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2614s'>43:34</a>) - Customer-Owned Tooling: Hock Tan Says “Good Luck”<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2665s'>44:25</a>) - Vik’s 4D Chess Theory: Why Hock Tan Talks Up Copper<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2823s'>47:03</a>) - Wrap-Up: It’s Both — The Real Question Is Timing</p>]]></description>
    <content:encoded><![CDATA[<p>Austin and Vik break down the optics vs. copper debate that rocked semis this week. Nvidia dropped $4 billion on Lumentum and Coherent, Credo posted a blowout quarter betting on copper, and then Hock Tan shocked everyone by claiming 400G per lane works over copper in Broadcom’s labs — potentially pushing CPO out to 2030+. Plus, Vik’s 4D chess conspiracy theory on why Hock Tan is talking up copper when Broadcom is a CPO company.<br/><br/>Like, subscribe, and drop your thoughts on the copper vs. optics debate in the comments!<br/><br/>Subscribe to our newsletters:<br/>* Chipstrat by Austin Lyons — chipstrat.com<br/>* Vik’s Semiconductor Newsletter by Vik Sekar — viksnewsletter.com</p><p>Chapters<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8'>00:00</a>) - Newsletter Plugs: Groq LPUs &amp; Broadcom’s Laser Business<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=195s'>03:15</a>) - Dynamo &amp; the Rise of Workload-Specific Hardware<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=484s'>08:04</a>) - Austin’s Broadcom Laser Deep Dive<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=593s'>09:53</a>) - The Week’s Whiplash: Optics Monday, Copper Wednesday<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1070s'>17:50</a>) - Why Nvidia Invested $4B: Geopolitics, Supply &amp; the HBM Playbook<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1455s'>24:15</a>) - CPO Lasers &amp; Optical Circuit Switches<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1576s'>26:16</a>) - Credo Earnings: 200% YoY Growth &amp; the Copper Bull Case<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=1869s'>31:09</a>) - Reliability, AECs &amp; Oracle’s GPU Cluster Problem<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2148s'>35:48</a>) - Credo’s Optics Play: Micro-LED Active Cables &amp; the CPO Timing Risk<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2325s'>38:45</a>) - Broadcom Earnings: Hock Tan’s Copper Bombshell<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2614s'>43:34</a>) - Customer-Owned Tooling: Hock Tan Says “Good Luck”<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2665s'>44:25</a>) - Vik’s 4D Chess Theory: Why Hock Tan Talks Up Copper<br/>(<a href='https://www.youtube.com/watch?v=47cQTPjDUB8&amp;t=2823s'>47:03</a>) - Wrap-Up: It’s Both — The Real Question Is Timing</p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18804545-the-great-optics-copper-crossroads.mp3" length="34810143" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/6p64rbs6p4dahhgx82zrpdmisf7y?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18804545</guid>
    <pubDate>Fri, 06 Mar 2026 17:00:00 -0800</pubDate>
    <itunes:duration>2896</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>12</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Optical Supply Chain: What would you buy?</itunes:title>
    <title>Optical Supply Chain: What would you buy?</title>
    <itunes:summary><![CDATA[This week, we move from optics technology to optics companies. We walk the AI optical supply chain from bottom to top. Main debate: Who has a moat? Who is already priced for perfection?  *Not investment advice, do your own due diligence* AXTI - Indium phosphide substrate supplier. Critical bottleneck in the laser stack. Major China export-control risk. Massive stock run vs thin earnings. Tower Semiconductor - Leading silicon photonics foundry. 5x capacity expansion with customer prepayme...]]></itunes:summary>
    <description><![CDATA[<p>This week, we move from optics technology to optics companies. We walk the AI optical supply chain from bottom to top. Main debate: Who has a moat? Who is already priced for perfection? *Not investment advice, do your own due diligence*</p><p><b>AXTI - </b>Indium phosphide substrate supplier. Critical bottleneck in the laser stack. Major China export-control risk. Massive stock run vs thin earnings.</p><p><b>Tower Semiconductor - </b>Leading silicon photonics foundry. 5x capacity expansion with customer prepayments. Strong process lock-in. Pure-play optics exposure.</p><p><b>GlobalFoundries - </b>300mm monolithic photonics platform + CHIPS Act support. Optics growing fast but still small piece of overall business.</p><p><b>Lumentum - </b>Dominant EML laser supplier. Explosive AI demand. Strong technical moat. Valuation and capex sensitivity are key risks.</p><p><b>Coherent - </b>Vertically integrated from substrate to module. 6-inch InP push could lower costs structurally. Execution and margin mix matter.</p><p><b>Fabrinet - </b>Optics assembly partner. High NVIDIA exposure. Scales with industry, but dependent on upstream supply.</p><p><b>Corning - </b>AI data centers require far more fiber than traditional cloud. $6B Meta deal adds visibility. Timing of scale-up optics is the swing factor.</p><p><b>Timestamps</b><br/>00:01 Intro<br/>06:59 AXT $AXTI<br/>13:38 Tower Semiconductor $TSEM<br/>23:58 GlobalFoundries $GFS<br/>32:43 Lumentum $LITE<br/>39:38 Coherent $COHR<br/>47:09 Fabrinet $FN<br/>54:07 Corning $GLW</p><p>Austin&apos;s Substack: https://www.chipstrat.com/</p><p>Vik&apos;s Substack: https://www.viksnewsletter.com/</p>]]></description>
    <content:encoded><![CDATA[<p>This week, we move from optics technology to optics companies. We walk the AI optical supply chain from bottom to top. Main debate: Who has a moat? Who is already priced for perfection? *Not investment advice, do your own due diligence*</p><p><b>AXTI - </b>Indium phosphide substrate supplier. Critical bottleneck in the laser stack. Major China export-control risk. Massive stock run vs thin earnings.</p><p><b>Tower Semiconductor - </b>Leading silicon photonics foundry. 5x capacity expansion with customer prepayments. Strong process lock-in. Pure-play optics exposure.</p><p><b>GlobalFoundries - </b>300mm monolithic photonics platform + CHIPS Act support. Optics growing fast but still small piece of overall business.</p><p><b>Lumentum - </b>Dominant EML laser supplier. Explosive AI demand. Strong technical moat. Valuation and capex sensitivity are key risks.</p><p><b>Coherent - </b>Vertically integrated from substrate to module. 6-inch InP push could lower costs structurally. Execution and margin mix matter.</p><p><b>Fabrinet - </b>Optics assembly partner. High NVIDIA exposure. Scales with industry, but dependent on upstream supply.</p><p><b>Corning - </b>AI data centers require far more fiber than traditional cloud. $6B Meta deal adds visibility. Timing of scale-up optics is the swing factor.</p><p><b>Timestamps</b><br/>00:01 Intro<br/>06:59 AXT $AXTI<br/>13:38 Tower Semiconductor $TSEM<br/>23:58 GlobalFoundries $GFS<br/>32:43 Lumentum $LITE<br/>39:38 Coherent $COHR<br/>47:09 Fabrinet $FN<br/>54:07 Corning $GLW</p><p>Austin&apos;s Substack: https://www.chipstrat.com/</p><p>Vik&apos;s Substack: https://www.viksnewsletter.com/</p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18762736-optical-supply-chain-what-would-you-buy.mp3" length="44478824" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/jbgseozthxbrysrws8vuejopa6fp?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18762736</guid>
    <pubDate>Fri, 27 Feb 2026 12:00:00 -0800</pubDate>
    <itunes:duration>3702</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>11</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Optical Networking Supercycle - ALL the Tech You NEED to know</itunes:title>
    <title>Optical Networking Supercycle - ALL the Tech You NEED to know</title>
    <itunes:summary><![CDATA[Austin and Vik delve into the evolving landscape of optics and networking, particularly in relation to AI and data centers. The conversation covers various scales of networking, including scale across, scale out, and scale up, while also addressing the demand-supply dynamics in laser manufacturing and the future of optical circuit switches. The episode highlights the technological advancements and market opportunities in the optics sector, emphasizing the significance of these development...]]></itunes:summary>
    <description><![CDATA[<p>Austin and Vik delve into the evolving landscape of optics and networking, particularly in relation to AI and data centers. <br/><br/>The conversation covers various scales of networking, including scale across, scale out, and scale up, while also addressing the demand-supply dynamics in laser manufacturing and the future of optical circuit switches. <br/><br/>The episode highlights the technological advancements and market opportunities in the optics sector, emphasizing the significance of these developments for the future of AI.<br/><br/><b>Takeaways</b></p><ul><li>Silicon photonics is becoming crucial for data center connectivity.</li><li>Optics is essential for overcoming copper&apos;s limitations in speed and distance.</li><li>Scale across technology is vital for connecting data centers.</li><li>Scale out optics is the standard for connecting GPUs between racks.</li><li>Co-packaged optics can reduce energy consumption in data centers.</li><li>The scale up market for optics is emerging as a new opportunity.</li><li>Indium phosphide wafers are a critical bottleneck in laser manufacturing.</li><li>Optical circuit switches are gaining traction in data centers.</li><li>2026 is anticipated to be a pivotal year for optical networking. </li></ul><p><br/><b>Chapters</b><br/><br/>00:00 Introduction to AI and CPU Bottlenecks<br/>03:00 The Rise of Silicon Photonics<br/>06:01 Understanding Optical Networking and Data Centers<br/>08:49 Scale Across: Connecting Data Centers<br/>11:56 Scale Out: Optimizing Data Center Connectivity<br/>14:53 Scale Up: The Future of GPU Connectivity<br/>23:32 The Shift from Copper to Optical Connections<br/>26:13 Challenges and Reliability of Lasers<br/>30:47 Understanding Co-Packaged Optics<br/>34:17 Market Dynamics: Demand and Supply of Lasers<br/>40:46 Emerging Technologies: Optical Circuit Switches<br/><br/>Check out Austin&apos;s Substack: <a href='https://www.chipstrat.com'>https://www.chipstrat.com</a><br/>Check out Vik&apos;s Substack: <a href='https://www.viksnewsletter.com'>https://www.viksnewsletter.com</a></p>]]></description>
    <content:encoded><![CDATA[<p>Austin and Vik delve into the evolving landscape of optics and networking, particularly in relation to AI and data centers. <br/><br/>The conversation covers various scales of networking, including scale across, scale out, and scale up, while also addressing the demand-supply dynamics in laser manufacturing and the future of optical circuit switches. <br/><br/>The episode highlights the technological advancements and market opportunities in the optics sector, emphasizing the significance of these developments for the future of AI.<br/><br/><b>Takeaways</b></p><ul><li>Silicon photonics is becoming crucial for data center connectivity.</li><li>Optics is essential for overcoming copper&apos;s limitations in speed and distance.</li><li>Scale across technology is vital for connecting data centers.</li><li>Scale out optics is the standard for connecting GPUs between racks.</li><li>Co-packaged optics can reduce energy consumption in data centers.</li><li>The scale up market for optics is emerging as a new opportunity.</li><li>Indium phosphide wafers are a critical bottleneck in laser manufacturing.</li><li>Optical circuit switches are gaining traction in data centers.</li><li>2026 is anticipated to be a pivotal year for optical networking. </li></ul><p><br/><b>Chapters</b><br/><br/>00:00 Introduction to AI and CPU Bottlenecks<br/>03:00 The Rise of Silicon Photonics<br/>06:01 Understanding Optical Networking and Data Centers<br/>08:49 Scale Across: Connecting Data Centers<br/>11:56 Scale Out: Optimizing Data Center Connectivity<br/>14:53 Scale Up: The Future of GPU Connectivity<br/>23:32 The Shift from Copper to Optical Connections<br/>26:13 Challenges and Reliability of Lasers<br/>30:47 Understanding Co-Packaged Optics<br/>34:17 Market Dynamics: Demand and Supply of Lasers<br/>40:46 Emerging Technologies: Optical Circuit Switches<br/><br/>Check out Austin&apos;s Substack: <a href='https://www.chipstrat.com'>https://www.chipstrat.com</a><br/>Check out Vik&apos;s Substack: <a href='https://www.viksnewsletter.com'>https://www.viksnewsletter.com</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18716580-optical-networking-supercycle-all-the-tech-you-need-to-know.mp3" length="33266145" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/wer4np8npntws3a6kc9z9jkab48v?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18716580</guid>
    <pubDate>Thu, 19 Feb 2026 23:00:00 -0800</pubDate>
    <itunes:duration>2768</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Memory Mayhem &amp; AI Capex Madness</itunes:title>
    <title>Memory Mayhem &amp; AI Capex Madness</title>
    <itunes:summary><![CDATA[In this episode of the Semi Doped podcast, Austin and Vik delve into the current state of the semiconductor industry, focusing on the memory crisis driven by increasing demand from AI applications. They discuss the implications of rising memory prices, the impact of hyperscaler spending on the market, and the strategic moves of major players like Google, Microsoft, Meta, and Amazon in the AI landscape. Takeaways: Memory prices are skyrocketing, impacting consumer electronics. The memory crisi...]]></itunes:summary>
    <description><![CDATA[<p>In this episode of the Semi Doped podcast, Austin and Vik delve into the current state of the semiconductor industry, focusing on the memory crisis driven by increasing demand from AI applications. They discuss the implications of rising memory prices, the impact of hyperscaler spending on the market, and the strategic moves of major players like Google, Microsoft, Meta, and Amazon in the AI landscape. <br/><br/><b>Takeaways</b></p><ul><li>Memory prices are skyrocketing, impacting consumer electronics.</li><li>The memory crisis is affecting the production of lower-end devices.</li><li>DRAM prices have doubled in a single quarter, creating challenges for manufacturers.</li><li>Nanya Tech&apos;s revenue growth indicates a booming memory market.</li><li>AI applications are driving unprecedented demand for memory.</li><li>Hyperscalers are significantly increasing their capital expenditures for AI infrastructure.</li><li>The integration of AI into advertising is reshaping business models for companies like Google and Meta.</li></ul><p><b>Chapters</b><br/><br/>00:00 The State of Memory in Semiconductors<br/>03:08 Nvidia&apos;s GPU Dilemma and Market Dynamics<br/>06:13 The Impact of AI on Memory Demand<br/>09:08 NAND Flash and Context Memory Trends<br/>11:59 The Future of Memory Supply and Demand<br/>15:12 AI Infrastructure and CapEx Spending<br/>17:47 Google&apos;s Strategic Investments in AI<br/>20:58 The Advertising Business Model and AI Integration<br/>30:26 Revenue vs. Expenses: A Balancing Act<br/>31:08 The Future of TPUs vs. GPUs in Cloud Computing<br/>35:31 Microsoft vs. Google: AI Investments and Market Reactions<br/>38:22 AI Integration in Enterprises: Microsoft’s Unique Position<br/>39:57 The Power of Microsoft’s Reach in AI<br/>40:30 GitHub: A Hidden Gem for Microsoft’s AI Strategy<br/>43:52 Meta’s AI Strategy: Advertising and Revenue Growth<br/>51:18 Amazon’s Massive CapEx: Implications for the Future<br/>54:00 Looking Ahead: Predictions for 2027 and Beyond<br/><br/>Check out Austin&apos;s substack: <a href='https://www.chipstrat.com/'>https://www.chipstrat.com/</a><br/>Check out Vik&apos;s substack: <a href='https://www.viksnewsletter.com/'>https://www.viksnewsletter.com/</a></p>]]></description>
    <content:encoded><![CDATA[<p>In this episode of the Semi Doped podcast, Austin and Vik delve into the current state of the semiconductor industry, focusing on the memory crisis driven by increasing demand from AI applications. They discuss the implications of rising memory prices, the impact of hyperscaler spending on the market, and the strategic moves of major players like Google, Microsoft, Meta, and Amazon in the AI landscape. <br/><br/><b>Takeaways</b></p><ul><li>Memory prices are skyrocketing, impacting consumer electronics.</li><li>The memory crisis is affecting the production of lower-end devices.</li><li>DRAM prices have doubled in a single quarter, creating challenges for manufacturers.</li><li>Nanya Tech&apos;s revenue growth indicates a booming memory market.</li><li>AI applications are driving unprecedented demand for memory.</li><li>Hyperscalers are significantly increasing their capital expenditures for AI infrastructure.</li><li>The integration of AI into advertising is reshaping business models for companies like Google and Meta.</li></ul><p><b>Chapters</b><br/><br/>00:00 The State of Memory in Semiconductors<br/>03:08 Nvidia&apos;s GPU Dilemma and Market Dynamics<br/>06:13 The Impact of AI on Memory Demand<br/>09:08 NAND Flash and Context Memory Trends<br/>11:59 The Future of Memory Supply and Demand<br/>15:12 AI Infrastructure and CapEx Spending<br/>17:47 Google&apos;s Strategic Investments in AI<br/>20:58 The Advertising Business Model and AI Integration<br/>30:26 Revenue vs. Expenses: A Balancing Act<br/>31:08 The Future of TPUs vs. GPUs in Cloud Computing<br/>35:31 Microsoft vs. Google: AI Investments and Market Reactions<br/>38:22 AI Integration in Enterprises: Microsoft’s Unique Position<br/>39:57 The Power of Microsoft’s Reach in AI<br/>40:30 GitHub: A Hidden Gem for Microsoft’s AI Strategy<br/>43:52 Meta’s AI Strategy: Advertising and Revenue Growth<br/>51:18 Amazon’s Massive CapEx: Implications for the Future<br/>54:00 Looking Ahead: Predictions for 2027 and Beyond<br/><br/>Check out Austin&apos;s substack: <a href='https://www.chipstrat.com/'>https://www.chipstrat.com/</a><br/>Check out Vik&apos;s substack: <a href='https://www.viksnewsletter.com/'>https://www.viksnewsletter.com/</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18674267-memory-mayhem-ai-capex-madness.mp3" length="42450131" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/eldatvrdiz368bjnl9hxsc7onsip?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18674267</guid>
    <pubDate>Fri, 13 Feb 2026 02:00:00 -0800</pubDate>
    <itunes:duration>3533</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>The future of financing AI infrastructure with Wayne Nelms, CTO of Ornn</itunes:title>
    <title>The future of financing AI infrastructure with Wayne Nelms, CTO of Ornn</title>
    <itunes:summary><![CDATA[In this episode, Vik and Wayne Nelms discuss the emerging financial exchange for GPU compute, exploring its implications for the AI infrastructure market. They cover the value of compute, pricing dynamics, hedging strategies, and the future of GPU and memory trading. Wayne shares insights on partnerships, the depreciation of GPUs, and how inference demand may reshape hardware utilization. The conversation highlights the importance of financial products in facilitating data center deve...]]></itunes:summary>
    <description><![CDATA[<p>In this episode, Vik and Wayne Nelms discuss the emerging financial exchange for GPU compute, exploring its implications for the AI infrastructure market. They cover the value of compute, pricing dynamics, hedging strategies, and the future of GPU and memory trading. </p><p>Wayne shares insights on partnerships, the depreciation of GPUs, and how inference demand may reshape hardware utilization. The conversation highlights the importance of financial products in facilitating data center development and optimizing profitability in the evolving landscape of compute resources.</p><p><b>Takeaways</b></p><ul><li>Wayne Nelms is the CTO of Ornn, focusing on GPU compute as a commodity.</li><li>The value of compute is still being defined in the market.</li><li>Hedging strategies are essential for managing compute costs.</li><li>The pricing of GPUs varies significantly across providers.</li><li>Memory trading is becoming a crucial aspect of the compute market.</li><li>Partnerships can enhance trading platforms and market efficiency.</li><li>Depreciation of GPUs is not linear and varies by use case.</li><li>Inference demand may change how GPUs are utilized in the future.</li><li>Transparency in pricing benefits smaller players in the market.</li><li>Financial products can facilitate data center development and profitability.</li></ul><p><b>Chapters</b></p><p>00:00 Introduction to GPU Compute Futures</p><p>03:13 The Value of Compute in Today&apos;s Market</p><p>05:59 Understanding GPU Pricing Dynamics</p><p>08:46 Hedging and Futures in Compute</p><p>11:52 The Role of Memory in AI Infrastructure</p><p>15:14 Partnerships and Market Expansion</p><p>17:46 Depreciation and Residual Value of GPUs</p><p>20:57 Future of Data Centers and Compute Demand</p><p>24:01 The Impact of Financialization on AI Infrastructure</p><p>27:04 Looking Ahead: The Future of Compute Markets</p><p><b>Keywords</b></p><p>GPU compute, financial exchange, futures market, data centers, AI infrastructure, pricing strategies, hedging, memory trading, Ornn </p><p>Follow Wayne Nelms (@wayne_nelmz on X)</p><p>Check out Ornn&apos;s website: https://www.ornnai.com/</p><p>Check out Vik&apos;s Substack: https://www.viksnewsletter.com/</p><p>Check out Austin&apos;s Substack: https://www.chipstrat.com/</p>]]></description>
    <content:encoded><![CDATA[<p>In this episode, Vik and Wayne Nelms discuss the emerging financial exchange for GPU compute, exploring its implications for the AI infrastructure market. They cover the value of compute, pricing dynamics, hedging strategies, and the future of GPU and memory trading. </p><p>Wayne shares insights on partnerships, the depreciation of GPUs, and how inference demand may reshape hardware utilization. The conversation highlights the importance of financial products in facilitating data center development and optimizing profitability in the evolving landscape of compute resources.</p><p><b>Takeaways</b></p><ul><li>Wayne Nelms is the CTO of Ornn, focusing on GPU compute as a commodity.</li><li>The value of compute is still being defined in the market.</li><li>Hedging strategies are essential for managing compute costs.</li><li>The pricing of GPUs varies significantly across providers.</li><li>Memory trading is becoming a crucial aspect of the compute market.</li><li>Partnerships can enhance trading platforms and market efficiency.</li><li>Depreciation of GPUs is not linear and varies by use case.</li><li>Inference demand may change how GPUs are utilized in the future.</li><li>Transparency in pricing benefits smaller players in the market.</li><li>Financial products can facilitate data center development and profitability.</li></ul><p><b>Chapters</b></p><p>00:00 Introduction to GPU Compute Futures</p><p>03:13 The Value of Compute in Today&apos;s Market</p><p>05:59 Understanding GPU Pricing Dynamics</p><p>08:46 Hedging and Futures in Compute</p><p>11:52 The Role of Memory in AI Infrastructure</p><p>15:14 Partnerships and Market Expansion</p><p>17:46 Depreciation and Residual Value of GPUs</p><p>20:57 Future of Data Centers and Compute Demand</p><p>24:01 The Impact of Financialization on AI Infrastructure</p><p>27:04 Looking Ahead: The Future of Compute Markets</p><p><b>Keywords</b></p><p>GPU compute, financial exchange, futures market, data centers, AI infrastructure, pricing strategies, hedging, memory trading, Ornn </p><p>Follow Wayne Nelms (@wayne_nelmz on X)</p><p>Check out Ornn&apos;s website: https://www.ornnai.com/</p><p>Check out Vik&apos;s Substack: https://www.viksnewsletter.com/</p><p>Check out Austin&apos;s Substack: https://www.chipstrat.com/</p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18655763-the-future-of-financing-ai-infrastructure-with-wayne-nelms-cto-of-ornn.mp3" length="29301084" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/ixvloobns3umgqabyrly7q2ti4qe?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18655763</guid>
    <pubDate>Tue, 10 Feb 2026 05:00:00 -0800</pubDate>
    <itunes:duration>2438</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>A New Era of Context Memory with Val Bercovici from WEKA</itunes:title>
    <title>A New Era of Context Memory with Val Bercovici from WEKA</title>
    <itunes:summary><![CDATA[Vik and Val Bercovici discuss the evolution of storage solutions in the context of AI, focusing on Weka's innovative approaches to context memory, high bandwidth flash, and the importance of optimizing GPU usage. Val shares insights from his extensive experience in the storage industry, highlighting the challenges and advancements in memory requirements for AI models, the significance of latency, and the future of storage technologies. Takeaways: Context memory is crucial for AI performance...]]></itunes:summary>
    <description><![CDATA[<p>Vik and Val Bercovici discuss the evolution of storage solutions in the context of AI, focusing on Weka&apos;s innovative approaches to context memory, high bandwidth flash, and the importance of optimizing GPU usage. <br/><br/>Val shares insights from his extensive experience in the storage industry, highlighting the challenges and advancements in memory requirements for AI models, the significance of latency, and the future of storage technologies.<br/><br/><b>Takeaways</b></p><ul><li>Context memory is crucial for AI performance.</li><li>The demand for memory has drastically increased.</li><li>Latency issues can hinder AI efficiency.</li><li>High bandwidth flash offers new storage capabilities.</li><li>Weka&apos;s Axon software enhances GPU storage utilization.</li><li>Token warehouses can significantly reduce costs.</li><li>Augmented memory grids improve memory access speeds.</li><li>Networking innovations are essential for AI storage solutions.</li><li>Understanding memory hierarchies is vital for optimization.</li><li>The future of storage will involve more advanced technologies.</li></ul><p><b>Chapters</b><br/><br/>00:00 Introduction to Weka and AI Storage Solutions<br/>05:18 The Evolution of Context Memory in AI<br/>09:30 Understanding Memory Hierarchies and Their Impact<br/>16:24 Latency Challenges in Modern Storage Solutions<br/>21:32 The Role of Networking in AI Storage Efficiency<br/>29:42 Dynamic Resource Utilization in AI Networks<br/>30:04 Introducing the Context Memory Network<br/>31:13 High Bandwidth Flash: A Game Changer<br/>32:54 Weka&apos;s Neural Mesh and Storage Solutions<br/>35:01 Axon: Transforming GPU Storage into Memory<br/>39:00 Augmented Memory Grid Explained<br/>42:00 Pooling DRAM and CXL Innovations<br/>46:02 Token Warehouses and Inference Economics<br/>52:10 The Future of Storage Innovations</p><p><b>Resources</b></p><p>Manus AI $2B Blog: <a href='https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus'>https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus</a></p><p>Also listen to this podcast on your favorite platform. <a href='https://www.semidoped.fm/'>https://www.semidoped.fm/</a><br/><br/>Check out Vik&apos;s Substack: <a href='https://www.viksnewsletter.com/'>https://www.viksnewsletter.com/</a><br/>Check out Austin&apos;s Substack: <a href='https://www.chipstrat.com/'>https://www.chipstrat.com/</a></p>]]></description>
    <content:encoded><![CDATA[<p>Vik and Val Bercovici discuss the evolution of storage solutions in the context of AI, focusing on Weka&apos;s innovative approaches to context memory, high bandwidth flash, and the importance of optimizing GPU usage. <br/><br/>Val shares insights from his extensive experience in the storage industry, highlighting the challenges and advancements in memory requirements for AI models, the significance of latency, and the future of storage technologies.<br/><br/><b>Takeaways</b></p><ul><li>Context memory is crucial for AI performance.</li><li>The demand for memory has drastically increased.</li><li>Latency issues can hinder AI efficiency.</li><li>High bandwidth flash offers new storage capabilities.</li><li>Weka&apos;s Axon software enhances GPU storage utilization.</li><li>Token warehouses can significantly reduce costs.</li><li>Augmented memory grids improve memory access speeds.</li><li>Networking innovations are essential for AI storage solutions.</li><li>Understanding memory hierarchies is vital for optimization.</li><li>The future of storage will involve more advanced technologies.</li></ul><p><b>Chapters</b><br/><br/>00:00 Introduction to Weka and AI Storage Solutions<br/>05:18 The Evolution of Context Memory in AI<br/>09:30 Understanding Memory Hierarchies and Their Impact<br/>16:24 Latency Challenges in Modern Storage Solutions<br/>21:32 The Role of Networking in AI Storage Efficiency<br/>29:42 Dynamic Resource Utilization in AI Networks<br/>30:04 Introducing the Context Memory Network<br/>31:13 High Bandwidth Flash: A Game Changer<br/>32:54 Weka&apos;s Neural Mesh and Storage Solutions<br/>35:01 Axon: Transforming GPU Storage into Memory<br/>39:00 Augmented Memory Grid Explained<br/>42:00 Pooling DRAM and CXL Innovations<br/>46:02 Token Warehouses and Inference Economics<br/>52:10 The Future of Storage Innovations</p><p><b>Resources</b></p><p>Manus AI $2B Blog: <a href='https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus'>https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus</a></p><p>Also listen to this podcast on your favorite platform. <a href='https://www.semidoped.fm/'>https://www.semidoped.fm/</a><br/><br/>Check out Vik&apos;s Substack: <a href='https://www.viksnewsletter.com/'>https://www.viksnewsletter.com/</a><br/>Check out Austin&apos;s Substack: <a href='https://www.chipstrat.com/'>https://www.chipstrat.com/</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18635559-a-new-era-of-context-memory-with-val-bercovici-from-weka.mp3" length="39248809" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/7v5dwxoq441vl6ose64b3gbmlfp5?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18635559</guid>
    <pubDate>Fri, 06 Feb 2026 06:00:00 -0800</pubDate>
    <itunes:duration>3267</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>OpenClaw Makes AI Agents and CPUs Get Real</itunes:title>
    <title>OpenClaw Makes AI Agents and CPUs Get Real</title>
    <itunes:summary><![CDATA[Austin and Vik discuss the emerging trend of AI agents, particularly focusing on Claude Code and OpenClaw, and the resulting hardware implications. Key Takeaways: 2026 is expected to be a pivotal year for AI agents. The rise of agentic AI is moving beyond marketing to practical applications. Claude Code is being used for more than just coding; it aids in research and organization. Integrating AI with tools like Google Drive enhances productivity. Security concerns arise with giving AI agents acc...]]></itunes:summary>
    <description><![CDATA[<p>Austin and Vik discuss the emerging trend of AI agents, particularly focusing on Claude Code and OpenClaw, and the resulting hardware implications.<br/><br/><b>Key Takeaways:</b></p><ul><li>2026 is expected to be a pivotal year for AI agents.</li><li>The rise of agentic AI is moving beyond marketing to practical applications.</li><li>Claude Code is being used for more than just coding; it aids in research and organization.</li><li>Integrating AI with tools like Google Drive enhances productivity.</li><li>Security concerns arise with giving AI agents access to personal data.</li><li>Local computing options for AI can reduce costs and increase control.</li><li>AI agents can automate repetitive tasks, freeing up human time for creative work.</li><li>The demand for CPUs is increasing due to the needs of AI agents.</li><li>AI can help summarize and organize information but may lack deep insights.</li><li>The future of AI will involve balancing automation with human oversight.</li></ul><p><b>Chapters</b><br/>(00:00) Introduction: Why 2026 may be the year of AI agents<br/>(01:12) What people mean by agents and the OpenClaw naming chaos<br/>(02:41) Agents behaving badly: crypto losses and social posting<br/>(03:38) Claude Code as a research tool, not a coding tool<br/>(05:54) Terminal-first workflows vs GUI-based agents<br/>(07:44) Connecting Claude Code to Gmail, Drive, and Calendar via MCP<br/>(09:12) Token waste, authentication friction, and workflow optimization<br/>(10:54) Automating newsletter ingestion and research archives<br/>(12:33) Giving agents login credentials and security tradeoffs<br/>(13:50) Filtering signal from noise with topic constraints<br/>(16:36) AI-driven idea generation and its limitations<br/>(17:34) When automation effort is not worth it<br/>(19:02) Are agents ready for non-technical users?<br/>(20:55) Why OpenClaw should not run on your personal laptop<br/>(21:33) Safe agent deployment: VPS vs local servers<br/>(23:33) The true cost of agents: infrastructure plus inference<br/>(24:18) What OpenClaw adds beyond Claude Code<br/>(26:53) Agents require managerial thinking and self-awareness<br/>(28:18) Local inference vs cloud APIs<br/>(30:46) Cost control with OpenRouter and model hierarchies<br/>(32:31) Scaling agents forces model and cost optimization<br/>(33:00) AI aggregation vs creator analytics<br/>(35:58) AI as discovery, not a replacement for reading<br/>(38:17) When summaries are enough and when they are not<br/>(39:47) Why AI cannot understand what is not said<br/>(41:18) Agentic AI is driving unexpected CPU demand<br/>(41:49) Intel caught off guard by CPU shortages<br/>(44:53) Security, identity, and encryption shift work to CPUs<br/>(46:10) Closing thoughts: agents are real, early, and uneven</p><p>Deploy your secure OpenClaw instance with DigitalOcean:<br/>https://www.digitalocean.com/blog/moltbot-on-digitalocean<br/><br/>Visit the podcast website: <a href='https://www.semidoped.fm'>https://www.semidoped.fm</a><br/>Austin&apos;s Substack: <a href='https://www.chipstrat.com/'>https://www.chipstrat.com/</a><br/>Vik&apos;s Substack: <a href='https://www.viksnewsletter.com/'>https://www.viksnewsletter.com/</a></p>]]></description>
    <content:encoded><![CDATA[<p>Austin and Vik discuss the emerging trend of AI agents, particularly focusing on Claude Code and OpenClaw, and the resulting hardware implications.<br/><br/><b>Key Takeaways:</b></p><ul><li>2026 is expected to be a pivotal year for AI agents.</li><li>The rise of agentic AI is moving beyond marketing to practical applications.</li><li>Claude Code is being used for more than just coding; it aids in research and organization.</li><li>Integrating AI with tools like Google Drive enhances productivity.</li><li>Security concerns arise with giving AI agents access to personal data.</li><li>Local computing options for AI can reduce costs and increase control.</li><li>AI agents can automate repetitive tasks, freeing up human time for creative work.</li><li>The demand for CPUs is increasing due to the needs of AI agents.</li><li>AI can help summarize and organize information but may lack deep insights.</li><li>The future of AI will involve balancing automation with human oversight.</li></ul><p><b>Chapters</b><br/>(00:00) Introduction: Why 2026 may be the year of AI agents<br/>(01:12) What people mean by agents and the OpenClaw naming chaos<br/>(02:41) Agents behaving badly: crypto losses and social posting<br/>(03:38) Claude Code as a research tool, not a coding tool<br/>(05:54) Terminal-first workflows vs GUI-based agents<br/>(07:44) Connecting Claude Code to Gmail, Drive, and Calendar via MCP<br/>(09:12) Token waste, authentication friction, and workflow optimization<br/>(10:54) Automating newsletter ingestion and research archives<br/>(12:33) Giving agents login credentials and security tradeoffs<br/>(13:50) Filtering signal from noise with topic constraints<br/>(16:36) AI-driven idea generation and its limitations<br/>(17:34) When automation effort is not worth it<br/>(19:02) Are agents ready for non-technical users?<br/>(20:55) Why OpenClaw should not run on your personal laptop<br/>(21:33) Safe agent deployment: VPS vs local servers<br/>(23:33) The true cost of agents: infrastructure plus inference<br/>(24:18) What OpenClaw adds beyond Claude Code<br/>(26:53) Agents require managerial thinking and self-awareness<br/>(28:18) Local inference vs cloud APIs<br/>(30:46) Cost control with OpenRouter and model hierarchies<br/>(32:31) Scaling agents forces model and cost optimization<br/>(33:00) AI aggregation vs creator analytics<br/>(35:58) AI as discovery, not a replacement for reading<br/>(38:17) When summaries are enough and when they are not<br/>(39:47) Why AI cannot understand what is not said<br/>(41:18) Agentic AI is driving unexpected CPU demand<br/>(41:49) Intel caught off guard by CPU shortages<br/>(44:53) Security, identity, and encryption shift work to CPUs<br/>(46:10) Closing thoughts: agents are real, early, and uneven</p><p>Deploy your secure OpenClaw instance with DigitalOcean:<br/>https://www.digitalocean.com/blog/moltbot-on-digitalocean<br/><br/>Visit the podcast website: <a href='https://www.semidoped.fm'>https://www.semidoped.fm</a><br/>Austin&apos;s Substack: <a href='https://www.chipstrat.com/'>https://www.chipstrat.com/</a><br/>Vik&apos;s Substack: <a href='https://www.viksnewsletter.com/'>https://www.viksnewsletter.com/</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18616712-openclaw-makes-ai-agents-and-cpus-get-real.mp3" length="34295398" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/zboz0qg1y26ltjyc9970xygszw55?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18616712</guid>
    <pubDate>Tue, 03 Feb 2026 02:00:00 -0800</pubDate>
    <itunes:duration>2854</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>An Interview with Microsoft&#39;s Saurabh Dighe About Maia 200</itunes:title>
    <title>An Interview with Microsoft&#39;s Saurabh Dighe About Maia 200</title>
    <itunes:summary><![CDATA[Maia 100 was a pre-GPT accelerator. Maia 200 is explicitly post-GPT for large multimodal inference. Saurabh Dighe says if Microsoft were chasing peak performance or trying to span training and inference, Maia would look very different. Higher TDPs. Different tradeoffs. Those paths were pruned early to optimize for one thing: inference price-performance. That focus drives the claim of ~30% better performance per dollar versus the latest hardware in Microsoft’s fleet. Interesting topics includ...]]></itunes:summary>
    <description><![CDATA[<p>Maia 100 was a pre-GPT accelerator.<br/>Maia 200 is explicitly post-GPT for large multimodal inference.<br/><br/>Saurabh Dighe says if Microsoft were chasing peak performance or trying to span training and inference, Maia would look very different. Higher TDPs. Different tradeoffs. Those paths were pruned early to optimize for one thing: inference price-performance. That focus drives the claim of ~30% better performance per dollar versus the latest hardware in Microsoft’s fleet.<br/><br/><b>Interesting topics include:<br/></b>• What “30% better price-performance” actually means<br/>• Who Maia 200 is built for<br/>• Why Microsoft bet on inference when designing Maia back in 2022/2023<br/>• Large SRAM + high-capacity HBM<br/>• Massive scale-up, no scale-out<br/>• On-die NIC integration<br/><br/>Maia is a portfolio platform: many internal customers, varied inference profiles, one goal. Lower inference cost at planetary scale.<br/><br/><b>Chapters:<br/></b>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI'>00:00</a>) Introduction<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=60s'>01:00</a>) What Maia 200 is and who it’s for<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=165s'>02:45</a>) Why custom silicon isn’t just a margin play<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=285s'>04:45</a>) Inference as an efficient frontier<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=375s'>06:15</a>) Portfolio thinking and heterogeneous infrastructure<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=540s'>09:00</a>) Designing for LLMs and reasoning models<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=645s'>10:45</a>) Why Maia avoids training workloads<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=720s'>12:00</a>) Betting on inference in 2022–2023, before reasoning models<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=880s'>14:40</a>) Hyperscaler advantage in custom silicon<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=960s'>16:00</a>) Capacity allocation and internal customers<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1065s'>17:45</a>) How third-party customers access Maia<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1110s'>18:30</a>) Software, compilers, and time-to-value<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1350s'>22:30</a>) Measuring success and the Maia 300 roadmap<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1710s'>28:30</a>) What “30% better price-performance” actually means<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1920s'>32:00</a>) Scale-up vs scale-out architecture<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2100s'>35:00</a>) Ethernet and custom transport choices<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2250s'>37:30</a>) On-die NIC integration<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2430s'>40:30</a>) Memory hierarchy: SRAM, HBM, and locality<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2940s'>49:00</a>) Long context and KV cache strategy<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=3090s'>51:30</a>) Wrap-up</p>]]></description>
    <content:encoded><![CDATA[<p>Maia 100 was a pre-GPT accelerator.<br/>Maia 200 is explicitly post-GPT for large multimodal inference.<br/><br/>Saurabh Dighe says if Microsoft were chasing peak performance or trying to span training and inference, Maia would look very different. Higher TDPs. Different tradeoffs. Those paths were pruned early to optimize for one thing: inference price-performance. That focus drives the claim of ~30% better performance per dollar versus the latest hardware in Microsoft’s fleet.<br/><br/><b>Interesting topics include:<br/></b>• What “30% better price-performance” actually means<br/>• Who Maia 200 is built for<br/>• Why Microsoft bet on inference when designing Maia back in 2022/2023<br/>• Large SRAM + high-capacity HBM<br/>• Massive scale-up, no scale-out<br/>• On-die NIC integration<br/><br/>Maia is a portfolio platform: many internal customers, varied inference profiles, one goal. Lower inference cost at planetary scale.<br/><br/><b>Chapters:<br/></b>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI'>00:00</a>) Introduction<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=60s'>01:00</a>) What Maia 200 is and who it’s for<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=165s'>02:45</a>) Why custom silicon isn’t just a margin play<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=285s'>04:45</a>) Inference as an efficient frontier<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=375s'>06:15</a>) Portfolio thinking and heterogeneous infrastructure<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=540s'>09:00</a>) Designing for LLMs and reasoning models<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=645s'>10:45</a>) Why Maia avoids training workloads<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=720s'>12:00</a>) Betting on inference in 2022–2023, before reasoning models<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=880s'>14:40</a>) Hyperscaler advantage in custom silicon<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=960s'>16:00</a>) Capacity allocation and internal customers<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1065s'>17:45</a>) How third-party customers access Maia<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1110s'>18:30</a>) Software, compilers, and time-to-value<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1350s'>22:30</a>) Measuring success and the Maia 300 roadmap<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1710s'>28:30</a>) What “30% better price-performance” actually means<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=1920s'>32:00</a>) Scale-up vs scale-out architecture<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2100s'>35:00</a>) Ethernet and custom transport choices<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2250s'>37:30</a>) On-die NIC integration<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2430s'>40:30</a>) Memory hierarchy: SRAM, HBM, and locality<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=2940s'>49:00</a>) Long context and KV cache strategy<br/>(<a href='https://www.youtube.com/watch?v=6R-oMCdnLiI&amp;t=3090s'>51:30</a>) Wrap-up</p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18584858-an-interview-with-microsoft-s-saurabh-dighe-about-maia-200.mp3" length="37978285" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/ldx1f7kyscyj6lfox0sp2vaztay2?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18584858</guid>
    <pubDate>Wed, 28 Jan 2026 12:00:00 -0800</pubDate>
    <itunes:duration>3161</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Can Pre-GPT AI Accelerators Handle Long Context Workloads?</itunes:title>
    <title>Can Pre-GPT AI Accelerators Handle Long Context Workloads?</title>
    <itunes:summary><![CDATA[OpenAI's partnership with Cerebras and Nvidia's announcement of context memory storage raise a fundamental question: as agentic AI demands long sessions with massive context windows, can SRAM-based accelerators designed before the LLM era keep up—or will they converge with GPUs? Key Takeaways 1. Context is the new bottleneck. As agentic workloads demand long sessions with massive codebases, storing and retrieving KV cache efficiently becomes critical. 2. There's no one-size-fits-all. Sachin ...]]></itunes:summary>
    <description><![CDATA[<p>OpenAI&apos;s partnership with Cerebras and Nvidia&apos;s announcement of context memory storage raise a fundamental question: as agentic AI demands long sessions with massive context windows, can SRAM-based accelerators designed before the LLM era keep up—or will they converge with GPUs?</p><p><b>Key Takeaways</b><br/><b>1. Context is the new bottleneck. </b>As agentic workloads demand long sessions with massive codebases, storing and retrieving KV cache efficiently becomes critical.<br/><b>2. There&apos;s no one-size-fits-all.</b> Sachin Katti (OpenAI, ex-Intel) signals a shift toward heterogeneous compute—matching specific accelerators to specific workloads.<br/><b>3. Cerebras has 44GB of SRAM per wafer</b> — orders of magnitude more than typical chips — but the question remains: where does the KV cache go for long context?<br/><b>4. Pre-GPT accelerators may converge toward GPUs.</b> If they need to add HBM or external memory for long context, some of their differentiation erodes.<br/><b>5. Post-GPT accelerators (Etched, MatX) are the ones to watch.</b> Designed specifically for transformer inference, they may solve the KV cache problem from first principles.<br/><br/><b>Chapters</b><br/>  - 00:00 — Intro<br/>  - 01:20 — What is context memory storage?<br/>  - 03:30 — When Claude runs out of context<br/>  - 06:00 — Tokens, attention, and the KV cache explained<br/>  - 09:07 — The AI memory hierarchy: HBM → DRAM → SSD → network storage<br/>  - 12:53 — Nvidia&apos;s G1/G2/G3 tiers and the missing G0 (SRAM)<br/>  - 14:35 — Bluefield DPUs and GPU Direct Storage<br/>  - 15:53 — Token economics: cache hits vs misses<br/>  - 20:03 — OpenAI + Cerebras: 750 megawatts for faster Codex<br/>  - 21:29 — Why Cerebras built a wafer-scale engine<br/>  - 25:07 — 44GB SRAM and running Llama 70B on four wafers<br/>  - 25:55 — Sachin Katti on heterogeneous compute strategy<br/>  - 31:43 — The big question: where does Cerebras store KV cache?<br/>  - 34:11 — If SRAM offloads to HBM, does it lose its edge?<br/>  - 35:40 — Pre-GPT vs Post-GPT accelerators<br/>  - 36:51 — Etched raises $500M at $5B valuation<br/>  - 38:48 — Wrap up<br/><br/></p>]]></description>
    <content:encoded><![CDATA[<p>OpenAI&apos;s partnership with Cerebras and Nvidia&apos;s announcement of context memory storage raise a fundamental question: as agentic AI demands long sessions with massive context windows, can SRAM-based accelerators designed before the LLM era keep up—or will they converge with GPUs?</p><p><b>Key Takeaways</b><br/><b>1. Context is the new bottleneck. </b>As agentic workloads demand long sessions with massive codebases, storing and retrieving KV cache efficiently becomes critical.<br/><b>2. There&apos;s no one-size-fits-all.</b> Sachin Katti (OpenAI, ex-Intel) signals a shift toward heterogeneous compute—matching specific accelerators to specific workloads.<br/><b>3. Cerebras has 44GB of SRAM per wafer</b> — orders of magnitude more than typical chips — but the question remains: where does the KV cache go for long context?<br/><b>4. Pre-GPT accelerators may converge toward GPUs.</b> If they need to add HBM or external memory for long context, some of their differentiation erodes.<br/><b>5. Post-GPT accelerators (Etched, MatX) are the ones to watch.</b> Designed specifically for transformer inference, they may solve the KV cache problem from first principles.<br/><br/><b>Chapters</b><br/>  - 00:00 — Intro<br/>  - 01:20 — What is context memory storage?<br/>  - 03:30 — When Claude runs out of context<br/>  - 06:00 — Tokens, attention, and the KV cache explained<br/>  - 09:07 — The AI memory hierarchy: HBM → DRAM → SSD → network storage<br/>  - 12:53 — Nvidia&apos;s G1/G2/G3 tiers and the missing G0 (SRAM)<br/>  - 14:35 — Bluefield DPUs and GPU Direct Storage<br/>  - 15:53 — Token economics: cache hits vs misses<br/>  - 20:03 — OpenAI + Cerebras: 750 megawatts for faster Codex<br/>  - 21:29 — Why Cerebras built a wafer-scale engine<br/>  - 25:07 — 44GB SRAM and running Llama 70B on four wafers<br/>  - 25:55 — Sachin Katti on heterogeneous compute strategy<br/>  - 31:43 — The big question: where does Cerebras store KV cache?<br/>  - 34:11 — If SRAM offloads to HBM, does it lose its edge?<br/>  - 35:40 — Pre-GPT vs Post-GPT accelerators<br/>  - 36:51 — Etched raises $500M at $5B valuation<br/>  - 38:48 — Wrap up<br/><br/></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18570705-can-pre-gpt-ai-accelerators-handle-long-context-workloads.mp3" length="27435763" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/pha2gv2t8ostqgje9dcupb4rfwgz?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18570705</guid>
    <pubDate>Mon, 26 Jan 2026 05:00:00 -0800</pubDate>
    <itunes:duration>2282</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>6</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>An Interview with Innoviz CEO Omer Keilaf about current LiDAR market dynamics</itunes:title>
    <title>An Interview with Innoviz CEO Omer Keilaf about current LiDAR market dynamics</title>
    <itunes:summary><![CDATA[Innoviz CEO Omer Keilaf believes the LiDAR market is down to its final players—and that Innoviz has already won its seat.  In this conversation, we cover the Level 4 gold rush sparked by Waymo, why stalled Level 3 programs are suddenly accelerating, the technical moat that separates L4-grade LiDAR from everything else, how a one-year-old startup won BMW, and why Keilaf thinks his competitors are already out of the race.  Omer Keilaf founded Innoviz in 2016. Today it's a publicly traded Tier 1...]]></itunes:summary>
    <description><![CDATA[<p>Innoviz CEO Omer Keilaf believes the LiDAR market is down to its final players—and that Innoviz has already won its seat.<br/><br/>In this conversation, we cover the Level 4 gold rush sparked by Waymo, why stalled Level 3 programs are suddenly accelerating, the technical moat that separates L4-grade LiDAR from everything else, how a one-year-old startup won BMW, and why Keilaf thinks his competitors are already out of the race.<br/><br/>Omer Keilaf founded Innoviz in 2016. Today it&apos;s a publicly traded Tier 1 supplier to BMW, Volkswagen, Daimler Truck, and other global OEMs.<br/><br/>Chapters<br/>  00:00 Introduction<br/>  00:17 Why Start a LiDAR Company in 2016?<br/>  01:32 The Personal Story Behind Innoviz<br/>  03:12 Transportation Is Still Our Biggest Daily Risk<br/>  04:28 The 2012 Spark: Xbox Kinect and 3D Sensing<br/>  06:32 From Mobile to Automotive: Finding the Right Platform<br/>  07:54 &quot;I Didn&apos;t Know What LiDAR Was, But I&apos;d Do It Better&quot;<br/>  08:19 How a One-Year-Old Startup Won BMW<br/>  10:04 Surviving the First Product<br/>  11:23 From Tier 2 to Tier 1: The Volkswagen Win<br/>  13:47 Lessons Learned Scaling Through Partners<br/>  14:45 The SPAC Decision: A Wake-Up Call from a Competitor<br/>  16:42 From 200 LiDAR Companies to a Handful<br/>  17:27 NREs: How Tier 1 Status Funds R&amp;D<br/>  18:44 Why Automotive-First Is the Right Strategy<br/>  19:45 Consolidation Patterns: Cameras, Radars, Airbags<br/>  20:31 &quot;The Music Has Stopped&quot;<br/>  21:07 Non-Automotive: Underserved Markets<br/>  23:51 Working with Secretive OEMs<br/>  25:27 The Press Release They Tried to Stop<br/>  26:42 CES 2025: 85% of Meetings Were Level 4<br/>  27:40 Why Level 3 Programs Are Suddenly Accelerating<br/>  28:33 The EV/ADAS Coupling Problem<br/>  29:49 Design Is Everything: The Holy Grail Is Behind the Windshield<br/>  31:13 The Three-Year RFQ: Grille → Roof → Windshield<br/>  32:32 Innoviz3: Small Enough for Behind-the-Windshield<br/>  34:40 Innoviz2 for L4, Innoviz3 for Consumer L3<br/>  36:38 What&apos;s the Real Difference Between L2, L3, and L4 LiDAR?<br/>  38:51 The Mud Test: Why L4 Demands 100% Availability<br/>  40:50 &quot;We&apos;re the Only LiDAR Designed for Level 4&quot;<br/>  42:52 Patents and the Maslow Pyramid of Autonomy<br/>  44:15 Non-Automotive Markets: Agriculture, Mining, Security<br/>  46:15 Closing</p>]]></description>
    <content:encoded><![CDATA[<p>Innoviz CEO Omer Keilaf believes the LiDAR market is down to its final players—and that Innoviz has already won its seat.<br/><br/>In this conversation, we cover the Level 4 gold rush sparked by Waymo, why stalled Level 3 programs are suddenly accelerating, the technical moat that separates L4-grade LiDAR from everything else, how a one-year-old startup won BMW, and why Keilaf thinks his competitors are already out of the race.<br/><br/>Omer Keilaf founded Innoviz in 2016. Today it&apos;s a publicly traded Tier 1 supplier to BMW, Volkswagen, Daimler Truck, and other global OEMs.<br/><br/>Chapters<br/>  00:00 Introduction<br/>  00:17 Why Start a LiDAR Company in 2016?<br/>  01:32 The Personal Story Behind Innoviz<br/>  03:12 Transportation Is Still Our Biggest Daily Risk<br/>  04:28 The 2012 Spark: Xbox Kinect and 3D Sensing<br/>  06:32 From Mobile to Automotive: Finding the Right Platform<br/>  07:54 &quot;I Didn&apos;t Know What LiDAR Was, But I&apos;d Do It Better&quot;<br/>  08:19 How a One-Year-Old Startup Won BMW<br/>  10:04 Surviving the First Product<br/>  11:23 From Tier 2 to Tier 1: The Volkswagen Win<br/>  13:47 Lessons Learned Scaling Through Partners<br/>  14:45 The SPAC Decision: A Wake-Up Call from a Competitor<br/>  16:42 From 200 LiDAR Companies to a Handful<br/>  17:27 NREs: How Tier 1 Status Funds R&amp;D<br/>  18:44 Why Automotive-First Is the Right Strategy<br/>  19:45 Consolidation Patterns: Cameras, Radars, Airbags<br/>  20:31 &quot;The Music Has Stopped&quot;<br/>  21:07 Non-Automotive: Underserved Markets<br/>  23:51 Working with Secretive OEMs<br/>  25:27 The Press Release They Tried to Stop<br/>  26:42 CES 2025: 85% of Meetings Were Level 4<br/>  27:40 Why Level 3 Programs Are Suddenly Accelerating<br/>  28:33 The EV/ADAS Coupling Problem<br/>  29:49 Design Is Everything: The Holy Grail Is Behind the Windshield<br/>  31:13 The Three-Year RFQ: Grille → Roof → Windshield<br/>  32:32 Innoviz3: Small Enough for Behind-the-Windshield<br/>  34:40 Innoviz2 for L4, Innoviz3 for Consumer L3<br/>  36:38 What&apos;s the Real Difference Between L2, L3, and L4 LiDAR?<br/>  38:51 The Mud Test: Why L4 Demands 100% Availability<br/>  40:50 &quot;We&apos;re the Only LiDAR Designed for Level 4&quot;<br/>  42:52 Patents and the Maslow Pyramid of Autonomy<br/>  44:15 Non-Automotive Markets: Agriculture, Mining, Security<br/>  46:15 Closing</p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18552526-an-interview-with-innoviz-ceo-omer-keilaf-about-current-lidar-market-dynamics.mp3" length="33666793" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/d44tnzeioox4xznxhwzwdrggyemq?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18552526</guid>
    <pubDate>Thu, 22 Jan 2026 06:00:00 -0800</pubDate>
    <itunes:duration>2801</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>5</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>LiDAR, Explained: How It Works and Why It Matters</itunes:title>
    <title>LiDAR, Explained: How It Works and Why It Matters</title>
    <itunes:summary><![CDATA[Austin and Vik discuss why LiDAR is important for autonomy, how modern systems work, and how the technology has evolved. They compare Time of Flight and FMCW architectures, explain why wavelength choice matters, and walk through the tradeoffs between 905 nm and 1550 nm across eye safety, cost, and performance. The discussion closes with a clear-eyed look at competition, Chinese suppliers, and supply chain risk. Chapters (00:00) Introduction to LiDAR and why it matters (05:40) The case for LiD...]]></itunes:summary>
    <description><![CDATA[<p>Austin and Vik discuss why LiDAR is important for autonomy, how modern systems work, and how the technology has evolved. They compare Time of Flight and FMCW architectures, explain why wavelength choice matters, and walk through the tradeoffs between 905 nm and 1550 nm across eye safety, cost, and performance. The discussion closes with a clear-eyed look at competition, Chinese suppliers, and supply chain risk.</p><p><b>Chapters</b></p><p>(00:00) Introduction to LiDAR and why it matters</p><p>(05:40) The case for LiDAR in autonomous vehicles</p><p>(12:41) Wavelengths, eye safety, and system tradeoffs</p><p>(15:38) How LiDAR works: Time of Flight vs. FMCW</p><p>(20:12) Mechanical vs. solid-state LiDAR designs</p><p>(27:31) Market dynamics, competition, and geopolitics</p>]]></description>
    <content:encoded><![CDATA[<p>Austin and Vik discuss why LiDAR is important for autonomy, how modern systems work, and how the technology has evolved. They compare Time of Flight and FMCW architectures, explain why wavelength choice matters, and walk through the tradeoffs between 905 nm and 1550 nm across eye safety, cost, and performance. The discussion closes with a clear-eyed look at competition, Chinese suppliers, and supply chain risk.</p><p><b>Chapters</b></p><p>(00:00) Introduction to LiDAR and why it matters</p><p>(05:40) The case for LiDAR in autonomous vehicles</p><p>(12:41) Wavelengths, eye safety, and system tradeoffs</p><p>(15:38) How LiDAR works: Time of Flight vs. FMCW</p><p>(20:12) Mechanical vs. solid-state LiDAR designs</p><p>(27:31) Market dynamics, competition, and geopolitics</p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18534422-lidar-explained-how-it-works-and-why-it-matters.mp3" length="25722274" type="audio/mpeg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18534422</guid>
    <pubDate>Mon, 19 Jan 2026 09:00:00 -0800</pubDate>
    <itunes:duration>2140</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>4</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Nvidia CES 2026</itunes:title>
    <title>Nvidia CES 2026</title>
    <itunes:summary><![CDATA[Episode Summary: Austin and Vik break down NVIDIA’s CES 2026 keynote, focusing on Vera Rubin, DGX Spark and DGX Station, uneducated investor panic, and physical AI. Key Takeaways: DGX Spark brings server-class NVIDIA architecture to the desktop at low power, aimed at developers, enthusiasts, and enterprises experimenting locally. DGX Station functions more like a mini-AI rack on-prem: Grace Blackwell for inference and development without full racks. The historical parallel is mainfram...]]></itunes:summary>
    <description><![CDATA[<p><b>Episode Summary</b></p><p>Austin and Vik break down NVIDIA’s CES 2026 keynote, focusing on Vera Rubin, DGX Spark and DGX Station, uneducated investor panic, and physical AI.</p><p><b>Key Takeaways</b></p><ul><li>DGX Spark brings server-class NVIDIA architecture to the desktop at low power, aimed at developers, enthusiasts, and enterprises experimenting locally.</li><li>DGX Station functions more like a mini-AI rack on-prem: Grace Blackwell for inference and development without full racks.</li><li>The historical parallel is mainframes to minicomputers, expanding compute TAM rather than displacing cloud usage.</li><li>On-prem AI converts some GPU rental OpEx into CapEx, appealing to CFOs.</li><li>NVIDIA positioned autonomy as physical AI with vision-language-action models and early Mercedes-Benz deployments in 2026.</li><li>Vera Rubin integrates CPU, GPU, DPU, networking, and photonics into a single platform, emphasizing Ethernet for scale-out. (Where was the InfiniBand switch?)</li><li>The new Vera CPU highlights rising CPU importance for agentic workloads through higher core counts, SMT, and large LPDDR capacity.</li><li>Rubin GPU’s move to HBM4 and adaptive precision targets inference efficiency gains and lower cost per token.</li><li>Context memory storage elevates SSDs and DPUs, enabling massive KV cache offload beyond HBM and DRAM.</li><li>Cable-less rack design and warm-water cooling show NVIDIA’s shift from raw performance toward manufacturability and enterprise polish.</li></ul>]]></description>
    <content:encoded><![CDATA[<p><b>Episode Summary</b></p><p>Austin and Vik break down NVIDIA’s CES 2026 keynote, focusing on Vera Rubin, DGX Spark and DGX Station, uneducated investor panic, and physical AI.</p><p><b>Key Takeaways</b></p><ul><li>DGX Spark brings server-class NVIDIA architecture to the desktop at low power, aimed at developers, enthusiasts, and enterprises experimenting locally.</li><li>DGX Station functions more like a mini-AI rack on-prem: Grace Blackwell for inference and development without full racks.</li><li>The historical parallel is mainframes to minicomputers, expanding compute TAM rather than displacing cloud usage.</li><li>On-prem AI converts some GPU rental OpEx into CapEx, appealing to CFOs.</li><li>NVIDIA positioned autonomy as physical AI with vision-language-action models and early Mercedes-Benz deployments in 2026.</li><li>Vera Rubin integrates CPU, GPU, DPU, networking, and photonics into a single platform, emphasizing Ethernet for scale-out. (Where was the InfiniBand switch?)</li><li>The new Vera CPU highlights rising CPU importance for agentic workloads through higher core counts, SMT, and large LPDDR capacity.</li><li>Rubin GPU’s move to HBM4 and adaptive precision targets inference efficiency gains and lower cost per token.</li><li>Context memory storage elevates SSDs and DPUs, enabling massive KV cache offload beyond HBM and DRAM.</li><li>Cable-less rack design and warm-water cooling show NVIDIA’s shift from raw performance toward manufacturability and enterprise polish.</li></ul>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18482988-nvidia-ces-2026.mp3" length="34073590" type="audio/mpeg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18482988</guid>
    <pubDate>Mon, 12 Jan 2026 03:00:00 -0800</pubDate>
    <itunes:duration>2836</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>3</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Insights from IEDM 2025</itunes:title>
    <title>Insights from IEDM 2025</title>
    <itunes:summary><![CDATA[Austin and Vik discuss key insights from the IEDM conference.  They explore the significance of IEDM for engineers and investors, the networking opportunities it offers, and the latest innovations in silicon photonics, complementary FETs, NAND flash memory, and GaN-on-silicon chiplets.  Takeaways: Penta-level NAND flash memory could disrupt the SSD market; GaN-on-silicon chiplets enhance power efficiency; complementary FETs; optical scale-up has a power problem; the future of transistors is ...]]></itunes:summary>
    <description><![CDATA[<p>Austin and Vik discuss key insights from the IEDM conference.</p><p>They explore the significance of IEDM for engineers and investors, the networking opportunities it offers, and the latest innovations in silicon photonics, complementary FETs, NAND flash memory, and GaN-on-silicon chiplets.</p><p><b>Takeaways</b></p><ul><li>Penta-level NAND flash memory could disrupt the SSD market</li><li>GaN-on-silicon chiplets enhance power efficiency</li><li>Complementary FETs stack NMOS and PMOS devices to extend density scaling</li><li>Optical scale-up has a power problem</li><li>The future of transistors is still bright</li></ul>]]></description>
    <content:encoded><![CDATA[<p>Austin and Vik discuss key insights from the IEDM conference.</p><p>They explore the significance of IEDM for engineers and investors, the networking opportunities it offers, and the latest innovations in silicon photonics, complementary FETs, NAND flash memory, and GaN-on-silicon chiplets.</p><p><b>Takeaways</b></p><ul><li>Penta-level NAND flash memory could disrupt the SSD market</li><li>GaN-on-silicon chiplets enhance power efficiency</li><li>Complementary FETs stack NMOS and PMOS devices to extend density scaling</li><li>Optical scale-up has a power problem</li><li>The future of transistors is still bright</li></ul>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18464157-insights-from-iedm-2025.mp3" length="30480620" type="audio/mpeg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18464157</guid>
    <pubDate>Thu, 08 Jan 2026 06:00:00 -0800</pubDate>
    <itunes:duration>2537</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>2</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Nvidia &quot;Acquires&quot; Groq</itunes:title>
    <title>Nvidia &quot;Acquires&quot; Groq</title>
    <itunes:summary><![CDATA[Key Topics: What Nvidia actually bought from Groq and why it is not a traditional acquisition; why the deal triggered claims that GPUs and HBM are obsolete; architectural trade-offs between GPUs, TPUs, XPUs, and LPUs; SRAM vs HBM: speed, capacity, cost, and supply chain realities; Groq LPU fundamentals: VLIW, compiler-scheduled execution, determinism, ultra-low latency; why LPUs struggle with large models and where they excel instead; practical use cases for hyper-low-latency inference: ad copy personaliza...]]></itunes:summary>
    <description><![CDATA[<p><b>Key Topics</b></p><ul><li>What Nvidia actually bought from Groq and why it is not a traditional acquisition</li><li>Why the deal triggered claims that GPUs and HBM are obsolete</li><li>Architectural trade-offs between GPUs, TPUs, XPUs, and LPUs</li><li>SRAM vs HBM: speed, capacity, cost, and supply chain realities</li><li>Groq LPU fundamentals: VLIW, compiler-scheduled execution, determinism, ultra-low latency</li><li>Why LPUs struggle with large models and where they excel instead</li><li>Practical use cases for hyper-low-latency inference:<ul><li>Ad copy personalization at search latency budgets</li><li>Model routing and agent orchestration</li><li>Conversational interfaces and real-time translation</li><li>Robotics and physical AI at the edge</li><li>Potential applications in AI-RAN and telecom infrastructure</li></ul></li><li>Memory as a design spectrum: SRAM-only, SRAM plus DDR, SRAM plus HBM</li><li>Nvidia’s growing portfolio approach to inference hardware rather than one-size-fits-all</li></ul><p><b>Core Takeaways</b></p><ul><li>GPUs are not dead. HBM is not dead.</li><li>LPUs solve a different problem: deterministic, ultra-low-latency inference for small models.</li><li>Large frontier models still require HBM-based systems.</li><li>Nvidia’s move expands its inference portfolio surface area rather than replacing GPUs.</li><li>The future of AI infrastructure is workload-specific optimization and TCO-driven deployment.</li></ul>]]></description>
    <content:encoded><![CDATA[<p><b>Key Topics</b></p><ul><li>What Nvidia actually bought from Groq and why it is not a traditional acquisition</li><li>Why the deal triggered claims that GPUs and HBM are obsolete</li><li>Architectural trade-offs between GPUs, TPUs, XPUs, and LPUs</li><li>SRAM vs HBM: speed, capacity, cost, and supply chain realities</li><li>Groq LPU fundamentals: VLIW, compiler-scheduled execution, determinism, ultra-low latency</li><li>Why LPUs struggle with large models and where they excel instead</li><li>Practical use cases for hyper-low-latency inference:<ul><li>Ad copy personalization at search latency budgets</li><li>Model routing and agent orchestration</li><li>Conversational interfaces and real-time translation</li><li>Robotics and physical AI at the edge</li><li>Potential applications in AI-RAN and telecom infrastructure</li></ul></li><li>Memory as a design spectrum: SRAM-only, SRAM plus DDR, SRAM plus HBM</li><li>Nvidia’s growing portfolio approach to inference hardware rather than one-size-fits-all</li></ul><p><b>Core Takeaways</b></p><ul><li>GPUs are not dead. HBM is not dead.</li><li>LPUs solve a different problem: deterministic, ultra-low-latency inference for small models.</li><li>Large frontier models still require HBM-based systems.</li><li>Nvidia’s move expands its inference portfolio surface area rather than replacing GPUs.</li><li>The future of AI infrastructure is workload-specific optimization and TCO-driven deployment.</li></ul>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2570635/episodes/18456609-nvidia-acquires-groq.mp3" length="29273715" type="audio/mpeg" />
    <itunes:image href="https://storage.buzzsprout.com/urs52e7xwqcpu2lkcwvfq4q2zhtm?.jpg" />
    <itunes:author>Vikram Sekar and Austin Lyons</itunes:author>
    <guid isPermaLink="false">Buzzsprout-18456609</guid>
    <pubDate>Mon, 05 Jan 2026 10:00:00 -0800</pubDate>
    <itunes:duration>2436</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>1</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
</channel>
</rss>
