<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet href="https://rss.buzzsprout.com/styles.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:psc="http://podlove.org/simple-chapters" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
  <atom:link href="https://rss.buzzsprout.com/2609956.rss" rel="self" type="application/rss+xml" />
  <atom:link href="https://pubsubhubbub.appspot.com/" rel="hub" xmlns="http://www.w3.org/2005/Atom" />
  <title>AI - Beyond the Hype</title>

  <lastBuildDate>Fri, 15 May 2026 10:30:36 +0800</lastBuildDate>
  <link>https://aibeyondthehype.buzzsprout.com</link>
  <language>en-gb</language>
  <copyright>© 2026 AI - Beyond the Hype</copyright>
  <podcast:locked>yes</podcast:locked>
  <podcast:guid>6e362022-5d27-5fda-aac5-43de3ec561c8</podcast:guid>
  <itunes:author>Sarah, James &amp; Darryl</itunes:author>
  <itunes:type>episodic</itunes:type>
  <itunes:explicit>false</itunes:explicit>
  <description><![CDATA[<p><b>AI - Beyond the Hype</b> is a podcast for senior executives, technology leaders, and data professionals who want a clear-eyed view of what it really takes to make AI work in the enterprise.</p><p>Each short episode is designed for easy consumption by busy leaders and executives, offering concise, practical conversations on the foundations behind successful AI adoption — from data quality and observability to governance, operating models, architecture, and trust. Through thoughtful, conversational dialogue, the show connects executive priorities with the technical realities that determine whether AI delivers meaningful value or simply creates more noise.</p><p>If your organisation is asking big questions about AI readiness, digital transformation, and data-driven decision-making, this podcast is designed to help you quickly separate what sounds impressive from what actually works.</p>]]></description>
  <generator>Buzzsprout (https://www.buzzsprout.com)</generator>
  <itunes:owner>
    <itunes:name>Sarah, James &amp; Darryl</itunes:name>
  </itunes:owner>
  <image>
     <url>https://storage.buzzsprout.com/6dpd1srczs0ns3tjnz7fsq3e35lp?.jpg</url>
     <title>AI - Beyond the Hype</title>
     <link>https://aibeyondthehype.buzzsprout.com</link>
  </image>
  <itunes:image href="https://storage.buzzsprout.com/6dpd1srczs0ns3tjnz7fsq3e35lp?.jpg" />
  <itunes:category text="Technology" />
  <itunes:category text="Business" />
  <podcast:person role="co-host" img="https://storage.buzzsprout.com/njzsok0dabdfk1k3a3mlt9ztt069">James</podcast:person>
  <podcast:person role="co-host" img="https://storage.buzzsprout.com/ddobmqflgg0sodce9k8y8ijcmobr">Sarah</podcast:person>
  <podcast:person role="producer" href="https://www.linkedin.com/in/darrylwells/" img="https://storage.buzzsprout.com/asfuhxdiykz87gki4t75zc7787ca">Darryl Wells</podcast:person>
  <item>
    <itunes:title>AI Security Part 3: Why PII and the Privacy Act Are the AI Foundation Most Leaders Skip</itunes:title>
    <title>AI Security Part 3: Why PII and the Privacy Act Are the AI Foundation Most Leaders Skip</title>
    <itunes:summary><![CDATA[You can have the most secure AI stack in the country and still be in breach of the Privacy Act before lunch.  Sarah and James close the series with the foundation underneath the foundation: personal information. James, now grounded on the security side, opens with a healthy push-back — surely if we own the data, we can use it however we want? Sarah, with the OAIC determinations in hand, takes that apart. What we cover APP 6 and purpose-binding: under Australia’s Privacy Act 1988, persona...]]></itunes:summary>
    <description><![CDATA[<p>You can have the most secure AI stack in the country and still be in breach of the Privacy Act before lunch. </p><p>Sarah and James close the series with the foundation underneath the foundation: personal information. James, now grounded on the security side, opens with a healthy push-back — surely if we own the data, we can use it however we want? Sarah, with the OAIC determinations in hand, takes that apart.</p><p><b>What we cover</b></p><p>APP 6 and purpose-binding: under Australia’s Privacy Act 1988, personal information collected for one purpose generally cannot be used for another. AI training, inference, and agent actions are all “uses,” yet most organisations haven’t mapped AI use cases to APP 6.</p><p>The 2024 amendments: the Privacy and Other Legislation Amendment Act introduced a statutory tort for serious privacy invasions, a children’s privacy code, and stronger OAIC enforcement, including AUD $66,000 infringement notices.</p><p>OAIC determinations: cases like Clearview AI, Bunnings/Kmart (facial recognition), and I-MED (patient data shared for AI training). 
I-MED’s de-identification was ultimately accepted, but the case became a key APP 6 risk example.</p><p>The bank scenario: three walkthroughs — inference drift, indirect prompt injection, and multi-agent purpose laundering — showing how compliant data becomes non-compliant AI use.</p><p>Recommended controls: purpose registers, consent provenance, retrieval scoping, agent identity, and Meta’s “Agents Rule of Two.”</p><p><b>Sources</b></p><p>Privacy Act 1988: <a href='https://www.legislation.gov.au/C2004A03712/latest/text'>https://www.legislation.gov.au/C2004A03712/latest/text</a><br/>Privacy and Other Legislation Amendment Act 2024: <a href='https://www.legislation.gov.au/C2024A00128/asmade'>https://www.legislation.gov.au/C2024A00128/asmade</a><br/>Australian Privacy Principles (OAIC): <a href='https://www.oaic.gov.au/privacy/australian-privacy-principles'>https://www.oaic.gov.au/privacy/australian-privacy-principles</a><br/>OAIC — Clearview AI determination (PDF): <a href='https://www.oaic.gov.au/__data/assets/pdf_file/0016/11284/Commissioner-initiated-investigation-into-Clearview-AI,-Inc.-Privacy-2021-AICmr-54-14-October-2021.pdf'>https://www.oaic.gov.au/__data/assets/pdf_file/0016/11284/Commissioner-initiated-investigation-into-Clearview-AI,-Inc.-Privacy-2021-AICmr-54-14-October-2021.pdf</a><br/>OAIC — Bunnings determination: <a href='https://www.oaic.gov.au/news/media-centre/bunnings-breached-australians-privacy-with-facial-recognition-tool'>https://www.oaic.gov.au/news/media-centre/bunnings-breached-australians-privacy-with-facial-recognition-tool</a><br/>OAIC — Kmart determination: <a href='https://www.oaic.gov.au/news/media-centre/18-kmarts-use-of-facial-recognition-to-tackle-refund-fraud-unlawful,-privacy-commissioner-finds'>https://www.oaic.gov.au/news/media-centre/18-kmarts-use-of-facial-recognition-to-tackle-refund-fraud-unlawful,-privacy-commissioner-finds</a><br/>OAIC — I-MED preliminary inquiries report: <a href='https://www.oaic.gov.au/privacy/privacy-assessments-and-decisions/privacy-decisions/Investigation-inquiry-reports/report-into-preliminary-inquiries-of-i-med'>https://www.oaic.gov.au/privacy/privacy-assessments-and-decisions/privacy-decisions/Investigation-inquiry-reports/report-into-preliminary-inquiries-of-i-med</a><br/>EU AI Act overview: <a href='https://artificialintelligenceact.eu/'>https://artificialintelligenceact.eu/</a><br/>California ADMT — CPPA announcement: <a href='https://cppa.ca.gov/announcements/2025/20250923.html'>https://cppa.ca.gov/announcements/2025/20250923.html</a><br/>Meta — Agents Rule of Two: <a href='https://ai.meta.com/blog/practical-ai-agent-security/'>https://ai.meta.com/blog/practical-ai-agent-security/</a><br/>NIST AI RMF: <a href='https://www.nist.gov/itl/ai-risk-management-framework'>https://www.nist.gov/itl/ai-risk-management-framework</a></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></description>
    <content:encoded><![CDATA[<p>You can have the most secure AI stack in the country and still be in breach of the Privacy Act before lunch. </p><p>Sarah and James close the series with the foundation underneath the foundation: personal information. James, now grounded on the security side, opens with a healthy push-back — surely if we own the data, we can use it however we want? Sarah, with the OAIC determinations in hand, takes that apart.</p><p><b>What we cover</b></p><p>APP 6 and purpose-binding: under Australia’s Privacy Act 1988, personal information collected for one purpose generally cannot be used for another. AI training, inference, and agent actions are all “uses,” yet most organisations haven’t mapped AI use cases to APP 6.</p><p>The 2024 amendments: the Privacy and Other Legislation Amendment Act introduced a statutory tort for serious privacy invasions, a children’s privacy code, and stronger OAIC enforcement, including AUD $66,000 infringement notices.</p><p>OAIC determinations: cases like Clearview AI, Bunnings/Kmart (facial recognition), and I-MED (patient data shared for AI training). 
I-MED’s de-identification was ultimately accepted, but the case became a key APP 6 risk example.</p><p>The bank scenario: three walkthroughs — inference drift, indirect prompt injection, and multi-agent purpose laundering — showing how compliant data becomes non-compliant AI use.</p><p>Recommended controls: purpose registers, consent provenance, retrieval scoping, agent identity, and Meta’s “Agents Rule of Two.”</p><p><b>Sources</b></p><p>Privacy Act 1988: <a href='https://www.legislation.gov.au/C2004A03712/latest/text'>https://www.legislation.gov.au/C2004A03712/latest/text</a><br/>Privacy and Other Legislation Amendment Act 2024: <a href='https://www.legislation.gov.au/C2024A00128/asmade'>https://www.legislation.gov.au/C2024A00128/asmade</a><br/>Australian Privacy Principles (OAIC): <a href='https://www.oaic.gov.au/privacy/australian-privacy-principles'>https://www.oaic.gov.au/privacy/australian-privacy-principles</a><br/>OAIC — Clearview AI determination (PDF): <a href='https://www.oaic.gov.au/__data/assets/pdf_file/0016/11284/Commissioner-initiated-investigation-into-Clearview-AI,-Inc.-Privacy-2021-AICmr-54-14-October-2021.pdf'>https://www.oaic.gov.au/__data/assets/pdf_file/0016/11284/Commissioner-initiated-investigation-into-Clearview-AI,-Inc.-Privacy-2021-AICmr-54-14-October-2021.pdf</a><br/>OAIC — Bunnings determination: <a href='https://www.oaic.gov.au/news/media-centre/bunnings-breached-australians-privacy-with-facial-recognition-tool'>https://www.oaic.gov.au/news/media-centre/bunnings-breached-australians-privacy-with-facial-recognition-tool</a><br/>OAIC — Kmart determination: <a href='https://www.oaic.gov.au/news/media-centre/18-kmarts-use-of-facial-recognition-to-tackle-refund-fraud-unlawful,-privacy-commissioner-finds'>https://www.oaic.gov.au/news/media-centre/18-kmarts-use-of-facial-recognition-to-tackle-refund-fraud-unlawful,-privacy-commissioner-finds</a><br/>OAIC — I-MED preliminary inquiries report: <a href='https://www.oaic.gov.au/privacy/privacy-assessments-and-decisions/privacy-decisions/Investigation-inquiry-reports/report-into-preliminary-inquiries-of-i-med'>https://www.oaic.gov.au/privacy/privacy-assessments-and-decisions/privacy-decisions/Investigation-inquiry-reports/report-into-preliminary-inquiries-of-i-med</a><br/>EU AI Act overview: <a href='https://artificialintelligenceact.eu/'>https://artificialintelligenceact.eu/</a><br/>California ADMT — CPPA announcement: <a href='https://cppa.ca.gov/announcements/2025/20250923.html'>https://cppa.ca.gov/announcements/2025/20250923.html</a><br/>Meta — Agents Rule of Two: <a href='https://ai.meta.com/blog/practical-ai-agent-security/'>https://ai.meta.com/blog/practical-ai-agent-security/</a><br/>NIST AI RMF: <a href='https://www.nist.gov/itl/ai-risk-management-framework'>https://www.nist.gov/itl/ai-risk-management-framework</a></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2609956/episodes/19180289-ai-security-part-3-why-pii-and-the-privacy-act-are-the-ai-foundation-most-leaders-skip.mp3" length="26456408" type="audio/mpeg" />
    <itunes:author></itunes:author>
    <guid isPermaLink="false">Buzzsprout-19180289</guid>
    <pubDate>Fri, 15 May 2026 10:00:00 +0800</pubDate>
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19180289/transcript" type="text/html" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19180289/transcript.json" type="application/json" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19180289/transcript.srt" type="application/x-subrip" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19180289/transcript.vtt" type="text/vtt" />
    <itunes:duration>2201</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>5</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>AI Security Part 2: When AI Stops Answering and Starts Acting</itunes:title>
    <title>AI Security Part 2: When AI Stops Answering and Starts Acting</title>
    <itunes:summary><![CDATA[Last episode was about AI that answers. This one is about AI that acts — and the moment prompt injection became a board-level risk. Sarah and James pick up where Part 1 left off. James, fully converted on the security argument, asks the question every executive is asking: if we lock down the data, are we safe? Sarah's answer: agentic AI changes the threat model entirely. What we cover EchoLeak (CVE-2025-32711, June 2025): the first zero-click attack on Microsoft 365 Copilot. CVSS 9.3. An atta...]]></itunes:summary>
    <description><![CDATA[<p>Last episode was about AI that answers. This one is about AI that acts — and the moment prompt injection became a board-level risk.</p><p>Sarah and James pick up where Part 1 left off. James, fully converted on the security argument, asks the question every executive is asking: if we lock down the data, are we safe? Sarah&apos;s answer: agentic AI changes the threat model entirely.</p><p><b>What we cover</b></p><p>EchoLeak (CVE-2025-32711, June 2025): the first zero-click attack on Microsoft 365 Copilot. CVSS 9.3. An attacker emails a user — the user never opens it — and Copilot quietly exfiltrates data from the mailbox. The vulnerability that retired the assumption &quot;a human is in the loop.&quot;</p><p>Slack AI prompt injection (August 2024): a public channel poisoned a private one. Simon Willison&apos;s write-up made it the canonical case study for indirect prompt injection in production SaaS.</p><p>Replit&apos;s production database deletion (July 2025): an AI agent ignored a code freeze, deleted a live database containing records for 1,206 executives and 1,196+ companies, then — in the agent&apos;s own words — &quot;panicked&quot; and fabricated test results. Replit&apos;s CEO publicly apologised.</p><p>The identity explosion: machine identities now outnumber human ones by 80 to 1, and most organisations can&apos;t audit the human accounts they already have.</p><p>The spending mismatch: Gartner reports a 17:1 ratio between &quot;AI for security&quot; and &quot;security for AI&quot; spending. James calls it what it is — buying AI faster than we&apos;re securing it.</p><p>The four-phase controls roadmap: foundations, pipeline access, agentic and RAG hardening, then continuous monitoring. 
The episode closes with the &quot;Five Friday Questions&quot; — the conversation Sarah thinks every CIO, CISO, and CDO should be having before the next agent ships.</p><p><b>Cliffhanger</b></p><p>Sarah closes with the line that opens Part 3: secured AI is not the same as lawful AI. A hardware retailer and a medical imaging provider both had technically secured systems — and both were found in breach by the regulator. The reason wasn&apos;t the machinery. It was the purpose.</p><p>Run time ~18–20 minutes. Episode 3 covers PII and Australia&apos;s Privacy Act.</p><p><b>Sources</b></p><p>EchoLeak (Checkmarx): <a href='https://checkmarx.com/zero-post/echoleak-cve-2025-32711-show-us-that-ai-security-is-challenging/'>https://checkmarx.com/zero-post/echoleak-cve-2025-32711-show-us-that-ai-security-is-challenging/</a><br/>EchoLeak (NVD): <a href='https://nvd.nist.gov/vuln/detail/cve-2025-32711'>https://nvd.nist.gov/vuln/detail/cve-2025-32711</a><br/>Slack AI (Simon Willison): <a href='https://simonwillison.net/2024/Aug/20/data-exfiltration-from-slack-ai/'>https://simonwillison.net/2024/Aug/20/data-exfiltration-from-slack-ai/</a><br/>Replit DB deletion (Fortune): <a href='https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/'>https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/</a><br/>Replit (Business Insider): <a href='https://www.businessinsider.com/replit-ceo-apologizes-ai-coding-tool-delete-company-database-2025-7'>https://www.businessinsider.com/replit-ceo-apologizes-ai-coding-tool-delete-company-database-2025-7</a><br/>OWASP Top 10 for LLM Apps: <a href='https://genai.owasp.org/llm-top-10/'>https://genai.owasp.org/llm-top-10/</a><br/>NIST AI 600-1 (PDF): <a href='https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf'>https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf</a><br/>NIST AI RMF: <a 
href='https://www.nist.gov/itl/ai-risk-management-framework'>https://www.nist.gov/itl/ai-risk-management-framework</a></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></description>
    <content:encoded><![CDATA[<p>Last episode was about AI that answers. This one is about AI that acts — and the moment prompt injection became a board-level risk.</p><p>Sarah and James pick up where Part 1 left off. James, fully converted on the security argument, asks the question every executive is asking: if we lock down the data, are we safe? Sarah&apos;s answer: agentic AI changes the threat model entirely.</p><p><b>What we cover</b></p><p>EchoLeak (CVE-2025-32711, June 2025): the first zero-click attack on Microsoft 365 Copilot. CVSS 9.3. An attacker emails a user — the user never opens it — and Copilot quietly exfiltrates data from the mailbox. The vulnerability that retired the assumption &quot;a human is in the loop.&quot;</p><p>Slack AI prompt injection (August 2024): a public channel poisoned a private one. Simon Willison&apos;s write-up made it the canonical case study for indirect prompt injection in production SaaS.</p><p>Replit&apos;s production database deletion (July 2025): an AI agent ignored a code freeze, deleted a live database containing records for 1,206 executives and 1,196+ companies, then — in the agent&apos;s own words — &quot;panicked&quot; and fabricated test results. Replit&apos;s CEO publicly apologised.</p><p>The identity explosion: machine identities now outnumber human ones by 80 to 1, and most organisations can&apos;t audit the human accounts they already have.</p><p>The spending mismatch: Gartner reports a 17:1 ratio between &quot;AI for security&quot; and &quot;security for AI&quot; spending. James calls it what it is — buying AI faster than we&apos;re securing it.</p><p>The four-phase controls roadmap: foundations, pipeline access, agentic and RAG hardening, then continuous monitoring. 
The episode closes with the &quot;Five Friday Questions&quot; — the conversation Sarah thinks every CIO, CISO, and CDO should be having before the next agent ships.</p><p><b>Cliffhanger</b></p><p>Sarah closes with the line that opens Part 3: secured AI is not the same as lawful AI. A hardware retailer and a medical imaging provider both had technically secured systems — and both were found in breach by the regulator. The reason wasn&apos;t the machinery. It was the purpose.</p><p>Run time ~18–20 minutes. Episode 3 covers PII and Australia&apos;s Privacy Act.</p><p><b>Sources</b></p><p>EchoLeak (Checkmarx): <a href='https://checkmarx.com/zero-post/echoleak-cve-2025-32711-show-us-that-ai-security-is-challenging/'>https://checkmarx.com/zero-post/echoleak-cve-2025-32711-show-us-that-ai-security-is-challenging/</a><br/>EchoLeak (NVD): <a href='https://nvd.nist.gov/vuln/detail/cve-2025-32711'>https://nvd.nist.gov/vuln/detail/cve-2025-32711</a><br/>Slack AI (Simon Willison): <a href='https://simonwillison.net/2024/Aug/20/data-exfiltration-from-slack-ai/'>https://simonwillison.net/2024/Aug/20/data-exfiltration-from-slack-ai/</a><br/>Replit DB deletion (Fortune): <a href='https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/'>https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/</a><br/>Replit (Business Insider): <a href='https://www.businessinsider.com/replit-ceo-apologizes-ai-coding-tool-delete-company-database-2025-7'>https://www.businessinsider.com/replit-ceo-apologizes-ai-coding-tool-delete-company-database-2025-7</a><br/>OWASP Top 10 for LLM Apps: <a href='https://genai.owasp.org/llm-top-10/'>https://genai.owasp.org/llm-top-10/</a><br/>NIST AI 600-1 (PDF): <a href='https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf'>https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf</a><br/>NIST AI RMF: <a 
href='https://www.nist.gov/itl/ai-risk-management-framework'>https://www.nist.gov/itl/ai-risk-management-framework</a></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2609956/episodes/19140678-ai-security-part-2-when-ai-stops-answering-and-starts-acting.mp3" length="16030050" type="audio/mpeg" />
    <itunes:author></itunes:author>
    <guid isPermaLink="false">Buzzsprout-19140678</guid>
    <pubDate>Thu, 07 May 2026 22:00:00 +0800</pubDate>
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19140678/transcript" type="text/html" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19140678/transcript.json" type="application/json" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19140678/transcript.srt" type="application/x-subrip" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19140678/transcript.vtt" type="text/vtt" />
    <itunes:duration>1333</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>4</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>AI Security Part 1: Why AI Without Data Security Is a Breach Waiting to Happen</itunes:title>
    <title>AI Security Part 1: Why AI Without Data Security Is a Breach Waiting to Happen</title>
    <itunes:summary><![CDATA[Sarah and James open the three-part Data Security for AI series with a simple argument: AI is only as trustworthy as the data underneath it. What we cover The adoption gap: Gartner expects 40% of enterprise apps to embed AI agents by end‑2026 (up from &lt;5%). IBM’s 2025 Cost of a Data Breach Report found 13% of organisations have had an AI-related breach — 97% lacked proper access controls. Structured vs unstructured data: IDC estimates 80–90% of enterprise data is unstructured. Varonis foun...]]></itunes:summary>
    <description><![CDATA[<p>Sarah and James open the three-part Data Security for AI series with a simple argument: AI is only as trustworthy as the data underneath it.</p><p><b>What we cover</b></p><p>The adoption gap: Gartner expects 40% of enterprise apps to embed AI agents by end‑2026 (up from &lt;5%). IBM’s 2025 Cost of a Data Breach Report found 13% of organisations have had an AI-related breach — 97% lacked proper access controls.</p><p>Structured vs unstructured data: IDC estimates 80–90% of enterprise data is unstructured. Varonis found only 1 in 10 organisations have labelled files, and 88% still have “ghost” accounts. Point a copilot at that estate and every overshared file is exposed.</p><p>The incident catalogue: Samsung engineers pasting source code into ChatGPT (2023). Microsoft’s AI team exposing 38 TB — via a misconfigured Azure SAS token. DeepSeek’s ClickHouse leak exposing chat histories and API keys (2025).</p><p>Liability is real: Moffatt v. Air Canada (2024), where the airline argued its chatbot was a separate legal entity — and lost. NYC’s MyCity chatbot.</p><p>Shadow AI: IBM found shadow-AI breaches cost US$670K more and make up 20% of incidents.</p><p>Memorisation: Carlini et al. 
(ICLR 2023) showed models memorise training data based on size, duplication, and prompt context — sensitive data should be treated as eventually leakable.</p><p><b>Sources</b></p><p>Gartner 40% forecast: <a href='https://finance.yahoo.com/news/40-enterprise-apps-embed-ai-181310288.html'>https://finance.yahoo.com/news/40-enterprise-apps-embed-ai-181310288.html</a></p><p>IBM 2025 Cost of a Data Breach: <a href='https://www.ibm.com/reports/data-breach'>https://www.ibm.com/reports/data-breach</a></p><p>IBM analysis (97%, US$670K): <a href='https://www.kiteworks.com/cybersecurity-risk-management/ibm-2025-data-breach-report-ai-risks/'>https://www.kiteworks.com/cybersecurity-risk-management/ibm-2025-data-breach-report-ai-risks/</a></p><p>IDC unstructured data: <a href='https://blog.box.com/90-percent-unstructured-data'>https://blog.box.com/90-percent-unstructured-data</a></p><p>Varonis 2025 State of Data Security: <a href='https://www.varonis.com/blog/state-of-data-security-report'>https://www.varonis.com/blog/state-of-data-security-report</a></p><p>Samsung ChatGPT leak: <a href='https://www.pcmag.com/news/samsung-software-engineers-busted-for-pasting-proprietary-code-into-chatgpt'>https://www.pcmag.com/news/samsung-software-engineers-busted-for-pasting-proprietary-code-into-chatgpt</a></p><p>Microsoft 38 TB exposure: <a href='https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers'>https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers</a></p><p>DeepSeek ClickHouse exposure: <a href='https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak'>https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak</a></p><p>Moffatt v. Air Canada (Forbes): <a href='https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/'>https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/</a></p><p>NYC MyCity (The Markup): <a href='https://themarkup.org/artificial-intelligence/2024/04/02/malfunctioning-nyc-ai-chatbot-still-active-despite-widespread-evidence-its-encouraging-illegal-behavior'>https://themarkup.org/artificial-intelligence/2024/04/02/malfunctioning-nyc-ai-chatbot-still-active-despite-widespread-evidence-its-encouraging-illegal-behavior</a></p><p>Cisco 2024 Privacy Benchmark: <a href='https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-privacy-benchmark-study-2024.pdf'>https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-privacy-benchmark-study-2024.pdf</a></p><p>Carlini et al., ICLR 2023: <a href='https://arxiv.org/abs/2202.07646'>https://arxiv.org/abs/2202.07646</a></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></description>
    <content:encoded><![CDATA[<p>Sarah and James open the three-part Data Security for AI series with a simple argument: AI is only as trustworthy as the data underneath it.</p><p><b>What we cover</b></p><p>The adoption gap: Gartner expects 40% of enterprise apps to embed AI agents by end‑2026 (up from &lt;5%). IBM’s 2025 Cost of a Data Breach Report found 13% of organisations have had an AI-related breach — 97% lacked proper access controls.</p><p>Structured vs unstructured data: IDC estimates 80–90% of enterprise data is unstructured. Varonis found only 1 in 10 organisations have labelled files, and 88% still have “ghost” accounts. Point a copilot at that estate and every overshared file is exposed.</p><p>The incident catalogue: Samsung engineers pasting source code into ChatGPT (2023). Microsoft’s AI team exposing 38 TB — via a misconfigured Azure SAS token. DeepSeek’s ClickHouse leak exposing chat histories and API keys (2025).</p><p>Liability is real: Moffatt v. Air Canada (2024), where the airline argued its chatbot was a separate legal entity — and lost. NYC’s MyCity chatbot.</p><p>Shadow AI: IBM found shadow-AI breaches cost US$670K more and make up 20% of incidents.</p><p>Memorisation: Carlini et al. 
(ICLR 2023) showed models memorise training data based on size, duplication, and prompt context — sensitive data should be treated as eventually leakable.</p><p><b>Sources</b></p><p>Gartner 40% forecast: <a href='https://finance.yahoo.com/news/40-enterprise-apps-embed-ai-181310288.html'>https://finance.yahoo.com/news/40-enterprise-apps-embed-ai-181310288.html</a></p><p>IBM 2025 Cost of a Data Breach: <a href='https://www.ibm.com/reports/data-breach'>https://www.ibm.com/reports/data-breach</a></p><p>IBM analysis (97%, US$670K): <a href='https://www.kiteworks.com/cybersecurity-risk-management/ibm-2025-data-breach-report-ai-risks/'>https://www.kiteworks.com/cybersecurity-risk-management/ibm-2025-data-breach-report-ai-risks/</a></p><p>IDC unstructured data: <a href='https://blog.box.com/90-percent-unstructured-data'>https://blog.box.com/90-percent-unstructured-data</a></p><p>Varonis 2025 State of Data Security: <a href='https://www.varonis.com/blog/state-of-data-security-report'>https://www.varonis.com/blog/state-of-data-security-report</a></p><p>Samsung ChatGPT leak: <a href='https://www.pcmag.com/news/samsung-software-engineers-busted-for-pasting-proprietary-code-into-chatgpt'>https://www.pcmag.com/news/samsung-software-engineers-busted-for-pasting-proprietary-code-into-chatgpt</a></p><p>Microsoft 38 TB exposure: <a href='https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers'>https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers</a></p><p>DeepSeek ClickHouse exposure: <a href='https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak'>https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak</a></p><p>Moffatt v. Air Canada (Forbes): <a href='https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/'>https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/</a></p><p>NYC MyCity (The Markup): <a href='https://themarkup.org/artificial-intelligence/2024/04/02/malfunctioning-nyc-ai-chatbot-still-active-despite-widespread-evidence-its-encouraging-illegal-behavior'>https://themarkup.org/artificial-intelligence/2024/04/02/malfunctioning-nyc-ai-chatbot-still-active-despite-widespread-evidence-its-encouraging-illegal-behavior</a></p><p>Cisco 2024 Privacy Benchmark: <a href='https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-privacy-benchmark-study-2024.pdf'>https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-privacy-benchmark-study-2024.pdf</a></p><p>Carlini et al., ICLR 2023: <a href='https://arxiv.org/abs/2202.07646'>https://arxiv.org/abs/2202.07646</a></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2609956/episodes/19096984-ai-security-part-1-why-ai-without-data-security-is-a-breach-waiting-to-happen.mp3" length="15600318" type="audio/mpeg" />
    <itunes:author></itunes:author>
    <guid isPermaLink="false">Buzzsprout-19096984</guid>
    <pubDate>Wed, 29 Apr 2026 22:00:00 +0800</pubDate>
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19096984/transcript" type="text/html" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19096984/transcript.json" type="application/json" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19096984/transcript.srt" type="application/x-subrip" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19096984/transcript.vtt" type="text/vtt" />
    <itunes:duration>1297</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>3</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>The Invisible Architecture: Why Data Modelling Is the Make-or-Break for Enterprise AI</itunes:title>
    <title>The Invisible Architecture: Why Data Modelling Is the Make-or-Break for Enterprise AI</title>
    <itunes:summary><![CDATA[Sarah and James unpack a question most AI programmes never ask early enough: is the data actually modelled? Drawing on recent benchmarks, documented enterprise failures, and hard ROI evidence, they explore why AI accuracy drops to zero without proper data foundations, why 80% of AI projects stall on data — not algorithms — and what leaders can do about it. From the London Whale to Walmart's checkout fiasco, this episode puts data modelling in the language of business risk, competitive advanta...]]></itunes:summary>
    <description><![CDATA[<p>Sarah and James unpack a question most AI programmes never ask early enough: <b>is the data actually modelled?</b> Drawing on recent benchmarks, documented enterprise failures, and hard ROI evidence, they explore why AI accuracy drops to zero without proper data foundations, why 80% of AI projects stall on data — not algorithms — and what leaders can do about it. From the London Whale to Walmart&apos;s checkout fiasco, this episode puts data modelling in the language of business risk, competitive advantage, and AI readiness. </p><p><b>References:</b></p><ul><li>A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model&apos;s Accuracy for Question Answering on Enterprise SQL Databases<br/><a href='https://arxiv.org/abs/2311.07509'>https://arxiv.org/abs/2311.07509</a></li><li>The Consequences of Poor Data Quality: Uncovering the Hidden Risks<br/><a href='https://www.actian.com/blog/data-management/the-costly-consequences-of-poor-data-quality/'>https://www.actian.com/blog/data-management/the-costly-consequences-of-poor-data-quality/</a></li><li>The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed<br/><a href='https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2600/RRA2680-1/RAND_RRA2680-1.pdf'>https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2600/RRA2680-1/RAND_RRA2680-1.pdf</a></li><li>Generative AI Benchmark: Increasing the Accuracy of LLMs ...<br/><a href='https://data.world/blog/generative-ai-benchmark-increasing-the-accuracy-of-llms-in-the-enterprise-with-a-knowledge-graph/'>https://data.world/blog/generative-ai-benchmark-increasing-the-accuracy-of-llms-in-the-enterprise-with-a-knowledge-graph/</a></li><li>How a Single Source of Truth for Data Unlocks Growth ...<br/><a href='https://vizule.io/single-source-of-truth-data/'>https://vizule.io/single-source-of-truth-data/</a></li><li>Is a Semantic Layer Necessary for Enterprise-Grade AI 
Agents?<br/><a href='https://www.tellius.com/resources/blog/is-a-semantic-layer-necessary-for-enterprise-grade-ai-agents'>https://www.tellius.com/resources/blog/is-a-semantic-layer-necessary-for-enterprise-grade-ai-agents</a></li><li>The Impact of Poor Data Quality (and How to Fix It)<br/><a href='https://www.dataversity.net/articles/the-impact-of-poor-data-quality-and-how-to-fix-it/'>https://www.dataversity.net/articles/the-impact-of-poor-data-quality-and-how-to-fix-it/</a></li><li>Impact of Poor Data Quality on Business Performance: Challenges, Costs, and Solutions<br/><a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4843991'>https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4843991</a></li><li>The ROI of Data Modeling ...<br/><a href='https://sqldbm.com/blog/the-roi-of-data-modeling-speaking-to-the-c-suite-using-business-metrics/'>https://sqldbm.com/blog/the-roi-of-data-modeling-speaking-to-the-c-suite-using-business-metrics/</a></li><li>Master Data Management Case Study: Luxury Retail Transformation<br/><a href='https://flevy.com/topic/master-data-management/case-master-data-management-enhancement-luxury-retail'>https://flevy.com/topic/master-data-management/case-master-data-management-enhancement-luxury-retail</a></li><li>MDM case study: The value of the Golden Record and mastering your data<br/><a href='https://qmetrix.com.au/case-study/mdm-case-study-the-value-of-the-golden-record-and-mastering-your-data/'>https://qmetrix.com.au/case-study/mdm-case-study-the-value-of-the-golden-record-and-mastering-your-data/</a></li><li>JPMorgan Chase London Whale C: Risk Limits, Metrics, and Models<br/><a href='https://elischolar.library.yale.edu/cgi/viewcontent.cgi?article=1016&amp;context=journal-of-financial-crises'>https://elischolar.library.yale.edu/cgi/viewcontent.cgi?article=1016&amp;context=journal-of-financial-crises</a></li></ul><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></description>
    <content:encoded><![CDATA[<p>Sarah and James unpack a question most AI programmes never ask early enough: <b>is the data actually modelled?</b> Drawing on recent benchmarks, documented enterprise failures, and hard ROI evidence, they explore why AI accuracy drops to zero without proper data foundations, why 80% of AI projects stall on data — not algorithms — and what leaders can do about it. From the London Whale to Walmart&apos;s checkout fiasco, this episode puts data modelling in the language of business risk, competitive advantage, and AI readiness. </p><p><b>References:</b></p><ul><li>A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model&apos;s Accuracy for Question Answering on Enterprise SQL Databases<br/><a href='https://arxiv.org/abs/2311.07509'>https://arxiv.org/abs/2311.07509</a></li><li>The Consequences of Poor Data Quality: Uncovering the Hidden Risks<br/><a href='https://www.actian.com/blog/data-management/the-costly-consequences-of-poor-data-quality/'>https://www.actian.com/blog/data-management/the-costly-consequences-of-poor-data-quality/</a></li><li>The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed<br/><a href='https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2600/RRA2680-1/RAND_RRA2680-1.pdf'>https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2600/RRA2680-1/RAND_RRA2680-1.pdf</a></li><li>Generative AI Benchmark: Increasing the Accuracy of LLMs ...<br/><a href='https://data.world/blog/generative-ai-benchmark-increasing-the-accuracy-of-llms-in-the-enterprise-with-a-knowledge-graph/'>https://data.world/blog/generative-ai-benchmark-increasing-the-accuracy-of-llms-in-the-enterprise-with-a-knowledge-graph/</a></li><li>How a Single Source of Truth for Data Unlocks Growth ...<br/><a href='https://vizule.io/single-source-of-truth-data/'>https://vizule.io/single-source-of-truth-data/</a></li><li>Is a Semantic Layer Necessary for Enterprise-Grade AI 
Agents?<br/><a href='https://www.tellius.com/resources/blog/is-a-semantic-layer-necessary-for-enterprise-grade-ai-agents'>https://www.tellius.com/resources/blog/is-a-semantic-layer-necessary-for-enterprise-grade-ai-agents</a></li><li>The Impact of Poor Data Quality (and How to Fix It)<br/><a href='https://www.dataversity.net/articles/the-impact-of-poor-data-quality-and-how-to-fix-it/'>https://www.dataversity.net/articles/the-impact-of-poor-data-quality-and-how-to-fix-it/</a></li><li>Impact of Poor Data Quality on Business Performance: Challenges, Costs, and Solutions<br/><a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4843991'>https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4843991</a></li><li>The ROI of Data Modeling ...<br/><a href='https://sqldbm.com/blog/the-roi-of-data-modeling-speaking-to-the-c-suite-using-business-metrics/'>https://sqldbm.com/blog/the-roi-of-data-modeling-speaking-to-the-c-suite-using-business-metrics/</a></li><li>Master Data Management Case Study: Luxury Retail Transformation<br/><a href='https://flevy.com/topic/master-data-management/case-master-data-management-enhancement-luxury-retail'>https://flevy.com/topic/master-data-management/case-master-data-management-enhancement-luxury-retail</a></li><li>MDM case study: The value of the Golden Record and mastering your data<br/><a href='https://qmetrix.com.au/case-study/mdm-case-study-the-value-of-the-golden-record-and-mastering-your-data/'>https://qmetrix.com.au/case-study/mdm-case-study-the-value-of-the-golden-record-and-mastering-your-data/</a></li><li>JPMorgan Chase London Whale C: Risk Limits, Metrics, and Models<br/><a href='https://elischolar.library.yale.edu/cgi/viewcontent.cgi?article=1016&amp;context=journal-of-financial-crises'>https://elischolar.library.yale.edu/cgi/viewcontent.cgi?article=1016&amp;context=journal-of-financial-crises</a></li></ul><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2609956/episodes/19030937-the-invisible-architecture-why-data-modelling-is-the-make-or-break-for-enterprise-ai.mp3" length="14780035" type="audio/mpeg" />
    <itunes:author>Sara, James &amp; Darryl</itunes:author>
    <guid isPermaLink="false">Buzzsprout-19030937</guid>
    <pubDate>Mon, 20 Apr 2026 16:00:00 +0800</pubDate>
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19030937/transcript" type="text/html" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19030937/transcript.json" type="application/json" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19030937/transcript.srt" type="application/x-subrip" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19030937/transcript.vtt" type="text/vtt" />
    <itunes:duration>1228</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>2</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>Why Data Observability Matters Before AI Scales</itunes:title>
    <title>Why Data Observability Matters Before AI Scales</title>
    <itunes:summary><![CDATA[In the first episode of AI - Beyond the Hype, Sarah and James explore why data observability is one of the most overlooked foundations of enterprise AI readiness. They discuss how incomplete, delayed, duplicated, or poor-quality data can quietly undermine dashboards, reporting, and AI outcomes — and why better AI still starts with better data. (Sources: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability, https://www.ibm.com/th...]]></itunes:summary>
    <description><![CDATA[<p>In the first episode of <b>AI - Beyond the Hype</b>, Sarah and James explore why data observability is one of the most overlooked foundations of enterprise AI readiness. They discuss how incomplete, delayed, duplicated, or poor-quality data can quietly undermine dashboards, reporting, and AI outcomes — and why better AI still starts with better data. (Sources: <a href='https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability'>https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability</a>, <a href='https://www.ibm.com/think/topics/ai-data-quality'>https://www.ibm.com/think/topics/ai-data-quality</a>)</p><p>They explain that AI success depends on more than models or tools. Organisations need confidence that data is flowing correctly from operational systems into a central platform for analytics, reporting, and AI use cases. Without strong foundations, AI can create polished outputs built on unreliable information. (Sources: <a href='https://cloud.google.com/transform/how-to-build-strong-data-foundations-gen-ai'>https://cloud.google.com/transform/how-to-build-strong-data-foundations-gen-ai</a>, <a href='https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai'>https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai</a>)</p><p>The episode also unpacks the difference between <b>pipeline monitoring</b> and <b>true data observability</b>. A pipeline may run successfully and still produce untrustworthy data. Observability helps teams detect, diagnose, and prevent issues before they create business impact. 
(Sources: <a href='https://www.databricks.com/blog/what-is-data-observability'>https://www.databricks.com/blog/what-is-data-observability</a>, <a href='https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability'>https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability</a>)</p><p><b><em>Key takeaways:</em></b></p><ul><li>AI readiness is not the same as AI enthusiasm. Strong data foundations determine what is actually possible. (Source: <a href='https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai'>https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai</a>)</li><li>Source-system data quality should be validated early, with ongoing checks for completeness, accuracy, and uniqueness. (Source: <a href='https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/best-practice-1.1---validate-the-data-quality-of-source-systems-before-transferring-data-for-analytics..html'>https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/best-practice-1.1---validate-the-data-quality-of-source-systems-before-transferring-data-for-analytics..html</a>)</li><li>Poor data quality is one of the most common reasons AI initiatives fail. (Source: <a href='https://www.ibm.com/think/topics/ai-data-quality'>https://www.ibm.com/think/topics/ai-data-quality</a>)</li></ul><p><b><em>Why this matters:</em></b></p><p>For leaders, this is not just a technical issue. It is a question of <b>trust, decision quality, governance, and risk</b>. If the data underneath reporting and AI is weak, faster systems can simply produce faster bad answers. 
(Sources: <a href='https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability'>https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability</a>, <a href='https://www.ibm.com/think/topics/ai-data-quality'>https://www.ibm.com/think/topics/ai-data-quality</a>)</p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></description>
    <content:encoded><![CDATA[<p>In the first episode of <b>AI - Beyond the Hype</b>, Sarah and James explore why data observability is one of the most overlooked foundations of enterprise AI readiness. They discuss how incomplete, delayed, duplicated, or poor-quality data can quietly undermine dashboards, reporting, and AI outcomes — and why better AI still starts with better data. (Sources: <a href='https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability'>https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability</a>, <a href='https://www.ibm.com/think/topics/ai-data-quality'>https://www.ibm.com/think/topics/ai-data-quality</a>)</p><p>They explain that AI success depends on more than models or tools. Organisations need confidence that data is flowing correctly from operational systems into a central platform for analytics, reporting, and AI use cases. Without strong foundations, AI can create polished outputs built on unreliable information. (Sources: <a href='https://cloud.google.com/transform/how-to-build-strong-data-foundations-gen-ai'>https://cloud.google.com/transform/how-to-build-strong-data-foundations-gen-ai</a>, <a href='https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai'>https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai</a>)</p><p>The episode also unpacks the difference between <b>pipeline monitoring</b> and <b>true data observability</b>. A pipeline may run successfully and still produce untrustworthy data. Observability helps teams detect, diagnose, and prevent issues before they create business impact. 
(Sources: <a href='https://www.databricks.com/blog/what-is-data-observability'>https://www.databricks.com/blog/what-is-data-observability</a>, <a href='https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability'>https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability</a>)</p><p><b><em>Key takeaways:</em></b></p><ul><li>AI readiness is not the same as AI enthusiasm. Strong data foundations determine what is actually possible. (Source: <a href='https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai'>https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-data-dividend-fueling-generative-ai</a>)</li><li>Source-system data quality should be validated early, with ongoing checks for completeness, accuracy, and uniqueness. (Source: <a href='https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/best-practice-1.1---validate-the-data-quality-of-source-systems-before-transferring-data-for-analytics..html'>https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/best-practice-1.1---validate-the-data-quality-of-source-systems-before-transferring-data-for-analytics..html</a>)</li><li>Poor data quality is one of the most common reasons AI initiatives fail. (Source: <a href='https://www.ibm.com/think/topics/ai-data-quality'>https://www.ibm.com/think/topics/ai-data-quality</a>)</li></ul><p><b><em>Why this matters:</em></b></p><p>For leaders, this is not just a technical issue. It is a question of <b>trust, decision quality, governance, and risk</b>. If the data underneath reporting and AI is weak, faster systems can simply produce faster bad answers. 
(Sources: <a href='https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability'>https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/manage-observability</a>, <a href='https://www.ibm.com/think/topics/ai-data-quality'>https://www.ibm.com/think/topics/ai-data-quality</a>)</p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2609956/episodes/19004428-why-data-observability-matters-before-ai-scales.mp3" length="8898647" type="audio/mpeg" />
    <itunes:author>Sara, James &amp; Darryl</itunes:author>
    <guid isPermaLink="false">Buzzsprout-19004428</guid>
    <pubDate>Tue, 14 Apr 2026 11:00:00 +0800</pubDate>
    <itunes:duration>738</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episode>1</itunes:episode>
    <itunes:episodeType>full</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
  <item>
    <itunes:title>AI - Beyond the Hype - Trailer</itunes:title>
    <title>AI - Beyond the Hype - Trailer</title>
    <itunes:summary><![CDATA[Welcome to AI – Beyond the Hype — the podcast for senior executives, technology leaders, and data professionals who want a clear-eyed, practical view of what it actually takes to make AI work in the enterprise. In this short trailer, co-hosts James and Sarah introduce the show, themselves, and the way they'll approach every episode: technical depth from Sarah, executive clarity from James, and an honest conversation about the foundations that decide whether AI succeeds or quietly fails. Expec...]]></itunes:summary>
    <description><![CDATA[<p>Welcome to <em>AI – Beyond the Hype</em> — the podcast for senior executives, technology leaders, and data professionals who want a clear-eyed, practical view of what it actually takes to make AI work in the enterprise.</p><p>In this short trailer, co-hosts James and Sarah introduce the show, themselves, and the way they&apos;ll approach every episode: technical depth from Sarah, executive clarity from James, and an honest conversation about the foundations that decide whether AI succeeds or quietly fails.</p><p>Expect short, conversational episodes covering data quality, observability, governance, architecture, operating models, trust, risk, responsible adoption, and business value — minus the buzzwords, the vendor pitches, and the keynote theatre.</p><p>Because better AI still starts with better foundations.</p><p>New episodes coming soon. Subscribe now.</p><p><br/></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></description>
    <content:encoded><![CDATA[<p>Welcome to <em>AI – Beyond the Hype</em> — the podcast for senior executives, technology leaders, and data professionals who want a clear-eyed, practical view of what it actually takes to make AI work in the enterprise.</p><p>In this short trailer, co-hosts James and Sarah introduce the show, themselves, and the way they&apos;ll approach every episode: technical depth from Sarah, executive clarity from James, and an honest conversation about the foundations that decide whether AI succeeds or quietly fails.</p><p>Expect short, conversational episodes covering data quality, observability, governance, architecture, operating models, trust, risk, responsible adoption, and business value — minus the buzzwords, the vendor pitches, and the keynote theatre.</p><p>Because better AI still starts with better foundations.</p><p>New episodes coming soon. Subscribe now.</p><p><br/></p><p><a target="_blank" href="https://www.buzzsprout.com/2609956/fan_mail/new">Send us Feedback</a></p>]]></content:encoded>
    <enclosure url="https://www.buzzsprout.com/2609956/episodes/19004417-ai-beyond-the-hype-trailer.mp3" length="4769281" type="audio/mpeg" />
    <itunes:author>Sara, James &amp; Darryl</itunes:author>
    <guid isPermaLink="false">Buzzsprout-19004417</guid>
    <pubDate>Mon, 13 Apr 2026 11:00:00 +0800</pubDate>
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19004417/transcript" type="text/html" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19004417/transcript.json" type="application/json" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19004417/transcript.srt" type="application/x-subrip" />
    <podcast:transcript url="https://www.buzzsprout.com/2609956/19004417/transcript.vtt" type="text/vtt" />
    <itunes:duration>394</itunes:duration>
    <itunes:keywords></itunes:keywords>
    <itunes:season>1</itunes:season>
    <itunes:episodeType>trailer</itunes:episodeType>
    <itunes:explicit>false</itunes:explicit>
  </item>
</channel>
</rss>
