diff --git a/.blogcache b/.blogcache index c3e0c41..d966583 100644 --- a/.blogcache +++ b/.blogcache @@ -1,4 +1,6 @@ { - "content/blogs/cocos-prism-release.md": "3841b39e8411e13b0d618bcffe9d7815", - "content/blogs/confidential-computing-meets-ai.md": "17261c4020c7f2f5d6a3da6f6272926b" -} + "content/blogs/cocos-prism-release.md": "8a45148d84030b5b5bcd47f5088cfd41", + "content/blogs/confidential-computing-meets-ai.md": "6997a319608301ea40d5ee467d954910", + "content/blogs/hipaa-compliant-ai-applications.md": "37766fd0be8210aff2de60095d14d39d", + "content/blogs/privacy-paradox-confidential-computing.md": "3d397da0606869726165bb1c8792a89c" +} \ No newline at end of file diff --git a/blog/cocos-prism-release/index.html b/blog/cocos-prism-release/index.html index 82b16a6..7183c8e 100644 --- a/blog/cocos-prism-release/index.html +++ b/blog/cocos-prism-release/index.html @@ -7,7 +7,7 @@ Unleashing Confidential AI: Cocos v0.8.0 and Prism v0.6.0 Released - Ultraviolet - + @@ -15,13 +15,13 @@ - + - + @@ -37,7 +37,7 @@ "@type": "Person", "name": "sammy oina" }, - "description": "" + "description": "Learn about the new features in Cocos AI v0.8.0 and Prism v0.6.0, including enhanced security policies, performance improvements, and a new asset discovery system." } @@ -562,6 +562,110 @@

Subscribe to Our Newsletter

Next Read

+
+ +
+ +
+ +
+
@@ -579,7 +683,7 @@

Confidential Computing Meets AI: How Cube AI Protects Your LLM Prompts

- + Cube AI leverages confidential computing and TEEs to protect your sensitive AI workloads. Ensure your prompts remain private with hardware-verified security.

diff --git a/blog/confidential-computing-meets-ai/index.html b/blog/confidential-computing-meets-ai/index.html index 0b3b06e..6f30bd9 100644 --- a/blog/confidential-computing-meets-ai/index.html +++ b/blog/confidential-computing-meets-ai/index.html @@ -7,7 +7,7 @@ Confidential Computing Meets AI: How Cube AI Protects Your LLM Prompts - Ultraviolet - + @@ -15,13 +15,13 @@ - + - + @@ -37,7 +37,7 @@ "@type": "Person", "name": "sammy oina" }, - "description": "" + "description": "Discover how Cube AI leverages Trusted Execution Environments (TEEs) to protect your sensitive data and LLM prompts from cloud providers and insider threats." } @@ -540,7 +540,7 @@

Unleashing Confidential AI: Cocos v0.8.0 and Prism v0.6.0 Released

- + Ultraviolet releases Cocos AI v0.8.0 and Prism v0.6.0, enhancing the confidential computing ecosystem. These updates bring improved attestation, stronger security, and a better enterprise experience.

@@ -575,6 +575,110 @@

+
+ +
+ +
+ +
+
diff --git a/blog/hipaa-compliant-ai-applications/index.html b/blog/hipaa-compliant-ai-applications/index.html new file mode 100644 index 0000000..0fdd2bb --- /dev/null +++ b/blog/hipaa-compliant-ai-applications/index.html @@ -0,0 +1,952 @@ + + + + + + + + Unlocking HIPAA-Compliant AI Applications with Confidential Computing - Ultraviolet + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+ + + +
+ + +
+ +

Unlocking HIPAA-Compliant AI Applications with Confidential Computing

+ +
+ Jilks Smith +
+
Jilks Smith
+
+ February 03, 2026 · 8 min read +
+
+
+ + + +

HIPAA-Compliant AI

+

The healthcare industry stands at a paradoxical crossroads. On one side lies the transformative promise of Generative AI, capable of synthesizing medical research in seconds, providing real-time diagnostic support, and automating crushing administrative burdens. On the other side stands the immovable force of HIPAA compliance, creating a regulatory framework that predates the very concept of Large Language Models.

+

For two years, these forces have been locked in a stalemate. Innovation teams pilot impressive AI solutions, only to watch them die in security review boards. The reason? Traditional AI architectures are fundamentally incompatible with HIPAA's requirements.

+ +

The "Pilot Purgatory" Problem

+

Healthcare organizations are trapped in what we call "Pilot Purgatory": rich in AI potential but unable to deploy to production. The barrier isn't lack of innovation; it's architectural incompatibility.

+

When a healthcare product manager proposes a GenAI solution, perhaps to summarize physician notes or automate medical coding, it enters a gauntlet of review cycles designed to say "no." Legal reviews the Business Associate Agreement. InfoSec reviews data flows. Compliance examines audit trails.

+

In traditional AI architectures, this process halts because critical questions have unsatisfactory answers:

+
    +
  • Data Residency: "Can you guarantee our patient data isn't used to train models for other customers?" The answer is often a policy assurance, not a technical one.
  • +
  • Isolation: "Is our inference running on the same GPU memory as competitors?" In multi-tenant clouds, the answer is usually "yes," separated only by hypervisor logic with known vulnerabilities.
  • +
  • Auditability: "Can you prove exactly what code ran on our data?" With black-box APIs, providers say "trust us" but cannot offer cryptographic proof.
  • +
+

The gap between contractual assurance and technical reality is the primary barrier to AI adoption in healthcare.

+

Why Traditional AI Fails HIPAA

+

To understand why a new architecture is needed, we must map HIPAA's Security Rule against modern Generative AI realities.

+

Access Control

+

HIPAA Requirement: Allow access only to authorized persons or software.

+

AI Reality: In a standard RAG (Retrieval-Augmented Generation) pipeline, the AI effectively acts as a super-user. When a physician asks, "Show me patient Smith's history," the LLM retrieval system often has broad database access.

+

The Risk: Prompt injection attacks could force the LLM to retrieve data it has technical access to but shouldn't reveal to that specific user, bypassing application-layer controls.

+

Audit Controls

+

HIPAA Requirement: Record and examine activity in systems containing ePHI.

+

AI Reality: Auditing an LLM is notoriously difficult. The reasoning is opaque. If an AI denies a claim or recommends a diagnosis, the "audit trail" is often just a vector embedding or probability score, unintelligible to human auditors.

+

The Gap: HIPAA requires reconstructing events. With non-deterministic models, reconstruction is nearly impossible unless the architecture enforces strict logging of seed, prompt, and parameters in a tamper-evident way.

+
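One way to make such logging tamper-evident is a hash chain, where each record commits to the one before it. The sketch below is illustrative plain Python, not Cube AI's actual audit API; editing any logged seed, prompt, or parameter breaks every hash from that point on.

```python
import hashlib
import json

def append_entry(log, seed, prompt, params):
    """Append an inference record whose hash commits to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"seed": seed, "prompt": prompt, "params": params, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log):
    """Recompute every hash; any edited field breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An auditor can then re-verify the chain at any time; a record altered after the fact fails verification even if its own hash field is recomputed, because the next entry committed to the original value.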

Transmission Security

+

HIPAA Requirement: Guard against unauthorized access to ePHI transmitted over networks.

+

AI Reality: While TLS protects data in transit, the "Data in Use" problem is the new frontier. When data arrives at the AI server, it must be decrypted to be processed by the GPU.

+

The Risk: During processing, when data is unencrypted in GPU memory, it's vulnerable to cloud providers, server administrators, and side-channel attacks. A malicious insider could theoretically dump GPU memory and recover PHI.

+

The Compliance-First Architecture Solution

+

Healthcare is responding with a new paradigm: Compliance-First AI, exemplified by platforms like Cube AI. This approach inverts the security model. Instead of relying on policy ("we promise not to look"), it relies on cryptography and hardware isolation ("we physically cannot look").

+

Confidential Computing: Protecting Data in Use

+

At the heart of this shift is Confidential Computing. Traditional security protects data at rest (disk encryption) and in transit (TLS). Compliance-first platforms protect Data in Use.

+

How It Works:

+
    +
  • Hardware Isolation: Cube AI utilizes Trusted Execution Environments (TEEs) like Intel TDX or AMD SEV-SNP. GPU and CPU memory used for AI workloads are encrypted at the hardware level.
  • +
  • Encryption Keys: Generated by the processor itself, never exposed to the OS or cloud provider.
  • +
  • The "Black Box" for Hosts: Even cloud providers with root access cannot view memory contents. Attempting to dump RAM reveals only encrypted noise.
  • +
+

Impact on HIPAA: This directly addresses the Transmission Security and Access Control gaps. If cloud admins cannot see the data, the insider threat is blocked by hardware guarantees rather than by policy.

+

Remote Attestation: Verifiable Execution

+

A critical feature of Cube AI is Remote Attestation, the digital fingerprint of code and environment.

+

The Process:

+
    +
  1. Before sending PHI, the hospital requests a cryptographic "quote" signed by the hardware
  2. This quote proves: "I am a genuine AMD/Intel processor, running this specific Cube AI version, with this model hash"
  3. If code has been tampered with or model weights swapped, the hash won't match, and the hospital's system refuses to send data
+
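A hospital-side gate implementing that refusal step might look like the following sketch. The quote format, the expected measurement string, and the HMAC stand-in for the hardware's attestation signature are all simplifications for illustration; real quotes are signed with asymmetric keys fused into the processor.

```python
import hashlib
import hmac

# Hash of the software stack the hospital has approved (hypothetical version strings).
EXPECTED_MEASUREMENT = hashlib.sha256(b"cube-ai-inference||model-weights-v3").hexdigest()

def quote_is_valid(quote, hardware_key):
    """Check the hardware signature, then compare the measurement to the approved one."""
    sig = hmac.new(hardware_key, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, quote["signature"]):
        return False  # not signed by (what we model as) genuine hardware
    return quote["measurement"] == EXPECTED_MEASUREMENT

def send_phi(quote, hardware_key, record):
    """Refuse to transmit PHI unless attestation succeeds."""
    if not quote_is_valid(quote, hardware_key):
        raise PermissionError("attestation failed: PHI stays on-premises")
    return ("encrypted", record)  # stand-in for encrypting to an enclave-held key
```

The important property is the ordering: verification happens before any patient data leaves the hospital, so a patched binary or swapped model never receives PHI at all.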

Impact on HIPAA: This provides ultimate Audit Control. Hospitals have cryptographic proof of exactly what software processed their patient data, moving from logs (which can be faked) to mathematical proofs (which cannot).

+

Zero-Trust AI Infrastructure

+

Cube AI represents a "Zero Trust" approach where model weights (vendor IP) and patient data (hospital IP) are mutually protected:

+
    +
  • Model Confidentiality: Vendors encrypt their models, decrypted only inside the enclave. Hospitals can't steal the model.
  • +
  • Data Confidentiality: Hospitals encrypt their data, decrypted only inside the enclave. Vendors can't see the data.
  • +
  • Output Confidentiality: Results are encrypted inside the enclave and sent back to the hospital.
  • +
+

Impact on HIPAA: This transforms vendors from "Data Processors" to "Blind Processors," dramatically lowering BAA liability profiles.

+

Real Healthcare Use Cases Unlocked

+

By removing security and privacy blockers, compliance-first architectures enable high-value use cases previously deemed "too risky."

+

Mental Health Crisis Triage

+

Scenario: A behavioral health provider deploys a GenAI chatbot for remote mental health crisis triage.

+

The Risk: Patients share deeply sensitive information (suicidal ideation, substance abuse). Data leaks would be catastrophic.

+

The Solution: Using Cube AI, the chatbot runs in a TEE. Patient chat history is encrypted in RAM. The system uses "Sealing" where conversation state is encrypted with a key derived from hardware and user identity—only that specific patient can decrypt their history.

+
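The sealing idea can be sketched as deriving a key from a hardware secret plus the patient's identity, so no other identity (and no other machine) can reproduce it. Everything below is a toy illustration; a real enclave would use the CPU's sealing facilities and an AEAD cipher rather than this XOR keystream.

```python
import hashlib
import hmac

def derive_sealing_key(hardware_secret: bytes, user_id: str) -> bytes:
    """Bind the key to both the machine and the user, HKDF-extract style."""
    return hmac.new(hardware_secret, b"seal:" + user_id.encode(), hashlib.sha256).digest()

def seal(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream; sealing and unsealing are the same operation here."""
    stream = hashlib.sha256(key + b"stream").digest()
    keystream = (stream * (len(data) // len(stream) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))
```

Because the derivation mixes in the user identity, patient A's key cannot unseal patient B's conversation state even on the same hardware.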

Outcome: 24/7 empathetic triage without fear of conversation logs being mined or exposed. Attestation guarantees no human at the vendor can read transcripts.

+

Automated Clinical Documentation

+

Scenario: A hospital introduces AI that listens to doctor-patient conversations to generate SOAP notes and medical codes.

+

The Risk: Audio data is highly identifiable (biometric). Sending raw audio to generic APIs violates the "Minimum Necessary" rule if retained for training.

+

The Solution: Audio is processed in a confidential enclave. The "Zero Trust" model ensures audio is transcribed, summarized, and destroyed within enclave memory. Only the final text note leaves the secure environment.

+

Outcome: Drastic reduction in physician burnout with full HIPAA compliance. Attestation reports prove no audio was persisted to disk.

+

Federated Learning for Rare Diseases

+

Scenario: Five research hospitals want to collaborate on a rare pediatric cancer model. No hospital will share raw patient data due to privacy laws and competitive concerns.

+

The Solution: Confidential Federated Learning. The model travels to each hospital's secure enclave, trains on local data inside the enclave, and only updated model weights (gradients) are sent back. Raw data never leaves the hospital.

+
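A single federated round can be sketched as: compute an update locally at each site, average the updates, then apply them to the global weights. The toy "model" below is just a weight vector nudged toward each site's data mean; in the scheme described above, only these updates cross site boundaries.

```python
def local_update(global_weights, local_records):
    """Inside each hospital's enclave: a toy 'gradient' toward the local data mean."""
    mean = sum(local_records) / len(local_records)
    return [mean - w for w in global_weights]

def federated_round(global_weights, per_hospital_data, lr=0.5):
    """Average the updates from every site; raw records never leave their enclave."""
    updates = [local_update(global_weights, data) for data in per_hospital_data]
    averaged = [sum(u[i] for u in updates) / len(updates)
                for i in range(len(global_weights))]
    return [w + lr * g for w, g in zip(global_weights, averaged)]
```

Running enclaves at each site adds attestation on top of this: every hospital can verify that the code computing `local_update` is exactly the agreed training binary before it touches local records.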

Outcome: A powerful global model trained without a single patient record leaving its home hospital—the "Holy Grail" of medical research.

+

Cost-Benefit Analysis

+

Critics argue Confidential Computing is expensive and complex. While compute costs are higher, the Total Cost of Risk heavily favors the compliance-first approach.

+

Compliance Engineering Costs

+
    +
  • Traditional: Teams build elaborate PII scrubbers, data masking proxies, and DLP gateways requiring continuous maintenance.
  • +
  • Compliance-First: Security is architectural. You pay a 20-30% compute premium but eliminate complex middleware.
  • +
+

Verdict: Higher OpEx (compute), Lower CapEx (engineering/maintenance).

+

Breach Risk Costs

+
    +
  • Traditional: Healthcare breaches average $10.93 million. Risk of prompt injection exposing patient databases is non-zero in multi-tenant systems.
  • +
  • Compliance-First: By isolating memory in TEEs, the "blast radius" of breaches is contained to single enclaves. Massive horizontal data exfiltration is virtually eliminated.
  • +
+

Verdict: Massive reduction in catastrophic risk liability.

+

Vendor Lock-in

+
    +
  • Traditional: Locked into model providers' ecosystems. If they change data policies, you're exposed.
  • +
  • Compliance-First: Platforms like Cube AI support open-source models (Llama 3, Mistral) in your own cloud account. You own the model, data, and enclave.
  • +
+

Verdict: Strategic autonomy and long-term regulatory resilience.

+

The Path Forward

+

The trajectory is clear: Confidential AI will become the default for regulated industries.

+

Just as HTTPS became standard for web traffic, Confidential Computing is becoming standard for cloud compute. Azure and GCP already offer "Confidential VMs" as simple toggles. Soon, "Unencrypted Compute" will be viewed as negligence in healthcare.

+

We're moving toward a "Ubiquitous TEE" world where every sensitive workload runs in an enclave by default. Regulators will eventually demand access to attestation logs. FDA approvals for medical AI devices will require cryptographic proof of non-modification.

+

Key Takeaways

+
    +
  1. The Barrier is Structural: Traditional "black box" AI APIs are fundamentally incompatible with healthcare compliance risk appetite.
  2. Architecture is the Answer: Compliance cannot be achieved by policy alone—it requires Confidential Computing to protect Data-in-Use.
  3. Verification Replaces Trust: Remote attestation provides the cryptographic proof auditors need to approve AI deployments.
  4. ROI is in Risk Reduction: The premium for confidential compute is negligible compared to breach costs or the strategic cost of being left behind in the AI revolution.
+

Healthcare organizations don't have to choose between innovation and compliance. With the right architecture, they can—and must—have both.

+
+

Ready to secure your AI workloads? Learn more about Cube AI's confidential computing architecture or explore our developer guides to get started.

+
+ +
+ +
+ Back to + Blog +
+
Tags
+
+ + hipaa + + healthcare + + confidential-computing + + compliance + + ai-security + +
+
+ +
+
Share this article
+ +
+
+
+
+ + +
+
+
+
+

Subscribe to Our Newsletter

+

Stay updated with the latest news, updates and announcements.

+ + +
+
+ + + +
+
+ + +
+ + + +
+
+

+ By subscribing, you agree to our + Privacy Policy + and + Terms of Service.
You can unsubscribe + at any time. +

+
+
+
+
+
+
+
+ + + + +
+
+

Next Read

+ +
+
+ + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/blog/index.html b/blog/index.html index 804506d..9a30ed7 100644 --- a/blog/index.html +++ b/blog/index.html @@ -181,7 +181,7 @@

Ultraviolet Blog

-
+

Unleashing Confidential AI: Cocos v0.8.0 and Prism v0.6.0 Released

- + Ultraviolet releases Cocos AI v0.8.0 and Prism v0.6.0, enhancing the confidential computing ecosystem. These updates bring improved attestation, stronger security, and a better enterprise experience.

@@ -236,6 +238,116 @@

+ + + + + + + + + +
@@ -253,7 +365,7 @@

Confidential Computing Meets AI: How Cube AI Protects Your LLM Prompts

- + Cube AI leverages confidential computing and TEEs to protect your sensitive AI workloads. Ensure your prompts remain private with hardware-verified security.

diff --git a/blog/privacy-paradox-confidential-computing/index.html b/blog/privacy-paradox-confidential-computing/index.html new file mode 100644 index 0000000..2c18d52 --- /dev/null +++ b/blog/privacy-paradox-confidential-computing/index.html @@ -0,0 +1,972 @@ + + + + + + + + The Privacy Paradox: Why AI Teams Should Care About Confidential Computing - Ultraviolet + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+ + + +
+ + +
+ +

The Privacy Paradox: Why AI Teams Should Care About Confidential Computing

+ +
+ Jilks Smith +
+
Jilks Smith
+
+ February 02, 2026 · 8 min read +
+
+
+ + + +

Privacy Paradox

+

As organizations race to integrate Large Language Models into their operations, they're simultaneously exposing intellectual property, customer records, financial models, and other sensitive assets to unprecedented vulnerabilities. The problem? Traditional software architectures are structurally inadequate for the AI era.

+ +

The Data-in-Use Gap

+

For decades, cybersecurity has relied on protecting data at rest (disk encryption) and data in transit (TLS). This framework has been largely successful—until now.

+

The critical gap is data in use. To generate value, data must be decrypted and loaded into memory for processing. At this moment, data exists in plaintext, vulnerable to:

+
    +
  • Operating system access
  • +
  • Hypervisor inspection
  • +
  • Cloud administrator snooping
  • +
  • Malicious actors with privileged access
  • +
+

In the pre-AI era, this vulnerability was manageable because the exposure window was small. With AI, this risk profile changes dramatically.

+

The AI Multiplier Effect

+

Training or running inference on LLMs requires loading massive datasets into memory for extended periods. The "context window" of an LLM becomes a "vulnerability window."

+

If an attacker gains access, the entire contents of memory are available for exfiltration in cleartext. The models themselves have become high-value assets worth hundreds of millions in R&D investment.

+

This creates a paradox: organizations must use cloud scale to train models, but cannot trust the cloud with secrets and sensitive data.

+

Real-World Breach Forensics

+

The fragility of AI data pipelines isn't theoretical. Recent incidents demonstrate the urgent need for architectural change.

+

Microsoft AI Research Exposure (2023)

+

Microsoft's AI research team accidentally exposed 38 terabytes of private data while publishing open-source training data on GitHub. The breach included:

+
    +
  • Disk backups of employee workstations
  • +
  • Private keys and passwords
  • +
  • Over 30,000 internal Teams messages
  • +
+

The Lesson: AI "data lakes" are massive targets. The breach was caused by a misconfigured storage token, but highlights how data aggregation for AI creates concentrated risk. In a Confidential Computing model, even if storage keys leaked, attackers would lack the hardware-bound decryption keys needed to read the data.

+

Change Healthcare Ransomware (2024)

+

The February 2024 attack on Change Healthcare paralyzed the US healthcare system, costing over $872 million and disrupting patient care nationwide. Attackers gained entry via compromised credentials and allegedly stole 6TB of sensitive medical data.

+

The Lesson: While Confidential Computing can't prevent credential theft, it can prevent data exfiltration. If core processing ran inside Trusted Execution Environments (TEEs), ransomware could encrypt disk files but couldn't read cleartext patient data from memory. Hardware attestation would detect malicious code injection, potentially halting attacks before data compromise.

+

Samsung ChatGPT Leak (2023)

+

Engineers leaked proprietary source code and meeting notes into ChatGPT, demonstrating how "Shadow AI" bypasses IT governance. Once data enters the "AI black box," organizations lose visibility and control.

+

The Lesson: Without architectural safeguards like Confidential Computing, sensitive data can flow into uncontrolled environments with no audit trail or protection.

+

AI-Specific Attack Vectors

+

Beyond traditional breaches, AI systems face unique adversarial attacks:

+

Model Inversion & Membership Inference

+

Attackers can query API-exposed models to reconstruct training data. "Model Inversion" recreates specific training examples (faces, patient records). "Membership Inference" determines if specific data was used in training.

+

Confidential Computing Solution: Deploy privacy-preserving techniques like Differential Privacy inside tamper-proof enclaves, ensuring privacy guarantees can't be disabled by malicious admins.

+
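As a sketch of the underlying technique, Laplace noise can be calibrated to the most any single record can shift a released statistic. The code below releases a differentially private mean; the sensitivity formula assumes each value is bounded to a known range, an assumption added here for illustration and not taken from the incident reports above.

```python
import math
import random

def dp_mean(values, epsilon, value_range, rng):
    """Release the mean plus Laplace noise scaled to one record's maximum influence."""
    # Swapping one record bounded to value_range moves the mean by at most this much.
    sensitivity = value_range / len(values)
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # Laplace sample via inverse CDF
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(values) / len(values) + noise
```

Running this inside an enclave means the noise addition itself is attested: an administrator cannot silently set `epsilon` to a huge value or skip the noise step, because doing so changes the measured code.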

Model Theft

+

For AI companies, model weights are primary IP. In standard cloud deployments, weights reside in GPU memory. Sophisticated attackers with kernel access can copy these weights.

+

Confidential Computing Solution: NVIDIA's H100 Confidential Computing encrypts GPU memory and the CPU-GPU link, preventing "weight stealing" even from infrastructure providers.

+

Supply Chain Poisoning

+

Research revealed hundreds of Hugging Face models containing malicious code or susceptible to tampering. Attackers upload models that execute arbitrary code when loaded or are "poisoned" to misbehave on triggers.

+

Confidential Computing Solution: Combine TEEs with supply chain tools like Sigstore to enforce "Verify then Trust" policies—models load into secure enclaves only with valid cryptographic signatures from trusted builders.

+
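A "verify then trust" loader can be sketched as: check the model blob's signature against a registry of trusted publishers before any deserialization happens. The registry, publisher name, and HMAC signature scheme below are hypothetical placeholders standing in for real supply chain tooling such as Sigstore, which uses certificate-based signatures.

```python
import hashlib
import hmac

# Hypothetical registry of publisher signing keys (real systems use Sigstore/PKI).
TRUSTED_PUBLISHERS = {"acme-models": b"acme-signing-key"}

def load_model(blob: bytes, signature: str, publisher: str):
    """Verify, then trust: no deserialization until the signature checks out."""
    key = TRUSTED_PUBLISHERS.get(publisher)
    if key is None:
        raise PermissionError(f"unknown publisher {publisher!r}")
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("signature mismatch: refusing to load model")
    return {"publisher": publisher, "weights": blob}  # safe to parse only now
```

The ordering matters for poisoned models that execute code on load: verification operates on raw bytes, so a tampered artifact is rejected before it ever reaches a deserializer.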

The Regulatory Imperative

+

Global regulations are accelerating Confidential Computing adoption, moving from general data protection to specific AI safety mandates.

+

EU AI Act

+

The world's first comprehensive AI law includes provisions that directly align with Confidential Computing capabilities:

+
    +
  • Article 78 (Confidentiality): Mandates protection of intellectual property and trade secrets
  • +
  • Article 15 (Cybersecurity): Requires high-risk AI systems to resist unauthorized alteration—implying execution environments that guarantee code and data integrity (TEEs)
  • +
  • Article 10 (Data Governance): Mandates data integrity and confidentiality during processing
  • +
+

US Legislation

+

By 2025, legislators in all 50 states had introduced AI-related bills. States like Colorado and California regulate algorithmic discrimination and require risk management policies, driving demand for auditable, secure compute environments.

+

Federal Executive Orders emphasize securing the AI supply chain and preventing model theft by adversaries—goals directly supported by hardware-enforced isolation.

+

How Confidential Computing Works

+

Confidential Computing resolves the Privacy Paradox by changing the fundamental assumption of trust in the compute stack.

+

Trusted Execution Environments (TEEs)

+

TEEs are hardware-isolated environments where code and data are protected from the rest of the system:

+
    +
  • Memory Encryption: Data written to RAM is encrypted with keys generated inside the CPU package that never leave it
  • +
  • Access Control: CPUs prevent any software outside the TEE (OS, hypervisor, other VMs) from reading or writing TEE memory
  • +
  • Attestation: Cryptographic proof that specific code is running in a genuine, untampered TEE
  • +
+

Even with full root privileges, attackers see only encrypted ciphertext when attempting to access TEE memory.

+

Hardware Platforms

+

AMD SEV-SNP: Each VM gets a unique memory encryption key. Ideal for confidential VMs hosting AI control planes, vector databases, or CPU-based inference.

+

Intel TDX: Introduces "Trust Domains" with efficient memory encryption. Optimized for high-performance compute and rigorous attestation. Strong for sensitive model training pipelines.

+

NVIDIA H100 Confidential GPU: Revolutionary for AI. Encrypts GPU memory (up to 80GB HBM3) and the CPU-GPU link. Enables confidential training and inference with <5% overhead for compute-bound workloads.

+

Remote Attestation

+

Remote attestation proves a workload is running in a genuine TEE:

+
    +
  1. Measurement: Hardware computes a cryptographic hash of code loaded into the TEE
  2. Evidence Generation: Hardware signs this hash with a private key embedded in silicon
  3. Verification: An Attestation Service checks the signature against manufacturer public keys
  4. Key Release: If verification succeeds, decryption keys are released to the workload
+

This ensures keys are never released unless the environment is proven secure and untampered. If malware is injected, the hash changes, verification fails, and keys remain locked.

+
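Those four steps can be sketched end to end. In this toy version an HMAC stands in for the silicon-embedded signing key, and the "attestation service" is a single function that releases the data key only for a known-good measurement; real platforms use asymmetric keys and vendor-run verification services.

```python
import hashlib
import hmac

MANUFACTURER_KEY = b"simulated-fused-key"  # stand-in for the key burned into silicon
KNOWN_GOOD = {hashlib.sha256(b"inference-server-v1").hexdigest()}

def hardware_quote(code_image: bytes) -> dict:
    """Steps 1-2: measure the loaded code and sign the measurement."""
    m = hashlib.sha256(code_image).hexdigest()
    sig = hmac.new(MANUFACTURER_KEY, m.encode(), hashlib.sha256).hexdigest()
    return {"measurement": m, "signature": sig}

def release_key(quote: dict, data_key: bytes) -> bytes:
    """Steps 3-4: verify the signature, then release keys only for known-good code."""
    expected = hmac.new(MANUFACTURER_KEY, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["signature"]):
        raise PermissionError("quote not signed by genuine hardware")
    if quote["measurement"] not in KNOWN_GOOD:
        raise PermissionError("unrecognized measurement: keys stay locked")
    return data_key
```

Injected malware changes the code image, which changes the measurement, which removes it from the known-good set: the key release fails without any human in the loop.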

Real-World ROI: Case Studies

+

BeeKeeperAI: Healthcare Research Platform

+

Challenge: Developing clinical AI requires diverse patient data, but privacy regulations create massive friction. Traditional de-identification is costly, slow, and reduces data fidelity.

+

Solution: BeeKeeperAI's "EscrowAI" platform uses Azure Confidential Computing (Intel SGX). Hospitals encrypt data and upload to secure enclaves. Algorithm developers upload models to the same enclave. Models execute against data inside the TEE—developers receive performance reports but never see raw data.

+

Impact:

+
    +
  • Time-to-Insight: Reduced from 12-24 months to days
  • +
  • Cost Savings: Eliminated expensive synthetic data purchases and legal fees
  • +
  • Innovation: Enabled validation on rare disease datasets previously inaccessible due to privacy fragmentation
  • +
+

Consilient: Federated AML for Financial Services

+

Challenge: Anti-Money Laundering efforts are hampered by information silos. Banks only see transactions within their walls. Criminals exploit this by moving funds across institutions. Traditional systems generate >90% false positives.

+

Solution: Consilient's "DOZER" uses Federated Learning in Intel SGX enclaves. Models move to banks' secure environments, learn from local data without data leaving custody, then aggregate to form smarter global detectors.

+

Impact:

+
    +
  • False Positive Reduction: Achieved 12% false positive rate (87% reduction from 90% industry standard)
  • +
  • Effectiveness: AUC-ROC score of 0.90 (17% improvement over single-institution models)
  • +
  • Efficiency: 75% efficiency improvement for one Tier-1 US bank
  • +
  • Discovery: Identified previously undetected money laundering network
  • +
+

Implementation Strategies

+

Confidential VMs (CVMs)

+

Approach: "Lift and Shift"—entire VMs run inside TEEs (AMD SEV-SNP)

+

Pros: Easiest deployment, no code changes, works with legacy applications

+

Cons: Large Trusted Computing Base (must trust entire guest OS)

+

Best For: Migrating existing monolithic AI applications, databases, legacy systems

+

Confidential Containers (CoCo)

+

Approach: Cloud-native—each Kubernetes Pod runs in its own lightweight microVM TEE

+

Pros: Small TCB, fine-grained isolation, better security posture, native Kubernetes integration

+

Cons: Requires mature Kubernetes setup, slightly more complex debugging

+

Best For: Modern AI inference services, multi-tenant SaaS platforms, sensitive microservices

+

The Path Forward

+

The global confidential computing market is projected to grow from $9.04 billion in 2024 to over $1,281 billion by 2034—a 64% CAGR. This isn't just security spending; it's a structural transformation in enterprise computing architecture.

+

By 2026, over 70% of enterprise AI workloads will involve sensitive data, making confidential architectures a necessity rather than a luxury.

+

The HTTPS Moment

+

In internet history, there was a moment when HTTPS transitioned from a requirement for banking sites to the default standard for the entire web. We're at that same inflection point for AI.

+

Confidential Computing is the "HTTPS for AI"—the protocol that builds the trust necessary for the next generation of intelligent systems to flourish.

+

Key Takeaways

+
    +
  1. Traditional security fails AI: The "two-state" model (at rest, in transit) leaves data-in-use vulnerable—the exact state AI requires
  2. Hardware-based isolation is essential: TEEs provide mathematical guarantees that software-based security cannot
  3. Regulatory pressure is accelerating: The EU AI Act and US legislation increasingly demand "privacy by design"
  4. ROI is proven: Organizations like BeeKeeperAI and Consilient demonstrate dramatic time-to-value and cost reductions
  5. The question has changed: From "Can we afford to implement this?" to "Can we survive ignoring it?"
+
+

Ready to secure your AI workloads? Learn more about Cube AI's confidential computing architecture or explore our developer guides to get started.

+
+ +
+ + +
Tags
+
+ + confidential-computing + + ai-security + + privacy + + tee + + enterprise-ai + +
+
+ +
+
Share this article
+ +
+
+
+
+ + +
+
+
+
+

Subscribe to Our Newsletter

+

Stay updated with the latest news, updates and announcements.

+ + +
+
+ + + +
+
+ + +
+ + + +
+
+

+ By subscribing, you agree to our + Privacy Policy + and + Terms of Service.
You can unsubscribe + at any time. +

+
+
+
+
+
+
+
+ + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/content/blogs/hipaa-compliant-ai-applications.md b/content/blogs/hipaa-compliant-ai-applications.md new file mode 100644 index 0000000..f220eda --- /dev/null +++ b/content/blogs/hipaa-compliant-ai-applications.md @@ -0,0 +1,178 @@ +--- +slug: hipaa-compliant-ai-applications +title: "Unlocking HIPAA-Compliant AI Applications with Confidential Computing" +excerpt: "" +description: "Discover how Confidential Computing enables HIPAA-compliant AI applications in healthcare. Learn about TEEs, remote attestation, and compliance-first architecture." +author: + name: "Jilks Smith" + picture: "https://avatars.githubusercontent.com/u/41241359?v=4" +tags: [hipaa, healthcare, confidential-computing, compliance, ai-security] +image: /img/hipaa-ai-compliance.png +date: 2026-02-03 +--- + +![HIPAA-Compliant AI](/img/hipaa-ai-compliance.png) + +The healthcare industry stands at a paradoxical crossroads. On one side lies the transformative promise of Generative AI, capable of synthesizing medical research in seconds, providing real-time diagnostic support, and automating crushing administrative burdens. On the other side stands the immovable force of HIPAA compliance, creating a regulatory framework that predates the very concept of Large Language Models. + +For two years, these forces have been locked in a stalemate. Innovation teams pilot impressive AI solutions, only to watch them die in security review boards. The reason? **Traditional AI architectures are fundamentally incompatible with HIPAA's requirements.** + + + +## The "Pilot Purgatory" Problem + +Healthcare organizations are trapped in what we call "Pilot Purgatory": rich in AI potential but unable to deploy to production. The barrier isn't lack of innovation; it's architectural incompatibility. 
+ +When a healthcare product manager proposes a GenAI solution, perhaps to summarize physician notes or automate medical coding, it enters a gauntlet of review cycles designed to say "no." Legal reviews the Business Associate Agreement. InfoSec reviews data flows. Compliance examines audit trails. + +In traditional AI architectures, this process halts because critical questions have unsatisfactory answers: + +- **Data Residency**: "Can you guarantee our patient data isn't used to train models for other customers?" The answer is often a policy assurance, not a technical one. +- **Isolation**: "Is our inference running on the same GPU memory as competitors?" In multi-tenant clouds, the answer is usually "yes," separated only by hypervisor logic with known vulnerabilities. +- **Auditability**: "Can you prove exactly what code ran on our data?" With black-box APIs, providers say "trust us" but cannot offer cryptographic proof. + +The gap between contractual assurance and technical reality is the primary barrier to AI adoption in healthcare. + +## Why Traditional AI Fails HIPAA + +To understand why a new architecture is needed, we must map HIPAA's Security Rule against modern Generative AI realities. + +### Access Control + +**HIPAA Requirement**: Allow access only to authorized persons or software. + +**AI Reality**: In a standard RAG (Retrieval-Augmented Generation) pipeline, the AI effectively acts as a super-user. When a physician asks, "Show me patient Smith's history," the LLM retrieval system often has broad database access. + +**The Risk**: Prompt injection attacks could force the LLM to retrieve data it has technical access to but shouldn't reveal to that specific user, bypassing application-layer controls. + +### Audit Controls + +**HIPAA Requirement**: Record and examine activity in systems containing ePHI. + +**AI Reality**: Auditing an LLM is notoriously difficult. The reasoning is opaque. 
If an AI denies a claim or recommends a diagnosis, the "audit trail" is often just a vector embedding or probability score, unintelligible to human auditors. + +**The Gap**: HIPAA requires reconstructing events. With non-deterministic models, reconstruction is nearly impossible unless the architecture enforces strict logging of seed, prompt, and parameters in a tamper evident way. + +### Transmission Security + +**HIPAA Requirement**: Guard against unauthorized access to ePHI transmitted over networks. + +**AI Reality**: While TLS protects data in transit, the "Data in Use" problem is the new frontier. When data arrives at the AI server, it must be decrypted to be processed by the GPU. + +**The Risk**: During processing, when data is unencrypted in GPU memory, it's vulnerable to cloud providers, server administrators, and side-channel attacks. A malicious insider could theoretically dump GPU memory and recover PHI. + +## The Compliance-First Architecture Solution + +Healthcare is responding with a new paradigm: **Compliance-First AI**, exemplified by platforms like Cube AI. This approach inverts the security model. Instead of relying on policy ("we promise not to look"), it relies on cryptography and hardware isolation ("we physically cannot look"). + +### Confidential Computing: Protecting Data in Use + +At the heart of this shift is **Confidential Computing**. Traditional security protects data at rest (disk encryption) and in transit (TLS). Compliance-first platforms protect **Data in Use**. + +**How It Works**: + +- **Hardware Isolation**: Cube AI utilizes Trusted Execution Environments (TEEs) like Intel TDX or AMD SEV-SNP. GPU and CPU memory used for AI workloads are encrypted at the hardware level. +- **Encryption Keys**: Generated by the processor itself, never exposed to the OS or cloud provider. +- **The "Black Box" for Hosts**: Even cloud providers with root access cannot view memory contents. Attempting to dump RAM reveals only encrypted noise. 
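+The principle behind the bullets above — memory contents that are ciphertext to everyone except the processor holding the key — can be sketched in a few lines. This is a toy illustration only: the SHA-256 counter-mode keystream stands in for the CPU's hardware memory-encryption engine, and `cpu_key` stands in for a key generated inside the processor package that never leaves it.
+
+```python
+import hashlib
+import secrets
+
+def keystream_xor(key: bytes, data: bytes) -> bytes:
+    """Toy counter-mode keystream from SHA-256 -- illustration only,
+    standing in for the CPU's hardware memory-encryption engine."""
+    out = bytearray()
+    for offset in range(0, len(data), 32):
+        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
+        chunk = data[offset:offset + 32]
+        out.extend(b ^ p for b, p in zip(chunk, pad))
+    return bytes(out)
+
+# Key generated "inside the CPU package": never exposed to OS or hypervisor.
+cpu_key = secrets.token_bytes(32)
+
+phi = b"Patient Smith: stage II, started treatment 2026-01-15"
+
+# What actually sits in RAM while the enclave works on the data.
+ram_image = keystream_xor(cpu_key, phi)
+
+# A host admin dumping memory sees only encrypted noise...
+assert ram_image != phi
+# ...while the enclave, holding the hardware key, reads it normally.
+assert keystream_xor(cpu_key, ram_image) == phi
+```
+
+The asymmetry is the point: the same dump that is noise to the cloud provider is plaintext to code running inside the enclave, because only the silicon holds the key.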
+ +**Impact on HIPAA**: This definitively solves Transmission Security and Access Control problems. If cloud admins cannot see the data, insider threat risk is mathematically eliminated. + +### Remote Attestation: Verifiable Execution + +A critical feature of Cube AI is **Remote Attestation**, the digital fingerprint of code and environment. + +**The Process**: + +1. Before sending PHI, the hospital requests a cryptographic "quote" signed by the hardware +2. This quote proves: "I am a genuine AMD/Intel processor, running this specific Cube AI version, with this model hash" +3. If code has been tampered with or model weights swapped, the hash won't match, and the hospital's system refuses to send data + +**Impact on HIPAA**: This provides ultimate Audit Control. Hospitals have cryptographic proof of exactly what software processed their patient data, moving from logs (which can be faked) to mathematical proofs (which cannot). + +### Zero-Trust AI Infrastructure + +Cube AI represents a "Zero Trust" approach where model weights (vendor IP) and patient data (hospital IP) are mutually protected: + +- **Model Confidentiality**: Vendors encrypt their models, decrypted only inside the enclave. Hospitals can't steal the model. +- **Data Confidentiality**: Hospitals encrypt their data, decrypted only inside the enclave. Vendors can't see the data. +- **Output Confidentiality**: Results are encrypted inside the enclave and sent back to the hospital. + +**Impact on HIPAA**: This transforms vendors from "Data Processors" to "Blind Processors," dramatically lowering BAA liability profiles. + +## Real Healthcare Use Cases Unlocked + +By removing security and privacy blockers, compliance-first architectures enable high-value use cases previously deemed "too risky." + +### Mental Health Crisis Triage + +**Scenario**: A behavioral health provider deploys a GenAI chatbot for remote mental health crisis triage. 
+ +**The Risk**: Patients share deeply sensitive information (suicidal ideation, substance abuse). Data leaks would be catastrophic. + +**The Solution**: Using Cube AI, the chatbot runs in a TEE. Patient chat history is encrypted in RAM. The system uses "Sealing" where conversation state is encrypted with a key derived from hardware and user identity—only that specific patient can decrypt their history. + +**Outcome**: 24/7 empathetic triage without fear of conversation logs being mined or exposed. Attestation guarantees no human at the vendor can read transcripts. + +### Automated Clinical Documentation + +**Scenario**: A hospital introduces AI that listens to doctor-patient conversations to generate SOAP notes and medical codes. + +**The Risk**: Audio data is highly identifiable (biometric). Sending raw audio to generic APIs violates the "Minimum Necessary" rule if retained for training. + +**The Solution**: Audio is processed in a confidential enclave. The "Zero Trust" model ensures audio is transcribed, summarized, and destroyed within enclave memory. Only the final text note leaves the secure environment. + +**Outcome**: Drastic reduction in physician burnout with full HIPAA compliance. Attestation reports prove no audio was persisted to disk. + +### Federated Learning for Rare Diseases + +**Scenario**: Five research hospitals want to collaborate on a rare pediatric cancer model. No hospital will share raw patient data due to privacy laws and competitive concerns. + +**The Solution**: Confidential Federated Learning. The model travels to each hospital's secure enclave, trains on local data inside the enclave, and only updated model weights (gradients) are sent back. Raw data never leaves the hospital. + +**Outcome**: A powerful global model trained without a single patient record leaving its home hospital—the "Holy Grail" of medical research. + +## Cost-Benefit Analysis + +Critics argue Confidential Computing is expensive and complex. 
While compute costs are higher, the **Total Cost of Risk** heavily favors the compliance-first approach. + +### Compliance Engineering Costs + +- **Traditional**: Teams build elaborate PII scrubbers, data masking proxies, and DLP gateways requiring continuous maintenance. +- **Compliance-First**: Security is architectural. You pay a 20-30% compute premium but eliminate complex middleware. + +**Verdict**: Higher OpEx (compute), Lower CapEx (engineering/maintenance). + +### Breach Risk Costs + +- **Traditional**: Healthcare breaches average $10.93 million. Risk of prompt injection exposing patient databases is non-zero in multi-tenant systems. +- **Compliance-First**: By isolating memory in TEEs, the "blast radius" of breaches is contained to single enclaves. Massive horizontal data exfiltration is virtually eliminated. + +**Verdict**: Massive reduction in catastrophic risk liability. + +### Vendor Lock-in + +- **Traditional**: Locked into model providers' ecosystems. If they change data policies, you're exposed. +- **Compliance-First**: Platforms like Cube AI support open-source models (Llama 3, Mistral) in your own cloud account. You own the model, data, and enclave. + +**Verdict**: Strategic autonomy and long-term regulatory resilience. + +## The Path Forward + +The trajectory is clear: **Confidential AI will become the default for regulated industries.** + +Just as HTTPS became standard for web traffic, Confidential Computing is becoming standard for cloud compute. Azure and GCP already offer "Confidential VMs" as simple toggles. Soon, "Unencrypted Compute" will be viewed as negligence in healthcare. + +We're moving toward a "Ubiquitous TEE" world where every sensitive workload runs in an enclave by default. Regulators will eventually demand access to attestation logs. FDA approvals for medical AI devices will require cryptographic proof of non-modification. + +## Key Takeaways + +1. 
**The Barrier is Structural**: Traditional "black box" AI APIs are fundamentally incompatible with healthcare compliance risk appetite. +2. **Architecture is the Answer**: Compliance cannot be achieved by policy alone—it requires Confidential Computing to protect Data-in-Use. +3. **Verification Replaces Trust**: Remote attestation provides cryptographic proof auditors need to approve AI deployments. +4. **ROI is in Risk Reduction**: The premium for confidential compute is negligible compared to breach costs or the strategic cost of being left behind in the AI revolution. + +Healthcare organizations don't have to choose between innovation and compliance. With the right architecture, they can—and must—have both. + +--- + +*Ready to secure your AI workloads? Learn more about [Cube AI's confidential computing architecture](https://docs.cube.ultraviolet.rs/architecture) or explore our [developer guides](https://docs.cube.ultraviolet.rs/developer-guide) to get started.* \ No newline at end of file diff --git a/content/blogs/privacy-paradox-confidential-computing.md b/content/blogs/privacy-paradox-confidential-computing.md new file mode 100644 index 0000000..508e1ce --- /dev/null +++ b/content/blogs/privacy-paradox-confidential-computing.md @@ -0,0 +1,209 @@ +--- +slug: privacy-paradox-confidential-computing +title: "The Privacy Paradox: Why AI Teams Should Care About Confidential Computing" +excerpt: "" +description: "Discover why AI teams need Confidential Computing to protect sensitive data during processing. Learn about TEEs, real-world breaches, and implementation strategies." 
+author: + name: "Jilks Smith" + picture: "https://avatars.githubusercontent.com/u/41241359?v=4" +tags: [confidential-computing, ai-security, privacy, tee, enterprise-ai] +image: /img/privacy-paradox.png +date: 2026-02-02 +--- + +![Privacy Paradox](/img/privacy-paradox/privacy-paradox.jpg) + +As organizations race to integrate Large Language Models into their operations, they're simultaneously exposing intellectual property, customer records, financial models and many more assets to unprecedented vulnerabilities. **The problem? Traditional software architectures are structurally inadequate for the AI era.** + + + +## The Data-in-Use Gap + +For decades, cybersecurity has relied on protecting **data at rest** (disk encryption) and **data in transit** (TLS). This framework has been largely successful—until now. + +The critical gap is **data in use**. To generate value, data must be decrypted and loaded into memory for processing. At this moment, data exists in plaintext, vulnerable to: + +- Operating system access +- Hypervisor inspection +- Cloud administrator snooping +- Malicious actors with privileged access + +In the pre-AI era, this vulnerability was manageable because the exposure window was small. **With AI this risk profile changes.** + +### The AI Multiplier Effect + +Training or running inference on LLMs requires loading massive datasets into memory for extended periods. The "context window" of an LLM becomes a "vulnerability window." + +If an attacker gains access, the entire contents of memory are available for exfiltration in cleartext. The models themselves have become high-value assets worth hundreds of millions in R&D investment. + +This creates a paradox: organizations must use cloud scale to train models, but cannot trust the cloud with secrets and sensitive data. + +## Real-World Breach Forensics + +The fragility of AI data pipelines isn't theoretical. Recent incidents demonstrate the urgent need for architectural change. 
+ +### Microsoft AI Research Exposure (2023) + +Microsoft's AI research team accidentally exposed **38 terabytes** of private data while publishing open-source training data on GitHub. The breach included: + +- Disk backups of employee workstations +- Private keys and passwords +- Over 30,000 internal Teams messages + +**The Lesson**: AI "data lakes" are massive targets. The breach was caused by a misconfigured storage token, but highlights how data aggregation for AI creates concentrated risk. In a Confidential Computing model, even if storage keys leaked, attackers would lack the hardware-bound decryption keys needed to read the data. + +### Change Healthcare Ransomware (2024) + +The February 2024 attack on Change Healthcare paralyzed the US healthcare system, costing over **$872 million** and disrupting patient care nationwide. Attackers gained entry via compromised credentials and allegedly stole 6TB of sensitive medical data. + +**The Lesson**: While Confidential Computing can't prevent credential theft, it can prevent data exfiltration. If core processing ran inside Trusted Execution Environments (TEEs), ransomware could encrypt disk files but couldn't read cleartext patient data from memory. Hardware attestation would detect malicious code injection, potentially halting attacks before data compromise. + +### Samsung ChatGPT Leak (2023) + +Engineers leaked proprietary source code and meeting notes into ChatGPT, demonstrating how "Shadow AI" bypasses IT governance. Once data enters the "AI black box," organizations lose visibility and control. + +**The Lesson**: Without architectural safeguards like Confidential Computing, sensitive data can flow into uncontrolled environments with no audit trail or protection. + +## AI-Specific Attack Vectors + +Beyond traditional breaches, AI systems face unique adversarial attacks: + +### Model Inversion & Membership Inference + +Attackers can query API-exposed models to reconstruct training data. 
"Model Inversion" recreates specific training examples (faces, patient records). "Membership Inference" determines if specific data was used in training. + +**Confidential Computing Solution**: Deploy privacy-preserving techniques like Differential Privacy inside tamper-proof enclaves, ensuring privacy guarantees can't be disabled by malicious admins. + +### Model Theft + +For AI companies, model weights are primary IP. In standard cloud deployments, weights reside in GPU memory. Sophisticated attackers with kernel access can copy these weights. + +**Confidential Computing Solution**: NVIDIA's H100 Confidential Computing encrypts GPU memory and the CPU-GPU link, preventing "weight stealing" even from infrastructure providers. + +### Supply Chain Poisoning + +Research revealed hundreds of Hugging Face models containing malicious code or susceptible to tampering. Attackers upload models that execute arbitrary code when loaded or are "poisoned" to misbehave on triggers. + +**Confidential Computing Solution**: Combine TEEs with supply chain tools like Sigstore to enforce "Verify then Trust" policies—models load into secure enclaves only with valid cryptographic signatures from trusted builders. + +## The Regulatory Imperative + +Global regulations are accelerating Confidential Computing adoption, moving from general data protection to specific AI safety mandates. + +### EU AI Act + +The world's first comprehensive AI law includes provisions that directly align with Confidential Computing capabilities: + +- **Article 78 (Confidentiality)**: Mandates protection of intellectual property and trade secrets +- **Article 15 (Cybersecurity)**: Requires high-risk AI systems to resist unauthorized alteration—implying execution environments that guarantee code and data integrity (TEEs) +- **Article 10 (Data Governance)**: Mandates data integrity and confidentiality during processing + +### US Legislation + +By 2025, all 50 states introduced AI-related legislation. 
States like Colorado and California regulate algorithmic discrimination and require risk management policies, driving demand for auditable, secure compute environments. + +Federal Executive Orders emphasize securing the AI supply chain and preventing model theft by adversaries—goals directly supported by hardware-enforced isolation. + +## How Confidential Computing Works + +Confidential Computing resolves the Privacy Paradox by changing the fundamental assumption of trust in the compute stack. + +### Trusted Execution Environments (TEEs) + +TEEs are hardware-isolated environments where code and data are protected from the rest of the system: + +- **Memory Encryption**: Data written to RAM is encrypted with keys generated inside the CPU package that never leave it +- **Access Control**: CPUs prevent any software outside the TEE (OS, hypervisor, other VMs) from reading or writing TEE memory +- **Attestation**: Cryptographic proof that specific code is running in a genuine, untampered TEE + +Even with full root privileges, attackers see only encrypted ciphertext when attempting to access TEE memory. + +### Hardware Platforms + +**AMD SEV-SNP**: Each VM gets a unique memory encryption key. Ideal for confidential VMs hosting AI control planes, vector databases, or CPU-based inference. + +**Intel TDX**: Introduces "Trust Domains" with efficient memory encryption. Optimized for high-performance compute and rigorous attestation. Strong for sensitive model training pipelines. + +**NVIDIA H100 Confidential GPU**: Revolutionary for AI. Encrypts GPU memory (up to 80GB HBM3) and the CPU-GPU link. Enables confidential training and inference with <5% overhead for compute-bound workloads. + +### Remote Attestation + +Remote attestation proves a workload is running in a genuine TEE: + +1. **Measurement**: Hardware computes a cryptographic hash of code loaded into the TEE +2. **Evidence Generation**: Hardware signs this hash with a private key embedded in silicon +3. 
**Verification**: An Attestation Service checks the signature against manufacturer public keys +4. **Key Release**: If verification succeeds, decryption keys are released to the workload + +This ensures keys are never released unless the environment is proven secure and untampered. If malware is injected, the hash changes, verification fails, and keys remain locked. + +## Real-World ROI: Case Studies + +### BeeKeeperAI: Healthcare Research Platform + +**Challenge**: Developing clinical AI requires diverse patient data, but privacy regulations create massive friction. Traditional de-identification is costly, slow, and reduces data fidelity. + +**Solution**: BeeKeeperAI's "EscrowAI" platform uses Azure Confidential Computing (Intel SGX). Hospitals encrypt data and upload to secure enclaves. Algorithm developers upload models to the same enclave. Models execute against data inside the TEE—developers receive performance reports but never see raw data. + +**Impact**: +- **Time-to-Insight**: Reduced from 12-24 months to days +- **Cost Savings**: Eliminated expensive synthetic data purchases and legal fees +- **Innovation**: Enabled validation on rare disease datasets previously inaccessible due to privacy fragmentation + +### Consilient: Federated AML for Financial Services + +**Challenge**: Anti-Money Laundering efforts are hampered by information silos. Banks only see transactions within their walls. Criminals exploit this by moving funds across institutions. Traditional systems generate >90% false positives. + +**Solution**: Consilient's "DOZER" uses Federated Learning in Intel SGX enclaves. Models move to banks' secure environments, learn from local data without data leaving custody, then aggregate to form smarter global detectors. 
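+The "model travels, data stays" step can be sketched with plain federated averaging on a toy linear model. This is illustrative only — not DOZER's actual algorithm — with each `local_update` standing in for training inside one bank's enclave, and `federated_round` for the aggregator that only ever sees weights, never transactions.
+
+```python
+import random
+
+def local_update(weights, data, lr=0.1, epochs=20):
+    """One bank: gradient steps on local data inside its enclave.
+    Raw transactions never leave; only the updated weights do."""
+    w, b = weights
+    for _ in range(epochs):
+        for x, y in data:
+            err = (w * x + b) - y
+            w -= lr * err * x
+            b -= lr * err
+    return (w, b)
+
+def federated_round(global_weights, banks):
+    """Aggregator: average the weight updates (FedAvg), never the data."""
+    updates = [local_update(global_weights, data) for data in banks]
+    w = sum(u[0] for u in updates) / len(updates)
+    b = sum(u[1] for u in updates) / len(updates)
+    return (w, b)
+
+random.seed(0)
+# Three banks, each holding private samples of the same y = 2x + 1 pattern.
+banks = [
+    [(x, 2 * x + 1) for x in (random.uniform(0, 1) for _ in range(20))]
+    for _ in range(3)
+]
+
+weights = (0.0, 0.0)
+for _ in range(50):
+    weights = federated_round(weights, banks)
+
+w, b = weights
+assert abs(w - 2.0) < 0.1 and abs(b - 1.0) < 0.1  # global model learned jointly
+```
+
+No single bank's sample list ever crosses an institutional boundary, yet the aggregated model fits the pattern shared across all three — the same property that lets the federated detector see cross-institution laundering flows.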
+ +**Impact**: +- **False Positive Reduction**: Achieved 12% false positive rate (87% reduction from 90% industry standard) +- **Effectiveness**: AUC-ROC score of 0.90 (17% improvement over single-institution models) +- **Efficiency**: 75% efficiency improvement for one Tier-1 US bank +- **Discovery**: Identified previously undetected money laundering network + +## Implementation Strategies + +### Confidential VMs (CVMs) + +**Approach**: "Lift and Shift"—entire VMs run inside TEEs (AMD SEV-SNP) + +**Pros**: Easiest deployment, no code changes, works with legacy applications + +**Cons**: Large Trusted Computing Base (must trust entire guest OS) + +**Best For**: Migrating existing monolithic AI applications, databases, legacy systems + +### Confidential Containers (CoCo) + +**Approach**: Cloud-native—each Kubernetes Pod runs in its own lightweight microVM TEE + +**Pros**: Small TCB, fine-grained isolation, better security posture, native Kubernetes integration + +**Cons**: Requires mature Kubernetes setup, slightly more complex debugging + +**Best For**: Modern AI inference services, multi-tenant SaaS platforms, sensitive microservices + +## The Path Forward + +The global confidential computing market is projected to grow from **$9.04 billion in 2024 to over $1,281 billion by 2034**—a 64% CAGR. This isn't just security spending; it's a structural transformation in enterprise computing architecture. + +By 2026, over 70% of enterprise AI workloads will involve sensitive data, making confidential architectures a necessity rather than a luxury. + +### The HTTPS Moment + +In internet history, there was a moment when HTTPS transitioned from a requirement for banking sites to the default standard for the entire web. **We're at that same inflection point for AI.** + +Confidential Computing is the "HTTPS for AI"—the protocol that builds the trust necessary for the next generation of intelligent systems to flourish. + +## Key Takeaways + +1. 
**Traditional security fails AI**: The "two-state" model (at rest, in transit) leaves data-in-use vulnerable—the exact state AI requires +2. **Hardware-based isolation is essential**: TEEs provide mathematical guarantees that software-based security cannot +3. **Regulatory pressure is accelerating**: The EU AI Act and US legislation increasingly demand "privacy by design" +4. **ROI is proven**: Organizations like BeeKeeperAI and Consilient demonstrate dramatic time-to-value and cost reductions +5. **The question has changed**: From "Can we afford to implement this?" to "Can we survive ignoring it?" + +--- + +*Ready to secure your AI workloads? Learn more about [Cube AI's confidential computing architecture](https://docs.cube.ultraviolet.rs/architecture) or explore our [developer guides](https://docs.cube.ultraviolet.rs/developer-guide) to get started.* \ No newline at end of file diff --git a/img/privacy-paradox/privacy-paradox.jpg b/img/privacy-paradox/privacy-paradox.jpg new file mode 100644 index 0000000..94b26e7 Binary files /dev/null and b/img/privacy-paradox/privacy-paradox.jpg differ diff --git a/sitemap.xml b/sitemap.xml index 9beae56..93b92ff 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -104,13 +104,25 @@ https://www.ultraviolet.rs/blog/confidential-computing-meets-ai/ - 2026-02-06 + 2026-02-10 monthly 0.7 https://www.ultraviolet.rs/blog/cocos-prism-release/ - 2026-02-06 + 2026-02-10 + monthly + 0.7 + + + https://www.ultraviolet.rs/blog/hipaa-compliant-ai-applications/ + 2026-02-10 + monthly + 0.7 + + + https://www.ultraviolet.rs/blog/privacy-paradox-confidential-computing/ + 2026-02-10 monthly 0.7