How to Protect Sensitive Data in Generative AI Systems

Aug 10, 5:00pm

By IronCore Labs

AI tooling has taken enormous leaps forward, but it has largely left privacy behind. Companies building AI systems on private data need to know how to keep the data safe while still being able to employ these new tools.
In this webinar, we discussed modern AI systems and how to secure them. Plus, we explained the role of vector embeddings and how to protect embeddings with encryption-in-use.
We focused on four main areas in this webinar:
  • How data flows through AI systems
  • Where the data presents risks
  • How the data is useful (for example, in preventing hallucinations)
  • How to protect vector embeddings
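To make the first and last bullets concrete, here is a minimal sketch of how private data typically flows through a retrieval-augmented AI system: documents are turned into vector embeddings, stored, and matched against an embedded query. The embedding function below is a toy character-trigram hash, not a real model (a production system would use an embedding model and a vector database), and it illustrates why the stored vectors are sensitive: they are derived directly from the plaintext.

```python
import hashlib
import math

def toy_embed(text: str, dim: int = 64) -> list[float]:
    # Toy embedding: hash character trigrams into a fixed-size vector,
    # then L2-normalize. A real system would use an embedding model;
    # this stands in only to show the data flow.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.sha256(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# "Index" private documents: each plaintext chunk becomes a vector.
# If these vectors are stored unprotected, they leak the source text.
docs = [
    "payroll runs on the 15th",
    "vacation requests go to HR",
    "the VPN gateway is vpn.example.com",
]
index = [(d, toy_embed(d)) for d in docs]

# Retrieval: embed the query and rank stored vectors by similarity.
# The best-matching chunk would be sent to the LLM as grounding context.
query = toy_embed("when does payroll run?")
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])
```

This is the risk surface the webinar addresses: the index holds embeddings of private data, and embeddings can be inverted back toward the original text, which is why encryption-in-use for the stored vectors matters.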
About the presenter:
Patrick Walsh has more than 20 years of experience building security products and enterprise SaaS solutions. Most recently he ran an Engineering division at Oracle, delivering features and business results to the world’s largest companies. Patrick now leads IronCore Labs, a data control and privacy platform that helps businesses gain control of their data and meet increasingly stringent data protection needs.
Resources:
  • Security of AI explainer
  • Encryption for vector databases
