Episode 81

Published on:

4th Jul 2023

How AI Will Redefine The Role Of Privacy Professionals

Privacy vs. AI: The Ultimate Showdown!

In this episode, you'll unveil:

  • How you can enjoy the benefits of AI without losing your privacy
  • Why balancing AI and privacy is not just a technical challenge but a legal and ethical one
  • Powerful frameworks to implement privacy by design with AI

If you're ready to transform your career and become the go-to GDPR expert, download the first chapter of 'The Easy Peasy Guide To The GDPR' here: https://www.bestgdprbook.com/

Follow Jamal on LinkedIn: https://www.linkedin.com/in/kmjahmed/

Get Exclusive Insights, Secret Expert Tips & Actionable Resources For A Thriving Privacy Career That We Only Share With Email Subscribers


Subscribe to the Privacy Pros Academy YouTube Channel

► https://www.youtube.com/c/PrivacyPros

Join the Privacy Pros Academy Private Facebook Group for:

  • Free LIVE Training
  • Free Easy Peasy Data Privacy Guides
  • Data Protection Updates and so much more

Apply to join here whilst it's still free: https://www.facebook.com/groups/privacypro


Are you ready to know what you don't know about Privacy Pros? Then you're in the right place. Welcome to the Privacy Pros Academy podcast by Kazient Privacy Experts, the podcast to launch, progress and excel in your career as a privacy pro.


Hear about the latest news and developments in the world of privacy. Discover fascinating insights from leading global privacy professionals and hear real stories and top tips from the people who've been where you want to get to.


We're an official IAPP training partner. We've trained people in over 137 countries and counting. So whether you're thinking about starting a career in data privacy or you're an experienced professional, this is the podcast for you.


Hi, and welcome to another episode of the Privacy Pros podcast. I'm your host, Jamal, author of The Easy Peasy Guide to the GDPR, the book that's going to help you become the go-to GDPR expert and have a thriving career. And I'm thrilled to have each and every one of you joining me today. Today I have a special treat for you. Last month, I had the incredible privilege of speaking at the International Data Protection Conference hosted by TGS Baltic and the Lithuanian Data Protection Association. In my presentation, I delved into the challenges and opportunities around balancing AI and privacy. I want to express my heartfelt gratitude to TGS Baltic and the Lithuanian Data Protection Association for organising such a remarkable event, and also for giving us permission to share this exclusive content with you, my valued listener. Without further ado, let's dive into my talk. But first, a quick reminder to subscribe to our podcast, if you haven't already, and leave us a review on your favourite platform. Your feedback is vital in helping us continue to deliver valuable content to you, absolutely free of charge.


Today, I'm going to be speaking about balancing AI and privacy: a juggler's guide. Now, let's start with something a little bit unexpected. I want you all to imagine a world where your morning coffee is ready the moment you wake up. Your car knows the quickest route to work even before you do. And your fridge? Well, your fridge orders your favourite dessert just because it's seven years since GDPR. Wonderful, isn't it? But then you start receiving ads for dessert everywhere you go online. And that's when you begin to wonder: how much does the digital world know about me? It's like your own personal detective story: exciting, intriguing, but just a bit concerning. That's the world of data we live in. I'm Jamal Ahmed, a data privacy professional, your friendly guide and detective in this world. And today, we're going to be exploring this thrilling drama of AI and privacy, a story of magic and a little bit of juggling. So let's dive into our story and meet our main characters. We have AI, or artificial intelligence, the wizard who grants our wishes. And then we have privacy, the armour that guards our secrets. A fascinating pair, aren't they? AI, our digital genie, is truly mesmerizing. It recommends our next binge-watch, predicts our pizza cravings, and even helps us avoid traffic. But the more accurately it guesses our wishes, the more we wonder: just how much does this genie actually know about me? It's like having a mind-reading roommate. Although that would be cool and handy, it's a little bit unsettling, right?

Jamal 3:35

On the other side, we have privacy, our very own super armour. We cherish it as it protects our personal information, our preferences, our secrets. But when we experience the convenience that AI brings, we often find ourselves willingly removing the armour. And there lies the paradox: our desire for personalized AI experiences conflicts with our inherent need for privacy. It's a little bit like me wanting to eat a chocolate cake all by myself while also wanting six-pack abs. As you can see, it's not quite working. Now, here's where the plot thickens. Just like a well-executed circus act, the challenge lies in balancing these two forces: the wish-granting AI and the privacy armour. It's a high-wire act, walking the line between personalized convenience and personal privacy. But here's the good news: we're not performers in this act without tools or tricks. There's a bit of magic in our hands, and it comes in the form of privacy enhancing technologies, or PETs if you fancy an acronym. These technologies, like differential privacy and homomorphic encryption, let us enjoy the benefits of AI without revealing the entire data diary to the world. They are the juggler's balls, helping us keep the act going. It also comes in the form of Daniel Solove's Taxonomy of Privacy, which can significantly contribute to balancing privacy and AI by providing a comprehensive framework to understand and address privacy issues within AI systems.
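To make the first of those PETs concrete, here is a minimal, illustrative sketch of the Laplace mechanism that underpins differential privacy. It is not taken from any particular library; the function names and the toy counting query are my own, and a real system would rely on a vetted implementation rather than hand-rolled noise.

```python
import random

def laplace_noise(scale: float) -> float:
    # Laplace(0, b) noise can be sampled as the difference of two
    # independent exponential draws, each with mean b.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # person's record changes the true count by at most 1, so adding
    # Laplace(1/epsilon) noise yields epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy example: releasing "how many users are 30 or older" privately.
users = [{"age": a} for a in (25, 34, 41, 29, 52)]
noisy = private_count(users, lambda u: u["age"] >= 30, epsilon=0.5)
```

The smaller epsilon is, the more noise gets added and the stronger the privacy guarantee, which is exactly the juggling act described here: utility in one hand, privacy in the other.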

Jamal 5:05

And here's how that works. First of all, let's look at the taxonomy as a guide for privacy by design. Solove's taxonomy can serve as a guide during the design and development phases of AI systems. By understanding potential privacy invasions such as surveillance, aggregation, identification, and disclosure, developers can build privacy-preserving features directly into AI systems from the outset. Then we have improved transparency: the taxonomy can help AI companies improve transparency about their data practices. This involves clear communication about what data is collected, how it's processed, and who it's shared with. Next, informed consent: understanding the different ways personal information can be used and misused can enable users to make more informed decisions when providing that consent. Then regulatory compliance: regulators can use the taxonomy as a reference to formulate and enforce laws and regulations that explicitly address actions like unjustified surveillance or unauthorized aggregation of user data. Privacy audits and impact assessments: the taxonomy can serve as a checklist in privacy audits and privacy impact assessments, allowing businesses to examine each category in the taxonomy so they consider a broad range of potential privacy issues. And then we have privacy education and training: the taxonomy can be used in privacy training for AI developers and data scientists to better understand and mitigate the potential privacy issues in their work. Remember, balancing AI and privacy is not just a technical challenge; it's also a legal challenge and an ethical one.
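As an illustration of the audit-checklist idea, the sketch below encodes Solove's four groups of privacy harms as plain data and flags any harm an assessment has not yet addressed. The data structure and function names are hypothetical, just one way a team might wire the taxonomy into a privacy impact assessment.

```python
# Solove's taxonomy: four groups of activities that can harm privacy.
TAXONOMY = {
    "information collection": ["surveillance", "interrogation"],
    "information processing": ["aggregation", "identification", "insecurity",
                               "secondary use", "exclusion"],
    "information dissemination": ["breach of confidentiality", "disclosure",
                                  "exposure", "increased accessibility",
                                  "blackmail", "appropriation", "distortion"],
    "invasion": ["intrusion", "decisional interference"],
}

def open_items(assessment: dict) -> list:
    # Return every (group, harm) pair the assessment has not addressed,
    # so no category is silently skipped during a privacy audit or PIA.
    return [(group, harm)
            for group, harms in TAXONOMY.items()
            for harm in harms
            if harm not in assessment.get(group, {})]

# Example: only surveillance has been considered so far, so every
# other harm in the taxonomy comes back as an open item.
todo = open_items({"information collection": {"surveillance": "CCTV notice posted"}})
```

Walking a concrete list like this is what turns the taxonomy from an academic framework into an operational checklist: each entry either gets a documented answer or stays visibly open.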

Jamal 6:47

An understanding of the taxonomy can help navigate these multifaceted aspects effectively. But that's not all. We also have Dr. Ann Cavoukian's Privacy by Design principles. They provide a robust framework, and here's how those principles can help balance AI and privacy. Proactive not reactive, preventative not remedial: AI systems should be designed from the outset to respect privacy and protect user data. This proactive approach can prevent privacy breaches before they occur, which is much more effective and less harmful than trying to address the issues after they've happened. Privacy as the default setting: user privacy should be the default in any AI system. This means systems should be designed to protect user data without requiring the user to take any protective actions themselves. Then we have privacy embedded into the design: privacy should be an essential component of the core functionality of AI systems, integrated into system design and not added as an afterthought. This encourages a privacy culture and ensures privacy safeguards exist throughout the system's entire lifecycle. Full functionality, positive-sum not zero-sum: privacy and other objectives must be accommodated to keep all interests intact. It's not about trading off privacy for AI benefits, but rather achieving both, a positive-sum paradigm. This encourages the view that privacy and AI utility are partners, not opponents. End-to-end security, full lifecycle protection: from the moment data is collected to the end of its use, strong security measures should be in place. This makes sure data is securely retained and properly deleted at the end of its lifecycle, which is essential for AI systems that often require large volumes of data.


Visibility and transparency, keep it open: AI systems should be transparent, and organizations must be accountable. Users should have access to information about how their data is being used, and the AI system's decision-making processes should be explainable. Respect for user privacy, keep it user-centric: above all, the user's interest in privacy should be paramount. This involves giving users granular privacy settings, clear consent mechanisms, and easy access to their own data. By following these principles, developers and businesses can build AI systems that respect user privacy while still delivering the benefits of AI, ultimately creating a balanced coexistence of AI and privacy. Earlier this year, NIST released its Artificial Intelligence Risk Management Framework, the AI RMF. It's a voluntary, non-sector-specific, use-case-agnostic guide for technology companies that are designing, developing, and deploying AI systems, to help them manage the many risks of AI. Beyond risk management, the RMF seeks to promote trustworthy and responsible development and use of AI systems, and it sets out seven key characteristics that contribute to trustworthiness.


Number one, safe: providing real-time monitoring, backstops or other interventions in the AI system to prevent physical or psychological harm, or endangerment of human life, health and property. Number two, secure and resilient: employing protocols to avoid, protect against, or respond to attacks against the AI system itself, and to withstand adverse events. Number three, explainable and interpretable: understanding and properly contextualizing the mechanisms of an AI system as well as its outputs. Number four, privacy-enhanced: safeguarding human autonomy by protecting anonymity, confidentiality and control. Number five, fair, with harmful bias managed: promoting equity and equality and managing systemic, computational, statistical and human cognitive biases. Number six, accountable and transparent: making information about the AI system available to individuals interacting with it at various stages of the AI lifecycle, and maintaining business practices and governance to reduce potential harms. And number seven, valid and reliable: demonstrating through ongoing testing or monitoring that the AI system performs as it was intended to perform. And above all, we should not forget that the power of transparency is our guiding light in this thrilling act. It's all about knowing what data is being collected, how it's being used, and who's getting access. It's the unwritten rulebook helping us navigate this exciting performance. So we have our tools and our guide. But what happens when we slip?


Is there a safety net? Great performances often have some behind-the-scenes support, don't they? In our data drama, that role is played by our unsung heroes: regulations. These legal frameworks are our safety nets. Let's take the GDPR, for example. It's been around for seven years, and it sets some pretty high standards. It's like the stage director who insists on a perfect performance, ensuring the rights and privacy of the data subject. In the UK, the Information Commissioner's Office has created tailored guidance on AI and data protection. It provides the regulator's interpretation of how data protection rules apply to AI systems that process personal data. Recently added new content includes how to ensure transparency in AI, fairness in AI, and how data protection applies across the AI lifecycle. We have the EU AI Act, and its aim is to ensure that artificial intelligence systems placed on the European Union market and used in the Union are safe and respect existing laws on fundamental rights and Union values. And in the US, the Biden administration is also exploring an AI Bill of Rights, while Sam Altman, the CEO of OpenAI, the company behind ChatGPT, is urging the introduction of US AI regulation. So these regulations, along with others across the world, will help make sure our performance doesn't turn into a freefall. They keep us on the tightrope, balancing our use of AI with respect for privacy. But regulations can only do so much. The real power, ladies and gentlemen, lies with us: the users, the professionals, the performers in this grand act. So what's our grand finale in this thrilling drama of AI and privacy? It's the vision of a future where AI and privacy coexist harmoniously. A future where we don't have to choose between the convenience of AI and the security of our privacy. But this doesn't just happen magically. It requires our active participation.
We need to understand how data is used, question what we're not comfortable with, and take control of our privacy settings. We're not just passive audience members in this performance, my friends. We are the actors, the directors and the critics. And to our tech wizard colleagues out there creating these wonderful AI systems: let's make sure they're designed with privacy in mind from the ground up. Let's use our privacy-enhancing technologies, privacy by design, and the taxonomy of privacy. Let's be transparent and let's respect the regulations. Together, we can turn this high-wire act into a dance. A dance where AI and privacy move in harmony, guided by our collective efforts. It's an ambitious vision, yes, but then, isn't ambition the first step towards innovation? Now, before I exit the stage, I'll leave you with a thought. Next time you're enjoying your perfectly brewed morning coffee or taking the quickest route to work, remember: you're not just part of an AI performance. You're a star in this thrilling act of balancing AI and privacy.


If you enjoyed this episode, be sure to subscribe, like and share so you're notified when a new episode is released. Remember to join the Privacy Pros Academy Facebook group, where we answer your questions. Thank you so much for listening. I hope you're leaving with some great things that will add value on your journey as a world class privacy pro.


Please leave us a four or five star review. And if you'd like to appear on a future episode of our podcast, or have a suggestion for a topic you'd like to hear more about, please send an email to team@kazient.co.uk. Until next time, peace be with you.


About the Podcast

Privacy Pros Podcast
Discover the Secrets from the World's Leading Privacy Professionals for a Successful Career in Data Protection
Data privacy is a hot sector in the world of business. But it can be hard to break in and have a career that thrives.

That’s where our podcast comes in! We interview leading Privacy Pros and share the secrets to success each fortnight.

We'll help guide you through the complex world of Data Privacy so that you can focus on achieving your career goals instead of worrying about compliance issues.
It's never been easier or more helpful than this! You don't have to go at it alone anymore!

It’s easy to waste a lot of time and energy learning about Data Privacy on your own, especially if you find it complex and confusing.

Founder and Co-host Jamal Ahmed, dubbed “The King of GDPR” by the BBC, interviews leading Privacy Pros and discusses topics businesses are struggling with each week and pulls back the curtain on the world of Data Privacy.

Deep dive with the world's brightest and most thought-provoking data privacy thought leaders to inspire and empower you to unleash your best to thrive as a Data Privacy Professional.

If you're ambitious, driven and highly motivated, whether you're thinking about a career in Data Privacy, a rising Privacy Pro or an experienced Privacy Leader, this is the podcast for you.

Subscribe today so you never miss an episode or important update from your favourite Privacy Pro.

And if you ever want to learn more about how to secure a career in data privacy and then thrive, just tune into our show and we'll teach you everything there is to know!

Listen now and subscribe for free on iTunes, Spotify or Google Play Music!

Subscribe to the newsletter to get exclusive insights, secret expert tips & actionable resources for a thriving privacy career that we only share with email subscribers https://newsletter.privacypros.academy/sign-up

About your host

Jamal Ahmed FIP CIPP/E CIPM


Jamal Ahmed is CEO at Kazient Privacy Experts, whose mission is to safeguard the personal data of every woman, man and child on earth.

He is an established and comprehensively qualified Global Privacy professional, World-class Privacy trainer and published author. Jamal is a Certified Information Privacy Manager (CIPM), Certified Information Privacy Professional (CIPP/E) and Certified EU GDPR Practitioner.

He is revered as a Privacy thought leader and is the first British Muslim to be awarded the designation "Fellow of Information Privacy" by the International Association of Privacy Professionals (IAPP).