SUPA: Societal & User-Centered Privacy in AI

SUPA 2024 Workshop
Date: August 11th 2024

Philadelphia, PA, USA (In-Person)

→ Submission Site


Privacy is critical to the responsible development of AI. However, the rapid evolution of AI, including the widespread deployment of generative AI, has resulted in new privacy risks with far-reaching societal consequences. In this half-day workshop, we aim to bring together academics, practitioners, designers, and advocates across civil society and regulation to evaluate the challenges and opportunities for user-centered privacy in AI.

The SOUPS 2024 Workshop on Societal & User-Centered Privacy in AI (SUPA) aims to develop a community of experts who share ideas and collaborate on research addressing critical issues at the intersection of user-centered privacy and AI. We anticipate that it will provide a dedicated space to exchange (and, in the future, create and develop) knowledge around the methods, tools, and policy considerations for user-centered privacy in interaction with AI, as well as for reporting empirical research on societal impacts.

Call for Participation

We welcome participants who work on topics related to societal and user-centered privacy in AI. Interested participants will be asked to contribute a short paper to the workshop. Topics of interest include, but are not limited to:

  • Challenges in implementing privacy measures, including usability and user-trust issues in interactions with AI
  • Empirical studies that gather attitudinal or behavioral insights related to privacy expectations for AI, or to privacy solutions and techniques
  • Effects of emerging AI technologies (e.g., generative AI) on the privacy of at-risk, vulnerable or marginalized populations (e.g., persons with disabilities, underrepresented populations, children, or gender communities)
  • Case studies showcasing successful user experience (UX) strategies in privacy-preserving AI applications
  • Open-source tools to demonstrate the integration of privacy measures into AI models
  • Best practices for UX design in choosing appropriate privacy parameters and associated trade-offs
  • Ethical considerations for AI and user privacy
  • Challenges and opportunities in improving and enforcing regulation that protects privacy in AI

Submission Types: We are considering two types of submissions: (1) position papers, and (2) research papers. Submissions may include full or in-progress empirical studies, literature reviews, system demos, method descriptions, position papers, or encores of published work. Accepted submissions will be made available on the workshop repository and website.

Submission Format: 2–4 pages, excluding references.

Review Process: Double-blind. Submissions will be evaluated based on their relevance and potential to generate critical discussion.

Templates: [Word] [LaTeX]

→ Submission Site

Important Dates

Submission deadline: May 23rd 2024, extended to May 30th 2024 (AoE)

Acceptance Notifications: June 6th 2024

Workshop date: August 11th 2024

Workshop location: Philadelphia, PA, USA



The workshop will feature a fireside chat with a special guest, paper presentations, and community discussions, providing an opportunity for smaller-group interactive discussion of related topics of interest, including methods, challenges, and future directions in Societal & User-Centered Privacy in AI. The workshop will conclude with a (non-sponsored) lunch hour to encourage networking. Our current agenda is as follows:

9:00 am – 9:10 am: Welcome and Introduction

9:10 am – 10:00 am: Paper Session I: User-Centered Challenges, Usability Issues, and Methods

The Double-Edged Sword of Synthetic Data in Emotion-AI: Balancing Privacy and Ethical Challenges

Author/s: Adam Kingsmith

Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives

Author/s: Stephen Meisenbacher, Alexandra Klymenko, Patrick Gage Kelley, Sai Teja Peddinti, Kurt Thomas, Florian Matthes

The Impact of Solid-Enabled Data Sharing on Individual Data Disclosure Decisions

Author/s: Mathias Maes, Lieven De Marez, Ralf De Wolf

10:00 am – 10:10 am: Lightning Presentations

10:10 am – 10:40 am: Coffee Break

10:40 am – 11:30 am: Paper Session II: Tools, Measurement, and Solutions

Measuring and mitigating harms at the intersections of generative AI and image-based sexual abuse

Author/s: Natalie Grace Brigham

Privacy Notice Design in GenAI Ecosystem

Author/s: Xiaozheng Wang

Reducing Racial and Ethnic Bias in AI Models: A Comparative Analysis of ChatGPT and Google Bard

Author/s: Tavishi Choudhary

Addressing User Awareness and Consent in AI with User-Focused Privacy Threat Modeling: A Case Study Validation of the UsersFirst Framework

Author/s: Miguel Rivera-Lanas, Tian Wang, Xinran Alexandra Li, Yash Maurya, Lorrie Faith Cranor, Hana Habib, Norman Sadeh

11:30 am – 12:00 pm: Fireside Chat and Community Discussion with special guest Dr. Sauvik Das

12:00 pm – 12:10 pm: Reflection and Wrap-Up

12:10 pm: Collective Lunch

Workshop Organizers