Integrity 2026: Workshop on AI-Enabled Integrity in Social Networks and Media
Co-located with KDD 2026 · Jeju International Convention Center, Jeju, South Korea · August 10, 2026

| Conference website | https://sites.google.com/view/integrity-workshop-2026 |
| Submission link | https://easychair.org/conferences/?conf=integrity2026 |
| Abstract registration deadline | April 30, 2026 |
| Submission deadline | April 30, 2026 |
6th Workshop on Integrity in Social Networks and Media · August 10, 2026
Co-located with KDD 2026 · August 9–13, 2026 · Jeju, South Korea
Workshop Description
Social networks and social media have become the default communication channels for billions of people worldwide. While these platforms enable connection and discovery at unprecedented scale, they also expose fundamental integrity challenges — from misinformation and coordinated manipulation to child safety risks and harms from AI-generated synthetic media. The rapid evolution of AI, especially generative models, has transformed this landscape: advances in AI simultaneously intensify online risks and unlock powerful new capabilities in automated moderation, behavioral anomaly detection, and human-AI safety operations.
This half-day workshop convenes researchers, practitioners, and policymakers to explore these dual-use dynamics. The event features invited talks from academic experts and industry leaders, peer-reviewed papers through an open call-for-papers, and a panel discussion. The Integrity 2026 workshop builds on five successful prior editions held at WSDM (2020–2024), consistently attracting 20–30 submissions and 50+ attendees.
List of Topics
We welcome submissions on topics including, but not limited to, the following:
- Adversarial Dynamics in the GenAI Era — Evolving evasion strategies, automated red-teaming, and real-time detection of AI-generated misinformation and behavioral anomalies.
- AI-Accelerated Coordinated Operations & Agent Dynamics — Emerging integrity risks driven by synthetic personas, agent-to-agent manipulation, and coordinated influence operations.
- Parasocial Harms from Synthetic Entities — Risks from AI-driven personas and virtual influencers engineered to create emotional dependency and manipulate users.
- Open-Source Trust & Safety Toolkits — Advancing collaborative, open-source ecosystems for content moderation, threat detection, red-teaming, and safety evaluation.
- Foundational Models for Integrity — Generative AI for content moderation, open-source integrity oracles, and reliable non-synthetic ground truth datasets.
- AI-Enabled Evaluation Frameworks — Developing AI-enabled benchmarks and continuous evaluation pipelines that measure robustness and real-world harm reduction.
- Data Pollution, Model Collapse & Knowledge Discovery — Long-term ecological risks from synthetic training data loops and the challenge of retrieving authentic human insight.
- Human-AI Collaboration in Safety Operations — Improving reviewer well-being, quality, and efficiency through hybrid human-AI pipelines and LLM-assisted labeling.
- Regulatory Alignment & Global Compliance — Balancing user rights, cultural nuance, and regional regulations while preventing over-enforcement.
- Multimodal Safety at Scale — Challenges in moderating large-scale video, audio, and synthetic media, including efficient architectures and cross-modal reasoning.
- Safety for Autonomous Agents — Ensuring safe behavior in agentic systems capable of planning, tool use, and long-horizon actions.
Submission Guidelines
We invite two types of submissions: technical papers and talk proposals.
- Technical papers must be 8 pages for full papers and 4 pages for short papers. We invite papers of the following types: analysis papers (generating new insights, rather than applying a specific method), methodology papers (testing the effectiveness of a proposed method), reproduction papers (reproducing results documented in prior work), resource papers (presenting a new resource, such as a dataset or tool), and use case papers (presenting new insights about a specific use case, such as an event or a community).
- Talk proposals should be 2 pages long, describing the content of a roughly 20-minute talk (the actual length will be determined by program constraints). We invite proposals from scholars, activists, developers, lawyers, ethics experts, fact-checkers, public servants, journalists, and researchers across disciplines.
All submissions must be original and not simultaneously submitted to another journal or conference. They must be written in English and formatted using the standard two-column ACM sigconf proceedings format. Reviewing is single-blind.
Accepted papers will be presented either as contributed talks or as posters. All accepted submissions will be included in the workshop proceedings (CEUR-WS).
Please submit papers and talk proposals through EasyChair: https://easychair.org/conferences/?conf=integrity2026
Important Dates
- Submission deadline: April 30, 2026
- Notifications: June 4, 2026
- Workshop date: August 10, 2026
More information: http://integrity-workshop.org
Organizing Committee
- Panagiotis Papadimitriou, Meta
- Mehmet Emre Sargin, Google
- Sach Sokol, Meta
- Madhu Ramanathan, Meta
- Mohamed Abdelhady, Microsoft
- Panayiotis Tsaparas, University of Ioannina, Greece
- Prathyusha Senthil Kumar, Meta
- Vasilis Verroios, Meta
- Daniel Olmedilla, LinkedIn
- Kiran Garimella, Rutgers University, USA
- Timos Sellis, Archimedes / Athena Research Center, Greece
- Anish Das Sharma, Reinforce Labs
Venue
The workshop will be held at the KDD 2026 conference in Jeju, South Korea from August 9–13, 2026.
Contact
All questions about submissions should be emailed to:
- Sach Sokol — sachsokol@meta.com
- Madhu Ramanathan — madram@meta.com
