Special Track: Enhancing Online Safety and Wellbeing through AI: The Power of NLP and LLMs

About:

For updated news and information about this special track, please visit the special track webpage.


As digital platforms become increasingly integral to our daily lives, the imperative for a secure and positive online environment has never been more pronounced. This special track explores the transformative potential of Artificial Intelligence (AI), particularly through Natural Language Processing (NLP) and Large Language Models (LLMs), in enhancing online safety and user wellbeing. We delve into how NLP and LLMs can be pivotal in detecting and mitigating online risks such as misinformation, toxicity, and cyber threats, thereby fostering a healthier digital ecosystem. This track aims to spotlight innovative AI-driven strategies that prioritize user safety and wellbeing, examining the intersection of technology with ethical considerations and user-centric design. By showcasing cutting-edge research and applications, we aim to advance the dialogue on how AI can be a force for good, ensuring the internet remains a safe, inclusive, and empowering space for all.


Organizers:

  • Wajdi Zaghouani, Hamad Bin Khalifa University, wzaghouani@hbku.edu.qa
  • Firoj Alam, QCRI, Hamad Bin Khalifa University, fialam@hbku.edu.qa
  • Reem Suwaileh, Hamad Bin Khalifa University, rs081123@student.qu.edu.qa
  • Venus Jin, Northwestern University Qatar, venus.jin@northwestern.edu
  • Houda Bouamor, Carnegie Mellon University, hbouamor@cmu.edu
  • Raian Ali, Hamad Bin Khalifa University, raali2@hbku.edu.qa
  • Anis Charfi, Carnegie Mellon University, acharfi@andrew.cmu.edu

Topics of Interest:

Areas of interest include (but are not limited to):

  • User Experience and Safety: Investigating how AI, particularly NLP and LLMs, can enhance user experience and safety on digital platforms, focusing on personalized safety features and user-centric design. 
  • Detection and Mitigation of Online Harassment: Utilizing NLP and LLMs to identify and address various forms of online harassment, providing mechanisms for real-time intervention and support for affected users. 
  • Mental Health Support: Exploring AI’s role in providing mental health support online, including the detection of distress signals in communication and the delivery of timely resources or interventions. 
  • Misinformation and Fact-Checking: Leveraging LLMs to combat misinformation and support fact-checking initiatives, enhancing the reliability of information circulated online. 
  • Digital Literacy and Education: Employing AI to develop educational programs that enhance digital literacy, focusing on safe internet practices, understanding AI’s role, and fostering critical thinking online. 
  • Ethical AI Use: Addressing ethical considerations in AI deployment for online safety, including transparency, accountability, and the mitigation of biases in AI models. 
  • Content Moderation: Enhancing content moderation with AI, using NLP to understand context, nuance, and cultural variations, ensuring a respectful and safe online environment. 
  • Privacy and Data Security: Examining how AI can bolster privacy and data security, ensuring user data is protected while enhancing online safety measures. 
  • AI in Cyberbullying Prevention: Developing AI-driven solutions for detecting and preventing cyberbullying, offering proactive support to victims, and fostering a culture of respect online.
  • Empowering Vulnerable Populations: Tailoring AI tools to protect and empower vulnerable groups online, ensuring equitable access to safety resources and support systems. 
  • AI and Emergency Response: Integrating AI with emergency response mechanisms to quickly identify and react to online threats, providing immediate support to users in distress. 
  • Community Building and Support: Utilizing AI to foster positive community interactions, encouraging supportive behaviors, and building resilience against online risks. 
  • Regulatory and Policy Considerations: Analyzing the impact of regulations and policies on the use of AI for online safety, promoting a balanced approach that respects user freedoms while ensuring a safe digital space.

Submission Guidelines

Papers should be submitted in PDF format. The results described must be unpublished and must not be under review elsewhere. Submissions must conform to Springer's LNCS format and must not exceed 15 pages, including all text, figures, references, and appendices. Submissions that do not conform to the LNCS format, exceed 15 pages, or are clearly out of the scope of the conference will be rejected without review. Information about the LNCS format is available on the Springer website. Three to five keywords characterizing the paper should be listed at the end of the abstract.

All submissions must be made through the EasyChair system.


Important Dates

  • Submission Deadline: 30 June 2024
  • Acceptance/Rejection Notification: 30 August 2024
  • Camera-Ready Submission Deadline: 7 September 2024

Publication

Please note that for every accepted paper, at least one author must register for the conference and present the paper. All accepted papers will be included in the proceedings, published in Springer's LNCS series.