The 38th Annual AAAI Conference on Artificial Intelligence

February 20-27, 2024 | Vancouver, Canada

Main Conference Timetable for Authors

Note: All deadlines are “Anywhere on Earth” (UTC-12).

November 2-7, 2023
Author feedback window

December 9, 2023
Notification of final acceptance or rejection

December 19, 2023
Submission of paper preprints for inclusion in electronic conference materials

February 20 – February 27, 2024
AAAI-24 conference

Previous Deadlines

July 4, 2023
AAAI-24 web site open for author registration

July 11, 2023
AAAI-24 web site open for paper submission

August 8, 2023
Abstracts due at 11:59 PM UTC-12

August 15, 2023
Full papers due at 11:59 PM UTC-12

August 18, 2023
Supplementary material and code due by 11:59 PM UTC-12

September 25, 2023
Registration, abstracts and full papers for NeurIPS fast track submissions due by 11:59 PM UTC-12

September 27, 2023
Notification of Phase 1 rejections

September 28, 2023
Supplementary material and code for NeurIPS fast track submissions due by 11:59 PM UTC-12

Call for the Special Track on Safe, Robust and Responsible AI

Submission Instructions
AAAI-24 Author Kit

AAAI-24 will feature a special track on Safe, Robust and Responsible Artificial Intelligence (SRRAI). This special track focuses on the theory and practice of safety and robustness in AI-based systems and on adherence to responsible AI principles. AI systems are increasingly deployed throughout society in domains such as data science, robotics and autonomous systems, medicine, economics, and safety-critical systems. With the recent explosion of interest in generative AI, the accessibility and applicability of foundation models have grown dramatically.

Although the use of AI systems is widespread and growing, these systems have fundamental limitations and practical shortcomings that can result in catastrophic failures. In particular, many of the AI algorithms deployed today cannot guarantee safe and successful operation and lack robustness in the face of uncertainty. Generative AI systems raise an additional suite of difficulties, such as hallucination, information leakage, and toxicity.

To be reliable, AI systems must be robust to disturbances, failures, and novel circumstances. Furthermore, they must offer assurance that they will reasonably avoid unsafe and irrecoverable situations. To push the boundaries of AI systems’ reliability, this special track at AAAI-24 will focus on cutting-edge research on both the theory and practice of developing safe, robust, and responsible AI systems. Specifically, the goal of this special track is to promote research that studies (1) the safety and robustness of AI systems, (2) AI algorithms that can analyze and guarantee their own safety and robustness, (3) AI algorithms that can analyze the safety and robustness of other systems, and (4) mechanisms for building responsible and trustworthy AI systems. For acceptance into this track, papers are expected to make fundamental contributions to safe, robust, and responsible AI, and to be applicable to the complexity and uncertainty inherent in real-world applications.
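As one concrete illustration of what avoiding unsafe situations can mean algorithmically, safety requirements are often encoded as chance constraints: among candidate actions, choose the highest-utility one whose estimated failure probability stays below a bound. A minimal, purely illustrative sketch (the function, action names, and numbers are hypothetical and not part of this call):

```python
# Illustrative sketch of chance-constrained action selection (all names and
# numbers hypothetical): pick the highest-utility action whose estimated
# failure probability does not exceed the safety bound delta.

def safest_best_action(actions, delta=0.05):
    """actions: list of (name, expected_utility, failure_probability).
    Returns the best feasible action tuple, or None if no action is safe."""
    feasible = [a for a in actions if a[2] <= delta]
    if not feasible:
        return None  # no action satisfies the safety constraint
    return max(feasible, key=lambda a: a[1])

actions = [
    ("aggressive", 10.0, 0.20),  # highest utility, but violates the 5% bound
    ("balanced",    6.0, 0.04),  # best among the safe actions
    ("cautious",    2.0, 0.01),
]
print(safest_best_action(actions)[0])  # prints "balanced"
```

Real systems replace the fixed failure probabilities with estimates carrying their own uncertainty, which is precisely where much of the research solicited here lives.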

In short, the special track covers topics related to the safety and robustness of AI-based systems, and to the use of AI-based technologies to enhance the safety and robustness of both themselves and other critical systems, including but not limited to:

  • Safe and Robust AI Systems
  • Safe Learning and Control
  • Quantification of Uncertainty and Risk
  • Safe Decision Making Under Uncertainty and Limited Information
  • Robustness Against Perturbations and Distribution Shifts
  • Detection and Explanation of Anomalies and Model Misspecification
  • Formal Methods for AI Systems
  • On-line Verification of AI Systems
  • Safe Human-Machine Interaction
  • Transparency, Interpretability, and Explainability of AI Systems
  • Fairness and Equity in Decision Making
  • Issues Specific to Generative AI (e.g., hallucination, toxicity, information leakage, prompt injection) 
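To make one of the topics above concrete, robustness against perturbations is often measured empirically as the fraction of small, bounded input perturbations that leave a model’s prediction unchanged. A minimal illustrative sketch (the toy classifier and all parameters are hypothetical, not drawn from this call):

```python
import random

# Illustrative sketch: estimate how stable a model's prediction is under
# epsilon-bounded random input perturbations. The "model" is a toy
# threshold classifier chosen purely for demonstration.

def predict(x):
    # Hypothetical toy classifier: label 1 if the mean feature exceeds 0.5.
    return 1 if sum(x) / len(x) > 0.5 else 0

def perturbation_robustness(x, epsilon=0.05, trials=200, seed=0):
    """Fraction of epsilon-bounded random perturbations of x for which
    predict's label matches its label on the unperturbed input."""
    rng = random.Random(seed)
    base = predict(x)
    stable = 0
    for _ in range(trials):
        noisy = [xi + rng.uniform(-epsilon, epsilon) for xi in x]
        if predict(noisy) == base:
            stable += 1
    return stable / trials

# A point far from the decision boundary never flips...
print(perturbation_robustness([0.9, 0.8, 0.95]))  # prints 1.0
# ...while a point near the boundary flips on some perturbations.
print(perturbation_robustness([0.49, 0.52, 0.51]))
```

Research in this track typically goes beyond such sampling-based estimates, e.g., toward certified bounds on prediction stability or robustness under structured distribution shifts.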

Submissions to this special track will follow the regular AAAI technical paper submission procedure, but authors must select the Safe, Robust and Responsible AI special track.

Special track Co-Chairs:

  • Chuchu Fan (Massachusetts Institute of Technology)
  • Tatsunori Hashimoto (Stanford University)
  • Ashkan Jasour (NASA/Caltech JPL)
  • Balaraman Ravindran (Indian Institute of Technology Madras)
  • Reid Simmons (Carnegie Mellon University)

Safe, Robust and Responsible AI Keywords

  • SRAI: Safe AI Systems
  • SRAI: Robust AI Systems
  • SRAI: Safe Learning
  • SRAI: Safe Control
  • SRAI: Uncertainty Quantification
  • SRAI: Risk Quantification
  • SRAI: Safe Decision Making Under Uncertainty
  • SRAI: Robustness Against Perturbations 
  • SRAI: Robustness Against Distribution Shifts
  • SRAI: Anomaly Detection and Explanation
  • SRAI: Model Misspecification Detection and Explanation
  • SRAI: Formal Methods for AI Systems
  • SRAI: On-line Verification of AI Systems
  • SRAI: Safe Human-Machine Interaction
  • SRAI: Explainability and Interpretability
  • SRAI: Factuality and Grounding of Generative AI systems
  • SRAI: Security Risks of Generative AI
  • SRAI: Privacy Preserving Generative AI

This site is protected by copyright and trademark laws under US and International law. All rights reserved. Copyright © 2025 Association for the Advancement of Artificial Intelligence.