Main Conference
ACL 2026
- Website: https://2026.aclweb.org
- Submission Deadline: January 5, 2026
- Conference Dates: July 2-7, 2026
- Location: San Diego, CA, USA
- Special Theme: “Explainability of NLP Models”
- Submission Website: https://openreview.net/group?id=aclweb.org/ACL/ARR/2026/January (not open yet)
- Commitment Website: https://openreview.net/group?id=aclweb.org/ACL/2026/Conference (not open yet)
Contact
- General Chair: Philipp Koehn
- Program Chairs: Maria Liakata, Viviane P. Moreira, Jiajun Zhang, David Jurgens
For questions related to paper submission, email: editors@aclrollingreview.org
For all other questions, email: acl2026pcs@gmail.com
Overview
ACL 2026 invites the submission of long and short papers featuring substantial, original, and unpublished research in all aspects of Computational Linguistics and Natural Language Processing. ACL 2026 aims for a diverse technical program—in addition to traditional research results, papers may contribute negative findings, survey an area, announce the creation of a new resource, argue a position, report novel linguistic insights derived using existing computational techniques, and reproduce, or fail to reproduce, previous results. As in recent years, some of the presentations at the conference will feature papers accepted by the Transactions of the ACL (TACL) and the Computational Linguistics (CL) journals.
Papers submitted to ACL 2026, but not selected for the main conference, will also automatically be considered for publication in the Findings of the Association for Computational Linguistics.
Paper Submission Information
Papers may be submitted to the ARR 2025 October cycle and the ARR 2026 January cycle. Papers that have already received reviews and a meta-review from ARR in earlier cycles may be committed to ACL 2026 via the conference commitment site (not available yet). If you intend to commit to ACL 2026 and need an invitation letter for a visa, please fill out the visa request form as soon as possible. For additional queries, contact the visa chairs at acl-2026-visa-chairs@googlegroups.com.
Submission Topics
ACL 2026 aims to have a broad technical program. Relevant topics for the conference include, but are not limited to, the following areas (in alphabetical order):
- AI/LLM Agents
- Clinical and Biomedical Applications
- Code Models
- Computational Social Science, Cultural Analytics, and NLP for Social Good
- Dialogue and Interactive Systems
- Discourse, Pragmatics, and Reasoning
- Ethics, Bias, and Fairness
- Financial Applications and Time Series
- Generalizability and Transfer
- Hierarchical Structure Prediction, Syntax, and Parsing
- Human-AI Interaction/Cooperation
- Information Extraction and Retrieval
- Interpretability and Analysis of Models for NLP
- Linguistic Theories, Cognitive Modeling, and Psycholinguistics
- LLM Efficiency
- Low-resource Methods for NLP
- Machine Translation
- Mathematical, Symbolic, and Logical Reasoning in NLP
- Multilinguality and Language Diversity
- Multimodality and Language Grounding to Vision, Robotics and Beyond
- Natural Language Generation
- Neurosymbolic Approaches to NLP
- NLP Applications
- Phonology, Morphology and Word Segmentation
- Question Answering
- Resources and Evaluation
- Retrieval-Augmented Language Models
- Safety and Alignment in LLMs
- Semantics: Lexical, Sentence-level Semantics, Textual Inference and Other Areas
- Sentiment Analysis, Stylistic Analysis, and Argument Mining
- Speech Processing and Spoken Language Understanding
- Summarization
- Special Theme: Explainability of NLP Models
ACL 2026 Theme Track: Explainability of NLP Models
Following the success of the ACL 2020-2024 theme tracks, we are happy to announce that ACL 2026 will have a new theme, with the goal of reflecting on and stimulating discussion about the current state of development of the field of NLP.
Explainability refers to the methods and techniques aimed at making the internal decision-making processes of complex NLP models, such as large language models, transparent and understandable to humans. It moves beyond treating models as “black boxes” whose predictions are accepted on faith, and instead seeks to uncover the reasoning behind specific outputs. Explainability is foundational to building trust, ensuring fairness, and facilitating responsible deployment. By revealing a model’s potential reliance on spurious correlations or societal biases, explainability allows developers to diagnose errors, improve model robustness, and provide accountability. This is especially critical in high-stakes domains like healthcare, finance, and law, where understanding the “why” behind a decision is as crucial as the decision itself.
The theme track invites empirical and theoretical work as well as surveys and position papers reflecting on the Explainability of NLP Models. Possible topics of discussion include (but are not limited to) the following:
- How do explainability methods need to be adapted for different model architectures? Can we develop a unified framework to evaluate explanations across these architectures?
- How can we rigorously and quantitatively evaluate the quality of an explanation? What metrics can reliably measure the faithfulness (accuracy of the model’s reasoning) and plausibility (human-perceived reasonableness) of an explanation?
- Can explanations be used to reliably detect when a model is making a biased prediction based on sensitive attributes? How can input-based explanations help mitigate social biases during model training?
- Can we use explanations to systematically find and fix problems in the training data itself, such as spurious correlations or annotation errors? How can explainability facilitate a human-in-the-loop process for iterative data refinement?
- Can we identify specific directions, mechanisms, patterns, or “knobs” within a model’s internal activations that control high-level behaviors like abstaining from unanswerable questions? Can we design models that are inherently more interpretable?
Note that this track is distinct from the “Interpretability and Analysis of Models for NLP” area. Papers submitted to the special theme should focus on understanding the internal workings of the model.
Theme track submissions can be either long or short papers. We anticipate a special session for this theme at the conference and a Thematic Paper Award in addition to the other categories of awards.
Two-Stage Review: Submission to ARR, Commitment to ACL 2026
ACL 2026 will use ACL Rolling Review (ARR) as its reviewing system, but final decisions will be made by the conference. Both submission of articles for review and commitment of reviewed articles to the conference will be performed via the OpenReview platform. Specifically, authors will follow a two-step process:
- Authors submit articles to ARR, where submissions receive reviews and meta-reviews from ARR reviewers and area chairs;
- Authors commit their reviewed articles to a publication venue (e.g., ACL 2026), where Senior Area Chairs and Program Chairs make acceptance decisions based on the ARR reviews and meta-reviews.
ACL 2026 has chosen this approach in coordination with the other *CL 2026 conferences, which are adopting the same procedure and a coordinated submission plan to give authors maximum flexibility across submission periods. In each cycle, after a paper has been fully reviewed, authors have the option to commit their paper to a conference or to revise and resubmit it for another round of reviews.
The reviewing process will continue to be double-blind. Reviewers will not see authors, nor will authors see reviewers, and reviews on ARR will not be made publicly visible. However, authors will be given the option through ARR to make their anonymized submitted articles publicly visible.
Mandatory Reviewing Workload
As the pace of research in the field continues to increase, we need to strengthen the commitment to reviewing for each paper submission. During the ARR submission process, authors will be required to specify which co-authors commit to covering reviewing duties in that reviewing cycle. Please see the new ARR policy regarding reviewing workload. As this is an ARR-wide policy for all *CL conferences, questions or clarifications should be addressed to ARR directly.
Important Dates
- Submission deadline (all papers are submitted to ARR): January 5, 2026
- ARR reviews & meta-reviews available to authors of the January cycle: March 9, 2026
- Commitment deadline for ACL 2026: March 14, 2026
- Notification of acceptance: April 4, 2026
- Withdrawal deadline: April 19, 2026
- Camera-ready papers due: April 19, 2026
- Tutorials: July 2, 2026
- Main Conference: July 3-5, 2026
- Workshops: July 6-7, 2026
Note: All deadlines are at 11:59PM UTC-12:00 (“anywhere on Earth”).
Paper Submission Details
Both long and short paper submissions should follow all of the ARR submission requirements, including:
- Long Papers (8 pages) and Short Papers (4 pages)
- Instructions for Two-Way Anonymized Review
- Authorship
- Citation and Comparison
- Multiple Submission Policy, Resubmission Policy, and Withdrawal Policy
- Ethics Policy including the responsible NLP research checklist
- Limitations
- Paper Submission and Templates
- Optional Supplementary Materials
Final versions of accepted papers will be given one additional page of content (up to 9 pages for long papers, up to 5 pages for short papers) to address reviewers’ comments.
Following the ACL and ARR policies, there is no anonymity period requirement.
At the time of submission to ARR, authors will be asked to select a preferred venue (e.g., ACL 2026). This is used only to calculate acceptance rates. Authors who selected ACL 2026 as a preferred venue when submitting to ARR may choose not to commit to ACL 2026 after receiving their reviews, and authors who selected a preferred venue other than ACL 2026 when submitting to ARR are still welcome to commit to ACL 2026.
Presentation at the Conference
All accepted papers must be presented at the conference to appear in the proceedings. The conference will include both in-person and virtual presentation options. Papers without at least one presenting author registered by the early registration deadline may be subject to desk rejection. Long and short papers will be presented orally or as posters as determined by the program committee. While short papers will be distinguished from long papers in the proceedings, there will be no distinction in the proceedings between papers presented orally and papers presented as posters.