Fact Extraction and VERification

The First Workshop on Fact Extraction and VERification (FEVER) will be held at EMNLP 2018

Co-organized by the University of Sheffield, Amazon Research Cambridge, and Imperial College London


With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text into structured knowledge. There is, however, another problem that has become the focus of much recent research and media coverage: false information coming from unreliable sources. [1] [2]

In an effort to jointly address both problems, we are organizing a workshop promoting research in joint Fact Extraction and VERification (FEVER). We aim for FEVER to be a long-term venue for work in verifiable knowledge extraction. To stimulate progress in this direction, we will also host the FEVER Challenge, an information verification shared task on a dataset that we plan to release as part of the challenge.


Our Workshop

The first workshop on Fact Extraction and VERification will be held at EMNLP 2018 in Brussels. We are hosting two tracks and are seeking both papers on topics related to fact checking and system descriptions of entries to the FEVER shared task.

At the workshop, we will host invited talks and presentations of submitted papers, and announce the results and winners of the FEVER Shared Task.

The shared task guidelines and the call for papers will be released soon.

Research Track

In order to bring together researchers working on the various tasks related to fact extraction and verification, we will host a workshop welcoming submissions on related topics such as recognizing textual entailment, question answering and argumentation mining.

  • First call for papers: 24th of May 2018
  • Second call for papers: 26th of June 2018
  • Submission deadline: 27th of July 2018
  • Notification: 18th of August 2018
  • Camera-ready deadline: 31st of August 2018
  • Workshop: 31st of October or 1st of November (EMNLP)

Shared Task Track

Participants will be invited to develop systems that identify evidence and reason about the truthfulness of claims that we have generated. Our dataset currently contains 200,000 true and false claims; the true claims are written by human annotators extracting information from Wikipedia.
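
To give a concrete, if informal, picture of what an entry involves, the sketch below shows one way a system's output might be checked against a gold claim. The record fields, the SUPPORTED/REFUTED labels, and the scoring rule are illustrative assumptions only; the official shared task guidelines will define the actual data format and evaluation.

    # Minimal Python sketch of a hypothetical claim record and scoring check.
    # Field names, labels, and the scoring rule are assumptions for illustration,
    # not the official FEVER format.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Claim:
        claim_id: int
        text: str            # claim written by an annotator from Wikipedia content
        label: str           # assumed label set: "SUPPORTED" or "REFUTED"
        evidence: List[str]  # identifiers of Wikipedia sentences backing the label

    def correct(gold: Claim, predicted_label: str, retrieved: List[str]) -> bool:
        """Count a prediction as correct only if the label matches and at least
        one gold evidence sentence was retrieved (assumed evaluation rule)."""
        return predicted_label == gold.label and any(e in retrieved for e in gold.evidence)

    if __name__ == "__main__":
        gold = Claim(1, "EMNLP 2018 is held in Brussels.", "SUPPORTED", ["Brussels_sent_12"])
        print(correct(gold, "SUPPORTED", ["Brussels_sent_12", "EMNLP_sent_3"]))  # True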

  • Challenge Launch: 1st April 2018
  • Testing Begins (test set released, codalab submission page opened): 10th July 2018
  • Submission Closes: 13th July 2018
  • Results Announced: 16th July 2018
  • System Descriptions Due for Workshop: 27th July 2018
  • Winners Announced: 31st of October or 1st of November (EMNLP)

All deadlines are 11:59pm Pacific Daylight Saving Time (UTC-7).

Invited Speakers

Delip Rao

Joostware AI Research, Johns Hopkins University,
Fake News Challenge

Luna Dong


Marie-Francine Moens

KU Leuven

Sebastian Riedel

University College London

Workshop Organising Committee

James Thorne

University of Sheffield

Andreas Vlachos

University of Sheffield

Oana Cocarascu

Imperial College London

Christos Christodoulopoulos

Amazon Research Cambridge

Arpit Mittal

Amazon Research Cambridge