Fact Extraction and VERification

The First Workshop on Fact Extraction and Verification (FEVER) will be held at EMNLP 2018

Motivation

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.), so we are limited by our ability to transform free-form text into structured knowledge. There is, however, another problem that has become the focus of considerable recent research and media coverage: false information coming from unreliable sources. [1] [2]

In an effort to jointly address both problems, we propose a workshop promoting research in joint Fact Extraction and VERification (FEVER). We aim for FEVER to become a long-term venue for work in verifiable knowledge extraction. To stimulate progress in this direction, we will also host the FEVER Challenge, an information verification shared task on a dataset that we plan to release as part of the challenge.


Our Workshop

In order to bring together researchers working on the various tasks related to fact extraction and verification, we will host a workshop welcoming submissions on the following topics:

  • Natural Language Inference and Recognizing Textual Entailment
  • Argumentation Mining
  • Machine Reading and Comprehension
  • Claim Validation / Fact Checking
  • Question Answering
  • Automated Theorem Proving
  • Knowledge Base Population
  • Stance Detection

We will also host a panel comprising the invited speakers, members of the organising committee, and selected shared task participants. Panel discussion topics will include:

  • Information extraction and verification from sources of unknown trustworthiness
  • Detection of ideologies/stances
  • Closing the gap between fact and story/argument extraction


Our Shared Task and Dataset

Participants will be invited to develop systems that identify evidence for, and reason about the truthfulness of, claims that we have generated. Our dataset currently contains 220,000 true and false claims. The true claims were written by human annotators extracting information from Wikipedia.

The false claims are human-generated mutations of the true ones. They are designed to reflect entailment in natural logic and to be slightly adversarial: using constrained world knowledge, annotators mutate the original claims so that they remain semantically plausible but untrue.
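
To make the task concrete, the sketch below shows one plausible shape for a single claim record in such a dataset. The field names, label set, and evidence encoding are illustrative assumptions, not the released format.

    # Hypothetical claim record; every field name and label below is an
    # illustrative assumption, not the official schema of the FEVER dataset.
    claim_record = {
        "id": 1,
        "claim": "The Eiffel Tower is located in Berlin.",  # a human-written mutation
        "label": "REFUTED",  # assumed label set: SUPPORTED / REFUTED / NOT ENOUGH INFO
        "evidence": [
            ("Eiffel_Tower", 0),  # assumed (Wikipedia page title, sentence index)
        ],
    }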

We expect the FEVER dataset and shared task to be more challenging than existing state-of-the-art resources for natural language inference, machine comprehension, and fact checking. This is owing to the complexity of the mutations we introduce, the compositional multi-sentence and multi-document inference required, and the need for an information retrieval component. These complexities better reflect the open challenges that must be addressed for accurate fact extraction and verification from user-generated content.
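
To illustrate how these components might fit together, here is a minimal Python sketch of a retrieve-then-verify pipeline. Every function body is a trivial placeholder standing in for a participant's actual models, and the label strings are assumptions rather than a prescribed scheme.

    # Minimal sketch of a retrieve-then-verify pipeline. All three stages are
    # trivial placeholders; the label strings are assumed, not prescribed.
    from typing import List, Tuple

    def retrieve_documents(claim: str) -> List[str]:
        # IR stage: return candidate Wikipedia page titles for the claim.
        return ["Eiffel_Tower"]  # placeholder retrieval

    def select_sentences(claim: str, pages: List[str]) -> List[Tuple[str, int]]:
        # Evidence stage: rank sentences within the candidate pages; evidence
        # may span multiple sentences and multiple documents.
        return [("Eiffel_Tower", 0)]  # placeholder (page title, sentence index)

    def classify(claim: str, evidence: List[Tuple[str, int]]) -> str:
        # Inference stage: label the claim given the combined evidence.
        return "NOT ENOUGH INFO"  # placeholder verdict

    def verify(claim: str) -> str:
        pages = retrieve_documents(claim)
        evidence = select_sentences(claim, pages)
        return classify(claim, evidence)

    print(verify("The Eiffel Tower is located in Berlin."))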

We have designed the dataset to be agnostic with respect to structured and unstructured approaches to inference, and we expect that successful solutions will be hybrids of symbolic reasoning and distributional models.


Invited Speakers

Delip Rao

Joostware AI Research, Johns Hopkins University, Fake News Challenge

Luna Dong

Amazon

Marie-Francine Moens

KU Leuven

Sebastian Riedel

University College London

Organising Committee

James Thorne

University of Sheffield

Andreas Vlachos

University of Sheffield

Oana Cocarascu

Imperial College London

Christos Christodoulopoulos

Amazon Research Cambridge

Arpit Mittal

Amazon Research Cambridge