Overview
This algorithm provides state-of-the-art natural language reasoning. It takes a premise, expressed in natural language, and a hypothesis that may or may not follow from it. It determines whether the hypothesis follows from the premise (entailment), contradicts the premise, or is unrelated to it (neutral).
It is a wrapper around the Textual Entailment model published by the AllenNLP team, which is itself a re-implementation of the Decomposable Attention Model (2017).
Applicable Scenarios and Problems
Usage
By default the algorithm returns only the answer to the question. However, if you run it in debug mode it will return the entire output of AllenNLP's model.
Input
The input JSON blob should have the following fields:
- premise: a descriptive piece of text
- hypothesis: a statement that may or may not follow from the premise
- debug (optional): a boolean indicating whether to run in debug mode
Any additional fields will be passed through into the AllenNLP model.
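For instance, a call through the Algorithmia Python client might look like the sketch below. This is a minimal illustration rather than the definitive client code: the API key placeholder and the algorithm path ("demo/TextualEntailment/0.1.0") are assumptions, so substitute the real values for this algorithm.

import Algorithmia

# The API key and algorithm path below are placeholders (assumptions), not the
# published values for this algorithm.
client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("demo/TextualEntailment/0.1.0")

payload = {
    "premise": "Two women are wandering along the shore drinking iced tea.",
    "hypothesis": "Two women are sitting on a blanket near some rocks talking about politics."
    # "debug": True  # uncomment to also receive AllenNLP's full output
}

# pipe() sends the JSON payload to the algorithm; .result holds the returned blob.
response = algo.pipe(payload)
print(response.result)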
Output
The following output fields will always be present:
- entailment: Probability the hypothesis follows from the premise
- contradiction: Probability the hypothesis contradicts the premise
- neutral: Probability the hypothesis is independent from the premise
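The three probabilities sum to (approximately) one, so the predicted relation is simply the field with the highest value. A minimal sketch of that step, using the output shown in the examples below:

# Choose the most likely relation from the algorithm's output.
output = {
    "contradiction": 0.9735872745513916,
    "entailment": 0.00033908788464032114,
    "neutral": 0.02607356198132038
}

# Restrict the comparison to the three label fields, since debug mode adds
# non-numeric fields to the same blob.
labels = ("entailment", "contradiction", "neutral")
label = max(labels, key=output.get)
print(label, output[label])  # contradiction 0.9735872745513916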
If you run the algorithm in debug mode there will be additional output fields, including:
- premise_tokens: the tokens in the parsed premise
- hypothesis_tokens: the tokens in the hypothesis
Examples
Example 1: Default Behavior
Input:
{
"premise": "Two women are wandering along the shore drinking iced tea.",
"hypothesis": "Two women are sitting on a blanket near some rocks talking about politics."
}
Output:
{
"contradiction": 0.9735872745513916,
"entailment": 0.00033908788464032114,
"neutral": 0.02607356198132038
}
Example 2: Debug Mode
Input:
{
"premise": "Two women are wandering along the shore drinking iced tea.",
"hypothesis": "Two women are sitting on a blanket near some rocks talking about politics.",
"debug": true
}
Output:
{
"contradiction": 0.9735872745513916,
"entailment": 0.00033908788464032114,
"neutral": 0.02607356198132038,
"hypothesis_tokens": [
"Two",
"women",
"are",
...
],
...
}
See Also
- A web-based demo of the model, available on the AllenNLP site
- Documentation of the model's code