Argumentation schemes are reasoning patterns commonly applied in scientific and legal enquiries. They are templates for making inferences, comprising premises that support a conclusion, together with critical questions that can be put forward against an argument. A comprehensive list of schemes is provided in the seminal work of Walton, Reed & Macagno.
For instance, if Dr. Brown states that a currently discussed policy will lead to an economic disadvantage, we might tentatively conclude that the policy will indeed lead to an economic disadvantage in the future. This is an example of an argument from expert opinion, where the pattern is the following:
- Source E is an expert in domain S containing A,
- E asserts that A is true,
- Therefore, A may plausibly be true.
Critical questions include: How credible is E as an expert source? Is E an expert in the field that A is in? Is A consistent with the testimony of other experts?
In the example, knowing that Dr. Brown is a surgeon and not an economist might reduce our degree of belief in the claim that the currently discussed policy will lead to an economic disadvantage in the future.
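As a sketch, the expert-opinion scheme and one of its critical questions can be encoded as a simple data structure. The class and field names below are illustrative assumptions, not the representation actually used in CISpaces.

```python
from dataclasses import dataclass

# Illustrative encoding of the argument-from-expert-opinion scheme.
# Class and field names are hypothetical, not the CISpaces representation.
@dataclass
class ExpertOpinionArgument:
    expert: str        # source E
    domain: str        # domain S in which E is an expert
    claim: str         # proposition A asserted by E
    claim_domain: str  # the field that A actually belongs to

    def cq_field(self) -> bool:
        """Critical question: is E an expert in the field that A is in?"""
        return self.domain == self.claim_domain

arg = ExpertOpinionArgument(
    expert="Dr. Brown", domain="surgery",
    claim="the policy will lead to an economic disadvantage",
    claim_domain="economics")
print(arg.cq_field())  # → False: the critical question undercuts the argument
```

A negative answer to a critical question does not refute the claim outright; it merely weakens the tentative conclusion licensed by the scheme.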
In CISpaces, the analysis is structured using schemes adapted for the intelligence process, capturing in particular associative and causal relations among entities and events.
D. Walton, C. Reed, and F. Macagno, 2008. Argumentation Schemes. Cambridge University Press.
On the acceptability of arguments
Argumentation theory is used in CISpaces to help analysts identify plausible hypotheses via methods for deriving the acceptability status of arguments, e.g. accepted or rejected. An argument is said to be accepted if its supporting arguments are defended against attacking arguments. Arguments are tentatively accepted until further information becomes available that changes their acceptability status. For example, consider argument A1, “Jill collaborates with Bob, Bob is a smuggler, thus Jill is a smuggler too”. It is attacked by argument A2, “Jill had no contact with Bob, thus Jill does not know Bob”. The claim that Jill is a smuggler cannot be rationally accepted, since A2 attacks A1. However, if A2 is in turn attacked by a new argument A3, “Jill had no contact with Bob, but they are connected through Mark”, then A1, defended by A3, may be reinstated.
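The attack-and-reinstatement pattern above can be reproduced on a tiny abstract argumentation framework. The fixed-point computation below is a sketch (with hypothetical names) of the grounded labelling: it iteratively accepts arguments whose attackers have all been defeated.

```python
# Smuggler example as an abstract framework: A2 attacks A1, A3 attacks A2.
args = {"A1", "A2", "A3"}
attacks = {("A2", "A1"), ("A3", "A2")}

def grounded_in(args, attacks):
    """Sketch of grounded semantics: iterate until no argument changes status."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args - accepted - rejected:
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= rejected:    # every attacker is defeated
                accepted.add(a); changed = True
            elif attackers & accepted:   # attacked by an accepted argument
                rejected.add(a); changed = True
    return accepted

print(sorted(grounded_in(args, attacks)))  # → ['A1', 'A3']: A3 reinstates A1
print(sorted(grounded_in({"A1", "A2"}, {("A2", "A1")})))  # → ['A2']: A1 is out
```

Running it with and without A3 shows the non-monotonic behaviour described above: adding information (A3) restores the acceptability of A1.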
A mathematical theory for determining when two or more arguments are collectively acceptable, depending on a chosen criterion, is at the heart of Dung’s Abstract Argumentation Framework, first proposed in a seminal 1995 paper that is currently among the most cited works in Artificial Intelligence.
Among the proposed criteria, the so-called preferred semantics has interesting theoretical and practical properties, not least the fact that it always selects some set of arguments as acceptable, which is why it was chosen for CISpaces. Unfortunately, identifying the acceptable arguments according to the preferred semantics is computationally very expensive, super-exponential in the worst case.
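To make the cost tangible, preferred extensions can be computed naively by enumerating every subset of arguments and keeping the maximal admissible ones. This brute-force sketch (not the algorithm used by dedicated solvers) makes the exponential blow-up explicit: it examines all 2^n subsets.

```python
from itertools import chain, combinations

def preferred_extensions(args, attacks):
    """Brute force: check all 2^n subsets, keep the maximal admissible sets."""
    universe = list(args)
    subsets = chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))

    def conflict_free(s):
        return not any((a, b) in attacks for a in s for b in s)

    def defends(s, a):
        # every attacker of a must itself be attacked by some member of s
        return all(any((c, b) in attacks for c in s)
                   for (b, target) in attacks if target == a)

    admissible = [set(s) for s in subsets
                  if conflict_free(s) and all(defends(s, a) for a in s)]
    # preferred extensions = admissible sets maximal w.r.t. set inclusion
    return [s for s in admissible if not any(s < t for t in admissible)]

# Two mutually attacking arguments yield two preferred extensions, one each.
exts = preferred_extensions({"a", "b"}, {("a", "b"), ("b", "a")})
print(sorted(sorted(e) for e in exts))  # → [['a'], ['b']]
```

Already at a few dozen arguments this enumeration becomes infeasible, which is why efficient SAT-based solvers are needed in practice.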
For this reason, we employed jArgSemSAT, a Java re-implementation of the ArgSemSAT software, which is consistently among the best-performing solvers for identifying acceptable arguments according to the preferred semantics.
P. M. Dung, 1995. On the acceptability of arguments and its fundamental role in non-monotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357.
Automated Fact Extraction
Traditional military intelligence analysis has focussed on military sources of information such as satellite feeds, sensor data from manned and unmanned aerial vehicles, and human intelligence from operatives in the field. Over the last 10 years, open-source intelligence (OSINT), including social media sites such as Twitter and Facebook and live media streaming sites like Periscope, has become pervasive and impossible to ignore. Effective use of OSINT adds a major capability to the UK’s military intelligence gathering capacity. However, it also presents serious challenges to existing intelligence gathering approaches, which rely heavily on teams of analysts manually analysing data: challenges both in the scale of information that must be checked and in the real-time throughput at which hypotheses can be rigorously developed and assessed.
In CISpaces we are using AI techniques such as natural language processing to automatically extract relevant facts from large volumes of social media posts. High precision information extraction from natural language text is a grand challenge in computer science. We are exploring how novel Open Information Extraction (OpenIE) approaches can be best used to identify and extract key facts that intelligence analysts need to know to answer questions such as ‘what hotels are UK nationals using to shelter in during a crisis?’ or ‘which evacuation routes are blocked due to social unrest?’. Evidence extracted using automated fact extraction from OSINT is used in argumentation schemes alongside other sources of evidence to support teams of analysts trying to evaluate conflicting hypotheses during real-time events.
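As a toy illustration of the idea (real OpenIE systems rely on full syntactic parsing and are far more robust), a handful of surface patterns can already pull subject-relation-object facts out of short posts. The patterns and relation names below are hypothetical, chosen to echo the example questions above.

```python
import re

# Hypothetical surface patterns mapping short posts to (subject, relation,
# object) triples. Real OpenIE systems use syntactic parsing; this is a sketch.
PATTERNS = [
    (re.compile(r"(?P<subj>[\w\s]+?) (?:is|are) sheltering in (?P<obj>[\w\s]+)"),
     "shelters_in"),
    (re.compile(r"(?P<subj>[\w\s]+?) (?:is|are) blocked due to (?P<obj>[\w\s]+)"),
     "blocked_by"),
]

def extract_facts(post):
    """Return (subject, relation, object) triples matched in a post."""
    facts = []
    for pattern, relation in PATTERNS:
        for m in pattern.finditer(post):
            facts.append((m.group("subj").strip(), relation,
                          m.group("obj").strip()))
    return facts

print(extract_facts("Route A1 is blocked due to protests"))
# → [('Route A1', 'blocked_by', 'protests')]
```

Triples extracted this way can then enter the argumentation schemes described earlier as evidence, alongside facts from other sources.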