
On Construction and Evaluation of Analogical Arguments for Persuasive Reasoning


ABSTRACT

Analogical reasoning is a complex process based on a comparison between two pairs of concepts or states of affairs (aka. the source and the target) for carrying certain features over from one to the other. Arguments which employ this process to support their claims are called analogical arguments. Our goal is to study their structure and the computation of their defeasibility in light of argumentation theory. Our proposed assumption-based argumentation with predicate similarity framework, ABA(p), can be seen as an extension of the assumption-based argumentation (ABA) framework in which not only assumptions but also the similarity of predicates can be used to support a claim. ABA(p) labels each argument tree with an analogical degree, and different ways to aggregate numerical values are studied in relation to gullible/skeptical characteristics in agent reasoning. The acceptability of analogical arguments is evaluated w.r.t. the semantics of abstract argumentation. Finally, we demonstrate that ABA(p) captures the argumentation scheme for argument from analogy and provides an explanation when it is used for persuasion.

Introduction: Analogical Arguments and Their Acceptability

Analogical reasoning is a complex process based on a comparison between two pairs of concepts or states of affairs (aka. the source and the target) sharing some common features (Bartha Citation2010). This comparison is the ground of a specific type of inference called argument from analogy, in which the conclusion of an argument attributes to the target a feature characterized in the source (cf. the proposed models in (Copi, Cohen, and McMahon Citation2016; Davies Citation1988; Guarini, Butchart, Smith, and Moldovan Citation2009; Walton Citation2010; Walton, Reed, and Macagno Citation2008)). Despite their diversity, those models can be represented by a generic structure called the argumentation scheme for argument from analogy, introduced in (Walton, Reed, and Macagno Citation2008) as follows:

Similarity Premise: Generally, case C1 is similar to case C2.
Base Premise: A is true (false) in case C1.
Conclusion: A is true (false) in case C2.

This generic structure can be explained as follows. The similarity is regarded to hold between two cases, which could be two different 'concepts' or 'states of affairs'. Consequently, a property (e.g., a feature A) is attributed from one to the other. Intuitively, this kind of structure can be represented as a logic program where A and Ci appear in the head and the body of an inference rule, respectively. Several attempts along these lines were developed in (Racharak et al. Citation2016, Citation2017; Raha, Hossain, and Ghosh Citation2008; Sun Citation1995).

A fundamental problem for this kind of reasoning is how to evaluate an analogical argument, i.e., its acceptability. Basically, this problem amounts to investigating the structure of analogical arguments and their defeasibility characteristics. At the abstract level, the critical questions (CQ) (Walton, Reed, and Macagno Citation2008) associated with the argumentation scheme outline several conditions of defeasibility:

CQ1: Is A true (false) in C1?
CQ2: Are C1 and C2 similar in the respects cited?
CQ3: Are there important differences (dissimilarities) between C1 and C2?
CQ4: Is there some other case C3 that is also similar to C1, except that A is false (true) in C3?

These critical questions can be used to understand which analogical arguments should not be accepted. However, they do not address the following three basic problems: (1) how similarity/dissimilarity should be determined (which amounts to understanding the notion of similarity); (2) how an analogical argument is constructed (which amounts to understanding the structure of an analogical argument); and (3) how a conclusion drawn from the similarity premise and the base premise is warranted (which amounts to understanding the evaluation of an analogical argument). The argumentation scheme and its critical questions do not address these aspects concretely.

To address the first problem, we first look into the literature on similarity models. The most basic (but useful) one was developed by Tversky (Citation1977). In Tversky's model, an object is considered as a set of features, and the similarity of two objects is measured from the number of features they share and the numbers of features distinctive to each. Nevertheless, not every feature needs to be cited in analogical arguments; the studies in (Hesse Citation1965; Waller Citation2001; Weinreb Citation2016) reported that the features used in the comparison should be 'relevant' to the attribution of the property. This leads to our study on characteristics of similarity models for analogical arguments in this work (cf. Section 3).
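For concreteness, the ratio form of Tversky's model can be sketched as follows; the feature sets and default weights below are illustrative assumptions, not taken from the cited works.

```python
# A minimal sketch of Tversky's ratio model of similarity (Tversky 1977).
# Objects are feature sets; alpha/beta weight the distinctive features.
def tversky(a: set, b: set, alpha: float = 1.0, beta: float = 1.0) -> float:
    common = len(a & b)
    only_a = len(a - b)
    only_b = len(b - a)
    denom = common + alpha * only_a + beta * only_b
    return common / denom if denom else 1.0  # two empty objects count as identical

# Hypothetical feature sets for the running example.
duck = {"waterbird", "webbed_feet", "quacks"}
goose = {"waterbird", "webbed_feet", "honks"}
print(tversky(duck, goose))  # 0.5: two shared features, two distinctive ones
```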

Addressing the second and the third problems involves computing arguments in terms of argumentation with structure (or structured argumentation). It should be noted that argumentation (Dung Citation1995) has proven to be a promising platform for understanding non-monotonic and defeasible reasoning. From this viewpoint, these problems are indeed the problems of determining 'acceptable' analogical arguments w.r.t. argumentation semantics. That is, analogical arguments can attack (and be attacked by) other arguments. We show the correspondence between this attack–counterattack relationship and the defeasibility conditions of the argumentation scheme in this paper. More specifically, the definition of 'attack' is formally given in Section 4 and the link to the argumentation scheme is explained in Subsection 5.2.

This work uses the following dialogue as our running example. It is an instance of analogical reasoning because Agent1 and Agent2 employ the perception of similarity as a means to justify their reasoning.

Agent1: I think a goose can quack since it is like a duck.

Agent2: No. Though it is like a duck, to say that it can quack we have to look into their vocal cords. Since these are built differently, it cannot quack.

There are several remarks which could be observed from the above example:

  1. Analogical reasoning is a kind of commonsense, defeasible reasoning. For example, Agent1 employs this kind of reasoning when he has only partial knowledge but a conclusion must be drawn;

  2. This kind of reasoning can be used for ‘persuasion’, which conforms to the investigation in (Waller Citation2001). For example, Agent1 is trying to change the belief of Agent2 by arguing from the similarity of geese and ducks.

  3. Human beings are not certain about the conclusions of their analogical reasoning. Their levels of certainty depend on the status of information, the interaction between arguments (cf. the counter-argument uttered by Agent2), and the type of agent, i.e., gullible/skeptical agents. We continue on this in Section 4.

In this paper, we focus on the computational aspect of analogical reasoning in argumentation, rather than its psychological modeling. Concretely, we study the structure of analogical arguments from the structured argumentation point of view, addressing the aforementioned problems, and we analyze how the notion of 'concept similarity' contributes to the acceptability of analogical arguments. Section 2 reviews the basics of argumentation including assumption-based argumentation (ABA). Section 3 discusses a formal language for defining concepts and the notion of a similarity measure of concepts. This notion, equipped with ABA, is used to define our proposed framework, called assumption-based argumentation with predicate similarity (ABA(p)), in Section 4; that section also discusses its relationship to different types of agents in analogical reasoning. Section 5 defines the notion of acceptability in argumentation and explores its link to Walton's scheme. Finally, we relate our approach to others and discuss future directions in Section 6 and Section 7, respectively.

Preliminary: Argumentation Framework and Its Structure

Abstract Argumentation

An abstract argumentation framework (AA) is a pair (A, R) where A is a set of arguments and R ⊆ A × A is called an attack relation. Arguments may attack each other; hence, it is clear that they may not stand together and their status is subject to an evaluation. Semantics for AA return sets of arguments called extensions, which are conflict-free and defend themselves against attacks (Dung Citation1995).
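As a minimal illustration of these semantics, the following sketch checks conflict-freeness and admissibility over an explicit attack relation; the toy arguments are hypothetical.

```python
# A minimal sketch of Dung-style acceptability checks over an abstract
# argumentation framework (A, R); `attacks` is a set of (attacker, target) pairs.
def conflict_free(S, attacks):
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, arg, attacks, all_args):
    # S defends arg if every attacker of arg is counter-attacked by some member of S.
    attackers = {b for b in all_args if (b, arg) in attacks}
    return all(any((d, b) in attacks for d in S) for b in attackers)

def admissible(S, attacks, all_args):
    return conflict_free(S, attacks) and all(defends(S, x, attacks, all_args) for x in S)

# Toy example: b attacks a, c attacks b; {a, c} is admissible.
args = {"a", "b", "c"}
atks = {("b", "a"), ("c", "b")}
print(admissible({"a", "c"}, atks, args))  # True
```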

Structured Argumentation

In AA, the structure and meaning of arguments and attacks are abstract. On the one hand, this enables the study of properties which are independent of any specific aspects (Baroni and Giacomin Citation2009). On the other hand, this generality limits expressivity and can hardly be adopted to model practical target situations. To fill this gap, less abstract formalisms were considered, dealing in particular with the construction of arguments and the conditions for an argument to attack another, e.g., ASPIC+ (Modgil and Prakken Citation2014), DeLP (García and Simari Citation2004), and assumption-based argumentation (ABA) (Dung, Kowalski, and Toni Citation2009). This work extends ABA, and we include its basics here for self-containment.

Definition 2.1.

An ABA framework is a quadruple ⟨L, R, A, ‾⟩ where

  • (L,R) is a deductive system, in which L is a language and R is a set of inference rules,

  • A ⊆ L is a (non-empty) set, referred to as the set of assumptions,

  • ‾ is a total mapping from A to L, where ᾱ is the contrary of α ∈ A.

We assume that the inference rules in R have the syntax l0 ← l1, …, ln (for n ≥ 0) where li ∈ L. We refer to l0 and l1, …, ln as the head and the body of the rule, respectively. We also write the rule l ← simply as l, and we restrict our attention to flat ABA frameworks (Bondarenko et al. Citation1997), i.e., if l ∈ A, then there exists no inference rule of the form l ← l1, …, ln ∈ R for any n ≥ 0.

As an example, the argumentation scheme for argument from analogy (cf. Section 1) can be represented in ABA as follows (Footnote 1):

hold(A, C2) ← hold(A, C1), sim(C1, C2), arguably(A, C2)

where Ci represents different concepts or states of affairs; the conclusion hold(A, C2) may be read as "A holds in C2"; and the assumption premises hold(A, C1), sim(C1, C2), and arguably(A, C2) may be read as "A holds in C1", "C1 and C2 are similar to each other", and "the defeasible rule should not apply to the conclusion between A and C2", respectively.

The above (domain-independent) inference rule can be instantiated for the agent reasoning described in our running example. According to the biological family of birds, we know that ducks and geese belong to the same family, i.e., 'Anatidae'. These birds are adapted for swimming, floating on the water surface, etc. Though they are under the same family, ducks and geese are different. This information supports the conclusion that ducks and geese are similar. We represent the assumptions as follows.

hold(quack,duck);sim(duck,goose)

where the assumptions hold(quack, duck) and sim(duck, goose) state that "ducks can quack" and "ducks and geese are similar to each other", respectively.

Given an ABA framework, an argument in favor of a sentence c ∈ L supported by a set S of assumptions, denoted by S ⊢ c, is a backward deduction from c to S obtained by applying the rules in R backward, e.g. {hold(quack, duck), sim(duck, goose), arguably(quack, goose)} ⊢ hold(quack, goose).

In ABA, the notion of attack between arguments is defined in terms of the contrary of assumptions, i.e., an argument S1 ⊢ c1 attacks another (or the same) argument S2 ⊢ c2 iff c1 is the contrary of an assumption in S2.

In general, the contrary of an assumption is a sentence representing a challenge against the assumption and can be suggested by the critical questions (CQ) of an argumentation scheme (cf. Section 1 for their description). For instance, the assumption hold(A, C1) can be challenged by providing a negative answer to CQ1, i.e. ¬hold(A, C1), where the symbol ¬ denotes classical negation. Supplying a negative answer to CQ2 and CQ3 can also be understood as proving the contrary ¬sim(C1, C2) (i.e. C1 and C2 are dissimilar to each other) of the assumption sim(C1, C2). A negative answer to CQ4 can be understood as showing the contrary ¬hold(A, C2) of the assumption arguably(A, C2). This contrary ¬hold(A, C2) may be defined by an additional (domain-independent) inference rule: ¬hold(A, C2) ← sim(C1, C2), sim(C1, C3), hold(A, C1), ¬hold(A, C3). Contraries may also be derived via a chain of rules, e.g. ¬hold(quack, C) ← cord(A, C), ¬built(quack, A); cord(cordg, goose); ¬built(quack, cordg), representing an abnormality condition that their vocal cords are built differently. The overall ABA framework is summarized in Figure 1.

Figure 1. ABA framework for the running example.
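To make the encoding tangible, the following sketch represents the running-example ABA framework as plain data and derives a claim by naive backward chaining; the exact rule set of Figure 1 is assumed from the prose, and the neg_ prefix stands in for classical negation.

```python
# A sketch of the running-example ABA framework as assumed from the text.
# Rules map a head to a list of alternative bodies; assumptions carry contraries.
rules = {
    "hold(quack,goose)": [["hold(quack,duck)", "sim(duck,goose)",
                           "arguably(quack,goose)"]],
    "neg_hold(quack,goose)": [["cord(cordg,goose)", "neg_built(quack,cordg)"]],
    "cord(cordg,goose)": [[]],       # fact: a rule with an empty body
    "neg_built(quack,cordg)": [[]],  # fact: goose vocal cords are not built to quack
}
assumptions = {
    "hold(quack,duck)": "neg_hold(quack,duck)",        # contrary
    "sim(duck,goose)": "neg_sim(duck,goose)",
    "arguably(quack,goose)": "neg_hold(quack,goose)",  # CQ4-style challenge
}

def derivable(goal, used):
    # Naive backward chaining (assumes acyclic rules): True if goal follows
    # from the rules, collecting any assumptions used along the way.
    if goal in assumptions:
        used.add(goal)
        return True
    return any(all(derivable(b, used) for b in body)
               for body in rules.get(goal, []))

used = set()
print(derivable("neg_hold(quack,goose)", used), used)  # True, set(): no assumptions
```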

A Formal Notion of Concepts and Similarity

Subsection 2.2 shows that the argumentation scheme for argument from analogy may be encoded into ABA. However, several general problems remain unclear; for instance, how should the similarity predicate sim(C1, C2) be supplied to an ABA framework? We note that this point has already been mentioned in Section 1.

The example 'biological family of birds' illustrates that the similarity of concepts (or states of affairs) can be considered from their descriptions or their taxonomy, e.g., "ducks are a kind of birds which are adapted for swimming". Regarding this observation, any framework which encodes 'argument from analogy' must provide mechanisms to formalize the description of concepts. In a very simple way, we may formalize the description of concepts in terms of inference rules, for instance, duck(X) ← waterbird(X), feature_d(X), where feature_d(X) represents a unique characteristic of ducks. When inference rules are grounded, one can employ model theory to derive the similarity between predicates as in (Goebel Citation1989).

Though inference rules can encode our example, other knowledge representation formalisms which provide more expressivity may also be used to encode concepts, e.g., description logic (DL) (Baader et al. Citation2007) or (other fragments of) first-order logic. For example, the same description can be formalized in DL as Duck ⊑ WaterBird, in which '⊑' is read as 'is a'. Successful examples of DL knowledge bases are ontologies in medicine and bioinformatics, e.g., Snomed ct (www.snomed.org) or Go (www.geneontology.org).

The above investigation suggests that any ABA framework extended for analogical reasoning should be supplied with a module containing the formalized descriptions of concepts and the logical relationships between them. This section concentrates on concepts formalized in a DL formalism, and a similarity measure is defined for those concepts.

Preference Context for Having Relevance

Similarity of concepts is oftentimes context-sensitive and can be recognized from the comparison of features shared between them. Nevertheless, (Hesse Citation1965; Waller Citation2001; Weinreb Citation2016) reported that the features used in comparisons should be 'relevant' to the attribution of the property. This means that there must be ways of expressing aspects of the context under consideration. In the following, we introduce a notion called preference context which can be used to express such a context in a DL formalism.

In general, DL concept descriptions C, D (or simply concepts) can be defined inductively through a set CN of concept names and a set RN of role names as: C, D ::= A | ⊤ | C ⊓ D | ∃r.C | ∀r.C, where A ∈ CN, r ∈ RN, ⊤ denotes the top concept, and ⊓, ∃, ∀ are called concept constructors. A terminological knowledge base or TBox T is a set of formulae defined over concepts. Examples of TBox formulae are C ⊑ D (denoting "concept C is a kind of concept D") and C ≡ D (denoting "concept C is defined as concept D"). The following definition introduces different ways of expressing preferences over DL concepts.

Definition 3.1.

Let I1, I2 be non-empty sets equipped with partial orders ⪯_I1 and ⪯_I2, respectively, such that for any x ∈ I1 and any y ∈ I2, it holds that x ⪯ y; and let n ∉ I1 ∪ I2 be a special element representing the neutral. Let S, D be non-empty sets equipped with partial orders ⪯_S and ⪯_D, respectively. A preference context (denoted by p) is a quintuple ⟨ic, ir, sc, sr, d⟩ where ic, ir, sc, sr, d are 'partial' functions such that:

  • ic : CN → I1 ∪ {n} ∪ I2 captures the importance of concept names;

  • ir : RN → I1 ∪ {n} ∪ I2 captures the importance of role names;

  • sc : CN × CN → S captures the similarity of concept names;

  • sr : RN × RN → S captures the similarity of role names; and

  • d : RN → D captures the importance factor of a quantified role (e.g. ∃r) in relation to the corresponding concept (e.g. C) for quantified concepts (e.g. ∃r.C).

Now, we exemplify the above functions. Let I2 := {i1, i2} where i1 ⪯_I2 i2. Saying that an occurrence of Bird is more important than that of Lizard in a description can be expressed as ic(Bird) = i2 and ic(Lizard) = i1. The other functions are also straightforward to understand, except d; hence, we merely illustrate it next. Let D := {d1, d2} where d1 ⪯_D d2. Suppose one would like to compare ∃float.Water and ∃float.Air under the consideration that being 'floatable' is more influential than other properties. Then we may express this as d(float) = d2 while mapping other role names to d1.
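A preference context can be sketched as a bundle of partial maps, as follows; the numeric encodings of the orders below are illustrative assumptions.

```python
# A minimal sketch of a preference context as partial maps (Definition 3.1).
# The ordered sets I1, I2, S, D from the text are assumed to be encoded as
# plain comparable numbers for illustration.
from dataclasses import dataclass, field

@dataclass
class PreferenceContext:
    ic: dict = field(default_factory=dict)  # importance of concept names
    ir: dict = field(default_factory=dict)  # importance of role names
    sc: dict = field(default_factory=dict)  # similarity of concept-name pairs
    sr: dict = field(default_factory=dict)  # similarity of role-name pairs
    d: dict = field(default_factory=dict)   # importance factor of quantified roles

# Example from the text: Bird matters more than Lizard; 'float' is influential.
p = PreferenceContext(ic={"Bird": 2, "Lizard": 1}, d={"float": 1.0})
```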

A well-investigated concrete notion of preference context is the preference profile (denoted by π) introduced in (Racharak, Suntisrivaraporn, and Tojo Citation2018), where I1 := [0, 1), n := 1, I2 := (1, 2], S := [0, 1], and D := [0, 1]. Next, we discuss how a preference context can be taken into account in the development of concept similarity measures.

Concept Similarity under Preferences

This subsection defines a generic notion of concept similarity measure from two main observations. First, similarity of concepts should be 'subjective to' a preference context; this suggests that any similarity measure for concepts should be supplied with tunable parameters w.r.t. the preference context. Second, similarity of concepts is a 'direct generalization' of the equality relation for concepts (i.e. the concept equivalence relation). In DL, two concept descriptions C, D are 'equivalent' w.r.t. a TBox T (in symbols, C ≡_T D) iff their semantic representations C^I, D^I are the same, i.e. C^I = D^I, for every model (Footnote 2) I of T. We adopt these two viewpoints and introduce the following.

Definition 3.2.

Let P be an infinite set of preference contexts with p ∈ P, let Con(CN, RN) be a set of concept descriptions constructed from CN and RN with C, D ∈ Con(CN, RN), and let T be a TBox. Then, a concept similarity under preferences is a family of functions ∼_p^T : Con(CN, RN) × Con(CN, RN) → [0, 1] such that

∀p ∈ P : C ∼_p^T D = 1 ⟺ C ≡_T D
(called preference invariance w.r.t. concept equivalence) holds; and
  • C ∼_p^T D = 1 indicates maximal similarity (or concept equivalence) under preference context p w.r.t. T between concept descriptions C and D;

  • C ∼_p^T D = 0 indicates having no relation under preference context p w.r.t. T between concept descriptions C and D.

We require preference invariance w.r.t. concept equivalence because the use of a preference context should not affect the perception of semantically identical concept descriptions.

There also exist well-developed functions for concept similarity under preferences, such as the function simπ for an unfoldable TBox introduced in (Racharak, Suntisrivaraporn, and Tojo Citation2018). Basically, this function computes the degree of similarity between concepts (e.g. Duck and Goose) by calculating on their corresponding description trees (e.g. T_Duck and T_Goose, respectively). Since simπ can be considered an instance of ∼_p^T, it enables the agent to express his preferences in terms of a preference profile such that the degree of similarity between two concept descriptions reflects his perception. The following example shows that this similarity measure can be used to provide a numerical value representing the degree of similarity perception, in which the function hdπ computes the degree of directional tree similarity w.r.t. preference profile π. Its definition is omitted here due to the limited space.

Example 3.3.

Let TBox T := {Duck ⊑ WaterBird, Goose ⊑ WaterBird}, and let the default preference profile π0 (also introduced in (Racharak, Suntisrivaraporn, and Tojo Citation2018)) represent the agent's preferences in the default manner, i.e., when preferences are not given.

We compute the similarity of Duck and Goose using simπ with the preference profile π0, i.e. simπ0(Duck, Goose) = (hdπ0(T_Duck, T_Goose) + hdπ0(T_Goose, T_Duck)) / 2, where T_Duck, T_Goose represent the description trees of Duck and Goose, respectively. Since hdπ0(T_Duck, T_Goose) = (1)·[(1·max{1, 0} + 1·max{0, 0}) / (1 + 1)] = 1/2 and hdπ0(T_Goose, T_Duck) = 1/2, we obtain simπ0(Duck, Goose) = 1/2. This number indicates the degree of similarity between Duck and Goose under the default perception.
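The computation in Example 3.3 can be reproduced by the following rough sketch, under the simplifying assumption that hdπ0 reduces to the fraction of one description's primitive features shared with the other; the actual hdπ of Racharak, Suntisrivaraporn, and Tojo (2018) also accounts for roles and profile parameters.

```python
# A rough sketch reproducing Example 3.3 under simplifying assumptions: each
# concept is reduced to a set of primitive features, and hd is approximated as
# the fraction of the source's primitives shared with the target.
def hd(src: set, dst: set) -> float:
    return len(src & dst) / len(src)

def sim(c1: set, c2: set) -> float:
    # Average of the two directional similarities, as in the example.
    return (hd(c1, c2) + hd(c2, c1)) / 2

duck = {"WaterBird", "prim_duck"}    # Duck ⊑ WaterBird plus its own primitive
goose = {"WaterBird", "prim_goose"}  # Goose ⊑ WaterBird plus its own primitive
print(sim(duck, goose))  # 0.5, matching simπ0(Duck, Goose) = 1/2
```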

Assumption-based Argumentation with Predicate Similarity

We have discussed the theoretical analysis of using the ABA framework to model the argumentation scheme for argument from analogy (Subsection 2.2) and concept similarity under preferences for understanding the degree of similarity between concepts (Section 3). Though ABA alone could model the argumentation scheme for argument from analogy, it comes with several difficulties, as follows.

First, ABA does not concretely describe where the source of similarity premises comes from, how a notion of concept similarity should be involved, how 'relevance' of concept similarity is defined and affects the degree of analogical arguments, and how analogical arguments should interact with normal arguments in case of persuasion. These problems are basically related to redefining both the notion of structured arguments and the framework in such a way that arguments' types can be classified.

Second, an analogical argument should be associated with a particular degree, since each analogy used to support a claim is associated with a value in the unit interval [0, 1]. This degree should also contribute to the attack relation between arguments. It is worth mentioning that similarity could be 'qualitative' in the sense that one may only perceive whether two concepts are similar or not; in this case, a certain threshold should be defined for being similar, and each analogical argument could be associated with a binary value in {0, 1}, where 1 indicates 'similar' and 0 indicates 'not similar'.

Third, different rational agents may value arguments supported by analogies unequally, depending on their characteristics. This point is related to different styles of making judgments. For example, there could be a 'gullible' agent who always gives a high degree to every analogical argument, or, vice versa, a 'skeptical' agent who always gives a low one.

To address the first difficulty, we extend the original ABA framework to assumption-based argumentation with predicate similarity (denoted by ABA(p)) by identifying necessary components to form analogical arguments. In the following, the extended framework considers any arbitrary description language although DL terminological formalism is used in our running example.

Definition 4.1.

An ABA(p) framework is a 10-tuple ⟨L_D, R, A, ‾, L_T, T, ·^M, ∼_p^T, p, F⟩, where (L_T, T) is a module formalizing descriptions of concepts, with a language L_T and a set T of formulae (constructed from L_T) representing definitions of concepts; ·^M is a partial mapping from the predicates of sentences in L_D to concepts in L_T; ∼_p^T : L_T × L_T → [0, 1] is a certain concept similarity w.r.t. T under preference context p; F is an annotation function mapping each entire argument to a numerical value (Footnote 3); ‾ is a total mapping from A ∪ AN to L_D, where AN := {P ∼_p^T Q | P^M ∼_p^T Q^M ∈ (0, 1], for any P(t1, …, tp), Q(t1, …, tp) ∈ L_D} (Footnote 4) represents a set of analogies; and L_D, R, A are as defined in an ABA framework. An argument for c ∈ L_D (the conclusion or claim) supported by S ⊆ A ∪ AN is a tree with nodes labeled by sentences in L_D ∪ AN, by sentences of the special form ?(φ, ψ, ς) representing a defeasible condition of sentence φ concluded from an analogy between ψ and ς, or by the special symbol □ representing an empty set of premises, such that:

  • the root is labeled by c;

  • for every node N,

    ∘ if N is a leaf, then N is labeled by an assumption in A ∪ AN, an assumption of the form ?(φ, ψ, ς), or by □;

    ∘ if N is not a leaf, l_N is the label of N, and there is an inference rule l_N ← b1, …, bm (m ≥ 0) in R, then

    either m = 0 and the child of N is □,

    or m > 0 and N has m children, labeled by b1, …, bm, respectively;

    ∘ if N is not a leaf, l_N is the label of N where l_N := P(t1, …, tp), there is an analogy P ∼_p^T Q in AN, and there is

    either an inference rule Q(t1, …, tp) ← b1, …, bm (m ≥ 0) in R

    or Q(t1, …, tp) in A, then

    N has 3 children, labeled by P ∼_p^T Q, ?(l_N, P, Q), and Q(t1, …, tp);

  • S is the set of all assumptions labeling the leaves.

(L_T, T) can be defined for any kind of terminological formalism specified by means of a language L_T and a set of formulae T. For example, a DL terminological knowledge base can be recast by taking L_T := CN ∪ RN and letting T be a TBox constructed from L_T.

We note that ?(φ, ψ, ς) can be read as "conclusion φ, supported by an analogy between ψ and ς, is open to challenge". A challenge to φ could be the contrary of φ, which may possibly be drawn from other analogies (aka. counter-analogies) or from chains of inference rules. For example, a challenge to "sound2 created by bird2 is duck's sound" is evidence that sound2 is a honk sound. Like in ABA, assumptions are the only defeasible component in ABA(p) and they are used to support a conclusion. For the sake of simplicity, we clearly separate analogical assumptions from standard assumptions. That is, an argument for c supported by standard assumptions S_A ⊆ A and analogical assumptions S_AN := S \ S_A is denoted by S_A ∪ S_AN ⊢ c (i.e. S_A ∪ S_AN = S such that S_A ∩ S_AN = ∅). When S_AN is empty, i.e. S_A ⊢ c, we call such an argument a standard argument; otherwise, we call it an analogical argument. This style of writing helps one recognize analogical arguments and standard arguments at first glance.

It is worth noting that the study of analogical reasoning in logical systems is not new, since several studies already exist. For example, Goebel (Citation1989) provided a form of analogical reasoning in terms of a system of hypothetical reasoning, and Sun (Citation1995) integrated rule-based and similarity-based reasoning in a connectionist model. In argumentation systems, Racharak et al. (Citation2016) studied an implementation of analogical reasoning using argument-based logic programming, and (Racharak et al. Citation2017) proposed an idea to combine answer set programming with description logic. This work continues this line of research by generalizing (Racharak et al. Citation2017) to ABA.

To address the second difficulty, we define the function f : S → [0, 1] for annotating (both standard and analogical) assumptions as follows:

Definition 4.2.

Given a set S of assumptions, a partial mapping ·^M from the predicates of sentences in L_D to concepts in L_T, and a certain concept similarity ∼_p^T : L_T × L_T → [0, 1] w.r.t. terminological formalism T under preference context p, the (total) annotation function f : S → [0, 1] is defined, for any a ∈ S, as:

(1)  f(a) = { P^M ∼_p^T Q^M   if a is of the form P ∼_p^T Q,
              P^M ∼_p^T Q^M   if a is of the form ?(l_N, P, Q),
              1                otherwise. }
Intuitively, standard assumptions are labeled with 1, corresponding to the fact that the similarity relation is bounded by 1 (we note that 1 is used in ∼_p^T to indicate maximal similarity). Next, we extend f to the function F for annotating arguments. Each annotation represents the degree of the entire argument.

Definition 4.3.

Let SASANc be an argument. Then, a function F for annotating an entire argument is defined as:

(2)  F(S_A ∪ S_AN ⊢ c) = { ⊗{f(a_i), f(an_j)}   if S_A ∪ S_AN ≠ ∅,
                            1                    otherwise, }
where a_i ∈ S_A, an_j ∈ S_AN, and ⊗ is a triangular norm (t-norm).

Since the above definition employs the notion of a t-norm, we include its basics here for self-containment. A function ⊗ : [0, 1]² → [0, 1] is called a t-norm iff it fulfills the following properties for all x, y, z, w ∈ [0, 1]: (1) x ⊗ y = y ⊗ x (commutativity); (2) x ≤ z and y ≤ w imply x ⊗ y ≤ z ⊗ w (monotonicity); (3) (x ⊗ y) ⊗ z = x ⊗ (y ⊗ z) (associativity); (4) x ⊗ 1 = x (identity). A t-norm is called bounded iff x ⊗ y = 0 implies x = 0 or y = 0. There are several reasons for the use of a t-norm. Firstly, it is the generalization of conjunction in propositional logic. Secondly, the operator min (i.e. x ⊗ y = min{x, y}) is an instance of a bounded t-norm; this reflects the intuition that the strength of an argument depends on the 'weakest' analogical assumption used. Lastly, 1 acts as the neutral element for t-norms.
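The t-norms referred to above can be sketched as follows; mlt and H0 are named after Theorem 4.5, and the exact contents of Table 1 are not assumed here.

```python
# A minimal sketch of t-norm instances: min, the product mlt, and the
# Hamacher product H0 (cf. Theorem 4.5), plus the annotation F as a fold.
from functools import reduce

def t_min(x: float, y: float) -> float:
    return min(x, y)                 # the most gullible choice (Theorem 4.5)

def t_mlt(x: float, y: float) -> float:
    return x * y                     # the most skeptical choice (Theorem 4.5)

def t_h0(x: float, y: float) -> float:
    # Hamacher product; defined as 0 when both arguments are 0.
    return 0.0 if x == y == 0 else (x * y) / (x + y - x * y)

def annotate(degrees, tnorm=t_min):
    # F folds the chosen t-norm over assumption degrees; 1 is the neutral element.
    return reduce(tnorm, degrees, 1.0)

print(annotate([0.5, 1.0], t_min), annotate([0.5, 0.5], t_mlt))  # 0.5 0.25
```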

Concerning the third difficulty, the choice of ⊗ (cf. Table 1 for examples) can represent the type of a rational agent in analogical reasoning. For example, a gullible/skeptical agent may give a high/low degree to his answer when that answer is derived from analogies. We formalize this characteristic as follows (Footnote 5).

Definition 4.4.

Let S_A ∪ S_AN ⊢ c be an argument, and let F1 and F2 be two different functions representing different agents. Then, F1 is more gullible than F2 if F1(S_A ∪ S_AN ⊢ c) ≥ F2(S_A ∪ S_AN ⊢ c). On the other hand, F1 is more skeptical than F2 if F1(S_A ∪ S_AN ⊢ c) ≤ F2(S_A ∪ S_AN ⊢ c). Lastly, F1 and F2 are identical if F1 is both more gullible and more skeptical than F2.

Table 1. Some instances of the operator ⊗.

The following theorem helps in deciding which operator should be chosen for F in ABA(p). That is, if an agent strongly recognizes analogical principles, we may choose the most gullible function (i.e. min); on the other hand, we may choose the most skeptical function (i.e. mlt) if an agent weakly recognizes analogical principles.

Theorem 4.5.

Referring to Table 1, let x1, x2 ∈ (0, 1]. Then, mlt ⪯ H0 ⪯ min.

Proof. (Sketch) We show the following inequality:

x1·x2 ≤ x1·x2 / (x1 + x2 − x1·x2) ≤ min{x1, x2}

That is, we first show x1·x2 ≤ x1·x2 / (x1 + x2 − x1·x2) as follows:

x1·x2 ≤ x1·x2 / (x1 + x2 − x1·x2) ⟺ 1 ≤ 1 / (x1 + x2 − x1·x2) ⟺ x1 + x2 − x1·x2 ≤ 1
⟺ x2 − x1·x2 ≤ 1 − x1 ⟺ (1 − x1)·x2 ≤ 1 − x1 ⟺ x2 ≤ 1 (which holds by assumption; when x1 = 1, the inequality (1 − x1)·x2 ≤ 1 − x1 holds trivially)

Lastly, we show x1·x2 / (x1 + x2 − x1·x2) ≤ min{x1, x2} in a similar fashion. □
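The claimed ordering can also be checked numerically; a quick sketch over a grid of values in (0, 1]:

```python
# A numeric sanity check of Theorem 4.5 on a grid of (x1, x2) values:
# mlt(x1, x2) <= H0(x1, x2) <= min(x1, x2) for all x1, x2 in (0, 1].
grid = [i / 10 for i in range(1, 11)]
ok = all(
    x * y <= (x * y) / (x + y - x * y) <= min(x, y)
    for x in grid for y in grid
)
print(ok)  # True
```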

Attacks in ABA are defined in terms of the contrary of assumptions (cf. Subsection 2.2). However, argument trees and their supporting assumptions in ABA(p) are labeled with numbers, so the current definition of attacks in ABA is clearly not appropriate for handling attacks in ABA(p). To define the notion of attack in ABA(p), we extend the original definition of attack in ABA to take these numbers into account. In addition, the extended definition imposes a particular restriction on the usage of analogical reasoning for 'persuasion', i.e., analogical arguments are always preferable to standard arguments. These characteristics are formally defined as follows.

Definition 4.6.

Let the contrary mapping ‾, the function f, and the function F be as defined in Definition 4.1, Definition 4.2, and Definition 4.3, respectively. An argument S_A^1 ∪ S_AN^1 ⊢ c1 attacks an argument S_A^2 ∪ S_AN^2 ⊢ c2 iff the following is satisfied:

  • If S_AN^1 ≠ ∅ and S_AN^2 = ∅, then c1 is the contrary of an assumption in S_A^2;

  • Otherwise, c1 is the contrary of an assumption in S_A^2 ∪ S_AN^2 (i.e. there exists x ∈ S_A^2 ∪ S_AN^2 such that c1 = x̄) and F(S_A^1 ∪ S_AN^1 ⊢ c1) ≥ f(x).

The first condition spells out that an analogical argument may attack a standard argument. This characteristic corresponds to the investigation in (Waller Citation2001), where analogical arguments can be used for persuasion. For instance, saying "geese can quack because they are similar to ducks" may effect a change in the opponent's belief if no evidence falsifying the argument can be produced. To put it more precisely, an opponent can be persuaded to believe a conclusion, and that conclusion is inherently subject to challenge; hence, the burden of proof is shifted back to the opponent after he/she is persuaded to believe in that conclusion.

The second condition concerns another circumstance: an analogical argument can attack an assumption only if the argument is labeled with a number higher than or equal to the number associated with the assumption. This treatment is not used in (Waller Citation2001; Walton, Reed, and Macagno Citation2008).
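A minimal sketch of this attack condition, with arguments encoded as tuples; all encodings here are assumptions made for illustration.

```python
# A sketch of the ABA(p) attack condition (Definition 4.6). An argument is a
# tuple (standard assumptions, analogical assumptions, claim, degree);
# `contrary` maps each assumption to its contrary sentence, and `f` gives the
# degree of an assumption (1 for standard ones).
def attacks(arg1, arg2, contrary, f):
    sa1, san1, c1, deg1 = arg1
    sa2, san2, c2, _ = arg2
    if san1 and not san2:
        # An analogical argument may attack a standard argument outright.
        return any(contrary[x] == c1 for x in sa2)
    # Otherwise the attacker's degree must reach the attacked assumption's degree.
    return any(contrary[x] == c1 and deg1 >= f(x) for x in sa2 | san2)
```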

Example 4.7.

Figure 2 illustrates the overall ABA(p) framework for the running example. According to the figure, the framework uses simπ and π0 as concrete instances of ∼_p^T and p, respectively. The figure also uses ≁_p^T to indicate 'being not similar under preference context p w.r.t. T'. The following two arguments can be constructed from the framework.

  • {goose(bird2, sound2)} ∪ {duck ∼_p^T goose, ?(quack(sound2), duck, goose)} ⊢ quack(sound2), representing "sound2 created by bird2 is a quack sound because bird2 is a goose and geese are similar to ducks";

  • ∅ ⊢ honk(sound2), representing "sound2 is a honk sound".

Figure 2. ABA(p) framework for the running example.

Hence, the second argument attacks the first argument. It is also worth observing that, in this case, varying the choice of ⊗ does not affect the attack relation between these two arguments, even though the degree of an argument changes. For example, if min is used, then the degree of the first argument is 0.5; on the other hand, if mlt is used, then the degree of the first argument is 0.25.

The following theorem states an observation which can be derived from Definition 4.6.

Theorem 4.8.

An analogical argument cannot attack a standard argument which does not use assumptions to support a claim.

Proof. Let argument G1 be defined as S_A^1 ∪ S_AN^1 ⊢ c1 and argument G2 be defined as ∅ ⊢ c2. We need to show that G1 cannot attack G2.

Since G2 contains no assumptions and, by Definition 4.6, every attack targets an assumption of the attacked argument, we conclude that G1 cannot attack G2. □

Theorem 4.8 shows that when an agent supports a claim from ground truth, it is impossible for other agents to persuade him/her by analogies. This corresponds to how analogical arguments are treated in practical reasoning.

Acceptability in ABA(p) and Its Link to Argumentation Scheme

Acceptability of Arguments in ABA(p)

ABA(p) extends ABA by equipping it with predicate similarity, and its attack definition is also extended to handle the degree of each argument and the preference between different types of arguments. Hence, ABA(p) can be considered an instance of Dung's abstract argumentation. This implies that it can be used to determine whether a given claim is 'accepted' by a rational agent. In the sense of analogical argumentation, the claim could be a potential belief to be justified from analogies.

In order to determine the ‘acceptability’ of a claim, the agent needs to find an argument for the claim that can be defended against attacks from other arguments. To defend an argument, other arguments must be found and may need to be defended in turn (Dung, Kowalski, and Toni Citation2009). We formally define these characteristics as follows:

  • A set of arguments Arg1 attacks a set of arguments Arg2 if an argument in Arg1 attacks an argument in Arg2;

  • A set of arguments Arg defends an argument arg if Arg attacks all arguments that attack {arg}.

As in Dung’s abstract argumentation, the notion of ‘acceptability’ can be formalized in many ways. In this work, we focus on the following notions:

  • A set of arguments is admissible iff it does not attack itself and it attacks every argument that attacks it;

  • An admissible set of arguments is complete if it contains all arguments that it defends;

  • The least (w.r.t. set inclusion) complete set of arguments is grounded.
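For illustration, the grounded extension can be computed as the least fixed point of the characteristic function; a minimal sketch over an explicit attack relation:

```python
# A minimal sketch of computing the grounded extension: start from the empty
# set and repeatedly add every argument defended by the current set, until a
# fixed point is reached.
def grounded(args, attacks):
    extension = set()
    while True:
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for b in args if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# Toy example: b attacks a, c attacks b; the grounded extension is {a, c}.
print(grounded({"a", "b", "c"}, {("b", "a"), ("c", "b")}))  # {'a', 'c'}
```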

We observe that the correspondence between ‘acceptability’ of arguments and ‘acceptability’ of assumptions in ABA(p) can be argued in the same way as in (Dung, Mancarella, and Toni Citation2007) for the link between ABA and AA. Hence, we know:

  • If a set of assumptions S is admissible/grounded, then the union of all arguments supported by any subset of S is admissible/grounded;

  • If a set of arguments S is admissible/grounded, then the union of all sets of assumptions supporting the arguments in S is admissible/grounded.

The above notion of acceptable sets of arguments provides a non-constructive specification. Now, we show how to turn this specification into a constructive proof procedure. The method we focus on here is defined for a 'grounded' set of arguments and extends the one in (Dung, Mancarella, and Toni Citation2007) to handle analogical arguments.

Informally, this constructive proof procedure is known as a dispute derivation, which is defined as a sequence of transition steps from one state of a dispute to another. For each state, we maintain the following information. Component P maintains sets of (both standard and analogical) assumptions, which are used to support potential arguments of the proponent. Component O maintains multiple sets of assumptions, which are used to support all attacking arguments of the opponent. Component D holds a set of assumptions which have already been used by the proponent. Component C holds a set of assumptions which have already been used by the opponent and have been attacked by the proponent. Component SP maintains a set of triples holding an opponent's attacked assumption, a set of proponent's assumptions supporting a contrary of the attacked assumption, and a set of opponent's assumptions supporting the argument. Component SO maintains a set of triples holding a proponent's attacked assumption, a set of proponent's assumptions supporting the argument, and a set of opponent's assumptions supporting a contrary of the attacked assumption. In the following, we formally define the dispute derivation for a 'grounded' set of arguments.

Definition 5.1.

Let an ABA(p) framework be the 10-tuple ⟨L_D, R, A, ‾, L_T, T, ·^M, ∼_p^T, p, F⟩. Given a 'patient' selection function (Footnote 6), a 'grounded belief' dispute derivation of a defence set Δ for a sentence δ is a finite sequence

⟨P0, O0, D0, C0, SP0, SO0⟩, …, ⟨Pi, Oi, Di, Ci, SPi, SOi⟩, …, ⟨Pn, On, Dn, Cn, SPn, SOn⟩

where P0 := {{δ}}, D0 := A ∩ {δ}, O0 := ∅, C0 := ∅, SP0 := ∅, SO0 := ∅, Pn := ∅, On := ∅, Δ := Dn, and for every 0 ≤ i < n, only one S in Pi or one S in Oi is selected, and:

  1. if S is selected in Pi and σ is selected in S, then

    (a) if σ is an assumption, then

      Pi+1 := (Pi \ {S}) ∪ {S \ {σ}},  Oi+1 := Oi ∪ {{σ̄}},
      and SOi+1 := SOi ∪ {⟨σ, S, {σ̄}⟩};

    (b) else if there exists an inference rule σ ← R ∈ R such that Ci ∩ R = ∅, then

      Pi+1 := (Pi \ {S}) ∪ {(S \ {σ}) ∪ R},  Di+1 := Di ∪ (A ∩ R),
      and SPi+1 := (SPi \ {⟨φ, PA, OA⟩}) ∪ {⟨φ, (PA \ {σ}) ∪ R, OA⟩}
      for any ⟨φ, PA, OA⟩ ∈ SPi such that σ ∈ PA;

      and if R ∩ A ≠ ∅, then further validation needs to be checked:

      for any ⟨φ, PA, OA⟩ ∈ SPi+1 such that PA ∪ OA ⊆ A ∪ AN, we have

      either PA ∩ AN ≠ ∅ and OA ⊆ A

      or F(PA) ≥ f(φ);

    (c) else if σ := P(t1, …, tp) and there exists ϕ := Q(t1, …, tp)

      such that P^M ∼_p^T Q^M ∈ (0, 1], then

      Pi+1 := (Pi \ {S}) ∪ {(S \ {σ}) ∪ {P ∼_p^T Q, ?(σ, P, Q), ϕ}},
      Di+1 := Di ∪ {P ∼_p^T Q, ?(σ, P, Q)} ∪ (A ∩ {ϕ}),
      and SPi+1 := (SPi \ {⟨φ, PA, OA⟩}) ∪ {⟨φ, (PA \ {σ}) ∪ {P ∼_p^T Q,
      ?(σ, P, Q), ϕ}, OA⟩} for any ⟨φ, PA, OA⟩ ∈ SPi such that σ ∈ PA;

      and if ϕ ∈ A, then the same validation as in Case 1.b is required.

  2. If S is selected in Oi and σ is selected in S, then

    (a) if σ is an assumption, then

      (i) either σ is ignored, i.e.

      Oi+1 := (Oi \ {S}) ∪ {S \ {σ}},

      (ii) or σ ∉ Di and

      Oi+1 := Oi \ {S},  Pi+1 := Pi ∪ {{σ̄}},  Di+1 := Di ∪ ({σ̄} ∩ A),
      Ci+1 := Ci ∪ {σ},  and SPi+1 := SPi ∪ {⟨σ, {σ̄}, S⟩};

    (b) else if A′ := {R | σ ← R ∈ R} and A′ ≠ ∅, then

      Oi+1 := (Oi \ {S}) ∪ ⋃_{R ∈ A′} {(S \ {σ}) ∪ R}
      and SOi+1 := (SOi \ {⟨φ, PA, OA⟩}) ∪ ⋃_{R ∈ A′} {⟨φ, PA, (OA \ {σ}) ∪ R⟩}
      for any ⟨φ, PA, OA⟩ ∈ SOi such that σ ∈ OA;

      and further validation must be satisfied:

      for any ⟨φ, PA, OA⟩ ∈ SOi+1 such that PA ∪ OA ⊆ A ∪ AN, we have

      either OA ∩ AN ≠ ∅ and PA ⊆ A

      or F(OA) ≥ f(φ);

    (c) else if σ := P(t1, …, tp), A′ := {Q(t1, …, tp) | P^M ∼_p^T Q^M ∈ (0, 1]},

      and A′ ≠ ∅, then

      Oi+1 := (Oi \ {S}) ∪ ⋃_{Q(t1,…,tp) ∈ A′} {(S \ {σ}) ∪ {P ∼_p^T Q,
      ?(σ, P, Q), Q(t1, …, tp)}}, and SOi+1 := (SOi \ {⟨φ, PA, OA⟩}) ∪
      ⋃_{Q(t1,…,tp) ∈ A′} {⟨φ, PA, (OA \ {σ}) ∪ {P ∼_p^T Q, ?(σ, P, Q), Q(t1, …, tp)}⟩}
      for any ⟨φ, PA, OA⟩ ∈ SOi such that σ ∈ OA;

      plus, the same validation as in Case 2.b is required;

    (d) else Oi+1 := Oi \ {S} and

      SOi+1 := SOi \ {⟨φ, PA, PO⟩ | ⟨φ, PA, PO⟩ ∈ SOi and PO = S}.

A dispute derivation can be seen as a way of representing a 'potential' winning strategy for a proponent to win a dispute against an opponent. The proponent starts by putting forward a claim whose acceptability is under dispute. After that, there are several possibilities. The opponent can try to attack the proponent's claim by arguing for its contrary (cf. Case 1.a). The proponent argues for a non-assumption by using an inference rule (cf. Case 1.b). If no inference rule exists, the proponent can use an analogy to support the initial claim (cf. Case 1.c). Moreover, the proponent can select an assumption in one of the opponent's attacks and either ignore it because it is not selected as a culprit (cf. Case 2.a.i) or decide to counter-attack it by showing its contrary (cf. Case 2.a.ii). Otherwise, the opponent can argue for a non-assumption by using either an inference rule (cf. Case 2.b) or an analogy (cf. Case 2.c). The opponent may also have no reason at all to argue for it (cf. Case 2.d). In addition, every attacking argument of the opponent against the proponent's claim is maintained inside SO, i.e., ⟨σ, S, {σ̄}⟩ is read as "assumption σ in a set of proponent's assumptions S is attacked by a set of assumptions {σ̄}". Every attacking argument of the proponent against the opponent's claim is also maintained inside SP, i.e., ⟨σ, {σ̄}, S⟩ is read as "assumption σ in a set of opponent's assumptions S is attacked by a set of assumptions {σ̄}".
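As an illustration of the bookkeeping only, the dispute state can be sketched as follows; the transition rules of Definition 5.1 are elided, and all encodings are assumptions made for this sketch.

```python
# A minimal sketch of the state carried along a grounded-belief dispute
# derivation (Definition 5.1); the transition rules themselves are elided.
from dataclasses import dataclass, field

@dataclass
class DisputeState:
    P: list = field(default_factory=list)   # proponent's pending assumption sets
    O: list = field(default_factory=list)   # opponent's pending assumption sets
    D: set = field(default_factory=set)     # defence set built so far
    C: set = field(default_factory=set)     # culprits attacked by the proponent
    SP: set = field(default_factory=set)    # proponent's attacks: (attacked, PA, OA)
    SO: set = field(default_factory=set)    # opponent's attacks: (attacked, PA, OA)

def initial_state(delta: str, assumptions: set) -> DisputeState:
    # P0 := {{δ}}, D0 := A ∩ {δ}; all other components start empty.
    return DisputeState(P=[{delta}], D={delta} & assumptions)
```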

We give an informal dispute derivation for the running example.

Example 5.2. Consider the ABA(p) framework given in Figure 2 and let min be used. Table 2 shows that there does not exist a grounded belief dispute derivation for quack(sound2), where (Footnote 7) the abbreviated symbols denote the assumption set {d ∼_p^T g, ?(d(b2,s2), d, g), g(b2,s2)} and the triples ⟨?(d(b2,s2), d, g), ∅, {h(s2)}⟩, ⟨?(d(b2,s2), d, g), ∅, {c(c1,s2), bfh(c1)}⟩, ⟨?(d(b2,s2), d, g), ∅, {bfh(c1)}⟩, ⟨?(d(b2,s2), d, g), ∅, ∅⟩, ⟨g(b2,s2), {d ∼_p^T g, g(b2,s2)}, {¬g(b2,s2)}⟩, and ⟨g(b2,s2), {d ∼_p^T g, g(b2,s2)}, {d ≁_p^T g}⟩, respectively.

Table 2. A grounded belief dispute derivation for quack(sound2).

At step 2, the proponent (P) has completed the construction of an argument for q(s2) supported by the assumption set above, saying that "s2 is a quack sound because goose b2 makes s2 and geese are similar to ducks". At step 3, the opponent (O) has decided to attack the assumption ?(d(b2,s2), d, g) by showing its contrary h(s2). This argument is fully constructed at step 6, in which no assumptions have been used. Nonetheless, this attacking argument needs to be checked against SO6 to see whether it satisfies the requirements of argument from analogy; since it does, step 6 is valid. Finally, since no argument of the proponent can defend against the opponent's argument at step 10, this dispute derivation fails.

In an analogous manner, we can find a grounded belief dispute derivation of {d(b1,s1)} for q(s1) with three transition steps.

Relationship to Argumentation Scheme for Argument from Analogy

Since ABA(p) extends ABA with the capability of supporting a conclusion from similarity premises, the notion of argument trees in ABA(p) can also be used to display the structural relationships between conclusions and assumptions, including standard assumptions and analogical assumptions. Figure 3 illustrates an example of argument trees for the arguments discussed in Example 4.7. The figure uses a rounded rectangle to indicate an argument tree, a number floating near a rounded rectangle to indicate the annotated degree of that entire argument, a number floating near an assumption to indicate the annotated degree of that assumption, and a dashed arrow to indicate an attack. For example, the top rounded rectangle shows the structural relationship of the argument "sound2 created by bird2 is a quack sound of ducks because ducks are similar to geese and we know that bird2, which is a goose, creates sound2", whereas the bottom rounded rectangle shows the structural relationship of the argument "sound2 is a honk sound because it is created from cord1 and that cord is built for honking". The figure also depicts that the bottom one attacks the top one.

Figure 3. An example of argument trees and their relationship.

One may observe that the structural relationship represented by an argument tree directly corresponds to the relationship between premises and a conclusion in the argumentation scheme. That is, a similarity premise appears as an assumption of the form P ∼_p^T Q, and a base premise appears as either an assumption in A or an inference rule with an empty body in R; they appear as nodes in an argument tree. A conclusion drawn from the use of the argumentation scheme is represented as the parent of those nodes in an argument tree. This structure clearly explains the relationship indicated in the argumentation scheme.

The critical questions can also be captured in ABA(p). Recall the critical questions (CQ) for the scheme argument from analogy listed in Section 1. Firstly, asking CQ1 is captured by the provability of a claim, i.e., a backward deduction from a claim to its supporting assumptions. Secondly, CQ2 and CQ3 are formalized by the use of a similarity measure together with a supplied terminological formalism; since a similarity measure of concepts identifies the degree of commonality, it automatically models these questions. Lastly, the notion of counter-analogies can also be modeled by constructing arguments from other analogies that derive the contrary of the defeasible condition of the former argument.

Argumentation schemes employ the idea of asking critical questions to evaluate the acceptability of generated arguments. In ABA(p), we evaluate acceptability by employing the notion of attack together with a semantics of argumentation frameworks (Dung Citation1995), insisting that sets of acceptable arguments do not attack themselves and counter-attack all of the opponent's arguments (aka. admissible sets of arguments).

Comparison with Related Works

There have been attempts at modeling analogical reasoning, including our recent work (Racharak et al. Citation2016, Citation2017), whose results are continued in this work. We note that both formalized the scheme argument from analogy and provided a logical language which enables finding analogical conclusions. On the other hand, (Racharak et al. Citation2016) extended the syntax and argumentative features of DeLP for handling analogical arguments, whereas (Racharak et al. Citation2017) translated the logical language into an answer set program and used an answer set solver to compute analogical conclusions. As (Racharak et al. Citation2016) extended DeLP, this work differs from it in the structure of the argument notion. Another difference is that (Racharak et al. Citation2016) is more computationally oriented and has restricted expressiveness, whereas ABA(p), like ABA, is a more general framework for analogical argumentation. Regarding (Racharak et al. Citation2017), it is worth observing that their definition of a knowledge base can be captured by an ABA(p) framework: a logic program LP is mapped to the ABA component, O is a concrete instance of (L_T, T), and ∼_π^T is an abstract instance of ∼_p^T. However, (Racharak et al. Citation2017) ignored analogical degrees in their computational method; we have completed that part and generalized the approach in this work.

A similar attempt to (Racharak et al. Citation2016, Citation2017), i.e., combining rules and similarities, was proposed in (Sun Citation1995). In that work, a two-level connectionist model was developed. The first level (called CL) had one node for each domain concept, whereas the second level (called CD) had fine-grained features into which all domain concepts could be decomposed. Characteristics of similarity measures (denoted by ≈ in (Sun Citation1995)) were also discussed, and a formula based on the above two-level model was proposed for concepts A, B as: A ≈ B = |F_A ∩ F_B| / |F_B|, where F_A, F_B are the feature sets of A and B defined in CD. It is worth observing that those two levels and the similarity formula can be represented as (L_T, T) and ∼_p^T, respectively. However, how defeasible conditions and the notion of relevance should be handled was not discussed concretely.

In (Goebel Citation1989), the form of analogical reasoning was cast as hypothetical reasoning: source knowledge ∪ target knowledge ∪ equality assumptions ⊢ conclusions, where equality assumptions can be viewed as similarity between the source and the target. If there were many equality assumptions, certain explicit preferences, e.g., the highest number of shared properties, were used. However, the defeasible conditions and the notion of relevance were also not concretely discussed there. It is also worth observing that source knowledge and target knowledge can be recast in (L_T, T), and the criterion for forming equality assumptions can be made explicit in ∼_p^T.

Case-based reasoning (CBR) can also be viewed as a form of analogical reasoning. In CBR, dimensions and factors are used for comparing cases, and the decision in a precedent case is then taken as the decision in the current case. Examples of CBR systems are HYPO (Ashley Citation2006) and CATO (Aleven Citation1997). In ABA(p), CBR can be recast by placing rules c ← f1, …, fn in T and rules pi ← ci in R; the similarity between two cases ci is then measured from their common features fi.

Comparing this work with defeasible reasoning formalisms, particularly Nute's d-Prolog (Gabbay, Hogger, and Robinson Citation1998, pp. 353–396), different forms of rules were introduced there, viz. strict (unchallengeable) rules, defeasible (challengeable) rules, and defeater (exceptionable) rules. Examples of strict rules, defeasible rules, and defeater rules are "all penguins are birds", "birds normally fly", and "sick birds do not fly", respectively. Like in ABA, inference rules in ABA(p) can be seen as strict rules, and a simple transformation (as used in Theorist (Poole Citation1988)) can be employed to convert defeasible rules into strict rules with assumptions, as sketched below. Moreover, we may observe that ABA(p) does not need to be supplied with defeater rules, since it can find counter-arguments, including counter-analogies, among the arguments it is able to build.
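A hypothetical sketch of such a transformation follows; the names are illustrative, not taken from Theorist or d-Prolog.

```python
# A sketch of a Theorist-style transformation: a defeasible rule is converted
# into a strict rule guarded by a fresh named assumption; a defeater then
# becomes an ordinary strict rule for that assumption's contrary.
def make_defeasible(head, body, name):
    guard = f"normally_{name}"            # fresh assumption guarding the rule
    strict_rule = (head, body + [guard])
    contrary = f"neg_normally_{name}"     # its contrary, the target for defeaters
    return strict_rule, guard, contrary

# "Birds normally fly" with defeater "sick birds do not fly":
rule, guard, contrary = make_defeasible("fly(X)", ["bird(X)"], "fly")
defeater = (contrary, ["bird(X)", "sick(X)"])
print(rule, guard, defeater)
```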

One may wish to compare ABA(p) with an abstract framework of argumentation equipped with a preorder relation, e.g., the preference-based argumentation framework (PAF) introduced in (Amgoud and Cayrol Citation2002). Formally, a PAF is a triple ⟨Args, Attack, ⪯⟩ where Args is a set of arguments, Attack is an attack relation, and ⪯ is used to define a 'defeat' relation on each attack. It is not difficult to observe the correspondence between an ABA(p) framework and a PAF: informally, each argument tree in ABA(p) is mapped to an argument in Args, and an attack in ABA(p) between argument trees is mapped to a defeat relation, in which the usage of an argument's degree and the preference for analogical arguments can be captured in a preorder relation. Their further theoretical relationship is left for future work.

Discussion and Future Work

This paper introduces a structured argumentation framework called ABA(p), which formalizes the argumentation scheme for argument from analogy. ABA(p) offers ways to encode the pattern of reasoning in argument from analogy and its critical questions, where concepts (or states of affairs) are represented by predicates in an underlying language and are defined by a particular terminological formalism. Its underlying mechanism consists of four main components, viz. an ABA framework, a terminology, a concept similarity under preferences, and a preference context. When no assumptions are available to construct an argument tree, additional assumptions can be constructed from the use of a similarity measure w.r.t. a terminology and a preference context. In other words, ABA(p) draws a connection between two different formalisms, i.e., inference rules and terminological sentences, for dealing with analogical argumentation.

ABA(p) is meant to be a general framework for analogical reasoning; thus, the notions other than the ABA framework also remain general. For instance, one may express a terminology as inference rules in T underlying a language L_T, and ∼_p^T may be defined as a proportion of common features to different features, as discussed in Section 6. In this work, we exemplify how one can use a particular description logic to express terminological formulae, and our recently developed measure simπ is also demonstrated. One benefit of using description logics is that their expressivity and computational complexity have been clearly studied (Baader et al. Citation2007).

Like in ABA, all semantic notions for determining the acceptability of arguments in AA also apply to arguments in ABA(p). Thus, we investigate a constructive proof procedure for determining a grounded set of assumptions in this work. Since different agents may value analogies in their reasoning unequally, we also study how each choice of the operator ⊗ can influence different types of agents in analogical reasoning. Concerning other semantic notions of acceptability, an obvious direction for future work is to investigate dispute derivations for them and to further study how each semantic notion contributes to analogical argumentation in practice.

Other future directions are as follows. Firstly, we intend to apply the framework in practical domains where analogical reasoning is extensively used, e.g., in clinical practice. In the clinical domain, many terminologies exist and are represented in description logics, e.g., Snomed ct and Go; the remaining task will then be encoding the actual methods of medical experts in terms of inference rules. Secondly, in light of argumentation schemes, (Macagno, Walton, and Tindale Citation2017) developed inferential structures and defeasibility conditions for analogical arguments; thus, we aim to investigate whether such inferential structures can be captured by ABA(p). Finally, we are interested in theoretically studying the relationship between other instances of PAF and ABA(p) from the viewpoint of analogical argumentation.

Acknowledgement

This research is part of the JAIST-NECTEC-SIIT dual doctoral degree program; is supported by the Japan Society for the Promotion of Science (JSPS kaken no. 17H02258) and is partly supported by CILS of Thammasat University.

Notes

1. We use inference rule schemata, with variables starting with capital letters, to stand for the set of all instances obtained by instantiating the variables so that the resulting premises and conclusions are sentences of the underlying language. For simplicity, we omit the formal definition of the language underlying our examples.

2. In DL, a structure ⟨Δ^I, ·^I⟩, where Δ^I is a non-empty domain and ·^I is an interpretation function mapping each concept name A to A^I ⊆ Δ^I and each role name r to r^I ⊆ Δ^I × Δ^I, is said to be a model of a TBox T if it satisfies all formulae in the obvious way, i.e. A^I ⊆ C^I for all formulae A ⊑ C, A^I = C^I for all formulae A ≡ C, and r^I ⊆ s^I for all formulae r ⊑ s in T.

3. See Definition 4.3, for its formal definition.

4. If p = 0, both P and Q are called propositions.

5. The choice of ∼_p^T also contributes to the type of a rational agent; that is, different concrete measures may exhibit different degrees of skepticism. However, the definition only pays attention to the gullibility contributed by F.

6. A patient selection function always prefers a non-assumption to an assumption in its selection.

7. Obvious abbreviations are used here for the sake of succinctness.

References

  • Aleven, V. 1997. Teaching case-based argumentation through a model and examples. PhD diss., University of Pittsburgh, Pittsburgh, Pennsylvania.
  • Amgoud, L., and C. Cayrol. 2002. Inferring from inconsistency in preference-based argumentation frameworks. Journal of Automated Reasoning 29 (2):125–69. doi:10.1023/A:1021603608656.
  • Ashley, K. 2006. Case-based reasoning. In Information technology and lawyers: Advanced technology in the legal domain, from challenges to daily routine, 23–60. Berlin: Springer.
  • Baader, F., D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider, eds. 2007. The description logic handbook: Theory, implementation, and applications. New York, NY, USA: Cambridge University Press.
  • Baroni, P., and M. Giacomin. 2009. Semantics of abstract argument systems, 25–44. Boston, MA: Springer US.
  • Bartha, P. 2010. By parallel reasoning: The construction and evaluation of analogical arguments. New York: Oxford University Press.
  • Bondarenko, A., P. M. Dung, R. A. Kowalski, and F. Toni. 1997. An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence 93 (1–2):63–101. doi:10.1016/S0004-3702(97)00015-5.
  • Copi, I. M., C. Cohen, and K. McMahon. 2016. Introduction to logic. Routledge.
  • Davies, T. R. 1988. Determination, uniformity, and relevance: Normative criteria for generalization and reasoning by analogy. In Analogical reasoning, 227–50. Springer.
  • Dung, P., P. Mancarella, and F. Toni. 2007. Computing ideal sceptical argumentation. Artificial Intelligence 171 (10):642–74. doi:10.1016/j.artint.2007.05.003.
  • Dung, P. M. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77 (2):321–57. doi:10.1016/0004-3702(94)00041-X.
  • Dung, P. M., R. A. Kowalski, and F. Toni. 2009. Assumption-based argumentation, 199–218. Boston, MA: Springer US.
  • Gabbay, D. M., C. J. Hogger, and J. A. Robinson. 1998. Handbook of logic in artificial intelligence and logic programming: Volume 5: Logic programming. Clarendon Press.
  • García, A. J., and G. R. Simari. 2004. Defeasible logic programming: An argumentative approach. Journal of Theory and Practice of Logic Programming 4 (2):95–138. doi:10.1017/S1471068403001674.
  • Goebel, R. 1989. A sketch of analogy as reasoning with equality hypotheses, 243–53. Berlin, Heidelberg: Springer Berlin Heidelberg.
  • Guarini, M., A. Butchart, P. S. Smith, and A. Moldovan. 2009. Resources for research on analogy: A multi-disciplinary guide. Informal Logic 29 (2): 84–197.
  • Hesse, M. B. 1965. Models and analogies in science.
  • Macagno, F., D. Walton, and C. Tindale. 2017. Analogical arguments: Inferential structures and defeasibility conditions. Argumentation 31 (2):221–43. doi:10.1007/s10503-016-9406-6.
  • Modgil, S., and H. Prakken. 2014. The ASPIC+ framework for structured argumentation: A tutorial. Argument and Computation 5 (1):31–62. doi:10.1080/19462166.2013.869766.
  • Poole, D. 1988. A logical framework for default reasoning. Artificial Intelligence 36 (1):27–47. doi:10.1016/0004-3702(88)90077-X.
  • Racharak, T., B. Suntisrivaraporn, and S. Tojo. 2018. Personalizing a concept similarity measure in the description logic ELH with preference profile. Computing and Informatics 37 (3):581–613. doi:10.4149/cai_2018_3_581.
  • Racharak, T., S. Tojo, N. D. Hung, and P. Boonkwan. 2016. Argument-based logic programming for analogical reasoning. In Setsuya Kurahashi, Yuiko Ohta, Sachiyo Arai, Ken Satoh and Daisuke Bekki, editors, New frontiers in artificial intelligence - JSAI-isAI 2016 workshops, 253–269.
  • Racharak, T., S. Tojo, N. D. Hung, and P. Boonkwan, 2017. Combining answer set programming with description logics for analogical reasoning under an agent’s preferences, in: Proceedings of the 30th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE), Arras, France, 306–316.
  • Raha, S., A. Hossain, and S. Ghosh. 2008. Similarity based approximate reasoning: Fuzzy control. Journal of Applied Logic 6 (1):47–71. doi:10.1016/j.jal.2007.01.001.
  • Sun, R. 1995. Robust reasoning: Integrating rule-based and similarity-based reasoning. Artificial Intelligence 75 (2):241–95. doi:10.1016/0004-3702(94)00028-Y.
  • Tversky, A. 1977. Features of similarity. Psychological Review 84 (4):327. doi:10.1037/0033-295X.84.4.327.
  • Waller, B. N. 2001. Classifying and analyzing analogies. Informal Logic 21 (3). doi:10.22329/il.v21i3.2246.
  • Walton, D. 2010. Similarity, precedent and argument from analogy. Artificial Intelligence and Law 18 (3):217–46. doi:10.1007/s10506-010-9102-z.
  • Walton, D., C. Reed, and F. Macagno. 2008. Argumentation schemes. Cambridge University Press.
  • Weinreb, L. L. 2016. Legal reason: The use of analogy in legal argument. Cambridge University Press.
