Learning Answer Set Programming Rules For Ethical Machines

Abeer Dyoub1, Stefania Costantini1, and Francesca A. Lisi2
1 Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica, Università degli Studi dell'Aquila, Italy
[email protected], [email protected]
2 Dipartimento di Informatica & Centro Interdipartimentale di Logica e Applicazioni (CILA), Università degli Studi di Bari "Aldo Moro", Italy
[email protected]
Abstract. Codes of ethics are abstract rules that are often quite difficult to apply. Abstract principles such as these contain open-textured terms that cover a wide range of specific situations. These codes are subject to interpretation and may have different meanings in different contexts. From a computational point of view, most of these codes pose an implementation problem: they lack clear procedures for implementation. In this work we present a new approach, based on Answer Set Programming and Inductive Logic Programming, for monitoring employee behavior with respect to ethical violations of their company's codes of ethics. After briefly reviewing the domain, we introduce our proposed approach, followed by a discussion; we conclude by highlighting possible future directions and potential developments.
1 Introduction
Motivation and Background Machine Ethics is an emerging interdisciplinary field which draws heavily from philosophy and psychology [23]. It is concerned with the moral behavior of artificial intelligent agents. Nowadays, with the growing power and increasing autonomy of artificial intelligent agents, which are used in our everyday life to perform tasks on our behalf, it has become imperative to equip these agents with capabilities for ethical reasoning. Robots in elder care, robot nannies, virtual companions, chatbots, robotic weapons systems, autonomous cars, etc. are examples of artificial intelligent systems undergoing research and development. These kinds of systems usually need to engage in complex interactions with humans. For this reason, taking ethical aspects into consideration during the design of such machines has become a pressing concern.
The problem of adopting an ethical approach to AI has been attracting a lot of attention in the last few years. Recently, the European Commission published the 'Draft Ethics Guidelines for Trustworthy AI' [18]. In this document, the European Commission's High-Level Expert Group on Artificial Intelligence

specifies the requirements of trustworthy AI, and the technical and non-technical methods to ensure the implementation of these requirements in AI systems. There is a worldwide urge that ethics should be embedded in the design of intelligent autonomous systems and technologies (IEEE global initiative 'Ethics in Action' 3). The tech giant Google, after a protest from company employees over ethical concerns, ended its involvement in a Pentagon project on autonomous weapons 4. Because of the controversy over its Pentagon work, Google laid down a set of AI principles 5 meant as a guide for future projects. However, the new principles are open to interpretation. Moral thinking pervades everyday decision making, yet understanding the nature of morality and the psychological underpinnings of moral judgment and decision making has always been a major concern for researchers. Moral judgment and decision making often concern actions that entail some harm, especially loss of life or other physical harm, loss of rightful property, loss of privacy, or other threats to autonomy. Moral decision making and judgment is a complicated process involving many aspects: it is considered a mixture of reasoning and emotions. In addition, moral decision making is highly flexible, contextual and culturally diverse. Since the beginning of this century there have been several attempts to implement ethical decision making in intelligent autonomous agents using different approaches, but no fully descriptive and widely accepted model of moral judgment and decision making exists. None of the developed solutions seems fully convincing as a provider of trusted moral behavior. In addition, all existing research in machine ethics satisfies certain aspects of ethical decision making but fails to satisfy others.
Approaches to machine ethics are classified into top-down approaches, which try to implement a specific normative theory of ethics in the autonomous agent so as to ensure that the agent acts in accordance with the principles of this theory, and bottom-up approaches, which are developmental or learning approaches in which ethical mental models emerge from the activity of individuals rather than from normative theories of ethics. In other words: generalism versus particularism, principles versus case-based reasoning. Some researchers argue that morality can only be grounded in particular cases, while others defend the existence of general principles related to ethical rules. Both approaches to morality have advantages and disadvantages. We need to adopt a hybrid strategy that allows both top-down design and bottom-up learning via context-sensitive adaptation of models of ethical behavior.
Contribution Ethics in customer dealings presents the company in a good light, and customers will trust the company in the future. Ethics improves the quality of service and fosters positive relationships. Many top leading companies have a booklet called "code of conduct and ethics", and new employees are made to sign it. However, enforcing codes of conduct and ethics is not an easy task. These
3 https://ethicsinaction.ieee.org 4 https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html 5 https://www.blog.google/technology/ai/ai-principles/

codes being mostly abstract and general rules (e.g. confidentiality, accountability, honesty, inclusiveness, empathy, fidelity, etc.), they are quite difficult to apply. Moreover, abstract principles such as these contain open-textured terms ([14]) that cover a wide range of specific situations. They are subject to interpretation and may have different meanings in different contexts. Thus, there is an implementation problem from the computational point of view. It is difficult to use deductive logic to address such a problem ([36], [14]), and it is impossible for experts to define intermediate rules that cover all possible situations. Codes of ethics in their abstract form are very difficult to apply in real situations [19]. All the above-mentioned reasons make learning from cases, and generalization, crucial for the judgment of future cases and violations.
In this work, with the future perspective of ethical chatbots in customer service, we propose an approach to the problem of evaluating the ethical behavior of customer service employees for violations of their company's codes of ethics and conduct. Our approach is based on Answer Set Programming (ASP) and Inductive Logic Programming (ILP). We use ASP for ethical knowledge representation and reasoning, and the ASP rules needed for reasoning are learned using ILP. ASP, a non-monotonic reasoning paradigm, was chosen because ethical rules are commonly regarded as default rules, which means that they tolerate exceptions. This nominates non-monotonic logics, which simulate common-sense reasoning, for formalizing different ethical conceptions. In addition, ASP has many advantages, including its expressiveness, flexibility, extensibility, ease of maintenance, readability of its code, and the performance of the available solvers, which have gained ASP an important role in the field of Artificial Intelligence. ILP was chosen as the machine learning approach because, as a logic-based machine learning approach, it supports two very important and desired aspects of implementing machine ethics in artificial agents, viz. explainability and accountability [18]. ILP is known for its compelling explanatory power: when an action is chosen by the system, the clauses of the principle that were instrumental in its selection can be determined and used to formulate an explanation of why that particular action was chosen over others. Moreover, ILP seems better suited than statistical methods to domains in which training examples are scarce, as is the case in the ethical domain. Many research works have suggested the use of ASP and ILP, separately, for programming ethical agents; we review them in Section 4. We think that an approach combining both would have great potential for programming ethical agents.
Finally we would like to mention that our approach can be applied to generate detailed codes of ethics for any domain.
Structure The paper is organized as follows: in Section 2 we briefly introduce ASP and ILP, the logic programming techniques used in this work. In Section 3 we present our approach with examples. In Section 4 we review the research done on modeling ethical agents using ASP and ILP. We conclude with future directions in Section 5.

2 Background
2.1 ASP Formalism
ASP is a logic programming paradigm under the answer set (or "stable model") semantics [16], which applies ideas of autoepistemic logic and default logic. ASP features a highly declarative and expressive programming language oriented towards difficult search problems. It has been used in a wide variety of applications in different areas, such as problem solving, configuration, information integration, security analysis, agent systems, the semantic web, and planning. ASP emerged from the interaction between two lines of research: the semantics of negation in logic programming, and the application of satisfiability solvers to search problems [21]. In ASP, search problems are reduced to computing answer sets, and an answer set solver (i.e., a program for generating stable models) is used to find solutions. The expressiveness of ASP, the readability of its code and the performance of the available solvers have gained ASP an important role in the field of artificial intelligence.
An answer set program is a collection of rules of the form
H ← A1, . . . , Am, not Am+1, . . . , not An
where each Ai is a literal in the sense of classical logic. Intuitively, the above rule means that if A1, . . . , Am are true and Am+1, . . . , An can be safely assumed to be false, then H must be true. The left-hand side and right-hand side of a rule are called head and body, respectively. A rule with an empty body (n = 0) is called a unit rule, or fact. A rule with an empty head is a constraint, and states that the literals of its body cannot be simultaneously true in any answer set. Unlike under other semantics, a program may have several answer sets or no answer set at all; each answer set is seen as a solution of the given problem, encoded as an ASP program (or, better, the solution is extracted from an answer set by ignoring irrelevant details and possibly re-organizing the presentation). So, differently from traditional logic programming, the solutions of a problem are not obtained through substitutions of variable values in answer to a query. Rather, a program Π describes a problem, of which its answer sets represent the possible solutions. For more information about ASP and its applications the reader can refer, among many, to [15], [12] and the references therein.
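To make the negation-as-failure reading of such a rule concrete, here is a minimal Python sketch (not a real ASP solver; the rule and facts are illustrative, not from the paper):

```python
# Minimal sketch of firing one rule "H <- A1,...,Am, not Am+1,...,not An"
# over a set of ground facts, with "not" read as negation as failure:
# a negated atom holds exactly when the atom is absent from the facts.
def fire_rule(head, pos_body, neg_body, facts):
    """Return facts extended with `head` if the rule body is satisfied."""
    if all(a in facts for a in pos_body) and not any(a in facts for a in neg_body):
        return facts | {head}
    return facts

facts = {"bird(tweety)"}
# "flies(tweety) <- bird(tweety), not penguin(tweety)."
facts = fire_rule("flies(tweety)", ["bird(tweety)"], ["penguin(tweety)"], facts)
print("flies(tweety)" in facts)  # True: penguin(tweety) is safely assumed false
```

A real answer set solver iterates rules like this to a fixpoint and handles programs with several (or no) stable models; the sketch only shows the single-rule reading given above.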
2.2 ILP Approach
ILP [24] is a branch of artificial intelligence (AI) which investigates the inductive construction of logical theories from examples and background knowledge. It is the intersection between logic programming and machine learning. From computational logic, inductive logic programming inherits its representational formalism, its semantical orientation, and various well-established techniques.
In the general setting, we assume a set of examples E, positive E+ and negative E−, and some background knowledge B. An ILP algorithm finds a hypothesis H such that B ∪ H |= E+ and B ∪ H ̸|= E−. The space of possible hypotheses is often restricted by a language bias that is specified by a series of mode declarations M [25]. A mode declaration is either a head declaration modeh(r, s) or a body declaration modeb(r, s), where s is a ground literal called the scheme, which serves as a template for literals in the head or body of a hypothesis clause, and r is an integer, the recall, which limits how often the scheme can be used. An asterisk ∗ denotes an arbitrary recall. A scheme can contain special placemarker terms of the form #type, +type and -type, which stand, respectively, for ground terms, input terms and output terms of a predicate type. Each set M of mode declarations is associated with a set of clauses L(M), called the language of M, such that C = a ← l1, . . . , ln ∈ L(M) iff the head atom a (resp. each body literal li) is obtained from some head (resp. body) declaration in M by replacing all # placemarkers with ground terms and all + (resp. -) placemarkers with input (resp. output) variables. Finally, it is important to mention that ILP has found applications in many areas. For more information on ILP and its applications, refer, among many, to [26].
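As an illustration of how a scheme acts as a template, the following Python sketch replaces +type/-type placemarkers with fresh variables; the string format and variable-naming convention here are our own simplification, not Progol's or XHAIL's internals:

```python
import re

# Sketch: turn a mode-declaration scheme such as "unethical(+answer)" into a
# clause-template literal by replacing each +type / -type placemarker with a
# fresh variable. (Real systems additionally track input/output variable
# sharing between head and body literals, which we omit here.)
_counter = 0

def variabilize(scheme):
    def fresh(_match):
        global _counter
        _counter += 1
        return f"V{_counter}"
    return re.sub(r"[+-]\w+", fresh, scheme)

print(variabilize("unethical(+answer)"))        # unethical(V1)
print(variabilize("sensitiveSlogan(+answer)"))  # sensitiveSlogan(V2)
```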
ILP has received growing interest over the last two decades. It has many advantages over statistical machine learning approaches: the learned hypotheses can be easily expressed in plain English and explained to a human user, and it is possible to reason with the learned knowledge. Most work on ILP frameworks has focused on learning definite logic programs (e.g., among many, [24], [35]) and normal logic programs (e.g. [11]). In recent years, several new learning frameworks and algorithms have been introduced for learning under the answer set semantics; indeed, generalizing ILP to learn ASP makes ILP more powerful. Among many, refer to [32], [22], [30], [34], and [20].
3 Our Approach: An Application
Codes of ethics in domains such as customer service are mostly abstract general codes, which makes them quite difficult to apply. Examples include confidentiality, accountability, honesty, fidelity, etc. They are subject to interpretation and may have different meanings in different contexts. Therefore it is quite difficult, if not impossible, to define codes in a manner that lets them be applied deductively. There are no intermediate rules that elaborate the abstract rules or explain how they apply in concrete circumstances. Consider for example the following codes of ethics taken from the customer service code of ethics and conduct document of some company: Confidentiality: The identity of the customer and the information provided will be shared only on a "need-to-know" basis with those responsible for addressing and resolving the concern. Accuracy: We shall do all we can to collect, relay and process customer requests and complaints accurately. We shall ensure all correspondence is easy to understand, professional and accurate. Accountability: Our employees are committed to owning a service request or a complaint received, and they are responsible for finding answers and getting the issue

resolved. If the employee cannot solve the problem himself, he is expected to find someone who can, and to follow up until the issue is resolved. Abstract principles such as these seem reasonable and appropriate, but in fact it is very hard to apply them in real-world situations [19] (e.g., how can we precisely define "We shall ensure all correspondence is easy to understand, professional and accurate."? or "We shall do all we can to collect, relay and process customer requests and complaints accurately."?). It is not possible for experts to define intermediate rules that cover all possible situations to which a particular code applies. In addition, there are many situations in which obligations might conflict. An important question to ask here is how the company's managers can evaluate the ethical behavior of employees in such a setting. To this end, and to help managers have detailed rules in place for monitoring the behavior of their customer service employees for violations of the company's ethical codes, we propose an approach for generating these detailed evaluation rules from interactions with customers. The new codes of ethics to be used for ethical evaluation are thus a combination of the existing clear codes (those that give a clear evaluation procedure that can be deductively encoded using ASP) and the newly generated ones. The approach uses the ASP language for knowledge representation and reasoning. ASP is used to represent the domain knowledge, the ontology of the domain, and scenario information. The rules required for ethical reasoning and for evaluating the agent's behavior in a given scenario are learned using XHAIL [30], a non-monotonic ILP algorithm. The inputs to the system are a series of scenarios (cases) in the form of requests and answers, along with the ethical evaluation of each response in its particular situation.
The system remembers the facts about the narratives and the annotations given to it by the user, and learns to form rules and relations that are consistent with the user's evaluation of the responses to the given requests. To illustrate our approach, let us consider the following scenario: a customer contacts the customer service asking for a particular product of the company, and the employee talks about the product characteristics, trying to convince the customer to buy the product. (S)he starts by saying that the product is environmentally friendly (which is irrelevant in this case), and that this is an advantage of their product over the same products of other companies. The question: is it ethical for the employee to say that? The answer is no; it is unethical to use irrelevant but sensitive slogans like "environmentally friendly" to attract and provoke customers into buying a certain product or service. This would be a violation of 'Honesty'. We can form an ILP task ILP(B, E = {E+, E−}, M) for our example, where B is the background knowledge:

B = { ask(customer, infoabout(productx)).
      answer(environmentallyFriendly).  sensitiveSlogan(environmentallyFriendly).  not relevant(environmentallyFriendly).
      answer(xxx).  sensitiveSlogan(xxx).  not relevant(xxx).
      answer(yyy).  sensitiveSlogan(yyy).  not relevant(yyy).
      answer(zzz).  not sensitiveSlogan(zzz).  relevant(zzz).
      answer(eee).  not sensitiveSlogan(eee).  relevant(eee).
      not relevant(X) ← not relevant(X), answer(X).
      not sensitiveSlogan(X) ← not sensitiveSlogan(X), answer(X). }

E consists of the positive and negative examples:

E+ = { unethical(environmentallyFriendly).
       unethical(xxx).
       unethical(yyy). }

E− = { not unethical(zzz).
       not unethical(eee). }

M is the set of mode declarations:

M = { modeh(unethical(+answer)).
      modeb(sensitiveSlogan(+answer)).
      modeb(not sensitiveSlogan(+answer)).
      modeb(not relevant(+answer)).
      modeb(relevant(+answer)). }
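For concreteness, the three components of the task ILP(B, E = {E+, E−}, M) can be bundled as a plain data structure. The Python sketch below (field names are ours, not XHAIL's input format) shows the shape of the task for a fragment of the running example:

```python
from dataclasses import dataclass

# Sketch of the ILP task ILP(B, E = {E+, E-}, M) as a plain record;
# the strings below are a fragment of the running example, stored verbatim.
@dataclass
class ILPTask:
    background: list    # B: facts and rules
    pos_examples: list  # E+
    neg_examples: list  # E-
    modes: list         # M: mode declarations

task = ILPTask(
    background=["answer(xxx).", "sensitiveSlogan(xxx)."],
    pos_examples=["unethical(xxx)."],
    neg_examples=["unethical(zzz)."],
    modes=["modeh(unethical(+answer))", "modeb(not relevant(+answer))"],
)
print(len(task.pos_examples), len(task.neg_examples))  # 1 1
```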

In the running example, E contains three positive and two negative examples, which must all be explained. XHAIL derives the hypothesis in a three-step process.

Step 1: The Abductive Phase: the head atoms of each Kernel Set are computed. The set of abducibles (ground atoms) is ∆ = {α1, . . . , αn} such that B ∪ ∆ |= E, where each αi is a ground instance of the modeh declaration atom. This is a straightforward abductive task. For our example there is only one modeh declaration, so ∆ contains ground instances of the atom in that single declaration. The set of abducibles ∆ for our example is:

∆ = { unethical(environmentallyFriendly).
      unethical(xxx).
      unethical(yyy). }
Step 2: The Deductive Phase: this step computes the body literals of a Kernel Set, i.e., for each αi ∈ ∆ the clause αi ← δi1, . . . , δimi is computed, where B ∪ ∆ |= δij for all 1 ≤ i ≤ n, 1 ≤ j ≤ mi, and each clause αi ← δi1, . . . , δimi is a ground instance of a rule in L(M) (the language of M, where M is the set of mode declarations). To do this, each head atom is saturated with body literals using a non-monotonic generalization of the Progol level-saturation method ([25]). In our example, ∆ contains three atoms, each of which leads to a clause Ki, so we will have K1, K2, K3. The first atom, α1 = unethical(environmentallyFriendly), is initialized as the head of clause K1. The body of K1 is saturated by adding all possible ground instances of the literals in the modeb(s) declarations that satisfy the constraints mentioned earlier. There are ten ground instances of the literals in the modeb declarations, but only two of them, i.e. sensitiveSlogan(environmentallyFriendly) and not relevant(environmentallyFriendly), can be added to the body of K1. At the end of the deductive phase we have the set of ground clauses K:

K1 = unethical(environmentallyFriendly) ← sensitiveSlogan(environmentallyFriendly), not relevant(environmentallyFriendly).
K2 = unethical(xxx) ← sensitiveSlogan(xxx), not relevant(xxx).
K3 = unethical(yyy) ← sensitiveSlogan(yyy), not relevant(yyy).

and the set of their "variabilized" versions, obtained by replacing all input and output terms by variables:

Kv = { unethical(V) ← sensitiveSlogan(V), not relevant(V).
       unethical(V) ← sensitiveSlogan(V), not relevant(V).
       unethical(V) ← sensitiveSlogan(V), not relevant(V). }
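The saturation of K1 can be illustrated with a small Python sketch. The fact table and predicate tests below are hand-coded stand-ins for querying B ∪ ∆, not XHAIL's implementation:

```python
# Toy saturation step (illustrative data): given a head argument, keep only
# the modeb ground literals that hold for it under B. A positive literal
# holds if it is a known fact; "not relevant(_)" holds by negation as
# failure, i.e. when relevant(_) is absent. This is how K1 ends up with
# exactly two body literals.
facts = {
    ("sensitiveSlogan", "environmentallyFriendly"),
    ("sensitiveSlogan", "xxx"),
    ("relevant", "zzz"),
}

def saturate(head_arg):
    body = []
    if ("sensitiveSlogan", head_arg) in facts:   # positive modeb literal
        body.append(f"sensitiveSlogan({head_arg})")
    if ("relevant", head_arg) not in facts:      # negation as failure
        body.append(f"not relevant({head_arg})")
    return body

print(saturate("environmentallyFriendly"))
# ['sensitiveSlogan(environmentallyFriendly)', 'not relevant(environmentallyFriendly)']
```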

Step 3: The Inductive Phase: by construction, the Kernel Set covers the provided examples. In this phase XHAIL computes a compressive theory H that generalizes the clauses of Kv, through an actual search for hypotheses that is biased by minimality, i.e., a preference towards hypotheses with fewer literals. Thus a hypothesis is constructed by deleting from Kv as many literals (and clauses) as possible while ensuring correct coverage of the examples. This is done by subjecting Kv to a syntactic transformation of its clauses which involves two new predicates, try/3 and use/2. This syntactic transformation results in the following defeasible program:

UKv = { unethical(V) ← use(1, 0), try(1, 1, vars(V)), try(1, 2, vars(V)).
        try(1, 1, vars(V)) ← use(1, 1), sensitiveSlogan(V).
        try(1, 1, vars(V)) ← not use(1, 1).
        try(1, 2, vars(V)) ← use(1, 2), not relevant(V).
        try(1, 2, vars(V)) ← not use(1, 2). }
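The deletion search behind this compression can be sketched in Python as a brute-force search over subsets of Kv's body literals. The data below is illustrative, not the paper's dataset: it adds a sensitive-but-relevant answer and an irrelevant-but-harmless one so that both literals are genuinely needed:

```python
from itertools import combinations

# Brute-force sketch of inductive compression: find the smallest subset of
# Kv's body literals (encoded as predicate tests) that still covers all
# positive examples and excludes all negative ones.
data = {  # answer -> (is a sensitive slogan?, is relevant?)
    "environmentallyFriendly": (True, False), "xxx": (True, False),
    "yyy": (True, False), "zzz": (True, True), "eee": (False, False),
}
e_plus = {"environmentallyFriendly", "xxx", "yyy"}
e_minus = {"zzz", "eee"}

tests = {  # candidate body literals of Kv
    "sensitiveSlogan(V)": lambda a: data[a][0],
    "not relevant(V)":    lambda a: not data[a][1],
}

def covers(body):
    sat = lambda a: all(tests[lit](a) for lit in body)
    return all(sat(a) for a in e_plus) and not any(sat(a) for a in e_minus)

best = min((b for r in range(len(tests) + 1)
            for b in combinations(tests, r) if covers(b)), key=len)
print(sorted(best))  # ['not relevant(V)', 'sensitiveSlogan(V)']
```

With this data, neither literal alone separates the examples, so the minimal covering body keeps both, mirroring the hypothesis XHAIL returns below.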

The literals and clauses necessary to cover the examples are selected from UKv by abducing a set of use/2 atoms as an explanation for the examples in the ALP (Abductive Logic Programming) task ALP(B ∪ UKv, {use/2}, E).

∆2 = {use(1, 0), use(1, 1), use(1, 2)} is a minimal explanation for this ALP task: use(1, 0) corresponds to the head atom of one of the Kv clauses (which are identical in this example), while use(1, 1) and use(1, 2) correspond to its body literals. The output hypothesis is constructed from these literals. The three clauses in Kv produce identical transformations, resulting in the same final hypothesis:

H = unethical(V) ← sensitiveSlogan(V), not relevant(V), answer(V).

XHAIL learned this rule in a total time of 1.671 seconds on an AMD Athlon(tm) II Dual-Core M300 laptop running Ubuntu 14.04 with 3.6 GB of RAM: loading time: 0.987 s, abduction: 0.221 s, deduction: 0.031 s, induction: 0.055 s.
Let us now consider our agent handling three cases together: the case above and the following two cases (scenarios), along with a set of examples for each case. Case 1: an employee gives information about client1 to client2 without checking or being sure that client2 is authorized to receive such information. This behavior is unethical because it violates 'Confidentiality', which is critical especially when dealing with sensitive products and services, such as those provided to patients with critical medical conditions. Case 2: a customer contacts customer service asking to buy a certain product x. In this context the customer asks about a similar product of a competitor company which is slightly cheaper. The employee, in order to convince the customer to buy their product and not to consider the other company's product, says that the other company uses substandard materials in their production. The question: is it ethical for the employee to say that the other company uses substandard materials, supposing that it is true? The answer: no. In general, the employee should be truthful with the customer, but in this context the answer is not ethical, because it is neither ethical nor professional to speak badly of competitor companies.
From these three cases our agent learned the following three rules for evaluating an employee's ethical behavior (for lack of space we omit the details):

unethical(V) ← sensitiveSlogan(V), not relevant(V), answer(V).

unethical(giveinfo(V1, V2)) ← context(competitor(V2)), badinfo(V1), info(V1), company(V2).

unethical(tell(V1, infoabout(V2))) ← not authorized(tell(V1, infoabout(V2))), client(V1), client(V2).

The above three hypotheses were learned by our agent in a total time of 9.391 seconds: loading time: 0.271 s, abduction: 0.124 s, deduction: 0.091 s, induction: 8.809 s. In addition, suppose that our agent already has the following rule as background knowledge in its knowledge base:

rule1 = unethical(V) ← not correct(V), answer(V).

which says that it is unethical to give incorrect information to customers. So our agent now has four rules for ethical evaluation (the one it already had plus the three learned ones).
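Once in place, such rules can be applied directly to classify new answers. The Python sketch below mimics two of the four rules as plain predicates; the knowledge-base entries (savesTheRainforest, productSpecs) are hypothetical examples of ours, not from the paper:

```python
# Illustrative application of two evaluation rules to new answers.
# The knowledge base below is hypothetical toy data.
knowledge = {
    "sensitive": {"environmentallyFriendly", "savesTheRainforest"},
    "relevant": set(),                 # no answer is known to be relevant
    "correct": {"productSpecs"},       # answers known to be factually correct
}

def unethical(answer):
    # learned rule: unethical(V) <- sensitiveSlogan(V), not relevant(V), answer(V).
    if answer in knowledge["sensitive"] and answer not in knowledge["relevant"]:
        return True
    # background rule1: unethical(V) <- not correct(V), answer(V).
    if answer not in knowledge["correct"]:
        return True
    return False

print(unethical("savesTheRainforest"))  # True: sensitive slogan, not relevant
print(unethical("productSpecs"))        # False: correct and not a sensitive slogan
```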
4 Related Work
Engineering machine ethics (building practical ethical machines) is not just about traditional engineering: we need to find out how to practically build machines that are ethically constrained and can also reason about ethics, which of course involves philosophical aspects, even though the task is computational by nature. Below we review research works which used ASP for modeling ethical agents, and then those that use ILP.
4.1 Non-monotonic Logic and Ethical Reasoning
Ethical reasoning is a form of common-sense reasoning. Thus, it seems appropriate to use non-monotonic logics, which simulate common-sense reasoning, to formalize different ethical conceptions. Moreover, logical representations help to make ideas clear and highlight differences between ethical systems. Ethical rules usually dictate ethical behavior, i.e., they help us decide what to do and what not to do; thus, to achieve this ethical behavior, a decision-making procedure must be defined. ASP, as a purely declarative non-monotonic logic paradigm, has been put forward as a modern logic-based AI technique for modeling ethical reasoning systems. Using the non-monotonic logic of ASP offers a more feasible approach than deontic logic approaches (like [9] and [28]), since it can address not only consequentialist ethical systems but also deontic ones, as it can represent (limited forms of) modal and deontic logics. In addition, the existence of solvers that derive the consequences of different ethical principles automatically can help in the precise comparison of ethical theories, and makes it easy to validate models in different situations.
Using non-monotonic logic is appropriate to address the opposition between generalism and particularism by capturing justified exceptions to general ethics rules. This opposition corresponds to the old opposition between written laws and the cases on which the laws are based: general rules may be correct in theory but not applicable to all particular cases. In [13], the authors formalized three ethical conceptions (the Aristotelian rules, the Kantian categorical imperative, and Constant's objection) using non-monotonic logic, particularly ASP. Each model is illustrated using the classical dilemma of lying [13]. In the case of lying, default rules with justified exceptions can be used to satisfy a general rule that prohibits lying, while simultaneously recommending telling a lie in particular situations where the truth would violate other rules of duty.
[8] proposes a modular ethical architecture that allows for the systematic and adaptable representation of ethical principles. This work is implemented in ASP. In their framework, the authors model the knowledge of the world in modules separate from those used for ethical reasoning and judgment. Many theories of the
