
On Creative Self-Driving Cars: Hire the Computational Logicians, Fast


ABSTRACT

There can be no denying that it is entirely possible for a car-manufacturing company like Daimler to build and deploy self-driving cars without hiring a single computational logician. However, for reasons we explain herein, a logician-less approach to engineering self-driving automobiles (and, for that matter, self-moving vehicles of any consequence, in general) is a profoundly unwise one. As we shall see, the folly of leaving aside logic has absolutely nothing to do with the standard and stale red-herring concern that self-driving cars will face exotic human-style ethical dilemmas the philosophers have a passionate penchant for.

Introduction; planned sequence

There can be no denying that it is entirely possible for a car-manufacturing company like DaimlerFootnote1 to build and deploy self-driving cars without hiring a single computational logician. However, for reasons we explain herein, a logician-less approach to engineering self-driving automobiles (and, for that matter, self-moving vehicles of any consequence, in general) is a profoundly unwise one. As we shall see, the folly of leaving aside logic has absolutely nothing to do with the standard and stale red-herring concern that self-driving cars will face exotic human-style ethical dilemmas the philosophers have a passionate penchant for.Footnote2

There are any number of routes we could take in order to explain and demonstrate our central claim; in the present paper we opt for one based on a particular, new line of reasoning: what we dub “The Core Argument.” This argument starts by innocently asking whether a car manufacturer wishes to engineer self-driving cars that are intelligent, presumes an affirmative answer, and next infers that since intelligence entails (a certain form of) creativity, the sought-after cars must be X-creative. The Core Argument continues with the deduction of autonomy from X-creativity; and, following on that, explains that the cars in question must also be powerful. We thus arrive at the intermediary result that our rational maker of self-driving cars must—using the obvious acronym for the four properties in question—seek ICAP (pronounced “eye cap”) versions of such cars. The Core Argument continues with an inference to the proposition corresponding to the imperative that is the sub-title of the current paper.

There are four specific things R1–R4 the following of this imperative will directly secure for car manufacturers from the hired computational logicians; herein we emphasize only one item in this quartet: OS-rooted ethical control of self-driving cars, R4. That is, the computational logicians must be recruited to design and implement logics that are connected to the operating-system level of ICAP cars, and that ensure these cars meet all of their moral and legal obligations, never do what is morally or legally forbidden, invariably steer clear of the invidious, and, when appropriate, perform what is supererogatory.

At this point, via its final inference, The Core Argument delivers some new and bad news: If the ethical theory sitting atop and instantiating these obligations (and prohibitions, etc.) for self-driving cars is utilitarianism, it will be impossible to engineer an ethically correct self-driving car. This bad news has absolutely nothing to do with the standard and stale red-herring concern that self-driving cars will face exotic human-style ethical dilemmas the philosophers have a passionate penchant for (e.g., see Lin Citation2015). Should a self-driving car that faces an unavoidable choice between crashing into a group of five innocent pedestrians versus crashing head on into another car with a single passenger opt for the smaller disutility? How should a self-driving car that must choose between hitting a motorcyclist wearing a helmet versus one not wearing one behave? Such cases can be multiplied indefinitely, just by following dilemmas that the philosophers have pondered for millennia. But the bad news we deliver is much more serious than such far-fetched “trolley problems,” which everyone agrees are vanishingly unlikely to materialize in the future; the bad news pertains to the vehicular version of the demandingness objection to utilitarianism, an objection—wholly separate from matters in artificial intelligence (AI) and robotics, and needless to say specifically from self-driving cars—presented in (Scheffler Citation1982). In a word, the demandingness objection to utilitarianism, at least utilitarianism of the standard “act” variety, implies that a human agent seeking to continuously meet her obligations under this ethical theory will be overwhelmed to the point of not being able to get anything done on the normal agenda of her life. It is probably safe to say that Daimler will not be keen on the idea of building self-driving cars that do not get done any of the things customers expect from such vehicles, such as simply getting their passengers from point A to point B. We thus end up learning of the fifth reason (i.e., R5, coming on the heels of R1–R4) for hiring computational logicians (and this applies not only to car manufacturers, but also to those setting law and public policy with respect to self-moving vehicles): to receive crucial help in dealing with the demandingness problem.

The sequence for the sequel is specifically as follows. We begin by presenting The Core Argument (“The proposed argument”), and then immediately anticipate the objection that intelligence does not in fact entail creativity. Next, in “Creativity in what sense?”, we consider a series of possible replies to this objection—a series that rests content (in “But what about MacGyveresque creativity?”) with the position that intelligence entails that a specific form of creativity (what we call “MacGyveresque” creativity, or, for short, m-creativity) should be possessed by any intelligent self-driving car. (We hence instantiate X to “m.”) While this type of creativity is not exactly exalted, it is the kind of creativity we should indeed expect from self-driving cars. Our next step, still following the thread of The Core Argument, is to show that m-creativity implies autonomy (“From MacGyveresque creativity to autonomy”). We then (“Are powerful self-driving cars desired?”) explain that the likes of Daimler will need not only to engineer intelligent, creative, and autonomous self-driving cars, but also powerful ones. Having reached this point, we further explain that the aforementioned quartet R1–R4 must be provided for an ICAP self-driving car, and that only computational logicians can provide this quartet (“The quartet R1–R4; OS-rooted ethical control (R4)”). Our focus, as said, is herein on but one member of the quartet: R4: OS-rooted ethical control. Importantly, we explain that whereas some others naïvely view ethical control of self-driving cars from the points of view afforded by threadbare and fanciful human ethical dilemmas (again e.g., see Lin Citation2015), the real issue is that because ICAP self-driving cars will be making decisions that are alien to our moral cognition, on the strength of knowledge that is alien to our moral knowledge, at time scales that are alien to our moral decision-making, the demandingness objection to utilitarianism has firm and worrisome traction when it comes to building such self-driving cars. A brief conclusion, with remarks about next research steps, wraps up the paper. These steps include what we have said is the fifth and final thing that must be provided by computational logicians: R5: an escape from the AI version of the demandingness objection.

The proposed argument

As we have said, the present paper is driven by The Core Argument; that is, the chain of reasoning we adumbrated above, which is laid out skeletally in Figure 1. We refer in The Core Argument to Daimler here, but of course this is just an arbitrary stand-in (see note 1). Likewise, to ease exposition, we fix a single arbitrary car around which argumentation and analysis revolve. And finally, as the reader can see in Figure 1, the argument is at this point schematic, since (among other reasons) the type of creativity in question is left open.Footnote3

Figure 1. The core argument.


Figure 2. A simple circuit problem that invites M-creativity.


We anticipate that some readers will consider the very first inference in The Core Argument to be suspicious; in fact, many readers will no doubt regard this inference as an outright non sequitur. Why should it be the case that intelligence entails creativity? As the skeptic will no doubt point out, when we speak of an intelligent self-driving car, we are not thereby speaking of the kind of intelligence one might ascribe to a towering human genius. Einstein was surely intelligent, and Einsteinian intelligence, all should concede, entails creativity, indeed extreme creativity at that, but however wondrous a 2020 “Autonomous Class” Mercedes-Benz might be, it is rather doubtful that Daimler needs the car to revolutionize an entire branch of science. Yet, we accept that the onus is on us to defend the first inference to C1 in The Core Argument.

Creativity in what sense?

To bear this burden successfully means that, at a minimum, we need to explain what kind of creativity we have in mind, and then proceed to show that with that type of creativity selected for interpretation of the first inference in The Core Argument, that inference is valid. Much of Bringsjord’s own work in AI has involved creativity, so we have no shortage of candidate kinds of creativity to consider for X. Let us see how our inference fares on a series of these candidates.

Lovelace test creativity

To start the series, we note that Bringsjord, Ferrucci, and Bello (Citation2001) propose a highly demanding test for (computing-)machine creativity, one inspired by Lady Lovelace’s famous objection to Turing’s (Citation1950) claim that a computer able to successfully play the famous imitation gameFootnote4 should be classified as a genuinely thinking thing. Her objection, quite short but quite sweet, was simply that since computers are after all just programmed by humans to do what they do, these computers are anything but creative. Just as a puppet receives not a shred of credit for its moves, however fancy, but rather the human puppeteer does, so too, by the lights of Lovelace, it is only the human programmer who can credibly lay claim to being creative. Bringsjord, Ferrucci, and Bello (Citation2001) give conditions which, if satisfied by an AI system, should classify that system as creative even by the high standards of Lady Lovelace. An AI agent that meets these conditions can be said to be LT-creative. One condition, put informally here so as not to have to profligately spend time and space recapitulating the paper by Bringsjord and colleagues, is that the engineers of the AI system in question find the remarkable behavior of this system to be utterly unfathomable, despite these engineers having full and deep knowledge of the relevant logic, mathematics, algorithms, designs, and programs.

It should be obvious that the inference to C1 in The Core Argument is invalid if “X-creativity” is set to “LT-creativity.” We can have any number of genuinely intelligent AIs that are nonetheless fully understood by engineers who brought these AIs into existence. We doubt very much that Deep Blue, the undeniably intelligent chess-playing program that vanquished Garry Kasparov, played chess in a manner that the engineers at IBM found unfathomable.Footnote5 A parallel point holds with respect to other famous AIs, such as Watson, the system that vanquished the best human Jeopardy! players on the planet. No one doubts that Watson, during the competition, was intelligent, and yet Watson’s performance was not in the least mysterious, given how the engineers designed and built it.Footnote6

The upshot is that if The Core Argument is to be sound, a different type of creativity is needed for the X in this argument.

Creative formal thought?

It turns out that Bringsjord (Citation2015c) has published a purported proof that intelligence implies a certain form of creativity. Might this be just what the doctor ordered for the variable X in The Core Argument? Unfortunately, there is a two-part catch, and this is reflected in the title of the paper in question: “Theorem: General Intelligence Entails Creativity, assuming .” The overall issue is what is assumed. The proof assumes, one, that the intelligence in question must include arithmetic (here rhymes with “empathetic”) intelligence, and two, that the level of this intelligence is very high. More specifically, the idealized agent that the analysis and argumentation centers around must have command not only over the axiom system of Peano Arithmetic (PA) itself, but must also have command over the meta-theory of PA. (For instance, the agent must know that the axioms of PA, an infinite set, are all true on the standard interpretation of arithmetic—and this is just for starters.) Under these two assumptions regarding arithmetic intelligence, the representative agent is shown to have a form of logicist creativity (l-creativity). Unfortunately, setting “X-creativity” to “l-creativity” in The Core Argument does not render the first inference valid, for the simple reason that Daimler engineers are not interested in (let alone compelled to) seeking self-driving cars able to understand and prove abstruse aspects of mathematical logic. So back to the drawing board we go.

Creative musical thought?

Another option for X, at least formally speaking, is musical creativity (e.g., see Ellis et al. Citation2015). But as Bringsjord pointed out in person when presenting the kernel of the present paper in Vienna, mere steps from where the display of such creativity, in perhaps its highest form, happened in the past,Footnote7 Daimler, while it takes commendable pains to ensure that the sound systems in its cars are impressive, has no plans to take on the job of producing AI that also generates the music that is so wonderfully presented to passengers.Footnote8

Creative literary thought?

Any notion that the kind of creativity relevant to self-driving cars is LT-creativity, l-creativity, or musical creativity is, as we have now noted, implausible to the point of being almost silly. But we come now to yet another form of creativity that just might not be so crazy in the current context: literary creativity. It turns out that this is once again a form of creativity that Bringsjord has spent time investigating; in this case, in fact, a lot of time (e.g., see the monograph Bringsjord and Ferrucci Citation2000). Under charitable assumptions, Bringsjord’s investigation implies that even today’s well-equipped but human-driven cars are already literarily creative, at least in a “plot-centric” way. If we interpret the route of a car from point of origin to destination to constitute a series of events involving passengers and the car itself as characters, then by some relaxed concepts of what a story is it would immediately follow that top-flight navigation systems in today’s human-piloted cars are quite capable of story generation. This would be true of Bringsjord’s own German sedan, and indeed true of his drive to JFK airport to fly to Vienna to give the very talk that expressed the core of the reasoning presented in the present paper. The reason is that during this drive the navigation system generated a number of unorthodox routes to JFK, in order to avoid extreme congestion on the infamous-to-New-Yorkers Van Wyck Expressway. It would be easy enough to express these alternative routes in a format that has been used in AI to represent stories. For example, the routes could be easily represented in the event calculus, which is explained and in fact specifically used to represent stories in (Mueller Citation2014). While doing this sort of thing might strike some readers as frivolous, there can be no denying that, in the longstanding tradition in AI that counts a mundane declarative representation of a series of events that involve agents to be a story,Footnote9 we must accept as fact that ICAP cars could be literarily creative. In fact, given that cars today integrate navigation with knowledge of not only traffic flow, but points of interest along the way, it would not be hyperbole to say that cars increasingly have knowledge by which they could spin stories of considerable plot-centric complexity. However, the previous two sentences say: could be creative. They do not say that Daimler would need to make literarily creative ICAP self-driving cars. Therefore, we still have not found an instantiation of X in The Core Argument that produces a valid inference to C1.

But what about MacGyveresque creativity?

The kinds of creativity we have canvassed so far have been, in each case, domain-specific. We turn our attention now to a general form of creativity that seems to be applicable in nearly any domain that presents problems to agents whose goals require the solving of those problems. This form of creativity is a brand of “out-of-the-box” problem-solving, an extreme resourcefulness based in large part on an ability to create, on the spot, novel problem-solving moves in extremely challenging and novel situations, and to thereby conquer these situations—where the conquering, by definition, is not accessible to shallow learning techniques currently associated with such things as “machine learning” and “deep learning.”Footnote10 At least for those familiar with “classic” television in the United States, the paragon of this form of creativity is the heroic secret agent known as “MacGyver,” who starred in a long-running show of the same name.Footnote11 We thus refer to the kind of creativity in question as MacGyveresque, or, for short, m-creativity.Footnote12 In the very first episode, MacGyver is confronted with the problem of having to move a large and heavy steel beam that blocks his way. His creative solution is to cut the end off of a fire hose, tie a knot in that hose, and run the hose underneath the beam. MacGyver then turns on the water, and the hydraulic pressure in the hose lifts the beam enough for him to pivot it clear. This kind of creativity is manifested frequently, in episode after episode. One of the hallmarks of m-creativity is that (1) the known and planned purposes of objects, for MacGyver, turn out to be irrelevant in the particular problems confronting him, but (2) extemporaneously in those problems these objects are used in efficacious ways that the humans who designed and produced those objects did not foresee.Footnote13

It seems to us that it can be shown, rather easily, that a truly intelligent artificial agent, operating in the ordinary environment that usually houses human beings and presents them with everyday, run-of-the-mill challenges, must be m-creative. We believe that this can be shown formally, that is, proved outright, but given the space required to set the formal context and assumptions for such an endeavor, we will rest content here with a simple example, and an argument associated with it, to make our point.

First, we inform the reader that we aim for the logically equivalent contrapositive: that if our arbitrary agent, in the kind of environment we have imagined, presented with a representative type of problem, is not m-creative, then that agent is not intelligent. For the problem, we choose a simple but classic challenge. In it, a subject is confronted with the challenge of completing a short and straightforward low-voltage circuit, in order to light a small bulb. A metal screwdriver with a plastic handle is provided, with instructions that it be used to first tighten down the terminals on either side of a lone switch in the circuit.Footnote14 The problem is that the circuit is not completed, and hence the lamp is unlit, because there is a gap in the wiring. No other props or tools are provided, or allowed. The puzzle is depicted in Figure 2.

Now, suppose that a small humanoid robot, billed by its creators as intelligent relative to environments like the one that envelopes the circuit problem here, appears on the scene, and is confronted with the problem. The situation is explained to the robot, just as it is explained to the human subjects in (Glucksberg Citation1968). (Both the humans and our robot know that current will flow from the power source to the lamp, and light it, if the wire makes an uninterrupted loop. This is of course also common knowledge, even in middle school in technologized countries.) The robot proceeds to screw down the terminals on either side of the switch. But after that, despite being told to light the lamp, it is quite paralyzed. That is, the robot does not use the screwdriver to complete the circuit and light the lamp, but instead stares for a while at the setup in front of it, and then announces: “I am sorry. I cannot figure out how to light the lamp.” It seems clear that we would have to say that the robot is not intelligent, despite claims to the contrary by its creators. Of course, an m-creative agent need not have a perfect batting average: some problems will go unsolved, because the not-as-designed use of objects would not invariably be discovered by the agent in question. But to flesh out our argument, simply assume that our parable has a number of close cousins, and that in each and every one, the robot has no trouble using objects to do things for which they were explicitly designed, but invariably is stumped by problems testing for a simple level of m-creativity. Clearly, the robot is not intelligent.

The upshot is plain. In light of the fact that the conditional in question holds, it follows that if a self-driving car is intelligent, we should indeed expect m-creativity from it. Hence, we have made it to intermediary conclusion C1 in The Core Argument.

Of course, this is rather abstract; some readers will expect at least some examples of m-creativity on the roadways. It is easy to comply, by turning not to abstractions and dreamed-up-by-psychologists problems given to subjects in the laboratory, but rather to the real world. A convenient source of examples comes from inclement weather and emergencies; here is a weather-related one: Bringsjord probably should not have been driving his SUV, but he was. The reason why he probably should have been off the roadways was that they were snow-and-ice-covered. He attempted to drive down a steep hill in Troy, NY, specifically down Linden Avenue, which snakes down alongside the Poestenkill waterfall. At the beginning of the descent, the SUV slid uncontrollably to the right, into a snowbank just above the curb, giving the driver a nice jolt. It was painfully obvious that it would be impossible to drive down the hill along anything like the normal track. It was also impossible now to back up to the top of the hill. So what Bringsjord did was follow this algorithm: About every fifteen feet or so, unsteadily but reliably steer the vehicle forward into the curb and snowbank to come to a complete stop; to get started again, move the steering wheel to the left; repeat. In this manner, the hill could be descended gradually, without building up too much speed. It took a long time, but it worked. The curb and the snowbank, combined with the right front tire taking moderate impact, functioned as a brake. Many, many such examples can be given, all realistic.Footnote15 Some realistic m-creativity involves violating the normal “rules” of driving. For example, it is often necessary, to avoid a crash, that a car be driven into a grass median, or onto a sidewalk, or across a double-yellow line into the oncoming lane, and so on. In cases where crash avoidance like this crosses over to making use of causal chains unanticipated by human designers, with objects used in a manner that is different from standard operating procedure, we have m-creativity. Given what we say below about the coming multi-agent nature of self-driving, m-creativity along this line will only grow in importance.

From MacGyveresque creativity to autonomy

But of course The Core Argument does not stop with mention of X-creativity (now instantiated to m-creativity): it proceeds to say that since creativity implies autonomy, the representative car must be autonomous. Does m-creativity entail autonomy? Yes, and there are three general reasons.

Reason #1: M-creative cars are partially LT-creative

The first reason is that despite the rejection, above, of LT-creativity for X-creativity in The Core Argument, the fact remains that any artifact capable of m-creativity must, by definition, at least to some degree, move in the direction of what the Lovelace test demands: the artifact in question must exceed what the engineers of this artifact could have specifically anticipated. In other words, an m-creative self-driving car must be, at least to some degree, LT-creative. Why? An m-creative agent must be able to adapt to unprecedented problems on the spot, and devise, on that very same spot, unprecedented solutions to those problems. The key word here is of course “unprecedented.” Driving on a super-highway is something that humans can often do “without thinking.” Most human drivers, sooner or later, if they drive enough on super-highways (the so-called “interstates” in the United States, and such things as “B” or “M” roads in Europe), realize that they have been driving for quite a while without the slightest recollection of having done so. This is “zombie” driving. In zombie driving, the percepts to the driver never vary much through time. And, those percepts may well have been fully anticipated by the human engineers of self-driving cars. But zombie driving is not m-creative in the least. A zombie driver is not going to get out of a tough spot with ingenious resourcefulness applied on the fly, but such surgically applied resourcefulness is precisely what constitutes m-creativity.

Clearly, m-creativity in a computing machine is not anticipatable by the designers of this machine. We do not have to go all the way to the passing of the Lovelace test (Bringsjord, Ferrucci, and Bello Citation2001), but clearly the particular innovations cannot be anticipated by the designers, and hence when an artificial agent comes up with innovative uses for objects, the designers must find these innovations surprising. The designers can, upon learning of these innovations, and then reflecting, grasp how the machine in question could have risen to the occasion, but they cannot have known in advance about the specifics. In this sense, then, m-creative self-driving cars would exhibit a kind of minimal autonomy.

Reason #2: Satisfaction of could-have-done-otherwise definitions

While it is undeniable that the term “autonomous” is now routinely ascribed to various artifacts that are based on computing machines, it is also undeniable that such ascriptions are—as of the typing of the present sentence in early 2016—issued in the absence of a formal definition of what autonomy is. What might a formal definition of autonomy look like? Presumably such an account would be developed along one or both of two trajectories. On the one hand, autonomy might be cashed out as a formalization of the kernel that an agent a is autonomous at a given time t just in case, at that time, a can (perhaps at some immediate-successor time t′) perform some action α or some incompatible action α′. In keeping with this intuitive picture, if the past tense is used, and accordingly the definiendum is “a autonomously performed action α at time t,” then the idea would be that, at t, or perhaps at an immediately preceding time t′, a could have, unto itself, performed alternative action α′. (There may of course be many alternatives.) Of course, all of this is quite informal. This picture is an intuitive springboard for deploying formal logic to work out matters in sufficient detail to allow meaningful and substantive conjectures to be devised, and either confirmed (proof) or refuted (disproof). But for present purposes, the point is only that the springboard commits us to a basic notion of autonomy: namely, that the agent in question, not some other agent, had the freedom in and of itself to have done otherwise. But an m-creative agent, we have already noted, comes up with an innovation that solves a problem on the spot. Yet this innovation is by no means a foregone conclusion. If it was, then there would be nothing innovative, nothing that the human designers and engineers did not themselves anticipate and plan to have happen. We thus have a second reason for holding that m-creativity implies autonomy.

Reason #3: A retreat to mere measurement of autonomy secures the implication

The third reason is revealed once we forgo trying to exploit the nature of autonomy in order to show that m-creativity entails autonomy itself, and instead reflect upon the relationship between m-creativity and the measurement of autonomy, done in a way that bypasses having to speculate about its “interior” nature. This route dodges the issue of developing a formal definition, by substituting some formal quantity for a judgment as to what degree some robot is autonomous. Here is one possibility for this route—a possibility that draws upon the logic that constitutes the definition and further development of Kolmogorov complexity.Footnote16 Let us suppose that the behavior of our self-driving car from some time t0 to any subsequent time t, t0 < t, can be identified with some binary string s. We remind (or inform) the reader that the Kolmogorov complexity of a string s, K(s), is the length of the smallest Turing-level program that outputs s; that is,

K(s) = min { |p| : U(p) = s },

where U is a fixed universal Turing machine and |p| is the length of program p.

Now simply define the degree of autonomy of a given car at some point t in its lifespan to be the Kolmogorov complexity of the string, up to t, that is the representation of its behavior to that point.Footnote17 Of course, as is well known, any finite alphabet Σ can be used here, not just the binary one employed above. As long as the behavior of the car at every given point in its existence can be captured by a finite string over Σ, we have developed here a measure of the degree of autonomy of that car. We can easily see that the behavior of an m-creative self-driving car, through time, must have a higher and higher degree of autonomy. A zombie self-driving car on a super-highway would almost invariably produce a binary string through time that is highly regular and redundant; hence such a car would have a relatively small degree of autonomy. But things are of course quite different in the case of an m-creative self-driving car.
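Since K itself is uncomputable, any working measure along these lines must rely on a computable proxy; a standard move is to take the length of a losslessly compressed encoding of the behavior string as a crude upper bound on K. The sketch below, with behavior strings invented by us and no pretense of being a deployed autonomy metric, illustrates how such a proxy separates zombie driving from more varied, m-creative behavior.

```python
# A minimal sketch, not a deployed metric: approximate the degree of
# autonomy of a behavior string by its zlib-compressed length, a crude
# computable stand-in for the (uncomputable) Kolmogorov complexity K(s).
# The behavior strings below are invented for illustration.
import random
import zlib

def autonomy_proxy(behavior: str) -> int:
    """Length of the compressed behavior string, used as a proxy for K."""
    return len(zlib.compress(behavior.encode("utf-8")))

# "Zombie" super-highway driving: the same percept/action pair, over and over.
zombie = "cruise-at-65;" * 400

# More m-creative driving: a varied, hard-to-predict sequence of maneuvers.
random.seed(42)
creative = "".join(
    f"steer-{random.randint(0, 10)};brake-{random.randint(0, 6)};"
    for _ in range(400)
)

print(autonomy_proxy(zombie))    # small: highly regular, redundant string
print(autonomy_proxy(creative))  # larger: far less compressible behavior
```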

Are powerful self-driving cars desired?

We now come to the next step in The Core Argument, one triggered by the question: Does Daimler desire powerful self-driving cars? A likely response is: “Well, who knows? After all, we don’t know what you mean by ‘powerful,’ so we can’t give a rational, informative answer. We can tell you that we certainly want an effective, safe, and reliable self-driving car.” Yet these three avowed desiderata presuppose the very concepts that we now use to define power. These concepts include utility, disutility, and the basic, universally affirmed structure of what an artificial intelligent agent is, according to orthodoxy in the field of AI. Such orthodoxy is set out definitively by Russell and Norvig (Citation2009), who explain that AI is the field devoted to specifying and implementing intelligent agents, where such agents are functions from percepts (that which they perceive) to actions performed in the environment in which they operate. Where AIA is an arbitrary agent of this type,Footnote18 per the set of percepts, and act the set of actions, a given agent is thus a mapping

AIA : per → act.

Given the abstract level of analysis at which the present paper is pitched, there is no need to specify the domain and range of such a mapping. All readers who drive will have an immediate, intuitive grasp of many of the real-world members of these sets. For instance, on roads on which construction dump-trucks travel, sometimes debris flies out and down onto the road surface from the containers on such trucks; and if that happens in front of you while you are driving, it is good to be able to perceive the falling/fallen debris, and avoid it. The same thing goes for trucks that transport, particularly in open-air style, building materials. If a concrete block slides off of such a truck in front of you on a super-highway, you will generally want to spy it, and dodge it. Such examples could of course be multiplied ad infinitum. It is also easy enough to imagine percepts and actions of a more mundane sort: When Bringsjord drives to the airport in a snowstorm, he needs to perceive the degree to which the roads have been plowed and salted, the maximum reduced rate of speed he will likely be able to achieve, and so on. These percepts dictate all sorts of actions.

Now let us add a measure u of the utility (or disutility) accruing from the performance of some action performed by an agent AIA, where the range of this function is the integers; hence

u : act → ℤ. (1)

Given these rudiments, we can articulate a simple account of power, for we can say that a powerful agent is one such that, in the course of its life, it will often be in situations that present to it percepts such that

∃ α⁺, α⁻ ∈ act : u(α⁺) ≥ k ∧ u(α⁻) ≤ k′. (2)

Of course, not only is this account rather abstract, but it is also confessedly indeterminate, for the reason that we do not know how large should be the potential weal k, nor how small should be the potential woe k′, in order to ensure satisfaction of the definiens. Moreover, this lacuna is not unimportant, for it clearly hovers around the key question of how much power Daimler engineers wish to bestow upon their self-driving cars. Yet, clearly if the constants k and k′ are, respectively, quite large and quite small, the quartet R1–R4 will indeed need to be provided. Truly powerful agents can bring on great goodness, and wreak great destruction. We will refrain from providing a justification for the claim that large amounts of power mandate R1–R4, and will make only a few quick remarks about R1–R3, before moving on to the planned treatment of R4, and then R5.
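To make the rudiments concrete, here is a small sketch, with every name and number invented by us rather than drawn from any manufacturer's system: an agent as a mapping from percepts to actions, a utility function in the spirit of Equation (1), and a check, in the spirit of Equation (2), of whether an agent's situation offers both great weal and great woe.

```python
# A toy sketch (invented names and numbers) of the orthodox picture: an
# agent maps percepts to actions (per -> act), utility u maps actions to
# integers (Equation (1)), and an agent counts as "powerful" in a situation
# when some available action promises utility >= k while another threatens
# utility <= k' (Equation (2)).
from typing import Dict, List

# Equation (1): u : act -> Z
utility: Dict[str, int] = {
    "continue-route": 1,
    "swerve-around-debris": 50,
    "plow-into-debris": -500,
}

# The agent itself, per -> act, here just a lookup table for illustration.
agent: Dict[str, str] = {
    "debris-ahead": "swerve-around-debris",
    "clear-road": "continue-route",
}

def is_powerful(available: List[str], u: Dict[str, int],
                k: int, k_prime: int) -> bool:
    """Equation (2): some available action has u >= k and some other has u <= k'."""
    values = [u[a] for a in available]
    return max(values) >= k and min(values) <= k_prime

print(agent["debris-ahead"])                          # swerve-around-debris
print(is_powerful(list(utility), utility, 40, -400))  # True
```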

The quartet R1–R4; OS-rooted ethical control (R4)

Now, given that Daimler must build ICAP self-driving cars, that is, cars which are not only intelligent, but m-creative, autonomous, and powerful, simple prudence dictates that they must take great care in doing so. This is easy to see, if we consider not the realm of ground transportation, but the less dangerous realm of cooking within a home. Suppose, specifically, that you have a robot-manufacturing company, one that makes humanoid robot cooks that are not only intelligent, but also m-creative, autonomous, and powerful. We do not have to delve into the details of these ICAP cooks, because the reader can quickly imagine, at least in broad strokes, what the combination of I and C and A and P means in a standard kitchen. An ICAP robot chef would be able to figure out how to put together a wonderful dinner even out of ingredients that have not been purchased in connection with a recipe; would be able to command the kitchen in a manner independent of control exercised by other agents, including humans; and would have the power to start a fire that could cause massive disutility. In this hypothetical scenario, your robot-building company would have four reasons to hire the logicians: to formally verify the software constituting the “mind” of the robo-chef; to provide transparent software to inspect, debug, and extend; to enable the robo-chef to justify and explain, in cogent natural language, what it has done, is doing, and will be doing, and why; and to guarantee that the robot cook will not be doing anything that is unethical, and will do what is obligatory, courteous, and—perhaps sometimes—heroic.Footnote19

What the story about the robot chef shows is that the need, on the part of Daimler, to hire the computational logicians flows directly from the need to be careful and thorough about ICAP cars, for four reasons/technologies: R1–R4. As we have said, our emphasis herein is on R4, but before discussing this reason for turning to logic for help, we now briefly run through the first three reasons, and explain briefly how they are inseparably linked to logic.

R1 This first reason for hiring the computational logicians refers to the formal verification of the computer program(s) that control(s) the self-driving car. Anything short of such verification means that belief that the programs are correct (and belief that therefore behavior caused by the execution of these programs will be correct) stems from mere empirical testing—but of course no matter how many empirical tests our self-driving car passes, it will still be entirely possible that under unforeseen conditions the car will behave inappropriately, perhaps causing great harm by virtue of its power. We refrain from providing even the core of an account of formal program verification here.Footnote20 We simply point out here that it is wholly uncontroversial that the one and only way to formally verify a self-driving car, on the assumption that its significant behavior through time conforms to the shape of Equations (1) and (2), which means that this behavior is the product of the execution of computer programs, is to rely upon formal logic. The previous sentence is simply a corollary entailed by the brute fact that formal verification of software, in general, is a logicist affair.Footnote21

R2 Statistical techniques for engineering intelligent agents have the great disadvantage of producing systems that compute the overall functions of Equations (1) and (2) in black-box fashion. Such techniques include those that fall under today’s vaunted “machine learning,” or just “ML.” Logicist AI, in contrast, yields intelligent agents that are transparent (Bringsjord Citation2008b). Certainly ceteris paribus we would want to be able to see exactly why a self-driving car did or is doing something, especially if it had performed or was performing destructive or immoral actions, but without a logicist methodology being employed, this will be impossible.Footnote22

R3 The third reason to seek out the help of computational logicians is to obtain technology that will allow self-driving ICAP cars to cogently justify what they have done, are doing, and plan to do, where these justifications, supplied to their human overseers and customers, include the context of other objects and information in the environment. Cogent justification can be provided in a manner well shy of formal or informal proof. In fact, we have in mind that such justification cannot be delivered in the form of a formal proof—the reason being that a justification in this form would fail to be understandable to the vast majority of humans concerned to receive a justification in the first place. What is needed here from the machine is a clear, inferentially valid argument expressed in a natural language like German or English.Footnote23 This would be an exceedingly nice thing to receive, for instance, from the vehicle recently involved in Google’s first (self-driving) car accident.Footnote24 Without this capability, the Department of Motor Vehicles is utterly at the mercy of human employees at Google. Moreover, it is far from clear that even Google understood immediately after the accident what happened, and why. Even a responsible novice human driver, immediately after such a crash, could have promptly delivered, on the spot, a lucid explanation. Obviously, AI must provide this kind of capability, at a minimum.

But such capability entails that the underlying technology be logicist in nature. Minimally, a perspicuous argument must be composed of explicit, linked declarative statements, where the links are sanctioned by schemas or principles of valid inference, and the argument is surveyable and inspectable by the relevant humans.Footnote25 Relevant here is the empirical fact that while natural-language understanding (NLU) systems can be and often are (unwisely, in our opinion) rationally pursued on the basis of thoroughgoingly non-logicist methodology, this option is closed off when the challenge is natural-language generation (NLG) of substantive and semantic argumentation and proof. Evidence is provided by turning to any recent, elementary presentation of NLG techniques and formalisms (e.g., see Reiter and Dale Citation2000).Footnote26

R4 Now we come to our focus: the fourth member of the quintet of technologies that can be provided only by the computational logicians: OS-rooted ethical control. It is easy to convey the gist of what this technology provides: It ensures that ICAP self-driving cars operate ethically (a concept soon to be unpacked), as a function not merely of having had installed a high-level software module dedicated to this purpose, but because ethical control has been directly connected to the operating-system level of such cars; that is, because ethical control is OS-rooted.

The distinction between merely installing a module for ethical control as an engineering afterthought, versus first-rate engineering that implements such control at the operating-system level (so that modules allowing impermissible actions can trigger detectable contradictions with policies at the OS level), has been discussed in some detail elsewhere: (Govindarajulu and Bringsjord Citation2015). There, the authors write about two very different possible futures in the ethical control of artificial intelligent agents, one dark and one bright; these futures are depicted graphically in Figure 3. The basic idea is quite straightforward, and while the original context for explaining and establishing the importance of rooting the ethical control of ICAP members of AIA was a medical one, it is easy enough to transfer the basic idea to the domain of driving, with help from simple parables that parallel the rather lengthy one given in (Govindarajulu and Bringsjord Citation2015): Imagine that a self-driving ICAP car has had a “red” high-level module installed that precludes a combination of excessive speed and reckless lane-changing, and that in order to make it possible for the car to be suitably used by a particular law-enforcement agency in high-speed chases of criminals attempting to escape, this module has been (imprudently and perhaps illegally) stripped out by someone in the IT division of that agency. (Notice the red module shown in Figure 3.) At this point, if the car discovers that some state-of-affairs of very high utility can be secured by following an elaborate, m-creative plan that includes traveling at super-high speeds with extremely risky lane changing, then, given that the car is autonomous and powerful and that the red module has been ripped out, it proceeds to execute its plan to reach that state-of-affairs. This could be blocked if the policies prohibiting the combination of speed and risky lane-changing had been engineered at the OS level, in such a way that any module above this level causing an inconsistency with the OS-level policies cannot be executed.
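A toy sketch can make the contrast vivid. The code below is our own illustration, not the architecture of (Govindarajulu and Bringsjord Citation2015): the prohibition lives at a level that high-level modules cannot strip out, so any plan whose actions contradict the OS-level policy is refused even after the "red" module has been removed.

```python
# A toy illustration (ours, not the cited architecture): an OS-rooted policy
# check that vetoes any plan inconsistent with OS-level prohibitions, so
# stripping out a high-level ethics module does not unlock the behavior.
from typing import List, Set

OS_LEVEL_PROHIBITIONS: Set[str] = {"excessive-speed+reckless-lane-change"}

def os_rooted_execute(plan: List[str]) -> str:
    """Refuse to execute any plan containing an OS-level-prohibited action."""
    for action in plan:
        if action in OS_LEVEL_PROHIBITIONS:
            return f"BLOCKED at OS level: {action}"
    return "executed: " + ", ".join(plan)

# The m-creative, high-utility chase plan devised after the high-level
# "red" module was stripped out by the agency's IT division:
chase_plan = ["merge-onto-highway", "excessive-speed+reckless-lane-change"]

print(os_rooted_execute(chase_plan))                          # BLOCKED at OS level: ...
print(os_rooted_execute(["merge-onto-highway", "exit-ramp"])) # executed: ...
```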

Figure 3. Two possible futures.


But now we come to the obvious question: What is meant by “ethical control,” regardless of the level that this concept is implemented at? There is already a sizable technical literature on “robot ethics,” and a full answer to this question would require an extensive and systematic review of this literature in order to extract an answer. Doing this is clearly impracticable in the space remaining, and would in fact be a tall order even in a paper dedicated solely to the purpose of answering this query.Footnote27 Bringsjord’s most-recent work in robot ethics has been devoted to building a new ethical hierarchy [inspired by Leibniz’s ethical theory and by 20th-century work on ethics by Chisholm (Citation1982)], for humans and robots (Bringsjord Citation2015a). This hierarchy includes not just what is commonly said to be obligatory, but what is supererogatory.Footnote28 It does seem to us that the latter category applies to robots, and specifically to self-driving cars. That which is supererogatory is right to do, but not obligatory. For instance, it is right to run into a burning building to try to save someone, but this is not obligatory. It might be said, plausibly, to be heroic. For another example, consider that it is right to say such things as “Have a nice day” to a salesperson after buying a cup of coffee, but such pleasantries are not obligatory.

Inspired in part by (Scheutz and Arnold Citationforthcoming), we have been investigating, in “test-track” fashion in our laboratory, intelligent artificial agents of the ICAP varietyFootnote29 that, in driving scenarios, size things up and sometimes decide to behave in supererogatory fashion. Figure 4 shows a robot perched to intervene in supererogatory fashion in order to prevent Bert (of Sesame Street fame) from being killed by an onrushing car. In order to save Bert’s life, the robot must sacrifice itself in the process. Would we wish to engineer life-size ICAP self-driving cars that are capable of supererogatory actions? If so, the computational logicians will need to be employed.

Figure 4. A demonstration of supererogatory ethical control. The “action” happens below the robot and the table it is on. The self-driving ICAP car to the far left of Bert will flatten him to the great beyond—unless the robot from above heroically dives down to block this onrushing car.


At this point, a skeptical reader might object as follows: “But why should we accept the tacit proposition, clearly affirmed by the two of you, that ethics must be based in logic?”

We now answer this question, and use that answer as a springboard for moving from consideration of self-driving cars engineered to carry out supererogatory actions, to the more realistic engineering target—and the target that Daimler needs to put within its sights—of ICAP cars engineered to meet their obligations.

So, why does a need for ethical control entail a need for control via logic? Well, the fact is, there is no other route to achieve precision in ethics than to use logic. This is reflected in the fact that for millennia, ethical theories and ethical principles have been expressed in declarative form (albeit often informally), and the defense of these theories and principles has also been couched in logic (though again, usually in informal logic). Whether it is Socrates articulating and defending (before the first millennium) the view that knowledge, especially self-knowledge, is a moral virtue; whether it is Kant defending his position that one ought without exception to always act in a manner that can be universalized; whether it is Mill setting out and defending the first systematic version of the view that what ought to be done is that which maximizes happiness and minimizes pain; regardless, the commonality remains: that which these great ethicists did was to reason over declarative statements, in ways that are naturally modeled in formal logic, usually specifically in formal deductive logic.

We can certainly be more specific about the ethics-logic nexus. For example, the Millian ethical theory act utilitarianism consists in the biconditional statement that an agent a’s action α at time t is obligatory if and only if α produces the greatest utility for the greatest number of agents from among all actions that a can perform at t. In opposition, a Kantian deontological ethical theory holds that, where D is a collection of obligations that uncompromisingly require actions wholly independent of the consequences of those actions, an agent a’s action α at time t is obligatory if and only if performing α at t is logically entailed by one or more of the rules in D. Even those with only a slight command of elementary formal logic will instantly see that if one were asked to take the time to render these theories in a more rigorous form amenable to implementation in a computing machine, one would inevitably set out these two theories by employing the machinery of elementary, classical logic: for example, minimally, quantification (over agents, actions, times, etc.), an operator or predicate to represent “obligatory,” and the five suitably deployed truth-functional connectives. Our point here is not at all how the specific formulae would be specified; rather, our point at the moment, given in response to the question above, is that any such specification would draw from the machinery of formal logic, of necessity.
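By way of illustration only, and making no claim to a canonical formalization, the two biconditionals might be rendered roughly as follows, where Ob is an obligation predicate, Avail(a, α, t) (a predicate of our own choosing) says that action α is available to agent a at time t, u is the utility measure of Equation (1), Does(a, α, t) says that a performs α at t, and D is the collection of deontological rules:

Ob(a, α, t) ↔ ∀α′ [ Avail(a, α′, t) → u(α′) ≤ u(α) ]   (act utilitarianism, simplified)

Ob(a, α, t) ↔ ∃r [ r ∈ D ∧ r ⊢ Does(a, α, t) ]   (Kantian deontology, simplified)

Nothing hangs on these particular renderings; the point is simply that any such rendering is built from quantifiers, connectives, and an obligation operator, that is, from the machinery of formal logic.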

In addition, particular ethical principles can be logically deduced from a combination of ethical theories expressed in rigorous, declarative fashion, in combination with a particular context at hand. For example, under the supposition that act utilitarianism as expressed in the present paragraph holds, if at some time a self-driving car faces but two incompatible options, in one case to cause a small, single-passenger car to move slightly to the right in its lane in order to save one human life (call this action α1), and in the other to cause a large truck to move slightly to the right in its lane in order to save 100 human lives (α2), it follows (assuming that there are no other relevant consequences) that the car ought to perform α2. Formal logic can be used to render all such reasoning entirely explicit, implementable in an ICAP self-driving car, and checkable by any number of computer programs. Hence, while we do not, at least in the present paper, urge Daimler and its corporate cousins to engineer ICAP self-driving cars to perform supererogatory actions, we do urge these companies to hire computational logicians in order to engineer self-driving ICAP cars that meet their obligations. Figure 5 shows a snapshot of a scenario in which, in our lab’s test environment, an ICAP self-driving car manages to save Bert’s life by causing a slight deviation in the path of another car whose former route would have caused Bert’s demise. In macroscopic, real-life form, this is the kind of behavior that Daimler must seek from its self-driving cars, courtesy of what computational logicians can supply. Note that meeting an ethical obligation can sometimes entail violation of a standard driving rule or law. In the case of the obligation satisfied by the self-driving ICAP car shown in Figure 5, the action requires a slight crash into the car that would otherwise kill Bert. It is therefore important to understand that the mechanization of an ethical theory in OS-rooted fashion does not at all mean that the self-driving cars in question will be inflexible. Quite the contrary. As we noted earlier, m-creativity on the roadways can entail violation of standard operating procedure and standard rules of traffic law.
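For readers who want to see the "checkable by computer programs" claim in miniature, here is a deliberately trivial sketch, with utilities invented by us rather than taken from our lab's system, of the deduction just described under act utilitarianism: among the incompatible options, the one with greatest total utility is the obligatory one.

```python
# A deliberately trivial sketch (invented utilities): under act
# utilitarianism, the obligatory option is the available action with the
# greatest total utility.
options = {
    "alpha1-nudge-small-car-right": 1,     # lives saved
    "alpha2-nudge-large-truck-right": 100,
}

obligatory = max(options, key=options.get)
print(obligatory)   # alpha2-nudge-large-truck-right
```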

Figure 5. A demonstration of obligation-only ethical control. Once again the “action” happens below the robot and the table it is on; and once again, the self-driving ICAP car to the far left of Bert will flatten him to the great beyond—but the other self-driving ICAP car meets its obligation by deflecting the onrushing car, thereby keeping Bert and his acting career alive and well.


It is important to be clear at this juncture that logicist machine/robot ethics has reached a level of maturity that allows services to be provided to Daimler et al. that would in turn allow such companies to install ethical-control technology in their self-driving ICAP vehicles. This maturity was certainly not in place at the time of (Arkoudas, Bringsjord, and Bello Citation2005), nor at the time of (Bringsjord, Arkoudas, and Bello Citation2006), but over a solid decade of productive, well-funded toil has been expended since then, and it is high time to stop idly fretting about ethical problems that machines, vehicles included, will face, and start hiring the computational logicians to provide technology that allows machines to solve such problems. We need to move from philosophizing and fretting, to engineering and testing. As long as the underlying ethical theory is selected, the computational logicians can mechanize ethical control on the basis of that selection.

The “tentacular” demandingness problem (R5)

We come now to the fifth and final reason (R5 in The Core Argument; see again Figure 1) the computational logicians are needed by Daimler and their competitors. This reason is revealed by first taking note of the empirical fact that the ethical theory that clearly is (or—after assimilation of the present paper—would be) the first choice of the companies working on ICAP self-driving cars, and of the governments that regulate such work, is act utilitarianism, already defined above in at least rough-and-ready, but reasonably accurate, form. In other words, the ethical control of self-driving ICAP cars, at least in the early days of designing and engineering such control, will be based on act utilitarianism. But act utilitarianism appears to be very vulnerable, in light of the so-called demandingness objection. While even the generally educated public is aware of the fact that utilitarianism has long been thought by many to be untenable because it appears to classify such actions as torture and brutal slavery to be not only permissible but obligatory,Footnote30 the demandingness objection to utilitarianism flies beneath the laic radar. While above we directed the reader to an excellent, recent, in-print presentation of the objection (i.e., Scheffler Citation1982), the first author’s first exposure to this objection, in perfectly cogent form, came from one of his professors in graduate school: Michael Zimmerman; and Bringsjord can still remember both the simplicity and the sting of the objection. Zimmerman pointed out that sometimes when one is reading a magazine, one comes across a full-page request for a donation, on which it is stated explicitly that a donation of some small amount of money will save a starving child in Africa by providing enough food for the child for an entire year. Usually a moving photograph of such a child, malnourished and in dire straits, is included on the page. Zimmerman asked: If you subscribe to act utilitarianism, how can you turn the page and not donate, and at the same time satisfy your moral obligations? After all, at the time that you either turn the page or donate, the latter action clearly produces the most happiness for the most people, among the actions available to you at that time. So, suppose you go ahead and donate: You pick up the phone, dial a toll-free number, and give your credit card to make a sizable contribution. You have at this point put yourself a bit behind schedule for getting done a bit of grocery shopping for your dinner later on the same day, so now you need to move quickly to get back on track. However, the minute you walk out your door, you come upon an impoverished, disheveled beggar without legs, sitting on the sidewalk, holding a sign that pleads for any amount of money to help support him and his young family. Giving some cash in this case, among the other actions available to you (and certainly compared with simply walking past the beggar toward the gleaming, well-stocked grocery store), would appear to be an action you are obligated to perform by act utilitarianism, since as a matter of fact you do not really need the groceries it will now take you a solid 45 minutes to obtain, and you could cobble together a simple dinner for the night out of canned goods you have in store already, and while you planned to stop and fetch a bottle of wine as well, the wine is superfluous, and pretty much produces only happiness for you. Thus, being a good utilitarian, you give the cash you have to the beggar.
You then wonder why this fellow has not received assistance from the rescue mission just around the corner, and then you remember that that mission has of late had a sign posted on its front door asking for volunteers, no special skills required, etc. At this point the reader will understand the objection, which amounts in the end to the apparent fact that act utilitarianism demands actions from people in a manner that leaves them utterly unable to accomplish their own private agendas. We generally assume that the pursuit of these agendas is perfectly permissible, ethically speaking; if this assumption is correct, utilitarianism is wrong.

What does this have to do with self-driving ICAP cars? It should be rather obvious. Because these cars are going to be capable of multi-agent reasoning of a highly sophisticated form, they are going to perceive all sorts of opportunities to produce gains in utility that far outweigh the small amounts of utility produced by merely shuttling you from point A to point B. They are also going to perceive countless opportunities to prevent losses of utility, where these opportunities are orthogonal to traveling the route that you, for “selfish” reasons, wish to cover. In fact this is to put the problem in very gentle terms. For the fact is that the multi-agent power of a large group of self-driving ICAP cars will be quite staggering. One can grasp this by returning to the simple Equations (1) and (2) given above. Given that we are now talking about the “hive” intelligence of millions of cars, spread out across roads and non-roads (there are currently about 250 million cars operating in the U.S.), the percepts represented by for a single artificial intelligent agent are but a tiny part of the overall story. The real picture requires us to build up a much more complicated model. If we pretend, in order to foster exposition, that our hive of millions of self-driving ICAP cars will begin the search for a coordinated plan at a time point of inception at which each perceives its corresponding , the power of the hive from a utilitarian point of view would be such that [a multi-agent cousin of Equation (2)]:

(3)

And this equation, if implemented and used as a guide to action selection for interconnected self-driving ICAP cars, obviously presents a fierce form of the demandingness objection: what we call the tentacular demandingness objection. We appear to be looking at a future in which, if our machines are good utilitarians, they will be obligated to sweep aside our own petty-by-comparison agendas, in favor of goals and plans that are alien to us, and beyond our cognitive capacity to grasp in anything like real time.
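As a rough, purely illustrative companion to Equation (3), consider the following Python sketch (the agents, plans, and utility figures are hypothetical and ours alone, not the equation itself): once interconnected agents pool their percepts and plan jointly, the aggregate utility available from coordinated altruistic plans dwarfs the utility of any individual passenger's trip, and an act-utilitarian hive is thereby "obligated" to set the trips aside.

```python
# Illustrative sketch (not Equations (1)-(3) themselves): a tiny "hive" of
# cars pools its options, enumerates candidate joint plans, and -- if it is a
# good act utilitarian -- must select the plan with maximal aggregate utility.

from itertools import product

# Hypothetical per-car options: serve the passenger, or pursue a perceived
# high-utility opportunity orthogonal to the passenger's route.
CAR_OPTIONS = [
    {"plan": "drive passenger from A to B", "utility": 5},
    {"plan": "reroute to assist at a perceived emergency", "utility": 400},
]

def best_joint_plan(num_cars):
    """Exhaustively search joint plans for a (tiny) hive and return the
    utility-maximizing one, as an act-utilitarian hive would be bound to."""
    best = max(product(CAR_OPTIONS, repeat=num_cars),
               key=lambda joint: sum(opt["utility"] for opt in joint))
    return best, sum(opt["utility"] for opt in best)

joint, total = best_joint_plan(num_cars=3)
for i, opt in enumerate(joint):
    print(f"car {i}: {opt['plan']}")
print("aggregate utility of the obligatory joint plan:", total)
# No car serves its passenger; scaled to millions of cars (and to the wider
# internet of things), the gap only widens -- the tentacular demandingness
# objection in miniature.
```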

It seems to us that the problem becomes even more intense, and more intractable, when one realizes that, given the “internet of things,” the hive-mind in question is not composed only of the artificial intelligent agents that are self-driving ICAP cars. We are not even talking merely of a vehicular hive-mind. The hive-mind will include all manner of artifacts ubiquitous in the environment, from top to bottom: lights of all kinds, gates, sensors of all varieties, mobile devices, smart toys, service robots, smart weapons, self-moving ICAP vehicles in the air and water, softbots and conversational agents interacting with humans through innumerable interfaces of various types; and so it goes and grows, ad infinitum. An interesting side effect of coming to see what our future in this regard will be like is the realization that the somewhat silly ethical dilemmas like trolley problems for self-driving ICAP cars will eventually, in the multi-agent AI future we have foreseen, be so unlikely as to be a waste of time to worry about now, contra the concern of, for instance, Lin (Citation2015) and Goodall (Citation2014). After all, even today, for human drivers whose percepts and range of actions are severely limited compared to the hive-mind machines in our future, trolley-problem dilemmas are few and far between, to put it mildly. The gradual coming together of the hive-mind will see to it that such dilemmas are almost always prevented from forming in the first place. It is no accident that the familiar trolley problems are posed in almost ridiculously low-tech environments.

To be clear, we are not saying that the cases entertained in the likes of (Goodall Citation2014; Lin Citation2015) will never occur. What we are saying in the present paper is that whereas these cases may happen, and while the thinkers writing about them are providing a commendable service in doing so, these cases will be exceedingly rare; the demandingness problem, in contrast, is not unlikely at all: in fact it is guaranteed to face Daimler et al. on a continuous basis. In addition, the philosopher’s dilemma cases for self-driving cars are not ones that involve massive amounts of utility and disutility hanging in the balance.Footnote31

But what is the solution to the tentacular demandingness problem that we have described? Some readers may think the problem is easily surmounted, because humans are, after all, in charge, and we can simply engineer the AIs, whether of the vehicular variety or not, in such a way that they do not seek to leverage their percepts and their power to maximize utility and minimize disutility. Unfortunately, since, as we have pointed out, utilitarianism is the dominant guiding theory for engineering AI in the service of humanity, it would be strangely ad hoc to restrict the power of self-driving ICAP cars to make the world a better place.

Providing a solution here to the tentacular demandingness problem is beyond the scope of our present objectives. We rest content with the observation that the problem cannot be solved without the assistance of the logicians, who will need to be called upon to apply their formal techniques and tools to a tricky philosophical problem, something they have been doing for many, many centuries.

Conclusion; next steps

We conclude that those organizations intent on building intelligent self-driving cars, in light of the fact that these cars, for reasons we have given, will be ICAP ones, must hire the computational logicians for (at least) the five determinate reasons we have given. We are well aware of the fact that the budgets, engineering approaches, and business models of companies busy striving to bring self-driving cars to market are in large measure threatened by what we have said. For example, in our experience, companies are often intent on using clunky, unverified legacy software. In the case of self-driving ICAP cars, what we have revealed will require an engineering cleanliness that can only be obtained if OS-level code is rewritten from scratch in accordance with the Curry-Howard Isomorphism to reach pristine rock-solidness, and then connected to logicist code at the module level for ethical control on an ongoing basis. Along the same disruptive lines, for reasons given above, self-driving ICAP cars cannot be wisely or even prudently engineered on the basis of machine-learning techniques—and yet such techniques are the dominant ones in play today in AI. This must change, fast.
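To give a flavor of what “rewritten in accordance with the Curry-Howard Isomorphism” involves, here is a toy logical illustration in Lean 4 (our own example, not the authors' proposed OS-level methodology): under the isomorphism a program simply is a proof of the proposition expressed by its type, so code whose type-checking succeeds carries its own correctness certificate.

```lean
-- Toy Curry-Howard illustration (Lean 4): the term below is simultaneously
-- a proof of the proposition its type expresses and an executable program.
-- Proposition: if A implies B and A holds, then B holds (modus ponens).
theorem modus_ponens {A B : Prop} (h : A → B) (a : A) : B := h a

-- The computational shadow of the same shape: function application.
def applyFn (f : Nat → Nat) (n : Nat) : Nat := f n

#eval applyFn (fun n => n + 1) 41  -- 42
```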

Obviously much logicist research remains to be carried out in the self-driving-car space—and indeed, because the topics traversed above are not in the least restricted to cars, we can say that much logicist work remains to be performed in the self-moving-vehicle space.Footnote32 Since we have only discussed very rapidly the first three things (R1–R3) that, by , computational logicians must be hired to provide, the trio needs to be taken up in sustained fashion in future work. For example, if we are right that ICAP self-moving vehicles must have the ability to (in some cases) cogently self-justify why they did what they did (especially if what they did was objectionable to relevant humans), why they are doing what they are doing, and why they propose performing some future actions, it will be necessary that NLP engineering of a logicist sort be carried out by the relevant companies and engineers. By definition, cogent justifications are based on interleaved language and logic (whether the underlying logic in that interleaving is formal or informal, and whether deductive or inductive logic is used). Yet, to our knowledge, none of the relevant formal theory, let alone the technology upon which it would be based, has been developed by the corporations busy building self-moving vehicles. This state of affairs needs to change posthaste.
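Since, as just noted, neither the formal theory nor the technology yet exists, the following Python sketch is offered purely to fix ideas (the rule names, predicates, and English templates are our own hypothetical inventions): each deductive step a vehicle takes is paired with a natural-language gloss, so that the resulting justification interleaves proof and prose in the way a cogent self-justification would require.

```python
# Hypothetical sketch: pairing each deductive step a vehicle takes with an
# English gloss, so the final justification interleaves logic and language.

from dataclasses import dataclass

@dataclass
class Step:
    formula: str   # the inferred formula (logic side)
    rule: str      # the inference rule or source applied
    gloss: str     # the natural-language rendering (language side)

def justify(steps):
    """Render an ordered proof-plus-prose justification."""
    lines = []
    for i, s in enumerate(steps, 1):
        lines.append(f"({i}) {s.formula}   [{s.rule}]")
        lines.append(f"    i.e., {s.gloss}")
    return "\n".join(lines)

braking_case = [
    Step("Obstruction(lane_1)", "perception",
         "an obstruction was detected in lane 1"),
    Step("Obstruction(lane_1) -> Obligatory(brake)", "traffic-safety axiom",
         "if the lane ahead is obstructed, braking is obligatory"),
    Step("Obligatory(brake)", "modus ponens",
         "therefore the car was obligated to brake, and did so"),
]

print(justify(braking_case))
```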

There is of course a specific problem that should be targeted in near-term subsequent work: In light of how problematic the use of utilitarianism is as an undergirding moral theory for ethical control of self-moving vehicles, what should be done? How should the engineering of ethical control for vast numbers of interconnected self-moving vehicles proceed, in light of the hive-mind version of the demandingness objection revealed above? Bringsjord’s view, notwithstanding the fact that much public policy is implicitly based on utilitarian calculation, is that the undergirding moral theory for ethical control of self-moving vehicles should not be utilitarianism, nor for that matter any form of consequentialism, but should instead be based on Leibnizian ethics and the hierarchy . A defense of this view, and of a better foundation for ethical control, will need to wait for another day.

Acknowledgments

Bringsjord is deeply grateful to Robert Trappl and OFAI for the opportunity to present in Vienna a less-developed version of the case made in the current paper, and for the vibrant audience feedback he received in the City of Music. Whatever deficiencies still remain in the case in question are due solely to oversights of Bringsjord’s, ones possibly due in part to the ingestion of delectable Grüner Veltliner during his post-talk analysis of said feedback. The authors express their deep gratitude to ONR for support under a MURI grant devoted to investigating moral competence in machines, and to our brilliant collaborators (M. Scheutz, B. Malle, M. Si) in that grant. Special thanks are due to Naveen Sundar Govindarajulu for a trenchant review, and to Paul Bello and Alexander Bringsjord for subjecting our prose to their keen eyes. Finally, thanks also to Karin Vorsteher for supernatural patience and crucial contributions.

Notes

1 While we are aware of the technological prowess of Daimler in the self-driving sphere, “Daimler” is here only an arbitrary stand-in for the many companies who are steadfastly aiming at engineering self-driving cars: General Motors, Ford, Google, Tesla, BMW, and on and on it goes. Formalists can regard “Daimler” to be an arbitrary name (suitable for universal generalization) relative to the domain of companies operating in the self-driving car sector.

2 Such dilemmas are nonetheless fertile soil for investigating the nature of (untrained) human moral cognition (e.g., Malle et al. Citation2015), and for informally investigating the informal, intuitive basis of much law (e.g., Mikhail Citation2011). In addition, experimental philosophy (Knobe et al. Citation2012) certainly makes productive use of such dilemmas.

3 takes for granted elementary distinctions that unfortunately are sometimes not made even in the literature on self-driving cars and machine ethics. For example, Lin writes: “I will use ‘autonomous,’ ‘self driving,’ ‘driverless,’ and ‘robot’ interchangeably” (Lin Citation2015, p. 70). This conflation may be convenient for some purposes, but logically speaking it makes little sense, given for instance that many systems operating independently of human control and direction over extended periods of time are not autonomous. Someone might insist that, say, even an old-fashioned, mechanical mouse trap is autonomous (and hence so is, say, a mine running a simple computer program), but clearly this position makes no sense unless autonomy admits of degrees. We encapsulate below (“Reason #3: A retreat to mere measurement of autonomy secures the implication”) a degree-based concept of autonomy that can undergird . On could-have-done-otherwise accounts of autonomy (briefly discussed in “Reason #3: A retreat to mere measurement of autonomy secures the implication”), neither a mouse trap nor a mine is autonomous, nor is a “driverless” car an autonomous car.

4 Now of course known far and wide as the “Turing test.”

5 Indeed, quite the contrary, since Joel Benjamin, the grandmaster who consulted to the IBM team, inserted his own knowledge of such specific topics as “king safety” into the system. For a discussion, see (Bringsjord Citation1998).

6 The designs can be found in (Ferrucci et al. Citation2010). For further analysis of Watson see, e.g., (Govindarajulu, Licato, and Bringsjord Citation2014).

7 In the talk in question, Bringsjord’s reference was to Mozart’s Don Giovanni, which Kierkegaard (Citation1992) argued is the highest art produced by humankind to that point.

8 We would be remiss if we did not point out that the work of Cope in musical creativity might well be a candidate for in . Essentially, Cope’s view, set out, e.g., in (Cope Citation2005), is that pretty much all problem-solving can be regarded as creativity at work. Early in his book Cope gives an example of a logic puzzle that can be solved by garden-variety deduction, and says that such solving is an instance of creativity at work. While there should be no denying that intelligence implies problem-solving (i.e., more carefully, that if is intelligent, then has some basic problem-solving capability), the problem is that Cope’s claim that simple problem-solving entails creativity is a very problematic one—and one that we reject. Evidence for our position includes the fact that simple problem-solving power is routinely defined (and implemented) in AI without any mention of creativity. For example, see the “gold-standard” AI textbook (Russell and Norvig Citation2009). Please note that MacGyveresque creativity (m-creativity) is not garden-variety problem-solving.

9 For example, see (Charniak and McDermott Citation1985).

10 These shallow techniques all leverage statistical “learning” over large amounts of data. But many forms of human learning (indeed, the forms of learning that gave us rigorous engineering in the first place!) are based on understanding only a tiny number of symbols that semantically encode an infinite amount of data. A simple example is the Peano Axioms (for arithmetic). Another simple example is the long-known fact that all of classical mathematics as taught across the globe is represented by the tiny number of symbols it takes to express axiomatic set theory in but a single page. Elementary presentation of such “big-but-buried” data (Bringsjord and Bringsjord Citation2014) as seen in these two examples is provided in (Ebbinghaus, Flum, and Thomas Citation1994). Perhaps the most serious flaw infecting the methodology of machine learning as a way to engineer self-driving ICAP cars is that obviously it would be acutely desirable for quick-and-interactive learning to be possible for such cars—but by definition that is impossible in the paradigm of ML. If Bringsjord’s automatic garage door seizes up before rising all the way, and he MacGyveresquely commands (in natural language) his car to deflate its tires to allow for passing just underneath and in, the car should instantly learn a new technique for saving the day in all sorts of tough, analogous spots, even if there is not a shred of data about such m-creative maneuvers.

12 Various places online carry lists of problems ingeniously solved by MacGyver. For example, see http://macgyver.wikia.com/wiki/List_of_problems_solved_by_MacGyver

13 Elsewhere, one of us, joined by others, has written at some length about the nature of m-creativity, in connection with famous problems invented and presented to subjects in experiments by the great psychologist Jean Piaget (Bringsjord and Licato Citation2012). We leave this related line of analysis and AI aside here, in the interest of space conservation. For a sampling of the Piagetian problems in question, see (Inhelder and Piaget Citation1958). This is perhaps a good spot to mention that readers interested in m-creativity at a positively extreme, peerless level that exceeds the exploits of MacGyver will find what they are looking for in the problem-solving exploits of the inimitable “egghead” genius, Prof. Augustus Van Dusen, a.k.a. “The Thinking Machine.” Perhaps the most amazing display of his m-creativity is in Van Dusen’s escape from prison cell 13. See (Futrelle Citation2003).

14 This simple problem, and other kindred ones, are presented in (Glucksberg Citation1968), in connection with a discussion of what we have dubbed m-creativity, but what Glucksberg considers to be creativity simpliciter.

15 In demonstrations in our lab, our focus is on m-creativity for miniature self-driving ICAP cars that have at their disposal the ability to move objects by plowing and nudging them.

16 We provide here no detailed coverage of Kolmogorov complexity. For such coverage, Bringsjord recommends that readers consult (Li and Vitányi Citation2008).

17 The string should also represent the varying state of the environment. Without loss of generality, we leave this aside for streamlining.

18 Disclaimer: Formally inclined-and-practiced readers will not find in the equations below full rigor. We do not even take a stand here on how large is the set of available agents (which would of course be assumed to minimally match the countably infinite set of Turing machines). The present essay is a prolegomenon to “full engagement” for the computational logicians.

19 Such as, e.g., extinguishing a fire, or retrieving a human from a fire that would otherwise have seen the human perish—even if it means that it (= the robot) will itself expire.

20 For an efficient book-length introduction at the undergraduate level, the reader can consult (Almeida et al. Citation2011); a shorter, elegant introduction is provided in (Klein Citation2009). For Bringsjord’s recommended approach to formal verification, based aggressively on the Curry-Howard Isomorphism, readers can consult (Bringsjord Citation2015b), the philosophical background and precursor to which is (Arkoudas and Bringsjord Citation2007).

21 At the moment, for formally verified operating-system kernels, the clear frontrunner is seL4 (https://sel4.systems). It runs on both x86 and ARM platforms, and can even run the Linux user space, currently only within a virtual machine. It is also open-source, including the proofs. We see no reason why our ethical-control logic (discussed below) could not be woven into seL4 to form what we call the ethical substrate. For a remarkable success story in formal verification at the OS-level, and one more in line with the formal logics and proof theories our lab is inclined to use, see (Arkoudas et al. Citation2004).

22 Of course, some logicist ML techniques produce declarative formulae, which are transparent. For example, so-called inductive logic programming produces declarative formulae (for a summary, see Alpaydin Citation2014). But such formulae are painfully inexpressive conditionals, so much so that they cannot express even basic facts about the states of mind of the humans ICAP cars are intended to serve. In this regard, see (Bringsjord Citation2008a).

23 This need immediately brings forth a related need on the machine-ethics side to regulate what self-driving ICAP cars say. For example, Clark (Citation2008) has demonstrated that using the underlying logic of mental-models theory (Johnson-Laird Citation1983), a machine can deceive humans by generating mendacious arguments.

24 See “Google’s Self-Driving Car Caused Its First Crash,” by Alex Davies, in Wired, 2/29/16. The story is currently available online at http://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash. Davies writes: “In an accident report filed with the California DMV on February 23 (and made public today), Google wrote that its autonomous car, a Lexus SUV, was driving itself down El Camino Real in Mountain View. It moved to the far right lane to make a right turn onto Castro Street, but stopped when it detected sand bags sitting around a storm drain and blocking its path. It was the move to get around the sand bags that caused the trouble, according to the report …” While we do not press the issue, it turns out that the accident could have been avoided by the self-driving car in question had it been using computational logics (e.g., Bringsjord and Govindarajulu Citation2013) able to represent and reason about the epistemic attitudes of nearby human drivers. Such logics in our lab, with help from Mei Si and her body of work, can be augmented to directly reflect the modeling of emotion, as, e.g., pursued in (Si, Marsella, and Pynadath Citation2010).

25 Note that informal logic revolves around arguments, not proofs. An excellent overview of informal logic is provided in “Informal Logic” http://plato.stanford.edu/entries/logic-informal in the Stanford Encyclopedia of Philosophy. This article makes clear that informal logic concentrates on the nature and uses of cogent arguments.

26 Of course, there are some impressively engineered machine-learning systems that do such things as take in images and generate natural-language captions (e.g., Vinyals et al. Citation2015). But even run-of-the-mill sustained abstract argumentation and proof, at least at present, would require a rather more logicist framework; e.g., Grammatical Framework (Ranta Citation2011).

27 Should the reader be both interested and willing to study book-length investigations, here are four highly recommended volumes: (Arkin Citation2009; Bekey and Abney Citation2011; Pereira and Saptawijaya Citation2016; Trappl Citation2015).

28 Here we streamline rather aggressively for purposes of accelerating exposition. The fuller truth is that standard deontic logic, and in fact even the vast majority of today’s robot ethics that (in whole or in part) builds atop deontic logic, is based on the 19th-century triad that includes not just the obligatory, but the forbidden and the permissible as well (where the forbidden is that which it is obligatory not to do, and the permissible is that which is not forbidden). [Chisholm (Citation1982, p. 99) points out that Höfler had the deontic square of opposition in 1885.] adds not only the supererogatory, but the suberogatory as well. Indeed the hierarchy partitions the supererogatory into that which is courteous-but-not-obligatory, and that which is heroic; and partitions the suberogatory into that which is done in—as we might say—bad faith, versus that which is outright deviltry. For a look at seminal work devoted to engineering some “dark” AI, see the lying machine invented by Clark (Citation2008).

29 The “P” in “ICAP” in our simulations is of course artificial, since (thankfully) the agents in our microworlds are not able to produce large [à la Equations (2) and (3)] utility or disutility in the real world.

30 In cases where happiness is maximized by carrying out torture and/or owning slaves; see (Feldman Citation1978).

31 In contrast, consider the real-life cases chronicled in (Schlosser Citation2013).

32 Some take “vehicle” to connote land-based transport, but we take the word to be fully general, and hence it includes aircraft (e.g., ICAP UAVs). Note that in point of fact, the analysis and argument we give in the present paper applies to all ICAP robots, period.

References

  • Almeida, J., M. Frade, J. Pinto, and S. De Sousa. 2011. Rigorous software development: An introduction to program verification. New York, NY: Springer.
  • Alpaydin, E. 2014. Introduction to machine learning. Cambridge, MA: MIT Press.
  • Arkin, R. 2009. Governing lethal behavior in autonomous robots. New York, NY: Chapman and Hall/CRC.
  • Arkoudas, K., and S. Bringsjord. 2007. Computers, justification, and mathematical knowledge. Minds and Machines 17 (2):185–202. doi:10.1007/s11023-007-9063-5.
  • Arkoudas, K., S. Bringsjord, and P. Bello. 2005. Toward ethical robots via mechanized deontic logic. In Machine ethics: Papers from the AAAI Fall Symposium; FS–05–06, 17–23. Menlo Park, CA: American Association for Artificial Intelligence. http://www.aaai.org/Library/Symposia/Fall/fs05-06.php.
  • Arkoudas, K., K. Zee, V. Kuncak, and M. Rinard. 2004. Verifying a File System Implementation. In ‘Sixth international conference on formal engineering methods (ICFEM’04)’, Vol. 3308 of lecture notes in computer science (LNCS), 373–90. Seattle, USA: Springer-Verlag.
  • Bekey, P. L. G., and K. Abney, eds. 2011. Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
  • Bringsjord, E., and S. Bringsjord. 2014. Education and big data versus big-but-buried data. In Building a smarter university, ed. J. Lane, 57–89. Albany, NY: SUNY Press. This url goes to a preprint only. http://kryten.mm.rpi.edu/SB_EB_BBBD_0201141900NY.pdf.
  • Bringsjord, S. 1998. Chess is Too Easy. Technology Review 101 (2):23–28. http://kryten.mm.rpi.edu/SELPAP/CHESSEASY/chessistooeasy.pdf.
  • Bringsjord, S. 2008a. Declarative/logic-based cognitive modeling. In The handbook of computational psychology, ed. R. Sun, 127–69. Cambridge, UK: Cambridge University Press. http://kryten.mm.rpi.edu/sb_lccm_ab-toc_031607.pdf.
  • Bringsjord, S. 2008b. The logicist manifesto: At long last let logic-based artificial intelligence become a field unto itself. Journal of Applied Logic 6 (4):502–25. doi:10.1016/j.jal.2008.09.001.
  • Bringsjord, S. 2015a. A 21st-century ethical hierarchy for humans and robots. In A world with robots: Proceedings of the first international conference on robot ethics (ICRE 2015), eds. I. Ferreira, and J. Sequeira. Berlin, Germany: Springer. This paper was published in the compilation of ICRE 2015 papers, distributed at the location of ICRE 2015, where the paper was presented: Lisbon, Portugal. The URL given here goes to the preprint of the paper, which is shorter than the full Springer version. http://kryten.mm.rpi.edu/SBringsjord_ethical_hierarchy_0909152200NY.pdf.
  • Bringsjord, S. 2015b. A vindication of program verification. History and Philosophy of Logic 36 (3):262–77. doi:10.1080/01445340.2015.1065461.
  • Bringsjord, S. 2015c. Theorem: General intelligence entails creativity, assuming. In Computational creativity research: Towards creative machines, eds. T. Besold, M. Schorlemmer, and A. Smaill, 51–64. Paris, France: Atlantis/Springer. This is Volume 7 in Atlantis Thinking Machines, edited by Kai-Uwe Kühnberger of the University of Osnabrück, Germany. http://kryten.mm.rpi.edu/SB_gi_implies_creativity_061014.pdf.
  • Bringsjord, S., K. Arkoudas, and P. Bello. 2006. Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems 21 (4):38–44. doi:10.1109/MIS.2006.82.
  • Bringsjord, S., and D. Ferrucci. 2000. Artificial intelligence and literary creativity: Inside the mind of brutus, a storytelling machine. Mahwah, NJ: Lawrence Erlbaum.
  • Bringsjord, S., D. Ferrucci, and P. Bello. 2001. Creativity, the turing test, and the (Better) lovelace test. Minds and Machines 11:3–27. doi:10.1023/A:1011206622741.
  • Bringsjord, S., and N. S. Govindarajulu. 2013. Toward a modern geography of minds, machines, and math. In ‘Philosophy and theory of artificial intelligence’, vol. 5 of studies in applied philosophy, epistemology and rational ethics, ed. V. C. Mller, 151–65. New York, NY: Springer. http://www.springerlink.com/content/hg712w4l23523xw5.
  • Bringsjord, S., and J. Licato. 2012. Psychometric artificial general intelligence: The piaget-macguyver room. In Foundations of artificial general intelligence, eds. P. Wang, and B. Goertzel, 25–47. Amsterdam, The Netherlands: Atlantis Press. This url is to a preprint only. http://kryten.mm.rpi.edu/Bringsjord_Licato_PAGI_071512.pdf.
  • Charniak, E., and D. McDermott. 1985. Introduction to artificial intelligence. Reading, MA: Addison-Wesley.
  • Chisholm, R. 1982. Supererogation and offence: A conceptual scheme for ethics. In Brentano and meinong studies, ed. R. Chisholm, 98–113. Atlantic Highlands, NJ: Humanities Press.
  • Clark, M. 2008. Cognitive Illusions and the Lying Machine. PhD thesis, Rensselaer Polytechnic Institute (RPI).
  • Cope, D. 2005. Computer models of musical creativity. Cambridge, MA: MIT Press.
  • Ebbinghaus, H. D., J. Flum, and W. Thomas. 1994. Mathematical logic, 2nd ed. New York, NY: Springer-Verlag.
  • Ellis, S., A. Haig, N. Govindarajulu, S. Bringsjord, J. Valerio, J. Braasch, and P. Oliveros. 2015. Handle: Engineering artificial musical creativity at the ‘trickery’ level. In Computational creativity research: Towards creative machines, eds. T. Besold, M. Schorlemmer, and A. Smaill, 285–308. Paris, France: Atlantis/Springer. This is Volume 7 in Atlantis Thinking Machines, edited by Kai-Uwe Kühnberger of the University of Osnabrück, Germany. http://kryten.mm.rpi.edu/SB_gi_implies_creativity_061014.pdf.
  • Feldman, F. 1978. Introductory ethics. Englewood Cliffs, NJ: Prentice-Hall.
  • Ferrucci, D., E. Brown, J. Chu-Carroll, J. Fan, D. Gondek, A. Kalyanpur, A. Lally, W. Murdock, E. Nyberg, J. Prager, N. Schlaefer, and C. Welty. 2010. Building watson: An overview of the deepQA project. AI Magazine, pp. 59–79. http://www.stanford.edu/class/cs124/AIMagzine-DeepQA.pdf
  • Futrelle, J. 2003. The thinking machine: The enigmatic problems of professor augustus S. F. X. Van Dusen, Ph.D., LL.D., F.R.S., M.D., M.D.S. New York, NY: The Modern Library. The book is edited by Harlan Ellison, who also provides an introduction. The year given here is the year of compilation and release from the publisher. E.g., Futrelle published “The Problem of Cell 13” in 1905.
  • Glucksberg, S. 1968. ‘Turning on’ new ideas. Princeton Alumni Weekly 69:12–13. Specifically, November 19.
  • Goodall, N. 2014. Ethical decision making during automated vehicle crashes. Transportation Research Record: Journal of the Transportation Research Board 2424:58–65. doi:10.3141/2424-07.
  • Govindarajulu, N., J. Licato, and S. Bringsjord. 2014. Toward a formalization of QA problem classes. In Artificial general intelligence; LNAI 8598, eds. B. Goertzel, L. Orseau, and J. Snaider, 228–33. Cham, Switzerland: Springer. http://kryten.mm.rpi.edu/NSG_SB_JL_QA_formalization_060214.pdf.
  • Govindarajulu, N. S., and S. Bringsjord. 2015. Ethical regulation of robots must be embedded in their operating systems. In A construction manual for robots’ ethical systems: Requirements, methods, implementations, ed. R. Trappl, 85–100. Basel, Switzerland: Springer. http://kryten.mm.rpi.edu/NSG_SB_Ethical_Robots_Op_Sys_0120141500.pdf.
  • Inhelder, B., and J. Piaget. 1958. The growth of logical thinking from childhood to adolescence. New York, NY: Basic Books.
  • Johnson-Laird, P. N. 1983. Mental models. Cambridge, MA: Harvard University Press.
  • Kierkegaard, S. 1992. Either/or: A fragment of life. New York, NY: Penguin. Either/Or was originally published in 1843.
  • Klein, G. 2009. Operating system verification—An overview. Sadhana 34 (1):27–69. doi:10.1007/s12046-009-0002-4.
  • Knobe, J., W. Buckwalter, S. Nichols, P. Robbins, H. Sarkissian, and T. Sommers. 2012. Experimental philosophy. Annual Review of Psychology 63:81–99. doi:10.1146/annurev-psych-120710-100350.
  • Li, M., and P. Vitányi. 2008. An introduction to kolmogorov complexity and its applications, 3rd ed. New York, NY: Springer.
  • Lin, P. 2015. Why ethics matters for autonomous cars. In Autonomes fahren: Technische, rechtiche und gesellschaftiche aspekte, eds. M. Maurer, C. Gerdes, B. Lenz, and H. Winner, 69–85. Berlin, Germany: Springer.
  • Malle, B., M. Scheutz, M. Arnold, and J. C. Voiklis. 2015. Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction (HRI 2015), 117–24. New York, NY: ACM. doi:10.1145/2696454.2696458.
  • Mikhail, J. 2011. Elements of moral cognition: Rawls’ linguistic analogy and the cognitive science of moral and legal judgment, Kindle ed. Cambridge, UK: Cambridge University Press.
  • Mueller, E. 2014. Commonsense reasoning: An event calculus based approach. San Francisco, CA: Morgan Kaufmann.
  • Pereira, L., and A. Saptawijaya. 2016. Programming machine ethics. Basel, Switzerland: Springer. This book is in Springer’s SAPERE series, Vol. 26.
  • Ranta, A. 2011. Grammatical framework: Programming with multilingual grammars. Stanford, CA: CSLI. ISBN-10: 1-57586-626-9 (Paper), 1-57586-627-7 (Cloth).
  • Reiter, E., and R. Dale. 2000. Building natural language generation systems. Cambridge, UK: Cambridge University Press. This book is in CUP’s Studies in Natural Language Processing series.
  • Russell, S., and P. Norvig. 2009. Artificial intelligence: A modern approach, 3rd ed. Upper Saddle River, NJ: Prentice Hall.
  • Scheffler, S. 1982. The rejection of consequentialism. Oxford, UK: Clarendon Press. This is a revised edition of the 1982 version, also from Clarendon.
  • Scheutz, M., and T. Arnold. forthcoming. Feats without heroes: Norms, means, and ideal robotic action. Frontiers in Robotics and AI.
  • Scheutz, M. forthcoming. The MacGyver test: A Turing test for machine resourcefulness and creative problem solving.
  • Schlosser, E. 2013. Command and control: Nuclear weapons, the damascus accident, and the illusion of safety. New York, NY: Penguin.
  • Si, M., S. Marsella, and D. Pynadath. 2010. Modeling appraisal in theory of mind reasoning. Autonomous Agents and Multi-Agent Systems 20:14–31. doi:10.1007/s10458-009-9093-x.
  • Trappl, R. 2015. A construction manual for robots’ ethical systems: Requirements, methods, implementations. Basel, Switzerland: Springer.
  • Turing, A. 1950. I.—Computing machinery and intelligence. Mind LIX (236):433–60. doi:10.1093/mind/LIX.236.433.
  • Vinyals, O., A. Toshev, S. Bengio, and D. Erhan 2015. Show and tell: A neural image caption generator. http://arxiv.org/pdf/1411.4555.pdf
