Social Epistemology
A Journal of Knowledge, Culture and Policy
Research Article

Online Illusions of Understanding

Received 14 Nov 2022, Accepted 20 Nov 2022, Published online: 28 Dec 2022

ABSTRACT

Understanding is a demanding epistemic state. It involves not just knowledge that things are thus and so, but grasping the reasons why and seeing how things hang together. Understanding, then, typically requires inquiry. Many of our inquiries are conducted online nowadays, with the help of search engines, forums, and social media platforms. In this paper, I explore the idea that online inquiry easily leads to what I will call online illusions of understanding. Both the structure of online information presentation (with hyperlinks, shares, retweets, likes, etc.) and the operation of recommender systems and the like make it easy for people using them to form the impression that they are conducting inquiry responsibly, whereas they are in fact fed with irrelevant information, or, even worse, falsehoods, misinformation, disinformation, or outright conspiracy theories.

1. Introduction

Much of our information-gathering is carried out online nowadays.[1] Reading the news, checking out background stories, researching products or services we want to buy, looking for travel advice, finding contact details, educating ourselves about a new topic or field, learning new skills, and so much more is done through the use of search engines, by visiting dedicated websites, or by spending time on social media or discussion forums. Doing so is seamlessly integrated into our lives. As Michael Lynch puts it, we have a digital form of life: ‘One which takes life in the infosphere for granted, precisely because the digital is so seamlessly integrated into our lives’ (Lynch 2016, 10). Not using online resources for information-gathering is becoming hard. It is easy to forget that this wealth of online information was simply unavailable until some 25 years ago (or even more recently, in the case of social media).

Easy access to unlimited information on any topic whatsoever might seem like epistemic paradise. And in many ways, it is. If you know what you’re looking for, you can find high-quality information on almost any topic, no matter how outlandish or specialized. You can often even find folks who share your interests and are willing to exchange insights. Even if you don’t know what you’re looking for but exercise enough care and critical judgment in sorting through search results and online resources, you can learn a lot. This is just as true for knowledge-that as it is for knowledge-how. There are instruction videos, tutorials, and step-by-step guides for almost any practical problem or skill you can think of.

But not all is well in paradise; our digital form of life has epistemic downsides. Miller and Record (2013) argue that personalized search results – in particular, the opacity of the algorithms that generate them – undermine the justification of beliefs acquired online. Lynch (2016, ch. 2) worries that ‘Google-knowing’ – acquiring beliefs through Google searches – is crowding out other, more self-directed and intellectually demanding forms of knowledge acquisition (cf. also Gunn and Lynch 2021). Fake news, misinformation, disinformation, propaganda, echo chambers, and filter bubbles make it hard to know whom to trust online for reliable information (Sunstein 2017; Rini 2017; Levy 2017; Benkler, Faris, and Roberts 2018; Gelfert 2018; Nguyen 2020; O’Connor and Weatherall 2019; Bernecker, Flowerree, and Grundmann 2021; McBrayer 2021).

I want to add another worry to this list. I will make a case that online inquiry is particularly prone to generating illusions of knowledge and understanding, rather than the real goods. The reason for this is the interaction between features of our psychology, on the one hand, and the way that websites, social media outlets, and other online platforms are designed on the other. The set-up for the rest of the paper is as follows. I first give a general characterization of inquiry and its goals, among which is understanding. Next, I discuss results from psychology showing that humans are prone to illusions of knowledge and understanding. The bulk of the paper is then devoted to a discussion of how easily online information environments can make it seem as if you are conducting inquiry well when you really aren’t. Although we have a world of knowledge at our fingertips, it’s easy to deceive ourselves into thinking that we have knowledge and understanding when we in fact do not.

2. Inquiry and Understanding

Inquiry is about finding things out: coming up with questions and answering them, identifying problems and solving them, extending knowledge, or acquiring and deepening understanding. As Christopher Hookway puts it: ‘we attempt to find things out, to extend our knowledge by carrying out investigations directed at answering questions, and to refine our knowledge by considering questions about things we currently hold to be true’ (1994, 211).[2]

‘Finding things out’ can mean different things, and so inquiry can have different goals. The goal of inquiry might be to acquire true belief or knowledge about a specific issue of some importance: where are my keys, when does my next class start? Sometimes we’re not trying to make up our minds about an issue; we just want to get a sense of how people think about something and what the options are – in such cases, you might say we’re after true beliefs about what other people think about an issue, but not necessarily about the issue itself. Sometimes inquiry can have justified but not necessarily true belief as its aim. When we’re interested in getting an overview of something or getting the reasonable options on the table, truth isn’t the foremost concern; justified belief suffices. Inquiry can also have a more ambitious aim: understanding. Sometimes we want – or need – to get to the bottom of things and really grasp how an event came to happen, what explains a phenomenon, why someone did what she did, or get intimately acquainted with a topic, subject area, or theory. In such cases, mere true belief or isolated pieces of knowledge won’t do; we need a coherent set of true beliefs, a range of knowledge, including an appreciation of how things hang together. Most ambitiously, inquiry might even aim at wisdom.

In this paper, the focus is on understanding as an aim of inquiry, since the claim I will defend is that online inquiry is particularly prone to generating illusions of understanding, even though some of what I say applies to knowledge too. My arguments do not depend on a specific account of understanding, so a fairly broad and uncontroversial characterization should suffice. Understanding can take a range of objects: events, phenomena, topics, theories, actions, people, and perhaps even more. Understanding differs from knowledge in at least two ways. First, it isn’t limited to knowledge of isolated propositions, but involves knowledge of a number of related propositions. Second, it requires knowledge of, or insight into, how these propositions are related. Understanding is grasping ‘how things hang together’ (Grimm 2011; Baumberger, Beisbart, and Brun 2017; Gordon n.d.). What sort of ‘hanging together’ is relevant? Explanatory relations or dependency relations are often said to be central. But, plausibly, these come in different kinds: causal relations, part-whole relations, constitution relations, logical relations, and conceptual relations can all be explanatory, depending on the question at hand.[3]

Inquiry is subject to normative evaluation: it can be done well or badly. Norms for good inquiry can be used to evaluate inquiry ex post, but also to regulate a process of inquiry while it is happening. The latter involves meta-cognition: monitoring how well inquiry is proceeding and reflecting on what can be done to move it along and improve it. Such meta-cognition can be unconscious or conscious. Oftentimes, we monitor inquiry largely unconsciously: our attention is focused on the inquiry itself and we make implicit assessments about how it is proceeding and whether we should change anything. It is only when we get the sense that something is off that we might pause and step back to reflect consciously on how a process of inquiry is going, whether we should try out different methods, whether we can stop inquiring, etc.

The kinds of meta-cognition involved in conducting and regulating inquiry can be specified further. To start inquiry, one must come up with a question. But asking good, productive, worthwhile questions is far from a trivial skill (Hookway 2008; Watson 2018). One meta-cognitive task required for good inquiry, then, is:

  • (a) Posing good questions or identifying good problems. Is the question free of false or misguided presuppositions? Is it understandable? Does it make sense given background knowledge? Is it relevant, ‘on topic’, neither too narrow nor too broad, timely? Does it admit of an answer?

Hookway (2003, 199–200) identifies four further meta-cognitive tasks that we must be able to carry out if we are to conduct and regulate our inquiries well[4]:

  • (b) Identifying good strategies for carrying out inquiries. Given a certain question, what methods are available for answering it? Which of these methods can I pursue with a reasonable chance of success? How long will they take? How reliable and accurate will they be?

  • (c) Recognizing when we possess an answer to our question or a solution to our problem. An answer can be right in front of us, but we must appreciate it as such. What counts as an answer or solution and how can we recognize answers and solutions? When is it good enough and how can we tell?

  • (d) Assessing how good our evidence for some proposition is. Inquiry can produce lots of information, but we must assess its quality. Are claims epistemically justified? How strongly? How reliable is our own perception, intuition, memory, reasoning? Who are relevant experts and reliable sources? How should the evidence pro and con be weighed?

  • (e) Judging when we have taken account of all or most relevant lines of investigation. Doing inquiry well also involves knowing when to stop. When have we done enough to close inquiry? What methods or lines of inquiry are essential for answering the question at hand and which ones are optional or even redundant?

The list isn’t supposed to be exhaustive, nor does it provide a full-fledged normative framework for good inquiry. Each task could be elaborated upon in greater detail. Even so, in identifying key meta-cognitive tasks required for good inquiry, the list gives us enough to work with. I’ll argue below that several features of the online information environment can interact in detrimental ways with the meta-cognitive tasks on this list. Before I do so, however, I provide a primer on work in psychology on the illusion of understanding.

3. Offline and Online Illusions of Understanding

Contemporary cognitive psychology finds that humans are prone to overestimate how much they know and understand. The starting point is Rozenblit and Keil’s (2002) work on the illusion of explanatory depth, also called the illusion of understanding. Rozenblit and Keil asked participants to rate how well they understood the workings of familiar objects (zippers, speedometers, piano keys, flush toilets, cylinder locks, helicopters, quartz watches, and sewing machines) on a 7-point scale. They then asked them to write down an explanation of how these objects work in as much detail as they could. After completing that task, participants were asked once more to rate their understanding on a 7-point scale. Predictably, the great majority decreased their self-assessment by a point or two. When it comes down to it, most of us don’t really know much at all about the workings of zippers and other familiar objects, which becomes apparent once we engage in the attempt to explain what we take ourselves to know.[5] The results have been replicated with college students at different types of schools and samples of random Americans. Moreover, the same happens when people are asked about other topics: political ones such as tax policy and foreign affairs, scientific ones such as genetically modified organisms and climate change, and even their own finances (Fernbach et al. 2013; Sloman and Fernbach 2017, 22).[6]

These findings fit into a broader pattern showing that people tend to make overconfident self-assessments. There is the well-known ‘better-than-average effect’: people’s tendency to overestimate their abilities, attributes, and personality traits in comparison with their average peer (Zell et al. 2020).[7] We construct overly simplistic explanatory narratives from incomplete information and then take ourselves to possess genuine understanding. In hindsight, we overestimate the accuracy of our past beliefs: when something surprising happens, we think we expected it all along (Kahneman 2011, ch. 19).

This tendency to overestimate how much we know and understand is exacerbated by internet technology. Adrian Ward (Ward 2013; Wegner and Ward 2013) conducted experiments in which he had two groups of participants complete a trivia quiz, one group with the help of Google searches, another without. They were then asked to fill out a cognitive self-esteem questionnaire to measure beliefs about their abilities to process and remember information, and were asked to predict how well they would do on a second quiz without access to the internet. People who used the internet on the first quiz, it turned out, had higher cognitive self-esteem and gave more optimistic predictions of their expected performance on the second quiz than the group without internet access.[8] What’s more, when the participants in the group with internet access were later asked where they found the information, they frequently misreported having known it all along even when they had in fact looked it up online. Having access to online information apparently makes people overestimate their own cognitive abilities.

Perhaps, though, this isn’t entirely wrong. When people use internet searches to find reliable information, they in fact do know and understand more, so perhaps they are right to have higher cognitive self-esteem. Other experiments, however, show that internet-induced inflation of cognitive self-esteem overgeneralizes in unwarranted ways. Fisher, Goddu, and Keil (2015) let participants answer explanatory questions about familiar phenomena and objects (zippers, glass, golf balls, decks of cards, moon phases, leap years, time zones, and men and women). One group was allowed to confirm the details of their explanations online, the other group wasn’t. Next, they had to rate their ability to answer different explanatory questions about unrelated domains (weather, science, history, food, anatomy, and health). People who had been allowed to confirm their earlier answers through internet searches rated their ability to answer unrelated questions higher.[9] The upshot is that searching for information online, regardless of whether the search is successful, inflates people’s sense of how much they know and understand.

Ward, Fisher, Sloman and Fernbach all offer a similar explanation for why this happens. Their basic idea is that we run together knowledge and understanding in our heads with what is easily accessible in our social or technological environment. As a result, we systematically overestimate how much we know and understand, particularly – as shown in the above experiments – when such access has been made salient by means of internet searches. Sloman and Fernbach put it like this:

The Knowledge Illusion occurs because we live in a community of knowledge and we fail to distinguish the knowledge that is in our heads from the knowledge outside of it. We think the knowledge we have about how things work sits inside our skulls when in fact we’re drawing a lot of it from the environment and from other people. … The world and our community house most of our knowledge base. A lot of human understanding consists simply of awareness that the knowledge is out there.

(Sloman and Fernbach 2017, 127–28).

One way to accept this empirical data but resist the conclusion that people overestimate what they know and understand when they are near the internet is to deny that there is overestimation here. When people have a world of information at their fingertips, isn’t it in some sense true that they ‘know’ and ‘understand’ more? Carter and Gordon (2021) employ the idea of extended cognition to explore a positive answer to this question.[10] If online search technology is thoroughly integrated in people’s cognitive processes, as it arguably is, then people really do have knowledge (in an extended sense) of all the information that can be found online easily. Hence, they would be right in having high confidence in their abilities to answer further trivia questions or to provide other explanations (although they would still be wrong in thinking that they could do so without internet access).

I’ll offer three brief considerations that count against Carter and Gordon’s alternative interpretation. First, even if we grant that there is extended cognition and knowledge, it needs to be shown that anything we can easily find online satisfies the conditions for inclusion in extended knowledge. This is not an easy task, for two reasons. The exact conditions themselves are controversial.[11] Moreover, there is a strong intuitive case against thinking that everyone with a smartphone really knows everything that shows up on the first page of any Google search and looks reasonably trustworthy – just pause for a moment and consider how extensive everyone’s knowledge would really be on this proposal.[12] Second, the internet-related findings discussed above fit into a broad and robust array of findings about how humans tend to be overconfident in their own abilities. Carter and Gordon’s suggestion that people in fact make largely accurate self-assessments in the cognitive domain would then be an unexplained outlier in this broader perspective. Third, Ward’s experiment shows specifically that people overestimate how much they know even when they’re aware that they will not have access to the internet. On the extended knowledge hypothesis, Ward should have found that people adjust their self-assessments when they learn that they will have to answer questions without internet access, since they extended-know a lot less in these circumstances.

The mere availability and use of internet search tools thus inflates people’s sense of how much they know and understand. In what follows, I explore how various more specific design features of websites, online tools, and social media can interact with our psychology to generate illusions of understanding.

4. Online Inquiry and the Illusion of Understanding

A common thread that will emerge is that online inquiry makes it easy – sometimes inevitable – for us to outsource meta-cognition required for good inquiry to the online environment. Since much of the internet isn’t designed with the purpose of facilitating responsible meta-cognition, online inquiry can easily go astray, leaving us with an illusion of understanding instead of real epistemic goods. Let me emphasize that my claim is fairly modest: I’m not saying that the effects I describe occur inevitably, or in every case of online inquiry. I only say that they can easily occur; design features of our online information environment make it relatively frictionless to let our inquiries be regulated by epistemically poor norms.

Let’s look first at how online inquiry aimed specifically at understanding can go wrong and next at how the online environment interacts with the meta-cognitive tasks relevant to conducting inquiry well.

4.1. Understanding-Seeking Online Inquiry

Understanding involves grasping ‘how things hang together’ by appreciating explanatory dependency relations. So when the goal of online inquiry is understanding, we need accurate information about such explanatory dependence relations. The online environment can lead us astray in a number of ways.

First, however, let’s consider an example of how the online environment can be conducive to understanding-seeking inquiry. By connecting information through hyperlinks, websites, and search engines, the internet makes it easy to recognize and track down explanatory relations between things. Think of Wikipedia: entries contain lots of hyperlinks to sources and connected concepts, and some articles include schematic overviews of related information. The entry on ‘philosophy’, for instance, presents an overview of the field as well as links to subfields of philosophy, various philosophers, traditions, historical periods, a glossary of terms, and some overview pages. Someone who’s unfamiliar with philosophy can get a decent sense of what it is all about from this page. The point is that hyperlinks are a great way of showing dependence relations, particularly conceptual relations, coherence relations, and epistemic support relations, but also explanatory relations and causal connections.

Let’s now look at how various features of the online information environment can impede or thwart genuine understanding while promoting the illusion of it. Although hyperlinks are a great way of showing dependence relations, they can just as easily draw spurious connections between bits of information, resulting in a misleading sense of understanding.

Next, consider the ordering of search results. From an epistemic perspective, it would be best if the top results were always links to truthful and accessible information. Unfortunately, however, nothing guarantees that this is so. The details of what currently goes into Google’s ordering of search results are proprietary information, but we can get a glimpse of how things might go awry by considering how it all began. Google’s PageRank algorithm (Page et al. 1999), which remains part of its search engine, determines the importance of webpages by looking at the number of links to them. Incoming links are seen as votes of authority or trust. However, not all votes count equally. Links from higher-ranked pages have more weight than those from lower-ranked pages. And links from pages that have lots of outgoing links count for less than links from pages with fewer outgoing links.[13] There might be a story about how, in a better world than ours, this measure of importance could have tracked high-quality information by means of the wisdom of crowds, but in reality we’re all too familiar with dubious information showing up on the first page of our search results. PageRank-importance can easily come apart from truthfulness and accessibility. Recent simulation models confirm this suspicion (Masterton and Olsson 2018). Top search results do not necessarily include those webpages that are most conducive to understanding. This is certainly true for sponsored links appearing near the top of the search results. You can verify this for yourself by typing in keywords connected to controversial topics (preferably when you’re not logged in to Google so that it doesn’t use your own search and browse history to personalize results): just try ‘age of the earth’, ‘Obama Muslim’, or ‘vaccination’. Since few people make it past the first page of results,[14] online searches can easily leave you with poor quality information.
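
To make the link-counting idea concrete, here is a minimal sketch of the original PageRank computation; the damping factor, iteration count, and the toy ‘web’ of pages are illustrative textbook assumptions, not details of Google’s production system. Note that nothing in the computation refers to truth or accuracy: importance is purely a function of link structure.

```python
# Minimal sketch of the original PageRank idea (Page et al. 1999):
# a page's rank is the sum of rank 'votes' from pages linking to it,
# with each vote diluted by the voter's number of outgoing links.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to a list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # A link from a page with many outgoing links counts less.
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A heavily linked page outranks a sparsely linked one, regardless of
# which page carries accurate information.
toy_web = {
    "viral-page": ["hub"],
    "hub": ["viral-page", "blog-a", "blog-b"],
    "blog-a": ["viral-page"],
    "blog-b": ["viral-page"],
    "expert-page": ["hub"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```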

Similar issues arise on other platforms. Twitter search results and ‘Home Tweets’ are ranked by their popularity (i.e. the amount of engagement), which lacks any straightforward connection to the quality of information. While Twitter discussions can be a good way to learn about the different sides of an issue, they often also contain misinformation, polarizing rhetoric, and other unhelpful material. The same goes for tweets with a common hashtag. YouTube’s recommendations depend on relevance, but also on predicted watch time. The longer users are predicted to keep watching a video, the higher it will show up in the list of recommended videos. The effect of this is that YouTube’s recommendations tend towards the extreme and sensational, rather than the truthful (Lewis 2018; Weill 2018; Chaslot 2019; Roose 2019; Alfano et al. 2020).
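
The structural point can be illustrated with a toy engagement-based ranker. The field names, weights, and example videos below are hypothetical, not any platform’s actual scoring function; the sketch only shows that a score built from likes and predicted watch time never consults accuracy.

```python
# Hypothetical engagement-based ranking: items are ordered by likes and
# predicted watch time; the 'accurate' flag is visible to us but plays
# no role whatsoever in the score.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    likes: int
    predicted_watch_minutes: float
    accurate: bool  # irrelevant to the ranker

def engagement_score(v: Video) -> float:
    return 0.3 * v.likes + 0.7 * v.predicted_watch_minutes

videos = [
    Video("Sober expert explainer", 120, 4.0, accurate=True),
    Video("Sensational conspiracy deep-dive", 900, 22.0, accurate=False),
]
for v in sorted(videos, key=engagement_score, reverse=True):
    print(f"{v.title}: {engagement_score(v):.1f}")
```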

Abstracting away from the particulars of these individual platforms, we can put the point in more general terms. Information online is connected and made accessible in a multitude of ways, both static and dynamic (hyperlinks, responses to search queries, recommendations, hashtags, etc.). Some of these digital connections between bits of information represent dependence relations that are relevant to understanding, but many are irrelevant to it, or even obstruct and undermine understanding. Inquiring online thus puts you at risk of chasing down spurious relations, especially when you’re not actively monitoring and checking the kinds of connections you’re following.

There is a further effect that is relevant here. Just as the mere activity of searching online can give you a false sense of knowing, as the Fisher, Goddu, and Keil (2015) study discussed in Section 3 showed, the mere activity of browsing through search results, clicking various hyperlinks, or following recommendations can give you a false feeling of understanding. When you’re skimming digital connections between bits of information, it can feel like you’re adding to your understanding of a topic. After all, it will seem as if you’re grasping more and more dependence relations between the issues you’re exploring. If you’ve made it past the second page of search results, say, or watched several recommended videos, you’ll feel like you’ve earned your epistemic credits and have ‘done your own research’, as conspiracy thinkers like to put it. But, as we noted before, it may well be that you’re in fact tracking down various spurious connections and misleading information. Hence, you can come to feel very intellectually confident for all the wrong reasons – an illusion of understanding.

4.2. Posing Good Questions or Identifying Good Problems

The first step of conducting inquiry well is asking good questions. As Lani Watson points out, doing so is an intellectual skill: ‘A good questioner acts competently in order to elicit worthwhile information’ (Watson 2018, 358). This involves making contextually appropriate judgements about when, where, how, and from whom to elicit information, as well as appropriate judgements about what information to elicit. There are many ways for questions to be bad: they can fail to make sense, be impossible to answer, be ambiguous, rest on false or misleading presuppositions, or be irrelevant, too broad, or too narrow.

On the upside, the internet can help to improve questions: getting unexpected or unhelpful results from a query can alert you to ambiguity, false presuppositions, or irrelevancies. Seeing search results can also help you narrow down or broaden a question, or abandon it altogether. Note, however, that this requires active meta-cognition on your part: you need to be alert to the possibility that there is something wrong with your question and prepared to improve it, rather than just go along with whatever your search happens to throw up.

But online tools can also cause inquiry to go awry right off the bat. In 2004, Google introduced its autocomplete functionality. By now, most web browsers have this functionality built into their search bar. The purpose of this technology may seem benign – it saves you some keystrokes – but it can also derail online inquiry. To borrow an example from Alfano, Carter, and Cheong (2018, 310), suppose you’re interested in alternative sources of energy and start typing ‘alt’ into your search window. Google might suggest ‘alt right’ as a way of completing your query, guiding you to a very different set of results. While someone who is genuinely interested in alternative sources of energy will probably dismiss this suggestion right away and continue typing, autocomplete can work in more subtle ways too. You might be interested in why professors are tenured, so you start typing ‘why are professors’, at which point Google proposes ‘ … allowed to double dip’.[15] It’s easy to imagine that, while you’re on the topic of professors, your curiosity is piqued by this suspicious allegation and so your inquiry takes a turn for the worse, focusing on a question with a dubious presupposition and questionable relevance (at least to your original query).
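
A toy model suggests how this can happen. If suggestions are simply past queries that match the typed prefix, ranked by global popularity, then a popular but irrelevant query can outrank the one you meant; the query log and counts below are invented for illustration, not Google’s actual mechanism.

```python
# Toy prefix-based autocomplete: candidate completions are past queries
# starting with the typed prefix, ordered by how often they were searched.

query_log = {
    "alt right": 50_000,                  # popular, irrelevant to this user
    "alternative energy sources": 8_000,  # what the user actually meant
    "alto saxophone": 3_000,
}

def autocomplete(prefix: str, log: dict, k: int = 3) -> list:
    matches = [q for q in log if q.startswith(prefix)]
    return sorted(matches, key=log.get, reverse=True)[:k]

print(autocomplete("alt", query_log))
# -> ['alt right', 'alternative energy sources', 'alto saxophone']
```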

Another worry about search and autocomplete functionality is that it just goes along with whatever you enter into a search window. If your questions or search terms are tendentious or have false presuppositions, Google will point you to websites sharing those presuppositions without any indication that they may be false. For instance, roughly half of the results on the first page when you enter ‘earth 6000 years old’ link to creationist websites defending the claim that the earth is very young. Search terms can also have implicit presuppositions. Noble (2018, 111–16) discusses the example of searching for information about ‘black on white crime’. Rather than directing you to websites with reliable statistics showing that this is a relatively insignificant category in the overall crime statistics, the top results include white supremacist and nationalist websites with inflammatory and racist content, reinforcing the notion that white people are under threat.

Sometimes, you search for something of which you have only a vague sense. You put in some keywords to see what pops up. In effect, this is letting Google (or another search engine) and its underlying algorithms determine what your specific question is, by coming up with autocompletes and search results. Again, this can sometimes help you to get clearer on an issue, but it might just as well saddle you with irrelevant or false information, confusions, or bad presuppositions. For instance, Noble (2018) documents how Google reinforces stereotypes by showing plenty of misrepresentations of oppressed and marginalized groups among its top results.

What’s more, the effects of this can extend beyond just one failed search. The first bit of information people encounter has outsized influence on their subsequent thinking about an issue. This cognitive bias is known as the anchoring effect (Kahneman 2011, ch. 11). This means that the first autocomplete suggestions or the first few search results you see after typing in some poorly thought-through keywords can continue to pull your subsequent thinking in the wrong directions, even if you discard them and redo your search.

Similar things can happen when you turn to YouTube, Twitter, or Reddit to find out more about a topic. You can get distracted by YouTube’s recommended videos, Twitter’s trending topics or top tweets, or Reddit’s popular posts. YouTube and Twitter also employ autocomplete in their search windows. The order of search results is in part a function of their personalized relevance (based on your browsing history, search history, and other information about your online behavior), but also of their popularity (measured by, e.g., interactions). It’s unclear to what extent accuracy or truthfulness plays a part. Alfano, Carter, and Cheong (2018, 300–301) characterize these processes as ‘technological seduction’. The online environment ‘seduces’ your thinking into adopting its suggestions, recommendations, etc. While this need not be a bad thing, especially when you self-monitor vigilantly, it can easily lead inquiry astray, right from the start when you’re still formulating a question. Outsourcing (part of) the meta-cognitive task of posing a good question to internet platforms can thus mislead you into thinking you’re conducting inquiry well when you aren’t.

4.3. Identifying Good Strategies for Carrying Out Inquiries

Good inquiry requires good strategies, and identifying good strategies, in turn, requires good judgment. The omni-availability of smartphones, tablets, and computers with internet connections has turned an internet search into the default strategy for any inquiry whatsoever, regardless of its topic or goal. We have go-to websites or apps for questions about the news, sports, hobbies, financial information, music, travel, etc.

Again, this isn’t necessarily a bad thing. Google makes an unbelievable amount of information available and is incredibly efficient, typically getting you relevant results in seconds. And plenty of dedicated websites, forums, scholarly repositories, MOOCs, and YouTube channels are incredibly helpful for accessing high-quality information, educating yourself, learning new skills, answering questions, getting feedback, and whatnot.

However, online inquiry narrows down our range of strategies for finding things out, especially when we’re not actively looking for a more diverse set of sources. It’s not that alternatives don’t exist anymore (although for certain kinds of information that has happened too – phonebooks, books with train timetables), but they are often more cumbersome and people just don’t bother using them anymore. College students who have never spent time among a university library’s stacks are far from an exception nowadays. The frictionless omnipresence of online sources shortcuts any process of considering whether searching online really is the best strategy for finding out what we want to know.

The default online options aren’t automatically the epistemically optimal choice. Search results are prioritized on the basis of considerations other than the quality of information, so search engines aren’t always your best bet for high-quality information, unless you curate their results yourself by clicking only those results that you already know are from reliable sources. Always consulting only your favorite news site is easy, but might prevent you from gaining broad understanding by learning about multiple perspectives, especially if your favorite sources happen to be partisan. We noted before that YouTube’s recommender system can be a radicalization engine. Amazon’s book recommendations can similarly guide you towards misinformation: DiResta (2019) describes how anti-vaxxers and ‘natural health’ proponents gamed Amazon’s algorithms by leaving lots of five-star reviews to promote books with medical misinformation. TripAdvisor will get you mostly information by and for international tourists (as well as potentially fake or sponsored information), rather than the local insider knowledge which (some) traditional printed travel guides offered.[16]

The underlying problem is similar in all these cases. Several sectors of the digital economy are dominated by one or a few big tech companies. These tech giants crowd out – or buy out – competitors, leaving them with near monopolies on certain kinds of online information and services (Hindman 2018). This is a particularly thorny problem because, arguably, social media companies naturally evolve towards monopolies unless governments intervene. Drawing on Joskow (2007), Alfano and Sullivan (2021) argue that social media networks may be ‘natural monopolies’. This is because of a combination of economies of scale – the cost of adding additional users and pages to a network is very low – and network effects – it is attractive for users to join a large existing network rather than a small new one. As a result, social media platforms tend to develop in the direction of monopolistic markets, which are known to produce undesirable effects such as ‘excessive prices, … poor service quality, and … potentially undesirable distributional impacts’ (Joskow 2007, 1229). As big tech companies are driven by commercial interests and rely on big data and sophisticated algorithms to streamline their services, the quality of the information they provide can easily suffer through commercially motivated influence, bias, or active manipulation. Since their services provide widely used default inquisitive strategies, inquirers who outsource the meta-cognitive task of identifying good strategies for inquiry to the online environment are at risk of inquiring poorly.
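
A crude simulation can illustrate the winner-take-all dynamic that economies of scale and network effects are said to produce. The starting sizes and the attachment exponent below are assumptions chosen purely for illustration, not an empirical model of any actual platform market.

```python
import random

def simulate(sizes, new_users, alpha=1.5, seed=0):
    """Each newcomer joins platform i with probability proportional to
    size_i ** alpha; alpha > 1 is a crude stand-in for network effects
    (bigger networks are disproportionately attractive)."""
    rng = random.Random(seed)
    sizes = list(sizes)
    for _ in range(new_users):
        weights = [s ** alpha for s in sizes]
        i = rng.choices(range(len(sizes)), weights=weights)[0]
        sizes[i] += 1
    return sizes

# Three platforms starting nearly equal; with alpha > 1, the early
# leader tends to absorb almost all newcomers, drifting toward monopoly.
print(simulate([110, 100, 90], new_users=10_000))
```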

4.4. Recognizing When We Possess an Answer to Our Question or a Solution to Our Problem

Inquiring well further requires that we recognize answers when we find them. For straightforward questions about facts, this is often easy, but when the goal is understanding, it can be much harder to tell when we have it – or enough of it.

Some features of the online environment really help to carry out this task well. Some moderated forums do a very good job of pointing users to good answers. They let users vote answers up or down, keep track of how often users report that answers worked for them, and close topics once they have been addressed to the satisfaction of the original poster. A good example of such a platform is stackoverflow.com for computer programmers. Similarly, websites of officially designated authorities or curated websites with expert information make it easy to recognize good answers.
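
As a sketch of the convention such sites use, answers can be ordered with the asker’s accepted answer pinned first and the rest sorted by community votes. The data and field names below are illustrative, not Stack Overflow’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    votes: int              # net community up/down votes
    accepted: bool = False  # marked by the original poster

def order_answers(answers):
    # Accepted answer first, then by net votes, descending.
    return sorted(answers, key=lambda a: (not a.accepted, -a.votes))

answers = [
    Answer("Workaround via config flag", votes=12),
    Answer("Canonical fix with explanation", votes=87, accepted=True),
    Answer("Outdated approach", votes=3),
]
for a in order_answers(answers):
    print(a.votes, "accepted" if a.accepted else "", a.text)
```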

In other ways, however, the online environment makes trouble for the task of recognizing good answers and solutions. We have already encountered one way in which online inquiry can make it hard to judge whether we’re acquiring genuine understanding: links and other connections between online information can reflect dependence relations conducive to understanding, but they can just as well reflect spurious or irrelevant relations, which leave you epistemically worse off when you mistake them for real explanatory dependence relations. In light of the meta-cognitive task we’re considering in this subsection, we can put the point differently, too. The fact that two bits of information are connected online can make you think you’ve found a good answer to the question of how things ‘hang together’, but there’s no guarantee that the answer is in fact any good.

Another factor that makes it hard to judge whether you’ve found an answer to your question is that there are almost always more search results (websites, tweets, posts, etc.) than you have time to survey, and that there is usually no indication of their quality or relevance (besides their place in the overall order of results, which, as we noted above, is a highly fallible proxy for quality and relevance).

The many ways in which online information is interconnected create specific trouble when we’re trying to assess whether we have understanding. The presence of more links to other information suggests that there are further relations that could add to your understanding of an issue. Hence, it becomes hard to judge when you’ve understood enough. To appreciate the point, compare the online situation to a traditional encyclopedia or handbook entry. Such entries are supposed to give you a more or less self-contained and complete overview of a topic. While they also contain references to further reading and background sources that can deepen your understanding, it is at least clear that they form a self-contained whole from which readers should be able to acquire a certain level of comprehensive understanding of a topic. In contrast, an online environment with plenty of further clickable connections makes it unclear whether and when you’ve done enough to reach some desired level of understanding. Again, outsourcing this meta-cognitive task to online platforms risks bad inquiry.

4.5. Assessing How Good Our Evidence for Some Proposition Is

Well-conducted inquiry involves responsibly evaluating and weighing evidence. This was never an easy task offline either. Judging the reliability of a source is often difficult when you don’t already have prior knowledge of a topic. Identifying genuine experts can be hard, especially for outsiders. Appreciating the probative force of a piece of evidence and analyzing arguments is hard intellectual work, and we all suffer from biases, both individually and in groups.

Once more, the online environment can be helpful, but also creates risks and complications. As to the former: you can double-check claims and arguments quickly by comparing sources and consulting certified authorities, find out more about the background and reputation of sources and purported experts, trace down references, or check with other people. If you’re scientifically literate and savvy enough, you can read the original studies behind claims or even inspect raw data and run some analyses for yourself. If an idea, explanation, or account is highly controversial, search results will turn this up quickly. Wildly implausible or discredited claims will also be met with plenty of counterevidence online.

Now for the downsides. The general worry is that the internet has very little or no epistemic quality control, meaning that everything from the highest-quality evidence to complete nonsense can be put online easily and – more worryingly – might become popular.[17] Trying to double-check information, look for counterevidence, find and vet experts, and consult others can thus easily put you in touch with misinformation, misleading evidence, fake authorities, or otherwise epistemically bad inputs. Preventing this requires considerable epistemic vigilance.

This general pattern manifests itself in various more specific ways. Assessing the quality of evidence from different sources requires checking whether sources are independent. This is difficult on the internet since information can be copied, modified, transformed, and reproduced very easily and quickly, even automatically, so that dependence relations become hard to spot or even invisible. Anonymous or pseudonymous posting, automated bots, and fake identities create problems for verifying sources. Sources can come and go quickly – certainly personal blogs or social media accounts, but even entire platforms. Hence, it can be hard to get a sense of reputations, track-records, or ideological colors. All of these things jeopardize sound assessment of the quality of evidence. For the goal of understanding specifically, you might form false beliefs about non-existent dependency relations or your extant beliefs about genuine dependency relations might get undermined by misleading evidence, leading to illusory understanding or a loss of real understanding.

These effects could be mitigated if the internet offered sound indicators of the quality of information and sources. Unfortunately, many popular websites, services, and social media platforms hardly do so. Instead, they offer proxies that bear no relation at all to epistemic quality – or a tenuous one at best. We already discussed how Google’s PageRank algorithm often fails to order search results in an epistemically helpful way. Similar things can be said about ‘likes’, ‘shares’, ‘retweets’, and other common measures of popularity. The same goes for results that come out of recommender systems on YouTube, Amazon, and other websites. The fact that ‘people like you’ have also watched or bought X is not necessarily a good reason for taking X to be true and trustworthy. In general, these indicators are hardly indicative of truth, justification, or other epistemically good-making qualities.

This isn’t to say that we have no indicators of epistemic quality whatsoever. We can still go by things like track-records, reputation, background knowledge about the methods by which information is generated, etc. Rather, the point is that these don’t come from features of the internet or its organization; they are indicators that we had offline too. When we outsource our assessment of the evidence to what the online environment offers, the risk is that we rely unduly on misleading indicators that are bad proxies for epistemic quality, thus cultivating illusions of knowledge and understanding.

4.6. Judging When We Have Taken Account of All or Most Relevant Lines of Investigation

The final meta-cognitive task involved in good inquiry is knowing when to stop. The internet is Janus-faced when it comes to this task, too. For straightforward inquiries into everyday facts, there are plenty of one-stop reliable websites: you can get directions and contact details from Google Maps, news from CNN or Yahoo, etc. Also, if you have prior knowledge about which sources, experts, or organizations are reliable authorities on a topic of interest, it’s easy to consult their websites and find the information you’re looking for. Note, however, that the internet itself doesn’t necessarily help you discriminate good sources from bad: misinformation and propaganda might show up on the first page of your search results or in your YouTube recommendations right next to genuine expert information. Finally, discovering that multiple, independent online sources agree about certain issues or converge on an answer can be an indication that you can stop inquiry.

Things get trickier when you’re seeking understanding of more complex issues and don’t have a prior sense of where and how to look for reliable information. Various features of the online environment can then stand in the way of judging when to terminate inquiry. The sheer number of results that most search terms produce makes it hard to determine when you’ve done enough. The same goes for never-ending new recommendations on YouTube or Amazon. Endless scrolling on platforms like Facebook and Twitter has the same effect. No matter how much time you spend scrolling and tracing down links, you’re always left with the impression that you could have gotten more potentially relevant information.

Because indicators for the epistemic quality of online information are usually lacking, you can end up spending lots of time chasing down poor-quality information or even disinformation. Since it’s natural to take the time invested in inquiry as an indication of its thoroughness and quality, we might stop inquiry prematurely. Alternatively, prominent misinformation can lead us to prolong inquiry when it ought to stop. Maybe you read a reliable summary of an IPCC report about climate change, stumble upon a seemingly serious climate skeptical website or provocatively titled talk on YouTube, decide to keep inquiring, and end up epistemically worse off. Misinformation similarly creates problems for using consensus or convergence as an indicator of true information. As long as spurious dissent shows up online, you can be led astray by manufactured controversy. Inquiry into complex issues should only be concluded after having taken into account multiple sources of evidence and different perspectives. Online misinformation makes this harder, too. You might take yourself to have considered multiple sources and viewpoints responsibly, but, in reality, you may have spent way too much time looking at misinformation. The ensuing feeling of having done your research can saddle you with an illusory sense of understanding.

The problems for recognizing when we have found an answer to a question, which I identified in Section 4.4 above, all translate into problems for knowing when to stop inquiry, too. Without much serious epistemic quality control, the enormous amount of (mis)information and the interconnections within it make it difficult to tell whether and when we have found reliable information, trustworthy sources, and dependence relations conducive to real understanding. Relying uncritically on the tenuous proxy indicators of epistemic quality that the internet does provide will once more leave you with illusions of understanding.

5. Conclusion

I’ve argued that the internet isn’t the epistemic paradise it has sometimes been cracked up to be. When we’re conducting inquiry online in order to acquire knowledge or understanding, there is a very real risk that we end up with illusions of knowledge and understanding rather than the real epistemic goods. First, it’s in our nature to begin with: psychology shows that humans are in general prone to overestimating their cognitive achievements. Second, experiments demonstrate that this tendency is exacerbated by easy access to online information. Third, various systemic and design features of internet platforms interfere with the process of inquiring well. When we outsource the meta-cognitive tasks required for responsible inquiry to the online environment, we can easily be led astray and take ourselves to know and understand more than we really do. We might mistake popularity and search ranking for reliability, hyperlinks and recommendations for explanatory dependency relations; ask the wrong questions or buy into false presuppositions; exclude relevant lines of inquiry prematurely; fail to recognize when we’ve found good answers and genuine understanding; form overly optimistic judgements about the epistemic quality of information; and protract inquiry unnecessarily or stop it too soon. The world that the internet puts at our fingertips is as much one of potential illusions as it is one of real information.

In closing, though, let me hint at three lessons we can learn – one cautionary and two more hopeful. First, fake news and other forms of misinformation and disinformation, which have received so much attention recently, are no doubt part of the problem, but the issues run deeper. Many of the worries I discussed above derive from design features of internet platforms: the way they order and organize information, their lack of epistemic quality control, their openness, and the algorithms behind them. This means that online inquiry could still easily generate illusions of understanding, even if the internet were purged of much intentional disinformation.

Second, it is no part of my argument to claim that the risks I have identified are inevitable, or that the internet necessarily leads to illusions of knowledge and understanding. Whether it does so depends in large part on myriad design choices, business decisions, and government policies. This offers some reason for hope, because it means the epistemic potential of the internet can be improved. This is not an easy job. It needs concerted efforts from computer scientists, psychologists, communication scientists, and philosophers, in addition to better business incentives, policies, and law-making (cf. Bak-Coleman et al. 2021).

Third, for all I have said, the internet remains an amazing and unparalleled source of knowledge and understanding, as long as you know where and how to look. Skilled searches, judicious use of social media, and a curated diet of websites can massively improve your epistemic standing vis-à-vis almost any topic whatsoever (Heersmink 2018). But the emphasis should lie on skill, judgement, and curation here. Online inquiry requires cognitive skills and intellectual virtues. Cognition should not be outsourced entirely to the online environment. Good inquiry is hard work, and this is as true online as it was offline.[18]

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Funding

The work was supported by the Dutch Research Council (NWO) [276-20-024].

Notes on contributors

Jeroen de Ridder

Jeroen de Ridder is Associate Professor of Philosophy at Vrije Universiteit Amsterdam and Professor by special appointment of Christian Philosophy at the University of Groningen. His research is focused on social and political epistemology.

Notes

1. This paper is a longer and more developed version of De Ridder (2019).

2. More recent characterizations concur. Baehr: ‘We often find ourselves fascinated and puzzled by the world around us. We desire to know, to understand how things are, were, or might someday be. As a result, we make intentional and sustained efforts to figure things out. We inquire’ (2017, 1). Also Friedman: ‘Every inquirer is trying to figure something out’ (2019, 299).

3. Some commentators maintain that understanding essentially involves skills or knowledge-how for manipulating mental models of the objects of understanding (De Regt et al. 2005; De Regt 2017) or other inferential and cognitive abilities (Hill 2016). Others insist that the ability to answer ‘what if things had been different’ questions is inseparable from understanding (Woodward 2003; Grimm 2014). They may or may not be right about this, and these further ingredients may or may not follow logically from the broad characterization offered above. Since these points are immaterial to the project of the present paper, I won’t commit myself either way.

4. The formulation of tasks (b) through (e) is Hookway’s, but their explication is mine.

5. What about alternative explanations? Perhaps people understood the first question as asking about know-how: can I use the object in question with ease? The reason they lower their self-assessment is that they interpret the second question differently, as asking about their ability to articulate their knowledge-that of how the objects work. Rozenblit and Keil tried to prevent this confusion, however, by providing participants with explicit instructions about what they meant for each score on the 7-point scale. Even if some confusion remained, however, being asked to give an explanation made people realize that they had less articulable understanding than they thought: ‘many participants reported genuine surprise and new humility at how much less they knew than they originally thought’ (530).

6. Recent studies have failed to replicate Fernbach et al.’s (2013) finding that puncturing the illusion of understanding has the further effect of reducing attitudinal polarization and political extremism (Voelkel, Brandt, and Colombo 2018; Crawford 2019), but the basic finding that people adjust their self-assessments of how much they know and understand downward remains unaffected.

7. This effect is probably also behind the well-known Dunning-Kruger effect (Kruger and Dunning 1999), which suggested that the less skilled people are, the more they overestimate their skills. There has been some recent discussion, however, about the reality of this effect (Gignac and Zajenkowski 2020; Gelman 2021; Vyse 2022).

8. The effect persisted even when the group without internet access was (falsely) told that they made no mistakes on the first quiz. So the overconfidence doesn’t (merely) result from believing that you’ve done well on the first test.

9. The experiments controlled for various potentially confounding factors. The effect persisted even when misinterpretations of the questions were ruled out. It didn’t result from general overconfidence in all of one’s abilities caused by internet access. And it wasn’t induced specifically by the success of internet searches; the effect was there even when participants found only irrelevant results or no results at all.

10. See Clark and Chalmers (1998), Carter et al. (2014), and Carter et al. (2018) for discussion of extended cognition and its application to epistemology.

11. See, e.g., Clark (2010), Adams and Aizawa (2011), and Record and Miller (2018).

12. See Pedersen and Bjerring (2021) for discussion.

13. This is a crude oversimplification even for the original algorithm. In fact, several other factors and processes influence the ranking of search results, most importantly the commercial interests of advertisers. Since it is fairly obvious that this will typically only detract further from the epistemic quality of search results, I’ll ignore these complications here.

14. See the statistics on click-through rates here: https://www.advancedwebranking.com/ctrstudy/.

15. This is something that showed up in my search window (in July 2019).

16. In fact, a 2017 prank showed how TripAdvisor’s algorithms could be gamed so as to make a non-existent fake restaurant the top choice in London by creating a nice-looking website, posting lots of fake reviews, adding photos of fake dishes, and creating an illusion of popularity by accepting only reservations for months ahead (Butler 2017).

17. A widely cited study of Twitter dynamics showed that false information spreads faster and farther than true information (Vosoughi, Roy, and Aral 2018). A smaller-scale BuzzFeed analysis had already shown that several fake news stories were more widely shared on Facebook than real news in the run-up to the 2016 US elections (Silverman 2016).

18. Thanks to the editors of this special issue and to Catarina Dutilh Novaes, Michael Hannon, Thirza Lagewaard, Rik Peels, Chris Ranalli, Emanuel Rutten, and René van Woudenberg for comments on an earlier version of this paper. Thanks also to participants and audiences at the Explanation and Understanding in the Age of Algorithms workshop at the Bielefeld Center for Interdisciplinary Research in November 2018; the Intellectual Autonomy, Epistemic Authority, and Epistemic Paternalism workshop in Madrid in February 2019; and the OZSW Annual Conference in Amsterdam in October 2019 for helpful questions and discussion. Research for this paper was made possible by a Vidi grant (276-20-024) from the Netherlands Research Council (NWO).

References

  • Adams, Frederick, and Kenneth Aizawa. 2011. The Bounds of Cognition. Oxford: Blackwell.
  • Alfano, Mark, J. Adam Carter, and Marc Cheong. 2018. “Technological Seduction and Self-Radicalization.” Journal of the American Philosophical Association 4 (3): 298–322. doi:10.1017/apa.2018.27.
  • Alfano, Mark, J. Adam Carter, Amir Ebrahimi Fard, Peter Clutton, and Colin Klein. 2020. “Technologically Scaffolded Atypical Cognition: The Case of YouTube’s Recommender System.” Synthese 199 (1–2): 835–858. doi:10.1007/s11229-020-02724-x.
  • Alfano, Mark, and Emily Sullivan. 2021. “Online Trust and Distrust.” In The Routledge Handbook of Political Epistemology, edited by Michael Hannon and Jeroen de Ridder, 480–491. London: Routledge.
  • Baehr, Jason. 2017. The Inquiring Mind. New York: Oxford University Press.
  • Bak-Coleman, Joseph B., Mark Alfano, Wolfram Barfuss, Carl T. Bergstrom, Miguel A. Centeno, Iain D. Couzin, Jonathan F. Donges, et al. 2021. “Stewardship of Global Collective Behavior.” Proceedings of the National Academy of Sciences 118 (27): e2025764118. doi:10.1073/pnas.2025764118.
  • Baumberger, Christoph, Claus Beisbart, and Georg Brun. 2017. “What is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science.” In Explaining Understanding, edited by Stephen Grimm, Christoph Baumberger, and Sabine Ammon, 1–34. New York: Routledge.
  • Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. New York: Oxford University Press.
  • Bernecker, Sven, Amy Flowerree, and Thomas Grundmann, eds. 2021. The Epistemology of Fake News. New York: Oxford University Press.
  • Butler, Oobah. 2017. “I Made My Shed the Top Rated Restaurant on TripAdvisor.” Vice. December 6, 2017. https://www.vice.com/en_uk/article/434gqw/i-made-my-shed-the-top-rated-restaurant-on-tripadvisor.
  • Carter, J. Adam, Andy Clark, Jesper Kallestrup, S. Orestis Palermos, and Duncan Pritchard, eds. 2018. Extended Epistemology. Oxford: Oxford University Press.
  • Carter, J. Adam, and Emma Gordon. 2021. “Is Searching the Internet Making Us Intellectually Arrogant?” In Polarisation, Arrogance, and Dogmatism, edited by Alessandra Tanesini and Michael Lynch, 88–103. London: Routledge.
  • Carter, J. Adam, Jesper Kallestrup, Orestis Palermos, and Duncan Pritchard. 2014. “Varieties of Externalism.” Philosophical Issues 24 (1): 63–109. doi:10.1111/phis.12026.
  • Chaslot, Guillaume. 2019. “The Toxic Potential of YouTube’s Feedback Loop.” Wired, July 13. https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/.
  • Clark, Andy. 2010. “Memento’s Revenge: The Extended Mind Extended.” In The Extended Mind, edited by Richard Menary, 43–66. Cambridge, MA: MIT Press.
  • Clark, Andy, and David Chalmers. 1998. “The Extended Mind.” Analysis 58 (1): 7–19. doi:10.1093/analys/58.1.7.
  • Crawford, Janet. 2019. “Puncturing the Illusion of Understanding Does Not Reduce Political Extremism: A Replication of Fernbach, Rogers, Fox, and Sloman 2013.” PsyArXiv. doi:10.31234/osf.io/yn64p.
  • De Regt, Henk. 2017. Understanding Scientific Understanding. New York: Oxford University Press.
  • De Regt, Henk W., and Dennis Dieks. 2005. “A Contextual Approach to Scientific Understanding.” Synthese 144 (1): 137–170. doi:10.1007/s11229-005-5000-4.
  • De Ridder, Jeroen. 2019. “Algorithm-Based Illusions of Understanding.” Social Epistemology Review and Reply Collective 8 (10): 53–64. https://wp.me/p1Bfg0-4ws.
  • DiResta, Renée. 2019. “How Amazon’s Algorithms Curated a Dystopian Bookstore.” Wired, March 5. https://www.wired.com/story/amazon-and-the-spread-of-health-misinformation/.
  • Fernbach, Philip M., Todd Rogers, Craig R. Fox, and Steven A. Sloman. 2013. “Political Extremism is Supported by an Illusion of Understanding.” Psychological Science 24 (6): 939–946. doi:10.1177/0956797612464058.
  • Fisher, Matthew, Mariel K. Goddu, and Frank C. Keil. 2015. “Searching for Explanations: How the Internet Inflates Estimates of Internal Knowledge.” Journal of Experimental Psychology: General 144 (3): 674–687. doi:10.1037/xge0000070.
  • Friedman, Jane. 2019. “Inquiry and Belief.” Noûs 53 (2): 296–315. doi:10.1111/nous.12222.
  • Gelfert, Axel. 2018. “Fake News: A Definition.” Informal Logic 38 (1): 84–117. doi:10.22329/il.v38i1.5068.
  • Gelman, Andrew. 2021. “Can the ‘Dunning-Kruger Effect’ Be Explained as a Misunderstanding of Regression to the Mean?” Statistical Modeling, Causal Inference and Social Science Blog, October 12. https://statmodeling.stat.columbia.edu/2021/10/12/can-the-dunning-kruger-effect-be-explained-as-a-misunderstanding-of-regression-to-the-mean/.
  • Gignac, Gilles, and Marcin Zajenkowski. 2020. “The Dunning-Kruger Effect is (Mostly) a Statistical Artefact: Valid Approaches to Testing the Hypothesis with Individual Differences Data.” Intelligence 80: 101449. doi:10.1016/j.intell.2020.101449.
  • Gordon, Emma. n.d. “Understanding in Epistemology.” In Internet Encyclopedia of Philosophy, edited by James Fieser and Bradley Dowden. www.iep.utm.edu/understa/.
  • Grimm, Stephen. 2011. “Understanding.” In The Routledge Companion to Epistemology, edited by Sven Bernecker and Duncan Pritchard, 84–94. London: Routledge.
  • Grimm, Stephen. 2014. “Understanding as Knowledge of Causes.” In Virtue Epistemology Naturalized, edited by Abrol Fairweather, 329–345. New York: Springer.
  • Gunn, Hanna Kiri, and Michael Lynch. 2021. “The Internet and Epistemic Agency.” In Applied Epistemology, edited by Jennifer Lackey, 389–409. New York: Oxford University Press.
  • Heersmink, Richard. 2018. “A Virtue Epistemology of the Internet: Search Engines, Intellectual Virtues and Education.” Social Epistemology 32 (1): 1–12. doi:10.1080/02691728.2017.1383530.
  • Hills, Alison. 2016. “Understanding Why.” Noûs 50 (4): 661–688. doi:10.1111/nous.12092.
  • Hindman, Matthew. 2018. The Internet Trap: How the Digital Economy Builds Monopolies and Undermines Democracy. Princeton, NJ: Princeton University Press.
  • Hookway, Christopher. 1994. “Cognitive Virtues and Epistemic Evaluations.” International Journal of Philosophical Studies 2 (2): 211–227. doi:10.1080/09672559408570791.
  • Hookway, Christopher. 2003. “How to Be a Virtue Epistemologist.” In Intellectual Virtue: Perspectives from Ethics and Epistemology, edited by Linda Zagzebski and Michael DePaul, 183–202. New York: Oxford University Press.
  • Hookway, Christopher. 2008. “Questions, Epistemologies, and Inquiries.” Grazer Philosophische Studien 77 (1): 1–21. doi:10.1163/18756735-90000841.
  • Joskow, Paul. 2007. “Regulation of Natural Monopoly.” In Handbook of Law and Economics, Vol. 2, edited by A. Mitchell Polinsky and Steven Shavell, 1227–1348. Amsterdam: Elsevier.
  • Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
  • Kruger, Justin, and David Dunning. 1999. “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology 77 (6): 1121–1134. doi:10.1037/0022-3514.77.6.1121.
  • Levy, Neil. 2017. “The Bad News About Fake News.” Social Epistemology Review and Reply Collective 6 (8): 20–36. http://wp.me/p1Bfg0-3GV.
  • Lewis, Paul. 2018. “‘Fiction is Outperforming Reality’: How YouTube’s Algorithm Distorts Truth.” The Guardian. February 2, 2018. https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth.
  • Lynch, Michael. 2016. The Internet of Us. New York: Liveright Publishing.
  • Masterton, G., and Erik J. Olsson. 2018. “From Impact to Importance: The Current State of the Wisdom-of-Crowds Justification of Link-Based Ranking Algorithms.” Philosophy & Technology 31 (4): 593–609. doi:10.1007/s13347-017-0274-2.
  • McBrayer, Justin. 2021. Beyond Fake News: Finding the Truth in a World of Misinformation. London: Routledge.
  • Miller, Boaz, and Isaac Record. 2013. “Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technologies.” Episteme 10 (2): 117–134. doi:10.1017/epi.2013.11.
  • Nguyen, C. Thi. 2020. “Echo Chambers and Epistemic Bubbles.” Episteme 17 (2): 141–161. doi:10.1017/epi.2018.32.
  • Noble, Safiya Umoja. 2018. Algorithms of Oppression. New York: NYU Press.
  • O’Connor, Cailin, and James Owen Weatherall. 2019. The Misinformation Age: How False Beliefs Spread. New Haven, CT: Yale University Press.
  • Page, Lawrence, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank Citation Ranking: Bringing Order to the Web. Technical Report. Stanford InfoLab.
  • Pedersen, Nikolaj J. L., and Jens Christian Bjerring. 2021. “Extended Knowledge Overextended?” In Knowers and Knowledge in East-West Philosophy, edited by Karyn Lai, 191–233. London: Palgrave Macmillan.
  • Record, Isaac, and Boaz Miller. 2018. “Taking iPhone Seriously: Epistemic Technologies and the Extended Mind.” In Extended Epistemology, edited by J. Adam Carter, Andy Clark, Jesper Kallestrup, S. Orestis Palermos, and Duncan Pritchard, 105–126. Oxford: Oxford University Press.
  • Rini, Regina. 2017. “Fake News and Partisan Epistemology.” Kennedy Institute of Ethics Journal 27 (2S): 43–64. doi:10.1353/ken.2017.0025.
  • Roose, Kevin. 2019. “The Making of a YouTube Radical.” The New York Times, June 8, 2019. https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html.
  • Rozenblit, Leonid, and Frank Keil. 2002. “The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth.” Cognitive Science 26 (5): 521–562. doi:10.1207/s15516709cog2605_1.
  • Silverman, Craig. 2016. “This Analysis Shows How Viral Fake Election News Stories Outperformed Real News on Facebook.” BuzzFeed, November 16. https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook.
  • Sloman, Steven, and Philip Fernbach. 2017. The Knowledge Illusion: Why We Never Think Alone. New York: Riverhead.
  • Sunstein, Cass. 2017. #Republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press.
  • Voelkel, Jan G., Mark J. Brandt, and Matteo Colombo. 2018. “I Know That I Know Nothing: Can Puncturing the Illusion of Explanatory Depth Overcome the Relationship Between Attitudinal Dissimilarity and Prejudice?” Comprehensive Results in Social Psychology 3 (1): 56–78. doi:10.1080/23743603.2018.1464881.
  • Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359 (6380): 1146–1151. doi:10.1126/science.aap9559.
  • Vyse, Stuart. 2022. “Yes, the Dunning-Kruger Effect Really is Real.” Skeptical Inquirer Blog. https://skepticalinquirer.org/exclusive/yes-the-dunning-kruger-effect-really-is-real/.
  • Ward, Adrian F. 2013. “Supernormal: How the Internet is Changing Our Memories and Our Minds.” Psychological Inquiry 24 (4): 341–348. doi:10.1080/1047840X.2013.850148.
  • Watson, Lani. 2018. “Educating for Good Questioning: A Tool for Intellectual Virtues Education.” Acta Analytica 33 (3): 353–370. doi:10.1007/s12136-018-0350-y.
  • Wegner, Daniel, and Adrian F. Ward. 2013. “How Google is Changing Your Brain.” Scientific American 309 (6): 58–61. doi:10.1038/scientificamerican1213-58.
  • Weill, Kelly. 2018. “How YouTube Built a Radicalization Machine for the Far-Right.” The Daily Beast, December 17. https://www.thedailybeast.com/how-youtube-pulled-these-men-down-a-vortex-of-far-right-hate.
  • Woodward, James. 2003. Making Things Happen. New York: Oxford University Press.
  • Zell, Ethan, Jason Strickhouser, Constantine Sedikides, and Mark Alicke. 2020. “The Better-Than-Average Effect in Comparative Self-Evaluation: A Comprehensive Review and Meta-Analysis.” Psychological Bulletin 146 (2): 118–149. doi:10.1037/bul0000218.