
SOCIOLOGY OF EXPECTATION AND THE E-SOCIAL SCIENCE AGENDA

Pages 1103-1118 | Published online: 22 Oct 2009
Abstract

This paper explores the relevance of the sociology of expectation for conceptualizing some of the tensions emerging in the UK context from attempts to engage communities of social scientists, anthropologists and colleagues in cognate disciplines with e-social science. While the uptake of e-science proceeds rapidly in many scientific domains – from genetics to physics, from biology to clinical medicine – many social scientists and scholars in cognate disciplines remain apparently unaware of, or unimpressed by, the promise of linking up large-scale fieldwork data sets and of gaining access to the new tools and technologies being developed to cope with this scaling up of data set size. Science and Technology Studies (STS) has theorized technological innovations and highlighted how they come packaged with expectations of their applications, their benefits and sometimes their risks. Future scenarios are projected in which a technology is integrated with society at large and with representations of everyday life. In line with an STS approach, instead of debating the likelihood of possible scenarios, this paper calls for uncovering the values and preferences implicitly built into the visions of the proponents of e-social science. Only once these are rendered explicit can one begin to explore the extent to which these values are shared across sections of the research community, or the extent to which they may be specific to certain stakeholders only. This process, it is argued, ultimately allows for a more transparent debate, and a negotiation of which values end up being taken up in research policy and why.

Notes

Most notable was the establishment of a National Centre for e-Social Science (NCeSS), funded by the ESRC. Further information at http://www.ncess.ac.uk/ (17 July 2008).

The European Organisation for Nuclear Research (CERN)'s website stresses this point too (see http://public.web.cern.ch/public/en/LHC/Computing-en.html; last accessed February 2009), and examples of cases where the Grid has been pivotal to research and innovation are described in the PDF available at the bottom of that webpage, entitled ‘What is grid computing?’. One case presented is that of Malina Kirn, experimental physicist, who claims ‘the (…) experiment produces a walloping five petabytes of data a year, equivalent to 45 MP3s a second. Moving and analyzing this data is a challenging job, but by using grid technology, we can distribute data to physicists around the world, drastically improving response time and developing local computing infrastructure that contributes to a large and international collaboration’. Another case is presented by Fernando Danilo Gonzalez Nilo at the Center for Bioinformatics and Molecular Simulations, Universidad de Talca, who claims that ‘each day, biologists, chemists, physicists, mathematicians and engineers are working together to create new nanoparticles that require exhaustive structural characterization. These analyses require intensive use of computational chemistry, which can be very computer-time expensive. Such revolutionary milestones in pharma, medicine, and other fields are only possible with collaborative grid computing and the use of emerging new field like nanoinformatics’. The full PDF is also available at http://www.olivier-art.com/gridTalk/Documents/GridBriefing_What_is_a_grid_2.pdf (last accessed February 2009).

One of these is the project e-Uptake: Enabling Uptake of e-Infrastructure Services, currently being conducted at the ESRC National Centre for e-Social Science. Further details at http://www.ncess.ac.uk/research/hub_research/e_uptake/ (16 July 2008). Also, see Dutton, William H. and Meyer, Eric T., ‘e-Social Science as an Experience Technology: Distance from, and Attitudes Toward, E-Research’. Conference paper delivered at the 4th International e-Social Science Conference, 19 June 2008. Available at http://ssrn.com/abstract=1150422 (16 July 2008).

See http://www.ncess.ac.uk and also http://www.oii.ox.ac.uk/research/?rq=escience, supported by ESRC grant RES-149-25-1022 (17 July 2008).

It remains difficult to see how key steps of frame analysis might be automated through text mining. The aim of frame analysis is to identify frames, or ways in which issues and events can be cast in a certain light, defining ‘the issues’ that we ought to attend to and prioritizing some interventions and responses over other possible ones. Consequently, the presence of a certain frame in a discourse is just as revealing as the absence of another. It is not by ‘counting’, then, that relevance to the analysis can be established. Furthermore, even when ‘counting’ or ‘accounting’ takes place for a frame that is recoverable in a discourse or data set, its linguistic features will vary and will not necessarily be predictable. For instance, to instantiate a frame of war, one could resort to a very large vocabulary. The analyst may easily see and recognize that talk of ‘trenches’ or ‘entrenchment’ may evoke the frame of war, but it is difficult, time-consuming and of little use to compile a very long list of all the terms that can potentially evoke the frame of war for the purpose of training text mining software to recognize this frame. Such a list would always be partial anyway (missing the creativity of language and the richness of idiomatic and literary allusions, and other cultural references that can make terms convey certain meanings in certain contexts). It is in that sense that frame analysis operates at the level of (discursive and symbolic) representation, looking at the images, themes and concepts that are evoked.

The concept of the lay user itself needs to be problematized, as anyone approaching a search will have one (or more) interpretative background and modus operandi. Adopting Derrida's deconstructionist approach, no-one comes as a tabula rasa to a text or data set.

This was debated at length in the question session following the talk ‘The Diffusion of e-Research: Participants, Spectators and the Disengaged’ by Professor Dutton, 20 May 2008, Manchester University.

See Adolphs et al. in the bibliography, as well as the following projects at the Nottingham node of NCeSS: Grid-based Assembly of Qualitative Records; Grid-based Structuring of Assembled Records; Grid-based Coupling of Qualitative and Quantitative Analysis.

The survey aimed to identify who uses and who does not use advanced ICTs in aspects of the research process.

