Abstract
The goal of this study is to explore new tools for analyzing scientific sense‐making in out‐of‐school settings. Although such measures are now common in science classroom research, dialogically based methodological approaches are relatively new to informal learning research. Such out‐of‐classroom settings have more recently become a breeding ground for new design approaches to tracking scientific talk and ideas within complex data sets. The research reported here seeks to understand the language people actually use to make sense of the life sciences over time. A further goal of this study is to track biological themes over time, using a new analytical scheme—the Tool for Observing Biological Talk Over Time. Our analyses are linked to and informed by tensions between particularistic and holistic data collection and analysis, qualitative and quantitative representations, and everyday and formal science discourse. These tensions and our analyses are linked to larger theoretical frameworks and to the recursive interplay between theory and practice.
Acknowledgements
This work has been supported in part by NSF REC Grant # 0133662 to Doris Ash and by NSF ESI 0119787 Grant to the Center for Informal Learning and Schools.
Notes
1. By everyday resources, we mean the spontaneous, ordinary understandings typical of non‐scientists gleaned from television, newspapers, friends, school, and many other distributed sources of knowing, which enable learners to create a dialogue with exhibits, with one another, and with the overall setting.
2. Head Start is a comprehensive child development program funded through the US Department of Health and Human Services that assists children from birth to age five, pregnant women, and their families. Head Start is a child‐focused program with the overall goal of increasing the school readiness of young children in low‐income families.
3. The measure of reliability used was directly exported from Callanan’s collaborating psychology research laboratory. Cohen’s Kappa calculates the degree of agreement between coders while correcting for chance agreement. Uncorrected or “just agreement” percentages (i.e., the number of agreements divided by the total number of coded pieces), although common in peer‐reviewed educational research, do not take into account the portion of the observed agreement that is due to chance. In psychology journals, a Kappa value of .7 or better is generally accepted, and a value of .85 or higher is regarded as an indication of high reliability, or match between coders.
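To illustrate how Cohen’s Kappa corrects “just agreement” percentages for chance, here is a minimal sketch of the standard formula, κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is the agreement expected from each coder’s marginal label frequencies. The function name and the example labels (“bio”/“other”) are illustrative, not drawn from the study’s actual coding scheme.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Inter-coder agreement corrected for chance (Cohen's Kappa)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of pieces both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders labeling five transcript segments.
a = ["bio", "bio", "other", "bio", "other"]
b = ["bio", "other", "other", "bio", "other"]
kappa = cohens_kappa(a, b)  # observed agreement is .8, but kappa is lower
```

Note that with these labels the raw agreement is 4/5 = .80, yet Kappa is only about .62, because much of that agreement could arise by chance given each coder’s label frequencies; this is precisely the correction the note describes.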