Looking Through to See Beyond: Changing Education

Is Quantitative Methodology a Good Fit for Educational Research?

Reactions to: Understanding and Describing Quantitative Data – Cathy Lewin, Education and Social Research Institute, Manchester Metropolitan University, UK


The idea of applying statistical analysis to measure the effects of education has always made me skeptical: in my limited experience with statistical research, I have seen the same numbers used to interpret and support opposite theories about education. If statistics are to reveal some objective truth, then everything must depend on the proper design of the data collection and on the reliability of the data itself. And yet even empirical data with high reliability and validity, produced by such means as double-blind, peer-reviewed research methods, seems mainly to serve the ends for which it was designed to be collected. This raises the question of why researchers apply such scientific methods to more esoteric fields of study such as education.

Perhaps there is an unconscious desire on the part of education stakeholders to arrive at a consensus about some aspect of education, some objective universal truth that can be said to describe the quality or best function of a student, teacher, school, or system of education. This desire might often express itself as a need to find a common purpose for education, an altruistic function that would confirm the need for its formal existence.

What is perhaps implicit in quantitative social science data collection and analysis is the predetermining notion that the experiment or survey is not designed to answer an open question, but rather to collect statistics that support a hypothesis already in the mind of the researcher. Collecting data from such simple questions as “How many…?”, “How much…?”, or “How often…?” can provide some perspective to a limited extent, but for the majority of education research the most interesting aspect of the results of quantitative research is the inferences made by the researchers, which are invariably followed by the proviso that “more research in this area is necessary”.

An example of this predetermining notion can be illustrated right from the text: “The sample size will be dependent on the accuracy required and the likely variation of the population characteristics being investigated, as well as the kind of analysis to be conducted on the data” (Lewin, C. (2011). Understanding and Describing Quantitative Data. In Theory and Methods in Social Research (pp. 220–230). Thousand Oaks, CA: Sage Publications). Determining the kind of analysis is, at the very least, a predeterminant of the kind of data intended to be collected. The two go hand in hand; they are inseparable and cannot possibly be determined in isolation from each other. This is what drives my skepticism about statistical analysis and quantitative research. There is an undeniable bias built into these methods of social research, one that not only puts external validity into question but also creates a sort of “closed loop” between the research question and its answer. This “closed loop” design of quantitative research does not seem to marry well with a social context in which time, place, personal circumstance, mood, and even the weather can affect the results.
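To make the coupling concrete, here is a minimal sketch, in Python, of the standard sample-size calculation for estimating a population proportion. The confidence level, margin of error, and assumed variability used below are hypothetical figures, but the point stands regardless: every input must be chosen before a single response is collected, which is precisely the predetermination at issue.

```python
import math

def sample_size_for_proportion(confidence_z: float, margin_of_error: float,
                               expected_proportion: float = 0.5) -> int:
    """Minimum sample size for estimating a population proportion.

    Uses the standard formula n = z^2 * p * (1 - p) / E^2, where every
    input is a choice the researcher makes *before* collecting any data.
    """
    n = (confidence_z ** 2) * expected_proportion * (1 - expected_proportion) \
        / (margin_of_error ** 2)
    return math.ceil(n)

# Hypothetical survey: 95% confidence (z = 1.96), +/-5% margin of error,
# and maximum variability assumed (p = 0.5).
print(sample_size_for_proportion(1.96, 0.05))  # -> 385
```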

Using the best practices of design, such as employing stratified sampling to obtain an accurate test group, still does little to put the issue of creator bias to rest. Surveys, correlational studies, and experimental designs all fall victim to the inherent bias of creating a means by which a predetermined answer can be confirmed or denied. And what is perhaps most striking is that this chapter states that, in social science research, it is commonplace to use non-probability sampling for qualitative approaches, from which the researcher tries, at best, not to generalize findings. What purpose, then, does such research serve beyond answering the research question itself? Are we to avoid summarizing and avoid making generalizations only to maintain the integrity and validity of the research question?
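For readers unfamiliar with the technique, a minimal sketch of stratified sampling follows; the grade-level strata and group sizes are invented for illustration. Each stratum is sampled at the same fraction so that subgroups appear in the sample in proportion to their size, which improves accuracy but does nothing about who framed the questions in the first place.

```python
import random

def stratified_sample(population: dict[str, list], fraction: float,
                      seed: int = 42) -> dict[str, list]:
    """Draw the same fraction from each stratum so that subgroups
    appear in the sample in proportion to their size."""
    rng = random.Random(seed)
    sample = {}
    for stratum, members in population.items():
        k = max(1, round(len(members) * fraction))
        sample[stratum] = rng.sample(members, k)
    return sample

# Hypothetical school population split into grade-level strata.
students = {
    "grade_9":  [f"g9_{i}" for i in range(200)],
    "grade_10": [f"g10_{i}" for i in range(150)],
    "grade_11": [f"g11_{i}" for i in range(100)],
}
sample = stratified_sample(students, fraction=0.1)
print({stratum: len(group) for stratum, group in sample.items()})
# -> {'grade_9': 20, 'grade_10': 15, 'grade_11': 10}
```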

The chapter goes on to describe questionnaire design and administration, and to explore data using descriptive statistics in order to lend some semblance of significance to the data collected. While quantitative research is certainly useful for simply describing a population, it tends to exhibit a kind of conflict of interest in its top-down approach of designing questions in order to prove a hypothesis or to make a prediction about a population. This conflict of interest is exacerbated by the use of complex statistical tools, algorithms, computer-generated data set analyses, and abstract graphical representations of data, as we have seen in so many professional development slideshows, sometimes to the point that the layperson becomes as confused as when trying to discern meaning from an episode of Max Headroom. It can appear as though a question is being answered through hyperactive intellectualism and cumbersome jargon.
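The descriptive statistics themselves are the least contentious part. A minimal sketch using Python's standard library (the test scores below are invented) shows how little machinery is actually required to summarize a data set:

```python
import statistics

# Hypothetical test scores for a single class.
scores = [62, 71, 74, 75, 78, 80, 83, 85, 88, 94]

print("mean:  ", statistics.mean(scores))            # central tendency
print("median:", statistics.median(scores))          # middle value
print("stdev: ", round(statistics.stdev(scores), 2)) # spread, ~9.15 here
```

Summaries like these describe a group; the contested leap is from description to explanation or prediction.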

Using quantitative research to study complex questions about human learning in the real world, where variability resides in every nuance of action and decision-making, is tenuous at best. It should be applied only to the simplest of questions, and only to open the door to deeper and more humanist approaches to understanding how and why learning is a fundamental part of an evolving society.

1 Comment

  1. andrewvogelsang

    As I said in one of my blog posts, quantitative research seems cold. I agree that we need a more nuanced approach to help explain the data, rather than just numbers and significance.
