Education, Technology, and Empirical Data

I just returned from the Institute for Advanced Study’s Symposium on Technology and Education. Anyone interested in how education operates should contact the folks in today’s symposium or in the year-long seminar The Dewey Seminar: Education, Schools and the State. It is a great group of people thinking about justice, finance, the structure of schools, education and labor matters, whether constitutions address education, and much more. Indeed, it struck me that many of the participants’ work could provide interesting opportunities for collaboration.

Today’s speakers offered some fantastic ideas about the way education works in K-12. One thing that occurred to me was how, in yet another field, data is increasingly important. In many areas, vast amounts of data are being used to understand how a student is performing, where a different learning style may be required, whether a teacher is effective, and so on. This point may be readily familiar to those interested in empirical legal studies. Yet two key issues arise: How does one sort the data? And how does one interpret it?

The answer seems to lie in the ability to embrace the Google mindset. Take in data. Study it. Study it. Study it. And see where it takes you. As Hal Varian has described (pdf), “The real secret to Google’s success is that they are constantly experimenting with the algorithm, adjusting, tuning and tweaking virtually continuously.” He compares this approach to the Japanese quality-control practice of kaizen, “commonly translated as ‘continuous improvement.’” As a general matter, Varian has offered:

During the 1960s and 70s the scientific study of financial markets flourished due to the availability of massive amounts of data and the application of quantitative methods. I think that marketing is at the same position finance was in the early 1960s. Large amounts of computer readable data on marketing performance are just now becoming available via search engines, supermarket scanners, and other sorts of information technology. Such data provides the raw material for scientific studies of consumer behavior and I expect that there will [be] much progress in this area in the coming decade.
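The “constantly experimenting, adjusting, tuning and tweaking” approach Varian describes is, at bottom, iterated A/B testing: try variants, measure, keep what works. A minimal, purely illustrative sketch (the variant names and success rates below are invented, not drawn from any real experiment):

```python
import random

def run_experiment(variants, trials, seed=0):
    """Simulate a simple A/B experiment.

    `variants` maps a variant name to its true (unknown in practice)
    success probability; returns the observed success rate per variant
    over `trials` independent draws. The fixed seed keeps the simulation
    reproducible.
    """
    rng = random.Random(seed)
    results = {}
    for name, prob in variants.items():
        successes = sum(1 for _ in range(trials) if rng.random() < prob)
        results[name] = successes / trials
    return results

# Two hypothetical variants of, say, a ranking tweak or a lesson format.
observed = run_experiment({"current": 0.10, "tweak": 0.12}, trials=10_000)
winner = max(observed, key=observed.get)
```

The “kaizen” part is simply running this loop continuously: promote the winner, propose a new tweak against it, and measure again, rather than settling on one design and declaring victory.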

After today’s seminar I am wondering whether “large amounts of computer readable data on marketing performance” could also read “large amounts of computer readable data on education performance.” It seems that day is coming, if not already here. We may be entering an era where education is heavily data driven and educators must be able to use new tools to understand and apply the data. The challenges regarding privacy, tracking, and fairness will be large. Then again, the promise of improved educational outcomes, and of a system that can reach more students in ways far beyond training them to jump through test-taking hoops, suggests that whatever the obstacles, the possibilities are worth pursuing.


3 Responses

  1. A.J. Sutter says:

On what basis can anyone make a “promise of improved educational outcomes”? First of all, how long a baseline does one need to judge this — until students have grown up to be fully-participating citizens in society? How do such outcomes compare to those of an era with less data but smaller classroom sizes, for example? And while you’re right to point out the privacy and fairness issues, there’s also the question of how much all this data collection and objectification of students will interfere with teaching and learning.

    Many of these “improvements” are just projections of the latest scientistic fads, and also of many Baby Boomers’ belief that with the right tricks they can turn their kids into Übermenschen. No doubt (and G-d willing) in a few years this will look tremendously dated, as will the “Google mindset”.

  2. Deven Desai says:


Great points. Although I won’t go into details here, the group was well aware of your points and had some interesting debates about metrics, class size, and the problem of possible interference with teaching and learning. I did not discern agreement about these issues. The one thing that struck me was a sense that at least having data, and trying to use it to assess whatever goal a group has set, was a good idea. That idea reminds me of the social entrepreneurship movement, which also tries to embrace setting goals with some outcome measurement and then seeing whether those goals were met, and if not, what might explain the shortfall.

On a related note, it also reminds me of some material about espionage arguing that the difference between the English and U.S. approaches is that the English have fewer resources and rely on human networks, whereas the U.S. loves to gather vast amounts of information and then see what it can discern. My guess is that there is no perfect answer as to which is better. And, as many in the session pointed out, a blend of methods may be the best way forward.