2010/06/21

Comment at Joanne's Place (2010-06-21)

Joanne Jacobs notices a new confirmation of an old result: Kids Get Computers; Scores Fall. Harriet left a comment.

TIMSS results, among others, indicated as long as 14 years ago that computers add little to basic instruction. This result parallels a similar assessment of the contribution which early use of calculators makes to Math performance. These are statistical results (“facts”, if you will) which admit various explanations and suggest various extrapolations.

First, note that most of these studies of school performance use standardized tests of reading comprehension and Math. As a wise lady from the ETS once said: “We can’t measure what’s important, so we measure what we can.” When Chubb and Moe determined to study the relation between institutional structure and school performance, they used student gains between 10th and 12th grade on standardized tests of reading, Math, and Science. They did not use Social Studies scores because Social Studies scores did not correlate with anything (which is pretty funny if you know anything about statistics: test scores normally correlate with everything, so a score that correlates with nothing is probably measuring noise).

Second, computers have transformed so many industries that one might reasonably project that they will transform the education industry when decision-makers face an incentive structure which rewards the choice of effective means. At Jay Greene’s blog, guest blogger Matthew Ladner described a successful charter thus:
Carpe Diem is a 6-12 school with 240 students. A value added analysis of test scores found that they have the biggest gains in the state of Arizona. Their math results are really off the chart, with some grades averaging at the 98th percentile on Terra Nova. Carpe Diem is a hybrid model school, rotating kids between self-paced instruction on the computer and classroom instruction.
Perhaps in the Vigdor and Ladd study we see the result of inept implementation and not a defect in the principle.

Third, skills improve with practice, and people will practice what they enjoy. When a program adjusts the practice it presents to each student to that student’s skill level, and when it can compose infinitely variable practice routines, it is hard to see how it will not enhance student performance. Consider computer chess.
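To make the mechanism concrete, here is a minimal sketch of one way a drill program might do that pairing: an Elo-style rating update, the same family of methods chess software uses to match its strength to an opponent. The logistic model, K factor, and drill bank below are illustrative assumptions of mine, not any particular product’s algorithm.

    import random

    K = 32  # step size: how fast the skill estimate moves after each answer

    def expected_success(skill, difficulty):
        # Logistic model: chance of a correct answer given the gap
        # between student skill and problem difficulty (Elo-style scale).
        return 1.0 / (1.0 + 10 ** ((difficulty - skill) / 400.0))

    def update_skill(skill, difficulty, correct):
        # Nudge the estimate toward the observed result.
        return skill + K * ((1.0 if correct else 0.0) - expected_success(skill, difficulty))

    def next_problem(skill, problems, band=100):
        # Serve a problem near the student's level: challenging, not discouraging.
        near = [p for p in problems if abs(p["difficulty"] - skill) <= band]
        return random.choice(near or problems)

    # Hypothetical drill bank; in a real program the prompts would be
    # generated on the fly, which is what makes practice "infinitely variable".
    problems = [{"difficulty": d, "prompt": f"{d % 13} x {d % 17} = ?"}
                for d in range(800, 1601, 50)]

    skill = 1000.0
    for _ in range(5):
        p = next_problem(skill, problems)
        correct = random.random() < expected_success(skill, p["difficulty"])  # stand-in for a real answer
        skill = update_skill(skill, p["difficulty"], correct)
        print(f"difficulty {p['difficulty']}: correct={correct}, skill estimate now {skill:.0f}")

The same loop that keeps a chess program evenly matched with its opponent keeps drills matched to the student; whether any given implementation does this well remains the empirical question.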

As usual: “What works?” is an empirical question which only an experiment (a decentralized public-policy regime or a competitive market in goods and services) can answer.
