To the editors:
In their Feb. 3 essay, members of the National Council for Online Education argue that online courses, properly done, are at least as good as in-person classes. As evidence, they link not to a study or meta-analysis but to a database of papers, which is somewhat akin to my making a medical claim followed by a link to PubMed, except that in this case the database was expressly designed to be biased. It's literally named the "No Significant Difference database," and its belated offer to solicit studies that do show a significant difference seems a little disingenuous.
It currently holds 141 studies showing no significant difference, 51 showing online better, and some showing classroom better or mixed results. At a standard p < 0.05 significance level, we'd expect even a fair database to show nonzero counts in those categories purely from random noise, even if there were in fact no true difference.
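That false-positive arithmetic is easy to verify directly. A minimal sketch, purely illustrative: the study count of 200 below is a made-up number, not the database's actual total, and the calculation assumes independent studies all testing a true null.

```python
from math import comb

def expected_false_positives(n_studies: int, alpha: float = 0.05) -> float:
    """Expected number of 'significant' results if no study has a true
    effect and each tests independently at level alpha."""
    return n_studies * alpha

def prob_at_least(k: int, n_studies: int, alpha: float = 0.05) -> float:
    """P(at least k of n_studies reach p < alpha by chance alone),
    computed from the binomial distribution."""
    return sum(comb(n_studies, i) * alpha**i * (1 - alpha)**(n_studies - i)
               for i in range(k, n_studies + 1))

# With 200 hypothetical studies and no real difference anywhere,
# we still expect about 10 'significant' findings by chance:
print(expected_false_positives(200))      # 10.0
# and seeing at least a handful of them is nearly guaranteed:
print(prob_at_least(5, 200))
```

The point is not the specific numbers but that a database of significance tests will accumulate "differences" in both directions even under a true null, so nonzero "classroom better" and "online better" counts prove little on their own.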
But I think the real issue that has hit proponents of online courses in the past couple of years is that, for the first time on a large scale, assignment to online courses was effectively randomized (often by university or state). Many institutions have taught both online and face-to-face classes for years, but few have forced students into online courses. Students studying online were therefore self-selected, which violates the first rule of testing the efficacy of anything: randomize your sample. At my own university, a number of students in my face-to-face classes had tried online courses, disliked them, and specifically chosen in-person classes. It's little wonder that such students were unhappy or underperforming when forced back online.
It’s certainly true that there’s a real difference between courses carefully designed to be online and courses abruptly forced to be remote. What’s telling to me, though, are reports that the courses least popular with our suddenly online students were the ones that had been online all along. Professors who’d taught online for many years were surprised that their best-practices asynchronous courses were suddenly attracting complaints in a way the Zoom-my-lecture classroom-simulacrum courses weren’t. We know learning gains and student satisfaction aren’t perfectly correlated, but this does highlight the self-selection issue.
In April 2020, it was fair to say many of the “online” courses weren’t well designed. It’s rather bizarre to claim this in February 2022, however. If nearly two years of experience and training in how to design online courses, including universities putting them all through Quality Matters review, hasn’t produced acceptable online courses, are we setting an impossible standard?
I think we all understand that the future will hold a mixture of in-person and online courses, likely with more online than before because of the flexibility the format provides. It works well for some students, and it is necessary to serve those with full-time jobs. Many professors who previously said they’d never teach online now see it as a realistic possibility.
What I’d like to see is proponents of online courses honestly confronting the fact that the format doesn’t work well for some students and for some courses. And I’d like them to throw out every study that didn’t randomize the assignment of modality.