Making Decisions Using Randomized Evaluations
I really liked this article from a recent Economist: Economics focus: Control freaks; Are “randomised evaluations” a better way of doing aid and development policy?:
Laboratory scientists peer into microscopes to observe the behaviour of bugs. Epidemiologists track sickness in populations. Drug-company researchers run clinical trials. Economists have traditionally had a smaller toolkit. When studying growth, they put individual countries under the microscope or conduct cross-country macroeconomic studies (a bit like epidemiology). But they had nothing like drug trials. Economic data were based on observation and modelling, not controlled experiment.
That is changing. A tribe of economists, most from Harvard University and the Massachusetts Institute of Technology (MIT), have begun to champion the latest thing in development economics: “randomised evaluations” in which different policies—to boost school attendance, say—are tested by randomly assigning them to different groups...
Randomised evaluations are a good way to answer microeconomic questions... often, they provide information that could be got in no other way. To take bednets: supporters of distributing free bednets say that only this approach can spread the use of nets quickly enough to eradicate malaria. Supporters of charging retort that cost-sharing is necessary to establish a reliable system of supply and because people value what they pay for. Both ideas sound plausible and there was no way of telling in advance who was right. But the trial clearly showed how people behave...
Reading the whole article is best, but the core idea is that it might be helpful to conduct experiments on samples before applying policies to entire populations. In other words, don't just rely on theories, "conventional wisdom," "best practices," and so on... try to determine what actually works, and then expand the successful approaches to the overall group.
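The mechanics of a randomized evaluation are simple enough to sketch: randomly split a sample into treatment and control groups, apply the candidate policy only to the treatment group, and compare average outcomes. A minimal sketch in Python (the helper function, the toy data, and the intervention are all hypothetical, for illustration only):

```python
import random
import statistics

def randomized_evaluation(population, intervention, outcome, seed=0):
    """Estimate a policy's effect by randomly splitting the population,
    applying the intervention to the treatment group only, and comparing
    the mean outcome of the two groups."""
    rng = random.Random(seed)
    shuffled = list(population)
    rng.shuffle(shuffled)                      # random assignment
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    treated = [outcome(intervention(unit)) for unit in treatment]
    untreated = [outcome(unit) for unit in control]
    return statistics.mean(treated) - statistics.mean(untreated)

# Toy example: a hypothetical subsidy that raises a unit's score by 10.
baseline = [50.0] * 100
effect = randomized_evaluation(baseline, lambda u: u + 10, lambda u: u)
print(effect)  # 10.0
```

The random assignment is the whole point: because groups differ only by chance, the difference in means can be attributed to the policy rather than to pre-existing differences between the groups.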
I thought immediately of the application to digital security, where, for example, bloggers write posts like Challenges to sell Information Security products and services:
Everyone knows (I hope) that some security measures are simply necessary — period. Firewalls and Antivirus, for example, are by common sense necessary.
Care to test that "common sense" in an experiment?
Comments
Out of 65 controls studied, two were very important, another 4 were often present in the best performers, and a further 15 were considered foundational — which leaves 44 that didn't really matter much.
I suspect similar results would come from examining security controls.