
development impact links

Weekly links October 20: is p-hacking jaywalking or bank-robbing? Why is African labor so expensive? Why do some nudges fail? & more …

David McKenzie
  • NYTimes piece “When the Revolution Came for Amy Cuddy” on how the replicability crisis came to psychology, and on the issues surrounding online critiques: subjectivity “had burrowed its way into the field’s methodology more deeply than had been recognized. Typically, when researchers analyzed data, they were free to make various decisions, based on their judgment, about what data to maintain: whether it was wise, for example, to include experimental subjects whose results were really unusual or whether to exclude them; to add subjects to the sample or exclude additional subjects because of some experimental glitch. More often than not, those decisions — always seemingly justified as a way of eliminating noise — conveniently strengthened the findings’ results… Everyone knew it was wrong, but they thought it was wrong the way it’s wrong to jaywalk,” Simmons recently wrote in a paper taking stock of the field. “We decided to write ‘False-Positive Psychology’ when simulations revealed it was wrong the way it’s wrong to rob a bank.”
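The kind of bank-robbing simulation Simmons and co-authors describe is easy to approximate. A minimal sketch (not their actual code; the two-outcome setup, sample size, and correlation of 0.5 are my illustrative assumptions): with no true effect at all, an analyst who measures two correlated outcomes and reports whichever test comes out significant pushes the false-positive rate well above the nominal 5%.

```python
import random
from statistics import NormalDist, mean

random.seed(1)
nd = NormalDist()

def p_two_sided(x):
    # Two-sided p-value for the mean of n iid N(mu, 1) draws, H0: mu = 0
    zstat = mean(x) * len(x) ** 0.5
    return 2 * (1 - nd.cdf(abs(zstat)))

def flexible_trial(n=30, rho=0.5):
    # No true effect: both outcomes are pure noise, correlated at rho
    y1 = [random.gauss(0, 1) for _ in range(n)]
    y2 = [rho * a + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1) for a in y1]
    # The "flexible" analyst reports whichever outcome comes out significant
    return min(p_two_sided(y1), p_two_sided(y2)) < 0.05

false_positive_rate = mean(flexible_trial() for _ in range(5000))
print(false_positive_rate)  # well above the nominal 0.05
```

Adding more researcher degrees of freedom (optional covariates, outlier rules, subgroup cuts) compounds the inflation in the same way, which is the paper’s point.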

Weekly links October 13: an anthropological rationale for randomization, what is Jholawala Economics?, changing norms, and more…

  • Another reason to justify random selection – Michael Schulson in Aeon: “there are plenty of situations when random chance really is your best option. And those situations might be far more prevalent in our modern lives than we generally admit.” An interesting discussion, drawing on anthropology, of how different cultures have introduced randomness into decision-making, the advantage being that it stops you from using bad reasons to make decisions: “we might want to come to terms with the reality of our situation, which is that our lives are dominated by uncertainty, biases, subjective judgments and the vagaries of chance.”
  • Maitreesh Ghatak reviews Jean Dreze’s new book “Sense and Solidarity: Jholawala Economics for Everyone”. See also this Twitter thread by Abhijeet Singh on whether Dreze is underappreciated in development economics.

Weekly links October 6: A Bridge too far for Jishnu, reducing recruiting information frictions, cash transfers in Niger, improving tax collection in Brazil, and more…

  • On the Future Development blog, Jishnu Das discusses recent experiments on public-private provision of education in Liberia and Pakistan, takes on Bridge Academies, and highlights the importance of good measurement: in Liberia, Romero et al. tracked students to ensure that schools could not “game” the evaluation by sending weaker children home: “We took great care to avoid differential attrition: Enumerators conducting student assessments participated in extra training on tracking and its importance, and dedicated generous time to tracking. Students were tracked to their homes and tested there when not available at school. Finding children who have left a school is like finding a needle in a haystack. In a country where only 42 percent have access to a cell phone, it’s heroism.”
  • On Straight Talk on Evidence, James Heckman and co-authors get taken to task for torturing data to overstate findings in a 2014 Science article on the long-term effects of the Abecedarian ECD program. The specific criticisms concern sample size (and its reporting) and multiple comparisons. A response and a rejoinder follow the post.

Weekly links September 29: mixed methods (not just for footnotes), parenting in China, step away from that quadratic, and more…


Weekly links September 15: the definitive what we know on Progresa, ethics of cash, a new approach to teaching economics, and more…

  • In the latest JEL, Parker and Todd survey the literature on Progresa/Oportunidades: some bits of interest to me included:
    • CCTs have now been used in 60+ countries;
    • over 100 papers have been published using the Progresa/Oportunidades data, with at least 787 hypotheses tested – multiple testing corrections don’t change the conclusions that the program had health and education effects, but do cast doubt on papers claiming impacts on gender issues and demographic outcomes;
    • Footnote 16 notes that at the individual level there are significant differences in 32% of the 187 characteristics on which baseline balance is tested, with the authors arguing that this is because the large sample size leads to a tendency to reject the null at conventional levels – a point that seems inconsistent with using the same significance levels for measuring treatment effects;
    • Two decades later, we still don’t know whether Progresa led to more learning, just more years in school;
    • One of the few negative impacts is an increase in deforestation in communities that received the CCT.
  • Dave Evans asks whether it matters which co-author submits a paper, and summarizes responses from several editors; he also gives a short summary of a panel on how to effectively communicate results to policymakers.
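On the multiple-testing point above: Parker and Todd’s exact correction procedure isn’t described here, but the mechanics of the simplest family-wise correction, the Bonferroni rule, show how demanding 787 hypotheses are (a hypothetical sketch, not their method):

```python
def bonferroni(pvals, alpha=0.05):
    # Family-wise control: reject H0_i only if p_i <= alpha / m,
    # where m is the total number of hypotheses tested
    cutoff = alpha / len(pvals)
    return [p <= cutoff for p in pvals]

# With 787 hypotheses, the per-test cutoff is tiny:
print(0.05 / 787)  # ≈ 6.4e-05

# Example: only the first (very small) p-value survives correction
print(bonferroni([0.00001, 0.02, 0.04]))  # [True, False, False]
```

Less conservative procedures (e.g. false-discovery-rate control) reject more, but the qualitative conclusion in the survey is the same: robust health and education effects, fragile gender and demographic ones.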

Weekly links September 8: career advice, measuring empowerment, is anyone reading, lumpy cash, and more…


Monthly links for August: What did you miss while we were on summer break?


Weekly links July 28: overpaid teachers? Should we use p=0.005? beyond mean impacts, facilitating investment in Ethiopia, and more…

  • Well-known blog skeptic Jishnu Das continues to blog at Future Development, arguing that higher wages will not lead to better quality or more effective teachers in many developing countries – summarizing evidence from several countries that i) doubling teacher wages had no impact on performance; ii) temporary teachers paid less than permanent teachers do just as well; and iii) observed teacher characteristics explain little of the differences in teacher effectiveness.
  • Are we now all doomed from ever finding significance? In a paper in Nature Human Behaviour, a multi-disciplinary list of 72 authors (including economists Colin Camerer, Ernst Fehr, Guido Imbens, David Laibson, John List and Jon Zinman) argues for redefining statistical significance for the discovery of new effects, lowering the threshold from 0.05 to 0.005. They suggest results with p-values between 0.005 and 0.05 instead be described as “suggestive”. They claim that for a wide range of statistical tests this would require an increase in sample size of around 70%, but would of course reduce the incidence of false positives. Playing around with power calculations, it seems that studies powered at 80% for an alpha of 0.05 have about 50% power for an alpha of 0.005. It implies using a t-stat cutoff of 2.81 instead of 1.96. Then of course if you want to further adjust for multiple hypothesis testing…
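Those power numbers can be checked directly with normal approximations. A back-of-the-envelope sketch for a two-sided z-test, assuming the effect size is calibrated to give exactly 80% power at alpha = 0.05:

```python
from statistics import NormalDist

nd = NormalDist()

def upper_crit(p):
    # Critical value for a two-sided test at level p
    return nd.inv_cdf(1 - p / 2)

z_beta = nd.inv_cdf(0.80)          # ≈ 0.84
# Effect size (in standard-error units) giving 80% power at alpha = 0.05
delta = upper_crit(0.05) + z_beta  # ≈ 2.80

crit_new = upper_crit(0.005)       # ≈ 2.81, the t-stat cutoff mentioned above
# Power for that same effect under the stricter threshold
power_new = nd.cdf(delta - crit_new) + nd.cdf(-delta - crit_new)

# Sample-size inflation needed to restore 80% power (delta scales with sqrt(n))
n_ratio = ((crit_new + z_beta) / delta) ** 2

print(round(crit_new, 2), round(power_new, 2), round(n_ratio, 2))
```

This reproduces all three figures in the bullet: a 2.81 cutoff, roughly 50% power for a study designed around alpha = 0.05, and a sample-size increase of about 70% to get back to 80% power.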

Weekly links July 21: a 1930s RCT revisited, brain development in poor infants, Indonesian status cards, and more…
