The Abdul Latif Jameel Poverty Action Lab (J-PAL)
J-PAL Executive Education: Evaluating Social Programmes, US 2018
This five-day in-person training gives participants a thorough understanding of why and when researchers and policymakers might choose to conduct randomised evaluations, and how randomised evaluations are designed in real-world settings. The course covers basic concepts related to measuring impact through randomised evaluations and discusses technical design choices as well as pragmatic considerations when conducting a randomised study. It reviews the benefits and methods of randomisation, how to choose an appropriate sample size, and common threats and pitfalls to the validity of an experiment. It also covers the importance of a needs assessment and a theory of change, and how to measure outcomes effectively: tools that are critical for all programme evaluations.
Date: 11 to 15 June 2018
Venue: Massachusetts Institute of Technology, Cambridge, MA, United States
Topics covered include:
- What is an evaluation?
- Why and when is a rigorous evaluation of social impact needed?
- The common pitfalls of evaluations and how randomisation helps avoid them
- The key components of a good randomised evaluation design
- Alternative techniques for incorporating randomisation into project design
- Determining the appropriate sample size, measuring outcomes, and managing data
- Guarding against threats that may undermine the integrity of the results
- Techniques for the analysis and interpretation of results
- Maximising policy impact and testing external validity
- Understanding and using the Theory of Change framework
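As a taste of the sample-size topic above, here is a minimal sketch of the textbook power calculation for a two-arm randomised trial with a continuous outcome. The function name and default values (5% significance, 80% power) are illustrative choices, not taken from the course materials.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a difference in means of
    `delta`, given outcome standard deviation `sigma`, using the
    standard formula n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sigma/delta)^2.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    n = 2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)

# Detecting a 0.2-standard-deviation effect at 80% power and alpha = 0.05:
print(sample_size_per_arm(delta=0.2, sigma=1.0))  # -> 393 per arm
```

The familiar rule of thumb that small effects demand large samples falls straight out of the formula: halving the detectable effect size quadruples the required sample.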
The course is designed for directors, managers, officers, and researchers from governments, NGOs/nonprofits, international development organisations, and foundations, as well as trained economists looking to retool.
Contact: Tom Bangura, email@example.com