As my job description becomes increasingly clear, it also becomes increasingly unnerving. Unnerving in the sense that I don’t believe it’s possible. At least, it’s not possible in the way I was trained.
I’ve been asked to perform an economic evaluation of CEDAP’s programs in Peru. For a while, I wasn’t sure what an economic evaluation meant to CEDAP. Specifically, I did not know whether they wanted estimates of the effects of their programs or a determination of causality.
The first isn’t so hard. It involves me learning about the programs and working out an idea of the expected outcome. For example, I can look at the program “Nutrihojitas,” which provides nutritional supplements to kids in kindergarten. I can work out a rough idea of the relative effect of this type of program from existing research. Similarly, I can do that for most other programs.
Now, causality is much harder. Basically, it involves an approximation of the counterfactual, i.e. what would have happened if the intervention hadn’t occurred. Then you compare the two. The best way to do that, the “gold standard” of development economics, is to perform a randomized controlled trial. Randomized controlled trials involve a treatment group and a demographically similar control group. The idea is that the control group shows what would have happened without the intervention. It’s a great way to say: “Look, this highly similar group of people did a lot better with our program.”
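The comparison behind that logic can be sketched in a few lines. This is just an illustration of the difference-in-means idea; the outcome numbers below are hypothetical, not CEDAP data:

```python
from statistics import mean

# Hypothetical outcomes (e.g., a nutrition score) for two groups.
# These values are made up for illustration only.
treatment = [0.4, 0.1, 0.6, 0.3, 0.5, 0.2]
control = [0.1, -0.2, 0.3, 0.0, 0.2, -0.1]

# With random assignment, the control group's average approximates
# the counterfactual, so the difference in means estimates the
# program's effect.
effect = mean(treatment) - mean(control)
print(round(effect, 2))  # 0.3
```

Without random assignment, though, that simple difference can reflect anything else that differs between the two groups, which is exactly the problem below.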
CEDAP does not have a control group. Now, there are other ways to approximate one that I am fairly familiar with, like instrumental variables or regression discontinuity. Unfortunately, CEDAP seems to lack the data necessary for those as well. Plus, there are several other government and NGO programs in these communities, so there are convincing alternative explanations for positive changes in the communities.
So I finally asked Tulia what she was seeking. Her quick answer was “la diferencia sin programa y con programa” (the difference without the program and with the program), in other words, causality. This came shortly after Tulia gave a speech about CEDAP at her small birthday celebration a week ago. She hopes to provide a more comprehensive economic report on 2014’s programs to help grow CEDAP.
I’m hoping to have a lengthier conversation with her about the evaluation next week. Specifically, I need to figure out how comprehensive it should be. Likewise, since she said that people frequently ask for an economic evaluation, I’d like to know exactly what criticisms she has received in the past.
Also, I’m not going to sit on my hands. I’ll pull together what I can for an evaluation of 2014, but more importantly, I am planning to help CEDAP move into a more evaluation-focused mindset.
For example, I am hoping they can add economic surveys before, after, and even throughout the competition. Likewise, I am going to push them to include a control group if and when it is possible.
Additionally, I want to help them move away from paper toward digital records. Currently, they use thousands of sheets of paper during the development competition, Pachamamanchikta Waqaychasun (PW), which means “Let’s Conserve Our Mother Earth.” Specifically, I want to design an app or program that can run on a basic cell phone, both for scoring during PW and for completing the evaluation process.
Tulia said she likes the idea of automating the process. Plus, it will help me learn to program an app, which is a pretty useful professional skill. Essentially, everybody would win, and it would help them evaluate programs in the future.
I also considered the possibility of helping set them up with academic evaluation in the future. I contacted a former professor at Macalester about possibly connecting CEDAP’s data with her International Economic Development course. There could be a class project around evaluating the data even after I leave in July.
Mainly, it’s a start. It’s an attempt to add some sustainability moving forward and some definition to my job. And we’ll see what happens.