Some of us may not go into academia, but may instead be interested in joining the ranks of USAID, the State Department, or a small start-up NGO. Many of these organizations focus on monitoring and evaluation, lessons learned, "scaling up," and replication. But do these buzzwords reflect the implementation of effective programs and policies?
As you read the article "Stop Trying to Save the World," think about the use of these buzzwords and the movement toward a focus on lessons learned, replication, and scaling. Please review the discussion questions below as you read; they will guide our conversation in class.
(You can skip the section of the article about NGO funding titled: “The last NGO I worked for had 150 employees and a budget of…”)
Discussion questions:
- As Professor Härdig pointed out in his previous blog post, not all research can or should be used to produce policy. Do you agree or disagree with this statement as it relates to Hobbes's article, and why?
- Is there an objectively existing reality that researchers can represent without internal biases or without shaping it through interpretation? Or is what we call reality an inescapable byproduct of our own interpretations?
- Can there be more than one scientific method?
- Should policy or program relevance determine a research agenda?
- What are your ontological and epistemological commitments?
- As a practitioner of international development, what lessons can be drawn from the article “Stop Trying to Save the World” as it relates to research methodologies and small-N case studies?
- What are some of the methodological issues highlighted by Hobbes's article, and how can practitioners avoid them?
- In the case of the deworming programs in Kenya and India, what independent variables and processes could have been examined to better understand the dependent variables, and how?
- George and Bennett suggest that a "controlled comparison...is very difficult to achieve" (p. 151), and the other methodologies they discuss attempt to work around strict comparison as much as possible. If this is true, should practitioners scale up or replicate interventions based on findings from non-controlled comparisons? If so, how?