February 29, 2016

Emerging evaluation questions

I am just now completing a team evaluation which involved us all applying a standard set of outline questions. Even without getting into the problem of trying to share emerging questions across countries with poor Internet connections, I have been thinking again about how to keep track of emerging issues.

Perhaps the most important skill in doing these interviews is to reflect what one person said to the next. Suppose, for example, one has a simple question:

were the radio programmes appropriate?

From one respondent, one could get the answer

no, because they used a language which the majority of the population would never understand: score of 2/5.

From the second person, one could get the answer

yes, there was an excellent variety addressing lots of different interest groups: score of five out of five.

Now the synthesis of these two might be something like:

mixed results, score of 3/5.

But this is missing the point. The point is to sharpen the interviews and reflect the answer which the first person gave to the second. So we can ask the second person

but what about the language, wasn’t it too abstract for most people?

And ideally, to reflect the answer which the second person gave to the first. So we can ask the first person, if we get the opportunity to go back to them,

There was a whole range of different programmes wasn’t there? Did you know about this range? Does what you say about language apply to all of them?

This is what journalists do to get to the truth and anything less is just shallow. In fact it’s what anyone would do and most evaluators do it too, more or less instinctively.

If respondent A says “oh, project B, that was rather problematic”, my job is not to give a low score to project B; it is a) to ask what exactly was problematic and then b) to reflect this claim, with the evidence, to the manager of project B.

The trouble is when, for various reasons, having an agreed catalogue of questions makes this harder to do. Not for the first time, the effort to make the evaluation method more “scientific” is in danger of making it worse. (I am often disappointed, on reading research and evaluation studies which are explicitly declared to be qualitative, to find that there is a set interview guide, more or less structured, and exploratory coding of the final transcripts, without any opportunity to note emerging themes during the research and reflect them back to new sets of respondents incrementally. I note that “iterative” in discussions of qualitative research normally means iterative generation of themes from a fixed, already-finished corpus of data, although there is also mention of what I am talking about here - moving from data to analysis and back to questions and new data - which is actually quite clearly described in Grounded Theory.)

Two simple hypotheses might be: evaluation research avoids iterative designs in data collection because

  • it seems unscientific
  • it is hard to organise data recording, especially in teams.

There are several issues here.

What is data? This issue is about how the evaluation questions continually evolve from something quite generic to something much more specific. Answering the more specific questions is an unpredictable and emerging operationalisation of the more generic question we started with. So one insight can be at the same time data in the sense that it answers a more generic question and yet also structure in the sense that it is the presupposition behind a new, emerging question. So suppose a generic question about child protection leads to a specific narrative about children being made to do street work. On the one hand this is data: Children do street work. On the other hand it informs a new question for subsequent interviews: Did you hear about children doing street work?

Letting people off the hook? If we fail to adapt our questioning iteratively to include this emerging detail, we will let informants off the hook and collect only bland data. Although of course, we must not restrict ourselves only to the more detailed questions. We still need to ask the more generic questions because subsequent respondents may have other details which branch off into other areas which are just as important. I think the only meaningful data structure which reflects these kinds of developments is a directed network or at the very least a hierarchical outline like a traditional mind map.
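Just to make this duality concrete, here is a tiny sketch in R - an illustration only, assuming the igraph package is installed, with labels taken from the street-work example above:

# A sketch only: each node is both data (an answer) and structure (the
# presupposition behind a new question); edges record what spawned what.
library(igraph)

edges <- data.frame(
  from = c("Q: was child protection adequate?",
           "A: children are made to do street work"),
  to   = c("A: children are made to do street work",
           "Q: did you hear about children doing street work?"))

g <- graph_from_data_frame(edges, directed = TRUE)
plot(g)  # draws the emerging directed network of questions and answers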

How to share across a team? It can be difficult to share these kinds of emerging issues across a team whose members might be dispersed across countries or continents, are probably under time pressure and have to juggle typing up interview notes with connectivity problems and complicated travel schedules. How can you share an emerging tree of questions?

Going backwards in time? The final issue is that ideally you want to reflect issues which emerge later to respondents you spoke to earlier. Sometimes of course you get the opportunity to physically go back to people or drop them an email, but this tends to be the exception rather than the rule. It means respondents you speak to earlier get blander questions on the one hand but more chance to shape the evolution of the research questions on the other.

What tools are available to keep track of emerging issues?

Now what collaborative software do we have? A simple flat file like a Google Doc can work okay in a team. But it is difficult to record an emerging tree of questions in a structure like that - one in which the main emerging structure is shared across team members (and how do we decide which issues get taken up into the shared tree and which do not?) while each member also has branches unique to their own context. To cap it all, ideally the tree will also have a revision history. And of course some members might be natural geeks and do it all with github but others won’t. It might work with coloured post-it notes on a big wall in a shared physical space, but to do it online is a big challenge.
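To make that concrete, here is a mock-up of the kind of tree I mean (the branches and labels are invented):

Were the radio programmes appropriate? [shared trunk]
 Was the language too abstract for most people? [shared; emerged in country X]
 Did the broadcasts reach the northern districts? [private branch: country Y team only]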

I think the way I used a discourse.org discussion forum, as discussed earlier, was a pretty good attempt which addressed these challenges quite well. Technically, I guess it’s a strict hierarchy, because a subquestion can only be assigned to one parent question, but it is so easy to link between posts in discourse.org that the hierarchy is no big hurdle to developing a potentially non-hierarchical structure.

However, of course, the big drawback is that an online discussion brings a whole different set of biases and blindspots and you don’t see the physical, human context in which people are working.

What I’m wondering is, couldn’t a team use a forum like discourse.org to enter data from interviews, focus groups etc., i.e. as if the respondents were actually taking part in the forum, via the evaluator who has to do the actual typing? This would extend the application of the forum software to real-life face-to-face interviews too. You’d have to create or tag a series of posts as all being answers from one particular respondent. Question to self: does discourse.org at present provide good enough facilities to do all the filtering and summarising which you need for doing all your research online? I do remember it is a bit of a pain to assign standard categories like age, occupation or district to forum members.
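For what it’s worth, Discourse does have a REST API, so the typing-up could even be scripted. A rough sketch in R - the forum address, topic number and respondent label below are invented, and exactly how the key is passed can depend on the Discourse version:

# Sketch only: post one respondent's answer into an existing Discourse
# topic via the REST API, tagging the post so it can be filtered later.
library(httr)

post_answer <- function(answer_text, topic_id, respondent_id) {
  POST(
    "https://forum.example.org/posts.json",   # hypothetical forum address
    add_headers(`Api-Key` = Sys.getenv("DISCOURSE_API_KEY"),
                `Api-Username` = "evaluator1"),
    body = list(topic_id = topic_id,
                # prefix so all of one respondent's answers can be found
                raw = paste0("[respondent ", respondent_id, "] ", answer_text)),
    encode = "json")
}

post_answer("No - the language was too abstract for most people.",
            topic_id = 123, respondent_id = "A")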

Advantages of discourse.org and similar forum software for handling emerging evaluation questions:

  • it doesn’t really distinguish between the structure of emerging questions and the answers (aka data); new questions can emerge from the discussion, and managing the question tree is the same as managing the data tree. You don’t have, say, one document with the questions and another with the answers. This is a disadvantage for a typical quantitative research task but an advantage for capturing emerging themes.
  • it can work nicely across teams and doesn’t care if some threads are specific to some contexts and team members, because the others will just ignore them; you can label them as such, or even make them private to a subset of respondents if you wanted.
  • it works asynchronously but you can also chat live if you happen to be online at the same time as someone else.
  • people can link directly to the evidence for some claim they want to make, e.g. they can post a link to a report.
research social research
February 19, 2016

Inventory & analysis of small conservation grants, C&W Africa - Powell & Mesbach! Lots of charts!

Here it is at last

This was an interesting job.

We visited three countries, did a lot of interesting interviews, and a lot of data analysis.

This project really shocked me about how fast Western civilisation and Chinese money are eating up Africa’s nature.

I admit I got a bit distracted by the coding side of it.

It went like this:

We wrote to a lot of the agencies funding small conservation grants but of course we didn’t get much data from many of them. So I wrote long scripts to scrape all the websites and automated the whole report. Data was also gathered, via the aiddata API, from aiddata.org, which collects and refines the OECD data. So it was really a fully reproducible Rmarkdown product, and it looked quite nice, with about 150 charts. Until, of course, it had to get squeezed into Microsoft Word at the end.
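The pattern, very roughly, was this - a sketch only, where the endpoint, parameters and field names are placeholders rather than the real aiddata API:

# Sketch of the fetch-then-chart pattern behind the report; the URL and
# fields below are placeholders, not the actual aiddata API.
library(httr)
library(jsonlite)

resp <- GET("https://api.example.org/activities",   # placeholder endpoint
            query = list(sector = "conservation", region = "west-africa"))
activities <- fromJSON(content(resp, as = "text"))

# in the Rmarkdown report, chunks like this one fed the ~150 charts
barplot(table(activities$funder), las = 2, main = "Grants by funder")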

Is this “big data”? Not really.

Data from one of the funds

r dataviz conservation IUCN Africa reproducibleResearch
February 17, 2016

How to make Theory of Change diagrams with QuickToC

Basic use

Each line in the text box is one variable in the graph. A variable is any element in your theory of change - something which is one way but could be another. Something close to you which you can control, or something far away which you want to happen, or anything in between.



A variable
Another variable!

Allowed characters

You can use ( ) ! ? . , in the names of your variables. Don’t use " = ; or '. If you are using to=, remove anything except letters and numbers from the name of the target.

Additionally, these characters can be used in the names if you use an alias:

{ | } ~ & `+` 
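For example, these two variable names, each on its own line, are fine as they are:

Did the training help? (Yes, no, maybe...)
Staff morale, motivation, etc.!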

Two variables on their own aren’t much fun, though. We want to draw edges between the variables, usually with arrows to show which change contributes to which.

Arrows can be specified in four different ways for convenience. Just paste any of these examples into the text box to try them out and adjust them.

Using spaces to create edges (arrows) between variables.

You can just use spaces before the variables, like this:



me
 child a
  grandchild a
  grandchild b
 child b
  grandchild c
  grandchild d

Spaces are good for when you think about the effects of something.




Supermarkets charge for plastic bags
 Far fewer plastic bags purchased
  Much less plastic waste
 Somewhat more hemp bags purchased
  Somewhat more hemp waste
  Shoppers more conscious of waste

Using dots to create edges (arrows) between variables.

Or you can use dots, which reverses the order.




goal
.result a
..subresult a
..subresult b
.result b
..subresult c
..subresult d

This way you can build up nice tidy hierarchies, but you can also have ragged ones too:




goal
.result a
..subresult a
...sub-sub result x
...sub-sub result y
..subresult b
.result b
..subresult c

Don’t let your planning tools force you to be regimented if you don’t want to be!

Boxes

Grouping boxes are specified using an initial -. What comes after will be printed as the label of the box. If you want boxes inside boxes, use more dashes.

Variables following a box definition are shown inside the box. (If a variable appears more than once in the text, the last appearance determines which box it will be shown in.)

The edges between variables are not affected by boxes, though the layout may change a little.


        
-School
Principal;to=Teacher

--Classes
Teacher;to=Motivation Learning

---Students
Learning
.Motivation

Really, for quick sketches, that is all you need. But there is a lot more you can do with QuickToC, so read on if you want more control.

The supermarket diagram above was pretty nice, but we’d like to point out that one of these effects is small and negative while another is large and positive.




Supermarkets charge for plastic bags
 Far fewer plastic bags purchased; edgecolor=black
  Pollution?
 Somewhat more hemp bags purchased; edgecolor=red
  Pollution?


theorymaker


