One of the most significant aspects of the last year for The Gilbert Center was working with one of the most difficult research clients we’ve ever had. I’ll save the long version of the story for another time (there are valuable lessons to share), but the polite short version is this: (a) They didn’t like the fact that there simply wasn’t compelling evidence for the kinds of conclusions they were seeking. (b) The conclusion for which there was at least some evidence — a higher-level insight related to the fact that we need to build learning loops into our tools — was deemed insufficiently sexy for the board. And thus (c) the final recommendations to the board bore only an indirect relationship to our actual findings.
There’s no reason to believe that research, evaluation, and program planning in civil society will be any better in 2013, but we’re still going to do our best to try to push it in that direction. So let’s start the year with a short piece that we can all use to help keep us honest: Neuroskeptic’s Nine Circles of Scientific Hell. We’re all guilty of these from time to time: Limbo, Overselling, Post-Hoc Storytelling, P-Value Fishing, Creative Use of Outliers, Plagiarism, Non-Publication of Data, Partial Publication of Data, Inventing Data.
“Do slideshows work?” from Beaconfire Wire
Tali Sharot’s Time Magazine article, “Optimism Bias: Human Brain May Be Hardwired for Hope,” is a surprisingly good overview of a very important piece of science. In my judgment, this research has negative implications for as much as 80% of all nonprofit research and evaluation.
As the Gerry McGovern article we’ve linked to elsewhere shows, one obvious implication of this bias is how worthless it makes most website satisfaction surveys. And yet we keep doing them! Maybe that’s because they keep telling us what a good job we’re doing?
We’ve just released our fourth i4 Case Study, entitled Measuring the Value of Nonprofit Data Portals: An i4 Case Study on Backlinks as a Relevance Metric. This case study compares how people regard the profiles of nonprofits at two major information portals, as measured by whether they link to those profiles from elsewhere.
We used publicly accessible information on the two portals to explore a question that is of importance to almost every nonprofit organization: How do we measure the success of our online content? More specifically, how do we know that our content is relevant to the people we’re trying to inform? Most nonprofits are in the “content business” to some extent. Do you rely on Pageviews and surveys to determine whether your content is well regarded? In this case study, we take a look at the use of the behavioral metric that forms the basis for Google’s search ranking and many other measurements of relevance.
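As a toy illustration of the metric (not the case study’s actual methodology), here is how backlink counts might be tallied from a set of observed links. Everything here is hypothetical: the URLs, the link data, and the `backlink_counts` helper are made up for the sketch; real backlink data would come from a crawl or a link-intelligence export.

```python
from collections import Counter

def backlink_counts(links):
    """Count inbound links per target URL.

    links: iterable of (source_url, target_url) pairs (hypothetical data).
    Links from a page to itself are ignored, so a portal linking to its
    own profiles doesn't inflate the relevance signal.
    """
    counts = Counter()
    for source, target in links:
        if source != target:
            counts[target] += 1
    return counts

# Hypothetical example: external pages linking to two nonprofit
# profile pages hosted on an information portal.
links = [
    ("blogA.example/post1", "portal.example/org/123"),
    ("blogB.example/about", "portal.example/org/123"),
    ("portal.example/org/456", "portal.example/org/456"),  # self-link, ignored
    ("newsC.example/story", "portal.example/org/456"),
]
counts = backlink_counts(links)
```

The intuition behind treating this as a behavioral metric: unlike a pageview, each inbound link is a deliberate act by someone else vouching that the content was worth pointing to.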