COMPETENCY N

“Evaluate programs and services on specified criteria.”

My understanding of programmatic evaluations is largely shaped by observations I made during my tenure with my former employer, The Nature Conservancy (TNC). There I learned how even a large, established organization can change its methodology by applying specific, measurable criteria to its achievements and operations, lessons that transfer readily to the library and information professions.

For the first fifty years of its existence, the Conservancy, the world’s largest conservation organization, operated on a simple strategy sometimes referred to as “bucks and acres.” As a land trust, TNC’s goal was to target and purchase environmentally sensitive properties for permanent preservation. It was a pioneering concept in the 1950s that slowly became outmoded as environmental issues grew more complex; larger pieces of land were prohibitively expensive, and the upkeep on existing real estate holdings drained organizational resources that could otherwise have been directed toward new acquisitions.

Measuring Success

The philosophy started to change in the 1990s with a new program called “Conservation By Design.” Instead of merely pursuing the opportunities of the moment, TNC would map out each ecoregion, determine its ecological cornerstones, and set conservation priorities based on scientific determinations. In addition, TNC made greater use of conservation easements: for example, the Conservancy would purchase the rights to subdivide or develop a large property from an owner, who could otherwise continue ranching or farming as long as certain sustainability standards were met. This provided the rancher or farmer with an infusion of capital to continue operating, yet prevented the property from turning into suburban ranchettes or other fragmentary development. Easements usually cost (depending on the specific restrictions and the nature of the property) about half as much as an outright purchase, and aside from monitoring expenses they do not tie up organizational resources in ongoing land management.

The Conservation By Design strategy was extremely influential, and The Nature Conservancy’s operations grew dramatically as a result. Many other land trust organizations followed suit, adopting many of the same strategic goals, and the approach redefined conservation practice. In the 2000s, however, valuable questions were raised: how does TNC, as an organization, measure the success of Conservation By Design? Purchasing easements in areas deemed ecologically important seemed like the right approach, but how does the organization know that these easements are working, that landowners are fulfilling their obligations, and that TNC’s efforts are actually protecting and promoting the endangered flora and fauna that were originally targeted? How can the Conservancy continue to claim to be a science-driven organization if it is not applying rigorous, measurable standards to its goals?

The Nature Conservancy of California organized a trial program called “Measures of Success,” run by TNC-California’s Science Director, Dr. M.A. Sanjayan (who has since gone on to be TNC’s Chief Scientist and made an appearance on Letterman). The system included a number of tools to monitor purchased easements and other conservation actions and to determine the success of species protection and restoration activities. Using a new database to track activity, field representatives and site managers could upload data, maps, and photos to be analyzed by TNC’s Conservation Science team. Only by actually measuring the success (or lack thereof) of conservation programs could the organization properly adjust its actions to emphasize the initiatives and strategies that worked. Once established, the system created a feedback loop: specific criteria now had to be laid out during Conservation Planning so that later analysis would have a defined point of reference. The “Measures of Success” initiative is, by design, a cyclical process; determining the success of past operations became the springboard for expanded, and improved, conservation activity.

Since that time, The Nature Conservancy has adopted “Measures of Success” as a standard part of its Conservation Planning process internationally and is a stronger force for conservation as a result.

Crossover to Library and Information Science

While this example comes from outside the realm of Library and Information Science, my observations are transferable to almost any contemporary workplace. First, no program or initiative can truly claim success without measurable terms by which to test its results. From the start, this requires specific, detailed criteria. While plans can and do change depending on prevailing circumstances, having clear goals from the outset allows organizers to look back more objectively on the operation.

Second, The Nature Conservancy took advantage of emerging technological tools, unavailable during its organizational infancy, to operate this program. Instead of allowing bureaucratic lethargy and an unwillingness to adapt to limit the Conservancy’s potential, it embraced a forward-thinking, technology-based approach that allowed staff in far-flung locations to submit data quickly and remotely for faster and more efficient analysis. Likewise, libraries and other institutions should not be afraid to adopt new technological tools to test the success of their operations, where applicable.

Based on these points, I submit my experience with The Nature Conservancy, as described above, as evidence of my understanding of big-picture programmatic evaluations.

Additional Evidence

I would like to add, as additional evidentiary items, a set of evaluations I personally made of various reference services. These evaluations were performed as part of a series of assignments during LIBR-210, Reference and Information Services, and were intended to do two things: compare in-person, chat, and email reference services, and evaluate the individual performance of each reference librarian in response to my queries. In each scenario I played the role of a library patron without informing the librarian that I was an MLIS student involved in a project, and in each case I pursued the same general line of questioning. None of the three experiences was what I would characterize as a completely successful reference interview, and the email reference was a complete non-starter: the responding librarian considered the question too in-depth and referred me to in-person services, which confirmed my preference for synchronous online reference over email services.

The in-person desk reference interview, which took place at the Main Library in San Francisco, was a limited success. The librarian referred me to a series of books that related directly or indirectly to my topic of inquiry. However, he made no mention of possible research through journal databases (or even print journals), nor did he express any intellectual interest in my research, which is key to the open-questioning process recommended by Brenda Dervin and Patricia Dewdney in their influential article on reference interviews (Dervin & Dewdney, 1986). The former omission may reflect the institutional nature of public libraries; however, the SFPL does subscribe to databases such as JSTOR that would have been an appropriate recommendation for the subject of my questioning (medieval literature).

The synchronous chat reference was also a partial success. The librarian was able to recommend a number of articles related to my subject. However, the biggest issues with the interaction stemmed from the librarian’s digital “body language.” There were long gaps between replies (some extending over five minutes without explanation), and a less patient researcher in my place would not have stayed connected. Even when a reference librarian is very busy, that should be explained to the patron so they understand why there are long gaps in communication.

Conclusion

My mix of professional experience with a large organization revising its procedures and academic experience analyzing information services prepares me for responsibilities as an information professional who analyzes and evaluates programs and services. Determining the success of a program depends on clearly established criteria (measurable objectives) and an objective approach dedicated solely to the improvement of existing or new services.

Exhibit N-1: Reference Service Assessment Form: Email/Web-form Reference | Available upon request

Exhibit N-2: Reference Service Assessment Form: Desk Reference | Available upon request

Exhibit N-3: Reference Service Assessment Form: Chat Reference | Available upon request

References and Further Reading:

  • Dervin, B., & Dewdney, P. (1986). Neutral questioning: A new approach to the reference interview. RQ, 506–513.