
An Atlas for Higher Ed: Curriculum Mapping and Library Instruction

Posted on 27 November 2013 at 3:03 pm in Musings.

When I was a kid, I was obsessed with maps. I had maps on the walls of my bedroom and I would browse an atlas for fun (nerd alert!). When I started reading, I liked the books with maps in the front (still do). By middle school, I had descended into roleplaying games and would spend hours (poorly) drawing maps of my own fictitious world, a habit that lasted longer than I care to admit.

So it shouldn’t be a surprise that when professional peers started talking about curriculum mapping, I got interested. My first exposure to the concept came at the CARL Conference 2012, during a keynote presented by Char Booth and Brian Mathews. Char was presenting the curriculum mapping work she was doing at Claremont Colleges through an IMLS grant.

Meanwhile, my colleague Nicole Branch was investigating curriculum mapping as well. She attended a curriculum mapping workshop at ACRL 2013, and brought back a number of templates and materials we could incorporate into our work.

The Scaffold

Scaffolding on the side of the Boston Public Library, used via Creative Commons license, courtesy Boston Public Library.

Nicole and I co-coordinate library instruction at our place of work, a small, private liberal arts university. We do not have a semester-long library skills course as some institutions do; information literacy instruction is a part of university learning outcomes and is intended to be integrated with the regular course curriculum. Librarian-led instruction takes place in one-shot workshops, either initiated by the faculty or in conjunction with partnered academic programs.

As our program expanded and we received more faculty requests, Nicole and I ran into a problem that will be familiar to other instructional librarians: students who think (rightly or not) that they’ve “already had this workshop before.” As a small school, we do often see the same students in many of our different workshops. We needed to ensure we were delivering different presentations each time, not only to maintain student interest, but to actually address all five ACRL standards for information literacy instruction over the course of a student’s matriculation (which is plainly impossible in any single workshop).

Before we were “mapping,” we were already starting this coordination: sharing lesson plans with each other, identifying skills we wanted to address in one course vs. another, and working with academic departments to ensure that our instruction would be a regular part of course curricula (our primary partners were our school’s freshman composition classes and an interdisciplinary general education program attended by all undergraduates). This “scaffolded” program of information literacy was the foundation of our curriculum map: the landmarks, as it were, to which we’d apply a roadmap.

Charting Our Program

The next phase in the development of our curriculum map was a response to an opportunity: our campus administration was starting to think about curriculum mapping as it relates to graduate and undergraduate programs, and our University Librarian suggested Nicole and I could present on the subject to our annual Dean’s Conference, an end-of-academic-year faculty meeting. Most faculty had only seen the portions of our information literacy instruction that related to their courses; this was a chance to unveil the comprehensiveness of our program and use it as an example when explaining how curriculum mapping worked.

One of the features of curriculum mapping is that it can be adapted to any level of instruction, whether it’s an individual workshop, a semester-long course, or an academic program, and the micro components can roll into the macro. You can start the process from scratch, or you can adapt existing learning outcomes and assessment models into the curriculum mapping format. It’s flexible.

Nicole Branch put together the backbone of our presentation. She adapted the materials distributed at the ACRL workshop, and also incorporated the approach developed at the University of Hawaii, Manoa’s office of assessment. Our scaffolded series of workshops became this chart, showing which ACRL standards and university learning outcomes each of our workshops addressed. We didn’t input every detail of our workshops into this form; those details lived in our lesson plans and didn’t need replication.
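
At its core, the chart is just a grid of workshops against standards. As a rough illustration only (the workshop names and standard assignments below are invented, not our actual map), the same idea can be expressed as a simple data structure and queried for coverage gaps:

```python
# A minimal sketch of a curriculum map as a data structure.
# Workshop names and standard assignments are hypothetical examples,
# not our actual chart.

# The five ACRL Information Literacy Standards, by number.
ACRL_STANDARDS = {1, 2, 3, 4, 5}

# Each workshop maps to the ACRL standards it addresses.
curriculum_map = {
    "Freshman Composition: Finding Sources": {1, 2},
    "Freshman Composition: Evaluating Sources": {3},
    "Gen Ed Seminar: Research as Conversation": {1, 3},
    "Senior Capstone: Citation and Scholarly Ethics": {5},
}

# Which standards does the program as a whole cover?
covered = set().union(*curriculum_map.values())

# Which standards are not yet addressed by any workshop?
gaps = ACRL_STANDARDS - covered
print("Standards covered:", sorted(covered))
print("Standards with no workshop coverage:", sorted(gaps))
```

The point is less the code than the question it answers: once the map exists in any queryable form, “which standards does no workshop address?” becomes a quick check rather than a guess.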

As the cherry on top, I adapted that chart into a mind map, which made for a splashy visual presentation that faculty really responded to — the map in “curriculum mapping.”

For us, the completed curriculum map was transformative: we could see the big picture of how our program worked and the pedagogy we employed; it forced us to ask tough questions about our own assessment techniques; it allowed us to communicate our goals better with faculty; and we could identify which information literacy standards were inadequately addressed. It encouraged us to be reflective and we revised a number of our workshops as a result, a process that is still continuing as we strive to improve our program.

An Atlas for Higher Ed

The next chapter is unfolding right now. As a follow-up to the Dean’s Conference presentation, the administration has asked the library to work with various academic programs on their own curriculum mapping efforts. Nicole and I have developed a new presentation we’ve been delivering to different departments to assist them in their process. I am curious how each department will approach this differently, and what we will learn from them as a result. Together, we can produce a series of maps that intersect across university learning outcomes, academic majors, and general education requirements: an atlas for our model of higher education.

LAUC-B Conference 2013: Counting on Libraries

Posted on 25 October 2013 at 8:13 pm in Musings.

I spent Friday at the LAUC-B Conference 2013, which UC opened up to the wider librarian community. Titled Making it count: Opportunities and challenges for library assessment, it was tightly focused on the evaluation of library services.

The opening keynote was delivered by Steve Hiller, the Director of Assessment and Planning at the University of Washington Libraries. This served as the lit review of library assessment practices, providing a chronology of how evaluation has changed in the past century+ of academic library services, with case studies and the best practices of today mixed in. The vital takeaway is that the traditional statistics of library work (circulation stats, reference desk interactions, etc.) look backwards instead of forwards. They emphasize prescriptive, numerical measures instead of looking at outcomes, such as whether our customers — the university’s students, faculty, and administration — are achieving success.

“What is easy to measure is not necessarily desirable to measure.” – Martha Kyrillidou, 1998.

That leaves us with an obvious question: how can we measure outcomes, rather than usage? Hiller recommended the book How to Measure Anything by Douglas Hubbard. Hubbard suggests the following:

  • Our dilemmas are not unique. Others have struggled with the same issues.
  • You need less data/information than you think.
  • You have more data/information than you think.
  • There are useful measures that are much simpler than you think.

Hiller has noticed some trends in library assessment: a greater reliance on external (campus-wide) measures aligned with university planning, the demonstration of library impact on individuals and communities, and outcomes-based assessment that makes use of multiple measurement tools. Institutions are more interested in student learning outcomes and how the library contributes to overall student learning than in the traditional metrics.

“‘Not how good is this library.’ Rather, ‘How much good does it do?’” – R.H. Orr, 1973.

It is up to our customers to determine the quality of our libraries and library services. Hiller left us with four assessment questions:

  • What do we need to know about our communities and customers to make them successful?
  • Who are our partners in collaborative assessment?
  • How do we measure the effectiveness of our services, programs, and resources and how they contribute to user success?
  • What do our stakeholders need to know in order to provide the resources needed for a successful library?

The speakers that followed Hiller did their best to answer these questions.

“What do we want? Incremental change. When do we want it? In due time.” – Lyn Paleo.

The next segment featured a trio of speakers on different topics: Joanne Miller of the California Digital Library spoke on what information and data the University of California keeps, Lyn Paleo shared what librarians need to know about the assessment/evaluation process, and OCLC’s Merrilee Proffitt discussed assessing special collections.

Lyn Paleo’s presentation was particularly fascinating. Paleo is not a librarian: she is a program evaluator and member of UC Berkeley’s faculty. She outlined some of the steps involved in assessment:

  • Problem or need;
  • Intervention (program, policy, service, institution, etc.);
  • Outcomes (from the perspective of the beneficiary);
  • Impact.

So how does this relate to libraries? Paleo explained that the academic library is a social human-service intervention to solve a problem. The problem libraries are meant to solve?

  • The student’s need for information.
  • The faculty’s need for research materials.
  • The college’s retention and graduation rates.

Paleo laid out how the library attempts to solve the problem. It provides access to information sources for academic work, in the form of books, journals, and online resources (in all their various permutations). It provides reference and instruction services, which teach students how to access and use those information resources. And it provides the space students need both to complete academic work and to relax, with (hopefully) proper lighting, amenities, organization, comfortable seating, individual study areas, group study areas, and both noisy and quiet spaces.

All of those solutions are, in their own ways, measurable. Simple methods can be devised for tracking foot traffic in certain areas of the library, whether students are working in groups or alone, and then arranging the furniture in the appropriate ratios. Short surveys, presented in the moment, on a single iPad page, can determine what draws patrons to library events, and why they (sometimes) leave early. Reference services can be assessed through post-interview observations of student search replication skills. These small research projects can lead to incremental improvements of service, even in lean budget times.
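
As a rough sketch of what those “simple methods” can look like in practice (the headcounts and seat total below are invented), even a handful of observation sweeps can be turned into a seating ratio:

```python
# A minimal sketch of turning headcount observations into a furniture-planning
# ratio. The observation data and seat count are hypothetical.

# Each sweep records how many patrons were working alone vs. in groups.
sweeps = [
    {"solo": 12, "group": 20},
    {"solo": 9,  "group": 25},
    {"solo": 15, "group": 18},
]

solo_total = sum(s["solo"] for s in sweeps)
group_total = sum(s["group"] for s in sweeps)
total = solo_total + group_total

# Share of patrons observed in each mode of study.
solo_share = solo_total / total
group_share = group_total / total

# Apply those shares to the seats available on a floor, e.g. 120 seats.
SEATS = 120
print(f"Individual seating: ~{round(solo_share * SEATS)} seats")
print(f"Group seating:      ~{round(group_share * SEATS)} seats")
```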

Lyn Paleo also had a few tips for data collection and management. Avoid convenience samples, in which you only gather information from the most conveniently accessible patrons; that will skew your results. A small representative sample is more effective for research than a large convenience sample. When using Excel to track data, remember that every record requires its own row, and you’re better off putting all of the data in one spreadsheet with multiple tabs than in an endless series of files. You should also include a tab titled “About this data” explaining the contents of the spreadsheet in case it is inherited by future staff.
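
Paleo didn’t prescribe a particular tool, but her spreadsheet advice is easy to follow in practice. Here is a minimal, hypothetical sketch using pandas (the column names and records are invented for illustration) that keeps one record per row and writes everything, including an “About this data” tab, into a single workbook:

```python
# A sketch of Paleo's spreadsheet advice using pandas; requires openpyxl
# for .xlsx output. Column names and records are hypothetical.
import pandas as pd

# One record per row: each survey response gets its own row.
responses = pd.DataFrame(
    {
        "respondent_id": [1, 2, 3],
        "event": ["Author Talk", "Author Talk", "Finals Study Night"],
        "reason_attended": ["assignment", "interest", "free coffee"],
        "left_early": [False, True, False],
    }
)

# Documentation tab so future staff can interpret the data.
about = pd.DataFrame(
    {
        "field": ["respondent_id", "event", "reason_attended", "left_early"],
        "description": [
            "Anonymous numeric ID assigned at collection time",
            "Name of the library event surveyed",
            "Patron's stated reason for attending",
            "True if the patron left before the event ended",
        ],
    }
)

# Everything lives in one workbook, on separate tabs, rather than in many files.
with pd.ExcelWriter("event_surveys.xlsx") as writer:
    responses.to_excel(writer, sheet_name="Responses", index=False)
    about.to_excel(writer, sheet_name="About this data", index=False)
```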

Above all, Paleo insisted that when you are surveying a population, you should always announce to the people you are surveying what the study is and what its intended use will be. If they understand a survey’s importance, the answers will be more comprehensive and informative.

“Practitioner research should be messy.” – April Cunningham, Palomar College.

In the afternoon, I attended a breakout session led by Stephanie Rosenblatt of Cerritos College and April Cunningham of Palomar College. They focused on action research, an evolving form of participatory, solution-oriented research that is practitioner-led. In action research, the subject material is informed by real-world concerns (such as the librarian’s professional observations), rather than being dictated by literature review. It moves in a cycle of planning, action, reflection, and sharing, and involves a group of critical participants who help analyze data, discuss related material, and provide feedback to the lead researcher. Many of the details of their presentation are available online, and are worth exploring.

I actually had the opportunity to be a part of a Participatory Action Research group on the campus where I work. The lead researcher brought together participants from across many campus departments, including both staff and faculty, and we discussed whiteness and white privilege in higher education, and the ways in which it can be deconstructed. Taking part was one of the most informative experiences I’ve had as a professional, and I derived many lessons I can apply to my work to make education more inclusive and meaningful.

Rosenblatt and Cunningham encouraged the audience to think of something — anything — that bothers them in their professional experience, any aspect of library work. It got me thinking about the challenge of getting first-year undergraduates to focus and participate in class. I don’t know any instructional librarians who haven’t dealt at some point with uninterested, disconnected students.

Why not work with the students themselves, away from the classroom, in an action research group? Why not ask them what would make a library workshop compelling to them — in a safe environment that would encourage them to talk? If we could pull together a representative sample of undergraduate students, action research could generate some solutions to a problem that is a thorn in the side of instructional librarians everywhere. And by asking them in a non-judgmental forum, we might actually get some good answers.

Rosenblatt and Cunningham also demonstrated some usable data analysis tools, from the simple and free, like Google Forms, to more specialized products like Tableau Public and LIWC. Their website has more comprehensive information on each.

Closing

The closing keynote was delivered by Stanford’s David Fetterman, who discussed the work he does in empowerment evaluation. He also tipped the audience off to freemium infographics services like infogr.am and visual.ly for creating powerful assessment reports. Something to explore further!
