I've been looking at literature on training and have been trying to apply it to my teaching. I teach information literacy, which for my students is a set of skills to help them get through their courses and assignments. Performance, as you say, is relevant. To know that my students have really learned something, I would have to see that their behaviors had changed. That's very difficult to assess properly, in my opinion. I'd need to have a longer view than just one semester, and I'd need to see what the students were doing in other courses. An analysis of how students use the college's online library resources would be revealing, but the privacy issue could be a hurdle. I'd be interested to see how learning analytics could be applied.
Changing behaviors seems like the gold standard for measuring the success of learning, but I've been wondering whether it is more of an ideal than a practical measure. As you said, it is hard to measure, and behavior is so multiply determined. Performance, the ability to do something, seems like a more concrete outcome for learning: what can you do that you couldn't do before?
As for privacy, does that issue go away if you are using aggregate usage data? Library usage data must be very interesting.
In some cases, I see students making thoughtful decisions about the quality of information sources, but other students come up with things like Buzzle.com, so I find plenty of teachable moments. I suspect the underperformance is tied to a lack of interest or motivation.
Libraries track circulation and download statistics but, as far as I know, do not keep users' histories, due to privacy concerns. I think it would be wise to aggregate user data and find a way to keep population-level information without storing personally identifying elements. In my position I don't have access to any of the stats, unfortunately.
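To sketch what I mean by keeping population information without personal identifiers (everything here is hypothetical: the event records, the field names, and the threshold): drop the user ID entirely, keep only population-level counts, and suppress any cell too small to be anonymous.

```python
from collections import Counter

# Hypothetical raw usage events: (user_id, department, resource).
events = [
    ("u1001", "Biology", "JSTOR"),
    ("u1002", "Biology", "JSTOR"),
    ("u1001", "Biology", "EBSCO"),
    ("u2001", "History", "JSTOR"),
]

def aggregate_usage(events, k=2):
    """Keep population-level counts only: drop the user identifier
    and suppress any (department, resource) cell with fewer than k
    events, so no individual can be singled out."""
    counts = Counter((dept, res) for _uid, dept, res in events)
    return {cell: n for cell, n in counts.items() if n >= k}

print(aggregate_usage(events))  # {('Biology', 'JSTOR'): 2}
```

The threshold `k` is the crude part: a real system would want a proper k-anonymity or differential-privacy treatment, but even this much would let a library report usage by department without ever storing who read what.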
You're right, no user data stays recorded (mainly, I think, due to the PATRIOT Act in the US). But in my graduate work I have often discussed user recommendation systems that could be great for library users: if people could opt in to have their data tracked (just as Google Latitude is opt-in), we might see more usage of our resources.
As far as I know, these metrics tend to be used for managerial purposes and for funding (both receiving and allocating it). I don't know whether any of this information goes back to colleges or specific departments to show how their faculty and students use the resources (or don't use them, and thus have an opportunity to showcase some resource).
A few years ago, I taught a class in Web Content Development and had my students flesh out a paper prototype for a similar "opt in" recommendation system for library holdings. The students loved the idea, but the library balked on grounds of privacy.
I thought the library really blew it and missed an opportunity to use its own data to become more relevant in the academic and intellectual lives of students, especially at a time when libraries themselves were transforming from book repositories into digital information hubs.
I also think there's a difference between privacy and confidentiality. For example, the U.S. Family Educational Rights and Privacy Act (FERPA) requires educational institutions to keep student identities confidential, but FERPA does so to liberate administrators to innovate with their institutional data if there is an "educational interest" that can benefit current and future students. Too many schools view FERPA as a restraint, but miss the opportunity that confidentiality affords.
Here's the new confidentiality statement we've added to the homepage of our Check My Activity (CMA) tool for students:
"In compliance with the Family Educational Rights and Privacy Act (FERPA), your use of this site may be monitored to improve its educational effectiveness for you and future students. However, all UMBC officials are obliged to keep your identity confidential. For more information, please read the Notification of Rights under FERPA."
Anyway, I like the idea of opt-in user acceptance of confidentiality in lieu of privacy, for the wider benefit of insight that can only be gained from tracking people's behavior. Similar to putting the onus on students to check their own activity in an LMS -- and deciding for themselves what it does or doesn't mean -- library patrons could opt in to a system that contextualizes their own browsing and searching behaviors with others who have opted in to do the same. No harm, no foul, I say.
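As a thought experiment, an opt-in system like that could start as small as item co-occurrence computed over only the patrons who opted in. Everything below is made up for illustration (the patrons, the holdings, and the scoring rule):

```python
from collections import Counter

# Hypothetical opt-in borrowing histories: patron -> set of item ids.
# Only opted-in patrons appear here at all.
histories = {
    "patron_a": {"stats101", "r_cookbook", "viz_handbook"},
    "patron_b": {"stats101", "r_cookbook"},
    "patron_c": {"stats101", "medieval_history"},
}

def recommend(patron, histories, top_n=3):
    """Suggest items this patron hasn't borrowed, scored by how much
    their history overlaps with each other opted-in patron's."""
    mine = histories[patron]
    scores = Counter()
    for other, items in histories.items():
        if other == patron:
            continue
        overlap = len(mine & items)
        if overlap == 0:
            continue
        for item in items - mine:
            scores[item] += overlap  # weight by shared-interest size
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("patron_b", histories))
# ['viz_handbook', 'medieval_history']
```

The point is that the "contextualize me against others like me" feature needs nothing beyond the opted-in pool itself, so non-participants are never touched.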
I do agree that libraries missed the boat! (perhaps though it's not too late!) A number of years ago I had developed a proposal for a new type of library website (for one of my IT courses) that I never really presented or published anywhere. I was reading some articles yesterday and it seems like the concept is still not really out there. It's given me the idea that perhaps I need to write more about it :-)
I work for Convergys Customer Management and we are very interested in the use of learning analytics - measuring both the quality of our training and its impact on performance.
In fact, we have just come through the pilot phase of a contract that was partly based on achieving improved performance after the training.
I think this two-pronged approach to learning analytics is important in both the business world and the academic world. Measuring whether students are benefiting from what they learn in a classroom (live or virtual) seems like an important component of what we do in both training and education.
Measuring the outputs of any training or learning seems to be difficult in any sector. There is a nice SlideShare presentation on measuring the return on investment of training via measurable indicators, learning analytics, and control groups: http://www.slideshare.net/nusantara99/measuring-roi-of-training. True, this takes more time at the beginning, but in the end a training manager can have a very clear picture of how training enhanced performance, and how that resulted in cost savings.
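To make the control-group idea concrete, here is the basic arithmetic with entirely made-up numbers (none of these figures come from the presentation): value only the uplift over the control group, then set that benefit against the cost of the programme.

```python
# Hypothetical figures: a trained group vs. an untrained control group.
trained_performance = 118.0   # e.g. average units handled per week after training
control_performance = 100.0   # same KPI for the untrained control group
value_per_unit = 12.0         # assumed dollar value of one extra unit
participants = 50
training_cost = 8000.0        # total cost of the programme

# Benefit attributable to training = uplift over control, in dollars.
uplift = trained_performance - control_performance
benefit = uplift * value_per_unit * participants

roi_percent = (benefit - training_cost) / training_cost * 100
print(f"ROI: {roi_percent:.0f}%")  # ROI: 35%
```

The control group is what justifies the word "attributable": without it, the uplift could just as easily reflect seasonality or selection effects.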
Many thanks for the link to this presentation, I found it very interesting! Especially the last section on barriers to training transfer and the transfer partnership and matrix.
In my organisation we always use what the presentation labels as levels 1 and 2, and we are starting on level 3, behaviour application. For this last level, most of the ideas are around what trainees think or feel has changed since the training, but we have not directly used control groups.
In addition to the issues of attribution mentioned by others in this conversation, what I personally find challenging as a trainer wanting to measure outputs is that colleagues may have to be convinced that investing time beforehand to prepare the measures is worthwhile!
These map onto the four Kirkpatrick-style levels of evaluation:
Type (Level) 1: Learner satisfaction;
Type (Level) 2: Learner demonstration of understanding;
Type (Level) 3: Learner demonstration of skills or behaviors on the job; and
Type (Level) 4: Impact of those new behaviors or skills on business results.
I'm kind of in the middle of shifting my ideas on the evaluation of learning interventions, so I have some loose ends left to think out (any contribution welcomed). But so far I want to 'throw this into the group':
- While a nice, simple and appealing idea, Kirkpatrick does not work out in (most) corporate realities for one simple reason: the learning folks do not own the metrics for the higher levels, if they are even aware of those metrics. Therefore, training departments stick with what matters for them and what they can get their hands on: operational statistics on learning such as satisfaction with the trainer and lunch, a pre/post test comparison of knowledge retention, and training spend.
- I really see only two levels of measurement: on one hand, the 'internal' measurement of the learning department (did it do its job?); on the other, the 'downstream' measurement of that intervention's contribution to performance and impact (did it matter?).
- As I like things (overly) simple, I'd stick to two statistics for the first category, the measurement of the learning black box: attractiveness ('I like it') and self-efficacy ('I'm confident I can do it'). Attractiveness is measured by the question 'would you recommend this?'. Self-efficacy is what matters most and can be measured with two questions.
- As for the higher-up measurement of contribution to performance and impact, I need to think further. Analytics might play a role here, tracking KPIs for people who did or did not take a learning intervention. If I may dream: I dream of a corporate library/taxonomy not only of competencies but of key performance indicators per job role. Every learning intervention would be coupled with one or more of these, and the measurement would be automated...
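If that dream were real, the analytics part could start as simply as comparing a KPI between people who did and did not take the intervention. All the records below are invented, and a real comparison would of course need matched groups rather than raw means:

```python
from statistics import mean

# Hypothetical KPI records: (employee, took_training, kpi_value).
records = [
    ("e1", True, 92), ("e2", True, 88), ("e3", True, 95),
    ("e4", False, 81), ("e5", False, 85), ("e6", False, 79),
]

def kpi_gap(records):
    """Mean KPI of employees who took the intervention minus the
    mean KPI of those who did not."""
    trained = [v for _e, took, v in records if took]
    untrained = [v for _e, took, v in records if not took]
    return mean(trained) - mean(untrained)

print(kpi_gap(records))  # 10.0
```

Once every intervention is coupled to KPIs in a taxonomy like the one described, a query of this shape could run automatically per intervention per job role, which is exactly the automation being dreamed of.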
(I probably confused more than clarified, sorry if that is the case.)