John Fritz presentation and Corporate Learning

by Adam Weisblatt -
Number of replies: 15
I am thinking about how John's presentation would translate to the corporate environment, where performance instead of grades is relevant. Would it still work to try to measure online activity against performance reviews? At my last company we gave a scorecard to division heads on their compliance training numbers, but that is very monolithic. Senior leadership is always looking for how the learning investment is paying off in terms of business goals. For that we need to be able to measure not only the student's activity and their improved performance, but also the relevant business results. Has anyone else worked on these issues?
In reply to Adam Weisblatt

Re: John Fritz presentation and Corporate Learning

by Paul Bond -
I don't have any answers, but I like the questions.
I've been looking at literature on training and have been trying to apply it to my teaching. I teach information literacy, which for my students is a set of skills to help them get through their courses and assignments. Performance, as you say, is relevant. To know that my students have really learned something, I would have to see that their behaviors had changed. That's very difficult to assess properly, in my opinion. I'd need to have a longer view than just one semester, and I'd need to see what the students were doing in other courses. An analysis of how students use the college's online library resources would be revealing, but the privacy issue could be a hurdle. I'd be interested to see how learning analytics could be applied.
In reply to Paul Bond

Re: John Fritz presentation and Corporate Learning

by Adam Weisblatt -
Information Literacy sounds very interesting. Does it include learning how to analyze and assess the value of information?

Changing behaviors seems like the gold standard for measuring the success of learning, but I've been wondering if it is more an ideal than a practical measure. As you said, it is hard to measure; behavior is so multi-determined. Performance, the ability to do something, seems like a more concrete outcome for learning: what can you do that you couldn't do before?

As for privacy, does that issue go away if you are using aggregate usage data? Library usage data must be very interesting.
In reply to Adam Weisblatt

Re: John Fritz presentation and Corporate Learning

by Paul Bond -
We discuss evaluating information sources for authority and purpose, among other things, which is something like assessing the value. Many of my students have heard that Wikipedia is something they shouldn't cite in research papers, but still appear perfectly willing to accept almost anything they find online as a trustworthy source.
In some cases, I see students making thoughtful decisions about the quality of information sources, but other students come up with things like Buzzle.com, so I find plenty of teachable moments. I suspect the underperformance is tied to a lack of interest or motivation.
Libraries track circulation and download statistics but, as far as I know, do not keep information on users' histories, due to privacy concerns. I think it would be wise to aggregate user data and find a way to keep population-level information without storing personally identifying elements. In my position I don't have access to any of the stats, unfortunately.
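Something like the following is what I have in mind: a minimal sketch (with made-up field names and data) that aggregates circulation events at ingest, so no per-user history is ever stored:

```python
# Minimal sketch: aggregate circulation events by cohort and resource
# type without retaining patron identities. Field names and data are
# made up for illustration.
from collections import Counter

# Raw events as they might arrive from the circulation system.
events = [
    {"patron_id": "u123", "cohort": "first-year", "resource": "ebook"},
    {"patron_id": "u456", "cohort": "first-year", "resource": "journal"},
    {"patron_id": "u123", "cohort": "first-year", "resource": "ebook"},
    {"patron_id": "u789", "cohort": "graduate", "resource": "journal"},
]

# Only population-level counts are kept; patron_id is dropped here,
# so no per-user history survives ingest.
usage = Counter((e["cohort"], e["resource"]) for e in events)

for (cohort, resource), count in sorted(usage.items()):
    print(f"{cohort:12} {resource:8} {count}")
```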
In reply to Paul Bond

Re: John Fritz presentation and Corporate Learning

by Apostolos Koutropoulos -
You bring up an interesting point. I work at my university's library (and I've worn a few hats while I've worked here). I know that we have analytics/metrics on how many books are taken out, how many online journals have been accessed, how often, and so on.

You're right, no user data stays recorded (mainly, I think, due to the PATRIOT Act in the US). But in some of my graduate work I have often talked about user recommendation systems that could be great for library users: if people could opt in to have their data tracked (just as Google Latitude is opt-in), we might see more usage of our resources.
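To make that concrete, here is a toy sketch of a "patrons who borrowed this also borrowed" recommender built only from the checkout histories of patrons who opted in. All identifiers and data are invented:

```python
# Toy sketch: item co-occurrence recommendations from opted-in patrons'
# checkout histories only. All identifiers and data are invented.
from collections import defaultdict
from itertools import combinations

opted_in_checkouts = {  # patron -> set of items borrowed
    "patron_a": {"stats_intro", "r_cookbook", "learning_analytics"},
    "patron_b": {"stats_intro", "learning_analytics"},
    "patron_c": {"r_cookbook", "data_viz"},
}

# Count how often each pair of items appears in one patron's history.
co_counts = defaultdict(int)
for items in opted_in_checkouts.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Items most often borrowed alongside `item` by opted-in patrons."""
    scores = defaultdict(int)
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("stats_intro"))  # -> ['learning_analytics', 'r_cookbook']
```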

As far as I know, these metrics tend to be used for managerial purposes and funding (both receiving and allocating it). I don't know if any of this info goes back to colleges or specific departments to show how their faculty and students use the resources (or don't use them, and thus have an opportunity to showcase some resources).
In reply to Apostolos Koutropoulos

Library Recommendation System

by John Fritz -
Apostolos,

A few years ago, I taught a class in Web Content Development and had my students flesh out a paper prototype for a similar "opt in" recommendation system for library holdings. The students loved the idea, but the library balked on grounds of privacy.

I thought the library really blew it and missed an opportunity to use its own data to become more relevant in the academic and intellectual lives of students, especially at a time when libraries themselves were going through a transformation from book repository to digital information hub.

I also think there's a difference between privacy and confidentiality. For example, the U.S. Family Educational Rights and Privacy Act (FERPA) requires educational institutions to keep student identities confidential, but in doing so it frees administrators to innovate with their institutional data when there is an "educational interest" that can benefit current and future students. Too many schools view FERPA as a restraint and miss the opportunity that confidentiality affords.

Here's the new confidentiality statement we've added to the homepage of our Check My Activity (CMA) tool for students:

"In compliance with the Family Educational Rights and Privacy Act (FERPA), your use of this site may be monitored to improve its educational effectiveness for you and future students. However, all UMBC officials are obliged to keep your identity confidential. For more information, please read the Notification of Rights under FERPA."

Anyway, I like the idea of opt-in user acceptance of confidentiality in lieu of privacy, for the wider benefit of insight that can only be gained from tracking people's behavior. Similar to putting the onus on students to check their own activity in an LMS (and deciding for themselves what it does or doesn't mean), library patrons could opt in to a system that contextualizes their own browsing and searching behaviors with those of others who have opted in to do the same. No harm, no foul, I say.
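For what it's worth, here is a toy sketch of that kind of self-service comparison. To be clear, this is not CMA code; the activity measure and the numbers are invented:

```python
# Toy sketch: show a student their own LMS activity next to the median
# of opted-in peers in the same course. Invented numbers, not CMA code.
from statistics import median

peer_logins = [12, 30, 18, 25, 40, 22, 15]  # weekly logins, opted-in peers
my_logins = 14

course_median = median(peer_logins)
print(f"Your logins this week: {my_logins}")
print(f"Course median (opted-in peers): {course_median}")
if my_logins < course_median:
    # The tool only surfaces the comparison; the student decides
    # what it does or doesn't mean.
    print("You are below the course median.")
```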

Best,

John
In reply to John Fritz

Re: Library Recommendation System

by Apostolos Koutropoulos -
Libraries tend to hide behind the PATRIOT Act (the federal government being able to come and take all records of a user's library activity). I haven't actually seen anyone invoke FERPA, at least for library-related issues.

I do agree that libraries missed the boat (though perhaps it's not too late!). A number of years ago I developed a proposal for a new type of library website (for one of my IT courses) that I never really presented or published anywhere. I was reading some articles yesterday and it seems the concept is still not really out there, which has given me the idea that perhaps I need to write more about it :-)
In reply to Adam Weisblatt

Re: John Fritz presentation and Corporate Learning

by Tanya Elias -
Hi Adam,

I work for Convergys Customer Management and we are very interested in the use of learning analytics - measuring both the quality of our training and its impact on performance.

In fact, we have just come through the pilot phase of a contract that was partly based on achieving improved performance after the training.

I think this two-pronged approach to learning analytics is important in both the business world and the academic world. Measuring whether students are benefiting from what they learn in a classroom (live or virtual) seems like it should be an important component of what we do in both training and education.
In reply to Adam Weisblatt

Re: John Fritz presentation and Corporate Learning

by Lee Kraus -
I didn't get to listen to the entire presentation, but it got me thinking about what we might be able to do at my company, and what might and might not be possible. One area I focused on was that we could start collecting data by role. It might be obvious to most, but it struck me that there could be real value in looking at trends at the role level.

In reply to Lee Kraus

Re: John Fritz presentation and Corporate Learning

by Adam Weisblatt -
I think role would be a great identifier for making meaning out of the data. The problem is that some companies have poorly defined or inconsistent role descriptions. Would competency work?
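Either grouping could be tried against the same records. A minimal sketch (the field names, roles, and scores are all hypothetical) that rolls the same completion scores up by role and then by competency:

```python
# Sketch: the same training records rolled up by role or by competency.
# Field names and data are hypothetical.
from collections import defaultdict

records = [
    {"role": "agent",      "competency": "de-escalation", "score": 82},
    {"role": "agent",      "competency": "product",       "score": 74},
    {"role": "supervisor", "competency": "coaching",      "score": 90},
    {"role": "agent",      "competency": "de-escalation", "score": 88},
]

def mean_score_by(key):
    """Average assessment score grouped by the chosen identifier."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["score"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(mean_score_by("role"))        # trends at the role level
print(mean_score_by("competency"))  # or at the competency level
```

If role descriptions are inconsistent, switching the grouping key to competency costs nothing, which may be an argument for capturing both.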
In reply to Adam Weisblatt

Re: John Fritz presentation and Corporate Learning

by Inge Ignatia de Waard -
hi Adam and all,
Measuring the outputs of any training or learning seems to be difficult in any sector. There is a nice SlideShare on the return on investment of training, http://www.slideshare.net/nusantara99/measuring-roi-of-training, which uses measurable indicators or learning analytics and control groups. True, this takes more time at the beginning, but in the end a training manager can have a very clear picture of how the training enhanced performance, and how that resulted in cost savings.
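For reference, the ROI arithmetic behind that kind of analysis (in the spirit of the Phillips methodology the presentation draws on) is simple once a control group has isolated the training's benefit. A minimal sketch, with made-up figures:

```python
# Phillips-style training ROI, with the benefit isolated via a control
# group. All figures are made up for illustration.

trained_output = 125_000   # monetary value of output, trained group
control_output = 110_000   # same measure, untrained control group
program_cost = 6_000       # fully loaded cost of the training

net_benefit = (trained_output - control_output) - program_cost
roi_percent = net_benefit / program_cost * 100
print(f"ROI: {roi_percent:.0f}%")  # -> ROI: 150%
```

The hard part is isolating the training effect; subtracting the control group's result is the simplest way to attempt that.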
In reply to Inge Ignatia de Waard

Re: John Fritz presentation and Corporate Learning

by Gemma Orta Martinez -
Hi Inge and all,
Many thanks for the link to this presentation; I found it very interesting! Especially the last section, on barriers to training transfer and the transfer partnership and matrix.
In my organisation we always use what the presentation labels as levels 1 and 2, and we are starting on level 3, behaviour application. For this last level, most of the ideas revolve around what the trainee thinks or feels has changed since the training, but we have not directly used control groups.
In addition to the issues of attribution mentioned by others in this conversation, what I personally find challenging as a trainer wanting to measure outputs is that colleagues may have to be convinced that investing time beforehand to prepare the measures is worthwhile!
gemma.

In reply to Inge Ignatia de Waard

Re: John Fritz presentation and Corporate Learning

by Steven De Pauw -
Very interesting presentation! Thanks!

This presentation is based on this reference: Jack J. Phillips and Lynn Schmidt, Implementing Training Scorecards, ASTD Press.


In reply to Adam Weisblatt

Alternatives to Kirkpatrick's methods

by Dianne Rees -
Jane Bozarth wrote a nice summary of some of the issues raised by the Kirkpatrick method and describes some alternative approaches to measuring the effectiveness of training. (I don't agree that Kirkpatrick's approach is necessarily linear, though it has been applied that way, but the alternative approaches are interesting.)
In reply to Dianne Rees

Re: Alternatives to Kirkpatrick's methods

by Mark W -
I share your questions about Kirkpatrick's method. The four "levels" seem to me disjointed and not really connected; they measure very different things:
  1. Type (Level) 1: Learner satisfaction;

  2. Type (Level) 2: Learner demonstration of understanding;

  3. Type (Level) 3: Learner demonstration of skills or behaviors on the job; and

  4. Type (Level) 4: Impact of those new behaviors or skills on the organization.

To me these are four very different skills.
In reply to Dianne Rees

Re: Alternatives to Kirkpatrick's methods

by Bert De Coutere -
The Kirkpatrick (and Phillips) evaluation models are both heavily used and heavily criticized (http://donaldclarkplanb.blogspot.com/2006/09/donald-talks-bollocks.html) in the corporate world.

I'm in the middle of shifting my ideas on the evaluation of learning interventions, so I have some loose ends left to think through (any contribution welcomed). But so far I want to 'throw this into the group':

- While a nice, simple, and appealing idea, Kirkpatrick does not work out in (most) corporate realities for one simple reason: the learning folks do not own the metrics for the higher levels, if they are even aware of those metrics. Therefore, training departments stick with what matters to them and what they can get their hands on: operational statistics on learning such as satisfaction with the trainer and the lunch, a pre/post-test comparison of knowledge retention, and training spend.

- I really see only two levels of measurement: on one hand, the 'internal' measurement of the learning department (did it do its job?); on the other, the 'downstream' contribution of that intervention to performance and impact (did it matter?).

- As I like things (overly) simple, I'd stick to two statistics for the first category, the measurement of the learning black box: attractiveness (I like it) and self-efficacy (I'm confident I can do it). Attractiveness is measured by the question 'would you recommend this?'. Self-efficacy is what matters most and can be measured by two questions.

- As for the higher-up measurement of contribution to performance and impact, I need to do some further thinking. Analytics might play a role here, in tracking KPIs for people who did or did not take a learning intervention; a rough sketch of that comparison follows below. If I may dream: I dream of a corporate library/taxonomy not only of competencies, but of key performance indicators per job role. Every learning intervention is coupled with one or more of these, and the measurement of it is automated...
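A rough sketch of that comparison, with an invented KPI and invented data:

```python
# Rough sketch: compare a job-role KPI for people who did vs. did not
# take a learning intervention. KPI and data are invented.
from statistics import mean

kpi = {  # person -> (took_intervention, calls_resolved_per_day)
    "p1": (True, 31), "p2": (True, 28), "p3": (True, 35),
    "p4": (False, 24), "p5": (False, 27), "p6": (False, 22),
}

trained = [v for took, v in kpi.values() if took]
untrained = [v for took, v in kpi.values() if not took]

print(f"Trained mean KPI:   {mean(trained):.1f}")
print(f"Untrained mean KPI: {mean(untrained):.1f}")
print(f"Difference:         {mean(trained) - mean(untrained):+.1f}")
```

If every intervention were tagged with its KPIs in such a taxonomy, this comparison could indeed be automated.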

(I probably confused more than clarified, sorry if that is the case.)