Consider a quick sampling of the data a typical university collects:
1. On enrolment: previous schools attended, where the learner lives, emergency contact info, health info, entrance exam scores, etc.
2. Once enrolled: courses registered, attendance, grades, use of library services, organizations the learner is involved with, dietary habits if they use the university "meal card", books purchased from the bookstore, and so on
3. In courses: attendance, grades, assignments, perhaps clicker responses; if online: which articles the learner has read, how much time she spends in the LMS, which students she has interacted with, etc.
The value of analytics lies in traversing data silos (though that raises ethical issues). Most universities have a department focused on institutional statistics, but it generally has a marketing focus. What has to happen before organizations begin to consider using student-generated data for improving learning? Learning design? Or, in corporate settings, for understanding how employees help and support each other?
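As a concrete illustration of the silo problem, here is a minimal sketch of what "traversing" two data silos could look like when they share a common student ID. All IDs, programs, and hour counts are invented for illustration:

```python
# Hypothetical sketch: linking two data "silos" on a shared student ID.
# All identifiers and values here are invented for illustration.

# Silo 1: registrar's enrollment records (student ID -> program)
enrollment = {"s001": "Electronics", "s002": "Informatics", "s003": "Electronics"}

# Silo 2: LMS activity log (student ID -> hours spent online)
lms_hours = {"s001": 12.5, "s002": 3.0, "s004": 7.25}

# Students visible in both silos can be analyzed together...
linked = {sid: (prog, lms_hours[sid])
          for sid, prog in enrollment.items() if sid in lms_hours}

# ...while the rest are stranded in one system or the other.
stranded = (enrollment.keys() | lms_hours.keys()) - linked.keys()

print(linked)    # {'s001': ('Electronics', 12.5), 's002': ('Informatics', 3.0)}
print(stranded)  # the IDs stuck in a single silo
```

The toy version is trivial precisely because both systems agree on a common identifier; without single sign-on or a shared ID, even this two-table join becomes a manual reconciliation exercise.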
What I do know is that information is stored in silos that aren't mutually accessible for any large-scale analysis. Our campus doesn't have single sign-on (which causes headaches), and sometimes the same (or similar) data is collected by many entities.
There are many places where this kind of data would be useful (in my immediate realm, library collection development), but the information just isn't available :-)
Academia still seems to be run as a conglomeration of many different silos, as opposed to one monolithic organization.
I suspect institutions could overtly resist shifts that empower individuals, or could offer to manage contemporary data analytics on behalf of the cohort themselves.
As individuals' multidisciplinary data skills and understanding of data's value improve, administrative (not teaching) roles in institutions may need repositioning. I'm not sure whether both can remain compatible, and budget efficiency would appeal to funding sources, be they government, private, or a mix.
If students became their own administration, with mentored learning design informed by their own big data, directly subsidised by government, and in ethical control of semantic access to their own data silos, which institutional roles could become obsolete, and with what budget savings?
Maybe that scenario is too futuristic, but if the efficacy and budget savings were found to be significant, then administrators in institutions would be under pressure not to allow that scenario to evolve.
For example, in Victorian schools teachers generally use a set format for weekly lesson planning, decided upon at the school level, and most often this is done in Word. If that data were web-based, imagine the opportunities when coupled with attendance and grade data.
P.S. I think there are technology solutions to some of the ethical issues, e.g. translucent databases.
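For readers unfamiliar with the idea: a translucent database stores a one-way hash of sensitive identifiers rather than the identifiers themselves, so analysts can still group and count records per student without ever seeing who the records belong to. A minimal sketch, with an invented salt and invented IDs:

```python
# Minimal sketch of the translucent-database idea: keep a salted one-way
# hash of the student ID so per-student analysis still works, but the
# plaintext identity never enters the analytics store.
# The salt, IDs, courses, and grades are invented for illustration.
import hashlib

SALT = b"institution-secret-salt"  # held by the institution, not the analysts

def pseudonym(student_id: str) -> str:
    """Stable, irreversible pseudonym for a student ID."""
    return hashlib.sha256(SALT + student_id.encode()).hexdigest()[:16]

grades = [("s001", "Math", 87), ("s001", "Physics", 91), ("s002", "Math", 74)]

# What the analytics store receives: pseudonyms instead of real IDs.
translucent = [(pseudonym(sid), course, grade) for sid, course, grade in grades]
print(translucent)
```

Because the hash is stable, the two records for "s001" still share one pseudonym, so longitudinal analysis survives; recovering the real ID would require the institution's salt.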
For colleges and universities, data needs to be cleaned so that it complies with FERPA guidelines. I think the next step would be to provide a data warehouse / portal that administrators, faculty, and staff could access as needed. Additionally, the IR (institutional research) department should be encouraged to analyze the data and conduct trend analysis.
The data could also be shared with statistics, math, and other departments as live projects that students could utilize and study. Not quite Open Data; more like internal open data.
Lastly, the tools need to become easier to use. Most individuals do not know how to use them to draw conclusions or develop data visualizations that make it easier to recognize patterns and correlations. (A Web 2.0 drag-and-drop version may be needed.)
I recall reading an Educause article on a survey they conducted of higher education institutions about shifts in priorities and the roles of key staff. It seemed to me they were aware of the need for better information. The article was from 2007 and titled "Current Issues Report, 2007."
One of the top ten issues was Administrative, ERP, and Information Systems. It seemed CIOs were concerned about getting the right information.
In short, I think management needs good information for decision making. Hopefully, evolving methods like learning analytics will serve that need.
I'm a Math teacher, and I coordinate a course for adults in a technical college (electronics, informatics).
I find the LAK11 course very interesting and useful for my work, because it lets me think about the data we use daily (on enrollment, once enrolled, in courses, offline and online) in order to improve our learning decisions for students.
I'm at the beginning of this experience, but I think we can find very useful applications.
Thanks and greetings from Italy.
Among the many challenges of this arrangement was the need to bring in the right data to show compliance and support analytics. Learning is done by people, and people data was stuck in the HR system. Until global demographic data standards were established, learning data was nearly impossible to track. Finally we brought in user data such as employee ID, name, manager, location, department, and role. Since compliance rules are based on role or location, it was important to get that right, but because that data was still not standardized (every department had a different definition of roles), it had to be entered into the LMS manually. The people collecting demographic data were driven by the need to get people paid, not trained.
The automated part of the LMS data stream (completion date, score, assigned date, deadline, registration date, and "no show" status) was used for various reports. There was a desire to track individual questions, but that required SCORM configuration that no one had the patience for.
Compliance reporting was the most important function: who was required to do what by when, and did they do it? This is harder than it seems, because the assignment rules were so complicated. We used the complexity built into the LMS to accommodate those rules, but then had a hard time extracting the data. And if the data about people that was relevant to their requirements wasn't available, it was impossible to report on their compliance without manual intervention.
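At its core, the compliance question reduces to a join between assignment rules and completion records. A toy sketch of that core, with invented names, courses, and dates (the real difficulty, as noted above, is that the assignment rules and people data are far messier than this):

```python
# Toy compliance check: who was required to do what by when, and did
# they complete it in time? All names, courses, and dates are invented.
from datetime import date

# Assignment rules: (employee, course, deadline)
assignments = [
    ("alice", "Safety 101", date(2011, 3, 1)),
    ("bob",   "Safety 101", date(2011, 3, 1)),
    ("alice", "Ethics",     date(2011, 4, 1)),
]

# LMS completion records: (employee, course) -> completion date
completions = {
    ("alice", "Safety 101"): date(2011, 2, 20),
    ("bob",   "Safety 101"): date(2011, 3, 15),  # finished, but late
}

report = []
for person, course, deadline in assignments:
    done = completions.get((person, course))
    status = "compliant" if done and done <= deadline else "non-compliant"
    report.append((person, course, status))

for row in report:
    print(row)
```

Here "bob" is non-compliant despite completing the course (he finished after the deadline), and "alice" is non-compliant for "Ethics" because no completion record exists; both failure modes require the people data and assignment rules to be correct before any report like this can be trusted.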
The dream was to have a single warehouse of data about training across the company, to assess the effectiveness of learning spend and remove redundancies. The problem is that you need standard naming conventions, but you would have to convince the people whose jobs were put at risk by your redundancy project to adopt those standards. That wasn't going to happen soon.
I know that academic environments are different from corporate environments but I'm curious about what people see as the similarities.
We keep promising to capture all this potential but haven't really managed much by way of effective implementation. I wonder if that's because it is hard to get the right skill sets and/or temperaments working together. What is the alchemy for a successful implementation of analytics?