LAK11 Tools Wiki
We've talked a lot about different analytics tools. Compiling them into a single list may help.
Add any tools or other information about them to the table below. (It's looking like this table could be sorted into a few different tables based on different types of analytics tools...)
||What does it do?||How could it be used to improve learning?||Link||
||Moodle Gismo, a graphical monitoring module for Moodle 1.9 that first extracts data from all courses before representing it|| ||http://gismo.sourceforge.net/||
||Similar to SNAPP; also identifies the locations of posters|| || ||
||Userfly records user visits and lets you play them back to see every mouse movement, click, and form interaction|| ||http://userfly.com/||
||With a free Tynt account, you get detailed information and full statistics about every copy activity on your site, and a link back to your content is automatically added with every paste|| ||http://www.tynt.com/||
||Piwik, an open-source alternative to Google Analytics; provides detailed reports on your website visitors, their search engines, keywords, the languages they speak, your most popular pages, etc.|| ||http://piwik.org/||
||StatsMix, a tool to analyze the impact of resources on social networks|| ||http://www.statsmix.com/||
||Statistical package capable of manipulating huge datasets; steep learning curve, but very powerful once learned||It is bare-bones statistics, but it has modules with functions for social network analysis. It could be a great platform to create and share a library of functions made for learning analytics|| ||
||Cytoscape, an open-source platform for complex-network analysis and visualization; originally developed for biology, it can manage very large networks with ease||If you need to analyze a large network of interactions between students|| ||
||Visualizing relationships in Twitter|| || ||
|| ||For anyone wanting to use JS|| ||
||Visualizing tags in Delicious|| || ||
Does anyone know of a tool that can extract tags from IM or Twitter text by analyzing the text for meaning?
There is no "discussion tab" that I can see.
Long way: apply the Porter stemmer algorithm to bring each word to its morphological root: http://tartarus.org/~martin/PorterStemmer/. Get rid of all the function words: http://www.flesl.net/Vocabulary/Single-word_Lists/function_word_list.php. Then sort the words by frequency count and keep the most frequent ones.
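The pipeline above (stem, drop function words, count, rank) can be sketched in a few lines of Python. This is only a sketch: `crude_stem` is a simplified suffix-stripper standing in for the real staged Porter algorithm, and `FUNCTION_WORDS` is a tiny hand-picked subset of the full function-word list linked above.

```python
import re
from collections import Counter

# Tiny stand-in for the linked function-word list (assumption: the real list is much longer)
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "it",
                  "that", "for", "on", "with", "as", "by", "at", "this"}

def crude_stem(word):
    # Simplified suffix stripping; the real Porter stemmer applies staged rewrite rules
    for suffix in ("ing", "edly", "ed", "es", "s", "ly"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def top_terms(text, n=5):
    # Lowercase, tokenize, drop function words, stem, then rank by frequency
    words = re.findall(r"[a-z]+", text.lower())
    stems = [crude_stem(w) for w in words if w not in FUNCTION_WORDS]
    return Counter(stems).most_common(n)
```

For example, `top_terms("learning analytics tools help learners analyze learning data", 1)` groups "learning" twice under the stem "learn" and ranks it first.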
This type of analysis is really just word frequency, excluding the words known not to carry meaning on their own.
A real meaning analysis is somewhat more complex. One such technique is called Latent Semantic Indexing (LSI).
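At its core, LSI takes a term-document count matrix and keeps only the strongest singular components, so documents that share related vocabulary end up close together in a reduced "topic" space. A minimal NumPy sketch, using a toy matrix with made-up counts purely for illustration:

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
# Assumption: these counts are illustrative, not real data.
terms = ["learn", "student", "network", "graph"]
A = np.array([
    [2.0, 1.0, 0.0],   # "learn"
    [1.0, 2.0, 0.0],   # "student"
    [0.0, 0.0, 2.0],   # "network"
    [0.0, 1.0, 1.0],   # "graph"
])

# LSI: truncated SVD keeps the k strongest latent components ("topics")
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T  # one row per document, in topic space

def similarity(i, j):
    # Cosine similarity between two documents in the reduced space
    a, b = doc_vectors[i], doc_vectors[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Here documents 0 and 1 share the "learn"/"student" vocabulary, so `similarity(0, 1)` comes out much higher than `similarity(0, 2)`, even though no single exact keyword match is required.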