I believe "quizzing" early and often is key to student evaluation. Here is my dilemma.
I would like to develop online quizzes that students could use each week to evaluate the state of their knowledge. BUT... how do I know that the answers provided by the students are their OWN and not their friends'? Any thoughts?
Is it important that the answers are theirs? Isn't there value in getting them to talk things through?
Do they have to do them at home, or could you do them in the classroom? We've used the personal response system (hand-held voting devices) with students (university level) to vote on particular answers. I've used it a few different ways. Individually, if you don't record who's got which handset, the teacher sees a graph, so you get an overview of what they've understood as a class without embarrassing those that don't do so well. We've also used them in groups, with questions that stretch them, so they have to discuss the answer. (For groups, when we've not had the gadgets, we've just used coloured squares of card. The group discusses the answer & holds up the correct colour. Saves anyone individually getting embarrassed & you still get the overview of whether the class gets the general idea.)
If you're wanting them to do it at home (assuming they all have PC access), I guess anonymising scores would let the kids see how they're doing individually (if it reports their score), and let you see how the class overall is doing, without being able to tie a score to a kid - so less pressure to get the "right" answer.
How about some questions about a potential experiment - asking them which they think is the "best" way to approach a problem? Give them 5 options & get the class to vote on the best, then carry it out. Then you could get them to discuss why they picked that method & what might have been the outcome had they picked another one.
Lot of work for you, though, to try to think up all the options!
Getting them to do anything at home, and expect that they won't discuss it, is, in my opinion, a waste of time - and unrealistic in the real world. Most of us (scientists included!) discuss what we're doing with peers to get reassurance that we're doing it right.
How right you are! I guess when I gave Dominic my answer, I was assuming that he wanted something that would mark automatically, rather than require him to do the marking; as you say, as soon as you start to look at something that requires human marking, we can really start to stretch the students. Though, of course, ensuring that basic facts are known is important at times. We need both.
That's leaving out the difficulty of creating a good set of distractors when you do multiple choice, so that students really have to think, rather than easily rejecting some options because they are so obviously wrong.
Comparing and evaluating data - I think this is a great idea. It might not evaluate their own ideas, but if they discuss with their friends, it's even better. It involves critical thinking. I like this idea. Now, I have to think of a way to entice them to take the quiz... Many students, even good ones, try to do the least they can. That's a fact!
Hi Dominic - nice to "see" you too. I do both 'after class' and weekly online quizzes. Personally I find 'after class' the most effective (10 quick questions). There are not a lot of marks involved and students get half marks for late quizzes. The benefit is in the self-assessment. Cheating is not going to help them much, but as they are open book, many use them to review the text & lecture, and as exam preparation. I have had a lot of positive feedback, especially with respect to the after-class tests. It harkens back to that study on information retention, with better retention each time they see the material within close proximity of the last time. Thus, I provide the lecture ahead of time (some actually go over it and read the text), then give the lecture, then have the quiz. This gives a potential three bats at the same material within a short space of time and hopefully lifts retention in those that choose to use it that way.
I know of one distance learning institution whose policy it was to phone each student after the first assignment was returned. The tutor would talk to the student about their assignment, ask if there were any areas of confusion etc., but also engage the student in a conversation about their answer to one of the questions. The student would have had to actually do the work in order to participate in a discussion like that. In this way, they were able to flag students who may be misrepresenting their abilities.
There are a few faculty here at SFU who use LON-CAPA to administer quizzes. I have no first-hand experience with this system myself, but my understanding is that it is a web-based service for administering quizzes. I believe it is appropriate for math, physics, and chemistry problem-set types of questions where there is a single right answer. I think you can also set the quizzes up to randomize questions or numerical data so that each student gets a slightly different question and therefore needs to come up with a different answer. This doesn't prevent cheating entirely, but would prevent one student from sending the right answer to all his/her friends.
Perhaps there is someone else out there who has experience with LON-CAPA and can comment on this?
Questions CAN have more than one possible answer, however.
There are diverse ways randomization can be devised and implemented in LON-CAPA, from simply using random parameters in problem statements, to selecting statements from a collection of different possible statements, to randomization over completely different problems.
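I can't speak to how LON-CAPA implements this internally, but the basic per-student randomization idea is easy to picture. As a rough illustration only (the function and problem here are my own invention, not LON-CAPA code): seed a random generator from the student's ID plus the problem's ID, so each student deterministically gets their own numbers, and gets the same numbers every time they reload the problem.

```python
import hashlib
import random

def personalized_params(student_id: str, problem_id: str):
    """Derive a stable per-student seed, then draw problem parameters
    from it: the same student always sees the same numbers, while
    different students (almost certainly) see different ones."""
    digest = hashlib.sha256(f"{student_id}:{problem_id}".encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    # Hypothetical projectile problem: randomize speed and launch angle.
    v0 = rng.randint(10, 30)          # initial speed, m/s
    angle = rng.choice([30, 45, 60])  # launch angle, degrees
    return v0, angle

# Stable for one student across reloads:
assert personalized_params("alice", "proj-1") == personalized_params("alice", "proj-1")
# Typically different between students:
print(personalized_params("alice", "proj-1"))
print(personalized_params("bob", "proj-1"))
```

Because the parameters are derived rather than stored, the grader can regenerate each student's expected answer on demand without keeping per-student state.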
There are various easy-to-use templates to create problems which can be of a variety of types:
- Numerical problems in which the numerical response can be graded subject to such things as a reasonable numerical tolerance (absolute, relative or algorithmic), units, significant figures.
- Symbolic algebraic or mathematical expression responses.
- String responses.
- Simple radio button problems in which only 1 of N is correct.
- So-called option response problems in which N of M responses may be correct.
- Matching lists type problems, in which there is a one-to-one correspondence between list elements.
- Ranking problems, in which the student selects ordinals.
- Problems of the above types including choices from randomly labeled images, or scientific plots.
- Click-on-the-image problems, in which the coordinates of a mouse-click correspond to the students response to a question about the image.
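The tolerance-based grading of numerical responses mentioned in the first item above is worth a quick sketch. This is not LON-CAPA code, just a minimal illustration of the absolute and relative modes (names and defaults are my own):

```python
def grade_numeric(answer: float, correct: float,
                  tol: float = 0.01, mode: str = "relative") -> bool:
    """Accept a numeric response within an absolute or relative tolerance."""
    if mode == "absolute":
        return abs(answer - correct) <= tol
    if mode == "relative":
        return abs(answer - correct) <= tol * abs(correct)
    raise ValueError(f"unknown tolerance mode: {mode}")

print(grade_numeric(9.75, 9.81))                           # within 1% relative: True
print(grade_numeric(9.5, 9.81))                            # outside 1% relative: False
print(grade_numeric(9.5, 9.81, tol=0.5, mode="absolute"))  # within 0.5 absolute: True
```

A real system would layer unit handling and significant-figure checks on top of this, as described above.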
These problem types can easily be set up so that individual students receive a 'personalized' problem in which the N statements (or 'foils') to which they must respond are randomly selected from a larger pool of possible foils. These can be categorized and selected according to concept groups.
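Again purely as an illustration of that foil-selection idea (the data format and function below are invented for this sketch, not taken from LON-CAPA): draw a per-student sample from a larger pool of true/false foils, seeded so each student's selection is stable for them but differs between students.

```python
import random

# Hypothetical foil pool, grouped by concept.
FOILS = {
    "kinematics": [
        ("Velocity is the rate of change of position.", True),
        ("Acceleration always points in the direction of motion.", False),
        ("An object at rest has zero velocity.", True),
        ("Speed can be negative.", False),
    ],
    "forces": [
        ("Net force determines acceleration, not velocity.", True),
        ("Heavier objects always fall faster.", False),
    ],
}

def build_problem(student_id: str, n: int = 4):
    """Pick n foils from the pool, seeded per student: stable for that
    student, but a different selection for a classmate."""
    rng = random.Random(student_id)
    pool = [foil for group in FOILS.values() for foil in group]
    return rng.sample(pool, n)

for statement, is_true in build_problem("alice"):
    print(f"[T/F] {statement}")
```

Since neighbours see different foil sets, simply copying a friend's letter answers stops working, which matches the anti-cheating point made earlier in the thread.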
LON-CAPA is a multi-target system: the same coded problems can be rendered for online homework, scantron exams, surveys, practice modes, etc.
There are many more things which could be said.
Instructors have fine-grained control over their content AND over many parameters.
LON-CAPA is NOT restricted to use in the Sciences.
I'd be happy to comment further if there is any interest.
Maybe you'd like to pop down to Burnaby for the LON-CAPA Conference May 22-24 at SFU? Best chance in a decade!
Registration is open until May 1
I'd like to hear about your Moodle experience too.
PS If not, then ask me again a little later and I'll set up something for you to have a look at. (Kind of busy right now...)