Personally, I have a habit of collecting participant feedback after any session I facilitate, whether face-to-face or online. In my classes, I collect informal feedback after a few weeks; in longer workshops, I collect end-of-day feedback. If I didn't get that feedback, I wouldn't know whether what I do works for my participants/students, and I wouldn't feel as if I were learner-focused.
My purpose in offering synchronous sessions is to add value to content learning and skill development beyond what could be achieved asynchronously. If I only delivered content, I would choose low-bandwidth, low-immediacy media to accommodate all participants' access to technology, time and space. My synchronous sessions have a heavy experiential focus, well aligned with my session objectives.
I collect feedback informally and formally: informally through built-in activities such as polls, chat and sharing; formally by sending out feedback forms. For online synchronous sessions, I would circulate a Qualtrics survey after a session with specific questions focused on how the synchronous delivery helped achieve the objectives and what else I could do to help learning.
In my department at the JIBC, we have been delivering several synchronous sessions lately, more in the last month than in the previous year or more. Until now we have mostly delivered in-class courses or asynchronous online courses, so the move to synchronous online has been swift and experimental, which is one reason this FLO Synchronous course has been so timely for me.
I found the table provided on the Evaluating Your Session page to be helpful, as it identifies different types and degrees of feedback. In our more recent online synchronous sessions, a small team has been holding short debriefs afterwards, and we are also collecting separate feedback from the students in our short courses. At the moment, we are learning a lot about good practice and common pitfalls, which we are translating into a faculty development program for our department's instructors, who are also delivering online synchronous versions of traditionally on-site courses.
I do find the built-in activities can be very helpful, and I advise instructors to devise their own short exercises/activities for this purpose, from quick polls to Classroom Assessment Techniques like the muddiest point or the minute paper.
I'm reluctant to implement a separate evaluation form for each session within a short course to avoid evaluation overload/fatigue from students. We already have to work hard to get regular end-of-course evaluations from our students.
In health care practice, the Kirkpatrick model of learning evaluation is commonly used.
The art of asking good questions is something I continue to reflect on.
I often include a question such as: "From today's workshop, what is one thing you will take with you into practice?"
In my own reflection on my practice, I assess how the learners are responding throughout the educational event. I rely a lot on my sense of the energy in the room: "Is the group with me?" How learners are responding to and engaging with the learning activities I offer is of value to me. I find that with virtual learning I need to listen differently to what people are saying to check attunement.
I like your example question. It is focused more on the learning than on a performance review. I have a class currently where I'm putting forward some similar questions at the halfway point of the course to see to what extent the group is still "with me"!