Student-focused evaluation of eLearning activities
Centre for Learning & Teaching
University of Southampton
Paper presented at the European Conference on Educational Research, University College Dublin, 7-10 September 2005
Under the auspices of a funded digital library project, a number of new computer-based activities have been developed to enhance learning for geography students. In December 2004, student-focused evaluation of some of these innovations was undertaken on three separate courses of study. This involved observation of students as they were introduced to specific online learning activities, analysis of completed questionnaires and focused discussion. The questionnaire design drew on a specific methodology and generic quality criteria, facilitating comparative analysis of results. Teaching staff were invited to add any questions of particular interest and preliminary findings from the analysis were discussed with them. Their reflections informed the final evaluation reports.
Students were particularly positive about descriptions of learning objectives and content, accessibility of linked resources, inclusion of required tools, appropriateness of assessments and improvement of their knowledge and skills. However, one group, which was remotely tutored, gave negative feedback about the motivational and support aspects of this format. Female students’ responses on these aspects were more negative than those of male students. A vocal minority of students on all three courses reported that their tutors had significantly underestimated the time required to complete online learning activities.
The JISC/NSF funded project, Digital Libraries in Support of Innovative Approaches to Learning and Teaching in Geography (DialogPlus), has enabled geography teachers in two American and two English universities to develop a number of online activities to enhance student learning. In December 2004, the author of this paper undertook a specifically student-focused evaluation of some of these innovations. Three separate courses of study were involved, two at level two (intermediate) and one at level three (final year and postgraduate). Two were in the domain of Physical and Environmental Geography, the other in Earth Observation and Geographical Information Systems. Two adopted a blended approach to learning and teaching; the other was delivered in distance learning mode.
In the blended models, students attended traditional lectures and occasional seminars. Here, the eLearning provision comprised a number of online activities that introduced the students to a specific topic, offered them a wide range of embedded digital library resources for further study, set them tasks involving the manipulation of real data sets, and assessed their learning both formatively and summatively. In the distance learning model, the students were in fact campus-based but the tutor was in a different country. The lecture material was online, released weekly to the students, who then had to undertake practical, lab-based assessments. Tutor support was via email and an online discussion board.
The evaluation involved observation of two groups of students as they were introduced to specific online activities, design of questionnaires, analysis of completed forms, and the use of a nominal group discussion technique with the distance learners. The questionnaire design drew on the MECA-ODL methodology and quality criteria (Riddy & Fill, 2004), facilitating comparative analysis of results. Teaching staff were invited to add any questions of particular interest and preliminary findings from the analysis were discussed with them. Their reflections informed the final evaluation reports.
Overall, student responses were particularly positive with respect to descriptions of learning objectives and content, accessibility of linked resources, inclusion of required tools, appropriateness of assessments and improvement of their knowledge and skills. The students on the distance course, all young undergraduates with no prior experience of this delivery mode, responded negatively about their own motivation, preferences for learning and support, with respect to the online and remote tutoring components. Female students’ responses, on these aspects, were more negative than those of male students. A vocal minority of all students on the three courses reported that their tutors significantly underestimated the time required to complete eLearning activities.
This paper outlines the background to the DialogPlus project, the specific online learning activities reviewed in December 2004, the evaluation processes and detailed outcomes.
The DialogPlus Project
This five-year project started in 2002. Geographers at Pennsylvania State University and the University of California, Santa Barbara (U.S.) and the University of Southampton and the University of Leeds (U.K.) are collaborating in the use of digital library resources and the development of online learning activities. They are supported by Computer Scientists and Educationalists in the partner universities.
Examples of project outputs are digital resources (documents, images, maps, databases, simulations); online activities, and complete units of learning, that use these and other web-based digital resources; a toolkit to support the design of online activities (Conole and Fill, 2005); and evaluation reports on their effectiveness in supporting student learning. The development phase of the project finishes in 2006 and will be followed by a period of consolidation during which it is intended that the resources and, importantly, the innovative academic practice should become embedded in the partner universities.
Evaluated online learning activities
This paper describes and discusses student-focused evaluation of DialogPlus resources used on three courses of study in the first semester of academic year 2004/5. The courses are anonymised here and referred to by number.
Course 1 was taken by 130 campus-based undergraduates, mainly in their second year of study, with a few third years and postgraduates. Project members had developed a number of eLearning activities to complement the traditional lectures and encourage active learning. The evaluated activity introduced students to important environmental indicators, offered them links to many online resources, and required them to manipulate real world data, then discuss and critique the results obtained. Numeric and textual answers were entered online and assessed by a mixture of computer and human marking. Students were introduced to the activity in a timetabled practical session and completed it in their own time. Face-to-face support was available in ‘clinic’ sessions and there was an online discussion board.
Course 2 was taken by 58 second year undergraduates. The students were campus based but in academic year 2004/5 the tutor for the first semester unit was working in another country. As part of the DialogPlus project the weekly face-to-face lectures for this unit had been developed into eLearning modules, linking to many useful resources and with associated online multiple choice quizzes (MCQs). The students worked through these at their own pace. They could contact the tutor by email or use an online discussion board. They also attended weekly practical sessions, supervised by postgraduate assistants. In the final week of the course the tutor returned for a face-to-face lecture and group tutorials.
Course 3 was taken by 48 campus-based students, mainly third-year undergraduates, with some postgraduate students. Here again, eLearning activities supplemented lectures in a blended delivery mode. The evaluated activity involved calibrating and using a computerised model. Students were introduced to the activity in a timetabled practical session and completed it in their own time. Some group work was involved in calibrating the model.
The internal project evaluation activities are led by Educationalists at the University of Southampton and use a mix of quantitative and qualitative methods. The quantitative approach adopted for student-focused evaluation is based on the MECA-ODL methodology (Riddy and Fill, 2004). Ten of the user quality criteria proposed there have been used generically on all student surveys for DialogPlus evaluation. Questionnaires are supplemented with further questions, usually another ten, agreed with the academic tutors. The generic quality criteria are shown in Figure 1. Students are asked to score their response to each statement as 0 (No), 1 (Somewhat), 2 (Yes) or N/A (not applicable). This range of responses was chosen, following feedback sessions on the MECA-ODL tool, instead of the 5-point Likert scale originally suggested in the methodology. Students’ scored responses are entered into spreadsheets and analysed using standard statistical methods. Students are also asked to make comments if they wish and these are considered with other qualitative input.
Figure 1: the generic quality criteria
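The scoring and aggregation just described can be sketched in a few lines. This is an illustration only, not the project's actual spreadsheet analysis: the criterion labels, response values and the `summarise` function are all hypothetical.

```python
def summarise(responses):
    """Mean score per criterion on the 0/1/2 scale, excluding N/A answers."""
    summary = {}
    for criterion, scores in responses.items():
        valid = [s for s in scores if s is not None]  # None stands for N/A
        summary[criterion] = sum(valid) / len(valid) if valid else None
    return summary

# Hypothetical responses from five students to two of the quality statements.
responses = {
    "Content and objectives clearly described": [2, 2, 1, 2, None],
    "Required tools included": [2, 1, 2, 0, 2],
}

print(summarise(responses))
# → {'Content and objectives clearly described': 1.75, 'Required tools included': 1.4}
```

Excluding N/A responses from the denominator, rather than scoring them as zero, keeps the criterion means comparable across questions that not every student could answer.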
Qualitative methods used in DialogPlus evaluations include observation of students using the resources, discussions with individual or groups of students, a nominal group technique (Harvey, 1998, pp. 44-45) and analysis of contributions to online discussions.
All findings are discussed in detail with the academic tutors and, if necessary and possible, further data are sought for clarification or illumination. Finally, a brief summary of each evaluation is agreed and made available to external stakeholders. All the teaching and learning resources, plus the detailed and summary files, are accessible to team members via the project website.
There is a wealth of quantitative and qualitative data produced by these evaluation methods. For the purposes of this paper, the simple summaries plus tabulated responses to the generic quality statements for each evaluated course are given first, followed by a comparative chart based on those responses.
Course 1
The evaluation activities undertaken were observation of students’ introduction to and initial use of a specific online activity, plus analysis of their responses to a questionnaire.
Positive aspects from the observation were that the students seemed engaged with, and interested in, what they were doing; there were no technical problems; and students seemed confident as they left the session that they would be able to complete the activity by the deadline. One negative aspect observed was that the navigation through the activity was unsatisfactory (Next/Back only).
Student responses on the questionnaires were particularly positive with respect to the description of content and learning objectives; inclusion of required tools; mechanisms for information and support; appropriateness of assessments; and the improvement of their knowledge and skills. The most negative response was to the suggestion that they might prefer to be assessed by essays rather than eLearning activities, suggesting general acceptance of the assessment components of the online activities.
There was a spread of views about the other aspects. More students were positive that the specifically evaluated eLearning activity improved their knowledge and skills than were positive that they had learnt a lot from all such activities on the course.
Generic quality scores
Fifty-six (43%) of the 130 students taking the unit completed the questionnaire. Their responses to the generic quality statements are shown in the table below.
Course 2
The evaluation activities undertaken were a questionnaire and a nominal focus group session for students at the end of the final, face-to-face, lecture.
The student response to the ten eLearning modules was particularly positive with respect to description of content and learning objectives; inclusion of required tools; accessibility of linked resources; and appropriateness of the MCQs. The students, all young undergraduates with no prior experience of distance learning, gave particularly negative feedback about their own motivation and preferences for learning with respect to the online and distant tutoring aspects of the unit delivery.
Female students’ responses, on these aspects, were more negative than those of male students. Subsequent analysis of students’ summative results for the course showed no statistically significant differences between male and female students.
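The significance check reported above can be illustrated with Welch's t statistic for two independent samples, computable with the standard library alone. The marks below are invented for the sketch, not the actual course results; in practice a statistics package would also supply the p-value and the Welch degrees of freedom.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical summative marks for illustration only.
male = [62, 58, 65, 70, 61, 66]
female = [60, 64, 59, 68, 63, 62]

t = welch_t(male, female)
# |t| well below roughly 2 suggests no significant difference at the 5% level
# (the exact threshold depends on the Welch degrees of freedom).
print(round(t, 2))  # → 0.46
```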
Generic quality scores
Thirty-five (60%) of the fifty-eight students taking the unit attended the last lecture and completed the questionnaire. None of them had studied by distance learning before.
Course 3
The evaluation activities undertaken were observation of students’ introduction to and use of the computerised model, plus analysis of student responses to a questionnaire.
Positive aspects from the observation were that students seemed engaged with, and largely interested in, what they were doing, and that some students worked very collaboratively. Negative aspects observed were unclear instructions about the group work aspect; computer crashes; data validity problems; and students having to wait overly long for answers to queries or to submit results.
Student responses on the questionnaires were particularly positive with respect to the description of content and learning objectives; inclusion of required tools; and appropriate assessment.
There was a spread of views about the other aspects. More students were positive that this activity improved their knowledge and skills than were positive that they had learnt a lot from all the online activities in the course. A minority positively enjoyed the online activities.
Generic quality scores
Thirty-six (75%) of the forty-eight students taking the unit attended the session and completed the questionnaire.
Comparative quality indicators
One of the virtues of the MECA-ODL approach is that the quality of different eLearning activities can be compared. Figures 2 and 3 below show such comparisons in a graphical format for the three DialogPlus evaluations described above.
It is of the utmost importance to understand the rationale behind any evaluation. On DialogPlus, a utilisation focus has been adopted (Patton, 1997) with the objectives that project team members, and other stakeholders, should be able to know, judge and improve the quality of the learning resources. Comparative analysis is not used here to decide which of several possible resources should be used instead of others, but rather to learn from what students have found to be the best and worst aspects of online learning and to promote good practice.
Figure 2: Comparison of quality criteria 1-5
Figure 3: Comparison of quality criteria 6-10
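A comparison of the kind shown in Figures 2 and 3 can be reproduced from the per-criterion mean scores. The criterion labels and numbers below are invented to mirror the pattern described in the Discussion (course two rated highest on ease of use but lowest on support); the real values are in the project's evaluation reports.

```python
# Hypothetical mean scores (0-2 scale) for three of the ten quality criteria.
criteria = ["Learning objectives", "Ease of use", "Support mechanisms"]
means = {
    "Course 1": [1.8, 1.4, 1.6],
    "Course 2": [1.7, 1.7, 0.9],
    "Course 3": [1.6, 1.3, 1.2],
}

# Print a simple comparison table, one row per criterion.
print(f"{'Criterion':<22}" + "".join(f"{c:>10}" for c in means))
for i, crit in enumerate(criteria):
    print(f"{crit:<22}" + "".join(f"{means[c][i]:>10.1f}" for c in means))
```

Laying the courses out side by side per criterion, rather than per course, makes it immediately visible which aspects of an activity lag behind its peers.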
Discussion & Conclusions
Students on all three courses have found the eLearning activities to be well described, with the necessary tools included for them to complete the set tasks. Assessments are deemed appropriate and the students acknowledge improvements in their knowledge and skills. Relevance and clarity of content could possibly be improved in all cases after some further investigation with the students.
The interface on course two scored higher for ease of use than the others and it is interesting to note that it was delivered via a different VLE, Bodington Common. The other two were mounted in Blackboard, with some customisation. This could also be investigated further to better understand interface characteristics that students particularly like.
Course two scored lowest with respect to students' preferred learning style. The qualitative evaluation exercises suggest this was more to do with the remote support than with the actual content of the modules. This appears to be confirmed by the relatively high rating for the accessibility of the materials but low rating for support mechanisms. Course one scored highest on both learning styles and support. The former could be taken as an endorsement of the thoughtful mix of text, image and data resources and tasks that tested both literacy and numeracy. The support mechanism most valued by students, as expressed in interviews, was face-to-face clinics. The discussion board was little used because students tended to raise queries in these timetabled sessions.
The evaluated activity in course three required good mathematical skills, which may account for its moderate rating on learning style. It also used an adapted simulation model that was error prone, and the low rating for response times may be explained by the frustration expressed by some students, in qualitative evaluation sessions, at delays in resolving these problems. There was less proactive support for these higher level students than on the other two courses, but it still scored slightly higher than the remotely tutored version.
Overall, ratings on the MECA-ODL quality criteria were supported by qualitative student feedback. They provide a useful means of comparing instances of eLearning and suggesting areas for further investigation and improvement. However, one aspect was raised by a number of students in face-to-face evaluation sessions on all three courses that is not covered by the generic set of criteria. These students found that it took them much longer to complete the online activities than their tutors had estimated. There can be several reasons for this, including lower levels of computer skills, difficulty in accessing computers, and lower functional specification of students' computers. Students who raised these time concerns were vehement that they would not want every one of their study units to incorporate eLearning. This should be taken into account by developers and teaching staff who are concerned with the quality of the student experience and learning.
The author is grateful for the co-operation of the teachers and students who took part in these evaluations.
Conole, G. and Fill, K. (2005). A learning design toolkit to create pedagogically effective learning activities. Journal of Interactive Media in Education (Advances in Learning Design. Special Issue, eds. Colin Tattersall, Rob Koper), 2005/08. ISSN:1365-893X [jime.open.ac.uk/2005/08].
DialogPlus website: www.dialogplus.org
Harvey, J. (Ed.) (1998) Evaluation Cookbook. Edinburgh: Heriot-Watt University.
MECA-ODL website: www.adeit.uv.es/mecaodl (Note: the methodological guide is available in English, Spanish, German, Italian and Greek from the Materials page)
Patton, M.Q. (1997). Utilization-focused evaluation (3rd edition). Sage Publications, USA.
Priest, S. and Fill, K. (forthcoming) Online learning activities in second year Environmental Geography. In O’Donoghue, J. (ed) Technology Supported Learning and Teaching: A Staff Perspective. Idea Group Inc, USA
Riddy, P. and Fill, K. (2003) Evaluating the quality of elearning resources. Paper presented at British Educational Research Association Annual Conference 2003, Edinburgh, UK, September 2003. Available online: http://www.leeds.ac.uk/educol/documents/00003331.htm
Riddy, P. and Fill, K. (2004). Evaluating eLearning Resources. In Networked Learning 2004, (eds.) Banks, Goodyear, Hodgson, Jones, Lally, McConnell & Steeples, Lancaster & Sheffield University, pp. 630-636. ISBN 1-86220-150-1.
This document was added to the Education-Line database on 05 October 2005