
Peer assessment in university teaching. An exploration of useful designs

Ineke van den Berg, Wilfried Admiraal, Albert Pilot
IVLOS Institute of Education, Utrecht University, The Netherlands

Paper presented at the European Conference on Educational Research, University of Hamburg, 17-20 September 2003

Abstract

Learning to write at an acceptable academic level cannot be isolated from learning the content of the particular discipline. The acquisition of academic writing is, moreover, a long-term matter. In many curricula, therefore, teachers search for suitable methods to give students more support in developing their writing competence. Peer assessment is an arrangement in which students consider the quality of their fellow students' work and in which the assessment is formative.

This research focuses on the contribution of peer assessment to the acquisition of writing skills by university students. Moreover, we wanted to establish an optimal model of peer assessment. The study can be seen as a multiple-case study with seven peer-assessment designs. Aspects considered include the implementation of peer assessment by students and teachers, the components of peer feedback, the interaction between students during oral peer feedback, students' achievement, and students' and teachers' evaluation of peer assessment.

In all, 168 students mainly following the History program of the Faculty of Arts and nine teachers of the History Department of Utrecht University were involved. From this student group, 131 took part in peer assessment groups and 37 in a control group (with no peer assessment). Data were gathered from questionnaires, semi-structured interviews and observations of classes, students' writing products and all written and oral peer comments.

Results indicated that most students complied with the procedure, assessed the work of their fellow students seriously, and used the peer feedback to revise their work. A comparison of the seven designs of peer assessment reveals design features that support peer assessment. We recommend a combination of written and oral peer feedback: students tend to concentrate on evaluating the texts in their written feedback, whereas in oral peer feedback they also question, explain and give suggestions.

1 INTRODUCTION

Learning to write at an acceptable academic level cannot succeed when it is isolated from the content of the particular discipline. The acquisition of academic writing is also a long-term matter. In many curricula, therefore, teachers search for suitable methods to give students more support in developing their writing competence.

Research on collaborative learning and studies of peer assessment show that students learn from peers by studying educational materials together, as well as by assessing each other's work. They learn more when the assessment procedure includes feedback on the educational products and processes. In most of the literature, the latter form of cooperative learning is labelled peer assessment. A review study by Topping (1998) reported positive effects of peer assessment in higher education, especially on learning to write. Peer assessment is defined as 'an arrangement in which individuals consider the amount, level, value, worth, quality, or success of the products or outcomes of the learning of peers of similar status' (Topping, 1998, p. 250). But, as Topping indicates, it is difficult to see which factors are responsible for the effects of peer assessment as long as there is no consistent framework of description.

Since peer assessment can be organised in many different ways, it is important to be explicit about the relevant variables. For that reason, Topping proposes a typology of peer assessment. This typology will be used as a framework for description and evaluation in the present research, whose aim was to find an optimal model of peer assessment (Van den Berg, 2003). To find out which mix of characteristics gives the best results, we developed seven designs and implemented them in seven courses distributed over the entire curriculum. The educational setting of the research is the History Department of Utrecht University, The Netherlands.

2 METHOD AND MATERIALS

Our study is a multiple-case study of seven cases, or seven designs of peer assessment, each implemented in a different course. Of these cases, several elements are considered: the execution of peer assessment by the students and the teachers, the components of the peer feedback, the interaction between students during oral peer feedback, students' achievements and students' and teachers' evaluation of peer assessment.

2.1 Method of peer assessment

As the basic method of peer assessment, used in all seven designs, we adopted Bean's elaboration of the concept of 'advice-centred feedback' (Bean, 1996). Students were asked to exchange their draft versions and assess the drafts of their fellow students using the same criteria the teacher would use when assessing the final versions. They were not to assign marks, but only to record their findings in the form of written comments to the assessed student. At the end of their report, they reflected on their judgement and made a selection by formulating at least three recommendations to the writer on how to improve the writing product. After peer assessment, students had the opportunity to rewrite their draft version. The teacher checked that the students carried out these steps properly; to facilitate this monitoring, copies of the draft versions and the written feedback reports were also handed in to the teacher. The teacher's strategy was to give comments only after the peer feedback, in a complementary form.

2.2 Seven designs of peer assessment

To construct different designs of peer assessment, we used Topping's aforementioned typology, shown in Figure 1. This typology consists of a survey of variables found in reported systems of peer assessment in higher education. We clustered the variables into four categories: variables concerning peer assessment as an assessment method (var. 2-6); variables concerning the interaction of the students (var. 7-9); variables concerning the composition of the feedback groups (var. 10-13); and variables concerning requirements and rewards (var. 14-17). To discover which educational design of peer assessment offers the best results, we developed seven designs.

    Variable                       Range of variation
 1  Curriculum area/subject        All
 2  Objectives                     Of staff and/or students? Time saving or cognitive/affective gains?
 3  Focus                          Quantitative/summative or qualitative/formative or both?
 4  Product/output                 Tests/marks/grades or writing or oral presentations or other skilled behaviours?
 5  Relation to staff assessment   Substitutional or supplementary?
 6  Official weight                Contributing to assessee final official grade or not?
 7  Directionality                 One-way, reciprocal, mutual?
 8  Privacy                        Anonymous/confidential/public?
 9  Contact                        Distance or face to face?
10  Year                           Same or cross year of study?
11  Ability                        Same or cross ability?
12  Constellation assessors        Individuals or pairs or groups?
13  Constellation assessed         Individuals or pairs or groups?
14  Place                          In/out of class?
15  Time                           Class time/free time/informally?
16  Requirement                    Compulsory or voluntary for assessors/ees?
17  Reward                         Course credit or other incentives or reinforcement for participation?

Figure 1 A Typology of Peer Assessment in Higher Education (Topping, 1998, p. 252)

The selection of courses for the implementation was based on the following arguments. Firstly, we wanted to involve courses with a strong focus on learning to write. So, designs were developed for a freshman writing course (SS), two second year courses in which students had to write a text of considerable length (HA and DS) and a third/fourth year writing course (RP). Secondly, in some courses the same teacher taught parallel groups, which made it possible to compare the results of groups with and without peer assessment. For this reason, designs were made for a third year course (HIST) and two third/fourth year courses (ICE and TGM). In these three courses students worked on writing products, but learning to write was not the main aim.

The differences between the designs consist of variations in ten of the seventeen elements of Topping's typology. The other elements were not varied, for practical reasons (curriculum area/subject, year, time, requirement), for educational reasons (objectives, focus), or because the teachers would not accept any variation of the element (official weight).

The products to be assessed varied in kind and size: a three-page analysis of an exhibition in ICE, a five-page journal report in RP, a ten- to fifteen-page biography in HIST, etc. (product). The relation to staff assessment also differed: in four courses, peer assessment was supplementary to staff assessment, whereas in the other three courses (HIST, ICE and TGM) peer assessment was a source of formative feedback and the teacher only assessed the final product (relation to staff assessment). In TGM and SS, the assessment was one-way, which means that assessors and assessees did not exchange roles within a small feedback group, but students were assessed by other students than those they themselves assessed (directionality). Only in SS was the assessment public, because students presented their oral feedback in front of the plenary group (privacy). In all courses except TGM, peer assessment was carried out partly at a distance, for reading the writing product and filling in the assessment form, and partly in face-to-face contact, for the oral feedback (contact).

In RP and ICE, students had studied the same material and were all given the same assignment. In HA, the teacher formed feedback groups by placing together students with related subjects. In the other courses, the feedback groups were formed at random (ability). The size of the feedback groups varied from two students in DS, to three in RP and HA, and four in HIST and ICE. In every course except HIST, students assessed the products of the fellow students in their feedback group individually. In HIST, two students had to reach consensus on their judgement in the assessments. In SS and TGM, there were no feedback groups; instead, every student assessed the work of two other students (constellation assessors and assessees). Peer assessment was carried out partly in and partly outside class, except in TGM, where the procedure was conducted fully out of class (place). Only in RP was participation in peer assessment rewarded: the teacher assessed the written feedback and, when it was of good quality, graded the student a quarter point higher (reward). The seven designs and the operationalization of Topping's variables are summarised in Figure 2 (at the end of this paper).

2.3 Subjects and data gathering

Involved in the study were 168 students, mainly following the History program of the Faculty of Arts, and nine teachers of the History Department of Utrecht University. From this student group, 131 took part in peer assessment groups and 37 in a control group (with no peer assessment).

Data were gathered from questionnaires, semi-structured interviews and observations of classes. The products of writing, the written peer feedback and the teachers' marking were also used.

Student reactions to peer assessment were solicited by means of an open questionnaire addressed to all students in the peer assessment groups, with items concerning the amount of time students spent on carrying out the peer assessment, the usefulness of the assessment form, students' estimation of their capability to assess the work of other students, their appreciation of the feedback from fellow students, and the progress they perceived in their work after revision. At the end of the questionnaire, students were asked what they thought about working more often with peer assessment and whether they had suggestions for improving the method. The questionnaire was administered before students knew the marks for their final version.

The teachers were interviewed immediately after they graded the final versions. In the interviews, they were addressed both as observers of what their students were doing and learning and as users of the method. Like the students, they were asked about the amount of time they spent assessing the writing products, the usefulness of the assessment form and the quality of the writing products. In addition, the teachers were asked to compare their experiences in the peer assessment group with the control group, if they had one.

The observation of classes concentrated on the introduction of peer assessment by the teacher, the reaction of students to the introduction, the way the feedback groups were formed, the exchange of writing products and written feedback, the participation in oral feedback, the plenary discussion after oral feedback, and the interaction during classes between students and between students and teacher. In addition to these methods of data gathering, students' writing products were collected, final versions as well as draft versions, along with all written and oral feedback, from the students to their fellow students and from the teachers.

The data were mainly analysed by means of content analysis and descriptive statistics. The results of the seven designs are presented in the form of matrices, in order to relate the results to design features. The quality of the method was checked by means of inter-rater reliability, member check and peer debriefing. To test the significance of differences between the results in the seven designs t-tests and non-parametric analyses of variance were used.
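To illustrate the kind of group comparison involved, a minimal sketch of the t statistic for comparing grades between a peer-assessment group and a control group. This is not the study's actual analysis code: the Welch (unequal-variance) variant and the grade lists are assumptions chosen only for illustration.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples:
    t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b),
    with sample variances computed with n - 1 in the denominator."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical grades (not the study's data) for a peer-assessment
# group and a control group on a 10-point scale
pa_grades = [7.5, 8.0, 6.5, 7.0, 8.5]
control_grades = [7.0, 7.5, 6.0, 7.0, 8.0]
t = welch_t(pa_grades, control_grades)
```

A positive t favours the first group; significance would then be judged against the t distribution with the Welch-approximated degrees of freedom.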

3 RESULTS

The seven cases of peer assessment were studied on four topics: firstly, the process of peer assessment; secondly, the components of the peer feedback; thirdly, the interaction between students; and fourthly, students' achievement as a result of peer assessment and students' and teachers' evaluation of the peer assessment method.

3.1 The execution of peer assessment

Most of the students executed the peer assessment procedure as planned. They handed in their draft version on time, the exchange of articles did not cause any problems and they received feedback from at least one other student. However, not all students executed the planned number of assessments. About two thirds received the number of comments prescribed in the peer assessment design. Only a few students did not receive any feedback from their peers.

Table 1 shows the differences between the peer assessment designs concerning the execution of the procedure.

Table 1 The execution of the peer assessment procedure (% of the number of students participating in the course)

                                   SS    HA       DS    HIST   RP    ICE   TGM
Draft versions handed in for PA    88    80/75*   67    65     100   91    100
Assessed by all members of the
  feedback group                   80    63/47*   100   31     75    75    50
Not assessed by peers              0     13/38*   0     0      0     0     0

Note: * the first % concerns assessment of the design of the paper; the second % concerns assessment of a written chapter

The printed information about the procedure, together with the teachers' verbal explanation of the criteria, was enough for the students to understand what activities peer assessment would require of them. Most students not only completed the assessment forms in full detail, adding suggestions and advice for improving the assessed articles, but also scribbled notes on the draft versions to indicate which parts of the text their comments referred to. Some students found that the assessment form gave them too little space for all the feedback they wanted to give.

The oral feedback in the feedback groups that followed in the next meeting was lively and task-directed. This was also the case in the plenary sessions: students actively brought in questions they had been unable to resolve in their feedback group and were eager to hear the teacher's point of view.

In sum, the students took the task of assessing the work of their fellow students seriously. The amount of time spent on reading and formulating comments was on average equal to our estimate of the minimum time needed. Two thirds of the students considered themselves capable of assessing their peers' work. Moreover, students had no problems with the fact that the assessment was not anonymous.

The teachers complied with the procedure. However, in the courses in which students were assumed to receive a substantial part of their writing training, the teachers participated in the peer feedback process more than we thought desirable.

3.2 The components of the peer feedback

According to Roossink (1990) and Flower et al. (1986), adequate feedback consists of at least four types of utterances: Analysis, Evaluation, Explanation and Revision. To evaluate a writing product, it is necessary to analyse it first; and evaluation without explanation is not helpful, because the receiver needs arguments to take the step to revision. Apart from these, which we call 'product-oriented feedback functions', we distinguish feedback aimed at discussing the writing process (Method) and utterances meant to structure the discussion (Orientation). The frequency with which students performed these functions in their written feedback is presented in Table 2; the frequencies for the oral feedback are presented in Table 3.

Table 2 Frequencies of feedback functions in written peer feedback (% of the total number of written feedback utterances per course)

              SS    HA-1  HA-2  DS    HIST-1  HIST-2  RP    ICE   TGM   M
Analysis*     20    25    12    19    22      17      24    24    2     19 (22**)
Evaluation    51    50    45    46    41      46      36    37    46    44 (47)
Explanation   16    14    32    20    16      19      17    9     27    18 (19)
Revision      10    7     4     10    13      13      16    16    7     11 (12)
Orientation   0     0     0     1     0       0       0     1     0     0
Method        1     2     1     0     0       0       0     2     0     1
Fbna***       1     2     6     4     8       4       7     11    17    6
Tot.          100   100   100   100   100     100     100   100   100   100 (100)

Note. * Analysis, Evaluation, Explanation and Revision are the product-oriented feedback functions; ** percentage of the product-oriented feedback only; *** utterances that had no feedback function; M = mean

The designs differ significantly in the frequencies of the different feedback functions, both in the written peer feedback (χ² = 143, df = 48, p < .001 for all feedback functions; χ² = 77, df = 24, p < .001 for the product-oriented feedback) and in the oral peer feedback (χ² = 351, df = 42, p < .001 for all feedback functions; χ² = 93, df = 21, p < .001 for the product-oriented feedback).
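The χ² statistic above can be computed directly from a design-by-function table of utterance counts. A minimal sketch (the counts below are invented for illustration, not the study's data):

```python
# Chi-square test of independence on a contingency table of feedback
# utterance counts (rows = designs, columns = feedback functions):
# chi2 = sum over cells of (observed - expected)^2 / expected, where
# expected = row_total * column_total / grand_total.

def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    # degrees of freedom: (rows - 1) * (columns - 1)
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Invented counts for two designs (rows) across the four product-oriented
# functions (columns): Analysis, Evaluation, Explanation, Revision
observed = [[20, 51, 16, 10],
            [12, 45, 32, 4]]
stat, df = chi_square(observed)  # df = (2-1) * (4-1) = 3
```

The statistic is then compared against the χ² distribution with the given degrees of freedom; e.g. nine designs by seven functions gives df = 8 × 6 = 48, as reported above.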

Table 3 Frequencies of feedback functions in oral peer feedback (% of the total number of oral feedback utterances per course)

              SS    HA-1  HA-2  DS    HIST-1  HIST-2  RP    ICE   M
Analysis*     13    12    15    13    13      14      16    13    14 (23**)
Evaluation    34    12    19    19    21      22      24    26    20 (32)
Explanation   27    12    16    20    17      15      19    16    17 (26)
Revision      3     13    9     12    5       14      11    10    11 (17)
Orientation   14    15    10    10    16      12      9     7     11
Method        0     12    7     3     6       2       1     1     5
Fbna***       9     24    25    22    21      21      19    26    22
Tot.          100   100   100   100   100     100     100   100   100 (100)

Note. * Analysis, Evaluation, Explanation and Revision are the product-oriented feedback functions; ** percentage of the product-oriented feedback only; *** utterances that had no feedback function; M = mean. TGM is left out because in this course there was no oral feedback.

In their written feedback, students concentrated on evaluating the text. In their oral feedback, they offered arguments for their evaluations and proposed revisions. Since adequate feedback requires these functions to be fulfilled as well, oral feedback appears to have important additional value compared with written feedback. In their oral feedback, students also talked about matters not related to the text but to the subject of study. In one course, students had to present their feedback in the form of a public oral report, with hardly any opportunity for the receivers to respond or ask questions; as a consequence, the oral feedback was strongly focused on summative evaluation. This indicates that the feedback procedure requires time for interaction.

Students seldom discussed the writing process. An explanation could be that they were not familiar with process-oriented instruction. Mostly, the feedback was directed at content and style, not at the structure of the text. It may be that students needed more instruction in assessing the structure of a text, and more time to do so. It is also possible that the products to be assessed were already too close to a final version. In the course where students first assessed the design of their fellow students' paper, and then assessed a written chapter, they gave more feedback on structure than students in the other courses.

As Tables 4 and 5 show, there is a significant relationship between the mode of feedback (written or oral) and the kind of feedback functions that are fulfilled. This also applies to the textual aspects. The realisation of the feedback functions, along with all textual aspects, is at a maximum when written and oral feedback are given in combination.

Table 4 The relation between feedback function and feedback aspect in written peer feedback

          C      χ²    df    p
SS        .19    52    9     p < .001
HA-1      .20    8     9     n.s.*
HA-2      .12    8     9     n.s.
DS        .30    27    9     p < .001
HIST-1    .15    9     9     n.s.
HIST-2    .10    5     9     n.s.
RP        .008   5     9     n.s.
ICE       .22    39    9     p < .001
TGM       .07    2     9     n.s.

Note: * n.s. = not significant

Table 5 The relation between feedback function and feedback aspect in oral peer feedback

          C      χ²    df    p
SS        .08    5     9     n.s.*
HA-1      .11    21    9     p < .05
HA-2      .07    10    9     n.s.
DS        .09    9     9     n.s.
HIST-1    .18    30    9     p < .001
HIST-2    .11    18    9     p < .05
RP        .07    12    9     n.s.
ICE       .19    40    9     p < .001

Note: * n.s. = not significant
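Tables 4 and 5 report an association coefficient C alongside χ². Assuming C here is Pearson's contingency coefficient, it follows from χ² and the number of classified utterances N as C = sqrt(χ² / (χ² + N)). A sketch (the N below is hypothetical, chosen only to illustrate the scale of the coefficient):

```python
import math

def contingency_coefficient(chi2, n):
    """Pearson's contingency coefficient, C = sqrt(chi2 / (chi2 + N)):
    a measure of association between 0 and 1 for a contingency table
    built from N classified cases."""
    return math.sqrt(chi2 / (chi2 + n))

# With chi2 = 52 (the SS row of Table 4) and a hypothetical N of 1390
# utterances, C comes out near the reported .19
c = contingency_coefficient(52, 1390)
```

C grows with χ² but is bounded below 1, which is why even highly significant χ² values in Tables 4 and 5 correspond to modest coefficients.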

3.3 The interaction between students

To analyse the interaction of students when giving oral feedback, we used a typology of Lockhart & Ng (1995). They distinguish four reader stances, or as we call them 'feedback positions': authoritative, interpretative, probing and collaborative. The authoritative and the interpretative position are both instances of the 'evaluative mode'; the probing and the collaborative position are instances of the 'discovery mode'. According to Lockhart & Ng (1995), cognitive development and learning to write are also social processes, which are more fruitful when students do not take an authoritative position. A probing or collaborative manner of giving feedback would be best, because the receiver is stimulated to discuss his own text and, in doing so, deepens his knowledge of what he wants to communicate. However, as is shown in Table 6, more than half of the students interacted in an evaluative way.

Table 6 Feedback position per design (expressed in % of the discussions of the writing product)

               SS    HA-1  HA-2  DS    HIST-1  HIST-2  RP    ICE   M
Evaluative     80    31    41    50    66      58      54    85    58
Discovery            63    50    25            29      31          26
Not classif.   20    6     8     25    33      14      15    15    16
Tot.           100   100   100   100   100     100     100   100   100

Note: M = mean

The fact that so many students took an evaluative feedback position can partly be explained by the task they had been given: to evaluate the work of their fellow students. It could also be explained by their inexperience in giving feedback, which might lead them to imitate their teacher's use of the red pencil. From that point of view, it is surprising that a quarter of all feedback conversations took the discovery position: many students were capable of giving their feedback in a more open manner, without training or instruction (for more information see Van den Berg, 2003).

3.4 Achievement and evaluation of the peer assessment method

Student achievement was determined in terms of grades for the writing products. In addition, we used students' and teachers' estimations of the benefits of peer assessment.

As is shown in Table 7, most students used the peer feedback in their revision work, though often only in part.

Table 7 Processing of peer feedback

Course        Processed   % content   % structure   % style   N
ICE           y           50          0             50        22 (23%)
              n           38          0             62        74
TGM           y           43          6             50        16 (34%)
              n           52          13            35        31
HIST-1        y           50          5             45        20 (41%)
              n           34          10            55        29
HIST-2        y           41          12            47        17 (29%)
              n           54          2             44        41
RP            y           33          8             58        24 (34%)
              n           24          7             70        46
All courses   y           44          5             51        98 (31%)
              n           39          5             55        222

Note: N = the total number of suggestions for revision that were processed (y) or not processed (n); the percentage in parentheses is the share of all suggestions that was processed

Most revisions concerned content and style. Only sporadically could students' revisions not be traced back to the feedback from their fellow students or teacher.

In two courses, marks before and after revision could be compared. In one of these courses, the progress that everyone had made was a direct consequence of peer assessment. In three courses, the marks of the peer assessment group could be compared with those of control groups. There was no significant difference.

Most students thought that, as a consequence of peer feedback, their revised version was better than their draft version. The courses differed in the extent to which students were convinced of their progress.

The teachers, insofar as they studied the draft versions of the writing products, observed that the revised products were of better quality. They were uncertain whether this was a result of peer assessment, of their own feedback, or of the extra time students were allowed for revising their draft version. The teachers who only studied the final version considered the writing products of the peer assessment and control groups to be of equal quality.

Students' evaluation of peer assessment was positive. They found it useful to read and comment on the work of their fellow students, especially because of the suggestions it yielded for revising their own work. Peer feedback was considered useful, and oral feedback was seen as an important addition to written feedback. As a negative element, they noted that the time needed for reading and assessing the work of peers had not been included in the courses' official study load. About two thirds of the students would prefer to work with peer assessment more often, and they frequently gave suggestions for improving the procedure.

Teachers sometimes considered the written feedback too brief and superficial. The teachers who listened to the oral feedback felt that the written feedback, which in their view was often too mild, became more critical through oral explanation. Some teachers saw peer assessment as a way to escape from one-way interaction between teacher and students. Others saw it as an opportunity to use the experience of students having read each other's work, which made it possible to initiate a joint discussion about it. Some teachers struggled with their role in the system of peer assessment: they wanted to give more assistance, but found the opportunities for assistance restricted, because students now had to give their feedback first. In general, however, teachers were positive about implementing peer assessment as a structural part of their courses.

4 CONCLUSIONS AND DISCUSSION

Based on a cross-case analysis of the seven designs of peer assessment and their results, we draw conclusions about the most important design features supporting the effectiveness of peer assessment. They are summarized as follows.

4.1 Important design features

1. The writing product is a draft article of five to eight pages. Students will not invest adequate time in reading and assessing larger products.

2. There must be some time between peer assessment and teacher assessment, so students can first revise their paper on the basis of the peer feedback before handing it in to the teacher.

3. Mutual feedback is easier for teacher and students to organise: because every assessor is in turn an assessee, products are exchanged naturally.

4. There seems to be no need to let students assess each other's work anonymously; small feedback groups seem to give students enough privacy.

5. A combination of written and oral feedback is more profitable than written feedback alone. In their oral feedback, students enrich their written feedback with clarifying questions, sometimes change their evaluation because of new information they receive from the writer, and develop suggestions for revision in the interaction.

6. Students with full knowledge of each other's subject do not give more feedback on content; nor do students give more feedback on style when they know less about each other's subject.

7. When the teacher gives feedback in a feedback group together with the students, students participate less, in terms of speech utterances. From the point of view of elaboration of the subject matter through students interacting and exchanging views, participation of the teacher in the feedback group is therefore not recommended.

8. The preferred size of a feedback group in which students mutually assess their work is three or four students, so that students have some opportunity to compare the remarks of their fellow students and determine their relevance. In a group of two, one student is at risk when the partner does not perform well.

9. Oral feedback must be organised during contact hours, because there is a risk that students will not organise it themselves out of class.

10. It does not seem necessary to reward students for participating in peer assessment with credits for the quality of their written feedback; this kind of reward also costs the teacher much time.

4.2 Discussion

Of course, critical remarks can be made about our study. Firstly, it was carried out in a practical setting and not in a laboratory. We worked with teachers who did what they considered good for their students, which meant there were more teacher interventions in some courses than we had planned. As a consequence, we could not keep every variable fully under control.

Secondly, we did not organise extra qualitative data gathering, which would have been helpful to gain a better insight into the peer feedback process and the quality of the interaction.

Thirdly, the achievements we studied were confined to short-term gains, because the students experienced peer assessment at only one moment in their studies. We did not study metacognitive benefits, although these are what we aim for with peer assessment in a long-term perspective.

Finally, there is the issue of generalization: we have no data about the validity of our results for other disciplines or types of education. These shortcomings of our study are challenges for future research. Another interesting topic would be to develop forms of peer assessment that focus less on summative evaluation and more on feedback on the process of writing.

REFERENCES

Bean, J. C. (1996). Engaging ideas: the professor's guide to integrating writing, critical thinking and active learning in the classroom. San Francisco: Jossey-Bass Publishers.

Van den Berg, B. A. M. (2003). Peer assessment in universitair onderwijs. Een onderzoek naar bruikbare ontwerpen [Peer assessment in university teaching. An exploration of useful designs]. Utrecht, The Netherlands: Utrecht University. http://www.library.uu.nl/digiarchief/dip/diss/2003-0702-105650/inhoud.htm

Flower, L., Hayes, J. R., Carey, L., Schriver, K., & Stratman, J. (1986). Detection, Diagnosis, and the Strategies of Revision. College Composition and Communication, 37(1), 16-55.

Lockhart, C., & Ng, P. (1995). Analyzing Talk in ESL Peer Response Groups: Stances, Functions and Content. Language Learning, 45(4), 605-655.

Roossink, H. J. (1990). Terugkoppelen in het natuurwetenschappelijk onderwijs, een model voor de docent [Feeding back in science education, a feedback model for the teacher]. Enschede: Universiteit Twente.

Topping, K. (1998). Peer Assessment Between Students in Colleges and Universities. Review of Educational Research, 68(3), 249-276.

 

(4) Product. SS: draft version of paper (10 pp.). HA: plan for paper (1-2 pp.) and core chapter (3-5 pp.). DS: draft version of paper (15 pp.). HIST: draft biography (10 pp.). RP: draft article (5 pp.). ICE: draft analysis of an exhibition (3-5 pp.). TGM: draft article (5 pp.).

(5) Relation to staff assessment. SS: supplementary; teacher gives written feedback on the draft version. HA: supplementary. DS: supplementary; teacher gives written feedback on the draft version and marks it. HIST: supplementary; extra assessment. RP: supplementary; teacher gives written feedback on the draft version and marks it. ICE: supplementary; extra assessment. TGM: supplementary; extra assessment.

(7) Directionality. SS: one-way (2 assessments). HA: mutual (2 or 3 assessments of both products). DS: mutual (1 assessment). HIST: mutual (2 joint assessments). RP: mutual (2 assessments). ICE: mutual (3 assessments). TGM: one-way (2 assessments); assessment (1) -> writing -> assessment (2).

(8) Privacy. SS: in public (for teacher and all students). HA, DS, HIST, ICE: confidential (within the feedback group); teacher receives a copy. RP: confidential (within the feedback group); teacher receives a copy and assesses it. TGM: confidential; teacher receives a copy.

(9) Contact. SS: written and oral feedback; plenary discussion with supplementary feedback from the teacher. HA, ICE: written and oral feedback. DS, HIST, RP: written and oral feedback; plenary discussion of questions from the feedback groups. TGM: written feedback only.

(11) Ability. SS, HIST, TGM: feedback groups formed at random. HA: feedback groups formed by the teacher on the basis of joint topics. DS: feedback groups formed by the students on the basis of joint topics. RP: feedback groups formed at random; same topic. ICE: feedback groups formed by the students, based on the exhibition.

(12) Constellation assessors. SS: two students and the teacher. HA: small groups (3-4 students) with the teacher participating. DS: two students and the teacher. HIST: small groups of 2 pairs of students. RP: small groups (3 students) and the teacher. ICE: small groups (4 students). TGM: two students.

(13) Constellation assessees. SS: two other students. HA, HIST, RP, ICE: the same small groups. DS: the same two students. TGM: two other students.

(14) Place. SS: written feedback out of class; oral feedback in class (plenary discussion and tutorial with teacher). HA: written feedback out of class; oral feedback in class (small groups with teacher). DS, HIST, RP: written feedback out of class; oral feedback in class (small groups and plenary discussion). ICE: written feedback out of class; oral feedback in class (small groups). TGM: written feedback out of class.

(17) Reward. RP: max. 1/4 point extra for written feedback. All other courses: no reward for participation in PA.

Figure 2 Features of the seven designs of peer assessment (variable numbers refer to Figure 1)

This document was added to the Education-line database on 18 September 2003