Wednesday, April 2, 2008

3/27 inter-rater reliability design

Mission:

1. Find out the meaning of inter-rater reliability.
Inter-rater reliability is the degree of agreement among raters. It gives a score of how much homogeneity, or consensus, there is in the ratings given by judges. It is useful in refining the tools given to human judges, for example by determining whether a particular scale is appropriate for measuring a particular variable. If the raters do not agree, either the scale is defective or the raters need to be re-trained.

There are a number of statistics which can be used to determine inter-rater reliability. Different statistics are appropriate for different types of measurement. Some options are: joint-probability of agreement, Cohen's kappa and the related Fleiss' kappa, inter-rater correlation, concordance correlation coefficient and intra-class correlation.
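As a rough illustration of how one of these statistics is computed, here is a minimal Python sketch of Cohen's kappa for two raters: it compares the observed agreement with the agreement expected by chance. The rater names and pass/fail labels are made up for the example; a real analysis would normally use an established statistics package.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same set of items.

    ratings_a, ratings_b: equal-length sequences of category labels,
    one entry per rated item (e.g. one per examination).
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)

    # Observed agreement: proportion of items on which both raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement: product of each rater's marginal proportions,
    # summed over all categories used by either rater.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))

    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two therapists rating 10 examinations as pass/fail.
rater_1 = ["pass", "pass", "fail", "pass", "fail",
           "pass", "pass", "fail", "pass", "pass"]
rater_2 = ["pass", "fail", "fail", "pass", "fail",
           "pass", "pass", "pass", "pass", "pass"]
print(cohens_kappa(rater_1, rater_2))  # about 0.47; values near 1 mean strong agreement
```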

2. Find out the advantages and disadvantages of the design.

3. Is this study design suitable for the clinical environment?

4. Ask the clinical Ts about the efficiency of the study design.
Having only one rater assess the examination would be more suitable for the clinical setting.

5. Find out the sources of influence on the result.
The Ts' experience, attitude, and instructions will be the main influences on the result.
