Kylie:
I tried your method and SPSS correctly weighted out the dummy case.
The crosstab table showed 60% agreement (the raters agreed on 3 out of
5 valid ratings), which is correct. But it calculated Kappa as .000,
which is definitely not correct.
My test data was set up as follows:
###
       Rater1  Rater2  Weight
Item1  Y       Y       1
Item2  N       Y       1
Item3  Y       Y       1
Item4  Y       Y       1
Item5  N       Y       1
Dummy  N       N       .000000001
###
Any ideas?
Kurt
On May 16, 7:58 pm, klange <klang...@yahoo.com.au> wrote:
On May 17, 1:21 am, Kurt <kheisl...@cox.net> wrote:
I am trying to assess the level of agreement between two raters who
rated items as either Yes or No. This calls for Kappa. But if one
rater rated all items the same, SPSS sees this as a constant and
doesn't calculate Kappa.
For example, SPSS will not calculate Kappa for the following data, because Rater 2 rated everything a Yes.
       Rater1  Rater2
Item1  Y       Y
Item2  N       Y
Item3  Y       Y
Item4  Y       Y
Item5  N       Y
SPSS completes the crosstab (which shows that the raters agreed 60% of the time), but as for Kappa, it returns this note:
"No measures of association are computed for the crosstabulation of VARIABLE1 and VARIABLE2. At least one variable in each 2-way table
upon which measures of association are computed is a constant."
Is there any way to get around this? I can calculate Kappa by hand
with the above data; why doesn't SPSS?
Thanks.
Kurt
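[For reference, the kappa for this table can be checked by hand. With Rater 2 constant (all "Y"), the agreement expected by chance works out to exactly the observed agreement, so kappa is 0 rather than undefined or positive. A minimal Python sketch of the arithmetic (plain Python, not SPSS):

```python
# Cohen's kappa computed by hand for the 5-item example above.
# Rater 2 rated every item "Y", so chance agreement equals observed
# agreement and kappa comes out exactly 0.
from collections import Counter

r1 = ["Y", "N", "Y", "Y", "N"]
r2 = ["Y", "Y", "Y", "Y", "Y"]

n = len(r1)
p_o = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement: 3/5

# chance (expected) agreement from the marginal proportions
m1, m2 = Counter(r1), Counter(r2)
p_e = sum((m1[c] / n) * (m2[c] / n) for c in ("Y", "N"))

kappa = (p_o - p_e) / (1 - p_e)
print(p_o, p_e, kappa)   # 0.6 0.6 0.0
```

So a kappa of .000 here is the mathematically correct value: 60% agreement is exactly what chance alone predicts when one rater always says Yes.]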
Hi Kurt,
Add one extra case to your file with the value of 'N' for Rater 2 (and
any value for Rater 1). Add a weighting variable that has a value of 1
for your real cases, and a very small value for this new dummy case
(e.g., 0.00000001). Weight the file by the weighting variable (Data >
Weight cases), and then run the Crosstabs/Kappa.
The new case is enough for the Kappa to be calculated, but the
weighting means that it won't impact your results.
Cheers,
Kylie.
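[Kylie's dummy-case trick can be checked outside SPSS as well. The sketch below (plain Python, just the weighted-kappa arithmetic; the 1e-9 weight mirrors Kurt's test file) shows that the weighted kappa does come out at essentially 0, i.e. the .000 SPSS reports after the workaround is the expected value, not a calculation error:

```python
# Weighted Cohen's kappa for Kurt's test file: five real cases plus an
# N/N dummy case with a near-zero weight, so both raters use both
# categories and the statistic becomes computable.
from collections import defaultdict

# (rater1, rater2, weight); the dummy case carries a tiny weight
cases = [("Y", "Y", 1), ("N", "Y", 1), ("Y", "Y", 1),
         ("Y", "Y", 1), ("N", "Y", 1), ("N", "N", 1e-9)]

total = sum(w for _, _, w in cases)
p_o = sum(w for a, b, w in cases if a == b) / total   # weighted agreement

# weighted marginal totals for each rater
m1, m2 = defaultdict(float), defaultdict(float)
for a, b, w in cases:
    m1[a] += w
    m2[b] += w
p_e = sum(m1[c] * m2[c] for c in ("Y", "N")) / total ** 2

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 6))   # 0.0 (kappa is on the order of 1e-9)
```

The dummy case makes kappa computable but, as Kylie says, its weight is too small to move the result; the result is ~0 for the same reason as in the unweighted calculation.]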
On Friday, May 18, 2007 at 9:20:01 PM UTC+5:30, Kurt wrote:
Kylie:
I tried your method and SPSS correctly weighted out the dummy case.
The crosstab table showed 60% agreement (the raters agreed on 3 out of
5 valid ratings) which is correct. But it calculated Kappa as .000,
which is definitely not correct.
My test data was set up as follows:
If there are two raters, R1 and R2, can you tell me how to add a third column for the weight in SPSS, as you did? Could you share an SPSS screenshot?
It would be a very big help for me. My email id: clickwaheed@gmail.com
On Thu, 2 Jul 2020 08:06:50 -0700 (PDT), clickwaheed@gmail.com wrote:
< snip, quoted text >
The original thread from 2007 is available from Google:
https://groups.google.com/forum/#!topic/comp.soft-sys.stat.spss/ChdrpJTsvTk
and it gives plenty of reasons why you don't really want to
have a kappa reported when there is no variation.
In particular, study my posts and the one from Ray Koopman.