• SPSS doesn't calculate Kappa when one variable is constant

  • From clickwaheed@gmail.com@21:1/5 to Kurt on Thu Jul 2 08:06:50 2020
    On Friday, May 18, 2007 at 9:20:01 PM UTC+5:30, Kurt wrote:
    Kylie:

    I tried your method and SPSS correctly weighted out the dummy case.
    The crosstab table showed 60% agreement (the raters agreed on 3 out of
    5 valid ratings) which is correct. But it calculated Kappa as .000,
    which is definitely not correct.

    My test data was set up as follows:

    ###

    Item    Rater1  Rater2  Weight
    Item1   Y       Y       1
    Item2   N       Y       1
    Item3   Y       Y       1
    Item4   Y       Y       1
    Item5   N       Y       1
    Dummy   N       N       .000000001

    ###

    Any ideas?

    Kurt


    On May 16, 7:58 pm, klange <klang...@yahoo.com.au> wrote:
    On May 17, 1:21 am, Kurt <kheisl...@cox.net> wrote:

    I am trying to assess the level of agreement between two raters who
    rated items as either Yes or No. This calls for Kappa. But if one
    rater rated all items the same, SPSS sees this as a constant and
    doesn't calculate Kappa.

    For example, SPSS will not calculate Kappa for the following data, because Rater 2 rated everything a Yes.

    Item    Rater1  Rater2
    Item1   Y       Y
    Item2   N       Y
    Item3   Y       Y
    Item4   Y       Y
    Item5   N       Y

    SPSS completes the crosstab (which shows that the raters agreed 60% of the time), but as for Kappa, it returns this note:

    "No measures of association are computed for the crosstabulation of VARIABLE1 and VARIABLE2. At least one variable in each 2-way table
    upon which measures of association are computed is a constant."

    Is there any way to get around this? I can calculate Kappa by hand
    with the above data; why doesn't SPSS?

    Thanks.

    Kurt

    Hi Kurt,

    Add one extra case to your file with the value of 'N' for Rater 2 (and
    any value for Rater 1). Add a weighting variable that has a value of 1
    for your real cases, and a very small value for this new dummy case
    (eg, 0.00000001). Weight the file by the weighting variable (Data >
    Weight cases), and then run the Crosstabs/Kappa.

    The new case is enough for the Kappa to be calculated, but the
    weighting means that it won't impact your results.

    Cheers,
    Kylie.

    If there are two raters, R1 and R2, can you tell me how to add the third column for the weight in SPSS as you did? Could you share an SPSS screenshot?
    It would be a very big help for me. My email id: clickwaheed@gmail.com
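
    For reference, a minimal SPSS-syntax sketch of Kylie's weighting workaround,
    applied to Kurt's test data. The variable names (item, rater1, rater2, wt)
    are placeholders, not from the original posts; the menu equivalent of the
    WEIGHT command is Data > Weight Cases, as Kylie described.

    ###

    * Workaround from Kylie: one dummy case plus a tiny case weight.
    DATA LIST LIST /item (A5) rater1 (A1) rater2 (A1) wt (F12.9).
    BEGIN DATA
    Item1 Y Y 1
    Item2 N Y 1
    Item3 Y Y 1
    Item4 Y Y 1
    Item5 N Y 1
    Dummy N N .000000001
    END DATA.

    * Same effect as Data > Weight Cases in the menus.
    WEIGHT BY wt.

    * Crosstab with kappa requested.
    CROSSTABS
      /TABLES=rater1 BY rater2
      /STATISTICS=KAPPA.

    ###

    On this data the crosstab shows 60% agreement and kappa comes out as .000,
    exactly as Kurt reported; Rich Ulrich's replies below explain why that zero
    is the mathematically expected value rather than an SPSS error.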

  • From Rich Ulrich@21:1/5 to clickwaheed@gmail.com on Thu Jul 2 12:11:16 2020
    On Thu, 2 Jul 2020 08:06:50 -0700 (PDT), clickwaheed@gmail.com wrote:

    On Friday, May 18, 2007 at 9:20:01 PM UTC+5:30, Kurt wrote:
    Kylie:

    I tried your method and SPSS correctly weighted out the dummy case.
    The crosstab table showed 60% agreement (the raters agreed on 3 out of
    5 valid ratings) which is correct. But it calculated Kappa as .000,
    which is definitely not correct.

    My test data was set up as follows:


    < snip, details >

    If there are two raters, R1 and R2, can you tell me how to add the third column for the weight in SPSS as you did? Could you share an SPSS screenshot?
    It would be a very big help for me. My email id: clickwaheed@gmail.com

    The original thread from 2007 is available from Google, https://groups.google.com/forum/#!topic/comp.soft-sys.stat.spss/ChdrpJTsvTk

    and it gives plenty of reasons why you don't really want to
    have a kappa reported when there is no variation.

    In particular, study my posts and the one by Ray Koopman.

    --
    Rich Ulrich

  • From Rich Ulrich@21:1/5 to rich.ulrich@comcast.net on Thu Jul 2 12:39:12 2020
    On Thu, 02 Jul 2020 12:11:16 -0400, Rich Ulrich
    <rich.ulrich@comcast.net> wrote:

    < snip, quoted text >

    The original thread from 2007 is available from Google, https://groups.google.com/forum/#!topic/comp.soft-sys.stat.spss/ChdrpJTsvTk

    and it gives plenty of reasons why you don't really want to
    have a kappa reported when there is no variation.

    In particular, study my posts and the one by Ray Koopman.

    I will add to a point that I made in the original discussion.

    The reader's problem arises because "agreement" is intuitively
    sensible in usual circumstances, but it is nonsensical under
    close examination when the marginal frequencies are extreme.

    "Reliability" for a ratings of diagnosis logicallyy decomposes
    into Sensitivity and Specificity -- picking out the Cases, and
    picking out the Non-cases. Kappa is intended to combine
    those measures, essentially. It looks at "performance above
    chance." (For a 2x2 table, it is closely approximated by the
    Pearson correlation.)
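
    A quick check with Kurt's table above shows what "above chance" means
    here (writing Po for observed agreement and Pe for chance-expected
    agreement): Po = 3/5 = .60 and, because Rater 2 is constant,
    Pe = (.60)(1.0) + (.40)(0.0) = .60 as well, so
    kappa = (Po - Pe)/(1 - Pe) = (.60 - .60)/(1 - .60) = 0 exactly.
    Whenever one rater is constant, Po and Pe coincide, so kappa can only
    be 0 (or undefined when both raters are constant); the .000 that the
    weighting trick produced is the true value, not a rounding artifact.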

    I gave a hypothetical table {90, 5; 5, 0} with a negative kappa:

        90   5
         5   0

    One can arbitrarily label the rows and columns as starting with
    Yes or with No. In one labeling there is 90% "agreement" as to
    who is a case (cell A); in the other labeling there is 0% "agreement"
    (cell D).

    When each of the two raters sees 95% as a Case, chance would
    have them agree SOME of the time; so the "agreement" of 0 is below
    chance, and the kappa is negative.
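
    A minimal syntax sketch that reproduces that table in SPSS, entering the
    four cell counts as case weights (the variable names rater1, rater2 and n
    are placeholders):

    ###

    * Hypothetical 2x2 table {90, 5; 5, 0} entered as weighted cells.
    DATA LIST LIST /rater1 (A1) rater2 (A1) n (F8.0).
    BEGIN DATA
    Y Y 90
    Y N 5
    N Y 5
    N N 0
    END DATA.

    WEIGHT BY n.
    CROSSTABS
      /TABLES=rater1 BY rater2
      /STATISTICS=KAPPA.

    ###

    With these margins Po = .90 while Pe = (.95)(.95) + (.05)(.05) = .905, so
    kappa = (.90 - .905)/(1 - .905), about -.05: agreement slightly below what
    chance alone would produce.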

    --
    Rich Ulrich
