Table 3 Agreement between clinician pairs in classification of CXR abnormalities in patients with a severe acute respiratory infection

From: A chest radiograph scoring system in patients with severe acute respiratory infection: a validation study

   

| Clinician–clinician combination | Weighted kappa (95 % CI) | Strength of agreement^a | Number of CXRs with discrepant scores, n (%) of 250 |
|---|---|---|---|
| *Agreement after independent review* | | | |
| Pediatrician vs. internal medicine physician | 0.85 (0.73 to 0.98) | ‘Very good’ | 39 (16) |
| Pediatrician vs. internal medicine resident | 0.76 (0.63 to 0.88) | ‘Good’ | 48 (19) |
| Pediatrician vs. pediatric resident | 0.81 (0.68 to 0.95) | ‘Very good’ | 51 (20) |
| Pediatrician vs. medical student 1 | 0.66 (0.53 to 0.78) | ‘Good’ | 67 (27) |
| Pediatrician vs. medical student 2 | 0.63 (0.50 to 0.76) | ‘Good’ | 70 (28) |
| Pediatrician vs. research nurse | 0.75 (0.62 to 0.88) | ‘Good’ | 56 (22) |
| *Agreement after combined review of CXRs with discrepant scores* | | | |
| Pediatrician vs. internal medicine physician | 0.98 (0.90 to 1.06) | ‘Very good’ | 3 (1) |
| Pediatrician vs. internal medicine resident | 0.99 (0.87 to 1.12) | ‘Very good’ | 4 (2) |
| Pediatrician vs. pediatric resident | 0.97 (0.84 to 1.09) | ‘Very good’ | 5 (2) |
| Pediatrician vs. medical student 1 | 0.99 (0.86 to 1.11) | ‘Very good’ | 3 (1) |
| Pediatrician vs. medical student 2 | 0.98 (0.85 to 1.10) | ‘Very good’ | 3 (1) |
| Pediatrician vs. research nurse | 0.99 (0.86 to 1.11) | ‘Very good’ | 6 (2) |

^a Agreement: weighted kappa ≤0.2 = ‘poor’, >0.2 to 0.4 = ‘fair’, >0.4 to 0.6 = ‘moderate’, >0.6 to 0.8 = ‘good’, >0.8 to 1.0 = ‘very good’ agreement

CI: confidence interval
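For reference, the footnote's kappa-to-strength bands can be sketched as a small classification function. This is an illustrative sketch, not part of the study's analysis code; the function name `agreement_strength` is assumed, and boundary handling follows the footnote's "greater than x, up to y" convention:

```python
def agreement_strength(kappa: float) -> str:
    """Map a weighted kappa value to the strength-of-agreement
    label defined in the table footnote (half-open bands)."""
    if kappa <= 0.2:
        return "poor"
    elif kappa <= 0.4:
        return "fair"
    elif kappa <= 0.6:
        return "moderate"
    elif kappa <= 0.8:
        return "good"
    else:
        return "very good"

# Example: the pediatrician vs. internal medicine physician pair
# (weighted kappa 0.85) falls in the 'very good' band,
# while 0.76 (vs. the internal medicine resident) is 'good'.
print(agreement_strength(0.85))  # very good
print(agreement_strength(0.76))  # good
```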