
Inter-Rater Reliability of a Seizure Classification System for Naturally-Occurring Canine Seizures

Abstract number : 3.108;
Submission category : 1. Translational Research
Year : 2007
Submission ID : 7854
Source : www.aesnet.org
Presentation date : 11/30/2007 12:00:00 AM
Published date : Nov 29, 2007, 06:00 AM

Authors :
B. G. Licht1, A. Ranney1, 2, M. Licht1, L. Hyson1, K. Harper3, S. Sullivan4

Rationale: The overall goal of our research is to establish a naturally-occurring canine model of epilepsy. Research shows that canine seizures have clinical presentations similar to human seizures. The study presented here examined whether a modified version of the ILAE seizure classification system can be used to reliably classify canine seizures. The system was modified because consciousness must be assessed differently in dogs than in people, and electroencephalograms are rarely used in veterinary medicine. Thus, our classification system was based solely on observable clinical signs. We examined inter-rater reliability among four raters who classified descriptions of seizures provided by the dogs’ owners.

Methods: Seizures from 66 pet Poodles were classified. Descriptions were obtained through structured telephone interviews with owners. Seizure descriptions were classified independently by four raters. Three raters were university undergraduates who had been trained to use the classification system. The fourth rater was the senior author (BGL). Undergraduates were employed to determine whether persons without expertise in epilepsy could be trained to use the system reliably. Training took approximately 30 hours. Each rater classified the same 66 seizures. Raters could consult the manual but could not discuss any classifications with anyone. Twelve classification decisions were made for each seizure. These included whether the seizure had a generalized versus focal onset, whether a focal-onset seizure secondarily generalized, whether autonomic signs were noted by the owner, etc. Agreement among raters for each of the 12 decisions was evaluated in two ways. First, we computed percent agreement, which reflects the percentage of all 66 seizures for which the 4 raters agreed on a specific decision. Second, we computed kappa for each decision. Kappa is a more stringent estimate of agreement because it reflects agreement above what is expected by chance.
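The abstract does not specify which multi-rater agreement statistic was computed; the following is a minimal sketch, assuming all-rater percent agreement and Fleiss' kappa (a standard choice for more than two raters). The ratings shown are hypothetical, for illustration only, and are not the study's data.

```python
from collections import Counter

def percent_agreement(subject_ratings):
    """Fraction of subjects on which all raters gave the same label."""
    return sum(len(set(r)) == 1 for r in subject_ratings) / len(subject_ratings)

def fleiss_kappa(subject_ratings):
    """Fleiss' kappa for N subjects, each rated by the same number of raters."""
    n_raters = len(subject_ratings[0])
    categories = sorted({label for r in subject_ratings for label in r})
    # n_ij: number of raters assigning subject i to category j
    counts = [[Counter(r)[c] for c in categories] for r in subject_ratings]
    # Observed agreement per subject, averaged over subjects
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / len(counts)
    # Chance agreement from the marginal category proportions
    total = len(counts) * n_raters
    p_e = sum((sum(row[j] for row in counts) / total) ** 2
              for j in range(len(categories)))
    if p_e == 1.0:
        # Every rating identical: agreement above chance is undefined
        raise ValueError("kappa undefined: no variability in ratings")
    return (p_bar - p_e) / (1.0 - p_e)

# Hypothetical example: 5 seizures, each classified focal vs. generalized
# onset by 4 independent raters.
ratings = [
    ["focal", "focal", "focal", "focal"],
    ["generalized", "generalized", "generalized", "generalized"],
    ["focal", "focal", "focal", "generalized"],   # one rater disagrees
    ["generalized", "generalized", "generalized", "generalized"],
    ["focal", "focal", "focal", "focal"],
]
pa = percent_agreement(ratings)   # 4 of 5 seizures unanimous -> 0.8
kappa = fleiss_kappa(ratings)     # roughly 0.80, "substantial" range
```

Note that when every rater assigns every seizure to the same category, chance agreement equals 1 and kappa is undefined; this corresponds to the "lack of variability" condition under which kappa could not be computed for 2 of the 12 decisions.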
A percent agreement of 80% or better is considered very good. For kappa, however, .61 to .80 is considered “substantial agreement,” .41 to .60 “moderate agreement,” and .21 to .40 “fair agreement.”

Results: Results are presented in Table 1. Of the 12 decisions, percent agreement among the raters was 80% or higher for 10; the remaining two were at least 73%. For kappa, 7 decisions were in the “substantial” range, 2 were “moderate,” and only one was “fair.” (Kappa could not be computed for 2 decisions due to lack of variability.)

Conclusions: Overall, this classification system showed very good agreement among raters, particularly considering that agreement was calculated across 4 independent raters. These data demonstrate that canine seizures can be reliably classified using a modification of the ILAE system, providing further evidence of the utility of a naturally-occurring canine model of epilepsy. They also show that when explicit operational definitions of seizures are provided, those without prior expertise in epilepsy can learn to classify seizures reliably.