The IDEA (Individuals with Disabilities Education Act) lays out the ground rules for a special education assessment fairly carefully. Among the regulations governing a case study evaluation is the requirement that assessments “are used for the purposes for which the assessments or measures are valid and reliable; are administered by trained and knowledgeable personnel; and are administered in accordance with any instructions provided by the producer of the assessments.” We bring this up because the fifth edition of the Wechsler Intelligence Scale for Children (WISC-V), one of the mainstays of cognitive testing for children, was released this past fall. So does the existence of a newer edition of such a standard battery render the previous edition invalid? It’s a good question, because not even professional psychological organizations appear to agree on when a previous edition of an exam becomes obsolete. Moreover, as a practice point, we have won cases based on the fact that a district knowingly or carelessly made decisions based on outmoded test instruments.
The Principles for Professional Ethics of the National Association of School Psychologists (NASP) state that school psychologists should be “knowledgeable about the validity and reliability of their instruments and techniques, choosing those that have up-to-date standardization data and are applicable and appropriate for the benefit of the child.” Yet the term “up-to-date” is not defined. Similarly, the ethics code of the American Psychological Association (APA) states that psychologists may not “base their assessment or intervention decisions or recommendations on data or test results that are outdated for the current purpose” and may not “base such decisions or recommendations on tests and measures that are obsolete and not useful for the current purpose.” Once again, “outdated” and “obsolete” are not defined.
According to Dr. Robert Lichtenstein, writing in a communiqué published by NASP, two academic psychologists had previously debated how quickly a new test edition should be adopted. The first argued that a one-year time limit for adoption of new tests should be added to the NASP ethical principles. Conversely, a different academic argued that such a rule would be “ill-advised” because it would deny independent verification of the validity of the new test. Dr. Lichtenstein concluded that “a rigid rule of thumb should not supplant professional judgment.” It takes time for professionals to evaluate a new test, order it, and familiarize themselves with its administration and scoring protocols. Thus, allowing time to ensure the validity and appropriateness of a new test, as well as weighing the budget demands for purchase and staff training, are all valid reasons to delay immediate adoption.
So let’s go back to the newly revised WISC-V. The most recent iteration of the IDEA clearly states that an evaluation for a specific learning disability “must not require the use of a severe discrepancy between intellectual ability and achievement for determining whether a child has a specific learning disability,” the so-called discrepancy formula. However, the IDEA does not forbid cognitive testing as long as a response to intervention model is used along with it. Even without the discrepancy model, and beyond LD determinations, a valid WISC is an important instrument in making sound educational decisions. Thus many school districts, as well as private evaluators, continue to include cognitive batteries such as the WISC among the mainstays of their case study evaluations. With this in mind, if your child is undergoing a case study evaluation by your school district (or even a private evaluation), what will be different about the new edition of the WISC?
The WISC is designed to assess a student’s learning potential and ability and is used in conjunction with other tests to assess a student’s learning profile fully. Structurally, the WISC-IV was divided into four indexes: the Verbal Comprehension Index, Perceptual Reasoning Index, Working Memory Index, and Processing Speed Index. Each index comprised a series of subtests. A full scale IQ (FSIQ), the score most widely used but not necessarily indicative of intellectual function in school, is derived by combining specific subtests. The new WISC-V has five primary index scores: Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, and Processing Speed (the Perceptual Reasoning Index on the old WISC-IV was split into two indexes). Each index comprises certain subtests. The newest revision has dropped some subtests, added new ones, and updated those retained. Compared with the WISC-IV, the WISC-V FSIQ places less weight on working memory and processing speed, abilities that frequently depress FSIQ scores and can have a negative effect on placement and expectations for the student.
The five primary indexes on the WISC-V assess function within specific cognitive areas. But these scores in and of themselves don’t tell the whole story for a student. Individual subtest scores need to be examined to look for scatter within the cognitive areas. A General Ability Index (GAI), as well as ancillary and complementary index scores, can be examined to further assess a child’s strengths and weaknesses across a variety of domains and to aid in identifying learning disabilities.
This is pretty complicated material, and not many parents are trained psychologists. To help, the IDEA requires that one of the key members of an IEP team be a staff member “who can interpret the instructional implications of evaluation results.” Mark Twain famously said, “There are three kinds of lies: lies, damned lies, and statistics.” I’m not suggesting that school districts set out to deceive parents deliberately. But how scores are presented can affect how they are used. Parents don’t need a master’s degree in school psychology tucked into their back pockets, but it doesn’t hurt to have some familiarity with educational testing, including what standard scores and bell-shaped curves are. Several good sites, in particular Wrightslaw, offer tutorials for parents on understanding educational testing and statistics. Please look into this information so you can participate meaningfully in your child’s IEP process. It is also not a bad idea to check whether newer versions of the tests given have been published, which could give you a significant leg up in advocating, obtaining an IEE, and winning the case for your client or child.
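To make “standard scores and bell-shaped curves” concrete: Wechsler composite scores, including the index scores and FSIQ discussed above, are standard scores scaled to a mean of 100 with a standard deviation of 15, so any such score can be translated into a percentile rank by looking it up on the normal curve. Here is a minimal sketch in Python (the `percentile_rank` helper is our own illustration, not part of any test publisher’s materials):

```python
from statistics import NormalDist

# Wechsler-style composite scores (index scores, FSIQ) are standard
# scores with a mean of 100 and a standard deviation of 15.
iq_scale = NormalDist(mu=100, sigma=15)

def percentile_rank(standard_score: float) -> float:
    """Approximate percentage of the norming population scoring at or
    below this standard score, assuming a normal distribution."""
    return round(iq_scale.cdf(standard_score) * 100, 1)

# A score of 100 is the 50th percentile; 85 and 115 sit one standard
# deviation below and above the mean, respectively.
print(percentile_rank(100))  # 50.0
print(percentile_rank(85))   # 15.9
print(percentile_rank(115))  # 84.1
```

Seen this way, a seemingly small drop in a standard score can represent a large shift in percentile rank, which is exactly why how scores are presented matters at an IEP meeting.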