Here is the continuation of the last post to help guide parents and advocates through the testing process:
Testing Errata
Always be attuned to the possibility that test scores may be erroneous. Errors are hard to detect on the face of the report, but disorganized school personnel can be one sign of flawed evaluation reporting. A more prominent sign is test data that does not line up at all with other objective measurements of the student. One pre-K student, who was obviously very capable, received test scores indicating severe cognitive involvement. The mother asked to see the test, which required him to circle the correct answer. He had fine motor issues that prevented him from making a circle, so he answered almost none of the questions. An evaluator who was even mildly paying attention would have noticed, but it is ultimately up to the parent to detect such palpable errors.
Errors have been reported in looking up raw scores in the tables of standard scores. It is also a common mistake to have the student’s age, birth date, or grade wrong, which can have a significant effect on the results; always check these details, which should be stated at the top of the report. Adding up figures has been another source of error. The most reliable way to catch such errors is to have an outside evaluator review the test protocols, table lookups, and arithmetic to ensure that the results are accurate.
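One arithmetic check a parent or reviewer can run is recomputing the child’s chronological age at testing, since a wrong age or birth date sends the scorer to the wrong norm table. Here is a minimal sketch in Python; the dates and the reported age are hypothetical, and test publishers have their own age-calculation rules (some count partial months differently), so treat this as a first-pass check rather than any publisher’s official method:

```python
from datetime import date

def chronological_age(birth_date: date, test_date: date) -> tuple[int, int]:
    """Recompute the student's age at testing as (years, months)."""
    years = test_date.year - birth_date.year
    months = test_date.month - birth_date.month
    if test_date.day < birth_date.day:
        months -= 1  # the current month is not yet complete
    if months < 0:
        years -= 1
        months += 12
    return years, months

# Hypothetical example: the report states the student was 7 years, 4 months.
reported = (7, 4)
computed = chronological_age(date(2005, 11, 20), date(2013, 3, 15))
if computed != reported:
    print(f"Age mismatch: report says {reported}, records say {computed}; "
          "the wrong norm table may have been used.")
```

If the recomputed age disagrees with the age printed on the report, ask the evaluator to verify which norm table was actually used.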
Nonverbal versus Verbal IQ Probes
In the area of IQ testing there are two distinct types of tests: verbal and nonverbal. Some verbal tests have a nonverbal component, but that is not their focus. A study a number of years ago found that, on the WISC, 85% of students with autism scored well below average and 15% scored in the average or near-average range. The WISC is a verbally based probe, and autism is fundamentally a verbally based disability. In effect, the WISC was testing the students’ disability rather than measuring their ability. When evaluators applied separate nonverbal probes such as the Raven’s Progressive Matrices (used in the study), the ratios were reversed, with nearly 85% of students with autism testing in the average or near-average range of cognition and only 15% testing in the below-average range. Nonverbal probes typically involve no verbal instructions and require no verbal output; the testing is highly visual, a strength for many students with autism.
Other nonverbal probes include the Leiter-R, the UNIT, and the CTONI, which I ask for by name at meetings. Schools will normally resist writing in a specific test, so my fallback is to have the domains sheet reflect that “the testing will include a separate nonverbal probe, not just a subtest of a verbal instrument.” This language has worked to get the kind of testing I am looking for and, in the vast majority of cases, has resulted in improved scores and a more appropriate IEP.
Mine the Data for Meaning
Contrary to what school people represent, the numbers do not speak for themselves. If they did, we could do away with school psychologists altogether and have a technician administer the testing, which a computer would then score without any further analysis. Nothing in special education is that simple, least of all testing.
I have a colleague who administers the PAT, the Phonological Awareness Test, a very useful phonics test that does not take long to administer but can produce very useful information about the student’s phonological learning. The results are not very illuminating unless a skilled person applies the thoughtful analysis that can drive the IEP and instruction. It can be very useful to administer this test alongside the CTOPP. I had a case where the PAT revealed that the student had mastered only 2 of his 5 short vowels. His reading comprehension scores were high because he was a good guesser at the early primary level and had a lot of background knowledge.
At hearing, the district argued that it had been doing an excellent job, pointing to his comprehension scores. On cross or adverse direct I asked, “Did you read the newspaper this morning?” The answer was obviously yes. Next question: “How much of the newspaper could you have read if you only knew 2 of your 5 short vowels?” After shooting me a hateful stare, the witness answered, obviously, “not much.”
Many RtI programs are centered on DIBELS or AIMSweb, which produce fluency numbers of ____ correct words per minute (cwpm). That fluency rate standing alone is not all that useful. The much more useful questions center on error analysis: Where is the breakdown occurring in the student’s reading? Are the errors consistent? Can we draw any instructional strategies from the error patterns? I insist that questions like these be answered as part of progress monitoring and fluency probes (see the sketch below).
When addressing fluency probes, always inquire whether the material read was hot or cold; it should be cold, that is, a passage the student has not read previously. I also ask whether a timer was used or just the wall clock, and whether the probe passage was on grade level. You can download fluency rate charts to keep as a handy reference.
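To see why a bare cwpm number is only a starting point, here is a minimal sketch of scoring a timed cold read and tallying error patterns. The error categories, counts, and passage length are all hypothetical; the point is that two students with the same cwpm can present very different instructional pictures:

```python
from collections import Counter

def fluency_summary(words_attempted: int, seconds: float, errors: list[str]):
    """Score a timed cold-read probe: cwpm plus an error-pattern tally."""
    correct = words_attempted - len(errors)
    cwpm = correct / (seconds / 60)  # correct words per minute
    return round(cwpm, 1), Counter(errors)

# Hypothetical one-minute probe: 112 words attempted, 9 miscues.
cwpm, patterns = fluency_summary(
    112, 60,
    ["vowel", "vowel", "vowel", "vowel", "omission",
     "substitution", "vowel", "vowel", "omission"],
)
print(f"{cwpm} cwpm; error patterns: {dict(patterns)}")
# The cluster of vowel errors suggests an instructional target that the
# raw rate of 103 cwpm, standing alone, would never reveal.
```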
I also ask whether the district uses testing such as MAP testing for reading from the NWEA (Northwest Evaluation Association), which produces a Lexile measure. Wikipedia defines it as follows: “The Lexile Framework for Reading is an educational tool that uses a common measure called a Lexile to match readers of all ages with books, articles and other leveled reading resources.
“Recognized as the most widely adopted measure of reading ability, more than 28 million Lexile measures are reported from reading programs and assessments annually. Thus, about half of U.S. students in grades 3 through 12 receive a Lexile measure each year. Lexile measures are being used across schools in all 50 states and abroad.
“What makes the Lexile Framework unique, and what has led to its widespread adoption, is that it measures both reading ability and text difficulty on the same developmental scale. When Lexile text measures are used together with Lexile reader measures, students, parents and educators are able to choose books and other targeted materials that are at a moderately challenging level and optimize improvement in reading skills.”
It can be incredibly useful to tie reading goals to standard measures like a Lexile, both to increase the measurability of the goals and to give parents, via lexile.com, useful information about the kinds of books their child should be reading.
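As a rough illustration of how reader and text measures work on the same scale, here is a minimal sketch assuming the commonly cited “targeted” band of roughly 100L below to 50L above the reader’s measure; the reader measure and book list are illustrative only, and lexile.com is the authoritative source for actual measures and ranges:

```python
def targeted_range(reader_lexile: int, below: int = 100, above: int = 50):
    """Return the (low, high) Lexile band of moderately challenging texts."""
    return reader_lexile - below, reader_lexile + above

# Illustrative book list with published-style Lexile text measures.
books = {"Charlotte's Web": 680, "Holes": 660, "Hatchet": 1020}

reader = 700  # e.g., a reader measure reported by MAP testing
low, high = targeted_range(reader)
matches = [title for title, text in books.items() if low <= text <= high]
print(f"Targeted band {low}L-{high}L: {matches}")  # picks the first two books
```

A goal written as “student will comprehend independently read text at 700L” is far more measurable than “student will improve reading comprehension.”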
Some Common Mistakes that Parent Evaluators Commit
1. Not sharing reports with the school, or not doing so in a timely way. The common refrain is, “The school has not been open, so why should I be?” The answer is simple and unfair: there is a double standard. Hearing officers hold parents to a higher standard; a parent’s lack of candor and openness will be punished, while the district’s same conduct will typically be excused. It is simply a losing proposition for parents to play hide-the-report.
2. Notwithstanding point 1 above, read the report before you give it to the school. If the report is poor or takes a position contrary to yours, put it in the safe deposit box and forget it. I have had too many clients hand over reports (before my watch) containing painfully bad information that can take months or years to undo and overcome, if it ever can be. You cannot unring that bell, so be careful!
3. Make sure the evaluator has made every effort to solicit information from school personnel in writing, on the telephone, in person, and through work product and school records. The whole point of the evaluation is that it be educationally relevant; evaluations that fall prey to the charge of being “merely clinical” are not worth much at all. Some evaluators need to be reminded of the audience for their reports and letters: they are not justifying treatment to an HMO; the school and its attorney are the parties that ultimately need to be convinced.
4. Evaluators need to understand the legal context of their reports. I have seen too many reports that use words like “best” and “optimal”; insist that the report use words like “appropriate” and “necessary” instead. Moreover, keep the language of the report on the ground. One psychiatrist’s testimony in a case of mine (which we ultimately won, securing residential placement for the student despite his testimony) was so academic and erudite that, despite my enormous preparation for the case and a prior interview with the witness, I barely understood his technical jargon. I think the hearing officer ruled for us even though he got only the flavor of the testimony and not much of the content.
5. Small details can be deadly to a report. In one hearing, the parent’s evaluator forgot to sign and date the report. The school’s counsel turned these minor omissions into something tantamount to a Watergate cover-up, the hearing officer bought it, and the omissions hurt us badly at hearing. I have also had evaluators delegate parts of the work to interns or subordinates and then attempt to testify to the results; while the hearing officer allowed the testimony, it did not help the evaluator’s credibility.
6. Observations are critical to the weight and credibility of a report. In Illinois we finally have an observation law, as do some other states. I am mindful that schools do their level best to bar or limit observations. We need to do everything we can to fight for observations (e.g., the “equal firepower” language from Schaffer v. Weast) and to document our efforts, so that we can rebut, or even bar, cross-examination premised on the witness’s failure to observe. Even if you cannot get observations, the evaluator should still seek input via questionnaires or a review of records and work product. Sometimes observing a tutoring session can be a partial defense to the charge that the report is “just clinical” and not relevant to the educational process.
7. Test publishers have strict guidelines regarding test-retest periods. If the school has given a WIAT or a WISC, make sure your evaluator does not give the same instrument until the retest period, typically at least 12 months, has passed. Similarly, make sure your testing will not invalidate the school’s testing. It may feel strategic, but at hearing it will not go down well if you are perceived to have undermined the school’s testing.
8. Make sure all preconditions to testing are in place. Specifically, make sure your evaluator knows your child’s age and grade so the wrong edition is not used, and make sure hearing and vision screenings are done before testing gets underway; if the child is not hearing or seeing properly, that can undermine the validity of the testing. Similarly, make sure the school’s testing is done only after hearing and vision problems are ruled out as confounding factors.
Useful references:
The DSM-IV (soon to be the DSM-V), the Standards for Educational and Psychological Testing, and www.iqscorner.com and its blogroll are all very useful for researching tests and testing issues.