When to update to the newest revision of a test

By Daniella Maglione, Ed.S., MS, Gail Rodin, Ph.D., and Maggie Kjer, Ph.D.
April 18, 2020 - Last updated: May 14, 2020


In our work for a major psychological test publisher, one of the questions we’re asked most frequently is, “When do I need to transition to the latest revision of a test?”

Neither our employer nor – to the best of our knowledge – any other major test publisher makes a recommendation with regard to this question. Instead, publishers urge test users to look to their professional associations for guidance, most frequently the American Psychological Association (APA) and the National Association of School Psychologists (NASP).

What information do these and other organizations provide on the topic?

APA’s Ethical Principles of Psychologists and Code of Conduct offers the following guidelines:

9.08 Obsolete Tests and Outdated Test Results
(a) Psychologists do not base their assessment or intervention decisions or recommendations on data or test results that are outdated for the current purpose.
(b) Psychologists do not base such decisions or recommendations on tests and measures that are obsolete and not useful for the current purpose.

The NASP Principles for Professional Ethics also state that when utilizing norm-referenced measures, school psychologists should “choose instruments with up-to-date information.”

Some additional justification for this advice is provided in NASP Best Practices in School Psychology IV, Volume 2 (2002), which says:

“The newest revision and most recent norms for a test should be used because recent studies show that the stringency of norms changes over time and more recent norms typically are tougher than older norms. The now well-known Flynn Effect must be considered to avoid the undue effects of out-of-date norms.”

James R. Flynn, a New Zealand researcher on intelligence, concluded after examining the intellectual level of the U.S. population over a 46-year period that the average national gain is 0.33 IQ points per year. This indicates that over a 10-year span, IQ scores should differ by approximately 3 points.
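To make the arithmetic behind that figure explicit, here is a minimal sketch (our own illustration; the function name is hypothetical, and the 0.33-point annual gain cited above is treated as constant for simplicity) that multiplies the assumed annual gain by the number of years since a test was normed:

    # Minimal sketch: approximate norm drift implied by the Flynn Effect,
    # assuming a constant gain of 0.33 IQ points per year (as discussed above).

    def expected_norm_drift(years_since_norming: float, annual_gain: float = 0.33) -> float:
        """Return the approximate IQ-point inflation expected since a test was normed."""
        return years_since_norming * annual_gain

    if __name__ == "__main__":
        for years in (5, 10, 15):
            print(f"{years} years after norming: about {expected_norm_drift(years):.1f} IQ points")
        # At 10 years this gives about 3.3 points, in line with the roughly
        # 3-point difference cited above.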

Finally, the American Educational Research Association (AERA) Standards for Educational and Psychological Testing (2014) say:

“Test specifications should be amended or revised when new research data, significant changes in the domain represented, or newly recommended conditions of test use may reduce the validity of test score interpretations. Although a test that remains useful need not be withdrawn or revised simply because of the passage of time, test developers and test publishers are responsible for monitoring changing conditions and for amending, revising, or withdrawing the test as indicated.” (Standard 4.24, under Standards for Test Revision)

The comment for the above AERA Standard adds:

“Test developers need to consider a number of factors that may warrant the revision of a test, including outdated test content and language, new evidence of relationships among measured or predicted constructs, or changes to test frameworks to reflect changes in curriculum, instruction, or job requirements. If an older version of a test is used when a newer version has been published or made available, test users are responsible for providing evidence that the older version is as appropriate as the new version for that particular test use.”

While all of these professional resources are in agreement regarding the importance of, and reasons for, using the most up-to-date test measures, none specifies a time frame for compliance.

In the absence of any clear rule regarding transition time, a professional consensus (sometimes referred to as a “community standard”) has solidified, suggesting that the switch should be made within one year of publication. An article titled “Ethical Standards and Best Practices in Using Newly Revised Tests” by Stefan Dombrowski states:

“The profession has instead established a community standard for the transition to newly revised IQ instruments: Ranging from six months to one year, this transition period has been tacitly agreed upon by trainers of school psychologists and other leaders in the field.”

There is also a lack of consensus among states’ special education regulations: some states allow for a one-year transition period, while others do not address the issue at all.

For example, the Florida Department of Education has no written policy specifying a timeline for transitioning to a new revision of an instrument. However, districts typically use one year as a timeline, which they erroneously believe conforms to test publishers’ recommendations.

While there is a dearth of published ethical or legal guidance for psychologists on the question of when we “must” switch to the most recent revision of a test, there is evidence of an “unspoken rule” in legal decisions regarding the use of assessments for special education eligibility determination and placement.

Hearing officers from several states have invalidated evaluations because the psychologist utilized an outdated version of a test, including an important decision affecting the Boston Public School System.

Similarly, a colleague who is a member of her state’s psychology licensing board noted that in her experience, “If a school district has a litigious client population (i.e., increased risk of ending up in a due process hearing), their assessments will be more ‘legally defensible’ if they have utilized the most current version of the tests.”

In summary, one must look closely at the various relevant ethical guidelines and best practices provided by professional organizations, as well as state guidelines, in order to make an informed decision regarding when to upgrade to a new revision of a test.

Ultimately, it is our responsibility as professionals to make this decision, in order to assure that our test results are accurate and to maximize the likelihood that they will contribute to the most appropriate diagnostic and treatment decisions for the clients we serve.

References available from authors


The authors are solutions analysts for Pearson. Daniella Maglione, Ed.S., MS, is a Florida licensed school psychologist and certified school psychologist. She specializes in national licensing models for Pearson. Her email address is: Daniella.Maglione@pearson.com. Gail Rodin, Ph.D., is a licensed psychologist and health services provider in psychology (HSP-P) in North Carolina. Her email address is: Gail.Rodin@Pearson.com. Maggie Kjer, Ph.D., is a certified director of special education specializing in emotional and behavioral disorder program development.
