There is a tendency in education for apparently logical solutions to backfire. The NAPLAN system is a case in point. It was launched with great fanfare a decade ago as a tool to "drive improvements in student outcomes and provide increased accountability to the community", according to its website. This sounds fair enough, but after 10 costly years virtually everyone who actually understands NAPLAN opposes it.
Federal Education Minister Simon Birmingham rejected on Friday the call of his NSW colleague Rob Stokes to abandon NAPLAN, but in light of the release of the Gonski 2.0 report this week, and a review of aspects of the test later this year, there is an opportunity to re-examine it and the consequences it has wrought on education.
NAPLAN has not achieved its aims, and it has damaged education in the process.
For starters, NAPLAN was poorly designed. International expert Les Perelman, who reviewed NAPLAN for the NSW Teachers Federation, has called it "one of the strangest writing tests I’ve ever seen ... It’s measuring all the wrong things". For those interested, his report is detailed, and it is not the first of its kind.
Our teachers have observed other problems, such as students being tested on material not yet due to be taught, and distinctly odd marking criteria. For instance, one quarter of the marking criteria for writing in 2017 was not achieved by any Year 3 student in NSW, and one third was achieved by fewer than 5 per cent. Year 5 results were similar. It is hard to believe that a well-designed test would set standards that could not be met by a single student in NSW, regardless of talent, schooling or background.
The limits of statistical analysis are another major hindrance. Accurate statistics require a large population. If NAPLAN were well designed, we might use it to draw conclusions about education in NSW or Australia. But at the level of an individual, a class or a school, its results are so uncertain as to be almost useless.