Thanks for your links. This is a confusing point that we've had several debates over, and I agree with your assumption that the vertical scaling of the scores would lend itself to this type of analysis. We are consulting with New Leaders, who provided us with the snippet below (emphasis is mine). I just reached out to get the source and will share it when they get back. Thanks, David
"Each student who completes the Smarter Balanced summative assessment receives a total scale score and associated achievement level for each content area. Scale scores are the basic unit of reporting. A scaled score is derived from a total number of obtained score points that is statistically adjusted and converted into a consistent, standardized scale that permits direct and fair comparisons of scores from different forms of a test either within the same administration year or across years (Tan & Michel, 2011). Established psychometric procedures are used to ensure that a given scale score represents the same level of performance regardless of the test form. For example, if a student receives a scale score of 2570 on the Grade 6 mathematics test and another student earns a 2570 on the Grade 6 mathematics test the following year, the scaling process ensures that both scores represent the same level of performance. Scale scores are especially suitable for comparing the performance of different groups of students in the same grade from year to year and for maintaining the same performance standard across the years. While scale scores are comparable across tests in a given content area within the same grade, they are not comparable across content areas or grades. For instance, a scale score on the mathematics test should not be compared with a scale score on the ELA/literacy test, nor should a scale score on a Grade 3 test be compared with a scale score on a Grade 4 test."
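For what it's worth, the comparability rules in that snippet can be made concrete with a small sketch. This is purely illustrative (the class and function names are mine, not from Smarter Balanced): scale scores may be compared within the same grade and content area, even across administration years, but not across grades or content areas.

```python
# Illustrative sketch only -- names are hypothetical, not a Smarter
# Balanced API. Encodes the comparability rules from the quoted snippet:
# same grade + same content area => comparable (across years is fine);
# different grade or content area => not comparable.

from dataclasses import dataclass


@dataclass(frozen=True)
class ScaleScore:
    score: int    # e.g. 2570
    grade: int    # e.g. 6
    subject: str  # e.g. "math" or "ela"
    year: int     # administration year


def comparable(a: ScaleScore, b: ScaleScore) -> bool:
    """Scores compare only within the same grade and content area.
    Year may differ, per the quoted guidance."""
    return a.grade == b.grade and a.subject == b.subject


def score_change(a: ScaleScore, b: ScaleScore) -> int:
    """Difference b - a, refusing invalid comparisons."""
    if not comparable(a, b):
        raise ValueError(
            f"Cannot compare grade {a.grade} {a.subject} "
            f"with grade {b.grade} {b.subject}"
        )
    return b.score - a.score


# Valid: Grade 6 math compared across two years.
print(score_change(ScaleScore(2570, 6, "math", 2022),
                   ScaleScore(2595, 6, "math", 2023)))  # 25

# Invalid: Grade 3 vs. Grade 4, or math vs. ELA -- score_change raises.
```

The takeaway for our analysis would be that year-over-year cohort comparisons within a grade are on solid ground, while growth-style comparisons that follow a student from one grade to the next are exactly what the snippet warns against.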