To what extent does individual student change (growth) over the academic year statistically explain why students differ in end-of-year performance after accounting for performance on interim assessments? The four growth estimates examined in this report (simple difference, average difference, ordinary least squares, and empirical Bayes) all contributed significantly to predicting performance on the end-of-year criterion-referenced reading test when performance on the initial (fall) interim assessment was used as a covariate. The simple difference growth estimate was the best predictor when controlling for mid-year (winter) status, and all but the simple difference estimate contributed significantly when controlling for final (spring) status. Quantile regression suggested that the relations between growth and the outcome were conditional on the outcome, implying that traditional linear regression analyses could mask the predictive relations.
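The four growth estimates named above can be sketched in a few lines of numpy, under strong simplifying assumptions: three equally spaced interim scores (fall, winter, spring) on a common scale, and an "empirical Bayes" step reduced to fixed shrinkage of each student's slope toward the group-mean slope, rather than the mixed-model estimator the report actually uses. All names and score values are illustrative.

```python
import numpy as np

def growth_estimates(scores, times=(0.0, 0.5, 1.0), shrink=0.5):
    """scores: (n_students, n_waves) array of fall/winter/spring scores."""
    scores = np.asarray(scores, dtype=float)
    t = np.asarray(times, dtype=float)

    simple_diff = scores[:, -1] - scores[:, 0]        # spring minus fall
    avg_diff = np.diff(scores, axis=1).mean(axis=1)   # mean of adjacent gains

    # Per-student OLS slope of score on (shared, centered) time points.
    tc = t - t.mean()
    ols_slope = (scores * tc).sum(axis=1) / (tc ** 2).sum()

    # Toy "empirical Bayes": shrink each slope toward the mean slope.
    eb_slope = shrink * ols_slope + (1.0 - shrink) * ols_slope.mean()
    return simple_diff, avg_diff, ols_slope, eb_slope

scores = np.array([[200, 205, 214],
                   [210, 212, 212],
                   [195, 204, 211]])
sd, ad, ols, eb = growth_estimates(scores)
```

With equally spaced waves the OLS slope coincides with the fall-to-spring difference per year, which is one reason the estimates can compete so closely as predictors.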
ETS Research Report Series, 2010
Recently, growth-based approaches to accountability have received considerable attention because they have the potential to reward schools and teachers for improving student performance over time by measuring the progress of students at all levels of the performance spectrum (including those who have not yet reached proficiency on state accountability assessments). While the use of growth in accountability holds promise for students with disabilities, measuring changes over time in their academic performance is complex. This paper summarizes models and approaches that use individual student test scores from multiple years for 3 different purposes: determination of adequate yearly progress under the federal accountability system, research on individual growth trajectories, and evaluation of the contribution of teachers and schools to student learning. Practical challenges in measuring and modeling growth for students with disabilities are then discussed. Finally, 3 areas in need of research on the measurement of growth from large-scale annual accountability assessments are identified and described: testing accommodations, test difficulty, and understanding the longitudinal characteristics of the population of students with disabilities.
Educational Research and Evaluation
Anneke Timmermans & Greetje van der Werf (2017). Accounting for previous performance of students by means of growth curves analyses to estimate the size, stability, and consistency of school effects.
Educational and Psychological Measurement
Interim assessments are increasingly common in U.S. schools. We use high-quality data from a large-scale school-level cluster randomized experiment conducted in Indiana during the 2009-2010 school year to examine the impact of two well-known commercial interim assessment programs (mCLASS and Acuity) on mathematics and reading achievement. Quantile regression was used to estimate treatment effects across the entire distribution of achievement (i.e., the middle, lower, and upper tails). Results indicate that the treatment effects are positive, but not consistently significant. The treatment effects are smaller in lower grades (i.e., kindergarten to second grade) and larger in upper grades (i.e., third to eighth grade). Significant treatment effects are detected in grades three to six and three to eight. There is some evidence that in certain grades (third, fifth, and sixth) low achievers may benefit more from Acuity than higher-achieving students. Motivated by the passage of the No Child Left Behind (NCLB) Act, all states operate accountability systems that measure and report school and student performance annually. The NCLB accountability mandate further resulted in a plethora of assessment-based school interventions aimed at improving student performance (Bracey, 2005; Sawchuk, 2009). Among the assessment-based solutions offered to improve student performance are periodic assessments variously known as benchmark, diagnostic, or interim assessments (Perie, M.,
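The quantile-by-quantile view of treatment effects used above can be illustrated with a simplification: for a binary treatment with no covariates, the quantile-regression coefficient at a given tau reduces to the difference between treatment- and control-group quantiles. The simulated scores below (larger boost for low scorers) are purely illustrative, not the Indiana data.

```python
import numpy as np

def quantile_treatment_effects(y_treat, y_ctrl, taus=(0.25, 0.5, 0.75)):
    """Difference of group quantiles at each tau (binary treatment,
    no covariates -- the simplest case of quantile regression)."""
    return {tau: float(np.quantile(y_treat, tau) - np.quantile(y_ctrl, tau))
            for tau in taus}

rng = np.random.default_rng(0)
ctrl = rng.normal(500.0, 50.0, 10_000)
base = rng.normal(500.0, 50.0, 10_000)
treat = base + np.where(base < 500.0, 15.0, 6.0)  # bigger boost for low scorers

qte = quantile_treatment_effects(treat, ctrl)
```

In this setup the estimated effect at the 0.25 quantile exceeds the effect at the 0.75 quantile, mirroring the pattern the study reports for Acuity in certain grades.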
2009
The measurement of student academic growth is one of the most important statistical tasks in an educational accountability system. The current methods of measuring student growth adopted in most states have various drawbacks in terms of sensitivity, accuracy, and interpretability. In this thesis, we apply the conditional growth chart method, a well-developed diagnostic tool in pediatrics, to student longitudinal test data to produce descriptive and diagnostic statistics about students' academic growth trajectories. We also introduce an innovative simulation-extrapolation (SIMEX) method which corrects for measurement-error-induced bias in the estimation of the conditional growth model. Our simulation study shows that the proposed method has an advantage in terms of mean squared error of the estimators, when compared with the growth model that ignores measurement error. Our data analysis demonstrates that the conditional growth chart method, when combined with the SIMEX method, can be a powerful tool in an educational accountability system. It produces more sensitive and accurate measures of student growth than the other currently available methods; it provides diagnostic information that is easily understandable to teachers, parents, and students themselves; and the individual-level growth measures can also be aggregated to the school level as an indicator of school growth.
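The SIMEX idea can be shown in a toy setting: deliberately add extra measurement error at several inflation levels lambda, watch the estimate degrade, then extrapolate the trend back to lambda = -1 (no error). This sketch corrects the attenuated slope of a simple regression with a known error variance; it is an illustration of the general technique, not the thesis's conditional-growth-chart implementation.

```python
import numpy as np

def ols_slope(x, y):
    xc = x - x.mean()
    return float((xc * y).sum() / (xc ** 2).sum())

def simex_slope(x_obs, y, err_var, lambdas=(0.5, 1.0, 1.5, 2.0),
                reps=200, seed=0):
    """SIMEX: re-add noise at each inflation lambda, average the resulting
    slopes, then extrapolate the trend back to lambda = -1."""
    rng = np.random.default_rng(seed)
    lam_grid = [0.0] + list(lambdas)
    slopes = [ols_slope(x_obs, y)]
    for lam in lambdas:
        sd = np.sqrt(lam * err_var)
        slopes.append(np.mean([
            ols_slope(x_obs + rng.normal(0.0, sd, x_obs.size), y)
            for _ in range(reps)]))
    coef = np.polyfit(lam_grid, slopes, 2)   # quadratic in lambda
    return float(np.polyval(coef, -1.0))     # extrapolate to "no error"

rng = np.random.default_rng(1)
x_true = rng.normal(0.0, 1.0, 4000)
y = 2.0 * x_true + rng.normal(0.0, 1.0, 4000)       # true slope is 2
x_obs = x_true + rng.normal(0.0, np.sqrt(0.5), 4000)  # noisy predictor

naive = ols_slope(x_obs, y)                    # attenuated toward zero
corrected = simex_slope(x_obs, y, err_var=0.5)  # pulled back toward 2
```

The quadratic extrapolant does not fully undo the attenuation (the exact relationship is hyperbolic in lambda), but it moves the estimate substantially closer to the true slope.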
This research brief describes a model that correctly predicts reading performance (basic vs. proficient or above) on the Maryland School Assessment (MSA) for approximately 91% of students.
2020
The impetus for this paper can be traced back to two simple questions that most parents likely ask at some point during their child's education: 1) How much has my child learned? 2) Is the amount my child has learned good enough? These are straightforward and intuitively important questions about growth. The first asks about the magnitude of growth; the second asks about criteria for judging the amount of growth. While the questions may be intuitive, the psychometric and statistical gymnastics involved in coming up with a defensible answer are not. In what follows we pose a research question that is also deceptively straightforward: Do interpretations of student growth depend on the way longitudinal test scores have been scaled? This paper provides a theoretical and empirical context in which one can give a provisional answer of no to this question. We show that when growth interpretations are made normatively, they appear insensitive to m...
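The paper's scale-sensitivity point can be illustrated with two hypothetical students: a monotone rescaling of the score scale can reverse which student "grew more" in raw gain-score terms, while rank-based (normative) statements about status survive the transform untouched. The exponential stretch below is an arbitrary choice standing in for any change of scaling.

```python
import numpy as np

pre  = np.array([200.0, 260.0])   # students A and B, fall scores
post = np.array([230.0, 280.0])   # spring scores

rescale = lambda s: np.exp(s / 100.0)  # an arbitrary monotone rescaling

raw_gain = post - pre                          # A gains 30, B gains 20
rescaled_gain = rescale(post) - rescale(pre)   # now B's gain is larger

# Rank-based statements about status are invariant to the rescaling:
rank_post = np.argsort(np.argsort(post))
rank_post_rescaled = np.argsort(np.argsort(rescale(post)))
```

On the original scale A outgrows B; after the stretch B "outgrows" A, yet B's status rank relative to A is identical on both scales, which is the sense in which normative interpretations are insensitive to scaling.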
Educational Measurement: Issues and Practice, 2009
Annual student achievement data derived from state assessment programs have led to widespread enthusiasm for statistical models suitable for longitudinal analysis. In response, the United States Department of Education recently solicited growth model proposals from states as a means of satisfying NCLB adequate yearly progress requirements. Given the current policy environment's rigid adherence to NCLB's universal proficiency mandate, the preponderance of models thus far proposed maintain compliance by estimating future (i.e., projected) student achievement. Referred to as the "growth-to-standard" approach, these criterion-referenced growth models designate whether a student is "on track to being proficient" and use this designation as evidence of school quality. This paper begins by situating current growth-to-standard approaches within a larger domain of statistical models including those based solely upon achievement as well as more traditional growth models. Within this context, we demonstrate that current growth-to-standard approaches present an impoverished view of student progress because they lack a normative foundation. To remedy this, student growth percentiles are introduced as a normative description of growth capable of accommodating, informing, and extending criterion-referenced aims like those embedded within NCLB.
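A student growth percentile answers "where does this student's current score rank among students who started from a similar place?" Operational SGPs are estimated with quantile regression on prior-score histories; the sketch below substitutes a crude nonparametric version that simply bins students by prior score, on simulated data, to convey the idea.

```python
import numpy as np

def growth_percentile(prior, current, prior_all, current_all, window=10.0):
    """Percentile rank of `current` among students whose prior score is
    within `window` points of `prior` (a binned stand-in for the
    quantile-regression estimator used operationally)."""
    peers = current_all[np.abs(prior_all - prior) <= window]
    return 100.0 * np.mean(peers <= current)

rng = np.random.default_rng(0)
prior_all = rng.normal(500.0, 40.0, 6000)
current_all = 0.8 * prior_all + 110.0 + rng.normal(0.0, 20.0, 6000)

# A student at the conditional mean grows at roughly the 50th percentile;
# one far above it earns a high growth percentile.
sgp_typical = growth_percentile(500.0, 510.0, prior_all, current_all)
sgp_high = growth_percentile(500.0, 550.0, prior_all, current_all)
```

Because the measure is purely normative, it describes growth relative to academic peers regardless of whether either student has crossed a proficiency cut score.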
International Journal of Quantitative Research in Education, 2013
It is a commonly accepted assumption by educational researchers and practitioners that an underlying longitudinal achievement construct exists across grades in K-12 achievement tests. This assumption provides the necessary assurance to measure and interpret student growth over time. However, evidence is needed to determine whether the achievement construct remains consistent or shifts over grades or time. The current investigative study uses a multiple-indicator latent growth modelling (MLGM) approach to examine the longitudinal achievement construct and its invariance for the Measures of Academic Progress® (MAP®), a computerised adaptive test in reading and mathematics. The results of the analyses from ten states suggest that with repeated measures, the construct of both MAP reading and mathematics remained consistent at different time points. The findings support the achievement construct's invariance throughout different grades or time points and provide empirical evidence for measuring student growth.
Exceptional Children, 2015
Students with significant cognitive disabilities (SWSCDs) have been included in alternate assessments as part of their states' large-scale testing programs since 2004. During the past decade, a considerable amount of research has been published on various aspects of alternate assessments based on alternate achievement standards (AA-AAS). However, for a number of technical and conceptual reasons, very little is known about the achievement growth of these students as measured by an AA-AAS. The purpose of the present study was to examine the reading achievement growth of SWSCDs on Oregon's AA-AAS using two approaches to determining growth: a transition matrix model and a multilevel linear growth model. This research is critical in that any attention to growth requires technical adequacy in the development and implementation of alternate assessments. We first consider critical issues for alternate assessments that are uniquely defined for this population of students and then review the limited research that has been conducted on growth for SWSCDs.
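Of the two approaches named above, the transition matrix model is simple to sketch: cross-tabulate each student's performance level in year 1 against year 2 and read growth off the movement between levels. The three levels and the tiny cohort below are illustrative, not Oregon's actual AA-AAS levels or data.

```python
import numpy as np

LEVELS = 3  # e.g., 0 = below, 1 = approaching, 2 = meets (illustrative)

def transition_matrix(year1, year2, levels=LEVELS):
    """Row-normalized matrix: P[a, b] = share of students at level a in
    year 1 who were at level b in year 2."""
    m = np.zeros((levels, levels))
    for a, b in zip(year1, year2):
        m[a, b] += 1
    return m / m.sum(axis=1, keepdims=True)

year1 = [0, 0, 1, 1, 2, 2, 0, 1]
year2 = [0, 1, 1, 2, 2, 1, 1, 1]
P = transition_matrix(year1, year2)
up = sum(b > a for a, b in zip(year1, year2)) / len(year1)  # share moving up
```

Unlike the multilevel linear growth model, this view needs no vertical scale across years, which is part of its appeal for populations where vertically scaled scores are hard to defend.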
