### Types of Student Growth Models

Everyone involved in K-12 education has probably heard the term Adequate Yearly Progress (AYP). AYP is a measure of whether a school is making sufficient academic progress with its students. As student growth measurement becomes more widespread, states have begun incorporating growth models into their AYP calculations. Because these models differ from state to state, it is important to know which kind of model your state uses to determine how effective your school is.

Currently, there are three main categories of student growth models: growth-to-proficiency, transition-matrix, and projection. Matrix clients today include states that have adopted the growth-to-proficiency model. As Matrix’s user base grows, so too will support for the transition-matrix and projection models.

All of these models rely on the idea of proficiency: a level of student achievement on test scores that indicates adequate ability. Students below this cutoff are said to be sub-proficient. Some models use additional achievement levels, such as weak sub-proficient (lowest), low marginal sub-proficient, high marginal sub-proficient, proficient, and advanced (highest), but the most important cutoff in all of these models is the proficient level.

The growth-to-proficiency, or trajectory, model expects a student below proficiency to become proficient within some set amount of time. It compares current student achievement to recommended student achievement (i.e., the proficient cutoff) and requires that these two values meet within a few years (typically 3-4). This model awards AYP points if a student has met proficiency or is on track to meet it. The problem with this model is that it expects sub-proficient students to make larger-than-typical gains every year just to be considered “on track.”
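The on-track arithmetic can be sketched in a few lines. This is a minimal illustration, not any state’s actual formula; the function names, the scale scores, and the four-year window are assumptions for the example:

```python
def required_yearly_gain(current_score, proficient_cutoff, years_remaining):
    """Scale-score points a sub-proficient student must gain per year
    to reach the proficiency cutoff within the remaining window."""
    if current_score >= proficient_cutoff:
        return 0.0  # already proficient: no catch-up growth required
    return (proficient_cutoff - current_score) / years_remaining


def is_on_track(current_score, proficient_cutoff, years_remaining, observed_gain):
    """A student is 'on track' if their observed yearly gain meets
    the required pace toward proficiency."""
    needed = required_yearly_gain(current_score, proficient_cutoff, years_remaining)
    return observed_gain >= needed


# A student 100 points below the cutoff with 4 years left must average
# 25 points per year -- likely well above a typical year's growth.
print(required_yearly_gain(300, 400, 4))           # → 25.0
print(is_on_track(300, 400, 4, observed_gain=15))  # → False
```

Note how the required pace grows with the size of the initial deficit: the further behind a student starts, the more above-average growth the model demands each year.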

A sub-proficient student must therefore make up quite a bit of additional ground each year to become proficient in time. This presents a conundrum: a teacher who achieves the same growth with their sub-proficient students as with their proficient students is effectively punished for not producing above-average growth. The model may thus encourage teachers to create curricula that cater to, and pay disproportionate attention to, sub-proficient students. Furthermore, because the model does not credit growth above the proficiency line, a teacher has no incentive to ensure that a proficient or advanced student grows at all -- only to make sure they do not fall below proficiency.

The transition-matrix model uses a table of sub-proficiency levels and rewards students who grow from one achievement level to the next. Students who grow at least one level are considered “on track” for AYP purposes. Some versions of the model award full points for sub-proficient student growth; others award only a fraction of the points. In my opinion, a transition-matrix table that uses partial points, such as Delaware’s, misses the point of growth entirely -- that students should not be punished for starting at a low achievement level. A transition-matrix table similar to the one used in Delaware’s model is shown below.

Transition-matrix models (at least those that award full points for all growth) alleviate one of the problems of growth-to-proficiency models: sub-proficient but normal growth is treated as the solid achievement it is, and students are not punished for starting low.

One disadvantage of this model is shared with the growth-to-proficiency model: proficient or higher students are not rewarded for growth at all. Again, this can cause teachers to pay less attention to these students.
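The scoring logic behind a transition matrix amounts to a simple lookup on the student’s old and new levels. The level names, the full-credit rule, and the 0.5 partial-credit value below are illustrative assumptions for the sake of the example, not Delaware’s actual table values:

```python
# Achievement levels from lowest to highest (illustrative names).
LEVELS = ["weak", "low_marginal", "high_marginal", "proficient", "advanced"]
PROFICIENT_INDEX = LEVELS.index("proficient")


def ayp_points(prev_level, curr_level, partial_credit=False):
    """AYP credit for one student's year-over-year level transition."""
    prev_i = LEVELS.index(prev_level)
    curr_i = LEVELS.index(curr_level)
    if curr_i >= PROFICIENT_INDEX:
        return 1.0  # proficient or advanced always earns full credit
    if curr_i > prev_i:
        # Sub-proficient growth of at least one level: full-credit tables
        # award 1.0, while partial-credit (Delaware-style) tables discount it.
        return 0.5 if partial_credit else 1.0
    return 0.0  # no growth (or decline) below proficiency earns nothing


print(ayp_points("weak", "low_marginal"))                       # → 1.0
print(ayp_points("weak", "low_marginal", partial_credit=True))  # → 0.5
```

The `partial_credit` flag is where the two flavors of the model diverge: with it set, the same one-level gain by a low-starting student is worth only half as much toward AYP.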

Projection models are more complicated, predictive models. They take cohort test scores (from previous classes) into account and compare them to this year’s class scores. This model properly credits growth regardless of a student’s achievement level (which reduces the ability to game the system), accounts for year-to-year changes within a subject area (for example, stunted growth in math across the board when students encounter algebra), and has the best predictive ability of all the models, with upwards of 80% correct future-proficiency prediction.

The s points represent the current student’s yearly scores, and the p points represent the proficiency cutoff for each year. The solid line combines the student’s data (the s points) with regression analysis against the p points to determine where the student is likely to land next year. The student shown is currently in grade 5 and sub-proficient, but the model considers them “on track,” since they are projected to achieve proficiency in grade 6.
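A bare-bones version of the projection step can be sketched as an ordinary least-squares line fit through the student’s past scores, extrapolated one year ahead. Real projection models also fold in cohort data and confidence bands, which this sketch omits; all names and numbers here are illustrative:

```python
def project_next_score(scores):
    """Fit a least-squares line through (year, score) pairs and
    extrapolate one year past the last observation."""
    n = len(scores)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # predicted score for next year


def projected_on_track(scores, next_year_cutoff):
    """'On track' if the projected score meets next year's cutoff."""
    return project_next_score(scores) >= next_year_cutoff


# Grade 3-5 scores rising 10 points per year project to 130 in grade 6,
# so this student clears a hypothetical grade-6 cutoff of 125.
print(project_next_score([100, 110, 120]))       # → 130.0
print(projected_on_track([100, 110, 120], 125))  # → True
```

Even this toy version shows why the approach is harder to explain than a cutoff table: the “on track” verdict now depends on a fitted trend rather than a single observed score.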

The solid predictive power of projection models comes at a large cost in complexity. While a graph like this may be explainable to administrators, principals, and teachers, the regression analysis behind it is not. The model is harder to explain and can feel “less accountable” than a simpler one. Furthermore, the projection uses probability to determine the most likely outcome for a student, and probability means uncertainty in the measurements: a student’s projection might be too high or too low, which can place either too many or too few students in the “on track” category for AYP.

All three models have their upsides and downsides, and hopefully states new to student growth will choose a model that’s right for them.

### Further Reading

Auty, William, et al. Implementer’s Guide to Growth Models. CCSSO: The Council of Chief State School Officers. June 2008. http://www.ccsso.org/Documents/2008/Implementers_Guide_to_Growth_2008.pdf

Hoffer, Thomas B., et al. Final Report on the Evaluation of the Growth Model Pilot Project. U.S. Department of Education. January 2011. http://www2.ed.gov/rschstat/eval/disadv/growth-model-pilot/index.html
