Saturday, January 11, 2014

Market theory and Chicago Public high schools

Daniel Kay Hertz has a great post up at his website (tweeted by WBEZ and reproduced on Catalyst) regarding market trends for Chicago Public Schools high schools. The major charts of the piece are reproduced at right: basically, when one looks at the district as a whole, it appears, at least at face value, that students and families are choosing to move from schools with lower overall ACT scores to ones with higher ACT scores.

However, when Hertz breaks the data down between charter and non-charter CPS schools, things get murkier. Students are, in general, choosing "better" non-charter schools over "worse" non-charter schools. But within the subset of charter schools, students and families appear, statistically, to be choosing at random.

There are a few issues with the way the analysis was done (as I understand it).

  1. Including selective enrollment schools in the pool of data will skew the results to make it look like there is not much self-selection of students toward the top schools in CPS. This is, of course, patently false. If parents could simply choose to send their student to Northside (with a 29) or Whitney Young (with a 27), they would. That is impossible, though, because each has an admissions rate of about 12%.
     
  2. Using ACT growth over three years would be a much better measure. Comparing overall ending ACT scores doesn't so much measure the quality of education received at the school (for a moment ignoring the side debate of test-as-imperfect-indicator-of-learning) as it measures where the students end up. This sounds a bit tautological, and it is, but take an example: if a student enters Roberto Clemente (a neighborhood CPS high school) at a 12 and ends at an 18, that is a fantastic amount of growth and speaks to the skills acquired during the first three years of high school. On the other hand, if a student enters ASPIRA at a 17 and improves to an 18 over three years, then, at least by the measure of the test, not as much has been learned.

    So, it would be interesting to do a similar analysis on Hertz's dataset, sorting by three-year cohort growth from freshman EXPLORE to junior- or senior-year ACT scores. If the market approach is working well, parents and students would presumably move to the neighborhood and charter schools that provide the most growth.
     
  3. High school is (with few exceptions) geographically constraining. Some of the enrollment growth at "lower-performing" schools could reflect the fact that the nearest reasonable alternatives were also "lower-performing" schools. In other words, the perceived cost-benefit trade-off of an hour-long commute each way for a marginally better (say, 1-3 points) school might seem unattractive, which would then result in increased enrollment at schools that are less attractive test-score-wise but more attractive geographically.*
     
  4. The ending ACT data arrives with a long lag and isn't always at the forefront of a parent's decision-making process, especially for newly opened schools. With 48 new high schools in the last 10 years, many parents are choosing to send their students to programs with no records or results until three years later. Doing some back-of-the-envelope math: 48 schools multiplied by the 4 classes that will enter each of those buildings before ACT scores are published/publicized yields 192 cohorts of students and families selecting a school based on imperfect or partial data. So, while it may appear that the parents and students who chose charter school X or new neighborhood school X made an irrational choice, for those 192 sets of families, judging that choice involves a level of hindsight bias.
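The growth-based ranking suggested in point 2, and the cohort arithmetic in point 4, can be sketched in a few lines. This is a hypothetical illustration only: the school names and scores below are made up, not real CPS data, and the real analysis would of course use Hertz's full dataset.

```python
# Hypothetical sketch: ranking schools by three-year cohort growth
# (freshman EXPLORE to junior-year ACT) rather than by final ACT score.
# All names and numbers are illustrative, not real CPS data.

schools = {
    # name: (avg freshman EXPLORE, avg junior ACT)
    "Neighborhood HS A": (12.0, 18.0),
    "Charter HS B": (17.0, 18.0),
    "Neighborhood HS C": (14.0, 17.5),
}

# Rank by growth (ACT minus EXPLORE) instead of by the raw ending ACT.
by_growth = sorted(schools.items(),
                   key=lambda kv: kv[1][1] - kv[1][0], reverse=True)
by_act = sorted(schools.items(), key=lambda kv: kv[1][1], reverse=True)

print("By growth:", [name for name, _ in by_growth])
print("By ending ACT:", [name for name, _ in by_act])

# Back-of-the-envelope from point 4: cohorts that choose a new school
# before any ACT results for it exist.
new_schools = 48
classes_before_scores = 4
print(new_schools * classes_before_scores)  # 192 cohorts
```

Note how the two orderings diverge: the school with the lowest ending ACT in the toy data shows the second-most growth, which is exactly the distinction point 2 is after.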
Overall, though, Hertz's analysis is interesting, and at a minimum it is the start of a conversation, and an important one. Assuming that some of the mitigating variables could be explained away or solved in the near future (better access to information for parents, etc.), if the trends identified above were to persist, it would be a serious blow to the entire philosophy of market-based portfolio management for large urban districts.

The larger underlying takeaway should be this: not all neighborhood public high schools are awful, and not all charter schools are fantastic. Far from it, on both sides. Those who paint either group with such a broad brush do a disservice to the most important constituency here: the parents and students who must make a real, difficult, and impactful decision about where to spend their high school years.



---

* It might be interesting to compare increases and decreases in enrollment with a variety of other factors to see if some have a higher correlation with increased enrollment, such as aspects of the 5 Essentials survey.

Correction: an earlier version of this post said that WBEZ had reproduced the Hertz piece, they had in fact only tweeted a link. Apologies for the confusion.

4 comments:

  1. I agree with some of your points, particularly number one. Including selective enrollment schools heavily skews the results. In fact, even including them in the average ACT score is misleading. Do we know what the average would be if we removed Northside, Jones, et al. from the sample? It's possible that could change a lot of his analysis.

    On the other hand, I am much less optimistic than you that any useful policy data can be gleaned from the ACT. While EPAS growth would be a marginally better indicator of school quality, it would still fail to paint a usefully accurate picture.

    For starters, the EXPLORE, PLAN, and ACT are not designed to measure growth; they are designed to rank and sort students in grades 9, 10, and 11, respectively. For this type of data analysis, especially if it is to have any policy implications, you would want an accurate tool designed for the task at hand, which in this case is measuring growth, not ranking students.

    Secondly, for a lot of students scoring on the lower end, and the schools in which they are enrolled, EPAS is even less useful. If a student enters high school at a second grade reading level, and advances to a sixth grade reading level by their junior year, that type of growth is unlikely to register at all between the reading portions of the EXPLORE and the ACT.

    Perhaps the so-called "Next Generation Assessments" will address these concerns, though I, as always, remain skeptical.

    1. Hi! I wrote the original piece. It's possible I should have been more up front about this, but as I explained here (http://danielhertz.wordpress.com/2014/01/13/school-markets-cont/), I don't think including selective enrollment schools "skews" my results as much as it means that Matt and I are thinking of slightly different questions. Mine is, Does the market *as it exists now* take students from the worst schools and bring them to better ones? If there's a structural impediment to that happening - as selective schools clearly are - that doesn't change the fact that right now the market is not bringing students to the best schools. You and Matt seem to be asking another interesting question, which is whether parents choose the best available option. For that question, you're right that it doesn't make sense to include selective schools.

    2. Thanks for the response, Daniel, and I agree the slight difference in original question definitely impacts the best data to use for analysis.

      Looking forward to more conversation!

  2. More and more I am with you on the usefulness of cross-grade tests, Cy. The more I learn about and interact with the intricacies of the reading PLAN, ACT, etc., the less confident I am in their use, especially with small sample sizes (fewer than 100). I think there is still some utility in their school-, area-, and district-wide samples, because some of the noise can be accounted for, but only for seeing very general trends of relative improvement.

    Here's hoping for the NexGen assessments....
