California School Ratings

This project is based on the downloadable data files, record layouts, and flag definitions for the Academic Performance Index (API) provided by the California Department of Education.

A work in progress...

Base and Growth API

Extracted from http://www.cde.ca.gov/ta/ac/ap/documents/infoguide12.pdf:

In order to measure the academic improvement of a school, academic results in the form of the API are compared from year to year. Growth (or change) in the API is the difference between the Base API and Growth API within a reporting cycle.

Each reporting cycle begins with a Base API. The Base API is calculated using the test results of the previous year, and the Growth API is calculated using the test results of the current year. For example, the 2011 Base API is calculated using results of statewide testing from spring 2011, and the 2012 Growth API is calculated using results of statewide testing from spring 2012. Any changes to the API calculations in a given year, such as adding a new assessment, begin with the Base API. Therefore, the calculation methods for the Base API might not be the same across years. However, the Base API and Growth API within a reporting cycle must use the same calculation method. (The source guide includes charts illustrating the 2011–12 API reporting cycle.)

The indicators are the same for the Base and Growth APIs, but the 2011 Base includes 2011 test results whereas the 2012 Growth includes 2012 test results. The 2011 Base API is subtracted from the 2012 Growth API to show how much a school's API changed from 2011 to 2012 (referred to as 2011–12 API growth). This determines whether a school meets its API growth target. The Base API Report includes the Base API, targets, and ranks. The Growth API Report includes the Growth API, growth achieved, and whether or not targets were met.
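
That subtraction is the whole within-cycle growth calculation. A minimal Python sketch follows, using made-up scores for a hypothetical school; the actual growth-target rules come from the CDE guide and are passed in here as a plain number:

    # Within-cycle API growth, as described above: Growth API minus Base API.
    # The scores and target value below are hypothetical.

    def api_growth(base_api: int, growth_api: int) -> int:
        """Return the change in API for one reporting cycle."""
        return growth_api - base_api

    def met_growth_target(base_api: int, growth_api: int, target: int) -> bool:
        """A school meets its target when its growth reaches the target amount."""
        return api_growth(base_api, growth_api) >= target

    # 2011 Base API = 745, 2012 Growth API = 767 (hypothetical school)
    print(api_growth(745, 767))            # 22 points of 2011-12 API growth
    print(met_growth_target(745, 767, 5))  # True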

Comparability and Changes to the API

In order to make comparisons, the APIs being compared must be based on the same tests with the same test weights. If the API indicators and methodology remained the same from year to year, there would be no need for two API reports. However, complications arise in year-to-year comparisons of the API when changes to the API must be made. From one year to the next, assessments may be added to or removed from the set of API indicators. For example, in the 2011–12 API cycle, the CMA ELA in grades ten and eleven and the CMA Geometry in grades eight through eleven were added to the API. The test weights (the relative emphasis on each test) and the rules for inclusion or exclusion in the API can also change.
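
The effect of test weights can be seen with a toy composite score. This is deliberately not the CDE's API formula (the real API is reported on a 200–1000 scale using the Department's published weighting methodology); the sketch below, with made-up tests and weights, only shows why a change in weights alone breaks comparability:

    # Toy illustration only -- NOT the CDE API formula. The same test results
    # yield different composites under different weights, so a weight change
    # alone would look like a change in performance.

    def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Weighted average of test scores using relative test weights."""
        total = sum(weights[test] for test in scores)
        return sum(scores[test] * weights[test] for test in scores) / total

    scores = {"ELA": 700.0, "Math": 680.0}       # hypothetical results
    weights_old = {"ELA": 0.5, "Math": 0.5}      # hypothetical old weights
    weights_new = {"ELA": 0.6, "Math": 0.4}      # hypothetical new weights

    print(composite(scores, weights_old))  # 690.0
    print(composite(scores, weights_new))  # 692.0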

Why Two API Reports Are Needed

In order to measure growth as well as incorporate new changes into the API, two API reports are produced. When changes occur in the API, the Base API at the start of a new API cycle is adjusted to reflect the changes. The Base API, including all new indicators and methodological changes, becomes the baseline against which to compare the next year's Growth API. The Growth API must match the Base API in order to compare the two: to reflect the incorporation of the additional assessments, the Base API is adjusted, and the Growth API for that cycle is calculated in the same way.

API Reporting Cycles

An API reporting cycle consists of two components: (1) base information and (2) growth information. The base reports are provided in the spring, and the growth reports are provided in the fall.

Appropriate Comparisons of the API

Because new indicators are added to the API and test weights may change from one cycle to the next, it is inappropriate to compare APIs across reporting cycles. It is appropriate, however, to compare the Base and Growth APIs within a reporting cycle as well as to compare the amount of API growth (i.e., change in the API) of different reporting cycles.

Invalid Comparisons of the API

The following examples are invalid comparisons because the APIs are compared across reporting cycles. The reporting cycles may differ in the assessments (indicators) and the weights included in the APIs, so such comparisons are not valid.

2010 Base API and 2011 Base API

In this example, the 2010 Base API is in the 2010–11 reporting cycle, and the 2011 Base API is in the 2011–12 reporting cycle. The comparison is not valid because different indicators were used in each cycle (i.e., the CMA in ELA, grades ten and eleven, and CMA in Geometry were not in the 2010–11 cycle but were in the 2011–12 cycle).

2006 Base API and 2011 Growth API

In this example, the 2006 Base API is in the 2006–07 reporting cycle, and the 2011 Growth API is in the 2010–11 reporting cycle. Again, the comparison is not valid because different indicators were used in each cycle (i.e., the 2010–11 cycle included many more indicators than the 2006–07 cycle).

2011 Base API and 2011 Growth API

In this example, the 2011 Base API is in the 2011–12 reporting cycle, and the 2011 Growth API is in the 2010–11 reporting cycle. The APIs were calculated from the same year’s test data (2011). However, the 2011 Base API included the CMA in ELA, grades ten and eleven, and in Geometry, while the 2011 Growth API did not include these indicators. This comparison is not valid.

Valid Comparisons of the API

The following examples are valid comparisons because (1) the Base and Growth APIs are compared within the same reporting cycle, or (2) the amount of growth (change) in the API from different reporting cycles is compared. The first example compares APIs that are calculated based upon the same assessments (indicators). The second example compares the amount of change in the API across reporting cycles.

2011 Base API and 2012 Growth API Within a Reporting Cycle

This example shows the amount of API change from 2011 to 2012 for the school and for each numerically significant student group. It also shows the school and student group API scores compared to the statewide target of 800.

API growth (change) for 2007–08, 2008–09, 2009–10, 2010–11, and 2011–12 can also be compared for a school site while preserving the validity of API comparisons, because each figure represents growth within a single API cycle over those five years.
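
The comparison rules above reduce to a simple check: two APIs are directly comparable only if they fall in the same reporting cycle. A minimal Python sketch, assuming a report is identified by its year and type ("base" or "growth"); the function names and encoding are illustrative, not part of the CDE record layouts:

    def reporting_cycle(year: int, kind: str) -> str:
        """A Base API for year Y opens the Y to Y+1 cycle; a Growth API
        for year Y closes the Y-1 to Y cycle."""
        if kind == "base":
            return f"{year}-{(year + 1) % 100:02d}"
        return f"{year - 1}-{year % 100:02d}"

    def comparable(year_a: int, kind_a: str, year_b: int, kind_b: str) -> bool:
        """APIs are directly comparable only within one reporting cycle.
        (Amounts of growth may still be compared across cycles.)"""
        return reporting_cycle(year_a, kind_a) == reporting_cycle(year_b, kind_b)

    # The document's own examples:
    print(comparable(2010, "base", 2011, "base"))    # False: 2010-11 vs 2011-12
    print(comparable(2006, "base", 2011, "growth"))  # False: 2006-07 vs 2010-11
    print(comparable(2011, "base", 2011, "growth"))  # False: 2011-12 vs 2010-11
    print(comparable(2011, "base", 2012, "growth"))  # True: both 2011-12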
