Improving STARS Data Quality Part 1: Introduction
Prompted in part by stakeholder concerns about the integrity of STARS data (most recently in an analysis of responses to questions in the Investment section by the Sustainable Endowments Institute), the STARS Steering Committee is again exploring options for improving data quality. We’ve taken a number of steps to address data problems over the past few years (described below), but it seems that additional steps may be warranted.
As the options under discussion could have large impacts on all STARS participants and on the future of STARS more generally, I think we have an obligation to inform the community about our deliberations and to invite comments and feedback from interested parties. I hope to do that through a series of blog posts on specific questions that the Steering Committee will be discussing.
Before we dive into the discussion, it’s probably worth giving a quick reminder of the ways we already try to encourage good data, as well as the limitations associated with each (see the STARS Data Accuracy Policy for more info):
- Clear definitions and credit requirements - The first line of defense against bad data is to provide clear definitions so that participants interpret credit requirements consistently. We haven’t always achieved this goal (the academic courses and research credits, despite our best efforts, still involve a good deal of subjectivity), but we do regularly refine credit language to clarify definitions and requirements that participants misinterpret. However, these refinements often make the already lengthy Technical Manual even longer, and participants sometimes report in ways that the Manual explicitly disallows.
- Reporting system warnings for unlikely data - When quantitative responses fall outside a specified range, the reporting system alerts the submitter to the deviation and requests confirmation that the numbers are in fact correct. This mechanism catches only mistakes that fall outside the normal range, and given the diversity of higher education, these ranges are typically set quite wide.
- Responsible Parties - For each credit claimed, participating institutions are required to designate a “Responsible Party,” who affirms that the information submitted is accurate. This individual is supposed to be the person on campus who collects or maintains data relevant to the credit and is best qualified to answer any questions about it. Responsible Parties are typically from a variety of different departments across campus and may not be familiar with the specific requirements of STARS, so listing a Responsible Party doesn’t necessarily ensure consistency with credit requirements.
- President/CEO letter - Each STARS submission is accompanied by a letter from the highest ranking executive that, among other things, confirms the accuracy of the overall submission. In practice however, it’s not clear that presidents or their staff are actually checking submissions to a meaningful degree.
- Pre-publication review for Platinum submissions - Before a Platinum rating becomes official, AASHE staff will do a full review of the submission. Any mistakes that the staff identify would need to be fixed before the rating is made public. Since no institution has yet applied for a Platinum rating, this mechanism has yet to be tested.
- Post-publication review by AASHE - Starting with the release of STARS 2.0, AASHE staff review a selection of 24 credits (out of 70 total) after submission and then work with campuses to correct any misinterpretations or inconsistencies. On average, staff find 3 or 4 issues per submission that require follow up, so this approach is effective at identifying problems. However, getting them fixed in a timely fashion can be a challenge. After their rating has been published, institutions may not feel a pressing need to make the suggested changes. As a result, erroneous data can persist for long periods of time and a good deal of staff time is spent following up with campuses to remind them to make the changes.
- Transparent submissions and public inquiry process - All STARS submissions are made publicly available online and users are invited to submit inquiries if they come across data that they believe to be inaccurate. AASHE staff then follow up on all credible inquiries and work with the institutions to make changes, if warranted. One or two inquiries are submitted each month on average so this mechanism is having some effect, but not nearly enough to fully fix the data quality issues. Further, resolving inquiries often requires a significant amount of staff time so this probably isn’t the most efficient way of improving data quality.
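For readers curious about the mechanics, the range warnings described above amount to a simple bounds check at submission time. The sketch below is purely illustrative: the field name, units, and thresholds are invented for this example and are not actual STARS reporting-system values.

```python
# Hypothetical sketch of an out-of-range warning check, as described above.
# Field names and thresholds are illustrative assumptions only.

def check_range(field, value, expected_min, expected_max):
    """Return a warning message if value falls outside the expected range,
    or None if the value looks plausible."""
    if value < expected_min or value > expected_max:
        return (f"Warning: {field} = {value} is outside the expected "
                f"range [{expected_min}, {expected_max}]. "
                "Please confirm this figure is correct.")
    return None

# Example: a deliberately wide range, reflecting the diversity of
# higher-education institutions (values here are made up).
warning = check_range("energy_use_per_sq_ft_kbtu", 950, 20, 400)
if warning:
    print(warning)
```

As the post notes, the weakness of this approach is visible in the code itself: any erroneous value that happens to fall inside the wide accepted range passes silently.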
Overall, while each of these mechanisms probably has value, continued evidence of data problems suggests they are insufficient. As a result, the Steering Committee is considering more comprehensive reviews of STARS submissions. In the next post, I’ll examine a key question in this discussion: should the review be mandatory for all STARS participants, or should it be offered as an option for institutions that want to ensure a higher-quality submission and potentially earn some kind of enhanced recognition in STARS?
Julian Dautremont-Smith is the Chair of the STARS Steering Committee and a Senior Program Manager at GreenerU.