QS is currently collecting sustainability data from higher education institutions for the QS Sustainability Rankings. Because the AASHE community has previously expressed interest in AASHE and QS collaborating on data sharing or alignment to reduce the reporting burden on institutions, I thought it would be worth sharing an initial analysis of the potential for alignment between the metrics QS is using in its 2025 ranking cycle and the metrics we use in the Sustainability Tracking, Assessment & Rating System (STARS). I hope this will help STARS participants who also choose to participate in the QS rankings get a better sense of what data from their STARS reports might be relevant for QS.
At first glance, as shown in the spreadsheet, there seems to be only limited overlap in the metrics used by QS and STARS. Specifically, there is a related QS metric for only about 30% of the 111 indicators in STARS 3.0. Likewise, there are relevant STARS 3.0 data fields for only 41% of the 87 metrics used by QS.
This modest alignment is explained in part by the fact that STARS includes questions on a number of topics that aren't directly covered in QS's ranking (e.g., water, waste, food, biodiversity, transportation, affordability, compensation) and has more indicators on some topics that both systems cover (e.g., STARS has eight indicators related to procurement while QS has one).
On the other hand, while STARS relies entirely on institution-submitted data, QS incorporates a variety of indicators that are calculated from external data gathered by QS and its partners. Indeed, by my calculation, 72% of an institution's score in the QS Sustainability Ranking is based on data gathered from other sources. This includes data on research impact, academic reputation, alumni impact, employee perceptions, and employer perceptions, as well as data about the country in which the institution is located (e.g., the country's unemployment rate or its score in the Academic Freedom Index).
There is better alignment when comparing only the institution-submitted data used by QS and STARS. Of the 50 metrics that QS collects directly from institutions, 32 (64%) have relevant STARS 3.0 data fields. The lack of a relevant STARS data field for the other 18 institution-reported metrics in QS often reflects different choices in indicator design. In particular, STARS tends to emphasize quantitative performance indicators, while QS focuses more on qualitative measures like the existence of a particular office or policy.
Overall, my takeaway from this analysis is that there isn't presently enough alignment of data fields for direct data sharing to be practical. However, about two thirds of the metrics that QS asks institutions to report are already captured in some way in STARS, and the general topical alignment between the two systems is even greater than that, so there does seem to be potential to align at least some data fields to make participation easier for institutions that do both. Indeed, based on a tentative assessment of the relative difficulty of aligning indicators (included in the spreadsheet), 17 of the 50 institution-supplied indicators seem relatively easy to align and another 18 have medium potential for alignment. We have shared these findings with QS and are scheduled to have a conversation with them to explore this possibility.
In the meantime, I hope the spreadsheet will be helpful to STARS participants who also report to QS.