I love the NACUBO-TIAA Study of Endowments. Every year, I eagerly await the annual report from the National Association of College and University Business Officers on the investment performance and management practices of higher ed endowments. The NTSE study, which this year is 198 pages, is chock full of useful, insightful, and interesting data, charts, and tables. One can learn about long-term trends in asset allocation, spending, and gifts. Or one can study themes like the use of special appropriations, student-managed funds, and approaches to ESG. I read it cover-to-cover when it comes out and keep a hard copy on my desk for reference all year long.
In fact, at Commonfund we annually survey institutions for our Benchmarks Studies of private and community foundations and independent schools, collecting information on returns, asset allocation, and other investment trends and practices. We find the information we gather in these studies incredibly helpful for the industry and believe deeply in the important role they play in disseminating data.
And so, it is no surprise that when a study so rich in information is released there is a flurry of interest, from the press to investment managers to the schools themselves. Yet what continues to surprise me, year after year, is the singular and myopic focus of that interest on arguably the least relevant number among the thousands of numbers in the study: the one-year return. Several years ago I wrote a blog in which I described the most recent one-year return as potentially “the most overrated number” in the entire study. This year, I believe that more than ever because of the noise that is undoubtedly part of Fiscal Year (FY) 2020 data. The NTSE study is immensely informative and will serve you and your institutions well. However, if you are looking for clarity on higher education and foundation investment performance and management practices, the ostensible purpose of the study, you are unlikely to find it in the table of one-year returns.
The first challenge is that one-year performance numbers may not be reported on an apples-to-apples basis. Specifically, endowments are not consistent in how they report the performance of private investments, which are long-term, often illiquid holdings with reporting lags. Private investment managers report their valuations and performance to their endowment investors many weeks (and sometimes months) after a quarter-end. That delay creates challenges for providing timely performance reporting at the total portfolio level.
There are generally three ways that institutions handle the timing issue: use a lagged valuation methodology, wait to report performance until some threshold of managers have reported, or estimate the performance of those investments. These different methodologies can distort performance over shorter time periods or in periods of heightened market volatility, as was the case in 2020. An institution that reported performance using March 31, 2020 marks for its private investments likely reported lower returns than if it had waited for, or estimated, June 30 valuations, given the strong recovery in risk assets in the final quarter of FY 2020. The greater the allocation to private investments, the greater the discrepancy. The distortion diminishes over longer time periods as more quarters are included, reducing the influence of any single quarter, but it can be particularly acute over short-term periods and when performance swings sharply between two quarters, as it did between the March and June quarters of FY 2020.
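A back-of-the-envelope sketch shows the mechanics. All of the figures below are invented for illustration; the point is only that, holding the public sleeve constant, the choice of private-investment marks moves the reported total return in direct proportion to the private allocation.

```python
# Hypothetical illustration of how private-investment valuation timing can
# distort a reported one-year return. All figures are invented for the sketch.

def blended_return(public_ret, private_ret, private_weight):
    """Weighted one-year return for a simple two-sleeve portfolio."""
    return (1 - private_weight) * public_ret + private_weight * private_ret

# Suppose public markets returned +3.0% for the fiscal year, while the
# private sleeve was marked at -8.0% as of March 31 (pre-recovery) but
# +2.0% once June 30 valuations finally arrived.
public_fy = 0.03
private_march = -0.08
private_june = 0.02

for w in (0.10, 0.30, 0.50):  # allocation to private investments
    lagged = blended_return(public_fy, private_march, w)
    current = blended_return(public_fy, private_june, w)
    print(f"{w:.0%} privates: lagged marks {lagged:+.2%}, "
          f"June marks {current:+.2%}, gap {current - lagged:+.2%}")
```

With these made-up numbers, the gap between the two reported returns is simply the private weight times the ten-point swing in private marks, so two otherwise identical portfolios could report one-year numbers several percentage points apart purely because of methodology.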
The second challenge is the unfortunate focus on short-termism that comes with comparing one-year returns. There isn’t an endowment manager on the planet who would admit to optimizing an endowment portfolio for one-year returns. These are perpetual pools of assets that must fund operations, scholarships, etc. in perpetuity. Their objective is not to outperform a benchmark or a peer group over 12 months. Their objective is to produce returns that will support annual distributions to their institutions at the same, or greater, level over time.
So why, then, do we spend any time on such a short period, especially in a year like FY 2020, which ended amid a global pandemic? Very few, if any, institutions were prepared in January 2020 for what would unfold in the following six months. We spend lots of time modeling stress scenarios and creating crisis playbooks with our clients. Not one of them had as a scenario a global pandemic that would force all of higher ed to shut down, send all of their students home, and blow a massive hole in their finances. Likewise, no investment professionals foresaw a global pandemic, and as a result very few endowment portfolios were positioned specifically for what would unfold in the capital markets.
Perhaps they were overweight assets or strategies, such as technology, that “benefited” from the pandemic, but it wasn’t because they expected a pandemic. Or perhaps they were underweight the handful of stocks that drove so much of the return in the post-March recovery. Either way, the one-year return for the period ending June 30, 2020 reflects that reality. And just as FY 2020 financial results, skewed by the unexpected and (hopefully) temporary impact of COVID-19, do not capture how your college or university has performed operationally over the past decade or how effectively those operations have been managed, the most recent one-year return does not reflect how effectively your endowment has been managed over the long term.
The third challenge is the nature of peer comparisons themselves. There’s little doubt that a competitive spirit runs deep throughout higher ed; a competitive spirit that has existed for centuries. Maybe it started with a rowing race between Yale and Harvard in 1853, which marked the beginning of intercollegiate competitions. Maybe it started in 1859 when Amherst played Williams in the first intercollegiate baseball game. Or maybe it was the first football game between Rutgers and the College of New Jersey, now Princeton, in 1869. Needless to say, colleges competing against each other has been part of the fabric of higher ed since the beginning. But more than a century later something happened that would change the nature of that competition. In 1983 U.S. News & World Report published its first “America’s Best Colleges” report, ushering in a new focus on peer rankings that today are the most widely quoted of their kind in the country. Whether this has been good or bad for higher ed is perhaps worth debating, but it is not the focus of this blog.
However, the shortcomings inherent in ranking very different institutions on the same metrics are not dissimilar to the shortcomings in ranking very different institutions on one-year performance. A $200 million endowment that supports 10 percent of a college’s operating budget is very different from a $200 million endowment that supports 25 percent of a college’s operating budget. One that annually distributes 5.5 percent is very different from one that spends 4.5 percent. One that supports financial aid is very different from one that supports operations. You get the point, and yet we compare their performance as if the only difference between them were their nominal size.
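Even a seemingly small difference in spending policy compounds into a meaningful difference over time. The sketch below uses invented figures: two hypothetical $200 million endowments earning an identical annual return, differing only in their distribution rates.

```python
# Hypothetical sketch: two endowments with identical investment returns but
# different spending policies diverge over time. All figures are invented.

def project_corpus(start, annual_return, spend_rate, years):
    """Grow a corpus at annual_return, then distribute spend_rate each year."""
    corpus = start
    for _ in range(years):
        corpus *= (1 + annual_return)   # investment growth for the year
        corpus -= corpus * spend_rate   # annual distribution to the institution
    return corpus

start = 200_000_000    # both endowments start at $200 million
ret = 0.07             # assume an identical 7% annual return for both

low_spender = project_corpus(start, ret, 0.045, 20)   # 4.5% spending policy
high_spender = project_corpus(start, ret, 0.055, 20)  # 5.5% spending policy
print(f"4.5% spend after 20 years: ${low_spender:,.0f}")
print(f"5.5% spend after 20 years: ${high_spender:,.0f}")
```

Under these assumptions the two corpora end up tens of millions of dollars apart after two decades despite identical investment skill, which is exactly why comparing their one-year returns head-to-head tells you so little about how well either is serving its institution.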
It is fun to compare performance against our peers, and it is informative to see how we did against market benchmarks. But whether the endowment outperformed either over a twelve-month period is only one of many important data points in the study, and it may in fact not be the one that deserves so much of our focus.