
Can machines identify winning fund managers in advance?

By its own admission, Morningstar's system of star ratings is of limited use in predicting future fund performance. Can machines do any better? LARRY SWEDROE looks at the latest evidence.

Morningstar is best known for its star rating service, which is backward-looking. The star rating is derived solely from mathematically adjusted past performance relative to other funds in the same category. Unfortunately, even Morningstar has concluded that “such a backward-looking measure provides rather minimal usefulness in predicting future performance”; the one notable exception is the persistence of poorly performing, high-expense funds.

Despite the ratings' limited predictive value, Itzhak Ben-David, Jiacui Li, Andrea Rossi and Yang Song, authors of the November 2020 study Non-Fundamental Demand and Style Returns, found that mutual fund cash flows are heavily influenced by changes in Morningstar's popular star ratings: a rating upgrade produces a surge in fund inflows, and those flow increases lead to contemporaneous stock price appreciation followed by subsequent reversals.

To address the problems with its star ratings, Morningstar rolled out a new, forward-looking service in 2011: the Analyst Rating for mutual funds. Because coverage is limited by the size of its analyst team, in June 2017 Morningstar went further and developed a machine learning model to create its Quantitative Rating. Unlike the star ratings, which are available to everyone, both the Analyst Rating and the Quantitative Rating are available only to registered members.

The machine learning model is trained on the decision-making processes of Morningstar's analysts, their past rating decisions and the data used to support those decisions. Morningstar built the model in two steps: it first estimates pillar ratings in five key areas for each fund (people, process, parent, performance and price) and then estimates the overall rating. To estimate the pillar ratings, Morningstar includes more than 180 attributes and more than 10,000 rating updates in the training sample. For each pillar, two random forest models are estimated to determine the probability that a fund will be rated Positive or Negative. Morningstar then aggregates these probabilities to produce the overall pillar rating and the fund rating. The Quantitative Rating is thus analogous to the rating a Morningstar analyst might assign to a fund if it were covered, with the added advantage of being updated monthly.
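To make the two-step structure concrete, here is a minimal sketch of that kind of approach using scikit-learn's random forests. The synthetic data, the feature count, the netting of the two probabilities and the equal-weighted aggregation are all illustrative assumptions; Morningstar's actual attributes, analyst labels and weighting scheme are not public.

```python
# Illustrative sketch only: a two-step, per-pillar random forest rating model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PILLARS = ["people", "process", "parent", "performance", "price"]

rng = np.random.default_rng(0)
n_funds, n_attributes = 2000, 20          # the real model uses 180+ attributes
X = rng.normal(size=(n_funds, n_attributes))

# Step 1: for each pillar, fit two forests -- one estimating the probability
# of a Positive analyst rating, one the probability of a Negative rating.
pillar_scores = {}
for pillar in PILLARS:
    y_positive = rng.integers(0, 2, n_funds)   # placeholder analyst labels
    y_negative = rng.integers(0, 2, n_funds)
    rf_pos = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_positive)
    rf_neg = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_negative)
    p_pos = rf_pos.predict_proba(X)[:, 1]
    p_neg = rf_neg.predict_proba(X)[:, 1]
    # Net the two probabilities into a single pillar score (an assumption).
    pillar_scores[pillar] = p_pos - p_neg

# Step 2: aggregate pillar scores into an overall score; a simple
# equal-weighted average is used here because the true weights are not public.
overall = np.mean([pillar_scores[p] for p in PILLARS], axis=0)
print(overall[:5])
```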

Do these new ratings provide more value than the popular star ratings? To answer that question, Si Cheng, Ruichang Lu and Xiaojun Zhang, authors of the October 2020 study What Should Investors Care About? Mutual Fund Ratings by Analysts vs. Machine Learning Technique, examined the performance of Morningstar's two forward-looking mutual fund ratings, both of which are based on the five key areas (people, process, parent, performance and price). Their dataset covered U.S. actively managed open-end equity mutual funds between 2011 and 2018, a period over which Analyst Rating coverage grew from 8% of funds to 30%.

Following is a summary of their findings:

— The Analyst Rating and the star rating were not highly correlated (0.41).

— The Quantitative Rating was more correlated with the star rating, though still only 0.6.

— The Analyst Rating successfully identified outperforming funds in a univariate portfolio sort (see the sketch after this list): Gold-rated funds and recommended funds (i.e., those rated Bronze or above) outperformed the benchmark by 0.91% and 0.53% per year, respectively, while non-recommended funds did not outperform the benchmark.

— The Quantitative Rating failed to deliver significant outperformance.

— Compared with non-covered funds, analyst-covered funds tended to be larger and older, charged lower fees, had lower turnover, and displayed a higher star rating and style-adjusted return.

— The tone of the analyst report contains incremental information for predicting fund performance: Gold-rated funds with a more negative tone display lower future performance, while Negative-rated funds with a more positive tone tend to rebound.

— Retail investors do not follow analyst recommendations but instead chase the Quantitative Rating.
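For readers unfamiliar with the technique, a univariate portfolio sort simply groups funds by a single characteristic (here, the rating) and compares average benchmark-adjusted returns across the groups. The sketch below illustrates the idea; the column names, the simulated data and the annualisation are assumptions for illustration, not the authors' actual dataset.

```python
# Illustrative univariate portfolio sort on a hypothetical fund-month panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
ratings = ["Gold", "Silver", "Bronze", "Neutral", "Negative"]
df = pd.DataFrame({
    "analyst_rating": rng.choice(ratings, size=10_000),
    "benchmark_adjusted_return": rng.normal(0.0, 0.02, size=10_000),  # monthly
})

# Sort funds into portfolios by rating and average their adjusted returns.
sort = (df.groupby("analyst_rating")["benchmark_adjusted_return"]
          .mean()
          .reindex(ratings) * 12)          # crude annualisation
print(sort)

# "Recommended" funds are those rated Bronze or above.
recommended = df["analyst_rating"].isin(["Gold", "Silver", "Bronze"])
print(df.loc[recommended, "benchmark_adjusted_return"].mean() * 12)
```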

Two conclusions do seem safe: the machine-driven rating was unable to predict fund performance, and, given the low correlation between the Analyst Ratings and the star ratings, the analysts themselves appear to agree that the star ratings have little value. Before jumping to any further conclusions, however, it's important to note a few points:

— The period covered was very short, just eight years for the Analyst Ratings.

— The number of funds in each of the five rating categories was quite small. For example, an average of just seven funds carried a Negative analyst rating, so the findings could be driven by a small group of funds.

— The Analyst Ratings did outperform in the univariate test, and the authors also performed a multivariate regression analysis based on fund characteristics (AUM, recent performance, expenses, age, turnover). Unfortunately, they chose not to run factor regressions to determine whether the return predictability they found was attributable to exposure to common factors such as market beta, size, value and profitability (a sketch of such a regression follows this list). Given how simple such regressions are to run, one wonders why they did not.

— Adjusting for styles, the explanatory power of their findings was extremely low, with R-squared values as low as 0.01 for the Analyst Ratings and 0.03 for the Quantitative Ratings.
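For context, here is a minimal sketch of the kind of factor regression the authors could have run: regress a rating-sorted portfolio's excess returns on market, size, value and profitability factors and check whether the intercept (alpha) remains significant. The data below are simulated and the factor names follow the Fama-French convention; this is not the study's actual analysis.

```python
# Illustrative four-factor regression on simulated monthly returns.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_months = 96                                        # roughly the 2011-2018 sample
factors = rng.normal(0, 0.02, size=(n_months, 4))    # Mkt-RF, SMB, HML, RMW
true_betas = np.array([1.0, 0.1, 0.0, 0.2])
portfolio_excess = factors @ true_betas + rng.normal(0, 0.01, n_months)

X = sm.add_constant(factors)                         # constant term captures alpha
model = sm.OLS(portfolio_excess, X).fit()
alpha, alpha_t = model.params[0], model.tvalues[0]
print(f"monthly alpha = {alpha:.4f} (t = {alpha_t:.2f})")
```

If the alpha shrinks toward zero once factor exposures are controlled for, the apparent rating-based outperformance is simply compensation for known risk factors rather than evidence of skill identification.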
