How to Be a Great Investor, Part Six: Update Your Views Effectively
This article is the sixth in a ten-part series loosely based on Michael J. Mauboussin’s white paper “Thirty Years: Reflections on the Ten Attributes of Great Investors.” See “Part One: Be Numerate,” “Part Two: Understand Value,” “Part Three: Properly Assess Strategy,” “Part Four: Compare Effectively,” and “Part Five: Think Probabilistically” for previous installments. And please keep in mind that although I’m basing my work on Mauboussin’s, I am departing from his ideas on occasion.
Mauboussin writes,
Great investors do two things that most of us do not. They seek information or views that are different than their own and they update their beliefs when the evidence suggests they should. Neither task is easy. . . . The best investors among us recognize that the world changes constantly and that all of the views that we hold are tenuous. They actively seek varied points of view and update their beliefs as new information dictates. The consequence of updated views can be action: changing a portfolio stance or weightings within a portfolio.
Michael J. Mauboussin in “Thirty Years: Reflections on the Ten Attributes of Great Investors.”
So I asked a few investors what they’re doing to change their strategies. One slowly moved his securities out of models that he realized had been overfitted; in their place he created a minimum-variance portfolio inspired by Modern Portfolio Theory (the basic computation is sketched below). Another shifted from a high-turnover to a low-turnover strategy, reasoning that “getting in and out of positions before the rest of the crowd does is a fool’s errand.” A third has learned to treat tickers as the companies they represent rather than as collections of statistics. A fourth is de-emphasizing value ratios, relying more on short interest and previously shunned technical factors; he is also paying more attention to portfolio weighting. A fifth is turning from futures and forex, where volatility has gotten too low for profitable trading, to equities.
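For readers who want to see the mechanics behind the first investor’s new approach, here is a minimal sketch of an unconstrained minimum-variance portfolio in Python. The return data and the closed-form shortcut are my own illustration, not his actual model.

```python
# Toy minimum-variance portfolio: find weights w minimizing w'Cw subject to
# the weights summing to one. With no other constraints, the closed-form
# solution is w = C^-1 1 / (1' C^-1 1), where C is the covariance matrix.
import numpy as np

def min_variance_weights(returns: np.ndarray) -> np.ndarray:
    """returns: T x N matrix of asset returns; output: N weights summing to 1."""
    cov = np.cov(returns, rowvar=False)        # N x N sample covariance
    ones = np.ones(cov.shape[0])
    inv_cov_ones = np.linalg.solve(cov, ones)  # C^-1 1 without forming the inverse
    return inv_cov_ones / inv_cov_ones.sum()

# Fake data: roughly one year of daily returns for four hypothetical assets.
rng = np.random.default_rng(0)
fake_returns = rng.normal(loc=0.0005, scale=0.01, size=(250, 4))
print(min_variance_weights(fake_returns))
```

Note that this unconstrained version allows short positions; a long-only minimum-variance portfolio requires a numerical optimizer rather than a closed-form solution.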
Personally, I’ve undergone two sets of transformations. The first was the beginner’s journey that many investors go through, during which I made practically every mistake in the book. I bought and sold based on technical analysis, using complicated algorithms built around stochastic indicators. I changed my strategy every few months, convinced I’d found something that really worked, only to realize it had a fundamental flaw. I chose ETFs based on past performance rather than on their merits. I invested only in companies with high ethical standards, without realizing that doing so would hurt only my own returns, not those of evil companies. I used stock screeners based on faulty data without thoroughly evaluating the stocks I was buying. I bought things whose prices were going up and sold them when their prices started falling, thus buying high and selling low. I used stop-loss orders and got whipsawed. I relied on backtests using limited samples. I bought stocks purely for their “value” without paying attention to their growth prospects or their quality. In the process, I lost a tremendous amount of my savings.
Then one day in 2015 it hit me. I suddenly realized that the only way to succeed in stock investing is to examine every stock from as many different angles as possible. I started creating multifactor ranking systems on Portfolio123 in order to automate that process. And that’s when I began to make a lot of money.
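To give a flavor of what “examining every stock from as many different angles as possible” can look like when automated, here is a toy multifactor ranking in Python. The factors, weights, and figures are invented, and Portfolio123’s actual ranking engine is, of course, far more elaborate.

```python
# Toy multifactor ranking: percentile-rank each factor so that a higher rank is
# always more attractive, then blend the ranks into one composite score.
import pandas as pd

stocks = pd.DataFrame({
    "pe":          [8.0, 15.0, 22.0, 11.0],   # price/earnings: lower is better
    "roe":         [0.05, 0.18, 0.12, 0.22],  # return on equity: higher is better
    "debt_equity": [1.2, 0.4, 0.9, 0.3],      # leverage: lower is better
}, index=["AAA", "BBB", "CCC", "DDD"])        # hypothetical tickers

ranks = pd.DataFrame({
    "value":    stocks["pe"].rank(ascending=False, pct=True),
    "quality":  stocks["roe"].rank(ascending=True, pct=True),
    "solvency": stocks["debt_equity"].rank(ascending=False, pct=True),
})

weights = {"value": 0.4, "quality": 0.4, "solvency": 0.2}  # arbitrary weights
composite = sum(w * ranks[f] for f, w in weights.items())
print(composite.sort_values(ascending=False))
```

The essential move is combining normalized ranks rather than raw values, so that factors measured on entirely different scales can be weighed against one another.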
The second set of transformations I went through was more modest and gradual. I began using variable position sizing (one possible rule is sketched below). I modified my strategy so that it had low beta. I dramatically reduced my turnover. As the amount of money I invested grew (I’m now managing six times what I was managing in 2015), I moved slowly away from investing only in microcaps and started investing in small-caps and a few mid-caps as well. I placed more emphasis on quality considerations and less on growth and value. I incorporated a few long-term momentum factors, which I had shunned at the outset.
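As an illustration of what variable position sizing can mean in practice, here is one generic rule: size each position in proportion to conviction and in inverse proportion to volatility, then cap any single name. The numbers, the cap, and the rule itself are hypothetical, not my actual sizing formula.

```python
# Toy sizing rule: weight = conviction score / volatility, normalized, then
# capped. The one-pass cap-and-renormalize step is a simplification; a more
# careful version would iterate until no position breaches the cap.
import numpy as np

def position_sizes(scores, vols, max_weight=0.35):
    """scores: conviction ranks (higher = better); vols: annualized volatilities."""
    raw = np.asarray(scores, dtype=float) / np.asarray(vols, dtype=float)
    weights = raw / raw.sum()                  # fully invested, pre-cap
    weights = np.minimum(weights, max_weight)  # limit single-position exposure
    return weights / weights.sum()             # renormalize after capping

print(position_sizes(scores=[90, 80, 70, 60], vols=[0.50, 0.30, 0.25, 0.40]))
```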
Did these changes improve my results? I don’t know. When I backtest old systems that I stopped using years ago, they tell me that if I’d continued to use them since then rather than “improving” them I would have made more than I actually made. But that’s assuming perfect trading conditions and no mistakes along the way.
Is it possible, then, that all the investors I’ve mentioned, including myself, have changed some things for the worse and should have stuck to their old strategies? After all, steadily diminishing excess returns don’t indicate wise choices, and that’s precisely what I’ve been suffering through. It’s all very well to say that great investors “update their beliefs when the evidence suggests they should.” But what if you’re not at all sure you’re a “great investor”? How do you know whether the evidence is pointing you in the right direction? How can you “update your views effectively” if you can’t know whether those updates will be good ones?
One expert on updating your views effectively is Philip Tetlock, a professor at the University of Pennsylvania’s Wharton School, coauthor (with Dan Gardner) of Superforecasting: The Art and Science of Prediction, and cofounder of the Good Judgment Project. After all, investing well is, in an important sense, predicting the future (of the securities in which one invests). One paper he recently wrote, “Small Steps to Prediction Accuracy,” attempts to identify the hallmarks of accurate forecasters by examining participants in a four-year forecasting tournament in which they had to forecast the outcomes of close to five hundred geopolitical questions. He found that two kinds of errors were prevalent: under-reaction to new evidence, usually caused by anchoring or confirmation bias (the natural propensity to look for facts that confirm your existing beliefs), and over-reaction, caused by “base-rate neglect” (the natural tendency to base your probability estimate on the specifics of the case at hand while ignoring how often similar cases generally turn out). “The most accurate forecasters,” Tetlock found, “made frequent, small updates, while less accurate ones persisted with original judgments or made infrequent, large revisions.” These accurate forecasters were intelligent, open-minded, and fluid in their beliefs, and they adhered to the principles of probability. “A pattern of frequent, small belief revisions,” Tetlock concluded, “appears effective in identifying skilled forecasters in uncertain real-world settings.”
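Tetlock’s “frequent, small updates” have a natural formal counterpart in Bayes’ rule, which is easiest to apply in odds form. The prior and the likelihood ratios below are invented purely to show the mechanics.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Weak evidence (likelihood ratios near 1) produces exactly the small,
# frequent revisions Tetlock associates with accurate forecasters.

def update(prob: float, likelihood_ratio: float) -> float:
    odds = prob / (1.0 - prob)       # convert probability to odds
    odds *= likelihood_ratio         # fold in the new evidence
    return odds / (1.0 + odds)       # convert back to a probability

p = 0.50  # prior probability that the event occurs
for lr in [1.3, 1.2, 0.9, 1.4, 1.1]:  # LR > 1 favors the event; LR < 1 cuts against it
    p = update(p, lr)
    print(f"after evidence with LR = {lr}: p = {p:.3f}")
```

Each piece of evidence nudges the estimate rather than swinging it wildly, which is the pattern that distinguished Tetlock’s best forecasters.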
Mauboussin and Tetlock have worked closely together. In his paper “Cultivating Your Judgment Skills,” Mauboussin presents Tetlock’s “Judgment Matrix,” which outlines a three-stage process. The first stage is “Cognitive,” which is error-prone: the cognitive thinker identifies causality superficially, gets stuck in an inside view without considering outside references, and tracks news casually, over- or underreacting to it. The second stage is “Associative,” which makes fewer errors: the associative thinker sees causality as a point-counterpoint process but overcompensates when using reference classes and gets stuck in an outside view; she reads news closely, focusing on diagnostic information, but doesn’t necessarily incorporate her own insights. The third stage is “Autonomous,” which is near perfection: the autonomous thinker shrewdly integrates causality into her process, weighs inside and outside views with nuance and skill, and accurately updates her views based on new information.
Knowing whether you are right to update your views may depend more on your process than on the views themselves. To answer the questions I asked above—should you have stuck to your old views? how do you know if the evidence is correct? how do you know if your updates are good ones?—examine your process. Are you integrating your own beliefs with outside views effectively? Are you making frequent, small belief revisions rather than infrequent, large ones? Are you avoiding overreacting to news? Can you distinguish causation from mere correlation? If the answers to all of these questions are “yes,” you’re probably on the right track.
The investment writer Jason Zweig recently wrote a nice piece called “Overconfidence: An Autobiography.” In it he relates how overconfident he was as a college freshman. He then remarks,
More than four decades later, I still regularly commit the same blunder of presuming I know more than I do, more than the people around me, more than the people who came before me, more than the people who have spent decades studying a topic or working in a field. I underestimate the difficulty of problems and overestimate the ease of solutions. I assume reality is simpler than a lifetime of encountering complications should already have taught me it must be. . . . I love saying “I don’t know,” but I don’t love it nearly as much as I should; just as water is the universal solvent, “I don’t know” should be our universal first response to nearly every hard question. What I learned, those first days of college so long ago, wasn’t how to stop being overconfident. (I’m old enough now to realize that I’m unlikely ever to learn that.) What I learned was the power of feedback, the importance of throwing yourself open to being corrected in public. That doesn’t eliminate error and misjudgment. It does teach you to do your homework, to consider the historical and social contexts of your evidence before you draw conclusions, to evaluate the quality of your information before you act on it, to go back and check your work again before you commit, and above all to think twice.
Jason Zweig in “Overconfidence: An Autobiography.”