Benjamin Graham, who has often been called the father of value investing, published The Intelligent Investor in 1949 and revised it several times; the fourth and final edition, revised in 1971–72, was published in 1973. In it he included three different sets of guidelines, which could be called “checklists” or “screens.” The first was for the “defensive investor,” and it’s the most famous. The second was a rule for investing in “Net-Current-Asset (or ‘Bargain’) Issues.” And the third was for the “enterprising investor.”
This article is an overview of screening; in subsequent articles I will be doing deep dives into some classic screens such as those by Benjamin Graham, William O’Neil, Joel Greenblatt, Joseph Piotroski, and James and Patrick O’Shaughnessy. Hundreds of thousands—perhaps millions—of investors use screens as a first step to picking stocks. Sometimes the screens have […]
Last year I created a screen on Portfolio123 that invests in companies in the Russell 3000 that spend heavily on R&D. (To access this screen, you need to either have a Portfolio123 account or start a free trial.) There were four rules in the screen, but the most important were: Expenditures on R&D that amount […]
A lot of investors talk about “market regimes.” This term can have several different meanings. Classically, a market regime is characterized primarily by four measures: interest rates, inflation, GDP growth, and unemployment; often added to these are characterizations of government fiscal and monetary policy. But one can also talk about a market regime in terms of which stock factors work and which don’t.
It has long been established that stocks with low variability in prices tend to outperform stocks with high variability. I’ve explored this in a few recent articles (A Tale Of Two Volatilities, Low-Volatility Stock Picking For High-Volatility Markets, and Why Alpha Works – And A New Way Of Calculating It), and the evidence for outperformance […]
For the past few years, investors have noticed what we call a “value inversion,” which appears to be getting progressively worse. Theoretically—and normally—stocks with low price-to-sales ratios (cheap stocks) outperform those with high price-to-sales ratios (expensive stocks). Such was the case over the majority of the current century, and indeed, as James O’Shaughnessy has shown in What Works on Wall Street, for most of the twentieth century too.
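The cheap-versus-expensive comparison behind this observation can be sketched in a few lines of Python. Everything here is synthetic and illustrative (the universe size, the return distribution, and the built-in value effect are assumptions, not data from the article); the point is only the mechanics of ranking on price-to-sales and comparing the extreme buckets:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical universe: 500 stocks with price-to-sales ratios and
# forward returns that embed a mild value effect for illustration.
ps_ratio = rng.lognormal(0.5, 0.8, 500)
fwd_return = rng.normal(0.02, 0.10, 500) - 0.01 * np.log(ps_ratio)

# Rank by price-to-sales; compare the cheapest decile to the most expensive.
order = np.argsort(ps_ratio)
cheap = fwd_return[order[:50]].mean()        # lowest price-to-sales
expensive = fwd_return[order[-50:]].mean()   # highest price-to-sales

# Positive in the "normal" regime; a persistent negative spread
# is what a value inversion would look like.
spread = cheap - expensive
```

With real data, `fwd_return` would come from subsequent price changes rather than a simulated distribution, and the spread would be tracked period by period.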
When we as investors talk about volatility, we’re usually talking about variability in price returns. If an investment goes up and down 5% to 10% per day, that’s high volatility; if it goes up and down 0.05% to 0.1% per day, that’s low volatility. It’s a relatively simple concept, and volatility is traditionally measured as the standard deviation of those returns.
But when we compare investments to each other, we start talking not only about variability in price returns, but also about beta. And the implicit assumption is that beta measures something very different from variability.
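To make the two measures concrete, here is a minimal sketch on synthetic daily returns (the 0.8 market loading, the noise level, and the 252-day sample are illustrative assumptions). Standard deviation captures a stock’s own variability; beta is its covariance with the market scaled by the market’s variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns: a "market" series and a stock that partially tracks it.
market = rng.normal(0.0005, 0.01, 252)                 # ~1% daily market volatility
stock = 0.8 * market + rng.normal(0.0, 0.008, 252)     # market loading plus own noise

# Volatility: standard deviation of daily returns, conventionally
# annualized by multiplying by sqrt(252 trading days).
vol_daily = stock.std(ddof=1)
vol_annual = vol_daily * np.sqrt(252)

# Beta: covariance with the market divided by market variance --
# equivalently, correlation times the ratio of the two standard deviations.
beta = np.cov(stock, market, ddof=1)[0, 1] / market.var(ddof=1)
```

Note that the two can diverge: a stock with large but market-unrelated swings has high volatility and low beta, which is why treating them as interchangeable is an assumption worth questioning.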
As Michael Mauboussin relates, not too long ago the Columbia Business School sent a group of students to meet with Todd Combs, the investment manager at Berkshire Hathaway and (currently) CEO of Geico. He recommended that they read 500 pages a day. The students were dumbfounded. Combs’s colleague at Berkshire, Vice Chairman Charlie Munger, has said, “In my whole life, I have known no wise people (over a broad subject matter area) who didn’t read all the time—none, zero.” And Warren Buffett himself has suggested that he devotes 80% of his working day to reading.
In high-volatility markets like the one we’re in now, low-volatility investing can offer considerable comfort. But it can also offer excess returns. In this article, I’m going to single out six basic factors (and their variations) that investors should explore when designing a low-volatility model, and I’m going to present an actual model on Portfolio123 […]
If you’re a quantitative investor or trader, you build a model and then backtest it to see if it has worked in the past; if you’re like most people, you try to improve your model with repeated backtests. You’re operating under the assumption that there will be at least some modest resemblance between what has worked in the past and what will work in the future. (If you didn’t assume that, you wouldn’t backtest at all.) But what few backtesters do after building their model is to try to break it by subjecting it to stress tests. A truly robust model should withstand every moderate attempt to break it.
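One simple form of stress test is to perturb each free parameter and re-run the backtest, checking that performance degrades gracefully rather than collapsing at nearby settings. Here is a minimal sketch on synthetic prices; the moving-average rule and the specific windows are illustrative assumptions, not a model from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical price series for the backtest (illustrative only --
# the point here is the stress loop, not the strategy itself).
prices = 100 * np.cumprod(1 + rng.normal(0.0004, 0.01, 1000))

def backtest(window: int) -> float:
    """Total return of holding the asset only when price > its moving average."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    held = prices[window - 1:-1] > ma[:-1]               # yesterday's signal
    daily = np.diff(prices[window - 1:]) / prices[window - 1:-1]
    return float(np.prod(1 + daily * held) - 1)

# Stress test: nudge the one free parameter and compare results.
# A robust model's returns should not fall off a cliff at 45 or 55
# if they look good at 50.
base = backtest(50)
perturbed = {w: backtest(w) for w in (40, 45, 55, 60)}
```

The same loop generalizes to other perturbations: shifting the start date of the test period, dropping random stocks from the universe, or adding slippage to transaction costs.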