I was doing some rough analysis of market valuations today compared to historical norms using a few long-term metrics that I like to monitor. These metrics are not timing signals, nor do they tell you where the market will go next. They are useful to understand where we are in the cycle and where we currently sit on the valuation spectrum.
This analysis is done only looking at the S&P500 Index, as a proxy for “markets”. Today the S&P500 closed at: 2529.19
CAPE (Shiller P/E)
CAPE is a crude metric that tracks price against average inflation-adjusted earnings over the trailing 10 years. This smooths out business cycles, and has proven to be a pretty good, but fuzzy, indicator of where business valuations stand relative to their long-term, durable earnings power. As you can see below, we’ve seen a significant improvement in this metric over recent weeks, but we still remain in overvalued territory.
The historical mean for CAPE going back more than 100 years has been 17. Today we stand at 24.2, which is 42.6% higher than the mean. A reversion to the historical mean would require a further 29.8% (42.6/142.6) decline from today’s (3/17) close. This would take the S&P 500 to $1775.49.
At the 2007 market peak, the CAPE was 27.4 – not much higher than where we still stand today. At the market’s bottom in 2009, the CAPE touched 13.30 (but not for long!), which was below the long-term mean of 17, and considered undervalued. If today’s S&P500 were to fall to a 13.3 CAPE, with present 10-year earnings, the S&P500 would need to fall to $1390.00, or another 45% from today’s prices.
Market Cap / GDP
Another indicator, made famous by Warren Buffett, is the proportion of the total Market Cap of all stocks relative to the US GDP. This is also a very crude metric, which can be impacted over time by things like private companies staying private longer (and being bigger), and public companies doing more and more business internationally, but I think it is still a useful heuristic to look at and consider. There are also counter-effects, such as government spending becoming a larger and larger portion of GDP, which would have the effect of moving this average down (businesses make up a smaller portion of GDP). Below you can see the historical total market cap in blue, and US GDP in green.
Another way to look at this is by plotting the ratio between the two. The historical average ratio is around 0.8 – that is, where the total market cap of public stocks is around 80% of US GDP.
At present, the total market cap is 115.9% of GDP, which means the market would need to fall 30.9% ((115.9-80)/115.9) to return to the historical average of 80%. If we use the S&P500 as a proxy for total markets, it would need to fall to $1745.77 to be valued at historical average levels by this metric. Present valuations, by this metric, are still higher than they were at the peak of the market in late 2007 (the total market cap was 110.10% of GDP then)!
At the market’s bottom in 2009, the market was valued at 57% of US GDP (and GDP had contracted, as I’m sure you’re aware). To get to an undervaluation of 57% of current GDP, with no contraction in GDP, the S&P500 would need to fall another 50.82% to $1243.86.
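The mean-reversion arithmetic behind all four of these targets is the same: scale the index by the ratio of the target valuation to the current one. A rough sketch using the figures quoted above (the tiny differences from the quoted dollar targets come from rounding of the intermediate percentages):

```java
public class MeanReversion {
    // Index level implied by a valuation ratio reverting from `current` to `target`.
    static double reversionTarget(double price, double current, double target) {
        return price * (target / current);
    }

    public static void main(String[] args) {
        double spx = 2529.19; // S&P 500 close used in this post

        System.out.printf("CAPE 24.2 -> mean 17:       %.2f%n", reversionTarget(spx, 24.2, 17.0));
        System.out.printf("CAPE 24.2 -> 2009 low 13.3: %.2f%n", reversionTarget(spx, 24.2, 13.3));
        System.out.printf("MktCap/GDP 115.9%% -> 80%%:   %.2f%n", reversionTarget(spx, 115.9, 80.0));
        System.out.printf("MktCap/GDP 115.9%% -> 57%%:   %.2f%n", reversionTarget(spx, 115.9, 57.0));
    }
}
```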
It’s interesting that both of these valuation metrics give very similar numbers for “fair value” on the markets of around $1745-$1775 on the S&P500, based on current GDP and current trailing earnings. They also give similar “undervaluation” prices, if we were to assume we achieve valuations similar to those reached in 2009, which would put the S&P500 in the $1250-$1400 range. Note that the 2009 undervaluation was relatively brief, and less severe than in virtually all recessions before it, with the 2000-2003 recession being the exception.
In 2008/2009 the market only went below the mean for < 12 months (around Dec 2008 - July 2009 on both metrics). The 2000 recession actually never reverted fully to the mean in CAPE valuations (seeing a low of around 21), but did go a bit below the mean in terms of Market Cap / GDP. With the benefit of hindsight, we know now that it would have been quite prudent to have invested during either of these windows.
Finally, I want to stress that the numbers above are not predictions! They are just simple mathematical extrapolations of historical data, assuming certain mean reversions were to happen. The market can stay overvalued (or undervalued) for quite some time; there’s no rule that these levels have to be hit, and certainly no indication of timing if they are. There are many other valuation metrics out there, but I tend to favor a long-term mindset, and think that markets will continue to be weighing machines in the long term and voting machines in the short term.
With everything going on with Coronavirus in the markets, I find it useful to keep an eye on the long term, and to understand that “this, too, shall pass”. There will be short-term pain, but the long-term value of good businesses will be little changed (see here for a good case as to why that is).
I’m getting into Java services at work, and I put together a guide with some principles / best practices to follow for Java 8. A lot of these things become much more possible with Java 8 than they would have been in earlier versions of Java, but they can also be applied to other programming languages and technologies.
I wrote this guide out of a desire to have code that is easy to reason about and understand, which makes it easy for new folks to jump into the codebase over time, and a desire to quickly produce high-quality code. Performance is not the major concern of this guide.
Many of the principles and ideas come from my foray into functional programming with both Haskell and Purescript, and I just find that these concepts are the way I prefer to program now. Following these principles gives me much more confidence in the correctness of my code, and gives me peace of mind knowing that I’m giving myself the best chance of understanding the code when I come back to it months or years later.
Without further ado, you can check out the guide here (inlined below).
Java 8 Coding Guide
This coding guide was written for a Java 8 project, but contains principles that are valid in many languages. It is aimed at producing code that is easy to reason about, easy to validate and test, easy to maintain, and that helps avoid common bugs and pitfalls.
Avoid explicit null (or, Don’t use null)
Null is famously the “billion dollar mistake” (and that’s probably an underestimate!). Don’t use null as a way of representing error or failure, and don’t use null as a way to avoid passing a certain argument to a function. Essentially, you should never explicitly use a null value unless you are checking that something is not null, and even those cases are usually better handled by an Option<T>.
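As a minimal sketch of the idea, here is an absence-as-a-value lookup using the JDK’s built-in Optional (vavr’s Option<T> has a very similar shape); findEmail and the user ids are invented for illustration:

```java
import java.util.Optional;

public class UserLookup {
    // Hypothetical lookup: returns Optional.empty() on a miss instead of null.
    static Optional<String> findEmail(String userId) {
        if ("alice".equals(userId)) {
            return Optional.of("alice@example.com");
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // The type forces every caller to handle the missing case explicitly.
        String email = findEmail("bob").orElse("no-email@example.com");
        System.out.println(email); // prints no-email@example.com
    }
}
```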
Prefer expressions over statements
Expressions are declarative and evaluate to (and return) a value, while statements are imperative bits of code that perform actions. Because of this, expressions are easier to reason about, debug, and verify for correctness.
A simple example is an if statement vs. an if expression. if in Java does not return a value, nor does Java enforce that an if has an else branch. Additionally, if there is an else branch, it does not have to result in the same type of value as the if branch. This makes if statements hard to reason about and understand. The ternary form predicate ? value : value is preferable to if/else statements.
In a language like Java, statements will be required, but we should isolate those to their own, small, methods, and keep their surface area as small as possible to reduce risk and complexity from these parts of the code.
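For example, the same logic written both ways (a toy scoring example, not from any real codebase):

```java
public class Ternary {
    // Statement form: a mutable local, two assignments, and nothing forcing
    // both branches to exist or to produce the same type.
    static String labelStatement(int score) {
        String label;
        if (score >= 50) {
            label = "pass";
        } else {
            label = "fail";
        }
        return label;
    }

    // Expression form: a single value, both branches required, one type.
    static String labelExpression(int score) {
        return score >= 50 ? "pass" : "fail";
    }

    public static void main(String[] args) {
        System.out.println(labelExpression(72)); // prints pass
    }
}
```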
Avoid explicit loops
With Java 8, we have streams, and should no longer need explicit loops. Loops are very repetitive and error-prone, so we should leverage abstractions that handle looping for us, avoiding bugs, verbosity, and complexity in our code.
There may be some cases where loops are more desirable (possibly for performance), but they should be avoided until it is clear they are actually needed.
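A small illustration of the trade-off (the list of names is invented): the equivalent for-loop would need a mutable accumulator and branch logic, while the stream pipeline states the intent directly.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsOverLoops {
    // Filter, transform, and collect in one declarative pipeline,
    // with no mutable accumulator or index bookkeeping.
    static List<String> shoutingANames(List<String> names) {
        return names.stream()
                .filter(n -> n.startsWith("a"))
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(shoutingANames(Arrays.asList("ada", "alan", "grace")));
        // prints [ADA, ALAN]
    }
}
```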
Avoid throwing exceptions
In general, we should never throw exceptions of our own. Obviously we will use libraries that throw exceptions; these exceptions should be caught directly in the function/method using the library, and turned into values (to be returned) as much as possible.
This is because exceptions are like goto statements, which make the flow of your program much more difficult to reason about. Every function that throws an exception can return in two ways, through the “normal” return flow or through an exception, and every user of that function has to think about both cases. It is generally easier to reason about functions that only return values (something like vavr’s Either or Try types works very well for this). This also makes it much easier to leverage Java 8’s lambda-using utilities, as they don’t like accepting Functions that throw exceptions.
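A sketch of catching at the boundary and returning a value, using Optional to stay dependency-free; vavr’s Try.of(() -> Integer.parseInt(s)) expresses the same idea while also preserving the error:

```java
import java.util.Optional;

public class SafeParse {
    // Catch the library's exception right at the call site and turn it
    // into a value, so callers only ever see the normal return flow.
    static Optional<Integer> parseInt(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(parseInt("42").orElse(-1));   // prints 42
        System.out.println(parseInt("oops").orElse(-1)); // prints -1
    }
}
```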
Fail fast
It is always best to fail as early as possible, at the soonest point that you find something is wrong. This often means that we should validate all of our data up front, to decouple validation from the business logic that needs to use the values in question.
Additionally, the more we can make fail at compile time rather than run time, the better off we will be at finding bugs earlier. Use Java’s type system as much as you can to enforce invariants statically!
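One cheap way to move a failure from run time to compile time is replacing stringly-typed values with an enum (a toy Status example, not from any real project):

```java
public class FailFast {
    // With an enum, an unknown status simply cannot be constructed;
    // the equivalent String-based check could fail at run time on a typo.
    enum Status { ACTIVE, SUSPENDED, CLOSED }

    static boolean canLogIn(Status status) {
        return status == Status.ACTIVE;
    }

    public static void main(String[] args) {
        System.out.println(canLogIn(Status.ACTIVE));    // prints true
        // canLogIn("activ") would not compile -- the typo is caught statically.
        System.out.println(canLogIn(Status.SUSPENDED)); // prints false
    }
}
```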
For our own data objects, we should have strong validation of each object, so that once we’ve constructed one, we know it will have good data in the rest of our program. Failed construction (on validation errors) should also be expressed by returning values, rather than exceptions. Something like vavr’s Validation type is useful for this purpose. Make it impossible to construct a “bad”/invalid data object!
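A minimal sketch of the pattern with a private constructor and a validating factory returning Optional (vavr’s Validation additionally accumulates multiple errors; the Email type and its "contains @" rule are invented for illustration):

```java
import java.util.Optional;

public final class Email {
    private final String value;

    private Email(String value) { this.value = value; }

    // The only way to obtain an Email is through validation, so any Email
    // instance elsewhere in the program is known to be well-formed.
    static Optional<Email> of(String raw) {
        return (raw != null && raw.contains("@"))
                ? Optional.of(new Email(raw))
                : Optional.empty();
    }

    public String value() { return value; }

    public static void main(String[] args) {
        System.out.println(Email.of("a@b.com").isPresent());      // prints true
        System.out.println(Email.of("not-an-email").isPresent()); // prints false
    }
}
```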
Prefer composition over inheritance
Inheritance can be very fragile, and can make refactoring complex code involving inheritance hierarchies very difficult to do safely and confidently. Composition (and, in Java, interfaces) should be favored over inheritance where possible.
Often inheritance is required by certain frameworks / libraries, but we should limit the depth of inheritance hierarchies that we use as much as possible.
Composition, via composing interfaces or composable types, allows for more design flexibility, and easier, more confident refactoring.
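A sketch of the difference: instead of extending a base service class, the service below receives its collaborators through small interfaces, so each piece can be swapped or tested independently (Logger, Clock, and Service are invented names):

```java
public class CompositionDemo {
    interface Logger { void log(String msg); }
    interface Clock { long now(); }

    // Composes its collaborators rather than inheriting behavior from a
    // base class; refactoring one collaborator cannot break the others.
    static class Service {
        private final Logger logger;
        private final Clock clock;

        Service(Logger logger, Clock clock) {
            this.logger = logger;
            this.clock = clock;
        }

        void handle(String request) {
            logger.log(clock.now() + " handling " + request);
        }
    }

    public static void main(String[] args) {
        // Both collaborators are supplied as lambdas/method references.
        Service s = new Service(System.out::println, System::currentTimeMillis);
        s.handle("ping");
    }
}
```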
The recent news around Hillary Clinton and her lack of prosecution for clearly violating many laws (even in the opinion of would-be prosecutors) just perpetuates the growing global sentiment of “Us” vs. “Them”. This is a growing divide, and more people are becoming disgruntled, disenfranchised, and disconnected from the system every day.
There’s a political class that is clearly treated differently, and beholden to a different set of rules, and people the world over are getting fed up with it. The people of the United Kingdom were fed up with the far off “them” in the EU, and the people in France, Italy, Spain, and Greece are pretty fed up too.
Where will this lead? Ultimately it’ll land us in a better place. People will demand more fairness, and a closing of the gap between “Us” and “Them”, but it may get pretty nasty in the interim.
Don’t tell people things. That is arrogant. Ask questions and let them discover their passions, their conflicts, their misunderstandings, and who they are through their own answers and internal discoveries.
In doing so, you may even end up learning more about yourself.
I was re-reading an old Mish article on Bitcoin, and came across this passage:
Yet, should that happen or even be suspected, how long will it take before there is a government crackdown on bitcoin? Should such a thing happen, the value of bitcoin can plunge to next to nothing if confidence is lost.
But hasn’t gold been cracked down on by many governments in the past, including the US? What happened to its value on the black market in those cases? Certainly, over the long term, we can see that the crackdowns haven’t had much impact on the price of gold, other than maybe in the upward direction.