Things go wrong
The word “model” has a bit of a negative connotation in the financial world these days. After all, bad models have received blame on many occasions when things went wrong. For example, take a look at this quote:
"Time alone will tell whether [this date] enters the history book as the day American confidence was so shaken that a premature recession resulted or merely as the day the computers went wild.”
Which memorable market event does this quote describe? The recent COVID crash? The 2010 Flash Crash? The 2008 Financial Crisis? The blowup of Long-Term Capital Management? If the answer were any of those events, you probably wouldn’t be surprised.
But, it’s actually from Louis Rukeyser in October 1987, after Black Monday. So, the point here is that computers, and the models they run, have been blamed for bad things happening for a very long time.
Blame game
Models are tools. If you give a tool, like a chainsaw, to someone who has never used one before and something goes wrong, do you blame the chainsaw? Maybe! Maybe if the chainsaw had a weird feature that whenever it touched wood the chain would snap off and fly at the person holding it!
But, even if you have a really good tool, like a hammer, which has been used for a really long time and is pretty popular amongst people who use tools, and you give it to someone who doesn’t know how to use a hammer (pray you never meet this person), something could go horribly wrong.
Tools can cause problems because they are flawed and/or because they are used improperly. The same goes for models. It’s just that when it comes to models, which are tools that aim to simplify the complex and chaotic phenomena of the world around us, you basically know they are going to be wrong. Hence, the old adage “all models are wrong but some are useful.”
Thus, it’s critical to understand the assumptions and limitations of any model. After that, if the weaknesses of the model are acceptable given the benefits the model offers, it’s important to make sure it’s used properly. Basically, be super careful.
Careful isn’t always good enough
Consider this season’s hottest modeling trend for us nerds, machine learning. These models are notoriously opaque, difficult to understand, and provide results that are sometimes impossible to justify. But, they can also be built to do neat things like beat the best players in the world at Chess and Go. These models have weaknesses in certain ways but if your goal is to embarrass people who have dedicated their lives to a game, then they are useful.
That’s why, when we look at the catastrophic role that models can play in terrible financial events, we might want to be somewhat skeptical of algorithms that influence our behavior as a species. Think about the complex web of models that determine what information we see whenever we venture onto the internet. These tools aren’t hammers that have been in use for thousands of years and are well understood. Much of this infrastructure is extremely new, evolving quickly, and the behavioral properties of its emergent complexity are poorly understood. So, that’s just slightly terrifying.
But, don’t worry, the owners of these models will claim, in their vested interest, that they are totally safe. At least, they are if you ignore the link to the massive social engineering efforts that models have played a part in during several notable disturbing events in our very recent history.
What can we do?
Time, last I checked, moves in one direction. And the drastic (yet largely unseen) changes occurring in our world during this fascinating period of our fourth industrial revolution aren’t going to stop just because we feel a bit uncomfortable. We can, however, nudge these changes toward a path of true progress.
On an individual basis, consider your own role in the feedback loop of information that you give off explicitly (e.g., entering information into the web, or talking to a smart device) and, perhaps more importantly these days, implicitly (e.g., the behavioral information that apps or websites collect from you in the background that you don’t even notice). The more information you shed, the more algorithms can process and the more they will tailor themselves to you in ways that may or may not be accurate but will certainly be of benefit to those that own the algorithms. Back in the old days, people had this concept called “privacy” and personal information wasn’t given out so flippantly. Maybe that wasn’t such a bad way to be.
Sadly, the individual part basically feels impossible unless you are willing to cut your internet and wear a tinfoil hat. Worse yet, the information you shed as an individual may be worth it in some aspects when you consider the benefits. Who doesn’t like asking Alexa what the temperature is three different ways before she finally understands you and gives you the weather update you need?
We can’t go it alone
This whole dilemma is kind of like recycling. Sure, if every individual recycled, the world would be a better place, but it wouldn’t fix the underlying problem: we still create things that pollute the planet! Worse yet, even with perfect compliance on an individual basis, we don’t stop pollution if big companies don’t do anything to recycle.
There needs to be collective pressure on organizations to use information and models properly. We did that with banks following the financial crisis. Regulators put out guidance for models and now it’s an absolute nightmare to build stuff unless you can prove up and down that it’s not going to do bad things. Why isn’t there the same pressure on technology companies which use personal data? That kind of data certainly feels way more sensitive than some market information going into a financial model.
What do we want? Not bad models! When do we want it? Soon, if possible, please…
It’s about this point when people start bringing the idea of ethics into this. But, to me, the argument about “ethics” when it comes to these things ends up creating distracting debates about “well what really is ethical?” and a bunch of philosophers start asking you whether you would let a train run over one guy on a track or five guys on another track. Like, no.
We should demand transparent analysis of the holistic outcomes created by the use of data and models. Full stop.
Model complexity is not an excuse. It’s a red flag. Proprietary privilege is a load of garbage when it comes to models that literally change millions of people’s behavior and influence their thinking.
We can’t trust companies that tell us everything is fine, good for our algorithmic living environment, and a net positive for society when they could be covering up their creation of a digital toxic dump filled with the byproducts of their actions. That stuff doesn’t fly for a company building widgets and it shouldn’t fly for a company building models. We should demand proof. Not marketing.
We’ve done it before
Again, we went through this already with banks. They said, "hey, don’t worry about our massive collection of mortgage-backed securities; they make us lots of money, and they’ve been given the top credit rating by agencies that definitely don’t have perverse incentives to rate this stuff well. Our models tell us everything is okay."
Well, the models didn’t work right, and they were part of a massive financial infrastructure that blew up spectacularly, cost tons of taxpayer money to fix, ruined people’s retirements, cost millions of jobs, created a lost decade of growth, and left structural scarring on our economy that remains to this day. And, many people argue that the 2008 Financial Crisis turned out to be not as bad as it could have been. Imagine that! If we got the best outcome we could have hoped for, then what was the downside scenario? Because I’m basically imagining scenes from Mad Max.
Calling it out
Like many things in life we eventually come to realize are an issue, either slowly over time (like learning that smoking causes cancer or that we should wear seatbelts in a car) or very quickly (like learning not to fill airships with hydrogen after the Hindenburg), it’s not always apparent how momentum for change will manifest. Sometimes, with certain issues, meaningful change takes an incredibly long time. After all, I’m sure each of us can name a few major problems in the world that aren’t getting tackled.
But, like any issue, who doesn’t love calling out bad practice when it’s deserved? In this case, it’s one nerd calling upon other nerds out there to build good models and use them responsibly!
We don’t need more things in life that we think are simple hammers but turn out to be unstable chainsaws.