Decisions: Rational, Right.
Decisions Can Be Right Without Being Rational, and Rational Without Being Right. While being rational is somewhat in our hands, getting the decisions right is certainly not.
In the battle of Narva (on the border between Russia and what we now call Estonia) on November 20, 1700, King Carl of Sweden and his 8,000 troops attacked the Russian army, led by Tsar Peter the Great. The tsar had about ten times as many troops at his disposal. Most historians agree that the Swedish attack was irrational, since it was almost certain to fail. Moreover, the Swedes had no strategic reason for attacking; they could not expect to gain very much from victory. However, because of an unexpected blizzard that blinded the Russian army, the Swedes won. The battle was over in less than two hours. The Swedes lost 667 men and the Russians approximately 15,000.
— Martin Peterson. 2009. An Introduction to Decision Theory.
The Swedish army didn't have any good reason to attack a Russian army that significantly outnumbered it. It could not have predicted the blizzard that would dramatically improve its chances of winning. Yet, it attacked. It is easy to see that the decision to attack wasn't a well-informed one; an act of valour maybe, but definitely not a rational one. And yet, despite the irrational decision, the outcome turned out to be the most favourable one for the Swedish army — they won!
So an irrational decision turned out to be right; but surely a rational decision has to be right?
Software development is much more than just writing code. There are meetings to be attended, work to be prioritised, documentation to be kept up to date, customers to be spoken with, new teammates to be onboarded, other teams to be collaborated with, dependencies to be managed, etc. — aka the “glue work”. Glue work is indispensable, and its impact is significant; it underscores the crucial role that a good manager plays in the team. It is rational to keep picking up glue work from time to time and keep the team moving forward. Counterintuitively, it can be a severely career-limiting decision when you’re a junior. In her excellent talk, Tanya Reilly captures this “Glue work is expected when you’re a senior, and risky when you’re not” phenomenon brilliantly. When you’re a junior and you concentrate on glue work more than on technical tasks, the appraisers tend to conclude either that you’re less technical or that you’re more managerial material. Either way, in their eyes you’re less of “an engineer” and more of “a manager”, and your promotion will suffer for it. This is great if management appealed to you in the first place; otherwise, the rational choice of doing glue work has proven to be a wrong one. This is an interesting thing to notice — a choice which is rational in the eyes of one person (the junior) is an irrational one in the eyes of another (the junior’s appraiser).
Rationality
Decisions seem right or wrong after the fact, depending upon whether the outcome is favourable or not1. This temporal relation between taking a decision and observing its outcome makes it impossible to know upfront whether a decision is going to be the right one. But what can be said at the time when we are about to make the decision? What about the rationality of our decisions? That is the pre-hoc analysis.
Most of us have an intuition about what rationality is. If the forecast says there is a 70% chance of rain, it is rational to carry an umbrella with you before you step outside; it would be irrational not to. Whether it actually rained that day and your umbrella protected you is post-hoc. Similarly, if you prefer an apple over an orange, and an orange over a banana, it would be irrational to choose a banana when a juicy apple was lying there in your fruit basket2. Generally speaking, we have an aim — to not get wet, to enjoy a fruit, to get a promotion, to be recognised. Rationality is the act of choosing the decision(s) which put us on the most optimal path to achieving that aim. This “means to an end” notion of rationality is, unsurprisingly, criticised throughout the literature; for instance, what if your aim is to get rich and you steal to achieve it3?
Our primary concern, then, should be making rational decisions rather than the right ones.
We could also say that rationality is choosing the act (from all available acts) which maximises the value we expect to receive from the outcome.
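To make this concrete, here is a minimal sketch in Python of picking the act with the highest expected value, using the umbrella example above. The utility numbers are made-up illustrations, not something the theory prescribes.

```python
# A minimal sketch of "choose the act that maximises expected value",
# using the umbrella example. The utility numbers are made-up illustrations.

P_RAIN = 0.7  # the forecast: 70% chance of rain

# Utility of every (act, weather) pair.
utilities = {
    ("carry umbrella", "rain"):     5,   # stay dry, minor hassle
    ("carry umbrella", "no rain"): -1,   # carried it around for nothing
    ("no umbrella",    "rain"):   -10,   # soaked
    ("no umbrella",    "no rain"):  0,   # nothing happens
}

def expected_value(act: str) -> float:
    """Probability-weighted value of an act over all weather outcomes."""
    return (P_RAIN * utilities[(act, "rain")]
            + (1 - P_RAIN) * utilities[(act, "no rain")])

acts = ["carry umbrella", "no umbrella"]
for act in acts:
    print(f"{act}: expected value = {expected_value(act):+.2f}")

print("rational choice:", max(acts, key=expected_value))
# carry umbrella: +3.20, no umbrella: -7.00 -> carrying the umbrella is the
# rational act, whether or not it actually rains (which is what decides "right").
```

Note that the maximisation ranges over acts, not over outcomes: the act with the highest expected value can still lead to an unfavourable outcome, which is exactly the rational-but-not-right case.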
“Writing tests will take us longer; let’s stop for the time being and get the release out first. If we don’t have a release, if we don’t have users using it, what is even the point of these tests?” There goes testing out of the window! “Vulnerability checks are time consuming; let’s hold on till the release. If we don’t have a release, if we don’t have users using it, what is even the point of having secure software?” There goes security out of the window! “We can’t address the tech debt right now. If we don’t have a release, if we don’t have users using it, what is even the point of having maintainable software?” And all the tech debt was surely addressed after the release!?! Quite a few times such decisions turn out to be right4. In most other cases, we minimise our optionality for the future and end up regressing. “Why are we not going any faster? — because there are regression failures.” “Why are there regression failures? — because we don’t have tests!”
Bounded Rationality
We know that humans make irrational decisions under bias, and sometimes quite predictably so. We make quite reasonable decisions based on the information we have, but the information we hold is itself limited — either we miss the feedback from a distant yet related part of the system, or the feedback itself is delayed, or we deny the information altogether because it is dissonant with our mental model.
Most systemic failures happen because of our bounded rationality5. For instance, in Westrum’s bureaucratic and pathological organisations, it is most rational to withhold information pertaining to failures, because either you will be punished or the department will justify its act and ignore the information (read: sweep it under the rug). One is still maximising the expected value; it’s just that what is best for the individual turns out not to be so good for the whole organisation.
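A toy sketch of that mismatch, with entirely made-up payoff numbers: the individually rational act and the organisationally rational act point in opposite directions.

```python
# Illustrative (made-up) payoffs for one engineer in a pathological organisation,
# where reporting a failure gets punished and hiding it is locally safe.
payoffs = {
    "report the failure": {"individual": -5, "organisation": +8},  # punished, but the org learns
    "hide the failure":   {"individual": +1, "organisation": -8},  # safe for now, the org stays blind
}

best_for_individual = max(payoffs, key=lambda act: payoffs[act]["individual"])
best_for_organisation = max(payoffs, key=lambda act: payoffs[act]["organisation"])

print("individually rational:    ", best_for_individual)    # hide the failure
print("organisationally rational:", best_for_organisation)  # report the failure
```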
This mismatch highlights an important (obvious once seen, yet overlooked most of the time) aspect of decision making — the leverage points6. The reason leverage points become so integral to decision making is that most of our decisions are influenced by culture, and they change over time and across cultures. Rationality of decisions at the pivotal leverage points, then, becomes the prime directive. No amount of policies or practices (read: post-mortems, retrospectives, root-cause analyses, etc.) is impactful enough to bring about considerable change until the organisation learns how to embrace the information around failures, unless the organisation treats its mission as the most important thing above everything else, and unless the organisation has a mission to begin with!
Closing thoughts
Surely not all decision types are the same, because circumstances are not the same. For some decisions the outcome is quite certain; for instance, if you drop an apple from a tree, it will fall to the ground (how poetic!). In other cases, we know all possible outcomes and their chances (probabilities), yet the final outcome is unknown. For example, tossing a coin: we know that with an unbiased coin there is a 50% chance of the toss resulting in heads and a 50% chance of it turning out to be tails, yet the final outcome can only be determined after the toss. These are decisions under risk. Then there is another class of decisions where we know all the possible outcomes, but don’t know their respective probabilities. For example, going to a new restaurant (so no reviews are available) and wanting to try out an exquisite dish: we know that if the chef is really good the dish will be a treat, and if the chef is not, the dish will turn out to be a disaster. But we don’t know the chances of either outcome. These are decisions under uncertainty. Quite a lot of decisions in our day-to-day life and work are of the risky and uncertain kind.
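Here is a small sketch of both cases, again with made-up payoffs. For the uncertain case it uses the maximin rule (pick the act whose worst outcome is the least bad), which is just one of several candidate rules from normative decision theory.

```python
# Contrast a decision under risk (probabilities known) with a decision under
# uncertainty (probabilities unknown). Payoff numbers are made up.

# Under risk: betting on a fair coin toss.
p_heads = 0.5
bet_payoffs = {"heads": 10, "tails": -10}
expected_bet = p_heads * bet_payoffs["heads"] + (1 - p_heads) * bet_payoffs["tails"]
print("expected value of the bet:", expected_bet)  # 0.0

# Under uncertainty: the new-restaurant example. We know the possible states
# (good chef, bad chef) but not their probabilities.
dish_payoffs = {
    "order the exquisite dish": {"good chef": 10, "bad chef": -8},
    "order the safe dish":      {"good chef": 4,  "bad chef": 2},
}

def maximin(acts: dict) -> str:
    """Pick the act whose worst-case outcome is the least bad."""
    return max(acts, key=lambda act: min(acts[act].values()))

print("maximin choice:", maximin(dish_payoffs))  # -> order the safe dish
```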
The normative techniques for taking rational decisions under risk differ from those under uncertainty. (I wish to cover them in subsequent articles.) Our heuristics and bounded rationality might not make us perfect decision-making machines, and one might question the importance of studying normative methods of decision theory. To that, I will leave the readers with a quote from Peterson (2009):
“Anyone wishing to know what makes a rational decision rational should study normative decision theory. How people actually behave is likely to change over time and across cultures, but a sufficiently general normative theory can be expected to withstand time and cultural differences.”
Note that sometimes we use “right” to refer to a rational decision; for example, it is the right thing to carry an umbrella when the meteorological department has predicted high chances of rain. However, here I’m strictly classifying right vs wrong based on the outcome of a decision, and rational vs irrational based on the act of making the decision.
These are some examples of how a VNM-rational (von Neumann–Morgenstern) agent would act.
This notion of “means to an end” rationality is called instrumental rationality. It is also argued that morality is different from rationality: what might be an immoral thing to do, like stealing from others, might still be rational for the thief, since it maximises the most favourable outcome for him — getting rich. The aim itself is outside the realm of decision theory, and most decision theorists are interested in how we make the decision rather than in what the aim is. Peterson (2009) quotes another criticism of instrumental rationality: “Philosopher John Rawls argues that an aim such as counting the number of blades of grass on a courthouse lawn is irrational, at least as long as doing so does not help to prevent terrible events elsewhere. Counting blades of grass on a courthouse lawn is not important enough to qualify as a rational aim. In response to this point it could perhaps be objected that everyone should be free to decide for herself what is important in life. If someone strongly desires to count blades of grass on courthouse lawns, just for the fun of it, that might very well qualify as a rational aim.”
Especially in cases when the product is yet to find its product-market fit (PMF), or you’re working on an MVP to evaluate your hypothesis and are quite sure that the code you’re writing is not production quality and will have to be rewritten.
Bounded rationality is a concept proposed by Herbert A. Simon, an American political scientist, in his 1957 book “Models of Man”. Simon noted that we’re not omniscient, rational optimisers. Rather, we are blundering “satisficers”, attempting to meet (satisfy) our needs well enough (sufficiently) before moving on to the next decision.
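A toy contrast between the two, with made-up option scores and an assumed aspiration level:

```python
# A satisficer vs an omniscient optimiser (all numbers are made up).
options = [("option A", 6), ("option B", 9), ("option C", 7)]
ASPIRATION = 5  # the "good enough" threshold

# Satisficer: accept the first option that clears the threshold and move on.
satisficed = next(name for name, value in options if value >= ASPIRATION)

# Omniscient optimiser: examine every option and pick the maximum.
optimal = max(options, key=lambda option: option[1])[0]

print(satisficed, optimal)  # option A, option B -- "good enough" need not be "best"
```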
There are several leverage points in a system. Culture and paradigms (mental models) are among the most impactful ones. Counterintuitively, people are quite low on the list, primarily because replacing people doesn’t change the system much. Certain changes of people are more impactful than others (for example, a change in leadership), but the underlying skeleton of the system is still difficult to influence at this level.