Sunday, October 30, 2022

Internal & External explanations

In this essay, I will describe two distinct ways in which a system can be understood or explained. I call them the Internal explanation and the External explanation. This is a very useful distinction to keep in mind when trying to understand a new system.

The internal explanation tells you what you see when you "pop the hood" of the system. For software, the source code is an internal explanation; so is a diagram that shows how data flows through the system. For a car, a schematic or a blueprint is an internal explanation. For the human body, internal explanations include the genome, an anatomical diagram, or an illustration of the Krebs cycle. For a country, internal explanations are laws, economic data, or population statistics.

The external explanation, on the other hand, tells you about how the system interacts with its surroundings. It tells you what kind of selection pressures it's subjected to, what risks it needs to mitigate to survive, or what it's optimized for. It tells you why a system is the way it is, rather than how it works. For software, a use case is an external explanation. For a car, we have urban planning and traffic models. For the human body, the theory of evolution combined with a history of humans' ancestral environment is an external explanation. For a country, we have political science, as well as models for things such as trade, war, and crime.
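
To make the software example concrete, here is a small, hypothetical sketch (the names, rates, and use case are invented for illustration, not taken from any real system): the function body is an internal explanation of how the price is computed, while the comment above it is an external explanation of why the code exists at all.

```python
# Hypothetical illustration only: names, rates, and the use case are invented.

# External explanation (why the code exists): an online shop must show
# customers prices that include their country's VAT, and the applied rate
# has to be easy to audit.
VAT_RATES = {"SE": 0.25, "DE": 0.19, "FR": 0.20}  # illustrative rates


def price_with_vat(net_price: float, country: str) -> float:
    """Internal explanation (how it works): look up the country's VAT rate
    and add it to the net price."""
    return net_price * (1 + VAT_RATES[country])


if __name__ == "__main__":
    print(price_with_vat(100.0, "SE"))  # -> 125.0
```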

| System   | Internal explanations             | External explanations                     |
|----------|-----------------------------------|-------------------------------------------|
| Software | Source code, architecture diagram | Use case, threats                         |
| Car      | Schematic                         | Urban planning                            |
| Body     | Genome, anatomical diagram        | Theory of evolution, evolutionary history |
| Country  | Formal structure, laws            | Military history, diplomatic realities    |
| Company  | Cap table, company policies       | Competitors, customer behaviour           |

Explanations and education

I'm tempted to propose the following heuristic: for every system that you don't have to be an expert in, be biased towards learning external explanations. For systems that you do have to be an expert in, focus on internal explanations (though obviously don't forget about the external ones).

The less you need to know about a subject, the more you should focus on how it fits in with everything else, rather than how it works internally.

Schools like teaching internal explanations, maybe because curricula were at some point put together by experts in their respective fields, and experts are biased towards internal explanations of their field.

Personally, my formal education (engineering) consisted of over 95% internal explanations of the subjects. The context was rarely made explicit, and more often implied by the examples that were brought up. The external explanations mostly had to be learned during my first years of working. Things like: how is technology developed and maintained in real life? What is the role of a single engineer? What is the history of the engineering profession, and how has that affected how we teach engineering?

What about the best order to learn things? Suppose you want to study for 5 years to become an expert in a system. Maybe the optimal ratio is to spend 4 years learning the inner workings and 1 year learning context. But is it smarter to put the context first and then go into more depth, or to study the main theoretical results first and pick up the context later?

I think it's best to basically follow the graph and do most of the external explanations first, only picking up internal explanations later if it becomes necessary. The argument in favor of that is that it's more agile: you're less likely to learn something just because it sounds like a thing you should learn. The argument against is something I heard quite a few times during my education: that if you don't learn the internal explanations properly first, you might do some irrecoverable damage to your understanding of the field. But I don't think that argument holds. Programming is something I learned by trial and error as a teenager. When I finally did take a programming class, I of course had some terrible habits. However, most of that could be corrected by the courses, and the classes were more rewarding because I had tried by myself first.

Explanations and information representation

Something I noticed when making the examples is that internal explanations are much more likely to lend themselves to a compact, complete representation (such as source code). Many external explanations, on the other hand, are theoretical frameworks that need to be adapted to the system in question (for instance: how does the theory of evolution apply to this species?). Other external explanations are just unstructured sets of data points that need further filtering, narration, or interpretation to be useful (for instance: the digital revolution has affected this particular corporation in a million different ways; which ones really mattered?).

There are clearly many advantages to a compact representation: it's easier to teach, easier to discover gaps in one's own knowledge, and easier to test a person's knowledge of the subject matter. Perhaps we should make an effort to create compact representations of external explanations? Maybe that would make them more palatable to institutions?


Explanations and the history of science

Let's take the following narrative: in the early Western history of science, internal explanations were very dominant, and the people in charge suppressed external explanations out of ignorance or narrow-mindedness. It was only in the 20th century that external explanations and holistic views of systems started being taken seriously, but the bias still persists. The best counterargument I can think of is that internal explanations have a longer lifetime, and the external explanations or holistic models of the past have mostly become irrelevant. The asymmetry here is that internal explanations tend to make fewer assumptions. However, there are a couple of external explanations that have proved to be very long-lived: the theory of evolution and the theory of microeconomics. So maybe there really is something useful in trying to find great external explanations.

A compromise narrative that doesn't throw out history while still providing a path forward could be: reductionism, the prevailing scientific philosophy, is about dividing a system into parts, describing the individual parts, and then putting everything back together. Our predecessors in the history of science have mostly done the work of describing the parts, and it is our role to put everything together.

Tuesday, March 15, 2022

I don't want value. I want to be lucky.

A fundamental assumption that I make as part of a decision process is that I want as much value as possible. A recent thought experiment has made me doubt this.

Background: I arrive at a train station that I've never been to before, where I'm going to catch the next departure. I don't know anything about how often trains run from this station, or when the next one will be. Now I consider which of these scenarios will make me happier:

Scenario A: The train departs every 10 minutes. When I get there, the previous train is just pulling out of the station, which means I will have to wait for 10 minutes.

Scenario B: The train departs once every hour. When I get there, the next one is scheduled to arrive in 12 minutes.

The contrast here is of course: Scenario B has a longer wait time, i.e. the value to me is strictly lower. However, Scenario B clearly feels luckier. 
If I could press a button to land in either scenario, I would press B. The thought of Scenario A just pisses me off too much, whereas thinking about B feels like lowering myself into a hot bath. Am I stupid?

I think the source of my irrational instinct here is that in Scenario A, I immediately take the high frequency of the train service for granted, not realizing that I've actually been lucky with respect to the distribution of train services. It is also easy to imagine a peer who arrives just a minute earlier in Scenario A. In Scenario B, at least the unluckiness does not just affect me. Preferring B now seems pretty selfish of me. Perhaps this is a preference we should try to fight? 
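
One way to make both intuitions concrete is a quick back-of-the-envelope calculation. A minimal sketch, under the added assumption (not stated in the scenario) that I arrive at a uniformly random moment within the interval between two departures: the expected wait measures value, and the percentile of my realized wait within that interval measures how lucky the draw was.

```python
# A minimal sketch, assuming arrival at a uniformly random moment
# within the interval between two departures (the headway).

def expected_wait(headway_min: float) -> float:
    """With evenly spaced departures and a uniformly random arrival time,
    the expected wait is half the headway."""
    return headway_min / 2


def unluckiness(wait_min: float, headway_min: float) -> float:
    """Fraction of possible arrival moments that would have waited less:
    0.0 is the luckiest possible draw, 1.0 the unluckiest."""
    return wait_min / headway_min


# Scenario A: 10-minute headway, realized wait of 10 minutes.
# Scenario B: 60-minute headway, realized wait of 12 minutes.
for name, headway, wait in [("A", 10, 10), ("B", 60, 12)]:
    print(f"{name}: expected wait {expected_wait(headway)} min, "
          f"realized wait {wait} min, unluckiness {unluckiness(wait, headway):.1f}")

# A: expected wait 5.0 min, realized wait 10 min, unluckiness 1.0
# B: expected wait 30.0 min, realized wait 12 min, unluckiness 0.2
```

By every value measure A wins: the expected wait is shorter and even the realized wait is shorter. Yet the unluckiness number is maxed out in A and well below average in B, which matches the instinct to press the B button.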

Are there any other situations that could provoke the same irrational decision, perhaps in something that matters more?