Saturday, February 8, 2025

A short bestiary of types of conversation

Surface level

Small talk

The conversation doesn't matter very much to either participant. It is mainly there to prevent an awkward silence or to pass the time until it's socially acceptable to separate. Nothing important is learnt or shared.

Conference

The word conference comes from the Latin word conferre, which means to compare. By conference I mean any conversation that is about sharing information one party doesn't have, but not much more than that. The sharing can go both ways. Examples of conferences are: tips on things to buy between consumers, tips on rules and regulations between homeowners, or system knowledge between coworkers. Canned 'fun facts' may also belong here.

Conferences often happen between people who are getting to know each other. Sooner or later it's usually necessary to go through the ways the two are different, in order to establish what feels like a real relationship: that they are from different places, come from different cultures, or are different ages. However, conference is not a substitute for shared reality conversations (see below). A failure mode of romantic relationships is to mistake 'conference type' conversation with your new partner for having a lot to talk about. Conference is a finite resource: sooner or later you simply know everything the other person is willing to share about themselves!

Debate & Discussion

Debate and discussion both involve a back-and-forth dialogue between two viewpoints. In a discussion, the goal is to get closer to the truth together: each person tries to convince the other, or to be convinced themselves if they believe the other person has the better argument. In a debate, however, the goal is to convince the audience, and the debating parties shouldn't expect to convince the opposing side. Bad things happen when one person thinks they are in a discussion and the other person thinks it's a debate.

Shared reality

This is a less philosophical, more social cousin of the discussion. In a shared reality conversation, the participants' goal is not so much to get closer to objective truth as to establish a common social truth. There may not be much objective truth to be had in the topic in the first place; it could be a matter of establishing taste or social norms.

Argument

An argument is like discussion and shared reality in that the two parties are looking for a common conclusion. In an argument, however, the participants have put aside the principle of charity, which says you should interpret the other person's words in as forgiving a way as possible. Gone, too, is the idea that there can be a conclusion that perfectly satisfies both parties: a gain for one party has to come at least somewhat at the expense of the other. What prevents an argument from escalating to violence is partly third-party social norms against violence. But not resorting to violence is also a way to get some kind of shared legitimacy for the conclusion.

Deeper levels

A lot of what people say to each other is not communicated at the level of the words themselves, but by what is intentionally brought up or left out in a way that carries more meaning in context.

Signalling

In all of the above types of conversation, there is room for the speaker to use what they say to define themselves for other people. A long conference that never veers into shared reality or discussion is probably two people taking turns signalling to each other; without the self-satisfaction of signalling, they would both get bored quickly.

Saturday, September 14, 2024

In environments where replacing systems is very costly, systems will spend most of their lifetime waiting to be replaced. This interval, when the system is still in use but waiting to be replaced, we can call the zombie time of the system's life cycle. During zombie time, the motivation to make improvements is much lower. Known issues can remain unfixed for years, even when the fix is relatively cheap. How is it possible for an organization to become inefficient in this way? 

Part of this is that people generally want their work to have a lasting impact, so working on a system that is soon to be replaced is seen as a waste of time. Another reason is that the organization often wants to tie its own hands by letting the existing system degrade, thus making the replacement more necessary. 

Let's first zoom in on the personal motivations of system developers: why do they prefer to work on new systems rather than old ones? One reason can be that they believe (perhaps correctly) that having been part of launching a new system will look more impressive when negotiating for a higher salary or applying for a new job. Working on a successful system can bring prestige, whereas the old soon-to-be-replaced system is by definition not a successful system (at least, it is not seen as successful at the time). The new system also typically takes advantage of some new technology, such as cloud or machine learning, that is seen as the future. Working on the new system gives the developer a potentially valuable competence free of charge, so this is a strong incentive.

Another personal motivation has to do with the psychology of developers. Almost all have a background in STEM, and as such they will have several role models who were inventors or discoverers. The people who polished existing work are usually much less famous than the ones history has remembered as the main discoverers. Take for instance Oliver Heaviside, who compressed Maxwell's equations from an unwieldy 20 to an elegant 4 [1]; he is most famous for the simple step function that sometimes bears his name. And even by moving the spotlight of praise from Maxwell to Heaviside, we leave out countless others who came before and after them, and whose work was essential for our understanding of electromagnetism. By this, I mean to say that developers are not only economically but also culturally biased towards invention over maintenance.





Sunday, December 31, 2023

Institutions

I want to share some thoughts about institutions. What makes them special as organisations? What is their history? Will there be more institutions in the future? Should there be?

As a first definition, I'm tempted to say that institutions are organisations that are neither family-based nor for-profit. This makes a difference for what holds them together.

In a family, members stick together out of loyalty to other individuals, because of personal relationships. There may be some fixed roles, but in principle a person in a family is irreplaceable. This was less true in the past, when families had to do more of the tasks that institutions and companies do today. Back then, a person's role in a family came with much more specific expectations. This may have been partly cultural, but it also makes sense for an organisation that has to fulfill a lot of complicated functions where people sometimes need to be replaced. 

In a company, members are attached to the organisation through equity, or through a contract (or both). To members without equity (most employees), there is not much difference between working for a for-profit company and working for an institution. If the company doesn't fulfill its part of the contract, the individual should leave. If the individual doesn't fulfill their part of the contract, the company should kick them out. At least in theory. Having equity, on the other hand, means being entitled to a share of the profits. This is an entirely different relationship, which has historically proven to be a very powerful way to align people's interests. However, for there to be a company, there must be the potential for profit.

This leaves us with institutions. An institution is an impersonal organisation that exists to produce some value that can't be sold for profit. When does it make sense to produce something that can't be sold for profit? Take roads, for instance. The economic value of roads is clearly larger than the cost of building them. However, it is often impractical to collect money from people based on how much they use a specific road, and adding the infrastructure necessary to do so adds cost and decreases the economic value of the roads themselves. So the modern solution is to tax some combination of vehicle owners and everyone in general to finance the roads, and to make them freely available.

Let's challenge this definition. What about education and healthcare? Clearly, it's possible to sell these things for profit, and yet they are mostly handled by non-profit institutions. I'll leave this aside for now, saying only that it's an ongoing process to determine which model really works best here. In the case of general education, it is hard to measure the value the education will provide (since it takes too long to pay off), which makes it difficult for the client to know how much to pay. In the case of healthcare, the client is often at a natural disadvantage, since they may have a strong time preference.

What about publicly owned companies? Why not make them institutions outright? Probably, the reason is that it is easier for a government to raise money quickly by selling a part of a company.

Area effects

In this piece, I'll go over some thoughts around area effects.

To me, they seem understudied. I think it's the sort of thing that makes "common sense", but not sense to people more used to actively modelling the world.

Let's start with a story to illustrate what I mean by area effects. I grew up in a suburb of single family houses. In that neighbourhood, there used to be a cat, a single black cat called Licorice. At one point, Licorice died of old age. What we saw in the following months was that our garages got invaded by mice, and perhaps also rats. The mice became fatter and fatter and would hardly even bother to scurry away when a person approached. In the end, another neighbour got a new cat called Elsa. Right away, the mice disappeared. The drastic change in the number of mice seemed disproportionate to me, for just the difference of one cat. Then I realized that the cat was actually 'killing' more mice than it ate, since it was constantly denying the mice the opportunity to forage. Since a cat could come around at any time, it was never safe to be a mouse in the neighbourhood anymore, and the mice starved.
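Here is a toy model of the mechanism, with completely made-up numbers. The point it illustrates is that the cat suppresses the mice by denying them foraging time, not by how many it actually catches.

```python
# Toy model of a cat's area effect on mouse foraging.
# All numbers are invented for illustration, not measured.

hours_per_night = 10           # hours a mouse could spend foraging
needed_foraging_hours = 6      # assumed survival threshold for a mouse
patrol_probability = 0.5       # assumed chance per hour the cat might be near

# A mouse that senses the cat hides and forages nothing that hour.
expected_foraging = hours_per_night * (1 - patrol_probability)

print(f"Expected safe foraging: {expected_foraging:.0f} h per night "
      f"(a mouse needs {needed_foraging_hours} h)")
# With these numbers, the mice starve even if the cat never catches one:
# the cat 'kills' by denying foraging opportunities, not by predation.
```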

Let's talk terrorism. During the attacks of 2015 to 2017, terrorists killed in total about 200 people in Europe. Media and governments reacted as if it were World War III. At that point I was thinking: "Why are people freaking out? They can't kill all of us; they can't even make a dent of a dent in the population." Surely some of this fear was due to cognitive availability bias. Getting killed in an attack felt very likely because a Dunbar number of people had been killed and we had heard about all of them, and because we are unable to take in just how many people 500 million really are. But the widespread fear wouldn't have been possible if the victims hadn't been chosen at random. If they had only targeted satirical cartoonists, almost no one would have felt a personal fear.

Same thing with Covid. The widespread panic definitely died down when it became clear that unless you had a preexisting health condition, Covid for you would be a heavy cold and that would be that. 

There is a kind of risk calculation everyone does. I'll repeat an idea from Taleb here: the ensemble probability is not the same as the time probability in the presence of irreversible events. Let's break that down. An irreversible event for a person would be death. For a company or state, being dissolved is usually irreversible. For someone in certain positions of power, losing that power is irreversible.

What about the ensemble average? Let's take Covid again. You take 100 people, and they all get Covid. The mortality rate is about 1%, so one of them dies. Seems like okay odds, right?

Of course not. That's not the calculation people make, not intuitively. For that, you have to imagine one person being subjected to something as dangerous as Covid several times over their life. On average, a person will survive 100 such events. After 70 such events, they have a 50/50 chance of being dead. So it seems like taking such a risk about once per year wouldn't make the average lifespan of an adult much shorter. However, "I'll only take 1% risks once per year" is not a psychologically implementable rule: it's too easy to lose track of how often you take the 1% risks. Internally, this has to be implemented as never deliberately taking a 1% risk, and only accepting it in the face of an even bigger risk to yourself or someone close to you. So something that has a chance of killing a random 1% of the population will cause huge disruptions as people try to avoid it at all costs.
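A minimal sketch of the arithmetic above, treating the 1% per-event mortality as fixed and independent (both simplifying assumptions):

```python
# Ensemble vs. time probability for a repeated 1% mortality risk.
# Assumes each exposure is independent with probability p of death.

p = 0.01  # per-event mortality (illustrative)

# Ensemble view: of 100 people exposed once, about 1 dies.
print(f"Expected deaths among 100 people exposed once: {100 * p:.0f}")

# Time view: one person exposed repeatedly.
# Survival after n exposures is (1 - p) ** n.
for n in (10, 70, 100):
    survival = (1 - p) ** n
    print(f"Chance of surviving {n} exposures: {survival:.1%}")

# Expected number of exposures until death (geometric distribution): 1 / p.
print(f"Expected exposures until death: {1 / p:.0f}")
```

With these numbers, surviving 70 exposures comes out at roughly 49.5%, which is where the 50/50 figure above comes from.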

Baby Daddy

The Proposal

Suppose you are a man (this is important to the point being made). You go on a date with a woman who is out of your league. She has some combination of beauty, intelligence, humor, popularity, status, and wealth that would normally preclude a relationship between you two, or at least make it unstable, since she has, honestly speaking, better options. Let's also assume that she is a high-value partner in your eyes, not just in general opinion. She is kind to you, treats you with respect, and there is some personal chemistry. Now she makes the following deal (as indirectly and subtly as custom requires):

- She will donate an egg that is to be fertilized with your sperm.
- The fertilized egg will be gestated by a surrogate mother, whose free cooperation and compensation she will provide for.
- When the child is born, it will be taken care of and raised by you. She will pay for about 2/3 of the expenses for both of you, so unless you're ready to lower your living standard a bit, you will eventually have to be a working single parent.

This is of course just a gender-flipped version of an arrangement that women have been presented with throughout all of history, and that some have gone along with. That is, having a child with a wealthier man out of wedlock (he is most likely married to someone else), but with his financial support. The mechanics are a bit different, but the pros and cons from your perspective are the same. 

Sunday, October 30, 2022

Internal & External explanations

In this essay, I will describe two distinct ways in which a system can be understood, or explained. I call them the Internal explanation, and the External explanation. This is a very useful distinction to keep in mind when trying to understand a new system.

The internal explanation tells you what you see when you "pop the hood" of the system. For software, the source code is an internal explanation. It can also be diagrams that show how data flows in the software system, for example. For a car, a schematic or a blueprint are internal explanations. For the human body, internal explanations can be the genome, an anatomical diagram, or an illustration of the Krebs cycle. For a country, internal explanations are laws, economic data, or population statistics.

The external explanation, on the other hand, tells you how the system interacts with its surroundings. It tells you what kind of selection pressures the system is subjected to, what risks it needs to mitigate to survive, or what it's optimized for. It tells you why a system is the way it is, rather than how it works. For software, a use case is an external explanation. For a car, we have urban planning and traffic models. For the human body, the theory of evolution combined with a history of humans' ancestral environment is an external explanation. For a country, we have political science, as well as models for things such as trade, war, and crime.

System   | Internal explanations             | External explanations
---------|-----------------------------------|------------------------------------------
Software | Source code, architecture diagram | Use case, threats
Car      | Schematic                         | Urban planning
Body     | Genome, anatomical diagram        | Theory of evolution, evolutionary history
Country  | Formal structure, laws            | Military history, diplomatic realities
Company  | Cap table, company policies       | Competitors, customer behaviour

Explanations and education

I'm tempted to propose the following heuristic: for every system that you don't have to be an expert in, be biased towards learning external explanations. For systems that you do have to be an expert in, focus on internal explanations (though obviously don't forget about the external ones).

The less you need to know about a subject, the more you should focus on how it fits in with everything else, rather than how it works internally.

Schools like teaching internal explanations. Maybe because curricula were at some point put together by experts in their respective fields, and experts are biased towards internal explanations about their field. 

Personally, my formal education (engineering) consisted of over 95 percent internal explanations of the subjects. The context was rarely made explicit, but more often implied by the examples that were brought up. The external explanations mostly had to be learned during the first years of working. Things like: how is technology developed and maintained in real life? What is the role of a single engineer? What is the history of the engineering profession, and how has it affected how we teach engineering?

What about the best order to learn things? Suppose you want to study for 5 years to become an expert at a system. Maybe the optimal ratio is to spend 4 years learning the inner workings, and 1 year learning context. But is it smart to put the context bit first, and then go more in depth? Or is it better to study the main theoretical results first, and pick up the context later? 

I think it's best to do most of the external explanations first, and only pick up internal explanations later if it becomes necessary. The argument in favor is that it is more agile: you're less likely to learn something just because it sounds like a thing you should learn. The argument against is something I heard quite a few times during my education: that if you don't learn the internal explanations properly first, you might do some irrecoverable damage to your understanding of the field. But I don't think that argument holds. Programming is something I learned by trial and error as a teenager. When I finally did take a programming class, I of course had some terrible habits. However, most of them could be corrected by the courses, and the classes were more rewarding because I had tried by myself first.

Explanations and information representation

Something I noticed when making the examples is that internal explanations are much more likely to lend themselves to a compact, complete representation (such as source code). Many external explanations, on the other hand, are theoretical frameworks that need to be adapted to the system in question (for instance: how does the theory of evolution apply to this species?). Other external explanations are just unstructured sets of data points that need further filtering, narration, or interpretation to be useful (for instance: the digital revolution has affected this particular corporation in a million different ways; which ones really mattered?).

There are clearly many advantages to a compact representation: it's easier to teach, easier to discover gaps in one's own knowledge, and easier to test a person's knowledge of the subject. Perhaps we should make an effort to create compact representations of external explanations? Maybe this would make external explanations more palatable to institutions?


Explanations and the history of science

Let's take the following narrative: in the early Western history of science, internal explanations were very dominant, and the people in charge suppressed external explanations out of ignorance or narrow-mindedness. It was only in the 20th century that external explanations and holistic views of systems started being taken seriously, and the bias still persists. The best counterargument I can think of is that internal explanations have a longer lifetime, while the external explanations and holistic models of the past have mostly become irrelevant. The asymmetry here is that internal explanations tend to make fewer assumptions. However, a couple of external explanations have proved to be very long-lived: the theory of evolution, and the theory of microeconomics. So maybe there really is something useful in trying to find great external explanations.

A compromise narrative that doesn't throw out history while still providing a path forward could be: reductionism, the prevailing scientific philosophy, is about dividing a system, describing the individual parts, and then putting everything together. Our predecessors in the history of science have mostly done the work of describing the parts, and it is our role to put everything together.

Tuesday, March 15, 2022

I don't want value. I want to be lucky.

A fundamental assumption that I make as part of any decision process is that I want as much value as possible. A recent thought experiment has made me doubt this.

Background: I arrive at a train station that I've never been to before, where I'm going to catch the next departure. I don't know anything about how often the trains go from this station, or when the next one will be. Now I consider which of these scenarios will make me happier:

Scenario A: The train departs every 10 minutes. When I get there, the previous train is just pulling out of the station, which means I will have to wait for 10 minutes.

Scenario B: The train departs once every hour. When I get there, the next one is scheduled to arrive in 12 minutes.

The contrast here is of course: Scenario B has a longer wait time, i.e. the value to me is strictly lower. However, Scenario B clearly feels luckier. If I could press a button to land in either scenario, I would press B. The thought of Scenario A just pisses me off too much, whereas thinking about B feels like lowering myself into a hot bath. Am I stupid?

I think the source of my irrational instinct here is that in Scenario A, I immediately take the high frequency of the train service for granted, not realizing that I've actually been lucky with respect to the distribution of train services. It is also easy to imagine a peer who arrives just a minute earlier in Scenario A. In Scenario B, at least the unluckiness does not just affect me. Preferring B now seems pretty selfish of me. Perhaps this is a preference we should try to fight? 
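The two kinds of luck here, how good the service is and when you happen to arrive, can be pulled apart with a quick calculation. A minimal sketch, assuming you arrive at a uniformly random time within the departure interval (an assumption, not part of the original setup):

```python
# Where does each observed wait fall within its own wait-time distribution?
# Assumes arrival time is uniform over the interval between departures.

scenarios = {
    "A (every 10 min, waited 10)": (10, 10),
    "B (every 60 min, waited 12)": (60, 12),
}

for name, (interval, observed_wait) in scenarios.items():
    expected_wait = interval / 2
    # Fraction of random arrivals that would have waited longer than you did.
    luckier_than = 1 - observed_wait / interval
    print(f"{name}: expected wait {expected_wait:.0f} min; "
          f"{luckier_than:.0%} of arrivals would have waited longer")
```

By value, A beats B (10 minutes against 12), but A is the worst possible draw from its own distribution, while B lands in the luckiest fifth of its distribution. That gap seems to be exactly the gap between value and feeling lucky.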

Are there any other situations that could provoke the same irrational decision, in something that matters more?