Contemporary Utilitarianism in 4 Dimensions

The first dimension is a risk × benefit spectrum: something may be riskier but offer a larger payout, and vice versa. This is mostly seen with respect to time: investments in outcomes farther in the future are proportionately riskier, and therefore demand a larger benefit. The extreme example of this is research into protecting us from Existential Risk (ex: unfriendly artificial intelligence). On the other side is low risk with moderate benefit. The extreme example of this is GiveWell and its recommended charities (ex: buying malaria bed-nets through the Against Malaria Foundation). But how do we represent everything in between Existential Risk and empirical approaches to solving world poverty? I think investment into technology (generally defined) captures quite a lot of this in-between space.
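
One way to make that tradeoff concrete (my own gloss with made-up numbers, not something the post commits to) is a simple expected-value comparison: probability of success times size of payout.

```python
# Toy expected-value comparison of the two ends of the risk x benefit spectrum.
# All probabilities and payouts are made-up illustrative numbers, not estimates.

def expected_benefit(p_success: float, payout: float) -> float:
    """Expected benefit = probability of success times size of payout."""
    return p_success * payout

# Low risk, moderate benefit (ex: a GiveWell-style bed-net charity).
bednets = expected_benefit(p_success=0.9, payout=1_000)

# High risk, enormous benefit (ex: existential-risk research).
xrisk = expected_benefit(p_success=0.0001, payout=10_000_000_000)

print(f"bed nets: {bednets:,.0f}  x-risk research: {xrisk:,.0f}")
```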

The next dimension refers to the degree of technological magnification. The classic interpretation comes from questioning whether Information and Communication Technology (ICT) can solve world poverty. The theory is that ICT only magnifies human intent and capacity; it does not add to or substitute for them. For example, giving internet cafes to a rural community in India (where there is not much built-up human capacity) will probably only result in numbing uses of Facebook, whereas an internet cafe at a university (where there is arguably more human capacity) is more likely to be used for learning. This ‘magnifier hypothesis’ can make technology a driver of inequality: if only those in the developed world have the human capacity, then they are the only ones who reap a large magnification from the technology.

This means that merely having this kind of technology is not what matters: you can have it but only use it for (for example) numbing games, which arguably doesn’t improve your quality of life in the longer run.

However, there is another kind of technology: that which has a more additive effect, such as, maybe, having a 3-D printer in your home in the future that produces whatever small objects you need. Many won’t be able to buy one (at least at first), but once someone has it, they can realize its benefits independent of their human capacity. The other side of this view is technologies that democratize certain goods: for example, a homeless person can now enjoy the same Coca-Cola as the president of the US. Therefore, this second view of technology might be defined by the extent to which it globalizes goods.
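
To make the contrast explicit (this is my own toy formalization, not the post’s): magnifying technology multiplies whatever capacity you already have, while additive technology contributes the same value regardless of capacity.

```python
# Toy contrast between magnifying and additive technology.
# 'capacity' stands in for human intent & capacity; all numbers are made up.

def magnifying_tech(capacity: float, multiplier: float) -> float:
    """ICT-style technology: the benefit scales with what you bring to it."""
    return capacity * multiplier

def additive_tech(capacity: float, tech_value: float) -> float:
    """3-D-printer-style technology: the benefit arrives regardless of capacity."""
    return capacity + tech_value

for capacity in (0.1, 1.0, 10.0):  # low, medium, and high human capacity
    print(capacity, magnifying_tech(capacity, 5.0), additive_tech(capacity, 5.0))
```

Under the multiplicative model the gap between low and high capacity widens with better technology (the inequality worry above); under the additive model everyone gains the same amount.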

The ability of technology to globalize, along with the magnifying effects within its individual cases, is what defines this second dimension of contemporary utilitarianism. It maps nicely onto (and may even be synonymous with) the last two dimensions, which describe the opposite of Global Catastrophic Risk. Regardless, I think keeping it distinct is useful to the extent that it captures a lot of the “middle space” between the extremes of the risk × benefit spectrum.

The final two dimensions come in a pair because they define the opposite of global catastrophic risk (GCR).

y-axis
Let’s call the y-axis the degree to which something is beneficial to a single person [1]. A large y-value is a significant improvement to quality of life, while a small value is a small improvement. However, there is a distinction here: how much does something benefit that person in the current moment versus in the future? In its ideal form, this value should be computed by taking the integral of quality of life over the individual’s entire life, doing the same assuming they receive the good/service, then taking the difference between the two and optimizing for that.
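
Spelled out as a formula (my rendering of the sentence above, where $q_{\text{without}}(t)$ and $q_{\text{with}}(t)$ are the person’s quality of life at time $t$ over a lifespan $[0, T]$, without and with the good/service):

$$\Delta B = \int_{0}^{T} q_{\text{with}}(t)\,dt - \int_{0}^{T} q_{\text{without}}(t)\,dt$$

The y-value for an intervention is then this $\Delta B$, and $\Delta B$ is what gets optimized.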

x-axis
The x-axis of the opposite of the GCR is how many people the thing affects: close to 0, it affects no one; on the far right, it affects everyone.

Therefore, a small x-value means that this thing can’t scale to be applied to many people (ex: CFAR), whereas a larger x-value means that it has scalability built into its mission (ex: MOOCs).

For intuition, consider this (x, y) grid for some large value C: at (1, C), a single person receives an enormous benefit. At (C, 1), everyone in the world gets a small piece of candy. The point (C, C) is the mirror image of existential risk (for example: the world blowing up): the largest possible benefit, to everyone.
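
For a concrete sanity check on those corners, here is a minimal sketch that assumes total impact is simply the product x · y (people affected times benefit per person); the post never commits to a product rule, so treat this as one possible reading:

```python
# Corner points of the (x, y) grid: x = number of people affected,
# y = benefit per person. C is an arbitrarily large constant.
C = 1_000_000

def total_impact(people: float, benefit_per_person: float) -> float:
    """One possible aggregation: total impact as the product x * y."""
    return people * benefit_per_person

print(total_impact(1, C))  # (1, C): a single person benefits enormously
print(total_impact(C, 1))  # (C, 1): everyone gets a small piece of candy
print(total_impact(C, C))  # (C, C): the mirror image of existential risk
```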
