A recent grand challenge in global health was to increase the interoperability of data for social good.
Why would this be of benefit?
In general, the information arms race is about acquiring more accurate and comprehensive models of social systems, which is best done through accumulating cross-domain data sets (making them interoperable).
Being able to interface with a higher-dimensional view of users can aid in understanding correlations within social systems. Running high-quality experiments (e.g., randomized controlled trials), guided by this higher-dimensional view, then allows one to test whether those correlations are causal.
Creating this platform seems to be a plausible way to either save, or take over, the world. That last claim may be less an extrapolation from reality than from the echo chamber of a Silicon Valley bubble. However, the ethical weight of information asymmetries, and of the power that comes with controlling new technology, is only increasing. Therefore, there are possible dangers following the interoperability of data for social good.
In general, where might we find these kinds of dangers?
(1) Governments (e.g., the NSA)
(2) Monopolies (e.g., Google, Facebook, Palantir)
(3) Organizations that request an abundance of power on the strength of high moral claims. The advantages gained through this trust (possibly reinforced by a false claim of transparency) can later be leveraged for ulterior motives. This progression resembles the short-lived benevolence of start-ups, which lasts only long enough to get them upwind of enormous revenues.
How can we handle this kind of danger?
Information centralization does not seem to be an appropriate way to capture the benefits of interoperable social data, for two reasons. (1) The privacy concerns that follow from increased permissions (though these are likely more malleable for “social good data,” as opposed to “data for social good”). (2) Centralization increases the security threat: an attacker’s effort is proportional to the expected benefit, and digital surface area is proportional to the number of vulnerabilities.
Therefore, is there a distributed scheme of permissions that still allows for the benefits of interoperability? The only two dimensions I can think of for distributing permissions are temporal and spatial. A temporal example: hold a data-driven innovation challenge, open to select groups, that grants read access to the data only for the duration of the event (organized by, say, the UN’s Global Pulse). I can’t think of a representative spatial example, but I’m sure Palantir’s nailed it.
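The temporal case above could be sketched as a read-only grant that checks the event window on every access. This is a minimal illustration, not an existing API: `TemporalGrant`, its fields, and the 48-hour window are all hypothetical.

```python
from datetime import datetime, timedelta

class TemporalGrant:
    """Hypothetical sketch: read access valid only during an event window."""

    def __init__(self, dataset, start, end):
        self.dataset = dataset  # the shared data set
        self.start = start      # event opens: access begins
        self.end = end          # event closes: access is revoked

    def read(self, key, now=None):
        # Check the clock on every access, so the grant expires automatically.
        now = now or datetime.now()
        if not (self.start <= now <= self.end):
            raise PermissionError("grant is outside the event window")
        return self.dataset[key]  # read-only: no write path is exposed

# Usage: a 48-hour innovation-challenge window over a toy data set.
data = {"region_A": [1, 2, 3]}
start = datetime(2014, 6, 1)
grant = TemporalGrant(data, start, start + timedelta(hours=48))
```

The design choice is that expiry is enforced at read time rather than by a revocation step, so no one has to remember to turn access off when the event ends.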
In addition, at what point should initiatives be reviewed by an external ethics committee (i.e., how can one credibly predict all possible use cases for ‘evil’)?
Lastly, if the NSA is already doing what it is doing, then does this subject even matter?