Tagged: text analysis

To measure the policy positions of every single person

Earlier this month I wrote about the “Rule of Suspicion Algorithms”. Using computer expert systems to predict who is more or less likely to become a criminal or a political dissident is not so different from predicting people’s policy positions. Michael Laver, an authority on computer-aided quantitative content analysis in political science from New York University, is enthusiastic about the prospects that the large new data troves generated by users themselves hold for political science data analysis:

There is no reason, for example, why we should not set out to measure the policy positions of every single person who uses social media and, with appropriate modeling, to make inferences from these positions about people who do not use social media.
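Laver’s best-known approach to this kind of measurement is the “Wordscores” technique (Laver, Benoit and Garry), which estimates a text’s policy position from its word frequencies, anchored by reference texts whose positions are already known. The sketch below is a deliberately minimal illustration of that idea, not Laver’s actual implementation; the reference texts and position values are hypothetical.

```python
from collections import Counter

def word_scores(reference_texts):
    """Learn a score per word from (text, known_position) reference pairs.

    Each word's score is the average of the reference positions, weighted
    by the word's relative frequency in each reference text.
    """
    counts = [(Counter(text.lower().split()), pos) for text, pos in reference_texts]
    totals = [sum(c.values()) for c, _ in counts]
    vocab = set().union(*(c for c, _ in counts))
    scores = {}
    for w in vocab:
        # Relative frequency of word w in each reference text
        rel = [c[w] / t for (c, _), t in zip(counts, totals)]
        z = sum(rel)
        # Weighted average of reference positions, weights proportional
        # to how strongly w is associated with each reference text
        scores[w] = sum(f / z * pos for f, (_, pos) in zip(rel, counts))
    return scores

def score_text(text, scores):
    """Estimate a new text's position as the mean score of its known words."""
    words = [w for w in text.lower().split() if w in scores]
    if not words:
        return None  # no scorable words: position cannot be estimated
    return sum(scores[w] for w in words) / len(words)

# Hypothetical reference texts anchored at -1.0 ("left") and +1.0 ("right")
refs = [("tax cuts markets", -1.0), ("welfare unions rights", 1.0)]
s = word_scores(refs)
print(score_text("tax cuts", s))     # words only from the -1.0 anchor
print(score_text("tax welfare", s))  # one word from each anchor
```

Real applications add smoothing, uncertainty estimates, and far larger vocabularies, but the core logic, inferring positions of unseen texts (or people) from word statistics anchored in labeled material, is what makes the social-media scenario Laver describes plausible at scale.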

While this is indeed exciting, from a normative perspective concerned with the quality of democracy I’d like to add that it matters whether such information is generated by academics to inform the academic debate and the wider public, or whether it will inform only a few, such as security services and corporations. If information about the many is accessible to the many — in aggregated form — societies may reach a higher degree of self-understanding, on the basis of a symmetric information distribution. An asymmetric information distribution, on the other hand, would diminish the quality of democracy by granting a limited part of the population privileged access to information, offering them possibilities for manipulating opinion and perception — from the macro to the micro scale.

How such information is used will likely become a defining feature of politics in the years to come.