However, when you sync Scrivener with the cloud (such as Dropbox or OX Drive) you risk losing text. Scrivener saves your text snippets as separate files, and in a big project, like a dissertation, you run the danger of not noticing that some of these files were not correctly synced, and you may discover only much later that you are stuck with old versions. Good luck finding the desired versions of the text snippets then!
To minimise this risk, implement the following set-up:
Store your Scrivener document file in a non-syncing folder on your hard disk
Do you use the wonderful open-source reference management solution BibDesk/BibTeX together with the synchronisation software OX Drive, which is used, for example, by the privacy-aware mail provider Mailbox.org? Have you ever run into trouble with your .bib (bibliography) files getting messed up, or do you want to avoid that happening?
Make sure you always have sufficient free storage space on your OX Drive! Otherwise the .bib file gets duplicated, with one version piling up on top of another and references getting lost along the way.
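When a .bib file has been duplicated and merged badly, duplicate citation keys are a tell-tale sign of damage. A quick sanity check can be sketched in Python (a rough regex, not a full BibTeX parser, so treat its output as a hint rather than a verdict):

```python
import re
from collections import Counter

def duplicate_keys(bib_text):
    """Return citation keys that occur more than once in a .bib string.

    BibTeX entries start with "@type{key," so a simple regex is enough
    for a quick check; it will not handle every exotic .bib file.
    """
    keys = re.findall(r"@\w+\s*\{\s*([^,\s]+)\s*,", bib_text)
    return [k for k, n in Counter(keys).items() if n > 1]
```

Feed it the contents of your .bib file after a sync; any keys it reports are worth inspecting by hand in BibDesk.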
I live in London, a city with severely bad air pollution levels. A lot of that is due to diesel engines in cars and buses. London is also currently in negotiations with Uber, the ride-share app/company, over how to structure its business offerings. Many people also claim that Britain needs a development bank, such as Germany's KfW.
What about picking up all these strands and coming up with an innovative solution for London?
City A.M. reports that “Jeremy Corbyn has said the next Labour government could push for so-called gig economy firms like Uber to be run as co-operatives …”
Sounds like an interesting idea to tackle the problem of new monopolies arising from network effects, such as in the cases of Google, Facebook, and, of course, Uber. How would one incentivize the emergence of such cooperatives? There is certainly a range of options; here I only want to focus on one:
A new green investment bank could offer subsidised loans for the purchase of electric vehicles to people living in London, on the condition that they are going to let ride-share cooperatives use them for a certain number of hours per month. This would bring a number of advantages:
Increased availability of a transport option that contributes less to local air pollution,
subsidies would benefit individual purchasers rather than a taxi oligopoly, provided the purchasers pass a large part of these advantages on to taxi drivers in co-ops,
private capital from non-cabbie purchasers of electric vehicles could be crowded in, so that cabbies who cannot afford to buy a cab can still take part,
a boost for electric vehicle infrastructure in general,
a more efficient approach than subsidising individual electric vehicle purchases (let's not forget that electric vehicles are also resource-intensive, so it's best to promote them only when they are actually used a lot, rather than parked all day).
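To get a feel for what such a subsidised loan is worth, one can compare annuity repayments at a market interest rate with repayments at the subsidised rate. A back-of-envelope sketch in Python, with purely illustrative numbers (none of these rates or amounts come from an actual proposal):

```python
def annual_payment(principal, rate, years):
    """Standard annuity payment for a loan repaid in equal instalments."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

def subsidy_value(principal, market_rate, subsidised_rate, years):
    """Implicit subsidy: total repayments at the market rate minus
    total repayments at the subsidised rate."""
    return (annual_payment(principal, market_rate, years)
            - annual_payment(principal, subsidised_rate, years)) * years

# Illustrative only: a £30,000 electric cab, five-year loan,
# 6% market rate vs a 2% subsidised rate.
implicit_subsidy = subsidy_value(30_000, 0.06, 0.02, 5)
```

The point of the exercise is that the subsidy is spread over the loan's life, which is exactly why the condition of lending the vehicle to co-ops for a set number of hours per month can be enforced over the same period.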
One could calculate the costs to health from local air pollution in monetary terms and then argue that some of that may be saved by the roll-out of electric vehicles.
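The arithmetic behind such an estimate is simple; the hard part is getting defensible input numbers. A sketch with purely illustrative figures (not sourced estimates):

```python
def avoided_health_costs(cost_per_capita, population, avoided_share):
    """Back-of-envelope: annual per-capita health cost of air pollution,
    times population, times the share that an EV roll-out could avoid."""
    return cost_per_capita * population * avoided_share

# Illustrative numbers only: £500 per Londoner per year,
# roughly 8.8m residents, 10% of the burden avoided.
example = avoided_health_costs(500, 8_800_000, 0.10)
```

Even with deliberately conservative inputs, the result lands in the hundreds of millions per year, which is the kind of figure one would weigh against the cost of subsidised loans.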
To recap: we can weave the ideas of green investment banks, co-ops and the tackling of local air pollution together into a powerful new policy approach.
This is, of course, only a really rough sketch and I’m looking forward to hearing more detailed proposals.
Earlier this month I wrote about the “Rule of Suspicion Algorithms” 1. Using computer expert systems in order to predict who is more or less likely to become a criminal or a political dissident is not so different from predicting people’s policy positions. Michael Laver, an authority on computer-aided quantitative content analysis in political science from New York University, is enthusiastic about the prospects that the large new data troves generated by users themselves hold for political science data analysis:
There is no reason, for example, why we should not set out to measure the policy positions of every single person who uses social media and, with appropriate modeling, to make inferences from these positions about people who do not use social media. 2
While this indeed is exciting, from a normative perspective concerned with the quality of democracy I’d like to add that it does matter whether such information is generated by academics in order to inform the academic debate and the wider public, or whether it will only inform the few, such as security services and corporations. If information about the many is accessible to the many — in aggregated form — societies may reach a higher degree of self-understanding. This would be on the basis of symmetric information distribution. An asymmetric information distribution, on the other hand, would diminish the quality of democracy by granting a limited part of the population privileged access to information which offers them possibilities for manipulating opinion and perception — from the macro to the micro scale.
How such kinds of information are used will likely become a defining feature of politics over the years to come.
The decision-making criteria of computer expert systems are often so complex that they are beyond the comprehension of their individual users and creators. For example, computer systems equipped with artificial intelligence can be used to estimate how likely somebody is to default on a loan. These days, computer programs can also be used for the large-scale monitoring of populations and for attempts at predicting who is more or less likely to become a criminal or a political dissident.
When computers start to decide who is likely to be a threat and who isn’t, and neither secret services, law enforcement nor the subjects of surveillance understand how a threat assessment comes about, the shared understanding of what constitutes suspicious behaviour gets lost. Writing in The Intercept, Dan Froomkin cites Phillip Rogaway, a professor of computer science at the University of California, Davis:
If the algorithms NSA computers use to identify threats are too complex for humans to understand, Rogaway wrote, “it will be impossible to understand the contours of the surveillance apparatus by which one is judged. All that people will be able to do is to try your best to behave just like everyone else.” 3
Even if people no longer understand the criteria by which they are judged, one can still find it reasonable to use such computer systems. Yet their “suspicion algorithms” themselves no longer express human reasoning. People become subject to governance by statistical probabilities instead of human value choices 4. The computers may not rule, as they don’t possess true agency yet. Still, humans delegate their assessment of who is an insider and who an outsider, of who is a friend and who a potential foe, to systems whose calculations are beyond their comprehension. Reasoning about an essentially political decision is transferred to machines 5.
The data is there, the algorithms set in place. An ethics of the data age has yet to emerge.
It probably doesn’t come as news, but I got curious why a little program called gkoverride wanted to call out through my firewall when I tried to install a patch for SPSS. I found a pretty good explanation on Zdziarski’s Blog of Things: Apple basically checks new program installations for security purposes. However, without asking, it also keeps track of the programs users install, in the name of security. Most users probably care more about security than about privacy, but I think this should be made clearer and more transparent, and it would be important to also provide alternative security mechanisms, as suggested by Zdziarski.
If Cameron knew it was coming, his adversarial stance towards Jean-Claude Juncker in the previous months — even his defeat when he tried to prevent him from becoming President of the EU Commission — would make much more sense. Now he can say: “See, I told you so.” He may claim that history has proven him right. Now he could wage a campaign against Juncker not only as the public face of the EU — by which he can cater to anti-EU sentiment — but also add tax justice to his election campaign, thereby giving it a slightly ‘anti-corporate’ streak, which could help him to neutralise some of the other parties’ reformist agenda points.
However, such a strategy would be severely complicated by the status of the UK’s Overseas Territories and Crown Dependencies as tax havens.
Recently, I was teaching on emissions trading and the EU Emissions Trading System (ETS). In preparation, it took me quite a while to get a good overview of the different phases of the EU ETS. So I thought I’d share my overview table here:
If you have any suggestions for improvement, I’m quite happy to update it.
A majority of French parliamentarians has signalled that they would agree to a law stipulating a reduction of nuclear energy in France’s energy mix from 75% to 50% by 2025. Instead, energy efficiency and renewables are to receive more support. The National Assembly is due to vote on the new law on 14 October 2014.
France has historically seen much less public resistance to its huge nuclear power installations than Germany. Why does it now seem so easy to announce such drastic cuts?
A major factor could be that most nuclear reactors belong to EDF, an energy corporation the vast majority of whose shares are still held by the French government. In the French case, the government still controls one of the “commanding heights” of the economy and is thus far less affected by private sector lobbying.
Nuclear power is a technology that corresponds well to centralisation, as it is based on big projects and requires extensive safety measures. The question now is whether France will follow such a centralised approach in the case of renewables, which lend themselves much better to decentralised approaches than nuclear power. If the road of decentralisation is chosen, this may well result in the emergence of a stronger lobby group for renewables.
Hopefully, the French government will retain control over EDF until its energy portfolio is much less nuclear than today. Otherwise, a strong lobby blocking the transition from nuclear to renewable power may emerge.
Yesterday I taught the seminar “Climate Change Policies in Comparative Perspective” and one of my students asked me whether the countries that have emissions targets under the Kyoto Protocol had actually been on target during the first Commitment Period (2008-2012). I knew that they were within the target range overall, yet I didn’t have a good overview of which individual countries had been on target and which hadn’t. So I trawled through the net, and it took me a while to come up with a good overview. The document I finally found states that
…complete and detailed official data regarding the countries’ emissions and transactions of carbon credits in 2008-2012 was not available until April 2014. 9
So until recently it wouldn’t have been possible to produce an authoritative overview. Now that I have found the document, here is a screenshot of the relevant overview page, with a link to the original report.
Canada, which had the greatest overshoot, had dropped out before the end of the first commitment period. Japan, with a much smaller overshoot, has withdrawn from the second commitment period.
Interestingly, Norway also had a sizeable overshoot. I wonder if that’s a factor behind the country’s heavy engagement with the establishment of Reduced Emissions from Deforestation and Degradation (REDD) programmes.
Without the economic crisis, Spain would certainly have been even further off target.
As I understand it, these numbers don’t take into account the use of flexible mechanisms (offsetting and emissions trading). Thus, they cannot really provide the whole picture in terms of compliance. All in all, a handy overview.
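The compliance arithmetic with flexible mechanisms included would look roughly like this (a deliberately simplified sketch; the real Kyoto accounting rules for assigned amount units, CERs and ERUs are considerably more involved):

```python
def kyoto_compliant(emissions, assigned_amount,
                    credits_acquired=0.0, credits_transferred=0.0):
    """Simplified compliance check for one commitment period.

    A party complies if its actual emissions do not exceed its assigned
    amount, adjusted upward for units bought (offsets, emissions trading)
    and downward for units sold to other parties.
    """
    allowance = assigned_amount + credits_acquired - credits_transferred
    return emissions <= allowance
```

On this logic, a country shown as overshooting in the raw emissions table may still have complied once the credits it acquired are counted in, which is exactly why the table alone cannot settle the compliance question.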