How to name that *@#! paper?!

Ever wondered how to give your research paper that precise / witty / punchy title? Read no further and leave the confines of this blog to indulge in Giorgos Kallis’ Getting the Title Right (you know how to spot a hyperlink when you see it, no?).  Enjoy!

Is mobile phone and wi-fi radiation something to worry about?

Is mobile phone and wi-fi radiation something to worry about? Here are some really interesting comments from scientists on what they want the world to know about the importance of the International Electromagnetic Field (EMF) Scientist Appeal, which calls upon the United Nations, its sub-organizations the WHO and UNEP, and all U.N. Member States to provide greater health protection on EMF exposure.

For people who are not biologists or medical experts, it is certainly difficult to assess the risks of mobile phone and wi-fi radiation. That’s why we have experts. Plenty of them seem to agree that more safeguards are in order.

In the European Union “the precautionary principle may be invoked when a phenomenon, product or process may have a dangerous effect, identified by a scientific and objective evaluation, if this evaluation does not allow the risk to be determined with sufficient certainty.” What would need to happen for the European Commission to invoke the precautionary principle when it comes to the by-now ubiquitous electromagnetic fields? How much of the current state of affairs is the result of lobbying and of nobody wanting to spoil the party? Are we even able to seriously imagine a reversal of the rollout of wireless technologies?

Cyberduck for working remotely on high-performance computing clusters

Are you working remotely with R on a high-performance computing cluster? Does it not allow you to edit your files directly with R Studio via R Studio Server?

I have spent endless hours tiresomely navigating my files with Midnight Commander, editing them with Nano from the shell and transferring them by hand with SCP.

Now I have discovered Cyberduck and my life is much easier. I edit files via Cyberduck with R Studio or another editor on my desktop, and transfer them simply by drag and drop.
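
For the record, here is roughly what the old push-and-pull routine looks like when scripted in Python with the paramiko library, which speaks the same SFTP protocol Cyberduck uses. This is only a sketch: the hostname, username and paths are placeholders, and it assumes key-based SSH authentication is already set up.

```python
# sftp_roundtrip.py -- a rough sketch of the manual edit-and-transfer routine
# that Cyberduck replaces, using paramiko over SFTP.
# Hostname, username and paths below are illustrative placeholders.
import paramiko

HOST, USER = "hpc.example.org", "myuser"
REMOTE, LOCAL = "analysis/model.R", "model.R"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER)  # assumes key-based authentication

sftp = client.open_sftp()
sftp.get(REMOTE, LOCAL)   # pull the script down, edit it locally in R Studio ...
sftp.put(LOCAL, REMOTE)   # ... then push the edited version back to the cluster
sftp.close()
client.close()
```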

Syria and the taboo on chemical weapons

The US, in cooperation with the UK and France, has just launched military strikes on Syrian regime facilities, with the declared intention of punishing the use of chemical weapons.

One of the main worries about these strikes is that they are targeted at a regime backed by Russian troops fighting alongside it. Observers justly fear that, if anything goes wrong, such moves may bring us closer to nuclear war.

In the run-up to the strikes, Friday’s Economist newspaper (14th-20th April, p. 11) argued, under the title “The duty to deter”, that

Punishing the use of chemical weapons will not end the suffering in Syria, or unseat Mr Assad. But if the taboo on chemical weapons is allowed to fade away, other despots will be tempted to use them, too.

Let us look at this line of argument, taking into account the danger of nuclear escalation: if nuclear powers showed restraint in punishing the use of chemical weapons in a situation where the alleged perpetrator is fighting, on its own territory, alongside a nuclear superpower, would it follow that other despots will be tempted to use chemical weapons, too? I think the answer would have to be qualified, going beyond a simple yes or no. Yes, other despots fighting alongside a nuclear superpower may feel emboldened to use chemical weapons. No, it would be unwise for despots who are not fighting alongside the forces of a nuclear superpower to assume that such action would not turn them into targets for retaliatory strikes.

Only last year, the Economist freely acknowledged that its support for the 2003 Iraq invasion was wrong. While it is understandable that the press seeks to make a neat and succinct case for or against something, the Economist should now be wary of any ill-founded sabre-rattling, and weigh each case for military action with due consideration for context.

How to properly sync Scrivener with the cloud

Many blogs/websites advocate using Scrivener together with Dropbox or other cloud services.

However, when you sync Scrivener with the cloud (such as Dropbox or OX Drive), you risk losing text. Scrivener saves your text snippets as separate files, and in a big project, like a dissertation, it is easy to miss that some of them were not synced correctly; you may only notice much later that you are stuck with old versions. Good luck finding the desired versions of the text snippets then!

To minimise this risk, implement the following set-up:

  • Store your Scrivener project file in a non-syncing folder on your hard disk
  • Automatically save backups to a syncing folder (a small check script is sketched below)
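
As an extra safety net, you can have a small script warn you when backups stop arriving in the synced folder. Below is a minimal sketch in Python; the backup folder path and the 24-hour threshold are assumptions you would adapt to your own set-up.

```python
# check_scrivener_backups.py -- a minimal sketch, assuming Scrivener is set to
# write zipped backups into a folder that your cloud client syncs.
# The path and the 24-hour threshold are illustrative placeholders.
import time
from pathlib import Path

BACKUP_DIR = Path.home() / "Dropbox" / "ScrivenerBackups"  # hypothetical synced folder
MAX_AGE_HOURS = 24  # warn if the newest backup is older than this

backups = sorted(BACKUP_DIR.glob("*.zip"), key=lambda p: p.stat().st_mtime)
if not backups:
    print(f"No backups found in {BACKUP_DIR} -- check your Scrivener backup settings.")
else:
    newest = backups[-1]
    age_hours = (time.time() - newest.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        print(f"Warning: newest backup {newest.name} is {age_hours:.0f} hours old.")
    else:
        print(f"OK: newest backup is {newest.name} ({age_hours:.1f} hours old).")
```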

Solve problems with Bibdesk / Bibtex and Ox Drive

Do you use the wonderful open-source reference management solution Bibdesk / Bibtex together with the synchronisation software OX Drive, which is used, for example, by the privacy-aware mailbox provider Mailbox.org? Have you ever run into trouble with your .bib bibliography files getting messed up, or do you want to avoid that happening in the first place?

Make sure you always have sufficient free back-up space on your OX Drive! Otherwise the .bib file gets duplicated, with one version piling up on top of another and references getting lost along the way.

However, even then data may get lost, for whatever reason. To avoid this, save your .bib file not in a synced folder but in an ordinary folder on your hard disk. Then set up an Automator workflow that makes daily backups from the non-synced folder to the synced folder.
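
If you would rather script that daily backup step than click it together in Automator (or want Automator to call a script), it could look roughly like the Python sketch below. The paths are placeholders for your own non-synced and synced folders.

```python
# backup_bib.py -- a minimal sketch of the daily backup step, assuming the .bib
# file lives in a non-synced folder and the copy should land in a synced one.
# All paths are illustrative placeholders; adapt them to your own set-up.
# Run it once a day from the Automator workflow, launchd, or cron.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Library" / "Bibliography" / "references.bib"  # non-synced
DEST_DIR = Path.home() / "OXDrive" / "bib-backups"                    # synced

DEST_DIR.mkdir(parents=True, exist_ok=True)
# Timestamped copies make it easy to go back to an older version if needed.
target = DEST_DIR / f"references-{date.today().isoformat()}.bib"
shutil.copy2(SOURCE, target)
print(f"Backed up {SOURCE} to {target}")
```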

#havingtofiddlewiththedetailsjustwhenyouarewritingupyourthesis

Green investment bank loans for electric vehicle taxi co-ops

I live in London, a city with exceptionally bad air pollution. A lot of that is due to diesel engines in cars and buses. London is also currently in negotiations with Uber, a ride-share app/company, on how to structure its business offerings. Many people also claim that Britain needs a development bank, such as the German KfW.

What about picking up all these strands and coming up with an innovative solution for London?

City A.M. reports that “Jeremy Corbyn has said the next Labour government could push for so-called gig economy firms like Uber to be run as co-operatives …”

Sounds like an interesting idea to tackle the problem of new monopolies arising from network effects, such as in the cases of Google, Facebook, and, of course, Uber. How would one incentivize the emergence of such cooperatives? There is certainly a range of options; here I only want to focus on one:

A new green investment bank could offer people living in London subsidised loans for the purchase of electric vehicles, on the condition that they let ride-share cooperatives use those vehicles for a certain number of hours per month. This would bring a number of advantages:

  1. increased availability of a transport option that contributes less to local air pollution;
  2. subsidies would benefit not a taxi oligopoly but individual purchasers, to the extent that they pass a large share of these advantages on to taxi drivers in co-ops;
  3. private capital from non-cabbie purchasers of electric vehicles could be crowded in, so that cabbies who cannot afford to buy a cab can also take part;
  4. a boost for electric vehicle infrastructure in general;
  5. a more efficient approach than subsidising individual electric vehicle purchases (let’s not forget that electric vehicles are also resource-intensive, so it is best to promote them only where they are actually used a lot, rather than being parked all day).

One could calculate the health costs of local air pollution in monetary terms and then argue that some of those costs could be saved by the roll-out of electric vehicles.
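
Just to show the shape of such a back-of-envelope calculation, here is a tiny sketch; every number in it is a made-up placeholder, not an estimate of actual London figures.

```python
# A purely illustrative back-of-envelope sketch; every number is a placeholder,
# not an estimate of real health costs, fleet sizes or mileage.
health_cost_per_diesel_km = 0.05   # hypothetical external health cost, GBP per km
fleet_size = 10_000                # hypothetical number of subsidised electric taxis
km_per_vehicle_per_year = 30_000   # hypothetical mileage shifted from diesel to EV

avoided_health_costs = health_cost_per_diesel_km * fleet_size * km_per_vehicle_per_year
print(f"Illustrative avoided health costs: GBP {avoided_health_costs:,.0f} per year")
# Some share of this avoided cost could, in principle, justify the loan subsidy.
```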

To recap: we can weave the ideas of green investment banks, co-ops and the tackling of local air pollution together into a powerful new policy approach.

This is, of course, only a really rough sketch and I’m looking forward to hearing more detailed proposals.

To measure the policy positions of every single person

Earlier this month I wrote about the “Rule of Suspicion Algorithms” 1. Using computer expert systems to predict who is more or less likely to become a criminal or a political dissident is not so different from predicting people’s policy positions. Michael Laver, an authority on computer-aided quantitative content analysis in political science from New York University, is enthusiastic about the prospects that the large new data troves generated by users themselves hold for political science data analysis:

There is no reason, for example, why we should not set out to measure the policy positions of every single person who uses social media and, with appropriate modeling, to make inferences from these positions about people who do not use social media. 2
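
To get a feel for what such “appropriate modeling” might involve, here is a toy sketch of supervised text scaling with scikit-learn: a model is fitted on texts whose authors’ positions are assumed to be known and is then used to score new texts. The texts and position scores are invented placeholders, and this is emphatically not Laver’s own method, just an illustration of the general idea.

```python
# toy_text_scaling.py -- a toy sketch, not Laver's actual method: fit a model on
# texts with known policy positions, then score unseen texts.
# All texts and the -1..+1 position scores below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

reference_texts = [
    "cut taxes and shrink the state",
    "expand public services and raise taxes on the wealthy",
    "deregulate markets and reduce public spending",
    "strengthen the welfare state and public investment",
]
reference_positions = [1.0, -1.0, 1.0, -1.0]  # invented left(-1)/right(+1) scores

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(reference_texts, reference_positions)

new_posts = ["we should invest more in public transport and housing"]
print(model.predict(new_posts))  # estimated position on the same invented scale
```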

While this prospect is indeed exciting, from a normative perspective concerned with the quality of democracy I would add that it matters whether such information is generated by academics to inform the academic debate and the wider public, or whether it will only inform the few, such as security services and corporations. If information about the many is accessible to the many, in aggregated form, societies may reach a higher degree of self-understanding, on the basis of a symmetric distribution of information. An asymmetric distribution, by contrast, would diminish the quality of democracy by granting a limited part of the population privileged access to information that offers them possibilities for manipulating opinion and perception, from the macro to the micro scale.

How such kinds of information are used will likely become a defining feature of politics over the years to come.

The rule of “suspicion algorithms”

The decision-making criteria of computer expert systems are often so complex that they are beyond the comprehension of their individual users and creators. For example, computer systems equipped with artificial intelligence can be used for estimating the degree to which somebody is likely to default on her credit. These days, computer programs can also be used for the large-scale monitoring of populations and for attempts at predicting who is more or less likely to become a criminal or a political dissident.
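
To make the opacity concrete, here is a purely illustrative sketch (not any real credit- or threat-scoring system): a model trained on synthetic data whose decision criteria are spread across hundreds of decision trees and therefore cannot be read off as human-intelligible rules.

```python
# opaque_scoring.py -- a purely illustrative sketch, trained on synthetic data,
# of a model whose decision criteria are distributed over hundreds of trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic, meaningless features standing in for whatever signals such systems use.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

person = X[:1]                       # one (synthetic) individual
score = model.predict_proba(person)  # e.g. a "probability of default/threat"
print(f"Score: {score[0, 1]:.2f}, derived from {len(model.estimators_)} trees")
# There is no single human-readable rule behind this number, which is the point
# the post is making about opaque expert systems.
```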

When computers start to decide who is likely to be a threat and who is not, and neither secret services, law enforcement, nor the subjects of surveillance understand how a threat assessment comes about, the shared understanding of what constitutes suspicious behaviour gets lost. Writing in The Intercept, Dan Froomkin cites Phillip Rogaway, a professor of computer science at the University of California, Davis:

If the algorithms NSA computers use to identify threats are too complex for humans to understand, Rogaway wrote, “it will be impossible to understand the contours of the surveillance apparatus by which one is judged. All that people will be able to do is to try your best to behave just like everyone else.” 3

If people no longer understand the criteria by which they are judged, one can still consider it reasonable to use such computer systems. Yet their “suspicion algorithms” no longer express human reasoning. People become subject to governance by statistical probabilities instead of human value choices 4. The computers may not rule, as they do not yet possess true agency. Still, humans delegate their assessment of who is an insider and who an outsider, of who is a friend and who a potential foe, to systems whose calculations are beyond their comprehension. Reasoning about an essentially political decision is transferred to machines 5.

The data is there, the algorithms set in place. An ethics of the data age has yet to emerge.

How Apple keeps track of the programs you install on your Mac

It probably doesn’t come as news, but I just got curious why a little program called gkoverride wanted to call out through my firewall when I tried to install a patch for SPSS. I found a pretty good explanation on Zdziarski’s Blog of Things: Apple basically checks new program installations for security purposes. However, without asking, Apple also keeps track of the programs users install, in the name of security. Most users probably care more about security than about privacy, but I think this should be made clearer and more transparent, and it would be important to also provide alternative security mechanisms, as suggested by Zdziarski.