All posts by admin

Singularity

Willard McCarty on Humanist pointed me to a rather silly article in the Economist entitled “March of the Machines”. It can almost be called a genre piece. The author strongly downplays the possible negative effects of artificial intelligence and then argues that society should find an ‘intelligent response’ to AI, as opposed, I assume, to uninformed dystopian stories.

But I do hope the intelligent response society will seek to AI will be less intellectually lazy than the author of said contribution. To be honest, I think someone needed to crank out a 1,000-word piece quickly and resorted to sad stopgap rhetoric.

In this type of article there’s invariably a variation on this sentence: “Each time, in fact, technology ultimately created more jobs than it destroyed”. As if a job (not denying here its power to be meaningful and fulfilling for many people) were the sole measure of existence.

Worse is that such multi-purpose filler arguments ignore the unintended side effects of technological development. Mechanisation brought us mass production; we know that it also brought mass destruction. It is always sensible to consider both the possible dystopian and the utopian scenarios. No matter what Andrew Ng (quoted in the article) is obviously bound to say as an AI researcher, it is actually very sensible to consider the overpopulation of Mars before you colonise it. Before conditions there are improved for human life, at whatever expense, even a few persons will effectively constitute such an overpopulation. Ng’s argument is a non sequitur anyway. If the premise of the article is correct, we are not decades away from the ubiquitous application of AI. Quite the opposite: the conditions on Earth for AI have been very favourable for more than a decade already. We can hardly wait to try out all our new toys.

No doubt AI will bring some good, and no doubt it will also bring a lot of bad. This is not inherent in the technology, but in the people that wield it. Thus it is useful to keep critically examining all applications of all technologies while we develop them, instead of downplaying their unintended side effects without evidence.

If we do not, we may create our own foolish utopian illusions. For instance when we start using arguments such as “AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.” Which effectively means asking the machines what the machines think the non-machines should do. Well, if you ask a machine, chances are you’ll get a machine answer and eventually a machine society. Which might be fine for all I know, but I’d like that to be a very well informed choice.

I am not a believer in the Singularity. The chances that machines and AI will aggressively push out humankind are in all likelihood grossly exaggerated. But a realistic possibility is the covert permeation of human society by AI. We change society by our use of technology, and the technology changes us too. This has been and will always be the case, and it is far from some moral or ethical wrong. But we should be conscious and informed of these changes, so that we hold the choice, and not the machine. If a dialogue between man and (semi-)intelligent machine were started as naively as the author of the Economist piece suggests, then humankind might indeed very naively be set to become machine-like.

Machines and AI are, certainly until now, extensions and models of human behaviour. They are models and simulations of such behaviour; they are never humans. This can improve human existence manifold. But having the heater on is something quite different from asking a model of yourself: “What gives my life meaning? How should I come to a fulfilling existence?” Asking that of a machine, even a very intelligent one, is still asking a machine what it is to be human. It is not at all excluded that a machine may one day find a reasonable or valuable answer to that. But I would certainly wait beyond the first few iterations of this technology before possibly buying into any of the answers we might get.

It is deceptively easy to be unaware of such influences. In 1995 most people found cell phones marginally useful and far too expensive. A mere 20 years later almost no one wants to be parted from his or her smartphone. This has changed how we communicate, when we communicate, how we live, who we are. AI will have similar consequences. Those might be good, those might be bad. They should not, however, be covert.

Thus I am not saying at all that a machine should never enter a dialogue with humans on human existence. But when we enter that dialogue we considerably change the character of the interaction we have had with technologies for as long as we can remember. Humans have always defined technology, and our use of it has in part defined us. By changing technology we change ourselves. This plays out on the individual level (I am a different person now, due to using programming languages, than I was when I did not) and on the scale of society, where we are part of socio-technical ecosystems comprising technologies, communities, and individuals.

But these interactions have always been a monologue on the intellectual level. As soon as this becomes a dialogue, because the technology can now literally speak to us, we need to be aware that it is not a human speaking to us, but a model of a human.

I for one would be excited to learn what that means, what riches it may bring. But I would always enter such a conversation well aware that I am talking not to another human, but to a machine, and I would weigh that fact into the value and evaluation of the conversation. To assume that AI will answer questions on what course of action would lead me to improve my skills and my being may be buying too heavily into the ability of AI models to understand human life.

Sure, AI can help. Even more so if we are aware that its helpful qualities are by definition limited to the realm of what the machine can understand.


Methodological safety pin

There is a trope in articles related to digital humanities that I find particularly awkward. Just now I stumbled across another example, and maybe it is a good thing to muse about it for a short bit. Where the example comes from is, I think, not important, as I am interested in the trope in general and not in this particular instance per se. Besides, I like the authors and have nothing against their research, but before you know it flames are flying everywhere. So in the interest of all I file this one anonymized, for posterity.

This is the quote in question: “The first step towards the development of an open-source mining technology that can be used by historians without specific computer skills is to obtain a hands-on experience with research groups that use currently available open-source mining tools.”

Readers of essays, articles, reviews, and so on related to digital humanities will have found ample variations on this theme in the literature. From where I am sitting, such statements rig up a dangerous strawman or facade. A number of hidden (and often not so hidden) assumptions are glossed over with such statements.

First of all, there is the assumption that it is obvious that a scholar without specific computer skills should still be able to use computer technology. This is a nice democratic principle, I guess, but is it a wise one too?

Second, there’s the suggestion that all computer technology is homogeneous, that there is no need to differentiate between levels and types of interfaces and technologies, and that it can all be sweepingly represented as this amorphous mass of “open-source mining technology”. I know it is not entirely fair to pin this on the authors of such statements. Indeed, the authors may be very well aware that they are generalizing a bit in service of the less experienced reader. However, the scholarly equivalent would be to say that the first step for a computer scientist who wants to understand history is to get hands-on experience with historians. Even if that might be generally true, I expect more precision from scholarly argument. You do not ‘understand history’. One understands tiny, very specific parts of it, maybe, when they are approached with very specific, very narrowly formulated research questions and a meticulous methodology. I do not understand why the wide brush is suddenly allowed when the methodology turns digital.

Third, and this is the assumption I find most problematic (an axiom, rather, maybe): that there shall be a middle man, a bridge builder, a guide, a mediator, or go-between who shall translate the expertise of the computer-skilled persons involved to the scholar. You hardly ever read it the other way round, by the way; it is never the computer scientist who is in need of some scholarly wisdom. This in particular is a reflex and a trope I do not understand. When you need expertise you talk to the expert, and you try to acquire the expertise. But when it comes to computational expertise we (scholars) are suddenly in need of a mediator, someone who goes in between and translates between expertises. In much of the literature (which is itself part of this process of expertise exchange) this is now a sine qua non that does not get questioned at all: of course you do not talk to the experts directly, and of course you do not engage with the technology directly. When your car stalls, you don’t dive into the motor compartment with your scholarly hands, do you?!

Maybe not, though I at least try to determine, even with my limited knowledge of car engines, what the trouble might be. But I sure as hell talk to the expert directly. The mechanic is going to fix my car; I want to know what the trouble is and what he is going to do. Yes, well, the scholar retorts, but quite frankly I do not talk that much to my mechanic about car engine trouble at all! Fair enough, it might not be your cup of tea. But the methodology of your research should be. Suppose you are diagnosed with cancer: do you want to talk only to your doctor’s secretary?

Besides, it is about the skills. A standard technique for exposing logical fallacies in reasoning is to substitute the object phrases. I play this little game with these tropes too: “The first step towards the development of a hand grenade that can be used by historians without specific combat skills is to obtain a hands-on experience with soldiers that use currently available hand grenades.”

This doesn’t invalidate the general truthiness of the logic, but it does serve to lay bare its methodological fallacy: if you want to use a technology and rely safely on the outcome of its use, you had better acquire some basic skills from the experts.