The AI Impostor

A new phenomenon seems to have emerged in recent months, if not in the past two or three years: some people, both in industry and academia, present themselves as artificial intelligence (AI) experts without having the required competencies. In light of the recent success of artificial neural networks and some other machine-learning algorithms (which we self-indulgently call AI), this is an opportunistic tactic to jump on a train that most of us think not only carries the solution to all of our problems (this may, of course, be a bit exaggerated) but can also lead to a good job, money, and fame.

Let’s be clear about this: we are not talking about young engineering and computer science graduates who are starting their careers, are naturally fascinated by new technologies, and hence emphasize their knowledge of the field to get a chance to start. There is no shame in pointing to your undergraduate courses and small projects to demonstrate that you have the basis to grow into an AI expert. This is perfectly all right, as long as you stay committed to the facts and state your knowledge and abilities instead of aggrandizing them.

The AI impostors, mainly academics but also some industry employees, are people who change their fields overnight and suddenly claim to be AI experts without the depth and breadth of the necessary knowledge and experience. AI impostors are, essentially, scientific chameleons.

As for professors, we tend to run after any whistle that promises grants, a trait that is, unfortunately, reinforced by the publish-or-perish credo. As an engineer, I would frantically search my papers from twenty years ago for a sentence containing the words ‘personality’ and ‘self’ to prove I know about psychology, if there were an opportunity to get money for psychology research. It is silly, childish, and certainly unbecoming of scientific grandeur. The maxim of objectivity and unwavering dedication to facts in science leaves no place for masquerading. But as we academics happen to be part of the species Homo sapiens, we do indeed exhibit all the traits of its other members; having a Ph.D. does not seem to vaccinate us against utter imprudence and obvious greed.

Not that this makes much sense, but we could, just for the sake of entertainment, devise a Turing Test to recognize AI impostors. Such a Turing Impostor Test should distinguish a real AI expert from a hoaxer. As my working environment is a postsecondary institution, I may be able to contribute to the academic version of such a test by proposing some questions for the Turing judge (who, by the way, has to be a real AI expert, say a colleague like Geoff Hinton). The Turing Impostor Test does not require separate rooms and has to be conducted face-to-face (which, understandably, would freak out all impostors).

So here we go with some questions to debunk AI impostors:

When did you get your Ph.D.? If it was less than ten years ago, you can hardly call yourself an expert in anything, unless you can back it up with 10,000 citations (minus self-citations) for the fantastic algorithm you published two years ago.

What was the topic of your Ph.D. thesis? If your field of research, reflected in the title of the thesis and its content, is not AI, you can hardly call yourself an AI expert. Rudimentary alignments of some of the pages of your thesis with some AI methods do not count.

How many publications do you have in the AI field? Here the crafty nature of professors could potentially fool the judge, though not one of Hintonian caliber. Using some notions of probability theory, a little statistics here, and some toothless pattern recognition there may not even qualify as AI knowledge, let alone expertise.

How old are your AI publications? Related to the question of Ph.D. age, this question aims at the only thing that matters in science. If you have publications in AI (whether theoretical, algorithmic, or applied in nature), then you may as well claim competency (well, your colleagues would recognize you anyway if that were the case). Of course, seniority and track record do count. Associating knowledge and wisdom with the white-bearded professor may appear superficial, but, resting on a body of decently cited literature, it has some validity.

The AI impostor, naturally, would never expose himself to such a test. He generally operates in the small and cozy environment of his institution, where he may manage to impress some students and, through his relationships, the administration of his university. AI impostors use cunning marketing techniques, choose fancy titles for their papers and (local) talks, and embed colorful but unintelligible graphics in their publications and presentations. They are mainly after resources, and they only need to deceive a small number of people at their institutions to achieve their goal.

The AI impostor is not just a silly figure with a simplistic, parochial, and naive worldview. Beyond the ridiculousness of their actions, AI impostors may seriously damage their home institutions. A university, faculty, or department that puts forward a swindler to represent it as an AI expert jeopardizes its reputation, a strategic risk that should not be taken lightly.

First published on LinkedIn (July 04, 2017)