Problems with AI Research

These days, AI research is done either in small academic labs or in larger private ones (Google, Uber, etc.). Many people believe these private labs bring fresh air into the dusty academic world and are better anyway because "Don't be evil" and all that. An AI researcher who has lived in both worlds sees it differently. Gary Marcus writes for the New York Times (Artificial Intelligence Is Stuck. Here's How to Move It Forward.):

Academic labs are too small. Take the development of automated machine reading, which is a key to building any truly intelligent system. Too many separate components are needed for any one lab to tackle the problem. A full solution will incorporate advances in natural language processing (e.g., parsing sentences into words and phrases), knowledge representation (e.g., integrating the content of sentences with other sources of knowledge) and inference (reconstructing what is implied but not written). Each of those problems represents a lifetime of work for any single university lab.

Corporate labs like those of Google and Facebook have the resources to tackle big questions, but in a world of quarterly reports and bottom lines, they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content. There is nothing wrong with such research, but it is unlikely to lead to major breakthroughs. Even Google Translate, which pulls off the neat trick of approximating translations by statistically associating sentences across languages, doesn’t understand a word of what it is translating.

Unlike in pharma, where much of the research stays private, I would prefer research around artificial intelligence to be freely available and open to scrutiny, much like open source. That probably wouldn't work in private labs, since they want to make money from it. Gary Marcus wishes for something like CERN:

I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration, with thousands of scientists and billions of dollars of funding. They pursue ambitious, tightly defined projects (like using the Large Hadron Collider to discover the Higgs boson) and share their results with the world, rather than restricting them to a single country or corporation. Even the largest “open” efforts at A.I., like OpenAI, which has about 50 staff members and is sponsored in part by Elon Musk, is tiny by comparison.

I see it the way Gary Marcus does: artificial intelligence should be a common good, not the privilege of a select few. Right?