Artificial intelligence (AI) seems to be everywhere, including the art world. Recently, AI decided that a painting long thought to be a copy of Caravaggio’s The Lute Player is actually by the master, while another version of the same subject, previously thought to be authentic, is not. Both conclusions were disputed by the former Metropolitan Museum of Art curator Keith Christiansen. A similar debate erupted in March 2025 when AI declared that portions of The Bath of Diana, also long believed to be a copy, could have been painted by Peter Paul Rubens. Again, a leading Rubens scholar, Nils Büttner, disputed this. In this battle between machines and humans, it sometimes seems as if machines are winning. Dwindling inventory and rising values incentivise attempts to certify “new” works by famous artists, while fear of costly litigation has caused many experts to stop issuing opinions of authenticity. The art world is increasingly inclined to challenge what the journalist Sarah Cascone calls “the dubious science of connoisseurship”.
As the leading expert on Egon Schiele and the author of his catalogue raisonné, I have been issuing opinions of authenticity regarding works attributed to him since 1990. Every year, the Kallir Research Institute receives more than 100 submissions, around 95% of which are fakes, forgeries or misattributions. Most can easily be identified as such by me and my staff. No special tools or forensics are required, just deep familiarity with Schiele’s authentic oeuvre. Occasionally, these fakes are accompanied by lengthy but ultimately meaningless scientific reports purporting to support an attribution to Schiele. Scientific techniques such as X-rays, infrared reflectography and pigment analysis can indeed usefully complement the observations and insights of human experts. However, these techniques can only determine whether the materials and methods used to create a given work are compatible with those of the artist in question. Forensic testing may rule out an attribution, but it is not sufficient to make one. Similarly, while AI can be trained to augment human expertise, it is unlikely that the technology alone will ever be capable of reliably authenticating art.
‘Unbridgeable differences’
There are fundamental, unbridgeable differences between the ways machines and human beings function. Human expertise combines intensive, protracted looking with the investigation of pertinent historical circumstances. Over time, experts come to recognise an artist’s unique methods, materials, signature styles and developmental phases, the ways in which those factors align in genuine works, and how each work relates to others by the artist. Visual evidence is primary, but experts also learn to assess more tactile, physical qualities, such as an artist’s typical substrates. Finally, experts must consider ancillary documentation, which is sometimes faked. They study an artist’s principal collectors, as well as common patterns of ownership, authentic records of transfer and recurring collection stamps. Human expertise is multifaceted and cumulative, backed by years of focused effort, published results and acceptance by art-world peers.
An AI cannot see, smell, taste, hear or feel. AI attempts to mathematically replicate the human brain’s neural networks by reducing sensory inputs to numerical formulae. This is why AI is best at tasks, such as coding, that are mathematically based to begin with. An AI is limited to past knowledge, which may be vast but will never be new. In the case of art, AI uses digital photographs to identify stylistic “fingerprints”, the underlying patterns that distinguish an artist’s genuine works. Not only is this approach one-dimensional, but it is inherently hampered by the quality and quantity of available digital images. Quality depends on the photographer’s skill, equipment and lighting, but even the best photographs are not 100% accurate. Moreover, properly training an AI can require thousands of images. Few artists’ oeuvres are that large to begin with, and stylistic changes over the course of a career may further dilute the comparative dataset.
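To make the data problem concrete, here is a minimal, hypothetical sketch of the kind of image-classification setup an attribution model might rely on: a pretrained network fine-tuned on photographs labelled “genuine” or “other”. The folder layout, labels and training details are illustrative assumptions, not a description of any actual authentication system; the point is simply that an oeuvre of a few hundred works yields far fewer training images than such models normally need.

```python
# Hypothetical sketch: fine-tuning a pretrained image classifier on a small
# set of artwork photographs labelled "genuine" / "other". The folder layout
# and labels are illustrative assumptions, not a real authentication pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: artwork_photos/genuine/*.jpg and artwork_photos/other/*.jpg
dataset = datasets.ImageFolder("artwork_photos", transform=preprocess)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# A few hundred images is tiny by deep-learning standards, so the usual
# workaround is transfer learning: freeze a pretrained backbone and train
# only a small classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # genuine vs. other

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Even with such workarounds, a model of this kind learns only statistical regularities in the photographs it is shown; the provenance, substrates and collection stamps described above remain invisible to it.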
In his 1992 book Technopoly: The Surrender of Culture to Technology, the media theorist Neil Postman predicted that technology would eventually smother culture by reducing complex issues to quantifiable data and numerical calculations. He noted a fundamental difference between naturally occurring processes, which can be studied scientifically, and human practices, which are not governed by natural laws. Connoisseurship is not a “dubious science”, because it’s not a science at all. AI potentially deceives by offering the illusion of objective certainty in a field—art—that is inherently subjective.
• Jane Kallir is a curator, former dealer and president of the Kallir Research Institute