The development of photographic processes paved the way for a more incorporeal, automatic tracing or transfer of images through the photochemical action of light on a sensitive emulsion. Digital processes have made replication even more intangible: there is no difference between the original and the copy, and appropriation and circulation have become easier and faster. Because they rest on the properties of electronic circuits, images become virtual, available on several types of screens and transmissible by wired and wireless technology like any other kind of information, further accelerating the «age of technical reproducibility» that is a staple of contemporary culture.
Owing to its continuous spatial nature, the image has always been more complex to multiply than writing: even ideographic writing uses simpler, more concise graphic marks and unfolds in a more linear, discrete way. Several rudimentary systems had been invented to reproduce simple writings and drawings, mostly emblems, since the Sumerians, who used «negative» clay plates (circa 3500 B.C.); but it was with the adaptation of the Chinese xylography technique that images began to multiply effectively. Thus the woodcut was created, becoming dominant and autonomous in Europe from the 15th century (Picture 1).
Following Gutenberg’s improvements, with the creation of the lead movable-type printing press in the mid-15th century, books – which had hitherto been hand-copied, as were the engravings they included – were finally able to multiply, and the same happened to images. Woodcut printing uses a special press and a wooden matrix carved in relief or in recess: in the first case only the raised parts are inked, possibly in several colours, whereas in the second only the recesses are. As an alternative to wood, metal engraving, mostly in intaglio, was also used at the time.
According to Marshall McLuhan’s famous analysis in The Gutenberg Galaxy, the «typographical man» was born along with the printing press: he is characterised by the primacy of sight over the other senses. Reading favours his analytic rationality, which privileges the linearity, spatiality, distance and self-control demanded by concentration, as well as individuality. These factors affected all of European visual culture, which has been associated with the circulation of information ever since.
The biggest innovation in this field came only with lithography, invented circa 1796 by the German Alois Senefelder and based on the principle that water and oil do not mix. The name comes from the use of limestone as the matrix (lithos means stone in Greek), on which the drawing and/or text is inscribed with a grease pencil; the grease is then chemically stabilised so that ink adheres only to these parts, leaving the «negative» zones white and wet. The use of chemical reagents was the greatest innovation of the process, and it lies at the basis of the industrialisation of printing. As with other engraving processes, lithography has been widely used by artists (Picture 2).
The invention of photography, whose history is frequently associated with that of painting, may (and should) also be seen in the light of the history of engraving and the issue of the technical reproducibility of the image. Indeed, many of its inventors, such as the Frenchman Nicéphore Niépce, who owned a lithography shop, or the Briton Thomas Wedgwood, the proprietor of a hand-painted china factory, sought a process similar to lithography that might facilitate image reproduction. One of the «early photographs» («heliographs», as Niépce named them) was the reproduction of an engraving (Picture 3).
In fact, much of the technical vocabulary of photography is close to that of engraving, although one of the most important photographic genres of the early decades, the daguerreotype, yields a single, unique image. The calotype, and later the glass-negative processes, paved the way for the production of countless photographic copies from a new kind of «matrix» – the negative. Even in its greyish (sometimes sepia or bluish) look, as well as in the early contact prints, photography remained close to engraving, recalling tracing techniques such as those behind Talbot’s «photogenic drawings».
Only as late as 1873, with the invention of the «half-tone» process, which transforms an image into ink dots, did it become possible to reproduce photographs in printed material, namely newspapers, paving the way for a radical transformation in the graphic arts and an increasingly massified visual culture. The «offset» photomechanical printing process, a cross between photographic and lithographic processes, was developed during the 20th century. This meant a quantum leap in reproduction: the same subtractive process used in photography was applied to colour. Worth noting, for less industrial purposes, are photocopiers and, mainly in art, serigraphy or silk-screen printing, which artists frequently use as a technique of appropriation/transformation of images from popular culture rather than merely as reproduction (Pictures 4 and 5).
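The «half-tone» principle mentioned above – rendering continuous grey tones as a pattern of discrete ink dots – can be sketched in a few lines of code. The following is a minimal illustration, not a historical reconstruction: it uses a standard 2x2 Bayer ordered-dithering matrix (an assumption chosen for simplicity) to decide where to place a dot.

```python
# Minimal sketch of the half-tone idea: continuous grey values become
# a pattern of ink dots. The 2x2 Bayer matrix below is a standard
# ordered-dithering threshold pattern, used here for illustration.

BAYER_2X2 = [[0, 2],
             [3, 1]]  # thresholds, scaled below to the 0-255 range

def halftone(grey_rows):
    """Convert a grid of grey values (0 = white .. 255 = black) into
    a grid of dots: '#' where ink is placed, '.' where paper shows."""
    out = []
    for y, row in enumerate(grey_rows):
        line = []
        for x, value in enumerate(row):
            # each pixel is compared against a position-dependent threshold
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
            line.append('#' if value > threshold else '.')
        out.append(''.join(line))
    return out

# A horizontal gradient from white (0) to black (255):
gradient = [[int(x * 255 / 7) for x in range(8)] for _ in range(4)]
for line in halftone(gradient):
    print(line)
```

Darker regions of the gradient receive denser dot patterns, which is exactly how newspaper reproduction simulated photographic tone with a single ink.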
Electronic digital images are associated with the need to transmit photographic and videographic images taken by robotic probes on missions to the Moon and Mars from the 1960s onwards, which led to the improvement of scanners, the first model of which appeared in 1950. A scanner projects light onto an image and transforms each variation of tone and colour, line by line, into sequential electrical impulses of corresponding variable intensity; these can be transmitted through wires or over the air and reconstituted at the receiver from the code that establishes the equivalence between each light dot, called a picture element or pixel, and a dot of the image. This extended the same idea as the chemical «grain» in photography and the «half-tone» dot network: an element perceived as essentially continuous, such as the image, can be treated as discrete, i.e., as a set of dots that can be codified into electrical information and therefore converted into the computer’s system of 0s and 1s. These principles were later applied to the production of film and photographic cameras.
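The digitisation principle described above can be sketched as follows. This is a simplified illustration under stated assumptions: a continuous tone is stood in for by an arbitrary brightness function, sampled line by line into discrete pixels and quantised to 8-bit codes that the computer ultimately stores as 0s and 1s.

```python
# Minimal sketch of scanning/digitisation: a continuous brightness
# signal (a hypothetical function, assumed for illustration) is
# sampled at discrete positions and quantised to integer codes.

def sample_line(brightness, width):
    """Scan one line: sample a continuous brightness function
    (returning values in 0.0..1.0) at `width` discrete positions and
    quantise each sample to an 8-bit code (0..255)."""
    pixels = []
    for x in range(width):
        value = brightness(x / (width - 1))   # continuous signal
        code = round(value * 255)             # discrete 8-bit code
        pixels.append(code)
    return pixels

# Example: a smooth left-to-right gradient becomes a row of codes...
row = sample_line(lambda t: t, 8)
print(row)
# ...and each code is ultimately just a pattern of bits:
print([format(p, '08b') for p in row])
```

The two steps – sampling (space becomes discrete positions) and quantisation (intensity becomes discrete codes) – are what turns the continuous image into transmissible, reproducible information.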
In the electronic image, provided the processing capacity of the transmission/reception media allows it, there is no information loss in the multiplication process, and manipulation becomes easy and undetectable, allowing for the simulation of the «photographic» (Picture 6). The chemical element is entirely replaced by electrical impulses, which can be stored in microchips and made easily accessible and visible on camera or computer screens. Their use in ever smaller and more portable cameras, such as those fitted in mobile phones with immediate access to the transmission network, makes the image a quick, cheap vehicle for exchanging individual experiences and a factor of sociability, thus contributing to the transformation of images into interfaces and access portals to other information or images.
One of the decisive reflections on the implications of the reproducibility of images was proposed by Walter Benjamin in his essay «The Work of Art in the Age of its Technical Reproducibility», first published in 1936. Benjamin reflects on the social changes in the forms of reception of, and relation to, works of art – especially painting and sculpture – introduced by the possibility of photographing and widely disseminating these works. According to his analysis, this meant the end of the aura of the work, whose defining feature was its unique character, its here and now, hic et nunc, accessible only where it is, forcing us to travel there, as on a religious pilgrimage. The exhibition value the work gains through photographic multiplication would offer the chance to end its cult value, thus helping to put an end to the theological concept of art he sees as bourgeois. However, he also complicates this relation, suggesting another reading: in the aestheticisation of politics by fascism, the cult value of the leader increases with his exhibition value, demanding, as Benjamin proposed, a «politicisation» of art, close to the avant-garde projects of his time.
There have been many diagnoses of image digitalisation. Among them is Lev Manovich’s essay «The Paradoxes of Digital Photography» (in Photography after Photography: Memory and Representation in Digital Culture, G + B Arts, 1996), in which he contests the idea that direct or «straight» photography, seen as «normal» and as the paradigm of photography, will be replaced – simply because such photography never existed: it was always a matter of procedures of choice and technical transformation, and its culture has always been one of manipulation. Perhaps the most important paradox the author mentions is that, despite the technical possibility of non-degradation, noise and degradation actually became a staple of digital culture, which is also able to provide more visual information than we need, producing hyper-real images.
Another reflection was made by Vilém Flusser in Towards a Philosophy of Photography (European Photography Edition, 1983). In it he interprets the history of the medium itself as an example of a post-industrial culture characterised by reproducibility: the aim of its complex, programmed machines is to accomplish their programmes, which transforms the photographic index we hold as the most authentic into a set of programmed, reproducible symbols (even in the «chemical» age). If we do not wish to be mere «servants» of photographic cameras, we must play against them, i.e., invent other, unexpected programmes for them (rather than merely transforming the indexical information of the world into a previously programmed type of information – in this case, a certain kind of image, say «a beautiful landscape» or «a fine portrait»).