The specific case of visualization tools
Finally, among the vast set of digital tools made available to researchers, special attention must be paid to visualization tools, in which we have become particularly interested in our own work. This choice had a purpose: it neutralized the humanist’s suspicion of an ostensibly quantitative approach such as statistics; it also had an even more strategic goal: to make the hermeneutic added value of technology tangible. The specificity of data visualization lies in allowing a purely visual grasp of the corpus’ elements, favoring the detection of similarities, regular patterns or, on the contrary, anomalies, which can then be interpreted through subsequent exploration of the legend, or which at least guide interpretation, if not open new research. This preattentive perception, brought to light in 1985 by Anne Treisman, quite certainly underpins the hermeneutic productivity of visualization. Jean-Marie Schaeffer extended this observation by associating our ability to interpret precisely with the categorical wandering of that preattentive phase: it allows approximations, often based on graphic forms alone, to operate effortlessly and without regard for the categories of the things represented (the link to categories comes in a second phase, with the exploitation of the legend and data tags). Jean-Marie Schaeffer attributes the richness of artistic experience in general to what he calls a “delayed categorization”: the fact that the object read or seen resists the immediate assimilation that categorization would allow, or at least that it offers a space, an “elbow room”, which is the source of creativity. Faced with data visualization, categorization likewise loses its self-evidence and frees the associative power of wandering attention.
To give a very simple example, the researchers in the ANR Euterpe group knew from experience when peaks in the publication of scientific poetry occurred during the 19th century, and even without the aid of visualization, mere familiarity with the corpus allowed them to identify three periods of success for the genre with little risk of error. The visualization, however, revealed a pattern to them: a series of ebbs and flows, of the genre going in and out of fashion, a process that visibly stalls after 1872 (and not in 1871, as an effect of the tightening of publication in a country at war, as might have been expected; that year instead saw a proportional increase in poetic writings within overall editorial production).
Figure 2.1. Publication rhythm of scientific poetry books
For historians who wondered why the genre had virtually disappeared by the 1910s, this simple diagram offers an alternative route: the closing decade is in fact the end of a breakdown process that began in 1872-1876, the moment when the usual upswing of the cycle clearly stops. The question is thus no longer what happens in 1890, but rather what happens around 1874? This shift also engages another field of knowledge for the researcher: if 1890 is known as the theater of a profound crisis in the book trade (an explanation, therefore, not specific to the genre studied), the immediate post-war period, by contrast, saw large debates over the causes of the French defeat in the face of German science, making science both a coveted power and the privilege of the hated and reviled victors. Through visualization, another possible story immediately appears, one that is well-founded and verifiable, and for which the researcher is grateful to the tool, whatever native mistrust they may harbor.
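The kind of reading described above, spotting where a cyclical rhythm of ebbs and flows stalls, can be reproduced on any series of yearly counts. The sketch below is illustrative only: the counts are invented placeholders, not the Euterpe corpus data, and the peak rule (a year strictly higher than both neighbors) is a deliberately naive assumption.

```python
# Naive local-peak detection on a yearly publication series.
# The counts below are invented placeholders, NOT the Euterpe corpus.
counts = {
    1860: 12, 1861: 15, 1862: 11, 1863: 18, 1864: 14,
    1865: 10, 1866: 16, 1867: 21, 1868: 13, 1869: 9,
}

def local_peaks(series):
    """Return the years strictly higher than both their neighbors."""
    years = sorted(series)
    return [y for prev, y, nxt in zip(years, years[1:], years[2:])
            if series[y] > series[prev] and series[y] > series[nxt]]

print(local_peaks(counts))  # prints [1861, 1863, 1867]
```

Each printed year marks a local upswing of the cycle; a long run of years with no peak is precisely the kind of stall that the Euterpe diagram made visible after 1872.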
These visualization-based approaches were introduced into literary history by the comparatist Franco Moretti, who used Stanford’s resources to produce visualizations that allow new questions to be asked of known corpora, and of data that is not necessarily quantitative but lends itself to spatialization, even in the absence of geolocation. Both powerful and evocative, the images from this work have opened new perspectives, even if the polemics were heated, including within the researcher’s own circle. They remain a reference, as this was the first time a researcher led and sponsored a visualization project [MOR 08].
Among these visualizations, we became particularly interested in those representing influence networks, for the reasons mentioned above. The tools we experimented with did not meet the criteria we had set, particularly regarding the expression of researchers’ points of view. To provide a more effective tool, we designed a digital device primarily dedicated to the expression and sharing of these points of view. The principle consists of modeling researchers’ arguments as interoperable semantic graphs: networks linking documents, actors and concepts, anchored in a given time and place. With this tool, we aim to go beyond the tools available today by offering a generic method for analyzing digital archives and developing collective intelligence processes, understood as:
“The ability of human groups to collaborate on the intellectual stage to create, innovate, and invent. This ability can be applied to any level, from small work groups to the human species by going through networks of all sizes.” [LEV 10, p. 105]
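To make the modeling principle concrete, here is a minimal sketch of what an argument expressed as a semantic graph might look like: typed nodes (document, actor, concept) connected by relations anchored in time and place. The class names, field names and toy labels are our own illustrative assumptions, not the actual schema of the device described above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Node:
    kind: str    # "document", "actor" or "concept"
    label: str

@dataclass(frozen=True)
class Relation:
    source: Node
    target: Node
    predicate: str        # the researcher's asserted link, e.g. "wrote"
    year: Optional[int]   # anchoring the claim in time
    place: Optional[str]  # anchoring the claim in space

def related(graph: List[Relation], node: Node) -> List[Node]:
    """All nodes that a given node points to in the argument graph."""
    return [r.target for r in graph if r.source == node]

# A toy argument: an actor wrote a document that thematizes a concept.
author = Node("actor", "Author X")
poem = Node("document", "Poem A")
theme = Node("concept", "scientific progress")
graph = [
    Relation(author, poem, "wrote", 1874, "Paris"),
    Relation(poem, theme, "thematizes", 1874, None),
]
print([n.label for n in related(graph, author)])  # prints ['Poem A']
```

Because each relation is a plain, typed record, graphs built by different researchers over the same archive can be merged or compared, which is one way the interoperability and sharing mentioned above could be realized.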