Making 3D Visual Research Outcomes Transparent
Visualisation methods are increasingly prominent in research across a wide range of arts and humanities disciplines, and in the cultural heritage sector. Amongst these, one of the most challenging, but potentially rewarding, is 3-dimensional (3D) visualisation. However, if users of 3D visual research outputs are to understand and critically evaluate them, they require information about the incremental process of data gathering and assessment, hypothesis testing, and the consequent choices through which the outcomes were produced. In addition, researchers need to convey distinctions between “hard fact”, plausible conjecture, intelligent guesses, and alternative hypotheses.
Despite a number of initiatives in this area, there is as yet no broad consensus or widely accepted standard regarding the nature or degree of documentation that visualisation-based projects should include in order for their research outcomes to be "transparent". This absence has undoubtedly contributed to the variable quality of both visualisation-based research processes and outputs, and continues to compromise the perception of such projects within the wider community. With the increasing availability of inexpensive 3D-visualisation-creation tools that can be deployed by those with little or no discipline-based training, the lack of standards for documenting 3D research visualisation is a problem of ever greater urgency. Conversely, the creation and adoption of such standards could dramatically improve the quality, standing, and consequent use of these potentially very valuable research methods.
This Symposium aims to address these issues and opportunities by bringing together a small number of speakers and audience members who are expert in aspects of 3D visualisation and have developed interests in these questions. Together they will discuss the theory and practice of "transparency", and attempt to move towards a consensus on minimum standards and recommended methods for recording the process of analysing, creating and interpreting 3D visual research outcomes.
The symposium will present surveys of the state of the art, suggestions and tentative guidelines, together with reports on work in progress. It will feature an interdisciplinary panel of scientists, social scientists, engineers, metadata-implementation experts and archaeologists, who will discuss current practice in process tracking within academic contexts where such tracking is an essential and well-established part of the discipline. Examples and case studies from leading practitioners showing how transparency information can be recorded and disseminated will also be welcome.
The AHRC-funded project "Making Space: A Methodology for Tracking and Documenting the Cognitive Process in 3-dimensional Visualisation-based Research" has proposed the term "paradata" to denote accumulated information recording such processes. A major objective of the Symposium and Expert Seminar will be to attempt to agree a usable definition of paradata, and to assess its value and possible roles in making the choices, decisions, and rationale that shape 3D research processes transparent and susceptible to evaluation by others. The Symposium will also complement and further develop the activities of EPOCH in identifying and setting appropriate standards for VR- and 3D-visualisation-based activity, which was the subject of a recent workshop at the VAST 2005 Conference in Pisa.
Burning questions include: Is visualisation-based research a process or a product? Who will record and use paradata? How do we decide how paradata are recorded and disseminated (e.g. as a metadata element; in graphical, numerical or textual form; as XML; or as a multimedia element)? Is there a 'one size fits all' solution, or are different solutions appropriate for different types of project and/or outcome?
“How to read and write what we know from what we do”
Members of this panel are asked to comment on the implications for scholarship of constructing and using digital objects into which primary intellectual capital has been invested. They have been invited from disciplines with traditions and methods for tracking and preserving the research process, and for dealing with implements of research that are themselves contributions to knowledge. The hope is that scholars in the digital humanities may learn from those in fields where equipment-orientated research is better understood, and that they may contribute to the current debate on the topic of “things that speak”. The panel is specifically asked to suggest recommendations that the Symposium might make to visualisation projects in this regard.
The introduction of computing tools for research in the humanities has brought about many changes in the way scholars work. But we do not yet have a firm grip on the stylistic changes in reasoning and expression that develop from these altered forms of practice. Lorraine Daston and others have commented on the curious lack of attention to the epistemology of the humanities – to the question, as she says, of how we know what we know. Computing provides both motivation and means for addressing that lack, since digital scholarship must articulate and implement its research methods. By thus forcing us to make explicit and to externalize the formerly inarticulate stages of research, humanities computing draws attention to method and the question of method, generates the need for observant, self-reflective practice, and problematizes traditional forms of expression. The dynamic, open-ended nature of computing strongly suggests that attentiveness to scholarly practice is not a temporary flicker in the transition from print-based to digital scholarship but a durable virtue, if not a requirement, of good work in the new media.
Methods of investigation, externalized as tools, become themselves epistemic objects and so forms of scholarly expression, but they speak in a language we lack the training to understand. Much recent work on the material culture of these epistemic objects – e.g. in the history of science, anthropology, textual editing, aesthetics and art history – provides a lively context in which to ask about their communicative properties and about the new habits of reading them that we need to develop. Problems in scholarly peer-review and publication follow.
At the same time as intellectual capital is invested directly in the design, development and use of research machines that we find difficult to read, the “paradata” of their making and use – e.g. decisions of design, application and interpretation that guide subsequent work, and apparently unrelated or inexplicable, but potentially revolutionary, anomalies – tend to be lost. We need to find ways of preserving the footprints of these processes, so that others can understand and evaluate our analyses and choices. We need to consider what types and scope of paradata should be disseminated with research outcomes such as 3D reconstructions of historic buildings or artefacts to make them susceptible to such evaluation.
The use of equipment that continually demands attention, the externalization of research methods in explicit form, and the ludic qualities of work with computing all point toward a precedent in the experimental sciences. This is not to scientize the humanities, but rather to suggest that neighbourly help is to be found in an experimental practice with a long and distinguished history. This practice suggests, for example, the use of laboratory notebooks, and recommends whatever else may assist re-enactment of exploratory research. It suggests as well that the traditional rhetorical forms of scholarship in some fields may not be adequate, because they obscure or erase the historical development of the ideas expressed in them. Hybrid kinds of documentation that combine, for example, aspects of the traditional essay with those of the laboratory report may be needed.
Individual Paper Abstracts
Individual abstracts for speakers' papers can be found HERE.