CHWP September 2003. © Editors of CHWP 2003. Jointly published with TEXT Technology (12.1), McMaster University.
This issue of Computing in the Humanities Working Papers, jointly published with TEXT Technology, was inspired by the Toronto COCH-COSH conference held in May 2002, “Inter/Disciplinary Models, Disciplinary Boundaries: Humanities Computing and Emerging Mind Technologies”. That conference was the high water mark for COCH-COSH in recent years (with over 70 presentations) and reminiscent of the milestone ALLC/ACH conference held in Toronto in 1989 (see Lancashire et al., Research in Humanities Computing I: Papers from the 1989 ACH-ALLC Conference, Oxford University Press, 1991). Canadian humanities computing enjoys now, as then, a new level of institutional acceptance and support. Established university programs for the field (at McMaster U and the U of Alberta, to name two) and multi-university collaborations, such as the TAPoR and SAToR projects, reflect the Society's traditional “consortium” focus. Indeed, one of the distinguishing traits of humanities computing --recognized at the Society's inception as the locus of both its strength and weakness-- is a foundational need for collaboration.
The research collected here offers a sample of the diversity and distinctiveness of Canadian humanities computing. These projects of 2003 appear on the surface to be very similar to those of 1989. Humanities computing is still about making a place for computers in literary (meta)theory (Finn, McCarty), literary database studies (Santos and Fortier, Sinclair), lexicology (Lancashire, Roberts-Smith), and pedagogy (Bonnett and Dubrule). Yet beneath these superficial classifications lies a substantial change in approach and results.
Humanities computing has only recently turned its attention to the challenges of understanding interactive fiction as literature. Now, as we see in Patrick Finn's article, “What you C is What You Get: Formalizing Computer Game Criticism”, games are posing an even newer and knottier challenge. Finn considers the formal and cultural critique of computer games such as Half-Life and Counter-Strike and suggests that though game studies are “the new kid on the academic block” they will ultimately receive more attention because “the academy, like all good bureaucracies, eventually makes everything its business”. This expanding role of computers in cultural practices is indeed a constant challenge to humanities computing.
Interaction of any sort, whether in games or otherwise, has long been an enigma for traditional literary theory (in spite of what reader response criticism might offer), yet for Willard McCarty interaction --with literary data-- is the solution to the problem of interpretation. In “‘Knowing true things by what their mockeries be’: Modelling in the Humanities” he outlines the humanities computing modelling tradition, taking as his starting point presentations by Northrop Frye and Jean-Claude Gardin at the 1989 ACH-ALLC Toronto conference. In “Depth, Markup and Modelling” he ties modelling theory to modelling practice through a study of personification in Ovid's Metamorphoses. What he comes to see as crucial is not so much the replication or snapshot of a phenomenon, but rather the ability to manipulate the model: “and so it goes: results from the model fail in some particular to correspond to one's sense of the data, temporary adjustments are made, these force an elaboration or reconstruction of the model, new results are generated.” Markup falls short of the modelling mark because “current markup software is ... non-dynamical. One is able to make models with markup systems but not to do modelling, not properly so called.” In other words, literary data is something that must be constantly adjusted, moulded, and coaxed towards an interpretation.
Modelling in McCarty's sense adds a new dimension to humanities computing mainly because of the “depth” that these techniques aspire to attain. And so too do large-scale collaborative projects such as the SAToR project presented by Stéfan Sinclair. His SAToRBase is an online tool designed to centralize and manage the close reading of the hundreds of scholars that belong to the Société d'Analyse de Topiques Romanesques [society for the analysis of topoi in narrative fiction]. When members encounter a topos in their reading they can log it in the online database and the resulting database can be used to view the whole corpus of novels from the 13th to the 18th century. “Depth” here lies in that large-scale close reading that can only be properly managed by software. Sinclair's article describes the interface that allows that collaboration to happen.
Online collaboration through a virtual classroom is the focus of Diane Dubrule's article, “Teaching Scholastic Courses Requiring Discussion On Line”. What is novel here is that the virtual classroom is no longer a project, but rather an effective practice that has logged a number of classroom hours. We can finally have some insight into its strengths and weaknesses. For the author's courses, Computer Ethics and Information Ethics in the Philosophy Department at Carleton University, “evaluations by students and of students indicate that the courses have not only met, but exceeded the learning goals of a corresponding classroom course”.
John Bonnett shows how 3D modelling and information visualization with computers can be used as a powerful pedagogical technique for developing students' critical thinking. In his “Oral Tradition in 3D: Harold Innis, Information Visualisation and the 3D Historical Cities Project” he describes a constructivist pedagogy using computer modelling to translate information into visual representations. Like graphing in mathematics, constructing 3D visualization is a creative and pedagogically sound way to give students mastery over historical information: “For many [students], the project has provided them with their first opportunity to generate a finished piece of work without being told what to do by the teacher.”
Ian Lancashire and Jennifer Roberts-Smith show us what insights can be gained from the tidal wave of information that can be found in early modern lexicons. Ian Lancashire's “Lexicons of Early Modern English” describes the free online Early Modern English Dictionary Database (EMEDD) and the companion database Lexicons of Early Modern English (LEME). Lancashire notes that “bilingual lexicons have linguistically rich English explanations, full of synonyms and idioms, but the word-entries are alphabetized by foreign-language headwords so that the only way to look up English words in paper editions is by scanning the text manually. As a searchable database, these texts open up their wealth of information about the English language.” Lancashire considers Shakespeare's language in the light of these lexicons; in Jennifer Roberts-Smith's “Puttenham rehabilitated: the significance of ‘tune’ in The Arte of English Poesie” we have a detailed investigation of Puttenham's usage of ‘tune’, a usage which is in fact not attested in the OED, but for which Roberts-Smith can amply find attestations in the LEME. Both these studies, valuable in themselves for their study of early modern language and their critique of the OED, portend just as importantly a new, online humanities scholarship, an “open source” scholarship because freely available to scholars, where “any number of eccentrics may be rehabilitated, any number of esoteric arts redeemed”.
While humanities computing will continue to excel at building architectonically on information and rendering explicit what is vague in our descriptions of human artefacts, it is perhaps in software-mediated collaboration --the interface between researchers themselves, as much as between researchers and their data-- that we will find a fundamentally new style of humanities computing. The editors are pleased that CHWP and TEXT Technology can contribute to that collaborative future.