
This review appeared in Volume 10(3) of The Semiotic Review of Books.

Forensic Semiotics

by Warren Buckland

Analysing for Authorship: A Guide to the Cusum Technique by Jill M. Farringdon, with contributions by A.Q. Morton, M.G. Farringdon, and M.D. Baker. Cardiff: University of Wales Press, 1996, xii + 324 pp.

Without an objective test of authorship -- one capable of verification by experiment -- even the most reasoned arguments for X or Y being the author of a work are endlessly open to dispute. (Jill Farringdon, Analysing for Authorship, p. 8)

As is well known, at the beginning of the twentieth century structural linguistics brought the scientific procedures of systematisation and formalisation to the study of language. Its aim was to identify the minimal units of language, the irreducible invariant traits that constitute language's specificity. Taking its cue from structural linguistics, poetics attempted to ground the study of literature in science by systematically formalising the irreducible invariant traits of literature as such -- what Jakobson called 'the literariness of literature'.

Literary theorists long ago moved on to interrogate their own scientific status and pretensions. What, many of them now ask, was the purpose of a science of literature beyond the desire to gain academic respectability? Was it feasible for a study of literature to limit itself to the scientific procedures of structural linguistics? With the demise of a poetics of literature (in favour of a hermeneutics), it indeed seems appropriate, with hindsight, to bring into question literary theory's scientific pretensions. It is now commonplace to hear literary theorists argue that the scientific processes of systematisation and formalisation reduce literature to a level of abstraction that makes it unrecognisable. In other words, the scientific procedure is an inappropriate framework in which to study literature as literature.

This does not, of course, rule out a scientific study of literature for other purposes. In the following pages I shall review recent developments in stylometry, a statistically based scientific investigation that quantifies language style. By reviewing Jill Farringdon's Analysing for Authorship, a remarkable book in applied stylometry, we shall discover that stylometry does not analyse language style for aesthetic reasons. Instead, it has many practical applications -- not least to bring the scientific procedures of systematisation and formalisation to the traditional and long-standing issue of attributing authorship to an anonymous or pseudonymous text. Authorship attribution is a pressing matter not only in New Testament and literary scholarship (celebrated cases being the disputed authorship of the Pauline Epistles, the Junius Letters and, more recently, the authorship of the novel Primary Colors), but in the legal context as well (for demonstrating whether the defendant wrote his or her confession, or whether it was 'coauthored' with the police, for example). Here, the scientific status of stylometry is crucial, since its results must stand up in a court of law -- which has resulted in the formation of the discipline called 'forensic stylometry'. Moreover, Jill Farringdon discusses other practical functions for a stylometric study of authorship, including identifying the anonymous translator of a text, testing whether another author has edited a text, analysing a writer's style over time and in different genres (e.g. the scholarly essay and the novel), detecting plagiarism, and determining whether the method works on the utterances of non-native speakers of a language. In his contribution to the book, M.D. Baker investigates whether the writer's social and educational background influences the results of a stylometric analysis. Finally, another practical application -- one Farringdon does not mention -- is that academics can use stylometry to determine the authorship of anonymous readers' reports!

Stylometry employs the procedures of descriptive statistics to quantify -- or systematise and formalise -- the irreducible invariant traits from the data under analysis. From this quantified data, stylometry crucially moves into the realm of inferential statistics by making predictions and testing hypotheses -- not least those concerning who is the author of an anonymous text.

Much of the following review will focus on the way stylometry attempts to place authorship attribution on a scientific foundation. Before authorship attribution can be carried out, however, the stylometrist must first establish a set of objective procedures to identify authorship: the irreducible invariant traits of an author's writing style. In the sixties, Andrew Q. Morton pioneered the 'cusum method' (a contraction of 'cumulative sum', sometimes written QSUM) to achieve this aim. With several colleagues, including Jill Farringdon, Morton subsequently refined the cusum method in the eighties. His latest exposition of the method appears as Chapter 11 of the book under review.

As we shall see in more detail below, for Farringdon and Morton the cusum method is an objective procedure that enables the stylometrist to attribute authorship to an anonymously written text by comparing its stylistic traits with texts of known authorship. However, as with all statistical inferences, the stylometrist can only attribute authorship to an anonymous text with a degree of probability rather than complete certainty. Additional statistical tests need to be carried out to determine the degree of probability of an inference such as an author attribution. Despite Farringdon's claims of objectivity, the cusum technique is highly controversial and open to dispute -- and indeed has been disputed by several authors, many of whom are discussed in Chapter 10. After reviewing the contents of the book chapter by chapter, I shall return to the controversial nature of the cusum method and approach the issue of its scientific status.

In Chapters 1 and 2 of Analysing for Authorship Jill Farringdon presents a clear and comprehensive account of how to use the cusum method to attribute authorship to texts of unknown or doubtful origin. However, she does not discuss the origin of the cusum method within manufacturing industries. Briefly, this method has been used in an industrial context since the fifties as a form of quality control -- that is, as a way of statistically measuring the quality of industrial output. All manufacturing inevitably involves deviation in quality, and the purpose of quality control measures such as the cusum method is to quantify that deviation from a given standard, and to indicate to the manufacturers when to implement corrections to reduce the deviations. Several quality control measures exist to quantify deviation, but what makes the cusum method unique is that it easily detects deviation trends by creating a chronological or continuous sum of those deviations, rather than individual sums. In other words, the cusum method takes the previous values of deviation into account and adds them to the current value of deviation. This makes the cusum method very sensitive in detecting deviations maintained over time, since such deviations will cumulatively add up.
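To make the quality-control logic concrete, here is a minimal sketch -- my own illustration, not drawn from the book -- of a cumulative sum of deviations from a target value. A maintained drift away from the target keeps accumulating in the running sum, whereas isolated random fluctuations largely cancel out.

```python
# Minimal illustration (not from the book): a cumulative sum of deviations
# from a target value, as used in industrial quality control. A sustained
# drift away from the target keeps accumulating, whereas isolated random
# deviations largely cancel out.

def cusum(values, target):
    """Return the running (cumulative) sum of deviations from target."""
    running, series = 0.0, []
    for v in values:
        running += v - target          # add this observation's deviation
        series.append(round(running, 2))
    return series

measurements = [10.1, 10.0, 10.2, 10.3, 10.4, 10.5]   # hypothetical readings
print(cusum(measurements, target=10.0))
# the steadily rising series signals a maintained upward drift in the process
```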

In carrying over the cusum method from industrial quality control to stylometry, Morton and then Farringdon retain most of its characteristic features -- including the cumulative measurement of deviations. This carry-over is based on the premise that the invariant irreducible traits of an author's language use can be defined and measured quantitatively as deviations from a standard (simply calculated as the mean, or average, for the sample under analysis). First and foremost, Farringdon takes the number of words in a sentence as a trait to be measured in terms of deviation. But in addition, she takes what she characterises as consistent language 'habits' -- including an author's use of two and three letter words, and words beginning with a vowel -- as invariant irreducible traits to be measured.

Measuring sentence length is a routine procedure, but the other language 'habits' are controversial and have generated several criticisms. Farringdon defends the analysis of the above language 'habits' because they are frequent and consistent language traits, which means that they can act as a constant discriminator of an author's style: 'A sophisticated analysis for language attribution purposes must be based on regular and recurrent usage which is both very frequent while being unconscious to the user' (p. 8). In Chapter 11 Morton also defends the analysis of frequent habits as a measure of authorship: 'The most frequent words in a language have proved to be good discriminators between writers and speakers. The most frequent words in a language are mostly short words and so an obvious habit to examine is the use of words of two or three letters' (p. 280). Moreover, these two and three letter words are primarily function words, whose use is independent of context and content. At first it may seem odd to distinguish writing style by analysing an author's consistent use of frequent function words, which he or she is barely conscious of choosing. Nevertheless, as Morton makes clear, these words offer the stylometrist a common point of comparison: 'A test of authorship is some habit which is shared by all writers and is used by each at a personal rate, enabling his works to be distinguished from the works of other writers' (p. 274). The importance of function words shows that authorship is not determined by analysing an author's carefully chosen content words (which can easily be picked up and imitated by other authors), but by the least conscious categories of words an author uses -- words used regardless of the author's subject matter or the context in which they were written.

An author's style can therefore be identified by means of his or her consistent use of particular language habits in a particular quantity. The first step in cusum authorship attribution is to identify the correct habit(s) that an author uses consistently and regularly. Farringdon points out that a correct habit to analyse is one that occurs about 45-55% of the time in each sentence. The lower and upper limits are 35% and 70%. Below 35% homogeneity is hardly possible, and above 70% it is inevitable.
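As an illustration of this screening step, the following sketch (mine, not Farringdon's procedure) estimates the per-sentence rate of the combined two-and-three-letter-word and initial-vowel-word habit, on the assumption that the percentages refer to the proportion of words in a sentence that belong to the habit category.

```python
# Hypothetical screening helper (not Farringdon's own procedure): compute the
# proportion of words in each sentence that belong to the candidate habit, to
# judge whether it falls in the useful 45-55% band (hard limits 35% and 70%).

def is_habit(word):
    """Combined habit: two- or three-letter words, or words starting with a vowel."""
    w = word.strip(".,;:!?'\"()").lower()
    return len(w) in (2, 3) or (w != "" and w[0] in "aeiou")

def habit_rate(sentence_words):
    """Fraction of words in one sentence that count as habit words."""
    return sum(1 for w in sentence_words if is_habit(w)) / len(sentence_words)

sentences = [
    "Stylometric analysis quantifies an author's habitual use of frequent words".split(),
    "Function words appear in almost every sentence regardless of topic".split(),
]
for s in sentences:
    print(round(habit_rate(s), 2))   # values near 0.45-0.55 suggest a usable habit
```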

The cusum method is said to determine authorship by successively adding up the deviations of sentence length and language habits in a given sample (deviation from the mean sentence length and mean number of habit usages in the sample under analysis). These deviations are then plotted on a graph. Farringdon's clear summary runs as follows:

The initial counting, of the number of words in each sentence and the number of 'habit' words in each sentence, provides the primary data. Calculations are made of the averages of sentence length and of occurrences of the 'habit' words per sentence, then for both sentence length and habit [the analyst calculates] the deviations from these averages. A further counting (by cumulative sum) of these deviations for each sentence provides the final data necessary. (p. 16)

Farringdon makes the cusum method even clearer by spelling it out as a four-stage procedure (a schematic rendering in code follows the quotation):

... the four stages by which the values of each QSUM test are calculated:

i. counting the length of each sentence in the sample and the number of habit occurrences within each sentence;

ii. making a cumulative, or running sum of these figures;

iii. finding the average and calculating how each sentence and habit within the sentence deviates from the average

iv. making a cumulative sum of these deviations. (pp. 17-18)
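Translated into code, the four stages might look something like the following sketch -- my illustration, not the authors' software. It produces the two series that are plotted as the cusum chart: the cumulative sum of sentence-length deviations and of habit deviations (here the combined two-and-three-letter-word and initial-vowel-word habit). Stage ii, the running sum of the raw counts, is treated as implicit bookkeeping within the final loop.

```python
# A schematic rendering of the four QSUM stages (my illustration, not the
# authors' own software). For each sentence it computes the cumulative sum of
# sentence-length deviations (qsld) and of habit-word deviations (the combined
# two-and-three-letter-word / initial-vowel-word habit, 23lw+ivw).

import re

def split_sentences(text):
    """Crude sentence splitter, for illustration only."""
    return [s.split() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

def is_habit(word):
    w = word.strip(".,;:!?'\"()").lower()
    return len(w) in (2, 3) or (w != "" and w[0] in "aeiou")

def qsum_series(sentences):
    # stage i: per-sentence counts of words and of habit occurrences
    lengths = [len(s) for s in sentences]
    habits = [sum(is_habit(w) for w in s) for s in sentences]
    # stage iii: averages, and each sentence's deviation from them
    mean_len = sum(lengths) / len(lengths)
    mean_hab = sum(habits) / len(habits)
    # stage iv: cumulative sums of those deviations (stage ii is implicit here)
    qsld, qs_habit, run_l, run_h = [], [], 0.0, 0.0
    for length, habit in zip(lengths, habits):
        run_l += length - mean_len
        run_h += habit - mean_hab
        qsld.append(run_l)
        qs_habit.append(run_h)
    return qsld, qs_habit

sample = ("The cusum method adds up deviations sentence by sentence. "
          "Each sentence contributes its own deviation to the running total. "
          "Long sentences push the curve up and short ones pull it down.")
qsld, qs_habit = qsum_series(split_sentences(sample))
print(qsld)       # plotted together, these two series form the cusum chart
print(qs_habit)
```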

Farringdon then goes through a sample of thirty-one sentences of her own writing stage by stage until she has produced two line graphs and a QSUM chart (Figures 1, 2, and 3 (pp. 35-36), reproduced here).

Figure 1.
The first line graph (Figure 1) represents the cumulative sum of sentence length deviations (or qsld for short). Along the horizontal or x-axis one finds the sentences represented in sequence, while the vertical or y-axis represents the cusum of sentence length deviations. The zero, representing the mean (or the standard), appears half way up the y-axis because sentence length can of course deviate both positively and negatively from the mean. If all the sentences in Farringdon's example equalled the mean, then the line of the empty squares would be straight and parallel to the x-axis, signifying zero deviation (or total quality control in manufacturing terms). But of course, sentence lengths deviate from the mean, and we can see from the chart the successive deviations of sentence length as they are added up (or cumulatively summed) sentence by sentence.

Figure 2.
Figure 2 carries out the same process for the language habits. Farringdon has chosen to combine two tests: cusum deviations of two and three letter words (qs23lw), and the cusum deviations of initial vowel words (ivw).

Figure 3.
Figure 3 simply combines the two graphs, which constitutes a cusum chart. It represents the cumulative sum of deviations of an author's invariant irreducible traits, which is called throughout the book an author's 'QSUM fingerprint'.

According to cusum stylometrists, author attribution is achieved by interpreting the way the two graphs relate to each other. The controversial premise of the cusum chart is that the cumulative sum deviations of sentence length and of the regularly used habit remain consistent throughout a writer's work, and will therefore track one another closely in the cusum chart. This, at first, sounds quite plausible, since a longer (shorter) sentence will employ proportionally more (fewer) habit words, and so when a long (short) sentence deviates from the norm, the habits will automatically deviate to the same degree. This hypothesis is rigorously tested in Analysing for Authorship through a variety of samples, although why it works is not clearly specified. Farringdon simply writes: 'That [the QSUM method] is capable of differentiating individuals is a fact to be explored by those competent in psycho-linguistics' (p. 48). This issue is addressed again in Chapter 10.

But how does the cusum chart aid author attribution? Farringdon emphasises that the chart can work in a variety of ways. First, the cusum chart of a single sample, as with Figure 3, can show whether it was written by one author, or by several authors. If written by a single author, the premise is that the two graphs will closely track one another, since they represent the regular and consistent habits of a single author. (The shape of the two graphs is therefore more important than the values they represent.) However, if the text was written by more than one author, then the two graphs will diverge, because they will be averaging out the cumulative sum deviations of two different authors, which, because of their different values, interfere with one another.

Another way of using the cusum method is to compare two samples of writing. This involves combining the results of two sample analyses into a single cusum chart. One set of results derives from the anonymous text, the other from a writer hypothesised to be the author of the anonymous text. If the sentence length and habit graphs of the combined chart closely track each other, then the anonymous text is, with a high degree of probability (at least for the cusum stylometrists), by the author of the first sample, since the averages of the two samples complement one another rather than interfere with each other.
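A hypothetical continuation of the earlier sketch shows this comparison step in schematic form: the sentences of the two samples are pooled, the cusum series are recomputed over the combined sample, and the chart is inspected for divergence. The numeric 'gap' below is only a crude stand-in for the visual judgement the book describes; the text samples and the rescaling are my own assumptions.

```python
# Hypothetical illustration of the comparison step (not the authors' code),
# reusing split_sentences and qsum_series from the earlier sketch. The two
# samples are pooled, the cusum series are recomputed over the combined
# sample, and the chart is inspected to see whether the sentence-length and
# habit graphs still track one another.

known_text = ("She wrote in long, winding sentences. "
              "Every clause carried another aside along with it. "
              "The habit never seemed to leave her for a moment.")
anonymous_text = ("Short lines. No asides at all. "
                  "The style is altogether plainer here.")

combined = split_sentences(known_text) + split_sentences(anonymous_text)
qsld, qs_habit = qsum_series(combined)

# The book judges homogeneity by eye from the plotted chart; as a crude
# numeric proxy, rescale each series to its own range and look at the gap.
scale_l = max(abs(v) for v in qsld) or 1.0
scale_h = max(abs(v) for v in qs_habit) or 1.0
gaps = [abs(l / scale_l - h / scale_h) for l, h in zip(qsld, qs_habit)]
print(round(max(gaps), 2))   # a large maximum gap suggests divergence, i.e. mixed authorship
```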

To sum up, cusum stylometrists interpret a homogeneous chart -- one in which the two graphs closely track one another -- as a sign of individual authorship within the sample analysed. This applies whether a single sample has been analysed (to establish whether it is indeed by a single author) or when two or more samples are combined in the same chart, since single authorship will produce homogeneous charts however many samples of the author's writings are combined, whereas mixed authorship will produce a separation of the chart's two graphs.

One strength of the cusum method of authorship attribution is that it is particularly adept at detecting trends, such as consistent deviations of sentence length and language habits. This reduces the significance of deviations caused by random one-off events that take place in a small time span, and instead focuses attention on changes over long periods. Such maintained, long-term changes have a high probability of being assignable to a stable external cause, such as an author. If the cusum method were more sensitive to short-term trends, the possibility would arise of giving importance to trends created by chance and randomness.

Chapter 3 reports on an early test case for the cusum method -- to determine if D.H. Lawrence was the probable author of a newly discovered short story called 'The Black Road'. As the first part of the control group, Farringdon analysed four samples known to be written by Lawrence. Using sentence length deviations and the habits 23lw and ivw (two and three letter words and initial vowel words), Farringdon came up with a consistent cusum chart -- that is, a graph of cusum sentence length deviations and of the habits closely tracking one another. This shows that 23lw+ivw is a good discriminator of Lawrence's writing.

For the second part of the control group, Farringdon analysed a sample of Huxley's writing, produced a consistent cusum chart from the result, and then combined it with the Lawrence sample. The combined sample produced, as expected, a divergent graph, because of its mixed authorship (both authors have a distinct QSUM fingerprint, and the different values interfere with each other). When Farringdon combined the results of an analysis of 'The Black Road' with the D.H. Lawrence sample, she ended up with a divergent cusum chart. This indicated that the combined sample is of mixed authorship, proving (to the cusum analyst at least) that 'The Black Road' was not written by D.H. Lawrence.

In Chapter 4 Farringdon analyses the work of Muriel Spark and Iris Murdoch. These authors were chosen because of the variations within their writing. In combining the results of cusum analyses of Murdoch's essays and novels, Farringdon found that her linguistic habits in these different genres remain consistent. In the case of Spark, Farringdon compares her work written before and after her conversion to Catholicism, when she says that her writing changed significantly. The combined cusum charts of Spark's writing before and after her conversion show consistency. Farringdon is careful to emphasise that Spark is not in any way mistaken in her belief that her writing changed; Spark's self-analysis is a qualitative evaluation, whereas the cusum analysis identifies quantitative consistencies in an author's work, an 'underlying permanent structure of unconscious language usage' (p. 86).

In Chapter 5 the cusum method is put to the test to identify the anonymous translator of Gustavus Alderfield's History of Charles XII (1740). The problem is whether one can identify an anonymous translator's cusum fingerprint, or underlying invariant traits of language usage. Because the cusum method focuses on function words, rather than content words such as nouns and verbs, the problem should be as solvable as any other authorship attribution problem, for the subject matter of the translated text will not interfere with the translator's habitual selection of function words.

As authorship attribution is based on a comparison of samples, the stylometrist needs to establish and then test a hypothesis concerning who the author might be. The translation of Alderfield's book has tentatively been attributed to Henry Fielding, and Farringdon set out to test this hypothesis. She combined a sample from the translation with a sample from Fielding's novel Joseph Andrews, which resulted in a homogeneous cusum chart, strengthening the hypothesis that Fielding is the translator. To verify the result, Farringdon established a control group consisting of samples from other eighteenth-century authors -- Amhurst, Smollett, Swift, and Steele. By combining each sample with the translation, Farringdon ended up with divergent cusum charts, ruling each out as the translator, since a divergent chart signifies mixed authorship.

Chapter 6 investigates children's utterances through an analysis of Helen Keller's published letters. Farringdon conducts her investigation by testing three hypotheses:

i. whether Helen Keller's utterance can be shown to separate from writing of another child of a similar age;

ii. whether the eight-year-old Helen remains consistent with the Helen as a mature adult;

iii. whether she clearly separates from another adult -- say, her teacher, Anne Sullivan. (p. 139).

The tests, as is to be expected, involve combining the results of two samples to see if they produce a homogeneous cusum chart or a divergent chart. Testing the first hypothesis by combining a letter by Keller with a letter by another child resulted in a divergent chart, which shows mixed authorship of the two utterances, suggesting that cusum analysis can distinguish one child's writing from another.

To test the second hypothesis, Farringdon combined the results of two of Keller's letters, one written as a child, the other as an adult. The cusum chart combining these two samples is homogeneous, signifying single authorship.

The third hypothesis is tested by comparing a letter written by Keller as a child with the writing of an adult, her teacher Anne Sullivan. The combined cusum chart diverges, indicating mixed authorship. Farringdon tests the writing of other children to confirm the results. These results included the writing of twelve-year-old twins, which Farringdon was able to distinguish (p. 150). For Farringdon, these results validate cusum analysis in attributing authorship to children's writing, and suggest that writing habits (at least those tested by cusum analysis) remain consistent from childhood to adulthood.

Chapter 6 ends by detecting 'plagiarism' in one of Helen's childhood letters: 'At eight, [Helen] 'encountered' a book of stories by a children's author, Margaret Canby. Two of these stories impressed themselves in language structure, form, imagery, figure of speech, so strongly on her young mind that she was able to re-produce versions of them unconsciously, at the age of ten, as her 'own' stories' (p. 161). A cusum chart combining two of Helen's letters, together with the plagiarised story she wrote, produces a divergent cusum chart. In a separate test the two letters by themselves produced a homogeneous cusum chart, clearly showing that the story created the divergence. Finally, Farringdon combined Helen's story with Canby's original, which resulted in a homogeneous chart. While 'Helen's' story does not match her own letters, it nonetheless matches Canby's story.

In Chapter 7 Farringdon applies the cusum method to three attribution problems, and to the analysis of dialect in fiction. Farringdon's combined cusum analysis of four different dialects from Huckleberry Finn resulted in a homogeneous cusum chart, showing that the dialects derive from the same author, Mark Twain.

The three attribution problems relate to Martin Battestin's attribution of forty-one pseudonymously written essays to Henry Fielding (Battestin 1989), to the famous problem of who wrote The Federalist papers, and to The Diary of Gerald Keegan. Battestin attributed the forty-one essays to Fielding by standard stylometric tests such as collocations, pairs of words, and preferred position of words. Farringdon tested the attribution independently via the cusum method. In Chapter 7 (pp. 177-183) she publishes the result of a test on one of the essays, which confirms Battestin's attribution.

The Federalist papers were published during 1787-8 in various newspapers to convince New Yorkers to ratify the American constitution. Published under a pseudonym, the papers were collectively written by James Madison, John Jay, and Alexander Hamilton. Historians have managed to attribute most of the essays to one or other of the three authors, with the exception of twelve, whose authorship remains disputed between Hamilton and Madison.

Numerous studies of the Federalist papers have been carried out since the sixties, many of which have also significantly advanced the stylometric approach to authorship attribution. Mosteller and Wallace (1964) used Bayesian statistics, which favoured Madison as the author of the disputed papers, with one paper thought to have been coauthored by Hamilton and Madison. More recently, Tweedie, Singh and Holmes (1994) employed a neural network model, which also favoured Madison. And Holmes and Forsyth (1994) used genetic algorithms, a method that also came out in favour of Madison.

Despite the status of the Federalist papers as the classic example of authorship attribution, Farringdon does not discuss their history. She only mentions Mosteller and Wallace's study, together with a study by Michael Farringdon and A.Q. Morton (1990). In the latter study, Morton analysed the jointly written Federalist paper using the cusum technique, to determine which author wrote which sections of the paper. Morton concluded that Hamilton wrote the beginning and the ending, and Madison the middle -- except one sentence, which Morton was able to identify and attribute to Hamilton (this sentence created a divergence in the cusum chart for the central part of the essay, but was shown to be homogeneous with the cusum chart for the beginning and ending of the essay).

The final problem of authorship attribution addressed in Chapter 7 is that of an entire work, The Diary of Gerald Keegan: 'The nature of the dispute was, of course, that academics had disputed the origin of the diary, maintaining that it was entirely a fictitious work by the Canadian publisher Robert Sellar, instead of an eyewitness first-hand account, which had been sought, found, and then published by Sellar, of an emigrant's voyage in 1847 from Ireland to Canada' (p. 186). After extensive testing and cross checking, Farringdon concludes that Robert Sellar wrote the diary.

In Chapter 8, Michael Farringdon (in the first of two contributions to Analysing for Authorship) writes a general account of the procedures and problems involved in using the cusum method in a legal context, where it was first used by A.Q. Morton in 1991, and subsequently by Michael Farringdon and M.D. Baker. The method has so far only been used for the defence in an attempt to question the prosecution's claim that a statement or other documents (forgeries, recordings of anonymous telephone calls, etc.) were made by the defendant. Moreover, the cusum technique can only speak of the origin of utterances, not their truthfulness.

Some practical problems encountered in using the method in a legal context involve obtaining control texts (such as authentic writing samples made by the defendant with which to compare and contrast the disputed sample), and the time permitted to analyse the samples. Unlike academic research, the analysis of samples in a legal context needs to be carried out according to strict deadlines, and usually involves the minimum amount of testing, due to the lack of time and the cost to the defendant. Furthermore, problems arise when analysing authentic transcripts of police interviews, due to errors in the transcription process (spelling, punctuation, inaudible voice, missing text). However, the cusum method of authorship attribution remains the same in the legal context as in the academic context: the disputed sample must first be analysed separately. Subsequently, charts combining the disputed sample and the control sample are produced. If a divergent result is produced from the tests, then the cusum analyst expresses the opinion to the defendant's solicitor, and if necessary in court, that the author of the control texts did not write the disputed texts. However, if the resulting cusum charts are homogeneous, then the cusum analyst is unable to offer evidence in favour of the defence.

Chapter 9 is coauthored by Jill Farringdon and M.D. Baker. In the first half Farringdon examines whether cusum analysis produces consistent results with speakers of regional accents and with non-native speakers. First, Farringdon quickly dismisses the influence of accent on cusum analysis: 'accent does not alter syntax, word-order or vocabulary, and the same standard English passage read in local accents by speakers from different parts of the country would obviously yield an identical cusum analysis' (p. 210).

But can the cusum method analyse for consistency in samples from a speaker who has an imperfect grasp of English? Farringdon looks at an example from a legal context involving a non-native English speaker whose grasp of English improved over time. The problem is whether the non-native speaker's utterances display consistent habits over a period in which his or her language competence becomes more sophisticated. After testing several samples, Farringdon finds that the speaker's later, more sophisticated utterances contain the same consistency as the earlier, less sophisticated utterances. Farringdon concludes that cusum analysis is applicable to the utterances of non-native speakers.

In the second part of the chapter, M.D. Baker presents a case history that considers whether education, intelligence, and social background create samples that offer exceptions to the cusum method. As should be evident by now, the integrity of the cusum method is dependent on identifying a consistent, invariant fingerprint regardless of factors such as age, education, and social background. Baker's fairly lengthy case study of four prisoners' letters (pp. 219-238) again confirms the validity of the cusum technique in a variety of contexts.

Michael Farringdon's experience of the adversarial legal system in Britain comes in useful in Chapter 10, as he attempts to discredit the authority and credibility of those who have criticised the cusum method. Farringdon identifies four areas of criticism: that the cusum method does not agree with common sense; that no theoretical basis for the method has been advanced; that no standard statistical measures are used to compare the two sets of data; and that the method does not work (p. 239). These criticisms have variously been advanced by David Canter (1992), R.A. Hardcastle (1993), Michael Hilton and David Holmes (1993), and Anthony Sanford et al. (1994), together with several legal reports written by Canter and by Hardcastle.

Farringdon's first response is to remind his critics that the cusum method is designed for the attribution of authorship to a small sample of text (ideally fifty sentences), since it involves the intense and precise examination of samples. Much larger samples require different statistical tools. Farringdon dismisses the rather odd criticism (advanced by Canter) that the cusum method does not conform to common sense. Clearly, almost all research carried out in both the humanities and the hard sciences could be dismissed on the basis of this narrow and normative criticism.

The criticism that the cusum method has no theoretical basis is more valid, although Farringdon reminds the reader that many scientific discoveries have worked without at first having been theoretically justified (Kepler's hypothesis that Mars has an elliptical orbit is a typical example). For Farringdon, the proof that a method is valid is that its results are reproducible by others in repeated experiments.

The third objection is more technical than the others. The objection focuses on the technique of identifying discrepancies in the data by visual means only -- that is, by looking at the resulting cusum chart. Farringdon's response is that 'we have successfully tested both a measure of probability for the cusums -- the 'Vmax-test' using 'weighted cusums' -- and a t-test for comparing two text segments. ... Their results have only served to confirm the analysis of comparing two cusum charts by eye' (p. 242). Farringdon briefly explains these terms later in the chapter, and makes reference to the work of A.F. Bissell on weighted cusums (Bissell 1995).

In contesting the fourth objection, Farringdon questions the ability of his critics to use the method and interpret the results correctly. Farringdon then examines the papers of Canter, Hardcastle, and Sanford et al. one by one in what becomes a pithy and acrimonious demolition of his adversaries.

Throughout Analysing for Authorship, the tone of the authors is defensive, since they are fully aware of the way the cusum method -- and the quantitative analysis of style in general -- will be perceived by humanities scholars. Stylometrists are frequently accused of being mechanical in their adherence to a statistical analysis of literary and other texts, an analysis that seems to be insensitive to the literariness of literature. In the Preface Farringdon writes: 'It should be understood, however, that [stylometry] would have nothing to do with 'style' in any literary sense. This will reassure professionals in the literary field whose skills and expertise are directed towards quite other ends and aims in the realm of literary value' (p. ix). She emphasises that the aims of literary theory and stylometry are distinct. The methodology one chooses should simply be a means to answering a series of pre-defined questions and testing hypotheses. Literary theorists study literary value and interpret texts of known authorship, which can best be achieved via qualitative methodologies. One of the aims of stylometry is authorship attribution, a problem best addressed via a quantitative methodology. There is no conflict here, since each methodology is extracting different data from written texts to answer different questions.

Yet the cusum method is not a wholly quantitative method for attributing authorship, because the results -- the cusum charts -- require qualitative assessment and interpretation. Separation of the two graphs does not automatically mean that the sample is of mixed authorship. The cusum chart is based on sentence length, and Farringdon emphasises that anomalous words, half sentences, direct quotations, and long sentences need to be edited or omitted from the sample. Names and addresses are a matter of pure chance, rather than a characteristic of the writer's style, and lists are anomalous syntactic structures that may cause a separation in a chart. The first step after detecting a separation in the chart is therefore to go back to the sample to see first of all if the separation is caused by an anomaly. Only after the sample has been checked for anomalies can the cusum analyst begin thinking about mixed authorship.

Unlike literary theorists' attempts to establish a science of literature, the stylometrists' attempt to establish a science of authorship attribution is not simply a matter of gaining academic respectability, but has valuable practical consequences. The stakes are high since the scientific status of the cusum method will determine whether or not it is used in law courts as a forensic test. At least one lingering doubt remains in my mind: Is the cusum method measuring an individual's consistent language habits, or is it simply describing an arithmetical property of language? Saussure was troubled by a similar problem when he 'discovered' in Latin poems anagrams of their authors' names. (As with his Course in General Linguistics, Saussure never published his research into anagrams; Jean Starobinski published fragments of Saussure's notes in Words upon Words; see Starobinski, 1979). The anagram is a form of double writing, which works against the linearity of the signifier, and instead focuses on the nonlinear matrix of connections between signs. At first Saussure thought the anagrams were a covert but deliberate literary device, but after finding them everywhere, he could not determine whether they were deliberate or whether their 'discovery' was simply due to the inherent structure of language. He confessed: 'I make no secret of the fact that I myself am perplexed -- about the most important point: that is, how one should judge the reality or phantasmagoria of the whole question [of the intentionality of the anagrams]' (Saussure, quoted in Starobinski 1979, pp. 105-6). Saussure was unable to decide and almost went insane thinking about the issue, before finally abandoning it in favour of his lectures on general linguistics. As to whether the cusum method is able to fingerprint authors, or whether it only measures arithmetical properties of language, the jury is still out.

References

Battestin, M. (1989), New Essays by Henry Fielding, with a Stylometric Analysis by Michael G. Farringdon (Charlottesville: University of Virginia Press).

Bissell, A.F. (1995), 'Weighted Cusums for Text Analysis Using Word Counts', Journal of the Royal Statistical Society, 158, 3, pp. 525-45.

Canter, D. (1992), 'An Evaluation of the "CUSUM" Stylistic Analysis of Confessions', Expert Evidence, 1, 3, pp. 93-99.

Farringdon, M., and A.Q. Morton (1990), Fielding and the Federalist (University of Glasgow, Department of Computing Science).

Hardcastle, R.A. (1993), 'An Assessment of the Cusum Method for the Determination of Authorship', Journal of the Forensic Science Society, 33, 2, pp. 93-99.

Hilton, M., and D. Holmes (1993), 'An Assessment of Cumulative Sum Charts for Authorship Attribution', Literary and Linguistic Computing, 8, 2, pp. 73-80.

Holmes, D., and R.S. Forsyth (1994), 'The Federalist Revisited: New Directions in Authorship Attribution', Literary and Linguistic Computing, 10, pp. 111-27.

Mosteller, Frederick, and David Wallace (1964), Inference and Disputed Authorship: The Federalist (Reading, Mass.: Addison-Wesley).

Sanford, A., J.P. Aked, L.M. Moxey, and J. Mullin (1994), 'A Critical Examination of Assumptions Underlying the Cusum Technique of Forensic Linguistics', Forensic Linguistics, 1, 2, pp. 151-167.

Starobinski, Jean (1979), Words upon Words, trans. Olive Emmett (Yale University Press).

Tweedie, F.J., S. Singh and D. Holmes (1994), 'Neural Network Applications in Stylometry: The Federalist Papers', in A. Monegham (ed.), Proceedings of CS-NLP 1994 (Natural Language Group, Dublin University).

Warren Buckland is a lecturer in Screen Studies at Liverpool John Moores University, UK. He is editor of The Film Spectator: From Sign to Mind (Amsterdam University Press, 1995), author of Film Studies (NTC Press, 1998) and The Cognitive Semiotics of Film (Cambridge University Press, 2000), and has published articles and reviews in Kodikas/Code, Semiotica, Screen, and Quarterly Review of Film and Video.

