Unlike the genetic code, language is a learned code, and in this arena of human activity, as in all other human endeavor, errare humanum est. Error, moreover, is exclusively within the human realm, having no direct counterpart in nature despite having a natural history. Part of that history, when it comes to language, as with all social codes, is imperfect learning.
Children routinely make mistakes when learning their native language, and the degree to which their mistakes are rooted out by parents and other adults (and older children) in part determines the lineaments of linguistic change. Adult native speakers with the requisite amount of education can be reckoned to have a more or less complete command of their language, the range of completeness varying with factors such as book learning or technical knowledge, by which syntax––and particularly vocabulary––can continue to be expanded over the span of one’s entire life.
But even adult speakers make mistakes that are the product of imperfect learning. This is evident to anyone who makes a special point of observing how people speak (and write).
The opportunity to observe imperfect learning has been considerably expanded by modern media. One hears many voices on the radio using English either as a native language or a lingua franca, and one need not listen long before hearing a mistake.
Frank Deford, whose commentaries on sports are heard weekly on National Public Radio, is described as a writer with many books and essays to his credit. Nevertheless, in commenting on college football (“Morning Edition,” KPCC 89.3, Pasadena, Jan. 7, 2009) he uttered the solecism “strange duck” instead of “odd duck” (odd is apt here not simply because it is the traditional epithet but because of the repeated [d] that led to these two words being juxtaposed in the set phrase odd duck). One cannot blithely ascribe this error to a writer’s penchant for creative idiosyncrasy: it’s a mistake tout court.
Foreigners who resort to English as a lingua franca, no matter how fluent, are especially prone to mistakes that arise from imperfect learning. Thus the Israeli novelist Amos Oz, whose thick accent belies a near-perfect command of English syntax and vocabulary, when interviewed on National Public Radio (“Morning Edition,” KPCC 89.3, Pasadena, Jan. 7, 2009) used the solecism “uprise” (obviously but nonetheless erroneously back-formed from the noun uprising) as if it were a verb of English. Such instances of imperfect learning can even encompass the most hackneyed items: Mr. Oz also changed at the end of the day to “in the end of the day.” Interestingly, he closed his side of the interview by demonstrating a tacit solidarity with contemporary American English grammar, uttering the erroneous “Thanks for having me,” i.e., omitting the postposition on––a linguistic phenomenon that has reached near ubiquity in the cloyingly unctuous etiquette of radio interviewees.
MICHAEL SHAPIRO
Why prefer the term ‘legisign’ to the more familiar notion of the code? After all, the concept of the code has figured prominently in many previous introductions to sign analysis and interpretation. The Collins English Dictionary offers the following concise definition: ‘Code: A system of letters or symbols, and rules for their association by means of which information can be represented or communicated for reasons of secrecy, brevity, etc.’ Or consider this eminently sensible definition by Colin Cherry: ‘The term code has a strictly technical usage…. Messages can be coded after they are already expressed by means of signs (e.g. letters of the English alphabet); a code is an agreed transformation, usually one to one and reversible, by which messages may be converted from one set of signs to another…. In our terminology, then, we distinguish sharply between language, which is developed organically over long periods of time, and codes, which are invented for some specific purpose and follow explicit rules’ (On Human Communication: 8). However, in addition to notions of secrecy and brevity, the term has many other meanings, which range from complex civil codes like the Napoleonic or Justinian Codes, through ‘codes of conduct’ such as the British Highway Code and the Italian Galateo, to relatively simple codes like the Morse, ASCII and hexadecimal machine codes, and there are, it seems to me, a number of reasons for preferring Peirce’s term to the more familiar one.
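Before turning to those reasons, Cherry’s ‘agreed transformation, usually one to one and reversible’ can be made concrete. The following minimal sketch (in Python, using a deliberately truncated excerpt of the International Morse table, chosen purely for illustration) shows what such biunivocality amounts to: because each letter has exactly one Morse equivalent, the mapping can be inverted and every encoded message recovered intact.

    # A toy illustration of Cherry's definition: a code as a one-to-one,
    # reversible transformation between two sets of signs.
    # The table is a small excerpt of International Morse, for illustration only.
    MORSE = {"A": ".-", "E": ".", "O": "---", "S": "...", "T": "-"}

    # Because the mapping is one-to-one, inverting the dictionary yields
    # a decoder that recovers the original message exactly.
    INVERSE = {morse: letter for letter, morse in MORSE.items()}

    def encode(message: str) -> str:
        """Transform letters into Morse signs, separated by spaces."""
        return " ".join(MORSE[letter] for letter in message)

    def decode(signal: str) -> str:
        """Apply the inverse transformation to recover the letters."""
        return "".join(INVERSE[sign] for sign in signal.split(" "))

    assert decode(encode("SOS")) == "SOS"  # the round trip is lossless

Nothing comparable holds for a natural language, of course, which is precisely the point at issue in what follows.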
To begin with, the notion of the code considered as an interpretive semiotic system of one-to-one correspondences between code unit and value first gained credence with Barthes’s work on myth and pictorial rhetoric, and therefore belongs to a Saussure-inspired, structuralist approach to pictorial data, which I find incompatible with the Peircean approach to signs. However, there are less parochial reasons for preferring Peirce’s term. For example, the types of NVC data to be found in most pictorial representations of human activity tend to be scalar in nature, and in this way frustrate a search for exact term-to-term correspondences between the legisigns and what they represent. More importantly, the very notion of a code has a touch of atomism about it; that is, the term suggests that ultimately the basic code units can be associated with corresponding basic meaning units within some biunivocal relation. Such a suggestion is plausible in the case of the Morse code, for example, but is problematic when we consider verbal and pictorial signs. To see the import of this, consider now the philosopher Ernest Gellner’s comments concerning the logical atomism informing Wittgenstein’s Tractatus:
As can be seen, Wittgenstein elaborated the idea [of a biunivocal correspondence between the atoms of language and the atoms of the world] into a kind of Mirror theory of meaning and language. In a sense, the idea is valid, and had indeed been one of the bases of communication theory. A code can only communicate information concerning the same number of possible objective alternatives as happens to be the number of its alternative possible messages (and then it can be said to mirror them). Any greater richness in the world cannot be conveyed by it. On the other hand, any greater richness in the code (i.e. more signs than are necessary for the number of alternative messages liable to be conveyed by it) is redundant.
(Words and Things, 1959: 93, emphasis added)
In other words, and simplifying considerably, it is sometimes suggested, as Wittgenstein did in the Tractatus, that such complex signs as the sentences of human languages can be broken up and their ultimate constituents made to enter into a term-to-term relation with the extralinguistic ‘atoms’ they are held to represent. If all sentences were like this, then it might be feasible to talk of natural language in terms of a code. However, this is simply not the case: some types of sentences are less ‘rich’, i.e. less complex, than the objects they represent; in other words, they are underdetermined with respect to those objects. Nor is it the case that in all images there is a biunivocal relation between the elements inscribed on their surfaces and the objects, protagonists or whatever, being represented. Reformulating Gellner’s argument, I would say that there is, in certain well-defined cases, a degree of ‘richness’ in the sign’s object which simply cannot be expressed biunivocally either by an image or by an assertion in any natural language. This problem comes within the purview of an ecology of signs.
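Gellner’s counting argument can be illustrated in the same spirit. The sketch below is hypothetical through and through (three invented world states, a two-message code), but it shows why a code that is less ‘rich’ than its objects must underdetermine them: by simple pigeonhole reasoning, at least two states receive the same message, and the receiver can no longer tell them apart.

    # A hypothetical illustration of Gellner's point: a code with fewer
    # alternative messages than objective alternatives cannot mirror them.
    world_states = ["rain", "snow", "sun"]  # three objective alternatives
    messages = ["0", "1"]                   # a code with only two messages

    # Any encoding must assign the same message to at least two states
    # (pigeonhole), so the message underdetermines the world.
    encoding = {"rain": "0", "snow": "0", "sun": "1"}
    assert set(encoding) == set(world_states)
    assert set(encoding.values()) <= set(messages)

    received = encoding["rain"]
    candidates = [s for s, m in encoding.items() if m == received]
    print(candidates)  # ['rain', 'snow']: the receiver cannot decide between them

The same asymmetry runs in the other direction: a code with more messages than there are alternatives to convey is, as Gellner notes, simply redundant.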
A further reason for abandoning the concept of the code in semiotic and linguistic analysis, it seems to me, is that there is something ideologically unwholesome and vaguely deterministic in the idea that there should be a pre-existent set of ‘interpretations’ valid for the totality of signs we encounter in our daily intercourse with the world, out there waiting to be discovered. This is the method of analysis championed by Culler in The Pursuit of Signs (1981) and adopted by Barthes, for example, in his Mythologies: the essay ‘The Romans in Films’, with its various interpretations of hairstyles, is an excellent example of the code approach, and of its limitations — the analyses proposed are sufficient, amusing, but idiosyncratic and in no way necessary.
Moreover, and this is perhaps the most compelling reason of all for abandoning the notion of the code, from a Peircean perspective the way a person interprets a given sign is a function of that person’s experience of the world, meaning that interpretation is differential, not uniform from person to person. Thus to equate a normal person’s semiotic activity — interpretation, deliberate and self-controlled ratiocination — with some sort of code-breaking, cipher-cracking process is to play down or neglect entirely the familiar and customary properties of signs and the habitual nature of much of our interpretation of the world. During most of our waking moments, and in most of our encounters with others, the vast majority of the innumerable signs that we have to assimilate and act in accordance with are, pace Noam Chomsky, signs we have already dealt with on prior occasions, albeit in different contexts. They are signs we are usually familiar with, which makes it hard to imagine that publicists and advertisers, to take a simple example, would fill their copy with specialized codings requiring painstaking investigation, as this would surely defeat their object. Thus while it would be both unwise and contentious of me to argue that we never encounter signs with a predetermined conventional meaning — codes, in other words — or that we never encounter enigmatic signs that require careful and deliberate consideration, it cannot be the case that all familiar systematic signs function in this way.
My preference, then, goes to the original Peircean term ‘legisign’, which has no atomistic implications and whose interpretation depends largely upon personal experience of the world and collateral knowledge of the object determining the sign, rather than to that of ‘code’, which, in many semiotic contexts, is a misnomer. This is no denial of the existence or importance of codes: a set of regulatory signs such as the British Highway Code is clearly a code, and a necessary one, as are the Morse and the ASCII codes. The point is that while all codes are legisigns, not all legisigns are codes.
Response to Tony Jappy:
While it is perhaps infelicitous to talk of language as a code, since human languages are not additive systems like an alphabet or the Morse code, there is nevertheless something to be gained from using the term code in such cases as, for instance, the “elliptic (sub)code” vs. the “explicit (sub)code” in phonology. What is meant here are two subsystems of phonetic traits whose occurrence depends on whether speech is informal, colloquial, and in allegro tempo, on the one hand, or formal, bookish, and in lento tempo, on the other––i.e., the parts that constitute the phonostylistics of a language. Beyond the utility of distinguishing linguistic phenomena in this way through the use of the term code, the term legisign, which is Peirce’s coinage for the type of sign that is a rule or law, has a very specific role to play in his theory of signs, where it is, moreover, one member of a trichotomy of sign types and not properly to be used to designate an ensemble of signs, which is what a code is.