Sunday, March 3, 2013

The presence of English in India at the crossroads chapter 5

Probal Dasgupta

Kumud Chandra Dutta Memorial Lectures 1997 (Dibrugarh University, Assam)

Published as ‘The presence of English in India at the crossroads’, pp 1-132, in Probal Dasgupta, Udayon Misra, Amaresh Datta (2002) English at Crossroads: The Post-Colonial Situation: Kumud Chandra Dutta Memorial Lecture Series, 1997-98. Guwahati: Students’ Stores.

Posted here chapterwise; this is the fifth of six chapters. In the text itself I call the chapters ‘sections’ and the sections ‘subsections’.

5. Against Naturalism

5.1 The dialexis principle

Some prefatory remarks first. Throughout this argument we address the practical task of working towards a healthy redistribution of knowledge resource functions in linguistic terms. We approach this problem at the level of trying to offer a theoretical adjunct to the other practical enterprises, while continuing to regard theorizing itself as one of these, since thinking too is a practice. The present section of the argument questions the naturalistic fixation of our thinking on a "natural" basis of cognition construed as entirely, purely independent of our cultural lives, and helps find a way to strive for a sustainable naturalness accountable to what we know about our culturalness.

Earlier work has shown, in our opinion quite clearly, that English is present in India as a vehicle of a certain classicalized naturalism. This string on this bow resonates more generally with tacitly related tensions in domains remote from language. This tension decomposes on vector analysis into two contrary resultants. These forecast two different ways the presence of English in India can turn after it moves past the crossroads it finds itself at. The classic-facing vector yields a classical or monumentalist resultant; the nature-facing vector projects a naturalistic resultant; it is the latter that many thinkers hope will prevail.

The existence of this hope is a useful factor, of some political importance. But we would like to sharpen the debate by inviting all participants to note one important danger. The nature-facing vector is subject to hijack by a certain forced marriage of this naturalism with an instant, ethnic-ghetto-making classicalization under the tutelage of the populist machine of the United Societies of Amusement. Considerations of true or sustainable naturalness only become available in a landscape shaped by an optionalization of technologies. We have been calling this optionalization Green, for convenience of reference. Such a Greening restores true classicality as an international inheritance, working through and across regions. This retrieval of the cognitive needs to swim against the current of the unregenerate, teacherly Enlightenment's monumentalist or Olympian demand for continuous Performance.

One of this century's most trenchant critiques of Performance as a directly examinable and complete "record" comes from classical generative grammar. In the present section of our argument we revisit generative grammar's classic rejection of Performance in favour of Competence as a more serious object of cognitive study. As we move towards a characterization of knowledge as locally and interlocally continuable lines of modifiable action, we propose to retain a performative moment in our account. For us, it is important that the performers we visualize -- who must exist -- should know, should understand, that freedom must take the form of an active recycling that modifies, concedes, accommodates its codes, as its discourse comes to terms with what we shall characterize in this subsection as the Dialexis.

The Enlightenment, as Green apprenticeships reinvent it on a reclothed planet, may wish to view learning in the following terms: whatever can be known, or can be repeatably learnt across time and space gaps, is a convergent but unbounded set of formal teloi. That these learnables hold good under appropriate idealizations can be shown by a (not always flesh and blood instantiable) fair teacher whose hologram image hovers as co-learning communities keep staging their joint learning transactions. This is the pragmatics of the game.

In the infrastructure of the game there will always be room for some players to want to set up a semiotics and seek deep naturalistic meanings already given, as the discoverably unique pretransactional truths. A naturalism vector of this sort will haunt any transaction as an Ibsen-ghost. Such vectors are best resisted by acknowledging that they are inevitable, and yet resistible.

With these preliminary remarks out of the way, we may now state the Dialexis Principle as the idea that any Lexis, if its words are to signify, must lie athwart the diachrony of paths already taken which embody as words. Pragmatically, words need to keep recharging from contexts perceived as providing significant novelty to ensure noticeability. At the same time, the charges flowing from contexts into words as intercontextual keys need to retain repeatedly recognizable shape as writables. This coupling of the contextual speakable and the textual writable constitutes the Dialexis account's version of the widely accepted view that there is no private language or, to put it in deconstructive terms, that there is no speaking without logically prior linguisticity qua writing.

Dialexis becomes dialectical when it throws up a question of the types of dependency that validly mediate between the complex and the simple in this coupling of contextual privacy and textual publicity. Can a mere criss-cross of friendship-cruisings reliably link past paths to present traversals? This is a question not about the cruisings but about reliability. The young are going to face the old as they embody the question of the handing over of power which forms part of any learning. We visualize this as a question of novelty taking the form of transcodal categorization. The teaching scene is a site of inheritance.

This takes us into the present exercise. We shall argue, in this section, that a competence must be seen as heterogeneous, that a knowledge is best formalized not as a system but as a flowing, non-essentializable trans-code. The present subsection seeks to articulate this goal -- of shifting in an anti-naturalist direction the metalinguistic assumptions driving the generative programme in grammatical research -- in syntactic terms. The particular formulations used here reflect the preferences in the minimalist discussions of economy and language design features. For inclusiveness, though, it is useful to remember that several other varieties of linguistic research converge on a broadly similar paradigm.

On our way to articulating our particular anti-naturalist position, it is useful to consider a rather different version of anti-naturalism -- the idea that the study of language cannot be continuous with the natural sciences. This opinion, reflecting a view of language certainly held by many lay people and probably also by most practising academics outside linguistics itself, amounts to the view that linguistic phenomena are human, and thus surely human-made or cultural, not nature-made or natural. This "artifact view of language" (AVL) is obviously one anti-naturalist position that one might hold. Why do few, if any, contemporary linguists care for AVL?

Empirically, AVL would lead to the prediction that language phenomena fall into the sorts of "untidy", culturally "packaged" patterns that better known artifactual phenomena in historical formations tend towards. This would make linguistics in its methods, findings, and concerns similar to the historical, literary and cultural disciplines in general.

Such an empirical prediction has never seemed accurate. In no century have grammarians and historians found their subject matters converging in type or in content. And today the divergence is even clearer. So the major prediction of AVL fails. This seems to be why AVL makes so little sense to most of us.

Now, certain linguists find the results of grammatical research similar to patterns familiar from the exact sciences. They do not just refuse to accept AVL; they distinctly oppose it. For them, a close look at language turns up phenomena of the type that the exact sciences study. So they take language to be either a natural phenomenon (the official generative stand) or a formal one (the Platonist minority view) unless shown to be otherwise.

This Exact Science View of Linguistics (ESVL) places linguistics among the exact sciences. Proponents of this view presume that research will eventually bridge whatever gaps now divide it from the better investigated fields. Now, linguistics claims exactitude by assuming generative grammar's homogeneous speech community idealization featuring speaker-hearers with perfect native command of the language and undistracted by finitude of memory or attention. This generative move closely parallels the social scientific idealizations that underlie abstract models of humans functioning according to the canons of some "perfect", exactly formalizable, rationality.

In the present section of our anti-naturalist argument, we revisit the homogeneous community of perfect speakers idealization as the basis of ESVL. As we propose to modify the operative idealization in generative grammar, by the same token we suggest an alternative to ESVL which is an anti-naturalism but does not lapse back into the obviously unviable AVL.

Thanks to long experience with generative grammar models, linguists understand by now both in theory and in practice what the homogeneous perfect speech community idealization brings into focus and out of focus. It highlights the fact that one's Linguistic Knowledge (LK) can be treated as a homogeneous patch of mind relatively independent of many other mental endowments. It ignores the fact that LK's systemic organization tacitly treats certain subknowledges as resting on foundational subknowledges. Thus we are tacitly committed to the assumption that a perfect user commands equally all parts of the language. It follows that a variety of expressions and derivations should be equally available to the perfect mind. But real native speakers consistently seem to find some expressions, some derivations, more readily constructible or accessible than others.

Of course, one can build models that treat the substance of LK as homogeneous for technical purposes. The point is not that current thinking literally compels us to abandon the perfection idealization or the notion that LK can be treated as homogeneous vis-à-vis other faculties. Rather, my point is that we should respond to a certain tension between two imperatives. The old imperative comes from the perfection idealization itself which makes us treat LK as a homogeneous substance. Such treatment ensures that all parts of a language are equally "easy" in some sense that should bear on the proper formulation of an economical theory of language design. And the new logic driving current considerations of economy seems to say: Linguistic material enables the construction of a variety of patterns. The linguistic knowledge module LK contemplates all these. But it admits or selects only the "optimal" ones, whose contemplation and use are computationally the cheapest.

Instead of resorting to the current talk of computational economy, I could rest my case also on early-parametric rhetoric. In any parametric framework, UG (Universal Grammar) takes principled and parametric responsibility for the unmarked Core Grammar CG(L). But LK for language L also harboured a marked Peripheral Zone PZ(L) rendered messy by Saussurean arbitrariness and other necessary imperfections. That architecture, too, made some parts of L easier on LK than others, even within the perfection idealization surrounding LK as a whole. One used to formalize differential naturalness in terms of markedness. This held the fort during the interregnum between the transformational simplicity-metric economy and the minimalist derivational-economy concepts.

The discussion so far may be summarized thus. Exact linguistics rightly wishes to give the theoretical form of an imagined friction-free LK to its intuition that LK stands as a strategic system of rational knowledge in its natural inner logic and serves as a tactics-contaminated deployment of this knowledge only in its interaction with other mental modules. But this rationality at its most rational, when considering the economy that drives it towards optimal use of resources within the strategic system, also discriminates between a more rational, regular, exact sector and a less rational, irregular, inexact sector of such an economy. This means drawing within LK a boundary of the type that one would expect to find only around LK.

Why do I think these considerations are going to lead to the idea that the general notion of a code will have to give way to a trans-code with internal heterogeneity written into the structure of a knowledge? Jumping ahead a little, as a matter of expository convenience, I think this because I find that the relation between harder (less readily accessible) and easier (more readily accessible) parts of an LK is best seen in terms of a fully diversified adult Classicality at the periphery of LK serving as a permanent query answering service to a basic and relatively undifferentiated Natural core playing the role of a permanent child or learner. In other words a knowledge is optimally stored not as a product but as a process of potential transmission of relatively difficult knowledge to an imagined questioning child whose standpoint is constituted by the relatively immediately accessible parts of the knowledge. This image puts a potential on-going dialogue at the heart of the stored knowledge system. The representation of language becomes not a box but a flow or a circulation, by the same token opening up the content of language to a certain history (an imagined development of the complexes from the simples -- no doubt in an etymographic or "folk-etymological" form as a misprision of "real" or archival etymologies) and to a certain social geography (an imagined relation whereby elites and other specialists controlling particular sublanguages hold resources in trust for the default core community that can ask queries whenever special expressions need to be used).

To see this point more clearly, let us move to a considera­tion of bilingual knowledge, where there is no doubt that hetero­geneity exists.

5.2 Bilingualism and representing difficulty

Let us take another look at LK. The old idealization which took a monoglot knowledge as prototypical emerged from the rule‑oriented period of generative work. LK(L‑i) took the form of knowledge of the rules of L‑i differentiating L‑i from any other language L‑j. This yielded a sharp boundary between LK(L‑i) and LK(L‑j) for any i, j. At that stage UG was an informal set of comments on how the LK representations constructed by grammarians hung together as a family. It did no descriptive work.

In our minimalist period, UG is a usably general LK(HL) for Human Language. Its invariant principles dictate part of the content of LK(L-i) for any i. And UG's parametrized principles set the terms for much of the rest. Thus, today's boundary between LK(L-i) and LK(L-j) need not look like a flat "Here Comes A Border Checkpost" in a mind that knows L-i and L-j. For related L-i, L-j, and especially for i, j mutually intelligible, linguists must assume that a substantial amount of the i-j-bilingual mind's knowledge is an LK(HL) plus shared specifications which do not choose between L-i and L-j. Only where i and j differ, as in lexical items and systemic quirks, will the mind bother to divide the items into LK(L-i) and LK(L-j) as separate boxes.

Notice that this visualization models something like a compound bilingualism. It implies that the conventional picture of true coordinate bilingualism departs from perfect rationality. This implication is new. Classical generative linguistics implied no such thing. In this respect, the shift in the generative grammatical research focus from rules to principles in the seventies has led to a slight and perhaps hitherto unremarked shift in what the core idealization implies for the analysis of bilingualism. Of course, this account can still register the factual distinction between compound and coordinate bilinguals, in terms of how the lexical entries, say, are organized, as separate sets without correspondence or as bilingual entries wherever possible.

I turn now to the question of how easy a mind is supposed to find its two languages to the (now limited) extent that i and j are two and not one. A rationality with perfect strategic command over the whole system should not care. But we have seen that rationality selectively invests in various parts of the strategic system so as to be clear about where in the system the mind, with its laziness imperative, can rationally relax. If this is true even in a monoglot LK idealization, surely a bilingual model should make the further move of placing the commanding heights of this knowledge in the i or the j sector of the composite LK.

Obviously there is no compulsion to take this option. The assumption of a neutral and unlocated rational mind knowing i and j equally well costs less, formally. But I am taking the position that the self‑organization of rationality is partly an empirical question. Linguistic research seems to have shown us that even a monoglot system prioritizes simpler morphological and syntactic derivations over less simple ones. It stands to reason, then, that a bilingual system could leave one of its languages as the cheaper or default language, as a base.

For any pair of languages, say, English and French, we must visualize two types of composite linguistic knowledge, then. Assuming for instance that N.R. and R.K. both know English and French perfectly, we may speak of the ordered N.R. and R.K. models LK(j,i) and LK(i,j) of bilingual knowledge of English and French. For formal completeness, one can assume also an unordered or baseless composite knowledge LK{i,j} with no default code, leaving as an empirical issue the organization of minds that seem to instantiate this possibility.
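The three possibilities just listed -- the ordered models LK(i,j) and LK(j,i) and the baseless LK{i,j} -- can be sketched as a small data structure. This is only an illustrative schema: the class and field names below are my assumptions, not part of any published linguistic formalism.

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class CompositeLK:
    """A composite bilingual linguistic knowledge.

    `languages` is the unordered pair of languages known; `base`,
    when present, marks the default or cheaper language. Names are
    illustrative, not the author's notation.
    """
    languages: FrozenSet[str]
    base: Optional[str] = None  # None models the baseless LK{i,j}

    def is_ordered(self) -> bool:
        return self.base is not None

# The two ordered models differ only in which language is the base;
# the baseless model leaves `base` unset:
lk_en_based = CompositeLK(frozenset({"English", "French"}), base="English")
lk_fr_based = CompositeLK(frozenset({"English", "French"}), base="French")
lk_baseless = CompositeLK(frozenset({"English", "French"}))
```

On this sketch, the empirical question raised in the text becomes whether any actual mind instantiates the `base=None` configuration.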

How do we represent an ordered bilingual knowledge? For concreteness, let us ask the question within the minimalist programme.

The problem reduces to the formal representation of relative difficulty. We already have mainstream proposals as to how to model a monoglot native speaker's knowledge of her L‑i. Our assumptions make L‑j relatively difficult for LK(i,j). We must therefore show L‑j as harder for an i‑based speaker to access, although the perfect knowledge idealization means that she always surmounts the difficulty.

This section of our argument concerns itself mainly with the status of the problem of representing difficulty, not with pleading for my solution to it. Although little turns on the details at this stage, it is only fair to mention at once that I shall offer an LK(i,j) where the entries in the j sector are heavier in that they leave some redundancy and in that they touch base with the i sector.

The case of a bilingual LK is a useful starting point because we all expect an i-based speaker to find j harder than i. But this is only a special case. Even within a monoglot LK(i), we now see, difficulty zones exist. Theory must represent these zones. It may be useless to posit sharp boundaries marking them as less natural and more cultural, as less economical and more effortful. What is definite is the need to pose the question of differential accessibility within an LK.

We had tacitly assumed that within an ideal speaker-hearer's LK all items known are equally known, therefore equally easy. We had also tacitly chosen to focus on speakers finding language i infinitely easy and any other language j infinitely difficult. This was one idealization regarding the nature of differential access to LK. If we choose to pose the issue in terms of access differences within i itself, the question does not get postponed to boundaries between languages. And it then becomes unclear if the theory should posit language boundaries as such. Here we begin to unpack the oft-repeated remark that linguistics cannot afford to just accept pretheoretical notions of single languages like French or English or Swahili.

But notice that we are slightly expanding the terms of generative reservations about those notions. In generative work, culture sets up language-entities as a social arrangement which people can socially rearrange, while nature is what linguistic science can be concerned with. And LK(HL) is the only place where nature, as human biology, intervenes. In generative work, LK(L-i) is always an artifact of the contingent exposure someone has had to this or that body of speaking.

The moves I am proposing involve changing this in two respects. At the level of what linguists do when they face the public, formal linguistics now is agnostic about what people shall do in social negotiations about what they wish to regard as language entities. But linguists accepting the anti-naturalist revisions just suggested should feel obliged to persuade social negotiators to review the current sharp separation of cross-language Foreignness Boundaries (FBs) from within-language Difficulty Barriers (DBs). This would involve actively questioning the popular perception of languages as quasi-natural entities. Serious generativists suggest that these folk perceptions do not help scientific work. If we agree -- and I do -- then we should tell the public this. The public still believes, because of strong impressions left by historical and structural linguists, that our science still endorses or condones the popular view of languages as natural entities. Correcting this impression is a bare minimum that surely the whole field can agree on. I would like to add that the similarity of cross-language FBs to within-language DBs should be part of the message to the public, a supplement others may choose to omit.

The present proposals also entail some changes at the level at which linguists face the subject matter they see as falling within "nature". Consider the standard view that linguistic study of the nature of language can rest exclusively on data regarding speakers' knowledge as to which sound-meaning pairings do and do not exist; and that the data base can omit material that pertains to how some parts of this knowledge ride piggyback on other parts. This standard view needs to be modified.

That such a modification is needed may be unclear to readers who reason as follows. "This man thinks speaker S's understanding of expression E works in terms of a derivative reading R(E) that crucially refers to more basic expressions F, G via some derivation R(E) = d. But all he needs is the claim that some function d plays a role in the semantic part of the lexical entry for E. A full linguistic account has always postulated registration of synonymy relations that S knows by virtue of LK. So the picturesque metaphor of 'riding piggyback' can be unpacked in terms of conventional generative linguistic methods of data gathering and description."

Of course classical generative syntax supplemented by some favoured type of formal semantics will readily permit the free construction of toy derivations d from any F, G to any E, in the semantic part or any other part of lexical entries. Of course classical methods allow us to gather synonymy knowledge data bearing on the correctness of such accounts. Needless to say. My point is about asymmetric synonymy, coded knowledge, and other details of the concrete proposal I summarize briefly below.

We seek to question the normal Uniform Entry Theory (UET) of lexical entries in LK. The UET idealization of LK, in the context of an attempt to formalize knowledge of distinct languages, suggests placing inter-knowledge boundaries between Particular Grammars, not within them. Our alternative idealization permits the relevant boundaries, here visualized as difficulty boundaries, to start within what we would ordinarily see as one Particular Grammar. In a Differentiated Entry Theory (DET), the difficulty boundary differentiates unmarked or light parts of LK from marked or heavy parts. On such a view, not everything in LK comes equally naturally to speaker S. This consequence makes DET less naturalistic than UET. It is in this sense that my proposals force changes not only on the cultural front vis-à-vis the public, but even on the natural front where the generative linguist meets the supposedly purely natural data out there.

If not all parts of LK come equally naturally to idealized speaker S, we suggest that some of LK is an explicitly cultural or effort-born surplus. Less natural items uttered by S to hearer H appeal to S's and H's shared past effort invested in the learning of these less natural parts of their LK. When we describe the place of this appeal in S's speaking, we shall have occasion to slightly extend the notion of speech act. All these moves are connected, which makes it useful to announce them together. But we have to make them one by one to clarify what separate roles the moves play in the account we are developing.

We call DET a Differentiated Entry Theory because it differentiates light, uncoded, unmarked lexical entries from heavy, coded, marked entries.

A coded entry formally involves two operations. At the word level, it is interpreted via some other entry or entries, which may or may not be specified. At the speech act level, any speech act featuring a coded entry gets embroiled in that entry's "code".

The set of light or uncoded entries is a lexical equivalent to early generative grammar's kernel sentences. The relation between a Coded Item (CI) and its gloss is one of asymmetric synonymy, or asymmetric interpretance. The gloss, call it the Kernel Gloss (KG) as it uses kernel words, serves as interpretant for CI. But CI, though synonymous with KG, is not playing the role of interpretant for KG. This is the asymmetry.

One example, VCR for Video‑Cassette Recorder, helps us to see what the theory does say and does not say. The item VCR has a coded lexical entry. Interpretation proceeds via something else, which in this case can be specified as the expanded form, video‑cassette recorder. And there is a code to which the type of difficulty this item exhibits belongs. This code happens to be Abbreviationese.

We are not saying every S and H had school-teachers teach them pieces of a socially recognized code called Abbreviationese that S must appeal to H's knowledge of. But we do claim that the use of VCR in a speech act flashes a kind of signal that goes: Attention, this speech act features an Abbreviation, special listening effort may be called for. This flash may fade into the statutory warning on a pack of cigarettes, routinely ignored by smokers, but it is still present in the case of VCR. In the case of Laser or Radar, specialist knowledge alone will teach some of us that these words are ex-abbreviations. They no longer wave the Abbreviationese code flag, though they may wave some other code flag, an issue I have no wish to prejudge. Codedness is an empirical question, not a matter of fiat.

Nor are we saying the term VCR is terribly difficult or repels half the users of English or will someone please wage a campaign against the opaque use of abbreviations that is driving all oppressed non-native speakers of English crazy. Our formal proposal is that VCR is a coded term and therefore located at one remove from the Lexical Kernel of LK(English). The actual question of who finds what difficult is a matter of the psychology of this or that speaker or speaker type. There may well be people who find some basic words hard that others find easy. Our proposal provides a representation for their difficulty. But we are not trying to predict in general who will find what subjectively difficult.

We are merely making it formally easy to note that the way that leads from some expressions to their content is a detour. This adds a step to certain computations.

If we describe coded items in our formal system as Formally Representing Difficulty, such slightly unwarranted terminology involves cutting corners. Technically, the theory DET makes predictions about perceived difficulty only if you conjoin it with the hypotheses that a mental system that speaks uses a functioning module that mimics LK's computations and that a processor finds longer computations more difficult. Even if you add those assumptions, it does not follow that all cases of perceived language difficulty will involve extra computation.

Having said all this, we are still inviting the inference that VCR is less straightforward than kernel items for reasons that somehow have to do with the non‑basic nature of abbrevia­tions. This aspect of our move is "obscure and intuition‑bound", to use classical words, and merits criticism, hereby invited.

We will quickly run through some more examples which we do not comment on in comparable detail. Example two, the verb Motivate, means what it does via cross-reference to 'make someone feel like doing something'. I am using single quotes to stress the glosslike role of this cross-reference. The code flag that Motivate carries may be called Difficult Words, possibly a minimal code in the sense that it is less specified than other codes.

Example three, Raison d'Être, is interpreted via the gloss 'reason for existing' and bears the code flag French. Not every word historically borrowed by English from French still bears this flag, of course; only the non-naturalized ones do. And this account fails to distinguish, in a bilingual LK, the French-flagged items used as words "in English" and the ones that work in the French sector of the LK.

The fourth example, Doorbell, interpreted via the gloss 'bell that makes you answer the door', bearing the code flag Compounds, illustrates another feature of DET. Every compound is going to count as coded.

More controversially, example five, Learner, interpreted via 'person who learns' and coded as a Derivative, commits this version of DET to saying every word that counts as derived does so by getting coded.

Example six, Abracadabra, interpreted via 'some spell' and coded as Magic, is, in a language marked by the decay of this register, a remnant of a bigger sublexicon and a token of how interpretants lapse into vagueness for all speakers. Of course, a personal LK -- an LK which keeps the perfection idealization in order to be a knowledge representation but which mirrors the patterns of knowing and ignorance in a particular person's mind so as to model that person -- will provide vague glosses for what that person happens not to know much about, like Birch 'some tree', or whatever. The formal point is that DET is committed to saying that if you know too little about a word to feel that you can use it to make definite sense, then the word needs an interpretant and goes into code.

In contrast to all of the above, an ordinary Lexical Kernel item like Bird is uncoded. Its lexical entry shows it without any interpretant. It is thus a light entry, unlike the examples of heavy entries we have been looking at. DET is called the Differentiated Entry Theory because it distinguishes between a kernel of such light entries and coded zones of kernel-dependent heavy entries.
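The light/heavy contrast just drawn -- Bird uncoded, VCR and Motivate carrying an interpretant and a code flag -- can be sketched as a simple data structure. This is a minimal sketch of the idea, under my own naming assumptions; the field and class names are not the author's notation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entry:
    """A DET-style lexical entry. Coded (heavy) entries carry an
    interpretant -- a gloss in kernel words -- and a code flag;
    light kernel entries carry neither."""
    form: str
    interpretant: Optional[str] = None  # Kernel Gloss, if any
    code: Optional[str] = None          # e.g. "Abbreviationese"

    @property
    def coded(self) -> bool:
        return self.interpretant is not None

# Light, uncoded Lexical Kernel item:
bird = Entry("bird")
# Heavy, coded entries from the examples above:
vcr = Entry("VCR", interpretant="video-cassette recorder",
            code="Abbreviationese")
motivate = Entry("motivate",
                 interpretant="make someone feel like doing something",
                 code="Difficult Words")
```

The point of the sketch is only that codedness is a property of the entry itself, not of any particular speaker's psychology.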

Such a theory can distinguish mediated from unmediated interpretation of expressions. The interpretation of kernel words and of expressions composed entirely of kernel items is unmediated. Coded words and expressions featuring them have their interpretations mediated by interpretants. Mediation is a matter of degree. If certain coded words serve as mediators (as interpretants or as participants in a complex interpretant) for other coded words, then the latter manifestly involve more mediation.
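The degree-of-mediation idea can be made concrete with a toy recursion: kernel items are unmediated, and a coded word lies one step beyond the most mediated word in its interpretant. The lexicon below is entirely hypothetical -- in particular, glossing "autodidact" via the coded word "learner" is my illustration, not the author's example.

```python
# Toy lexicon mapping each word to the words of its interpretant
# (empty for uncoded kernel items). Entries are illustrative.
lexicon = {
    "bird": [],
    "person": [],
    "learn": [],
    "learner": ["person", "learn"],  # coded as a Derivative
    "autodidact": ["learner"],       # mediated via a coded word
}

def mediation_degree(word: str) -> int:
    """Kernel items are unmediated (degree 0); a coded word adds one
    step to the most mediated word in its interpretant."""
    mediators = lexicon[word]
    if not mediators:
        return 0
    return 1 + max(mediation_degree(m) for m in mediators)
```

So a word whose interpretant itself contains coded words comes out as more mediated than one glossed entirely in kernel items, which is the gradation the paragraph above describes.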

In the architecture we have just finished proposing, mediation works with codes, whereas ordinary light speaking precedes and grounds all codes. The "unmarked code" is a non‑coded open space.

This strikes me as a move that must be made if we are to complete the generative and sociolinguistic movements away from structuralism. Let me quickly clarify why I think these enterprises have been moving in the right direction, away from the code visualization of language, but not far enough.

Structuralism had assumed the nature of language to be codelike. Sociolinguistics has verbally opposed this, inventing a whole range of terms ‑‑ formal, informal, high, low, acrolect, basilect ‑‑ to talk about the fact that some language uses are more taught than others, but unfortunately implying only that a lexical item could be laden with additional features in a sort of sociolinguistic supplement to syntax, semantics, etc. ‑‑ which boils down to labelling some words as Power‑connected and others as Power‑disconnected. And generative grammar has gradually outgrown the "languages are codes" part of the structuralist legacy in its drift from rule‑built particular grammars towards a principled and effective universal grammar, but this only means that one refrains from using the structuralist right to seal the language borders, not that one has new principled reasons for refusing to view a lexicon as a code. In other words, both enterprises, though obviously no longer committed to the view that a language is a code, have stayed anchored in the old naturalism that defines the domain of linguistic study by viewing all expressions as having a common nature, as uniformly bearing a content involving what is always potentially direct extralinguistic reference.

In this sense, standard models of generative grammar and sociolinguistics, for all their diversity, subscribe to what we have explicated as the Uniform Entry Theory.

The DET brings out the latent capacity of structuralism's successors to come out of the code cage and locate the base of a language in a truly open space, reformalizing codes in terms of how some words depend for their exact wordhood on other words.

This changes the way we look at the parts of linguistics that have some claim to exactitude, the parts that deal with relations between words. Recall the crucial status of the notion of exactness in the earlier discussion.

5.3 Embedding

There is nothing intrinsically inexact about the material we are discussing, of course. Consider the speech acts that include coded items and thus count as heavy speech acts. To speak them and to hear them involves a cost. This cost is some sort of strain, if you wish. It is such strain that any exact linguistics will need to deal with. One factor in this strain is the codes that flag these coded items. Another well‑known factor is embedding. Sentences where clause embeds clause embeds clause are costly to produce and to comprehend.

I propose to conflate these two factors and to say that speech acts, when embedded, cost the speaker S and the hearer H some strain, usefully formalized in terms of codedness in both cases. The word case and the sentence case of codedness differ in that lexical codedness literally involves Codes, whereas sentence embedding involves codedness but not Codes. They are alike in that all codedness, as we propose to formalize it, has to do with embedding.

To effect this unification, we suggest three basic moves, which we list and which others more interested in formalizations may choose to integrate into existing formal games. In move one, we say you perform an array of word‑level speech Strokes as part of every speech Act. Move two extends the notion of embedding, currently reserved for a syntagmatic relation, so that it encompasses also Paradigmatic Embedding. Move three makes codedness a function of embedding. And in an optional and thus non‑basic fourth move, which may help as it clarifies the place of the present proposals in the generative research programme, we define the general notion Coded Expression as 'expression E whose interpretation is keyed to the prior interpretation of some key material K(E) less coded than E such that, if K(E) is precisely specifiable, then either expression K(E) is part of expression E or content C(K(E)) is part of C(E)'. Under this move, a Kernel is one example of a key. The head of a syntactic chain is another, for a chain is now a Coded Expression in this sense. Any outcome of a derivation ‑‑ either a derivate or, if it counts as an expression, a derivation itself ‑‑ is now also a Coded Expression. Although its K may in practice or even in principle not be precisely specifiable, the set of its key‑parts satisfies the spirit of the definition.
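Those interested in integrating the fourth move into existing formal games might begin with something like the following toy predicate. This Python sketch is a hypothetical rendering under strong simplifying assumptions (expressions are strings, contents are sets of labels, codedness is an integer rank); it checks the definition's disjunction directly: either the key's form is part of the expression's form, or the key's content is part of the expression's content.

```python
# A toy rendering of the fourth move's definition of Coded Expression.
# All names and data choices here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Expr:
    form: str                              # the expression's shape
    content: set = field(default_factory=set)  # C(E), as a set of labels
    codedness: int = 0                     # rank: kernel items sit at 0
    key: Optional["Expr"] = None           # K(E), when precisely specifiable

def is_coded_expression(e: Expr) -> bool:
    """E is a Coded Expression if its key K(E) is less coded than E and,
    K(E) being specifiable here, either K(E)'s form is part of E's form
    or K(E)'s content C(K(E)) is part of E's content C(E)."""
    k = e.key
    if k is None or k.codedness >= e.codedness:
        return False
    return (k.form in e.form) or k.content.issubset(e.content)

# 'doorbell' keyed to the kernel word 'bell': form containment holds.
bell = Expr("bell", {"BELL"}, codedness=0)
doorbell = Expr("doorbell", {"BELL", "DOOR"}, codedness=1, key=bell)

# A derivation outcome: 'went' keyed to 'go'. The form is not contained,
# but the content is, so the second disjunct of the definition applies.
go = Expr("go", {"GO"}, codedness=0)
went = Expr("went", {"GO", "PAST"}, codedness=1, key=go)
```

The two example pairs exercise the two halves of the disjunction: a compound satisfies the definition by form containment, a derivate by content containment, as the prose version of the definition intends.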

An empirically unsustainable but controversial and thus possibly welcome strong form of this account would compel every "foreign/opaque" word, metaphorically speaking, to live as some existing or potential "native/transparent" expression's paying guest. For our limited project of formalism development, we propose a weaker version. Every coded expression is either coded for reasons of obvious structural heaviness or a flag‑bearing member of some code and therefore spoken in some "special" register. A coded expression must either have a gloss of its own (which, remember, has the right to be vague, as in the Abracadabra case) or activate a register that has a generalized suprasegmental gloss ('teenager talk', 'officialese') providing the marked standpoint through which interpretation must be routed. Even this can be formalized in a fuller unpacking in terms of paradigmatic embedding.

Time to wrap up this part of the argument. The moves made here try to complete the direction of the generative revolution on a certain reading of what the programme has been about. To share this reading, first recall that early generative grammar used the GT (Generalized Transformation) mechanism only to conjoin S to S or to embed S in S, an operation that thus held the key to the system's open‑endedness. Minimalist work brings GT back in a way that makes it fair to say that, in effect, every Merge is a GT. This means that current work already generalizes Embedding and makes it cover more syntagmatic ground. Our proposals generalize Embedding in a paradigmatic direction as well. This represents one formal aspect of the continuity between our ideas and the general drift. Consider now a substantive point. The generative revolution concerns itself with the fact that the living speaker spontaneously says things that s/he has not or need not have heard before. In other words, normal speaking need not refer to frozen codes. A full account of this central fact should be accountable to the intuitively given existence of certain frozen codes that do dot the linguistic mindscape ‑‑ codes corresponding to colonizers who have dominated one's community and other psychologically real pieces of heterogeneity in one's LK. We need to say that speaking is based in a free, uncoded kernel and remains aware of ‑‑ and in charge of ‑‑ the coded character of much material that it relativizes to this kernel which calls the shots. This is a more responsible account of the native speaker's free and spontaneous speakerhood than one that pretends that all is homogeneous and equi‑natural in LK.
As part of the fuller account that our formalism allows you to construct, you can rigorously refer, when you wish, to syntagmatic embedding of structure in structure and to paradigmatic embedding of speaking in speaking. This means you can speak abstractly of some parts of your knowledge depending formally on other parts of the knowledge in your own or in others' minds. And our framework frees such speaking from the compulsions of a space‑time‑anchored "realistic" portrait ‑‑ from the mindset that says thou shalt cast all knowledge‑dependence in the mode of somebody having physically taught thee certain items. That mode forces you to couch all references to knowledge‑depending‑on‑knowledge in an empiricist framework that accepts the claims of the "common sense" doctrine that what has been learnt by conscious effort must have been taught by some conscious external controller, some teacher. Our proposed modifications of the LK idealization free you from the empiricist framework and the doctrine that all cultural knowledge is teacher‑imparted, and yet let you characterize the way relatively cultural knowledge items depend on the more natural basis that underwrites them.

Relatively cultural? More natural? We have come a long way from notions pitting the naively antinaturalist view AVL against the naively naturalist view ESVL. We can now question the Exact Science View of Linguistics ESVL without at once collapsing into the Artifact View of Language AVL. And the basis of our questioning is a clear continuation of the drift of generative inquiry.

5.4 MIVL

The Message Increment View of Language MIVL, which I am advocating here to give some concreteness to the programme and not because this specific proposal has had all the glitches taken out, emerges from normal ESVL linguistics as a continuation of the generative critique of structuralism. The account of language we will argue for retains the exact linguistic form subaccount from ESVL work. It adds a parallel subaccount in which big messages grow by little messages getting embedded in other little messages. Message embedding works on syntagmatic and paradigmatic tracks. Though inexact in its combinatorics, message organization shares structural work with the units of linguistic form that do feature in the exact subaccount. We introduce this view by working through a certain reading of present and past work which we seek to extend.

Structural linguistics had postulated atomically arbitrary simple signs, leaving open the degree of relative motivation in composite signs. To the extent that composition procedures turned out to be language‑specific and thus opaque, even the composite signs would count as relatively arbitrary. Arbitrary or Society‑chosen material appears to the Speaking Subject as given and immutable. Only "transparent" procedures of sign composition, if any, might escape the arbitrariness of all Language‑bound Form into the Speech‑anchored world of Substance. 

Therefore the structuralist research programme grimly expected that the composition procedures might turn out to contain large doses of opacity, requiring elaborate descriptions. If not only words but even sentences are approached as potential Signs with unspecified amounts of Arbitrariness, then languages might well differ from each other wildly, in unpredictable ways. This leads to the generative critique, as is obvious. What may be less obvious is how the moves of the critique take us away from AVL to ESVL.

Structuralism practically endorses AVL. A sign is a social artifact. Even sentences are, for all you know, giant signs, with lots of social artifice in them waiting for discovery by the social science of linguistics. Composite signs exhibit relative motivation and thereby invite exact methods of description, as in all social sciences.

The generative revolution undermines AVL in two ways, inaugurating ESVL. One, generative work views the formal richness of language as mirroring the creative freedom of the human mind. This move, by claiming creative linguistic freedom as a property of human action and a formally rich language mechanism as a property of the human endowment supporting this activity, locates language in the natural world. Two, generativism subjects this richness to an exact computational accounting. This move gives linguistics a particular niche in the scientific ecosystem, by showing that the computations used by the language mechanism exhibit design features such as nonredundancy and subsystem simplicity which one might have expected to find only in the inorganic physical sciences.

Today, this computational accounting focuses again on the word level, enabling us to rethink the questions structuralism had once faced. Structuralism had settled them by postulating atomically arbitrary simple signs and a unique upward structuration leading from these simples up a single hierarchy of relatively motivated composite signs. We propose to rethink the questions in a way that learns from and extends the generative rejection of this settlement.

Generative linguistics has steadily examined the displacement property of human language, the fact that nearly every significant item occupies at least two distinct syntactic positions, one where it is pronounced and one where it is interpreted. We now think we know that this property reflects the pushes and pulls of formal features such as Case and Agreement. If current work is on the right track, lexical items bearing such formal features need to discharge them and thus bring about certain displacements so that the right words appear next to each other, discharging the relevant features. This picture denies the uniqueness of an upward structuration of relatively motivated composite signs as postulated in structuralism.

Let us look carefully at what this generative picture, by now standard, asserts in contrast to the structuralist view. As we do this, we need to remember that the official opponents to mainstream generative research, variously located in sociolinguistics, psycholinguistics, language pedagogy, pragmatics, and other important forms of linguistic study, do not normally defend any alternative analysis of syntactic structure. In other words, contemporary syntax, although its researchers may hold a UG‑based metatheory rejected by official non‑generativists, represents a near‑consensus of the community of linguists as far as syntactic analysis itself is concerned.

The structuralist analysis emphasized the sign as a relation between a signifier in language and its signified anchored in the external world. The theory stressed that the sign linking the two was an arbitrary piece of social currency that individuals must accept as given. However hard this analysis might then seek and find composite signs that are relatively motivated because of the non‑arbitrary aspects of sign composition, it predicts the existence in principle of substantive signs that are pure, non‑composite, and pointed exclusively towards the external world.

In contrast, the generative theory of the word in its mature form asserts that words point not only at the world outside but at other words, in a ubiquitous process of mutual accommodation. This implies no weak thesis along the old lines of "Each word must be co‑significant with or relativized to all other words, which means you cannot locate any sense", an old route to inscrutability or indeterminacy. Rather, this implies the strong claim that a word does point both at the world and at necessary neighbour‑words in specific ways. An important prediction follows. Substantive words, in such a theory, must be at least syntactically inflected. Morphological inflection depends on how much a particular language uses affixes and therefore varies; but it implements syntactic inflection, and the generative theory predicts that it should exist.

Notice that the structuralist theory does not predict that morphological inflection should exist. It is consistent with the possible ubiquity of unsplit, uninflected root words. Here the generative theory makes a distinct assertion ‑‑ that words exist only in relationships of mutual accommodation ‑‑ with a clear and accurate prediction: that words are often inflected.

To discuss and extend the insight involved here, we give it a name: Mutual Accommodation. We say words accommodate each other syntagmatically by using inflection and other inter‑word registration devices. As our understanding of this phenomenon grows, we shall learn how to tell the stories of affixes, anaphors, variables, and other dependents as part of a larger account of Mutual Accommodation syntagmatics. If expressions were kept afloat as entirely self‑sufficient atomic signs by social arbitrariness decisions and did not need each other, structuralism would have worked, as would AVL. But expressions do cooperate in specific ways that lend themselves to exact description and allow people to exercise creative freedom, whence the need for ESVL.

But ESVL is not enough. Mutual Accommodation works also in a paradigmatic direction. Paradigmatic accommodations add up but do not compute. On this rests our case for MIVL.

5.5 Increments

I ask a Delhi‑based colleague P.M. to referee for an Indian journal I am editing. She remarks that she will do what she has done before when refereeing for "firangi journals", as she puts it. Why use this Indian English borrowing from Hindi instead of just saying "Western journals"? She is relying on my knowing why. By using the word, she is speaking as Indian to Indian and signalling our joint distance from the world of Western journals where we function but do not belong. This gesture gains added depth from biographical details which have no bearing on our discussion but whose existence I mention to make the point that codedness carries a lot of contextual content.

Does P.M.'s choice of "firangi" instead of "Western" add to the meaning of what she says? Yes. Does this addition count in the computation of the PF or LF or some PF‑LF‑feeding representation of her sentence? No. It exists as a Message Increment but not as a Representation Segment. The representations show only that the item Firangi bears some code flag. This fact does not flow into any other computationally significant fact about any sentence featuring the item.

How does P.M. speaking to P.D. in 1986 generalize to an LK beyond space‑time imperfections?

Every LK representation of a structure treats it both as a Cognitive, subject to representational structure computations of the usual generative sort, and as a Preactual, subject to message organization considerations of the sort I am adding in the extensions proposed here. Imagine the Cognitive as a letter and the Preactual as an enveloped letter with no Speaker's from‑address, no Hearer's to‑address on the envelope. Visualize an Actual realization in real‑time performance as an addressing and mailing of that enveloped letter by an actual S to an intended H.
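The letter/envelope/mailing metaphor can be pictured as three nested records. The following Python sketch is again a hypothetical rendering, not a claim about any actual formalism: the Cognitive is the letter, the Preactual wraps it without addresses, and an Actual merely adds the speaker, the hearer, and a space‑time stamp, so that subtracting these leaves the Preactual intact.

```python
# A hypothetical rendering of the Cognitive / Preactual / Actual layering.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Cognitive:
    """The letter: the representation computed in the usual generative way."""
    structure: str

@dataclass
class Preactual:
    """The enveloped letter: no from-address, no to-address yet, but it
    carries message organization (increments such as paradigmatic
    accommodations)."""
    letter: Cognitive
    message_organization: Tuple[str, ...]

@dataclass
class Actual:
    """The addressed and mailed envelope: a real-time performance by an
    actual S to an intended H at a particular place and time."""
    envelope: Preactual
    speaker: str
    hearer: str
    when_where: str

# The firangi example: subtract P.M., P.D., Delhi, 1986, and the
# Preactual message organization survives the subtraction.
letter = Cognitive("... firangi journals ...")
pre = Preactual(letter, ("'Western' ~ submessage from the Hindi source",))
act = Actual(pre, speaker="P.M.", hearer="P.D.", when_where="Delhi, 1986")
```

The design point of the sketch is that the Actual adds addressing without altering the envelope: the Preactual is the same object before and after mailing, which is what lets the account generalize beyond space‑time imperfections.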

So you subtract P.M., P.D., 1986, Delhi, and still get a Preactual message organization that assumes that the perfect bearer of the relevant LK perfectly organizes a message in which the submessage 'Western' enters paradigmatic mutual accommodation relations with some submessage emanating from the Hindi source of this loan word.

There are some points that need to be packed into a fuller account. Context encoding crucially shapes message composition in a way that must enter into a serious representation of the process of composition itself. One way to handle this is to allow for parallel and interactive structuration of expressions and expressing‑acts. This would involve insisting in one's technical work that the mutual accommodation between words takes not only the syntagmatic form that current minimalist theories of syntactic feature checking worry about, but also a paradigmatic form. We are far from having usable tools to do all this work with.

As we explore ways of encoding matters of context and other intangibles of composition, one methodological worry is going to be: What empirical basis provides the appearance that the study of language can be a prisoner of exactitude at all? Where in language do you find the exactly characterizable phenomena which underpin the exact science revolution in linguistics?

The instructive, and as yet unexamined, answer is: in the Learned sectors of linguistic knowledge, which even literate adult speakers have trouble with. Many of the crucial phonology examples from English are items you look up in a dictionary. Much of the material that drove the post‑Ross‑constraints period of syntax involved long sentences of the sort sometimes used in written prose, but seldom in spontaneous speech.

This indicates that computations which make language a rich system requiring exact science treatment appear most detectably, perhaps, wherever structures combine "freely" and "rationally", using the full formal combinatorics that comes into play when choices interact. In the terms proposed here, the exact enclave of language falls within those areas of Linguistic Knowledge where codedness arises as a consequence of embedding. If this is so, difficult or Learned sectors of language are precisely where we would expect to find crucial data for ESVL.

The Message Increment View of Language MIVL helps us to reexamine the general relation between coded expressions and their basis in the kernel. Coding itself on the MIVL account becomes a generalized paradigmatic increment.

5.6 A sustainable naturalness

It is time now to plug the rather specialized concerns of this section into the more general worries of our argument as a whole. All the moves I have been making amount to an unpacking that can, in principle, satisfy an appropriate community of specialists who wish to know why I think that the relation between harder (less readily accessible) and easier (more readily accessible) parts of an LK might best be seen in terms of a fully diversified adult Classicality at the periphery of LK serving as a permanent query answering service to a basic and relatively undifferentiated Natural core playing the role of a permanent child or learner.

At one level, what I have been saying may be summarized as the claim that a knowledge is optimally stored not as a product, but as a process. To know X is thus in principle to know how to potentially transmit X, an object of relatively difficult knowledge, to an imagined questioning child whose standpoint is constituted by the relatively immediately accessible parts of the knowledge. This image makes the stored knowledge system dialogical and makes the representation of language ‑‑ or of some other object of knowledge ‑‑ less boxlike and more like a flow or a circulation, by the same token opening up the content of language to a certain history (an imagined development of the complexes from the simples ‑‑ no doubt in an etymographic or "folk‑etymological" form as a misprision of "real" or archival etymologies) and to a certain social geography (an imagined relation whereby elites and other specialists controlling particular sublanguages hold resources in trust for the default core community that can ask queries whenever special expressions need to be used).

At another level, I need to stress instead that both AVL and ESVL play into the hands of the monumental or Olympian mind‑set. Only something along MIVL lines might possibly help work towards a sustainable naturalness. It visualizes the natural as the imagined core child's simple unity. It also imagines the classical as the adult periphery's complexly differentiated striving for a sustainable or convergent set of potential answers (glosses) that respond to the child's whats, hows, wheres and whys.

This picture of representing linguistic knowledge LK in a way that can take the formal representation of difficulty in its stride serves in our overall argument as a metaphor for a reinaugurated Enlightenment that will put humanities‑rooted, friendly learners and not exactitude‑rooted, adversarial would‑be teachers in the driver's seat. Call it the Apprentice's Enlightenment rather than the Expert's.

Why should this picture be seen as Green? What does it have to do with reclothing the planet?

Industriality denudes. Cognition reclothes. Human cognition always dresses things up in categories. Serious, that is sustainable, cognition takes this dress seriously instead of gesturing it away and seeking some natural body as uniquely truth‑giving.

Language exists as a knowledge where nature plays a big role. It is the enduring achievement of generative linguistics to have shown that this is so. There is no doubt that the biological make‑up of humans draws limits to what languages can exist and shapes creativity's linguistic drawing‑board itself at levels remote from what the subjective consciousness of speakers can reach and think about. Language is at the edge of nature. It is here that nature meets culture.

We have just finished outlining a theory of linguistic knowledge that takes this simple fact into account. Language is also at the edge of culture. It is here that culture meets nature. Our formulation of this meeting banks on the fact that we know language not as a Performed corpus of acts already accomplished and to be marvelled at by the wowed crowds in the gallery of the monumental twentieth century. We know language as a Competence, as a knowledge of what can be done by people, with other people. This "withness" is portrayed at the heart of the formalism itself that sets forth what a speaker of a language knows. We know, in other words, insofar as we can keep the knowledge flowing from older cartoon figure to younger cartoon figure in our inner Punch and Judy theatre, from complex adult to simple child.

This move introduces a dimension of the transmitted, of the trans‑codal, into the notion of the code that linguistics in all its versions must do business with and which thus constitutes linguisticity. And the Apprentice child and the Expert adult thus find a natural entry into one proposed continuation of one of the most remarkable Competence‑based critiques of the Performance model, the model of forcibly imposed achievement standards, that the twentieth century has seen.

With this child embodying the natural cheerfulness of going ahead and this adult the classical worry of ensuring that all god's chillun get the wings they need, we can begin to refigure the classicalities of a China, an India, an Arabia, or a Western Classical Antiquity in an international inheritance. As we do this, the fundamentalisms recede. So does the threat of a United Societies of Amusement sponsored hijack of seriously cognitive projects by premature and mindless industrializations. Here is the Green component in this statement of our hopes.

