NOTE -- THIS VERSION SUPERSEDED BY LATER VERSION
AS OF 12-14-02



The Relevance Of Peircean Semiotic

To Computational Intelligence Augmentation

Joseph Ransdell

Dept of Philosophy

Lubbock, Texas 79409 USA

joseph.ransdell@yahoo.com

 

 

This is an incomplete work in progress, distributed at present for the purposes of critical feedback. It is being prepared for the proceedings volume for the

Workshop on Computational Intelligence and Semiotics, at São Paulo, Brasil, October 8-9, 2002, organized by João Queiroz (PUC/SP) and Ricardo Gudwin (UNICAMP). Two concluding paragraphs are omitted from this version.

 

1. Introduction

Peter Skagestad -- philosopher and Peirce scholar -- identifies two distinct programming visions that have animated research into computationally based intelligence, which he labels, respectively, "Artificial Intelligence" or "AI" and "Intelligence Augmentation" or "IA". The aim of the present paper is, first, to describe the distinction between these two types of computational intelligence research for the benefit of those who might not be accustomed to recognizing them as co-ordinate parts of it, and then, second, to draw attention to a special sort of Intelligence Augmentation (IA) research which seems to me to warrant special emphasis and description. It does so both because of its potential importance and because Skagestad's account of the distinctive features of IA research does not seem to me to capture the most salient characteristics of this special part of it, perhaps because it has not occurred to him that it is distinctive enough to require special attention in order to be recognized for what it is.

AI research can be characterized roughly as computer programming which aims at creating machines that can think as well or better than humans can think, whereas IA research is computer programming which aims at providing a computational basis for augmenting or increasing the effectiveness of human thinking by assisting it, as distinct from attempting to replace it by a machine simulation. The two can be regarded as being, in a general way, complementary in application, and the term "computational intelligence research" or "CI research" (as I will abbreviate it) can reasonably be regarded as embracing both. The particular type of IA to which I wish to draw attention here is computer programming which aims at supporting, augmenting, and perfecting the critical control of research communication and publication.

Although the philosophical work of Charles Peirce is relevant to AI as well as to IA, Skagestad is especially concerned to position Peirce as providing a theoretical basis for IA comparable to the foundational position of Alan Turing as regards AI in virtue of the latter’s conception of the Universal Machine and of the so-called "Turing Test" for computer intelligence. Skagestad positions Peirce in this way by explaining what is implicit in Peirce’s dictum that "all thought is in sign," construed as meaning that all thought is materially embodied. In developing Skagestad’s conception of IA further in the direction indicated I also ground this in Peirce’s dictum, but I do so by making explicit a different (but complementary) implication of the same Peircean dictum, namely, that all thought is dialogical. As an exemplary (but not prototypical) case of IA of this special sort, I use the archive and server system of primary publication created by the physicist Paul Ginsparg at Los Alamos some ten years ago which is presently in use in the fields of high energy theoretical physics and several closely associated fields in physics, astronomy, and mathematics.

 

2. The Distinction Between AI and IA Research

The present audience will require no reference to the literature on AI research, but the basis for the IA movement in computational intelligence research may not be equally familiar. The distinction is certainly implicit in much of the speculative literature on computational intelligence in the past few decades, but the overt recognition of these as two equally important developments within the broader category of computational intelligence programming seems to be relatively recent. As background for the present paper I recommend three papers by Peter Skagestad on this topic which are easily available on-line. Links to these papers are to be found on the Peirce "ARISBE" website on the following web page:

http://members.door.net/arisbe/menu/library/aboutcsp/aboutcsp.htm

All three of these papers are relevant, but I will only be touching here on a few of the points he makes in them, chiefly (though not exclusively) in the paper of 1993. In these papers, Skagestad distinguishes between AI or Artificial Intelligence and IA or Intelligence Augmentation as two distinguishable types of programming goals that correspond to what he regards as two distinct "computer revolutions", rooted in "two very different notional machines", namely, Alan Turing's Universal Machine, as described in his 1936 paper on computable numbers, and Vannevar Bush's Memex, as described in a paper by Bush of 1945. Skagestad says:

Both the Turing machine and the Memex attempt to mechanize specific functions of the human mind. What Turing tried to mechanize was computation and, more generally, any reasoning process that can be represented by an algorithm; what Bush tried to mechanize were the associative processes through which the human memory works. ...

The Memex, which attempts to replicate human memory, and hence may be said to embody "artificial memory", was not intended to rival the human mind [as AI does] but to extend the reach of the mind by making records more quickly available and by making the most helpful records available when needed. This idea directly inspired the research program known as "intelligence augmentation" (IA), which was formulated in 1962 by Douglas Engelbart with explicit indebtedness to Bush, . . .

Skagestad remarks further that:

The Turing machine is the ancestor of the inference engine under the hood of the personal computer . . . , while Bush's Memex is the ancestor of many of those features we refer to, collectively, as the user interface.

And he reminds us that:

In the sixties computers were huge, expensive machines usable only by an initiated elite; the idea of turning these machines into personal information-management tools that would be generally affordable and usable without special training was advocated only by a fringe of visionaries and was regarded as bizarre not only by the general public, but also by the mainstream of the electronics industry. The second computer revolution obviously could not have taken place without the first one preceding it, but the first computer revolution could very easily have taken place without being followed by the second one.

Phenomena of this complexity are often explainable, as regards their origins, from more than one perspective. Real things have facets, and multiple complementary perspectives on complex historical realities are usually required in order to have a reasonably sophisticated account of them overall. In this case the role of visionaries like Turing and Bush is undoubtedly important, but there are other things to be said about the origins of the conception of the computer as well. My guess is that the conception of it as an instrument of personal use in augmenting the ability to produce text, to work with documents in various ways, and to communicate with others originated also, in part at least, as an unintended by-product of work designed to satisfy the need to document the programming involved in mainframe computing, the maintenance of which required that records be kept both for one's own use as a programmer and for the use of other programmers as well. This in turn required the ability not only to record information but also to communicate it, which could be facilitated by making use of the powers of the computer itself as the instrument for doing such recording and transmitting.

It was by no means necessary to make such use of the computer for this purpose, though, since the recording and communicating of programs and programming notes could all have been done in ways previously used for recording and communicating things like that, namely by writing them down on sheets of paper either by hand or by use of a typewriter. But once the use of the computer itself for such purposes was recognized as a possibility and regularly practiced, it is not surprising that there would be a few people here and there perceptive enough to grasp much broader and more exciting visions of possible use, for the purpose of actualizing what Vannevar Bush had envisioned in Memex, which was, among other things, the prototypical vision for what later became the conception of hypertext linkage.

In any case, Skagestad himself draws three preliminary conclusions from his historical account of the difference of the two visions:

First, the Turing machine and the Memex each provided an indispensable piece of the technology that has become known as the personal computer, which we may today opt to conceptualize either as a personal Turing machine or as a computerized Memex;

Second, the two constructs are not rivals in the sense of offering conflicting solutions to the same problem; Bush and Turing were attacking entirely different problems, and so their respective solutions do not directly conflict with each other; but:

Third, the two constructs embody different conceptions of the human mind in general and of human-machine interaction in particular.

He continues, saying:

Turing regarded the human being as essentially indistinguishable from a machine; Bush regarded the human being as essentially a machine user, and sought to construct symbol-manipulation machines that would be "thinking machines" in the sense of machines to think with, not machines that think. While Bush's vision has served as the inspiration for a vast industry that is rapidly transforming our culture and society, Turing's vision has become the governing paradigm of the research program known as artificial intelligence (AI), and indeed for the entire interdisciplinary field known as cognitive science. So pervasive is the influence of this paradigm that one frequently hears it said that the computational model is the only comprehensive and fully articulated model of the mind available. There is, however, a different model of the mind available—one which, while not articulated by Bush, is fully supportive of the research program Bush initiated, the program today known as "intelligence augmentation" (IA). The model I have in mind is one which was articulated in the nineteenth century by Charles Peirce, and which has recently been advocated by James Fetzer as the semiotic model of the mind.

To summarize to this point, Skagestad’s basic argument is to the effect that computational intelligence research (CI research) has thus far worked chiefly from two distinctive visions of what might be achieved -- AI (Artificial Intelligence) and IA (Intelligence Augmentation) -- which are capable of being regarded as complementary rather than exclusive alternatives of CI development, but which may tend to be at odds with one another because of the importantly different conceptions of mentality which lie at their respective bases. Skagestad’s primary aim thus far, though, has not been to encourage research development in which they are capable of being mutually supportive, though he is doubtless in favor of this, but rather to make clear that the second paradigm for research into computational intelligence is conceptually independent of the first, such that what we refer to as if it were one thing, the computer, is in reality two importantly different things at once: on the one hand, an algorithm-embodying mechanism capable of mimicking mentality functionally to an extent yet to be determined; on the other, an instrument for coordinating factors variously involved in human intelligence insofar as these can be supported mechanistically in such a way as to augment human intelligence instead.

Skagestad regards the theoretical basis for the AI conception as lying in Turing’s conception of the Universal Machine, but he does not regard the corresponding historical figure in Intelligence Augmentation, Vannevar Bush, as providing the theoretical basis for the IA tradition generally. His view is rather that although Peirce did not envision its actualization in the concrete way Bush did in his conception of the Memex machine, Peirce’s philosophy does provide a theoretical basis for the IA tradition generally in a way that Bush’s more limited vision does not. Skagestad also recognizes others whose conceptions are supportive of this theoretical basis as well, most notably Karl Popper and his conception of the exosomatic evolutionary development of mind, as is explained at some length in Skagestad’s 1993 paper. But he regards Peirce’s work, which was prior to Popper’s, as being theoretically more adequate.

 

3. Is There a Unitary Principle for IA Research Generally?

I agree with Peter Skagestad both as regards the need to recognize that two distinct research projects have actually been at work in the development of computational intelligence technology, and as regards the claim that Peirce's philosophy can provide a theoretical basis for the second kind of computational intelligence project as well as contributing importantly to the first. I take this basic agreement for granted here, but before going ahead to explain the further aspect of the IA research tradition which especially interests me, I should note first that I do not think that Skagestad has succeeded thus far in identifying precisely enough what it is that is fundamental in the IA tradition that runs from Bush through Douglas Engelbart, J.C.R. Licklider (internet development), Ivan Sutherland (computer graphics), Ted Nelson (hypertext), Alan Kay (interface design), and other stellar figures up through Tim Berners-Lee, who both invented the conception of the world wide web and at the same time established it as an actual world wide hypertext system, beginning around 1989, and who still continues with his development work on the so-called "semantic web". That is, I do not find any place where Skagestad describes IA in a way that seems to capture what the various facets of it to which he appeals in his account have in common which would justify regarding this second controlling vision as itself a single or unitary vision, though I believe there is indeed some such unifying factor to be appealed to.

Thus Skagestad at times simply refers to IA as being based on the conception of the personal computer, in contrast with the conception of the computer exemplified in the kind of computing characteristic of mainframe computing. This could perhaps be firmed up by identifying some trait or traits essentially characteristic of personal computers that could be shown to involve the rest by implication, but I do not find that this is done satisfactorily. He also frequently mentions the problematics and purposes of user interface design as of the first importance, and that, too, is certainly to the point but also is not itself satisfactorily defined. In using Bush’s vision of the Memex machine as an historical basis, he is, in effect, privileging the principles of hypertext as fundamental, and this surely is of basic importance, too. But, again, I find no attempt on Skagestad’s part to demonstrate that these principles are somehow at the bottom of it all. Networking is still another possible candidate which he uses as illustrative of the second revolution in computing, but the general idea at the basis of networking would have to be made clear and shown to be conceptually basic relative to the other factors mentioned as characteristic of IA research and this has not been done either.

My own hunch -- and it is little more than that, but it seems worth mentioning in a suggestive spirit here -- is that the key to the identity of what Skagestad characterizes as the IA tradition in computational research lies in the conception of interactive computing, which he does indeed recognize in passing but does not linger on. One reason for thinking this might be the key factor is that the conception of the personal computer seems to have developed historically in large part from the attempts of the early hacker community at MIT to take advantage of the DEC machines that came into competition with the IBM mainframes, as being more responsive to the programmers' needs than the monoliths that preceded them. These needs included the need to play -- the fountainhead of creativity in the development of the computer generally, in my opinion -- and the games devised were interactive ones involving text produced by the player and interpreted by the computer, and text produced by the computer and interpreted by the player, in a continual response and counter-response which simulated human interactivity with things in one's environment in the context of a structure of inquiry which gave sense to it all. I am referring, of course, to the "adventure" games in particular, which were games of discovery based on clues provided by textual descriptions of what items were to be found in the labyrinthine tunnels of the "Colossal Cave" in which the adventurers found themselves.

With this, the paradigm of the computer as an algorithm-enacting machine was implicitly displaced by quite another vision of what these things were all about; for regardless of what was happening on the side of the machine -- let us assume it was nothing but the use of algorithms in application to data structures -- what was happening on the side of the game player, who was an integral part of the overall interactive system, was not algorithmic. The result was that the overall system of interaction could not itself be understood simply as the orderly triggering of algorithms, and it bore little overall resemblance to what the machine appeared to be in the perception of the mainframe programmer, who was accustomed to thinking of the machine as dedicated to the enactment of purely deductive routines operating on data supplied to it for purposes of drawing just such deductive conclusions from it. Finding your way out of the Colossal Cave required a lot of deduction, to be sure, but algorithmic deduction was not the overall form of the activity of the interactive person-and-machine, which, in effect, humanized the latter by informing it with human spontaneity in the service of discovery.

Human and machine interactivity in the solution of problems arising in the context of discovery is the point from which I would start, then, in attempting to get a clear and unitary vision of the essence of what Peter Skagestad regards as the second computer revolution and identifies with the project of IA or Intelligence Augmentation. Skagestad might agree with me on this -- I am not suggesting any disagreement here -- but as best I can make out from what he does say in the articles mentioned, the starting point for understanding IA philosophically for him has been rather with the idea of the "exosomatic" location of mind in the circumambient material environment. Let me explain now how this relates to the Peircean dictum that all thought is in signs, which he regards -- rightly, in my opinion -- as the key conception for understanding Peirce’s semiotic as capable of providing a theoretical basis for IA generally.

 

4. Thought Is In Signs --> Thought Is Exosomatically Embodied (Skagestad)

Peter Skagestad understands the dictum "All thought is in signs" to mean that thought is not primarily a modification of consciousness, since unconscious thought is quite possible, but rather a matter of behavior -- not, however, a matter of a thinker’s behavior (which would be a special case) but rather of the behavior of the publicly available material media and artifacts in which thought resides as a dispositional power. The power is signification, which is the power of the sign to generate interpretants of itself. Thinking is semiosis, and semiosis is the action of a sign. The sign actualizes itself as a sign in generating an interpretant, which is itself a further sign of the same thing, which, actualized as a sign, generates a further interpretant, and so on. As Skagestad construes the import of this -- correctly, I believe -- the development of thinking can take the form of development of the material media of thinking, which means such things as the development of instruments and media of expression, such as notational systems, or means and media of inscription such as books and writing instruments, languages considered as material entities like written inscriptions and sounds, physical instruments of observation such as test tubes, microscopes, particle accelerators, and so forth. The evolution of mind means that cognition is still developing, not primarily in the nervous system and brain and not in some mysterious kind of immaterial mind-stuff, but rather in the material instruments and media of cognition. Thus Peirce says, for example:

A psychologist cuts out a lobe of my brain (nihil animale a me alienum puto) and then, when I find I cannot express myself, he says, 'You see, your faculty of language was localized in that lobe.' No doubt it was; and so, if he had filched my inkstand, I should not have been able to continue my discussion until I had got another. Yea, the very thoughts would not come to me [emphasis added]. So my faculty of discussion is equally localized in my inkstand.

Let me quote Skagestad’s comment on this:

As is indicated by the emphasized sentence, Peirce is not making the trivial point that without ink he would not be able to communicate his thoughts. The point is, rather, that his thoughts come to him in and through the act of writing, so that having writing implements is a condition for having certain thoughts -- specifically those issuing from trains of thought that are too long to be entertained in a human consciousness. This is precisely the idea that, sixty years later, motivated Engelbart to devise new technologies for writing so as to improve human thought processes, as well as the idea that motivated Havelock's interpretation of Plato.

I am sure you can readily see the connection of this with the development of computer graphics, the user interface, the use of the mouse, word processing, hypertext, and so forth, which is what primarily interests Peter Skagestad. The theoretical grounding of all of this in Peirce lies in his locating of thought in the media of its expression, as expressed in the dictum that "all thought is in signs."

 

5. Thought Is In Signs --> Thought Is Dialogical (Ransdell)

I agree with Peter Skagestad in all of this, and my interests certainly include those computational mechanisms that constitute and control the interface both with document and data materials and with other persons, and which include or enable the many powers of manipulation of text and graphics that have been developed in recent years, the ability to make and follow hypertext links (i.e. to associate freely and to trace associations already made), the ability to exchange messages with others in various ways, and so forth, which Skagestad is especially concerned to emphasize. But there is a further and equally valid interpretation of the dictum that "all thought is in signs" which also has implications for computationally-based Intelligence Augmentation, namely, that thought is dialogical -- hence communicational -- in form.

If thought is to be found in signs, and is actualized in their actual generation of interpretant-signs of themselves, then it is the flow of discourse as asymmetric dialogically-structured interpretation calling forth further interpretation that constitutes the flow or process of thought, and the development of intelligence is at least in part a matter of the development of critical control practices that conform to communicational norms which make discourse more efficient and effective relative to whatever ends it may have. Since the discourse or communication in question is to be made more effectively intelligent, it seems reasonable to start out by working with communication as it occurs specifically in processes of inquiry, where the function of the norms of critical control is to make inquiry more successful in the sort of results it specifically aims at. The capability of this kind of success is certainly an important part of what we regard as intelligence. Whether the focus upon communication in inquiry in particular will provide us with an adequate basis for understanding the potentialities of IA programming designed especially to make communication in general more intelligent is another matter. This might take us only a certain distance, beyond which we will need to consider other and importantly different types of communication as well if our aim is to develop Intelligence Augmentation of this sort as extensively as we can. But understanding something of the potentialities and problematics of IA in this respect should at least provide us with a more sophisticated understanding of the role of communicational norms in intellectual life than we presently enjoy, and it also enables us to take advantage of the work of Peirce -- himself a master of inquiry in a number of different fields -- in developing analytical conceptions for this purpose.

By far the most effective kinds of inquiry that have been humanly devised are those that occur in research traditions of the sort which have developed in modern times, where mastery of the use of computational mechanisms and programs of the sort which Peter Skagestad is especially concerned with is embodied in practices, habits, and skills of the inquirers in the given tradition. These might be called the "material skills" of inquiry that have developed in the given field. Some of these will be field-specific but many will be common to a number of such fields, and some will be common to all. My own concerns come to a special focus, though, not on the material skills of the inquirers but rather on what I will call the "discursive skills" of inquiry, meaning by that the mastery of those practices, habits, and skills of discussion and communicational interaction generally -- such as e.g. asserting, suggesting, questioning, critical response and counter-response, objection and elaboration, etc. -- that control the flow of discourse in the context of inquiry, according to the communicational norms developed in the various research traditions.

Such practice-embodied norms constitute the distinctive forms of life of the devotees of such traditions and they include the use of the material skills that establish, through observation and experimentation, the interaction of the inquirers with their subject-matter, which must be shared communicationally with other inquirers in the same field in order to affect the field itself. The sort of Computational Intelligence Augmentation I am chiefly concerned with, then, is that which would be achieved by devising mechanisms and programs that would increase the effectiveness of the communicational norms which encourage successful inquiry as well as those which would facilitate inquiry into the norms themselves for the purpose of identifying those the conformity to which would indeed result in more successful inquiry. Any computational devices that could be helpful in this would qualify as a contribution to IA research of this particular kind.

 

6. Inquiry and Assertion

The support for this to be found in Peirce’s philosophy is in his theory of inquiry, which is the general framework he draws upon in developing his logic. Logic includes the development of notations and derivation techniques for deduction, and the development of methodologies of induction and abduction as well, but Peirce situates these traditional logical concerns within the framework of inquiry conceived, in effect, as a general theory of assertion. However, I am hesitant to call it that because it could be more misleading than helpful to do so in view of the way speech act theory, which was pioneered by Peirce, has been developed in the past century after his death, which has taken an importantly different approach to understanding what assertion is by minimizing the social aspect of the speech act. This is done by considering the role of the addressee of the act to be limited to whatever is implicit in recognizing the given speech act as being the sort of act it is. "Uptake" is the usual term for this sort of constitutive acknowledgement of the speech act as being of this type or that, and the role that assertional acts in particular actually do play in a community of inquirers is left undeveloped and relegated implicitly to the studies dictated by the special interests of the sociologist. This is not what Peirce had in mind in conceiving logic as a general theory of assertion, however.

If you are already acquainted with Peirce’s work you will know that he prefaced his first systematic account of the logic of science with a pair of essays -- "The Fixation of Belief" and "How to Make Our Ideas Clear" -- which situate logic in the narrower sense in which it is taught in logic classes within the general framework of a process of inquiry, which might roughly be described overall as follows: A particular inquiry process is not to be regarded as having an absolute moment in time when it first begins nor a moment in time when it completely and definitively ends, but is to be thought of rather as a formally non-terminating process (i.e. non-terminating at any identifiable time) in which the starting point of a given particular inquiry process falls within an ongoing discursive process which has become informed by two or more conflicting tendencies toward acceptance of something which, however, cannot be completed because to do so would be to accept two or more contradicting assertions of opinion at once. A given inquiry is constituted by the inability of the inquirers to resolve a disagreement about what is to be accepted. This disagreement will have come about as a result of previous understanding up to that point, and the overall direction of inquiry is given by the attempt to take such steps as are required to get past the initial impasse or aporia in order to arrive at a shared acceptance of results. This shared acceptance, if it occurs, will enable further inquiry into the same subject-matter to proceed, using, when relevant, whatever is accepted as the basis for achieving still further understanding of the subject-matter.

To regard logic as a theory of assertion is to take a certain perspective on the inquiry process, regarding it particularly from the point of view of the individual inquirer, considered as motivated qua member of that research community by the aim of making a contribution to the shared understanding of the subject-matter which has already developed within the research tradition. The act of assertion occurs when the individual inquirer, having prepared him/herself sufficiently to be willing to take the risk involved in doing so, actually attempts to capture the attention of others in the research field in such a way as to cause them to come to the same conclusion which he or she has already come to and thus to contribute to the research tradition by shaping it in the direction of an ultimately stable and shared understanding of the subject-matter. The occurrence of such an act, when it is recognized for what it is, is the intentional triggering of a complex set of non-terminating communicational obligations and permissions that apply not merely to the researcher making the assertion but to everyone in the research tradition addressed by the assertion. (This is a special kind of assertion, to be sure, because it occurs within the context of communication in an ongoing research community, but it may provide helpful clues to understanding what assertion is outside of this special context.)

Now, assertions can be made both in a serious and in a playful or at least nonserious spirit, the former being the case whenever the person making the assertion takes full responsibility for making a claim which, taken seriously by the others in the research community, will put upon them the obligation to take what has been claimed seriously enough to allow themselves to be persuaded to the conclusion which the claimant has already come to, if the claimant has actually made the case for it in a way that is found to be rationally persuasive. (By whom? By each member of the given research community taken distributively, i.e. taken one by one, as distinct from the membership regarded as a collectively constituted individual. The research community is not to be regarded as a collective entity.) Other obligations are involved as well. For example, the claimant is required to be sincere about actually having arrived at the conclusion him/herself; those addressed by the claim are obligated to make known to the claimant and to the research community any serious objections they have to the claim made in case they see a serious flaw in it and think it important enough to warn others about; anyone addressed by the claim -- i.e. any member of the research community -- is permitted to respond appropriately to the claim in any other way they see fit, insofar as it bears on the question of whether the claim should be accepted; the person making the claim is required to include enough information about the method of replication of results to enable it to be tested according to the claimant’s own specifications; the claimant is expected to have some explanation in case an objection is made to the effect that replication has been attempted but failed; and so on.

This describes what I am calling for the moment "serious" assertion, and it obviously plays a special role in the inquiry process because of the power of a seriously made research claim, regarded as such by all concerned, to affect the actual course of research in a given research community in virtue of its ability to impose such obligations on those in the same community. Assertion in this sense is, of course, the same as what is usually referred to as "publication". But the inquiry process is not simply a matter of being serious, in the sense just indicated, but also involves much -- indeed, far more -- communicational activity of a preparatory sort which also affects its outcome but does so differently because what is said is not asserted seriously in that sense and thus does not trigger the same rigid and rigorous obligations as serious assertion triggers. Seriousness, in this special sense, is not a matter of how anyone feels: people can, in a nonserious way, argue about matters with great passion and intensity of conviction as regards their opinion at that moment, but still be arguing nonseriously in that it is understood that what is being said is not to be taken as invoking the application of the rigid and rigorous communicational norms associated with what is identified as a serious claim to a research finding. What makes assertion serious, in the relevant sense, is the de facto recognition and acceptance of the intent that the special rules of discourse that constitute the obligations and permissions attendant to a serious research claim obtain, and this is not a matter of how one feels but of the willingness to accept the special communicational norms associated with such claims.

As research traditions have developed across time, various kinds of communicational practices have developed within them that in one way and another qualify them as nonserious in the sense indicated: for example, informal discussions of an occasional nature with research colleagues casually encountered, including correspondence by mail; more or less loosely structured group meetings of various kinds (which can range from local discussion groups with more or less set topics and discussion agendas, to international conferences, congresses, and the like); coordinated team efforts as part of complex research projects such as are becoming increasingly common in the hard sciences; messages and sometimes long and complex threads of discussion posted to public forums and newsgroups; and even self-communication, as when we are working out our ideas in momentary isolation from others in the tradition with which we identify ourselves, which can be regarded as limit cases of the social. I have no idea how many different sorts of communicational practices might turn out to be worth recognizing, but they will obviously vary greatly as regards the controlling norms governing what is regarded as communicationally appropriate, depending on what the communication is thought of as being supposed to accomplish in contributing to the general aim of learning more in breadth and depth about the subject-matter of the research tradition. Sometimes people are in need of an opportunity to try out new ideas simply in order to find out whether they are worth exploring further; sometimes they are in need of exposing their thinking to others to get some rapid critical feedback, negative or positive; sometimes ideas are being put forth in order to lay some groundwork for establishing a possible future claim to priority in discovery; sometimes certain things are being discussed simply because the participants think their overall view of the research topics that interest them is in need of vitalization by being set within a different context than usual; and so forth.

Which of these would be the most important as regards research aims? Are the cases of serious assertion, or what we call "publication" the most important? The answer is surely that one cannot make such a judgment a priori and apart from any context of actual concern or apart from an understanding of the extent to which the given research tradition is flourishing or is still in a stage wherein it is not clear where it is going yet. Sometimes a formal publication claim can be of the first importance. But a casual conversation in a hallway between a couple of unusually talented researchers might well make a far greater difference to the future of the given research tradition than any single act of publication does. Publication -- serious assertion in the sense indicated -- has a unique role in the process, which we will be considering further shortly, but "importance" is not the right word for it. Research is a kind of hunting activity, and to identify publication as the most important thing in research communication is like saying that the most important thing in the hunt is the coordinated attack on the prey at the close of the hunt, which is no doubt true in some cases but cannot be said to be generally true inasmuch as the complex process of hunting may well involve activities preliminary to the climactic attempt to capture or kill the prey (and be only loosely connected with it) which are actually far more important in its success than the acts of actually attacking or capturing it.

In what follows I will be illustrating what I have in mind by Intelligence Augmentation of this special type by reference to a concrete case of unusual interest, namely, the automated publication system devised by the physicist Paul Ginsparg (high energy theoretical physics) for the benefit of his own research community and several others closely associated with it. The special interest that attaches to it is due in part to the fact that understanding it requires recognizing the need to distinguish between serious assertion or formal publication and other kinds of communication that occur in the course of research. It will be important to bear in mind in understanding the case, though, that it is not being used by me as a paradigm of research communication in general, but rather because of the way in which it illustrates the special role which formal publication has come to play in research, and also because it exposes the massive confusion that presently exists in the general understanding of how critical control actually works in research communication, which is based largely on a misunderstanding of the nature and function of peer review.

 

7. The Way the Ginsparg Publication System Works

Let us turn now to the case of the automated archive and server system for pre-print distribution of publication in high energy theoretical physics, and in several related fields in physics, astronomy, and mathematics, which was first developed at the Los Alamos National Laboratory by the physicist Paul Ginsparg some twelve years or so ago. The system was recently moved to Cornell University when Ginsparg took a position there, and the official name of it now is simply "arXiv" -- the "X" is a visual pun on the Greek letter chi -- but I will refer to it here as "the Ginsparg system" in order to keep the focus on the work of Ginsparg in setting it up, which is the instance of IA application of special interest to us here. Since inquiry is a form of learning, the success of which is an increase in the understanding of things, anything which contributes to the efficiency and effectiveness of inquiry is ipso facto an augmentation of intelligence. The interest in Ginsparg's work does not lie, however, in any special sophistication or novelty involved in the programming, considered simply as computer programming, but rather in the way the programming was developed as a material support for communication governed by certain controlling norms believed to be conducive to the furtherance of inquiry in the fields it was originally intended to serve. These norms are those just discussed above as those governing what I have thus far referred to as "serious assertion" or "formal publication". For purposes of discussing this case, it would be preferable, I think, to introduce my own term of art for it, and call it "primary publication" rather than "formal publication", because the publication procedure followed in this case will seem so informal as to make the term "formal publication" seem inappropriate.

The way the Ginsparg system works is simple. If one wants to make a claim to a research result to one's research peers in the field in question, one writes up the claim being made and the basis for it, considered as a conclusion, in the form commonly understood to be dictated by whatever would be required for purposes of testing or replication, whether that involves an appeal to a priori reasoning, as in the case of mathematical proof claims, or to observational or experimental procedures. The generic form of all such papers can be described quite specifically, if necessary, but there is no need for our immediate purpose to say more than that there is nothing unusual about the expectations of the people in the fields which use the Ginsparg system as their medium of primary publication as regards the form they expect such publications to take, which does not differ significantly from the form which primary publication takes in any other research field. The archive is programmed to accept several special formats, such as PostScript, PDF, LaTeX, and HTML. It is left up to the person depositing the paper to do the formatting and encoding required (or to arrange for having it done properly).

In addition to the paper itself, one also prepares an accompanying abstract, usually involving the use of topical key words, and one deposits both paper and abstract in the archive. The abstract -- not the paper -- is then automatically distributed by email to those users of the system who have previously indicated, by a description of their own research interests, that they are interested in reading all papers that might contain material especially pertinent to their research concerns. (Since the archive is divided up by fields and subfields, one might simply signify to the machine that one is interested in any abstracts deposited that pertain to one's field.) If a reader of the abstract decides that the paper it describes might be of interest, then he or she can click on a link which will cause the whole paper to be sent to them or downloaded by them. The entire process of deposit, notification, and retrieval of papers is automated.
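To make the workflow concrete, the following is a minimal sketch, in Python, of the kind of automated deposit, notification, and retrieval loop just described. It is not Ginsparg's actual software, nor the current arXiv interface; the names used here (Archive, Paper, Subscriber, deposit, retrieve, and so on) are hypothetical, introduced only to illustrate the three automated steps.

# Hypothetical sketch of an automated deposit / notify / retrieve loop.
# Not the actual arXiv software; names and data structures are invented
# for illustration only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Paper:
    paper_id: str        # hypothetical identifier, e.g. "hep-th/0000001"
    field_code: str      # field or subfield code used for routing abstracts
    title: str
    abstract: str
    source: bytes        # the formatted paper itself (e.g. TeX or PDF bytes)


@dataclass
class Subscriber:
    email: str
    interests: List[str]  # field codes the reader has asked to follow


@dataclass
class Archive:
    papers: Dict[str, Paper] = field(default_factory=dict)
    subscribers: List[Subscriber] = field(default_factory=list)
    # mail delivery is injected so the sketch runs without a mail server
    send_mail: Callable[[str, str], None] = lambda to, body: print(f"To {to}:\n{body}\n")

    def deposit(self, paper: Paper) -> None:
        """Step 1: the author deposits paper and abstract; there is no editorial gate."""
        self.papers[paper.paper_id] = paper
        self._notify(paper)

    def _notify(self, paper: Paper) -> None:
        """Step 2: only the abstract is mailed, and only to subscribers whose
        declared interests match the paper's field; the paper itself is not sent."""
        for sub in self.subscribers:
            if paper.field_code in sub.interests:
                body = (f"New deposit {paper.paper_id} ({paper.field_code})\n"
                        f"{paper.title}\n\n{paper.abstract}")
                self.send_mail(sub.email, body)

    def retrieve(self, paper_id: str) -> bytes:
        """Step 3: any reader may download the full paper on demand."""
        return self.papers[paper_id].source


if __name__ == "__main__":
    archive = Archive()
    archive.subscribers.append(Subscriber("reader@example.org", ["hep-th"]))
    archive.deposit(Paper(paper_id="hep-th/0000001", field_code="hep-th",
                          title="A hypothetical result",
                          abstract="Short abstract with key words for interested readers.",
                          source=b"%PDF-..."))
    print(len(archive.retrieve("hep-th/0000001")), "bytes retrieved")

On this sketch, the formal reply discussed in the next paragraph would simply be another deposit with its own abstract, so that the same notification machinery alerts interested readers to the critical response.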

If one disagrees with the claim made and it seems important enough to do so formally, then one can deposit a reply to it in the same place which will also be formally correct, as that is generally understood in the field in question. Thus critical dialogical interchanges can occur in the system which are of the same general type as those which might occur in a traditional professional journal which permits "replies" as a normal part of the publication process. But it is important to understand that the arrangement is not conducive to the kind of informal discussion typical of, say, a listserver based forum or an organized discussion group or among the members of a special project team, or a "bulletin board" or "news" group discussion, much less with the kind of discussion that might occur on a "chat" line. Inappropriate responses might well be made and deposited in the archive -- there is nothing which precludes this -- but the system is designed to discourage that by making it necessary to deposit an abstract if one wants others in the field to know that one has made a reply. This helps to insure in practice a kind of formality which is of the essence of what I am calling "primary publication". Too much is at stake professionally in what appears under such an understanding to make the communicational mores of, say, an informal discussion group appropriate. In the case of the Ginsparg archive and server system there is no "policing" to insure this, as it has been found not to be necessary.

 

8. The Obscured Importance of the Ginsparg System

From a narrow point of view, the Ginsparg archive-and-server system is nothing more than an automated form of a system of communication that had already existed for decades in the research fields it serves, namely, the practice of distributing copies of pre-prints to others in the field, meaning by "pre-prints" papers embodying primary publication claims distributed to research peers prior to their appearance as papers published in the editorially controlled journals in the field, often prior to submission to such media of publication, and sometimes not even destined to be submitted. Pre-prints are not the same as drafts, though, since the term "draft" implies a lack of polish and completeness that would be inappropriate in something distributed as a pre-print. On the other hand, something regardable as a pre-print can also be regarded as a revisable version, and most pre-prints that are subsequently published in a journal are probably going to be revised to some extent before they appear there, even if only at the behest of the journal editor, who is often under pressure to economize on the space occupied by the paper in the journal.

Before the establishment of the Ginsparg system at Los Alamos, pre-print distribution usually meant distribution only to those well-enough connected professionally to be on the mailing list for distribution of preprints by those at the "leading edge" in the field, which of course tended to insure that those on the distribution list would be strongly advantaged thereby in their professional success as researchers. Thus there were actually two distinct venues of primary publication in such fields: the pre-print distribution system and the system of editorially controlled and "peer-reviewed" professional journals, corresponding to the distinction between well-connected and thus advantaged researchers and those not-so-well-connected and thus not in position to participate in leading edge research. The time delay involved in publication in the professional journals usually meant that, by the time those who depended on the journal literature for understanding what was at the "leading edge" could find out what was happening there, the edge would already have moved on to other matters. Any field which puts great stress on priority of discovery will tend to resort to pre-print distribution as a means of primary publication unless there is something that hinders it, and the domination of the direction of research in many fields by those in the privileged position of being able to participate in primary publication of this kind -- sometimes discussed in terms of the domination of research by "invisible colleges" of the communicationally privileged -- was a matter of growing concern in the sciences by the time Ginsparg established his automated and unrestrictedly accessible pre-print server system at Los Alamos.

Ginsparg and his associates seem to have been aware from the beginning that something of potentially momentous importance had been accomplished by the relatively simple act of installing the archive and server system on the internet with a policy of unrestricted access to deposit and retrieval. Judging from such discussion of this as I am acquainted with, the most important thing for them seems to have been that in adopting this new system they were making a transition from a system of publication which was primarily serving the special interests of just those physicists who, like themselves, happened to be in the advantaged "in-group", to a system serving the needs of all physicists the world over who are capable of accessing the internet, even if only at a minimum level of efficiency, without limitations based on special qualification or collegial connections. I will refer to this as the cosmopolitan motive in their idealism.

At the same time, though, they seem to have understood that something else was being accomplished as well which had to do somehow with an exposé of the peer review practices of the journals as being impertinent to the critical control of leading edge research. Since it is part of the received and conventional wisdom that peer review is the one thing that insures that "standards of quality" will be recognized in research and in control of publication, their typically contemptuous dismissal of it as impertinent was construed by many people as dangerously subversive of science and scholarship, especially in view of the fact that the scientific disciplines from which it was emanating are high on the scale of professional prestige and thus cannot simply be written off as complaints of the sort to be expected from people who can’t meet the supposedly high standards of peer review. This can be regarded as the anti-authoritarian aspect of their idealism, not because they explicitly construe it in that way but because it is in fact a rejection of the authoritarian conception of the role of peer review in research, and I think they have had some understanding of that even though I find no attempt to think the concept of peer review through to figure out what, exactly, is or is not happening in it and what the basis for critical control actually is or should be.

Thus Ginsparg and his associates, who created and developed the publication system, took a highly idealistic view of it for the reasons just indicated, and this idealistic zeal initially took the form of claiming that what they had accomplished at Los Alamos for their own fields could be accomplished across the board in the sciences, and not only there but in research traditions generally. Time does not permit a description here of what has happened in the past few years as this idealistic zeal ran into increasingly hardened resistance, which finally took the form of a deflationary rhetoric which has been highly successful, at least temporarily, in inducing a kind of obscurantist confusion about the Ginsparg publication system that has now largely silenced it as a reform movement. This was achieved by invoking and promulgating a certain important misunderstanding about the nature of peer review while at the same time forbidding the discussion of peer review reform in the most influential public forum devoted to the topic of free on-line scholarship, which effectively reduced the apparent significance of the success of this system of publication to a minimum by encouraging a refusal to recognize the Ginsparg system as a system of primary publication.

When the existence of the Ginsparg system became widely known, beginning some five or six years ago, it generated much "viewing with alarm", and dire predictions about the inevitable decline in quality of research in the fields using the system were common. It seems reasonably clear by now, however, that this predicted decline has not occurred and these pessimistic assessments seem to have given way generally to an admission, sometimes grudging, that it does seem to be working for those fields for which it was originally designed. On the other hand, it has also become increasingly clear that there is no tendency yet toward general adoption of it as a model for publication practices in the sciences generally, as Ginsparg had once thought might occur, much less toward emulation of it in scientific and scholarly research publication generally. Consequently, the initial interest in it as a revolutionary new internet-enabled publication system has now all but disappeared. Indeed, as I remarked above, it is now commonly regarded as not being a publication system at all, notwithstanding the fact that it has continued to be the chief system for primary publication -- as defined here -- in those fields which it was originally invented to serve. Yet the only value of it for publication practices generally is now usually considered to lie in the fact that it has provided the model for the development of internet archival systems of a type which might be replicated at any number of different nodal points on the internet; university based archives of this type are now being touted as ideal nodal replications of it. The virtue of such archives is that anything deposited at any one of them becomes ipso facto available as a document in a single world-wide virtual database of such documents, which can be searched and subjected generally to programs designed for purposes of retrieval of material from it, for keeping track of what is there as any master library must do, and for purposes of analysis of the documents it contains in the interest of sorting them out and describing them according to any number of different sets of criteria corresponding to various interests which someone might have in them as publications with a history.

Thus although the disinformation about the system as a publication system has had no effect on its use in the fields for which it was originally designed, which still continue to flourish with the use of it, it has effectively diverted attention from its idealistic aspect and from the potentialities for encouraging reform implicit in the automated system. The major significance of it has come to appear (quite misleadingly) to be only that it is an example of how it is possible to make the transition from paper-based journal publication to on-line publication without raising any reform issues that might disturb the already prevailing systems of hegemony exercised by the various institutions and cabals that control research by supporting and controlling publication. With this, the significance of the success of the Ginsparg archive for the development of what is potentially a very important part of IA research has been obscured in such a way as to amount to a kind of "dumbing down" of our understanding of the conditions of success in scientific and scholarly research. To reverse this it is necessary to insist upon the challenge which the Ginsparg system has posed and continues to pose for peer review as that is presently understood.

 

9. The Concept of a Peer

I should emphasize that the view being proposed here is not in opposition to regarding peer review as of fundamental importance in the critical control of research. The view is rather that what has come to be called "peer review" is not peer review proper but rather a crippled form of it which is not only of limited value as a critical control principle at best but is also a kind of subversion of the peer principle that underlies the practice of authentic peer review, because it treats peer review as, in effect, a system of elite control, which is directly contrary to the conception of a peer. According to my view of the matter, the working of authentic peer review is in fact best observed in action by studying the practices exemplified paradigmatically by the Ginsparg system (or any equivalent system) of primary publication. When I first became interested in this issue, I thought that it would be best not to disturb the present usage of the term "peer review" as referring to editorially commissioned pre-publication peer review, especially since the early enthusiasts for the Ginsparg publication system typically regarded peer review, in that sense, as of little real importance because leading edge research seemed to have little use for it. I have since realized, though, that since it is respect for the peer principle that lies at the basis of the critical control of research communication, this was the rhetorical mistake that has enabled those bent on obscuring the significance of the success of the Ginsparg system to do so by denying that it has the status which it actually does have as a venue for primary publication.

To get a clear understanding of what peer review is and why it is regarded as so fundamental in the critical control of research, we have to understand first why it is thought important that the acceptance of research claims in a given field be something that happens in consequence of peer assessment of the claims made. A research peer is, of course, an equal. To be more exact, a peer is a presumptive equal: not someone who has been demonstrated to be de facto equal in this or that respect, but rather someone whose informed opinion about the subject-matter of research is regarded, presumptively, as important because competent enough, prima facie, that any disagreement between one's own opinion and the peer's yields a situation in which both opinions cannot be true, yet neither can be decisive as to which of them is mistaken in virtue of one of them having superior or authoritative status in such matters. There is no relationship of authority among peers -- unless, of course, we are talking about Animal Farm, where some peers are more peerish than others.

What is presently defended as peer review in publication is actually Animal Farm "peerhood" as a critical control practice, in which a class of persons is systematically afforded a procedural status that positions them as functionally authoritative while nominally seeming to be mere peers in service to their peers. The class in question is not, however, the class of commissioned peer reviewers but rather the class of editors in control of the various media available as venues, who commission peer reviewers and decide what weight, if any, to put upon their opinion in the process of their own decision-making about the acceptability, revision, and actual publication of research claims. This is not said as a preliminary to a general negative criticism of the role of editors, whose selective and organizing function in research is indispensable and who deserve far more appreciation for their efforts than they commonly receive. The point is rather that it is lack of appreciation of the editorial function in research communication which is at the root of the misunderstandings about peer review, and which is responsible also for the inability of would-be reformers to perceive the real import of the success of the Ginsparg publication system, namely, that when a research tradition has reached a mature state it does not require editorial leadership at the leading edge of research in the field, and that, conversely, when a research tradition is unable to make effective use of such a system it may be because the need for editorial guidance at that point in the research process is too great for the people in the field to function effectively in the authority-free communicational environment provided by the Ginsparg system of primary publication.

There are many different reasons why a given research field may be incapable of making effective use of an authority-free primary publication environment of this sort. For example, it may be that an "invisible college" is in exclusive possession of the leading edge, and the appeal of their own self-interest is simply too great for those so privileged to want to take advantage of the opportunity to make the radically egalitarian move that Ginsparg and his associates made in establishing their automated and unrestrictedly available publication system at Los Alamos. There was surely a gamble there, and there must have been a substantial number of physicists among those who adopted the new system who were at first resistant to the establishment of the open-access pre-print server, in the belief that idealism is nice but that the quality of the work which would appear there, under conditions of unrestricted access, could only bring about a decline in the field. It seems reasonable to suppose that this will be the case with at least some fields that probably could successfully adopt the Ginsparg system now but lack enough boldness of leadership among their most respected figures to make the transition from the protected environment to which they are accustomed to one that they can only "view with alarm".

Or it might be that the field is one in which the funding arrangements are such that much important research must be kept secret, and primary publication must be carefully and skillfully censored to ensure that nothing is discussed in it that could jeopardize the relationship with major funding sources for the field through unintended "security" violations, whether commercial or governmental in character. The increasing infestation of research by private interests in some fields, and the pretext the "war on terror" provides for shutting down open research in others, are no doubt sufficient to explain why a number of fields cannot possibly make use of such a system and depend to a high degree on editors in the role of censors.

Or it might be that the field or subfield is simply too inchoate and globally unfocused for a primary publication medium such as the Ginsparg system to be regarded as of much value as a venue for contributions. For such a field it could only be a collection of papers that might or might not be of interest, but it would have no rationale as a collection, since there would be no dialogical process to which the papers would be contributions.

The concept of a peer appears in many different contexts in modern society. A familiar example of the way peer status works is the case in which a physician is asked by another physician's patient for a second opinion. Physicians usually do not object to a patient's request for a second opinion provided it is understood that both opinions are on a par as professional assessments, in the sense that the second opinion is simply one more opinion to be duly considered rather than a definitive or determining opinion relative to the first: there is and can be no general presumption in favor of one peer's opinion over another's based on the status of the physician. They are in that sense equal. This does not imply, of course, that one of them may not make a far better case than the other, but that is something the patient has to judge for himself. In case the two opinions conflict, the question of which to follow cannot be settled by turning to a third physician who will settle the matter by telling the patient which is right: all that the third can do is to offer a third opinion, on a par with the other two, and if it agrees with one and not the other, there is no implication that the opinion shared by two of the three is the better one in virtue of that. In other words, there is no authority status recognized as obtaining among physicians, all of whom are on a par -- are peers -- in this sense. This obviously does not mean that one physician cannot or should not regard another as having better judgment than himself, but that is very different from one of them having an acknowledged authoritative status, not possessed by the other, that gives his opinion formally decisive force in determining that the other is wrong. In general, there are no authorities among peers, no superiors or inferiors. Recognition of peer status is a procedural matter, not recognition of a matter of fact.

If this is so, the question is, then, why the egalitarian conception of a research peer should be regarded as a part of the normative rationale of research, as it has in fact come to be conceived in modern times. Although this cannot be addressed here in depth and detail, at least this much can be said, namely, that the adoption of this norm or practice in inquiry is based on the underlying assumption that in perceptual interaction with the subject-matter -- in experience of it, in other words -- the subject-matter itself will compel us to a belief or conviction about itself, provided we have made ourselves properly receptive to it conceptually and perceptually. The assumption is, in other words, that there must be a causal relationship between the subject-matter of research and the researcher in which the researcher is passive in the sense of receiving the action of the object, such that the researcher's convictions are shaped by the subject-matter itself. A common-sense illustration: what color is a certain object that is presently outside my range of vision? I take steps to observe it, and when I do this I see that it is, let us say, red. I can think anything I want, but the object itself insists upon being red whether I like it or not. Experience is what interaction with the object impresses upon you, what you emerge from your encounter with the object as having learned from it.

Now, real things are faceted in the sense that they are perceivable from multiple complementary points of view, each of which is a facet or aspect of the appearance of the same thing. As the perceiver varies in his or her relationship to the object the shift in perspective or point of view reveals other facets of the object, each of which must be taken duly into account and reconciled with the rest as different facets or aspects of the same thing. The reason we must respect others as our peers in inquiry into things is that we cannot possibly build up an adequate general understanding of our subject-matter in a research field without trusting the basic competence of others in the field except where we have definite reason for doubting it, provided we have some prima facie reason for supposing that competence to exist. Otherwise our attention would have to be constantly diverted into an investigation of the competence of each of our colleagues rather than into the subject-matter. A peer is -- logically regarded -- equivalent to a respected perspective (or set of perspectives) on the subject-matter, and to treat a peer as other than an equivalent of oneself -- whether as a superior or as an inferior -- is to derange the coordination of perspectives which is the constant task of the ongoing science.

[THE DRAFT IS INCOMPLETE, WITH THE CONCLUDING TWO SECTIONS OMITTED HERE]

 

NOTES





