The virtual brain may be the central icon of our time, the image that best represents our sense of ourselves and the leading edge of imaginary collective definitions of the human. The virtual brain is the cumulative effect garnered from several sources, the by-product or offspring of multiple convergent factors.
On the one hand, brain research at the physiological level is making sufficient inroads to give the sense that we can generate a working model of the brain. On the other hand, there are always quantitative limitations on that knowledge, simply because of the sheer orders of magnitude involved in simulating brain operations, and so the models remain only potential explanatory metaphors. The slack left by this limitation in precision is picked up by the immense signifying power of computer graphics: from the microbiological to the cosmological, we tend to imagine physical events and processes in terms of the bright colors and seductive patterns of digitally generated diagrams.
The overarching environment in which the virtual brain persists is both conceptually and technologically inflected. The conceptual dimension is provided by a general shift to a "bottom-up" model of brain processes in terms of self-organization, with thought structures and capabilities understood as emergent properties (Rotman). The brain is seen as an immensely parallel, distributed hierarchy of sub-systems that are interconnected in exponentially complex ways. Conscious thought, emotion, and other cognitive behaviors are something like macro-scale outcomes of multiply layered series of microscopic components.
Our image of the brain gets its pervasive technological inflection in subtle ways. Within the rich play of the many metaphors and models that compare the brain to a computer, there emerges a sort of nexus where our image of the brain can only be made visible or accessible as a virtual image, something whose texture and form are indelibly imprinted by the computer's graphic powers. Put differently, the virtual realm of the computer provides both the conceptual space and graphic environment where the bottom-up approaches to natural, physical, and artificial phenomena become pictures worth gigabytes of text: it is as if the arguments about whether or not the brain can be simulated by a computer are subsumed by the fact that we already project the brain as an object in virtual space - we transpose the wetware of neurophysiology into the software of connectionist networks.
And, as a final turn on the virtual brain, we discover that even when the brain as such is not the explicit target of inquiry, it provides the unstated Ur-model, it marks the standard by which all other things may be judged and to which they might aspire. The popular accounts of complexity theory as it is explored at the Santa Fe Institute, for instance, constantly cross the lines between biological organism and algorithm, ecosystem and simulation, material bodies and network configurations. Complex systems, the narrative runs, persist at "the edge of chaos" - balancing global stability with local perturbance. Stuart Kauffman argues that evolutionary selection targets complexity (and is partially inflected by it) because complex systems are more robust and continue to evolve in flexible ways. What is interesting here, though, is how the defining characteristics or advantages of complex systems sound like an apology for the brain, a proclamation of its unique features. Kauffman argues that complex systems draw evolutionary advantage from their ability "to perform extremely complex computations" that then allow for "more complicated dynamics involving the complex coordination of activities throughout a network" (82). Almost all accounts of the brain begin with the point that it is precisely the entangled, parallel complexity of operations going on in the brain all the time that must be explained, and then voice their wonder at how these operations are interconnected and synchronized. We reach a juncture where information-processing capacity, embodied (!) by the dynamics of a network, signifies evolutionary stability - or, as Artificial Life proponent Chris Langton puts it, "the edge of chaos is where information gets its foot in the door in the physical world, where it gets the upper hand over energy" (Lewin 51). 
Once again, this claim underwrites several kinds of brain study - that the brain is the central control office for the body, that the grey cells transmitting signals convey information that then shapes the physical world.
The technographical brain persists as a nexus of relations between our mind and the different tools we use to write and the different physical scenes created by writing machines. This notion of the brain follows Gregory Bateson, who conceived of mind as both external and internal - mind is a collection of relations between differences that produce information, and this information lies in and is transmitted through pathways within the body and brain, but also between body, brain, and world (see "Substance, Form and Difference," in Steps to an Ecology of Mind).
The technographical brain narrows the parameters of this concept to examine the ways in which the changing tools we use to write with effect physical changes in our brains. The technographical brain takes as its premise Merlin Donald's idea that "We act in cognitive collectivities, in symbiosis with external memory systems. As we develop new external symbolic configurations and modalities, we reconfigure our own mental architecture in nontrivial ways" (382). This premise informs Vannevar Bush's work, particularly the 1945 article often invoked as an origin for hypertext. Bush envisioned that information retrieval networks could combat the explosion of information available, and imagined a device he called the "memex" that would be a person's own technology for storing relevant textual information. Bush called the memex "an enlarged intimate supplement to [a person's] memory" (cited in Landow 15).
For both Donald and Bush, then, technographical machines represent external devices that in effect expand the boundaries of the brain, for they serve as extensions or prostheses of the brain that in a recursive turn then induce subtle changes in the brain's very organization.
To write an account of the technographical brain in a contemporary context, we would need to yoke together analyses of how the computer as a scene of writing shifts our relationship to language with different discourses about the brain and the evolution of its internal structures, or the history of changing ideas about that physiology. Our disposition toward the technographical brain will be shaped largely by our sense of how writing machines bear on the writer's relation to text. For instance, the effect of the typewriter has been a subject of debate: McLuhan's optimistic proclamations that technology acts as "extensions of man" and that the typewriter integrated the functions of writing, speech, and publication are contrasted by recent accounts such as that of Friedrich Kittler, who sees the instrument as producing an effect of displacement, because it sets text off in a separate, windowed space from the hand, giving it a disembodied dynamic of its own.
The same battle lines are being drawn with respect to technography now, from Richard Lanham's championing of the "electronic word" as a democratizing instrument of empowerment and creativity that can unify the "arts" to a series of critiques of "virtual realities and their discontents" (Markley) that explore the reinscription of Cartesian metaphysics in cyberspatial theorizings.
The technographical brain may be seen as a mutation of what we could term the narrative brain. The narrative brain alludes to the central place accorded the capacity to narrate in studies of the brain. We see this capacity implicated in Daniel Dennett's notion of the "multiple drafts model" for how consciousness results from an intricate filtering and sifting activity across several hierarchically distributed levels of neural processing. In this account, the stream of consciousness William James gave us as a sense of our thought processes is replaced by a dynamic of authorial selection, done by a de-centered author - that author not as central intelligence agent but as a fiction, an illusion of unity imposed retrospectively on the selected result of these entwined, parallel operations.
The technographical brain is also being generated by the impulse in some discourse on hypertext to argue that hypertext is more congruent with contemporary conceptions of mind. Landow sees Bush as a pioneer in imagining "machines that work according to analogy and association, machines that capture the anarchic brilliance of human imagination" (18). In essence, we find increasingly that hypertextual writing in electronic environments is seen as somehow more "natural" because it recapitulates the structures and patterns of thought, that it is more congruent or in synch with them. And so, as our textual activities and products become like brains, once more we find the desire for transparency: for a transparent relation or at least formal homology between the structuring principles of hypertext and those of the brain. Such formalizing abstractions frequently figure the virtual as an immaterial realm, and collapse the idea of "text" into that of its medium (Grusin).
The technographical brain, then, should remain a potential configuration only, irreducibly distributed among the relations between brains, bodies, technographic tools, and a medium of expression - one also in turn demarcated by the constraints of its physical means (a charcoal pencil) or its software (the design of Storyspace).
For some, this hybrid - the technographic book - will sound like an oxymoron, as it evokes the persistence of the book in the technographic age. But this oxymoronic quality is intrinsic to the nature of certain novels that inscribe in their narratives the end of the book. The essential conflict between the two terms stems from the way in which the technographic medium changes the way that a reader inhabits and navigates a textual space. In hypertext environments, for example, we enter into a "non-linear" setting, meaning that we can alter the sequence in which we read different lexia, choosing to pursue different possible links among them. The entire mode of both clicking on words and reading them on screen entails a phenomenology that has yet to be described in adequate terms, but will surely change our sense of the way that fictions induce us to create imaginary worlds from our embodied setting. The non-linear clicking through lexia contrasts sharply with the ostensibly fixed sequence of printed words. A crucial corollary issue surrounds the way that hypertexts cannot be said to exist as a single whole, because we refashion them each time we read; whereas the book as an object already confers a larger sense of "unity" on its pages. This unity also exists at the abstract level of "form" - the formalist values and tenets about books instilled by New Critics persist still in the educational processes by which we are socialized into the world of literature.
The technographical book, then, signals a sharp shift in the novel's sense of its own identity, in a direction consistent with several formal features of hypertext or other forms of electronic writing. To take a specific example, consider the way that Dominic Di Bernardi begins his Afterword to Jacques Roubaud's novel, which he entitles "The Great Fire of London and the Destruction of the Book": "It would be no exaggeration to state at the outset that Jacques Roubaud's The Great Fire of London will be one of the last books of its kind. Despite his self-proclaimed status as Homo lisens, a man who reads, and a lover and reader of books above all, the author elaborates a strategy both for reading and for text presentation that for all practical purposes makes paper-print obsolete. Yet this obsolescence is informed by a passionately pursued dedication to the most time-honored literary traditions, both in written narrative form and their earliest manifestations as transcription of orally composed verse" (323).
Di Bernardi goes on to remark several salient points about the technographical book. He notes that there are several novels that absolutely insist on readings that refuse to follow linear sequence or any traditional trajectory of plot, including Cortazar's Hopscotch, Pavic's Dictionary of the Khazars, Julian Rios's Larva: A Midsummer Night's Babel, Butor's Boomerang, and Calvino's If on a winter night a traveler. Such formal innovations then engender "a new way of actually inhabiting the space of the book, of using the book as an object" (326). But what is interesting, Di Bernardi points out, is the immense bookishness that characterizes these post-print novels - the interlaced textual histories of Pavic's Khazar nation, the translation and publication world in Calvino, the commentaries supplied by Rios's Herr Doctor character. And so this move toward a mode of text writing and reading, seemingly so bent on leaving the world of the book behind, reinscribes a sort of graphophilia.
These technographic books seem the nearly inevitable extrapolation of the metafictional trajectory we can trace in the novel throughout the second half of the 20th century. Technographic books emerge as the history of writing itself comes to be rewritten with a new urgency, within the annals of literary theory, in the novel, and in historical accounts of print and computer textuality alike. And so as the novel became conscious of itself as not only a medium, but as a sort of ontological play where textual worlds came into existence, it began to reflect on its own history from a viewpoint that took a retrospective turn.
One way that this theoretical swerve manifests itself is in the consistent sounding of the theme of lost or destroyed texts that we find in technographic books. From the hilarity of the reader's search for one novel after another in Calvino to the spurious versions of the Khazar dictionary, we discover the theme of the mutilated text replicating the technographic book's own sense of the end of the novel. At a more complex philosophical level, we find a new inscription of the link between writing and forgetting handed down from the Egyptians and Plato. Roubaud writes his book in dedicated memory to his beloved dead wife, but nonetheless envisions in his novel a great fire that will burn a city of books, signalling what Di Bernardi calls "the destruction of all memory," in a novel "whose interweaving electrographic flames and 'luminous script' erect a monument to its obliteration" (330).
This dynamic of writing and forgetting that plays itself out in the technographic book that leaves a world of print novel behind (even as it remains in that world, of course) - this dynamic is a crucial complement to the whole ideal of technography as an extension of memory. The technographic brain finds an origin myth in Bush's memex, a personalized electronic memory bank/text. However, it is a simple fact that the storage of exponentially increasing amounts of information and the increasing access we supposedly have to it through computer searches also entail an increasing entropy of cultural memory: we will have more and more to forget because the percentage of what we can retrieve will be so small.
The technographic book, then, fulfils the cultural function of expressing several dimensions of writing in the information age that have yet to be fully recognized. It thus finally may simply carry on the tradition of the novel - the novel construed as, in Salman Rushdie's words, "the most freakish, hybrid and metamorphic of forms" (425). If we want a simple term to build around, perhaps we could use Calvino's brief apology for the hypernovel in Six Memos for the Next Millennium - the novel that sets out a series of potential novels, that can be configured in any number of ways, that we can traverse in different directions and that encourages us to continue writing its stories outside the covers of its book-skin.
Paul Harris teaches at Loyola Marymount University. He is completing a study of time and narrative in twentieth-century literature and science.