
Sunday, February 26, 2012

Rethinking negative reinforcement

Animals in a state of relief.
As I wrote in my last post regarding the increasingly fuzzy distinction between classical and operant conditioning, old terminology can hobble new thinking, and given how awkward the language of behaviorism was at its inception, we shouldn't be surprised to discover how creaky it has become in its dotage. (The surprise lies in its holding up at all!) The positive/negative confusion has never really cleared up for many lay training students (positive punishment? WTF?), and no term has given people more trouble than "negative reinforcement," which bundles all the paradoxes and blurred connotations of behaviorist theory into seven dry-sounding but intellectually and emotionally fraught syllables. Technically, it's negative because it describes the removal of some "thing" (which may not be a thing at all). Colloquially, it's negative because the thing that gets removed needs to be nasty or at least unpleasant in order for its removal to be reinforcing, and so the deliberate use of negative reinforcement implies (and carries the ghost of) the deliberate introduction of nasty or unpleasant things, i.e., positive punishment. That's the theoretical tangle as clearly as I can state it (not very!), and it has significant consequences in practice, as teachers and trainers line up on either side of the R+/R- divide (and take occasional potshots at each other over the crevasse that yawns between them).

Does the theory still encompass what we know of reality? Do the terms describe with satisfactory accuracy our growing knowledge of how animals learn? On the contrary, they appear to be bursting at the seams. We're patching as fast as we can right now, but I think our best hope of finding our way to a new kind of coherence (to a description of teaching and learning that covers our collective butts once again) may be to pick at the threads where they're coming unraveled. To combine my canyon and sewing metaphors, these may become the ropes that swing us over the training divide. (Ack.)

Some of the most exciting work in contemporary learning theory is being done by scientists and practitioners (e.g. teachers and trainers) who dare to test the boundaries between behaviorism and humanism; between the body and the mind; between emotion and thought; between psychology, ethology, and neuroscience; between biological and historical accounts of the past; between objective and subjective accounts of the present. On the scholarly and/or scientific side, Frans de Waal, Sarah Blaffer Hrdy, Oliver Sacks, Irene Pepperberg, Marc Bekoff, Mihaly Csikszentmihalyi, Alison Gopnik, Timothy Wilson, Gerd Gigerenzer, Antonio Damasio, Daniel Kahneman, and V. S. Ramachandran are some of the great "unravelers" I've encountered (if only on the page), and Jaak Panksepp seems like someone who might actually help us knit a new pattern.

But I think all of us who practice learning theory with focused intent and honest reflection can contribute significantly to the radical revision now underway, and a re-examination of the R+/R- split could be an excellent place to begin. I'm not prepared to say that, as a philosophical distinction, it's totally illusory (I'd like to tackle that question in another post), but as a scientific distinction, it may be. This is one of many places where Jaak Panksepp's work is so fascinating and potentially useful, as he's been investigating the physiological and neurochemical bases of approach and avoidance, of appetite and satisfaction, of aversion and reward. I look forward to the publication of his promised book for the lay reader, because I hope it will make his insights more widely accessible. (Temple Grandin's Animals in Translation remains the best introduction to his ideas for the general reader, as far as I know.) In the meantime, I've been making my way very slowly through Affective Neuroscience and highly recommend it despite its density. I hope I don't distort its content too badly here!

In his book, Panksepp describes a discrete number of affective (emotional) processes whose physiological coherence is marked enough that he is comfortable labeling them "systems." These are activated and expressed in more or less predictable ways by animals of diverse species, and we can guess from our common evolutionary history that there are also strong similarities in how they are subjectively experienced. Panksepp is keen to avoid Skinner's mistake of choosing his terms in opposition to common parlance, so he simply capitalizes the colloquial names for these primal emotions/processes to denote their technical use: FEAR, PANIC, RAGE, and SEEKING. While this group may appear heavily weighted to the unpleasant, the SEEKING system encompasses many varieties of pleasurable anticipation.

If I understand him correctly, Panksepp suggests that most of our strongest appetites or drives (and the emotions that accompany their satisfaction or frustration) arise from various kinds of disequilibrium. A truly safe and contented animal is an animal at rest. FEAR is activated by perceived threats to the self, PANIC by social isolation, and RAGE by constraint (especially of one's access to valued resources). The SEEKING system may be engaged when any of these other emotions is in less than full flower. When we're a little anxious, a little lonely, or a little hungry, our minds/brains are primed to seek out whatever will restore our internal equilibrium: an escape route, a friendly touch, a Hostess cupcake.

In such situations, our minds are also primed to learn, to draw connections between environmental circumstances, our own behavior, and the consequences that result from their meeting. Indeed, our capacity to learn has so many advantages for our continued survival that we are primed to find it intrinsically pleasurable. Thus the SEEKING system affords us pleasures that are largely independent from the satisfaction of consuming a good meal or the relief of escaping a fearsome predator. They're compelling enough to be literally addictive - the SEEKING system appears to be modulated primarily by the action of dopamine, and gets easily hijacked by cocaine and methamphetamine among other stimulants.

In addition, while the research remains sketchy, it appears that the (intrinsically rewarding) SEEKING system is activated whether an animal is seeking out the object of some appetitive desire (food, a mate, etc.) or seeking escape from a perceived threat.

Okay, if you've followed this far, I should finally be able to bring the conversation back around to positive and negative reinforcement and the question of whether they're entirely distinct. Once we start thinking about drive or desire in terms of disequilibrium, it becomes harder to draw an absolute line between the internal pressure of hunger and the external pressure of a bit or a leg; it becomes harder to separate the gift of peace from the gift of an apple. It becomes clear that all effective teaching necessarily "exploits" one appetite or another. And it becomes much more interesting and rich to talk about how to do so in a way that best enlists an animal's SEEKING system and taps into our shared love of learning.

I don't want to tax your patience much further in this post, but in closing I'd like to quote a couple of eloquent descriptions of expert horse trainers who supposedly sit on opposite sides of the R+/R- divide, but who clearly overlap in their ability to help other animals to flourish. I already knew I needed to learn more about Alex Kurland's work, but Cindy Martin persuaded me that I'd better do it soon. She wrote in an email, "When the dog world found clicker training, many people abandoned their leashes, vowed to free-shape everything and never touch their dogs. Well, with horses, we're bound to have physical contact. Riding is about tactile cues. Our weight shifts, we squeeze with our legs, we ask with the reins. Alex developed the idea of pressure as information, below the level of a true aversive. So is it still R-? Probably. But if we very quickly lighten pressure, by highlighting the first approximations of a desired behavior, with the click/treat, then all these kinds of pressure can be information, simply cues for the horse. And they can still learn to work for 'the release.' In fact, the release of subtle pressure can be a low value reinforcer, once the horse gets more sophisticated, and the click/treat can highlight the especially good responses. Alex calls this process, 'Shaping on a point of contact.'"

Emma Kline attended the same Buck Brannaman clinic in Spanaway that inspired me to write my bumptious letter back in November. You can find some lovely reflections on the SEEKING system on her blog, and you can also find her poetic response to seeing Buck at work:  

"At one point Buck was talking about how extraordinary it was to be with a horse that was hunting the feel. He talked about giving the horse what it wants most in the world: PEACE. No wonder this guy doesn't need to use treats.

I could feel the lines in my forehead getting deeper as I strained to see how he was utilizing the laws of science and behavior modification with an accuracy I have rarely seen. And sure enough, he was using a marker and a reward. His marker was the release and his reward was the Peace of Feeling Together.

I think that it is very important to note that this is not a "peacefulness" that comes from robbing the horse of his sense of security or taking away the little peace he, as a flight animal, is born with. It's about adding a peace the horse didn't have before. That's when horse and human become more than what we were separately. So in fact, the release is a marker and not a reward."

Saturday, February 11, 2012

Have we outgrown our vocabulary?

A couple of weeks ago now, Professor Jesús Rosales-Ruiz (of the Department of Behavior Analysis at the University of North Texas) gave an esoteric but fascinating talk on the disappearing distinction between respondent (a.k.a. classical) and operant conditioning. By tracing the history of the terms and describing the difficulties that contemporary researchers often encounter when trying to apply them with any consistency, he exposed their contingency and fragility: while they have been extremely useful as springboards to the investigation of how we learn, they may prove not to have any real substance. They might even have brought us far enough that we can safely discard them (and move forward more easily without their dead weight). Wittgenstein once noted how many stubborn philosophical problems are in fact problems of vocabulary; we are sometimes slow to recognize when we've exhausted our terms.

But dying words (and the concepts or categories they name) have something left to teach. By looking closely at their definitional foundations, and then taking note of their specific failures vis-à-vis reality, we can identify some of the perceptual biases that made them so appealing in the first place. The lay distinction between "respondent" and "operant" has always hinged on the question of whether or not a response to a given stimulus (or set of stimuli) is voluntary, whether or not it can be brought under conscious control. But as Jesús described in his talk, even during Skinner's time, the erosion of that distinction was already underway, as physiological responses that had been considered perfectly autonomic (such as blood pressure) were brought through biofeedback under conscious control. More recently (and provocatively), challenges have come from the opposite direction, as individual cells have been observed exhibiting response patterns that mimic operant conditioning. As Jesús noted, anytime that relationships between contingencies (in the environment and behavior) grow measurably more consistent, learning is taking place. Our loyalty to the terms "respondent" (or "classical") and "operant" may obscure the complex but unified realities of that process.

Among the phenomena resistant to any simple respondent/operant dichotomy has been the tendency of certain behaviors to wander from unconscious to conscious and back to unconscious "control." We're all familiar with this dynamic as it applies to our assimilation of complex skills. The famous four stages of competence trace the general pattern, from unconscious incompetence (we don't know we can't do something), to conscious incompetence (we know we can't do it), to conscious competence (we can do it with great mental effort and focus), to unconscious competence (we can do it without effort and without conscious focus). If I'm a skilled driver, or soccer player, or surgeon, as long as the given challenge falls within the range of what is well-known to me and therefore predictable, the relationship between the contingencies of the environment and the contingencies of my behavior may be so consistent as to appear reflexive, and I will hardly have the sense that I am making a voluntary decision at any juncture. Only novelty is likely to wake me from the dream of competence and force me back into a state of conscious engagement.

"Respondent" and "operant," like "unconscious" and "conscious," may only describe different modes of energetic expenditure. The brain is a highly economical organ, a regular Bartleby when it comes to the heavy lifting required for conscious thought. ("I would prefer not to.") That said, it is much more active at an unconscious level than we generally give it credit for being.


Image by Jolyon.

Monday, January 30, 2012

I love you, dammit!!

Arktomorphism?
I just spent the weekend in the company of a few hundred trainers and a smattering of scientists at the annual west coast Clicker Expo, organized by Karen Pryor and her skilled cohorts. My exhaustion last night spoke to the quality of the program and the liveliness of the other attendees -- it's good to be reminded in a training context of how much energy the brain consumes when it's fully engaged!

There were many ideas and provocations I encountered in the ballrooms and hallways of the Doubletree Hotel that I want to return to, things I'll need to gnaw on for a long time before I can digest them. What looks most temptingly chewy this morning, however, is a question that was posed to me by a fellow trainer yesterday morning. I had volunteered with a dozen or so other KPA graduates to offer a little coaching to interested parties (two sessions of twenty minutes apiece), with donated proceeds going to a local charity. We worked in pairs, and it was unfortunately toward the end of our first session that our "client," whose dog was not with her, described how he would often growl when he lay on his bed and she approached and petted him. What should she do?

I really regret that our next client was waiting and we weren't able to give the question the attention it deserves, because it's loaded. Personally, emotionally, and theoretically. I didn't get any further than remarking that the dog was telling her something that she'd be wise to respect, which might have been a fine response if I'd had time to elaborate it, but was surely too brusque given the circumstances. My partner did better, noting that the dog was a terrier, asking whether the dog followed her hand when she withdrew (yes), and suggesting that the dog might be experiencing a conflict of intent: to roughhouse or to cuddle? But we had to leave it at that.

On the theoretical side, this presents as a relatively straightforward matter of strategic reinforcement, and I hope the woman with the terrier found her way later that morning to Ken Ramirez's excellent lecture, wherein he explored the promise and perils of working with secondary reinforcers, those things (not always tangible, sometimes experiential) that accrue value only by their association with other things that satisfy an animal's strong intrinsic needs (i.e. primary reinforcers). Is gentle touch a primary reinforcer? Considered broadly, for slow-developing, social mammals, it does appear to satisfy an intrinsic need, especially early in life. (Harry Harlow's poor rhesus macaques demonstrated this most tragically and persuasively.) But touch is critical at that early stage in part because it is instructive: a mother's or other's tactile tenderness teaches us what kind of touch is safe, and when. Squirming, jostling littermates and human carers contribute significantly to that education in the case of most dogs. Physical intimacy is double-edged for all of us: it has the simultaneous potential to be terribly harmful or deeply rewarding. So each of us necessarily becomes a connoisseur of touch, highly idiosyncratic in our taste for different varieties of contact.

As Ken noted, in the practical life of a trainer or pet owner, the need to draw any distinction between primary and secondary reinforcers is not nearly so pressing as the question of whether something is reinforcing at all. The question for the woman with the terrier is not whether her dog has a primal desire for touch, but whether he wants to be touched by her, in that way, in that place, at that moment. His growling suggests that he does not. Which does not mean that her desire to touch her dog in such a way under such circumstances must remain forever frustrated, only that she needs to teach her dog to enjoy it. Or risk getting bit.

There are many people who see these (sometimes irresistible) urges to kiss, hug, and cuddle our pets as yet another dangerous form of anthropomorphism. This is true to the extent that our species-typical touch repertoires do not everywhere overlap, and we need to be attentive to the places where they typically diverge. But when we're talking about an individual human and an individual dog (or cat or monkey or whale or other human), knowledge of what is typical may not only be immaterial, it may also be distorting. There are quite a few of us humans who find hugs from most people in most contexts highly aversive. Some find them aversive from all people in all contexts. Can we be shaped to enjoy them? Most of us, probably. But the more often we get hugged when we do not want to be hugged, by people who just want to show us how much they love us, the less a hug will communicate that professed love, and the more likely we'll be to interpret it as invasive and aggressive. As someone who should really know better, I am sorry to say that I think I inflicted an unwanted hug on someone this weekend, and the sincerity of my affection had no bearing on the question of whether it was rewarding for the victim. I "anthropomorphized" her, insofar as I made the narcissistic assumption that my desire to hug her was mirrored by her desire to be hugged.

Animals do this to us, too, as we'd be wise to remember the next time we get leapt on, slobbered over, or humped. My husband has a pair of black running tights that we've taken to calling his "sexy pants," because they drive our boy Pazzo into an amorous frenzy. (Perhaps not coincidentally, Pete vaguely resembles Pazzo from the waist down when he wears them.) Pazzo is clearly sincere in his passion for Pete, but the very force of that passion makes him insensitive to the question of how best to express it.

Wednesday, January 18, 2012

If dogs wore shoes...

Really??
...we'd more easily walk a mile in them. But most don't, and those who do don't look happy about it. Our relationships with our pets are wonderfully peculiar, as the ties that bind us braid together intimacy and alienation. This is true to a degree of all relationships (between dogs, between people, and certainly between cats), but when we extend our interest and care beyond the bounds of our own species, we seem sometimes to find more direct access to each other's emotions than we ever enjoy with our nearer kin. At those very moments, however, we may also be struck by the other's unfathomable otherness.

I think we need to sustain and not to collapse the tension between these simultaneous truths -- "we understand each other perfectly" and "we don't understand each other at all" -- if we want to flourish together. More, I think we should celebrate it. In the history of our relations with other animals, and particularly in the history of our domestication of other animals (and their domestication of us!), views have tended to swing from one pole to the other, from the conviction that other animals exist only as extensions of human need (or of human fear, as in the case of the benighted wolf) to the conviction that they exist utterly apart from us. Wittgenstein's oft-quoted aphorism captures the latter belief nicely: "If a lion could speak, we couldn't understand him." Likewise, Thomas Nagel's famous (and to his mind impossible) question -- what is it like to be a bat? -- encourages an all-or-nothing judgment on the possibility of shared experience. But otherness is always radical, and subjective feelings of connection are always an objective illusion. (Or rather, the connection itself is illusory, though the feeling would probably show up on a brain scan.) We have no direct means of access to any other being's perception of the world, no matter the species, so unless we wish to retreat into lonely solipsism, we have to make do with indirect means and earnest approximations.

With that limitation in mind, we have good reason (founded on objective evidence) to suppose that, in many ways, our pets' and other animals' emotional and cognitive experience strongly resembles our own. The speaker whose talk I'm most excited to hear at the upcoming Clicker Expo is Jaak Panksepp, the neuroscientist who holds the Baily Endowed Chair of Animal Well-Being Science at Washington State University (wonderful that there should be a chair so endowed). He is one of a small (but happily increasing) number of scientists who dare to emphasize the obvious homologies (common structures with common origins) among diverse animal minds, especially among the minds of humans and other social mammals. (I use "minds" advisedly, as Panksepp is interested in subjective as well as scientific modes of inquiry and description.) He is also leading research into homologies that are not so obvious, teasing out the physiology and chemistry that underlie those brain processes that we hold in common (and variations that we do not). His book Affective Neuroscience is a marvel. Densely technical in places, it nonetheless serves both as an excellent overview of contemporary research into the dynamics of primary emotions (including their influence on cognition and learning) and as an eloquent, richly speculative description of the big questions that remain.

Sunday, December 4, 2011

Befriending the unconscious mind II

In my last post I made a distinction between analytical and associative logic as one way of separating out the primary modes of thought favored by the conscious and unconscious mind. (Depending on your level of comfort ascribing "thought" to the unconscious mind, you might substitute "modes of response" in that sentence, but my own definition is pretty expansive.) However, the distinction between analysis and association is not absolute, and it gets particularly fuzzy when we contrast classical conditioning and operant conditioning. The question of whether the unconscious mind "analyzes" a given situation (and how effectively it does so relative to the conscious mind) here elbows its way to the fore.

A much simplified review: classical conditioning is Pavlov, and operant conditioning is Skinner. In the first case, an initially neutral (i.e. affectively meaningless) stimulus is paired closely with an "unconditioned" (i.e. intrinsically meaningful) stimulus often and consistently enough that it becomes meaningful even in isolation. Unless a dog is temperamentally nervous, she is unlikely to have any strong primary response to the sound of a bell. Unless she is sick, full, or finicky, however, she will almost invariably respond to the presence of food, by salivating, pricking her ears, widening her eyes, etc. As Pavlov discovered, if the sound of a bell is repeatedly paired with the arrival of food, it will soon provoke many of the same reflexive responses that food does, even in food's absence. (These responses will often extinguish if the association is not periodically maintained -- though threatening associations are more resilient than positive ones -- but there are interesting and somewhat counterintuitive laws governing the effectual timing of that maintenance. More on that another day.)
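For readers who like to see the arithmetic behind the pairing dynamic Pavlov observed, the classic formalization is the Rescorla-Wagner model, in which learning is driven by surprise: the bell's association with food strengthens only so long as the food isn't already fully predicted, and erodes on bell-alone trials. A minimal sketch (the function name and parameter values here are mine, chosen for illustration, not drawn from any particular study):

```python
def rescorla_wagner(trials, v0=0.0, alpha=0.3, lambda_us=1.0):
    """Track the bell's associative strength across a list of trials.

    Each trial is True if the bell is paired with food, False if the
    bell sounds alone (an extinction trial, where the target drops to 0).
    alpha is the learning rate; lambda_us is the maximum strength the
    food (the unconditioned stimulus) can support.
    """
    v = v0
    history = []
    for paired in trials:
        target = lambda_us if paired else 0.0
        v += alpha * (target - v)   # update is proportional to surprise
        history.append(v)
    return history

# Ten pairings build the association; ten bell-alone trials erode it.
acquisition = rescorla_wagner([True] * 10)
extinction = rescorla_wagner([False] * 10, v0=acquisition[-1])
```

Note how the same subtraction, `target - v`, produces both the negatively accelerated learning curve of acquisition and the decay curve of extinction: the model's whole appeal is that one error term does both jobs.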

Operant conditioning involves willed (or, if you won't go so far, voluntary) behavior. In this case, some specific action by the animal repeatedly and consistently provokes a change in her environment; if that change is meaningful to the animal, she will alter her behavior accordingly. Thus the Skinner box: rat presses lever, food pellet arrives, rat presses lever again with same result, and rat soon becomes a lever-pressing fiend. A fat lever-pressing fiend. Time to add a complication, in the form of a "green" light: only when the light is on will food pellets be available at press of lever. When the light is off, the rat can press for all she's worth but press in vain. The rat soon stops pressing the lever in the absence of light.
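That whole sequence (press, pellet, light) can be caricatured in a few lines of code. The toy update rule below is mine, an illustration rather than any published model of operant learning: the simulated rat keeps a separate press probability for light-on and light-off, nudged up after each pellet and down after each unrewarded press.

```python
import random

def simulate(n_trials=2000, step=0.05, seed=0):
    """Toy Skinner box: pressing pays off only when the light is on."""
    rng = random.Random(seed)
    press_prob = {True: 0.5, False: 0.5}   # keyed by light on / light off
    for _ in range(n_trials):
        light_on = rng.random() < 0.5
        if rng.random() < press_prob[light_on]:   # the rat presses
            reinforced = light_on                 # pellet arrives only with light
            delta = step if reinforced else -step
            p = press_prob[light_on] + delta
            press_prob[light_on] = min(0.99, max(0.01, p))  # keep in (0, 1)
    return press_prob

probs = simulate()
# Pressing becomes frequent under the light and rare in the dark,
# which is exactly the discrimination described above.
```

The light here plays the role of the discriminative stimulus: nothing about the lever changes, but the rat's behavior comes to depend on the context in which pressing has paid off.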

At what point (if any) in that sequence does analysis enter in? At what point does a rat or dog or human begin to perceive "coincidence" (the predictable proximity of two previously unrelated things or events) as a relationship of cause and effect? And is that perception most potent (most behavior-altering) at a conscious or unconscious level?
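One standard way learning researchers put a number on that shift from coincidence to perceived cause is the contingency measure delta-P: the probability of the outcome given the cue, minus its probability in the cue's absence. A quick sketch, with invented event counts for illustration:

```python
def delta_p(cue_outcome, cue_alone, no_cue_outcome, no_cue_alone):
    """Contingency: P(outcome | cue) - P(outcome | no cue).

    A value near 0 means the pairing is mere coincidence; values near 1
    mean the cue reliably predicts the outcome.
    """
    p_given_cue = cue_outcome / (cue_outcome + cue_alone)
    p_given_no_cue = no_cue_outcome / (no_cue_outcome + no_cue_alone)
    return p_given_cue - p_given_no_cue

# Bell followed by food on 18 of 20 soundings; food almost never
# arrives otherwise: strong contingency (about 0.85).
strong = delta_p(18, 2, 1, 19)
# Food arrives equally often with or without the bell: no contingency.
none = delta_p(10, 10, 10, 10)
```

Whether anything like this computation happens consciously is, of course, exactly the question the paragraph above is asking; the arresting finding from the research discussed below is how well the unconscious mind tracks such statistics without being asked.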

Even in the case of the planet's Great Brains (i.e. humans), it appears that the unconscious gets there first and most decisively. "Gut feelings" whisper to the frontal cortex the conclusions that older, deeper structures have already drawn -- and in many cases already prompted our bodies to act upon. Gerd Gigerenzer has done some incisive research into this dynamic, and his book Gut Feelings: The Intelligence of the Unconscious is one of the best introductions I've found. I'd also highly recommend Timothy Wilson's Strangers to Ourselves: Discovering the Adaptive Unconscious, which beautifully assimilates contemporary research with earlier descriptions of the relationship between conscious and unconscious thought. But among books written for the lay reader, Antonio Damasio's Descartes' Error: Emotion, Reason, and the Human Brain remains the most coherent (if necessarily speculative) revision of the "top-down" model of human decision making that I've read, and it's a good place to go if you're ready to dig into the physiological bases of cognition (insofar as these are intelligible to us, which isn't yet very far). His "somatic marker" hypothesis turns the idea that the consciously reasoning mind is in command of the lowly body pretty much literally on its head.

Image by SubVerse Clothing

Wednesday, February 2, 2011

In scientific circles

I think the mirror test demonstrates, at the point where its popularity as a measure of self-consciousness intersects with its inadequacy, the tendency of scientific investigation to wander into tautology when it treats the phenomena of sentience. It requires great care and imagination to conceive an experiment that will yield some verifiable external measure of an internal process, and when someone succeeds as elegantly as the originator of the mirror test, there's a strong temptation among those who credit the significance of the results to "move forward," to avoid any needless backtracking (e.g. to the definitional boundaries of the phenomenon under scrutiny).
     Thus the question of whether an animal possesses self-awareness elides irresistibly with the question of whether he can, with the help of a well-placed polka dot, make a connection between his kinesthetic or proprioceptive sense and an alien image that (bizarrely) coordinates with it; in the absence of any similarly compelling measure, the mirror test becomes definitive for hundreds of scientists who go on to paint scores of unsuspecting animals in their sleep. Will a parrot pass or fail? A tamarin? A zebra? As Frans de Waal observes, "for better or worse, this test has remained the gold standard of self-identity."
     Even the test's critics seem to accept its foundational terms: if, they say, an orangutan who touches a spot on his forehead really understood the image in the mirror as a representation of his own body, then the conditions for self-awareness would be met. But, they argue, he probably just likes to poke at his face. Or he learns to do so because it makes humans grimace in that weird way that means more dates and sunflower seeds.
     Again, methodological limitations lead us to chase our tails: the mirror test measures the capacity for self-consciousness because... we don't have a better test. Or a more complete one. Hell, I don't know what it means to be self-conscious! Do you?
     My feeling, one I'd like to develop into a well-reasoned conviction (so goes the trajectory of my mental life), is that there ought to be a kind of intellectual affirmative action in the direction of granting non-human animals manifold intelligence and complex consciousness. We ought to assume they're endowed with great riches of thought and feeling until they prove otherwise, though we ought not to assume that their thoughts and feelings trace the same patterns as ours. I have some sympathy for Marc Hauser, despite his faults and all the damage he's done to the cause of anthropomorphology, because a bias in favor of non-human intelligence remains so rare, while the bias against almost defines "respectable" research.

Wednesday, January 12, 2011

Self-consciousness without mirrors

There's a question I want to dig into a little further before I arrange another rendezvous between Hamlet and Burrhus (Frederic Skinner), a question regarding self-consciousness. This is one of innumerable capacities ascribed until recently only to humans. Various experiments with mirrors and paint have widened the circle of self-conscious creatures just a little bit, to include apes and magpies (!) among a few others, but I think the assumption that underlies the research may be too restrictive to allow a full description of the phenomenon. While it may be difficult or impossible to demonstrate under scientific controls (this is clearly a case in which observation itself may distort the nature and behavior of what we observe), informal study argues strongly for the emergence of "personality" in many social species who (pronoun used advisedly) fail the mirror test. Canis familiaris, to take one salient example.
     What if one accumulates (even if unwittingly) a distinct and precious identity, an identity one is motivated to defend (even if reflexively)? Mightn't this constitute a kind of self-consciousness, whether or not the self is pinched off from consciousness and set out as an object for one's contemplation and deliberate manipulation? I think anyone who has ever observed the wounding of a dog's pride or a cat's dignity must admit the possibility.
     The counterexample of the octopus also supports a more expansive definition of self-consciousness. Experiments performed using HDTV suggest that, however intelligent, an octopus has no personality: that is, it demonstrates the patterns of behavior that we generally attribute to personality, but these patterns are extremely short-lived. An octopus that is extroverted and aggressive one day may be terribly timid the next. (Wonderful that the subject of this research was Octopus tetricus: vulgarly, the "gloomy" octopus.)
     If the range of an animal's behavior (and the probability of any specific response to a stimulus) were determined simply by a passive stockpiling of experience and not by any active sense of internal coherence - of individual integrity - one would not expect to see such wild variations in the robustness of behavioral patterns among species.
     There's another experiment, performed back in 2008, that hints at a canine capacity for self-consciousness. Austrian researchers trained a pair of border collies to sit and shake on cue, then measured the time it took for the behaviors to extinguish once reinforcement stopped. The salient data came from a comparison between "control" trials, wherein one of the dogs worked alone, and trials wherein the two dogs worked side by side but only one received reinforcement. Behaviors extinguished significantly more quickly in the second case (and the unrewarded dog showed many more visible signs of frustration). Discussion of the research has focused primarily on the question of whether this demonstrates that dogs have a sense of "fairness," but it certainly suggests that they have a vigorous sense of "me" distinct from "him," a protective self-regard that might amount to a form of ego.

Tuesday, December 14, 2010

The romance of chaos

Many disparate and diversely colored threads I'd like to braid together here if I can. (Are your days sometimes dominated by specific metaphors? I am all tied up today in 101 Things to Do with Thread and its Fat Cousin, Yarn: braid, sew, weave, darn, knit, spin, embroider, unravel, unspool and so on and on to the fraying end.)
       I often listen to the podcast of Krista Tippett's public radio interview show, which was until recently called Speaking of Faith but now goes by the rather grand name of Being. A few months ago, Tippett spoke with the surgeon and writer Sherwin Nuland about "the biology of the spirit," and I nodded happily along with his description of spirit as an evolutionary rather than a divine endowment, as a biologically determined pleasure (in order and symmetry) that we have actively cultivated-- or cultured! New metaphor! My later segue to discussions of bacteria will now be entirely organic. I need only to let it ferment for a paragraph or two. Ha.
       Nuland once suffered from a debilitating depression, and his description of the discovery that his Orthodox Jewish "faith" consisted of little more than neurotic compulsions and obsessive thoughts which he used superstitiously as talismans against inchoate threats of hellfire and damnation-- this all struck a chord with me. (Nuland is extremely careful to note that this discovery was merely personal, and that faith may spring from sources other than neurosis; it may be true at least in the sense of being sincere, though his atheism does not allow it a corresponding substance. He believes, it seems, in the reality and even in the potential value of unanswered prayer.)
       I wanted to keep nodding along with Nuland-- he has a lovely, gravelly voice and a charming streak of irreverence toward his own most cherished insights-- but I stopped when he started talking about our perennial attraction to disorder as if it were an entirely bad thing. Unsalutary, to borrow his word. He wondered aloud why so many cultures have over time moved toward monotheism, tossing out lesser gods like worn-out toys and gathering the scraps of their spiritual allegiance into one great mass. (So to speak. But I'm thinking less of Latin liturgy and more of the old impulse to make string balls.)
       "Why is monotheism better?" Nuland asked, and then answered his own question: it's better because it represents a progression toward greater order. Nuland does not believe in God (or gods), but he believes in the human capacity to make meaning from apparent chaos. And from his conversation with Tippett, he appears to believe that a more orderly world is self-evidently more meaningful. To his mind, our dalliances with disorder are expressions of thanatos, a deathward vertigo that must be resisted.
       For a truly eloquent response to my no-doubt-simplified account of Nuland's love of light and clarity, one might turn to Byron, Nietzsche, Isaiah Berlin, or Lewis Hyde: all the Romantics and their multifarious offspring. Whether the discussion concerns Apollo and Dionysus, foxes and hedgehogs, or Hermes and Coyote, the central lesson is clear: disorder is vital. Life surges at the frayed edge where order unravels. Yes, we may at moments desire our own destruction, our ultimate dissolution and release from the effort of making meaning. But we may in other moods be erotically drawn to the edge, whatever its dangers. We may court chaos when we feel bold and bushy-tailed, or when the meanings we've made have become waxy and stiff. Yes, we often lose more than we willingly offer in sacrifice to change, but our health ironically depends on our appetite for risk.
       I will get back to bacteria, I promise.

Thursday, November 18, 2010

Skinner and Hamlet II

This attempt to bring two unlike minds into harmonious-- or anyway not rancorous-- relation will necessarily proceed slowly and piecemeal. One of the minds is, after all, fictional, though that may be the least of the challenges I face.** As a gnatty little amateur in the realm of Big Ideas, I am bound to get ahead of myself and run down a dozen or more dead ends before I find a viable path. My arm is strong and my hatchet is sturdy, but it doesn't have the keenest of blades, so I hope, gentle reader, that you'll forgive the rough work I make of this.

Who, me? Stalling? OK. I've already said that I don't think any of us (paramecium, porcupine, person) arrives tabula rasa in the world. Some native and individual proclivity for order springs into being at the moment of our inception, hungry for the world as it makes itself known to us through our various and varied senses. However, the world feeds our hunger so immediately, generously, and unremittingly that it may be impossible ever to say what any of us is in isolation from the world as we know it at any given moment.

When stated so broadly, this seems obvious, but I could say instead, "Oh, of course you're a different person with your friend than you are with your mother, and I've no idea whatsoever how you might act if your life were on the line. No more than I have about how I would act. There's nothing solid in your character or mine - we are creatures of circumstance." Or I could point you to a recent article in Discover detailing the possibility that schizophrenia, bipolar disorder and MS could be caused by a retrovirus embedded 60 million years ago in the ancestral DNA of every monkey and ape, a virus that fortunately becomes active only in special environmental circumstances. Perhaps it is only in these extreme cases that "foreign" matter speaks to us so intimately and shapes our lives so dramatically, but I'm not willing to bet on it.

If Catholic cosmology reigns, the Ghost who speaks to Hamlet may indeed be honest, but there's no purgatory in the Protestant universe, so he must needs be a demon. Freudians hear the voice of the superego, and evolutionary biologists the mischievous mutterings of a rogue amino acid sequence. Behaviorists? Good question. Maybe they'd say (in their best Jesse Jackson imitation): The Ghost is moot!

**I still love the line from Woody Allen's The Purple Rose of Cairo, wherein a depressive in the Depression played by Mia Farrow falls in love with a movie archaeologist (Jeff Daniels): "I just met a wonderful new man. He's fictional, but you can't have everything."

Monday, November 8, 2010

Mind-shaped world

London's (or rather, England's) National Theatre took a cue from the Metropolitan Opera a couple of years ago and began a series of "live" high-definition broadcasts of selected shows from its season. The quotations wouldn't be there if we lived in New York, where audiences really do see shows in the moment they're being performed, but here we watch them "live." A couple of non-NT shows have made it onto this year's bill, and last night Peter and I joined two or three hundred other people at Portland's World Trade Center (still standing, but not so tall as the "real" but absent WTC) for a pre-recorded performance of A Disappearing Number by the group that used to be known as Théâtre de Complicité but now goes more simply by Complicite. Their work tends to the densely imagistic, cerebral, and fractured-- they don't disdain story, but they find other, unexpected paths to empathic connection. I'd seen their Noise of Time nine years ago in London and found it more intriguing than moving, but A Disappearing Number makes excellent use of their intellectual and emotional agility, as they leap nimbly from the abstract to the personal and back again.

The play weaves a fictional contemporary romance of the more or less conventional sort with the true (or at least non-fictional) romance of minds that took place between two mathematicians in the early twentieth century, Srinivasa Ramanujan and G.H. Hardy. The latter story gets unfortunately short shrift, but is implicitly honored in Complicite's romance with "maths." Fictions about mathematical and scientific genius usually have the quality of a trip to the zoo-- they invite an audience to peer through steel bars or scratched plexiglass at the strange creatures trapped within. Proof is a case in point, also A Beautiful Mind (though the book makes a more capacious cage than the film). In contrast, Complicite manages ingeniously to communicate something of the flavor of the genius itself, of the beauty that lights a mind like Ramanujan's, alien as it might at first appear.

Even in that last sentence, I have bumped inadvertently against the subject I want to address here. Late in the play, a contemporary mathematician tries to explain to her exasperated husband how it is that math is more real to her than the life they (sort of) share. She quotes Hardy from A Mathematician's Apology: "A mathematician is working with his own mathematical reality. 317 is prime, not because we think so, or because our minds are shaped in one way rather than another, but because it is so, because mathematical reality is built that way." The line hit a nerve with me, and I misremembered it later without Hardy's qualifications: "317 is prime, not because we think so, or because our minds are shaped one way rather than another, but because it is so, because reality is built that way." I've been getting impatient of late with people I might have considered intellectual kin, all the "bright" atheists who mock religious metaphysics but endow human reason with a numinous glow.

I hope that everything I write here will attest to my lifelong affection for empirically grounded reason. Math? Science? I'm a fan of both. It's just that hubris makes me itchy, whatever its source. My admiration for the scientific method is partly inspired by its implicit modesty, but that modesty goes missing sometimes when its practitioners champion themselves as the lonely votaries of Truth. Meanwhile, those who immerse themselves in "pure" math and logic (Plato's great-grandkids) don't dirty their hands with empirical observation, and their claims on Truth can be still more overweening.

I'm not a fan of the radical skepticism that permeates the "postmodern" worldview, which spirals out into an infinity of dead ends: I know you think so, but what do I? I think the search for common insight, common knowledge is possibly worthwhile and anyway much more enlivening than a proliferation of solitary sandboxes. And yet. The temptation to overreach is strong. I suppose I am a child of Kant, in the sense that I believe every description of the world is necessarily a description of the self. Empirical investigation is constrained not only by our specific and limited senses but by (un)certain a priori structures and biases that are more or less particular to us as different species of animal and different species of human. Not least of these is the bias toward meaning itself: our desire to make sense of what we perceive drives us to see relationships (e.g. of similarity, of coincidence, of cause and effect) where none may essentially exist, and it operates for the most part "in secret," below the level of conscious thought.

The dialogue between nature and nurture begins too early to allow us ever to untangle it completely (generations before an individual's conception, as recent epigenetic research has demonstrated). However, we don't need to arrive at any definitive account of what or how much is pre-written on the slate to recognize that the slate has a given shape and a surface that can only be marked in given ways (a blackboard likes chalk as paper likes a pen). Think of the senses we are missing, and those we possess in only stunted form. How would our notions of "objective observation" be altered if we could more directly perceive magnetic fields like a pigeon, shapes and speed like a bat, scents like a dog?

Yes, we have extended technology into many areas of our sensory blindness, but how much is lost in translation? If I see a sound's echo as a tight burst of peaks and valleys on a scrawled page, and learn to recognize its distinct pattern, is my apprehension of reality expanded to the same degree as if I had acquired the skill of echolocation? If a blind person scores perfectly when I test her knowledge of Newtonian optics ("420 nanometers?" "Violet?" "Yes!"), will I congratulate her and hand her the keys to my car? No. I will apologize for raising her hopes unrealistically and invite her to ride shotgun.

In their unapplied forms, math and logic try to leapfrog past this epistemological quandary. The symmetries they seek are self-sufficient, independent of the marriage (whether intimate or estranged) between the mind (sorry, a mind!) and the world. If a consonance exists between those symmetries and empirical reality, we can never really know it. Hardy seems to have acknowledged this, in a line from the same Apology to which the play continually returns: "A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas... The mathematician's patterns, like the painter's or the poet's, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way... It may be very hard to define mathematical beauty, but that is just as true of beauty of any kind. We may not know quite what we mean by a beautiful poem, but that does not prevent us from recognizing one when we read it."

The question seems finally to lie in how wide we presume to draw the circle of "we." Complicite did a remarkable job of making maths' beauty visible, audible, palpable to some of us who don't normally perceive it. But what does my dog Pazzo care for the elegance and emotional resonance of a convergent infinite series? I cannot tell whether he cares for elegance at all. The joy he takes in snatching a frisbee from the air at the moment it pauses in flight suggests that he does, but when he drinks from the toilet I am forced to reconsider.

Wednesday, November 3, 2010

Skinner and Hamlet I

I don't imagine it will come as a shock to anyone to learn that most of my friends (who trend to the liberal and artistic) wrinkle their noses at any mention of B.F. Skinner. As I've described here, I used to wrinkle mine, too. I've devoted many previous entries to a defense of Skinner, because I believe his insights have been undervalued in many circles (certainly the circles where I've been traveling). I think the reflexive rejection of behaviorism by many humanists poses a barrier to the interdisciplinary dialogue that we need if we hope ever to marry (or even to weave together as independent threads) mechanistic and holistic accounts of cognition and action, our descriptions of brain and mind as they impel the body (or vice-versa, as Antonio Damasio has intimated in his somatic marker hypothesis). Of course, many behavioral and cognitive scientists have been just as stingy and unthinking in their refusal to admit the significance of intangibles like emotion, belief, and individual character to their investigations. The imperatives of their discipline more formally forbid it.

If we cannot invent or cobble together a third vocabulary that encompasses subjective experience and objective observation, we will have to develop fluency with the conventions of scientific and poetic description at once, to take full advantage of both languages while recognizing their respective limitations. It's in the interests of this bilingual approach that I have learned to love Skinner-- I have been intent these last few months on countering my own prejudice. I feel a little more confident now that, if I invite my inner artist back to the table, the discussion won't devolve into a shouting match: "Deluded parasite!" "Heartless bastard!"

Friday, September 24, 2010

Neocortical lipstick

The widespread reluctance to acknowledge (let alone to explore or elaborate) how deeply we remain embedded in "animal" life has serious practical consequences, as it accelerates our destruction of the world we commonly inhabit. This is obvious in the sense that our failures of identification with other species remove barriers to violence and rapacious exploitation; it is less obvious in our expectation that "uniquely human reason" will rescue us from our own greedy appetites. We wishfully suppose ourselves ennobled by our comparatively well-developed cortices, but the reasoning (or rationalizing) power supplied by those wrinkly blankets obfuscates as much as it elucidates; it has made us masters at self-deception.

Jonah Lehrer makes the excellent point (in Proust Was a Neuroscientist) that the neocortex, in its very novelty, may be regarded, should be regarded as less developed than supposedly more primitive parts of the brain-- there hasn't been time to smooth out its kinks, or make its wrinkles perform most efficiently and effectively (that is to say, most adaptively). It remains fundamentally less reliable than older structures, though the dialogue that ensues between them has clearly been productive in the (geologically) short term: it has allowed us to overrun the planet. Yippee.

This is my point: the emanations of the neocortex (e.g. reason and faith) have not yet produced any notable constraint on our "animal" compulsions to consume and procreate, and to expect that they ever will is patently ridiculous, when our brains have been "designed" bottom-up for the opposite purpose. Even our most hopeful discoveries in neurology (of mirror neurons, for example, with their strong suggestion of a built-in capacity for empathy) can only embellish the fact of our dominant hunger, that is, to live beyond ourselves in the proxy of our genes. That superobjective (says the theatre gal) spawns an astonishing variety of more trivial hungers in day-to-day life, few of which consent to be curbed by reason or faith (though both propose compelling accounts of why other people's appetites should be suppressed or refused outright). Even those of us who have abdicated our procreative vocation find alternative modes of proliferation (hello, blogosphere!), and our consumption continues apace, as if we were not genetic dead ends (and indeed we may not be, if we help our nieces, nephews, or cousins to thrive).

Yes, this is to say that I am extremely pessimistic about our ability to pull ourselves by our elastic bootstraps into an enlightened state-- of mind or self-government. But if we do, the mechanism will not, I think, be reason or faith. I think it will have to be pleasure, unless it is desperation. If we cannot channel our appetites in less destructive directions (e.g. by encouraging people to remain "selfishly" childless, by cultivating our inner resources and capacity for pleasure), we will sprint ever faster toward that great brick wall of finitude.

I'm pretty sure it's already too late, at least for anything like the life I happen to lead (the outrageously wasteful kind). But crisis is normal in the long life of the planet. The dinosaurs never dreamed of us, and we can't imagine what (or who) will flourish when we're gone. I can still rage against the dying of the light in my visible spectrum-- the snuffing of lives I am disposed by evolutionary accident to cherish.

Thursday, September 2, 2010

Doh!

I’ve just finished reading an excellent book by the witty and dashingly erudite self-appointed chronicler of "wrongology," Kathryn Schulz. In Being Wrong: Adventures in the Margin of Error, Schulz doesn’t argue in favor of error, exactly, but she makes a persuasive case that we urgently need to acknowledge and make peace with our bottomless capacity to be wrong. She describes the myriad ways that our perfectly natural fear of mistakes (and the psychological upheaval that mistakes sometimes entail) can stunt our growth and stifle our noblest impulses. Our terror of error has an especially corrosive effect on our ability to feel compassion: it compels us to reject, sometimes violently, any perspective that challenges our own. Only a humble appreciation for our limitations (of apprehension, of intellect, of moral standing) can restore a healthy suppleness to our lives and interactions. Schulz also notes that, while the gap between the world as it is and the world as we perceive it can sometimes gape terrifyingly wide, without it we would lose a vital element in human experience: Art lives there, and only there. As she charmingly puts it, “Art is an invitation to enjoy ourselves in the land of wrongness.” Yes, it's good to make Plato roll in his grave, and keep him rolling!
 
The theatre seems to me an especially inviting place for the exercise of humility and compassion. Sitting more or less safely in the audience, we can vicariously rehearse a thousand ways of being horribly, disastrously wrong. In their very structure, plays insist on a multiplicity of perspective, and the best of them never come to rest in any definitive point of view. Moment to moment, they encourage us to embrace one truth, then another, to inhabit competing theories about our place and purpose in the world and to live out their repercussions for an intensely distilled hour or three. Some see this as a means to teach the avoidance of error: here’s what not to do. I see it as a means of reminding ourselves (because we continually forget): we err, we have erred, we will err again. We had better get used to it.

Thus I am drawn in my playwriting (and elsewhere) to the murky territory where good intentions collide with reality, where common passions steamroller “common” sense. I love smart characters who do stupid things for excellent reasons. Problems of scale fascinate me especially: in a world that technology has virtually collapsed, we live at once too close and too far from each other. The temptation to abstract other people’s suffering seems dangerously high, so that compassion itself sometimes becomes a blunt instrument, and the law of unintended consequences prevails. 

Errors of scale and distance lie at the heart of a play that lies close to my own heart, Trailing Colors. It's set in the aftermath of the Rwandan genocide, not in the midst of evil but in its confused wake, and it concerns most centrally a species of blundering that may not be uniquely American but appears endemic: call it oblivion heroicus. It leaps tall buildings in a single bound! Then it lands splat in the boggy boggy mud. Graham Greene skewered it beautifully in The Quiet American, and Philip Caputo made a palpable hit with Acts of Faith. I'm too sentimental to be so mercilessly satiric, but I do have sharp words for privileged women writers who feed their fictions on others' pain.
 
Ahem.

It's never too early or late to embrace being wrong, which is why I have tentative dates set for a production of Trailing Colors next year: May 5th-29th, 2011. More details to follow.

Tuesday, August 10, 2010

Inessential pleasure

From an interest in behavior modification and positive reinforcement naturally follows an interest in the origins and workings of pleasure, so I had really been looking forward to reading a new book by the Yale psychologist Paul Bloom, How Pleasure Works: The New Science of Why We Like What We Like. I mention his fancy pedigree because it serves to demonstrate his central thesis: through evolutionary advantage or accident, we ascribe value to things and derive pleasure from them according to the invisible essences with which we imagine they are endowed by their histories or identities. (Bloom seems to favor but never commits to an accidental account, wherein this "essentialism" is inessential to our survival, the kind of non-structural element in our genetic blueprint that Stephen Jay Gould would have called a spandrel.)

Bloom writes persuasively of art forgery and celebrity auctions, noting how many millions more a Picasso will fetch than a "Picasso," no matter how beautifully the fake may be executed, and observing that the value of a shirt worn by Elvis will plunge if it is washed. I must concede his point when I note the wild discrepancy between the sloppiness of his broader argument and its respectful (in some quarters glowing) reception. It seems clear evidence of the power of the Yale essence to short-circuit critical judgment.

I don't like to be harsh, but Bloom promises so much more than he delivers. His subtitle is especially misleading, when the book's scientific morsels are so few and so poorly digested. My disappointment became outright exasperation when I came to his discussion of fiction and the pleasures of virtual pain. Even as he takes on a subject that urgently requires subtlety and a fine blade, his wits are blunted for a stage fight. In the absence of any concrete evidence, he argues that the pleasure we take from fiction arises primarily (essentially) from our awareness of its artifice, our appreciation of the fact that it was constructed (for our pleasure) by some guiding intelligence.

This logic is maddeningly circular and falls to pieces as soon as one considers the ubiquity of crummy fictions. If a play or novel or film fails to please me, I am pained to think of the effort involved in creating it. Furthermore, if I am swept up in a story, nothing is more likely to spoil the fun than a gratuitous flourish of virtuosity, one that clips the guy wires suspending my disbelief. (Bloom acknowledges this danger but almost immediately dismisses it.) Yes, the critic and the expert have their own modes of enjoyment, and these may be richer than anything the unschooled can know, but surely they are incidental to the main current of pleasure in fiction, and not the other way around.

Thus Bloom dodges the question-- he speaks of creative virtuosity without asking what it is. How does William Shakespeare, or Jane Austen, or Buster Keaton delight an audience? By what means do they make us their instruments and play upon our frets so masterfully?

Here's a typical feint. In his chapter titled "Safety and Pain," Bloom examines "the classic guy-slips-on-a-banana-peel scenario." He writes, "This can be funny, particularly if you haven't seen it a thousand times and if the actor is skilled at conveying his surprise. But the same situation is not typically funny in real life. I spent much of my life in Montreal and I've seen many people tumble on ice on city streets. Onlookers wince or they reach to help, or they turn away, but they typically don't laugh. This is funny in fiction, not in real life." He notes the relevance of an actor's skill, but that doesn't dent his conviction that the humor here derives from the dislocation of slapstick from reality. He doesn't consider the possibility that they just don't know how to fall funny in Montreal.

I'd offer two counterexamples, one from each side of the fictional divide. More than twenty years ago (yikes), I was walking across my college campus on the first day back from summer break. The sound of my name drew my attention down the street to my right: my good friend Sven was gliding toward me on his skateboard. "Sven!" I exclaimed with a smile, then walked directly into an enormous, hollow lamppost. It resounded beautifully as a gong and left me flat on my back. I looked up to find Sven caught between horror and hilarity, unable to stop laughing even as he asked me if I was alright. I had to laugh too. The timing, the surprise, the social embarrassment, the weird beauty of the music we made together, the lamppost and I-- all these things contributed to the joke. But it owed nothing to fiction, as my aching head attested.

On the virtuosically constructed side, there's a "classic guy-slips-on-a-banana-peel scenario" in Buster Keaton's movie Sherlock Jr. that I will never tire of watching, because it marries grace so delightfully with mishap. I did not stop laughing after I learned that Keaton literally broke his neck performing the stunt, though my pleasure in it became complicated by wonder and a sympathetic wince.

In short, the "essentialism" that Bloom describes seems an interesting epiphenomenon, but an unpersuasive candidate for the source of our deepest pleasures.

Thursday, July 22, 2010

Behaviorism and the Queen of Diamonds

The sinister overtones that many hear in the phrase "behavior modification" arise from the perception that it necessarily describes the coldly clinical manipulation of one creature by another. The efficacy of classical and operant conditioning rests on our ability to generate involuntary responses to selected stimuli, a process that appears suspiciously like brainwashing. If someone acquires the power to circumvent my conscious intent, to play upon my more or less submerged desires without my reason's consent, what am I but a puppet?

It's no real consolation to observe that, under the scrutiny of neuroscientists, the whole phenomenon of willed action begins to appear less and less substantial and may prove no more than a sustained delusion: a necessary delusion, indispensable to our sense of mental coherence. The related delusion that we might more productively dispense with is the conviction that reason rules (or should rule) our behavior. It leads us to assume that we should seek change in ourselves and others through the careful application of logic, when the truth of our experience (and increasingly of scientific study) is that emotions have infinitely more suasive power than reasons do. Or, as Pascal noted, the heart has its own reasons, to which reason must give way.

Maybe the general mistrust of behaviorism actually speaks to an intuition in this direction and a general anxiety around the mixing of "cold" logic with "warm" feeling: someone with the power to stimulate my most primal emotions (joy, fear, desire, disgust, etc.) may well abuse it if he regards me through the lens of reason as a mere object for the accomplishment of his ends.

This is jumbled and something I need to work through at much greater length (you see my faith in reason perseveres!), but I do know that my emotional entanglement with my training "subjects" is the sine qua non of my use of behaviorist methods.

Not that emotional entanglement is any guarantor of virtuous ends. (The Manchurian Candidate supplies a case in point.) Sigh. I'll have to try to catch this tiger by a different toe.

Wednesday, July 7, 2010

Oh, joy!

The most unexpected-- and persuasive-- outcome of my early clicker training sessions with Barley was the obvious pleasure she took from the process. She had never guessed that I could be such a source of fun. Of course, I had also suddenly become a generous dispenser of treats, but the tiny morsels of processed lamb or turkey I doled out soon became icing on the click. (Mmm. Memories of my grandmother's beagle, Bruce, whose St. Paddy's Day birthday we celebrated with towering "cakes" of marshmallow-studded Alpo.)

Scientific study of animal behavior has only recently begun to allow for the relevance of emotional states to the process of learning. Again, this reluctance arises in part from epistemological rigor, a respect for the limits of empirical investigation and the impossibility of our directly apprehending any other creature's subjective experience-- the "black box" problem-- and in part (perhaps in the main) from a reflexive and unscientific impulse to distance ourselves from "mere beasts." I've just begun reading Jaak Panksepp's Affective Neuroscience: The Foundations of Human and Animal Emotions, whose first chapter lucidly summarizes the history of the struggle to marry psychology with neuroscience, a struggle made more difficult by the refusal of people on both sides of the divide to acknowledge, and make better scientific use of, the significant overlap (the structural and functional homologies) between the brains of humans and those of their mammalian cousins.

Even if Panksepp succeeds in legitimizing the scientific description of emotion in animals, it will likely be mediated by functional MRI or its future technological offspring: "Ah, there's the blood flow pattern we'd expect to see when a dog's ears and posture perk up, its (never 'his' or 'her') mouth opens slightly, and the speed of its response accelerates!" But no algorithm, however subtle and comprehensive, will ever name "joy" as well as "joy" does. Panksepp, very much to his credit, emphasizes the importance of folding non-scientific vocabulary into scientific accounts of emotion for the sake of clarity and (strange to say) accuracy.

If science were not so grand in its claims, if its practitioners more readily acknowledged the necessary gaps in its description of reality, I would not get so resentful of the ways that "objectivity" sometimes interferes with perception, so exasperated with the willful blindness and obtuseness that characterize much scientific research, especially when sentient beings are the objects of study. As I said earlier, I only became a fan of Skinner's methods when I put them into extra-scientific, personal practice-- and thereby corrupted them. If I want to be most effective as a trainer, I need to see the animal I'm training, not as a jumble of quantifiable properties and behaviors, but as a fluidly perceiving and feeling being whose mysterious intelligence is momentarily entangled with my own.

By lucky accident or divine symmetry, all creatures appear to learn best when they enjoy the learning. If it were not so, I would reconcile myself once and for all to being a crummy trainer, just to keep Barley smiling.

Sunday, June 20, 2010

Playing for keeps

We closed a satisfying four-week run of Proof with a matinee performance last Sunday. After striking the set and restoring the theater to black emptiness (a state of infinite possibility), we gathered at the BBQ joint around the corner for another demolition project, this one internal. A variation of the old Tootsie Pop question: how many beers does it take to get to the center of an actor after the play is done? I'm sure it depends on the actor-- after all, some don't drink anything stronger than cranberry juice-- but even a smart owl might have a hard time saying how we slip in and out of character while remaining perfectly, distinctly ourselves. A very smart owl might say we don't. We can't play at another self without allowing the self that's playing to blur and be altered. An actor's working philosophy and methods may encourage her to close the distance with her character or maintain a critical detachment, but there's no safe distance: she can't avoid getting touched, bruised, stained by the experience she shares with her fictional counterpart. Her work partly consists in deliberate failures of integrity.

There are many stories of "method"-oriented actors getting lost in the emotional and psychological wilds of the fictions they inhabit. I caught a cautionary glimpse of one famous disintegration back in 1989, when I saw Daniel Day-Lewis play Hamlet at the National Theatre in London. It was an extraordinary, incandescent performance, also-- quite palpably-- a risky one. Day-Lewis appeared to have only a fingernail's hold on common reality, and the ground was eroding fast. A little more than a week after the show I saw, during one of Hamlet's encounters with the Ghost, Day-Lewis experienced what he later described in an interview with Simon Hattenstone as "a very vivid, almost hallucinatory moment in which I was engaged in a dialogue with my father" (who had been dead for years). Day-Lewis left the stage and never returned, but this did not finally register with him as a professional failure. On the contrary, he counts it an unusual success: "To me, it was like a natural conclusion to the job I was doing. If I hadn't arrived at that centre of confusion, I would have probably felt a sense of disappointment." He's nonetheless clear on the cost: "I don't think I had a breakdown, but I daresay I wasn't that far from it. I broke myself down."

That open courtship of confusion marks out the reckless (though in many cases highly disciplined) end of the broad spectrum of acting methodologies. At the other? Brechtian alienation? Mametian ventriloquism? It could be any approach that discourages emotional identification between actor and character as a dangerous or distracting indulgence. Unfortunately for those who would like to keep their selves whole and clean, even the "merely" muscular habitation of an alien persona may resonate inward.

Consider a recent bit of research out of Italy, as reported in New Scientist. In a study of patients with "locked-in" syndrome, unable to move anything but their eyes, Luigi Trojano discovered that they had great difficulty (relative to a control group) interpreting the emotional content of facial expressions. Trojano speculates that we depend for our understanding of others' emotions on our ability to mimic them physically... which supports the further speculation that muscular imitation conjures emotion. A scientifically tenuous proposition at present, but it accords with the experience of actors and others whose masks shape and reshape their faces.

Thursday, May 27, 2010

Behaviorism and Desire

We want what we want and we want it now. (Humans are better than most animals at deferring gratification, but not always and not by much.) Any deliberate manipulation of another creature's behavior requires that we become attuned to that creature's desires, and these may be almost as idiosyncratic among dogs or dolphins as among people. To paraphrase Sam the Eagle (of Muppet Caper fame), we are all weirdos. This is where behaviorism goes productively amok.

In operant conditioning, one doesn't create behavior per se, one merely increases or decreases the likelihood that a given behavior will be performed, and one does this by controlling the behavior's consequence.

Consequences fall into four categories, defined by two binary oppositions (positive/negative, reinforcement/punishment): positive reinforcement, negative reinforcement, positive punishment, and negative punishment. "Positive" in this context refers to the addition of some thing or force, "negative" to the removal of some thing or force. "Reinforcement" names anything that increases the likelihood that a behavior will be repeated; "punishment" names anything that decreases that likelihood. More simply, reinforcement tends to the yummy and pleasing, punishment to the nasty and fearsome ("aversive" in behaviorist lingo).

So "positive reinforcement" is the introduction of something good (chocolate cake, a belly rub, a game of tug, a shoulder massage), "negative reinforcement" the removal of something bad (pressure on the bit, a parent's screaming, a scary dog or mailman): whatever I did to create either consequence, I'm more likely to repeat it. "Positive punishment," which sounds like a contradiction in terms, is the introduction of something nasty (leash jerk, skunk spray, burned fingers), while "negative punishment" is the removal of something we like (attention, bones, freedom): whatever I did to earn these consequences, I'd like to avoid repeating it.
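For readers who think better in tables than in prose, the 2x2 schema above can be sketched as a tiny lookup-- purely illustrative, with names of my own invention rather than anything standard in behaviorist literature:

```python
# Illustrative sketch of the four-quadrant schema: "positive"/"negative"
# names whether a stimulus is added or removed; "reinforcement"/"punishment"
# names whether the behavior becomes more or less likely.

def quadrant(stimulus_change: str, effect_on_behavior: str) -> str:
    """Map (added/removed, more/less likely) to its behaviorist label."""
    sign = {"added": "positive", "removed": "negative"}[stimulus_change]
    kind = {"more likely": "reinforcement", "less likely": "punishment"}[effect_on_behavior]
    return f"{sign} {kind}"

# The examples from the text:
assert quadrant("added", "more likely") == "positive reinforcement"    # belly rub
assert quadrant("removed", "more likely") == "negative reinforcement"  # bit pressure eases
assert quadrant("added", "less likely") == "positive punishment"       # leash jerk
assert quadrant("removed", "less likely") == "negative punishment"     # attention withdrawn
```

Laid out this way, the oxymoronic ring of "positive punishment" dissolves: "positive" is doing purely arithmetical work, naming an addition rather than a pleasantry.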

There's a wealth of complications buried in this simple schema, but the most significant concerns the vagaries of desire. We all (human and non-human animals) like different things, and we like them with varying degrees of intensity. Our desires are fluid and changeable, shifting with experience, mood, and context. Once upon a time, I loved bananas and (very briefly) the voice of Suzanne Vega, but both now make me queasy. Conditioning wouldn't be possible if our preferences were forever fixed, but our fickleness makes us slippery subjects. And that seems very much to the good. I have learned to love Skinner only because his account of behavior remains forever incomplete; the "laws" of behaviorism, while they are powerfully, empirically predictive in the aggregate, get wonderfully complicated when they tangle with the rebelliously singular individual.

Tuesday, May 25, 2010

Learning to love Skinner III

What I found most remarkable in Karen Pryor's book Reaching the Animal Mind is how seamlessly and matter-of-factly she enlists the insights of behaviorism in a project of creative collaboration between species. Once they are taken out of the laboratory and into the world at large, the conditioning techniques that Skinner and others developed for scientific purposes become powerful tools for the achievement of warmer, fuzzier ends. They can help elucidate animals' integrity as individuals, and (not coincidentally) foster positive emotional bonds within and across species. Pryor's subtitle-- What Clicker Training Teaches Us About All Animals-- hints at her (sadly radical) proposition that humans can and should be in dialogue with other animals. Trainers may be more strongly inclined to listen (though many are not), but we all have as much to learn as to teach. More.

In order to describe Pryor's neat sleight of hand clearly, I first need to travel back to Behaviorist Psychology 101 for a quick primer in classical and operant conditioning. All conditioning involves the establishment of novel associations, and the two types are not in every situation distinct, but for simplicity's sake let's say that classical conditioning promotes reflexive, involuntary responses to a given stimulus, whereas operant conditioning engages a creature's will.

If I say "Pavlov," does the image of a drooling dog spring immediately to mind? Is it still there if I tell you "Don't think of a drooling dog"? If so, you are well conditioned to associate both "Pavlov" and "dog" with more or less specific representations of domestic canines, though the dog you imagine when I say "Pavlov" may be more slack-jowled and blank of expression than the one that "dog" conjures in another context. My point here is that there exists no intrinsic relationship between the word "Pavlov" or "dog" and any actual dog (or even the category of dogs), but if you are an English speaker with some knowledge of psychology, these unlike things have been paired often enough in your experience that one invokes the other without any deliberate effort on your part. That's a form of classical conditioning.

Operant conditioning asks a little more from you: intent and action. In this case the salient association is created between a behavior and a consequence. If it becomes strong enough (if the consequence follows consistently from the behavior), it takes on the flavor of a causal relationship, though the connection may be an arbitrary one. Pavlov's dog is the icon for classical conditioning; Skinner's lever-pressing rats embody operant conditioning at its most basic. Rat pushes lever, and out pops kibble. Far out. If you want to preserve the association but don't want fat rats, you can add a cue. Green light on, push lever: kibble. Green light off, push lever: nada, niente, SOL. As with the example given for classical conditioning, there's no ready-made relationship between lights, levers, and kibble. It's dreamt up by the scientist and taught to the rat, simply through temporally close, predictable association.
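The green-light contingency is simple enough to mock up in a few lines of code-- a toy sketch, with every name my own invention, that shows what the rat's experience looks like from the outside:

```python
import random

# Toy sketch of the cued lever-press contingency described above:
# a press dispenses kibble only when the green light is on.

def press_lever(green_light_on: bool) -> bool:
    """Return True if this press dispenses kibble."""
    return green_light_on

# Tally outcomes over many trials, as the rat (in effect) does.
presses = {True: 0, False: 0}
kibble = {True: 0, False: 0}
random.seed(1)  # reproducible for the sake of the example
for _ in range(1000):
    light = random.choice([True, False])
    presses[light] += 1
    if press_lever(light):
        kibble[light] += 1

# Light on: every press pays off. Light off: none do. That perfectly
# predictable correlation is the "association" the rat comes to treat
# as cause and effect.
assert kibble[True] == presses[True]
assert kibble[False] == 0
```

The point of the sketch is the arbitrariness: nothing in `press_lever` connects lights to levers to food except the experimenter's say-so, yet consistent pairing is all it takes for the rat to behave as if the connection were a law of nature.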

However little we might like to believe that we resemble rats, humans are subject to the same tendency to perceive causal relationships where none may exist. Like all animals, we seek coherence and control in a stubbornly chaotic world. Nothing undermines our well-being more disastrously than a sense of helplessness, so we are apt to exaggerate our agency and influence. Sports fans seem especially susceptible to delusions of this kind: when I am watching my beloved San Diego Chargers on television, a thousand or more miles from the field of action, I become temporarily (absurdly) convinced that my shouted "Come on, D!" or my failure to wear my Quentin Jammer jersey could make or break my team's chances at victory. Athletes themselves take superstitious behavior to comical extremes, baseball pitchers to an art. This is where the line between classical and operant conditioning begins to blur: when a stimulus (or set of stimuli) triggers us to act automatically, even compulsively, will and intent disappear from the mix.