Archives for The Art of the Long View

Book Review: The Art of the Long View

I’ve been calling myself a futurist for the past five years, and for five years, I’ve been lying. But no longer, because I’ve read this book, which is every bit as thought-provoking as Science Fiction for Prototyping proved disappointing. Peter Schwartz is one of the founders of the Global Business Network consulting firm, and honed his skills designing scenarios for Shell Oil in the 1980s. His scenario planning techniques underpin the Prevail Project. In The Art of the Long View, he makes a strong case for the utility of scenario planning, explains how to develop a proper futurist mindset, and shows how to create your own scenarios.

Scenario planning is not predicting the future. Rather, it is about challenging the official future, and the assumptions that underlie it. Scenarios force you to examine your unspoken beliefs and values, the evidence supporting them, and how you might react in the future. An organization that includes scenario planning in its process is better able to react to rapidly changing conditions, and less likely to be rendered slowly obsolete through technological change.

Scenario planning is inherently interdisciplinary. A scenario plan has to include technological, economic, cultural, and political factors, as well as individual psychology. Broad knowledge is better suited to picking up on trends than deep, narrow research. The ideas and forces that most powerfully influence the future originate on the margins of society, among the dispossessed, the utopian, or the just plain weird. Finally, Schwartz includes a detailed, eight-stage guide to using scenarios in your own organization, with a good balance of theory and examples. Perhaps the ultimate success of scenario planning is that it creates a shared language to talk about the future.

Scenario planning might not be about predicting the future, but a futurist who makes no predictions isn’t very useful. The book was published in 1991, and some parts feel oddly anachronistic, like the Japanophilia, the groping towards a ‘digital global teenager’, and the absence of the War on Terror. On the other hand, he offers three scenarios for the world in 2005: New Empires focused on regional militarism, Market World with multicultural entrepreneurialism, and Change Without Progress, where the wealthy hollow out states, and fear of losing what little remains prevents successful action. Change Without Progress is strikingly similar to the world today, with our 1%ers and 99%ers, paralyzed multinational bodies, and collapsing infrastructure.

Scenario planning is not a strict methodology that automatically produces valid results; it’s an attitude towards the future that is based on broad understandings of historical forces and skepticism about the status quo. The results will vary with the quality of the questions you can ask, the data available, and the conversation you foster. But as far as crystal balls go, scenario planning is one of the best.

Predicting the Future of Computing

The New York Times has a fascinating interactive tool for looking at the history of computing, and for crowdsourcing when new technological breakthroughs might arrive.

It’s interesting to see what people believe will happen. Google will have mapped the entire world at 1 cm resolution (good enough to recognize faces) by 2020. Telecommuting and online dating replace ‘real world’ versions of the same by 2030. AI and cyborgs will be around by 2050, and by 2300, humanity will have achieved the Singularity, ending all forms of material suffering and deprivation.

Empathy and Creativity

There are three fundamental principles for the Prevail scenario:

1. The future is not predictable; uncertainty trumps technological determinism. Prediction is a fool’s errand. That’s why we need multiple scenarios to navigate the future.

2. Connectedness is crucial: “a gradual ramp of increased bridging of the interpersonal gap.” I share with Howard Rheingold the belief that our technologies can serve rather than substitute for fleshier connections. But I agree with Jaron Lanier that eternal vigilance is called for to assure that the design and execution of our social networking technologies not lead us astray toward bogus connectivity.

3. We can achieve social transcendence by expanding the radius of empathy, but not so far as to include all living things.

Let me expand especially on the third: Empathy is essential. And compassion. But Jaron Lanier is also right to hold out for the creativity of innovative individuals—artists, poets, scientists, musicians.

One of the most challenging aspects of the Prevail scenario lies in navigating the narrow pass between too much individualism on the one hand, and too much collectivism on the other. We tend to think in binary terms: A or non-A. But the Prevail scenario calls for a more complex path: Not the One of hyper-individualism, not the All of Communist collectivism, but the less precise Some of limited community.

The radius of empathy cannot extend into the infinite. You can’t “friend” billions. Nor can it contract to the precious self of solipsism and narcissism. Is there an ideal size to the Some of a thickly empathetic community? No. And that’s part of what makes this idea of “Not One, not All, but Some” so intractable and intellectually unsatisfying.

But, hey, that’s life. Some communities will contract too far toward the exclusivity of we precious few. That way lies tribalism. Some communities will seek such broad inclusivity that their specialness will be leveled out and homogenized. We’ll lurch from the too small Some to the too large Some and back again because there is no ideal number.

This is why Joel Garreau has to describe the Prevail scenario as a series of “fits and starts, hiccups and coughs, reverses and loops—not unlike the history we humans always have known.” Its trajectory will not follow a downward deterministic curve toward oblivion. Nor will it carve an ascending arc toward an asymptotic approach to the Singularity. Instead it will look rather more like a pubic hair: kinked, curled, and unpredictable.

Jay Ogilvy is the author of “Creating Better Futures” and is one of the founders of the pioneering scenario-planning firm Global Business Network.

Prevailing Over Technology

Thirty years ago I wrote Taming the Tiger about our conflicted attitude towards technology: we distrust machines, even as we rely on them; we are always surprised by the unintended consequences of technology, as if our creations should be perfect; and we are eager to adopt the next new thing, even as we bemoan the good old days.

Our ambivalence, as I saw it then and still do, is the result of several misunderstandings. For example, we assume that technological change will inevitably be accompanied by loss, and we tend to romanticize past machines such as clipper ships, old handicrafts, even old towns. But the rosy image is deceptive. The tall ships were inhuman work environments, dangerous and physically debilitating; old crafts often involved mind-numbing labor, and the beautiful objects that we admire in museums were available only to a wealthy few; and the old towns that we visit while on holiday lacked the technological amenities—running water, flush toilets, central heating—that we take for granted today. I think we can blame a good deal of this romanticizing tendency on the movies, which have portrayed history in highly selective ways. In truth, Robin Hood and his Merry Men endured lice and continual toothaches; the noble cowboy loners portrayed by Gary Cooper and Alan Ladd were illiterate, crude louts; the Edwardian swells portrayed on Masterpiece Theater suffered from gout, rheumatism (damp, drafty houses), and venereal disease.

We often confuse a device with its purpose. The hammer is an elegant tool, but the nail came first, that is, the need to hammer nails came first. Because we focus on the device we tend to fetishize machines, whether they are iPads or smart phones. Paradoxically, this attitude imbues machines with power that they don’t have, while at the same time trivializing their actual functions. For example, the so-called American love affair with the automobile in the Fifties and Sixties produced such momentous advances as chrome grills, wraparound windshields, and tailfins (meanwhile the Japanese and the Germans were actually solving transportation problems). We are seeing a replay of this distortion today in our fascination with green buildings and green cities. Certainly, our goal should be to develop—and adopt—practices and technologies that reduce global warming. But we can’t help being attracted to the symbols of greenness: grass roofs, wind machines, solar panels. The point is not to drive a hybrid SUV, but to drive less.

Another cause of our ambivalence towards technology is that we assume that machines cause technological change. The personal computer—or vaporware—creates a new world, we say. It is instructive to examine an earlier communications device: the printing press. The press famously appeared in Europe in the fifteenth century, although neither movable type nor paper-making were European inventions; both originated much earlier in China and Korea. What facilitated printing in Europe was advances in metallurgy and water-power; metallurgy, because it was needed for the spread of typesetting (the early types were made by goldsmiths), and water-power, because it permitted the manufacture of cheap paper. Cheap paper, replacing parchment made from calfskin or goatskin, was a prerequisite for printing. But the prime driver was a cultural change: a growing demand for books, that is, a growing desire to read and write. In other words, the human activity came first, the machine followed. So today, digital media are not creating a new world, they are enabling a new world that already exists.

Technology is not inhuman or dehumanizing, quite the opposite. In the concluding chapter of Taming the Tiger I quoted the German philosopher Arnold Gehlen. Gehlen wrote that technology mirrors man, “like man it is clever, it represents something intrinsically improbable, it bears a complex, twisted relationship to nature.” It is another way of saying that we are as much a part of the technological environment—and it is as much a part of us—as of the natural world.

Witold Rybczynski is an award-winning critic, professor at the University of Pennsylvania and columnist with Slate who has been thinking about humanity and technology since the 1980s, when he published his first two books, “Paper Heroes: Appropriate Technology: Panacea or Pipe Dream?” and “Taming the Tiger: The Struggle to Control Technology.”

To Prevail

I have in front of me a late 1960s advertisement from the Burroughs Corporation. It shows a sketch of a guy — in a snappy suit and crisp haircut — sitting at what one must assume is a Burroughs business computer. A large genie-like figure billows from the machine, and the caption reads “MAN plus a Computer equals a GIANT!”

I love this image, despite the outdated sexism. It’s a healthy reminder that the notion of computers making humans something supremely powerful (and distinctly no longer human) isn’t just an idea dreamt up in the heady days of the 1990s, as Moore’s Law seemed to be really taking off. It’s been woven into the fabric of our relationship with “thinking machines” for decades. While there may have been no Mad Men-era Singularitarians fantasizing about being uploaded into a B6500 mainframe, it was clear even then that there was something about these devices that went beyond mere tool. They were extensions not of our bodies, but of our minds.

Of course, anyone sitting down at a 1960s Burroughs business machine right now expecting to become a figurative “giant” is in for a surprise. It may be something of a cliché at this point to note that a cheap mobile phone has far more computing power than a mainframe of a generation or two ago, but it’s true. Yet instead of making us all “giants,” our information technologies played something of a trick: they made us more human. All of the things that humanize us — love, sex, despair, creativity, sociality, storytelling, art, outrage, humor, and on and on — have been strengthened, given new power and new reach by the march of technology, not discarded.

That’s not the conventional wisdom. Western intellectual culture is in the midst of a civil war between two superficially distinct viewpoints: a claim that transformative information technologies are set to sweep away human civilization, eliminating our humanity even if they don’t simply destroy us, versus a claim that transformative information technologies are set to sweep away human civilization and replace it (and eventually us) with something better. We’re on the verge of disaster or the verge of transcendence, and in both cases, the only way to hang onto a shred of our humanity is to disavow what we have made.

But these two ideas ultimately tell the same story: by positing these changes as massive forces beyond our control, they tell us that we have no say in the future of the world, that we may not even have the right to a say in the future of the world. We have no agency; we are hapless victims of techno-destiny. We have no responsibility for outcomes, have no influence on the ethical choices embodied by these tools. The only choice we might be given is whether or not to slam on the brakes and put a halt to technological development — and there’s no guarantee that the brakes will work. There’s no possible future other than loss of control or stagnation.

Such perspectives aren’t just wrong, they’re dangerous. They’re right to see that our information technologies are increasingly powerful — but because our tools are so powerful, the last thing we should do is abdicate our responsibility to shape them. When we give up, we’re simply opening the door to those who would use these powerful tools to manipulate us, or worse. But when we embrace our responsibility, we embrace the Prevail scenario.

To Prevail is to accept that our technological tools are changing how our humanity expresses itself, but not changing who we are. It is to know that such changes are choices we make, not destinies we submit to. It is to recognize that our technologies are manifestations of our culture and our politics, and embed the unconscious biases, hopes, and fears we all carry — and that this is something to make transparent and self-evident, not kept hidden. We can make far better choices about our futures when we have a clearer view of our present.

To Prevail is to see something subtle and important that both critics and cheerleaders of technological evolution often miss: our technologies will, as they always have, make us who we are.

Human plus a Computer equals a Human.

Jamais Cascio is the author of “Hacking the Earth.” Prevailing for him involves seeing technologies as expressions of ourselves, not alterations or degradations of human nature.

Avail to Prevail

“Prevail Project” is about the subtle and interesting notion that some day soon, some of us may not be human.

My readers are presumably human, and so might find this topic of dubious relevance.  So let’s get right into the “subtlety” aspect. There are stem cells among us, we can all agree on that, but is a stem cell “human”?  Stem cells exist as technological facts on the ground,  but do they have any “human rights”?  What happens when stem cells divide and multiply, and become a “test tube baby?”

Since the late 1970s, a huge, healthy cohort of “test tube babies” has appeared among us as our fellow citizens.  “Test tube adults” are not subjected to social stigma — we don’t consider them squalid Brave New World subhumans, as we did when they were science-fictional — but we might have done that.   We might have “relinquished” the technology through legislation, due to strong moral qualms about it.  Then many of us would have been denied parenthood.  Many of us would not be here.  One of the missing might have been you.

Nowadays, the term “test tube baby” has fallen out of use.  It’s archaic and de-controversialized, replaced by the cozier term “in vitro fertilization.”  But beneath this apparent triumph of social assimilation, the technology has continued to reticulate.  A fertilized human egg is a stem cell of sorts, because it has the capacity to form all human organs.  Further study revealed that many cells in the human body have a similar capacity.

This is the promise of “stem cell therapy” — that we pull living stem cells from the human body, tinker with their huge expressive capacity, and have them grow again inside the patient.  You could think of that as a kind of pureed and homogenized test-tube baby, if you were allowed to frame the issue in that repulsive way.  The odds are you would not be allowed that framing.  The way we mentally tackle these issues is a complex and subtle matter of “law, culture and values.”  It’s our law, culture and values that see to it that certain paradigms are unlikely to get a sustained airing.  Even if they’re quite logical and firmly based in science, in touch with the facts on the ground.

You’re very likely to hear that stem cell therapy is the murder of an unborn human being.  You’re also likely to hear, from a different point on  the ideological compass, that stem cell therapy can make the lame walk and the blind see.  You’re very unlikely to hear that stem cell therapy could be used to reduce fat, erase wrinkles or increase sexual potency, even though those are three colossal, highly profitable industries with every means, motive and opportunity for making sure that stem-cell therapy becomes mundane and unquestioned some day.

Now I ask you, in all seriousness: suppose that I’m 95 years old.  And yet my skin has no wrinkles, my hair is flaxen and wavy, I have fine muscle tone and I’m the father of four by a twenty-six-year old woman.  Am I human?  I’m not mumbling like Frankenstein or clanking like Robocop.  I can vote, and like a lot of elderly people I may be very street-wise and rather well-to-do.  Yet my body’s a chimeric patchwork of flesh that was formerly my own stem cells, rejuvenated in  a petri dish and injected back into me.

Naturally you may be entertaining a few qualms about my behavior, the way you do about, say, Barry Bonds’ baseball abilities or Silvio Berlusconi’s harem of leggy TV presenters.  But the odds are that I can put up a pretty good argument on my own behalf — I may well be a prosperous lawyer.  Or your political leader.  The odds are that you envy me.  The odds are that my entire society is sliding in my direction without ever making a conscious decision about it — maybe with the same hectic speed that we adapted desktop computers.  Or with the same blithe joy that we planted kudzu and unleashed Australian rabbits.

As Joel Garreau surmises, in that postulated situation, we have “passed an inflection point of self-modification.”  Thanks to a new suite of technical possibilities — the genetics of stem cells were just one such field of potential action — our human minds, human memories, human metabolisms have proved unexpectedly ductile.  We are changing ourselves.  We have the means:  the new means.  We have the motives — ancient motives, powerful motives, fear, greed, lust for power, spiritual transcendence, all of them.  And we have the opportunities, because there are so many areas where these practices could be made to flourish.

They could flourish in business, of course, but also in medicine, sports, the military, even in academia.  The street finds its own uses for things, and an innovation created for a legitimized purpose will undergo mission-creep as time passes.  Narcotic abuse is a major global industry despite decades of organized repression.   Sports doping, cosmetic surgery, the hairline cracks of posthumanity are everywhere.  The military will take most any step to get its lethal work done, but there’s nothing commoner than a military technology clumsily repackaged for civilian life — assault weapons, nuclear power, autonomous drones.

The question is metaphysical: “what is mankind?”  That question gets fought over every day, and in the cases of Terri Schiavo and Eluana Englaro, it shut down two different G-7 governments.  But it’s not only a metaphysical matter.  There are other pressing questions.  How, as a practical matter, can we watch the “inflection points?” How can we name and number the areas of potential crisis?  How quickly and effectively can we react?  Who are the watchmen, what are their proper duties?  Who watches the watchmen?

We’ve already had plenty of practice, much of it very unpleasant, in declaring certain people human or nonhuman.  There are stem cells, the brain-dead, the differently-abled, gays, untouchables, the bearers of contagious disease, existential ethnic enemies who must be “cleansed” or “finally solved” by whatever means possible; there are campaigners who will burn, maim and kill for the legal and ethical rights of animals.  We know how “law, culture and values” can make human history; what we don’t know is what genetics, robotics, information and nanotechnology can and will do when tossed into this ever-bubbling stew.

Maybe the stew becomes a divine ambrosia.  That would be the “heaven scenario.”  Maybe the pot breaks and its boiling contents set the  stove on fire.  That would be the “hell scenario.”  Or maybe the pot more or less keeps at it, while some well-informed people spend more effort taking judicious sips of the brew.  That would be common sense, if we had any.

Not that these commonsensical observers are the boss chefs or anything; they’re just uneasily aware that, where the good old Nail Soup of History used to have onions and barley and carrots, nowadays it’s got brocco-cauliflower, cloned mutton and athletic-performance enhancers.

I am a fan of this effort, and I am speaking out in its support, because I know that it doesn’t have to succeed to be important.  Suppose it fails — suppose that in five years, ten years, twenty years or fifty, there are in fact many former-people among us who are blatantly no longer human.  Suppose they are tomorrow’s GRIN mutants, “people” who, for some wide and no-doubt compelling variety of reasons, chose to desert the formerly-human condition.

They’ll need an effort like this.  They will need it because they will know, with an existential certainty still closed to us, that they have crossed a mighty boundary and they cannot go back.  The “Prevail Project” is about peering through the keyhole of Pandora’s Box, but whoever breaks that box has to own it.   They will not be relieved from “law, culture and values;” they will merely have the awful quandary of creating their own.  Not in a vacuum, either.  We, who are human, have the grisly comforts of our quarter-million years of natural evolution, but  these evolutionary radicals will be beset by their own ambitions — plus the ambitions of many rival radicals.

They don’t have to be human for you to pity them.  If Milton could pity Satan, you could pity that.  I like it that this effort, the “Prevail Project,” is still framed within that venue, that it’s humanistic, that it’s contemplative, that it’s literary.  That’s why it’s me, a novelist, writing this conflicted essay, instead of it being the emanation of some chrome-plated New Model Superman, or some hooded Inquisitor, hunting down mankind’s heretics.

The “Prevail Project” is an inquiry into culture, law and values; it’s not a political party or a revolutionary movement.   Nobody is going to seize power over genetics, robotics, information or nanotech by compiling some data here, or by joining in these discussions, or by diligently feeding the nonhuman spiders that so busily catalog our texts nowadays.  Contributing to a site like this is a moral act.  It’s like joining Erasmus’s “Republic of Letters,” that “humanist” coterie of the twilight half-enlightened.  A small group, maybe, but they mattered to futurity.

It wasn’t so much that they prevailed, those earnest inquirers who scribbled in their dusty Latin; it was more that they made it possible to imagine a prevalence.

Bruce Sterling is the best-selling and prize-winning author of future fictions such as “The Difference Engine” and “Holy Fire,” that, like our world today, occupy the nexus of the strange and the terrifying. He blogs at Beyond the Beyond.

The Beginning of Infinity

THE BEGINNING OF INFINITY from jason silva on Vimeo.

Consider. Consider the power of Mind to reshape Matter. Consider the notion that evolution has escaped the biological to infect the ideological. Consider that our tools do ever more of ‘our’ thinking, and grow ever more powerful. Jason Silva is very, very excited about the possibilities of the future, and is a welcome antidote to cynicism and catastrophism.

Consider the wisdom we’ll need to use these powers for the betterment of all.

The Lanier Effect

You’re probably familiar with Jaron Lanier. VR pioneer, musician, author of You Are Not a Gadget and far too many articles to mention. He’s also the inspiration for the Prevail Scenario in Radical Evolution, and the Prevail Project in general. And more recently, he sat for an hour-long interview.

The interview and transcript are far too complex to be summarized here, but Jaron attempts to get at this very basic question: if the internet was supposed to connect people, get them access to information and the levers of power, and make the world better, why do people feel less secure and less wealthy today? It’s because we’re giving up our data, our decisions, and our integrity in the name of efficiency and internet fame, without asking if those are durable goods.

What you have now is a system in which the Internet user becomes the product that is being sold to others, and what the product is, is the ability to be manipulated. It’s an anti-liberty system, and I know that the rhetoric around it is very contrary to that. “Oh, no, there are useful ads, and it’s increasing your choice space”, and all that, but if you look at the kinds of ads that make the most money, they are tawdry, and if you look at what’s happening to wealth distribution, the middle is going away, and just empirically, these ideals haven’t delivered in actuality. I think the darker interpretation is the one that has more empirical evidence behind it at this point…

And so when all you can expect is free stuff, you don’t respect it, it doesn’t offer you enough to give you a social contract. What you can seek on the Internet is you can seek some fine things, you can seek friendship and connection, you can seek reputation and all these things that are always talked about, you just can’t seek cash. And it tends to create a lot of vandalism and mob-like behavior. That’s what happens in the real world when people feel hopeless, and don’t feel that they’re getting enough from society. It happens online.

What does Jaron see as the way out? Well, you’ll have to read the article to find out.

Risky Business

Twelve deep thinkers over at The Edge have a series on risk after the Fukushima disaster. I won’t try and reproduce the complexity and subtlety of their arguments, but risk and risk management are at the heart of what the Prevail Project is about. How can we think about risk in a domain of technological uncertainty? What does risk actually mean?

Risk is a modern concept, compared with the ancient and universal idea of danger. Dangers are immediate and apparent: a fire, a cougar, angering the spirits. Risk is danger that has been tamed by statistics; this heater has a 0.001% chance of igniting over the course of its lifespan, there are cougars in the woods, and so on. Risk owes its origins to the insurance industry, and Lloyd’s of London, which was founded to protect merchant-bankers against the dangers of sea travel. While any individual ship might sink, on average, most ships would complete their voyages, so investors could band together to prevent a run of bad luck from impoverishing any single member of the group.
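The logic of pooling can be sketched with a toy Monte Carlo simulation. All the figures here — voyage profits, the value of a lost ship, the 5% sinking probability, the ruin threshold — are hypothetical, chosen only to illustrate how sharing outcomes tames individual catastrophe:

```python
import random

random.seed(42)

PROFIT = 100    # hypothetical profit when a ship arrives safely
LOSS = -1000    # hypothetical loss when a ship sinks with its cargo
P_SINK = 0.05   # assumed chance that any given voyage is lost

def voyage():
    """Outcome of a single voyage for its owner."""
    return LOSS if random.random() < P_SINK else PROFIT

def p_ruin(n_members, threshold=-500, n_trials=20_000):
    """Estimate the probability that a member's equal share of the
    pooled outcome falls below a ruinous threshold."""
    ruined = 0
    for _ in range(n_trials):
        share = sum(voyage() for _ in range(n_members)) / n_members
        if share < threshold:
            ruined += 1
    return ruined / n_trials

# A lone merchant is ruined whenever his one ship sinks (~5% of seasons);
# spread across fifty members, no plausible run of bad luck ruins anyone.
print(f"alone:       {p_ruin(1):.3f}")
print(f"pool of 50:  {p_ruin(50):.3f}")
```

Note that pooling doesn’t change the expected outcome at all; it only shrinks the variance, converting a small chance of total ruin into a near-certainty of modest, survivable fluctuation. That variance reduction is the entire business of insurance.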

This kind of risk is simple and easy to understand. It is what mathematicians refer to as linear: a change in the inputs, like the season, correlates directly to an outcome, like the number of storms, and the number of ships sunk. The problem is that this idea of risk has been expanded to cover complex systems with many inter-related parts. As complexity goes up, comprehensibility goes down, and risks expand in complicated ways. Modern society is “tightly coupled”, a concept developed by Charles Perrow in his book Normal Accidents. Parts are linked in non-obvious ways by technology, ecology, culture, and economics, and failure in a single component can rapidly propagate through the system.

The 2007 financial crisis is a perfect example of a normal accident caused by tight coupling. Financiers realized that while housing prices fluctuate, they are usually stable on a national basis, and so developed collateralized debt obligations based on ‘slices’ of the housing market nation-wide, which were rated as highly secure investments. When the housing bubble collapsed, an event not accounted for in their models, trillions of dollars in investments lost any certain value. Paralysis spread throughout the financial system, leading to a major recession. While this potted history is certainly incomplete, normal accidents are the defining feature of the times. The 2010 Gulf of Mexico oil spill and the Fukushima meltdown are both due to events which were not accounted for in statistical models of risk, but which in hindsight appear inevitable over a long enough timescale.

Statistics and scientific risk assessment are based on history, but the world is changing, and the past is no longer a valid guide to the future. Thousand-year weather events are more and more frequent, while new technologies are reshaping the fundamental infrastructure of society. When the probabilities and the consequences of an accident are entirely unknowable, how can we manage risk?

One option is the precautionary principle, which says that until a product or process is proven entirely safe, it is assumed to be dangerous. The problem with the precautionary principle is that it is different in degree, not in kind. It demands extremely high probabilities of safety, but doesn’t solve the problem of tight coupling. Another solution is basing everything on the worst possible case: what happens if the reactor explodes, or money turns into fairy gold. Systems which can fail in dangerous, expensive ways are inherently unsafe, and we should instead choose systems whose failures have more local consequences. This solution has twin problems. The first is demarcating realistic from fantastic risks; after all, a Rube Goldberg scenario starting with a misplaced banana peel might lead to the end of the world. The second is that it discounts ordinary, everyday risk. Driving is far more dangerous per passenger-mile than air travel, yet people are much more afraid of plane crashes. A framework based on worst-case scenarios leads to paralysis, because everything might have bad consequences, and prevents us from rationally analyzing risk. The end state of worst-case thinking is being afraid to leave the house because you might get hit by a bus.

So the ancient idea of danger no longer holds, because we can’t know what is dangerous anymore, and mere fear of the unknown cannot stand against the impulse to understand and transform through science and technology. Risk has been domesticated in error; a society built on risk continually gambles with its future.

The solution involves decoupling: building cut-outs into complex systems so they can be stopped in an orderly manner when they begin to fail, and decentralizing powerful, large-scale infrastructure. Every object in the world is bound together in the technological and economic network that we call the global economy. We cannot assume that it will function the way it has forever; rather, we should trace objects back to their origins, locate the single points of failure, the places where large numbers of threads come together, and develop alternative paths around those failure points. Normal accidents are a fact of life, but there is no reason why they have to bring down people thousands of miles away from their point of origin.

Lunch with Sheila Jasanoff

Sheila Jasanoff is one of my personal academic heroes*, so her visit to ASU last week was perhaps the highlight of the many lectures I’ve attended so far. I remember back when I was a sophomore, and Shelley Hurt handed us Designs on Nature and said something like “this is a difficult book, but this is a very important book, so pay attention!” Since then, Jasanoff has come up in nearly all of my ASU classes. Her many contributions to the field include a series of brilliant comparative studies of environmental and healthcare regulation in the US, UK, and EU; the idiom of co-production, which explains how “Orderings of nature and society reinforce each other, creating conditions of stability as well as change”; and her latest masterpiece, bio-constitutionality, which I won’t even attempt to explain (wait for the book).

Along with a general lecture on bio-constitutionality, Jasanoff spent a lunch with a group of graduate students, with the goal of helping us become wise. She is an absolute joy to listen to: intelligent, precise, relevant. She hit us with three solid thesis topics in 15 minutes, which almost makes me wish I didn’t have mine set already. But onwards to the meat of the issue.

Jasanoff covered several topics of interest to STS practitioners: how to use theoretical paradigms, comparative studies, and the like. STS is a diverse field, but it shares the common question, “What difference does it make that science and technology are forces in our society?” Methodologically, you can attempt to bash everything into a theory, which leads to rigid, wooden papers, or do pure ethnography, where you go in with no preconceptions, take notes on everything, and hope that at the end of the day, something interesting emerges. Realistically, you need some conceptual guideposts; the challenge is to pick ones that help problematize and explore your research question.

A second topic was how graduate students can change the world. Jasanoff explicitly discourages trying to be an intellectual who changes the world. If you want to change the world, go do it! Be a politician or an engineer, and make things; don’t be a critic or adviser and try to sidle towards influence that way. One person asked about policy relevance, which Jasanoff is also not a big fan of. Being policy relevant means chasing the headlines, trying to use scholarship to beat professional spin-doctors and lobbyists, and that’s a race a good scholar will never win. At best, you’ll be captured by the kinds of people who control Washington DC, and who wants to work for them? Instead, she advised: “If you succeed in crafting a voice, and talking about interesting things, the right people will find you.”

I asked about my perennial hobby-horse, the lack of conservative scientists, and conversely, the lack of credibility that science has for conservatives. While there is some truth to the idea that scientists like big government because it pays for their labs, that model is overly simplistic. Rather, in her view, scientists have become arrogant, and have failed to justify their support to the public. (True enough: Science, the Endless Frontier is still the primary justification for federal R&D, and it’s 60 years old.) Scientists shouldn’t discredit Palin et al.; rather, they must be humble, must empathize, and must understand why arguments about big-government encroachment are effective in these situations. Theories of public irrationality are profoundly anti-democratic; it’s anthropologists hunting for fuzzy-wuzzies in their backyard. Scientists have effectively abdicated their public position, with disastrous results. “The Enlightenment was not a historical event. It is a process, a mission, a continuous duty to explain yourself.”

*for the record, Sheila Jasanoff is my role model, Bruce Sterling is my guru, and Robert McNamara is my idol.
