Category: Reading Responses


Brand You

In 2006, the editors of Time magazine named “You” (yes, you) their “Person of the Year.” Some may consider this blanket accolade a blatant and shameless attempt to grab the attention of Boxing Day shoppers who passed by newsstands on their way to the biggest deals of the 2006 holiday season. And, considering the Christmas date of the issue’s release, it probably was. But the honour may not have been bestowed entirely without merit. After all, Time’s celebration of You was really more of a tribute to the technology that allows You to broadcast yourself to anyone who cares to listen: with the advent of Web 2.0 websites came a “New Digital Democracy,” whereby anyone with an Internet connection may lobby, postulate, and discuss ideas with a mass audience. In effect, Time’s dedication to You was a signal for You to get blogging, YouTubing or Facebooking (if You have not been already), almost as if it were your civic duty.

However, consistent with the technological development of the Internet as a whole, this utopian vision of Web 2.0 websites has been met with its share of criticism. In fact, in the “Talk Back” section of the 2006 Time Person of the Year issue, a reader named Eli Stephens pointed out the irony “in having named ‘us’—bloggers, YouTubers, Wikipediasts, and others expressing ourselves on the web, as [Persons of the Year], but then, despite talking about ‘digital democracy,’ not even bothering to MENTION the results of [Time’s] online poll [for Person of the Year], won by Hugo Chavez in a landslide.” Stephens’s comment reminds us, perhaps, that the truly authoritative voice (so far) remains in print. But even this, with diminishing newspaper sales as proof, is becoming less of a concern for online publishers.

What has become a bigger concern for users of Web 2.0 technology recently is the debate over whether too much information is published online. Are those who publish personal information or opinions on the frontier of “new democracy” opening themselves up to public scrutiny or harassment? How secure is the information entered behind the walls set up by popular Social Network Sites? How do our online personas reflect our offline identities? These questions have become particularly pressing of late due to the growing use of Web 2.0 websites by employers who are looking to find out more about their job applicants. Horror stories of hopeful job applicants who have had their dreams of employment dashed by an ill-advised Facebook photo or inebriated tweet can be found all over the Internet. But as popular marketing guru Scott Stratten would tell us, for every opportunity we are given to fail online, we are given a reciprocal opportunity to “be awesome.”

In this study I will define “Web 2.0 technology” and “Social Network Sites,” and explain why skepticism surrounds these media regarding their use as professional communication tools. I will then use rhetorical theory to explain why and how these media—specifically websites like Facebook, LinkedIn, Twitter and blogging sites—should be used to cultivate an online persona.



Traceability, Metadata, and Stiegler

When I was creating my seminar presentation on Stiegler, I had a couple of topics that I wound up cutting, both to meet the time constraints (which I failed to meet anyway) and because I felt they were less relevant to the overall course material. Still, I thought it would be nice to get some use out of them, so here they are, slightly refitted to make a coherent blog post:

Traceability

So the issue of “the trace” comes up a couple of times in the interview, the first time in the context of the traces we want to leave behind through technology in order to preserve some sort of immortality for ourselves:

“…technical objects that are made for repeating memory itself…that are made to store mnemic traces. Because from the moment when we can store memory itself…we have the possibility of the repetition of something mortal” (3).

However, Dr. O’Gorman brings up an important issue with the existence of that trace in terms of how it can be used; that is, its potential threat:

“It’s important to delete. Because if we’re always in the process of self-recording, self-archiving, exteriorizing our memories, we leave traces everywhere as a result. And this could be dangerous” (6).

I take his point to be, and please correct me if I’m misinterpreting, that there is a double danger: ever-increasing surveillance by those in power on the one hand, and, on the other, the ease with which the little traces of ourselves left everywhere can be exploited by the “programming industry.”

Interestingly, Stiegler argues against the need to eliminate the trace, saying instead, “The question is not how to prevent the recording of traces. The question is to create a consciousness of the recording of traces, a politic of the recording of traces” (6).

Now, it’s pretty much a given that wherever you go these days you’re being recorded, leaving a trace of yourself somewhere. We also often intentionally leave traces of ourselves around, especially on the internet (look at this blog, for instance). This can be a bit overwhelming, and at the same time it’s something we tend to avoid thinking about. I just want to provide one example of how we are traced online:

Stiegler mentions leaving traces while doing Google searches, which reminded me of the little “feature” by which Google records my location and uses it to serve me more focused “hits” (and advertising). Even better, it can’t be turned off.

So yes, Google not only knows what I like to search for, but where I live, and they’re storing that information. Another example would be Amazon, which likes to record my purchases, or even just the items I’ve looked at, and helpfully suggest new consumables for me to buy.
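To make the sort of thing being stored a little more concrete, here is a purely hypothetical sketch of what a single search request might leave behind on a provider’s servers. None of the field names or values come from Stiegler, Google, or Amazon; they are invented for illustration.

```python
# A hypothetical trace record for one search request (invented fields and values,
# not any provider's actual logging schema).
from datetime import datetime, timezone

search_trace = {
    "query": "bernard stiegler pharmacology",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "ip_address": "203.0.113.42",        # documentation-range IP; resolves to a rough location
    "approx_location": "Example City",   # inferred from the IP or account settings
    "user_agent": "Mozilla/5.0 (...)",   # browser/OS fingerprint
    "account_id": "user-1234",           # ties this search to every previous one
}

# Aggregated over months, records like this are what turn individual searches
# into a profile that can be used to focus results and target advertising.
print(search_trace["query"], "->", search_trace["approx_location"])
```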

However, it’s also possible to use the traces ourselves and turn them back on the “programming industry,” as well as on centralized authorities, which I think is what Stiegler is getting at with his “politic of the recording of traces”: basically, an increased awareness of these traces and of how to use them. A video recording is a more literal example of a “trace” than what we usually think of when we hear the word, but the fallout from the fatal tasering of Robert Dziekanski by the RCMP demonstrated how the traces left by authorities committing serious, illegal, and in this case lethal, injustices can lead to those authorities being caught in their lies by bystanders.

And of course, to truly be rid of the trace, you would have to be rid of all technology, so there’s that, too.

Moving on to the other deleted slide, I also wanted to discuss metadata:

Metadata

“We are living in an epoch that, because of digital networks, and in particular of course the web, the Internet, something is being produced that never before existed, in my view not since the origin of humanity. It’s that everyone can participate in the production of metadata. Metadata have existed for over 3000 years, when they appeared in what is actually Iraq now, in Mesopotamia. And since then, up until the 1990s, there has never been a situation where everyone could produce metadata. It was always very particular and very centralized systems, systems of power, which took control over the production of metadata” (6).

So, first, you will all be terribly surprised to learn that “metadata” is data about data. In my own experience, I’ve seen “metadata” online refer to everything from the HTML meta tags that describe a page’s own content, to those little star ratings people now leave beside restaurant reviews, and, of course, the ever-popular “like” button on Facebook.
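As a concrete (and entirely my own) illustration of that first, “background HTML” sense of the term, here is a minimal Python sketch that pulls the metadata out of a page’s meta tags using only the standard library; the sample page and its values are invented for the example.

```python
# Minimal sketch: extracting "data about data" (the <meta> tags) from an HTML page.
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collects the name/content pairs from <meta> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name = attrs.get("name") or attrs.get("property")
            if name and "content" in attrs:
                self.metadata[name] = attrs["content"]

# An invented sample page, used only to show what gets collected.
page = """
<html><head>
  <meta name="description" content="A reading response on Stiegler and traces">
  <meta name="keywords" content="Stiegler, metadata, traceability">
  <meta property="og:title" content="Traceability, Metadata, and Stiegler">
</head><body>The visible content goes here.</body></html>
"""

collector = MetaTagCollector()
collector.feed(page)
print(collector.metadata)
# {'description': 'A reading response on Stiegler and traces', 'keywords': '...', 'og:title': '...'}
```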

Now, in the context of the trace, this mass participation in the production of metadata feeds directly into the issue of our own traceability: in adding information about information, we’re leaving traces of our own thoughts, opinions, and tastes lying around the internet. It’s worth noting that a lot of companies are very keen on encouraging metadata production by consumers, since, among other things, it can help with their marketing.

However, Stiegler does make a good point that this mass production of metadata marks an important shift in who controls it. Now, any time a corporation says something, it is possible for people to easily comment on what the corporation said and spread that comment to a worldwide audience.

Also, in the face of the disindividuation created by the “programming industries,” this change in metadata production can potentially lead to a form of online transindividuation, as people use metadata as a form of social information production (though, of course, this should be contrasted with the potentially isolating effects of an online “social” environment that cuts us off from face-to-face interaction).

Anyway, while I ultimately found these topics a bit removed from the main thrust of my presentation (although I did go ahead and talk about flash mobs), I was intrigued by Stiegler’s notion of democratizing all these new media-enabled abilities (the trace, metadata, “happenings”), and especially by the tension between their use as tools of the people and as tools of the programming industries.

Work Cited

O’Gorman, Marcel and Bernard Stiegler. “Bernard Stiegler’s Pharmacy: A Conversation.” Forthcoming in Configurations.

Introduction (Ihde, Bodies in Technology)

“We are our body in the sense in which phenomenology understands our motile, perceptual, and emotive being-in-the-world. This sense of being a body I call body one. But we are also bodies in a social and cultural sense, and we experience that, too … I call this zone of bodily significance body two. Traversing both body one and body two is a third dimension, the dimension of the technological.” (xi)

I find this division of the body into two equal but separate aspects to be particularly effective for describing how we compose ourselves in the world; it is certainly more useful, and clearer, than trying to collapse body one and body two into a single being. Body one might encompass the identities we consciously construct, whether online or in person, whereas body two speaks to the (I suppose) subconscious ideologies that inform our identity in ways that are not immediately obvious.

 

“… the most familiar role within which we experienced and re-experienced being a body was what I have often called an embodiment relation, that is, the relation of experiencing something in the world through an artifact, a technology.” (xi)

This speaks to the previously encountered thesis that humans are (and have always been) prosthetic creatures, so much so that we take for granted that many (most? all?) of the technologies we use in daily life are tools that augment our bodies or extend our capabilities beyond natural limits.

 

“Technofantasies can begin quite young… In both examples the technofantasy was based upon the intersection of technologies and human desires in both bodily and social dimensions.” (xii–xiii)

It’s significant that our utopian view of technology begins at such a young age (kindergarteners, in this example). In “Phil-Tech Meets Eco-Phil,” Ihde suggests that the dystopian bent in much of modern Phil-Tech and Eco-Phil is a consequence of the utopian promises of technology going unfulfilled; perhaps the seeds of that dystopian cynicism are planted in the naivety of childhood.

 

“Yet these fantasies are actually mild ones compared to the bodily social fantasies now being promoted by techno-utopians… In this mode of technofantasy, our technologies become our idols and overcome our finitude.” (xiii)

What is transhumanism (or posthumanism in the vein of genetic and technological augmentation, of triumphing over the human condition) if not a utopian philosophy of technology? Humanism suggests that reason and scientific enquiry will solve the “human” problem. Transhumanism and techno-utopianism suggest technology will do the same: that we will triumph over the human condition and its associated problems (pollution, deforestation, animal abuse, etc.) through the proper implementation of technology. Ihde believes such a utopian view is ultimately counterproductive.

 

Chapter 8: Phil-Tech Meets Eco-Phil

“The godfathers, Martin Heidegger, Jacques Ellul, and Herbert Marcuse as the most popular, portrayed technologies as Technology, a sort of transcendental dimension that posed a threat toward culture, created alienation, and even threatened a presumed essence of the human.” (113)

We take for granted how deep the undercurrent of dystopian cynicism about technology runs in our society. Much of modern sci-fi fixates either on technology run amok (as in Battlestar Galactica) or on technology as an enabler of violence and exploitation. Not that the techno-utopian sci-fi of Star Trek or even CSI (where the happy resolution is only ever a lab result away…) should be ignored, but it is a truism that we in the Western world have a love/hate relationship with technology, one that in formal philosophical terms dates at least to Karl Marx and Marxist theory. Even as technology promises to free us from the pain and misery of the human condition, it threatens to strip us of our essential humanity.

 

“I cite [Hans Jonas’ ethics of fear] to illustrate what I take to be a deep set of intellectual habits that seem to be common to many both in environmental studies and in much philosophy of technology: congenital dystopianism.” (114)

I like that term. A dystopianism present at the birth of a philosophy.

 

“Within the precincts of the [Society for Philosophy and Technology], the best-known institutional group for philosophy of technology in North America, many commentators have noted the dominance of the dystopian. If there are godfathers of SPT, they have been Ellul, Heidegger, and the Karl Marx of industrial capitalistic alienation. Every one of these godfathers … displays some variant upon the ways in which Technology has become the degrading metaphysics of late modernity and, insofar as environmental issues enter the scene, is taken to be, in industrial embodiments, the primary cause of environmental degradation.” (114)

An interesting thought. I’m not sure what more there is to say about it that hasn’t already been said, though I am interested in the influence of Marx on Phil-Tech and it’s something I’ll have to follow up on. I’m familiar with the Marxist fear of dehumanization at the hands of technology, but I generally view that in relation to industrial technology and not Technology in general.

 

“I am terming this the rhetoric of alarm. It is correlated to Jonas’s ethics of fear and its purpose is – in Paul Revere fashion – to awaken the listener to the dire fate of a presumed environmental catastrophe, the late modern equivalent of redcoats. Historically, the rhetoric of alarm is the flip side of nineteenth-century progressivist utopianism.” (114)

When I first read “rhetoric of alarm” I was immediately reminded of the Bush presidency and its politics of fear. The rhetoric of alarm is somewhat related, though it appears to be less about the cynical manipulation of a message and its audience and more about an inordinate passion for, or naively dystopian view of, a situation. An Inconvenient Truth employs such a rhetoric. Ihde relates it to Hans Jonas’ ethics of fear, but here again the “ethics of fear” is less about frightening people than it is about opening people’s eyes to the severity of a situation so that they act accordingly.

Ihde’s later suggestion that we do best to avoid the extremes of utopianism and dystopianism is well taken, but I can’t help but think his admonishment of the rhetoric of alarm is misplaced. That rhetoric is not dystopian in the sense that it views the world through a cynical lens; rather, it warns of the consequences of inaction. Whether the warning is exaggerated is largely a function of political agenda or of time, but I don’t think it’s productive to suggest that a well-educated person in a position to make such predictions should feel obligated to diminish the impact of their hypotheses or otherwise soften the blow so as to… what, promote civil discourse? And unfortunately, Ihde chooses extremely poor examples to highlight the limits or drawbacks of the rhetoric of alarm. His larger point about extremes still stands, but it stands because it is a truism and not because it is particularly profound.

 

“Indeed, it may well have been that the utopian promises of industrialization and technologization at the turn of the century, by their very overextrapolation, led to part of the flip phenomenon in the mid-twentieth century.” (115)

I’m reminded here of what he said about technofantasies amongst the kindergarteners, though it would also be helpful to keep Marx in mind when trying to determine why modern Phil-Tech has a dystopian bent.

 

“Excessive rhetorical strategies are often ill-founded and cause more harm than good.” (115)

That’s a good point, except that he goes on to cite Malthusian extrapolation as an example of something that deploys such a strategy, and I can’t help but feel that it is an extremely poor example to choose, if for no other reason than that the basic premise of Malthusian extrapolation is sound and has been vindicated in numerous microcosmic cases (Easter Island, for example). Ihde also either deliberately or inadvertently misrepresents what Malthusian extrapolation states: he says it is an unsound theory because, if a population outgrows its food supply, there will be a die-off that corrects the balance, but Malthusian extrapolation warns against exactly that die-off. It tries to prevent the need for one by awakening people to the problem of overpopulation. To claim Malthusian extrapolation is incorrect because a die-off will restore the population balance is like saying Peak Oil is incorrect because people will simply stop using oil once it runs out.
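For reference, and in my own paraphrase rather than Ihde’s or Malthus’s notation, what the extrapolation actually claims can be put in two lines: population is projected to grow geometrically while subsistence grows only arithmetically, so the former must eventually overtake the latter.

```latex
% My paraphrase of the standard Malthusian contrast (symbols are mine, not Ihde's):
% population grows geometrically, food supply only arithmetically.
\[
P(t) = P_0\,r^{t} \quad (r > 1), \qquad F(t) = F_0 + k\,t \quad (k > 0),
\]
% so no matter how large the initial surplus, there is some time $t^{*}$ beyond
% which $P(t) > F(t)$. The projected die-off is the warned-of consequence of
% crossing that point, not a refutation of the projection itself.
```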

 

“Malthus himself eventually recognized this and modified his earlier theses.” (115)

While it’s true that he qualified his earlier theses, I do not believe he ever reversed his position on the problem of overpopulation.

 

“An intellectual habit found in both philosophy of technology and ecological circles of philosophers, which has been applied, in my opinion, as badly as excessive rhetorical strategies, is the tendency to see problems as macroproblems but to propose microsolutions.” (117)

I think this is a good point, exacerbated as much by government bureaucracy as by the general inexperience human society has with planning at the macro scale. However, Ihde goes on to cite recycling and toxic chemical bans as examples of microsolutions to macroproblems, and once again he misses the mark. He suggests that recycling falls short because it will never solve the waste problem, and that banning a toxic chemical merely “displace[s] the problems into different contexts” (117), but this ignores the fact that nobody believes recycling in and of itself is the ultimate solution to waste and garbage, and that certain toxic chemicals, such as DDT, are so dangerous that they must be outlawed even if there are no viable alternatives immediately available. His initial premise, that these are offered as microsolutions to macroproblems, is false: recycling may be a microsolution, but it is not a microsolution that pretends to be a macrosolution. Furthermore, his implicit argument that macroproblems require singular, monumental “macrosolutions” is naive.

 

“What I am pointing to is the tendency, the intellectual habit, to think ‘small is beautiful,’ which is, to my mind, equivalent to a form of nostalgic romanticism found among philosophers of technology and ecologists.” (117)

I think this is an interesting interpretation of the phenomenon, though I am not altogether convinced that the prevalence of microsolutions is a result of the same contrived thinking that led to such policies as “appropriate technologies.” Appropriate technologies are a holdover of colonialist and paternalist thinking; microsolutions seem to be as much a result of human weakness as of, perhaps, wishful thinking… “Wouldn’t it be grand if microsolution X could solve macroproblem Y? Let’s act as if it will, and see if it works…”

 

“What I am really arguing is that we have not yet fully diagnosed either what our technologies can or should do, or what the environmental crises are.” (119)

This may be true, though he does not provide any evidence on which to base this claim. I am inclined to believe it because I am also cynical.

 

“In short, the solutions to technoenvironmental problems that have worked call for better technologies rather than older, simpler, or no technologies. While I am far short of advising that high-tech solutions automatically solve the problems, I am suggesting that retroactive romantic returns to previous low-tech or simpler solutions sounds to me like a Bob Dole form of environmentalism. Take solutions from whence they come.” (121)

An important sentiment, especially as a counter to the Luddites and techno-dystopians who would otherwise scorn technology or ignore its potential for change. However, it strays awfully close to the techno-utopianism he earlier maligns, particularly insofar as he does not suggest social change as a supplement to technological refinement. Doubly so when he suggests that Phil-Tech and its philosophers will ultimately be responsible for resolving today’s environmental problems (hence his advocacy that they be more directly involved in private-sector R&D).

 

“The point of this example is that when green processes can be demonstrated to produce lower costs or contribute to higher profits, corporations will adapt accordingly.”

The Marxist in me bridles at the idea that the path to a green future is paved with economic incentives for corporations and industries. If ignorance or deregulation has created environmental catastrophe by allowing corporations to run amok, it is an overly indulgent tactic to suggest that we should solve our problems by essentially paying corporations to act in a moral or socially responsible manner. The incentive should be punishment for failing to act responsibly, not a special economic reward for acting responsibly at all.

 

“I want to indicate that all technoenvironmental problems are complex, ambiguous, and interwoven. The tasks are not easy, but neither utopian nor dystopian attitudes ultimately help.”

A good point, if perhaps another truism… and a fair summary of the chapter overall.