
If there are diverse kinds of knowledge and ways of knowing place, then we need to learn to value the different ways each of us sees a single place that is significant, but differently so, for each perspective.
www.thought-factory.net
looking for something firm in a world of chaotic flux

global village & Web2
September 27, 2005

I've always thought the old talk about the global village was a bit suss.

In postmodernity creating a global village means that everything and everyone is online all the time. The implication is that geography (place) no longer matters.

Now we do live in a network society. Though I'm online most of the time I don't live in the global village as such.

I live in Adelaide and Canberra--spaces that some call local in contrast to global, but which I understand as particular places. We do need to break out of the old global village model favoured by the cosmopolitans.

In commenting on the Web2 conference Zephoria over at apophenia says it well:

On an economic level, globalization has both positive and negative implications. But on a personal level, no one actually wants to live in a global village. You can't actually be emotionally connected to everyone in the world. While the global village provides innumerable resources and the possibility to connect to anyone, people narrow their attention to only focus on the things that matter. What matters is conceptually "local." In business, the local part of glocalization mostly refers to geography. Yet, the critical "local" in digital glocalization concerns culture and social networks. You care about the people that are like you and the cultural elements that resonate with you. In the most extreme sense, the local is simply you alone. There is certainly a geographical component to the local because the people in your region probably share more cultural factors with you and are more likely connected to you in network terms, but this is not a given. In fact, the folks who were most geographically alienated were the first on the digital bandwagon---they wanted the global so that they could find others like them regardless of physical location.

Glocalization is such an ugly word. I much prefer place. I never understood the local as being alone. Place is the regional.

What then is Web2? What does it signify?

William Blaze at Abstract Dynamics has a go. He says Web2 refers to a

"... real feeling among some that there is something going on that makes the web of today different then the web of a few years ago. Blogs, open standards, long tails and the like... Which of course doesn't sound that different then say the goes of the plain old unnumbered "web", back ten years ago. But the Web 2.0 are right, the web is different now....What really separates the "Web 2.0" from the "web" is the professionalism, the striation between the insiders and the users. When the web first started any motivated individual with an internet connection could join in the building. HTML took an hour or two to learn, and anyone could build. In the Web 2.0 they don't talk about anyone building sites, they talk about anyone publishing content. What's left unsaid is that when doing so they'll probably be using someone else's software. Blogger, TypePad, or if they are a bit more technical maybe WordPress or Movable Type. It might be getting easier to publish, but its getting harder and harder to build the publishing tools. What's emerging is a power relationship, the insiders who build the technology and the outsiders who just use it."

Well that's right. This weblog depends on professional support. I had no choice. It was a necessity. William mentions another difference. He says that 'the world of RSS feeds, abundant APIs and open source code really is a major departure from the "own and control" approaches of an earlier generation of companies [Microsoft] and something I'm personally in favor of.' However, he quickly qualifies this. He adds:
Privilege is what the Web 2.0 is really about. What separates the Web 2.0 from that plain old "web" is the establishment and entrenchment of a hierarchy of power and control. This is not the same control that Microsoft, AOL and other closed system / walled garden companies tried unsuccessfully to push upon internet users. Power in the Web 2.0 comes not from controlling the whole system, but in controlling the connections in a larger network of systems. It is the power of those who create not open systems, but semi-open systems, the power of API writers, network builders and standards definers.

This is the networked world of Amazon, Ebay and Google.
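Since "RSS feeds and abundant APIs" carry a lot of weight in Blaze's account, a concrete sketch may help: an RSS feed is just an XML document listing a weblog's recent entries, and anyone with a few lines of code can consume it. The feed contents and the function name below are invented for illustration, using only Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document of the kind a 2005-era weblog publishes.
# The titles and URLs here are made up for the example.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>philosophy.com</title>
    <item><title>global village &amp; Web2</title>
          <link>http://www.thought-factory.net/archives/001.html</link></item>
    <item><title>place and knowledge</title>
          <link>http://www.thought-factory.net/archives/002.html</link></item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (channel_title, [(item_title, item_link), ...])."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = read_feed(SAMPLE_FEED)
print(title)        # philosophy.com
print(items[0][0])  # global village & Web2
```

This low barrier on the reading side is exactly the asymmetry Blaze points to: consuming feeds is trivial, while building and hosting the publishing tools that emit them is not.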



Posted by Gary Sauer-Thompson at 05:59 PM | Comments (17)
Comments


I think he is upset that the web grew beyond techies. What he calls the "web" had techies and geeks posting HTML-based websites. Those same techies then produced the software for modern-day blogs. HTML was a pain in the arse to maintain. I tried running multiple sites as HTML, and then later XML. It just sucks. That included the afc site, which had close to 500 static pages. Dynamic sites like Scoop, WordPress, Blogspot etc are just a million times easier. He is forgetting that techies wrote those systems for themselves, and then realised they were highly usable - and saleable.
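The maintenance gap described here - hand-editing hundreds of static HTML pages versus regenerating them all from one shared layout - can be sketched roughly as follows (the template and post data are invented for illustration):

```python
from string import Template

# One shared layout: change it here once, then regenerate every page,
# instead of editing hundreds of static HTML files by hand.
PAGE = Template("<html><head><title>$title</title></head>"
                "<body><h1>$title</h1><p>$body</p></body></html>")

# Hypothetical post data standing in for a weblog's entries.
posts = [
    {"title": "global village & Web2", "body": "I've always thought..."},
    {"title": "place and knowledge", "body": "If there are diverse kinds..."},
]

# Rebuilding the whole site is one pass over the data.
pages = [PAGE.substitute(p) for p in posts]
print(len(pages))                                    # 2
print("<h1>global village & Web2</h1>" in pages[0])  # True
```

Blog engines like Movable Type worked on essentially this pattern, regenerating static pages from templates plus a database of entries.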

I disagree with his claim of web 2.0 being about semi-openness. He is forgetting that Google is only seven years old. I can recall when HotBot was the hottest thing around. The barriers to entry are so low, and the market so large, that it will probably go through constant reinvention. Ten years ago I was using Trumpet Winsock; now my iBook connects to a wireless network and I don't have to worry about dialing or configuration.

Another decade from now there will be some other explosion, change, or whatever. Google will probably still be a player simply because of all the money they have tucked away, same as Microsoft is still a player despite not having an innovative hit since Win98. They have money.

But the internet is volatile, and its users fickle. I expect I won't be able to recognize or predict how I interact with the internet ten years from now.

Cameron,
I find the WEB2 understanding of the internet as a platform a bit questionable--platform for what? Multinational corporations making money?

I much prefer the flows and nodals or hubs concept myself.

I wasn't a part of the early Web days. I sat alone in a room in academia writing a PhD on Heidegger using an Apple Classic computer. I stepped into the internet through the user-friendly Blogger, and so I was a part of, and swept along by, the formation of Web2.

So I take your word for the history of the internet. Peterme concurs with your criticism of Blaze.

However, I do think that William Blaze is right when he says:

Web 2.0 is a pretty amorphous thing, but there is something there and everyone wants to finger it.

Things have changed, even in the couple of years I've been blogging. It is more than the increase in professionalism--it's a new plateau or level that has to do with facilitating the flow of information in a networked society.

My intuition here is reinforced by the likes of this, and this and this. Nearly all my material for my weblogs comes from the internet.

I've noticed a few trends through the several generations of the net (Usenet/webpages/forums/blogs) I've been involved in, all of which occurred because the net has needed to handle an ever-increasing number of users.

The first is fragmentation. Usenet encompassed everything. Although you could make groups for specific topics there were two problems. One, people generally wanted a broader picture than that - they aren't just interested in security in Java - and the discussions tended to stultify. Two, there are limits to that, and once you breach them you just can't read that many posts. Web forums are better than Usenet because you can have twenty running parallel to each other. Blogs are better again because you can specify specific authors while still getting the discussions. Some bloggers still reach that same limitation and their comments are unreadably long, but it is possible to have meta-blogs that focus on the same issues for a slightly different group.

Which falls nicely into the second trend which is localisation. Every single generation has tended to become more localised. You start talking to a global community but as it fragments into smaller groups they tend to split on geographical lines; in part because the discussions are easier to relate to, but also because the web often branches out into the real world as you meet with people.

And the third is as you said Gary: professionalism. I remember when Cricinfo was a bunch of tech-nerds; now it is a major organisation affiliated with Wisden. The problem with professionalism is it tends to also mean exclusion (even if it is virtual). What starts as a philosophical discussion group becomes an academic forum. This has a plus side because you can assume a more knowledgeable audience and have more informed discussions. The negative is that (like academic institutions themselves) it can lose its relevance without people questioning and learning.

I am not sure how many more generations we'll have. There are still a lot more people who could get online, but I am not sure how many more necessarily will. At least in Western societies.

Russ, One of the problems of a decentralized system that anyone can provide input into is moderation. Slashdot appeared because Usenet wasn't very good at moderation. k5 appeared because Slashdot's system had its own limitations. Every technology on the internet has had some response to the problem of moderation in an open system.

It is interesting to see blogs go back to the more open style of not requiring an account to contribute as a commenter. They also don't moderate, other than what is usually a single editor who deletes repugnant posts. But blogs that scale beyond six people posting have their own issues, which can require community participation or a strong editorial team.

I think open input vs moderation will be a constant tension on the internets and drive many of the technologies.
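The moderation systems mentioned here (Slashdot's being the best known) share one core mechanism: comments carry scores that moderators move up or down, and each reader filters at a threshold of their own choosing. A toy sketch of that idea, assuming Slashdot's historical -1 to 5 score range:

```python
# Slashdot-style threshold moderation, reduced to its core idea:
# every comment starts at a baseline score, moderators nudge scores
# up or down, and readers choose the minimum score they will see.

def moderate(comments, votes):
    """Apply (comment_id, +1/-1) moderation votes to a baseline score of 1."""
    scores = {c: 1 for c in comments}
    for comment_id, delta in votes:
        # clamp to Slashdot's historical -1..5 range
        scores[comment_id] = max(-1, min(5, scores[comment_id] + delta))
    return scores

def visible(scores, threshold):
    """What a reader browsing at `threshold` actually sees."""
    return [c for c, s in scores.items() if s >= threshold]

scores = moderate(["a", "b", "c"],
                  [("a", 1), ("a", 1), ("b", -1), ("b", -1)])
print(visible(scores, 1))  # ['a', 'c'] -- 'b' has dropped below the threshold
```

k5 and later systems varied who gets to vote and how the votes are weighted, but this score-plus-threshold skeleton is the recurring answer to moderation in an open system.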

Cameron, that's true too. Usenet built in moderation on some groups and it does decrease the amount of 'noise' on a group. I think my earlier point is that those cultural constraints tend to break down once you get more than 50-100 regular commenters. But I am not sure there is a technical solution to it, except for some of the people on the forum to leave and form another (possibly more exclusive) group.

How does the open source software movement fit into the control side of Web2?

Does open source software (eg., OpenOffice) fit into Web2? If so, how? How about Moodle? Does it mean supporting OpenDocument? Or the Firefox browser?

Or is Web2 more a business model based around targeted advertising and Microsoft or Google? Business (competition and advertising) is what I see as the new development alongside personal publishing.

Russ, Your comment on communities constantly splintering as group behaviour and identity outgrow the technology fits with your earlier comment about the internet being driven by increases in the internet population. I still think the internet is volatile enough that technology from the techies/geeks drives it. Because of the low barriers to entry with software and the internet, not everything is profitable, so we have free backbones of the web, like Apache, co-existing with profitable web backbones like Google. I think this makes the web superior to the capital markets. Robber barons are unheard of on the web. Tech's biggest old-style robber baron, Microsoft, has to compete on the web - something the capital markets and regulatory system couldn't get them to do.

Btw this is posted from a BlackBerry, which is changing how I interact with the internet. Ironically, the always-connected nature of this device probably points to the next population explosion on the web being, not human, but semi-autonomous devices.

Gary, it depends a little on how you think of Open Source. I prefer to think of it as the opposite effect to the Tragedy of the Commons. There, the more users you have the worse the common area becomes. Bandwidth, for example, is a finite physical resource which is why we pay for it. Software on the other hand can be copied indefinitely and actually increases in value with more users. The actual costs are the cost of programming it and the cost of distribution.

Open Source effectively outsources those costs to its user-base. They distribute it because they download it, and they actually program it (or debug it) because they want some piece of added functionality, or for enjoyment, or altruism (in that order). Hence, there will be a role for Open Source in Web 2.0 [to be honest I think it is just another buzzword to describe the long-term trends I mentioned earlier] because there will always be techies like me who write their own blogging software etc.

Open Standards are more of a privatisation issue. They are only valuable if people use them regardless of who writes them. A closed standard is only an issue if it stops people doing what they want to do; it would be a foolish standard owner who didn't respond to demands for change.

I read an interesting article a few years ago (which I can't find) that pointed out that the key to the internet's success is the simplicity of its structure and the ease with which people (techies) can add to it. As described, Web 2.0 would seem to be a new layer of complex technologies over the top of that simplicity, under the auspices of a few API companies. As good as some of these APIs are, I doubt they'll be entirely successful.

Cameron,

is not power a far greater limitation to computer/internet usage, reform and progress than usability?

I have in mind Telstra blocking broadband and the increasing payment for services that is a departure from the current system of everything free.

Gary, "How does open source software moverment fit into the control side of Web2?"

I reckon "web2" is a furphy anyway. Open source is the ultimate expression of commoditisation. Any market open source enters becomes commoditised, and the market price of that product is zero. The value of open source is the productivity that comes from the organization of developers and users. It is post-capitalistic, not requiring capital to out-compete for-profit companies. It is also the most productive form of organization in a pure commodity market.

I am bright-eyed enough to see it as the first step in a post-capital world that is brought on by abundance and the inevitable commoditisation of all products from that abundance.

As to the business side of things, the low cost of information transferral and storage means that "the long tail" can be tapped with ease. Google, Amazon and iTunes are long-tail companies. Microsoft is not; they are a robber baron company of the capital-intensive era. It is also why Microsoft's share price has been languishing at $20 USD for four years now.

"is not power a far greater limitation to computer/internet usage, reform and progress than usability?"

Not in commodity markets. The internet will route around damage. Free markets are supposed to as well, but are subject to the limitations of capital intensive industries, monopolies and government regulation/interference.

Because of the decentralised nature of the internet, it will route around any attempt to slap power over it. Have a look at the peer-to-peer software. The RIAA is going after people legally by collecting their IPs, so the P2P community is innovating to encrypt and obfuscate identities. Decentralised systems respond far more rapidly than centrally controlled ones. Government can't legislate over it.

"I have in mind Telstra blocking broadband and the increasing payment for services that is a departure from the current system of everything free."

This is why I am an advocate of opening up the Australian spectrum to the public like 802.11 has. The government treats the spectrum like a scarce resource and auctions it off to companies who raise large amounts of capital for the privilege. WiFi has shown us that spectrum can be treated as an abundant resource.

If the spectrum was open, we would see a tonne of innovation in Australia; it would also allow innovative companies (and non-monopolies) to route around the damage in Australia, which is Telstra. As you have pointed out, both the Federal government and Telstra are responsible for that mess.

I recall reading in 1997 (I think it was MIT's Negroponte) a comment that "What we put through the ground today, we will put through the air, and vice versa". Phones are making the transition. I have two mobile phones (one normal, one BlackBerry), yet I only have one home phone. If I didn't have to ring internationally, the home phone would probably be dumped. My internet is WiFi too: I have a WiFi antenna on my roof which beams into a tower in town, where the ISP connects it to a DS3. I have satellite TV too.

My telecommunications are getting to the point where they are all through the air.

Cameron,
your description of Microsoft as a robber baron company of the capital-intensive era is very apt.

So how do you account for all the academic work in journals being locked up behind financial walls? Is the decentralized nature of the internet getting around that closed-knowledge syndrome? Or are we in the early stages of resistance to universities turning away from a common public world?

By power stifling change I mean the way the anti-competitive Murdoch/Packer alliance are going to block multi-channelling (new digital channels) on free-to-air television, or the development of new digital information services. So media reform by the Howard government will be limited to putting an end to cross-media and foreign ownership restrictions whilst blocking the entry into a digital media world.

We are stuck in the world of the old media with the same old players playing the same old media games.

That sort of power means that Australia is well down the international league ladder when it comes to broadband access nationwide, and so to supporting and providing the electronic infrastructure that connects Australians to the global information economy.

That means, here in Victor Harbor, which is on the urban rim, we are living in a pre-digital world.

I found that article I mentioned. Actually, someone else linked to it, which is a nice coincidence.

To draw an analogy. The internet is a road, not a vehicle. Web 2.0 may well be the Model T Ford of internet development, but it won't preclude people from walking/cycling or developing their own cars. It just makes it easier for large numbers of people to do their thing.

Russ,
That is an interesting article by Searls and Weinberger.

The internet as a road or information highway---moving bits from one end of the network to another---is quite different from the internet as a platform, which is the commercial understanding in Web2. Platform implies proprietary control (Microsoft, Murdoch, Telstra), but the internet sits beneath and outside of all of their platforms.

Nice.

However, I'm not sure about the article's view that, as business won't be interested in controlling the connectivity, they will provide content and services at a stiff price because the connectivity itself will be too low-priced.

They--eg., Telstra in Australia--do try and control the connectivity. We cannot get access to the Internet except through dialup. As my end here at Victor Harbor, along with thousands like me, is not connected, we do not have a situation of each to each and each to all. It only applies to some, as only 5 million Australian households are connected to the internet.

Doc Searls and David Weinberger are very utopian about the web. They write:

To connect to the Internet is to agree to grow value on its edges. And then something really interesting happens. We are all connected equally. Distance doesn't matter. The obstacles fall away and for the first time the human need to connect can be realized without artificial barriers.

We are not all connected equally; distance does matter; the obstacles do not fall away. Nor does money move to the suburbs or the urban rim.

The reason for that is brute commercial and political power.

Gary, yes and no. There is a big distinction between how you make the connection and how you use it. One that I am still not sure the commercial world has managed to grasp - certainly the political world hasn't.

You are right that connections to the internet are controlled. This is because bandwidth and the physical infrastructure it depends on are scarce resources. But when you talk about connections you are talking about interface between the internet and the physical world. It isn't the internet itself.

But on the internet, there is no one able to control how you use it. You can run a peer-to-peer program to download practically anything, view and run websites, make phone calls, etc. All across the same basic infrastructure and using the same basic protocol. Some of these programs don't work well with dial-up, but you can still try.

Hence, by the 'suburbs' the article means end-users on their home computers; virtual, not physical, suburbs. BitTorrent is a good example. It can distribute extraordinary amounts of data to thousands of users without a host, by taking advantage of the hard disks and bandwidth of those same end-users.
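BitTorrent manages this hostless distribution by splitting a file into fixed-size pieces and publishing only a cryptographic hash of each piece; peers then trade pieces among themselves and verify each one locally before passing it on. A toy sketch of that verification step, with a tiny piece size for the demo (real torrents use pieces of 256 KiB and up):

```python
import hashlib

PIECE_SIZE = 4  # tiny for illustration; .torrent files use much larger pieces

def piece_hashes(data, piece_size=PIECE_SIZE):
    """Split data into pieces and hash each one, as a .torrent file does."""
    return [hashlib.sha1(data[i:i + piece_size]).hexdigest()
            for i in range(0, len(data), piece_size)]

def verify_piece(index, piece, hashes):
    """A downloader checks a piece against the published hash before
    trusting it and passing it on -- no central host needed."""
    return hashlib.sha1(piece).hexdigest() == hashes[index]

original = b"each to each and each to all"
hashes = piece_hashes(original)

# a peer receives piece 1 from a stranger and verifies it locally
print(verify_piece(1, original[4:8], hashes))  # True
print(verify_piece(1, b"XXXX", hashes))        # False
```

Because every peer can verify every piece independently, the swarm needs no trusted distributor once the small list of hashes is published.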

Which is really the point of Searls and Weinberger's article. They are railing against the idea that connection providers will be able to control how you use the internet, instead of just charging an entrance fee to get onto it. For example, one of the ideas being thrown about a few years ago was that some of the bandwidth should be reserved for video transmission; which would have shifted control to the companies with access to it.

But this didn't happen and isn't likely to. Once you have access to the net, its structure makes it far too easy to route around controlling entities.

Gary,

"So how do account for all the academic work in journals being locked up behind financial walls?"

Academics should choose more open journals to publish in. There are ones around. The other issue is, the internet does not forget, courtesy of search engines. Your archives are the most valuable part of any website. Making them available always enables greater revenue to be had from them through advertising, rather than subscription.


"By power stifling change I mean the way the anti-competitive Murdoch/Packer alliance are going to block multi-channelling"

They have lost. The 15-35 demographic is fleeing television and cable. Most pundits claim they are playing video games and reading the internet. This assumes they are content consumers only. It is more likely they are becoming content creators: writing comments on websites, articles on blogs, creating skins for games, even game patches (I made an unofficial game patch for Red Baron 2 back in 1997). This demographic is under-cutting the information-scarce environment that Murdoch/Packer require to operate. Once government treats information as an abundant form, News Corp will not exist in the same way it does today.

"That sort of power means that Australia is well down the international league ladder when it comes broadband access nationwide:"

Again this is why it is necessary IMNSHO for the spectrum to be treated as an abundant resource. Monopolies can only exist with government compliance and government-made artificial scarcity.

Now this is groundbreaking in relation to Searls and Weinberger's article, where they rail against the idea that connection providers will be able to control how you use the internet, instead of just charging an entrance fee to get onto it.

The article says that Google wants to connect all of San Francisco to the internet with a free wireless service, creating a springboard for the online search engine leader to leap into the telecommunications industry.

Lucky San Francisco.

Gary, I think that is a dumb move by Google. We put in, test and maintain WiFi systems at major infrastructure points. WiFi is a typical abundance technology; it is so commodified most businesses have to give it away with a cup of coffee.