Imagine a world where everyone was constantly learning, a world where what you wondered was more interesting than what you knew, and curiosity counted for more than certain knowledge. Imagine a world where what you gave away was more valuable than what you held back, where joy was not a dirty word, where play was not forbidden after your eleventh birthday. Imagine a world in which the business of business was to imagine worlds people might actually want to live in someday. Imagine a world created by the people, for the people not perishing from the earth forever.[1]

Yeah. Imagine that.

Nope, this isn’t an attempt to reclaim John Lennon’s lost legacy. Or anybody’s, for that matter. So for those of you who think I’m going to be talking about rock ‘n’ roll, kindly abandon ship. But if you’re tuned into the Internet’s post-apocalypso, carrier-wave state of affairs, do stay put and sail along.

Ok. Now we can start to see how the world ought to work.

Great. I’ve managed to keep your attention till now. I shall be pulling off smart ones like that once in a while to keep you hooked.

And now the real test begins.

So. It was 1969 when, ta-da, UNIX was created at AT&T Bell Labs by Ken Thompson and Dennis Ritchie. Not originally intended for commercialization, it was shipped in source form to universities around the world, most notably Berkeley in California. One of the world's first truly portable and open source operating systems, Unix soon splintered into many different versions as people modified the source code to meet their own requirements. And once companies like Sun Microsystems began to commercialize Unix, each company added proprietary extensions to differentiate its own version. Thus began the first of what came to be known as the "Unix wars". For independent software vendors (ISVs), such proprietary variants were a nightmare: you couldn't assume that code which ran correctly on one Unix would even compile on another. This ultimately fragmented the market and alienated customers. By the late 1980s Unix was no longer the lingua franca of the workstation market, but a veritable tower of Babel.[2]


During this time, in an attempt to create a common API for all Unix systems and fix this problem, the POSIX set of standards (shepherded by the IEEE) was born. The standards covered much more than the operating system APIs, going into detail on system commands, shell scripting and many other parts of what it meant to be 'Unix'. In fact, POSIX was meant to allow someone reading it to completely re-implement a Unix operating system from scratch, with nothing more than the spec itself.
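To make "a common API" concrete, here is a minimal sketch (my own illustration, not drawn from the sources above) of the sort of guarantee POSIX was after: the same process-creation calls, written once against the standard headers, are meant to compile and run unchanged on any conforming Unix.

```c
/* Hypothetical example, not from the referenced books: plain POSIX calls
 * that should build unchanged on any POSIX-conforming system. */
#include <stdio.h>
#include <unistd.h>      /* POSIX: fork(), getpid(), _exit() */
#include <sys/wait.h>    /* POSIX: waitpid() */

int main(void) {
    pid_t child = fork();                 /* create a child process */
    if (child == 0) {
        printf("child  pid: %d\n", (int)getpid());
        _exit(0);                         /* child exits immediately */
    }
    waitpid(child, NULL, 0);              /* parent waits for the child */
    printf("parent pid: %d\n", (int)getpid());
    return 0;
}
```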

However, this ended up being less useful than it sounds, given that Microsoft Windows NT has been branded POSIX-compliant while generic Linux has not.

If you’ve been enjoying reading this far, then eat this up. There’s a bulimic’s dream-feast of killer content on the way.[1]


The POSIX standard evolved over time. One of its great successes was the ease with which it adapted to the change from 32-bit to 64-bit computing (in contrast to the Win32 API, all POSIX interfaces are defined in terms of abstract datatypes, which is what made this possible). In addition, a major step forward was the establishment of the Single Unix Specification (SUS), a superset of POSIX developed in 1998 and adopted by all the major Unix vendors. The expanded SUS covered such issues as real-time programming, concurrent programming via the POSIX thread interface, and internationalization and localization, but unfortunately did not cover file Access Control Lists (ACLs). Because ACLs were sorely needed in real-world environments, individual Unix vendors such as SGI, Sun, HP and IBM added them to their own Unix variants. But without a true standards document, they fell into their old evil ways and added them with differing, incompatible specifications. It was into this environment, in the early 1990s, that Linux arrived.[2]
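As a side note, here is a small sketch (mine, not from the sources) of why those abstract datatypes mattered: code written against off_t and friends simply recompiles when the types widen to 64 bits, whereas code that hard-codes an int for a file offset breaks once files grow past 2 GB.

```c
/* Hypothetical illustration: POSIX abstract types let the same source
 * serve 32-bit and 64-bit systems; only the typedefs change underneath. */
#include <stdio.h>
#include <fcntl.h>       /* open() */
#include <unistd.h>      /* lseek(), close() */
#include <sys/types.h>   /* off_t, size_t, pid_t */

int main(void) {
    int fd = open("/etc/hosts", O_RDONLY);   /* any readable file will do */
    if (fd < 0)
        return 1;
    off_t end = lseek(fd, 0, SEEK_END);      /* off_t may be 32 or 64 bits */
    printf("size: %lld bytes (sizeof(off_t) = %zu)\n",
           (long long)end, sizeof(off_t));
    close(fd);
    return 0;
}
```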

Put your ear to those tracks and listen to what’s coming like a freight train. What you’ll hear is the sound of passion unhinged, people who have had it up to here with white-bread culture, hooking up to form the biggest goddam garage band the world has ever seen.

What is a rock concert all about? How about creation, exploring a visceral and shared collective memory we’ve been brainwashed into believing never existed?[1]


Linux changed everything. In many ways the old joke is true: Linux is the Unix defragmentation tool. It captured the imagination of those who could best be described as the "collateral damage" of the Unix wars. Two features appealed to this large group of users: its compatibility with Unix, with which they were intimately familiar, and its licensing under the GNU General Public License (GPL), which not only allowed the scores of Unix refugees to contribute to its development, but also guaranteed that Unix-style fragmentation at the source code level could never happen to the result of the community's work. As Linux became more popular, programs originally written for other Unixes were first ported to it, and after a while written for it and ported to other platforms.[2]

Linux grew by leaps and bounds during the 1990s. By the end of the decade it was clear that Linux was a powerful force, and many of the industry's largest companies began to see it as a competitive weapon. These companies also realized that the power behind Linux wasn't so much its technology as its licensing and development model, the Open Source model. Like the PC, the Internet, and other open systems we take for granted today, it is more of an ecosystem than a technology. Linux was built atop all these previous ecosystems. In fact, without the Internet to enable the open source development model to work, Linux would not even exist today.[2]

In another parallel universe, IBM's decision to use off-the-shelf parts in the IBM PC inadvertently created the industry's first open hardware platform. It was not long before a new wave of entrants such as Compaq, Dell and Gateway realized they could build products which were 100% compatible with the IBM PC, thus gaining access to a large base of applications and users, much as Sun had done by adopting Unix. As the PC clone market emerged, Intel and Microsoft found an entire market to sell to. Although both Unix and the PC began life as open systems, as ecosystems of sorts, and both grew enormously popular because of their open nature, Unix failed and the PC succeeded beyond anyone's wildest expectations, including IBM's. On the Unix side, each vendor tried to own the ecosystem by itself, and collectively they managed to destroy it. Meanwhile on the PC side, the ecosystem won out in the end, for the betterment of all who embraced it, and more importantly, the existence of that ecosystem enabled the creation of other ecosystems above it: the Internet and the countless products, services and industries which were hitherto unimaginable.[2]

This is an existential moment. It's characterized by uncertainty, the dissolving of the normal ways of settling uncertainties, the evaporation of memory of what certainty was once like. In times like this, we all have an impulse to find something stable and cling to it, but then we'd miss the moment entirely. There isn't a list of things you can do to work the whirlwind. The desire to have such a list betrays the moment.[1]


At the time, IBM’s market share in computers far exceeded Microsoft’s dominance of the operating system market today. Software was a small part of the computer industry, a necessary part of an integrated computer, often bundled rather than sold separately. So, when it came time to provide an OS for the new machine, IBM decided to license it from a small company called Microsoft, giving away the right to resell the software to the small part of the market that IBM did not control. As cloned PCs were built by thousands of manufacturers, IBM lost leadership in the new market, and as software became the new sun the industry revolved around, Microsoft, not IBM, became the most important company in the computer industry. Meanwhile, Intel made a bold bet on the new commodity platform, abandoning its memory chip business as indefensible and committing itself to being the more complex brains of the new design. The fact that most PCs built today bear an “Intel Inside” logo is testament to the fact that even within a commodity architecture, there are opportunities for proprietary advantage.[2]

At this level, things are often radically other than what they appear. A new kind of logic is emerging, or needs to. I call it gonzo business management: paradox becomes paradigm. We're not in Kansas anymore, Toto, and we might as well get used to it.[1]

Tim O’Reilly, in explaining the paradigm shift promised by open source, poses a question in his talks to audiences of computer industry professionals to gauge whether their thinking is bound by the old paradigm or the new. “How many of you use Linux?” he asks. Depending on the venue, 20% to 80% of the audience raises their hands. “How many of you use Google?” Every hand in the room goes up. Every one of them uses Google’s massive complex of 100,000+ Linux servers, but they were blinded to the answer by a mindset in which “the software you use” is defined as the software running on the computer in front of you, he muses. Most of the “killer apps” of the Internet (Google et al.), used by hundreds of millions of people, run on Linux or FreeBSD. But the operating system, as formerly defined, is to these applications only a component of a larger system. Their true platform is the Internet. His point was that it is only in studying these next-generation applications that we can begin to understand the true long-term significance of the open source paradigm shift, and that if open source pioneers are to benefit from the revolution they have unleashed, they must look through the foreground elements of the free and open source movements and understand more deeply both the causes and consequences of the revolution.

Just because you’re not seeing a revolution – or what Hollywood has told you a revolution ought to look like – doesn’t mean there isn’t one going on[1].


1999 was a year in which bubble-headed investing in “Linux companies” grew to galactic dimensions. In August 1999 Red Hat had the largest IPO run-up in stock market history. The bubble, however, was a red herring. Linux was quietly being put to use in business everywhere, and was never about the stock market, or even about business. It was about something that caused usage to grow regardless of whatever happened among commercial suppliers. As far as the world of commerce was concerned, the spotlight soon shifted to the new set of Linux business leaders: IBM, HP, Novell, Oracle, Red Hat and others. The majority of companies even today (Yahoo, Microsoft, AOL and Apple, to name a few) remain committed to closed proprietary systems that serve as platforms supporting closed silos, a structure as old as computing, and one which won’t go away quickly, if ever. But as a defining model for the software business, the platform has been replaced by a growing assortment of open standards and open source tools that together support far more businesses than they replace. Linux and its familiar LAMP stack (Linux, Apache, MySQL, PHP, Perl, Python, PostgreSQL and so on) are the obvious ones. Tomcat, JBoss, Eclipse, Squid, Asterisk, Jabber, RSS and iPodder, along with any of the 100,000+ projects on SourceForge, are a few more.[2]

The offerings are often mixed, and so it gets hard to say what is open source and what is not. For instance, while Apple contributes generously to the FreeBSD kernel (as well as to Apache, KDE and GNU), the OS X operating system Apple builds on BSD is highly proprietary. So is its popular iTunes software. iPods, for all their appeal, are hardware extensions of the iTunes software. They are a silo. However, it would probably be a mistake to dismiss Apple as a “proprietary” company. It has an open source strategy. So do IBM, HP, Oracle, RealNetworks, Novell, Sun, SAP and a large number of vendors that use open source strategies to support their proprietary offerings. They are all based on an acceptance of open source as a foundational infrastructure, on participation in open source projects, and on an appreciation for what open source provides to the world.[2] Netscape’s launch of Mozilla (a grand saga in itself) was one of the most significant real-world tests of the bazaar model in the commercial world, and it went a long way toward demonstrating the effectiveness of the open-source approach commercially.

From hopelessly romantic meditations on favorite cats, to screeds so funny you’ll blow your coffee out your nose, to collective code for alternative operating systems : we’re all expressing ourselves in a new way online – a way that was never possible before, never before permitted. And make no mistake, speech once freed is a powerful drug.[1]

As far as the question of profits is concerned, there is evidently hope that money can be made in infrastructural technologies that have been fully commoditized (just as it is in electricity, phone service and rail transport, to borrow Nicholas Carr’s examples from “IT Doesn’t Matter”, HBR May 2003), and that there is no need to try to own those infrastructural technologies. Loosely, entire segments of the software market seem to fall into the following structure. Infrastructure (the Internet, the Web, operating systems and so on) tends to be open source, cooperatively maintained by user consortia and by for-profit distribution/service outfits with a role like that of Red Hat today. Applications, on the other hand, show the strongest tendency to remain closed: there will always be circumstances under which the use value of an undisclosed algorithm or technology is high enough that consumers will pay for it. Middleware (databases, development tools, the customized top ends of application protocol stacks) tends to be more mixed; whether a given layer goes closed or open seems likely to depend on the cost of failures, with higher cost creating market pressure for more openness.[3] One should note, however, that neither ‘applications’ nor ‘middleware’ are really stable categories. Emerging approaches such as cloud computing are blurring the distinction between deploying traditional middleware and obtaining analogous functionality as a service.

From a cost-savings perspective, proponents of open source argue that in buying into a monolithic vendor, which holds out greatly reduced choice as the way to achieve moderately reduced complexity, a buyer surrenders his IT destiny to that vendor. He upgrades when the vendor wants him to. He gets new technology when the vendor chooses to innovate. And he pays whenever the vendor demands, because he has no other options. Over time, such vendor-controlled realities cost more, both in hard costs and in opportunity costs. Open source offers the opposite vision: maximum freedom to shift among vendors, and therefore lower costs in the short term, and especially in the long term.[2]

The carrier wave has been tuned at a huge cost to deliver a single message : you are not free, you desire nothing but the products we produce, you have no world but the world we give you. If you are OK with this, then eat it up. But if it already makes you want to puke, get angry. Write it, code it, paint it, play it – rattle the cage however you can. Stay hungry. Stay free. And believe it: win, lose, or draw, we’re here to stay. Armed only with imagination, we’re gonna rip the fucking lid off.[1]

There’s your market.


However, opponents of Open Source have raised their own heated concerns. Microsoft VP Jim Allchin has made statements such as “open source is an intellectual property destroyer”, painting a bleak picture in which a great industry is destroyed with nothing to take its place. On the surface, he may even appear to be right. Linux now generates tens of billions of dollars in server hardware-related revenue, yet Red Hat, its largest distributor, has annual revenues on the order of $500 million, versus Microsoft’s $60 billion. A huge amount of software value seems to have vaporized.[2]

Apart from Linux, BIND, which runs the DNS, is probably the single most mission-critical program on the Internet, yet its maintainer has scraped by for decades on donations and consulting fees. Meanwhile, domain name registration, an information service based on that software, became a business generating hundreds of millions of dollars a year, a virtual monopoly for Network Solutions, which was handed the business on a government contract before anyone realized just how valuable it would be.[2]


But is it value or overhead? Open source advocates like to say they’re not destroying actual value, but rather squeezing inefficiencies out of the system. When competition drives down prices, efficiencies and average wealth go up. Firms unable to adapt to the new price levels undergo what the economist Joseph Schumpeter called “creative destruction”, but what was “lost” returns manyfold as higher productivity and new opportunities.[2]

Tell us some good stories and capture our interest. Don’t talk to us as if you’ve forgotten how to speak. Don’t make us feel small. Remind us to be larger. Get a little of that human touch.[1]

TV with a Buy Button. Woweee !!!

One of the biggest roadblocks to any company's growth is the bureaucracy bottleneck, one that most IT buyers detest. The charm of open source is that it makes its way into enterprises via free download. MySQL had 10 million downloads in 2003, and of these would-be customers, 5,000 returned to buy a support contract or license from MySQL, bumping the company’s revenues by 100% to $10 million. All this was achieved by spending less than 10% of total revenue on sales and marketing, as contrasted with the 40-45% spent by most proprietary software companies. The savings do not stop there. Whether the open source vendor “borrows” much of its code, or creates it almost entirely in-house and then open sources it, open source delivers development-related cost savings. The “borrowers” leverage a well-developed body of code, most of it written by individuals not on their payroll. For the JBosses and MySQLs of the world, which do the majority of their own development, there are still significant QA savings from the global pool of testers who submit bug fixes and code contributions.[2]

Indeed, the professional open source business model is not really about development savings. Rather, it is about maximizing distribution of one’s product: getting it past the purchasing firewall and bureaucracy bottleneck and into the hands of its developer end users, so that they can try it and then return to the professional open source vendor for support and service contracts. To get approval to use BEA’s WebLogic or IBM’s WebSphere, one would need to go through a cumbersome process. To use JBoss, I simply need to click “Click here to download”. And while I might choose to support myself through newsgroups, in production environments I will generally turn to the source of the code (JBoss in this case).[2]

There may not be twelve or five or twenty things you can do, but there are ten thousand. The trick is, you have to figure out what they are. They have to come from you. They have to be your words, your moves, your authentic voice.[1]

The Web got built by people who chose to build it.


“Given enough eyeballs, all bugs are shallow.” Eric Raymond’s maxim from The Cathedral and the Bazaar epitomizes the argument in favor of open source development. Pointedly, he argues that if it weren’t true, then any system as complex as the Linux kernel, being hacked over by as many hands as the kernel is, should at some point have collapsed under the weight of unforeseen bad interactions and undiscovered “deep” bugs. Since it is true, it suffices to explain Linux’s relative lack of bugginess and its continuous uptimes spanning months or even years.

What if the real attraction of the Internet is not its cutting-edge bells and whistles, its jazzy interface or any of the advanced technology that underlies its pipes and wires? What if, instead, the attraction is an atavistic throwback to the prehistoric human fascination with telling tales? Five thousand years ago, the marketplace was a hub of civilization, a place to which traders returned from remote lands with exotic spices, silks, monkeys, parrots, jewels – and fabulous stories.[1]

Another characteristic of the open-source method that conserves developer time is the communication structure of typical open-source projects. The classic problem that traditional software development organizations run up against is Brooks’s Law: “Adding more programmers to a late project makes it later.” More generally, Brooks’s Law predicts that the complexity and communication costs of a project rise with the square of the number of developers, while work done rises only linearly. It is founded on the experience that bugs tend strongly to cluster at the interfaces between code written by different people, and that communication overhead tends to rise with the number of interfaces between human beings. Of course, Brooks’s Law rests on a hidden assumption: that the communications structure of a project is necessarily a complete graph, with everybody talking to everybody else. But on open-source projects, the halo developers work on what are in effect separable parallel subtasks and interact with each other very little; code changes and bug reports stream through the core group, and only within that small core group do we pay the full Brooksian overhead.[3]
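A quick back-of-the-envelope sketch (my own arithmetic, assuming a complete graph for the traditional shop and an arbitrary core team of ten for the bazaar) shows just how quickly the two communication overheads diverge as the contributor pool grows.

```c
/* Illustrative arithmetic only: pairwise channels in a complete graph
 * grow as n*(n-1)/2 (roughly n^2), while a hub-and-spoke structure with
 * a small core of size k grows roughly linearly in n. */
#include <stdio.h>

static long channels_complete(long n)     { return n * (n - 1) / 2; }
static long channels_core(long n, long k) { return k * (k - 1) / 2 + k * (n - k); }

int main(void) {
    long sizes[] = {10, 100, 1000};
    for (int i = 0; i < 3; i++) {
        long n = sizes[i];
        printf("n = %4ld   complete graph: %6ld channels   core of 10: %5ld channels\n",
               n, channels_complete(n), channels_core(n, 10));
    }
    return 0;
}
```

At ten contributors the two structures look identical; at a thousand, the complete graph carries roughly fifty times the channels of the core-mediated one, which is the nonlinearity the next paragraph leans on.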

Gerald Weinberg pointed out in his classic The Psychology of Computer Programming that in shops where developers are not territorial about their code, and encourage other people to look for bugs and potential improvements in it, improvement happens dramatically faster than elsewhere. The bazaar method, by harnessing the full power of the egoless programming effect, strongly mitigates the effect of Brooks’s Law. The principle behind Brooks’s Law is not repealed, but given a large developer population and cheap communications its effects can be swamped by competing nonlinearities that are not otherwise visible. This resembles the relationship between Newtonian and Einsteinian physics: the older system is still valid at low energies, but if you push mass and velocity high enough you get surprises like nuclear explosions, and, yes, Linux![3]

Inside companies, outside companies, there are only people. All of us work for organizations of some sort, or we’re peddling something. All of us pay the mortgage or the rent. We all buy shoes and books and food and time online, plus the occasional Beanie Baby for the kid. More important, all of us are finding our voices once again. Learning how to talk to one another. Slowly recovering from a near-fatal brush with zombification after watching Night of the Living Sponsor reruns all our lives.[1]

To understand its operating mechanisms, one needs to delve into the open-source culture and, among other aspects, the customs that have grown up around ownership. Open source licenses do nothing to restrain forking; in practice, however, forking almost never happens, although pseudo-forking (separate projects sharing common code and benefiting fully from common development efforts) is common. However, and in contradiction to the anyone-can-hack-anything consensus theory, the open source culture has an elaborate but largely unadmitted set of ownership customs. Among these are the strong social pressure against forking projects, the strong social pressure against distributing changes to a project without the cooperation of the moderators, and the extreme social pressure against removing a person’s name from a project history, credits or maintainer list without his explicit consent. Also, interestingly, in the reputation-game analysis of the hacker culture, there is an inherent distrust of egotism and ego-based motivations; self-promotion tends to be mercilessly criticized, and only sublimated and disguised forms like ‘peer repute’, ‘self-esteem’, ‘professionalism’ or ‘pride of accomplishment’ are generally acceptable. There is a very strict meritocracy (the best craftsmanship wins) and a strong ethos that quality must be left to speak for itself. The best brag is code that “just works” and that any competent programmer can see is good stuff.[3]

"Jim, you are a complete idiot. Your code is so brain-damaged it won't even compile. Read a book, moron." [1]

Attacking the author rather than the code is not done. Hackers feel free to flame at each other over ideological differences, but it is unheard of for any hacker to publicly attack another’s competence at technical work. Bug-hunting and criticism are always project-labeled, not person-labeled. Similarly, credit is given where credit is due. Project leaders are chosen from those with enough humility and class to be able to say, when appropriate, “Yes, that does work better than my version; I’ll use it.” Finally, the reputation-game analysis explains the oft-cited dictum that you do not become a hacker by calling yourself a hacker; you become a hacker when other hackers call you a hacker. A hacker, considered in this light (Eric Raymond uses the term hacker to mean ‘real programmers’, rather than in the colloquial, often derogatory sense of crackers, people who break unlawfully into IT systems), is somebody who has shown that he or she both has technical ability and understands how the reputation game works. This judgment is mostly one of awareness and acculturation, and can be delivered only by those already well inside the culture. Additionally, the reputation payoff is enormous for having done work so good that nobody cares to use the alternatives anymore. The greatest peer esteem comes from having done widely popular, category-killing original work that is carried by all major distributions. People who have pulled this off more than once are half-seriously regarded as demigods.[3]

Imagine for a moment: millions of people sitting in their shuttered homes at night, bathed in that ghostly blue television aura. They’re passive, yeah, but more than that: they are isolated from each other. Now imagine another magic wire strung from house to house, hooking all these poor bastards up. They’re still watching the same old crap. Then, during the touching love scene, some joker lobs an off-color aside, and everybody hears it. Whoa! What was that? People are rolling on the floor laughing. The audience is suddenly connected to itself![1]

Today, with the proliferation of Web 2.0 leveraging the power of “the long tail”, and with data as its driving force, the Web has in fact become a platform, and value has moved “up the stack” with software delivered as services. There is inherently an architecture of participation, in which users contribute to website content and create massive network effects. The philosophy closely resembles that behind the agile development process of open source, with innovation fostered by pulling together features from distributed vendors and a large amount of user-generated content. Ajax-powered mashups, wikis, blogs, tagging, web services, web feed formats, web-based communities, social networking, video sharing and folksonomies abound and have seen explosive growth in the last couple of years. YouTube, MySpace, Flickr, Twitter and Facebook have become household names, while Wikipedia, del.icio.us and craigslist have become hallmarks of the network effect created by the volumes of people using them. Just as Netscape was the darling of tech for Web 1.0, Google has emerged as the darling for Web 2.0 (with their respective IPOs almost serving as the defining events of each era). Meanwhile, much lower down the stack, data center technology has evolved by leaps and bounds with the promise of virtualization, high-density computing and cloud computing. The Semantic Web is beckoning the advent of Web 3.0, in which the semantics of information and services on the web are being defined, making it possible for the web to understand and satisfy people’s requests and to develop ontologies within a knowledge domain.

Inside, outside, there’s a conversation going on today that wasn’t happening at all ten years ago and hasn’t been very much in evidence since the Industrial Revolution began. Now, spanning the planet via Internet and World Wide Web, this conversation is so vast, so multifaceted, that trying to figure what it’s about is futile. It’s about a billion years of pent-up hopes and fears and dreams coded in serpentine double helixes, the collective flashback deja vu of our strange perplexing species. Something ancient, elemental, sacred, something very very funny that’s broken loose in the pipes and wires of the twenty-first century.
There are millions of threads in this conversation, but at the beginning and end of each one is a human being. That this world is digital or electronic is not the point. What matters most is that it exists in narrative space. The story has come unbound. The world of commerce became precipitously permeable while it wasn’t looking and sprang a leak from a quarter least expected. The dangers of democracy pale before the danger of uncontained life. Life with the wraps off. Life run wild. [1]

Note: This entire essay was compiled from the following sources. The marker at the end of each paragraph indicates the source from which that paragraph's lines were borrowed. In the interest of time, I have not specified the exact page numbers in the books for each line.


References and Readings
1. The Cluetrain Manifesto: The End of Business as Usual - Rick Levine, Christopher Locke, Doc Searls, David Weinberger
2. Open Source 2.0: The Continuing Evolution - Chris DiBona, Mark Stone, Danese Cooper
3. The Cathedral and the Bazaar - Eric S. Raymond
4. Unleashing Web 2.0 - Gottfried Vossen, Stephan Hagemann
5. The Success of Open Source - Steven Weber