Looking to the past to improve the future of the internet

Too easy

Back in October 1993 – when the World Wide Web consisted of no more than a handful of sites – I created what was probably the very first website running out of someone’s home. This was because I happened to own a computer workstation, which was needed to run the Web’s server software. Also, because I had a modem that could dial up and connect that workstation to the Internet. And, finally, because it was so easy to create a website.

In those early days you could get away with websites consisting of nothing more than some text. If you really wanted to be dramatic, you might add a few “tags” – some web-specific shorthand bracketed by “<” and “>”. One tag made text bold. Another created italics. Still another, “<blink>”, did just what it said on the tin: rendered blinking text on the display.
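A complete page from that era might have looked like the sketch below. The tags are genuine early HTML; the text itself is invented for illustration:

```html
<!-- An entire early-1990s web page: no CSS, no scripts, just text and tags -->
<html>
<body>
<h1>My Home Page</h1>
<p>Welcome! Most of this page is plain text.
<b>This line is bold.</b>
<i>This one is italic.</i>
<blink>And this one, once upon a time, blinked.</blink>
</body>
</html>
```

Save the file, point a browser at it, and you had a website – no build step, no frameworks, no server-side code.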

Connecting to an Internet service provider by dialling a telephone number on a conventional telephone line was known as “dial-up”. Credit: webdesignmuseum.org

Part of the joy of the early Web was this simplicity and immediacy. You’d make a change, hit save, reload the page, and bingo! – you’d see the product of your handiwork. That rapid feedback kept a small army of people busy (myself among them) adding to our fledgling websites.

Almost 30 years later, the Web has grown into a sophisticated middle age – sophisticated here used in its original sense of “unnecessarily complex”. Most modern Web pages contain very little text. Instead, they have detailed instructions on where to grab some text from a content management system (CMS). Another set of instructions tells the browser where to grab the details of the visual layout of that text, using Cascading Style Sheets (CSS). Meanwhile, behind the scenes, masses of program code, written in JavaScript, orchestrate the loading, rendering and behaviour of everything you see on screen, and everything else that you can’t see: the behavioural profilers and recommendation engines and personalisation tools.

Part of the joy of the early Web was this simplicity and immediacy.

Today’s Web pages resemble a Frankenstein-like assemblage of parts from different quarters, often transmitted to your browser from different countries, and almost always from a range of different companies. Years of intense standardisation processes mean that these pieces generally work well together. But sometimes some of the pieces fail to arrive, or fail to work properly. We frequently blame ourselves when that happens, unless it’s fairly obvious that a particular website has fallen over. A Web composed of so many pieces from so many places has become increasingly fragile. As our lives come to rely more and more on online resources, that fragility can have consequences.

Apple’s home page in 1998, informing customers which Apple products could be purchased in store. Credit: webdesignmuseum.org

One response to this is simply to hand the entire mess of the Web over to the huge tech giants. You know who they are: Google (or Alphabet, if we’re talking about the parent company), Facebook (or Meta, as they are these days), Microsoft, Apple and Amazon. Yet a centralised Internet is in many ways even more fragile than the chaos we’ve created for ourselves. Every time Amazon, Google or Microsoft has a service failure, it affects not just its own customers but the many other businesses that rely on its massive “cloud” infrastructure. Centralisation makes the big bigger without providing a real solution – and runs against the entire design philosophy of the Internet, which rightly regards decentralisation of resources as a core pillar of resilience.

If we ever hope to recover the essential promise of the Web – an information publishing and accessing system free and open to all – we may need to rethink what we need from the Web, in order to separate that from what we want. We appear to have mistaken fireworks for substance, and beauty for utility. What we need very rarely requires much fanfare.

Consider Wikipedia, nearly unchanged in its essential form from its launch 21 years ago. Those pages of text, links and pictures operate just as they did when its corpus consisted of just a few thousand entries. The biggest changes to Wikipedia over that period have been improved support for citations – so people can easily reach through Wikipedia to the “primary sources” from which a given article draws its facts – and the behind-the-scenes conversation and moderation system, which sets the gold standard for how communities holding strongly divergent views on matters of fact can arbitrate a working consensus. Neither of those should be taken lightly, but neither influences the look of a Wikipedia article, nor the fact that any of its billions of users can tap an “edit” button within an article and begin adding their own knowledge to its vast repository.

Consider Wikipedia, nearly unchanged in its essential form from its launch 21 years ago.

Wikipedia is only one of many possible uses for the Web, but its longstanding stability, accessibility and editability make many of the Web’s more recent developments feel more like sizzle than substance. It’s a ruler by which we should be measuring the rest of our Web experience, to help us differentiate between utility and futility.

YouTube’s home page when it first launched in 2005. Credit: webdesignmuseum.org

I recently had a play with LinkTree, an Australian social media startup that just achieved “unicorn” status (a valuation greater than one billion dollars). LinkTree provides a wonderfully easy-to-use tool for creating a basic website that lets me share links to my bio, my podcasts, my recent publications, and the like. It gets the authoring right, and in that it merits praise. For too long it’s been too hard for people to author their own web pages. Something that was dead easy in the Web’s earliest days has become the domain of highly trained designers, coders and engineers. That’s not to say they’re unnecessary – LinkTree certainly needed a supply of them to make its service work so well – but such work should always be in support of individual creativity, rather than acting to replace it.

So maybe we need to step back and rethink the entire project of the Web? That’s the premise of Project Gemini, which promises a return to a prelapsarian Internet, before the Web and its 30 years of tooling made everything too hard for mere mortals. While it claims no pretensions to replace the Web, it urges us to adopt its protocols and philosophy: simplicity, utility and privacy. Since these are areas where the Web, in middle age, has grown decidedly flabby, Project Gemini serves, if nothing else, as a necessary goad to our under-exercised imaginations of what we should expect from the massive connectivity of the mid-21st century. It doesn’t need to be like this, it seems to say. There are other ways of being connected.

After so many years of relying on the Web, can we entertain fantasies of a new operating environment? So much relies on the Web as it is – not just the fortunes of the goliaths of technology, but the way we share information about our lives, our communities and our planet – that it feels almost foolish to consider alternatives. We’ve made our bed, and we’ve got to lie in it. But later, our eyes closed, are we allowed to dream of a world where everyone can simply, easily and individually share what they know? For a brief shining moment a generation ago, the Web realised that dream. Can we dare to dream of a return to its origins?

In June 1996, Microsoft released the Microsoft FrontPage 1.1 WYSIWYG website editor.
