
Hacker Culture and the Fear of WYSIWYG was written by Eric Schrijver
and is available under the Creative Commons Attribution-ShareAlike 3.0 license.

Edited and typeset by Christoph Haag as part of an ongoing exploration of the situated mark-up dialect .mdsh.


In 2004 I encounter the website of the Amsterdam magazine/web platform/art organisation Mediamatic. The site is remarkable in several ways. Firstly, it shows off the potential of designing with native web technologies. Its layout is a re-appraisal of one of the core fonts available to almost all surfers: Georgia, and its Italic. The striking, text-heavy layout uses this typeface for the body text and in unconventionally large headings and lead-ins. Secondly, the site opens up a whole new editing experience. In edit mode, the page looks essentially the same as on the public-facing site, and as I change the title it remains all grand and Italic. I had been used to content management systems presenting me with sad, unstyled form fields in a default browser style, decoupling the input of text completely from the final layout. That one can get away from the default browser style, and edit in the same style as the site itself, is nothing short of a revelation to me – even though desktop software has shown this to be possible for quite some time already.

In 2004, there are more websites with an editing experience like Mediamatic's: Flickr, for example, makes it possible to change the title and metadata of a photo right on the photo page itself, if one is logged in. Yet fast forward to 2014, and most content management systems still offer us the same inhospitable form fields that look nothing like the page they will produce.

If we look at the experience of writing in WordPress, the most used blogging platform, the first thing one notices is that the place where one edits the posts is quite distinct from the place that is visited by the reader: you are in the ‘back end’. There is some visual resemblance between the editing interface and the article: headings are bigger than body text, and italicised text is shown in italics. But the font does not necessarily correspond to that of the resulting post, nor do the line width, line height and so forth. Some other elements are not visual at all: to embed YouTube videos and the like, one uses ‘shortcodes’.

Technologically, what was possible in 2004 should still be possible now – the web platform has only advanced since then, offering new functionality like contentEditable, which allows one to easily make part of a webpage editable without much further scripting. So where are the content management systems that take advantage of these technologies? To answer this question, we will have to look at how web technologies come about.
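
A minimal sketch of what contentEditable makes possible (my own illustration, not Mediamatic's code; the element id and the save step are assumptions): a few lines of script let a logged-in visitor edit a heading in place, styled exactly as the site styles it.

    // Hypothetical sketch: make an existing, fully styled heading editable in place.
    const title = document.querySelector<HTMLElement>("#title");

    if (title) {
      title.contentEditable = "true";          // the heading now accepts typing directly
      title.addEventListener("blur", () => {
        // A real CMS would persist the change here, e.g. by sending it to its own API.
        console.log("New title:", title.textContent);
      });
    }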

The dominant computing paradigm and its counterpoint

An editing interface that visually resembles its result is known as WYSIWYG: What You See Is What You Get. The term dates from the introduction of the graphical user interface. The Apple Macintosh offers the first mainstream WYSIWYG programs, and the Windows 3.1 and especially Windows 95 operating systems make this approach the dominant one.

A word processing program like Microsoft Word is a prototypical WYSIWYG interface: we edit in an interface that resembles as closely as possible the result that comes out of the printer. Most graphic designers also work in WYSIWYG programs: this is the canvas-based paradigm of programs like Illustrator, InDesign, Photoshop, Gimp, Scribus and Inkscape.

But being the dominant paradigm for user interfaces, especially in document creation and graphic design, does not mean the WYSIWYG legacy is the only paradigm in use. Programmer and author Michael Lopp, also known as Rands, tries to convince us that ‘nerds’ use a computer in a different way. From his self-help guide for the nerd’s significant other, The Nerd Handbook:

Whereas everyone else is traipsing around picking dazzling fonts to describe their world, your nerd has carefully selected a monospace typeface, which he avidly uses to manipulate the world deftly via a command line interface while the rest fumble around with a mouse.

Rands introduces a hypothetical nerd who uses a text-based terminal interface to interact with her computer. He mentions the ‘command line’, the kind of computer interface in which one types commands, introduced in http://i.liketightpants.net/and/absolute-beginners-unix-for-art-students-part-1

Yet who exactly is it who likes to use their computer in such a way? ‘Nerd’ is a terribly imprecise term: one can be a nerd about many things, and it is mainly a derogatory term. But it seems safe to suggest that those using the command line have some familiarity with text as an interface, and with using programming code: people who are steeped in, or attracted by, the practice of programming.

Programmers as Gatekeepers

Since the desktop publishing revolution of the 1990s, graphic designers have been able to implement their own print designs without the intervention of engineers. In most cases this is not true for the web: the implementation of websites is ultimately done by programmers. These programmers often have an important say in the technology that is used to create a website. It is only natural that the programmers’ values and preferences are reflected in these choices.

This effect is reinforced because the programming community largely owns its own means of production. In contrast with print design, the programming technologies used in creating websites (the programming languages, the libraries, the content management systems) are almost always Free Software and/or Open Source. Even commercial content management systems are often built upon existing Open Source components. There are many ways in which this is both inspiring and practical. Yet if this engagement with a collectively owned and community-driven set of tools is commendable, it has one important downside: the values of the community directly impact the character of the tools available.

Hacker Culture

Programming is not just an activity, it is embedded in a culture. All the meta-discourse surrounding programming contributes to this culture. A particularly influential strand of computing meta-discourse is what can be called ‘Hacker Culture’. If I were to characterise this culture, I would do so by sketching two highly visible programmers who are quite different in their practice, yet share a set of common cultural references in which the concept of a ‘hacker’ is important.

On the one hand we can look at Richard Stallman, founder of the Free Software movement and tireless activist for ‘Software Freedom’. Having coded essential elements of what was to become GNU/Linux, he is just as well known for his foundational texts, such as the GPL license. The concept of a hacker is important to him, as evidenced in his article ‘On Hacking’.

On the other hand there is someone like Paul Graham, a Silicon Valley millionaire and venture capitalist. Influential in ‘start-up’ culture, Graham has turned his own experience into something of a template for start-ups to follow: start with a small group of twenty-something programmers/entrepreneurs and create a company that tries to grow as quickly as possible, attracts funding, and then either fails, is bought, or in extremely rare cases becomes a large publicly traded company. His vision of the start-up is both codified in writing and put into practice at the ‘incubator’ Y Combinator.

As different as Graham’s trajectory might be from Stallman’s, he too has written an article on what it means to be a hacker. The popular discussion forum he runs is called Hacker News. In fact, Graham refers to the people who found start-ups as hackers.

That Stallman and Graham share a certain culture is shown by their conceptions of what a hacker is, which are far removed from the everyday usage of the word. While to most people a hacker means someone who breaks into computer systems, Stallman and Graham agree that the true sense of hacker is quite different.

Thus, contesting the mainstream concept of the hacker is itself important in the subculture: Douglas Thomas already describes this mechanism in his thoroughly readable introduction to Hacker Culture1. A detailed anthropological analysis of a slice of Hacker Culture is performed in Gabriella Coleman’s Coding Freedom2, though it focuses on Free Software developers of the most idealistic persuasion and seems less interested in the major role Silicon Valley dollars play in fuelling Hacker Culture. For this tension too is at the heart of hacker culture: even though Hacker Culture is a place to push new conceptions of technology, ownership and collaboration, the Hacker revolution is financed by working ‘for the man’. The Hacker Culture blossoming at universities in the 1960s was only possible through liberal funding from the Department of Defense; today many leading Free and Open Source Software developers work at Google.

Grown (Wo)men Afraid of Mice

If we want to know about Hacker Culture’s attitude towards user interfaces, we can start by looking for anecdotal evidence. In an interview about his computing habits, arch-hacker Stallman describes a setup that quite closely resembles that of the hypothetical GUI-eschewing ‘nerd’ from Rands’ article:

I spend most of my time using Emacs [A text‑editor]. I run it on a text console [A terminal], so that I don’t have to worry about accidentally touching the mouse‑pad and moving the pointer, which would be a nuisance. I read and send mail with Emacs (mail is what I do most of the time).

I switch to the X console [A graphical user interface] when I need to do something graphical, such as look at an image or a PDF file.3

Richard Stallman does not even use a mouse. This might seem an outlier position, yet he is not the only hacker to take it. Otherwise, there would be no audience for the open source window manager called ‘ratpoison’. This software allows one to control the computer without any use of the mouse, metaphorically killing it.

The mouse is invented in the early sixties by Douglas Engelbart. It is incorporated into the Xerox Star system that goes on to inspire the Macintosh computer. Steve Jobs commissions Dean Hovey to come up with a design that is cheaper to produce, simpler and more reliable than Xerox’s version. After the mouse is introduced with the Macintosh computer in 1984 it quickly spreads to PCs, and it becomes indispensable to everyday users once the Windows OS becomes mainstream in the 1990s.

The mouse is part of the paradigm of these graphical user interfaces, just like the WYSIWYG interaction model. The ascendancy of these interaction models is linked to (and has probably enabled) personal computers becoming ubiquitous in the 1990s. It is not this tradition that Stallman and like-minded spirits inscribe themselves in. They prefer to refer to the roots of the Hacker paradigm of computing, which stretch back further: back when computers were not yet personal, and when they ran an operating system called ‘Unix’.

Unix, Hacker Culture’s Gilgamesh epic

The Unix operating system plays a particular role in the system of cultural values that make up programming culture. Developed in the 1970s at AT&T, it becomes the dominant operating system of the mainframe era of computing. In this setup, one large computer runs the main software, and various users log in to this central computer from their own terminals. A terminal is an interface that allows one to send commands and view the results – the actual computation being performed on the mainframe. Variants of Unix become widely used in the enterprise world and in academia.

The very first interface to these mainframe computers is the teletype: an electronic typewriter that allows one to type commands to the computer and subsequently prints the response. As teletypes get replaced by computer terminals with CRT displays, interfaces often stay decidedly minimal. It is much cheaper to use text characters to create interfaces than to have full-blown graphical user interfaces, especially as the state of the interface has to be sent over the wire from the mainframe to the terminal. Everyone who has worked in a large organisation in the 1980s or 1990s will remember the keyboard-driven user interfaces of the time.

This vision of computing is profoundly disrupted by the success of the personal computer. Bill Gates’ vision of ‘a personal computer in each home’ becomes a reality in the 1990s. A personal computer is self-sufficient, storing its data on its own hard drive and performing its own calculations. The PC is not hindered by having to make continuous round trips to the mainframe, and as processing speed increases PCs replace text-based input with sophisticated graphical user interfaces. During the dominance of the Windows operating system, Unix seems to become a relic for most mainstream computer users: after conquering the home, Windows computers conquer the workplace as well. In 1993’s Jurassic Park, when the computer-savvy girl needs to circumvent computer security to restore the power, she is surprised to find out that it’s a Unix system.

The tables turn when in 2000 Apple’s new OS X operating system is built on Unix. At the same time, quietly but surely, the Linux operating system has been building mind share. A cornerstone of the movement for Free and Open Source software, Linux is a Unix clone that is free for everyone to use, distribute, study and modify. Even though both these Unixes are built on the same technology as the Unix that powers mainframe computers, these newer versions are used in a completely different context. Linux and OS X are designed to run on personal computers, and both come with an (optional) graphical user interface, making them accessible to users who have grown up on Windows and Mac OS. All of a sudden, a new generation gets to appropriate Unix – a generation that has never had to actually use a Unix system at work.

Alan Kay claims that the culture of programming is forgetful.4 It is true that a new generation of programmers completely forgets the rejection of Unix by consumers just years before, let alone wonders about the reasons for its demise. Yet the cultural knowledge embodied in Unix is now part of a community. The way in which Unix is used today might be completely different from the 1970s, but Unix itself and the values it embodies have become something that unites different generations self-identifying with ‘hacker culture’.

The cultural depth of Unix far exceeds naming conventions. Unix has been described as “our Gilgamesh epic”,5 and its status is that of a living, adored, and complex artifact. Its epic nature is an outgrowth of its morphing flavors, always under development, that nevertheless adhere to a set of well-articulated standards and protocols: flexibility, design simplicity, clean interfaces, openness, communicability, transparency, and efficiency. “Unix is known, loved, understood by so many hackers”, explains sci-fi writer Neal Stephenson,6 also a fan, “that it can be re-created from scratch whenever someone needs it”.7

The primacy of plain-text

If there is a lingua franca in Unix, it is ‘plain text’. Unix originated in the epoch when users would type in commands on a teletype machine, and typing commands is still considered an essential part of using Unix-like systems today. Many of the core Unix programs are launched with text commands, and their output often takes the form of text. This is as true for classic Unix programs as for programs written today. Unix programs are constructed so that the output of one program can be fed into the input of another: this ability to chain commands in ‘pipes’ depends on the fact that all these programs share the same format for input and output, which is streams of text.
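
As an illustrative sketch (mine, not from the essay), the mechanism behind a shell pipeline such as ls -1 | wc -l can be spelled out from Node.js/TypeScript: the plain text that one program prints is fed directly into the next program as input.

    // A hypothetical sketch of the Unix pipe idea, written against Node.js.
    import { spawn } from "node:child_process";

    const ls = spawn("ls", ["-1"]);   // list files, one name per line
    const wc = spawn("wc", ["-l"]);   // count the lines it receives

    ls.stdout.pipe(wc.stdin);         // the equivalent of the shell's "|"
    wc.stdout.pipe(process.stdout);   // print the resulting count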

The most central program in the life of a practitioner of Hacker Culture is the text editor. Unlike a program such as Word, a text editor shows the raw text of a file, including any formatting commands. This is still the main paradigm for how programmers work on a project: as a bunch of text files organised in folders. This is not inherent to programming (there have been programming environments that store code in a database, or in binary files), but it has proved the most lasting and popular way to do so. Unix’s tools are built around and suited to plain text files, so this approach also contributes to the ongoing popularity of Unix – and vice versa.

While programming, one has to learn how to create a mental model of the object being programmed. As the programmer only sees the code, she or he has to imagine the final result while editing – then compile and run the project to see if the projection was correct. This feedback loop is much slower than the feedback loop as we know it from WYSIWYG programs. Maybe it is the experience of slow feedback that gives programmers more tolerance for abstract interfaces than those of us outside this culture have.

While WYSIWYG has a shorter feedback loop, it also adds complexity. Anyone who has used Microsoft Word knows the scenario: after applying several layers of formatting, the document’s behaviour seems to become erratic – remove a carriage return, and the whole layout of a subsequent paragraph might break. This is because the underlying structure of the rich text document (on the web, this is HTML) remains opaque to the user. With increased ease of use come a number of edge cases and a loss of control over the underlying structure.

This is a trade-off someone steeped in Hacker Culture might not be willing to make. She or he would rather have an understandable, formal system by which the HTML code is produced – even if that means editing in an environment that does not resemble the final web page at all – because they already know how to work this way from their experience in programming.

This is shown by the popularity of a workflow and type of tool known as the ‘static site generator’. In this case, the workflow for creating a website is to have a series of plain text files. Some of them represent templates, others content. After a change, the programmer runs the ‘static site generator’ and all the content is pushed through the templates to produce a series of HTML files. The content itself is often written in a markup language like ‘Markdown’, which allows one to add some formatting information through typewriter-like conventions:
*stars* becomes stars in italics.
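
To make the mechanism concrete, here is a toy sketch of such a generator (my own, not any particular tool; the content/ and site/ folder names, the hard-coded template and the use of the marked library are illustrative assumptions): every Markdown source file is converted to HTML and wrapped in the same template.

    // A toy static site generator: plain-text sources in, HTML files out.
    import { readdirSync, readFileSync, writeFileSync, mkdirSync } from "node:fs";
    import { marked } from "marked";   // assumed Markdown-to-HTML converter

    mkdirSync("site", { recursive: true });

    for (const file of readdirSync("content")) {
      if (!file.endsWith(".md")) continue;
      const markdown = readFileSync(`content/${file}`, "utf8");
      const body = marked.parse(markdown) as string;  // *stars* becomes <em>stars</em>
      const page = `<!doctype html>\n<html><body>${body}</body></html>`;
      writeFileSync(`site/${file.replace(/\.md$/, ".html")}`, page);
    }

Note that the flow only goes one way: editing the generated HTML in site/ has no effect whatsoever on the Markdown sources.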

Hacker Culture’s bias is holding back interface design

Because programmers are gatekeepers to web technology, and because programmers are influenced by Hacker Culture, the biases of Hacker Culture have an impact outside of this subculture. The world of programming is responsible for its own tools, and contemporary websites are built by programmers upon Open Source libraries developed by other programmers. Shaped by the culture of Unix and plain text, and by the practice of programming, most Open Source developers find WYSIWYG interfaces uninteresting. Following the mantra to ‘scratch one’s own itch’, developers work on the interfaces that interest them. There are scores of the aforementioned ‘static site generators’: 242 of them, at last count.

By comparison, the offering of WYSIWYG libraries is meagre. Even though HTML5’s contentEditable property has been around for ages, it is not used all that often; consequently there are still quite a few implementation differences between browsers. The lack of interest in WYSIWYG editors means the interfaces are comparatively flaky, which in turn confirms the suspicion of programmers looking for an editing solution that WYSIWYG is not viable. There are only two editor widgets based on contentEditable that I know of: Aloha and hallo.js. Aloha is badly documented and not easy to wrap your head around, as it is quite a lot of code. Hallo.js sets out to be more lightweight, but for now is a bit too light: it lacks basic features like inserting links and images.

The problem with the culture of plain text is not plain text as a format. It is plain text as an interface. Michael Murtaugh has written a thoughtful piece on this in the context of the Institute of Network Cultures’ Independent Publishing Toolkit: Mark me up, mark me down! Working with a static site generator, it becomes clear that they are envisioned as a one-way street: you change the source files, and the final (visual) result changes. There is no way in which a change in the generated page can be fed back into the source. Similarly, the Markdown format is designed to be input in a text editor and then programmatically turned into HTML. Whereas HTML allows for multiple kinds of interfaces (either more visual or more text-oriented), a programmer-driven choice for Markdown forces the Unix love of editing plain text onto everybody.

If WYSIWYG were less of a taboo in Hacker Culture, we could also see interesting solutions that cross the code/WYSIWYG divide. A great, basic example is the ‘reveal codes’ function of WordPerfect, the most popular word processor before the ascendancy of Microsoft Word. When running into a formatting problem, using ‘reveal codes’ shows an alternative view of the document, highlighting the structure by which the formatting instructions have been applied – not unlike the ‘DOM inspector’ in today’s browsers.

More radical examples of interfaces that combine the immediacy of manipulating a canvas with the potential of code can be found in desktop software. The 3D editing program Blender has a tight integration between a visual interface and a code interface. All the actions performed in the interface are logged as programming code, so that one can easily base scripts on actions performed in the GUI. Selecting an element will also show its position in the object model, for easy scripting access.

HTML is flexible enough that one can edit it with a text editor, but one can also create a graphical editor that works with it. Through the JavaScript language, a web interface has complete dynamic access to the page’s HTML elements. This makes it possible to imagine all kinds of interfaces that go beyond the paradigms we know from Microsoft Word on the one hand and code editors on the other. This potential comes at the expense of succinctness: to be flexible enough to work under multiple circumstances, HTML has to be quite verbose.
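
As a hypothetical illustration of that dynamic access (my own sketch, assuming an article element with the id post), a few lines of script are enough to build an editing gesture that exists in neither Word nor a code editor: double-clicking a paragraph promotes it to a heading, directly in the rendered page.

    // Hypothetical sketch: rewriting the page's own HTML in response to a gesture.
    const post = document.querySelector<HTMLElement>("#post");

    post?.addEventListener("dblclick", (event) => {
      const target = event.target as HTMLElement;
      if (target.tagName === "P") {
        const heading = document.createElement("h2");
        heading.textContent = target.textContent ?? "";
        target.replaceWith(heading);   // the paragraph becomes a heading, in place
      }
    });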

Even though the HTML5 standard has already added some modifications to make it more sparse, for adepts of Hacker Culture it is not succinct enough: hence solutions like Markdown. However, to build a workflow around such a sparse plain-text format is to deny that different people might want to interact with the content in different ways. The interface that is appropriate for a writer might not be the interface that is appropriate for an editor, or for a designer.

Conclusion

The interfaces we use on the web are strongly influenced by the values of the programmers who make them, and these programmers reject the mainstream WYSIWYG paradigm. Yet What You See Is What You Get is not going anywhere soon. It is what made the desktop computer possible, and for tasks such as document production it is the computing reality for millions of users. Rather than rejecting it, there is ample space to reinvent what WYSIWYG means, especially in the context of the web, and to find ways to combine it with the interface models that come from the traditions of Unix and Hacker Culture. Here’s hoping that a new generation of developers will be able to go beyond the fetish for plain text, and help invent exciting new ways of creating visual content.

  1. Thomas, Douglas: Hacker culture, University of Minnesota Press, Minneapolis, 2002.
  2. Coleman, Gabriella: Coding Freedom, Princeton University Press, New Jersey, 2012.
  3. The Setup / Richard Stallman, 2010.
  4. A Conversation with Alan Kay, 2004.
  5. Stephenson, Neal: In the Beginning...was the Command Line, William Morrow Paperbacks, New York City, 1999.
  6. Ibid., p. 69.
  7. Coleman, Gabriella: Coding Freedom, Princeton University Press, New Jersey, 2012, p. 51.