I graduated from the London College of Printing in 1976, intent on a career in the industry, but ended up doing an MA at Central London Polytechnic first. While working for the Printing and Publishing Industry Training Board (PPITB) – one of dozens of boards set up to levy UK industry and provide training grants – I became deeply involved in their internal networking and computerisation, and in the research they did into the effects of computing on the print industry. I moved to work in software and network support in 1982 for a London city computer bureau before joining UCC in 1984 as a project manager for academic and research IT services.
At the time, there were two IBM 4341s and a VAX 11/780 in UCC, with a star network of terminal servers and hard-wired terminals. There was no external connectivity, but there had been intense interest in the networking done between Trinity and UCD which largely resulted in the establishment of HEAnet to connect the HEA-funded universities and NIHEs.
The arrival of the IBM-funded EARN network in 1983, as the European arm of BITNET, linked the mainframes in UCD and UCC with the networked world abroad. It offered file transfer, email (which is just file transfer with headers), and interactive messaging. EARN meant we could exchange email with users elsewhere in the world, so I wrote a very rudimentary and unofficial gateway to route email between EARN and those HEAnet sites which were not connected to EARN. Fortunately this was superseded by an official HEAnet gateway before we became swamped with email transfers.
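The "file transfer with headers" point is easy to see: a mail message is just a text file whose first lines are headers, followed by a blank line and the body. As a purely modern illustration (the addresses here are invented), Python's standard email module will parse such a file directly:

```python
from email.parser import Parser

# A minimal message: header lines, a blank line, then the body.
raw = """\
From: user@earn-node.example
To: colleague@heanet-site.example
Subject: Test message

This is the body: everything after the first blank line.
"""

msg = Parser().parsestr(raw)
print(msg["Subject"])        # the headers are just named fields
print(msg.get_payload())     # the body is the rest of the file
```

Routing a message between networks, as the early gateways did, amounted to reading these header fields and forwarding the file accordingly.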
Interactive messaging was especially interesting because of the ease with which you could write server software to accept commands and provide services. One such was CSNEWS AT MAINE, which provided computing-related news articles and comment on a daily basis, as well as interactive chats and discussion boards. Probably the most influential was TRICKLE AT TREARN, written by Turgut Kalfaoglu to serve requests from BITNET/EARN users for files held in repositories on the Internet, which were otherwise inaccessible to them. To avoid inundating the network with file-transfer traffic, TRICKLE broke large files into many small chunks which it sent at timed intervals (a trickle of data).
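The chunking idea can be sketched in a few lines of Python. This is only an illustration of the principle, not TRICKLE itself (which was mainframe software); the function name, chunk size, and interval here are invented:

```python
import time

def trickle_send(data: bytes, send, chunk_size: int = 4096, interval: float = 60.0):
    """Send `data` in small chunks with a pause between each,
    so that no single transfer floods the network."""
    for offset in range(0, len(data), chunk_size):
        send(data[offset:offset + chunk_size])
        if offset + chunk_size < len(data):
            time.sleep(interval)   # the "trickle": wait before the next chunk

# Collecting the chunks at the receiving end reassembles the file.
received = []
trickle_send(b"A" * 10000, received.append, chunk_size=4096, interval=0.0)
assert b"".join(received) == b"A" * 10000   # three chunks: 4096 + 4096 + 1808
```

The receiver simply concatenates the chunks in order; the cost of the scheme is latency, traded deliberately for lower peak load on the network links.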
These were all mainframe-based services and they were all accessed from terminals, so there was no graphical interface, just text, regardless of the system you were using. Within UCC you had to have a login account on one of the IBMs or the VAX to use them. By this period UCC was able to give an account to any full-time member of the academic or administrative staff, or to any postgraduate student with authorisation from their professor. Departments that wanted to use their accounts from their own offices, as opposed to the centralised terminal rooms, were expected to fund the necessary terminal equipment and cabling work. Once users saw what could be done, there was a steadily growing demand for email, although rather less for file transfer and interactive messaging.
During this period, HEAnet was working hard with the other European academic and research networks to formalise a managed service – as opposed to what was then seen as the ‘unstructured’ Arpanet/Internet model. The argument was that this was essentially an open free-for-all. This criticism was largely untrue, but the American “try it and see” approach stuck in the craw of some Europeans who wanted everything centralised and administered from above with a heavy management structure.
In 1986, the European networks founded the Réseaux Associés pour la Recherche Européenne (RARE) to coordinate this activity, and DGXIII of the European Commission contracted it to work on several development projects. HEAnet provided Irish representatives to the RARE working groups. I was nominated to WG3, which was responsible for directories and naming.
For some years, HEAnet’s activities were centred on the OSI network protocols, alongside the interim Coloured Book suite developed for the UK’s Joint Academic Network (JANET). The objective at the time was compatibility with the OSI standards, but OSI networking turned out to be expensive, cumbersome, and proprietary. By 1990 it was becoming obvious that TCP/IP was the way to go, for reasons of cost, ease of use, and compatibility with the Internet.
On RARE WG3 there was a strong leaning towards TCP/IP, especially from those who had used the Internet elsewhere, including the representative for CERN, Tim Berners-Lee, who demonstrated his new information-management system to us at a meeting in Zurich in 1991. Researchers in different labs at CERN could now make their reports available in a single format that everyone could read: HTML.
Back at the ranch, I had been using and supporting SGML (the metalanguage in which HTML is defined) since 1988, because we had a couple of EU-funded projects which required reporting in SGML. The early lack of any kind of viewer or formatter was a hindrance until I discovered (serendipitously, from Gerti Foest, the DFN representative on RARE WG3) that the DFN had a program to convert SGML to LaTeX.
We had been using LaTeX and the underlying TeX typesetting program in UCC for some years since my colleague, the late Michael Gordon, had come to me looking for a way to improve report formatting. I had recommended it because I had seen what it could do when I first encountered it at the PPITB in 1981. An active development community meant that it was available on all platforms.
My first reaction to HTML was to ask Tim if it validated. If HTML used SGML as its basis, it should have required a formal Document Type Definition (DTD) so that programs could test documents for validity. He was quite clear that there was none: so long as the start-tags and end-tags were balanced and nested correctly, no DTD was needed, because the rendering logic was built into the browser. I think quite a lot of us who were used to SGML thought this was A Bad Idea: it certainly prefigured XML in requiring only well-formedness, but it ducked the issue of rendering until CSS was invented, and it ignored authors’ need for a validating editor which could keep track of the special characters, like angle brackets, that make SGML, XML, and HTML work.
What tied all this together for UCC was a 1990 proposal from Professor Donnchadh Ó Corráin (now retired) of UCC’s Department of History for a research database of Early Irish texts, to be called the Thesaurus Linguarum Hiberniae (treasury of the languages of Ireland). Material would be scanned or retyped, and made freely available to scholars in conjunction with the Royal Irish Academy (RIA).
I was asked to contribute an IT perspective, and made two suggestions: (a) that SGML would be a suitably stable format for long-term use, using the recently-announced guidelines of the Text Encoding Initiative (TEI); and (b) that SGML would make it possible to convert to HTML and deliver the texts via the World Wide Web, rather than having to manage a constantly-changing cycle of CD-ROMs and proprietary or royalty-bearing formats.
Professor Marianne McDonald of the University of California, San Diego, who had funded the Thesaurus Linguae Graecae project, generously agreed to fund this project for the first decade and extended this later as well.
We were able to identify suitable software and to build on the experience of other projects using the TEI with help from the editors of the TEI Guidelines, Michael Sperberg-McQueen and Lou Burnard, and from Elaine Brennan of the Women Writers Project. Brian Travis, one of the authors of Omnimark, kindly donated their software which converted into and out of SGML, and we bought a copy of the PAT SGML search engine from Tim Bray (later co-editor of the XML Specification). To run all this, and to serve the web, we bought a Sun IPX workstation and became the ninth web server in the world – and Ireland’s first. The core of the original site has been preserved at http://curia.ucc.ie.
The project was known by the shorter name of CURIA (Cork University and Royal Irish Academy) and continues today under the name CELT (Corpus of Electronic Texts) at http://celt.ucc.ie.
There was a general belief among many Internet developers, certainly shared by my colleagues in UCC and HEAnet, that it shouldn’t be a club for the rich. With the web spreading slowly but steadily, we thought that the barrier to entry should be as low as possible; in particular that countries with limited national connectivity should not be additionally penalised by a lack of information. In 1993, I created a comprehensive teach-yourself-HTML website on the UCC server, which led to me being invited to the Internet Society’s new programme of Developing Countries Workshops to teach sessions on how to start and run a web service. This also led to my first book, The World Wide Web Handbook, which attempted to provide much of the information that people without a connection needed before they started.
Interest in the web attracted more companies to become Internet service providers (ISPs). At first, if a customer misbehaved, abused bandwidth, or broke netiquette, an upstream provider would come under great pressure to pull the plug and protect other users. Once ISPs were commercialised, however, they were (and remain) reluctant to do anything to damage their revenue streams, regardless of how annoying others find the activities.
A number of people in Cork tried to set up local ISPs, but this was not an affordable business proposition. Telecom Eireann’s pricing of bandwidth made it appear that their business model was to deal with a small number of larger customers rather than a large number of smaller ones. The poor quality of ageing switchgear and last-mile copper also meant that dialup access was unreliable and unattractive to domestic users.
Ireland was still suffering from a lack of understanding at a senior level in TE about the nature of data. Their engineers were excellent, but many managers were still a little vague about non-voice communications. I remember taking a call from someone who wanted to know how many voice calls we were trying to multiplex over our 9600-baud connection. I gave up trying to explain that there was no voice: it was all data.
When we started the first web server at UCC, it wasn’t long before other departments and individuals saw what we were doing and asked if they could have space for their own pages. We agreed, and allowed them to write their own pages, without initially considering the effect this would have on support – and especially on the demand for learning how to write HTML, compose web pages and manage a site. To start with, the requests tended to come from disciplines with a strong computing background; later on they could come from any department or from students wanting pages for clubs and societies.
In the mid-to-late 1990s a much-needed increase in IT staff numbers allowed me to spend more time developing research computing services. I was also able to take part in the development of XML via the W3C Special Interest Group, and to write a book on the software and services which supported SGML and XML. The widening scope of IT at all levels as we entered the 2000s meant that greater formalism was needed in service administration and management, leaving far less scope for experimentation and development.
Two areas of significant longevity in UCC have been the CELT project and the Law Department’s Irish Law Site (http://www.ucc.ie/law/irishlaw/). Both are minimally-formatted and largely text-based. Both appear to break all the rules for web design, but each accounts for 5% of the total hits to the main UCC web site. Suggestions for redesign have met with requests to leave well alone, as they have highly specialised user populations who value stable URIs, stable appearance, and an absence of irrelevant surrounding material.
While the web is now a platform capable of supporting virtually the entire set of computing requirements, the underlying technologies of HTTP and HTML have remained generally unchanged, and it’s been evident that Tim made the right picks.
Last edit: June 2017
© Peter Flynn 2017. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
1. Flynn, Peter (1995) The World Wide Web Handbook, International Thomson Computer Press, New York, NY. ISBN 1-85032-205-8, 350pp.
2. Flynn, Peter (1998) Understanding SGML and XML Tools, Kluwer, Boston, MA. ISBN 0-7923-8169-6, 432pp and CD-ROM.