
Why is Computing History Vanishing?

Written By Ritika Trikha | November 30, 2015

When future generations of computer scientists look back at the advancements in their field between 1980 and 2015, they’ll turn to blank pages.
Historian Martin Campbell-Kelly points out that “up to the late 1970s, software history was almost exclusively technical.” Since then, traces of the technical history of computing have vanished. 

Think about it: When was the last time you read a detailed technical explanation of a breakthrough that took you inside the mind of the inventor? Computer science has grown exponentially in the past several decades, but the source code behind recent critical breakthroughs has gone largely unexamined.

Campbell-Kelly depicts the evolution of software literature below, based on the titles he has found most useful since 1967. You can see that, as the years go by, the emphasis moves away from pure technology and toward the application of technology.
[Figure: Campbell-Kelly’s survey of influential software literature since 1967, showing the shift from technology to its applications]

Elsewhere, board members of the National Cryptologic Museum have criticized historians for failing to adequately chronicle the National Security Agency’s work on cryptography.

Consider the “lack of historical context given to the recent revelations by Edward Snowden of NSA activities. Historians could be providing useful context to this acrimonious debate, but thus far we have not,” says Paul E. Ceruzzi of the Smithsonian Institution.

Unfortunately, historians will likely turn a blind eye to such controversy. Yet it’s not so different from the work at Bletchley Park during WWII, when Alan Turing and his colleagues decrypted intercepted German communications.

It’s a sobering realization that historians prioritize business history over technical history. Dr. Donald Knuth, a legend in the computing world, has even spoken out about today’s lack of technical computing history, devoting his 2014 Stanford lecture to the topic: Let’s Not Dumb Down the History of Computer Science.

In his lecture, Knuth asks: Why are historians overlooking the technical substance of today’s breakthroughs in computer science? And how will this trend impact future generations of computer scientists?

Tracing the Missing Pieces

In computing’s early decades, historians were knee-deep in the technical trenches. There’s plenty of analytical literature on the likes of the ENIAC, the Harvard Mark I, and early IBM computers. But come the pivotal ’80s, when the personal computer started proliferating in homes, historians shifted their focus to software’s broader economic impact.

They’re covering topics like funding and business models. Shelves are filled to the brim with books on how tech giants and unicorns are revolutionizing the world. But what about the technology itself? Have historians looked inside the black boxes of recent breakthroughs, like:

  • [1993] The R programming language, which statisticians and data scientists depend on to produce reproducible, high-quality analyses
  • [2001] BitTorrent, the peer-to-peer file sharing protocol that at its peak accounted for a substantial share of all internet traffic
  • [2004] MapReduce, the programming model that has been invaluable for large-scale data processing (a minimal sketch of the idea follows this list)
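
For readers who haven’t looked inside that particular black box, here is a minimal, single-machine sketch of the MapReduce programming model in TypeScript: the caller supplies a map function that emits key/value pairs and a reduce function that folds together all values sharing a key. The names here (mapReduce, wordCountMap, wordCountReduce) are invented for illustration and belong to no real framework; an actual implementation distributes this work across thousands of machines.

```typescript
// Minimal, single-process illustration of the MapReduce programming model.
// A real system shards the input, runs map tasks in parallel, shuffles the
// intermediate pairs by key, and runs reduce tasks on many worker machines.

type Pair<K, V> = [K, V];

function mapReduce<I, K, V, R>(
  inputs: I[],
  mapFn: (input: I) => Pair<K, V>[],      // emits intermediate key/value pairs
  reduceFn: (key: K, values: V[]) => R    // folds all values for one key
): Map<K, R> {
  // "Map" phase: apply mapFn to every input record.
  const intermediate: Pair<K, V>[] = inputs.flatMap(mapFn);

  // "Shuffle" phase: group intermediate values by key.
  const groups = new Map<K, V[]>();
  for (const [key, value] of intermediate) {
    const bucket = groups.get(key) ?? [];
    bucket.push(value);
    groups.set(key, bucket);
  }

  // "Reduce" phase: fold each group down to a single result.
  const results = new Map<K, R>();
  for (const [key, values] of groups) {
    results.set(key, reduceFn(key, values));
  }
  return results;
}

// Classic example: counting word occurrences across documents.
const wordCountMap = (doc: string): Pair<string, number>[] =>
  doc
    .toLowerCase()
    .split(/\W+/)
    .filter(Boolean)
    .map((word): Pair<string, number> => [word, 1]);

const wordCountReduce = (_word: string, counts: number[]): number =>
  counts.reduce((sum, n) => sum + n, 0);

const counts = mapReduce(
  ["MapReduce maps then reduces", "historians rarely map this history"],
  wordCountMap,
  wordCountReduce
);
console.log(counts); // each distinct word mapped to its total count
```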

Trained historians have yet to place many of these revolutionary inventions under a historical microscope. No one is contextualizing how these advancements came to be and why they matter in computing history. So, what happened? 

As Knuth notes in his talk, scientists have little incentive to study computing history. It’s perfectly respectable to write a historical dissertation in biology, mathematics, or physics, but that’s just not the case in computer science. In fact, history programs within American computer science departments are rare, if they exist at all. At best, history might be tucked under the “other” specialty for PhD candidates:
[Figure: breakdown of PhD specialty areas in computer science, where history appears only under “other”]

So, what does that leave us?

ACM published this infographic depicting the state of computing history today. It’s mostly categorized as a secondary interest for a subfield of history or science:

[Figure: ACM infographic on the state of computing history as an academic discipline]
Historians of science are usually specialists within a broader history department, under the humanities umbrella. So the accounts from non-technical historians will always be less technical than the accounts of programmers. The burden is on computer scientists to write the technical history that lives up to the caliber of Knuth.

Even those who do take up computing history cast a wider net by writing about software’s impact on business, society, and economics. Deeply technical articles, by contrast, are valuable to only a tiny sliver of scientists, and so they attract only a limited amount of financial support.

“When I write a heavily technical article, I am conscious of its narrow scope, but nonetheless it is a permanent brick in the wall of history. When I write a broader book or article, I am aware that it will have a more ethereal value, but it’s contributing to shaping our field,” Campbell-Kelly writes in response to Knuth.

When Campbell-Kelly wrote technically heavy pieces filled with jargon and acronyms, his esteemed colleagues told him his view was too narrow. When he wrote about the EDSAC of the 1950s, his critics said he had neglected to include these key points:

  • EDSAC computed what was then the largest known prime number
  • It served as a stepping stone to Watson and Crick’s discovery of the structure of DNA
  • EDSAC also reduced radio telescope data, a crucial process in radio astronomy

Studying the byproducts of computing is valuable. But so is studying the technical discoveries that make those byproducts possible.

The scarcity of jobs for computing historians also contributes to the lack of computing history. Positions in the field are limited to academia or museum work.

Factors That Contribute to the Lack of Computing History

One major factor behind the lack of computing history is the ever-changing nature of computer science. It’s hard for historians to make definitive claims when the field can transform itself within a decade. Just look at this piece on what has worked in computer science since 1999.

[Figure: comparison of which computer science ideas were considered practical in 1999 versus today]
Concepts that were considered impractical in 1999 are unambiguously essential today. 

Another factor is the sheer exponential rate of growth of this industry. According to the Bureau of Labor Statistics, computer science is the fastest-growing professional sector for the decade 2006-2016.

The projected increases for network systems analysts, computer software engineers, and computer systems analysts are 53%, 45%, and 29% respectively, while other fields (such as biological science, electrical engineering, and mechanical engineering) hover around 10%.

To top it off, look at the growth in the total number of open source projects between 1993 and 2007. We’re amidst a paradigm shift in which much of today’s pivotal software is free and open.

The last major reason the craft of computing history hasn’t matured? Amit Deshpande and Dirk Riehle of SAP Research blame it on the exponential growth rate of open source software. Michael Mahoney puts it best when he says:

“We pace at the edge, pondering where to cut in.”
[Figure: growth in the total number of open source projects, 1993–2007]

Donald Knuth: This Is a ‘Wake-Up Call’

This shift toward a more open, less patented software world is all the more reason for computer scientists and historians to wake up. As source code becomes more open, the mysteries of computing history will unravel.

Knuth sets the ideal example because he has the mind of a computer scientist and the zeal of a historian. After entering the field in the 1950s, he set the bar high with his detailed histories of assemblers and compilers. To this day, his The Art of Computer Programming is acclaimed as the definitive literature for understanding data structures and algorithms. He practiced what few grasp today: without technical history, we can never truly understand why things are the way they are.

We need in-depth history to learn how other scientists discovered new ideas. Reading source code is something most legendary programmers have done to become better programmers. As a kid, Bill Gates famously dumpster dove to find the private source code for the TOPS-10 operating system. Seeing inside a programmer’s mind as they unraveled a complex knot taught him how to solve his own problems. It’s how any new breakthrough materializes.

When Brendan Eich, for instance, set out to create JavaScript in 10 days, he needed a strong foundational knowledge of existing languages to build upon. He pulled structs from C, patterns from Smalltalk, and the symmetry between data and code offered by Lisp.
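
As a rough illustration of that blend, the sketch below, written in TypeScript (a typed superset of JavaScript), shows the traits attributed to those influences: a C-style record of named fields, a Smalltalk-flavored object whose behavior lives in named methods, and Lisp-flavored first-class functions that can be passed around like any other data. All names here (Point, turtle, applyTwice) are invented for illustration.

```typescript
// A C-style "struct": a plain record of named fields.
interface Point {
  x: number;
  y: number;
}

// A Smalltalk-flavored object: state plus methods invoked by name.
const turtle = {
  position: { x: 0, y: 0 } as Point,
  moveBy(dx: number, dy: number): void {
    this.position = { x: this.position.x + dx, y: this.position.y + dy };
  },
};

// A Lisp-flavored idea: functions are ordinary values that can be stored,
// passed around, and composed like any other piece of data.
const double = (n: number): number => n * 2;
const applyTwice = (f: (n: number) => number, n: number): number => f(f(n));

turtle.moveBy(3, 4);
console.log(turtle.position);       // { x: 3, y: 4 }
console.log(applyTwice(double, 5)); // 20
```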

We’ve only just started learning what’s possible in the computer revolution. The sooner we document and analyze these formative years, the brighter our future will be. Acclaimed computing pioneers like William Shockley, Brendan Eich, and Donald Knuth should be as well known as Albert Einstein, Isaac Newton, and René Descartes.

This is not to say that historians’ current efforts to contextualize the impact of computing have been wasted. Undoubtedly, the field is infiltrating every industry, and that analysis is important. But, for the sake of future computer scientists, we need both breadth and depth in computing history. Computer scientists and trained historians must work together to leave a holistic record of today’s pivotal advancements and truly fill the pages of computer science history for future generations of hackers.

Have you noticed that there aren’t as many strong technical historical analyses of computing and computer science? How can we fill this void? Let us know on Twitter! Tweet to @HackerRank
