Oh! The things we had to do to debug software!
Experienced programmers have tales to tell about the things they had to do to test and debug their applications. They agree on one thing: The bad ol’ days of development are best left in the past.
I’d been typing my code on a Model 33 teletype for a while, and I was anxious to see whether my application worked. The teletype was attached to a General Electric model 225 computer located at Dartmouth College. The language I was using, late in the summer of 1964, was Dartmouth BASIC.
I typed “RUN” and almost immediately the printer came to life. “WHAT?” it said. I printed out the program code and found what I thought might be a mistake. I typed in the code again (I hadn’t yet discovered the OLD command), typed RUN, and waited. Seconds later, I got the response. “WHAT?” the printer said. Somebody at GE apparently decided that I’d wasted enough of their time-share time, because I never got another chance.
But what I did experience was the challenge of debugging software in the old days: QA tools didn’t exist, there was little help to be had because almost no one had yet written a program in BASIC, and I didn’t really know what I was doing anyway.
Other languages were similarly limited. When I managed programmers in a COBOL shop years later, debugging consisted of creating enormously long printouts, taping them to the walls, and tracing the program flow with colored pencils. It was during one such session in the mid-1980s that an unhappy voice from the back of the room began excoriating me for using COBOL.
Standing there, petite and trim, was an elderly admiral who was clearly not impressed. “You should be using C,” Admiral Grace Hopper told me. Hopper, who designed FLOW-MATIC, the immediate precursor to COBOL, no longer believed her work was the right answer for the military. She was right, of course, but I didn’t have the appropriation to pay for changes.
Printfs and other joys
Unfortunately, debugging and testing software was like that even at the end of the 20th century. Depending on the language and the application, developers usually had to create their own tools to determine what a program was doing as it executed. There were some exceptions, such as the hardware-assisted debuggers from Periscope in the late 1980s and early 1990s. And some compilers, such as Manx Aztec C, could generate symbol tables that could be fed into SID, a debugger from Digital Research for CP/M-80. But all of these early tools demanded brute-force effort: setting breakpoints, stepping through the program at the source-code level, and dumping the call stack.
Finding out whether loops worked as intended and exited when they were supposed to sometimes meant embedding deliberate errors at specific points in the code; the crash and subsequent memory dump would show the state of execution at that moment.
“This then allowed the programmer to examine all aspects of the program at a point in time,” says z/OS programmer Mark Jacobs, recalling how software was debugged in the old days.
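To give a flavor of the technique, here is a minimal C sketch – not Jacobs’s actual mainframe workflow, and the loop and variables are hypothetical. A deliberate abort() at a suspect point forces a core dump that a debugger can inspect afterward:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int total = 0;
    /* A loop under suspicion: is it behaving as intended near i == 42? */
    for (int i = 0; i < 100; i++) {
        total += i;
        if (i == 42) {
            /* Crash on purpose: the resulting core dump freezes every
               variable and the call stack at this exact moment. */
            abort();
        }
    }
    printf("total = %d\n", total);
    return 0;
}
```

Loading the core file into a debugger (for example, `gdb ./a.out core`) reveals i, total, and the call stack at the instant of the crash – a one-shot snapshot, where each new question meant another deliberate crash and another dump.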
Bruce Marold, a now-retired software developer, had similar experiences on a Fortran IV project. “My first major software project was on a timesharing Fortran IV on a teletype with no debugging resources at all. Even worse, Fortran seemed to be aggressively non-modular. Sometimes GOTO was unavoidable. I had to use brute force to locate glitches.” Marold had to calculate results for the biostatistical application by hand to verify that his program was getting the right answer.
There were other approaches. Some programmers embedded printf statements in their code to report the program’s state as it executed – and many still do! Further back, programmers would single-step through a program, watching the indicator lights on the computer’s front panel to follow its execution.
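The approach looks much the same today as it did then. A minimal C sketch (the function and messages here are hypothetical) traces execution by printing the program’s state at key points:

```c
#include <stdio.h>

/* A hypothetical function being traced with printf-style debugging. */
static int parse_record(const char *line) {
    fprintf(stderr, "DEBUG: parse_record got \"%s\"\n", line);
    int value = 0;
    if (sscanf(line, "%d", &value) != 1) {
        fprintf(stderr, "DEBUG: sscanf failed, returning -1\n");
        return -1;
    }
    fprintf(stderr, "DEBUG: parsed value = %d\n", value);
    return value;
}

int main(void) {
    parse_record("42");
    parse_record("not-a-number");
    return 0;
}
```

The trace on stderr shows which path the program actually took – cheap, portable, and as useful (and as messy) now as it was decades ago.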
George Dinwiddie, an independent software development consultant, tells of having to build his own circuit emulator. “Every time I had to edit code, I had to burn new EPROMs,” Dinwiddie said. “I made an emulator to avoid that. The next week, instead of doing four iterations of finding a problem and editing code, I was able to do a couple dozen because I didn’t have to burn EPROMs.”
An entire herd of yaks to shave
QA testing is hard enough today, but in earlier eras developers and testers had little tooling support. They had to create their own test methods and debugging tools, which consumed time. And the lack of common test methods and procedures hurt accuracy.
Fortunately, things have changed. Perhaps the biggest change is the idea that developers should plan for testing in the first place. “Figuring out how you’re going to test before you build it is a huge advance,” says Dinwiddie. “With software you can build in tests.”
Preparing for testing leads to the ability to automate it. “When I got the idea of ‘test first’ in 2000, it really changed the way I thought about creating software,” Dinwiddie says. “Instead of seeing if it worked, I changed to the idea of asking the code to do what I wanted it to do.” Now he can focus on what he wants to accomplish, and then test in little steps.
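To make “test first” concrete, here is a minimal, dependency-free C sketch (the clamp function and its cases are hypothetical): the asserts are written before the code they exercise, and they state what the code must do.

```c
#include <assert.h>
#include <stdio.h>

/* Function under test, written only after the tests below were defined. */
static int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main(void) {
    /* The tests declare the required behavior up front. */
    assert(clamp(5, 0, 10) == 5);   /* in range: unchanged  */
    assert(clamp(-3, 0, 10) == 0);  /* below range: raised  */
    assert(clamp(99, 0, 10) == 10); /* above range: lowered */
    printf("all tests passed\n");
    return 0;
}
```

Each failing assert points at exactly one behavior the code doesn’t yet satisfy – the little steps Dinwiddie describes.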
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."—Brian W. Kernighan
“The most significant difference between software testing in the past and present is the reliance on automated testing,” says Yaniv Masjedi, CMO of Nextiva, a voice-over-internet-protocol company. “The perception that manual testing is more accurate and gives fewer errors is an obsolete concept of the past. Modern testing software can provide the same output at a much faster pace without compromising test quality.”
Mostly, testing took longer – which meant it was slower to roll out new versions. “In the past, software testing was slower because it involved testing the software with little historical data and limited technology to automate tests,” says Denis Leclair, VP of engineering at Trellis. Testing takes less time today, in part because historical data can predict likely outcomes. “It is now possible to automate software tests, which reduces the time spent overall in the software testing process,” he adds.
But that doesn’t mean that some older aspects of testing aren’t still useful. “In the past, there were more interactions between members of the software testing process, and these regular meetings developed many innovative strategies to test software,” says Leclair. “If applied today, software testing would see significant developments based on constant knowledge sharing.”
While there is some nostalgia for the challenges of the good ol’ days, developers acknowledge that software development and testing have improved – not least in the practice of hiring dedicated QA staff rather than waiting for users to file bug reports.
Of course, most testing today lacks the intellectual team-building that happened while developers tried to figure out why their applications failed. That isn’t all gone; those challenges have simply moved to designing reliable, mostly automated testing. There may be less drama that way, but the result is a better product.
Naturally, at Functionize we think we offer a better path for anyone who wants to test software, automate test suites, and ensure application quality. Won’t you take a few minutes to explore our features and learn more?