Dina St Johnston: Britain’s First Software House Founder Who Industrialised Code

Dina St Johnston (1930–2007) founded Vaughan Programming Services in 1959, establishing Britain’s first independent software house at a time when the very concept of a “software industry” did not yet exist. Working with Elliott computers in classified naval projects and pioneering railway control systems, she developed high-reliability programming practices that became the foundation for modern safety-critical software development. Her methodical approach to coding – characterised by extraordinary accuracy that rarely required debugging – helped industrialise software development before “software engineering” was even a recognised discipline.

It’s a crisp autumn morning when I meet Dina St Johnston in what feels like a well-appointed library, surrounded by the quiet hum of computational history. There’s something wonderfully precise about her movements, the way she settles into her chair with the same measured attention she once brought to programming Elliott computers. Her eyes hold that particular spark of someone who has spent decades solving problems that others couldn’t even properly articulate yet.

Welcome, Mrs St Johnston. It strikes me that when you started programming in 1953, you were entering a field that barely had a name. Can you tell me about those early days at Elliott Brothers?

Oh, it wasn’t called programming then, not properly. We were ‘coding’ or sometimes just ‘working the machine’. I joined Elliott’s Borehamwood Labs in 1953, fresh from six years at the British Non-Ferrous Metals Research Association where I’d been studying mathematics part-time. The Elliott Theory Division had about seven of us, three women and four men, which was rather a good proportion for the time, though I don’t think anyone particularly remarked on it then.

The machines were fascinating beasts. The Nicholas was this small computer they’d built for calculating missile trajectories, and then there was the Elliott 153 – ah, now that was something special. A massive thing, installed up at Irton Moor near Scarborough for the Admiralty’s direction-finding work. Classified, naturally. The whole point was to triangulate radio signals from hostile sources during the Cold War, using bearings from listening stations worldwide connected by the Defence Teleprinter Network.

That sounds like tremendously complex mathematics. How did you approach programming such a system when there were no textbooks, no established practices?

You learned by doing, and you learned to think like the machine. The Elliott 153 was designed specifically for direction-finding calculations – we had to take bearing data from multiple intercept stations and compute the most probable location of a transmitter.

The mathematics wasn’t terribly difficult if you understood triangulation, but the programming challenge was enormous. We were working with mercury delay lines for memory, paper tape for input and output, and every calculation had to be precisely sequenced. There was no room for sloppy thinking.

I remember the satisfaction when a program ran correctly the first time. My work rarely required debugging, which was unusual at the time. Most programmers expected to spend weeks testing and correcting. But I found that if you thought through every step carefully beforehand, mapped out exactly what the machine needed to do, you could avoid most errors entirely.

You’ve touched on something crucial there – this methodical approach. Can you walk me through how you actually developed software in those days?

Right, let me explain properly. Take the Elliott 153 project at Scarborough. The system had to process direction-finding data in what we’d now call real-time, though that term wasn’t used then.

First, you had to understand the physical problem completely. Radio signals from a transmitter arrive at different listening posts at slightly different times and angles. If you have bearings from three or more stations, you can calculate the transmitter’s position using basic trigonometry – but the calculations are extensive, and in 1954, doing them by hand took hours.
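
For readers who like to see such things made concrete, here is a minimal modern sketch of the kind of bearings fix she describes – a least-squares intersection of bearing lines, written in Python for illustration. The flat grid, the station coordinates and every name in it are assumptions made for this article, not a reconstruction of the Elliott 153’s code.

```python
import numpy as np

def fix_position(stations, bearings_deg):
    """Estimate a transmitter's position from bearings taken at known stations.

    stations     -- (x, y) positions of the listening posts, on a flat grid
    bearings_deg -- bearing from each station to the transmitter,
                    in degrees clockwise from grid north

    Each bearing defines a line through its station; with three or more
    stations the lines rarely meet exactly, so we take the least-squares
    intersection.
    """
    rows, rhs = [], []
    for (sx, sy), brg in zip(stations, bearings_deg):
        theta = np.radians(brg)
        # Any point p on the bearing line satisfies n . (p - s) = 0,
        # where n = (cos theta, -sin theta) is perpendicular to the bearing.
        rows.append([np.cos(theta), -np.sin(theta)])
        rhs.append(np.cos(theta) * sx - np.sin(theta) * sy)
    (x, y), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x, y

# Three stations and a transmitter actually at (40, 60); bearings worked out by hand.
print(fix_position([(0, 0), (100, 0), (0, 100)], [33.69, 315.0, 135.0]))
```

With only two stations the lines cross at a single point; a third or fourth bearing lets the fit absorb small measurement errors, which is why extra listening posts improved the fix.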

The Elliott 153 kept its working store in those delay lines, with magnetic film for bulk storage. We programmed in machine code, naturally – no compilers yet. Each instruction had to specify exactly which memory location to access, what operation to perform, and where to store the result.

Here’s the key: I would work out the entire algorithm on paper first. Every step, every memory allocation, every possible branch. Then I’d hand-code the instructions and punch them onto paper tape. Only then would I take the tape to the machine and hope it worked.

That sounds like an extraordinary discipline of precision.

It had to be. Machine time was precious – the Elliott 153 was the only one of its kind, and it was serving a critical defence function. You couldn’t afford to waste hours debugging when the system needed to be processing intercepts. Also, these machines weren’t terribly reliable by today’s standards. If your program crashed, you might not get another chance to run it for days.

I suppose that’s where I developed what people later called my ‘structured’ approach. Every program had to be logically organised, with clear entry and exit points, predictable flow control. I was doing what we’d now recognise as defensive programming, though the term wouldn’t exist for another decade.

Your reputation for accuracy became legendary. Colleagues said your code ‘hardly ever required debugging.’ How did you achieve that level of reliability?

Practice, certainly. But more than that – I think I had a particular way of visualising what the machine was doing. I could run the program in my head, step by step, before ever committing it to tape.

There was also something about the way I approached problems. Many programmers would start coding immediately, then debug until it worked. I found it much more efficient to spend extra time in the design phase. I’d trace through every possible execution path, consider every edge case I could imagine.

And I tested everything. Even before we had the concept of unit testing, I would write small programs to verify individual functions worked correctly. If you were calculating trigonometric functions, for instance, you’d test them with known values first.
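
A small sketch of that habit, again in modern Python purely for illustration: a hand-rolled numerical routine is checked against values known exactly before it is trusted anywhere else. The polynomial and the tolerance are assumptions for the example, not her actual routines.

```python
import math

def sine_poly(x):
    """Truncated series approximation of sin(x) – a stand-in for the kind of
    hand-coded routine an early machine would have carried."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# Spot-check against values known exactly before the routine is used in a
# larger program – the habit described above, long before 'unit testing'.
known_values = {0.0: 0.0, math.pi / 6: 0.5, math.pi / 2: 1.0}
for x, expected in known_values.items():
    got = sine_poly(x)
    assert abs(got - expected) < 1e-3, f"sin({x}) gave {got}, expected {expected}"
print("all spot checks passed")
```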

After five years at Elliott Brothers, you made the remarkable decision to start your own company. What drove that leap?

I could see a gap opening up – a rather large gap, actually. By 1958, computers were beginning to move out of research laboratories and into industry proper. Companies were buying Elliott 405s for payroll, or 800 series machines for process control. But hardly anyone in industry knew how to program the blessed things.

The computer manufacturers were focused on hardware. They’d sell you a machine and perhaps provide basic software, but if you wanted something specific to your business – well, you were rather on your own. There was no software industry because no one had conceived of such a thing.

I remember thinking, ‘Someone needs to fill this gap.’ Companies needed bespoke programs, but they didn’t have the expertise to write them. And there was a shortage of what I called ‘processor-oriented people’ – programmers who were willing to learn about steelworks or power stations, who could go round a factory floor in a hard hat and understand what the customer actually needed.

So Vaughan Programming Services was born in 1959. How did you approach building what was essentially a new type of business?

Very carefully! I started from home in Brickendon, Hertfordshire. The first contracts came through Elliott Brothers – they were among the first to recognise that selling computers was easier if customers could actually use them productively.

I hired programmers with similar backgrounds to my own – people with mathematical training who could learn new domains quickly. We worked on some fascinating projects in those early years. Nuclear power stations, for instance. The Atomic Energy Authority needed control systems for their reactors, and traditional control engineers didn’t understand digital computers well enough.

Each project taught us something new. We developed techniques for real-time control, for handling multiple simultaneous processes, for ensuring system reliability. By 1969, we could claim to be ‘the first registered independent Software unit in the UK that was not part of a computer manufacturer, not part of a computer bureau, not part of a users’ organisation and not part of a consultancy operation’.

Tell me about some of the more significant projects that shaped VPS’s development.

British Rail was transformative for us. This was in the late 1960s, when BR was modernising their entire signalling infrastructure. They needed what became known as ‘Train Describer’ systems – computer displays that would show signallers exactly where every train was in their section, updating in real time.

The technical challenges were substantial. You had to interface with track circuits, process multiple inputs simultaneously, and present information clearly to human operators under pressure. Any failure could mean accidents, so reliability was paramount.

We developed our own operating system for these applications – MACE, we called it. Machine-independent, time-sharing, designed specifically for real-time control. And in the 1970s we even built our own computer, the Vaughan 4M, when we couldn’t find commercial hardware that met our requirements.

That’s remarkable – you were essentially creating the entire technology stack, from hardware to applications.

Necessity, really. The commercial computing world was still focused on batch processing – payroll, accounting, that sort of thing. Real-time control systems needed different approaches entirely. We had to develop interrupt handling, priority scheduling, multiple concurrent processes. These are commonplace now, but in 1970, we were rather pioneering the territory.

The railway work was particularly satisfying because you could see the direct impact. Our train describer systems enabled British Rail to run more trains safely on existing track. Instead of relying on manual logbooks and telephone calls between signal boxes, operators had real-time digital displays showing exactly where every train was. It was like moving from the Victorian era to the space age.

Programming in those days was often regarded as little more than clerical work. How did you navigate the gender dynamics of a male-dominated technical field?

It’s interesting – in the 1950s, programming was seen as rather low-status work. The important people were the electronic engineers who built the machines. Programming was thought to be detailed, methodical work that women might actually be quite good at – rather like switchboard operation or bookkeeping.

In some ways, this perception worked in my favour initially. I wasn’t threatening anyone’s sense of professional hierarchy. But when I started VPS and began competing for significant contracts, the dynamics changed rather quickly.

I found that calling myself ‘Dina’ rather than ‘Aldrina’ helped in correspondence. People made certain assumptions about technical competence based on gender, and it was sometimes easier to establish credibility before those assumptions could take hold.

Once you’d proven your capability, most technical people were fair-minded. Engineers and scientists generally cared more about whether your solutions worked than whether you wore trousers or skirts. Though I must say, wearing trousers to client sites did help with credibility – and practicality when you needed to crawl around machine rooms.

Looking back, you were essentially creating what we now recognise as software engineering principles. Did you realise the historical significance at the time?

Not really, no. We were solving immediate problems, one project at a time. It was only later that patterns emerged – the importance of modular design, systematic testing, documentation standards, quality control processes.

I think what we were doing was bringing traditional engineering discipline to programming. In mechanical or electrical engineering, you wouldn’t dream of building something without detailed specifications, materials testing, safety margins. But early programming was often quite haphazard – people would just start coding and see what happened.

We insisted on proper specifications before starting any project. We documented our designs thoroughly. We tested systematically. We maintained version control – though not with the sophisticated tools available today, naturally. These practices seem obvious now, but they were rather revolutionary in 1960.

You’ve mentioned several times that your code rarely needed debugging. Can you explain your actual programming methodology?

Certainly. Let me walk you through a typical project from the railway work – designing a train describer for a busy junction like Clapham Junction.

First, requirements analysis. You’d spend time in the signal box, understanding exactly what information signallers needed, how quickly they needed it, what happened during peak periods or emergencies. You’d map out all the track sections, all the possible routes, all the signal dependencies.

Then system design. The computer had to interface with dozens of track circuits, each reporting whether a particular section was occupied. But track circuits aren’t perfectly reliable – you get false readings from electrical noise, weather conditions, maintenance work. So your software needs to filter signals, correlate readings from adjacent sections, apply logical rules to determine what’s actually happening.

The programming itself followed strict patterns. Every module had a single, well-defined purpose. Data structures were carefully designed to represent the real-world situation accurately. I used what we’d now call state machines extensively – the system always knew exactly what state each track section was in, and exactly which transitions were valid.

Error handling was crucial. If a track circuit failed, or if the computer lost communication with a remote station, the system had to fail safely. This meant extensive checking of input data, redundant calculations for critical functions, and clear error reporting to operators.

I suppose what made the difference was that I never trusted anything to work perfectly. Every input could be wrong, every calculation could overflow, every communication link could fail. Defensive programming, we’d call it now.
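
To make the state-machine and fail-safe ideas concrete, here is a minimal sketch in modern Python. Each track section is always in one of a small set of known states, only listed transitions are accepted, and anything else is reported rather than silently applied to the display. The particular states, rules and names are assumptions for this article, not the actual Train Describer logic.

```python
from enum import Enum

class Track(Enum):
    CLEAR = 1
    OCCUPIED = 2
    FAULT = 3   # circuit failure, or lost communication with the remote station

# Every transition the model will accept; anything else is flagged to the
# operator instead of being applied to the display.
VALID_TRANSITIONS = {
    Track.CLEAR:    {Track.OCCUPIED, Track.FAULT},
    Track.OCCUPIED: {Track.CLEAR, Track.FAULT},
    Track.FAULT:    {Track.CLEAR},   # must be proved clear before normal use resumes
}

def next_state(current, proposed):
    """Apply a proposed state change; keep the old state and return a warning
    if the change is not one the model allows."""
    if proposed == current:
        return current, None
    if proposed in VALID_TRANSITIONS[current]:
        return proposed, None
    return current, f"rejected transition {current.name} -> {proposed.name}"

print(next_state(Track.CLEAR, Track.OCCUPIED))   # normal train movement
print(next_state(Track.FAULT, Track.OCCUPIED))   # kept in FAULT, warning raised
```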

That level of paranoia seems almost pathological by today’s standards of rapid development and ‘move fast and break things.’

Yes, well, we couldn’t afford to break things! When your software controls railway signals or nuclear power plants, ‘breaking things’ means people die. The luxury of releasing buggy software and fixing it later simply wasn’t available.

I think this is something the modern software industry has lost sight of. Not every application needs railway-grade reliability, of course. But too many systems that should be reliable – medical devices, automotive software, financial systems – are developed with consumer application attitudes.

The discipline of getting it right the first time, of thinking through all the edge cases, of building robust error handling – these skills seem to be declining. Perhaps because the cost of failure is often hidden from the developers.

Speaking of failure, can you tell me about a project that didn’t go as planned? What did you learn from your mistakes?

There was a project in the early 1970s – I won’t name the client – where we completely misunderstood the requirements. A process control system for a manufacturing plant. We built exactly what we thought they needed, technically excellent, thoroughly tested. It was completely useless.

The problem was that we’d focused on the technical specifications without understanding the operational reality. The plant operators worked in shifts, had different skill levels, were under considerable time pressure. Our elegant, comprehensive control interface was far too complex for the actual working environment.

We had to rebuild the entire user interface from scratch. Simpler displays, fewer options, more automated decision-making. It was humbling – all our technical brilliance meant nothing if people couldn’t actually use the system effectively.

That taught me to spend much more time understanding the human factors. Technology serves people, not the other way around. The most sophisticated algorithm is worthless if it doesn’t solve a real problem in a way that real people can manage.

How do you view the evolution of computing since your retirement in 1999?

Extraordinary progress, obviously. The computational power available now would have seemed like pure fantasy in 1959. But I sometimes wonder if we’ve lost something important in the process.

Modern development environments are remarkable – integrated debugging, version control, automated testing, vast libraries of pre-written functions. Young programmers today have tools I could hardly have imagined. But I wonder if this abundance has made them less disciplined.

When every mistake was expensive, when machine time was precious, when debugging tools barely existed, you learned to think very carefully before writing a single line of code. There’s something valuable in that discipline that seems to be disappearing.

Don’t misunderstand me – I’m not nostalgic for the old limitations. But I do worry that the ease of modern development sometimes produces careless thinking. Software that works ‘most of the time’ but fails unpredictably. Systems that are so complex that no one really understands how they work.

What advice would you give to women entering technical fields today?

The same advice I’d give to any young engineer: understand your fundamentals thoroughly, never stop learning, and don’t be afraid to tackle problems others think are impossible. Though I suppose women still face some additional challenges.

In my day, expectations were so low that exceeding them was relatively straightforward. If you could demonstrate real competence, most technical people would judge you on results. Today, women in technology often face more subtle forms of bias – assumptions about leadership capability, strategic thinking, technical depth.

My advice is to build undeniable expertise. Learn the fundamentals so thoroughly that no one can question your technical competence. Then use that credibility to tackle the problems that matter to you. And don’t be discouraged by setbacks – every pioneer faces resistance.

Looking at today’s safety-critical systems – automotive software, medical devices, aerospace – do you see the influence of practices you helped establish?

Oh yes, absolutely. The principles we developed for railway signalling systems are everywhere now. Formal verification, redundant checking, graceful degradation, systematic testing – these are standard practice in safety-critical software development.

Though I must say, I’m sometimes concerned by how these practices are implemented. There’s a tendency to treat safety as a checklist exercise rather than a fundamental design philosophy. You can’t just add reliability as an afterthought – it has to be built into every aspect of the system from the beginning.

Real reliability comes from understanding your system completely, anticipating failure modes, and designing robust responses. It’s not about adding more tests or more documentation – though those are important. It’s about thinking like the machine, understanding exactly what can go wrong, and preventing those failures at the architectural level.

Any final thoughts on the trajectory of computing and software development?

Computing has become the invisible infrastructure of modern life – which is both wonderful and slightly terrifying. When I started programming, computers were exotic machines used by specialists for specific purposes. Now they’re embedded in everything from washing machines to weapons systems.

The responsibility that comes with that ubiquity is enormous. The software decisions made by young programmers today affect millions of lives. I hope they understand that responsibility and approach their work with appropriate seriousness.

But I’m optimistic, ultimately. Each generation of engineers builds on the work of previous generations. The fundamental principles of good engineering – clarity, reliability, systematic thinking – these don’t change. The tools improve, the problems become more complex, but the core challenge remains the same: building systems that work correctly, safely, and reliably in service of human needs.

Technology should serve people, not the other way around. That was true in 1959, and it’s true today. If we can keep that principle at the centre of our work, we’ll be fine.

Letters and emails

Following our conversation with Dina St Johnston, we’ve received an overwhelming response from readers eager to explore further dimensions of her groundbreaking work and personal journey. We’ve selected five letters and emails from that growing community, each asking her more about her life, her work, and what she might say to those walking in her footsteps.

Elena Varga, 34, Software Architect, Prague
You mentioned developing your own operating system, MACE, and even building the Vaughan 4M computer when commercial options didn’t meet your needs. I’m curious about the technical trade-offs you faced – what specific limitations in existing systems drove you to create your own hardware and software stack? And looking at today’s embedded systems and IoT devices, do you think we’ve lost something by relying so heavily on standardised platforms rather than building purpose-specific solutions?

Elena, your question touches on something that was rather fundamental to our approach at VPS, though I must say, we didn’t think of it in quite such grand terms at the time. We simply needed tools that worked properly for the job at hand.

The decision to develop MACE came about because the available operating systems in the late 1960s were designed for batch processing – punching cards, submitting jobs, waiting hours for results. But our railway work required genuine real-time response. When a train enters a track section, the signaller needs to know immediately, not after the next batch run. The commercial systems simply couldn’t handle multiple simultaneous inputs with the priority scheduling we needed.

Building the Vaughan 4M was even more pragmatic. We’d been using various manufacturers’ machines – Elliott, of course, but also some of the newer minicomputers coming from America. The trouble was, none of them were designed for our specific requirements. We needed fast interrupt handling, multiple serial interfaces for communicating with remote stations, and absolutely bulletproof reliability. If you’re controlling railway signals, you can’t afford to have your computer go down for maintenance every few days.

The technical limitations were quite severe by today’s standards, I’m sure. Our processors ran at perhaps 1 MHz – a snail’s pace now, I imagine. Memory was expensive and limited. But those constraints forced us to be extraordinarily efficient. Every instruction mattered, every byte of storage was precious. We couldn’t afford bloated code or unnecessary features.

I think there’s something valuable that’s been lost in today’s abundance of computational resources. When everything had to be lean and purposeful, you developed a different relationship with the machine. You understood exactly what it was doing, cycle by cycle. Modern software often seems to work by magic – layers upon layers of abstraction that few people fully comprehend.

Your point about standardised platforms is quite astute. There’s tremendous efficiency in using common components, naturally. But there’s also a tendency to accept compromises – to force your problem into whatever shape the standard solution provides, rather than building exactly what you need.

For truly critical applications – and I suspect railway signalling still qualifies – there’s still a place for purpose-built systems. The principles we used – understanding your requirements completely, building exactly what you need, testing everything exhaustively – these haven’t become obsolete. If anything, as systems become more complex, these disciplines become more important, not less.

Cole Harrison, 41, Venture Capitalist, Toronto
Starting an independent software company in 1959 must have required extraordinary foresight about where the industry was heading. What early indicators convinced you there would be a sustainable market for bespoke software services? I’m particularly interested in how you structured those first contracts and pricing models when there were no industry standards to reference – did you charge by time, by project deliverables, or did you pioneer some other approach?

Cole, you’ve put your finger on what was probably the most challenging aspect of starting VPS – we were essentially creating a market that didn’t yet exist. In 1959, the very concept of selling software as a service was completely foreign to most businesses.

My early confidence came from observing what was happening at Elliott Brothers. Companies would purchase an Elliott 405 for their payroll, then spend months struggling to make it work properly. The manufacturers provided basic software – compilers, perhaps a simple operating system – but anything specific to the customer’s business had to be written from scratch. Most firms didn’t have anyone who understood programming, and hiring experienced programmers was nearly impossible because there were so few of us about.

I could see a clear gap: companies needed the expertise, but they couldn’t justify hiring full-time programmers for what might be a six-month project. It seemed obvious that someone should provide programming services on a contract basis, rather like hiring an architect for a building project.

The early indicators were quite encouraging, actually. Elliott’s sales team began referring customers to me when they encountered resistance about software complexity. The Atomic Energy Authority was particularly receptive – they had sophisticated technical requirements but recognised they needed specialist programming knowledge.

Pricing was indeed tricky, as you’ve guessed. There were no industry precedents to follow. Initially, I charged on a time basis – so much per day, rather like consulting engineers or barristers. But I quickly learned this was problematic because clients couldn’t budget properly, and there was no incentive for efficiency on my part.

By the mid-1960s, we’d moved to fixed-price contracts wherever possible. We’d specify exactly what the system would do, provide a firm price, and guarantee delivery by a certain date. This required much more careful estimation work upfront, but it gave clients confidence and allowed us to benefit from our growing efficiency.

The key insight was treating software development as proper engineering work – with specifications, schedules, and professional accountability. Most people then viewed programming as rather like clerical work, something you did until the ‘real’ engineers sorted things out. We positioned ourselves as technical specialists, equivalent to structural engineers or electrical consultants.

The railway work was particularly valuable for establishing credibility. British Rail was a prestigious client, and their willingness to trust us with safety-critical systems gave other potential customers confidence. Success breeds success, as they say.

Yasmin Rahman, 28, Railway Systems Engineer, Singapore
Your work on British Rail’s Train Describer systems fascinates me because we’re still grappling with similar challenges in modern smart transportation. You had to process real-time data from multiple sources while ensuring absolute reliability – something we now call ‘edge computing’ in autonomous vehicles and smart city infrastructure. How did you handle data validation and sensor fusion with 1970s technology, and what principles from your approach might apply to today’s AI-driven transportation systems?

Yasmin, your question brings back vivid memories of crawling around signal boxes with multimeters and oscilloscopes, trying to understand why our displays occasionally showed phantom trains! The data validation challenges were indeed formidable with 1970s technology.

Track circuits are deceptively simple in principle – send a low-voltage signal through the rails, and when a train’s wheels short the circuit, you know that section is occupied. But the reality was far messier. Weather played havoc with readings – autumn leaves created false occupied signals, snow caused intermittent failures, and heavy rain could make perfectly clear track sections appear occupied.

We developed what I suppose you’d now call a multi-layered validation approach, though we simply called it ‘sensible checking.’ First, we’d sample each track circuit multiple times per second rather than trusting a single reading. If a section showed occupied for less than, say, three consecutive samples, we’d treat it as noise.
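
A minimal sketch of that ‘sensible checking’, in modern Python for illustration only: a change in a track-circuit reading is believed only once it has persisted for a set number of consecutive samples, so short blips are discarded as noise. The threshold and the sample data are invented for the example.

```python
def debounce(samples, threshold=3):
    """Yield a steadied occupancy reading from raw track-circuit samples.

    A change of state is only accepted once it has been seen on `threshold`
    consecutive samples; anything shorter is treated as electrical noise.
    """
    steady = None                 # what we currently believe about the section
    candidate, run = None, 0      # the reading being watched, and for how long
    for raw in samples:
        if raw == candidate:
            run += 1
        else:
            candidate, run = raw, 1
        if run >= threshold:
            steady = candidate
        yield steady

raw_readings = [0, 0, 0, 1, 0, 0, 1, 1, 1, 1]    # 1 = circuit reports occupied
print(list(debounce(raw_readings)))               # the one-sample blip never reaches the display
```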

More importantly, we applied geographical logic. Trains can’t teleport – if section A shows occupied, then clear, then section C shows occupied, but section B never registered anything, we knew we were looking at a sensor fault rather than a genuine train movement. Our software maintained a model of possible train movements based on the physical track layout and signalling rules.

The correlation between adjacent sensors was crucial. We’d cross-reference readings from overlap sections, approach circuits, and the main track circuits. If the pattern didn’t make physical sense – trains appearing to move backwards through controlled junctions, for instance – we’d flag it for the signaller’s attention rather than updating the display automatically.
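
A companion sketch of that geographical cross-checking, under the same illustrative assumptions: a section that newly reports occupied is accepted as a real train only if it adjoins a section that was occupied on the previous scan, or is a point where trains can enter the area; anything else is flagged for the signaller rather than displayed as a confirmed movement.

```python
# A toy linear layout A - B - C - D; trains enter the area only at A or D.
NEIGHBOURS = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
ENTRY_POINTS = {"A", "D"}

def implausible_occupations(previous, current):
    """Compare two successive scans (sets of occupied sections) and return the
    sections whose new occupation cannot be explained by a real train movement.
    These are flagged for the signaller instead of being shown as trains."""
    suspect = set()
    for section in current - previous:
        arrived_from_neighbour = NEIGHBOURS[section] & previous
        if not arrived_from_neighbour and section not in ENTRY_POINTS:
            suspect.add(section)
    return suspect

# A train was in B; the next scan shows C occupied – a plausible movement.
print(implausible_occupations({"B"}, {"C"}))        # set()
# Nothing was near C on the last scan, yet C now reports occupied – a phantom.
print(implausible_occupations({"A"}, {"A", "C"}))   # {'C'}
```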

What’s interesting is that these principles map quite directly onto your modern challenges, though the computing power available now must make implementation far more sophisticated. We were essentially doing pattern recognition with very limited processing capability – perhaps 64K of memory total for an entire junction’s control system.

The key insight was that reliable information is more valuable than fast information. We’d rather show a signaller that a section’s status was ‘uncertain’ than confidently display incorrect data. Better to slow down operations slightly than risk a collision because of faulty sensor readings.

I suspect your AI systems today can recognise far more complex patterns and handle much more ambiguous data, but the fundamental principle remains: never trust a single sensor, always validate against physical constraints, and when in doubt, fail safely rather than fail silently.

Matías Ortega, 37, Computer Science Professor, Buenos Aires
Here’s a speculative question that intrigues me: imagine if the internet had emerged during your prime years at VPS in the 1970s, rather than decades later. How do you think distributed computing and global connectivity might have changed your approach to building reliable systems? Would the principles you developed for isolated, mission-critical applications have translated well to networked environments, or would you have needed to fundamentally rethink your reliability and security models?

Matías, what a fascinating thought experiment! The internet arriving in the 1970s would have been absolutely transformative, though I suspect it might have given us rather more headaches than opportunities initially.

You see, our entire approach at VPS was built around the principle of controlled environments. We knew exactly what hardware we were working with, exactly what inputs to expect, and exactly what could go wrong. Our railway systems, for instance, operated in closed networks – dedicated telephone lines connecting signal boxes, with no possibility of outside interference. That isolation was a feature, not a limitation.

The prospect of opening those systems to a global network would have terrified us from a reliability standpoint! In the 1970s, we were already struggling with the unreliability of ordinary telephone lines for data transmission. The idea of depending on a network we didn’t control, carrying signals through equipment we hadn’t tested, would have seemed utterly mad for safety-critical applications.

However, I think the distributed computing aspects would have been tremendously appealing for certain applications. Our MACE operating system was already designed to handle multiple simultaneous processes and communicate with remote stations. Scaling that up to a network of computers could have allowed much more sophisticated coordination.

Imagine if our train describer systems could have communicated directly with each other, rather than relying on telephone calls between signal boxes. A train leaving Manchester could have its details automatically transmitted to signal boxes all along its route. We could have achieved much better traffic flow and scheduling with that kind of information sharing.

But the security implications would have required completely rethinking our approach. In isolated systems, physical access control was sufficient – if someone wasn’t authorised to be in the signal box, they couldn’t tamper with the computer. With networked systems, we’d have needed entirely new categories of protection.

I suspect we would have developed something rather like what you now call ‘defence in depth’ – multiple layers of validation, encryption for sensitive communications, perhaps even separate networks for different types of traffic. The mathematical foundations were certainly available then, though the computing power to implement strong encryption might have been prohibitive.

The fascinating question is whether distributed computing would have made our reliability problems easier or harder. More redundancy and backup options, certainly, but also many more potential failure modes to anticipate and handle gracefully.

Thandi Dube, 45, Tech Entrepreneur, Cape Town
You’ve spoken about the technical challenges, but I’m curious about the emotional resilience required to be a pioneer. There must have been moments of profound doubt – perhaps when a major project failed or when clients questioned your capabilities because of your gender. Can you share a specific moment when you almost gave up, and what internal resources or external support systems helped you push through? Your experience could be invaluable for women entrepreneurs facing similar challenges today.

Thandi, your question takes me to a rather difficult place, but it’s important to be honest about these things. There was indeed a moment – several moments, actually – when I wondered whether I was being foolishly stubborn rather than professionally pioneering.

The worst came in 1974, during a particularly challenging project for a power station control system. We’d been working for months, and the client – a senior engineer who’d clearly never worked with a woman in a technical capacity – began questioning every decision I made. Not the technical merits, mind you, but whether I truly understood the ‘serious nature’ of industrial control systems. He started insisting on speaking directly with my male programmers, suggesting they might have a ‘more practical’ perspective.

The breaking point came during a progress meeting when he actually said, in front of his entire team, that perhaps this was ‘too complex for a lady to manage properly’ and suggested they might be better served by ‘a more experienced firm’ – meaning, of course, one run by men. I sat there, watching months of careful work being dismissed not on its technical merits, but on assumptions about my gender.

That evening, I seriously considered closing VPS. The constant need to prove oneself was exhausting. Every mistake was attributed to my being a woman, while every success was treated as a fortunate exception. I remember thinking it might be easier to simply return to employment with a larger firm, where I could do technical work without the burden of representing my entire gender every time I walked into a client meeting.

What pulled me through was a combination of stubbornness and support I hadn’t expected. Several of my programmers – all men, as it happened – came to me the next day and said they’d resign if I capitulated to that client’s demands. They’d worked with me long enough to know the quality of our technical work, and they found his attitude professionally offensive.

More importantly, I received a telephone call from a woman at the Atomic Energy Authority who’d heard about the situation through professional networks. She reminded me that every woman in technical fields was fighting these battles, and that giving up would make it harder for those who came after.

The key insight was learning to separate professional challenges from personal attacks. Technical problems have solutions – you analyse, you test, you iterate until things work. Gender bias is different – it’s not a problem you can solve through better engineering, but it’s also not a reflection of your actual capabilities.

Reflection

As our conversation draws to a close, I’m struck by the weight of what we’ve witnessed – not just the technical brilliance of a woman who died on 1st July 2007 at age 76, but the quiet revolution she orchestrated from a modest office in Hertfordshire. Dina St Johnston didn’t simply write code; she industrialised reliability itself, transforming programming from an art into an engineering discipline decades before “software engineering” became a recognised field.

The themes that emerged from our discussion – methodical precision, defensive programming practices, and the integration of human factors into technical systems – remain strikingly relevant today. Her insistence on understanding systems completely, anticipating failure modes, and building robustness from the ground up echoes through modern safety-critical software development, from automotive systems to medical devices. What struck me most was her perspective on the cost of failure: in an era of “move fast and break things,” her reminder that some systems simply cannot afford to break carries profound moral weight.

The historical record, while acknowledging her pioneering role, perhaps understates the philosophical shift she represented. Where official accounts focus on her technical achievements – founding Britain’s first software house, developing railway control systems, creating the MACE operating system – her lived experience reveals something deeper: the disciplined thinking that made modern software reliability possible. Her colleagues’ accounts of code that “hardly ever required debugging” weren’t simply praising efficiency; they were witnessing the birth of what we now recognise as defensive programming principles.

Yet significant gaps remain in documenting her influence. The safety-critical systems community has largely forgotten that practices now codified in international standards – formal verification, redundant checking, graceful degradation – were pioneered by a woman working in relative obscurity. Her methods influenced an entire generation of programmers who went on to establish the reliability practices that underpin everything from railway signalling to aerospace systems, yet this genealogy remains largely untraced.

Today’s software engineers, grappling with the challenges of AI safety, autonomous systems, and critical infrastructure protection, would recognise a kindred spirit in St Johnston’s approach. Her fundamental insight – that software serving human needs must be built with human fallibility in mind – offers both practical guidance and ethical grounding for an industry still learning to balance innovation with responsibility. In our conversation, she reminded us that behind every reliable system stands not just clever algorithms, but careful thinking about what can go wrong and how to fail gracefully when it inevitably does.

Who have we missed?

This series is all about recovering the voices history left behind – and I’d love your help finding the next one. If there’s a woman in STEM you think deserves to be interviewed in this way – whether a forgotten inventor, unsung technician, or overlooked researcher – please share her story.

Email me at voxmeditantis@gmail.com or leave a comment below with your suggestion – even just a name is a great start. Let’s keep uncovering the women who shaped science and innovation, one conversation at a time.

Editorial Note: This interview is a dramatised reconstruction based on historical sources, academic papers, and documented accounts of Dina St Johnston’s life and work. While grounded in factual research about her contributions to software engineering and entrepreneurship, the conversational format, specific dialogue, and personal reflections represent creative interpretation rather than recorded statements. We have endeavoured to remain faithful to her documented achievements, technical methods, and the historical context of early computing in Britain, drawing upon sources including academic appreciations, company records, and contemporary accounts of her pioneering work at Vaughan Programming Services and contributions to safety-critical software development.

Bob Lynn | © 2025 Vox Meditantis. All rights reserved.
