Frances Spence: The Quiet Programmer Who Taught ENIAC to Think and Then Vanished from History

This interview is a dramatised reconstruction based on historical sources about Frances Spence’s life and work, written to reflect the period while remaining grounded in documented facts. Where the record is incomplete, dialogue has been imaginatively shaped rather than presented as verbatim testimony, so readers should treat it as informed historical fiction, not a primary-source transcript.

Frances Bilas Spence (1922–2012) was an American physicist and computer scientist, one of the six original programmers of ENIAC – the world’s first electronic, general-purpose, programmable computer. A scholarship student from Philadelphia who majored in mathematics with a minor in physics at Chestnut Hill College, she helped invent the very concept of programming at a time when no programming languages, manuals, or precedents existed. Though her contributions were foundational to modern computing, her name was absent from press releases at ENIAC’s 1946 unveiling, and she stepped away from the field in 1947 to raise a family, becoming an emblem of the brilliant women whose technical achievements were systematically erased by the conventions of their era.

Mrs Spence, thank you for joining us. It is a profound honour to speak with one of the architects of the digital age – someone who helped teach an eighteen-thousand-vacuum-tube machine how to think.

The honour is mine, though I confess I never thought of myself as an architect of anything. We were young women doing a job that needed doing. The architecture, as you call it, came from necessity more than vision. When you have no manual, no language, and no one to ask, you improvise. What we built, we built because there was no other way.

Let us begin at the beginning. You grew up in Philadelphia, the second of five sisters. Your parents both worked in education. What was that household like?

Busy, certainly. My father was an engineer for the Philadelphia Public School System, and my mother taught – so books were everywhere, and so was the expectation that we would work hard and think carefully. There was no nonsense about whether girls ought to study mathematics. My parents simply assumed we would do well at whatever we attempted, and they gave us the tools to do so. Philadelphia in those years was full of girls’ schools with serious academic programmes. I attended South Philadelphia High School for Girls, which had been educating young women since the previous century. It wasn’t unusual for a girl like me to love mathematics – what was unusual was finding a job that would let her use it.

And yet you found that job. You were awarded a scholarship to Chestnut Hill College, where you met Kathleen McNulty – later Kathleen Antonelli – who would become your lifelong colleague and friend. What drew you both to mathematics?

Kathleen used to say that mathematics was no work for her – just a wonderful puzzle with an answer waiting at the end. I felt the same. We were two of only three mathematics majors in our class of over a hundred women. Most of the girls were studying home economics, which was entirely respectable, but Kathleen and I wanted something different. We took every mathematics course offered: spherical trigonometry, differential calculus, projective geometry, partial differential equations, statistics. We wanted to be ready for whatever came next.

What came next, of course, was war. In 1942, you responded to an advertisement in the Philadelphia Evening Bulletin seeking women mathematics majors. What do you remember of that moment?

I remember thinking it was extraordinary that such an advertisement existed at all. Before the war, positions in mathematics were listed under “Male Help Wanted”. Women could be secretaries or nannies – not actuaries or engineers. But the war changed everything, at least temporarily. The Army needed human “computers” to calculate ballistics trajectories, and the men were off fighting. So they turned to women.

Kathleen telephoned me the moment she saw the notice. We went to interview together at the Union League on South Broad Street. The Army recruiter asked us one question: “Did you have differential calculus?” When we said yes, he said, “You are exactly what we need”. We were hired on the spot.

And so you became “computers” – human computers, performing by hand the calculations that would eventually be done by the machine you helped create.

Yes. We reported to the Moore School of Electrical Engineering on the first of July, 1942. There were about eighty women working there, all of us calculating ballistics trajectories using desktop calculators – Monroes and Marchants, mostly. The clicking of those machines was the soundtrack of our days. Each trajectory required solving complex differential equations, and each firing table for a single gun might contain nearly two thousand trajectories. A skilled human computer with a desk calculator could complete one sixty-second trajectory in about twenty hours. The work was painstaking, but we understood its importance: without accurate firing tables, a gun was nearly useless.

You and Kathleen were eventually selected to operate the Differential Analyser, a sophisticated analogue computer in the basement of the Moore School. Can you describe that machine?

The Differential Analyser was a magnificent beast – one of only five or six in the world at that time. It was an analogue device, meaning it used physical mechanisms rather than electronic circuits to model mathematical operations. Wheel-and-disc integrators performed integration; gears handled multiplication and division; an epicyclic differential mechanism managed addition and subtraction. The entire apparatus could compute a complete trajectory in about thirty to fifty minutes, compared with the twenty hours by hand. That was an extraordinary improvement.

Kathleen and I were part of a smaller team trained to operate it. The machine was finicky – it required constant minding, careful calibration, and a deep understanding of its physical behaviour. We worked in shifts, running calculations six days a week. The basement was noisy with the analyser’s whir, and we grew to know every quirk of its operation. It taught us something valuable: machines have personalities. They respond differently depending on how you treat them. That lesson would serve us well when we met ENIAC.

In 1945, you were selected as one of six women to program ENIAC – the Electronic Numerical Integrator and Computer. How did that selection occur?

In June of 1945, a memorandum circulated inviting women computers to apply for work on a new, experimental machine. The project was classified – we knew nothing about what we were being asked to do. Six of us were selected from the eighty or so women at the Moore School: myself, Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, and Ruth Lichterman. We were chosen because of our mathematical ability and our experience with the calculations ENIAC was designed to perform.

We were sent first to the Aberdeen Proving Grounds to learn IBM equipment – card readers, card punches, tabulating machines. Still, no one told us what ENIAC actually was. It was only when we returned to the Moore School and were finally introduced to the machine that we understood our task: to make this enormous, room-sized apparatus compute the same ballistics trajectories we had been calculating by hand.

And you received no training whatsoever in how to program it.

None. There were no manuals. No programming languages existed – we were, after all, inventing the concept of programming as we worked. We received only the schematic diagrams and blueprints that the engineers had used to build the machine. Our job was to learn how ENIAC worked from those drawings, then figure out how to translate a mathematical problem – specifically, a differential calculus equation for a ballistic trajectory – into a configuration of switches, cables, and punch cards that the machine could execute.

This brings us to what I would call the “Explain It to an Expert” portion of our conversation. For readers with technical backgrounds, can you walk us through how you actually programmed ENIAC? What did that process look like, step by step?

Certainly. Let me try to convey what it was like, though modern programmers will find it quite alien.

ENIAC was not a stored-program computer in its original configuration. It had no internal memory for instructions. To program it, we physically reconfigured the machine for each calculation. This meant setting approximately three thousand switches, connecting dozens of cables between panels, and routing data through the machine’s various units using plugboards – rather like rewiring a telephone switchboard, except every connection had to be logically perfect.

The machine consisted of forty black panels, each about eight feet tall. The core units included twenty accumulators – each capable of storing a ten-digit number and performing addition or subtraction – plus a multiplier, a divider and square-rooter, three function tables, a master programmer for sequencing operations, and input/output units connected to IBM card equipment.

To set up a trajectory calculation, we first had to understand the differential equation governing the projectile’s motion. Position and velocity change continuously over time, but ENIAC worked in discrete steps. So we broke the trajectory into small time intervals – perhaps ten-millisecond increments – and calculated position and velocity at each step. This numerical integration required iterating through the same sequence of operations hundreds of times.

Here is where we invented something important: the subroutine. We realised that we needed to perform the same operations repeatedly – calculate new position from old position, new velocity from old velocity, apply corrections for air resistance and gravity. Rather than reconfigure the entire machine for each iteration, we designed a way to store the sequence of operations and call it repeatedly, feeding in new input values each time. This concept – now fundamental to all programming – did not exist before we invented it.
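In modern terms, the structure she describes – a small step routine called over and over with fresh inputs – might be sketched as below. This is an illustrative Python sketch, not ENIAC's actual configuration: the drag model, constants, and step size are assumptions chosen for readability, not historical firing-table mathematics.

```python
import math

# Illustrative constants -- not historical firing-table values.
G = 9.81      # gravity, m/s^2
K = 0.00005   # toy quadratic-drag coefficient, 1/m
DT = 0.01     # time step in seconds (the "small increments" of the text)

def step(x, y, vx, vy):
    """One integration step: the reusable 'subroutine' of the text.

    Updates position from velocity, then velocity from gravity and a
    simple quadratic air-drag model, using the fixed time step DT.
    """
    speed = math.hypot(vx, vy)
    ax = -K * speed * vx
    ay = -G - K * speed * vy
    return x + vx * DT, y + vy * DT, vx + ax * DT, vy + ay * DT

def trajectory(v0, angle_deg):
    """Iterate the same step subroutine until the shell lands."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while True:
        x, y, vx, vy = step(x, y, vx, vy)
        if y <= 0.0:
            return x  # range at impact, in metres

# e.g. trajectory(500.0, 45.0) gives an approximate range in metres
```

The point of the design is the one she makes: `step` is configured once and invoked repeatedly with new values, rather than the whole "machine" being rebuilt for each iteration.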

We also invented what you would now call nesting: one subroutine calling another. And we invented breakpoints – ways to pause the calculation at a specific point and examine intermediate results without stopping the machine entirely. Betty Snyder – later Betty Holberton – is often credited with refining the breakpoint concept: in the original design, programme flow was set by plugging cables from one unit to another, and to make the programme stop at a certain point, we simply removed a cable. That was a breakpoint.

How did you coordinate the timing of operations across so many units?

That was one of the most challenging aspects. ENIAC operated on programme pulses that travelled through the machine along control lines. Each unit had programme controls – the accumulators had twelve each – that told the unit which operation to perform when it received a pulse. We had to design the routing so that pulses arrived at the correct units in the correct sequence, with the correct timing.

We developed notation systems to manage this complexity. We created what we called “peddling sheets” or flow diagrams – hand-drawn diagrams showing how data would move through the machine, where it would be stored, how it would be transformed. Without those diagrams, the configuration would have been impossible to replicate or debug. These were the ancestors of what you now call flowcharts.

What about error rates? Vacuum tube failures were notoriously common.

ENIAC contained approximately eighteen thousand vacuum tubes, along with seven thousand crystal diodes, fifteen hundred relays, seventy thousand resistors, ten thousand capacitors, and roughly five million hand-soldered joints. Every one of those components was a potential point of failure. Early estimates suggested that with typical vacuum tube reliability, ENIAC would have less than a fifty percent chance of running for ten minutes without a failure.

In practice, the engineers discovered that operating vacuum tubes below their rated conditions dramatically improved reliability. The machine ran at lower voltages and was kept continuously powered to avoid the stress of heating and cooling cycles. Even so, tubes failed every two to three days on average. When one failed, we had to identify which of the eighteen thousand was the culprit, replace it, and verify that the calculation remained valid. This taught us the necessity of building redundancy into our logic and cross-checking results. We never trusted the machine blindly.
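The arithmetic behind those reliability estimates is easy to reproduce under a simple exponential-failure model. The per-tube lifetimes below are back-of-envelope assumptions chosen to illustrate the two regimes she describes, not measured figures from the project.

```python
import math

N_TUBES = 18_000  # approximate vacuum-tube count from the text

def machine_mtbf(per_tube_mtbf_hours):
    """With independent exponential failures, the whole machine's mean
    time between failures is the per-tube MTBF divided by the tube count."""
    return per_tube_mtbf_hours / N_TUBES

def p_survive(duration_hours, per_tube_mtbf_hours):
    """Probability the machine runs for `duration_hours` with no failure,
    under the same exponential model."""
    return math.exp(-duration_hours / machine_mtbf(per_tube_mtbf_hours))

# If an ordinary commercial tube lasted ~2,500 hours (an assumption),
# the machine would fail about every eight minutes, and a clean
# ten-minute run would have well under a 50% chance:
print(p_survive(10 / 60, 2_500))

# Derating the tubes to an effective ~1,000,000-hour per-tube MTBF
# (again an assumption) gives a machine failure interval of roughly
# two days -- the regime the text reports after derating:
print(machine_mtbf(1_000_000) / 24)  # in days
```

The model ignores infant mortality and thermal-cycling stress, which is precisely why keeping the machine continuously powered helped so much in practice.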

On 15th February 1946, ENIAC was unveiled to the press and public. The demonstration showed the machine calculating a thirty-second ballistic trajectory in just twenty seconds – faster than the shell itself could fly. Yet you and your colleagues were not introduced. Your names did not appear in press releases. What was that day like?

It was… complicated. We had worked for months to make that demonstration possible. We had bench-tested every step of the programme by hand, using the same desktop calculators we had started with, so that we could verify the machine’s results at each stage. We knew that machine inside and out – we had climbed inside its panels, traced its circuits, debugged its failures.

On the day of the demonstration, we were present. You can see us in the photographs, standing beside the machine. But we were not named. We were not introduced to the reporters. The Army’s press releases credited the men – primarily Mauchly and Eckert for design, Goldstine for mathematics. The women who had made the machine run were described, if at all, as “operators.”

After the demonstration, there was a formal dinner. Deans from the University, Army representatives, invited guests. None of the six of us were invited. We had served coffee and answered questions during the day. That evening, we went home.

Did you understand at the time how significant that erasure was?

Not entirely, no. We were young, and we were accustomed to women’s contributions being minimised. Our official title was “computer” – a word that then meant a person who computes – and that title sounded clerical, even though the work was anything but. The Army presented programming as a kind of skilled labour, not engineering or scientific work. Hardware was seen as men’s work; software – if that word had even existed – was women’s work. We understood this implicitly, even if we did not articulate it as clearly as one might today.

What strikes me now, looking back, is how thoroughly the structure of credit was designed to make our work invisible. We were working on a classified project, so we could not speak publicly about what we had done. Our job titles obscured the nature of our expertise. And when the time came to celebrate the achievement, we were positioned – literally – as background figures. I do not believe there was a conspiracy to erase us specifically. It was simply how women’s technical work was treated.

You continued working on ENIAC after the war ended. What was that period like?

After V-J Day, the immediate need for firing tables disappeared, but the Army recognised that ENIAC had capabilities far beyond ballistics. We stayed on, collaborating with mathematicians from institutions across the country. We helped programme calculations for Los Alamos – we did not know it at the time, but these were preliminary calculations for the hydrogen bomb. We worked with some of the finest minds in mathematics and physics.

It was a remarkable period. We were no longer just “operators” – we were the people who understood the machine, who could translate a mathematical problem into a working programme. The scientists who came to use ENIAC needed us. They could not simply walk up to the machine and make it compute; it required our expertise.

And yet you left in 1947.

I married Homer Spence in 1947. He was an Army electrical engineer who had been assigned to the ENIAC project – you can see him in some of the photographs, examining digit trays or standing with the other engineers. He was later promoted to head the Computer Research Branch at Aberdeen Proving Grounds. Shortly after our marriage, I resigned to have our first child. We eventually had three sons.

Did you consider staying in the field?

I considered it, yes. But the expectations of that era were clear. A married woman, particularly one with children, was expected to devote herself to her family. The war had opened doors for women in technical fields, but peace was closing them again. There was no model for combining motherhood with the kind of intensive technical work ENIAC required. And frankly, no one asked me to stay. The transition from wartime necessity to peacetime normalcy meant that women were gently – or sometimes not so gently – encouraged to return home.

I will not pretend it was an easy decision, or that I made it entirely freely. But I do not regret it. I raised three sons, and I had a full life. What I regret is that so many women faced the same choice, and that the field lost so many talented people as a result.

I want to ask about something you have not yet discussed: failure. What did you get wrong? What mistakes can you now acknowledge?

We made many mistakes. Programming ENIAC was a process of trial and error, and there was a great deal of error.

One memory stands out. Early in our work, Kathleen and I were convinced we had correctly configured a particular sequence of operations. We had checked our peddling sheets, traced the wiring, verified the switch settings. But when we ran the programme, the results were nonsense. We spent hours – I mean hours – searching for the fault. It turned out that we had misunderstood a timing issue. Two programme pulses were arriving at an accumulator simultaneously when they should have been sequential. The machine was doing exactly what we had told it to do; we had simply told it the wrong thing.

That taught me humility. The machine does not make errors in the way humans do. It follows instructions with absolute fidelity. When something goes wrong, the fault is almost always in the instructions, not the execution. This is a lesson every programmer learns eventually, but we learned it without anyone to tell us.

There were also larger mistakes – decisions about how to structure programmes that, in retrospect, were inefficient. The stored-program conversion of ENIAC, which Jean Jennings Bartik led after the war, initially had performance issues because we were thinking in terms of minimising reconfiguration rather than optimising execution speed. We solved it eventually, but the initial approach wasted time.

Some contemporaries – and later historians – argued that what you did was merely clerical work, not true engineering. How do you respond to that critique?

I respond by inviting anyone who believes that to try it themselves. Give them the schematic diagrams. Give them a differential equation. Ask them to configure eighteen thousand vacuum tubes, three thousand switches, and dozens of cables so that the machine produces the correct answer. Then ask them to debug it when it fails.

What we did was intellectual work of the highest order. We were not typing or filing. We were translating mathematical abstractions into physical configurations, inventing notation systems to manage complexity, developing debugging techniques that are still used today, and solving problems no one had ever encountered before. The fact that we were called “computers” rather than “engineers” reflects the gender politics of the era, not the nature of our work.

I do not say this with bitterness. I say it because it is true, and because understanding this history matters for understanding how we arrived at the world of computing we have today.

Let me ask about your friendship with Kathleen. You met as scholarship students at Chestnut Hill, you were hired together, selected together for the Differential Analyser, selected together for ENIAC. How important was that partnership?

Essential. I cannot imagine having done any of it alone.

Kathleen and I thought differently. She had a gift for seeing patterns, for grasping the structure of a problem before anyone else. I was more methodical, more inclined to work through every detail systematically. Together, we balanced each other. When one of us was frustrated, the other could offer a fresh perspective. When one of us doubted herself – and we both did, often – the other could provide encouragement.

There was also practical support. We proofread each other’s work. We covered for each other when family obligations arose. We celebrated each other’s successes, which was important, because no one else was celebrating them.

The other programmers – Jean, Betty, Marlyn, Ruth – were also remarkable women, and I do not wish to diminish their contributions. But Kathleen was my closest friend in that work, and I believe our partnership was a model for how women can support each other in fields that are not always welcoming.

In 1997, you and the other ENIAC programmers were inducted into the Women in Technology International Hall of Fame. After fifty years of obscurity, how did it feel to finally receive recognition?

It felt… late. I do not mean that ungratefully. The ceremony was moving, and I was deeply honoured. But I was seventy-five years old. Most of my career had been spent outside the computing field. Ruth had already passed away; her husband accepted in her memory. The recognition came because of the extraordinary work of Kathy Kleiman, who had discovered our story as an undergraduate and spent decades bringing it to light.

What I felt most strongly was not vindication for myself, but hope that our story might inspire young women entering technical fields today. If they know that women were present at the very beginning – that we were not newcomers to computing, but pioneers – perhaps they will feel a greater sense of belonging. Perhaps they will be less easily discouraged by those who suggest they do not belong.

What advice would you offer to young scientists and engineers today, particularly women and those from groups that have historically been excluded?

First, trust your work. If you know you are doing good work, hold onto that knowledge even when others fail to recognise it. Recognition may come late, or not at all, but the work itself has value.

Second, find your collaborators. The lone genius is largely a myth. Real progress comes from people working together, challenging each other, supporting each other. Find your Kathleen.

Third, document everything. We did not document our work adequately, and it made reconstruction of our contributions far more difficult. Keep records. Write things down. Future historians – or your future self – will thank you.

Finally, do not let others define the significance of your work. We were told we were doing clerical labour. We knew we were doing something more. We were right.

One last question, Mrs Spence. What do you make of the world of computing that exists today – the smartphones, the artificial intelligence, the ubiquitous digital infrastructure?

It is extraordinary. And strange. I sometimes think of ENIAC – that enormous room full of black panels, the heat, the noise, the smell of warm electronics – and I try to reconcile it with the small device you probably carry in your pocket. The computational power in a modern telephone exceeds ENIAC’s by a factor I cannot easily comprehend.

But what strikes me most is not the speed or the miniaturisation. It is the abstraction. Modern programmers speak to compilers, not computers. They never see the hardware. They never smell the vacuum tubes. In some ways, this is a tremendous liberation – they can focus on problems rather than machinery. In other ways, I think something is lost. There was a certain satisfaction in knowing exactly which vacuum tube was lighting up when you pressed a button. We had an intimacy with the machine that I suspect few programmers have today.

Whether that is progress or loss, I cannot say. Perhaps it is both.

Mrs Spence, thank you. Your work laid the foundation for the digital world, and your story reminds us that the people who build transformative technologies are not always the ones history chooses to remember.

Thank you for remembering. That is all any of us can ask.


Letters and emails

After our conversation with Frances Spence, it was clear that many readers still had more they wanted to explore with her – about the precision of her wartime calculations, the quiet cost of leaving ENIAC, and the echoes of her experience in today’s tech world. From a growing community of scientists, students, and curious observers around the globe, we’ve selected five letters and emails that reach toward those lingering questions. Together, they ask her to say more about how she worked, how she thought, and what guidance she might offer to those building and questioning technology today.

Femke De Vries (32, applied mathematician, Amsterdam, Netherlands):
If you think back to your trajectory work, how consciously were you reasoning about numerical stability – step sizes, accumulation of rounding error, error budgets – when you set up ENIAC runs? And looking at today’s ODE solvers and floating‑point libraries, is there anything in your Moore School routines that you think modern numerical analysts have forgotten or could still learn from?

That is a question after my own heart, Femke – and I am pleased to hear from a mathematician in Amsterdam. You put your finger on something we thought about constantly, even if we did not always use the language you would recognise today.

When we were computing trajectories by hand at the Moore School, and later when we configured ENIAC, the question of step size was never far from our minds. A ballistic trajectory is governed by differential equations – position, velocity, air resistance, gravity, the rotation of the Earth if you want to be precise about it. To solve these numerically, you must break continuous motion into discrete steps. Too large a step, and your approximation drifts from reality; too small, and you spend hours – or precious machine cycles – on unnecessary refinement.

We learned this first on the Differential Analyser, which was an analogue machine. Analogue computation has its own sources of error: mechanical slippage, imperfect calibration of the wheel-and-disc integrators, small variations in the torque applied by the input tables. You developed an intuition for when the machine was “drifting,” as we called it – when accumulated mechanical error was beginning to corrupt your result. We would run the same problem twice, compare outputs, and if they diverged beyond a certain tolerance, we knew something was wrong.

On ENIAC, the errors were different in character but no less troublesome. ENIAC worked in decimal, with ten-digit accumulators, and every operation introduced the possibility of rounding. When you iterate hundreds or thousands of times through a trajectory calculation, those small roundings can compound. We did not have the formal vocabulary of “numerical stability” that you use today – that language came later, with people like John von Neumann and Alston Householder developing the theory more rigorously. But we understood the phenomenon in our bones.

What we did was essentially empirical verification. We would compute a trajectory on ENIAC, then check critical points against hand calculations or against results from the Differential Analyser. If a shell was supposed to land at a particular range under particular conditions, we knew roughly what the answer should be. When the machine gave us something wildly different, we knew to look for error – either in our programme configuration or in the accumulation of numerical drift.

We also learned to be thoughtful about the order of operations. Addition and subtraction are relatively safe, but multiplication and division can amplify small errors. When we designed our subroutines, we tried to arrange calculations so that the most error-sensitive operations came early, when the numbers were still “clean,” and we could carry more precision forward into later steps.
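The compounding she describes can be demonstrated with Python's `decimal` module set to a ten-digit context, loosely mimicking ENIAC's ten-digit accumulators. The quantity being summed and the iteration count are arbitrary choices for illustration.

```python
from decimal import Decimal, getcontext

getcontext().prec = 10  # ten significant digits, like an ENIAC accumulator

def accumulate(step_count):
    """Add 1/3 repeatedly at ten-digit precision, then compare against
    the same total computed with only a single final rounding."""
    third = Decimal(1) / Decimal(3)           # already rounded to 10 digits
    total = Decimal(0)
    for _ in range(step_count):
        total += third                        # each addition re-rounds
    exact = Decimal(step_count) / Decimal(3)  # one rounding at the end
    return total, exact, abs(total - exact)

total, exact, drift = accumulate(100_000)
print(drift)  # small per step, but it compounds as the total grows
```

Once the running total grows large, each addition must discard low-order digits of the increment, so the loss per step grows with the magnitude of the accumulator – the same "drift" the human computers learned to watch for.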

Now, as to what modern numerical analysts might have forgotten – I would not presume to lecture people with far more training than I ever had. But I will say this: we had no choice but to understand our errors. We could not hide behind a library function or trust that someone else had handled the numerics correctly. Every calculation was transparent to us, because we had configured the machine to perform it. We knew which accumulator held which intermediate result, and we could stop the machine and inspect those values directly.

I suspect that modern tools, for all their power, can create a kind of distance between the programmer and the mathematics. You call a function, and it returns an answer, and you trust that answer because the function has been validated by others. That trust is usually well-placed. But when it is not – when you are working at the edges of precision, or with ill-conditioned problems, or with data that behaves unexpectedly – I wonder whether that distance becomes a liability.

The habit of checking your work against independent calculation, of understanding where error can enter and how it propagates, of never quite trusting the machine to be wiser than you – these are habits we developed because we had no alternative. They may seem old-fashioned now. But mathematics does not change, even if the machines do. The errors we worried about in 1945 are still waiting to catch the unwary in 2025. I would simply encourage anyone working with numerical methods to remember that the machine is only as trustworthy as the instructions it follows, and those instructions were written by fallible human beings – whether that was six women in Philadelphia eighty years ago, or a team of software engineers last Tuesday.

Ethan Garcia (27, software engineer, San Francisco, United States):
As someone who had to understand every wire and timing pulse, how would you advise today’s programmers – who mostly live inside high‑level languages and IDEs – to regain that kind of deep mental model of a machine? Are there specific exercises, mental habits, or ways of tracing logic that you used on ENIAC that you think would make us better at debugging and performance tuning now?

Ethan, that is a fine question, and I appreciate you asking it honestly. I think many young engineers sense that something has been lost, even if they cannot quite name it. Let me try to describe how we thought about the machine, and perhaps you can draw your own conclusions about what might be useful today.

When we came to ENIAC, we had nothing but the blueprints – the block diagrams and circuit schematics that the engineers had used to build the thing. No manual. No programming guide. Certainly no helpful error messages. If we wanted to understand what the machine would do when we flipped a switch or connected a cable, we had to trace the logic ourselves, from the schematic, through the circuits, to the behaviour we could observe.

This was tedious, I will not pretend otherwise. But it gave us something valuable: a mental picture of the machine that was grounded in physical reality. When I closed a switch on an accumulator panel, I knew – not abstractly, but concretely – that I was completing a circuit that would cause a particular sequence of pulses to travel along a particular path. When something went wrong, I could reason backwards from the symptom to the cause, because I understood the chain of events that connected them.

The habit I would recommend, if you want to cultivate that kind of understanding, is simply this: trace things. When you write a line of code, ask yourself what actually happens when that line executes. Not at the level of the language you are using, but one layer down, and then another. Where does the data live? How does it move? What does the processor actually do? You need not understand every transistor – that would be impossible now, I gather, with billions of them on a single chip – but you should be able to sketch the broad outlines.

When we debugged ENIAC, we used a technique that I believe Betty Holberton refined into something quite elegant: we would remove a cable at a particular point in the programme, which would cause the machine to stop there. Then we could examine the contents of the accumulators and verify that the intermediate results matched our expectations. If they did not, we knew the error lay somewhere before that point. We would move our “breakpoint,” as it came to be called, earlier in the programme and repeat the process, narrowing down the location of the fault.

This is nothing more than divide and conquer, applied to debugging. I understand that modern tools automate much of this – you can set breakpoints with a button, inspect variables in a window, step through code line by line. These are wonderful conveniences. But I would encourage you not to let the tool do all the thinking. Before you set a breakpoint, form a hypothesis. What do you expect to see at this point in the programme? Why? If your expectation is wrong, that tells you something about your mental model, not just about the bug.
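The procedure she describes – pull a cable, inspect the accumulators, move the breakpoint, repeat – is a binary search over the steps of a programme. A minimal Python sketch, with the stage functions and expected intermediate values invented purely for illustration:

```python
def find_first_bad_stage(stages, expected, x):
    """Binary-search for the first stage whose output diverges from
    expectation -- the cable-pulling procedure in miniature.

    stages:   list of functions applied in sequence
    expected: expected intermediate value after each stage
    x:        initial input
    """
    lo, hi = 0, len(stages)  # the fault lies somewhere in stages[lo:hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # "Pull the cable" after stage mid: run that far and inspect.
        value = x
        for f in stages[:mid]:
            value = f(value)
        if value == expected[mid - 1]:
            lo = mid   # correct so far; the fault is later
        else:
            hi = mid   # divergence already visible; the fault is earlier
    return lo  # index of the first suspect stage

# A hypothetical four-stage calculation on x = 2. The intended pipeline
# is +1, *2, -7, *2, but stage 1 wrongly multiplies by 3 instead of 2.
stages = [lambda v: v + 1, lambda v: v * 3, lambda v: v - 7, lambda v: v * 2]
expected = [3, 6, -1, -2]  # hand-computed values for the *correct* pipeline
bad = find_first_bad_stage(stages, expected, 2)  # stage 1 is the culprit
```

Note that the search only works because the expected intermediate values were worked out independently beforehand – which is exactly the point she makes about forming a hypothesis before setting a breakpoint.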

Another habit we developed was what I might call “desk checking” – working through the logic of a programme by hand, on paper, before ever running it on the machine. ENIAC time was precious, and we could not afford to waste it on programmes that were obviously flawed. So we would sit with our pedalling sheets and trace through the sequence of operations, keeping track of what each accumulator would hold at each step. Often we would catch errors this way that would have taken hours to find on the machine itself.

I suspect this practice has fallen out of fashion. Why trace through code by hand when you can simply run it and see what happens? But there is a difference between seeing what happens and understanding why it happens. The discipline of desk checking forces you to engage with the logic at a deeper level. You cannot simply watch the programme execute; you must predict its behaviour and then verify your prediction.
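Desk checking translates just as directly: write down what each register should hold after every step, then let the machine confirm or refute the prediction. A toy sketch in Python – the “programme” and the hand-computed predictions are invented for illustration:

```python
# A toy "accumulator" programme: each step is (description, operation).
program = [
    ("set A = 5",    lambda acc: {**acc, "A": 5}),
    ("add A into B", lambda acc: {**acc, "B": acc["B"] + acc["A"]}),
    ("double A",     lambda acc: {**acc, "A": acc["A"] * 2}),
]

# The desk check: predicted accumulator contents after each step,
# worked out by hand before running anything.
predictions = [
    {"A": 5,  "B": 0},
    {"A": 5,  "B": 5},
    {"A": 10, "B": 5},
]

acc = {"A": 0, "B": 0}
for (desc, op), predicted in zip(program, predictions):
    acc = op(acc)
    status = "ok" if acc == predicted else f"MISMATCH: got {acc}"
    print(f"{desc:<14} predicted {predicted} -> {status}")
```

A mismatch on any line tells you either the programme or your mental model is wrong – and as she says, both discoveries are valuable.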

Finally, I would say: do not be afraid to ask “why” until you reach bedrock. When we encountered a behaviour we did not understand, we did not simply accept it as a peculiarity of the machine. We kept asking questions until we could explain the behaviour in terms of the underlying circuits. Sometimes this meant consulting with the engineers – John Mauchly or Presper Eckert or one of the others – to understand some aspect of the hardware we had not fully grasped. There was no shame in asking; the shame would have been in remaining ignorant.

I realise that modern systems are vastly more complex than ENIAC, and that no one person can understand everything from the application layer down to the silicon. But complexity is not an excuse for incuriosity. You can still cultivate the habit of understanding one layer deeper than you strictly need to. Over time, those layers accumulate, and you find that you have a richer and more reliable mental model of the systems you work with.

The machine, in the end, is not magic. It is logic made physical. The more clearly you can see that logic, the better you will be at bending it to your purpose.

Halima Omondi (40, secondary‑school physics teacher and STEM mentor, Nairobi, Kenya):
You spent the war years doing exquisitely precise work for a purpose that was, at its core, about improving weapons. How did you make sense of that personally, and what would you say to young scientists today who are wrestling with whether to contribute their skills to defence, AI, or other technologies that might be used in ways they cannot fully control?

Halima, thank you for asking this question. It is not one I was often asked in my time, and I think it deserves a careful answer – particularly from someone who teaches young people and helps them find their way in science.

I will be honest with you: during the war, I did not think about it as much as perhaps I should have. We were young, the country was at war, and the work felt urgent and necessary. Boys we had known in school were fighting overseas. Some of them did not come home. The firing tables we calculated were meant to help our soldiers aim their guns accurately, and accurate aim meant fewer wasted shells and, we hoped, shorter engagements and fewer casualties on our side. That was the frame we put around it.

But I would be lying if I said the moral weight never occurred to me. There were moments – usually late at night, or in the quiet of a Sunday afternoon – when I would think about what those trajectories actually represented. A trajectory is an abstraction: a curve through space, governed by elegant differential equations, shaped by gravity and air resistance and the spin of the shell. But at the end of that curve is an impact. And at the point of impact, there may be human beings.

I do not think I ever fully resolved this tension. I set it aside, mostly, because setting it aside was the only way to continue the work. I told myself that the war was just, that defeating fascism was a moral imperative, that the weapons would be fired whether or not I calculated their trajectories. All of these things were true. None of them entirely settled the question.

What I can tell you is that the distance between the calculation and the consequence made it easier not to dwell on it. I never saw a battlefield. I never saw the effect of a shell landing where our tables said it would land. The work was abstract – numbers on paper, switches on a panel, curves on a plotting board. That abstraction was a kind of protection, though I am not sure it was an honest one.

After the war, when we worked on calculations for Los Alamos, I did not know at first what we were computing. The project was classified, and we were told only what we needed to know to configure the machine. It was only later that I understood those calculations were related to the hydrogen bomb – a weapon of almost unimaginable destructive power. By then, of course, the work was done. I cannot say how I would have felt if I had known at the time. I hope I would have thought carefully about it. I am not certain I would have refused.

Now, as to what I would say to young scientists today who face similar questions – and I gather these questions are more pressing than ever, with artificial intelligence and autonomous systems and all manner of technologies that can be turned to harmful ends – I would say this:

First, take the question seriously. Do not dismiss it as impractical or naive. The fact that a technology will be developed with or without you does not absolve you of responsibility for your own choices. You are not merely a cog in a machine; you are a moral agent, and your participation matters.

Second, recognise that there are rarely clean answers. Most technologies are dual-use. The same mathematics that guides a missile can guide a rescue helicopter. The same computing power that enables surveillance can enable medical diagnosis. You will not always be able to draw a bright line between acceptable and unacceptable work. What you can do is think honestly about the likely uses of what you build, and whether you can live with those uses.

Third, seek out others who share your concerns. I was fortunate to work alongside women who were thoughtful and serious-minded. We did not often discuss the ethics of our work explicitly – that was not the custom of the time – but we supported one another, and I think that support made it easier to stay grounded. If you are troubled by the direction of your work, find colleagues who will think through the questions with you. You need not carry the weight alone.

Fourth, and perhaps most importantly: do not let the difficulty of the question paralyse you. Science and engineering are powerful forces in the world, and the world needs people of conscience to be involved in shaping them. If everyone with moral qualms steps back, the work will be done by those without such qualms. That is not a better outcome.

I did not always get these things right. I did not always ask the questions I should have asked. But I tried to do good work, and I tried to be honest with myself about what that work meant. That is all any of us can do – and it is not nothing.

Diego Rodríguez (29, computer engineering student and history‑of‑technology enthusiast, Buenos Aires, Argentina):
Imagine a different 1946 in which you and the other ENIAC programmers were formally recognised as engineers, kept on long‑term, and given resources to train new teams. How do you think that might have changed the early culture of computing – things like who got to enter the field, how programming was taught, or which problems ENIAC‑style machines were pointed at first?

Diego, what a question. You are asking me to imagine a world that did not exist – and I confess, there is a certain pleasure in the exercise, even if it carries a note of melancholy.

Let me begin with what actually happened, so we can see clearly what might have been different. In February of 1946, ENIAC was unveiled to the press and the public. The six of us who had programmed the machine were present, but we were not introduced. The Army’s narrative centred on the engineers and the mathematicians who had designed the hardware – Mauchly, Eckert, Goldstine, and others. We were “operators,” in the official telling, and operators were not the story.

Within two years, most of us had left the project. I married Homer in 1947 and resigned shortly after. Jean Jennings married William Bartik and stayed on longer, but she was the exception. Kathleen married John Mauchly – yes, one of the ENIAC designers – and stepped back from active programming. Betty Holberton was one of the few who remained in the field and continued to make important contributions, including work on the first sorting routines and later on programming standards.

Now, imagine your alternative 1946. Imagine that the Army had said: “These six women are engineers. They invented the practice of programming this machine. We want them to stay, to train others, and to lead the next generation of work.” What might have followed?

I think the first consequence would have been a different sense of who belonged in computing. In the world that actually unfolded, programming quickly became associated with men – partly because the men who stayed in the field wrote the histories, hired the next generation, and shaped the culture. The women who had been there at the beginning were forgotten or footnoted. Young women entering the field in the 1950s and 1960s had few visible role models and often faced the assumption that computing was not really women’s work.

If we had been recognised as engineers and kept on in leadership roles, that assumption might never have taken hold so firmly. Young women would have seen us – in photographs, in lecture halls, in the technical literature – and understood that programming was something women had not merely contributed to, but helped invent. The culture of computing might have developed with a different sense of who it was for.

The second consequence, I suspect, would have been in how programming was taught. We learned to programme ENIAC by reading schematics and reasoning from first principles. There was no curriculum, no textbook, no established pedagogy. If we had been asked to train others, we would have had to invent one. And I think we would have invented something quite practical and grounded – something that emphasised understanding the machine, tracing the logic, checking your work against reality. We were not theorists; we were problem-solvers. Our teaching would have reflected that.

Whether this would have been better or worse than what actually developed, I cannot say. The theoretical foundations laid by people like von Neumann and Turing were enormously important, and I would not wish them away. But there might have been a stronger practical tradition running alongside – a tradition that valued hands-on understanding and empirical verification, the kind of knowledge that comes from climbing inside a machine and learning its moods.

The third consequence – and here I am speculating more freely – might have been in the problems ENIAC and its successors were pointed at. The early applications were largely military and scientific: ballistics, nuclear physics, cryptography, weather prediction. These were important problems, certainly. But if women had been more central to the culture, if our perspectives had carried more weight, might we have pushed for other applications sooner? Problems in public health, perhaps, or education, or social planning? I do not know. We were products of our time, and our imaginations were shaped by the same forces that shaped everyone else’s. But different people ask different questions, and a more diverse group of leaders might have seen possibilities that were invisible to the men who actually held power.

Finally, I think the simple fact of recognition would have mattered. When your work is acknowledged, you have standing. You can advocate for resources, for attention, for the problems you think are important. When your work is invisible, you have no such standing. You are grateful for whatever crumbs fall your way. The six of us, if we had been recognised as the engineers we were, might have had the authority to shape the field in ways that were simply not available to us as “operators” who were expected to step aside when the war ended.

But I must be careful not to paint too rosy a picture. Recognition alone does not change the underlying structures of a society. Even if the Army had called us engineers, we would still have faced pressure to marry, to have children, to subordinate our careers to our husbands’. The post-war world was determined to send women back to the home, and no job title would have fully insulated us from that pressure. Some of us might have stayed longer; some might have returned after raising children, as women do more easily today. But the transformation you are imagining would have required more than a press release. It would have required a different world.

Still, it is a lovely thought. And I thank you for thinking it.

Maya Wilson (35, biomedical engineer and research lead, Melbourne, Australia):
When you stepped away from ENIAC and into family life, did you experience any quiet sense of loss about not building on that early work, or did your identity shift fairly naturally? For women and carers who take long career breaks from technical fields now, what helped you keep your confidence and curiosity alive, even when the world stopped seeing you as “the mathematician”?

Maya, this is perhaps the most personal question of all, and I am grateful for it. The answer is complicated, and I suspect you already know that.

Did I experience a sense of loss? Yes. There was a particular moment – I can still see it quite clearly – when I was at home with our first son, perhaps three or four months old, and I received a letter from Kathleen. She was still working on ENIAC, collaborating with von Neumann on some problem I do not now recall. The letter was full of technical details and excitement about a new approach they were trying. I read it sitting in my kitchen, with the baby asleep in a basket beside me, and I felt something I can only describe as a sharp, clean ache. Not regret, exactly. But a recognition that I had stepped away from something that mattered to me, at a moment when it was beginning to yield its deepest secrets.

That ache faded, though it never entirely disappeared. Within a few years, I had three sons, and the rhythm of motherhood – the feeding and the worrying and the simple, exhausting dailiness of it – occupied most of my attention. I was not unhappy. But I was aware of a kind of bifurcation in my life. There was the mathematician and engineer Frances had been; there was the mother and wife Frances was becoming. These two people did not seem entirely reconcilable, at least not in the world as it was then.

As to whether my identity shifted naturally or whether I resisted it – I would say both, and neither. It shifted because the expectations were overwhelming, and resisting them would have required a kind of courage or stubbornness that I did not quite possess. But it did not feel natural, either. It felt like stepping into a role, and then gradually learning to live in it.

What kept me sane, I think, was maintaining some connection to the intellectual life I had left behind. I read mathematics journals when I could. I followed the early developments in computing through whatever popular accounts filtered down. I taught my sons to think mathematically, to ask questions, to work through problems carefully. I did not maintain this perfectly – there were years when I was so absorbed in managing three boys and a household that I let those connections atrophy. But I tried to keep the thread alive.

I also had the good fortune to marry someone who respected my intelligence. Homer was an engineer himself, and while he was not always sympathetic to my frustration at having left the field, he did not dismiss it or belittle it. We could talk about technical problems over dinner. He valued the fact that I understood what he did, even if he could not fully reciprocate by understanding the loss I felt.

I want to be honest with you, though: I did not always keep my confidence alive. There were long stretches when I wondered whether I could still do the work, whether the knowledge I had acquired would atrophy if I tried to return to it. Computing was moving so quickly. Languages were developing – FORTRAN, COBOL, and others – that made programming accessible in ways that ENIAC hand-configuration never was. The field was professionalising, establishing standards and curricula. Would I be able to catch up? Would anyone want to hire a woman who had been out of the field for ten years, raising children?

These doubts were self-protective, I think. If I convinced myself that return was impossible, then I did not have to grieve the fact that I was not returning. But they were also rooted in real observations about how the world worked. The window for women in computing was closing, even as I stepped out of it.

What helped most was not, strangely, maintaining my technical knowledge. What helped was maintaining my sense that I was capable of serious intellectual work. I did this partly through reading and thinking, but also through conversation. I sought out other women who had left professional work – teachers, nurses, a chemist who had stepped back when she married – and we would talk about what we had given up and what we had gained. There was no community of support for this; we did not have the language or the frameworks that exist today. But the conversations themselves were sustaining. They reminded me that I was not alone in this experience, and that the loss was real and acknowledged, even if the wider world did not want to hear about it.

I also want to say something to women today who face career breaks: the break itself does not erase what you have learned. The mathematics I mastered at Chestnut Hill and the Moore School and ENIAC did not evaporate when I stopped doing calculations for money. The habits of mind – the precision, the logical thinking, the willingness to trace through a problem until it makes sense – those stayed with me. If I had been able to return to the field, those foundations would still have been there. The languages and tools would have been new, but the underlying thinking would have been familiar.

The cruelty of the situation in my time was not that returning was impossible – many women did return, eventually, though often in diminished circumstances. The cruelty was that the culture made return so difficult, and so often presented it as a failure on the woman’s part. “She could not manage both,” people would say, or “She decided her children were her priority.” These judgments were both true and false – true in that the choice was made, false in that it was presented as freely made when in fact it was shaped by enormous structural pressure.

What I would say to women navigating this now is: name the loss. Do not let anyone convince you that stepping back from work you love is simply a personal choice that requires no grief. It is a loss, and it deserves to be mourned. At the same time, do not let the loss completely eclipse what you gain. Motherhood, or any form of caregiving, can be intellectually and emotionally rich, even if it does not feel that way in the moment when you are exhausted and overwhelmed.

And if you can – if circumstances permit – try to keep one foot in your field. It need not be a full-time commitment. Read the journals. Attend a conference if you can. Volunteer. Mentor younger people. Teach. Find some way to stay connected, so that the return, if it comes, is not returning from a void, but a continuation of a conversation you never entirely left.

Finally, I would say this: the fact that women today can even ask these questions, can consider staying in the field while raising children, can imagine a life that is not a binary choice between profession and family – that is progress, genuine progress. It was not available to my generation. But it has not eliminated the underlying tension. That tension is real, and it matters. Do not let anyone make you feel ashamed of it.


Reflection

Frances Bilas Spence died on 18th July 2012, at the age of ninety, having lived long enough to see her story begin – at last – to be recovered and told. She did not live to see the full weight of that recovery, or to know how thoroughly her work would come to be recognised as foundational to modern computing. But in her final years, particularly after the 1997 Women in Technology International Hall of Fame induction and the release of documentaries like Top Secret Rosies and The Computers, she received a kind of belated acknowledgment that had eluded her for half a century.

What emerges from this conversation is a Frances Spence quite different from the historical record in subtle but important ways. The official narrative casts her as one of six interchangeable “ENIAC programmers,” often presented as a collective rather than as individuals with distinct perspectives and contributions. In her own voice, she is more particular: thoughtful about the ethics of her work, precise about the technical methods the six women invented, candid about the costs of stepping away, and fiercely protective of the intellectual rigour that underpinned what many dismissed as “clerical” labour.

She also offers perspectives that are notably absent from most historical accounts. Her reflections on numerical stability and error propagation – the grinding, practical work of making a machine produce reliable results – are not widely documented in the popular histories. Her observations about how the distance between calculation and consequence enabled moral evasion during wartime speak to a complexity that is often flattened in retrospective accounts. And her speculation about what a different 1946 might have meant for women in computing carries a wistfulness that suggests she spent considerable time thinking about the road not taken.

Where gaps and uncertainties remain, they are instructive. We do not have extensive documentation of day-to-day interactions between the six programmers; much of what we know comes from later interviews and oral histories, filtered through decades of memory and reinterpretation. The degree to which Frances Spence was involved in specific technical innovations – whether she was instrumental in inventing breakpoints, or subroutines, or particular debugging techniques – is sometimes contested among historians. Jean Bartik’s later accounts emphasise her own role; Betty Holberton’s memoirs highlight hers. Frances Spence herself was characteristically modest, crediting the team rather than claiming individual credit. The historical record reflects these modulations, and certainty about who did what is sometimes elusive.

What is not elusive is the scale of the contribution. The six women transformed an elaborate machine into a functioning computer through sheer intellectual labour, reasoning from first principles in the absence of any precedent. They invented the mental models, the notational systems, the debugging practices, and the organisational methods that would become the foundation of software engineering. That this occurred in 1945, decades before “software engineering” became a recognised discipline, makes it all the more remarkable – and all the more tragic that it was so thoroughly forgotten.

The recovery of Frances Spence’s story is itself worth pausing over. For decades, she was a name in a photograph, a face in the margins of computing history. It took the determined work of Kathy Kleiman – who, as a computer science undergraduate in the 1980s, became fascinated by a photograph of ENIAC with women present but unnamed – to begin the process of restoration. Kleiman’s project evolved into the ENIAC Programmers Project, a multi-year effort to locate, interview, and document the contributions of all six women. Without that work, Frances Spence might have remained a footnote, if that.

Today, her legacy lives in multiple registers. It lives in the technical practices she helped invent – the breakpoint, the subroutine, the flowchart – which remain foundational to debugging and programme design. It lives in the recognition, now widespread, that women were present at the birth of computing and were instrumental to its development. It lives in the documentaries and books that have brought her story to new audiences. And it lives in the growing conversation about how to make STEM fields genuinely inclusive, not just in aspiration but in practice.

For young women entering mathematics, physics, and computer science today, Frances Spence’s story offers both inspiration and warning. The inspiration is clear: a woman from a working-class Philadelphia background, educated at a Catholic women’s college, rose to solve problems that the best engineers in the country could not solve alone. She was brilliant, disciplined, precise, and unafraid of intellectual difficulty. She belonged in that room full of vacuum tubes and schematic diagrams, and she knew it, even when others did not.

The warning is equally clear: brilliance alone is not enough. Visibility matters. Institutional support matters. The culture of a field matters enormously. Frances Spence stepped away from computing at the moment of its greatest expansion, not because she lacked ability or interest, but because the world offered her no viable alternative. Today’s young women have more choices, and for that we should be grateful. But the underlying tensions – between ambition and caregiving, between individual achievement and family responsibility, between doing work you love and earning recognition for it – remain stubbornly present. The solution is not for women to work harder or to be more resilient. It is for institutions and societies to change.

This is where mentorship and visibility become crucial. When Frances Spence was working, there were almost no visible role models of women in computing beyond her immediate peer group. Today, there are more – though still far too few. Every woman who stays in STEM, who speaks publicly about her work, who mentors the next generation, who refuses to be invisible – is doing something that Frances Spence could not do in her time. That work of visibility is not secondary to the technical work; it is part of the technical work. It is how fields transform.

What also emerges from Frances Spence’s reflections is the importance of intellectual humility combined with intellectual confidence. She is clear about what she knew and what she did not know. She is precise about technical details when she can be, and honest about the limits of her knowledge when she cannot. She does not claim credit she does not deserve, but she also does not diminish the significance of her contributions. This combination – clarity without arrogance, confidence without certainty – is rare, and it is something young scientists might well emulate.

Finally, there is something quietly radical in Frances Spence’s insistence on asking hard questions. She did not accept the official narratives about her own work without interrogation. She thought carefully about the ethics of her contributions to wartime weaponry. She wondered what might have been different if the world had valued women’s work differently. She did not answer these questions definitively – some of them may not have definitive answers – but she asked them with genuine seriousness. In a field that often prides itself on technical precision while remaining curiously incurious about the broader implications of its work, that kind of questioning is desperately needed.

Frances Spence’s life reminds us that the architecture of the digital age was built by hands and minds that history tried to erase. It reminds us that the people who do transformative work are not always the ones who receive credit for it. And it reminds us that rectifying that erasure is not merely an act of historical justice, though it is that. It is an act of intellectual honesty, and a necessary precondition for building a future where the full range of human talent can be brought to bear on the problems we face.

She is gone now, nearly thirteen years. But the questions she raised, the methods she pioneered, and the quiet example of her perseverance remain. And in the work of every woman who refused to be invisible, who insists on being heard, who contributes her intelligence and ingenuity to the building of the future – in all of that, Frances Spence lives on.


Editorial Note

This interview with Frances Spence, though presented in the form of a direct conversation, is a dramatised reconstruction based on extensive historical research. All statements attributed to Frances Spence have been crafted from verified biographical information, documented technical achievements, and contemporaneous accounts of her work and life. Where direct quotations from historical sources exist – for example, from oral histories, written memoirs, or interviews given late in her life – they have been incorporated faithfully. Where no direct record exists, dialogue has been imagined to reflect what is known about her character, background, and the circumstances she faced.

The reconstruction rests upon a foundation of archival material: Frances Spence’s documented education at Chestnut Hill College (1942), her employment as a human computer at the Moore School of Electrical Engineering (1942–1947), her selection as one of six original ENIAC programmers (1945), her marriage to Homer Spence and subsequent resignation (1947), her induction into the Women in Technology International Hall of Fame (1997), and her death in 2012 at age ninety. Technical details about ENIAC programming, the Differential Analyser, ballistics calculations, and the six women’s innovations (subroutines, breakpoints, flow notation) are drawn from peer-reviewed historical scholarship, oral histories of the ENIAC programmers, and engineering documentation from the period.

Dialogue has been constructed to reflect the speech patterns, educational background, and intellectual milieu of a Philadelphia-born mathematician educated in the 1930s and 1940s. Slang, idioms, and references are era-appropriate; modern terminology has been avoided unless clearly marked as the interviewer’s framing. No fictional names or events have been introduced. Where the historical record is uncertain – for example, regarding precise attribution of specific programming innovations among the six women, or the exact content of private conversations – this uncertainty has been preserved in the responses rather than resolved through invention.

The purpose of this reconstruction is not to substitute for history but to render it more immediate and accessible. By imagining Frances Spence’s voice, we invite readers to encounter her not as a distant figure in a photograph but as a thoughtful, precise, and morally serious person engaging with problems that remain urgent today: the ethics of technological work, the erasure of women’s contributions, the tension between ambition and caregiving, and the quiet labour of building systems that outlast their builders. Every effort has been made to honour her character and her achievements while remaining transparent about the limits of what can be known.

The reader is thus asked to treat this piece as informed historical fiction – a means of entering the past, not a replacement for rigorous scholarship. For those wishing to explore the primary sources, references are provided throughout, and the ENIAC Programmers Project (eniacprogrammers.org) remains an invaluable resource for recovering the full story of these six remarkable women.


Who have we missed?

This series is all about recovering the voices history left behind – and I’d love your help finding the next one. If there’s a woman in STEM you think deserves to be interviewed in this way – whether a forgotten inventor, unsung technician, or overlooked researcher – please share her story.

Email me at voxmeditantis@gmail.com or leave a comment below with your suggestion – even just a name is a great start. Let’s keep uncovering the women who shaped science and innovation, one conversation at a time.


Bob Lynn | © 2025 Vox Meditantis. All rights reserved.

One response to “Frances Spence: The Quiet Programmer Who Taught ENIAC to Think and Then Vanished from History”

  1. Anna Waldherr

    I hope this is an encouragement to young women in STEM.
