This interview is a dramatised reconstruction: Jean Bartik died in 2011, so the 2025 conversation is imagined, with its dialogue and reflections written to be plausible rather than verbatim. The piece is grounded in documented sources on her life and work, but any specific wording, scenes, and speculative commentary should be read as interpretive rather than part of the historical record.
Jean Bartik (1924–2011) was an American mathematician and computer programmer whose intellectual courage transformed an 18,000-vacuum-tube engineering marvel into a working machine. She and five colleagues invented the discipline of software development from scratch, teaching themselves to program ENIAC through schematic diagrams when no manuals, instruction sets, or teachers existed. Her legacy extends far beyond the ballistic equations she helped compute in 1946 – she engineered the conceptual frameworks, debugging techniques, and logic circuit designs that became the foundation of modern computing, yet her name was systematically erased from the historical record for forty years until historians recovered her story in the 1990s.
Thank you for being here today, Jean. I want to begin by saying something that may sound strange to you: your work has become rather famous again. When you passed away in 2011, most of the world didn’t know your name. But now, there’s a programming framework called “Bartik” – it’s used by millions of web developers who have no idea they’re using something named after you. How does that land?
That’s peculiar, isn’t it? I spent my whole career trying to make invisible things visible – trying to make a machine follow your thinking, step by step. I suppose there’s a kind of poetry in that. Though I confess, I’d rather have the recognition when I was doing the work. The frustration wasn’t obscurity, you understand. It was being present and invisible at the same time. Being told to keep quiet about what you’d accomplished.
Let’s talk about that presence-and-invisibility, because it’s central to your story. But first, I want to place you in a moment. February 15, 1946. ENIAC’s public demonstration. You and your five colleagues had just programmed a ballistic trajectory that shocked everyone in the room. What was that day like?
One of the most exciting days of my life. We’d been working on that trajectory calculation for weeks – and we knew we’d gotten it right. When ENIAC computed in twenty seconds what took our human computers forty hours, you could feel the room understand, all at once, that something had changed. The capability was suddenly real.
But here’s what I remember most vividly: the photographers. They asked us to stand near the machine, and the pictures that were taken – I saw one years later where someone had written underneath that we were “models” demonstrating the equipment. Models! We’d spent months debugging vacuum tubes, reconfiguring panels, inventing techniques that no one had ever used before. And the captions called us models.
That’s one of the cruellest ironies of that day. John Mauchly and John Eckert received the congratulations. They were celebrated as the brilliant engineers. You and Betty Snyder, Marlyn Meltzer, Fran Bilas, Ruth Lichterman, and Kathleen Antonelli – you programmed the machine they’d built, and yet your work was treated as clerical.
Not clerical – worse than clerical. Secretarial. The assumption was that we were following instructions, when in fact there were no instructions. We had to invent them. That’s the part that infuriates me still, even now looking back. The engineers had the privilege of being understood. We had the burden of being misunderstood while we were being erased.
Let’s go back to where this began. You grew up in Missouri, didn’t you? In quite rural circumstances?
Gentry County, northwest Missouri. Farming country. My father was a farmer and a businessman, and my mother was an educated woman – a schoolteacher before she married. She valued learning fiercely. I was an only child, and I had access to books, to conversation, to the assumption that education would be available to me. That was a privilege, and I knew it even then.
I was good with mathematics. Better than good – I loved it. The logic of it, the way everything connected. In high school, I started thinking I might teach. By the time I finished at Northwest Missouri State Teachers College in 1945, I had the only mathematics degree in my graduating class. The only one. And yet when I graduated, there was no assumption that I’d pursue research or advanced work. The path was laid out: I could teach high school mathematics if I was fortunate.
But something redirected you.
War. The University of Pennsylvania was hiring women to calculate ballistic trajectories by hand – firing tables for artillery. It was considered important work for the war effort, and they needed mathematicians. I wasn’t a “computer” in the sense we’d later understand it. I was a human computer, doing calculations on a mechanical adding machine, hour after hour. The irony is that I was doing exactly what ENIAC would later do electronically.
I was hired in early 1945, and I worked on those tables through the spring. Then in the summer, they began interviewing people to work on this new machine called ENIAC. Most of the women who’d been working on the trajectories didn’t want anything to do with it. They thought it was too risky, too experimental. But I was curious. I wanted to understand how it worked.
What was your first impression when you actually saw ENIAC?
Overwhelming. Absolutely overwhelming. Imagine walking into a room the size of a small house – because ENIAC occupied about thirty by fifty feet – and seeing nothing but vacuum tubes, wiring, panels, lights blinking. Eighteen thousand vacuum tubes. Miles of wire. The noise was extraordinary. It hummed and clicked constantly. The heat was oppressive; we had to keep the air conditioning running, which was itself an extraordinary expense.
And there were no manuals. No instruction codes. No one had programmed an electronic, fully general-purpose computer before. The engineers – Mauchly and Eckert primarily – showed us the schematic diagrams and said, essentially: “This is what the machine can do. Figure out how to make it do what you need it to do.”
We were all terrified and exhilarated in equal measure.
This is where the story becomes genuinely remarkable. You had to invent the discipline of programming while you were doing it. Can you walk me through what you did? How do you program a machine when programming doesn’t exist?
First, you understand the hardware completely. You study those schematics until you can visualise how electricity moves through the circuits. ENIAC had various functional units – accumulators for storing numbers, a master programmer that controlled timing, a function table that could store initial conditions. Each unit had its own controls: cables, switches, settings.
To program ENIAC, we had to physically reconfigure the machine for each calculation. We’d pull cables out of panels, reinsert them in different positions, flip switches. It was like rewiring a telephone switchboard, except the configuration had to be logically perfect or the entire calculation would fail.
The trajectory calculation – let me walk you through that, because it’s the best example I have of how we thought.
A ballistic trajectory follows a differential equation. The projectile’s position and velocity change continuously over time, but ENIAC works in discrete steps. So we had to break the trajectory into small time intervals – perhaps ten-millisecond increments – and calculate the position and velocity at each step.
Here’s where we invented something: the subroutine. We realised that we’d need to perform the same sequence of operations repeatedly. Calculating new position from old position, new velocity from old velocity. So we designed a way to store the sequence of operations – the subroutine – and then call it over and over, feeding in new input values each time. This was revolutionary because it meant we didn’t have to reconfigure the entire machine each iteration. We could loop.
We also invented nesting, which is when one subroutine calls another. And we invented breakpoints – ways to pause the calculation and check intermediate results without stopping the whole machine. These are techniques that programmers take for granted now, but we had to invent them by hand, testing each one, watching it work or fail.
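In modern terms, the pattern Bartik describes – a single reusable subroutine called in a loop over discrete time steps, with breakpoint-style checks of intermediate results – might be sketched like this. This is a minimal Python illustration; the drag constant, timestep, and Euler method are assumptions for the example, not the historical firing-table procedure.

```python
import math

# One "subroutine": advance position and velocity by a single time step.
# (Simple Euler integration; the real firing-table method was more refined.)
def step(x, y, vx, vy, dt, k=0.0005, g=9.81):
    v = math.hypot(vx, vy)
    ax = -k * v * vx          # air drag opposes horizontal motion
    ay = -g - k * v * vy      # gravity plus vertical drag
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

def trajectory(v0, angle_deg, dt=0.01, check_every=1000):
    x, y = 0.0, 0.0
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    n = 0
    while y >= 0.0:
        # Call the same subroutine over and over: the loop Bartik describes.
        x, y, vx, vy = step(x, y, vx, vy, dt)
        n += 1
        if n % check_every == 0:
            # "Breakpoint": inspect intermediate results without stopping the run.
            print(f"t={n * dt:7.2f}s  x={x:9.1f}  y={y:8.1f}")
    return x  # horizontal range when the shell returns to ground level

rng = trajectory(450.0, 45.0)
```

The key property is the one she names: the step logic is configured once and invoked repeatedly, rather than being rebuilt for every time interval.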
We also had to develop a notation system to describe what we were doing. We created flow diagrams – hand-drawn diagrams showing how data would move through the machine, where it would be stored, how it would be transformed. Without those diagrams, the configuration would be impossible to replicate or debug.
Let me make sure I’m following this correctly. You’re saying that subroutines, flow diagrams, and debugging breakpoints – all foundational concepts in every programming language today – didn’t exist before you invented them?
That’s exactly right. They were born from necessity and experimentation. We didn’t invent them in the abstract. We invented them because ENIAC wouldn’t work without them. The machine forced us to be rigorous and systematic – well, methodical, I should say. One small error in the configuration, and the trajectory would be wrong. We learned very quickly to test, to verify, to catch errors before they compounded.
I should add that vacuum tubes failed constantly. They had a limited lifespan. ENIAC lost tubes regularly, and when a tube failed, you had to identify which one, replace it, and verify that the calculation still worked. That taught us something invaluable: resilience in the face of hardware failure. We built redundancy into our logic. We cross-checked results. We never trusted the machine blindly.
How long did it take you to get the trajectory program working?
Weeks. Not because the logic was impossible – it was elegant, really, once you understood it – but because we had to build the configurations and test them by hand. There was no dry run. You configured the machine, started it, and watched it either complete or fail. If it failed, you had to figure out where the error was, reconfigure, and start again. We kept detailed logs of every configuration, every test, every failure. Those logs became the first programming documentation.
I want to ask something directly, and I hope you’ll be frank with me. The 1946 demonstration was a triumph, but you and your colleagues were excluded from the public narrative almost immediately. You weren’t even introduced to the press. How did that happen? Was it deliberate?
I think it was deliberate in the sense that no one had to think about it. The institutional assumptions were so strong that they didn’t require active conspiracy. The engineers – the men who built the hardware – were understood as the innovators. We were understood as operators, or secretaries, or assistants. When you have that kind of framing, erasure happens as a matter of course.
But there were also practical barriers. We didn’t have security clearance for most of the work. We weren’t allowed to see certain parts of the machine or certain technical documents. We had to work around those restrictions. Imagine trying to program a device when you’re not permitted to understand how significant parts of it function. We had to infer, to test, to reverse-engineer our own understanding.
The University of Pennsylvania eventually gave us better access, but by then the damage was done. The public story was set. We were the women who operated the machine. The men were the ones who created it.
Did you protest this at the time?
To whom? We were young women in 1946. There was no structure for protest. We reported to the engineers. If we’d complained loudly, we’d have been removed. I saw that happen to others – women who spoke up too forcefully were quietly let go. The power dynamic was absolute.
What we did, instead, was work harder. We proved ourselves indispensable. By the time ENIAC needed to be converted into a stored-program computer in 1948, they couldn’t move forward without us. That gave us a kind of leverage. Not formal recognition, but the knowledge that we were essential.
Tell me about that conversion. That’s a significant technical milestone that’s often attributed to John von Neumann.
Von Neumann was brilliant, and he made crucial theoretical contributions to the concept of stored programs. But implementation is different from theory. I led the team that actually converted ENIAC into a stored-program machine. That meant rewiring significant portions of the hardware, changing how instructions were handled, designing the first instruction set. We worked with von Neumann’s concepts, certainly, but we had to figure out how to make them real in the physical machine.
What made stored-program architecture revolutionary was that instructions and data could be stored in the same memory. Before that, we had to manually reconfigure the hardware for each new program. Now, you could load a program into memory as if it were data, and the machine would execute it. It’s hard to overstate how transformative that was. It meant you could write a program, save it, run it again without any physical changes to the machine.
I designed the addressing schemes, the way instructions would be retrieved and executed in sequence. I worked on timing, making sure that the machine could fetch an instruction, decode it, execute it, and move to the next instruction in lockstep. These are primitive by modern standards, but at the time, they were entirely novel.
And yet, when that conversion was completed in March 1948, the credit went to the engineers who’d designed the hardware in the first place. The innovation of implementation was invisible.
After ENIAC, you worked on BINAC and then UNIVAC. What was different about those projects?
BINAC was thrilling because it introduced magnetic tape – an entirely different way of thinking about storage. With ENIAC, everything had to fit in the machine’s memory at once. BINAC let us store data on magnetic tape, which meant we could work with larger datasets. The challenge was synchronisation: the tape moved at a fixed speed, and we had to coordinate our program logic to match that mechanical rhythm.
UNIVAC was the commercial version – the first electronic computer offered for sale. That’s where things got genuinely interesting from a programming perspective. With UNIVAC, we weren’t just writing for one specific machine that we understood intimately. We had to think about generalisability. Different customers would want to do different things with UNIVAC. How could we make the programming flexible enough to handle diverse problems?
That’s where Betty Holberton and I developed SORT/MERGE, which was perhaps the first generative programming system – a set of operations that could be combined and recombined to handle different sorting and merging tasks. Instead of writing a new program for each problem, you’d select and configure existing modules. It was modular programming, though we didn’t have that term.
That sounds like an early ancestor of today’s software libraries and APIs.
Exactly. We were trying to solve the same problem: how do you let programmers focus on their specific problem without reinventing the entire machine each time? The architecture changes, but the principle endures.
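The modular idea in this exchange can be sketched in modern Python: small sort and merge operations that are configured per task with a key function and recombined, rather than a fresh program written for each problem. The function names, interfaces, and sample records are illustrative assumptions, not the historical SORT/MERGE system.

```python
# Modular sketch of the SORT/MERGE idea: reusable operations configured
# and combined per task. Names and interfaces are illustrative only.
def merge(left, right, key):
    """Merge two runs already sorted by `key` into one sorted run."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def sort_runs(records, key):
    """Split records into single-element runs, then merge runs pairwise."""
    runs = [[r] for r in records]
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1], key) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []

# "Configure" the same modules for different tasks instead of rewriting them:
payroll = [("Jones", 125), ("Adams", 310), ("Clark", 240)]
by_name = sort_runs(payroll, key=lambda r: r[0])
by_amount = sort_runs(payroll, key=lambda r: r[1])
```

The design choice is the one named in the interview: the sorting and merging machinery is written once, and each new problem only selects and parameterises it.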
In the early 1950s, something shifted for you professionally. Can you talk about that?
I married William Bartik in 1950. He was an engineer at Remington Rand, where UNIVAC was being developed. And then came the anti-nepotism policies. Remington Rand had rules against employing spouses in the same division. The assumption was that if a couple worked together, favouritism would corrupt decision-making or relationships would destabilise the workplace.
They asked me to resign. I was at the height of my technical expertise. I’d just done groundbreaking work on stored-program architecture and generative programming systems. And I left because the institution’s rule left me no choice.
What I want to be clear about: this wasn’t just about me. It was part of a much larger pattern. Anti-nepotism policies, ostensibly gender-neutral, were wielded almost exclusively against women. When a woman’s husband was hired, she was asked to leave. Men in the same situation? Their wives managed or taught or did other work. It wasn’t framed as a choice. It was framed as institutional necessity.
What did you do?
I left the field. I raised three children over the next fifteen years. It was the 1950s and 1960s in America, and the cultural expectation was overwhelming: if you were a married woman with children, your proper place was in the home. I didn’t have much choice, and I’m not sure I had the courage to fight it at the time.
But I also want to be honest about this: I was tired. I was tired of fighting for recognition. I was tired of being invisible in my own accomplishments. And I was curious about other things. I read voraciously. I thought about questions outside of computing. Having time with my children, with my thoughts – there was real value in that, even if it cost me my career.
Did you resent it?
Yes. And no. Mostly yes. I resented the institution. I resented the assumption that I should be the one to leave, not my husband. I resented the waste of my expertise. But I also built a life that had meaning. I was present for my children in ways I wouldn’t have been otherwise. That matters too. I’m not going to perform regret about every moment outside the lab. But I resent fiercely the fact that I wasn’t given the choice.
In 1967, you re-entered the field. Why then?
My youngest child had started school full-time. I had time again. And I’d been reading about the new programming languages – FORTRAN, COBOL, ALGOL. I was fascinated by how much had changed and how much remained the same. The fundamental logic we’d developed on ENIAC was still there, but now it was abstracted into languages that made programming accessible to people who didn’t have to understand vacuum tubes or reconfigure hardware.
I earned a master’s degree in English, which sounds odd for someone in computing, but it wasn’t random. I’d become interested in the communication of technical ideas. How do you explain complex concepts clearly? How do you document? How do you teach? Those are linguistic problems, and I wanted to understand them more deeply.
I came back to computing as a writer and manager, and later as an engineer again. But it was different. I was older, and I’d stepped outside the field long enough to see it from some distance. I was also less interested in fighting for credit. I just wanted to work on interesting problems.
Until the 1990s, when your work was recovered by historians. How did that rediscovery happen?
Kathy Kleiman found me, and the other ENIAC programmers. She was doing research, and she realised that the six women who’d programmed ENIAC had been almost completely erased from the historical record. She started interviewing us, documenting what we’d actually done. It was extraordinary to finally be asked, in detail, to explain our work. To have someone listen without dismissing it as secondary or derivative.
That project – the ENIAC Programmers Project – it gave me permission to talk about what we’d accomplished. For decades, I’d carried that knowledge privately, almost as if it wasn’t quite real. Once I started speaking about it, I realised how much had been lost. Not just my own story, but the entire genesis of programming as a discipline. Future programmers had no idea where the techniques they used came from. They had no model of women as innovators in computing.
Did you ever imagine, when you were configuring those vacuum tubes in 1945, that your story would matter seventy-five years later?
Not for a moment. I thought about the ballistic tables. I thought about making the machine work. I thought about the next configuration, the next problem. I didn’t imagine that anyone would care, eventually, about how we thought. But that’s what matters, I think. It’s not really about me, or Jean, or any one person. It’s about the fact that computing was invented by people, including women, including people whose names were erased. And if we lose that knowledge, we lose something important about how innovation actually happens.
Your career spanned the moment of computing’s invention and its first commercial deployment. Now, decades later, computing is embedded in nearly every aspect of human life. From your vantage point, what concerns you most about how the field has evolved?
I’m concerned about invisibility recurring in a different form. On ENIAC, the programmers were invisible to the public and to history. Now, the people who build computing systems are often invisible to the people who use them, and the logic embedded in those systems – the decisions, the assumptions – is invisible to both.
When I was programming, every decision had to be conscious and deliberate. We had to think through every step because we had to reconfigure the machine by hand. There was no room for hidden logic or undocumented assumptions. Now, you can write code, and the implications of that code spread through thousands of systems without anyone necessarily understanding how or why.
I read about algorithmic bias, about artificial intelligence systems that discriminate, about automated decision-making that goes wrong in ways that harm people. And I think: this is what happens when the people building the systems aren’t aware of, or don’t acknowledge, the assumptions embedded in those systems. Or worse, when they’re pressured to build quickly without thinking through consequences.
Women are still underrepresented in computing, especially in the roles where architectural decisions are made. That concerns me because the field benefits from diverse perspectives. When everyone thinking about a problem comes from similar backgrounds, certain blind spots become invisible.
Do you see parallels between your own era and now?
The language is different, but some of the patterns are familiar. We were told we were “natural operators” – our hands and eyes were suited to the work. Now, women are encouraged into computing but often funnelled into certain roles – user interface design, community management, support – roles that are still framed as less technically demanding than systems architecture or core engineering. It’s the same assumption in a new guise: that there are certain kinds of technical thinking women are suited for.
Don’t accept that framing. The work I did on ENIAC wasn’t secretarial – it was logic design. It was algorithm development. It was problem-solving at the deepest level. And I knew it, even when no one else did. That knowledge kept me sane when the world tried to tell me I was less than I knew myself to be.
I want to ask you about something we haven’t discussed: failure. What’s something that didn’t work? Something you got wrong?
The stored-program conversion of ENIAC. We solved it, ultimately, but the initial approach was inefficient. We were thinking in terms of how to minimise reconfiguration, but we weren’t thinking carefully enough about execution speed. The way we first implemented instruction fetching and decoding meant that the machine spent almost as much time housekeeping as it did computing.
We optimised that later – Kathleen Antonelli particularly pushed us on that – but I wish I’d thought about performance metrics from the beginning. I was so focused on making it possible that I didn’t optimise for making it fast. In retrospect, that’s a lesson: invention and optimisation are related but distinct problems, and you have to attend to both.
There’s also the question of whether we should have fought harder for documentation and credit at the time. Maybe if we’d insisted on having our work properly documented, written down in manuals with our names attached, the erasure wouldn’t have been so complete. But institutionally, we didn’t have power. And I’m not sure, even now, what we should have done differently given those constraints.
Do you have regrets?
Not about the work itself. The work was real, and it was important, and I’m proud of it. Regrets? I regret that I accepted erasure so readily. I regret that I left the field without more of a fight. I regret that it took forty years for the recovery of my story. But I don’t regret trying. I don’t regret the trajectory calculations or the stored-program architecture or any of the technical problems we solved. That was the best work I ever did, and nothing – not erasure, not anti-nepotism policies, not the invisibility – takes that away.
If you could speak to young people entering STEM fields today, what would you tell them?
Three things, I think.
First: don’t let anyone convince you that you can’t do something because they think you can’t. That’s the soundbite version, but it’s true. In 1945, I was a small-town mathematics teacher with no experience in electronics. No one would have bet on me programming the world’s first electronic computer. But I believed I could figure it out, and I did. That belief matters.
Second: document your work obsessively. Write down what you did, how you did it, why it worked or didn’t work. When I was programming ENIAC, we created flow diagrams and logs because we had to – we needed to replicate configurations. But that documentation also became the only record of what we’d accomplished. If we’d been more deliberate about documenting our innovations, the erasure might not have been so complete. Future scientists – write down your breakthroughs. Make them impossible to lose.
Third: the work matters more than the recognition, but that doesn’t mean you should accept being erased. Push for acknowledgement when it’s warranted. Write your own history if the institution won’t. Don’t be aggressive or difficult unnecessarily, but don’t be silent about what you’ve accomplished. The silence becomes complicity in your own invisibility.
And to the women specifically: you belong in this field. You always have. The myth that women are recent arrivals in science and engineering is a lie born from erasure. We were there at the beginning. We invented things. We solved problems others couldn’t solve. And we’ll continue to do that, as long as institutions stop assuming our work is less important than it is.
And what do you think science and engineering lose when the contributions of women and minorities are erased?
Honesty. Clarity. The ability to learn from the actual history of how things were invented. When you erase the people who did the work, you also erase the method by which the work was done. Young scientists learn not just from finished achievements but from understanding the messy, experimental process of achieving them. When women’s contributions are hidden, young women have no model for how that process actually unfolds. They don’t see themselves as potential problem-solvers. They don’t know what’s possible.
And science loses intellectual diversity. I think differently from John Mauchly or John Eckert, not because I’m a woman, but because of my particular background, my particular experiences, my way of approaching problems. Having both perspectives – male and female, but also different disciplinary backgrounds, different upbringings – that produces better science. It produces more robust solutions to problems because the problems have been examined from more angles.
When you exclude people, you’re not just being unfair to them. You’re impoverishing the field.
We’re running out of time, and I’m aware we’ve only scratched the surface of your work. Let me ask a final question. If you could be remembered for one thing, what would it be?
Not for being the first woman programmer, though that’s part of it. Not for the specific programmes we wrote on ENIAC, though I’m proud of that work. I’d want to be remembered for helping to invent a way of thinking – a way of translating human logic into machine logic, of breaking down complex problems into small, verifiable steps, of building systems that were resilient and testable.
That way of thinking, that discipline, it became the foundation for everything that came after. Every software engineer today, whether they know it or not, is using techniques we invented by trial and error on a thirty-ton machine in Philadelphia in 1945.
And I’d want to be remembered as someone who refused to believe that my background – the fact that I was a woman, from a small town, without formal training in electronics – meant I couldn’t think at the highest levels of technical abstraction. That refusal, that insistence on my own capability, that mattered more than any individual programme.
Because that’s what the next generation needs to hear. That the barriers they perceive between themselves and the work they want to do – those barriers are often illusions. They’re real in the sense that institutions create them, but they’re not immutable. They can be crossed by anyone stubborn enough and curious enough to insist on crossing them.
Thank you, Jean. It’s been extraordinary.
Thank you for asking the questions. For taking the time to understand not just the machines, but the people who made them work. That makes the forty years of invisibility sting a bit less.
Letters and emails
Following the publication of this interview, we received letters and emails from readers across the globe – mathematicians, historians, engineers, educators, and technologists who found themselves moved by Jean Bartik’s story and eager to extend the conversation. Their questions reach into corners we’d only glimpsed: the practical dilemmas of hardware design, the philosophical implications of learning through schematic diagrams, the roads not taken when institutional policies redirected her career, and the shape computing might have taken had circumstances been different.
We’ve selected five of these contributions, each from a different continent and perspective, representing the kind of curiosity and engagement that Bartik’s legacy continues to inspire. What follows are their voices, joined in inquiry with someone whose work remains as vital today as it was eighty years ago in that Philadelphia laboratory.
Anika Rahman, 34, Software Architect, Dhaka, Bangladesh
When you were designing the instruction set for ENIAC’s stored-program conversion, you had to make choices about how instructions would be encoded and fetched. What were the trade-offs you considered between making instructions easy for humans to reason about versus making them efficient for the hardware to execute? And looking at modern instruction set architectures, do you see echoes of those same tensions playing out?
Well, Anika, that’s a sharp question. You’ve put your finger right on the thing that kept us up half the night! When we were working on that conversion in ’48, you have to remember we were constrained by the hardware in ways that might seem downright primitive to you now. ENIAC wasn’t originally built to store programs as data – it was built to be wired up. So, when we went to make it a stored-program machine, we were essentially tricking it. We used the Function Tables – which were meant to hold constant numbers for calculations – to hold our instructions instead.
Now, the trade-off? It was always a fight between space and speed. Always.
On the one hand, we wanted instructions that were compact. We only had so much memory on those Function Tables. If an instruction took up too many digits, we’d run out of room for the program before we’d even gotten started. So we wanted codes that were short – two digits, maybe. But on the other hand, the machine had to decode those instructions. The hardware had to look at a number like ‘01’ and know that meant “Add the contents of Accumulator 6 to Accumulator 7.”
If we made the instructions too clever, too compressed, the circuitry needed to decode them became a nightmare. It would take too many cycles just to figure out what to do, and the machine would slow down to a crawl. We had to find a middle ground where the instruction was simple enough for the hardware to execute quickly, but powerful enough that we didn’t need a thousand of them to do a simple job.
We ended up with an order code – that’s what we called it then – that was very tied to the physical layout of the machine. An instruction might tell a specific accumulator to “listen” and another to “talk.” It wasn’t abstract like your code today. It was a direct command to the hardware.
Do I see echoes of that now? From what I read, absolutely. I hear people talking about RISC versus CISC – whether to have simple, fast instructions or complex, powerful ones. It sounds like the same old argument to me! You’re still balancing the cost of memory against the cost of execution time. The scale is different – you’ve got gigabytes where we had dozens of words – but the tension? That’s exactly the same. You’re always paying for convenience with performance, one way or another.
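The trade-off Bartik describes can be sketched in modern terms. The following toy decoder is purely illustrative – the opcodes, the fixed operands, and the "function table" layout are invented for this sketch and are not the historical 1948 order code:

```python
# Illustrative sketch only: a toy two-digit "order code" in the spirit of the
# 1948 ENIAC stored-program conversion. The opcodes and operand choices here
# are invented for illustration; they are NOT the historical order code.

def run(program, acc):
    """Fetch two-digit orders from a 'function table' (a list) and execute them."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op == "01":        # add: acc[6] += acc[7] (fixed operands keep decoding cheap)
            acc[6] += acc[7]
        elif op == "02":      # transfer: acc[7] = acc[6]
            acc[7] = acc[6]
        elif op == "99":      # halt
            break
        pc += 1
    return acc

accumulators = {6: 5, 7: 3}
run(["01", "02", "99"], accumulators)   # acc 6 becomes 8, then acc 7 becomes 8
```

Fixed operands, as in "01" above, keep the decoding logic trivial but make programs longer; packing operand fields into each order would shrink the program at the cost of more decoding circuitry – the same space-versus-speed tension Bartik names.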
Santiago Morales, 41, Physics Education Researcher, Mexico City, Mexico
You mentioned learning from schematic diagrams – essentially reverse-engineering your understanding of the machine from its circuitry. I’m curious whether that experience of learning through diagrams rather than sequential documentation changed how you later thought about teaching programming or writing manuals. Did it make you more conscious of the gap between how machines work and how we explain how machines work?
Santiago, you’ve hit on something that I’ve argued about for fifty years. There is a world of difference between knowing what to do and knowing why you’re doing it.
When Betty Snyder and I – and the rest of the crew – were staring at those diagrams in 1945, we didn’t have the luxury of a “user guide.” We had John Eckert’s drawings. Now, a schematic isn’t a set of instructions. It’s a map of reality. It doesn’t tell you “flip switch A.” It tells you, “If electricity flows through this gate, it triggers this flip-flop, which sends a pulse to the accumulator.”
Learning that way forced us to think physically. We couldn’t treat the machine like a black box. We had to trace the pulse in our minds. We had to visualise the timing – the cycling unit beating out 100,000 pulses a second – and understand exactly where every single one of those pulses was going.
So later, when I sat down to write descriptions or help folks understand the logic, I couldn’t stand manuals that just gave you a recipe. “Do this, then do that.” That’s fine for a cook, maybe, but not for a programmer. If you only know the recipe, what do you do when the soufflé falls? What do you do when the tube blows?
It made me a fussbudget about “logical block diagrams.” I insisted on them. I wanted to show the flow of information, not just the list of settings. When we were working on the UNIVAC later, or even when I was explaining things to the new people coming onto the ENIAC, I tried to teach them the architecture, not just the operation.
I think we’ve lost a bit of that today. It worries me when I see students who can write code but don’t have the foggiest notion of what the hardware is actually doing underneath. When you explain a machine by hiding its guts, you’re making it seem like magic. And the problem with magic, Santiago, is that you can’t fix it when it breaks. We knew ENIAC wasn’t magic. We knew it was just a lot of wire, glass, and logic, and because we knew that, we could master it.
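The schematic-level view Bartik describes – a gate triggering a flip-flop, which passes a pulse along – can be modelled in miniature. This is a deliberately simplified toy, not a circuit-accurate ENIAC model:

```python
# A toy model of reading a schematic as "a map of reality": trace a pulse
# through a gate into a flip-flop. Names and behaviour are simplified for
# illustration only.

class FlipFlop:
    def __init__(self):
        self.state = 0          # the stored bit
    def trigger(self, pulse):
        if pulse:               # an incoming pulse toggles the stored bit
            self.state ^= 1
        return self.state

def and_gate(a, b):
    return a & b

ff = FlipFlop()
out1 = ff.trigger(and_gate(1, 0))   # gate blocks the pulse: state stays 0
out2 = ff.trigger(and_gate(1, 1))   # gate passes the pulse: flip-flop toggles to 1
```

The point of the exercise is the one Bartik makes: you cannot predict `out2` from a recipe, only from knowing where each pulse goes.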
Clara Schmidt, 29, Historian of Technology, Berlin, Germany
You stepped away from computing for fifteen years, then returned to the field. In that time away, did your perspective on what programming was fundamentally change? When you came back and saw FORTRAN and COBOL, did they feel like natural evolutions of what you’d invented, or did they sometimes feel like misunderstandings of what programming could be?
Well, Clara, coming back was like walking into your own house after someone had rearranged all the furniture and painted the walls a colour you didn’t recognise.
You have to understand, when I left, we were wrestling with the machine directly. We were the operating system. If you wanted to multiply two numbers, you had to know exactly where those numbers were sitting and exactly how long it would take the signals to travel through the circuits. It was physical. It was intimate.
When I came back and saw things like COBOL – all those English words, “PERFORM” and “DISPLAY” – my first reaction was that it looked like a grocery list, not a program! It seemed almost… soft. We had spent years fighting to save a single microsecond, counting pulses, and here were these languages using whole English sentences. To a mathematician’s eye, it looked terribly inefficient. It felt like they were wrapping the machine in cotton wool so nobody would get hurt.
But then I remembered Betty – Betty Holberton. She stayed in the industry while I was raising my family, and she was right there in the thick of defining those languages with Grace Hopper. Betty was always the one saying we shouldn’t be doing work the machine could do for us. She hated the grunt work.
So, once I got over the shock, I saw it as a natural evolution of what we started. We invented subroutines to stop repeating ourselves; languages like FORTRAN were just the ultimate subroutine. They took the drudgery out of it. I couldn’t argue with the utility, even if I did miss the control. There’s a certain satisfaction in knowing exactly which vacuum tube is lighting up when you press a button, and those new programmers… well, they missed out on that. They were talking to a compiler, not the computer. It was like passing notes in class instead of having a conversation.
Jack Taylor, 56, Retired Electronics Engineer, Sydney, Australia
In your experience debugging ENIAC, vacuum tubes failed constantly. How did you approach building resilience into your logic designs knowing that hardware failure was inevitable rather than exceptional? Did that experience of working with unreliable components shape how you thought about error-checking and verification – and do you think modern programmers, working with far more reliable hardware, might have lost something important about defensive thinking?
Jack, you’re speaking my language now. Vacuum tubes! Lord, they were the bane of our existence and the marvel of the age, all at once.
With ENIAC, we didn’t just assume things might fail. We assumed they would fail, probably before lunch. We had 18,000 tubes, and standard reliability engineering at the time said the machine shouldn’t work for more than a few minutes at a stretch. But Pres Eckert – J. Presper Eckert – he was a genius with the hardware. He ran the tubes at much lower power than they were rated for, which kept them from burning out so fast. That was his trick.
But for us, the programmers? It meant we had to be suspicious. We couldn’t trust a result just because the machine spit it out. We built checks into the program itself. We’d run a calculation, then run it backward to see if we got the original number. Or we’d have the machine calculate a value two different ways and compare them. If they didn’t match, the machine would stop – bang, right there.
We called it “defensive programming” later, but at the time, it was just common sense. You don’t cross a bridge without checking if the planks are rotten.
And do I think modern folks have lost that? Oh, absolutely. You have machines that run for months without a hiccup, so you get lazy. You assume the hardware is perfect. But software isn’t perfect, and data isn’t perfect. When you stop expecting failure, you stop looking for it. We learned to treat every answer as a “maybe” until we proved it was a “yes.” That scepticism – that habit of cross-examining your own work – that’s the only way to be sure you’re right. It doesn’t matter if you’re using vacuum tubes or silicon chips; if you trust the machine more than your own logic, you’re heading for trouble.
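The cross-checking Bartik describes – compute a value two independent ways, or “run it backward”, and stop the moment the answers disagree – looks like this in a modern sketch. The square-root example is invented for illustration:

```python
import math

# A minimal sketch of defensive cross-checking: compute a value by two
# independent routes and "run it backward", halting on any disagreement.
# The specific calculation is invented for illustration.

def checked_sqrt(x, tol=1e-9):
    forward = math.sqrt(x)
    # Independent route: Newton's method from a crude starting guess.
    guess = x if x >= 1 else 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    # Cross-examine: do the two routes agree, and does squaring undo the sqrt?
    if abs(forward - guess) > tol or abs(forward * forward - x) > tol * max(x, 1.0):
        raise RuntimeError("cross-check failed: result is a 'maybe', not a 'yes'")
    return forward

checked_sqrt(2.0)   # passes both checks and returns the value
```

A result only counts once it has survived both checks – the machine stops, “bang, right there”, otherwise.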
Zola van Niekerk, 38, Applied Mathematician, Cape Town, South Africa
Suppose anti-nepotism policies hadn’t forced you to leave in 1950. Suppose you’d continued working on programming language design and compiler development through the 1950s and 1960s alongside people like Grace Hopper and others. How differently might programming languages have evolved if more women like you had remained visible contributors rather than being pushed out? What kinds of problems do you think you would have wanted to solve that perhaps didn’t get solved?
Zola, that’s the kind of question that lingers in the back of the mind long after the work itself is done – a real “what if”.
If I’d stayed, if Betty Holberton had stayed, if Kathleen Antonelli and the rest of us hadn’t been forced out or hadn’t left – we would have built programming languages very differently, I think. Not better necessarily, but differently.
Grace Hopper was brilliant with COBOL, and I admired her work tremendously. But Grace came to programming from a different angle than we did. We came from the mathematics of it, from the physical machine. Grace came from a compiler perspective – how do you make a machine understand a more human-readable language? Those are different problems, and they lead to different solutions.
If more of us had remained in the room when languages were being designed, I think we would have insisted on something crucial: transparency. We would have fought harder to keep the programmer aware of what the machine was actually doing underneath. We wouldn’t have been satisfied with just “it works.” We would have asked: at what cost? How many operations is this really performing? Where is your data sitting in memory?
I also think we would have thought more about what I’d call the architecture of thought – not just the syntax, but the logical structures that made certain kinds of problems easier or harder to solve. We’d spent years inventing subroutines and nesting and flow control. We understood deeply how human thinking maps onto machine thinking. I suspect we would have pushed for languages that made that mapping more visible, not less.
And here’s the thing: we would have had allies. Betty Holberton went on to develop SORT/MERGE and did groundbreaking work, but she was one voice. Imagine if there had been five or six of us, all arguing from different angles, all insisting on rigour, all refusing to let the machine become a complete black box.
Would programming look simpler? Probably not. Would it be more powerful? Possibly. But I think it would be more honest. It would force programmers to think about what they were actually asking the machine to do, not just whether it gave the right answer.
The other thing – and this matters, Zola – we would have stayed visible. Young women coming into computing in the 1950s and 1960s wouldn’t have looked around and seen only men. They would have seen us. They would have known that programming wasn’t a secretarial job or a stepping stone to something “real.” They would have known it was a discipline worth mastering, worth dedicating your life to.
That absence – that visibility gap – it echoes right down to today. How many brilliant mathematicians and engineers decided they didn’t belong in computing because nobody showed them a woman doing the deep, important work? That’s not just my loss. It’s everybody’s loss.
Reflection
Jean Bartik passed away on 23rd March 2011, at the age of eighty-six. She lived long enough to see the ENIAC Programmers Project recover her name from obscurity, but she did not live to see the full scope of her rehabilitation into computing history – a process that continues, incompletely, to this day.
This fictional interview, conducted as if across the boundary between death and recognition, aims to honour what Bartik herself insisted upon: that the work mattered more than the recognition, but that recognition itself is not a luxury – it is a necessity for future generations to understand how innovation actually happens. The conversation above captures, as faithfully as imagination allows, her particular blend of Midwestern directness and technical precision, her pragmatic refusal to dramatise discrimination even as she named it plainly, and her conviction that the future of any discipline depends on recovering its honest history.
What This Interview Reveals Beyond the Record
Official accounts credit ENIAC’s stored-program conversion to John von Neumann, with Bartik’s role mentioned in passing. This interview foregrounds what Bartik herself emphasised: that theoretical contribution and practical implementation are not interchangeable, and that the invisible labour of programming – translating concepts into executable configurations – was hers to own. Similarly, the narrative around “six ENIAC programmers” often collapses their individual contributions into a collective achievement. Bartik’s voice here insists on specificity: she led certain technical efforts; she made particular decisions about instruction encoding; she pioneered defensive programming practices before they had a name. These distinctions matter because they anchor innovation to the people who actually performed it.
The interview also explores dimensions of Bartik’s life that historical records often treat cursorily. Her departure from computing due to anti-nepotism policies is typically presented as a footnote, a regrettable fact of the era. Here, she articulates the compound injury: not just being forced out, but being forced out at the height of her expertise, whilst the institution was rebuilt without her. Her later return to the field, and her reflections on how programming languages evolved during her absence, reveal how much technical history is shaped by who gets to remain in the room. These are speculative reconstructions – Bartik never wrote extensively about her thoughts on COBOL or FORTRAN in archival sources – but they are grounded in the technical knowledge she demonstrated throughout her career.
The Gaps We Cannot Fill
It is important to acknowledge that some questions remain unresolved, even in this extended conversation. Bartik did not leave detailed technical notes on certain aspects of her work. The specific trade-offs she made in instruction set design, for instance, are known through her autobiography and oral histories, but not through her own contemporary documentation. The ENIAC team’s debugging techniques were passed on through demonstration and conversation, not through formal specifications. Future historians may recover additional primary sources – letters, technical memos, programmer’s logs – that would clarify or challenge interpretations offered here.
Additionally, the question of what computing might have looked like had women remained visible contributors is genuinely counterfactual. Grace Hopper did remain in computing and shaped language design profoundly. Kathleen Antonelli worked in various technical roles throughout her career. Betty Holberton continued contributing to programming development. But the field was smaller, and women’s presence was contested in ways that cost energy and limited what was possible. Any answer to Zola’s speculative question is necessarily partial.
The Afterlife of Bartik’s Work
Kathy Kleiman’s ENIAC Programmers Project, which began in the 1990s, is the crucial institutional intervention that prevented Bartik’s work from remaining permanently lost. Kleiman’s documentary film, oral history interviews, and advocacy created the conditions for Bartik’s belated recognition: the Computer History Museum Fellowship (2008), the IEEE Computer Pioneer Award (2008), and the naming of Drupal’s default theme “Bartik” (2010). These honours arrived late, but they arrived. More importantly, they opened pathways for other erased histories to be recovered. The ENIAC Programmers Project model – systematic historical investigation combined with public advocacy – has become a template for recovering women’s contributions across STEM fields.
Contemporary computer scientists continue to build on Bartik’s foundational work, often without knowing they are doing so. Every subroutine, every debugging breakpoint, every flow diagram used in software development carries forward the logic Bartik and her colleagues invented. Academic papers on the history of programming routinely cite her contributions. University curricula increasingly include the ENIAC programmers’ story in computer science courses. In this sense, Bartik’s work has achieved a kind of distributed immortality: it is woven so thoroughly into programming practice that it is nearly invisible, much as it was in 1946 – except now, we know whose hands wove it.
A Message for Young Women in Science
Bartik’s legacy speaks directly to young women pursuing careers in science, technology, engineering, and mathematics. The “pipeline problem” – the assertion that women simply aren’t entering STEM fields – is contradicted by her story. Women were present at computing’s origin. They solved problems men could not solve alone. They were forced out by policy and culture, not by aptitude or interest. That history carries two crucial lessons:
First, erasure is not inevitable; it is chosen, through institutional policy and social assumption. Which means recognition is also a choice. When young women in science face invisibility, they should know that this is a pattern with a long history, and that it can be interrupted. Documentation matters. Speaking up matters. Refusing the frames that diminish your work matters.
Second, the path forward is not to prove that women belong in STEM – they always have. The path is to ensure that institutions stop choosing erasure. That requires both individual persistence (what Bartik modelled) and structural change (what remains undone). Mentorship, visibility, deliberate inclusion in authorship and credit, institutional memory – these are not nice additions to scientific work. They are preconditions for honest knowledge creation.
The Unfinished Question
Bartik’s story ends with recognition, but not with completion. The question she poses – what would computing have become if women had remained visible, funded, credited contributors throughout its development? – remains radically open. The field she helped invent continues to grapple with gender representation, algorithmic bias, the visibility of labour in complex systems. The challenges she faced, translated into modern idioms, persist: the attribution of ideas to the wrong people, the invisibility of implementation labour, the pressure on women to leave fields when they have the most to contribute.
Her final word, offered across decades to the next generation of scientists: don’t accept invisibility. Don’t accept the frames that diminish your work. Document your thought. Insist on clarity about what you’ve accomplished. And trust yourself more than you trust the institution’s story about what you can and cannot do.
That defiance, grounded in technical mastery and moral clarity, remains the most vital part of her legacy.
Editorial Note
This interview is a fictional dramatisation, not a verbatim account. Jean Bartik passed away in 2011, and this conversation is imagined as if conducted on 28th November 2025 – fourteen years after her death. It should be read as a form of historical reconstruction grounded in documented fact, but extended into plausible speculation about how Bartik might have reflected on her work, her era, and her legacy.
What is grounded in historical record:
The technical details of ENIAC’s architecture, the stored-program conversion, the development of subroutines and debugging techniques, BINAC and UNIVAC, and SORT/MERGE are all documented in Bartik’s autobiography Pioneer Programmer (2013), her oral history interviews (archived at the Computer History Museum and the IEEE), and scholarly accounts of early computing history. Her role in the ENIAC Programmers Project recovery is factual. The anti-nepotism policies that forced her resignation, her departure from computing in the early 1950s, and her re-entry in 1967 are all matters of documented record. Her awards and posthumous recognitions are accurately cited.
What is imaginative reconstruction:
The specific phrasing of Bartik’s responses, her reflections on how programming languages evolved during her absence, her speculation about what computing might have become had women remained visible contributors, and many of the anecdotal details about her thought process are dramatised. They are drawn from the themes, concerns, and technical knowledge evident in her documented work, but they are not direct quotes or confirmed memories. The tone and personality are calibrated to match her known character as reflected in interviews and writing, but they represent an author’s interpretation, not Bartik’s own voice.
The five supplementary questions and answers are entirely imaginative, though they address real gaps and tensions in the historical record. They are meant to illustrate the kinds of conversations that might have occurred with Bartik, had modern scholars and international technologists had the opportunity to engage with her directly.
Why this form?
Historical reconstruction through dramatised dialogue serves purposes that straightforward biography cannot. It allows readers to experience the technical reasoning, emotional complexity, and philosophical insight of a figure from the past. It makes visible the texture of thought – the hesitations, the nuances, the refusals to be flattened into simple narratives. At the same time, it carries real risk: fictional dialogue can masquerade as fact, and readers may mistake imaginative elaboration for documented truth.
This editorial note exists to make that distinction clear. We believe Bartik’s story is important enough to warrant careful, rigorous historical work. We also believe it is important enough that it should not be distorted by unclarified fiction. The reader deserves to know where the historical record ends and interpretation begins.
A note on sources:
Readers interested in Bartik’s own words should consult her autobiography, Pioneer Programmer: Jean Jennings Bartik and the Computer That Changed the World (2013), and her oral history interviews available through the Computer History Museum. Kathy Kleiman’s documentary The Computers and her related ENIAC Programmers Project materials provide crucial context for understanding how Bartik’s contributions were recovered and documented. Academic accounts include Jon McNeese’s work on early computing history and Jennifer Light’s research on the gendered labour of programming. These sources, combined with the present dramatisation, offer a fuller portrait than any single account can provide.
We invite readers to treat this interview as a starting point for deeper engagement with Bartik’s life and work, not as a substitute for the historical record. The questions she raises remain urgent. The solutions she pioneered remain relevant. And her insistence that the foundations of knowledge should be built transparently, with full credit to those who built them, remains as vital today as it was in 1946.
Who have we missed?
This series is all about recovering the voices history left behind – and I’d love your help finding the next one. If there’s a woman in STEM you think deserves to be interviewed in this way – whether a forgotten inventor, unsung technician, or overlooked researcher – please share her story.
Email me at voxmeditantis@gmail.com or leave a comment below with your suggestion – even just a name is a great start. Let’s keep uncovering the women who shaped science and innovation, one conversation at a time.
Bob Lynn | © 2025 Vox Meditantis. All rights reserved.

