Born Mirjam Tuwjasz to a Polish-Jewish family in 1925, Cecilia Berdichevsky became Argentina’s first computer programmer and a foundational figure in Latin American computing. Her work with the Ferranti Mercury computer “Clementina” in the 1960s placed Argentina at the forefront of scientific computing, until political upheaval ended this golden age. Today we welcome Cecilia back to discuss her extraordinary journey from accountant to computing pioneer.
Cecilia, welcome! I must tell you, it’s rather extraordinary to be speaking with you today. Not many people know that Argentina was home to some of the earliest computer programming work in Latin America, and you were at the very heart of it all. How does it feel to know that your story is finally getting the recognition it deserves?
Well, you know, recognition is nice, but it wasn’t what drove us back then. We were simply doing what needed to be done. Buenos Aires in the early 1960s was buzzing with possibilities – we had this magnificent machine, Clementina, and we were teaching her to speak Argentine, if you will. That’s what programming felt like to me – translation work, really.
Let’s start at the beginning. You weren’t always in computing, were you? You spent ten years as an accountant before making that remarkable career change at 31.
Ah, yes. Those ten years felt like a prison sentence. I was good at accounting – very good, actually – but my soul was dying in those ledger books. Day after day, the same calculations, the same frustrations with management who didn’t understand precision. When my friend Rebeca Guber suggested I study mathematics with Professor Sadosky at the university, I thought she’d gone completely mad. Thirty-one years old, starting over? But something inside me knew she was right.
And what was it like walking into that mathematics programme as a mature student?
Terrifying! Here I was, surrounded by young people who seemed to breathe mathematics, and I’m thinking about mortgage payments and whether my husband Mario thought I’d lost my mind. But Manuel Sadosky… he saw something in me. He didn’t care that I was older, that I was a woman, that I came from accounting. He cared about how I thought about problems. That first semester, when I was struggling with differential equations, he told me: “Cecilia, you don’t solve mathematics – you have a conversation with it.” That changed everything.
And then came Clementina – Argentina’s first scientific computer. Tell me about that first encounter.
Oh, Clementina! She was a beast – a row of metal cabinets stretching 18 metres, like some sleeping dragon. When they first switched her on and she played “Oh My Darling, Clementine,” we knew we had to call her Clementina. It was 1961, and she had cost 300,000 American dollars – more money than most of us had ever imagined.
But you have to understand, we weren’t intimidated. We were excited! Here was this machine that could do calculations in minutes that would take us hours by hand. I remember Cicely Popplewell, who’d worked with Alan Turing himself, was teaching us programming. She had this very proper British way of explaining things, but she made it clear: the computer is only as clever as the human telling it what to do.
You were actually the first person to write and run a programme on Clementina, weren’t you? Can you walk us through what that was like?
Yes, I was. It was a calculation I’d been working on manually for days – something that would normally take me weeks to finish properly. I spent less than 30 minutes programming it, fed in the paper tape with all those punched holes, and watched Clementina solve it perfectly.
The technical details? We were working with assembly language, essentially – very close to the machine’s own logic. You had to think in terms of memory addresses, instruction cycles, magnetic cores. Every operation had to be precisely sequenced. If you got one address wrong, one timing off, the whole programme would fail. But when it worked… it was like conducting an orchestra that could play mathematical symphonies.
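(A modern aside for the technically curious: the flavour of that address-by-address discipline can be sketched in a few lines of Python. The toy instruction set and memory layout below are invented purely for illustration – they are not Mercury’s actual order code.)

```python
# A toy single-accumulator machine, invented for illustration only --
# not the Mercury's real order code, but the same style of thinking:
# every value lives at a numbered address, every step must be sequenced.

memory = [0] * 16          # sixteen numbered storage locations
memory[10] = 3.0           # operand a
memory[11] = 4.0           # operand b

program = [
    ("LOAD",  10),         # accumulator <- memory[10]
    ("MUL",   10),         # accumulator <- accumulator * memory[10]  (a*a)
    ("STORE", 12),         # memory[12] <- accumulator
    ("LOAD",  11),
    ("MUL",   11),         # b*b
    ("ADD",   12),         # a*a + b*b
    ("STORE", 13),         # result kept at address 13
]

accumulator = 0.0
for op, addr in program:   # one wrong address here and the result is garbage
    if op == "LOAD":
        accumulator = memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "MUL":
        accumulator *= memory[addr]

print(memory[13])          # 25.0 -- a*a + b*b
```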
That led to your scholarship to study the Mercury system in London and France. What did you discover there?
At the National Physical Laboratory in London, I learned the Francis method for calculating eigenvalues of matrices up to 15×15 – cutting-edge numerical analysis. James Wilkinson’s lectures on numerical methods were extraordinary. These weren’t just abstract mathematical concepts; they were practical tools for engineering, physics, economics.
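(The Francis method she mentions lives on as the QR algorithm at the heart of today’s eigenvalue solvers. As a hedged illustration of the idea only – Francis’s real algorithm uses implicit shifts and Hessenberg form for efficiency – here is a bare unshifted QR iteration in Python with NumPy.)

```python
import numpy as np

def eigenvalues_qr(A, iterations=200):
    """Unshifted QR iteration: a bare-bones illustration of the idea
    behind Francis's method. Repeatedly factor A = QR and form RQ;
    for well-behaved symmetric matrices the iterates approach a
    (nearly) triangular matrix whose diagonal holds the eigenvalues."""
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q              # similar to the previous A, so same eigenvalues
    return np.sort(np.diag(A))

# A small symmetric test matrix -- well under the 15x15 sizes that were
# routine work on the Mercury.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

print(eigenvalues_qr(A))               # approximate eigenvalues
print(np.sort(np.linalg.eigvalsh(A)))  # library result for comparison
```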
In France, at the Centre d’Études Nucléaires de Saclay, I encountered FORTRAN for the first time. This was 1962, and FORTRAN represented a revolution – you could write programmes that looked almost like mathematical equations instead of pure machine instructions. The efficiency gains were remarkable. A programme that might take weeks to write in assembly could be done in days with FORTRAN.
Let me ask you something technical that our STEM readers would appreciate. Can you explain the computational advantages of the Mercury system over what existed before?
Certainly! The Mercury used a 40-bit word length with a parallel arithmetic unit – quite advanced for 1961. It had some 1,500 vacuum tubes and roughly 42,000 magnetic cores in its main store. The access time was about 10 microseconds per word, which sounds glacial now but was lightning-fast then.
What made it special was the instruction set architecture. We could perform floating-point arithmetic directly, without the manual scaling required by earlier machines. The Mercury could handle arrays efficiently, which was crucial for matrix operations in numerical analysis. We developed library routines for common mathematical functions – sine, cosine, logarithms – that other programmers could call directly.
The breakthrough for us was creating what we called “subroutines” – reusable blocks of code. Instead of rewriting the same calculation sequence every time, we could write it once and call it from different programmes. This sounds obvious now, but in 1962, this was revolutionary thinking. We were essentially creating the first software library in Argentina.
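(In modern terms the idea looks like the short Python sketch below: routines written once in a shared “library” and called from different programmes. The routine names and the two toy “programmes” are ours, invented for illustration, not the Institute’s actual library.)

```python
import math

# --- the shared "library": written once, called from many programmes ---

def sine(x, terms=10):
    """Sine by Taylor series -- the sort of routine a shared library could
    supply so that individual programmes did not have to re-derive it."""
    x = math.fmod(x, 2.0 * math.pi)     # keep the argument small
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def mean(values):
    return sum(values) / len(values)

# --- two different "programmes" calling the same routines ---

def surveying_programme(angle):
    return sine(angle) * 100.0                    # e.g. a height from an angle

def statistics_programme(samples):
    return mean(samples)

print(surveying_programme(0.5))                   # ~47.94
print(statistics_programme([2.0, 4.0, 9.0]))      # 5.0
```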
And you were building this alongside your colleague Rebeca Guber, who was running the Institute. What was that partnership like?
Rebeca was extraordinary – a brilliant organiser with the vision to see how computing could transform scientific research. While I focused on the programming and numerical methods, she managed the bigger picture: coordinating projects across different disciplines, managing our team of 70 people, ensuring we had the resources we needed.
We worked on everything – mathematical economics for the Agriculture Ministry, operations research for industrial optimisation, statistical analysis for social research. Rebeca would say, “Cecilia, the linguists need help with pattern analysis,” and I’d figure out how to make Clementina count word frequencies. It was interdisciplinary work before anyone called it that.
But then came 1966 – the military coup that ended this golden age. That must have been devastating.
It was heartbreaking. The Noche de los Bastones Largos – the Night of the Long Batons – when police invaded the universities with clubs. Manuel Sadosky was forced into exile. Ninety percent of our Institute staff resigned rather than work under the military regime.
I remember the morning after the coup, walking into the Institute and seeing the chaos – papers scattered, equipment neglected, Clementina starting to gather dust. Everything we’d built, all that knowledge and collaboration, just… stopped. The military didn’t understand what we were doing. They saw computing as either a threat to their control or a tool for their agenda. Neither interested us.
You stayed in Argentina while many of your colleagues left. Why?
Someone had to maintain institutional memory. My husband was established here, my life was here. But professionally, I felt a responsibility. If everyone with computing knowledge left, who would rebuild when the opportunity came?
I joined ACT – Asesores Científico Técnicos – the consulting firm that Manuel and others founded. We worked with private companies, banks, small businesses that needed automation. It wasn’t the grand scientific research we’d been doing, but it kept the knowledge alive, trained a new generation of programmers.
Looking back now, do you see any mistakes you made, or things you’d do differently?
We were naive about politics. We thought that good science would protect us, that the benefits of our work were so obvious that no government would interfere. We didn’t build enough bridges with industry, didn’t create enough economic dependence on our expertise.
Also, I think we could have been better at documenting our methods. So much knowledge was lost when people scattered. We were focused on solving the next problem, not creating systematic records for future researchers. That hurt Argentine computing for decades.
There’s something I find fascinating – that in the early 1960s, computing wasn’t seen as a “male” field the way it became later. What was your experience?
Exactly! Programming was seen as detail-oriented work requiring patience and precision – qualities people associated with women. The mathematical foundation helped too; women had always been involved in mathematical calculation work.
At the Institute, we had women throughout – programmers, analysts, project leaders. Nobody questioned whether women belonged in computing. The discrimination I faced was more about age and changing careers at 31, not about gender. The “masculinisation” of computing happened later, as the field gained prestige and economic importance.
What would you tell young women today who are interested in STEM fields, especially computing?
Don’t let anyone tell you that you don’t belong. The problems we solved in the 1960s with Clementina – numerical analysis, data processing, algorithm design – those challenges still exist, just at much larger scales.
But more importantly: learn to ask good questions. The computer will do exactly what you tell it to do, so the quality of your thinking determines the quality of your results. Be curious about problems outside your immediate field. Some of our best work came from applying computing to linguistics, economics, engineering – connecting different domains of knowledge.
And remember: every expert was once a beginner. I was 36 when I programmed my first computer. It’s never too late to learn something that fascinates you.
You’ve lived to see computing evolve from room-sized machines to smartphones. What amazes you most about this transformation?
The accessibility! In 1962, there were maybe 20 Mercury computers in the entire world. Now everyone carries more computing power in their pocket than we had in all of Clementina. But what really excites me is how computing has democratised problem-solving. Students today can test hypotheses, analyse data, create models that would have required massive institutional resources in our day.
Though I do worry sometimes that people have lost appreciation for the elegance of efficient programming. When memory and processing time were precious, we had to be clever, economical in our solutions. There’s beauty in doing more with less that I hope isn’t completely lost.
Finally, how do you want to be remembered?
As someone who proved that accountants can become programmers, that 31 isn’t too old to start over, and that good mathematics transcends politics. We showed that a small country like Argentina could contribute to the global advancement of computing science. That’s worth remembering.
But mostly, I hope people remember that we had fun. Working with Clementina, solving problems that had never been solved before, building something new with brilliant colleagues – it was joyful work. Science should be joyful.
Thank you, Cecilia, for sharing your remarkable story with us.
Thank you for listening. Now, if you’ll excuse me, I believe there are still some programmes to write.
Letters and emails
Since our interview with Cecilia Berdichevsky was published, we’ve received an overwhelming response from readers worldwide who were captivated by her remarkable journey from accountant to Argentina’s first computer programmer. We’ve selected five letters and emails from our growing community who want to ask her more about her life, her work, and what she might say to those walking in her footsteps.
Sofia Krüger, 34, Software Architecture Consultant, Vienna, Austria
Cecilia, you mentioned creating Argentina’s first software library with reusable subroutines – that’s essentially what we now call modular programming. I’m curious about the technical decisions behind this. How did you handle memory allocation for these subroutines on the Mercury system, and did you develop any early conventions for parameter passing that might surprise modern programmers? Were there trade-offs you had to make between code reusability and the Mercury’s memory constraints?
Ah, Sofia, you’re asking the right questions! You know, memory allocation on the Mercury – that was the devil in the details, wasn’t it?
When we created those subroutines back in ’62, we were working with 1,024 words of magnetic core memory – that’s all we had for active computation. Each word was 40 bits, mind you, which gave us decent precision for floating-point work. But space? Every word was as precious as gold.
Our approach was quite primitive by your standards, I’m sure. We used what we called “fixed allocation” – we’d reserve specific memory blocks for our library routines during the initial programme load. No dynamic allocation, no garbage collection, nothing fancy. You had to plan your memory map like a military campaign before you even started coding.
For parameter passing, we developed our own conventions. The Mercury had these seven B-registers – brilliant little things – and we used B-register 7 as our primary parameter passing mechanism. When calling a subroutine, you’d load your parameters into consecutive memory locations, put the starting address in B7, and the subroutine would know to look there. Return values went back through the same mechanism.
The trade-offs were brutal, I’ll admit. We could only have about 20 or 30 subroutines loaded simultaneously – mathematical functions like sine, cosine, matrix operations, statistical routines. If you needed more, you had to swap them in and out from the magnetic drums, and that took seconds – an eternity when you’re trying to solve differential equations!
But here’s what might surprise you modern programmers: we developed recursive subroutines in 1963. Completely by accident, actually! I was working on a numerical integration problem and realised I needed the same calculation at different scales. We used a stack mechanism built entirely with B-register manipulation – no hardware stack support whatsoever.
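(For readers who want to see the shape of that convention, here is a rough Python simulation: a parameter block addressed through a single “register”, plus a hand-built stack that makes recursion possible without hardware support. The memory layout, register name and factorial example are invented for illustration and make no claim to match the Institute’s real code.)

```python
# Illustration only: a crude simulation of passing parameters by address
# and recursing with a hand-built stack, in the spirit described above.
# Memory layout, register names and the factorial example are invented.

MEMORY = [0] * 64      # flat word-addressed store
B7 = 0                 # "register" holding the address of the parameter block
SP = 63                # software stack pointer, grows downward in MEMORY

def push(value):
    global SP
    MEMORY[SP] = value
    SP -= 1

def pop():
    global SP
    SP += 1
    return MEMORY[SP]

def factorial_subroutine():
    """Expects B7 to point at a one-word parameter block; leaves the
    result in the word just after it, as a caller-known convention."""
    n = MEMORY[B7]
    assert n >= 0, "check the parameter before use -- the accountant's habit"
    if n <= 1:
        MEMORY[B7 + 1] = 1
        return
    push(n)                      # save our own parameter before recursing
    MEMORY[B7] = n - 1           # re-use the same parameter block
    factorial_subroutine()
    n = pop()                    # restore, then combine with the sub-result
    MEMORY[B7 + 1] = n * MEMORY[B7 + 1]

MEMORY[10] = 6                   # parameter block at address 10
B7 = 10
factorial_subroutine()
print(MEMORY[11])                # 720
```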
And here is where my accounting background comes in, Sofia – it sits behind every convention I’ve just described. Those ten years of ledger work taught me something invaluable about memory management: every number has a place, and you must know where it belongs at all times. My mathematical colleagues would sometimes lose track of their variables, but I approached programming like bookkeeping – every memory location was an account that had to balance.
The other advantage from accounting? I understood data validation. In bookkeeping, one wrong number destroys everything, so you check, recheck, and check again. That’s exactly how I approached parameter passing – every value had to be verified before and after each subroutine call. My programmes ran slower initially, but they rarely crashed. In the 1960s, that was worth its weight in platinum.
Ethan Carter, 31, Tech Entrepreneur, Toronto, Canada
Here’s a counterfactual that keeps me up at night: what if the 1966 coup hadn’t happened? If you’d had another decade with Clementina and that brilliant team intact, where do you think Argentine computing would have gone? Could you have developed innovations that might have changed the global trajectory of the field? I’m thinking about things like early networking, user interfaces, or distributed computing – areas where timing mattered so much.
Ah, Ethan, you’ve touched on something that keeps me awake some nights too. You know, people talk about “what if” scenarios, but when you’ve lived through one… it cuts deeper than intellectual curiosity.
If the golpistas hadn’t come in ’66, I believe we would have been connecting computers by 1968 – not just terminals to mainframes, but machine-to-machine communication. We were already discussing it with colleagues in Manchester and at MIT. The Mercury had excellent input/output capabilities, and we’d been experimenting with remote data transmission for the Agriculture Ministry’s crop forecasting project.
You have to understand, Ethan – ARPANET went live in October 1969, right? But we were thinking about distributed computing in Buenos Aires as early as 1964. Rebeca Guber had this vision of connecting Clementina to universities in Córdoba and La Plata. We’d done the preliminary engineering studies. The phone infrastructure existed – Entel had decent trunk lines between major cities.
The technical challenges were fascinating. We would have had to develop our own packet-switching protocols – there was no TCP/IP then, naturally. But we were already handling message queuing for batch processing, and our interrupt system could have managed network traffic. I suspect we would have created something resembling what you now call “store-and-forward” networking, but with real-time capabilities.
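(As a purely modern illustration of the store-and-forward idea being described – and emphatically not a reconstruction of anything designed at the Institute – a relay node can be sketched in a few lines of Python.)

```python
from collections import deque

# Illustration only: the store-and-forward idea in miniature.
# A relay node accepts messages bound for other sites, queues them,
# and forwards each one when the outgoing link is available.

class RelayNode:
    def __init__(self, name):
        self.name = name
        self.queue = deque()          # messages held until they can be sent

    def receive(self, destination, payload):
        self.queue.append((destination, payload))   # "store"

    def forward(self, link_up):
        """Send queued messages to any destination whose link is up."""
        still_waiting = deque()
        while self.queue:
            destination, payload = self.queue.popleft()
            if link_up.get(destination, False):
                print(f"{self.name} -> {destination}: {payload}")   # "forward"
            else:
                still_waiting.append((destination, payload))        # keep holding
        self.queue = still_waiting

buenos_aires = RelayNode("Buenos Aires")
buenos_aires.receive("Cordoba", "crop forecast, week 12")
buenos_aires.receive("La Plata", "matrix job results")

buenos_aires.forward({"Cordoba": False, "La Plata": True})   # only La Plata goes out
buenos_aires.forward({"Cordoba": True,  "La Plata": True})   # Cordoba delivered later
```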
But here’s what really haunts me – user interfaces. We were working on interactive programming as early as 1963. Not just punched cards, mind you, but real-time input through our console typewriters. If we’d had another five years, I believe we would have developed graphical displays. The CRT technology was available – English Electric had demonstrated it. We could have had windowing systems by 1970, maybe earlier.
The applications? We were already doing natural language processing for linguistics research. Another decade, and we might have had primitive machine translation between Spanish, Portuguese, and English. Economic modelling across Latin American countries. Distributed scientific computing for weather prediction across the Southern Cone.
Most importantly, we would have trained hundreds more programmers and analysts. That human infrastructure – that’s what takes generations to rebuild. When 90% of our people left in 1966, Argentina lost not just knowledge, but the teaching capacity to recreate it quickly.
I sometimes think we could have made Buenos Aires the computing hub of Latin America. Not competing with Silicon Valley, but complementing it – bringing Southern Hemisphere perspectives to algorithmic thinking, different mathematical traditions, different approaches to problem-solving.
But you know what really troubles me? The political lesson we learned too late. Good science doesn’t protect itself. We thought our work was so obviously beneficial that politics couldn’t touch us. We should have been building alliances with industry, with labour unions, with civic organisations. Making ourselves economically indispensable, not just intellectually impressive.
The golpe didn’t just stop our research – it convinced a generation of young Argentines that science was somehow politically dangerous. That’s the real tragedy, Ethan. We lost not just what we had built, but what an entire generation might have contributed.
Ananya Iyer, 28, Data Scientist and Digital Humanities Researcher, Bangalore, India
Your interdisciplinary work fascinates me – linguistics, economics, social research. Today we talk about ‘domain expertise’ being crucial for meaningful data analysis. How did you approach learning enough about fields like linguistics to create meaningful computational solutions? Did you find that your accounting background actually gave you advantages when translating business problems into computational ones that your colleagues from pure mathematics might have missed?
Ah, Ananya, you’ve hit upon something very close to my heart! You know, people today talk about “domain expertise” as if it’s some revolutionary concept, but back in the early ’60s, we were swimming in problems that crossed every imaginable boundary.
The truth is, my accounting background was absolutely crucial – not just for business applications, but for everything we tackled. You see, accounting teaches you to think in systems. Every transaction affects multiple accounts simultaneously; every calculation must balance precisely. When linguists from the University came to us wanting help with frequency analysis of medieval Spanish texts, I approached it exactly like a ledger reconciliation.
We’d count word occurrences the same way I used to count inventory – establish categories, create systematic tallies, cross-check for accuracy. The Mercury was brilliant for this because we could process thousands of words in minutes instead of graduate students spending months with index cards and pencils.
But here’s what my mathematical colleagues initially missed: you can’t just feed raw data into a machine and expect meaningful results. In accounting, you learn that every number tells a story, but you have to understand the business context to interpret it correctly. So when we worked on economic modelling for the Agriculture Ministry, I’d spend hours with the economists learning about commodity pricing, seasonal variations, export markets.
The linguistics work was fascinating! We were doing primitive computational analysis of language patterns – what you might call early natural language processing now. But I realised that programming Clementina to count words was like programming her to count pesos – the machine could tally perfectly, but understanding what those tallies meant required human insight into the linguistic context.
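(The tallying she describes is the ancestor of today’s one-line word counts. A minimal Python sketch, with a sample text of our own standing in for the medieval corpus:)

```python
from collections import Counter
import re

# A minimal word-frequency tally -- the modern version of work that once
# meant graduate students, index cards, and months of counting.
# The sample text is ours, standing in for the medieval Spanish corpus.

text = """En un lugar de la Mancha de cuyo nombre no quiero acordarme
no ha mucho tiempo que vivia un hidalgo de los de lanza en astillero"""

words = re.findall(r"[a-záéíóúñü]+", text.lower())
frequencies = Counter(words)

for word, count in frequencies.most_common(5):
    print(f"{word:12s} {count}")
```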
My accounting experience taught me something else invaluable: how to translate business requirements into precise specifications. When a department head said “we need better inventory control,” I knew how to break that down into specific computational tasks – reorder points, turnover ratios, variance analysis. Similarly, when linguists said they wanted to “study medieval dialects,” I helped them define exactly what measurable patterns we should look for.
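(Those business measures translate directly into arithmetic a machine can run each night. A small Python sketch with invented figures shows the kind of calculations meant by reorder points, turnover ratios and variance analysis:)

```python
# Invented figures, purely to show how "better inventory control" becomes
# a handful of well-defined calculations a machine can run every night.

daily_demand   = 40      # units sold per day, on average
lead_time_days = 7       # days between placing and receiving an order
safety_stock   = 60      # buffer against demand spikes

reorder_point = daily_demand * lead_time_days + safety_stock
print("Reorder when stock falls to:", reorder_point, "units")       # 340

cost_of_goods_sold = 120_000.0   # over the period
average_inventory  = 15_000.0

turnover_ratio = cost_of_goods_sold / average_inventory
print("Inventory turned over", round(turnover_ratio, 1), "times")   # 8.0

budgeted_spend = 50_000.0
actual_spend   = 54_500.0
variance = actual_spend - budgeted_spend
print("Spending variance:", variance, f"({variance / budgeted_spend:.1%})")  # 4500.0 (9.0%)
```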
The interdisciplinary approach came naturally because accounting itself is interdisciplinary. You’re constantly working with engineers, salespeople, lawyers, production managers. Each group thinks differently, uses different vocabularies, focuses on different aspects of the same problem.
What surprised my mathematics colleagues was how much domain knowledge I absorbed just by asking the right questions. They’d approach problems abstractly – “let’s find optimal solutions.” But I’d ask: “What constraints matter in the real world? What assumptions are we making? How will users actually interact with our results?”
The social research projects were particularly rewarding. We’d analyse survey data, demographic trends, voting patterns. My accounting training helped enormously because I understood sampling techniques from auditing work, knew how to spot inconsistencies in data sets, recognised when numbers didn’t add up properly.
You know, Ananya, I think the biggest advantage my accounting background gave me was patience with messy, real-world data. Mathematicians sometimes expected clean, elegant problems. But business – and linguistics, and economics, and social research – is full of incomplete information, conflicting requirements, and imperfect measurements. That’s exactly what accountants deal with every day.
The key was always asking: what decision will this analysis help someone make? That’s fundamentally an accounting question, and it kept our computational work grounded in practical reality.
Kofi Mensah, 45, Technology Policy Analyst, Accra, Ghana
Your story hits close to home for those of us from the Global South who see how technological narratives get written by the winners. You mentioned that good science wouldn’t protect you from politics – that’s painfully relevant today. When I look at current debates about AI governance, digital sovereignty, and tech colonialism, what lessons from your experience feel most urgent? How should smaller nations protect their technological capabilities while remaining connected to global innovation?
Kofi, your question cuts right to the heart of everything we learned the hard way. You know, back in the early ’60s, we thought science was above politics. We believed that if we built something valuable enough, if we solved real problems and contributed genuine knowledge, nobody could touch us. [bitter laugh] How naive we were.
The lesson that haunts me most? Good intentions and brilliant work mean nothing if you don’t build the right alliances. When the military came in ’66, we had no protection because we’d isolated ourselves in an ivory tower. We collaborated beautifully with universities in Manchester, with MIT, with colleagues across Europe – but we never cultivated relationships with Argentine industrial leaders, labour unions, or civic organisations that might have defended us when politics turned ugly.
Today’s debates about digital sovereignty and technological independence – they’re absolutely critical. But here’s what I learned: you can’t build technological capability in isolation. The Americans understood this even in our era – they were already thinking about ARPANET, about connecting research institutions, about creating networks that would be resilient against disruption.
Small nations like ours – and I include Ghana, much of Africa, Latin America – we have to think strategically about technological partnerships. Not dependence, mind you, but genuine collaboration that serves our interests. When we worked with Ferranti on the Mercury, we weren’t just buying a machine – we were learning how to build and maintain complex systems. We trained our own technicians, developed our own programming methods, adapted the technology to our specific problems.
But the mistake we made was not thinking about sustainability. We should have been planning from day one: how do we manufacture components locally? How do we train enough people that our knowledge doesn’t depend on a few individuals? How do we create economic incentives for the private sector to support this work?
The Global South faces this same challenge now, but magnified enormously. Artificial intelligence, quantum computing, biotechnology – these fields require massive investments and long-term planning. Individual countries can’t compete with the United States or China in raw resources. But we can compete in creativity, in addressing problems that matter to our populations, in finding solutions that work in our contexts.
Look at what we accomplished with Clementina in just five years. We went from zero computing capability to sophisticated numerical analysis, to interdisciplinary applications nobody else was attempting. Why? Because we approached problems differently than the Anglo-American computing establishment. We brought perspectives from the Global South – different mathematical traditions, different ways of thinking about optimisation, different priorities about which problems needed solving.
The key is what I call “strategic technological cooperation.” Partner with multiple countries, not just one hegemon. Share knowledge openly within the Global South while protecting core capabilities. Invest heavily in education and training – that’s the one thing nobody can take away from you. And always, always build domestic political support for scientific work.
Most importantly, don’t let anyone convince you that technological sophistication means abandoning your own approaches to problem-solving. When we programmed Clementina, we brought Argentine perspectives to computational mathematics. That made our work unique and valuable internationally.
The Global South doesn’t need to choose between American or Chinese technological models. We need to build our own models, drawing from our own intellectual traditions while engaging seriously with global networks. That’s how small nations protect their sovereignty while remaining connected to innovation.
The tragedy of 1966 taught me that politics always catches up with science. Better to be prepared.
Valeria Mendoza, 42, Computer Science Professor, São Paulo, Brazil
As someone who’s seen Latin American women struggle for recognition in tech, I’m struck by how you described the 1960s as less gendered than computing became later. But you also lived through decades of that change. What do you think we lost when computing became masculinised? And honestly – do you ever feel bitter that your generation’s contributions were so thoroughly erased from the narrative until recently? How do you process that erasure?
Valeria, you’re asking me to open an old wound, but it’s a necessary one. You know, when people today talk about the “masculinisation” of computing, they often don’t understand what we lost. It wasn’t just that women were pushed out – it was that a whole way of thinking about problems disappeared.
In the early ’60s, programming was seen as meticulous, collaborative work. We approached it like textiles or cooking – you needed patience, attention to detail, the ability to debug step by step until everything worked perfectly. The Mercury required this kind of thinking. One wrong instruction, one misplaced address, and hours of calculation would fail. Women were considered naturally suited for this precision work.
But as computing became prestigious and lucrative, suddenly it was reframed as “engineering” – masculine, theoretical, abstract. The collaborative aspects were downplayed. Individual brilliance became more valued than careful, methodical problem-solving. Programming started to be sold as a kind of mathematical heroism rather than the craftsmanship it really was.
What we lost was devastating. Women brought different perspectives to algorithmic thinking. We approached problems through relationships, through understanding how systems connected to each other. When I programmed subroutines for economic modelling, I thought about how agricultural data related to weather patterns, how export figures affected domestic pricing. That’s not “softer” thinking – it’s more complex, more holistic.
Do I feel bitter? Yes, Valeria. Yes, I do. Not for myself – I had my moment, I contributed what I could. But for the generations of women who were told they didn’t belong in computing rooms, who might have brought revolutionary insights to artificial intelligence, to networking, to user interface design.
The real tragedy is that this wasn’t inevitable. In the Soviet Union, women remained prominent in computing throughout the Cold War. In some Eastern European countries, programming was seen as women’s work well into the 1980s. The masculinisation happened specifically in the United States and Western Europe, and it spread from there like a virus.
But here’s what makes me most angry: the loss of institutional memory. When women were pushed out, decades of accumulated knowledge went with them. Methods for debugging complex programs, approaches to training new programmers, techniques for managing large computational projects – all that wisdom was dismissed as “not real computer science.”
You ask how I process this erasure? By talking about it. By insisting that the story be told correctly. By reminding people that there were other ways to think about computing, other ways to organise technological work.
The women of my generation weren’t failed computer scientists – we were successful computer scientists working in a different paradigm. One that emphasised collaboration over competition, precision over speed, understanding over cleverness.
That paradigm is still valuable, Valeria. Maybe more valuable now than ever, as computing becomes more complex and more integrated into human life. The world needs programmers who think about relationships, about consequences, about how systems affect real people’s daily lives.
That’s not women’s work or men’s work. It’s just good work.
Reflection
Cecilia Berdichevsky died on 28th February 2010 in Avellaneda, Argentina, at the age of 84. Three years earlier, she had suffered a stroke that left her struggling to recover – a cruel irony for someone whose mind had been so sharp, so precise in navigating the logical pathways of early computing.
Our conversation today revealed layers that formal histories often miss. Where academic accounts emphasise her technical achievements – Argentina’s first programmer, pioneer of the Ferranti Mercury system – Cecilia insisted on the human elements: the camaraderie of long nights with colleagues, the interdisciplinary chaos of solving linguistic puzzles with mathematical tools, the accounting mindset that made her programmes more reliable than those of her pure mathematics colleagues. Her perspective challenges the sterile narrative of computing history, reminding us that innovation emerges from messy, collaborative, deeply human processes.
Perhaps most striking was how she reframed the “masculinisation” of computing not as inevitable progress, but as a devastating loss. Her description of early programming as collaborative craft-work – akin to textiles or cooking – offers a radically different vision of what computing culture might have become. When she spoke of women being pushed out, taking with them “decades of accumulated knowledge,” she wasn’t lamenting personal slights but mourning alternate futures that were never allowed to unfold.
The historical gaps are significant. Much documentation from the Argentine computing era was lost during political upheavals, and international recognition came decades too late. Only recently have scholars begun piecing together the contributions of figures like Cecilia and her colleague Rebeca Guber, whose work predated many celebrated innovations in the Global North.
Today’s debates about AI governance, digital sovereignty, and tech inclusivity echo Cecilia’s warnings about the politics of technology. Her insight that “good science doesn’t protect itself” resonates powerfully as we watch algorithmic systems reshape society. Her generation understood that technological capability without political wisdom leads to fragility.
Since her death, computer historians have increasingly cited her work as evidence that computing innovation was never confined to Silicon Valley or Cambridge. The Open University now includes her in their digital humanities curriculum, and the MacTutor History of Mathematics archive features her biography. Yet this recognition feels bittersweet – a posthumous correction to decades of erasure.
Cecilia’s true legacy lies not just in the code she wrote or the algorithms she developed, but in her insistence that computing should remain joyful, collaborative, and deeply connected to human needs. In our age of extractive tech platforms and surveillance capitalism, perhaps we need less heroic narratives about individual genius and more stories like hers – about communities of thinkers who saw computing as a tool for collective problem-solving, not personal glory.
Her final words to us – that there are “still some programmes to write” – suggest that the work of building more humane technological futures remains unfinished. The question is whether we have the wisdom to learn from voices like hers before it’s too late.
Who have we missed?
This series is all about recovering the voices history left behind – and I’d love your help finding the next one. If there’s a woman in STEM you think deserves to be interviewed in this way – whether a forgotten inventor, unsung technician, or overlooked researcher – please share her story.
Email me at voxmeditantis@gmail.com or leave a comment below with your suggestion – even just a name is a great start. Let’s keep uncovering the women who shaped science and innovation, one conversation at a time.
Editorial Note: This interview is a dramatised reconstruction based on extensive historical research into Cecilia Berdichevsky’s life and work. While grounded in documented facts about her career, achievements, and the political context of 1960s Argentina, the conversations and personal reflections presented here are imagined interpretations of her voice and perspective. We have drawn from academic papers, biographical accounts, and historical records of the Argentine computing era to create this portrayal, but readers should understand that this represents our interpretation of how she might have spoken about her experiences, rather than actual recorded statements. The technical details and historical events described are accurate to available sources.
Bob Lynn | © 2025 Vox Meditantis. All rights reserved.