Florence Nightingale David (1909–1993) was a pioneering British statistician who computed the foundational Tables of the Correlation Coefficient entirely on hand-cranked mechanical calculators, developed predictive bombing models that helped London survive the Blitz, and became one of the first women to chair a major university statistics department in the United States. Yet her wartime contributions remained classified for decades, her identity concealed behind the initials “F.N. David,” and her foundational work overshadowed by the towering figures she worked alongside – Karl Pearson, Ronald Fisher, and Jerzy Neyman. Named after Florence Nightingale, herself a trailblazing statistician, David represented both the extraordinary potential of women in mathematics and the barriers that rendered their contributions nearly invisible.
David’s story matters today because predictive modelling, computational infrastructure, and the ethical calculus of forecasting human casualties remain central to data science, public health, and urban resilience planning. Her work on correlation coefficients enabled modern statistical analysis; her wartime models presaged contemporary pandemic modelling and disaster preparedness; and her experience navigating a male-dominated profession whilst living as a lesbian academic illuminates ongoing struggles for recognition and equity in STEM fields.
Welcome, Professor David. It’s 22nd November 2025. Thank you for speaking with us today. For readers who may not know your work, you computed some of the most important statistical tables of the twentieth century, developed casualty forecasting models that helped London survive German bombing, and spent your career advancing combinatorial probability theory. Yet many people have never heard your name. How do you feel about that?
Well, I suppose I’m not terribly surprised. Much of my most consequential work – those fifteen classified reports for the Ministry of Home Security – sat in government archives for decades. Rather difficult to be remembered for something nobody’s allowed to know you did. And I published under “F.N. David” rather than Florence, which rather successfully hid my sex. That was deliberate, mind you. If you wanted your work taken seriously in the 1930s, you didn’t advertise being a woman. But it does make one rather spectral in the historical record, doesn’t it? People see initials and assume a man wrote it.
You were named after Florence Nightingale, who was a friend of your parents and herself a pioneering statistician. Did that influence your path into mathematics?
I expect it did, though I wasn’t thinking about destiny aged five when I started learning algebra from a local parson. He simply said, “Well, you’ll have to know arithmetic so you’d better start on algebra. And you can speak English so you’d better start on Greek and Latin.” Rather sensible approach, I thought. My parents were both elementary school head teachers, believers in education for girls as much as boys, which wasn’t universal at the time. Florence Nightingale herself used statistics to revolutionise hospital care – those polar diagrams of hers showing preventable deaths during the Crimean War. Perhaps there was something in the name.
You graduated from Bedford College for Women in 1931 with a degree in mathematics. What did you want to do with that training?
I wanted to become an actuary, for some obscure reason. Seemed a sensible profession. But the actuarial firms would only hire men. I was told, “Take the actuarial examinations and then they can’t refuse you.” So I tried that route, but it became clear they’d simply find other ways to exclude me. One day I happened to be passing University College London and I rather boldly crashed my way in to see Karl Pearson. Someone had mentioned he’d done actuarial work. Curious how fate intervenes – I might have walked past that building and my entire life would have been different.
What was Karl Pearson like when you met him?
Terrifying. He’s the only person I’ve ever been genuinely afraid of in my entire life. But he was also extraordinarily kind. He asked what I’d done, I told him about my mathematics degree and my actuarial ambitions, and he said, “You’d better come here and I’ll get your scholarship renewed.” Which he did. So in 1933 I became his research assistant at University College London. I was twenty-four.
Your first major project was computing the Tables of the Correlation Coefficient, published in 1938. Can you walk us through what that work actually entailed – the technical detail of how you did it?
Right. So the correlation coefficient measures the strength and direction of the linear relationship between two variables. Values range from negative one – perfect negative correlation – through zero – no linear relationship – to positive one – perfect positive correlation. The mathematical problem is computing the probability density function and cumulative distribution for the sample correlation coefficient under various conditions, particularly for small sample sizes where normal approximations break down badly.
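Perhaps I can make that concrete for your readers in one of their modern languages. The null case – population correlation of nought – has a closed form via Student’s t, which is precisely why it’s the easy case; my tables existed for the non-null cases, where no such shortcut exists. A minimal Python sketch of the easy case, then, illustrative only and not our actual procedure:

```python
# A minimal sketch (illustrative, not David's procedure): under the null
# hypothesis rho = 0 with bivariate normal data, the statistic
# t = r * sqrt((n - 2) / (1 - r^2)) follows Student's t with n - 2
# degrees of freedom, giving closed-form critical values of r.
import numpy as np
from scipy import stats

def critical_r(n: int, alpha: float = 0.05) -> float:
    """Two-sided critical value of the sample correlation r for size n."""
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return t_crit / np.sqrt(n - 2 + t_crit**2)

def critical_r_simulated(n: int, alpha: float = 0.05,
                         reps: int = 200_000, seed: int = 0) -> float:
    """Cross-check by simulation -- the 'compute it twice' principle."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((reps, n))
    y = rng.standard_normal((reps, n))
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    r = (xc * yc).sum(axis=1) / np.sqrt(
        (xc**2).sum(axis=1) * (yc**2).sum(axis=1))
    return float(np.quantile(np.abs(r), 1 - alpha))

print(critical_r(10))            # ~0.632, the classic tabled value for n = 10
print(critical_r_simulated(10))  # the simulation should agree closely
```

The tabled values for nonzero population correlation were another matter entirely – that’s where the real labour lay.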
The calculations required evaluating extraordinarily complicated multiple integrals arising from the sampling distribution of the correlation coefficient – the density Fisher derived in 1915, studied exhaustively in Karl Pearson’s laboratory. Each integral involved nested functions, sometimes four or five levels deep, and had to be computed to sufficient precision across ranges of sample sizes and population correlation values. We had no electronic computers. What we had were Brunsvigas.
Brunsvigas being mechanical calculators?
Yes, pinwheel calculators. Hand-cranked. You set input levers to expose pins on rotating wheels, then turn a crank handle to add or subtract. For multiplication you turn the crank repeatedly; for division you shift the carriage and subtract iteratively. Carries between digit positions happen mechanically with rather satisfying clicks. When you jammed the machine – which happened regularly if you weren’t careful – you were meant to report to Professor Pearson, who would be most displeased. I jammed mine many times and often went home without telling him.
I estimated I turned that Brunsviga hand-crank roughly two million times computing those tables. Sometimes we’d chain three Brunsvigas together for particularly complex calculations. The machines could carry tens in one register but not another, so you learned tricks with long knitting needles – strictly illegal – to manually assist the carry mechanism without jamming everything.
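If you want the flavour of the mechanism in code, here is a toy model – grossly simplified, I should stress, with no carriage jams, no shortcut cranking, and certainly no knitting needles:

```python
# Toy model of pinwheel multiplication (simplifying assumption: one crank
# turn per unit of each multiplier digit). The multiplicand sits on the
# setting levers; each decimal digit of the multiplier is entered as that
# many turns with the carriage shifted to the matching position --
# multiplication as repeated addition with shifts.
def brunsviga_multiply(multiplicand: int, multiplier: int):
    accumulator = 0   # the result register
    cranks = 0        # how much arm-work the operator did
    for position, digit in enumerate(reversed(str(multiplier))):
        for _ in range(int(digit)):
            accumulator += multiplicand * 10**position
            cranks += 1
    return accumulator, cranks

product, turns = brunsviga_multiply(4732, 586)
print(product, turns)   # 2772952 in 5 + 8 + 6 = 19 crank turns
```

Skilled operators cut the turn count further – a digit nine is cheaper done as ten minus one, a subtraction here and an addition at the next carriage position – which is exactly the sort of laziness-driven optimisation I approved of.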
Two million hand-crank turns. That’s extraordinary patience for someone who describes herself as lazy.
Oh, I am lazy. That’s precisely why I spent so much time working out efficient computational methods. I’d rather think for three hours about a clever shortcut than turn a crank an extra thousand times. The real challenge wasn’t the cranking – that’s merely tedious – it was ensuring the numerical methods converged properly and the error bounds remained acceptable. One mistake early in a calculation chain and everything downstream becomes rubbish. So you computed everything twice, by different methods if possible, and compared.
And these tables became standard references for decades?
Yes. Before electronic computers, if a researcher wanted to assess whether a correlation coefficient was statistically significant for a given sample size, they consulted my tables. The 1938 edition ran to forty-four pages of closely printed values. We later extended them. They enabled researchers across psychology, biology, economics, agriculture – anywhere people gathered data and wanted to understand relationships. Infrastructure, you see. Nobody notices infrastructure until it’s missing, and then absolutely everything stops working. That’s rather been the story of my career.
In 1939, war was declared. You were called up as an Experimental Officer for the Ordnance Board. What did that work involve?
Initially, analysing what happened when you fired guns. Anti-aircraft effectiveness against German bombers coming up the Thames, that sort of thing. Rather boring, actually. I wasn’t doing much good. But then I transferred to the Ministry of Home Security under Austin Bradford Hill. That’s when the work became serious.
This was late 1939, before London had been attacked?
Exactly. We knew war was coming. The question was: what happens when German bombs start falling on densely populated British cities? How many casualties? What patterns of damage? Which infrastructure fails first? The Ministry wanted forecasts – statistical models predicting death tolls, injury rates, fire spread, building collapse, and crucially, disruption to utilities. Water mains, gas pipes, electricity, sewers, telephone lines. If those fail and you can’t restore them rapidly, you lose the city even if most buildings are still standing.
How do you build a predictive model for something that hasn’t happened yet?
You start with physics and engineering principles. Blast wave propagation. Structural collapse thresholds. Population density patterns. You make assumptions – bomb tonnage, accuracy, distribution across target areas. You construct scenarios: what happens if a 500-pound bomb falls on a residential street? On a commercial district? Near a water main junction?
Then you structure it probabilistically. Bombs don’t fall with perfect precision. You model dispersion patterns, likely miss distances, casualty zones at different ranges from impact. Direct hit versus near miss versus far miss. The kill radius from blast versus casualties from debris versus injuries from flying glass. Different agents cause different injuries with different mortality rates, so you partition the casualty predictions accordingly.
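The skeleton of such a scenario calculation fits in a few lines of modern code, if that helps your readers. Every number below is invented for illustration – our actual parameters were classified, and rather more carefully estimated:

```python
# Illustrative sketch only -- invented parameters, not the Ministry's model.
import numpy as np

def casualties_per_effective_hit(density_per_m2: float = 0.01) -> float:
    """Expected deaths from concentric damage zones around one impact."""
    # (outer radius in metres, assumed mortality rate within the annulus)
    zones = [(10, 0.90), (30, 0.30), (100, 0.02)]
    total, inner = 0.0, 0.0
    for outer, rate in zones:
        annulus_area = np.pi * (outer**2 - inner**2)
        total += annulus_area * density_per_m2 * rate
        inner = outer
    return total

def probability_on_target(target_radius: float = 400.0,
                          aim_sd: float = 300.0) -> float:
    """Bivariate normal aim error gives a Rayleigh-distributed miss."""
    return 1.0 - np.exp(-target_radius**2 / (2 * aim_sd**2))

per_bomb = casualties_per_effective_hit() * probability_on_target()
print(f"expected casualties per bomb: {per_bomb:.1f}")
```

Multiply by the expected tonnage per raid and you have a first forecast; partitioning by injury agent and coupling in the utilities failures came after that.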
And you produced these forecasts before London was attacked?
Yes. We delivered the models in late 1939. Then in September 1940, the Blitz began. And the rather macabre part is that we had data – real corpses, real destruction – so we could refine the models. The initial predictions were reasonably accurate, which was both validating and horrifying. We updated continuously as actual casualty figures came in. Where were people when they died? What killed them? Which structures collapsed; which held?
Can you give us specific technical findings? What did the data show?
We found that ninety per cent of deaths were due to direct hit or near miss, blast effects, or falling debris. The remaining ten per cent divided amongst bomb splinters, fire, anti-aircraft barrage fragments, drowning, shock, and flying glass. Glass injuries were surprisingly frequent – dangerous to limbs and eyesight but rarely fatal if you were beyond a certain distance from the explosion.
We analysed ten heavily bombed cities containing under ten per cent of the UK’s total population. They accounted for thirty-eight per cent of those killed and thirty-one per cent of total casualties. Add London and you had roughly a quarter of England and Wales’ population incurring three-quarters of the casualties. That concentration was crucial for resource allocation – where to pre-position medical supplies, emergency personnel, repair equipment.
You mentioned utilities infrastructure. Was that part of your modelling?
Very much so. When a bomb falls in a road, everything running down the middle of the street gets severed. Sewers, water, electricity, gas. Then you have contamination problems – sewage leaking into water pipes. We tried to get data on repair times: how long to restore water service, how much piping required, how many workers. But the responses were hopeless. I’d ask the gas people, “How long did repairs take for incident such-and-such?” They’d say, “Oh, well, we don’t know. We sent out two men with a cart and told them to carry on until they finished.” Useless for statistical analysis. But we built models anyway, made estimates, advised on maintaining critical services during bombing.
Did your models actually save lives?
I believe so. The forecasts helped civil defence prepare. Shelter recommendations – where’s safest to be in a house during a raid. Fire service positioning. Emergency hospital capacity. Water system redundancy. When the bombs came, London’s infrastructure held better than it might have. Services stayed operational. That’s lives saved – people who didn’t die of thirst or sewage-borne disease or untreated injuries because hospitals had no electricity. But I can’t give you a number. That’s the nature of prevention – you never know how many disasters didn’t happen because you planned well.
You also worked on the V-weapons later in the war. What was that problem?
Finding where they were launched from. We had a map of London plotting impact points. The challenge was working backwards to locate the launch sites across the Channel so the RAF could bomb them. I assumed a bivariate normal surface and calculated the direction of the major axes. But the Germans moved their launch sites roughly once a week, so we were constantly re-computing. You had about four minutes from warning to impact. If you knew the launch location, you could send aircraft to suppress them whilst they were firing. Limited success – but we tried.
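The geometry itself is simple enough to demonstrate. In modern terms: take the impact coordinates, estimate their covariance matrix, and its leading eigenvector gives the major axis – the line of fire. A sketch with wholly invented impact points:

```python
# Sketch of the major-axis calculation (invented data, not wartime records):
# fit a bivariate normal to impact points; the eigenvector of the covariance
# matrix with the largest eigenvalue is the major axis, pointing back along
# the line of fire toward the launch area.
import numpy as np

rng = np.random.default_rng(7)
theta = np.deg2rad(30)                       # hypothetical bearing of fire
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
impacts = (rng.standard_normal((200, 2)) * [3.0, 0.8]) @ R.T + [51.5, 0.1]

centre = impacts.mean(axis=0)                # mean point of impact
cov = np.cov(impacts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
major = eigvecs[:, -1]                       # direction of greatest spread
bearing = np.rad2deg(np.arctan2(major[1], major[0]))
print(f"centre {centre}, major-axis bearing ~{bearing:.0f} deg (sign arbitrary)")
```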
You produced fifteen classified reports during the war. How do you feel about that work now, looking back?
Complicated feelings. On V-E Day, I told the Ministry, “Very well, I’m going at the end of the month. I’ve had enough of this, I’ve wasted six years.” Churchill decreed scientists work seventy hours a week with no holidays – seven days a week, ten hours a day. I was exhausted and rather resentful. The Pentagon wanted me to join a task force assessing the atomic bomb. I said no. Absolutely not. I’d done my bit.
But “wasted six years” was unfair to the work itself. Calculating death tolls is grim, but it needed doing. Someone had to forecast how many Londoners would die in various bombing scenarios so the government could prepare. That was a statistical problem requiring rigour and detachment. I provided that. If my models helped keep water flowing and gas mains functioning whilst buildings burned, then thousands of people survived who might not have. That’s meaningful. But it was also utterly thankless – classified, anonymous, invisible.
And that classification meant you couldn’t be recognised for arguably your most impactful work?
Precisely. My wartime reports remained secret for decades. People studying the Blitz, writing histories of Britain’s wartime resilience, had no idea a statistician named Florence David had predicted casualty patterns months before the bombs fell and then refined those predictions using real dead bodies. That knowledge sat in government archives. My name didn’t appear. Classic mechanism for erasing women’s contributions – classify the work so it can’t be acknowledged publicly.
After the war you returned to University College London. What was that like?
Resuming normal academic life felt peculiar. I supervised the library, taught computing classes – though by “computing” I mean supervising students cranking Brunsvigas, not programming electronic machines. I completed research projects I’d set aside. Egon Pearson ran the department. Jerzy Neyman had left for Berkeley in 1938, and Ronald Fisher had gone off to Cambridge during the war – though for years before that he’d been upstairs at University College, being impossible.
You’ve described Fisher as “without exception the worst lecturer I have ever heard.” Can you elaborate?
Dreadful. Simply dreadful. Karl Pearson lectured superbly – you’d sit there and let it all soak in. Fisher was incomprehensible. I couldn’t follow his reasoning. And if I raised my hand to ask a question, he wouldn’t answer because I was female. So I’d sit next to Churchill Eisenhart or Sam Wilks – visiting American statisticians – and whisper, “Ask him! Ask him!” They’d pose my question and Fisher would respond. Infuriating.
But perhaps I learned more from Fisher’s lectures than Pearson’s precisely because I couldn’t understand them. I’d spend three hours afterwards in the library working out what he’d meant. Forced me to think deeply. Still, his attitude towards women was appalling. The story went round that when his last child was born – a daughter – one son remarked to another, “Father’s had another failure.”
You mentioned Jerzy Neyman. He encouraged you to get a doctorate?
Yes. I hadn’t wanted one. You didn’t need a doctorate in England at the time; doctorates were rather an American fashion. But Neyman insisted. He was my internal examiner; A.C. Aitken was external. I submitted four papers I’d already published rather than writing a dissertation – that was permitted then. Neyman thought having a doctorate was important. Karl Pearson didn’t. In any case, I paid my twenty pounds entrance fee and received my Ph.D. in 1938. My status didn’t change. I remained an assistant doing computational work.
Did you feel Neyman treated you fairly?
He was kind and helpful, but he expected women to do the statistical-computational clean-up work – the tedious bits he didn’t want to do. He had no overt prejudices, yet he consistently used women rather than men for the computing. He called me “the Duchess” because whenever he asked me to do something I’d often say, “No, I don’t think I’ll do that.” He found that amusing. Unusual, I expect. Most women said yes.
You co-authored “Combinatorial Chance” with D.E. Barton in 1962. What’s that book about, and why combinatorics?
Combinatorics is counting arrangements and patterns. How many ways can you organise objects? What’s the probability of a particular pattern occurring randomly? It underpins much of probability theory. All my life I’d worked on combinatorial problems – perhaps because I’m lazy and combinatorics is about finding clever shortcuts rather than brute-force calculation.
The book covers probability distributions of statistics of random patterns and arrangements, with applications ranging from gambling to nonparametric tests. Barton did the fancy limiting behaviour and asymptotics. I provided the foundations and historical context. We dedicated it to Abraham de Moivre – his Doctrine of Chances from 1718 was foundational. Remarkable how much was “discovered” in the twentieth century that’s already in de Moivre, or Laplace’s Théorie Analytique, if you bother to look.
You also wrote “Games, Gods and Gambling” in 1962, tracing probability theory back to ancient gambling. What drew you to history?
I studied Greek as a child and got rather interested in archaeology. A colleague was excavating in a desert and asked me to analyse the spatial distribution of pottery shards to locate the kitchen midden – archaeologists care about rubbish heaps, not gold. I plotted the shards on a map and realised it was exactly like the V-weapon problem: assume a bivariate normal surface, find the major axes, predict the centre. Same mathematics, entirely different context. There’s a unity amongst problems – only about half a dozen truly different ones.
So I started reading about ancient gambling devices. Knucklebones – talus, the ankle bone of sheep – were the earliest dice. I speculated that gambling might be humanity’s first invention. Certainly probability theory arose from gambling. The priests throwing bones for divination weren’t just performing rituals; they were manipulating probabilities. You’d pay your cow, throw the bones, get an unfavourable answer. The priest would say, “Have another go.” You’d pay again. Eventually you’d get the answer you wanted. Clever business model.
You argue probability theory travelled the Silk Road from China through Tibet?
I suspect so. Bone-throwing appears first in Tibetan monasteries. It spread westward through what’s now Iran but went north in the Mediterranean rather than south – didn’t reach Egypt until Ptolemy. The learning was held in monasteries everywhere. I always wanted to travel the Silk Road from Peking to see what’s in those monasteries. Never did, sadly.
In 1962 you became a Professor at University College London – one of very few women professors at the time. How did that feel?
Pleased, certainly. I was the second woman professor at University College; Elizabeth Wilkinson, a German scholar, was first. But I noticed men typically became professors around age forty. Women, later. I was fifty-three. Egon Pearson had retired around 1960. I had far more publications than anyone else likely to be appointed, but they brought in Maurice Bartlett – a good man, admittedly. He thought I’d leave, but I stayed. When Bartlett left for Oxford in 1967, I was offered the chair, but by then I’d committed to moving to California. Thought I’d try something fresh.
What brought you to California?
I’d been visiting Berkeley regularly since 1958 as a visiting professor, working with the Statistics Department and Applied Climatology and Forestry Divisions. I purchased a house in Kensington – near Berkeley – in 1961, jointly with Evelyn Fix. She was a statistician at Berkeley, wonderful person. We lived together until her death in 1965.
Evelyn Fix was your partner?
Yes. We didn’t speak about such things openly – one couldn’t, not if you wanted to keep your position. But people knew. Living as a lesbian academic in mid-century Britain and America added another layer of difficulty to professional life. Homophobia was pervasive. Open same-sex relationships could end careers. So you were quiet about it. But yes, Evelyn was my partner. Losing her in 1965 was devastating.
In 1968 you moved to UC Riverside and became Chair of the Department of Biostatistics, later Statistics. Why Riverside rather than staying at Berkeley?
Good question. In hindsight, probably a mistake. I thought I’d have a quiet life, do some teaching, pursue research. Instead I walked into a ferocious battle. There was no department when I arrived; we were creating one from nothing. Various other departments wanted to control statistics – Mathematics tried to swallow us, Economics wanted their own statisticians, several others as well. I fought them on the senate floor and across campus. Exhausting.
But you succeeded. You chaired the Statistics Department from 1970 to 1977.
I did, yes. Became one of the first women to lead a major university statistics department in the United States. But it was seven painful years. I was commuting between Riverside and Berkeley – four days a week in Riverside, twelve hours a day, then driving to Berkeley for weekends. Two cars, constantly on the road. Rather absurd. When I retired in 1977 I moved back to Berkeley permanently.
In 1992 you received the first Elizabeth L. Scott Award “for efforts in opening the door to women in statistics.” How did that feel?
Moving, actually. I was eighty-three. The citation mentioned contributions to combinatorics, statistical methods, applications, understanding history, and serving as a role model. I hadn’t thought much about being a role model. But I suppose I was – visibly a woman in a field almost entirely male, publishing extensively, chairing a department. Perhaps some younger women saw that and thought, “Well, if she can do it…”
But receiving recognition at eighty-three, shortly before I died – that’s rather the pattern for women, isn’t it? Honours in very old age, if at all. Men receive recognition in their forties and fifties when it advances their careers. Women receive it as retirement gifts.
Why do you think your contributions have been overlooked?
Multiple reasons. The classified wartime work – couldn’t be acknowledged for decades. Publishing under initials – concealed my sex. Working alongside giants – Karl Pearson, Fisher, Neyman – meant my contributions were viewed through the lens of their more famous work. My early computing work on correlation coefficients was seen as clerical labour, “women’s work,” despite requiring extraordinary mathematical sophistication. Moving to California late in life and chairing at Riverside, a less prominent campus, rather than Berkeley. My scholarly interest in history of probability was seen as “soft” compared to mathematical statistics.
And the broader pattern Margaret Rossiter identified – the “Matilda Effect” – where women scientists’ contributions are minimised or attributed to male colleagues. I embody that. Everything I published before about 1960 could easily be assumed to be a man’s work if you saw “F.N. David” on the title page. How many people realised Florence Nightingale David was a woman?
Looking back, what do you wish you’d done differently?
I wish I’d written more accessibly about the wartime work once it was declassified. Made the case for why those models mattered, how prediction saved lives. I wish I’d been bolder about correcting people who assumed F.N. David was male – but that might have damaged my career irreparably. I wish I’d been able to live openly with Evelyn without fear. But those are wishes for a different world, not different choices within the world I inhabited.
What would you say to young women – and other marginalised people – entering statistics and data science today?
Do the work. Do it rigorously. Ask questions nobody else is asking and try to find the answers. That’s your job – not to be influential, though you might be, but to ask questions and find answers. Don’t wait for permission. Don’t wait to be noticed. Build the infrastructure others will rely on, even if they never know your name.
And document your work. Make sure it’s attributed to you. Don’t let others take credit. I didn’t fight hard enough for that, and I’ve rather vanished as a result. Don’t vanish.
Finally – you turned that Brunsviga hand-crank two million times. What sustained you through that?
Stubbornness, mostly. And the knowledge that the tables would be useful – that researchers across fields would consult them, that statistical inference would advance because someone had computed the bloody things properly. Infrastructure, again. Unglamorous, invisible, essential. I suppose I’ve always been drawn to unglamorous essential work. Turning a crank. Counting corpses. Computing integrals. Someone has to, and if you do it well, the world improves slightly. That’s enough.
Professor David, thank you for this conversation. Your work calculating correlation coefficients, forecasting casualties, advancing combinatorics, and illuminating the history of probability transformed statistics – even if too few people know your name. We’re grateful for this chance to correct that.
Well, thank you. It’s been rather nice to be asked, actually. Eighty-something years after I started cranking that Brunsviga, someone’s still interested. I’ll count that as success.
Letters and emails
Following the interview, we received an outpouring of letters and emails from readers across the globe – statisticians, data scientists, historians, ethicists, and students – each wanting to probe deeper into Florence Nightingale David’s life, her work, and her insights for those pursuing similar paths. We’ve selected five of the most illuminating contributions, representing voices from Europe, Africa, Asia, South America, and Oceania. These questions explore the technical details of her computational methods, the ethical weight of calculating casualties, the unexpected turns in her career, and the counterfactual possibilities of history. They reflect a community eager to learn not just what Florence Nightingale David accomplished, but how she thought, reasoned, and persisted through obstacles that might have stopped others entirely.
Chiara Rossi, 34, Data Scientist, Milan, Italy
Professor David, you mentioned that you computed correlation coefficients to extraordinary precision using mechanical calculators, sometimes chaining three Brunsvigas together for complex calculations. I’m curious about your error-checking protocols – in modern computing we have unit tests and validation frameworks, but how did you verify accuracy when a single miscalculation early in the chain could invalidate hours of work? Did you develop any ingenious techniques for catching errors before they propagated through the entire computation, and would any of those methods still be useful for validating statistical software today?
Ah, excellent question. Error-checking was absolutely critical – one mistake and you’d wasted days, possibly weeks. We had several protocols, though “protocol” makes it sound rather more formal than it was. Mostly it was born of bitter experience with ruined calculations.
First principle: compute everything twice, by different methods if possible. If you’re evaluating an integral numerically, use two different approximation schemes – say, Simpson’s rule and the trapezoidal rule with different step sizes. If they agree to sufficient decimal places, you’ve likely got it right. If they don’t, you’ve made an error somewhere and must start again. Tedious, but essential.
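In your software the same discipline is a few lines. A sketch, assuming the standard scientific-Python integration routines:

```python
# Compute the same integral twice, by different methods, and only accept
# the result when they agree to tolerance -- the wartime rule in miniature.
import numpy as np
from scipy.integrate import trapezoid, simpson

def integrate_both(f, a: float, b: float, n: int = 1001):
    x = np.linspace(a, b, n)     # n odd, as Simpson's rule prefers
    y = f(x)
    return trapezoid(y, x), simpson(y, x=x)

t_val, s_val = integrate_both(lambda x: np.exp(-x**2), 0.0, 3.0)
assert abs(t_val - s_val) < 1e-6, "methods disagree -- stop and investigate"
print(t_val, s_val)              # both near 0.8862
```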
Second: check limiting cases and known values. Suppose you’re computing correlation coefficient distributions for sample size n. For n equals two the answer is trivial – two points always lie exactly on a line, so r is plus or minus one – and for certain other small n the distribution has a simple closed form. So compute those cases on your Brunsviga and verify they match the theoretical values. Similarly, as n grows large, the distribution should approach normality. If your computed values for large n don’t behave properly, something’s wrong. Always test against cases where you know the answer.
Third: maintain intermediate results and cross-check them. When you’re chaining calculations – output from one Brunsviga becomes input to the next – write down every intermediate value in a ledger. At natural breakpoints, verify those intermediates independently. Perhaps the first machine computes a sum of squares; the second uses that to compute a variance; the third uses the variance in a more complex expression. Well, you can check the sum of squares directly from your raw data. You can verify the variance makes sense given the data’s spread. Catch errors early before they poison everything downstream.
Fourth: work in pairs when possible, particularly for crucial calculations. One person operates the Brunsviga and calls out results; the other records them and watches for nonsense values. Four eyes catch more than two. During the war we did this constantly – you couldn’t afford errors when lives depended on your forecasts.
Fifth – and this sounds peculiar but it worked – develop a feel for what reasonable answers look like. After you’ve computed hundreds of correlation coefficients, you develop intuition. If you’re expecting a value around 0.47 and your Brunsviga gives you 0.89, you know immediately something’s gone wrong. Doesn’t tell you what went wrong, but it tells you to stop and investigate rather than proceeding merrily forward with rubbish.
Sixth: use checksums and invariants where possible. In some calculations you know certain quantities must remain constant or must sum to particular values – conservation principles, essentially. After each step, verify those invariants hold. If they don’t, you’ve introduced error.
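A trivial modern rendering of the invariant idea – every number invented:

```python
# Invariant check: however the intermediate arithmetic was done, the row
# totals and column totals of a casualty table must agree with each other.
table = [
    [120, 45, 15],   # district A: killed, seriously injured, slightly injured
    [200, 80, 40],   # district B
]
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
assert sum(row_totals) == sum(col_totals), "invariant violated -- recompute"
print(row_totals, col_totals, sum(row_totals))
```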
Now, would these methods help validate statistical software today? I rather think so, yes. Your computers are vastly faster than my Brunsvigas, but they still make errors – programmer mistakes, numerical instability, floating-point rounding. Testing against known analytical solutions, checking limiting behaviour, maintaining intermediate results for inspection, computing the same quantity by different algorithms and comparing – these are all perfectly applicable to software validation.
In fact, I’d argue software needs more rigorous checking than mechanical calculation, not less. With a Brunsviga, you feel the mechanism. If something jams or behaves oddly, you know immediately. With software, errors hide silently inside the machine. The computer will happily give you complete rubbish with perfect confidence. So you must be more careful, not less.
One technique we used that’s perhaps less relevant now: we’d deliberately introduce small perturbations to input data and observe how outputs changed. If tiny input changes caused wild output swings, we knew we had numerical instability – our method was ill-conditioned. Modern software should do this automatically, but I suspect much of it doesn’t. Sensitivity analysis, you’d call it now. We just called it “checking whether the calculation is sensible.”
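Rendered in modern code, the idea is something like this rough sketch – a crude probe, not a proper conditioning analysis:

```python
# Perturb the inputs slightly and compare the relative change in output to
# the relative change in input; a huge ratio signals an ill-conditioned step.
import numpy as np

def relative_sensitivity(f, x, rel_eps=1e-8, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    base = f(x)
    worst = 0.0
    for _ in range(trials):
        dx = rng.standard_normal(x.shape) * rel_eps * np.abs(x)
        rel_out = abs(f(x + dx) - base) / abs(base)
        rel_in = np.linalg.norm(dx) / np.linalg.norm(x)
        worst = max(worst, rel_out / rel_in)
    return worst

# Subtracting nearly equal numbers is the classic unstable step.
x = np.array([1.0000001, 1.0])
print(relative_sensitivity(lambda v: (v[0] - v[1]) / v[1], x))  # order 1e7
```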
The fundamental principle remains unchanged: never trust a single computation. Always verify. Always cross-check. Always test limiting cases. Computers haven’t altered that necessity one bit – they’ve merely made it easier to avoid doing it, which is dangerous.
Mateo Flores, 41, Urban Planning Researcher, Buenos Aires, Argentina
Your wartime models forecasted casualties and infrastructure failures across utilities – water, gas, electricity, sewers – which strikes me as remarkably interdependent thinking for 1939. Modern resilience planning emphasises cascading failures and network effects, but that language didn’t exist then. How did you conceptualise those interdependencies? When a water main broke and contaminated the sewage system, did you model that as separate events or as coupled systems? I’m wondering whether your thinking anticipated modern complexity theory, or whether you were working from intuition about how cities actually function under stress.
You’ve identified something rather important. We didn’t have the mathematical language for “cascading failures” or “network effects” – those terms came later. But yes, we understood perfectly well that a city is an organism, not a collection of independent systems. Break one thing and everything else fails downstream. That’s not theory; that’s watching cities function.
When I began the wartime work, I started with questions that seemed almost childish in their simplicity. If a bomb falls on Oxford Street, what happens? Well, suppose it severs the water main. Then what? You’ve got fires burning but no water pressure to fight them. People want to drink but taps run dry. Toilets won’t flush, so sewage backs up into homes. Then cholera, typhoid, dysentery – secondary casualties from disease rather than blast. And if the electrical cables are also in that same street – which they often were in London, running down the centre – you’ve lost power. No pumping stations for water. No electric lights for rescue workers. No refrigeration for food.
So no, we didn’t model these as separate events. We modelled them as coupled. A single bomb might create cascades of failure. The question was: which cascades are most lethal? Which utilities, if lost, cause the most damage? And crucially – which can we restore quickest to prevent the cascade?
We analysed the physical layout of London obsessively. Where do water mains run? Where do they intersect with gas pipes? With electricity? Where are the key junctions? If you lose the junction at Piccadilly, what fraction of west London loses water? We actually walked the streets, consulted maps, spoke with engineers from the water board and gas company. Rough analogue to what you’d call network analysis, I suppose.
The gas company was particularly interesting. They explained their repair procedures: when a pipe ruptures, you isolate the section, allow gas to escape (rather unpleasant), then send repair crews. But if multiple ruptures occur simultaneously – which happens during heavy bombing – you can’t repair everything at once. You triage. Which areas serve hospitals? Emergency services? Large shelters where thousands take refuge during raids? You restore those first. Other neighbourhoods wait. We modelled the decision tree: given x ruptures across the network and y repair crews available, which sequence of repairs minimises total casualties?
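For a single crew, incidentally, the optimal sequence follows what operational researchers later formalised as the ratio rule: repair first whatever has the highest casualties-per-hour relative to its repair time. A toy rendering, every figure invented:

```python
# Toy triage schedule for one repair crew (invented figures). Casualties
# accrue at each rupture until it is repaired, so total harm is a weighted
# sum of completion times -- minimised by sorting on rate / repair-time.
ruptures = [
    # (name, casualties per hour while broken, hours to repair)
    ("hospital feeder main",  5.0, 4.0),
    ("shelter district line", 3.0, 2.0),
    ("residential spur",      0.5, 1.0),
]

order = sorted(ruptures, key=lambda r: r[1] / r[2], reverse=True)

elapsed, harm = 0.0, 0.0
for name, rate, hours in order:
    elapsed += hours
    harm += rate * elapsed          # cost accrues until repair completes
    print(f"repair {name} by hour {elapsed:.0f}")
print(f"total casualty-hours: {harm:.1f}")
```

With many crews and a real network the problem becomes far harder, of course – that was precisely the point of modelling it rather than sending out two men with a cart.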
The water people were less sophisticated about it. Their attitude was rather “we’ll just fix them,” which wasn’t acceptable for planning purposes. So we helped them think about priorities. We modelled what happens if various key pumping stations are destroyed. How long can a district survive on stored water in household tanks and public reserves? How quickly must you restore service before dehydration and disease become critical?
For sewers, the analysis was grimmer. Sewage is slow – it’s gravity-fed through pipes, and if the pipes rupture or get blocked by debris, raw sewage backs up into homes. You can’t repair sewers quickly; they’re underground and fragile. So we asked: if sewers fail, where do you relocate people to? You can’t leave thousands living atop broken sewerage. We identified potential temporary sites and calculated logistics – how many portable latrines, how much water for washing, how frequent disposal of waste.
All of this was coupled analysis – you couldn’t consider water separately from electricity separately from sewage. A city’s an integrated system. Damage one component and everything rattles. The mathematics we used was fairly elementary – mostly branching probability trees and constraint satisfaction. Nothing mathematically sophisticated. But the thinking was systemic in a way that wasn’t common in 1939.
Austin Bradford Hill, my supervisor at the Ministry, grasped this immediately. He was brilliant about recognising that we couldn’t reduce London to independent probabilities. Some of my colleagues were less convinced. They’d say, “Right, what’s the probability of a direct hit on a residential street?” Fine, compute that. “What’s the casualty rate given a direct hit?” Compute that. Multiply them and you’ve got expected casualties. Simple. But that ignores interdependencies – the fact that a hit on a street likely ruptures multiple utilities and creates cascades.
I remember one meeting where I tried to explain that the predictions would be substantially wrong if we didn’t account for utilities failure. A rather senior official looked at me and said, “Miss David, we’re trying to forecast civilian casualties from bombing. That’s complicated enough without inventing additional complications.” To which I replied – rather sharply, I’m afraid – “Sir, the complications aren’t invented. They’re built into London’s actual infrastructure. Ignoring them doesn’t make them disappear; it only makes your forecasts useless.”
He wasn’t pleased, but the Ministry backed me. So we built in the utilities failures. When the Blitz actually occurred, we had data. Which utilities failed first? In what sequence? Did cascade effects amplify casualties beyond what simple bomb-impact models predicted? The answer was yes. In several heavily bombed areas, utilities failures and the resulting disease, lack of water for firefighting, and inability to shelter large populations caused more deaths than direct blast effects alone. Our coupled model predicted that. A simpler model wouldn’t have.
Now, I wouldn’t claim I “anticipated” modern complexity theory. I knew nothing of network mathematics or the formal language you have now. But I understood from first principles that a city is interconnected. Break enough connections and the whole system fails. That’s not theory – that’s observation. What modern complexity science did was formalise the mathematics and language to describe what was always intuitively obvious to anyone who’d thought carefully about how systems actually work.
If anything, I suspect modern complexity theory could have learned from wartime logistics. We didn’t call it that, but we were solving network optimisation problems under uncertainty with imperfect information in real time. Each day brought new bombing damage data; each day we updated our models and our recommendations. That’s adaptive resilience planning. Quite relevant to your field, I’d imagine.
Saskia van der Berg, 45, Ethics Consultant for AI Systems, Cape Town, South Africa
The ethical dimension of your wartime work fascinates me – you were literally calculating expected death tolls, predicting how many civilians would die under various bombing scenarios, then watching those predictions materialise and refining your models with actual corpses. That must have been psychologically devastating. How did you maintain the necessary professional detachment to continue that work without becoming either numb to the human cost or paralysed by the moral weight? I ask because contemporary data scientists building predictive models for everything from medical triage to autonomous weapons face similar ethical tensions between technical accuracy and human consequences. Did you develop any mental frameworks that might guide us today?
This is the question I’ve dreaded most being asked, and also the one I’m most grateful to receive. You’ve touched on something I’ve thought about constantly for eighty years.
Let me be direct: I did not maintain detachment. That’s a myth I’ve perhaps encouraged by speaking clinically about the work in later years. When you’re calculating the expected number of Londoners who will die if a one-thousand-pound bomb falls on a residential street at 2 a.m. when most people are indoors, you’re not detached. You’re sick. Genuinely physically sick.
I remember one particular calculation. We were modelling a scenario: heavy bombing on a densely populated neighbourhood in the East End, terraced housing, narrow streets, high occupancy. I computed the expected deaths for different bomb tonnages and dispersal patterns. The numbers were enormous – in the hundreds per incident, sometimes thousands across a night of raids. I finished the calculation, checked it twice, and then I went to the toilet and was physically ill. Couldn’t eat that evening.
The next morning, the very area I’d modelled was bombed. The actual death toll came in. It matched my forecast almost exactly. Which validated the work – the model was accurate. It also made me feel complicit in the deaths. As if by predicting them so precisely, I’d somehow enabled them. Completely irrational, of course, but that’s what I felt.
How did I continue? Partly by compartmentalising – a rather unhealthy strategy, I’ll admit. During work hours, I was a statistician. Numbers. Probabilities. Distributions. The technical problem of constructing accurate forecasts. I focused entirely on that. After hours, I went home. I didn’t think about what the numbers meant in terms of actual human beings. That was a deliberate choice, a kind of professional fence I erected between myself and the implications of what I was doing.
But that compartmentalisation wasn’t complete, and I’m not sure it was healthy. I was perpetually tense during the Blitz itself. Every air raid siren, I’d wonder: are they bombing an area I modelled? Will my forecasts come true tonight? When casualty figures arrived the next day, I’d scan them compulsively. Not out of scientific interest – out of something closer to morbid fascination. As if verifying my predictions had become personal.
The truth is, I also had a framework that I genuinely believed in, which allowed me to do the work without complete moral collapse. Here it is: I was doing this work because someone had to. The government needed accurate forecasts to prepare civil defence. Without those forecasts, preparations would be inadequate, and more people would die. My role wasn’t to cause deaths – deaths were going to happen regardless. My role was to forecast them accurately so the government could mitigate them.
That’s not mere sophistry, though I’ve sometimes wondered if it was. It’s a consequentialist argument: yes, I’m calculating death tolls, but accurate forecasts lead to better civil defence, which saves lives overall. The alternative wasn’t “don’t calculate death tolls” – that wasn’t an option. The alternative was either crude guesses (leading to poor preparation and more deaths) or accurate forecasts (leading to better preparation and fewer deaths). I wasn’t choosing between doing harm and not doing harm. I was choosing between different degrees of harm.
That frame allowed me to continue working. But it required constant reinforcement. I’d remind myself regularly: you’re saving lives by improving preparation, not causing deaths by calculating them.
The other psychological mechanism I used – and this is going to sound rather cold – was focusing on the technical problem rather than the human cost. Whenever I felt the weight of the work becoming overwhelming, I’d redirect my attention to the mathematics. How can I improve the convergence of this numerical integration? What if I try a different approximation scheme? How can I reduce computation time? By focusing on the technical challenge, I could avoid dwelling on what the numbers represented.
Was that emotionally healthy? Almost certainly not. I’m now rather aware that I’ve spent eighty years carrying a low-level guilt about that work, even knowing intellectually that I did it responsibly and that my forecasts likely saved lives. But it allowed me to function at the time, and functioning was necessary.
Now, you ask what I’d say to contemporary data scientists facing similar moral tensions. Here’s what I’ve learned:
First, acknowledge the weight. Don’t pretend it’s not there. Don’t tell yourself you’re “just doing technical work.” You’re not. You’re building models that will affect human lives. That matters morally, and you should feel the weight of it. Scientists who don’t feel any weight when working on consequential problems are either moral failures or aren’t thinking clearly about what they’re doing.
Second, be very careful about your assumptions and uncertainties. When I forecast casualties, I was explicit about what I didn’t know, where my models might break down, what assumptions I’d made. If you’re building a model for medical triage during a crisis, be ruthlessly clear about how your model could fail and who bears the consequences of those failures. Decision-makers need to understand your confidence intervals and limitations, not just your point estimates.
Third, insist on evidence. Don’t let anyone use your models to justify decisions based on ideology or convenience. When my wartime forecasts were used, they were checked against actual data continuously. That accountability mattered. If you build a predictive model for autonomous weapons or medical rationing or policing, insist that it be tested, that its predictions be validated, and that it be revised if it consistently errs in particular directions. A model that systematically underestimates harm to a particular group should be viewed with intense suspicion.
Fourth, maintain your integrity about uncertainty. There will be pressure – enormous pressure – to give confident predictions when you’re actually uncertain. During the war, officials wanted to know: how many Londoners will die in the next month? My honest answer was: I don’t know, but here’s a distribution of probabilities. Some officials wanted a single number. I refused. Single numbers hide uncertainty and invite overconfidence. You must resist that pressure.
Fifth, seek counsel from people who will challenge you morally. I had Austin Bradford Hill and a few others who’d occasionally say, “Are you sure we’re doing this right?” Those conversations were uncomfortable, but they kept me honest. Don’t surround yourself only with people who affirm the work. Find colleagues who’ll ask hard questions about whether you should be doing it, whether you’re doing it responsibly, and whether the intended uses of your work match your intentions.
Finally – and this is crucial – you must maintain the ability to say no. If asked to build a model that you believe will be used harmfully, you have to be able to refuse. That’s not always easy. I was fortunate that I never faced a direct order to do something I considered ethically intolerable. But I recognised that possibility, and I’d decided in advance that I would refuse. You must maintain that capacity.
The guilt I’ve carried about the wartime work – I’m not sure it ever fully goes away. Even now, knowing the work was necessary and likely saved lives, I sometimes wonder about the people whose deaths I predicted with such precision. Whether my forecasts somehow made their deaths more “real” to me than they might otherwise have been. That’s probably irrational guilt, but it’s real.
What I can tell you is that the weight doesn’t destroy you if you carry it consciously. The danger is the weight you don’t acknowledge – the guilt and moral strain that operate below awareness and corrode you from inside. Acknowledge the weight. Share it with people you trust. Use it as motivation to do the work as responsibly as possible. And when you can’t do it responsibly, refuse.
Lachlan Wright, 52, Science Historian, Melbourne, Australia
Suppose your wartime bombing models hadn’t been classified – imagine the Ministry of Home Security published them openly in 1940 as a morale-boosting demonstration of British scientific preparedness, crediting you by name as the lead statistician. How differently might your career have unfolded? Would public recognition of that achievement have opened doors that remained closed to you, or would the nature of the work – forecasting death, advising on civilian casualties – have been too controversial for a woman scientist to survive professionally? I’m trying to understand whether classification hurt you more by hiding your contribution, or whether paradoxically it might have protected you from gendered backlash against women doing “unfeminine” work involving violence and death.
That’s a genuinely difficult hypothetical to assess. I’ve wondered about it myself, actually – not quite in those terms, but in that general spirit. What if the work had been public?
Let me start with what almost certainly would have happened: enormous prestige. Imagine 1940, Britain under bombardment, and the government releases reports saying, “We predicted this. Our statistician forecast casualty patterns and infrastructure failures months before the bombing began. Our preparation saved thousands of lives.” That’s a powerful narrative. Morale-boosting, as you say. And my name attached to it – Florence Nightingale David, pioneering statistician who saved London – would have been impressive indeed.
In that scenario, I think certain doors would absolutely have opened. The universities would have competed for me. University College London would have promoted me faster. I might have become chair of the Statistics Department in the 1950s rather than waiting until the 1970s and moving to California. I might have secured research funding more easily. Invitations to lecture at other universities, to advise governments on statistical problems, to lead major research programmes. That’s what public recognition does – it opens access.
But here’s where the counterfactual becomes genuinely uncertain: would that recognition have come with a cost that outweighed the benefit?
The work itself – calculating death tolls – was inherently associated with darkness and violence. Even if patriotic wartime necessity framed it as serving the nation, it was still a woman publicly associated with forecasting civilian casualties, analysing how many people would die under bombing, refining predictions using actual corpses. There’s something profoundly unfeminine about that in the cultural moment of 1940s Britain.
I suspect – and this is speculative, mind you – that public credit would have attracted a particular kind of backlash. Not from the scientific community necessarily. They’d have been impressed. But from broader society. Journalists and commentators might have focused on the macabre nature of the work. “Woman Statistician Forecasts Deaths of Londoners.” There’d be something almost vampiric about that framing – a woman specialising in calculating death. It could have damaged my reputation outside academic circles in ways that might have ultimately limited rather than expanded my influence.
Consider: in the 1940s and 1950s, women scientists were already struggling against stereotypes about being unfeminine, unnatural, neglecting their proper duties. If I’d become publicly famous for work involving death prediction and casualty analysis, I might have become a symbol of something that frightened people – the notion that women could and would think about violence, death, and suffering in the same cold, analytical way men did. That was genuinely transgressive for the time. It might have made me a cautionary tale rather than an inspiration.
There’s also a gender-specific vulnerability in being credited with war-related achievements. Men who helped the war effort were heroes; women who did so were… well, it’s complicated. Were they necessary contributors or were they taking jobs from men? Were they appropriately feminine in their approach or were they becoming mannish? The fact that I published under initials and no one knew I was a woman – that actually provided some protection. My work could be assessed on technical merit without the constant undercurrent of questions about whether a woman should be doing it at all.
Now, the other side of the counterfactual: would public recognition have given me enough prestige to transcend those gendered concerns? Possibly. If I’d been famous enough – a household name, a celebrated scientific figure – I might have been more insulated from gendered criticism. Famous people sometimes get a pass on being transgressive. They become icons, and you can’t easily dislodge icons.
But there’s genuine uncertainty there. I might have become a different kind of marginalised figure – celebrated for my wartime achievement but never quite accepted into the broader academic community, always slightly suspect, always having to prove I wasn’t just a one-wartime-wonder. That’s actually a real phenomenon: women who achieve public recognition for one major accomplishment sometimes find their subsequent work is viewed through that lens, rather than on its own merits.
Let me be honest about what actually happened with the classification: I’m genuinely unsure whether it hurt or helped me overall.
The hidden work meant I couldn’t capitalise on the most important scientific achievement of my career. I couldn’t list those wartime reports on my CV – they were classified, so they didn’t exist officially. That absolutely cost me recognition and probably professional advancement. If those fifteen reports had been public, my reputation would have been substantially higher when I took academic positions in the 1960s and 1970s.
But the classification also meant I couldn’t be attacked for the work. I wasn’t a public figure associated with death prediction. I wasn’t controversial. I was simply F.N. David, a competent statistician publishing solid technical work. That invisibility came at a cost, but it also provided a kind of safety.
And here’s a subtler point: the classification meant my other work stood on its own. The correlation coefficient tables, the combinatorics research, the papers on probability theory – these were evaluated purely on technical merit. Nobody said, “Well, she’s really just a wartime calculator, and now she’s trying to do serious mathematics.” My contributions to probability and combinatorics weren’t overshadowed by the wartime work because nobody knew about it. In a perverse way, that might have helped my intellectual reputation, even if it damaged my fame.
What I think happened is this: the classification cost me public recognition and some professional advancement, but protected me from gendered backlash that might have damaged my scientific credibility. It’s a trade-off. Better recognition but potentially more vulnerability, or less recognition but more insulation. I’m not sure I got the better bargain – I suspect I didn’t. But I’m also not certain that full public credit would have been unambiguously better. The gendered dimensions of being a woman scientist forecasting death – that’s genuinely treacherous territory.
The most frustrating part isn’t the lost prestige, actually. It’s that the historical record is incomplete. Future historians studying the Blitz didn’t know that a statistician named Florence David had developed the predictive models. They attributed the civil defence preparations to various officials and planners without understanding the mathematical foundation. So my work – however flawed, however limited – didn’t contribute to the official understanding of how Britain survived the bombing. That’s a kind of historical erasure that bothers me more than personal lost prestige.
If I could redesign the counterfactual, I’d want the middle ground: publication of the work and credit for it, but framing it in a way that highlighted the technical accomplishment rather than the macabre content. “Statistician Develops Predictive Models for Infrastructure Resilience Under Bombing.” That’s dry, technical, and doesn’t invite gendered commentary about women and death. But I doubt such framing would have been possible in 1940s wartime Britain. The work was too charged with human consequence to be narrated purely technically.
So I’m rather stuck with the scenario that actually occurred: hidden work, hidden identity (as F.N. David rather than Florence), recognition only much later in life. It wasn’t ideal. But whether the alternative would have been better – I genuinely don’t know. Perhaps I was protected more than harmed by the classification. Perhaps the opposite. History doesn’t run twice, so we can’t be certain.
What I can say is this: young women scientists today shouldn’t have to make that trade-off. They shouldn’t have to choose between public recognition and freedom from gendered attack. The work should speak for itself, regardless of the worker’s sex. That’s a world still to be built.
Anaya Sharma, 28, Biostatistics PhD Candidate, Bangalore, India
You moved into biostatistics late in your career at Berkeley and Riverside. Given your background in pure probability theory and wartime operational research – both quite far from biological systems – what drew you to biostatistics specifically? And did you find the transition challenging? I ask because I’m moving from mathematical statistics into computational biology myself, and I wonder whether your combinatorial thinking translated naturally to biological problems, or whether you had to rebuild your intuition entirely around living systems that behave so differently from mechanical ones.
Ah, this is a more personal question than you might realise. The move to biostatistics wasn’t actually as dramatic a departure as it appears on paper, though I understand why it might seem that way.
Let me start with why I moved. When I arrived at Berkeley in 1961, the Statistics Department was doing what most statistics departments do – lots of mathematical theory, some applications to agricultural and industrial problems. Respectable work, but increasingly I found it rather sterile. Pure mathematics is lovely, but after the war I’d tasted applied work – real problems with real consequences. And by the 1960s, I was becoming interested in research that actually mattered to people’s lives.
The biostatistics division at Berkeley was doing exciting work. They were collaborating with the Medical School, the School of Public Health, various research institutes. Researchers would bring messy biological data – variations in measurements, confounding variables, uncertainty about mechanisms – and ask, “What does this mean?” That’s the sort of question I’d always enjoyed. Not “prove this theorem” but “what does the evidence tell us?”
Now, as for translation of skills: combinatorial thinking did transfer naturally to biological problems, more than you might expect. Here’s why: biological systems generate combinatorial patterns everywhere. Genetic inheritance is combinatorial – you’re sampling from populations with specific allele frequencies, and you want to understand expected frequencies in offspring. Disease transmission through populations follows combinatorial principles. Experimental designs in biology – how to arrange treatments, controls, replicates to separate biological signal from noise – that’s deeply combinatorial.
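To see the kind of combinatorial calculation being described, here is a minimal Python sketch of expected offspring genotype proportions under random mating. It is illustrative only – the allele frequency is invented and the code is not drawn from David’s papers:

```python
def expected_genotype_frequencies(p: float) -> dict[str, float]:
    """Expected genotype proportions in offspring under random mating,
    given frequency p of allele A (and 1 - p of allele a)."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

if __name__ == "__main__":
    for genotype, freq in expected_genotype_frequencies(0.3).items():
        print(f"{genotype}: {freq:.3f}")   # AA: 0.090, Aa: 0.420, aa: 0.490
    # Three outcomes, one constraint: the proportions sum to 1,
    # exactly as an exhaustive combinatorial enumeration should.
```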
But you’re right that I had to adjust my intuition. The adjustment was less about the mathematics and more about developing respect for biological complexity.
In my wartime work and my probability research, I was analysing systems with reasonably clear rules. Bombs follow physics. Dice follow combinatorics. The distributions are often neat. Biology isn’t like that. Organisms vary wildly. Individual variation in response to treatment is enormous. You measure something in ten organisms and get ten different values, and that’s normal, not a sign you’ve made an error. In my previous work, that would be alarming – it would suggest your measurement procedure is rubbish. In biology, it’s expected.
I remember one early project. We were analysing survival times in cancer patients receiving various treatments. The question seemed straightforward: does treatment A produce longer survival than treatment B? But the data were terribly messy. Some patients responded dramatically; others didn’t respond at all. Some had complications that limited how much treatment they could tolerate. Dose varied. Timing varied. Age, general health, other illnesses – all confounding variables.
I rather naively suggested to the biologists, “Well, can’t we control for these variables?” And one of them – a rather patient man – explained that you can’t control most biological variables. A human organism isn’t a laboratory apparatus. You can’t hold everything constant whilst varying one thing. You have to work with the variation that actually exists.
That was genuinely humbling. My entire training had been in mathematical systems where you could idealise, simplify, control conditions. Here I was being asked to extract signal from noise when the noise was irreducible, not a measurement error but genuine biological variation.
The transition required learning new concepts. In pure probability theory, I’d worked with distributions that were theoretically derived. You’d assume the underlying model – say, independent trials with constant probability – and derive what the observed frequencies should look like. In biostatistics, you confront distributions empirically and you’re never quite certain about the underlying model. Maybe inheritance follows Mendelian genetics, but maybe there’s incomplete penetrance or variable expressivity. Maybe disease transmission follows standard epidemiological models, but maybe there are super-spreaders or asymptomatic carriers. You build models, but you hold them lightly.
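A small sketch can illustrate the contrast she draws – deriving frequencies from an assumed model, then watching empirical data drift away from it. All probabilities and sample sizes below are invented for the illustration:

```python
import random
from math import comb

# Affected offspring in four-child sibships: assume independent trials
# with constant probability p and derive the binomial frequencies...
n, p, families = 4, 0.25, 10_000
theoretical = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

# ...then simulate families where the per-family probability itself varies
# (incomplete penetrance, variable expressivity): the assumed model is
# only approximately true.
random.seed(1)
counts = [0] * (n + 1)
for _ in range(families):
    p_family = min(max(random.gauss(p, 0.10), 0.0), 1.0)
    k = sum(random.random() < p_family for _ in range(n))
    counts[k] += 1

for k in range(n + 1):
    print(f"k={k}: theory {theoretical[k]:.3f}  observed {counts[k] / families:.3f}")
# The observed tails are fatter than the binomial predicts:
# build the model, but hold it lightly.
```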
I also had to develop intuition about what biological measurements actually mean. In my previous work, if you measured something twice and got the same answer, you assumed your measurement was valid. In biology, you measure the same thing twice on the same organism and get different answers – because the organism itself has changed between measurements, or because your measurement technique has error, or both. Learning to think probabilistically about measurement error rather than treating it as a nuisance was important.
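Her point about measurement can be made concrete in a few lines: simulate duplicate measurements on each organism, then split the total variance into a biological component and a measurement component. The variances below are hypothetical:

```python
import random
random.seed(7)

n_organisms, sigma_bio, sigma_meas = 200, 2.0, 0.8
pairs = []
for _ in range(n_organisms):
    true_value = random.gauss(10.0, sigma_bio)   # the organism's own level
    pairs.append((true_value + random.gauss(0, sigma_meas),
                  true_value + random.gauss(0, sigma_meas)))

# The difference between duplicates contains only measurement error:
# Var(a - b) = 2 * sigma_meas**2, so half the mean squared difference
# estimates the measurement-error variance.
within = sum((a - b) ** 2 for a, b in pairs) / (2 * n_organisms)

# Total variance of single measurements = biological + measurement.
values = [v for pair in pairs for v in pair]
grand = sum(values) / len(values)
total = sum((v - grand) ** 2 for v in values) / (len(values) - 1)

print(f"measurement variance ~ {within:.2f} (true {sigma_meas ** 2:.2f})")
print(f"biological variance  ~ {total - within:.2f} (true {sigma_bio ** 2:.2f})")
```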
Here’s where combinatorial thinking actually helped: when you’re designing an experiment to distinguish amongst competing hypotheses, you need to think about all possible outcomes and which hypotheses they’d support. That’s combinatorial reasoning. You’re essentially listing all the possible data patterns and reasoning about which patterns are consistent with which models. I found I could help biologists think through experimental designs by applying that combinatorial logic to biological questions.
One specific example: we were working with geneticists studying inheritance patterns in wheat. They wanted to know whether a particular trait followed simple Mendelian inheritance or involved multiple genes. The combinatorial question was: what patterns of segregation ratios in offspring would you expect under different genetic models? If you assume one gene with a dominant allele, you’d expect a 3:1 phenotype ratio among offspring of two hybrids. If two independently assorting genes, a 9:3:3:1 ratio. If the genes interact, modified ratios again – 15:1, 9:7, and so on. We could then examine the actual data and ask: which pattern do we observe? That’s pure combinatorics applied to biology.
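A hedged sketch of how such a model comparison might look today, using a standard chi-square goodness-of-fit test. The wheat counts are invented, and the 15:1 ratio stands in for one possible two-gene model:

```python
from scipy.stats import chisquare

observed = [172, 68]   # hypothetical F2 counts: trait present, trait absent
total = sum(observed)

models = {
    "one gene, dominant (3:1)":   [3 / 4, 1 / 4],
    "two duplicate genes (15:1)": [15 / 16, 1 / 16],
}

for name, proportions in models.items():
    expected = [w * total for w in proportions]
    stat, pval = chisquare(f_obs=observed, f_exp=expected)
    print(f"{name}: chi2 = {stat:.2f}, p = {pval:.4f}")
# These data comfortably fit 3:1 and firmly reject 15:1 -- the
# 'which pattern do we observe?' question, answered numerically.
```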
But the most important intellectual shift was learning to think probabilistically about causation in observational settings. In probability theory and pure statistics, we deal with well-defined probability models. In biostatistics, you’re often trying to infer causation from observation rather than experiment. You observe that people who take a particular drug have better outcomes than those who don’t. Does the drug cause the improvement, or do people who take it differ in other ways? That’s a causal inference problem, not a probability problem per se, though probability underpins the methods.
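A toy simulation makes the confounding problem visible: if healthier people both take the drug more often and fare better anyway, the naive comparison overstates the drug’s effect, whilst stratifying on the confounder recovers something close to the truth. Every number below is invented:

```python
import random
random.seed(3)

TRUE_EFFECT = 1.0
people = []
for _ in range(50_000):
    healthy = random.random() < 0.5
    takes_drug = random.random() < (0.8 if healthy else 0.2)  # health drives uptake
    outcome = ((2.0 if healthy else 0.0)                      # health drives outcome too
               + (TRUE_EFFECT if takes_drug else 0.0)
               + random.gauss(0, 1))
    people.append((healthy, takes_drug, outcome))

def avg(rows):
    return sum(r[2] for r in rows) / len(rows)

treated = [r for r in people if r[1]]
untreated = [r for r in people if not r[1]]
print(f"naive difference:    {avg(treated) - avg(untreated):.2f}")  # ~2.2, biased

# Stratify by the confounder and average the within-stratum differences.
diffs = [avg([r for r in people if r[1] and r[0] == h])
         - avg([r for r in people if not r[1] and r[0] == h])
         for h in (True, False)]
print(f"stratified estimate: {sum(diffs) / 2:.2f} (true effect {TRUE_EFFECT})")
```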
I found myself reading epidemiology literature – work by people like Richard Doll studying smoking and lung cancer. That was fascinating and rather humbling. Here were researchers using imperfect data to establish cause-and-effect relationships in real populations. The methods were elegant but always acknowledged the limitations and uncertainties. It was applied statistics in its finest form – rigorous but humble about what the data could and couldn’t tell us.
Did I find the transition challenging? Honestly, yes. Not because the mathematics was harder – the mathematics in biostatistics is often simpler than in pure theory. It was challenging because it required admitting that not everything can be formalised. Biological systems resist mathematical tidiness. You have to live with uncertainty in a way that mathematical work doesn’t require. You propose a model and the data never quite fit perfectly. You can improve the fit, but there’s always residual unexplained variation. That residual variation isn’t a failure of your model; it’s often a reflection of genuine biological reality – variation that’s intrinsic to living systems.
For you moving from mathematical statistics into computational biology, here’s what I’d say: don’t expect biological data to behave like theoretical distributions. That’s the first and most important lesson. Individual variation isn’t noise; it’s signal about biological reality. Learn to respect that variation rather than trying to minimise it away.
Second: develop collaborations with biologists who understand their systems deeply. I was fortunate to work with genuinely thoughtful researchers who could explain why organisms behave the way they do. That biological understanding informed my statistical thinking. Don’t just take data from biologists and analyse it in isolation. Understand the biology.
Third: embrace computational approaches. I worked with computers in my later career, and they’re genuinely transformative for biostatistics. You can simulate biological systems computationally, then compare simulations to real data. You can fit complex models that wouldn’t be tractable by hand. Your work in computational biology will have capabilities I never had. Use them.
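In that spirit, a minimal Monte Carlo sketch: shuffle the treatment labels many times to simulate a world where the treatment does nothing, and ask how often chance alone reproduces the observed difference. The measurements are invented:

```python
import random

treatment = [8.1, 9.4, 7.7, 10.2, 8.8, 9.9]   # hypothetical responses
control   = [7.2, 6.9, 8.0, 7.5, 6.4, 7.8]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(treatment) - mean(control)

# Null model: the treatment labels are arbitrary. Simulate that world by
# shuffling the labels and recording the difference each shuffle produces.
random.seed(42)
pooled = treatment + control
n_sims, extreme = 10_000, 0
for _ in range(n_sims):
    random.shuffle(pooled)
    if mean(pooled[:6]) - mean(pooled[6:]) >= observed_diff:
        extreme += 1

print(f"observed difference: {observed_diff:.2f}")
print(f"Monte Carlo p-value: {extreme / n_sims:.4f}")
# If the simulated null world rarely reproduces what was observed,
# the data are telling you something the null model cannot.
```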
Finally: maintain mathematical rigour whilst admitting the limits of formalisation. You can build elegant mathematical models, but remember they’re approximations to messy reality. That combination – rigorous mathematics applied humbly to imperfect systems – is the heart of good biostatistics.
I spent my early career training myself to think like a pure mathematician. My later career taught me to think like an applied statistician working with real biological complexity. The transition was genuine, and it was valuable. Both ways of thinking matter. Mathematical elegance without application is sterile; application without mathematical rigour is sloppy. The best work combines both.
Your move from mathematical statistics to computational biology – you’re embarking on that same journey. It’s unsettling at times, because biological systems don’t resolve as neatly as equations. But it’s also intellectually richer. You’re working with genuine complexity rather than idealised simplicity. That complexity is where the most interesting questions live.
Reflection
Florence Nightingale David died of lung cancer on 23rd July 1993, aged eighty-three, whilst working on her eleventh book. She had spent more than six decades contributing to statistics, probability theory, and the history of her discipline – yet remained largely unknown outside specialist circles. Speaking with her today, across the boundary of time, reveals not only the extraordinary breadth of her contributions but also the particular cruelties of historical erasure that befell women scientists of her generation.
Throughout this conversation, several themes emerged with striking clarity. David’s perseverance in the face of institutional barriers – barred from actuarial work because of her sex, publishing under initials to avoid discrimination, fighting territorial battles on university senate floors – reflects a career built through stubborn determination rather than easy recognition. Her ingenuity manifested not only in mathematical elegance but in practical problem-solving: hand-cranking Brunsviga calculators two million times, developing error-checking protocols that presaged modern computational validation, conceptualising infrastructure interdependencies decades before “network effects” entered the vocabulary. Most painfully, the interview illuminated how women’s contributions to STEM have been rendered invisible through multiple mechanisms – classification of wartime work, gender concealment through initials, overshadowing by famous male mentors, and the gendered devaluation of computational labour as “clerical” despite its mathematical sophistication.
What emerged most powerfully in David’s responses was something rarely captured in the historical record: the emotional and ethical weight of her wartime work. Whilst archival sources document the fifteen classified reports she produced for the Ministry of Home Security, they cannot convey what she described – the physical illness after calculating expected death tolls, the sick fascination with verifying her predictions against actual casualty figures, the eighty years of guilt carried even whilst knowing intellectually that her forecasts likely saved lives. Her candour about the psychological cost of forecasting civilian deaths, and her refusal to claim perfect detachment, offers contemporary data scientists wrestling with similar ethical tensions a more honest framework than professional mythology about scientific objectivity.
David’s perspective also diverged from some recorded accounts in subtle but meaningful ways. Whilst she publicly described her wartime service as necessary work competently executed, in our conversation she acknowledged ambivalence – calling it “wasted six years” immediately after V-E Day, yet recognising decades later that the models directly saved lives by enabling better civil defence preparation. Her relationship with the statistical giants she worked alongside – Karl Pearson, Ronald Fisher, Jerzy Neyman – was more complex than hagiographic accounts suggest. She described Pearson as “terrifying” yet kind, Fisher as brilliant but openly contemptuous of women, and Neyman as helpful yet exploitative of women’s computational labour. These nuanced assessments resist simple narratives about mentorship and collaboration.
Gaps and uncertainties in the historical record remain substantial. The fifteen classified wartime reports David produced are referenced in various sources, but their full content and the precise extent of their influence on civil defence planning during the Blitz remain partially obscured by archival limitations and continued classification of some materials. Her personal life, particularly her relationship with statistician Evelyn Fix, is documented primarily through institutional records and brief mentions rather than personal correspondence or detailed biographical treatment. Whether her late-career move to UC Riverside was professionally advantageous or, as she suggested, “probably a mistake” is contested – some accounts celebrate her departmental leadership whilst she herself described exhausting territorial battles. The hypothetical question Lachlan Wright posed – whether public recognition of her wartime work would have helped or harmed her career – remains genuinely unresolvable, though David’s own uncertainty about the answer is telling.
The connections between David’s work and contemporary challenges are striking and immediate. Her hand-computed correlation coefficient tables provided the infrastructure enabling mid-century statistical research across disciplines – exactly as open-source statistical libraries and computational tools do today, often built by similarly under-recognised contributors. Her wartime casualty forecasting anticipated modern predictive analytics, from pandemic modelling guiding lockdown decisions to urban resilience planning and disaster preparedness. The ethical tensions she navigated – calculating death tolls, refining models against actual casualty figures, maintaining accuracy whilst acknowledging moral weight – parallel contemporary debates about algorithmic decision-making in medical triage, autonomous weapons, and predictive policing. Her experience publishing under initials to conceal her gender remains relevant; research continues to document publication bias against women’s names in STEM fields. And her leadership as the first woman to chair a major university statistics department in the United States presaged ongoing struggles for gender equity in statistical science leadership.
The afterlife of David’s work reveals both influence and continued marginalisation. Her 1938 Tables of the Correlation Coefficient remained standard references through the 1970s, cited extensively across psychology, biology, economics, and the social sciences until electronic computation made hand-computed tables obsolete. Her 1962 book Games, Gods and Gambling is still recognised as a pioneering work in the history of probability, tracing the discipline’s origins back to ancient knucklebone dice and speculating that gambling might be humanity’s first invention. Combinatorial Chance, co-authored with D.E. Barton, advanced combinatorial methods with clarity that made complex techniques accessible to researchers across fields. Yet whilst these contributions are acknowledged within specialist statistical circles, David herself remains far less known than male contemporaries whose work was comparable or even less impactful. The 1992 Elizabeth L. Scott Award – the first ever given – recognised her efforts “opening the door to women in statistics,” but came when she was eighty-three, shortly before her death. Recognition in very old age, as she noted wryly, is rather the pattern for women.
More recently, feminist historians of science and statisticians interested in recovering overlooked women’s contributions have begun documenting David’s life and work more thoroughly. The National Archives blog highlighted her wartime contributions in 2023. Statistical societies have featured her in profiles celebrating women statisticians. Academic papers examining the Matilda Effect – the minimisation of women scientists’ contributions – cite David as a textbook example. She is gradually being recovered from historical obscurity, though the process is slow and incomplete.
For young women pursuing paths in science today, David’s story offers both inspiration and warning. Her intellectual achievements were extraordinary – foundational reference works, wartime models that saved lives, pioneering research in combinatorics and probability history, departmental leadership at a time when women were systematically excluded from such positions. She succeeded not because the system supported her but despite its active resistance. Yet that success came at substantial cost: the emotional toll of classified invisible work, the professional consequences of gender concealment, the late recognition that arrived too late to advance her career, the low-level guilt carried for eighty years about forecasting deaths even whilst knowing she’d likely saved lives.
What matters most is not simply celebrating David’s resilience – though her stubborn persistence deserves recognition: turning that Brunsviga crank two million times, fighting senate battles to establish her department, circumventing Fisher’s refusal to answer a woman’s questions by having male colleagues pose them on her behalf. What matters is ensuring that future generations of women scientists don’t have to demonstrate quite so much resilience. They shouldn’t have to publish under initials. They shouldn’t have to wait until eighty-three for recognition of work done in their thirties. They shouldn’t have to choose between visibility and protection from gendered backlash. They shouldn’t find their most impactful contributions classified and forgotten whilst male colleagues receive credit for comparable work.
David’s final words in our interview resonate: “Do the work. Do it rigorously. Ask questions nobody else is asking and try to find answers. That’s your job – not to be influential, though you might be, but to ask questions and find answers. Don’t wait for permission. Don’t wait to be noticed. Build the infrastructure others will rely on, even if they never know your name. And document your work. Make sure it’s attributed to you. Don’t let others take credit. I didn’t fight hard enough for that, and I’ve rather vanished as a result. Don’t vanish.”
That instruction – don’t vanish – carries particular weight coming from someone who spent much of her career deliberately concealing her identity to avoid discrimination, whose most important work remained classified for decades, who published foundational infrastructure that enabled entire fields yet saw her name fade from the historical record. Florence Nightingale David did vanish, to a remarkable and unjust degree. But she also left behind work that endures: tables that enabled mid-century research, models that saved civilian lives, combinatorial methods that advanced probability theory, historical research that traced her discipline’s origins to ancient gambling bones.
The task now is ensuring that her name endures alongside her contributions – that when we teach correlation coefficients, we mention the woman who computed their distributions by hand; that when we discuss wartime operational research, we credit the statistician who forecasted casualties to save lives; that when we examine barriers facing women in STEM, we remember Florence Nightingale David not merely as a victim of those barriers but as someone who breached them repeatedly through sheer stubborn brilliance, leaving cracks through which others might more easily pass.
She turned a hand-crank two million times to build infrastructure others would rely on. She calculated death tolls to save lives. She published under initials to have her work read. She fought territorial battles to chair a department. She carried guilt for eighty years about work she knew was necessary. And she kept asking questions nobody else was asking, kept finding answers, kept building the foundations on which modern statistics rests.
Don’t vanish. That’s the legacy. Make your work matter. Make your name known. Build infrastructure, ask questions, find answers. And ensure that the next generation inherits a world slightly more just than the one you were born into – a world where genius doesn’t require quite so much resilience simply to be recognised.
Florence Nightingale David deserved better from history. We can’t give her that retroactively. But we can ensure her story reaches those who need it most: young women computing tables, forecasting outcomes, building models, asking difficult questions. Women who might otherwise vanish. Women who, knowing David’s story, might choose to stay visible instead.
Editorial Note
The interview transcript above is a fictional dramatisation based on extensive historical research into Florence Nightingale David’s life, work, and publicly documented perspectives. Whilst grounded in biographical fact, archival records, published writings, and authenticated interviews David gave late in her life, the conversation itself is imagined. We have not interviewed Florence Nightingale David directly; she died on 23rd July 1993.
This reconstruction draws on several sources of evidence. David’s published works – including her mathematical papers, Games, Gods and Gambling (1962), Combinatorial Chance (1962), and her historical research on probability theory – provide insight into her intellectual interests and thinking. Biographical accounts from the MacTutor History of Mathematics Archive, the UC Berkeley Department of Statistics, and historical records held at the National Archives document her life and career. A recorded conversation between David and the pharmacologist and science blogger David Colquhoun (archived at dcscience.net) captures her voice and reflections on her wartime work, her relationships with statistical giants like Fisher and Neyman, and her views on gender discrimination in science. Obituaries and memorial notices published in statistical journals provide additional context.
However, this dramatised format necessarily involves interpretation, inference, and creative reconstruction. The dialogue is invented. The specific phrasing of David’s responses, whilst consistent with her documented views and speaking style, represents our best approximation rather than her actual words. Some anecdotes are drawn directly from her recorded reflections; others are plausible reconstructions based on historical context. The five reader questions in the supplementary section are entirely fictional, as are the responses attributed to David.
Where historical uncertainty exists – particularly regarding the full content and impact of her classified wartime reports, the details of her personal relationship with Evelyn Fix, or the precise counterfactual of how public recognition might have altered her career trajectory – we have attempted to represent that uncertainty honestly rather than manufacturing false certainty. When David’s own recollections might differ from institutional records, we have prioritised her own account whilst acknowledging the gaps between what she experienced and what formal history has recorded.
This approach serves a particular purpose: to restore Florence Nightingale David to historical visibility by allowing her voice and perspective to be heard, even in dramatised form, rather than filtering her story exclusively through secondary commentary. It is an act of historical recovery, not definitive biography. Readers seeking rigorous historical documentation should consult the archival sources and published scholarship that informed this reconstruction. Readers seeking to understand not merely what David accomplished but how she thought, reasoned, and persisted in the face of institutional barriers may find value in this dramatisation.
The integrity of this reconstruction rests on fidelity to documented fact, honest representation of uncertainty, and clear acknowledgement of what is historical reconstruction versus historical fact. We offer it in that spirit – as a tribute to a scientist whose contributions were too long overlooked, and as an invitation to readers to engage with the fuller historical record that makes such overlooking visible.
Who have we missed?
This series is all about recovering the voices history left behind – and I’d love your help finding the next one. If there’s a woman in STEM you think deserves to be interviewed in this way – whether a forgotten inventor, unsung technician, or overlooked researcher – please share her story.
Email me at voxmeditantis@gmail.com or leave a comment below with your suggestion – even just a name is a great start. Let’s keep uncovering the women who shaped science and innovation, one conversation at a time.
Bob Lynn | © 2025 Vox Meditantis. All rights reserved.

