This Is Why Pregnancy Algorithms Must Evolve
What happens when a clinical tool is considered “accurate” - but only for some people?
That’s the uncomfortable truth behind Norway’s eSnurra algorithm for pregnancy dating. It was designed to improve consistency and precision. But here's the catch: eSnurra was built on data from a population that doesn’t reflect the full diversity of Norway today.
And that raises a critical question: How inclusive is a standard if it was never validated for everyone?
Where the Problem Starts
eSnurra was developed in 2007 using data from ~40,000 pregnancies, mostly white, mostly from Trondheim. It uses ultrasound measurements to estimate due dates and fetal growth. Sounds scientific. Sounds neutral.
But science isn’t neutral when it overlooks the populations it’s supposed to serve.
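To see why the cohort matters, here is a minimal sketch of the general idea behind population-based dating tools: a regression, fitted on one cohort, maps an ultrasound biometric (here, biparietal diameter in millimetres) to an estimated gestational age. This is not eSnurra’s actual model, which is not reproduced here; the function name and coefficients are hypothetical placeholders.

```python
# Illustrative only - a toy stand-in for a population-derived dating curve.
# The coefficients are invented, NOT eSnurra's.

def estimate_gestational_age_days(bpd_mm: float) -> float:
    """Estimate gestational age (days) from biparietal diameter (mm).

    A hypothetical quadratic fit standing in for a curve derived
    from one reference cohort.
    """
    a, b, c = 40.0, 2.5, 0.01  # hypothetical coefficients
    return a + b * bpd_mm + c * bpd_mm ** 2

# The equity problem in one line: the same measurement always returns
# the same estimate, regardless of whether the patient resembles the
# cohort the curve was fitted on.
print(estimate_gestational_age_days(50.0))
```

The curve itself is deterministic and “precise” - the question the article raises is whether its coefficients generalize to people who were not in the training cohort.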
And here's where things get dangerous: 📚 All ultrasound midwives in Norway are trained at NTNU in Trondheim. And they're taught that eSnurra is the best method, with no instruction on its data limitations, its bias, or its lack of validation for non-white populations.
That’s not just a clinical gap - it’s a failure in evidence-based teaching.
Why It Matters
Different racial and ethnic groups have documented variations in fetal growth patterns and gestational length. Ignoring that means eSnurra may misclassify gestational age for certain groups, especially people of African, South Asian, or Middle Eastern backgrounds.
That misclassification can have real consequences:
Missed preterm risks
Unnecessary inductions or emergency C-sections
Increased maternal and neonatal complications
Erosion of trust among marginalized patients
And when future midwives are taught one tool, one dataset, one “truth,” without the full context, we’re not teaching them how to provide inclusive care - we’re training them to perpetuate bias.
All because the algorithm wasn’t made to see everyone. That’s not evidence-based medicine. That’s an algorithm taught as a belief system.
What Inclusion in Healthcare Actually Requires
Here’s the part we don’t talk about enough: 📊 Inclusion isn’t just about access to care. It’s about whether the tools, data, and decisions within that care reflect the people receiving it.
Clinical algorithms must be validated for diverse populations. Not just once. Not just in a footnote. But as a standard of evidence-based practice.
A Truly Inclusive Approach Would Involve:
Publishing the demographic breakdown of training data.
Conducting follow-up studies for underrepresented populations.
Teaching healthcare professionals to critically assess the tools they use.
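A minimal sketch of what the second and third points could look like in practice: an audit that stratifies a dating tool’s prediction error by demographic group instead of reporting one pooled number. The records and group labels below are invented for illustration; a pooled average can look acceptable while one subgroup is systematically misdated.

```python
from collections import defaultdict

# Hypothetical audit records: (group, predicted GA, actual GA), in days.
records = [
    ("group_a", 280, 281), ("group_a", 275, 274), ("group_a", 282, 282),
    ("group_b", 280, 286), ("group_b", 277, 284), ("group_b", 281, 288),
]

def error_by_group(records):
    """Mean signed error (predicted - actual) per group, in days."""
    sums = defaultdict(lambda: [0.0, 0])
    for group, predicted, actual in records:
        sums[group][0] += predicted - actual
        sums[group][1] += 1
    return {g: total / n for g, (total, n) in sums.items()}

# group_a averages to zero error; group_b is dated almost a week early.
# A pooled mean would hide exactly the disparity this audit surfaces.
print(error_by_group(records))
```

Publishing this kind of stratified error table alongside the pooled accuracy figure is one concrete way to make “validated for diverse populations” verifiable rather than assumed.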
Norway has the scientific capacity to lead here. But only if it moves beyond local expertise and acknowledges the national, and increasingly international, diversity it serves.
Key Takeaways
Inclusion requires scrutiny, not just standardization.
Teaching unvalidated tools as universal truths puts patients at risk.
If we want inclusive healthcare, we need evidence-based medicine and evidence-based education.
Lesson Learned
A clinical standard that isn’t inclusive isn’t a standard - it’s a shortcut. And shortcuts don’t belong in maternal care.
P.S.
This issue is bigger than algorithms. It’s about who gets seen, heard, and served in our systems. If you're working in healthcare policy, public health, or digital health innovation, let’s collaborate to center equity from the start.
📩 inclusiveleadership.solutions
And if you want to dig deeper into what inclusive leadership looks like in practice, check out Inclusive Leadership Trends for 2025. It’s packed with insights on the intersection of inclusion, data, and decision-making.