Blood tests are typically standardised for all patients, but machine learning technology can help determine what constitutes a "normal" range for each individual.
If you’ve had a doctor request a blood test, you’ve probably undergone a complete blood count (CBC), one of the most frequently administered blood tests globally. Hundreds of millions of these tests are performed annually to help diagnose various conditions and monitor patients' health.
However, despite their ubiquity, the interpretation of these tests in clinical settings is often less precise than it could be. Currently, blood test results are judged against generalised reference intervals that overlook individual variation.
Brody H. Foy and his team at the University of Washington are researching how computational methods can enhance clinical blood testing. They analysed two decades of blood count data from thousands of patients across both the East and West coasts to establish more personalised definitions of "normal" lab values.
Their recently published study utilised machine learning to determine individualised healthy blood count ranges and assess patients' risk for future illnesses.
Understanding clinical tests and complete blood counts
Commonly, people view clinical tests as solely diagnostic, such as a coronavirus or pregnancy test that yields a definitive positive or negative result. However, most tests measure biological traits that fluctuate within certain limits.
A complete blood count provides an intricate profile of different blood cell types, indicating the quantities of red blood cells, white blood cells, and platelets present. These metrics are routinely used in nearly all medical fields.
For instance, haemoglobin, which enables red blood cells to transport oxygen, can indicate iron deficiency if levels are low. Platelets assist in blood clotting; a low platelet count might suggest internal bleeding. Elevated white blood cell counts could signal an infection, as the body increases its production to combat illness.
Questioning normal ranges and reference intervals
This leads to the crucial inquiry: What exactly defines too high or too low in blood tests?
Traditionally, clinicians establish reference intervals by measuring a blood marker in a cohort of healthy individuals. They usually take the middle 95% of these values and categorise that span as "normal," considering anything outside it either too low or too high. However, these intervals can be problematic because what is normal for one patient may not apply to another.
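The conventional procedure amounts to taking the 2.5th and 97.5th percentiles of a healthy cohort's values. A minimal sketch, using simulated haemoglobin data purely for illustration:

```python
import numpy as np

# Simulated haemoglobin measurements (g/dL) from a hypothetical healthy cohort.
rng = np.random.default_rng(0)
cohort = rng.normal(loc=14.0, scale=1.2, size=5000)

# The reference interval is conventionally the middle 95% of healthy values,
# i.e. everything between the 2.5th and 97.5th percentiles.
low, high = np.percentile(cohort, [2.5, 97.5])

print(f"Population reference interval: {low:.1f} to {high:.1f} g/dL")
```

Any result below `low` or above `high` would be flagged, regardless of what is typical for the individual patient.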
Blood count markers are largely influenced by genetics and environmental factors, meaning each individual's healthy value can vary significantly. For instance, while a typical platelet count might range from 150 to 400 billion cells per litre, one person's body may maintain a set point around 200, making their personal normal range roughly 150 to 250.
Variances between a patient's actual normal range and the population-based reference interval can complicate diagnoses. A doctor might overlook a condition if a patient's set point is too far from a defined cutoff, or conversely, might order unnecessary tests if it's too close.
Establishing personalised normal values
Fortunately, many individuals undergo blood counts during regular checkups, which provides valuable data. Foy and his colleagues were able to estimate set points for blood counts in over 50,000 patients based on their clinical visit history. This approach enabled them to analyse how the body regulates these set points and explore ways to personalise lab test interpretations more effectively.
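One simple way to estimate a set point from visit history (the study's actual machine learning approach is more sophisticated; the numbers below are hypothetical) is to take the median of a patient's past results and derive a personal range from their own visit-to-visit variability:

```python
import statistics

# Hypothetical platelet counts (billion cells/L) from one patient's checkups.
history = [205, 198, 212, 190, 201, 208, 195]

# Estimate the set point as the median of historical values, and a personal
# range as the set point plus or minus two standard deviations of the
# patient's own measurements.
set_point = statistics.median(history)
spread = statistics.stdev(history)
personal_low = set_point - 2 * spread
personal_high = set_point + 2 * spread

print(f"Estimated set point: {set_point}")
print(f"Personal range: {personal_low:.0f} to {personal_high:.0f}")
```

This personal range is far narrower than the population interval of 150 to 400, which is the key observation behind the study's approach.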
Analysing data spanning two decades, they found that individual normal ranges were approximately three times narrower than the population-level ranges. For example, while the standard "normal" white blood cell count ranges from about 4.0 to 11.0 billion cells per litre, most people's individual ranges were more likely something like 4.5 to 7, or 7.5 to 10.
Employing these personalised set points to analyse new test results improved the identification of conditions like iron deficiency, chronic kidney disease, and hypothyroidism. The researchers were able to detect when an individual's result fell outside their personalised range, suggesting potential health issues even if it was considered normal within the broader population.
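The gain comes from results that sit inside the broad population interval but outside a patient's own range. A small illustrative sketch (the ranges and result below are hypothetical, not taken from the study's data):

```python
# Standard population reference interval for white blood cells (billion cells/L)
population_range = (4.0, 11.0)
# One patient's estimated individual range, centred on their set point
personal_range = (4.5, 7.0)

def outside(result, low, high):
    """Return True if a test result falls outside the given range."""
    return result < low or result > high

result = 8.2  # a new test value for this patient

# "Normal" by the population interval, but abnormal for this individual,
# so it would prompt follow-up under the personalised approach.
print("Outside population range:", outside(result, *population_range))
print("Outside personal range:  ", outside(result, *personal_range))
```

The same comparison works in the other direction: a result near a population cutoff but close to the patient's set point need not trigger unnecessary testing.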
These set points also served as reliable indicators of future disease risk. For instance, individuals with higher white blood cell set points showed an increased likelihood of developing Type 2 diabetes.