Real-Time Data in Digital Health: Why mHealth Still Struggles to Prove Impact

Written by:
Friedrich Lämmel

In 2026, saying that "digital health has entered a phase of rapid expansion" may sound like the same old story. But recent findings from a scoping review of digital medicine give the statement new weight. Today, hundreds of thousands of mobile health (mHealth) applications are available across app stores, supporting everything from chronic disease management to mental well-being and lifestyle change.

For patients, these tools offer accessibility, personalization, and continuous support. For healthcare systems and insurers, they promise scalable, preventative care. This is the story told in countless promotional articles. Yet one thing is usually left out: proof that these solutions actually work.

Traditional clinical validation methods, particularly randomized controlled trials (RCTs), were designed for static interventions like drugs or medical devices. They struggle to keep pace with software that evolves continuously, adapts to individual users, and integrates into everyday life. As a result, there is a growing gap between innovation and evidence.

This is where real-time data comes into play. Instead of relying solely on controlled, time-limited studies, evaluators can use the data generated during routine app usage to capture how digital health tools perform in real life.

In short, continuous data is emerging as the missing link: a way to evaluate effectiveness at scale and in context, bringing digital health closer to real-world impact.

Why Traditional Evaluation Methods Fall Short

RCTs are too slow and rigid

Randomized controlled trials (RCTs) have long been considered the gold standard for evaluating medical interventions. However, their structure makes them difficult to apply to digital health. RCTs are designed for static treatments with clearly defined inputs and outputs, whereas mHealth applications are dynamic by nature. Features are updated frequently, user interfaces evolve, and algorithms continuously improve.

This creates a fundamental mismatch:

  • Apps change faster than trials can be completed
  • Personalization introduces variability that is hard to standardize
  • User interaction makes blinding difficult to maintain

By the time an RCT concludes, the product being evaluated may no longer reflect what users actually experience. This undermines both the relevance and the timeliness of the evidence generated.

Digital health doesn’t behave like drugs

Unlike pharmaceuticals, digital health solutions are not consumed in a standardized way. Their effectiveness depends heavily on user behavior, engagement patterns, and context. Two users may interact with the same app in entirely different ways, leading to very different outcomes.

On top of that, digital products are continuously iterated based on user feedback and data insights. This constant evolution makes it difficult to “freeze” a version for evaluation, as required in traditional study designs.

So what is the implication? Evaluating digital health requires fundamentally new approaches. Rather than adapting legacy frameworks, the industry needs evidence models that reflect how these solutions actually function in the real world.

What Is Real-Time Data in mHealth?

Real-time data refers to data collected during the routine use of digital health applications, outside of controlled clinical environments. Instead of relying on structured, time-limited studies, it captures how people actually interact with health tools in their daily lives, making it far more reflective of real outcomes.

In the context of mHealth, continuous data typically falls into three main categories:

  • User input data: Information actively provided by users, such as symptom tracking, questionnaires, mood logs, or medication intake
  • Device-generated data: Passive data collected through sensors and wearables, including heart rate, activity levels, sleep patterns, and other physiological signals
  • System-generated data: Data from broader healthcare systems, such as electronic health records (EHRs), claims data, or clinical test results

What makes real-time data particularly powerful is its ability to combine these sources into a more complete, longitudinal view of health. Unlike traditional clinical data, which often represents isolated snapshots, it reflects continuous behavior, context, and change over time.

This is why it is increasingly seen as the foundation for scalable, continuous evidence. It enables digital health solutions to be evaluated not just once, but continuously, based on how they perform in real-world conditions, across diverse populations, and over extended periods of time.
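To make the three data categories concrete, here is a minimal sketch of a unified observation record that could hold all of them in one longitudinal stream. The field names and schema are purely illustrative assumptions, not any real platform's data model:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical unified record combining the three data categories.
# Field names are illustrative, not any real platform's schema.
@dataclass
class HealthObservation:
    user_id: str
    timestamp: str                 # ISO 8601
    source: str                    # "user_input" | "device" | "system"
    metric: str                    # e.g. "mood", "resting_hr", "hba1c"
    value: float
    unit: Optional[str] = None

# One longitudinal timeline can mix all three sources for the same user:
timeline = [
    HealthObservation("u1", "2025-03-01T08:00:00Z", "user_input", "mood", 6, "1-10"),
    HealthObservation("u1", "2025-03-01T08:05:00Z", "device", "resting_hr", 58, "bpm"),
    HealthObservation("u1", "2025-03-02T10:00:00Z", "system", "hba1c", 5.6, "%"),
]

sources = {obs.source for obs in timeline}
print(sorted(sources))
```

The point of a shared record type like this is that self-reports, sensor readings, and clinical results stop being three separate silos and become one ordered timeline per user.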

What the Research Shows: Data Use in Today’s mHealth Landscape

To understand how real-world data is actually being used in practice, the review analyzed 72 studies across 61 different mHealth applications, offering a broad view of how digital health solutions are currently evaluated.

The review focused on three key questions:

  • How is real-world data being used?
  • What types of data are being collected?
  • How is effectiveness being measured?

One of the clearest findings is that, despite the growing sophistication of digital health technologies, data collection remains heavily skewed toward manual input.

Across the apps analyzed:

  • ~71% rely on user-reported data, such as surveys, symptom tracking, or self-logged behaviors
  • Only ~26% incorporate device-generated data, including wearables and sensor-based inputs
  • Just ~7% integrate clinical or system-level data, such as electronic health records or claims

This reveals a significant gap between what is technically possible and what is currently implemented. From a broader perspective, the ecosystem is still underutilizing some of its most powerful capabilities. Passive, continuous data collection through wearables remains limited, and connections to clinical systems are rare. As a result, many applications still depend on fragmented and episodic data rather than building a comprehensive, longitudinal view of health.

In simple terms, today’s mHealth data landscape is characterized by:

  • Over-reliance on manual inputs and surveys
  • Underuse of wearable and sensor data
  • Minimal integration with broader healthcare systems

Where mHealth Works Best Today

While digital health spans a wide range of use cases, the research shows that its impact is not evenly distributed. Two areas in particular stand out: mental health and metabolic health.

These domains are especially well-suited to mHealth because they align closely with the types of data that can be collected in real-world settings. In mental health, subjective input is not a limitation but a strength. Self-reported data such as mood, stress levels, or behavioral patterns are often the most relevant indicators, making app-based tracking both practical and meaningful.

Metabolic health, on the other hand, benefits from objective, continuously collected data. Wearables can capture signals like activity levels, heart rate, and sleep, all of which are closely linked to metabolic function. This enables a more dynamic and longitudinal view of health compared to traditional, snapshot-based measurements.

The key takeaway is that real-world data is not equally effective across all use cases.

Some conditions naturally lend themselves to continuous monitoring and behavioral insights, while others still depend more heavily on clinical diagnostics. As a result, the success of mHealth is not just about technology, but about matching the right data approach to the right health domain.

The Big Problem: Weak Evidence Designs

While the growth of real-world data in digital health is promising, the way it is currently used raises a critical concern: the quality of evidence being generated is often limited.

The review shows that the majority of studies rely on relatively weak evaluation methods:

  • Around 57% use single-group pre/post designs, where outcomes are measured before and after using an app, without a comparison group
  • Only ~4% use randomized or high-quality study designs, such as controlled trials
  • Very few studies include robust comparators, making it difficult to isolate the true effect of the intervention

This creates a fundamental challenge. While large amounts of data are being collected, the lack of strong study designs limits how confidently we can interpret the results. Improvements observed in users may be influenced by external factors, behavioral biases, or natural variation over time rather than the app itself.

In other words, the issue is not the absence of data, but the lack of methodological rigor in turning that data into reliable evidence.
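Why are single-group pre/post designs so fragile? One reason is regression to the mean: people tend to start using a health app when their symptoms are unusually bad, so later measurements drift back toward their personal baseline even if the app does nothing. The toy simulation below (synthetic numbers, not drawn from the review) shows a sizeable apparent "improvement" with a true effect of exactly zero:

```python
import random

random.seed(0)

def simulate_pre_post(n_users=10_000, effect=0.0):
    """Single-group pre/post 'study' with a configurable true effect.

    Each user's symptom score fluctuates around a stable personal
    baseline. Users enroll on a bad day (pre is inflated), so the
    post measurement regresses toward the baseline even when the
    intervention has zero effect.
    """
    deltas = []
    for _ in range(n_users):
        baseline = random.gauss(50, 10)            # stable trait
        pre = baseline + abs(random.gauss(0, 8))   # enrolled when worse than usual
        post = baseline + random.gauss(0, 8) - effect
        deltas.append(pre - post)
    return sum(deltas) / len(deltas)

print(f"Mean 'improvement' with zero true effect: {simulate_pre_post():.1f} points")
```

A comparison group drawn from the same population would show a similar spontaneous "improvement," which is exactly why comparators are needed to isolate the app's real contribution.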

What Needs to Change in mHealth

One of the biggest missed opportunities in today’s mHealth landscape is the limited use of continuous data. Despite the technical ability to capture health signals in real time, only around 44% of studies actually leverage continuous data collection. Most still rely on snapshots, isolated measurements that fail to reflect how health evolves over time.

This is a critical gap. The real promise of mHealth lies in its ability to move beyond episodic insights and instead provide a longitudinal view of health. Continuous data enables a deeper understanding of patterns, behaviors, and early risk signals, opening the door to more timely and personalized interventions.
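As a small illustration of what continuous data adds over snapshots, the sketch below (synthetic daily step counts, purely illustrative) compares a single clinic-visit-style measurement with a rolling trend that reveals a steady decline:

```python
# Synthetic daily step counts for one user over ten days.
steps = [9000, 8500, 8800, 7000, 6500, 6000, 5500, 5200, 5000, 4800]

def rolling_mean(values, window=3):
    """Smooth a daily series into a short rolling average."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

snapshot = steps[-1]          # a single, isolated measurement
trend = rolling_mean(steps)   # the continuous view

print(f"Snapshot: {snapshot} steps")
print(f"Trend: {trend[0]:.0f} -> {trend[-1]:.0f} steps/day")
```

The snapshot alone says only "4,800 steps today"; the trend shows activity falling by roughly 40% over the window, the kind of early risk signal that episodic measurement misses.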

However, unlocking this potential requires a shift across the entire ecosystem.

For product teams, this means designing with passive and multimodal data in mind from the beginning, rather than relying primarily on manual inputs. For insurers and healthcare providers, it requires stronger integration between digital health tools and clinical systems to validate outcomes and support real-world evidence generation. And at a broader level, evaluation models need to evolve toward approaches that reflect real usage, including comparator-based studies and multi-source data integration.

How Thryve Powers Digital Health 

The findings make one thing clear: while mHealth holds enormous potential, evidence generation is still lagging behind innovation. Data is being collected at scale, but too often remains fragmented, underutilized, or difficult to translate into meaningful outcomes.

This is where infrastructure becomes critical. To truly unlock the value of real-world data, digital health platforms need to move beyond isolated integrations and build systems that can unify, standardize, and activate data across sources. At Thryve, this shift is already happening in practice. By enabling access to a broad ecosystem of devices and harmonizing biometric data streams through our wearable API, we provide the foundation for:

  • Seamless Device Integration: Connect over 500 health monitoring devices to your platform through a single interface, eliminating the need for multiple integrations.
  • Standardized Biometric Models: Automatically harmonize biometric data streams, including heart rate, sleep metrics, skin temperature, activity levels, and HRV, making the data actionable and consistent across devices.
  • GDPR-Compliant Infrastructure: Ensure full compliance with international privacy and security standards, including GDPR and HIPAA. All data is securely encrypted and managed according to the highest privacy requirements.

If you’re building digital health solutions powered by real-world data, book a demo with Thryve to turn fragmented data into actionable evidence.

About the Author

Friedrich Lämmel

CEO of Thryve

Friedrich Lämmel is CEO of Thryve, the plug & play API to access and understand 24/7 health data from wearables and medical trackers. Prior to Thryve, he built eCommerce platforms with billions in turnover and has worked and lived in several countries in Europe and beyond.