Meet André, the Heart and Brains Behind Our AI Strategy

Written by:
Friedrich Lämmel

André joined Thryve Health in October 2025 as Head of AI Strategy & Development, where he partners with leadership and Business Development to identify and shape AI products that unlock the value of Thryve’s longitudinal wearable datasets for the insurance ecosystem. He has already delivered a production-grade conversational claims assistant for a large German statutory insurer’s activity-based rewards program, one that personalizes responses using time-series RAG with “episodic memory” over sensor signals. Next, André is advancing Thryve’s roadmap toward World Models in healthcare by combining JEPA-style predictive representations with Sensor-LLMs to turn fragmented, multimodal sensor streams into stable, clinically meaningful narratives of risk, recovery, and resilience.

But that’s only the tip of the iceberg. André is also the author of our upcoming AI newsletter series, where he shares practical insights on AI applications in healthcare and insurance. Sounds like something you don’t want to miss? Subscribe to our newsletter to stay tuned!

With his growing role in shaping our AI direction, we wanted to sit down with André to learn more, not just about the strategy, but about the person behind it. How did his career begin? What shaped his thinking? And how does a background in pure mathematics evolve into building predictive health intelligence?

We asked André a few questions.

What made you want to work with AI in healthcare?

My journey wasn't a straight line. It started in pure mathematics. I came to Berlin to do my PhD at Humboldt, modelling complex systems like climate and fluids. I spent years studying how noise and randomness drive transitions in high-dimensional systems. It was intellectually fascinating, but I eventually hit a wall: classical math often 'underfits' the messy reality of the world, and I felt my work was trapped in scientific papers rather than impacting real lives.

The turning point was realizing that human physiology is the ultimate complex dynamical system. It has deep internal mechanics, biological rules that should be predictable, but it operates on so many scales simultaneously. The 'output' of your health isn't just defined by your genes; it’s constantly shifted by outside mechanisms: your environment, your habits, even the stress of your commute. I realized that no human doctor can hold all those scales in their head at once. That’s where AI comes in.

I saw this clearly when I took a leap into cardiac AI. I vividly remember deploying my first model to detect ischemia from raw ECG data. Seeing an algorithm handle the non-linearities of the heart and flag a life-threatening condition before a doctor might catch it—that was the spark. I realized we could use math to map how lifestyle inputs change biological outputs, moving medicine from reactive (treating the sick) to preventive (catching the signal early).

That mission is what drives me today. Healthcare is incredibly tough, data is siloed, and the sector is highly regulated, but that complexity is exactly why I love it. I want to build systems that break down those silos and don't just treat the symptom, but understand the multi-scale mechanics that caused it. For me, AI in healthcare isn’t about the hype; it’s about the quiet, rigorous work of redefining how we make sense of human health.

What’s the hardest part about making sense of health data from wearables or sensors?

The hardest part is the invisible work: overcoming the fragmentation of the ecosystem.

First, we are dealing with a 'Tower of Babel' problem. You have hundreds of devices, from clinical-grade patches to consumer smartwatches, and they all speak different dialects. One device samples heart rate every second; another, every minute. One defines 'rest' completely differently from another. Before we can even think about advanced AI, we have to do the heavy lifting of harmonization. It’s about translating these disparate streams into a single, unified language that allows us to compare apples to apples across thousands of users.

There is often a naive misunderstanding that you can just feed raw data into your favorite chatbot, and it will magically understand it. That simply fails. You cannot build intelligence without a rigorous, unified data model first. That foundational work isn't flashy, but it is the prerequisite for everything else.
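To make that harmonization step concrete, here is a minimal sketch in Python. The schema and function names are illustrative assumptions of mine, not Thryve’s actual data model: it collapses device streams with different sampling rates into one averaged value per minute, a shared “dialect” for all devices.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical unified record: one heart-rate value per minute, per user.
@dataclass
class HarmonizedSample:
    user_id: str
    minute: datetime      # timestamp truncated to the minute
    heart_rate_bpm: float

def harmonize(user_id, raw_samples):
    """Collapse device-specific streams (any sampling rate) into one
    averaged value per minute. `raw_samples` is a list of (timestamp, bpm)."""
    buckets = {}
    for ts, bpm in raw_samples:
        minute = ts.replace(second=0, microsecond=0)
        buckets.setdefault(minute, []).append(bpm)
    return [HarmonizedSample(user_id, m, mean(v)) for m, v in sorted(buckets.items())]
```

A device that samples every second and one that samples every minute both come out as the same per-minute series, so users become comparable, apples to apples.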

Second, you face the challenge of data sparsity. In a perfect world, we’d have a continuous stream of data. In reality, life gets in the way: batteries die, watches are forgotten, sensors disconnect. Sparsity is like trying to read a novel where every third page is torn out. You have the beginning and the end of a chapter, but you miss the context in the middle. The hardest part is building infrastructure that bridges these silences without hallucinating false patterns. We have to distinguish between 'missing data' (technical failure) and 'no activity' (human behavior), because often, the silence itself tells a story.
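As a deliberately simplified illustration of that distinction (a toy sketch, not production logic), one can label silences by whether the device reported anything at all in a window:

```python
def classify_gap(samples):
    """Label quiet windows in a wearable stream.
    `samples` is a list of (timestamp_s, step_count_or_None), where None
    means the device reported nothing at all in that window (e.g. dead
    battery) and 0 means it was worn but recorded no movement."""
    gaps = []
    for t, value in samples:
        if value is None:
            gaps.append((t, "missing"))   # technical failure: no signal at all
        elif value == 0:
            gaps.append((t, "inactive"))  # human behavior: the silence tells a story
    return gaps
```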

Finally, there is the disconnect with other modalities. Wearables give us an exciting, continuous movie of your physiology, the what and the when. But they often lack the why. A spike in heart rate might be stress, or it might be an infection. To truly understand it, we need to bridge the gap between this sensor data and isolated islands of information like clinical records or lab results. Bringing these players together is arguably the biggest challenge; the wheel of collaboration turns slowly, and there are a lot of 'cold starts' in getting these systems to talk to one another.

But this is exactly what we are solving at Thryve. We aren't just observing these hurdles; we are clearing them. We are actively moving past the era of just 'gathering' data into the era of connecting it. We are building the unified framework that fills those sparse gaps and bridges those isolated islands of data. By turning that fragmented 'Tower of Babel' into a coherent story, we are unlocking the ability to see the full picture, turning scattered puzzle pieces into a clear map for prevention.

What do you wish more people understood about AI in healthcare?

I wish people would stop asking 'Will AI replace the doctor?' and start asking 'How can AI let the doctor be a doctor again?' 

There is a huge misconception that AI is here to replace clinicians or coaches. I believe the exact opposite. We are entering an era where human contact won't be an option; it will be a requirement. Even if a robot could perfectly mimic the warmth of a hand holding yours, in your most vulnerable moment, you wouldn't want the simulation. You would want the shared humanity of a real person. You cannot automate that connection.

In reality, the most powerful AI isn't a replacement; it is a Guardian.

It never sleeps. It watches the data 24/7 (heart rate variability, sleep patterns, activity trends), filters out the noise, and only flags the moments that truly matter. But building this Guardian is infinitely harder than people realize, because it requires multimodal datasets. You can't just train a model on medical textbooks; you need to digitize and connect everything: continuous sensor streams, lab results, clinical notes, and subjective patient feedback. Collecting, harmonizing, and structuring that data is the unglamorous, difficult key to everything we do. Without that 'plumbing,' intelligence is just a hallucination.

This is why at Thryve, we prioritize reliability over pure generation. We need to move beyond the hype of 'confident' AI to the rigor of reliable AI. In healthcare, a hallucination isn't just a glitch; it's a clinical risk. We are building systems designed with epistemic uncertainty, models that can mathematically quantify their own confidence. Instead of guessing, they are architected to recognize out-of-distribution events and trigger a 'human-in-the-loop' protocol. I wish the market valued this safety architecture as much as it values the generative capabilities we see in the headlines. True innovation in our sector isn't about how creatively a model can write; it's about how reliably it knows when not to speak.
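A toy illustration of such an uncertainty gate, using ensemble disagreement as a crude stand-in for epistemic uncertainty (the function names and threshold are my own assumptions, not a Thryve API):

```python
from statistics import mean, pstdev

def guarded_prediction(models, x, max_std=0.1):
    """Sketch of a 'know when not to speak' gate: disagreement across an
    ensemble approximates the model's own confidence. High disagreement
    suggests an out-of-distribution input, so the system escalates to a
    human instead of guessing. `models` are callables returning a risk
    score in [0, 1] for input `x`."""
    scores = [m(x) for m in models]
    if pstdev(scores) > max_std:
        return {"action": "escalate_to_human", "scores": scores}
    return {"action": "report", "risk": mean(scores)}
```

In a real system the uncertainty estimate would come from a calibrated model rather than a hand-set threshold, but the control flow, quantify confidence first, then decide whether to answer at all, is the point.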

Ultimately, this technology is about empowerment. It’s not just about monitoring; it’s about making us conscious. When AI works well, it acts like a mirror, revealing how your daily choices (your sleep, your stress, your movement) actually shape your biology. It gives you the clarity to understand your own body, transforming health from something that 'happens to you' into a conscious choice you make every day. That is the real promise of AI: it doesn't just treat us; it can wake us up before any treatment is needed. We are standing at the threshold of a fundamental shift: from Retrospective Sick-Care to Predictive Health-Intelligence. The future of AI is not about building a smarter doctor to fix the break; it’s about building the invisible scaffolding that prevents the break from happening in the first place.

What’s one thing coming in AI that you’re genuinely excited about?

I am genuinely excited about the shift from Generative AI to Physical Intelligence.

We are reaching the saturation point of language models. LLMs have been a miracle for processing text, but we have to remember: the map is not the territory. Language is a low-dimensional description of the world, but the human body is a high-dimensional, continuous physical reality.

This is why I am focused on tackling what I call 'Tokenization Debt' and the concept of the world model. Think of a world model as the AI’s internal rehearsal of reality; it’s an abstraction that understands the deep 'why' of physical cause-and-effect instead of just the 'what' of a text string.

When we force continuous biological signals (heart rates, metabolic shifts, circadian rhythms) into discrete text tokens, we create a structural mismatch. We end up optimizing for the statistical patterns found in the training pool, effectively capping the model’s intelligence at what has already been seen rather than allowing it to grasp the actual physiological system. Our bodies are not chatbots; they are systems governed by biological laws, not by the frequency of textual datasets.

That is why Yann LeCun’s vision for JEPA (Joint Embedding Predictive Architecture) is such a critical breakthrough.

Current generative models fall into the trap of trying to reconstruct every noisy pixel or token. In healthcare, that’s like trying to predict the exact shape of every wave to understand the tide. You don't need to know the position of every drop of water; you need to know if the tide is coming in or going out. That’s the difference between memorizing noise and understanding the underlying force. JEPA changes the game by predicting abstract states. It ignores the noise (the wave) and models the physics (the tide).
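The wave-versus-tide contrast can be sketched in a few lines. This is a pedagogical toy under my own simplifications, not an implementation of JEPA (whose encoders and predictors are learned networks): the generative objective pays for every noisy sample, while the joint-embedding objective only compares abstract states.

```python
def encode(signal):
    """Toy 'abstract state': keep only the slow trend (the tide), here
    just the mean level, discarding sample-level noise (the waves)."""
    return sum(signal) / len(signal)

def generative_loss(predicted_signal, target_signal):
    # Reconstruction objective: tries to match every sample, so it is
    # penalized for noise it could never have predicted.
    return sum((p - t) ** 2 for p, t in zip(predicted_signal, target_signal)) / len(target_signal)

def jepa_loss(context_signal, target_signal, predictor):
    # JEPA-style objective: predict the target's abstract state from the
    # context's abstract state, and compare only in that abstract space.
    predicted_state = predictor(encode(context_signal))
    return (predicted_state - encode(target_signal)) ** 2
```

For two noisy signals with the same underlying level, the reconstruction loss stays large (it fights the waves) while the embedding-space loss is near zero (the tide matches), which is the intuition behind predicting abstract states.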

In Healthcare, this unlocks 'Simulation': We move from AI that simply retrieves medical guidelines to a World Model that can abstract and simulate physiological consequences. A chatbot can tell you 'You are stressed.' A World Model understands the mechanics of that stress and can simulate the future: 'If you run 5k today with this specific viral load, your recovery will crash.'

In Robotics, this unlocks 'Physical Intuition': If we want robots to assist in hospitals, lifting patients, or performing care, they cannot rely on text. They need a World Model to understand the physics of care, giving them the intuition to be gentle and effective in a chaotic physical environment.

This transition, from reciting medical text to simulating human health, is the most exciting frontier I have ever seen. It is the key to Agentic AI. LLMs struggle to plan long-term because they reason word-by-word. A true health agent needs to reason hierarchically. So, I’m not excited about 'bigger' models. I’m excited about the new stack: World Models as the predictive engine of biology, and LLMs as the interface to communicate it. That is how we move from AI that just chats to AI that actually reasons.

If you’re building digital health or insurance solutions and want to turn fragmented sensor data into reliable, predictive health intelligence, book a demo with Thryve and see how our AI infrastructure can support your next step.

About the Author

Friedrich Lämmel

CEO of Thryve

Friedrich Lämmel is CEO of Thryve, the plug & play API to access and understand 24/7 health data from wearables and medical trackers. Prior to Thryve, he built eCommerce platforms with billions in turnover, and has worked and lived in several countries in Europe and beyond.