Deploy Once, Measure Twice: Evaluating RAG Systems
Organisations are increasingly using Retrieval-Augmented Generation (RAG) to give teams fast, reliable access to information. Our two whitepapers introduce a modular approach to RAG that breaks retrieval into clear, configurable components. They show how architecture, dataset structure, and hybrid search strategies influence retrieval performance, helping teams design systems that adapt to different use cases with confidence. Enjoy!
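To make the modular idea concrete, here is a minimal sketch of a retrieval pipeline with swappable components and a hybrid fusion step. All names, scorers, and the mixing weight are illustrative assumptions, not taken from the whitepapers; real systems would plug in BM25 and embedding models where the toy scorers sit.

```python
# Hypothetical sketch of a modular hybrid retriever.
# Each component (keyword scorer, vector scorer, fusion) is a
# separately configurable piece, which is the "modular" idea.

DOCS = {
    "d1": "modular rag systems break retrieval into components",
    "d2": "hybrid search combines keyword and vector scores",
    "d3": "centralised data storage for enterprise ai",
}

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms present in the document (toy BM25 stand-in)."""
    terms = query.lower().split()
    return sum(t in doc for t in terms) / len(terms)

def vector_score(query: str, doc: str) -> float:
    """Toy 'embedding' similarity: character-bigram Jaccard overlap."""
    bigrams = lambda s: {s[i : i + 2] for i in range(len(s) - 1)}
    q, d = bigrams(query.lower()), bigrams(doc.lower())
    return len(q & d) / len(q | d) if q | d else 0.0

def hybrid_search(query: str, alpha: float = 0.5) -> list[str]:
    """Fuse both scorers; alpha is a configurable mixing weight."""
    scored = {
        doc_id: alpha * keyword_score(query, text)
        + (1 - alpha) * vector_score(query, text)
        for doc_id, text in DOCS.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

print(hybrid_search("hybrid keyword search"))
```

Because each piece is a plain function, swapping the keyword scorer for BM25 or the vector scorer for a real embedding model changes one component without touching the rest of the pipeline.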
Designing a RAG system is not just about choosing a model, but also about making the right architectural decisions.
In this video series, our experts walk through four common enterprise dilemmas:
· Privacy or advanced features?
· Vendor lock-in or open source tooling?
· Open source or proprietary LLMs?
· Federated or centralised data storage?
Each video unpacks the trade-offs and explains how a modular approach helps you balance control, flexibility, and performance.
Dilemma 1: Privacy or Features
Do you really have to choose between strict data privacy and advanced AI capabilities?
In this video, Paul Verhaar, Area Director Data Science & AI, shares how his team approaches this challenge from an architectural perspective. It is about designing systems that balance control and innovation.
Curious how we think about complex AI trade-offs? Step inside our Data Science & AI team.
Dilemma 2: Vendor Lock-in or Open Source
Is your AI stack tied to one vendor or built for long-term flexibility?
Tooling decisions define how independent and future-ready your architecture is. In this video, Paul shows how our Data Science & AI team approaches interoperability as a strategic principle.
Want to work on modular, platform-agnostic AI systems? This is how we build them.
Dilemma 3: Open Source or Proprietary LLMs
Open source or proprietary LLMs? Bigger models or smarter architecture?
In this video, Marten Koopmans, Tech Lead Data Science & AI, explains how the team evaluates model choices based on real enterprise use cases, not hype.
If you value technical depth and the freedom to choose the right solution, you will want to see how we work.
Dilemma 4: Federated or Centralised Data Storage
Do you need to centralise everything to make AI work?
Our Data Science & AI team regularly tackles complex enterprise landscapes where the answer is not obvious. In this video, Marten explores how we approach distributed data environments.
Looking to solve real architectural challenges in AI? Discover how our team thinks.