User Story

# Lufthansa Industry Solutions Uses Haystack to Power Enterprise RAG

Learn how Lufthansa Industry Solutions (LHIND) built an enterprise-grade, compliant AI knowledge assistant

*October 24, 2025 · Bilge Yücel, DevRel Engineer · featuring Nils Hilgers, Lead AI Engineer @ LHIND*
When you think of Lufthansa, you might picture planes, airports, or global travel, but Lufthansa Industry Solutions (LHIND) is making an impact in a different way: as a full-service IT company delivering digital solutions for clients both inside and outside the Lufthansa Group. At LHIND, a subsidiary of the Lufthansa Group, teams work on a wide range of projects, from cloud infrastructure, AI, and enterprise data systems to custom software development, process automation, and digital transformation initiatives. Among them is SmartAssistantAI, an enterprise AI chatbot built to make company knowledge accessible to everyone, instantly and securely.

Behind the product are Nils Hilgers, Lead AI Engineer at LHIND, and his team of engineers and product builders. Together, they're rethinking enterprise search through the lens of retrieval-augmented generation (RAG) and enterprise-grade security standards. To bring that vision to life, the team selected Haystack as one of the key solutions powering their AI assistant.

## The Challenge: Connecting Scattered Knowledge

LHIND's engineering group was tasked with building a secure, centralized assistant capable of answering employees' questions using the company's internal documentation. The challenge wasn't just accuracy; it was compliance and control. The system needed to:

- Handle multiple data sources (SharePoint, internal wikis, etc.)
- Operate under GDPR, ISO 27001, and the Lufthansa Group's own IT governance standards
- Deliver explainable, source-cited results

With a small team of developers and engineers working in an agile setup, supported by a dedicated platform team, they set out to design a solution that could unify data retrieval and LLM-based reasoning without sacrificing traceability or maintainability.
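Of those requirements, source-cited results are the one that maps most directly onto code. The following is a minimal, stdlib-only sketch (with hypothetical documents and a naive keyword-overlap scorer, not LHIND's actual implementation) of retrieval that keeps source metadata attached to every answer:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    content: str
    meta: dict = field(default_factory=dict)

def retrieve(query: str, docs: list[Document], top_k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query (illustrative only)."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.content.lower().split())))
    return scored[:top_k]

def answer_with_sources(query: str, docs: list[Document]) -> dict:
    """Return the retrieved context together with its source references."""
    hits = retrieve(query, docs)
    return {
        "context": [d.content for d in hits],
        "sources": [d.meta.get("source", "unknown") for d in hits],
    }

# Hypothetical documents standing in for SharePoint / wiki content.
docs = [
    Document("Travel expense policy: submit receipts within 14 days.",
             {"source": "sharepoint://policies/travel"}),
    Document("VPN setup guide for remote access.",
             {"source": "wiki://it/vpn"}),
]
print(answer_with_sources("What is the travel expense policy?", docs))
```

In a production system the scorer would be a vector or hybrid retriever and the LLM's answer would cite the `sources` list, but the shape of the contract, context plus provenance, is the same.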
## Choosing Haystack: Flexibility Meets Stability

When the project began, the team evaluated several orchestration frameworks to structure their RAG pipelines. They needed something reliable enough for production but flexible enough to adapt as requirements evolved.

"We needed a graph orchestration framework with well-thought-out fundamentals that is stable for production," says Nils.

After testing a few alternatives, Haystack stood out for:

- An orchestration layer built on directed graphs, with easy serialization and visualization
- Unified filtering across different vector database providers
- Jinja-based prompt templating, which made their prompts more maintainable

Having used the older 1.x REST API for some demos, the team already knew Haystack's foundations, and migrating to 2.x resulted in cleaner, more maintainable code.

## The Technical Architecture: How It All Comes Together

The assistant is a cloud-native, microservice-based system built around modularity and open-source principles. It combines Haystack pipelines with custom middleware and observability tooling. It is not a public-facing product but rather a secure solution deployed in enterprise environments where control over data and access is critical.

### Core Components

#### 1. Frontend and Authentication

A modular frontend built with microfrontends allows different configurations per customer, such as custom stylesheets, logos, and login interfaces, managed through an admin interface. A Golang-based authentication middleware enforces role-based access control (RBAC) and ensures user permissions are respected end-to-end.

#### 2. Ingestion Pipelines

Data…
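The "unified filtering" point deserves a quick illustration. Haystack 2.x expresses metadata filters as nested dictionaries built from `field`, `operator`, and `value` keys (with `AND`/`OR` nodes combining sub-conditions), and the same dictionary shape is accepted across the supported document-store backends. The `matches()` helper below is a hypothetical stdlib re-implementation of that shape for illustration; it is not Haystack's own code:

```python
# Comparison operators supported by this sketch (Haystack supports more).
OPS = {
    "==": lambda a, b: a == b,
    "!=": lambda a, b: a != b,
    "in": lambda a, b: a in b,
}

def matches(meta: dict, flt: dict) -> bool:
    """Evaluate a Haystack-2.x-style filter dict against one document's metadata."""
    if "conditions" in flt:
        # Logical node: AND / OR over nested sub-conditions.
        results = [matches(meta, c) for c in flt["conditions"]]
        return all(results) if flt["operator"] == "AND" else any(results)
    # Comparison node: {"field": "meta.x", "operator": "==", "value": ...}
    key = flt["field"].removeprefix("meta.")
    return OPS[flt["operator"]](meta.get(key), flt["value"])

# Example filter: documents from SharePoint, in German or English.
flt = {
    "operator": "AND",
    "conditions": [
        {"field": "meta.source", "operator": "==", "value": "sharepoint"},
        {"field": "meta.lang", "operator": "in", "value": ["de", "en"]},
    ],
}
print(matches({"source": "sharepoint", "lang": "de"}, flt))  # True
```

Because the filter is plain data rather than backend-specific query syntax, the same dictionary can be passed to a retriever regardless of which vector database sits underneath, which is what makes swapping providers cheap.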
