Build AI-Powered Games with NVIDIA DLSS 4.5, RTX, and Unreal Engine 5
Today, game developers can begin integrating NVIDIA DLSS 4.5 with Dynamic Multi Frame Generation, Multi Frame Generation 6X, and the second-generation transformer model for NVIDIA Super Resolution. In this post, we’ll go over new technologies and resources to share with our game-developer community, including:

- A new NVIDIA TensorRT for RTX plugin for Unreal Engine’s Neural Network Engine (NNE)
- NVIDIA Kimodo for easier motion generation
- A guide to using ComfyUI to help produce pre-production assets
- More than a dozen new sessions from GDC and GTC now available on YouTube
- Our April “Level Up with NVIDIA” webinar, highlighting path-traced hair in Unreal Engine 5.7

Integrate DLSS 4.5 Dynamic Multi Frame Generation

At CES 2026, we introduced DLSS 4.5, extending its AI-driven rendering pipeline with a second-generation transformer model for Super Resolution to deliver another major upgrade to image quality. DLSS 4.5 also introduced Dynamic Multi Frame Generation and an updated 6X Multi Frame Generation mode, enabling significantly higher frame rates while maintaining responsiveness.

The release built on the rapid adoption of DLSS 4, whose Multi Frame Generation was already supported by more than 250 games and applications, making it one of the fastest-adopted gaming technologies from NVIDIA. Overall, DLSS technologies are now available in more than 700 games and apps.

The DLSS 4.5 SDK, including Dynamic Multi Frame Generation, Multi Frame Generation 6X, and the second-generation transformer model for Super Resolution, is now available to developers. Built on Streamline, the SDK offers a consistent integration path across DLSS features, allowing developers to selectively adopt capabilities such as Ray Reconstruction or Dynamic Multi Frame Generation. Updated APIs, documentation, and sample code help reduce integration time and make it easier to bring DLSS into both new and existing projects.
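To build intuition for what a Multi Frame Generation multiplier means for frame pacing, here is a short illustrative calculation. This is not part of the DLSS SDK; it assumes an NX mode presents N frames for every frame the game renders (one rendered plus N-1 AI-generated), which is how the 2X/3X/4X modes of DLSS 4 are commonly described.

```python
def mfg_output(base_fps: float, multiplier: int) -> dict:
    """Estimate displayed frame rate and pacing for a given
    Multi Frame Generation multiplier.

    Illustrative arithmetic only; assumes the multiplier counts
    total presented frames per rendered frame.
    """
    displayed_fps = base_fps * multiplier
    return {
        # Frames shown on screen per second
        "displayed_fps": displayed_fps,
        # Time the game spends rendering each real frame
        "render_frame_time_ms": 1000.0 / base_fps,
        # Interval between presented frames (rendered or generated)
        "present_interval_ms": 1000.0 / displayed_fps,
        # AI-generated frames inserted per rendered frame
        "generated_per_rendered": multiplier - 1,
    }

# A game rendering at 60 FPS in a 6X mode would present 360 frames
# per second, one roughly every 2.8 ms.
stats = mfg_output(60.0, 6)
```

Note that input latency still tracks the rendered frame rate rather than the displayed one, which is why a mode that varies the multiplier at runtime, as Dynamic Multi Frame Generation does, can push frame rates higher while keeping the experience responsive.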
Accelerating AI Workloads in Unreal Engine’s NNE with TensorRT for RTX

The TensorRT for RTX plugin provides a runtime for Unreal Engine’s NNE, enabling efficient deployment of AI models directly within real-time applications. By leveraging RTX GPUs across desktops, laptops, and workstations, TensorRT for RTX accelerates workloads such as rendering, language, speech, and animation while maintaining strong performance on consumer hardware. In practice, developers can see 1.5x performance improvements compared to DirectML-based approaches, making it easier to integrate responsive AI-driven features into games and interactive experiences. Access the plugin today.

NVIDIA Kimodo for motion generation

NVIDIA Kimodo is a research project exploring a new approach to generating high-quality human motion for interactive applications. Built as a kinematic motion generation model, Kimodo can synthesize realistic 3D character animation from simple inputs such as text, keyframes, or trajectory constraints. Trained on a large dataset of high-quality 3D motion capture data, it is designed to produce natural, physically plausible motion while remaining responsive to developer input and control.

For game developers, Kimodo highlights a path toward more scalable animation workflows. Instead of relying entirely on authored or captured animation clips, developers can generate motion data to prototype behaviors, create variations, or fill gaps between animations. This can help…

