
Get Started With NVIDIA ACE

NVIDIA ACE is a suite of technologies for bringing digital humans to life with generative AI. It covers every part of the digital human, from speech, translation, vision, and intelligence to realistic animation, behavior, and lifelike appearance. ACE is available as NVIDIA NIM™ microservices, which are easy to deploy and highly performant, optimized to run in the cloud, on premises, or on NVIDIA RTX™ AI PCs.

Documentation


ACE NIM

To make it easier for developers to build digital humans, NVIDIA ACE 24.06 introduces general availability for many components of the suite, including NVIDIA® Riva, NVIDIA Audio2Face-3D™, and NVIDIA Omniverse™ RTX Renderer. Get started now through NVIDIA AI Enterprise.

Speech

Riva Automatic Speech Recognition

For speech-to-text transcription. On-device inference coming soon.

Get Container | Documentation
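If you are deploying the Riva ASR NIM yourself, a minimal transcription call might look like the sketch below. It assumes the nvidia-riva-client Python package, a gRPC endpoint at localhost:50051, and a 16 kHz mono WAV file; adjust these for your deployment.

```python
# Minimal offline transcription sketch (assumed setup: pip install nvidia-riva-client,
# a Riva ASR endpoint serving gRPC on localhost:50051, 16 kHz mono PCM audio in sample.wav).
import riva.client

auth = riva.client.Auth(uri="localhost:50051")   # assumed endpoint
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code="en-US",
    max_alternatives=1,
)

with open("sample.wav", "rb") as f:   # assumed audio file
    audio_bytes = f.read()

response = asr.offline_recognize(audio_bytes, config)
print(response.results[0].alternatives[0].transcript)
```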

Riva Neural Machine Translation

For text translation across up to 32 languages.

Get Container | Documentation
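A sketch of a translation request against a self-deployed Riva NMT endpoint is shown below; the endpoint, model name, and language codes are assumptions to replace with the values from your deployment.

```python
# Translation sketch using the nvidia-riva-client package. The endpoint and the
# model name ("megatronnmt_any_any_1b") are assumptions; query your deployment
# for the models and language pairs it actually serves.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")   # assumed endpoint
nmt = riva.client.NeuralMachineTranslationClient(auth)

# Arguments: list of texts, model name, source language code, target language code.
response = nmt.translate(
    ["Where is the nearest charging station?"],
    "megatronnmt_any_any_1b",
    "en",
    "de",
)
print(response.translations[0].text)
```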

Riva Text-to-Speech

For text-to-speech synthesis. On-device inference coming soon.

Get Container | Documentation
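For a self-deployed Riva TTS endpoint, a minimal synthesis call could look like this sketch; the endpoint, voice name, and sample rate are assumptions, so check which voices your server exposes before using it.

```python
# Synthesis sketch using the nvidia-riva-client package. The endpoint and the
# voice name are assumptions; substitute a voice available on your deployment.
import wave
import riva.client

auth = riva.client.Auth(uri="localhost:50051")   # assumed endpoint
tts = riva.client.SpeechSynthesisService(auth)

resp = tts.synthesize(
    "Hello from a digital human.",
    voice_name="English-US.Female-1",            # assumed voice
    language_code="en-US",
    sample_rate_hz=44100,
)

# The response carries raw 16-bit PCM, so wrap it in a WAV container for playback.
with wave.open("output.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(44100)
    out.writeframes(resp.audio)
```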

Language

ACE Agent

For dialog management and RAG workflows.

Get Container | Documentation

Nemotron-Mini 4B Instruct

The Nemotron-Mini 4B Instruct model is optimized through distillation, pruning, and quantization for speed and on-device inference. It is purpose-built with instruction tuning, providing better roleplay, RAG, and function-calling capabilities.

Try Now | Download AIM SDK Plugin
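The hosted "Try Now" endpoint can be called through an OpenAI-compatible API. The sketch below assumes the integrate.api.nvidia.com base URL and the nvidia/nemotron-mini-4b-instruct model id; confirm both on the model card and export an NVIDIA_API_KEY first.

```python
# Chat-completion sketch against the hosted endpoint. Base URL and model id are
# assumptions taken from the public API catalog; verify them on the model card.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="nvidia/nemotron-mini-4b-instruct",
    messages=[
        {"role": "system", "content": "You are Kai, a friendly shopkeeper NPC."},
        {"role": "user", "content": "Do you have any healing potions in stock?"},
    ],
    temperature=0.2,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```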

Animation

Animation Graph

For animation blending, playback and control.

Get Container | Documentation

Audio2Face-3D

For audio-to-3D facial animation and lip sync. On-device inference coming soon.

Get Container | Documentation

Omniverse RTX Renderer

For streaming ultra-realistic visuals to any device.

Get Container | Documentation

ACE Tools and Reference Workflows

Developers can integrate ACE NIM microservices directly into their products, tools, services, games, and experiences for domain-specific AI workflows such as NPCs and customer service assistants.

Please note that to access ACE Workflows, you will be required to review and accept the NVIDIA ACE License Agreement.

Game Characters

The NVIDIA Kairos reference showcases NPCs that interact through natural language. The workflow contains an Audio2Face plug-in for Unreal Engine 5 alongside a configuration sample.

Documentation

Customer Service Agents

The NVIDIA Tokkio reference showcases interactive customer service capabilities for healthcare, IT, retail, and more through Omniverse rendering. It features a configurator tool, quick-deployment scripts, an ACE Agent quickstart script, and Helm charts.

Documentation

Unified Cloud Services Tools

Simplifies deployment of multi-modal applications.

Documentation

Avatar Configurator

Build and configure custom characters with base, hair, and clothing options.

Documentation

Autodesk Maya ACE

Streamline facial animation in Autodesk Maya or dive into the source code to develop your own plugin for the digital content creation tool of your choice.

Documentation

ACE Early Access

As an early access partner, you have access to the pre-release software and supporting resources below. In exchange, we ask for your feedback to help us continue improving our microservices.

Unreal Engine 5 Renderer Microservice 0.1.0

The Unreal Engine 5 renderer microservice 0.1.0 allows you to use Unreal Engine 5.4 to customize and render your avatars.

Nemotron-3 4.5B SLM 0.1.0

A state-of-the-art model built for on-device RTX PC inference. The model is available with INT4 quantization for minimal VRAM usage and supports role-play and RAG use cases.

Audio2Face-2D 0.1.0

Animate a person's portrait photo from audio, with support for lip sync, blinking, and head-pose animation.

VoiceFont 1.1.1

Create a unique voice, with reduced latency for real-time use cases. Now with support for concurrent batches across all GPUs.

Apply Here

ACE Examples

Get started with the ACE microservices using the video tutorials below. They provide getting-started tips for common digital human use cases.

Text to Gesture

Create Sentiment Analysis and Send Audio to A2X and AnimGraph (00:44)

Connect All Microservices in UCF (6:34)

Reallusion Character

Exporting Character From Reallusion Character Creator and Preparing Character in Audio2Face (11:07)

Setup and Streaming Through a Reference App and Fine Tuning (14:41)

Stylised Avatar

Making and Animating a Stylised 3D Avatar From Text Inputs (1:43)

Make Vincent Rig Compatible For UE5 and A2X Livelink (5:35)

Make Vincent Blueprint Receive A2X Animation Data (11:53)

Create Python App to Generate Audio From Text and Animate Vincent (8:17)