Audio-to-Animation AI Integration Now Available in iClone with Audio2Face
The new, seamless solution leverages Reallusion’s iClone, Character Creator, and NVIDIA’s Audio2Face to revolutionize multi-lingual facial lip-sync animation production.
California-based Elara Systems harnesses Adobe Substance 3D Painter, Maya, Unreal Engine and NVIDIA Omniverse, connecting 3D pipelines to create AI character-driven 3D medical animations that help raise public awareness through relatable storytelling.
As part of the tech giant’s CES Special Address, the company announced Blender enhancements available in the Omniverse Launcher, along with Audio2Face, Audio2Gesture, and Audio2Emotion: generative AI tools that enable instant 3D character animation.
New technology generates interactive AI avatars with ray-traced 3D graphics by connecting speech AI, computer vision, natural language understanding, recommendation engines and simulation technologies.
The powerful system produces physically simulated data for training neural networks; the first implementations shown are the NVIDIA DRIVE Sim and NVIDIA Isaac Sim apps.
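To make the idea of training a network purely on simulated data concrete, here is a minimal, generic sketch, assuming PyTorch is available. The toy simulate_batch function is a hypothetical stand-in for a physics simulator; it does not use DRIVE Sim, Isaac Sim, or any Omniverse API.

```python
# Generic illustration only: train a small classifier on synthetically
# generated (simulated) data. All names here are hypothetical stand-ins,
# not part of NVIDIA's tooling.
import torch
import torch.nn as nn

def simulate_batch(n=256):
    """Toy stand-in for a simulator: emit labeled synthetic samples."""
    x = torch.rand(n, 2) * 2 - 1            # random 2D "sensor readings" in [-1, 1)
    y = (x.pow(2).sum(dim=1) < 0.5).long()  # label: inside a circle or not
    return x, y

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x, y = simulate_batch()                 # fresh synthetic data every step
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x_test, y_test = simulate_batch(1000)
    acc = (model(x_test).argmax(dim=1) == y_test).float().mean()
print(f"accuracy on held-out synthetic data: {acc.item():.2f}")
```

Because the simulator can generate unlimited labeled samples, every training step sees new data, which is the basic appeal of synthetic data pipelines for perception and robotics workloads.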