Convai × Copresence: Photoreal Avatars that Think, Talk, and Act in Unreal Engine

By
Convai Team
October 15, 2025

Convai and Copresence have partnered to deliver an end‑to‑end pipeline for intelligent, photoreal 3D avatars in Unreal Engine. Creators scan a face with the Copresence app, convert it to a MetaHuman, and connect Convai for real‑time conversation, lip‑sync, and scene actions. The result: lifelike avatars that look like you and can talk, listen, and move on command—ready for games, virtual production, training, and brand experiences.

Copresence highlighted the workflow in a community tutorial by Anderson Rohr (Third Move Studios), demonstrating the full path from scan to MetaHuman to Convai‑powered intelligence. 

Watch the full video below to get started:

What each partner brought to the pipeline

Copresence — phone scan to MetaHuman

  • Create a high‑fidelity facial scan on the Copresence app and export for engine use.

  • Use the MetaHuman import path inside Unreal to generate a fully rigged character, then transfer Copresence textures and hair.

  • The plugin demonstrated in the tutorial is compatible with Unreal Engine 5.5, with installation via the project’s Plugins folder.

Convai — real‑time intelligence for avatars

  • Add conversational AI (multilingual), TTS, lip‑sync, and in‑scene actions (e.g., follow, move, interact) through the Convai Unreal plugin.

  • Configure with your Convai API key, assign a Convai Character ID, and enable components like Convai Face Sync for expressive speech.

Together, these steps produce avatars that not only appear human but also behave intelligently in live scenes.

Guided setup

1) Scan and prep with Copresence

  1. Install and open the Copresence app; complete a face scan.

  2. Download the MetaHuman export (zip) from your Copresence account.

  3. In Unreal Engine (UE 5.5 as shown), install and enable the Copresence, MetaHuman, and Convai plugins, then restart the project.

  4. Use the Copresence → MetaHuman Import flow, select your scan, and Create MetaHuman.

  5. Open MetaHuman Creator to make refinements, then, via Quixel Bridge, Download and Add the MetaHuman to your Unreal project.

  6. In the Copresence tab, Transfer Assets (textures, custom hair) to the MetaHuman blueprint. If prompted, enable any missing plugins and restart.
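If you prefer to enable the plugins from step 3 outside the editor, you can list them in your project’s .uproject file. A minimal sketch — the exact `Name` strings are assumptions here, so copy them from the `FriendlyName`-bearing .uplugin file inside each plugin’s folder under `Plugins/`:

```json
{
  "FileVersion": 3,
  "EngineAssociation": "5.5",
  "Plugins": [
    { "Name": "MetaHuman",  "Enabled": true },
    { "Name": "Copresence", "Enabled": true },
    { "Name": "ConvAI",     "Enabled": true }
  ]
}
```

After editing, reopen the project so Unreal rebuilds with the newly enabled plugins.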

2) Make the MetaHuman intelligent with Convai

  1. In Unreal: Edit → Project Settings → Convai, paste your API key from your Convai account.

  2. In your Convai dashboard, create a Character, choose voice & language, and copy the Character ID.

  3. In Unreal, select your MetaHuman in the Outliner and paste the Character ID into the exposed field (if provided by the plugin).

  4. Open the MetaHuman blueprint:

    • Set Anim Class (Body) to Convai MetaHuman Body Anim.

    • Set Anim Class (Face) to Convai MetaHuman Face Anim.

    • In Class Settings, change the base class to Convai Base Character.

    • Add Component → Convai Face Sync for real‑time lip‑sync.

    • Compile and Save.
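Before wiring everything up in the blueprint, it can help to sanity-check your API key and Character ID outside Unreal with a direct REST call. The sketch below builds such a request; the endpoint URL, header name, and form fields are assumptions based on Convai’s public REST docs, so verify them against the current API reference:

```python
import urllib.parse
import urllib.request

# Assumed endpoint; check Convai's API reference for the current path.
CONVAI_ENDPOINT = "https://api.convai.com/character/getResponse"

def build_request(api_key: str, char_id: str, user_text: str,
                  session_id: str = "-1") -> urllib.request.Request:
    """Build (but do not send) a chat request for a Convai character."""
    body = urllib.parse.urlencode({
        "userText": user_text,     # the utterance to send
        "charID": char_id,         # Character ID from the Convai dashboard
        "sessionID": session_id,   # "-1" starts a new session (assumed)
        "voiceResponse": "False",  # text-only reply for a quick check
    }).encode()
    return urllib.request.Request(
        CONVAI_ENDPOINT,
        data=body,
        headers={"CONVAI-API-KEY": api_key},  # assumed header name
    )

req = build_request("YOUR_API_KEY", "YOUR_CHARACTER_ID", "What's your name?")
print(req.get_method())  # POST (supplying data makes this a POST request)
# To actually send it: urllib.request.urlopen(req).read()
```

A successful reply here confirms the same key and Character ID you paste into Project Settings and the character field will work in-engine.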

3) Test actions, movement, and voice

  1. Add the First Person feature pack (optional) and set the Game Mode Override accordingly.

  2. Add a Nav Mesh Bounds Volume that covers the walkable area.

  3. Press Play. Use F10 to check microphone settings.

  4. Talk to your avatar (“What’s your name?”, “Follow me”), and confirm speech, lip‑sync, and navigation work as expected.

Use cases

1. Training & L&D

The partnership unlocks massive potential for Learning and Development. Imagine an on-brand, photorealistic AI instructor who demonstrates safety procedures, explains assembly steps, or evaluates trainees in real time—all while speaking and gesturing naturally.

Organizations can use these avatars to deliver immersive soft-skills training, sales simulations, and medical role-plays—bridging the gap between digital learning and real-world behavior.

Convai handles real-time dialogue and reactions, while Copresence ensures the instructor or trainee avatar looks lifelike, creating stronger emotional connection and engagement.

2. Brand & Events

In physical and virtual events alike, brands can now deploy AI greeter avatars that are both hyper-realistic and intelligent. Picture a photoreal brand ambassador that recognizes visitors, answers questions, and provides product demos—all powered by Convai’s conversational AI.

With Copresence scans, these digital hosts can even mirror real brand representatives, creating a powerful sense of familiarity and continuity between the physical and digital experience.

3. Virtual Production & Broadcast

Film and virtual production teams can leverage this workflow to create on-brand digital hosts or co-anchors who can interact with talent, respond in real time, and improvise on set.

With photoreal Copresence scans and Convai’s real-time language understanding, creators can script interactive dialogue segments, live interviews, or AI-driven co-hosts for immersive broadcast experiences—all inside Unreal Engine.

FAQ

1. What does the Convai × Copresence integration enable?
It connects Copresence’s photoreal scanning pipeline with Convai’s conversational AI system—allowing creators to bring lifelike avatars into Unreal Engine that can speak, emote, and act naturally.

2. Which Unreal Engine versions are supported?
The Copresence MetaHuman Import Wizard supports UE 5.5 and has a manual setup path for UE 5.6. Convai’s plugin offers support for multiple Unreal versions and continues to receive regular updates.

3. Can I use the workflow beyond MetaHumans?
Yes. Copresence exports also support glTF/GLB and Blender formats, while Convai’s system works with a range of custom avatar rigs inside Unreal and other engines.

4. Is this suitable for real-time or live experiences?
Absolutely. Both Copresence and Convai are optimized for real-time rendering and interaction. This makes them ideal for events, simulations, and virtual productions that require immediate responsiveness.

5. Do I need specialized hardware or mocap suits?
No—just a smartphone for scanning and a PC running Unreal Engine. Copresence’s app handles photoreal capture, while Convai provides the AI and animation pipeline.

6. Can avatars perform complex tasks like following or pointing?
Yes. Convai’s Action System allows avatars to navigate scenes, follow players, perform gestures, and even interact with in-scene objects through blueprint integration.

Conclusion

The Convai × Copresence partnership marks a major step toward true digital presence. By merging photoreal scanning with agentic conversational AI, creators can now generate avatars that look, sound, and think like real people—all within Unreal Engine.

Whether you’re training the workforce of tomorrow, crafting immersive game worlds, or enhancing live events with digital hosts, this workflow offers a scalable, production-ready path to intelligent digital humans.

What was once a multi-week pipeline of scanning, rigging, and scripting is now achievable in a single afternoon—with Copresence capturing your likeness and Convai bringing it to life.