On April 20, NCSOFT released a developer commentary video for Project M. The South Korean game company is positioning Project M as its most ambitious endeavor, one in which players' in-game decisions affect how events unfold.
Project M is an interaction-based action-adventure game built around an immersive world, where the plot develops and evolves depending on the player's knowledge.
With its Digital Human, NCSOFT is showcasing one of its artificial intelligence achievements through Project M.
To harness cutting-edge AI and graphics technology to an unparalleled degree, Project M was built in Unreal Engine 5, producing graphics with astounding detail.
During Epic Games' State of Unreal presentation at the Game Developers Conference in March 2023, NCSOFT unveiled a brand-new trailer for Project M, an upcoming action-adventure game with immersive interactive elements currently in development for unannounced consoles.
The creation, known as the Digital Human, was made using NCSOFT's AI technology along with its cutting-edge art and graphics capabilities. The company's AI text-to-speech (TTS) synthesis technology, which converts text data into natural human speech that captures a specific person's accent, voice, and emotion, produced the digital human's spoken voice for the trailer.
NCSOFT's voice-to-face technology was used to synchronize lip-sync and facial expressions. This AI-based facial animation technology generates facial animations that correspond to given text or speech input.
In Project M's environment, players can revisit moments in time and space that are only temporarily available. The core gameplay is designed to expand and alter the narrative based on the knowledge the player gains through play.
This project uses Unreal Engine 5 together with motion capture, 3D scanning, and visual effects technology to create lifelike visuals.
Developer commentary provides a sneak peek into the latest updates on Project M
Yoo Seung-hyun, the Project Director, used the video to describe the visual R&D and AI technologies behind the GDC trailer and Project M's development status.
Regarding the status and direction of Project M's development, Yoo stated, "We are building a reality-based universe while incorporating the developers' creativity. Development is ongoing to enhance immersion and deliver a realistic experience, and all play is naturally tied together."
Project M's high-quality graphics, featured in the video, draw heavily on NC's visual R&D. Realistic images are created using techniques such as 3D scanning, motion capture, and visual effects (VFX), alongside new Unreal Engine 5 technologies such as Nanite, Virtual Shadow Maps, and Lumen.
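For readers curious how these renderer features are typically switched on in an Unreal Engine 5 project (NCSOFT's actual configuration is not public, so this is only a generic sketch), the relevant `DefaultEngine.ini` settings look roughly like this; Nanite itself is enabled per static-mesh asset rather than through these project-wide flags:

```ini
[/Script/Engine.RendererSettings]
; Use Lumen for dynamic global illumination and for reflections
r.DynamicGlobalIlluminationMethod=1
r.ReflectionMethod=1
; Lumen's software ray tracing relies on mesh distance fields
r.GenerateMeshDistanceFields=True
; Enable Virtual Shadow Maps, the high-detail shadow method paired with Nanite
r.Shadow.Virtual.Enable=1
```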
The scene replicating a Seoul alleyway, in particular, portrays the exteriors of numerous buildings and objects in realistic detail. The alleys, hospitals, and covert bases seen in the trailer are not simply props for video production but playable environments available to all users.
Project M is being developed with the help of AI technologies. The AI voice and facial animation used to create the digital humans in the trailer are also being applied in game development.
"Many NPCs with appropriate interaction and performance are required to convey a vivid plot," Yoo said. "We are now integrating artificial intelligence (AI) technologies into the pipeline to boost game production efficiency."