3 Technologies Introduced by Unity That Will Revolutionize the Digital Human

MMO Guides
7 min read · Dec 15, 2022


Unity, which unveils a tech demo each year to introduce its latest features, revealed a new one, ‘Enemies’, in March. Created by the Unity Demo Team, whose previous demo ‘The Heretic’ has surpassed 4 million views, it packed in the many challenges of implementing a ‘digital human’, recently a hot topic, together with Unity’s answers to them.

In addition, at Unite 2022, held offline in November, Unity shared the graphics technology behind realistic digital humans by releasing the demo and its digital human package for free. What did Unity want to show developers through the tech demo and the digital human package? Mark Schoennagel, a Unity senior evangelist who visited Korea for an industry event, answered these questions while walking through the demo.

■ Capturing light that changes in real time: Adaptive Probe Volumes

If ‘The Heretic’ showed the foundations of the digital human and photorealism, ‘Enemies’ showed technology that goes a step further and responds to changes in real time. In particular, Schoennagel emphasized ‘light’. Real-time lighting technology was applied in The Heretic as well, but it needed preparation, such as splitting lights into layers, before it could be used, rather than being applicable right away.

Unity, which announced the Scriptable Render Pipeline, introduced HDRP for rendering high-quality graphics in 2019, and its research into real-time lighting has continued since. Based on feedback and output from not only its own engine developers but also the developers using the engine, Unity wanted to capture and present the way a scene’s impression shifts as the light changes.

Within the digital human field, the part Unity found noteworthy was the phenomenon in which patterns of light bouncing off or passing through a surface, such as water or glass, are cast onto other objects. Familiar to developers under the term ‘caustics’, it comes up whenever real-time ray tracing is mentioned. Because these reflections are so tightly bound to the light itself, they only read as lifelike when they are computed and applied immediately, in real time.

Normally, this work requires materials that artists have authored in advance with the expected lighting changes in mind. Otherwise, rendering the changes in the engine involves so much extra work that delays are unavoidable, and baking the lighting can take not a few seconds but hours or even days, depending on the scale.

To solve this, Unity introduced ‘Adaptive Probe Volumes’. The technology automatically places and distributes thousands of light probes in the scene, and based on those probes, the changes to objects can be captured and applied in real time as the scene’s light sources change.
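Adaptive Probe Volumes themselves are configured in the editor rather than through code, so there is nothing APV-specific to script here. As a minimal, hypothetical sketch of the “light source changing in the scene” that the probes are described as capturing, the component below just animates a directional sun light at runtime; every class and field name is illustrative, not from Unity’s demo.

```csharp
using UnityEngine;

// Minimal sketch: animate a directional "sun" so that probe-lit objects in the
// scene update as the light changes. The probe volume itself is set up in the
// editor; nothing here is APV-specific. Names are illustrative.
public class SunCycle : MonoBehaviour
{
    [SerializeField] private Light sun;                  // assign a directional light
    [SerializeField] private float degreesPerSecond = 10f;

    private void Update()
    {
        // Rotate the sun around the world X axis to simulate a day/night cycle.
        sun.transform.Rotate(Vector3.right, degreesPerSecond * Time.deltaTime, Space.World);

        // Dim the light as it dips below the horizon.
        float elevation = Vector3.Dot(-sun.transform.forward, Vector3.up);
        sun.intensity = Mathf.Max(0f, elevation) * 2f;
    }
}
```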

Since thousands of light probes can be involved in this process, you might expect an optimization problem. However, because the light probe is a concept Unity introduced precisely to reduce the cost of lighting calculations, the actual performance impact is not that large. There is also a menu for adjusting a probe volume individually, so developers can use it as efficiently as they want: reducing the number of probes, or even changing the volume’s shape or size to achieve the desired effect.

Another advantage of Adaptive Probe Volumes is that they can be applied directly inside Unity to any asset. In fact, many of the objects in the Enemies demo, including the statue shown later, were photographed, scanned, and modeled. In the past, lightmaps had to be added, which made scanned models difficult to use, but now, wherever the probes operate, such assets can be used immediately without those extra steps.

■ Skin and hair made more sophisticated with AI, and real-time expression changes through rigging

Unity put particular care into the details of the character’s hair, an area of great interest, showcasing its ‘hair system’ separately. Creating each individual strand is not just a matter of resources: everything from authoring it to look live-action to controlling the resulting simulation is a challenge.

Schoennagel said that to appear truly ‘realistic’, not only the technology but also human anatomy and the artistic side must be considered. People tend to think of hair quite literally as hair, so it is easy to assume realism is achieved once the hair flutters and moves convincingly. Unity, however, also paid attention to the blind spot in that view: anatomically, hair is fixed to the scalp, over the skull.

Moreover, the scalp and the muscles beneath it very rarely shift from their base unless the rest of the musculoskeletal system moves. So the hair sways, but the outline of the hairstyle does not change. Other hair on the body, such as mustaches, beards, eyebrows, and fine body hair, is different: facial expressions, that is, muscle movement, shift the positions of the roots and change their direction.

In other words, focusing only on the ‘hair’ makes it easy to overlook that the roots shift position as the underlying muscles move. A detail that trivial can still trigger the uncanny valley, so Unity tried to account for the hair across the whole body and implement it close to life.
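As a rough sketch of this anatomy point (and not Unity’s hair system itself), the hypothetical component below pins follicle root transforms, say for a beard or eyebrows, to vertices of the skinned face mesh using Unity’s standard SkinnedMeshRenderer.BakeMesh API, so those roots ride along with expressions while scalp roots stay fixed. All field names are illustrative.

```csharp
using UnityEngine;

// Sketch: keep facial-hair roots attached to the deforming face. Each root
// transform follows one vertex of the skinned face mesh every frame, so roots
// move with expressions, exactly the detail described above. Assumes the
// renderer's transform has unit scale.
public class FollicleAnchor : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer face;  // the rigged face mesh
    [SerializeField] private Transform[] roots;         // one transform per follicle root
    [SerializeField] private int[] anchorVertices;      // vertex index each root follows

    private Mesh baked;

    private void LateUpdate()
    {
        if (baked == null) baked = new Mesh();

        // Snapshot the current deformed pose of the skinned mesh.
        face.BakeMesh(baked);
        Vector3[] verts = baked.vertices;

        // Move every follicle root to its anchor vertex in world space.
        for (int i = 0; i < roots.Length; i++)
            roots[i].position = face.transform.TransformPoint(verts[anchorVertices[i]]);
    }
}
```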

Another barrier to implementing a hair system for digital humans was that it had to account not only for human ethnicity but also for the fur quality that varies by animal species. Considering animal fur was not just a quirk of the English word ‘hair’: it was necessary for the system to be usable broadly, beyond digital humans. Games and other creative work in particular use many styles, not only live-action-like hair but cartoon looks as well.

To cover all of this, Unity collected its own data and accumulated feedback from developers and creators. In addition, this year it acquired Ziva Dynamics, a company that had built up digital human know-how using AI technology, to reinforce its digital human work. Ziva’s AI and digital human expertise has been added to the data and technology Unity had already accumulated, so that everything from physically simulated strands to LOD handling can be implemented comprehensively in the Unity Hair System.

Since creating and rendering hundreds of thousands of strands of hair eats up so many resources, I was curious about the secret to improving this so dramatically. I wondered whether Unity’s recent technology direction, the Data-Oriented Technology Stack (DOTS), had been folded in piece by piece. But Schoennagel replied that it had not.

Of course, DOTS is an excellent technology for handling millions of objects efficiently, but it is not what is applied to the hair system. Combined with high resolutions and ray tracing, there are limits to handling hair efficiently on a per-object basis, and the more objects there are, the harder that becomes.

To put it simply, DOTS is a useful technology for implementing large numbers of objects, while the hair system is about building near-photorealistic detail into each individual object. No one knows how this will change in the future, but at this stage the hair system is not tied to DOTS.
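The article does not describe the hair system’s internals, so the sketch below is only a generic, assumed illustration of why strand hair is a problem of depth within one object rather than object count: a classic Verlet chain simulated per strand, with every number and name invented for the example.

```csharp
using UnityEngine;

// Generic strand simulation sketch (not Unity's hair system): one Verlet chain
// per strand, root pinned to the scalp, with a distance constraint so segments
// keep their length. One character multiplies this by every strand, every frame.
public class StrandSim
{
    private readonly Vector3[] pos;
    private readonly Vector3[] prev;
    private readonly float segmentLength;

    public StrandSim(Vector3 root, int segments, float segmentLength)
    {
        this.segmentLength = segmentLength;
        pos = new Vector3[segments];
        prev = new Vector3[segments];
        for (int i = 0; i < segments; i++)
            pos[i] = prev[i] = root + Vector3.down * (segmentLength * i);
    }

    public void Step(Vector3 root, Vector3 gravity, float dt)
    {
        pos[0] = root;                                  // the root is pinned to the scalp
        for (int i = 1; i < pos.Length; i++)
        {
            Vector3 velocity = pos[i] - prev[i];        // implicit velocity from last frame
            prev[i] = pos[i];
            pos[i] += velocity + gravity * (dt * dt);   // Verlet integration step
        }
        // Enforce constant segment length so the strand does not stretch.
        for (int i = 1; i < pos.Length; i++)
        {
            Vector3 dir = (pos[i] - pos[i - 1]).normalized;
            pos[i] = pos[i - 1] + dir * segmentLength;
        }
    }
}
```

At, say, 100,000 strands of 32 points each, a single head already means roughly 3.2 million point updates per pass each frame: detail concentrated inside one object, which is exactly the distinction drawn above between the hair system and DOTS.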

Another important point was the acquisition of Ziva Dynamics. Unity has built up its engine and services not only through in-house development but also by acquiring companies with a range of technologies, and Ziva Dynamics was one of them. Several times this year, Unity has said it will show more advanced technology together with Ziva Dynamics.

In general, to animate facial expressions, the common technique is to sculpt the expressions on the model in advance and blend between them to suit the situation. In the demonstration Schoennagel actually gave, however, the model’s expression changed freely through face rigging produced by Ziva Face Trainer, with no separate blend shapes. And because it works directly in both Maya and Unity, the result could be confirmed right in the engine.

Another advantage of Face Trainer is that it can be applied immediately even when the face data changes. Rather than storing a separate set of pre-made poses, it identifies the rig points of the face data and applies them based on what it has learned. The training draws on a variety of data, including 4D capture, and the rig can be pushed beyond realistic parameters or pulled toward a stylized, manga-like look.
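For contrast, here is a minimal sketch of the conventional blend-shape approach described above, the one Face Trainer does away with, using Unity’s standard SkinnedMeshRenderer blend-shape API; the shape index and easing speed are illustrative.

```csharp
using UnityEngine;

// The conventional approach: pre-authored blend shapes on the mesh, blended by
// weight at runtime. Every expression must be sculpted in advance before it can
// be weighted here. Shape index and speed are illustrative.
public class BlendShapeSmile : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer face;
    [SerializeField] private int smileShapeIndex = 0;          // index of a "smile" blend shape
    [Range(0f, 100f)] [SerializeField] private float target = 100f;

    private void Update()
    {
        // Ease the current weight toward the target (Unity weights run 0..100).
        float current = face.GetBlendShapeWeight(smileShapeIndex);
        float next = Mathf.MoveTowards(current, target, 60f * Time.deltaTime);
        face.SetBlendShapeWeight(smileShapeIndex, next);
    }
}
```

The limitation is visible in the code itself: nothing can be blended that was not sculpted beforehand, which is exactly the authoring cost a learned rig avoids.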

Creating realistic figures with digital human technology is fascinating, but one challenge remains: it is still hard to make full use of it anywhere outside of video content. For developers, however much PC and console specifications improve, the optimization burden shrinks, and large file sizes become less conspicuous, it is hard to imagine devoting an entire gigabyte-scale resource to a single asset.
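As a back-of-the-envelope illustration of that gigabyte-scale concern, with every number invented for the example rather than taken from Unity:

```csharp
using System;

// Rough sketch of why a single digital-human asset can cross into gigabytes.
// All figures are illustrative assumptions, not Unity's numbers.
public static class AssetBudget
{
    public static void Main()
    {
        long bytesPerPoint = 12;                        // one float3 position

        long strands = 100_000, pointsPerStrand = 32;
        long hairRest = strands * pointsPerStrand * bytesPerPoint;   // ~38 MB rest pose

        long frames = 3_000, vertices = 60_000;         // ~100 s of raw 4D face capture
        long faceCapture = frames * vertices * bytesPerPoint;        // ~2.2 GB uncompressed

        Console.WriteLine($"hair rest pose: {hairRest / 1_000_000} MB");
        Console.WriteLine($"raw 4D capture: {faceCapture / 1_000_000_000.0:F1} GB");
    }
}
```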

Moreover, games call for not only live-action styles but cartoon styles as well.
