
DirectX raytracing is the first step to a graphic revolution



This image from EA's SEED group shows realistic shadows, reflections, and highlights rendered with DXR.

At GDC, Microsoft announced a new feature for DirectX 12: DirectX Raytracing (DXR). The new API offers hardware-accelerated raytracing to DirectX applications, heralding a new era of games with realistic lighting, shadows, and materials. One day, this technology could enable the kind of photorealistic imagery we've become accustomed to in Hollywood blockbusters.

Whatever kind of GPU you have, be it Nvidia's monstrous $3,000 Titan V or the modest integrated part in your $35 Raspberry Pi, the basic principles are the same. Indeed, although many aspects of GPUs have changed since 3D accelerators first appeared on the market in the 1990s, they have all been built on a common principle: rasterization.

Here's how it's done today

A 3D scene consists of several elements: there are 3D models, built from triangles, with textures applied to each triangle; there are lights that illuminate the objects; and there is a viewport or camera that looks at the scene from a particular position. In essence, the camera's view of the scene is divided into a grid of pixels (hence rasterization, from the raster grid). For each triangle in the scene, the rasterization engine determines whether the triangle overlaps each pixel; if it does, that triangle's color is applied to the pixel. The rasterization engine works from the triangles farthest away and moves toward the camera, so if one triangle obscures another, the pixel is colored first by the back triangle and then by the front triangle.

This back-to-front, overwriting process is why rasterization is also known as the painter's algorithm; think of the fabulous Bob Ross, who would first paint the sky far in the distance, then block in the mountains, then the happy little trees, then perhaps a small building or a broken-down fence, and finally the foliage and plants closest to the viewer.
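To make the painter's algorithm concrete, here is a minimal, self-contained C++ sketch of back-to-front rasterization over an ASCII "framebuffer." Everything in it (the hard-coded scene, the Triangle struct, the character-based output) is invented for illustration; real GPUs implement this process in dedicated hardware and generally rely on depth buffers rather than literally sorting triangles back to front.

```cpp
#include <algorithm>
#include <array>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

struct Triangle {
    std::array<Vec2, 3> v;  // vertices, already projected to screen space
    float depth;            // distance from the camera, used only for sorting
    char color;             // the triangle's "color", one character for ASCII output
};

// Edge-function test: true if point p lies inside triangle t.
static bool inside(const Triangle& t, Vec2 p) {
    auto edge = [](Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    };
    float e0 = edge(t.v[0], t.v[1], p);
    float e1 = edge(t.v[1], t.v[2], p);
    float e2 = edge(t.v[2], t.v[0], p);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}

int main() {
    const int W = 40, H = 20;
    std::vector<char> framebuffer(W * H, '.');

    std::vector<Triangle> scene = {
        {{{{5, 2}, {35, 4}, {20, 18}}}, 10.0f, '#'},   // far triangle
        {{{{2, 10}, {25, 2}, {18, 16}}}, 5.0f, '@'},   // near triangle
    };

    // The painter's algorithm: sort back to front, then let nearer triangles
    // overwrite the pixels of farther ones.
    std::sort(scene.begin(), scene.end(),
              [](const Triangle& a, const Triangle& b) { return a.depth > b.depth; });

    for (const Triangle& tri : scene)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (inside(tri, {x + 0.5f, y + 0.5f}))  // does this triangle cover this pixel?
                    framebuffer[y * W + x] = tri.color;

    for (int y = 0; y < H; ++y)
        std::printf("%.*s\n", W, &framebuffer[y * W]);
}
```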

Much of the development of the GPU has focused on optimizing this process by reducing the amount that has to be drawn. For example, objects that fall outside the viewport's field of view can be ignored; their triangles can never be visible through the pixel grid. The parts of objects that lie behind other objects can also be ignored; any contribution they make to a pixel will be overwritten by something closer to the camera, so there is no point in computing it.
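As a concrete illustration of the first of those optimizations, here is a small, hedged C++ sketch of view-frustum culling that skips objects whose bounding sphere lies entirely outside the camera's view. The plane convention and the toy box-shaped frustum are assumptions made for this example; a real engine derives the six planes from its camera matrices.

```cpp
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

// A plane stored as (normal, d), with dot(n, p) + d >= 0 meaning "on the inside".
struct Plane { Vec3 n; float d; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// An object can be skipped if its bounding sphere lies completely on the
// outside of any one of the six frustum planes.
static bool outside_frustum(const std::array<Plane, 6>& frustum,
                            Vec3 center, float radius) {
    for (const Plane& p : frustum)
        if (dot(p.n, center) + p.d < -radius)
            return true;  // entirely behind this plane: cull it
    return false;
}

int main() {
    // A toy frustum: the six planes of a box spanning -10..10 in x and y
    // and 1..100 in z (invented for this example).
    std::array<Plane, 6> frustum = {{
        {{ 1, 0, 0}, 10}, {{-1, 0, 0}, 10},    // left, right
        {{ 0, 1, 0}, 10}, {{ 0, -1, 0}, 10},   // bottom, top
        {{ 0, 0, 1}, -1}, {{ 0, 0, -1}, 100},  // near, far
    }};

    Vec3 visible_object = {0, 0, 50};       // inside the box
    Vec3 offscreen_object = {200, 0, 50};   // far off to the side

    std::printf("visible culled? %d\n", outside_frustum(frustum, visible_object, 2.0f));
    std::printf("offscreen culled? %d\n", outside_frustum(frustum, offscreen_object, 2.0f));
}
```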

GPUs have become more complicated over the last two decades, gaining vertex shaders that process the individual triangles, geometry shaders that create new triangles, pixel shaders that modify the pixels after rasterization, and compute shaders that perform physics and other calculations. But the basic operating model has remained the same.

Rasterization has the advantage of being fast; the optimizations that skip hidden triangles are effective and greatly reduce the GPU's workload. Rasterization also lets the GPU stream through the triangles one at a time instead of having to hold them all in memory at once.

But rasterization has problems that limit its visual fidelity. For example, an object that is outside the camera's field of view can't be seen, so it is skipped by the GPU. Yet that object could still cast a shadow into the scene, or it could be visible on a reflective surface within the scene. Even within a scene, white light bouncing off a bright red object will tend to give a red tint to everything that light subsequently falls on; this effect is absent from rasterized images. Some of these deficiencies can be patched with techniques such as shadow mapping (which lets objects outside the field of view cast shadows into it), but the result is that rasterized images always end up looking different from the real world. Fundamentally, rasterization doesn't work the way human vision works. We don't beam a grid of rays out of our eyes and check which objects those rays intersect; rather, the light of the world is reflected into our eyes. Along the way it can bounce off multiple objects, and it can be bent in complex ways as it passes through transparent objects.

Enter raytracing

Raytracing is a technique for generating computer graphics that mimics this physical process much more closely. Rays of light are projected from each light source in a scene and bounce around until they hit the camera. Raytracing can produce far more accurate images; advanced raytracing engines can deliver photorealistic results. That's why raytracing is used to render graphics in movies: computer-generated imagery can be blended with live-action footage without looking out of place or artificial.
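Here is a minimal, self-contained C++ sketch of the idea, assuming a single sphere, a ground plane, and one point light (all invented for this example). Like most practical ray tracers, it follows rays backwards for efficiency, from the camera out into the scene, and fires a secondary shadow ray toward the light to decide whether each point is lit.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }

// Returns the distance along the ray to the sphere, or -1 if the ray misses it.
static float hit_sphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = origin - center;
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0) return -1.0f;
    float t = -b - std::sqrt(disc);
    return t > 0.001f ? t : -1.0f;
}

int main() {
    const int W = 60, H = 30;
    const Vec3 sphere_center = {0, 0, 3};
    const float sphere_radius = 1.0f;
    const Vec3 light = {3, 4, 0};     // point light up and to the right of the camera
    const char* shades = " .:-=+*#";  // darker to brighter

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Primary ray from the camera at the origin through this pixel.
            Vec3 dir = normalize({(x - W / 2) / (float)H, -(y - H / 2) / (float)H, 1});
            float brightness = 0.0f;

            float t = hit_sphere({0, 0, 0}, dir, sphere_center, sphere_radius);
            if (t > 0) {
                // Shade the sphere by the angle between its surface normal and the light.
                Vec3 p = dir * t;
                Vec3 n = normalize(p - sphere_center);
                Vec3 to_light = normalize(light - p);
                brightness = std::fmax(0.0f, dot(n, to_light));
            } else if (dir.y < -0.05f) {
                // The ray hits the ground plane at y = -1; fire a shadow ray toward
                // the light to see whether the sphere blocks it.
                float ground_t = -1.0f / dir.y;
                Vec3 p = dir * ground_t;
                Vec3 to_light = normalize(light - p);
                bool shadowed = hit_sphere(p, to_light, sphere_center, sphere_radius) > 0;
                brightness = shadowed ? 0.05f : 0.5f;
            }
            std::putchar(shades[(int)(brightness * 7.99f)]);
        }
        std::putchar('\n');
    }
}
```

The shadow ray is the key difference from rasterization: whether a point is in shadow is decided by asking the whole scene whether anything blocks the path to the light, not by what happens to be drawn at that pixel.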

But raytracing has a problem: it is enormously compute-intensive. Rasterization has been extensively optimized to limit the GPU's workload; in raytracing, those optimizations don't apply, because potentially any object can contribute shadows or reflections to the scene. Raytracing has to simulate millions of rays of light, and much of that work can be wasted on rays that bounce away from the camera or end up hidden behind something else.

This is not a problem for movies; the companies that produce film graphics will happily spend hours rendering individual frames, using huge server farms to process frames in parallel. But it is a huge problem for games, where you get only about 16 milliseconds to draw each frame (at 60 frames per second), or even less for VR.

However, modern GPUs are enormously fast. And although they aren't yet fast enough to ray trace highly complex, high-fidelity games in their entirety, they do have enough computational resources to do some bits of raytracing. That's where DXR comes in. DXR is a raytracing API that extends the existing rasterization-based Direct3D 12 API. The 3D scene is arranged in a way that is amenable to raytracing, and with the DXR API, developers can create rays and trace their path through the scene. DXR also defines new shader types that allow programs to interact with the rays as they hit objects in the scene.
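Real DXR code is written against Direct3D 12 and HLSL shaders, which is too much machinery to reproduce here; instead, the following is a purely conceptual, plain-C++ sketch of the structure the paragraph describes, in which the application supplies separate programs for generating rays and for reacting to hits and misses. All names and the hard-coded "scene" below are invented for illustration and are not part of the DXR API.

```cpp
#include <cstdio>
#include <optional>

struct Ray { float origin[3]; float dir[3]; };
struct Hit { float distance; int object_id; };

// Stand-in for the scene's acceleration structure: given a ray, report the
// closest intersection, if any. (Hard-coded here; a real implementation
// searches a spatial data structure built over the scene geometry.)
static std::optional<Hit> trace(const Ray& r) {
    // Pretend that rays close to the view axis hit object 42 four units away.
    if (r.dir[0] + r.dir[1] < 0.3f) return Hit{4.0f, 42};
    return std::nullopt;
}

int main() {
    // The application supplies what to do when a ray hits something or nothing,
    // analogous to providing hit and miss programs to a raytracing pipeline.
    auto on_closest_hit = [](const Ray&, const Hit& h) {
        std::printf("hit object %d at distance %.1f -> shade it\n", h.object_id, h.distance);
    };
    auto on_miss = [](const Ray&) {
        std::printf("missed everything -> return sky color\n");
    };

    // "Ray generation": launch one ray per pixel of a tiny 2x2 image.
    for (int y = 0; y < 2; ++y) {
        for (int x = 0; x < 2; ++x) {
            Ray r{{0, 0, 0}, {x * 0.2f, y * 0.2f, 1.0f}};
            if (auto hit = trace(r))
                on_closest_hit(r, *hit);
            else
                on_miss(r);
        }
    }
}
```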

Because of the performance demands, Microsoft expects DXR to be used, at least initially, to fill in the things that raytracing does very well and rasterization does not: effects such as reflections and shadows. DXR should make games look more realistic. We may also see simple, stylized games that rely exclusively on raytracing.
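To show what that hybrid might look like in the simplest possible terms, here is a hypothetical sketch in which an ordinary rasterizer supplies each pixel's base color and ray tracing contributes only a shadow term. Both helper functions are stand-ins invented for this example, not anything from DXR or a real engine.

```cpp
#include <cstdio>

struct Color { float r, g, b; };

// Pretend output of the existing rasterization pass for one pixel.
static Color rasterized_color(int x, int y) {
    return {x / 8.0f, y / 8.0f, 0.5f};
}

// Pretend result of tracing a single shadow ray from the surface visible at
// this pixel toward the light: 1.0 = fully lit, 0.0 = fully in shadow.
static float traced_shadow_factor(int x, int y) {
    return (x > 3 && y > 3) ? 0.2f : 1.0f;  // a hard-coded shadowed corner
}

int main() {
    // Combine the two passes: rasterization provides the image, and ray tracing
    // darkens the pixels its shadow rays found to be occluded.
    for (int y = 0; y < 8; ++y) {
        for (int x = 0; x < 8; ++x) {
            Color base = rasterized_color(x, y);
            float shadow = traced_shadow_factor(x, y);
            Color shaded = {base.r * shadow, base.g * shadow, base.b * shadow};
            std::printf("%s", shaded.r + shaded.g + shaded.b > 1.0f ? "#" : ".");
        }
        std::printf("\n");
    }
}
```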

Microsoft says it has been working on DXR for almost a year, and Nvidia in particular has had a lot to say about it. Nvidia has its own raytracing engine designed for its Volta architecture (although, with the Titan V currently the only Volta video card on the market, its reach is likely to be limited for now). When run on a Volta system, DXR applications will automatically use that engine.

Microsoft says, somewhat vaguely, that DXR will work with hardware that is currently on the market, and that it will offer a fallback layer that lets developers experiment with DXR on whatever hardware they have. If DXR catches on, we can imagine future hardware adding features tailored to the needs of raytracing. On the software side, Microsoft says that EA (with the Frostbite engine used in the Battlefield series), Epic (with the Unreal Engine), Unity 3D (with the Unity engine), and others will be adding DXR support soon.

