
Realism of a three-dimensional image. Creation of realistic product images

3D art encompasses a range of forms: a kind of graffiti, three-dimensional computer graphics, and realistic drawings that create the illusion of a three-dimensional scene.

Artists have always strived for a believable representation of nature and their surroundings. In the modern age this is easy to achieve with advanced tools, yet there is something especially fascinating and appealing about hand-drawn 3D images. After all, the technique of 3D drawing requires great skill and patience, not to mention talent.

We invite you to admire the creations of different masters whose works are made in a realistic 3D genre.

1. Points.

Simple, elegant and whimsical 3D drawing that looks realistic.

2. "Hall of the Giants", Palazzo Te, Mantua, Italy

The 16th century illusionist frescoes by Giulio Romano date back to the origins of 3D art.

3. 3D pencil drawing of Nagai Hideyuki

The artist creates a three-dimensional illusion using only a sketchbook and colored pencils.

4. Museum of 3D paintings in Chiang Mai, Thailand

There is a whole museum dedicated to 3D art in Thailand. Its halls are filled with large frescoes that look completely real.

5. Coca-Cola illusion

Inspiration for 3D art often comes from popular objects in our daily lives. The classic choice is a bottle of Cola.

6. Computer graphics: Girl

Who would have thought that this girl does not exist?

7. Columns of the Corinthian order

Lovely 3D pencil drawing of two Corinthian columns.

8. Realistic waterfall in the town of Dvur Kralove, Czech Republic

Part of a city park in the Czech Republic has been turned into the illusion of a beautiful waterfall.

9. Globe

It's not uncommon for 3D art to be used in marketing. This picture of the globe encourages people to fight poverty.

10. Igor Taritas

The young artist creates paintings based on the principles of hyperrealism. This canvas exudes the depth of the real world, as if we could step onto the stage if we wished.

11. Davy Jones by Jerry Groshke

A classic Pirates of the Caribbean character created by a 3D CG artist.

12. Kazuhiko Nakamura

A Japanese 3D artist who creates imaginative steampunk imagery using 3D software.

13. Kurt Wenner: Wild Rodeo in Calgary, Canada

One of the most famous contemporary 3D artists, Kurt Wenner, portrayed a fictional rodeo in a Canadian city.

14. Léon Cyrus, Ruben Ponzia, Remco van Scheik and Peter Westering

Four artists have teamed up to create this incredible illusion of a Lego army.

15. Lodz, Poland

Swimming pool near a busy shopping center in Lodz, Poland. Hope no one jumped into it.

16. Market

A beautiful 3D still life painted on the asphalt near a vegetable market, complementing the ambience with effortless sophistication.

17. MTO, Rennes, France

Street artist MTO created a series of large-scale 3D murals in Rennes, France. His wall paintings feature giants trying to infiltrate people's homes. The pictures are both stunning and terrifying.


Ways to achieve realism in 3D graphics

Works made with 3D computer graphics attract the attention both of 3D designers and of those who have only a vague idea of how it was all done. The most successful 3D works cannot be distinguished from real footage and, as a rule, spark heated debates about what they are: a photograph or a three-dimensional fake. Inspired by the work of renowned 3D artists, many take up the study of 3D editors, believing them to be as easy to master as Photoshop. In fact, programs for creating 3D graphics are quite difficult to learn, and mastering them takes a lot of time and effort. Even after learning the tools of a 3D editor, a novice designer may struggle to achieve a realistic image: faced with a scene that looks "dead", they cannot always explain why. What is the matter?

The main problem in creating photorealistic images is the difficulty of accurately simulating the environment. The picture obtained as a result of rendering in a 3D editor is the product of mathematical calculations following a given algorithm. It is difficult for software developers to find an algorithm that describes all the physical processes taking place in real life, so modeling the environment rests on the shoulders of the 3D artist. There is a certain set of rules for creating a realistic 3D image, and these rules remain the same regardless of which 3D editor you work in or how complex your scenes are. The result of working in a 3D editor is a static image or an animation, and depending on which the final product will be, the approach to achieving realism may differ.

We start with composition

The placement of objects in a 3D scene matters greatly for the final result. Objects should be positioned so that the viewer is not left guessing at a part of an object that accidentally fell into the frame, and can recognize all the components of the scene at first glance. When composing a three-dimensional scene, pay attention to the position of objects relative to the virtual camera: objects closer to the lens appear visually larger, so make sure that objects of the same size line up correctly in perspective. Whatever the plot of the scene, it should reflect the consequences of events that happened in the past. If, for example, someone's footprints lead to a snow-covered house, the viewer will conclude that someone has entered the house. When working on a 3D project, also pay attention to the general mood of the scene, which can be conveyed by a well-chosen decorative element or a particular range of colors. Adding a candle will accentuate the romance of a setting; cartoon characters call for bright colors, while a disgusting monster calls for dark shades.

Don't forget the details

When working on a 3D project, always take into account how visible an object is in the scene, how well lit it is, and so on; the object should be detailed to a greater or lesser degree accordingly. The three-dimensional world is a virtual reality where everything resembles theatrical scenery: if the back of an object cannot be seen, don't model it. If you have a bolt with a nut screwed onto it, there is no need to model the thread under the nut; if only the facade of a house is visible in the scene, there is no need to model the interior; and in a night-forest scene, attention should go only to the objects in the foreground. Trees in the background will barely be visible in the rendered image, so there is no point in modeling them down to the individual leaf.

Often, small details play almost the main role in making a three-dimensional model look realistic. If you are having trouble making your scene convincing, try increasing the level of detail of its objects: the more small details a scene contains, the more believable the final image looks. Adding detail is almost a win-win option, but it has one drawback: a large number of polygons, which increases rendering time. To see how directly realism depends on detail, consider a simple example: render three models of a blade of grass and the image will make no impression on the viewer, but clone that group of objects many times and the result looks far more impressive. You can control detail in two ways: by increasing the number of polygons in the scene, as described above, or by increasing the resolution of the textures. In many cases it makes sense to put more effort into the texture than into the model itself: you save the system resources required to render complex models and thereby reduce rendering time. It is better to make a higher-quality texture than to add polygons. The wall of a house is a great example of judicious texture use: you could model each brick individually, which costs both time and resources, but it is much easier to use a photograph of a brick wall.

If you need to create a landscape

One of the most difficult tasks 3D designers regularly face is modeling nature. Why is our natural environment such a problem? The point is that any organic object, be it an animal or a plant, is heterogeneous. Despite a seemingly symmetrical structure, the shape of such objects defies the kind of exact mathematical description that 3D editors deal in. Even objects that look symmetrical at first glance turn out to be asymmetric on closer examination: the hair on a person's head lies differently on the right and left sides, a leaf on a tree branch may be damaged by a caterpillar in one spot, and so on. The best tool for simulating organic matter in 3D is the fractal algorithm, which appears in many material settings and 3D modeling tools. This algorithm helps simulate organic forms better than other mathematical expressions, so when creating organic objects, be sure to use fractal capabilities to describe their properties.
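The fractal idea can be illustrated with a minimal sketch. Midpoint displacement, one of the simplest fractal algorithms, builds a convincingly irregular terrain profile from nothing but repeated random offsets; the function below is an illustration, not code from any particular 3D editor:

```python
import random

def midpoint_displacement(heights, amplitude, roughness=0.5, iterations=6):
    """Fractal 1D terrain profile via midpoint displacement.

    `heights` starts as [left_end, right_end]; each pass inserts a
    displaced midpoint into every segment and then shrinks the random
    amplitude by `roughness`, giving self-similar natural-looking
    irregularity at every scale.
    """
    for _ in range(iterations):
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-amplitude, amplitude)
            new += [a, mid]
        new.append(heights[-1])
        heights = new
        amplitude *= roughness
    return heights

# a jagged ridge line between two flat endpoints
profile = midpoint_displacement([0.0, 0.0], amplitude=1.0)
# len(profile) == 2**6 + 1 == 65
```

The same subdivide-and-perturb principle, extended to 2D grids, underlies many terrain generators.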

Subtleties of material creation

The materials imitated in three-dimensional graphics can be very diverse, from metal, wood and plastic to glass and stone. Each material is determined by a large number of properties, including surface relief, specularity, pattern, and the size and brightness of highlights. When rendering any texture, remember that the quality of the material in the resulting image depends on many factors: lighting parameters (brightness, angle of incidence, color of the light source, etc.), the rendering algorithm (the type of renderer used and its settings), and the resolution of the raster texture. The way the texture is projected onto the object also matters a great deal: a poorly mapped texture can "give away" a 3D object with a visible seam or a suspiciously repetitive pattern. In addition, real objects are usually not perfectly clean; they always carry traces of dirt. If you are modeling a kitchen table, then even though the pattern on the oilcloth repeats, its surface should not look identical everywhere: the oilcloth can be worn at the corners of the table, have knife cuts, and so on. To keep your 3D objects from looking unnaturally clean, you can paint dirt maps by hand (for example, in Adobe Photoshop) and blend them with the original textures to get a realistic "worn" material.
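As an illustration of the dirt-map approach, blending a hand-painted grayscale dirt map into a base texture might look like the sketch below. The function name and array conventions are assumptions for the example, not any editor's actual API:

```python
import numpy as np

def blend_dirt(base, dirt, weight):
    """Blend a grayscale dirt map into a base texture.

    base:  (H, W, 3) float RGB array in [0, 1]
    dirt:  (H, W) float map in [0, 1], where 1 = clean, 0 = fully dirty
    weight: how strongly the wear shows (0 = none, 1 = full effect)
    """
    dirt3 = dirt[..., None]                    # broadcast over RGB channels
    return base * (1 - weight + weight * dirt3)

base = np.ones((2, 2, 3))                      # pure white "oilcloth"
dirt = np.array([[1.0, 0.5],
                 [0.5, 0.0]])                  # worn toward one corner
worn = blend_dirt(base, dirt, weight=0.5)
# the clean texel stays at 1.0; the dirtiest corner darkens to 0.5
```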

Adding motion

When creating animation, the geometry of objects plays a more important role than in a static image. During movement the viewer sees objects from different angles, so a model must look convincing from all of them. For example, when modeling trees for a static scene you can resort to a trick: instead of building a "real" tree, make two intersecting perpendicular planes and apply a texture with a transparency mask. For an animated scene this method will not do, since such a tree looks right from only one point, and any camera rotation gives the fake away. Conversely, once 3D objects leave the field of view of the virtual camera, it is usually best to remove them from the scene; otherwise the computer performs needless work calculating invisible geometry.
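A crude sketch of that culling idea: test whether an object lies inside the camera's view cone and skip it otherwise. Real engines test against the full view frustum; the simple cone test below is only an illustration:

```python
import math

def in_view(camera_pos, camera_dir, fov_deg, point):
    """Return True if `point` falls inside the camera's view cone.

    `camera_dir` must be a unit vector. Objects failing this test can
    be dropped from the scene so the renderer does not waste time on
    invisible geometry.
    """
    dx = [p - c for p, c in zip(point, camera_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0:
        return True                      # the point sits on the camera itself
    cos_angle = sum(d * f for d, f in zip(dx, camera_dir)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

# camera at the origin looking down +Z with a 60-degree cone
assert in_view((0, 0, 0), (0, 0, 1), 60, (0, 0, 5))       # straight ahead
assert not in_view((0, 0, 0), (0, 0, 1), 60, (0, 0, -5))  # behind the camera
```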

The second thing to consider in animated scenes is the constant motion of most real-world objects: curtains sway in the wind, the hands of a clock turn, and so on. When creating animation, analyze the scene and mark the objects that need motion assigned to them. Incidentally, motion lends realism to static scenes as well, but there it should be implied by frozen details: a shirt sliding off the back of a chair, caterpillars crawling on a trunk, a tree bent by the wind. While it is relatively easy to animate the simpler objects in a scene realistically, simulating the movement of a character without auxiliary tools is nearly impossible. In everyday life our movements are so natural and habitual that we never think about whether to throw our head back while laughing or to duck when passing under a low canopy. Modeling such behavior in three-dimensional graphics involves many pitfalls; it is hard to recreate the movements, let alone the facial expressions, of a person. That is why the task is simplified with the following method: a large number of sensors are attached to a human body, recording the movement of each part in space and sending the corresponding signals to a computer, which processes the data and applies it to a skeletal model of the character. This technology is called motion capture. When moving the shell attached to the skeletal base, muscular deformation must also be taken into account; 3D animators who work on character animation will find it useful to study anatomy in order to better understand the systems of bones and muscles.

Lighting is not only light but also shadows

Creating a scene with realistic lighting is another challenge on the way to a convincing final image. In the real world light rays are reflected and refracted many times by objects, so the shadows they cast mostly have soft, blurred edges. The quality of shadow rendering depends chiefly on the rendering engine, but the shadows themselves have requirements of their own. A shadow cast by an object can say a lot: how high the object is above the ground, what the structure of the receiving surface is, what kind of source illuminates the object, and so on. If you forget about shadows, the scene will never look realistic, since in reality every object casts one. A shadow can also emphasize the contrast between foreground and background, and it can "give away" an object outside the field of view of the virtual camera, letting the viewer imagine the surroundings of the scene. For example, seeing the shadow of branches and leaves falling on a character's shirt, the viewer can guess that a tree grows behind the camera position. On the other hand, too many shadows will not make the image more realistic. Be careful not to cast shadows from auxiliary lights. If several light-emitting objects are in the scene, lanterns for example, all elements of the scene should cast shadows from each of those sources; but if you add auxiliary sources (say, to brighten dark parts of the scene), do not create shadows from them. An auxiliary source should be invisible to the viewer, and its shadows would betray its presence.

When building a scene, it is also important not to overdo the number of light sources. It is better to spend a little time finding the best position for one light than to use several where one will do. When several sources are genuinely necessary, make sure each of them casts shadows; if you cannot see the shadows from a given source, another, stronger source may be washing them out. When placing light sources, pay attention to their color as well. Daylight sources have a bluish tint, while an artificial light source should be given a yellowish one. Bear in mind, too, that the color of a source simulating daylight depends on the time of day: if the scene is set in the evening, the lighting can lean toward the reddish shades of sunset.

The most important thing is rendering

Rendering is the final and, without doubt, the most crucial stage in the creation of a three-dimensional scene. The 3D editor computes the image taking into account the geometry of objects, the properties of their materials, the location and parameters of light sources, and so on. If working in 3ds max is compared with video filming, the rendering engine plays the role of the film stock: just as two films from different manufacturers can yield bright or faded pictures, the result of your work can be realistic or merely passable depending on the rendering algorithm you choose. The variety of rendering algorithms has led to a growing number of external plug-in renderers, and the same renderer can often integrate with several 3D graphics packages. In speed and image quality, external renderers generally surpass the built-in rendering engines of 3D editors. Yet it is impossible to say definitively which of them gives the best result: "realism" here is subjective, since there are no objective criteria for assessing how realistic a renderer's output is.

We can say for certain, however, that for the final image to be realistic, the rendering algorithm must account for how light actually propagates. As noted above, a ray of light hitting objects is reflected and refracted many times. It is impossible to calculate the illumination at every point in space with an infinite number of reflections, so two simplified models are used to determine light intensity: ray tracing and global illumination. Until recently the most popular rendering algorithm was ray tracing, in which the 3D editor follows the path of a ray emitted by a light source through a given number of refractions and reflections. Tracing alone cannot deliver a photorealistic image, because the basic algorithm does not account for reflective and refractive caustics (the highlights produced when light is reflected or refracted) or for light scattering. Today, using the global illumination method is a prerequisite for a realistic image: whereas tracing renders only the parts of the scene struck directly by light rays, global illumination calculates the diffusion of light into unlit and shadowed parts of the scene by analyzing every pixel of the image, taking into account all reflections of light rays in the scene.
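The core geometric step of ray tracing, finding where a ray meets a surface, can be sketched for the simplest case of a sphere. This is a simplified illustration of the intersection test, not a full renderer:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance along a ray to its nearest intersection with
    a sphere, or None if the ray misses. `direction` must be a unit
    vector. Solves the quadratic |origin + t*direction - center|^2 = r^2.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c                 # discriminant (a == 1 for unit dir)
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2       # nearer of the two roots
    return t if t > 0 else None

# a ray from the origin down +Z hits a unit sphere centered at (0, 0, 5)
t = hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
# nearest intersection at distance 4
```

A tracer repeats this test against every object, then spawns new reflected and refracted rays from the hit point up to a chosen depth.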

One of the most common ways to compute global illumination is photon mapping. This method calculates global illumination by building a so-called photon map: information about the illumination of the scene collected by tracing photons. An advantage of photon mapping is that once the photon-tracing results are saved as a photon map, they can later be reused to create the global illumination effect in animated 3D scenes. The quality of global illumination computed this way depends on the number of photons and on the tracing depth; photon mapping can also render caustics. In addition to global illumination, external renderers can render materials with sub-surface scattering in mind. This effect is a prerequisite for realism in materials such as skin, wax, and delicate fabric: light rays striking such a material are not only refracted and reflected but also scattered inside it, producing a slight glow from within.

Another reason images produced by plug-in renderers look more realistic than those from standard rendering algorithms is the availability of camera effects, above all depth of field and motion blur. The depth-of-field effect is useful when you need to draw the viewer's attention to some detail of the scene: the viewer notices the sharp elements first. It can also help visualize what a character is seeing, focusing the character's gaze on a particular object. Depth of field is a must for realism even when the scene centers on a small object, for example a caterpillar on a tree trunk. If branches, leaves, trunk and caterpillar all appear equally sharp, the image will not look realistic; if such a scene existed in reality and were shot with a real camera rather than a virtual one, only the main subject, the caterpillar, would be in focus, and everything at a distance from it would look blurred. That is why the depth-of-field effect must be present in a three-dimensional image.
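The optics behind depth of field can be sketched with the thin-lens model: the formula below gives the diameter of the blur circle (the "circle of confusion") for a point off the focus plane. It is a textbook approximation for illustration, not any particular renderer's code:

```python
def circle_of_confusion(focal_len, aperture, focus_dist, subject_dist):
    """Diameter of the blur circle for a point at `subject_dist` when
    the lens is focused at `focus_dist` (thin-lens approximation).

    All distances share one unit (e.g. mm); `aperture` is the lens
    diameter. Points on the focus plane give 0 (perfectly sharp);
    the further a point is from that plane, the larger its blur circle.
    """
    return abs(aperture * focal_len * (subject_dist - focus_dist)
               / (subject_dist * (focus_dist - focal_len)))

# focused on a caterpillar 500 mm away; a leaf 2000 mm away blurs strongly
near = circle_of_confusion(50, 25, 500, 500)     # in focus: 0.0
far = circle_of_confusion(50, 25, 500, 2000)     # visibly blurred
```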

Conclusion

The hardware capabilities of workstations grow every day, making it possible to use 3D graphics tools ever more effectively, and the toolsets of 3D editors keep improving as well. At the same time, the basic approaches to creating photorealistic images remain unchanged. Following these requirements does not guarantee that the resulting image will look like a photograph, but ignoring them will almost certainly lead to failure. Creating a photorealistic image single-handedly is an incredibly difficult task. As a rule, those who work with three-dimensional graphics professionally excel at only one of the stages of creating a scene: some know all the subtleties of modeling, others masterfully create materials, still others "see" the correct lighting of scenes. So when starting out in 3D, try to find the area in which you feel most confident and develop your talents there.

Sergey and Marina Bondarenko, http://www.3domen.com

Imagine how an object will fit into the existing built environment. A three-dimensional model makes it very convenient to review different variants of a project: you can change the materials and coatings (textures) of project elements, check the illumination of individual areas (depending on the time of day), place various interior elements, and so on.

Unlike a number of CAD systems that rely on additional modules or third-party programs for rendering and animation, MicroStation has built-in tools for creating photorealistic images (BMP, JPG, TIFF, PCX, etc.), as well as for recording animation clips in standard formats (FLI, AVI) and as sets of frame-by-frame pictures (BMP, JPG, TIFF, etc.).

Creation of realistic images

The creation of photorealistic images begins with assigning materials (textures) to the various elements of the project. Each texture is applied to all elements of the same color within the same layer. Given that the maximum number of layers is 65 thousand and the number of colors is 256, in practice an individual material can be applied to virtually any element of the project.

The program lets you edit any texture and create a new one from a raster image (BMP, JPG, TIFF, etc.). A texture can use two images: one responsible for the relief, the other for the pattern of the material. Both the relief and the pattern have their own placement parameters on an element: scale, rotation angle, offset, and the method of covering uneven surfaces. In addition, the relief has a "height" parameter (ranging from 0 to 20), and the pattern, in turn, has a weight (ranging from 0 to 1).

In addition to the picture, the material has the following adjustable parameters: scattering, diffusion, gloss, polish, transparency, reflection, refraction, base color, glare color, the ability of the material to leave shadows.

The texture can be previewed on standard 3D solids or on any element of the project, using several types of shading. These simple tools for creating and editing textures make it possible to obtain almost any material.

An equally important aspect of creating realistic images is the rendering method. MicroStation supports the following well-known shading methods: hidden line removal, hidden line filling, constant shading, smooth shading, Phong shading, ray tracing, radiosity, and particle tracing. During rendering, the image can be anti-aliased (to remove jagged edges), and a stereo image can be created for viewing through glasses with special light filters.

There are a number of display-quality settings (traded off against processing speed) for ray tracing, radiosity, and particle tracing. To accelerate the processing of graphic information, MicroStation supports graphics acceleration via its QuickVision technology. Built-in modification tools are available for viewing and editing the created images; they support the following standard functions (which, of course, cannot compete with specialized programs): gamma correction, tint adjustment, negative, blur, color mode, crop, resize, rotate, mirror, and conversion to other data formats.

When creating realistic pictures, a considerable share of the time goes into placing and managing light sources. Light sources are divided into global and local lighting. Global illumination, in turn, consists of ambient light, flash, sunlight, and sky light. For the sun, along with brightness and color, the azimuth and the angle above the horizon are set; these angles can be calculated automatically from the specified geographical position of the object (at any point on the world map) together with the date and time of viewing. The light of the sky depends on cloudiness, the quality (opacity) of the air, and even reflection from the ground.
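How such an automatic calculation might work can be sketched with a simplified solar-position formula. This is a rough approximation for illustration only; MicroStation's actual ephemeris computation is more precise:

```python
import math

def sun_elevation(latitude_deg, day_of_year, hour):
    """Approximate solar elevation (degrees above the horizon).

    Uses a simple sinusoidal declination formula and local solar time
    (hour 12 = solar noon). Good to a degree or two, which is enough
    to illustrate date- and place-dependent sun angles.
    """
    # axial-tilt declination, peaking at the solstices
    decl = -23.44 * math.cos(math.radians(360 / 365 * (day_of_year + 10)))
    hour_angle = 15 * (hour - 12)              # degrees away from solar noon
    lat, d, h = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_el = (math.sin(lat) * math.sin(d)
              + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(sin_el))

# at the equator around the March equinox, the noon sun is near the zenith
el = sun_elevation(0, 81, 12)
```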

Local light sources can be of five types: distant, point, conical (spot), surface, and sky opening. Each source can have the following properties: color, luminous intensity, brightness, resolution, shadow casting, attenuation over distance, cone angle, and so on.

Light sources can help in identifying unlit areas of the subject where additional lighting needs to be placed.

Cameras are used to view the elements of the project from a certain angle and to move the view freely through the file. Using the keyboard and mouse, you can set nine types of camera movement: flight, rotation, descent, slide, walk-around, turn, swimming, dolly movement, and tilt. Four different types of movement can be bound to the keyboard and mouse at once (modes are switched by holding Shift, Ctrl, or Shift+Ctrl).

Cameras make it possible to inspect the object from different angles and look inside. By varying the camera parameters (focal length, lens angle), you can change the perspective of the view.

To create more realistic images, it is possible to connect a background image, for example, a photograph of an existing landscape.

Most users know perfectly well which PC component we use to get an image on the monitor: the video adapter, of course. But few know the subtleties and nuances of the technologies used to increase the realism of three-dimensional images. In today's era of rapidly developing 3D graphics and increasingly realistic computer games, it is not enough simply to display a good image on the monitor; it must be made as realistic as possible.

We will consider the most common technologies that are already well established and actively used by video card manufacturers. This material is intended for advanced users and offers a more detailed introduction to the technology than a superficial overview.

MIP mapping technology

Let's start with the most commonly used technology, called MIP mapping. Its main purpose is to improve the quality of texturing of 3D objects.

To make an image look realistic, developers need to take into account such an important concept as scene depth. Realism here implies high-quality blurring as objects recede into the distance, as well as a change in color shades. Many textures of different detail are therefore used to build any kind of surface, which makes it possible to control this effect. If, for example, you need to draw a road stretching toward the horizon using a single texture, you can forget about realism: a solid color or shimmering will appear in the background.


It is precisely to manage such a set of textures that MIP mapping is used: it allows textures with varying degrees of detail to be applied, which brings the benefits described above, such as a realistically receding road.

The principle of operation is to determine, for each pixel of the image, the corresponding MIP map, and then to select from it the texel (texture pixel) assigned to that pixel. It is a fairly elaborate texturing system, but it is thanks to it that games and 3D films feel so much more realistic.
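The level-selection step can be sketched as follows. This is a simplified illustration; real hardware derives the texel-per-pixel ratio from screen-space texture gradients:

```python
import math

def mip_level(texels_per_pixel, num_levels):
    """Pick a mipmap level for a pixel.

    Level 0 is the full-resolution texture; each subsequent level
    halves the resolution. The chosen level is roughly the log2 of
    how many base-texture texels the pixel covers, clamped to the
    levels that actually exist.
    """
    level = max(0.0, math.log2(max(texels_per_pixel, 1e-9)))
    return min(int(level), num_levels - 1)

# a nearby wall maps ~1 texel per pixel -> full-resolution level 0;
# a distant stretch of road maps ~16 texels per pixel -> level 4
assert mip_level(1, 10) == 0
assert mip_level(16, 10) == 4
```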

Filtration technologies

These technologies are usually used in conjunction with MIP mapping. Filtering technologies are needed to correct various texturing artifacts. Simply put, the point of filtering is to calculate the color of a pixel based on its neighbors.

There are different types of filtering:

Bilinear. When an object is in motion, various kinds of pixel shimmering may become noticeable, causing a flickering effect. To reduce this effect, bilinear filtering is used: the color of the current pixel is computed from the four neighboring texels.

Trilinear. The principle of trilinear filtering is similar to bilinear but more advanced: the color of the current pixel is averaged over 8 texels. Trilinear filtering fixes many errors associated with visible texture boundaries and incorrect calculation of scene depth.

Anisotropic. The most advanced type of filtering, now used in all new video adapters. With anisotropic filtering, one pixel is calculated from 8-32 texels (texture pixels).
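Bilinear filtering itself is easy to sketch: the sampled color is a weighted average of the four texels surrounding the sampling point. Illustrative code only, not any driver's implementation:

```python
def bilinear_sample(tex, u, v):
    """Sample a grayscale texture (a list of rows) at fractional
    coordinates (u, v), weighting the four surrounding texels by how
    close the sample point lies to each of them."""
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)   # clamp at the edge
    fx, fy = u - x0, v - y0                           # fractional offsets
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [1.0, 0.0]]
# exactly between all four texels: their average, 0.5
val = bilinear_sample(tex, 0.5, 0.5)
```

Trilinear filtering performs this same lookup on two adjacent MIP levels and blends the two results.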

Anti-aliasing

The essence of the Anti-aliasing technology is to eliminate the jaggedness of the edges of objects, in other words, to smooth the image.


The most common anti-aliasing techniques work by creating a smooth transition between an object's border and the background color: the color of points lying on the boundary of objects is determined as the average of the boundary points.
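One common way to achieve this averaging is supersampling: render the scene at a higher resolution, then average each block of sub-pixels down to one screen pixel. In this naive sketch, the hypothetical `render_at` callback stands in for the real renderer:

```python
def supersample(render_at, width, height, factor=2):
    """Naive supersampling anti-aliasing.

    Renders at `factor` times the target resolution via the
    `render_at(x, y)` callback (which returns a brightness for a
    sub-pixel), then averages each factor x factor block down to a
    single output pixel, softening jagged edges.
    """
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            s = sum(render_at(x * factor + i, y * factor + j)
                    for j in range(factor) for i in range(factor))
            row.append(s / factor ** 2)
        out.append(row)
    return out

# a hard vertical edge: sub-pixels left of x=5 are black, the rest white
img = supersample(lambda x, y: 1.0 if x >= 5 else 0.0, 4, 1, factor=2)
# the pixel straddling the edge averages to 0.5 instead of jumping 0 -> 1
```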

That, in broad strokes, covers the main technologies for increasing the realism of a three-dimensional image. Perhaps not everything was clear, but in any case such in-depth information will not be superfluous.

Building a three-dimensional image

With the growth of computing power and memory capacity, and with the advent of high-quality graphic terminals and output devices, a large group of algorithms and software solutions has been developed for forming an on-screen image that represents a volumetric scene. The first such solutions were intended for architectural and mechanical-engineering design tasks.

When a three-dimensional image (static or dynamic) is formed, it is constructed within a certain coordinate space called a scene. Since the scene describes a three-dimensional world, the field has received the name three-dimensional (3-Dimensional, 3D) graphics.

Separate objects are placed in the scene, made up of geometric solids and sections of complex surfaces (most often so-called B-splines). To form an image and perform further operations, the surfaces are divided into triangles - minimal flat figures - and from then on are processed exactly as a set of triangles.
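The division into triangles can be illustrated with the simplest case: a four-sided patch split along one diagonal. This is only a sketch of the idea; real tessellators subdivide curved surfaces adaptively.

```python
# Minimal sketch of triangulation: a quad given by four corner vertices
# is split along one diagonal into two triangles, the minimal flat
# figures used for all further processing.

def triangulate_quad(quad):
    """Return the two triangles of a quad [v0, v1, v2, v3] as vertex triples."""
    v0, v1, v2, v3 = quad
    return [(v0, v1, v2), (v0, v2, v3)]   # both triangles share diagonal v0-v2

# Hypothetical unit square in the xy plane.
quad = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(triangulate_quad(quad))
```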

At the next stage, the "world" coordinates of the mesh nodes are recalculated, using matrix transformations, into "view" coordinates, i.e. coordinates that depend on the point from which the scene is observed. The position of the viewpoint is usually called the camera position.
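The world-to-view recalculation is an ordinary matrix-vector multiplication in homogeneous coordinates. The sketch below uses a hypothetical camera that is only translated along the z axis; a full view matrix would also include a rotation.

```python
# Minimal sketch of recalculating "world" coordinates into "view"
# coordinates with a 4x4 matrix transform in homogeneous coordinates.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a homogeneous 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    """4x4 matrix that moves every point by (tx, ty, tz)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Camera at z = 5: the view transform shifts the whole world by -5 in z,
# so a world point at z = 3 ends up 2 units in front of the camera plane.
view = translation(0, 0, -5)
world_point = [1, 2, 3, 1]             # homogeneous coordinates (w = 1)
print(mat_vec(view, world_point))      # -> [1, 2, -2, 1]
```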

Workspace of the Blender 3D graphics preparation system (example from the site http://www.blender.org)

After the wireframe ("wire mesh") is formed, shading is performed - giving the surfaces of objects certain properties. Surface properties are primarily determined by light characteristics: luminosity, reflectivity, absorptive and scattering power. This set of characteristics makes it possible to define the material whose surface is being modeled (metal, plastic, glass, etc.). Transparent and translucent materials have a number of additional characteristics.

As a rule, clipping of invisible surfaces is also performed during this procedure. There are many methods for doing this, but the most popular is the Z-buffer, in which an array of numbers represents the "depth" - the distance from a point on the screen to the first opaque point. A subsequent surface point is processed only when its depth is smaller, i.e. when its Z coordinate decreases. The precision of this method depends directly on the maximum representable distance of a scene point from the screen, i.e. on the number of bits per point in the buffer.

Calculation of a realistic image. Performing these operations makes it possible to create so-called solid models of objects, but such an image will not yet be realistic. To form a realistic image, light sources are placed in the scene and the illumination of every point of the visible surfaces is calculated.
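The simplest per-point illumination model is Lambert's diffuse law, which the text's "illumination calculation" ultimately builds on: brightness is proportional to the cosine of the angle between the surface normal and the direction to the light source. A minimal sketch:

```python
# Minimal sketch of per-point illumination with Lambert's diffuse model:
# brightness = intensity * max(0, N . L), where N is the surface normal
# and L the direction to the light source (both unit vectors).

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def diffuse(normal, to_light, intensity=1.0):
    """Lambertian diffuse term for one surface point."""
    n, l = normalize(normal), normalize(to_light)
    return intensity * max(0.0, sum(a * b for a, b in zip(n, l)))

print(diffuse([0, 0, 1], [0, 0, 1]))   # light head-on -> 1.0
print(diffuse([0, 0, 1], [1, 0, 0]))   # light grazing the surface -> 0.0
```

Real renderers add specular, ambient and shadow terms on top of this, but the cosine term is the core of the calculation.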

To make objects look realistic, their surfaces are "covered" with a texture - an image (or a procedure that generates one) defining the nuances of appearance. The procedure is called "texture mapping". During texture mapping, stretching and anti-aliasing techniques - filtering - are applied. For example, anisotropic filtering, mentioned in descriptions of video cards, does not depend on the direction of the texture transformation.

After all parameters have been determined, the image formation procedure must be performed, i.e. the calculation of the color of the points on the screen. This calculation procedure is called rendering. During it, the light falling on each point of the model must be determined, taking into account that light may be reflected and that a surface may shield other areas from a given source, etc.

There are two main methods for calculating illumination. The first is backward ray tracing. In this method, the trajectories of the rays that eventually reach the screen's pixels are calculated in the reverse direction. The calculation is carried out separately for each color channel, since light of different spectra behaves differently on different surfaces.

The second is the radiosity method, which involves calculating the integral luminosity of all areas falling into the frame and the exchange of light between them.

The resulting image takes into account the specified characteristics of the camera, i.e. of the viewer.

Thus, as a result of a lot of calculations, it becomes possible to create images that are difficult to distinguish from photographs. To reduce the number of calculations, they try to reduce the number of objects and, where possible, replace the calculation with a photograph; for example, when forming the background of an image.

Solid model and the final result of the model calculation
(example from the site http://www.blender.org)

Animation and virtual reality

The next step in the development of technologies for three-dimensional realistic graphics was the possibility of its animation - movement and frame-by-frame changes in the scene. Initially, only supercomputers could cope with this volume of calculations, and they were used to create the first three-dimensional animation videos.

Later, hardware specifically designed for computing and forming images was developed - 3D accelerators. This made it possible to perform such image formation, in simplified form, in real time, which is exploited in modern computer games. Nowadays even ordinary video cards include such tools and are a kind of special-purpose mini-computer.

When creating games, filming films, developing simulators, in the tasks of modeling and designing various objects, the task of forming a realistic image has another significant aspect - modeling not just the movement and changes of objects, but modeling their behavior, corresponding to the physical principles of the surrounding world.

This direction, taking into account the use of all kinds of hardware for transmitting the influences of the outside world and increasing the effect of presence, received the name virtual reality.

To embody such realism, special methods are created for calculating the parameters and transformations of objects - the change in the transparency of water as it moves, the behavior and appearance of fire, explosions, collisions of objects, etc. Such calculations are quite complex, and a number of methods have been proposed for implementing them in modern programs.

One of them is the processing and use of shaders - procedures that change the illumination (or the exact position) at key points according to some algorithm. Such processing makes it possible to create "glowing cloud" and "explosion" effects, to increase the realism of complex objects, and so on.

Interfaces for working with the “physical” component of image formation have appeared and are being standardized, which makes it possible to increase the speed and accuracy of such calculations, and hence the realism of the created world model.

Three-dimensional graphics are one of the most spectacular and commercially successful areas of information technology development, often referred to as one of the main drivers of hardware development. Means of three-dimensional graphics are actively used in architecture, mechanical engineering, in scientific works, when shooting films, in computer games, in teaching.

Examples of software products

Maya, 3DStudio, Blender

The topic is very attractive for students of all ages and arises at all stages of studying a computer science course. The attractiveness for students is explained by the large creative component in practical work, the visual result, as well as the broad applied orientation of the topic. Knowledge and skills in this area are required in almost all areas of human activity.

In basic school, two types of graphics are considered: raster and vector. The differences between them are discussed, along with the resulting advantages and disadvantages of each. The areas of application of these types of graphics allow the introduction of specific software products for processing one type or the other. Therefore, materials on the topics of raster graphics, color models and vector graphics will be in greater demand in the basic school. In high school, this topic is supplemented by an examination of the features of scientific graphics and the possibilities of three-dimensional graphics. Therefore, the following topics become relevant: photorealistic images, modeling of the physical world, compression and storage of graphic and streaming data.

Most of the time is occupied by practical work on the preparation and processing of graphic images using raster and vector graphic editors. In high school, this is usually Adobe Photoshop, CorelDraw and/or Macromedia Flash. The difference between studying particular software packages in basic and high school manifests itself not so much in the content as in the forms of work. In basic school, this is practical (laboratory) work, through which students master the software product. In high school, the main form of work becomes an individual workshop or project, where the main component is the content of the task, and the software products used to solve it remain only a tool.

The basic and high school tickets contain questions related to both the theoretical foundations of computer graphics and the practical skills of processing graphic images. Such parts of the topic as calculating the information volume of graphic images and the features of encoding graphics are present in the control measuring materials of the unified state exam.
