Computer Graphics (BCA)

C Graphics Basics: graphics programming, initializing the graphics, C graphical functions, simple programs. Unit 3: C Graphics Introduction. To begin with, we should know why one should study computer graphics. Its areas of application include the design of objects, animation, simulation, etc. Though computer graphics gained importance after the introduction of monitors, there are several other input and output devices that are important to the subject.


Look at the following example: a solid arrow is being displayed.

Suppose the screen edge is as shown by the dotted lines. After clipping, the polygon becomes opened out at the points A and B. But to ensure that the look of solidity is retained, we should close the polygon along the line A-B. This is possible only if we consider the arrow as a polygon, not as several individual lines.

Hence we make use of special polygon clipping algorithms; the most celebrated of them was proposed by Sutherland and Hodgman. The basis of the Sutherland-Hodgman algorithm is that it is relatively easy to clip a polygon against one single edge of the screen at a time, i.e., the polygon is clipped successively against each edge.

At first sight, it looks like a rather simplistic and too obvious a solution, but when put in practice this has been found to be extremely efficient.

A polygon can be represented by a sequence of vertices v1, v2, v3, ..., vn, which means there is an edge from v1 to v2, from v2 to v3, and so on. Now suppose e is an edge of the screen; it has two sides. Any vertex lying on one side of the edge will be visible; we call this the visible side. Any vertex on the other side, the invisible side, will not be visible. For example, for the top edge of the screen, any vertex above it is on the invisible side, whereas any vertex below it is visible.

Now coming back to the algorithm: it tests each vertex of the given polygon in turn against a clipping edge e. Vertices that lie on the visible side of e are included in the output polygon, while those on the invisible side are discarded. We now formally state the algorithm and then apply it to a specific example.

Algorithm Sutherland-Hodgman (v1, v2, v3, ..., vn). To illustrate this algorithm, consider the 5-edge polygon below, and let us take ab as the clipping edge e. Beginning with v1: the vertex v1 is on the visible side of ab, so retain it in the output polygon. Now take the second vertex v2; the vertices v1 and v2 are on different sides of ab.

Compute the intersection of v1-v2 with ab; let it be i1, and add i1 to the output polygon. Now consider the vertex v3; v2 and v3 are on different sides of ab. Compute the intersection of v2-v3 with ab; let it be i2. The vertices v3, v4 and v5 are all on the same (visible) side of ab, and hence, when considered one after the other, they are included in the output polygon straightaway.

Now repeat the same sequence with respect to the edge bc. In the output polygon of stage 1, the vertices v1, i1 and i2 are on the same side of bc and hence get included in the output polygon of stage 2. Since i2 and v3 are on different sides of the line bc, the intersection of the line i2-v3 with bc is computed; let this point be i3. Similarly, v3 and v4 are on different sides of bc, so their intersection with bc is computed. After going through two more clippings against the edges cd and da, the clipped figure looks like the one below.
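The clipping stage just described can be sketched in Python. This is a minimal illustration, not the author's code: the routine clips a vertex list against a single directed clipping edge, and the full Sutherland-Hodgman algorithm simply applies it once per screen edge. The function names and the left-of-edge visibility convention are assumptions made for the sketch.

```python
def clip_polygon(vertices, edge_start, edge_end):
    """Clip a polygon (list of (x, y) tuples) against one clipping edge.

    A point counts as visible if it lies to the left of the directed
    line edge_start -> edge_end (non-negative cross product).  The full
    Sutherland-Hodgman algorithm calls this once per screen edge.
    """
    (x1, y1), (x2, y2) = edge_start, edge_end

    def is_visible(p):
        # Cross product of (edge vector) x (edge_start -> p).
        return (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0

    def intersection(p, q):
        # Point where segment p-q crosses the infinite clipping line.
        (px, py), (qx, qy) = p, q
        denom = (qy - py) * (x2 - x1) - (qx - px) * (y2 - y1)
        t = ((px - x1) * (y2 - y1) - (py - y1) * (x2 - x1)) / denom
        return (px + t * (qx - px), py + t * (qy - py))

    output = []
    n = len(vertices)
    for i in range(n):
        cur, nxt = vertices[i], vertices[(i + 1) % n]
        if is_visible(cur):
            output.append(cur)                      # keep visible vertex
            if not is_visible(nxt):
                output.append(intersection(cur, nxt))  # leaving: add i-point
        elif is_visible(nxt):
            output.append(intersection(cur, nxt))      # entering: add i-point
    return output
```

For example, clipping the unit square scaled to (0,0)-(4,4) against the vertical line x = 2 (directed so that x >= 2 is the visible side) yields the right half of the square, with the two new intersection vertices replacing the discarded ones.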

Assuming a screen of some given size in pixels, this size gives the maximum size of the picture that we can represent. But the picture on hand need not always correspond to this size. Common sense suggests that if the size of the picture to be displayed is larger than the size of the screen, two options are possible: (i) clip the picture against the screen edges and display only the visible portions, which needs a fairly large amount of computation, and in the end we see only a portion of the picture; or (ii) scale down the picture to fit the screen.

This would enable us to see the entire picture, though with smaller dimensions. The converse can also be true: if we have a very small picture to be displayed on the screen, we can either display it as it is, thereby seeing only a cramped picture, or scale it up so that the entire screen is used to get a better view of the same picture.

However, a picture need not always be presented on the complete screen.

More recent applications allow us to see different pictures on different parts of the screen. Such a situation is encountered when several pictures are being viewed simultaneously, either because we want to work on them at the same time or because we want to view several of them for comparison purposes.

In such a scenario, the problem is still the same; the only change is that, since the window sizes are different for different pictures, we should have a general transformation mechanism that can map a picture of any given size into a window of any given size. We now derive a very simple and straightforward method of transforming the world coordinates to the full screen coordinates, or for that matter to any window size. Since different authors use different nomenclatures, in this course we follow the conventions below.

We are interested only in a part of this picture. The view port can be a part of the screen or the full screen itself. The following diagrams illustrate the situation and also the various coordinate values that we will be using.

The dotted lines indicate the window, while the picture is in full lines. The window is bounded by the coordinates wxl and wxr (the x-coordinates of the left and right sides of the window) and wyt and wyb (the y-coordinates of the top and bottom of the window). It is easy to see that these coordinates enclose a window between them, the dotted rectangle of the figure. Now consider any point (xw, yw) on the window.

To convert this to the view port coordinates, the following operations are to be done in sequence; this will ensure that the entire window fits into the view port without leaving blank spaces. The scaling can be done by simply changing the x and y coordinates in the ratio of the x-size of the view port to the x-size of the window, and the y-size of the view port to the y-size of the window, respectively.

It may be noted that, in each of the above ratios, the numerator defines the overall space available in the view port and the denominator the overall space available in the window. This can be achieved by the following sequence. Now, considering any point (xw, yw) to be transformed, we get the following result on applying the above sequence of operations.
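The translate-scale-translate sequence described above can be sketched as follows. This is a minimal illustration under assumed conventions: the window and view port are given as (left, right, bottom, top) tuples, and the names are not taken from the text.

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a world point (xw, yw) from a window to a view port.

    window and viewport are (x_left, x_right, y_bottom, y_top) tuples.
    Steps: translate the window corner to the origin, scale by the ratio
    of view-port size to window size, then translate to the view-port
    corner.
    """
    wxl, wxr, wyb, wyt = window
    vxl, vxr, vyb, vyt = viewport
    sx = (vxr - vxl) / (wxr - wxl)   # x-size of view port / x-size of window
    sy = (vyt - vyb) / (wyt - wyb)   # y-size of view port / y-size of window
    xv = vxl + (xw - wxl) * sx
    yv = vyb + (yw - wyb) * sy
    return xv, yv
```

For instance, the centre of a 10-by-5 window maps to the centre of a 200-by-100 view port, regardless of where either rectangle sits.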

The equation in step (c) indicates the complete window to view port transformation. Regular figures like straight lines or regular curves can be transformed by transforming only their end points.

Review questions:
1. Define clipping.
2. Define windowing.
3. Explain the 4-bit code used to define regions in the rejection method.
4. What is the other name of the most popular polygon clipping algorithm?
5. With the usual notations, state the equations that transform the window coordinates to screen coordinates.

Answers:
1. The process of dividing the picture into its visible and invisible portions, allowing the invisible portion to be discarded.
2. Specifying an area, or a window, around a picture in world coordinates, so that the contents of the window can be displayed or used otherwise.
4. The Sutherland-Hodgman algorithm.

We have familiarized ourselves with many of the interactive input devices.

But since the computer expects perfect input values, any errors made in the use of such devices can create problems - like not drawing the lines completely on the tablet or overdrawing it.

Similarly, the end points of lines may not fall exactly on a pixel value. One can go on listing such inaccuracies, which would make the computer's understanding of the situation difficult. On the other hand, insisting that one should be able to draw perfectly is also not advisable.

Hence, several techniques are available that can cover up for the deficiencies of the input and still make the computer work satisfactorily. In this block, you will be introduced to various positioning techniques using positional constraints, the concept of modular constraints, the ability to draw straight lines interactively using rubber band techniques, selection, and the concept of menus.

The implementation details, however, are omitted. We have seen several input devices, which bear resemblance to pens and pencils — example light pens, joysticks etc.

To some extent they are intentionally made to resemble devices that the user is familiar with. For example, writing on a pad with a pen-like stylus is more convenient for the user. However, there is a basic difference between the targets of such inputs, i.e., a human reader in one case and a computer in the other. Herein lies the difference: the human can understand variations of input to a large extent.

For example the letter A may be written in different ways by different people or for that matter, the same person may write it in different ways at different times. While a human can understand the variations, a computer normally cannot.

In other words, the input to human can vary over a range, while the inputs to a computer needs to be precise. Similarly while drawing a circle, if the two ends do not meet properly, a human being can still consider it as a circle, whereas a computer may not.

At the same time, training a person to write with machine precision is not practical. In other words, whereas a common user can be made aware of what he wants and would be willing to get it as fast and accurately as possible, making him acquire graphic arts skills would be unreasonable. On the other hand, it is desirable to make the computer understand what he wants to input, or alternately, we can make the input devices cover up for the minor lapses of the user and feed a near-perfect input to the computer, like making it close the circle when the user stops just short of closing it or ends up placing the two ends one next to the other.

There are several astonishingly simple ways to make the life of the user more comfortable and at the same time improve effectiveness of the input device. In other words, the graphical input device should not only be influenced by the way it is used, but should also consider other factors like what the user is trying to say or what is the next logical step in the sequence of events and extrapolate or interpolate the same.

Of course, some guess work is involved in the process, but most often it should work satisfactorily. In fact, the very simple concept of the cursor is a good example of an input technique. It can be thought of as a feedback technique: it helps the user know what he is doing and, in fact, ascertains that the function he is working on is actually working. However, in this chapter, we look at slightly more sophisticated user-friendly techniques. The algorithms are fairly involved, and hence we discuss only the concepts, without going into implementation details.

This can be considered the most basic of graphical input operations. One way of using it is to choose the symbol or picture involved, move the cursor to the required position, and press a predetermined key to place it there.

While in earlier DOS versions this operation was done using a combination of pre-selected keys in proper order, the advent of the mouse has simplified the matter: selection, positioning and final movement are all done with the click of buttons. One of the problems faced by inexperienced users while drawing figures is positioning. For example, we may want to put an object exactly at the end of a straight line, or a cross at the center of a circle, etc.

Because of lack of coordination between the eyes and the hand movements, the object may end up either a little away from the line or inside the line as below. Similarly, while locating a center of the circle the cross may get located very near to the center of the circle, but not exactly at the center.

In fact, it is easy to appreciate this in the case of putting a rectangle at the end of a line. Though we are not considering the implementation aspects here, it is easy to note that writing an algorithm for this is fairly straightforward. For example, if the (x, y) value of the end of the line is, say, (10, 50), and a box is brought to a position close to it, the program can move the box to coincide exactly with the end point. The first case is the one where the box is slightly above the line, and the second the one where it is inside the line.

There can be other types of constraints as well. If a certain figure contains only horizontal and vertical lines, say as in a grid design, any angular line can be brought into one of these positions by imposing an angular constraint that no straight line can be at any angle other than 0 and 90 degrees. The same idea can be extended to draw lines at any particular angle.
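Positional and angular constraints like those described above can be sketched in a few lines. This is an illustrative sketch only: the grid spacing, the function names, and the nearest-axis rule for the angular constraint are assumptions, not details from the text.

```python
def snap_to_grid(x, y, grid=10):
    """Positional constraint: snap a point to the nearest grid intersection.

    The grid spacing of 10 units is an assumed example value.
    """
    return round(x / grid) * grid, round(y / grid) * grid

def constrain_angle(x1, y1, x2, y2):
    """Angular constraint: force a line to the nearest of 0 or 90 degrees.

    Given a line from (x1, y1) to (x2, y2), return the adjusted second
    end point: if the line is closer to horizontal, flatten it;
    otherwise make it exactly vertical.
    """
    if abs(x2 - x1) >= abs(y2 - y1):
        return x2, y1          # horizontal line
    return x1, y2              # vertical line
```

With a grid spacing of 10, a point placed at (13, 27) snaps to (10, 30); a nearly horizontal line from (0, 0) to (10, 3) becomes exactly horizontal, ending at (10, 0).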

Now let us go back to the problem of attaching a box to the end of a line. Suppose the end of the line does not always terminate at an integer value; then positional constraints cannot be used, and the box is instead attracted to whichever line end lies nearest to it. Again, this relieves the user of the difficulty of exactly putting the box at the end of the line.

Rubber banding is a very simple, but useful technique for positioning. The user, if he wants to draw a line, say, specifies the end points and as he moves from one point to another, the program displays the line being drawn. The effect is similar to an elastic line being stretched from one point to another and hence the name for the technique. By altering the end points, the position of the line can be modified. The technique can be extended to draw rectangles, arcs, circles etc. The technique is very useful when figures that pass through several intermediate points are to be drawn.

In such cases, just by looking at the end points, the overall shape is difficult to judge. Hence, the positioning can be done dynamically. However, rubber band techniques normally demand fairly powerful local processing to ensure that lines are drawn fast enough.

As the name suggests, it involves choosing a symbol or a portion of a figure and positioning it at any desired point.

It is possible to achieve accurate and visible results without bothering to know the actual coordinates involved. It is often desirable to display the coordinate position or the dimensions along with the object. This is helpful in ascertaining the location of the object when mere visual accuracy of location is not enough and objects may have to be positioned with respect to one another. The more difficult problem is that the coordinates need to keep changing as the figure is being dragged around, and this demands rapid calculation on the part of the system.

Normally the dimensions are displayed only when the object is being manipulated or moved around and will stay only long enough for the user to take note of them. This ensures that they do not obscure the active parts of the picture, once the completed picture is on display.

One of the important points to be addressed is how to select parts of the picture for further operations. Once the selection is made properly, tasks like moving, copying or deleting the selected portion can follow.

But the actual selection process poses several problems. The first one is about the choice of coordinates. When a point is randomly chosen at the starting point of the selection process, the system should be able to properly identify its coordinates.

The second problem is about how much is being selected. This can be indicated by selecting a number of points around the figure or by enclosing the selected portion in a rectangle. The other method is to use multiple keys. The mouse facilitates the same operation by the use of multiple buttons on it.

Once the selection is made, the system is normally supposed to display the portion selected, so that the user knows he has actually selected what he wanted. This feedback is done either by changing the color, by modifying the brightness, or by blinking. The use of the mouse as an input technique normally implies menus being provided by the system.

The menu concept helps the user overcome the difficulty of having to draw simple and often-used objects by providing them as part of the system. In this unit, we introduce ourselves to the realm of 3-dimensional graphics. Though 2-dimensional pictures help us in a number of areas, there are several applications where they are simply not sufficient to meet the requirements.

We first look into those areas where 2-D displays fall short of the demands. Then, since we have only a 2-dimensional display to represent 3-dimensional objects, we briefly look into the various alternatives available to the user.

Of course, in the subsequent blocks, we study some of them in greater depth. Since computers can do fast computations and displays can draw the results for the visual analysis of the designer, CAD has gained immense popularity in recent years. Obviously, a mere 2-dimensional picture seldom tells the complete story. Further, design details like fixtures etc. can be studied only in 3 dimensions. Hence the use of 3-dimensional pictures is the key in CAD.

Animation is another fast-growing area: a sequence of pictures that educate, explain some concept, or are simply of entertainment value is presented with motion incorporated. In such cases, mere 2-dimensional animation is of little interest, and the viewer is to be treated to a virtual concept of depth. There are certain experiments that are either too costly or for other reasons cannot be conducted in full-scale reality. In certain other cases, a preliminary sequence of operations is done on the computer before full-fledged experimentation is taken up.

The examples of flight simulation or nuclear testing illustrate the concepts. In such a case, definitely a 2-Dimensional simulation is of very little use and for the trainee to experience fully the various complexities involved, an experience of depth is to be provided.

Similarly in the case of a nuclear testing, a realistic study can be made only by having a 3-dimensional view on the screen. In fact, the list of applications that need 3-D views can go on endlessly. Instead, we simply underline the fact that using the 2-dimensional screen to provide a 3-dimensional effect is of prime importance and move on to the various ways in which this can be achieved.

At the outset itself, it is to be made clear that, since we are using a 2-dimensional screen for a 3-dimensional display, what we can achieve is only an approximation. Even this approximation is achieved at the cost of computational overhead, i.e., additional processing time. Further, there is a limit to the amount of computation that can be done. Going through some of the applications that need 3-dimensional views, it is clear that the effects are to be achieved within reasonable time.

In an animation picture, if time delays prevent a continuous stream of pictures being presented to the viewer, the whole idea behind animation is lost. In the case of simulation, the limitations are more stringent: the display must keep pace with the events being simulated; otherwise, the entire meaning of simulation is lost.

The aim of this discussion is to highlight the fact that the choice of presentation method depends not only on how good one scheme is compared to another, but also on how fast one scheme executes compared to another. With the rapid changes in hardware technologies, some of the schemes that were unattractive previously have become useful now, and the process will continue in the future. The parallel projection views constitute the front view, the top view, and sometimes the side view.

This is the simplest of the available techniques and can be done quite rapidly and with reasonable accuracy. But the views will be useful only to trained engineers and architects; a common viewer may not be able to make much out of them.

Thus, this method may be useful in applications like CAD, but is useless as far as animation or simulation is concerned. The figure shows three parallel projection views of an object, indicating relative proportions from different viewing positions. When we see a number of objects, or even a large object, parts that are nearer to the eye appear larger than those that are far away. Thus a matchbox can obscure a building that is far away.

This is the way all humans see and understand things in real life. Thus, the scheme provides very realistic depth information and is best suited for animation and simulation applications. But the drawback is that even though the method provides a feel of depth, it seldom provides actual information about the depth; the case of a matchbox obscuring a building clarifies the situation. The method is also fairly computation-intensive.
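The perspective effect discussed above amounts to dividing the x and y coordinates by the depth. A minimal sketch, assuming the eye at the origin looking along the z axis and a screen plane at an assumed distance d (neither the names nor the conventions are from the text):

```python
def perspective_project(x, y, z, d=1.0):
    """Project the 3-D point (x, y, z) onto the screen plane z = d,
    with the eye at the origin looking along +z.

    Dividing by z makes nearer objects (small z) appear larger and
    farther objects smaller, which is the depth cue discussed above.
    """
    return (d * x / z, d * y / z)
```

Two identical objects at depths 2 and 4 project to images whose sizes differ by a factor of two, which is exactly why the nearby matchbox can obscure the distant building.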

An object and its perspective view. One depth cue that is not computationally intensive is the intensity cue: as an object moves farther from the viewer, its intensity decreases.

Further, if it is made up of wide lines, the width of the lines decreases with increasing distance. The reason we see depth is the stereoscopic effect of the eyes: we get two views of the same object through the two eyes, and when these are superimposed, we get an idea of the depth.

In fact, a clear idea of the depth coordinates cannot be obtained if only one eye is functional. The same effect can be used in the case of computer displays. How exactly can we show the two differing images? Either two different screens showing slightly displaced images of the same object can be used, or the same screen can alternate between the two views at more than 20 times per second.

The method of polarized glasses is of more recent origin. Motion itself can also be used to give an indication of depth, especially in moving pictures: objects that are supposed to be nearer to the viewer can be made to move faster than those that are to be shown farther away.

The viewer automatically gets the feeling of difference in depths of the various objects.

This could be a very useful technique, especially in animation and simulation pictures. Those who have drawn artistic pictures know that shading is a very powerful method of showing depth. Depending on the direction of the incident light and the depth of the point under consideration, shades are generated.

If these can be represented graphically, excellent impressions of depth can be created in the viewer. Raster graphics, which allows each pixel to be set to a large number of brightness values, is ideally suited for such shading operations.

Review questions:
1. Name the method of showing fast-moving sequences of pictures.
2. State two reasons why simulation is resorted to.
3. What is the need for 3-dimensional representation of pictures?
4. Name the type of projection normally used in engineering drawings.
5. Which projection gives the most realistic view of the object?

6. What is the stereoscopic technique? 7. How can one produce the stereoscopic effect with a computer display? 8. What is the kinetic depth effect?

Answers:
2. Certain experiments may be too costly; certain other experiments need a lot of changes to be made, which are easier to incorporate on a computer.
3. Most of the objects we see in real life are 3-dimensional.

Also, in applications like animation or simulation, where realism is of prime importance, not being able to give a sense of depth would make the whole exercise useless.
4. Parallel projection.
5. Perspective projection.
6. The technique of showing two different pictures, slightly displaced from each other, so that the user gets the idea of a third dimension, is called the stereoscopic technique.
7. Either by using two screens displaced slightly from each other, or by using a single screen to produce both views, one after the other, at speeds greater than 20 times per second.

8. In moving objects, far-off points move slowly compared to nearby points; if a similar technique is used in moving pictures, the viewer gets a cue about the depth of the object.

We now talk about polygons, since any object of any random shape can be thought of as a polygon, a figure bounded by a number of sides. Thus, if we are able to do certain operations on polygons, they can be extended to all other bodies.

So far, we have seen line drawing algorithms. But the situation becomes complex when a large number of polygons is present on the screen: we do not know whether the objects behind the present object are visible or not. So, we would like to make a distinction between points that are inside a polygon and those that are outside, and display them differently.

We make use of the property of coherence, i.e., the tendency of neighboring pixels to share the same properties. Using this, we introduce the yx algorithm, which makes use of the intersections of polygons with the scan lines and the concept of coherence to provide an efficient scan conversion methodology.

Solid-area displays are useful in representing thickness and depth, and in showing objects lined up one behind another. Needless to say, the ability to display the third dimension is of prime importance in the realistic display of objects, especially in video games and animation.

Generating a display of a solid object means one should be able to identify the pixels that lie within the area of the object. This concept is called the mask of the area. One simple way of representing such pixels is to use a 1 to indicate pixels that lie inside the area and a 0 to indicate pixels outside. The shading rule deals with the intensity of each pixel within the solid area.

Such a mechanism would give the effect of shadows, so that pixels that lie nearer to the observer would cast a shadow on those that are farther away. A variable shading technique is of prime importance in presenting realistic 3-dimensional pictures.

When one speaks of 3 dimensions and a number of objects, the understanding is that some of the objects that are nearer are likely to cover the objects that are far away. Since each pixel can represent only one object, the pixel should become part of the object that is nearest to the observer, i.e., the object visible at that point. In the subsequent blocks, we see more about the other aspects.

The simplest algorithm for scan conversion does something like this: (i) display the boundary of the solid object; (ii) for each pixel on the screen, find out whether it lies inside the boundary, on the boundary, or outside it.
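The simple per-pixel method just described can be sketched with an even-odd (parity) test applied at every pixel. The helper names and the pixel-centre sampling convention are assumptions for the sketch; note the cost is proportional to width times height times the number of edges, which is exactly the inefficiency discussed next.

```python
def inside_polygon(px, py, poly):
    """Even-odd test: cast a ray to the right from (px, py) and count
    how many polygon edges it crosses; an odd count means inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):          # edge spans the ray's y level
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside

def naive_mask(width, height, poly):
    """Build the 0/1 mask of a polygon by testing every single pixel.

    Pixels are sampled at their centres; complexity is
    O(width * height * edges), hence the 'enormous computations'.
    """
    return [[1 if inside_polygon(x + 0.5, y + 0.5, poly) else 0
             for x in range(width)] for y in range(height)]
```

For a small square on a 6-by-6 raster, the mask contains exactly the 9 pixels whose centres fall inside the square, and 0 everywhere else.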

Then suitably set the mask of each pixel. Though this method is simple and reliable, it needs enormous amounts of computation to achieve the purpose. Obviously, more efficient methods for scan conversion are needed. One may note that the trade-off involved is not just the time taken: such inordinate delays prevent proper real-time modification and display of the picture.

Hence, several algorithms have been devised so that certain inherent properties of the pictures are utilized to reduce the computations. One such broad technique is to exploit the property of coherence.

The performance of a scan conversion algorithm can be substantially improved by taking advantage of the property of coherence, i.e., if a pixel is inside a polygon, most of its adjacent pixels are most probably also inside. Similarly, if a pixel is outside a polygon, most of its adjacent ones will most probably be outside it.

A corollary is that the coherence property changes only at the boundaries, i.e., only where an edge of the polygon is crossed. Between boundaries, the status of one pixel can be applied to all its neighboring pixels, and hence their status need not be checked individually.

Consider the following example. Given a polygon, it is to be scan converted. Suppose we want to identify all those pixels that lie inside the polygon and those that lie outside. This can be done in stages, scan line by scan line. Consider the scan line a. This is made up of a number of pixels.

Beginning with left most point of the scan line, compute the intersections of the edges of the polygon with the particular scan line. Starting at the left most pixels, all pixels lie outside the polygon up to the first intersection.

From then on all pixels lie inside the polygon until the next intersection. Then afterwards, all pixels lie outside.

Now consider line b. It has more than two intersections with the polygon. In this case, the first intersection indicates the beginning of a series of pixels inside, the second its end, the third the beginning of another series, and so on. Now we write this observation as an algorithm, called the yx algorithm; we will see at the end of the algorithm why this peculiar name.

However, we leave this portion to the student. The algorithm proceeds in stages:
1. Compute the intersections of every edge with every intersecting scan line, and build a list of all those (x, y) intersections.
2. Sort the list so that the intersections of each scan line are at one place; then sort them again with respect to the x coordinate values. Understanding this step is central to the algorithm.

Stage 1 gives a fairly large number of unordered points. Now sort these points with respect to their y values, so that the intersections of the scan line with y value 1 come first, then those of the scan line with y value 2, and so on. Then the points a1 and a2 appear in order, and similarly those of b and c. Finally, remove the intersection points in pairs: the first of these points indicates the beginning of the series of pixels that lie inside the polygon, and the second one ends the series. In the case of scan line b, we get two pairs of intersections, since we have two sets of pixels inside the polygon for that scan line, while an intermediate set lies outside.

This information can be used to display the pictures. Incidentally, this algorithm is called the yx algorithm, since it sorts the elements first with respect to y and then with respect to x. We leave it to the student to try to write an xy algorithm and ensure that it does the job equally well. Note that we have not commented on the scan line c of the picture.
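The yx algorithm described above can be sketched as follows: collect all edge/scan-line intersections, sort them by y and then x, and remove them in pairs. This is an illustrative sketch, not the author's code; scan lines are sampled at pixel centres (an assumed convention), and vertices are assumed not to fall exactly on a scan line, which is the singular case discussed next.

```python
def yx_scan_convert(poly, y_min, y_max):
    """yx algorithm sketch: return {y: [(x_enter, x_exit), ...]} where
    each pair spans a run of pixels inside the polygon on scan line y.

    Stage 1 computes every edge/scan-line intersection; stage 2 sorts by
    y then x (hence 'yx'); stage 3 removes the points in pairs.
    """
    crossings = []
    n = len(poly)
    for i in range(n):                        # stage 1: all intersections
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if y1 == y2:
            continue                          # horizontal edge: skip
        for y in range(y_min, y_max + 1):
            ys = y + 0.5                      # sample at the pixel centre
            if min(y1, y2) < ys < max(y1, y2):
                x = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
                crossings.append((y, x))
    crossings.sort()                          # stage 2: sort by y, then x
    spans = {}
    it = iter(crossings)
    for (y, x_in), (_, x_out) in zip(it, it): # stage 3: remove in pairs
        spans.setdefault(y, []).append((x_in, x_out))
    return spans
```

For a rectangle from (1, 1) to (5, 4), every covered scan line yields exactly one pair: pixels enter the polygon at x = 1 and leave at x = 5.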

The peculiarity of that line is that the intersection lies exactly on the vertex of a polygon. In such a case, it is very easy to see that the algorithm fails. This is because the intersection of the scan line with the vertex not only defines the beginning of a series of pixels that lie inside the polygon, but also the end of the series. Now how do we treat such intersections?

One solution suggested earlier was never to have such intersections at all, i.e., to displace the vertices slightly so that no scan line passes exactly through a vertex; then every scan line will have two intersections instead of one. But obviously this solution is not a welcome one, since it distorts the picture. An alternative is to count such a vertex as two intersection points whenever it marks both the beginning and the end of a series of pixels. This solves the problem elegantly, the only question being: how do we identify such vertices?

The answer is to keep track of the direction (increasing or decreasing y) of the polygon edges meeting at the vertex. Once this is done, if the two edges proceed in different directions, we include two points, with the same coordinate values of course, instead of one in the list of intersections.

Now we write a simple algorithm that treats such singularity problems. This algorithm also takes care of another related problem: that of horizontal edges.

A horizontal edge intersects the scan line at every pixel along it, and the question of how to treat a situation in which every pixel could be considered inside the polygon is also dealt with here. A variable yprev is used to keep track of the y coordinate of the previous intersection. Whenever an intersection is found, not only is a new (x, y) pair stored, as in the yx algorithm, but its y coordinate is also recorded in yprev.

Initially its value is set to 0.

1. Go to the next edge of the polygon. If there are no more edges to be processed, exit.
2. If the edge has no intersections at all, or a very large number of them (a horizontal edge), go to step 1.
3. If the first intersection of the edge has the same y coordinate as yprev, the edge begins at the vertex where the previous edge ended; if the two edges proceed in different y-directions, insert an extra copy of that intersection point into the list. Store the y coordinate of the last intersection in yprev and go to step 1 to find out whether any edges are still left.
4. Otherwise, simply preserve the y coordinate of the last intersection in yprev and go to step 1.

Note that this algorithm neither generates the intersections nor performs the scan conversion itself.

The scan-conversion algorithm, which does the actual conversion, merely passes its intersection values to the singularity algorithm to check for these special cases. The other aspect to be taken care of while displaying polygons is deciding on priority.

In three-dimensional graphics, two or more polygons often overlap one another. In such cases, only the polygon closest to the observer is visible; it obscures the polygons behind it. The problem is that the front polygon may not cover the polygon behind it completely.

That means the farther polygon is visible wherever it is not covered by the front polygon, but invisible in the regions where the front polygon covers it. One solution to this problem is to find the intersections of the polygons, display the front polygon completely, and display the back polygon(s) only in those areas where the front polygon does not cover them.

But in cases where a large number of polygons cover one another in different regions, this method becomes unwieldy. A very ingenious method of assigning display priorities to the polygons has been devised instead. Imagine a painter painting these polygons on his canvas. What does he do?

He does not bother himself about intersections or partial overlaps. He begins by painting the farthest polygon, say in a particular color. This polygon obviously has the least priority in display, i.e. it is painted first and may later be painted over. Then he begins painting the next polygon in front, without bothering about the previous one.

This new polygon, let us say polygon 2, has a higher priority than polygon 1. Once the second polygon is painted, in a different color, it is easy to see that the parts of polygon 1 covered by polygon 2 are automatically painted over and become invisible.

Similarly, if a polygon 3 is painted, it gets the highest priority in display. Thus an extremely simple scheme emerges: do not bother about any mathematical formulation; start from the farthest polygon and keep displaying the polygons in order of increasing priority, and the obscuring is taken care of automatically. Expressed in technical terms, the algorithm is as follows: assign a priority to each polygon, giving the lowest priority to the polygon farthest from the viewer and the highest to the one nearest, and then display the polygons in order of increasing priority.
