Computer Graphics

James D. Foley

Mentioned 22 times

A guide to the concepts and applications of computer graphics covers such topics as interaction techniques, dialogue design, and user interface software.

More on Amazon.com

Mentioned in questions and answers.

I am trying to get the 2D screen coordinates of a point in 3D space; i.e., I know the location of the camera, its pan, tilt, and roll, and I have the 3D x, y, z coordinates of a point I wish to project.

I am having difficulty understanding transformation/projection matrices and I was hoping some intelligent people here could help me along ;)

Here is my test code I have thrown together thus far:

public class TransformTest {

public static void main(String[] args) {

    // set up a world point (Point to Project)
    double[] wp = {100, 100, 1};
    // set up the projection centre (Camera Location)
    double[] pc = {90, 90, 1};

    double roll = 0;
    double tilt = 0;
    double pan = 0;

    // translate the point
    vSub(wp, pc, wp);

    // create roll matrix
    double[][] rollMat = {
            {1, 0, 0},
            {0, Math.cos(roll), -Math.sin(roll)},
            {0, Math.sin(roll), Math.cos(roll)},
    };
    // create tilt matrix
    double[][] tiltMat = {
            {Math.cos(tilt), 0, Math.sin(tilt)},
            {0, 1, 0},
            {-Math.sin(tilt), 0, Math.cos(tilt)},
    };
    // create pan matrix
    double[][] panMat = {
            {Math.cos(pan), -Math.sin(pan), 0},
            {Math.sin(pan), Math.cos(pan), 0},
            {0, 0, 1},
    };

    // roll it
    mvMul(rollMat, wp, wp);
    // tilt it
    mvMul(tiltMat, wp, wp);
    // pan it
    mvMul(panMat, wp, wp);

}

public static void vAdd(double[] a, double[] b, double[] c) {
    for (int i=0; i<a.length; i++) {
        c[i] = a[i] + b[i];
    }
}

public static void vSub(double[] a, double[] b, double[] c) {
    for (int i=0; i<a.length; i++) {
        c[i] = a[i] - b[i];
    }      
}

public static void mvMul(double[][] m, double[] v, double[] w) {

    // How to multiply matrices?
}
}

Basically, what I need is the 2D X,Y coordinate on a given screen plane where the projected 3D point lands. I am not sure how to use the roll, tilt and pan matrices to transform the world point (wp).

Any help with this is greatly appreciated!

The scope of this is way too large to get a good answer here: I'd recommend reading a good reference on the topic. I've always liked the Foley and VanDam...
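
In the meantime, here is a minimal sketch of the two missing pieces, as methods you could drop into the TransformTest class above. It assumes the same conventions as that code (3-element vectors, the view direction along the third axis); toScreen and the focal length f are illustrative names of my own, not something from a particular book or API.

public static void mvMul(double[][] m, double[] v, double[] w) {
    // copy the input first, because the calls above pass the same array as both v and w
    double[] in = v.clone();
    for (int i = 0; i < m.length; i++) {
        double sum = 0;
        for (int j = 0; j < in.length; j++) {
            sum += m[i][j] * in[j];
        }
        w[i] = sum;
    }
}

// Once wp has been rotated into camera space, a pinhole projection is just a divide by depth.
// f is the distance from the camera to the screen plane; wp[2] must be the depth along the
// view axis (note that in the test data above both points sit at z = 1, so the depth after
// subtracting the camera needs to be non-zero before dividing).
public static double[] toScreen(double[] wp, double f) {
    return new double[] { f * wp[0] / wp[2], f * wp[1] / wp[2] };
}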

For as long as I have been programming, I have always used MS technologies. There was DOS, MFC, VB6, then .NET with WinForms, and now WPF.

In all these technologies, the GUI side was always more or less the same, because it was based on Win32 (except DOS). With WPF, all of that has totally changed. MS introduced a lot of new possibilities, beginning with the declarative way to build UIs, lookless controls, animations, et cetera. I like this new UI technology a lot, as well as the fundamentals beneath it (the DependencyProperty system, RoutedEvents, and so forth).

But what I don't know, because I have always used MS technologies, is whether this whole construct is an MS-specific invention, or whether these things are just a good compilation of technologies and patterns that are state of the art and used in many other modern environments.

Is there any information comparing modern UI technologies that shows the links and common patterns between them?

If you study graphics technology, you'll realize that WPF isn't all that remarkable - it's an implementation of some very well-established concepts on modern hardware and modern Windows. To illustrate, an old book here on my shelf, printed in 1991, "Computer Graphics, Principles and Practice", contains a lot of the ideas WPF is built on.

Probably the most fundamental difference between WPF and GDI (the predecessor Windows graphics system) is that WPF is a retained-mode graphics system, whereas GDI was immediate-mode (non-retained). This means that in WPF there is a visual tree, a data structure which represents the visual scene to be viewed; it gets clipped and rasterized on a regular basis, and that data structure always remains in memory, managed by WPF itself.

Once it is understood that the heart of WPF is a tree structure representing the scene, one finds that the rest is built on principles of handling the specifics of rasterizing the visual tree on top of a mature 3D display system (DirectX). The layering-on of threading (DispatcherObject), the data-binding mechanism (DependencyObject), and the UI idioms of layout, input, focus, eventing (UIElement) and styling (FrameworkElement) are all natural progressions of ideas in Win32 or other UI constructions. To give an example of the latter: even though nothing like DependencyObject ever existed in Win32, a popular 3D tool (Maya), which represents a 3D scene as a directed acyclic graph, has a similar subsystem, where nodes have properties, and when properties are updated, values are pushed via node connections across the graph, and all nodes interested in that property are notified of the new value. From this it can be seen that once you have a central data structure (tree or graph), layering on new capabilities is a straightforward software engineering problem.
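
To make the retained-mode idea concrete, here is a toy sketch of such a node, with an invalidation flag standing in for the change-notification machinery described above. The class and method names are invented for illustration and have nothing to do with WPF's actual types.

import java.util.ArrayList;
import java.util.List;

abstract class VisualNode {
    private final List<VisualNode> children = new ArrayList<>();
    private boolean dirty = true;                    // does this node need re-rasterizing?

    void addChild(VisualNode child) {
        children.add(child);
        invalidate();
    }

    // Called whenever a property of this node changes; the framework, not the application,
    // decides when to walk the tree again and redraw whatever is dirty.
    void invalidate() { dirty = true; }

    void render() {
        if (dirty) {
            draw();                                  // rasterize this node
            dirty = false;
        }
        for (VisualNode c : children) {
            c.render();                              // the retained tree is walked by the framework
        }
    }

    protected abstract void draw();
}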

Having stated all this, WPF should be recognized for what it is: a mature fruit which embodies decades of research, trial and error in building a user interface and graphics technology, and a solid basis for building rich client applications well into the future.

I'm trying to move my game development into the third dimension, but I'm having a bit of trouble understanding what I actually have to do. I've created a 2D MMORPG before using C and SDL, which wasn't too hard. But I can't seem to find any useful resources explaining how 3D programming actually works. I have a basic understanding of the vector math involved, but I just can't seem to find any clear, in-depth explanation of how everything else, like lighting and shaders, works. I've found plenty of code samples and such, but all of them just throw in a comment like "//Apply the lighting", which doesn't really tell me anything about what it's actually doing and why.

I'm not looking for an API-Specific tutorial; it's easy enough to learn a new API--I'm just not sure what to actually do with it.

I'd suggest you check out the NeHe OpenGL tutorials, starting with the simplest ones, and learn a little bit of OpenGL. A good OpenGL reference (such as the Red Book, though that's maybe a bit advanced to start with) will also be a big help.

If you want to understand the concepts of 3D computer graphics and rendering, tutorials will not be of much help. As you found out, they teach you the API but don't help with the fundamental understanding.

You need the bible of graphics programming:

http://www.amazon.com/Computer-Graphics-Principles-James-Foley/dp/0201121107


It does not cover any OpenGL, DirectX or shaders, just the fundamentals. But that's what you need. For example: once you've understood how lighting works, the three lines that enable lighting in OpenGL will suddenly make perfect sense.

The standard text back "in the day" was Foley and Van Dam. Some subjects covered there are a bit long in the tooth, but the fundamental mathematics behind 3D transformations and projections hasn't changed.

Alan Watt's text is also good, but isn't really an introductory book.

You might also take a look at David Eberly's web site, he has written several books on the subject and there is a wealth of related information to be found there as well. Expect plenty of math.

I've noticed that a number of top universities are offering courses where students are taught subjects relating to Computer Graphics for their CS majors. Sadly this is something not offered by my university and something I would really like to get into sometime in the next couple of years.

A couple of the projects I've found from some universities are great, although I'm mostly interested in two things:

  • Raytracing:
    • I want to write a Raytracer within the next two years. What do I need to know? I'm not a fantastic programmer yet (Java, C and Prolog are my main languages as of today) but I'm slowly learning every day. Also, my Math background isn't all that great, so any pointers on books to read or advice on writing such a program would be fantastic. I tend to pick these things up pretty quickly so feel free to chuck references at me.
  • Programming 3D Rendered Models
    • I've looked at a couple of projects where students have developed models and used them in games. I've made a couple of 2D games with raster images but have never worked with 3D models. What would I need to learn in regards to programming these models? If it helps I used to be okay with 3D Studio Max and Cinema4D (although every single course seems to use Maya), but haven't touched it in about four years.

Sorry for posting such vague and, let's be honest, stupid questions. It's just something I've wanted to do for a while and something that'd be good as a large project for me to develop in my own time.


The book "Computer Graphics: Principles and Practice" (known in the Computer Graphics circles as the "Foley-VanDam") is the basic for most computer graphics courses, and it covers the topic of implementing a ray-tracer in much detail. It is quite dated, but it's still the best, afaik, and the basic principles remain the same.

I also second the recommendation for Eric Lengyel's Mathematics for 3D Game Programming and Computer Graphics. It's not as thorough, but it's a wonderful review of the math basics you need for 3D programming, it has very useful summaries at the end of each chapter, and it's written in an approachable, not too scary way.

In addition, you'll probably want some OpenGL or DirectX basics. It's easier to start working with a 3D API and then learn the underlying maths than the other way around (in my opinion), but both options are possible. Just look for OpenGL on SO and you should find a couple of good references as well.
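
If you want a taste of what the ray-tracing chapters build up to, the very first exercise is usually intersecting a ray with a sphere. Here is a small sketch of that test, with plain arrays for vectors and names of my own choosing:

public class RaySphere {

    /**
     * Ray: origin o + t * direction d (d need not be unit length here).
     * Sphere: centre c, radius r.
     * Returns the smallest positive t, or -1 if the ray misses the sphere.
     */
    public static double intersect(double[] o, double[] d, double[] c, double r) {
        double[] oc = { o[0] - c[0], o[1] - c[1], o[2] - c[2] };
        double a = dot(d, d);
        double b = 2 * dot(oc, d);
        double cc = dot(oc, oc) - r * r;
        double disc = b * b - 4 * a * cc;        // discriminant of the quadratic in t
        if (disc < 0) return -1;                 // no real roots: the ray misses
        double sqrtDisc = Math.sqrt(disc);
        double t1 = (-b - sqrtDisc) / (2 * a);   // nearer hit
        double t2 = (-b + sqrtDisc) / (2 * a);   // farther hit
        if (t1 > 0) return t1;
        if (t2 > 0) return t2;                   // ray origin is inside the sphere
        return -1;                               // both hits are behind the ray origin
    }

    private static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
}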

Will you please provide me a reference to help me understand how scanline-based rendering engines work? I want to implement a 2D rendering engine which can support region-based clipping, basic shape drawing and filling with anti-aliasing, and basic transformations (perspective, rotation, scaling). I need algorithms which give priority to performance rather than quality, because I want to implement it for embedded systems with no FPU.

I'm probably showing my age, but I still love my copy of Foley, Feiner, van Dam, and Hughes (The White Book).

Jim Blinn had a great column that's available as a book called Jim Blinn's Corner: A Trip Down the Graphics Pipeline.

Both of these are quite dated now, and aside from the principles of 3D geometry, they're not very useful for programming today's powerful pixel pushers.

OTOH, they're probably just perfect for an embedded environment with no GPU or FPU!
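
The trick those books assume for FPU-less hardware is fixed-point arithmetic: store numbers as integers scaled by a power of two and step polygon edges with one addition per scanline. A minimal sketch follows; the 16.16 format is a common choice, not something the question mandates.

public class FixedPoint {
    static final int SHIFT = 16;                   // 16 integer bits, 16 fractional bits

    static int toFixed(int i)    { return i << SHIFT; }
    static int toInt(int f)      { return f >> SHIFT; }
    static int mul(int a, int b) { return (int) (((long) a * b) >> SHIFT); }
    static int div(int a, int b) { return (int) (((long) a << SHIFT) / b); }

    public static void main(String[] args) {
        // Walk the left edge of a polygon from (x0, y0) down to (x1, y1), one scanline
        // at a time, without ever touching a float: the slope is a 16.16 increment.
        int x0 = 10, y0 = 5, x1 = 40, y1 = 25;
        int x = toFixed(x0);
        int dxdy = div(toFixed(x1 - x0), toFixed(y1 - y0));   // slope as 16.16
        for (int y = y0; y < y1; y++) {
            System.out.println("scanline " + y + " starts at x = " + toInt(x));
            x += dxdy;                              // one integer add per scanline
        }
    }
}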

All paint programs, independent of how simple or complex they are, come with a fill tool. This basically replaces the color of a closed region with another color. I know that there are different APIs to do this, but I am interested in the algorithm. What would be an efficient algorithm to implement this tool?

A couple of things I can think of quickly are:

  1. Convert image into a binary map, where pixels in the color to be replaced are 1 and all other colors are 0.
  2. Find a closed region around the point you want to change such that all the pixels inside are 1 and all the neighbouring pixels are 0.


These kinds of algorithms are discussed in detail in Computer Graphics: Principles and Practice. I highly recommend this book if you're interested in understanding how to rasterize lines, fill polygons, or write 3D code without the benefit of the DirectX or OpenGL APIs. Of course, for real-world applications you'll probably want to use existing libraries, but if you're curious about how these libraries work, this is an awesome read.
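
For the fill tool specifically, one standard answer is a queue-based flood fill. Here is a sketch over a bare int[][] pixel grid (a production version would typically fill whole scanline spans to cut down on queue traffic):

import java.util.ArrayDeque;
import java.util.Deque;

public class FloodFill {

    public static void fill(int[][] pixels, int startX, int startY, int newColor) {
        int h = pixels.length, w = pixels[0].length;
        int target = pixels[startY][startX];          // the colour being replaced
        if (target == newColor) return;               // nothing to do, and avoids an endless loop
        Deque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[] { startX, startY });
        pixels[startY][startX] = newColor;
        int[][] neighbours = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            for (int[] n : neighbours) {
                int x = p[0] + n[0], y = p[1] + n[1];
                if (x >= 0 && x < w && y >= 0 && y < h && pixels[y][x] == target) {
                    pixels[y][x] = newColor;          // colour before queueing so it is visited once
                    queue.add(new int[] { x, y });
                }
            }
        }
    }
}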

I am starting to learn linear algebra, but it has been very mathematical and I don't know its actual usage in programming. I heard it is a very useful subject for movement (animation) and graphics. I thought I could make my learning process for linear algebra more fun if I could learn it through its applications in programming; that is, learn it the practical way and not just by working things out on paper.

Since I am still learning the very basics of linear algebra, I am wondering where and how the basic concepts of linear algebra are used in programming. What kind of interesting things could be done with basic knowledge of linear algebra, such as row-echelon form, LU decomposition, linear combinations/systems, etc.? Are there any tutorials in languages such as Java, ActionScript, PHP or others that teach the usage of basic linear algebra concepts to create interesting simple things?

Thanks!

As you've already stated, the most likely place that you'll find it is in graphics and games programming. You don't say what language you'd like to program in, so I'll assume Java:

http://www.java3d.org/

All techniques are not created equal. You will use LU decomposition and eigenvalues more if you're doing scientific computing.

This is a very good book. Don't be fooled by the date: the mathematics haven't changed. I'd also recommend looking at OpenGL.
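
As a concrete starting point for the "where is this actually used" question: the single most common appearance of a matrix in graphics code is rotating points. A tiny self-contained example (names are mine):

public class Rotate2D {

    // The standard 2x2 rotation matrix for a given angle in radians.
    public static double[][] rotationMatrix(double radians) {
        return new double[][] {
            { Math.cos(radians), -Math.sin(radians) },
            { Math.sin(radians),  Math.cos(radians) }
        };
    }

    // Matrix-vector product: this is the "linear combination" idea applied to a point.
    public static double[] multiply(double[][] m, double[] v) {
        return new double[] {
            m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]
        };
    }

    public static void main(String[] args) {
        double[][] r = rotationMatrix(Math.PI / 2);           // 90 degrees
        double[] p = multiply(r, new double[] { 1, 0 });
        System.out.printf("(1, 0) rotated by 90 degrees -> (%.1f, %.1f)%n", p[0], p[1]);
        // prints (0.0, 1.0): the point on the x axis ends up on the y axis
    }
}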

I'd like to make a rotating object (sphere, box, etc.) using only the canvas, but I can't find a tutorial. Please help if you've seen one somewhere, or explain how to do it.

Like this example, only without any effects

Hope you like math. 3D effects can always be achieved on a 2D plane if you are willing to write some code.

Some resources that will probably help:

An Intro to Computer Graphics

and for help with the math,

A Book on Linear Algebra

I know how to test intersection between a point and a triangle.

...But I don't get how I can move the starting position of the point onto the screen plane precisely by using my mouse coordinates, so that the point's direction changes depending on where the mouse cursor is on the screen. This should also work perfectly no matter which perspective angle I am using in my OpenGL application, so the direction would be different for different perspective angles... gluPerspective() is the function I'm talking about.

You need to generate a ray (line) that starts at the camera and passes through the mouse location on the screen plane.

I would recommend getting some basic information on 3d geometry and 2d projections before you go much further.

Check out Wikipedia

A book search on Google has come up with quite a few titles.

Foley & Van Dam though is the definitive book - here on Amazon.co.uk or here on Amazon.com

I'm looking for some material on how homogeneous coordinates, perspectives, and projections work in 3d graphics on a basic level. An approach using programming would be stellar. I've been searching around and my searches are convoluted with OpenGL, Direct3d, and material more concerned with the mathematical proofs than the actual application. Does anyone know of a place where I could find this information (online access preferred)?

I'm reading the following two books together right now to learn WPF's 3D graphics model and the underlying math at the same time. Both books are outstanding in my opinion, though it may not be what you're looking for:

3D Math Primer for Graphics and Game Development

3D Programming for Windows (this is a WPF 3d book, though the title doesn't reflect it)

You probably have to look for pre-OpenGL textbooks like Foley and van Dam's Computer Graphics: Principles and Practice in C (2nd Edition).

In particular, Ch 11 Representing Curves and Surfaces and Ch 15 Visible-Surface Determination would be relevant, but earlier material on how to draw lines and shapes would also be useful if you are truly doing everything from scratch. Something as simple as drawing a line is non-trivial if you think about it.
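
Since you asked for a programming-flavoured take on homogeneous coordinates: the whole idea fits in a few lines: append w = 1, multiply by a 4x4 matrix, then divide by w. Here is a sketch using the textbook "project onto the plane z = d with the eye at the origin" matrix; the class and method names are my own.

public class Homogeneous {

    // Projects a 3D point onto the plane z = d, eye at the origin.
    public static double[] project(double[] point3, double d) {
        double[] p = { point3[0], point3[1], point3[2], 1 };   // promote to homogeneous coordinates
        double[][] m = {
            { 1, 0, 0,     0 },
            { 0, 1, 0,     0 },
            { 0, 0, 1,     0 },
            { 0, 0, 1 / d, 0 },          // this row is what makes the matrix a perspective
        };
        double[] r = new double[4];
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                r[i] += m[i][j] * p[j];
            }
        }
        // the perspective divide: homogeneous (x, y, z, w) is the ordinary point (x/w, y/w, z/w)
        return new double[] { r[0] / r[3], r[1] / r[3], r[2] / r[3] };
    }

    public static void main(String[] args) {
        // a point twice as far away as the projection plane lands at half its x and y
        double[] s = project(new double[] { 4, 2, 10 }, 5);
        System.out.printf("(4, 2, 10) -> (%.1f, %.1f) on the z = 5 plane%n", s[0], s[1]);
    }
}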


I'm making a software rasterizer for school, and I'm using an unusual rendering method instead of traditional matrix calculations. It's based on a pinhole camera. I have a few points in 3D space, and I convert them to 2D screen coordinates by taking the vector between each point and the camera and normalizing it:

Vec3 ray_to_camera = (a_Point - plane_pos).Normalize();

This gives me a directional vector towards the camera. I then turn that direction into a ray by placing the ray's origin on the camera and performing a ray-plane intersection with a plane slightly behind the camera.

Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);

float dot = ray_to_camera.GetDotProduct(m_Direction);
if (dot < 0)
{
   float time = (-m_ScreenDistance - plane_pos.GetDotProduct(m_Direction)) / dot;

   // if time is smaller than 0 the ray is either parallel to the plane or misses it
   if (time >= 0)
   {
      // retrieving the actual intersection point
      a_Point -= (m_Direction * ((a_Point - plane_pos).GetDotProduct(m_Direction)));

      // subtracting the plane origin from the intersection point 
      // puts the point at world origin (0, 0, 0)
      Vec3 sub = a_Point - plane_pos;

      // the axes are calculated by saying the directional vector of the camera
      // is the new z axis
      projected.x = sub.GetDotProduct(m_Axis[0]);
      projected.y = sub.GetDotProduct(m_Axis[1]);
   }
}

This works wonderfully, but I'm wondering: can the algorithm be made any faster? Right now, for every triangle in the scene, I have to calculate three normals.

float length = 1 / sqrtf(GetSquaredLength());
x *= length;
y *= length;
z *= length;

Even with a fast reciprocal square root approximation (1 / sqrt(x)) that's going to be very demanding.

My questions are thus:

  • Is there a good way to approximate the three normals?
  • What is this rendering technique called?
  • Can the three vertex points be approximated using the normal of the centroid? ((v0 + v1 + v2) / 3)

Thanks in advance.

P.S. "You will build a fully functional software rasterizer in the next seven weeks with the help of an expert in this field. Begin." I ADORE my education. :)

EDIT:

Vec2 projected;

// the plane is behind the camera
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);

float scale = m_ScreenDistance / (m_Position - plane_pos).GetSquaredLength();

// times -100 because of the squared length instead of the length
// (which would involve a squared root)
projected.x = a_Point.GetDotProduct(m_Axis[0]).x * scale * -100;
projected.y = a_Point.GetDotProduct(m_Axis[1]).y * scale * -100;

return projected;

This returns the correct results, however the model is now independent of the camera position. :(

It's a lot shorter and faster though!

This is called a ray-tracer - a rather typical assignment for a first computer graphics course* - and you can find a lot of interesting implementation details in the classic Foley/Van Dam textbook (Computer Graphics: Principles and Practice). I strongly suggest you buy/borrow this textbook and read it carefully.

*Just wait until you get started on reflections and refraction... Now the fun begins!
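
On the "can it be made faster" part of the question: the ray-plane intersection and the per-point normalization can both be replaced by similar triangles once the point is expressed in the camera's own basis. A sketch follows, using right/up/forward as stand-ins for the question's m_Axis[0], m_Axis[1] and m_Direction (that mapping is my assumption, as are the names).

public class FastProject {

    /**
     * camPos             : camera position
     * right, up, forward : the camera's orthonormal basis vectors
     * screenDistance     : distance from the camera to the image plane
     * Returns {x, y} on the image plane, or null if the point is behind the camera.
     */
    public static double[] project(double[] p, double[] camPos,
                                   double[] right, double[] up, double[] forward,
                                   double screenDistance) {
        double[] rel = { p[0] - camPos[0], p[1] - camPos[1], p[2] - camPos[2] };
        double x = dot(rel, right);
        double y = dot(rel, up);
        double z = dot(rel, forward);        // depth along the view direction
        if (z <= 0) return null;             // behind the camera
        double s = screenDistance / z;       // similar triangles: no normalize, no ray-plane test
        return new double[] { x * s, y * s };
    }

    private static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
}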

Right now, I think a combination of C and OpenGL is what I need to learn, but it seems like there is still more that I need. Also, I'm not sure where to start. I know some C from reading The C Programming Language (K&R).

If you understand the concepts behind 3D computer graphics then OpenGL + C is about all you need. If you need some help with the concepts then I'd suggest the NeHe OpenGL tutorials and a good reference book like The Red Book.

I would very highly recommend getting, reading, and working through some of the examples in the book Computer Graphics: Principles and Practice. Yes, the book is MASSIVELY out of date, but it's still the canonical reference for this sort of thing.

I already have the basics of ambient occlusion down. I have a raycaster and am capable of shooting rays about a hemisphere uniformly. It seems like those are the basics of what is needed for radiosity, but I don't know where to go from there. Do I find how much light comes from each face? (I'm making my game out of cubes, like Minecraft.) After that, what do I do?

If you're interested in computer graphics "theory", I'd highly recommend Foley/van Dam:

http://www.amazon.com/Computer-Graphics-Principles-Practice-2nd/dp/0201848406

If you're just interested in what it is, and how it works, Wikipedia has a great article (with visual examples and math equations):

http://en.wikipedia.org/wiki/Radiosity_%283D_computer_graphics%29

And for an over-simplified one-liner, I guess you could say "radiosity is a more sophisticated technique for rendering ambient lighting in a ray-traced image".

IMHO ...
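
To make the one-liner slightly more concrete: the classic radiosity formulation is B_i = E_i + rho_i * sum_j(F_ij * B_j), and the simplest solver just iterates that gathering step. A sketch follows, assuming you already have the form factors between your cube faces (computing those is the hard part that the Wikipedia article and the book cover); all names are mine.

public class Radiosity {

    /**
     * emission[i]      : light emitted by patch i
     * reflect[i]       : reflectivity (albedo) of patch i
     * formFactor[i][j] : fraction of light leaving patch i that arrives directly at patch j
     * Returns the radiosity of every patch after a fixed number of gathering passes.
     */
    public static double[] solve(double[] emission, double[] reflect,
                                 double[][] formFactor, int iterations) {
        int n = emission.length;
        double[] b = emission.clone();                 // initial guess: only the emitters glow
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double gathered = 0;
                for (int j = 0; j < n; j++) {
                    gathered += formFactor[i][j] * b[j];   // light arriving from every other patch
                }
                next[i] = emission[i] + reflect[i] * gathered;
            }
            b = next;                                   // each pass bounces the light once more
        }
        return b;
    }
}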

I hate to ask such a dumb question, but I just can't figure out how to flip an image using Android OpenGL.

I tried using gl.glScalef(-1,y,z) and gl.glRotatef(180,0,1,0), but when I do this the image flips but it also changes position, which I do not want. I'm sure there's an easy way to do this; I'm just not getting it.

Here is my draw code:

public void draw(GL10 gl){
    gl.glLoadIdentity();
    gl.glTranslatef(position.x, position.y, 0);
    gl.glRotatef(angle, rotX, rotY, rotZ);
    gl.glScalef(scaleX, scaleY, scaleZ);

    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glEnable(GL10.GL_BLEND);

    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexsBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);

    gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);

    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glDisable(GL10.GL_BLEND);

    if(animation == true){
        PlayAnimations();
    }
}
  1. center the object (remember the translation)
  2. perform the flipping by scaling to -1 with respect to the desired axis.
  3. then "reverse translate" the object.

For more information, please grab yourself a copy of Computer Graphics by James D. Foley. http://www.amazon.com/Computer-Graphics-Principles-Practice-2nd/dp/0201848406
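
Here is what those three steps look like as code, as a small helper you could call from the draw() method above, just before drawing; cx and cy are the object's centre in its own coordinates, which is an assumption about your vertex data, and the class name is mine.

import javax.microedition.khronos.opengles.GL10;

public class FlipHelper {

    /** Mirrors the next thing drawn across the vertical axis through (cx, cy). */
    public static void flipHorizontallyAbout(GL10 gl, float cx, float cy) {
        // OpenGL post-multiplies the current matrix, so the call issued last
        // is the one applied to the vertices first:
        gl.glTranslatef(cx, cy, 0);      // 3. move the object back to where it was
        gl.glScalef(-1f, 1f, 1f);        // 2. mirror across the axis (now through the origin)
        gl.glTranslatef(-cx, -cy, 0);    // 1. bring the chosen centre to the origin
    }
}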

I have a class that holds a 4x4 matrix for scaling and translations. How would I implement rotation methods in this class? And should I implement the rotation as a separate matrix?

You want to make sure you find a reference which talks about the right kind of matrix that's used for computer graphics (namely 3D homogeneous coordinates using a 4x4 transformation matrix for rotation/translation/skewing).

See a computer graphics "bible" such as Foley and Van Dam (pg. 213), or one of these:

I'm new to Android and Java programming and I've managed to create a simple paint program, but how do I add a zoom feature? Right now I'm just extending the View class and using the "onDraw()" method.

Do I have to use a Drawable to be able to add zooming functionality? I'm not really understanding the differences between the two.

If I am way off base then please point me to a good tutorial on paint/zooming.

I think your question is beyond the scope that stackoverflow Q/A format can provide. I know you're asking for 'simple' but imho that's probably due to your lack of perception about the scope of the question that you're asking.

In order to support zooming you need to know what kind of image processing engine you want. Are you creating a vector- or raster-based drawing program? If you do not understand the difference between the two, then you're going to have a difficult time figuring out what to do.

You should probably at least gain a basic understanding about these various topics (links to books pulled more or less from the top of amazon's search results):

Wikipedia links:

Open source image processing apps (not android but source code never hurts to see how others have done something)

I'm sorry there isn't an easy and direct answer to your question. I'm also not saying that you need to become an expert in these topics to do what you want. You just need to familiarize yourself with them and then you'll probably be able to implement what you want without much difficulty.
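
That said, if your drawing is already a list of strokes (a vector-ish approach), the simplest possible zoom is to scale the Canvas before replaying them in onDraw(). A minimal sketch, with class and field names of my own invention:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

public class ZoomablePaintView extends View {
    private final Path strokes = new Path();     // what the user has drawn so far
    private final Paint paint = new Paint();
    private float zoom = 1f;                     // 1 = 100%

    public ZoomablePaintView(Context context) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
    }

    public void setZoom(float z) {
        zoom = z;
        invalidate();                            // ask Android to call onDraw() again
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.scale(zoom, zoom);                // everything drawn below is magnified
        canvas.drawPath(strokes, paint);
        canvas.restore();
    }
}

For a raster approach you would instead draw your backing Bitmap through a scaling Matrix, but then you also have to decide how scaled pixels get resampled, which is exactly the kind of question the image-processing references above deal with.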

Are there properties of digital images (e.g. DCT coefficients, pixel values, YCbCr, others) that remain constant when filters like binarization, grayscale, or sepia are applied, or when the image is tilted by a certain degree? It would also be helpful if you could suggest any reading or online tutorials for basic image processing.

Q: Are there properties of digital images ... that remain constant ...

A: Sure: height and width ;-)

Q: ...or tilting the image by a certain degree...

A: Whoops - maybe not even height and width ;)

ANYWAY -

Your question is far, far too broad.

SUGGESTION: Get a copy of Foley/van Dam:

As the title says, I'd like to program a 3d game (probably a BattleZone clone), but without the use of an API like OpenGL, DirectX, and the like. At the heart of the matter, I'd just like to learn how to draw basic 3d shapes to the screen and manipulate them. Don't care if it looks like crap. I've used OpenGL to achieve similar ends before, but really didn't learn about these topics.

The problem is, I have no idea where to start. I downloaded the Doom source code, but it's a bit over my head. Although I've programmed a bit, graphical matters are very much out of my depth.

I'd be very grateful if anyone could offer links or code (in any language) that would help me along in my purpose.

Sounds like an exciting project. I did something similar in the late 90's. Before OpenGL and DirectX became popular, there were a ton of great books on the subject.

Fundamentally you will have to learn how to

  • Represent 3D geometry
  • Transform that geometry (translate and rotate)
  • Project that geometry onto a 2D screen.

Each of those major topics has many sub-topics (for example, complex objects can be constructed from a number of polygons. You may want to limit polygons to being constructed of triangles only, or support other polygons. You may want to load common model formats, e.g. .obj files, so that you can create models with off-the-shelf tools).

The topics are way too broad for a detailed answer here. Whole books are written on the subject, including

Black Art of 3D Game Programming (Book, amazingly still available)

For a good introduction to the general topics, have a look at:

http://en.wikipedia.org/wiki/3D_projection

http://en.wikipedia.org/wiki/Orthographic_projection

http://en.wikipedia.org/wiki/Transformation_matrix#Perspective_projection

Doom, which you already looked at, used a special optimization called heightfield rendering and does not allow for rendering of arbitrary 3D shapes (e.g., you will not find a bridge in Doom that you can walk under).

I have the second edition of Computer Graphics: Principles and Practice in C and it uses SRGP (the Simple Raster Graphics Package) and SPHIGS, which is built on top of SRGP. If you look up articles and papers on graphics research you'll see that both of these libraries are used a lot, and they are way more direct and low-level than the APIs you mentioned. I'm having a hard time locating them, so if you do, please give a link. Note that the third edition is in WPF, so I cannot guarantee much as to its usefulness, and I don't know if the second edition is still in print, but I have found numerous references to the book, and it's got its own page on Wikipedia.

Another solution would be the Win32 API which again does not provide much in terms of rendering, but it is trivial to draw dots and lines onto a window. I have written a few tutorials on it, but I didn't cover drawing pixels and lines, so they'll only be useful if you have trouble with the basics of setting up a window. Note that it is not intended for real-time rendering, so it may get slow.

Finally, you can look at X11 programming, the foundation of the GUI on most Unix-like operating systems. I haven't found the libraries for Windows, but again I didn't invest too much time in it. I know it is available through Cygwin and for Linux in general, though, and I believe it would be very interesting to look at the core of graphics since you're already looking under the hood of 3D graphics.

I'm writing a ray tracer (using left-handed coordinates, if that makes a difference). It's for the sake of teaching myself the principles, so I'm not using OpenGL or complex features like depth of field (yet). My camera can have an arbitrary position and orientation; I indicate them by way of three vectors, location, look_at, and sky, which behave like the equivalent POV-Ray vectors. Its "film" also has a width and height. (The focal length is implied by the distance from position to look_at.)

My problem is that I don't know how to cast the rays. I have two quantities, vx and vy, that indicate where the ray should end up. They both vary from -1 to 1. If they're both -1, I'm casting the ray from the camera's position to the top-left corner of the "film"; if they're both 1, the bottom-right; if they're both 0, the center; and the rest is apparent.

I'm not familiar enough with vector arithmetic to derive an equation for the ray. I would appreciate an explanation of how to do so.

You've described what needs to be done quite well already. Your field of view is determined by the distance between your camera and your "film" that you're going to cast your rays through. The further away the camera is from the film, the narrower your field of view is.

Imagine the film as a bitmap image that the camera is pointing at. Say we position the camera one unit away from the bitmap. We then have to cast a ray through each of the bitmap's pixels.

The vector is extremely simple. If we put the camera location at (0,0,0), and the bitmap film right in front of it with its center at (0,0,1), then the ray to the bottom right is - tada - (1,1,1), and the one to the bottom left is (-1,1,1).

That means that the difference between the bottom right and the bottom left is (2,0,0).

Assuming your horizontal bitmap resolution is 1000, you can iterate through the bottom-line pixels as follows:

width = 1000;
cameraToBottomLeft = (-1,1,1);
bottomLeftToBottomRight = (2,0,0);

for (x = 0; x < width; x++) {
    // use floating-point division here; an integer x/width would truncate to zero
    ray = cameraToBottomLeft + (x / (float) width) * bottomLeftToBottomRight;
    ...
}

If that's clear, then you just add an equivalent outer loop for your lines, and you have all the rays that you will need.

You can then add appropriate variables for the distance of the camera to the film and horizontal and vertical resolution. When that's done, you could start changing your look vector and your up vector with matrix transformations.

If you want to wrap your head around computer graphics, an introductory textbook could be of great help. I used this one in college, and I think I liked it.
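
And to connect that back to the location / look_at / sky vectors from the question: build an orthonormal camera basis from them with cross products, then the ray for (vx, vy) is a weighted sum of those basis vectors. A sketch with plain double[3] vectors follows; the sign conventions and the order of the cross products depend on your handedness, so treat those as assumptions to adjust, and all names are mine.

public class CameraRays {

    static double[] sub(double[] a, double[] b)   { return new double[] { a[0]-b[0], a[1]-b[1], a[2]-b[2] }; }
    static double[] add(double[] a, double[] b)   { return new double[] { a[0]+b[0], a[1]+b[1], a[2]+b[2] }; }
    static double[] scale(double[] a, double s)   { return new double[] { a[0]*s, a[1]*s, a[2]*s }; }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
    }
    static double length(double[] a)      { return Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]); }
    static double[] normalize(double[] a) { return scale(a, 1 / length(a)); }

    // Direction of the ray through film coordinates (vx, vy), both in [-1, 1],
    // where (-1, -1) is the top-left corner of the film as in the question.
    static double[] rayDirection(double[] location, double[] lookAt, double[] sky,
                                 double vx, double vy, double filmWidth, double filmHeight) {
        double[] forward = normalize(sub(lookAt, location));   // into the scene
        double[] right   = normalize(cross(sky, forward));     // swap operands for the other handedness
        double[] up      = cross(forward, right);              // true "up" of the film plane
        double focal     = length(sub(lookAt, location));      // focal length implied by look_at

        double[] onFilm = add(scale(forward, focal),
                          add(scale(right,  vx * filmWidth  / 2),
                              scale(up,    -vy * filmHeight / 2)));   // vy = -1 is the top row
        return normalize(onFilm);                               // ray = location + t * direction
    }
}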

I use the EasyBMP library and I want to know the most effective algorithms to scale, rotate, shear and reflect. I want the most optimized way to do it.

The most effective way to scale, rotate, shear and reflect is to use the power of your graphics card - for example through OpenGL.

If you still want to do bitmap pixel operations yourself, typically you do this using linear algebra. This is not super easy to figure out, so I recommend finding some good study material, for example this book.
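
In case it helps, here is what the "linear algebra on pixels" approach usually looks like in practice: invert the 2x2 transform and, for every destination pixel, look up where it came from in the source (inverse mapping avoids holes in the output). The sketch below works on a bare int[][] of packed pixel values rather than EasyBMP's own types, so treat the data layout and names as assumptions.

public class AffineResample {

    /**
     * Applies the 2x2 transform [a b; c d] (rotation, scale, shear and reflection are
     * all special cases) about the image centre, with nearest-neighbour sampling.
     * src and dst are row-major int[height][width] arrays of packed pixel values.
     */
    public static int[][] transform(int[][] src, double a, double b, double c, double d, int background) {
        int h = src.length, w = src[0].length;
        int[][] dst = new int[h][w];
        double det = a * d - b * c;
        if (det == 0) throw new IllegalArgumentException("matrix is not invertible");
        // inverse of [a b; c d]
        double ia =  d / det, ib = -b / det;
        double ic = -c / det, id =  a / det;
        double cx = w / 2.0, cy = h / 2.0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // map the destination pixel back into the source image
                double sx = ia * (x - cx) + ib * (y - cy) + cx;
                double sy = ic * (x - cx) + id * (y - cy) + cy;
                int xi = (int) Math.round(sx), yi = (int) Math.round(sy);
                dst[y][x] = (xi >= 0 && xi < w && yi >= 0 && yi < h) ? src[yi][xi] : background;
            }
        }
        return dst;
    }
}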

What I am trying to learn to do is draw a 3D shape in a JFrame.

All I want to use to do this is my IDE. It would be great if anyone can help with how to draw any 3D shape, like a cube, and, if it's possible, how to rotate it. Or could someone post a link to a resource for learning how to draw 3D shapes with only Java, like I said above? If anyone needs more detail, please ask. (Please do not ask me to do this differently if it is possible to do it this way.)

Thanks.

Java has a 3-D package, which may or may not meet your criterion of "no other libraries or framework or anything":

If it does, then you're still going to have to at least use the 2D Java package:

Q: How do you draw 3D objects using the 2D primitives?

A: It's not necessarily difficult, depending on how in-depth you want to go. At its simplest, just:

a) define a 3d coordinate system (normalized points 0.0 - 1.0 are always good)

b) write the functions to transform your 3D model into 2D coordinates

Lots of books (and tutorials), including:

I'm assuming your goal is to "learn the basics".
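
Putting (a) and (b) together, here is a small self-contained sketch that spins a wireframe cube in a JFrame using nothing beyond Swing and Java2D. The projection is the usual rotate-then-divide-by-depth, and all the constants (viewer distance, scale, timer period) are arbitrary choices of mine.

import javax.swing.*;
import java.awt.*;

public class CubePanel extends JPanel {
    private double angle = 0;
    // cube corners in a simple -1..1 coordinate system
    private final double[][] verts = {
        {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1},
        {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1} };
    private final int[][] edges = {
        {0,1},{1,2},{2,3},{3,0}, {4,5},{5,6},{6,7},{7,4}, {0,4},{1,5},{2,6},{3,7} };

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        int cx = getWidth() / 2, cy = getHeight() / 2;
        double viewerDistance = 4, scale = 200;
        int[] sx = new int[verts.length], sy = new int[verts.length];
        for (int i = 0; i < verts.length; i++) {
            double x = verts[i][0], y = verts[i][1], z = verts[i][2];
            // rotate about the Y axis
            double rx =  x * Math.cos(angle) + z * Math.sin(angle);
            double rz = -x * Math.sin(angle) + z * Math.cos(angle);
            // perspective projection: divide by depth
            double depth = rz + viewerDistance;
            sx[i] = cx + (int) (scale * rx / depth);
            sy[i] = cy - (int) (scale * y / depth);
        }
        for (int[] e : edges) {
            g.drawLine(sx[e[0]], sy[e[0]], sx[e[1]], sy[e[1]]);
        }
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Wireframe cube");
        CubePanel panel = new CubePanel();
        frame.add(panel);
        frame.setSize(400, 400);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
        new Timer(16, e -> { panel.angle += 0.02; panel.repaint(); }).start();
    }
}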

I want to achieve the effect of a 2D image I have being a little inclined, like a plane tilted in space; I want the image to be able to rotate about its Y axis. Can anyone help me with some idea of how to do this?

Basically you need a little linear geometry/algebra, and/or a package to do them for you.

From the geometry point of view, you think of the image as if it's on a plane in space; you're looking at it as if it were back-projected on your monitor. If the picture is exactly parallel to that screen, and the same size, each point is mapped to a pixel on the screen. Otherwise you have to go through a computation that makes that mapping, which involves a trig function for the angles in the x,y,z directions between that plane and the plane of the screen. The linear algebra comes in because the easy way to handle this computation is as a series of multiplications of 4×4 matrices.

Now, you could program all that yourself, and for what you're thinking of it wouldn't be all that difficult. See any good computer graphics text, like Shirley, or Foley and van Dam.

As far as a package, there's good 3D graphics in Java. Even better, there are good tutorials: