If you’ve seen some of my work at flashandmath.com, then you know that I enjoy playing with particles, especially the 3D variety (for example, see here and here). Below is one of my first adventures with 3D particles in the HTML5 canvas: particles which begin their life on a sphere and then fly away. The effect looks like some sort of microscopic infectious agent, or perhaps just something from nature that would make you sneeze. Call it whatever you like, but I’ll just call it a dusty sphere.

It is important to note that I am using a 2D canvas context in this example, rather than the exciting but still not widely supported WebGL 3D context.

Click here or on the screenshot below to see the dusty sphere in action, and then read my comments below to learn more about drawing in 3D and also about how to evenly distribute points randomly on the surface of a sphere.

dusty sphere screencap

Download

Download the full commented source code here: DustySphere.zip

About the code

The code used here is built off of my earlier examples, and features linked lists and an object pool for efficiency. See my earlier post here for a little more discussion of the particle animation code. Also being reused here is the idea of “envelope” parameters (attack, hold, and decay) which control the evolution of the particles over time. In my earlier examples, the envelope parameters were used to change the size of the particles over time; here they control the alpha value, so each particle can fade in at the beginning of its life, and fade out at the end.

I have also made a few changes to the basic particle animation engine: before, the particles were defined using a constructor function so that they could inherit a prototype method, but I have simplified the code by eliminating this setup. The particles are simply JavaScript Objects, and parameters such as position and velocity components are added dynamically.
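The object-pool idea can be sketched like this (a minimal illustration; the names `pool`, `newParticle`, and `recycleParticle` are placeholders, not the demo's actual identifiers):

```javascript
// A minimal object-pool sketch: reuse dead particle objects instead of
// allocating new ones on every spawn. All identifiers here are illustrative.
var pool = [];

function newParticle(x, y, z) {
  // Reuse a recycled object if one is available, otherwise allocate a new one.
  var p = pool.length > 0 ? pool.pop() : {};
  p.x = x;
  p.y = y;
  p.z = z;
  p.age = 0;
  return p;
}

function recycleParticle(p) {
  // When a particle reaches the end of its life, drop it back into the
  // pool so it can be reused later.
  pool.push(p);
}
```

Recycling avoids constantly creating and garbage-collecting thousands of short-lived objects, which helps keep the animation smooth.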

Drawing in 3D the simple way

The 3D imaging used here is very simple for a couple of reasons. First of all, the objects to be drawn are simple dots. If they were planar images (like sides of cubes, flipping playing cards, or twirling snowflakes), we would have to worry about skewing the images properly when they are viewed obliquely. But simple dots suggest spherical objects which look the same no matter what angle you view them from. Also, to simplify matters the particles are all given the same color, so we don’t have to worry about depth sorting. If you’re not familiar with depth sorting, the idea is that when we draw objects in 3D, objects which are behind other objects have to be drawn first and then the nearer objects are painted over them. But if they are all the same color then the layering is not detectable, so we can draw the objects in whatever order we like.

So without having to worry about skewing images or depth sorting, drawing in 3D just comes down to proper scaling of coordinates and object sizes. Objects further away should appear smaller, and nearer objects should appear bigger. In the coordinate setup used here, the x and y axes are in their usual position for canvas drawing: x goes from left to right, y goes from top to bottom. The third axis, z, can be thought of as pointing out of the computer screen toward you. (The more mathematical-minded reader will note that this choice of z direction creates a left-handed coordinate system instead of the usual right-handed version. This was an arbitrary choice.)

The transformation to use is simple. First we must set a few parameters. The first parameter, fLen, can be thought of as the distance from the viewer’s eye to the origin, where the line of sight is along the z-axis. Second, we define two coordinates projCenterX and projCenterY which set the position in the 2D view plane where the 3D origin will be projected. Then anything which is to be drawn at the 3D point (x,y,z) should be drawn in the 2D plane at the projected x and y coordinates


projX = fLen/(fLen - z)*x;
projY = fLen/(fLen - z)*y;
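Wrapped up as a function, the projection might look like this (a sketch; the demo inlines these computations rather than calling a helper, and the function name is illustrative):

```javascript
// Project a 3D point (x, y, z) onto the 2D view plane. fLen is the
// eye-to-origin distance; (projCenterX, projCenterY) is where the 3D
// origin lands on the canvas.
function project(x, y, z, fLen, projCenterX, projCenterY) {
  var m = fLen / (fLen - z); // scale factor: > 1 for near points, < 1 for far
  return {
    x: projCenterX + m * x,
    y: projCenterY + m * y,
    scale: m // also used to scale the drawn size of the object
  };
}
```

Note that a point at z = 0 projects with scale factor 1, and the scale grows as the point moves toward the viewer (z approaching fLen).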

Rotating and projecting

To make things a bit more interesting, the whole space containing the particles rotates slowly about a vertical axis. An equivalent way to think of this is that the viewer’s eye (or camera) is rotating around the display. For an extensive and well-written article explaining the mathematics behind 3D projections and rotations of coordinates, see Barbara Kaskosz’s excellent post at flashandmath here. In the example here, however, things are simplified because the rotation is occurring automatically (not from user interaction), and the rotation only alters the x and z coordinates (because the rotation is about a vertical axis).

Here is how this all comes together in the code for the demo above. First, a current rotation angle turnAngle is set by adding a fixed amount turnSpeed on every frame:

turnAngle = (turnAngle + turnSpeed) % (2*Math.PI);

We will need the sine and cosine of this angle twice each, so we first calculate

sinAngle = Math.sin(turnAngle);
cosAngle = Math.cos(turnAngle);

We will now determine the rotated 3D coordinates rotX and rotZ for a particle p which is at a point with coordinates p.x, p.y, p.z (note that the y coordinate will remain unchanged). If we were rotating about the y-axis itself, the correct transformation would be

rotX = cosAngle*p.x + sinAngle*p.z;
rotZ = -sinAngle*p.x + cosAngle*p.z;

But in fact, we are rotating about a vertical axis at the center of our sphere, which is set to a z‑coordinate sphereCenterZ. So the rotational transformation has to be adjusted accordingly:

rotX = cosAngle*p.x + sinAngle*(p.z - sphereCenterZ);
rotZ = -sinAngle*p.x + cosAngle*(p.z - sphereCenterZ) + sphereCenterZ;
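As a self-contained function, the adjusted rotation looks like this (a sketch; the function name is illustrative and the demo inlines this math in its update loop):

```javascript
// Rotate the point (x, z) by 'angle' about a vertical axis passing through
// z = sphereCenterZ. The y coordinate is unaffected by this rotation.
function rotateAboutAxis(x, z, angle, sphereCenterZ) {
  var sinAngle = Math.sin(angle);
  var cosAngle = Math.cos(angle);
  return {
    x: cosAngle * x + sinAngle * (z - sphereCenterZ),
    z: -sinAngle * x + cosAngle * (z - sphereCenterZ) + sphereCenterZ
  };
}
```

Subtracting sphereCenterZ before rotating and adding it back afterward is the usual translate–rotate–translate pattern for rotating about an axis that does not pass through the origin.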

Finally, we project these new 3D coordinates (rotX, y, rotZ) to the 2D viewing plane using the transformation described above:

m = fLen/(fLen - rotZ);
p.projX = rotX*m + projCenterX;
p.projY = p.y*m + projCenterY;

It is at this point in the 2D viewing plane where we will draw our particle, and its size will be scaled by the same factor m.

Depth based darkening

To add to the 3D effect, particles which are further away from the viewer are colored more darkly than nearer particles. This is achieved by simply lowering the alpha value of particles further back, which has the effect of darkening them as they are drawn on a black background. Again, because the particles all have the same color, the alpha blending makes depth-sorting unnecessary.
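One simple way to implement this darkening is a linear mapping from depth to an alpha multiplier. This is a sketch under assumed parameter names (zMin, zMax, minFactor); the actual demo may use a different mapping:

```javascript
// Scale a particle's alpha by its depth: particles at zMin (farthest) keep
// only minFactor of their alpha, particles at zMax (nearest) keep all of it.
// The linear mapping and parameter names here are illustrative assumptions.
function depthAlpha(alpha, rotZ, zMin, zMax, minFactor) {
  var t = (rotZ - zMin) / (zMax - zMin); // 0 at the back, 1 at the front
  return alpha * (minFactor + (1 - minFactor) * t);
}
```

Because the background is black, lowering alpha reads as darkening, so no second "shadow" color is needed.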

Randomly distributing points on a sphere

Choosing a random point on a sphere is easy if you understand spherical coordinates (see the Wikipedia entry here). You simply need to choose a random angle theta ranging from 0 to 2*pi and another random angle phi ranging from 0 to pi (the radius coordinate will be the constant sphere radius). But if you set these random angles in the naive way:

//WRONG way
theta = Math.random()*2*Math.PI;
phi = Math.random()*Math.PI;

then points near the poles of the sphere will be more heavily weighted and this will not give you an even distribution of points. Counteracting this bias can be accomplished using the arccosine function:

//RIGHT way
theta = Math.random()*2*Math.PI;
phi = Math.acos(Math.random()*2-1);

This has the effect of more frequently choosing angles near the equator, which is what you want, because in an even distribution of points there are more points near the equator than there are near the poles.
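Putting this together with the standard conversion from spherical to Cartesian coordinates gives a complete sampler (a sketch; the function name is illustrative, and the axis conventions here are the standard mathematical ones, which the demo maps onto its own canvas axes):

```javascript
// Pick a point uniformly at random on a sphere of radius R centered at the
// origin, using the arccosine correction for phi described above.
function randomSpherePoint(R) {
  var theta = Math.random() * 2 * Math.PI;    // longitude: 0 to 2*pi
  var phi = Math.acos(Math.random() * 2 - 1); // latitude: bias-corrected
  return {
    x: R * Math.sin(phi) * Math.cos(theta),
    y: R * Math.sin(phi) * Math.sin(theta),
    z: R * Math.cos(phi)
  };
}
```

Equivalently, the arccosine trick amounts to choosing cos(phi) uniformly in [-1, 1], which is exactly the distribution a uniform point on the sphere has.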

The particle motion and evolution

Each particle has a lifetime which is kept track of using its age property, which increments by one on every update of the screen. The particle has different attributes and behaviors according to its age. While the age is still less than the particle’s stuckTime, the particle remains stuck to its initial position. After this period of time, its velocity and position are updated according to some acceleration factors. The initial velocity parameters cause the particle to fly outwards away from the sphere center (after it becomes unstuck), and some random acceleration is added on each frame to create irregular motion.
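A per-frame update along these lines can be sketched as follows (parameter names like stuckTime and accelRange are illustrative, not the demo's exact identifiers):

```javascript
// Advance one particle by one frame. Before its stuckTime has elapsed the
// particle stays put; afterward its velocity receives a small random kick
// each frame and its position is advanced. All names here are illustrative.
function updateParticle(p, accelRange) {
  p.age++;
  if (p.age > p.stuckTime) {
    // random acceleration in [-accelRange, accelRange] on each axis
    p.velX += accelRange * (Math.random() * 2 - 1);
    p.velY += accelRange * (Math.random() * 2 - 1);
    p.velZ += accelRange * (Math.random() * 2 - 1);
    p.x += p.velX;
    p.y += p.velY;
    p.z += p.velZ;
  }
}
```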

The particles also change their alpha value so that they fade in and out over time. These timing parameters are called the attack, hold, and decay times. A particle will go from alpha zero to its maximum alpha in the attack time, hold the maximum alpha for the duration of the hold time, then fade back to zero alpha in the decay time. When a particle reaches the end of this lifecycle, it is removed from the list of active particles and placed into a recycle bin (object pool) to be used again later when another particle needs to be added to the display.
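The attack/hold/decay envelope can be written as a small piecewise function of age (a sketch; the demo tracks these phases with its own timing variables rather than a helper like this):

```javascript
// Alpha envelope: ramp from 0 up to maxAlpha over attackTime frames, hold
// maxAlpha for holdTime frames, then ramp back down to 0 over decayTime
// frames. Past the end of the lifecycle the particle is fully transparent.
function envelopeAlpha(age, attackTime, holdTime, decayTime, maxAlpha) {
  if (age < attackTime) {
    return maxAlpha * age / attackTime;            // fading in
  } else if (age < attackTime + holdTime) {
    return maxAlpha;                               // fully visible
  } else if (age < attackTime + holdTime + decayTime) {
    var t = age - attackTime - holdTime;
    return maxAlpha * (1 - t / decayTime);         // fading out
  }
  return 0; // past the end of its life: ready to be recycled
}
```

When this function returns 0 after the decay phase, that is the natural moment to move the particle into the recycle bin.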

Experiment!

It is easy to modify this example to your liking. You simply need to decide how you want the particles to move through 3D space, and set the x, y, and z coordinates of the particles within the code. You can leave the projection computations in the code untouched, and change the rotational speed or remove the rotation completely. Just be aware that this is not the ultra-fast hardware accelerated 3D rendering that you’ll get from WebGL or Flash Stage3D. But for simple 3D drawing, a 2D canvas is sufficient.