We finally got something approaching headtracking working in Processing, using a Wiimote and two cheap IR LEDs fixed to a bar (a kind of home-built sensor bar). We use the OPENGL mode of Processing, along with darwiinremoteOSC and the libraries oscP5 and netP5, to obtain the IR data from darwiinremoteOSC. You can download the sketch here: opengl2.pde.

The first part of the headtracking consists of computing the fx, fy and fz coordinates of the virtual camera (i.e. the viewer's head). The Wiimote's camera tracks the two LEDs and gives us their X and Y coordinates within its field of view; these coordinates are already normalized by darwiinremoteOSC. We first need to isolate the two points out of the data transmitted by the Wiimote:

void ir(float f10, float f11, float f12, float f20, float f21, float f22,
        float f30, float f31, float f32, float f40, float f41, float f42) {
  ir[0] = f10; ir[1]  = f11; ir[2]  = f12;
  ir[3] = f20; ir[4]  = f21; ir[5]  = f22;
  ir[6] = f30; ir[7]  = f31; ir[8]  = f32;
  ir[9] = f40; ir[10] = f41; ir[11] = f42;

  points = 0;
  for (int i = 0; i < 12; i += 3) {  // scan all four (x, y, brightness) slots
    if (ir[i + 2] < 15) {            // brightness of 15 or more means no point
      x[points] = 0.5 - ir[i];
      y[points] = 0.5 - ir[i + 1];
      points++;
    }
  }
}

The oscP5 code calls ir() whenever new IR data is received, passing twelve parameters: four triples of x, y and brightness, one per tracked LED slot. A brightness of 15 or more means no point was recognized in that slot. We isolate the valid points and use the first two entries in x[] and y[] to calculate the distance of the viewer to the screen. The camera has a FOV of PI/4 (45 degrees), which we store in the variable fov. We then calculate the distance dist between the two points: the bigger this distance, the nearer the viewer is to the screen. The half-angle subtended by the bar, as seen from the camera, is fov * dist / 2.0. Using this angle, we can calculate the distance to the viewer relative to the width of the bar (in mm here; the exact calibration hasn't been done yet, it should really be in pixel values).
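The point-isolation step can also be sketched as plain Java, which makes it easy to test outside of Processing. The class and method names here are made up for illustration; the 15-brightness cutoff and the 0.5 offset are taken from the sketch above:

```java
// Sketch of the ir() point isolation as plain Java.
// Each of the 4 slots holds (x, y, brightness); brightness >= 15
// means "no point", matching the darwiinremoteOSC convention above.
public class IrPoints {
    public static int isolate(float[] ir, float[] x, float[] y) {
        int points = 0;
        for (int i = 0; i < 12; i += 3) {   // four (x, y, brightness) slots
            if (ir[i + 2] < 15) {           // valid point in this slot
                x[points] = 0.5f - ir[i];   // center coordinates around 0
                y[points] = 0.5f - ir[i + 1];
                points++;
            }
        }
        return points;
    }

    public static void main(String[] args) {
        // two valid LEDs (brightness 5), two empty slots (brightness 15)
        float[] ir = {0.3f, 0.5f, 5, 0.7f, 0.5f, 5, 0, 0, 15, 0, 0, 15};
        float[] x = new float[4], y = new float[4];
        System.out.println(isolate(ir, x, y));  // prints 2
    }
}
```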

float fov = PI / 4.0;
float barWidth = 150.0; // width of the LED bar in mm

if (points >= 2) {
  float dx = x[0] - x[1];
  float dy = y[0] - y[1];
  float dist = sqrt(dx * dx + dy * dy);
  float angle = fov * dist / 2.0;
  float headDist = (barWidth / 2.0) / tan(angle);
}
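To get a feel for the numbers, the distance formula can be isolated into a small Java helper (the class name and the sample dist values are made up for illustration; the 45-degree FOV and the 150 mm bar width are the values from the snippet above):

```java
// Worked example of the head-distance formula above.
public class HeadDist {
    static final float FOV = (float) (Math.PI / 4.0);  // camera FOV, 45 degrees
    static final float BAR_WIDTH = 150.0f;             // mm between the two LEDs

    // dist is the separation of the two dots in normalized camera coordinates.
    static float headDist(float dist) {
        float angle = FOV * dist / 2.0f;               // half-angle subtended by the bar
        return (BAR_WIDTH / 2.0f) / (float) Math.tan(angle);
    }

    public static void main(String[] args) {
        System.out.println(headDist(0.2f));  // ~953 mm: dots far apart = viewer close
        System.out.println(headDist(0.1f));  // ~1909 mm: dots close together = viewer far
    }
}
```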

Once we have the distance, we can calculate the real camera coordinates. I copied this formula out of Johnny Lee's code and have to admit I can't wrap my head around it at the moment (trigonometry burnout). We average the X and Y positions of the head bar, and then scale them to the real x and y using the distance and the fov. We update the camera position taking into account the axis of the Wiimote (empirically determined; we put the Wiimote buttons-down on the table, since we don't have a stand yet):

float rx = sin(fov * mx) * headDist * 0.5; // mx, my: averaged dot positions
float ry = sin(fov * my) * headDist * 1.5;
fx = -rx;
fy = -ry;
fz = headDist;
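The whole head-position step can be collected into one function; this is a sketch in plain Java with made-up names, where mx and my are the averages of the two tracked points and 0.5 / 1.5 are the empirical axis factors from the snippet above:

```java
// Sketch of the head-position calculation above.
public class HeadPos {
    static final float FOV = (float) (Math.PI / 4.0);  // camera FOV, 45 degrees

    // Returns {fx, fy, fz} from the two tracked points and the head distance.
    // 0.5 and 1.5 are the empirical axis factors for a Wiimote lying
    // buttons-down on the table.
    static float[] headPos(float[] x, float[] y, float headDist) {
        float mx = (x[0] + x[1]) / 2.0f;               // averaged dot positions
        float my = (y[0] + y[1]) / 2.0f;
        float rx = (float) Math.sin(FOV * mx) * headDist * 0.5f;
        float ry = (float) Math.sin(FOV * my) * headDist * 1.5f;
        return new float[] { -rx, -ry, headDist };
    }
}
```

With the head bar centered in the camera's view (mx = my = 0), the viewer sits on the screen axis: fx and fy come out as 0 and fz is simply headDist.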

This is the calculation of the camera position. We now need to update our OpenGL view accordingly. We have to update both the camera position and the frustum of the camera (because we are moving relative to the actual viewing screen).

First we move the camera to fx, fy, fz and make it look straight ahead:

camera(fx, fy, fz, fx, fy, 0, 0, 1, 0);

We use a fixed near plane and shift the frustum coordinates according to our position. The normal frustum, for a viewer at (0, 0, viewing_distance), would be -width/2, width/2, -height/2, height/2.

float near = 20.0;
float angle = radians(60.0); // the viewer's FOV
float facd = (width / 2.0) / tan(angle / 2.0);

float left   = -(width / 2.0) + fx;
float right  =  (width / 2.0) + fx;
float top    = -(height / 2.0) + fy; // Processing's y axis points down
float bottom =  (height / 2.0) + fy;

viewing_distance is empirically determined to be width/2.0, and the viewer's FOV to be 60 degrees. We divide the frustum values by facd / fz, where facd = viewing_distance / tan(FOV / 2), to get the right perspective:

left /= facd / fz;
right /= facd / fz;
top /= facd / fz;
bottom /= facd / fz;

Finally, we scale the whole view to project the coordinates onto our near plane:

left *= near / fz;
right *= near / fz;
top *= near / fz;
bottom *= near / fz;

frustum(left, right, top, bottom, near, 60000);
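Putting the frustum steps together, the shift, the perspective division and the near-plane scaling collapse into one function. This is a sketch in plain Java with made-up names, using the constants from the snippets above (60-degree viewer FOV, near plane at 20):

```java
// Sketch of the combined frustum calculation above.
public class HeadFrustum {
    // Returns {left, right, top, bottom, near} for frustum(), combining the
    // position shift, the facd perspective division and the near-plane scaling.
    static float[] frustum(float fx, float fy, float fz, float width, float height) {
        float near = 20.0f;                          // fixed near plane
        float angle = (float) Math.toRadians(60.0);  // the viewer's FOV
        float facd = (width / 2.0f) / (float) Math.tan(angle / 2.0f);
        float left   = -(width / 2.0f) + fx;
        float right  =  (width / 2.0f) + fx;
        float top    = -(height / 2.0f) + fy;        // Processing's y axis points down
        float bottom =  (height / 2.0f) + fy;
        // divide by facd / fz, then scale by near / fz; the fz terms cancel,
        // leaving an overall factor of near / facd
        float s = (fz / facd) * (near / fz);
        return new float[] { left * s, right * s, top * s, bottom * s, near };
    }
}
```

Note that fz drops out of the scaling: dividing by facd / fz and then multiplying by near / fz is the same as multiplying by near / facd, so only the fx / fy shift depends on the head position here.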

We can then call our main drawing routine which will draw a tunnel and a few targets to make for a real environment.

This is still very beta and I haven't understood all the issues in the code yet, but it looks cool and it's fun to play with.

[…] Wiimote Headtracking in Processing via wesen’s Twitter (follow CDM at Twitter: cdmblogs) […]

Pingback by Create Digital Motion » Preview: Wiimote Headtracking, Now in Processing — August 25, 2008 @ 4:53 pm

[…] has been playing around with [Johnny Lee]'s Wiimote head tracking code. He's posted a preliminary port outlining the code in the Processing environment. It relies on darwiinremoteOSC so you won't […]

Pingback by Wiimote head tracking in Processing — August 26, 2008 @ 1:24 am

Nice job, I’m using SDL + OpenGL + wiiuse library to get some 3D objects controlled by the wii. I’m going to try your equation on my configuration to see if it works.

cheers

Comment by epokh — January 26, 2009 @ 1:05 am
