Wednesday, March 6, 2013

Two PS3 Eyes

I know it's been a long time since my last post. You would be forgiven for thinking that this project had died its predicted death. But you'd be wrong. If anything, I've been working harder on the project since my last post. I haven't written because I've been working so hard, and because I wanted to have something concrete to show you. Well, I don't have anything concrete, but I owe an update anyway.

Cameras

The biggest development was that I bought two cameras. While I had been doing lots of research into very expensive cameras that could provide 1MP resolution at greater than 100fps, I was convinced to go a different way (by an extended family member who has been getting involved -- that's right, a second fool is involved in this project! -- who I'll call Mr. W because I like privacy). So I bought two Playstation Eye cameras. As the name would suggest, they are intended to be used with a Playstation, but they use the ubiquitous USB 2.0 interface, and the open source community has developed drivers for Linux (and other platforms). They are almost like a regular webcam. Their first advantage is that they can output 125fps if you accept a resolution of only 320x240 (or 60fps at 640x480). Their second advantage is that they are cheap -- just $22 from Amazon. So I was convinced that there was nothing to lose in trying them out.

It was a good idea. While I'm not sure that this 320x240 resolution will be sufficient in the end, I am learning a lot without having to pay for expensive cameras yet. And it's possible that 320x240 will be enough. Mr. W argues that with 125 fps, there will be enough observations of the ball for the ambiguity introduced by the big pixels to be averaged out, leading to an accurate path prediction.
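To convince myself that this isn't crazy, I put together a toy simulation of his argument: quantize a straight-line path to whole pixels and fit a line through the samples. It ignores real camera geometry entirely -- the path, the numbers, and the numpy-based fit are all just illustration -- but it does show the averaging-out idea.

# Toy simulation of Mr. W's argument (illustrative only): quantize a
# straight-line path to whole pixels, fit a line through the samples, and
# see how close the fit gets to the true path as the number of frames grows.
import numpy as np

true_slope, true_intercept = 0.37, 12.0   # made-up "true" path, pixels per frame

for n_frames in (5, 25, 125):
    t = np.arange(n_frames)
    true_x = true_slope * t + true_intercept
    observed_x = np.round(true_x)                  # big-pixel quantization
    slope, intercept = np.polyfit(t, observed_x, 1)
    print(n_frames, abs(slope - true_slope), abs(intercept - true_intercept))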

Do the cameras work? Yep. I managed to get them working with guvcview, a Linux webcam program. That software can select the resolution and frame rate and can make snapshots and video recordings. If I run two instances of guvcview, I can run both cameras at the same time. There are some difficulties: if I leave the preview windows for the two cameras live on my desktop while recording, the load on my poor laptop prevents it from processing all the frames. But minimizing those preview windows solves the problem. I also learned that guvcview needs to be restarted every time you change the resolution, frame rate, or output format. The software doesn't suggest that this is necessary, but I couldn't get the changes to take effect without restarting the program. Once you know that, it's no problem.

I even got them to work with OpenCV directly, using its camera calibration program. However, for the most part, it has been easier for my current work to just record two video files and work from those.
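For what it's worth, grabbing frames from both cameras in OpenCV is only a few lines. This is just a sketch of how I'd expect it to look with the Python bindings -- the device indices are whatever the cameras happen to enumerate as, and whether the driver actually honors the 320x240/125fps request is up to it, not OpenCV.

# Sketch of opening both PS3 Eyes with OpenCV's Python bindings. Device
# indices (0 and 1) and whether the FPS request is honored depend on the
# system, so treat this as illustrative rather than a recipe.
import cv2

def open_eye(index, width=320, height=240, fps=125):
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    return cap

left, right = open_eye(0), open_eye(1)
while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    cv2.imshow("left", frame_l)
    cv2.imshow("right", frame_r)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
left.release()
right.release()
cv2.destroyAllWindows()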

Camera Synchronization

One of the downsides of these cameras is that there is no synchronization of the frames between the two eyes. They take 125 frames per second, but that means they could be offset from each other by as much as 4ms (i.e. half of the 8ms frame period, 1000ms/125). So far I haven't found a sure way to determine the offset. Mr. W believes that once you know the offset, you can just interpolate the latest frame with its predecessor to match up with the latest frame from the opposite eye. Maybe. Sounds pretty noisy to me, and we're already starting with a lack of accuracy from our low resolution.
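To make his idea concrete, here is roughly what the interpolation would look like, assuming we already have the ball's pixel coordinates in each frame and somehow know the offset. The function and the numbers are purely illustrative.

# Rough sketch of the interpolation idea: given the ball position in camera
# B's two most recent frames, estimate where it was at the capture time of
# camera A's latest frame. Names and numbers are made up for illustration.
FRAME_PERIOD_MS = 1000.0 / 125.0   # 8ms between frames

def interpolate_position(prev_xy, curr_xy, offset_ms):
    # offset_ms: how long before B's current frame camera A's frame was
    # captured (0 <= offset_ms <= FRAME_PERIOD_MS)
    t = 1.0 - offset_ms / FRAME_PERIOD_MS   # fraction of the way from prev to curr
    return (prev_xy[0] + t * (curr_xy[0] - prev_xy[0]),
            prev_xy[1] + t * (curr_xy[1] - prev_xy[1]))

# Example: camera A's frame arrived 3ms before camera B's current frame.
print(interpolate_position((100.0, 60.0), (108.0, 56.0), offset_ms=3.0))
# -> (105.0, 57.5)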

Even that requires knowing the offset between the cameras to calculate the interpolation. It's possible we could do that in software -- maybe I can get the time each frame arrived at the computer. So far I've only seen "pull" commands to get the most recent frame, which is not conducive to knowing when that frame arrived. I fear that would mean hacking the driver. Or it's possible we could do it with hardware -- like a sync-calibration thingy that moves at a steady speed against a yard stick. I can imagine a motor spinning a clock hand at high speed. As long as it moves at a constant speed around the clock face, we could use the hour markings to measure its progress (which might mean making the clock hand point in both directions to negate gravity during the spinning). But it would have to be faster than a second hand. Ideally, I think it would pass a marking every 8ms or less, and with 12 markings that's 96ms per revolution -- 625 rpm instead of 1 rpm.
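If the spinning-hand gadget ever gets built, turning what the two cameras see into an offset would be simple arithmetic: at 625 rpm the hand sweeps 3.75 degrees per millisecond, so the difference between the angle each camera records converts directly into a time offset. A sketch, with made-up angles:

# Sketch of turning spinning-hand readings into a time offset. The angles
# would come from measuring the hand in each camera's snapshot; the numbers
# here are made up.
RPM = 625.0
DEG_PER_MS = RPM * 360.0 / 60000.0   # 625 rpm -> 3.75 degrees per millisecond

def offset_from_angles(angle_a_deg, angle_b_deg):
    # Time offset in ms between the two cameras' exposures.
    diff = (angle_a_deg - angle_b_deg) % 360.0
    if diff > 180.0:                 # take the shorter way around the dial
        diff -= 360.0
    return diff / DEG_PER_MS

# Example: camera A sees the hand 15 degrees ahead of camera B.
print(offset_from_angles(105.0, 90.0))   # -> 4.0ms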

Actually, there is another way, if I want to get fancy. There are some instructions online for hacking the electronics so that one camera drives the frame rate of the other. It might be easy. But more likely it will end badly: it requires some very fine soldering, and we've seen in a previous post how sub-optimal my soldering is.

Accessories

I bought two cheap tripods to stick these cameras onto. However, the cameras aren't designed for tripods, so they don't have the normal mounting hole. I've been taping them to the tripods, which is working well enough. (Side note: these tripods are horrible. They look nice, are tall, sturdy, and made of light aluminum. But the adjustment screws leave way too much play after they are tightened, making them useless for preserving the orientation of the camera between sessions. Still, they're good enough to hold the cameras off the ground.)


Having introduced these cameras, I'll save my tales of woe for another post. There is indeed more to say here, and there is some minor progress on the building-the-robot front as well.
