With all the talk about the lid camera and the ability to adjust for the fisheye, I thought this was interesting.
I’m designing a traffic signal that uses this system for video detection of vehicles.
It’s a single fisheye camera that is mounted on one of the traffic signal poles at the intersection, usually on a riser above the signal mast arm (where the red/yellow/green lights hang out over the road). It can see the entire intersection and can track objects from horizon to horizon, about 250’ out on each leg.
Most video detection software watches a user-drawn fence on the image; when it detects movement inside that fence, it places a call to the controller. This system still does that, but in addition it tracks every moving object on screen from horizon to horizon. That continuous tracking avoids much of the error other video systems suffer from occlusion. For example, a big truck in the through lane can block the view of the left-turn lane, which both (1) places a false call for the left turn and (2) misses other vehicles that pass through that detection zone while the truck is blocking it.
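Purely as an illustration (this is not the vendor's actual software, and the zone name and coordinates are made up), the basic fence logic is just a point-in-polygon test on each tracked object's centroid, with a call placed for any zone that's occupied:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def calls_for_frame(tracked_centroids, zones):
    """Return the set of zone names that should place a call this frame."""
    calls = set()
    for name, polygon in zones.items():
        if any(point_in_polygon(c, polygon) for c in tracked_centroids):
            calls.add(name)
    return calls

# Hypothetical user-drawn detection zone (pixel coordinates)
zones = {"left_turn": [(100, 200), (160, 200), (160, 300), (100, 300)]}
print(calls_for_frame([(120, 250)], zones))  # a centroid inside the zone places a call
```

The occlusion advantage comes from feeding this function centroids from whole-scene tracking rather than raw motion pixels: a truck covering the zone doesn't generate a phantom centroid inside it, and vehicles the tracker is already following aren't lost just because the zone itself is hidden.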
The part that makes it relevant here is that on the software side you can flatten the image to get a more traditional look, and pan/tilt/zoom in real time on live video. And all with zero moving parts… which means 100 different people could be logged into the same camera and no one user would affect what any other user sees.
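For anyone curious how the flattening works, here's a rough sketch (my own, not the vendor's code) assuming the common equidistant fisheye model, where the distance from the image center is proportional to the angle off the optical axis. To render a flattened view, you map each output pixel back to its source location in the fisheye image:

```python
import math

def fisheye_to_rectilinear(u, v, f_fish, f_rect, cx, cy):
    """Map a pixel (u, v) from a flattened (rectilinear) view back into
    an equidistant fisheye image: r_fish = f_fish * theta, where theta
    is the angle from the optical axis. Assumes both images share the
    same center (cx, cy); focal lengths are hypothetical parameters."""
    dx, dy = u - cx, v - cy
    r_rect = math.hypot(dx, dy)
    if r_rect == 0:
        return cx, cy          # the optical axis maps to itself
    theta = math.atan2(r_rect, f_rect)   # ray angle for this output pixel
    r_fish = f_fish * theta              # equidistant projection
    scale = r_fish / r_rect
    return cx + dx * scale, cy + dy * scale
```

Since every output pixel is computed independently from the same source frame, each logged-in user can run their own virtual pan/tilt/zoom without moving anything physical, which is why 100 viewers don't step on each other.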
It’s a pretty amazing piece of tech which shares a little bit on a very fringe level with