Satellite imagery has become part of our everyday lives through applications like Google Maps. However, the current technology involves capturing tons of high-resolution images and stitching them together to form one larger image. Not only does this create a huge amount of work to precisely align the images, it also leaves live-action surveillance susceptible to drop-outs as subjects move between cameras (yeah, I’ve seen 24 too).
It turns out that a team from Sony and the University of Alabama is working on an imaging system that can capture a huge area with a single camera. The imaging system would be built up from a large array of light-sensitive chips, all placed in the focal plane of a large multiple-lens system. The end result doesn’t look that much different from the compound eye of an insect.
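To get a feel for the mosaic focal plane idea, here’s a minimal sketch of how an array of identical sensor tiles could be addressed as one big virtual image. The tile size, the gap-free butting, and the function names are my own assumptions for illustration, not details from the patent.

```python
# Illustrative only: treat a grid of identical sensor chips as one large
# virtual image. Real designs have per-lens distortion and gaps between
# chips; this sketch assumes perfectly butted, identical tiles.

TILE_W, TILE_H = 4096, 4096     # pixels per sensor chip (assumed)
GRID_COLS, GRID_ROWS = 8, 8     # 8 x 8 tiles -> roughly a gigapixel

def tile_to_global(tile_col: int, tile_row: int, x: int, y: int) -> tuple[int, int]:
    """Map a pixel (x, y) on one sensor tile to coordinates in the
    combined virtual image, assuming tiles butt together edge to edge."""
    return tile_col * TILE_W + x, tile_row * TILE_H + y

def global_to_tile(gx: int, gy: int) -> tuple[int, int, int, int]:
    """Inverse mapping: which tile, and where on it, holds global pixel (gx, gy)."""
    return gx // TILE_W, gy // TILE_H, gx % TILE_W, gy % TILE_H

if __name__ == "__main__":
    total = TILE_W * TILE_H * GRID_COLS * GRID_ROWS
    print(f"{GRID_COLS}x{GRID_ROWS} tiles -> {total / 1e9:.2f} gigapixels")
    print(tile_to_global(3, 5, 100, 200))   # tile (3, 5), local pixel (100, 200)
    print(global_to_tile(12388, 20680))     # and back again
```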
One major advantage of the single-camera approach is that near-real-time images could be transmitted to ground personnel without the overhead of joining multiple images together. This approach would also allow for recording sequential images (the current design could support a rate of up to 4 frames per second).
According to the team’s recently published patent application, the camera could image an area of up to 10 square kilometers from an altitude of 7.5 kilometers. Its gigapixel sensor array would allow it to resolve detail as fine as 50 centimeters per pixel from that height.
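For a quick back-of-envelope feel for those figures, here’s a short sketch that plays with the numbers quoted above. The 8-bit-per-pixel assumption and the simple square-tiling math are mine, not anything stated in the patent application.

```python
def pixels_for_coverage(area_km2: float, gsd_m: float) -> float:
    """Pixels needed to tile area_km2 at gsd_m meters per pixel."""
    area_m2 = area_km2 * 1e6          # 1 km^2 = 1,000,000 m^2
    return area_m2 / (gsd_m ** 2)

def raw_data_rate_mb_s(pixels: float, fps: float, bytes_per_pixel: int = 1) -> float:
    """Uncompressed readout in MB/s, assuming 8-bit pixels (an assumption)."""
    return pixels * bytes_per_pixel * fps / 1e6

if __name__ == "__main__":
    # Figures quoted in the post: 10 km^2 coverage, 50 cm/pixel, 4 fps.
    px = pixels_for_coverage(10.0, 0.5)
    print(f"{px / 1e6:.0f} megapixels to cover 10 km^2 at 0.5 m/pixel")
    # What a full gigapixel sensor read out at 4 fps would produce, raw:
    print(f"{raw_data_rate_mb_s(1e9, 4):.0f} MB/s raw at 4 fps for 1 gigapixel")
```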