Most of them are 360° “Stereographic” panoramas, such as the examples above. These are the sort that take a wide view and wrap the horizon around until it meets itself – resulting in an image that looks like a “little planet”.
Shooting the Source Images
Typically I create these panoramas by combining 24 separate digital photographs, covering every angle and with bracketed exposures. I shoot the source images in Raw mode on a Canon EOS 5D with a 16mm fisheye lens, producing a set of 12-megapixel images. The camera is mounted in portrait mode (that is, image taller than it is wide) and attached to the tripod via a specialist panorama bracket, to prevent parallax errors between the overlapping shots.
With the camera on a panorama head that has been adjusted so the camera rotates around the lens's no-parallax point (often loosely called the nodal point), objects in the scene keep the same relationship to each other in adjacent images, and so the images will (have the potential to) make a pixel-perfect match.
(I chose my words carefully in the last paragraph, because no lens is perfect. The edge of an image is rarely perfectly sharp or straight. Typically the image will either bulge out (known as “barrel distortion”) or be slightly concave (known as “pin-cushion distortion”). Even with a good lens, where these distortions are invisible to the human eye, matching adjacent images will highlight the slightest flaw. So more processing is required by the panorama-stitching program to compensate for the distortions at the image edges.)
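These edge distortions are usually modelled as a polynomial in the distance from the image centre. Here is a minimal sketch of a PanoTools-style radial correction; the exact polynomial and coefficient conventions Hugin uses may differ, and the `a`, `b`, `c` coefficients here are purely illustrative:

```python
# A PanoTools-style radial lens-correction polynomial -- a sketch only;
# the exact polynomial and sign conventions Hugin uses may differ.
# Radii are normalised so that r = 1 is the image edge.

def corrected_radius(r, a=0.0, b=0.0, c=0.0):
    # d is chosen so that the image edge (r = 1) maps to itself;
    # a, b and c bend the radii in between, modelling barrel or
    # pin-cushion distortion.
    d = 1.0 - a - b - c
    return r * (a * r**3 + b * r**2 + c * r + d)

# With all coefficients zero the mapping is the identity:
print(corrected_radius(0.5))  # 0.5
```

The stitcher's optimiser estimates these coefficients for the lens while it aligns the control points, then undoes the distortion before blending.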
I take the 12-megapixel Raw images and pre-process them into (16-bit colour) TIFF images using Apple’s Aperture. The TIFFs are then stitched and blended together into a rectangular panorama using a program called “Hugin”. The resulting image will usually be 12,000 × 6,000 pixels as a 16-bit colour TIFF: the equivalent of a single 72-megapixel image. So they can be printed the size of a wall and still show very fine detail!

So, to recap: six horizontal shots are taken at 60° intervals, with a generous overlap between images, plus one shot straight up at the sky and one downward shot of the ground. Each “shot” actually consists of three bracketed exposures, ranging from under-exposed by 2 stops to over-exposed by 2 stops. Hence (6 horizontal + 1 down + 1 up) × 3 = 24 shots altogether.
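The shot-count and output-size arithmetic above can be restated as a few lines of Python (variable names are mine, just re-deriving the numbers in the text):

```python
# Restating the shot-count and output-size arithmetic from the text.
horizontal_shots = 6   # one every 60 degrees
up_shots = 1           # the sky
down_shots = 1         # the ground
brackets = 3           # -2 EV, 0 EV and +2 EV at each position

total_frames = (horizontal_shots + up_shots + down_shots) * brackets
print(total_frames)    # 24

# The stitched panorama covers 360 x 180 degrees, so it is twice as
# wide as it is tall.
width, height = 12_000, 6_000
print(width * height // 1_000_000)  # 72 (megapixels)
```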
My Panorama Stitching Software – Hugin
Hugin was named after one of Odin’s ravens in Norse mythology, which could fly around the whole world in a single day and come back to give its master a complete picture of what it had seen. It is a combined GUI (Graphical User Interface) and work-flow management system that provides access to a suite of free, open-source programs for creating high-quality panoramas.
Hugin is based on the original work of Prof. Helmut Dersch, who in 1998 released his PanoramaTools suite of programs. Although originally based on Dersch’s work, and still largely following his approach, I think all of the original tools have been replaced by new ones in the modern Hugin system.

In Dersch’s original Panorama Tools (and in the early versions of Hugin), the photographer had to identify a set of “Control Points” for each overlapping pair of images. Each pair of points represented the photographer explaining to the programs: “this spot in this left-hand image represents the same place in the real world as this spot in the right-hand image”. This had to be done repeatedly, for 10 or perhaps as many as 20 points, for each seam between two adjacent images.
When 24 images are to be aligned, selecting control points one-by-one would be a long process. The programs in the Hugin tool-chain still make use of the control point idea – but there are now tools to automate the process of finding the matching control points. The photographer’s role is now simply to identify which images are adjacent to which and to filter out any obviously mistaken points placed by the automated system. (For example, a building with lots of identical window frames across an image-join can confuse a program where a person would see the obvious differences.)
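A control point is, at heart, just a pair of pixel coordinates asserting that two images show the same real-world spot. A minimal sketch of that idea, with a hypothetical outlier filter standing in for the manual clean-up step (these are not Hugin's actual data structures):

```python
from dataclasses import dataclass

@dataclass
class ControlPoint:
    """'This spot in the left image is the same real-world place
    as this spot in the right image.'"""
    left_x: float
    left_y: float
    right_x: float
    right_y: float

def prune_outliers(points, errors, max_error=3.0):
    # Keep only points whose alignment error (in pixels, as reported
    # after optimisation) is small -- the filtering of mistaken
    # matches described above, done mechanically.
    return [p for p, e in zip(points, errors) if e <= max_error]

good = ControlPoint(120.0, 45.0, 880.0, 47.0)
bad = ControlPoint(300.0, 60.0, 310.0, 500.0)  # a mismatched window frame
print(len(prune_outliers([good, bad], [0.8, 42.0])))  # 1
```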
Once the control points are in place, Hugin uses a program called “Nona” to stretch and align each image, compensating for lens features such as barrel and pin-cushion distortion so that it lines up with its neighbours.
Then it uses a program called “Enblend” to smooth the seams between the 8 shots making up one panorama, and another program, “Enfuse”, to combine the multiple exposures and produce the High Dynamic Range (HDR) panorama. The result is a 360°-wide by 180°-tall “equi-rectangular” panoramic image, such as this one:
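The heart of what Enfuse does is a per-pixel weighted average across the bracketed exposures, favouring well-exposed pixel values. A heavily simplified sketch of that idea (the Gaussian weight function here is an assumption, far cruder than Enfuse's real machinery):

```python
import math

def well_exposedness(v, sigma=0.2):
    # v is a pixel value in [0, 1]; the weight peaks at mid-grey,
    # so blown-out and near-black samples count for little.
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(pixel_values):
    """Fuse the same pixel sampled from each bracketed exposure."""
    weights = [well_exposedness(v) for v in pixel_values]
    return sum(v * w for v, w in zip(pixel_values, weights)) / sum(weights)

# Under-, correctly- and over-exposed samples of one scene point:
print(round(fuse([0.05, 0.45, 0.95]), 3))
```

The fused value lands close to the well-exposed middle sample, which is exactly the behaviour you want from the bracketed set.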
Often there will still be some “imperfections” visible at this stage. If anything was moving when the images were shot, particularly if it moved across the edge of one of the source images, it will be difficult, if not impossible, for a program to match the join between images properly.
Grass or trees moving in the breeze can cause the stitching software a great deal of confusion. Similarly, people who walked into the edge of one shot may have completely left the field of view by the time the photographer took the next. No amount of automated image matching will solve that sort of problem! So often the photographer has to settle for the best the automated system can manage, and then clean up the “imperfections” by hand, matching content from the stretched images in a program such as Photoshop. (Masking is another option … but I will cover that in a separate journal entry.)
An extreme example such as this one:
where the crowd were dancing and waving in time to the music, might require days of work in Photoshop to match, by hand, images that no automated system could get right. (Again, that by-hand repair is a subject for another time.)
Finally the cleaned-up equi-rectangular panorama is taken into a fresh Hugin session, in which it is rendered as the characteristic “little planet” shape using a “Stereographic transformation”. The equi-rectangular image of the shoreline above would end up as a stereographic panorama like this:
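Conceptually, the stereographic transformation maps each output pixel back onto the sphere and samples the equirectangular image at that direction. A minimal sketch of that inverse mapping (a simplification; Hugin's actual remapping handles interpolation and much more):

```python
import math

def little_planet_sample(X, Y, R=1.0):
    """Inverse stereographic mapping: take an output-plane point
    (X, Y), with the origin at the image centre, and return the
    (azimuth, angle-from-nadir) direction whose equirectangular
    pixel should be sampled. R sets the 'planet' radius."""
    r = math.hypot(X, Y)
    theta = 2.0 * math.atan2(r, 2.0 * R)  # 0 at the centre (the ground)
    phi = math.atan2(Y, X)                # compass direction of view
    return phi, theta

# The centre of the output is the point straight below the tripod:
print(little_planet_sample(0.0, 0.0))  # (0.0, 0.0)
```

Points further from the centre map to directions higher above the horizon, approaching straight up only at infinite radius, which is why the sky wraps around the outside of the “planet”.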
I use a 16mm fisheye lens. Any very wide-angle lens would do, but the longer the focal length, the more shots it takes to get all the way around the 360°. If the vertical angle of view (with the lens in portrait mode) is less than about 95°, you would need to shoot more than one horizontal row of images, and that means more than twice as many seams to stitch and blend. So a 24mm wide-angle lens is probably usable; a 35mm would mean a LOT more effort.
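For an ordinary rectilinear wide-angle lens, the trade-off between focal length and shots per row can be estimated like this (the 25% overlap between neighbouring frames is my working assumption, and a fisheye follows a different projection, so this is only a rough guide):

```python
import math

def rectilinear_aov(sensor_mm, focal_mm):
    # Angle of view (degrees) across one sensor dimension for an
    # ordinary rectilinear lens. A fisheye follows a different
    # projection, so this does not describe my 16mm.
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def shots_per_row(focal_mm, sensor_width=24.0, overlap=0.25):
    # Portrait orientation: the full-frame sensor's short (24 mm)
    # side spans the horizontal direction.
    aov = rectilinear_aov(sensor_width, focal_mm)
    return math.ceil(360.0 / (aov * (1.0 - overlap)))

print(shots_per_row(24))  # 10 -- workable
print(shots_per_row(35))  # 13 -- a lot more stitching
```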
Related to the issue of lens focal length is the size of the sensor in the camera body. The deciding factor in my choice of a Canon EOS 5D was its “full-size” sensor. That is, the sensor in the 5D measures 36mm × 24mm, the same size as a single frame in a traditional roll of 35mm film.
The consequence of this is that the full width of the image produced by the lens is recorded by the sensor. There are very few full-size-sensor cameras on the market. For example, most of the other Canon digital SLR cameras have a smaller sensor. They are said to have a “multiplication factor of 1.6”, meaning that such a camera with my 16mm lens mounted on it would record only a central part of the image, as if it were using a 26mm lens (16 × 1.6 ≈ 26). Similarly, a 24mm lens on a camera with a smaller sensor would deliver the equivalent of a 38mm lens (24 × 1.6 ≈ 38) on a full-frame camera.
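The crop-factor arithmetic from the paragraph above can be written down directly (a trivial sketch; the function name is mine):

```python
def full_frame_equivalent(focal_mm, crop_factor=1.6):
    # Field of view a lens gives on a cropped sensor, expressed as
    # the focal length that would match it on a full-frame body.
    return focal_mm * crop_factor

print(round(full_frame_equivalent(16), 1))  # 25.6 -- the "about 26mm" above
print(round(full_frame_equivalent(24), 1))  # 38.4 -- the "about 38mm" above
```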
I have tried to outline the process of panorama stitching. I have probably left some aspects unexplained or used some unexplained jargon without realising. So do feel free to ask me to explain any of it that is still not clear.