Monday, July 27, 2009

Fake security camera

Fake security cameras, or dummy cameras, are non-functional surveillance cameras designed to fool intruders, or anyone whom they are supposedly watching. These cameras are intentionally placed in a noticeable position, so that passers-by notice them and believe the area to be monitored by CCTV.

The cheapest fake security cameras can be recognized by their lack of real lenses (the "lenses" are just an opaque piece of plastic). Other fake cameras include broken real cameras, motion sensors disguised as cameras, or empty camera housings. They may have flashing lights, or a motor to simulate pan-tilt motion.

Since dummy cameras are non-functional, they are generally used in environments where the only need for a security camera is to deter minor theft and vandalism, such as small businesses like restaurants and convenience stores. Professional thieves have the experience to recognize a dummy camera, so dummy cameras do little to deter them.

Dummy cameras are also used to augment real surveillance systems to increase the deterrent effect at a minimal additional cost. Many camera vendors offer dummy cameras that look identical to the real ones they sell. A typical camera kit may include four real cameras and four dummies. The subjects being monitored are likely to assume that all of the cameras are real.

Digital video cameras

Digital video cameras do not require a video capture card because they produce a digital signal that can be saved directly to a computer. The signal is compressed 5:1, but DVD quality can be achieved with more compression: MPEG-2, the standard for DVD-Video, has a higher compression ratio than 5:1, with at best a slightly lower video quality, and is adjustable to trade the amount of storage space used against the picture quality needed or desired. The highest picture quality of DVD is only slightly lower than the quality of basic 5:1-compressed DV.
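As a rough sanity check on that 5:1 figure, the arithmetic can be sketched as follows. This is an illustrative calculation using nominal NTSC DV numbers (720x480 frames, 4:1:1 chroma subsampling, 30 frames per second), not an exact specification:

```python
# Rough bitrate arithmetic for 5:1 DV compression (illustrative figures).
WIDTH, HEIGHT = 720, 480      # nominal NTSC DV frame size
FPS = 30                      # nominal frame rate
BYTES_PER_PIXEL = 1.5         # 4:1:1 chroma subsampling: 12 bits per pixel

raw_bytes_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
raw_mbit = raw_bytes_per_sec * 8 / 1_000_000   # uncompressed bitrate in Mbit/s
dv_mbit = raw_mbit / 5                         # after 5:1 compression

print(f"uncompressed: {raw_mbit:.1f} Mbit/s, after 5:1: {dv_mbit:.1f} Mbit/s")
```

The result, roughly 25 Mbit/s after compression, is the familiar DV data rate.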

Saving uncompressed digital recordings takes up an enormous amount of hard drive space, and a few hours of uncompressed video can quickly fill a hard drive. Occasional recordings, such as holiday footage, may be fine uncompressed, but one could not run uncompressed-quality recordings on a continuous basis. Motion detection is therefore sometimes used as a workaround, so that uncompressed-quality recording happens only when something moves in the scene.
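The arithmetic behind that claim can be sketched as follows (an illustrative calculation assuming PAL-size frames with 4:2:2 sampling; exact figures vary by format):

```python
# How fast uncompressed standard-definition video fills a disk (illustrative).
WIDTH, HEIGHT = 720, 576      # PAL frame size
FPS = 25                      # PAL frame rate
BYTES_PER_PIXEL = 2           # 4:2:2 sampling: 16 bits per pixel

bytes_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
gb_per_hour = bytes_per_sec * 3600 / 1e9

print(f"~{gb_per_hour:.0f} GB per hour of uncompressed video")
```

At roughly 75 GB per hour, a few hours of footage really would fill the hard drives typical of the time.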

However, in any situation where standard-definition video cameras are used, the quality is limited, because the image chips in most of these devices have a maximum resolution of about 320,000 pixels (analogue quality is measured in TV lines, but the practical results are the same). They generally capture alternating horizontal fields of lines and blend them together into a single frame, and the maximum frame rate is normally 30 frames per second.

That said, multi-megapixel IP-CCTV cameras are coming onto the market. They are still quite expensive, but they can capture video images at resolutions of 1, 2, 3, 5 and even up to 11 megapixels. Unlike with analogue cameras, details such as number plates are easily readable. At 11 megapixels, forensic-quality images are produced in which each hand of a person can be distinguished. Because of the much higher resolutions available with these types of cameras, a single camera can be set up to cover a wide area that would normally have needed several analogue cameras.

Closed-circuit television camera

A closed-circuit television camera can record straight to a video tape recorder, which stores the analogue signal as pictures on tape. If the analogue signal is recorded to tape, the tape must run at a very slow speed in order to operate continuously: to allow a 3-hour tape to run for 24 hours, it must be set to record on a time-lapse basis, usually about 4 frames a second. In one second the camera scene can change dramatically; a person, for example, can walk a distance of 1 metre, so if that second is divided into 4 frames, or 'snapshots' in time, each frame invariably looks like a blur unless the subject keeps relatively still.
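The time-lapse figure quoted above works out as follows (a quick sketch, assuming a tape that normally holds 3 hours at the NTSC rate of 30 frames per second):

```python
# Stretching a 3-hour tape over 24 hours of recording (illustrative).
NORMAL_FPS = 30               # nominal NTSC frame rate
TAPE_HOURS = 3                # tape capacity at normal speed
RECORD_HOURS = 24             # desired continuous coverage

total_frames = TAPE_HOURS * 3600 * NORMAL_FPS
timelapse_fps = total_frames / (RECORD_HOURS * 3600)

print(f"effective rate: {timelapse_fps:.2f} frames per second")
```

The effective rate of 3.75 frames per second matches the "about 4 frames a second" quoted in the text.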

Analogue signals can also be converted into a digital signal so that the recordings can be stored on a PC. In that case the analogue video camera must be plugged directly into a video capture card in the computer, and the card then converts the analogue signal to digital. These cards are relatively cheap, but the resulting digital signal is inevitably compressed, typically around 5:1 using MPEG compression, in order for the video recordings to be saved on a continuous basis.

Another way to store recordings on a non-analogue media is through the use of a digital video recorder (DVR). Such a device is similar in functionality to a PC with a capture card and appropriate video recording software. Unlike PCs, most DVRs designed for CCTV purposes are embedded devices that require less maintenance and simpler setup than a PC-based solution, for a medium to large number of analogue cameras.

Some DVRs also allow digital broadcasting of the video signal, thus acting like a network camera. If a device does allow broadcasting of the video, but does not record it, then it's called a video server. These devices effectively turn any analogue camera (or any analogue video signal) into a network camera.

Digital still cameras

Digital still cameras can be purchased in any high street shop and can take excellent pictures in most situations.

The pixel resolution of current models has easily reached 7 million pixels (7 megapixels). Some point-and-shoot models, such as those produced by Canon or Nikon, boast resolutions in excess of 10 million pixels.

At these resolutions, and with high shutter speeds such as 1/125th of a second, it is possible to take JPEG pictures on a continuous or motion-detection basis that will capture not only anyone running past the camera scene, but even the faces of those driving past.

These cameras can be plugged into the USB port of any computer (most of them now have USB capability), and pictures can be taken of any camera scene. All that is necessary is for the camera to be mounted on a wall bracket and pointed in the desired direction.

Modern digital still cameras can take 500 KB snapshots in the space of one second, and these snapshots are then automatically downloaded by the camera software straight to the computer for storage as timed and dated JPEG files. The images themselves don't need to stay on the computer for long: if the computer is connected to the Internet, the images can automatically be uploaded to any other computer anywhere in the world as the pictures are taken.
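At one 500 KB snapshot per second, the storage demand is modest compared with uncompressed video. A quick sketch of the daily total:

```python
# Daily storage for one 500 KB JPEG snapshot per second (illustrative).
SNAPSHOT_KB = 500
SECONDS_PER_DAY = 24 * 3600

gb_per_day = SNAPSHOT_KB * 1000 * SECONDS_PER_DAY / 1e9
print(f"~{gb_per_day:.1f} GB per day of continuous snapshots")
```

Around 43 GB per day of continuous shooting; with motion detection the figure would be far lower still.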

The user doesn't need to lift a finger except to simply plug the camera in and point it in the desired direction. The direction could just as easily be the street outside a house, or the entrance to a bank or underground station.

Digital still cameras are now being made with in-built wireless connectivity, so that no USB cable is required; images are simply transmitted wirelessly through walls or ceilings to the computer.

Network cameras

IP cameras, or network cameras, combine an analogue or digital video camera with an embedded video server that has its own IP address and is capable of streaming the video (and sometimes even audio) over a network.

Because network cameras are embedded devices and do not need to output an analogue signal, resolutions higher than those of analogue CCTV cameras are possible. A typical analogue CCTV camera has a PAL (768x576 pixels) or NTSC (720x480 pixels) resolution, whereas network cameras may have VGA (640x480 pixels), SVGA (800x600 pixels) or quad-VGA (1280x960 pixels, also referred to as 'megapixel') resolutions.
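Multiplying out these pixel counts makes the comparison concrete (a quick sketch):

```python
# Total pixel counts for the common analogue and network-camera resolutions.
resolutions = {
    "PAL":      (768, 576),
    "NTSC":     (720, 480),
    "VGA":      (640, 480),
    "SVGA":     (800, 600),
    "quad-VGA": (1280, 960),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name:9s} {count:>9,} pixels")
```

As the name suggests, quad-VGA carries exactly four times as many pixels as VGA, which is why the 'megapixel' label is used.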

An analogue or digital camera connected to a video server acts as a network camera, but the image size is restricted to that of the video standard of the camera. However, optics (lenses and image sensors), not video resolution, are the components that determine the image quality.

Network cameras can be used for very cheap surveillance solutions (requiring one network camera, some Ethernet cabling, and one PC), or to replace entire CCTV installations (cameras become network cameras, tape recorders become DVRs, and CCTV monitors become computers with TFT screens and specialised software. Digital video manufacturers claim that turning CCTV installations into digital video installations is inherently better).

There continues to be much debate over the merits and price-for-performance of IP cameras as compared to analogue cameras. Many in the CCTV industry claim that analogue cameras can outperform IP cameras at a lower price.

Closed-circuit television

Closed-circuit television (CCTV) is the use of video cameras to transmit a signal to a specific place, on a limited set of monitors.

It differs from broadcast television in that the signal is not openly transmitted, though it may employ point to point wireless links. CCTV is often used for surveillance in areas that may need monitoring such as banks, casinos, airports, military installations, and convenience stores.

In industrial plants, CCTV equipment may be used to observe parts of a process from a central control room; when, for example, the environment is not suitable for humans. CCTV systems may operate continuously or only as required to monitor a particular event. A more advanced form of CCTV, utilizing Digital Video Recorders (DVRs), provides recording for possibly many years, with a variety of quality and performance options and extra features (such as motion-detection and email alerts).

Surveillance of the public using CCTV is particularly common in the UK, where there are reportedly more cameras per person than in any other country in the world.[1] There and elsewhere, its increasing use has triggered a debate about security versus privacy.

History

The first CCTV system was installed by Siemens AG at Test Stand VII in Peenemünde, Germany in 1942, for observing the launch of V-2 rockets.[2] The noted German engineer Walter Bruch was responsible for the design and installation of the system.

CCTV recording systems are still often used at modern launch sites to record the flight of the rockets, in order to find the possible causes of malfunctions,[3][4] while larger rockets are often fitted with CCTV allowing pictures of stage separation to be transmitted back to earth by radio link.[5]

In September 1968, Olean, New York was the first city in the United States to install video cameras along its main business street in an effort to fight crime.[citation needed] The use of closed-circuit TV cameras piping images into the Olean Police Department propelled Olean to the forefront of crime-fighting technology.

The use of CCTV later on became very common in banks and stores to discourage theft, by recording evidence of criminal activity. Their use further popularised the concept. The first place to use CCTV in the United Kingdom was King's Lynn, Norfolk.[6]

In recent decades, especially with general crime fears growing in the 1990s and 2000s, public space use of surveillance cameras has taken off, especially in some countries such as the United Kingdom.

Uses

Crime prevention and prevalence in the UK

Outside government special facilities, CCTV was developed initially as a means of increasing security in banks. Experiments in the UK during the 1970s and 1980s (including outdoor CCTV in Bournemouth in 1985) led to several larger trial programs later that decade.[6]

These were deemed successful in the government report "CCTV: Looking Out For You", issued by the Home Office in 1994, and paved the way for a massive increase in the number of CCTV systems installed. Today, systems cover most town and city centres, and many stations, car-parks and estates.

The exact number of CCTV cameras in the UK is not known, but a 2002 working paper by Michael McCahill and Clive Norris of UrbanEye,[7] based on a small sample in Putney High Street, estimated the number of surveillance cameras in private premises in London at around 500,000 and the total number of cameras in the UK at around 4,200,000.

According to their estimate the UK has one camera for every 14 people, although it has been acknowledged that the methodology behind this figure is somewhat dubious.[8] The CCTV User Group estimates that there are around 1.5 million CCTV cameras in city centres, stations, airports, major retail areas and so forth. This figure does not include the smaller surveillance systems such as those that may be found in local corner shops.[9]

However, there is little evidence that CCTV deters crime.[10] According to a Liberal Democrat analysis, in London "Police are no more likely to catch offenders in areas with hundreds of cameras than in those with hardly any."[11] A 2008 report by UK police chiefs concluded that only 3% of crimes were solved by CCTV.

Cameras have also been installed in taxis in the hope of deterring violence against drivers,[13][14] and in mobile police surveillance vans.[15] In some cases CCTV cameras have become a target of attacks themselves.[16] Middlesbrough council have recently installed "Talking CCTV" cameras in their busy town-centre.[17] It is a system pioneered in Wiltshire, which allows CCTV operators to communicate directly with the offenders they spot.[18]

How Does a Closed Circuit TV Work?

Features
Closed circuit TV systems can be found in a variety of settings, ranging from banks and stores to airports and military installations. Many homes also employ a closed circuit system, so home monitoring systems will be the focus of this article. Surveillance has become a needed presence within today's society, and closed circuit TV systems serve this need well.

The most basic of systems will include a video camera and a monitor. The camera acts as the input device, recording any activities taking place within the space in need of surveillance. The monitor receives this input and displays the recorded activities. Standard surveillance cameras do not come with a lens, because the choice of lens depends on the angle of view required for any one scene. However, the camera's screw mount for the lens is a standard thread, so most lens types will fit.

The monitor itself is quite similar to a TV set minus the tuning circuits. If multiple cameras are hooked up to the monitor, a switcher control will allow you to rotate through the areas under surveillance, or maintain input from a single camera. Coaxial cables, telephone wires, fiber optic strands and microwave radio systems are all used as connections running from camera to monitor. The type of system you have in place will determine which connector is needed.

Equipment Options

When selecting a camera for a closed circuit system, lighting considerations will determine which camera is best suited. Camera classifications fall into three categories: general purpose, low lux and color cameras.

Brightly-lit areas will accommodate all three camera types. However, the general purpose and color cameras require bright lighting in order to render a clear picture. Low lux cameras are known for their ability to render dark settings in visible and lighter hues. The drawback with low lux cameras is that they only provide a black and white output.

Monitor selection deals primarily with size differences. What size monitor to use depends on the viewing distance. A 9-inch monitor will suffice for viewing distances of 14 to 16 inches. Distances of 36 to 50 inches require a 12-inch monitor. Distances of 50 to 76 inches require a 15 to 19-inch monitor.

Switcher controls are only needed if multiple cameras are hooked up to a single monitor. Switcher controls come in four categories. Manual Passive Switchers are the least expensive option, with a single mechanical switch for rotating through camera views. Homing Sequential Switchers include timer options which allow interval rotation through each camera. Bridging Sequential Switchers work the same as the homing sequential option, with an additional monitor connection; the second monitor can be set to survey one area in particular while the first monitor rotates through scenes from the other cameras. Alarm Programming Sequential Switchers perform the same function as the bridging sequential switcher while providing terminal connectors to each camera. Rotation functions can be set in intervals, or programmed in response to a motion sensor signal.


Lens Requirements

A clear rendition of the area under surveillance is a primary objective in setting up a closed circuit TV system. As lenses must be purchased separately, there are a couple of things to keep in mind when making a selection. Camera lens options vary in focal length, zoom capabilities, iris control and spot filtering.

Focal length and zoom capability will determine how much area the camera will be able to cover. A long focal length provides detailed views at a distance, whereas a short focal length gives a wide view of the immediate scene. The addition of a manual or motorized zoom lens feature provides a closer examination of selected scene details.

Iris control has to do with the amount of light that enters through the center of the lens. This feature is necessary if you're using a low light level camera. Manual and automatic control options are available. The spot filter feature is also needed when using a low light level camera. Spot filters work in conjunction with the iris control when the lighting in a setting is dim, filtering the available light across the field of vision.

Silicon Imaging Oscar Filmmaking Digital Cinema Cameras go 3D

Silicon Imaging, the company that enabled the digital shooting of this year's Oscar-winning Best Picture, Slumdog Millionaire, is now changing the face of stereo-3D cinematography and production. The company unveiled the world's first integrated 3D cinema camera and stereo visualization system at NAB 2009. The SI-3D shoots uncompressed raw imagery from two synchronized cameras and encodes directly to a single stereo CineFormRAW QuickTime file, along with 3D LUT color and convergence metadata. The stereo file can be instantly played back and edited in full 3D on an Apple Final Cut timeline, without the need for proxy conversions.

Traditionally, 3D content was captured from two independent left and right cameras, each with its own settings, color controls, record start, timecode, content management and monitoring outputs. A variety of complex devices would be used to synchronize the recordings or combine the outputs for viewing. The content would then have to go through a tedious process of being ingested or converted to formats compatible with the editing or grading systems, matched up from the independent left and right sources, flipped if the shot was on a beam splitter and the timeline adjusted to have the first frame overlapped. A color grade could then be applied, convergence adjusted and finally a stereo image viewed for dailies playback.

“The SI-3D camera system streamlines the entire stereo-3D content acquisition and post production process;” states Ari Presler, CEO of Silicon Imaging. “Combining two cameras into a single control, processing and recording platform enables shooting and instant playback like a traditional 2D camera with the added tools needed on-set to analyze and adjust the lighting, color, flip orientation and stereo depth effects. In post, a unified stereo file plus associated metadata can be immediately graded for dailies, edited, and viewed in either 2D or 3D.”

The SI-3D system uses two remote SI-2K Mini cameras with a P+S interchangeable lens mount connected to a single processing system via gigabit Ethernet, where they are synchronized and controlled through the familiar SiliconDVR touchscreen interface. On-set, each camera can be viewed individually or in stereo mixed modes using modern 3D LCD and DLP displays. Various tools are used to visualize and adjust the focus, lighting and 3D effects, including alignment grid overlays, false color zebras, digital zooming, edge detection, spot meters, dual histograms, parallax shifts, anaglyph mixing and wiggle displays.

Unlike modern HD cameras, which develop and compress colorized imagery, the SI system captures raw “digital negatives” where they are non-destructively developed and colorized for preview using the cinematographer's desired "look" for the scene. This color metadata, along with stereo convergence, flip orientation from beam splitter rigs and alignment data are encoded into a single CineFormRAW QuickTime stereo file. These files can be edited directly in Apple Final Cut without the need for conversion or rendering. With the addition of CineForm’s Neo3D, convergence plus stereo or individual eye color adjustments can be dynamically controlled and modified, while viewing live 3D playback using side-by-side, over-under, or interlaced output modes.

“Driven by increasing numbers of 3D film projects planned by Hollywood studios, the demand for efficient 3D camera and post workflows has increased significantly in the last two years,” said David Taylor, CEO of CineForm, Inc. “The combination of the Silicon Imaging SI-3D camera with CineForm high-fidelity compression-based 3D workflow will significantly reduce overall project complexity and costs.”

"The Silicon Imaging camera's form factor and flexible lens mounting system enable us to develop innovative lightweight beam-splitter and parallel rigs to shoot steadicam and hand-held stereo footage with incredible latitude and film-like results," stated Max Penner, CTO of ParadiseFX. "We have the SI Minis as part of our 3D camera package for shooting feature films including Thomas Jane's 'Dark Country 3D', Patrick Lussier's 'My Bloody Valentine 3D' and Joe Dante's 'The Hole 3D'."

The SI-3D system is also establishing new benchmarks in image quality and data rates with its ability to record dual-stream 12-bit uncompressed raw directly to mobile 2.5" SSDs (solid state drives), with peak rates up to 200 MB/s (1.6 Gbit/s). A 250 GB drive can store up to 1 hour of footage per camera. The resulting Silicon Imaging Video (.SIV) footage can be seamlessly viewed and graded directly in Iridas FrameCycler and SpeedGrade XR with look and stereo metadata applied. The files can also be exported as CinemaDNG sequences or converted to CineFormRAW 2D or 3D files at a later time.
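As a rough sanity check on those figures (a sketch assuming, as the text states, that 1 hour of footage fills a 250 GB drive per camera), the implied sustained write rate sits comfortably under the quoted 200 MB/s peak:

```python
# Implied sustained write rate for 1 hour of footage on a 250 GB SSD (illustrative).
DRIVE_GB = 250
HOURS = 1

sustained_mb_per_sec = DRIVE_GB * 1000 / (HOURS * 3600)
print(f"~{sustained_mb_per_sec:.0f} MB/s sustained, vs 200 MB/s peak")
```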

“There is an incredible amount of latitude and resolution from the Silicon Imaging cameras” states William White, CEO of 3D Camera Company. “Shooting directly to SSD gives us the flexibility to record stereo footage in an extreme lightweight and rugged configuration as shoulder or vehicle mounted for ‘Rescue 3D’ and even body worn for shooting from a skydiver in the upcoming ‘Human Flight 3D’. The SI-3D system with on-set visualization and integrated stereo workflow will speed up our entire shooting and production process for creating compelling 3D content.”

Supported Digital Still Cameras

RAW file support: if you are using RAW shooting mode with your camera, digiKam is probably well able to deal with it. RAW support depends on the libraw library. To find out if your particular camera is supported, bring up the list of supported RAW cameras from the Help->RAW camera support menu.

How to setup and work with RAW files is described in RAW Decoder Settings and RAW Workflow.

An easy-to-use camera interface is provided, that will connect to your digital camera and download photographs directly into digiKam Albums. More than 1000 digital cameras are supported by the gphoto2 library. Of course, any media or card reader supported by your operating system will interface with digiKam.

Current digital cameras are characterized by the use of Compact Flash™ Memory cards and USB or FireWire (IEEE-1394 or i-link) for data transmission. The actual transfers to a host computer are commonly carried out using the USB Mass Storage device class (so that the camera appears as a disk drive) or using the Picture Transfer Protocol (PTP) and its derivatives. Older cameras may use the Serial Port (RS-232) connection.

Transfers using gPhoto2: PTP and RS-232 Serial Port

digiKam employs the gPhoto2 program to communicate with digital still cameras. gPhoto2 is a free, redistributable set of digital camera software applications which supports a growing list of over 800 cameras. gPhoto2 supports the Picture Transfer Protocol, a widely supported protocol developed by the International Imaging Industry Association to allow the transfer of images from digital cameras to computers and other peripheral devices without the need for additional device drivers.

Many old digital still cameras used a serial port to communicate with the host computer. Because photographs are large files and serial port transfers are slow, this connection is now obsolete. digiKam supports these cameras too and performs image transfers using the gPhoto2 program. You can find a complete list of supported digital cameras at this url.

Note
libgphoto2 needs to be built with libexif to retrieve thumbnails for digiKam properly. EXIF support is required for thumbnail retrieval with some libgphoto2 camera drivers. If EXIF support is not enabled in libgphoto2, you might not see thumbnails, or thumbnail extraction might be very slow.

Transfers using Mass Storage device

For devices that are not directly supported by gPhoto2, there is support for the Mass Storage protocol, which is well supported under GNU/Linux®. This includes many digital cameras and memory card readers. Mass Storage interfaces include:

USB Mass Storage: a computer interface using communication protocols defined by the USB Implementers Forum that run on the Universal Serial Bus. This standard provides an interface to a variety of storage devices, including digital cameras.

FireWire Mass Storage: a computer interface using communication protocols developed primarily by Apple Computer in the 1990s. FireWire offers high-speed communications and isochronous real-time data services. Like USB Mass Storage, this standard provides an interface to a variety of storage devices, including digital still cameras. Almost all recent digital cameras support USB version 1 and eventually will support USB version 2; a very few support FireWire.

Cameras

The equipment you will need (or can get hold of) will vary tremendously depending on your budget.

Digital equipment is far more widely available than traditional 16mm and 35mm film cameras and there are a large number of local organisations that own cameras that can be hired – sometimes even borrowed – by members of the public. Hook into your local filmmaking community and you may be amazed by what is available for a very low cost and sometimes even for free.

There’s a myriad of different cameras available, depending on which format you are shooting. Besides the basic camera, you might need a set of lenses, a zoom, a head, a tripod, and if you are shooting on film, maybe a video assist (allowing you to see what you have just shot, as film needs to be processed before it can be watched).

For film cameras, you will definitely need to talk to a camera hire company about what they have available. If you have a camera person who uses them for paid work, then it will be much easier to get a good deal. If not, phone them up for a chat and explain who you are and what you need. They are usually a very friendly crowd and happy to help if they can.

There are a number of online production directories that you can use to find camera hiring companies nationwide (see related links: production - online production directories for a selection of some of the most well-known).

Lights

This is an incredibly important part of the filmmaking process, and one that you will need to invest some time in to make sure that the shoot is not wasted. Much of your decision-making will be based on whether you are shooting interiors or exteriors. An experienced camera person comes in handy for lighting tips, so get advice and experiment. You can use online production directories to source professional lighting hire companies.

Sound

This is a vital area and one that often gets overlooked by first-time filmmakers. The most important thing to remember is that you must record the dialogue well. Everything else can be cheated in post-production, but getting actors back in to re-record their dialogue is annoying and can be very expensive, so it should be avoided.

Once the dialogue has been secured, the secondary concern is to record the atmosphere of the room so that it can be used as background in post-production mixing. This is referred to as the ‘wild track’ by the sound recordist. There will need to be a moment of quiet on set while the sound mixer records the sound of the room.

microphone
The ideal is a good shotgun mike. These can be pricey, but it’s a good idea to have more than just the microphone on the camera.

mixers and DAT recorders
A mixer is a box with between one and four tracks. Its job is to control the sound levels. You can feed your microphone straight into your camera, but you will have very little control over the sound levels, so it is better if your microphone feeds into a mixer first. If you do record straight into the camera, be careful, as certain cameras have been known to distort sound. Even better is to record onto a separate DAT machine. This is the ideal, as it gives you better quality and more flexibility in post-production.

Transport

This can be a tricky one on low funds. If you have managed to blag some kit then you will need to get it around. Sometimes the facilities houses will rent you a truck, but that can be just as pricey as going to your local van rental place. Talk to whoever you are hiring your kit from and see if they have any suggestions. If all else fails, find a mate with a very big car and plan to do lots of journeys. Bear in mind that if you do hire a van and plan to leave any kit in it then, for insurance purposes, it will need to be in a 24-hour lock-up.

Make sure that your designated driver has the relevant licence and insurance to cover you in case of accident or theft of the vehicle, especially if it is a hire car or van.

Space Cameras


From our first space journey on Oct. 3, 1962, Hasselblad cameras have played an integral part in the space program, capturing the images that help us to understand our world and its surroundings. A range of special modifications and improvements is required to meet the stringent demands of space travel. We then apply the knowledge and expertise we gain in space and bring it back to earth, further improving the Hasselblad line. All to ensure that we continue to provide the finest photographic equipment on or off the planet.

Hasselblad 500EL/M


This was the first Hasselblad SLR space camera, and it was equipped with an HC3-70 prism viewfinder. It was used for the first time on the Apollo-Soyuz flight in July 1975.

Hasselblad 500c


The Hasselblad 500C, with a (modified) Planar 80mm lens, was the first Hasselblad camera to be used by NASA in space. It was purchased by the astronaut Walter M. Schirra from a camera shop in Houston, Texas.
The modification, carried out by NASA, involved removing the lining, mirror, focusing screen and hood, among other things, to make the camera lighter.

Hasselblad SWC


The Hasselblad SWC, with a Biogon 38mm lens, made its space debut on 3 June 1966 on a voyage in Gemini 9. The camera was largely standard: only the lining had been removed, and the viewfinder was specially designed. The camera was used on four voyages in 1966.

Hasselblad EC (Electric Camera) 500 EL


This camera was taken on the manned voyage which passed close to the moon on 21-27 December 1968. During this voyage 10 revolutions were made around the moon, the purpose being to survey possible future landing sites. The HEC was fitted with a magazine for 70 mm film.

Hasselblad 203S


This space camera is a focal-plane shutter camera based on the standard 203FE version. It is equipped with a special version of the Winder CW. The film magazines use 70 mm perforated film and are equipped with electronic data imprinting, enabling the recording of the time and picture number for each exposure. Since the onboard computers track the shuttle's position at all times, it is fairly easy to identify the spot on earth over which each picture was taken.

Hasselblad EDC (Electric Data Camera)


This is a specially designed version of the motorized 500EL intended for use on the surface of the moon, where the first lunar pictures were taken on 20 July 1969 by Neil Armstrong. The camera is equipped with a specially designed Biogon lens with a focal length of 60 mm, with a polarization filter mounted on the lens. A glass plate (Reseau-Plate), provided with reference crosses which are recorded on the film during exposure, is in contact with the film, and these crosses can be seen on all the pictures taken on the moon from 1969 to 1972. The 12 HEDC cameras used on the surface of the moon were left there. Only the film magazines were brought back.

Cameras for a higher cause


After World War II, the U.S.A. was engaged in a race for space with the U.S.S.R., to gain ultimate superpower status.
Both countries continued their work and research, and were able to execute manned space flights by the 1960s.
Around this time, they began using cameras to record their missions.
At first, NASA (National Aeronautics and Space Administration) used primarily 70mm-format films.
They found, however, that they needed a more portable camera for more active shooting situations.
Nikon, whose cameras had a reputation for reliability in the U.S. market, was selected as a special manufacturer of 35mm cameras for NASA.
Although the Nikon U.S. distributor accepted the order of the special cameras for NASA, a special team at Nippon Kogaku's Ohi Plant took charge of product development.

Space photography


A camera used in space would be subjected to a vacuum and zero-gravity conditions.
As the spacecraft compartment is airtight, it is crucial that harmful gas or fire never be generated.
The camera should be easy to operate for someone wearing gloves.
Reliability was also a major issue.
The rays of the sun and their reflection off the camera body can be stronger than on the earth's surface. Moreover, the weight of cargo aboard the craft had to be limited as much as possible for launch, so there was no room for a spare camera in case the main one malfunctioned.
In order to meet these demanding conditions, Nippon Kogaku's special product development team used the Nikon F as the base body and made numerous modifications.
For example, the leather-like body covering generally used for the Nikon F was changed to a metal plate painted in matte black.
Adhesives conformed to NASA specifications.
For plastic parts, the materials generally used for F cameras had to be changed to NASA-specified materials.
The battery chamber was designed to prevent accidental leakage from the camera body. Electrical parts were soldered in accordance with NASA standards.
The standard thickness of the plating was modified. Dimensions were also changed to accommodate thinner polyester-based films.
Modifications made to operating parts included an enlarged finger pad for the film advance lever, a larger film rewinding knob, and enlarged film counter figures and windows.
Interchangeable lenses were also modified.
The addition of two horns on the focusing ring was the most significant change.
It made focusing simple as the user needed only to rotate the ring using the horn.
NASA's standards for shutter accuracy were even more stringent than those of Nikon.

Nikon — and users — benefitted from NASA experience



The technologies Nikon used in developing cameras for NASA finally went into use in 1971.
The modified F camera and some modified interchangeable lenses were provided to NASA for the Apollo 15 mission.
Then, in 1973, a modified version of the F camera with a motor drive and modified lens were supplied for use aboard Skylab.
The cameras Nikon developed for use in space exploration are still in use today, and maintenance is still being provided.
These NASA cameras were of course very costly.
It is said that Nippon Kogaku took heavy losses. However, these losses were balanced out by the value of the experience in the space project. Nippon Kogaku took what they had learned and used it to improve the reliability and operational performance of Nikon products.
The development of the camera for NASA using the Nikon F body as a base and the development of the Nikon F2 occurred in parallel.
NASA did not require additional cameras based on the modified F2, and in fact a NASA version of the F2 was never actually manufactured.

F3 and F4 cameras for NASA


After some time had passed, Nikon went to work on camera models for NASA that were based on the F3 body.
These were the "Small Camera", equipped with a motor drive, and the "Big Camera" for long film; both were delivered to NASA for use aboard the Space Shuttle in 1981.
While the Nikon F3 was still being developed and many issues had yet to be decided, NASA went ahead and formally declared the Nikon F3 to be an official NASA camera.
The F3 models for NASA, and those for mass consumption, were developed side-by-side at the Ohi Plant.
Another special team was assigned to the development of the F3 for NASA. The "Big Camera" was equipped with an interchangeable film back and used a thinner special long film for bulk loading.
Members of the special team needed to concentrate on developing a new technology that would accelerate film advancement.
After much effort and brainstorming, they solved the problem and succeeded in delivering the cameras for the space shuttle.
The F3 for NASA had many of the same features as the F3 for mass consumption, including internal parts.
Compared to the modified F models for NASA, the F3 for NASA was much more similar to the F3 models made for the public.

Nikon F3
"Small Camera"
In 1989, Nikon delivered the modified F4 to NASA. There were only a few small differences between the modified F4 and mass-consumption F4 models.
Nikon applied the experience gained in developing NASA cameras to the development of cameras for the general public.
At the same time, NASA learned which specifications were required for a camera's use in space.
This is why very few modifications were required for recent NASA cameras.

Phototelevision Cameras




The first cameras in space used photographic film, which was automatically developed and scanned for transmission to Earth. This seems mechanically complex, but in the context of 1959 technology, this approach had several advantages. Film could rapidly capture an image, far beyond the resolution and sensitivity of vidicon television tubes. An enormous amount of visual information could be stored on a roll of film. The images could then be repeatedly rewound and scanned at whatever rate was convenient for telemetry transmission. The American Lunar Orbiter missions adopted the same strategy six years later.
The Enisei camera system was developed at the Leningrad Scientific Research Institute of Television (NII-380) by P.F. Bratslavets and others. Adjacent frame pairs were simultaneously exposed, through 500 mm and 200 mm objective lenses, onto 35 mm aerial-reconnaissance film (obtained from American spy balloons, according to one Russian account). The system cycled through four exposure times, 1/200 to 1/800 sec, as it photographed the Moon.

After photography, the film was automatically developed, fixed and dried in chambers of chemicals, and then scanned by a flying spot CRT and a photomultiplier tube. It could scan the film slowly at 1000 pixels per line resolution and 1.25 lines per second, or rapidly at 50 lines per second, during its return orbit to Earth.

The radio system was developed by E.Ia. Boguslavskii, at the Research Institute for Space Device Engineering (RNII-KP). He championed the use of impulse transmitters, which later enabled remarkable telemetry rates from planetary distances. However, on Luna-3, Boguslavskii surprised his colleagues by constructing a continuous-carrier FM video transmitter, operating on 183.6 MHz.

On October 7, 1959, man's first view of the far side of the Moon was returned by Luna-3. A pair of frames at correct relative size show typical 500 mm and 200 mm views. Locked in 3-axis stabilization, the spacecraft spent 40 minutes photographing the Moon, then resumed spin stabilization. The camera held 40 frames, and frames 26 to 38 were definitely received and recorded in full resolution. Some reports claim 17 frames were recorded. Six frames have been published. The mission was timed to photograph the fully illuminated Moon, but this angle of light meant low terrain contrast.
The periodic bands of static seen in the frames above were due to spacecraft spin and dead spots in its antenna's radiation pattern. Temporary receivers were set up in the Crimea and Kamchatka, with magnetic tape, 35mm film recording and instant thermal-paper recording devices. The high-gain telemetry receiving stations in Simferopol and Evpatoriia were not completed in time for Luna-3's flight. Photographs were made from the magnetic tape recordings, at varying amplifications, to study the full range of signal contrasts.

The Mars probe, 1M, was designed to carry an identical or very similar camera in 1960. Image transmission was to be on 3.7 GHz via the continuous-carrier Pluton telemetry signal; the probe probably did not contain an impulse transmitter. The camera was later removed to save weight, after the optimum-energy launch window was missed. Both 1M probes were subsequently destroyed by rocket failures.

Mars-1, in 1962, contained a complex 32-kilogram camera. It contained both 35 and 750 mm lenses and used 70 mm film. It alternately shot square images and larger 3×1 rectangular images. It had a capacity of 112 pictures on a roll of film, and these could be scanned at 1440 lines, 720 lines, or 68 lines for rapid preview. Individual frames could be rescanned and transmitted later, by telecommand. The camera system may have been built by Bratslavets, but after this time, deep-space camera systems were constructed in the Research Institute of Space Device Engineering.
The camera is also reported to have contained an ultraviolet spectrograph. The UV spectrum was projected onto the film next to the picture. A 3-4 micron infrared diffraction spectrometer was also onboard and oriented parallel to the axis of the camera. Both the UV and IR spectrometers were designed by A.I. Lebedinskii.

The camera contained its own 6 GHz transmitter using pulse position modulation. The 50 watt transmitter worked by emitting 25,000 watt pulses of very short duration. This was before the invention of redundant coding systems, and high-power impulse transmission was an ingenious method for increasing data bandwidth over distances of 300 million kilometers. This system was probably built by Boguslavskii. Images were sent as discrete pixels, but gray levels were probably encoded as analog pulse position, not binary digital values. The high-quality transmission rate was 90 pixels per second, requiring about 6 hours to send a 1440×1440 image.
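The quoted transmission time follows directly from the frame size and pixel rate. A quick sanity check, using only the figures stated above:

```python
# Sanity check of the Mars-1 high-quality transmission time quoted above.
width, height = 1440, 1440   # scanned frame size in pixels, as stated
rate = 90                    # pixels per second, as stated

seconds = width * height / rate
hours = seconds / 3600
print(f"{hours:.1f} hours")  # 6.4 hours -- "about 6 hours", as stated
```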


This camera was also carried by a similar spacecraft on an unpublicized 1962 photo-flyby mission to Venus. Radio contact with Mars-1 was lost at 106 million km, due to loss of attitude-control propellant. The Venus probe was stranded in parking orbit.

The 1965 Zond-3 mission returned 23 pictures (with orange filter) and UV spectra of the far side of the Moon. A 106.4 mm objective lens was used on this camera. In addition, some test patterns were pre-exposed at the start and end of the film. Images were taken and developed every 2.25 minutes, with alternating 1/100 and 1/300 second exposures. A rapid 67 line/picture survey scan was first performed, and then commands were sent to rescan images at high resolution, with some resent several times. It continued on to a distance equivalent to a Mars fly-by, rewinding the film and testing image transmission several times.

As before, a 5-centimeter-band impulse transmitter sent pixel values to Earth, or alternatively, an 8-centimeter-band continuous wave transmitter could send the results. Most likely, both systems were tested at various distances. In high-quality mode, images were sent at 550 pixels per second (2 seconds per scanline), requiring 34 minutes to send a 1100×1100 image.

A 285-355 nm UV spectrograph was incorporated into the camera and recorded onto three frames of the film. A second, coaxial, UV spectrometer measured 190-275 nm with a photomultiplier detector and output digital telemetry. A coaxial 3-4 micron IR spectrometer was included on Mars missions, to investigate common organic molecular absorption bands, and a 6-40 micron IR spectrometer was included on Venus missions to investigate thermal balance. Spectrometers were designed by A.I. Lebedinskii and V.A. Krasnopol'skii.

Zond-2 may have carried two of these cameras with 200 and 500 mm lenses, but failed en route to Mars. Luna-12 carried two cameras of this design (one with a 500 mm lens) in a low-altitude lunar orbit in 1966. Luna-12 returned 40 images per camera at a doubled scanning speed. An identical mission on Luna-11 experienced a failure of its orientation system and photographed black space. Venera-2 carried one camera with a 200 mm lens to Venus, but the spacecraft failed before its final planetary-encounter telemetry playback.

Optical-Mechanical Cycloramic Cameras



Selivanov and Iuri M. Gektin designed landscape cameras for Moon, Mars and Venus landers. Instead of panning a television camera, they decided to scan the scene with a pinpoint photometer. This required a much simpler apparatus and offered some advantages: a precise measurement of luminance was made at each pixel, and the entire landscape was returned as a single seamless image.

These cameras probably evolved from early cycloramic telephotometers by A.M. Kasatkin and others, used for low-resolution UV imaging and photometry from high altitude rockets. Luna-4 through Luna-8 contained a cycloramic optical-mechanical camera built by I.A. Rosselevich's team at the Leningrad Scientific Research Institute of Television. It was heavier and lower resolution than Selivanov's Luna-9 camera, and it operated inside a pressurized glass cylinder instead of being exposed to vacuum.

On the Luna-9 camera, seen above, the objective lens was focused at the hyperfocal distance, returning a sharp image of terrain between 1.5 meters and the horizon. Logarithmic photometry and automatic gain control (governed by a photocell) allowed the camera to operate with a wide range of luminance, from 80 to 150,000 lux. Sensitivity could also be adjusted by telecommand. The PMT and amplifier were the same as in the film scanner of the Zond-3 phototelevision camera. Remarkably, while containing vacuum tubes, a motor and the 1700 volt power supply for the PMT, the camera weighed 1.3 kilograms and consumed only 2.5 watts.

The upper assembly with oscillating mirror and motor rotated freely in the metal sleeve, making electrical contact through brushes. Scanning was vertical, with slow rotation to sweep out the horizontal image swath. The finely built mechanical action of the mirror was precise to 1/3 pixel spacing. A full 29° × 360° panorama of 6000 vertical lines could be returned in 100 minutes. On command, the camera could scan forward, in reverse, or at 4× speed for quick survey or positioning. A 250 Hz analog video signal was generated, which was frequency modulated onto a 1.5 kHz subcarrier. That in turn was phase modulated onto the 183.538 MHz telemetry carrier.

250 cycles per line is theoretically equivalent to 500 pixels, which is how the resolution is often reported. Lunar images were sent as analog video, because a strong communication channel could be established between the Moon and the 32-meter dish at Simferopol. For later missions to Mars and Venus, the video signal was digital from camera to ground station.
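The 500-pixel equivalence is just the Nyquist relation: one analog video cycle can represent at most two alternating pixels. The panorama figures given earlier also pin down the nominal line rate. A minimal check of both:

```python
# Nyquist: each analog cycle resolves at most two pixels (one light, one dark),
# so 250 cycles per line is equivalent to 500 pixels per line.
cycles_per_line = 250
pixels_per_line = 2 * cycles_per_line
print(pixels_per_line)       # 500

# Line rate implied by the panorama figures above: 6000 lines in 100 minutes.
lines_per_second = 6000 / (100 * 60)
print(lines_per_second)      # 1.0 line per second
```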

The images above show part of the lunar landscape revealed by the Luna-13 camera. Pieces of the landing craft are seen in the distance on the left view. On the right, a detail at original resolution shows the extended gamma-ray densitometer and a close view of the lunar soil. These first landers relieved fears that the lunar surface might be composed of dust, into which spacecraft would sink.

It is important to remember that we can only see scans of printed images, many generations of duplication from the original electronic signal. Unless the magnetic tapes of the FM video signal are read and processed into modern digital images, we will not see the true quality of these images.

Luna-9 was the first spacecraft to soft-land on the Moon, using an airbag landing system similar to the recent Mars Pathfinder. In 1966, it returned three panoramas. Its signals were also intercepted by the British radio telescope at Jodrell Bank, and a Manchester newspaper published the pictures before the Russian press.

Luna-13 returned five panoramas from another landing site, later that year. Taken over several days, they show the surroundings under different angles of illumination (the Moon rotates 13° per day). It had two cameras for redundancy or stereo, but one failed.

In 1971, Mars-3 was the first spacecraft to land on the red planet. Two cycloramic cameras were installed, as on Luna-13. Like the second generation lunar cameras, they had 500 × 6000 pixel resolution, and scanned at 4 lines per second.

Optical-Mechanical Linear Cameras



Conventional cameras focus an image onto a 2-dimensional image sensor. One problem with this is the limitation of resolution imposed by image sensor technology. It is easier to build a 1-dimensional camera and allow the orbital motion of the spacecraft to sweep it across the planet. Although this innovation is often attributed to Landsat-1, Soviet scientists first deployed linear cameras a year earlier, on Luna-19. Built by Arnold Selivanov and Iuri Gektin, they represent an evolution of the panoramic camera used on Luna-9 in 1966.

These cameras, for the 1971 and 1974 low-orbit surveys, were designed to produce long, high-quality panoramas of the lunar surface. They used a photomultiplier tube as the detector, with a spinning prism to scan a 180° "cylindrical fisheye" image. The scan rate was 4 lines per second. From an altitude of 100 kilometers, the craft could resolve 100 meters along the direction of scanning, and 400 meters along the perpendicular direction of flight. The images extend to the lunar horizon, which was used to help calculate the precise orbital motion of the satellite.
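The 400-meter along-track figure is consistent with simple orbital mechanics: at 4 lines per second, the ground distance covered between scanlines is roughly the orbital speed divided by the line rate. A sketch, where the lunar gravitational parameter and radius are standard textbook values rather than figures from this article:

```python
import math

# Standard lunar constants (assumptions, not from the source text)
GM_MOON = 4.905e12   # m^3/s^2, lunar gravitational parameter
R_MOON = 1.738e6     # m, mean lunar radius

altitude = 100e3     # m, orbital altitude quoted above
line_rate = 4        # scanlines per second, quoted above

# Circular-orbit velocity, used here as an approximation of ground speed
v = math.sqrt(GM_MOON / (R_MOON + altitude))

# Ground distance traversed between successive scanlines
along_track = v / line_rate
print(f"{along_track:.0f} m per line")   # on the order of the 400 m quoted
```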

The Luna-19 and Luna-22 "heavy orbiters" are still somewhat mysterious missions, although one objective was the mapping of the Moon's uneven gravitational field. Luna-22 adjusted its orbit until it was skimming the lunar surface at 15 to 30 kilometers distance. By one report, Luna-19 returned 5 panoramas and Luna-22 returned 10.


The Mars-4, Mars-5 and Venera-9 orbiters contained linear cameras designed by Gektin and his team. They scanned images 30° wide and arbitrarily long, as the orbit of the spacecraft swept across the planet. The camera design was similar to the cycloramic camera on Luna-9, but its scanning mirror oscillated without the need of a rotating assembly, using the satellite's orbital motion to sweep out an image swath. It used automatic gain control and operated in a logarithmic-photometer mode. Each scanline included some black and white calibration stripes transmitted during the return stroke.

The box, above left, is an analog 4-track tape-loop recording device designed to work with this linear camera. It recorded up to 45 minutes of two 1000 Hz video signals as well as two synchronization signals from the onboard crystal oscillator. Both cameras could be simultaneously recorded for 45 minutes, or one camera could record for 90 minutes. The video could be read and digitized for transmission to Earth at two speeds (i.e., at two pixels/line resolutions).

Reports claim the tape recorder was also used to store the video signal from the lander, although technical papers stress that the radio signal from the Venera and Mars landers to the orbiter was digital, not analog.

The Mars cameras used two photomultiplier tubes and returned images in three wavelength ranges. A PMT-112 (AgOCs cathode) with a red glass long-pass filter was used to image in infrared. A PMT-114 (multialkali cathode, also used on Venera lander) was used with red and orange glass filters to image those colors. The cameras scanned at 4 lines/second, generating 1000 Hz video (250 cycles/line), which was recorded on magnetic tape. The primary readout rate was 1 line/second, transmitted to Earth probably at 256 or 512 pixels/line. The option existed to scan at 4 lines/second and send reduced resolution at higher speed. Mars-4 returned 2 panoramas, and Mars-5 returned 5 panoramas.

The Venus cameras both used the PMT-114 with violet and ultraviolet filters to obtain images in those spectral ranges. It scanned at 2 lines/second, generating 1000 Hz video (500 cycles/line). During transmission to Earth, the tape could be read and transmitted at 256 pixels/line in the primary mode, or at a slower special rate of 512 pixels/line. Venera-9 performed 17 survey missions from October 26 to December 25, 1975, using the ultraviolet camera with the violet camera sometimes recording simultaneously. Resolution was 6.5 to 30 km, depending on the spacecraft altitude.

The panoramas, recorded over 30 to 50 minutes, were probably about 256 × 6000 × 6-bits in size, and contained highly elongated images of the planet. They were contrast enhanced and linearly compressed by scanline averaging, to reduce noise and geometric distortion. These images were higher resolution than those of the later Pioneer Venus cloud photometer, but unfortunately the images from this survey have never been released to the public. The poor-quality images above are scanned photocopies of printed pictures.

In 1988, the Soviet Union launched Fobos-1 and -2, Mars orbiters with small vehicles intended to land on Phobos. Selivanov and Gektin's team designed a 28 kilogram optico-mechanical camera, similar in basic design to the Mars-5/Venera-9 linear cameras. Called TERMOSKAN, the camera contained two detectors: one for 600-950 nm returned images in the red and near-infrared range. The other, cooled by liquid nitrogen, imaged the thermal infrared wavelengths from 8.5 to 12 μm. Seen above is the third of four scans around the equator of Mars, 512×3100 pixels, from Olympus Mons to the Valles Marineris.



The spacecraft was 3-axis stabilized, with the TERMOSKAN camera pointed away from the Sun. A moving mirror scanned one dimension at 512 pixels/line and 1 line/second. The nearly circular orbit of the spacecraft moved the camera in a swath across the illuminated face of the planet. The faint horizontal streak is the shadow of Phobos, following the spacecraft's orbit.

Above is a full-sized section from the second scan in the far infrared. With 1.8 km resolution, the Fobos-2 images are several times higher resolution than the recent thermal IR images from Mars Global Surveyor. Each scan line consists of 384 pixels of image and 128 pixels of calibration data (which have been omitted). A later version of the camera was installed on Mars-96, which was destroyed in a launch mishap.

Linear optical-mechanical cameras have been applied to non-military Earth observation satellites. In the early 1970s, two scanners were developed by Selivanov's team, for the Meteor weather satellites: MSU-M scanned 4 lines/sec by oscillating mirror (similar to the Mars-5 camera). It swept a 3000 km swath at four bands in the visible and infrared. MSU-S scanned 48 lines/sec by spinning prism (similar to the Luna-19 camera). It swept out a 2000 km swath with 240 meter resolution, in two spectral bands.

The two images above were gathered from MIR in the 1990s. The latest spinning-prism scanner, the MSU-SK, has been installed on Meteor-3M, Okean and Resurs-O satellites, as well as the MIR space station. It sweeps out a 600 km wide swath with an arc-shaped scan, returning up to 4756 pixels/line. It is combined with the MSU-E push-broom camera, which uses three 2048-element linear CCD sensors. The MSU-E returns 200 lines/sec in a 45 - 78 km swath, running down the center of the MSU-SK image. A 24-bit image is returned, consisting of three channels selected from the set of 5 spectral bands on the MSU-SK and 3 bands on the MSU-E.



Returned-Film Camera Systems


The highest quality images of the Earth and Moon have come from returned film, taken automatically or by astronauts. In America, the civilian space program was forbidden to develop automatic returned-film camera systems, a matter of some dispute during the planning of Landsat. In the Soviet Union, the division between military and civilian space programs was less distinct. With high resolution returned-film imagery from Resurs-F available for topographic and Earth-resource applications, Soviet linear-scanning satellites like Resurs-O were designed for wider coverage than Landsat.



The world's first surveillance satellite was the Zenit-2, developed concurrently with the Vostok manned missions, and using the same spacecraft. Since 1961, over 700 Zenit or Resurs-F satellites have flown, carrying a variety of camera systems and returning them in the spherical landing capsule. The original Ftor-2 camera system, consisting of a 200 mm and 1000 mm camera, was designed by Iu.V. Riabushkin.
The Zenit-8 capsule above shows two telescopic KFA-3000 cameras, with a folded 3000 mm focal length. It probably held about 1800 frames of film, each 30 × 30 cm, yielding 2-3 meter resolution. The camera systems were used an average of three times before wearing out from repeated launching and reentry.
The Resurs-F1 capsule above shows five cameras. Two KFA-1000 cameras shot 30 × 30 cm frames of b/w or spectrozonal film through 1000 mm objectives (4-6 meter resolution). Three KATE-200 cameras shot 18 × 18 cm color film through 200 mm objectives (15-30 meter resolution). Spectrozonal film recorded 570-680 nm and 680-810 nm wavelengths in separate emulsion layers.



Examples of Soviet returned-film imagery are impressive. The Resurs-DK camera has a resolution of 1 meter. Russian companies now sell returned-film imagery from regions outside their national boundaries.


Zond-5 through Zond-8 returned film images of the Moon and Earth from 1968 to 1970. The camera system was developed at the Moscow State University of Geodesy and Cartography (MIIGAiK) under Boris N. Rodionov. Zond-6 and Zond-8 carried a 400 mm camera using 13 × 18 cm frames of panchromatic film. Zond-7 carried a 300 mm camera shooting on 5.6 × 5.6 cm film (both color and panchromatic). The original Zond-8 negatives have been digitized in Moscow to about 8000 × 6000 pixels, and are still among the best close images of the Moon.