Kein Holz vor der Hütte

High-dynamic-range imaging (HDRI or HDR) is a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible using standard digital imaging or photographic techniques. HDR images can represent the range of intensity levels found in real scenes, from direct sunlight to faint starlight, more accurately, and are often captured by combining several differently exposed pictures of the same subject matter.[1][2][3][4]

Non-HDR cameras take photographs with a limited exposure range, resulting in the loss of detail in bright or dark areas. HDR compensates for this loss of detail by capturing multiple photographs at different exposure levels and combining them to produce a photograph representative of a broader tonal range.

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR)[5] or standard-dynamic-range (SDR)[6] photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor. Tone mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.
In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV, or one stop, represents a doubling of the amount of light, so a range of ten stops, for example, corresponds to a contrast ratio of 2^10 = 1024.
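As a quick illustration (the luminance readings below are hypothetical, not from the article), the dynamic range of a scene in stops is just the base-2 logarithm of its contrast ratio:

    import math

    # Hypothetical luminance readings in cd/m^2.
    highlight = 20000.0   # brightest area that still shows detail
    shadow = 2.5          # darkest area that still shows detail

    # Each stop (EV) doubles the light, so the range in stops is log2 of the ratio.
    stops = math.log2(highlight / shadow)
    print(f"contrast ratio {highlight / shadow:.0f}:1 = {stops:.2f} stops")
    # contrast ratio 8000:1 = 12.97 stops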
High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer enough discrete values to allow fine transitions (and introduces undesirable effects due to the lossy compression).
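A minimal sketch of such a merge, assuming OpenCV's HDR module and three hypothetical bracketed JPEGs (the file names and exposure times are invented for illustration):

    import cv2
    import numpy as np

    # Hypothetical bracketed shots of the same scene and their exposure times.
    files = ["ev_minus2.jpg", "ev_0.jpg", "ev_plus2.jpg"]
    times = np.array([1 / 500, 1 / 125, 1 / 30], dtype=np.float32)
    images = [cv2.imread(f) for f in files]

    # Recover the camera response curve, then merge into a float32 radiance map.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    # Simple global tone mapping so the result can be viewed on an SDR display.
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite("merged.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))

In practice the source images must be pixel-aligned (for example, shot from a tripod) before merging.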

The images from any camera that allows manual exposure control can be used to create HDR images. This includes film cameras, though the images may need to be digitized so they can be processed with software HDR methods.

The range of a camera's auto exposure bracketing (AEB) feature varies widely between models, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II.[10] As the popularity of this imaging method has grown, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file.[11] The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.[12] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[13]

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[14]

Camera characteristics
Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and spectral calibration affect resulting high-dynamic-range images.[15]

Tone mapping
Main article: Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast.
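A minimal sketch of one global tone mapping operator, Reinhard's simple L/(1+L) curve, written with NumPy; the channels-last RGB layout and gamma value are assumptions, and real tone mappers add local contrast handling on top of this:

    import numpy as np

    def reinhard_tonemap(hdr: np.ndarray, gamma: float = 2.2) -> np.ndarray:
        """Compress a linear HDR image (float, channels-last RGB) into [0, 1]."""
        # Luminance from linear RGB (Rec. 709 weights).
        lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
        lum_out = lum / (1.0 + lum)          # maps [0, inf) smoothly into [0, 1)
        scale = np.where(lum > 0, lum_out / np.maximum(lum, 1e-9), 0.0)
        ldr = np.clip(hdr * scale[..., None], 0.0, 1.0)
        return ldr ** (1.0 / gamma)          # gamma-encode for an SDR display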

Software
Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include
Adobe Photoshop
Dynamic Photo HDR
HDR PhotoStudio
Luminance HDR
Oloneo PhotoEngine
Photomatix Pro
PTGui

Comparison with traditional digital images
Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[16][17][18]
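As a toy example of gamma (power law) encoding, assuming the common 2.2 exponent: encoding lifts dark values, devoting more of the available code values to the shadows, where the eye is most sensitive:

    import numpy as np

    linear = np.linspace(0.0, 1.0, 5)     # linear scene-referred values
    encoded = linear ** (1.0 / 2.2)       # gamma-encode (power law)
    decoded = encoded ** 2.2              # decode back to linear
    print(np.round(encoded, 3))           # [0.    0.532 0.73  0.877 1.   ]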

HDR images often don't use fixed ranges per color channel, unlike traditional images, in order to represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0..255 in an 8-bit-per-channel interval for red, green and blue) but instead use a floating point representation. Common are 16-bit (half precision) or 32-bit floating point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
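A small sketch of that trade-off using NumPy's half-precision type (the radiance values are hypothetical): an 8-bit integer channel clips everything above its white point, while a 16-bit float channel keeps both ends of the range:

    import numpy as np

    # Hypothetical linear radiance values spanning a wide range.
    radiance = np.array([0.001, 0.5, 1.0, 250.0, 20000.0])

    as_uint8 = np.clip(radiance * 255, 0, 255).astype(np.uint8)  # fixed 0..255 range
    as_half = radiance.astype(np.float16)                        # half-precision float
    print(as_uint8)   # [  0 127 255 255 255] -- highlights clipped, shadows crushed
    print(as_half)    # approximately [0.001 0.5 1.0 250.0 20000.0] -- range kept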
The idea of using several exposures to fix a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, the luminosity range being too extreme. Le Gray used one negative for the sky and another one with a longer exposure for the sea, and combined the two into a single positive print.[20]

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. Producing the print took five days, to reproduce the tonal range of the scene, which extends from a bright lamp (relative to the scene) to a dark shadow.[22]

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific timing needed during the developing process of color film. Over the years, photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.
Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[23] This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[24] The dynamic range of this extended range film has been estimated as 1:10^8.[25] It has been used to photograph nuclear explosions,[26] for astronomical photography,[27] for spectrographic research,[28] and for medical imaging.[29] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

Late-twentieth century
The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.[30] In 1993 the same group introduced the first commercial medical camera that captured multiple images with different exposures in real time and produced an HDR video image.[31]

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.[2]

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.[32] Mann’s method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann’s process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.[32]
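A rough sketch of this two-step structure (not Mann's patented method: the gamma response, the hat-shaped weighting, and the final global curve standing in for local neighborhood processing are all assumptions):

    import numpy as np

    def radiance_map(images, times, gamma=2.2):
        """Step 1: global-only operations -- the same math applied to every pixel."""
        acc = np.zeros(images[0].shape, dtype=np.float64)
        wsum = np.zeros_like(acc)
        for img, t in zip(images, times):
            x = img.astype(np.float64) / 255.0
            w = 1.0 - np.abs(2.0 * x - 1.0)   # trust mid-tones most
            acc += w * (x ** gamma) / t       # undo assumed response, scale by time
            wsum += w
        return acc / np.maximum(wsum, 1e-6)   # the "lightspace" / radiance estimate

    def tone_remap(radiance):
        """Step 2: remap the radiance estimate into a displayable image."""
        r = radiance / radiance.mean()        # crude exposure normalization
        return np.clip((r / (1.0 + r)) ** (1 / 2.2), 0.0, 1.0)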

In 2005, Adobe Systems introduced several new features in Photoshop CS2, including Merge to HDR, 32-bit floating point image support, and HDR tone mapping.

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[34] A few companies such as RED[35] and Arri[36] have been developing digital sensors capable of a higher dynamic range. The RED EPIC-X can capture HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the 'x' channel, which can be merged with the normal channel in post-production software. With the advent of low-cost consumer digital cameras, many amateurs began posting tone mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer-grade HD video cameras.[37] Similar methods were described in the academic literature in 2001[38] and 2007.[39]

Modern movies are often filmed with cameras featuring a higher dynamic range, and legacy movies can be upgraded even if manual intervention is needed for some frames (as happened in the past when black-and-white films were upgraded to color). Special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in applications where capturing the temporal aspects of changes in the scene demands high accuracy: the monitoring of industrial processes such as welding, predictive driver-assistance systems in the automotive industry, and surveillance systems, to name just a few. HDR video can also speed up image acquisition wherever a large number of static HDR images are needed, for example in image-based methods in computer graphics. Finally, with the spread of TV sets with enhanced dynamic range, broadcasting HDR video may become important, but may take a long time due to standardization issues. For this particular application, upconverting current low-dynamic-range (LDR) video signals to HDR in intelligent TV sets seems a more viable near-term solution.

More and more CMOS image sensors now have high-dynamic-range capability within the pixels themselves. Such pixels are intrinsically non-linear (by design), so that the wide dynamic range of the scene is non-linearly compressed into a smaller dynamic range electronic representation inside the pixel.[41] Such sensors are used in extreme-dynamic-range applications such as welding and automotive imaging.

Some other sensors designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30 fps video may output 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. Some of these sensors can even combine the two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.
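A sketch of how such alternating-exposure frames might be combined off-chip (the frame ordering, gain and threshold are assumptions for illustration):

    import numpy as np

    def merge_alternating(frames, short_gain=8.0, threshold=0.9):
        """Turn a 60 fps stream of alternating exposures into 30 fps merged frames.

        Assumes even-indexed frames are the long exposures and odd-indexed
        frames the short ones, as in the description above.
        """
        merged = []
        for long_f, short_f in zip(frames[0::2], frames[1::2]):
            long_lin = long_f.astype(np.float32) / 255.0
            short_lin = short_f.astype(np.float32) / 255.0 * short_gain  # match scales
            # Use the short exposure wherever the long one is near clipping.
            merged.append(np.where(long_lin >= threshold, short_lin, long_lin))
        return merged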

Source:

en.wikipedia.org/wiki/High-dynamic-range_imaging

de.wikipedia.org/wiki/High_Dynamic_Range_Image

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography as having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported, but apparently neither has ever been independently confirmed beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

History and evolution
Precursor technologies
Photography is the result of combining several technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Di and the Greek thinkers Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

The discovery of the camera obscura, which provides an image of a scene, dates back to ancient China. Leonardo da Vinci mentioned natural camera obscuras formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

Camera obscura literally means "dark chamber" in Latin: a box with a hole in it that allows light to pass through and project an image onto a piece of paper. Renaissance painters used the camera obscura, which, in fact, gives the optical rendering in color that dominates Western art.

First camera photography (1820s)
Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

Because Niépce’s camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre’s efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.
Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre’s invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot’s paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot’s process, unlike Daguerre’s, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot’s famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process: the ambrotype (a positive image on glass), the ferrotype or tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

Black-and-white
See also: Monochrome photography
All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. Tones and the contrast between light and dark areas define black-and-white photography.[24] Monochromatic pictures are not necessarily pure blacks and whites; they can also contain other hues depending on the process. The cyanotype process produces an image composed of blue tones. The albumen process, first used more than 150 years ago, produces brown tones.

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

Color
Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell’s idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.
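As a toy sketch of the three-channel principle (the file names and arrays are invented for illustration), the three filtered black-and-white records simply become the red, green and blue channels of one image:

    import numpy as np

    # Hypothetical grayscale exposures (2-D arrays) shot through R, G and B filters.
    red = np.load("red_filtered.npy")
    green = np.load("green_filtered.npy")
    blue = np.load("blue_filtered.npy")

    # Stacking the three separations along a new last axis yields an RGB image.
    color = np.stack([red, green, blue], axis=-1)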

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

Agfa’s similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa’s product.

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

Digital photography
Main article: Digital photography
See also: Digital camera and Digital versus film photography
In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is a highly manipulable medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge’s study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

Technical aspects
Main article: Camera
The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from small to very large: a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person’s eyes and brain merge the separate pictures together to create the illusion of motion.[27]

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of exposure, the aperture of the lens, and the effective focal length of the lens (which, in variable focal length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures from one to several seconds are possible, usually for still-life subjects, and for night scenes exposure times can be several hours. For a subject in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurry.[29]

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is proportional to the ratio of the focal length to the diameter of the aperture. Longer lenses will pass less light even though the diameter of the aperture is the same, due to the greater distance the light has to travel; shorter lenses (a shorter focal length) will be brighter with the same size of aperture.

The smaller the f/number, the larger the effective aperture. The present system of f/numbers to give the effective aperture of a lens was standardized by an international convention. There were earlier, different series of numbers in older cameras.

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
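A short check of this relationship (values computed here, not from the article): multiplying the f-number by √2 at each step keeps halving the admitted light, since light is proportional to 1/N²:

    import math

    f_stops = [2.8 * math.sqrt(2) ** i for i in range(8)]
    print([round(n, 1) for n in f_stops])
    # [2.8, 4.0, 5.6, 7.9, 11.2, 15.8, 22.4, 31.7] -- the marked series
    # 2.8, 4, 5.6, 8, 11, 16, 22, 32 uses rounded conventional values.

    light = [1 / n ** 2 for n in f_stops]   # admitted light ~ 1 / N^2
    print([round(light[i] / light[i + 1], 2) for i in range(len(light) - 1)])
    # [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0] -- each stop halves the light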

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

For example, a wider aperture is used for lower light and a smaller aperture for brighter light. If a subject is in motion, then a high shutter speed may be needed. A tripod can also be helpful in that it enables a slower shutter speed to be used.

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
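A quick verification of that equivalence, using a standard exposure-value formula applied to the numbers above:

    import math

    def exposure_value(f_number: float, time_s: float) -> float:
        """EV = log2(N^2 / t); equal EV means equal light reaches the film plane."""
        return math.log2(f_number ** 2 / time_s)

    print(round(exposure_value(8.0, 1 / 125), 2))   # 12.97
    print(round(exposure_value(5.6, 1 / 250), 2))   # 12.94 -- equal within the
                                                    # rounding of the marked f-numbers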

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens’s range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.

Image capture is only part of the image forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is just mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Every type can also be printed on more "classical" media such as regular paper or photographic paper.

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls are different between digital and film processes. Other printing modifications are also possible.
Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras, and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two of its film camera models: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6×6 cm / 6×7 cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

According to a survey made by Kodak in 2007, when the majority of photography was already digital, 75 percent of professional photographers said they would continue to use film, even though some had embraced digital.[37]

The PMA reports that nearly a billion rolls of film were sold in 2000, but by 2011 a mere 20 million rolls, plus 31 million single-use cameras.[38]

Source:
en.wikipedia.org/wiki/Photography
de.wikipedia.org/wiki/Fotografie

Posted by !!! Painting with Light !!! #schauer on 2013-11-02 15:24:51

Tagged: Schauer, Christian, Schaibing, Oberdiendorf, HDR, Passau, Hauzenberg, Untergriesbach
