Sixty years of advanced imaging at the French-German Research Institute of Saint-Louis: from the Cranz-Schardin camera to computational optics

Frank Christnacher
  • Corresponding author
  • French-German Research Institute of Saint-Louis, AVP, Saint-Louis, France
  • Frank Christnacher received his MSc degree in the field of Solid-State Physics in 1988 and his PhD in the field of Optical Data Processing and Pattern Recognition in 1992, both from the University of Haute-Alsace (France). He is currently the head of the ‘Advanced Visionics and Processing’ group of the French-German Research Institute of Saint-Louis. He specializes in the area of night vision imaging systems and active imaging, and has initiated and led numerous international scientific collaborations.
, Martin Laurenzis
  • French-German Research Institute of Saint-Louis, AVP, Saint-Louis, France
  • Martin Laurenzis received his MSc degree in Physics from the University of Dortmund (Germany) in 1999 and his PhD in Electrical Engineering and Information Technology from the University of Aachen (Germany) in 2005. He has been with the French-German Research Institute of Saint-Louis (France) since 2004. His research interests include non-line-of-sight and compressed optical sensing. He is a member of SPIE, the European Optical Society, and the German Association of Applied Optics.
, Yves Lutz
  • French-German Research Institute of Saint-Louis, AVP, Saint-Louis, France
  • Yves Lutz received his MSc degree in Laser Physics from the University of Franche-Comté and his PhD in Laser Physics from the University of Haute-Alsace, France. He specializes in the development of laser sources and laser illumination devices. Currently, he is working on illumination sources for long-range active imaging applications.
and Alexis Matwyschuk
  • French-German Research Institute of Saint-Louis, AVP, Saint-Louis, France
  • Alexis Matwyschuk received his MSc degree and his PhD from the University of Haute-Alsace (France) in the field of Optics and Electrical Engineering. He specializes in laser optics, holography, and optical data processing. He is currently leading research and development projects with industrial and governmental partners.

Abstract

In 2019, the French-German Research Institute of Saint-Louis (ISL) is celebrating its 60th anniversary. From the beginning, advanced imaging technologies have been one of the institute's flagship areas of research and, from the Cranz-Schardin camera to computational optics, ISL has never stopped innovating. Each technological innovation is a testimony to its time, and the research work in visionics is no exception to this rule. Each decade was marked by innovations in vision and visualization systems that have kept the institute at the forefront of research in this field; the high-speed cameras, holography, lasers, and active imaging systems developed at ISL are examples of this. The science of the photon, photonics, still has a bright future ahead, and there is no doubt that the latest discoveries and technological advances in this field will be applied to systems that allow our armed forces to maintain their technological superiority and our soldiers to carry out their missions with greater security.

1 Introduction

This year, the French-German Research Institute of Saint-Louis (ISL) is celebrating its 60th anniversary, but the roots of the institute go back to the end of the Second World War, when a team of German scientists led by Professor Schardin, from the Air Force Technical Academy (Technische Akademie der Luftwaffe) in Berlin-Gatow, came to the Upper Rhine to work for the French Army. There, Professor Schardin worked in the field of ballistics and became renowned worldwide for his work in high-speed physics. In his early years, as a permanent assistant to the eminent German ballistics professor Carl Cranz, he developed the famous Cranz-Schardin camera, a revolutionary electro-optical photography and high-speed cinematography method, which used electric sparks or X-ray flashes as illumination sources and was able to work at frame rates above 10^6 images/s. This technique brought huge improvements in the comprehension of ballistic phenomena and, more broadly, in high-speed physics. Since 1945, the LRSL (Laboratoire de Recherches de Saint-Louis), and later the ISL, has maintained its leading position in the domain of high-speed phenomena. At the beginning of the 1960s, the invention of the laser started a true revolution and allowed the emergence of new techniques such as holography and holographic interferometry. Today, with the introduction of semiconductor lasers, ISL holds a leading position in range-gated active imaging and is deploying its research effort in a newly emerging domain: computational imaging.

2 High-speed imaging with spark or X-ray illumination

2.1 Cranz-Schardin camera or spark chronolens

In 1929, Schardin developed the Cranz-Schardin camera also known as the spark chronolens. This revolutionary camera is still used for the visualization of ultra-fast physical phenomena [1]. Even nowadays, hundreds of scientific papers continue to mention this technique each year. Figure 1 shows the setup as it was used at that time.

Figure 1:

Cranz-Schardin chronolens with spark illumination. This camera contributed to spectacular progress in the comprehension of aerodynamics and gas physics.

Citation: Advanced Optical Technologies 8, 6; 10.1515/aot-2019-0036

It consists of an illuminator, a camera, and a spherical mirror. The illuminator is composed of 24 spark cells (0.3 μs illumination duration each; Figure 1, left) coupled with a 24-lens camera (Figure 1, center) capable of recording a chronogram of 24 images of the same scene at an imaging rate of more than 1 million images/s, i.e. >1 MHz. It is remarkable that this imaging rate remains challenging even today. Each of the 24 spark illuminators is conjugated with one of the 24 lenses of the camera through the spherical mirror (Figure 1, foreground), and the phenomenon to be studied is placed between the system and the mirror. Figure 2 shows two sequences of images taken with the Cranz-Schardin camera.
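At 1 MHz, the inter-frame delay is only 1 μs, into which each 0.3-μs spark discharge must fit. The timing budget can be sketched as follows (an illustrative calculation, not ISL software; the function name is ours):

```python
def spark_schedule(n_sparks=24, frame_rate_hz=1.0e6):
    """Firing times (in microseconds) of the spark cells of a Cranz-Schardin camera."""
    dt_us = 1.0e6 / frame_rate_hz  # inter-frame delay in microseconds
    return [i * dt_us for i in range(n_sparks)]

times = spark_schedule()
print(times[:3])             # [0.0, 1.0, 2.0] -> one spark every microsecond
print(times[-1] - times[0])  # 23.0 -> the whole 24-frame chronogram lasts 23 us
```

At the 6.5-MHz rate of Figure 3, the same 24 frames would span under 4 μs, which illustrates why such rates remain demanding even for modern sensors.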

Figure 2:

Visualization of the propagation of a planar shock wave over a VW beetle model (A) and the propagation of a kinetic projectile through an apple (B).


Figure 3 presents one of the fastest video sequences made by Schardin, with a frame rate of 6.5 MHz. This movie, taken in the late 1930s, visualizes, in polarized light, the shock wave produced by the impact of a projectile on a glass plate.

Figure 3:

Images from a movie visualizing the shock wave in a glass plate due to a projectile impact. Title screens (A, B) and images (C–E) taken at a frame rate of 6.5 Mfps!


At the same time, Schardin was developing Kerr cells [2], which were used as very fast shutters.

2.2 X-ray imaging for ballistics application

The first basic studies on X-ray imaging were conducted by Schardin in Berlin-Gatow to study the effect of hollow charges on armor. In fact, it was the advent of flash radiography that allowed the visualization of the deformation of the liner and its transformation into a projectile. The first X-ray generators emitting very short pulses, well below 1 μs, were designed in 1938–1939 by the Siemens Company, where Dr. Thomer worked. At the request of Schardin, he was assigned to the Gatow laboratory in 1939. The first radiographic photographs of the functioning of hemispheric charges were thus obtained in 1940, and the researchers had the opportunity to understand the formation of a projectile through the deformation of a shaped charge and to study its penetration power (Figure 4).

Figure 4:

First visualization of a projectile formation from a spherical-shaped charge in the 1940s (A) and visualization of projectile formation from a conical-shaped charge and its effect on armor (B).


3 Laser imaging

The advent of lasers in the early 1960s allowed the development of new visualization techniques particularly adapted to the study of flows. ISL researchers developed many flow visualization techniques, which can be classified into different categories depending on whether they provide a global flow visualization or a local measurement of flow characteristics, and on whether the flow needs to be seeded or not. Table 1 lists the different techniques according to these characteristics.

Table 1:

Flow visualization techniques developed at ISL.

|                 | Global flow visualization | Global measurements | Local measurements |
| Without seeding | Shadowgraphy, Schlieren technique, classical or holographic interferometry, differential interferometry | | Raman scattering |
| With seeding    | Photography or holography of particle fluorescence induced by laser | Interferometric laser velocimetry, particle image velocimetry, holographic particle image velocimetry | Laser anemometry |

For example, differential interferometry with polarized laser light produced impressive results in aerodynamics (Figure 5). With this method, the fringe deformation is proportional to the difference in optical path through the airflow.
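The proportionality can be made concrete with a one-line relation: a fringe shift of N fringes corresponds to an optical path difference of N wavelengths. A minimal sketch (illustrative values and names, not ISL code):

```python
def fringe_shift(delta_opl_m, wavelength_m):
    """Number of fringes by which the pattern moves for a given
    optical path difference between the two sheared rays."""
    return delta_opl_m / wavelength_m

# A path difference of one wavelength displaces the pattern by exactly one fringe:
print(fringe_shift(532e-9, 532e-9))   # -> 1.0
# Half a wavelength of extra path in the airflow gives half a fringe:
print(fringe_shift(266e-9, 532e-9))   # -> 0.5
```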

Figure 5:

Differential interferogram of a supersonic projectile.


At the same time, ISL developed the technique of holographic cinematography [3]. In 1983, ISL made the first holographic movie at 24 Hz on a 35-mm film (a small train impacting a wall of plastic cubes). This movie was awarded the Gaumont Prize in 1985. The same year, ISL recorded the first holographic movie of a living person, at 25 Hz on a 126-mm film. 'Christiane et les holobulles' presented a young woman blowing soap bubbles [4]. This movie was awarded the 'Great International Prize of the Future' at the International Exhibition of Techniques and Energy of the Future in Toulouse, France.

Figure 6 shows the recording setup and the three-dimensional (3D) holographic image restitution used to build the movie, together with some frames from 'Christiane et les holobulles'.

Figure 6:

Recording and play back of a holographic movie: (A) recording setup, (B) holographic play back, and (C) some single frames of the movie ‘Christiane et les holobulles’.


However, the main field of research to benefit from these achievements in holography was non-destructive testing and control. On this topic, ISL collaborated with many companies to test different kinds of materials and structures. Holographic interferometry could reveal small defects in a very wide variety of materials subjected to different mechanical stresses. Material defects in rocket propellants, or gluing defects in composite materials or on helicopter blades, could be made visible by submitting the structure to a mechanical, thermal, pressure, or vibration load. The advent of holographic interferometry with a double reference beam gave access to quantitative displacement measurements. Figure 7 shows different results of holographic interferometry, including a vibration map of a car brake disk [5], the detection of a buried mine through the visualization of seismic wave propagation [6], and the visualization of a crack in a concrete wall [7].

Figure 7:

Some results of holographic interferometry: (A) vibration mode visualization of a car brake disk during rotation, (B) detection of a buried mine (depth 5 cm) by visualization of a seismic wave propagation (recording delay of 600 μs between the three holograms), and (C) crack visualization in a concrete wall: double reference holographic image, phased image, and pseudo relief representation.


4 Range-gated active imaging

The advent of digital holography familiarized the teams with low-light cameras and lighting techniques, which naturally led ISL to focus on active imaging for surveillance applications. Research on active imaging began in 2001. Very rapidly, the first prototypes of range-gated active imaging systems with a high technology readiness level (TRL) enabled new military applications, and many new fields of research related to this topic could be explored. Figure 8 shows some of the prototypes developed at ISL over the years for different kinds of applications [8].

Figure 8:

Some high TRL range-gated active imaging systems built at ISL: (A) a gated viewing system with an 808-nm laser diode illumination sold to the French army (STAT) in 2003 with a range of 3 km, (B) system with an eye-safe 1574 nm laser illumination sold to the French police in 2008 for soccer stadium surveillance (range: 3 km), (C) system for UAV detection and tracking from 2014 working at 1574 nm with a range of 2 km, (D) a very long-range surveillance system from 2015 working at 800 nm with a variable divergence/FOV from 0.2° to 5° and a maximum range of over 20 km, (E) an underwater system from 2015 with a 532-nm illumination, and (F) a dual-wavelength night vision goggle from 2016 working at 1530 nm and 1064 nm at a range of 200 m.


4.1 Performance modeling

To meet operational needs based on user specifications, an appropriate model of the system has to be worked out. To this end, ISL scientists developed a modeling environment under Matlab. The software takes most of the system and environmental parameters into account in order to predict the global performance of a system before its development (Figure 9).

Figure 9:

The image given by the system is the convolution of a theoretical image with the transfer function of the system, plus noise.


It includes the source, optics, intensifier, camera, and display parameters to provide the image quality and the energy balance of the global system. For the propagation medium, the visibility and the turbulence level are the two main parameters taken into account.
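The model of Figure 9 can be sketched in a few lines: the theoretical image is convolved with the system point spread function, and noise is added (a minimal Python/NumPy illustration of the principle, not the ISL Matlab environment; the PSF and noise level are assumed values):

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)

def degrade(theoretical, psf, noise_sigma):
    """Simulated sensor image: theoretical image convolved with the system PSF, plus noise."""
    h = np.zeros_like(theoretical, dtype=float)
    h[:psf.shape[0], :psf.shape[1]] = psf                  # zero-padded PSF
    blurred = np.real(ifft2(fft2(theoretical) * fft2(h)))  # circular convolution via FFT
    return blurred + rng.normal(0.0, noise_sigma, theoretical.shape)

scene = np.zeros((32, 32)); scene[16, 16] = 1.0  # point target
psf = np.full((3, 3), 1.0 / 9.0)                 # simple 3x3 box PSF (assumed)
img = degrade(scene, psf, noise_sigma=0.0)
print(round(float(img.max()), 3))                # point spread over 9 pixels -> 0.111
```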

4.2 Illumination techniques

The performance of a range-gated active imaging system depends on the quality of the illumination. Depending on the nature of the laser source (solid-state laser or semiconductor laser), a range-gated active imaging system works either in flash mode (one laser pulse per image) or in accumulation mode (multiple laser pulses per image) to collect a sufficient number of reflected photons [9], [10].
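The benefit of accumulation can be quantified under a shot-noise-limited assumption (our simplification): collecting N pulses multiplies the signal by N but the noise only by √N, so the signal-to-noise ratio grows as √N. A minimal sketch:

```python
import math

def snr_accumulated(photons_per_pulse, n_pulses):
    """Shot-noise-limited SNR after accumulating n_pulses laser pulses."""
    signal = photons_per_pulse * n_pulses
    return signal / math.sqrt(signal)   # SNR = sqrt(collected photons)

print(snr_accumulated(100, 1))    # flash mode, one pulse      -> 10.0
print(snr_accumulated(100, 100))  # accumulation of 100 pulses -> 100.0
```

This is why a low-peak-power laser diode can still reach long ranges when the sensor integrates many pulses, whereas a single-pulse (flash) system needs the high peak power of a solid-state laser.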

4.2.1 Semiconductor laser source

Laser diodes are the laser source best suited to building an efficient laser illuminator. Their advantages are numerous: high electrical-to-optical efficiency, high mean power, and compactness of both the emitting component and the power management. Their only drawback is a poor and strongly asymmetrical beam parameter product (BPP) between the two axes, but this can be overcome by different collimation techniques. With these techniques, a high-quality illumination can be produced with high homogeneity and, most importantly, without speckle.

A lot of work has been done at ISL to increase the illumination quality [11], [12], [13], [14], [15]. The approach consists in equalizing the BPP between the two axes as far as possible, using a light duct or an optical fiber, and then collimating the output with a collimation lens. Figure 10 shows two techniques developed at ISL: the light duct and restacking glass plates.

Figure 10:

Two collimation techniques used to homogenize the output light of a laser diode stack. The first uses a wedge waveguide and the second uses restacking glass plates.


This results in a very homogeneous, top-hat distribution of light in the far field. These laser illuminators use high-power laser diodes emitting at a wavelength between 800 and 860 nm. The laser divergence can be adjusted from a few hundred mrad down to 2 mrad for the smallest field of view, and the mean illumination power can be as high as 30 W. With this technique, we are able to visualize objects at distances of over 20 km.

4.2.2 Solid state laser source

In some cases, in particular for SWIR imaging when the sensor cannot work in accumulation mode, the use of a solid-state laser as the illumination source cannot be avoided. In this case, one image corresponds to one laser pulse. However, solid-state lasers are highly coherent, which increases the inhomogeneity of the illuminated field through speckle and also causes parasitic interference and diffraction effects.

In this case, it is important to find solutions that break down the coherence of the source by increasing the spatial diversity. The trick is to sample the incident laser beam into a high number of mutually uncorrelated beamlets. This can be done using adapted light pipes [16], optical steppers [17], or both [18]. The result is a very homogeneous, top-hat field of illumination (Figure 11). The left image of Figure 11 was recorded with coherent illumination and has a high speckle contrast; the right image was recorded with homogenized light, and the speckle contrast is significantly reduced.

Figure 11:

Laser illumination in SWIR range-gated imaging with (A) a classical and (B) an ISL waveguide illumination.


Figure 12 gives some examples of images taken with a SWIR active imaging system. Depending on the integration time, the depth of the visualized range gate differs. Figure 12B shows the result with a long integration time (500 ns): all the objects in the background and the foreground are visible. For Figure 12C–E, we used a short integration time, which selects only a narrow range gate in space and, therefore, visualizes only a small depth of the scene. The different images were taken with different delays between the laser pulse and the opening of the camera. In this case, simple processing of the images can give the distance of the object for each pixel. We will take a closer look at this property in the next section.
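The relation between timing and gate geometry follows from the round trip of the light (standard gated-viewing relations; the 10-ns pulse width below is an assumed value, not taken from the figure):

```python
C = 299_792_458.0  # speed of light in m/s

def gate_distance_m(delay_s):
    """Distance of the range gate for a given laser-to-camera delay."""
    return C * delay_s / 2.0   # divide by 2: the light travels out and back

def gate_depth_m(pulse_s, gate_s):
    """Depth of the visualized slice for a given pulse width and gate time."""
    return C * (pulse_s + gate_s) / 2.0

# The 500-ns integration time of Figure 12B with an assumed 10-ns pulse:
print(round(gate_depth_m(10e-9, 500e-9), 1))   # -> 76.4 (meters of scene depth)
# A 2-ns gate with a 2-ns pulse selects a slice only ~0.6 m deep:
print(round(gate_depth_m(2e-9, 2e-9), 2))      # -> 0.6
```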

Figure 12:

Example of SWIR range-gated imaging with long (B) and short integration times (C–E) compared with a color camera image (A).


4.3 3D range imaging

When using a range-gated active imaging system, the gate distance is given by the delay between the laser pulse and the camera opening time. As a matter of fact, each image of a sequence is time-stamped, which makes it possible to use this information for a 3D reconstruction of the scene. In recent years, ISL has developed two techniques for 3D scene visualization [19]. The tomographic method uses a very thin slice of light (short pulse width and gate duration), which is translated through the depth of the scene by varying the sensor delay [20].

As discussed in the literature [21], all the images produced are processed and used to reconstruct a virtual 3D model of the scene. The precision of the 3D model thus depends on the thickness of the light slice and on the step used to scan the scene. In practice, one has to compromise between precision, processing time, and the reconstruction algorithm (weighted mean, correlation, maximum detection…). Figure 13 shows the principle of this method, the resulting range gates at different distances, and the 3D reconstructed landscape. In this example, the reconstructed field, sampled by 200 images, has a depth of about 1 km.
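The maximum-detection variant mentioned above reduces, per pixel, to finding the slice with the strongest return and converting its index to a distance. A toy NumPy sketch under our own naming, not the ISL implementation:

```python
import numpy as np

def z_map_from_slices(stack, z_start_m, z_step_m):
    """stack: (n_slices, H, W) gated images; returns per-pixel depth in meters."""
    idx = np.argmax(stack, axis=0)      # slice of maximum return for each pixel
    return z_start_m + idx * z_step_m

# Toy scene: two pixels whose returns peak in different slices
stack = np.zeros((5, 1, 2))
stack[1, 0, 0] = 1.0   # pixel 0 peaks in slice 1
stack[4, 0, 1] = 1.0   # pixel 1 peaks in slice 4
print(z_map_from_slices(stack, z_start_m=100.0, z_step_m=5.0))
# -> [[105. 120.]]
```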

Figure 13:

Principle of the 3D imaging tomographic method.


Figure 14 shows two more results of 3D reconstruction [21]. By adapting the pulse width, this method can digitize either a kilometric or a centimetric scene. The left-hand image was acquired with a pulse width and exposure time of 200 ns, i.e. a range gate of 60 m; the right-hand image was acquired with a pulse width and exposure time of 2 ns, i.e. a range gate of 60 cm depth. In both cases, the method gives very good results in terms of image quality but, even though the image acquisition and processing are automated, translating the range gate through the depth of the scene still takes a few seconds.

Figure 14:

3D reconstruction results of (A) a kilometer depth scene and (B) a human body.


To overcome this drawback and to enable the use of active imaging in stealthy applications (military applications where the use of a laser illuminator has to be as covert as possible), we developed a second method based on the use of only two images. Here, the image acquisition process needs only two short pulses of light and exposure times longer than the laser pulse width. In this case, the depth-intensity profile includes a plateau (see Figure 15).

Figure 15:

The formation of range gates can be seen in the space-time diagram (A). A 3D reconstruction using intensity correlation can be obtained from areas with overlapping range gates (B).


In this case, for a given sensor delay, i.e. for a given positioning of the depth-intensity profile, the rising and falling edges of the profile contain the 3D information modulated by the object reflectance, while the plateau contains only the reflectance information. Based on this intrinsic property of the depth-intensity profile, one can extract the 3D information of a scene using a minimum of two images. Figure 15 shows that, using two images and an exposure time twice the pulse width, it is possible to obtain equal widths for the rising edge, the plateau, and the falling edge. The index 0 denotes the sensor delay, and the indices i and i+1 denote the two successive images, whose gated viewing delays are shifted by Δzr or Δzf.

Comparing the intensity of the plateau Ip,i from image i with the linear rising edge Ir,i+1 from image i+1, and the linear falling edge If,i from image i with the intensity of the plateau Ip,i+1 from image i+1, the depth of the scene can be calculated from [10]:

Rising edge: z = z0,i + (Ir,i+1 / Ip,i) Δzr,i+1   (1)

and

Falling edge: z = z0,i + (1 − If,i / Ip,i+1) Δzf,i   (2)

As shown by Eqs. (1) and (2), the 3D reconstruction consists in acquiring two images (i and i+1) of the scene, shifted in depth by half of the pulse width, and dividing one image by the other. A simple test on the gray levels determines which equation applies. Figure 16 shows an example of a 3D reconstruction with the two-image method.
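Per pixel, Eqs. (1) and (2) reduce to a ratio of the two recorded gray levels. The sketch below assumes, for simplicity, equal shifts Δzr = Δzf = Δz (half of the pulse width expressed as a depth) and uses a simple gray-level comparison to select the applicable equation; it illustrates the principle and is not the ISL code:

```python
import numpy as np

def depth_two_images(I_i, I_ip1, z0_i, dz):
    """I_i, I_ip1: the two gated images; dz: depth shift (half pulse width)."""
    z = np.empty_like(I_i, dtype=float)
    rising = I_ip1 <= I_i          # pixel sits on the rising edge of image i+1
    # Eq. (1): z = z0,i + (Ir,i+1 / Ip,i) * dz   (here I_i holds the plateau value)
    z[rising] = z0_i + (I_ip1[rising] / I_i[rising]) * dz
    # Eq. (2): z = z0,i + (1 - If,i / Ip,i+1) * dz (here I_ip1 holds the plateau value)
    z[~rising] = z0_i + (1.0 - I_i[~rising] / I_ip1[~rising]) * dz
    return z

I_i   = np.array([1.0, 0.5])   # pixel 0: plateau; pixel 1: falling edge
I_ip1 = np.array([0.5, 1.0])   # pixel 0: rising edge; pixel 1: plateau
print(depth_two_images(I_i, I_ip1, z0_i=950.0, dz=100.0))
# -> [1000. 1000.] (both toy pixels lie at 1000 m)
```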

Figure 16:

Examples for intensity correlation: (A) and (B) are the two recorded images, (C) is the Z-map, and (D) is the reconstructed 3D scene model.


In this example, a pulse width of 2 μs and an exposure time of 4 μs were used; z0,i was placed at 950 m. These values allow the reconstruction of a scene of 600-m depth. The same method also applies to small objects; in this case, one has to use smaller pulse widths and exposure times. Figure 17 gives an example of the 3D reconstruction of a head model. Here, we used a pulse width of 1 ns and an exposure time of 2 ns. The reconstructed depth was about 30 cm.

Figure 17:

Examples for face recognition with an artificial head: (A) 3D scene model and (B) 3D depth of scene visualization.


5 Computational imaging to see around the corner

In the previous sections of this paper, it was assumed that the target to visualize was in the line of sight of the vision system, even if the range was very long. For instance, by recording intensity images, we measure the photon flux impinging on a focal plane array (FPA) from a certain observation direction. The number of photons, or the intensity, is interpreted as a convolution between the illumination laser pulse and the surface reflectance properties of the line-of-sight scene. In non-line-of-sight (NLoS) imaging, we take into account that a typical photon bounces off multiple surfaces before our FPA detector can detect it. The aim of the computational imaging approach is to reveal information about objects outside the direct line of sight, carried by multi-bounce photons. Here, we assume that every surface has a Lambertian bidirectional reflectance distribution function (BRDF), leading to diffuse omnidirectional reflection of the light and a total loss of direct imaging information.

At each diffuse reflection, the photons coming from the target surface spread into the whole space, and the number of photons traveling in the direction of the detector decreases rapidly. As depicted in Figure 18, we typically assume a three-bounce scenario, neglecting higher numbers of bounces and treating them as a blurred background signal. Further, in our analysis, we assume that the photons bounce off three surfaces by diffuse reflection, propagating from a light source to a spot on a relay surface, then to the target surface, then to a sensing area, and, finally, to the detector.

Figure 18:

Non-line-of-sight imaging relies on a three-bounce scenario (A) to analyze photons that bounce off surfaces outside the direct field of view. A typical application of NLoS sensing could be (B) the detection of hidden persons inside a building.


At ISL, we are investigating two different computational imaging approaches, based either on ordinary intensity images or on highly precise measurements of the photons' times of flight. Figure 19 illustrates the first approach. A continuous-wave laser pointer (a fiber-coupled SemiNex 4PN-108 laser diode; SemiNex, Peabody, MA, USA) illuminates a spot on a relay surface, and the light propagates into the hidden scene.

Figure 19:

Analysis-by-synthesis approach for the non-line-of-sight tracking of the spatial motion (three dimensions) and rotation (three axes) of a hidden object using a continuous-wave laser pointer and an ordinary intensity camera.


Then, an ordinary intensity camera (Xenics Xeva-1.7-320, Xenics, Leuven, Belgium) samples the blurred intensity distribution of the light coming from the target and illuminating the relay surface. In our approach, we use an analysis-by-synthesis algorithm to estimate the position and orientation of the target with six degrees of freedom (x, y, z, θx, θy, θz). This method renders a synthetic image from a scene hypothesis and compares the result with the measured data. The algorithm then calculates the most probable position and orientation of the target by iteratively optimizing the scene hypothesis. Our algorithm is able to track an object within an area of several meters in real time (25 Hz) with centimetric precision [22].
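The analysis-by-synthesis loop can be illustrated in one dimension: render a signal from a position hypothesis, compare it with the measurement, and refine the hypothesis. Everything below (the forward model, the coordinate-descent refinement) is a deliberately crude stand-in for the real renderer and optimizer:

```python
def render(pos, xs):
    # Hypothetical forward model: a blurred bump centred on the target position
    return [1.0 / (1.0 + (x - pos) ** 2) for x in xs]

def track(measured, xs, guess=0.0, step=0.5, iters=200):
    """Iteratively adjust the position hypothesis to match the measured signal."""
    def cost(p):
        return sum((m - r) ** 2 for m, r in zip(measured, render(p, xs)))
    for _ in range(iters):                      # shrinking-step coordinate descent
        guess = min((guess - step, guess, guess + step), key=cost)
        step *= 0.9
    return guess

xs = [i * 0.1 for i in range(-50, 51)]
measured = render(3.2, xs)                      # ground-truth target at x = 3.2
print(round(track(measured, xs), 2))            # converges to ~3.2, the true position
```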

Furthermore, the transient imaging approach uses highly precise measurements of the photon round-trip time. By knowing the time and the direction of arrival, we can reconstruct the position and some surface shapes by back-projection of the recorded information. Figure 20 shows an example of a raw data set and the result of the back-projection. In this case, the computational process reveals the positions of three hidden targets [23].
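The back-projection idea can be illustrated with a toy 2D example: each time-of-flight measurement constrains the hidden point to an arc around its relay-wall point, and the grid cell where the arcs agree collects the highest score. The co-located illumination/sensing spot and all numbers are our simplifying assumptions:

```python
import math

def backproject(measurements, grid, tol=0.05):
    """measurements: list of ((wx, wy), path_length); grid: candidate (x, y) cells."""
    score = {}
    for cell in grid:
        s = 0
        for (wx, wy), d in measurements:
            # Round-trip wall -> target -> wall for a co-located spot
            r = 2.0 * math.hypot(cell[0] - wx, cell[1] - wy)
            if abs(r - d) < tol:
                s += 1                 # this cell is consistent with the measurement
        score[cell] = s
    return max(score, key=score.get)   # cell where most arcs agree

target = (1.0, 2.0)                    # hidden target (ground truth)
wall_points = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.5, 0.0)]
meas = [((wx, wy), 2.0 * math.hypot(target[0] - wx, target[1] - wy))
        for wx, wy in wall_points]
grid = [(x / 10, y / 10) for x in range(0, 21) for y in range(0, 31)]
print(backproject(meas, grid))         # -> (1.0, 2.0)
```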

Figure 20:

Reconstruction of the non-line-of-sight position of a hidden object using a back-projection algorithm.


6 Conclusion

From the Cranz-Schardin camera to computational imaging, ISL has never stopped innovating. During its 60 years of existence, the institute has always taken advantage of the scientific and technological innovations of its time and put them at the service of its two governing ministries and, thus, of the military and security forces of both countries. The science of the photon, photonics, still has a bright future ahead, and there is no doubt that the latest discoveries and technological advances in this field will be applied to systems that allow our armed forces to maintain their technological superiority and our soldiers to carry out their missions with greater security.

References

  • [1]

    H. Schardin, J. SMPTE 71, 329–334 (1962).

  • [2]

    H. Schardin, J. SMPTE 61, 273–285 (1953).

  • [3]

    H. Pâques and P. Smigielski, in ‘Cinéholographie’ (C.R. Académie des Sciences, Paris, 1965) 260, pp. 6562–6564.

  • [4]

    P. Smigielski, H. Fagot and F. Albe, Proc. SPIE 600, 186–193 (1986).

  • [5]

    P. Smigielski, Proc. SPIE 1553, 436–446 (1992).

  • [6]

    F. Christnacher, P. Smigielski, A. Matwyschuk, M. Bastide and D. Fusco, Proc. SPIE 3745, 361–365 (1999).

  • [7]

    F. Christnacher, P. Smigielski, A. Matwyschuk, D. Fusco and Y. Guillard, Proc. SPIE 3745, 239–243 (1999).

  • [8]

    M. Laurenzis and F. Christnacher, Adv. Opt. Technol. 2, 397–405 (2013).

  • [9]

    F. Christnacher, M. Laurenzis and S. Schertzer, Proc. SPIE 8896, (2013).

  • [10]

    F. Christnacher, M. Laurenzis and S. Schertzer, Opt. Eng. 53, 043106 (2014).

  • [11]

    Y. Lutz, J.-M. Poyet and N. Metzger, Proc. SPIE 8896, 889608 (2013).

  • [12]

    Y. Lutz and N. Metzger, Proc. SPIE 8170, 81700C (2011).

  • [13]

    Y. Lutz and M. Laurenzis, Adv. Opt. Technol. 3, 179–185 (2014).

  • [14]

    Y. Lutz and J.-M. Poyet, Opt. Laser Technol. 57, 90–95 (2014).

  • [15]

    Y. Lutz, E. Bacher and S. Schertzer, Opt. Laser Technol. 96, 1–6 (2017).

  • [16]

    M. Laurenzis, Y. Lutz, F. Christnacher, A. Matwyschuk and J.-M. Poyet, Opt. Eng. 51, 061302 (2012).

  • [17]

    Y. Lutz, SPIE Newsroom (2015). Available at: http://spie.org/news/6197-instantaneous-generation-of-spatial-diversity-for-speckle-contrast-reduction.

  • [18]

    J.-M. Poyet and Y. Lutz, Opt. Eng. 55, 075103 (2016).

  • [19]

    F. Christnacher, M. Laurenzis, D. Monnin, S. Schertzer and G. Schmitt, in ‘8th NATO Symposium on Military Sensing, RTO-MP-SET 169, Friedrichshafen’ (Germany, 2011).

  • [20]

    D. Monnin, A. L. Schneider, F. Christnacher and Y. Lutz, in ‘A 3D outdoor scene scanner based on a night-vision range-gated active imaging system. 3rd Int. Symposium on 3DPVT’ (Chapel Hill, US, 2006).

  • [21]

    F. Christnacher, M. Laurenzis, D. Monnin, G. Schmitt, N. Metzger, et al., Proc. SPIE 9250, (2014). DOI: https://doi.org/10.1117/12.2066817.

  • [22]

    J. Klein, C. Peters, J. Martín, M. Laurenzis and M. B. Hullin, Nat. Sci. Rep. 6, 32491 (2016).

  • [23]

    M. Laurenzis and A. Velten, J. Electron. Imaging 23, 063003 (2014).

If the inline PDF is not rendering correctly, you can download the PDF file here.

  • [1]

    H. Schardin, J. SMPTE 71, 329–334 (1962).

  • [2]

    H. Schardin, J. SMPTE 61, 273–285 (1953).

  • [3]

    H. Pâques and P. Smigielski, in ‘Cinéholographie’ (C.R. Académie des Sciences, Paris, 1965) 260, pp. 6562–6564.

  • [4]

    P. Smigielski, H. Fagot and F. Albe, Proc. SPIE 600, 186–193 (1986).

  • [5]

    P. Smigielski, Proc. SPIE 1553, 436–446 (1992).

  • [6]

    F. Christnacher, P. Smigielski, A. Matwyschuk, M. Bastide and D. Fusco, Proc. SPIE 3745, 361–365 (1999).

  • [7]

    F. Christnacher, P. Smigielski, A. Matwyschuk, D. Fusco and Y. Guillard, Proc. SPIE 3745, 239–243 (1999).

  • [8]

    M. Laurenzis and F. Christnacher, Adv. Opt. Technol. 2, 397–405 (2013).

  • [9]

    F. Christnacher, M. Laurenzis and S. Schertzer, Proc. SPIE 8896 (2013).

  • [10]

    F. Christnacher, M. Laurenzis and S. Schertzer, Opt. Eng. 53, 043106 (2014).

  • [11]

    Y. Lutz, J.-M. Poyet and N. Metzger, Proc. SPIE 8896, 889608 (2013).

  • [12]

    Y. Lutz and N. Metzger, Proc. SPIE 8170, 81700C (2011).

  • [13]

    Y. Lutz and M. Laurenzis, Adv. Opt. Technol. 3, 179–185 (2014).

  • [14]

    Y. Lutz and J.-M. Poyet, Opt. Laser Technol. 57, 90–95 (2014).

  • [15]

    Y. Lutz, E. Bacher and S. Schertzer, Opt. Laser Technol. 96, 1–6 (2017).

  • [16]

    M. Laurenzis, Y. Lutz, F. Christnacher, A. Matwyschuk and J.-M. Poyet, Opt. Eng. 51, 061302 (2012).

  • [17]

    Y. Lutz, SPIE Newsroom (2015). Available at: http://spie.org/news/6197-instantaneous-generation-of-spatial-diversity-for-speckle-contrast-reduction.

  • [18]

    J.-M. Poyet and Y. Lutz, Opt. Eng. 55, 075103 (2016).

  • [19]

    F. Christnacher, M. Laurenzis, D. Monnin, S. Schertzer and G. Schmitt, in ‘8th NATO Symposium on Military Sensing, RTO-MP-SET 169’ (Friedrichshafen, Germany, 2011).

  • [20]

    D. Monnin, A. L. Schneider, F. Christnacher and Y. Lutz, ‘A 3D outdoor scene scanner based on a night-vision range-gated active imaging system’, in ‘3rd Int. Symposium on 3DPVT’ (Chapel Hill, NC, USA, 2006).

  • [21]

    F. Christnacher, M. Laurenzis, D. Monnin, G. Schmitt, N. Metzger, et al., Proc. SPIE 9250 (2014). DOI: https://doi.org/10.1117/12.2066817.

  • [22]

    J. Klein, C. Peters, J. Martín, M. Laurenzis and M. B. Hullin, Sci. Rep. 6, 32491 (2016).

  • [23]

    M. Laurenzis and A. Velten, J. Electron. Imaging 23, 063003 (2014).

  • Cranz-Schardin chronolens with spark illumination. This camera contributed to spectacular progress in the comprehension of aerodynamics and gas physics.

  • Visualization of the propagation of a planar shock wave over a VW Beetle model (A) and of the propagation of a kinetic projectile through an apple (B).

  • Images from a movie visualizing the shock wave in a glass plate caused by a projectile impact: title screens (A, B) and images (C–E) taken at a frame rate of 6.5 Mfps.

  • First visualization of projectile formation from a spherical shaped charge in the 1940s (A), and visualization of projectile formation from a conical shaped charge and of its effect on armor (B).

  • Differential interferogram of a supersonic projectile.

  • Recording and playback of a holographic movie: (A) recording setup, (B) holographic playback, and (C) single frames of the movie ‘Christiane et les holobulles’.

  • Results of holographic interferometry: (A) visualization of the vibration modes of a car brake disk during rotation, (B) detection of a buried mine (depth: 5 cm) by visualization of seismic wave propagation (recording delay of 600 μs between the three holograms), and (C) crack visualization in a concrete wall: double-reference holographic image, phased image, and pseudo-relief representation.

  • High-TRL range-gated active imaging systems built at ISL: (A) a gated viewing system with 808-nm laser diode illumination sold to the French army (STAT) in 2003, with a range of 3 km; (B) a system with eye-safe 1574-nm laser illumination sold to the French police in 2008 for soccer stadium surveillance (range: 3 km); (C) a system for UAV detection and tracking from 2014, working at 1574 nm with a range of 2 km; (D) a very long-range surveillance system from 2015, working at 800 nm with a variable divergence/FOV from 0.2° to 5° and a maximum range of over 20 km; (E) an underwater system from 2015 with 532-nm illumination; and (F) a dual-wavelength night vision goggle from 2016, working at 1530 nm and 1064 nm at a range of 200 m.

  • The image delivered by the system is the theoretical image convolved with the transfer function of the system, plus noise.
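The imaging model in the caption above (recorded image = theoretical image convolved with the system transfer function, plus noise) can be sketched numerically. This is an illustrative toy example, not ISL's processing chain; all names and parameters are hypothetical, and a shift-invariant PSF with additive Gaussian noise is assumed.

```python
import numpy as np

def record_image(ideal, psf, noise_sigma, rng=None):
    """Toy imaging chain: recorded = ideal (*) PSF + Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Zero-pad the PSF to the image size and convolve via FFT
    # (circular convolution; adequate for this illustration).
    pad = np.zeros_like(ideal)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    blurred = np.real(np.fft.ifft2(np.fft.fft2(ideal) * np.fft.fft2(pad)))
    return blurred + rng.normal(0.0, noise_sigma, ideal.shape)

# A point source blurred by a normalized 3x3 box PSF, without noise:
# the point spreads into a 3x3 patch while the total energy is preserved.
ideal = np.zeros((8, 8))
ideal[4, 4] = 1.0
psf = np.full((3, 3), 1.0 / 9.0)
recorded = record_image(ideal, psf, noise_sigma=0.0)
```

With a nonzero `noise_sigma`, the same call illustrates why deconvolution of real gated images is ill-posed: the noise term is not invertible together with the blur.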

  • Two collimation techniques used to homogenize the output light of a laser diode stack: the first uses a wedge waveguide, the second restacked glass plates.

  • Laser illumination in SWIR range-gated imaging with (A) a classical illumination and (B) an ISL waveguide illumination.

  • Example of SWIR range-gated imaging with a long integration time (B) and short integration times (C–E), compared with a color camera image (A).

  • Principle of the 3D tomographic imaging method.

  • 3D reconstruction results of (A) a scene of kilometer depth and (B) a human body.

  • The formation of range gates can be seen in the space-time diagram (A). A 3D reconstruction using intensity correlation can be obtained from areas with overlapping range gates (B).
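The intensity-correlation idea in the caption above can be sketched as follows: where two range gates overlap and their sensitivity profiles vary linearly with distance, a pixel's depth can be estimated from the normalized intensity ratio of the two gated images. This is a hedged toy sketch, not ISL's actual algorithm; the gate profiles and range values are hypothetical.

```python
import numpy as np

def depth_from_gates(i1, i2, z_start, z_end):
    """Map the normalized ratio i2/(i1+i2) linearly onto depth (Z-map),
    assuming gate 1 falls and gate 2 rises linearly over [z_start, z_end]."""
    ratio = i2 / np.maximum(i1 + i2, 1e-12)  # guard against division by zero
    return z_start + ratio * (z_end - z_start)

# Simulate a target at 150 m inside a 100-300 m gate-overlap region.
z_true = 150.0
i1 = (300.0 - z_true) / 200.0  # linearly decreasing gate profile
i2 = (z_true - 100.0) / 200.0  # linearly increasing gate profile
z_est = depth_from_gates(np.array([i1]), np.array([i2]), 100.0, 300.0)
```

The normalization by `i1 + i2` makes the estimate independent of target reflectivity, which is the key property that turns two gated intensity images into a per-pixel Z-map.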

  • Examples of intensity correlation: (A) and (B) are the two recorded images, (C) is the Z-map, and (D) is the reconstructed 3D scene model.

  • Example of face recognition with an artificial head: (A) 3D scene model and (B) 3D depth-of-scene visualization.

  • Non-line-of-sight imaging relies on a three-bounce scenario (A) to analyze photons that bounce off surfaces outside the direct field of view. A typical application of NLoS sensing could be (B) the detection of hidden persons inside a building.

  • Analysis-by-synthesis approach for non-line-of-sight tracking of the spatial motion (three dimensions) and rotation (three axes) of a hidden object, using a continuous-wave laser pointer and an ordinary intensity camera.

  • Reconstruction of the non-line-of-sight position of a hidden object using a back-projection algorithm.
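A minimal 2D sketch of such a back-projection reconstruction, under stated assumptions (known laser and observation spots on a relay wall, simulated three-bounce path lengths; all coordinates and thresholds are hypothetical): each time-of-flight measurement confines the hidden object to an ellipse with the two wall spots as foci, and accumulating these ellipses in a voting grid localizes the object at the vote maximum.

```python
import numpy as np

laser_spots = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # on the wall
sensor_spot = np.array([1.0, 0.0])                            # observed spot
hidden = np.array([1.2, 1.5])                                 # ground truth

# Simulated three-bounce path lengths: laser spot -> object -> sensor spot.
paths = [np.linalg.norm(hidden - s) + np.linalg.norm(hidden - sensor_spot)
         for s in laser_spots]

# Back-projection: vote each measurement's elliptical shell into a grid.
xs, ys = np.meshgrid(np.linspace(0.0, 2.0, 201), np.linspace(0.5, 2.0, 151))
votes = np.zeros_like(xs)
for s, d in zip(laser_spots, paths):
    grid_d = (np.hypot(xs - s[0], ys - s[1])
              + np.hypot(xs - sensor_spot[0], ys - sensor_spot[1]))
    votes += np.abs(grid_d - d) < 0.02  # thin elliptical shell of votes

# The cell where all ellipses intersect collects the most votes.
best = np.unravel_index(votes.argmax(), votes.shape)
estimate = np.array([xs[best], ys[best]])
```

Real NLoS back-projection works in 3D with ellipsoids and time-resolved detectors, but the voting principle is the same: consistency across many laser positions singles out the hidden location.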