
Current Directions in Biomedical Engineering

Joint Journal of the German Society for Biomedical Engineering in VDE and the Austrian and Swiss Societies for Biomedical Engineering


Video tracking of swimming rodents on a reflective water surface

Olaf Christ (corresponding author) and Ulrich G. Hofmann
Section for Neuroelectronic Systems, Clinic for Neurosurgery, University Medical Center Freiburg, Engesser Str. 4, 79108 Freiburg, Germany (tel.: +49(0)761-270-50072; fax: +49(0)761-50081)
Published Online: 2015-09-12 | DOI: https://doi.org/10.1515/cdbme-2015-0058

Abstract

Animal models are an essential testbed for new devices on their path from the bench to the patient. Potential impairments caused by brain stimulation are often investigated in water mazes to study spatial memory and learning. Video-camera-based tracking systems exist to quantify rodent behaviour, but reflections of ambient lighting on the water surface and artefacts due to the waves caused by the swimming animal introduce errors. This often requires tweaking algorithms and parameters, or even modifying the lab setup. In the following, we provide a simple solution that alleviates these problems by combining region-based tracking with independent multimodal background subtraction (IMBS), without having to tweak a plethora of parameters.

Keywords: Video tracking; water maze; region based tracking; independent multimodal background subtraction

1 Introduction

Deep-brain stimulation as well as cochlea and retina implants are the most prominent, though not the only, examples of modern therapeutic devices dubbed electroceuticals [7]. These, like all other kinds of medical devices, have to prove their efficacy and safety before widespread use in patients is permitted. Yet, some aspects of these devices are permanently under investigation, such as the potential side effects deep-brain stimulation might have on cognition and learning in patients. Our research aims at shedding light on this question in the context of brain-stimulated animal models in a forced learning paradigm in a water maze. Water mazes are laboratory devices classically used for probing hippocampal functions predominantly associated with working and spatial memory and spatial navigation [8–10]. Rats are trained to locate the site of a hidden escape platform using visual cues located around the maze. As the training progresses, the rat gets better at finding the platform and takes an increasingly direct route towards it. This is considered a result of learning the position of the platform. Because the behavioural experiments usually involve several repetitions and many rats, automatic video analysis promises to be an attractive alternative to manual analysis. In reality, however, reflections of lighting on the water surface can render such systems useless, because movement artefacts then have to be removed by hand from the tracking data. Furthermore, current systems usually require the user to understand and tweak a lot of parameters. While the problems arising from ambient lighting can often be alleviated by modifying the lab's illumination, reflections on the water surface and artefacts due to waves caused by the animal swimming through the water are harder to overcome. To that end, we propose combining independent multimodal background subtraction (IMBS) [1] with region-based tracking [2]. Since the region-based tracking algorithm itself operates only on binary images, it can work on any input segmented by a separate algorithm. Such an algorithm can either provide a complete binary image, as is the case with IMBS, or it can be incorporated into the tracking process as a homogeneity criterion, making binary decisions locally only where they are actually needed.

2 Material and methods

For our behavioural experiments on brain-stimulated rodents, we routinely use the double-H maze [4], a novel water-escape task addressing the otherwise lacking control over the rodents' strategy, i.e. the use of allocentric (declarative-like) or egocentric memory to reach the goal [11]. All protocols were approved by the Animal Care Committee of the University of Freiburg, and efforts were made to minimize the number of animals used with respect to statistical constraints. A video camera is mounted 3 m above the maze to film the experiment. To prevent the experimenter from accidentally being tracked, we use a simple binary mask to limit the tracking to the area of interest. The procedure consists of the following steps:

  1. Apply histogram equalization to the input video frame.

  2. Apply the hand-painted “no track” mask to the image to remove pixels outside the area of interest.

  3. Threshold the image to identify (most of) the reflections caused by the lights.

  4. Dilate the result from step 3 to also cover the movement of less bright reflections that IMBS might erroneously classify as foreground.

  5. Run IMBS on the equalized input image.

  6. Erode and dilate the IMBS foreground mask.

  7. Use the dilated threshold image from step 4 to remove unwanted pixels from the IMBS foreground mask, thereby getting rid of reflections tracked by IMBS.

  8. Use the resulting binary image as input for region tracking to actually track the area corresponding to the swimming rat.

Figure 1 illustrates the procedure graphically; a code sketch of these pre-processing steps is given below.

Figure 1: Step 1: Take a video frame. Step 2: Apply histogram equalization to the input frame. Step 3: Binarize the equalized image to identify bright reflections. Step 5: Run IMBS and exclude previously identified bright reflections from tracking. For visualization purposes, the excluded reflections are shown in blue and the trackable regions, classified as foreground pixels, are shown in green. The pink rectangle is not part of the tracking procedure and only marks the position of the rat.
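For concreteness, the following sketch shows how the pre-processing steps 1–7 above could be expressed with OpenCV in C++. It is an illustrative sketch under stated assumptions rather than our exact implementation: the IMBS call is passed in as a placeholder callback, and the threshold value, the structuring-element size and the luminance-channel equalization are assumptions chosen only for this example.

// Sketch of the pre-processing pipeline (steps 1-7); the region tracker
// then consumes the returned binary image (step 8).
#include <functional>
#include <vector>
#include <opencv2/opencv.hpp>

cv::Mat preprocessFrame(const cv::Mat& frameBGR,
                        const cv::Mat& noTrackMask,   // hand-painted "no track" mask (CV_8U)
                        const std::function<cv::Mat(const cv::Mat&)>& runIMBS) // placeholder for IMBS
{
    // Step 1: histogram equalization (applied to the luminance channel here).
    cv::Mat ycrcb, equalized;
    cv::cvtColor(frameBGR, ycrcb, cv::COLOR_BGR2YCrCb);
    std::vector<cv::Mat> channels;
    cv::split(ycrcb, channels);
    cv::equalizeHist(channels[0], channels[0]);
    cv::merge(channels, ycrcb);
    cv::cvtColor(ycrcb, equalized, cv::COLOR_YCrCb2BGR);

    // Step 2: apply the "no track" mask to remove pixels outside the area of interest.
    cv::Mat masked;
    equalized.copyTo(masked, noTrackMask);

    // Step 3: threshold to identify (most of) the bright reflections.
    cv::Mat gray, reflections;
    cv::cvtColor(masked, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, reflections, 230, 255, cv::THRESH_BINARY);  // threshold value is an assumption

    // Step 4: dilate the reflection mask to also cover moving, less bright fringes.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::dilate(reflections, reflections, kernel);

    // Step 5: run IMBS on the equalized (and masked) frame to obtain a foreground mask.
    cv::Mat foreground = runIMBS(masked);

    // Step 6: erode and dilate the IMBS foreground mask to suppress speckle noise.
    cv::erode(foreground, foreground, kernel);
    cv::dilate(foreground, foreground, kernel);

    // Step 7: remove the dilated reflection pixels from the foreground mask.
    cv::Mat notReflections, trackInput;
    cv::bitwise_not(reflections, notReflections);
    cv::bitwise_and(foreground, notReflections, trackInput);

    return trackInput;  // step 8: input to the region-based tracker
}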

2.1 Background subtraction using IMBS

By necessity, all trials in the double-H maze are recorded on video for subsequent analysis of errors, latencies, first choices, etc. Before settling on IMBS, we evaluated all 35 algorithms provided by the Background Subtraction Library (bgslibrary) [5]. We decided on IMBS because it clearly outperformed its competitors when confronted with a dynamically changing water background. This is not surprising, since IMBS has been designed specifically for this kind of scenario. IMBS works as follows: let B(i, j) denote a set of tuples 〈r, g, b, d〉, where r, g, b are RGB values and d ∈ [1, N] is the number of sampled background pixels associated with those r, g, b values. Furthermore, let P denote the sampling period and N the number of background samples to analyze. A tuple 〈r, g, b, d〉 with d ≥ D is considered a significant background mode, where D is the minimal number of occurrences to consider. To cope with a dynamically changing background, gradual illumination changes, sensor noise, and the movement of small background elements, up to N/D tuples per element of B are used to approximate a multimodal probability distribution. IMBS does not model the background with a predefined distribution; instead, it discretizes the unknown distribution, and non-regular patterns are modeled by an adaptive number of tuples for each pixel.
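To make the background model more tangible, the following simplified sketch restates the per-pixel tuple idea described above. It is not the bgslibrary implementation of IMBS: the association rule (a fixed per-channel tolerance assoc) and its value are assumptions made only for this example, and the real algorithm additionally handles shadow suppression and periodic model refresh, which are omitted here.

// Simplified per-pixel background model in the spirit of IMBS: each pixel
// keeps an adaptive set of tuples <r, g, b, d>, and a tuple with d >= D
// counts as a significant background mode.
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Mode { std::uint8_t r, g, b; int d; };   // one tuple <r, g, b, d>

struct PixelModel {
    std::vector<Mode> modes;                    // B(i, j): adaptive number of tuples

    // Add one background sample; called once per sampling period P, N times in total.
    void addSample(std::uint8_t r, std::uint8_t g, std::uint8_t b, int assoc = 10) {
        for (Mode& m : modes) {
            if (std::abs(m.r - r) <= assoc && std::abs(m.g - g) <= assoc &&
                std::abs(m.b - b) <= assoc) {
                ++m.d;                          // the sample matches an existing mode
                return;
            }
        }
        modes.push_back({r, g, b, 1});          // otherwise start a new mode
    }

    // A pixel is classified as background if it matches a significant mode (d >= D).
    bool isBackground(std::uint8_t r, std::uint8_t g, std::uint8_t b,
                      int D, int assoc = 10) const {
        for (const Mode& m : modes) {
            if (m.d >= D && std::abs(m.r - r) <= assoc &&
                std::abs(m.g - g) <= assoc && std::abs(m.b - b) <= assoc)
                return true;
        }
        return false;
    }
};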

2.1.1 Improvements over standard IMBS

Not surprisingly, even though IMBS performs tremendously well, it does misclassify some reflections as part of the foreground. However, since those reflections are significantly brighter than the background, they can easily be identified by simple thresholding. Furthermore, the moment the rat enters a uniform area showing a reflection, that area becomes nonuniform; and since the rat itself is not reflective, reflections occurring at and around its position can be safely discarded.

3 Region based tracking

3.1 Tracking by growing and shrinking

The utilized tracking algorithm was first described in [2, 3] and combines well-known region growing [12–17] with its inverse operation, coined region shrinking [2, 3]. Furthermore, the algorithm provides the object's contour as a byproduct of the actual tracking procedure. The algorithm is extremely fast even on low-end computers, since it never operates on the whole image; on average, only 10 percent of the pixels have to be accessed per frame.

3.2 Algorithm description

Although the algorithm is described in great detail in [2], we briefly summarize how growing and shrinking work together to track regions.

3.3 Seeding

We use a regular grid of seed pixels to find areas potentially satisfying a predefined criterion called the homogeneity criterion. Seed pixels not satisfying this criterion are discarded. The others are grown into seed regions and tracked over time as long as the homogeneity criterion is satisfied.
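On the binary images produced by the pre-processing pipeline, the homogeneity criterion reduces to "the pixel is foreground". The following sketch illustrates the seeding step under that assumption; the grid spacing and the floodFill-based region growing are placeholders chosen for this illustration and are not taken from [2].

// Seed a regular grid over a binary (CV_8U) image, discard seeds that do not
// satisfy the homogeneity criterion (foreground), and grow the rest into regions.
#include <cstdint>
#include <vector>
#include <opencv2/opencv.hpp>

// Grow the 4-connected foreground component around 'seed' and return it as a mask.
static cv::Mat growRegion(const cv::Mat& binary, cv::Point seed)
{
    cv::Mat img = binary.clone();
    cv::Mat mask = cv::Mat::zeros(binary.rows + 2, binary.cols + 2, CV_8U);  // floodFill needs a 2-pixel border
    cv::floodFill(img, mask, seed, 255, nullptr, cv::Scalar(), cv::Scalar(),
                  4 | cv::FLOODFILL_MASK_ONLY | (255 << 8));
    return mask(cv::Rect(1, 1, binary.cols, binary.rows)).clone();
}

static std::vector<cv::Mat> seedRegions(const cv::Mat& binary, int spacing = 20)
{
    std::vector<cv::Mat> regions;
    cv::Mat claimed = cv::Mat::zeros(binary.size(), CV_8U);   // pixels already owned by a region
    for (int y = 0; y < binary.rows; y += spacing) {
        for (int x = 0; x < binary.cols; x += spacing) {
            // Discard seeds violating the homogeneity criterion or lying in a grown region.
            if (binary.at<std::uint8_t>(y, x) == 0 || claimed.at<std::uint8_t>(y, x) != 0)
                continue;
            cv::Mat region = growRegion(binary, cv::Point(x, y));
            claimed |= region;
            regions.push_back(region);
        }
    }
    return regions;
}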

3.4 Shrinking

After growing a region in one frame, some part of that region might fall outside the tracked object in the following frame. Hence, the part of the region outside the displaced object has to be shrunk until it is back within the boundaries of the object. The process of shrinking is illustrated in Fig. 2 and Fig. 3.

Figure 2: From left to right: Region B has moved. A is the segmentation of region B in the previous frame. Since B has moved, parts of A and B intersect. Shrinking removes the part of A lying outside B until region A has been shrunk to the new boundaries of B; A must then be grown until B is fully segmented. Images used with permission.

Figure 3: From left to right: An initially grown area A shrinks to its intersection with the new area (B), before growing to the boundaries of B. Images used with permission.

3.5 Growing

Shrinking has moved the region back inside the tracked object. Regular region growing then takes care of the parts of the object no longer covered by the region, after which the region is again fully segmented.
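The interplay of the two operations can be condensed into a short sketch. Note that the actual algorithm [2, 3] works along the region boundary and therefore touches only a small fraction of the pixels per frame; the simplified version below revisits every pixel of the previous region and is meant only to convey the idea.

// One tracking update on a binary (CV_8U) frame: shrink the previous region to
// its intersection with the displaced object, then grow it back to the object's
// new boundaries (4-connected).
#include <cstdint>
#include <queue>
#include <opencv2/opencv.hpp>

// On binary input, the homogeneity criterion is simply "the pixel is foreground".
static inline bool homogeneous(const cv::Mat& binary, int y, int x)
{
    return binary.at<std::uint8_t>(y, x) != 0;
}

static cv::Mat updateRegion(const cv::Mat& binary, const cv::Mat& previousRegion)
{
    // Shrinking: keep only the part of the previous region that still lies on the object.
    cv::Mat region;
    cv::bitwise_and(previousRegion, binary, region);

    // Growing: expand from the surviving pixels until the object's boundary is reached.
    std::queue<cv::Point> frontier;
    for (int y = 0; y < region.rows; ++y)
        for (int x = 0; x < region.cols; ++x)
            if (region.at<std::uint8_t>(y, x) != 0)
                frontier.push(cv::Point(x, y));

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        cv::Point p = frontier.front();
        frontier.pop();
        for (int k = 0; k < 4; ++k) {
            const int nx = p.x + dx[k], ny = p.y + dy[k];
            if (nx < 0 || ny < 0 || nx >= region.cols || ny >= region.rows)
                continue;                       // outside the image
            if (region.at<std::uint8_t>(ny, nx) != 0 || !homogeneous(binary, ny, nx))
                continue;                       // already in the region or not foreground
            region.at<std::uint8_t>(ny, nx) = 255;
            frontier.push(cv::Point(nx, ny));
        }
    }
    return region;
}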

3.6 Making the contour

The boundary of a tracked region consists of a 4-connected chain of vectors, each representing one of the four sides of a pixel; the chain completely surrounds the region and ends at the beginning of its first vector. Hence, generating a contour is merely the process of following that chain and producing a list of contour points. This contour is always closed.
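Assuming the chain is stored as a start point plus a list of direction codes, contour generation amounts to a simple walk along the chain; the encoding below is our own illustrative assumption, not the data structure used in [2].

// Walk a closed, 4-connected chain of unit vectors and emit the contour points.
#include <vector>
#include <opencv2/opencv.hpp>

enum class Dir { Right, Down, Left, Up };       // the four possible pixel sides

static std::vector<cv::Point> chainToContour(cv::Point start, const std::vector<Dir>& chain)
{
    static const cv::Point step[4] = { {1, 0}, {0, 1}, {-1, 0}, {0, -1} };
    std::vector<cv::Point> contour;
    cv::Point p = start;
    for (Dir d : chain) {
        contour.push_back(p);                   // emit the current contour point
        p += step[static_cast<int>(d)];         // follow one unit vector of the chain
    }
    // Because the chain is closed, p has returned to 'start' at this point.
    return contour;
}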

4 Results

We have presented a very simple procedure which requires no parameter tuning and can be put to immediate use for tracking animals swimming in a water maze of arbitrary shape. The procedure was implemented on a standard Dell desktop PC with a Core i5 CPU and 16 GB of RAM. All processing steps are performed at an overall frame rate of about 15 frames per second, or roughly 60 ms per frame, with IMBS spending 55 ms per frame and the actual tracker only 5 ms per frame. Step 5 of Fig. 1 shows the immediate tracking result (note the lighting reflections) and Fig. 4 summarizes a rat's path on its way to the escape platform.

Figure 4: The path of the rat from the start to its escape platform. Outliers during tracking appearing further away than the length of the animal have been excluded from the path. The resulting path appears to be mostly unaffected by artefacts due to waves on the water surface.
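The outlier exclusion mentioned in the caption of Fig. 4 can be sketched as a simple distance gate on the tracked positions: a new point is kept only if it lies within roughly one body length of the last accepted point. Both this reading and the pixel value used for the body length are assumptions; the threshold would have to be calibrated from the camera geometry.

// Drop tracking outliers: discard any position farther than one body length
// (in pixels) from the last accepted position on the path.
#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

static std::vector<cv::Point2f> filterPath(const std::vector<cv::Point2f>& raw,
                                           float bodyLengthPx = 60.0f)  // assumed value
{
    std::vector<cv::Point2f> path;
    for (const cv::Point2f& p : raw) {
        if (!path.empty()) {
            const float dx = p.x - path.back().x;
            const float dy = p.y - path.back().y;
            if (std::sqrt(dx * dx + dy * dy) > bodyLengthPx)
                continue;                       // implausible jump: treat as an outlier
        }
        path.push_back(p);                      // plausible position: keep it
    }
    return path;
}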

5 Conclusion

While tracking animals in a water maze exhibiting moving reflections remains challenging, very promising results can be achieved using recently published algorithms and without having to tune a plethora of parameters. Our algorithm is in active use to analyze the behavioural implications of deep-brain stimulation in rats; the results are reported elsewhere.

Funding

This work was partially funded by the German Research Foundation (DFG) within the framework of the German Excellence Initiative as part of the Cluster of Excellence BrainLinks-BrainTools (EXC1086).

References

[1] D. Bloisi and L. Iocchi, "Independent multimodal background subtraction", in Proceedings of the Third International Conference on Computational Modeling of Objects Presented in Images: Fundamentals, Methods and Applications, Rome, Italy, pp. 39-44, 2012.

[2] F. v. Hundelshausen and R. Rojas, "Tracking regions and edges by shrinking and growing", in O. Drbohlav (ed.), Proceedings of the Computer Vision Winter Workshop (CVWW'03), pp. 33-38, February 2003.

[3] F. v. Hundelshausen and R. Rojas, "Tracking regions", in Proceedings of the RoboCup 2003 International Symposium, pp. 10-11, Padova, Italy, July 2003.

[4] S. Pol-Bodetto, H. Jeltsch-David, L. Lecourtier, N. Rusnac, C. Mam-Lam-Fook, B. Cosquer, K. Geiger, and J.-C. Cassel, "The double-H maze test, a novel, simple, water-escape memory task: acquisition, recall of recent and remote memory, and effects of systemic muscarinic or NMDA receptor blockade during training", Behavioural Brain Research, vol. 218, no. 1, pp. 138-151, 2011.

[5] A. Sobral, "BGSLibrary: an OpenCV C++ background subtraction library", in IX Workshop de Visão Computacional (WVC'2013), Rio de Janeiro, Brazil, 2013.

[6] G. Bradski, "The OpenCV Library", Dr. Dobb's Journal of Software Tools, 2000.

[7] K. Famm, B. Litt, K. J. Tracey, E. S. Boyden, and M. Slaoui, "A jump-start for electroceuticals", Nature, vol. 496, pp. 159-161, 2013.

[8] R. G. Morris, P. Garrud, J. N. Rawlins, and J. O'Keefe, "Place navigation impaired in rats with hippocampal lesions", Nature, vol. 297, no. 5868, pp. 681-683, 1982.

[9] R. G. M. Morris, "Spatial localisation does not depend on the presence of local cues", Learning and Motivation, vol. 12, pp. 239-260, 1981.

[10] R. G. M. Morris, P. Garrud, J. N. Rawlins, and J. O'Keefe, "Place navigation impaired in rats with hippocampal lesions", Nature, vol. 297, no. 5868, pp. 681-683, 1982.

[11] R. D. Kirch, R. Pinnell, U. G. Hofmann, and J. C. Cassel, "The double-H maze: a robust behavioral test for learning and memory in rodents", Journal of Visualized Experiments, 2015.

[12] S. W. Zucker, "Region growing: childhood and adolescence", Computer Graphics and Image Processing, vol. 5, pp. 382-399, 1976.

[13] C. R. Brice and C. L. Fennema, "Scene analysis using regions", Artificial Intelligence, vol. 1, pp. 205-226, 1970.

[14] R. Adams and L. Bischof, "Seeded region growing", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 6, pp. 641-647, 1994.

[15] A. Siebert, "Dynamic region growing", in Proceedings of Vision Interface '97, 1997.

[16] S. A. Hojjatoleslami and J. Kittler, "Region growing: a new approach", IEEE Transactions on Image Processing, vol. 7, no. 7, pp. 1079-1084, 1998.

[17] T. Pavlidis, "Integrating region growing and edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 3, pp. 225-233, 1990.

About the article

Published Online: 2015-09-12

Published in Print: 2015-09-01


Author's Statement

Conflict of interest: The authors state no conflict of interest.

Informed consent: Informed consent is not applicable.

Ethical approval: All protocols were approved by the Animal Care Committee of the University of Freiburg, and efforts were made to minimize the number of animals used with respect to statistical constraints.


Citation Information: Current Directions in Biomedical Engineering, Volume 1, Issue 1, Pages 232–235, ISSN (Online) 2364-5504, DOI: https://doi.org/10.1515/cdbme-2015-0058.


© 2015 by Walter de Gruyter GmbH, Berlin/Boston.
