Color is not a physical quantity of an object and cannot be measured directly; we can only measure reflectance, i.e. the fraction of light reflected at each wavelength. Nevertheless, we attach colors to the objects around us, and a human observer perceives these colors as approximately constant irrespective of the illuminant of the scene. Color is an important cue in everyday life and can be used to recognize or distinguish objects. We do not yet know how the brain arrives at a color constant, or approximately color constant, descriptor, i.e. what computational processing the brain actually performs. What is needed is a computational description of color perception in particular and of color vision in general: only if we can write down a full computational theory of the visual system can we claim to have understood how it works. This contribution presents a computational model of color perception that is much simpler than previous theories and that computes a color constant descriptor even in the presence of spatially varying illuminants. According to this model, the cones respond approximately logarithmically to the irradiance entering the eye. Cells in V1 perform a change of coordinate system such that colors are represented along a red-green, a blue-yellow and a black-white axis. Cells in V4 compute local space average color using a resistive grid formed by these cells, with the left and right hemispheres connected via the corpus callosum. A color constant descriptor, presumably used for color based object recognition, is then computed by subtracting local space average color from the cone response within the rotated coordinate system.
© 2012 by Walter de Gruyter GmbH & Co.
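The processing pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the opponent rotation matrix, the convergence parameter `p`, the iteration count, and the simulation of the resistive grid by repeated 4-neighbor averaging with wrap-around boundaries are all assumptions made for the sketch.

```python
import numpy as np

def log_cone_response(irradiance, eps=1e-6):
    # Cones respond approximately logarithmically to irradiance
    # (eps avoids log(0); its value is an assumption of this sketch).
    return np.log(irradiance + eps)

# Illustrative orthonormal rotation into an opponent coordinate system:
# rows correspond to red-green, blue-yellow and black-white axes.
# The exact matrix used in the model is not given in the abstract.
OPPONENT = np.array([
    [1/np.sqrt(2), -1/np.sqrt(2),  0.0],
    [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
    [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
])

def local_space_average(img, p=0.01, iters=500):
    # Emulate the resistive grid by iterating
    #   a <- (1 - p) * mean(4-neighbors of a) + p * img
    # so each cell blends its input with its neighbors' averages.
    a = img.copy()
    for _ in range(iters):
        nb = (np.roll(a, 1, axis=0) + np.roll(a, -1, axis=0) +
              np.roll(a, 1, axis=1) + np.roll(a, -1, axis=1)) / 4.0
        a = (1.0 - p) * nb + p * img
    return a

def color_constant_descriptor(irradiance, p=0.01, iters=500):
    c = log_cone_response(irradiance)       # approx. logarithmic cones
    o = c @ OPPONENT.T                      # rotate each pixel's triple
    a = local_space_average(o, p, iters)    # local space average color
    return o - a                            # subtract illuminant estimate
```

Because the cone response is logarithmic, a uniform change of illuminant becomes an additive constant per channel, which the subtraction of local space average color removes: the same reflectances rendered under two different illuminants yield (nearly) identical descriptors.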