What is Debayering?


For those of us who image with one-shot color CCD, CMOS or DSLR cameras, we have to deal with a process called debayering. But what is this process, and why is it so important for the sub-exposures from these kinds of cameras?

In the mid-1970s, a Kodak scientist named Bryce Bayer developed what is known as the Bayer matrix and patented it in 1976. The Bayer matrix is an arrangement of red, green and blue color filters placed over a sensor's photosensors (what we now call pixels), designed so that the sensor responds to light much like the human eye does. The most common arrangement is the Red-Green-Green-Blue, or RGGB, pattern. Because the human retina is more sensitive to green light, the pattern uses two green filters for every red and blue one, and this matrix became the standard for color photosensors.
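
To make the pattern concrete, here is a minimal sketch (plain Python, nothing from PixInsight or any camera driver) of how an RGGB mosaic assigns one color filter to each pixel based on its row and column. The exact pattern origin (RGGB vs. GRBG, GBRG, BGGR) varies between camera models, so treat this as an illustration only.

```python
# Illustrative sketch: which color filter covers a given pixel in an
# RGGB Bayer mosaic. Indices are zero-based; the true pattern origin
# depends on the specific sensor.

def bayer_color(row: int, col: int) -> str:
    """Return 'R', 'G' or 'B' for a pixel in an RGGB mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print a few rows to show the repeating 2x2 tile
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```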

The raw output of these cameras - yes, I hope you’re shooting your DSLR in RAW format - is formatted so that each pixel records only one color. The problem is that a single color-filtered pixel, on its own, cannot tell you anything about the other two colors at that location. What you need is an algorithm that interpolates the values of the surrounding red, green and blue pixels to estimate the missing color values at each pixel and writes the result out as a full-color image. This is what happens for those of us who use PixInsight and execute the Debayer process. Several algorithms, also called demosaicing methods (no, I did not make that up), are available in this process; the best of these to use most of the time is VNG, the Variable Number of Gradients method. This is the default setting in PixInsight’s Debayer process.
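
For illustration only, here is a rough sketch of the simplest demosaicing approach, plain bilinear interpolation, written with NumPy and SciPy. It is not VNG and certainly not PixInsight's implementation; it just shows the basic idea of estimating each pixel's missing color values from its neighbours.

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(cfa: np.ndarray) -> np.ndarray:
    """Rough bilinear demosaic of an RGGB mosaic: (h, w) -> (h, w, 3).

    PixInsight's VNG method is far more sophisticated; this only
    illustrates interpolating missing colors from nearby samples.
    """
    h, w = cfa.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(values: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # Normalized convolution: weighted average of the known samples
        # of this color within a 3x3 neighbourhood of each pixel.
        known = np.where(mask, values, 0.0)
        kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
        num = convolve2d(known, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        return num / np.maximum(den, 1e-9)

    return np.stack([interp(cfa, r_mask),
                     interp(cfa, g_mask),
                     interp(cfa, b_mask)], axis=-1)
```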


So, when do you use the Debayer process? Use it AFTER you’ve calibrated and cosmetically corrected your images. The reason? Image calibration (applying bias, dark and flat frames to the light frames) is a pixel-by-pixel process: each pixel in a light frame is corrected against the corresponding pixel in the calibration frames. If we debayered before calibrating, the interpolation performed by the debayering algorithm would mix neighbouring pixel values and ruin our ability to do image calibration, and that would NOT be good.
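
As a sketch of that order of operations (the master frames and names here are hypothetical, not PixInsight's actual pipeline), calibration works on the still-mosaiced CFA data, and only afterwards do you debayer:

```python
import numpy as np

# Hypothetical order-of-operations sketch. Calibration is pixel-by-pixel,
# so it runs on the raw, un-debayered (CFA) light frame; the Debayer step
# comes only after calibration is done.

def calibrate_cfa(light: np.ndarray, master_dark: np.ndarray,
                  master_flat: np.ndarray) -> np.ndarray:
    """Per-pixel calibration of a raw (un-debayered) light frame."""
    dark_subtracted = light - master_dark        # remove dark current (and bias)
    normalized_flat = master_flat / master_flat.mean()
    return dark_subtracted / normalized_flat     # correct vignetting and dust

# calibrated = calibrate_cfa(raw_light, master_dark, master_flat)
# rgb = bilinear_demosaic(calibrated)   # debayer ONLY after calibration
```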

After debayering your sub-exposures, you continue processing as normal: register/align your frames and then stack them with image integration. After that, it’s on to getting that first look at what you’ve captured, and then on to all the fun you love and enjoy with post-processing.
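
Conceptually, integration is just a per-pixel combine of the registered frames. A toy sketch might look like the following; real integration tools add pixel rejection, weighting and normalization, which this deliberately leaves out.

```python
import numpy as np

def simple_integrate(aligned_frames: list[np.ndarray]) -> np.ndarray:
    """Average-combine a list of registered frames to improve signal-to-noise."""
    stack = np.stack(aligned_frames, axis=0)   # shape: (n_frames, h, w, 3)
    return stack.mean(axis=0)
```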

As always, stay safe, get more people to turn their lights off at night and clear skies!


This blog post was originally published in our Telescope Live Community.

The Community represents Telescope Live's virtual living room, where people exchange ideas and questions around astrophotography and astronomy. 


Join the conversation now to find out more about astrophotography and to improve your observation and post-processing skills!