A quasi Hubble Palette image of the Cygnus Wall in the North America Nebula.
As I mentioned in a recent post, I’ve been experimenting with using OSC data to recreate a Hubble palette look in my images. Normally the Hubble palette is produced with a monochrome camera and individual narrowband filters: Hydrogen-alpha, Oxygen III and Sulfur II.
Although I own and have used a monochrome CCD with narrowband filters, I have been making more use of OSC cooled CMOS cameras lately. I own a 16-megapixel QHY168C and currently have a 26-megapixel QHY268C on loan from QHYCCD.
The QHY268C is a new breed of cooled CMOS camera from QHYCCD. It’s a back-illuminated, 16-bit, 26-megapixel high-resolution camera for deep sky imaging. Its native 16-bit output delivers very fine greyscale resolution, and it uses the Sony IMX571 sensor with a small pixel size of just 3.76 µm. Combined with low read noise and zero amp glow, this is a superb camera to image with. The data obtained with the QHY268C is high resolution and clean, with little noise.
So what is a quasi Hubble palette image anyway? Well, quasi, meaning sort of but not quite, refers to the fact that this way of processing OSC data gives the appearance of the Hubble palette without actually being it.
The first thing to know is that there is no Sulfur II (SII) channel in the image. This is because the special OSC narrowband filter used to capture the data (an Optolong L-eNhance or L-eXtreme) doesn’t permit the SII wavelength to come through and reach the camera sensor. The L-eNhance does, however, pass the H-alpha (Ha), H-beta (Hb) and Oxygen III (OIII) wavelengths; the L-eXtreme passes only Ha and OIII. Because Ha is a deep red wavelength while OIII and Hb sit in the blue-green part of the spectrum, the Bayer matrix of an OSC sensor records them in largely separate colour channels, and we can use this to our advantage in creating our quasi Hubble palette image.
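To make the wavelength side of this concrete, here is a small sketch in Python, purely as an illustration: the wavelengths are the standard rest values for these emission lines, and the channel assignments describe a typical OSC Bayer response rather than any particular camera or filter curve.

```python
# Emission lines passed by the Optolong dual-band filters, and the OSC
# colour channel that predominantly records each one.
# Wavelengths are the standard rest values in nanometres.
EMISSION_LINES = {
    "Ha":   {"wavelength_nm": 656.3, "osc_channel": "R"},          # deep red
    "Hb":   {"wavelength_nm": 486.1, "osc_channel": "B (and G)"},  # blue
    "OIII": {"wavelength_nm": 500.7, "osc_channel": "G and B"},    # blue-green
}

FILTER_PASSES = {
    "L-eNhance": ["Ha", "Hb", "OIII"],
    "L-eXtreme": ["Ha", "OIII"],
}

for filt, lines in FILTER_PASSES.items():
    print(f"{filt}:")
    for line in lines:
        info = EMISSION_LINES[line]
        print(f"  {line:4s} {info['wavelength_nm']:.1f} nm -> {info['osc_channel']} channel")
```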
An image of IC5067 in the Pelican Nebula processed as a quasi Hubble Palette.
At the heart of this image processing technique is splitting out the R, G and B channels of the master light frame we’ve produced by stacking and combining all of our individual light frames into one unstretched image. We can then remap these channels to the Hubble palette. When I use the word remap, I mean using the R-G-B data in a different way than we usually would to produce a colour image: instead we will treat the R channel as our SII, the G channel as our Ha and the B channel as our OIII.
We then use some PixelMath in PixInsight to recombine these channels into one colour image again, effectively creating our quasi Hubble palette look. Again, there is no actual SII in this, which is part of the real Hubble palette, but I will say the end result was quite surprising and impressive, as I did not know what to expect when trying this new technique.
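For anyone who likes to see the idea spelled out, here is a rough sketch of the split-and-remap step in Python with NumPy and Astropy. It is only an illustration of the concept, not my actual PixInsight workflow (which uses channel extraction and PixelMath, as shown in the video), and the file name master_light.fits and the plane ordering are assumptions you would adapt to your own data.

```python
import numpy as np
from astropy.io import fits

# Load the unstretched, stacked OSC master light frame.
# Assumption: it is saved as a FITS cube with the colour planes first,
# i.e. shape (3, height, width) in R, G, B order. Adjust if your
# stacking software writes (height, width, 3) instead.
master = fits.getdata("master_light.fits").astype(np.float32)

r, g, b = master[0], master[1], master[2]

# Remap the channels as described above: treat R as the "SII" master,
# G as the "Ha" master and B as the "OIII" master. Each of these can now
# be stretched and balanced as if it were an individual narrowband master.
sii_like = r
ha_like = g
oiii_like = b

# Recombine into a single RGB image in SHO order (SII -> red, Ha -> green,
# OIII -> blue). In PixInsight this is the PixelMath recombination step.
quasi_sho = np.stack([sii_like, ha_like, oiii_like], axis=0)

fits.writeto("quasi_hubble.fits", quasi_sho, overwrite=True)
```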
To better understand this new processing technique I developed, watch my YouTube video where I discuss and demonstrate it further. And of course, don’t forget to comment below with any thoughts or questions you might have!
My story began more than 40 years ago, looking up at the Moon with a small collapsible telescope my father had. Encouraged by my parents, who bought me my very own telescope, a 4.5″ reflector, I began to explore the night sky from the backyard of my family home. Today I do astrophotography from my home in Kitchener, Ontario, and also with remote telescopes located in New Mexico and Australia. Some of my images have won awards and have been featured online and in magazines.