Fermi Science Support Center

Extended Source Analysis

The analysis of an extended source is performed similarly to the process described in the binned likelihood tutorial. The only extra step is to create a two-dimensional template that describes the object you want to analyze. This template can be created from several different sources, including maps from other observatories, or you can create your own from scratch. There are a few simple rules to follow when creating your template, but once you've done that, almost any shape can be modeled.

This sample analysis is based on the Centaurus A analysis performed by the LAT team and described in Abdo, A. A. et al. 2010, Science, 328, 728. At certain points we will refer to this article and its supplement, as well as the Cicerone. The goal of this tutorial is to reproduce the data analysis performed in this publication, including calculating the spectral shape and fluxes of the central core of Cen A and of the large radio lobes. This tutorial uses a user-contributed tool (make1FGLxml) and assumes you have the most recent ScienceTools installed. We will also make significant use of python, so you might want to familiarize yourself with it (there's a beginner's guide at http://wiki.python.org/moin/BeginnersGuide). This tutorial also assumes that you've gone through the non-python based binned likelihood thread; it is also useful to have gone through the Likelihood Analysis with Python tutorial. This tutorial should take approximately 8 hours to complete (depending on your computer's speed) if you do everything, but there are some steps you can skip along the way that shave off about 4 hours of that.

Note: In this tutorial, commands that start with '[user@localhost]$' are to be executed within a shell environment (bash, c-shell, etc.) while commands that start with '>>>' are to be executed within an interactive python environment. Commands are in bold and outputs are in normal font. Text in ITALICS within the boxes is commentary from the author.

Get the Data

For this thread the original data were extracted from the LAT data server with the following selections (these selections are similar to those in the paper):

  • Search Center (RA,Dec) = (201.47,-42.97)
  • Radius = 10 degrees
  • Start Time (MET) = 239557420 seconds (2008-08-04T15:43:40)
  • Stop Time (MET) = 265507200 seconds (2009-06-01T00:00:00)
  • Minimum Energy = 210 MeV
  • Maximum Energy = 31000 MeV

We've provided direct links to the event files as well as the spacecraft data file if you don't want to take the time to use the download server. For more information on how to download LAT data please see the Extract LAT Data tutorial.

You'll first need to make a file list with the names of your input event files:

[user@localhost]$ ls -1 *PH*.fits > CenA_events.list

In the following analysis we've assumed that you've named your list of data files CenA_events.list and the spacecraft file SC.fits.

Perform Event Selections

We are going to follow the prescription described in the unbinned likelihood tutorial to filter and prepare our data set.

First, execute gtselect:

[user@localhost]$ gtselect evclsmin=3 evclsmax=4
Input FT1 file[] @CenA_events.list
Output FT1 file[] CenA_filtered.fits
RA for new search center (degrees) (0:360) [] 201.47
Dec for new search center (degrees) (-90:90) [] -42.97
radius of new search region (degrees) (0:180) [] 10
start time (MET in s) (0:) [] 239557420
end time (MET in s) (0:) [] 265507200
lower energy limit (MeV) (0:) [] 210
upper energy limit (MeV) (0:) [] 31000
maximum zenith angle value (degrees) (0:180) [] 105
Done.

Next, you need to run gtmktime with a similar filter as that used in the publication.

[user@localhost]$ gtmktime
Spacecraft data file[] SC.fits
Filter expression[] (DATA_QUAL==1)&&(LAT_CONFIG==1)&&(abs(ROCK_ANGLE)<52)
Apply ROI-based zenith angle cut[] yes
Event data file[] CenA_filtered.fits
Output event file name[] CenA_filtered_gti.fits

Create and View a Counts Map

Now we can go ahead and create a counts map of the region so we can visualize the lobes and see the core of CenA in gamma rays. We can't make any quantitative statements yet since we haven't produced a template to model but we can at least look and see what we have.

[user@localhost]$ gtbin
This is gtbin version ScienceTools-v9r18p6-fssc-20101108
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2) [] CMAP
Event data file name[] CenA_filtered_gti.fits
Output file name[] CenA_CMAP.fits
Spacecraft data file name[] NONE
Size of the X axis in pixels[] 140
Size of the Y axis in pixels[] 140
Image scale (in degrees/pixel)[] 0.1
Coordinate system (CEL - celestial, GAL -galactic) (CEL|GAL) [] CEL
First coordinate of image center in degrees (RA or galactic l)[] 201.47
Second coordinate of image center in degrees (DEC or galactic b)[] -42.97
Rotation angle of image axis, in degrees[] 0
Projection method e.g. AIT|ARC|CAR|GLS|MER|NCP|SIN|STG|TAN:[] AIT

Open this FITS image up in your favorite FITS viewer such as ds9. If you use the default settings you'll see the following (using the 'heat' color map):

Cen A Counts Map

Visualizing the lobes with this raw image is not easy. You can kind of see some emission in the center but not really resolve it. However, you can smooth this image in ds9 by clicking the 'Analysis' menu and then selecting 'Smooth'. This is better, but not quite good enough, so click 'Analysis' again and select 'Smooth Parameters'. This should bring up a dialog box where you can set the kernel radius. Set this to 9, click 'Apply', and then close the dialog box. The resulting image is pretty washed out now, so click 'Color' and then select 'b'. If you play with the color scale by holding down the right mouse button and dragging, you can wind up with a smoothed counts map that looks a little like the one in the publication:

Cen A Smoothed Counts Map

We're not going to get something perfect here since we're manually playing with the color scaling, and the paper used a sophisticated adaptive smoothing algorithm while we're using the simple Gaussian smoothing included in ds9. However, you can already see the central core of Cen A and some diffuse emission to the north and south of the core that looks like extended emission. You can also see several point sources in the ROI that are other gamma-ray sources we'll need to model. Since Cen A is near the galactic plane, you can also make out some of the galactic diffuse emission at the bottom of the image.
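The ds9 steps above can also be approximated programmatically. The sketch below smooths a counts map with a simple separable Gaussian kernel using numpy; it's only a rough stand-in for ds9's smoothing (and nothing like the paper's adaptive algorithm), and the commented pyfits lines for loading the map are illustrative.

```python
import numpy as np

def gaussian_smooth(image, sigma):
    """Smooth a 2D counts map with a Gaussian kernel of width sigma (pixels)."""
    # Build a normalized 1D Gaussian kernel out to 3 sigma.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # The 2D Gaussian is separable: convolve along rows, then along columns.
    smoothed = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='same'), 1, image)
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode='same'), 0, smoothed)
    return smoothed

# Illustrative usage on the counts map (requires pyfits to load the file):
# cmap = pyfits.open('CenA_CMAP.fits')[0].data
# smoothed = gaussian_smooth(cmap.astype(float), sigma=9)
```

Note that np.convolve's 'same' mode loses some flux at the map edges, so this is for visualization only, not for anything quantitative.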

Create a Template for the Extended Emission

Now that we've convinced ourselves that there is extended lobe emission, we need a way to describe its spatial extent. In this case we'll follow the example of the publication and use the WMAP K-band observations as a template. The exact WMAP template used in the publication is not available, but we can get a very similar one from NASA's SkyView. Navigate to SkyView and click on the SkyView Query Form link. We want to download the WMAP data in a region around Cen A, so input 'Cen A' into the 'Coordinates of Source' box of the form. Then select 'WMAP K' in the 'Infrared CMB:' box. We need to match our ROI, so under 'Common Options' input '140' as the 'Image size (pixels)' and '14' as the 'Image Size (degrees)'. Once you've input all of this, hit the 'Submit Request' button.

SkyView Query Form

On the resulting page, you'll see an image of the WMAP data showing a fuzzy image of Cen A in the K-Band. This is nice but what we really want is the FITS image. So click the FITS link and download it into your working directory. In the following section we're going to assume that the SkyView image is called 'skv.fits' so rename the downloaded file accordingly. You can also download the WMAP image directly from us: skv.fits. Here's what the downloaded WMAP K-band image looks like in ds9 using the 'heat' color map and square-root scaling without any smoothing:

Raw WMAP K-Band Image

The raw image by itself is not in the correct form for the analysis we want to do. First, we want to subtract out the central core of Cen A, and we want to divide the map into two parts so that we can separately model the north and south lobes. Furthermore, there are a few things we need to do to make any map compatible with the Science Tools:

  • The template must be in J2000 coordinates. The likelihood code does not transform coordinates into J2000.
  • The background must be set to 0. The likelihood code will use any pixel over 0 as part of the template.
  • The total flux must be normalized to 1.

The first point is already taken care of since the WMAP image is in J2000 coordinates. We'll take care of the second point by setting all of the pixels below a certain value to 0. This isn't the best way to do it, but it's quick, which is what we're going for in this thread. In the final step, we'll integrate over the entire map to get a normalization factor and divide each pixel by that number. So let's get started. We're going to use PyFITS, which is included in the Science Tools (you could use a similar tool such as IDL or FTOOLS if you wanted). You can find more details at the PyFITS website.

Start up python, import the pyfits and math modules and open the WMAP image:

[user@localhost]$ python
Python 2.6.5 (r265:79063, Nov 9 2010, 12:49:33)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyfits
>>> import math as m
>>> wmap_image = pyfits.open('./skv.fits')

You can view the header of the image via the following command. The image is in the first HDU of the FITS file so you access the '0'th element of the wmap_image object.

>>> print wmap_image[0].header
...output suppressed...

Now, define a function that calculates the distance of one pixel from another. We're going to use this to subtract out the center of the image.

>>> def distance(x0,y0,x,y):
...     return m.sqrt((x-x0)**2 + (y-y0)**2)

Now we do a rough background suppression by looping through the image and setting any pixels less than 0.5 mK to 0. We'll also save the resulting image to a file so we can see what we've done.

>>> for i,row in enumerate(wmap_image[0].data):
...     for j,val in enumerate(row):
...         if val < 0.5:
...             wmap_image[0].data[i,j] = 0
...
>>> wmap_image.writeto('CenA_wmap_k_above5.fits')

If you open up this image in ds9, you'll notice that there's still some background left in the southeast portion of the image. We'll get rid of it by setting any pixels more than 5 degrees (50 pixels at 0.1 degrees/pixel) from the center to 0 and writing the result out to a file.

>>> for i,row in enumerate(wmap_image[0].data):
...     for j,val in enumerate(row):
...         if distance(70,69,i,j) > 50:
...             wmap_image[0].data[i,j] = 0
...
>>> wmap_image.writeto('CenA_wmap_k_nobkgrnd.fits')

The resulting image should only have emission from the radio galaxy left and the rest of the image set to 0. This is important because any non-zero pixels will be treated as part of the template by the likelihood code and will skew your results. Now we will continue on to subtract out the center 1 degree of the image so that the core of Cen A is not included in the template. By looking at the image in ds9 we know that the center pixel of the image is (70,69) so we cut out the center using the following lines of code (and save the result).

>>> for i,row in enumerate(wmap_image[0].data):
...     for j,val in enumerate(row):
...         if distance(70,69,i,j) < 10:
...             wmap_image[0].data[i,j] = 0
...
>>> wmap_image.writeto('CenA_wmap_k_nocenter.fits')

At this point, we want to divide the map into north and south pieces and normalize each of them; the flux of each template must be normalized to 1. This means doing a two-dimensional integral over the image, taking the size of each bin into account. In practice, that means summing all of the bins and multiplying the total by the solid angle of one pixel: (pi/180)^2 x (0.1)^2 steradians. We will now make the north map by setting all of the south pixels to zero, normalizing it, and saving it. We'll then close the image, reopen the image we saved before zeroing out the south pixels, and do the same for the south region.

>>> wmap_image[0].data[0:69,0:140] = 0
>>> total = 0
>>> for element in wmap_image[0].data.flat:
...     total += element
...
>>> norm = total * (m.pi/180)**2 * (0.1**2)
>>> for i,row in enumerate(wmap_image[0].data):
...     for j,val in enumerate(row):
...         wmap_image[0].data[i,j] /= norm
...
>>> wmap_image.writeto('CenA_wmap_k_nocenter_N.fits')
>>> wmap_image.close()

Now open the file back up and do the same for the south region.

>>> wmap_image = pyfits.open('CenA_wmap_k_nocenter.fits')
>>> wmap_image[0].data[70:140,0:140] = 0
>>> total = 0
>>> for element in wmap_image[0].data.flat:
...     total += element
...
>>> norm = total * (m.pi/180)**2 * (0.1**2)
>>> for i,row in enumerate(wmap_image[0].data):
...     for j,val in enumerate(row):
...         wmap_image[0].data[i,j] /= norm
...
>>> wmap_image.writeto('CenA_wmap_k_nocenter_S.fits')
>>> wmap_image.close()

In the end, you should have two FITS files that look like the following images. The maximum value in the north template should be around 1670 and the maximum in the south should be around 900. If you are far from these values, make sure you executed the above steps correctly.

WMAP South Template WMAP North Template
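As a quick sanity check, you can verify that each saved template really integrates to 1 and inspect its peak value. This sketch uses numpy on the template's data array; the commented pyfits line and the expected peak values simply restate the numbers above.

```python
import numpy as np

def template_stats(data, pixel_size_deg=0.1):
    """Return (peak pixel value, flux integral) for a template map.

    The integral is the pixel sum times the solid angle of one pixel,
    matching the normalization factor used in the steps above.
    """
    solid_angle = (np.pi / 180.0)**2 * pixel_size_deg**2
    return float(data.max()), float(data.sum() * solid_angle)

# Illustrative check on the saved north template (load with pyfits):
# north = pyfits.open('CenA_wmap_k_nocenter_N.fits')[0].data
# peak, integral = template_stats(north)  # expect peak ~1670, integral ~1.0
```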

Create a 3D Counts Map and Compute the Livetime

Now we can continue to follow the binned analysis prescription by creating a 3D counts map

[user@localhost]$ gtbin
This is gtbin version ScienceTools-v9r18p6-fssc-20101108
Type of output file (CCUBE|CMAP|LC|PHA1|PHA2) [] CCUBE
Event data file name[] CenA_filtered_gti.fits
Output file name[] CenA_CCUBE.fits
Spacecraft data file name[] NONE
Size of the X axis in pixels[] 140
Size of the Y axis in pixels[] 140
Image scale (in degrees/pixel)[] 0.1
Coordinate system (CEL - celestial, GAL -galactic) (CEL|GAL) [] CEL
First coordinate of image center in degrees (RA or galactic l)[] 201.47
Second coordinate of image center in degrees (DEC or galactic b)[] -42.97
Rotation angle of image axis, in degrees[] 0
Projection method e.g. AIT|ARC|CAR|GLS|MER|NCP|SIN|STG|TAN:[] AIT
Algorithm for defining energy bins (FILE|LIN|LOG) [] LOG
Start value for first energy bin in MeV[] 210
Stop value for last energy bin in MeV[] 31000
Number of logarithmically uniform energy bins[] 20

Then compute the livetime. This will take something like 30 minutes to complete.

[user@localhost]$ gtltcube
Event data file[] CenA_filtered_gti.fits
Spacecraft data file[] SC.fits
Output file[] CenA_ltcube.fits
Step size in cos(theta) (0.:1.) [] 0.025
Pixel size (degrees)[] 1
Working on file SC.fits
.....................!

Generate XML Model File

We need to create an XML file containing all of the sources of interest within the Region of Interest (ROI) of Cen A so we can correctly model the background. For more information on the format of the model file and how to create one, see the likelihood analysis tutorial. We'll use the user-contributed tool make1FGLxml.py to create a model file based on the LAT 1-year catalog. You'll need to download the FITS version of the catalog at http://fermi.gsfc.nasa.gov/ssc/data/p6v11/access/lat/1yr_catalog/ and get the make1FGLxml.py tool from the user contributed software page. Also make sure you have the most recent galactic diffuse and isotropic model files, which can be found at http://fermi.gsfc.nasa.gov/ssc/data/p6v11/access/lat/BackgroundModels.html.

Now that we have all of the files we need, you can generate your model file in python:

>>> from make1FGLxml import *
This is make1FGLxml version 02.
>>> mymodel = srcList('gll_psc_v02.fit','CenA_filtered_gti.fits','CenA_model.xml')
>>> mymodel.makeModel('gll_iem_v02.fit','gal_v02','isotropic_iem_v02.txt','eg_v02')
Creating file and adding sources

For more information on the make1FGLxml module, see the usage notes.

You should now have a file called 'CenA_model.xml'. The module finds many 1FGL sources within and just outside our ROI. Some of these sources correspond to the Cen A lobes and some are discrete sources within the ROI. Since we are trying to duplicate the results found in the article, we are going to delete all of the sources from the XML file except those found in Table S2 of the paper:

  • 1FGL J1300.9-3745
  • 1FGL J1304.0-4622
  • 1FGL J1304.3-4352
  • 1FGL J1305.4-4928
  • 1FGL J1307.0-4030
  • 1FGL J1307.6-4259
  • 1FGL J1320.1-4007
  • 1FGL J1328.2-4729
  • 1FGL J1334.2-4448
  • 1FGL J1347.8-3751
  • 1FGL J1400.1-3743
We also want to add entries for the core of Cen A and for Cen A's lobes. These should look like:

<source name="CenA" type="PointSource">
 <spectrum type="PowerLaw2">
  <parameter free="1" max="1e4" min="1e-4" name="Integral" scale="1e-07" value="0.04" />
  <parameter free="1" max="5.0" min="0.0" name="Index" scale="-1.0" value="2.7" />
  <parameter free="0" max="500000" min="20" name="LowerLimit" scale="1" value="1e2" />
  <parameter free="0" max="500000" min="20" name="UpperLimit" scale="1" value="3e4" />
 </spectrum>
 <spatialModel type="SkyDirFunction">
  <parameter free="0" max="360" min="-360" name="RA" scale="1" value="201.47" />
  <parameter free="0" max="90" min="-90" name="DEC" scale="1" value="-42.97" />
 </spatialModel>
</source>

<source name="CenA_SouthLobe" type="DiffuseSource">
 <spectrum type="PowerLaw2">
  <parameter free="1" max="1e4" min="1e-4" name="Integral" scale="1e-07" value="0.04" />
  <parameter free="1" max="5.0" min="0.0" name="Index" scale="-1.0" value="2.7" />
  <parameter free="0" max="500000" min="20" name="LowerLimit" scale="1" value="1e2" />
  <parameter free="0" max="500000" min="20" name="UpperLimit" scale="1" value="3e4" />
 </spectrum>
 <spatialModel file="CenA_wmap_k_nocenter_S.fits" type="SpatialMap">
  <parameter free="0" max="1e3" min="1e-3" name="Prefactor" scale="1.0" value="1.0"/>
 </spatialModel>
</source>

<source name="CenA_NorthLobe" type="DiffuseSource">
 <spectrum type="PowerLaw2">
  <parameter free="1" max="1e4" min="1e-4" name="Integral" scale="1e-07" value="0.04" />
  <parameter free="1" max="5.0" min="0.0" name="Index" scale="-1.0" value="2.7" />
  <parameter free="0" max="500000" min="20" name="LowerLimit" scale="1" value="1e2" />
  <parameter free="0" max="500000" min="20" name="UpperLimit" scale="1" value="3e4" />
 </spectrum>
 <spatialModel file="CenA_wmap_k_nocenter_N.fits" type="SpatialMap">
  <parameter free="0" max="1e3" min="1e-3" name="Prefactor" scale="1.0" value="1.0"/>
 </spatialModel>
</source>

For these three sources we are fitting a PowerLaw2 model (see the Cicerone for descriptions of the different spectral and spatial models) from 100 to 30000 MeV, starting at reasonable values for the index (-2.7) and integral (0.04e-7). The core of Cen A will be modeled as a point source (<spatialModel type="SkyDirFunction">) and the two lobes will be modeled using the templates we created from the WMAP sky map (<spatialModel file="some_map.fits" type="SpatialMap">). The final XML file can be downloaded directly from us: CenA_model.xml.
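Deleting the unwanted sources from 'CenA_model.xml' by hand is tedious, so here is a sketch that prunes the file with Python's xml.etree.ElementTree instead. The exact source-name strings (for example, the leading underscore in the 1FGL names) are assumptions about how make1FGLxml writes them, so check them against your own file before running this.

```python
import xml.etree.ElementTree as ET

# Sources from Table S2 of the paper plus the Cen A components and the
# diffuse models; the name spellings here are assumptions -- verify them
# against your own CenA_model.xml.
keep = {'CenA', 'CenA_NorthLobe', 'CenA_SouthLobe',
        '_1FGLJ1300.9-3745', '_1FGLJ1304.0-4622', '_1FGLJ1304.3-4352',
        '_1FGLJ1305.4-4928', '_1FGLJ1307.0-4030', '_1FGLJ1307.6-4259',
        '_1FGLJ1320.1-4007', '_1FGLJ1328.2-4729', '_1FGLJ1334.2-4448',
        '_1FGLJ1347.8-3751', '_1FGLJ1400.1-3743',
        'gal_v02', 'eg_v02'}

def prune_model(infile, outfile):
    """Remove every <source> whose name is not in the keep set."""
    tree = ET.parse(infile)
    root = tree.getroot()
    for src in list(root.findall('source')):
        if src.get('name') not in keep:
            root.remove(src)
    tree.write(outfile)

# Illustrative usage:
# prune_model('CenA_model.xml', 'CenA_model_pruned.xml')
```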

Compute the Binned Exposure and Source Maps

Now we run gtsrcmaps to generate a source map and a binned exposure map using the model file we just created:

[user@localhost]$ gtsrcmaps
Exposure hypercube file[] CenA_ltcube.fits
Counts map file[] CenA_CCUBE.fits
Source model file[] CenA_model.xml
Binned exposure map[] CenA_BinnedExpMap.fits
Source maps output file[] CenA_srcMaps.fits
Response functions[] P6_V3_DIFFUSE
Generating SourceMap for CenA....................!
Computing binned exposure map....................!
Generating SourceMap for CenA_NorthLobe....................!
Generating SourceMap for CenA_SouthLobe....................!
Generating SourceMap for _1FGLJ1300.9-3745....................!
Generating SourceMap for _1FGLJ1304.0-4622....................!
Generating SourceMap for _1FGLJ1304.3-4352....................!
Generating SourceMap for _1FGLJ1305.4-4928....................!
Generating SourceMap for _1FGLJ1307.0-4030....................!
Generating SourceMap for _1FGLJ1307.6-4259....................!
Generating SourceMap for _1FGLJ1320.1-4007....................!
Generating SourceMap for _1FGLJ1328.2-4729....................!
Generating SourceMap for _1FGLJ1334.2-4448....................!
Generating SourceMap for _1FGLJ1347.8-3751....................!
Generating SourceMap for _1FGLJ1400.1-3743....................!
Generating SourceMap for eg_v02....................!
Generating SourceMap for gal_v02....................!

Run the Likelihood Analysis

It's time to actually run the likelihood analysis now. First, you need to import the pyLikelihood module and then the BinnedAnalysis functions. As discussed in the likelihood thread it's best to do a two-pass fit here, the first with the DRMNFB optimizer and a looser tolerance and then finally with the NewMinuit optimizer and a tighter tolerance. This allows us to zero in on the best fit parameters. For more details on the pyLikelihood module, check out the pyLikelihood Usage Notes.

>>> import pyLikelihood
>>> from BinnedAnalysis import *
>>> obs = BinnedObs(srcMaps='CenA_srcMaps.fits',expCube='CenA_ltcube.fits',binnedExpMap='CenA_BinnedExpMap.fits',irfs='P6_V3_DIFFUSE')
>>> like1 = BinnedAnalysis(obs,'CenA_model.xml',optimizer='DRMNFB')

Set the tolerance to something large to start out with, do the first fit and save the results to a file.

>>> like1.tol = 0.1
>>> like1.fit(verbosity=0)
76538.757527438313
>>> like1.logLike.writeXml('CenA_fit1.xml')

Now, create a new BinnedAnalysis object using the same BinnedObs object from before and the results of the previous fit and tell it to use the NewMinuit optimizer. We also double check that the tolerance is what we want (0.001) and then we fit the model and save the results.

>>> like2 = BinnedAnalysis(obs,'CenA_fit1.xml',optimizer='NewMinuit')
>>> like2.tol
0.001
>>> like2.fit(verbosity=0)
76534.963449390125
>>> like2.logLike.writeXml('CenA_fit2.xml')

Now that we've done the full fit we can verify that we've gotten values close to what's in the publication. First check the results for the core:

>>> like2.Ts('CenA')
251.1039266467269
>>> like2.model['CenA']
CenA
Spectrum: PowerLaw2
0 Integral: 1.551e+00 2.634e-01 1.000e-04 1.000e+04 ( 1.000e-07)
1 Index: 2.714e+00 1.075e-01 0.000e+00 5.000e+00 (-1.000e+00)
2 LowerLimit: 1.000e+02 0.000e+00 2.000e+01 5.000e+05 ( 1.000e+00) fixed
3 UpperLimit: 3.000e+04 0.000e+00 2.000e+01 5.000e+05 ( 1.000e+00) fixed

So, the core is detected with high significance (TS = 251) and the fit resulted in a spectral index of 2.7 +- 0.1 and an integral flux of (1.6 +- 0.3) x 10^-7 ph cm^-2 s^-1. This matches, within statistical errors, the index and flux reported in the article (index = 2.67 +- 0.10, flux = 1.50 +0.25/-0.22 in the same units).

Now, check the north and south lobes.

>>> like2.Ts('CenA_NorthLobe')
37.75456161540933
>>> like2.model['CenA_NorthLobe']
CenA_NorthLobe
Spectrum: PowerLaw2
4 Integral: 7.631e-01 2.524e-01 1.000e-04 1.000e+04 ( 1.000e-07)
5 Index: 2.587e+00 2.083e-01 0.000e+00 5.000e+00 (-1.000e+00)
6 LowerLimit: 1.000e+02 0.000e+00 2.000e+01 5.000e+05 ( 1.000e+00) fixed
7 UpperLimit: 3.000e+04 0.000e+00 2.000e+01 5.000e+05 ( 1.000e+00) fixed
>>> like2.Ts('CenA_SouthLobe')
79.677223227452487
>>> like2.model['CenA_SouthLobe']
CenA_SouthLobe
Spectrum: PowerLaw2
8 Integral: 1.073e+00 2.261e-01 1.000e-04 1.000e+04 ( 1.000e-07)
9 Index: 2.451e+00 1.293e-01 0.000e+00 5.000e+00 (-1.000e+00)
10 LowerLimit: 1.000e+02 0.000e+00 2.000e+01 5.000e+05 ( 1.000e+00) fixed
11 UpperLimit: 3.000e+04 0.000e+00 2.000e+01 5.000e+05 ( 1.000e+00) fixed

The north lobe is detected at a TS value of 38 while the south lobe is detected at a higher TS of 80. This is roughly what the LAT team reported. The north lobe has a spectral index of 2.6 +- 0.2 and an integral flux of (0.76 +- 0.25) x 10^-7. The south lobe has a spectral index of 2.5 +- 0.1 and an integral flux of (1.1 +- 0.2) x 10^-7. The integral fluxes are spot on what is reported in the paper (0.77 +0.23/-0.19 and 1.09 +0.24/-0.21 for the north and south lobes respectively) and still within statistical errors for the indices (2.52 +0.16/-0.19 for the north and 2.60 +0.14/-0.15 for the south). The main reason for any discrepancy would be that we are using a slightly different template than was used by the LAT team but the fact that our results are so close indicates that the method is robust. If you get results that are outside of the errors, double check that you've followed the thread accurately.
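As a rule of thumb, for a single additional free parameter the detection significance is roughly the square root of the TS value (see the Cicerone for the caveats), so the numbers above translate like this:

```python
import math

# Approximate significance in sigma from the TS values reported above;
# this sqrt(TS) rule holds in the one-parameter, large-TS limit.
for name, ts in [('core', 251.1), ('north lobe', 37.8), ('south lobe', 79.7)]:
    print('%s: TS = %.1f  =>  ~%.1f sigma' % (name, ts, math.sqrt(ts)))
```

So the core is roughly a 16-sigma detection, with the lobes at about 6 and 9 sigma.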

Create some Residual Maps

It's easiest to visualize our results by generating some model maps based on the model we just fit and then using these models to create residual maps from the actual counts map. We're going to create four different model maps:

  • CenA_ModelMap_All.fits: A map of the full model (nothing commented out)
  • CenA_ModelMap_AllBkgrnd.fits: A map of all of the background sources (comment out the Cen A sources)
  • CenA_ModelMap_CenA.fits: A map of just the Cen A sources (comment out everything but the Cen A sources)
  • CenA_ModelMap_Diff.fits: A map of just the diffuse background sources (comment out everything but the isotropic and galactic sources)

To do this, we'll need to edit the output xml model file ('CenA_fit2.xml') each time and then run gtmodel. You don't need to edit it at all for the first one:

[user@localhost]$ gtmodel
Source maps (or counts map) file[] CenA_srcMaps.fits
Source model file[] CenA_fit2.xml
Output file[] CenA_ModelMap_All.fits
Response functions[] P6_V3_DIFFUSE
Exposure cube[] CenA_ltcube.fits
Binned exposure map[] CenA_BinnedExpMap.fits

For the next file, comment out the Cen A sources in the XML file (you comment out text in XML by wrapping it in '<!--' and '-->') and run gtmodel again.

[user@localhost]$ gtmodel
Source maps (or counts map) file[] CenA_srcMaps.fits
Source model file[] CenA_fit2.xml
Output file[] CenA_ModelMap_AllBkgrnd.fits
Response functions[] P6_V3_DIFFUSE
Exposure cube[] CenA_ltcube.fits
Binned exposure map[] CenA_BinnedExpMap.fits

Continue to comment out select parts of the model xml file according to the list above and run gtmodel afterwards to get the next two files. You should end up with the four images below:

All Sources
Background
CenA_ModelMap_All.png: All of the sources in our model. You can see how the Cen A lobes are being modeled as well as numerous point sources and the diffuse backgrounds. CenA_ModelMap_AllBkgrnd.png: Only the background sources. The CenA lobes and core are not in this image.
Cen A Sources
Diffuse Sources
CenA_ModelMap_CenA.png: Just the Cen A sources. This looks really nice. CenA_ModelMap_Diffuse.png: Only the galactic and extragalactic diffuse emission. This is basically a smooth glow underlying the entire region.

Now we'll use the FTOOL farith to subtract these model maps from our data to create residual maps.

[user@localhost]$ farith CenA_CMAP.fits CenA_ModelMap_All.fits CenA_CMAP_Resid.fits SUB
[user@localhost]$ farith CenA_CMAP.fits CenA_ModelMap_AllBkgrnd.fits CenA_CMAP_CenA.fits SUB
[user@localhost]$ farith CenA_CMAP.fits CenA_ModelMap_CenA.fits CenA_CMAP_Bkgrnd.fits SUB
[user@localhost]$ farith CenA_CMAP.fits CenA_ModelMap_Diffuse.fits CenA_CMAP_Sources.fits SUB

This should give you the following residual counts maps (smoothed with a 9-pixel Gaussian as before):

Residuals
Cen A Only
CenA_CMAP_Resid.png: Counts map minus all of the sources in the model, showing that there's not really anything left after we subtract everything out. CenA_CMAP_CenA.png: Everything subtracted except the Cen A core and lobes. You can really see the extent of the radio galaxy in this map.
Background Sources
Non-Diffuse Sources
CenA_CMAP_Bkgrnd.png: This shows a counts map with the Cen A core and lobes removed. The galactic and extragalactic emission dominates here, but you can see a few individual point sources. CenA_CMAP_Sources.png: This shows the region with the galactic and extragalactic emission removed so you can see the point sources along with the Cen A sources.
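If you prefer to stay in Python rather than use farith, the same four subtractions can be sketched with numpy. The pyfits I/O appears only as commented, illustrative lines; the pairing of model map to output file mirrors the farith calls above.

```python
import numpy as np

def residual(counts, model):
    """Counts-minus-model difference map (what farith's SUB computes)."""
    return np.asarray(counts, dtype=float) - np.asarray(model, dtype=float)

# (model map, residual output) pairs matching the farith commands above.
pairs = [('CenA_ModelMap_All.fits',       'CenA_CMAP_Resid.fits'),
         ('CenA_ModelMap_AllBkgrnd.fits', 'CenA_CMAP_CenA.fits'),
         ('CenA_ModelMap_CenA.fits',      'CenA_CMAP_Bkgrnd.fits'),
         ('CenA_ModelMap_Diffuse.fits',   'CenA_CMAP_Sources.fits')]

# Illustrative loop over the actual FITS files (requires pyfits):
# for model_file, out_file in pairs:
#     cmap  = pyfits.open('CenA_CMAP.fits')[0].data
#     model = pyfits.open(model_file)[0].data
#     pyfits.writeto(out_file, residual(cmap, model))
```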

Finished

So, it looks like we've generally reproduced the results from the Cen A paper. You can go on from here to produce a proper spectrum using the model you've built (see the python likelihood thread), and you can plot a profile if you like (try using pyfits and matplotlib, both included in the Science Tools). You can also create any type of template you wish to analyze extended diffuse sources with arbitrary shapes (create them in python and use pyfits to save them). Just make sure you follow the prescription outlined above.


Last updated by: Elizabeth Ferrara 08/18/2010