Fermi Gamma-ray Space Telescope

Upper Limits with Python

This sample analysis shows a way to determine an upper limit on the GeV emission from Swift J164449.3+573451, similar to what was done in Burrows et al. (Nature 476, 421). To compute the upper limit, we use the profile likelihood method: we scan over values of the normalization parameter, refit with respect to the remaining free parameters at each step, and record the change in log-likelihood as a function of flux. Assuming that twice the change in log-likelihood asymptotically follows a chi-square distribution with one degree of freedom, a 90% confidence region corresponds to a change in log-likelihood of 2.71/2 = 1.35. Note that this change in log-likelihood corresponds to a two-sided confidence interval; since we are interested in an upper limit, it actually corresponds to a 95% CL upper limit. See Rolke et al. (2005) for more details. We will first cover an unbinned example and, at the end of the page, include the modifications needed for binned data. This tutorial assumes that you have gone through the standard likelihood analysis.
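
As a quick check on these numbers (an illustration only, assuming scipy is available in your python environment), the threshold can be reproduced from the chi-square distribution:

>>> from scipy.stats import chi2
>>> chi2.ppf(0.90, 1) / 2.0   # one degree of freedom; ~1.35, i.e. 2.71/2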

You can download this tutorial as a Jupyter notebook and run it interactively. Please see the instructions for using the notebooks with the Fermitools.

Get the Data

For this thread the original data were extracted from the LAT data server with the following selections (these selections are similar to those in the paper):

  • Search Center (RA,Dec) = (251.2054,57.5808)
  • Radius = 30 degrees
  • Start Time (MET) = 322963202 seconds (2011-03-28T00:00:00)
  • Stop Time (MET) = 323568002 seconds (2011-04-04T00:00:00)
  • Minimum Energy = 100 MeV
  • Maximum Energy = 300000 MeV

In the following analysis, we assume that you have the event and spacecraft files L181102105258F4F0ED2772_PH00.fits and L181102105258F4F0ED2772_SC00.fits.

We also assume that you have downloaded the recommended models for a point source analysis, gll_iem_v07.fits and iso_P8R3_SOURCE_V3_v1.txt.
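
If you would like to confirm that the downloaded event file covers the time range listed above, a quick check (a convenience sketch, not part of the original thread, using the astropy package that should be available in the Fermitools conda environment) is to read the header of the EVENTS extension:

>>> from astropy.io import fits
>>> hdr = fits.getheader('L181102105258F4F0ED2772_PH00.fits', ext=1)
>>> print(hdr['TSTART'], hdr['TSTOP'])       # start and stop times in MET seconds
>>> print(hdr['DATE-OBS'], hdr['DATE-END'])  # the same times in UTC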

Perform Event Selections

You could follow the unbinned likelihood tutorial to perform your event selections with gtselect, gtmktime, etc. directly from the command line and then use pyLikelihood later, but here we will do everything from within python.

We start python:

[user@localhost]$ python
Type "help", "copyright", "credits" or "license" for more information.
>>>

We first import the gt_apps module to gain access to the tools.

>>> import gt_apps as my_apps

Next, we set up gtselect (called 'filter' in python):

>>> my_apps.filter['evclass'] = 128
>>> my_apps.filter['evtype'] = 3
>>> my_apps.filter['ra'] = 251.2054
>>> my_apps.filter['dec'] = 57.5808
>>> my_apps.filter['rad'] = 10
>>> my_apps.filter['emin'] = 100
>>> my_apps.filter['emax'] = 300000
>>> my_apps.filter['zmax'] = 90
>>> my_apps.filter['tmin'] = 322963202
>>> my_apps.filter['tmax'] = 323568002
>>> my_apps.filter['infile'] = 'L181102105258F4F0ED2772_PH00.fits'
>>> my_apps.filter['outfile'] = 'SwiftJ1644_filtered.fits'

Once this is done, we can run gtselect:

>>> my_apps.filter.run()

Now we need to select the good time intervals (GTIs). The gtmktime tool is accessed within python via the maketime object:

>>> my_apps.maketime['scfile'] = 'L181102105258F4F0ED2772_SC00.fits'
>>> my_apps.maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
>>> my_apps.maketime['roicut'] = 'no'
>>> my_apps.maketime['evfile'] = 'SwiftJ1644_filtered.fits'
>>> my_apps.maketime['outfile'] = 'SwiftJ1644_filtered_gti.fits'
>>> my_apps.maketime.run()

Livetime Cube and Exposure Map

We can now compute the livetime cube and the exposure map.

Livetime Cube

>>> my_apps.expCube['evfile'] = 'SwiftJ1644_filtered_gti.fits'
>>> my_apps.expCube['scfile'] = 'L181102105258F4F0ED2772_SC00.fits'
>>> my_apps.expCube['outfile'] = 'SwiftJ1644_ltCube.fits'
>>> my_apps.expCube['zmax'] = 90
>>> my_apps.expCube['dcostheta'] = 0.025
>>> my_apps.expCube['binsz'] = 1
>>> my_apps.expCube.run()

Exposure Map

Tools that do not have a wrapper in gt_apps (gtltcubesun, for example, which is used for solar and lunar analyses) can still be accessed from python through the GtApp module; command() displays the command line that would be run:

>>> from GtApp import GtApp
>>> expCubeSun = GtApp('gtltcubesun','Likelihood')
>>> expCubeSun.command()

This is not needed for the SwiftJ1644 analysis. The exposure map itself is computed with the expMap (gtexpmap) tool:

>>> my_apps.expMap['evfile'] = 'SwiftJ1644_filtered_gti.fits'
>>> my_apps.expMap['scfile'] ='L181102105258F4F0ED2772_SC00.fits'
>>> my_apps.expMap['expcube'] ='SwiftJ1644_ltCube.fits'
>>> my_apps.expMap['outfile'] ='SwiftJ1644_expMap.fits'
>>> my_apps.expMap['irfs'] = 'CALDB'
>>> my_apps.expMap['srcrad'] = 20
>>> my_apps.expMap['nlong'] = 120
>>> my_apps.expMap['nlat'] = 120
>>> my_apps.expMap['nenergies'] = 37
>>> my_apps.expMap.run()

Generate XML Model File

We need to create an XML file with all of the sources of interest within the Region of Interest (ROI) of SwiftJ1644 so we can correctly model the background. We'll use the user-contributed LATSourceModel package to create a model file based on the LAT 14-year source catalog (4FGL-DR4). You'll need to download either the XML or FITS version of the catalog from http://fermi.gsfc.nasa.gov/ssc/data/access/lat/14yr_catalog/ and put it in your working directory. Install the LATSourceModel package from the user-contributed software area by following the instructions on the linked GitHub page. Also make sure you have the most recent galactic diffuse and isotropic model files, which can be found here; the diffuse background models are also packaged with your installation of the Fermitools at $FERMI_DIR/refdata/fermi/galdiffuse/.

Now that we have all of the files we need, we can generate the model file in python:

>>> from LATSourceModel import SourceList
>>> mymodel = SourceList(catalog_file='gll_psc_v32.xml',ROI='SwiftJ1644_filtered_gti.fits',output_name='SwiftJ1644_model.xml',DR=4)
>>> mymodel.make_model()
Creating spatial and spectral model from the 4FGL DR-4 catalog: gll_psc_v32.xml
Added 221 point sources and 0 extended sources.
Building ds9-style region file...done!
File saved as ROI_SwiftJ1644_model.reg.

Compute the Diffuse Source Responses

The gtdiffrsp tool will add one column to the event data file for each diffuse source. The diffuse response depends on the instrument response function (IRF), which must be in agreement with the selection of events, i.e., the event class and event type used in the analysis. Since we are using SOURCE-class events, irfs='CALDB' will select the P8R3_SOURCE_V3 IRF for this tool.

>>> my_apps.diffResps['evfile'] = 'SwiftJ1644_filtered_gti.fits'
>>> my_apps.diffResps['scfile'] = 'L181102105258F4F0ED2772_SC00.fits'
>>> my_apps.diffResps['srcmdl'] = 'SwiftJ1644_model.xml'
>>> my_apps.diffResps['irfs'] = 'CALDB'
>>> my_apps.diffResps.run()

Run the Likelihood Analysis

First, import the pyLikelihood module and then the UnbinnedAnalysis functions. For more details on the pyLikelihood module, check out the pyLikelihood Usage Notes.

>>> import pyLikelihood
>>> from UnbinnedAnalysis import *
>>> obs = UnbinnedObs('SwiftJ1644_filtered_gti.fits','L181102105258F4F0ED2772_SC00.fits',expMap='SwiftJ1644_expMap.fits',expCube='SwiftJ1644_ltCube.fits',irfs='CALDB')

>>> like = UnbinnedAnalysis(obs,'SwiftJ1644_model.xml',optimizer='Minuit')
>>> like.tol = 0.1
>>> likeobj = pyLike.Minuit(like.logLike)
>>> like.fit(verbosity=0,covar=True,optObject=likeobj)
13846.995
>>> likeobj.getQuality()
2
>>> like.logLike.writeXml('Swift_fit1.xml')

The value returned by getQuality() corresponds to the MINUIT fit quality. A "good" fit corresponds to a value of "fit quality = 3"; if you get a lower value, it is likely that there is a problem with the error matrix. Now we try the fit with NewMinuit:

>>> like2 = UnbinnedAnalysis(obs,'Swift_fit1.xml',optimizer='NewMinuit')
>>> like2.tol = 0.0001
>>> like2obj = pyLike.NewMinuit(like2.logLike)
>>> like2.fit(verbosity=0,covar=True,optObject=like2obj)
13845.478
>>> print(like2obj.getRetCode())
156

If you get anything other than '0' here, then NewMinuit didn't converge. Sources with low or negative TS tend to hinder convergence, so we start by deleting all sources with TS below 10 and then run the fit again.

>>> sourceDetails = {}
>>> for source in like2.sourceNames():
...    sourceDetails[source] = like2.Ts(source)
... hit RETURN
>>> for source,TS in sourceDetails.items():
...    print(source, TS)
...    if (TS < 10):
...       print("Deleting...")
...       like2.deleteSource(source)
... hit RETURN
>>> like2.fit(verbosity=0,covar=True,optObject=like2obj)
13875.5016
>>> print(like2obj.getRetCode())
0
>>> like2.logLike.writeXml('Swift_ts10.xml')

Since we have achieved convergence, we now add the SwiftJ1644 source to the top of the Swift_ts10.xml model file:
<source name="SwiftJ1644" type="PointSource">
 <spectrum type="PowerLaw2">
  <parameter free="true" max="10000.0" min="0.0001" name="Integral" scale="1e-07" value="1.0"/>
  <parameter free="true" max="5.0" min="0.0" name="Index" scale="-1.0" value="2.0"/>
  <parameter free="false" max="500000.0" min="20.0" name="LowerLimit" scale="1.0" value="100.0"/>
  <parameter free="false" max="500000.0" min="20.0" name="UpperLimit" scale="1.0" value="300000.0"/>
 </spectrum>
 <spatialModel type="SkyDirFunction">
  <parameter free="false" max="360.0" min="-360.0" name="RA" scale="1.0" value="251.2054"/>
  <parameter free="false" max="90.0" min="-90.0" name="DEC" scale="1.0" value="57.5808"/>
 </spatialModel>
</source>
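
If you prefer not to edit the model file by hand, the same block can be inserted from python. The following is just a convenience sketch (not part of the original thread); it assumes the model file written by LATSourceModel has the standard <source_library> opening tag and pastes the SwiftJ1644 definition immediately after that tag:

# Sketch: insert the SwiftJ1644 definition at the top of Swift_ts10.xml.
swift_src = """<source name="SwiftJ1644" type="PointSource">
 <spectrum type="PowerLaw2">
  <parameter free="true" max="10000.0" min="0.0001" name="Integral" scale="1e-07" value="1.0"/>
  <parameter free="true" max="5.0" min="0.0" name="Index" scale="-1.0" value="2.0"/>
  <parameter free="false" max="500000.0" min="20.0" name="LowerLimit" scale="1.0" value="100.0"/>
  <parameter free="false" max="500000.0" min="20.0" name="UpperLimit" scale="1.0" value="300000.0"/>
 </spectrum>
 <spatialModel type="SkyDirFunction">
  <parameter free="false" max="360.0" min="-360.0" name="RA" scale="1.0" value="251.2054"/>
  <parameter free="false" max="90.0" min="-90.0" name="DEC" scale="1.0" value="57.5808"/>
 </spatialModel>
</source>
"""

with open('Swift_ts10.xml') as infile:
    lines = infile.readlines()

# Write the file back out with the new source right after <source_library ...>.
output, inserted = [], False
for line in lines:
    output.append(line)
    if not inserted and '<source_library' in line:
        output.append(swift_src)
        inserted = True

with open('Swift_ts10.xml', 'w') as outfile:
    outfile.writelines(output)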

With the Swift source in the XML file, we can now calculate the upper limit (the paper used an upper energy limit of 10 GeV, so that is what we use here).

>>> like3 = UnbinnedAnalysis(obs,'Swift_ts10.xml',optimizer='NewMinuit')
>>> from UpperLimits import UpperLimits
>>> ul = UpperLimits(like3)
>>> ul['SwiftJ1644'].compute(emin=100,emax=10000)
0 1.0 -13.4300991026 1.00205720998e-07
1 1.4 -8.92520452834 1.40305015072e-07
2 1.8 -3.74480762174 1.80414368585e-07
3 2.2 1.98072667571 2.20529512108e-07
(2.1614544695382939e-07, 2.1562851854024907)
>>> print(ul['SwiftJ1644'].results)
[2.16e-07 ph/cm^2/s for emin=100.0, emax=10000.0, delta(logLike)=1.35]

Note that this upper limit is a photon flux in ph/cm^2/s, not an energy flux in erg/cm^2/s.
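
If you would rather quote the limit as an energy flux, you can fold the photon-flux limit through the spectral shape assumed in the model. A minimal sketch, assuming the PowerLaw2 shape above with the photon index fixed at 2 over the 100 MeV to 10 GeV range used for the limit:

>>> import math
>>> flux_ul = 2.16e-07                 # photon-flux limit in ph/cm^2/s from above
>>> emin, emax = 100.0, 1.0e4          # energy range of the limit in MeV
>>> # mean photon energy in MeV for dN/dE ~ E^-2 between emin and emax
>>> mean_energy = math.log(emax/emin) / (1.0/emin - 1.0/emax)
>>> flux_ul * mean_energy * 1.602e-6   # energy flux in erg/cm^2/s (~1.6e-10)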

Binned Data Upper Limits

In the case of binned data, one needs to follow the standard binned likelihood analysis or its python version in order to generate the livetime cube, counts cube, source maps, and exposure maps. The upper limit steps are nearly identical to the previous section, but one needs to import BinnedAnalysis and use BinnedObs with the appropriate files instead:

>>> import pyLikelihood
>>> from BinnedAnalysis import *
>>> obs = BinnedObs(srcMaps='file_name',binnedExpMap='file_name',expCube='file_name',irfs='CALDB')

>>> like = BinnedAnalysis(obs,'XML_file_name',optimizer='NewMinuit')
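
From here, the fitting and upper limit steps mirror the unbinned case. For example (a sketch only: the file names above are placeholders for your own binned products, and 'SwiftJ1644' assumes your model contains the same source definition used earlier):

>>> like.tol = 0.0001
>>> likeobj = pyLike.NewMinuit(like.logLike)
>>> like.fit(verbosity=0,covar=True,optObject=likeobj)
>>> from UpperLimits import UpperLimits
>>> ul = UpperLimits(like)
>>> ul['SwiftJ1644'].compute(emin=100,emax=10000)
>>> print(ul['SwiftJ1644'].results)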

Last updated by: J. Eggen 09/13/2023