Summed Likelihood Analysis with Python

This sample analysis shows a way of performing a joint likelihood fit on two data selections using the same XML model. This is useful if you want to do any of the following:

• Coanalysis of Front and Back selections (not using the combined IRF)
• Coanalysis of separate time intervals
• Coanalysis of separate energy ranges
• Pass 8 PSF type analysis
• Pass 8 EDISP type analysis

This tutorial also assumes that you've gone through the standard binned likelihood thread using the combined front + back events, against which we will compare our results.

You can download this tutorial as a Jupyter notebook and run it interactively. Please see the instructions for using the notebooks with the Fermitools.

Get the Data

For this thread we use the same data extracted for the binned likelihood thread with the following selections:

• Search Center (RA,Dec) = (193.98,-5.82)
• Radius = 15 degrees
• Start Time (MET) = 239557417 seconds (2008-08-04T15:43:37)
• Stop Time (MET) = 302572802 seconds (2010-08-04T00:00:00)
• Minimum Energy = 100 MeV
• Maximum Energy = 500000 MeV
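Mission Elapsed Time (MET) counts seconds from the Fermi reference epoch, 2001-01-01 00:00:00 UTC. A quick sketch of the conversion in plain Python (this ignores leap seconds, so it can differ from the official tool output by a few seconds):

```python
from datetime import datetime, timedelta

# Fermi MET reference epoch; leap seconds are ignored in this
# approximation, so late METs drift a few seconds from true UTC.
MET_EPOCH = datetime(2001, 1, 1)

def met_to_utc(met_seconds):
    """Convert a MET value (seconds) to an approximate UTC datetime."""
    return MET_EPOCH + timedelta(seconds=met_seconds)

print(met_to_utc(239557417).isoformat())  # start time: 2008-08-04T15:43:37
```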

We've provided direct links to the event files as well as the spacecraft data file if you don't want to take the time to use the download server. For more information on how to download LAT data please see the Extract LAT Data tutorial.

You'll first need to make a file list with the names of your input event files:

```
[user@localhost]$ ls *PH*.fits > binned_events.txt
```

In the following analysis we've assumed that you've named your list of data files binned_events.txt.

Perform Event Selections

You could follow the binned likelihood tutorial to perform your event selections using gtselect, gtmktime, etc. directly from the command line, and then use pyLikelihood later. But we're going to use python throughout. The gt_apps module provides methods to call these tools from within python.

So, let's jump into python:

```
[user@localhost]$ python
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

We first import the gt_apps module to gain access to the tools.

```
>>> import gt_apps as my_apps
```

We need to run gtselect (called 'filter' in python) twice: once to select only the front events, and once to select only the back events. You do this with evtype=1 (front) and evtype=2 (back).

```
>>> my_apps.filter['evclass'] = 128
>>> my_apps.filter['evtype'] = 1
>>> my_apps.filter['ra'] = 193.98
>>> my_apps.filter['dec'] = -5.82
>>> my_apps.filter['rad'] = 15
>>> my_apps.filter['emin'] = 100
>>> my_apps.filter['emax'] = 500000
>>> my_apps.filter['zmax'] = 90
>>> my_apps.filter['tmin'] = 239557417
>>> my_apps.filter['tmax'] = 302572802
>>> my_apps.filter['infile'] = '@binned_events.txt'
>>> my_apps.filter['outfile'] = '3C279_front_filtered.fits'
```

Once this is done, we can run gtselect:

```
>>> my_apps.filter.run()
```

Now, we select the back events:

```
>>> my_apps.filter['evtype'] = 2
>>> my_apps.filter['outfile'] = '3C279_back_filtered.fits'
```

We run gtselect again:

```
>>> my_apps.filter.run()
```

Now, we need to find the GTIs for each data set (front and back). This is accessed within python via the maketime object:

```
>>> my_apps.maketime['scfile'] = 'L181126210218F4F0ED2738_SC00.fits'
>>> my_apps.maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
>>> my_apps.maketime['roicut'] = 'no'
>>> my_apps.maketime['evfile'] = '3C279_front_filtered.fits'
>>> my_apps.maketime['outfile'] = '3C279_front_filtered_gti.fits'
>>> my_apps.maketime.run()
```

We follow a similar procedure for the back events:

```
>>> my_apps.maketime['evfile'] = '3C279_back_filtered.fits'
>>> my_apps.maketime['outfile'] = '3C279_back_filtered_gti.fits'
>>> my_apps.maketime.run()
```

Livetime and Counts Cubes

Livetime Cube

We can now compute the livetime cube. We only need to do this once, since we made identical time cuts and used the same GTI filter for the front and back datasets.

```
>>> my_apps.expCube['evfile'] = '3C279_front_filtered_gti.fits'
>>> my_apps.expCube['scfile'] = 'L181126210218F4F0ED2738_SC00.fits'
>>> my_apps.expCube['outfile'] = '3C279_front_ltcube.fits'
>>> my_apps.expCube['zmax'] = 90
>>> my_apps.expCube['dcostheta'] = 0.025
>>> my_apps.expCube['binsz'] = 1
>>> my_apps.expCube.run()
```

Counts Cube

The counts cube is the counts from our data file binned in space and energy. All of the steps above use a circular ROI (or a cone, really). Once you switch to binned analysis, you start doing things in squares: your counts cube can only be as big as the biggest square that can fit inside the circular ROI you already selected. We start with the front events:

```
>>> my_apps.evtbin['evfile'] = '3C279_front_filtered_gti.fits'
>>> my_apps.evtbin['outfile'] = '3C279_front_ccube.fits'
>>> my_apps.evtbin['algorithm'] = 'CCUBE'
>>> my_apps.evtbin['nxpix'] = 100
>>> my_apps.evtbin['nypix'] = 100
>>> my_apps.evtbin['binsz'] = 0.2
>>> my_apps.evtbin['coordsys'] = 'CEL'
>>> my_apps.evtbin['xref'] = 193.98
>>> my_apps.evtbin['yref'] = -5.82
>>> my_apps.evtbin['axisrot'] = 0
>>> my_apps.evtbin['proj'] = 'AIT'
>>> my_apps.evtbin['ebinalg'] = 'LOG'
>>> my_apps.evtbin['emin'] = 100
>>> my_apps.evtbin['emax'] = 500000
>>> my_apps.evtbin['enumbins'] = 37
>>> my_apps.evtbin.run()
```

We repeat for the back events:

```
>>> my_apps.evtbin['evfile'] = '3C279_back_filtered_gti.fits'
>>> my_apps.evtbin['outfile'] = '3C279_back_ccube.fits'
>>> my_apps.evtbin.run()
```
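The binning choice above can be sanity-checked against the square-in-circle constraint: the largest square that fits inside a circle of radius r has side r·√2. A quick check in plain Python (just arithmetic on the parameter values used above, not a Fermitools call):

```python
import math

roi_radius = 15.0        # degrees, the 'rad' cut used in gtselect
nxpix = nypix = 100      # counts-cube dimensions set above
binsz = 0.2              # degrees per pixel

map_side = nxpix * binsz                 # width of the counts cube in degrees
max_side = roi_radius * math.sqrt(2.0)   # largest inscribed square, ~21.2 deg

# The 20-degree cube fits inside the 15-degree-radius ROI
assert map_side <= max_side
print(round(map_side, 1), round(max_side, 1))  # 20.0 21.2
```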

Exposure Maps

The binned exposure map is an exposure map binned in space and energy. We first need to import the python version of gtexpcube2, which isn't included in gt_apps by default. This is easy to do (you can import any of the command-line tools into python this way). Then you can check out the parameters with the pars function. Here we generate exposure maps for the entire sky.
```
>>> from GtApp import GtApp
>>> expCube2 = GtApp('gtexpcube2', 'Likelihood')
```

Now we can run it, first for the front exposure map and then for the back:

```
>>> expCube2['infile'] = '3C279_front_ltcube.fits'
>>> expCube2['cmap'] = 'none'
>>> expCube2['outfile'] = '3C279_front_BinnedExpMap.fits'
>>> expCube2['irfs'] = 'P8R3_SOURCE_V3'
>>> expCube2['evtype'] = '1'
>>> expCube2['nxpix'] = 1800
>>> expCube2['nypix'] = 900
>>> expCube2['binsz'] = 0.2
>>> expCube2['coordsys'] = 'CEL'
>>> expCube2['xref'] = 193.98
>>> expCube2['yref'] = -5.82
>>> expCube2['axisrot'] = 0
>>> expCube2['proj'] = 'AIT'
>>> expCube2['ebinalg'] = 'LOG'
>>> expCube2['emin'] = 100
>>> expCube2['emax'] = 500000
>>> expCube2['enumbins'] = 37
>>> expCube2.run()
>>> expCube2['infile'] = '3C279_front_ltcube.fits'
>>> expCube2['outfile'] = '3C279_back_BinnedExpMap.fits'
>>> expCube2['evtype'] = '2'
>>> expCube2.run()
```
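Two quick sanity checks on the parameters above (plain Python arithmetic, not a Fermitools call): with 1800 × 900 pixels at 0.2° per pixel, the exposure map covers the entire 360° × 180° sky, and the 37 logarithmic energy bins between 100 MeV and 500 GeV match the counts-cube binning at roughly 10 bins per decade:

```python
import math

nxpix, nypix, binsz = 1800, 900, 0.2        # exposure-map grid set above
print(round(nxpix * binsz), round(nypix * binsz))   # 360 180: the full sky

emin, emax, enumbins = 100.0, 500000.0, 37  # MeV, matching the counts cube
decades = math.log10(emax / emin)           # ~3.7 decades of energy
print(round(enumbins / decades, 1))         # ~10 log bins per decade
```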

Compute source maps

The source maps step convolves the LAT response with your source model, generating maps for each source in the model for use in the likelihood calculation. We use the same XML file as in the standard binned likelihood analysis. Go ahead and download the XML file to your working directory. You should also download the recommended models for a normal point source analysis, gll_iem_v07.fits and iso_P8R3_SOURCE_V3_v1.txt.

```
>>> my_apps.srcMaps['expcube'] = '3C279_front_ltcube.fits'
>>> my_apps.srcMaps['cmap'] = '3C279_front_ccube.fits'
>>> my_apps.srcMaps['srcmdl'] = '3C279_input_model.xml'
>>> my_apps.srcMaps['bexpmap'] = '3C279_front_BinnedExpMap.fits'
>>> my_apps.srcMaps['outfile'] = '3C279_front_srcmap.fits'
>>> my_apps.srcMaps['irfs'] = 'P8R3_SOURCE_V3'
>>> my_apps.srcMaps['evtype'] = '1'
>>> my_apps.srcMaps.run()
```

Similarly for the back events:
```
>>> my_apps.srcMaps['expcube'] = '3C279_front_ltcube.fits'
>>> my_apps.srcMaps['cmap'] = '3C279_back_ccube.fits'
>>> my_apps.srcMaps['srcmdl'] = '3C279_input_model.xml'
>>> my_apps.srcMaps['bexpmap'] = '3C279_back_BinnedExpMap.fits'
>>> my_apps.srcMaps['outfile'] = '3C279_back_srcmap.fits'
>>> my_apps.srcMaps['irfs'] = 'P8R3_SOURCE_V3'
>>> my_apps.srcMaps['evtype'] = '2'
>>> my_apps.srcMaps.run()
```

Run the Likelihood Analysis

First, import the BinnedAnalysis and SummedLikelihood libraries. Then, create a likelihood object for both the front and back datasets. For more details on the pyLikelihood module, check out the pyLikelihood Usage Notes.

```
>>> import pyLikelihood
>>> from BinnedAnalysis import *
>>> from SummedLikelihood import *
>>> front = BinnedObs(srcMaps='3C279_front_srcmap.fits', binnedExpMap='3C279_front_BinnedExpMap.fits', expCube='3C279_front_ltcube.fits', irfs='CALDB')
>>> likefront = BinnedAnalysis(front, '3C279_input_model.xml', optimizer='NewMinuit')
>>> back = BinnedObs(srcMaps='3C279_back_srcmap.fits', binnedExpMap='3C279_back_BinnedExpMap.fits', expCube='3C279_front_ltcube.fits', irfs='CALDB')
>>> likeback = BinnedAnalysis(back, '3C279_input_model.xml', optimizer='NewMinuit')
```

Then, create the summedlikelihood object and add the two likelihood objects, one for the front selection and the second for the back selection.

```
>>> summed_like = SummedLikelihood()
>>> summed_like.addComponent(likefront)
>>> summed_like.addComponent(likeback)
```
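Because the front and back selections are statistically independent, the joint log-likelihood is simply the sum of the component log-likelihoods, and that is essentially what SummedLikelihood evaluates at each trial parameter set. A toy illustration with made-up Poisson counts (plain Python, not the pyLikelihood API; all numbers here are hypothetical):

```python
import math

def poisson_loglike(counts, model):
    """Binned Poisson log-likelihood, dropping the constant log(n!) term."""
    return sum(n * math.log(m) - m for n, m in zip(counts, model))

# Hypothetical binned counts and model predictions for two selections
front_counts, front_model = [12, 7, 3], [11.5, 6.8, 3.2]
back_counts,  back_model  = [9, 5, 2],  [9.4, 4.9, 2.1]

# Summing the two independent log-likelihoods is equivalent to
# evaluating the concatenated dataset in one go
summed = poisson_loglike(front_counts, front_model) + \
         poisson_loglike(back_counts, back_model)
joint  = poisson_loglike(front_counts + back_counts,
                         front_model + back_model)
print(abs(summed - joint) < 1e-9)  # True
```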

Perform the fit and print out the results

```
>>> summedobj = pyLike.NewMinuit(summed_like.logLike)
>>> summed_like.fit(verbosity=0, covar=True, optObject=summedobj)
214628.579
```

Print the TS for 3C 279 (4FGL J1256.1-0547):

```
>>> summed_like.Ts('4FGL J1256.1-0547')
30191.550
```

We can now compare to the standard binned likelihood analysis that uses only one data set containing both Front and Back event types that are represented by a single, combined IRF set. You will need to download the files created in that analysis thread or rerun this python tutorial with the combined dataset (evtype=3).

```
>>> all = BinnedObs(srcMaps='3C279_binned_srcmaps.fits', binnedExpMap='3C279_binned_allsky_expcube.fits', expCube='3C279_binned_ltcube.fits', irfs='CALDB')
>>> likeall = BinnedAnalysis(all, '3C279_input_model.xml', optimizer='NewMinuit')
```

Perform the fit and print out the results

```
>>> likeallobj = pyLike.NewMinuit(likeall.logLike)
>>> likeall.fit(verbosity=0, covar=True, optObject=likeallobj)
73014.121
```

Print the TS for 3C 279 (4FGL J1256.1-0547):

```
>>> likeall.Ts('4FGL J1256.1-0547')
29261.558
```

The TS for the combined front + back analysis is 29261.558, a bit lower than the 30191.550 we found for the separate front and back analysis. The important difference is that in the separated version of the analysis each event type has a dedicated response function set, instead of the averaged Front+Back response. This should increase the sensitivity and, therefore, the TS value.
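As a rough rule of thumb (Wilks' theorem, one additional free parameter), the detection significance scales as √TS, so both fits correspond to detections around 170σ:

```python
import math

# Rough rule of thumb: significance ~ sqrt(TS) for one additional
# free parameter (Wilks' theorem); the TS values are from the fits above.
ts_summed   = 30191.550   # front + back summed likelihood
ts_combined = 29261.558   # single combined-IRF analysis

print(round(math.sqrt(ts_summed), 1))     # ~173.8
print(round(math.sqrt(ts_combined), 1))   # ~171.1
```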

The FSSC Team 03/15/2019