# Binned Likelihood with Energy Dispersion (Python)

The following tutorial shows one way of performing a binned likelihood analysis with energy dispersion enabled. Technical details can be found in the Fermitools energy dispersion documentation. This tutorial assumes that you've already gone through the standard binned likelihood analysis thread. A video tutorial is also available.

You can download this tutorial as a Jupyter notebook and run it interactively. Please see the instructions for using the notebooks with the Fermitools.

## Get the Data

For this thread we use the same data extracted for the binned likelihood thread with the following selections:

• Search Center (RA,Dec) = (193.98,-5.82)
• Start Time (MET) = 239557417 seconds (2008-08-04T15:43:37)
• Stop Time (MET) = 302572802 seconds (2010-08-04T00:00:00)
• Minimum Energy = 100 MeV
• Maximum Energy = 500000 MeV

You'll first need to make a file list with the names of your input event files:

```
[user@localhost]$ ls *PH*.fits > binned_events.txt
```

In the following analysis we assume that you've named your list of data files binned_events.txt.

## Perform Event Selections

You could follow the binned likelihood tutorial and perform your event selections with gtselect, gtmktime, etc. directly from the command line, then switch to pyLikelihood later. Here, though, we're going to do everything from within python. The gt_apps module provides methods to call these tools from within python.

So, let's jump into python:

```
[user@localhost]$ python
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

We first import the gt_apps module to gain access to the tools.

```
>>> import gt_apps as my_apps
```

Start by setting the parameters for gtselect (called 'filter' in python):

```
>>> my_apps.filter['evclass'] = 128
>>> my_apps.filter['evtype'] = 3
>>> my_apps.filter['ra'] = 193.98
>>> my_apps.filter['dec'] = -5.82
>>> my_apps.filter['rad'] = 15
>>> my_apps.filter['emin'] = 100
>>> my_apps.filter['emax'] = 500000
>>> my_apps.filter['zmax'] = 90
>>> my_apps.filter['tmin'] = 239557417
>>> my_apps.filter['tmax'] = 302572802
>>> my_apps.filter['infile'] = '@binned_events.txt'
>>> my_apps.filter['outfile'] = '3C279_filtered.fits'
```

Once this is done, we can run gtselect:

```
>>> my_apps.filter.run()
```

Now, we need to find the GTIs. This is accessed within python via the maketime object:

```
>>> my_apps.maketime['scfile'] = 'L181126210218F4F0ED2738_SC00.fits'
>>> my_apps.maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
>>> my_apps.maketime['roicut'] = 'no'
>>> my_apps.maketime['evfile'] = '3C279_filtered.fits'
>>> my_apps.maketime['outfile'] = '3C279_filtered_gti.fits'
>>> my_apps.maketime.run()
```

## Livetime and Counts Cubes

#### Livetime Cube

We can now compute the livetime cube:

```
>>> my_apps.expCube['evfile'] = '3C279_filtered_gti.fits'
>>> my_apps.expCube['scfile'] = 'L181126210218F4F0ED2738_SC00.fits'
>>> my_apps.expCube['outfile'] = '3C279_ltcube.fits'
>>> my_apps.expCube['zmax'] = 90
>>> my_apps.expCube['dcostheta'] = 0.025
>>> my_apps.expCube['binsz'] = 1
>>> my_apps.expCube.run()
```

#### Counts Cube

The counts cube is the counts from our data file binned in space and energy. All of the steps above use a circular ROI (or a cone, really). Once you switch to binned analysis, you start doing things in squares. Your counts cube can only be as big as the largest square that fits inside the circular ROI you already selected.
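As a quick sanity check of that geometric constraint (a sketch using the selections above, not part of the original thread), the largest square inscribed in the 15-degree-radius ROI has a side of about 21.2 degrees, so a 100-pixel, 0.2-degree-per-pixel counts cube (20 degrees on a side) fits comfortably:

```python
import math

# ROI radius from the gtselect step above (degrees)
roi_radius = 15.0

# Side of the largest square inscribed in the circular ROI:
# the square's diagonal equals the circle's diameter.
max_side = 2 * roi_radius / math.sqrt(2)   # ~21.2 degrees

# Counts-cube footprint: 100 pixels of 0.2 degrees each
cube_side = 100 * 0.2                      # 20 degrees

print(max_side, cube_side, cube_side <= max_side)
```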

```
>>> my_apps.evtbin['evfile'] = '3C279_filtered_gti.fits'
>>> my_apps.evtbin['outfile'] = '3C279_ccube.fits'
>>> my_apps.evtbin['scfile'] = 'NONE'
>>> my_apps.evtbin['algorithm'] = 'CCUBE'
>>> my_apps.evtbin['nxpix'] = 100
>>> my_apps.evtbin['nypix'] = 100
>>> my_apps.evtbin['binsz'] = 0.2
>>> my_apps.evtbin['coordsys'] = 'CEL'
>>> my_apps.evtbin['xref'] = 193.98
>>> my_apps.evtbin['yref'] = -5.82
>>> my_apps.evtbin['axisrot'] = 0
>>> my_apps.evtbin['proj'] = 'AIT'
>>> my_apps.evtbin['ebinalg'] = 'LOG'
>>> my_apps.evtbin['emin'] = 100
>>> my_apps.evtbin['emax'] = 500000
>>> my_apps.evtbin['enumbins'] = 37
>>> my_apps.evtbin.run()
```
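The 37 logarithmic bins between 100 MeV and 500 GeV work out to roughly ten bins per energy decade. As an illustration (not part of the original thread), the bin edges that this LOG binning produces can be sketched with numpy:

```python
import numpy as np

# Energy binning parameters from the evtbin step above
emin, emax, enumbins = 100.0, 500000.0, 37

# 37 logarithmically spaced bins have 38 edges
edges = np.logspace(np.log10(emin), np.log10(emax), enumbins + 1)

# Number of bins per decade of energy
bins_per_decade = enumbins / np.log10(emax / emin)
print(edges[0], edges[-1], round(bins_per_decade, 1))
```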

## Exposure Maps

The binned exposure map is an exposure map binned in space and energy. We first need to import the python version of gtexpcube2, which isn't included in gt_apps by default. This is easy to do (you can import any of the command-line tools into python this way). You can then inspect the tool's parameters with the pars function. Here we generate exposure maps for the entire sky.
```
>>> from GtApp import GtApp
>>> expCube2 = GtApp('gtexpcube2', 'Likelihood')
```
Now we can set the parameters and run it:

```
>>> expCube2['infile'] = '3C279_ltcube.fits'
>>> expCube2['cmap'] = 'none'
>>> expCube2['outfile'] = '3C279_BinnedExpMap.fits'
>>> expCube2['irfs'] = 'P8R3_SOURCE_V3'
>>> expCube2['evtype'] = '3'
>>> expCube2['nxpix'] = 1800
>>> expCube2['nypix'] = 900
>>> expCube2['binsz'] = 0.2
>>> expCube2['coordsys'] = 'CEL'
>>> expCube2['xref'] = 193.98
>>> expCube2['yref'] = -5.82
>>> expCube2['axisrot'] = 0
>>> expCube2['proj'] = 'AIT'
>>> expCube2['ebinalg'] = 'LOG'
>>> expCube2['emin'] = 100
>>> expCube2['emax'] = 500000
>>> expCube2['enumbins'] = 37
>>> expCube2.run()
```
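The map dimensions above are what make this an all-sky exposure map: 1800 by 900 pixels at 0.2 degrees per pixel span the full 360 by 180 degrees of the sky. A quick arithmetic check (illustration only):

```python
# Exposure-map footprint from the gtexpcube2 parameters above
nxpix, nypix, binsz = 1800, 900, 0.2

width_deg = nxpix * binsz    # full 360 degrees of longitude
height_deg = nypix * binsz   # full 180 degrees of latitude
print(width_deg, height_deg)
```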

## Compute source maps

The source maps step convolves the LAT response with your source model, generating a model map for each source in the model for use in the likelihood calculation. We use the same XML file as in the standard binned likelihood analysis. Go ahead and download the XML file to your working directory. You should also download the recommended models for a normal point source analysis, gll_iem_v07.fits and iso_P8R3_SOURCE_V3_v1.txt.

```
>>> my_apps.srcMaps['expcube'] = '3C279_ltcube.fits'
>>> my_apps.srcMaps['cmap'] = '3C279_ccube.fits'
>>> my_apps.srcMaps['srcmdl'] = '3C279_input_model.xml'
>>> my_apps.srcMaps['bexpmap'] = '3C279_BinnedExpMap.fits'
>>> my_apps.srcMaps['outfile'] = '3C279_srcmap.fits'
>>> my_apps.srcMaps['irfs'] = 'P8R3_SOURCE_V3'
>>> my_apps.srcMaps['evtype'] = '3'
>>> my_apps.srcMaps.run()
```

## Run the Likelihood Analysis

First, import the BinnedAnalysis library. Then, create a likelihood object for the dataset. For more details on the pyLikelihood module, check out the pyLikelihood Usage Notes. Initially, we will do an analysis without energy dispersion.

```
>>> import pyLikelihood
>>> from BinnedAnalysis import *
>>> obs = BinnedObs(srcMaps='3C279_srcmap.fits', binnedExpMap='3C279_BinnedExpMap.fits',
...                 expCube='3C279_ltcube.fits', irfs='CALDB')
>>> like = BinnedAnalysis(obs, '3C279_input_model.xml', optimizer='NewMinuit')
```

Perform the fit and print out the results:

```
>>> likeobj = pyLike.NewMinuit(like.logLike)
>>> like.fit(verbosity=0, covar=True, optObject=likeobj)
75689.458
```

Check that NewMinuit converged. If you get anything other than '0', then NewMinuit didn't converge.

```
>>> print(likeobj.getRetCode())
0
```

Print the TS for 3C 279 (4FGL J1256.1-0547):

```
>>> like.Ts('4FGL J1256.1-0547')
27810.844
```
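A TS this large corresponds to an overwhelmingly significant detection: in the asymptotic limit of Wilks' theorem, the detection significance in sigma is approximately the square root of the TS for one additional free parameter. As a quick illustration (not part of the original thread):

```python
import math

# TS value for 4FGL J1256.1-0547 from the fit above
ts = 27810.844

# Approximate significance in sigma (asymptotic, one degree of freedom)
significance = math.sqrt(ts)
print(round(significance, 1))   # ~166.8 sigma
```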

Now we will repeat the likelihood analysis, this time with energy dispersion turned on.

```
>>> import pyLikelihood
>>> from BinnedAnalysis import *
```

At this point, you have to create a BinnedConfig object and pass it to BinnedAnalysis. For the appropriate choice of edisp_bins, please consult the Fermitools energy dispersion documentation.

```
>>> conf = BinnedConfig(edisp_bins=-2)
>>> obs2 = BinnedObs(srcMaps='3C279_srcmap.fits', binnedExpMap='3C279_BinnedExpMap.fits',
...                  expCube='3C279_ltcube.fits', irfs='CALDB')
>>> like2 = BinnedAnalysis(obs2, '3C279_input_model.xml', optimizer='NewMinuit', config=conf)
```

Perform the fit and print out the results:

```
>>> likeobj2 = pyLike.NewMinuit(like2.logLike)
>>> like2.fit(verbosity=0, covar=True, optObject=likeobj2)
75785.905
>>> print(likeobj2.getRetCode())
0
>>> like2.Ts('4FGL J1256.1-0547')
27643.552
```

After verifying that the fit converged, we see that the TS including energy dispersion is a bit lower than what we found neglecting energy dispersion. The effect is most relevant at energies below 300 MeV, but it also induces a smaller systematic offset at higher energies. Please refer to the Fermitools energy dispersion documentation for a more complete explanation.

The FSSC Team 12/8/2020