This repository contains the code used to perform the analysis described in the paper "A stacked search for spatial coincidences between IceCube neutrinos and radio pulsars" (https://arxiv.org/abs/2306.03427). The code is written in Python 3.10 and uses the following packages: numpy, scipy, matplotlib, pandas, and numba, together with the standard-library multiprocessing module.
where $\mathcal{N}$ is the fraction of neutrino events whose declination lies within the band $\delta \pm 5^\circ$ around the $i$th neutrino event, i.e. the number of events in that band divided by the total number of events.
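For concreteness, a minimal sketch of how such a declination-band fraction could be computed, assuming an array `dec` of event declinations in degrees (the names `band_fraction` and `dec` are illustrative, not taken from the repository):

```python
import numpy as np

def band_fraction(dec, dec_i, half_width=5.0):
    # Fraction of all events whose declination lies within
    # +/- half_width degrees of dec_i, the i-th event's declination.
    in_band = np.abs(dec - dec_i) <= half_width
    return np.count_nonzero(in_band) / dec.size

# Toy usage: declinations drawn uniformly, for illustration only.
rng = np.random.default_rng(0)
dec = rng.uniform(-90.0, 90.0, size=100_000)
print(band_fraction(dec, dec_i=-30.0))
```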
It is also not transparent how the authors derive upper limits from Fig. 1. A statistically robust way to determine upper limits would be to simulate data with varying signal strength, which would show the distribution of the maximum-LLH test statistic (TS) in the presence of signal. For instance, the 90% confidence level sensitivity could be derived by injecting signal at a level such that 90% of the simulated data sets yield a maximum-LLH value larger than that observed in the actual data; a toy version of this procedure is sketched below. To validate that their statistical method is working, the authors should also show that an injected flux is, on average, recovered by the analysis. Given that the signal and background terms have different normalizations, I doubt that this signal-recovery test would succeed.
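To make the proposed procedure concrete, here is a self-contained toy sketch of such a sensitivity scan. The TS model in `toy_ts` is a deliberately simplified stand-in; in a real analysis it would be replaced by scrambling the data, injecting `n_inj` signal events, and re-maximizing the likelihood of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_ts(n_inj, n_trials):
    # Toy stand-in for the maximum-LLH test statistic of one
    # pseudo-experiment: background-only trials fluctuate like a
    # chi^2(1) variable, and injected signal shifts the TS upward.
    bkg = rng.chisquare(df=1, size=n_trials)
    sig = rng.normal(loc=n_inj, scale=np.sqrt(n_inj + 1.0), size=n_trials)
    return bkg + np.clip(sig, 0.0, None)

def sensitivity_90(ts_obs, n_trials=10_000):
    # Smallest injected signal for which >= 90% of pseudo-experiments
    # yield a TS larger than the one observed in the actual data.
    for n_inj in np.arange(0.0, 50.0, 0.5):
        if np.mean(toy_ts(n_inj, n_trials) > ts_obs) >= 0.9:
            return n_inj
    return np.nan

print(sensitivity_90(ts_obs=4.0))
```

The same machinery, run by comparing the fitted $\hat{n}_s$ against the injected value over many trials, would provide the signal-recovery (bias) test requested above.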
The background term appearing in Eq. (3) is normalized to the inverse of the solid angle of a cone with a 5-degree radius. This choice is not motivated, and the term is not a properly normalized PDF. The authors should investigate how their results change as this angle is increased (see the check below). The signal and background terms entering Eq. (1) must have the same normalization for the fitted $n_s$ value to be meaningful.
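For reference, the solid angle of a cone with half-angle $\psi$ is $\Omega = 2\pi(1 - \cos\psi)$, so a flat background density $1/\Omega$ drops by roughly a factor of four when the radius is doubled from $5^\circ$ to $10^\circ$. A quick numerical check (a sketch, not code from the repository):

```python
import numpy as np

def cone_solid_angle(psi_deg):
    # Solid angle (steradians) of a cone with half-angle psi.
    return 2.0 * np.pi * (1.0 - np.cos(np.radians(psi_deg)))

for psi in (5.0, 10.0):
    omega = cone_solid_angle(psi)
    print(f"psi = {psi:4.1f} deg -> Omega = {omega:.4f} sr, "
          f"1/Omega = {1.0 / omega:.1f} 1/sr")
```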