Full statistical models encapsulate the complete information of an experimental result, including the likelihood function given the observed data. Their publication is of vital importance for a lasting legacy of HEP experiments. This is particularly relevant for the accurate reinterpretation of LHC results in the context of different BSM theories. Indeed, the experimental community has taken major steps forward in this regard, including the pyhf framework for publishing full statistical models. However, the systematic publication of such models has not yet been achieved. A fundamental reason is that the likelihoods alone are often complex, high-dimensional functions that are hard to parametrize and, moreover, very time-consuming to evaluate. We therefore turn to Machine Learning (ML) to parametrize LHC likelihoods.
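As a minimal sketch of what evaluating a published full statistical model looks like in practice, the snippet below builds a simple pyhf counting model and queries its likelihood; all yields, uncertainties, and observed counts are illustrative placeholders, not taken from any real analysis, and the calls assume the pyhf >= 0.6 API.

```python
# Minimal sketch: evaluating a full statistical model with pyhf.
# All numbers below are hypothetical placeholders (assumes pyhf >= 0.6).
import pyhf

# Single-channel counting model: signal + background with uncorrelated uncertainty
model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0, 10.0],          # hypothetical signal yields per bin
    bkg=[50.0, 60.0],            # hypothetical background yields per bin
    bkg_uncertainty=[7.0, 8.0],  # absolute background uncertainties per bin
)

observations = [53.0, 65.0]                 # hypothetical observed counts
data = observations + model.config.auxdata  # append auxiliary measurements

# Log-likelihood at the suggested initial parameter point
pars = model.config.suggested_init()
print(model.logpdf(pars, data))

# Observed CLs for the signal-strength hypothesis mu = 1
cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
print(cls_obs)
```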
In this talk, we first introduce the LHC reinterpretation framework and then discuss two approaches to ML likelihood parametrization. In the first, we focus on learning the profile likelihoods of LHC new-physics searches with neural networks (NNs), a fast and practical solution for efficient statistical reinterpretation, with emphasis on their deployment strategy. In the second approach, we deal with the likelihood given the observed data. Taking likelihoods of EFT-based fits as examples, we demonstrate that normalizing flows, a powerful class of generative networks with explicit density estimation, can efficiently describe such complex functions. Finally, we discuss the possibility of learning the full statistical model altogether.
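As a toy sketch of the second approach (not the implementation presented in the talk), the example below trains a small RealNVP-style normalizing flow in PyTorch by maximum likelihood, so that the trained flow yields an explicit, fast-to-evaluate density. A correlated two-dimensional Gaussian stands in for samples from an actual EFT likelihood; the architecture and hyperparameters are illustrative assumptions.

```python
# Toy normalizing-flow density estimation (RealNVP-style affine couplings).
# The 2D Gaussian target is a stand-in for samples from an EFT likelihood.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Coupling layer: transforms one half of x conditioned on the other half.
    Assumes an even-dimensional input."""
    def __init__(self, dim, hidden=64, flip=False):
        super().__init__()
        self.flip = flip
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[..., :self.half], x[..., self.half:]
        if self.flip:
            x1, x2 = x2, x1  # alternate which half conditions the other
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                       # stabilize the log-scale
        y2 = x2 * torch.exp(s) + t
        out = torch.cat([y2, x1], dim=-1) if self.flip else torch.cat([x1, y2], dim=-1)
        return out, s.sum(dim=-1)               # log|det J| of this layer

class Flow(nn.Module):
    def __init__(self, dim=2, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            AffineCoupling(dim, flip=bool(i % 2)) for i in range(n_layers)
        )
        self.base = torch.distributions.MultivariateNormal(
            torch.zeros(dim), torch.eye(dim)
        )

    def log_prob(self, x):
        logdet = x.new_zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            logdet = logdet + ld
        # change-of-variables: log p(x) = log N(z) + log|det dz/dx|
        return self.base.log_prob(x) + logdet

# Hypothetical training samples (stand-in for draws from an EFT likelihood)
target = torch.distributions.MultivariateNormal(
    torch.tensor([0.5, -0.3]), torch.tensor([[1.0, 0.6], [0.6, 1.0]])
)
samples = target.sample((5000,))

flow = Flow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = -flow.log_prob(samples).mean()       # maximum-likelihood training
    loss.backward()
    opt.step()

# The trained flow provides an explicit surrogate density, evaluable anywhere:
print(flow.log_prob(torch.tensor([[0.5, -0.3]])))
```

The NN profile-likelihood approach mentioned first is simpler still: a standard regression of the profiled likelihood ratio over the model parameters, so no explicit density estimation is needed there.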
Diptaparna Biswas