Our paper "DeGAS: Gradient-Based Optimization of Probabilistic Programs without Sampling", by Francesca Randone, Romina Doz, Mirco Tribastone, and Luca Bortolussi, was accepted at TACAS 2026!
Remember SOGA, our probabilistic programming language for inference on programs with both continuous and discrete variables?
DeGAS is its evolution, aimed at solving complex optimization problems.
DeGAS augments the SOGA syntax with parameters and applies a semantic smoothing that ensures the posterior is always differentiable, even in the presence of discrete random variables. This allows you to optimize the parameters using torch's gradient-based optimizers. We have used DeGAS to optimize parameters in CPS models with very promising results: in particular, we were able to optimize complex loss functions, not just standard likelihoods.
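To give a flavor of the idea (this is a generic illustrative sketch, not DeGAS's actual API: the parameter and loss below are hypothetical stand-ins), once the smoothed posterior is a differentiable function of the program parameters, any torch optimizer can tune them:

```python
import torch

# Hypothetical program parameter we want to optimize.
theta = torch.tensor([0.0], requires_grad=True)

def smoothed_loss(theta):
    # Stand-in for a differentiable (smoothed) posterior-based loss;
    # here it simply pulls a Gaussian mean toward 3.0.
    return ((theta - 3.0) ** 2).sum()

# Standard torch gradient-based optimization loop.
opt = torch.optim.Adam([theta], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss = smoothed_loss(theta)
    loss.backward()
    opt.step()

print(theta.item())  # converges toward the optimum at 3.0
```

Because the smoothed loss is differentiable end to end, the same loop works for losses far more complex than a likelihood.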
📖 Pre-print available at: https://arxiv.org/pdf/2601.15167
📦 Replication package available at: https://zenodo.org/records/18197807
👾 DeGAS available at: https://github.com/frarandone/DeGAS (repo under active development)