Legal regulations play a pivotal role in shaping citizens' daily lives, yet their complexity often renders them inaccessible to those without specialized legal expertise. Recent advances in natural language processing (NLP) have shown promise in producing summaries of legal texts that enhance their comprehensibility. However, the effectiveness of these summaries, particularly when generated by Large Language Models (LLMs), has not been extensively evaluated with a general public audience, i.e., non-experts. This study evaluates the capability of LLMs, specifically small open-source models and GPT-4o, to summarize Italian legal judgments so that they become understandable to individuals without legal training. To assess the quality and comprehensibility of these summaries, participants answered a questionnaire of comprehension questions formulated by legal experts. While comprehension accuracy does not directly measure a summary's quality, it serves as a strong indicator of its practical usefulness. The findings reveal that although these models are not yet fully up to the task, they show significant promise. The study also found that while human-written summaries led to better comprehension and more accurate responses, they come at a higher cost than AI-generated summaries.