Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation
Riccardo Cantini, Giada Cosenza, Alessio Orsino, Domenico Talia
Jan 28, 2025
Keywords: Large Language Models, Bias, Fairness, Stereotype, Jailbreak, Adversarial Robustness, Sustainable AI, Ethical AI