Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation

Riccardo Cantini, Giada Cosenza, Alessio Orsino, Domenico Talia

Jul 11, 2024

Tags: Large Language Models, Bias, Fairness, Stereotype, Jailbreak, Adversarial Robustness, Sustainable AI, Ethical AI