DOI: 10.3390/computers13120311 ISSN: 2073-431X

Open-Source Artificial Intelligence Privacy and Security: A Review

Younis Al-Kharusi, Ajmal Khan, Muhammad Rizwan, Mohammed M. Bait-Suwailam

This paper reviews the privacy and security challenges posed by open-source artificial intelligence (AI) models. The increased use of open-source machine learning models, while beneficial for resource efficiency and collaboration, has introduced significant privacy risks and security vulnerabilities. Key threats include model inversion, membership inference, data leakage, and backdoor attacks, which can expose sensitive data or compromise system integrity. Our review highlights that many open-source models are vulnerable to these attacks because of their transparency and accessibility. We also identify adversarial training, differential privacy (DP), and model sanitization as techniques that can mitigate some of these risks, though balancing transparency against security remains a challenge. The findings underscore the need for continuous research and innovation to ensure that open-source AI models remain both secure and privacy-compliant in increasingly critical applications across various industries.
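As a concrete illustration of the differential-privacy mitigation mentioned in the abstract, the sketch below shows the core mechanics of DP-SGD on a toy logistic-regression task: per-example gradient clipping followed by calibrated Gaussian noise before the parameter update. The dataset, model size, and hyperparameters (CLIP_NORM, NOISE_MULTIPLIER, LR, BATCH, EPOCHS) are illustrative assumptions for this sketch and are not taken from the reviewed paper.

```python
# Minimal DP-SGD sketch (assumed toy setup, not from the paper):
# clip each example's gradient, then add Gaussian noise scaled to the clip bound.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (assumption: 200 examples, 5 features).
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(5)          # logistic-regression weights
CLIP_NORM = 1.0          # per-example gradient clipping bound C
NOISE_MULTIPLIER = 1.1   # Gaussian noise scale sigma, relative to C
LR = 0.1                 # learning rate
BATCH = 32
EPOCHS = 20

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(EPOCHS):
    idx = rng.choice(len(X), size=BATCH, replace=False)

    # Per-example gradients of the logistic loss, shape (BATCH, 5).
    preds = sigmoid(X[idx] @ w)
    per_example_grads = (preds - y[idx])[:, None] * X[idx]

    # Clip each example's gradient to L2 norm <= CLIP_NORM.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / CLIP_NORM)

    # Sum clipped gradients, add calibrated Gaussian noise, average, and update.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=NOISE_MULTIPLIER * CLIP_NORM, size=w.shape
    )
    w -= LR * noisy_sum / BATCH

print("trained (noisy) weights:", w)
```

The clipping bound limits any single example's influence on the update, and the noise masks what remains, which is what makes membership-inference and model-inversion attacks against the released model harder; the privacy budget actually achieved would depend on the noise multiplier, sampling rate, and number of steps.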
