Egho-Promise, Ehiglator Iyobor; Asante, George; Balisane, Hewa; Aina, Folayo (ORCID: 0000-0002-3795-2406) and Kure, Halima (2025) Towards Improved Privacy in AI and Machine Learning Applications: Challenges and way forward. Journal of Emerging Technologies and Innovative Research (JETIR), 12 (5). k342-k357. ISSN 2349-5162
PDF (VOR) - Published Version. Available under License Creative Commons Attribution Non-commercial (363kB).
Official URL: https://www.jetir.org/view.php?paper=JETIR2505B23
Abstract
Artificial intelligence (AI) and machine learning (ML) systems depend heavily on large datasets to function effectively. These datasets often contain personal details such as names, addresses, account numbers, credit card numbers, health data, and behavioural data. Such enormous amounts of data are typically collected, stored, and analysed, which can lead to privacy violations if not handled sensitively. Privacy violations arise from factors such as inadequate security, illegal access, or hacking, all of which have negative repercussions for the individuals and businesses involved. This study aims to identify the privacy concerns unique to AI and ML applications and assess the efficacy of different privacy-preserving approaches. Specifically, it seeks to identify the privacy challenges facing AI and ML applications and to evaluate the effectiveness of the various privacy-preserving techniques that apply to them. The study used a qualitative research approach based on case studies, with data acquired from secondary sources such as published papers, websites, and publications. It found that recent advances in AI and ML have ushered in a new era for corporate functions, driving efficiency, innovation, and insight across several industries. Although these technologies' use of personal data has been widely adopted, it has raised several privacy concerns. The research identified privacy issues specific to AI and ML applications, including overfitting, data leakage, illegal access, model inversion attacks, re-identification, and privacy audits. It can be concluded that, despite the continuous development of AI and ML technologies and their successful deployment in all fields of human activity, privacy remains one of the most pressing concerns yet to be adequately addressed. Designing AI and ML applications that achieve superior levels of performance while maintaining individual privacy is a complex task that will require a combination of technical, normative, and ethical approaches, as well as the collaborative efforts of technical experts, ethicists, legislators, and users. Techniques such as data anonymisation and pseudonymisation, differential privacy, federated learning, homomorphic encryption, and secure multi-party computation should be used to improve privacy in AI and ML applications.
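To illustrate one of the techniques named in the abstract, the following is a minimal Python sketch of differential privacy using the Laplace mechanism. It is not drawn from the paper itself: the count query, its assumed sensitivity of 1, and the chosen privacy budget epsilon are illustrative assumptions only.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# Assumptions (not taken from the paper): a count query with sensitivity 1
# and a privacy budget epsilon chosen by the data controller.
import numpy as np

def laplace_count(data, predicate, epsilon=1.0, sensitivity=1.0):
    """Return a differentially private count of records matching `predicate`.

    Noise drawn from Laplace(0, sensitivity/epsilon) masks the contribution
    of any single individual to the released statistic.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Usage with a toy dataset of ages (hypothetical values).
ages = [23, 37, 45, 52, 61, 29, 41]
private_count = laplace_count(ages, lambda age: age > 40, epsilon=0.5)
print(f"Noisy count of records with age > 40: {private_count:.2f}")
```

Smaller values of epsilon inject more noise and therefore give stronger privacy guarantees at the cost of accuracy in the released statistic.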