Federated Learning and Explainable AI: A Transparent and Secure Approach to Fraud Detection
Abstract
In the realm of financial fraud detection, the integration of Explainable AI (XAI) and Federated Learning (FL) offers a new way to improve both transparency and privacy protection. This work uses the Paysim1 dataset from Kaggle to classify transactions and detect fraudulent ones with greater accuracy. Existing systems rely mainly on Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), and Stochastic Gradient Descent (SGD) for fraud detection. While these approaches yield valuable insights, they tend to be non-interpretable and require centralized data processing, which raises privacy and transparency concerns. In contrast, our method employs Gradient Boosting Machines (GBMs), Random Forests, and Decision Trees, chosen for their stability, interpretability, and strong performance on complex datasets. In addition, Federated Learning is used to protect privacy by allowing models to be trained on decentralized devices without sharing raw data. This not only preserves data confidentiality but also creates a collaborative learning environment in which fraud detection models can improve in accuracy. This combination of Explainable AI and Federated Learning addresses the pressing need of financial institutions for fraud detection solutions that are both transparent and privacy-preserving.
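To make the federated workflow concrete, the following is a minimal sketch of one federated-style round with Random Forests, one of the model families named above. The abstract does not specify an aggregation rule, so the two-client simulation, the variable names (client_data, global_model), and the merge-of-ensembles step are illustrative assumptions rather than the paper's exact protocol; the point is only that each institution trains on its own data and shares fitted trees, never raw transaction records.

import copy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Simulate two institutions, each holding private transaction data
# (class imbalance mimics the rarity of fraud).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
client_data = [(X[:1000], y[:1000]), (X[1000:], y[1000:])]

# Each client trains a local forest; raw data never leaves the client.
local_models = []
for X_local, y_local in client_data:
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    rf.fit(X_local, y_local)
    local_models.append(rf)

# The server aggregates only the fitted trees into a global ensemble.
global_model = copy.deepcopy(local_models[0])
for rf in local_models[1:]:
    global_model.estimators_ += rf.estimators_
global_model.n_estimators = len(global_model.estimators_)

print(global_model.predict(X[:5]))  # global model, no data pooled

Because each tree in the merged ensemble remains inspectable (feature splits and importances are directly readable), this style of aggregation is compatible with the interpretability goals of XAI, in contrast to averaging opaque neural-network weights.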
