Volume-5 | Issue-3 | September 2023
Published: 20 July 2023
Artificial Intelligence (AI) has advanced significantly in recent years, enabling widespread adoption across many domains. This progress, however, has also given rise to new challenges, particularly adversarial machine learning. Adversarial attacks exploit vulnerabilities in AI models, causing them to misclassify inputs or otherwise misbehave. Addressing this critical issue requires trustworthy AI systems that can withstand such adversarial threats. This paper presents a comprehensive study covering the types of adversarial machine learning attacks, the methods adversaries use to launch them, effective defence mechanisms, and potential future directions for the field. It first surveys the main types of adversarial ML attacks, describing the characteristics and potential consequences of each and emphasising the risks they pose to privacy, security, and fairness in AI systems. It then examines the methods adversaries employ to mount these attacks; understanding these tactics allows researchers and practitioners to develop robust defence mechanisms. Building on this understanding, the paper discusses a range of defence strategies and emerging research areas, such as the integration of secure multi-party computation, differential privacy, and federated learning, that can enhance the resilience of AI models. By understanding the nature of adversarial attacks and implementing effective defence strategies, AI systems can be fortified against malicious manipulation. The findings of this study contribute to the development of trustworthy AI systems, ensuring their resilience, transparency, and fairness.
Keywords: Trust, AI, ML, Adversarial Attacks
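As a concrete illustration of the evasion-style attacks surveyed in the paper, the Fast Gradient Sign Method (FGSM) is a standard attack from the adversarial ML literature (not an algorithm specific to this study): it perturbs each input feature by a small step in the direction that increases the model's loss. A minimal sketch on a toy logistic-regression model, with illustrative weights and inputs:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return [1.0 if vi > 0 else -1.0 if vi < 0 else 0.0 for vi in v]

def fgsm_perturb(w, b, x, y, epsilon):
    # Gradient of binary cross-entropy w.r.t. the input x for logistic
    # regression: d(BCE)/dx = (p - y) * w, where p = sigmoid(w.x + b).
    p = sigmoid(dot(w, x) + b)
    grad = [(p - y) * wi for wi in w]
    # FGSM step: shift each feature by epsilon in the sign of the gradient.
    return [xi + epsilon * gi for xi, gi in zip(x, sign(grad))]

w, b = [2.0, -1.0], 0.0        # toy "trained" weights (illustrative)
x, y = [1.0, 0.5], 1.0         # clean input, true label 1

x_adv = fgsm_perturb(w, b, x, y, epsilon=1.0)
p_clean = sigmoid(dot(w, x) + b)       # ~0.82 -> predicted class 1 (correct)
p_adv = sigmoid(dot(w, x_adv) + b)     # ~0.18 -> predicted class 0 (flipped)
```

A small, bounded perturbation (here epsilon = 1.0 per feature) is enough to flip the prediction, which is precisely the vulnerability that defences such as adversarial training aim to close.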