IRO Journals

Journal of Artificial Intelligence and Capsule Networks

Sentiment Analysis of Nepali COVID-19 Tweets Using NB, SVM and LSTM
Volume-3 | Issue-3

Deniable Authentication Encryption for Privacy Protection using Blockchain
Volume-3 | Issue-3

Blockchain-Enabled Federated Learning on Kubernetes for Air Quality Prediction Applications
Volume-3 | Issue-3

Smart Fashion: A Review of AI Applications in Virtual Try-On & Fashion Synthesis
Volume-3 | Issue-4

Hybrid Parallel Image Processing Algorithm for Binary Images with Image Thinning Technique
Volume-3 | Issue-3

Smart Medical Nursing Care Unit based on Internet of Things for Emergency Healthcare
Volume-3 | Issue-4

QoS-aware Virtual Machine (VM) for Optimal Resource Utilization and Energy Conservation
Volume-3 | Issue-3

Probabilistic Neural Network based Managing Algorithm for Building Automation System
Volume-3 | Issue-4

Fusion based Feature Extraction Analysis of ECG Signal Interpretation - A Systematic Approach
Volume-3 | Issue-1

Multi-scale CNN Approach for Accurate Detection of Underwater Static Fish Image
Volume-3 | Issue-3

Real Time Anomaly Detection Techniques Using PySpark Frame Work
Volume-2 | Issue-1

Audio Tagging Using CNN Based Audio Neural Networks for Massive Data Processing
Volume-3 | Issue-4

Frontiers of AI beyond 2030: Novel Perspectives
Volume-4 | Issue-4

Early Stage Detection of Crack in Glasses by Hybrid CNN Transformation Approach
Volume-3 | Issue-4

Artificial Intelligence Application in Smart Warehousing Environment for Automated Logistics
Volume-1 | Issue-2

Deep Convolution Neural Network Model for Credit-Card Fraud Detection and Alert
Volume-3 | Issue-2

Volume-5 | Issue-3 | September 2023

Trustworthy AI Principles to Face Adversarial Machine Learning: A Novel Study
CH.E.N. Sai Priya, Manas Kumar Yogi
Pages: 227-245
Cite this article
Priya, C. S. & Yogi, M. K. (2023). Trustworthy AI Principles to Face Adversarial Machine Learning: A Novel Study. Journal of Artificial Intelligence and Capsule Networks, 5(3), 227-245. doi:10.36548/jaicn.2023.3.002
Published
20 July 2023
Abstract

Artificial Intelligence (AI) has advanced significantly in recent years, enabling widespread adoption across domains. This progress, however, has also introduced new challenges, particularly in the context of adversarial machine learning. Adversarial attacks exploit vulnerabilities in AI models, causing them to misclassify inputs or otherwise misbehave. Addressing this critical issue requires trustworthy AI systems that can withstand such adversarial threats. This paper presents a comprehensive study covering the types of adversarial machine-learning attacks, the methods adversaries use to launch them, effective defence mechanisms, and potential future directions in the field. It first surveys the principal attack types, describing the characteristics and potential consequences of each and emphasising the risks they pose to privacy, security, and fairness in AI systems. It then examines the tactics adversaries employ, since understanding these tactics enables researchers and practitioners to design robust defence mechanisms. Building on this understanding, the paper reviews a range of defence strategies and emerging research directions, such as secure multi-party computation, differential privacy, and federated learning, that can enhance the resilience of AI models. By understanding the nature of adversarial attacks and implementing effective defence strategies, AI systems can be fortified against malicious manipulation. The findings of this study contribute to the development of trustworthy AI systems, ensuring their resilience, transparency, and fairness.
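The core threat the abstract describes, a small input perturbation flipping a model's prediction, can be illustrated with a minimal sketch. This is not the paper's method: it is a toy linear classifier with illustrative weights, attacked in the style of the fast gradient sign method (FGSM), where for a linear score the loss gradient with respect to the input is proportional to the weight vector, so stepping against sign(w) pushes the score toward the other class.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights and inputs are illustrative values, not from the paper.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input, correctly classified as class 1 (score = 1.0).
x = np.array([0.9, 0.1, 0.4])

# FGSM-style attack: for a linear model, stepping each feature by
# -eps * sign(w) lowers the score as fast as possible per unit of
# L-infinity perturbation budget eps.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prediction flips: 1 0
```

A perturbation of at most 0.5 per feature is enough to drive the score from 1.0 to -0.75 and flip the label, which is the misclassification behaviour the surveyed defences aim to prevent.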

Keywords

Trust, AI, ML, Adversarial Attacks

Full Article PDF

Currently, subscription is the journal's only source of revenue. Subscription income covers operating expenses such as web presence, the online version, pre-press preparation, and staff wages.

To access the full PDF, please complete the payment process.

Subscription Details

Category                                              Fee
Article Access Charge, single article (Indian)        1,200 INR
Article Access Charge, single article (non-Indian)    15 USD
Open Access Fee (Indian)                              5,000 INR
Open Access Fee (non-Indian)                          80 USD
Annual Subscription Fee, 1 journal (Indian)           15,000 INR
Annual Subscription Fee, 1 journal (non-Indian)       200 USD