
Five Strategies to Navigate the Black Box Problem of AIOps

By Anthony Cote, Senior Enterprise Product and Solutions Marketing Manager, NETSCOUT

By now, most of us are familiar with the old data science adage "garbage in, garbage out," which describes how bad data fed into an AI system leads to unreliable results.

Unfortunately, most organizations realize their data is bad only after it has been integrated into their systems, or when those systems have become little more than unpredictable "black boxes" that produce unsatisfying results. And there are a lot of black boxes out there.

Peering Into the Black Box

When an AI system becomes a "black box," its logic, analysis, and decisions are often erroneous and difficult for users—and even for the developers who built the models—to understand. This lack of transparency is hazardous because it makes the system's output hard to trust, especially when the data used to train the model is flawed, biased, or incomplete. It can also create a single point of failure (SPOF): if an AI-driven process or platform misinterprets or overlooks critical issues, the AI becomes an organization-wide vulnerability.

Building prior knowledge into a machine learning (ML) model can make it more reliable, but figuring out what the model needs, and how to include it, isn't always straightforward. Models may perform well on training data yet struggle with real-world data that is noisy, incomplete, or outdated; production data is rarely as clean or consistent as data in a controlled training environment. That mismatch can cause big problems for sophisticated AI technologies, including inefficient automation and inaccurate predictions, as the short sketch below illustrates.
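
As a toy illustration of that gap (not from the article; the dataset, model, and noise levels are all invented for demonstration), the following sketch trains a scikit-learn classifier on clean synthetic data, then scores it on a noisier, partially missing copy of the test set:

```python
# Minimal sketch: a model that scores well on clean test data can degrade
# sharply when the same features arrive noisy or incomplete in production.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Simulate "real-world" conditions: measurement noise plus missing readings.
rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(0, 1.5, X_test.shape)   # sensor/measurement noise
X_noisy[rng.random(X_noisy.shape) < 0.2] = 0.0        # ~20% of values dropped to zero

print("clean test accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("noisy test accuracy:", accuracy_score(y_test, model.predict(X_noisy)))
```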

For example, an AIOps and cybersecurity platform relying on incomplete logs might misinterpret a credential-stuffing attack as the normal rise in server load that accompanies a seasonal traffic spike, potentially allowing attackers to compromise sensitive systems undetected. Likewise, an automated AI program analyzing fragmented data from departments such as IT, finance, and customer service might produce conflicting automated actions, such as different automated responses to the same customer complaint. This would make the AI program hard to trust, turning it into a black box whose outcomes are more likely to be questioned than followed.

Unboxing AIOps Platforms, Automation, and High-fidelity Data

AIOps and cybersecurity platforms, including those from Cisco, ServiceNow, Palo Alto Networks, and Splunk, rely heavily on advanced AI and ML algorithms to analyze machine-generated data and make predictions and recommendations. AI relies on large datasets, making the quality of data provided to these platforms crucial. High-fidelity AI-ready data—accurate, trustworthy, clean, and granular—enhances the ability of AIOps and cybersecurity platforms to deliver precise and actionable insights.

False positives are common when companies run their AI models on data that is less granular, less accurate, and less clean, especially models using deep learning algorithms, whose complex layers make it hard to see how decisions are made.

To avoid the black box problem, organizations should consider adding a few strategies to their AI roadmaps, whether they are using third-party AIOps and security platforms or building custom AI systems:

Robust data governance: Ensure high-quality data through regular audits, validation, and cleaning. AIOps platforms can help automate these tasks and maintain data integrity (a minimal audit-and-clean sketch follows this list).

Explainable AI (XAI): Implement methods that clarify how AI-based systems make decisions, including visualizing decision pathways and explaining the key factors that influence outcomes (see the permutation-importance sketch below).

Model monitoring: Continuously monitor AI-driven systems for performance anomalies and model drift, identify shifts in behavior, and ensure data integrity (see the drift-detection sketch below).

Feedback loops: Ask users to provide input on AI performance. This helps developers refine their algorithms, reduce biases, catch errors, and enhance AI-enabled decision-making (see the feedback-capture sketch below).

Data democratization: Ensure data insights are easy to understand and can be filtered into reports tailored to each stakeholder—from the Chief Information Officer and Chief Information Security Officer to business executives—so everyone can quickly find and review the information they need.
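
To make the first strategy concrete, here is a minimal data-governance sketch in Python. The log fields, checks, and pandas-based approach are illustrative assumptions, not a vendor-prescribed workflow:

```python
import pandas as pd

# Columns a log batch must contain before it is handed to the model (hypothetical).
REQUIRED = ["timestamp", "source_ip", "status_code", "latency_ms"]

def audit_logs(df: pd.DataFrame) -> dict:
    """Produce a simple data-quality report for one batch of log records."""
    missing = [c for c in REQUIRED if c not in df.columns]
    report = {"missing_columns": missing,
              "duplicate_rows": int(df.duplicated().sum())}
    if not missing:
        report["null_rate"] = df[REQUIRED].isna().mean().round(3).to_dict()
        report["negative_latency"] = int((df["latency_ms"] < 0).sum())
    return report

def clean_logs(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, incomplete rows, and physically impossible values.

    Assumes audit_logs reported no missing columns for this batch.
    """
    df = df.drop_duplicates().dropna(subset=REQUIRED)
    return df[df["latency_ms"] >= 0]

batch = pd.DataFrame({
    "timestamp": ["2024-01-01T00:00:00", "2024-01-01T00:00:01"],
    "source_ip": ["10.0.0.1", None],
    "status_code": [200, 500],
    "latency_ms": [12.5, -3.0],
})
print(audit_logs(batch))   # flags the null source_ip and the negative latency
print(clean_logs(batch))   # leaves only the valid first record
```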
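For explainable AI, permutation importance is one widely used, model-agnostic way to surface which inputs drive a model's decisions. The sketch below uses scikit-learn on synthetic data; the feature names are invented:

```python
# Rank features by how much shuffling each one hurts model accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=6,
                           n_informative=3, random_state=1)
features = ["cpu_load", "mem_util", "conn_rate",
            "error_rate", "pkt_loss", "jitter"]   # illustrative names

model = GradientBoostingClassifier(random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for i in result.importances_mean.argsort()[::-1]:
    print(f"{features[i]:<12} {result.importances_mean[i]:.3f}")
```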
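For model monitoring, a common starting point is comparing a live feature's distribution against its training-time baseline. This sketch uses a two-sample Kolmogorov-Smirnov test from SciPy; the data and alert threshold are illustrative choices, not a universal standard:

```python
# Flag a model for review when a production feature drifts from its baseline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training-time distribution
live = rng.normal(loc=0.4, scale=1.2, size=5_000)       # shifted production window

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); flag model for review.")
else:
    print("No significant drift in this window.")
```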
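And for feedback loops, the simplest useful mechanism is recording human verdicts alongside model decisions so that disagreements can feed the next retraining cycle. A minimal sketch, with hypothetical file and field names:

```python
# Capture analyst reviews of AI alerts as labeled examples for retraining.
import csv
from datetime import datetime, timezone

FEEDBACK_FILE = "alert_feedback.csv"   # hypothetical storage location

def record_feedback(alert_id: str, model_verdict: str, analyst_verdict: str) -> None:
    """Append one human review of a model decision to the feedback log."""
    with open(FEEDBACK_FILE, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            alert_id,
            model_verdict,     # what the model said, e.g. "attack"
            analyst_verdict,   # what the human confirmed, e.g. "benign"
        ])

# A disagreement like this becomes a labeled example for the next retrain.
record_feedback("alert-0042", model_verdict="attack", analyst_verdict="benign")
```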

By adopting these strategies, organizations can bring more transparency and structure to their AI-based initiatives. This can help them avoid the problem of “garbage in, garbage out,” reduce false positives, and create more reliable outputs from their AI models.

About the Author

Anthony Cote is Senior Enterprise Product and Solutions Marketing Manager at NETSCOUT.


(Originally posted by Industry Perspectives)