How to Connect MLRun to an External Monitoring Application

As organizations transition from experimenting with LLMs to deploying gen AI applications and driving business value, data professionals face operationalization challenges. These include hallucinations, bias, model misuse, PII leakage, harmful content, inaccuracy, and more. Detecting and addressing these issues requires robust monitoring solutions in the AI pipeline. 

By making monitoring part of AI pipeline orchestration, data professionals can implement a continuous feedback loop. Monitoring results can be used to fine-tune models, keeping them high-performing, reliable and accurate, and risks are mitigated before they reach production, preserving the integrity and operational stability of gen AI applications. 

MLRun can integrate with any monitoring application, regardless of its ecosystem. This means teams can use MLRun to orchestrate their gen AI application, including tasks like data preparation, model tuning, customization, validation and model optimization, then view monitoring results either in MLRun or in their monitoring application of choice, and feed those results back into the AI pipeline.

How to Integrate Your Monitoring Application with MLRun: 3 Steps to Success

Integrating MLRun with an external monitoring application is simple and straightforward. Here’s how it works:

Step 1: Find the SDK or API of Your External Application

Integration with your monitoring application takes place through its SDK or API. Explore the application's SDK, or consult its documentation to identify the relevant API endpoints, request payloads and response structures.
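
For example, if the tool exposes a REST API, you can probe an endpoint directly before writing any integration code. The sketch below is purely illustrative: the base URL, endpoint, payload shape and auth scheme are hypothetical placeholders, not any real product's API.

```python
# Probe a hypothetical monitoring tool's REST API to learn its request and
# response shapes. All names here (URL, endpoint, fields, token) are placeholders.
import requests

BASE_URL = "https://monitoring.example.com/api/v1"  # hypothetical

response = requests.post(
    f"{BASE_URL}/metrics",
    json={"model": "my-llm", "metric": "drift_score", "value": 0.42},
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
print(response.status_code, response.json())
```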

Step 2: Define a Python Class for Integration

In MLRun, implement a Python class that inherits from MLRun’s ModelMonitoringApplication base class.

This class must include the do_tracking method, which defines the logic for interacting with the external application through the API or SDK.

The do_tracking method returns a list of results as key-value pairs, covering outcomes such as detected drift or model performance metrics. This abstraction keeps the integration compatible with any monitoring application.
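
To make this concrete, here is a minimal sketch of such a class. It assumes MLRun's ModelMonitoringApplicationBase base class and result schema (exact class names and import paths vary by MLRun version), and ExternalMonitorClient is a hypothetical stand-in for your monitoring tool's SDK.

```python
import mlrun.common.schemas.model_monitoring.constants as mm_constants
from mlrun.model_monitoring.applications import (
    ModelMonitoringApplicationBase,
    ModelMonitoringApplicationResult,
)


class ExternalMonitorClient:
    """Hypothetical stand-in for your monitoring tool's SDK."""

    def analyze(self, df):
        # Replace with a real SDK or API call; returns a toy drift score here
        return {"drift_score": 0.42}


class MyMonitoringApp(ModelMonitoringApplicationBase):
    def do_tracking(self, monitoring_context):
        # The window of inference data collected by MLRun for this run
        sample_df = monitoring_context.sample_df

        # Delegate the analysis to the external tool
        report = ExternalMonitorClient().analyze(sample_df)

        # Map the tool's output onto MLRun's generic result objects
        return [
            ModelMonitoringApplicationResult(
                name="data_drift_score",
                value=report["drift_score"],
                kind=mm_constants.ResultKindApp.data_drift,
                status=(
                    mm_constants.ResultStatusApp.detected
                    if report["drift_score"] > 0.5
                    else mm_constants.ResultStatusApp.no_detection
                ),
            )
        ]
```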

Step 3: Register and Deploy the Monitoring Function

After defining the Python class, register it as a monitoring function in MLRun. Use the set_model_monitoring_function method to add the function to your MLRun project and deploy it.
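
As a sketch, assuming the class above lives in a file named monitoring_app.py (file, project and function names are illustrative, and the exact API can vary by MLRun version):

```python
import mlrun

project = mlrun.get_or_create_project("my-genai-project", context="./")

# Model monitoring must be enabled on the project (available in recent MLRun versions)
project.enable_model_monitoring()

# Register the class as a model monitoring function in the project
project.set_model_monitoring_function(
    func="monitoring_app.py",
    application_class="MyMonitoringApp",
    name="my-monitoring-app",
)

# Deploy it; once running, MLRun routes monitoring windows to the application
project.deploy_function("my-monitoring-app")
```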

Once deployed, the monitoring application integrates seamlessly into the MLRun workflow.

You can see an example of this pattern with the open-source Evidently library in the MLRun documentation.

Why Integrate Your Monitoring Application with MLRun?

MLRun offers several key advantages for integrating external monitoring applications:

  1. Generic and Modular Design – Integrate any monitoring tool, whether it’s open-source, an industry-standard application or a custom-built solution.
  2. Ease of Integration – Developers can rely on SDKs or APIs provided by monitoring tools, ensuring compatibility without extensive rework.
  3. Centralized Monitoring – All monitoring activities, regardless of the tool, are centralized within the MLRun environment, so results can be reviewed in one place and fed back into fine-tuning the LLM.
  4. Scalability – Organizations can adapt as their monitoring needs evolve, leveraging MLRun to integrate new tools as required.

Get Started Now

Model monitoring is foundational for maintaining reliable gen AI applications. MLRun simplifies the process by offering a generic, modular approach to integrating external monitoring applications. Whether your organization uses a market-leading tool or a custom-built solution, MLRun can fit seamlessly into your monitoring strategy.

Get started with MLRun today.
