
Tutorial: Build a Smart Call Center Analysis Gen AI App with MLRun, Gradio and SQLAlchemy

Developing a gen AI app requires multiple engineering resources, but with MLRun the process can be simplified and automated. In this blog post, we walk through a tutorial for building a smart call center analysis application, including one pipeline for generating call data and another for analyzing the calls. For those of you interested in the business aspect, we start with a short overview of how AI is impacting industries.

You can follow the tutorial along with the respective notebook and clone the Git repo. Don’t forget to star us on GitHub when you do! You can also watch the tutorial video.

How AI is Impacting the Economy

AI is changing our economy and the way we work. According to McKinsey, AI’s most substantial impact is in three main areas:

  • Productivity – Improving how businesses are run, from customer interactions to coding to content creation.
  • Product Transformation – Changing how products meet customer needs. This includes conversational interfaces and co-pilots, as well as hyper-personalization, i.e., customer-specific content at a granular level.
  • Redistributing profit pools – AIaaS (AI-as-a-Service) is added to the value chain, resulting in new solutions and entire value chains being replaced.

AI Pitfalls to Avoid

When building a gen AI app and operationalizing LLMs, it’s important to perform the following actions:

  1. Define a value roadmap – Without a clear value roadmap, projects can easily drift from their intended goals. This roadmap aligns the AI initiative with business objectives, ensuring that the development efforts lead to tangible benefits.
  2. Avoid technological and operational debt – Avoiding this debt ensures the long-term sustainability and maintainability of the AI system.
  3. Take into consideration the human experience – Ignoring the human experience can lead to an AI solution that users find difficult or unpleasant to use, impeding adoption and productivity.
  4. Use a scalable and resilient gen AI architecture to ensure you reach production – Otherwise, the architecture might fail under increased loads or during unexpected disruptions.
  5. Implement processes to ensure AI maturity and governance – Without proper processes, the AI system can become unreliable, biased, or non-compliant with regulations. Governance ensures that the AI operates within acceptable ethical and legal boundaries.
  6. Define quantifiable KPIs – Clear KPIs create accountability and focus, ensuring that the project stays on track.

Now let’s dive into the hands-on tutorial.

Tutorial: Building a Gen AI Application for Call Center Analysis

The following tutorial shows how to build an LLM call center analysis application. We’ll show how you can use gen AI to analyze calls between customers and agents and extract insights directly from your audio files.

This will be done with MLRun in a single workflow. MLRun will:

  • Automate the workflows
  • Auto-scale resources
  • Automatically distribute inference jobs to workers
  • Automatically log and parse the values of the workflow steps

As a reminder, you can follow along with the notebook, clone the Git repo and watch the tutorial video.

Installation

  1. First, install MLRun, Gradio and SQLAlchemy and add the required tokens. The project itself is created in the notebook.
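
If you prefer to set things up from scratch, a minimal setup sketch might look like the following; the project name and the token handling are assumptions for illustration, not the demo’s exact values:

```python
# Minimal setup sketch -- package list per the tutorial; project name and token
# handling are assumptions, not copied from the demo notebook.
# pip install mlrun gradio sqlalchemy

import os
import mlrun

# An OpenAI key is only needed if you generate new call data (see the next section)
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"  # placeholder

# Create (or load) the MLRun project that will hold both pipelines
project = mlrun.get_or_create_project(
    name="call-center-demo",  # hypothetical project name
    context="./",
    user_project=True,
)
```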

Data Generation Pipeline

  2. Now it’s time to generate call data. You can skip this step if you already have your own audio files for analysis. We also saved generated data in the Git repo, so you can run the demo without an OpenAI key.

This comprises six steps, some of which are based on MLRun’s Function Hub:

The resulting workflow will look like this:

As you can see, no code is required. More details on each step and when to use it can be found in the documentation.
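
For reference, attaching Function Hub steps and a workflow to the project takes only a few lines; the hub URIs and the workflow file name below are assumptions used for illustration, not necessarily the exact ones in the demo repo:

```python
# Sketch: registering Function Hub steps and a workflow on the project.
# The hub URIs and the workflow file name are assumptions for illustration.
project.set_function("hub://structured_data_generator", name="data-generator")
project.set_function("hub://text_to_audio_generator", name="text-to-audio")

# Point the project at the data-generation workflow definition
project.set_workflow("generate-data", workflow_path="data_generation_workflow.py")
project.save()
```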

  3. Run the workflow by calling the project’s project.run method. You can also configure the workflow with arguments.
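
For example, kicking off the data-generation workflow could look like this; the workflow name and argument keys are illustrative assumptions:

```python
# Run the data-generation workflow by name and wait for it to finish.
# The workflow name and argument keys are illustrative assumptions.
gen_run = project.run(
    name="generate-data",
    arguments={"num_calls": 10, "language": "en"},  # hypothetical arguments
    watch=True,  # block and print progress until the workflow completes
)
```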

Data Analysis Pipeline

  4. Now it’s time for the data analysis pipeline. The steps in this pipeline are:
  • Inserting calls
  • Diarization
  • Transcription
  • PII recognition
  • Analysis
  • Post-processing

And it looks like this:

Similarly, no coding is required here either.
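
If you are curious what such a workflow can look like under the hood, here is a minimal sketch that chains the six steps with MLRun’s pipeline API; every function name, parameter and artifact key below is an assumption for illustration:

```python
# analysis_workflow.py -- a minimal sketch of chaining the analysis steps.
# All function names, parameters and artifact keys are assumptions.
import mlrun
from kfp import dsl


@dsl.pipeline(name="call-analysis")
def pipeline(calls_dir: str):
    # Each step consumes the previous step's output artifact
    insert = mlrun.run_function(
        "insert-calls", params={"calls_dir": calls_dir}, outputs=["calls"]
    )
    diarize = mlrun.run_function(
        "diarization", inputs={"calls": insert.outputs["calls"]}, outputs=["segments"]
    )
    transcribe = mlrun.run_function(
        "transcription",
        inputs={"segments": diarize.outputs["segments"]},
        outputs=["transcripts"],
    )
    recognize = mlrun.run_function(
        "pii-recognition",
        inputs={"transcripts": transcribe.outputs["transcripts"]},
        outputs=["clean_transcripts"],
    )
    analyze = mlrun.run_function(
        "analysis",
        inputs={"transcripts": recognize.outputs["clean_transcripts"]},
        outputs=["analysis"],
    )
    mlrun.run_function("postprocessing", inputs={"analysis": analyze.outputs["analysis"]})
```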

  5. Run the workflow and view the results.
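
Running the analysis workflow is the same one-liner as before; the workflow name and the argument are again assumptions:

```python
# Run the analysis workflow over the generated (or uploaded) audio files.
# The workflow name and argument key are assumptions for illustration.
analysis_run = project.run(
    name="call-analysis",
    arguments={"calls_dir": "./calls"},  # hypothetical path to the audio files
    watch=True,
)

# Each step logs its artifacts (transcripts, analysis table, etc.) to the project,
# so they can be browsed in the MLRun UI or fetched programmatically, e.g.:
# project.get_artifact("analysis").to_dataitem()
```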

Here’s how some of the steps are executed:

  • Analysis – Generating a table with the call summary, its main topic, customer tone, upselling attempts and more:

  6. You can also use the database and the stored calls to develop new applications, such as prompting your LLM to find a specific call in a RAG-based chat app. To hear what a real call sounds like, watch the video of this tutorial.
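
If the calls are stored via SQLAlchemy, a downstream app can query them with a few lines; the connection string, table and column names below are hypothetical:

```python
# Query the call database for use in a downstream app (e.g. a RAG chat bot).
# The connection string, table and column names are hypothetical.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///calls.db")  # placeholder connection string

with engine.connect() as conn:
    rows = conn.execute(
        text("SELECT call_id, topic, summary FROM calls WHERE topic = :topic"),
        {"topic": "billing"},
    )
    for call_id, topic, summary in rows:
        # These summaries could be embedded and passed to the LLM as RAG context
        print(call_id, topic, summary)
```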

Advanced MLRun Capabilities

In addition to simplifying how the pipelines are built and run, MLRun also provides auto-logging, auto-distribution of inference jobs and auto-scaling of resources.
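
For example, each function in a pipeline can declare its own resource requests and limits, and MLRun schedules it accordingly; the function name and the values here are illustrative assumptions:

```python
# Sketch: per-function resource configuration. The function name and the
# request/limit values are illustrative assumptions.
transcribe_fn = project.get_function("transcription")
transcribe_fn.with_requests(cpu=1, mem="4G")
transcribe_fn.with_limits(cpu=2, mem="8G", gpus=1)
```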

Try MLRun for yourself.
