Model Monitoring with R Markdown, pins, and RStudio Connect

ModelOps or MLOps (for “model/machine learning operations”) focuses on the real-world processes involved in building, deploying, and maintaining a model within an organization’s data infrastructure. Developing a model that meets your organization’s needs and goals is a big accomplishment, but whether that model’s purpose is largely predictive, inferential, or descriptive, the “care and feeding” of your model often doesn’t end when you are done developing it. How is the model going to be deployed? Should you retrain the model on a schedule? Based on changes in model performance? When should you kick off retraining the same kind of model with fresh data versus going back to the drawing board for a full round of model development? These are the kinds of questions that ModelOps deals with.

Model monitoring is a key component of ModelOps, and is typically used to answer questions about how a model is performing over time, when to retrain a model, or what kinds of observations are not being predicted well. There are a lot of solutions out there to address the need for model monitoring, but the R ecosystem offers options that are code-first, flexible, and already in wide use. When we use this approach to model monitoring, we gain all the benefits of handling our data science logic via reusable, extensible code (as opposed to clicks), as well as the support of the enormous open source community surrounding R Markdown and related tools.

In this post, I’ll walk through one option for this approach:

  • Deploy a model as a RESTful API using Plumber
  • Create an R Markdown document to regularly assess model performance by:
    • Sending the deployed model new observations via httr
    • Evaluating how the model performed with these new predictions using model metrics from yardstick
    • Versioning the model metrics using the pins package
    • Summarizing and visualizing the results using flexdashboard
  • Schedule the R Markdown dashboard to regularly evaluate the model and notify us of the results

Predicting injuries from traffic data

I recently developed a model to predict injuries for traffic crashes in Chicago. The data set covers traffic crashes on city streets within Chicago city limits under the jurisdiction of the Chicago Police Department, and the model predicts the probability of a crash involving an injury.

I work on the tidymodels team developing open source tools for modeling and machine learning, but you can use the R ecosystem for monitoring any kind of model, even one trained in Python. I used Plumber to deploy my model on RStudio Connect, but depending on your own organization’s infrastructure, you might consider deploying a Flask API or another appropriate format.
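
As a rough illustration, a Plumber file for serving this kind of model might look something like the sketch below. The fitted workflow object crash_wf, the saved file name, and the endpoint path are assumptions for illustration, not the actual code behind my deployed API.

```r
# plumber.R -- a minimal sketch of a prediction endpoint
# (crash_wf and the saved file name are assumed, not taken from the original post)
library(plumber)

crash_wf <- readRDS("crash_wf.rds")  # a fitted tidymodels workflow saved earlier

#* Predict the probability that a crash involved an injury
#* @post /predict
function(req) {
  new_crashes <- as.data.frame(jsonlite::fromJSON(req$postBody))
  predict(crash_wf, new_data = new_crashes, type = "prob")
}
```

Running plumber::pr("plumber.R") |> plumber::pr_run() serves the API locally; publishing the same file to RStudio Connect gives it a stable URL.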

Monitor model performance

There are new crashes every day, so I would like to measure how my model performs over time. I built a flexdashboard for model monitoring; this dashboard does not use Shiny, but it is published on RStudio Connect as a scheduled report that re-executes automatically once a week. I get an email in my inbox with the new results every time!
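
For context, the dashboard itself is a plain (non-Shiny) R Markdown document; a minimal front matter sketch might look like the following (the title is a placeholder). The weekly schedule and the email notification are configured in the RStudio Connect settings for the published report, not in the document itself.

```yaml
---
title: "Chicago traffic crash model monitoring"
output: flexdashboard::flex_dashboard
---
```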

Model monitoring flexdashboard

The monitoring dashboard uses httr to call two APIs:

  • the city of Chicago’s API for the traffic data to get the latest crashes
  • the model API to make predictions on those new crashes
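
A hedged sketch of what these two calls might look like with httr follows; the Socrata dataset identifier, the date filter, and the Connect URL for the model API are all placeholders rather than the exact values from my dashboard.

```r
library(httr)

# Pull recent crashes from the city of Chicago's Socrata data portal
# (dataset identifier and date filter are placeholders)
crash_resp <- GET(
  "https://data.cityofchicago.org/resource/85ca-t3if.json",
  query = list("$where" = "crash_date > '2021-03-01'")
)
new_crashes <- jsonlite::fromJSON(content(crash_resp, as = "text"))

# Send those crashes to the deployed model API for predictions
# (the Connect URL is a placeholder)
pred_resp <- POST(
  "https://connect.example.com/chicago-crash-model/predict",
  body = jsonlite::toJSON(new_crashes),
  content_type_json()
)
preds <- jsonlite::fromJSON(content(pred_resp, as = "text"))
```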

The dashboard also makes use of pins to publish and version model metrics each time the dashboard updates. I am a huge fan of the pins package in the context of ModelOps; you can even use it to publish and version models themselves!
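
Here is a hedged sketch of that metrics-and-pinning step using yardstick and the current pins interface. The data frame monitoring_df and its columns (injuries as the true outcome, .pred_class and .pred_injuries as the hard and probability predictions), plus the pin name, are assumptions for illustration.

```r
library(dplyr)
library(yardstick)
library(pins)

# Classification metrics for the newest batch of crashes
# (monitoring_df, injuries, .pred_class, .pred_injuries are assumed names)
crash_metrics <- metric_set(accuracy, roc_auc)

new_metrics <- monitoring_df %>%
  crash_metrics(truth = injuries, estimate = .pred_class, .pred_injuries) %>%
  mutate(monitor_date = Sys.Date())

# Pin the metrics to RStudio Connect; every run adds a new version of the pin
board <- board_connect()
pin_write(board, new_metrics, name = "chicago-crash-model-metrics")
```

Because the pin is versioned, pin_versions() and pin_read() make it straightforward to pull historical metrics back into the dashboard and plot performance over time.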

Basic model monitoring should cover at least the model metrics of interest, but in the real world, most data practitioners need to track something specific to their domain or use case. This is why inflexible ModelOps tooling is often frustrating to work with. Using flexible tools like R Markdown, on the other hand, lets me build a model monitoring dashboard with a table of crashes that were misclassified (so I can explore them) and an interactive map of where they occurred around the city of Chicago.
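
As a sketch of those two custom panels, here is one way to build them with DT and leaflet (my package choices, not necessarily the ones in the original dashboard), reusing the assumed monitoring_df columns from above plus latitude and longitude columns.

```r
library(dplyr)
library(DT)
library(leaflet)

# Crashes where the predicted class disagrees with what actually happened
# (monitoring_df, injuries, .pred_class, latitude, longitude are assumed names)
misclassified <- monitoring_df %>%
  filter(injuries != .pred_class)

# Sortable, searchable table of misclassified crashes for closer inspection
datatable(misclassified)

# Interactive map of where the misclassified crashes occurred
leaflet(misclassified) %>%
  addTiles() %>%
  addCircleMarkers(lng = ~longitude, lat = ~latitude, radius = 3)
```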

To learn more

All the code for this demo is available on GitHub, and future posts will address how to use R for other ModelOps endeavors. If you’d like to learn more about how RStudio products like Connect can be used for tasks from serving model APIs to model monitoring and more, set up a meeting with our Customer Success team.
