ForestFlow: A Policy-Driven Machine Learning Model Server

ForestFlow is an LF AI Foundation incubation project licensed under the Apache 2.0 license.
It is a scalable, policy-based, cloud-native machine learning model server for easily deploying and managing ML models.

Features

Sized to Fit your Needs
ForestFlow can run as a single instance (on a laptop or server) or as a cluster of nodes that work together and automatically manage and distribute work. It runs natively or as a Docker container, and it also offers native Kubernetes integration for easily deploying on Kubernetes clusters with very little configuration.

Shadow Deployments
Instead of spinning up auxiliary systems for data scientists to test and validate new models against real production traffic, ForestFlow lets you deploy models in Shadow Mode, where they asynchronously mirror live inference requests without impacting user-facing traffic.
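As a rough illustration of the pattern (not ForestFlow's actual API), the sketch below mirrors each request to a shadow model asynchronously while only the primary model's result is returned to the caller; all names are hypothetical:

```python
import asyncio

async def primary_predict(features):
    await asyncio.sleep(0.01)                 # pretend inference latency
    return {"score": 0.91}                    # stand-in for the serving model

async def shadow_predict(features):
    await asyncio.sleep(0.05)
    print("shadow scored:", {"score": 0.88})  # logged, never returned to the caller

async def infer(features):
    # Fire-and-forget the shadow call so it cannot add latency or errors
    # to the user-facing response path.
    asyncio.create_task(shadow_predict(features))
    return await primary_predict(features)

async def main():
    print("caller got:", await infer({"x": 1.0}))
    await asyncio.sleep(0.1)                  # demo only: let the shadow task finish

asyncio.run(main())
```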

Automatic Resource Management
ForestFlow automatically scales down (dehydrates) models and resources when not in use, and automatically re-hydrates models back into memory to serve inference requests as needed, maintaining cost-efficient memory and resource management.
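A minimal sketch of the dehydrate/re-hydrate idea, assuming a simple idle TTL; the names and eviction policy here are illustrative, not ForestFlow's implementation:

```python
import time

IDLE_TTL_SECONDS = 300
loaded = {}          # model_name -> (model_object, last_used_timestamp)

def load_artifact(name):
    print(f"re-hydrating {name} from storage")
    return object()  # stand-in for deserializing the model artifact

def get_model(name):
    now = time.time()
    # Dehydrate any other model that has been idle past the TTL.
    for other, (_, last_used) in list(loaded.items()):
        if other != name and now - last_used > IDLE_TTL_SECONDS:
            print(f"dehydrating idle model {other}")
            del loaded[other]
    # Re-hydrate on demand if the requested model is not in memory.
    if name not in loaded:
        loaded[name] = (load_artifact(name), now)
    model, _ = loaded[name]
    loaded[name] = (model, now)  # refresh the last-used timestamp
    return model
```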

Multi-Tenancy
Models are grouped into Contracts that define a namespace for each use-case and its input features. This means you can deploy models for multiple use-cases and choose between different routing policies to direct inference traffic between the model variants serving each use-case.
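As a purely hypothetical illustration of the Contract concept, a namespace might group the variants serving one use-case together with the policy that routes between them (all field names are invented for this sketch, not ForestFlow's schema):

```python
# Hypothetical contract definition: one namespace per use-case.
contract = {
    "organization": "acme",
    "project": "pricing",
    "contract": "house-price-regression",   # the use-case namespace
    "routing_policy": "latest-phase-in",    # how traffic splits between variants
    "variants": ["gbm-v1", "gbm-v2-shadow"],
}
```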

Policy-Based Routing
Automation is one of the key tenets of ForestFlow. Routing inference traffic between ML models based on time or model performance metrics is as simple as selecting a routing policy for the contract or namespace governing each use-case.
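A minimal sketch of weighted routing between the variants of one contract; the weights are static here, whereas a time- or performance-based policy would recompute them from metrics, and the variant names are invented:

```python
import random

def pick_variant(weights):
    """weights: mapping of variant name -> relative traffic share."""
    names, shares = zip(*weights.items())
    return random.choices(names, weights=shares, k=1)[0]

# Route 90% of traffic to the incumbent and 10% to the newcomer.
print(pick_variant({"gbm-v1": 0.9, "gbm-v2": 0.1}))
```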

Policy-Based Phase-In/Expiration
Choosing when and how much traffic a model receives, and how it is expired and removed, is done by selecting a policy. This enables time- or performance-based canary deployments and automated retirement of models.
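For example, a time-based phase-in policy could grow a new model's traffic share linearly over a configured window; this sketch is illustrative, not ForestFlow's implementation:

```python
def phase_in_share(seconds_since_deploy, window_seconds=3600):
    """Fraction of traffic the new model should receive at a given age."""
    return min(1.0, max(0.0, seconds_since_deploy / window_seconds))

for t in (0, 900, 1800, 3600):
    print(t, f"{phase_in_share(t):.0%}")   # 0%, 25%, 50%, 100%
```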

Cloud Native
ForestFlow was built with the cloud in mind, using technologies that allow for easy containerized deployments. Its pluggable design, based on command-query responsibility segregation (CQRS) and a subscription model, lets us extend what it offers with an ecosystem of microservices.

Streaming Inference
The core of ForestFlow is built on Akka, allowing us to expand the inference interface to support multiple modes, such as streaming inference from queue-like systems like Kafka.
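As a sketch of the streaming pattern, the snippet below consumes inference requests from Kafka using the third-party kafka-python client; the topic name, broker address, and scoring stub are assumptions for illustration, not ForestFlow's internals:

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "inference-requests",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",        # placeholder broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Blocks and scores one inference request per record as it arrives.
for message in consumer:
    features = message.value
    print("would score:", features)            # stand-in for the model call
```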

Payload Logging
Our event-based design allows users to subscribe to events and act on them. ForestFlow comes with event subscribers that log inference requests and results, giving you an opportunity to monitor model performance and take action, providing a feedback loop for performance-based policies.
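A conceptual sketch of the subscription idea, with a subscriber that logs each request/result pair; this is not ForestFlow's actual event API:

```python
import logging

logging.basicConfig(level=logging.INFO)
subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def publish(event):
    # Deliver each event to every registered subscriber.
    for handler in subscribers:
        handler(event)

# A payload-logging subscriber: record request/result pairs for monitoring.
subscribe(lambda e: logging.info("inference request=%s result=%s",
                                 e["request"], e["result"]))

publish({"request": {"x": 1.0}, "result": {"score": 0.91}})
```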

Ease of Use
ForestFlow provides API compatibility with the GraphPipe library and its clients, in addition to native REST APIs for inference. As the landscape continues to evolve, we will continue to revisit how clients interact with ForestFlow to provide a simple yet powerful experience.
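Because ForestFlow is API-compatible with GraphPipe, the stock GraphPipe Python client can be used for inference; the endpoint URL below is a placeholder for your deployment's address, and the input shape is illustrative:

```python
import numpy as np
from graphpipe import remote  # pip install graphpipe

# A single row of float features; shape and dtype depend on your model.
features = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)

# Send the request to a ForestFlow endpoint (placeholder URL).
prediction = remote.execute("http://127.0.0.1:9000", features)
print(prediction)
```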

GitHub

Please visit us on GitHub where our development happens. We invite you to join our community both as a user of ForestFlow and also as a contributor to its development. We look forward to your contributions!

Join the Conversation

ForestFlow maintains three mailing lists. You are invited to join the one that best matches your interests.

ForestFlow-Announce: Top-level milestone messages and announcements

ForestFlow-Technical-Discuss: Technical discussions

ForestFlow-TSC: Technical governance discussions