Top MLOps Tool Repositories On Github

MLOps emerged to offer an end-to-end machine learning development process to design, build and manage reproducible, testable, and evolvable ML-powered software. MLOps allows organisations to collaborate across departments and accelerate workflows, which often hit a wall due to various issues in production. In the following section, we present top MLOps tool repos that are available on Github.

(Image credit: Microsoft Azure)

Here are the top Github MLOps tool repos:

Seldon Core

Seldon Core is an MLOps framework to package, deploy, monitor and manage thousands of production machine learning models. It converts machine learning models (built with TensorFlow, PyTorch, H2O, etc.) or language wrappers (built in Python, Java, etc.) into production microservices.

Seldon Core makes scaling to thousands of production machine learning models possible and provides advanced ML capabilities that include advanced metrics, request logging, explainers, outlier detectors, A/B tests, canaries and more. It makes deployment easy through its pre-packaged inference servers and language wrappers.
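Seldon's Python language wrapper is just a class exposing a predict method, which the seldon-core-microservice CLI then turns into an HTTP/gRPC microservice. A minimal sketch of such a wrapper class (the model logic here is a made-up stand-in, not a real trained model):

```python
# Hypothetical model class following Seldon Core's Python language-wrapper
# convention: any class exposing predict(X, features_names) can be packaged
# into a microservice and a Docker image.
class MyModel:
    def __init__(self):
        # A real model would load trained weights from disk here.
        self.weights = [0.5, 0.5]

    def predict(self, X, features_names=None):
        # Seldon calls predict() with a batch of feature rows.
        return [sum(w * x for w, x in zip(self.weights, row)) for row in X]

print(MyModel().predict([[2.0, 4.0]]))  # → [3.0]
```

The same class, unchanged, can be exercised in plain Python tests before it is ever containerised, which is much of the appeal of the wrapper approach.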

Check the full repo here.

Polyaxon

Polyaxon can be used for building, training, and monitoring large-scale deep learning applications. The platform is built to address reproducibility, automation, and scalability for ML applications. Polyaxon can be deployed into any data centre or cloud provider, or can be hosted. It supports all the major deep learning frameworks such as TensorFlow, MXNet, Caffe, Torch, etc.

According to the team that developed Polyaxon, the platform makes developing ML applications faster, easier, and more efficient by managing workloads with smart container and node management. It even turns GPU servers into shared, self-service resources for teams.

Installation: $ pip install -U polyaxon

Check the full repo here.

Hydrosphere Serving

Hydrosphere Serving offers deployment and versioning options for machine learning models in production. This MLOps platform:

  • Can serve machine learning models developed in any language or framework. It wraps them in a Docker image and deploys them onto the production cluster, exposing HTTP, gRPC and Kafka interfaces.
  • Shadows traffic between different model versions to observe how they behave on the same traffic.
  • Version-controls models and pipelines as they are deployed.
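Traffic shadowing is worth pausing on. The idea — sketched below in plain Python, not Hydrosphere's actual API — is that every request is answered by the live model version while a copy is silently sent to a candidate version whose outputs are only logged for comparison:

```python
# Conceptual sketch of traffic shadowing: callers see only the live
# model's response; the candidate's response is recorded for analysis.
shadow_log = []

def live_model(x):
    return x * 2          # stand-in for the currently deployed version

def candidate_model(x):
    return x * 2 + 1      # stand-in for the new version under evaluation

def serve(x):
    response = live_model(x)                     # answer the caller
    shadow_log.append((x, candidate_model(x)))   # observe candidate silently
    return response

print(serve(10))  # → 20, while shadow_log captures the candidate's answer
```

Because both versions see identical real traffic, differences in their outputs can be compared offline before any user is exposed to the new version.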

Check the full repo here.

Metaflow

Metaflow was initially developed at Netflix to address the needs of its data scientists who work on demanding real-life data science projects. Netflix open-sourced Metaflow in 2019.

Metaflow helps users design their workflow, run it at scale, and deploy it to production. It versions and tracks all experiments and data automatically. Metaflow provides built-in integrations with storage, compute, and machine learning services in the AWS cloud, with no code changes required.
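In Metaflow, a workflow is a Python class whose steps form a graph, and every attribute assigned inside a step is snapshotted as a versioned artifact. The plain-Python sketch below mimics that idea only — it is not Metaflow's API (real flows subclass metaflow.FlowSpec and mark methods with @step):

```python
# Toy illustration of a step-based flow with automatic artifact tracking.
class Flow:
    def __init__(self):
        self.artifacts = {}   # every attribute set in a step is recorded

    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        if name != "artifacts":
            self.artifacts[name] = value   # "versioned" snapshot of the value

    def run(self, steps):
        for step in steps:    # execute steps in declared order
            step(self)
        return self.artifacts

def start(flow):
    flow.data = [1, 2, 3]

def train(flow):
    flow.model_score = sum(flow.data) / len(flow.data)

flow = Flow()
print(flow.run([start, train]))  # → {'data': [1, 2, 3], 'model_score': 2.0}
```

The automatic snapshotting is what lets every run's intermediate data be inspected and resumed later without any explicit save/load code in the steps themselves.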

Check the full repo here.


Kedro

Kedro is an open-source Python framework that can be used for creating reproducible, maintainable and modular data science code. Kedro is built on the foundations of software engineering and applies them to machine-learning code; applied concepts include modularity, separation of concerns and versioning.
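The modularity Kedro encourages can be illustrated in plain Python (this mimics the concept only, not Kedro's API): pipelines are built from pure functions wired together by named inputs and outputs, with the actual data resolved from a catalog rather than hard-coded paths:

```python
# Toy pipeline of pure, reusable functions connected by dataset names.
def clean(raw):
    return [x for x in raw if x is not None]

def average(cleaned):
    return sum(cleaned) / len(cleaned)

# Each node: (function, input dataset name, output dataset name).
pipeline = [(clean, "raw", "cleaned"), (average, "cleaned", "avg")]

catalog = {"raw": [1, None, 3]}    # the data catalog maps names to data
for func, inp, out in pipeline:
    catalog[out] = func(catalog[inp])   # run nodes in order

print(catalog["avg"])  # → 2.0
```

Because each function knows nothing about where its data lives, the same pipeline can be rerun against different catalogs — the separation of concerns the Kedro team emphasises.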

Check the full repo here.

BentoML

As a flexible, high-performance framework, BentoML can be used for serving, managing, and deploying machine learning models. It does this by providing a standard interface for describing a prediction service, abstracting away how to run model inference efficiently and how model serving workloads integrate with cloud infrastructure.

BentoML’s features include:

  • Production-ready online serving.
  • Support for multiple ML frameworks, including PyTorch and TensorFlow.
  • Containerised model server for production deployment with Docker, Kubernetes, etc.
  • Automatic discovery and packaging of all dependencies.
  • Serving of any Python code alongside trained models.
  • Health check endpoint and Prometheus /metrics endpoint for monitoring.
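What a "standard interface for describing a prediction service" buys you can be sketched in plain Python (a conceptual illustration, not BentoML's actual API): endpoints, including a health check, are declared once on a service object, and the same declaration could back an HTTP server, a Docker image, or a batch job:

```python
# Conceptual sketch of a declarative prediction service.
class PredictionService:
    def __init__(self, model):
        self.model = model
        self.routes = {"/healthz": lambda _: "ok"}   # built-in health check

    def api(self, path):
        def register(handler):
            self.routes[path] = handler   # declare an endpoint once
            return handler
        return register

    def handle(self, path, payload=None):
        return self.routes[path](payload)

svc = PredictionService(model=lambda x: x * 10)   # stand-in "model"

@svc.api("/predict")
def predict(payload):
    return svc.model(payload)

print(svc.handle("/healthz"))     # → ok
print(svc.handle("/predict", 4))  # → 40
```

Keeping the service definition separate from any particular transport is what makes the same artifact deployable to Docker, Kubernetes, or a plain Python process.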

Check the full repo here.

Flyte

Flyte provides a production-grade, container-native, type-safe workflow platform optimised for large-scale processing. It is written in Golang and enables highly concurrent, scalable and maintainable workflows for machine learning and data processing. It connects disparate computation backends using a type-safe data dependency graph and records all changes to a pipeline, making it possible to rewind time.
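The "type-safe data dependency graph" idea can be demonstrated in plain Python (this shows the concept only, not Flyte's API): a task's declared output type must match the input type of the task consuming it, and the mismatch is caught when the graph is wired, before anything runs:

```python
# Sketch of type-checked task wiring using Python function annotations.
def wire(producer, consumer):
    out_t = producer.__annotations__["return"]
    in_t = next(t for n, t in consumer.__annotations__.items() if n != "return")
    if out_t is not in_t:
        raise TypeError(f"{producer.__name__} -> {consumer.__name__}: "
                        f"{out_t} != {in_t}")
    return lambda x: consumer(producer(x))   # composed, type-checked pipeline

def extract(path: str) -> list:
    return [len(path)]          # stand-in for reading rows from a file

def total(rows: list) -> int:
    return sum(rows)

pipeline = wire(extract, total)   # passes: extract's list matches total's list
print(pipeline("data.csv"))       # → 8
```

Catching wiring errors at graph-construction time rather than mid-run matters most for the long, expensive workflows Flyte targets.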

Check the full repo here.

