Documentation
Complete developer guide with detailed documentation for all modules, configuration options, and best practices.

Flink SQL Integration
Use ML inference directly in Flink SQL with UDFs and table functions. Declarative API for seamless integration.

Getting Started
Quick start guide for integrating real-time ML inference into Apache Flink streaming applications in minutes.

Available Modules

ml-inference-core (Foundation)
Core abstractions, configurations, and utilities for all ML inference operations.

otter-stream-sql (New!)
Flink SQL integration for ML inference with UDFs, table functions, and a declarative API.

otter-stream-onnx (Neural Networks)
High-performance ONNX Runtime integration with GPU acceleration support.

otter-stream-tensorflow (SavedModel)
Native TensorFlow SavedModel integration with automatic signature discovery.

otter-stream-pytorch (TorchScript)
PyTorch TorchScript integration via the Deep Java Library, with GPU detection.

otter-streams-xgboost (Gradient Boosting)
High-performance gradient-boosting inference for tabular data using XGBoost4J.

otter-stream-pmml (XML Standard)
PMML support via JPMML for portable model deployment across platforms.

otter-stream-remote (Cloud Services)
Remote inference clients for cloud ML services and HTTP/gRPC endpoints.

otter-stream-examples (Examples)
Production-ready examples demonstrating real-world use cases and best practices.

Why Otter Streams?

High Performance
Optimized for low-latency inference with efficient resource management and parallel processing.

Flink SQL Support
Full SQL integration with UDFs, table functions, and declarative ML inference.

Multi-Framework
Native support for ONNX, TensorFlow, PyTorch, XGBoost, and PMML model formats.

Enterprise Features
Built-in monitoring, caching, error handling, and fault tolerance for production deployments.

Cloud Native
Support for remote inference endpoints including AWS SageMaker, GCP Vertex AI, and Azure ML.

Scalable
Designed to scale horizontally with your Flink cluster for high-throughput workloads.
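
In stock Flink SQL, the declarative integration described above amounts to registering an inference UDF and calling it like any other function. A minimal sketch, assuming a hypothetical function name `ml_predict`, a placeholder implementing class, and an illustrative `transactions` table (none of these are confirmed names from this project):

```sql
-- Register an inference UDF; the class name is a hypothetical placeholder.
CREATE TEMPORARY FUNCTION ml_predict
  AS 'com.example.otter.sql.MlPredictFunction' LANGUAGE JAVA;

-- Score each event as it streams through; columns are illustrative.
SELECT
  transaction_id,
  ml_predict(amount, merchant_id, device_score) AS fraud_score
FROM transactions;
```

Because inference is expressed as an ordinary SQL function, the same statement can run from the Flink SQL client, inside a view, or embedded in a Table API job.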