All Classes
| Class | Description |
| --- | --- |
| AsyncInferenceExecutor | Provides a simple abstraction for executing tasks asynchronously using a fixed-size thread pool. |
| AsyncMLInferenceFunction<T,R> | Asynchronous ML inference function for Flink streams. |
| AsyncMLPredictFunction | Async function for non-blocking inference. |
| AsyncModelInferenceFunction<IN,OUT> | Asynchronous function for performing ML inference in Apache Flink streams. |
| AsyncResultHandler<T> | Handles asynchronous inference results. |
| AuthConfig | Configuration for authentication with remote ML inference endpoints. |
| AuthConfig.Builder | Builder for creating AuthConfig instances. |
| BatchInferenceProcessor | Batches inference requests for improved throughput. |
| CacheConfig | Configuration for model and result caching in SQL inference. |
| CacheConfig.Builder | |
| CacheStrategy | Defines caching strategies for ML inference operations in Otter Stream. |
| CEPInferenceIntegration | Integration between Flink CEP and ML inference for pattern-based decisions. |
| ConfigurationValidator | Validates SQL inference configurations. |
| ConfigurationValidator.ValidationResult | Result of configuration validation. |
| DDLParserHelper | Helper for parsing SQL DDL options. |
| DecisionEngine<T> | Generic decision engine abstraction. |
| DroolsDecisionEngine | Stateless Drools-based decision engine. |
| EndpointConfig | |
| EndpointConfig.Builder | |
| FeaturePreprocessor | Preprocesses features before inference (normalization, encoding, etc.). |
| FraudDetectionExample | Example demonstrating real-time fraud detection using OtterStream's ML inference capabilities. |
| FunctionRegistrationHelper | Helper class for registering SQL functions programmatically. |
| HttpInferenceClient | HTTP-based remote inference client for REST API model endpoints. |
| HttpModelLoader | Loads models from HTTP/HTTPS endpoints. |
| InferenceCircuitBreaker | Circuit breaker to prevent cascading failures in inference operations. |
| InferenceCircuitBreaker.State | |
| InferenceConfig | Comprehensive configuration for ML inference operations in Apache Flink streams. |
| InferenceConfig.Builder | Builder for creating InferenceConfig instances with sensible defaults. |
| InferenceContext | Context for inference execution with metadata. |
| InferenceEngine<T> | Core interface for ML inference engines in Otter Stream. |
| InferenceEngine.EngineCapabilities | Describes the capabilities of an inference engine. |
| InferenceEngineFactory | Factory for creating inference engines based on model format. |
| InferenceErrorBuilder | Builder for user-friendly error messages. |
| InferenceException | Exception thrown when inference operations fail. |
| InferenceMetrics | Collects and records metrics for ML inference operations. |
| InferenceMetricsCollector | Collects and tracks inference metrics per model. |
| InferenceMetricsCollector.ModelMetrics | Metrics for a single model. |
| InferenceResult | Container for ML inference results, including predictions and metadata. |
| InferenceRetryHandler | Handles retry logic for failed inference operations. |
| InferenceSession | Wrapper class for ONNX Runtime sessions, providing simplified access to inference capabilities. |
| InputOutputSchema | Schema definition for model inputs and outputs. |
| InputOutputSchema.FieldSchema | |
| InputOutputSchemaExtractor | Extracts input/output schemas from loaded models. |
| JsonFeatureExtractor | Extracts features from JSON strings for model input. |
| LocalInferenceEngine<T> | Abstract base class for local inference engines that load models from files. |
| LocalModelLoader | Loads models from the local filesystem or HDFS. |
| MetricsCollector | Central collector managing metrics for multiple models. |
| MinioModelLoader | Loads models from MinIO object storage. |
| MLInferenceDynamicTableFactory | Factory for creating ML inference table sources. |
| MLInferenceDynamicTableSource | Dynamic table source for ML inference with lookup support. |
| MLInferenceFunction | Flink SQL scalar function for ML inference. |
| MLInferenceFunctionFactory | |
| MLInferenceLookupFunction | Lookup function for temporal joins with ML predictions. |
| MLPredictAggregateFunction | Aggregate function for batch inference over windows. |
| MLPredictAggregateFunction.Accumulator | |
| MLPredictScalarFunction | Flink SQL scalar function for ML model inference. |
| MLPredictTableFunction | Table function that returns multiple rows. |
| ModelCache<K,V> | Thread-safe LRU cache for storing ML model predictions and inference results. |
| ModelCache | Thread-safe LRU cache for loaded inference engines. |
| ModelConfig | Configuration for ML models in the Otter Stream inference framework. |
| ModelConfig.Builder | Builder for creating ModelConfig instances. |
| ModelDescriptor | Metadata descriptor for a registered model. |
| ModelFormat | Enumeration of supported ML model formats in Otter Stream. |
| ModelHealthChecker | Performs periodic health checks on loaded models. |
| ModelHealthChecker.HealthStatus | Health status for a single model. |
| ModelLoader<T> | Interface for loading ML models from various sources. |
| ModelLoader | |
| ModelLoaderFactory | Factory for creating appropriate model loaders based on source type. |
| ModelLoadException | Exception thrown when model loading fails. |
| ModelLoadingContext | Context information for model loading operations. |
| ModelLoadingProgressTracker | Tracks the progress of model loading operations. |
| ModelLoadingProgressTracker.LoadingProgress | Represents loading progress for a single model. |
| ModelMetadata | Immutable metadata container for machine learning models. |
| ModelMetadata.Builder | |
| ModelRegistrationManager | Manages model registration, loading, and lifecycle. |
| ModelRegistry | Central registry for model metadata and lifecycle management. |
| ModelServerConnectionPool | Manages HTTP connection pooling for remote model servers. |
| ModelSession | Manages stateful inference sessions. |
| ModelSource | Interface for different model source implementations. |
| ModelSourceConfig | Configuration for model source locations and loading strategies. |
| ModelSourceConfig.Builder | |
| ModelSourceConfig.SourceType | Enumeration of supported source types. |
| ModelVersionManager | Manages multiple versions of a model. |
| ModelWarmupUtility | Utility for warming up models to avoid cold-start latency. |
| OnnxInferenceEngine | |
| OnnxModelLoader | |
| OtterStreamSQLConstants | Constants used throughout the Otter Stream SQL module. |
| PatternInferenceFunction<T,R> | Process function that applies ML inference to CEP pattern matches. |
| PmmlInferenceEngine | |
| ReinforcementHandler | Handles feedback loops for reinforcement learning scenarios. |
| RemoteInferenceEngine | Abstract base class for remote inference engines that communicate with external model endpoints. |
| ResultPostprocessor | Postprocesses inference results (denormalization, thresholding, etc.). |
| RuleEngineProvider | Provides a rule engine instance for Flink SQL and DataStream pipelines. |
| RuleOutcome | Mutable rule outcome passed as a Drools global. |
| S3ModelLoader | Loads ML models from AWS S3 or S3-compatible storage. |
| SageMakerInferenceClient | AWS SageMaker remote inference client for hosted ML models. |
| SerializationUtils | Serialization utilities for distributing objects in Flink. |
| SqlInferenceConfig | Configuration for SQL-based ML inference operations. |
| SqlInferenceConfig.Builder | |
| TensorConverter | Converts between Java objects and TensorFlow tensors. |
| TensorFlowGraphDefEngine | TensorFlow GraphDef (frozen graph) inference engine. |
| TensorFlowInferenceEngine | TensorFlow SavedModel inference engine using the TensorFlow Java API. |
| TensorFlowSavedModelEngine | TensorFlow SavedModel inference engine. |
| TorchScriptInferenceEngine | |
| TypeUtils | Utilities for Flink type conversions. |
| ValidationUtils | Validation utilities for input data. |
| VertexAIInferenceClient | Google Vertex AI remote inference client for Google Cloud ML models. |
| XGBoostInferenceEngine | XGBoost inference engine for gradient-boosting tree models. |
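AsyncInferenceExecutor is described above as a simple abstraction over a fixed-size thread pool. As a minimal sketch of that idea — the class and method names here are illustrative, not OtterStream's actual API — a fixed pool plus `CompletableFuture` gives non-blocking submission with a future per task:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

/** Illustrative sketch of fixed-pool async execution; not the real AsyncInferenceExecutor. */
class AsyncExecutorSketch {
    private final ExecutorService pool;

    AsyncExecutorSketch(int threads) {
        // Fixed-size pool bounds concurrent inference calls
        this.pool = Executors.newFixedThreadPool(threads);
    }

    /** Runs the task on the pool and returns a future for its result. */
    <T> CompletableFuture<T> submit(Supplier<T> task) {
        return CompletableFuture.supplyAsync(task, pool);
    }

    void shutdown() {
        pool.shutdown();
    }
}
```

Callers attach continuations to the returned future (e.g. `thenAccept`) instead of blocking the stream-processing thread.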
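InferenceCircuitBreaker (and its nested State type) is listed above as a guard against cascading failures. The sketch below shows the classic CLOSED/OPEN/HALF_OPEN state machine that circuit breakers commonly use; the state names, thresholds, and method signatures are assumptions, not the actual enum or API:

```java
/** Illustrative circuit-breaker sketch; state names and thresholds are assumed. */
class CircuitBreakerSketch {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;   // consecutive failures before opening
    private final long openMillis;        // how long to stay open before probing
    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;

    CircuitBreakerSketch(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    /** Returns true if a call may proceed at the given time. */
    synchronized boolean allowRequest(long now) {
        if (state == State.OPEN && now - openedAt >= openMillis) {
            state = State.HALF_OPEN;   // allow one probe request through
        }
        return state != State.OPEN;
    }

    synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED;
    }

    synchronized void recordFailure(long now) {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN;        // stop calling the failing endpoint
            openedAt = now;
        }
    }

    synchronized State state() {
        return state;
    }
}
```

While OPEN, requests fail fast instead of piling up on an unhealthy model endpoint; a successful probe in HALF_OPEN closes the breaker again.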
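InferenceRetryHandler is summarized above as retry logic for failed inference calls. A common shape for such a handler is retry with exponential backoff; this is a hedged sketch of that pattern only — the method name, parameters, and defaults are illustrative, not the class's real interface:

```java
import java.util.concurrent.Callable;

/** Illustrative retry-with-backoff sketch; not the real InferenceRetryHandler API. */
class RetrySketch {
    /** Retries the call up to maxAttempts times, doubling the delay after each failure. */
    static <T> T callWithRetry(Callable<T> call, int maxAttempts, long baseDelayMillis)
            throws Exception {
        Exception last = null;
        long delay = baseDelayMillis;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);   // back off before the next attempt
                    delay *= 2;            // exponential backoff
                }
            }
        }
        throw last;                        // all attempts exhausted
    }
}
```

Production handlers usually add jitter to the delay and classify exceptions so that non-transient errors (e.g. a malformed request) fail immediately rather than being retried.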
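Both ModelCache entries above describe thread-safe LRU caches (one for predictions, one for loaded engines). The eviction idea can be sketched with `LinkedHashMap` in access order; this class is illustrative only and does not reflect the actual ModelCache API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative LRU cache sketch; not the real OtterStream ModelCache. */
class LruCacheSketch<K, V> {
    private final int capacity;
    private final Map<K, V> map;

    LruCacheSketch(int capacity) {
        this.capacity = capacity;
        // accessOrder=true: iteration order becomes least-recently-used first,
        // so evicting the eldest entry evicts the LRU entry.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > LruCacheSketch.this.capacity;
            }
        };
    }

    synchronized V get(K key) {
        return map.get(key);       // also refreshes the entry's recency
    }

    synchronized void put(K key, V value) {
        map.put(key, value);       // may trigger eviction of the LRU entry
    }

    synchronized int size() {
        return map.size();
    }
}
```

Coarse `synchronized` methods keep the sketch short; a production cache would more likely build on a concurrent structure (or a caching library) to avoid serializing all lookups.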