Uses of Class
com.codedstream.otterstream.inference.exception.InferenceException
Uses of InferenceException in com.codedstream.otterstream.inference.engine
Methods in `com.codedstream.otterstream.inference.engine` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `InferenceEngine.close()` | Closes the inference engine and releases all resources. |
| `void` | `LocalInferenceEngine.close()` | |
| `InferenceResult` | `InferenceEngine.infer(Map<String,Object> inputs)` | Performs inference on a single input. |
| `abstract InferenceResult` | `LocalInferenceEngine.infer(Map<String,Object> inputs)` | |
| `InferenceResult` | `InferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference on multiple inputs. |
| `abstract InferenceResult` | `LocalInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | |
| `void` | `InferenceEngine.initialize(ModelConfig config)` | Initializes the inference engine with the given configuration. |
| `void` | `LocalInferenceEngine.initialize(ModelConfig config)` | |
| `protected void` | `LocalInferenceEngine.loadModelDirectly(ModelConfig config)` | Override this method for engines that handle their own model loading. |
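Since `initialize`, `infer`, and `close` can all throw `InferenceException`, the engine lifecycle pairs naturally with try-with-resources. The sketch below illustrates that pattern with minimal stand-in types (`InferenceException`, `ModelConfig`, `InferenceEngine`, and `DummyEngine` here are simplified placeholders, not the actual OtterStream classes):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in types for illustration only; the real interfaces live in
// com.codedstream.otterstream.inference.* and are richer than this sketch.
class InferenceException extends Exception {
    InferenceException(String message) { super(message); }
}

class ModelConfig {
    final String modelPath;
    ModelConfig(String modelPath) { this.modelPath = modelPath; }
}

interface InferenceEngine extends AutoCloseable {
    void initialize(ModelConfig config) throws InferenceException;
    Map<String, Object> infer(Map<String, Object> inputs) throws InferenceException;
    @Override
    void close() throws InferenceException;
}

public class LifecycleSketch {
    // Trivial engine that "scores" with a constant, so the sketch runs.
    static class DummyEngine implements InferenceEngine {
        public void initialize(ModelConfig config) { }
        public Map<String, Object> infer(Map<String, Object> inputs) {
            Map<String, Object> out = new HashMap<>();
            out.put("score", 0.5);
            return out;
        }
        public void close() { }
    }

    public static void main(String[] args) {
        Map<String, Object> inputs = new HashMap<>();
        inputs.put("feature", 1.0);
        // try-with-resources guarantees close() runs even if initialize()
        // or infer() throws InferenceException.
        try (InferenceEngine engine = new DummyEngine()) {
            engine.initialize(new ModelConfig("/models/example"));
            System.out.println(engine.infer(inputs).get("score"));
        } catch (InferenceException e) {
            System.err.println("inference failed: " + e.getMessage());
        }
    }
}
```

Because `InferenceEngine` extends `AutoCloseable` in this sketch, the engine is released on every exit path, which matters for engines holding native resources.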
Uses of InferenceException in com.codedstream.otterstream.inference.function
Methods in `com.codedstream.otterstream.inference.function` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `protected void` | `AsyncModelInferenceFunction.initializeEngine()` | Initializes the inference engine lazily. |
Uses of InferenceException in com.codedstream.otterstream.onnx
Methods in `com.codedstream.otterstream.onnx` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `OnnxInferenceEngine.close()` | Closes the engine and releases all native resources. |
| `InferenceResult` | `OnnxInferenceEngine.infer(Map<String,Object> inputs)` | Performs single inference on the provided inputs. |
| `InferenceResult` | `OnnxInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference on multiple input sets. |
| `void` | `OnnxInferenceEngine.initialize(ModelConfig config)` | Initializes the ONNX inference engine with the provided configuration. |
Uses of InferenceException in com.codedstream.otterstream.pmml
Methods in `com.codedstream.otterstream.pmml` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `PmmlInferenceEngine.close()` | Closes the PMML inference engine and releases resources. |
| `InferenceResult` | `PmmlInferenceEngine.infer(Map<String,Object> inputs)` | Performs single inference on the provided inputs using the PMML model. |
| `InferenceResult` | `PmmlInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference by sequentially processing multiple input sets. |
| `void` | `PmmlInferenceEngine.initialize(ModelConfig config)` | Initializes the PMML inference engine by loading and parsing a PMML model file. |
Uses of InferenceException in com.codedstream.otterstream.pytorch
Methods in `com.codedstream.otterstream.pytorch` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `TorchScriptInferenceEngine.close()` | Closes the engine and releases all DJL and native resources. |
| `InferenceResult` | `TorchScriptInferenceEngine.infer(Map<String,Object> inputs)` | Performs single inference on the provided inputs using the PyTorch model. |
| `InferenceResult` | `TorchScriptInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | Batch inference implementation. |
| `void` | `TorchScriptInferenceEngine.initialize(ModelConfig config)` | Initializes the PyTorch inference engine by loading a TorchScript model. |
Uses of InferenceException in com.codedstream.otterstream.remote
Methods in `com.codedstream.otterstream.remote` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `RemoteInferenceEngine.close()` | Closes the engine and releases resources. |
| `abstract InferenceResult` | `RemoteInferenceEngine.infer(Map<String,Object> inputs)` | Performs single inference on remote endpoint (abstract). |
| `InferenceResult` | `RemoteInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference using sequential processing. |
| `void` | `RemoteInferenceEngine.initialize(ModelConfig config)` | Initializes the remote inference engine with endpoint configuration. |
| `abstract boolean` | `RemoteInferenceEngine.validateConnection()` | Validates connection to remote endpoint (abstract). |
Uses of InferenceException in com.codedstream.otterstream.remote.http
Methods in `com.codedstream.otterstream.remote.http` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `HttpInferenceClient.close()` | Closes the HTTP client and releases resources. |
| `InferenceResult` | `HttpInferenceClient.infer(Map<String,Object> inputs)` | Sends inference request to remote HTTP endpoint. |
| `void` | `HttpInferenceClient.initialize(ModelConfig config)` | Initializes the HTTP inference client with connection configuration. |
| `boolean` | `HttpInferenceClient.validateConnection()` | Validates connection to remote endpoint using HTTP HEAD request. |
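A HEAD-based connection check like `HttpInferenceClient.validateConnection()` can be sketched with the standard `java.net.http.HttpClient`. This is an illustrative re-implementation under assumed semantics (treating any status below 400 as reachable), not the client's actual source; the throwaway local server exists only to make the sketch self-contained:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HeadCheckSketch {
    // Illustrative HEAD probe: a 2xx/3xx status counts as reachable,
    // anything else (or a network failure) counts as unreachable.
    static boolean validateConnection(HttpClient client, URI endpoint) {
        try {
            HttpRequest head = HttpRequest.newBuilder(endpoint)
                    .method("HEAD", HttpRequest.BodyPublishers.noBody())
                    .build();
            int status = client
                    .send(head, HttpResponse.BodyHandlers.discarding())
                    .statusCode();
            return status < 400;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // Throwaway local server so the sketch runs without a real endpoint.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            exchange.sendResponseHeaders(200, -1);  // 200 OK, no body
            exchange.close();
        });
        server.start();
        URI uri = URI.create(
                "http://localhost:" + server.getAddress().getPort() + "/");
        System.out.println(validateConnection(HttpClient.newHttpClient(), uri));
        server.stop(0);
    }
}
```

A HEAD probe is cheap because it transfers headers only, which is why it suits a pre-flight connectivity check.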
Uses of InferenceException in com.codedstream.otterstream.remote.sagemaker
Methods in `com.codedstream.otterstream.remote.sagemaker` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `SageMakerInferenceClient.close()` | Closes the SageMaker client and releases AWS resources. |
| `InferenceResult` | `SageMakerInferenceClient.infer(Map<String,Object> inputs)` | Invokes SageMaker endpoint for inference. |
| `void` | `SageMakerInferenceClient.initialize(ModelConfig config)` | Initializes the SageMaker inference client with AWS configuration. |
Uses of InferenceException in com.codedstream.otterstream.remote.vertex
Methods in `com.codedstream.otterstream.remote.vertex` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `InferenceResult` | `VertexAIInferenceClient.infer(Map<String,Object> inputs)` | Performs single inference using Vertex AI PredictionService. |
| `InferenceResult` | `VertexAIInferenceClient.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference using Vertex AI native batch support. |
| `void` | `VertexAIInferenceClient.initialize(ModelConfig modelConfig)` | Initializes the Vertex AI inference client with Google Cloud configuration. |
Uses of InferenceException in com.codedstream.otterstream.tensorflow
Methods in `com.codedstream.otterstream.tensorflow` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `TensorFlowInferenceEngine.close()` | Closes the TensorFlow engine and releases native resources. |
| `InferenceResult` | `TensorFlowInferenceEngine.infer(Map<String,Object> inputs)` | Performs single inference using TensorFlow SavedModel. |
| `InferenceResult` | `TensorFlowInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference (simplified implementation). |
| `void` | `TensorFlowInferenceEngine.initialize(ModelConfig config)` | Initializes the TensorFlow inference engine by loading a SavedModel. |
Uses of InferenceException in com.codedstream.otterstream.xgboost
Methods in `com.codedstream.otterstream.xgboost` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `XGBoostInferenceEngine.close()` | Closes the XGBoost engine and releases native resources. |
| `InferenceResult` | `XGBoostInferenceEngine.infer(Map<String,Object> inputs)` | Performs single inference using XGBoost model. |
| `InferenceResult` | `XGBoostInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference using XGBoost's efficient matrix operations. |
| `void` | `XGBoostInferenceEngine.initialize(ModelConfig config)` | Initializes the XGBoost inference engine by loading a model file. |
Uses of InferenceException in com.codedstreams.otterstreams.sql.runtime
Methods in `com.codedstreams.otterstreams.sql.runtime` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `void` | `TensorFlowGraphDefEngine.close()` | |
| `void` | `TensorFlowSavedModelEngine.close()` | |
| `static InferenceEngine<?>` | `InferenceEngineFactory.createEngine(ModelConfig config)` | |
| `<T> T` | `InferenceRetryHandler.executeWithRetry(Supplier<T> operation, String operationName)` | Executes operation with retry logic. |
| `InferenceResult` | `TensorFlowGraphDefEngine.infer(Map<String,Object> inputs)` | |
| `InferenceResult` | `TensorFlowSavedModelEngine.infer(Map<String,Object> inputs)` | |
| `InferenceResult` | `TensorFlowGraphDefEngine.inferBatch(Map<String,Object>[] batchInputs)` | |
| `InferenceResult` | `TensorFlowSavedModelEngine.inferBatch(Map<String,Object>[] batchInputs)` | Performs batch inference by running each input separately. |
| `protected void` | `TensorFlowGraphDefEngine.loadModelDirectly(ModelConfig config)` | |
| `protected void` | `TensorFlowSavedModelEngine.loadModelDirectly(ModelConfig config)` | |
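The shape of a retry helper like `InferenceRetryHandler.executeWithRetry(Supplier<T>, String)` can be sketched as follows. This is an assumed re-implementation for illustration, not the class's actual source; the `maxAttempts` parameter and the simple fail-fast behavior are made-up details:

```java
import java.util.function.Supplier;

public class RetrySketch {
    // Illustrative only: retries a transient operation a fixed number of
    // times, rethrowing the last failure once attempts are exhausted.
    static <T> T executeWithRetry(Supplier<T> operation, String operationName,
                                  int maxAttempts) {
        RuntimeException last = new IllegalStateException("no attempts made");
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException e) {
                last = e;
                System.out.println(operationName + " attempt " + attempt + " failed");
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // The operation fails twice, then succeeds on the third attempt.
        String result = executeWithRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, "demo", 5);
        System.out.println(result);
    }
}
```

A production handler would typically add backoff between attempts and distinguish retryable from fatal errors; this sketch shows only the retry loop itself.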
Uses of InferenceException in com.codedstreams.otterstreams.sql.udf
Methods in `com.codedstreams.otterstreams.sql.udf` that throw `InferenceException`:

| Modifier and Type | Method | Description |
| --- | --- | --- |
| `Double` | `MLInferenceFunction.eval(Map<String,Object> features, String modelName)` | |
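A SQL UDF whose `eval` can throw `InferenceException` raises a design question: should a bad row fail the whole query, or yield NULL? One common pattern, sketched below with stand-in types rather than the actual `MLInferenceFunction` source, is to catch the exception inside `eval` and return `null` so it surfaces as SQL NULL:

```java
import java.util.HashMap;
import java.util.Map;

public class UdfNullOnFailureSketch {
    // Stand-in exception type; the real one lives in
    // com.codedstream.otterstream.inference.exception.
    static class InferenceException extends Exception {
        InferenceException(String message) { super(message); }
    }

    // Stand-in for model scoring; a real UDF would delegate to an engine.
    static double score(Map<String, Object> features, String modelName)
            throws InferenceException {
        if (!"demo-model".equals(modelName)) {
            throw new InferenceException("unknown model: " + modelName);
        }
        return ((Number) features.getOrDefault("x", 0)).doubleValue() * 2.0;
    }

    // Null-on-failure wrapper: rows with bad inputs yield NULL instead of
    // failing the whole query.
    static Double eval(Map<String, Object> features, String modelName) {
        try {
            return score(features, modelName);
        } catch (InferenceException e) {
            return null;  // surfaces as SQL NULL
        }
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("x", 3);
        System.out.println(eval(row, "demo-model"));
        System.out.println(eval(row, "missing-model"));
    }
}
```

Whether swallowing the exception is appropriate depends on the pipeline; the alternative is to let it propagate and rely on the job's restart and retry machinery.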