Uses of Class
com.codedstream.otterstream.inference.model.InferenceResult
Uses of InferenceResult in com.codedstream.otterstream.inference.engine
Methods in com.codedstream.otterstream.inference.engine that return InferenceResult:

- InferenceResult InferenceEngine.infer(Map<String,Object> inputs)
  Performs inference on a single input.
- abstract InferenceResult LocalInferenceEngine.infer(Map<String,Object> inputs)
- InferenceResult InferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference on multiple inputs.
- abstract InferenceResult LocalInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
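The signatures above suggest a common engine contract: single inference via infer and batch inference via inferBatch, both yielding an InferenceResult. A minimal illustrative sketch follows; the interface, the result holder, and EchoEngine are hypothetical stand-ins written from the listed signatures alone, not the actual OtterStream classes.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in mirroring the signatures listed above; the real
// types live in com.codedstream.otterstream.inference.* and may differ.
interface InferenceEngine {
    InferenceResult infer(Map<String, Object> inputs);
    InferenceResult inferBatch(Map<String, Object>[] batchInputs);
}

// Minimal result holder for this sketch; the real InferenceResult API
// is not shown on this page and is assumed, not known.
class InferenceResult {
    private final Map<String, Object> outputs;
    InferenceResult(Map<String, Object> outputs) { this.outputs = outputs; }
    Map<String, Object> getOutputs() { return outputs; }
}

// Trivial engine that echoes its inputs, standing in for a real backend
// such as OnnxInferenceEngine or TensorFlowInferenceEngine.
class EchoEngine implements InferenceEngine {
    public InferenceResult infer(Map<String, Object> inputs) {
        return new InferenceResult(new HashMap<>(inputs));
    }
    public InferenceResult inferBatch(Map<String, Object>[] batchInputs) {
        // Per the listed signature, batch inference also yields a single
        // InferenceResult; this sketch merges all batch entries into one map.
        Map<String, Object> merged = new HashMap<>();
        for (Map<String, Object> in : batchInputs) merged.putAll(in);
        return new InferenceResult(merged);
    }
}

public class Demo {
    public static void main(String[] args) {
        InferenceEngine engine = new EchoEngine();
        Map<String, Object> features = new HashMap<>();
        features.put("f1", 0.5);
        InferenceResult result = engine.infer(features);
        System.out.println(result.getOutputs().get("f1")); // prints 0.5
    }
}
```

Keeping the contract as a small interface is what lets the per-backend packages below (onnx, pmml, pytorch, remote, tensorflow, xgboost) plug in interchangeably.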
Uses of InferenceResult in com.codedstream.otterstream.inference.function
Methods in com.codedstream.otterstream.inference.function with parameters of type InferenceResult:

- protected OUT AsyncModelInferenceFunction.transformResult(IN input, InferenceResult result)
  Transforms inference result into output record.
Uses of InferenceResult in com.codedstream.otterstream.onnx
Methods in com.codedstream.otterstream.onnx that return InferenceResult:

- InferenceResult OnnxInferenceEngine.infer(Map<String,Object> inputs)
  Performs single inference on the provided inputs.
- InferenceResult OnnxInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference on multiple input sets.
Uses of InferenceResult in com.codedstream.otterstream.pmml
Methods in com.codedstream.otterstream.pmml that return InferenceResult:

- InferenceResult PmmlInferenceEngine.infer(Map<String,Object> inputs)
  Performs single inference on the provided inputs using the PMML model.
- InferenceResult PmmlInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference by sequentially processing multiple input sets.
Uses of InferenceResult in com.codedstream.otterstream.pytorch
Methods in com.codedstream.otterstream.pytorch that return InferenceResult:

- InferenceResult TorchScriptInferenceEngine.infer(Map<String,Object> inputs)
  Performs single inference on the provided inputs using the PyTorch model.
- InferenceResult TorchScriptInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
  Batch inference implementation.
Uses of InferenceResult in com.codedstream.otterstream.remote
Methods in com.codedstream.otterstream.remote that return InferenceResult:

- abstract InferenceResult RemoteInferenceEngine.infer(Map<String,Object> inputs)
  Performs single inference on remote endpoint (abstract).
- InferenceResult RemoteInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference using sequential processing.
Uses of InferenceResult in com.codedstream.otterstream.remote.http
Methods in com.codedstream.otterstream.remote.http that return InferenceResult:

- InferenceResult HttpInferenceClient.infer(Map<String,Object> inputs)
  Sends inference request to remote HTTP endpoint.
Uses of InferenceResult in com.codedstream.otterstream.remote.sagemaker
Methods in com.codedstream.otterstream.remote.sagemaker that return InferenceResult:

- InferenceResult SageMakerInferenceClient.infer(Map<String,Object> inputs)
  Invokes SageMaker endpoint for inference.
Uses of InferenceResult in com.codedstream.otterstream.remote.vertex
Methods in com.codedstream.otterstream.remote.vertex that return InferenceResult:

- InferenceResult VertexAIInferenceClient.infer(Map<String,Object> inputs)
  Performs single inference using Vertex AI PredictionService.
- InferenceResult VertexAIInferenceClient.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference using Vertex AI native batch support.
Uses of InferenceResult in com.codedstream.otterstream.tensorflow
Methods in com.codedstream.otterstream.tensorflow that return InferenceResult:

- InferenceResult TensorFlowInferenceEngine.infer(Map<String,Object> inputs)
  Performs single inference using TensorFlow SavedModel.
- InferenceResult TensorFlowInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference (simplified implementation).
Uses of InferenceResult in com.codedstream.otterstream.xgboost
Methods in com.codedstream.otterstream.xgboost that return InferenceResult:

- InferenceResult XGBoostInferenceEngine.infer(Map<String,Object> inputs)
  Performs single inference using XGBoost model.
- InferenceResult XGBoostInferenceEngine.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference using XGBoost's efficient matrix operations.
Uses of InferenceResult in com.codedstreams.otterstreams.sql.runtime
Methods in com.codedstreams.otterstreams.sql.runtime that return InferenceResult:

- InferenceResult TensorFlowGraphDefEngine.infer(Map<String,Object> inputs)
- InferenceResult TensorFlowSavedModelEngine.infer(Map<String,Object> inputs)
- InferenceResult TensorFlowGraphDefEngine.inferBatch(Map<String,Object>[] batchInputs)
- InferenceResult TensorFlowSavedModelEngine.inferBatch(Map<String,Object>[] batchInputs)
  Performs batch inference by running each input separately.
- InferenceResult BatchInferenceProcessor.submitAndWait(Map<String,Object> features)
  Submits an inference request and waits for result.
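BatchInferenceProcessor.submitAndWait hints at a common pattern: callers submit single requests, the processor runs them as a batch, and each caller blocks until its own result is ready. The sketch below illustrates that pattern only; MiniBatchProcessor and its echo "model" are hypothetical and the real OtterStream class may work quite differently (e.g. flushing on size or time triggers from a background thread).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of a submit-and-wait batching pattern.
class MiniBatchProcessor {
    private final List<Map<String, Object>> pending = new ArrayList<>();
    private final List<CompletableFuture<Map<String, Object>>> futures = new ArrayList<>();

    // Queue one request; the returned future completes at the next flush.
    synchronized CompletableFuture<Map<String, Object>> submit(Map<String, Object> features) {
        pending.add(features);
        CompletableFuture<Map<String, Object>> f = new CompletableFuture<>();
        futures.add(f);
        return f;
    }

    // Run the accumulated batch (here: an echo "model" that copies each
    // input map) and complete every waiting future with its result.
    synchronized void flush() {
        for (int i = 0; i < pending.size(); i++) {
            futures.get(i).complete(new HashMap<>(pending.get(i)));
        }
        pending.clear();
        futures.clear();
    }

    // Blocking convenience mirroring submitAndWait(Map<String,Object>).
    Map<String, Object> submitAndWait(Map<String, Object> features) {
        CompletableFuture<Map<String, Object>> f = submit(features);
        flush(); // a real processor would flush on size/time triggers instead
        return f.join();
    }
}

public class BatchDemo {
    public static void main(String[] args) {
        MiniBatchProcessor proc = new MiniBatchProcessor();
        Map<String, Object> features = new HashMap<>();
        features.put("score", 0.9);
        System.out.println(proc.submitAndWait(features).get("score")); // prints 0.9
    }
}
```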