How to use input_example in MLFlow logged ONNX model in Databricks to make predictions?
I logged an ONNX model (converted from a PySpark model) in MLflow like this:
import mlflow

with mlflow.start_run() as run:
    mlflow.onnx.log_model(
        onnx_model=my_onnx_model,
        artifact_path="onnx_model",
        input_example=input_example,
    )
where input_example is a Pandas DataFrame that gets saved to the run's artifacts.
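For context, a minimal sketch of what such an input_example might look like; the column names and dtypes here are hypothetical, and ONNX runtimes are typically strict about input dtypes such as float32:

import pandas as pd

# Hypothetical feature columns; in practice these must match the ONNX
# model's declared input names and dtypes
input_example = pd.DataFrame({
    "feature_1": [0.5, 1.2],
    "feature_2": [3.4, 0.7],
}).astype("float32")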
On the Databricks experiments page, I can see the logged model along with an input_example.json file that indeed contains the data I provided as input_example when logging the model.
How can I now use that data to make predictions, to test whether the ONNX model was logged correctly? On the model artifacts page in the Databricks UI, I see:
from mlflow.models import validate_serving_input
model_uri = 'runs:/<some-model-id>/onnx_model'
# The logged model does not contain an input_example.
# Manually generate a serving payload to verify your model prior to deployment.
from mlflow.models import convert_input_example_to_serving_input
# Define INPUT_EXAMPLE via assignment with your own input example to the model
# A valid input example is a data instance suitable for pyfunc prediction
serving_payload = convert_input_example_to_serving_input(INPUT_EXAMPLE)
# Validate the serving payload works on the model
validate_serving_input(model_uri, serving_payload)
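Putting the UI snippet together with the example data, a minimal sketch of one way to run the check, assuming the same input_example DataFrame used at logging time is still in scope, an MLflow version recent enough to provide these helpers, and with <some-model-id> replaced by the real run id:

import mlflow
from mlflow.models import convert_input_example_to_serving_input, validate_serving_input

model_uri = "runs:/<some-model-id>/onnx_model"  # replace with the actual run id

# Build a serving payload from the same example used at logging time
serving_payload = convert_input_example_to_serving_input(input_example)

# Scores the payload against the model; raises if the model cannot serve it
predictions = validate_serving_input(model_uri, serving_payload)

# Alternatively, load the model through the generic pyfunc flavor and predict directly
model = mlflow.pyfunc.load_model(model_uri)
predictions = model.predict(input_example)

If the call succeeds and the predictions look sensible, the model was logged correctly; mismatched input names or dtypes between the DataFrame and the ONNX graph are a common reason for it to fail.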