Workflow Inference API

The Workflow Inference API listens on port 8080 and is only accessible from localhost by default. To change this default, see TorchServe Configuration.

The TorchServe server supports the following APIs:

Predictions API

To get predictions from a workflow, make a REST call to /wfpredict/{workflow_name}:

POST /wfpredict/{workflow_name}

curl Example

curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/kitten_small.jpg

curl http://localhost:8080/wfpredict/myworkflow -T kitten_small.jpg

The result is a JSON object containing the response bytes from the leaf node of the workflow DAG.
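The same request can be made from Python. Below is a minimal sketch using only the standard library, assuming the server is running on the default port and a workflow named `myworkflow` (as in the curl example above) has already been registered:

```python
from urllib.request import Request, urlopen

BASE_URL = "http://localhost:8080"  # default Workflow Inference API address

def wfpredict_endpoint(workflow_name, base_url=BASE_URL):
    """Build the prediction URL for a registered workflow."""
    return f"{base_url}/wfpredict/{workflow_name}"

def wfpredict(workflow_name, image_path, base_url=BASE_URL):
    """POST an image to /wfpredict/{workflow_name} and return the
    response bytes from the leaf node of the workflow DAG."""
    with open(image_path, "rb") as f:
        req = Request(
            wfpredict_endpoint(workflow_name, base_url),
            data=f.read(),
            method="POST",
        )
    with urlopen(req) as resp:
        return resp.read()

# Example usage (assumes the server is up and kitten_small.jpg was downloaded):
# result = wfpredict("myworkflow", "kitten_small.jpg")
```

The `wfpredict_endpoint` and `wfpredict` helper names are illustrative, not part of any TorchServe client library.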
