Building and Running ExecuTorch with Core ML Backend

The Core ML delegate uses the Core ML APIs to run neural networks with Apple's hardware acceleration. For more about Core ML you can read here. In this tutorial we will walk through the steps of lowering a PyTorch model to the Core ML delegate.

What you will learn in this tutorial:
  • How to export the MobileNet V3 model so that it runs on the Core ML backend.

  • How to deploy and run the exported model on a supported Apple device.

Prerequisites (Hardware and Software)

To successfully build and run ExecuTorch's Core ML backend, you'll need the following hardware and software components.

Hardware:

Software:

Setting up your developer environment

  1. Make sure that you have completed the ExecuTorch setup tutorials linked to at the top of this page and set up the environment.

  2. Run install_requirements.sh to install dependencies required by the Core ML backend.

cd executorch
sh backends/apple/coreml/scripts/install_requirements.sh
  3. Install Xcode.

  4. Install Xcode Command Line Tools.

xcode-select --install

Build

AOT (Ahead-of-time) components:

Exporting a Core ML delegated Program:

  • In this step, you will lower the MobileNet V3 model to the Core ML backend and export the ExecuTorch program. You'll then deploy and run the exported program on a supported Apple device using the Core ML backend.

cd executorch

# Generates ./mv3_coreml_all.pte file.
python3 -m examples.apple.coreml.scripts.export_and_delegate --model_name mv3

Runtime:

Running the Core ML delegated Program:

  1. Build the runner.

cd executorch

# Generates ./coreml_executor_runner.
sh examples/apple/coreml/scripts/build_executor_runner.sh
  2. Run the exported program.

cd executorch

# Runs the exported mv3 model on the Core ML backend.
./coreml_executor_runner --model_path mv3_coreml_all.pte

Deploying and running on a device

Running the Core ML delegated Program using the Demo iOS App:

  1. Please follow the Export Model step of the tutorial to bundle the exported MobileNet V3 program. You only need to do the Core ML part.

  2. Complete the Build Runtime and Backends section of the tutorial. When building the frameworks you only need the coreml option.

  3. Complete the Final Steps section of the tutorial to build and run the demo app.


Running the Core ML delegated Program using your own App

  1. Build the Core ML delegate. The following will create an executorch.xcframework in the cmake-out directory.

cd executorch
./build/build_apple_frameworks.sh --Release --coreml
  2. Create a new Xcode project or open an existing project.

  3. Drag the executorch.xcframework generated in Step 1 to Frameworks.

  4. Go to the project’s Build Phases - Link Binaries With Libraries, click the + sign, and add the following frameworks:

- executorch.xcframework
- coreml_backend.xcframework
- Accelerate.framework
- CoreML.framework
- libsqlite3.tbd
  5. Add the exported program to the Copy Bundle Phase of your Xcode target.

  6. Please follow the running a model tutorial to integrate the code for loading an ExecuTorch program.

  7. Update the code to load the program from the Application’s bundle.

using namespace torch::executor;

NSURL *model_url = [NSBundle.mainBundle URLForResource:@"mv3_coreml_all" withExtension:@"pte"];

Result<util::FileDataLoader> loader =
        util::FileDataLoader::from(model_url.path.UTF8String);
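Once the loader is constructed, the next step is typically to build a `Program` from it. The following is a minimal sketch, assuming the same legacy `torch::executor` API as the snippet above (names and signatures may differ across ExecuTorch versions; the error handling shown is illustrative):

```
// Hypothetical continuation: construct the Program from the data loader.
Result<Program> program = Program::load(&loader.get());
if (!program.ok()) {
  // Handle the failure, e.g. log the error code and return early.
}
```

From here, the running a model tutorial referenced above covers setting up memory and executing the program's methods.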
  8. Use Xcode to deploy the application on the device.

  9. The application can now run the MobileNet V3 model on the Core ML backend.


In this tutorial, you have learned how to lower the MobileNet V3 model to the Core ML backend, and how to deploy and run it on an Apple device.

Frequently encountered errors and resolutions

If you encounter any bugs or issues while following this tutorial, please file a bug/issue here with the tag #coreml.
