Building and Running ExecuTorch with MediaTek Backend

The MediaTek backend empowers ExecuTorch to speed up PyTorch models on edge devices equipped with a MediaTek Neuron Processing Unit (NPU). This document offers a step-by-step guide to setting up the build environment for the MediaTek ExecuTorch libraries.

What you will learn in this tutorial:
  • How to export and lower a PyTorch model ahead of time with ExecuTorch for MediaTek devices.

  • How to build MediaTek backend and examples.

  • How to deploy the exported models on device with ExecuTorch runtime.

Tutorials we recommend you complete before this:

  • Setting up ExecuTorch

Prerequisites (Hardware and Software)

Host OS

  • Linux operating system

Supported Chips:

  • MediaTek Dimensity 9300 (D9300)

Software:

  • The two NeuroPilot Python wheels, mtk_neuron and mtk_converter, downloaded from the NeuroPilot Portal

  • The Neuron buffer allocator library (buffer_allocator.so) referenced when building the backend

Setting up your developer environment

Follow the steps below to set up your build environment:

  1. Setup ExecuTorch Environment: Refer to the Setting up ExecuTorch guide for detailed instructions on setting up the ExecuTorch environment.

  2. Setup MediaTek Backend Environment

  • Install the required Python packages. Ensure that you are inside the backends/mediatek/ directory:

    pip3 install -r requirements.txt
    
  • Install the two .whl files downloaded from the NeuroPilot Portal:

    pip3 install mtk_neuron-8.2.13-py3-none-linux_x86_64.whl
    pip3 install mtk_converter-8.9.1+public-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
    
  • Set the environment variable used for building the backend:

    export NEURON_BUFFER_ALLOCATOR_LIB=<path_to_buffer_allocator.so>
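
  • (Optional) Sanity-check the environment. This is a minimal check, assuming the two wheels above are installed and the variable points to an existing file:

    pip3 show mtk_neuron mtk_converter
    ls "$NEURON_BUFFER_ALLOCATOR_LIB"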
    

Build

Ahead of time:

Exporting a PyTorch Model for MediaTek Backend:

  1. Lower and export the .pte file for on-device execution. Sample export scripts are provided under examples/mediatek/. For example, the following command exports a .pte using the provided scripts:

cd executorch

./examples/mediatek/shell_scripts/export_oss.sh mobilenetv3
  2. Find the .pte files under the directory named after the model.
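
For example, after the mobilenetv3 export above, the generated program can be located with the command below; the exact output directory and .pte file names depend on the export script, so treat this as a sketch:

ls mobilenetv3/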

Runtime:

Build MediaTek Backend for ExecuTorch Runtime

  1. Navigate to backends/mediatek/scripts/ directory.

  2. Build MediaTek Backend: Once the prerequisites are in place, run the mtk_build.sh script to start the build process:

    ./mtk_build.sh
    
  3. The MediaTek backend will be built under cmake-android-out/backends/ as libneuron_backend.so.

Build a runner to execute the model on the device:

  1. Build the runners and the backend by executing the script:

./mtk_build_examples.sh
  2. The runners will be built under cmake-android-out/examples/.
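
Before deploying, you can confirm the build outputs. The paths below follow the locations stated above; the exact runner binary names depend on which examples were built:

ls cmake-android-out/backends/libneuron_backend.so
ls cmake-android-out/examples/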

Deploying and running on a device

  1. Push the MediaTek universal SDK and the MediaTek backend to the device: push libneuronusdk_adapter.mtk.so and libneuron_backend.so to the phone, and add their paths to the LD_LIBRARY_PATH environment variable before executing ExecuTorch with the MediaTek backend. An example push-and-run sequence is sketched after this step.

    export LD_LIBRARY_PATH=<path_to_usdk>:<path_to_neuron_backend>:$LD_LIBRARY_PATH
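
A typical push-and-run sequence is sketched below. The on-device directory, the runner binary name (mtk_executor_runner), its --model_path flag, and the .pte file name are assumptions for illustration; adjust them to match the artifacts you actually built:

# Push the backend libraries, the runner, and the exported model to the device.
adb push <path_to_usdk>/libneuronusdk_adapter.mtk.so /data/local/tmp/
adb push cmake-android-out/backends/libneuron_backend.so /data/local/tmp/
adb push cmake-android-out/examples/mediatek/mtk_executor_runner /data/local/tmp/
adb push mobilenetv3/mobilenetv3.pte /data/local/tmp/

# Run the model with the pushed libraries on LD_LIBRARY_PATH (executed on the device).
adb shell "cd /data/local/tmp && LD_LIBRARY_PATH=/data/local/tmp ./mtk_executor_runner --model_path mobilenetv3.pte"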
    
