Memory Planning Inspection in ExecuTorch

After the Memory Planning pass of ExecuTorch, memory allocation information is stored on the nodes of the ExportedProgram. Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.

Usage

Add the following code after calling to_executorch(). It writes the memory allocation information stored on the nodes to the file "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.

from executorch.util.activation_memory_profiler import generate_memory_trace
generate_memory_trace(
    executorch_program_manager=prog,
    chrome_trace_filename="memory_profile.json",
    enable_memory_offsets=True,
)

Chrome Trace

Open a Chrome browser tab and navigate to chrome://tracing/. Upload the generated .json to view. Example of a MobileNet V2 model:

Memory planning Chrome trace visualization

Note that, since we are repurposing the Chrome trace tool, the axes in this context may have different meanings compared to other Chrome trace graphs you may have encountered previously:

  • The horizontal axis, despite being labeled in seconds (s), actually represents megabytes (MBs).

  • The vertical axis has a 2-level hierarchy. The first level, “pid”, represents memory space. For CPU, everything is allocated in one “space”; other backends may have multiple. In the second level, each row represents one time step. Since nodes are executed sequentially, each node corresponds to one time step, so there are as many rows as there are nodes.
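To make the repurposed axes concrete, here is a minimal, hypothetical sketch (not part of ExecuTorch) that emits a file in the same Chrome trace event format. It encodes each tensor's byte offset in the `ts` field and its size in the `dur` field, with one `tid` row per execution step, so the horizontal axis of the viewer reads as bytes rather than seconds. The allocation records below are made up for illustration; in practice they come from the memory planning pass.

```python
import json

# Hypothetical allocation records: (tensor name, execution step,
# byte offset into the planned buffer, size in bytes).
allocations = [
    ("conv1_out", 0, 0, 1024),
    ("relu1_out", 1, 1024, 1024),
    ("pool1_out", 2, 0, 512),
]

# Chrome trace "complete" events (ph="X"). The ts/dur fields normally
# mean time in microseconds; here they carry offset/size in bytes.
events = [
    {
        "ph": "X",
        "pid": 0,        # memory space (a single space for CPU)
        "tid": step,     # one row per execution step / node
        "ts": offset,    # tensor's byte offset in the planned buffer
        "dur": size,     # tensor's size in bytes
        "name": name,
        "cat": "memory",
    }
    for name, step, offset, size in allocations
]

with open("memory_profile_sketch.json", "w") as f:
    json.dump({"traceEvents": events}, f, indent=2)
```

Loading the resulting file at chrome://tracing/ shows one horizontal bar per tensor, positioned and sized by its planned offset and extent, which is exactly how the ExecuTorch-generated trace should be read.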
