Add support for resharding width-sharded tensors to/from DRAM #15526

Status: Open. Wants to merge 1 commit into base branch `main`.
Commits on Nov 28, 2024

  1. Add support for resharding width-sharded tensors to/from DRAM

    This commit introduces a new width-shard to width-shard reshard kernel,
    modeled after the height-sharded tensor reshard special case implemented
    in `reshard_multi_core_same_width`.
    
    Supported operations include:
    
    - L1 to L1
    - L1 to DRAM
    - DRAM to L1
    
    Currently, only row-major tensors are supported. For unsupported cases,
    we fall back to the generalized reshard implementation.
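    The width-to-width reshard described above can be illustrated with a small
    conceptual sketch (plain Python, not the actual tt-metal kernel): width
    sharding splits a row-major tensor's columns across shards, so resharding
    amounts to regrouping those columns under a new shard count. The function
    names here are illustrative, not part of the TT-NN API.

    ```python
    # Conceptual illustration of width-to-width resharding for a row-major
    # tensor. Each "shard" holds a contiguous slice of every row's columns.

    def width_shard(tensor, num_shards):
        """Split each row's columns evenly across `num_shards` shards."""
        cols = len(tensor[0])
        assert cols % num_shards == 0, "columns must divide evenly across shards"
        w = cols // num_shards
        return [[row[i * w:(i + 1) * w] for row in tensor]
                for i in range(num_shards)]

    def unshard(shards):
        """Reassemble the tensor by concatenating shard columns row by row."""
        rows = len(shards[0])
        return [sum((shard[r] for shard in shards), []) for r in range(rows)]

    def reshard(shards, new_num_shards):
        """Width-to-width reshard: gather columns, then re-split them."""
        return width_shard(unshard(shards), new_num_shards)

    tensor = [[r * 8 + c for c in range(8)] for r in range(2)]
    shards4 = width_shard(tensor, 4)   # 4 shards, each 2 columns wide
    shards2 = reshard(shards4, 2)      # reshard into 2 shards, 4 columns wide
    assert unshard(shards2) == tensor  # round trip preserves the tensor
    ```

    In the real kernel the shards live in L1 or DRAM on different cores and the
    regrouping is done by the data-movement kernel rather than by a gather step.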
    
    Unit tests have been added to validate the new kernel functionality.
    esmalTT committed Nov 28, 2024
    Commit: 8c0efe8