
Releases: tenstorrent/tt-metal

v0.52.0-rc15

07 Sep 02:15
3a010a7
Pre-release

Note

If you are installing from a release, refer to the README, INSTALLATION instructions, and any other documentation packaged with that release rather than the versions on the main branch; the latest main may differ from this release.
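
As a minimal, hedged sketch of reading the release-matched documentation (assuming git is installed and on PATH; the repository URL is the project's public one, and the tag name is taken from this page), the following Python snippet checks out the tag so the packaged README and INSTALLATION instructions are the ones you consult:

    # Minimal sketch: fetch the documentation that matches this pre-release
    # instead of reading main. Assumes git is on PATH; repo URL and tag name
    # are taken from this page.
    import subprocess

    REPO = "https://github.com/tenstorrent/tt-metal.git"
    TAG = "v0.52.0-rc15"  # tag from this release page

    # --branch accepts tag names; --depth 1 avoids downloading full history.
    subprocess.run(["git", "clone", "--depth", "1", "--branch", TAG, REPO], check=True)
    # ./tt-metal now contains the README and INSTALLATION docs for this release.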

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10747521160
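
For anyone auditing a build, a small sketch (using the public GitHub REST API; the run ID is taken from the workflow URL above, and unauthenticated requests are rate-limited) of fetching that run's metadata:

    # Minimal sketch: look up the CI run that produced this release via the
    # public GitHub REST API (GET /repos/{owner}/{repo}/actions/runs/{run_id}).
    import json
    import urllib.request

    RUN_ID = 10747521160  # from the workflow URL above
    url = f"https://api.github.com/repos/tenstorrent/tt-metal/actions/runs/{RUN_ID}"

    with urllib.request.urlopen(url) as resp:
        run = json.load(resp)

    # status, conclusion, and head_sha are standard fields of the runs payload.
    print(run["status"], run["conclusion"], run["head_sha"])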

📦 Uncategorized

  • #0: Remove run_operation from async_runtime.hpp
  • #11640: Include simulation device in tt_cluster
  • #11342: Replace tt_lib with ttnn function in experimental/functional
  • #11649: update tt_lib with ttnn support for non-working folder
  • Perf dashboard and batching support for Mistral-7B and Llama3.1-8B
  • Adding fix for llama CI failure caused by ttnn.experimental.tensor.typecast
  • Fold sharded support
  • #9450: add env flag to skip recompiling and reloading FW
  • Move semaphores into kernel config ring buffer
  • #10874: Enable test cases for concurrent instances in CCL all gather
  • [Falcon7b] Remove hf reference files and import from transformers instead
  • #11768: Fix watcher pause feature
  • [Improvement] Added some graph names in the separate file
  • Migrate CB configs into kernel config ring buffer
  • #0: Feed more data to visualizer
  • #11490: ttnn and tt_metal shapes are mixed
  • Migrate sharded ops from TTL to TTNN
  • #8865: Port ttnn ops to dispatch profiling infra
  • #11700: update write_tensor with copy_host_to_device_tensor
  • TTNN sweep low PCC unit tests
  • Add sweeps for ops: topk, frac, trunc, ceil to TTNN
  • LLK Test Coverage Follow-up
  • Llama3.1 70b Prefill - MLP and Attention
  • #10866: Read profiler buffer with EnqueueReadBuffer in fast dispatch mode
  • Lpremovic/0 expand llk ctest coverage
  • #11313: Migrate layernorm_distributed to ttnn
  • [Blackhole Bringup] Fixes for maxpool
  • #11850: Remove Llama3.1-8B output matching to avoid blocking CI
  • modify keys within device_info
  • #0: remove extra arch-wormhole labels for single-card workflows
  • #0: fix cloud-virtual-machine label
  • #11564: added test for generating sample data with many different use cases to the visualizer
  • #0: Remove llk_io.cc for WH and BH as well. GS was removed in 7b8e627
  • #9527: Moving bcast to operations/data_movement
  • #10332: Make ttnn::event_synchronize block only in the app thread
  • #11554: Replace tt_lib in sweeps, integration_tests
  • #11877: Make dispatch core order in the core descriptor match for E75 with 1 and 2 CQs
  • #11845: fix worker ring direction assignment in reduce scatter
  • FD Optimizations/Cleanup
  • #11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18
  • Revert "#11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18"
  • #10163: Add backward support for remainder op
  • Added ttnn.hypot_bw unit test
  • #0: Add another codeowner for conv2d
  • #11334: Remove unnecessary code for previous ci/cd csvs
  • #0: Bump timeout for single-card perf tests to see if that helps with timeouts
  • Removed graph_consts.hpp
  • [Falcon7b] Re-enable decode perplexity test with seq len 2048
  • [Falcon7b] Fix duplicate loading of rotary embeddings in prefill/decode
  • [Falcon7b] Re-enable demo perf-mode tests on galaxy, update targets, prevent multinomial errors (during perf-mode) using nan-to-num
  • [Blackhole Bringup] Add pack_untilize tests & fixes
  • #0: Consolidate demo tests for single card and t3000 to use impls rather than copy
  • Collection of small dprint/watcher changes
  • #11917: disable test
  • #11706: Use new Conv2D API in UNet Shallow
  • #11925 Update ttnn.arange binding
  • #0: Remove test include from packet_demux
  • #7709: Fix exp like ops ttnn doc issues
  • #11126: Resnet Demo with new conv API
  • Added ttnn.argmax sweeps, API calls and unit tests
  • #10515: For matmul corner case, if CBs don't fit, choose different program config
  • [Mixtral8x7B] Increase demo max context length to 32k
  • Added ttnn.topk unit test
  • #0: (MINOR) Update to v0.52.0
  • #11847: Add tt-smi reset command environment variable for sweeps
  • #11000: Enable uint8 A2D and (un)pack reconfig
  • #0: Do not use mount-cloud-weka label because we may no longer need it as cloud fixed it
  • #0: fixed External Operation logging
  • #0: Update matmul_multi_core_reuse to support mixed precision
  • #11138: Move large global vars in prefetcher and dispatcher to the stack
  • Enabling BH L1 data cache
  • #0: Move Unary device operation to tmp
  • Moved tracked methods out of tensor
  • #11964: Only write branch if the repo is not detached
  • #11622: add concat sweep
  • #0: Refactor Python dynamic modules creation
  • #0: Update resnet test infra to print total batch size for multi device
  • #11930: Increase status checks
  • Convs on BH
  • #9630: assert out concat when concatenating along padded dimensions
  • Use product codes for cards instead of arch for eager-package-main
  • #11929: Move work_split_tilize
  • #11693: Move DeviceModule bindings and replace ttnn.experimental APIs
  • #11247: Remove in-place flag in binary operations
  • #11591: Move hack delay from trisc.cc to trisck.cc before run_kernel
  • #8865: Optimize softmax dispatch time
  • #0: skip yolov4 failing sub_modules
  • #11519: Restore path reservation for mms and convs
  • #5337: Fix Mixtral total number of generated tokens in perf benchmark
  • #11883: use fixed_string.size() instead of sizeof to ensure compatibility with newer versions of reflect
  • #11559: Replace tt_lib in tests/ttnn files
  • #11915: Add sweep vector tagging and related infra changes
  • #0: fix fetch q write assert by using correct data offset for enqueue write buffer
  • update conv path in CODEOWNERS
  • enable all enablable unit tests for convs with new api
  • Fix size_t compilation failure
  • Update perf and latest features for llm models (Aug 26)
  • Split up n300 demo tests into functionality and performance
  • #10718: Fix issue with negative pipeline queue times
  • #11642: demux ttnn::typecast into ttnn::experimental::typecast on gra…
  • #11569: Enable Conv2D WH unit tests for UNet shapes
  • #11591: Fix race by making only unpacker zero out RISCV_DEBUG_REG_DBG_FEATURE_DISABLE at start of kernel
  • Update CODEOWNERS
  • Add missing include to graph_trace_utils.hpp
  • #0: Always initialize l1_banking allocator even when size is 0
  • update slack notification to include workflow run
  • #8868: Fixed conv for Stride>2
  • #11430: Refactoring moreh_mean
  • #11832: Remove tracking of writes per block and only track last block
  • #11644: Migrate AutoFormat to TTNN Experimental
  • Added ttnn.i0_bw unit test
  • #11938: Refactoring moreh_bmm
  • #11646: Replace ttnn.experimental.tensor.* in models/demos
  • Add support for cur_pos tensor arg in sdpa decode
  • #5659: Add Width Sharded support to Conv2d
  • Remove noinline attribute from sdpa_decode compute kernel
  • Updated sfpi compiler to address missing SFPNOP insertion
  • Move compute kernel config to TTNN
  • Add fold to resnet
  • [BugFix] Fixed tensor::is_allocated.
  • Revert "[BugFix] Fixed tensor::is_allocated."
  • #8598: sinh fix
  • #11646: Replace ttnn.experimental.tensor.* to ttnn.* in models/experimental, tests
  • #10754: Add data-parallel support for UNet Shallow on N300
  • #0: Fixed Conv2dConfig in broken tests
  • #0: Falcon40b T3K demo mismatch tokens fixed
  • #12069: Add catch and handling for device initialize exception, typic…
  • Point metal to new UMD main branch
  • Update CODEOWNERS
  • #11993: Fix offset calculation for uneven shard in reshard fast path
  • Update CODEOWNERS
  • #12117: Refactor DeviceMesh->MeshDevice, DeviceGrid->MeshShape
  • #11854: Move .umd that houses cluster descriptor to TT_METAL_HOME
  • Fused AllGather+Matmul
  • #12124: Support large weight in moreh_nll_loss
  • [Bugfix] Fixed is allocated
  • #11990: Replace ttnn.experimental.tensor.* to ttnn.* in ttnn folder
  • #11132 Run Post-Commit Python Tests agai...

v0.52.0-rc14

06 Sep 22:49
81f9fae
Pre-release

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10745849640

The installation note and changelog are identical to v0.52.0-rc15 above.

v0.52.0-rc13

06 Sep 19:44
dc76271
Pre-release

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10743792121

The installation note and changelog are identical to v0.52.0-rc15 above.

v0.52.0-rc12

06 Sep 04:07
cb68490
Pre-release

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10731919856

The installation note and changelog are identical to v0.52.0-rc15 above.

v0.52.0-rc11

06 Sep 02:15
Pre-release

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10730882573

The installation note and changelog are identical to v0.52.0-rc15 above.

v0.52.0-rc9

04 Sep 14:00
Pre-release

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10702489425

The installation note and changelog are identical to v0.52.0-rc15 above.

v0.52.0-rc8

04 Sep 02:16
Compare
Choose a tag to compare
v0.52.0-rc8 Pre-release
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog will now follow, showing the changes from last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10693502206

📦 Uncategorized

  • #0: Remove run_operation from async_runtime.hpp
  • #11640: Include simulation device in tt_cluster
  • #11342: Replace tt_lib with ttnn function in experimental/functional
  • #11649: update tt_lib with ttnn support for non working folder
  • Perf dashboard and batching support for Mistral-7B and Llama3.1-8B
  • Adding fix for llama CI failure caused by ttnn.experimental.tensor.typecast
  • Fold sharded support
  • #9450: add env flag to skip recompiling and reloading FW
  • Move semaphores into kernel config ring buffer
  • #10874: Enable test cases for concurrent instances in CCL all gather
  • [Falcon7b] Remove hf reference files and import from transformers instead
  • #11768: Fix watcher pause feature
  • [Improvement] Added some graph names in the separate file
  • Migrate CB configs into kernel config ring buffer
  • #0: Feed more data to visualizer
  • #11490: ttnn and tt_metal shapes are mixed
  • Migrate sharded ops from TTL to TTNN
  • #8865: Port ttnn ops to dispatch profiling infra
  • #11700: update write_tensor with copy_host_to_device_tensor
  • TTNN sweep low pic unit tests
  • Add sweeps for ops: topk, frac, trunc, ceil to TTNN
  • LLK Test Coverage Follow-up
  • Llama3.1 70b Prefill - MLP and Attention
  • #10866: Read profiler buffer with EnqueueReadBuffer in fast dispatch mode
  • Lpremovic/0 expand llk ctest coverage
  • #11313: Migrate layernorm_distributed to ttnn
  • [Blackhole Bringup] Fixes for maxpool
  • #11850: Remove Llama3.1-8B output matching to avoid blocking CI
  • modify keys within device_info
  • #0: remove extra arch-wormhole labels for single-card workflows
  • #0: fix cloud-virtual-machine label
  • #11564: added test for generating sample data with many different use cases to the visualizer
  • #0: Remove llk_io.cc for WH and BH as well. GS was removed in 7b8e627
  • #9527: Moving bcast to operations/data_movement
  • #10332: Make ttnn::event_synchronize block only in the app thread
  • #11554: Replace tt_lib in sweeps, integration_tests
  • #11877: Make dispatch core order in the core descriptor match for E75 with 1 and 2 CQs
  • #11845: fix worker ring direction assignment in reduce scatter
  • FD Optimizations/Cleanup
  • #11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18
  • Revert "#11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18"
  • #10163: Add backward support for remainder op
  • Added ttnn.hypot_bw unit test
  • #0: Add another codeowner for conv2d
  • #11334: Remove unnecessary code for previous ci/cd csvs
  • #0: Bump timeout for single-card perf tests to see if that helps with timeouts
  • Removed "" graph_consts.hpp
  • [Falcon7b] Re-enable decode perplexity test with seq len 2048
  • [Falcon7b] Fix duplicate loading of rotary embeddings in prefill/decode
  • [Falcon7b] Re-enable demo perf-mode tests on galaxy, update targets, prevent multinomial errors (during perf-mode) using nan-to-num
  • [Blackhole Bringup] Add pack_untilize tests & fixes
  • #0: Consolidate demo tests for single card and t3000 to use impls rather than copy
  • Collection of small dprint/watcer changes
  • #11917: disable test
  • #11706: Use new Conv2D API in UNet Shallow
  • #11925 Update ttnn.arange binding
  • #0: Remove test include from packet_demux
  • #7709: Fix exp like ops ttnn doc issues
  • #11126: Resnet Demo with new conv API
  • Added ttnn.argmax sweeps, API calls and unit tests
  • #10515: For matmul corner case, if CBs don't fit, choose different program config
  • [Mixtral8x7B] Increase demo max context length to 32k
  • Added ttnn.topk unit test
  • #0: (MINOR) Update to v0.52.0
  • #11847: Add tt-smi reset command environment variable for sweeps
  • #11000: Enable uint8 A2D and (un)pack reconfig
  • #0: Do not use mount-cloud-weka label because we may no longer need it as cloud fixed it
  • #0: fixed External Operation logging
  • #0: Update matmul_multi_core_reuse to support mixed precision
  • #11138: Move large global vars in prefetcher and dispatcher to the stack
  • Enabling BH L1 data cache
  • #0: Move Unary device operation to tmp
  • Moved tracked methods out of tensor
  • #11964: Only write branch is if the repo is not detached
  • #11622: add concat sweep
  • #0: Refactor Python dynamic modules creation
  • #0: Update resnet test infra to print total batch size for multi device
  • #11930: Increase status checks
  • Convs on BH
  • #9630: assert out concat when concatenating along padded dimensions
  • Use product codes for cards instead of arch for eager-package-main
  • #11929: Move work_split_tilize
  • #11693: Move DeviceModule bindings and replace ttnn.experimental APIs
  • #11247: Remove in-place flag in binary operations
  • #11591: Move hack delay from trisc.cc to trisck.cc before run_kernel
  • #8865: Optimize softmax dispatch time
  • #0: skip yolov4 failing sub_modules
  • #11519: Restore path reservation for mms and convs
  • #5337: Fix Mixtral total number of generated tokens in perf benchmark
  • #11883: use fixed_string.size() instead of sizeof to ensure compatiablity with newer versions of reflect
  • #11559: Replace tt_lib in tests/ttnn files
  • #11915: Add sweep vector tagging and related infra changes
  • #0: fix fetch q write assert by using correct data offset for enqueue write buffer
  • update conv path in CODEOWNERS:
  • enable all enablable unit tests for convs with new api
  • Fix size_t compilation failure
  • Update perf and latest features for llm models (Aug 26)
  • Split up n300 demo tests into functionality and performance
  • #10718: Fix issue with negative pipeline queue times
  • #11642: demux ttnn::typecast into ttnn::experimental::typecast on gra…
  • #11569: Enable Conv2D WH unit tests for UNet shapes
  • #11591: Fix race by making only unpacker zero out RISCV_DEBUG_REG_DBG_FEATURE_DISABLE at start of kernel
  • Update CODEOWNERS
  • Add missing include to graph_trace_utils.hpp
  • #0: Always initialize l1_banking allocator even when size is 0
  • update slack notification include workflow run
  • #8868: Fixed conv for Stride>2
  • #11430: Refactoring moreh_mean
  • #11832: Remove tracking of writes per block and only track last block
  • #11644: Migrate AutoFormat to TTNN Experimental
  • Added ttnn.i0_bw unit test
  • #11938: Refactoring moreh_bmm
  • #11646: Replace ttnn.experimental.tensor.* in models/demos
  • Add support for cur_pos tensor arg in sdpa decode
  • #5659: Add Width Sharded support to Conv2d
  • Remove noinline attribute from sdpa_decode compute kernel
  • Updated sfpi compiler to address missing SFPNOP insertion
  • Move compute kernel config to TTNN
  • Add fold to resnet
  • [BugFix] Fixed tensor::is_allocated.
  • Revert "[BugFix] Fixed tensor::is_allocated."
  • #8598: sinh fix
  • #11646: Replace ttnn.experimental.tensor.* to ttnn.* in models/experimental, tests
  • #10754: Add data-parallel support for UNet Shallow on N300
  • #0: Fixed Conv2dConfig in broken tests
  • #0: Falcon40b T3K demo mismatch tokens fixed
  • #12069: Add catch and handling for device initialize exception, typic…
  • Point metal to new UMD main branch
  • Update CODEOWNERS
  • #11993: Fix offset calculation for uneven shard in reshard fast path
  • Update CODEOWNERS
  • #12117: Refactor DeviceMesh->MeshDevice, DeviceGrid->MeshShape
  • #11854: Move .umd that houses cluster descriptor to TT_METAL_HOME
  • Fused AllGather+Matmul
  • #12124: support large weight in moreh_nll_loss
  • [Bugfix] Fixed tensor::is_allocated
  • #11990: Replace ttnn.experimental.tensor.* with ttnn.* in the ttnn folder (see the migration sketch after this list)
  • #11132 Run Post-Commit Python Tests agai...
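
Several entries in the list above (#11342, #11646, #11990) track the migration from the legacy tt_lib / ttnn.experimental.tensor namespaces to the stable ttnn API. The sketch below shows the general shape of that change; it is illustrative only, assuming ttnn is installed with a Tenstorrent device available at id 0, and the commented-out legacy call path is an example rather than an exact mapping.

```python
# Hedged before/after sketch of the tt_lib / ttnn.experimental.tensor -> ttnn
# migration referenced above. Assumes ttnn is installed and device 0 exists;
# the legacy path shown in the comment is illustrative, not an exact mapping.
import torch
import ttnn

device = ttnn.open_device(device_id=0)

torch_input = torch.rand(32, 32, dtype=torch.bfloat16)

# Before: legacy calls looked roughly like
#   t = ttnn.experimental.tensor.typecast(t, ttnn.float32)

# After: use the stable ttnn namespace directly.
t = ttnn.from_torch(torch_input, dtype=ttnn.bfloat16,
                    layout=ttnn.TILE_LAYOUT, device=device)
t = ttnn.typecast(t, ttnn.float32)  # stable replacement for the experimental call
result = ttnn.to_torch(t)

ttnn.close_device(device)
```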

v0.52.0-rc6

02 Sep 02:15
dcd47ef
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog will now follow, showing the changes from last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10659227832

📦 Uncategorized

  • #0: Remove run_operation from async_runtime.hpp
  • #11640: Include simulation device in tt_cluster
  • #11342: Replace tt_lib with ttnn function in experimental/functional
  • #11649: update tt_lib with ttnn support for non-working folder
  • Perf dashboard and batching support for Mistral-7B and Llama3.1-8B
  • Adding fix for llama CI failure caused by ttnn.experimental.tensor.typecast
  • Fold sharded support
  • #9450: add env flag to skip recompiling and reloading FW
  • Move semaphores into kernel config ring buffer
  • #10874: Enable test cases for concurrent instances in CCL all gather
  • [Falcon7b] Remove hf reference files and import from transformers instead
  • #11768: Fix watcher pause feature
  • [Improvement] Added some graph names in the separate file
  • Migrate CB configs into kernel config ring buffer
  • #0: Feed more data to visualizer
  • #11490: ttnn and tt_metal shapes are mixed
  • Migrate sharded ops from TTL to TTNN
  • #8865: Port ttnn ops to dispatch profiling infra
  • #11700: update write_tensor with copy_host_to_device_tensor
  • TTNN sweep low pic unit tests
  • Add sweeps for ops: topk, frac, trunc, ceil to TTNN
  • LLK Test Coverage Follow-up
  • Llama3.1 70b Prefill - MLP and Attention
  • #10866: Read profiler buffer with EnqueueReadBuffer in fast dispatch mode
  • Lpremovic/0 expand llk ctest coverage
  • #11313: Migrate layernorm_distributed to ttnn
  • [Blackhole Bringup] Fixes for maxpool
  • #11850: Remove Llama3.1-8B output matching to avoid blocking CI
  • modify keys within device_info
  • #0: remove extra arch-wormhole labels for single-card workflows
  • #0: fix cloud-virtual-machine label
  • #11564: added test for generating sample data with many different use cases to the visualizer
  • #0: Remove llk_io.cc for WH and BH as well. GS was removed in 7b8e627
  • #9527: Moving bcast to operations/data_movement
  • #10332: Make ttnn::event_synchronize block only in the app thread
  • #11554: Replace tt_lib in sweeps, integration_tests
  • #11877: Make dispatch core order in the core descriptor match for E75 with 1 and 2 CQs
  • #11845: fix worker ring direction assignment in reduce scatter
  • FD Optimizations/Cleanup
  • #11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18
  • Revert "#11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18"
  • #10163: Add backward support for remainder op
  • Added ttnn.hypot_bw unit test
  • #0: Add another codeowner for conv2d
  • #11334: Remove unnecessary code for previous ci/cd csvs
  • #0: Bump timeout for single-card perf tests to see if that helps with timeouts
  • Removed "" graph_consts.hpp
  • [Falcon7b] Re-enable decode perplexity test with seq len 2048
  • [Falcon7b] Fix duplicate loading of rotary embeddings in prefill/decode
  • [Falcon7b] Re-enable demo perf-mode tests on galaxy, update targets, prevent multinomial errors (during perf-mode) using nan-to-num
  • [Blackhole Bringup] Add pack_untilize tests & fixes
  • #0: Consolidate demo tests for single card and t3000 to use impls rather than copy
  • Collection of small dprint/watcher changes
  • #11917: disable test
  • #11706: Use new Conv2D API in UNet Shallow
  • #11925: Update ttnn.arange binding
  • #0: Remove test include from packet_demux
  • #7709: Fix exp like ops ttnn doc issues
  • #11126: Resnet Demo with new conv API
  • Added ttnn.argmax sweeps, API calls and unit tests
  • #10515: For matmul corner case, if CBs don't fit, choose different program config
  • [Mixtral8x7B] Increase demo max context length to 32k
  • Added ttnn.topk unit test
  • #0: (MINOR) Update to v0.52.0
  • #11847: Add tt-smi reset command environment variable for sweeps
  • #11000: Enable uint8 A2D and (un)pack reconfig
  • #0: Do not use mount-cloud-weka label because we may no longer need it as cloud fixed it
  • #0: fixed External Operation logging
  • #0: Update matmul_multi_core_reuse to support mixed precision
  • #11138: Move large global vars in prefetcher and dispatcher to the stack
  • Enabling BH L1 data cache
  • #0: Move Unary device operation to tmp
  • Moved tracked methods out of tensor
  • #11964: Only write branch if the repo is not detached (see the sketch after this list)
  • #11622: add concat sweep
  • #0: Refactor Python dynamic modules creation
  • #0: Update resnet test infra to print total batch size for multi device
  • #11930: Increase status checks
  • Convs on BH
  • #9630: assert out concat when concatenating along padded dimensions
  • Use product codes for cards instead of arch for eager-package-main
  • #11929: Move work_split_tilize
  • #11693: Move DeviceModule bindings and replace ttnn.experimental APIs
  • #11247: Remove in-place flag in binary operations
  • #11591: Move hack delay from trisc.cc to trisck.cc before run_kernel
  • #8865: Optimize softmax dispatch time
  • #0: skip yolov4 failing sub_modules
  • #11519: Restore path reservation for mms and convs
  • #5337: Fix Mixtral total number of generated tokens in perf benchmark
  • #11883: use fixed_string.size() instead of sizeof to ensure compatibility with newer versions of reflect
  • #11559: Replace tt_lib in tests/ttnn files
  • #11915: Add sweep vector tagging and related infra changes
  • #0: fix fetch q write assert by using correct data offset for enqueue write buffer
  • update conv path in CODEOWNERS
  • enable all eligible conv unit tests with the new API
  • Fix size_t compilation failure
  • Update perf and latest features for llm models (Aug 26)
  • Split up n300 demo tests into functionality and performance
  • #10718: Fix issue with negative pipeline queue times
  • #11642: demux ttnn::typecast into ttnn::experimental::typecast on gra…
  • #11569: Enable Conv2D WH unit tests for UNet shapes
  • #11591: Fix race by making only unpacker zero out RISCV_DEBUG_REG_DBG_FEATURE_DISABLE at start of kernel
  • Update CODEOWNERS
  • Add missing include to graph_trace_utils.hpp
  • #0: Always initialize l1_banking allocator even when size is 0
  • update slack notification to include workflow run
  • #8868: Fixed conv for Stride>2
  • #11430: Refactoring moreh_mean
  • #11832: Remove tracking of writes per block and only track last block
  • #11644: Migrate AutoFormat to TTNN Experimental
  • Added ttnn.i0_bw unit test
  • #11938: Refactoring moreh_bmm
  • #11646: Replace ttnn.experimental.tensor.* in models/demos
  • Add support for cur_pos tensor arg in sdpa decode
  • #5659: Add Width Sharded support to Conv2d
  • Remove noinline attribute from sdpa_decode compute kernel
  • Updated sfpi compiler to address missing SFPNOP insertion
  • Move compute kernel config to TTNN
  • Add fold to resnet
  • [BugFix] Fixed tensor::is_allocated.
  • Revert "[BugFix] Fixed tensor::is_allocated."
  • #8598: sinh fix
  • #11646: Replace ttnn.experimental.tensor.* to ttnn.* in models/experimental, tests
  • #10754: Add data-parallel support for UNet Shallow on N300
  • #0: Fixed Conv2dConfig in broken tests
  • #0: Falcon40b T3K demo mismatch tokens fixed
  • #12069: Add catch and handling for device initialize exception, typic…
  • Point metal to new UMD main branch
  • Update CODEOWNERS
  • #11993: Fix offset calculation for uneven shard in reshard fast path
  • Update CODEOWNERS
  • #12117: Refactor DeviceMesh->MeshDevice, DeviceGrid->MeshShape
  • #11854: Move .umd that houses cluster descriptor to TT_METAL_HOME
  • Fused AllGather+Matmul
  • #12124: support large weight in moreh_nll_loss
  • [Bugfix] Fixed tensor::is_allocated
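
The detached-HEAD guard referenced by #11964 above boils down to a standard git check. The following is a hedged sketch of that pattern, not tt-metal's actual implementation: `git rev-parse --abbrev-ref HEAD` prints the literal string `HEAD` when the repository is detached, so a branch name is only recorded when something else comes back.

```python
# Sketch of the detached-HEAD guard behind "#11964: Only write branch if the
# repo is not detached" -- illustrative, not tt-metal's actual code.
import subprocess

def current_branch_or_none(repo_path="."):
    # `git rev-parse --abbrev-ref HEAD` prints the branch name, or the
    # literal string "HEAD" when the repository is in a detached-HEAD state.
    name = subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        cwd=repo_path, text=True,
    ).strip()
    return None if name == "HEAD" else name

branch = current_branch_or_none()
if branch is not None:
    print(f"branch: {branch}")  # only record the branch when not detached
```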

v0.52.0-rc5

31 Aug 02:15
2e14e61
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog will now follow, showing the changes from last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10641311471

📦 Uncategorized

  • #0: Remove run_operation from async_runtime.hpp
  • #11640: Include simulation device in tt_cluster
  • #11342: Replace tt_lib with ttnn function in experimental/functional
  • #11649: update tt_lib with ttnn support for non-working folder
  • Perf dashboard and batching support for Mistral-7B and Llama3.1-8B
  • Adding fix for llama CI failure caused by ttnn.experimental.tensor.typecast
  • Fold sharded support
  • #9450: add env flag to skip recompiling and reloading FW (see the sketch after this list)
  • Move semaphores into kernel config ring buffer
  • #10874: Enable test cases for concurrent instances in CCL all gather
  • [Falcon7b] Remove hf reference files and import from transformers instead
  • #11768: Fix watcher pause feature
  • [Improvement] Added some graph names in the separate file
  • Migrate CB configs into kernel config ring buffer
  • #0: Feed more data to visualizer
  • #11490: ttnn and tt_metal shapes are mixed
  • Migrate sharded ops from TTL to TTNN
  • #8865: Port ttnn ops to dispatch profiling infra
  • #11700: update write_tensor with copy_host_to_device_tensor
  • TTNN sweep low pic unit tests
  • Add sweeps for ops: topk, frac, trunc, ceil to TTNN
  • LLK Test Coverage Follow-up
  • Llama3.1 70b Prefill - MLP and Attention
  • #10866: Read profiler buffer with EnqueueReadBuffer in fast dispatch mode
  • Lpremovic/0 expand llk ctest coverage
  • #11313: Migrate layernorm_distributed to ttnn
  • [Blackhole Bringup] Fixes for maxpool
  • #11850: Remove Llama3.1-8B output matching to avoid blocking CI
  • modify keys within device_info
  • #0: remove extra arch-wormhole labels for single-card workflows
  • #0: fix cloud-virtual-machine label
  • #11564: added test for generating sample data with many different use cases to the visualizer
  • #0: Remove llk_io.cc for WH and BH as well. GS was removed in 7b8e627
  • #9527: Moving bcast to operations/data_movement
  • #10332: Make ttnn::event_synchronize block only in the app thread
  • #11554: Replace tt_lib in sweeps, integration_tests
  • #11877: Make dispatch core order in the core descriptor match for E75 with 1 and 2 CQs
  • #11845: fix worker ring direction assignment in reduce scatter
  • FD Optimizations/Cleanup
  • #11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18
  • Revert "#11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18"
  • #10163: Add backward support for remainder op
  • Added ttnn.hypot_bw unit test
  • #0: Add another codeowner for conv2d
  • #11334: Remove unnecessary code for previous ci/cd csvs
  • #0: Bump timeout for single-card perf tests to see if that helps with timeouts
  • Removed "" graph_consts.hpp
  • [Falcon7b] Re-enable decode perplexity test with seq len 2048
  • [Falcon7b] Fix duplicate loading of rotary embeddings in prefill/decode
  • [Falcon7b] Re-enable demo perf-mode tests on galaxy, update targets, prevent multinomial errors (during perf-mode) using nan-to-num
  • [Blackhole Bringup] Add pack_untilize tests & fixes
  • #0: Consolidate demo tests for single card and t3000 to use impls rather than copy
  • Collection of small dprint/watcher changes
  • #11917: disable test
  • #11706: Use new Conv2D API in UNet Shallow
  • #11925: Update ttnn.arange binding
  • #0: Remove test include from packet_demux
  • #7709: Fix exp like ops ttnn doc issues
  • #11126: Resnet Demo with new conv API
  • Added ttnn.argmax sweeps, API calls and unit tests
  • #10515: For matmul corner case, if CBs don't fit, choose different program config
  • [Mixtral8x7B] Increase demo max context length to 32k
  • Added ttnn.topk unit test
  • #0: (MINOR) Update to v0.52.0
  • #11847: Add tt-smi reset command environment variable for sweeps
  • #11000: Enable uint8 A2D and (un)pack reconfig
  • #0: Do not use mount-cloud-weka label because we may no longer need it as cloud fixed it
  • #0: fixed External Operation logging
  • #0: Update matmul_multi_core_reuse to support mixed precision
  • #11138: Move large global vars in prefetcher and dispatcher to the stack
  • Enabling BH L1 data cache
  • #0: Move Unary device operation to tmp
  • Moved tracked methods out of tensor
  • #11964: Only write branch if the repo is not detached
  • #11622: add concat sweep
  • #0: Refactor Python dynamic modules creation
  • #0: Update resnet test infra to print total batch size for multi device
  • #11930: Increase status checks
  • Convs on BH
  • #9630: assert out concat when concatenating along padded dimensions
  • Use product codes for cards instead of arch for eager-package-main
  • #11929: Move work_split_tilize
  • #11693: Move DeviceModule bindings and replace ttnn.experimental APIs
  • #11247: Remove in-place flag in binary operations
  • #11591: Move hack delay from trisc.cc to trisck.cc before run_kernel
  • #8865: Optimize softmax dispatch time
  • #0: skip yolov4 failing sub_modules
  • #11519: Restore path reservation for mms and convs
  • #5337: Fix Mixtral total number of generated tokens in perf benchmark
  • #11883: use fixed_string.size() instead of sizeof to ensure compatibility with newer versions of reflect
  • #11559: Replace tt_lib in tests/ttnn files
  • #11915: Add sweep vector tagging and related infra changes
  • #0: fix fetch q write assert by using correct data offset for enqueue write buffer
  • update conv path in CODEOWNERS
  • enable all eligible conv unit tests with the new API
  • Fix size_t compilation failure
  • Update perf and latest features for llm models (Aug 26)
  • Split up n300 demo tests into functionality and performance
  • #10718: Fix issue with negative pipeline queue times
  • #11642: demux ttnn::typecast into ttnn::experimental::typecast on gra…
  • #11569: Enable Conv2D WH unit tests for UNet shapes
  • #11591: Fix race by making only unpacker zero out RISCV_DEBUG_REG_DBG_FEATURE_DISABLE at start of kernel
  • Update CODEOWNERS
  • Add missing include to graph_trace_utils.hpp
  • #0: Always initialize l1_banking allocator even when size is 0
  • update slack notification to include workflow run
  • #8868: Fixed conv for Stride>2
  • #11430: Refactoring moreh_mean
  • #11832: Remove tracking of writes per block and only track last block
  • #11644: Migrate AutoFormat to TTNN Experimental
  • Added ttnn.i0_bw unit test
  • #11938: Refactoring moreh_bmm
  • #11646: Replace ttnn.experimental.tensor.* in models/demos
  • Add support for cur_pos tensor arg in sdpa decode
  • #5659: Add Width Sharded support to Conv2d
  • Remove noinline attribute from sdpa_decode compute kernel
  • Updated sfpi compiler to address missing SFPNOP insertion
  • Move compute kernel config to TTNN
  • Add fold to resnet
  • [BugFix] Fixed tensor::is_allocated.
  • Revert "[BugFix] Fixed tensor::is_allocated."
  • #8598: sinh fix
  • #11646: Replace ttnn.experimental.tensor.* to ttnn.* in models/experimental, tests
  • #10754: Add data-parallel support for UNet Shallow on N300
  • #0: Fixed Conv2dConfig in broken tests
  • #0: Falcon40b T3K demo mismatch tokens fixed
  • #12069: Add catch and handling for device initialize exception, typic…
  • Point metal to new UMD main branch
  • Update CODEOWNERS
  • #11993: Fix offset calculation for uneven shard in reshard fast path
  • Update CODEOWNERS
  • #12117: Refactor DeviceMesh->MeshDevice, DeviceGrid->MeshShape
  • #11854: Move .umd that houses cluster descriptor to TT_METAL_HOME
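
#9450 in the list above gates firmware recompilation and reload behind an environment flag to speed up developer iteration. The sketch below shows the usual shape of such a gate; the variable name SKIP_FW_RELOAD is hypothetical, so consult the tt-metal source for the real flag that change introduced.

```python
# Hedged sketch of the pattern behind "#9450: add env flag to skip recompiling
# and reloading FW". SKIP_FW_RELOAD is a hypothetical name, not the actual
# tt-metal flag; check the repo for the real one.
import os

def should_skip_fw_reload() -> bool:
    # Treat "1"/"true"/"yes" (case-insensitive) as enabled.
    return os.environ.get("SKIP_FW_RELOAD", "").strip().lower() in ("1", "true", "yes")

if should_skip_fw_reload():
    print("Skipping firmware recompile/reload (dev iteration mode).")
else:
    print("Recompiling and reloading firmware as usual.")
```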

v0.52.0-rc4

30 Aug 13:30
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog will now follow, showing the changes from last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/10632943556

📦 Uncategorized

  • #0: Remove run_operation from async_runtime.hpp
  • #11640: Include simulation device in tt_cluster
  • #11342: Replace tt_lib with ttnn function in experimental/functional
  • #11649: update tt_lib with ttnn support for non-working folder
  • Perf dashboard and batching support for Mistral-7B and Llama3.1-8B
  • Adding fix for llama CI failure caused by ttnn.experimental.tensor.typecast
  • Fold sharded support
  • #9450: add env flag to skip recompiling and reloading FW
  • Move semaphores into kernel config ring buffer
  • #10874: Enable test cases for concurrent instances in CCL all gather
  • [Falcon7b] Remove hf reference files and import from transformers instead
  • #11768: Fix watcher pause feature
  • [Improvement] Added some graph names in the separate file
  • Migrate CB configs into kernel config ring buffer
  • #0: Feed more data to visualizer
  • #11490: ttnn and tt_metal shapes are mixed
  • Migrate sharded ops from TTL to TTNN
  • #8865: Port ttnn ops to dispatch profiling infra
  • #11700: update write_tensor with copy_host_to_device_tensor
  • TTNN sweep low pic unit tests
  • Add sweeps for ops: topk, frac, trunc, ceil to TTNN
  • LLK Test Coverage Follow-up
  • Llama3.1 70b Prefill - MLP and Attention
  • #10866: Read profiler buffer with EnqueueReadBuffer in fast dispatch mode
  • Lpremovic/0 expand llk ctest coverage
  • #11313: Migrate layernorm_distributed to ttnn
  • [Blackhole Bringup] Fixes for maxpool
  • #11850: Remove Llama3.1-8B output matching to avoid blocking CI
  • modify keys within device_info
  • #0: remove extra arch-wormhole labels for single-card workflows
  • #0: fix cloud-virtual-machine label
  • #11564: added test for generating sample data with many different use cases to the visualizer
  • #0: Remove llk_io.cc for WH and BH as well. GS was removed in 7b8e627
  • #9527: Moving bcast to operations/data_movement
  • #10332: Make ttnn::event_synchronize block only in the app thread
  • #11554: Replace tt_lib in sweeps, integration_tests
  • #11877: Make dispatch core order in the core descriptor match for E75 with 1 and 2 CQs
  • #11845: fix worker ring direction assignment in reduce scatter
  • FD Optimizations/Cleanup
  • #11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18
  • Revert "#11881: Add -Wno-vla-cxx-extension to CMake to fix build on clang18"
  • #10163: Add backward support for remainder op
  • Added ttnn.hypot_bw unit test
  • #0: Add another codeowner for conv2d
  • #11334: Remove unnecessary code for previous ci/cd csvs
  • #0: Bump timeout for single-card perf tests to see if that helps with timeouts
  • Removed "" graph_consts.hpp
  • [Falcon7b] Re-enable decode perplexity test with seq len 2048
  • [Falcon7b] Fix duplicate loading of rotary embeddings in prefill/decode
  • [Falcon7b] Re-enable demo perf-mode tests on galaxy, update targets, prevent multinomial errors (during perf-mode) using nan-to-num
  • [Blackhole Bringup] Add pack_untilize tests & fixes
  • #0: Consolidate demo tests for single card and t3000 to use impls rather than copy
  • Collection of small dprint/watcher changes
  • #11917: disable test
  • #11706: Use new Conv2D API in UNet Shallow
  • #11925: Update ttnn.arange binding
  • #0: Remove test include from packet_demux
  • #7709: Fix exp like ops ttnn doc issues
  • #11126: Resnet Demo with new conv API
  • Added ttnn.argmax sweeps, API calls and unit tests
  • #10515: For matmul corner case, if CBs don't fit, choose different program config
  • [Mixtral8x7B] Increase demo max context length to 32k
  • Added ttnn.topk unit test
  • #0: (MINOR) Update to v0.52.0
  • #11847: Add tt-smi reset command environment variable for sweeps
  • #11000: Enable uint8 A2D and (un)pack reconfig
  • #0: Do not use mount-cloud-weka label because we may no longer need it as cloud fixed it
  • #0: fixed External Operation logging
  • #0: Update matmul_multi_core_reuse to support mixed precision
  • #11138: Move large global vars in prefetcher and dispatcher to the stack
  • Enabling BH L1 data cache
  • #0: Move Unary device operation to tmp
  • Moved tracked methods out of tensor
  • #11964: Only write branch if the repo is not detached
  • #11622: add concat sweep
  • #0: Refactor Python dynamic modules creation
  • #0: Update resnet test infra to print total batch size for multi device
  • #11930: Increase status checks
  • Convs on BH
  • #9630: assert out concat when concatenating along padded dimensions
  • Use product codes for cards instead of arch for eager-package-main
  • #11929: Move work_split_tilize
  • #11693: Move DeviceModule bindings and replace ttnn.experimental APIs
  • #11247: Remove in-place flag in binary operations
  • #11591: Move hack delay from trisc.cc to trisck.cc before run_kernel
  • #8865: Optimize softmax dispatch time
  • #0: skip yolov4 failing sub_modules
  • #11519: Restore path reservation for mms and convs
  • #5337: Fix Mixtral total number of generated tokens in perf benchmark
  • #11883: use fixed_string.size() instead of sizeof to ensure compatibility with newer versions of reflect
  • #11559: Replace tt_lib in tests/ttnn files
  • #11915: Add sweep vector tagging and related infra changes
  • #0: fix fetch q write assert by using correct data offset for enqueue write buffer
  • update conv path in CODEOWNERS
  • enable all eligible conv unit tests with the new API
  • Fix size_t compilation failure
  • Update perf and latest features for llm models (Aug 26)
  • Split up n300 demo tests into functionality and performance
  • #10718: Fix issue with negative pipeline queue times
  • #11642: demux ttnn::typecast into ttnn::experimental::typecast on gra…
  • #11569: Enable Conv2D WH unit tests for UNet shapes
  • #11591: Fix race by making only unpacker zero out RISCV_DEBUG_REG_DBG_FEATURE_DISABLE at start of kernel
  • Update CODEOWNERS
  • Add missing include to graph_trace_utils.hpp
  • #0: Always initialize l1_banking allocator even when size is 0
  • update slack notification to include workflow run
  • #8868: Fixed conv for Stride>2
  • #11430: Refactoring moreh_mean
  • #11832: Remove tracking of writes per block and only track last block
  • #11644: Migrate AutoFormat to TTNN Experimental
  • Added ttnn.i0_bw unit test
  • #11938: Refactoring moreh_bmm
  • #11646: Replace ttnn.experimental.tensor.* in models/demos
  • Add support for cur_pos tensor arg in sdpa decode
  • #5659: Add Width Sharded support to Conv2d
  • Remove noinline attribute from sdpa_decode compute kernel
  • Updated sfpi compiler to address missing SFPNOP insertion
  • Move compute kernel config to TTNN
  • Add fold to resnet
  • [BugFix] Fixed tensor::is_allocated.
  • Revert "[BugFix] Fixed tensor::is_allocated."
  • #8598: sinh fix
  • #11646: Replace ttnn.experimental.tensor.* to ttnn.* in models/experimental, tests
  • #10754: Add data-parallel support for UNet Shallow on N300