
Kernel size is not supported in ttnn.avg_pool2d #12151

Open
jdh8 opened this issue Sep 3, 2024 · 4 comments


jdh8 commented Sep 3, 2024

Kernel size is not supported in ttnn.avg_pool2d

```
E       TypeError: avg_pool2d(): incompatible function arguments. The following argument types are supported:
E           1. (arg0: ttnn._ttnn.deprecated.tensor.Tensor, *, memory_config: ttnn._ttnn.deprecated.tensor.MemoryConfig = MemoryConfig(memory_layout=TensorMemoryLayout::INTERLEAVED,buffer_type=BufferType::DRAM,shard_spec=std::nullopt), dtype: Optional[ttnn._ttnn.deprecated.tensor.DataType] = None) -> ttnn._ttnn.deprecated.tensor.Tensor
E       
E       Invoked with: ttnn.Tensor([[[[ 0.82812,  0.74609,  ...,  0.00000,  0.00000],
E                      [ 0.51172,  0.50391,  ...,  0.00000,  0.00000],
E                      ...,
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000],
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000]],
E       
E                     [[ 0.67188,  0.50000,  ...,  0.00000,  0.00000],
E                      [ 0.13281,  0.28906,  ...,  0.00000,  0.00000],
E                      ...,
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000],
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000]],
E       
E                     ...,
E       
E                     [[ 0.84766,  0.14844,  ...,  0.00000,  0.00000],
E                      [ 0.19922,  0.91016,  ...,  0.00000,  0.00000],
E                      ...,
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000],
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000]],
E       
E                     [[ 0.41406,  0.63281,  ...,  0.00000,  0.00000],
E                      [ 0.79688,  0.78125,  ...,  0.00000,  0.00000],
E                      ...,
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000],
E                      [ 0.00000,  0.00000,  ...,  0.00000,  0.00000]]]], shape=Shape([1, 7, 7[32], 4[32]]), dtype=DataType::BFLOAT16, layout=Layout::TILE), [3, 3]
E       
E       Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,
E       <pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic
E       conversions are optional and require extra headers to be included
E       when compiling your pybind11 module.

../../tt-metal/ttnn/ttnn/decorators.py:328: TypeError
```
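For context, the call that fails above passes a kernel size of `[3, 3]`, expecting windowed average pooling with the usual semantics of `torch.nn.functional.avg_pool2d` (stride defaults to the kernel size, no padding). The following pure-Python sketch illustrates those expected semantics only; it is not ttnn code, and the function name here is just for illustration:

```python
def avg_pool2d(x, kernel_size):
    """Windowed average pooling over nested lists shaped [N][C][H][W].

    Stride is taken equal to the kernel size and there is no padding,
    matching the defaults of torch.nn.functional.avg_pool2d.
    """
    kh, kw = kernel_size
    out = []
    for batch in x:
        out_channels = []
        for chan in batch:
            h, w = len(chan), len(chan[0])
            oh, ow = h // kh, w // kw  # floor mode: drop ragged edges
            rows = []
            for i in range(oh):
                row = []
                for j in range(ow):
                    window = [chan[i * kh + di][j * kw + dj]
                              for di in range(kh) for dj in range(kw)]
                    row.append(sum(window) / (kh * kw))
                rows.append(row)
            out_channels.append(rows)
        out.append(out_channels)
    return out

# A 2x2 average over a 2x2 channel yields its mean:
# avg_pool2d([[[[1.0, 2.0], [3.0, 4.0]]]], (2, 2)) -> [[[[2.5]]]]
```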

Originally posted by @jdh8 in tenstorrent/pytorch2.0_ttnn#143 (comment)


jerrysky3 commented Oct 21, 2024

There is a similarity between AvgPool2D and MaxPool2D (which already supports kernel size), so I think it might be a good idea to generalize MaxPool2D, similar to the generic reduction kernel. Here is a brief proposal:

- Generalize `MaxPool2DOp::invoke` to a template that accepts different pool ops
  - Depending on the pool op, it uses the corresponding padding values when creating the haloed tensor
- Generalize `MaxPool2D` (the uop) to a template that accepts different pool ops
- Generalize `max_pool_2d_multi_core_sharded_with_halo_v2_impl_new` to accept different pool ops
  - Depending on the reduce op, use `MAX` or `SUM` for `REDUCE_OP` when compiling the compute kernel
  - If the pool op is `AVG_POOL`, fill the scalar CB with the kernel size
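The two ways the ops differ in this proposal (the padding identity value and the reduce/scale step) can be sketched as follows. This is a hypothetical Python illustration of the dispatch; `PoolOp`, `pad_value`, and `pool_window` are names invented here, not actual tt-metal identifiers:

```python
from enum import Enum

class PoolOp(Enum):
    MAX = "max"
    AVG = "avg"

def pad_value(op):
    # The halo/padding value is the identity of the reduce op:
    # -inf for MAX, 0 for the SUM used by AVG.
    return float("-inf") if op is PoolOp.MAX else 0.0

def pool_window(window, op, kernel_area):
    """Reduce one pooling window according to the pool op."""
    if op is PoolOp.MAX:
        return max(window)  # REDUCE_OP = MAX
    # REDUCE_OP = SUM, then scale by the reciprocal of the kernel area
    # (the role played by the scalar CB in the compute kernel).
    return sum(window) * (1.0 / kernel_area)
```

For example, `pool_window([1.0, 5.0, 3.0], PoolOp.MAX, 3)` gives `5.0`, while the AVG path sums and scales instead.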

mywoodstock commented

@jerrysky3 The proposal sounds great -- it would be good to have generic reduce-op support. To support SUM in addition to MAX, the maxpool kernels will also need updates. We are currently in the process of generalizing with respect to input tensor shapes as well.


jerrysky3 commented Oct 24, 2024

Hi @mywoodstock, thanks for the feedback! I plan to work on this issue this week and next. And yes, I will touch the maxpool kernel a bit; will there be any big conflicts with your changes to generalize the input tensor shapes? (Or if you already have a shareable prototype, I can take a look to see where the conflicts will be.)

mywoodstock commented

@jerrysky3 These updates would be orthogonal to the tensor shape generalization updates, so should be easy to merge.

jerrysky3 added a commit that referenced this issue Oct 28, 2024
jerrysky3 added a commit that referenced this issue Oct 28, 2024
jerrysky3 added a commit that referenced this issue Oct 30, 2024
jerrysky3 added a commit that referenced this issue Oct 30, 2024
jerrysky3 added a commit that referenced this issue Oct 30, 2024
ayerofieiev-tt added a commit that referenced this issue Nov 25, 2024
### Ticket
#12151

### Problem description
ttnn currently has both `global_avg_pool2d` and `avg_pool2d`. Both do global average pooling (computing a single average per channel). It seems that `avg_pool2d` is not a public API (it cannot be found on https://docs.tenstorrent.com/ttnn/latest/ttnn/ttnn), and most places in tt-metal use `global_avg_pool2d`.

This change removes `avg_pool2d` and replaces all its remaining usages with `global_avg_pool2d`. This frees up the name `ttnn.avg_pool2d` for an implementation of the actual average pooling API with kernel size (#12151).

### What's changed
- Remove the export of `ttnn.avg_pool2d` (which is actually global
average pool2d)
- Replace the remaining usages of `avg_pool2d` in tt-metal with
`global_avg_pool2d`
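To make the naming distinction above concrete: "global" average pooling reduces each channel's entire H x W plane to a single value, i.e. it is equivalent to windowed average pooling with a kernel covering the whole spatial extent. A pure-Python illustration (not ttnn code; the function name is just for this sketch):

```python
def global_avg_pool2d(x):
    """Global average pooling over nested lists: [N][C][H][W] -> [N][C][1][1].

    Each channel is reduced to the mean of all its H*W elements,
    unlike windowed avg_pool2d, which averages over each kernel window.
    """
    return [
        [[[sum(v for row in chan for v in row) / (len(chan) * len(chan[0]))]]
         for chan in batch]
        for batch in x
    ]

# global_avg_pool2d([[[[1.0, 2.0], [3.0, 4.0]]]]) -> [[[[2.5]]]]
```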

### Checklist
- [x] Post commit CI passes
(https://github.com/tenstorrent/tt-metal/actions/runs/11546482848)
- [ ] Blackhole Post commit (if applicable)
- [ ] Model regression CI testing passes (if applicable)
- [ ] Device performance regression CI testing passes (if applicable)
- [x] New/Existing tests provide coverage for changes

Co-authored-by: Artem Yerofieiev <169092593+ayerofieiev-tt@users.noreply.github.com>
spoojaryTT pushed a commit that referenced this issue Nov 25, 2024