
Commit

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] committed Jul 2, 2024
1 parent 7db144c commit a4874c8
Showing 3 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion CODE_OF_CONDUCT.md
@@ -5,7 +5,7 @@
 We as members, contributors, and leaders pledge to make participation in our
 community a harassment-free experience for everyone, regardless of age, body
 size, visible or invisible disability, ethnicity, sex characteristics, gender
-identity and expression, level of experience, education, socio-economic status,
+identity and expression, level of experience, education, socioeconomic status,
 nationality, personal appearance, race, caste, color, religion, or sexual
 identity and orientation.

4 changes: 2 additions & 2 deletions bestla/bestla/kernel_jit.h
@@ -1566,8 +1566,8 @@ class PaddingTransInterleaveCvt : protected xbyak::JitAvx512f {
 // Complex number matrix(interleaved) - vector(as diagonal matrix) multiplication; Typically used for
 // shift-RoPE
 //
-// vector: fp16 values; view every adjacent 2 values on colunm as a complex num
-// src: bf16 ⌈row/row_pack⌉ x n_tile x row_pack; view every adjacent 2 values on colunm as a complex num
+// vector: fp16 values; view every adjacent 2 values on column as a complex num
+// src: bf16 ⌈row/row_pack⌉ x n_tile x row_pack; view every adjacent 2 values on column as a complex num
 // dst: same as src
 class CScaleInterleavedBF16FP16 : protected xbyak::JitAvx512_fp16 {
 public:
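For context on what this kernel computes: the corrected comment describes treating every adjacent pair of values down a column as one complex number, scaled by a matching complex value from the vector. Below is a minimal scalar sketch of that arithmetic, assuming a plain row-major float buffer in place of the kernel's tiled bf16/fp16 ⌈row/row_pack⌉ x n_tile x row_pack layout; the function name and signature are illustrative, not from the repository.

```c
#include <stddef.h>

/* Scalar sketch of the complex scaling vectorized by CScaleInterleavedBF16FP16:
 * adjacent row pair (i, i+1) of each column holds (real, imag) of a complex
 * number, which is multiplied by the complex scale (vec[i], vec[i+1]).
 * float stands in for the kernel's bf16/fp16; the tiled interleaved layout
 * handled by the real JIT code is ignored here. */
static void cscale_reference(const float* src, const float* vec, float* dst,
                             size_t rows, size_t cols) {
  for (size_t i = 0; i + 1 < rows; i += 2) {
    const float c = vec[i];     /* scale, real part */
    const float d = vec[i + 1]; /* scale, imaginary part */
    for (size_t j = 0; j < cols; ++j) {
      const float a = src[i * cols + j];       /* real part */
      const float b = src[(i + 1) * cols + j]; /* imaginary part */
      dst[i * cols + j] = a * c - b * d;       /* (a+bi)(c+di): real part */
      dst[(i + 1) * cols + j] = a * d + b * c; /* imaginary part */
    }
  }
}
```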
2 changes: 1 addition & 1 deletion neural_speed/core/ne_layers.c
@@ -685,7 +685,7 @@ static inline bool ne_are_same_shape(const struct ne_tensor* t0, const struct ne
   return (t0->ne[0] == t1->ne[0]) && (t0->ne[1] == t1->ne[1]) && (t0->ne[2] == t1->ne[2]) && (t0->ne[3] == t1->ne[3]);
 }
 
-// check if t1 can be represented as a repeatition of t0
+// check if t1 can be represented as a repetition of t0
 static inline bool ne_can_repeat(const struct ne_tensor* t0, const struct ne_tensor* t1) {
   static_assert(NE_MAX_DIMS == 4, "NE_MAX_DIMS is not 4 - update this function");
 
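The hunk above collapses the body of ne_can_repeat. As a hedged sketch only, not the repository's actual code: neural_speed's ne_* layer tracks ggml, whose ggml_can_repeat performs a per-dimension divisibility check, so a plausible body looks like this (it assumes the surrounding file's struct ne_tensor and its ne[] dimension array):

```c
/* Illustrative sketch only -- the real body is collapsed in the diff. By
 * analogy with ggml's ggml_can_repeat, t1 can be built by repeating t0
 * exactly when each of t1's four dimensions is an integer multiple of the
 * matching dimension of t0 (e.g. broadcasting a 1x4 bias over an 8x4 matrix). */
static inline bool ne_can_repeat_sketch(const struct ne_tensor* t0, const struct ne_tensor* t1) {
  return (t1->ne[0] % t0->ne[0] == 0) && (t1->ne[1] % t0->ne[1] == 0) &&
         (t1->ne[2] % t0->ne[2] == 0) && (t1->ne[3] % t0->ne[3] == 0);
}
```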
