diff --git a/apis/docs/v1/docs.md b/apis/docs/v1/docs.md
index e0a8cde5e7..32fb972f10 100644
--- a/apis/docs/v1/docs.md
+++ b/apis/docs/v1/docs.md
@@ -13,7 +13,12 @@
- [Empty](#payload-v1-Empty)
- [Filter](#payload-v1-Filter)
- [Filter.Config](#payload-v1-Filter-Config)
+ - [Filter.DistanceRequest](#payload-v1-Filter-DistanceRequest)
+ - [Filter.DistanceResponse](#payload-v1-Filter-DistanceResponse)
+ - [Filter.Query](#payload-v1-Filter-Query)
- [Filter.Target](#payload-v1-Filter-Target)
+ - [Filter.VectorRequest](#payload-v1-Filter-VectorRequest)
+ - [Filter.VectorResponse](#payload-v1-Filter-VectorResponse)
- [Flush](#payload-v1-Flush)
- [Flush.Request](#payload-v1-Flush-Request)
- [Info](#payload-v1-Info)
@@ -218,9 +223,41 @@ Filter related messages.
Represent filter configuration.
-| Field | Type | Label | Description |
-| ------- | ------------------------------------------ | -------- | ------------------------------------------ |
-| targets | [Filter.Target](#payload-v1-Filter-Target) | repeated | Represent the filter target configuration. |
+| Field | Type | Label | Description |
+| ------ | ------------------------------------------ | ----- | ------------------------------------------ |
+| target | [Filter.Target](#payload-v1-Filter-Target) | | Represent the filter target configuration. |
+| query | [Filter.Query](#payload-v1-Filter-Query) | | The target query. |
+
+
+
+### Filter.DistanceRequest
+
+Represent the distance filter request.
+
+| Field | Type | Label | Description |
+| -------- | ---------------------------------------------- | -------- | ----------- |
+| distance | [Object.Distance](#payload-v1-Object-Distance) | repeated | Distance |
+| query | [Filter.Query](#payload-v1-Filter-Query) | | Query |
+
+
+
+### Filter.DistanceResponse
+
+Represent the distance filter response.
+
+| Field | Type | Label | Description |
+| -------- | ---------------------------------------------- | -------- | ----------- |
+| distance | [Object.Distance](#payload-v1-Object-Distance) | repeated | Distance |
+
+
+
+### Filter.Query
+
+Represent the filter query.
+
+| Field | Type | Label | Description |
+| ----- | ----------------- | ----- | --------------------- |
+| query | [string](#string) | | The raw query string. |
@@ -233,6 +270,27 @@ Represent the target filter server.
| host | [string](#string) | | The target hostname. |
| port | [uint32](#uint32) | | The target port. |
+
+
+### Filter.VectorRequest
+
+Represent the vector filter request.
+
+| Field | Type | Label | Description |
+| ------ | ------------------------------------------ | ----- | ----------- |
+| vector | [Object.Vector](#payload-v1-Object-Vector) | | Vector |
+| query | [Filter.Query](#payload-v1-Filter-Query) | | Query |
+
+
+
+### Filter.VectorResponse
+
+Represent the vector filter response.
+
+| Field | Type | Label | Description |
+| ------ | ------------------------------------------ | ----- | ----------- |
+| vector | [Object.Vector](#payload-v1-Object-Vector) | | Vector |
+
### Flush
@@ -609,11 +667,11 @@ Insert related messages.
Represent insert configurations.
-| Field | Type | Label | Description |
-| ----------------------- | ------------------------------------------ | ----- | --------------------------------------------------- |
-| skip_strict_exist_check | [bool](#bool) | | A flag to skip exist check during insert operation. |
-| filters | [Filter.Config](#payload-v1-Filter-Config) | | Filter configurations. |
-| timestamp | [int64](#int64) | | Insert timestamp. |
+| Field | Type | Label | Description |
+| ----------------------- | ------------------------------------------ | -------- | --------------------------------------------------- |
+| skip_strict_exist_check | [bool](#bool) | | A flag to skip exist check during insert operation. |
+| filters | [Filter.Config](#payload-v1-Filter-Config) | repeated | Filter configurations. |
+| timestamp | [int64](#int64) | | Insert timestamp. |
@@ -897,10 +955,10 @@ Represent a vector.
Represent a request to fetch raw vector.
-| Field | Type | Label | Description |
-| ------- | ------------------------------------------ | ----- | ---------------------------- |
-| id | [Object.ID](#payload-v1-Object-ID) | | The vector ID to be fetched. |
-| filters | [Filter.Config](#payload-v1-Filter-Config) | | Filter configurations. |
+| Field | Type | Label | Description |
+| ------- | ------------------------------------------ | -------- | ---------------------------- |
+| id | [Object.ID](#payload-v1-Object-ID) | | The vector ID to be fetched. |
+| filters | [Filter.Config](#payload-v1-Filter-Config) | repeated | Filter configurations. |
@@ -983,19 +1041,19 @@ Search related messages.
Represent search configuration.
-| Field | Type | Label | Description |
-| --------------------- | ---------------------------------------------------------------------- | ----- | -------------------------------------------- |
-| request_id | [string](#string) | | Unique request ID. |
-| num | [uint32](#uint32) | | Maximum number of result to be returned. |
-| radius | [float](#float) | | Search radius. |
-| epsilon | [float](#float) | | Search coefficient. |
-| timeout | [int64](#int64) | | Search timeout in nanoseconds. |
-| ingress_filters | [Filter.Config](#payload-v1-Filter-Config) | | Ingress filter configurations. |
-| egress_filters | [Filter.Config](#payload-v1-Filter-Config) | | Egress filter configurations. |
-| min_num | [uint32](#uint32) | | Minimum number of result to be returned. |
-| aggregation_algorithm | [Search.AggregationAlgorithm](#payload-v1-Search-AggregationAlgorithm) | | Aggregation Algorithm |
-| ratio | [google.protobuf.FloatValue](#google-protobuf-FloatValue) | | Search ratio for agent return result number. |
-| nprobe | [uint32](#uint32) | | Search nprobe. |
+| Field | Type | Label | Description |
+| --------------------- | ---------------------------------------------------------------------- | -------- | -------------------------------------------- |
+| request_id | [string](#string) | | Unique request ID. |
+| num | [uint32](#uint32) | | Maximum number of result to be returned. |
+| radius | [float](#float) | | Search radius. |
+| epsilon | [float](#float) | | Search coefficient. |
+| timeout | [int64](#int64) | | Search timeout in nanoseconds. |
+| ingress_filters | [Filter.Config](#payload-v1-Filter-Config) | repeated | Ingress filter configurations. |
+| egress_filters | [Filter.Config](#payload-v1-Filter-Config) | repeated | Egress filter configurations. |
+| min_num | [uint32](#uint32) | | Minimum number of result to be returned. |
+| aggregation_algorithm | [Search.AggregationAlgorithm](#payload-v1-Search-AggregationAlgorithm) | | Aggregation Algorithm |
+| ratio | [google.protobuf.FloatValue](#google-protobuf-FloatValue) | | Search ratio for agent return result number. |
+| nprobe | [uint32](#uint32) | | Search nprobe. |
@@ -1105,12 +1163,12 @@ Update related messages
Represent the update configuration.
-| Field | Type | Label | Description |
-| ----------------------- | ------------------------------------------ | ----- | ------------------------------------------------------------------------------------------------ |
-| skip_strict_exist_check | [bool](#bool) | | A flag to skip exist check during update operation. |
-| filters | [Filter.Config](#payload-v1-Filter-Config) | | Filter configuration. |
-| timestamp | [int64](#int64) | | Update timestamp. |
-| disable_balanced_update | [bool](#bool) | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+| Field | Type | Label | Description |
+| ----------------------- | ------------------------------------------ | -------- | ------------------------------------------------------------------------------------------------ |
+| skip_strict_exist_check | [bool](#bool) | | A flag to skip exist check during update operation. |
+| filters | [Filter.Config](#payload-v1-Filter-Config) | repeated | Filter configuration. |
+| timestamp | [int64](#int64) | | Update timestamp. |
+| disable_balanced_update | [bool](#bool) | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
@@ -1179,12 +1237,12 @@ Upsert related messages.
Represent the upsert configuration.
-| Field | Type | Label | Description |
-| ----------------------- | ------------------------------------------ | ----- | ------------------------------------------------------------------------------------------------ |
-| skip_strict_exist_check | [bool](#bool) | | A flag to skip exist check during upsert operation. |
-| filters | [Filter.Config](#payload-v1-Filter-Config) | | Filter configuration. |
-| timestamp | [int64](#int64) | | Upsert timestamp. |
-| disable_balanced_update | [bool](#bool) | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+| Field | Type | Label | Description |
+| ----------------------- | ------------------------------------------ | -------- | ------------------------------------------------------------------------------------------------ |
+| skip_strict_exist_check | [bool](#bool) | | A flag to skip exist check during upsert operation. |
+| filters | [Filter.Config](#payload-v1-Filter-Config) | repeated | Filter configuration. |
+| timestamp | [int64](#int64) | | Upsert timestamp. |
+| disable_balanced_update | [bool](#bool) | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
@@ -1321,10 +1379,10 @@ Represent the discoverer service.
Represent the egress filter service.
-| Method Name | Request Type | Response Type | Description |
-| -------------- | ---------------------------------------------------------- | ---------------------------------------------------------- | ----------------------------------------- |
-| FilterDistance | [.payload.v1.Object.Distance](#payload-v1-Object-Distance) | [.payload.v1.Object.Distance](#payload-v1-Object-Distance) | Represent the RPC to filter the distance. |
-| FilterVector | [.payload.v1.Object.Vector](#payload-v1-Object-Vector) | [.payload.v1.Object.Vector](#payload-v1-Object-Vector) | Represent the RPC to filter the vector. |
+| Method Name | Request Type | Response Type | Description |
+| -------------- | ------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ----------------------------------------- |
+| FilterDistance | [.payload.v1.Filter.DistanceRequest](#payload-v1-Filter-DistanceRequest) | [.payload.v1.Filter.DistanceResponse](#payload-v1-Filter-DistanceResponse) | Represent the RPC to filter the distance. |
+| FilterVector | [.payload.v1.Filter.VectorRequest](#payload-v1-Filter-VectorRequest) | [.payload.v1.Filter.VectorResponse](#payload-v1-Filter-VectorResponse) | Represent the RPC to filter the vector. |
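
For reference, a minimal client-side sketch of what the updated egress filter signatures look like after this change. It assumes the generated packages are imported as `egress` and `payload` from the `github.com/vdaas/vald/apis/grpc/v1/...` paths named in the go_package options below; the target address, the query string, and the `Object.Distance` field names (`Id`, `Distance`) are illustrative assumptions, not part of this diff.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/vdaas/vald/apis/grpc/v1/filter/egress"
	"github.com/vdaas/vald/apis/grpc/v1/payload"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// "localhost:8081" is a placeholder address for an egress filter component.
	conn, err := grpc.Dial("localhost:8081",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := egress.NewFilterClient(conn)

	// FilterDistance no longer takes and returns the bare Object.Distance payload;
	// it wraps the search results together with a raw query string in a
	// Filter.DistanceRequest and returns a Filter.DistanceResponse.
	res, err := client.FilterDistance(context.Background(), &payload.Filter_DistanceRequest{
		Distance: []*payload.Object_Distance{
			{Id: "vec-1", Distance: 0.12}, // Object.Distance field names assumed, not shown in this diff
		},
		Query: &payload.Filter_Query{Query: "price < 100"}, // arbitrary example query
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.GetDistance())
}
```
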
diff --git a/apis/grpc/v1/filter/egress/egress_filter.pb.go b/apis/grpc/v1/filter/egress/egress_filter.pb.go
index 5d60c42564..7d7df2bd4e 100644
--- a/apis/grpc/v1/filter/egress/egress_filter.pb.go
+++ b/apis/grpc/v1/filter/egress/egress_filter.pb.go
@@ -48,40 +48,44 @@ var file_v1_filter_egress_egress_filter_proto_rawDesc = []byte{
0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x18, 0x76, 0x31, 0x2f, 0x70, 0x61, 0x79, 0x6c, 0x6f,
0x61, 0x64, 0x2f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
- 0x32, 0xe0, 0x01, 0x0a, 0x06, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x12, 0x6e, 0x0a, 0x0e, 0x46,
- 0x69, 0x6c, 0x74, 0x65, 0x72, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x1b, 0x2e,
- 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63,
- 0x74, 0x2e, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x1a, 0x1b, 0x2e, 0x70, 0x61, 0x79,
- 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x2e, 0x44,
- 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x22, 0x22, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1c, 0x3a,
- 0x01, 0x2a, 0x22, 0x17, 0x2f, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2f, 0x65, 0x67, 0x72, 0x65,
- 0x73, 0x73, 0x2f, 0x64, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x66, 0x0a, 0x0c, 0x46,
- 0x69, 0x6c, 0x74, 0x65, 0x72, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x12, 0x19, 0x2e, 0x70, 0x61,
- 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x2e,
- 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x1a, 0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
- 0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x2e, 0x56, 0x65, 0x63, 0x74, 0x6f,
- 0x72, 0x22, 0x20, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1a, 0x3a, 0x01, 0x2a, 0x22, 0x15, 0x2f, 0x66,
- 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2f, 0x65, 0x67, 0x72, 0x65, 0x73, 0x73, 0x2f, 0x76, 0x65, 0x63,
- 0x74, 0x6f, 0x72, 0x42, 0x6b, 0x0a, 0x23, 0x6f, 0x72, 0x67, 0x2e, 0x76, 0x64, 0x61, 0x61, 0x73,
- 0x2e, 0x76, 0x61, 0x6c, 0x64, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x76, 0x31, 0x2e, 0x66, 0x69, 0x6c,
- 0x74, 0x65, 0x72, 0x2e, 0x65, 0x67, 0x72, 0x65, 0x73, 0x73, 0x42, 0x10, 0x56, 0x61, 0x6c, 0x64,
- 0x45, 0x67, 0x72, 0x65, 0x73, 0x73, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x50, 0x01, 0x5a, 0x30,
- 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x76, 0x64, 0x61, 0x61, 0x73,
- 0x2f, 0x76, 0x61, 0x6c, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x2f,
- 0x76, 0x31, 0x2f, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2f, 0x65, 0x67, 0x72, 0x65, 0x73, 0x73,
- 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+ 0x32, 0xfe, 0x01, 0x0a, 0x06, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x12, 0x7d, 0x0a, 0x0e, 0x46,
+ 0x69, 0x6c, 0x74, 0x65, 0x72, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x22, 0x2e,
+ 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65,
+ 0x72, 0x2e, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
+ 0x74, 0x1a, 0x23, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46,
+ 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x52, 0x65,
+ 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x22, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1c, 0x3a, 0x01,
+ 0x2a, 0x22, 0x17, 0x2f, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2f, 0x65, 0x67, 0x72, 0x65, 0x73,
+ 0x73, 0x2f, 0x64, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x75, 0x0a, 0x0c, 0x46, 0x69,
+ 0x6c, 0x74, 0x65, 0x72, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x12, 0x20, 0x2e, 0x70, 0x61, 0x79,
+ 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x56,
+ 0x65, 0x63, 0x74, 0x6f, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x21, 0x2e, 0x70,
+ 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72,
+ 0x2e, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
+ 0x20, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1a, 0x3a, 0x01, 0x2a, 0x22, 0x15, 0x2f, 0x66, 0x69, 0x6c,
+ 0x74, 0x65, 0x72, 0x2f, 0x65, 0x67, 0x72, 0x65, 0x73, 0x73, 0x2f, 0x76, 0x65, 0x63, 0x74, 0x6f,
+ 0x72, 0x42, 0x6b, 0x0a, 0x23, 0x6f, 0x72, 0x67, 0x2e, 0x76, 0x64, 0x61, 0x61, 0x73, 0x2e, 0x76,
+ 0x61, 0x6c, 0x64, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x76, 0x31, 0x2e, 0x66, 0x69, 0x6c, 0x74, 0x65,
+ 0x72, 0x2e, 0x65, 0x67, 0x72, 0x65, 0x73, 0x73, 0x42, 0x10, 0x56, 0x61, 0x6c, 0x64, 0x45, 0x67,
+ 0x72, 0x65, 0x73, 0x73, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x50, 0x01, 0x5a, 0x30, 0x67, 0x69,
+ 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x76, 0x64, 0x61, 0x61, 0x73, 0x2f, 0x76,
+ 0x61, 0x6c, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x76, 0x31,
+ 0x2f, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2f, 0x65, 0x67, 0x72, 0x65, 0x73, 0x73, 0x62, 0x06,
+ 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var file_v1_filter_egress_egress_filter_proto_goTypes = []any{
- (*payload.Object_Distance)(nil), // 0: payload.v1.Object.Distance
- (*payload.Object_Vector)(nil), // 1: payload.v1.Object.Vector
+ (*payload.Filter_DistanceRequest)(nil), // 0: payload.v1.Filter.DistanceRequest
+ (*payload.Filter_VectorRequest)(nil), // 1: payload.v1.Filter.VectorRequest
+ (*payload.Filter_DistanceResponse)(nil), // 2: payload.v1.Filter.DistanceResponse
+ (*payload.Filter_VectorResponse)(nil), // 3: payload.v1.Filter.VectorResponse
}
var file_v1_filter_egress_egress_filter_proto_depIdxs = []int32{
- 0, // 0: filter.egress.v1.Filter.FilterDistance:input_type -> payload.v1.Object.Distance
- 1, // 1: filter.egress.v1.Filter.FilterVector:input_type -> payload.v1.Object.Vector
- 0, // 2: filter.egress.v1.Filter.FilterDistance:output_type -> payload.v1.Object.Distance
- 1, // 3: filter.egress.v1.Filter.FilterVector:output_type -> payload.v1.Object.Vector
+ 0, // 0: filter.egress.v1.Filter.FilterDistance:input_type -> payload.v1.Filter.DistanceRequest
+ 1, // 1: filter.egress.v1.Filter.FilterVector:input_type -> payload.v1.Filter.VectorRequest
+ 2, // 2: filter.egress.v1.Filter.FilterDistance:output_type -> payload.v1.Filter.DistanceResponse
+ 3, // 3: filter.egress.v1.Filter.FilterVector:output_type -> payload.v1.Filter.VectorResponse
2, // [2:4] is the sub-list for method output_type
0, // [0:2] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
diff --git a/apis/grpc/v1/filter/egress/egress_filter_vtproto.pb.go b/apis/grpc/v1/filter/egress/egress_filter_vtproto.pb.go
index fc9d0d34e2..e73d367db4 100644
--- a/apis/grpc/v1/filter/egress/egress_filter_vtproto.pb.go
+++ b/apis/grpc/v1/filter/egress/egress_filter_vtproto.pb.go
@@ -43,9 +43,9 @@ const _ = grpc.SupportPackageIsVersion7
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type FilterClient interface {
// Represent the RPC to filter the distance.
- FilterDistance(ctx context.Context, in *payload.Object_Distance, opts ...grpc.CallOption) (*payload.Object_Distance, error)
+ FilterDistance(ctx context.Context, in *payload.Filter_DistanceRequest, opts ...grpc.CallOption) (*payload.Filter_DistanceResponse, error)
// Represent the RPC to filter the vector.
- FilterVector(ctx context.Context, in *payload.Object_Vector, opts ...grpc.CallOption) (*payload.Object_Vector, error)
+ FilterVector(ctx context.Context, in *payload.Filter_VectorRequest, opts ...grpc.CallOption) (*payload.Filter_VectorResponse, error)
}
type filterClient struct {
@@ -57,9 +57,9 @@ func NewFilterClient(cc grpc.ClientConnInterface) FilterClient {
}
func (c *filterClient) FilterDistance(
- ctx context.Context, in *payload.Object_Distance, opts ...grpc.CallOption,
-) (*payload.Object_Distance, error) {
- out := new(payload.Object_Distance)
+ ctx context.Context, in *payload.Filter_DistanceRequest, opts ...grpc.CallOption,
+) (*payload.Filter_DistanceResponse, error) {
+ out := new(payload.Filter_DistanceResponse)
err := c.cc.Invoke(ctx, "/filter.egress.v1.Filter/FilterDistance", in, out, opts...)
if err != nil {
return nil, err
@@ -68,9 +68,9 @@ func (c *filterClient) FilterDistance(
}
func (c *filterClient) FilterVector(
- ctx context.Context, in *payload.Object_Vector, opts ...grpc.CallOption,
-) (*payload.Object_Vector, error) {
- out := new(payload.Object_Vector)
+ ctx context.Context, in *payload.Filter_VectorRequest, opts ...grpc.CallOption,
+) (*payload.Filter_VectorResponse, error) {
+ out := new(payload.Filter_VectorResponse)
err := c.cc.Invoke(ctx, "/filter.egress.v1.Filter/FilterVector", in, out, opts...)
if err != nil {
return nil, err
@@ -83,9 +83,9 @@ func (c *filterClient) FilterVector(
// for forward compatibility
type FilterServer interface {
// Represent the RPC to filter the distance.
- FilterDistance(context.Context, *payload.Object_Distance) (*payload.Object_Distance, error)
+ FilterDistance(context.Context, *payload.Filter_DistanceRequest) (*payload.Filter_DistanceResponse, error)
// Represent the RPC to filter the vector.
- FilterVector(context.Context, *payload.Object_Vector) (*payload.Object_Vector, error)
+ FilterVector(context.Context, *payload.Filter_VectorRequest) (*payload.Filter_VectorResponse, error)
mustEmbedUnimplementedFilterServer()
}
@@ -93,14 +93,14 @@ type FilterServer interface {
type UnimplementedFilterServer struct{}
func (UnimplementedFilterServer) FilterDistance(
- context.Context, *payload.Object_Distance,
-) (*payload.Object_Distance, error) {
+ context.Context, *payload.Filter_DistanceRequest,
+) (*payload.Filter_DistanceResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method FilterDistance not implemented")
}
func (UnimplementedFilterServer) FilterVector(
- context.Context, *payload.Object_Vector,
-) (*payload.Object_Vector, error) {
+ context.Context, *payload.Filter_VectorRequest,
+) (*payload.Filter_VectorResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method FilterVector not implemented")
}
func (UnimplementedFilterServer) mustEmbedUnimplementedFilterServer() {}
@@ -119,7 +119,7 @@ func RegisterFilterServer(s grpc.ServiceRegistrar, srv FilterServer) {
func _Filter_FilterDistance_Handler(
srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor,
) (any, error) {
- in := new(payload.Object_Distance)
+ in := new(payload.Filter_DistanceRequest)
if err := dec(in); err != nil {
return nil, err
}
@@ -131,7 +131,7 @@ func _Filter_FilterDistance_Handler(
FullMethod: "/filter.egress.v1.Filter/FilterDistance",
}
handler := func(ctx context.Context, req any) (any, error) {
- return srv.(FilterServer).FilterDistance(ctx, req.(*payload.Object_Distance))
+ return srv.(FilterServer).FilterDistance(ctx, req.(*payload.Filter_DistanceRequest))
}
return interceptor(ctx, in, info, handler)
}
@@ -139,7 +139,7 @@ func _Filter_FilterDistance_Handler(
func _Filter_FilterVector_Handler(
srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor,
) (any, error) {
- in := new(payload.Object_Vector)
+ in := new(payload.Filter_VectorRequest)
if err := dec(in); err != nil {
return nil, err
}
@@ -151,7 +151,7 @@ func _Filter_FilterVector_Handler(
FullMethod: "/filter.egress.v1.Filter/FilterVector",
}
handler := func(ctx context.Context, req any) (any, error) {
- return srv.(FilterServer).FilterVector(ctx, req.(*payload.Object_Vector))
+ return srv.(FilterServer).FilterVector(ctx, req.(*payload.Filter_VectorRequest))
}
return interceptor(ctx, in, info, handler)
}
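
A corresponding server-side sketch under the same assumptions: with this change, implementations embed `UnimplementedFilterServer` and use the new request/response types from the `FilterServer` interface above. The pass-through behaviour and the listen address are placeholders.

```go
package main

import (
	"context"
	"log"
	"net"

	"github.com/vdaas/vald/apis/grpc/v1/filter/egress"
	"github.com/vdaas/vald/apis/grpc/v1/payload"
	"google.golang.org/grpc"
)

// passthroughFilter embeds UnimplementedFilterServer so it keeps compiling
// if further RPCs are added to the service.
type passthroughFilter struct {
	egress.UnimplementedFilterServer
}

// FilterDistance returns the incoming distances unchanged; a real filter
// would drop or reorder them according to req.GetQuery().
func (f *passthroughFilter) FilterDistance(
	ctx context.Context, req *payload.Filter_DistanceRequest,
) (*payload.Filter_DistanceResponse, error) {
	return &payload.Filter_DistanceResponse{Distance: req.GetDistance()}, nil
}

// FilterVector passes the vector through unchanged.
func (f *passthroughFilter) FilterVector(
	ctx context.Context, req *payload.Filter_VectorRequest,
) (*payload.Filter_VectorResponse, error) {
	return &payload.Filter_VectorResponse{Vector: req.GetVector()}, nil
}

func main() {
	// ":8081" is a placeholder listen address.
	lis, err := net.Listen("tcp", ":8081")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	egress.RegisterFilterServer(srv, &passthroughFilter{})
	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}
```
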
diff --git a/apis/grpc/v1/payload/payload.pb.go b/apis/grpc/v1/payload/payload.pb.go
index 6ea1603db4..ad64c80e02 100644
--- a/apis/grpc/v1/payload/payload.pb.go
+++ b/apis/grpc/v1/payload/payload.pb.go
@@ -1056,9 +1056,9 @@ type Search_Config struct {
// Search timeout in nanoseconds.
Timeout int64 `protobuf:"varint,5,opt,name=timeout,proto3" json:"timeout,omitempty"`
// Ingress filter configurations.
- IngressFilters *Filter_Config `protobuf:"bytes,6,opt,name=ingress_filters,json=ingressFilters,proto3" json:"ingress_filters,omitempty"`
+ IngressFilters []*Filter_Config `protobuf:"bytes,6,rep,name=ingress_filters,json=ingressFilters,proto3" json:"ingress_filters,omitempty"`
// Egress filter configurations.
- EgressFilters *Filter_Config `protobuf:"bytes,7,opt,name=egress_filters,json=egressFilters,proto3" json:"egress_filters,omitempty"`
+ EgressFilters []*Filter_Config `protobuf:"bytes,7,rep,name=egress_filters,json=egressFilters,proto3" json:"egress_filters,omitempty"`
// Minimum number of result to be returned.
MinNum uint32 `protobuf:"varint,8,opt,name=min_num,json=minNum,proto3" json:"min_num,omitempty"`
// Aggregation Algorithm
@@ -1136,14 +1136,14 @@ func (x *Search_Config) GetTimeout() int64 {
return 0
}
-func (x *Search_Config) GetIngressFilters() *Filter_Config {
+func (x *Search_Config) GetIngressFilters() []*Filter_Config {
if x != nil {
return x.IngressFilters
}
return nil
}
-func (x *Search_Config) GetEgressFilters() *Filter_Config {
+func (x *Search_Config) GetEgressFilters() []*Filter_Config {
if x != nil {
return x.EgressFilters
}
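
The other half of this change is that the `Filter.Config` fields become `repeated`. A hedged sketch of a caller-side `Search_Config` afterwards; the hostnames, port, and query strings are placeholders, and the `Filter_Target` field names (`Host`, `Port`) are assumed from the Filter.Target documentation above.

```go
package main

import (
	"fmt"

	"github.com/vdaas/vald/apis/grpc/v1/payload"
)

func main() {
	// ingress_filters and egress_filters are now repeated, so a request can
	// name several filter components, each with its own target and query.
	cfg := &payload.Search_Config{
		Timeout: 3_000_000_000, // 3s, in nanoseconds
		MinNum:  5,
		IngressFilters: []*payload.Filter_Config{
			{
				Target: &payload.Filter_Target{Host: "ingress-filter.example.svc", Port: 8081},
				Query:  &payload.Filter_Query{Query: "lang == 'en'"},
			},
		},
		EgressFilters: []*payload.Filter_Config{
			{
				Target: &payload.Filter_Target{Host: "egress-filter.example.svc", Port: 8081},
				Query:  &payload.Filter_Query{Query: "price < 100"},
			},
		},
	}
	fmt.Println(len(cfg.GetIngressFilters()), len(cfg.GetEgressFilters()))
}
```
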
@@ -1427,6 +1427,55 @@ func (x *Filter_Target) GetPort() uint32 {
return 0
}
+// Represent the filter query.
+type Filter_Query struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // The raw query string.
+ Query string `protobuf:"bytes,1,opt,name=query,proto3" json:"query,omitempty"`
+}
+
+func (x *Filter_Query) Reset() {
+ *x = Filter_Query{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_v1_payload_payload_proto_msgTypes[25]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Filter_Query) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Filter_Query) ProtoMessage() {}
+
+func (x *Filter_Query) ProtoReflect() protoreflect.Message {
+ mi := &file_v1_payload_payload_proto_msgTypes[25]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Filter_Query.ProtoReflect.Descriptor instead.
+func (*Filter_Query) Descriptor() ([]byte, []int) {
+ return file_v1_payload_payload_proto_rawDescGZIP(), []int{1, 1}
+}
+
+func (x *Filter_Query) GetQuery() string {
+ if x != nil {
+ return x.Query
+ }
+ return ""
+}
+
// Represent filter configuration.
type Filter_Config struct {
state protoimpl.MessageState
@@ -1434,13 +1483,15 @@ type Filter_Config struct {
unknownFields protoimpl.UnknownFields
// Represent the filter target configuration.
- Targets []*Filter_Target `protobuf:"bytes,1,rep,name=targets,proto3" json:"targets,omitempty"`
+ Target *Filter_Target `protobuf:"bytes,1,opt,name=target,proto3" json:"target,omitempty"`
+ // The target query.
+ Query *Filter_Query `protobuf:"bytes,2,opt,name=query,proto3" json:"query,omitempty"`
}
func (x *Filter_Config) Reset() {
*x = Filter_Config{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[25]
+ mi := &file_v1_payload_payload_proto_msgTypes[26]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1453,7 +1504,7 @@ func (x *Filter_Config) String() string {
func (*Filter_Config) ProtoMessage() {}
func (x *Filter_Config) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[25]
+ mi := &file_v1_payload_payload_proto_msgTypes[26]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1466,12 +1517,233 @@ func (x *Filter_Config) ProtoReflect() protoreflect.Message {
// Deprecated: Use Filter_Config.ProtoReflect.Descriptor instead.
func (*Filter_Config) Descriptor() ([]byte, []int) {
- return file_v1_payload_payload_proto_rawDescGZIP(), []int{1, 1}
+ return file_v1_payload_payload_proto_rawDescGZIP(), []int{1, 2}
}
-func (x *Filter_Config) GetTargets() []*Filter_Target {
+func (x *Filter_Config) GetTarget() *Filter_Target {
if x != nil {
- return x.Targets
+ return x.Target
+ }
+ return nil
+}
+
+func (x *Filter_Config) GetQuery() *Filter_Query {
+ if x != nil {
+ return x.Query
+ }
+ return nil
+}
+
+// Represent the distance filter request.
+type Filter_DistanceRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // Distance
+ Distance []*Object_Distance `protobuf:"bytes,1,rep,name=distance,proto3" json:"distance,omitempty"`
+ // Query
+ Query *Filter_Query `protobuf:"bytes,2,opt,name=query,proto3" json:"query,omitempty"`
+}
+
+func (x *Filter_DistanceRequest) Reset() {
+ *x = Filter_DistanceRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_v1_payload_payload_proto_msgTypes[27]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Filter_DistanceRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Filter_DistanceRequest) ProtoMessage() {}
+
+func (x *Filter_DistanceRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_v1_payload_payload_proto_msgTypes[27]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Filter_DistanceRequest.ProtoReflect.Descriptor instead.
+func (*Filter_DistanceRequest) Descriptor() ([]byte, []int) {
+ return file_v1_payload_payload_proto_rawDescGZIP(), []int{1, 3}
+}
+
+func (x *Filter_DistanceRequest) GetDistance() []*Object_Distance {
+ if x != nil {
+ return x.Distance
+ }
+ return nil
+}
+
+func (x *Filter_DistanceRequest) GetQuery() *Filter_Query {
+ if x != nil {
+ return x.Query
+ }
+ return nil
+}
+
+// Represent the distance filter response.
+type Filter_DistanceResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // Distance
+ Distance []*Object_Distance `protobuf:"bytes,1,rep,name=distance,proto3" json:"distance,omitempty"`
+}
+
+func (x *Filter_DistanceResponse) Reset() {
+ *x = Filter_DistanceResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_v1_payload_payload_proto_msgTypes[28]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Filter_DistanceResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Filter_DistanceResponse) ProtoMessage() {}
+
+func (x *Filter_DistanceResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_v1_payload_payload_proto_msgTypes[28]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Filter_DistanceResponse.ProtoReflect.Descriptor instead.
+func (*Filter_DistanceResponse) Descriptor() ([]byte, []int) {
+ return file_v1_payload_payload_proto_rawDescGZIP(), []int{1, 4}
+}
+
+func (x *Filter_DistanceResponse) GetDistance() []*Object_Distance {
+ if x != nil {
+ return x.Distance
+ }
+ return nil
+}
+
+// Represent the vector filter request.
+type Filter_VectorRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // Vector
+ Vector *Object_Vector `protobuf:"bytes,1,opt,name=vector,proto3" json:"vector,omitempty"`
+ // Query
+ Query *Filter_Query `protobuf:"bytes,2,opt,name=query,proto3" json:"query,omitempty"`
+}
+
+func (x *Filter_VectorRequest) Reset() {
+ *x = Filter_VectorRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_v1_payload_payload_proto_msgTypes[29]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Filter_VectorRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Filter_VectorRequest) ProtoMessage() {}
+
+func (x *Filter_VectorRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_v1_payload_payload_proto_msgTypes[29]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Filter_VectorRequest.ProtoReflect.Descriptor instead.
+func (*Filter_VectorRequest) Descriptor() ([]byte, []int) {
+ return file_v1_payload_payload_proto_rawDescGZIP(), []int{1, 5}
+}
+
+func (x *Filter_VectorRequest) GetVector() *Object_Vector {
+ if x != nil {
+ return x.Vector
+ }
+ return nil
+}
+
+func (x *Filter_VectorRequest) GetQuery() *Filter_Query {
+ if x != nil {
+ return x.Query
+ }
+ return nil
+}
+
+// Represent the vector filter response.
+type Filter_VectorResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+	// Vector
+ Vector *Object_Vector `protobuf:"bytes,1,opt,name=vector,proto3" json:"vector,omitempty"`
+}
+
+func (x *Filter_VectorResponse) Reset() {
+ *x = Filter_VectorResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_v1_payload_payload_proto_msgTypes[30]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Filter_VectorResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Filter_VectorResponse) ProtoMessage() {}
+
+func (x *Filter_VectorResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_v1_payload_payload_proto_msgTypes[30]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Filter_VectorResponse.ProtoReflect.Descriptor instead.
+func (*Filter_VectorResponse) Descriptor() ([]byte, []int) {
+ return file_v1_payload_payload_proto_rawDescGZIP(), []int{1, 6}
+}
+
+func (x *Filter_VectorResponse) GetVector() *Object_Vector {
+ if x != nil {
+ return x.Vector
}
return nil
}
@@ -1491,7 +1763,7 @@ type Insert_Request struct {
func (x *Insert_Request) Reset() {
*x = Insert_Request{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[26]
+ mi := &file_v1_payload_payload_proto_msgTypes[31]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1504,7 +1776,7 @@ func (x *Insert_Request) String() string {
func (*Insert_Request) ProtoMessage() {}
func (x *Insert_Request) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[26]
+ mi := &file_v1_payload_payload_proto_msgTypes[31]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1547,7 +1819,7 @@ type Insert_MultiRequest struct {
func (x *Insert_MultiRequest) Reset() {
*x = Insert_MultiRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[27]
+ mi := &file_v1_payload_payload_proto_msgTypes[32]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1560,7 +1832,7 @@ func (x *Insert_MultiRequest) String() string {
func (*Insert_MultiRequest) ProtoMessage() {}
func (x *Insert_MultiRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[27]
+ mi := &file_v1_payload_payload_proto_msgTypes[32]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1600,7 +1872,7 @@ type Insert_ObjectRequest struct {
func (x *Insert_ObjectRequest) Reset() {
*x = Insert_ObjectRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[28]
+ mi := &file_v1_payload_payload_proto_msgTypes[33]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1613,7 +1885,7 @@ func (x *Insert_ObjectRequest) String() string {
func (*Insert_ObjectRequest) ProtoMessage() {}
func (x *Insert_ObjectRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[28]
+ mi := &file_v1_payload_payload_proto_msgTypes[33]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1663,7 +1935,7 @@ type Insert_MultiObjectRequest struct {
func (x *Insert_MultiObjectRequest) Reset() {
*x = Insert_MultiObjectRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[29]
+ mi := &file_v1_payload_payload_proto_msgTypes[34]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1676,7 +1948,7 @@ func (x *Insert_MultiObjectRequest) String() string {
func (*Insert_MultiObjectRequest) ProtoMessage() {}
func (x *Insert_MultiObjectRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[29]
+ mi := &file_v1_payload_payload_proto_msgTypes[34]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1708,7 +1980,7 @@ type Insert_Config struct {
// A flag to skip exist check during insert operation.
SkipStrictExistCheck bool `protobuf:"varint,1,opt,name=skip_strict_exist_check,json=skipStrictExistCheck,proto3" json:"skip_strict_exist_check,omitempty"`
// Filter configurations.
- Filters *Filter_Config `protobuf:"bytes,2,opt,name=filters,proto3" json:"filters,omitempty"`
+ Filters []*Filter_Config `protobuf:"bytes,2,rep,name=filters,proto3" json:"filters,omitempty"`
// Insert timestamp.
Timestamp int64 `protobuf:"varint,3,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
}
@@ -1716,7 +1988,7 @@ type Insert_Config struct {
func (x *Insert_Config) Reset() {
*x = Insert_Config{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[30]
+ mi := &file_v1_payload_payload_proto_msgTypes[35]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1729,7 +2001,7 @@ func (x *Insert_Config) String() string {
func (*Insert_Config) ProtoMessage() {}
func (x *Insert_Config) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[30]
+ mi := &file_v1_payload_payload_proto_msgTypes[35]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1752,7 +2024,7 @@ func (x *Insert_Config) GetSkipStrictExistCheck() bool {
return false
}
-func (x *Insert_Config) GetFilters() *Filter_Config {
+func (x *Insert_Config) GetFilters() []*Filter_Config {
if x != nil {
return x.Filters
}
@@ -1781,7 +2053,7 @@ type Update_Request struct {
func (x *Update_Request) Reset() {
*x = Update_Request{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[31]
+ mi := &file_v1_payload_payload_proto_msgTypes[36]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1794,7 +2066,7 @@ func (x *Update_Request) String() string {
func (*Update_Request) ProtoMessage() {}
func (x *Update_Request) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[31]
+ mi := &file_v1_payload_payload_proto_msgTypes[36]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1837,7 +2109,7 @@ type Update_MultiRequest struct {
func (x *Update_MultiRequest) Reset() {
*x = Update_MultiRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[32]
+ mi := &file_v1_payload_payload_proto_msgTypes[37]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1850,7 +2122,7 @@ func (x *Update_MultiRequest) String() string {
func (*Update_MultiRequest) ProtoMessage() {}
func (x *Update_MultiRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[32]
+ mi := &file_v1_payload_payload_proto_msgTypes[37]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1890,7 +2162,7 @@ type Update_ObjectRequest struct {
func (x *Update_ObjectRequest) Reset() {
*x = Update_ObjectRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[33]
+ mi := &file_v1_payload_payload_proto_msgTypes[38]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1903,7 +2175,7 @@ func (x *Update_ObjectRequest) String() string {
func (*Update_ObjectRequest) ProtoMessage() {}
func (x *Update_ObjectRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[33]
+ mi := &file_v1_payload_payload_proto_msgTypes[38]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1953,7 +2225,7 @@ type Update_MultiObjectRequest struct {
func (x *Update_MultiObjectRequest) Reset() {
*x = Update_MultiObjectRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[34]
+ mi := &file_v1_payload_payload_proto_msgTypes[39]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1966,7 +2238,7 @@ func (x *Update_MultiObjectRequest) String() string {
func (*Update_MultiObjectRequest) ProtoMessage() {}
func (x *Update_MultiObjectRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[34]
+ mi := &file_v1_payload_payload_proto_msgTypes[39]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2006,7 +2278,7 @@ type Update_TimestampRequest struct {
func (x *Update_TimestampRequest) Reset() {
*x = Update_TimestampRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[35]
+ mi := &file_v1_payload_payload_proto_msgTypes[40]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2019,7 +2291,7 @@ func (x *Update_TimestampRequest) String() string {
func (*Update_TimestampRequest) ProtoMessage() {}
func (x *Update_TimestampRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[35]
+ mi := &file_v1_payload_payload_proto_msgTypes[40]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2065,7 +2337,7 @@ type Update_Config struct {
// A flag to skip exist check during update operation.
SkipStrictExistCheck bool `protobuf:"varint,1,opt,name=skip_strict_exist_check,json=skipStrictExistCheck,proto3" json:"skip_strict_exist_check,omitempty"`
// Filter configuration.
- Filters *Filter_Config `protobuf:"bytes,2,opt,name=filters,proto3" json:"filters,omitempty"`
+ Filters []*Filter_Config `protobuf:"bytes,2,rep,name=filters,proto3" json:"filters,omitempty"`
// Update timestamp.
Timestamp int64 `protobuf:"varint,3,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
// A flag to disable balanced update (split remove -> insert operation)
@@ -2076,7 +2348,7 @@ type Update_Config struct {
func (x *Update_Config) Reset() {
*x = Update_Config{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[36]
+ mi := &file_v1_payload_payload_proto_msgTypes[41]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2089,7 +2361,7 @@ func (x *Update_Config) String() string {
func (*Update_Config) ProtoMessage() {}
func (x *Update_Config) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[36]
+ mi := &file_v1_payload_payload_proto_msgTypes[41]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2112,7 +2384,7 @@ func (x *Update_Config) GetSkipStrictExistCheck() bool {
return false
}
-func (x *Update_Config) GetFilters() *Filter_Config {
+func (x *Update_Config) GetFilters() []*Filter_Config {
if x != nil {
return x.Filters
}
@@ -2148,7 +2420,7 @@ type Upsert_Request struct {
func (x *Upsert_Request) Reset() {
*x = Upsert_Request{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[37]
+ mi := &file_v1_payload_payload_proto_msgTypes[42]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2161,7 +2433,7 @@ func (x *Upsert_Request) String() string {
func (*Upsert_Request) ProtoMessage() {}
func (x *Upsert_Request) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[37]
+ mi := &file_v1_payload_payload_proto_msgTypes[42]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2204,7 +2476,7 @@ type Upsert_MultiRequest struct {
func (x *Upsert_MultiRequest) Reset() {
*x = Upsert_MultiRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[38]
+ mi := &file_v1_payload_payload_proto_msgTypes[43]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2217,7 +2489,7 @@ func (x *Upsert_MultiRequest) String() string {
func (*Upsert_MultiRequest) ProtoMessage() {}
func (x *Upsert_MultiRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[38]
+ mi := &file_v1_payload_payload_proto_msgTypes[43]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2257,7 +2529,7 @@ type Upsert_ObjectRequest struct {
func (x *Upsert_ObjectRequest) Reset() {
*x = Upsert_ObjectRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[39]
+ mi := &file_v1_payload_payload_proto_msgTypes[44]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2270,7 +2542,7 @@ func (x *Upsert_ObjectRequest) String() string {
func (*Upsert_ObjectRequest) ProtoMessage() {}
func (x *Upsert_ObjectRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[39]
+ mi := &file_v1_payload_payload_proto_msgTypes[44]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2320,7 +2592,7 @@ type Upsert_MultiObjectRequest struct {
func (x *Upsert_MultiObjectRequest) Reset() {
*x = Upsert_MultiObjectRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[40]
+ mi := &file_v1_payload_payload_proto_msgTypes[45]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2333,7 +2605,7 @@ func (x *Upsert_MultiObjectRequest) String() string {
func (*Upsert_MultiObjectRequest) ProtoMessage() {}
func (x *Upsert_MultiObjectRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[40]
+ mi := &file_v1_payload_payload_proto_msgTypes[45]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2365,7 +2637,7 @@ type Upsert_Config struct {
// A flag to skip exist check during upsert operation.
SkipStrictExistCheck bool `protobuf:"varint,1,opt,name=skip_strict_exist_check,json=skipStrictExistCheck,proto3" json:"skip_strict_exist_check,omitempty"`
// Filter configuration.
- Filters *Filter_Config `protobuf:"bytes,2,opt,name=filters,proto3" json:"filters,omitempty"`
+ Filters []*Filter_Config `protobuf:"bytes,2,rep,name=filters,proto3" json:"filters,omitempty"`
// Upsert timestamp.
Timestamp int64 `protobuf:"varint,3,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
// A flag to disable balanced update (split remove -> insert operation)
@@ -2376,7 +2648,7 @@ type Upsert_Config struct {
func (x *Upsert_Config) Reset() {
*x = Upsert_Config{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[41]
+ mi := &file_v1_payload_payload_proto_msgTypes[46]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2389,7 +2661,7 @@ func (x *Upsert_Config) String() string {
func (*Upsert_Config) ProtoMessage() {}
func (x *Upsert_Config) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[41]
+ mi := &file_v1_payload_payload_proto_msgTypes[46]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2412,7 +2684,7 @@ func (x *Upsert_Config) GetSkipStrictExistCheck() bool {
return false
}
-func (x *Upsert_Config) GetFilters() *Filter_Config {
+func (x *Upsert_Config) GetFilters() []*Filter_Config {
if x != nil {
return x.Filters
}
@@ -2448,7 +2720,7 @@ type Remove_Request struct {
func (x *Remove_Request) Reset() {
*x = Remove_Request{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[42]
+ mi := &file_v1_payload_payload_proto_msgTypes[47]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2461,7 +2733,7 @@ func (x *Remove_Request) String() string {
func (*Remove_Request) ProtoMessage() {}
func (x *Remove_Request) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[42]
+ mi := &file_v1_payload_payload_proto_msgTypes[47]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2504,7 +2776,7 @@ type Remove_MultiRequest struct {
func (x *Remove_MultiRequest) Reset() {
*x = Remove_MultiRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[43]
+ mi := &file_v1_payload_payload_proto_msgTypes[48]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2517,7 +2789,7 @@ func (x *Remove_MultiRequest) String() string {
func (*Remove_MultiRequest) ProtoMessage() {}
func (x *Remove_MultiRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[43]
+ mi := &file_v1_payload_payload_proto_msgTypes[48]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2554,7 +2826,7 @@ type Remove_TimestampRequest struct {
func (x *Remove_TimestampRequest) Reset() {
*x = Remove_TimestampRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[44]
+ mi := &file_v1_payload_payload_proto_msgTypes[49]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2567,7 +2839,7 @@ func (x *Remove_TimestampRequest) String() string {
func (*Remove_TimestampRequest) ProtoMessage() {}
func (x *Remove_TimestampRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[44]
+ mi := &file_v1_payload_payload_proto_msgTypes[49]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2605,7 +2877,7 @@ type Remove_Timestamp struct {
func (x *Remove_Timestamp) Reset() {
*x = Remove_Timestamp{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[45]
+ mi := &file_v1_payload_payload_proto_msgTypes[50]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2618,7 +2890,7 @@ func (x *Remove_Timestamp) String() string {
func (*Remove_Timestamp) ProtoMessage() {}
func (x *Remove_Timestamp) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[45]
+ mi := &file_v1_payload_payload_proto_msgTypes[50]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2663,7 +2935,7 @@ type Remove_Config struct {
func (x *Remove_Config) Reset() {
*x = Remove_Config{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[46]
+ mi := &file_v1_payload_payload_proto_msgTypes[51]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2676,7 +2948,7 @@ func (x *Remove_Config) String() string {
func (*Remove_Config) ProtoMessage() {}
func (x *Remove_Config) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[46]
+ mi := &file_v1_payload_payload_proto_msgTypes[51]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2715,7 +2987,7 @@ type Flush_Request struct {
func (x *Flush_Request) Reset() {
*x = Flush_Request{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[47]
+ mi := &file_v1_payload_payload_proto_msgTypes[52]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2728,7 +3000,7 @@ func (x *Flush_Request) String() string {
func (*Flush_Request) ProtoMessage() {}
func (x *Flush_Request) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[47]
+ mi := &file_v1_payload_payload_proto_msgTypes[52]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2753,13 +3025,13 @@ type Object_VectorRequest struct {
// The vector ID to be fetched.
Id *Object_ID `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
// Filter configurations.
- Filters *Filter_Config `protobuf:"bytes,2,opt,name=filters,proto3" json:"filters,omitempty"`
+ Filters []*Filter_Config `protobuf:"bytes,2,rep,name=filters,proto3" json:"filters,omitempty"`
}
func (x *Object_VectorRequest) Reset() {
*x = Object_VectorRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[48]
+ mi := &file_v1_payload_payload_proto_msgTypes[53]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2772,7 +3044,7 @@ func (x *Object_VectorRequest) String() string {
func (*Object_VectorRequest) ProtoMessage() {}
func (x *Object_VectorRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[48]
+ mi := &file_v1_payload_payload_proto_msgTypes[53]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2795,7 +3067,7 @@ func (x *Object_VectorRequest) GetId() *Object_ID {
return nil
}
-func (x *Object_VectorRequest) GetFilters() *Filter_Config {
+func (x *Object_VectorRequest) GetFilters() []*Filter_Config {
if x != nil {
return x.Filters
}
@@ -2817,7 +3089,7 @@ type Object_Distance struct {
func (x *Object_Distance) Reset() {
*x = Object_Distance{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[49]
+ mi := &file_v1_payload_payload_proto_msgTypes[54]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2830,7 +3102,7 @@ func (x *Object_Distance) String() string {
func (*Object_Distance) ProtoMessage() {}
func (x *Object_Distance) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[49]
+ mi := &file_v1_payload_payload_proto_msgTypes[54]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2876,7 +3148,7 @@ type Object_StreamDistance struct {
func (x *Object_StreamDistance) Reset() {
*x = Object_StreamDistance{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[50]
+ mi := &file_v1_payload_payload_proto_msgTypes[55]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2889,7 +3161,7 @@ func (x *Object_StreamDistance) String() string {
func (*Object_StreamDistance) ProtoMessage() {}
func (x *Object_StreamDistance) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[50]
+ mi := &file_v1_payload_payload_proto_msgTypes[55]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2956,7 +3228,7 @@ type Object_ID struct {
func (x *Object_ID) Reset() {
*x = Object_ID{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[51]
+ mi := &file_v1_payload_payload_proto_msgTypes[56]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2969,7 +3241,7 @@ func (x *Object_ID) String() string {
func (*Object_ID) ProtoMessage() {}
func (x *Object_ID) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[51]
+ mi := &file_v1_payload_payload_proto_msgTypes[56]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3004,7 +3276,7 @@ type Object_IDs struct {
func (x *Object_IDs) Reset() {
*x = Object_IDs{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[52]
+ mi := &file_v1_payload_payload_proto_msgTypes[57]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3017,7 +3289,7 @@ func (x *Object_IDs) String() string {
func (*Object_IDs) ProtoMessage() {}
func (x *Object_IDs) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[52]
+ mi := &file_v1_payload_payload_proto_msgTypes[57]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3057,7 +3329,7 @@ type Object_Vector struct {
func (x *Object_Vector) Reset() {
*x = Object_Vector{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[53]
+ mi := &file_v1_payload_payload_proto_msgTypes[58]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3070,7 +3342,7 @@ func (x *Object_Vector) String() string {
func (*Object_Vector) ProtoMessage() {}
func (x *Object_Vector) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[53]
+ mi := &file_v1_payload_payload_proto_msgTypes[58]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3120,7 +3392,7 @@ type Object_TimestampRequest struct {
func (x *Object_TimestampRequest) Reset() {
*x = Object_TimestampRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[54]
+ mi := &file_v1_payload_payload_proto_msgTypes[59]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3133,7 +3405,7 @@ func (x *Object_TimestampRequest) String() string {
func (*Object_TimestampRequest) ProtoMessage() {}
func (x *Object_TimestampRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[54]
+ mi := &file_v1_payload_payload_proto_msgTypes[59]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3171,7 +3443,7 @@ type Object_Timestamp struct {
func (x *Object_Timestamp) Reset() {
*x = Object_Timestamp{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[55]
+ mi := &file_v1_payload_payload_proto_msgTypes[60]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3184,7 +3456,7 @@ func (x *Object_Timestamp) String() string {
func (*Object_Timestamp) ProtoMessage() {}
func (x *Object_Timestamp) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[55]
+ mi := &file_v1_payload_payload_proto_msgTypes[60]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3226,7 +3498,7 @@ type Object_Vectors struct {
func (x *Object_Vectors) Reset() {
*x = Object_Vectors{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[56]
+ mi := &file_v1_payload_payload_proto_msgTypes[61]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3239,7 +3511,7 @@ func (x *Object_Vectors) String() string {
func (*Object_Vectors) ProtoMessage() {}
func (x *Object_Vectors) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[56]
+ mi := &file_v1_payload_payload_proto_msgTypes[61]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3278,7 +3550,7 @@ type Object_StreamVector struct {
func (x *Object_StreamVector) Reset() {
*x = Object_StreamVector{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[57]
+ mi := &file_v1_payload_payload_proto_msgTypes[62]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3291,7 +3563,7 @@ func (x *Object_StreamVector) String() string {
func (*Object_StreamVector) ProtoMessage() {}
func (x *Object_StreamVector) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[57]
+ mi := &file_v1_payload_payload_proto_msgTypes[62]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3361,7 +3633,7 @@ type Object_ReshapeVector struct {
func (x *Object_ReshapeVector) Reset() {
*x = Object_ReshapeVector{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[58]
+ mi := &file_v1_payload_payload_proto_msgTypes[63]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3374,7 +3646,7 @@ func (x *Object_ReshapeVector) String() string {
func (*Object_ReshapeVector) ProtoMessage() {}
func (x *Object_ReshapeVector) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[58]
+ mi := &file_v1_payload_payload_proto_msgTypes[63]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3419,7 +3691,7 @@ type Object_Blob struct {
func (x *Object_Blob) Reset() {
*x = Object_Blob{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[59]
+ mi := &file_v1_payload_payload_proto_msgTypes[64]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3432,7 +3704,7 @@ func (x *Object_Blob) String() string {
func (*Object_Blob) ProtoMessage() {}
func (x *Object_Blob) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[59]
+ mi := &file_v1_payload_payload_proto_msgTypes[64]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3478,7 +3750,7 @@ type Object_StreamBlob struct {
func (x *Object_StreamBlob) Reset() {
*x = Object_StreamBlob{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[60]
+ mi := &file_v1_payload_payload_proto_msgTypes[65]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3491,7 +3763,7 @@ func (x *Object_StreamBlob) String() string {
func (*Object_StreamBlob) ProtoMessage() {}
func (x *Object_StreamBlob) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[60]
+ mi := &file_v1_payload_payload_proto_msgTypes[65]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3563,7 +3835,7 @@ type Object_Location struct {
func (x *Object_Location) Reset() {
*x = Object_Location{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[61]
+ mi := &file_v1_payload_payload_proto_msgTypes[66]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3576,7 +3848,7 @@ func (x *Object_Location) String() string {
func (*Object_Location) ProtoMessage() {}
func (x *Object_Location) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[61]
+ mi := &file_v1_payload_payload_proto_msgTypes[66]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3629,7 +3901,7 @@ type Object_StreamLocation struct {
func (x *Object_StreamLocation) Reset() {
*x = Object_StreamLocation{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[62]
+ mi := &file_v1_payload_payload_proto_msgTypes[67]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3642,7 +3914,7 @@ func (x *Object_StreamLocation) String() string {
func (*Object_StreamLocation) ProtoMessage() {}
func (x *Object_StreamLocation) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[62]
+ mi := &file_v1_payload_payload_proto_msgTypes[67]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3709,7 +3981,7 @@ type Object_Locations struct {
func (x *Object_Locations) Reset() {
*x = Object_Locations{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[63]
+ mi := &file_v1_payload_payload_proto_msgTypes[68]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3722,7 +3994,7 @@ func (x *Object_Locations) String() string {
func (*Object_Locations) ProtoMessage() {}
func (x *Object_Locations) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[63]
+ mi := &file_v1_payload_payload_proto_msgTypes[68]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3755,7 +4027,7 @@ type Object_List struct {
func (x *Object_List) Reset() {
*x = Object_List{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[64]
+ mi := &file_v1_payload_payload_proto_msgTypes[69]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3768,7 +4040,7 @@ func (x *Object_List) String() string {
func (*Object_List) ProtoMessage() {}
func (x *Object_List) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[64]
+ mi := &file_v1_payload_payload_proto_msgTypes[69]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3793,7 +4065,7 @@ type Object_List_Request struct {
func (x *Object_List_Request) Reset() {
*x = Object_List_Request{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[65]
+ mi := &file_v1_payload_payload_proto_msgTypes[70]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3806,7 +4078,7 @@ func (x *Object_List_Request) String() string {
func (*Object_List_Request) ProtoMessage() {}
func (x *Object_List_Request) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[65]
+ mi := &file_v1_payload_payload_proto_msgTypes[70]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3837,7 +4109,7 @@ type Object_List_Response struct {
func (x *Object_List_Response) Reset() {
*x = Object_List_Response{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[66]
+ mi := &file_v1_payload_payload_proto_msgTypes[71]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3850,7 +4122,7 @@ func (x *Object_List_Response) String() string {
func (*Object_List_Response) ProtoMessage() {}
func (x *Object_List_Response) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[66]
+ mi := &file_v1_payload_payload_proto_msgTypes[71]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3918,7 +4190,7 @@ type Control_CreateIndexRequest struct {
func (x *Control_CreateIndexRequest) Reset() {
*x = Control_CreateIndexRequest{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[67]
+ mi := &file_v1_payload_payload_proto_msgTypes[72]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3931,7 +4203,7 @@ func (x *Control_CreateIndexRequest) String() string {
func (*Control_CreateIndexRequest) ProtoMessage() {}
func (x *Control_CreateIndexRequest) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[67]
+ mi := &file_v1_payload_payload_proto_msgTypes[72]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3971,7 +4243,7 @@ type Discoverer_Request struct {
func (x *Discoverer_Request) Reset() {
*x = Discoverer_Request{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[68]
+ mi := &file_v1_payload_payload_proto_msgTypes[73]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3984,7 +4256,7 @@ func (x *Discoverer_Request) String() string {
func (*Discoverer_Request) ProtoMessage() {}
func (x *Discoverer_Request) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[68]
+ mi := &file_v1_payload_payload_proto_msgTypes[73]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4031,7 +4303,7 @@ type Info_Index struct {
func (x *Info_Index) Reset() {
*x = Info_Index{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[69]
+ mi := &file_v1_payload_payload_proto_msgTypes[74]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4044,7 +4316,7 @@ func (x *Info_Index) String() string {
func (*Info_Index) ProtoMessage() {}
func (x *Info_Index) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[69]
+ mi := &file_v1_payload_payload_proto_msgTypes[74]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4085,7 +4357,7 @@ type Info_Pod struct {
func (x *Info_Pod) Reset() {
*x = Info_Pod{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[70]
+ mi := &file_v1_payload_payload_proto_msgTypes[75]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4098,7 +4370,7 @@ func (x *Info_Pod) String() string {
func (*Info_Pod) ProtoMessage() {}
func (x *Info_Pod) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[70]
+ mi := &file_v1_payload_payload_proto_msgTypes[75]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4186,7 +4458,7 @@ type Info_Node struct {
func (x *Info_Node) Reset() {
*x = Info_Node{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[71]
+ mi := &file_v1_payload_payload_proto_msgTypes[76]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4199,7 +4471,7 @@ func (x *Info_Node) String() string {
func (*Info_Node) ProtoMessage() {}
func (x *Info_Node) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[71]
+ mi := &file_v1_payload_payload_proto_msgTypes[76]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4280,7 +4552,7 @@ type Info_Service struct {
func (x *Info_Service) Reset() {
*x = Info_Service{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[72]
+ mi := &file_v1_payload_payload_proto_msgTypes[77]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4293,7 +4565,7 @@ func (x *Info_Service) String() string {
func (*Info_Service) ProtoMessage() {}
func (x *Info_Service) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[72]
+ mi := &file_v1_payload_payload_proto_msgTypes[77]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4366,7 +4638,7 @@ type Info_ServicePort struct {
func (x *Info_ServicePort) Reset() {
*x = Info_ServicePort{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[73]
+ mi := &file_v1_payload_payload_proto_msgTypes[78]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4379,7 +4651,7 @@ func (x *Info_ServicePort) String() string {
func (*Info_ServicePort) ProtoMessage() {}
func (x *Info_ServicePort) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[73]
+ mi := &file_v1_payload_payload_proto_msgTypes[78]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4421,7 +4693,7 @@ type Info_Labels struct {
func (x *Info_Labels) Reset() {
*x = Info_Labels{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[74]
+ mi := &file_v1_payload_payload_proto_msgTypes[79]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4434,7 +4706,7 @@ func (x *Info_Labels) String() string {
func (*Info_Labels) ProtoMessage() {}
func (x *Info_Labels) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[74]
+ mi := &file_v1_payload_payload_proto_msgTypes[79]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4469,7 +4741,7 @@ type Info_Annotations struct {
func (x *Info_Annotations) Reset() {
*x = Info_Annotations{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[75]
+ mi := &file_v1_payload_payload_proto_msgTypes[80]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4482,7 +4754,7 @@ func (x *Info_Annotations) String() string {
func (*Info_Annotations) ProtoMessage() {}
func (x *Info_Annotations) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[75]
+ mi := &file_v1_payload_payload_proto_msgTypes[80]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4522,7 +4794,7 @@ type Info_CPU struct {
func (x *Info_CPU) Reset() {
*x = Info_CPU{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[76]
+ mi := &file_v1_payload_payload_proto_msgTypes[81]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4535,7 +4807,7 @@ func (x *Info_CPU) String() string {
func (*Info_CPU) ProtoMessage() {}
func (x *Info_CPU) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[76]
+ mi := &file_v1_payload_payload_proto_msgTypes[81]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4589,7 +4861,7 @@ type Info_Memory struct {
func (x *Info_Memory) Reset() {
*x = Info_Memory{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[77]
+ mi := &file_v1_payload_payload_proto_msgTypes[82]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4602,7 +4874,7 @@ func (x *Info_Memory) String() string {
func (*Info_Memory) ProtoMessage() {}
func (x *Info_Memory) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[77]
+ mi := &file_v1_payload_payload_proto_msgTypes[82]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4652,7 +4924,7 @@ type Info_Pods struct {
func (x *Info_Pods) Reset() {
*x = Info_Pods{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[78]
+ mi := &file_v1_payload_payload_proto_msgTypes[83]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4665,7 +4937,7 @@ func (x *Info_Pods) String() string {
func (*Info_Pods) ProtoMessage() {}
func (x *Info_Pods) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[78]
+ mi := &file_v1_payload_payload_proto_msgTypes[83]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4701,7 +4973,7 @@ type Info_Nodes struct {
func (x *Info_Nodes) Reset() {
*x = Info_Nodes{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[79]
+ mi := &file_v1_payload_payload_proto_msgTypes[84]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4714,7 +4986,7 @@ func (x *Info_Nodes) String() string {
func (*Info_Nodes) ProtoMessage() {}
func (x *Info_Nodes) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[79]
+ mi := &file_v1_payload_payload_proto_msgTypes[84]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4750,7 +5022,7 @@ type Info_Services struct {
func (x *Info_Services) Reset() {
*x = Info_Services{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[80]
+ mi := &file_v1_payload_payload_proto_msgTypes[85]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4763,7 +5035,7 @@ func (x *Info_Services) String() string {
func (*Info_Services) ProtoMessage() {}
func (x *Info_Services) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[80]
+ mi := &file_v1_payload_payload_proto_msgTypes[85]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4798,7 +5070,7 @@ type Info_IPs struct {
func (x *Info_IPs) Reset() {
*x = Info_IPs{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[81]
+ mi := &file_v1_payload_payload_proto_msgTypes[86]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4811,7 +5083,7 @@ func (x *Info_IPs) String() string {
func (*Info_IPs) ProtoMessage() {}
func (x *Info_IPs) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[81]
+ mi := &file_v1_payload_payload_proto_msgTypes[86]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4853,7 +5125,7 @@ type Info_Index_Count struct {
func (x *Info_Index_Count) Reset() {
*x = Info_Index_Count{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[82]
+ mi := &file_v1_payload_payload_proto_msgTypes[87]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4866,7 +5138,7 @@ func (x *Info_Index_Count) String() string {
func (*Info_Index_Count) ProtoMessage() {}
func (x *Info_Index_Count) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[82]
+ mi := &file_v1_payload_payload_proto_msgTypes[87]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4927,7 +5199,7 @@ type Info_Index_Detail struct {
func (x *Info_Index_Detail) Reset() {
*x = Info_Index_Detail{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[83]
+ mi := &file_v1_payload_payload_proto_msgTypes[88]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -4940,7 +5212,7 @@ func (x *Info_Index_Detail) String() string {
func (*Info_Index_Detail) ProtoMessage() {}
func (x *Info_Index_Detail) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[83]
+ mi := &file_v1_payload_payload_proto_msgTypes[88]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -4987,7 +5259,7 @@ type Info_Index_UUID struct {
func (x *Info_Index_UUID) Reset() {
*x = Info_Index_UUID{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[84]
+ mi := &file_v1_payload_payload_proto_msgTypes[89]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5000,7 +5272,7 @@ func (x *Info_Index_UUID) String() string {
func (*Info_Index_UUID) ProtoMessage() {}
func (x *Info_Index_UUID) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[84]
+ mi := &file_v1_payload_payload_proto_msgTypes[89]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5060,7 +5332,7 @@ type Info_Index_Statistics struct {
func (x *Info_Index_Statistics) Reset() {
*x = Info_Index_Statistics{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[85]
+ mi := &file_v1_payload_payload_proto_msgTypes[90]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5073,7 +5345,7 @@ func (x *Info_Index_Statistics) String() string {
func (*Info_Index_Statistics) ProtoMessage() {}
func (x *Info_Index_Statistics) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[85]
+ mi := &file_v1_payload_payload_proto_msgTypes[90]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5333,7 +5605,7 @@ type Info_Index_StatisticsDetail struct {
func (x *Info_Index_StatisticsDetail) Reset() {
*x = Info_Index_StatisticsDetail{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[86]
+ mi := &file_v1_payload_payload_proto_msgTypes[91]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5346,7 +5618,7 @@ func (x *Info_Index_StatisticsDetail) String() string {
func (*Info_Index_StatisticsDetail) ProtoMessage() {}
func (x *Info_Index_StatisticsDetail) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[86]
+ mi := &file_v1_payload_payload_proto_msgTypes[91]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5414,7 +5686,7 @@ type Info_Index_Property struct {
func (x *Info_Index_Property) Reset() {
*x = Info_Index_Property{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[87]
+ mi := &file_v1_payload_payload_proto_msgTypes[92]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5427,7 +5699,7 @@ func (x *Info_Index_Property) String() string {
func (*Info_Index_Property) ProtoMessage() {}
func (x *Info_Index_Property) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[87]
+ mi := &file_v1_payload_payload_proto_msgTypes[92]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5693,7 +5965,7 @@ type Info_Index_PropertyDetail struct {
func (x *Info_Index_PropertyDetail) Reset() {
*x = Info_Index_PropertyDetail{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[88]
+ mi := &file_v1_payload_payload_proto_msgTypes[93]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5706,7 +5978,7 @@ func (x *Info_Index_PropertyDetail) String() string {
func (*Info_Index_PropertyDetail) ProtoMessage() {}
func (x *Info_Index_PropertyDetail) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[88]
+ mi := &file_v1_payload_payload_proto_msgTypes[93]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5741,7 +6013,7 @@ type Info_Index_UUID_Committed struct {
func (x *Info_Index_UUID_Committed) Reset() {
*x = Info_Index_UUID_Committed{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[90]
+ mi := &file_v1_payload_payload_proto_msgTypes[95]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5754,7 +6026,7 @@ func (x *Info_Index_UUID_Committed) String() string {
func (*Info_Index_UUID_Committed) ProtoMessage() {}
func (x *Info_Index_UUID_Committed) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[90]
+ mi := &file_v1_payload_payload_proto_msgTypes[95]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5789,7 +6061,7 @@ type Info_Index_UUID_Uncommitted struct {
func (x *Info_Index_UUID_Uncommitted) Reset() {
*x = Info_Index_UUID_Uncommitted{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[91]
+ mi := &file_v1_payload_payload_proto_msgTypes[96]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5802,7 +6074,7 @@ func (x *Info_Index_UUID_Uncommitted) String() string {
func (*Info_Index_UUID_Uncommitted) ProtoMessage() {}
func (x *Info_Index_UUID_Uncommitted) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[91]
+ mi := &file_v1_payload_payload_proto_msgTypes[96]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5840,7 +6112,7 @@ type Mirror_Target struct {
func (x *Mirror_Target) Reset() {
*x = Mirror_Target{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[96]
+ mi := &file_v1_payload_payload_proto_msgTypes[101]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5853,7 +6125,7 @@ func (x *Mirror_Target) String() string {
func (*Mirror_Target) ProtoMessage() {}
func (x *Mirror_Target) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[96]
+ mi := &file_v1_payload_payload_proto_msgTypes[101]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5896,7 +6168,7 @@ type Mirror_Targets struct {
func (x *Mirror_Targets) Reset() {
*x = Mirror_Targets{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[97]
+ mi := &file_v1_payload_payload_proto_msgTypes[102]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5909,7 +6181,7 @@ func (x *Mirror_Targets) String() string {
func (*Mirror_Targets) ProtoMessage() {}
func (x *Mirror_Targets) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[97]
+ mi := &file_v1_payload_payload_proto_msgTypes[102]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5943,7 +6215,7 @@ type Meta_Key struct {
func (x *Meta_Key) Reset() {
*x = Meta_Key{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[98]
+ mi := &file_v1_payload_payload_proto_msgTypes[103]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -5956,7 +6228,7 @@ func (x *Meta_Key) String() string {
func (*Meta_Key) ProtoMessage() {}
func (x *Meta_Key) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[98]
+ mi := &file_v1_payload_payload_proto_msgTypes[103]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -5990,7 +6262,7 @@ type Meta_Value struct {
func (x *Meta_Value) Reset() {
*x = Meta_Value{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[99]
+ mi := &file_v1_payload_payload_proto_msgTypes[104]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -6003,7 +6275,7 @@ func (x *Meta_Value) String() string {
func (*Meta_Value) ProtoMessage() {}
func (x *Meta_Value) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[99]
+ mi := &file_v1_payload_payload_proto_msgTypes[104]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -6038,7 +6310,7 @@ type Meta_KeyValue struct {
func (x *Meta_KeyValue) Reset() {
*x = Meta_KeyValue{}
if protoimpl.UnsafeEnabled {
- mi := &file_v1_payload_payload_proto_msgTypes[100]
+ mi := &file_v1_payload_payload_proto_msgTypes[105]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -6051,7 +6323,7 @@ func (x *Meta_KeyValue) String() string {
func (*Meta_KeyValue) ProtoMessage() {}
func (x *Meta_KeyValue) ProtoReflect() protoreflect.Message {
- mi := &file_v1_payload_payload_proto_msgTypes[100]
+ mi := &file_v1_payload_payload_proto_msgTypes[105]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -6139,11 +6411,11 @@ var file_v1_payload_payload_proto_rawDesc = []byte{
0x52, 0x07, 0x65, 0x70, 0x73, 0x69, 0x6c, 0x6f, 0x6e, 0x12, 0x18, 0x0a, 0x07, 0x74, 0x69, 0x6d,
0x65, 0x6f, 0x75, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28, 0x03, 0x52, 0x07, 0x74, 0x69, 0x6d, 0x65,
0x6f, 0x75, 0x74, 0x12, 0x42, 0x0a, 0x0f, 0x69, 0x6e, 0x67, 0x72, 0x65, 0x73, 0x73, 0x5f, 0x66,
- 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
+ 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x06, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72,
0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x0e, 0x69, 0x6e, 0x67, 0x72, 0x65, 0x73, 0x73,
0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x12, 0x40, 0x0a, 0x0e, 0x65, 0x67, 0x72, 0x65, 0x73,
- 0x73, 0x5f, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32,
+ 0x73, 0x5f, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c,
0x74, 0x65, 0x72, 0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x0d, 0x65, 0x67, 0x72, 0x65,
0x73, 0x73, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x12, 0x20, 0x0a, 0x07, 0x6d, 0x69, 0x6e,
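
In the raw descriptor hunk above, the only change is the label byte for `ingress_filters` (field 6) and `egress_filters` (field 7) of `Search.Config`: `0x20, 0x01` becomes `0x20, 0x03`, i.e. `FieldDescriptorProto.label` moves from `LABEL_OPTIONAL` (1) to `LABEL_REPEATED` (3). In the generated Go API both fields therefore become slices. A minimal sketch, assuming the usual generated field names (`IngressFilters`, `EgressFilters`) and the import path used in the previous snippet:

```go
// Hedged sketch: field and package names are assumptions mirroring the
// regenerated payload package; the repeated-filters shape is the point.
package search

import payload "github.com/vdaas/vald/apis/grpc/v1/payload"

// newConfig builds a Search.Config that attaches several filter
// configurations, which the previous singular fields could not express.
func newConfig() *payload.Search_Config {
	return &payload.Search_Config{
		Num:     10,
		Epsilon: 0.1,
		IngressFilters: []*payload.Filter_Config{
			{Target: &payload.Filter_Target{Host: "ingress-filter.default.svc", Port: 8081}},
		},
		EgressFilters: []*payload.Filter_Config{
			{Target: &payload.Filter_Target{Host: "egress-filter.default.svc", Port: 8081}},
			{Query: &payload.Filter_Query{Query: "distance < 0.5"}},
		},
	}
}
```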
@@ -6185,14 +6457,43 @@ var file_v1_payload_payload_proto_rawDesc = []byte{
0x0d, 0x0a, 0x09, 0x53, 0x6f, 0x72, 0x74, 0x53, 0x6c, 0x69, 0x63, 0x65, 0x10, 0x02, 0x12, 0x11,
0x0a, 0x0d, 0x53, 0x6f, 0x72, 0x74, 0x50, 0x6f, 0x6f, 0x6c, 0x53, 0x6c, 0x69, 0x63, 0x65, 0x10,
0x03, 0x12, 0x0f, 0x0a, 0x0b, 0x50, 0x61, 0x69, 0x72, 0x69, 0x6e, 0x67, 0x48, 0x65, 0x61, 0x70,
- 0x10, 0x04, 0x22, 0x79, 0x0a, 0x06, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x1a, 0x30, 0x0a, 0x06,
- 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01,
- 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x70, 0x6f,
- 0x72, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x04, 0x70, 0x6f, 0x72, 0x74, 0x1a, 0x3d,
- 0x0a, 0x06, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x33, 0x0a, 0x07, 0x74, 0x61, 0x72, 0x67,
- 0x65, 0x74, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c,
- 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x54, 0x61,
- 0x72, 0x67, 0x65, 0x74, 0x52, 0x07, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x73, 0x22, 0xe5, 0x04,
+ 0x10, 0x04, 0x22, 0xc8, 0x04, 0x0a, 0x06, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x1a, 0x30, 0x0a,
+ 0x06, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x70,
+ 0x6f, 0x72, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x04, 0x70, 0x6f, 0x72, 0x74, 0x1a,
+ 0x1d, 0x0a, 0x05, 0x51, 0x75, 0x65, 0x72, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x71, 0x75, 0x65, 0x72,
+ 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x71, 0x75, 0x65, 0x72, 0x79, 0x1a, 0x6b,
+ 0x0a, 0x06, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x31, 0x0a, 0x06, 0x74, 0x61, 0x72, 0x67,
+ 0x65, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f,
+ 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x54, 0x61, 0x72,
+ 0x67, 0x65, 0x74, 0x52, 0x06, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x12, 0x2e, 0x0a, 0x05, 0x71,
+ 0x75, 0x65, 0x72, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x70, 0x61, 0x79,
+ 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x51,
+ 0x75, 0x65, 0x72, 0x79, 0x52, 0x05, 0x71, 0x75, 0x65, 0x72, 0x79, 0x1a, 0x7a, 0x0a, 0x0f, 0x44,
+ 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x37,
+ 0x0a, 0x08, 0x64, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b,
+ 0x32, 0x1b, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62,
+ 0x6a, 0x65, 0x63, 0x74, 0x2e, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x52, 0x08, 0x64,
+ 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x2e, 0x0a, 0x05, 0x71, 0x75, 0x65, 0x72, 0x79,
+ 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
+ 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x51, 0x75, 0x65, 0x72, 0x79,
+ 0x52, 0x05, 0x71, 0x75, 0x65, 0x72, 0x79, 0x1a, 0x4b, 0x0a, 0x10, 0x44, 0x69, 0x73, 0x74, 0x61,
+ 0x6e, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x37, 0x0a, 0x08, 0x64,
+ 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1b, 0x2e,
+ 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63,
+ 0x74, 0x2e, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x52, 0x08, 0x64, 0x69, 0x73, 0x74,
+ 0x61, 0x6e, 0x63, 0x65, 0x1a, 0x72, 0x0a, 0x0d, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x52, 0x65,
+ 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x31, 0x0a, 0x06, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e,
+ 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x2e, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72,
+ 0x52, 0x06, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x12, 0x2e, 0x0a, 0x05, 0x71, 0x75, 0x65, 0x72,
+ 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61,
+ 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x51, 0x75, 0x65, 0x72,
+ 0x79, 0x52, 0x05, 0x71, 0x75, 0x65, 0x72, 0x79, 0x1a, 0x43, 0x0a, 0x0e, 0x56, 0x65, 0x63, 0x74,
+ 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x31, 0x0a, 0x06, 0x76, 0x65,
+ 0x63, 0x74, 0x6f, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70, 0x61, 0x79,
+ 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x2e, 0x56,
+ 0x65, 0x63, 0x74, 0x6f, 0x72, 0x52, 0x06, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x22, 0xe5, 0x04,
0x0a, 0x06, 0x49, 0x6e, 0x73, 0x65, 0x72, 0x74, 0x1a, 0x79, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x12, 0x3b, 0x0a, 0x06, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31,
@@ -6227,7 +6528,7 @@ var file_v1_payload_payload_proto_rawDesc = []byte{
0x5f, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x14, 0x73, 0x6b,
0x69, 0x70, 0x53, 0x74, 0x72, 0x69, 0x63, 0x74, 0x45, 0x78, 0x69, 0x73, 0x74, 0x43, 0x68, 0x65,
0x63, 0x6b, 0x12, 0x33, 0x0a, 0x07, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20,
- 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31,
+ 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31,
0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x07,
0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x12, 0x1c, 0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73,
0x74, 0x61, 0x6d, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x74, 0x69, 0x6d, 0x65,
@@ -6271,7 +6572,7 @@ var file_v1_payload_payload_proto_rawDesc = []byte{
0x69, 0x63, 0x74, 0x5f, 0x65, 0x78, 0x69, 0x73, 0x74, 0x5f, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x18,
0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x14, 0x73, 0x6b, 0x69, 0x70, 0x53, 0x74, 0x72, 0x69, 0x63,
0x74, 0x45, 0x78, 0x69, 0x73, 0x74, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x12, 0x33, 0x0a, 0x07, 0x66,
- 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
+ 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72,
0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x07, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73,
0x12, 0x1c, 0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x03, 0x20,
@@ -6313,7 +6614,7 @@ var file_v1_payload_payload_proto_rawDesc = []byte{
0x69, 0x63, 0x74, 0x5f, 0x65, 0x78, 0x69, 0x73, 0x74, 0x5f, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x18,
0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x14, 0x73, 0x6b, 0x69, 0x70, 0x53, 0x74, 0x72, 0x69, 0x63,
0x74, 0x45, 0x78, 0x69, 0x73, 0x74, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x12, 0x33, 0x0a, 0x07, 0x66,
- 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
+ 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72,
0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x07, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73,
0x12, 0x1c, 0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x03, 0x20,
@@ -6361,7 +6662,7 @@ var file_v1_payload_payload_proto_rawDesc = []byte{
0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
0x2e, 0x76, 0x31, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x2e, 0x49, 0x44, 0x42, 0x08, 0xba,
0x48, 0x05, 0x92, 0x01, 0x02, 0x08, 0x02, 0x52, 0x02, 0x69, 0x64, 0x12, 0x33, 0x0a, 0x07, 0x66,
- 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
+ 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x70,
0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x76, 0x31, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72,
0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x07, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73,
0x1a, 0x36, 0x0a, 0x08, 0x44, 0x69, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x0e, 0x0a, 0x02,
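
The four hunks above apply the same label flip (`0x20, 0x01` to `0x20, 0x03`) to the `filters` field of `Insert.Config`, `Update.Config`, `Upsert.Config`, and `Object.VectorRequest`, so every write-path config now accepts a list of filter configurations. A small write-path sketch under the same naming assumptions as the earlier snippets:

```go
// Hedged sketch: names follow the regenerated payload package and the docs
// above; the import path is an assumption.
package insert

import payload "github.com/vdaas/vald/apis/grpc/v1/payload"

// newRequest shows Insert.Config carrying multiple filter configurations;
// Update.Config and Upsert.Config gain the same repeated Filters field.
func newRequest(id string, vec []float32) *payload.Insert_Request {
	return &payload.Insert_Request{
		Vector: &payload.Object_Vector{Id: id, Vector: vec},
		Config: &payload.Insert_Config{
			SkipStrictExistCheck: false,
			Filters: []*payload.Filter_Config{ // previously a single *Filter_Config
				{Target: &payload.Filter_Target{Host: "ingress-filter.default.svc", Port: 8081}},
			},
		},
	}
}
```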
@@ -6850,7 +7151,7 @@ func file_v1_payload_payload_proto_rawDescGZIP() []byte {
var (
file_v1_payload_payload_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
- file_v1_payload_payload_proto_msgTypes = make([]protoimpl.MessageInfo, 101)
+ file_v1_payload_payload_proto_msgTypes = make([]protoimpl.MessageInfo, 106)
file_v1_payload_payload_proto_goTypes = []any{
(Search_AggregationAlgorithm)(0), // 0: payload.v1.Search.AggregationAlgorithm
(Remove_Timestamp_Operator)(0), // 1: payload.v1.Remove.Timestamp.Operator
@@ -6879,88 +7180,92 @@ var (
(*Search_Responses)(nil), // 24: payload.v1.Search.Responses
(*Search_StreamResponse)(nil), // 25: payload.v1.Search.StreamResponse
(*Filter_Target)(nil), // 26: payload.v1.Filter.Target
- (*Filter_Config)(nil), // 27: payload.v1.Filter.Config
- (*Insert_Request)(nil), // 28: payload.v1.Insert.Request
- (*Insert_MultiRequest)(nil), // 29: payload.v1.Insert.MultiRequest
- (*Insert_ObjectRequest)(nil), // 30: payload.v1.Insert.ObjectRequest
- (*Insert_MultiObjectRequest)(nil), // 31: payload.v1.Insert.MultiObjectRequest
- (*Insert_Config)(nil), // 32: payload.v1.Insert.Config
- (*Update_Request)(nil), // 33: payload.v1.Update.Request
- (*Update_MultiRequest)(nil), // 34: payload.v1.Update.MultiRequest
- (*Update_ObjectRequest)(nil), // 35: payload.v1.Update.ObjectRequest
- (*Update_MultiObjectRequest)(nil), // 36: payload.v1.Update.MultiObjectRequest
- (*Update_TimestampRequest)(nil), // 37: payload.v1.Update.TimestampRequest
- (*Update_Config)(nil), // 38: payload.v1.Update.Config
- (*Upsert_Request)(nil), // 39: payload.v1.Upsert.Request
- (*Upsert_MultiRequest)(nil), // 40: payload.v1.Upsert.MultiRequest
- (*Upsert_ObjectRequest)(nil), // 41: payload.v1.Upsert.ObjectRequest
- (*Upsert_MultiObjectRequest)(nil), // 42: payload.v1.Upsert.MultiObjectRequest
- (*Upsert_Config)(nil), // 43: payload.v1.Upsert.Config
- (*Remove_Request)(nil), // 44: payload.v1.Remove.Request
- (*Remove_MultiRequest)(nil), // 45: payload.v1.Remove.MultiRequest
- (*Remove_TimestampRequest)(nil), // 46: payload.v1.Remove.TimestampRequest
- (*Remove_Timestamp)(nil), // 47: payload.v1.Remove.Timestamp
- (*Remove_Config)(nil), // 48: payload.v1.Remove.Config
- (*Flush_Request)(nil), // 49: payload.v1.Flush.Request
- (*Object_VectorRequest)(nil), // 50: payload.v1.Object.VectorRequest
- (*Object_Distance)(nil), // 51: payload.v1.Object.Distance
- (*Object_StreamDistance)(nil), // 52: payload.v1.Object.StreamDistance
- (*Object_ID)(nil), // 53: payload.v1.Object.ID
- (*Object_IDs)(nil), // 54: payload.v1.Object.IDs
- (*Object_Vector)(nil), // 55: payload.v1.Object.Vector
- (*Object_TimestampRequest)(nil), // 56: payload.v1.Object.TimestampRequest
- (*Object_Timestamp)(nil), // 57: payload.v1.Object.Timestamp
- (*Object_Vectors)(nil), // 58: payload.v1.Object.Vectors
- (*Object_StreamVector)(nil), // 59: payload.v1.Object.StreamVector
- (*Object_ReshapeVector)(nil), // 60: payload.v1.Object.ReshapeVector
- (*Object_Blob)(nil), // 61: payload.v1.Object.Blob
- (*Object_StreamBlob)(nil), // 62: payload.v1.Object.StreamBlob
- (*Object_Location)(nil), // 63: payload.v1.Object.Location
- (*Object_StreamLocation)(nil), // 64: payload.v1.Object.StreamLocation
- (*Object_Locations)(nil), // 65: payload.v1.Object.Locations
- (*Object_List)(nil), // 66: payload.v1.Object.List
- (*Object_List_Request)(nil), // 67: payload.v1.Object.List.Request
- (*Object_List_Response)(nil), // 68: payload.v1.Object.List.Response
- (*Control_CreateIndexRequest)(nil), // 69: payload.v1.Control.CreateIndexRequest
- (*Discoverer_Request)(nil), // 70: payload.v1.Discoverer.Request
- (*Info_Index)(nil), // 71: payload.v1.Info.Index
- (*Info_Pod)(nil), // 72: payload.v1.Info.Pod
- (*Info_Node)(nil), // 73: payload.v1.Info.Node
- (*Info_Service)(nil), // 74: payload.v1.Info.Service
- (*Info_ServicePort)(nil), // 75: payload.v1.Info.ServicePort
- (*Info_Labels)(nil), // 76: payload.v1.Info.Labels
- (*Info_Annotations)(nil), // 77: payload.v1.Info.Annotations
- (*Info_CPU)(nil), // 78: payload.v1.Info.CPU
- (*Info_Memory)(nil), // 79: payload.v1.Info.Memory
- (*Info_Pods)(nil), // 80: payload.v1.Info.Pods
- (*Info_Nodes)(nil), // 81: payload.v1.Info.Nodes
- (*Info_Services)(nil), // 82: payload.v1.Info.Services
- (*Info_IPs)(nil), // 83: payload.v1.Info.IPs
- (*Info_Index_Count)(nil), // 84: payload.v1.Info.Index.Count
- (*Info_Index_Detail)(nil), // 85: payload.v1.Info.Index.Detail
- (*Info_Index_UUID)(nil), // 86: payload.v1.Info.Index.UUID
- (*Info_Index_Statistics)(nil), // 87: payload.v1.Info.Index.Statistics
- (*Info_Index_StatisticsDetail)(nil), // 88: payload.v1.Info.Index.StatisticsDetail
- (*Info_Index_Property)(nil), // 89: payload.v1.Info.Index.Property
- (*Info_Index_PropertyDetail)(nil), // 90: payload.v1.Info.Index.PropertyDetail
- nil, // 91: payload.v1.Info.Index.Detail.CountsEntry
- (*Info_Index_UUID_Committed)(nil), // 92: payload.v1.Info.Index.UUID.Committed
- (*Info_Index_UUID_Uncommitted)(nil), // 93: payload.v1.Info.Index.UUID.Uncommitted
- nil, // 94: payload.v1.Info.Index.StatisticsDetail.DetailsEntry
- nil, // 95: payload.v1.Info.Index.PropertyDetail.DetailsEntry
- nil, // 96: payload.v1.Info.Labels.LabelsEntry
- nil, // 97: payload.v1.Info.Annotations.AnnotationsEntry
- (*Mirror_Target)(nil), // 98: payload.v1.Mirror.Target
- (*Mirror_Targets)(nil), // 99: payload.v1.Mirror.Targets
- (*Meta_Key)(nil), // 100: payload.v1.Meta.Key
- (*Meta_Value)(nil), // 101: payload.v1.Meta.Value
- (*Meta_KeyValue)(nil), // 102: payload.v1.Meta.KeyValue
- (*wrapperspb.FloatValue)(nil), // 103: google.protobuf.FloatValue
- (*status.Status)(nil), // 104: google.rpc.Status
- (*anypb.Any)(nil), // 105: google.protobuf.Any
+ (*Filter_Query)(nil), // 27: payload.v1.Filter.Query
+ (*Filter_Config)(nil), // 28: payload.v1.Filter.Config
+ (*Filter_DistanceRequest)(nil), // 29: payload.v1.Filter.DistanceRequest
+ (*Filter_DistanceResponse)(nil), // 30: payload.v1.Filter.DistanceResponse
+ (*Filter_VectorRequest)(nil), // 31: payload.v1.Filter.VectorRequest
+ (*Filter_VectorResponse)(nil), // 32: payload.v1.Filter.VectorResponse
+ (*Insert_Request)(nil), // 33: payload.v1.Insert.Request
+ (*Insert_MultiRequest)(nil), // 34: payload.v1.Insert.MultiRequest
+ (*Insert_ObjectRequest)(nil), // 35: payload.v1.Insert.ObjectRequest
+ (*Insert_MultiObjectRequest)(nil), // 36: payload.v1.Insert.MultiObjectRequest
+ (*Insert_Config)(nil), // 37: payload.v1.Insert.Config
+ (*Update_Request)(nil), // 38: payload.v1.Update.Request
+ (*Update_MultiRequest)(nil), // 39: payload.v1.Update.MultiRequest
+ (*Update_ObjectRequest)(nil), // 40: payload.v1.Update.ObjectRequest
+ (*Update_MultiObjectRequest)(nil), // 41: payload.v1.Update.MultiObjectRequest
+ (*Update_TimestampRequest)(nil), // 42: payload.v1.Update.TimestampRequest
+ (*Update_Config)(nil), // 43: payload.v1.Update.Config
+ (*Upsert_Request)(nil), // 44: payload.v1.Upsert.Request
+ (*Upsert_MultiRequest)(nil), // 45: payload.v1.Upsert.MultiRequest
+ (*Upsert_ObjectRequest)(nil), // 46: payload.v1.Upsert.ObjectRequest
+ (*Upsert_MultiObjectRequest)(nil), // 47: payload.v1.Upsert.MultiObjectRequest
+ (*Upsert_Config)(nil), // 48: payload.v1.Upsert.Config
+ (*Remove_Request)(nil), // 49: payload.v1.Remove.Request
+ (*Remove_MultiRequest)(nil), // 50: payload.v1.Remove.MultiRequest
+ (*Remove_TimestampRequest)(nil), // 51: payload.v1.Remove.TimestampRequest
+ (*Remove_Timestamp)(nil), // 52: payload.v1.Remove.Timestamp
+ (*Remove_Config)(nil), // 53: payload.v1.Remove.Config
+ (*Flush_Request)(nil), // 54: payload.v1.Flush.Request
+ (*Object_VectorRequest)(nil), // 55: payload.v1.Object.VectorRequest
+ (*Object_Distance)(nil), // 56: payload.v1.Object.Distance
+ (*Object_StreamDistance)(nil), // 57: payload.v1.Object.StreamDistance
+ (*Object_ID)(nil), // 58: payload.v1.Object.ID
+ (*Object_IDs)(nil), // 59: payload.v1.Object.IDs
+ (*Object_Vector)(nil), // 60: payload.v1.Object.Vector
+ (*Object_TimestampRequest)(nil), // 61: payload.v1.Object.TimestampRequest
+ (*Object_Timestamp)(nil), // 62: payload.v1.Object.Timestamp
+ (*Object_Vectors)(nil), // 63: payload.v1.Object.Vectors
+ (*Object_StreamVector)(nil), // 64: payload.v1.Object.StreamVector
+ (*Object_ReshapeVector)(nil), // 65: payload.v1.Object.ReshapeVector
+ (*Object_Blob)(nil), // 66: payload.v1.Object.Blob
+ (*Object_StreamBlob)(nil), // 67: payload.v1.Object.StreamBlob
+ (*Object_Location)(nil), // 68: payload.v1.Object.Location
+ (*Object_StreamLocation)(nil), // 69: payload.v1.Object.StreamLocation
+ (*Object_Locations)(nil), // 70: payload.v1.Object.Locations
+ (*Object_List)(nil), // 71: payload.v1.Object.List
+ (*Object_List_Request)(nil), // 72: payload.v1.Object.List.Request
+ (*Object_List_Response)(nil), // 73: payload.v1.Object.List.Response
+ (*Control_CreateIndexRequest)(nil), // 74: payload.v1.Control.CreateIndexRequest
+ (*Discoverer_Request)(nil), // 75: payload.v1.Discoverer.Request
+ (*Info_Index)(nil), // 76: payload.v1.Info.Index
+ (*Info_Pod)(nil), // 77: payload.v1.Info.Pod
+ (*Info_Node)(nil), // 78: payload.v1.Info.Node
+ (*Info_Service)(nil), // 79: payload.v1.Info.Service
+ (*Info_ServicePort)(nil), // 80: payload.v1.Info.ServicePort
+ (*Info_Labels)(nil), // 81: payload.v1.Info.Labels
+ (*Info_Annotations)(nil), // 82: payload.v1.Info.Annotations
+ (*Info_CPU)(nil), // 83: payload.v1.Info.CPU
+ (*Info_Memory)(nil), // 84: payload.v1.Info.Memory
+ (*Info_Pods)(nil), // 85: payload.v1.Info.Pods
+ (*Info_Nodes)(nil), // 86: payload.v1.Info.Nodes
+ (*Info_Services)(nil), // 87: payload.v1.Info.Services
+ (*Info_IPs)(nil), // 88: payload.v1.Info.IPs
+ (*Info_Index_Count)(nil), // 89: payload.v1.Info.Index.Count
+ (*Info_Index_Detail)(nil), // 90: payload.v1.Info.Index.Detail
+ (*Info_Index_UUID)(nil), // 91: payload.v1.Info.Index.UUID
+ (*Info_Index_Statistics)(nil), // 92: payload.v1.Info.Index.Statistics
+ (*Info_Index_StatisticsDetail)(nil), // 93: payload.v1.Info.Index.StatisticsDetail
+ (*Info_Index_Property)(nil), // 94: payload.v1.Info.Index.Property
+ (*Info_Index_PropertyDetail)(nil), // 95: payload.v1.Info.Index.PropertyDetail
+ nil, // 96: payload.v1.Info.Index.Detail.CountsEntry
+ (*Info_Index_UUID_Committed)(nil), // 97: payload.v1.Info.Index.UUID.Committed
+ (*Info_Index_UUID_Uncommitted)(nil), // 98: payload.v1.Info.Index.UUID.Uncommitted
+ nil, // 99: payload.v1.Info.Index.StatisticsDetail.DetailsEntry
+ nil, // 100: payload.v1.Info.Index.PropertyDetail.DetailsEntry
+ nil, // 101: payload.v1.Info.Labels.LabelsEntry
+ nil, // 102: payload.v1.Info.Annotations.AnnotationsEntry
+ (*Mirror_Target)(nil), // 103: payload.v1.Mirror.Target
+ (*Mirror_Targets)(nil), // 104: payload.v1.Mirror.Targets
+ (*Meta_Key)(nil), // 105: payload.v1.Meta.Key
+ (*Meta_Value)(nil), // 106: payload.v1.Meta.Value
+ (*Meta_KeyValue)(nil), // 107: payload.v1.Meta.KeyValue
+ (*wrapperspb.FloatValue)(nil), // 108: google.protobuf.FloatValue
+ (*status.Status)(nil), // 109: google.rpc.Status
+ (*anypb.Any)(nil), // 110: google.protobuf.Any
}
)
-
var file_v1_payload_payload_proto_depIdxs = []int32{
22, // 0: payload.v1.Search.Request.config:type_name -> payload.v1.Search.Config
16, // 1: payload.v1.Search.MultiRequest.requests:type_name -> payload.v1.Search.Request
@@ -6969,88 +7274,95 @@ var file_v1_payload_payload_proto_depIdxs = []int32{
22, // 4: payload.v1.Search.ObjectRequest.config:type_name -> payload.v1.Search.Config
26, // 5: payload.v1.Search.ObjectRequest.vectorizer:type_name -> payload.v1.Filter.Target
20, // 6: payload.v1.Search.MultiObjectRequest.requests:type_name -> payload.v1.Search.ObjectRequest
- 27, // 7: payload.v1.Search.Config.ingress_filters:type_name -> payload.v1.Filter.Config
- 27, // 8: payload.v1.Search.Config.egress_filters:type_name -> payload.v1.Filter.Config
+ 28, // 7: payload.v1.Search.Config.ingress_filters:type_name -> payload.v1.Filter.Config
+ 28, // 8: payload.v1.Search.Config.egress_filters:type_name -> payload.v1.Filter.Config
0, // 9: payload.v1.Search.Config.aggregation_algorithm:type_name -> payload.v1.Search.AggregationAlgorithm
- 103, // 10: payload.v1.Search.Config.ratio:type_name -> google.protobuf.FloatValue
- 51, // 11: payload.v1.Search.Response.results:type_name -> payload.v1.Object.Distance
+ 108, // 10: payload.v1.Search.Config.ratio:type_name -> google.protobuf.FloatValue
+ 56, // 11: payload.v1.Search.Response.results:type_name -> payload.v1.Object.Distance
23, // 12: payload.v1.Search.Responses.responses:type_name -> payload.v1.Search.Response
23, // 13: payload.v1.Search.StreamResponse.response:type_name -> payload.v1.Search.Response
- 104, // 14: payload.v1.Search.StreamResponse.status:type_name -> google.rpc.Status
- 26, // 15: payload.v1.Filter.Config.targets:type_name -> payload.v1.Filter.Target
- 55, // 16: payload.v1.Insert.Request.vector:type_name -> payload.v1.Object.Vector
- 32, // 17: payload.v1.Insert.Request.config:type_name -> payload.v1.Insert.Config
- 28, // 18: payload.v1.Insert.MultiRequest.requests:type_name -> payload.v1.Insert.Request
- 61, // 19: payload.v1.Insert.ObjectRequest.object:type_name -> payload.v1.Object.Blob
- 32, // 20: payload.v1.Insert.ObjectRequest.config:type_name -> payload.v1.Insert.Config
- 26, // 21: payload.v1.Insert.ObjectRequest.vectorizer:type_name -> payload.v1.Filter.Target
- 30, // 22: payload.v1.Insert.MultiObjectRequest.requests:type_name -> payload.v1.Insert.ObjectRequest
- 27, // 23: payload.v1.Insert.Config.filters:type_name -> payload.v1.Filter.Config
- 55, // 24: payload.v1.Update.Request.vector:type_name -> payload.v1.Object.Vector
- 38, // 25: payload.v1.Update.Request.config:type_name -> payload.v1.Update.Config
- 33, // 26: payload.v1.Update.MultiRequest.requests:type_name -> payload.v1.Update.Request
- 61, // 27: payload.v1.Update.ObjectRequest.object:type_name -> payload.v1.Object.Blob
- 38, // 28: payload.v1.Update.ObjectRequest.config:type_name -> payload.v1.Update.Config
- 26, // 29: payload.v1.Update.ObjectRequest.vectorizer:type_name -> payload.v1.Filter.Target
- 35, // 30: payload.v1.Update.MultiObjectRequest.requests:type_name -> payload.v1.Update.ObjectRequest
- 27, // 31: payload.v1.Update.Config.filters:type_name -> payload.v1.Filter.Config
- 55, // 32: payload.v1.Upsert.Request.vector:type_name -> payload.v1.Object.Vector
- 43, // 33: payload.v1.Upsert.Request.config:type_name -> payload.v1.Upsert.Config
- 39, // 34: payload.v1.Upsert.MultiRequest.requests:type_name -> payload.v1.Upsert.Request
- 61, // 35: payload.v1.Upsert.ObjectRequest.object:type_name -> payload.v1.Object.Blob
- 43, // 36: payload.v1.Upsert.ObjectRequest.config:type_name -> payload.v1.Upsert.Config
- 26, // 37: payload.v1.Upsert.ObjectRequest.vectorizer:type_name -> payload.v1.Filter.Target
- 41, // 38: payload.v1.Upsert.MultiObjectRequest.requests:type_name -> payload.v1.Upsert.ObjectRequest
- 27, // 39: payload.v1.Upsert.Config.filters:type_name -> payload.v1.Filter.Config
- 53, // 40: payload.v1.Remove.Request.id:type_name -> payload.v1.Object.ID
- 48, // 41: payload.v1.Remove.Request.config:type_name -> payload.v1.Remove.Config
- 44, // 42: payload.v1.Remove.MultiRequest.requests:type_name -> payload.v1.Remove.Request
- 47, // 43: payload.v1.Remove.TimestampRequest.timestamps:type_name -> payload.v1.Remove.Timestamp
- 1, // 44: payload.v1.Remove.Timestamp.operator:type_name -> payload.v1.Remove.Timestamp.Operator
- 53, // 45: payload.v1.Object.VectorRequest.id:type_name -> payload.v1.Object.ID
- 27, // 46: payload.v1.Object.VectorRequest.filters:type_name -> payload.v1.Filter.Config
- 51, // 47: payload.v1.Object.StreamDistance.distance:type_name -> payload.v1.Object.Distance
- 104, // 48: payload.v1.Object.StreamDistance.status:type_name -> google.rpc.Status
- 53, // 49: payload.v1.Object.TimestampRequest.id:type_name -> payload.v1.Object.ID
- 55, // 50: payload.v1.Object.Vectors.vectors:type_name -> payload.v1.Object.Vector
- 55, // 51: payload.v1.Object.StreamVector.vector:type_name -> payload.v1.Object.Vector
- 104, // 52: payload.v1.Object.StreamVector.status:type_name -> google.rpc.Status
- 61, // 53: payload.v1.Object.StreamBlob.blob:type_name -> payload.v1.Object.Blob
- 104, // 54: payload.v1.Object.StreamBlob.status:type_name -> google.rpc.Status
- 63, // 55: payload.v1.Object.StreamLocation.location:type_name -> payload.v1.Object.Location
- 104, // 56: payload.v1.Object.StreamLocation.status:type_name -> google.rpc.Status
- 63, // 57: payload.v1.Object.Locations.locations:type_name -> payload.v1.Object.Location
- 55, // 58: payload.v1.Object.List.Response.vector:type_name -> payload.v1.Object.Vector
- 104, // 59: payload.v1.Object.List.Response.status:type_name -> google.rpc.Status
- 78, // 60: payload.v1.Info.Pod.cpu:type_name -> payload.v1.Info.CPU
- 79, // 61: payload.v1.Info.Pod.memory:type_name -> payload.v1.Info.Memory
- 73, // 62: payload.v1.Info.Pod.node:type_name -> payload.v1.Info.Node
- 78, // 63: payload.v1.Info.Node.cpu:type_name -> payload.v1.Info.CPU
- 79, // 64: payload.v1.Info.Node.memory:type_name -> payload.v1.Info.Memory
- 80, // 65: payload.v1.Info.Node.Pods:type_name -> payload.v1.Info.Pods
- 75, // 66: payload.v1.Info.Service.ports:type_name -> payload.v1.Info.ServicePort
- 76, // 67: payload.v1.Info.Service.labels:type_name -> payload.v1.Info.Labels
- 77, // 68: payload.v1.Info.Service.annotations:type_name -> payload.v1.Info.Annotations
- 96, // 69: payload.v1.Info.Labels.labels:type_name -> payload.v1.Info.Labels.LabelsEntry
- 97, // 70: payload.v1.Info.Annotations.annotations:type_name -> payload.v1.Info.Annotations.AnnotationsEntry
- 72, // 71: payload.v1.Info.Pods.pods:type_name -> payload.v1.Info.Pod
- 73, // 72: payload.v1.Info.Nodes.nodes:type_name -> payload.v1.Info.Node
- 74, // 73: payload.v1.Info.Services.services:type_name -> payload.v1.Info.Service
- 91, // 74: payload.v1.Info.Index.Detail.counts:type_name -> payload.v1.Info.Index.Detail.CountsEntry
- 94, // 75: payload.v1.Info.Index.StatisticsDetail.details:type_name -> payload.v1.Info.Index.StatisticsDetail.DetailsEntry
- 95, // 76: payload.v1.Info.Index.PropertyDetail.details:type_name -> payload.v1.Info.Index.PropertyDetail.DetailsEntry
- 84, // 77: payload.v1.Info.Index.Detail.CountsEntry.value:type_name -> payload.v1.Info.Index.Count
- 87, // 78: payload.v1.Info.Index.StatisticsDetail.DetailsEntry.value:type_name -> payload.v1.Info.Index.Statistics
- 89, // 79: payload.v1.Info.Index.PropertyDetail.DetailsEntry.value:type_name -> payload.v1.Info.Index.Property
- 98, // 80: payload.v1.Mirror.Targets.targets:type_name -> payload.v1.Mirror.Target
- 105, // 81: payload.v1.Meta.Value.value:type_name -> google.protobuf.Any
- 100, // 82: payload.v1.Meta.KeyValue.key:type_name -> payload.v1.Meta.Key
- 101, // 83: payload.v1.Meta.KeyValue.value:type_name -> payload.v1.Meta.Value
- 84, // [84:84] is the sub-list for method output_type
- 84, // [84:84] is the sub-list for method input_type
- 84, // [84:84] is the sub-list for extension type_name
- 84, // [84:84] is the sub-list for extension extendee
- 0, // [0:84] is the sub-list for field type_name
+ 109, // 14: payload.v1.Search.StreamResponse.status:type_name -> google.rpc.Status
+ 26, // 15: payload.v1.Filter.Config.target:type_name -> payload.v1.Filter.Target
+ 27, // 16: payload.v1.Filter.Config.query:type_name -> payload.v1.Filter.Query
+ 56, // 17: payload.v1.Filter.DistanceRequest.distance:type_name -> payload.v1.Object.Distance
+ 27, // 18: payload.v1.Filter.DistanceRequest.query:type_name -> payload.v1.Filter.Query
+ 56, // 19: payload.v1.Filter.DistanceResponse.distance:type_name -> payload.v1.Object.Distance
+ 60, // 20: payload.v1.Filter.VectorRequest.vector:type_name -> payload.v1.Object.Vector
+ 27, // 21: payload.v1.Filter.VectorRequest.query:type_name -> payload.v1.Filter.Query
+ 60, // 22: payload.v1.Filter.VectorResponse.vector:type_name -> payload.v1.Object.Vector
+ 60, // 23: payload.v1.Insert.Request.vector:type_name -> payload.v1.Object.Vector
+ 37, // 24: payload.v1.Insert.Request.config:type_name -> payload.v1.Insert.Config
+ 33, // 25: payload.v1.Insert.MultiRequest.requests:type_name -> payload.v1.Insert.Request
+ 66, // 26: payload.v1.Insert.ObjectRequest.object:type_name -> payload.v1.Object.Blob
+ 37, // 27: payload.v1.Insert.ObjectRequest.config:type_name -> payload.v1.Insert.Config
+ 26, // 28: payload.v1.Insert.ObjectRequest.vectorizer:type_name -> payload.v1.Filter.Target
+ 35, // 29: payload.v1.Insert.MultiObjectRequest.requests:type_name -> payload.v1.Insert.ObjectRequest
+ 28, // 30: payload.v1.Insert.Config.filters:type_name -> payload.v1.Filter.Config
+ 60, // 31: payload.v1.Update.Request.vector:type_name -> payload.v1.Object.Vector
+ 43, // 32: payload.v1.Update.Request.config:type_name -> payload.v1.Update.Config
+ 38, // 33: payload.v1.Update.MultiRequest.requests:type_name -> payload.v1.Update.Request
+ 66, // 34: payload.v1.Update.ObjectRequest.object:type_name -> payload.v1.Object.Blob
+ 43, // 35: payload.v1.Update.ObjectRequest.config:type_name -> payload.v1.Update.Config
+ 26, // 36: payload.v1.Update.ObjectRequest.vectorizer:type_name -> payload.v1.Filter.Target
+ 40, // 37: payload.v1.Update.MultiObjectRequest.requests:type_name -> payload.v1.Update.ObjectRequest
+ 28, // 38: payload.v1.Update.Config.filters:type_name -> payload.v1.Filter.Config
+ 60, // 39: payload.v1.Upsert.Request.vector:type_name -> payload.v1.Object.Vector
+ 48, // 40: payload.v1.Upsert.Request.config:type_name -> payload.v1.Upsert.Config
+ 44, // 41: payload.v1.Upsert.MultiRequest.requests:type_name -> payload.v1.Upsert.Request
+ 66, // 42: payload.v1.Upsert.ObjectRequest.object:type_name -> payload.v1.Object.Blob
+ 48, // 43: payload.v1.Upsert.ObjectRequest.config:type_name -> payload.v1.Upsert.Config
+ 26, // 44: payload.v1.Upsert.ObjectRequest.vectorizer:type_name -> payload.v1.Filter.Target
+ 46, // 45: payload.v1.Upsert.MultiObjectRequest.requests:type_name -> payload.v1.Upsert.ObjectRequest
+ 28, // 46: payload.v1.Upsert.Config.filters:type_name -> payload.v1.Filter.Config
+ 58, // 47: payload.v1.Remove.Request.id:type_name -> payload.v1.Object.ID
+ 53, // 48: payload.v1.Remove.Request.config:type_name -> payload.v1.Remove.Config
+ 49, // 49: payload.v1.Remove.MultiRequest.requests:type_name -> payload.v1.Remove.Request
+ 52, // 50: payload.v1.Remove.TimestampRequest.timestamps:type_name -> payload.v1.Remove.Timestamp
+ 1, // 51: payload.v1.Remove.Timestamp.operator:type_name -> payload.v1.Remove.Timestamp.Operator
+ 58, // 52: payload.v1.Object.VectorRequest.id:type_name -> payload.v1.Object.ID
+ 28, // 53: payload.v1.Object.VectorRequest.filters:type_name -> payload.v1.Filter.Config
+ 56, // 54: payload.v1.Object.StreamDistance.distance:type_name -> payload.v1.Object.Distance
+ 109, // 55: payload.v1.Object.StreamDistance.status:type_name -> google.rpc.Status
+ 58, // 56: payload.v1.Object.TimestampRequest.id:type_name -> payload.v1.Object.ID
+ 60, // 57: payload.v1.Object.Vectors.vectors:type_name -> payload.v1.Object.Vector
+ 60, // 58: payload.v1.Object.StreamVector.vector:type_name -> payload.v1.Object.Vector
+ 109, // 59: payload.v1.Object.StreamVector.status:type_name -> google.rpc.Status
+ 66, // 60: payload.v1.Object.StreamBlob.blob:type_name -> payload.v1.Object.Blob
+ 109, // 61: payload.v1.Object.StreamBlob.status:type_name -> google.rpc.Status
+ 68, // 62: payload.v1.Object.StreamLocation.location:type_name -> payload.v1.Object.Location
+ 109, // 63: payload.v1.Object.StreamLocation.status:type_name -> google.rpc.Status
+ 68, // 64: payload.v1.Object.Locations.locations:type_name -> payload.v1.Object.Location
+ 60, // 65: payload.v1.Object.List.Response.vector:type_name -> payload.v1.Object.Vector
+ 109, // 66: payload.v1.Object.List.Response.status:type_name -> google.rpc.Status
+ 83, // 67: payload.v1.Info.Pod.cpu:type_name -> payload.v1.Info.CPU
+ 84, // 68: payload.v1.Info.Pod.memory:type_name -> payload.v1.Info.Memory
+ 78, // 69: payload.v1.Info.Pod.node:type_name -> payload.v1.Info.Node
+ 83, // 70: payload.v1.Info.Node.cpu:type_name -> payload.v1.Info.CPU
+ 84, // 71: payload.v1.Info.Node.memory:type_name -> payload.v1.Info.Memory
+ 85, // 72: payload.v1.Info.Node.Pods:type_name -> payload.v1.Info.Pods
+ 80, // 73: payload.v1.Info.Service.ports:type_name -> payload.v1.Info.ServicePort
+ 81, // 74: payload.v1.Info.Service.labels:type_name -> payload.v1.Info.Labels
+ 82, // 75: payload.v1.Info.Service.annotations:type_name -> payload.v1.Info.Annotations
+ 101, // 76: payload.v1.Info.Labels.labels:type_name -> payload.v1.Info.Labels.LabelsEntry
+ 102, // 77: payload.v1.Info.Annotations.annotations:type_name -> payload.v1.Info.Annotations.AnnotationsEntry
+ 77, // 78: payload.v1.Info.Pods.pods:type_name -> payload.v1.Info.Pod
+ 78, // 79: payload.v1.Info.Nodes.nodes:type_name -> payload.v1.Info.Node
+ 79, // 80: payload.v1.Info.Services.services:type_name -> payload.v1.Info.Service
+ 96, // 81: payload.v1.Info.Index.Detail.counts:type_name -> payload.v1.Info.Index.Detail.CountsEntry
+ 99, // 82: payload.v1.Info.Index.StatisticsDetail.details:type_name -> payload.v1.Info.Index.StatisticsDetail.DetailsEntry
+ 100, // 83: payload.v1.Info.Index.PropertyDetail.details:type_name -> payload.v1.Info.Index.PropertyDetail.DetailsEntry
+ 89, // 84: payload.v1.Info.Index.Detail.CountsEntry.value:type_name -> payload.v1.Info.Index.Count
+ 92, // 85: payload.v1.Info.Index.StatisticsDetail.DetailsEntry.value:type_name -> payload.v1.Info.Index.Statistics
+ 94, // 86: payload.v1.Info.Index.PropertyDetail.DetailsEntry.value:type_name -> payload.v1.Info.Index.Property
+ 103, // 87: payload.v1.Mirror.Targets.targets:type_name -> payload.v1.Mirror.Target
+ 110, // 88: payload.v1.Meta.Value.value:type_name -> google.protobuf.Any
+ 105, // 89: payload.v1.Meta.KeyValue.key:type_name -> payload.v1.Meta.Key
+ 106, // 90: payload.v1.Meta.KeyValue.value:type_name -> payload.v1.Meta.Value
+ 91, // [91:91] is the sub-list for method output_type
+ 91, // [91:91] is the sub-list for method input_type
+ 91, // [91:91] is the sub-list for extension type_name
+ 91, // [91:91] is the sub-list for extension extendee
+ 0, // [0:91] is the sub-list for field type_name
}
func init() { file_v1_payload_payload_proto_init() }
@@ -7360,7 +7672,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[25].Exporter = func(v any, i int) any {
- switch v := v.(*Filter_Config); i {
+ switch v := v.(*Filter_Query); i {
case 0:
return &v.state
case 1:
@@ -7372,7 +7684,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[26].Exporter = func(v any, i int) any {
- switch v := v.(*Insert_Request); i {
+ switch v := v.(*Filter_Config); i {
case 0:
return &v.state
case 1:
@@ -7384,7 +7696,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[27].Exporter = func(v any, i int) any {
- switch v := v.(*Insert_MultiRequest); i {
+ switch v := v.(*Filter_DistanceRequest); i {
case 0:
return &v.state
case 1:
@@ -7396,7 +7708,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[28].Exporter = func(v any, i int) any {
- switch v := v.(*Insert_ObjectRequest); i {
+ switch v := v.(*Filter_DistanceResponse); i {
case 0:
return &v.state
case 1:
@@ -7408,7 +7720,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[29].Exporter = func(v any, i int) any {
- switch v := v.(*Insert_MultiObjectRequest); i {
+ switch v := v.(*Filter_VectorRequest); i {
case 0:
return &v.state
case 1:
@@ -7420,7 +7732,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[30].Exporter = func(v any, i int) any {
- switch v := v.(*Insert_Config); i {
+ switch v := v.(*Filter_VectorResponse); i {
case 0:
return &v.state
case 1:
@@ -7432,7 +7744,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[31].Exporter = func(v any, i int) any {
- switch v := v.(*Update_Request); i {
+ switch v := v.(*Insert_Request); i {
case 0:
return &v.state
case 1:
@@ -7444,7 +7756,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[32].Exporter = func(v any, i int) any {
- switch v := v.(*Update_MultiRequest); i {
+ switch v := v.(*Insert_MultiRequest); i {
case 0:
return &v.state
case 1:
@@ -7456,7 +7768,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[33].Exporter = func(v any, i int) any {
- switch v := v.(*Update_ObjectRequest); i {
+ switch v := v.(*Insert_ObjectRequest); i {
case 0:
return &v.state
case 1:
@@ -7468,7 +7780,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[34].Exporter = func(v any, i int) any {
- switch v := v.(*Update_MultiObjectRequest); i {
+ switch v := v.(*Insert_MultiObjectRequest); i {
case 0:
return &v.state
case 1:
@@ -7480,7 +7792,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[35].Exporter = func(v any, i int) any {
- switch v := v.(*Update_TimestampRequest); i {
+ switch v := v.(*Insert_Config); i {
case 0:
return &v.state
case 1:
@@ -7492,7 +7804,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[36].Exporter = func(v any, i int) any {
- switch v := v.(*Update_Config); i {
+ switch v := v.(*Update_Request); i {
case 0:
return &v.state
case 1:
@@ -7504,7 +7816,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[37].Exporter = func(v any, i int) any {
- switch v := v.(*Upsert_Request); i {
+ switch v := v.(*Update_MultiRequest); i {
case 0:
return &v.state
case 1:
@@ -7516,7 +7828,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[38].Exporter = func(v any, i int) any {
- switch v := v.(*Upsert_MultiRequest); i {
+ switch v := v.(*Update_ObjectRequest); i {
case 0:
return &v.state
case 1:
@@ -7528,7 +7840,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[39].Exporter = func(v any, i int) any {
- switch v := v.(*Upsert_ObjectRequest); i {
+ switch v := v.(*Update_MultiObjectRequest); i {
case 0:
return &v.state
case 1:
@@ -7540,7 +7852,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[40].Exporter = func(v any, i int) any {
- switch v := v.(*Upsert_MultiObjectRequest); i {
+ switch v := v.(*Update_TimestampRequest); i {
case 0:
return &v.state
case 1:
@@ -7552,7 +7864,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[41].Exporter = func(v any, i int) any {
- switch v := v.(*Upsert_Config); i {
+ switch v := v.(*Update_Config); i {
case 0:
return &v.state
case 1:
@@ -7564,7 +7876,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[42].Exporter = func(v any, i int) any {
- switch v := v.(*Remove_Request); i {
+ switch v := v.(*Upsert_Request); i {
case 0:
return &v.state
case 1:
@@ -7576,7 +7888,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[43].Exporter = func(v any, i int) any {
- switch v := v.(*Remove_MultiRequest); i {
+ switch v := v.(*Upsert_MultiRequest); i {
case 0:
return &v.state
case 1:
@@ -7588,7 +7900,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[44].Exporter = func(v any, i int) any {
- switch v := v.(*Remove_TimestampRequest); i {
+ switch v := v.(*Upsert_ObjectRequest); i {
case 0:
return &v.state
case 1:
@@ -7600,7 +7912,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[45].Exporter = func(v any, i int) any {
- switch v := v.(*Remove_Timestamp); i {
+ switch v := v.(*Upsert_MultiObjectRequest); i {
case 0:
return &v.state
case 1:
@@ -7612,7 +7924,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[46].Exporter = func(v any, i int) any {
- switch v := v.(*Remove_Config); i {
+ switch v := v.(*Upsert_Config); i {
case 0:
return &v.state
case 1:
@@ -7624,7 +7936,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[47].Exporter = func(v any, i int) any {
- switch v := v.(*Flush_Request); i {
+ switch v := v.(*Remove_Request); i {
case 0:
return &v.state
case 1:
@@ -7636,7 +7948,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[48].Exporter = func(v any, i int) any {
- switch v := v.(*Object_VectorRequest); i {
+ switch v := v.(*Remove_MultiRequest); i {
case 0:
return &v.state
case 1:
@@ -7648,7 +7960,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[49].Exporter = func(v any, i int) any {
- switch v := v.(*Object_Distance); i {
+ switch v := v.(*Remove_TimestampRequest); i {
case 0:
return &v.state
case 1:
@@ -7660,7 +7972,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[50].Exporter = func(v any, i int) any {
- switch v := v.(*Object_StreamDistance); i {
+ switch v := v.(*Remove_Timestamp); i {
case 0:
return &v.state
case 1:
@@ -7672,7 +7984,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[51].Exporter = func(v any, i int) any {
- switch v := v.(*Object_ID); i {
+ switch v := v.(*Remove_Config); i {
case 0:
return &v.state
case 1:
@@ -7684,7 +7996,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[52].Exporter = func(v any, i int) any {
- switch v := v.(*Object_IDs); i {
+ switch v := v.(*Flush_Request); i {
case 0:
return &v.state
case 1:
@@ -7696,7 +8008,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[53].Exporter = func(v any, i int) any {
- switch v := v.(*Object_Vector); i {
+ switch v := v.(*Object_VectorRequest); i {
case 0:
return &v.state
case 1:
@@ -7708,7 +8020,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[54].Exporter = func(v any, i int) any {
- switch v := v.(*Object_TimestampRequest); i {
+ switch v := v.(*Object_Distance); i {
case 0:
return &v.state
case 1:
@@ -7720,7 +8032,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[55].Exporter = func(v any, i int) any {
- switch v := v.(*Object_Timestamp); i {
+ switch v := v.(*Object_StreamDistance); i {
case 0:
return &v.state
case 1:
@@ -7732,7 +8044,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[56].Exporter = func(v any, i int) any {
- switch v := v.(*Object_Vectors); i {
+ switch v := v.(*Object_ID); i {
case 0:
return &v.state
case 1:
@@ -7744,7 +8056,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[57].Exporter = func(v any, i int) any {
- switch v := v.(*Object_StreamVector); i {
+ switch v := v.(*Object_IDs); i {
case 0:
return &v.state
case 1:
@@ -7756,7 +8068,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[58].Exporter = func(v any, i int) any {
- switch v := v.(*Object_ReshapeVector); i {
+ switch v := v.(*Object_Vector); i {
case 0:
return &v.state
case 1:
@@ -7768,7 +8080,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[59].Exporter = func(v any, i int) any {
- switch v := v.(*Object_Blob); i {
+ switch v := v.(*Object_TimestampRequest); i {
case 0:
return &v.state
case 1:
@@ -7780,7 +8092,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[60].Exporter = func(v any, i int) any {
- switch v := v.(*Object_StreamBlob); i {
+ switch v := v.(*Object_Timestamp); i {
case 0:
return &v.state
case 1:
@@ -7792,7 +8104,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[61].Exporter = func(v any, i int) any {
- switch v := v.(*Object_Location); i {
+ switch v := v.(*Object_Vectors); i {
case 0:
return &v.state
case 1:
@@ -7804,7 +8116,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[62].Exporter = func(v any, i int) any {
- switch v := v.(*Object_StreamLocation); i {
+ switch v := v.(*Object_StreamVector); i {
case 0:
return &v.state
case 1:
@@ -7816,7 +8128,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[63].Exporter = func(v any, i int) any {
- switch v := v.(*Object_Locations); i {
+ switch v := v.(*Object_ReshapeVector); i {
case 0:
return &v.state
case 1:
@@ -7828,7 +8140,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[64].Exporter = func(v any, i int) any {
- switch v := v.(*Object_List); i {
+ switch v := v.(*Object_Blob); i {
case 0:
return &v.state
case 1:
@@ -7840,7 +8152,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[65].Exporter = func(v any, i int) any {
- switch v := v.(*Object_List_Request); i {
+ switch v := v.(*Object_StreamBlob); i {
case 0:
return &v.state
case 1:
@@ -7852,7 +8164,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[66].Exporter = func(v any, i int) any {
- switch v := v.(*Object_List_Response); i {
+ switch v := v.(*Object_Location); i {
case 0:
return &v.state
case 1:
@@ -7864,7 +8176,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[67].Exporter = func(v any, i int) any {
- switch v := v.(*Control_CreateIndexRequest); i {
+ switch v := v.(*Object_StreamLocation); i {
case 0:
return &v.state
case 1:
@@ -7876,7 +8188,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[68].Exporter = func(v any, i int) any {
- switch v := v.(*Discoverer_Request); i {
+ switch v := v.(*Object_Locations); i {
case 0:
return &v.state
case 1:
@@ -7888,7 +8200,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[69].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index); i {
+ switch v := v.(*Object_List); i {
case 0:
return &v.state
case 1:
@@ -7900,7 +8212,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[70].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Pod); i {
+ switch v := v.(*Object_List_Request); i {
case 0:
return &v.state
case 1:
@@ -7912,7 +8224,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[71].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Node); i {
+ switch v := v.(*Object_List_Response); i {
case 0:
return &v.state
case 1:
@@ -7924,7 +8236,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[72].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Service); i {
+ switch v := v.(*Control_CreateIndexRequest); i {
case 0:
return &v.state
case 1:
@@ -7936,7 +8248,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[73].Exporter = func(v any, i int) any {
- switch v := v.(*Info_ServicePort); i {
+ switch v := v.(*Discoverer_Request); i {
case 0:
return &v.state
case 1:
@@ -7948,7 +8260,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[74].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Labels); i {
+ switch v := v.(*Info_Index); i {
case 0:
return &v.state
case 1:
@@ -7960,7 +8272,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[75].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Annotations); i {
+ switch v := v.(*Info_Pod); i {
case 0:
return &v.state
case 1:
@@ -7972,7 +8284,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[76].Exporter = func(v any, i int) any {
- switch v := v.(*Info_CPU); i {
+ switch v := v.(*Info_Node); i {
case 0:
return &v.state
case 1:
@@ -7984,7 +8296,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[77].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Memory); i {
+ switch v := v.(*Info_Service); i {
case 0:
return &v.state
case 1:
@@ -7996,7 +8308,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[78].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Pods); i {
+ switch v := v.(*Info_ServicePort); i {
case 0:
return &v.state
case 1:
@@ -8008,7 +8320,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[79].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Nodes); i {
+ switch v := v.(*Info_Labels); i {
case 0:
return &v.state
case 1:
@@ -8020,7 +8332,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[80].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Services); i {
+ switch v := v.(*Info_Annotations); i {
case 0:
return &v.state
case 1:
@@ -8032,7 +8344,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[81].Exporter = func(v any, i int) any {
- switch v := v.(*Info_IPs); i {
+ switch v := v.(*Info_CPU); i {
case 0:
return &v.state
case 1:
@@ -8044,7 +8356,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[82].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_Count); i {
+ switch v := v.(*Info_Memory); i {
case 0:
return &v.state
case 1:
@@ -8056,7 +8368,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[83].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_Detail); i {
+ switch v := v.(*Info_Pods); i {
case 0:
return &v.state
case 1:
@@ -8068,7 +8380,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[84].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_UUID); i {
+ switch v := v.(*Info_Nodes); i {
case 0:
return &v.state
case 1:
@@ -8080,7 +8392,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[85].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_Statistics); i {
+ switch v := v.(*Info_Services); i {
case 0:
return &v.state
case 1:
@@ -8092,7 +8404,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[86].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_StatisticsDetail); i {
+ switch v := v.(*Info_IPs); i {
case 0:
return &v.state
case 1:
@@ -8104,7 +8416,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[87].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_Property); i {
+ switch v := v.(*Info_Index_Count); i {
case 0:
return &v.state
case 1:
@@ -8116,7 +8428,19 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[88].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_PropertyDetail); i {
+ switch v := v.(*Info_Index_Detail); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_v1_payload_payload_proto_msgTypes[89].Exporter = func(v any, i int) any {
+ switch v := v.(*Info_Index_UUID); i {
case 0:
return &v.state
case 1:
@@ -8128,7 +8452,7 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[90].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_UUID_Committed); i {
+ switch v := v.(*Info_Index_Statistics); i {
case 0:
return &v.state
case 1:
@@ -8140,7 +8464,43 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[91].Exporter = func(v any, i int) any {
- switch v := v.(*Info_Index_UUID_Uncommitted); i {
+ switch v := v.(*Info_Index_StatisticsDetail); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_v1_payload_payload_proto_msgTypes[92].Exporter = func(v any, i int) any {
+ switch v := v.(*Info_Index_Property); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_v1_payload_payload_proto_msgTypes[93].Exporter = func(v any, i int) any {
+ switch v := v.(*Info_Index_PropertyDetail); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_v1_payload_payload_proto_msgTypes[95].Exporter = func(v any, i int) any {
+ switch v := v.(*Info_Index_UUID_Committed); i {
case 0:
return &v.state
case 1:
@@ -8152,6 +8512,18 @@ func file_v1_payload_payload_proto_init() {
}
}
file_v1_payload_payload_proto_msgTypes[96].Exporter = func(v any, i int) any {
+ switch v := v.(*Info_Index_UUID_Uncommitted); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_v1_payload_payload_proto_msgTypes[101].Exporter = func(v any, i int) any {
switch v := v.(*Mirror_Target); i {
case 0:
return &v.state
@@ -8163,7 +8535,7 @@ func file_v1_payload_payload_proto_init() {
return nil
}
}
- file_v1_payload_payload_proto_msgTypes[97].Exporter = func(v any, i int) any {
+ file_v1_payload_payload_proto_msgTypes[102].Exporter = func(v any, i int) any {
switch v := v.(*Mirror_Targets); i {
case 0:
return &v.state
@@ -8175,7 +8547,7 @@ func file_v1_payload_payload_proto_init() {
return nil
}
}
- file_v1_payload_payload_proto_msgTypes[98].Exporter = func(v any, i int) any {
+ file_v1_payload_payload_proto_msgTypes[103].Exporter = func(v any, i int) any {
switch v := v.(*Meta_Key); i {
case 0:
return &v.state
@@ -8187,7 +8559,7 @@ func file_v1_payload_payload_proto_init() {
return nil
}
}
- file_v1_payload_payload_proto_msgTypes[99].Exporter = func(v any, i int) any {
+ file_v1_payload_payload_proto_msgTypes[104].Exporter = func(v any, i int) any {
switch v := v.(*Meta_Value); i {
case 0:
return &v.state
@@ -8199,7 +8571,7 @@ func file_v1_payload_payload_proto_init() {
return nil
}
}
- file_v1_payload_payload_proto_msgTypes[100].Exporter = func(v any, i int) any {
+ file_v1_payload_payload_proto_msgTypes[105].Exporter = func(v any, i int) any {
switch v := v.(*Meta_KeyValue); i {
case 0:
return &v.state
@@ -8216,23 +8588,23 @@ func file_v1_payload_payload_proto_init() {
(*Search_StreamResponse_Response)(nil),
(*Search_StreamResponse_Status)(nil),
}
- file_v1_payload_payload_proto_msgTypes[50].OneofWrappers = []any{
+ file_v1_payload_payload_proto_msgTypes[55].OneofWrappers = []any{
(*Object_StreamDistance_Distance)(nil),
(*Object_StreamDistance_Status)(nil),
}
- file_v1_payload_payload_proto_msgTypes[57].OneofWrappers = []any{
+ file_v1_payload_payload_proto_msgTypes[62].OneofWrappers = []any{
(*Object_StreamVector_Vector)(nil),
(*Object_StreamVector_Status)(nil),
}
- file_v1_payload_payload_proto_msgTypes[60].OneofWrappers = []any{
+ file_v1_payload_payload_proto_msgTypes[65].OneofWrappers = []any{
(*Object_StreamBlob_Blob)(nil),
(*Object_StreamBlob_Status)(nil),
}
- file_v1_payload_payload_proto_msgTypes[62].OneofWrappers = []any{
+ file_v1_payload_payload_proto_msgTypes[67].OneofWrappers = []any{
(*Object_StreamLocation_Location)(nil),
(*Object_StreamLocation_Status)(nil),
}
- file_v1_payload_payload_proto_msgTypes[66].OneofWrappers = []any{
+ file_v1_payload_payload_proto_msgTypes[71].OneofWrappers = []any{
(*Object_List_Response_Vector)(nil),
(*Object_List_Response_Status)(nil),
}
@@ -8242,7 +8614,7 @@ func file_v1_payload_payload_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_v1_payload_payload_proto_rawDesc,
NumEnums: 2,
- NumMessages: 101,
+ NumMessages: 106,
NumExtensions: 0,
NumServices: 0,
},
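For reviewers, a minimal usage sketch (not taken from this diff) of the regenerated types may help: Filter.Config now pairs a single target with a query, and the Search.Config ingress/egress filter fields become repeated Filter.Config values, as the CloneVT/EqualVT changes above reflect. Field and getter names follow the usual protoc-gen-go conventions (e.g. Host/Port on Filter_Target), and the import path is assumed to be the repository's standard apis/grpc/v1/payload package; example values are placeholders.

```go
package main

import (
	"fmt"

	payload "github.com/vdaas/vald/apis/grpc/v1/payload" // assumed import path for the regenerated package
)

func main() {
	// Filter.Config now carries one Filter.Target plus one Filter.Query.
	cfg := &payload.Filter_Config{
		Target: &payload.Filter_Target{Host: "ingress-filter.vald.svc.cluster.local", Port: 8081},
		Query:  &payload.Filter_Query{Query: `{"category":"image"}`},
	}

	// Search.Config ingress/egress filters are repeated Filter.Config values after this change.
	sc := &payload.Search_Config{
		MinNum:         5,
		IngressFilters: []*payload.Filter_Config{cfg},
		EgressFilters:  []*payload.Filter_Config{cfg.CloneVT()}, // CloneVT deep-copies Target and Query
	}

	fmt.Println(sc.GetIngressFilters()[0].GetQuery().GetQuery())
}
```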
diff --git a/apis/grpc/v1/payload/payload.pb.json.go b/apis/grpc/v1/payload/payload.pb.json.go
index 9896f5936b..823898f966 100644
--- a/apis/grpc/v1/payload/payload.pb.json.go
+++ b/apis/grpc/v1/payload/payload.pb.json.go
@@ -151,6 +151,16 @@ func (msg *Filter_Target) UnmarshalJSON(b []byte) error {
return protojson.UnmarshalOptions{}.Unmarshal(b, msg)
}
+// MarshalJSON implements json.Marshaler
+func (msg *Filter_Query) MarshalJSON() ([]byte, error) {
+ return protojson.MarshalOptions{}.Marshal(msg)
+}
+
+// UnmarshalJSON implements json.Unmarshaler
+func (msg *Filter_Query) UnmarshalJSON(b []byte) error {
+ return protojson.UnmarshalOptions{}.Unmarshal(b, msg)
+}
+
// MarshalJSON implements json.Marshaler
func (msg *Filter_Config) MarshalJSON() ([]byte, error) {
return protojson.MarshalOptions{}.Marshal(msg)
@@ -161,6 +171,46 @@ func (msg *Filter_Config) UnmarshalJSON(b []byte) error {
return protojson.UnmarshalOptions{}.Unmarshal(b, msg)
}
+// MarshalJSON implements json.Marshaler
+func (msg *Filter_DistanceRequest) MarshalJSON() ([]byte, error) {
+ return protojson.MarshalOptions{}.Marshal(msg)
+}
+
+// UnmarshalJSON implements json.Unmarshaler
+func (msg *Filter_DistanceRequest) UnmarshalJSON(b []byte) error {
+ return protojson.UnmarshalOptions{}.Unmarshal(b, msg)
+}
+
+// MarshalJSON implements json.Marshaler
+func (msg *Filter_DistanceResponse) MarshalJSON() ([]byte, error) {
+ return protojson.MarshalOptions{}.Marshal(msg)
+}
+
+// UnmarshalJSON implements json.Unmarshaler
+func (msg *Filter_DistanceResponse) UnmarshalJSON(b []byte) error {
+ return protojson.UnmarshalOptions{}.Unmarshal(b, msg)
+}
+
+// MarshalJSON implements json.Marshaler
+func (msg *Filter_VectorRequest) MarshalJSON() ([]byte, error) {
+ return protojson.MarshalOptions{}.Marshal(msg)
+}
+
+// UnmarshalJSON implements json.Unmarshaler
+func (msg *Filter_VectorRequest) UnmarshalJSON(b []byte) error {
+ return protojson.UnmarshalOptions{}.Unmarshal(b, msg)
+}
+
+// MarshalJSON implements json.Marshaler
+func (msg *Filter_VectorResponse) MarshalJSON() ([]byte, error) {
+ return protojson.MarshalOptions{}.Marshal(msg)
+}
+
+// UnmarshalJSON implements json.Unmarshaler
+func (msg *Filter_VectorResponse) UnmarshalJSON(b []byte) error {
+ return protojson.UnmarshalOptions{}.Unmarshal(b, msg)
+}
+
// MarshalJSON implements json.Marshaler
func (msg *Insert) MarshalJSON() ([]byte, error) {
return protojson.MarshalOptions{}.Marshal(msg)
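The wrappers added above delegate to protojson, so the new Filter messages round-trip through JSON like any other payload message. A small sketch under the same assumptions as the previous snippet (import path and getter names assumed):

```go
package main

import (
	"fmt"

	payload "github.com/vdaas/vald/apis/grpc/v1/payload" // assumed import path
)

func main() {
	q := &payload.Filter_Query{Query: "distance < 0.5"}

	// MarshalJSON delegates to protojson.MarshalOptions{}.Marshal, as added above.
	b, err := q.MarshalJSON()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // typically {"query":"distance < 0.5"}

	// UnmarshalJSON delegates to protojson.UnmarshalOptions{}.Unmarshal.
	var decoded payload.Filter_Query
	if err := decoded.UnmarshalJSON(b); err != nil {
		panic(err)
	}
	fmt.Println(decoded.GetQuery())
}
```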
diff --git a/apis/grpc/v1/payload/payload_vtproto.pb.go b/apis/grpc/v1/payload/payload_vtproto.pb.go
index f5e313e726..7f5c31495c 100644
--- a/apis/grpc/v1/payload/payload_vtproto.pb.go
+++ b/apis/grpc/v1/payload/payload_vtproto.pb.go
@@ -181,12 +181,24 @@ func (m *Search_Config) CloneVT() *Search_Config {
r.Radius = m.Radius
r.Epsilon = m.Epsilon
r.Timeout = m.Timeout
- r.IngressFilters = m.IngressFilters.CloneVT()
- r.EgressFilters = m.EgressFilters.CloneVT()
r.MinNum = m.MinNum
r.AggregationAlgorithm = m.AggregationAlgorithm
r.Ratio = (*wrapperspb.FloatValue)((*wrapperspb1.FloatValue)(m.Ratio).CloneVT())
r.Nprobe = m.Nprobe
+ if rhs := m.IngressFilters; rhs != nil {
+ tmpContainer := make([]*Filter_Config, len(rhs))
+ for k, v := range rhs {
+ tmpContainer[k] = v.CloneVT()
+ }
+ r.IngressFilters = tmpContainer
+ }
+ if rhs := m.EgressFilters; rhs != nil {
+ tmpContainer := make([]*Filter_Config, len(rhs))
+ for k, v := range rhs {
+ tmpContainer[k] = v.CloneVT()
+ }
+ r.EgressFilters = tmpContainer
+ }
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
@@ -324,17 +336,53 @@ func (m *Filter_Target) CloneMessageVT() proto.Message {
return m.CloneVT()
}
+func (m *Filter_Query) CloneVT() *Filter_Query {
+ if m == nil {
+ return (*Filter_Query)(nil)
+ }
+ r := new(Filter_Query)
+ r.Query = m.Query
+ if len(m.unknownFields) > 0 {
+ r.unknownFields = make([]byte, len(m.unknownFields))
+ copy(r.unknownFields, m.unknownFields)
+ }
+ return r
+}
+
+func (m *Filter_Query) CloneMessageVT() proto.Message {
+ return m.CloneVT()
+}
+
func (m *Filter_Config) CloneVT() *Filter_Config {
if m == nil {
return (*Filter_Config)(nil)
}
r := new(Filter_Config)
- if rhs := m.Targets; rhs != nil {
- tmpContainer := make([]*Filter_Target, len(rhs))
+ r.Target = m.Target.CloneVT()
+ r.Query = m.Query.CloneVT()
+ if len(m.unknownFields) > 0 {
+ r.unknownFields = make([]byte, len(m.unknownFields))
+ copy(r.unknownFields, m.unknownFields)
+ }
+ return r
+}
+
+func (m *Filter_Config) CloneMessageVT() proto.Message {
+ return m.CloneVT()
+}
+
+func (m *Filter_DistanceRequest) CloneVT() *Filter_DistanceRequest {
+ if m == nil {
+ return (*Filter_DistanceRequest)(nil)
+ }
+ r := new(Filter_DistanceRequest)
+ r.Query = m.Query.CloneVT()
+ if rhs := m.Distance; rhs != nil {
+ tmpContainer := make([]*Object_Distance, len(rhs))
for k, v := range rhs {
tmpContainer[k] = v.CloneVT()
}
- r.Targets = tmpContainer
+ r.Distance = tmpContainer
}
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
@@ -343,7 +391,65 @@ func (m *Filter_Config) CloneVT() *Filter_Config {
return r
}
-func (m *Filter_Config) CloneMessageVT() proto.Message {
+func (m *Filter_DistanceRequest) CloneMessageVT() proto.Message {
+ return m.CloneVT()
+}
+
+func (m *Filter_DistanceResponse) CloneVT() *Filter_DistanceResponse {
+ if m == nil {
+ return (*Filter_DistanceResponse)(nil)
+ }
+ r := new(Filter_DistanceResponse)
+ if rhs := m.Distance; rhs != nil {
+ tmpContainer := make([]*Object_Distance, len(rhs))
+ for k, v := range rhs {
+ tmpContainer[k] = v.CloneVT()
+ }
+ r.Distance = tmpContainer
+ }
+ if len(m.unknownFields) > 0 {
+ r.unknownFields = make([]byte, len(m.unknownFields))
+ copy(r.unknownFields, m.unknownFields)
+ }
+ return r
+}
+
+func (m *Filter_DistanceResponse) CloneMessageVT() proto.Message {
+ return m.CloneVT()
+}
+
+func (m *Filter_VectorRequest) CloneVT() *Filter_VectorRequest {
+ if m == nil {
+ return (*Filter_VectorRequest)(nil)
+ }
+ r := new(Filter_VectorRequest)
+ r.Vector = m.Vector.CloneVT()
+ r.Query = m.Query.CloneVT()
+ if len(m.unknownFields) > 0 {
+ r.unknownFields = make([]byte, len(m.unknownFields))
+ copy(r.unknownFields, m.unknownFields)
+ }
+ return r
+}
+
+func (m *Filter_VectorRequest) CloneMessageVT() proto.Message {
+ return m.CloneVT()
+}
+
+func (m *Filter_VectorResponse) CloneVT() *Filter_VectorResponse {
+ if m == nil {
+ return (*Filter_VectorResponse)(nil)
+ }
+ r := new(Filter_VectorResponse)
+ r.Vector = m.Vector.CloneVT()
+ if len(m.unknownFields) > 0 {
+ r.unknownFields = make([]byte, len(m.unknownFields))
+ copy(r.unknownFields, m.unknownFields)
+ }
+ return r
+}
+
+func (m *Filter_VectorResponse) CloneMessageVT() proto.Message {
return m.CloneVT()
}
@@ -452,8 +558,14 @@ func (m *Insert_Config) CloneVT() *Insert_Config {
}
r := new(Insert_Config)
r.SkipStrictExistCheck = m.SkipStrictExistCheck
- r.Filters = m.Filters.CloneVT()
r.Timestamp = m.Timestamp
+ if rhs := m.Filters; rhs != nil {
+ tmpContainer := make([]*Filter_Config, len(rhs))
+ for k, v := range rhs {
+ tmpContainer[k] = v.CloneVT()
+ }
+ r.Filters = tmpContainer
+ }
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
@@ -589,9 +701,15 @@ func (m *Update_Config) CloneVT() *Update_Config {
}
r := new(Update_Config)
r.SkipStrictExistCheck = m.SkipStrictExistCheck
- r.Filters = m.Filters.CloneVT()
r.Timestamp = m.Timestamp
r.DisableBalancedUpdate = m.DisableBalancedUpdate
+ if rhs := m.Filters; rhs != nil {
+ tmpContainer := make([]*Filter_Config, len(rhs))
+ for k, v := range rhs {
+ tmpContainer[k] = v.CloneVT()
+ }
+ r.Filters = tmpContainer
+ }
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
@@ -708,9 +826,15 @@ func (m *Upsert_Config) CloneVT() *Upsert_Config {
}
r := new(Upsert_Config)
r.SkipStrictExistCheck = m.SkipStrictExistCheck
- r.Filters = m.Filters.CloneVT()
r.Timestamp = m.Timestamp
r.DisableBalancedUpdate = m.DisableBalancedUpdate
+ if rhs := m.Filters; rhs != nil {
+ tmpContainer := make([]*Filter_Config, len(rhs))
+ for k, v := range rhs {
+ tmpContainer[k] = v.CloneVT()
+ }
+ r.Filters = tmpContainer
+ }
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
@@ -892,7 +1016,13 @@ func (m *Object_VectorRequest) CloneVT() *Object_VectorRequest {
}
r := new(Object_VectorRequest)
r.Id = m.Id.CloneVT()
- r.Filters = m.Filters.CloneVT()
+ if rhs := m.Filters; rhs != nil {
+ tmpContainer := make([]*Filter_Config, len(rhs))
+ for k, v := range rhs {
+ tmpContainer[k] = v.CloneVT()
+ }
+ r.Filters = tmpContainer
+ }
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
@@ -2371,12 +2501,40 @@ func (this *Search_Config) EqualVT(that *Search_Config) bool {
if this.Timeout != that.Timeout {
return false
}
- if !this.IngressFilters.EqualVT(that.IngressFilters) {
+ if len(this.IngressFilters) != len(that.IngressFilters) {
return false
}
- if !this.EgressFilters.EqualVT(that.EgressFilters) {
+ for i, vx := range this.IngressFilters {
+ vy := that.IngressFilters[i]
+ if p, q := vx, vy; p != q {
+ if p == nil {
+ p = &Filter_Config{}
+ }
+ if q == nil {
+ q = &Filter_Config{}
+ }
+ if !p.EqualVT(q) {
+ return false
+ }
+ }
+ }
+ if len(this.EgressFilters) != len(that.EgressFilters) {
return false
}
+ for i, vx := range this.EgressFilters {
+ vy := that.EgressFilters[i]
+ if p, q := vx, vy; p != q {
+ if p == nil {
+ p = &Filter_Config{}
+ }
+ if q == nil {
+ q = &Filter_Config{}
+ }
+ if !p.EqualVT(q) {
+ return false
+ }
+ }
+ }
if this.MinNum != that.MinNum {
return false
}
@@ -2596,34 +2754,157 @@ func (this *Filter_Target) EqualMessageVT(thatMsg proto.Message) bool {
return this.EqualVT(that)
}
+func (this *Filter_Query) EqualVT(that *Filter_Query) bool {
+ if this == that {
+ return true
+ } else if this == nil || that == nil {
+ return false
+ }
+ if this.Query != that.Query {
+ return false
+ }
+ return string(this.unknownFields) == string(that.unknownFields)
+}
+
+func (this *Filter_Query) EqualMessageVT(thatMsg proto.Message) bool {
+ that, ok := thatMsg.(*Filter_Query)
+ if !ok {
+ return false
+ }
+ return this.EqualVT(that)
+}
+
func (this *Filter_Config) EqualVT(that *Filter_Config) bool {
if this == that {
return true
} else if this == nil || that == nil {
return false
}
- if len(this.Targets) != len(that.Targets) {
+ if !this.Target.EqualVT(that.Target) {
return false
}
- for i, vx := range this.Targets {
- vy := that.Targets[i]
+ if !this.Query.EqualVT(that.Query) {
+ return false
+ }
+ return string(this.unknownFields) == string(that.unknownFields)
+}
+
+func (this *Filter_Config) EqualMessageVT(thatMsg proto.Message) bool {
+ that, ok := thatMsg.(*Filter_Config)
+ if !ok {
+ return false
+ }
+ return this.EqualVT(that)
+}
+
+func (this *Filter_DistanceRequest) EqualVT(that *Filter_DistanceRequest) bool {
+ if this == that {
+ return true
+ } else if this == nil || that == nil {
+ return false
+ }
+ if len(this.Distance) != len(that.Distance) {
+ return false
+ }
+ for i, vx := range this.Distance {
+ vy := that.Distance[i]
if p, q := vx, vy; p != q {
if p == nil {
- p = &Filter_Target{}
+ p = &Object_Distance{}
}
if q == nil {
- q = &Filter_Target{}
+ q = &Object_Distance{}
}
if !p.EqualVT(q) {
return false
}
}
}
+ if !this.Query.EqualVT(that.Query) {
+ return false
+ }
return string(this.unknownFields) == string(that.unknownFields)
}
-func (this *Filter_Config) EqualMessageVT(thatMsg proto.Message) bool {
- that, ok := thatMsg.(*Filter_Config)
+func (this *Filter_DistanceRequest) EqualMessageVT(thatMsg proto.Message) bool {
+ that, ok := thatMsg.(*Filter_DistanceRequest)
+ if !ok {
+ return false
+ }
+ return this.EqualVT(that)
+}
+
+func (this *Filter_DistanceResponse) EqualVT(that *Filter_DistanceResponse) bool {
+ if this == that {
+ return true
+ } else if this == nil || that == nil {
+ return false
+ }
+ if len(this.Distance) != len(that.Distance) {
+ return false
+ }
+ for i, vx := range this.Distance {
+ vy := that.Distance[i]
+ if p, q := vx, vy; p != q {
+ if p == nil {
+ p = &Object_Distance{}
+ }
+ if q == nil {
+ q = &Object_Distance{}
+ }
+ if !p.EqualVT(q) {
+ return false
+ }
+ }
+ }
+ return string(this.unknownFields) == string(that.unknownFields)
+}
+
+func (this *Filter_DistanceResponse) EqualMessageVT(thatMsg proto.Message) bool {
+ that, ok := thatMsg.(*Filter_DistanceResponse)
+ if !ok {
+ return false
+ }
+ return this.EqualVT(that)
+}
+
+func (this *Filter_VectorRequest) EqualVT(that *Filter_VectorRequest) bool {
+ if this == that {
+ return true
+ } else if this == nil || that == nil {
+ return false
+ }
+ if !this.Vector.EqualVT(that.Vector) {
+ return false
+ }
+ if !this.Query.EqualVT(that.Query) {
+ return false
+ }
+ return string(this.unknownFields) == string(that.unknownFields)
+}
+
+func (this *Filter_VectorRequest) EqualMessageVT(thatMsg proto.Message) bool {
+ that, ok := thatMsg.(*Filter_VectorRequest)
+ if !ok {
+ return false
+ }
+ return this.EqualVT(that)
+}
+
+func (this *Filter_VectorResponse) EqualVT(that *Filter_VectorResponse) bool {
+ if this == that {
+ return true
+ } else if this == nil || that == nil {
+ return false
+ }
+ if !this.Vector.EqualVT(that.Vector) {
+ return false
+ }
+ return string(this.unknownFields) == string(that.unknownFields)
+}
+
+func (this *Filter_VectorResponse) EqualMessageVT(thatMsg proto.Message) bool {
+ that, ok := thatMsg.(*Filter_VectorResponse)
if !ok {
return false
}
@@ -2773,9 +3054,23 @@ func (this *Insert_Config) EqualVT(that *Insert_Config) bool {
if this.SkipStrictExistCheck != that.SkipStrictExistCheck {
return false
}
- if !this.Filters.EqualVT(that.Filters) {
+ if len(this.Filters) != len(that.Filters) {
return false
}
+ for i, vx := range this.Filters {
+ vy := that.Filters[i]
+ if p, q := vx, vy; p != q {
+ if p == nil {
+ p = &Filter_Config{}
+ }
+ if q == nil {
+ q = &Filter_Config{}
+ }
+ if !p.EqualVT(q) {
+ return false
+ }
+ }
+ }
if this.Timestamp != that.Timestamp {
return false
}
@@ -2959,9 +3254,23 @@ func (this *Update_Config) EqualVT(that *Update_Config) bool {
if this.SkipStrictExistCheck != that.SkipStrictExistCheck {
return false
}
- if !this.Filters.EqualVT(that.Filters) {
+ if len(this.Filters) != len(that.Filters) {
return false
}
+ for i, vx := range this.Filters {
+ vy := that.Filters[i]
+ if p, q := vx, vy; p != q {
+ if p == nil {
+ p = &Filter_Config{}
+ }
+ if q == nil {
+ q = &Filter_Config{}
+ }
+ if !p.EqualVT(q) {
+ return false
+ }
+ }
+ }
if this.Timestamp != that.Timestamp {
return false
}
@@ -3122,9 +3431,23 @@ func (this *Upsert_Config) EqualVT(that *Upsert_Config) bool {
if this.SkipStrictExistCheck != that.SkipStrictExistCheck {
return false
}
- if !this.Filters.EqualVT(that.Filters) {
+ if len(this.Filters) != len(that.Filters) {
return false
}
+ for i, vx := range this.Filters {
+ vy := that.Filters[i]
+ if p, q := vx, vy; p != q {
+ if p == nil {
+ p = &Filter_Config{}
+ }
+ if q == nil {
+ q = &Filter_Config{}
+ }
+ if !p.EqualVT(q) {
+ return false
+ }
+ }
+ }
if this.Timestamp != that.Timestamp {
return false
}
@@ -3356,10 +3679,24 @@ func (this *Object_VectorRequest) EqualVT(that *Object_VectorRequest) bool {
if !this.Id.EqualVT(that.Id) {
return false
}
- if !this.Filters.EqualVT(that.Filters) {
+ if len(this.Filters) != len(that.Filters) {
return false
}
- return string(this.unknownFields) == string(that.unknownFields)
+ for i, vx := range this.Filters {
+ vy := that.Filters[i]
+ if p, q := vx, vy; p != q {
+ if p == nil {
+ p = &Filter_Config{}
+ }
+ if q == nil {
+ q = &Filter_Config{}
+ }
+ if !p.EqualVT(q) {
+ return false
+ }
+ }
+ }
+ return string(this.unknownFields) == string(that.unknownFields)
}
func (this *Object_VectorRequest) EqualMessageVT(thatMsg proto.Message) bool {
@@ -5619,25 +5956,29 @@ func (m *Search_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i--
dAtA[i] = 0x40
}
- if m.EgressFilters != nil {
- size, err := m.EgressFilters.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if len(m.EgressFilters) > 0 {
+ for iNdEx := len(m.EgressFilters) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.EgressFilters[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x3a
}
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x3a
}
- if m.IngressFilters != nil {
- size, err := m.IngressFilters.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if len(m.IngressFilters) > 0 {
+ for iNdEx := len(m.IngressFilters) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.IngressFilters[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x32
}
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x32
}
if m.Timeout != 0 {
i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timeout))
@@ -5940,7 +6281,7 @@ func (m *Filter_Target) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Filter_Config) MarshalVT() (dAtA []byte, err error) {
+func (m *Filter_Query) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -5953,12 +6294,12 @@ func (m *Filter_Config) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Filter_Config) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Filter_Query) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Filter_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Filter_Query) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -5970,22 +6311,17 @@ func (m *Filter_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Targets) > 0 {
- for iNdEx := len(m.Targets) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Targets[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
- }
+ if len(m.Query) > 0 {
+ i -= len(m.Query)
+ copy(dAtA[i:], m.Query)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Query)))
+ i--
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Filter) MarshalVT() (dAtA []byte, err error) {
+func (m *Filter_Config) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -5998,12 +6334,12 @@ func (m *Filter) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Filter) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Filter_Config) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Filter) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Filter_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6015,10 +6351,30 @@ func (m *Filter) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
+ if m.Query != nil {
+ size, err := m.Query.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Target != nil {
+ size, err := m.Target.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
+ }
return len(dAtA) - i, nil
}
-func (m *Insert_Request) MarshalVT() (dAtA []byte, err error) {
+func (m *Filter_DistanceRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6031,12 +6387,12 @@ func (m *Insert_Request) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Insert_Request) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Filter_DistanceRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Insert_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Filter_DistanceRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6048,8 +6404,8 @@ func (m *Insert_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Config != nil {
- size, err := m.Config.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Query != nil {
+ size, err := m.Query.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -6058,20 +6414,22 @@ func (m *Insert_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i--
dAtA[i] = 0x12
}
- if m.Vector != nil {
- size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if len(m.Distance) > 0 {
+ for iNdEx := len(m.Distance) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Distance[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
}
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Insert_MultiRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Filter_DistanceResponse) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6084,12 +6442,12 @@ func (m *Insert_MultiRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Insert_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Filter_DistanceResponse) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Insert_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Filter_DistanceResponse) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6101,9 +6459,9 @@ func (m *Insert_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Requests) > 0 {
- for iNdEx := len(m.Requests) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Requests[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if len(m.Distance) > 0 {
+ for iNdEx := len(m.Distance) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Distance[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -6116,7 +6474,7 @@ func (m *Insert_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Insert_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Filter_VectorRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6129,12 +6487,12 @@ func (m *Insert_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Insert_ObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Filter_VectorRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Insert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Filter_VectorRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6146,18 +6504,8 @@ func (m *Insert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error)
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Vectorizer != nil {
- size, err := m.Vectorizer.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x1a
- }
- if m.Config != nil {
- size, err := m.Config.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Query != nil {
+ size, err := m.Query.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -6166,8 +6514,8 @@ func (m *Insert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error)
i--
dAtA[i] = 0x12
}
- if m.Object != nil {
- size, err := m.Object.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Vector != nil {
+ size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -6179,52 +6527,7 @@ func (m *Insert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error)
return len(dAtA) - i, nil
}
-func (m *Insert_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
- }
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Insert_MultiObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
- size := m.SizeVT()
- return m.MarshalToSizedBufferVT(dAtA[:size])
-}
-
-func (m *Insert_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
- }
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
- }
- if len(m.Requests) > 0 {
- for iNdEx := len(m.Requests) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Requests[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
- }
- }
- return len(dAtA) - i, nil
-}
-
-func (m *Insert_Config) MarshalVT() (dAtA []byte, err error) {
+func (m *Filter_VectorResponse) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6237,12 +6540,12 @@ func (m *Insert_Config) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Insert_Config) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Filter_VectorResponse) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Insert_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Filter_VectorResponse) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6254,35 +6557,20 @@ func (m *Insert_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Timestamp != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
- i--
- dAtA[i] = 0x18
- }
- if m.Filters != nil {
- size, err := m.Filters.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Vector != nil {
+ size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
i--
- dAtA[i] = 0x12
- }
- if m.SkipStrictExistCheck {
- i--
- if m.SkipStrictExistCheck {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
- }
- i--
- dAtA[i] = 0x8
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
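
The hunks above add the vtproto marshalers for the new Filter.DistanceRequest/DistanceResponse and Filter.VectorRequest/VectorResponse payloads introduced by this change. The snippet below is a usage sketch only, not part of the patch: the import path is assumed to be the repository's published Go module, `UnmarshalVT` is assumed to be the matching decoder emitted later in this same generated file, and the ID, vector values, and query string are illustrative.

```go
package main

import (
	"fmt"

	payload "github.com/vdaas/vald/apis/grpc/v1/payload" // assumed import path
)

func main() {
	// Filter.VectorRequest pairs the vector to be filtered with the raw
	// query string the filter component should evaluate.
	req := &payload.Filter_VectorRequest{
		Vector: &payload.Object_Vector{
			Id:     "doc-1",                  // illustrative ID
			Vector: []float32{0.1, 0.2, 0.3}, // illustrative embedding
		},
		Query: &payload.Filter_Query{Query: `lang == "en"`},
	}

	// MarshalVT is the generated fast-path encoder shown in the hunks above.
	data, err := req.MarshalVT()
	if err != nil {
		panic(err)
	}

	// UnmarshalVT is the vtproto counterpart (assumed; generated elsewhere
	// in this file) that decodes the same wire format.
	var decoded payload.Filter_VectorRequest
	if err := decoded.UnmarshalVT(data); err != nil {
		panic(err)
	}
	fmt.Println(decoded.GetVector().GetId(), decoded.GetQuery().GetQuery())
}
```

The Filter.DistanceResponse marshaler is symmetric: it iterates `m.Distance` (a repeated Object.Distance, i.e. ID/float32-distance pairs) in reverse, which is the usual vtproto pattern for repeated message fields.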
-func (m *Insert) MarshalVT() (dAtA []byte, err error) {
+func (m *Filter) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6295,12 +6583,12 @@ func (m *Insert) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Insert) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Filter) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Insert) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Filter) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6315,7 +6603,7 @@ func (m *Insert) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Update_Request) MarshalVT() (dAtA []byte, err error) {
+func (m *Insert_Request) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6328,12 +6616,12 @@ func (m *Update_Request) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Update_Request) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Insert_Request) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Update_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Insert_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6368,7 +6656,7 @@ func (m *Update_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Update_MultiRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Insert_MultiRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6381,12 +6669,12 @@ func (m *Update_MultiRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Update_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Insert_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Update_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Insert_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6413,7 +6701,7 @@ func (m *Update_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Update_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Insert_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6426,12 +6714,12 @@ func (m *Update_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Update_ObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Insert_ObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Update_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Insert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6476,7 +6764,7 @@ func (m *Update_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error)
return len(dAtA) - i, nil
}
-func (m *Update_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Insert_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6489,12 +6777,12 @@ func (m *Update_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Update_MultiObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Insert_MultiObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Update_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Insert_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6521,7 +6809,7 @@ func (m *Update_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, er
return len(dAtA) - i, nil
}
-func (m *Update_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Insert_Config) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6534,12 +6822,12 @@ func (m *Update_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Update_TimestampRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Insert_Config) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Update_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Insert_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6551,32 +6839,37 @@ func (m *Update_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, erro
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Force {
+ if m.Timestamp != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
i--
- if m.Force {
+ dAtA[i] = 0x18
+ }
+ if len(m.Filters) > 0 {
+ for iNdEx := len(m.Filters) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Filters[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x12
+ }
+ }
+ if m.SkipStrictExistCheck {
+ i--
+ if m.SkipStrictExistCheck {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
- dAtA[i] = 0x18
- }
- if m.Timestamp != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
- i--
- dAtA[i] = 0x10
- }
- if len(m.Id) > 0 {
- i -= len(m.Id)
- copy(dAtA[i:], m.Id)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
- i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
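
The Insert.Config hunk just above reflects the filters field changing from a single Filter.Config to a repeated one, so the generated encoder now loops over `m.Filters` in reverse. A rough sketch of how a caller populates the new shape follows; it assumes the same import path as above, the usual protoc-gen-go mapping of the target/query fields (each Filter.Config pairing one filter target with its query), and the Insert.Request layout of vector plus config, with hostnames, ports, timestamps, and query strings purely illustrative.

```go
package main

import (
	payload "github.com/vdaas/vald/apis/grpc/v1/payload" // assumed import path
)

func main() {
	// Insert.Config.filters is now repeated: one entry per filter component,
	// each naming a target (host/port) and the query it should evaluate.
	cfg := &payload.Insert_Config{
		SkipStrictExistCheck: true,
		Filters: []*payload.Filter_Config{
			{
				Target: &payload.Filter_Target{Host: "ingress-filter.default.svc.cluster.local", Port: 8081},
				Query:  &payload.Filter_Query{Query: `lang == "en"`},
			},
			{
				Target: &payload.Filter_Target{Host: "dedup-filter.default.svc.cluster.local", Port: 8081},
				Query:  &payload.Filter_Query{Query: "dedup"},
			},
		},
		Timestamp: 1700000000, // optional insert timestamp (illustrative)
	}

	req := &payload.Insert_Request{
		Vector: &payload.Object_Vector{Id: "doc-1", Vector: []float32{0.1, 0.2, 0.3}},
		Config: cfg,
	}
	if _, err := req.MarshalVT(); err != nil {
		panic(err)
	}
}
```

Update.Config, Upsert.Config, and Object.VectorRequest receive the same singular-to-repeated Filters change, which is why their hunks further down show the identical reverse loop over `m.Filters`.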
-func (m *Update_Config) MarshalVT() (dAtA []byte, err error) {
+func (m *Insert) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6589,80 +6882,12 @@ func (m *Update_Config) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Update_Config) MarshalToVT(dAtA []byte) (int, error) {
- size := m.SizeVT()
- return m.MarshalToSizedBufferVT(dAtA[:size])
-}
-
-func (m *Update_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
- }
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
- }
- if m.DisableBalancedUpdate {
- i--
- if m.DisableBalancedUpdate {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
- }
- i--
- dAtA[i] = 0x20
- }
- if m.Timestamp != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
- i--
- dAtA[i] = 0x18
- }
- if m.Filters != nil {
- size, err := m.Filters.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x12
- }
- if m.SkipStrictExistCheck {
- i--
- if m.SkipStrictExistCheck {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
- }
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *Update) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
- }
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Update) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Insert) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Update) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Insert) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6677,7 +6902,7 @@ func (m *Update) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Upsert_Request) MarshalVT() (dAtA []byte, err error) {
+func (m *Update_Request) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6690,12 +6915,12 @@ func (m *Upsert_Request) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Upsert_Request) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Update_Request) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Upsert_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Update_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6730,7 +6955,7 @@ func (m *Upsert_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Upsert_MultiRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Update_MultiRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6743,12 +6968,12 @@ func (m *Upsert_MultiRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Upsert_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Update_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Upsert_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Update_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6775,7 +7000,7 @@ func (m *Upsert_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Upsert_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Update_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6788,12 +7013,12 @@ func (m *Upsert_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Upsert_ObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Update_ObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Upsert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Update_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6838,7 +7063,7 @@ func (m *Upsert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error)
return len(dAtA) - i, nil
}
-func (m *Upsert_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Update_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6851,12 +7076,12 @@ func (m *Upsert_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Upsert_MultiObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Update_MultiObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Upsert_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Update_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6883,7 +7108,7 @@ func (m *Upsert_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, er
return len(dAtA) - i, nil
}
-func (m *Upsert_Config) MarshalVT() (dAtA []byte, err error) {
+func (m *Update_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6896,12 +7121,67 @@ func (m *Upsert_Config) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Upsert_Config) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Update_TimestampRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Upsert_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Update_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if m.Force {
+ i--
+ if m.Force {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.Timestamp != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
+ i--
+ dAtA[i] = 0x10
+ }
+ if len(m.Id) > 0 {
+ i -= len(m.Id)
+ copy(dAtA[i:], m.Id)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Update_Config) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Update_Config) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Update_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6928,15 +7208,17 @@ func (m *Upsert_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i--
dAtA[i] = 0x18
}
- if m.Filters != nil {
- size, err := m.Filters.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if len(m.Filters) > 0 {
+ for iNdEx := len(m.Filters) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Filters[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x12
}
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x12
}
if m.SkipStrictExistCheck {
i--
@@ -6951,7 +7233,7 @@ func (m *Upsert_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Upsert) MarshalVT() (dAtA []byte, err error) {
+func (m *Update) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6964,12 +7246,12 @@ func (m *Upsert) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Upsert) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Update) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Upsert) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Update) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -6984,7 +7266,7 @@ func (m *Upsert) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Remove_Request) MarshalVT() (dAtA []byte, err error) {
+func (m *Upsert_Request) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -6997,12 +7279,12 @@ func (m *Remove_Request) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Remove_Request) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Upsert_Request) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Remove_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Upsert_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7024,8 +7306,8 @@ func (m *Remove_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i--
dAtA[i] = 0x12
}
- if m.Id != nil {
- size, err := m.Id.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Vector != nil {
+ size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -7037,7 +7319,7 @@ func (m *Remove_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Remove_MultiRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Upsert_MultiRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7050,12 +7332,12 @@ func (m *Remove_MultiRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Remove_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Upsert_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Remove_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Upsert_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7082,7 +7364,7 @@ func (m *Remove_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Remove_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Upsert_ObjectRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7095,12 +7377,12 @@ func (m *Remove_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Remove_TimestampRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Upsert_ObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Remove_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Upsert_ObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7112,22 +7394,40 @@ func (m *Remove_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, erro
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Timestamps) > 0 {
- for iNdEx := len(m.Timestamps) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Timestamps[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
+ if m.Vectorizer != nil {
+ size, err := m.Vectorizer.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if m.Config != nil {
+ size, err := m.Config.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Object != nil {
+ size, err := m.Object.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
}
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Remove_Timestamp) MarshalVT() (dAtA []byte, err error) {
+func (m *Upsert_MultiObjectRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7140,12 +7440,12 @@ func (m *Remove_Timestamp) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Remove_Timestamp) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Upsert_MultiObjectRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Remove_Timestamp) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Upsert_MultiObjectRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7157,20 +7457,22 @@ func (m *Remove_Timestamp) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Operator != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Operator))
- i--
- dAtA[i] = 0x10
- }
- if m.Timestamp != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
- i--
- dAtA[i] = 0x8
+ if len(m.Requests) > 0 {
+ for iNdEx := len(m.Requests) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Requests[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
+ }
}
return len(dAtA) - i, nil
}
-func (m *Remove_Config) MarshalVT() (dAtA []byte, err error) {
+func (m *Upsert_Config) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7183,12 +7485,12 @@ func (m *Remove_Config) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Remove_Config) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Upsert_Config) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Remove_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Upsert_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7200,11 +7502,33 @@ func (m *Remove_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
+ if m.DisableBalancedUpdate {
+ i--
+ if m.DisableBalancedUpdate {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
+ i--
+ dAtA[i] = 0x20
+ }
if m.Timestamp != 0 {
i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
i--
dAtA[i] = 0x18
}
+ if len(m.Filters) > 0 {
+ for iNdEx := len(m.Filters) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Filters[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x12
+ }
+ }
if m.SkipStrictExistCheck {
i--
if m.SkipStrictExistCheck {
@@ -7218,7 +7542,7 @@ func (m *Remove_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Remove) MarshalVT() (dAtA []byte, err error) {
+func (m *Upsert) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7231,12 +7555,12 @@ func (m *Remove) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Remove) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Upsert) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Remove) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Upsert) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7251,7 +7575,7 @@ func (m *Remove) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Flush_Request) MarshalVT() (dAtA []byte, err error) {
+func (m *Remove_Request) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7264,12 +7588,12 @@ func (m *Flush_Request) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Flush_Request) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Remove_Request) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Flush_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Remove_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7281,10 +7605,30 @@ func (m *Flush_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
+ if m.Config != nil {
+ size, err := m.Config.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Id != nil {
+ size, err := m.Id.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
+ }
return len(dAtA) - i, nil
}
-func (m *Flush) MarshalVT() (dAtA []byte, err error) {
+func (m *Remove_MultiRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7297,12 +7641,12 @@ func (m *Flush) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Flush) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Remove_MultiRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Flush) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Remove_MultiRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7314,10 +7658,22 @@ func (m *Flush) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
+ if len(m.Requests) > 0 {
+ for iNdEx := len(m.Requests) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Requests[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
+ }
+ }
return len(dAtA) - i, nil
}
-func (m *Object_VectorRequest) MarshalVT() (dAtA []byte, err error) {
+func (m *Remove_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7330,12 +7686,12 @@ func (m *Object_VectorRequest) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_VectorRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Remove_TimestampRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_VectorRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Remove_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7347,30 +7703,22 @@ func (m *Object_VectorRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error)
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Filters != nil {
- size, err := m.Filters.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x12
- }
- if m.Id != nil {
- size, err := m.Id.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if len(m.Timestamps) > 0 {
+ for iNdEx := len(m.Timestamps) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Timestamps[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
}
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Object_Distance) MarshalVT() (dAtA []byte, err error) {
+func (m *Remove_Timestamp) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7383,12 +7731,12 @@ func (m *Object_Distance) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_Distance) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Remove_Timestamp) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_Distance) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Remove_Timestamp) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7400,23 +7748,20 @@ func (m *Object_Distance) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Distance != 0 {
- i -= 4
- binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.Distance))))
+ if m.Operator != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Operator))
i--
- dAtA[i] = 0x15
+ dAtA[i] = 0x10
}
- if len(m.Id) > 0 {
- i -= len(m.Id)
- copy(dAtA[i:], m.Id)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
+ if m.Timestamp != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
-func (m *Object_StreamDistance) MarshalVT() (dAtA []byte, err error) {
+func (m *Remove_Config) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7429,12 +7774,12 @@ func (m *Object_StreamDistance) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_StreamDistance) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Remove_Config) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamDistance) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Remove_Config) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7446,71 +7791,58 @@ func (m *Object_StreamDistance) MarshalToSizedBufferVT(dAtA []byte) (int, error)
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if vtmsg, ok := m.Payload.(interface {
- MarshalToSizedBufferVT([]byte) (int, error)
- }); ok {
- size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if m.Timestamp != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.SkipStrictExistCheck {
+ i--
+ if m.SkipStrictExistCheck {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
}
- i -= size
+ i--
+ dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
-func (m *Object_StreamDistance_Distance) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Remove) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
size := m.SizeVT()
- return m.MarshalToSizedBufferVT(dAtA[:size])
-}
-
-func (m *Object_StreamDistance_Distance) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Distance != nil {
- size, err := m.Distance.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return len(dAtA) - i, nil
+ return dAtA[:n], nil
}
-func (m *Object_StreamDistance_Status) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Remove) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamDistance_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Remove) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
i := len(dAtA)
- if m.Status != nil {
- if vtmsg, ok := any(m.Status).(interface {
- MarshalToSizedBufferVT([]byte) (int, error)
- }); ok {
- size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- } else {
- encoded, err := proto.Marshal(m.Status)
- if err != nil {
- return 0, err
- }
- i -= len(encoded)
- copy(dAtA[i:], encoded)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(encoded)))
- }
- i--
- dAtA[i] = 0x12
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
}
return len(dAtA) - i, nil
}
-func (m *Object_ID) MarshalVT() (dAtA []byte, err error) {
+func (m *Flush_Request) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7523,12 +7855,12 @@ func (m *Object_ID) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_ID) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Flush_Request) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_ID) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Flush_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7540,17 +7872,10 @@ func (m *Object_ID) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Id) > 0 {
- i -= len(m.Id)
- copy(dAtA[i:], m.Id)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
- i--
- dAtA[i] = 0xa
- }
return len(dAtA) - i, nil
}
-func (m *Object_IDs) MarshalVT() (dAtA []byte, err error) {
+func (m *Flush) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7563,12 +7888,12 @@ func (m *Object_IDs) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_IDs) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Flush) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_IDs) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Flush) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7580,19 +7905,10 @@ func (m *Object_IDs) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Ids) > 0 {
- for iNdEx := len(m.Ids) - 1; iNdEx >= 0; iNdEx-- {
- i -= len(m.Ids[iNdEx])
- copy(dAtA[i:], m.Ids[iNdEx])
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ids[iNdEx])))
- i--
- dAtA[i] = 0xa
- }
- }
return len(dAtA) - i, nil
}
-func (m *Object_Vector) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_VectorRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7605,12 +7921,12 @@ func (m *Object_Vector) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_Vector) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_VectorRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_VectorRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7622,60 +7938,17 @@ func (m *Object_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Timestamp != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
- i--
- dAtA[i] = 0x18
- }
- if len(m.Vector) > 0 {
- for iNdEx := len(m.Vector) - 1; iNdEx >= 0; iNdEx-- {
- f1 := math.Float32bits(float32(m.Vector[iNdEx]))
- i -= 4
- binary.LittleEndian.PutUint32(dAtA[i:], uint32(f1))
+ if len(m.Filters) > 0 {
+ for iNdEx := len(m.Filters) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Filters[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x12
}
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Vector)*4))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Id) > 0 {
- i -= len(m.Id)
- copy(dAtA[i:], m.Id)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
-}
-
-func (m *Object_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
- }
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Object_TimestampRequest) MarshalToVT(dAtA []byte) (int, error) {
- size := m.SizeVT()
- return m.MarshalToSizedBufferVT(dAtA[:size])
-}
-
-func (m *Object_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
- }
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
}
if m.Id != nil {
size, err := m.Id.MarshalToSizedBufferVT(dAtA[:i])
@@ -7690,7 +7963,7 @@ func (m *Object_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, erro
return len(dAtA) - i, nil
}
-func (m *Object_Timestamp) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_Distance) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7703,12 +7976,12 @@ func (m *Object_Timestamp) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_Timestamp) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_Distance) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_Timestamp) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_Distance) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7720,10 +7993,11 @@ func (m *Object_Timestamp) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Timestamp != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
+ if m.Distance != 0 {
+ i -= 4
+ binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.Distance))))
i--
- dAtA[i] = 0x10
+ dAtA[i] = 0x15
}
if len(m.Id) > 0 {
i -= len(m.Id)
@@ -7735,52 +8009,7 @@ func (m *Object_Timestamp) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Object_Vectors) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
- }
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Object_Vectors) MarshalToVT(dAtA []byte) (int, error) {
- size := m.SizeVT()
- return m.MarshalToSizedBufferVT(dAtA[:size])
-}
-
-func (m *Object_Vectors) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
- }
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
- }
- if len(m.Vectors) > 0 {
- for iNdEx := len(m.Vectors) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Vectors[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
- }
- }
- return len(dAtA) - i, nil
-}
-
-func (m *Object_StreamVector) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_StreamDistance) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7793,12 +8022,12 @@ func (m *Object_StreamVector) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_StreamVector) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamDistance) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamVector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamDistance) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7822,15 +8051,15 @@ func (m *Object_StreamVector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Object_StreamVector_Vector) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamDistance_Distance) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamVector_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamDistance_Distance) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
- if m.Vector != nil {
- size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Distance != nil {
+ size, err := m.Distance.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -7842,12 +8071,12 @@ func (m *Object_StreamVector_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, e
return len(dAtA) - i, nil
}
-func (m *Object_StreamVector_Status) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamDistance_Status) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamVector_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamDistance_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
if m.Status != nil {
if vtmsg, ok := any(m.Status).(interface {
@@ -7874,7 +8103,7 @@ func (m *Object_StreamVector_Status) MarshalToSizedBufferVT(dAtA []byte) (int, e
return len(dAtA) - i, nil
}
-func (m *Object_ReshapeVector) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_ID) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7887,12 +8116,12 @@ func (m *Object_ReshapeVector) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_ReshapeVector) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_ID) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_ReshapeVector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_ID) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7904,38 +8133,17 @@ func (m *Object_ReshapeVector) MarshalToSizedBufferVT(dAtA []byte) (int, error)
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Shape) > 0 {
- var pksize2 int
- for _, num := range m.Shape {
- pksize2 += protohelpers.SizeOfVarint(uint64(num))
- }
- i -= pksize2
- j1 := i
- for _, num1 := range m.Shape {
- num := uint64(num1)
- for num >= 1<<7 {
- dAtA[j1] = uint8(uint64(num)&0x7f | 0x80)
- num >>= 7
- j1++
- }
- dAtA[j1] = uint8(num)
- j1++
- }
- i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize2))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Object) > 0 {
- i -= len(m.Object)
- copy(dAtA[i:], m.Object)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Object)))
+ if len(m.Id) > 0 {
+ i -= len(m.Id)
+ copy(dAtA[i:], m.Id)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Object_Blob) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_IDs) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7948,12 +8156,12 @@ func (m *Object_Blob) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_Blob) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_IDs) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_Blob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_IDs) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -7965,24 +8173,19 @@ func (m *Object_Blob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Object) > 0 {
- i -= len(m.Object)
- copy(dAtA[i:], m.Object)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Object)))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Id) > 0 {
- i -= len(m.Id)
- copy(dAtA[i:], m.Id)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
- i--
- dAtA[i] = 0xa
+ if len(m.Ids) > 0 {
+ for iNdEx := len(m.Ids) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.Ids[iNdEx])
+ copy(dAtA[i:], m.Ids[iNdEx])
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ids[iNdEx])))
+ i--
+ dAtA[i] = 0xa
+ }
}
return len(dAtA) - i, nil
}
-func (m *Object_StreamBlob) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_Vector) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -7995,12 +8198,12 @@ func (m *Object_StreamBlob) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_StreamBlob) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_Vector) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamBlob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8012,27 +8215,63 @@ func (m *Object_StreamBlob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if vtmsg, ok := m.Payload.(interface {
- MarshalToSizedBufferVT([]byte) (int, error)
- }); ok {
- size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if m.Timestamp != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
+ i--
+ dAtA[i] = 0x18
+ }
+ if len(m.Vector) > 0 {
+ for iNdEx := len(m.Vector) - 1; iNdEx >= 0; iNdEx-- {
+ f1 := math.Float32bits(float32(m.Vector[iNdEx]))
+ i -= 4
+ binary.LittleEndian.PutUint32(dAtA[i:], uint32(f1))
}
- i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Vector)*4))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Id) > 0 {
+ i -= len(m.Id)
+ copy(dAtA[i:], m.Id)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
+ i--
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Object_StreamBlob_Blob) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_TimestampRequest) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Object_TimestampRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamBlob_Blob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_TimestampRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
i := len(dAtA)
- if m.Blob != nil {
- size, err := m.Blob.MarshalToSizedBufferVT(dAtA[:i])
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if m.Id != nil {
+ size, err := m.Id.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -8044,39 +8283,52 @@ func (m *Object_StreamBlob_Blob) MarshalToSizedBufferVT(dAtA []byte) (int, error
return len(dAtA) - i, nil
}
-func (m *Object_StreamBlob_Status) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_Timestamp) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Object_Timestamp) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamBlob_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_Timestamp) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
i := len(dAtA)
- if m.Status != nil {
- if vtmsg, ok := any(m.Status).(interface {
- MarshalToSizedBufferVT([]byte) (int, error)
- }); ok {
- size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- } else {
- encoded, err := proto.Marshal(m.Status)
- if err != nil {
- return 0, err
- }
- i -= len(encoded)
- copy(dAtA[i:], encoded)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(encoded)))
- }
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if m.Timestamp != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Timestamp))
i--
- dAtA[i] = 0x12
+ dAtA[i] = 0x10
+ }
+ if len(m.Id) > 0 {
+ i -= len(m.Id)
+ copy(dAtA[i:], m.Id)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
+ i--
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Object_Location) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_Vectors) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8089,12 +8341,12 @@ func (m *Object_Location) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_Location) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_Vectors) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_Location) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_Vectors) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8106,33 +8358,22 @@ func (m *Object_Location) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Ips) > 0 {
- for iNdEx := len(m.Ips) - 1; iNdEx >= 0; iNdEx-- {
- i -= len(m.Ips[iNdEx])
- copy(dAtA[i:], m.Ips[iNdEx])
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ips[iNdEx])))
+ if len(m.Vectors) > 0 {
+ for iNdEx := len(m.Vectors) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Vectors[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
i--
- dAtA[i] = 0x1a
+ dAtA[i] = 0xa
}
}
- if len(m.Uuid) > 0 {
- i -= len(m.Uuid)
- copy(dAtA[i:], m.Uuid)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Uuid)))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Name) > 0 {
- i -= len(m.Name)
- copy(dAtA[i:], m.Name)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
- i--
- dAtA[i] = 0xa
- }
return len(dAtA) - i, nil
}
-func (m *Object_StreamLocation) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_StreamVector) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8145,12 +8386,12 @@ func (m *Object_StreamLocation) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_StreamLocation) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamVector) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamLocation) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamVector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8174,15 +8415,15 @@ func (m *Object_StreamLocation) MarshalToSizedBufferVT(dAtA []byte) (int, error)
return len(dAtA) - i, nil
}
-func (m *Object_StreamLocation_Location) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamVector_Vector) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamLocation_Location) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamVector_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
- if m.Location != nil {
- size, err := m.Location.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Vector != nil {
+ size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -8194,12 +8435,12 @@ func (m *Object_StreamLocation_Location) MarshalToSizedBufferVT(dAtA []byte) (in
return len(dAtA) - i, nil
}
-func (m *Object_StreamLocation_Status) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamVector_Status) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_StreamLocation_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamVector_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
if m.Status != nil {
if vtmsg, ok := any(m.Status).(interface {
@@ -8226,7 +8467,7 @@ func (m *Object_StreamLocation_Status) MarshalToSizedBufferVT(dAtA []byte) (int,
return len(dAtA) - i, nil
}
-func (m *Object_Locations) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_ReshapeVector) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8239,12 +8480,12 @@ func (m *Object_Locations) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_Locations) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_ReshapeVector) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_Locations) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_ReshapeVector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8256,22 +8497,38 @@ func (m *Object_Locations) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Locations) > 0 {
- for iNdEx := len(m.Locations) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Locations[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
+ if len(m.Shape) > 0 {
+ var pksize2 int
+ for _, num := range m.Shape {
+ pksize2 += protohelpers.SizeOfVarint(uint64(num))
+ }
+ i -= pksize2
+ j1 := i
+ for _, num1 := range m.Shape {
+ num := uint64(num1)
+ for num >= 1<<7 {
+ dAtA[j1] = uint8(uint64(num)&0x7f | 0x80)
+ num >>= 7
+ j1++
}
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0xa
+ dAtA[j1] = uint8(num)
+ j1++
}
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize2))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Object) > 0 {
+ i -= len(m.Object)
+ copy(dAtA[i:], m.Object)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Object)))
+ i--
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Object_List_Request) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_Blob) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8284,12 +8541,12 @@ func (m *Object_List_Request) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_List_Request) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_Blob) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_List_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_Blob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8301,10 +8558,24 @@ func (m *Object_List_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
+ if len(m.Object) > 0 {
+ i -= len(m.Object)
+ copy(dAtA[i:], m.Object)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Object)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Id) > 0 {
+ i -= len(m.Id)
+ copy(dAtA[i:], m.Id)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Id)))
+ i--
+ dAtA[i] = 0xa
+ }
return len(dAtA) - i, nil
}
-func (m *Object_List_Response) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_StreamBlob) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8317,12 +8588,12 @@ func (m *Object_List_Response) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_List_Response) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamBlob) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_List_Response) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamBlob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8346,15 +8617,15 @@ func (m *Object_List_Response) MarshalToSizedBufferVT(dAtA []byte) (int, error)
return len(dAtA) - i, nil
}
-func (m *Object_List_Response_Vector) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamBlob_Blob) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_List_Response_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamBlob_Blob) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
- if m.Vector != nil {
- size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
+ if m.Blob != nil {
+ size, err := m.Blob.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -8366,12 +8637,12 @@ func (m *Object_List_Response_Vector) MarshalToSizedBufferVT(dAtA []byte) (int,
return len(dAtA) - i, nil
}
-func (m *Object_List_Response_Status) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamBlob_Status) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_List_Response_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamBlob_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
if m.Status != nil {
if vtmsg, ok := any(m.Status).(interface {
@@ -8398,7 +8669,7 @@ func (m *Object_List_Response_Status) MarshalToSizedBufferVT(dAtA []byte) (int,
return len(dAtA) - i, nil
}
-func (m *Object_List) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_Location) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8411,12 +8682,12 @@ func (m *Object_List) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object_List) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_Location) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object_List) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_Location) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8428,10 +8699,33 @@ func (m *Object_List) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
+ if len(m.Ips) > 0 {
+ for iNdEx := len(m.Ips) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.Ips[iNdEx])
+ copy(dAtA[i:], m.Ips[iNdEx])
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ips[iNdEx])))
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ if len(m.Uuid) > 0 {
+ i -= len(m.Uuid)
+ copy(dAtA[i:], m.Uuid)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Uuid)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Name) > 0 {
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
+ i--
+ dAtA[i] = 0xa
+ }
return len(dAtA) - i, nil
}
-func (m *Object) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_StreamLocation) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8444,12 +8738,12 @@ func (m *Object) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Object) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamLocation) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Object) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_StreamLocation) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8461,81 +8755,71 @@ func (m *Object) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- return len(dAtA) - i, nil
-}
-
-func (m *Control_CreateIndexRequest) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
- }
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
+ if vtmsg, ok := m.Payload.(interface {
+ MarshalToSizedBufferVT([]byte) (int, error)
+ }); ok {
+ size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
}
- return dAtA[:n], nil
+ return len(dAtA) - i, nil
}
-func (m *Control_CreateIndexRequest) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamLocation_Location) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Control_CreateIndexRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
- }
+func (m *Object_StreamLocation_Location) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
- }
- if m.PoolSize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PoolSize))
+ if m.Location != nil {
+ size, err := m.Location.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
i--
- dAtA[i] = 0x8
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Control) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
- }
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Control) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_StreamLocation_Status) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Control) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
- }
+func (m *Object_StreamLocation_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
+ if m.Status != nil {
+ if vtmsg, ok := any(m.Status).(interface {
+ MarshalToSizedBufferVT([]byte) (int, error)
+ }); ok {
+ size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ } else {
+ encoded, err := proto.Marshal(m.Status)
+ if err != nil {
+ return 0, err
+ }
+ i -= len(encoded)
+ copy(dAtA[i:], encoded)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(encoded)))
+ }
+ i--
+ dAtA[i] = 0x12
}
return len(dAtA) - i, nil
}
-func (m *Discoverer_Request) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_Locations) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8548,12 +8832,12 @@ func (m *Discoverer_Request) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Discoverer_Request) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_Locations) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Discoverer_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_Locations) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8565,31 +8849,22 @@ func (m *Discoverer_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Node) > 0 {
- i -= len(m.Node)
- copy(dAtA[i:], m.Node)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Node)))
- i--
- dAtA[i] = 0x1a
- }
- if len(m.Namespace) > 0 {
- i -= len(m.Namespace)
- copy(dAtA[i:], m.Namespace)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Namespace)))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Name) > 0 {
- i -= len(m.Name)
- copy(dAtA[i:], m.Name)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
- i--
- dAtA[i] = 0xa
+ if len(m.Locations) > 0 {
+ for iNdEx := len(m.Locations) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Locations[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
+ }
}
return len(dAtA) - i, nil
}
-func (m *Discoverer) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_List_Request) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8602,12 +8877,12 @@ func (m *Discoverer) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Discoverer) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_List_Request) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Discoverer) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_List_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8622,7 +8897,7 @@ func (m *Discoverer) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Info_Index_Count) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_List_Response) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8635,12 +8910,12 @@ func (m *Info_Index_Count) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_Count) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_List_Response) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_Count) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_List_Response) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8652,40 +8927,71 @@ func (m *Info_Index_Count) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Saving {
- i--
- if m.Saving {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
+ if vtmsg, ok := m.Payload.(interface {
+ MarshalToSizedBufferVT([]byte) (int, error)
+ }); ok {
+ size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
}
- i--
- dAtA[i] = 0x20
+ i -= size
}
- if m.Indexing {
- i--
- if m.Indexing {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
+ return len(dAtA) - i, nil
+}
+
+func (m *Object_List_Response_Vector) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Object_List_Response_Vector) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.Vector != nil {
+ size, err := m.Vector.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
}
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
i--
- dAtA[i] = 0x18
- }
- if m.Uncommitted != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Uncommitted))
- i--
- dAtA[i] = 0x10
+ dAtA[i] = 0xa
}
- if m.Stored != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Stored))
+ return len(dAtA) - i, nil
+}
+
+func (m *Object_List_Response_Status) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Object_List_Response_Status) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.Status != nil {
+ if vtmsg, ok := any(m.Status).(interface {
+ MarshalToSizedBufferVT([]byte) (int, error)
+ }); ok {
+ size, err := vtmsg.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ } else {
+ encoded, err := proto.Marshal(m.Status)
+ if err != nil {
+ return 0, err
+ }
+ i -= len(encoded)
+ copy(dAtA[i:], encoded)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(encoded)))
+ }
i--
- dAtA[i] = 0x8
+ dAtA[i] = 0x12
}
return len(dAtA) - i, nil
}
-func (m *Info_Index_Detail) MarshalVT() (dAtA []byte, err error) {
+func (m *Object_List) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8698,12 +9004,12 @@ func (m *Info_Index_Detail) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_Detail) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object_List) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_Detail) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object_List) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8715,42 +9021,10 @@ func (m *Info_Index_Detail) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.LiveAgents != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.LiveAgents))
- i--
- dAtA[i] = 0x18
- }
- if m.Replica != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Replica))
- i--
- dAtA[i] = 0x10
- }
- if len(m.Counts) > 0 {
- for k := range m.Counts {
- v := m.Counts[k]
- baseI := i
- size, err := v.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x12
- i -= len(k)
- copy(dAtA[i:], k)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(k)))
- i--
- dAtA[i] = 0xa
- i = protohelpers.EncodeVarint(dAtA, i, uint64(baseI-i))
- i--
- dAtA[i] = 0xa
- }
- }
return len(dAtA) - i, nil
}
-func (m *Info_Index_UUID_Committed) MarshalVT() (dAtA []byte, err error) {
+func (m *Object) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8763,12 +9037,12 @@ func (m *Info_Index_UUID_Committed) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_UUID_Committed) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Object) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_UUID_Committed) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Object) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8780,17 +9054,10 @@ func (m *Info_Index_UUID_Committed) MarshalToSizedBufferVT(dAtA []byte) (int, er
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Uuid) > 0 {
- i -= len(m.Uuid)
- copy(dAtA[i:], m.Uuid)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Uuid)))
- i--
- dAtA[i] = 0xa
- }
return len(dAtA) - i, nil
}
-func (m *Info_Index_UUID_Uncommitted) MarshalVT() (dAtA []byte, err error) {
+func (m *Control_CreateIndexRequest) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8803,12 +9070,12 @@ func (m *Info_Index_UUID_Uncommitted) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_UUID_Uncommitted) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Control_CreateIndexRequest) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_UUID_Uncommitted) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Control_CreateIndexRequest) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8820,17 +9087,15 @@ func (m *Info_Index_UUID_Uncommitted) MarshalToSizedBufferVT(dAtA []byte) (int,
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Uuid) > 0 {
- i -= len(m.Uuid)
- copy(dAtA[i:], m.Uuid)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Uuid)))
+ if m.PoolSize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PoolSize))
i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
-func (m *Info_Index_UUID) MarshalVT() (dAtA []byte, err error) {
+func (m *Control) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8843,12 +9108,12 @@ func (m *Info_Index_UUID) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_UUID) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Control) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_UUID) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Control) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8863,7 +9128,7 @@ func (m *Info_Index_UUID) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Info_Index_Statistics) MarshalVT() (dAtA []byte, err error) {
+func (m *Discoverer_Request) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -8876,12 +9141,12 @@ func (m *Info_Index_Statistics) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_Statistics) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Discoverer_Request) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_Statistics) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Discoverer_Request) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -8893,274 +9158,129 @@ func (m *Info_Index_Statistics) MarshalToSizedBufferVT(dAtA []byte) (int, error)
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.IndegreeHistogram) > 0 {
- var pksize2 int
- for _, num := range m.IndegreeHistogram {
- pksize2 += protohelpers.SizeOfVarint(uint64(num))
- }
- i -= pksize2
- j1 := i
- for _, num := range m.IndegreeHistogram {
- for num >= 1<<7 {
- dAtA[j1] = uint8(uint64(num)&0x7f | 0x80)
- num >>= 7
- j1++
- }
- dAtA[j1] = uint8(num)
- j1++
- }
- i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize2))
- i--
- dAtA[i] = 0x2
+ if len(m.Node) > 0 {
+ i -= len(m.Node)
+ copy(dAtA[i:], m.Node)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Node)))
i--
- dAtA[i] = 0x8a
+ dAtA[i] = 0x1a
}
- if len(m.OutdegreeHistogram) > 0 {
- var pksize4 int
- for _, num := range m.OutdegreeHistogram {
- pksize4 += protohelpers.SizeOfVarint(uint64(num))
- }
- i -= pksize4
- j3 := i
- for _, num := range m.OutdegreeHistogram {
- for num >= 1<<7 {
- dAtA[j3] = uint8(uint64(num)&0x7f | 0x80)
- num >>= 7
- j3++
- }
- dAtA[j3] = uint8(num)
- j3++
- }
- i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize4))
- i--
- dAtA[i] = 0x2
+ if len(m.Namespace) > 0 {
+ i -= len(m.Namespace)
+ copy(dAtA[i:], m.Namespace)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Namespace)))
i--
- dAtA[i] = 0x82
+ dAtA[i] = 0x12
}
- if len(m.IndegreeCount) > 0 {
- var pksize6 int
- for _, num := range m.IndegreeCount {
- pksize6 += protohelpers.SizeOfVarint(uint64(num))
- }
- i -= pksize6
- j5 := i
- for _, num1 := range m.IndegreeCount {
- num := uint64(num1)
- for num >= 1<<7 {
- dAtA[j5] = uint8(uint64(num)&0x7f | 0x80)
- num >>= 7
- j5++
- }
- dAtA[j5] = uint8(num)
- j5++
- }
- i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize6))
- i--
- dAtA[i] = 0x1
+ if len(m.Name) > 0 {
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
i--
- dAtA[i] = 0xfa
+ dAtA[i] = 0xa
}
- if m.C99Outdegree != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C99Outdegree))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xf1
+ return len(dAtA) - i, nil
+}
+
+func (m *Discoverer) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
}
- if m.C95Outdegree != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C95Outdegree))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xe9
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- if m.C5Indegree != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C5Indegree))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xe1
+ return dAtA[:n], nil
+}
+
+func (m *Discoverer) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Discoverer) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
}
- if m.C1Indegree != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C1Indegree))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xd9
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
}
- if m.MeanNumberOfEdgesPerNode != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanNumberOfEdgesPerNode))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xd1
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Index_Count) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
}
- if m.MeanIndegreeDistanceFor10Edges != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanIndegreeDistanceFor10Edges))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xc9
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- if m.MeanEdgeLengthFor10Edges != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanEdgeLengthFor10Edges))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xc1
+ return dAtA[:n], nil
+}
+
+func (m *Info_Index_Count) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_Index_Count) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
}
- if m.MeanEdgeLength != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanEdgeLength))))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xb9
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
}
- if m.VarianceOfOutdegree != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.VarianceOfOutdegree))))
+ if m.Saving {
i--
- dAtA[i] = 0x1
+ if m.Saving {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
i--
- dAtA[i] = 0xb1
+ dAtA[i] = 0x20
}
- if m.VarianceOfIndegree != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.VarianceOfIndegree))))
+ if m.Indexing {
i--
- dAtA[i] = 0x1
+ if m.Indexing {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
i--
- dAtA[i] = 0xa9
+ dAtA[i] = 0x18
}
- if m.SizeOfRefinementObjectRepository != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.SizeOfRefinementObjectRepository))
- i--
- dAtA[i] = 0x1
+ if m.Uncommitted != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Uncommitted))
i--
- dAtA[i] = 0xa0
+ dAtA[i] = 0x10
}
- if m.SizeOfObjectRepository != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.SizeOfObjectRepository))
- i--
- dAtA[i] = 0x1
+ if m.Stored != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Stored))
i--
- dAtA[i] = 0x98
+ dAtA[i] = 0x8
}
- if m.NumberOfRemovedObjects != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfRemovedObjects))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0x90
- }
- if m.NumberOfObjects != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfObjects))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0x88
- }
- if m.NumberOfNodesWithoutIndegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfNodesWithoutIndegree))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0x80
- }
- if m.NumberOfNodesWithoutEdges != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfNodesWithoutEdges))
- i--
- dAtA[i] = 0x78
- }
- if m.NumberOfNodes != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfNodes))
- i--
- dAtA[i] = 0x70
- }
- if m.NumberOfIndexedObjects != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfIndexedObjects))
- i--
- dAtA[i] = 0x68
- }
- if m.NumberOfEdges != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfEdges))
- i--
- dAtA[i] = 0x60
- }
- if m.NodesSkippedForIndegreeDistance != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NodesSkippedForIndegreeDistance))
- i--
- dAtA[i] = 0x58
- }
- if m.NodesSkippedFor10Edges != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NodesSkippedFor10Edges))
- i--
- dAtA[i] = 0x50
- }
- if m.ModeOutdegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ModeOutdegree))
- i--
- dAtA[i] = 0x48
- }
- if m.ModeIndegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ModeIndegree))
- i--
- dAtA[i] = 0x40
- }
- if m.MinNumberOfOutdegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MinNumberOfOutdegree))
- i--
- dAtA[i] = 0x38
- }
- if m.MinNumberOfIndegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MinNumberOfIndegree))
- i--
- dAtA[i] = 0x30
- }
- if m.MaxNumberOfOutdegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MaxNumberOfOutdegree))
- i--
- dAtA[i] = 0x28
- }
- if m.MaxNumberOfIndegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MaxNumberOfIndegree))
- i--
- dAtA[i] = 0x20
- }
- if m.MedianOutdegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MedianOutdegree))
- i--
- dAtA[i] = 0x18
- }
- if m.MedianIndegree != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MedianIndegree))
- i--
- dAtA[i] = 0x10
- }
- if m.Valid {
- i--
- if m.Valid {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
- }
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *Info_Index_StatisticsDetail) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Index_Detail) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
}
size := m.SizeVT()
dAtA = make([]byte, size)
@@ -9171,12 +9291,12 @@ func (m *Info_Index_StatisticsDetail) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_StatisticsDetail) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Index_Detail) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_StatisticsDetail) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Index_Detail) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9188,9 +9308,19 @@ func (m *Info_Index_StatisticsDetail) MarshalToSizedBufferVT(dAtA []byte) (int,
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Details) > 0 {
- for k := range m.Details {
- v := m.Details[k]
+ if m.LiveAgents != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.LiveAgents))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.Replica != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Replica))
+ i--
+ dAtA[i] = 0x10
+ }
+ if len(m.Counts) > 0 {
+ for k := range m.Counts {
+ v := m.Counts[k]
baseI := i
size, err := v.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
@@ -9213,7 +9343,7 @@ func (m *Info_Index_StatisticsDetail) MarshalToSizedBufferVT(dAtA []byte) (int,
return len(dAtA) - i, nil
}
-func (m *Info_Index_Property) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Index_UUID_Committed) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9226,12 +9356,12 @@ func (m *Info_Index_Property) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_Property) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Index_UUID_Committed) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_Property) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Index_UUID_Committed) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9243,242 +9373,385 @@ func (m *Info_Index_Property) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.IncomingEdge != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.IncomingEdge))
- i--
- dAtA[i] = 0x2
+ if len(m.Uuid) > 0 {
+ i -= len(m.Uuid)
+ copy(dAtA[i:], m.Uuid)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Uuid)))
i--
- dAtA[i] = 0x90
+ dAtA[i] = 0xa
}
- if m.OutgoingEdge != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.OutgoingEdge))
- i--
- dAtA[i] = 0x2
- i--
- dAtA[i] = 0x88
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Index_UUID_Uncommitted) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
}
- if m.BuildTimeLimit != 0 {
- i -= 4
- binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.BuildTimeLimit))))
- i--
- dAtA[i] = 0x2
- i--
- dAtA[i] = 0x85
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- if m.DynamicEdgeSizeRate != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.DynamicEdgeSizeRate))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xf8
+ return dAtA[:n], nil
+}
+
+func (m *Info_Index_UUID_Uncommitted) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_Index_UUID_Uncommitted) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
}
- if m.DynamicEdgeSizeBase != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.DynamicEdgeSizeBase))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xf0
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
}
- if len(m.GraphType) > 0 {
- i -= len(m.GraphType)
- copy(dAtA[i:], m.GraphType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.GraphType)))
- i--
- dAtA[i] = 0x1
+ if len(m.Uuid) > 0 {
+ i -= len(m.Uuid)
+ copy(dAtA[i:], m.Uuid)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Uuid)))
i--
- dAtA[i] = 0xea
+ dAtA[i] = 0xa
}
- if m.BatchSizeForCreation != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.BatchSizeForCreation))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xe0
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Index_UUID) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
}
- if m.TruncationThreadPoolSize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.TruncationThreadPoolSize))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xd8
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- if len(m.SeedType) > 0 {
- i -= len(m.SeedType)
- copy(dAtA[i:], m.SeedType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SeedType)))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xd2
+ return dAtA[:n], nil
+}
+
+func (m *Info_Index_UUID) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_Index_UUID) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
}
- if m.SeedSize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.SeedSize))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xc8
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
}
- if m.InsertionRadiusCoefficient != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.InsertionRadiusCoefficient))))
- i--
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Index_Statistics) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Info_Index_Statistics) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_Index_Statistics) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if len(m.IndegreeHistogram) > 0 {
+ var pksize2 int
+ for _, num := range m.IndegreeHistogram {
+ pksize2 += protohelpers.SizeOfVarint(uint64(num))
+ }
+ i -= pksize2
+ j1 := i
+ for _, num := range m.IndegreeHistogram {
+ for num >= 1<<7 {
+ dAtA[j1] = uint8(uint64(num)&0x7f | 0x80)
+ num >>= 7
+ j1++
+ }
+ dAtA[j1] = uint8(num)
+ j1++
+ }
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize2))
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x8a
+ }
+ if len(m.OutdegreeHistogram) > 0 {
+ var pksize4 int
+ for _, num := range m.OutdegreeHistogram {
+ pksize4 += protohelpers.SizeOfVarint(uint64(num))
+ }
+ i -= pksize4
+ j3 := i
+ for _, num := range m.OutdegreeHistogram {
+ for num >= 1<<7 {
+ dAtA[j3] = uint8(uint64(num)&0x7f | 0x80)
+ num >>= 7
+ j3++
+ }
+ dAtA[j3] = uint8(num)
+ j3++
+ }
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize4))
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x82
+ }
+ if len(m.IndegreeCount) > 0 {
+ var pksize6 int
+ for _, num := range m.IndegreeCount {
+ pksize6 += protohelpers.SizeOfVarint(uint64(num))
+ }
+ i -= pksize6
+ j5 := i
+ for _, num1 := range m.IndegreeCount {
+ num := uint64(num1)
+ for num >= 1<<7 {
+ dAtA[j5] = uint8(uint64(num)&0x7f | 0x80)
+ num >>= 7
+ j5++
+ }
+ dAtA[j5] = uint8(num)
+ j5++
+ }
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(pksize6))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xfa
+ }
+ if m.C99Outdegree != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C99Outdegree))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xf1
+ }
+ if m.C95Outdegree != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C95Outdegree))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xe9
+ }
+ if m.C5Indegree != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C5Indegree))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xe1
+ }
+ if m.C1Indegree != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.C1Indegree))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xd9
+ }
+ if m.MeanNumberOfEdgesPerNode != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanNumberOfEdgesPerNode))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xd1
+ }
+ if m.MeanIndegreeDistanceFor10Edges != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanIndegreeDistanceFor10Edges))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xc9
+ }
+ if m.MeanEdgeLengthFor10Edges != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanEdgeLengthFor10Edges))))
+ i--
dAtA[i] = 0x1
i--
dAtA[i] = 0xc1
}
- if m.EdgeSizeLimitForCreation != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.EdgeSizeLimitForCreation))
+ if m.MeanEdgeLength != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.MeanEdgeLength))))
i--
dAtA[i] = 0x1
i--
- dAtA[i] = 0xb8
+ dAtA[i] = 0xb9
}
- if m.EdgeSizeForSearch != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.EdgeSizeForSearch))
+ if m.VarianceOfOutdegree != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.VarianceOfOutdegree))))
i--
dAtA[i] = 0x1
i--
- dAtA[i] = 0xb0
+ dAtA[i] = 0xb1
}
- if m.EdgeSizeForCreation != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.EdgeSizeForCreation))
+ if m.VarianceOfIndegree != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.VarianceOfIndegree))))
i--
dAtA[i] = 0x1
i--
- dAtA[i] = 0xa8
+ dAtA[i] = 0xa9
}
- if m.TruncationThreshold != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.TruncationThreshold))
+ if m.SizeOfRefinementObjectRepository != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.SizeOfRefinementObjectRepository))
i--
dAtA[i] = 0x1
i--
dAtA[i] = 0xa0
}
- if len(m.RefinementObjectType) > 0 {
- i -= len(m.RefinementObjectType)
- copy(dAtA[i:], m.RefinementObjectType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.RefinementObjectType)))
+ if m.SizeOfObjectRepository != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.SizeOfObjectRepository))
i--
dAtA[i] = 0x1
i--
- dAtA[i] = 0x9a
+ dAtA[i] = 0x98
}
- if m.EpsilonForInsertionOrder != 0 {
- i -= 4
- binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.EpsilonForInsertionOrder))))
+ if m.NumberOfRemovedObjects != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfRemovedObjects))
i--
dAtA[i] = 0x1
i--
- dAtA[i] = 0x95
+ dAtA[i] = 0x90
}
- if m.NOfNeighborsForInsertionOrder != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NOfNeighborsForInsertionOrder))
+ if m.NumberOfObjects != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfObjects))
i--
dAtA[i] = 0x1
i--
dAtA[i] = 0x88
}
- if m.MaxMagnitude != 0 {
- i -= 4
- binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.MaxMagnitude))))
+ if m.NumberOfNodesWithoutIndegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfNodesWithoutIndegree))
i--
dAtA[i] = 0x1
i--
- dAtA[i] = 0x85
+ dAtA[i] = 0x80
}
- if len(m.SearchType) > 0 {
- i -= len(m.SearchType)
- copy(dAtA[i:], m.SearchType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SearchType)))
+ if m.NumberOfNodesWithoutEdges != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfNodesWithoutEdges))
i--
- dAtA[i] = 0x7a
+ dAtA[i] = 0x78
}
- if len(m.AccuracyTable) > 0 {
- i -= len(m.AccuracyTable)
- copy(dAtA[i:], m.AccuracyTable)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.AccuracyTable)))
+ if m.NumberOfNodes != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfNodes))
i--
- dAtA[i] = 0x72
+ dAtA[i] = 0x70
}
- if m.PrefetchSize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PrefetchSize))
+ if m.NumberOfIndexedObjects != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfIndexedObjects))
i--
dAtA[i] = 0x68
}
- if m.PrefetchOffset != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PrefetchOffset))
+ if m.NumberOfEdges != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NumberOfEdges))
i--
dAtA[i] = 0x60
}
- if m.ObjectSharedMemorySize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ObjectSharedMemorySize))
+ if m.NodesSkippedForIndegreeDistance != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NodesSkippedForIndegreeDistance))
i--
dAtA[i] = 0x58
}
- if m.TreeSharedMemorySize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.TreeSharedMemorySize))
+ if m.NodesSkippedFor10Edges != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NodesSkippedFor10Edges))
i--
dAtA[i] = 0x50
}
- if m.GraphSharedMemorySize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.GraphSharedMemorySize))
+ if m.ModeOutdegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ModeOutdegree))
i--
dAtA[i] = 0x48
}
- if m.PathAdjustmentInterval != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PathAdjustmentInterval))
+ if m.ModeIndegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ModeIndegree))
i--
dAtA[i] = 0x40
}
- if len(m.ObjectAlignment) > 0 {
- i -= len(m.ObjectAlignment)
- copy(dAtA[i:], m.ObjectAlignment)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ObjectAlignment)))
+ if m.MinNumberOfOutdegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MinNumberOfOutdegree))
i--
- dAtA[i] = 0x3a
+ dAtA[i] = 0x38
}
- if len(m.DatabaseType) > 0 {
- i -= len(m.DatabaseType)
- copy(dAtA[i:], m.DatabaseType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.DatabaseType)))
- i--
- dAtA[i] = 0x32
+ if m.MinNumberOfIndegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MinNumberOfIndegree))
+ i--
+ dAtA[i] = 0x30
}
- if len(m.IndexType) > 0 {
- i -= len(m.IndexType)
- copy(dAtA[i:], m.IndexType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.IndexType)))
+ if m.MaxNumberOfOutdegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MaxNumberOfOutdegree))
i--
- dAtA[i] = 0x2a
+ dAtA[i] = 0x28
}
- if len(m.DistanceType) > 0 {
- i -= len(m.DistanceType)
- copy(dAtA[i:], m.DistanceType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.DistanceType)))
+ if m.MaxNumberOfIndegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MaxNumberOfIndegree))
i--
- dAtA[i] = 0x22
+ dAtA[i] = 0x20
}
- if len(m.ObjectType) > 0 {
- i -= len(m.ObjectType)
- copy(dAtA[i:], m.ObjectType)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ObjectType)))
+ if m.MedianOutdegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MedianOutdegree))
i--
- dAtA[i] = 0x1a
+ dAtA[i] = 0x18
}
- if m.ThreadPoolSize != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ThreadPoolSize))
+ if m.MedianIndegree != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.MedianIndegree))
i--
dAtA[i] = 0x10
}
- if m.Dimension != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Dimension))
+ if m.Valid {
+ i--
+ if m.Valid {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
i--
dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
-func (m *Info_Index_PropertyDetail) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Index_StatisticsDetail) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9491,12 +9764,12 @@ func (m *Info_Index_PropertyDetail) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index_PropertyDetail) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Index_StatisticsDetail) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index_PropertyDetail) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Index_StatisticsDetail) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9533,7 +9806,7 @@ func (m *Info_Index_PropertyDetail) MarshalToSizedBufferVT(dAtA []byte) (int, er
return len(dAtA) - i, nil
}
-func (m *Info_Index) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Index_Property) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9546,12 +9819,12 @@ func (m *Info_Index) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Index) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Index_Property) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Index) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Index_Property) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9563,185 +9836,242 @@ func (m *Info_Index) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- return len(dAtA) - i, nil
-}
-
-func (m *Info_Pod) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
- }
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Info_Pod) MarshalToVT(dAtA []byte) (int, error) {
- size := m.SizeVT()
- return m.MarshalToSizedBufferVT(dAtA[:size])
-}
-
-func (m *Info_Pod) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
- }
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
+ if m.IncomingEdge != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.IncomingEdge))
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x90
}
- if m.Node != nil {
- size, err := m.Node.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ if m.OutgoingEdge != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.OutgoingEdge))
i--
- dAtA[i] = 0x3a
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x88
}
- if m.Memory != nil {
- size, err := m.Memory.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ if m.BuildTimeLimit != 0 {
+ i -= 4
+ binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.BuildTimeLimit))))
i--
- dAtA[i] = 0x32
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x85
}
- if m.Cpu != nil {
- size, err := m.Cpu.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ if m.DynamicEdgeSizeRate != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.DynamicEdgeSizeRate))
i--
- dAtA[i] = 0x2a
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xf8
}
- if len(m.Ip) > 0 {
- i -= len(m.Ip)
- copy(dAtA[i:], m.Ip)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ip)))
+ if m.DynamicEdgeSizeBase != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.DynamicEdgeSizeBase))
i--
- dAtA[i] = 0x22
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xf0
}
- if len(m.Namespace) > 0 {
- i -= len(m.Namespace)
- copy(dAtA[i:], m.Namespace)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Namespace)))
+ if len(m.GraphType) > 0 {
+ i -= len(m.GraphType)
+ copy(dAtA[i:], m.GraphType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.GraphType)))
i--
- dAtA[i] = 0x1a
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xea
}
- if len(m.Name) > 0 {
- i -= len(m.Name)
- copy(dAtA[i:], m.Name)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
+ if m.BatchSizeForCreation != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.BatchSizeForCreation))
i--
- dAtA[i] = 0x12
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xe0
}
- if len(m.AppName) > 0 {
- i -= len(m.AppName)
- copy(dAtA[i:], m.AppName)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.AppName)))
+ if m.TruncationThreadPoolSize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.TruncationThreadPoolSize))
i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xd8
}
- return len(dAtA) - i, nil
-}
-
-func (m *Info_Node) MarshalVT() (dAtA []byte, err error) {
- if m == nil {
- return nil, nil
+ if len(m.SeedType) > 0 {
+ i -= len(m.SeedType)
+ copy(dAtA[i:], m.SeedType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SeedType)))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xd2
}
- size := m.SizeVT()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBufferVT(dAtA[:size])
- if err != nil {
- return nil, err
+ if m.SeedSize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.SeedSize))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xc8
}
- return dAtA[:n], nil
-}
-
-func (m *Info_Node) MarshalToVT(dAtA []byte) (int, error) {
- size := m.SizeVT()
- return m.MarshalToSizedBufferVT(dAtA[:size])
-}
-
-func (m *Info_Node) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
- if m == nil {
- return 0, nil
+ if m.InsertionRadiusCoefficient != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.InsertionRadiusCoefficient))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xc1
}
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.unknownFields != nil {
- i -= len(m.unknownFields)
- copy(dAtA[i:], m.unknownFields)
+ if m.EdgeSizeLimitForCreation != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.EdgeSizeLimitForCreation))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xb8
}
- if m.Pods != nil {
- size, err := m.Pods.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ if m.EdgeSizeForSearch != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.EdgeSizeForSearch))
i--
- dAtA[i] = 0x32
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xb0
}
- if m.Memory != nil {
- size, err := m.Memory.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ if m.EdgeSizeForCreation != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.EdgeSizeForCreation))
i--
- dAtA[i] = 0x2a
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xa8
}
- if m.Cpu != nil {
- size, err := m.Cpu.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ if m.TruncationThreshold != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.TruncationThreshold))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xa0
+ }
+ if len(m.RefinementObjectType) > 0 {
+ i -= len(m.RefinementObjectType)
+ copy(dAtA[i:], m.RefinementObjectType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.RefinementObjectType)))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x9a
+ }
+ if m.EpsilonForInsertionOrder != 0 {
+ i -= 4
+ binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.EpsilonForInsertionOrder))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x95
+ }
+ if m.NOfNeighborsForInsertionOrder != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.NOfNeighborsForInsertionOrder))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x88
+ }
+ if m.MaxMagnitude != 0 {
+ i -= 4
+ binary.LittleEndian.PutUint32(dAtA[i:], uint32(math.Float32bits(float32(m.MaxMagnitude))))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x85
+ }
+ if len(m.SearchType) > 0 {
+ i -= len(m.SearchType)
+ copy(dAtA[i:], m.SearchType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SearchType)))
+ i--
+ dAtA[i] = 0x7a
+ }
+ if len(m.AccuracyTable) > 0 {
+ i -= len(m.AccuracyTable)
+ copy(dAtA[i:], m.AccuracyTable)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.AccuracyTable)))
+ i--
+ dAtA[i] = 0x72
+ }
+ if m.PrefetchSize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PrefetchSize))
+ i--
+ dAtA[i] = 0x68
+ }
+ if m.PrefetchOffset != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PrefetchOffset))
+ i--
+ dAtA[i] = 0x60
+ }
+ if m.ObjectSharedMemorySize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ObjectSharedMemorySize))
+ i--
+ dAtA[i] = 0x58
+ }
+ if m.TreeSharedMemorySize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.TreeSharedMemorySize))
+ i--
+ dAtA[i] = 0x50
+ }
+ if m.GraphSharedMemorySize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.GraphSharedMemorySize))
+ i--
+ dAtA[i] = 0x48
+ }
+ if m.PathAdjustmentInterval != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.PathAdjustmentInterval))
+ i--
+ dAtA[i] = 0x40
+ }
+ if len(m.ObjectAlignment) > 0 {
+ i -= len(m.ObjectAlignment)
+ copy(dAtA[i:], m.ObjectAlignment)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ObjectAlignment)))
+ i--
+ dAtA[i] = 0x3a
+ }
+ if len(m.DatabaseType) > 0 {
+ i -= len(m.DatabaseType)
+ copy(dAtA[i:], m.DatabaseType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.DatabaseType)))
+ i--
+ dAtA[i] = 0x32
+ }
+ if len(m.IndexType) > 0 {
+ i -= len(m.IndexType)
+ copy(dAtA[i:], m.IndexType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.IndexType)))
+ i--
+ dAtA[i] = 0x2a
+ }
+ if len(m.DistanceType) > 0 {
+ i -= len(m.DistanceType)
+ copy(dAtA[i:], m.DistanceType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.DistanceType)))
i--
dAtA[i] = 0x22
}
- if len(m.ExternalAddr) > 0 {
- i -= len(m.ExternalAddr)
- copy(dAtA[i:], m.ExternalAddr)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ExternalAddr)))
+ if len(m.ObjectType) > 0 {
+ i -= len(m.ObjectType)
+ copy(dAtA[i:], m.ObjectType)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ObjectType)))
i--
dAtA[i] = 0x1a
}
- if len(m.InternalAddr) > 0 {
- i -= len(m.InternalAddr)
- copy(dAtA[i:], m.InternalAddr)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.InternalAddr)))
+ if m.ThreadPoolSize != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.ThreadPoolSize))
i--
- dAtA[i] = 0x12
+ dAtA[i] = 0x10
}
- if len(m.Name) > 0 {
- i -= len(m.Name)
- copy(dAtA[i:], m.Name)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
+ if m.Dimension != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Dimension))
i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
-func (m *Info_Service) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Index_PropertyDetail) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9754,12 +10084,12 @@ func (m *Info_Service) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Service) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Index_PropertyDetail) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Service) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Index_PropertyDetail) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9771,65 +10101,32 @@ func (m *Info_Service) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Annotations != nil {
- size, err := m.Annotations.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x32
- }
- if m.Labels != nil {
- size, err := m.Labels.MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
- i--
- dAtA[i] = 0x2a
- }
- if len(m.Ports) > 0 {
- for iNdEx := len(m.Ports) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Ports[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if len(m.Details) > 0 {
+ for k := range m.Details {
+ v := m.Details[k]
+ baseI := i
+ size, err := v.MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
i--
- dAtA[i] = 0x22
- }
- }
- if len(m.ClusterIps) > 0 {
- for iNdEx := len(m.ClusterIps) - 1; iNdEx >= 0; iNdEx-- {
- i -= len(m.ClusterIps[iNdEx])
- copy(dAtA[i:], m.ClusterIps[iNdEx])
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ClusterIps[iNdEx])))
+ dAtA[i] = 0x12
+ i -= len(k)
+ copy(dAtA[i:], k)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(k)))
i--
- dAtA[i] = 0x1a
+ dAtA[i] = 0xa
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(baseI-i))
+ i--
+ dAtA[i] = 0xa
}
}
- if len(m.ClusterIp) > 0 {
- i -= len(m.ClusterIp)
- copy(dAtA[i:], m.ClusterIp)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ClusterIp)))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Name) > 0 {
- i -= len(m.Name)
- copy(dAtA[i:], m.Name)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
- i--
- dAtA[i] = 0xa
- }
return len(dAtA) - i, nil
}
-func (m *Info_ServicePort) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Index) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9842,12 +10139,12 @@ func (m *Info_ServicePort) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_ServicePort) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Index) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_ServicePort) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Index) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9859,22 +10156,10 @@ func (m *Info_ServicePort) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Port != 0 {
- i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Port))
- i--
- dAtA[i] = 0x10
- }
- if len(m.Name) > 0 {
- i -= len(m.Name)
- copy(dAtA[i:], m.Name)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
- i--
- dAtA[i] = 0xa
- }
return len(dAtA) - i, nil
}
-func (m *Info_Labels) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Pod) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9887,12 +10172,12 @@ func (m *Info_Labels) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Labels) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Pod) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Labels) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Pod) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9904,29 +10189,68 @@ func (m *Info_Labels) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Labels) > 0 {
- for k := range m.Labels {
- v := m.Labels[k]
- baseI := i
- i -= len(v)
- copy(dAtA[i:], v)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(v)))
- i--
- dAtA[i] = 0x12
- i -= len(k)
- copy(dAtA[i:], k)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(k)))
- i--
- dAtA[i] = 0xa
- i = protohelpers.EncodeVarint(dAtA, i, uint64(baseI-i))
- i--
- dAtA[i] = 0xa
+ if m.Node != nil {
+ size, err := m.Node.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
}
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x3a
}
- return len(dAtA) - i, nil
-}
-
-func (m *Info_Annotations) MarshalVT() (dAtA []byte, err error) {
+ if m.Memory != nil {
+ size, err := m.Memory.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x32
+ }
+ if m.Cpu != nil {
+ size, err := m.Cpu.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x2a
+ }
+ if len(m.Ip) > 0 {
+ i -= len(m.Ip)
+ copy(dAtA[i:], m.Ip)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ip)))
+ i--
+ dAtA[i] = 0x22
+ }
+ if len(m.Namespace) > 0 {
+ i -= len(m.Namespace)
+ copy(dAtA[i:], m.Namespace)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Namespace)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if len(m.Name) > 0 {
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.AppName) > 0 {
+ i -= len(m.AppName)
+ copy(dAtA[i:], m.AppName)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.AppName)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Node) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9939,12 +10263,12 @@ func (m *Info_Annotations) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Annotations) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Node) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Annotations) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Node) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -9956,29 +10280,61 @@ func (m *Info_Annotations) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Annotations) > 0 {
- for k := range m.Annotations {
- v := m.Annotations[k]
- baseI := i
- i -= len(v)
- copy(dAtA[i:], v)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(v)))
- i--
- dAtA[i] = 0x12
- i -= len(k)
- copy(dAtA[i:], k)
- i = protohelpers.EncodeVarint(dAtA, i, uint64(len(k)))
- i--
- dAtA[i] = 0xa
- i = protohelpers.EncodeVarint(dAtA, i, uint64(baseI-i))
- i--
- dAtA[i] = 0xa
+ if m.Pods != nil {
+ size, err := m.Pods.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x32
+ }
+ if m.Memory != nil {
+ size, err := m.Memory.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x2a
+ }
+ if m.Cpu != nil {
+ size, err := m.Cpu.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
}
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x22
+ }
+ if len(m.ExternalAddr) > 0 {
+ i -= len(m.ExternalAddr)
+ copy(dAtA[i:], m.ExternalAddr)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ExternalAddr)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if len(m.InternalAddr) > 0 {
+ i -= len(m.InternalAddr)
+ copy(dAtA[i:], m.InternalAddr)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.InternalAddr)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Name) > 0 {
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
+ i--
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Info_CPU) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Service) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -9991,12 +10347,12 @@ func (m *Info_CPU) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_CPU) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Service) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_CPU) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Service) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -10008,28 +10364,65 @@ func (m *Info_CPU) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Usage != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Usage))))
+ if m.Annotations != nil {
+ size, err := m.Annotations.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
i--
- dAtA[i] = 0x19
+ dAtA[i] = 0x32
}
- if m.Request != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Request))))
+ if m.Labels != nil {
+ size, err := m.Labels.MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
i--
- dAtA[i] = 0x11
+ dAtA[i] = 0x2a
}
- if m.Limit != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Limit))))
+ if len(m.Ports) > 0 {
+ for iNdEx := len(m.Ports) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Ports[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x22
+ }
+ }
+ if len(m.ClusterIps) > 0 {
+ for iNdEx := len(m.ClusterIps) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.ClusterIps[iNdEx])
+ copy(dAtA[i:], m.ClusterIps[iNdEx])
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ClusterIps[iNdEx])))
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ if len(m.ClusterIp) > 0 {
+ i -= len(m.ClusterIp)
+ copy(dAtA[i:], m.ClusterIp)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.ClusterIp)))
i--
- dAtA[i] = 0x9
+ dAtA[i] = 0x12
+ }
+ if len(m.Name) > 0 {
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
+ i--
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Info_Memory) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_ServicePort) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -10042,12 +10435,12 @@ func (m *Info_Memory) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Memory) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_ServicePort) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Memory) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_ServicePort) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -10059,28 +10452,22 @@ func (m *Info_Memory) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if m.Usage != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Usage))))
- i--
- dAtA[i] = 0x19
- }
- if m.Request != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Request))))
+ if m.Port != 0 {
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(m.Port))
i--
- dAtA[i] = 0x11
+ dAtA[i] = 0x10
}
- if m.Limit != 0 {
- i -= 8
- binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Limit))))
+ if len(m.Name) > 0 {
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Name)))
i--
- dAtA[i] = 0x9
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Info_Pods) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Labels) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -10093,12 +10480,12 @@ func (m *Info_Pods) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Pods) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Labels) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Pods) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Labels) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -10110,14 +10497,21 @@ func (m *Info_Pods) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Pods) > 0 {
- for iNdEx := len(m.Pods) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Pods[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ if len(m.Labels) > 0 {
+ for k := range m.Labels {
+ v := m.Labels[k]
+ baseI := i
+ i -= len(v)
+ copy(dAtA[i:], v)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(v)))
+ i--
+ dAtA[i] = 0x12
+ i -= len(k)
+ copy(dAtA[i:], k)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(k)))
+ i--
+ dAtA[i] = 0xa
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(baseI-i))
i--
dAtA[i] = 0xa
}
@@ -10125,7 +10519,7 @@ func (m *Info_Pods) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *Info_Nodes) MarshalVT() (dAtA []byte, err error) {
+func (m *Info_Annotations) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
@@ -10138,12 +10532,12 @@ func (m *Info_Nodes) MarshalVT() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Info_Nodes) MarshalToVT(dAtA []byte) (int, error) {
+func (m *Info_Annotations) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
-func (m *Info_Nodes) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+func (m *Info_Annotations) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
@@ -10155,9 +10549,208 @@ func (m *Info_Nodes) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
- if len(m.Nodes) > 0 {
- for iNdEx := len(m.Nodes) - 1; iNdEx >= 0; iNdEx-- {
- size, err := m.Nodes[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if len(m.Annotations) > 0 {
+ for k := range m.Annotations {
+ v := m.Annotations[k]
+ baseI := i
+ i -= len(v)
+ copy(dAtA[i:], v)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(v)))
+ i--
+ dAtA[i] = 0x12
+ i -= len(k)
+ copy(dAtA[i:], k)
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(len(k)))
+ i--
+ dAtA[i] = 0xa
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(baseI-i))
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_CPU) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Info_CPU) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_CPU) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if m.Usage != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Usage))))
+ i--
+ dAtA[i] = 0x19
+ }
+ if m.Request != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Request))))
+ i--
+ dAtA[i] = 0x11
+ }
+ if m.Limit != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Limit))))
+ i--
+ dAtA[i] = 0x9
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Memory) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Info_Memory) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_Memory) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if m.Usage != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Usage))))
+ i--
+ dAtA[i] = 0x19
+ }
+ if m.Request != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Request))))
+ i--
+ dAtA[i] = 0x11
+ }
+ if m.Limit != 0 {
+ i -= 8
+ binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Limit))))
+ i--
+ dAtA[i] = 0x9
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Pods) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Info_Pods) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_Pods) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if len(m.Pods) > 0 {
+ for iNdEx := len(m.Pods) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Pods[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = protohelpers.EncodeVarint(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Info_Nodes) MarshalVT() (dAtA []byte, err error) {
+ if m == nil {
+ return nil, nil
+ }
+ size := m.SizeVT()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Info_Nodes) MarshalToVT(dAtA []byte) (int, error) {
+ size := m.SizeVT()
+ return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Info_Nodes) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+ if m == nil {
+ return 0, nil
+ }
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.unknownFields != nil {
+ i -= len(m.unknownFields)
+ copy(dAtA[i:], m.unknownFields)
+ }
+ if len(m.Nodes) > 0 {
+ for iNdEx := len(m.Nodes) - 1; iNdEx >= 0; iNdEx-- {
+ size, err := m.Nodes[iNdEx].MarshalToSizedBufferVT(dAtA[:i])
if err != nil {
return 0, err
}
@@ -10742,13 +11335,17 @@ func (m *Search_Config) SizeVT() (n int) {
if m.Timeout != 0 {
n += 1 + protohelpers.SizeOfVarint(uint64(m.Timeout))
}
- if m.IngressFilters != nil {
- l = m.IngressFilters.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ if len(m.IngressFilters) > 0 {
+ for _, e := range m.IngressFilters {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
- if m.EgressFilters != nil {
- l = m.EgressFilters.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ if len(m.EgressFilters) > 0 {
+ for _, e := range m.EgressFilters {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
if m.MinNum != 0 {
n += 1 + protohelpers.SizeOfVarint(uint64(m.MinNum))
@@ -10875,58 +11472,66 @@ func (m *Filter_Target) SizeVT() (n int) {
return n
}
-func (m *Filter_Config) SizeVT() (n int) {
+func (m *Filter_Query) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
- if len(m.Targets) > 0 {
- for _, e := range m.Targets {
- l = e.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
- }
+ l = len(m.Query)
+ if l > 0 {
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
n += len(m.unknownFields)
return n
}
-func (m *Filter) SizeVT() (n int) {
+func (m *Filter_Config) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
+ if m.Target != nil {
+ l = m.Target.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ if m.Query != nil {
+ l = m.Query.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
n += len(m.unknownFields)
return n
}
-func (m *Insert_Request) SizeVT() (n int) {
+func (m *Filter_DistanceRequest) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
- if m.Vector != nil {
- l = m.Vector.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ if len(m.Distance) > 0 {
+ for _, e := range m.Distance {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
- if m.Config != nil {
- l = m.Config.SizeVT()
+ if m.Query != nil {
+ l = m.Query.SizeVT()
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
n += len(m.unknownFields)
return n
}
-func (m *Insert_MultiRequest) SizeVT() (n int) {
+func (m *Filter_DistanceResponse) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
- if len(m.Requests) > 0 {
- for _, e := range m.Requests {
+ if len(m.Distance) > 0 {
+ for _, e := range m.Distance {
l = e.SizeVT()
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
@@ -10935,39 +11540,115 @@ func (m *Insert_MultiRequest) SizeVT() (n int) {
return n
}
-func (m *Insert_ObjectRequest) SizeVT() (n int) {
+func (m *Filter_VectorRequest) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
- if m.Object != nil {
- l = m.Object.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
- }
- if m.Config != nil {
- l = m.Config.SizeVT()
+ if m.Vector != nil {
+ l = m.Vector.SizeVT()
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
- if m.Vectorizer != nil {
- l = m.Vectorizer.SizeVT()
+ if m.Query != nil {
+ l = m.Query.SizeVT()
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
n += len(m.unknownFields)
return n
}
-func (m *Insert_MultiObjectRequest) SizeVT() (n int) {
+func (m *Filter_VectorResponse) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
- if len(m.Requests) > 0 {
- for _, e := range m.Requests {
- l = e.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
- }
+ if m.Vector != nil {
+ l = m.Vector.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ n += len(m.unknownFields)
+ return n
+}
+
+func (m *Filter) SizeVT() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ n += len(m.unknownFields)
+ return n
+}
+
+func (m *Insert_Request) SizeVT() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Vector != nil {
+ l = m.Vector.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ if m.Config != nil {
+ l = m.Config.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ n += len(m.unknownFields)
+ return n
+}
+
+func (m *Insert_MultiRequest) SizeVT() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Requests) > 0 {
+ for _, e := range m.Requests {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ }
+ n += len(m.unknownFields)
+ return n
+}
+
+func (m *Insert_ObjectRequest) SizeVT() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Object != nil {
+ l = m.Object.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ if m.Config != nil {
+ l = m.Config.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ if m.Vectorizer != nil {
+ l = m.Vectorizer.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
+ n += len(m.unknownFields)
+ return n
+}
+
+func (m *Insert_MultiObjectRequest) SizeVT() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Requests) > 0 {
+ for _, e := range m.Requests {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
n += len(m.unknownFields)
return n
@@ -10982,9 +11663,11 @@ func (m *Insert_Config) SizeVT() (n int) {
if m.SkipStrictExistCheck {
n += 2
}
- if m.Filters != nil {
- l = m.Filters.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ if len(m.Filters) > 0 {
+ for _, e := range m.Filters {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
if m.Timestamp != 0 {
n += 1 + protohelpers.SizeOfVarint(uint64(m.Timestamp))
@@ -11104,9 +11787,11 @@ func (m *Update_Config) SizeVT() (n int) {
if m.SkipStrictExistCheck {
n += 2
}
- if m.Filters != nil {
- l = m.Filters.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ if len(m.Filters) > 0 {
+ for _, e := range m.Filters {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
if m.Timestamp != 0 {
n += 1 + protohelpers.SizeOfVarint(uint64(m.Timestamp))
@@ -11209,9 +11894,11 @@ func (m *Upsert_Config) SizeVT() (n int) {
if m.SkipStrictExistCheck {
n += 2
}
- if m.Filters != nil {
- l = m.Filters.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ if len(m.Filters) > 0 {
+ for _, e := range m.Filters {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
if m.Timestamp != 0 {
n += 1 + protohelpers.SizeOfVarint(uint64(m.Timestamp))
@@ -11355,9 +12042,11 @@ func (m *Object_VectorRequest) SizeVT() (n int) {
l = m.Id.SizeVT()
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
- if m.Filters != nil {
- l = m.Filters.SizeVT()
- n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ if len(m.Filters) > 0 {
+ for _, e := range m.Filters {
+ l = e.SizeVT()
+ n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+ }
}
n += len(m.unknownFields)
return n
@@ -12622,23 +13311,457 @@ func (m *Meta) SizeVT() (n int) {
if m == nil {
return 0
}
- var l int
- _ = l
- n += len(m.unknownFields)
- return n
-}
+ var l int
+ _ = l
+ n += len(m.unknownFields)
+ return n
+}
+
+func (m *Empty) SizeVT() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ n += len(m.unknownFields)
+ return n
+}
+
+func (m *Search_Request) UnmarshalVT(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Search_Request: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Search_Request: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType == 5 {
+ var v uint32
+ if (iNdEx + 4) > l {
+ return io.ErrUnexpectedEOF
+ }
+ v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
+ iNdEx += 4
+ v2 := float32(math.Float32frombits(v))
+ m.Vector = append(m.Vector, v2)
+ } else if wireType == 2 {
+ var packedLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ packedLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if packedLen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + packedLen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ var elementCount int
+ elementCount = packedLen / 4
+ if elementCount != 0 && len(m.Vector) == 0 {
+ m.Vector = make([]float32, 0, elementCount)
+ }
+ for iNdEx < postIndex {
+ var v uint32
+ if (iNdEx + 4) > l {
+ return io.ErrUnexpectedEOF
+ }
+ v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
+ iNdEx += 4
+ v2 := float32(math.Float32frombits(v))
+ m.Vector = append(m.Vector, v2)
+ }
+ } else {
+ return fmt.Errorf("proto: wrong wireType = %d for field Vector", wireType)
+ }
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Config", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Config == nil {
+ m.Config = &Search_Config{}
+ }
+ if err := m.Config.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := protohelpers.Skip(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+
+func (m *Search_MultiRequest) UnmarshalVT(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Search_MultiRequest: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Search_MultiRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Requests", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Requests = append(m.Requests, &Search_Request{})
+ if err := m.Requests[len(m.Requests)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := protohelpers.Skip(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+
+func (m *Search_IDRequest) UnmarshalVT(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Search_IDRequest: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Search_IDRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Id = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Config", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Config == nil {
+ m.Config = &Search_Config{}
+ }
+ if err := m.Config.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := protohelpers.Skip(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+
+func (m *Search_MultiIDRequest) UnmarshalVT(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Search_MultiIDRequest: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Search_MultiIDRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Requests", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Requests = append(m.Requests, &Search_IDRequest{})
+ if err := m.Requests[len(m.Requests)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := protohelpers.Skip(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
-func (m *Empty) SizeVT() (n int) {
- if m == nil {
- return 0
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
}
- var l int
- _ = l
- n += len(m.unknownFields)
- return n
+ return nil
}
-func (m *Search_Request) UnmarshalVT(dAtA []byte) error {
+func (m *Search_ObjectRequest) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12661,66 +13784,46 @@ func (m *Search_Request) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_Request: wiretype end group for non-group")
+ return fmt.Errorf("proto: Search_ObjectRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_Request: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Search_ObjectRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType == 5 {
- var v uint32
- if (iNdEx + 4) > l {
- return io.ErrUnexpectedEOF
- }
- v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
- iNdEx += 4
- v2 := float32(math.Float32frombits(v))
- m.Vector = append(m.Vector, v2)
- } else if wireType == 2 {
- var packedLen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- packedLen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if packedLen < 0 {
- return protohelpers.ErrInvalidLength
- }
- postIndex := iNdEx + packedLen
- if postIndex < 0 {
- return protohelpers.ErrInvalidLength
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Object", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
}
- if postIndex > l {
+ if iNdEx >= l {
return io.ErrUnexpectedEOF
}
- var elementCount int
- elementCount = packedLen / 4
- if elementCount != 0 && len(m.Vector) == 0 {
- m.Vector = make([]float32, 0, elementCount)
- }
- for iNdEx < postIndex {
- var v uint32
- if (iNdEx + 4) > l {
- return io.ErrUnexpectedEOF
- }
- v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
- iNdEx += 4
- v2 := float32(math.Float32frombits(v))
- m.Vector = append(m.Vector, v2)
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- } else {
- return fmt.Errorf("proto: wrong wireType = %d for field Vector", wireType)
}
+ if byteLen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Object = append(m.Object[:0], dAtA[iNdEx:postIndex]...)
+ if m.Object == nil {
+ m.Object = []byte{}
+ }
+ iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Config", wireType)
@@ -12757,6 +13860,42 @@ func (m *Search_Request) UnmarshalVT(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Vectorizer", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Vectorizer == nil {
+ m.Vectorizer = &Filter_Target{}
+ }
+ if err := m.Vectorizer.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
@@ -12780,7 +13919,7 @@ func (m *Search_Request) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_MultiRequest) UnmarshalVT(dAtA []byte) error {
+func (m *Search_MultiObjectRequest) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12803,10 +13942,10 @@ func (m *Search_MultiRequest) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_MultiRequest: wiretype end group for non-group")
+ return fmt.Errorf("proto: Search_MultiObjectRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_MultiRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Search_MultiObjectRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
@@ -12838,7 +13977,7 @@ func (m *Search_MultiRequest) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Requests = append(m.Requests, &Search_Request{})
+ m.Requests = append(m.Requests, &Search_ObjectRequest{})
if err := m.Requests[len(m.Requests)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -12866,7 +14005,7 @@ func (m *Search_MultiRequest) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_IDRequest) UnmarshalVT(dAtA []byte) error {
+func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12876,30 +14015,122 @@ func (m *Search_IDRequest) UnmarshalVT(dAtA []byte) error {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
- if iNdEx >= l {
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Search_Config: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Search_Config: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RequestId", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.RequestId = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Num", wireType)
+ }
+ m.Num = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Num |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 5 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Radius", wireType)
+ }
+ var v uint32
+ if (iNdEx + 4) > l {
+ return io.ErrUnexpectedEOF
+ }
+ v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
+ iNdEx += 4
+ m.Radius = float32(math.Float32frombits(v))
+ case 4:
+ if wireType != 5 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Epsilon", wireType)
+ }
+ var v uint32
+ if (iNdEx + 4) > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
+ iNdEx += 4
+ m.Epsilon = float32(math.Float32frombits(v))
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Timeout", wireType)
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: Search_IDRequest: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_IDRequest: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ m.Timeout = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Timeout |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 6:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field IngressFilters", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
@@ -12909,27 +14140,29 @@ func (m *Search_IDRequest) UnmarshalVT(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return protohelpers.ErrInvalidLength
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Id = string(dAtA[iNdEx:postIndex])
+ m.IngressFilters = append(m.IngressFilters, &Filter_Config{})
+ if err := m.IngressFilters[len(m.IngressFilters)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
- case 2:
+ case 7:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Config", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field EgressFilters", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -12956,68 +14189,52 @@ func (m *Search_IDRequest) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Config == nil {
- m.Config = &Search_Config{}
- }
- if err := m.Config.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ m.EgressFilters = append(m.EgressFilters, &Filter_Config{})
+ if err := m.EgressFilters[len(m.EgressFilters)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := protohelpers.Skip(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return protohelpers.ErrInvalidLength
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
+ case 8:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field MinNum", wireType)
}
- m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-
-func (m *Search_MultiIDRequest) UnmarshalVT(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
+ m.MinNum = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.MinNum |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
+ case 9:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field AggregationAlgorithm", wireType)
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ m.AggregationAlgorithm = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.AggregationAlgorithm |= Search_AggregationAlgorithm(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: Search_MultiIDRequest: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_MultiIDRequest: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ case 10:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Requests", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Ratio", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -13044,11 +14261,32 @@ func (m *Search_MultiIDRequest) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Requests = append(m.Requests, &Search_IDRequest{})
- if err := m.Requests[len(m.Requests)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ if m.Ratio == nil {
+ m.Ratio = &wrapperspb.FloatValue{}
+ }
+ if err := (*wrapperspb1.FloatValue)(m.Ratio).UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
+ case 11:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Nprobe", wireType)
+ }
+ m.Nprobe = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Nprobe |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
@@ -13072,7 +14310,7 @@ func (m *Search_MultiIDRequest) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_ObjectRequest) UnmarshalVT(dAtA []byte) error {
+func (m *Search_Response) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -13095,17 +14333,17 @@ func (m *Search_ObjectRequest) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_ObjectRequest: wiretype end group for non-group")
+ return fmt.Errorf("proto: Search_Response: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_ObjectRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Search_Response: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Object", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field RequestId", wireType)
}
- var byteLen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
@@ -13115,65 +14353,27 @@ func (m *Search_ObjectRequest) UnmarshalVT(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Object = append(m.Object[:0], dAtA[iNdEx:postIndex]...)
- if m.Object == nil {
- m.Object = []byte{}
- }
+ m.RequestId = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Config", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return protohelpers.ErrInvalidLength
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return protohelpers.ErrInvalidLength
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Config == nil {
- m.Config = &Search_Config{}
- }
- if err := m.Config.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 3:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Vectorizer", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Results", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -13200,10 +14400,8 @@ func (m *Search_ObjectRequest) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Vectorizer == nil {
- m.Vectorizer = &Filter_Target{}
- }
- if err := m.Vectorizer.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ m.Results = append(m.Results, &Object_Distance{})
+ if err := m.Results[len(m.Results)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -13230,7 +14428,7 @@ func (m *Search_ObjectRequest) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_MultiObjectRequest) UnmarshalVT(dAtA []byte) error {
+func (m *Search_Responses) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -13253,15 +14451,15 @@ func (m *Search_MultiObjectRequest) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_MultiObjectRequest: wiretype end group for non-group")
+ return fmt.Errorf("proto: Search_Responses: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_MultiObjectRequest: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Search_Responses: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Requests", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Responses", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -13288,8 +14486,8 @@ func (m *Search_MultiObjectRequest) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Requests = append(m.Requests, &Search_ObjectRequest{})
- if err := m.Requests[len(m.Requests)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ m.Responses = append(m.Responses, &Search_Response{})
+ if err := m.Responses[len(m.Responses)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -13316,7 +14514,7 @@ func (m *Search_MultiObjectRequest) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
+func (m *Search_StreamResponse) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -13339,17 +14537,17 @@ func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_Config: wiretype end group for non-group")
+ return fmt.Errorf("proto: Search_StreamResponse: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_Config: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Search_StreamResponse: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field RequestId", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Response", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
@@ -13359,87 +14557,36 @@ func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return protohelpers.ErrInvalidLength
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.RequestId = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Num", wireType)
- }
- m.Num = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Num |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 5 {
- return fmt.Errorf("proto: wrong wireType = %d for field Radius", wireType)
- }
- var v uint32
- if (iNdEx + 4) > l {
- return io.ErrUnexpectedEOF
- }
- v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
- iNdEx += 4
- m.Radius = float32(math.Float32frombits(v))
- case 4:
- if wireType != 5 {
- return fmt.Errorf("proto: wrong wireType = %d for field Epsilon", wireType)
- }
- var v uint32
- if (iNdEx + 4) > l {
- return io.ErrUnexpectedEOF
- }
- v = uint32(binary.LittleEndian.Uint32(dAtA[iNdEx:]))
- iNdEx += 4
- m.Epsilon = float32(math.Float32frombits(v))
- case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Timeout", wireType)
- }
- m.Timeout = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
+ if oneof, ok := m.Payload.(*Search_StreamResponse_Response); ok {
+ if err := oneof.Response.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- b := dAtA[iNdEx]
- iNdEx++
- m.Timeout |= int64(b&0x7F) << shift
- if b < 0x80 {
- break
+ } else {
+ v := &Search_Response{}
+ if err := v.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
+ m.Payload = &Search_StreamResponse_Response{Response: v}
}
- case 6:
+ iNdEx = postIndex
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IngressFilters", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -13466,92 +14613,143 @@ func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.IngressFilters == nil {
- m.IngressFilters = &Filter_Config{}
- }
- if err := m.IngressFilters.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 7:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field EgressFilters", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
+ if oneof, ok := m.Payload.(*Search_StreamResponse_Status); ok {
+ if unmarshal, ok := any(oneof.Status).(interface {
+ UnmarshalVT([]byte) error
+ }); ok {
+ if err := unmarshal.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ } else {
+ if err := proto.Unmarshal(dAtA[iNdEx:postIndex], oneof.Status); err != nil {
+ return err
+ }
}
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
+ } else {
+ v := &status.Status{}
+ if unmarshal, ok := any(v).(interface {
+ UnmarshalVT([]byte) error
+ }); ok {
+ if err := unmarshal.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ } else {
+ if err := proto.Unmarshal(dAtA[iNdEx:postIndex], v); err != nil {
+ return err
+ }
}
+ m.Payload = &Search_StreamResponse_Status{Status: v}
}
- if msglen < 0 {
- return protohelpers.ErrInvalidLength
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := protohelpers.Skip(dAtA[iNdEx:])
+ if err != nil {
+ return err
}
- postIndex := iNdEx + msglen
- if postIndex < 0 {
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
return protohelpers.ErrInvalidLength
}
- if postIndex > l {
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+
+func (m *Search) UnmarshalVT(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
return io.ErrUnexpectedEOF
}
- if m.EgressFilters == nil {
- m.EgressFilters = &Filter_Config{}
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- if err := m.EgressFilters.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Search: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Search: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ default:
+ iNdEx = preIndex
+ skippy, err := protohelpers.Skip(dAtA[iNdEx:])
+ if err != nil {
return err
}
- iNdEx = postIndex
- case 8:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field MinNum", wireType)
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return protohelpers.ErrInvalidLength
}
- m.MinNum = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.MinNum |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
}
- case 9:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field AggregationAlgorithm", wireType)
+ m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+
+func (m *Filter_Target) UnmarshalVT(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
}
- m.AggregationAlgorithm = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.AggregationAlgorithm |= Search_AggregationAlgorithm(b&0x7F) << shift
- if b < 0x80 {
- break
- }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- case 10:
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Filter_Target: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Filter_Target: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Ratio", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Host", wireType)
}
- var msglen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
@@ -13561,33 +14759,29 @@ func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Ratio == nil {
- m.Ratio = &wrapperspb.FloatValue{}
- }
- if err := (*wrapperspb1.FloatValue)(m.Ratio).UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
+ m.Host = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 11:
+ case 2:
if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Nprobe", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType)
}
- m.Nprobe = 0
+ m.Port = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
@@ -13597,7 +14791,7 @@ func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Nprobe |= uint32(b&0x7F) << shift
+ m.Port |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -13625,7 +14819,7 @@ func (m *Search_Config) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_Response) UnmarshalVT(dAtA []byte) error {
+func (m *Filter_Query) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -13648,15 +14842,15 @@ func (m *Search_Response) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_Response: wiretype end group for non-group")
+ return fmt.Errorf("proto: Filter_Query: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_Response: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Filter_Query: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field RequestId", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -13684,41 +14878,7 @@ func (m *Search_Response) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.RequestId = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Results", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return protohelpers.ErrIntOverflow
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return protohelpers.ErrInvalidLength
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return protohelpers.ErrInvalidLength
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Results = append(m.Results, &Object_Distance{})
- if err := m.Results[len(m.Results)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
+ m.Query = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -13743,7 +14903,7 @@ func (m *Search_Response) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_Responses) UnmarshalVT(dAtA []byte) error {
+func (m *Filter_Config) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -13766,15 +14926,15 @@ func (m *Search_Responses) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_Responses: wiretype end group for non-group")
+ return fmt.Errorf("proto: Filter_Config: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_Responses: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Filter_Config: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Responses", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Target", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -13801,8 +14961,46 @@ func (m *Search_Responses) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Responses = append(m.Responses, &Search_Response{})
- if err := m.Responses[len(m.Responses)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ if m.Target == nil {
+ m.Target = &Filter_Target{}
+ }
+ if err := m.Target.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Query == nil {
+ m.Query = &Filter_Query{}
+ }
+ if err := m.Query.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -13829,7 +15027,7 @@ func (m *Search_Responses) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search_StreamResponse) UnmarshalVT(dAtA []byte) error {
+func (m *Filter_DistanceRequest) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -13852,15 +15050,15 @@ func (m *Search_StreamResponse) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search_StreamResponse: wiretype end group for non-group")
+ return fmt.Errorf("proto: Filter_DistanceRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search_StreamResponse: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Filter_DistanceRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Response", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Distance", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -13887,21 +15085,14 @@ func (m *Search_StreamResponse) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if oneof, ok := m.Payload.(*Search_StreamResponse_Response); ok {
- if err := oneof.Response.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- } else {
- v := &Search_Response{}
- if err := v.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Payload = &Search_StreamResponse_Response{Response: v}
+ m.Distance = append(m.Distance, &Object_Distance{})
+ if err := m.Distance[len(m.Distance)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
iNdEx = postIndex
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -13928,32 +15119,11 @@ func (m *Search_StreamResponse) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if oneof, ok := m.Payload.(*Search_StreamResponse_Status); ok {
- if unmarshal, ok := any(oneof.Status).(interface {
- UnmarshalVT([]byte) error
- }); ok {
- if err := unmarshal.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- } else {
- if err := proto.Unmarshal(dAtA[iNdEx:postIndex], oneof.Status); err != nil {
- return err
- }
- }
- } else {
- v := &status.Status{}
- if unmarshal, ok := any(v).(interface {
- UnmarshalVT([]byte) error
- }); ok {
- if err := unmarshal.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- } else {
- if err := proto.Unmarshal(dAtA[iNdEx:postIndex], v); err != nil {
- return err
- }
- }
- m.Payload = &Search_StreamResponse_Status{Status: v}
+ if m.Query == nil {
+ m.Query = &Filter_Query{}
+ }
+ if err := m.Query.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
iNdEx = postIndex
default:
@@ -13979,7 +15149,7 @@ func (m *Search_StreamResponse) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Search) UnmarshalVT(dAtA []byte) error {
+func (m *Filter_DistanceResponse) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -14002,12 +15172,46 @@ func (m *Search) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Search: wiretype end group for non-group")
+ return fmt.Errorf("proto: Filter_DistanceResponse: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Search: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Filter_DistanceResponse: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Distance", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return protohelpers.ErrIntOverflow
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Distance = append(m.Distance, &Object_Distance{})
+ if err := m.Distance[len(m.Distance)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
@@ -14031,7 +15235,7 @@ func (m *Search) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Filter_Target) UnmarshalVT(dAtA []byte) error {
+func (m *Filter_VectorRequest) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -14054,17 +15258,17 @@ func (m *Filter_Target) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Filter_Target: wiretype end group for non-group")
+ return fmt.Errorf("proto: Filter_VectorRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Filter_Target: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Filter_VectorRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Host", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Vector", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
@@ -14074,29 +15278,33 @@ func (m *Filter_Target) UnmarshalVT(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return protohelpers.ErrInvalidLength
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Host = string(dAtA[iNdEx:postIndex])
+ if m.Vector == nil {
+ m.Vector = &Object_Vector{}
+ }
+ if err := m.Vector.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType)
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
}
- m.Port = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
@@ -14106,11 +15314,28 @@ func (m *Filter_Target) UnmarshalVT(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Port |= uint32(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
+ if msglen < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return protohelpers.ErrInvalidLength
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Query == nil {
+ m.Query = &Filter_Query{}
+ }
+ if err := m.Query.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
@@ -14134,7 +15359,7 @@ func (m *Filter_Target) UnmarshalVT(dAtA []byte) error {
return nil
}
-func (m *Filter_Config) UnmarshalVT(dAtA []byte) error {
+func (m *Filter_VectorResponse) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -14157,15 +15382,15 @@ func (m *Filter_Config) UnmarshalVT(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Filter_Config: wiretype end group for non-group")
+ return fmt.Errorf("proto: Filter_VectorResponse: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Filter_Config: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Filter_VectorResponse: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Targets", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Vector", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -14192,8 +15417,10 @@ func (m *Filter_Config) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Targets = append(m.Targets, &Filter_Target{})
- if err := m.Targets[len(m.Targets)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ if m.Vector == nil {
+ m.Vector = &Object_Vector{}
+ }
+ if err := m.Vector.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -14806,10 +16033,8 @@ func (m *Insert_Config) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Filters == nil {
- m.Filters = &Filter_Config{}
- }
- if err := m.Filters.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ m.Filters = append(m.Filters, &Filter_Config{})
+ if err := m.Filters[len(m.Filters)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -15564,10 +16789,8 @@ func (m *Update_Config) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Filters == nil {
- m.Filters = &Filter_Config{}
- }
- if err := m.Filters.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ m.Filters = append(m.Filters, &Filter_Config{})
+ if err := m.Filters[len(m.Filters)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -16219,10 +17442,8 @@ func (m *Upsert_Config) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Filters == nil {
- m.Filters = &Filter_Config{}
- }
- if err := m.Filters.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ m.Filters = append(m.Filters, &Filter_Config{})
+ if err := m.Filters[len(m.Filters)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -17067,10 +18288,8 @@ func (m *Object_VectorRequest) UnmarshalVT(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Filters == nil {
- m.Filters = &Filter_Config{}
- }
- if err := m.Filters.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
+ m.Filters = append(m.Filters, &Filter_Config{})
+ if err := m.Filters[len(m.Filters)-1].UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
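Note: the regenerated `UnmarshalVT` bodies above change `filters`, `ingress_filters`, and `egress_filters` from lazily allocating a single `Filter_Config` into appending one element per occurrence on the wire, which is how vtprotobuf decodes `repeated` message fields. A minimal sketch of that append-then-unmarshal pattern, using a placeholder `Config` type rather than the generated message:

```go
package example

// Config stands in for a generated message such as Filter_Config; the real
// type carries a vtprotobuf-generated UnmarshalVT.
type Config struct{ raw []byte }

func (c *Config) UnmarshalVT(b []byte) error {
	c.raw = append([]byte(nil), b...)
	return nil
}

// appendOccurrence mirrors the regenerated pattern for repeated message
// fields: append a fresh element, then unmarshal the length-delimited wire
// bytes into it, so every occurrence of the field becomes one slice entry.
func appendOccurrence(dst []*Config, wire []byte) ([]*Config, error) {
	dst = append(dst, &Config{})
	if err := dst[len(dst)-1].UnmarshalVT(wire); err != nil {
		return nil, err
	}
	return dst, nil
}
```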
diff --git a/apis/grpc/v1/rpc/errdetails/error_details.pb.go b/apis/grpc/v1/rpc/errdetails/error_details.pb.go
index ff9e1c42cf..1ca92f9554 100644
--- a/apis/grpc/v1/rpc/errdetails/error_details.pb.go
+++ b/apis/grpc/v1/rpc/errdetails/error_details.pb.go
@@ -1108,7 +1108,6 @@ var (
(*durationpb.Duration)(nil), // 15: google.protobuf.Duration
}
)
-
var file_v1_rpc_errdetails_error_details_proto_depIdxs = []int32{
10, // 0: rpc.v1.ErrorInfo.metadata:type_name -> rpc.v1.ErrorInfo.MetadataEntry
15, // 1: rpc.v1.RetryInfo.retry_delay:type_name -> google.protobuf.Duration
diff --git a/apis/proto/v1/filter/egress/egress_filter.proto b/apis/proto/v1/filter/egress/egress_filter.proto
index fd04ea054c..3f3f2dfa8b 100644
--- a/apis/proto/v1/filter/egress/egress_filter.proto
+++ b/apis/proto/v1/filter/egress/egress_filter.proto
@@ -29,7 +29,7 @@ option java_package = "org.vdaas.vald.api.v1.filter.egress";
// Represent the egress filter service.
service Filter {
// Represent the RPC to filter the distance.
- rpc FilterDistance(payload.v1.Object.Distance) returns (payload.v1.Object.Distance) {
+ rpc FilterDistance(payload.v1.Filter.DistanceRequest) returns (payload.v1.Filter.DistanceResponse) {
option (google.api.http) = {
post: "/filter/egress/distance"
body: "*"
@@ -37,7 +37,7 @@ service Filter {
}
// Represent the RPC to filter the vector.
- rpc FilterVector(payload.v1.Object.Vector) returns (payload.v1.Object.Vector) {
+ rpc FilterVector(payload.v1.Filter.VectorRequest) returns (payload.v1.Filter.VectorResponse) {
option (google.api.http) = {
post: "/filter/egress/vector"
body: "*"
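Note: with the signature change above, an egress filter plugin now receives the search candidates together with the raw filter query instead of bare `Object.Distance`/`Object.Vector` messages. A rough sketch of a `FilterDistance` handler built on the message types generated from this proto; the import path is an assumption and the gRPC registration against the generated egress `FilterServer` interface is omitted:

```go
package example

import (
	"context"
	"math"
	"strconv"

	payload "github.com/vdaas/vald/apis/grpc/v1/payload" // assumed import path
)

// distanceFilter treats the raw query string as a maximum distance and
// drops candidates above it; the query semantics are up to the plugin.
type distanceFilter struct{}

func (distanceFilter) FilterDistance(
	_ context.Context, req *payload.Filter_DistanceRequest,
) (*payload.Filter_DistanceResponse, error) {
	// Filter.Query is free-form; pass everything through when it is absent
	// or not parsable as a float threshold.
	limit, err := strconv.ParseFloat(req.GetQuery().GetQuery(), 32)
	if err != nil {
		limit = math.MaxFloat32
	}
	out := make([]*payload.Object_Distance, 0, len(req.GetDistance()))
	for _, d := range req.GetDistance() {
		if float64(d.GetDistance()) <= limit {
			out = append(out, d)
		}
	}
	return &payload.Filter_DistanceResponse{Distance: out}, nil
}
```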
diff --git a/apis/proto/v1/payload/payload.proto b/apis/proto/v1/payload/payload.proto
index 08e55f8f11..2af90fab4d 100644
--- a/apis/proto/v1/payload/payload.proto
+++ b/apis/proto/v1/payload/payload.proto
@@ -88,9 +88,9 @@ message Search {
// Search timeout in nanoseconds.
int64 timeout = 5;
// Ingress filter configurations.
- Filter.Config ingress_filters = 6;
+ repeated Filter.Config ingress_filters = 6;
// Egress filter configurations.
- Filter.Config egress_filters = 7;
+ repeated Filter.Config egress_filters = 7;
// Minimum number of result to be returned.
uint32 min_num = 8 [(buf.validate.field).uint32.gte = 0];
// Aggregation Algorithm
@@ -145,10 +145,46 @@ message Filter {
uint32 port = 2;
}
+ // Represent the filter query.
+ message Query {
+ // The raw query string.
+ string query = 1;
+ }
+
// Represent filter configuration.
message Config {
// Represent the filter target configuration.
- repeated Target targets = 1;
+ Target target = 1;
+ // The target query.
+ Query query = 2;
+ }
+
+ // Represent the ID and distance pair.
+ message DistanceRequest {
+ // Distance
+ repeated Object.Distance distance = 1;
+ // Query
+ Query query = 2;
+ }
+
+ // Represent the ID and distance pair.
+ message DistanceResponse {
+ // Distance
+ repeated Object.Distance distance = 1;
+ }
+
+ // Represent the ID and vector pair.
+ message VectorRequest {
+ // Vector
+ Object.Vector vector = 1;
+ // Query
+ Query query = 2;
+ }
+
+ // Represent the ID and vector pair.
+ message VectorResponse {
+ // Vector
+ Object.Vector vector = 1;
}
}
@@ -189,7 +225,7 @@ message Insert {
// A flag to skip exist check during insert operation.
bool skip_strict_exist_check = 1;
// Filter configurations.
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
// Insert timestamp.
int64 timestamp = 3;
}
@@ -242,7 +278,7 @@ message Update {
// A flag to skip exist check during update operation.
bool skip_strict_exist_check = 1;
// Filter configuration.
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
// Update timestamp.
int64 timestamp = 3;
// A flag to disable balanced update (split remove -> insert operation)
@@ -288,7 +324,7 @@ message Upsert {
// A flag to skip exist check during upsert operation.
bool skip_strict_exist_check = 1;
// Filter configuration.
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
// Upsert timestamp.
int64 timestamp = 3;
// A flag to disable balanced update (split remove -> insert operation)
@@ -366,7 +402,7 @@ message Object {
// The vector ID to be fetched.
ID id = 1 [(buf.validate.field).repeated.min_items = 2];
// Filter configurations.
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
}
// Represent the ID and distance pair.
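Note: on the caller side, each `Filter.Config` now pairs one `Filter.Target` with an optional `Filter.Query`, and the request-level `filters`/`ingress_filters`/`egress_filters` fields take a list of such configs. A minimal sketch of filling the reshaped fields for a search request; the import path and the numeric values are illustrative assumptions:

```go
package example

import payload "github.com/vdaas/vald/apis/grpc/v1/payload" // assumed import path

// newSearchConfig builds a Search.Config whose ingress and egress filter
// lists each contain one config pointing at a single target plus a query.
func newSearchConfig(host string, port uint32, rawQuery string) *payload.Search_Config {
	cfg := &payload.Filter_Config{
		Target: &payload.Filter_Target{Host: host, Port: port},
		Query:  &payload.Filter_Query{Query: rawQuery},
	}
	return &payload.Search_Config{
		Num:            10,
		Radius:         -1,
		Epsilon:        0.1,
		IngressFilters: []*payload.Filter_Config{cfg},
		EgressFilters:  []*payload.Filter_Config{cfg},
	}
}
```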
diff --git a/apis/swagger/v1/filter/egress/egress_filter.swagger.json b/apis/swagger/v1/filter/egress/egress_filter.swagger.json
index 395887d8e2..1907de3575 100644
--- a/apis/swagger/v1/filter/egress/egress_filter.swagger.json
+++ b/apis/swagger/v1/filter/egress/egress_filter.swagger.json
@@ -20,7 +20,7 @@
"200": {
"description": "A successful response.",
"schema": {
- "$ref": "#/definitions/ObjectDistance"
+ "$ref": "#/definitions/FilterDistanceResponse"
}
},
"default": {
@@ -37,7 +37,7 @@
"in": "body",
"required": true,
"schema": {
- "$ref": "#/definitions/ObjectDistance"
+ "$ref": "#/definitions/FilterDistanceRequest"
}
}
],
@@ -52,7 +52,7 @@
"200": {
"description": "A successful response.",
"schema": {
- "$ref": "#/definitions/ObjectVector"
+ "$ref": "#/definitions/FilterVectorResponse"
}
},
"default": {
@@ -65,11 +65,11 @@
"parameters": [
{
"name": "body",
- "description": "Represent a vector.",
+ "description": "Represent the ID and vector pair.",
"in": "body",
"required": true,
"schema": {
- "$ref": "#/definitions/ObjectVector"
+ "$ref": "#/definitions/v1FilterVectorRequest"
}
}
],
@@ -78,6 +78,58 @@
}
},
"definitions": {
+ "FilterDistanceRequest": {
+ "type": "object",
+ "properties": {
+ "distance": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/ObjectDistance"
+ },
+ "title": "Distance"
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "title": "Query"
+ }
+ },
+ "description": "Represent the ID and distance pair."
+ },
+ "FilterDistanceResponse": {
+ "type": "object",
+ "properties": {
+ "distance": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/ObjectDistance"
+ },
+ "title": "Distance"
+ }
+ },
+ "description": "Represent the ID and distance pair."
+ },
+ "FilterQuery": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The raw query string."
+ }
+ },
+ "description": "Represent the filter query."
+ },
+ "FilterVectorResponse": {
+ "type": "object",
+ "properties": {
+ "vector": {
+ "$ref": "#/definitions/ObjectVector",
+ "title": "Distance"
+ }
+ },
+ "description": "Represent the ID and vector pair."
+ },
"ObjectDistance": {
"type": "object",
"properties": {
@@ -149,6 +201,20 @@
}
},
"description": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). Each `Status` message contains\nthree pieces of data: error code, error message, and error details.\n\nYou can find out more about this error model and how to work with it in the\n[API Design Guide](https://cloud.google.com/apis/design/errors)."
+ },
+ "v1FilterVectorRequest": {
+ "type": "object",
+ "properties": {
+ "vector": {
+ "$ref": "#/definitions/ObjectVector",
+ "title": "Vector"
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "title": "Query"
+ }
+ },
+ "description": "Represent the ID and vector pair."
}
}
}
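Note: through the `google.api.http` bindings the reshaped messages also change the REST bodies described by this swagger: the vector call now nests the vector under `vector` and the raw query under `query.query`. A small sketch of posting such a body; the target address, the `id` value, and the JSON layout of the nested `Object.Vector` are assumptions rather than something stated in the patch:

```go
package example

import (
	"bytes"
	"encoding/json"
	"net/http"
)

// postFilterVector sends a FilterVector request in its REST form to the
// egress filter endpoint exposed by the HTTP annotation.
func postFilterVector(addr string, vec []float32, rawQuery string) (*http.Response, error) {
	body, err := json.Marshal(map[string]any{
		"vector": map[string]any{"id": "example-id", "vector": vec},
		"query":  map[string]any{"query": rawQuery},
	})
	if err != nil {
		return nil, err
	}
	return http.Post(addr+"/filter/egress/vector", "application/json", bytes.NewReader(body))
}
```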
diff --git a/apis/swagger/v1/vald/filter.swagger.json b/apis/swagger/v1/vald/filter.swagger.json
index ddf4c886eb..d04037a631 100644
--- a/apis/swagger/v1/vald/filter.swagger.json
+++ b/apis/swagger/v1/vald/filter.swagger.json
@@ -270,6 +270,16 @@
}
},
"definitions": {
+ "FilterQuery": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The raw query string."
+ }
+ },
+ "description": "Represent the filter query."
+ },
"ObjectBlob": {
"type": "object",
"properties": {
@@ -404,13 +414,13 @@
"v1FilterConfig": {
"type": "object",
"properties": {
- "targets": {
- "type": "array",
- "items": {
- "type": "object",
- "$ref": "#/definitions/v1FilterTarget"
- },
+ "target": {
+ "$ref": "#/definitions/v1FilterTarget",
"description": "Represent the filter target configuration."
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "description": "The target query."
}
},
"description": "Represent filter configuration."
@@ -438,7 +448,11 @@
"description": "A flag to skip exist check during insert operation."
},
"filters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Filter configurations."
},
"timestamp": {
@@ -530,11 +544,19 @@
"description": "Search timeout in nanoseconds."
},
"ingressFilters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Ingress filter configurations."
},
"egressFilters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Egress filter configurations."
},
"minNum": {
@@ -618,7 +640,11 @@
"description": "A flag to skip exist check during update operation."
},
"filters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Filter configuration."
},
"timestamp": {
@@ -673,7 +699,11 @@
"description": "A flag to skip exist check during upsert operation."
},
"filters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Filter configuration."
},
"timestamp": {
diff --git a/apis/swagger/v1/vald/insert.swagger.json b/apis/swagger/v1/vald/insert.swagger.json
index 6b76719382..0d5f31bb46 100644
--- a/apis/swagger/v1/vald/insert.swagger.json
+++ b/apis/swagger/v1/vald/insert.swagger.json
@@ -78,6 +78,16 @@
}
},
"definitions": {
+ "FilterQuery": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The raw query string."
+ }
+ },
+ "description": "Represent the filter query."
+ },
"ObjectLocations": {
"type": "object",
"properties": {
@@ -165,13 +175,13 @@
"v1FilterConfig": {
"type": "object",
"properties": {
- "targets": {
- "type": "array",
- "items": {
- "type": "object",
- "$ref": "#/definitions/v1FilterTarget"
- },
+ "target": {
+ "$ref": "#/definitions/v1FilterTarget",
"description": "Represent the filter target configuration."
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "description": "The target query."
}
},
"description": "Represent filter configuration."
@@ -199,7 +209,11 @@
"description": "A flag to skip exist check during insert operation."
},
"filters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Filter configurations."
},
"timestamp": {
diff --git a/apis/swagger/v1/vald/object.swagger.json b/apis/swagger/v1/vald/object.swagger.json
index 263808e6d3..9eb04cc4e5 100644
--- a/apis/swagger/v1/vald/object.swagger.json
+++ b/apis/swagger/v1/vald/object.swagger.json
@@ -131,6 +131,16 @@
}
},
"definitions": {
+ "FilterQuery": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The raw query string."
+ }
+ },
+ "description": "Represent the filter query."
+ },
"ObjectID": {
"type": "object",
"properties": {
@@ -227,13 +237,13 @@
"v1FilterConfig": {
"type": "object",
"properties": {
- "targets": {
- "type": "array",
- "items": {
- "type": "object",
- "$ref": "#/definitions/v1FilterTarget"
- },
+ "target": {
+ "$ref": "#/definitions/v1FilterTarget",
"description": "Represent the filter target configuration."
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "description": "The target query."
}
},
"description": "Represent filter configuration."
diff --git a/apis/swagger/v1/vald/search.swagger.json b/apis/swagger/v1/vald/search.swagger.json
index cf23ad1f06..869d7bad43 100644
--- a/apis/swagger/v1/vald/search.swagger.json
+++ b/apis/swagger/v1/vald/search.swagger.json
@@ -270,6 +270,16 @@
}
},
"definitions": {
+ "FilterQuery": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The raw query string."
+ }
+ },
+ "description": "Represent the filter query."
+ },
"ObjectDistance": {
"type": "object",
"properties": {
@@ -390,13 +400,13 @@
"v1FilterConfig": {
"type": "object",
"properties": {
- "targets": {
- "type": "array",
- "items": {
- "type": "object",
- "$ref": "#/definitions/v1FilterTarget"
- },
+ "target": {
+ "$ref": "#/definitions/v1FilterTarget",
"description": "Represent the filter target configuration."
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "description": "The target query."
}
},
"description": "Represent filter configuration."
@@ -444,11 +454,19 @@
"description": "Search timeout in nanoseconds."
},
"ingressFilters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Ingress filter configurations."
},
"egressFilters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Egress filter configurations."
},
"minNum": {
diff --git a/apis/swagger/v1/vald/update.swagger.json b/apis/swagger/v1/vald/update.swagger.json
index 6e6c883681..f660913360 100644
--- a/apis/swagger/v1/vald/update.swagger.json
+++ b/apis/swagger/v1/vald/update.swagger.json
@@ -110,6 +110,16 @@
}
},
"definitions": {
+ "FilterQuery": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The raw query string."
+ }
+ },
+ "description": "Represent the filter query."
+ },
"ObjectLocations": {
"type": "object",
"properties": {
@@ -197,13 +207,13 @@
"v1FilterConfig": {
"type": "object",
"properties": {
- "targets": {
- "type": "array",
- "items": {
- "type": "object",
- "$ref": "#/definitions/v1FilterTarget"
- },
+ "target": {
+ "$ref": "#/definitions/v1FilterTarget",
"description": "Represent the filter target configuration."
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "description": "The target query."
}
},
"description": "Represent filter configuration."
@@ -252,7 +262,11 @@
"description": "A flag to skip exist check during update operation."
},
"filters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Filter configuration."
},
"timestamp": {
diff --git a/apis/swagger/v1/vald/upsert.swagger.json b/apis/swagger/v1/vald/upsert.swagger.json
index b36801bc74..b79828aa29 100644
--- a/apis/swagger/v1/vald/upsert.swagger.json
+++ b/apis/swagger/v1/vald/upsert.swagger.json
@@ -78,6 +78,16 @@
}
},
"definitions": {
+ "FilterQuery": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The raw query string."
+ }
+ },
+ "description": "Represent the filter query."
+ },
"ObjectLocations": {
"type": "object",
"properties": {
@@ -165,13 +175,13 @@
"v1FilterConfig": {
"type": "object",
"properties": {
- "targets": {
- "type": "array",
- "items": {
- "type": "object",
- "$ref": "#/definitions/v1FilterTarget"
- },
+ "target": {
+ "$ref": "#/definitions/v1FilterTarget",
"description": "Represent the filter target configuration."
+ },
+ "query": {
+ "$ref": "#/definitions/FilterQuery",
+ "description": "The target query."
}
},
"description": "Represent filter configuration."
@@ -220,7 +230,11 @@
"description": "A flag to skip exist check during upsert operation."
},
"filters": {
- "$ref": "#/definitions/v1FilterConfig",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "$ref": "#/definitions/v1FilterConfig"
+ },
"description": "Filter configuration."
},
"timestamp": {
diff --git a/charts/vald-benchmark-operator/crds/valdbenchmarkjob.yaml b/charts/vald-benchmark-operator/crds/valdbenchmarkjob.yaml
index a51277c261..4e2dfe0d4d 100644
--- a/charts/vald-benchmark-operator/crds/valdbenchmarkjob.yaml
+++ b/charts/vald-benchmark-operator/crds/valdbenchmarkjob.yaml
@@ -326,11 +326,11 @@ spec:
object_config:
type: object
properties:
- filter_config:
- type: object
- properties:
- host:
- type: string
+ filter_configs:
+ type: array
+ items:
+ type: object
+ x-kubernetes-preserve-unknown-fields: true
remove_config:
type: object
properties:
diff --git a/charts/vald-benchmark-operator/schemas/job-values.yaml b/charts/vald-benchmark-operator/schemas/job-values.yaml
index 1e166a1db0..cfad3a0a61 100644
--- a/charts/vald-benchmark-operator/schemas/job-values.yaml
+++ b/charts/vald-benchmark-operator/schemas/job-values.yaml
@@ -137,15 +137,14 @@ remove_config:
# @schema {"name": "object_config", "type": "object"}
# object_config -- object config
object_config:
- # @schema {"name": "object_config.filter_config", "type": "object"}
- # object_config.filter_config -- filter target config
- filter_config:
- # @schema {"name": "object_config.filter_config.host", "type": "string"}
- # object_config.filter_config.host -- filter target host
- host: 0.0.0.0
- # @schema {"name": "object_config.filter_config.host", "type": "integer"}
- # object_config.filter_config.port -- filter target host
- port: 8081
+ # @schema {"name": "object_config.filter_configs", "type": "array", "items": {"type": "object"}}
+ # object_config.filter_configs -- filter configs
+ filter_configs:
+ - target:
+ host: 0.0.0.0
+ port: 8081
+ query:
+ query: ""
# @schema {"name": "client_config", "type": "object"}
# client_config -- gRPC client config for request to the Vald cluster
client_config:
diff --git a/charts/vald-helm-operator/crds/valdrelease.yaml b/charts/vald-helm-operator/crds/valdrelease.yaml
index 4d5d6a7332..6a0bf4abfb 100644
--- a/charts/vald-helm-operator/crds/valdrelease.yaml
+++ b/charts/vald-helm-operator/crds/valdrelease.yaml
@@ -4276,11 +4276,13 @@ spec:
distance_filters:
type: array
items:
- type: string
+ type: object
+ x-kubernetes-preserve-unknown-fields: true
object_filters:
type: array
items:
- type: string
+ type: object
+ x-kubernetes-preserve-unknown-fields: true
gateway_client:
type: object
properties:
diff --git a/charts/vald/values.schema.json b/charts/vald/values.schema.json
index f364e2359b..cce7aac2d3 100644
--- a/charts/vald/values.schema.json
+++ b/charts/vald/values.schema.json
@@ -7091,12 +7091,12 @@
"distance_filters": {
"type": "array",
"description": "distance egress vector filter targets",
- "items": { "type": "string" }
+ "items": { "type": "object" }
},
"object_filters": {
"type": "array",
"description": "object egress vector filter targets",
- "items": { "type": "string" }
+ "items": { "type": "object" }
}
}
},
diff --git a/charts/vald/values.yaml b/charts/vald/values.yaml
index 8df795ee67..c19da14ca4 100644
--- a/charts/vald/values.yaml
+++ b/charts/vald/values.yaml
@@ -1516,10 +1516,10 @@ gateway:
# @schema {"name": "gateway.filter.gateway_config.egress_filter.client", "alias": "grpc.client"}
# gateway.filter.gateway_config.egress_filter.client -- gRPC client config for egress filter (overrides defaults.grpc.client)
client: {}
- # @schema {"name": "gateway.filter.gateway_config.egress_filter.object_filters", "type": "array", "items": {"type": "string"}}
+ # @schema {"name": "gateway.filter.gateway_config.egress_filter.object_filters", "type": "array", "items": {"type": "object"}}
# gateway.filter.gateway_config.egress_filter.object_filters -- object egress vector filter targets
object_filters: []
- # @schema {"name": "gateway.filter.gateway_config.egress_filter.distance_filters", "type": "array", "items": {"type": "string"}}
+ # @schema {"name": "gateway.filter.gateway_config.egress_filter.distance_filters", "type": "array", "items": {"type": "object"}}
# gateway.filter.gateway_config.egress_filter.distance_filters -- distance egress vector filter targets
distance_filters: []
# @schema {"name": "gateway.mirror", "type": "object"}
diff --git a/docs/api/filter-gateway.md b/docs/api/filter-gateway.md
index 33a4349878..cfaf08c0ed 100644
--- a/docs/api/filter-gateway.md
+++ b/docs/api/filter-gateway.md
@@ -36,7 +36,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -54,8 +54,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -77,11 +82,18 @@ service Filter {
- Insert.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already inserted or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector inserted.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already inserted or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector inserted.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -90,11 +102,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -158,7 +170,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -176,8 +188,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -199,11 +216,18 @@ service Filter {
- Insert.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already inserted or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector inserted.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already inserted or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector inserted.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -212,11 +236,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -307,7 +331,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -325,8 +349,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -354,11 +383,18 @@ service Filter {
- Insert.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already inserted or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector inserted.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already inserted or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector inserted.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -367,11 +403,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -444,7 +480,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -462,8 +498,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -485,12 +526,19 @@ service Filter {
- Update.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :-------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already updated or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector updated.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :-------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already updated or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector updated.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -499,11 +547,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -566,7 +614,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -584,8 +632,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -607,12 +660,19 @@ service Filter {
- Update.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :-------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already updated or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector updated.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :-------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already updated or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector updated.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -621,11 +681,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -721,7 +781,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -739,8 +799,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -768,12 +833,19 @@ service Filter {
- Update.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :-------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already updated or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector updated.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :-------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already updated or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector updated.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -782,11 +854,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -859,7 +931,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -877,8 +949,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -900,12 +977,19 @@ service Filter {
- Update.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already upserted or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector upserted.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already upserted or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector upserted.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -914,11 +998,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -978,7 +1062,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -996,8 +1080,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -1019,12 +1108,19 @@ service Filter {
- Update.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already upserted or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector upserted.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already upserted or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector upserted.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query configuration |
- Filter.Target
@@ -1033,11 +1129,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -1133,7 +1229,7 @@ service Filter {
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -1151,8 +1247,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -1180,12 +1281,19 @@ service Filter {
- Update.Config
- | field | type | label | required | desc. |
- | :---------------------: | :------------ | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
- | skip_strict_exist_check | bool | | | check the same vector is already upserted or not.<br>the ID should be unique if the value is `true`. |
- | timestamp | int64 | | | the timestamp of the vector upserted.<br>if it is N/A, the current time will be used. |
- | filters | Filter.Config | | | configuration for filter |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | desc. |
+ | :---------------------: | :----------------------------- | :---- | :------: | :--------------------------------------------------------------------------------------------------- |
+ | skip_strict_exist_check | bool | | | check the same vector is already upserted or not.<br>the ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | the timestamp of the vector upserted.<br>if it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | configuration for filter |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query |
- Filter.Target
@@ -1194,11 +1302,11 @@ service Filter {
| host | string | | \* | the target hostname |
| port | port | | \* | the target port |
- - Filter.Config
+ - Filter.Query
- | field | type | label | required | desc. |
- | :-----: | :------------ | :----------------------------- | :------: | :------------------------------ |
- | targets | Filter.Target | repeated(Array[Filter.Target]) | \* | the filter target configuration |
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -1275,8 +1383,8 @@ service Filter {
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8 [ (validate.rules).uint32.gte = 0 ];
}
}
@@ -1287,8 +1395,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -1303,16 +1416,36 @@ service Filter {
- Search.Config
- | field | type | label | required | desc. |
- | :-------------: | :------------ | :---- | :------: | :---------------------------------------------------- |
- | request_id | string | | | unique request ID |
- | num | uint32 | | \* | the maximum number of result to be returned |
- | radius | float | | \* | the search radius |
- | epsilon | float | | \* | the search coefficient (default value is `0.1`) |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`) |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration |
- | egress_filters | Filter.Config | | | Egress Filter configuration |
- | min_num | uint32 | | | the minimum number of result to be returned |
+ | field | type | label | required | desc. |
+ | :-------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------- |
+ | request_id | string | | | unique request ID |
+ | num | uint32 | | \* | the maximum number of results to be returned |
+ | radius | float | | \* | the search radius |
+ | epsilon | float | | \* | the search coefficient (default value is `0.1`) |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`) |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration |
+ | min_num | uint32 | | | the minimum number of results to be returned |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query |
+
+ - Filter.Target
+
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :------------------ |
+ | host | string | | \* | the target hostname |
+ | port | port | | \* | the target port |
+
+ - Filter.Query
+
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -1390,8 +1523,8 @@ service Filter {
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8 [ (validate.rules).uint32.gte = 0 ];
}
}
@@ -1402,8 +1535,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -1418,16 +1556,36 @@ service Filter {
- Search.Config
- | field | type | label | required | desc. |
- | :-------------: | :------------ | :---- | :------: | :---------------------------------------------------- |
- | request_id | string | | | unique request ID |
- | num | uint32 | | \* | the maximum number of result to be returned |
- | radius | float | | \* | the search radius |
- | epsilon | float | | \* | the search coefficient (default value is `0.1`) |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`) |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration |
- | egress_filters | Filter.Config | | | Egress Filter configuration |
- | min_num | uint32 | | | the minimum number of result to be returned |
+ | field | type | label | required | desc. |
+ | :-------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------- |
+ | request_id | string | | | unique request ID |
+ | num | uint32 | | \* | the maximum number of results to be returned |
+ | radius | float | | \* | the search radius |
+ | epsilon | float | | \* | the search coefficient (default value is `0.1`) |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`) |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration |
+ | min_num | uint32 | | | the minimum number of results to be returned |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query |
+
+ - Filter.Target
+
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :------------------ |
+ | host | string | | \* | the target hostname |
+ | port | port | | \* | the target port |
+
+ - Filter.Query
+
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
@@ -1531,8 +1689,8 @@ service Filter {
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8 [ (validate.rules).uint32.gte = 0 ];
}
}
@@ -1543,8 +1701,13 @@ service Filter {
uint32 port = 2;
}
+ message Query {
+ string query = 1;
+ }
+
message Config {
- repeated Target targets = 1;
+ Target target = 1;
+ Query query = 2;
}
}
```
@@ -1565,16 +1728,36 @@ service Filter {
- Search.Config
- | field | type | label | required | desc. |
- | :-------------: | :------------ | :---- | :------: | :---------------------------------------------------- |
- | request_id | string | | | unique request ID |
- | num | uint32 | | \* | the maximum number of result to be returned |
- | radius | float | | \* | the search radius |
- | epsilon | float | | \* | the search coefficient (default value is `0.1`) |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`) |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration |
- | egress_filters | Filter.Config | | | Egress Filter configuration |
- | min_num | uint32 | | | the minimum number of result to be returned |
+ | field | type | label | required | desc. |
+ | :-------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------- |
+ | request_id | string | | | unique request ID |
+ | num | uint32 | | \* | the maximum number of results to be returned |
+ | radius | float | | \* | the search radius |
+ | epsilon | float | | \* | the search coefficient (default value is `0.1`) |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`) |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration |
+ | min_num | uint32 | | | the minimum number of results to be returned |
+
+ - Filter.Config
+
+ | field | type | label | required | desc. |
+ | :----: | :------------ | :---- | :------: | :------------------------------ |
+ | target | Filter.Target | | \* | the filter target configuration |
+ | query | Filter.Query | | | the filter query |
+
+ - Filter.Target
+
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :------------------ |
+ | host | string | | \* | the target hostname |
+ | port | port | | \* | the target port |
+
+ - Filter.Query
+
+ | field | type | label | required | desc. |
+ | :---: | :----- | :---- | :------: | :--------------- |
+ | query | string | | | the filter query |
### Output
diff --git a/docs/api/insert.md b/docs/api/insert.md
index eae2bd7dce..4788ae3eca 100644
--- a/docs/api/insert.md
+++ b/docs/api/insert.md
@@ -31,7 +31,7 @@ Insert RPC is the method to add a new single vector.
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -53,11 +53,11 @@ Insert RPC is the method to add a new single vector.
- Insert.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for filter. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
- Object.Vector
@@ -135,7 +135,7 @@ It's the recommended method to insert a large number of vectors.
}
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -157,11 +157,11 @@ It's the recommended method to insert a large number of vectors.
- Insert.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for the filter targets. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
- Object.Vector
@@ -266,7 +266,7 @@ Please be careful that the size of the request exceeds the limit.
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -294,11 +294,11 @@ Please be careful that the size of the request exceeds the limit.
- Insert.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for the filter targets. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
- Object.Vector
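
Since `Insert.Config.filters` is now repeated, more than one ingress filter component can be chained per request. A hedged Go sketch of building such a request (the `Insert_Request` and `Object_Vector` names assume the usual generated naming; hosts are placeholders):

```go
package example

import (
	"time"

	"github.com/vdaas/vald-client-go/v1/payload"
)

// newInsertRequest sketches an Insert request after the change: Config.Filters
// is a slice, so several ingress filter components can be applied in order.
func newInsertRequest(id string, vec []float32) *payload.Insert_Request {
	return &payload.Insert_Request{
		Vector: &payload.Object_Vector{Id: id, Vector: vec},
		Config: &payload.Insert_Config{
			SkipStrictExistCheck: false,
			Filters: []*payload.Filter_Config{
				{Target: &payload.Filter_Target{Host: "vald-ingress-filter-1", Port: 8081}},
				{Target: &payload.Filter_Target{Host: "vald-ingress-filter-2", Port: 8081}},
			},
			Timestamp: time.Now().UnixMilli(),
		},
	}
}
```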
diff --git a/docs/api/object.md b/docs/api/object.md
index a9e37ea9d0..637f3a7552 100644
--- a/docs/api/object.md
+++ b/docs/api/object.md
@@ -95,7 +95,7 @@ GetObject RPC is the method to get the metadata of a vector inserted into the `v
message Object {
message VectorRequest {
ID id = 1 [ (validate.rules).repeated .min_items = 2 ];
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
}
message ID {
@@ -106,10 +106,10 @@ GetObject RPC is the method to get the metadata of a vector inserted into the `v
- Object.VectorRequest
- | field | type | label | required | description |
- | :-----: | :------------ | :---- | :------: | :------------------------------------------------------------- |
- | id | Object.ID | | \* | The ID of a vector. ID should consist of 1 or more characters. |
- | filters | Filter.Config | | | Configuration for filter. |
+ | field | type | label | required | description |
+ | :-----: | :---------------------------- | :---- | :------: | :------------------------------------------------------------- |
+ | id | Object.ID | | \* | The ID of a vector. ID should consist of 1 or more characters. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
- Object.ID
@@ -176,7 +176,7 @@ Each Upsert request and response are independent.
message Object {
message VectorRequest {
ID id = 1 [ (validate.rules).repeated .min_items = 2 ];
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
}
message ID {
@@ -187,10 +187,10 @@ Each Upsert request and response are independent.
- Object.VectorRequest
- | field | type | label | required | description |
- | :-----: | :------------ | :---- | :------: | :------------------------------------------------------------- |
- | id | Object.ID | | \* | The ID of a vector. ID should consist of 1 or more characters. |
- | filters | Filter.Config | | | Configuration for the filter targets. |
+ | field | type | label | required | description |
+ | :-----: | :----------------------------- | :---- | :------: | :------------------------------------------------------------- |
+ | id | Object.ID | | \* | The ID of a vector. ID should consist of 1 or more characters. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for the filter targets. |
- Object.ID
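
A short Go sketch of the updated `Object.VectorRequest`, assuming the generated `Object_VectorRequest`/`Object_ID` names and a placeholder filter host:

```go
package example

import "github.com/vdaas/vald-client-go/v1/payload"

// newGetObjectRequest sketches the updated Object.VectorRequest: the filters
// field is now a slice of Filter.Config values.
func newGetObjectRequest(id string) *payload.Object_VectorRequest {
	return &payload.Object_VectorRequest{
		Id: &payload.Object_ID{Id: id},
		Filters: []*payload.Filter_Config{
			{Target: &payload.Filter_Target{Host: "vald-object-filter", Port: 8081}},
		},
	}
}
```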
diff --git a/docs/api/search.md b/docs/api/search.md
index ccec2863d0..c17e90f5ee 100644
--- a/docs/api/search.md
+++ b/docs/api/search.md
@@ -63,8 +63,8 @@ Search RPC is the method to search vector(s) similar to the request vector.
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -88,17 +88,17 @@ Search RPC is the method to search vector(s) similar to the request vector.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | radius | float | | \* | The search radius. |
- | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | radius | float | | \* | The search radius. |
+ | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -184,8 +184,8 @@ The vector with the same requested ID should be indexed into the `vald-agent` be
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -209,17 +209,17 @@ The vector with the same requested ID should be indexed into the `vald-agent` be
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | radius | float | | \* | The search radius. |
- | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | radius | float | | \* | The search radius. |
+ | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -306,8 +306,8 @@ Each Search request and response are independent.
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -331,17 +331,17 @@ Each Search request and response are independent.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | radius | float | | \* | The search radius. |
- | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | radius | float | | \* | The search radius. |
+ | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -442,8 +442,8 @@ Each SearchByID request and response are independent.
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -467,17 +467,17 @@ Each SearchByID request and response are independent.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | radius | float | | \* | The search radius. |
- | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | radius | float | | \* | The search radius. |
+ | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -585,8 +585,8 @@ Please be careful that the size of the request exceeds the limit.
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -616,17 +616,17 @@ Please be careful that the size of the request exceeds the limit.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | radius | float | | \* | The search radius. |
- | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | radius | float | | \* | The search radius. |
+ | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -731,8 +731,8 @@ Please be careful that the size of the request exceeds the limit.
float radius = 3;
float epsilon = 4;
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -762,17 +762,17 @@ Please be careful that the size of the request exceeds the limit.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | radius | float | | \* | The search radius. |
- | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | radius | float | | \* | The search radius. |
+ | epsilon | float | | \* | The search coefficient (default value is `0.1`). |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -865,8 +865,8 @@ LinearSearch RPC is the method to linear search vector(s) similar to the request
string request_id = 1;
uint32 num = 2 [ (validate.rules).uint32.gte = 1 ];
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -890,15 +890,15 @@ LinearSearch RPC is the method to linear search vector(s) similar to the request
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -983,8 +983,8 @@ You will get a `NOT_FOUND` error if the vector isn't stored.
string request_id = 1;
uint32 num = 2 [ (validate.rules).uint32.gte = 1 ];
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -1008,15 +1008,15 @@ You will get a `NOT_FOUND` error if the vector isn't stored.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -1101,8 +1101,8 @@ Each LinearSearch request and response are independent.
string request_id = 1;
uint32 num = 2 [ (validate.rules).uint32.gte = 1 ];
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -1126,15 +1126,15 @@ Each LinearSearch request and response are independent.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -1233,8 +1233,8 @@ Each LinearSearchByID request and response are independent.
string request_id = 1;
uint32 num = 2 [ (validate.rules).uint32.gte = 1 ];
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -1258,15 +1258,15 @@ Each LinearSearchByID request and response are independent.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -1372,8 +1372,8 @@ Please be careful that the size of the request exceeds the limit.
string request_id = 1;
uint32 num = 2 [ (validate.rules).uint32.gte = 1 ];
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -1403,15 +1403,15 @@ Please be careful that the size of the request exceeds the limit.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
@@ -1514,8 +1514,8 @@ Please be careful that the size of the request exceeds the limit.
string request_id = 1;
uint32 num = 2 [ (validate.rules).uint32.gte = 1 ];
int64 timeout = 5;
- Filter.Config ingress_filters = 6;
- Filter.Config egress_filters = 7;
+ repeated Filter.Config ingress_filters = 6;
+ repeated Filter.Config egress_filters = 7;
uint32 min_num = 8;
AggregationAlgorithm aggregation_algorithm = 9;
}
@@ -1545,15 +1545,15 @@ Please be careful that the size of the request exceeds the limit.
- Search.Config
- | field | type | label | required | description |
- | :-------------------: | :------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
- | request_id | string | | | Unique request ID. |
- | num | uint32 | | \* | The maximum number of results to be returned. |
- | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
- | ingress_filters | Filter.Config | | | Ingress Filter configuration. |
- | egress_filters | Filter.Config | | | Egress Filter configuration. |
- | min_num | uint32 | | | The minimum number of results to be returned. |
- | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
+ | field | type | label | required | description |
+ | :-------------------: | :----------------------------- | :---- | :------: | :---------------------------------------------------------------------------- |
+ | request_id | string | | | Unique request ID. |
+ | num | uint32 | | \* | The maximum number of results to be returned. |
+ | timeout | int64 | | | Search timeout in nanoseconds (default value is `5s`). |
+ | ingress_filters | repeated(Array[Filter.Config]) | | | Ingress Filter configuration. |
+ | egress_filters | repeated(Array[Filter.Config]) | | | Egress Filter configuration. |
+ | min_num | uint32 | | | The minimum number of results to be returned. |
+ | aggregation_algorithm | AggregationAlgorithm | | | The search aggregation algorithm option (default value is `ConcurrentQueue`). |
### Output
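
A hedged Go sketch of the new `Search.Config`, with `ingress_filters`/`egress_filters` as slices (the Go field names assume the standard generated naming; hosts and numeric values are placeholders):

```go
package example

import (
	"time"

	"github.com/vdaas/vald-client-go/v1/payload"
)

// newSearchConfig sketches the updated Search.Config: ingress_filters and
// egress_filters are now repeated, so several filter components can be applied
// to the query vector and to the search results respectively.
func newSearchConfig(requestID string) *payload.Search_Config {
	return &payload.Search_Config{
		RequestId: requestID,
		Num:       10,
		Radius:    -1,
		Epsilon:   0.1,
		Timeout:   (3 * time.Second).Nanoseconds(),
		IngressFilters: []*payload.Filter_Config{
			{Target: &payload.Filter_Target{Host: "vald-ingress-filter", Port: 8081}},
		},
		EgressFilters: []*payload.Filter_Config{
			{Target: &payload.Filter_Target{Host: "vald-egress-filter", Port: 8081}},
		},
		MinNum: 5,
	}
}
```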
diff --git a/docs/api/update.md b/docs/api/update.md
index 2fb68b05f7..b158c2d565 100644
--- a/docs/api/update.md
+++ b/docs/api/update.md
@@ -31,7 +31,7 @@ Update RPC is the method to update a single vector.
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -53,12 +53,12 @@ Update RPC is the method to update a single vector.
- Update.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for filter. |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
- Object.Vector
@@ -138,7 +138,7 @@ It's the recommended method to update the large amount of vectors.
}
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -160,12 +160,12 @@ It's the recommended method to update the large amount of vectors.
- Update.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for filter. |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
- Object.Vector
@@ -272,7 +272,7 @@ Please be careful that the size of the request exceeds the limit.
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -300,12 +300,12 @@ Please be careful that the size of the request exceeds the limit.
- Update.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for filter. |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
- Object.Vector
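
A corresponding Go sketch for `Update.Config`; the structural change is that `Filters` is a slice, while `disable_balanced_update` keeps its meaning (names follow the generated naming convention, values are placeholders):

```go
package example

import (
	"time"

	"github.com/vdaas/vald-client-go/v1/payload"
)

// newUpdateConfig sketches the updated Update.Config with a single chained
// filter and balanced update left enabled.
func newUpdateConfig() *payload.Update_Config {
	return &payload.Update_Config{
		SkipStrictExistCheck: false,
		Filters: []*payload.Filter_Config{
			{Target: &payload.Filter_Target{Host: "vald-ingress-filter", Port: 8081}},
		},
		Timestamp:             time.Now().UnixMilli(),
		DisableBalancedUpdate: false,
	}
}
```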
diff --git a/docs/api/upsert.md b/docs/api/upsert.md
index 9ed9f6572c..8988328059 100644
--- a/docs/api/upsert.md
+++ b/docs/api/upsert.md
@@ -35,7 +35,7 @@ Upsert RPC is the method to update the inserted vector to a new single vector or
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -57,12 +57,12 @@ Upsert RPC is the method to update the inserted vector to a new single vector or
- Upsert.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for filter. |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
- Object.Vector
@@ -140,7 +140,7 @@ It’s the recommended method to upsert a large number of vectors.
}
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -162,12 +162,12 @@ It’s the recommended method to upsert a large number of vectors.
- Upsert.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for filter. |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
- Object.Vector
@@ -272,7 +272,7 @@ Please be careful that the size of the request exceeds the limit.
message Config {
bool skip_strict_exist_check = 1;
- Filter.Config filters = 2;
+ repeated Filter.Config filters = 2;
int64 timestamp = 3;
}
}
@@ -300,12 +300,12 @@ Please be careful that the size of the request exceeds the limit.
- Upsert.Config
- | field | type | label | required | description |
- | :---------------------: | :------------ | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
- | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
- | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
- | filters | Filter.Config | | | Configuration for filter. |
- | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
+ | field | type | label | required | description |
+ | :---------------------: | :----------------------------- | :---- | :------: | :------------------------------------------------------------------------------------------------------------ |
+ | skip_strict_exist_check | bool | | | Check whether the same vector is already inserted or not.<br>The ID should be unique if the value is `true`. |
+ | timestamp | int64 | | | The timestamp of the vector inserted.<br>If it is N/A, the current time will be used. |
+ | filters | repeated(Array[Filter.Config]) | | | Configuration for filter. |
+ | disable_balanced_update | bool | | | A flag to disable balanced update (split remove -> insert operation) during update operation. |
- Object.Vector
diff --git a/docs/overview/component/filter-gateway.md b/docs/overview/component/filter-gateway.md
index 1561363016..0485283d85 100644
--- a/docs/overview/component/filter-gateway.md
+++ b/docs/overview/component/filter-gateway.md
@@ -108,12 +108,12 @@ If you want to use this feature, please deploy your own egress filter component,
- The scheme of egress filter service
```rpc
- // https://github.com/vdaas/vald/blob/main/apis/proto/v1/filter/ingress/egress_filter.proto
+ // https://github.com/vdaas/vald/blob/main/apis/proto/v1/filter/egress/egress_filter.proto
service Filter {
// Represent the RPC to filter the distance.
- rpc FilterDistance(payload.v1.Object.Distance)
- returns (payload.v1.Object.Distance) {
+ rpc FilterDistance(payload.v1.Filter.DistanceRequest)
+ returns (payload.v1.Filter.DistanceResponse) {
option (google.api.http) = {
post : "/filter/egress/distance"
body : "*"
@@ -121,8 +121,8 @@ If you want to use this feature, please deploy your own egress filter component,
}
// Represent the RPC to filter the vector.
- rpc FilterVector(payload.v1.Object.Vector)
- returns (payload.v1.Object.Vector) {
+ rpc FilterVector(payload.v1.Filter.VectorRequest)
+ returns (payload.v1.Filter.VectorResponse) {
option (google.api.http) = {
post : "/filter/egress/vector"
body : "*"
@@ -131,23 +131,35 @@ If you want to use this feature, please deploy your own egress filter component,
}
```
-- The scheme of `payload.v1.Object.Distance` and `payload.v1.Object.Vector`
+- The scheme of `payload.v1.Filter.DistanceRequest`, `payload.v1.Filter.DistanceResponse`, `payload.v1.Filter.VectorRequest` and `payload.v1.Filter.VectorResponse`
```rpc
// https://github.com/vdaas/vald/blob/main/apis/proto/v1/payload/payload.proto
// Represent the ID and distance pair.
- message Distance {
- // The vector ID.
- string id = 1;
- // The distance.
- float distance = 2;
+ message DistanceRequest {
+ // Distance
+ repeated Object.Distance distance = 1;
+ // Query
+ Query query = 2;
}
- // Represent a vector.
- message Vector {
- // The vector ID.
- string id = 1 [ (validate.rules).string.min_len = 1 ];
- // The vector.
- repeated float vector = 2 [ (validate.rules).repeated .min_items = 2 ];
+ // Represent the ID and distance pair.
+ message DistanceResponse {
+ // Distance
+ repeated Object.Distance distance = 1;
+ }
+
+ // Represent the ID and vector pair.
+ message VectorRequest {
+ // Vector
+ Object.Vector vector = 1;
+ // Query
+ Query query = 2;
+ }
+
+ // Represent the ID and vector pair.
+ message VectorResponse {
+     // Vector
+ Object.Vector vector = 1;
}
```
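
For orientation, a custom egress filter only needs to implement the two RPCs above against these new message types. The following pass-through sketch is not part of this change set; it assumes the generated Go bindings under `github.com/vdaas/vald/apis/grpc/v1/filter/egress` and `github.com/vdaas/vald/apis/grpc/v1/payload`, and the full runnable example is added in `example/server/egress-filter/main.go` later in this diff.

```go
package main

import (
	"context"
	"net"

	"github.com/vdaas/vald/apis/grpc/v1/filter/egress"
	"github.com/vdaas/vald/apis/grpc/v1/payload"
	"google.golang.org/grpc"
)

// passthroughFilter returns distances and vectors unchanged.
// in.GetQuery().GetQuery() carries the raw query string sent by the client.
type passthroughFilter struct {
	egress.UnimplementedFilterServer
}

func (passthroughFilter) FilterDistance(
	_ context.Context, in *payload.Filter_DistanceRequest,
) (*payload.Filter_DistanceResponse, error) {
	return &payload.Filter_DistanceResponse{Distance: in.GetDistance()}, nil
}

func (passthroughFilter) FilterVector(
	_ context.Context, in *payload.Filter_VectorRequest,
) (*payload.Filter_VectorResponse, error) {
	return &payload.Filter_VectorResponse{Vector: in.GetVector()}, nil
}

func main() {
	// Listen on the port referenced by the egress filter examples.
	lis, err := net.Listen("tcp", ":8083")
	if err != nil {
		panic(err)
	}
	s := grpc.NewServer()
	egress.RegisterFilterServer(s, passthroughFilter{})
	_ = s.Serve(lis)
}
```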
diff --git a/docs/user-guides/client-api-config.md b/docs/user-guides/client-api-config.md
index 3abac4d34d..450854bcba 100644
--- a/docs/user-guides/client-api-config.md
+++ b/docs/user-guides/client-api-config.md
@@ -16,13 +16,13 @@ It requires the vector, its ID (specific ID for the vector), and optional config
### Configuration
```rpc
-// Represent insert configuration.
+// Represent insert configurations.
message Config {
- // Check whether or not the same set of vector and ID is already inserted.
+ // A flag to skip exist check during insert operation.
bool skip_strict_exist_check = 1;
- // Configuration for filters if your Vald cluster uses filters.
- Filter.Config filters = 2;
- // The timestamp when the vector was inserted.
+ // Filter configurations.
+ repeated Filter.Config filters = 2;
+ // Insert timestamp.
int64 timestamp = 3;
}
```
@@ -66,12 +66,13 @@ func main() {
// Insert configuration (optional)
Config: &payload.Insert_Config{
SkipStrictExistCheck: false,
- Filters: &payload.Filter_Config{
- Targets: []*payload.Filter_Target{
- {
+ Filters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
Host: "vald-ingress-filter",
Port: 8081,
},
+ Query: &payload.Filter_Query{},
},
},
Timestamp: time.Now().UnixMilli(),
@@ -111,10 +112,18 @@ message Filter {
uint32 port = 2;
}
+ // Represent the filter query.
+ message Query {
+ // The raw query string.
+ string query = 1;
+ }
+
// Represent filter configuration.
message Config {
// Represent the filter target configuration.
- repeated Target targets = 1;
+ Target target = 1;
+ // The target query.
+ Query query = 2;
}
}
```
@@ -132,14 +141,16 @@ It requires the new vector, its ID (the target ID already indexed), and optional
### Configuration
```rpc
-// Represent update configuration.
+// Represent the update configuration.
message Config {
- // Check whether or not the same set of vector and ID is already inserted.
+ // A flag to skip exist check during update operation.
bool skip_strict_exist_check = 1;
- // Configuration for filters if your Vald cluster uses filters.
- Filter.Config filters = 2;
- // The timestamp when the vector was inserted.
+ // Filter configuration.
+ repeated Filter.Config filters = 2;
+ // Update timestamp.
int64 timestamp = 3;
+ // A flag to disable balanced update (split remove -> insert operation) during update operation.
+ bool disable_balanced_update = 4;
}
```
@@ -182,12 +193,13 @@ func example() {
// Update configuration (optional)
Config: &payload.Update_Config{
SkipStrictExistCheck: false,
- Filters: &payload.Filter_Config{
- Targets: []*payload.Filter_Target{
- {
+ Filters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
Host: "vald-ingress-filter",
Port: 8081,
},
+ Query: &payload.Filter_Query{},
},
},
Timestamp: time.Now().UnixMilli(),
@@ -235,10 +247,18 @@ message Filter {
uint32 port = 2;
}
+ // Represent the filter query.
+ message Query {
+ // The raw query string.
+ string query = 1;
+ }
+
// Represent filter configuration.
message Config {
// Represent the filter target configuration.
- repeated Target targets = 1;
+ Target target = 1;
+ // The target query.
+ Query query = 2;
}
}
```
@@ -256,14 +276,16 @@ It requires the vector, its ID (specific ID for the vector), and optional config
### Configuration
```rpc
-// Represent upsert configuration.
+// Represent the upsert configuration.
message Config {
- // Check whether or not the same set of vector and ID is already inserted.
+ // A flag to skip exist check during upsert operation.
bool skip_strict_exist_check = 1;
- // Configuration for filters if your Vald cluster uses filters.
- Filter.Config filters = 2;
- // The timestamp when the vector was inserted.
+ // Filter configuration.
+ repeated Filter.Config filters = 2;
+ // Upsert timestamp.
int64 timestamp = 3;
+ // A flag to disable balanced update (split remove -> insert operation) during update operation.
+ bool disable_balanced_update = 4;
}
```
@@ -305,12 +327,13 @@ func example() {
// Upsert configuration (optional)
Config: &payload.Upsert_Config{
SkipStrictExistCheck: false,
- Filters: &payload.Filter_Config{
- Targets: []*payload.Filter_Target{
- {
+ Filters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
Host: "vald-ingress-filter",
Port: 8081,
},
+ Query: &payload.Filter_Query{},
},
},
Timestamp: time.Now().UnixMilli(),
@@ -358,10 +381,18 @@ message Filter {
uint32 port = 2;
}
+ // Represent the filter query.
+ message Query {
+ // The raw query string.
+ string query = 1;
+ }
+
// Represent filter configuration.
message Config {
// Represent the filter target configuration.
- repeated Target targets = 1;
+ Target target = 1;
+ // The target query.
+ Query query = 2;
}
}
```
@@ -412,7 +443,7 @@ For more details, please refer to [the Search API document](../api/search.md).
message Config {
// Unique request ID.
string request_id = 1;
- // Maximum number of results to be returned.
+ // Maximum number of result to be returned.
uint32 num = 2 [ (validate.rules).uint32.gte = 1 ];
// Search radius.
float radius = 3;
@@ -421,10 +452,10 @@ message Config {
// Search timeout in nanoseconds.
int64 timeout = 5;
// Ingress filter configurations.
- Filter.Config ingress_filters = 6;
+ repeated Filter.Config ingress_filters = 6;
// Egress filter configurations.
- Filter.Config egress_filters = 7;
- // Minimum number of results to be returned.
+ repeated Filter.Config egress_filters = 7;
+ // Minimum number of result to be returned.
uint32 min_num = 8 [ (validate.rules).uint32.gte = 0 ];
// Aggregation Algorithm
AggregationAlgorithm aggregation_algorithm = 9;
@@ -495,20 +526,22 @@ func main() {
Epsilon: 0.1,
// Search timeout setting.
Timeout: 100000000,
- IngressFilters: &payload.Filter_Config{
- Targets: []*payload.Filter_Target{
- {
+ IngressFilters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
Host: "vald-ingress-filter",
Port: 8081,
},
+ Query: &payload.Filter_Query{},
},
},
- EgressFilters: &payload.Filter_Config{
- Targets: []*payload.Filter_Target{
- {
+ EgressFilters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
Host: "vald-egress-filter",
Port: 8081,
},
+ Query: &payload.Filter_Query{},
},
},
AggregationAlgorithm: payload.Search_PairingHeap,
@@ -576,10 +609,18 @@ message Filter {
uint32 port = 2;
}
+ // Represent the filter query.
+ message Query {
+ // The raw query string.
+ string query = 1;
+ }
+
// Represent filter configuration.
message Config {
// Represent the filter target configuration.
- repeated Target targets = 1;
+ Target target = 1;
+ // The target query.
+ Query query = 2;
}
}
```
@@ -601,10 +642,18 @@ message Filter {
uint32 port = 2;
}
+ // Represent the filter query.
+ message Query {
+ // The raw query string.
+ string query = 1;
+ }
+
// Represent filter configuration.
message Config {
// Represent the filter target configuration.
- repeated Target targets = 1;
+ Target target = 1;
+ // The target query.
+ Query query = 2;
}
}
```
@@ -625,11 +674,11 @@ For more details, please refer to [the Remove API document](../api/remove.md).
### Configuration
```rpc
-// Represent remove configuration.
+// Represent the remove configuration.
message Config {
- // Check whether or not the same set of ID and vector is already inserted.
+  // A flag to skip exist check during remove operation.
bool skip_strict_exist_check = 1;
- // The timestamp when the vector was removed.
+ // Remove timestamp.
int64 timestamp = 3;
}
```
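
Since the remove section stops at the configuration message, a short illustrative call in the same style as the earlier examples may help. This is a sketch only: it assumes an `rclient` created with `vald.NewRemoveClient` and a `ctx` set up as in the Insert example, and the field names follow the proto shown above.

```go
// Remove the vector registered under ID "example-id" (hypothetical ID).
_, err := rclient.Remove(ctx, &payload.Remove_Request{
	Id: &payload.Object_ID{Id: "example-id"},
	Config: &payload.Remove_Config{
		// Skip the strict existence check, as described in the configuration above.
		SkipStrictExistCheck: false,
		// Remove timestamp.
		Timestamp: time.Now().UnixMilli(),
	},
})
if err != nil {
	glg.Error(err)
}
```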
diff --git a/docs/user-guides/filtering-configuration.md b/docs/user-guides/filtering-configuration.md
index 05f7673bfc..4898aadf99 100644
--- a/docs/user-guides/filtering-configuration.md
+++ b/docs/user-guides/filtering-configuration.md
@@ -72,6 +72,8 @@ Please refer to:
- [Vald ONNX Ingress Filter](https://github.com/vdaas/vald-onnx-ingress-filter)
- [Vald Tensorflow Ingress Filter](https://github.com/vdaas/vald-tensorflow-ingress-filter)
+You can also find other samples [here](../../example/client/gateway/filter).
+
## Configuration
It is easy to enable the filtering feature.
diff --git a/example/client/gateway/filter/README.md b/example/client/gateway/filter/README.md
new file mode 100644
index 0000000000..f2ecc00174
--- /dev/null
+++ b/example/client/gateway/filter/README.md
@@ -0,0 +1,77 @@
+# How to use this example
+After launching the Kubernetes cluster, perform the following steps from the repository root directory.
+
+1. Deploy Vald cluster with filter-gateway
+
+ ```bash
+ vim example/helm/values.yaml
+ ---
+ ...
+ gateway:
+ ...
+ filter:
+ enabled: true
+ ...
+ agent:
+ ngt:
+ dimension: 784
+ distance_type: l2
+ ...
+
+ # deploy vald cluster
+ helm repo add vald https://vald.vdaas.org/charts
+ helm install vald vald/vald --values example/helm/values.yaml
+ ```
+
+2. Build and publish the example ingress filter and egress filter Docker images
+
+ ```bash
+   # log in to Docker if needed, and set your DockerHub ID
+ docker login
+ export DOCKERHUB_ID=
+
+ # build and publish ingress filter image
+ docker build \
+ -f example/manifest/filter/ingress/Dockerfile \
+ -t $DOCKERHUB_ID/vald-ingress-filter:latest . \
+ --build-arg GO_VERSION=$(make version/go)
+
+ docker push ${DOCKERHUB_ID}/vald-ingress-filter:latest
+
+ # build and publish egress filter image
+ docker build \
+ -f example/manifest/filter/egress/Dockerfile \
+ -t $DOCKERHUB_ID/vald-egress-filter:latest . \
+ --build-arg GO_VERSION=$(make version/go)
+
+ docker push ${DOCKERHUB_ID}/vald-egress-filter:latest
+ ```
+
+3. Deploy ingress filter server and egress filter server
+
+ ```bash
+   # deploy egress filter
+ sed -e "s/DOCKERHUB_ID/${DOCKERHUB_ID}/g" example/manifest/filter/egress/deployment.yaml | kubectl apply -f - \
+ && kubectl apply -f example/manifest/filter/egress/service.yaml
+
+   # deploy ingress filter
+ sed -e "s/DOCKERHUB_ID/${DOCKERHUB_ID}/g" example/manifest/filter/ingress/deployment.yaml | kubectl apply -f - \
+ && kubectl apply -f example/manifest/filter/ingress/service.yaml
+ ```
+
+4. Run test
+
+ ```bash
+   # if you don't use a Kubernetes Ingress, set up port forwarding
+ kubectl port-forward deployment/vald-filter-gateway 8081:8081
+
+ # Please change the argument according to your environment.
+ go run ./example/client/gateway/filter/main.go -addr "localhost:8081" -ingresshost "vald-ingress-filter.default.svc.cluster.local" -ingressport 8082 -egresshost "vald-egress-filter.default.svc.cluster.local" -egressport 8083
+ ```
+
+5. Cleanup
+
+ ```bash
+ helm uninstall vald
+ kubectl delete -f ./example/manifest/filter/egress/deployment.yaml -f ./example/manifest/filter/egress/service.yaml -f ./example/manifest/filter/ingress/deployment.yaml -f ./example/manifest/filter/ingress/service.yaml
+ ```
\ No newline at end of file
diff --git a/example/client/gateway/filter/egress-filter/main.go b/example/client/gateway/filter/egress-filter/main.go
new file mode 100644
index 0000000000..ba39141bfb
--- /dev/null
+++ b/example/client/gateway/filter/egress-filter/main.go
@@ -0,0 +1,108 @@
+// Copyright (C) 2019-2024 vdaas.org vald team
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// You may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package main
+
+import (
+ // NOTE:
+ // The correct approach is to use "github.com/vdaas/vald-client-go/v1/payload" and "github.com/vdaas/vald-client-go/v1/vald" in the "example/client".
+ // However, the "vald-client-go" module is not available in the filter client example
+ // because the changes to the filter query have not been released. (current version is v1.7.12)
+ // Therefore, the root module is used until it is released.
+ // The import path and go.mod will be changed after release.
+ "context"
+ "flag"
+ "net"
+ "strconv"
+
+ "github.com/kpango/glg"
+ "github.com/vdaas/vald/apis/grpc/v1/filter/egress"
+ "github.com/vdaas/vald/apis/grpc/v1/payload"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
+)
+
+var (
+ client egress.FilterClient
+ egressServerHost string
+ egressServerPort uint
+ dimension uint
+)
+
+func init() {
+ /**
+	Host option specifies grpc server host of your egress filter. Default value is `127.0.0.1`.
+	Port option specifies grpc server port of your egress filter. Default value is `8083`.
+ Dimension option specifies dimension size of vectors. Default value is `784`.
+ **/
+	flag.StringVar(&egressServerHost, "host", "127.0.0.1", "egress server host")
+	flag.UintVar(&egressServerPort, "port", 8083, "egress server port")
+ flag.UintVar(&dimension, "dimension", 784, "dimension size of vectors")
+ flag.Parse()
+}
+
+func main() {
+ glg.Println("start gRPC Client.")
+
+ addr := net.JoinHostPort(egressServerHost, strconv.Itoa(int(egressServerPort)))
+ conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
+ if err != nil {
+ glg.Error("Connection failed.")
+ return
+ }
+ defer conn.Close()
+
+ client = egress.NewFilterClient(conn)
+
+ fdr := &payload.Filter_DistanceRequest{
+ Distance: []*payload.Object_Distance{
+ {
+ Id: "1_fashion",
+ Distance: 0.01,
+ },
+ {
+ Id: "2_food",
+ Distance: 0.02,
+ },
+ {
+ Id: "3_fashion",
+ Distance: 0.03,
+ },
+ {
+ Id: "4_pet",
+ Distance: 0.04,
+ },
+ },
+ Query: &payload.Filter_Query{
+ Query: "category=fashion",
+ },
+ }
+ res, err := client.FilterDistance(context.Background(), fdr)
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+ glg.Info("FilterDistance Distance: ", res.GetDistance())
+
+ r, err := client.FilterVector(context.Background(), &payload.Filter_VectorRequest{
+ Vector: &payload.Object_Vector{
+ Id: "1", Vector: make([]float32, dimension),
+ },
+ Query: &payload.Filter_Query{},
+ })
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+ glg.Info("FilterVector Vector: ", r.GetVector())
+}
diff --git a/example/client/gateway/filter/ingress-filter/main.go b/example/client/gateway/filter/ingress-filter/main.go
new file mode 100644
index 0000000000..e2dec66293
--- /dev/null
+++ b/example/client/gateway/filter/ingress-filter/main.go
@@ -0,0 +1,80 @@
+// Copyright (C) 2019-2024 vdaas.org vald team
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// You may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package main
+
+import (
+ // NOTE:
+ // The correct approach is to use "github.com/vdaas/vald-client-go/v1/payload" and "github.com/vdaas/vald-client-go/v1/vald" in the "example/client".
+ // However, the "vald-client-go" module is not available in the filter client example
+ // because the changes to the filter query have not been released. (current version is v1.7.12)
+ // Therefore, the root module is used until it is released.
+ // The import path and go.mod will be changed after release.
+ "context"
+ "flag"
+ "net"
+ "strconv"
+
+ "github.com/kpango/glg"
+ "github.com/vdaas/vald/apis/grpc/v1/filter/ingress"
+ "github.com/vdaas/vald/apis/grpc/v1/payload"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
+)
+
+var (
+ client ingress.FilterClient
+ ingressServerHost string
+ ingressServerPort uint
+ dimension uint
+)
+
+func init() {
+ /**
+	Host option specifies grpc server host of your ingress filter. Default value is `127.0.0.1`.
+	Port option specifies grpc server port of your ingress filter. Default value is `8082`.
+ Dimension option specifies dimension size of vectors. Default value is `784`.
+ **/
+ flag.StringVar(&ingressServerHost, "host", "127.0.0.1", "ingress server host")
+ flag.UintVar(&ingressServerPort, "port", 8082, "ingress server port")
+ flag.UintVar(&dimension, "dimension", 784, "dimension size of vectors")
+ flag.Parse()
+}
+
+func main() {
+ glg.Println("start gRPC Client.")
+
+ addr := net.JoinHostPort(ingressServerHost, strconv.Itoa(int(ingressServerPort)))
+ conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
+ if err != nil {
+ glg.Error("Connection failed.")
+ return
+ }
+ defer conn.Close()
+
+ client = ingress.NewFilterClient(conn)
+
+ res, err := client.GenVector(context.Background(), &payload.Object_Blob{Id: "1", Object: make([]byte, 0)})
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+ glg.Info("GenVector Vector: ", res.GetVector())
+
+ res, err = client.FilterVector(context.Background(), &payload.Object_Vector{Id: "1", Vector: make([]float32, dimension)})
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+ glg.Info("FilterVector Id: ", res.GetId())
+}
diff --git a/example/client/gateway/filter/main.go b/example/client/gateway/filter/main.go
new file mode 100644
index 0000000000..567a8b0384
--- /dev/null
+++ b/example/client/gateway/filter/main.go
@@ -0,0 +1,242 @@
+// Copyright (C) 2019-2024 vdaas.org vald team
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// You may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package main
+
+import (
+ // NOTE:
+ // The correct approach is to use "github.com/vdaas/vald-client-go/v1/payload" and "github.com/vdaas/vald-client-go/v1/vald" in the "example/client".
+ // However, the "vald-client-go" module is not available in the filter client example
+ // because the changes to the filter query have not been released. (current version is v1.7.12)
+ // Therefore, the root module is used until it is released.
+ // The import path and go.mod will be changed after release.
+ "context"
+ "encoding/json"
+ "flag"
+ "time"
+
+ "github.com/kpango/glg"
+ "github.com/vdaas/vald/apis/grpc/v1/payload"
+ "github.com/vdaas/vald/apis/grpc/v1/vald"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
+)
+
+type dataset struct {
+ id string
+ vector []float32
+}
+
+var (
+ grpcServerAddr string
+ ingressServerHost string
+ ingressServerPort uint
+ egressServerHost string
+ egressServerPort uint
+ indexingWaitSeconds uint
+ dimension uint
+)
+
+func init() {
+ // init initializes the command-line flags with default values for the filter client setup.
+ /**
+ Addr option specifies grpc server address of filter gateway. Default value is `127.0.0.1:8081`.
+ Ingresshost option specifies grpc server host of your ingress filter. Default value is `127.0.0.1`.
+ Ingressport option specifies grpc server port of your ingress filter. Default value is `8082`.
+ Egresshost option specifies grpc server host of your egress filter. Default value is `127.0.0.1`.
+ Egressport option specifies grpc server port of your egress filter. Default value is `8083`.
+ Wait option specifies indexing wait time (in seconds). Default value is `240`.
+ Dimension option specifies dimension size of vectors. Default value is `784`.
+ **/
+ flag.StringVar(&grpcServerAddr, "addr", "127.0.0.1:8081", "gRPC server address of filter gateway")
+ flag.StringVar(&ingressServerHost, "ingresshost", "127.0.0.1", "ingress server host")
+ flag.UintVar(&ingressServerPort, "ingressport", 8082, "ingress server port")
+ flag.StringVar(&egressServerHost, "egresshost", "127.0.0.1", "egress server host")
+ flag.UintVar(&egressServerPort, "egressport", 8083, "egress server port")
+ flag.UintVar(&indexingWaitSeconds, "wait", 240, "indexing wait seconds")
+ flag.UintVar(&dimension, "dimension", 784, "dimension size of vectors")
+ flag.Parse()
+}
+
+// Please execute this example after setting up the Vald cluster and the ingress/egress filter servers.
+func main() {
+ dataset := genDataset()
+ query := "category=fashion"
+
+ // connect to the Vald cluster
+ ctx := context.Background()
+ conn, err := grpc.DialContext(ctx, grpcServerAddr, grpc.WithTransportCredentials(insecure.NewCredentials()))
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+
+ // create a filter client
+ glg.Info("Start inserting object via vald filter client")
+ var object []byte
+ fclient := vald.NewFilterClient(conn)
+
+ for _, ds := range dataset {
+ icfg := &payload.Insert_ObjectRequest{
+ // object data to pass to GenVector function of your ingress filter
+ Object: &payload.Object_Blob{
+ Id: ds.id,
+ Object: object,
+ },
+ // insert config
+ Config: &payload.Insert_Config{
+ SkipStrictExistCheck: false,
+ // config to call FilterVector function of your ingress filter
+ Filters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
+ Host: ingressServerHost,
+ Port: uint32(ingressServerPort),
+ },
+ Query: &payload.Filter_Query{},
+ },
+ },
+ },
+ // specify vectorizer component location
+ Vectorizer: &payload.Filter_Target{
+ Host: ingressServerHost,
+ Port: uint32(ingressServerPort),
+ },
+ }
+
+ // send InsertObject request
+ res, err := fclient.InsertObject(ctx, icfg)
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+
+ glg.Infof("location: %#v", res.Ips)
+ }
+
+ // Vald Agent starts indexing automatically after insert. It needs to wait until the indexing is completed before a search action is performed.
+ wt := time.Duration(indexingWaitSeconds) * time.Second
+ glg.Infof("Wait %s for indexing to finish", wt)
+ time.Sleep(wt)
+
+ // create a search client
+ glg.Log("Start searching dataset")
+ sclient := vald.NewSearchClient(conn)
+
+ for _, ds := range dataset {
+ scfg := &payload.Search_Config{
+ Num: 10,
+ Epsilon: 0.1,
+ Radius: -1,
+ // config to call DistanceVector function of your egress filter
+ EgressFilters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
+ Host: egressServerHost,
+ Port: uint32(egressServerPort),
+ },
+ Query: &payload.Filter_Query{
+ Query: query,
+ },
+ },
+ },
+ }
+
+ // send Search request
+ res, err := sclient.Search(ctx, &payload.Search_Request{
+ Vector: ds.vector,
+ Config: scfg,
+ })
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+ b, _ := json.MarshalIndent(res.GetResults(), "", " ")
+ glg.Infof("Results : %s\n\n", string(b))
+ }
+
+ // create an object client
+ glg.Info("Start GetObject")
+ oclient := vald.NewObjectClient(conn)
+
+ for _, ds := range dataset {
+ vreq := &payload.Object_VectorRequest{
+ Id: &payload.Object_ID{Id: ds.id},
+ // config to call FilterVector function of your egress filter
+ Filters: []*payload.Filter_Config{
+ {
+ Target: &payload.Filter_Target{
+ Host: egressServerHost,
+ Port: uint32(egressServerPort),
+ },
+ Query: &payload.Filter_Query{},
+ },
+ },
+ }
+
+ // send GetObject request
+ r, err := oclient.GetObject(ctx, vreq)
+ if err != nil {
+ glg.Error(err)
+ return
+ }
+ b, _ := json.Marshal(r.GetVector())
+ glg.Infof("Get Object result: %s\n", string(b))
+ }
+
+ // send remove request
+ glg.Info("Start removing data")
+ rclient := vald.NewRemoveClient(conn)
+
+ for _, ds := range dataset {
+ rreq := &payload.Remove_Request{
+ Id: &payload.Object_ID{
+ Id: ds.id,
+ },
+ }
+ if _, err := rclient.Remove(ctx, rreq); err != nil {
+ glg.Errorf("Failed to remove, ID: %v", ds.id)
+ } else {
+			glg.Infof("Remove ID %v succeeded", ds.id)
+ }
+ }
+}
+
+func genDataset() []dataset {
+ // create a data set for operation confirmation
+ makeVecFn := func(dim int, value float32) []float32 {
+ vec := make([]float32, dim)
+ for i := 0; i < dim; i++ {
+ vec[i] = value
+ }
+ return vec
+ }
+ return []dataset{
+ {
+ id: "1_fashion",
+ vector: makeVecFn(int(dimension), 0.1),
+ },
+ {
+ id: "2_food",
+ vector: makeVecFn(int(dimension), 0.2),
+ },
+ {
+ id: "3_fashion",
+ vector: makeVecFn(int(dimension), 0.3),
+ },
+ {
+ id: "4_pet",
+ vector: makeVecFn(int(dimension), 0.4),
+ },
+ }
+}
diff --git a/example/client/go.mod.default b/example/client/go.mod.default
index 205d365b8e..7e2ad8df5a 100644
--- a/example/client/go.mod.default
+++ b/example/client/go.mod.default
@@ -6,6 +6,7 @@ replace (
github.com/envoyproxy/protoc-gen-validate => github.com/envoyproxy/protoc-gen-validate latest
github.com/goccy/go-json => github.com/goccy/go-json latest
github.com/golang/protobuf => github.com/golang/protobuf latest
+ github.com/kpango/gache/v2 => github.com/kpango/gache/v2 latest
github.com/kpango/glg => github.com/kpango/glg latest
github.com/pkg/sftp => github.com/pkg/sftp latest
golang.org/x/crypto => golang.org/x/crypto latest
@@ -18,4 +19,5 @@ replace (
google.golang.org/protobuf => google.golang.org/protobuf latest
gopkg.in/yaml.v2 => gopkg.in/yaml.v2 latest
gopkg.in/yaml.v3 => gopkg.in/yaml.v3 latest
+ github.com/vdaas/vald => ../../../vald
)
diff --git a/example/client/mirror/main.go b/example/client/mirror/main.go
index 3bd0318166..3c9ce235fa 100644
--- a/example/client/mirror/main.go
+++ b/example/client/mirror/main.go
@@ -27,6 +27,7 @@ import (
"github.com/vdaas/vald-client-go/v1/vald"
"gonum.org/v1/hdf5"
"google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
)
const (
@@ -68,7 +69,7 @@ func main() {
// Creates Vald clients for connecting to Vald clusters.
clients := make([]vald.Client, 0, len(grpcServerAddrs))
for _, addr := range grpcServerAddrs {
- conn, err := grpc.NewClient(addr, grpc.WithInsecure())
+ conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
glg.Fatal(err)
}
diff --git a/example/manifest/filter/egress/Dockerfile b/example/manifest/filter/egress/Dockerfile
new file mode 100644
index 0000000000..49010d4424
--- /dev/null
+++ b/example/manifest/filter/egress/Dockerfile
@@ -0,0 +1,103 @@
+# syntax = docker/dockerfile:latest
+#
+# Copyright (C) 2019-2024 vdaas.org vald team
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# You may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+ARG GO_VERSION=latest
+ARG DISTROLESS_IMAGE=gcr.io/distroless/static
+ARG DISTROLESS_IMAGE_TAG=nonroot
+ARG MAINTAINER="vdaas.org vald team "
+
+FROM golang:${GO_VERSION} AS golang
+
+FROM ubuntu:devel AS builder
+
+ENV GO111MODULE on
+ENV DEBIAN_FRONTEND noninteractive
+ENV INITRD No
+ENV LANG en_US.UTF-8
+ENV GOROOT /opt/go
+ENV GOPATH /go
+ENV PATH ${PATH}:${GOROOT}/bin:${GOPATH}/bin
+ENV ORG vdaas
+ENV REPO vald
+ENV PKG filter/egress-filter
+ENV APP_NAME egress-filter
+ENV DIR example/server/egress-filter
+
+# skipcq: DOK-DL3008
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ ca-certificates \
+ build-essential \
+ upx \
+ git \
+ && apt-get clean \
+ && rm -rf /var/lib/apt/lists/*
+
+COPY --from=golang /usr/local/go $GOROOT
+RUN mkdir -p "$GOPATH/src"
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}
+
+COPY go.mod .
+COPY go.sum .
+
+RUN go mod download
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/example
+COPY example .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/internal
+COPY internal .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/apis/grpc
+COPY apis/grpc .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/pkg
+COPY pkg .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/versions
+COPY versions .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/Makefile.d
+COPY Makefile.d .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}
+COPY Makefile .
+
+RUN GO111MODULE=on \
+ go build \
+ --ldflags "-w -extldflags=-static \
+ -buildid=" \
+ -mod=readonly \
+ -modcacherw \
+ -a \
+ -tags "osusergo netgo static_build" \
+ -trimpath \
+ -o ${DIR}/${APP_NAME} \
+ ${DIR}/main.go
+RUN mv "${DIR}/${APP_NAME}" "/usr/bin/${APP_NAME}"
+
+
+FROM ${DISTROLESS_IMAGE}:${DISTROLESS_IMAGE_TAG}
+LABEL maintainer "${MAINTAINER}"
+
+ENV APP_NAME egress-filter
+
+COPY --from=builder /usr/bin/${APP_NAME} /go/bin/${APP_NAME}
+
+USER nonroot:nonroot
+
+ENTRYPOINT ["/go/bin/egress-filter"]
diff --git a/example/manifest/filter/egress/deployment.yaml b/example/manifest/filter/egress/deployment.yaml
new file mode 100644
index 0000000000..08be9a6d3e
--- /dev/null
+++ b/example/manifest/filter/egress/deployment.yaml
@@ -0,0 +1,43 @@
+#
+# Copyright (C) 2019-2024 vdaas.org vald team
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# You may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: vald-egress-filter
+ labels:
+ app: vald-egress-filter
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: vald-egress-filter
+ template:
+ metadata:
+ labels:
+ app: vald-egress-filter
+ spec:
+ securityContext:
+ runAsUser: 65532
+ runAsGroup: 65532
+ runAsNonRoot: true
+ fsGroup: 65532
+ containers:
+ - name: vald-egress-filter
+ image: DOCKERHUB_ID/vald-egress-filter:latest
+ imagePullPolicy: Always
+ ports:
+ - name: grpc
+ containerPort: 8083
diff --git a/example/manifest/filter/egress/service.yaml b/example/manifest/filter/egress/service.yaml
new file mode 100644
index 0000000000..8f045593f9
--- /dev/null
+++ b/example/manifest/filter/egress/service.yaml
@@ -0,0 +1,26 @@
+#
+# Copyright (C) 2019-2024 vdaas.org vald team
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# You may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: vald-egress-filter
+spec:
+ selector:
+ app: vald-egress-filter
+ ports:
+ - protocol: TCP
+ port: 8083
+ targetPort: 8083
diff --git a/example/manifest/filter/ingress/Dockerfile b/example/manifest/filter/ingress/Dockerfile
new file mode 100644
index 0000000000..7a14afbeae
--- /dev/null
+++ b/example/manifest/filter/ingress/Dockerfile
@@ -0,0 +1,103 @@
+# syntax = docker/dockerfile:latest
+#
+# Copyright (C) 2019-2024 vdaas.org vald team
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# You may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+ARG GO_VERSION=latest
+ARG DISTROLESS_IMAGE=gcr.io/distroless/static
+ARG DISTROLESS_IMAGE_TAG=nonroot
+ARG MAINTAINER="vdaas.org vald team "
+
+FROM golang:${GO_VERSION} AS golang
+
+FROM ubuntu:devel AS builder
+
+ENV GO111MODULE on
+ENV DEBIAN_FRONTEND noninteractive
+ENV INITRD No
+ENV LANG en_US.UTF-8
+ENV GOROOT /opt/go
+ENV GOPATH /go
+ENV PATH ${PATH}:${GOROOT}/bin:${GOPATH}/bin
+ENV ORG vdaas
+ENV REPO vald
+ENV PKG filter/ingress-filter
+ENV APP_NAME ingress-filter
+ENV DIR example/server/ingress-filter
+
+# skipcq: DOK-DL3008
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ ca-certificates \
+ build-essential \
+ upx \
+ git \
+ && apt-get clean \
+ && rm -rf /var/lib/apt/lists/*
+
+COPY --from=golang /usr/local/go $GOROOT
+RUN mkdir -p "$GOPATH/src"
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}
+
+COPY go.mod .
+COPY go.sum .
+
+RUN go mod download
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/example
+COPY example .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/internal
+COPY internal .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/apis/grpc
+COPY apis/grpc .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/pkg
+COPY pkg .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/versions
+COPY versions .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}/Makefile.d
+COPY Makefile.d .
+
+WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}
+COPY Makefile .
+
+RUN GO111MODULE=on \
+ go build \
+ --ldflags "-w -extldflags=-static \
+ -buildid=" \
+ -mod=readonly \
+ -modcacherw \
+ -a \
+ -tags "osusergo netgo static_build" \
+ -trimpath \
+ -o ${DIR}/${APP_NAME} \
+ ${DIR}/main.go
+RUN mv "${DIR}/${APP_NAME}" "/usr/bin/${APP_NAME}"
+
+
+FROM ${DISTROLESS_IMAGE}:${DISTROLESS_IMAGE_TAG}
+LABEL maintainer "${MAINTAINER}"
+
+ENV APP_NAME ingress-filter
+
+COPY --from=builder /usr/bin/${APP_NAME} /go/bin/${APP_NAME}
+
+USER nonroot:nonroot
+
+ENTRYPOINT ["/go/bin/ingress-filter"]
diff --git a/example/manifest/filter/ingress/deployment.yaml b/example/manifest/filter/ingress/deployment.yaml
new file mode 100644
index 0000000000..c3e6f4b92e
--- /dev/null
+++ b/example/manifest/filter/ingress/deployment.yaml
@@ -0,0 +1,43 @@
+#
+# Copyright (C) 2019-2024 vdaas.org vald team
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# You may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: vald-ingress-filter
+ labels:
+ app: vald-ingress-filter
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: vald-ingress-filter
+ template:
+ metadata:
+ labels:
+ app: vald-ingress-filter
+ spec:
+ securityContext:
+ runAsUser: 65532
+ runAsGroup: 65532
+ runAsNonRoot: true
+ fsGroup: 65532
+ containers:
+ - name: vald-ingress-filter
+ image: DOCKERHUB_ID/vald-ingress-filter:latest
+ imagePullPolicy: Always
+ ports:
+ - name: grpc
+ containerPort: 8082
diff --git a/example/manifest/filter/ingress/service.yaml b/example/manifest/filter/ingress/service.yaml
new file mode 100644
index 0000000000..47e3c5914c
--- /dev/null
+++ b/example/manifest/filter/ingress/service.yaml
@@ -0,0 +1,26 @@
+#
+# Copyright (C) 2019-2024 vdaas.org vald team
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# You may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: vald-ingress-filter
+spec:
+ selector:
+ app: vald-ingress-filter
+ ports:
+ - protocol: TCP
+ port: 8082
+ targetPort: 8082
diff --git a/example/server/egress-filter/main.go b/example/server/egress-filter/main.go
new file mode 100644
index 0000000000..acd806d269
--- /dev/null
+++ b/example/server/egress-filter/main.go
@@ -0,0 +1,123 @@
+// Copyright (C) 2019-2024 vdaas.org vald team
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// You may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package main
+
+import (
+ "context"
+ "flag"
+ "fmt"
+ "net"
+ "os"
+ "os/signal"
+ "strings"
+
+ "github.com/kpango/glg"
+ "github.com/vdaas/vald/apis/grpc/v1/filter/egress"
+ "github.com/vdaas/vald/apis/grpc/v1/payload"
+ "github.com/vdaas/vald/internal/net/grpc/codes"
+ "github.com/vdaas/vald/internal/net/grpc/status"
+ "google.golang.org/grpc"
+)
+
+var (
+ egressServerPort uint
+ dimension uint
+)
+
+func init() {
+ // init initializes the command-line flags with default values for the filter setup.
+ /**
+ Port option specifies grpc server port of your egress filter. Default value is `8083`.
+ Dimension option specifies dimension size of vectors. Default value is `784`.
+ **/
+ flag.UintVar(&egressServerPort, "port", 8083, "server port")
+ flag.UintVar(&dimension, "dimension", 784, "dimension size of vectors")
+ flag.Parse()
+}
+
+func getSplitValue(str string, sep string, pos uint) (string, bool) {
+ ss := strings.Split(str, sep)
+ if len(ss) == int(pos+1) {
+ return ss[pos], true
+ }
+
+ return "", false
+}
+
+type myEgressServer struct {
+ egress.UnimplementedFilterServer
+}
+
+func (s *myEgressServer) FilterDistance(
+ ctx context.Context, in *payload.Filter_DistanceRequest,
+) (*payload.Filter_DistanceResponse, error) {
+	glg.Logf("filtering distance %#v", in)
+ qCategory, ok := getSplitValue(in.GetQuery().GetQuery(), "=", 1)
+ if !ok {
+ return &payload.Filter_DistanceResponse{
+ Distance: in.GetDistance(),
+ }, nil
+ }
+
+ filteredDis := []*payload.Object_Distance{}
+ for _, d := range in.GetDistance() {
+ iCategory, ok := getSplitValue(d.GetId(), "_", 1)
+ if !ok {
+ continue
+ }
+ glg.Infof("qCategory: %v, iCategory: %v", qCategory, iCategory)
+ if qCategory == iCategory {
+ filteredDis = append(filteredDis, d)
+ }
+ }
+
+ if len(filteredDis) == 0 {
+ return nil, status.Error(codes.NotFound, "FilterDistance results not found.")
+ }
+
+ return &payload.Filter_DistanceResponse{
+ Distance: filteredDis,
+ }, nil
+}
+
+func (s *myEgressServer) FilterVector(
+ ctx context.Context, in *payload.Filter_VectorRequest,
+) (*payload.Filter_VectorResponse, error) {
+ // Write your own logic
+ glg.Logf("filtering the vector %#v", in)
+ return &payload.Filter_VectorResponse{
+ Vector: in.GetVector(),
+ }, nil
+}
+
+func main() {
+ listener, err := net.Listen("tcp", fmt.Sprintf(":%d", egressServerPort))
+ if err != nil {
+ glg.Fatal(err)
+ }
+
+ s := grpc.NewServer()
+ egress.RegisterFilterServer(s, &myEgressServer{})
+
+ go func() {
+ glg.Infof("start gRPC server port: %v", egressServerPort)
+ s.Serve(listener)
+ }()
+
+ quit := make(chan os.Signal, 1)
+ signal.Notify(quit, os.Interrupt)
+ <-quit
+ glg.Infof("stopping gRPC server...")
+ s.GracefulStop()
+}
diff --git a/example/server/ingress-filter/main.go b/example/server/ingress-filter/main.go
new file mode 100644
index 0000000000..979d96d706
--- /dev/null
+++ b/example/server/ingress-filter/main.go
@@ -0,0 +1,92 @@
+// Copyright (C) 2019-2024 vdaas.org vald team
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// You may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package main
+
+import (
+ "context"
+ "flag"
+ "fmt"
+ "net"
+ "os"
+ "os/signal"
+
+ "github.com/kpango/glg"
+ "github.com/vdaas/vald/apis/grpc/v1/filter/ingress"
+ "github.com/vdaas/vald/apis/grpc/v1/payload"
+ "github.com/vdaas/vald/internal/test/data/vector"
+ "google.golang.org/grpc"
+)
+
+var (
+ ingressServerPort uint
+ dimension uint
+)
+
+func init() {
+ /**
+ Port option specifies grpc server port of your ingress filter. Default value is `8082`.
+ Dimension option specifies dimension size of vectors. Default value is `784`.
+ **/
+ flag.UintVar(&ingressServerPort, "port", 8082, "server port")
+ flag.UintVar(&dimension, "dimension", 784, "dimension size of vectors")
+ flag.Parse()
+}
+
+type myIngressServer struct {
+ ingress.UnimplementedFilterServer
+}
+
+func (s *myIngressServer) GenVector(
+ ctx context.Context, in *payload.Object_Blob,
+) (*payload.Object_Vector, error) {
+ // Write your own logic
+ glg.Logf("generating vector %#v", in)
+ vec, err := vector.GenF32Vec(vector.Gaussian, 1, int(dimension))
+ if err != nil {
+ return nil, err
+ }
+ return &payload.Object_Vector{
+ Id: in.GetId(),
+ Vector: vec[0],
+ }, nil
+}
+
+func (s *myIngressServer) FilterVector(
+ ctx context.Context, in *payload.Object_Vector,
+) (*payload.Object_Vector, error) {
+ // Write your own logic
+ glg.Logf("filtering vector %#v", in)
+ return in, nil
+}
+
+func main() {
+ listener, err := net.Listen("tcp", fmt.Sprintf(":%d", ingressServerPort))
+ if err != nil {
+ panic(err)
+ }
+
+ s := grpc.NewServer()
+ ingress.RegisterFilterServer(s, &myIngressServer{})
+
+ go func() {
+ glg.Infof("start gRPC server port: %v", ingressServerPort)
+ s.Serve(listener)
+ }()
+
+ quit := make(chan os.Signal, 1)
+ signal.Notify(quit, os.Interrupt)
+ <-quit
+ glg.Infof("stopping gRPC server...")
+ s.GracefulStop()
+}
diff --git a/internal/client/v1/client/filter/egress/client.go b/internal/client/v1/client/filter/egress/client.go
index 3e550db635..eda0a58009 100644
--- a/internal/client/v1/client/filter/egress/client.go
+++ b/internal/client/v1/client/filter/egress/client.go
@@ -131,8 +131,8 @@ func (c *client) Target(ctx context.Context, targets ...string) (egress.FilterCl
}
func (c *client) FilterDistance(
- ctx context.Context, in *payload.Object_Distance, opts ...grpc.CallOption,
-) (res *payload.Object_Distance, err error) {
+ ctx context.Context, in *payload.Filter_DistanceRequest, opts ...grpc.CallOption,
+) (res *payload.Filter_DistanceResponse, err error) {
ctx, span := trace.StartSpan(ctx, apiName+"/Client.FilterDistance")
defer func() {
if span != nil {
@@ -144,7 +144,7 @@ func (c *client) FilterDistance(
copts ...grpc.CallOption,
) (any, error) {
res, err = egress.NewFilterClient(conn).FilterDistance(ctx, in, append(copts, opts...)...)
- return nil, err
+ return res, err
})
if err != nil {
return nil, err
@@ -153,8 +153,8 @@ func (c *client) FilterDistance(
}
func (s *specificAddrClient) FilterDistance(
- ctx context.Context, in *payload.Object_Distance, opts ...grpc.CallOption,
-) (res *payload.Object_Distance, err error) {
+ ctx context.Context, in *payload.Filter_DistanceRequest, opts ...grpc.CallOption,
+) (res *payload.Filter_DistanceResponse, err error) {
ctx, span := trace.StartSpan(ctx, apiName+"/Client.FilterDistance/"+s.addr)
defer func() {
if span != nil {
@@ -166,11 +166,7 @@ func (s *specificAddrClient) FilterDistance(
copts ...grpc.CallOption,
) (any, error) {
res, err = egress.NewFilterClient(conn).FilterDistance(ctx, in, append(copts, opts...)...)
- if err != nil {
- return nil, err
- }
- in = res
- return in, nil
+ return res, err
})
if err != nil {
return nil, err
@@ -179,8 +175,8 @@ func (s *specificAddrClient) FilterDistance(
}
func (m *multipleAddrsClient) FilterDistance(
- ctx context.Context, in *payload.Object_Distance, opts ...grpc.CallOption,
-) (res *payload.Object_Distance, err error) {
+ ctx context.Context, in *payload.Filter_DistanceRequest, opts ...grpc.CallOption,
+) (res *payload.Filter_DistanceResponse, err error) {
ctx, span := trace.StartSpan(ctx, apiName+"/Client.FilterDistance/["+strings.Join(m.addrs, ",")+"]")
defer func() {
if span != nil {
@@ -195,8 +191,8 @@ func (m *multipleAddrsClient) FilterDistance(
if err != nil {
return err
}
- in = res
- return nil
+ in.Distance = res.Distance
+ return err
})
if err != nil {
return nil, err
@@ -205,8 +201,8 @@ func (m *multipleAddrsClient) FilterDistance(
}
func (c *client) FilterVector(
- ctx context.Context, in *payload.Object_Vector, opts ...grpc.CallOption,
-) (res *payload.Object_Vector, err error) {
+ ctx context.Context, in *payload.Filter_VectorRequest, opts ...grpc.CallOption,
+) (res *payload.Filter_VectorResponse, err error) {
ctx, span := trace.StartSpan(ctx, apiName+"/Client.FilterVector")
defer func() {
if span != nil {
@@ -218,7 +214,7 @@ func (c *client) FilterVector(
copts ...grpc.CallOption,
) (any, error) {
res, err = egress.NewFilterClient(conn).FilterVector(ctx, in, append(copts, opts...)...)
- return nil, err
+ return res, err
})
if err != nil {
return nil, err
@@ -227,8 +223,8 @@ func (c *client) FilterVector(
}
func (s *specificAddrClient) FilterVector(
- ctx context.Context, in *payload.Object_Vector, opts ...grpc.CallOption,
-) (res *payload.Object_Vector, err error) {
+ ctx context.Context, in *payload.Filter_VectorRequest, opts ...grpc.CallOption,
+) (res *payload.Filter_VectorResponse, err error) {
ctx, span := trace.StartSpan(ctx, apiName+"/Client.FilterVector/"+s.addr)
defer func() {
if span != nil {
@@ -240,11 +236,7 @@ func (s *specificAddrClient) FilterVector(
copts ...grpc.CallOption,
) (any, error) {
res, err = egress.NewFilterClient(conn).FilterVector(ctx, in, append(copts, opts...)...)
- if err != nil {
- return nil, err
- }
- in = res
- return in, nil
+ return res, err
})
if err != nil {
return nil, err
@@ -253,8 +245,8 @@ func (s *specificAddrClient) FilterVector(
}
func (m *multipleAddrsClient) FilterVector(
- ctx context.Context, in *payload.Object_Vector, opts ...grpc.CallOption,
-) (res *payload.Object_Vector, err error) {
+ ctx context.Context, in *payload.Filter_VectorRequest, opts ...grpc.CallOption,
+) (res *payload.Filter_VectorResponse, err error) {
ctx, span := trace.StartSpan(ctx, apiName+"/Client.FilterVector/["+strings.Join(m.addrs, ",")+"]")
defer func() {
if span != nil {
@@ -269,8 +261,8 @@ func (m *multipleAddrsClient) FilterVector(
if err != nil {
return err
}
- in = res
- return nil
+			in.Vector = res.Vector
+ return err
})
if err != nil {
return nil, err
diff --git a/internal/client/v1/client/filter/egress/client_test.go b/internal/client/v1/client/filter/egress/client_test.go
index 44c87c0a9f..24f370c376 100644
--- a/internal/client/v1/client/filter/egress/client_test.go
+++ b/internal/client/v1/client/filter/egress/client_test.go
@@ -536,7 +536,7 @@ package egress
// func Test_client_FilterDistance(t *testing.T) {
// type args struct {
// ctx context.Context
-// in *payload.Object_Distance
+// in *payload.Filter_DistanceRequest
// opts []grpc.CallOption
// }
// type fields struct {
@@ -545,7 +545,7 @@ package egress
// c grpc.Client
// }
// type want struct {
-// wantRes *payload.Object_Distance
+// wantRes *payload.Filter_DistanceResponse
// err error
// }
// type test struct {
@@ -553,11 +553,11 @@ package egress
// args args
// fields fields
// want want
-// checkFunc func(want, *payload.Object_Distance, error) error
+// checkFunc func(want, *payload.Filter_DistanceResponse, error) error
// beforeFunc func(*testing.T, args)
// afterFunc func(*testing.T, args)
// }
-// defaultCheckFunc := func(w want, gotRes *payload.Object_Distance, err error) error {
+// defaultCheckFunc := func(w want, gotRes *payload.Filter_DistanceResponse, err error) error {
// if !errors.Is(err, w.err) {
// return errors.Errorf("got_error: \"%#v\",\n\t\t\t\twant: \"%#v\"", err, w.err)
// }
@@ -652,7 +652,7 @@ package egress
// func Test_specificAddrClient_FilterDistance(t *testing.T) {
// type args struct {
// ctx context.Context
-// in *payload.Object_Distance
+// in *payload.Filter_DistanceRequest
// opts []grpc.CallOption
// }
// type fields struct {
@@ -660,7 +660,7 @@ package egress
// c grpc.Client
// }
// type want struct {
-// wantRes *payload.Object_Distance
+// wantRes *payload.Filter_DistanceResponse
// err error
// }
// type test struct {
@@ -668,11 +668,11 @@ package egress
// args args
// fields fields
// want want
-// checkFunc func(want, *payload.Object_Distance, error) error
+// checkFunc func(want, *payload.Filter_DistanceResponse, error) error
// beforeFunc func(*testing.T, args)
// afterFunc func(*testing.T, args)
// }
-// defaultCheckFunc := func(w want, gotRes *payload.Object_Distance, err error) error {
+// defaultCheckFunc := func(w want, gotRes *payload.Filter_DistanceResponse, err error) error {
// if !errors.Is(err, w.err) {
// return errors.Errorf("got_error: \"%#v\",\n\t\t\t\twant: \"%#v\"", err, w.err)
// }
@@ -764,7 +764,7 @@ package egress
// func Test_multipleAddrsClient_FilterDistance(t *testing.T) {
// type args struct {
// ctx context.Context
-// in *payload.Object_Distance
+// in *payload.Filter_DistanceRequest
// opts []grpc.CallOption
// }
// type fields struct {
@@ -772,7 +772,7 @@ package egress
// c grpc.Client
// }
// type want struct {
-// wantRes *payload.Object_Distance
+// wantRes *payload.Filter_DistanceResponse
// err error
// }
// type test struct {
@@ -780,11 +780,11 @@ package egress
// args args
// fields fields
// want want
-// checkFunc func(want, *payload.Object_Distance, error) error
+// checkFunc func(want, *payload.Filter_DistanceResponse, error) error
// beforeFunc func(*testing.T, args)
// afterFunc func(*testing.T, args)
// }
-// defaultCheckFunc := func(w want, gotRes *payload.Object_Distance, err error) error {
+// defaultCheckFunc := func(w want, gotRes *payload.Filter_DistanceResponse, err error) error {
// if !errors.Is(err, w.err) {
// return errors.Errorf("got_error: \"%#v\",\n\t\t\t\twant: \"%#v\"", err, w.err)
// }
@@ -876,7 +876,7 @@ package egress
// func Test_client_FilterVector(t *testing.T) {
// type args struct {
// ctx context.Context
-// in *payload.Object_Vector
+// in *payload.Filter_VectorRequest
// opts []grpc.CallOption
// }
// type fields struct {
@@ -885,7 +885,7 @@ package egress
// c grpc.Client
// }
// type want struct {
-// wantRes *payload.Object_Vector
+// wantRes *payload.Filter_VectorResponse
// err error
// }
// type test struct {
@@ -893,11 +893,11 @@ package egress
// args args
// fields fields
// want want
-// checkFunc func(want, *payload.Object_Vector, error) error
+// checkFunc func(want, *payload.Filter_VectorResponse, error) error
// beforeFunc func(*testing.T, args)
// afterFunc func(*testing.T, args)
// }
-// defaultCheckFunc := func(w want, gotRes *payload.Object_Vector, err error) error {
+// defaultCheckFunc := func(w want, gotRes *payload.Filter_VectorResponse, err error) error {
// if !errors.Is(err, w.err) {
// return errors.Errorf("got_error: \"%#v\",\n\t\t\t\twant: \"%#v\"", err, w.err)
// }
@@ -992,7 +992,7 @@ package egress
// func Test_specificAddrClient_FilterVector(t *testing.T) {
// type args struct {
// ctx context.Context
-// in *payload.Object_Vector
+// in *payload.Filter_VectorRequest
// opts []grpc.CallOption
// }
// type fields struct {
@@ -1000,7 +1000,7 @@ package egress
// c grpc.Client
// }
// type want struct {
-// wantRes *payload.Object_Vector
+// wantRes *payload.Filter_VectorResponse
// err error
// }
// type test struct {
@@ -1008,11 +1008,11 @@ package egress
// args args
// fields fields
// want want
-// checkFunc func(want, *payload.Object_Vector, error) error
+// checkFunc func(want, *payload.Filter_VectorResponse, error) error
// beforeFunc func(*testing.T, args)
// afterFunc func(*testing.T, args)
// }
-// defaultCheckFunc := func(w want, gotRes *payload.Object_Vector, err error) error {
+// defaultCheckFunc := func(w want, gotRes *payload.Filter_VectorResponse, err error) error {
// if !errors.Is(err, w.err) {
// return errors.Errorf("got_error: \"%#v\",\n\t\t\t\twant: \"%#v\"", err, w.err)
// }
@@ -1104,7 +1104,7 @@ package egress
// func Test_multipleAddrsClient_FilterVector(t *testing.T) {
// type args struct {
// ctx context.Context
-// in *payload.Object_Vector
+// in *payload.Filter_VectorRequest
// opts []grpc.CallOption
// }
// type fields struct {
@@ -1112,7 +1112,7 @@ package egress
// c grpc.Client
// }
// type want struct {
-// wantRes *payload.Object_Vector
+// wantRes *payload.Filter_VectorResponse
// err error
// }
// type test struct {
@@ -1120,11 +1120,11 @@ package egress
// args args
// fields fields
// want want
-// checkFunc func(want, *payload.Object_Vector, error) error
+// checkFunc func(want, *payload.Filter_VectorResponse, error) error
// beforeFunc func(*testing.T, args)
// afterFunc func(*testing.T, args)
// }
-// defaultCheckFunc := func(w want, gotRes *payload.Object_Vector, err error) error {
+// defaultCheckFunc := func(w want, gotRes *payload.Filter_VectorResponse, err error) error {
// if !errors.Is(err, w.err) {
// return errors.Errorf("got_error: \"%#v\",\n\t\t\t\twant: \"%#v\"", err, w.err)
// }
diff --git a/internal/config/benchmark.go b/internal/config/benchmark.go
index 7b8ab66b94..0bf2a092b9 100644
--- a/internal/config/benchmark.go
+++ b/internal/config/benchmark.go
@@ -155,11 +155,15 @@ func (cfg *RemoveConfig) Bind() *RemoveConfig {
// ObjectConfig defines the desired state of object config.
type ObjectConfig struct {
- FilterConfig FilterConfig `json:"filter_config,omitempty" yaml:"filter_config"`
+ FilterConfigs []*FilterConfig `json:"filter_configs,omitempty" yaml:"filter_configs"`
}
func (cfg *ObjectConfig) Bind() *ObjectConfig {
- cfg.FilterConfig = *cfg.FilterConfig.Bind()
+ for i := 0; i < len(cfg.FilterConfigs); i++ {
+ if cfg.FilterConfigs[i] != nil {
+ cfg.FilterConfigs[i] = cfg.FilterConfigs[i].Bind()
+ }
+ }
return cfg
}
@@ -174,14 +178,33 @@ func (cfg *FilterTarget) Bind() *FilterTarget {
return cfg
}
+// FilterQuery defines the query passed to filter target.
+type FilterQuery struct {
+ Query string `json:"query,omitempty" yaml:"query"`
+}
+
+func (cfg *FilterQuery) Bind() *FilterQuery {
+ cfg.Query = GetActualValue(cfg.Query)
+ return cfg
+}
+
// FilterConfig defines the desired state of filter config.
type FilterConfig struct {
- Targets []*FilterTarget `json:"target,omitempty" yaml:"target"`
+ Target *FilterTarget `json:"target,omitempty" yaml:"target"`
+ Query *FilterQuery `json:"query,omitempty" yaml:"query"`
}
func (cfg *FilterConfig) Bind() *FilterConfig {
- for i := 0; i < len(cfg.Targets); i++ {
- cfg.Targets[i] = cfg.Targets[i].Bind()
+ if cfg.Target != nil {
+ cfg.Target = cfg.Target.Bind()
+ } else {
+ cfg.Target = (&FilterTarget{}).Bind()
+ }
+
+ if cfg.Query != nil {
+ cfg.Query = cfg.Query.Bind()
+ } else {
+ cfg.Query = (&FilterQuery{}).Bind()
}
return cfg
}
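
To illustrate the new shape, a benchmark object config under this schema pairs each entry's target with an optional query. The snippet below is a sketch only: the `config` alias for `internal/config` and the `Host`/`Port` fields of `FilterTarget` are assumptions, since that struct's fields are not shown in this hunk.

```go
// One filter entry per element: a single target plus its query.
cfg := (&config.ObjectConfig{
	FilterConfigs: []*config.FilterConfig{
		{
			Target: &config.FilterTarget{Host: "vald-egress-filter", Port: 8083},
			Query:  &config.FilterQuery{Query: "category=fashion"},
		},
	},
}).Bind()
```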
diff --git a/internal/config/filter.go b/internal/config/filter.go
index 61ef3b21fa..f546838292 100644
--- a/internal/config/filter.go
+++ b/internal/config/filter.go
@@ -19,9 +19,9 @@ package config
// EgressFilter represents the EgressFilter configuration.
type EgressFilter struct {
- Client *GRPCClient `json:"client,omitempty" yaml:"client"`
- DistanceFilters []string `json:"distance_filters,omitempty" yaml:"distance_filters"`
- ObjectFilters []string `json:"object_filters,omitempty" yaml:"object_filters"`
+ Client *GRPCClient `json:"client,omitempty" yaml:"client"`
+ DistanceFilters []*DistanceFilterConfig `json:"distance_filters,omitempty" yaml:"distance_filters"`
+ ObjectFilters []*ObjectFilterConfig `json:"object_filters,omitempty" yaml:"object_filters"`
}
// IngressFilter represents the IngressFilter configuration.
@@ -34,16 +34,28 @@ type IngressFilter struct {
UpsertFilters []string `json:"upsert_filters,omitempty" yaml:"upsert_filters"`
}
+// DistanceFilterConfig represents the DistanceFilter configuration.
+type DistanceFilterConfig struct {
+ Addr string `json:"addr,omitempty" yaml:"addr"`
+ Query string `json:"query,omitempty" yaml:"query"`
+}
+
+// ObjectFilterConfig represents the ObjectFilter configuration.
+type ObjectFilterConfig struct {
+ Addr string `json:"addr,omitempty" yaml:"addr"`
+ Query string `json:"query,omitempty" yaml:"query"`
+}
+
// Bind binds the actual data from the EgressFilter receiver field.
func (e *EgressFilter) Bind() *EgressFilter {
if e.Client != nil {
e.Client.Bind()
}
- if e.DistanceFilters != nil {
- e.DistanceFilters = GetActualValues(e.DistanceFilters)
+ for _, df := range e.DistanceFilters {
+ df.Bind()
}
- if e.ObjectFilters != nil {
- e.ObjectFilters = GetActualValues(e.ObjectFilters)
+ for _, of := range e.ObjectFilters {
+ of.Bind()
}
return e
}
@@ -70,3 +82,17 @@ func (i *IngressFilter) Bind() *IngressFilter {
}
return i
}
+
+// Bind binds the actual data from the DistanceFilterConfig receiver field.
+func (c *DistanceFilterConfig) Bind() *DistanceFilterConfig {
+ c.Addr = GetActualValue(c.Addr)
+ c.Query = GetActualValue(c.Query)
+ return c
+}
+
+// Bind binds the actual data from the ObjectFilterConfig receiver field.
+func (c *ObjectFilterConfig) Bind() *ObjectFilterConfig {
+ c.Addr = GetActualValue(c.Addr)
+ c.Query = GetActualValue(c.Query)
+ return c
+}
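
For illustration, a small sketch (again outside the diff, and only compilable inside the vald module) of populating the reshaped EgressFilter with the new DistanceFilterConfig and ObjectFilterConfig entries and binding it; the addresses and query strings are placeholders.

```go
// Minimal sketch of the reshaped EgressFilter configuration and its Bind step.
package main

import (
	"fmt"

	"github.com/vdaas/vald/internal/config"
)

func main() {
	e := &config.EgressFilter{
		DistanceFilters: []*config.DistanceFilterConfig{
			{Addr: "vald-distance-filter.default.svc.cluster.local:8081", Query: "radius < 0.5"},
		},
		ObjectFilters: []*config.ObjectFilterConfig{
			{Addr: "vald-object-filter.default.svc.cluster.local:8081", Query: "lang == 'en'"},
		},
	}
	// Bind resolves "_NAME_" placeholders in Addr and Query via GetActualValue,
	// as shown in the DistanceFilterConfig/ObjectFilterConfig Bind methods above.
	e.Bind()
	fmt.Println(e.DistanceFilters[0].Addr)
}
```
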
diff --git a/internal/config/filter_test.go b/internal/config/filter_test.go
index b682201f97..d97561d883 100644
--- a/internal/config/filter_test.go
+++ b/internal/config/filter_test.go
@@ -28,8 +28,8 @@ import (
func TestEgressFilter_Bind(t *testing.T) {
type fields struct {
Client *GRPCClient
- DistanceFilters []string
- ObjectFilters []string
+ DistanceFilters []*DistanceFilterConfig
+ ObjectFilters []*ObjectFilterConfig
}
type want struct {
want *EgressFilter
@@ -53,20 +53,32 @@ func TestEgressFilter_Bind(t *testing.T) {
return test{
name: "return EgressFilter when the bind successes",
fields: fields{
- DistanceFilters: []string{
- "192.168.1.2",
+ DistanceFilters: []*DistanceFilterConfig{
+ {
+ Addr: "192.168.1.2",
+ Query: "distQuery",
+ },
},
- ObjectFilters: []string{
- "192.168.1.3",
+ ObjectFilters: []*ObjectFilterConfig{
+ {
+ Addr: "192.168.1.3",
+ Query: "objQuery",
+ },
},
},
want: want{
want: &EgressFilter{
- DistanceFilters: []string{
- "192.168.1.2",
+ DistanceFilters: []*DistanceFilterConfig{
+ {
+ Addr: "192.168.1.2",
+ Query: "distQuery",
+ },
},
- ObjectFilters: []string{
- "192.168.1.3",
+ ObjectFilters: []*ObjectFilterConfig{
+ {
+ Addr: "192.168.1.3",
+ Query: "objQuery",
+ },
},
},
},
@@ -76,21 +88,33 @@ func TestEgressFilter_Bind(t *testing.T) {
return test{
name: "return EgressFilter when the bind successes and the Client is not nil",
fields: fields{
- DistanceFilters: []string{
- "192.168.1.2",
+ DistanceFilters: []*DistanceFilterConfig{
+ {
+ Addr: "192.168.1.2",
+ Query: "distQuery",
+ },
},
- ObjectFilters: []string{
- "192.168.1.3",
+ ObjectFilters: []*ObjectFilterConfig{
+ {
+ Addr: "192.168.1.3",
+ Query: "objQuery",
+ },
},
Client: new(GRPCClient),
},
want: want{
want: &EgressFilter{
- DistanceFilters: []string{
- "192.168.1.2",
+ DistanceFilters: []*DistanceFilterConfig{
+ {
+ Addr: "192.168.1.2",
+ Query: "distQuery",
+ },
},
- ObjectFilters: []string{
- "192.168.1.3",
+ ObjectFilters: []*ObjectFilterConfig{
+ {
+ Addr: "192.168.1.3",
+ Query: "objQuery",
+ },
},
Client: &GRPCClient{
ConnectionPool: new(ConnectionPool),
@@ -106,17 +130,25 @@ func TestEgressFilter_Bind(t *testing.T) {
func() test {
suffix := "_FOR_TEST_EGRESS_FILTER_BIND"
m := map[string]string{
- "DISTANCE_FILTERS" + suffix: "192.168.1.2",
- "OBJECT_FILTERS" + suffix: "192.168.1.3",
+ "DISTANCE_FILTERS" + suffix: "192.168.1.2",
+ "OBJECT_FILTERS" + suffix: "192.168.1.3",
+ "DISTANCE_FILTERS_QUERY" + suffix: "distQuery",
+ "OBJECT_FILTERS_QUERY" + suffix: "objQuery",
}
return test{
name: "return EgressFilter when the bind successes and the data is loaded from the environment variable",
fields: fields{
- DistanceFilters: []string{
- "_DISTANCE_FILTERS" + suffix + "_",
+ DistanceFilters: []*DistanceFilterConfig{
+ {
+ Addr: "_DISTANCE_FILTERS" + suffix + "_",
+ Query: "_DISTANCE_FILTERS_QUERY" + suffix + "_",
+ },
},
- ObjectFilters: []string{
- "_OBJECT_FILTERS" + suffix + "_",
+ ObjectFilters: []*ObjectFilterConfig{
+ {
+ Addr: "_OBJECT_FILTERS" + suffix + "_",
+ Query: "_OBJECT_FILTERS_QUERY" + suffix + "_",
+ },
},
},
beforeFunc: func(t *testing.T) {
@@ -127,11 +159,17 @@ func TestEgressFilter_Bind(t *testing.T) {
},
want: want{
want: &EgressFilter{
- DistanceFilters: []string{
- "192.168.1.2",
+ DistanceFilters: []*DistanceFilterConfig{
+ {
+ Addr: "192.168.1.2",
+ Query: "distQuery",
+ },
},
- ObjectFilters: []string{
- "192.168.1.3",
+ ObjectFilters: []*ObjectFilterConfig{
+ {
+ Addr: "192.168.1.3",
+ Query: "objQuery",
+ },
},
},
},
@@ -362,3 +400,189 @@ func TestIngressFilter_Bind(t *testing.T) {
}
// NOT IMPLEMENTED BELOW
+//
+// func TestDistanceFilterConfig_Bind(t *testing.T) {
+// type fields struct {
+// Addr string
+// Query string
+// }
+// type want struct {
+// want *DistanceFilterConfig
+// }
+// type test struct {
+// name string
+// fields fields
+// want want
+// checkFunc func(want, *DistanceFilterConfig) error
+// beforeFunc func(*testing.T)
+// afterFunc func(*testing.T)
+// }
+// defaultCheckFunc := func(w want, got *DistanceFilterConfig) error {
+// if !reflect.DeepEqual(got, w.want) {
+// return errors.Errorf("got: \"%#v\",\n\t\t\t\twant: \"%#v\"", got, w.want)
+// }
+// return nil
+// }
+// tests := []test{
+// // TODO test cases
+// /*
+// {
+// name: "test_case_1",
+// fields: fields {
+// Addr:"",
+// Query:"",
+// },
+// want: want{},
+// checkFunc: defaultCheckFunc,
+// beforeFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// afterFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// },
+// */
+//
+// // TODO test cases
+// /*
+// func() test {
+// return test {
+// name: "test_case_2",
+// fields: fields {
+// Addr:"",
+// Query:"",
+// },
+// want: want{},
+// checkFunc: defaultCheckFunc,
+// beforeFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// afterFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// }
+// }(),
+// */
+// }
+//
+// for _, tc := range tests {
+// test := tc
+// t.Run(test.name, func(tt *testing.T) {
+// tt.Parallel()
+// defer goleak.VerifyNone(tt, goleak.IgnoreCurrent())
+// if test.beforeFunc != nil {
+// test.beforeFunc(tt)
+// }
+// if test.afterFunc != nil {
+// defer test.afterFunc(tt)
+// }
+// checkFunc := test.checkFunc
+// if test.checkFunc == nil {
+// checkFunc = defaultCheckFunc
+// }
+// c := &DistanceFilterConfig{
+// Addr: test.fields.Addr,
+// Query: test.fields.Query,
+// }
+//
+// got := c.Bind()
+// if err := checkFunc(test.want, got); err != nil {
+// tt.Errorf("error = %v", err)
+// }
+//
+// })
+// }
+// }
+//
+// func TestObjectFilterConfig_Bind(t *testing.T) {
+// type fields struct {
+// Addr string
+// Query string
+// }
+// type want struct {
+// want *ObjectFilterConfig
+// }
+// type test struct {
+// name string
+// fields fields
+// want want
+// checkFunc func(want, *ObjectFilterConfig) error
+// beforeFunc func(*testing.T)
+// afterFunc func(*testing.T)
+// }
+// defaultCheckFunc := func(w want, got *ObjectFilterConfig) error {
+// if !reflect.DeepEqual(got, w.want) {
+// return errors.Errorf("got: \"%#v\",\n\t\t\t\twant: \"%#v\"", got, w.want)
+// }
+// return nil
+// }
+// tests := []test{
+// // TODO test cases
+// /*
+// {
+// name: "test_case_1",
+// fields: fields {
+// Addr:"",
+// Query:"",
+// },
+// want: want{},
+// checkFunc: defaultCheckFunc,
+// beforeFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// afterFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// },
+// */
+//
+// // TODO test cases
+// /*
+// func() test {
+// return test {
+// name: "test_case_2",
+// fields: fields {
+// Addr:"",
+// Query:"",
+// },
+// want: want{},
+// checkFunc: defaultCheckFunc,
+// beforeFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// afterFunc: func(t *testing.T,) {
+// t.Helper()
+// },
+// }
+// }(),
+// */
+// }
+//
+// for _, tc := range tests {
+// test := tc
+// t.Run(test.name, func(tt *testing.T) {
+// tt.Parallel()
+// defer goleak.VerifyNone(tt, goleak.IgnoreCurrent())
+// if test.beforeFunc != nil {
+// test.beforeFunc(tt)
+// }
+// if test.afterFunc != nil {
+// defer test.afterFunc(tt)
+// }
+// checkFunc := test.checkFunc
+// if test.checkFunc == nil {
+// checkFunc = defaultCheckFunc
+// }
+// c := &ObjectFilterConfig{
+// Addr: test.fields.Addr,
+// Query: test.fields.Query,
+// }
+//
+// got := c.Bind()
+// if err := checkFunc(test.want, got); err != nil {
+// tt.Errorf("error = %v", err)
+// }
+//
+// })
+// }
+// }
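
The environment-variable test case above relies on the config package's placeholder convention: a value written as `_NAME_` is replaced with the `NAME` environment variable during Bind. The helper below is a simplified stand-in for that observable behavior, not the package's actual GetActualValue implementation.

```go
// Package sketch illustrates, in isolation, the "_NAME_" placeholder
// resolution exercised by TestEgressFilter_Bind above.
package sketch

import (
	"os"
	"strings"
)

// resolvePlaceholder returns the environment variable named between the
// surrounding underscores, or the value unchanged when it is not a
// placeholder or the variable is unset.
func resolvePlaceholder(v string) string {
	if len(v) > 2 && strings.HasPrefix(v, "_") && strings.HasSuffix(v, "_") {
		if env, ok := os.LookupEnv(v[1 : len(v)-1]); ok {
			return env
		}
	}
	return v
}
```
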
diff --git a/pkg/gateway/filter/handler/grpc/handler.go b/pkg/gateway/filter/handler/grpc/handler.go
index dd42febe0b..c2eab8693d 100644
--- a/pkg/gateway/filter/handler/grpc/handler.go
+++ b/pkg/gateway/filter/handler/grpc/handler.go
@@ -27,10 +27,12 @@ import (
"github.com/vdaas/vald/internal/client/v1/client/filter/egress"
"github.com/vdaas/vald/internal/client/v1/client/filter/ingress"
client "github.com/vdaas/vald/internal/client/v1/client/vald"
+ "github.com/vdaas/vald/internal/config"
"github.com/vdaas/vald/internal/core/algorithm"
"github.com/vdaas/vald/internal/errors"
"github.com/vdaas/vald/internal/info"
"github.com/vdaas/vald/internal/log"
+ "github.com/vdaas/vald/internal/net"
"github.com/vdaas/vald/internal/net/grpc"
"github.com/vdaas/vald/internal/net/grpc/codes"
"github.com/vdaas/vald/internal/net/grpc/errdetails"
@@ -53,8 +55,8 @@ type server struct {
copts []grpc.CallOption
streamConcurrency int
Vectorizer string
- DistanceFilters []string
- ObjectFilters []string
+ DistanceFilters []*config.DistanceFilterConfig
+ ObjectFilters []*config.ObjectFilterConfig
SearchFilters []string
InsertFilters []string
UpdateFilters []string
@@ -1385,66 +1387,64 @@ func (s *server) Search(
span.End()
}
}()
- targets := req.GetConfig().GetIngressFilters().GetTargets()
- if targets != nil || s.SearchFilters != nil {
- addrs := make([]string, 0, len(targets)+len(s.SearchFilters))
- addrs = append(addrs, s.SearchFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
- }
- c, err := s.ingress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.SearchRPCName+" API ingress filter targets %v not found", addrs),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.BadRequest{
- FieldViolations: []*errdetails.BadRequestFieldViolation{
- {
- Field: "vectorizer targets",
- Description: err.Error(),
+ filterConfigs := req.GetConfig().GetIngressFilters()
+ if filterConfigs != nil || s.SearchFilters != nil {
+ for _, filterConfig := range filterConfigs {
+ addr := net.JoinHostPort(filterConfig.GetTarget().GetHost(), uint16(filterConfig.GetTarget().GetPort()))
+ c, err := s.ingress.Target(ctx, addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+ fmt.Sprintf(vald.SearchRPCName+" API ingress filter target %v not found", addr),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.BadRequest{
+ FieldViolations: []*errdetails.BadRequestFieldViolation{
+ {
+ Field: "vectorizer targets",
+ Description: err.Error(),
+ },
},
},
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- vec, err := c.FilterVector(ctx, &payload.Object_Vector{
- Vector: req.GetVector(),
- })
- if err != nil {
- err = status.WrapWithInternal(
- fmt.Sprintf(vald.SearchRPCName+" API ingress filter request to %v failure on vec %v", addrs, req.GetVector()),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ vec, err := c.FilterVector(ctx, &payload.Object_Vector{
+ Vector: req.GetVector(),
+ })
+ if err != nil {
+ err = status.WrapWithInternal(
+ fmt.Sprintf(vald.SearchRPCName+" API ingress filter request to %v failure on vec %v", addr, req.GetVector()),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
+ req.Vector = vec.GetVector()
}
- req.Vector = vec.GetVector()
}
res, err = s.gateway.Search(ctx, req, s.copts...)
if err != nil {
@@ -1456,47 +1456,57 @@ func (s *server) Search(
}
return nil, err
}
- targets = req.GetConfig().GetEgressFilters().GetTargets()
- if targets != nil || s.DistanceFilters != nil {
- addrs := make([]string, 0, len(targets)+len(s.DistanceFilters))
- addrs = append(addrs, s.DistanceFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
+ filterConfigs = req.GetConfig().GetEgressFilters()
+ if filterConfigs != nil || s.DistanceFilters != nil {
+ filters := make([]*config.DistanceFilterConfig, 0, len(filterConfigs)+len(s.DistanceFilters))
+ filters = append(filters, s.DistanceFilters...)
+ for _, c := range filterConfigs {
+ filters = append(filters, &config.DistanceFilterConfig{
+ Addr: net.JoinHostPort(c.GetTarget().GetHost(), uint16(c.GetTarget().GetPort())),
+ Query: c.Query.GetQuery(),
+ })
}
- c, err := s.egress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.SearchRPCName+" API egress filter targets %v not found", addrs),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.BadRequest{
- FieldViolations: []*errdetails.BadRequestFieldViolation{
- {
- Field: "vectorizer targets",
- Description: err.Error(),
+ for _, filterConfig := range filters {
+ c, err := s.egress.Target(ctx, filterConfig.Addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+ fmt.Sprintf(vald.SearchRPCName+" API egress filter target %v not found", filterConfig.Addr),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.BadRequest{
+ FieldViolations: []*errdetails.BadRequestFieldViolation{
+ {
+ Field: "vectorizer target",
+ Description: err.Error(),
+ },
},
},
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- for i, dist := range res.GetResults() {
- d, err := c.FilterDistance(ctx, dist)
+ dist := res.GetResults()
+ q := &payload.Filter_Query{
+ Query: filterConfig.Query,
+ }
+ d, err := c.FilterDistance(ctx, &payload.Filter_DistanceRequest{
+ Distance: dist,
+ Query: q,
+ })
if err != nil {
err = status.WrapWithInternal(
- fmt.Sprintf(vald.SearchRPCName+" API egress filter request to %v failure on id %s", addrs, dist.GetId()),
+ fmt.Sprintf(vald.SearchRPCName+" API egress filter request to %v failure on distance %v and query %v", filterConfig.Addr, dist, q),
err,
&errdetails.RequestInfo{
RequestId: req.GetConfig().GetRequestId(),
@@ -1514,7 +1524,7 @@ func (s *server) Search(
}
return nil, err
}
- res.Results[i] = d
+ res.Results = d.GetDistance()
}
}
return res, nil
@@ -1539,47 +1549,57 @@ func (s *server) SearchByID(
}
return nil, err
}
- targets := req.GetConfig().GetEgressFilters().GetTargets()
- if targets != nil || s.DistanceFilters != nil {
- addrs := make([]string, 0, len(targets)+len(s.DistanceFilters))
- addrs = append(addrs, s.DistanceFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
+ filterConfigs := req.GetConfig().GetEgressFilters()
+ if filterConfigs != nil || s.DistanceFilters != nil {
+ filters := make([]*config.DistanceFilterConfig, 0, len(filterConfigs)+len(s.DistanceFilters))
+ filters = append(filters, s.DistanceFilters...)
+ for _, c := range filterConfigs {
+ filters = append(filters, &config.DistanceFilterConfig{
+ Addr: net.JoinHostPort(c.GetTarget().GetHost(), uint16(c.GetTarget().GetPort())),
+ Query: c.Query.GetQuery(),
+ })
}
- c, err := s.egress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.SearchByIDRPCName+" API egress filter targets %v not found", addrs),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.BadRequest{
- FieldViolations: []*errdetails.BadRequestFieldViolation{
- {
- Field: "vectorizer targets",
- Description: err.Error(),
+ for _, filterConfig := range filters {
+ c, err := s.egress.Target(ctx, filterConfig.Addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+ fmt.Sprintf(vald.SearchByIDRPCName+" API egress filter target %v not found", filterConfig.Addr),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.BadRequest{
+ FieldViolations: []*errdetails.BadRequestFieldViolation{
+ {
+ Field: "vectorizer targets",
+ Description: err.Error(),
+ },
},
},
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchByIDRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.SearchByIDRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- for i, dist := range res.GetResults() {
- d, err := c.FilterDistance(ctx, dist)
+ dist := res.GetResults()
+ q := &payload.Filter_Query{
+ Query: filterConfig.Query,
+ }
+ d, err := c.FilterDistance(ctx, &payload.Filter_DistanceRequest{
+ Distance: dist,
+ Query: q,
+ })
if err != nil {
err = status.WrapWithInternal(
- fmt.Sprintf(vald.SearchByIDRPCName+" API egress filter request to %v failure on id %s", addrs, dist.GetId()),
+ fmt.Sprintf(vald.SearchByIDRPCName+" API egress filter request to %v failure on distance %v and query %v", filterConfig.Addr, dist, q),
err,
&errdetails.RequestInfo{
RequestId: req.GetConfig().GetRequestId(),
@@ -1597,7 +1617,7 @@ func (s *server) SearchByID(
}
return nil, err
}
- res.Results[i] = d
+ res.Results = d.GetDistance()
}
}
return res, nil
@@ -1876,112 +1896,121 @@ func (s *server) LinearSearch(
span.End()
}
}()
- targets := req.GetConfig().GetIngressFilters().GetTargets()
- if targets != nil || s.SearchFilters != nil {
- addrs := make([]string, 0, len(targets)+len(s.SearchFilters))
- addrs = append(addrs, s.SearchFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
- }
- c, err := s.ingress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.LinearSearchRPCName+" API ingress filter targets %v not found", addrs),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.BadRequest{
- FieldViolations: []*errdetails.BadRequestFieldViolation{
- {
- Field: "vectorizer targets",
- Description: err.Error(),
+ filterConfigs := req.GetConfig().GetIngressFilters()
+ if filterConfigs != nil || s.SearchFilters != nil {
+ for _, filterConfig := range filterConfigs {
+ addr := net.JoinHostPort(filterConfig.GetTarget().GetHost(), uint16(filterConfig.GetTarget().GetPort()))
+ c, err := s.ingress.Target(ctx, addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+ fmt.Sprintf(vald.LinearSearchRPCName+" API ingress filter target %v not found", addr),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.BadRequest{
+ FieldViolations: []*errdetails.BadRequestFieldViolation{
+ {
+ Field: "vectorizer targets",
+ Description: err.Error(),
+ },
},
},
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- vec, err := c.FilterVector(ctx, &payload.Object_Vector{
- Vector: req.GetVector(),
- })
- if err != nil {
- err = status.WrapWithInternal(
- fmt.Sprintf(vald.LinearSearchRPCName+" API ingress filter request to %v failure on vec %v", addrs, req.GetVector()),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ vec, err := c.FilterVector(ctx, &payload.Object_Vector{
+ Vector: req.GetVector(),
+ })
+ if err != nil {
+ err = status.WrapWithInternal(
+ fmt.Sprintf(vald.LinearSearchRPCName+" API ingress filter request to %v failure on vec %v", addr, req.GetVector()),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
+ req.Vector = vec.GetVector()
}
- req.Vector = vec.GetVector()
}
res, err = s.gateway.LinearSearch(ctx, req, s.copts...)
if err != nil {
return nil, err
}
- targets = req.GetConfig().GetEgressFilters().GetTargets()
- if targets != nil || s.DistanceFilters != nil {
- addrs := make([]string, 0, len(targets)+len(s.DistanceFilters))
- addrs = append(addrs, s.DistanceFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
+ filterConfigs = req.GetConfig().GetEgressFilters()
+ if filterConfigs != nil || s.DistanceFilters != nil {
+ filters := make([]*config.DistanceFilterConfig, 0, len(filterConfigs)+len(s.DistanceFilters))
+ filters = append(filters, s.DistanceFilters...)
+ for _, c := range filterConfigs {
+ filters = append(filters, &config.DistanceFilterConfig{
+ Addr: net.JoinHostPort(c.GetTarget().GetHost(), uint16(c.GetTarget().GetPort())),
+ Query: c.Query.GetQuery(),
+ })
}
- c, err := s.egress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.LinearSearchRPCName+" API ingress filter targets %v not found", addrs),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.BadRequest{
- FieldViolations: []*errdetails.BadRequestFieldViolation{
- {
- Field: "vectorizer targets",
- Description: err.Error(),
+ for _, filterConfig := range filters {
+ c, err := s.egress.Target(ctx, filterConfig.Addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+					fmt.Sprintf(vald.LinearSearchRPCName+" API egress filter target %v not found", filterConfig.Addr),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.BadRequest{
+ FieldViolations: []*errdetails.BadRequestFieldViolation{
+ {
+ Field: "vectorizer targets",
+ Description: err.Error(),
+ },
},
},
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- for i, dist := range res.GetResults() {
- d, err := c.FilterDistance(ctx, dist)
+
+ dist := res.GetResults()
+ q := &payload.Filter_Query{
+ Query: filterConfig.Query,
+ }
+ d, err := c.FilterDistance(ctx, &payload.Filter_DistanceRequest{
+ Distance: dist,
+ Query: q,
+ })
if err != nil {
err = status.WrapWithInternal(
- fmt.Sprintf(vald.LinearSearchRPCName+" API egress filter request to %v failure on id %s", addrs, dist.GetId()),
+ fmt.Sprintf(vald.LinearSearchRPCName+" API egress filter request to %v failure on distance %v and query %v", filterConfig.Addr, dist, q),
err,
&errdetails.RequestInfo{
RequestId: req.GetConfig().GetRequestId(),
@@ -1999,7 +2028,8 @@ func (s *server) LinearSearch(
}
return nil, err
}
- res.Results[i] = d
+ res.Results = d.GetDistance()
+
}
}
return res, nil
@@ -2018,50 +2048,60 @@ func (s *server) LinearSearchByID(
if err != nil {
return nil, err
}
- targets := req.GetConfig().GetEgressFilters().GetTargets()
- if targets != nil || s.DistanceFilters != nil {
- addrs := make([]string, 0, len(targets)+len(s.DistanceFilters))
- addrs = append(addrs, s.DistanceFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
+ filterConfigs := req.GetConfig().GetEgressFilters()
+ if filterConfigs != nil || s.DistanceFilters != nil {
+ filters := make([]*config.DistanceFilterConfig, 0, len(filterConfigs)+len(s.DistanceFilters))
+ filters = append(filters, s.DistanceFilters...)
+ for _, c := range filterConfigs {
+ filters = append(filters, &config.DistanceFilterConfig{
+ Addr: net.JoinHostPort(c.GetTarget().GetHost(), uint16(c.GetTarget().GetPort())),
+ Query: c.Query.GetQuery(),
+ })
}
- c, err := s.egress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.LinearSearchByIDRPCName+" API egress filter targets %v not found", addrs),
- err,
- &errdetails.RequestInfo{
- RequestId: req.GetConfig().GetRequestId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.BadRequest{
- FieldViolations: []*errdetails.BadRequestFieldViolation{
- {
- Field: "vectorizer targets",
- Description: err.Error(),
+ for _, filterConfig := range filters {
+ c, err := s.egress.Target(ctx, filterConfig.Addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+ fmt.Sprintf(vald.LinearSearchByIDRPCName+" API egress filter target %v not found", filterConfig.Addr),
+ err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetConfig().GetRequestId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.BadRequest{
+ FieldViolations: []*errdetails.BadRequestFieldViolation{
+ {
+ Field: "vectorizer targets",
+ Description: err.Error(),
+ },
},
},
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchByIDRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.LinearSearchByIDRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- for i, dist := range res.GetResults() {
- d, err := c.FilterDistance(ctx, dist)
+ dist := res.GetResults()
+ q := &payload.Filter_Query{
+ Query: filterConfig.Query,
+ }
+ d, err := c.FilterDistance(ctx, &payload.Filter_DistanceRequest{
+ Distance: dist,
+ Query: q,
+ })
if err != nil {
err = status.WrapWithInternal(
- fmt.Sprintf(vald.LinearSearchByIDRPCName+" API egress filter request to %v failure on id %s", addrs, dist.GetId()),
+ fmt.Sprintf(vald.LinearSearchByIDRPCName+" API egress filter request to %v failure on distance %v and query %v", filterConfig.Addr, dist, q),
err,
&errdetails.RequestInfo{
- RequestId: dist.GetId(),
+ RequestId: req.GetConfig().GetRequestId(),
ServingData: errdetails.Serialize(req),
},
&errdetails.ResourceInfo{
@@ -2076,7 +2116,7 @@ func (s *server) LinearSearchByID(
}
return nil, err
}
- res.Results[i] = d
+ res.Results = d.GetDistance()
}
}
return res, nil
@@ -2383,78 +2423,76 @@ func (s *server) Insert(
err = errors.ErrMetaDataAlreadyExists(uuid)
err = status.WrapWithAlreadyExists(vald.InsertRPCName+" API ID = "+uuid+" already exists", err,
&errdetails.RequestInfo{
- RequestId: uuid,
+ RequestId: uuid,
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.InsertRPCName + "." + vald.ExistsRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeAlreadyExists(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
+ }
+ if req.GetConfig() != nil {
+ req.GetConfig().SkipStrictExistCheck = true
+ } else {
+ req.Config = &payload.Insert_Config{SkipStrictExistCheck: true}
+ }
+ }
+ filterConfigs := req.GetConfig().GetFilters()
+ if len(filterConfigs) == 0 && len(s.InsertFilters) == 0 {
+ return s.gateway.Insert(ctx, req)
+ }
+ for _, filterConfig := range filterConfigs {
+ addr := net.JoinHostPort(filterConfig.GetTarget().GetHost(), uint16(filterConfig.GetTarget().GetPort()))
+ c, err := s.ingress.Target(ctx, addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+				fmt.Sprintf(vald.InsertRPCName+" API ingress filter target %v not found", addr), err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetVector().GetId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.InsertRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ }
+ vec, err = c.FilterVector(ctx, req.GetVector())
+ if err != nil {
+ err = status.WrapWithInternal(
+ fmt.Sprintf(vald.InsertRPCName+" API ingress filter request to %v failure on id: %s\tvec: %v", addr, req.GetVector().GetId(), req.GetVector().GetVector()), err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetVector().GetId(),
ServingData: errdetails.Serialize(req),
},
&errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.InsertRPCName + "." + vald.ExistsRPCName,
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.InsertRPCName,
ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
}, info.Get())
+ log.Warn(err)
if span != nil {
span.RecordError(err)
- span.SetAttributes(trace.StatusCodeAlreadyExists(err.Error())...)
+ span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
span.SetStatus(trace.StatusError, err.Error())
}
return nil, err
}
- if req.GetConfig() != nil {
- req.GetConfig().SkipStrictExistCheck = true
- } else {
- req.Config = &payload.Insert_Config{SkipStrictExistCheck: true}
- }
- }
- targets := req.GetConfig().GetFilters().GetTargets()
- if len(targets) == 0 && len(s.InsertFilters) == 0 {
- return s.gateway.Insert(ctx, req)
- }
- addrs := make([]string, 0, len(targets)+len(s.InsertFilters))
- addrs = append(addrs, s.InsertFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
- }
- c, err := s.ingress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.InsertRPCName+" API ingress filter filter targets %v not found", addrs), err,
- &errdetails.RequestInfo{
- RequestId: req.GetVector().GetId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.InsertRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
- }
- }
- vec, err = c.FilterVector(ctx, req.GetVector())
- if err != nil {
- err = status.WrapWithInternal(
- fmt.Sprintf(vald.InsertRPCName+" API ingress filter request to %v failure on id: %s\tvec: %v", addrs, req.GetVector().GetId(), req.GetVector().GetVector()), err,
- &errdetails.RequestInfo{
- RequestId: req.GetVector().GetId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.InsertRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ if vec.GetId() == "" {
+ vec.Id = req.GetVector().GetId()
}
- return nil, err
+ req.Vector = vec
}
- if vec.GetId() == "" {
- vec.Id = req.GetVector().GetId()
- }
- req.Vector = vec
loc, err = s.gateway.Insert(ctx, req, s.copts...)
if err != nil {
err = status.WrapWithInternal(
@@ -2676,59 +2714,57 @@ func (s *server) Update(
req.Config = &payload.Update_Config{SkipStrictExistCheck: true}
}
}
- targets := req.GetConfig().GetFilters().GetTargets()
- if len(targets) == 0 && len(s.UpdateFilters) == 0 {
+ filterConfigs := req.GetConfig().GetFilters()
+ if len(filterConfigs) == 0 && len(s.UpdateFilters) == 0 {
return s.gateway.Update(ctx, req)
}
- addrs := make([]string, 0, len(targets)+len(s.UpdateFilters))
- addrs = append(addrs, s.UpdateFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
- }
- c, err := s.ingress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.UpdateRPCName+" API ingress filter filter targets %v not found", addrs), err,
- &errdetails.RequestInfo{
- RequestId: req.GetVector().GetId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpdateRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ for _, filterConfig := range filterConfigs {
+ addr := net.JoinHostPort(filterConfig.GetTarget().GetHost(), uint16(filterConfig.GetTarget().GetPort()))
+ c, err := s.ingress.Target(ctx, addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+				fmt.Sprintf(vald.UpdateRPCName+" API ingress filter target %v not found", addr), err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetVector().GetId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpdateRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- vec, err = c.FilterVector(ctx, req.GetVector())
- if err != nil {
- err = status.WrapWithInternal(
- fmt.Sprintf(vald.UpdateRPCName+" API ingress filter request to %v failure on id: %s\tvec: %v", addrs, req.GetVector().GetId(), req.GetVector().GetVector()), err,
- &errdetails.RequestInfo{
- RequestId: req.GetVector().GetId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpdateRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ vec, err = c.FilterVector(ctx, req.GetVector())
+ if err != nil {
+ err = status.WrapWithInternal(
+ fmt.Sprintf(vald.UpdateRPCName+" API ingress filter request to %v failure on id: %s\tvec: %v", addr, req.GetVector().GetId(), req.GetVector().GetVector()), err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetVector().GetId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpdateRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- if vec.GetId() == "" {
- vec.Id = req.GetVector().GetId()
+ if vec.GetId() == "" {
+ vec.Id = req.GetVector().GetId()
+ }
+ req.Vector = vec
}
- req.Vector = vec
loc, err = s.gateway.Update(ctx, req, s.copts...)
if err != nil {
err = status.WrapWithInternal(
@@ -2934,59 +2970,57 @@ func (s *server) Upsert(
req.Config = &payload.Upsert_Config{SkipStrictExistCheck: true}
}
}
- targets := req.GetConfig().GetFilters().GetTargets()
- if len(targets) == 0 && len(s.UpsertFilters) == 0 {
+ filterConfigs := req.GetConfig().GetFilters()
+ if len(filterConfigs) == 0 && len(s.UpsertFilters) == 0 {
return s.gateway.Upsert(ctx, req)
}
- addrs := make([]string, 0, len(targets))
- addrs = append(addrs, s.UpsertFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
- }
- c, err := s.ingress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(
- fmt.Sprintf(vald.UpsertRPCName+" API ingress filter filter targets %v not found", addrs), err,
- &errdetails.RequestInfo{
- RequestId: req.GetVector().GetId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpsertObjectRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ for _, filterConfig := range filterConfigs {
+ addr := net.JoinHostPort(filterConfig.GetTarget().GetHost(), uint16(filterConfig.GetTarget().GetPort()))
+ c, err := s.ingress.Target(ctx, addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(
+				fmt.Sprintf(vald.UpsertRPCName+" API ingress filter target %v not found", addr), err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetVector().GetId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpsertObjectRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- vec, err = c.FilterVector(ctx, req.GetVector())
- if err != nil {
- err = status.WrapWithInternal(
- fmt.Sprintf(vald.UpsertRPCName+" API ingress filter request to %v failure on id: %s\tvec: %v", addrs, req.GetVector().GetId(), req.GetVector().GetVector()), err,
- &errdetails.RequestInfo{
- RequestId: req.GetVector().GetId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpsertObjectRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ vec, err = c.FilterVector(ctx, req.GetVector())
+ if err != nil {
+ err = status.WrapWithInternal(
+ fmt.Sprintf(vald.UpsertRPCName+" API ingress filter request to %v failure on id: %s\tvec: %v", addr, req.GetVector().GetId(), req.GetVector().GetVector()), err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetVector().GetId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.UpsertObjectRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- if vec.GetId() == "" {
- vec.Id = req.GetVector().GetId()
+ if vec.GetId() == "" {
+ vec.Id = req.GetVector().GetId()
+ }
+ req.Vector = vec
}
- req.Vector = vec
loc, err = s.gateway.Upsert(ctx, req, s.copts...)
if err != nil {
err = status.WrapWithInternal(vald.UpsertRPCName+" API failed to Execute DoMulti ID = "+uuid, err,
@@ -3385,58 +3419,69 @@ func (s *server) GetObject(
}
return nil, err
}
- targets := req.GetFilters().GetTargets()
- if targets != nil || s.ObjectFilters != nil {
- addrs := make([]string, 0, len(targets)+len(s.ObjectFilters))
- addrs = append(addrs, s.ObjectFilters...)
- for _, target := range targets {
- addrs = append(addrs, fmt.Sprintf("%s:%d", target.GetHost(), target.GetPort()))
+ filterConfigs := req.GetFilters()
+ if filterConfigs != nil || s.ObjectFilters != nil {
+ filters := make([]*config.ObjectFilterConfig, 0, len(filterConfigs)+len(s.ObjectFilters))
+ filters = append(filters, s.ObjectFilters...)
+ for _, c := range filterConfigs {
+ filters = append(filters, &config.ObjectFilterConfig{
+ Addr: net.JoinHostPort(c.GetTarget().GetHost(), uint16(c.GetTarget().GetPort())),
+ Query: c.Query.GetQuery(),
+ })
}
- c, err := s.egress.Target(ctx, addrs...)
- if err != nil {
- err = status.WrapWithUnavailable(vald.SearchObjectRPCName+" API target filter API unavailable", err,
- &errdetails.RequestInfo{
- RequestId: req.GetId().GetId(),
- ServingData: errdetails.Serialize(req),
- },
- &errdetails.BadRequest{
- FieldViolations: []*errdetails.BadRequestFieldViolation{
- {
- Field: "vectorizer targets",
- Description: err.Error(),
+ for _, filterConfig := range filters {
+ c, err := s.egress.Target(ctx, filterConfig.Addr)
+ if err != nil {
+ err = status.WrapWithUnavailable(vald.SearchObjectRPCName+" API target filter API unavailable", err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetId().GetId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.BadRequest{
+ FieldViolations: []*errdetails.BadRequestFieldViolation{
+ {
+ Field: "vectorizer targets",
+ Description: err.Error(),
+ },
},
},
- },
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.GetObjectRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.GetObjectRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeUnavailable(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
- }
- vec, err = c.FilterVector(ctx, vec)
- if err != nil {
- err = status.WrapWithInternal(vald.GetObjectRPCName+" API egress filter API failed", err,
- &errdetails.RequestInfo{
- RequestId: req.GetId().GetId(),
- ServingData: errdetails.Serialize(req),
+ res, err := c.FilterVector(ctx, &payload.Filter_VectorRequest{
+ Vector: vec,
+ Query: &payload.Filter_Query{
+ Query: filterConfig.Query,
},
- &errdetails.ResourceInfo{
- ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.GetObjectRPCName,
- ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
- }, info.Get())
- log.Warn(err)
- if span != nil {
- span.RecordError(err)
- span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
- span.SetStatus(trace.StatusError, err.Error())
+ })
+ if err != nil {
+ err = status.WrapWithInternal(vald.GetObjectRPCName+" API egress filter API failed", err,
+ &errdetails.RequestInfo{
+ RequestId: req.GetId().GetId(),
+ ServingData: errdetails.Serialize(req),
+ },
+ &errdetails.ResourceInfo{
+ ResourceType: errdetails.ValdGRPCResourceTypePrefix + "/vald.v1." + vald.GetObjectRPCName,
+ ResourceName: fmt.Sprintf("%s: %s(%s)", apiName, s.name, s.ip),
+ }, info.Get())
+ log.Warn(err)
+ if span != nil {
+ span.RecordError(err)
+ span.SetAttributes(trace.StatusCodeInternal(err.Error())...)
+ span.SetStatus(trace.StatusError, err.Error())
+ }
+ return nil, err
}
- return nil, err
+ vec = res.GetVector()
}
}
return vec, nil
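
The handler changes above converge on the same per-target egress pattern: resolve a client for each configured filter address, then call FilterDistance with the accumulated results and that target's query. The sketch below isolates that loop; the distanceFilter interface is a stand-in mirroring only the call shape used in the diff, not the real egress client, which is resolved per address via s.egress.Target(ctx, addr).

```go
// Package sketch: a simplified, self-contained view of the per-target egress
// distance-filter loop introduced in Search/SearchByID/LinearSearch above.
package sketch

import (
	"context"
	"fmt"

	"github.com/vdaas/vald/apis/grpc/v1/payload"
)

// distanceFilter mirrors the single egress-client method used in the handler.
type distanceFilter interface {
	FilterDistance(ctx context.Context, req *payload.Filter_DistanceRequest) (*payload.Filter_DistanceResponse, error)
}

// egressTarget pairs a resolved filter client with the query string taken
// from its DistanceFilterConfig.
type egressTarget struct {
	Addr   string
	Query  string
	Client distanceFilter
}

// applyDistanceFilters chains each configured filter over the search results,
// replacing the result set with the distances each filter returns.
func applyDistanceFilters(
	ctx context.Context,
	results []*payload.Object_Distance,
	targets []egressTarget,
) ([]*payload.Object_Distance, error) {
	for _, t := range targets {
		res, err := t.Client.FilterDistance(ctx, &payload.Filter_DistanceRequest{
			Distance: results,
			Query:    &payload.Filter_Query{Query: t.Query},
		})
		if err != nil {
			return nil, fmt.Errorf("egress filter %s failed: %w", t.Addr, err)
		}
		results = res.GetDistance()
	}
	return results, nil
}
```
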
diff --git a/pkg/gateway/filter/handler/grpc/handler_test.go b/pkg/gateway/filter/handler/grpc/handler_test.go
index 377911df47..b1d0dc0b5d 100644
--- a/pkg/gateway/filter/handler/grpc/handler_test.go
+++ b/pkg/gateway/filter/handler/grpc/handler_test.go
@@ -121,8 +121,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -294,8 +294,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -466,8 +466,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -633,8 +633,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -806,8 +806,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -978,8 +978,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -1145,8 +1145,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -1317,8 +1317,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -1484,8 +1484,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -1657,8 +1657,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -1829,8 +1829,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -1996,8 +1996,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -2169,8 +2169,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -2341,8 +2341,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -2508,8 +2508,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -2681,8 +2681,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -2854,8 +2854,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -3027,8 +3027,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -3199,8 +3199,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -3365,8 +3365,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -3532,8 +3532,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -3705,8 +3705,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -3878,8 +3878,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -4051,8 +4051,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -4223,8 +4223,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -4389,8 +4389,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -4556,8 +4556,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -4729,8 +4729,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -4902,8 +4902,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -5074,8 +5074,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -5241,8 +5241,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -5414,8 +5414,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -5586,8 +5586,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -5753,8 +5753,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -5926,8 +5926,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -6098,8 +6098,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -6265,8 +6265,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -6438,8 +6438,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -6610,8 +6610,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -6777,8 +6777,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -7296,8 +7296,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
@@ -7468,8 +7468,8 @@ package grpc
// copts []grpc.CallOption
// streamConcurrency int
// Vectorizer string
-// DistanceFilters []string
-// ObjectFilters []string
+// DistanceFilters []*config.DistanceFilterConfig
+// ObjectFilters []*config.ObjectFilterConfig
// SearchFilters []string
// InsertFilters []string
// UpdateFilters []string
diff --git a/pkg/gateway/filter/handler/grpc/option.go b/pkg/gateway/filter/handler/grpc/option.go
index 0d240a6d5d..a797619395 100644
--- a/pkg/gateway/filter/handler/grpc/option.go
+++ b/pkg/gateway/filter/handler/grpc/option.go
@@ -23,6 +23,7 @@ import (
"github.com/vdaas/vald/internal/client/v1/client/filter/egress"
"github.com/vdaas/vald/internal/client/v1/client/filter/ingress"
"github.com/vdaas/vald/internal/client/v1/client/vald"
+ "github.com/vdaas/vald/internal/config"
"github.com/vdaas/vald/internal/log"
"github.com/vdaas/vald/internal/net"
"github.com/vdaas/vald/internal/os"
@@ -111,28 +112,28 @@ func WithVectorizerTargets(addr string) Option {
}
}
-func WithDistanceFilterTargets(addrs ...string) Option {
+func WithDistanceFilterTargets(cs ...*config.DistanceFilterConfig) Option {
return func(s *server) {
- if len(addrs) == 0 {
+ if len(cs) == 0 {
return
}
if len(s.DistanceFilters) == 0 {
- s.DistanceFilters = addrs
+ s.DistanceFilters = cs
} else {
- s.DistanceFilters = append(s.DistanceFilters, addrs...)
+ s.DistanceFilters = append(s.DistanceFilters, cs...)
}
}
}
-func WithObjectFilterTargets(addrs ...string) Option {
+func WithObjectFilterTargets(cs ...*config.ObjectFilterConfig) Option {
return func(s *server) {
- if len(addrs) == 0 {
+ if len(cs) == 0 {
return
}
if len(s.ObjectFilters) == 0 {
- s.ObjectFilters = addrs
+ s.ObjectFilters = cs
} else {
- s.ObjectFilters = append(s.ObjectFilters, addrs...)
+ s.ObjectFilters = append(s.ObjectFilters, cs...)
}
}
}
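
Note: the two options above now accept typed filter configurations rather than bare address strings. Below is a minimal, self-contained sketch of the same option pattern; the `DistanceFilterConfig` stand-in assumes only an `Addr` field, as implied by the usecase change in `pkg/gateway/filter/usecase/vald.go` further down, and all other names here are illustrative, not vald source.

```go
// Minimal sketch (not vald source): the option pattern after the type change.
// The variadic option now takes filter configs instead of address strings.
package main

import "fmt"

// DistanceFilterConfig is a stand-in for config.DistanceFilterConfig;
// only Addr is assumed here, based on the usecase change below.
type DistanceFilterConfig struct {
	Addr string
}

type server struct {
	DistanceFilters []*DistanceFilterConfig
}

// Option mutates the server during construction.
type Option func(*server)

func WithDistanceFilterTargets(cs ...*DistanceFilterConfig) Option {
	return func(s *server) {
		if len(cs) == 0 {
			return
		}
		s.DistanceFilters = append(s.DistanceFilters, cs...)
	}
}

func main() {
	s := new(server)
	WithDistanceFilterTargets(&DistanceFilterConfig{Addr: "egress-filter.default.svc:8081"})(s)
	fmt.Println(len(s.DistanceFilters)) // 1
}
```
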
diff --git a/pkg/gateway/filter/handler/grpc/option_test.go b/pkg/gateway/filter/handler/grpc/option_test.go
index 4221909396..46ad1da51c 100644
--- a/pkg/gateway/filter/handler/grpc/option_test.go
+++ b/pkg/gateway/filter/handler/grpc/option_test.go
@@ -701,7 +701,7 @@ package grpc
//
// func TestWithDistanceFilterTargets(t *testing.T) {
// type args struct {
-// addrs []string
+// cs []*config.DistanceFilterConfig
// }
// type want struct {
// want Option
@@ -726,7 +726,7 @@ package grpc
// {
// name: "test_case_1",
// args: args {
-// addrs:nil,
+// cs:nil,
// },
// want: want{},
// checkFunc: defaultCheckFunc,
@@ -745,7 +745,7 @@ package grpc
// return test {
// name: "test_case_2",
// args: args {
-// addrs:nil,
+// cs:nil,
// },
// want: want{},
// checkFunc: defaultCheckFunc,
@@ -776,7 +776,7 @@ package grpc
// checkFunc = defaultCheckFunc
// }
//
-// got := WithDistanceFilterTargets(test.args.addrs...)
+// got := WithDistanceFilterTargets(test.args.cs...)
// if err := checkFunc(test.want, got); err != nil {
// tt.Errorf("error = %v", err)
// }
@@ -786,7 +786,7 @@ package grpc
//
// func TestWithObjectFilterTargets(t *testing.T) {
// type args struct {
-// addrs []string
+// cs []*config.ObjectFilterConfig
// }
// type want struct {
// want Option
@@ -811,7 +811,7 @@ package grpc
// {
// name: "test_case_1",
// args: args {
-// addrs:nil,
+// cs:nil,
// },
// want: want{},
// checkFunc: defaultCheckFunc,
@@ -830,7 +830,7 @@ package grpc
// return test {
// name: "test_case_2",
// args: args {
-// addrs:nil,
+// cs:nil,
// },
// want: want{},
// checkFunc: defaultCheckFunc,
@@ -861,7 +861,7 @@ package grpc
// checkFunc = defaultCheckFunc
// }
//
-// got := WithObjectFilterTargets(test.args.addrs...)
+// got := WithObjectFilterTargets(test.args.cs...)
// if err := checkFunc(test.want, got); err != nil {
// tt.Errorf("error = %v", err)
// }
diff --git a/pkg/gateway/filter/usecase/vald.go b/pkg/gateway/filter/usecase/vald.go
index a93d7a210e..2325aa6a7b 100644
--- a/pkg/gateway/filter/usecase/vald.go
+++ b/pkg/gateway/filter/usecase/vald.go
@@ -121,11 +121,11 @@ func New(cfg *config.Data) (r runner.Runner, err error) {
if cfg.EgressFilters.Client != nil && cfg.EgressFilters.Client.Addrs != nil {
as = append(as, cfg.EgressFilters.Client.Addrs...)
}
- if cfg.EgressFilters.DistanceFilters != nil {
- as = append(as, cfg.EgressFilters.DistanceFilters...)
+ for _, df := range cfg.EgressFilters.DistanceFilters {
+ as = append(as, df.Addr)
}
- if cfg.EgressFilters.ObjectFilters != nil {
- as = append(as, cfg.EgressFilters.ObjectFilters...)
+ for _, of := range cfg.EgressFilters.ObjectFilters {
+ as = append(as, of.Addr)
}
if len(as) != 0 {
slices.Sort(as)
diff --git a/pkg/tools/benchmark/job/service/object.go b/pkg/tools/benchmark/job/service/object.go
index 3f644c0239..79c16209f6 100644
--- a/pkg/tools/benchmark/job/service/object.go
+++ b/pkg/tools/benchmark/job/service/object.go
@@ -80,12 +80,29 @@ func (j *job) getObject(ctx context.Context, ech chan error) error {
eg.SetLimit(j.concurrencyLimit)
for i := j.dataset.Range.Start; i <= j.dataset.Range.End; i++ {
log.Infof("[benchmark job] Start get object: iter = %d", i)
- ft := []*payload.Filter_Target{}
+ fcfgs := []*payload.Filter_Config{}
if j.objectConfig != nil {
- for i, target := range j.objectConfig.FilterConfig.Targets {
- ft[i] = &payload.Filter_Target{
- Host: target.Host,
- Port: uint32(target.Port),
+ for _, cfg := range j.objectConfig.FilterConfigs {
+ if cfg != nil {
+ var (
+ target *payload.Filter_Target
+ query *payload.Filter_Query
+ )
+ if cfg.Target != nil {
+ target = &payload.Filter_Target{
+ Host: cfg.Target.Host,
+ Port: uint32(cfg.Target.Port),
+ }
+ }
+ if cfg.Query != nil {
+ query = &payload.Filter_Query{
+ Query: cfg.Query.Query,
+ }
+ }
+ fcfgs = append(fcfgs, &payload.Filter_Config{
+ Target: target,
+ Query: query,
+ })
}
}
}
@@ -108,9 +125,7 @@ func (j *job) getObject(ctx context.Context, ech chan error) error {
Id: &payload.Object_ID{
Id: strconv.Itoa(idx),
},
- Filters: &payload.Filter_Config{
- Targets: ft,
- },
+ Filters: fcfgs,
})
if err != nil {
select {
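
For context, a hedged example of the request shape produced by the loop above: `Filters` is now a repeated `Filter.Config`, each entry optionally carrying a `Target` and a `Query`. The host, port, and query values below are placeholders.

```go
// Illustrative only: building an Object.VectorRequest with the new repeated filters.
package main

import (
	"fmt"

	"github.com/vdaas/vald/apis/grpc/v1/payload"
)

func main() {
	req := &payload.Object_VectorRequest{
		Id: &payload.Object_ID{Id: "42"},
		Filters: []*payload.Filter_Config{
			{
				// Target points at a filter component; Query carries the raw query string.
				Target: &payload.Filter_Target{Host: "egress-filter.default.svc", Port: 8081},
				Query:  &payload.Filter_Query{Query: `category == "book"`},
			},
		},
	}
	fmt.Println(len(req.GetFilters())) // 1
}
```
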
diff --git a/rust/libs/proto/src/filter.egress.v1.tonic.rs b/rust/libs/proto/src/filter.egress.v1.tonic.rs
index 9627dbcf2c..052cbe8988 100644
--- a/rust/libs/proto/src/filter.egress.v1.tonic.rs
+++ b/rust/libs/proto/src/filter.egress.v1.tonic.rs
@@ -104,10 +104,12 @@ pub mod filter_client {
pub async fn filter_distance(
&mut self,
request: impl tonic::IntoRequest<
- super::super::super::super::payload::v1::object::Distance,
+ super::super::super::super::payload::v1::filter::DistanceRequest,
>,
) -> std::result::Result<
- tonic::Response<super::super::super::super::payload::v1::object::Distance>,
+ tonic::Response<
+ super::super::super::super::payload::v1::filter::DistanceResponse,
+ >,
tonic::Status,
> {
self.inner
@@ -133,10 +135,12 @@ pub mod filter_client {
pub async fn filter_vector(
&mut self,
request: impl tonic::IntoRequest<
- super::super::super::super::payload::v1::object::Vector,
+ super::super::super::super::payload::v1::filter::VectorRequest,
>,
) -> std::result::Result<
- tonic::Response<super::super::super::super::payload::v1::object::Vector>,
+ tonic::Response<
+ super::super::super::super::payload::v1::filter::VectorResponse,
+ >,
tonic::Status,
> {
self.inner
@@ -171,10 +175,12 @@ pub mod filter_server {
async fn filter_distance(
&self,
request: tonic::Request<
- super::super::super::super::payload::v1::object::Distance,
+ super::super::super::super::payload::v1::filter::DistanceRequest,
>,
) -> std::result::Result<
- tonic::Response<super::super::super::super::payload::v1::object::Distance>,
+ tonic::Response<
+ super::super::super::super::payload::v1::filter::DistanceResponse,
+ >,
tonic::Status,
>;
/** Represent the RPC to filter the vector.
@@ -182,10 +188,12 @@ pub mod filter_server {
async fn filter_vector(
&self,
request: tonic::Request<
- super::super::super::super::payload::v1::object::Vector,
+ super::super::super::super::payload::v1::filter::VectorRequest,
>,
) -> std::result::Result<
- tonic::Response<super::super::super::super::payload::v1::object::Vector>,
+ tonic::Response<
+ super::super::super::super::payload::v1::filter::VectorResponse,
+ >,
tonic::Status,
>;
}
@@ -273,9 +281,9 @@ pub mod filter_server {
impl<
T: Filter,
> tonic::server::UnaryService<
- super::super::super::super::payload::v1::object::Distance,
+ super::super::super::super::payload::v1::filter::DistanceRequest,
> for FilterDistanceSvc {
- type Response = super::super::super::super::payload::v1::object::Distance;
+ type Response = super::super::super::super::payload::v1::filter::DistanceResponse;
type Future = BoxFuture<
tonic::Response<Self::Response>,
tonic::Status,
@@ -283,7 +291,7 @@ pub mod filter_server {
fn call(
&mut self,
request: tonic::Request<
- super::super::super::super::payload::v1::object::Distance,
+ super::super::super::super::payload::v1::filter::DistanceRequest,
>,
) -> Self::Future {
let inner = Arc::clone(&self.0);
@@ -321,9 +329,9 @@ pub mod filter_server {
impl<
T: Filter,
> tonic::server::UnaryService<
- super::super::super::super::payload::v1::object::Vector,
+ super::super::super::super::payload::v1::filter::VectorRequest,
> for FilterVectorSvc {
- type Response = super::super::super::super::payload::v1::object::Vector;
+ type Response = super::super::super::super::payload::v1::filter::VectorResponse;
type Future = BoxFuture<
tonic::Response<Self::Response>,
tonic::Status,
@@ -331,7 +339,7 @@ pub mod filter_server {
fn call(
&mut self,
request: tonic::Request<
- super::super::super::super::payload::v1::object::Vector,
+ super::super::super::super::payload::v1::filter::VectorRequest,
>,
) -> Self::Future {
let inner = Arc::clone(&self.0);
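
The Go side of the same signature change would look roughly like the sketch below; the `egress.FilterClient` stub name and the `FilterDistance` method are assumptions based on the egress filter service definition, and the query string is a placeholder.

```go
// Hypothetical caller of the retyped egress-filter RPC: distances plus a raw
// query go in, filtered distances come back. Stub and field names are assumptions.
package egressexample

import (
	"context"

	"github.com/vdaas/vald/apis/grpc/v1/filter/egress"
	"github.com/vdaas/vald/apis/grpc/v1/payload"
)

func filterDistances(
	ctx context.Context,
	c egress.FilterClient,
	query string,
	ds []*payload.Object_Distance,
) ([]*payload.Object_Distance, error) {
	// Wrap the candidate distances and the raw query into the new request type.
	res, err := c.FilterDistance(ctx, &payload.Filter_DistanceRequest{
		Distance: ds,
		Query:    &payload.Filter_Query{Query: query},
	})
	if err != nil {
		return nil, err
	}
	return res.GetDistance(), nil
}
```
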
diff --git a/rust/libs/proto/src/payload.v1.rs b/rust/libs/proto/src/payload.v1.rs
index ee11217a6f..2309846cd9 100644
--- a/rust/libs/proto/src/payload.v1.rs
+++ b/rust/libs/proto/src/payload.v1.rs
@@ -123,11 +123,11 @@ fn full_name() -> ::prost::alloc::string::String { "payload.v1.Search.MultiObjec
#[prost(int64, tag="5")]
pub timeout: i64,
/// Ingress filter configurations.
- #[prost(message, optional, tag="6")]
- pub ingress_filters: ::core::option::Option<super::filter::Config>,
+ #[prost(message, repeated, tag="6")]
+ pub ingress_filters: ::prost::alloc::vec::Vec<super::filter::Config>,
/// Egress filter configurations.
- #[prost(message, optional, tag="7")]
- pub egress_filters: ::core::option::Option<super::filter::Config>,
+ #[prost(message, repeated, tag="7")]
+ pub egress_filters: ::prost::alloc::vec::Vec<super::filter::Config>,
/// Minimum number of result to be returned.
#[prost(uint32, tag="8")]
pub min_num: u32,
@@ -259,18 +259,87 @@ impl ::prost::Name for Target {
const NAME: &'static str = "Target";
const PACKAGE: &'static str = "payload.v1";
fn full_name() -> ::prost::alloc::string::String { "payload.v1.Filter.Target".into() }fn type_url() -> ::prost::alloc::string::String { "/payload.v1.Filter.Target".into() }}
+ /// Represent the filter query.
+ #[allow(clippy::derive_partial_eq_without_eq)]
+#[derive(Clone, PartialEq, ::prost::Message)]
+ pub struct Query {
+ /// The raw query string.
+ #[prost(string, tag="1")]
+ pub query: ::prost::alloc::string::String,
+ }
+impl ::prost::Name for Query {
+const NAME: &'static str = "Query";
+const PACKAGE: &'static str = "payload.v1";
+fn full_name() -> ::prost::alloc::string::String { "payload.v1.Filter.Query".into() }fn type_url() -> ::prost::alloc::string::String { "/payload.v1.Filter.Query".into() }}
/// Represent filter configuration.
#[allow(clippy::derive_partial_eq_without_eq)]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Config {
/// Represent the filter target configuration.
- #[prost(message, repeated, tag="1")]
- pub targets: ::prost::alloc::vec::Vec<Target>,
+ #[prost(message, optional, tag="1")]
+ pub target: ::core::option::Option<Target>,
+ /// The target query.
+ #[prost(message, optional, tag="2")]
+ pub query: ::core::option::Option<Query>,
}
impl ::prost::Name for Config {
const NAME: &'static str = "Config";
const PACKAGE: &'static str = "payload.v1";
fn full_name() -> ::prost::alloc::string::String { "payload.v1.Filter.Config".into() }fn type_url() -> ::prost::alloc::string::String { "/payload.v1.Filter.Config".into() }}
+ /// Represent the ID and distance pair.
+ #[allow(clippy::derive_partial_eq_without_eq)]
+#[derive(Clone, PartialEq, ::prost::Message)]
+ pub struct DistanceRequest {
+ /// Distance
+ #[prost(message, repeated, tag="1")]
+ pub distance: ::prost::alloc::vec::Vec<super::object::Distance>,
+ /// Query
+ #[prost(message, optional, tag="2")]
+ pub query: ::core::option::Option<Query>,
+ }
+impl ::prost::Name for DistanceRequest {
+const NAME: &'static str = "DistanceRequest";
+const PACKAGE: &'static str = "payload.v1";
+fn full_name() -> ::prost::alloc::string::String { "payload.v1.Filter.DistanceRequest".into() }fn type_url() -> ::prost::alloc::string::String { "/payload.v1.Filter.DistanceRequest".into() }}
+ /// Represent the ID and distance pair.
+ #[allow(clippy::derive_partial_eq_without_eq)]
+#[derive(Clone, PartialEq, ::prost::Message)]
+ pub struct DistanceResponse {
+ /// Distance
+ #[prost(message, repeated, tag="1")]
+ pub distance: ::prost::alloc::vec::Vec<super::object::Distance>,
+ }
+impl ::prost::Name for DistanceResponse {
+const NAME: &'static str = "DistanceResponse";
+const PACKAGE: &'static str = "payload.v1";
+fn full_name() -> ::prost::alloc::string::String { "payload.v1.Filter.DistanceResponse".into() }fn type_url() -> ::prost::alloc::string::String { "/payload.v1.Filter.DistanceResponse".into() }}
+ /// Represent the ID and vector pair.
+ #[allow(clippy::derive_partial_eq_without_eq)]
+#[derive(Clone, PartialEq, ::prost::Message)]
+ pub struct VectorRequest {
+ /// Vector
+ #[prost(message, optional, tag="1")]
+ pub vector: ::core::option::Option<super::object::Vector>,
+ /// Query
+ #[prost(message, optional, tag="2")]
+ pub query: ::core::option::Option<Query>,
+ }
+impl ::prost::Name for VectorRequest {
+const NAME: &'static str = "VectorRequest";
+const PACKAGE: &'static str = "payload.v1";
+fn full_name() -> ::prost::alloc::string::String { "payload.v1.Filter.VectorRequest".into() }fn type_url() -> ::prost::alloc::string::String { "/payload.v1.Filter.VectorRequest".into() }}
+ /// Represent the ID and vector pair.
+ #[allow(clippy::derive_partial_eq_without_eq)]
+#[derive(Clone, PartialEq, ::prost::Message)]
+ pub struct VectorResponse {
+ /// Vector
+ #[prost(message, optional, tag="1")]
+ pub vector: ::core::option::Option<super::object::Vector>,
+ }
+impl ::prost::Name for VectorResponse {
+const NAME: &'static str = "VectorResponse";
+const PACKAGE: &'static str = "payload.v1";
+fn full_name() -> ::prost::alloc::string::String { "payload.v1.Filter.VectorResponse".into() }fn type_url() -> ::prost::alloc::string::String { "/payload.v1.Filter.VectorResponse".into() }}
}
impl ::prost::Name for Filter {
const NAME: &'static str = "Filter";
@@ -348,8 +417,8 @@ fn full_name() -> ::prost::alloc::string::String { "payload.v1.Insert.MultiObjec
#[prost(bool, tag="1")]
pub skip_strict_exist_check: bool,
/// Filter configurations.
- #[prost(message, optional, tag="2")]
- pub filters: ::core::option::Option<super::filter::Config>,
+ #[prost(message, repeated, tag="2")]
+ pub filters: ::prost::alloc::vec::Vec<super::filter::Config>,
/// Insert timestamp.
#[prost(int64, tag="3")]
pub timestamp: i64,
@@ -453,8 +522,8 @@ fn full_name() -> ::prost::alloc::string::String { "payload.v1.Update.TimestampR
#[prost(bool, tag="1")]
pub skip_strict_exist_check: bool,
/// Filter configuration.
- #[prost(message, optional, tag="2")]
- pub filters: ::core::option::Option<super::filter::Config>,
+ #[prost(message, repeated, tag="2")]
+ pub filters: ::prost::alloc::vec::Vec<super::filter::Config>,
/// Update timestamp.
#[prost(int64, tag="3")]
pub timestamp: i64,
@@ -544,8 +613,8 @@ fn full_name() -> ::prost::alloc::string::String { "payload.v1.Upsert.MultiObjec
#[prost(bool, tag="1")]
pub skip_strict_exist_check: bool,
/// Filter configuration.
- #[prost(message, optional, tag="2")]
- pub filters: ::core::option::Option<super::filter::Config>,
+ #[prost(message, repeated, tag="2")]
+ pub filters: ::prost::alloc::vec::Vec<super::filter::Config>,
/// Upsert timestamp.
#[prost(int64, tag="3")]
pub timestamp: i64,
@@ -730,8 +799,8 @@ pub mod object {
#[prost(message, optional, tag="1")]
pub id: ::core::option::Option<Id>,
/// Filter configurations.
- #[prost(message, optional, tag="2")]
- pub filters: ::core::option::Option<super::filter::Config>,
+ #[prost(message, repeated, tag="2")]
+ pub filters: ::prost::alloc::vec::Vec<super::filter::Config>,
}
impl ::prost::Name for VectorRequest {
const NAME: &'static str = "VectorRequest";