By piggybacking on the API suggested here, we could easily define a methodology for converting data into binary. We could also take advantage of the type information in the format to achieve significant size reductions. For example, we could account for custom string character sets (only ASCII, only a–z, only lowercase, etc.) to represent the data in fewer bits. By doing this we could create a BSON-like encoding that largely doesn't need any embedded type information at all, and that can be optimized at compile time, letting us generate lightning-fast binary serializers.
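As a minimal sketch of the character-set idea (the function name is hypothetical, not part of any proposed API): if the alphabet is known ahead of time, each character can be stored as an index into that alphabet rather than as a full Unicode code unit, and the bits needed per character follow directly from the alphabet size.

```typescript
// Hypothetical sketch: bits needed per character for a restricted alphabet.
// A character from a known set can be encoded as an index into that set,
// which takes ceil(log2(setSize)) bits instead of a full code unit.
function bitsPerChar(alphabet: string): number {
  return Math.ceil(Math.log2(alphabet.length));
}

const lower = "abcdefghijklmnopqrstuvwxyz"; // 26 symbols
console.log(bitsPerChar(lower)); // 5 bits per char vs. 16 for a UTF-16 code unit
```

So a lowercase-only string could, in principle, be packed at 5 bits per character, less than a third of the UTF-16 cost.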
Take a format like this, for example:
[{_: Number, min: 0, max: 15}]
Here we know at compile time that each number can be represented using only 4 bits (2^4 = 16 possible values). The only dynamic part would be the array length itself.
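A rough sketch of what a compiled encoder for that format could emit (function names hypothetical): with the 4-bit width known ahead of time, two values fit in every byte, and only the array length needs to be stored dynamically.

```typescript
// Hypothetical sketch: packing an array of integers in the range 0–15
// into 4 bits each, so two values share every byte.
function pack4bit(values: number[]): Uint8Array {
  const out = new Uint8Array(Math.ceil(values.length / 2));
  values.forEach((v, i) => {
    if (v < 0 || v > 15) throw new RangeError("value out of 4-bit range");
    // Even indices go in the high nibble, odd indices in the low nibble.
    out[i >> 1] |= i % 2 === 0 ? v << 4 : v;
  });
  return out;
}

function unpack4bit(bytes: Uint8Array, count: number): number[] {
  const out: number[] = [];
  for (let i = 0; i < count; i++) {
    const b = bytes[i >> 1];
    out.push(i % 2 === 0 ? b >> 4 : b & 0x0f);
  }
  return out;
}

console.log(pack4bit([1, 2, 15])); // 3 values packed into 2 bytes
```

Compare that to JSON, where the same three values cost at least one text byte each plus delimiters.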
Before an encoder like this, the same data might have lived in JSON, which wastes a lot of bytes on text encoding; in BSON, which is fairly slow and (to my knowledge) can't optimize based on the data's structure; or in a custom serializer, which is generally very time-consuming and error-prone to write.