Read and write a file, including nulls, in pure bash: no externals, no sub-shell forks, and not limited to recent bash versions.
Slow and RAM-hungry: every byte is read one byte at a time and stored as a hex pair in an array.
https://gist.github.com/bkw777/c1413d0e3de6c54524ddae890fe8d705
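For concreteness, here is a minimal sketch of that byte-at-a-time reader, written from the description here rather than copied out of the gist. The function name ftoh is an assumption; the array name h matches the one used by the htof() speed-up below.

```bash
# Sketch only -- "ftoh" is an assumed name, not necessarily what the gist uses.
ftoh () {
  # LC_ALL=C : treat every byte as a single character
  # IFS=     : keep read from eating whitespace bytes
  local LC_ALL=C IFS= b
  h=()
  # -r: raw (no backslash processing)   -d '': null is the only delimiter
  # -n 1: return after every single byte
  while read -r -d '' -n 1 b ;do
    if [[ -n $b ]] ;then
      printf -v b '%02x' "'$b"   # ordinary byte -> hex pair
    else
      b=00                       # empty but read succeeded = a null byte
    fi
    h+=("$b")
  done <"$1"
  # read only returns non-zero when nothing at all was left, i.e. EOF
}
```

After ftoh somefile, the bytes sit in ${h[0]}, ${h[1]}, ... where they can be counted, compared, or edited in place before being written back out.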
Variations on this could store the data more compactly and read the file faster: remove "-n 1" so that each read grabs all contiguous non-null bytes at once, store that run normally in a plain variable rather than an array, and encode only the nulls in some special form (a further simple enhancement would store a run of contiguous nulls as a single code meaning "N nulls" instead of one code per null). The loop would then only tick over on every null instead of on every byte. The resulting data wouldn't be as convenient to work with, though, depending on why you wanted to read the file and what you wanted to do with it. For reading binary data and operating on the binary values (reading them as numbers, counting bytes, editing specifically positioned bytes in place, etc.), an array of ints or hex pairs was more convenient for what I was working on. But if you merely wanted to store and reproduce the data without parsing or editing it, this other method would be more efficient.
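A sketch of that chunk-at-a-time variation, again written from the description above rather than taken from the gist. The function name ftos, the variable s, and the choice of \x00 as the encoded form of a null are all assumptions here; the payoff of that particular encoding is that reproducing the file becomes a single printf '%b'.

```bash
# Sketch only -- names and the null encoding are assumptions, not the gist's.
ftos () {
  local LC_ALL=C IFS= c
  s=
  # no -n 1: each read grabs a whole run of non-null bytes,
  # so the loop only ticks over once per null in the file
  while read -r -d '' c ;do
    c=${c//'\'/'\\'}    # double any backslashes already in the data
    s+="$c"'\x00'       # store the chunk, then the null in encoded form
  done <"$1"
  # read returned non-zero: EOF. If it still filled $c, the file did not
  # end on a null and that final chunk must be kept too.
  [[ -n $c ]] && s+=${c//'\'/'\\'}
  # (a run of nulls could be collapsed further into one "N nulls" code)
}

# Reproducing the file is then just:  printf '%b' "$s" >outfile
```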
But in either the simple or fancy case, the point and the essential trick are the same:
- Use a combination of LANG, IFS, and read option flags to arrange that null is the only delimiter and no other bytes have any special meaning.
- On each read, be it a byte or a chunk, consult the return value from read to tell the difference between "got nothing because EOF" and "got nothing because delimiter".
htof() could run a lot faster if you're willing to abuse the command line to hold the entire file. It could be a single printf with a single global-replace parameter expansion instead of a loop that does a printf for each byte:
x=" ${h[*]}" ;printf '%b' "${x// /\\x}"
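Side by side, that looks something like the following. htof_slow is only an assumed stand-in for the per-byte loop shape (the gist's actual htof() may differ), and htof_fast just wraps the one-liner above so it writes to a named file.

```bash
# h is the array of hex pairs produced by the reader above.
htof_slow () {   # one printf call per byte
  local b
  for b in "${h[@]}" ;do printf '%b' "\\x$b" ;done >"$1"
}

htof_fast () {   # join the whole array once, one global replace, one printf
  local IFS=' '            # ${h[*]} joins on the first character of IFS
  local x=" ${h[*]}"
  printf '%b' "${x// /\\x}" >"$1"
}
```

htof_fast outfile writes the current contents of h back out in one shot; the catch is that the single printf argument holds the entire file at four characters per byte, which is the command-line abuse mentioned above.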