- Remove 'get_version' in data_for_serialization. In the past we might
have had a dynamic version that was assigned and could change across
a hardfork. This is no longer the case: we always return a fixed
value, which means we don't use this feature. Inline and remove.
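A minimal sketch of the simplification, assuming the version was exposed through a member function; the struct name is from the change, but the member shapes and the version value are illustrative:

```cpp
#include <cstdint>

namespace before {
struct data_for_serialization {
    // The getter could, in principle, have returned a hardfork-dependent
    // value; in practice it always returned the same constant.
    uint8_t get_version() const { return 1; }
};
}  // namespace before

namespace after {
struct data_for_serialization {
    static constexpr uint8_t version = 1;  // fixed value, inlined
};
}  // namespace after
```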
- Remove the long/short term data cache in the transient struct on the
SNL. In the past this was probably modified outside of the serialisation
function 'service_node_list::store()', but now it is only modified in that
single function, and every time we enter the function we clear out the
cache and refill it with runtime data.
Potentially it was cached to save a heap allocation on store; in that
case I would switch to thread-local storage OR a static to reuse the
vector's storage. For now I've inlined it onto the stack to reduce
complexity in the header.
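The static-reuse alternative mentioned above can be sketched like this (illustrative names, not the real SNL code; `store_mutex` stands in for the lock store() already takes). The vector's heap allocation survives across calls, so only the first call pays for it:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

std::mutex store_mutex;

std::size_t serialize_states(std::size_t count) {
    std::lock_guard<std::mutex> lock(store_mutex);
    static std::vector<int> scratch;  // capacity persists between calls
    scratch.clear();                  // cleared on entry, like the old cache
    for (std::size_t i = 0; i < count; ++i)
        scratch.push_back(static_cast<int>(i));
    return scratch.capacity();        // retained for the next call
}
```

Since clear() does not release capacity, a later call with fewer elements reuses the earlier allocation instead of touching the heap again.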
- Pull data_for_serialization into the implementation file. This
structure is not used outside of the SNL.
- Rename 'state_added_to_archive' to 'long_term_dirty_flag' to more
accurately convey the intent of this flag (e.g. when it's dirty, the next
store will have to write the [long term] data to disk ...)
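A sketch of the intended flag semantics; only the name 'long_term_dirty_flag' comes from the change, the surrounding struct and function shapes are hypothetical:

```cpp
// Transient SNL state (hypothetical shape).
struct snl_transient {
    bool long_term_dirty_flag = false;
};

// Returns true when the long-term data had to be (re)written.
bool store(snl_transient& t) {
    bool wrote_long_term = t.long_term_dirty_flag;
    // ... always write the short-term data; when dirty, also write the
    // [long term] data to disk ...
    t.long_term_dirty_flag = false;  // clean again until the next state change
    return wrote_long_term;
}
```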
- Remove the cached data blob, for similar reasons as above. It is
cleared every time we enter the function; I suspect it was kept to save
a heap allocation.
The solution I'd advise here is thread-local storage or a static, which
simplifies the complicated SNL header.
- Remove the for-loop that initialises the data for serialisation, which
just clears the structures and reassigns the version. The version is
static now and default-initialised, so there is no need to set it. The data
structures were potentially there as a heap-allocation optimisation.
Again, the recommended solution is thread-local storage or a static.
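With the version default-initialised in the struct itself, the old init loop has nothing left to do. A sketch, where the struct name is from the change but the version value is illustrative:

```cpp
#include <cstdint>

struct data_for_serialization {
    uint8_t version = 9;  // fixed, set once at the definition
    // std::vector<...> state_blobs;  // (hypothetical) filled by store() itself
};
```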
--
I've chosen to use a static because:
- There will always be only one SNL instance in the entire program,
so there aren't multiple SNLs that we have to synchronise against.
- Storing requires taking a mutex, so access to the static is already
serialised and will never be contended.
- Store will be called from multiple threads (multiple P2P threads
AFAICT), so we'd end up with multiple per-thread 'caches' if we used
thread-local storage.
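The trade-off can be illustrated with a sketch (not SNL code): a `thread_local` buffer gives every calling thread its own copy, while a function-local static guarded by the store mutex gives exactly one for the whole program:

```cpp
#include <mutex>
#include <thread>
#include <vector>

std::mutex store_mutex;

// One buffer for the whole program; the store mutex already serialises
// every caller, so the static is never accessed concurrently.
const void* static_buffer_address() {
    std::lock_guard<std::mutex> lock(store_mutex);
    static std::vector<char> scratch;
    return &scratch;
}

// One buffer per calling thread; with N P2P threads calling store(),
// this would keep N separate caches alive.
const void* thread_local_buffer_address() {
    thread_local std::vector<char> scratch;
    return &scratch;
}
```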