When shared memory becomes full and a new record is added, the code currently calls the xxx_shmpanic() handler, which invokes panic() and aborts the process.
The code should be changed to raise a Tcl error that the user can "catch" and handle appropriately, perhaps by deleting some rows from the table.
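A minimal sketch of the requested behavior in terms of the Tcl C API; the function name and allocator here are hypothetical stand-ins, not the actual generated ctable code:

```c
#include <tcl.h>

/*
 * Sketch only: StoreObjCmd stands in for the generated ctable "store"
 * code, and Tcl_AttemptAlloc() stands in for the shared-memory
 * allocator.  The point is the shape of the change: when the
 * allocation fails, leave an error in the interpreter and return
 * TCL_ERROR instead of calling xxx_shmpanic().
 */
static int
StoreObjCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
{
    char *row = Tcl_AttemptAlloc(128);   /* returns NULL on failure instead of aborting */

    if (row == NULL) {
        Tcl_SetErrorCode(interp, "SPEEDTABLES", "SHARED_MEMORY_FULL",
                         (char *)NULL);
        Tcl_SetObjResult(interp,
                         Tcl_NewStringObj("shared memory is full", -1));
        return TCL_ERROR;                /* surfaces as an ordinary, catchable Tcl error */
    }

    /* ... copy the record into the new row as before; the row stays in
     * the table, so it is not freed here ... */
    (void)cd; (void)objc; (void)objv;
    return TCL_OK;
}
```

A script could then wrap the store call in [catch], check for the SHARED_MEMORY_FULL error code, delete some rows, and retry.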
There appears to be an undocumented "panic" option that can be specified to the "create" method (it sets ctable->share_panic); however, it does not seem to be honored everywhere it should be. Repeatedly calling "store" still triggers a panic.
This is another hard problem we ran into during development.
The problem is that Speedtables memory management is based on Tcl's memory management, and Tcl assumes that memory allocations always succeed. Shared-memory allocations can occur in places where Speedtables itself has no framework for propagating a failure back to a point where it can be handled.
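For context, Tcl's own heap API shows the assumption in play; a minimal contrast, for illustration only (this is not Speedtables code):

```c
#include <tcl.h>

/* Tcl's allocators embody the "allocations always succeed" assumption
 * that Speedtables inherited. */
static void
contrast(void)
{
    char *a = Tcl_Alloc(64);         /* cannot fail from the caller's point of view:
                                        it panics and aborts rather than return NULL */
    char *b = Tcl_AttemptAlloc(64);  /* can fail: returns NULL, so this caller -- and
                                        every caller above it -- needs an error path */
    Tcl_Free(a);
    if (b != NULL) {
        Tcl_Free(b);
    }
}
```

Moving the shared-memory allocator from the first contract to the second forces changes in every caller between the allocation site and the command handler, which is the missing framework described above.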
Actually, it looks like the argument parsing for the "panic" option was just broken: it was misinterpreting the return value from strcmp(). I committed a change to fix that obvious error, which lets me catch the failure during "store".
[master fc7589d] properly parse the "panic" option to "create". github issue 8.
3 files changed, 6 insertions(+), 1 deletions(-)
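For reference, the usual shape of a misread strcmp() return value, as an illustration only; the exact code fixed in the commit above may differ:

```c
#include <string.h>

static int share_panic = 1;              /* default: panic when shared memory fills up */

static void
parse_create_option(const char *name, int value)
{
    if (strcmp(name, "panic")) {         /* BUG: strcmp() returns 0 on a match, so this
                                            branch fires for every option EXCEPT "panic" */
        /* "panic" is never recognized; share_panic keeps its default */
    }

    if (strcmp(name, "panic") == 0) {    /* FIX: true only when name really is "panic" */
        share_panic = value;
    }
}
```

With the option parsed correctly, ctable->share_panic is set from the user's value, which is what makes the failure during "store" catchable as described above.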