When making requests to Kubios, [Errno 12] ENOMEM sometimes occurs in requests.post().
The error itself is a memory allocation error. I tried calling gc.collect() before making the request, and free memory is usually around 90 KB. However, the error still occurs sometimes.
Also, when requests.post() succeeds, the free memory can be even lower than when it fails. For example:
Success:
6722476 RAM before kubios request: 88.84 KB
6724775 RAM after the first kubios request: 83.31 KB
6727026 RAM after the second kubios request: 72.67 KB
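For reference, a minimal sketch of how these figures can be collected on MicroPython, calling gc.collect() and logging gc.mem_free() around the request. The URL, headers, and payload below are placeholders, not the actual Kubios setup:

```python
import gc
import time
import requests  # MicroPython's requests/urequests module

KUBIOS_URL = "https://example.com/analyze"          # placeholder, not the real endpoint
HEADERS = {"Content-Type": "application/json"}      # placeholder headers
payload = {}                                        # placeholder HRV payload

def log_mem(label):
    gc.collect()
    print(time.ticks_ms(), "%s: %.2f KB" % (label, gc.mem_free() / 1024))

log_mem("RAM before kubios request")
response = requests.post(KUBIOS_URL, json=payload, headers=HEADERS)
result = response.json()
response.close()
log_mem("RAM after the first kubios request")
```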
An interesting phenomenon is that once it starts to fail and the free memory stays the same, it will fail again and again until the memory changes somehow. For example:
Note that during each retry, the free memory stays the same, and it always fails.
After searching on Google, it seems the cause is that requests needs a block of contiguous memory, but the heap is fragmented. The program already has a state pre-loading mechanism, and views also stay in memory, ready to be reused instead of being deleted and re-instantiated. These are what I can do to reduce memory fragmentation.
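As an illustration only (the class and function names here are hypothetical, not taken from this project's code), the reuse idea looks roughly like this: every view is created once at start-up and kept alive, so switching views never frees and re-allocates large objects:

```python
# Hypothetical sketch of the pre-loading / reuse pattern described above.
class HRVView:
    def __init__(self):
        # big buffers are allocated once, up front, and reused for every redraw
        self.framebuffer = bytearray(1024)

    def show(self, data):
        # redraw into the existing buffer instead of building new objects
        pass

# all views are instantiated at boot and never deleted
VIEWS = {"hrv": HRVView()}

def switch_view(name, data):
    VIEWS[name].show(data)  # reuse the existing instance, no new allocation
```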
Now, a weird temporary "solution" is to store a random-sized chunk of garbage data in memory before the request and delete it later. The intent is to change the memory layout a bit and perhaps trigger some mechanism in MicroPython, so that after the first failure the RAM changes and the second try usually succeeds. See the log below (a sketch of this retry workaround follows the log):
10304785 RAM before garbage: 88.53 KB
10304792 RAM after garbage: 88.3 KB
10304798 RAM before kubios request: 88.14 KB
10524630 RAM before garbage: 88.53 KB
10524637 RAM after garbage: 88.31 KB
10524643 RAM before kubios request: 88.16 KB
10524718 Kubios analysis failed: [Errno 12] ENOMEM
10537880 RAM before garbage: 88.81 KB
10537886 RAM after garbage: 88.55 KB
10537893 RAM before kubios request: 88.39 KB
10539986 RAM after the first kubios request: 83.2 KB
10542018 RAM after the second kubios request: 72.55 KB
12282097 RAM before garbage: 88.53 KB
12282104 RAM after garbage: 88.27 KB
12282110 RAM before kubios request: 88.11 KB
12285540 RAM after the first kubios request: 82.92 KB
12288765 RAM after the second kubios request: 72.27 KB
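A rough sketch of this workaround, under the assumption that the request is wrapped in a small retry loop; the function name, buffer sizes, and errno check are illustrative, not the actual code:

```python
import gc
import random
import requests  # MicroPython's requests/urequests module

def kubios_post_with_workaround(url, payload, headers, retries=3):
    for attempt in range(retries):
        # allocate a random-sized throwaway buffer so the heap layout changes
        garbage = bytearray(random.randint(256, 2048))
        try:
            response = requests.post(url, json=payload, headers=headers)
            result = response.json()
            response.close()
            return result
        except OSError as exc:
            if exc.args[0] != 12 or attempt == retries - 1:  # 12 == ENOMEM
                raise
        finally:
            del garbage  # "delete later": drop the garbage after each attempt
            gc.collect()
```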
This problem needs further study.