[Errno 12] ENOMEM sometimes when making requests for kubios #43

Open
shengt25 opened this issue May 11, 2024 · 0 comments
When making requests to Kubios, [Errno 12] ENOMEM sometimes occurs in requests.post().

The error itself is a memory allocation error. I tried calling gc.collect() before making the request, and the free memory is usually around 90 KB. However, the error still occurs sometimes.
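For context, the check before the request is roughly the following sketch (the helper name post_to_kubios and the url/headers/payload parameters are placeholders, not the project's actual identifiers; the free-memory figure comes from gc.mem_free()):

```python
import gc
import requests  # urequests-compatible module on MicroPython

def post_to_kubios(url, headers, payload):
    # Collect garbage first so the reading reflects actually reclaimable memory
    gc.collect()
    print("RAM before kubios request: {} KB".format(round(gc.mem_free() / 1024, 2)))
    try:
        response = requests.post(url, headers=headers, json=payload)
        return response.json()
    except OSError as e:
        # [Errno 12] ENOMEM surfaces as an OSError on MicroPython
        print("Kubios analysis failed:", e)
        return None
```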

Also, when requests.post() succeeds, the free memory can be even lower than when it fails. For example:

Fail:
3631148 RAM before kubios request: 93.86 KB
3633134 Kubios analysis failed: [Errno 12] ENOMEM
Success:
6722476 RAM before kubios request: 88.84 KB
6724775 RAM after the first kubios request: 83.31 KB
6727026 RAM after the second kubios request: 72.67 KB

An interesting phenomenon: once it starts to fail and the free memory stays the same, it keeps failing again and again until the amount of free memory changes. For example:

6889136 RAM before kubios request: 89.28 KB
6889216 Kubios analysis failed: [Errno 12] ENOMEM

6908079 RAM before kubios request: 89.28 KB
6908148 Kubios analysis failed: [Errno 12] ENOMEM

6928406 RAM before kubios request: 89.28 KB
6928476 Kubios analysis failed: [Errno 12] ENOMEM

Note that during each retry the free memory stays the same, and the request always fails.

After searching on Google, it seems the cause is that requests needs a block of contiguous memory, but the heap is fragmented. The program already has a state pre-loading mechanism, and views also stay in memory, ready to be re-used instead of being deleted and re-instantiated. That is about all I can do to reduce memory fragmentation.
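One way to check this fragmentation hypothesis (just an idea, not something done in the repository yet) would be to dump the heap map right before a failing call, using MicroPython's built-in micropython.mem_info():

```python
import gc
import micropython

gc.collect()
# With a verbose argument, mem_info() prints a block-by-block map of the heap.
# Free blocks interleaved with used blocks across the map would mean the
# ~90 KB of free RAM is split into small fragments, so a single large
# allocation inside requests.post() can still fail with ENOMEM.
micropython.mem_info(1)
```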

For now, a weird temporary "solution" is to allocate a random-sized chunk of garbage data in memory before the request and delete it afterwards. The idea is to perturb the heap layout a bit and perhaps trigger some mechanism in MicroPython, so that after the first failure the RAM layout changes and the second try usually succeeds. See the log below (a code sketch of this workaround follows the log):

10304785 RAM before garbage: 88.53 KB
10304792 RAM after garbage: 88.3 KB
10304798 RAM before kubios request: 88.14 KB

10524630 RAM before garbage: 88.53 KB
10524637 RAM after garbage: 88.31 KB
10524643 RAM before kubios request: 88.16 KB
10524718 Kubios analysis failed: [Errno 12] ENOMEM

10537880 RAM before garbage: 88.81 KB
10537886 RAM after garbage: 88.55 KB
10537893 RAM before kubios request: 88.39 KB
10539986 RAM after the first kubios request: 83.2 KB
10542018 RAM after the second kubios request: 72.55 KB

12282097 RAM before garbage: 88.53 KB
12282104 RAM after garbage: 88.27 KB
12282110 RAM before kubios request: 88.11 KB
12285540 RAM after the first kubios request: 82.92 KB
12288765 RAM after the second kubios request: 72.27 KB
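The workaround described above corresponds roughly to this sketch (the function name, the garbage size range of 64-512 bytes, and the request parameters are illustrative guesses, not values taken from the repository):

```python
import gc
import random
import requests

def kubios_request_with_garbage(url, headers, payload):
    print("RAM before garbage: {} KB".format(round(gc.mem_free() / 1024, 2)))
    # Hold a random-sized chunk of garbage during the request to nudge the
    # heap layout a little; it is released again afterwards.
    garbage = bytearray(random.randint(64, 512))
    print("RAM after garbage: {} KB".format(round(gc.mem_free() / 1024, 2)))
    print("RAM before kubios request: {} KB".format(round(gc.mem_free() / 1024, 2)))
    try:
        return requests.post(url, headers=headers, json=payload)
    except OSError as e:
        print("Kubios analysis failed:", e)
        return None
    finally:
        del garbage
        gc.collect()
```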

This problem needs further study.

shengt25 added a commit that referenced this issue May 11, 2024