Expected Behavior (Mandatory)
Ability to control the OpenAI backoff strategy for large volumes of embedding calls. This is standard practice in almost every client library I've used, because we cannot assume infinite capacity from our API providers.
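To illustrate the kind of control being requested, here is a minimal sketch of exponential backoff with jitter around a rate-limited call. The `RateLimitError` type, delay parameters, and `flaky_embed` stub are all hypothetical stand-ins for an embeddings client that returns HTTP 429; this is not APOC's actual implementation.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response (hypothetical; real clients raise their own types)."""

def call_with_backoff(fn, max_retries=6, base_delay=1.0, max_delay=60.0):
    """Retry fn() on rate-limit errors, sleeping with exponential backoff and full jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise
            # Double the delay cap each attempt, capped at max_delay, then pick a random sleep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))

# Usage: a simulated embeddings call that 429s twice before succeeding.
attempts = {"n": 0}
def flaky_embed():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return [0.1, 0.2, 0.3]

result = call_with_backoff(flaky_embed, base_delay=0.01)
print(result)  # [0.1, 0.2, 0.3]
```

Exposing knobs like `max_retries`, `base_delay`, and `max_delay` on the procedure's configuration map would let callers tune throughput against their own quota.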
Actual Behavior (Mandatory)
] version=71, last transaction in previous log=5140, rotation took 51 millis, started after 7843 millis."}
{"time":"2024-07-24 22:57:06.966+0000","level":"WARN","category":"o.n.k.a.p.GlobalProcedures","message":"Error during iterate.commit:"}
{"time":"2024-07-24 22:57:06.966+0000","level":"WARN","category":"o.n.k.a.p.GlobalProcedures","message":"1887 times: org.neo4j.graphdb.QueryExecutionException: Failed to invoke procedure `apoc.ml.openai.embedding`: Caused by: java.io.IOException: Server returned HTTP response code: 429 for URL: https://api.openai.com/v1/embeddings"}
{"time":"2024-07-24 22:57:06.966+0000","level":"WARN","category":"o.n.k.a.p.GlobalProcedures","message":"332 times: org.neo4j.graphdb.QueryExecutionException: Failed to invoke procedure `apoc.ml.openai.embedding`: Caused by: java.net.SocketTimeoutException: Connect timed out"}
{"time":"2024-07-24 22:57:06.966+0000","level":"WARN","category":"o.n.k.a.p.GlobalProcedures","message":"Error during iterate.execute:"}
{"time":"2024-07-24 22:57:06.966+0000","level":"WARN","category":"o.n.k.a.p.GlobalProcedures","message":"332 times: Connect timed out"}
{"time":"2024-07-24 22:57:06.966+0000","level":"WARN","category":"o.n.k.a.p.GlobalProcedures","message":"1887 times: Server returned HTTP response code: 429 for URL: https://api.openai.com/v1/embeddings"}
How to Reproduce the Problem
Try embedding 5M nodes with 2000 nodes batched per API request (to maximise throughput); you quickly hit HTTP 429 for exceeding the tokens-per-minute limit.
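A rough back-of-the-envelope sketch of why this workload trips the tokens-per-minute quota. The average token count per node and the TPM limit below are illustrative assumptions, not measured values:

```python
# Illustrative numbers only: adjust for your actual node text and OpenAI tier.
total_nodes = 5_000_000
batch_size = 2_000                    # nodes per embeddings request (as in the repro)
avg_tokens_per_node = 100             # assumed average input size
tpm_limit = 1_000_000                 # assumed tokens-per-minute quota

requests_needed = total_nodes // batch_size             # total API calls for the job
tokens_per_request = batch_size * avg_tokens_per_node   # tokens consumed per call

# Under these assumptions only a handful of requests fit in each minute,
# so anything faster than that gets 429s until a backoff kicks in.
requests_per_minute_allowed = tpm_limit // tokens_per_request
print(requests_needed, tokens_per_request, requests_per_minute_allowed)  # 2500 200000 5
```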
Specifications (Mandatory)
Currently used versions
Versions