
Prepare chunks in a worker process #8618

Open
wants to merge 23 commits into base: develop

Conversation

Contributor

@azhavoro commented Oct 30, 2024

Added:

  • Prepare chunks in RQ workers instead of the application server process (see the sketch after this list)
  • Increase TTL for cached preview images
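
A minimal sketch of the overall pattern, for orientation only (the function, cache interface, and wrapper are illustrative rather than the actual CVAT code; the "chunks" queue name, job-id prefix, and timeouts come from this PR):

import time

import django_rq  # CVAT already uses django_rq for its other workers

def get_or_prepare_chunk(cache, key, create_callback, *, timeout=50, poll=0.2):
    # Return a cached chunk, or enqueue its preparation on the "chunks" queue
    # and poll the cache until the worker has stored it.
    item = cache.get(key)
    if item is not None:
        return item

    queue = django_rq.get_queue("chunks")
    job_id = f"chunks:prepare-item-{key}"  # mirrors the prefix used in cache.py
    if not queue.fetch_job(job_id):
        queue.enqueue(create_callback, job_id=job_id)

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        item = cache.get(key)
        if item is not None:
            return item
        time.sleep(poll)

    raise TimeoutError(f"Chunk processing takes too long: {key}")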

Motivation and context

How has this been tested?

Checklist

  • I submit my changes into the develop branch
  • I have created a changelog fragment
  • I have updated the documentation accordingly
  • I have added tests to cover my changes
  • I have linked related issues (see GitHub docs)
  • I have increased versions of npm packages if it is necessary
    (cvat-canvas,
    cvat-core,
    cvat-data and
    cvat-ui)

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.

Summary by CodeRabbit

  • New Features

    • Introduced new debugging configurations for various RQ workers, enhancing debugging capabilities.
    • Added a new service for processing chunks in Docker and Kubernetes configurations.
    • Enhanced error handling in data retrieval methods to manage timeout scenarios effectively.
  • Bug Fixes

    • Improved robustness of job-related functionalities with enhanced test coverage and validation checks.
  • Documentation

    • Updated API documentation settings and configurations for better clarity and usability.
  • Tests

    • Expanded test suite with new test cases and parameterization for comprehensive validation of task and cloud storage functionalities.

@azhavoro azhavoro requested review from zhiltsov-max and removed request for Marishka17 October 30, 2024 13:20
Contributor

coderabbitai bot commented Oct 30, 2024

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The changes in this pull request encompass various enhancements across multiple files, primarily focusing on improving debugging configurations, cache management, error handling, and testing capabilities within the CVAT application. Key updates include new debugging configurations for REST API tests, significant modifications to the MediaCache class for improved cache handling, and the introduction of new test cases and fixtures to enhance test coverage and reliability. Additionally, new services and configurations are added to support chunk processing in Docker and Kubernetes environments.

Changes

File Change Summary
.vscode/launch.json Added new debugging configurations for RQ workers and modified existing configurations with trailing commas.
cvat/apps/engine/cache.py Introduced class-level constants, refactored cache methods, added error handling, and improved job management logic.
cvat/apps/engine/frame_provider.py Added static methods for chunk creation, improving code organization and modularity.
cvat/apps/engine/rq_job_handler.py Introduced new constants for RQ job metadata and enhanced RQ ID handling.
cvat/apps/engine/views.py Improved error handling in data retrieval methods to manage timeout scenarios.
cvat/settings/base.py Added new queue type for chunks and updated configurations for caching and CORS handling.
docker-compose.yml Added new service definition for cvat_worker_chunks.
helm-chart/templates/cvat_backend/worker_chunks/deployment.yml Introduced a new Kubernetes Deployment for the CVAT backend worker chunks.
helm-chart/values.yaml Added new configuration section for chunks with properties for deployment.
supervisord/worker.chunks.conf Established configuration for managing worker processes using supervisord.
tests/docker-compose.file_share.yml Added new service definition for cvat_worker_chunks with volume configuration.
tests/docker-compose.minio.yml Added new service cvat_worker_chunks with dependency on MinIO.
tests/python/rest_api/test_jobs.py Added a new fixture and enhanced test methods for job-related functionalities.
tests/python/rest_api/test_tasks.py Introduced new test cases and improved parameterization for task-related tests.
tests/python/sdk/test_auto_annotation.py Added a new fixture for Redis restoration in auto-annotation tests.
tests/python/sdk/test_datasets.py Updated fixture to include Redis restoration for dataset tests.

Poem

In the garden where code does bloom,
New workers sprout, dispelling the gloom.
With caches refreshed and tests that delight,
The chunks come alive, ready for flight!
So hop along, let’s celebrate this cheer,
For every new feature brings us near! 🐇✨


@azhavoro changed the title from "Az/worker for chunks" to "[WIP] Prepare chunks in a worker process" on Oct 30, 2024
Contributor

@coderabbitai bot left a comment

Actionable comments posted: 9

🧹 Outside diff range and nitpick comments (8)
supervisord/worker.chunks.conf (1)

1-9: Consider security implications of socket location

The Unix socket is currently placed in /tmp/supervisord/. This location could pose security risks as:

  1. /tmp is world-writable
  2. Socket files might persist between container restarts
  3. Permissions need to be carefully managed

Consider moving the socket to a dedicated directory (e.g., /var/run/supervisord/) with proper permissions and ownership.

tests/docker-compose.minio.yml (1)

11-11: Consider adding resource constraints for production readiness.

While the current configuration is consistent with other services, consider defining resource limits (CPU, memory) for the worker service to ensure proper resource management in production environments.

Example addition:

  cvat_worker_chunks:
    <<: *allow-minio
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 1G
helm-chart/templates/cvat_backend/worker_chunks/deployment.yml (2)

1-19: Consider adding operational metadata annotations.

While the basic metadata is well-structured, consider adding these operational annotations for better manageability:

  • app.kubernetes.io/version: Track the CVAT version
  • app.kubernetes.io/part-of: Indicate it's part of CVAT platform
  • prometheus.io/scrape: Enable metrics scraping if applicable
  annotations:
+   app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
+   app.kubernetes.io/part-of: cvat
+   prometheus.io/scrape: "true"
🧰 Tools
🪛 yamllint

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


21-23: Consider using RollingUpdate strategy with PodDisruptionBudget.

The current Recreate strategy will cause downtime during updates. For better availability:

  1. Switch to RollingUpdate
  2. Add a PodDisruptionBudget
  strategy:
-   type: Recreate
+   type: RollingUpdate
+   rollingUpdate:
+     maxSurge: 25%
+     maxUnavailable: 25%

Consider adding a PodDisruptionBudget in a separate file:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ .Release.Name }}-backend-worker-chunks-pdb
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      app: cvat-app
      tier: backend
      component: worker-chunks
docker-compose.yml (1)

227-241: Consider future enhancements for operational robustness.

While the current implementation is consistent with other services, consider these potential enhancements for future iterations:

  • Resource limits (memory/CPU) for better resource management
  • Health checks for improved container orchestration
  • Specific logging configuration for better observability
helm-chart/values.yaml (1)

120-129: Document the chunks worker configuration.

Consider adding comments to document:

  • The purpose of the chunks worker
  • Guidelines for adjusting replica count
  • Resource requirements and scaling considerations

Example documentation:

      analyticsreports:
        replicas: 1
        labels: {}
        annotations: {}
+      # Configuration for the chunks processing worker
+      # Handles parallel processing of data chunks
+      # Default replica count is set to 4 for optimal parallel processing
+      # Adjust based on your workload and available resources
       chunks:
         replicas: 4
         labels: {}
cvat/apps/engine/cache.py (2)

82-83: Add return type annotation to _cache method

Adding a return type annotation enhances code readability and assists static type checkers in detecting potential issues.

Apply this diff to add the return type:

     @staticmethod
-    def _cache():
+    def _cache() -> Any:
         return caches[MediaCache._CACHE_NAME]

(This assumes Any is imported from typing in cache.py.)

73-79: Add documentation for class-level constants

Adding comments explaining the purpose of class-level constants improves code maintainability and readability.

Consider adding comments like:

     _QUEUE_NAME = settings.CVAT_QUEUES.CHUNKS.value  # Queue name for chunk processing
     _QUEUE_JOB_PREFIX_TASK = "chunks:prepare-item-"  # Prefix for job IDs
     _SLEEP_TIMEOUT = 0.2  # Sleep duration between job status checks in seconds
     _CHUNK_CREATE_TIMEOUT = 50  # Maximum time to wait for chunk creation in seconds
     _CACHE_NAME = "media"  # Name of the cache to use
     _RQ_JOB_RESULT_TTL = 60  # Time-to-live for job results in seconds
     _RQ_JOB_FAILURE_TTL = 3600 * 24 * 14  # Time-to-live for failed jobs (2 weeks)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits 4dd7f22 and 6c5eeed.

📒 Files selected for processing (16)
  • .vscode/launch.json (5 hunks)
  • cvat/apps/engine/cache.py (19 hunks)
  • cvat/apps/engine/frame_provider.py (3 hunks)
  • cvat/apps/engine/rq_job_handler.py (1 hunks)
  • cvat/apps/engine/views.py (4 hunks)
  • cvat/settings/base.py (3 hunks)
  • docker-compose.yml (1 hunks)
  • helm-chart/templates/cvat_backend/worker_chunks/deployment.yml (1 hunks)
  • helm-chart/values.yaml (1 hunks)
  • supervisord/worker.chunks.conf (1 hunks)
  • tests/docker-compose.file_share.yml (1 hunks)
  • tests/docker-compose.minio.yml (1 hunks)
  • tests/python/rest_api/test_jobs.py (1 hunks)
  • tests/python/rest_api/test_tasks.py (3 hunks)
  • tests/python/sdk/test_auto_annotation.py (1 hunks)
  • tests/python/sdk/test_datasets.py (1 hunks)
🧰 Additional context used
🪛 yamllint
helm-chart/templates/cvat_backend/worker_chunks/deployment.yml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

🔇 Additional comments (20)
tests/docker-compose.file_share.yml (1)

8-10: LGTM! Volume configuration follows established patterns.

The volume mount configuration for the new cvat_worker_chunks service is consistent with existing services, maintaining the same mount point and permissions necessary for testing.

Let's verify the consistency of this configuration across other compose files:

✅ Verification successful

Volume configuration is correctly aligned with other services

The volume mount configuration for cvat_worker_chunks is perfectly consistent with both cvat_server and cvat_worker_import services, using the identical mount path ./tests/mounted_file_share:/home/django/share:rw. This ensures proper file sharing functionality across all related services in the test environment.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify cvat_worker_chunks configuration across compose files
# Expected: Should find consistent volume configurations in other compose files

# Check for cvat_worker_chunks service in all docker-compose files
rg -A 5 "cvat_worker_chunks:" "docker-compose*.yml" "tests/docker-compose*.yml"

# Check for the specific volume mount pattern
rg "/home/django/share" "docker-compose*.yml" "tests/docker-compose*.yml"

Length of output: 405


Script:

#!/bin/bash
# Let's try with fd to find docker-compose files first, then use them in our search
echo "=== Docker Compose files in the repository ==="
fd -e yml -e yaml docker-compose

echo -e "\n=== Checking for cvat_worker_chunks service ==="
fd -e yml -e yaml docker-compose -x rg -A 5 "cvat_worker_chunks:"

echo -e "\n=== Checking volume mount patterns ==="
fd -e yml -e yaml docker-compose -x rg "/home/django/share"

Length of output: 1355


Script:

#!/bin/bash
# Let's check the context of these volume mounts to see which services they belong to
echo "=== Checking services with shared volume mounts ==="
fd -e yml -e yaml docker-compose -x rg -B 5 "./tests/mounted_file_share:/home/django/share:rw"

Length of output: 486

supervisord/worker.chunks.conf (2)

28-29: 🛠️ Refactor suggestion

Consider adding health check and logging configuration for smokescreen

While the localhost binding is secure, the configuration could be enhanced with:

  • Health check configuration
  • Log file configuration
  • Error handling directives
  • Process restart parameters

Let's check if these configurations exist in other smokescreen setups:

#!/bin/bash
# Search for other smokescreen configurations
rg "smokescreen" -A 5 -B 5

Consider adding:

 [program:smokescreen]
 command=smokescreen --listen-ip=127.0.0.1 %(ENV_SMOKESCREEN_OPTS)s
+stdout_logfile=%(ENV_HOME)s/logs/smokescreen.log
+stderr_logfile=%(ENV_HOME)s/logs/smokescreen.err
+autorestart=true
+startretries=3
+stopwaitsecs=10

19-27: Verify dependencies and consider adding documentation

The configuration looks well-structured, but let's verify the dependencies:

Consider adding a comment block explaining:

  • The purpose of this worker
  • Expected NUMPROCS values
  • The role of VECTOR_EVENT_HANDLER
✅ Verification successful

Dependencies verified successfully, documentation suggestion remains

The verification shows that:

  • wait_for_deps.sh exists and properly checks for required services (PostgreSQL, Redis)
  • DefaultWorker class is properly defined in rqworker.py with debug support
  • The worker configuration is correctly set up for chunk processing

Consider adding a comment block explaining:

  • The purpose of this worker (chunk processing)
  • Expected NUMPROCS values
  • The role of VECTOR_EVENT_HANDLER
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if wait_for_deps.sh exists and is executable
fd "wait_for_deps.sh" --type f --exec test -x {} \; || echo "Script not executable"

# Verify the worker class exists
ast-grep --pattern 'class DefaultWorker' 

Length of output: 5322


Script:

#!/bin/bash
# Check for wait_for_deps.sh
fd "wait_for_deps.sh" --type f

# Look for DefaultWorker class definition
rg "class DefaultWorker" -A 5

# Look for chunks worker related code/docs
rg "rqworker.*chunks" -A 3

Length of output: 605


Script:

#!/bin/bash
# Check content of wait_for_deps.sh
cat wait_for_deps.sh

# Look for DefaultWorker in cvat/rqworker.py or similar paths
fd "rqworker.py" --type f --exec cat {}

Length of output: 2815

tests/docker-compose.minio.yml (1)

11-11: LGTM! The new service is properly configured.

The cvat_worker_chunks service is correctly integrated with MinIO using the allow-minio anchor, maintaining consistency with other worker services.

cvat/apps/engine/rq_job_handler.py (1)

31-32: Consider security implications of exception metadata storage.

While adding exception type and args to job metadata improves debugging capabilities, ensure that sensitive information is properly sanitized before storage. Consider adding documentation about:

  1. The purpose and usage of these fields
  2. Any sanitization measures in place
  3. How these fields integrate with the new chunks worker

Let's verify the usage of these new fields:

✅ Verification successful

Exception metadata storage implementation is properly secured

The code review findings show that the exception metadata fields are used appropriately and securely:

  1. In views.py, the fields store actual exception information from the system (exc_type, exc_value.args) only for chunk-related jobs
  2. In cache.py, these fields are used to reconstruct and re-raise the original exception, providing accurate error reporting
  3. The implementation is properly scoped to RQ job metadata and doesn't expose sensitive information

The usage pattern shows these fields serve their intended purpose for error handling in the chunks worker while maintaining security best practices.
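
As a rough illustration of that flow (not the actual CVAT wiring; the handler and the import path are assumptions, only the metadata field names come from this PR), an RQ exception handler can stash the error details in the job's metadata so the requesting process can re-raise them later:

import rq

from cvat.apps.engine.rq_job_handler import RQJobMetaField  # assumed import path

def store_chunk_exception_meta(rq_job: rq.job.Job, exc_type, exc_value, traceback) -> bool:
    # Persist the exception class and its arguments in the job metadata
    rq_job.meta[RQJobMetaField.EXCEPTION_TYPE] = exc_type
    rq_job.meta[RQJobMetaField.EXCEPTION_ARGS] = exc_value.args
    rq_job.save_meta()
    return True  # fall through to RQ's remaining failure handlers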

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for usage of the new exception metadata fields
rg -l "EXCEPTION_TYPE|EXCEPTION_ARGS" --type python

Length of output: 84


Script:

#!/bin/bash
# Search for usage of the new exception metadata fields without file type restriction
rg -l "EXCEPTION_TYPE|EXCEPTION_ARGS"

# Also search for any exception handling patterns in the RQ job handler
rg -A 5 "except|raise" cvat/apps/engine/rq_job_handler.py

Length of output: 882


Script:

#!/bin/bash
# Let's examine how these fields are used in the identified files
rg -B 2 -A 5 "EXCEPTION_TYPE|EXCEPTION_ARGS" cvat/apps/engine/views.py cvat/apps/engine/rq_job_handler.py cvat/apps/engine/cache.py

# Also check for any exception metadata setting patterns
rg -B 2 -A 5 "set_job_meta|update_job_meta" cvat/apps/engine/views.py cvat/apps/engine/rq_job_handler.py cvat/apps/engine/cache.py

Length of output: 2081

tests/python/sdk/test_datasets.py (1)

27-27: LGTM! Consider documenting the Redis restoration strategy.

The addition of restore_redis_inmem_per_function alongside restore_redis_ondisk_per_function provides comprehensive test isolation. This is a good practice to prevent test interference.

Let's verify the Redis restoration strategy across the test suite:

Consider adding a comment explaining:

  1. Why both ondisk and inmem Redis restoration are necessary
  2. The performance implications of per-function restoration
✅ Verification successful

Let me analyze the Redis restoration fixtures usage across the test suite to make a final determination.

From the results:

  1. The fixtures are defined in tests/python/shared/fixtures/init.py
  2. They are used consistently across both SDK and REST API tests
  3. Some tests use both fixtures while others use only one, showing intentional usage patterns:
    • SDK tests often use both or just ondisk
    • REST API tests predominantly use inmem

Let me check the fixture implementations to understand their specific purposes:


The Redis restoration strategy is well-implemented and necessary

The implementation reveals clear distinctions between the two Redis instances:

  • inmem: Selectively cleans keys while preserving specific ones (via _get_redis_inmem_keys_to_keep())
  • ondisk: Performs a complete flush of the Redis instance running on port 6666

The usage in test_datasets.py is correct as it needs both fixtures because:

  1. The inmem instance handles temporary data and caching
  2. The ondisk instance manages persistent data

Both are required for complete test isolation in dataset operations.
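
For context, one common way such fixtures get applied in a test module (a sketch; the class and test names are hypothetical, and CVAT's tests may request the fixtures differently):

import pytest

@pytest.mark.usefixtures(
    "restore_redis_inmem_per_function",  # selectively cleans the in-memory Redis
    "restore_redis_ondisk_per_function",  # flushes the on-disk Redis instance (port 6666)
)
class TestDatasetOperations:  # hypothetical
    def test_runs_with_clean_redis_state(self):
        ...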

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check Redis restoration patterns and documentation
# Test 1: Look for other test files using these fixtures
rg "restore_redis_(inmem|ondisk)_per_function"

# Test 2: Check for any Redis-related documentation
fd -e md -e rst | xargs rg -l "redis"

Length of output: 2996


Script:

#!/bin/bash
# Check the fixture implementations
cat tests/python/shared/fixtures/init.py | grep -A 20 "restore_redis_"

Length of output: 3060

docker-compose.yml (1)

227-241: LGTM! Service definition follows established patterns.

The new cvat_worker_chunks service is well-integrated and follows the consistent patterns established by other worker services in terms of:

  • Image and restart policy
  • Backend dependencies
  • Environment variables
  • Volume mounts
  • Network configuration
helm-chart/values.yaml (1)

120-129: Verify the need for 4 replicas.

The configuration structure follows the established pattern and provides all necessary customization points. However, the default replica count (4) is higher than other workers (1-2). Please verify if this level of parallelization is required for the chunk processing workload.

.vscode/launch.json (4)

172-172: LGTM! Good practice with trailing commas.

Adding trailing commas makes future changes cleaner and reduces merge conflicts.

Also applies to: 182-182


364-385: LGTM! Well-structured RQ worker configuration.

The new chunks worker configuration follows the established pattern and includes all necessary settings consistent with other RQ workers.


579-580: LGTM! Proper integration into debug compound.

The new chunks worker is correctly added to the compound debug configuration, maintaining consistent style and order.


373-374: Verify RQ queue registration.

Ensure that the "chunks" queue is properly registered in the RQ configuration.

✅ Verification successful

Queue registration verified successfully

The "chunks" queue is properly registered in the RQ configuration. Found in cvat/settings/base.py:

  • Defined as CHUNKS in the CVAT_QUEUES enum
  • Configured in RQ_QUEUES with shared queue settings and a 1-minute default timeout (roughly sketched below)
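
For orientation, a django-rq style entry along these lines would match that description (a sketch only; shared_queue_settings and the connection values are placeholders, not the actual contents of cvat/settings/base.py):

shared_queue_settings = {
    "HOST": "localhost",  # placeholder connection options shared by all queues
    "PORT": 6379,
    "DB": 0,
}

RQ_QUEUES = {
    "chunks": {
        **shared_queue_settings,
        "DEFAULT_TIMEOUT": 60,  # the 1-minute default timeout noted above, in seconds
    },
}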
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the chunks queue is registered in RQ configuration
# Expected: Find queue registration in Django settings or RQ configuration files

# Search for queue registration
rg -l 'chunks.*queue' --type py

Length of output: 33


Script:

#!/bin/bash
# Let's try different search patterns to find RQ queue configurations

# Search for RQ queue configurations
rg -l "QUEUES.*=.*\[.*\]" --type py

# Search for any mentions of RQ queues
rg -l "RQ.*QUEUES" --type py

# Search for chunks worker or queue definitions
rg -l "chunks.*worker|QUEUE.*chunks" --type py

# Look for Django settings files that might contain queue configs
fd "settings.*\.py$"

Length of output: 515


Script:

#!/bin/bash
# Let's examine the content of base settings and other relevant files

# Check base settings file for queue configurations
rg -A 5 -B 5 "QUEUES|RQ_" cvat/settings/base.py

# Check engine default settings
rg -A 5 -B 5 "QUEUES|RQ_" cvat/apps/engine/default_settings.py

# Look for any worker or queue related configurations
rg -A 5 -B 5 "WORKER_QUEUE|DEFAULT_QUEUE|QUEUES" cvat/apps/engine/default_settings.py cvat/settings/base.py

Length of output: 5256

tests/python/sdk/test_auto_annotation.py (1)

33-33: Good addition for test isolation!

Adding restore_redis_inmem_per_function alongside restore_redis_ondisk_per_function ensures complete Redis state cleanup between test functions. This improves test isolation by managing both on-disk and in-memory Redis states, which helps prevent test interference and increases reliability.

Let's verify the usage of this fixture across the test suite:

✅ Verification successful

Redis fixture usage is consistent across the test suite

The fixture restore_redis_inmem_per_function is properly defined in tests/python/shared/fixtures/init.py and is consistently used across multiple test files including SDK tests and REST API tests. The addition in test_auto_annotation.py follows the established pattern of using this fixture for Redis state cleanup, ensuring test isolation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other test files using this fixture to ensure consistent usage
rg "restore_redis_inmem_per_function" "tests/"

Length of output: 1958

cvat/settings/base.py (2)

552-558: Verify Redis connection string format

The Redis connection string for the media cache appears correct, but let's ensure the URL encoding of special characters in the password is handled properly.

Let's verify the Redis connection string handling:

#!/bin/bash
# Search for Redis connection handling to ensure proper URL encoding
rg -g '!*.{json,lock}' "LOCATION.*redis://"

# Check if there are any existing Redis connection error handlers
rg -g '!*.{json,lock}' -A 5 "RedisCache.*error"

277-277: LGTM: Queue configuration for chunks processing

The addition of the CHUNKS queue with a 1-minute timeout appears reasonable for chunk processing tasks. The configuration follows the established pattern and reuses the shared Redis connection settings.

Let's verify that the timeout aligns with typical chunk processing times:

Also applies to: 323-326

tests/python/rest_api/test_jobs.py (1)

694-694: LGTM: Redis fixture addition enhances test isolation.

The addition of the restore_redis_inmem_per_class fixture to TestGetGtJobData ensures that Redis in-memory storage is properly restored between test classes, preventing potential test interference and improving test reliability.

cvat/apps/engine/views.py (4)

726-727: LGTM: Proper timeout handling with retry mechanism

The implementation correctly handles TimeoutError by returning a 429 (Too Many Requests) response with a Retry-After header, allowing clients to implement proper backoff strategies.
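
A rough sketch of that pattern (illustrative only; the helper name and the retry interval are assumptions rather than CVAT's actual values):

from rest_framework import status
from rest_framework.response import Response

def retrieve_chunk(self, request, pk=None):
    try:
        data = self._load_chunk_from_cache(pk)  # hypothetical helper
    except TimeoutError:
        # The chunk is still being prepared by the worker; ask the client to retry
        return Response(
            status=status.HTTP_429_TOO_MANY_REQUESTS,
            headers={"Retry-After": "30"},  # assumed retry interval
        )
    return Response(data)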


811-827: LGTM: Consistent timeout handling for job data retrieval

The implementation properly handles TimeoutError during chunk retrieval, maintaining consistency with the parent class implementation. The error response includes appropriate status code and retry information.


2976-2977: LGTM: Consistent timeout handling for cloud storage operations

The implementation follows the established pattern for timeout handling across the codebase, providing consistent error responses for cloud storage operations.


3264-3266: LGTM: Enhanced exception handling for chunk processing

The implementation adds specialized exception metadata storage for chunk processing jobs, improving debugging capabilities while maintaining the existing exception handling behavior.

Comment on lines +11 to +18
[supervisord]
nodaemon=true
logfile=%(ENV_HOME)s/logs/supervisord.log ; supervisord log file
logfile_maxbytes=50MB ; maximum size of logfile before rotation
logfile_backups=10 ; number of backed up logfiles
loglevel=debug ; info, debug, warn, trace
pidfile=/tmp/supervisord/supervisord.pid ; pidfile location

Contributor

💡 Codebase verification

Debug logging and PID file location need attention across all supervisord configurations

The verification confirms that debug-level logging and /tmp PID file location are consistently used across all supervisord configuration files:

  • All 9 supervisord config files have loglevel=debug
  • All configs use /tmp/supervisord/supervisord.pid for PID file storage

This is a systemic configuration that should be addressed across all files to:

  1. Set appropriate production logging level
  2. Use a more secure location for PID files
  3. Ensure consistent changes across all supervisord configurations
🔗 Analysis chain

Review logging configuration for production use

  1. Debug level logging is enabled, which could:
    • Impact performance
    • Fill disk space faster
    • Potentially expose sensitive information
  2. PID file is stored in /tmp, which has similar security implications as the socket file

Let's check if debug logging is used in other worker configurations:

Consider:

  • Setting loglevel to 'info' for production
  • Moving PID file to a more appropriate location
  • Adding log rotation configuration to other log handlers
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check logging levels in other supervisord configs
rg "loglevel=" -A 1 -B 1

Length of output: 5322


Script:

#!/bin/bash
# Check logging levels in supervisord configs using a more specific pattern
rg "loglevel\s*=" supervisord/

Length of output: 817


Script:

#!/bin/bash
# Check for PID file configurations in supervisord configs
rg "pidfile\s*=" supervisord/

Length of output: 861


Script:

#!/bin/bash
# List all supervisord config files to ensure we're not missing any
fd -e conf -e ini supervisord/

Length of output: 386

Comment on lines +47 to +96
spec:
serviceAccountName: {{ include "cvat.backend.serviceAccountName" . }}
containers:
- name: cvat-backend
image: {{ .Values.cvat.backend.image }}:{{ .Values.cvat.backend.tag }}
imagePullPolicy: {{ .Values.cvat.backend.imagePullPolicy }}
{{- with merge $localValues.resources .Values.cvat.backend.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
args: ["run", "worker.chunks"]
env:
{{ include "cvat.sharedBackendEnv" . | indent 10 }}
{{- with concat .Values.cvat.backend.additionalEnv $localValues.additionalEnv }}
{{- toYaml . | nindent 10 }}
{{- end }}
{{- $probeArgs := list "chunks" -}}
{{- $probeConfig := dict "args" $probeArgs "livenessProbe" $.Values.cvat.backend.worker.livenessProbe -}}
{{ include "cvat.backend.worker.livenessProbe" $probeConfig | indent 10 }}
volumeMounts:
- mountPath: /home/django/data
name: cvat-backend-data
subPath: data
- mountPath: /home/django/logs
name: cvat-backend-data
subPath: logs
{{- with concat .Values.cvat.backend.additionalVolumeMounts $localValues.additionalVolumeMounts }}
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with merge $localValues.affinity .Values.cvat.backend.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with concat .Values.cvat.backend.tolerations $localValues.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
{{- if .Values.cvat.backend.defaultStorage.enabled }}
- name: cvat-backend-data
persistentVolumeClaim:
claimName: "{{ .Release.Name }}-backend-data"
{{- end }}
{{- with concat .Values.cvat.backend.additionalVolumes $localValues.additionalVolumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
Contributor

⚠️ Potential issue

Add security context and reliability configurations.

Several important security and reliability configurations are missing:

  1. Security context for running as non-root
  2. Readiness probe for proper load balancing
  3. Mandatory resource limits
  4. Node affinity for data locality

Apply these security and reliability improvements:

    spec:
      serviceAccountName: {{ include "cvat.backend.serviceAccountName" . }}
+     securityContext:
+       runAsNonRoot: true
+       runAsUser: 1000
+       fsGroup: 1000
      containers:
        - name: cvat-backend
          image: {{ .Values.cvat.backend.image }}:{{ .Values.cvat.backend.tag }}
          imagePullPolicy: {{ .Values.cvat.backend.imagePullPolicy }}
+         securityContext:
+           allowPrivilegeEscalation: false
+           capabilities:
+             drop: ["ALL"]
          resources:
+           limits:
+             cpu: "2"
+             memory: "4Gi"
+           requests:
+             cpu: "500m"
+             memory: "1Gi"
          # ... existing config ...
+         readinessProbe:
+           exec:
+             command: ["python3", "manage.py", "health_check", "chunks"]
+           initialDelaySeconds: 30
+           periodSeconds: 60
      affinity:
+       nodeAffinity:
+         preferredDuringSchedulingIgnoredDuringExecution:
+         - weight: 100
+           preference:
+             matchExpressions:
+             - key: cvat.io/chunk-processing
+               operator: In
+               values:
+               - "true"
Comment on lines +346 to +348
frame, frame_name, _ = segment_frame_provider._get_raw_frame(
task_chunk_frames_with_rel_numbers[task_chunk_frame_id], quality=quality
)
Contributor

🛠️ Refactor suggestion

Avoid accessing private methods from outside the class

The method _get_raw_frame is intended to be private, as indicated by the leading underscore. Accessing it from outside the SegmentFrameProvider class violates encapsulation principles. Consider adding a public method to expose the necessary functionality or refactor the code to avoid the need to access this private method externally.
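
A minimal sketch of such a public accessor (the method name is hypothetical):

# Inside SegmentFrameProvider -- sketch only
def get_raw_frame(self, frame_number: int, *, quality):
    # Public counterpart of _get_raw_frame for callers outside the class
    return self._get_raw_frame(frame_number, quality=quality)

The test above could then call segment_frame_provider.get_raw_frame(...) without reaching into the private API.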

Comment on lines 694 to 716
@staticmethod
def _get_chunk_create_callback(db_segment: models.Segment | int, segment_chunk_frame_ids, chunk_number, quality) -> DataWithMime:
# Create and return a joined / cleaned chunk
if isinstance(db_segment, int):
db_segment = models.Segment.objects.get(pk=db_segment)

if db_segment.type == models.SegmentType.RANGE:
return MediaCache.prepare_custom_range_segment_chunk(
db_task=db_segment.task,
frame_ids=segment_chunk_frame_ids,
quality=quality,
)
elif db_segment.type == models.SegmentType.SPECIFIC_FRAMES:
return MediaCache.prepare_custom_masked_range_segment_chunk(
db_task=db_segment.task,
frame_ids=segment_chunk_frame_ids,
chunk_number=chunk_number,
quality=quality,
insert_placeholders=True,
)
else:
assert False

Contributor

🛠️ Refactor suggestion

Refactor to eliminate code duplication between _get_chunk_create_callback methods

The _get_chunk_create_callback methods in both TaskFrameProvider and JobFrameProvider share similar logic. This code duplication can make maintenance more challenging and increase the risk of inconsistencies. Consider refactoring the shared code into a common function or a helper method to promote reusability and improve maintainability.
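
One possible shape for the shared helper, based on the code quoted above (a sketch that assumes the existing imports in frame_provider.py -- models, MediaCache, DataWithMime -- and folds in the NotImplementedError fix suggested in the next comment):

def _prepare_segment_chunk(
    db_segment: models.Segment | int, frame_ids, chunk_number, quality
) -> DataWithMime:
    # Shared body that both TaskFrameProvider and JobFrameProvider could delegate to
    if isinstance(db_segment, int):
        db_segment = models.Segment.objects.get(pk=db_segment)

    if db_segment.type == models.SegmentType.RANGE:
        return MediaCache.prepare_custom_range_segment_chunk(
            db_task=db_segment.task,
            frame_ids=frame_ids,
            quality=quality,
        )
    elif db_segment.type == models.SegmentType.SPECIFIC_FRAMES:
        return MediaCache.prepare_custom_masked_range_segment_chunk(
            db_task=db_segment.task,
            frame_ids=frame_ids,
            chunk_number=chunk_number,
            quality=quality,
            insert_placeholders=True,
        )
    raise NotImplementedError(f"Unsupported segment type: {db_segment.type}")

Each provider's _get_chunk_create_callback would then reduce to selecting its frame ids and calling this helper.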

Comment on lines +714 to +716
else:
assert False

Contributor

⚠️ Potential issue

Replace assert False with explicit exception

Using assert False might be problematic because assertions can be disabled when Python is run with optimization flags (-O). It's safer to raise an explicit exception to handle unexpected cases. Consider raising an appropriate exception such as NotImplementedError to indicate unsupported segment types.

Suggested change:

     else:
-        assert False
+        raise NotImplementedError(f"Unsupported segment type: {db_segment.type}")

Comment on lines 133 to 135
exc_type = job_meta.get(RQJobMetaField.EXCEPTION_TYPE, Exception)
exc_args = job_meta.get(RQJobMetaField.EXCEPTION_ARGS, ("Cannot create chunk",))
raise exc_type(*exc_args)
Contributor

⚠️ Potential issue

Validate exception type before raising

When raising an exception using exc_type(*exc_args), ensure that exc_type is indeed an exception class to prevent unexpected behavior. Add validation to confirm exc_type is a subclass of Exception.

Apply this diff to add validation:

                 job_meta = rq_job.get_meta()
                 exc_type = job_meta.get(RQJobMetaField.EXCEPTION_TYPE, Exception)
+                if not issubclass(exc_type, Exception):
+                    exc_type = Exception
                 exc_args = job_meta.get(RQJobMetaField.EXCEPTION_ARGS, ("Cannot create chunk",))
                 raise exc_type(*exc_args)

cvat/apps/engine/cache.py: outdated review comment (resolved)
Comment on lines 179 to 180
assert item is not None
else:
Contributor

⚠️ Potential issue

Replace assert statement with exception handling

Using assert statements in production code is not recommended as they can be skipped when Python is run with optimizations. Replace the assert with explicit exception handling to ensure None values are properly managed.

Apply this diff to fix the issue:

                 item = self._get_cache_item(key)
-                assert item is not None
+                if item is None:
+                    raise ValueError(f"Failed to retrieve cache item {key} after job completion")

Comment on lines 150 to 183
slogger.glob.info(f"Starting to prepare chunk: key {key}")
if self._is_run_inside_rq():
item = self._create_and_set_cache_item(
key,
create_callback,
True,
*args,
**kwargs,
)
else:
# compare checksum
item_data = item[0].getbuffer() if isinstance(item[0], io.BytesIO) else item[0]
item_checksum = item[2] if len(item) == 3 else None
if item_checksum != self._get_checksum(item_data):
slogger.glob.info(f"Recreating cache item {key} due to checksum mismatch")
item = create_item()
queue = self._get_queue()
rq_id = self._make_queue_job_id_base(key)
rq_job = queue.fetch_job(rq_id)

if not rq_job:
rq_job = queue.enqueue(
self._create_and_set_cache_item,
key,
create_callback,
False,
*args,
**kwargs,
job_id=rq_id,
result_ttl=self._RQ_JOB_RESULT_TTL,
failure_ttl=self._RQ_JOB_FAILURE_TTL,
)

if self._wait_for_rq_job(rq_job):
item = self._get_cache_item(key)
assert item is not None
else:
raise TimeoutError(f"Chunk processing takes too long {key}")

slogger.glob.info(f"Ending to prepare chunk: key {key}")
Contributor

🛠️ Refactor suggestion

Add exception handling for create_callback failures

In _create_cache_item, if create_callback raises an exception, it may not be properly handled, leading to unhandled exceptions in the job. Consider adding exception handling around create_callback to ensure errors are logged and the job fails gracefully.

Apply this diff to enhance error handling:

             slogger.glob.info(f"Starting to prepare chunk: key {key}")
             if self._is_run_inside_rq():
+                try:
                     item = self._create_and_set_cache_item(
                         key,
                         create_callback,
                         True,
                         *args,
                         **kwargs,
                     )
+                except Exception as e:
+                    slogger.glob.error(f"Error creating cache item for key {key}: {e}", exc_info=True)
+                    raise
             else:

{{- toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ include "cvat.backend.serviceAccountName" . }}

Check warning (Code scanning / SonarCloud)

Service account permissions should be restricted (Medium)
Bind this Service Account to RBAC or disable "automountServiceAccountToken". See more on SonarQube Cloud.
spec:
serviceAccountName: {{ include "cvat.backend.serviceAccountName" . }}
containers:
- name: cvat-backend

Check warning (Code scanning / SonarCloud)

Storage limits should be enforced (Medium)
Specify a storage limit for this container. See more on SonarQube Cloud.
spec:
serviceAccountName: {{ include "cvat.backend.serviceAccountName" . }}
containers:
- name: cvat-backend

Check warning (Code scanning / SonarCloud)

Memory limits should be enforced (Medium)
Specify a memory limit for this container. See more on SonarQube Cloud.
@azhavoro changed the title from "[WIP] Prepare chunks in a worker process" to "Prepare chunks in a worker process" on Oct 31, 2024
codecov-commenter commented Oct 31, 2024

Codecov Report

Attention: Patch coverage is 76.62338% with 54 lines in your changes missing coverage. Please review.

Project coverage is 74.06%. Comparing base (5520212) to head (c66681f).
Report is 8 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8618      +/-   ##
===========================================
- Coverage    74.19%   74.06%   -0.13%     
===========================================
  Files          401      401              
  Lines        43504    43658     +154     
  Branches      3950     3950              
===========================================
+ Hits         32278    32337      +59     
- Misses       11226    11321      +95     
Components Coverage Δ
cvat-ui 78.53% <ø> (+0.02%) ⬆️
cvat-server 70.27% <76.62%> (-0.24%) ⬇️

Contributor

@zhiltsov-max

Could you also add queue handling and locking for this action: _write_updated_static_chunk()?
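
(Editorial illustration only, not the author's plan: the same enqueue-and-wait machinery could be combined with a Redis lock so that only one worker rewrites a given static chunk at a time. The lock key, timeouts, and wrapper below are assumptions; _write_updated_static_chunk is the existing CVAT function, whose signature is not shown here.)

import django_rq

def write_updated_static_chunk_locked(db_segment_id: int, chunk_number: int):
    # Sketch: serialize static-chunk rewrites across workers with a Redis lock
    queue = django_rq.get_queue("chunks")
    lock_name = f"lock:update-static-chunk:{db_segment_id}:{chunk_number}"  # assumed key
    with queue.connection.lock(lock_name, timeout=60, blocking_timeout=10):
        _write_updated_static_chunk(...)  # call the existing function here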

cvat/settings/base.py: outdated review comment (resolved)

sonarcloud bot commented Nov 19, 2024

Quality Gate failed

Failed conditions
Security Rating on New Code: C (required ≥ A)

See analysis details on SonarQube Cloud

