Backend for Resource Optimization Service
The Red Hat Insights resource optimization service (ROS) enables RHEL customers to assess and monitor their public cloud usage and optimization. The service exposes workload metrics for CPU, memory, and disk usage and compares them to the resource limits recommended by the public cloud provider. Currently, ROS only provides suggestions for RHEL instances running on AWS. To enable ROS, customers need to perform a few prerequisite steps on the targeted systems via an Ansible playbook.
Underneath, ROS uses Performance Co-Pilot (PCP) to monitor and report workload metrics.
ROS currently supports two architectures running in parallel:
- ROS V1 - Legacy Architecture
- ROS V2 - New Architecture
The new architecture consists of the following components:
- **Suggestions Engine**
  - Consumes from: `platform.inventory.events`
  - Produces to: `ros.events`
  - Responsibilities:
    - Listens to inventory create/update events
    - Downloads and extracts system archives containing PCP data
    - Runs the `pmlogextract` and `pmlogsummary` commands to process PCP metrics
    - Executes the ROS rules engine to generate recommendations
    - Produces events with performance profiles and rule hits
    - Handles systems both with and without PCP data
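The Suggestions Engine's output can be pictured as a small transformation from an inventory event into a `ros.events` message. A minimal sketch in Python, assuming illustrative field names (the real message schema is defined by the ROS project and is not reproduced here):

```python
import json

def build_ros_event(inventory_event: dict, performance_profile: dict, rule_hits: list) -> dict:
    """Assemble a ros.events message from an inventory create/update event.

    NOTE: all field names here are illustrative assumptions, not the
    actual ROS event schema.
    """
    host = inventory_event["host"]
    return {
        "org_id": host["org_id"],
        "inventory_id": host["id"],
        "performance_profile": performance_profile,
        "rule_hits": rule_hits,
    }

event = {"host": {"id": "abc-123", "org_id": "1234"}}
profile = {"cpu_utilization": 12.5, "mem_utilization": 40.0}
message = build_ros_event(event, profile, rule_hits=["INSTANCE_IDLE"])
print(json.dumps(message))
```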
- **Report Processor**
  - Consumes from: `ros.events`
  - Responsibilities:
    - Processes events produced by the Suggestions Engine
    - Creates/updates system records in the database
    - Manages performance profiles and history
    - Processes API-triggered system updates
- **System Eraser**
  - Consumes from: `platform.inventory.events`
  - Responsibilities:
    - Listens for system deletion events from Inventory
    - Removes systems and associated data from the ROS database
    - Ensures data consistency with the Inventory service
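The System Eraser's core decision is simple: react only to deletion events and forget everything else. A simplified sketch of that filtering step (the real consumer also performs the actual database deletes, which are omitted here):

```python
def hosts_to_erase(events: list[dict]) -> list[str]:
    """Return inventory IDs for hosts whose events signal deletion.

    Simplified: assumes each event dict carries a "type" field and an
    "id" field, and that "delete" marks a system removal.
    """
    return [e["id"] for e in events if e.get("type") == "delete"]

events = [
    {"type": "created", "id": "a"},
    {"type": "delete", "id": "b"},
    {"type": "updated", "id": "c"},
    {"type": "delete", "id": "d"},
]
print(hosts_to_erase(events))  # ['b', 'd']
```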
These components serve customers not yet migrated to ROS V2 and will be deprecated once migration is complete:
- **Inventory Events Processor**
  - Consumes from: `platform.inventory.events`
  - Handles inventory events
  - Combines data processing and database operations in a single service
- **Engine Results Processor**
  - Consumes from: `platform.engine.results`
  - Processes results from the Insights Engine
These components are shared by both V1 and V2 architectures:
- **API Server**
  - REST API providing access to optimization recommendations
  - OpenAPI specification available at `/api/ros/v1/openapi.json`
  - Handles RBAC and Kessel authorization
  - Serves data from the same database regardless of which architecture processed it
- **Garbage Collector**
  - Periodic cleanup of outdated system data
  - Configurable via `GARBAGE_COLLECTION_INTERVAL` and `DAYS_UNTIL_STALE`
  - Works with data from both V1 and V2 processing
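The staleness rule the garbage collector applies can be sketched as a cutoff computed from `DAYS_UNTIL_STALE`. This is a simplified illustration, not the service's actual implementation; the `7`-day fallback below is an arbitrary example default:

```python
import os
from datetime import datetime, timedelta, timezone

def is_stale(last_reported: datetime, days_until_stale: int) -> bool:
    """A system counts as outdated once its last report is older than
    DAYS_UNTIL_STALE days (simplified sketch of the cleanup rule)."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days_until_stale)
    return last_reported < cutoff

# In the real service this value comes from the environment variable above;
# the default of "7" here is purely illustrative.
days_until_stale = int(os.environ.get("DAYS_UNTIL_STALE", "7"))

fresh = datetime.now(timezone.utc) - timedelta(days=1)
old = datetime.now(timezone.utc) - timedelta(days=30)
print(is_stale(fresh, days_until_stale), is_stale(old, days_until_stale))
```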
This project uses poetry to manage the development and production environments.
Once you have poetry installed, do the following:
The latest version is supported on Python 3.11; install it and then switch the poetry environment to it:

```bash
poetry env use python3.11
```

There are some system package dependencies; install those:

```bash
dnf install tar gzip gcc python3.11-devel libpq-devel
```

Install the required Python dependencies:

```bash
poetry install
```

Afterwards, you can activate the virtual environment by running:

```bash
poetry shell
```

A list of configurable environment variables is present in the `.env.example` file.
ROS V2 uses the following Kafka topics:
- `platform.inventory.events` - Input topic for inventory system events (create, update, delete)
- `ros.events` - Internal topic for communication between the Suggestions Engine and the Report Processor
ROS V2 functionality is controlled by Unleash feature flags for gradual migration:
- `ros.v2` - Flag to enable/disable the ROS V2 architecture
  - When enabled: the system uses the V2 architecture (Suggestions Engine → Report Processor flow)
  - When disabled: the system uses the V1 architecture (Inventory Events Processor flow)
  - Uses `org_id` for controlled rollout
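The per-org routing decision can be sketched as a pure function. This is a stand-in for the real check: the actual service evaluates the `ros.v2` flag through an Unleash client with the `org_id` in the flag context, whereas here the rollout set is passed in explicitly for illustration:

```python
def use_v2(org_id: str, v2_flag_enabled: bool, rollout_org_ids: set) -> bool:
    """Decide which architecture processes an event for a given org.

    Illustrative stand-in for an Unleash `ros.v2` flag evaluation with
    org_id-based gradual rollout.
    """
    return v2_flag_enabled and org_id in rollout_org_ids

rollout = {"1111", "2222"}
print(use_v2("1111", True, rollout))   # True  -> Suggestions Engine path (V2)
print(use_v2("3333", True, rollout))   # False -> Inventory Events Processor path (V1)
print(use_v2("1111", False, rollout))  # False -> flag disabled globally
```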
The application depends on several parts of the Insights platform. These dependencies are provided by the `docker-compose.yml` file in the `scripts` directory.
To run the dependencies, run the following command:

```bash
cd scripts && docker-compose up insights-inventory-mq db-ros insights-engine
```

To run the full application (with the ROS components inside Docker):

```bash
docker-compose up ros-processor ros-api
```

In order to properly run the application from the host machine, you may optionally modify your `/etc/hosts` file for convenience.
Check the README.md file in scripts directory for details and important networking considerations.
Run the following commands to execute the DB migration scripts:

```bash
export FLASK_APP=manage.py
flask db upgrade
flask seed
```

To run the Suggestions Engine:

```bash
python -m ros.processor.suggestions_engine
```

To run the Report Processor:

```bash
python -m ros.processor.report_processor_consumer
```

To run the System Eraser:

```bash
python -m ros.processor.system_eraser
```

To run the legacy (V1) processor:

```bash
python -m ros.processor.main
```

The web API component provides a REST API view of the app database. To run it:

```bash
python -m ros.api.main
```

It is possible to run the tests using pytest:
```bash
poetry install
poetry run pytest --cov=ros tests
```

Resource Optimization REST API documentation can be found at `/api/ros`. On a local instance, the raw OpenAPI definition can be accessed at http://localhost:8000/api/ros/v1/openapi.json.
For a local development setup, remember to use the `x-rh-identity` header encoded from the account number and `org_id` used while running the `make insights-upload-data` command.
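The `x-rh-identity` header is a base64-encoded JSON identity object. A minimal sketch of building one from an account number and `org_id` (the platform's full identity payload contains more fields than shown here; the values are placeholders):

```python
import base64
import json

def make_identity_header(account_number: str, org_id: str) -> str:
    """Build an x-rh-identity header value: base64-encoded JSON identity.

    Only the account_number/org_id portion mentioned above is included;
    the real platform identity carries additional fields.
    """
    identity = {"identity": {"account_number": account_number, "org_id": org_id}}
    return base64.b64encode(json.dumps(identity).encode()).decode()

header = make_identity_header("0000001", "000001")
print(header)
# Decoding the header recovers the JSON identity:
print(json.loads(base64.b64decode(header))["identity"]["org_id"])
```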