Description
RAM utilization has been increasing over time.
The exporter shows huge RAM usage, up to 2 GB, that keeps growing: one instance started at 50 MB and reached 1 GB in 10 days, another Postgres instance went from 50 MB to 2 GB in 7 days, while others show smaller growth, e.g. 30 MB to 100 MB in 10 days.
After restarting the service that runs the exporter, the memory is released, but utilization then starts to increase again.
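In case it helps to reproduce the measurements: the growth can be followed by sampling the exporter process RSS. Below is a minimal sketch of such a sampler, assuming psutil is installed (it is not in the pip list further down) — any other RSS monitoring works just as well.

```python
"""Sample the query-exporter process RSS once a minute (rough sketch, assumes psutil)."""
import time

import psutil


def find_exporter():
    # Match the process by its command line; adjust the pattern if needed.
    for proc in psutil.process_iter(["cmdline"]):
        cmdline = proc.info["cmdline"] or []
        if any("query-exporter" in part for part in cmdline):
            return proc
    return None


while True:
    proc = find_exporter()
    if proc is not None:
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        print(time.strftime("%Y-%m-%d %H:%M:%S"), "rss=%.1f MiB" % rss_mib)
    time.sleep(60)
```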
Installation details
- operating system: [CentOS 7]
- query-exporter installation type:
- pip:
- Package Version
---------------------- ------------
aiohttp 3.7.4.post0
argcomplete 3.1.2
async-timeout 3.0.1
attrs 22.2.0
chardet 4.0.0
croniter 2.0.1
idna 3.6
idna-ssl 1.1.0
importlib-metadata 4.8.3
jsonschema 3.2.0
multidict 5.2.0
outcome 1.1.0
pip 21.3.1
prometheus-aioexporter 1.6.3
prometheus-client 0.17.1
psycopg2-binary 2.8.6
pyrsistent 0.18.0
python-dateutil 2.8.2
pytz 2023.3.post1
PyYAML 6.0.1
query-exporter 2.7.0
Represent 1.6.0.post0
setuptools 59.6.0
six 1.16.0
SQLAlchemy 1.3.24
sqlalchemy-aio 0.16.0
toolrack 3.0.1
typing_extensions 4.1.1
wheel 0.37.1
yarl 1.7.2
zipp 3.6.0
- docker image: [no docker]
- snap: [no snap]
To Reproduce
Such a large increase reproduces only on some instances; the only difference between them is the number of metrics retrieved (which depends on the number of queries and tables in the database). I can't see how that alone could be the reason the memory is never released (see the sketch below the config).
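For what it's worth, my understanding is that prometheus-client keeps one child series in memory per distinct label combination, so a fixed set of roughly 200 tables per query should make memory plateau rather than grow. The sketch below is not the exporter's actual code path, just an illustration with made-up label values, showing that repeated runs with the same labels don't add new series:

```python
from prometheus_client import CollectorRegistry, Counter

registry = CollectorRegistry()
seq_scan = Counter(
    "pg_table_seq_scan",
    "Number of sequential scans initiated on the table",
    ["datname", "schemaname", "relname", "parent_relname"],
    registry=registry,
)

# Simulate three query runs, each returning the same 200 rows ("limit 200" below).
for _ in range(3):
    for i in range(200):
        seq_scan.labels("dbname", "public", "table_%d" % i, "").inc()

# Children are deduplicated by label values, so repeated runs over a stable set of
# tables do not create new series; the count only grows when new combinations appear.
samples = sum(len(metric.samples) for metric in registry.collect())
print("exported samples:", samples)
```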
- Config file content (redacted of secrets if needed)
```yaml
databases:
  dbname:
    dsn: env:PG_DATABASE_DSN_dbname

metrics:
  pg_table_seq_scan:
    type: counter
    description: Number of sequential scans initiated on the table
    labels: [datname, schemaname, relname, parent_relname]
  ...

queries:
  table_stats:
    interval: 1h
    databases: [dbname]
    metrics:
      - pg_table_seq_scan
      ...
    sql: >
      select
        current_database() as datname,
        ... limit 200
  idx_stats:
    interval: 1h
    databases: [dbname]
    metrics:
      - pg_idx_scan
      ...
    sql: >
      with q_locked_rels as (
        select relation from pg_locks where mode = 'AccessExclusiveLock'
        ... limit 200
  query_stats:
    interval: 1m
    databases: [dbname]
    metrics:
      - pg_statements_calls
      ...
    sql: >
      with q_data as (
        select
        ... limit 200
```
- Ran query-exporter with the following command line ...
/usr/local/query_exporter/bin/query-exporter /etc/query_exporter/config.yml --host 0.0.0.0 --port 9560
PG_DATABASE_DSN_dbname=postgresql://<exporter_user>:<password>@<host>.ru:<pg_port>/dbname?target_session_attrs=read-write&application_name=query_exporter
Right now I'm trying keep-connected: false, but it will take at least a couple of days to see results. I don't understand why the exporter keeps holding the memory instead of returning it after running a query.
There is also a thought that it could be Postgres-specific behaviour. I would be grateful if you could share your knowledge.
You can clearly see when the exporter was restarted.
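To help narrow down whether anything is being held on the Postgres side, I can also check how long the exporter's backends stay connected via pg_stat_activity (the DSN already sets application_name=query_exporter). A rough sketch, assuming the same DSN is available in the environment:

```python
import os

import psycopg2

# Reuses the exporter's DSN from the environment; needs a role that can read pg_stat_activity.
conn = psycopg2.connect(os.environ["PG_DATABASE_DSN_dbname"])
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            select pid, backend_start, state, now() - backend_start as connection_age
            from pg_stat_activity
            where application_name = 'query_exporter'
            order by backend_start
            """
        )
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```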
