OS: Rocky Linux 8.9
CPU: 1 socket / 4 cores
RAM: 16GB
Required packages
gcc
gcc-c++
make
autoconf
readline
readline-devel
zlib
zlib-devel
openssl
openssl-devel
gettext
gettext-devel
python
python-devel (could not be installed; skipped)
wget
libicu-devel
Required environment variables
.bash_profile
export LD_LIBRARY_PATH=/DBMS/PG16/lib
export PATH=/DBMS/PG16/bin:$PATH
export MANPATH=/DBMS/PG16/share/man:$MANPATH
export PGDATA=/DBMS/DATA
export PGLOG=/DBMS/LOG
export PGUSER=postgres
export PGDATABASE=postgres
export PGPORT=5552
export BASE_PATH=/DBMS
export PGHOME='/DBMS/PG16'
export LANG=ko_KR.utf8
alias pgstart='pg_ctl start'
alias pgstop='pg_ctl stop -mf'
alias pghome='cd $PGHOME'
alias pgcfg='vi $PGDATA/postgresql.conf'
alias pghba='vi $PGDATA/pg_hba.conf'
alias pglog='cd $PGLOG'
alias pgreload='$PGHOME/bin/pg_ctl reload'
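After editing .bash_profile, the new variables should take effect in the session. A minimal sanity check, mirroring the key exports above rather than sourcing the real profile (paths as assumed in this guide):

```shell
# Mirror of the key .bash_profile exports above, applied in this shell.
export PGHOME=/DBMS/PG16
export PGDATA=/DBMS/DATA
export PGPORT=5552
export PATH="$PGHOME/bin:$PATH"

# Verify: the engine's bin directory must resolve first on PATH;
# psql and pg_ctl will then pick up PGDATA/PGPORT implicitly.
case ":$PATH:" in
  *":/DBMS/PG16/bin:"*) echo "PATH OK" ;;
  *) echo "PATH missing /DBMS/PG16/bin" >&2 ;;
esac
```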
-- make 3.81 or later is required
[db@TEST1 ~]$ make --version
GNU Make 4.2.1
Built for x86_64-redhat-linux-gnu
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
-- Extract the PostgreSQL source archive
cd /DBMS/work/PG16
tar xvf postgresql-16.2.tar.gz
-- Generate the build tree; --prefix sets where the engine will be installed
/DBMS/work/PG16/postgresql-16.2/configure --prefix=/DBMS/PG16
-- Run the build
To use the source tree prepared by configure, run the following commands in the directory where configure was run (here /DBMS/work/PG16/postgresql-16.2):
make
Run the regression tests on the build before installing:
make check
make install
-- Pass -U to initdb so the cluster superuser is created as postgres.
-- By default the superuser takes the name of the OS user running initdb, so specify -U postgres to be able to connect as the postgres superuser.
/DBMS/PG16/bin/initdb -E utf8 -U postgres
-- Verify the cluster databases
[local]:5432 postgres@postgres=#\l
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
-----------+----------+----------+-----------------+-------------+-------------+------------+-----------+-----------------------
postgres | postgres | UTF8 | libc | ko_KR.UTF-8 | ko_KR.UTF-8 | | |
template0 | postgres | UTF8 | libc | ko_KR.UTF-8 | ko_KR.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | libc | ko_KR.UTF-8 | ko_KR.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
--Change the superuser password
[pg@test DATA]$ psql
Timing is on.
Pager usage is off.
psql (15.5)
Type "help" for help.
postgres=# alter user postgres password 'P@ssw0rd';
ALTER ROLE
postgres=*# commit;
COMMIT
-- pg_hba.conf connection settings
$PGDATA/pg_hba.conf
# IPv4 local connections:
host all all 127.0.0.1/32 md5 -- trust allows login without a password; md5 requires one. Be sure to change trust to md5.
host all all 10.10.0.0/16 md5 -- the network mask (/8, /16, /24, /32) controls which address range may connect
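As a sketch of a fuller pg_hba.conf under the assumptions above (example subnets; note that on PostgreSQL 16 the default password method is scram-sha-256, which is preferred over md5):

```
# pg_hba.conf sketch (example addresses; adjust to your network)

# Local socket connections for the postgres OS user (maintenance)
local   all   postgres                  peer

# Loopback connections require a password
host    all   all   127.0.0.1/32       scram-sha-256

# Application subnet; narrow the mask as far as possible
host    all   all   10.10.0.0/16       scram-sha-256
```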
-- Parameter settings; size these according to the server's capacity
listen_addresses = '*'
port = 5002
log_destination = 'stderr'
logging_collector = on
log_directory = '/DBMS/LOG/'
log_rotation_size = 50MB
log_line_prefix = '%m [%p:%h:%u:%a] '
log_statement = 'ddl'
log_timezone = 'Asia/Seoul'
log_min_duration_statement = 5000
temp_file_limit = 30GB
idle_in_transaction_session_timeout = 900000
max_connections = 200
shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 10485kB
huge_pages = off
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
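The memory values above follow common rules of thumb for a dedicated 16GB host: shared_buffers around 25% of RAM and effective_cache_size around 75%. A small sketch of that arithmetic (the percentages are conventions, not hard requirements):

```shell
# Rule-of-thumb sizing for a dedicated 16GB PostgreSQL host.
# shared_buffers ~= 25% of RAM, effective_cache_size ~= 75%.
ram_mb=$((16 * 1024))

shared_buffers_mb=$((ram_mb * 25 / 100))
effective_cache_mb=$((ram_mb * 75 / 100))

echo "shared_buffers       = $((shared_buffers_mb / 1024))GB"    # 4GB
echo "effective_cache_size = $((effective_cache_mb / 1024))GB"   # 12GB
```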
--Installing extension modules
After extracting and building the source, the extension modules are available under the contrib directory of the source tree; build and install whichever features you need.
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name LIKE '%store%';
/DBMS/work/PG16/contrib
cd /DBMS/work/PG16/contrib
cd /DBMS/work/PG16/contrib/pg_stat_statements
make && make install
Run the CREATE EXTENSION statement in each database you want to monitor.
(No CREATE EXTENSION is needed for auto_explain; it applies only to pg_stat_statements.)
psql
postgres=# \c mydatabase
mydatabase=# CREATE EXTENSION pg_stat_statements;
# postgresql.conf (pg_stat_statements must also be listed in shared_preload_libraries)
pg_stat_statements.max = 5000
pg_stat_statements.track = all
pg_stat_statements.save = on
Fetch the five most expensive queries in the database:
-- Note: since PostgreSQL 13 the column is total_exec_time (total_time no longer exists)
SELECT query, calls, total_exec_time, rows,
       100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements s INNER JOIN pg_database d ON d.oid = s.dbid
WHERE d.datname = 'Database_Name'
ORDER BY total_exec_time DESC LIMIT 5;
cd /DBMS/work/PG16/contrib/pg_buffercache
make && make install
CREATE EXTENSION pg_buffercache;
The status check focuses on three items:
1. Buffer usage
2. Usage relative to shared_buffers
3. Cached fraction of each table
SELECT c.relname AS relname
     , pg_size_pretty(count(*) * 8192) AS buffered
     , round(100.0 * count(*) / (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')::integer, 1) AS buffer_percent
     , round(100.0 * count(*) * 8192 / pg_table_size(c.oid), 1) AS percent_of_relation
FROM pg_class c
INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())
GROUP BY c.oid, c.relname
ORDER BY 3 DESC LIMIT 20;
cd /DBMS/work/PG16/contrib/passwordcheck
make && make install
cd /DBMS/work/PG16/contrib/pg_hint_plan
make && make install
-- Create the extension
create extension pg_hint_plan;
Caution: table names, aliases, and index names are recognized in SQL hints only when written in lowercase.
select /*+ SeqScan(a) */
*
from t_xxx_brd a
where use_yn = 'Y'
order by 1, 2;
cd /DBMS/work/PG16/contrib/pg_store_plans -- not in contrib
git clone https://github.com/ossc-db/pg_store_plans.git -- done
make && make install
/usr/bin/install: cannot stat '/home/db_admin/postgresql-16.2/contrib/pg_store_plans/pg_store_plans.control': No such file or directory
make: *** [/home/db_admin/postgresql-16.2/src/makefiles/pgxs.mk:239: install] Error 1
The error occurs because the module sits outside the PostgreSQL source tree, so the in-tree install path in pgxs.mk fails. Build against the installed server with PGXS instead:
make USE_PGXS=1 && make install USE_PGXS=1
pg_store_plans.log_analyze = false
pg_store_plans.log_timing = false
pg_store_plans.max=1000
pg_store_plans.track=all
pg_store_plans.plan_format=text
# Consider setting this parameter to a value greater than 0 in your high-load production clusters
pg_store_plans.min_duration=0
pg_store_plans.log_buffers=false
pg_store_plans.log_triggers=false
pg_store_plans.verbose=false
pg_store_plans.save=false
shared_preload_libraries = 'pg_store_plans'
CREATE EXTENSION pg_store_plans;
SELECT current_database(), extname, extversion from pg_extension where extname ='pg_store_plans';
current_database | extname | extversion
------------------+----------------+------------
postgres | pg_store_plans | 1.6
(1 row)
SELECT name, setting from pg_settings where name like 'pg_store_plans%';
name | setting
-----------------------------+---------
pg_store_plans.log_analyze | off
pg_store_plans.log_buffers | off
pg_store_plans.log_timing | off
pg_store_plans.log_triggers | off
pg_store_plans.log_verbose | off
pg_store_plans.max | 1000
pg_store_plans.min_duration | 0
pg_store_plans.plan_format | text
pg_store_plans.save | off
pg_store_plans.track | all
SELECT dbid, count(*) from pg_store_plans group by dbid;
postgres=#
postgres=# SELECT s.queryid, s.query
postgres-# , p.planid
postgres-# , p.plan
postgres-# , p.calls AS "plan calls"
postgres-# , s.calls AS "stmt calls"
postgres-# , round(s.mean_exec_time::numeric/1000,2) AS avg_exec_sec
postgres-# , (s.shared_blks_hit + s.shared_blks_read) shared_blks_io
postgres-# , p.first_call, p.last_call
postgres-# FROM pg_stat_statements s, pg_store_plans p
postgres-# WHERE s.queryid = p.queryid
postgres-# ;
-[ RECORD 1 ]--+--------------------------------------------------------------------------------------------------------------------
queryid | -104275097160788559
query | SELECT name, default_version,installed_version +
| FROM pg_available_extensions +
| WHERE name LIKE $1
planid | 2340386692
plan | Nested Loop Left (cost=0.03..3.36 rows=1 width=100) +
| Join Filter: (e.name = x.extname) +
| -> Function Scan on pg_available_extensions e (cost=0.03..1.27 rows=1 width=96) +
| Filter: (name ~~ '%store%'::text) +
| -> Seq Scan on pg_extension x (cost=0.00..2.04 rows=4 width=68)
plan calls | 1
stmt calls | 1
avg_exec_sec | 0.00
shared_blks_io | 1
first_call | 2024-04-09 08:39:45.65975+09
last_call | 2024-04-09 08:39:45.65975+09
-[ RECORD 2 ]--+--------------------------------------------------------------------------------------------------------------------
queryid | 3917716246224232700
query | SELECT s.query, p.plan, +
| p.calls as "plan calls", s.calls as "stmt calls", +
| p.total_time / p.calls as "time/call", p.first_call, p.last_call +
| FROM pg_stat_statements s +
| JOIN pg_store_plans p USING (queryid) WHERE p.calls < s.calls +
| ORDER BY query ASC, "time/call" DESC
planid | 2879847318
plan | Sort (cost=309.71..313.88 rows=1667 width=104) +
| Sort Key: pg_stat_statements.query, ((pg_store_plans.total_time / (pg_store_plans.calls)::double precision)) DESC+
| -> Merge Join (cost=119.66..220.50 rows=1667 width=104) +
| Merge Cond: (pg_stat_statements.queryid = pg_store_plans.queryid) +
| Join Filter: (pg_store_plans.calls < pg_stat_statements.calls) +
| -> Sort (cost=59.83..62.33 rows=1000 width=48) +
| Sort Key: pg_stat_statements.queryid +
| -> Function Scan on pg_stat_statements (cost=0.00..10.00 rows=1000 width=48) +
| -> Sort (cost=59.83..62.33 rows=1000 width=72) +
| Sort Key: pg_store_plans.queryid +
| -> Function Scan on pg_store_plans (cost=0.00..10.00 rows=1000 width=72)
plan calls | 3
stmt calls | 3
avg_exec_sec | 0.00
shared_blks_io | 22
first_call | 2024-04-09 08:38:26.001959+09
last_call | 2024-04-09 08:39:13.044464+09
postgres=# explain
postgres-# SELECT s.queryid, s.query
postgres-# , p.planid
postgres-# , p.plan
postgres-# , p.calls AS "plan calls"
postgres-# , s.calls AS "stmt calls"
postgres-# , round(s.mean_exec_time::numeric/1000,2) AS avg_exec_sec
postgres-# , (s.shared_blks_hit + s.shared_blks_read) shared_blks_io
postgres-# , p.first_call, p.last_call
postgres-# FROM pg_stat_statements s, pg_store_plans p
postgres-# WHERE s.queryid = p.queryid
postgres-# ;
-[ RECORD 1 ]--------------------------------------------------------------------------------------
QUERY PLAN | Merge Join (cost=119.66..249.66 rows=5000 width=152)
-[ RECORD 2 ]--------------------------------------------------------------------------------------
QUERY PLAN | Merge Cond: (pg_stat_statements.queryid = pg_store_plans.queryid)
-[ RECORD 3 ]--------------------------------------------------------------------------------------
QUERY PLAN | -> Sort (cost=59.83..62.33 rows=1000 width=72)
-[ RECORD 4 ]--------------------------------------------------------------------------------------
QUERY PLAN | Sort Key: pg_stat_statements.queryid
-[ RECORD 5 ]--------------------------------------------------------------------------------------
QUERY PLAN | -> Function Scan on pg_stat_statements (cost=0.00..10.00 rows=1000 width=72)
-[ RECORD 6 ]--------------------------------------------------------------------------------------
QUERY PLAN | -> Sort (cost=59.83..62.33 rows=1000 width=72)
-[ RECORD 7 ]--------------------------------------------------------------------------------------
QUERY PLAN | Sort Key: pg_store_plans.queryid
-[ RECORD 8 ]--------------------------------------------------------------------------------------
QUERY PLAN | -> Function Scan on pg_store_plans (cost=0.00..10.00 rows=1000 width=72)
postgres=#
cd /DBMS/work/PG16/contrib/pg_bigm -- not in contrib; PostgreSQL 16 provides equivalent text-search indexing out of the box: https://www.postgresql.org/docs/current/textsearch-indexes.html
git clone https://github.com/pgbigm/pg_bigm.git
make: *** [/home/db_admin/postgresql-16.2/src/makefiles/pgxs.mk:239: install] Error 1 -- same out-of-tree build error as pg_store_plans; build with PGXS instead:
make USE_PGXS=1 && make install USE_PGXS=1
cd /DBMS/work/PG16/contrib/pg_cron -- not in contrib
git clone https://github.com/citusdata/pg_cron.git -- done
https://github.com/citusdata/pg_cron
cd pg_cron
# Ensure pg_config is in your path, e.g.
export PATH=/usr/pgsql-16/bin:$PATH
make && sudo PATH=$PATH make install
--Register the extension modules in the parameter file (postgresql.conf) as below and restart the server.
shared_preload_libraries = 'pg_buffercache,passwordcheck,pg_stat_statements,pg_hint_plan,pg_store_plans,pg_cron'
----------- The content below is taken from the pg_cron manual; read through as needed.
Setting up pg_cron
To start the pg_cron background worker when PostgreSQL starts, you need to add pg_cron to shared_preload_libraries in postgresql.conf. Note that pg_cron does not run any jobs as long as the server is in hot standby mode, but it automatically starts when the server is promoted.
# add to postgresql.conf
# required to load pg_cron background worker on start-up
shared_preload_libraries = 'pg_cron'
By default, the pg_cron background worker expects its metadata tables to be created in the "postgres" database. However, you can configure this by setting the cron.database_name configuration parameter in postgresql.conf.
# add to postgresql.conf
# optionally, specify the database in which the pg_cron background worker should run (defaults to postgres)
cron.database_name = 'postgres'
pg_cron may only be installed to one database in a cluster. If you need to run jobs in multiple databases, use cron.schedule_in_database().
-- Vacuum every day at 10:00am (GMT)
SELECT cron.schedule('nightly-vacuum', '0 10 * * *', 'VACUUM');
schedule
----------
43
-- Change to vacuum at 3:00am (GMT)
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM');
schedule
----------
43
-- Stop scheduling jobs
SELECT cron.unschedule('nightly-vacuum' );
unschedule
------------
t
SELECT cron.unschedule(42);
unschedule
------------
t
-- Vacuum every Sunday at 4:00am (GMT) in a database other than the one pg_cron is installed in
SELECT cron.schedule_in_database('weekly-vacuum', '0 4 * * 0', 'VACUUM', 'some_other_database');
schedule
----------
44
-- Call a stored procedure every 5 seconds
SELECT cron.schedule('process-updates', '5 seconds', 'CALL process_updates()');
-- Process payroll at 12:00 of the last day of each month
SELECT cron.schedule('process-payroll', '0 12 $ * *', 'CALL process_payroll()');
┌───────────── min (0 - 59)
│ ┌────────────── hour (0 - 23)
│ │ ┌─────────────── day of month (1 - 31) or last day of the month ($)
│ │ │ ┌──────────────── month (1 - 12)
│ │ │ │ ┌───────────────── day of week (0 - 6) (0 to 6 are Sunday to
│ │ │ │ │ Saturday, or use names; 7 is also Sunday)
│ │ │ │ │
│ │ │ │ │
* * * * *
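The 5-field layout above can be checked mechanically. A tiny illustrative helper (hypothetical, not part of pg_cron) that counts the fields of a schedule string:

```shell
# Count whitespace-separated fields in a cron schedule string.
# Globbing is disabled in the subshell so '*' stays literal.
fields() { ( set -f; set -- $1; echo $# ); }

fields '0 10 * * *'   # classic 5-field schedule -> 5
fields '0 12 $ * *'   # '$' = last day of month (pg_cron extension) -> 5
```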
Previously pg_cron could only use GMT time, but now you can adapt your time by setting cron.timezone in postgresql.conf.
# add to postgresql.conf
# optionally, specify the timezone in which the pg_cron background worker should run (defaults to GMT). E.g:
cron.timezone = 'PRC'
After restarting PostgreSQL, you can create the pg_cron functions and metadata tables using CREATE EXTENSION pg_cron.
-- run as superuser:
CREATE EXTENSION pg_cron;
-- optionally, grant usage to regular users:
GRANT USAGE ON SCHEMA cron TO marco;
Ensuring pg_cron can start jobs
Important: By default, pg_cron uses libpq to open a new connection to the local database, which needs to be allowed by pg_hba.conf. It may be necessary to enable trust authentication for connections coming from localhost for the user running the cron job, or you can add the password to a .pgpass file, which libpq will use when opening a connection.
You can also use a unix domain socket directory as the hostname and enable trust authentication for local connections in pg_hba.conf, which is normally safe:
# Connect via a unix domain socket:
cron.host = '/tmp'
# Can also be an empty string to look for the default directory:
cron.host = ''
Alternatively, pg_cron can be configured to use background workers. In that case, the number of concurrent jobs is limited by the max_worker_processes setting, so you may need to raise that.
# Schedule jobs via background workers instead of localhost connections
cron.use_background_workers = on
# Increase the number of available background workers from the default of 8
max_worker_processes = 20
For security, jobs are executed in the database in which the cron.schedule function is called with the same permissions as the current user. In addition, users are only able to see their own jobs in the cron.job table.
Viewing job run details
You can view the status of running and recently completed job runs in the cron.job_run_details:
select * from cron.job_run_details order by start_time desc limit 5;
┌───────┬───────┬─────────┬──────────┬──────────┬───────────────────┬───────────┬──────────────────┬───────────────────────────────┬───────────────────────────────┐
│ jobid │ runid │ job_pid │ database │ username │ command │ status │ return_message │ start_time │ end_time │
├───────┼───────┼─────────┼──────────┼──────────┼───────────────────┼───────────┼──────────────────┼───────────────────────────────┼───────────────────────────────┤
│ 10 │ 4328 │ 2610 │ postgres │ marco │ select process() │ succeeded │ SELECT 1 │ 2023-02-07 09:30:00.098164+01 │ 2023-02-07 09:30:00.130729+01 │
│ 10 │ 4327 │ 2609 │ postgres │ marco │ select process() │ succeeded │ SELECT 1 │ 2023-02-07 09:29:00.015168+01 │ 2023-02-07 09:29:00.832308+01 │
│ 10 │ 4321 │ 2603 │ postgres │ marco │ select process() │ succeeded │ SELECT 1 │ 2023-02-07 09:28:00.011965+01 │ 2023-02-07 09:28:01.420901+01 │
│ 10 │ 4320 │ 2602 │ postgres │ marco │ select process() │ failed │ server restarted │ 2023-02-07 09:27:00.011833+01 │ 2023-02-07 09:27:00.72121+01 │
│ 9 │ 4320 │ 2602 │ postgres │ marco │ select do_stuff() │ failed │ job canceled │ 2023-02-07 09:26:00.011833+01 │ 2023-02-07 09:26:00.22121+01 │
└───────┴───────┴─────────┴──────────┴──────────┴───────────────────┴───────────┴──────────────────┴───────────────────────────────┴───────────────────────────────┘
(5 rows)
The records in cron.job_run_details are not cleaned automatically, but every user that can schedule cron jobs also has permission to delete their own cron.job_run_details records.
Especially when you have jobs that run every few seconds, it can be a good idea to clean up regularly, which can easily be done using pg_cron itself:
-- Delete old cron.job_run_details records of the current user every day at noon
SELECT cron.schedule('delete-job-run-details', '0 12 * * *', $$DELETE FROM cron.job_run_details WHERE end_time < now() - interval '7 days'$$);
If you do not want to use cron.job_run_details at all, then you can add cron.log_run = off to postgresql.conf.
Example use cases
Articles showing possible ways of using pg_cron:
Auto-partitioning using pg_partman
Computing rollups in an analytical dashboard
Deleting old data, vacuum
Feeding cats
Routinely invoking a function
Postgres as a cron server