---
######################################################################
# File : pigsty.yml
# Desc : Pigsty example configuration
# Note : Sandbox demo configuration (4-node)
# Link : https://pigsty.cc/zh/docs/config/
# Ctime : 2020-05-22
# Mtime : 2021-07-07
# Copyright (C) 2018-2021 Ruohang Feng ([email protected])
######################################################################
######################################################################
# Sandbox (1-node) #
#====================================================================#
# admin user : vagrant (nopass ssh & sudo already set) #
# 1 node meta : 10.10.10.10 ---> pg-meta-1 (2 Core | 4GB) #
# 1 vip pg-meta : 10.10.10.2 ---> pg-meta (10.10.10.10) #
#====================================================================#
# Sandbox (4-node) #
#====================================================================#
# admin user : vagrant (nopass ssh & sudo already set) #
# 1. meta : 10.10.10.10 (2 Core | 4GB) pg-meta #
# 2. node-1 : 10.10.10.11 (1 Core | 1GB) pg-test-1 #
# 3. node-2 : 10.10.10.12 (1 Core | 1GB) pg-test-2 #
# 4. node-3 : 10.10.10.13 (1 Core | 1GB) pg-test-3 #
# (replace these IPs if your 4-node env uses different addresses) #
# VIP: #
# pg-meta ---> 10.10.10.2 ---> 10.10.10.10 #
# pg-test ---> 10.10.10.3 ---> 10.10.10.1{1,2,3} #
######################################################################
all: # top-level namespace
#==================================================================#
# Clusters #
#==================================================================#
# postgres database clusters are defined as kv pairs in `all.children`
# where the key is the cluster name and the value is an object consisting
# of cluster members (hosts) and cluster-specific variables (vars)
# meta nodes are defined in the special group "meta" with `meta_node=true`
children:
#----------------------------------#
# meta node (admin controller) #
#----------------------------------#
meta: # special group 'meta' marks admin nodes
vars: # with variable 'meta_node = true'
meta_node: true # and sets their ansible_group_priority to 99
ansible_group_priority: 99 # which overwrites other group definitions
hosts: # add meta nodes here
10.10.10.10: {} # 10.10.10.10 is the default meta node
#----------------------------------#
# cluster: pg-meta (on meta node) #
#----------------------------------#
# pg-meta is the default SINGLE-NODE pgsql cluster deployed on meta node (10.10.10.10)
# if you have n meta nodes (n > 1), consider deploying pg-meta as an n-node cluster too
pg-meta: # required, ansible group name and pgsql cluster name, should be unique within the environment
hosts: # `<cluster>.hosts` holds instances definition of this cluster
10.10.10.10: # INSTANCE-LEVEL CONFIG: ip address is the key. values are instance level config entries (dict)
pg_seq: 1 # required, unique identity parameter (+integer) among pg_cluster
pg_role: primary # required, pg_role is mandatory identity parameter, primary|replica|offline|delayed
pg_offline_query: true # instance with `pg_offline_query: true` will take offline traffic (saga, etl,...)
# some variables can be overwritten on instance level. e.g: pg_upstream, pg_weight, etc...
#---------------
# mandatory # all configuration above (`ip`, `pg_seq`, `pg_role`) and `pg_cluster` are mandatory
#---------------
vars: # `<cluster>.vars` holds CLUSTER LEVEL CONFIG of this pgsql cluster
pg_cluster: pg-meta # required, pgsql cluster name, unique among clusters, used as the namespace of cluster resources
#---------------
# optional # all configuration below are OPTIONAL for a pgsql cluster (Overwrite global default)
#---------------
pg_version: 13 # pgsql version to be installed (use global version if missing)
node_tune: tiny # node optimization profile: {oltp|olap|crit|tiny}, use tiny for vm sandbox
pg_conf: tiny.yml # pgsql template: {oltp|olap|crit|tiny}, use tiny for sandbox
patroni_mode: pause # entering patroni pause mode after bootstrap {default|pause|remove}
patroni_watchdog_mode: off # disable patroni watchdog on meta node {off|automatic|required}
pg_lc_ctype: en_US.UTF8 # use en_US.UTF8 locale for i18n char support (required by `pg_trgm`)
#---------------
# biz databases # Defining Business Databases (Optional)
#---------------
pg_databases: # define business databases on this cluster, array of database definition
# define the default `meta` database
- name: meta # required, `name` is the only mandatory field of a database definition
# baseline: meta/schema.sql # optional, database sql baseline path, (relative path among ansible search path, e.g files/)
# owner: postgres # optional, database owner, postgres by default
# template: template1 # optional, which template to use, template1 by default
# encoding: UTF8 # optional, database encoding, UTF8 by default. (MUST same as template database)
# locale: C # optional, database locale, C by default. (MUST same as template database)
# lc_collate: C # optional, database collate, C by default. (MUST same as template database)
# lc_ctype: C # optional, database ctype, C by default. (MUST same as template database)
# tablespace: pg_default # optional, default tablespace, 'pg_default' by default.
# allowconn: true # optional, allow connection, true by default. false will disable connect at all
# revokeconn: false # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
# pgbouncer: true # optional, add this database to pgbouncer database list? true by default
comment: pigsty meta database # optional, comment string for this database
connlimit: -1 # optional, database connection limit, default -1 disable limit
schemas: [pigsty] # optional, additional schemas to be created, array of schema names
extensions: # optional, additional extensions to be installed: array of schema definition `{name,schema}`
- {name: adminpack, schema: pg_catalog} # install adminpack to pg_catalog and install postgis to public
- {name: postgis, schema: public} # if schema is omitted, extension will be installed according to search_path.
# define an additional database named grafana & prometheus (optional)
- { name: grafana, owner: dbuser_grafana , revokeconn: true , comment: grafana primary database }
- { name: prometheus, owner: dbuser_prometheus , revokeconn: true , comment: prometheus primary database }
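# NOTE (illustrative only): a database entry like `meta` above is rendered by pigsty's
# provisioning templates into SQL roughly of this shape (not the verbatim generated DDL):
#   CREATE DATABASE meta;
#   COMMENT ON DATABASE meta IS 'pigsty meta database';
#   ALTER DATABASE meta CONNECTION LIMIT -1;
#   CREATE SCHEMA IF NOT EXISTS pigsty;
#   CREATE EXTENSION IF NOT EXISTS adminpack WITH SCHEMA pg_catalog;
#   CREATE EXTENSION IF NOT EXISTS postgis WITH SCHEMA public;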
#---------------
# biz users # Defining Business Users (Optional)
#---------------
pg_users: # define business users/roles on this cluster, array of user definition
# define admin user for meta database (this user is used for pigsty app deployment by default)
- name: dbuser_meta # required, `name` is the only mandatory field of a user definition
password: md5d3d10d8cad606308bdb180148bf663e1 # md5 salted password of 'DBUser.Meta'
# optional, plain text and md5 password are both acceptable (prefixed with `md5`)
login: true # optional, can login, true by default (new biz ROLE should be false)
superuser: false # optional, is superuser? false by default
createdb: false # optional, can create database? false by default
createrole: false # optional, can create role? false by default
inherit: true # optional, can this role use inherited privileges? true by default
replication: false # optional, can this role do replication? false by default
bypassrls: false # optional, can this role bypass row level security? false by default
pgbouncer: true # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
connlimit: -1 # optional, user connection limit, default -1 disable limit
expire_in: 3650 # optional, now + n days when this role is expired (OVERWRITE expire_at)
expire_at: '2030-12-31' # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
comment: pigsty admin user # optional, comment string for this user/role
roles: [dbrole_admin] # optional, roles this user belongs to; default roles are dbrole_{admin,readonly,readwrite,offline}
parameters: {} # optional, role level parameters with `ALTER ROLE SET`
# search_path: public # key-value config parameters per postgresql documentation (e.g. set a default search_path for this role)
- {name: dbuser_view , password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
# define additional business users for prometheus & grafana (optional)
- {name: dbuser_grafana , password: DBUser.Grafana ,pgbouncer: true ,roles: [dbrole_admin], comment: admin user for grafana database }
- {name: dbuser_prometheus , password: DBUser.Prometheus ,pgbouncer: true ,roles: [dbrole_admin], comment: admin user for prometheus database }
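# NOTE (illustrative only): a user entry like `dbuser_meta` above roughly corresponds to
# SQL of this shape (the exact DDL is generated by pigsty's templates; VALID UNTIL is
# computed from expire_in = now + 3650 days, which overrides expire_at):
#   CREATE USER dbuser_meta LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT
#     NOREPLICATION NOBYPASSRLS CONNECTION LIMIT -1
#     PASSWORD 'md5d3d10d8cad606308bdb180148bf663e1';
#   COMMENT ON ROLE dbuser_meta IS 'pigsty admin user';
#   GRANT dbrole_admin TO dbuser_meta;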
#---------------
# hba rules # Defining extra HBA rules on this cluster (Optional)
#---------------
pg_hba_rules_extra: # Extra HBA rules to be installed on this cluster
- title: reject grafana non-local access # required, rule title (used as hba description & comment string)
role: common # required, which roles will be applied? ('common' applies to all roles)
rules: # required, rule content: array of hba string
- local grafana dbuser_grafana md5
- host grafana dbuser_grafana 127.0.0.1/32 md5
- host grafana dbuser_grafana 10.10.10.10/32 md5
vip_mode: l2 # setup a level-2 vip for cluster pg-meta
vip_address: 10.10.10.2 # virtual ip address that binds to primary instance of cluster pg-meta
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
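# NOTE (illustrative only): once pg-meta is provisioned, the L2 VIP 10.10.10.2 follows the
# primary instance, so a quick connectivity check (assuming the defaults above) could be:
#   psql postgres://dbuser_meta:[email protected]:5432/meta -c 'SELECT 1'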
#----------------------------------#
# cluster: pg-test (4-node demo) #
#----------------------------------#
# pg-test ---> 10.10.10.3 ---> 10.10.10.1{1,2,3}
# this cluster is used by the 4-node sandbox demo (remove or comment it out for the 1-node sandbox)
pg-test: # define the new 3-node cluster pg-test
hosts:
10.10.10.11: {pg_seq: 1, pg_role: primary} # primary instance, leader of cluster
10.10.10.12: {pg_seq: 2, pg_role: replica} # replica instance, follower of leader
10.10.10.13: {pg_seq: 3, pg_role: offline} # offline instance, replica that allow offline access
vars:
pg_cluster: pg-test # define actual cluster name
pg_version: 14 # test postgresql 14 with pg-test cluster
pg_packages: # overwrite postgres packages to be installed
- postgresql14* pgbouncer patroni pg_exporter pgbadger patroni-consul patroni-etcd pg_activity
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography
pg_extensions: [] # no 3rd-party extensions are available for postgresql 14 beta yet
pg_users: [{name: test , password: test ,pgbouncer: true ,roles: [dbrole_admin], comment: test user for test database cluster }]
pg_databases: [{ name: test}] # create a database and user named 'test'
vip_mode: l2 # enable/disable vip (require members in same LAN)
vip_address: 10.10.10.3 # virtual ip address for this cluster
vip_cidrmask: 8 # cidr network mask length
vip_interface: eth1 # interface to add virtual ip
#---------------
# service # Defining Extra Service (Optional)
#---------------
pg_services_extra: # extra services in addition to pg_services, array of service definition
# standby service will route {ip|name}:5435 to a sync replica's postgres (5435->5432 standby)
- name: standby # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
src_ip: "*" # required, service bind ip address, `*` for all ip, `vip` for cluster `vip_address`
src_port: 5435 # required, service exposed port (work as kubernetes service node port mode)
dst_port: postgres # optional, destination port, postgres|pgbouncer|<port_number> , pgbouncer(6432) by default
check_method: http # optional, health check method: http is the only available method for now
check_port: patroni # optional, health check port: patroni|pg_exporter|<port_number> , patroni(8008) by default
check_url: /read-only?lag=0 # optional, health check url path, / by default
check_code: 200 # optional, health check expected http code, 200 by default
selector: "[]" # required, JMESPath to filter inventory ()
selector_backup: "[? pg_role == `primary`]" # primary used as backup server for standby service
haproxy: # optional, adhoc parameters for haproxy service provider (vip_l4 is another service provider)
maxconn: 3000 # optional, max allowed front-end connection
balance: roundrobin # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
default_server_options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
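# NOTE (illustrative only): this service would be exposed as pg-test-standby on port 5435
# of every cluster member (and the VIP 10.10.10.3); e.g. with the `test` user/database
# defined above:
#   psql postgres://test:[email protected]:5435/test -c 'SELECT pg_is_in_recovery()'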
#==================================================================#
# Globals #
#==================================================================#
vars:
#------------------------------------------------------------------------------
# CONNECTION PARAMETERS
#------------------------------------------------------------------------------
# this section defines connection parameters (How to perform ssh sudo on nodes)
# ansible_user: vagrant # admin user with ssh access and sudo privilege
# ansible_password: <remote ssh pass> # admin user's ssh password (sshpass required, not recommended)
# ansible_become_pass: <remote sudo password> # admin user's sudo password (security breach, not recommended)
proxy_env: # global proxy env when downloading packages
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com"
# http_proxy: # set your proxy here: e.g http://user:[email protected]
# https_proxy: # set your proxy here: e.g http://user:[email protected]
# all_proxy: # set your proxy here: e.g http://user:[email protected]
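# NOTE (illustrative only): with nopass ssh & sudo in place, connectivity can be verified
# from the admin node with an ad-hoc ansible command against this inventory, e.g.:
#   ansible all -b -a 'whoami'        # every node should answer with 'root'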
#------------------------------------------------------------------------------
# REPO PROVISION
#------------------------------------------------------------------------------
# this section describes pigsty local yum repo
# - repo basic - #
repo_enabled: true # build local yum repo on meta nodes?
repo_name: pigsty # local repo name
repo_address: yum.pigsty # repo external address (ip:port or url)
repo_port: 80 # repo listen port, must be consistent with repo_address
repo_home: /www # default repo dir location
repo_rebuild: false # force re-download packages
repo_remove: true # remove existing repos
# - where to download - #
repo_upstreams:
- name: base
description: CentOS-$releasever - Base
gpgcheck: no
baseurl:
- https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/ # tuna
- http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/ # aliyun
- http://mirror.centos.org/centos/$releasever/os/$basearch/ # official
- name: updates
description: CentOS-$releasever - Updates
gpgcheck: no
baseurl:
- https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/ # tuna
- http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/ # aliyun
- http://mirror.centos.org/centos/$releasever/updates/$basearch/ # official
- name: extras
description: CentOS-$releasever - Extras
baseurl:
- https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/ # tuna
- http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
- http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
- http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/ # aliyun
- http://mirror.centos.org/centos/$releasever/extras/$basearch/ # official
gpgcheck: no
- name: epel
description: CentOS $releasever - epel
gpgcheck: no
baseurl:
- https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/$basearch # tuna
- http://mirrors.aliyun.com/epel/$releasever/$basearch # aliyun
- http://download.fedoraproject.org/pub/epel/$releasever/$basearch # official
- name: grafana
description: Grafana
enabled: yes
gpgcheck: no
baseurl:
- https://mirrors.tuna.tsinghua.edu.cn/grafana/yum/rpm # tuna mirror
- https://packages.grafana.com/oss/rpm # official
- name: prometheus
description: Prometheus and exporters
gpgcheck: no
baseurl: https://packagecloud.io/prometheus-rpm/release/el/$releasever/$basearch # no other mirrors, quite slow
- name: pgdg-common
description: PostgreSQL common RPMs for RHEL/CentOS $releasever - $basearch
gpgcheck: no
baseurl:
- http://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch # tuna
- https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch # official
- name: pgdg13
description: PostgreSQL 13 for RHEL/CentOS $releasever - $basearch
gpgcheck: no
baseurl:
- https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch # tuna
- https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch # official
- name: pgdg14-beta
description: PostgreSQL 14 beta for RHEL/CentOS $releasever - $basearch
enabled: yes
gpgcheck: no
baseurl:
- https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/testing/14/redhat/rhel-$releasever-$basearch # tuna
- https://download.postgresql.org/pub/repos/yum/testing/14/redhat/rhel-$releasever-$basearch # official
- name: centos-sclo
description: CentOS-$releasever - SCLo
gpgcheck: no
baseurl: # mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=$releasever&repo=sclo-sclo
- http://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/sclo/
- http://repo.virtualhosting.hk/centos/$releasever/sclo/$basearch/sclo/
- name: centos-sclo-rh
description: CentOS-$releasever - SCLo rh
gpgcheck: no
baseurl: # mirrorlist: http://mirrorlist.centos.org?arch=$basearch&release=7&repo=sclo-rh
- http://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/rh/
- http://repo.virtualhosting.hk/centos/$releasever/sclo/$basearch/rh/
- name: nginx
description: Nginx Official Yum Repo
skip_if_unavailable: true
gpgcheck: no
baseurl: http://nginx.org/packages/centos/$releasever/$basearch/
- name: haproxy
description: Copr repo for haproxy
skip_if_unavailable: true
gpgcheck: no
baseurl: https://download.copr.fedorainfracloud.org/results/roidelapluie/haproxy/epel-$releasever-$basearch/
# for latest consul & kubernetes
- name: harbottle
description: Copr repo for main owned by harbottle
skip_if_unavailable: true
gpgcheck: no
baseurl: https://download.copr.fedorainfracloud.org/results/harbottle/main/epel-$releasever-$basearch/
# - what to download - #
repo_packages:
# repo bootstrap packages
- epel-release nginx wget yum-utils yum createrepo sshpass unzip # bootstrap packages
# node basic packages
- ntp chrony uuid lz4 nc pv jq vim-enhanced make patch bash lsof wget git tuned # basic system util
- readline zlib openssl libyaml libxml2 libxslt perl-ExtUtils-Embed ca-certificates # basic pg dependency
- numactl grubby sysstat dstat iotop bind-utils net-tools tcpdump socat ipvsadm telnet # system utils
# dcs & monitor packages
- grafana prometheus2 pushgateway alertmanager # monitor and ui
- node_exporter postgres_exporter nginx_exporter blackbox_exporter # exporter
- consul consul_exporter consul-template etcd # dcs
# python3 dependencies
- ansible python python-pip python-psycopg2 audit # ansible & python
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul # python3
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography # patroni extra deps
# proxy and load balancer
- haproxy keepalived dnsmasq # proxy and dns
# postgres common Packages
- patroni patroni-consul patroni-etcd pgbouncer pg_cli pgbadger pg_activity # major components
- pgcenter boxinfo check_postgres emaj pgbconsole pg_bloat_check pgquarrel # other common utils
- barman barman-cli pgloader pgFormatter pitrery pspg pgxnclient PyGreSQL pgadmin4 tail_n_mail
# postgres 13 packages
- postgresql13* # postgresql 13 kernel
- postgresql14* # postgresql 14 kernel (beta)
- postgresql13* postgis31* citus_13 timescaledb_13 pg_repack13 pg_squeeze13 # postgresql 13 extensions
- pg_qualstats13 pg_stat_kcache13 system_stats_13 bgw_replstatus13 # stats extensions
- plr13 plsh13 plpgsql_check_13 plproxy13 pldebugger13 # PL extensions
- hdfs_fdw_13 mongo_fdw13 mysql_fdw_13 ogr_fdw13 redis_fdw_13 pgbouncer_fdw13 # FDW extensions
- wal2json13 count_distinct13 ddlx_13 geoip13 orafce13 # MISC extensions
- rum_13 hypopg_13 ip4r13 jsquery_13 logerrors_13 periods_13 pg_auto_failover_13 pg_catcheck13
- pg_fkpart13 pg_jobmon13 pg_partman13 pg_prioritize_13 pg_track_settings13 pgaudit15_13
- pgcryptokey13 pgexportdoc13 pgimportdoc13 pgmemcache-13 pgmp13 pgq-13
- pguint13 pguri13 prefix13 safeupdate_13 semver13 table_version13 tdigest13
# build & devel packages (optional)
- gcc gcc-c++ clang coreutils diffutils rpm-build rpm-devel rpmlint rpmdevtools
- zlib-devel openssl-libs openssl-devel pam-devel libxml2-devel libxslt-devel openldap-devel systemd-devel tcl-devel python-devel
repo_url_packages:
- https://github.com/Vonng/pg_exporter/releases/download/v0.4.0beta/pg_exporter-0.4.0-1.el7.x86_64.rpm # pg_exporter rpm
- https://github.com/cybertec-postgresql/vip-manager/releases/download/v1.0/vip-manager_1.0-1_amd64.rpm # vip manager
- https://github.com/prometheus/node_exporter/releases/download/v1.1.2/node_exporter-1.1.2.linux-amd64.tar.gz # monitor binaries
- https://github.com/Vonng/pg_exporter/releases/download/v0.4.0beta/pg_exporter_v0.4.0_linux-amd64.tar.gz
- https://github.com/grafana/loki/releases/download/v2.2.1/loki-linux-amd64.zip
- https://github.com/grafana/loki/releases/download/v2.2.1/promtail-linux-amd64.zip
- https://github.com/grafana/loki/releases/download/v2.2.1/logcli-linux-amd64.zip
- https://github.com/grafana/loki/releases/download/v2.2.1/loki-canary-linux-amd64.zip
# - https://github.com/Vonng/pg_exporter/releases/download/v0.3.2/pg_exporter-0.3.2-1.el7.x86_64.rpm
# - https://github.com/cybertec-postgresql/vip-manager/releases/download/v0.6/vip-manager_0.6-1_amd64.rpm
# - https://github.com/Vonng/pg_exporter/releases/download/v0.3.2/pg_exporter_v0.3.2_linux-amd64.tar.gz
# mirror in mainland china (use commented packages to install from official site)
# - http://pigsty-1304147732.cos.accelerate.myqcloud.com/pkg/pg_exporter-0.3.2-1.el7.x86_64.rpm
# - http://pigsty-1304147732.cos.accelerate.myqcloud.com/pkg/vip-manager_0.6-1_amd64.rpm
# - http://pigsty-1304147732.cos.accelerate.myqcloud.com/pkg/polysh-0.4-1.noarch.rpm
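# NOTE (illustrative only): once the local repo is built on the meta node, its repo file is
# served by nginx and can be fetched from any node (given the yum.pigsty DNS record below):
#   curl -s http://yum.pigsty/pigsty.repo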
#------------------------------------------------------------------------------
# NODE PROVISION
#------------------------------------------------------------------------------
# this section defines how to provision nodes
# nodename: # if defined, node's hostname will be overwritten
# meta_node: false # node with meta_node will be marked as admin node
# - node dns - #
node_dns_hosts: # static dns records in /etc/hosts
- 10.10.10.10 yum.pigsty
node_dns_server: add # add (default) | none (skip) | overwrite (remove old settings)
node_dns_servers: # dynamic nameserver in /etc/resolv.conf
- 10.10.10.10
node_dns_options: # dns resolv options
- options single-request-reopen timeout:1 rotate
- domain service.consul
# - node repo - #
node_repo_method: local # none|local|public (use local repo for production env)
node_repo_remove: true # whether remove existing repo
node_local_repo_url: # local repo url (if method=local, make sure firewall is configured or disabled)
- http://yum.pigsty/pigsty.repo
# - node packages - #
node_packages: # common packages for all nodes
- wget,yum-utils,sshpass,ntp,chrony,tuned,uuid,lz4,vim-minimal,make,patch,bash,lsof,unzip,git,readline,zlib,openssl
- numactl,grubby,sysstat,dstat,iotop,bind-utils,net-tools,tcpdump,socat,ipvsadm,telnet,tuned,pv,jq
- python3,python3-psycopg2,python36-requests,python3-etcd,python3-consul
- python36-urllib3,python36-idna,python36-pyOpenSSL,python36-cryptography
- node_exporter,consul,consul-template,etcd,haproxy,keepalived,vip-manager
node_extra_packages: # extra packages for all nodes
- patroni,patroni-consul,patroni-etcd,pgbouncer,pgbadger,pg_activity
node_meta_packages: # packages for meta nodes only
- grafana,prometheus2,alertmanager,nginx_exporter,blackbox_exporter,pushgateway
- dnsmasq,nginx,ansible,pgbadger,python-psycopg2
- gcc,gcc-c++,clang,coreutils,diffutils,rpm-build,rpm-devel,rpmlint,rpmdevtools
- zlib-devel,openssl-libs,openssl-devel,pam-devel,libxml2-devel,libxslt-devel,openldap-devel,systemd-devel,tcl-devel,python-devel
# - node features - #
node_disable_numa: false # disable numa, important for production database, reboot required
node_disable_swap: false # disable swap, important for production database
node_disable_firewall: true # disable firewall (required if using kubernetes)
node_disable_selinux: true # disable selinux (required if using kubernetes)
node_static_network: true # keep dns resolver settings after reboot
node_disk_prefetch: false # setup disk prefetch on HDD to increase performance
# - node kernel modules - #
node_kernel_modules: [softdog, br_netfilter, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh]
# - node tuned - #
node_tune: tiny # install and activate tuned profile: none|oltp|olap|crit|tiny
node_sysctl_params: {} # set additional sysctl parameters, k:v format
# net.bridge.bridge-nf-call-iptables: 1 # example sysctl parameters
# - node admin - #
node_admin_setup: true # create a default admin user defined by `node_admin_*` ?
node_admin_uid: 88 # uid and gid for this admin user
node_admin_username: dba # name of this admin user, dba by default
node_admin_ssh_exchange: true # exchange admin ssh key among each pgsql cluster ?
node_admin_pk_current: true # add current user's ~/.ssh/id_rsa.pub to admin authorized_keys ?
node_admin_pks: # ssh public keys to be added to admin user (REPLACE WITH YOURS!)
- 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7IMAMNavYtWwzAJajKqwdn3ar5BhvcwCnBTxxEkXhGlCO2vfgosSAQMEflfgvkiI5nM1HIFQ8KINlx1XLO7SdL5KdInG5LIJjAFh0pujS4kNCT9a5IGvSq1BrzGqhbEcwWYdju1ZPYBcJm/MG+JD0dYCh8vfrYB/cYMD0SOmNkQ== [email protected]'
# - node ntp - #
node_ntp_service: ntp # ntp service provider: ntp|chrony
node_ntp_config: true # config ntp service? false will leave it with system default
node_timezone: Asia/Shanghai # default node timezone
node_ntp_servers: # default NTP servers
- pool cn.pool.ntp.org iburst
- pool pool.ntp.org iburst
- pool time.pool.aliyun.com iburst
- server 10.10.10.10 iburst
- server ntp.tuna.tsinghua.edu.cn iburst
#------------------------------------------------------------------------------
# META PROVISION
#------------------------------------------------------------------------------
# - ca - #
ca_method: create # create|copy|recreate
ca_subject: "/CN=root-ca" # self-signed CA subject
ca_homedir: /ca # ca cert directory
ca_cert: ca.crt # ca public key/cert
ca_key: ca.key # ca private key
# - nginx - #
nginx_upstream: # domain names that will be used for accessing pigsty services
# some services can only be accessed via the correct domain name (e.g. consul)
- { name: home, host: pigsty, url: "127.0.0.1:3000" } # default -> grafana (3000)
- { name: consul, host: c.pigsty, url: "127.0.0.1:8500" } # pigsty consul UI (8500) (domain required)
- { name: grafana, host: g.pigsty, url: "127.0.0.1:3000" } # pigsty grafana (3000)
- { name: prometheus, host: p.pigsty, url: "127.0.0.1:9090" } # pigsty prometheus (9090)
- { name: alertmanager, host: a.pigsty, url: "127.0.0.1:9093" } # pigsty alertmanager (9093)
- { name: haproxy, host: h.pigsty, url: "127.0.0.1:9091" } # pigsty haproxy admin page (9091)
- { name: server, host: s.pigsty, url: "127.0.0.1:9633" } # pigsty server gui (9633)
# - nameserver - #
dns_records: # dynamic dns record resolved by dnsmasq
- 10.10.10.2 pg-meta # sandbox vip for pg-meta
- 10.10.10.10 meta-1 # sandbox node meta-1 (node-0)
- 10.10.10.10 pigsty
- 10.10.10.10 y.pigsty yum.pigsty
- 10.10.10.10 c.pigsty consul.pigsty
- 10.10.10.10 g.pigsty grafana.pigsty
- 10.10.10.10 p.pigsty prometheus.pigsty
- 10.10.10.10 a.pigsty alertmanager.pigsty
- 10.10.10.10 n.pigsty ntp.pigsty
- 10.10.10.10 h.pigsty haproxy.pigsty
# - prometheus - #
prometheus_data_dir: /data/prometheus/data # prometheus data dir
prometheus_options: '--storage.tsdb.retention=30d'
prometheus_reload: false # reload prometheus instead of recreate it
prometheus_sd_method: static # service discovery method: static|consul|etcd
prometheus_scrape_interval: 10s # global scrape & evaluation interval
prometheus_scrape_timeout: 8s # scrape timeout
prometheus_sd_interval: 10s # service discovery refresh interval
# - grafana - #
grafana_endpoint: http://10.10.10.10:3000 # grafana endpoint url
grafana_admin_username: admin # default grafana admin username
grafana_admin_password: pigsty # default grafana admin password
grafana_database: sqlite3 # default grafana database type: sqlite3|postgres, sqlite3 by default
# if postgres is used, url must be specified. The user is pre-defined in pg-meta.pg_users
grafana_plugin: install # none|install, none will skip plugin installation
grafana_cache: /www/pigsty/plugins.tgz # path to grafana plugins cache tarball
grafana_plugins: [] # plugins that will be downloaded via grafana-cli
grafana_git_plugins: [] # plugins that will be downloaded via git
# - loki - #
loki_clean: false # whether remove existing loki data
loki_data_dir: /data/loki # default loki data dir
#------------------------------------------------------------------------------
# DCS PROVISION
#------------------------------------------------------------------------------
service_registry: consul # where to register services: none | consul | etcd | both
dcs_type: consul # consul | etcd | both
dcs_name: pigsty # consul dc name | etcd initial cluster token
dcs_servers: # dcs server dict in name:ip format
meta-1: 10.10.10.10 # you could use existing dcs cluster
# meta-2: 10.10.10.11 # hosts with their IP listed here will be initialized as dcs servers
# meta-3: 10.10.10.12 # 3 or 5 dcs servers are recommended for production environments
dcs_exists_action: clean # abort|skip|clean if dcs server already exists
dcs_disable_purge: false # set to true to disable purge functionality for good (force dcs_exists_action = abort)
consul_data_dir: /var/lib/consul # consul data dir (/var/lib/consul by default)
etcd_data_dir: /var/lib/etcd # etcd data dir (/var/lib/etcd by default)
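# NOTE (illustrative only): with dcs_type: consul and meta-1 as the only dcs server, cluster
# membership can be inspected on any node after provisioning with the standard consul CLI:
#   consul members        # the meta node should show as server, other nodes as clients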
#------------------------------------------------------------------------------
# POSTGRES INSTALLATION
#------------------------------------------------------------------------------
# - dbsu - #
pg_dbsu: postgres # os user for database, postgres by default (unwise to change it)
pg_dbsu_uid: 26 # os dbsu uid and gid, 26 for default postgres users and groups
pg_dbsu_sudo: limit # dbsu sudo privilege: none|limit|all|nopass, limit by default
pg_dbsu_home: /var/lib/pgsql # postgresql home directory
pg_dbsu_ssh_exchange: true # exchange postgres dbsu ssh key among same cluster ?
# - postgres packages - #
pg_version: 13 # default postgresql version to be installed
pgdg_repo: false # add pgdg official repo before install (in case of no local repo available)
pg_add_repo: false # add postgres related repo before install (useful if you want a simple install)
pg_bin_dir: /usr/pgsql/bin # postgres binary dir, default is /usr/pgsql/bin, which uses the /usr/pgsql -> /usr/pgsql-{ver} symlink
pg_packages: # postgresql related packages. `${pg_version}` will be replaced by the value of `pg_version`
- postgresql${pg_version}* # postgresql kernel packages
- postgis31_${pg_version}* # postgis
- pgbouncer patroni pg_exporter pgbadger # 3rd utils
- patroni patroni-consul patroni-etcd pgbouncer pgbadger pg_activity
- python3 python3-psycopg2 python36-requests python3-etcd python3-consul
- python36-urllib3 python36-idna python36-pyOpenSSL python36-cryptography
pg_extensions: # postgresql extensions. `${pg_version}` will be replaced by the value of `pg_version`
- pg_repack${pg_version} pg_qualstats${pg_version} pg_stat_kcache${pg_version} wal2json${pg_version}
# - ogr_fdw${pg_version} mysql_fdw_${pg_version} redis_fdw_${pg_version} mongo_fdw${pg_version} hdfs_fdw_${pg_version}
# - count_distinct${version} ddlx_${version} geoip${version} orafce${version}
# - hypopg_${version} ip4r${version} jsquery_${version} logerrors_${version} periods_${version} pg_auto_failover_${version} pg_catcheck${version}
# - pg_fkpart${version} pg_jobmon${version} pg_partman${version} pg_prioritize_${version} pg_track_settings${version} pgaudit15_${version}
# - pgcryptokey${version} pgexportdoc${version} pgimportdoc${version} pgmemcache-${version} pgmp${version} pgq-${version} pgquarrel pgrouting_${version}
# - pguint${version} pguri${version} prefix${version} safeupdate_${version} semver${version} table_version${version} tdigest${version}
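# NOTE (illustrative only): `${pg_version}` is substituted with the value of `pg_version`,
# so with the default pg_version: 13 an entry such as
#   - pg_repack${pg_version} pg_qualstats${pg_version}
# resolves to the yum packages
#   - pg_repack13 pg_qualstats13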
#------------------------------------------------------------------------------
# POSTGRES PROVISION
#------------------------------------------------------------------------------
# - identity - #
# pg_cluster: # [REQUIRED] cluster name (cluster level, validated during pg_preflight)
# pg_seq: 0 # [REQUIRED] instance seq (instance level, validated during pg_preflight)
# pg_role: replica # [REQUIRED] service role (instance level, validated during pg_preflight)
# pg_shard: # [OPTIONAL] shard name (cluster level)
# pg_sindex: # [OPTIONAL] shard index (cluster level)
# - identity option -#
pg_hostname: false # overwrite node hostname with pg instance name
pg_nodename: true # overwrite consul nodename with pg instance name
# - retention - #
# pg_exists_action, available options: abort|clean|skip
# - abort: abort entire play's execution (default)
# - clean: remove existing cluster (dangerous)
# - skip: end current play for this host
# pg_exists: false # auxiliary flag variable (DO NOT SET THIS)
pg_exists_action: clean # what to do when a running postgres instance is found? (clean is FOR DEMO ONLY! do not use it in production)
pg_disable_purge: false # set to true to disable pg purge functionality for good (force pg_exists_action = abort)
# - storage - #
pg_data: /pg/data # postgres data directory (soft link)
pg_fs_main: /data # primary data disk mount point /pg -> {{ pg_fs_main }}/postgres/{{ pg_instance }}
pg_fs_bkup: /data/backups # backup disk mount point /pg/* -> {{ pg_fs_bkup }}/postgres/{{ pg_instance }}/*
# - connection - #
pg_listen: '0.0.0.0' # postgres listen address, '0.0.0.0' (all ipv4 addr) by default
pg_port: 5432 # postgres port, 5432 by default
pg_localhost: /var/run/postgresql # localhost unix socket dir for connection
# pg_upstream: # [OPTIONAL] specify replication upstream, instance level
# setting this on the primary instance will transform this cluster into a standby cluster
# - patroni - #
# patroni_mode, available options: default|pause|remove
# - default: default ha mode
# - pause: into maintenance mode
# - remove: remove patroni after bootstrap
patroni_mode: default # pause|default|remove
pg_namespace: /pg # top level key namespace in dcs
patroni_port: 8008 # default patroni port
patroni_watchdog_mode: automatic # watchdog mode: off|automatic|required
pg_conf: tiny.yml # pgsql template: {oltp|olap|crit|tiny}.yml , use tiny for sandbox
# use oltp|olap|crit for production, or fork your own templates (in ansible templates dir)
# - flags - #
pg_backup: false # store base backup on this node (instance level, TBD)
pg_delay: 0 # apply delay for offline|delayed replica (instance level, TBD)
# - localization - #
pg_encoding: UTF8 # database cluster encoding, UTF8 by default
pg_locale: C # database cluster locale, C by default
pg_lc_collate: C # database cluster collate, C by default
pg_lc_ctype: en_US.UTF8 # database character type, en_US.UTF8 by default (for i18n full-text search)
# - pgbouncer - #
pgbouncer_port: 6432 # pgbouncer port, 6432 by default
pgbouncer_poolmode: transaction # pooling mode: session|transaction|statement, transaction pooling by default
pgbouncer_max_db_conn: 100 # max connection to single database, DO NOT set this larger than postgres max conn or db connlimit
#------------------------------------------------------------------------------
# POSTGRES TEMPLATE
#------------------------------------------------------------------------------
# - template - #
pg_init: pg-init # init script for cluster template
# - system roles - #
pg_replication_username: replicator # system replication user
pg_replication_password: DBUser.Replicator # system replication password
pg_monitor_username: dbuser_monitor # system monitor user
pg_monitor_password: DBUser.Monitor # system monitor password
pg_admin_username: dbuser_dba # system admin user
pg_admin_password: DBUser.DBA # system admin password
# - default roles - #
pg_default_roles: # check http://pigsty.cc/zh/docs/concepts/provision/acl/ for more detail, sequence matters
# default roles
- { name: dbrole_readonly , login: false , comment: role for global read-only access } # production read-only role
- { name: dbrole_readwrite , login: false , roles: [dbrole_readonly], comment: role for global read-write access } # production read-write role
- { name: dbrole_offline , login: false , comment: role for restricted read-only access (offline instance) } # restricted-read-only role
- { name: dbrole_admin , login: false , roles: [pg_monitor, dbrole_readwrite] , comment: role for object creation } # production DDL change role
# default users
- { name: postgres , superuser: true , comment: system superuser } # system dbsu, name is designated by `pg_dbsu`
- { name: dbuser_dba , superuser: true , roles: [dbrole_admin] , comment: system admin user } # admin dbsu, name is designated by `pg_admin_username`
- { name: replicator , replication: true , bypassrls: true , roles: [pg_monitor, dbrole_readonly] , comment: system replicator } # replicator
- { name: dbuser_monitor , roles: [pg_monitor, dbrole_readonly] , comment: system monitor user , parameters: {log_min_duration_statement: 1000 } } # monitor user
- { name: dbuser_stats , password: DBUser.Stats , roles: [dbrole_offline] , comment: business offline user for offline queries and ETL } # ETL user
# - privileges - #
# objects created by dbsu and admin will have their privileges properly set
pg_default_privileges:
- GRANT USAGE ON SCHEMAS TO dbrole_readonly
- GRANT SELECT ON TABLES TO dbrole_readonly
- GRANT SELECT ON SEQUENCES TO dbrole_readonly
- GRANT EXECUTE ON FUNCTIONS TO dbrole_readonly
- GRANT USAGE ON SCHEMAS TO dbrole_offline
- GRANT SELECT ON TABLES TO dbrole_offline
- GRANT SELECT ON SEQUENCES TO dbrole_offline
- GRANT EXECUTE ON FUNCTIONS TO dbrole_offline
- GRANT INSERT, UPDATE, DELETE ON TABLES TO dbrole_readwrite
- GRANT USAGE, UPDATE ON SEQUENCES TO dbrole_readwrite
- GRANT TRUNCATE, REFERENCES, TRIGGER ON TABLES TO dbrole_admin
- GRANT CREATE ON SCHEMAS TO dbrole_admin
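# NOTE (illustrative only): each entry above is an ALTER DEFAULT PRIVILEGES clause; applied
# for the dbsu and admin roles it takes a form roughly like:
#   ALTER DEFAULT PRIVILEGES FOR ROLE postgres GRANT SELECT ON TABLES TO dbrole_readonly;
#   ALTER DEFAULT PRIVILEGES FOR ROLE dbuser_dba GRANT USAGE ON SCHEMAS TO dbrole_readonly;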
# - schemas - #
pg_default_schemas: [monitor] # default schemas to be created
# - extension - #
pg_default_extensions: # default extensions to be created
- { name: 'pg_stat_statements', schema: 'monitor' }
- { name: 'pgstattuple', schema: 'monitor' }
- { name: 'pg_qualstats', schema: 'monitor' }
- { name: 'pg_buffercache', schema: 'monitor' }
- { name: 'pageinspect', schema: 'monitor' }
- { name: 'pg_prewarm', schema: 'monitor' }
- { name: 'pg_visibility', schema: 'monitor' }
- { name: 'pg_freespacemap', schema: 'monitor' }
- { name: 'pg_repack', schema: 'monitor' }
- name: postgres_fdw
- name: file_fdw
- name: btree_gist
- name: btree_gin
- name: pg_trgm
- name: intagg
- name: intarray
# - hba - #
pg_offline_query: false # set to true to enable offline query on this instance (instance level)
pg_reload: true # reload postgres after hba changes
pg_hba_rules: # postgres host-based authentication rules
- title: allow meta node password access
role: common
rules:
- host all all 10.10.10.10/32 md5
- title: allow intranet admin password access
role: common
rules:
- host all +dbrole_admin 10.0.0.0/8 md5
- host all +dbrole_admin 172.16.0.0/12 md5
- host all +dbrole_admin 192.168.0.0/16 md5
- title: allow intranet password access
role: common
rules:
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
- title: allow local read/write (local production user via pgbouncer)
role: common
rules:
- local all +dbrole_readonly md5
- host all +dbrole_readonly 127.0.0.1/32 md5
- title: allow offline query (ETL,SAGA,Interactive) on offline instance
role: offline
rules:
- host all +dbrole_offline 10.0.0.0/8 md5
- host all +dbrole_offline 172.16.0.0/12 md5
- host all +dbrole_offline 192.168.0.0/16 md5
pg_hba_rules_extra: [] # extra hba rules (overwritten by cluster/instance level config)
pgbouncer_hba_rules: # pgbouncer host-based authentication rules
- title: local password access
role: common
rules:
- local all all md5
- host all all 127.0.0.1/32 md5
- title: intranet password access
role: common
rules:
- host all all 10.0.0.0/8 md5
- host all all 172.16.0.0/12 md5
- host all all 192.168.0.0/16 md5
pgbouncer_hba_rules_extra: [] # extra pgbouncer hba rules (overwritten by cluster/instance level config)
# pg_users: [] # business users
# pg_databases: [] # business databases
#------------------------------------------------------------------------------
# MONITOR PROVISION
#------------------------------------------------------------------------------
# - install - #
exporter_install: none # none|yum|binary, none by default
exporter_repo_url: '' # if set, repo will be added to /etc/yum.repos.d/ before yum installation
# - collect - #
exporter_metrics_path: /metrics # default metric path for pg related exporter
# - node exporter - #
node_exporter_enabled: true # setup node_exporter on instance
node_exporter_port: 9100 # default port for node exporter
node_exporter_options: '--no-collector.softnet --collector.systemd --collector.ntp --collector.tcpstat --collector.processes'
# - pg exporter - #
pg_exporter_config: pg_exporter.yml # default config files for pg_exporter
pg_exporter_enabled: true # setup pg_exporter on instance
pg_exporter_port: 9630 # default port for pg exporter
pg_exporter_url: '' # optional, if not set, generate from reference parameters
pg_exporter_auto_discovery: true # optional, auto-discover available databases on the target instance?
pg_exporter_exclude_database: 'template0,template1,postgres' # optional, comma-separated list of databases that WILL NOT be monitored when auto-discovery is enabled
pg_exporter_include_database: '' # optional, comma-separated list of databases that WILL BE monitored when auto-discovery is enabled; an empty string disables include mode
pg_exporter_options: '--log.level=info --log.format="logger:syslog?appname=pg_exporter&local=7"'
# - pgbouncer exporter - #
pgbouncer_exporter_enabled: true # setup pgbouncer_exporter on instance (if you don't have pgbouncer, disable it)
pgbouncer_exporter_port: 9631 # default port for pgbouncer exporter
pgbouncer_exporter_url: '' # optional, if not set, generate from reference parameters
pgbouncer_exporter_options: '--log.level=info --log.format="logger:syslog?appname=pgbouncer_exporter&local=7"'
# - promtail - # # promtail is a beta feature which requires manual deployment
promtail_enabled: true # enable promtail logging collector?
promtail_clean: false # remove promtail status file? false by default
promtail_port: 9080 # default listen address for promtail
promtail_status_file: /tmp/promtail-status.yml
promtail_send_url: http://10.10.10.10:3100/loki/api/v1/push # loki url to receive logs
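# NOTE (illustrative only): with the ports above, exporter endpoints can be checked directly
# on any node, e.g. on the meta node 10.10.10.10:
#   curl -s http://10.10.10.10:9100/metrics | head    # node_exporter
#   curl -s http://10.10.10.10:9630/metrics | head    # pg_exporter
#   curl -s http://10.10.10.10:9631/metrics | head    # pgbouncer_exporter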
#------------------------------------------------------------------------------
# SERVICE PROVISION
#------------------------------------------------------------------------------
pg_weight: 100 # default load balance weight (instance level)
# - service - #
pg_services: # how to expose postgres service in cluster?
# primary service will route {ip|name}:5433 to primary pgbouncer (5433->6432 rw)
- name: primary # service name {{ pg_cluster }}-primary
src_ip: "*"
src_port: 5433
dst_port: pgbouncer # 5433 route to pgbouncer
check_url: /primary # primary health check, success when instance is primary
selector: "[]" # select all instance as primary service candidate
# replica service will route {ip|name}:5434 to replica pgbouncer (5434->6432 ro)
- name: replica # service name {{ pg_cluster }}-replica
src_ip: "*"
src_port: 5434
dst_port: pgbouncer
check_url: /read-only # read-only health check. (including primary)
selector: "[]" # select all instance as replica service candidate
selector_backup: "[? pg_role == `primary`]" # primaries are used as backup servers in the replica service
# default service will route {ip|name}:5436 to primary postgres (5436->5432 primary)
- name: default # service's actual name is {{ pg_cluster }}-default
src_ip: "*" # service bind ip address, * for all, vip for cluster virtual ip address
src_port: 5436 # bind port, mandatory
dst_port: postgres # target port: postgres|pgbouncer|port_number , pgbouncer(6432) by default
check_method: http # health check method: only http is available for now
check_port: patroni # health check port: patroni|pg_exporter|port_number , patroni by default
check_url: /primary # health check url path, / as default
check_code: 200 # health check http code, 200 as default
selector: "[]" # instance selector
haproxy: # haproxy specific fields
maxconn: 3000 # default front-end connection
balance: roundrobin # load balance algorithm (roundrobin by default)
default_server_options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
# offline service will route {ip|name}:5438 to offline postgres (5438->5432 offline)
- name: offline # service name {{ pg_cluster }}-offline
src_ip: "*"
src_port: 5438
dst_port: postgres
check_url: /replica # offline MUST be a replica
selector: "[? pg_role == `offline` || pg_offline_query ]" # instances with pg_role == 'offline' or instance marked with 'pg_offline_query == true'
selector_backup: "[? pg_role == `replica` && !pg_offline_query]" # replica are used as backup server in offline service
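# NOTE (illustrative only): with the default services above, every cluster member (and its
# VIP, e.g. 10.10.10.3 for pg-test) exposes:
#   5433 -> primary pgbouncer (read-write)   e.g. psql postgres://test:[email protected]:5433/test
#   5434 -> replica pgbouncer (read-only)
#   5436 -> primary postgres  (default, direct connection)
#   5438 -> offline postgres  (ETL / interactive queries)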
pg_services_extra: [] # extra services to be added
# - haproxy - #
haproxy_enabled: true # enable haproxy on every cluster member
haproxy_reload: true # reload haproxy after config
haproxy_admin_auth_enabled: false # enable authentication for haproxy admin?
haproxy_admin_username: admin # default haproxy admin username
haproxy_admin_password: admin # default haproxy admin password
haproxy_exporter_port: 9101 # default admin/exporter port
haproxy_client_timeout: 12h # client side connection timeout
haproxy_server_timeout: 12h # server side connection timeout
# - vip - #
vip_mode: none # none | l2 | l4
vip_reload: true # whether reload service after config
# vip_address: 127.0.0.1 # virtual ip address ip (l2 or l4)
# vip_cidrmask: 24 # virtual ip address cidr mask (l2 only)
# vip_interface: eth0 # virtual ip network interface (l2 only)
# - dns - # # NOT IMPLEMENTED
# dns_mode: vip # vip|all|selector: how to resolve cluster DNS?
# dns_selector: '[]' # if dns_mode == vip, filter instances been resolved
...