Deploying a Postgres-XL Cluster with pgxc_ctl


Intro

PostgreSQL

PostgreSQL, also known as Postgres, is an open-source relational database management system that emphasizes extensibility and SQL conformance. It originated from the Ingres database at the University of California, Berkeley, and was first named POSTGRES; the name was changed to PostgreSQL to highlight its SQL support. PostgreSQL supports full ACID transactions (Atomicity, Consistency, Isolation, Durability), multi-version concurrency control (MVCC), built-in binary replication based on the WAL (write-ahead log), and synchronous replication supporting multi-master writes.

PostgreSQL-xc

PostgreSQL-XC (eXtensible Cluster) builds on PostgreSQL to provide write-scalable, synchronous multi-master replication.

PostgreSQL-xl

Postgres-XL is a distributed relational database management system that evolved from PostgreSQL-XC. It provides cluster-wide consistency of transaction snapshots through the GTM (global transaction manager). Because its components need a fast interconnect, it is not suited to geographically distributed clusters. A Postgres-XL cluster can spread highly concurrent queries across multiple datanodes in parallel, and an individual table can either be replicated across the whole cluster or distributed (sharded) across the datanodes for write scaling.

So the lineage from Ingres to Postgres-XL looks roughly like this:

Ingres ---> POSTGRES ---> PostgreSQL ---> PostgreSQL-xc ---> PostgreSQL-xl

PGXL cluster components

  • gtm (global transaction manager): the GTM is a key component of Postgres-XL that provides consistent transaction management and tuple visibility control, based on MVCC (Multi-Version Concurrency Control). For high availability, a gtm-standby is usually deployed on another physical machine.

  • gtm-proxy: a gtm-proxy acts as a proxy through which the datanodes and coordinator running on the same machine send requests to (and receive responses from) the GTM, reducing the traffic between datanodes/coordinators and the GTM. The gtm-proxy also helps with GTM failover: if the GTM fails, the gtm-proxy automatically reconnects to the gtm-standby.

  • datanode: datanodes store the actual application and user data. A table can be stored distributed across the PGXL cluster or replicated on every datanode. A datanode has no view of the whole database; it only handles the portion of the data it stores. Each request from an application is intercepted and analyzed by a coordinator and then routed to the appropriate datanodes; a datanode can accept SQL requests from multiple coordinators in different sessions. Datanodes are also usually deployed for high availability, with one or more datanode-standby instances on other machines.

  • coordinator: the coordinator is the interface between applications and the database. It behaves like a PostgreSQL backend process, but it stores no data itself; all data lives on the datanodes. It accepts SQL queries, obtains a Global Transaction Id (GXID) and Global Snapshot, decides which datanodes a query should be routed to, and has the datanodes execute it. Because execution is associated with the GXID and Global Snapshot, MVCC extends cluster-wide, keeping transactions consistent. The coordinator should likewise be deployed for high availability, with a coordinator-standby.
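To make the replicated-vs-distributed choice above concrete, here is a small sketch. DISTRIBUTE BY is Postgres-XL's extension to CREATE TABLE; the table and column names are made up for illustration, and the psql invocation assumes the coordinator port used later in this post (20004):

```shell
# Write two example DDL statements to a file; run it against a coordinator
# once the cluster is up.
cat > /tmp/distribute_demo.sql <<'SQL'
-- shard rows across datanodes by hashing the key (write scaling)
CREATE TABLE orders (id int, amount numeric) DISTRIBUTE BY HASH (id);
-- keep a full copy on every datanode (good for small reference tables)
CREATE TABLE regions (code text, name text) DISTRIBUTE BY REPLICATION;
SQL
# psql -U pgxl -h localhost -p 20004 -d postgres -f /tmp/distribute_demo.sql
```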

ENV of pgxl cluster

Two physical machines form the minimal cluster, each hosting its own components plus the standbys of the other machine's components. pgxl-node1 acts as the management host, running the pgxc_ctl tool to initialize and manage the cluster. pgxc_ctl ships with Postgres-XL to simplify deployment and management; from Postgres-XL 10 onward it is built together with the main tree, while earlier versions require building it separately.

HOST            HOSTNAME     CLUSTER COMPONENTS
192.168.88.10   pgxl-node1   gtm, gtm-proxy, datanode1, coordinator1, datanode2-standby, coordinator2-standby
192.168.88.11   pgxl-node2   gtm-standby, gtm-proxy, datanode2, coordinator2, datanode1-standby, coordinator1-standby

Prepare for deploy

Set up passwordless SSH from pgxl-node1 (the management host) to every node.

  • pgxl-node1
[root@pgxl-node1 ~]# echo -e "192.168.88.10 pgxl-node1\n192.168.88.11 pgxl-node2" >> /etc/hosts
[root@pgxl-node1 ~]# scp /etc/hosts root@192.168.88.11:/etc/
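For clusters with more than two nodes, typing the hosts entries by hand invites typos; a small generator helps. A minimal sketch (gen_hosts is a hypothetical helper; the IP/hostname pairs are the ones used in this post):

```shell
# gen_hosts: print one "IP hostname" line per cluster node.
gen_hosts() {
  printf '%s %s\n' \
    192.168.88.10 pgxl-node1 \
    192.168.88.11 pgxl-node2
}
gen_hosts > /tmp/hosts.add   # inspect first; on a real node append as root:
# gen_hosts >> /etc/hosts
```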

# Add user pgxl
[root@pgxl-node1 ~]# useradd -u 1001 pgxl

# gen ssh private key and pubkey
[root@pgxl-node1 ~]# su - pgxl
Last login: Mon Jun 29 07:27:54 CST 2020 on pts/1
[pgxl@pgxl-node1 ~]$ ssh-keygen -t rsa   # don't set a passphrase
[pgxl@pgxl-node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[pgxl@pgxl-node1 ~]$ chmod 600 ~/.ssh/authorized_keys
[pgxl@pgxl-node1 ~]$ scp ~/.ssh/authorized_keys  pgxl@pgxl-node2:~/.ssh/
# With more nodes, copy authorized_keys into ~/.ssh of the pgxl user on every machine
  • pgxl-node2
# Add user pgxl
[root@pgxl-node2 ~]# useradd -u 1001 pgxl

# gen ssh private key and pubkey
[root@pgxl-node2 ~]# su - pgxl
Last login: Mon Jun 29 07:30:34 CST 2020 on pts/1
[pgxl@pgxl-node2 ~]$ ssh-keygen -t rsa   # don't set a passphrase
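Before handing control to pgxc_ctl, it is worth verifying that key-based login actually works from the management host. A sketch (verify_ssh is a hypothetical helper; BatchMode makes ssh fail instead of prompting for a password when key auth is broken):

```shell
# verify_ssh: succeed only if passwordless login to the given host works.
verify_ssh() {
  ssh -o BatchMode=yes -o ConnectTimeout=5 "pgxl@$1" true \
    && echo "ok: $1" \
    || echo "FAILED: $1"
}
# On pgxl-node1, as the pgxl user:
# verify_ssh pgxl-node1
# verify_ssh pgxl-node2
```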

get source and make

  • pgxl-node1/2
# install libs and deps
~# yum install gcc gcc-c++ kernel-devel readline-devel flex bison bison-devel zlib zlib-devel make docbook-style-dsssl jade
  • PGXL source code

https://www.postgres-xl.org/downloads/postgres-xl-9.5r1.6.tar.bz2

  • get the source code and compile on all the nodes
# compile and install postgresql-xl cluster components
[root@pgxl-node1 src]# wget https://www.postgres-xl.org/downloads/postgres-xl-9.5r1.6.tar.bz2
[root@pgxl-node1 src]# tar -xf postgres-xl-9.5r1.6.tar.bz2

[root@pgxl-node1 src]# cd postgres-xl-9.5r1.6
[root@pgxl-node1 postgres-xl-9.5r1.6]# ./configure --prefix=/usr/local/pgxl  --with-python
......
[root@pgxl-node1 postgres-xl-9.5r1.6]# make -j 4 && make install 

# compile and install pgxc_ctl deploy utility
[root@pgxl-node1 postgres-xl-9.5r1.6]# pwd
/usr/local/src/postgres-xl-9.5r1.6
[root@pgxl-node1 postgres-xl-9.5r1.6]# cd contrib/pgxc_ctl/
[root@pgxl-node1 pgxc_ctl]# make && make install   # build pgxc_ctl, the cluster deployment and management tool (similar in spirit to kubeadm for k8s)

[root@pgxl-node1 pgxc_ctl]# ll /usr/local/pgxl/
total 12
drwxr-xr-x 2 root root 4096 Jun 28 16:20 bin
drwxr-xr-x 4 root root 4096 Jun 28 16:10 include
drwxr-xr-x 4 root root 4096 Jun 28 16:10 lib
drwxr-xr-x 3 root root   24 Jun 28 16:10 share
[root@pgxl-node1 pgxc_ctl]# ll /usr/local/pgxl/bin/
total 11656
-rwxr-xr-x 1 root root   67288 Jun 28 16:10 clusterdb
-rwxr-xr-x 1 root root   67328 Jun 28 16:10 createdb
-rwxr-xr-x 1 root root   71584 Jun 28 16:10 createlang
-rwxr-xr-x 1 root root   67776 Jun 28 16:10 createuser
-rwxr-xr-x 1 root root   62768 Jun 28 16:10 dropdb
-rwxr-xr-x 1 root root   71536 Jun 28 16:10 droplang
-rwxr-xr-x 1 root root   62736 Jun 28 16:10 dropuser
-rwxr-xr-x 1 root root  892224 Jun 28 16:10 ecpg
-rwxr-xr-x 1 root root  293952 Jun 28 16:10 gtm
-rwxr-xr-x 1 root root   48176 Jun 28 16:10 gtm_ctl
-rwxr-xr-x 1 root root  179512 Jun 28 16:10 gtm_proxy
-rwxr-xr-x 1 root root  110456 Jun 28 16:10 initdb
-rwxr-xr-x 1 root root   39776 Jun 28 16:10 initgtm
-rwxr-xr-x 1 root root   24040 Jun 28 16:10 pg_archivecleanup
-rwxr-xr-x 1 root root   78512 Jun 28 16:10 pg_basebackup
-rwxr-xr-x 1 root root   34384 Jun 28 16:10 pg_config
-rwxr-xr-x 1 root root   42272 Jun 28 16:10 pg_controldata
-rwxr-xr-x 1 root root   49104 Jun 28 16:10 pg_ctl
-rwxr-xr-x 1 root root  378416 Jun 28 16:10 pg_dump
-rwxr-xr-x 1 root root   88784 Jun 28 16:10 pg_dumpall
-rwxr-xr-x 1 root root   35648 Jun 28 16:10 pg_isready
-rwxr-xr-x 1 root root   55232 Jun 28 16:10 pg_receivexlog
-rwxr-xr-x 1 root root   59984 Jun 28 16:10 pg_recvlogical
-rwxr-xr-x 1 root root   51776 Jun 28 16:10 pg_resetxlog
-rwxr-xr-x 1 root root  159856 Jun 28 16:10 pg_restore
-rwxr-xr-x 1 root root   86232 Jun 28 16:10 pg_rewind
-rwxr-xr-x 1 root root   24840 Jun 28 16:10 pg_test_fsync
-rwxr-xr-x 1 root root   19568 Jun 28 16:10 pg_test_timing
-rwxr-xr-x 1 root root  107920 Jun 28 16:10 pg_upgrade
-rwxr-xr-x 1 root root   75784 Jun 28 16:10 pg_xlogdump
-rwxr-xr-x 1 root root   87840 Jun 28 16:10 pgbench
-rwxr-xr-x 1 root root  355552 Jun 28 16:20 pgxc_ctl  # used to simplify cluster deployment
-rwxr-xr-x 1 root root 7379744 Jun 28 16:10 postgres
lrwxrwxrwx 1 root root       8 Jun 28 16:10 postmaster -> postgres
-rwxr-xr-x 1 root root  497472 Jun 28 16:10 psql
-rwxr-xr-x 1 root root   67384 Jun 28 16:10 reindexdb
-rwxr-xr-x 1 root root   72064 Jun 28 16:10 vacuumdb

PATH setup

  • PATH setup
[root@pgxl-node1 ~]# vim /etc/profile.d/pgxl.sh
#!/usr/bin/env bash

export PATH=/usr/local/pgxl/bin:$PATH

[root@pgxl-node1 ~]# chmod 755 /etc/profile.d/pgxl.sh
[pgxl@pgxl-node1 ~]$ source /etc/profile.d/pgxl.sh
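After sourcing the profile script, a quick check that the expected binaries really are on PATH can save confusion later. A sketch (check_bins is a hypothetical helper; the binary list mirrors the ls output above):

```shell
# check_bins: report any of the given commands not found on PATH.
check_bins() {
  local missing=0
  for b in "$@"; do
    command -v "$b" >/dev/null 2>&1 || { echo "missing: $b"; missing=1; }
  done
  return "$missing"
}
# On each node, after sourcing /etc/profile.d/pgxl.sh:
# check_bins gtm gtm_ctl gtm_proxy initdb initgtm pgxc_ctl postgres psql
```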

init pgxc_ctl and generate the cluster deploy configuration

Running pgxc_ctl prepare generates the cluster deployment configuration file pgxc_ctl.conf, which defines how every component of the cluster is deployed.

  • pgxl-node1
[root@pgxl-node1 ~]# su - pgxl
Last login: Mon Jun 29 09:25:57 CST 2020 on pts/0
[pgxl@pgxl-node1 ~]$ pgxc_ctl prepare
[pgxl@pgxl-node1 ~]$ ll ./pgxc_ctl/pgxc_ctl.conf
-rw-rw-r-- 1 pgxl pgxl 13341 Jun 29 08:41 ./pgxc_ctl/pgxc_ctl.conf

edit pgxc_ctl.conf

  • /home/pgxl/pgxc_ctl/pgxc_ctl.conf
#!/usr/bin/env bash
#
#========================================================================================
# pgxcInstallDir variable is needed if you invoke "deploy" command from pgxc_ctl utility.
# If you don't, you don't need this variable.
pgxcInstallDir=/data/pgxc
#---- OVERALL -----------------------------------------------------------------------------
#
pgxcOwner=pgxl			# owner of the Postgres-XC database cluster.  Here, we use this
						# both as linux user and database user.  This must be
						# the super user of each coordinator and datanode.
pgxcUser=$pgxcOwner		# OS user of Postgres-XC owner

tmpDir=/tmp					# temporary dir used in XC servers
localTmpDir=$tmpDir			# temporary dir used here locally

configBackup=n					# If you want config file backup, specify y to this value.
configBackupHost=pgxc-linker	# host to backup config file
configBackupDir=/data/pgxc		# Backup directory
configBackupFile=pgxc_ctl.bak	# Backup file name --> Need to synchronize when original changed.

#---- GTM ------------------------------------------------------------------------------------

# GTM is mandatory.  You must have at least (and only) one GTM master in your Postgres-XC cluster.
# If GTM crashes and you need to reconfigure it, you can do it by pgxc_update_gtm command to update
# GTM master with others.   Of course, we provide pgxc_remove_gtm command to remove it.  This command
# will not stop the current GTM.  It is up to the operator.

#---- GTM Master -----------------------------------------------

#---- Overall ----
gtmName=gtm
gtmMasterServer=pgxl-node1
gtmMasterPort=20001
gtmMasterDir=/data/pgxc/nodes/gtm

#---- Configuration ---
gtmExtraConfig=none			# Will be added gtm.conf for both Master and Slave (done at initialization only)
gtmMasterSpecificExtraConfig=none	# Will be added to Master's gtm.conf (done at initialization only)

#---- GTM Slave -----------------------------------------------

# Because GTM is a key component to maintain database consistency, you may want to configure GTM slave
# for backup.

#---- Overall ------
gtmSlave=y					# Specify y if you configure GTM Slave.   Otherwise, GTM slave will not be configured and
gtmSlaveName=gtmSlave
gtmSlaveServer=pgxl-node2		# value none means GTM slave is not available.  Give none if you don't configure GTM Slave.
gtmSlavePort=20001			# Not used if you don't configure GTM slave.
gtmSlaveDir=/data/pgxc/nodes/gtm	# Not used if you don't configure GTM slave.
# Please note that when you have GTM failover, then there will be no slave available until you configure the slave
# again. (pgxc_add_gtm_slave function will handle it)

#---- Configuration ----
gtmSlaveSpecificExtraConfig=none # Will be added to Slave's gtm.conf (done at initialization only)

#---- GTM Proxy -------------------------------------------------------------------------------------------------------
# GTM proxy will be selected based upon which server each component runs on.
# When fails over to the slave, the slave inherits its master's gtm proxy.  It should be
# reconfigured based upon the new location.
#
# To do so, slave should be restarted.   So pg_ctl promote -> (edit postgresql.conf and recovery.conf) -> pg_ctl restart
#
# You don't have to configure GTM Proxy if you don't configure GTM slave or you are happy if every component connects
# to GTM Master directly.  If you configure GTM slave, you must configure GTM proxy too.

#---- Shortcuts ------
gtmProxyDir=/data/pgxc/nodes/gtm_pxy

#---- Overall -------
gtmProxy=y				# Specify y if you configure at least one GTM proxy.   You may not configure gtm proxies
						# only when you don't configure GTM slaves.
						# If you specify this value not to y, the following parameters will be set to default empty values.
						# If we find there're no valid Proxy server names (means, every servers are specified
						# as none), then gtmProxy value will be set to "n" and all the entries will be set to
						# empty values.
gtmProxyNames=(gtm_pxy1 gtm_pxy2)	# Not used if it is not configured
gtmProxyServers=(pgxl-node1 pgxl-node2)			# Specify none if you don't configure it.
gtmProxyPorts=(20002 20002)				# Not used if it is not configured.
gtmProxyDirs=($gtmProxyDir $gtmProxyDir)	# Not used if it is not configured.

#---- Configuration ----
gtmPxyExtraConfig=none		# Extra configuration parameter for gtm_proxy.  Coordinator section has an example.
gtmPxySpecificExtraConfig=(none none)

#---- Coordinators ----------------------------------------------------------------------------------------------------

#---- shortcuts ----------
coordMasterDir=/data/pgxc/nodes/coord
coordSlaveDir=/data/pgxc/nodes/coord_slave
coordArchLogDir=/data/pgxc/nodes/coord_archlog

#---- Overall ------------
coordNames=(coord1 coord2)		# Master and slave use the same name
coordPorts=(20004 20005)			# Master ports
poolerPorts=(20010 20011)			# Master pooler ports
coordPgHbaEntries=(192.168.88.0/24)				# Assumes that all the coordinator (master/slave) accepts
												# the same connection
												# This entry allows only $pgxcOwner to connect.
												# If you'd like to setup another connection, you should
												# supply these entries through files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust".   If you don't want
# such setups, specify the value () to this variable and supply what you want using coordExtraPgHba
# and/or coordSpecificExtraPgHba variables.
#coordPgHbaEntries=(::1/128)	# Same as above but for IPv6 addresses

#---- Master -------------
coordMasterServers=(pgxl-node1 pgxl-node2)
coordMasterDirs=($coordMasterDir $coordMasterDir)
coordMaxWALsernder=5	# max_wal_senders: needed to configure slave. If zero value is specified,
						# it is expected to supply this parameter explicitly by external files
						# specified in the following.	If you don't configure slaves, leave this value to zero.
coordMaxWALSenders=($coordMaxWALsernder $coordMaxWALsernder)
						# max_wal_senders configuration for each coordinator.

#---- Slave -------------
coordSlave=y			# Specify y if you configure at least one coordinator slave.  Otherwise, the following
						# configuration parameters will be set to empty values.
						# If no effective server names are found (that is, every servers are specified as none),
						# then coordSlave value will be set to n and all the following values will be set to
						# empty values.
coordSlaveSync=y		# Specify to connect with synchronized mode.
coordSlaveServers=(pgxl-node2 pgxl-node1)			# none means this slave is not available
coordSlavePorts=(20004 20005)			# Slave ports
coordSlavePoolerPorts=(20010 20011)			# Slave pooler ports
coordSlaveDirs=($coordSlaveDir $coordSlaveDir)
coordArchLogDirs=($coordArchLogDir $coordArchLogDir)

#---- Configuration files---
# Need these when you'd like setup specific non-default configuration 
# These files will go to corresponding files for the master.
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries 
# Or you may supply these files manually.
coordExtraConfig=coordExtraConfig	# Extra configuration file for coordinators.  
						# This file will be added to all the coordinators'
						# postgresql.conf
# Please note that the following sets up minimum parameters which you may want to change.
# You can put your postgresql.conf lines here.
cat > $coordExtraConfig <<EOF
#================================================
# Added to all the coordinator postgresql.conf
# Original: $coordExtraConfig
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF

# Additional Configuration file for specific coordinator master.
# You can define each setting by similar means as above.
coordSpecificExtraConfig=(none none none none)
coordExtraPgHba=none	# Extra entry for pg_hba.conf.  This file will be added to all the coordinators' pg_hba.conf
coordSpecificExtraPgHba=(none none none none)

#---- Datanodes -------------------------------------------------------------------------------------------------------

#---- Shortcuts --------------
datanodeMasterDir=/data/pgxc/nodes/dn_master
datanodeSlaveDir=/data/pgxc/nodes/dn_slave
datanodeArchLogDir=/data/pgxc/nodes/datanode_archlog

#---- Overall ---------------
#primaryDatanode=datanode1				# Primary Node.
# At present, xc has a problem to issue ALTER NODE against the primary node.  Until it is fixed, the test will be done
# without this feature.
primaryDatanode=datanode1				# Primary Node.
datanodeNames=(datanode1 datanode2)
datanodePorts=(20008 20009)	# Master ports
datanodePoolerPorts=(20012 20013)	# Master pooler ports
datanodePgHbaEntries=(192.168.88.0/24)	# Assumes that all the coordinator (master/slave) accepts
										# the same connection
										# This list sets up pg_hba.conf for $pgxcOwner user.
										# If you'd like to setup other entries, supply them
										# through extra configuration files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust".   If you don't want
# such setups, specify the value () to this variable and supply what you want using datanodeExtraPgHba
# and/or datanodeSpecificExtraPgHba variables.
#datanodePgHbaEntries=(::1/128)	# Same as above but for IPv6 addresses

#---- Master ----------------
datanodeMasterServers=(pgxl-node1 pgxl-node2)	# none means this master is not available.
													# This means that there should be the master but is down.
													# The cluster is not operational until the master is
													# recovered and ready to run.	
datanodeMasterDirs=($datanodeMasterDir $datanodeMasterDir)
datanodeMaxWalSender=5								# max_wal_senders: needed to configure slave. If zero value is 
													# specified, it is expected this parameter is explicitly supplied
													# by external configuration files.
													# If you don't configure slaves, leave this value zero.
datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender)
						# max_wal_senders configuration for each datanode

#---- Slave -----------------
datanodeSlave=y			# Specify y if you configure at least one datanode slave.  Otherwise, the following
						# configuration parameters will be set to empty values.
						# If no effective server names are found (that is, every servers are specified as none),
						# then datanodeSlave value will be set to n and all the following values will be set to
						# empty values.
datanodeSlaveServers=(pgxl-node2 pgxl-node1)	# value none means this slave is not available
datanodeSlavePorts=(20008 20009)	# value none means this slave is not available
datanodeSlavePoolerPorts=(20012 20013)	# value none means this slave is not available
datanodeSlaveSync=y		# If datanode slave is connected in synchronized mode
datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir)
datanodeArchLogDirs=( $datanodeArchLogDir $datanodeArchLogDir)

# ---- Configuration files ---
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries here.
# These files will go to corresponding files for the master.
# Or you may supply these files manually.
datanodeExtraConfig=none	# Extra configuration file for datanodes.  This file will be added to all the 
							# datanodes' postgresql.conf
datanodeSpecificExtraConfig=(none none none none)
datanodeExtraPgHba=none		# Extra entry for pg_hba.conf.  This file will be added to all the datanodes' postgresql.conf
datanodeSpecificExtraPgHba=(none none none none)

#----- Additional Slaves -----
datanodeAdditionalSlaves=n	# Additional slave can be specified as follows: where you

#---- WAL archives -------------------------------------------------------------------------------------------------
walArchive=n	# If you'd like to configure WAL archive, edit this section.
				# Pgxc_ctl assumes that if you configure WAL archive, you configure it
				# for all the coordinators and datanodes.
				# Default is "no".   Please specify "y" here to turn it on.
#
#		End of Configuration Section
#
#==========================================================================================================================

#========================================================================================================================
# The following is for extension.  Just demonstrate how to write such extension.  There's no code
# which takes care of them so please ignore the following lines.  They are simply ignored by pgxc_ctl.
# No side effects.
#=============<< Beginning of future extension demonstration >> ========================================================
# You can setup more than one backup set for various purposes, such as disaster recovery.
walArchiveSet=(war1 war2)
war1_source=(master)	# you can specify master, slave or any other additional slaves as a source of WAL archive.
					# Default is the master
wal1_source=(slave)
wal1_source=(additiona_coordinator_slave_set additional_datanode_slave_set)
war1_host=node10	# All the nodes are backed up at the same host for a given archive set
war1_backupdir=/data/pgxc/backup_war1
wal2_source=(master)
war2_host=node11
war2_backupdir=/data/pgxc/backup_war2
#=============<< End of future extension demonstration >> ========================================================
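Since several components share a host, every (host, port) pair in the configuration must be unique, or initialization will fail with bind errors. A quick sanity check; check_ports is a hypothetical helper, and the pairs below are transcribed from the master, slave, and pooler port arrays in the configuration above:

```shell
# check_ports: fail if any "host:port" argument appears twice.
check_ports() {
  printf '%s\n' "$@" | sort | uniq -d | grep -q . && return 1
  return 0
}
# gtm 20001, gtm-proxy 20002, coordinators 20004/20005 (poolers 20010/20011),
# datanodes 20008/20009 (poolers 20012/20013); masters and slaves alternate hosts.
check_ports \
  pgxl-node1:20001 pgxl-node2:20001 \
  pgxl-node1:20002 pgxl-node2:20002 \
  pgxl-node1:20004 pgxl-node2:20004 \
  pgxl-node2:20005 pgxl-node1:20005 \
  pgxl-node1:20010 pgxl-node2:20010 \
  pgxl-node2:20011 pgxl-node1:20011 \
  pgxl-node1:20008 pgxl-node2:20008 \
  pgxl-node2:20009 pgxl-node1:20009 \
  pgxl-node1:20012 pgxl-node2:20012 \
  pgxl-node2:20013 pgxl-node1:20013 \
  && echo "no duplicate host:port pairs"
```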

Init cluster

pgxc_ctl initializes the cluster by starting every component (gtm, gtm-proxy, datanodes, coordinators) according to the settings in pgxc_ctl.conf:

[pgxl@pgxl-node1 ~]$ pgxc_ctl init all

Connect to cluster

# port 20004 is the coordinator's port
[pgxl@pgxl-node1 ~]$ psql -U pgxl -h localhost -p 20004 -d postgres
psql (PGXL 9.5r1.6, based on PG 9.5.8 (Postgres-XL 9.5r1.6))
Type "help" for help.

postgres=#

Or:

# psql "postgresql://USER:PASS@HOST:COORPORT/DB"
[pgxl@pgxl-node1 ~]$ psql "postgresql://pgxl:stevenux@localhost:20004/postgres"
psql (PGXL 9.5r1.6, based on PG 9.5.8 (Postgres-XL 9.5r1.6))
Type "help" for help.

postgres=#

Be very careful with pgxc_ctl init all when managing a cluster with pgxc_ctl: on first initialization, missing directories are created automatically, but if a cluster is already running, it wipes the existing directories and re-initializes them. Starting with postgresql-xl-10r1.1, pgxc_ctl adds a force option: only pgxc_ctl force init all clears non-empty directories, while init all without force starts the cluster from the data already in those directories.