This tutorial describes a web load-balancing and database high-availability solution on CentOS 7.
The solution uses a multi-master (master-master) architecture. We begin with initial system setup, which is especially important for the whole cluster.
1. Change the hostname on each machine.

2. Stop the NetworkManager service:
systemctl stop NetworkManager
systemctl disable NetworkManager

3. Rename the network interfaces. This step is optional; I prefer eth0 as the interface name. Edit /etc/sysconfig/grub and append net.ifnames=0 biosdevname=0 to the end of the "GRUB_CMDLINE_LINUX=" line, so that it reads:
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto vconsole.keymap=us rhgb quiet net.ifnames=0 biosdevname=0"
Then run:
grub2-mkconfig -o /boot/grub2/grub.cfg
Next, rename /etc/sysconfig/network-scripts/ifcfg-* to the desired name (ifcfg-eth0) and set NAME=eth0 inside the file. After a reboot the interface will be named eth0.

4. Stop the firewall:
systemctl stop firewalld
systemctl disable firewalld

5. Configure SELinux: in /etc/selinux/config, change enforcing to disabled.

6. Add the host list to /etc/hosts on every machine in the cluster:
122.0.113.142 website01
122.0.113.143 website02
122.0.113.144 website03
This must be done on all machines.

7. Set up passwordless SSH access:
ssh-keygen
Press Enter through all the prompts, then:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
chmod 755 ~/.ssh
vi ~/.ssh/config   (add the following line)
StrictHostKeyChecking no
chmod 600 ~/.ssh/config
Then set the appropriate permissions:
chmod 755 ~/.ssh/
chmod 600 ~/.ssh/id_rsa ~/.ssh/id_rsa.pub
When this is done, download all the keys and config files from /root/.ssh to your local machine, then run ssh-keygen on each of the other nodes. Afterwards, go into /root/.ssh on each node, clear the directory, and upload the saved files. Then run:
chmod 755 ~/.ssh/
chmod 600 ~/.ssh/id_rsa ~/.ssh/id_rsa.pub

With the preparation finished, we move on to LAMP.
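The /etc/hosts entries in step 6 have to be identical on every machine. As a hedged sketch (the node names and IPs are the ones from this article, and the helper function names are illustrative), a small shell helper can append them idempotently, so rerunning it never duplicates lines:

```shell
# Sketch: the cluster host list from step 6, kept in one function so the
# identical block can be appended on every machine.
hosts_block() {
  cat <<'EOF'
122.0.113.142 website01
122.0.113.143 website02
122.0.113.144 website03
EOF
}

# Append the block only if it is not already present (idempotent).
add_cluster_hosts() {
  target="${1:-/etc/hosts}"
  grep -q 'website01' "$target" 2>/dev/null || hosts_block >> "$target"
}
```

Running add_cluster_hosts a second time is a no-op, which makes it safe to include in a provisioning script that may run more than once.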
First create a yum repo file under /etc/yum.repos.d/:
[mariadb]
name = MariaDB Galera Cluster
baseurl =
gpgkey =
gpgcheck = 1

Install the database:
yum install MariaDB-Galera-server MariaDB-client galera
service mysql start
chkconfig mysql on
mysql_secure_installation   (set the root password here)

Install Apache:
yum install -y httpd
systemctl start httpd
systemctl enable httpd

Install PHP:
yum install -y php

Install the PHP extensions:
yum install -y php-mysql php-gd libjpeg* php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-bcmath php-mhash

Now we move on to the Pacemaker and Corosync setup.
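Before wiring the web tier into the cluster, it is worth confirming that the Apache + PHP stack actually serves PHP. A quick way is a phpinfo() probe page; this is a sketch, assuming the default CentOS 7 docroot /var/www/html (adjust if yours differs), and the function name is illustrative:

```shell
# Sketch: write a phpinfo() page into the Apache docroot so the PHP install
# can be verified at http://<node>/info.php (delete the file afterwards,
# since phpinfo output leaks configuration details).
write_php_probe() {
  docroot="${1:-/var/www/html}"
  printf '<?php phpinfo(); ?>\n' > "$docroot/info.php"
}
```

After running it on a node, browse to http://website01/info.php; if the PHP information page renders, httpd and mod_php are working.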
Install the packages (all three nodes need this):
yum install pacemaker pcs corosync fence-agents resource-agents -y

Start the pcs daemon and enable it at boot (again on all three nodes):
systemctl start pcsd
systemctl enable pcsd

Set the hacluster username and password on the control nodes; run this on each one:
echo my-secret-password-no-dont-use-this-one | passwd --stdin hacluster   # website01
echo my-secret-password-no-dont-use-this-one | passwd --stdin hacluster   # website02
echo my-secret-password-no-dont-use-this-one | passwd --stdin hacluster   # website03
Alternatively, set the cluster password interactively:
passwd hacluster
(this example uses 1qaz2wsx,./)

Configure corosync:
[root@controller01 ~]# pcs cluster auth website01 website02 website03

Create a cluster and enable the services:
[root@controller01 ~]# pcs cluster setup --name my-cluster website01 website02 website03

Start the cluster:
pcs cluster start --all
systemctl start corosync
systemctl enable corosync
systemctl start pacemaker
systemctl enable pacemaker

Check with pcs status; the output will include this warning:
WARNING: no stonith devices and stonith-enabled is not false
It appears because there is no fencing device and stonith has not been disabled; simply disable it:
pcs property set stonith-enabled=false

Set the basic cluster properties:
pcs property set pe-warn-series-max=1000 \
  pe-input-series-max=1000 \
  pe-error-series-max=1000 \
  cluster-recheck-interval=3min

Set the default resource stickiness (prevents resources from failing back):
pcs resource defaults resource-stickiness=100
pcs resource defaults

Set the default resource operation timeout:
pcs resource op defaults timeout=90s
pcs resource op defaults

Configure the cluster VIP:
pcs resource create vip ocf:heartbeat:IPaddr2 params ip="122.0.113.146" cidr_netmask="25" op monitor interval="30s"
The VIP should now answer pings.

Configure HAProxy.
Install haproxy with yum:
yum install haproxy -y

Add a haproxy log:
cd /var/log
mkdir haproxy
cd haproxy
touch haproxy.log
chmod a+w haproxy.log

Edit /etc/rsyslog.conf and uncomment these two lines:
$ModLoad imudp
$UDPServerRun 514
Then, below the "# Save boot messages also to boot.log" line (local7.* /var/log/boot.log), add:
local3.* /var/log/haproxy/haproxy.log

Edit /etc/sysconfig/rsyslog and change:
SYSLOGD_OPTIONS=""
to:
SYSLOGD_OPTIONS="-r -m 0 -c 2"

Configure haproxy: edit /etc/haproxy/haproxy.cfg and add this to the global section (I will have the haproxy configuration file prepared):
log 127.0.0.1 local3
systemctl restart rsyslog

Then run the following:
pcs resource create lb-haproxy systemd:haproxy --clone
pcs constraint order start vip then lb-haproxy-clone kind=Optional
pcs constraint colocation add vip with lb-haproxy-clone
echo net.ipv4.ip_nonlocal_bind=1 >> /etc/sysctl.d/haproxy.conf
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind

Also enable haproxy at boot on all three control nodes:
systemctl enable haproxy

Reboot, and once the nodes are back up, check the cluster state with pcs status:
[root@website01 ~]# pcs status
Cluster name: my-cluster
Last updated: Tue Apr 19 20:15:20 2016
Last change: Tue Apr 19 18:11:30 2016 by root via cibadmin on website01
Stack: corosync
Current DC: website01 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 4 resources configured
Online: [ website01 website02 website03 ]

Full list of resources:

 vip    (ocf::heartbeat:IPaddr2):    Started website01
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ website01 website02 website03 ]

PCSD Status:
  website01: Online
  website02: Online
  website03: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Once all of this shows up, you will notice that the httpd and mariadb services have not started; a few changes are needed.

In /etc/httpd/conf/httpd.conf, change the listen line to:
Listen website01:80
Save and quit, then restart the service with systemctl restart httpd.

In /etc/my.cnf.d/server.cnf, under the mysqld section, add the node's own hostname:
bind_address = website01
Save and quit, then restart with service mysql restart.

All of the above must be done on all three nodes.

Next we build the multi-master database cluster with MariaDB and Galera. First stop mariadb on all three nodes:
service mysql stop

Edit the cluster configuration file:
vi /etc/my.cnf.d/server.cnf
Under the [mariadb-5.5] section add:
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://website01,website02,website03
wsrep_cluster_name='websitecluster'
wsrep_node_address='122.0.113.142'
wsrep_node_name='website01'
wsrep_sst_method=rsync

Repeat this on the other two nodes, changing wsrep_node_address='122.0.113.142' to the node's own IP and wsrep_node_name='website01' to its own hostname.

Now start the cluster.
On the primary server run:
service mysql start --wsrep-new-cluster
On the other servers run:
service mysql start

Log in to the database on any node:
mysql -uroot -p
and run:
SHOW STATUS LIKE 'wsrep%';
If the output contains a line like:
wsrep_incoming_addresses | website01:3306,website02:3306,website03:3306
the cluster is configured correctly.

Finally, set up database health checking on every node. /etc/sysconfig/clustercheck:
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="my_clustercheck_password"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"

Create the check account in the database:
GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY 'my_clustercheck_password';
FLUSH PRIVILEGES;

Create the galera check file /etc/xinetd.d/galera-monitor:
# This part is very important: if the format is wrong, port 9200 will not respond.
service galera-monitor
{
  port = 9200
  disable = no
  socket_type = stream
  protocol = tcp
  wait = no
  user = root
  group = root
  groups = yes
  server = /usr/bin/clustercheck
  type = UNLISTED
  per_source = UNLIMITED
  log_on_success =
  log_on_failure = HOST
  flags = REUSE
}

The system does not ship with a clustercheck script, so one has to be created at /usr/bin/clustercheck; I have already prepared it, so just upload it into place. Then make it executable:
chmod -R a+x /usr/bin/clustercheck

Check the status by running:
/usr/bin/clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32

Galera cluster node is synced.
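Once clustercheck answers on port 9200, HAProxy's httpchk makes its routing decision from that HTTP status line, and the same check can be scripted by hand. Here is a sketch that only interprets a raw response string; fetching it from each node (for example over TCP to port 9200) is left to the caller, and the function name is illustrative:

```shell
# Sketch: classify a raw clustercheck response. "200 OK" means the Galera
# node is synced and safe to route traffic to; anything else counts as down.
galera_state() {
  case "$1" in
    *"200 OK"*) echo "synced" ;;
    *)          echo "down"   ;;
  esac
}
```

A caller might loop over website01, website02, and website03, feed each node's port-9200 response through galera_state, and alert when any node reports down.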
Install the xinetd service:
yum install xinetd
Start it:
systemctl daemon-reload
systemctl enable xinetd
systemctl start xinetd

Finally, we need to allow access to the database through the VIP, so run the following in the database on all three nodes:
update mysql.user set Host = '%' where Host = 'localhost' and User = 'root';
flush privileges;

At this point, a stable cluster setup is complete.

Installing ab on CentOS to load-test the website
Usage (run ab with no arguments to see the options):
ab -c 10 -n 1000 http://test.com/
The command above load-tests the test.com home page, simulating 10 concurrent users making 1000 requests in total. When the test finishes, ab prints a report with metrics, such as requests per second and latency, for us to judge the result.

Reposted from: https://blog.51cto.com/mycnarms/2300538
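As a closing aside: when comparing several ab runs against the cluster, it helps to pull the headline figure out of each saved report. This is a sketch (the function name is illustrative; the field position follows ab's standard "Requests per second:" report line):

```shell
# Sketch: extract the "Requests per second" value from a saved ab report
# file so successive benchmark runs can be compared numerically.
# Report line looks like: "Requests per second:    123.45 [#/sec] (mean)"
ab_rps() {
  awk '/^Requests per second/ {print $4}' "$1"
}
```

For example, `ab ... > run1.txt` followed by `ab_rps run1.txt` yields just the number, easy to tabulate across runs.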