Enterprise Keepalived High-Availability Project in Practice


1. Keepalived VRRP Introduction

What is keepalived

  keepalived is a piece of service software used in cluster management to keep a cluster highly available; it prevents single points of failure.

How keepalived works
    keepalived is implemented on top of the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol.

    VRRP can be thought of as a protocol for achieving high availability: N routers that provide the same function are grouped into a router group containing one master and several backups. The master holds a VIP that serves outside requests (the other machines on that LAN use this VIP as their default route) and sends multicast advertisements; when the backups stop receiving VRRP packets, they consider the master down, and a new master is elected from the backups according to VRRP priority. This keeps the router group highly available.

With keepalived you can build a cluster in which, while the service is running, the backup machines continuously receive the VRRP packets sent by the keepalived master; when those packets stop arriving, the cluster assumes the master is down and elects a new machine as master. That is how high availability is achieved.
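To see this mechanism on the wire, the advertisements can be observed with tcpdump; a quick illustration (the interface name ens33 is an assumption; VRRP is IP protocol 112 and uses the multicast group 224.0.0.18):

tcpdump -nn -i ens33 'ip proto 112'
# or, filtering on the VRRP multicast group:
tcpdump -nn -i ens33 'host 224.0.0.18'

On a healthy pair, only the current master should be sending these packets, once per advert_int.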

The three main modules of keepalived:

keepalived consists of three modules: core, check, and vrrp.

core module:
the core of keepalived; responsible for starting and maintaining the main process
and for loading and parsing the global configuration file

check module:
responsible for health checks, covering the common check methods

vrrp module:
implements the VRRP protocol

Split brain:

A keepalived BACKUP node switches itself to master when it stops receiving the MASTER's advertisements. If the communication link between the two nodes fails so that neither receives the other's multicast announcements while both are actually working normally, then both nodes become master and each force-binds the virtual IP, with unpredictable consequences. This is split brain.

Solutions:
1. Add more detection paths, such as redundant heartbeat links (two NICs doing health monitoring), pinging the peer, and so on, to reduce the chance of split brain. (This treats the symptom rather than the cause; it only raises the probability of detecting it.)
2. Set up monitoring and alerting for split brain (e-mail, SMS, or an on-call rotation) so that a person can step in and arbitrate as soon as the problem occurs, limiting the damage. For example, Baidu's alerting SMS supports both downlink and uplink: the alert goes to the admin's phone, and the admin can reply with a number or a short string that the server acts on automatically, which shortens the time to resolve the fault.
3. Fencing ("shoot the other node in the head"): forcibly stop the master, then check the firewalls and the network connectivity between the machines.

In short: when the BACKUP stops receiving the MASTER's advertisements while both nodes are still working but cannot communicate, both will claim the VIP; that is how split brain easily arises. A minimal detection sketch follows.
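A minimal external check along the lines of solution 1, run from a third machine or from cron; the VIP, node addresses, and alert address are placeholders, and it assumes password-less ssh to both nodes and a working mail command:

#!/bin/bash
# alert if the VIP is active on both keepalived nodes at once
VIP=192.168.246.16
NODE1=192.168.246.169
NODE2=192.168.246.161
on1=$(ssh root@$NODE1 "ip addr show | grep -c 'inet $VIP/'")
on2=$(ssh root@$NODE2 "ip addr show | grep -c 'inet $VIP/'")
if [ "$on1" -ge 1 ] && [ "$on2" -ge 1 ]; then
    echo "split brain: $VIP is up on both $NODE1 and $NODE2" \
        | mail -s "keepalived split brain" admin@example.com
fi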

2. Nginx + keepalived for Layer-7 Load Balancing (services of the same type)

Nginx implements load balancing through its upstream module.

Load-balancing algorithms supported by upstream (sketched below):

Round robin (default): the weight parameter sets the round-robin weight; the higher the weight, the more often the server is scheduled
ip_hash: provides session persistence by always scheduling the same client IP to the same backend server, which solves the session problem; cannot be combined with weight
fair: schedules by requested page size and load time; requires the third-party upstream_fair module
url_hash: schedules by a hash of the requested url so that each url always goes to the same server; requires the third-party url_hash module
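A short sketch of what the first two built-in algorithms look like in an upstream block (server addresses are placeholders):

upstream pool_rr {              # weighted round robin (the default)
    server 192.168.1.11:80 weight=1;
    server 192.168.1.12:80 weight=2;   # scheduled twice as often
}
upstream pool_hash {            # session persistence by client IP
    ip_hash;
    server 192.168.1.11:80;
    server 192.168.1.12:80;
}

fair and url_hash follow the same shape but require their third-party modules to be compiled in.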
Install nginx on all machines; stop the firewall and disable selinux.
[root@nginx-proxy ~]# cd /etc/yum.repos.d/
[root@nginx-proxy yum.repos.d]# vim nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
[root@nginx-proxy yum.repos.d]# yum install yum-utils -y
[root@nginx-proxy yum.repos.d]# yum install nginx -y
Scheduling to different groups of backend servers
Scheduling by site partition
=================================================================================
Topology
[vip: 20.20.20.20]
[LB1 Nginx]		[LB2 Nginx]
192.168.1.2		192.168.1.3
[index]		[milis]		 [videos]	   [images]  	  [news]
1.11		 1.21		   1.31			  1.41		   1.51
1.12		 1.22		   1.32			  1.42		   1.52
1.13		 1.23		   1.33			  1.43		   1.53
...		 ...		    ...			  ...		    ...
/web     /web/milis    /web/videos     /web/images   /web/news
index.html  index.html     index.html      index.html   index.html
I. Implementation
1. Pick two nginx servers to act as proxy servers.
2. Install keepalived on both proxies to build HA and generate the VIP.
3. Configure nginx load balancing.
The configuration files on the two nginx servers are identical.
Schedule according to the site partitions.
Configure the upstream file.
Stop the firewall and disable selinux on all machines:
systemctl stop firewalld && setenforce 0
[root@nginx-proxy ~]# cd /etc/nginx/conf.d/
[root@nginx-proxy conf.d]# mv default.conf default.conf.bak
[root@nginx-proxy conf.d]# vim upstream.conf
upstream index {
    server 192.168.246.162:80 weight=1 max_fails=2 fail_timeout=2;
    server 192.168.246.163:80 weight=2 max_fails=2 fail_timeout=2;
}
[root@nginx-proxy conf.d]# vim proxy.conf
server {
    listen 80;
    server_name     localhost;
    access_log  /var/log/nginx/host.access.log  main;
    location / {
        proxy_pass http://index;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Copy the nginx configuration files to the other proxy server:
[root@nginx-proxy-master conf.d]# scp proxy.conf 192.168.246.161:/etc/nginx/conf.d/ 
[root@nginx-proxy-master conf.d]# scp upstream.conf 192.168.246.161:/etc/nginx/conf.d/
II. Keepalived for director HA
Note: both the master and the backup director must be able to schedule normally on their own.
1. Install the software on the master/backup directors
[root@nginx-proxy-master ~]# yum install -y keepalived
[root@nginx-proxy-slave ~]# yum install -y keepalived
[root@nginx-proxy-slave ~]# cd /etc/nginx/conf.d/
[root@nginx-proxy-slave conf.d]# mv default.conf default.conf.bak
[root@nginx-proxy-master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@nginx-proxy-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id directory1      # directory2 on the backup
}
vrrp_instance VI_1 {
    state MASTER              # defines master vs backup
    interface ens33           # interface the VIP binds to
    virtual_router_id 80      # identical across the whole cluster
    priority 100              # change to 50 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.16/24
    }
}
[root@nginx-porxy-slave ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@nginx-proxy-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id directory2
}
vrrp_instance VI_1 {
    state BACKUP              # set to backup
    interface ens33
    nopreempt                 # set on the backup: do not preempt the VIP back
    virtual_router_id 80
    priority 50               # lowered to 50 on the backup
    advert_int 1              # advertisement interval, 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.16/24
    }
}
3. Start keepalived (start on both master and backup)
[root@nginx-proxy-master ~]# systemctl start keepalived
[root@nginx-proxy-master ~]# systemctl enable keepalived
[root@nginx-porxy-slave ~]# systemctl start keepalived
[root@nginx-porxy-slave ~]# systemctl enable keepalived
[root@nginx-proxy-master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:48:07:7d brd ff:ff:ff:ff:ff:ff
inet 192.168.246.169/24 brd 192.168.246.255 scope global dynamic ens33
valid_lft 1726sec preferred_lft 1726sec
inet 192.168.246.16/24 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::23e9:de18:1e67:f152/64 scope link 
valid_lft forever preferred_lft forever
Test:
Browse to http://192.168.246.16
If it responds normally, shut down the keepalived master node and check whether the vip fails over.
At this point:
keepalived handles heartbeat failure,
but it does not handle Nginx service failure: the heartbeat only confirms that the keepalived master node is alive, not that the nginx service is running.
Note: after `systemctl stop keepalived`, keepalived shows as dead, but `ps -ef | grep keepalived` still shows the processes running.
Fix: edit /usr/lib/systemd/system/keepalived.service and comment out "KillMode=process".
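A less invasive way to apply the same fix, so it survives package upgrades (a sketch; KillMode=control-group is systemd's default and makes the whole control group be killed on stop):

systemctl edit keepalived
# in the editor that opens, add:
#   [Service]
#   KillMode=control-group
systemctl restart keepalived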
1. Extend with a health check of Nginx on the directors (optional; set up on both machines)
Idea:
Have Keepalived run an external script at a fixed interval; the script stops the local Keepalived when Nginx has failed.
(1) script
[root@nginx-proxy-master ~]# vim /etc/keepalived/check_nginx_status.sh
#!/bin/bash
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
    # /etc/init.d/keepalived stop
    systemctl stop keepalived
fi
[root@nginx-proxy-master ~]# chmod a+x /etc/keepalived/check_nginx_status.sh
(2) Use the script from keepalived
[root@nginx-proxy-master ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id director1
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx_status.sh"
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.16/24
    }
    track_script {
        check_nginx
    }
}
Note: nginx must be started before keepalived.
Test:
Stop the nginx service on the keepalived master node and check whether the vip fails over; if it does, the setup works.
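One way to watch the failover from a client while stopping nginx on the master (the VIP is the one configured above):

while true; do
    curl -s -o /dev/null -w "%{http_code}\n" http://192.168.246.16
    sleep 1
done

A brief run of connection errors while the backup takes over the VIP is expected, after which the 200 responses resume.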
3. LVS_Director + KeepAlived

KeepAlived's roles in this project:
1. Manage the IPVS routing table (including health checks of the RealServers)
2. Provide HA (high availability) for the director
http://www.keepalived.org
External scripts executed by Keepalived should be referenced by absolute path.
=================================================================================
Implementation steps:
1. Install the software on the master/backup directors
[root@lvs-keepalived-master ~]# yum -y install ipvsadm keepalived 
[root@lvs-keepalived-slave ~]# yum -y install ipvsadm keepalived
2. Keepalived configuration
lvs-master:
[root@ha-proxy-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lvs-keepalived-master    # lvs-keepalived-slave on the backup
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33                    # interface the VIP binds to
    virtual_router_id 80               # VRID; master and backup in one group must match
    priority 100                       # this node's priority; change to 50 on the backup
    advert_int 1                       # check interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.110/24
    }
}
virtual_server 192.168.246.110 80 {    # LVS configuration
    delay_loop 3                       # health-check polling interval in seconds
    lb_algo rr                         # LVS scheduling algorithm
    lb_kind DR                         # LVS cluster mode (direct routing)
    nat_mask 255.255.255.0
    protocol TCP                       # protocol used for health checks
    real_server 192.168.246.162 80 {
        weight 1
        inhibit_on_failure             # on failure, set the weight to 0 instead of removing the node from IPVS
        TCP_CHECK {                    # health check
            connect_port 80            # port to check
            connect_timeout 3          # connection timeout
        }
    }
    real_server 192.168.246.163 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
        }
    }
}
[root@lvs-keepalived-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lvs-keepalived-slave
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt                          # do not preempt the VIP back
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.110/24
    }
}
virtual_server 192.168.246.110 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    protocol TCP
    real_server 192.168.246.162 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
    real_server 192.168.246.163 80 {
        weight 1
        inhibit_on_failure
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
        }
    }
}
3. Start keepalived (on both master and backup)
[root@lvs-keepalived-master ~]# systemctl start keepalived
[root@lvs-keepalived-master ~]# systemctl enable keepalived
[root@lvs-keepalived-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.246.110:80 rr persistent 20
-> 192.168.246.162:80           Route   1      0          0         
-> 192.168.246.163:80           Route   0      0          0
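Optional: keep the IPVS table on screen while testing; because of inhibit_on_failure a failed real server stays listed with Weight 0 instead of disappearing:

watch -n1 ipvsadm -Ln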
4. Configure all RSes (nginx1, nginx2)
Set up the web servers and test all RSes.
[root@test-nginx1 ~]# yum install -y nginx
[root@test-nginx2 ~]# yum install -y nginx
[root@test-nginx1 ~]# ip addr add dev lo 192.168.246.110/32
[root@test-nginx2 ~]# ip addr add dev lo 192.168.246.110/32
[root@real-server1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore    # ignore ARP broadcasts
[root@real-server1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce  # reply only from the exactly matching ip address
[root@test-nginx1 ~]# sysctl -p
[root@real-server2 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore    # ignore ARP broadcasts
[root@real-server2 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce  # reply only from the exactly matching ip address
[root@test-nginx2 ~]# sysctl -p
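Note that values echoed into /proc do not survive a reboot (and sysctl -p only re-applies what is in the config file). A common way to persist them, on every real server:

cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
EOF
sysctl -p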
[root@test-nginx1 ~]# echo "web1..." >> /usr/share/nginx/html/index.html
[root@test-nginx2 ~]# echo "web2..." >> /usr/share/nginx/html/index.html
[root@test-nginx1 ~]# systemctl start nginx
LB cluster tests:
with all directors and Real Servers healthy;
with the master director failing and recovering.

Homework:

MySQL + Keepalived

Keepalived + mysql automatic failover
Project environment:
VIP 192.168.246.100
mysql1 192.168.246.162      keepalived-master
mysql2 192.168.246.163      keepalived-slave
I. mysql master-master replication (each the other's master)        (no shared storage; data kept on local storage)
II. Install keepalived
III. keepalived master/backup configuration files
IV. mysql status-check script /etc/keepalived/keepalived_check_mysql.sh
V. Testing and diagnostics
======================================================
Implementation steps:
I. mysql replication setup
===================================================================================
Run on both mysql nodes:
[root@mysql-keepalived-master ~]# yum -y install mariadb-server mariadb
[root@mysql-keepalived-master ~]# systemctl start mariadb
[root@mysql-keepalived-master ~]# mysql
Create database qf1 on node 1, for testing:
MariaDB [(none)]> create database qf1;
MariaDB [(none)]> quit;
Create a user that a client can use for connection tests:
MariaDB [(none)]> grant all privileges on *.* to root@'%' identified by '123456';
MariaDB [(none)]> flush privileges;
[root@mysql-keepalived-slave ~]# yum -y install mariadb-server mariadb
[root@mysql-keepalived-slave ~]# systemctl start mariadb
[root@mysql-keepalived-slave ~]# mysql
Create database qf2 on node 2, for testing:
MariaDB [(none)]> create database qf2;
MariaDB [(none)]> quit;
Create a user that a client can use for connection tests:
MariaDB [(none)]> grant all privileges on *.* to root@'%' identified by '123456';
MariaDB [(none)]> flush privileges;
====================================================================================
II. Install keepalived — on both machines
[root@mysql-keepalived-master ~]# yum -y install keepalived
[root@mysql-keepalived-slave ~]# yum -y install keepalived
III. keepalived master/backup configuration files
master configuration on 192.168.246.162
[root@mysql-keepalived-master ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@mysql-keepalived-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master
}
vrrp_script check_run {
    script "/etc/keepalived/keepalived_check_mysql.sh"
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 89
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.18/24
    }
    track_script {
        check_run
    }
}
===========================================================================
slave configuration on 192.168.246.163
[root@mysql-keepalived-slave ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@mysql-keepalived-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id backup
}
vrrp_script check_run {
    script "/etc/keepalived/keepalived_check_mysql.sh"
    interval 5                     # interval between script runs
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 89
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.18/24
    }
    track_script {
        check_run
    }
}
IV. mysql status-check script /etc/keepalived/keepalived_check_mysql.sh (the same script on both MySQL machines)
Version 1, simple and direct:
[root@mysql-keepalived-master ~]# vim /etc/keepalived/keepalived_check_mysql.sh
#!/bin/bash
/usr/bin/mysql -uroot -p'QianFeng@2019!' -e "show status" &>/dev/null
if [ $? -ne 0 ] ;then
    # service keepalived stop
    systemctl stop keepalived
fi
[root@mysql-keepalived-master ~]# chmod +x /etc/keepalived/keepalived_check_mysql.sh
==========================================================================
Start keepalived on both nodes.
Method 1:
[root@mysql-keepalived-master ~]# systemctl restart keepalived
[root@mysql-keepalived-master ~]# systemctl enable keepalived
Method 2:
# /etc/init.d/keepalived start
# /etc/init.d/keepalived start
# chkconfig --add keepalived
# chkconfig keepalived on
Note: use any machine as the client; when testing, remember to check whether the mysql user is allowed to log in remotely.
Connect from the client to test:
[root@client ~]# mysql -uroot -p -h 192.168.246.18
Enter password: 
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| qf1                |
| test               |
+--------------------+
The qf1 database is visible.
Stop the mariadb service on node 1 and check whether the vip fails over:
[root@mysql-keepalived-master ~]# systemctl stop mariadb
The vip moves to node 2.
Connect again from the client:
[root@client ~]# mysql -uroot -p -h 192.168.246.18
Enter password: 
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| qf2                |
| test               |
+--------------------+
The qf2 database is visible.

MySQL + Keepalived master-master replication

Environment description:
For the mysql installation, see: http://www.cnblogs.com/kevingrace/p/6109679.html
CentOS 6.8
Master1: 192.168.15.238        mysql and keepalived installed
Master2: 192.168.15.237        mysql and keepalived installed
VIP: 192.168.15.236
To get master-master replication, first set up master-slave replication from master1 to master2, then master-slave replication from master2 to master1;
together the two directions form master-master replication.
Mind the following points:
1) Make sure the network between the replicating servers works: they must be able to ping each other and to connect to each other's database with the granted credentials (open port 3306 in the firewall).
2) Disable selinux.
3) Before replication starts, the data to be replicated must already be identical on both sides; once the replication environment is in place, later updates replicate as expected.

Possible problems

Error:
Last_IO_Error: Fatal error: The slave I/O thread stops because master and slave have equal MySQL server ids; these ids must be different for replication to work (or the --replicate-same-server-id option must be used on slave but this does not always make sense; please check the manual before using it).
Fix:
Delete the auto.cnf file in the mysql data directory and restart the mysql service!
Also note: Keepalived must be started as the root account!!
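The concrete steps for the auto.cnf fix above, as a sketch (the datadir /var/lib/mysql and the init script name are assumptions; check the datadir setting in my.cnf):

/etc/init.d/mysql stop
rm -f /var/lib/mysql/auto.cnf     # a fresh server UUID is generated at startup
/etc/init.d/mysql start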

I. MySQL master-master replication deployment

--------------- operations on master1 ---------------
Add the following to the [mysqld] section of my.cnf:
[root@master1 ~]# vim /usr/local/mysql/my.cnf
server-id = 1         
log-bin = mysql-bin     
sync_binlog = 1
binlog_checksum = none
binlog_format = mixed
auto-increment-increment = 2     
auto-increment-offset = 1    
slave-skip-errors = all      
[root@master1 ~]# /etc/init.d/mysql restart
Shutting down MySQL. SUCCESS!
Starting MySQL.. SUCCESS!
Grant replication privileges (open port 3306 in the iptables firewall) so the I/O thread can connect to the master as this user and read its binary log:
mysql> grant replication slave,replication client on *.* to wang@'182.148.15.%' identified by "wang@123";
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
It is best to lock the databases read-only to guarantee data consistency; unlock after the master-master environment has been deployed.
While locked, no writes to the tables are possible, but restarting the mysql service automatically releases the lock!
mysql> flush tables with read lock;  // note: with this lock set, remember to unlock before syncing the peer's data into your own database!
Query OK, 0 rows affected (0.00 sec)
Check the binlog file name and position:
mysql> show master status;
+------------------+----------+--------------+--------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------+-------------------+
| mysql-bin.000004 |      430 |              | mysql,information_schema |                   |
+------------------+----------+--------------+--------------------------+-------------------+
1 row in set (0.00 sec)
--------------- operations on master2 ---------------
Add the following to the [mysqld] section of my.cnf:
[root@master2 ~]# vim /usr/local/mysql/my.cnf
server-id = 2        
log-bin = mysql-bin    
sync_binlog = 1
binlog_checksum = none
binlog_format = mixed
auto-increment-increment = 2     
auto-increment-offset = 2    
slave-skip-errors = all
[root@master2 ~]# /etc/init.d/mysql restart
Shutting down MySQL.. SUCCESS!
Starting MySQL.. SUCCESS!
mysql> grant replication slave,replication client on *.* to wang@'182.148.15.%' identified by "wang@123";
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> flush tables with read lock;
Query OK, 0 rows affected (0.00 sec)
mysql> show master status;
+------------------+----------+--------------+--------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------+-------------------+
| mysql-bin.000003 |      430 |              | mysql,information_schema |                   |
+------------------+----------+--------------+--------------------------+-------------------+
1 row in set (0.00 sec)
--------------- replication setup on master1 ---------------
mysql> unlock tables;     // unlock first, then sync the peer's data into the local database
mysql> stop slave;
mysql> change  master to master_host='182.148.15.237',master_user='wang',master_password='wang@123',master_log_file='mysql-bin.000003',master_log_pos=430;         
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
Check the replication status; the two "Yes" values shown below mean replication is working:
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 182.148.15.237
Master_User: wang
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 430
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 279
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
.........................
Seconds_Behind_Master: 0
.........................
master1 now replicates from master2, i.e. master1 syncs master2's data.
--------------- replication setup on master2 ---------------
mysql> unlock tables;     // unlock first, then sync the peer's data into the local database
mysql> stop slave;
mysql> change  master to master_host='182.148.15.238',master_user='wang',master_password='wang@123',master_log_file='mysql-bin.000004',master_log_pos=430;  
Query OK, 0 rows affected, 2 warnings (0.06 sec)
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 182.148.15.238
Master_User: wang
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 430
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 279
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
........................
Seconds_Behind_Master: 0
........................
master2 now replicates from master1 as well, i.e. master2 also syncs master1's data.
Both directions are in place, so mysql master-master replication is established.
If after running for a while replication develops problems (for example only one direction still works), you can re-run the change master steps above; replication will then only cover updates made after that point. Now verify the data:
----------------- verifying master-master replication -----------------
1) Write new data on master1
mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)
mysql> create database huanqiu;
Query OK, 1 row affected (0.01 sec)
mysql> use huanqiu;
Database changed
mysql> create table if not exists haha (
-> id int(10) PRIMARY KEY AUTO_INCREMENT,
-> name varchar(50) NOT NULL);
Query OK, 0 rows affected (0.04 sec)
mysql> insert into haha values(1,"王士博");
Query OK, 1 row affected (0.00 sec)
mysql> insert into haha values(2,"郭慧慧");
Query OK, 1 row affected (0.00 sec)
mysql> select * from haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | 王士博    |
|  2 | 郭慧慧    |
+----+-----------+
2 rows in set (0.00 sec)
Then check on master2: the data has been replicated!
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| huanqiu            |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
mysql> use huanqiu;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+-------------------+
| Tables_in_huanqiu |
+-------------------+
| haha              |
+-------------------+
1 row in set (0.00 sec)
mysql> select * from haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | 王士博    |
|  2 | 郭慧慧    |
+----+-----------+
2 rows in set (0.00 sec)
2) Write new data on master2
mysql> create database hehe;
Query OK, 1 row affected (0.00 sec)
mysql> insert into huanqiu.haha values(3,"周正"),(4,"李敏");
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
Then check on master1: the data has been replicated there too!
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hehe               |
| huanqiu            |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.00 sec)
mysql> select * from huanqiu.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | 王士博    |
|  2 | 郭慧慧    |
|  3 | 周正      |
|  4 | 李敏      |
+----+-----------+
4 rows in set (0.00 sec)

II. Configure the MySQL + Keepalived failover HA environment

1) Install keepalived from source (shown on master1; do the same on master2):
[root@master1 ~]# yum install -y openssl-devel
[root@master1 ~]# cd /usr/local/src/
[root@master1 src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@master1 src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@master1 src]# cd keepalived-1.3.5
[root@master1 keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@master1 keepalived-1.3.5]# make && make install
[root@master1 keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@master1 keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@master1 keepalived-1.3.5]# mkdir /etc/keepalived/
[root@master1 keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@master1 keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@master1 keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
2) keepalived.conf on master1. (The configuration below does not use lvs load balancing, so no virtual server section is needed.)
[root@master1 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@master1 ~]# vim /etc/keepalived/keepalived.conf   # clear the default content and use the configuration below:
! Configuration File for keepalived
global_defs {
    notification_email {
        ops@wangshibo.cn
        tech@wangshibo.cn
    }
    notification_email_from ops@wangshibo.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MASTER-HA
}
vrrp_script chk_mysql_port {     # checks whether the mysql service is running; many methods exist (process check, script, ...)
    script "/opt/chk_mysql.sh"   # monitored via a script here
    interval 2                   # run the script every 2s
    weight -5                    # priority change on failure: subtract 5 when the script returns non-zero
    fall 2                       # only 2 consecutive failures count as a real failure; weight then lowers the priority (1-255)
    rise 1                       # one success counts as recovered, but the priority is not changed
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0               # interface that carries the virtual ip
    mcast_src_ip 182.148.15.238
    virtual_router_id 51         # router id; MASTER and BACKUP must match
    priority 101                 # the higher the number, the higher the priority; within one vrrp_instance the MASTER's priority must exceed the BACKUP's, so the MASTER can grab the VIP back after recovering
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.236
    }
    track_script {
        chk_mysql_port
    }
}
3) Write the failover script. Keepalived provides the heartbeat; if the Master's MySQL service dies (port 3306 down), the script makes the node stop its own keepalived, the Slave's keepalived notices through the heartbeat, and it takes over the VIP's traffic.
[root@master1 ~]# vim /opt/chk_mysql.sh
#!/bin/bash
# stop keepalived when nothing is listening on port 3306
counter=$(netstat -na|grep "LISTEN"|grep "3306"|wc -l)
if [ "${counter}" -eq 0 ]; then
    /etc/init.d/keepalived stop
fi
[root@master1 ~]# chmod 755 /opt/chk_mysql.sh
Start the keepalived service:
[root@master1 ~]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
4) keepalived configuration on master2. Its keepalived.conf differs only in state BACKUP, priority 99, and mcast_src_ip set to the local IP; nopreempt is not set.
[root@master2 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@master2 ~]# >/etc/keepalived/keepalived.conf
[root@master2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        ops@wangshibo.cn
        tech@wangshibo.cn
    }
    notification_email_from ops@wangshibo.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MASTER-HA
}
vrrp_script chk_mysql_port {
    script "/opt/chk_mysql.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 182.148.15.237
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        182.148.15.236
    }
    track_script {
        chk_mysql_port
    }
}
[root@master2 ~]# cat /opt/chk_mysql.sh
#!/bin/bash
counter=$(netstat -na|grep "LISTEN"|grep "3306"|wc -l)
if [ "${counter}" -eq 0 ]; then
    /etc/init.d/keepalived stop
fi
[root@master2 ~]# chmod 755 /opt/chk_mysql.sh
[root@master2 ~]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
5) Grant root remote login on both master1 and master2, for client-side login tests:
mysql> grant all on *.* to root@'%' identified by "1234567";
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
6) Set iptables firewall rules on both master1 and master2, as follows:
[root@master1 ~]# cat /etc/sysconfig/iptables
........
-A INPUT -s 182.148.15.0/24 -d 224.0.0.18 -j ACCEPT       # allow multicast traffic
-A INPUT -s 182.148.15.0/24 -p vrrp -j ACCEPT             # allow VRRP (Virtual Router Redundancy Protocol) traffic
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT    # open mysql port 3306
[root@master1 ~]# /etc/init.d/iptables restart

MySQL + keepalived failover HA testing

1) Connect with a MySQL client through the VIP and check that the connection succeeds.
For example, from a remote test machine the vip address connects normally (the credentials used below must have been granted on the server side beforehand):
[root@dev-new-test ~]# mysql -h182.148.15.236 -uroot -p123456
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 60
Server version: 5.6.35-log Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select * from huanqiu.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | 王士博    |
|  2 | 郭慧慧    |
|  3 | 周正      |
|  4 | 李敏      |
+----+-----------+
4 rows in set (0.00 sec)
2) By default the vip sits on master1. Use "ip addr" to watch the vip move:
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:3c:25:42 brd ff:ff:ff:ff:ff:ff
inet 182.148.15.238/27 brd 182.148.15.255 scope global eth0
inet 182.148.15.236/32 scope global eth0                              // this /32 vip address shows the resource is still on master1
inet 182.148.15.236/27 brd 82.48.115.255 scope global secondary eth0:0
inet6 fe80::5054:ff:fe3c:2542/64 scope link
valid_lft forever preferred_lft forever
Stop the mysql service on master1. Per the script in the configuration, when mysql stops, keepalived stops too, so the vip resource switches to master2. (And while the mysql service is down, keepalived cannot be started successfully either!)
[root@master1 ~]# /etc/init.d/mysql stop
Shutting down MySQL.. SUCCESS!
[root@master1 ~]# ps -ef|grep mysql
root     25812 21588  0 17:30 pts/0    00:00:00 grep mysql
[root@master1 ~]# ps -ef|grep keepalived
root     25814 21588  0 17:30 pts/0    00:00:00 grep keepalived
[root@master1 ~]# ip addr 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:3c:25:42 brd ff:ff:ff:ff:ff:ff
inet 182.148.15.238/27 brd 182.148.15.255 scope global eth0
inet 182.148.15.236/27 brd 82.48.115.255 scope global secondary eth0:0
inet6 fe80::5054:ff:fe3c:2542/64 scope link
valid_lft forever preferred_lft forever
As shown above, the /32 vip is gone, which means the vip resource has left master1.
master1's system log also shows the vip resource being released:
[root@master1 ~]# tail -f /var/log/messages
Apr 15 17:17:43 localhost Keepalived_vrrp[23037]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:17:48 localhost Keepalived_vrrp[23037]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:17:48 localhost Keepalived_vrrp[23037]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.148.15.236
Apr 15 17:17:48 localhost Keepalived_vrrp[23037]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:17:48 localhost Keepalived_vrrp[23037]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:17:48 localhost Keepalived_vrrp[23037]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:17:48 localhost Keepalived_vrrp[23037]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:39 localhost Keepalived_healthcheckers[23036]: Stopped
Apr 15 17:30:39 localhost Keepalived_vrrp[23037]: VRRP_Instance(VI_1) sent 0 priority
Apr 15 17:30:39 localhost Keepalived_vrrp[23037]: VRRP_Instance(VI_1) removing protocol VIPs.
On master2, the vip resource has indeed switched over:
[root@master2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:95:1f:6d brd ff:ff:ff:ff:ff:ff
inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
inet 182.148.15.236/32 scope global eth0
inet6 fe80::5054:ff:fe95:1f6d/64 scope link
valid_lft forever preferred_lft forever
master2's system log:
[root@master2 ~]# tail -f /var/log/messages
Apr 15 17:30:41 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:41 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:41 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:41 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
3) Start master1's mysql and keepalived services again. (Note: if you restart mysql, you must also start keepalived afterwards, because a mysql restart makes the script shut keepalived down.)
Note: always start the mysql service first and keepalived second. If keepalived is started first while mysql is down, the configuration above automatically shuts keepalived down again.
[root@master1 ~]# /etc/init.d/mysql start
Starting MySQL.. SUCCESS!
[root@master1 ~]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
After starting the two services, wait a short while and you will see the vip resource switch back from master2 to master1:
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:3c:25:42 brd ff:ff:ff:ff:ff:ff
inet 182.148.15.238/27 brd 182.148.15.255 scope global eth0
inet 182.148.15.236/32 scope global eth0
inet 182.148.15.236/27 brd 82.48.115.255 scope global secondary eth0:0
inet6 fe80::5054:ff:fe3c:2542/64 scope link
valid_lft forever preferred_lft forever
[root@master1 ~]# tail -f /var/log/messages
Apr 15 17:40:41 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:41 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:41 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:41 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:46 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:46 localhost Keepalived_vrrp[27002]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.148.15.236
Apr 15 17:40:46 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:46 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:46 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:46 localhost Keepalived_vrrp[27002]: Sending gratuitous ARP on eth0 for 182.148.15.236
Looking at master2 again, the recovered master1 has grabbed the vip resource back:
[root@master2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:95:1f:6d brd ff:ff:ff:ff:ff:ff
inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
inet6 fe80::5054:ff:fe95:1f6d/64 scope link
valid_lft forever preferred_lft forever
[root@master2 ~]# tail -f /var/log/messages
Apr 15 17:30:41 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:30:46 localhost Keepalived_vrrp[8731]: Sending gratuitous ARP on eth0 for 182.148.15.236
Apr 15 17:40:41 localhost Keepalived_vrrp[8731]: VRRP_Instance(VI_1) Received advert with higher priority 101, ours 99
Apr 15 17:40:41 localhost Keepalived_vrrp[8731]: VRRP_Instance(VI_1) Entering BACKUP STATE
Apr 15 17:40:41 localhost Keepalived_vrrp[8731]: VRRP_Instance(VI_1) removing protocol VIPs.
4) Likewise, stopping the keepalived service on master1 automatically switches the vip resource to master2, and when master1's keepalived service recovers, it switches the vip resource back again.
[root@master1 ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                          [  OK  ]
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:3c:25:42 brd ff:ff:ff:ff:ff:ff
inet 182.148.15.238/27 brd 182.148.15.255 scope global eth0
inet 182.148.15.236/27 brd 82.48.115.255 scope global secondary eth0:0
inet6 fe80::5054:ff:fe3c:2542/64 scope link
valid_lft forever preferred_lft forever
Check master2: the vip has switched over:
[root@master2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:95:1f:6d brd ff:ff:ff:ff:ff:ff
inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
inet 182.148.15.236/32 scope global eth0
inet6 fe80::5054:ff:fe95:1f6d/64 scope link
valid_lft forever preferred_lft forever
Restore master1's keepalived service, and the vip resource quickly switches back:
[root@master1 ~]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:3c:25:42 brd ff:ff:ff:ff:ff:ff
inet 182.148.15.238/27 brd 182.148.15.255 scope global eth0
inet 182.148.15.236/32 scope global eth0
inet 182.148.15.236/27 brd 82.48.115.255 scope global secondary eth0:0
inet6 fe80::5054:ff:fe3c:2542/64 scope link
valid_lft forever preferred_lft forever
Check master2 again: the vip resource has been taken away:
[root@master2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:95:1f:6d brd ff:ff:ff:ff:ff:ff
inet 182.148.15.237/27 brd 182.148.15.255 scope global eth0
inet6 fe80::5054:ff:fe95:1f6d/64 scope link
valid_lft forever preferred_lft forever
Throughout these vip switches, clients connecting to mysql through the vip are almost completely unaffected.

Keepalived preemptive and non-preemptive modes

keepalived runs as a daemon on a linux host, based on the VRRP protocol, and performs health checks according to its configuration file.
VRRP is an election protocol that can dynamically assign responsibility for a virtual router to one of the VRRP routers on a LAN.
The VRRP router that controls the virtual router's IP addresses is called the master router; it forwards packets sent to those virtual IP addresses.
Once the master router becomes unavailable, the election process provides dynamic failover, which lets the virtual router's IP address serve as the default first-hop router for end hosts.
keepalived elects master and backup via multicast, unicast, and other (configurable) means. It works in either preemptive or non-preemptive mode, controlled by the nopreempt parameter; a configuration sketch of a non-preemptive pair follows at the end of this section.
1) Preemptive mode:
While the master service works normally, the virtual IP stays on the master and the backup serves nothing; as soon as the master's priority falls below the backup's, the backup automatically grabs the virtual IP; the master then stops serving and the backup serves instead.
In other words, in preemptive mode the master/backup labels do not matter; only priority counts.
With configurations like the ones above, no matter whether state is set to master or backup in keepalived.conf, only the higher priority wins (generally, the MASTER's priority is set higher than the BACKUP's).
After a failure and recovery, the node with the higher priority automatically grabs the VIP resource back!!
2) Non-preemptive mode:
This mode is controlled by the nopreempt parameter (usually placed on the line below advert_int). Priority is ignored: whenever the MASTER machine fails, the VIP resource switches to the BACKUP.
And when the MASTER machine recovers, it does not grab the VIP resource back; only when the BACKUP machine fails does the VIP automatically switch back.
Important:
The nopreempt parameter can only be used when state is backup, so to get keepalived's non-preemptive mode, set state to backup on both the master and the backup!
That is:
a) With one state master and one state backup, adding nopreempt or not makes no difference: priority decides who grabs the vip resource; that is preemptive mode!
b) With both states set to backup and no nopreempt parameter, priority still decides who grabs the vip resource; that is still preemptive mode.
c) With both states set to backup and nopreempt configured, priority is no longer considered; that is non-preemptive mode! Only when the machine currently holding the vip fails can the other machine take it over; even the higher-priority machine does not actively grab the vip back after recovering, and must wait for the peer to fail before the vip switches back.
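A minimal sketch of case c), a non-preemptive pair (interface, VRID, and addresses are placeholders taken from the examples above):

vrrp_instance VI_1 {
    state BACKUP              # BACKUP on BOTH nodes
    nopreempt                 # set on BOTH nodes
    interface ens33
    virtual_router_id 80
    priority 100              # e.g. 50 on the other node; ignored once nopreempt is set
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.16/24
    }
}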

Optimizing the mysql status-check script

The mysql check script above is a bit simple and brutal: as soon as it detects that the Master's mysql service is down, it immediately stops the keepalived service to move the vip.
The optimized version below re-checks mysql several times before acting, so a single transient failure no longer kills keepalived outright; only after all retries fail is keepalived stopped and the vip switched to the Backup.
Once the Master's mysql service has recovered (and its services are started again), the VIP resource can be switched back.
[root@master ~]# cat /opt/chk_mysql.sh
#!/bin/bash
MYSQL=/usr/local/mysql/bin/mysql
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASSWORD=123456
CHECK_TIME=3
# mysql working: MYSQL_OK is 1; mysql down: MYSQL_OK is 0
MYSQL_OK=1

function check_mysql_health (){
    $MYSQL -h $MYSQL_HOST -u $MYSQL_USER -p${MYSQL_PASSWORD} -e "show status;" >/dev/null 2>&1
    if [ $? = 0 ] ;then
        MYSQL_OK=1
    else
        MYSQL_OK=0
    fi
    return $MYSQL_OK
}

while [ $CHECK_TIME -ne 0 ]
do
    let "CHECK_TIME -= 1"
    check_mysql_health
    if [ $MYSQL_OK = 1 ] ; then
        CHECK_TIME=0
        exit 0
    fi
    if [ $MYSQL_OK -eq 0 ] && [ $CHECK_TIME -eq 0 ]
    then
        pkill keepalived
        exit 1
    fi
    sleep 1
done

Haproxy Basics


Software: haproxy — primarily a layer-7 load balancer, though it can also do layer-4 load balancing.
apache can also do layer-7 load balancing, but it is very cumbersome and nobody uses it in practice.
Load balancing maps onto the OSI layers:
layer-7 load balancing: uses the layer-7 http protocol;
layer-4 load balancing: uses the tcp protocol plus port numbers.
-----------------------------------------------------------------------------------------
ha-proxy overview
ha-proxy is a high-performance load-balancing package. Because it focuses on nothing but load balancing, it does that job better and more professionally than nginx.
ha-proxy features
As a currently popular load balancer, ha-proxy has its outstanding points. Its advantages over LVS, Nginx, and other load balancers:
• supports load balancing at both the tcp and http protocol layers, giving it a very rich feature set
• supports about 8 load-balancing algorithms; especially in http mode there are many very practical algorithms for all kinds of needs
• excellent performance; its single-process processing model (similar to Nginx) makes it outstanding
• ships with an excellent monitoring page for real-time insight into the system's state
• powerful ACL support, which is a great convenience
haproxy algorithms (illustrated right after this list):
1. roundrobin
Weighted round robin; when the servers' processing times stay evenly distributed this is the most balanced and fairest algorithm. It is dynamic, meaning weights can be adjusted at runtime.
2. static-rr
Weighted round robin, similar to roundrobin, but static: adjusting a server's weight at runtime has no effect. On the other hand, it has no limit on backend server connection counts.
3. leastconn
New connection requests are dispatched to the backend server with the fewest connections.
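Backend stanzas illustrating these algorithms (server names and addresses are placeholders):

backend web_rr
    balance roundrobin            # dynamic weighted round robin
    server s1 192.168.1.11:80 weight 2 check
    server s2 192.168.1.12:80 weight 1 check

backend db_leastconn
    balance leastconn             # suits long-lived connections such as mysql
    server m1 192.168.1.21:3306 check
    server m2 192.168.1.22:3306 check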

1. Haproxy for Layer-7 Load Balancing

Keepalived + Haproxy
=================================================================================
/etc/haproxy/haproxy.cfg
global                                 # global process parameters
    log         127.0.0.1 local2 info  # log server
    pidfile     /var/run/haproxy.pid   # pid file
    maxconn     4000                   # maximum number of connections
    user        haproxy                # user
    group       haproxy                # group
    daemon                             # run in the background as a daemon
    nbproc 1                           # number of worker processes; set to the number of cpu cores
The defaults section provides default parameters for the other sections.
listen is the combination of frontend and backend.
frontend: the virtual service (Virtual Server)
backend: the real servers (Real Server)
One director can schedule several sites at the same time with the frontend/backend style (a routing sketch follows below):
frontend1 backend1
frontend2 backend2
frontend3 backend3
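A sketch of that multi-site idea with ACL-based routing (hostnames and addresses are placeholders):

frontend www
    bind *:80
    acl site1 hdr(host) -i www1.example.com
    acl site2 hdr(host) -i www2.example.com
    use_backend backend1 if site1
    use_backend backend2 if site2
backend backend1
    server a 192.168.1.11:80 check
backend backend2
    server b 192.168.1.21:80 check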
Keepalived + Haproxy
=================================================================================
Topology
[vip: 192.168.246.17]
[LB1 Haproxy]		[LB2 Haproxy]
192.168.246.169	    192.168.246.161
[httpd]				      [httpd] 
192.168.246.162		         192.168.246.163
I. Haproxy implementation steps
1. Preparation (all hosts in the cluster)
[root@ha-proxy-master ~]# cat /etc/hosts
127.0.0.1      	localhost
192.168.246.169	ha-proxy-master
192.168.246.161	ha-proxy-slave
192.168.246.162	test-nginx1 
192.168.246.163	test-nginx2
2. RS configuration
Set up the web servers and test all RSes; install nginx on all of them.
[root@test-nginx1 ~]# yum install -y nginx
[root@test-nginx1 ~]# systemctl start nginx
[root@test-nginx1 ~]# echo "test-nginx1" >> /usr/share/nginx/html/index.html
# number each nginx server's page in order so they can be told apart.
3. Configure Haproxy on the directors (run on both master and backup)
[root@ha-proxy-master ~]# yum -y install haproxy
[root@ha-proxy-master ~]# cp -rf /etc/haproxy/haproxy.cfg{,.bak}
[root@ha-proxy-master ~]# sed -i -r '/^[ ]*#/d;/^$/d' /etc/haproxy/haproxy.cfg
[root@ha-proxy-master ~]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2 info
    pidfile     /var/run/haproxy.pid
    maxconn     4000                    # lowest precedence
    user        haproxy
    group       haproxy
    daemon                              # run ha-proxy in the background
    nbproc 1                            # number of worker processes; set to the number of cpu cores
defaults
    mode                    http        # working mode: tcp is layer 4, http is layer 7
    log                     global
    retries                 3           # health checking: 3 failed connections mark a server unavailable, driven by the check option below
    option                  redispatch  # redirect to another healthy server when one becomes unavailable
    maxconn                 4000        # middle precedence
    contimeout              5000        # timeout for ha-proxy connecting to a backend server, in milliseconds
    clitimeout              50000       # client timeout
    srvtimeout              50000       # backend server timeout
listen stats
    bind                    *:81
    stats                   enable
    stats uri               /haproxy    # browse to http://192.168.246.169:81/haproxy to see the server status
    stats auth              qianfeng:123  # user authentication; does not take effect with the elinks client browser
frontend  web
    mode                    http
    bind                    *:80        # ip and port to listen on
    option                  httplog     # http log format
    acl html url_reg  -i  \.html$       # 1. ACL named html: matches urls ending in .html (optional)
    use_backend httpservers if  html    # 2. if the html acl matches, send the request to the httpservers backend
    default_backend    httpservers      # default server group
backend httpservers                     # the name must match the references above
    balance     roundrobin              # load-balancing method
    server  http1 192.168.246.162:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
    server  http2 192.168.246.163:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
Copy the configuration file to the slave server:
[root@ha-proxy-master ~]# scp /etc/haproxy/haproxy.cfg 192.168.246.161:/etc/haproxy/
Start on both machines and enable at boot:
[root@ha-proxy-master ~]# systemctl start haproxy
[root@ha-proxy-master ~]# systemctl enable haproxy

4. Test master/backup (browser access)

Master: (haproxy stats page screenshot)

Backup: (haproxy stats page screenshot)

Key fields on the stats page
Queue
Cur: current queued requests
Max: max queued requests
Limit: queue size limit
Errors
Req: request errors
Conn: connection errors
Server list:
Status: up (backend machine alive) or down (backend machine dead)
LastChk: time of the last health check of the backend server
Wght: (weight): the server's weight
========================================================
2. Access test
Reach the backend servers through the haproxy ip address:
# curl http://192.168.246.169
If bind fails with an error, run the following command:
setsebool -P haproxy_connect_any=1
II. Keepalived for director HA
Note: both the master and the backup director must be able to schedule normally on their own.
1. Install the software on the master/backup directors
[root@ha-proxy-master ~]# yum install -y keepalived
[root@ha-proxy-slave ~]# yum install -y keepalived
[root@ha-proxy-master ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@ha-proxy-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id director1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
}
[root@ha-proxy-slave ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@ha-proxy-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id directory2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
}
3. Start keepalived (on both master and backup)
[root@ha-proxy-master ~]# systemctl start keepalived 
[root@ha-proxy-master ~]# systemctl enable keepalived
[root@ha-proxy-master ~]# ip a
4. Extend with a health check of Haproxy on the directors (optional)
Idea (do this on both machines):
have Keepalived run an external script at a fixed interval; the script stops the local Keepalived when Haproxy has failed.
a. script
[root@ha-proxy-master ~]# cat /etc/keepalived/check_haproxy_status.sh
#!/bin/bash
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
    # /etc/init.d/keepalived stop
    systemctl stop keepalived
fi
[root@ha-proxy-master ~]# chmod a+x /etc/keepalived/check_haproxy_status.sh
b. use the script from keepalived
[root@ha-proxy-master keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id director1
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy_status.sh"
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
    track_script {
        check_haproxy
    }
}
[root@ha-proxy-slave keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id directory2
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy_status.sh"
    interval 5
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
    track_script {
        check_haproxy
    }
}
[root@ha-proxy-master keepalived]# systemctl restart keepalived
[root@ha-proxy-slave keepalived]# systemctl restart keepalived
Note: haproxy must be started before keepalived.
Configure haproxy logging on both machines; the relevant lines need to be uncommented and added:
[root@ha-proxy-master ~]# vim /etc/rsyslog.conf
# Provides UDP syslog reception    # haproxy ships its logs over udp, so enable rsyslog's udp listener
$ModLoad imudp
$UDPServerRun 514
Find the #### RULES #### section and add below it:
local2.*                       /var/log/haproxy.log
[root@ha-proxy-master ~]# systemctl restart rsyslog
[root@ha-proxy-master ~]# systemctl restart haproxy
[root@ha-proxy-master ~]# tail -f /var/log/haproxy.log 
2019-07-13T23:11:35+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56866 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:11:35+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56867 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:13:39+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56889 to 192.168.246.17:80 (stats/HTTP)
2019-07-13T23:13:39+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56890 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:14:07+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56895 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:14:07+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56896 to 192.168.246.17:80 (stats/HTTP)

Homework: Haproxy for Layer-4 Load Balancing

Configuration file on both haproxy machines:
[root@ha-proxy-master ~]# cat /etc/haproxy/haproxy.cfg
Haproxy L4
=================================================================================
global
    log         127.0.0.1 local2
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    nbproc      1
defaults
    mode                    http
    log                     global
    option                  redispatch
    retries                 3
    maxconn                 4000
    contimeout              5000
    clitimeout              50000
    srvtimeout              50000
listen stats
    bind                    *:81
    stats                   enable
    stats uri               /haproxy
    stats auth              qianfeng:123
frontend  web
    mode                    http
    bind                    *:80
    option                  httplog
    default_backend    httpservers
backend httpservers
    balance     roundrobin
    server  http1 192.168.246.162:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
    server  http2 192.168.246.163:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
listen mysql
    bind *:3306
    mode tcp
    balance roundrobin
    server mysql1 192.168.246.163:3306 weight 1  check inter 1s rise 2 fall 2
    server mysql2 192.168.246.162:3306 weight 1  check inter 1s rise 2 fall 2
inter is the health-check interval in milliseconds (units such as 1s also work); fall means the check gives up on the server after 2 failures; rise means the server is considered available again after 2 consecutive successful checks. By default haproxy assumes a server is always available; only with check does haproxy verify whether the service really is.
Use another machine as the client to test; mind mysql's remote-login privileges during the test. A quick round-robin check is sketched below the dependency line.

mariadb dependencies: yum -y install mariadb-server mariadb mariadb-client mariadb-devel
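One quick way to see the tcp listener round-robin between the two backends (the proxy address and credentials are taken from the examples above and may differ in your setup):

for i in 1 2 3 4; do
    mysql -h 192.168.246.169 -P 3306 -uroot -p123456 -e 'select @@hostname;'
done

Consecutive runs should alternate between the two servers' hostnames.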
