Web Cluster High-Availability Monitoring Project

Project steps:

1. Architecture planning and topology design: map out the full project topology with draw.io and assign a role to every server.

2. Automated system initialization: write SSH scripts to batch-configure all servers (IP/DNS, standardized hostnames, firewalld/SELinux disabled), ensuring a consistent base environment across the cluster.

3. NFS shared storage: write a shell script that compiles Nginx with the SSL and VTS modules, deploy an NFS server with a shared directory, and auto-mount it on every web node so static assets live in one place, supporting stateless web-node scaling and data consistency.

4. Install the MySQL database server to back the API project, and install mysqld_exporter to monitor MySQL's internal performance metrics.

5. Ansible control node: build the host inventory (grouped by role, e.g. [web], [lb]), set up passwordless SSH, and manage every cluster node centrally, laying the groundwork for one-command service deployment.

6. HTTPS configuration: set up name-based virtual hosts (www.su.com, software.su.com), deploy SSL certificates for site-wide HTTPS, and push the configuration to all nodes with Ansible.

7. LVS + Keepalived high availability: deploy LVS (DR mode) on two load-balancer nodes, with Keepalived providing master/backup failover and virtual IP (VIP) floating, so a failed LVS node is replaced within seconds and the single point of failure is eliminated.

8. Full-stack monitoring: deploy Prometheus + Grafana, push node_exporter to every host with Ansible for machine metrics, collect web traffic data (request counts, response times) through the Nginx VTS module, and build custom Grafana dashboards.

9. Testing and performance tuning: run failure drills (primary LVS down, web node offline) to validate high availability, then load-test with ab and tune kernel parameters based on the observed throughput and latency.

10. Planned extensions: containerize Nginx and MySQL with Docker, add Redis caching for backend data, and introduce a Web Application Firewall (WAF) and a bastion host.

I. Topology diagram (draw.io)

https://i-blog.csdnimg.cn/direct/6670a14ba27f489883f80d4b2d2c1355.png

II. Base server configuration

Automation steps:

1. Set the IP, DNS, and gateway
vi /etc/NetworkManager/system-connections/ens160.nmconnection

Edit the address, gateway, and DNS under [ipv4] (in the keyfile format the gateway follows the address after a comma):

[ipv4]
method=manual
address1=192.168.42.103/24,192.168.42.2
dns=192.168.42.105;8.8.8.8;
2. Restart NetworkManager and re-activate the connection
systemctl restart NetworkManager
nmcli c down ens160 && nmcli c up ens160
3. Verify
[root@LB1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:29:69:ad brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.42.101/24 brd 192.168.42.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe29:69ad/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@LB1 ~]# ip route
default via 192.168.42.100 dev ens160 proto static metric 100 
192.168.42.0/24 dev ens160 proto kernel scope link src 192.168.42.101 metric 100 
[root@LB1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.42.105
nameserver 8.8.8.8
[root@LB1 ~]# 
4. Set the hostname
hostnamectl set-hostname LB1
5. Disable SELinux and the firewall
#Temporarily
setenforce 0
systemctl stop firewalld


#Permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl disable firewalld
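The per-host steps above can be wrapped into the batch SSH script mentioned in the project plan. A minimal sketch (the host list and the dry-run default are assumptions; set DRY_RUN=0 to actually execute over SSH):

```shell
#!/bin/bash
# Batch-initialization sketch for every cluster node.
# DRY_RUN=1 (the default here) only collects and prints the commands,
# so the plan can be reviewed before a real run.
HOSTS="192.168.42.101 192.168.42.102 192.168.42.103 192.168.42.104 192.168.42.106"
DRY_RUN=${DRY_RUN:-1}

remote() {  # remote <host> <command string>
  if [ "$DRY_RUN" = "1" ]; then
    echo "ssh root@$1 '$2'"
  else
    ssh "root@$1" "$2"
  fi
}

plan=""
for h in $HOSTS; do
  plan="$plan$(remote "$h" "setenforce 0; systemctl disable --now firewalld; sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config")
"
done
printf '%s' "$plan"
```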

III. NFS shared storage

The Prometheus+NFS+DNS+.. server is renamed to Ansible.

1. Compile and install Nginx (with the SSL and VTS modules)
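The compile itself isn't detailed in this document; a sketch of the build, assuming the nginx 1.29.1 source tarball (the version seen later in the ab output) and a checkout of nginx-module-vts under /usr/local/src (versions and paths are assumptions, as are the build dependencies):

```shell
# Build Nginx with SSL support and the VTS traffic-status module.
# Assumes build deps are installed: gcc make pcre-devel zlib-devel openssl-devel git
cd /usr/local/src
git clone https://github.com/vozlt/nginx-module-vts.git
tar -zxf nginx-1.29.1.tar.gz && cd nginx-1.29.1
./configure --prefix=/usr/local/nginx \
  --with-http_ssl_module \
  --with-http_stub_status_module \
  --add-module=/usr/local/src/nginx-module-vts
make && make install
/usr/local/nginx/sbin/nginx -V   # the configure arguments should list both modules
```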

2. Install the NFS service
[root@Ansible ~]# dnf install -y nfs-utils rpcbind

[root@Ansible ~]# systemctl start rpcbind nfs-server
[root@Ansible ~]# systemctl enable rpcbind nfs-server
Created symlink /etc/systemd/system/multi-user.target.wants/rpcbind.service → /usr/lib/systemd/system/rpcbind.service.
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
[root@Ansible ~]# 
3. Create the share and grant read/write access
[root@Ansible ~]# mkdir -p /data/webroot
#Read/write permissions
[root@Ansible ~]# chmod 755 /data/webroot
#Owned by nobody to avoid permission errors later
[root@Ansible ~]# chown nobody:nobody /data/webroot
#Configure the export rules and allowed hosts
[root@Ansible ~]# vi /etc/exports
/data/webroot 192.168.42.103(rw,no_root_squash,...)
/data/webroot 192.168.42.104(rw,no_root_squash,...)
#Reload the exports
[root@Ansible ~]# exportfs -r
#Confirm they took effect
[root@Ansible ~]# exportfs -v
/data/webroot 	192.168.42.103(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/data/webroot 	192.168.42.104(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
[root@Ansible ~]# 
4. Install the NFS client (Web1/Web2)
dnf install -y nfs-utils
#Temporary mount
mount -t nfs 192.168.42.105:/data/webroot /usr/local/nginx/html
5. Permanent mount
#Edit /etc/fstab (the File System Table)
vim /etc/fstab

#Append one line at the end
192.168.42.105:/data/webroot  /usr/local/nginx/html  nfs  defaults  0  0

#Verify
df -h
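For reference, the six whitespace-separated fields in that fstab line are: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick way to see the split:

```shell
# Split an fstab-style line into its six fields (sample line from above).
line="192.168.42.105:/data/webroot  /usr/local/nginx/html  nfs  defaults  0  0"
set -- $line   # unquoted on purpose: let the shell word-split the fields
device=$1; mountpoint=$2; fstype=$3; options=$4; dump=$5; passno=$6
echo "device=$device fstype=$fstype options=$options"
```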

IV. MySQL installation, monitoring, and service

1. Install MySQL
wget https://dev.mysql.com/get/mysql80-community-release-el7-7.noarch.rpm
dnf install -y mysql80-community-release-el7-7.noarch.rpm
yum install -y mysql-community-server

#Start and enable at boot
systemctl start mysqld
systemctl enable mysqld
2. MySQL service setup
#Initialize
#Fetch the temporary root password
[root@mysql ~]# grep "temporary password" /var/log/mysqld.log
2025-06-27T19:17:42.987720Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: a_qsfrlwC7es
[root@mysql ~]# 
#Log in and change it
[root@mysql ~]# mysql -uroot -p
Enter password: a_qsfrlwC7es
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassword';

#Allow remote access
CREATE USER 'root'@'%' IDENTIFIED BY 'NewPassword';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
# Reload privileges
FLUSH PRIVILEGES;
3. Install mysqld_exporter for monitoring
#Download the latest release (see https://github.com/prometheus/mysqld_exporter/releases)
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.15.1/mysqld_exporter-0.15.1.linux-amd64.tar.gz

#Unpack and move it into place
tar -zxvf mysqld_exporter-0.15.1.linux-amd64.tar.gz
mv mysqld_exporter-0.15.1.linux-amd64 /usr/local/mysqld_exporter

#Enter the install directory
cd /usr/local/mysqld_exporter


#Log in to MySQL
mysql -u root -p

#Create a dedicated monitoring account
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'ExporterPassword123!';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
FLUSH PRIVILEGES;
exit
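The exporter reads its credentials from a .my.cnf file (referenced by --config.my-cnf in the systemd unit below), but that file is never created in the steps shown. A minimal sketch of the missing fragment, reusing the exporter account just created:

```shell
# Credentials file read by mysqld_exporter via --config.my-cnf.
cat > /usr/local/mysqld_exporter/.my.cnf << EOF
[client]
user=exporter
password=ExporterPassword123!
host=localhost
EOF
chmod 600 /usr/local/mysqld_exporter/.my.cnf
```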


#Configure the systemd service (standardized across hosts)
# Create the systemd unit file
cat > /etc/systemd/system/mysqld_exporter.service << EOF
[Unit]
Description=MySQL Exporter for Prometheus
After=mysqld.service

[Service]
User=root
Group=root
WorkingDirectory=/usr/local/mysqld_exporter
ExecStart=/usr/local/mysqld_exporter/mysqld_exporter --config.my-cnf=/usr/local/mysqld_exporter/.my.cnf
Restart=always

[Install]
WantedBy=multi-user.target
EOF

# Start and enable at boot
systemctl daemon-reload
systemctl start mysqld_exporter
systemctl enable mysqld_exporter

V. Ansible automation

1. Install Ansible (on the Ansible server)
# Install Ansible (Rocky Linux needs the EPEL repo)
dnf install -y epel-release
dnf install -y ansible
# Verify the install
ansible --version  # version output means success
2. Edit the default Ansible inventory /etc/ansible/hosts, grouping hosts by role:
vim /etc/ansible/hosts
# Add the following
[lb]
LB1 ansible_host=192.168.42.101
LB2 ansible_host=192.168.42.102

[web]
Web1 ansible_host=192.168.42.103
Web2 ansible_host=192.168.42.104

[db]
MySQL ansible_host=192.168.42.106
3. Set up passwordless SSH
#On the Ansible node
ssh-keygen -t rsa

# Copy the key to LB1
ssh-copy-id root@192.168.42.101
# To LB2
ssh-copy-id root@192.168.42.102
# To Web1
ssh-copy-id root@192.168.42.103
# To Web2
ssh-copy-id root@192.168.42.104
# To MySQL
ssh-copy-id root@192.168.42.106
4. Verify
[root@Ansible webroot]# ansible all -m ping
MySQL | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
Web2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
LB2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
Web1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
LB1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
[root@Ansible webroot]# 

VI. Deploy SSL certificates and domains

1. Generate the SSL certificates
#Create the working directories first
mkdir -p /ansible/nginx_https/{ssl,conf}
cd /ansible/nginx_https


# Enter the certificate directory
cd ssl

# Generate the CA root certificate (used for signing)
openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt \
  -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=su/OU=IT/CN=su.com"

# Generate the www.su.com certificate
openssl genrsa -out www.su.com.key 2048
openssl req -new -key www.su.com.key -out www.su.com.csr \
  -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=su/OU=IT/CN=www.su.com"
openssl x509 -req -in www.su.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out www.su.com.crt -days 3650

# Generate the software.su.com certificate
openssl genrsa -out software.su.com.key 2048
openssl req -new -key software.su.com.key -out software.su.com.csr \
  -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=su/OU=IT/CN=software.su.com"
openssl x509 -req -in software.su.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out software.su.com.crt -days 3650
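Before pushing the certificates anywhere, it's worth confirming each leaf really chains back to the CA. A self-contained check in a temp directory (same commands as above, shortened subjects):

```shell
# Generate a throwaway CA + leaf and verify the chain with `openssl verify`.
tmp=$(mktemp -d) && cd "$tmp"
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=su.com"
openssl genrsa -out www.key 2048 2>/dev/null
openssl req -new -key www.key -out www.csr -subj "/CN=www.su.com"
openssl x509 -req -in www.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out www.crt -days 3650 2>/dev/null
result=$(openssl verify -CAfile ca.crt www.crt)
echo "$result"
```

For the real certificates, the equivalent check is `openssl verify -CAfile ca.crt www.su.com.crt` from the ssl directory.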
2. Configure the domains www.su.com and software.su.com
#Copy a stock nginx config and modify it

[root@Ansible ~]#vi /ansible/nginx_https/conf/nginx_vhosts.conf
# Name-based virtual host configuration (HTTPS only)
worker_processes auto;
error_log /usr/local/nginx/logs/error.log;
pid /usr/local/nginx/logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    # Global SSL settings (shared by both vhosts)
    ssl_protocols TLSv1.2 TLSv1.3;  # secure protocols only
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # 1. Redirect HTTP to HTTPS (force encryption)
    server {
        listen 80;
        server_name www.su.com software.su.com;
        return 301 https://$host$request_uri;  # permanent redirect to HTTPS
    }

    # 2. Virtual host: www.su.com
    server {
        listen 443 ssl;
        server_name www.su.com;

        # Certificate paths (must match the Ansible push paths used later)
        ssl_certificate /etc/nginx/ssl/www.su.com.crt;
        ssl_certificate_key /etc/nginx/ssl/www.su.com.key;

        # Site root (NFS-shared directory; make sure Web1/Web2 have it mounted)
        root /var/www/html/www.su.com;
        index index.html index.htm;

        # Access logs
        access_log /usr/local/nginx/logs/www.su.com_access.log;
        error_log /usr/local/nginx/logs/www.su.com_error.log;

        location / {
            try_files $uri $uri/ =404;
        }
    }

    # 3. Virtual host: software.su.com
    server {
        listen 443 ssl;
        server_name software.su.com;

        # Certificate paths
        ssl_certificate /etc/nginx/ssl/software.su.com.crt;
        ssl_certificate_key /etc/nginx/ssl/software.su.com.key;

        # Site root
        root /var/www/html/software.su.com;
        index index.html index.htm;

        # Access logs
        access_log /usr/local/nginx/logs/software.su.com_access.log;
        error_log /usr/local/nginx/logs/software.su.com_error.log;

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
3. Batch deployment with an Ansible playbook
[root@Ansible ~]#vi /ansible/nginx_https/conf/deploy_https.yml
- name: Deploy Nginx HTTPS virtual hosts in bulk
  hosts: web  # target web node group (Web1, Web2)
  remote_user: root
  tasks:
    # 1. Create the SSL certificate directory
    - name: Create /etc/nginx/ssl
      file:
        path: /etc/nginx/ssl
        state: directory
        mode: 0700  # strict permissions, root only

    # 2. Push the SSL certificates to the web nodes
    - name: Push the www.su.com certificate and key
      copy:
        src: ./ssl/{{ item }}
        dest: /etc/nginx/ssl/{{ item }}
        mode: 0600  # keys readable/writable by root only
      loop:
        - www.su.com.crt
        - www.su.com.key

    - name: Push the software.su.com certificate and key
      copy:
        src: ./ssl/{{ item }}
        dest: /etc/nginx/ssl/{{ item }}
        mode: 0600
      loop:
        - software.su.com.crt
        - software.su.com.key

    # 3. Push the virtual host configuration (overwrites the existing nginx.conf)
    - name: Deploy the HTTPS virtual host config
      copy:
        src: ./conf/nginx_vhosts.conf
        dest: /usr/local/nginx/conf/nginx.conf  # make sure this matches your Nginx path
        mode: 0644
      notify: Restart Nginx  # restart on config change

    # 4. Create the site root directories (on the NFS share; check permissions)
    - name: Create the www.su.com site directory
      file:
        path: /var/www/html/www.su.com
        state: directory
        mode: 0755
        owner: nginx  # must match the Nginx worker user
        group: nginx

    - name: Create the software.su.com site directory
      file:
        path: /var/www/html/software.su.com
        state: directory
        mode: 0755
        owner: nginx
        group: nginx

    # 5. Generate test pages (distinguish the web nodes)
    - name: Create the www.su.com test page
      copy:
        content: "<h1>HTTPS - www.su.com</h1><p>Server: {{ inventory_hostname }} ({{ ansible_default_ipv4.address }})</p>"
        dest: /var/www/html/www.su.com/index.html
        mode: 0644

    - name: Create the software.su.com test page
      copy:
        content: "<h1>HTTPS - software.su.com</h1><p>Server: {{ inventory_hostname }} ({{ ansible_default_ipv4.address }})</p>"
        dest: /var/www/html/software.su.com/index.html
        mode: 0644

  # handlers: restart Nginx after a config change
  handlers:
    - name: Restart Nginx
      systemd:
        name: nginx
        state: restarted
4. Run it (from /ansible/nginx_https, so the relative src paths in the playbook resolve)
cd /ansible/nginx_https && ansible-playbook conf/deploy_https.yml

VII. LVS + Keepalived high availability

In DR mode every web node must bind the VIP on its loopback interface and suppress ARP replies for it, to avoid address conflicts with the load balancers.

1. Suppress ARP for the VIP (Web1/Web2)
cat >> /etc/sysctl.conf << EOF
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 1
EOF

# Apply the settings
sysctl -p
2. Bind the VIP to the loopback interface
#Temporary
# Bind the VIP (192.168.42.200) to lo
ip addr add 192.168.42.200/32 dev lo label lo:0

# Verify (lo:0 should show 192.168.42.200)
ip addr show lo | grep 192.168.42.200


#Permanent
cat >> /etc/rc.d/rc.local << EOF
# Bind the LVS DR-mode VIP
ip addr add 192.168.42.200/32 dev lo label lo:0
EOF


#Make it executable

chmod +x /etc/rc.d/rc.local
3. Deploy LVS + Keepalived (LB1/LB2)
# Install
dnf install -y ipvsadm keepalived
4. Load the LVS kernel modules
# Auto-load the LVS modules at boot
[root@LB1 ~]#cat >> /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Load the modules now
[root@LB1 ~]#modprobe ip_vs
[root@LB1 ~]#modprobe ip_vs_rr
[root@LB1 ~]#modprobe ip_vs_wrr
[root@LB1 ~]#modprobe ip_vs_sh
[root@LB1 ~]#modprobe nf_conntrack

# Verify (the ip_vs modules should be listed)
[root@LB1 ~]# lsmod | grep ip_vs
ip_vs_sh               12288  0
ip_vs_wrr              12288  0
ip_vs_rr               12288  1
ip_vs                 237568  7 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          229376  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              12288  3 nf_conntrack,xfs,ip_vs
[root@LB1 ~]# 
5. Configure Keepalived (master node)
# Back up the default config
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

# Create the new config file
cat > /etc/keepalived/keepalived.conf << EOF
! LVS DR-mode master node configuration (LB1)
global_defs {
    router_id LB1  # master node identifier
}

vrrp_instance VI_1 {
    state MASTER    # master role
    interface ens160  # physical NIC (match your interface name)
    virtual_router_id 51  # must be identical on master and backup
    priority 150    # master > backup (LB1=150, LB2=100)
    advert_int 1    # 1-second heartbeat interval

    authentication {
        auth_type PASS
        auth_pass 111111  # must be identical on master and backup
    }

    # Virtual IP (VIP)
    virtual_ipaddress {
        192.168.42.200/32 dev ens160 label ens160:0
    }
}

# LVS load-balancing configuration (DR mode, round-robin)
virtual_server 192.168.42.200 443 {  # VIP:HTTPS port
    delay_loop 6          # health check every 6 seconds
    lb_algo rr            # round-robin scheduling
    lb_kind DR            # DR mode
    persistence_timeout 0 # no session persistence
    protocol TCP

    # Backend node Web1
    real_server 192.168.42.103 443 {
        weight 1          # weight
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    # Backend node Web2
    real_server 192.168.42.104 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
EOF
6. Start Keepalived and enable at boot
systemctl start keepalived
systemctl enable keepalived

# Verify (status should be active)
systemctl status keepalived
7. Configure LB2 the same way, changing only state to BACKUP, router_id to LB2, and priority to 100
8. Verify load balancing
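Besides checking in the browser, the kernel's LVS rule table can be inspected directly on whichever LB node currently holds the VIP:

```shell
# Show the virtual-server table (run as root on the active LB node).
ipvsadm -Ln
# The VIP 192.168.42.200:443 should list both real servers with Forward=Route (DR mode).
```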

https://i-blog.csdnimg.cn/direct/78506a1e695f4cf79ad6b256ee85b16a.png

VIII. Prometheus + Grafana (monitoring server)

1. Install Prometheus (on the Ansible server)
# Download and unpack Prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.45.0/prometheus-2.45.0.linux-amd64.tar.gz
tar -zxf prometheus-2.45.0.linux-amd64.tar.gz -C /usr/local/
ln -s /usr/local/prometheus-2.45.0.linux-amd64 /usr/local/prometheus

# Create the systemd service
cat > /usr/lib/systemd/system/prometheus.service << EOF
[Unit]
Description=Prometheus Monitoring System
After=network.target

[Service]
ExecStart=/usr/local/prometheus/prometheus \
  --config.file=/usr/local/prometheus/prometheus.yml \
  --storage.tsdb.path=/usr/local/prometheus/data \
  --web.listen-address=:9090

[Install]
WantedBy=multi-user.target
EOF

# Start and enable at boot
systemctl daemon-reload
systemctl start prometheus
systemctl enable prometheus
2. Install Grafana
# Install Grafana
dnf install -y https://dl.grafana.com/oss/release/grafana-10.1.0-1.x86_64.rpm

# Start and enable at boot
systemctl start grafana-server
systemctl enable grafana-server

# Verify the port (3000 is Grafana's default)
ss -tuln | grep 3000
3. Batch-deploy node_exporter with Ansible (host metrics)
  1. Write the playbook (deploy_node_exporter.yml)

  2. Run it
ansible-playbook deploy_node_exporter.yml
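The playbook itself isn't reproduced in this document; a minimal sketch of what deploy_node_exporter.yml might contain (the binary's location on the control node, and the unit file contents, are assumptions):

```yaml
- name: Deploy node_exporter on all nodes
  hosts: all
  remote_user: root
  tasks:
    - name: Copy the node_exporter binary
      copy:
        src: /usr/local/src/node_exporter   # unpacked on the control node beforehand
        dest: /usr/local/bin/node_exporter
        mode: 0755

    - name: Install the systemd unit
      copy:
        dest: /etc/systemd/system/node_exporter.service
        content: |
          [Unit]
          Description=Prometheus Node Exporter
          After=network.target

          [Service]
          ExecStart=/usr/local/bin/node_exporter
          Restart=always

          [Install]
          WantedBy=multi-user.target

    - name: Start node_exporter and enable at boot
      systemd:
        name: node_exporter
        state: started
        enabled: yes
        daemon_reload: yes
```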
4. Collect Nginx VTS metrics (web traffic monitoring)

The VTS module exposes Nginx metrics such as request counts, response times, and status codes.

  1. Write the playbook (configure_nginx_vts.yml) to enable the VTS module on every web node

  2. Run it
ansible-playbook configure_nginx_vts.yml
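configure_nginx_vts.yml is likewise not shown; the essential Nginx configuration it needs to push (assuming Nginx was compiled with the VTS module) looks roughly like this:

```nginx
http {
    vhost_traffic_status_zone;              # enable the shared stats zone

    server {
        listen 8080;                        # matches the 'nginx' job targets below
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
            # Prometheus scrapes /status/format/prometheus on this port
        }
    }
}
```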
5. Configure Prometheus scrape targets
[root@Ansible ~]#vi /usr/local/prometheus/prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'nodes'
    static_configs:
      - targets: ['192.168.42.101:9100', '192.168.42.102:9100', '192.168.42.103:9100', '192.168.42.104:9100']

  - job_name: 'nginx'
    metrics_path: '/status/format/prometheus'
    static_configs:
      - targets: ['192.168.42.103:8080', '192.168.42.104:8080']
  - job_name: 'mysql'
    static_configs:
      - targets: ['192.168.42.106:9104']  # mysqld_exporter's default port

Check the config syntax and restart Prometheus to apply:

/usr/local/prometheus/promtool check config /usr/local/prometheus/prometheus.yml
systemctl restart prometheus

IX. ab load testing and HA verification

1. Install
[root@mysql ~]# dnf install -y httpd-tools 
#Check the version
[root@mysql ~]# ab -V
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

[root@mysql ~]# 
2. Run the load test
[root@mysql ~]# ab -n 10000 -c 500 -k -s 30 https://192.168.42.200/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.42.200 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.29.1
Server Hostname:        192.168.42.200
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /
Document Length:        63 bytes

Concurrency Level:      500
Time taken for tests:   0.804 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    10000
Total transferred:      2990000 bytes
HTML transferred:       630000 bytes
Requests per second:    12431.73 [#/sec] (mean)
Time per request:       40.220 [ms] (mean)
Time per request:       0.080 [ms] (mean, across all concurrent requests)
Transfer rate:          3629.97 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   15  67.6      0     361
Processing:     1   23  11.5     20     245
Waiting:        0   23  11.5     20     245
Total:          4   39  74.0     20     401

Percentage of the requests served within a certain time (ms)
  50%     20
  66%     23
  75%     30
  80%     35
  90%     40
  95%    219
  98%    397
  99%    397
 100%    401 (longest request)
[root@mysql ~]# 
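When repeating ab runs during kernel tuning, it helps to pull out just the headline numbers. A small sketch (uses an embedded sample of the output above so it is self-contained; in practice feed it saved `ab` output instead):

```shell
# Extract requests-per-second and failed-request counts from ab output.
sample='Requests per second:    12431.73 [#/sec] (mean)
Time per request:       40.220 [ms] (mean)
Failed requests:        0'

rps=$(printf '%s\n' "$sample" | awk -F': *' '/^Requests per second/ {split($2, a, " "); print a[1]}')
failed=$(printf '%s\n' "$sample" | awk -F': *' '/^Failed requests/ {print $2+0}')
echo "rps=$rps failed=$failed"
```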
3. Verify failover
#Power off LB1
#Check LB2 (it should now hold the VIP)
[root@LB2 ~]# ip add
    inet 192.168.42.102/24 brd 192.168.42.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 192.168.42.200/32 scope global ens160:0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe34:e245/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@LB2 ~]# 


#Reconnect LB1 and check both nodes (the VIP preempts back to LB1 and leaves LB2)
[root@LB1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:29:69:ad brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.42.101/24 brd 192.168.42.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 192.168.42.200/32 scope global ens160:0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe29:69ad/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@LB1 ~]# 


[root@LB2 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:34:e2:45 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.42.102/24 brd 192.168.42.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe34:e245/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@LB2 ~]#