HAProxy

  • LB Cluster:

    • Layer 4: LVS, Nginx (stream), HAProxy (mode tcp)
    • Layer 7 (HTTP): Nginx (http, ngx_http_upstream_module), HAProxy (mode http), httpd, ATS, Perlbal, Pound…

HAProxy

  • Program environment:

    • Main binary: /usr/sbin/haproxy
    • Main configuration file: /etc/haproxy/haproxy.cfg
    • Unit file: /usr/lib/systemd/system/haproxy.service
  • Configuration sections:

    • global: the global configuration section

      • Process and security related parameters
        Performance tuning parameters
        Debug parameters
    • proxies: the proxy configuration sections

      • defaults: provides default settings for frontend, listen, and backend;
        frontend: the front end, comparable to Nginx's server {}
        backend: the back end, comparable to Nginx's upstream {}
        listen: combines a frontend and a backend in one section
      Configuration example (in a load-balancing cluster the home pages are normally identical; they differ here only for demonstration):
      
        frontend web
            bind *:80
            default_backend     websrvs
      
        backend websrvs
            balance roundrobin
            server srv1 172.16.100.6:80 check
            server srv2 172.16.100.7:80 check

global configuration parameters:

  • Process and security management: chroot, daemon, user, group, uid, gid

    • log: defines a global syslog server; at most two may be defined;

      log <address> [len <length>] <facility> [max level [min level]]
      1. User-definable facilities: local0 through local7
      2. Levels: emerg alert crit err warning notice info debug
    • nbproc <number>: number of haproxy processes to start;
      Creates <number> processes when going daemon. This requires the "daemon" mode. By default, only one process is created, which is the recommended mode of operation. For systems limited to small sets of file descriptors per process, it may be needed to fork multiple daemons. USING MULTIPLE PROCESSES IS HARDER TO DEBUG AND IS REALLY DISCOURAGED. See also "daemon".

    • ulimit-n <number>: maximum number of files each haproxy process may open;
      Sets the maximum number of per-process file-descriptors to <number>. By default, it is automatically computed, so it is recommended not to use this option.

  • Performance tuning:

    • maxconn <number>: maximum number of concurrent connections each haproxy process will accept; Sets the maximum per-process number of concurrent connections to <number>.
    • maxconnrate <number>: Sets the maximum per-process number of connections per second to <number>.
    • maxsessrate <number>: maximum number of sessions that may be created per second
    • maxsslconn <number>: Sets the maximum per-process number of concurrent SSL connections to <number>.
    • spread-checks <0..50, in percent>: spreads out health checks of backend servers
      When there are many backend servers, running all health checks at the same instant can consume considerable bandwidth, so the checks should be spread out over time. A check may be moved forward or delayed by at most 50% of the check interval.
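Combining the parameters above, a global section might look like the following sketch (all values and paths are illustrative, not recommendations):

```
global
    log 127.0.0.1 local2 info      # first (and here only) syslog target
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    maxconn 20000                  # per-process concurrent connection cap
    spread-checks 5                # shift health checks by up to 5% of the interval
```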

Proxy configuration sections

  • defaults
  • frontend
  • backend
  • listen

    A "frontend" section describes a set of listening sockets accepting client connections.
      A "backend" section describes a set of servers to which the proxy will connect to forward incoming connections.
      A "listen" section defines a complete proxy with its frontend and backend parts combined in one section. It is generally useful for TCP-only traffic.
    
      Naming rules for <name>: All proxy names must be formed from upper and lower case letters, digits, '-' (dash), '_' (underscore), '.' (dot) and ':' (colon). Names are case-sensitive;
  • Configuration parameters:

    • bind:Define one or several listening addresses and/or ports in a frontend.

      bind [<address>]:<port_range> [, ...] [param*]
      
      Example:
      listen http_proxy
        bind :80,:443
        bind 10.0.0.1:10080,10.0.0.1:10443
        bind /var/run/ssl-frontend.sock user root mode 600 accept-proxy
    • balance: scheduling algorithm for the servers within a backend
      balance <algorithm> [ <arguments> ]
      balance url_param <param> [check_post]

      • Algorithms:

        • roundrobin: Each server is used in turns, according to their weights.
          server option: weight #
          Dynamic algorithm: supports run-time weight adjustment and slow start; at most 4095 servers per backend;
        • static-rr:
          Static algorithm: no run-time weight adjustment and no slow start; no upper limit on the number of backend servers;

        • leastconn:
          Recommended for workloads with long-lived sessions, e.g. MySQL, LDAP;

        • first:
          Servers are used in the order they appear in the list, top to bottom; new requests go to the next server only once the previous servers have reached their connection limits;

        • source: source-address hashing; requests from the same IP address are always sent to the same backend server, i.e. stickiness is bound to the source IP.
          modulo (division-remainder) hashing
          consistent hashing

        • uri:

          • The left part of the URI is hashed, the hash is divided by the total server weight, and the request is dispatched to the server selected this way; hash-type consistent is typically used with it.
          • The uri algorithm should normally be used only when the backend servers are web caches: it keeps the hit rate high, and with hash-type consistent the mapping does not shift drastically when weights change. Requests for the same resource, no matter which client sends them, are all scheduled to the same backend server.

            <scheme>://<user>:<password>@<host>:<port>/<path>;<params>?<query>#<frag>
              left part: /<path>;<params>
              whole URI: /<path>;<params>?<query>#<frag>
        • url_param: the value of a parameter in the request URI is hashed, the hash is divided by the total server weight, and the request is dispatched to the server selected this way; typically used to track users and ensure that requests from the same user always go to the same backend server;

        • hdr(<name>): for each HTTP request, the specified HTTP header is extracted and hashed; the hash is divided by the total server weight and the request is dispatched to the server selected this way; requests without a valid value in that header are scheduled round-robin;
          hdr(Cookie)

        • rdp-cookie
          rdp-cookie(<name>)
          For Microsoft's Remote Desktop Protocol
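As an illustration of weight-proportional scheduling, here is a minimal Python sketch of weighted round-robin (a naive expansion by weight, not HAProxy's actual scheduler; server names and weights are made up):

```python
from itertools import islice

def weighted_roundrobin(servers):
    """Yield server names in proportion to their weights.

    servers: list of (name, weight) pairs, as set by
    `server srv1 ... weight 2` in a backend.
    """
    # Naive expansion: a server with weight w appears w times in the cycle.
    expanded = [name for name, weight in servers for _ in range(weight)]
    i = 0
    while True:
        yield expanded[i % len(expanded)]
        i += 1

# srv1 (weight 2) should receive twice as many requests as srv2 (weight 1).
sched = weighted_roundrobin([("srv1", 2), ("srv2", 1)])
first_six = list(islice(sched, 6))
# first_six == ['srv1', 'srv1', 'srv2', 'srv1', 'srv1', 'srv2']
```

HAProxy's own rotation is smoother than this expansion, but the proportion of traffic per weight is the same idea; a run-time weight change corresponds to rebuilding the cycle, which is why roundrobin counts as a dynamic algorithm.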

    • hash-type: hash method

      hash-type <method> <function> <modifier>
            map-based: modulo (division-remainder) hashing; the hash table is a static array;
            consistent: consistent hashing; the hash table is a tree;
      
        <function> is the hash function to be used:
            sdbm
            djb2
            wt6
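The difference between the two <method> values can be demonstrated in Python. This sketch uses MD5 as a stand-in for the sdbm/djb2/wt6 functions and a simplified virtual-node ring; it counts how many clients get remapped when one of three servers is removed:

```python
import hashlib
from bisect import bisect

def h(key: str) -> int:
    # Stable stand-in hash (HAProxy would use sdbm, djb2 or wt6).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def map_based(key, servers):
    # map-based: index a static array by hash modulo server count;
    # changing the server set reshuffles almost every key.
    return servers[h(key) % len(servers)]

def build_ring(servers, vnodes=64):
    # consistent: place virtual nodes for every server on a hash ring.
    return sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))

def consistent(key, ring):
    # A key is served by the first virtual node clockwise from its hash.
    points = [p for p, _ in ring]
    return ring[bisect(points, h(key)) % len(ring)][1]

keys = [f"client-{i}" for i in range(1000)]
full, less = ["srv1", "srv2", "srv3"], ["srv1", "srv2"]
ring_full, ring_less = build_ring(full), build_ring(less)

moved_mod = sum(map_based(k, full) != map_based(k, less) for k in keys)
moved_con = sum(consistent(k, ring_full) != consistent(k, ring_less) for k in keys)
# Modulo moves roughly two thirds of all keys; the ring moves only the
# keys that previously hashed to srv3 (about one third).
```

With hash-type consistent, removing or re-weighting one server therefore invalidates only that server's share of the mapping, which is why the uri section above recommends it for cache backends.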
    • default_backend <backend>
      Sets the default backend; used in a frontend;

    • default-server [param*]
      Sets default options for the servers within a backend;

    • server <name> <address>[:[port]] [param*]: defines a backend server and its options;

      server <name> <address>[:port] [settings ...]
      default-server [settings ...]
      
      <name>: the server's internal name in haproxy; it appears in logs and warnings;
      <address>: the server's address; hostnames are supported;
      [:[port]]: port mapping; when omitted, the port bound in bind is used;
      [param*]: parameters
        maxconn <maxconn>: maximum number of concurrent connections for this server;
        backlog <backlog>: length of the backlog queue once this server's connection count reaches its limit;
        backup: marks this server as a backup server;
        check: enables health checks for this server;
            addr <addr>: dedicated IP address to use for the checks;
            port <port>: port to check;
            inter <delay>: interval between two consecutive checks, 2000 ms by default; do not set it too low, or the checks themselves become a burden;
            rise <count>: number of consecutive successful checks before the server is marked available; 2 by default;
            fall <count>: number of consecutive failed checks before the server is marked unavailable; 3 by default;
      
                Layer 7 check: request a specific resource and require a given response code or response body
                Layer 4 check: the server is considered OK as long as the port responds
                Layer 3 check: the host only needs to be online
                Note: "httpchk", "smtpchk", "mysql-check", "pgsql-check" and "ssl-hello-chk" define application-layer check methods;
      
        cookie <value>: assigns this server its cookie value, for cookie-based session stickiness;
        disabled: marks the server as unavailable;
        redir <prefix>: redirects all GET and HEAD requests destined for this server to the specified URL;
        weight <weight>: weight, 1 by default;
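Putting the server options above together, a backend might look like this sketch (the addresses reuse the earlier examples; weights, intervals and thresholds are illustrative):

```
backend websrvs
    balance roundrobin
    server srv1 172.16.100.6:80 weight 2 check inter 3000 rise 2 fall 3 maxconn 3000
    server srv2 172.16.100.7:80 weight 1 check inter 3000 rise 2 fall 3 maxconn 3000
    server bck1 172.16.100.8:80 backup
```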
    • Parameters for enabling the statistics interface:

      • stats enable

        Enables the stats page with its default parameters:
          - stats uri   : /haproxy?stats
          - stats realm : "HAProxy Statistics"
          - stats auth  : no authentication
          - stats scope : no restriction
      • stats auth <user>:<passwd>
        Username and password for authentication; may be used multiple times;

      • stats realm <realm>
        Realm presented during authentication;

      • stats uri <prefix>
        Customizes the stats page URI

      • stats refresh <delay>
        Sets the automatic refresh interval;

      • stats admin { if | unless } <cond>
        Enables the administration features on the stats page

        Configuration example:
          listen stats
              bind :9099
              stats enable
              stats uri    /haproxy?stats
              stats realm HAProxy\ Stats\ Page
              stats auth admin:admin
              stats admin if TRUE
    • maxconn <conns>: maximum number of concurrent connections for the given frontend; 2000 by default;
      Fix the maximum number of concurrent connections on a frontend.

    • mode { tcp|http|health }

      • Sets haproxy's mode of operation;
        tcp: layer-4 proxying; can proxy mysql, pgsql, ssh, ssl and other protocols;
        http: used only when the proxied protocol is HTTP;
        health: health-check responder mode; when a connection arrives, haproxy replies "OK" and closes the connection;

      • Example:

        listen ssh
              bind :22022
              balance leastconn
              mode tcp
              server sshsrv1 172.16.100.6:22 check
              server sshsrv2 172.16.100.7:22 check
    • cookie <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ] [ postonly ] [ preserve ] [ httponly ] [ secure ] [ domain <domain> ]* [ maxidle <idle> ] [ maxlife <life> ]

      • <name>: is the name of the cookie which will be monitored, modified or inserted in order to bring persistence.
        rewrite: rewrite the cookie;
        insert: insert a cookie;
        prefix: prefix an existing cookie;

      • Cookie-based session stickiness:

        backend websrvs
              cookie WEBSRV insert nocache indirect
              server srv1 172.16.100.6:80 weight 2 check rise 1 fall 2 maxconn 3000 cookie srv1
              server srv2 172.16.100.7:80 weight 1 check rise 1 fall 2 maxconn 3000 cookie srv2
    • option forwardfor [ except <network> ] [ header <name> ] [ if-none ]
      Enable insertion of the X-Forwarded-For header to requests sent to servers

      • Adds an "X-Forwarded-For" header, whose value is the client's address, to the requests haproxy sends to backend servers; used to pass the real client IP to the backends;

        [ except <network> ]: the header is not added when the request comes from the specified network;
          [ header <name> ]: use a custom header name instead of "X-Forwarded-For";
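A frontend using option forwardfor might look like the following sketch (the exempted network and the custom header name are illustrative):

```
frontend web
    bind *:80
    option forwardfor except 127.0.0.0/8 header X-Client-IP
    default_backend websrvs
```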
    • errorfile <code> <file>
      Return a file contents instead of errors generated by HAProxy

      <code>:is the HTTP status code. Currently, HAProxy is capable of  generating codes 200, 400, 403, 408, 500, 502, 503, and 504.
        <file>:designates a file containing the full HTTP response.
      • Example:

        errorfile 400 /etc/haproxy/errorfiles/400badreq.http
          errorfile 408 /dev/null  # workaround Chrome pre-connect bug
          errorfile 403 /etc/haproxy/errorfiles/403forbid.http
          errorfile 503 /etc/haproxy/errorfiles/503sorry.http
    • errorloc <code> <url>

      errorloc302 <code> <url>
      
        errorloc 403 http://www.magedu.com/error_pages/403.html
    • req* directives act on the headers of requests haproxy sends to the backend servers
      rsp* directives act on the headers of responses haproxy returns to the client
      an "i" in the directive name means case-insensitive matching;

      reqadd  <string> [{if | unless} <cond>]
        Add a header at the end of the HTTP request
      
      rspadd <string> [{if | unless} <cond>]
        Add a header at the end of the HTTP response
      
        rspadd X-Via:\ HAProxy
      
      reqdel  <search> [{if | unless} <cond>]
      reqidel <search> [{if | unless} <cond>]  (ignore case)
        Delete all headers matching a regular expression in an HTTP request
      
      rspdel  <search> [{if | unless} <cond>]
      rspidel <search> [{if | unless} <cond>]  (ignore case)
        Delete all headers matching a regular expression in an HTTP response
      
        rspidel  Server.*
  • Logging:

    • log:
      Each proxy may use the log directive up to twice, sending logs to two destinations; the global keyword inherits the global log configuration instead. If the global section itself already defines two log targets, "log global" sends to both of them and uses up both slots, so any additional log directive in the proxy has no effect.

      log global
      log <address> [len <length>] <facility> [<level> [<minlevel>]]
      no log
      
        Logs are sent to the local log server by default;
            (1) local2.*      /var/log/local2.log 
            (2) /etc/rsyslog.conf
                $ModLoad imudp
                $UDPServerRun 514
    • log-format <string>: defines a custom log format

    • capture cookie <name> len <length>
      Capture and log a cookie in the request and in the response.

    • capture request header <name> len <length>
      Capture and log the last occurrence of the specified request header.

      Example: capture request header X-Forwarded-For len 15

    • capture response header <name> len <length>
      Capture and log the last occurrence of the specified response header.

      Example:
      capture response header Content-length len 9
      capture response header Location len 15

  • Enabling compression for specified MIME types

    compression algo <algorithm> ...: enables HTTP compression and names the algorithms, e.g. gzip, deflate;
      compression type <mime type> ...: specifies the MIME types to compress;
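As a sketch, compression could be enabled like this (algorithm and MIME types are illustrative):

```
backend websrvs
    compression algo gzip
    compression type text/html text/plain text/css
    server srv1 172.16.100.6:80 check
```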
  • HTTP-level health checks of backend servers: applies only to mode http

    option httpchk
      option httpchk <uri>
      option httpchk <method> <uri>
      option httpchk <method> <uri> <version>
          Defines an HTTP (layer 7) health-check mechanism;
    http-check expect [!] <match> <pattern>
          Make HTTP health checks consider response contents or specific status codes.
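A sketch combining the two directives (the /health URI is a hypothetical check endpoint):

```
backend websrvs
    option httpchk GET /health HTTP/1.0
    http-check expect status 200
    server srv1 172.16.100.6:80 check
    server srv2 172.16.100.7:80 check
```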
  • Connection timeouts:

    • timeout client <timeout>
      Set the maximum inactivity time on the client side. The default unit is milliseconds;

    • timeout server <timeout>
      Set the maximum inactivity time on the server side.

    • timeout http-keep-alive <timeout>
      How long a persistent connection is held open;
      On its client-facing side a proxy should avoid long-lived persistent connections, but without keep-alive every request has to establish a new connection, so keep the keep-alive timeout as short as practical; tune it based on real measurements.

    • timeout http-request <timeout>
      Set the maximum allowed time to wait for a complete HTTP request
      Timeout for the client side to send a complete request;

    • timeout connect <timeout>
      Set the maximum time to wait for a connection attempt to a server to succeed.
      Timeout for establishing a connection to a backend server;

    • timeout client-fin <timeout>
      Set the inactivity timeout on the client side for half-closed connections.

    • timeout server-fin <timeout>
      Set the inactivity timeout on the server side for half-closed connections.
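The timeouts are usually collected in the defaults section; a sketch with illustrative values:

```
defaults
    mode http
    timeout connect 5s              # connection setup toward a backend
    timeout client 30s              # client-side inactivity
    timeout server 30s              # server-side inactivity
    timeout http-request 10s        # time allowed to send a full request
    timeout http-keep-alive 2s      # keep persistent connections short
```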

  • Actions for access control

    • use_backend <backend> [{if | unless} <condition>]
      Switch to a specific backend if/unless an ACL-based condition is matched.
      Uses the given backend when the specified condition is met;

    • block { if | unless } <condition>
      Block a layer 7 request if/unless a condition is matched

      Example:

      acl invalid_src src 172.16.200.2
        block if invalid_src
        errorfile 403 /etc/fstab
        or
        errorloc 403 http://www.baidu.com
    • http-request { allow | deny } [ { if | unless } <condition> ]
      Access control for Layer 7 requests

    • tcp-request connection {accept|reject} [{if | unless} <condition>]
      Perform an action on an incoming connection depending on a layer 4 condition

      Example:

      listen ssh
            bind :22022
            balance leastconn
            acl invalid_src src 172.16.200.2
            tcp-request connection reject if invalid_src
            mode tcp
            server sshsrv1 172.16.100.6:22 check
            server sshsrv2 172.16.100.7:22 check backup
  • acl
    The use of Access Control Lists (ACL) provides a flexible solution to perform content switching and generally to take decisions based on content extracted from the request, the response or any environmental status.

    • acl <aclname> <criterion> [flags] [operator] [<value>] …

      • <aclname>:ACL names must be formed from upper and lower case letters, digits, ‘-‘ (dash), ‘_’ (underscore) , ‘.’ (dot) and ‘:’ (colon).ACL names are case-sensitive.

      • Types of <value>:

        • boolean
        • integer or integer range
        • IP address / network
        • string (exact, substring, suffix, prefix, subdir, domain)
        • regular expression
        • hex block
      • <flags>

        • -i : ignore case during matching of all subsequent patterns.
        • -m : use a specific pattern matching method
        • -n : forbid the DNS resolutions
        • -u : force the unique id of the ACL
        • — : force end of flags. Useful when a string looks like one of the flags.
        • [operator]

          • Integer matching: eq, ge, gt, le, lt

          • String matching:

            - exact match     (-m str) : the extracted string must exactly match the patterns ;
              - substring match (-m sub) : the patterns are looked up inside the extracted string, and the ACL matches if any of them is found inside ;
              - prefix match    (-m beg) : the patterns are compared with the beginning of the extracted string, and the ACL matches if any of them matches.
              - suffix match    (-m end) : the patterns are compared with the end of the extracted string, and the ACL matches if any of them matches.
              - subdir match    (-m dir) : the patterns are looked up inside the extracted string, delimited with slashes ("/"), and the ACL matches if any of them matches.
                  path-component matching, delimited by "/"; e.g. patterns www or html match /var/www/html
              - domain match    (-m dom) : the patterns are looked up inside the extracted string, delimited with dots ("."), and the ACL matches if any of them matches.    
                  domain matching, delimited by dots; e.g. pattern magedu.com matches www.magedu.com
      • Logical operators when ACLs are combined in a condition:

        • AND (implicit)
        • OR (explicit with the “or” keyword or the “||” operator)
        • Negation with the exclamation mark (“!”)

          if invalid_src invalid_port
            if invalid_src || invalid_port
            if ! invalid_src invalid_port
      • <criterion>:

        • dst : ip
        • dst_port : integer
        • src : ip
        • src_port : integer

          Example: acl invalid_src src 172.16.200.2

        • path : string

          This extracts the request's URL path, which starts at the first slash and ends before the question mark (without the host part).
                /path;<params>
          
            path     : exact string match
            path_beg : prefix match
            path_dir : subdir match
            path_dom : domain match
            path_end : suffix match
            path_len : length match
            path_reg : regex match
            path_sub : substring match
        • url : string

          This extracts the request’s URL as presented in the request. A typical use is with prefetch-capable caches, and with portals which need to aggregate multiple information from databases and keep them in caches.

          url     : exact string match
            url_beg : prefix match
            url_dir : subdir match
            url_dom : domain match
            url_end : suffix match
            url_len : length match
            url_reg : regex match
            url_sub : substring match
        • Request headers: hdr([<name>[,<occ>]]) : string
          This extracts the last occurrence of header in an HTTP request.

          hdr([<name>[,<occ>]])     : exact string match
            hdr_beg([<name>[,<occ>]]) : prefix match
            hdr_dir([<name>[,<occ>]]) : subdir match
            hdr_dom([<name>[,<occ>]]) : domain match
            hdr_end([<name>[,<occ>]]) : suffix match
            hdr_len([<name>[,<occ>]]) : length match
            hdr_reg([<name>[,<occ>]]) : regex match
            hdr_sub([<name>[,<occ>]]) : substring match

          Example:
          acl bad_curl hdr_sub(User-Agent) -i curl
          block if bad_curl

        • status : integer
          Returns an integer containing the HTTP status code in the HTTP response.
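The path_* criteria above combine naturally with use_backend; a sketch (the backend names and path prefixes are made up):

```
frontend web
    bind *:80
    acl url_static path_beg -i /static /images /css
    use_backend staticsrvs if url_static
    default_backend websrvs
```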

Original article by s. If you reproduce it, please cite the source: http://www.178linux.com/79370
