Preface
In a Tomcat cluster, when one node fails, how do the other nodes take over the failed node's session data? The solution presented here shares sessions using MSM + Memcached.
Background
MSM
MSM (Memcached Session Manager) is a highly available session-sharing solution for Tomcat. Besides reading session data quickly from local memory (sticky sessions only), it can store and retrieve sessions through Memcached to achieve high availability.
How It Works
How it works in Sticky Session mode
# The local Tomcat session is the primary copy; the session in Memcached is the backup
MSM, installed into Tomcat, keeps sessions in local memory. When a request finishes, if the corresponding session does not exist locally (i.e., this is the user's first request), a copy of it is written to Memcached. When the next request for that session arrives, Tomcat uses its local session; after the request is processed, any session changes are synchronized to Memcached to keep the data consistent.
When one Tomcat in the cluster goes down, the next request is routed to another Tomcat. The Tomcat handling this request knows nothing about the session, so it looks the session up in Memcached, updates it, and saves it locally. When the request finishes and the session has been modified, it is written back to Memcached as the backup.
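The sticky-mode flow just described can be sketched in a few lines. This is a simplified, hypothetical model for illustration only; real MSM lives inside Tomcat's session manager and talks to memcached over the network:

```python
# Simplified model of MSM sticky mode: each Tomcat keeps the primary copy
# of its sessions locally; memcached holds a backup that survives failures.

class Memcached:
    def __init__(self):
        self.store = {}

class Tomcat:
    def __init__(self, name, backup):
        self.name = name
        self.local = {}       # primary copy (sticky mode)
        self.backup = backup  # shared memcached backup

    def handle(self, session_id, data):
        if session_id not in self.local:
            # First time this node sees the session: restore from backup
            self.local[session_id] = dict(self.backup.store.get(session_id, {}))
        self.local[session_id].update(data)
        # After the request, changes are synced back to memcached
        self.backup.store[session_id] = dict(self.local[session_id])
        return self.local[session_id]

mc = Memcached()
a, b = Tomcat("TomcatA", mc), Tomcat("TomcatB", mc)

a.handle("sid-1", {"user": "scholar"})   # served by TomcatA, backed up
# TomcatA "fails"; the balancer routes the next request to TomcatB,
# which recovers the session from the memcached backup:
state = b.handle("sid-1", {"page": "2"})
print(state)  # {'user': 'scholar', 'page': '2'}
```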
How it works in Non-sticky Session mode
# The local Tomcat session is only a relay; Memcached holds the primary and backup copies
When a request arrives, the backup session is loaded into the local container; if the backup fails to load, the session is loaded from the primary copy.
After the request is processed, session changes are synchronized to Memcached and the local Tomcat session is cleared.
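The two non-sticky steps above can be sketched the same way (again a simplified illustration; the node names n1/n2 are placeholders for the two memcached instances):

```python
# Simplified model of MSM non-sticky mode: Tomcat keeps no session state
# between requests; memcached holds the primary and backup copies.

class Node:
    def __init__(self):
        self.store = {}

def handle_request(primary, backup, session_id, data):
    # Step 1: load the backup session; fall back to the primary copy
    session = backup.store.get(session_id)
    if session is None:
        session = primary.store.get(session_id, {})
    session = dict(session)  # local copy exists only for this request
    session.update(data)
    # Step 2: after the request, sync to memcached and drop the local copy
    primary.store[session_id] = dict(session)
    backup.store[session_id] = dict(session)
    return session

n1, n2 = Node(), Node()
handle_request(n2, n1, "sid-1", {"user": "scholar"})
result = handle_request(n2, n1, "sid-1", {"page": "2"})
print(result)  # {'user': 'scholar', 'page': '2'}
```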
Implementation
Lab Topology
# System environment: CentOS 6.6
Install and Configure nginx
# Resolve dependencies
[root@scholar ~]# yum groupinstall "Development Tools" "Server Platform Development" -y
[root@scholar ~]# yum install openssl-devel pcre-devel -y
[root@scholar ~]# groupadd -r nginx
[root@scholar ~]# useradd -r -g nginx nginx
[root@scholar ~]# tar xf nginx-1.6.3.tar.gz
[root@scholar ~]# cd nginx-1.6.3
[root@scholar nginx-1.6.3]# ./configure \
> --prefix=/usr/local/nginx \
> --sbin-path=/usr/sbin/nginx \
> --conf-path=/etc/nginx/nginx.conf \
> --error-log-path=/var/log/nginx/error.log \
> --http-log-path=/var/log/nginx/access.log \
> --pid-path=/var/run/nginx/nginx.pid \
> --lock-path=/var/lock/nginx.lock \
> --user=nginx \
> --group=nginx \
> --with-http_ssl_module \
> --with-http_flv_module \
> --with-http_stub_status_module \
> --with-http_gzip_static_module \
> --http-client-body-temp-path=/usr/local/nginx/client/ \
> --http-proxy-temp-path=/usr/local/nginx/proxy/ \
> --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ \
> --http-uwsgi-temp-path=/usr/local/nginx/uwsgi \
> --http-scgi-temp-path=/usr/local/nginx/scgi \
> --with-pcre
[root@scholar nginx-1.6.3]# make && make install
Provide a SysV init script for nginx
[root@scholar ~]# vim /etc/rc.d/init.d/nginx
# Create /etc/rc.d/init.d/nginx with the following content:

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /etc/sysconfig/nginx
# pidfile: /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Make the script executable
[root@scholar ~]# chmod +x /etc/rc.d/init.d/nginx
Add it to the service management list and enable it at boot
[root@scholar ~]# chkconfig --add nginx
[root@scholar ~]# chkconfig nginx on
Configure nginx
[root@scholar ~]# vim /etc/nginx/nginx.conf

upstream www.scholar.com {
    server 172.16.10.123:8080;
    server 172.16.10.124:8080;
}
server {
    listen       80;
    server_name  www.scholar.com;
    location / {
        proxy_pass http://www.scholar.com;
        index  index.jsp index.html index.htm;
    }
}

[root@scholar ~]# service nginx start
Starting nginx:                                            [  OK  ]
Install and Configure Tomcat
Install the JDK
[root@node1 ~]# rpm -ivh jdk-7u79-linux-x64.rpm
[root@node1 ~]# vim /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export PATH=$JAVA_HOME/bin:$PATH
[root@node1 ~]# . /etc/profile.d/java.sh
Install Tomcat
[root@node1 ~]# tar xf apache-tomcat-7.0.62.tar.gz -C /usr/local/
[root@node1 ~]# cd /usr/local/
[root@node1 local]# ln -sv apache-tomcat-7.0.62/ tomcat
[root@node1 local]# vim /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH
[root@node1 local]# . /etc/profile.d/tomcat.sh
Provide an init script
[root@node1 local]# vim /etc/rc.d/init.d/tomcat

#!/bin/sh
# Tomcat init script for Linux.
#
# chkconfig: 2345 96 14
# description: The Apache Tomcat servlet/JSP container.
#
JAVA_OPTS='-Xms64m -Xmx128m'
JAVA_HOME=/usr/java/latest
CATALINA_HOME=/usr/local/tomcat
export JAVA_HOME CATALINA_HOME
case $1 in
start)
    exec $CATALINA_HOME/bin/catalina.sh start ;;
stop)
    exec $CATALINA_HOME/bin/catalina.sh stop ;;
restart)
    $CATALINA_HOME/bin/catalina.sh stop
    sleep 2
    exec $CATALINA_HOME/bin/catalina.sh start ;;
*)
    echo "Usage: `basename $0` {start|stop|restart}"
    exit 1 ;;
esac

[root@node1 local]# chmod +x /etc/rc.d/init.d/tomcat
[root@node1 local]# chkconfig --add tomcat
[root@node1 local]# chkconfig tomcat on
# Run all of the above on both Tomcat nodes
Access Test
Prepare a test page
[root@node1 local]# cd tomcat/webapps/
[root@node1 webapps]# mkdir -pv test/WEB-INF/{classes,lib}
[root@node1 webapps]# cd test/
[root@node1 test]# vim index.jsp

<%@ page language="java" %>
<html>
  <head><title>TomcatA</title></head>
  <body>
    <h1><font color="red">TomcatA.scholar.com</font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
        <% session.setAttribute("scholar.com","scholar.com"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>

# On the other node, replace TomcatA with TomcatB and set the color to blue
[root@node1 test]# service tomcat start
At this point the session information is not consistent across nodes; next we configure MSM to share sessions.
Install memcached
# Resolve dependencies
[root@scholar ~]# yum groupinstall "Development Tools" "Server Platform Development" -y

# Install libevent
# memcached depends on the libevent API, so install it first
[root@scholar ~]# tar xf libevent-2.0.22-stable.tar.gz
[root@scholar ~]# cd libevent-2.0.22-stable
[root@scholar libevent-2.0.22-stable]# ./configure --prefix=/usr/local/libevent
[root@scholar libevent-2.0.22-stable]# make && make install
[root@scholar ~]# echo "/usr/local/libevent/lib" > /etc/ld.so.conf.d/libevent.conf
[root@scholar ~]# ldconfig

# Install and configure memcached
[root@scholar ~]# tar xf memcached-1.4.24.tar.tar
[root@scholar ~]# cd memcached-1.4.24
[root@scholar memcached-1.4.24]# ./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent
[root@scholar memcached-1.4.24]# make && make install
Provide an init script
[root@scholar ~]# vim /etc/init.d/memcached

#!/bin/bash
#
# Init file for memcached
#
# chkconfig: - 86 14
# description: Distributed memory caching daemon
#
# processname: memcached
# config: /etc/sysconfig/memcached

. /etc/rc.d/init.d/functions

## Default variables
PORT="11211"
USER="nobody"
MAXCONN="1024"
CACHESIZE="64"

RETVAL=0
prog="/usr/local/memcached/bin/memcached"
desc="Distributed memory caching"
lockfile="/var/lock/subsys/memcached"

start() {
    echo -n $"Starting $desc (memcached): "
    daemon $prog -d -p $PORT -u $USER -c $MAXCONN -m $CACHESIZE
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success && touch $lockfile || failure
    echo
    return $RETVAL
}

stop() {
    echo -n $"Shutting down $desc (memcached): "
    killproc $prog
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success && rm -f $lockfile || failure
    echo
    return $RETVAL
}

restart() {
    stop
    start
}

reload() {
    echo -n $"Reloading $desc ($prog): "
    killproc $prog -HUP
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success || failure
    echo
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    condrestart)
        [ -e $lockfile ] && restart
        RETVAL=$?
        ;;
    reload)
        reload
        ;;
    status)
        status $prog
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|condrestart|status}"
        RETVAL=1
esac
exit $RETVAL
Make it executable and start the service
[root@scholar ~]# chmod +x /etc/init.d/memcached
[root@scholar ~]# chkconfig --add memcached
[root@scholar ~]# chkconfig memcached on
[root@scholar ~]# service memcached start
# Run all of the above on both memcached nodes
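A quick way to confirm a memcached node is serving requests is to speak its text protocol directly: send `stats\r\n` to port 11211 and read the `STAT` lines. The parser below works on that raw reply; the sample reply here is illustrative, not captured from a real server (against a live node you would read the reply from a socket, as in the commented lines):

```python
# memcached's "stats" command returns lines of the form "STAT <name> <value>"
# terminated by "END". parse_stats turns that reply text into a dict.

def parse_stats(reply):
    """Parse a memcached 'stats' reply into a {name: value} dict."""
    stats = {}
    for line in reply.splitlines():
        if line == "END":
            break
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

# Against a live node you would do something like (host/port from this lab):
#   import socket
#   s = socket.create_connection(("172.16.10.126", 11211), timeout=2)
#   s.sendall(b"stats\r\n")
#   reply = s.recv(65536).decode()

sample = "STAT pid 1234\r\nSTAT curr_connections 10\r\nSTAT total_items 42\r\nEND\r\n"
stats = parse_stats(sample)
print(stats["total_items"])  # 42
```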
Tomcat Configuration
Copy the required jar files into the lib directory under each Tomcat node's installation directory
[root@node1 ~]# cd msm/
[root@node1 msm]# ls
javolution-5.4.3.1.jar               msm-javolution-serializer-1.8.1.jar
memcached-session-manager-1.8.1.jar  spymemcached-2.10.2.jar
memcached-session-manager-tc7-1.8.1.jar
[root@node1 msm]# cp * /usr/local/tomcat/lib/
# Run the above on every Tomcat node
[root@node1 msm]# vim /usr/local/tomcat/conf/server.xml

<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved" />
  </GlobalNamingResources>
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>
      <Host name="www.scholar.com" appBase="webapps" unpackWARs="true" autoDeploy="true">
        <Context path="/test" docBase="/usr/local/tomcat/webapps/test/" reloadable="true">
          <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                   memcachedNodes="n1:172.16.10.126:11211,n2:172.16.10.212:11211"
                   failoverNodes="n1"
                   requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
                   transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory" />
        </Context>
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="scholar_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
    </Engine>
  </Service>
</Server>
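In this server.xml, memcachedNodes lists both memcached instances, and failoverNodes="n1" tells MSM that this Tomcat should avoid n1 for its regular session backups and keep it in reserve (commonly failoverNodes names the memcached instance most likely to fail together with this Tomcat, so the primary backup lands on a different machine). A rough sketch of that selection rule follows; it is a simplified illustration of the semantics, not MSM's actual code:

```python
def pick_backup_node(nodes, failover_nodes, available):
    """Prefer a non-failover node; use failover nodes only as a last resort."""
    regular = [n for n in nodes if n not in failover_nodes and n in available]
    if regular:
        return regular[0]
    fallback = [n for n in failover_nodes if n in available]
    return fallback[0] if fallback else None

nodes = ["n1", "n2"]   # memcachedNodes from server.xml
failover = ["n1"]      # failoverNodes="n1"

print(pick_backup_node(nodes, failover, {"n1", "n2"}))  # n2 (normal case)
print(pick_backup_node(nodes, failover, {"n1"}))        # n1 (only when n2 is down)
```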
Sync the configuration file to the other node
[root@node1 msm]# scp /usr/local/tomcat/conf/server.xml node2:/usr/local/tomcat/conf/
[root@node1 msm]# service tomcat restart
[root@node1 msm]# ssh node2 'service tomcat restart'
Access Test
As shown, session sharing is now working. Next we simulate a failure of the TomcatB node to see whether the session changes.
[root@node2 msm]# service tomcat stop
Although TomcatB's failure caused the user's requests to be routed to the TomcatA node, the Session ID did not change: every node in the session cluster holds the global session information, so user access continues uninterrupted.
If the n2 (memcached) node fails, will the session information move to the other memcached node? Let's try it.
[root@scholar ~]# service memcached stop
The session has moved to n1, and the Session ID did not change. With that, session sharing for Tomcat via MSM + Memcached is accomplished.
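In sticky mode MSM appends the id of the memcached node holding the backup to the session ID; the "-n1" suffix visible in the session IDs in the comments below is this marker, which is how we can tell which node a session moved to. A small helper to read that suffix (an illustrative sketch; the known node ids come from our memcachedNodes setting):

```python
def backup_node(session_id, known_nodes=("n1", "n2")):
    """Return the memcached node id encoded in an MSM session ID, if any."""
    suffix = session_id.rsplit("-", 1)[-1]
    return suffix if suffix in known_nodes else None

sid = "A14E6BC4D742B22CEBDFB1D46A85A5A1-n1"  # shape of an MSM sticky-mode ID
print(backup_node(sid))                      # n1
print(backup_node("578EFEAAEEEE8B85812498547E0FE283"))  # None (no node suffix)
```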
The end
That wraps up this walkthrough of Tomcat session sharing with MSM + Memcached. If you run into problems while following along, feel free to leave a comment. The above is only my personal study notes; corrections for any errors or omissions are welcome.
Original article by 书生. If you repost it, please credit the source: http://www.178linux.com/5984
Comments (12)
Why do my two Tomcats show different session IDs after I set this up?
Session ID A14E6BC4D742B22CEBDFB1D46A85A5A1-n1
Session ID 578EFEAAEEEE8B85812498547E0FE283-n1
javolution-5.4.3.1.jar msm-javolution-serializer-1.8.1.jar
memcached-session-manager-1.8.1.jar spymemcached-2.10.2.jar
memcached-session-manager-tc7-1.8.1.jar
Are there version requirements for these jars? I used the latest versions of all of them.
@bun: The MSM-related jar versions must match one another, and must also match your Tomcat version.
@书生:memcached-session-manager-1.8.3.jar
memcached-session-manager-tc7-1.8.3.jar
spymemcached-2.11.1.jar
javolution-5.4.3.1.jar
msm-javolution-serializer-1.8.3.jar
Is there anything wrong with this version combination?
I found that when I access the Tomcat instances directly, the session is consistent.
But after putting nginx in front as a load balancer, each instance gets a different session value on every refresh.
Where might the configuration be wrong?
@bun: non-sticky mode
@bun: Go through it again from the start, or watch the video to check whether anything was missed. With the configuration above this generally does not happen. Perhaps the refresh interval was too long, or the local cache was cleared.
@书生: The Tomcat instances are accessed at http://1.2.3.4:8080/test/session.jsp and http://1.2.3.4:18080/test/session.jsp
The nginx configuration is
upstream tomcat {
server 1.2.3.4:8080;
server 1.2.3.4:18080;
}
For http://www.xxx.com
proxy_pass http://tomcat/test/;
http://www.xxx.com/session.jsp works,
but then the session changes on every refresh.
If I change it to:
For http://www.xxx.com
proxy_pass http://tomcat/;
then http://www.xxx.com/test/session.jsp works and the session no longer changes,
but the URL gains the extra /test/ path.
How do I fix the former configuration?
@bun: Why can't I make sense of your nginx configuration?
@书生: What is the difference between proxy_pass http://tomcat/ and
proxy_pass http://tomcat/test?
@书生: On Tomcat the page URL includes the path:
http://1.2.3.4:8080/test/session.jsp
but nginx proxies the top-level domain: http://www.xxx.com/session.jsp
so I used proxy_pass http://tomcat/test
Configured that way, the session keeps changing when accessed through nginx.
With proxy_pass http://tomcat/ it works normally, but the access URL gains the extra test path:
http://www.xxx.com/test/session.jsp
@bun: Why worry about that? If you really must get to the bottom of it, reviewing your nginx fundamentals will make it clear. Try to solve what you can on your own.
Anything 书生 produces is top quality; it's my fault this is only getting pinned now~