1. session_sticky under tengine+luajit, plus the healthcheck module ported from openresty, and the problem of judging one page's status code
(Note: why not just use http://tengine.taobao.org/document_cn/http_upstream_check_cn.html directly? Because the health check will later need changes to the lua file under upstream-healthcheck anyway, so doing it this way keeps the modifications easy.)
http://tengine.taobao.org/document_cn/http_upstream_session_sticky_cn.html
https://github.com/openresty/lua-resty-upstream-healthcheck
The problem (point 2 below):
1. When a member goes DOWN, new requests whose cookie does not yet carry uumyid are indeed steered away from the DOWN node and scheduled onto an up node.
2. Requests previously bound to the down node, i.e. requests whose cookie already carries uumyid, are still scheduled to the node marked DOWN (test operation: mv go.htm 33.htm, so the health-check URL /go.htm starts returning 404).
3. Following up on point 2: after mv go.htm 33.htm, if tengine on that node is then shut down as well, the DOWN node really is kicked out.
Taken together, points 2 and 3 suggest that fallback=on only reacts to an actual connection failure, while a peer that the lua checker has merely marked down can still be selected by the sticky cookie; the status page sketch below can confirm what the checker itself reports for each peer.
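To see what the checker thinks of each peer while running these tests, the library ships a status_page() helper that can be exposed on a separate port. A minimal sketch, adapted from the lua-resty-upstream-healthcheck README (the listen port and the allow/deny rules are placeholders to adjust for your environment):

server {
    listen 8080;
    # status page for all the peers:
    location = /status {
        access_log off;
        allow 127.0.0.1;
        deny all;
        default_type text/plain;
        content_by_lua_block {
            local hc = require "resty.upstream.healthcheck"
            ngx.say("Nginx Worker PID: ", ngx.worker.pid())
            ngx.print(hc.status_page())
        }
    }
}

Hitting this endpoint (e.g. curl http://127.0.0.1:8080/status) after mv go.htm 33.htm shows whether the checker has actually marked the peer DOWN, which separates the checker's view from the session_sticky scheduling decision.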
The tengine configuration is as follows:
lua_package_path "/usr/local/app/nginx/html/?.lua;;";
# note: the path needs the "?" placeholder so that
# require "resty.upstream.healthcheck" can resolve to
# /usr/local/app/nginx/html/resty/upstream/healthcheck.lua;
# pointing at healthcheck.lua directly would make the require fail
# sample upstream block:
upstream cluster1 {
    session_sticky cookie=uumyid fallback=on path=/ mode=insert option=direct;
    server 192.168.10.225:80;
    server 192.168.10.226:80;
    server 192.168.10.227:80;
}

# the size depends on the number of servers in upstream {}:
lua_shared_dict healthcheck 1m;

lua_socket_log_errors off;

init_worker_by_lua_block {
    local hc = require "resty.upstream.healthcheck"

    local ok, err = hc.spawn_checker{
        shm = "healthcheck",    -- defined by "lua_shared_dict"
        upstream = "cluster1",  -- defined by "upstream"
        type = "http",

        http_req = "GET /go.htm HTTP/1.0\r\n\r\n",
                -- raw HTTP request for checking

        interval = 2000,  -- run the check cycle every 2 sec
        timeout = 1000,   -- 1 sec is the timeout for network operations
        fall = 3,  -- # of successive failures before turning a peer down
        rise = 2,  -- # of successive successes before turning a peer up
        valid_statuses = {200, 301},  -- a list of valid HTTP status codes
        concurrency = 10,  -- concurrency level for test requests
    }
    if not ok then
        ngx.log(ngx.ERR, "failed to spawn health checker: ", err)
        return
    end
}
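Since the stated reason for porting the openresty checker (instead of using tengine's built-in upstream check module) is to later modify the lua file under upstream-healthcheck, the sketch below shows the core of what that file does when "judging a page's status code": send a raw HTTP request over a cosocket and parse the status line. This is illustrative only (probe is a hypothetical name, not the library's API), and cosockets must run in a context that allows them, e.g. an ngx.timer.at callback, which is how the library schedules its checks:

-- hypothetical helper, not the library's actual code
local function probe(host, port, path)
    local sock = ngx.socket.tcp()
    sock:settimeout(1000)  -- same 1 sec budget as the "timeout" option above

    local ok, err = sock:connect(host, port)
    if not ok then
        return nil, "connect failed: " .. err
    end

    -- same raw request style as the http_req option above
    local bytes, err = sock:send("GET " .. path .. " HTTP/1.0\r\n\r\n")
    if not bytes then
        sock:close()
        return nil, "send failed: " .. err
    end

    -- the first response line carries the status code
    local line, err = sock:receive()
    sock:close()
    if not line then
        return nil, "receive failed: " .. err
    end

    local m = ngx.re.match(line, [[HTTP/\d+\.\d+\s+(\d+)]], "jo")
    if not m then
        return nil, "bad status line: " .. line
    end
    return tonumber(m[1])  -- compare against valid_statuses, e.g. 200 or 301
end

Customizing the page-level health logic later (for example, checking body content instead of only the status code) amounts to extending this receive-and-parse step inside healthcheck.lua.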