[seata] Slow lock release after a rollback with Seata 1.5.2

2024-05-09 667 views
1
Ⅰ. Issue Description

With Seata 1.5.2, lock release after a rollback is slow. The business logic has finished, the data has been rolled back, and the undo_log table has been cleaned up, yet the lock stays held for almost two minutes before it is finally released. I cannot tell why; the logs all look normal. The problem does not occur when the transaction commits normally.

In the business code this is just a very short test method that simply inserts consumption data in different services; it should not run anywhere near that long or hold locks for that long.

After a rollback has happened, retrying the operation frequently fails with a global lock wait timeout.

Ⅱ. Describe what happened

org.springframework.dao.QueryTimeoutException:

Error updating database. Cause: io.seata.rm.datasource.exec.LockWaitTimeoutException: Global lock wait timeout
The error may exist in com/wuchan/dao/mapper/LsMemberMapper.java (best guess)
The error may involve com.wuchan.dao.mapper.LsMemberMapper.updateById-Inline
The error occurred while setting parameters
SQL: UPDATE ls_member SET nick_name=?, birthday=?, face=?, mobile=?, integral_num=?, invite_code=?, open_id=?, sex=?, last_login_date=?, allow_place_order=?, status=?, create_by=?, create_time=?, update_by=?, update_time=?, version=? WHERE id=?
Cause: io.seata.rm.datasource.exec.LockWaitTimeoutException: Global lock wait timeout

; Global lock wait timeout; nested exception is io.seata.rm.datasource.exec.LockWaitTimeoutException: Global lock wait timeout at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:120) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:91) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:441) at com.sun.proxy.$Proxy116.update(Unknown Source) at org.mybatis.spring.SqlSessionTemplate.update(SqlSessionTemplate.java:288) at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:64) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy$PlainMethodInvoker.invoke(MybatisMapperProxy.java:148) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:89) at com.sun.proxy.$Proxy121.updateById(Unknown Source) at com.wuchan.service.impl.LsMemberServiceImpl.updateMemberById(LsMemberServiceImpl.java:214) at com.wuchan.service.impl.LsMemberServiceImpl.test(LsMemberServiceImpl.java:379) at com.wuchan.service.impl.LsMemberServiceImpl$$FastClassBySpringCGLIB$$b80af870.invoke() at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:779) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750) at io.seata.spring.annotation.GlobalTransactionalInterceptor$2.execute(GlobalTransactionalInterceptor.java:205) at io.seata.tm.api.TransactionalTemplate.execute(TransactionalTemplate.java:127) at io.seata.spring.annotation.GlobalTransactionalInterceptor.handleGlobalTransaction(GlobalTransactionalInterceptor.java:202) at io.seata.spring.annotation.GlobalTransactionalInterceptor.invoke(GlobalTransactionalInterceptor.java:172) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:692) at com.wuchan.service.impl.LsMemberServiceImpl$$EnhancerBySpringCGLIB$$923c055e.test() at com.wuchan.controller.LsMemberController.test(LsMemberController.java:151) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:105) at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:878) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:792) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) at javax.servlet.http.HttpServlet.service(HttpServlet.java:626) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) at javax.servlet.http.HttpServlet.service(HttpServlet.java:733) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:374) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1707) at 
org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) Caused by: io.seata.rm.datasource.exec.LockWaitTimeoutException: Global lock wait timeout at io.seata.rm.datasource.exec.LockRetryController.sleep(LockRetryController.java:66) at io.seata.rm.datasource.ConnectionProxy$LockRetryPolicy.doRetryOnLockConflict(ConnectionProxy.java:363) at io.seata.rm.datasource.exec.AbstractDMLBaseExecutor$LockRetryPolicy.execute(AbstractDMLBaseExecutor.java:186) at io.seata.rm.datasource.exec.AbstractDMLBaseExecutor.executeAutoCommitTrue(AbstractDMLBaseExecutor.java:142) at io.seata.rm.datasource.exec.AbstractDMLBaseExecutor.doExecute(AbstractDMLBaseExecutor.java:82) at io.seata.rm.datasource.exec.BaseTransactionalExecutor.execute(BaseTransactionalExecutor.java:126) at io.seata.rm.datasource.exec.ExecuteTemplate.execute(ExecuteTemplate.java:126) at io.seata.rm.datasource.exec.ExecuteTemplate.execute(ExecuteTemplate.java:54) at io.seata.rm.datasource.PreparedStatementProxy.execute(PreparedStatementProxy.java:55) at org.apache.ibatis.executor.statement.PreparedStatementHandler.update(PreparedStatementHandler.java:47) at org.apache.ibatis.executor.statement.RoutingStatementHandler.update(RoutingStatementHandler.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:64) at com.sun.proxy.$Proxy164.update(Unknown Source) at org.apache.ibatis.executor.SimpleExecutor.doUpdate(SimpleExecutor.java:50) at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:117) at org.apache.ibatis.executor.CachingExecutor.update(CachingExecutor.java:76) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.ibatis.plugin.Invocation.proceed(Invocation.java:49) at com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor.intercept(MybatisPlusInterceptor.java:106) at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:62) at com.sun.proxy.$Proxy163.update(Unknown Source) at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:194) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:427) ... 
72 common frames omitted Caused by: io.seata.rm.datasource.exec.LockConflictException: get global lock fail, xid:10.0.100.41:8091:18416284777611292, lockKeys:ls_member:1 at io.seata.rm.datasource.ConnectionProxy.recognizeLockKeyConflictException(ConnectionProxy.java:159) at io.seata.rm.datasource.ConnectionProxy.processGlobalTransactionCommit(ConnectionProxy.java:252) at io.seata.rm.datasource.ConnectionProxy.doCommit(ConnectionProxy.java:230) at io.seata.rm.datasource.ConnectionProxy.lambda$commit$0(ConnectionProxy.java:188) at io.seata.rm.datasource.ConnectionProxy$LockRetryPolicy.execute(ConnectionProxy.java:343) at io.seata.rm.datasource.ConnectionProxy.commit(ConnectionProxy.java:187) at io.seata.rm.datasource.exec.AbstractDMLBaseExecutor.lambda$executeAutoCommitTrue$2(AbstractDMLBaseExecutor.java:144) at io.seata.rm.datasource.ConnectionProxy$LockRetryPolicy.doRetryOnLockConflict(ConnectionProxy.java:355) ... 104 common frames omitted

Ⅲ. Describe what you expected to happen

I would like to know how to fix this, or whether it is simply how the new version behaves.

Ⅳ. How to reproduce it (as minimally and precisely as possible)

Any normal test that triggers a rollback reproduces the problem.

Minimal yet complete reproducer code (or URL to code):
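A minimal sketch of the kind of test described above: a @GlobalTransactional method ("member-test") that updates ls_member locally, calls a second service that updates ls_integral, and then throws to force a global rollback. LsMemberMapper and the transaction name come from the stack trace and logs; LsMember's fields, IntegralClient, and consumeIntegral are invented for illustration and are not from the original report.

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

// Hypothetical stand-ins so the sketch compiles; the real project has its own types.
interface LsMemberMapper { int updateById(LsMember member); }       // stands in for the MyBatis-Plus mapper
interface IntegralClient { void consumeIntegral(Long memberId); }   // stands in for the cross-service (RPC) call
class LsMember { Long id; Long getId() { return id; } }

@Service
public class MemberTestService {

    private final LsMemberMapper lsMemberMapper;
    private final IntegralClient integralClient;

    public MemberTestService(LsMemberMapper lsMemberMapper, IntegralClient integralClient) {
        this.lsMemberMapper = lsMemberMapper;
        this.integralClient = integralClient;
    }

    // Mirrors the scenario in the report: two AT branches (ls_member locally, ls_integral remotely),
    // then a forced exception so the TM sends a GlobalRollbackRequest.
    @GlobalTransactional(name = "member-test", timeoutMills = 60000)
    public void test(LsMember member) {
        lsMemberMapper.updateById(member);              // branch 1: global lock on ls_member:<id>
        integralClient.consumeIntegral(member.getId()); // branch 2: second service updates ls_integral
        throw new RuntimeException("force rollback");   // trigger the global rollback
    }
}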

Ⅴ. Anything else we need to know?

Is this a behavior of the new version? I don't recall this problem in earlier versions.

Ⅵ. Environment:
  1. jdk 1.8
  2. seata 1.5.2
  3. mysql 5.7

Answers

1

Most likely an RPC retry between your services caused the branch to be registered twice, and one of those branches was registered while the first synchronous rollback was already running. The first rollback therefore missed that branch, and the leftover branch and its locks were only cleaned up about two minutes later. Check the server and RM logs yourself for the corresponding rollback and register timestamps; comparing them should make it obvious.
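The report does not say which RPC framework the services use; if it happens to be OpenFeign, one way to rule out the client-side retry suspected above is to switch the retryer off, so a slow call cannot be transparently re-sent and register the same branch a second time. A minimal sketch (the configuration class name is made up):

import feign.Retryer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical sketch: with OpenFeign, a Retryer.NEVER_RETRY bean makes a timed-out call
// fail immediately instead of being replayed, so a single business update cannot end up
// registered as two Seata branches by a transparent retry.
@Configuration
public class FeignNoRetryConfig {

    @Bean
    public Retryer feignRetryer() {
        return Retryer.NEVER_RETRY;
    }
}

Other RPC stacks have equivalent switches; the point is only to check whether the duplicate BranchRegisterRequest disappears once retries are off.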

1

2023-06-13 01:11:20.930 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: GlobalBeginRequest{transactionName='member-test', timeout=60000}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:20.934 INFO --- [ServerHandlerThread_1_43_500] [io.seata.server.coordinator.DefaultCoordinator] [doGlobalBegin] [10.0.100.41:8091:18416284777614645]: Begin new global transaction applicationId: membership-service,transactionServiceGroup: default_tx_group, transactionName: member-test,timeout:60000,xid:10.0.100.41:8091:18416284777614645
2023-06-13 01:11:20.935 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[single]: GlobalBeginResponse{xid='10.0.100.41:8091:18416284777614645', extraData='null', resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:21.708 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[merged]: BranchRegisterRequest{xid='10.0.100.41:8091:18416284777614645', branchType=AT, resourceId='jdbc:mysql://10.0.10.44:3306/life_shop', lockKey='ls_member:1', applicationData='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:21.733 INFO --- [ForkJoinPool.commonPool-worker-2] [io.seata.server.coordinator.AbstractCore] [lambda$branchRegister$0] [10.0.100.41:8091:18416284777614645]: Register branch successfully, xid = 10.0.100.41:8091:18416284777614645, branchId = 18416284777614648, resourceId = jdbc:mysql://10.0.10.44:3306/life_shop ,lockKeys = ls_member:1
2023-06-13 01:11:21.733 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[merged]: BranchRegisterResponse{branchId=18416284777614648, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:26.884 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[merged]: BranchRegisterRequest{xid='10.0.100.41:8091:18416284777614645', branchType=AT, resourceId='jdbc:mysql://10.0.10.44:3306/life_shop', lockKey='ls_integral:115', applicationData='{"skipCheckLock":true}'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:26.903 INFO --- [ForkJoinPool.commonPool-worker-2] [io.seata.server.coordinator.AbstractCore] [lambda$branchRegister$0] [10.0.100.41:8091:18416284777614645]: Register branch successfully, xid = 10.0.100.41:8091:18416284777614645, branchId = 18416284777614655, resourceId = jdbc:mysql://10.0.10.44:3306/life_shop ,lockKeys = ls_integral:115
2023-06-13 01:11:26.904 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[merged]: BranchRegisterResponse{branchId=18416284777614655, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:27.069 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: GlobalRollbackRequest{xid='10.0.100.41:8091:18416284777614645', extraData='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:27.211 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: BranchRollbackResponse{xid='10.0.100.41:8091:18416284777614645', branchId=18416284777614648, branchStatus=PhaseTwo_Rollbacked, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:27.214 INFO --- [ServerHandlerThread_1_46_500] [io.seata.server.coordinator.DefaultCore] [lambda$doGlobalRollback$3] [10.0.100.41:8091:18416284777614645]: Rollback branch transaction successfully, xid = 10.0.100.41:8091:18416284777614645 branchId = 18416284777614648
2023-06-13 01:11:27.284 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: BranchRollbackResponse{xid='10.0.100.41:8091:18416284777614645', branchId=18416284777614655, branchStatus=PhaseTwo_Rollbacked, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:11:27.287 INFO --- [ServerHandlerThread_1_46_500] [io.seata.server.coordinator.DefaultCore] [lambda$doGlobalRollback$3] [10.0.100.41:8091:18416284777614645]: Rollback branch transaction successfully, xid = 10.0.100.41:8091:18416284777614645 branchId = 18416284777614655
2023-06-13 01:11:27.287 INFO --- [ServerHandlerThread_1_46_500] [io.seata.server.coordinator.DefaultCore] [doGlobalRollback] [10.0.100.41:8091:18416284777614645]: Rollback global transaction successfully, xid = 10.0.100.41:8091:18416284777614645.
2023-06-13 01:11:27.290 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[single]: GlobalRollbackResponse{globalStatus=Rollbacked, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:13:31.376 INFO --- [RetryRollbacking_1_1] [io.seata.server.coordinator.DefaultCore] [doGlobalRollback] [10.0.100.41:8091:18416284777614645]: Rollback global transaction successfully, xid = 10.0.100.41:8091:18416284777614645.

Looking at the service logs, I don't see any branch being registered again during the rollback, and the lock still wasn't released when the global transaction finished.

6

Is your database backed by read-write splitting?

2

The logs are clear. "Rollback global transaction successfully" can only be printed after the unlock SQL has been issued; if that SQL had failed, the line would never appear, the exception would be thrown and the transaction would be handed over to the RetryRollbacking thread for later processing. The RetryRollbacking_1_1 line only shows the xid, which means the locks had already been deleted when the branches were rolled back. If you suspect the locks were not deleted, then after "Rollback global transaction successfully" has been printed, run select * from lock_table where xid = <the xid of that rollback> against the database. If it returns rows, post them; if it returns nothing, your "Global lock wait timeout" has nothing to do with this transaction at all. In that case check the TC logs for the hold-related lines yourself; they print which xid is holding the lock.
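The lock_table check suggested above can also be scripted so it runs right after the "Rollback global transaction successfully" line appears. A minimal JDBC sketch, assuming the standard Seata lock_table schema and direct access to the TC's database (URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: after "Rollback global transaction successfully" has been printed,
// check whether lock_table still holds any row lock for that xid.
public class LockTableCheck {
    public static void main(String[] args) throws Exception {
        String xid = "10.0.100.41:8091:18416284777611292";  // the xid from the error in section Ⅱ
        String url = "jdbc:mysql://tc-db-host:3306/seata";   // placeholder: the TC (seata-server) database
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT row_key, table_name, pk, branch_id, gmt_create FROM lock_table WHERE xid = ?")) {
            ps.setString(1, xid);
            try (ResultSet rs = ps.executeQuery()) {
                boolean found = false;
                while (rs.next()) {
                    found = true;
                    System.out.printf("still locked: %s (branch %d, created %s)%n",
                            rs.getString("row_key"), rs.getLong("branch_id"), rs.getTimestamp("gmt_create"));
                }
                if (!found) {
                    System.out.println("no lock_table rows for this xid: the lock was already released");
                }
            }
        }
    }
}

If the query returns no rows, the timeout reported in section Ⅱ was caused by some other xid, which the TC's hold-related log lines will identify.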

3

The lock is indeed released after "Rollback global transaction successfully", but the two log lines below are two minutes apart, so the lock is released very slowly. During that window I checked: the business logic had finished, the data had been rolled back, and the undo_log rows had been deleted, yet the lock rows in lock_table stayed there for a full two minutes before being released. I don't understand why the release is so slow.

2023-06-13 01:11:27.290 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[single]: GlobalRollbackResponse{globalStatus=Rollbacked, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 01:13:31.376 INFO --- [RetryRollbacking_1_1] [io.seata.server.coordinator.DefaultCore] [doGlobalRollback] [10.0.100.41:8091:18416284777614645]: Rollback global transaction successfully, xid = 10.0.100.41:8091:18416284777614645.

3

I already told you how to investigate. The log line two minutes later is only the cleanup of residual locks; the unlock had already happened before that. If you believe there are still locks in the table, post the logs together with a screenshot of the lock records, and also answer my first question about whether read-write splitting is in use. Please don't skip any of my questions.

6

Yes, we are using read-write splitting.

The logs show that after the rollback succeeded, the lock still exists:

2023-06-13 08:16:46.353 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: GlobalBeginRequest{transactionName='member-test', timeout=60000}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:46.358 INFO --- [ServerHandlerThread_1_15_500] [io.seata.server.coordinator.DefaultCoordinator] [doGlobalBegin] [10.0.100.41:8091:18416653908602464]: Begin new global transaction applicationId: membership-service,transactionServiceGroup: default_tx_group, transactionName: member-test,timeout:60000,xid:10.0.100.41:8091:18416653908602464
2023-06-13 08:16:46.359 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[single]: GlobalBeginResponse{xid='10.0.100.41:8091:18416653908602464', extraData='null', resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:47.702 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[merged]: BranchRegisterRequest{xid='10.0.100.41:8091:18416653908602464', branchType=AT, resourceId='jdbc:mysql://10.0.10.44:3306/life_shop', lockKey='ls_member:1', applicationData='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:47.726 INFO --- [ForkJoinPool.commonPool-worker-1] [io.seata.server.coordinator.AbstractCore] [lambda$branchRegister$0] [10.0.100.41:8091:18416653908602464]: Register branch successfully, xid = 10.0.100.41:8091:18416653908602464, branchId = 18416653908602467, resourceId = jdbc:mysql://10.0.10.44:3306/life_shop ,lockKeys = ls_member:1
2023-06-13 08:16:47.726 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[merged]: BranchRegisterResponse{branchId=18416653908602467, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:49.097 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[merged]: BranchRegisterRequest{xid='10.0.100.41:8091:18416653908602464', branchType=AT, resourceId='jdbc:mysql://10.0.10.44:3306/life_shop', lockKey='ls_integral:121', applicationData='{"skipCheckLock":true}'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:49.108 WARN --- [TxTimeoutCheck_1_1] [com.alibaba.druid.pool.DruidAbstractDataSource] [testConnectionInternal] []: discard long time none received connection. , jdbcUrl : jdbc:mysql://10.0.10.44:3306/life_shop?useUnicode=true&characterEncoding=utf-8&autoReconnect=true&serverTimezone=Asia/Shanghai&allowMultiQueries=true&useSSL=false, version : 1.2.5, lastPacketReceivedIdleMillis : 200967
2023-06-13 08:16:49.120 INFO --- [ForkJoinPool.commonPool-worker-1] [io.seata.server.coordinator.AbstractCore] [lambda$branchRegister$0] [10.0.100.41:8091:18416653908602464]: Register branch successfully, xid = 10.0.100.41:8091:18416653908602464, branchId = 18416653908602470, resourceId = jdbc:mysql://10.0.10.44:3306/life_shop ,lockKeys = ls_integral:121
2023-06-13 08:16:49.120 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[merged]: BranchRegisterResponse{branchId=18416653908602470, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:49.489 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: GlobalRollbackRequest{xid='10.0.100.41:8091:18416653908602464', extraData='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:49.735 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: BranchRollbackResponse{xid='10.0.100.41:8091:18416653908602464', branchId=18416653908602467, branchStatus=PhaseTwo_Rollbacked, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:49.741 INFO --- [ServerHandlerThread_1_18_500] [io.seata.server.coordinator.DefaultCore] [lambda$doGlobalRollback$3] [10.0.100.41:8091:18416653908602464]: Rollback branch transaction successfully, xid = 10.0.100.41:8091:18416653908602464 branchId = 18416653908602467
2023-06-13 08:16:52.503 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: receive msg[single]: BranchRollbackResponse{xid='10.0.100.41:8091:18416653908602464', branchId=18416653908602470, branchStatus=PhaseTwo_Rollbacked, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group
2023-06-13 08:16:52.507 INFO --- [ServerHandlerThread_1_18_500] [io.seata.server.coordinator.DefaultCore] [lambda$doGlobalRollback$3] [10.0.100.41:8091:18416653908602464]: Rollback branch transaction successfully, xid = 10.0.100.41:8091:18416653908602464 branchId = 18416653908602470
2023-06-13 08:16:52.508 INFO --- [ServerHandlerThread_1_18_500] [io.seata.server.coordinator.DefaultCore] [doGlobalRollback] [10.0.100.41:8091:18416653908602464]: Rollback global transaction successfully, xid = 10.0.100.41:8091:18416653908602464.
2023-06-13 08:16:52.509 INFO --- [batchLoggerPrint_1_1] [io.seata.core.rpc.processor.server.BatchLogHandler] [run] []: result msg[single]: GlobalRollbackResponse{globalStatus=Rollbacked, resultCode=Success, msg='null'}, clientIp: 192.16.41.147, vgroup: default_tx_group

[screenshot: lock_table records]

4

Can you expand the screenshot so the full rows are visible? With read-write splitting, my suspicion is that when you query, the read replica has not yet replicated the delete that already happened on the write node. The TC database must not sit behind read-write splitting: such a setup cannot guarantee consistency or real-time visibility on its own, while a distributed transaction exists precisely to guarantee consistency. Putting an inconsistent store underneath it defeats the purpose.
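If the screenshot was taken through the read path, the replication-lag explanation above can be checked directly by running the same count against the write node and the read node at the same moment. A minimal sketch, with placeholder hostnames and credentials (not from the original report):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: count lock_table rows for one xid on the write node and on the read replica.
// If the master is already empty while the replica still shows rows, the "remaining" lock is
// just replication lag on the read path, not a lock that Seata failed to release.
public class MasterReplicaLockCompare {

    static int countLocks(String jdbcUrl, String xid) throws SQLException {
        try (Connection c = DriverManager.getConnection(jdbcUrl, "user", "password");
             PreparedStatement ps = c.prepareStatement("SELECT COUNT(*) FROM lock_table WHERE xid = ?")) {
            ps.setString(1, xid);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String xid = "10.0.100.41:8091:18416653908602464";       // xid from the log excerpt above
        String master = "jdbc:mysql://master-host:3306/seata";   // placeholder write node
        String replica = "jdbc:mysql://replica-host:3306/seata"; // placeholder read node
        System.out.println("locks on master : " + countLocks(master, xid));
        System.out.println("locks on replica: " + countLocks(replica, xid));
    }
}

If the master already shows zero rows while the replica still shows one, the lingering lock is stale replica data, which is exactly why the TC store should point at a single consistent database.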

3

Your screenshot doesn't even show the xid column expanded, so there is no way to tell whether those rows belong to the same xid as the transaction that was already rolled back above. My advice: follow the investigation steps I described earlier, and don't give the TC a database that uses read-write splitting.