Distributed transactions: SEATA 1.4.1 AT mode with Nacos
SEATA configuration
Use Nacos as the configuration center for SEATA.
Current SEATA version: 1.4.1
TC (Transaction Coordinator)
Maintains the state of global and branch transactions, and drives global transaction commit or rollback.
Configuration parameters
Note the line service.vgroupMapping.app-server-tx-group=default below: it maps the transaction service group that clients declare via seata.tx-service-group (here app-server-tx-group) to the TC cluster named default.
config.txt
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.app-server-tx-group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
store.mode=db
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=url
store.db.user=root
store.db.password=123456
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
store.redis.host=127.0.0.1
store.redis.port=6379
store.redis.maxConn=10
store.redis.minConn=1
store.redis.database=10
store.redis.password=null
store.redis.queryLimit=100
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
nacos bash 脚本
nacos-config.sh
#!/usr/bin/env bash

while getopts ":h:p:g:t:u:w:" opt
do
  case $opt in
  h)
    host=$OPTARG
    ;;
  p)
    port=$OPTARG
    ;;
  g)
    group=$OPTARG
    ;;
  t)
    tenant=$OPTARG
    ;;
  u)
    username=$OPTARG
    ;;
  w)
    password=$OPTARG
    ;;
  ?)
    echo " USAGE OPTION: $0 [-h host] [-p port] [-g group] [-t tenant] [-u username] [-w password] "
    exit 1
    ;;
  esac
done
if [[ -z ${host} ]]; then
  host=localhost
fi
if [[ -z ${port} ]]; then
  port=8848
fi
if [[ -z ${group} ]]; then
  group="SEATA_GROUP"
fi
if [[ -z ${tenant} ]]; then
  tenant=""
fi
if [[ -z ${username} ]]; then
  username=""
fi
if [[ -z ${password} ]]; then
  password=""
fi
nacosAddr=$host:$port
contentType="content-type:application/json;charset=UTF-8"
echo "set nacosAddr=$nacosAddr"
echo "set group=$group"
failCount=0
tempLog=$(mktemp -u)
# Publish one key=value pair to the Nacos config API.
function addConfig() {
  curl -X POST -H "${contentType}" "http://$nacosAddr/nacos/v1/cs/configs?dataId=$1&group=$group&content=$2&tenant=$tenant&username=$username&password=$password" >"${tempLog}" 2>/dev/null
  if [[ -z $(cat "${tempLog}") ]]; then
    echo " Please check the cluster status. "
    exit 1
  fi
  if [[ $(cat "${tempLog}") =~ "true" ]]; then
    echo "Set $1=$2 successfully "
  else
    echo "Set $1=$2 failure "
    (( failCount++ ))
  fi
}
count=0
for line in $(cat config.txt | sed s/[[:space:]]//g); do
  (( count++ ))
  key=${line%%=*}
  value=${line#*=}
  addConfig "${key}" "${value}"
done
echo "========================================================================="
echo " Complete initialization parameters, total-count:$count , failure-count:$failCount "
echo "========================================================================="
if [[ ${failCount} -eq 0 ]]; then
  echo " Init nacos config finished, please start seata-server. "
else
  echo " init nacos config fail. "
fi
Sync the config to Nacos

- Log in to the TC server.
- Create a directory named seata-config.
- Enter the seata-config directory.
- Create a config.txt file and copy the configuration parameters above into it.
- Create a nacos-config.sh file and copy the Nacos bash script above into it.
- Sync the configuration parameters to Nacos with the following command:
bash nacos-config.sh -h 127.0.0.1 -p 8848 -g SEATA_GROUP -t 3a2aea46-07c6-4e21-9a1e-8946cde9e2b3 -u nacos -w nacos
Expected output:
set nacosAddr=127.0.0.1:8848
set group=SEATA_GROUP
Set transport.type=TCP successfully
Set transport.server=NIO successfully
...
=========================================================================
 Complete initialization parameters, total-count:80 , failure-count:0
=========================================================================
 Init nacos config finished, please start seata-server.
Deploying SEATA with Docker
See the official documentation on deploying SEATA with Docker.
Log in to the TC server and enter the seata-config directory.
- Create a registry.conf file and add the following content (see the registry configuration reference in the Seata documentation).
registry.conf
registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    group = "SEATA_GROUP"
    serverAddr = "127.0.0.1"
    namespace = "3a2aea46-07c6-4e21-9a1e-8946cde9e2b3"
    cluster = "default"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3、springCloudConfig
  type = "nacos"

  nacos {
    serverAddr = "127.0.0.1"
    namespace = "3a2aea46-07c6-4e21-9a1e-8946cde9e2b3"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
}
- Create a file.conf file and add the following content (optional; the same settings can be read from Nacos instead).
file.conf
transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  # the client batch send request enable
  enableClientBatchSendRequest = true
  #thread factory for netty
  threadFactory {
    bossThreadPrefix = "NettyBoss"
    workerThreadPrefix = "NettyServerNIOWorker"
    serverExecutorThread-prefix = "NettyServerBizHandler"
    shareBossWorker = false
    clientSelectorThreadPrefix = "NettyClientSelector"
    clientSelectorThreadSize = 1
    clientWorkerThreadPrefix = "NettyClientWorkerThread"
    # netty boss thread size,will not be used for UDT
    bossThreadSize = 1
    #auto default pin or 8
    workerThreadSize = "default"
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {
  #transaction service group mapping
  vgroupMapping.my_test_tx_group = "default"
  #only support when registry.type=file, please don't set multiple addresses
  default.grouplist = "127.0.0.1:8091"
  #degrade, current not support
  enableDegrade = false
  #disable seata
  disableGlobalTransaction = false
}

client {
  rm {
    asyncCommitBufferLimit = 10000
    lock {
      retryInterval = 10
      retryTimes = 30
      retryPolicyBranchRollbackOnConflict = true
    }
    reportRetryCount = 5
    tableMetaCheckEnable = false
    reportSuccessEnable = false
  }
  tm {
    commitRetryCount = 5
    rollbackRetryCount = 5
  }
  undo {
    dataValidation = true
    logSerialization = "jackson"
    logTable = "undo_log"
  }
  log {
    exceptionRate = 100
  }
}
- Run the Docker command.
Note: when store.mode=db is configured in config.txt, the global_table, branch_table, and lock_table tables must first be created in the configured database (the SQL scripts are available in the Seata repository). AT mode additionally requires an undo_log table (the name set by client.undo.logTable) in every business database accessed by an RM.
docker run -d --name seata-server \
  --net=host \
  -p 8091:8091 \
  -e SEATA_CONFIG_NAME=file:/root/seata-config/registry \
  -v /root/seata-config:/root/seata-config \
  seataio/seata-server:1.4.1
The mounted directory is the configuration directory on the TC server. Note that with --net=host the -p 8091:8091 mapping is ignored; the server listens on host port 8091 directly.
TM (Transaction Manager)
Defines the scope of a global transaction: begins a global transaction, and commits or rolls back the global transaction.
Example: a business aggregation service.
- Import the SEATA dependencies. The POM configuration is shown below. If your project already declares spring-cloud-starter-openfeign, remove it: spring-cloud-starter-alibaba-seata already pulls in spring-cloud-starter-openfeign, and declaring it again may cause dependency conflicts.
pom.xml
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.4.1</version>
</dependency>
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>2.2.1.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
        </exclusion>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-all</artifactId>
        </exclusion>
    </exclusions>
</dependency>
- Add registry.conf: add the following file under the project's resources directory.
registry.conf
registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    namespace = "3a2aea46-07c6-4e21-9a1e-8946cde9e2b3"
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3、springCloudConfig
  type = "nacos"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = "3a2aea46-07c6-4e21-9a1e-8946cde9e2b3"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
}
- Configure bootstrap.properties: add the entries below, changing seata.tx-service-group and the namespace to the values for your environment.
bootstrap.properties
...
seata.tx-service-group=app-server-tx-group
seata.config.type=nacos
seata.config.nacos.server-addr=127.0.0.1:8848
seata.config.nacos.namespace=3a2aea46-07c6-4e21-9a1e-8946cde9e2b3
seata.config.nacos.group=SEATA_GROUP
In the TM, start a global transaction with @GlobalTransactional. Example:
@GlobalTransactional
@GetMapping({"create"})
public String create(String name, Integer age) {
    ...
    return "created successfully";
}
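For a fuller picture, here is a minimal sketch of a TM-side aggregation service. It is not from the original article: OrderClient and StorageClient are assumed Feign clients for the downstream RM services, and the method and parameter names are illustrative only.

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

@Service
public class BusinessService {

    // Assumed Feign clients for the downstream RM services (hypothetical names).
    private final OrderClient orderClient;
    private final StorageClient storageClient;

    public BusinessService(OrderClient orderClient, StorageClient storageClient) {
        this.orderClient = orderClient;
        this.storageClient = storageClient;
    }

    // Opens a global transaction before the downstream calls; if either call
    // throws, the TM asks the TC to roll back every registered branch.
    @GlobalTransactional(name = "create-order", timeoutMills = 60000)
    public void createOrder(String commodityCode, int count) {
        storageClient.deduct(commodityCode, count); // branch 1: storage service
        orderClient.create(commodityCode, count);   // branch 2: order service
    }
}

The method must be called through the Spring proxy for the annotation to take effect, and spring-cloud-starter-alibaba-seata propagates the XID to the downstream services on the Feign requests.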
RM (Resource Manager)
Manages the resources that branch transactions work on, talks to the TC to register branch transactions and report their status, and drives branch transaction commit or rollback.
Example: a downstream (called) service.
- Configure bootstrap.properties: add the entries below, changing seata.tx-service-group and the namespace to the values for your environment.
bootstrap.properties
...
seata.tx-service-group=app-server-tx-group
seata.config.type=nacos
seata.config.nacos.server-addr=127.0.0.1:8848
seata.config.nacos.namespace=3a2aea46-07c6-4e21-9a1e-8946cde9e2b3
seata.config.nacos.group=SEATA_GROUP
- Mark business methods that need to participate in rollback with @Transactional(rollbackFor = Exception.class), as in the sketch below.
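A minimal sketch of such an RM-side method, assuming a hypothetical MyBatis mapper named StorageMapper (illustrative, not from the original article):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class StorageService {

    // Assumed MyBatis mapper for the storage table (hypothetical name).
    private final StorageMapper storageMapper;

    public StorageService(StorageMapper storageMapper) {
        this.storageMapper = storageMapper;
    }

    // Runs as a branch of the caller's global transaction: Seata's datasource
    // proxy records before/after images in undo_log, and rollbackFor makes the
    // local branch roll back on any exception.
    @Transactional(rollbackFor = Exception.class)
    public void deduct(String commodityCode, int count) {
        int updated = storageMapper.deductStock(commodityCode, count);
        if (updated == 0) {
            throw new IllegalStateException("insufficient stock: " + commodityCode);
        }
    }
}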
- If a global exception handler is configured, it returns a normal response instead of letting the exception propagate, so the TM never sees the failure; in that case use the SEATA API to trigger the rollback explicitly:
@ExceptionHandler(value = Exception.class)
@ResponseBody
public String exceptionHandler(Exception e) {
    ...
    try {
        String xid = RootContext.getXID();
        if (StringUtils.isNotEmpty(xid)) {
            // Reload the global transaction by XID and roll it back through the TC.
            GlobalTransactionContext.reload(xid).rollback();
        }
    } catch (TransactionException transactionException) {
        transactionException.printStackTrace();
        log.error("===TransactionException==={}", transactionException.getMessage());
    }
    return e.getMessage();
}
Or handle the rollback globally with AOP. The aspect below rolls back the global transaction when a @Transactional method in a service implementation throws, when a controller action returns a failed RestResponse, or when a validation exception reaches an @ExceptionHandler:
/**
 * @author Zhang_Xiang
 * @since 2021/2/22 17:36:16
 */
@Aspect
@Component
@Slf4j
public class TxAspect {

    @Pointcut("execution(public * *(..))")
    public void publicMethod() {
    }

    @Pointcut("within(com.*.service.impl..*)")
    private void services() {
    }

    @Pointcut("@annotation(org.springframework.transaction.annotation.Transactional)")
    private void transactional() {
    }

    @Pointcut("within(com.*.webapi.controller..*)")
    private void actions() {
    }

    @Pointcut("@annotation(org.springframework.web.bind.annotation.ExceptionHandler)")
    private void validatedException() {
    }

    // Roll back when a validation exception reaches an @ExceptionHandler method.
    @Before(value = "validatedException()")
    public void beforeValidate(JoinPoint joinPoint) throws TransactionException {
        Object[] args = joinPoint.getArgs();
        if (args == null || args.length == 0) {
            return;
        }
        Exception e = (Exception) args[0];
        if (e instanceof MethodArgumentNotValidException || e instanceof BindException
                || e instanceof ConstraintViolationException) {
            globalRollback();
        }
    }

    // Roll back when a @Transactional service method throws.
    @AfterThrowing(throwing = "e", pointcut = "publicMethod()&&services()&&transactional()")
    public void doRecoveryMethods(Throwable e) throws TransactionException {
        log.info("===method throw===:{}", e.getMessage());
        globalRollback();
    }

    // Roll back when a controller action returns a failed response.
    @AfterReturning(value = "publicMethod()&&actions()", returning = "result")
    public void afterReturning(RestResponse<?> result) throws TransactionException {
        log.info("===method finished===:{}", result);
        if (result.isFail()) {
            globalRollback();
        }
    }

    //region private methods

    private void globalRollback() throws TransactionException {
        if (!StringUtils.isBlank(RootContext.getXID())) {
            log.info("===xid===:{}", RootContext.getXID());
            GlobalTransactionContext.reload(RootContext.getXID()).rollback();
        }
    }

    //endregion
}