[seata] Seata in Docker is configured with the host IP, but after startup it shows the container's internal IP?

2024-07-15 · 658 views · 6 votes

The docker-compose file is as follows:

version: "3.3"

networks:
  my_network:

services:
  nacos:
    image: nacos/nacos-server:v2.1.0
    ports:
      - "8848:8848"
    environment:
      - "JVM_XMS=256m"
      - "JVM_XMX=256m"
      - "MODE=standalone"
    networks:
      - my_network

  seata-server:
    image: seataio/seata-server:1.4.2
    container_name: "seata-server"
    ports:
      - "8091:8091"
    networks:
      - my_network
    environment:
      - SEATA_PORT=8091
      - SEATA_IP=192.168.150.3 # host IP
    volumes:
      - /home/lyz/seata/seata-server:/seata-server

After startup, the seata-server service registered in Nacos shows the container's internal IP.


My registry.conf is as follows:

registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "192.168.150.3:8848"
    group = "SEATA_GROUP"
    namespace = "34e7a459-f486-4300-a20a-80cc763b58ac"
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "nacos"

  nacos {
    serverAddr = "192.168.150.3:8848"
    namespace = "34e7a459-f486-4300-a20a-80cc763b58ac"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seataServer.properties"
  }
  file {
    name = "file.conf"
  }
}

file.conf is as follows:

store {
  ## store mode: file, db, redis
  mode = "db"
  ## rsa decryption public key
  publicKey = ""

  db {
    ## the implementation of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    ## if using MySQL to store the data, it is recommended to add rewriteBatchedStatements=true to the JDBC connection params
    url = "jdbc:mysql://192.168.150.3:3306/seata?useSSL=false&characterEncoding=utf8&serverTimezone=Asia/Shanghai&allowMultiQueries=true"
    user = "root"
    password = "123456"
    minConn = 5
    maxConn = 100
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }
}

Could someone help me figure out the cause? Thanks.

Answers

Answer (9 votes):

Exec into the container and run env to confirm whether the variable actually took effect.
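
For example, using the container_name from the compose file above:

docker exec -it seata-server env | grep SEATA

If SEATA_IP=192.168.150.3 shows up in the output, the variable did reach the container.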

Reply (4 votes):

It did take effect. (screenshot of the env output)

Reply (6 votes):

My env variable did take effect, but what Nacos shows is still the container's internal IP, which keeps making my Spring Boot project fail with: unable to obtain the seata service.
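
To see exactly which IP got registered, the instance list can be queried through Nacos's v1 open API; a sketch using the server address and namespace from registry.conf above (if Nacos auth is enforced, a login token has to be appended):

curl "http://192.168.150.3:8848/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=SEATA_GROUP&namespaceId=34e7a459-f486-4300-a20a-80cc763b58ac"

The ip field of each host in the returned JSON is the address clients will try to connect to.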

Answer (7 votes):

1.5 fixed this issue; alternatively, pull the 1.4.2 source and merge the fix yourself: https://github.com/seata/seata/pull/4408
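
In the compose file from the question that is just an image tag change; a sketch, with 1.5.2 used only as an example 1.5.x tag:

  seata-server:
    image: seataio/seata-server:1.5.2  # any 1.5.x release contains the fix from PR 4408
    environment:
      - SEATA_PORT=8091
      - SEATA_IP=192.168.150.3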

Reply (3 votes):

@liuqiufeng Thanks a lot, re-pulling 1.5 solved it. One note: the application.example.yml that ships inside the 1.5 container has an issue, and starting with it reports the following error:

Could not resolve placeholder 'console.user.username' in value "${console.user.username}"

Adding this to the config fixes it (the console isn't used here, so why does it have to be configured?):

console:
  user:
    username: admin
    password: admin

Then startup failed again with:

Could not resolve placeholder 'seata.security.secretKey' in value "${seata.security.secretKey}"

Adding this fixes it (note it must be nested under the seata key, since the placeholder is seata.security.secretKey):

seata:
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login

My final configuration file is as follows:

server:
  port: 7091

spring:
  application:
    name: seata-server

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash
console:
  user:
    username: admin
    password: admin
seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos
    nacos:
      server-addr: 192.168.150.3:8848
      namespace:
      group: SEATA_GROUP
      username: nacos
      password: nacos
      ## if using MSE Nacos with auth; mutually exclusive with the username/password attributes
      #access-key: ""
      #secret-key: ""
      data-id: seataServer.properties
    consul:
      server-addr: 127.0.0.1:8500
      acl-token:
      key: seata.properties
    apollo:
      appId: seata-server
      apollo-meta: http://192.168.1.204:8801
      apollo-config-service: http://192.168.1.204:8080
      namespace: application
      apollo-access-key-secret:
      cluster: seata
    zk:
      server-addr: 127.0.0.1:2181
      session-timeout: 6000
      connect-timeout: 2000
      username:
      password:
      node-path: /seata/seata.properties
    etcd3:
      server-addr: http://localhost:2379
      key: seata.properties
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    preferred-networks: 30.240.*
    nacos:
      application: seata-server
      server-addr: 192.168.150.3:8848
      group: SEATA_GROUP
      namespace:
      cluster: default
      username: nacos
      password: nacos
      ## if using MSE Nacos with auth; mutually exclusive with the username/password attributes
      #access-key: ""
      #secret-key: ""
    eureka:
      service-url: http://localhost:8761/eureka
      application: default
      weight: 1
    redis:
      server-addr: localhost:6379
      db: 0
      password:
      cluster: default
      timeout: 0
    zk:
      cluster: default
      server-addr: 127.0.0.1:2181
      session-timeout: 6000
      connect-timeout: 2000
      username: ""
      password: ""
    consul:
      cluster: default
      server-addr: 127.0.0.1:8500
      acl-token:
    etcd3:
      cluster: default
      server-addr: http://localhost:2379
    sofa:
      server-addr: 127.0.0.1:9603
      application: default
      region: DEFAULT_ZONE
      datacenter: DefaultDataCenter
      cluster: default
      group: SEATA_GROUP
      address-wait-time: 3000

  server:
    service-port: 8091 # If not configured, the default is '${server.port} + 1000'
    max-commit-retry-timeout: -1
    max-rollback-retry-timeout: -1
    rollback-retry-timeout-unlock-enable: false
    enableCheckAuth: true
    retryDeadThreshold: 130000
    xaerNotaRetryTimeout: 60000
    recovery:
      handle-all-session-period: 1000
    undo:
      log-save-days: 7
      log-delete-period: 86400000
    session:
      branch-async-queue-size: 5000 #branch async remove queue size
      enable-branch-async-remove: false #enable to asynchronous remove branchSession
  store:
    # support: file, db, redis
    mode: db
    session:
      mode: db
    lock:
      mode: db
    file:
      dir: sessionStore
      max-branch-session-size: 16384
      max-global-session-size: 512
      file-write-buffer-cache-size: 16384
      session-reload-read-size: 100
      flush-disk-mode: async
    db:
      datasource: druid
      db-type: mysql
      driver-class-name: com.mysql.jdbc.Driver
      url: jdbc:mysql://192.168.150.3:3306/seata?rewriteBatchedStatements=true
      user: root
      password: 123456
      min-conn: 5
      max-conn: 100
      global-table: global_table
      branch-table: branch_table
      lock-table: lock_table
      distributed-lock-table: distributed_lock
      query-limit: 100
      max-wait: 5000
    redis:
      mode: single
      database: 0
      min-conn: 1
      max-conn: 10
      password:
      max-total: 100
      query-limit: 100
      single:
        host: 127.0.0.1
        port: 6379
      sentinel:
        master-name:
        sentinel-hosts:
  metrics:
    enabled: false
    registry-type: compact
    exporter-list: prometheus
    exporter-prometheus-port: 9898
  transport:
    rpc-tc-request-timeout: 30000
    enable-tc-server-batch-send-response: false
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      boss-thread-size: 1
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
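
Once a 1.5.x image is used and this file is mounted into the container (see the next answer for how), recreating the service should make the host IP show up in Nacos; a sketch, assuming the compose file from the question:

docker-compose up -d --force-recreate seata-server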
Answer (6 votes):

https://seata.io/zh-cn/docs/ops/deploy-by-docker.html

Custom configuration has to be done by mounting files: mount the application.yml on the host into the corresponding directory inside the container.

First, start a temporary container whose purpose is to copy the files under the resources directory out to the host:

docker run -d -p 8091:8091 -p 7091:7091 --name seata-serve seataio/seata-server:latest
docker cp seata-serve:/seata-server/resources /User/seata/config

Once the files are copied out, you can either edit application.yml and cp it back into the container, or rm the temporary container and recreate it as below with the path mapping set up.

Specify application.yml:

docker run --name seata-server \
  -p 8091:8091 \
  -p 7091:7091 \
  -v /User/seata/config:/seata-server/resources \
  seataio/seata-server

Here -e is used to set environment variables and -v to mount host directories. If you run in file storage mode, also add -v /User/seata/sessionStore:/seata-server/sessionStore to map the file-mode data files onto the host so the data is not lost. (Note: /User/seata/config and /User/seata/sessionStore are example host directories; use your own paths rather than copying these verbatim.)

You should now see logback-spring.xml, application.example.yml, and application.yml in the corresponding host directory. If you are familiar with Spring Boot, the rest is simple: just edit application.yml. For details on every available option, refer to application.example.yml.
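
Mapped back onto the compose file from the question, the mount would look roughly like this (a sketch: /home/lyz/seata/config is a hypothetical host directory holding the copied-out resources, and 1.5.2 just an example tag):

  seata-server:
    image: seataio/seata-server:1.5.2
    container_name: "seata-server"
    ports:
      - "8091:8091"
      - "7091:7091"  # web console added in 1.5
    networks:
      - my_network
    environment:
      - SEATA_PORT=8091
      - SEATA_IP=192.168.150.3
    volumes:
      - /home/lyz/seata/config:/seata-server/resources  # hypothetical host path with the copied resources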

Reply (0 votes):

@a364176773 I already knew this, but thanks.