Common Operations

In the following examples, the application deployment namespace is apitable-app.

1. To check the running status of the service, use the following command:

kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig get pods -n apitable-app
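
If you prefer shorter commands, kubectl also reads the kubeconfig path from the KUBECONFIG environment variable, so the flag can be exported once per shell session:

export KUBECONFIG=/data/apitable/app/config-k8s/kubeconfig
kubectl get pods -n apitable-app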

2. To check the service logs, use the following commands:

kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig -n apitable-app get pod
kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig logs -f --tail=500 backend-server-pod-id -n apitable-app
kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig logs -f --tail=500 room-server-pod-id -n apitable-app
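
Replace backend-server-pod-id and room-server-pod-id with the actual pod names from the get pod output. As a sketch, kubectl can also stream logs for a whole deployment without looking up a pod name first (assuming the deployments are named backend-server and room-server, as in step 3):

kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig -n apitable-app logs -f --tail=500 deployment/backend-server
kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig -n apitable-app logs -f --tail=500 deployment/room-server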

3. To restart a specific service, run the following command:

cd /data/apitable/app
kubectl --kubeconfig config-k8s/kubeconfig rollout restart deployment/backend-server -n apitable-app
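
After triggering the restart, the rollout can be watched until it completes with the standard rollout status command (from the same directory):

kubectl --kubeconfig config-k8s/kubeconfig rollout status deployment/backend-server -n apitable-app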

4. Modify environment variables in custom-config/config.yaml:

For detailed variables, see: https://apitable.getoutline.com/doc/546v5akd5yy6yep-VtvcXsQpUW

config:
  common:
    KEY: VAL
  custom:
    backend_server:
      KEY: VAL
    room_server:
      KEY: VAL

Run the ops-manager command to apply the configuration, as sketched below.
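
Based on the dry-run invocation shown in step 10, applying the configuration presumably uses the same docker run command without the --dry-run flag (adjust the ops-manager image tag to the version you actually run):

docker run --rm --name ops-manager -v /data/apitable/app/config-k8s:/app/terraform/local/config-k8s \
             -v /data/apitable/app/custom-config:/app/terraform/local/custom-config \
             -v /data/apitable/app/state:/app/terraform/local/state \
             docker.vika.ltd/vikadata/vika/ops-manager:v1.1.0-alpha_build543 install k8s-apitable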

5. Customize replica counts and resource limits for a service in custom-config/config.yaml:

Replica configuration path: container.resources.<app_name>.replicas

container:
  resources:
    web_server:
      replicas: 3
    backend_server:
      requests_cpu: 1500m # Resource requests and limits
      requests_memory: 2048Mi
      limits_cpu: 1500m
      limits_memory: 2048Mi
      replicas: 3
    room_server:
      replicas: 5

Run the ops-manager command to apply the configuration.
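
Once the configuration has been applied, the new replica counts can be verified with kubectl, for example:

kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig -n apitable-app get deployment backend-server room-server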

6. Update init-appdata version.

Configuration path: config.custom.init_data.images.initAppData

config:
  custom:
    init_data:
      images:
        initAppData: docker.apitable.ltd/apitable-ee/init-appdata:v0.20.0-alpha_build100

Run the ops-manager command to apply the configuration.

7. Modify the image path (namespace).

The default image path (namespace) is vikadata/vika, for example: docker.vika.ltd/vikadata/vika/backend-server:v0.20.0-alpha_build100

image:
  namespace:
    common: vikadata/apitable # Optional; changes the default image path for all services to vikadata/apitable.
    app:
      custom:
        init_settings: vikadata/apitable-ee # Change the image path for init_settings only.

Run the ops-manager command to apply the configuration.
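
To confirm which image path a deployment actually picked up after the change, its pod template can be inspected, for example for backend-server:

kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig -n apitable-app get deployment backend-server -o jsonpath='{.spec.template.spec.containers[0].image}'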

8. All configurations for ops-manager

Run the ops-manager command to apply the configuration.

# Basic template, including all variables
namespace:
  datacenter: apitable-datacenter # Deploy data center namespace
  app: apitable-app # Deploy application namespace
  create: true # Whether to create namespace. true to create, false not to create
image:
  registry: docker.vika.ltd # Private repository address for apitable-app application
  tag:
    custom:
      app:
        openresty: 1.21.4.1-http-fat
        backend_server: v0.20.0-rc.36_build3331
        room_server: v0.20.0-rc.36_build3776
        socket_server: v0.20.0-rc.36_build3776
        web_server: v0.20.0-op_build3633
        init_db: v0.20.0-rc.37_build682
        init_db_enterprise: v0.20.0-rc.37_build682
        init_settings: v0.18.0-alpha_build801
        imageproxy_server: v0.13.4-alpha_build9
  namespace:
    common: apitable/apitable # Default namespace for app image, for example, the image path for backend-server is: docker.apitable.ltd/apitable/apitable/backend_server:v0.20.0-rc.36_build3331
    custom: # Independent image namespace
      app:
        init_settings: apitable/apitable-ee
  datacenter: # Data center repository address, default to pull from public network
    registry: "docker.io"
storage:
  class: "cbs"
  mysql_backup_size: 50Gi
  default_storage_size: 20Gi
featGate:
  init_data: true
container:
  node_selector: {}
  resources:
    web_server:
      rolling_update_max_surge: 100%
      probe_http_get_path: "/api/actuator/health"
    openresty:
      lifecycle_post_start_command: ["/bin/sh", "-c", "pwd"]
    backend_server:
      requests_cpu: 1000m
      requests_memory: 2048Mi
    scheduler_server:
      replicas: 1
    job_admin_server:
      replicas: 0
config:
  common:
    MYSQL_HOST: "mysql-primary.apitable-datacenter" # MySQL host, can be set to external database
    MYSQL_DATABASE: "apitable"
    MYSQL_USERNAME: "root"
    MYSQL_PASSWORD: "6sg8vgDFcwWXP386EiZB" # Password for self-hosted MySQL
    DATABASE_TABLE_PREFIX: "apitable_" # MySQL table prefix, default value
    REDIS_HOST: "redis-master.apitable-datacenter"
    REDIS_PASSWORD: "UHWCWiuUMVyupqmW4cXV"
    REDIS_SSL_ENABLED: "false" 
    RABBITMQ_HOST: "rabbitmq-headless.apitable-datacenter"
    RABBITMQ_USERNAME: "user"
    RABBITMQ_PASSWORD: "7r4HVvsrwP4kQjAgj8Jj"
    RABBITMQ_VHOST: "/"
    SERVER_DOMAIN: ""
    ROW_FILTER_OFFLOAD_COMPLEXITY_THRESHOLD: "infinity"
    NODE_OPTIONS: "--max-old-space-size=4096 --max-http-header-size=80000"
    INSTANCE_MAX_MEMORY: "4096M"
    ASSETS_URL: "assets"
    ASSETS_BUCKET: "assets"
    OSS_HOST: "/assets"
  custom:
    has_load_balancer: true # Whether to enable SLB (load balancer)
    has_mysql: true # Whether to deploy self-hosted MySQL, reachable at mysql-primary.apitable-datacenter
    has_redis: true # Whether to deploy self-hosted Redis, reachable at redis-master.apitable-datacenter
    has_mongo: false
    has_minio: true
    has_rabbitmq: true
    has_databus_server: true   # Whether to enable databus-server; takes effect in release/1.1.0 and later
    docker_registry: # Private repository configuration
      registry: "docker.apitable.ltd"
      username: "robot"
      password: "123456"
      email: "robot@apitable.com"
    enable_ssl: false
    server_name: "example.apitable.com"              # domain
    tls_crt: |
      -----BEGIN CERTIFICATE-----
      xxxxx
      -----END CERTIFICATE-----
    tls_key: |
      -----BEGIN PRIVATE KEY-----
      xxxxx
      -----END PRIVATE KEY-----
    openresty_server_config: |                       # Customize nginx config
      ###
    backend_server:
      CALLBACK_DOMAIN: ""
      DOMAIN_NAME: ""
      AWS_ACCESS_KEY: "admin"
      AWS_ACCESS_SECRET: "73VyYWygp7VakhRC6hTf"
      AWS_ENDPOINT: "http://minio.apitable-datacenter:9000"
      ASSETS_LTD_URL: "assets"
      ASSETS_LTD_BUCKET: "assets"
      ASSETS_URL: "assets"
      ASSETS_BUCKET: "assets"
    imageproxy_server:
      BASEURL: "http://minio.apitable-datacenter:9000"
    init_data:                                       # init-data module
      mysql:
        host: "mysql-primary.apitable-datacenter"    # MySQL host, same as common.MYSQL_HOST
        port: 3306
        username: root
        password: "6sg8vgDFcwWXP386EiZB"
      redis:
        host: "redis-master.apitable-datacenter"
      minio:
        host: "minio.apitable-datacenter"
        schema: http
        port: 9000
        accessKey: "admin"
        secretKey: "73VyYWygp7VakhRC6hTf"
        bucket: assets
      initAppData:                                 # Environment variables
        INIT_CONFIG_SPACE_ENABLED: "true"
        INIT_TEST_ACCOUNT_ENABLED: "true"          # Whether to create virtual accounts
      images:
        initDataDb: docker.vika.ltd/vikadata/vika/init-db:v0.20.0-rc.37_build682                         # Initializes MySQL; executed on first installation.
        initDataDbEnterprise: docker.vika.ltd/vikadata/vika/init-db-enterprise:v0.20.0-rc.37_build682    # Initializes MySQL; executed on first installation.
        initAppData: docker.vika.ltd/vikadata/apitable-saas/init-appdata:v0.21.0-alpha_build137

9. Enable databus-server

Releases after 1.1.0 support the new databus-server data service.

Requires ops-manager:v1.1.0-alpha_build543 or later.

Configuration path: config.custom.has_databus_server

config:
  custom:
    has_databus_server: true
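
After applying the configuration, check that the new service came up; this assumes the resulting workload is named databus-server, which may differ in your release:

kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig -n apitable-app get pods | grep databus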

10. Preview configuration changes (--dry-run)

Similar to the helm --dry-run function; available in ops-manager:v1.1.0-alpha_build453 and later versions.

docker run --rm --name ops-manager -v  /data/apitable/app/config-k8s:/app/terraform/local/config-k8s \
             -v /data/apitable/app/custom-config:/app/terraform/local/custom-config \
             -v /data/apitable/app/state:/app/terraform/local/state \
             docker.vika.ltd/vikadata/vika/ops-manager:v1.1.0-alpha_build543 install k8s-apitable --dry-run

11. Update license code

Releases after 1.9.0 use the new license authentication method. Requires ops-manager:v1.9.0-alpha_build560 or later.

Configuration path: config.custom.backend_server.SELFHOST_LICENSE

config:
  custom:
    backend_server:
      SELFHOST_LICENSE: {Your license code}
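
After applying the configuration and restarting backend-server, you can verify that the variable reached the container; a quick check, assuming the deployment is named backend-server as in the earlier steps:

kubectl --kubeconfig /data/apitable/app/config-k8s/kubeconfig -n apitable-app exec deployment/backend-server -- env | grep SELFHOST_LICENSE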