Tegsoft High Availability Cluster

Article summary

This article provides an in-depth view of the Tegsoft High Availability Cluster and guidance for deployment.

Topics are covered in the following order:

  • General understanding of " High Availability"

  • General understanding of "Tegsoft High Availability Cluster"

  • Deployment steps

  • Maintenance

  • Fault Impact by Components

1. High Availability and Single Point of Failure

A single point of failure (SPOF) is a system component that, upon failure, renders an entire system unavailable or unreliable. When you design a highly available deployment, you identify potential SPOFs and investigate how these SPOFs can be mitigated.

High availability is a computing infrastructure attribute that permits it to continue running even when some of its components fail. This is vital for systems with critical functions that cannot tolerate service interruptions, where any downtime may cause damage or financial loss.

High availability implies an agreed minimum “uptime” and level of performance for your service. Agreed service levels vary from organization to organization. Service levels might depend on factors such as the time-of-day systems are accessed, whether systems can be brought down for maintenance, and the cost of downtime to the organization. Failure, in this context, is defined as anything that prevents the directory service from providing this minimum level of service.
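
For example, an agreed uptime of 99.9% ("three nines") allows roughly 8.8 hours of downtime per year, while 99.99% allows only about 53 minutes per year.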

Tegsoft provides a variety of architectures for high availability. Among them, the most reliable architecture is the "High Availability Cluster". This article covers the Tegsoft High Availability Cluster architecture in detail.

2. Tegsoft High Availability Cluster

A high availability cluster means multiple servers are grouped together to provide the same service. Servers can be employed as backups, standbys, or load balancers in the cluster to avoid losses caused by system downtime in case any single server becomes unavailable. This is an effective way to keep the system running 24/7. The main advantage of a clustered solution is automatic recovery from failure, that is, recovery without user interaction.

The Tegsoft high availability cluster is designed to overcome any possible single point of failure in a multi-location environment. 

To deploy a high-availability cluster architecture, follow the setup and configuration steps in this article.

The topology provided below includes all possible cluster members across multiple locations. Depending on business redundancy needs, some of the components can be omitted. Topology components are explained in detail in the rest of this chapter.

Tegsoft High Availability Cluster consists of 6 main layers:

  1. Load balancer layer (Kubernetes, Docker Manager, or any load balancer)

  2. Web Servers

  3. Compute Servers

  4. Database Servers

  5. Data Processing Servers

  6. Storage

2.1. Load Balancer

This is the first line for supporting client needs. The load balancer distributes network and application traffic across a number of Tegsoft servers. Load balancers are used to increase the capacity and reliability of Tegsoft applications. They improve the overall performance of Tegsoft applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.

Any load balancer or Docker manager can be used. In this article, a "Docker Manager" on Ubuntu Linux is explained as an example. For production environments, it is always recommended to use an enterprise "Docker Manager" (cloud solutions, Kubernetes, etc.).

Since the example "Docker Manager" in this document is based on Ubuntu Linux, note that Ubuntu Core is a secure, application-centric embedded operating system and that Ubuntu LTS releases come with 5 years of support. If an operational enterprise solution is not an option, an Ubuntu-based deployment can be considered.

2.2. Web Server Instances

The cluster functionality is built on different configuration parameters. The availability of those configuration parameters defines the level of permanence. 

All configuration parameters are stored in the "storage" and served by "web servers" behind the "load balancer". This approach eliminates the possibility of a single point of failure in the cluster configuration.

With the help of the load balancer and its Docker architecture, the web server instances are the most reliable and trusted members of the cluster. For this reason, multiple critical duties are assigned to this layer. Here are the tasks web server instances are responsible for (a brief illustration follows the list):

  • Web Components - UI Server: The Tegsoft solution user interface layer (UI) is based on HTTP components. These components can be served independently from compute instances over web server instances. A set of reliable web server instances behind load balancers is a stronghold to start serving clients. Tegsoft User Interface Server (UI Server) is the server that hosts the Tegsoft user interface web components. Multiple UI versions can be served at the same time to have a smooth update process. 

  • Session Config Server: Multiple projects or sets of users/clients can be served with different redundant compute members (backend servers). When the client accesses the cluster this configuration parameter is the one defining rules and backend topology.

  • Database Replication Config Server: Database servers have different roles and redundancy configurations. Data Processing Servers need configuration rules to manage data over the cluster. This configuration can be based on location, data processing server, or projects. It is the web server instances' duty to serve those rules to data processing servers. 

  • Database Connection Config Server: The database system is the most important component of the solution. Compute instances are accessing database servers to complete their duties. Database Connection Configuration Parameters are the rules that define connectivity configuration between "Compute Instances" and "Database servers".
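
As a minimal illustration of how this layer is consumed (the hostname and file names below are placeholders; the actual layout is defined in section 3.2.1), a client or a cluster member simply fetches its configuration over HTTPS from the web servers behind the load balancer:

# Placeholder URLs and file names - the real paths follow the storage layout prepared in section 3.2.1
curl -sk https://cluster.example.com/sampleprj1/config.php
curl -sk https://cluster.example.com/dbconfig/dbconnection.php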

2.3. Compute Server Instances

Tegsoft Compute Server is the backend server that handles all the omnichannel functionality, including voice and text routing and managing campaigns. Typically, for high availability and workload balancing, more than one server needs to be used. With the help of session config rules, multiple compute servers can act as a single server.

Compute server instances are responsible for the following tasks:

  • Application Server: Compute servers run the main backend tasks for the Tegsoft solution such as text routing, voice routing, managing campaigns, generating reports, delivering alerts, etc. 

  • Voice Services: Voice processing, voice recording, and analysis of the voice recording files are the duty of the compute server as well.

  • APIs: Most integrations are handled by APIs (application programming interfaces). Compute instances serve different APIs based on different protocols.

  • Presence: Store, manage, and report agent availability and status.

  • Mapped to Database Server: Compute servers store and access data from database servers. With the help of database connection configuration compute servers can access database servers with failover support.

  • Single / Multiple Cluster: Compute servers can be configured as multiple clusters for different business needs. For example, 8 compute servers with different session configuration parameters can be used to serve 3 large projects.

  • Holds agent licenses: The compute server also holds the system licenses.

2.4. Database Server Instances

Database servers store and serve data. According to the configuration, there may be different database server roles. Here are the possible database roles in a healthy Tegsoft High Availability Cluster deployment:

  • Active Database: It is the real-time database that the compute server is actively connected to.

  • Standby Backup Database: It is a standby database that is backed up and ready to be run for any failover. The Standby database is a fully backed-up version of the active database.

  • Report Database: The data retention period of the active database can be configured to a shorter term to maintain its full performance. For long-term reporting, it is recommended to store reporting data on a separate reporting database server (or servers).

  • Config Backup Database: Only backups of the servers' configuration information are stored on the config backup database server. This is good for disaster recovery locations with lower bandwidth availability.

2.5. Data Processing Server Instances

Data processing server instances are identical servers with the same server applications installed. They are responsible for configuring different database servers for different roles with the help of the "Database Replication Configuration Parameters". Duties are listed below:

  • Replicate Full Database: This duty (configuration rule) is for duplicating all the data incrementally from the active database server to the standby database server.

  • Replicate Partial Data: This duty (configuration rule) is for replicating partial data from the active database server to the config backup database server. 

  • Delete Obsolete Data: Obsolete data is purged from all the database servers according to the data policy and data retention periods that are defined in the configuration parameters.

  • Project Data Processing: While running multiple projects, data movement or project-based data retention policies can be important. These kinds of rules are applied to all database instances by the data processing servers.

2.6. Storage

This is the most important and most critical layer of the high availability cluster deployment. Well-designed and fast storage allows better performance and better management. All the configuration files and configuration rules are stored here. Single or multiple storage components can be used for different duties. The storage solution/components must be redundant and reliable. For a better storage layout design, all the responsibilities of the storage are explained in detail below.

  • Project-Based Session / Connection Configuration Files: As explained above in this article, configuration files define the rules the cluster uses to handle client requests. You can create different projects for your organization and configure these projects in storage. Each project should have two files:

  • index.php: This file redirects clients to the correct application version and session configuration.

  • config.php: Stores project-based session configuration.

  • Database Processing Rules: This folder contains all the database-related rules, such as connection configuration and replication rules. There may be several files for different configuration needs. Both data processing servers and compute servers are configured with environment variables to access the correct configuration files.

  • Web Components and Application UI Web Files: This folder contains user interface files of Tegsoft versions. A separate version folder is created for each version. The version folders created here will be mapped to the projects via config files.

  • Voice Recordings: Voice recordings are recording files (WAV, MP3) or analysis outputs like PNG files. As those files are quite large, they are stored in the storage so that the database can function more efficiently.

  • Database Backups: Load-balanced database servers are reliable, but in case of failure, backup files are always valuable for last-chance recovery. Since database backups are quite large files, they are stored in storage so that the database can function more efficiently.

3. How to Deploy the High Availability Architecture

The deployment process consists of installation and configuration stages, so before starting the configuration and activating the servers, the installations need to be completed.

3.1. Installations

Installations of the servers below are separate tasks, so they can be done one by one or in parallel. Please note that, depending on business needs, the topology and number of servers may differ from those described in this article, so additional steps may be needed during deployment.

3.1.1 Database Server Installation

According to the topology you may need to install multiple servers for multiple roles. 

Possible roles are:

  • Main database server (Mandatory)

  • Stand-by database server (Optional)

  • Reporting database server (Optional)

  • Config database server (Optional)

  • Disaster recovery location database server (Optional)

All database roles are handled with the same installation procedure. Setup of the database instance starts with the standard Tegsoft installation and finishes by configuring the instance with the script provided below. Please note that the roles of the database servers are defined in the cluster configuration parameters; that topic is covered later in this article.

After installation, disable unused services on the instance:


service tegsoft_icr stop
service tegsoft_web stop
service tegsoft stop
service dahdi stop
service tegsoft_fax stop
service tegsoft_fax_t38 stop
service asterisk stop

killall java
killall java
killall java
killall java

rm -rf  /etc/init.d/dahdi
rm -rf  /etc/init.d/tegsoft_icr
rm -rf  /etc/init.d/tegsoft_web
rm -rf  /etc/init.d/tegsoft
rm -rf  /etc/init.d/tegsoft_fax
rm -rf  /etc/init.d/tegsoft_fax_t38
rm -rf  /etc/init.d/asterisk
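
As an optional check (a sketch only; it assumes the tegsoft_db init script used elsewhere in this article supports the status argument), confirm that the database service is still in place after the cleanup:

# List the remaining Tegsoft init scripts and check the database service
ls /etc/init.d/ | grep tegsoft
service tegsoft_db status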

3.1.2 Compute server installation

Multiple compute instances need to be deployed to create a high-availability cluster. Please note that at least two compute server instances need to be prepared.

Setup of the compute instance starts with the standard Tegsoft installation and finishes by configuring the instance with the script provided below.

After installation, disable the database service on the compute instance:


service tegsoft_icr stop
service tegsoft_web stop
service tegsoft stop

killall java
killall java
killall java
killall java

service tegsoft_db stop
service tegsoft_db stop

rm -rf /root/bk
mkdir /root/bk
mv /home/tobe/tobe /root/bk/
mv /opt/ibm/db2 /root/bk/

rm -rf  /etc/init.d/tegsoft_db

Configure the compute instances to work with the database service:

3.1.2.1 If you are going to have only one active database in your cluster,

Edit /root/.bashrc and add the lines starting with "export" according to the following information:

  • databaseServerIP: The active database server instance IP

  • UNITUID: compute UNITUID (Please check licensing information)

  • clusterMember: always true

  • clusterMaster: always false

#.bashrc

#User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

export databaseServerIP="PLACE_HERE_THE_ACTIVE_DATABASE_SERVER_IP"
export tobe_dburl="jdbc:db2://"$databaseServerIP":50000/tobe"
export UNITUID="PLACE_HERE_THE_UNITUID"
export defaultSIPENGINE=OTHER
export clusterMember=true
export clusterMaster=false

#Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
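
As a quick check (a sketch only), reload the profile and confirm that the variables are exported:

# Reload the profile in the current shell and verify the exports
source /root/.bashrc
echo "$databaseServerIP"
echo "$tobe_dburl"
echo "$clusterMember $clusterMaster"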

3.1.2.2 If you are going to have multiple database servers with failover support,

Edit /root/.bashrc and add the lines starting with "export" according to the following information, where dbConnectionConfigUrl is the URL of the database connection configuration served from the storage (see section 3.2.1.3):

#.bashrc

#User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

export dbConnectionConfigUrl="PLACE_HERE_URL_FOR_DATABASE_CONNECTION_CONFIG"
export UNITUID="PLACE_HERE_THE_UNITUID"
export defaultSIPENGINE=OTHER
export clusterMember=true
export clusterMaster=false

#Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
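
As a quick check (a sketch only, assuming curl is available on the compute instance), verify that the database connection configuration URL is reachable:

# Reload the profile and fetch the connection config served by the web server layer
source /root/.bashrc
curl -sk "$dbConnectionConfigUrl"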

3.1.3 Ubuntu server installation

For production environments, it is always recommended to use an enterprise Docker manager solution. As an example, a Docker configuration on an Ubuntu server is covered in this article.

Please install the 64-bit LTS Ubuntu server with the help of Ubuntu's official documentation.

Docker manager installation and activation

!!ATTENTION!!!

The installation commands below need to be run line by line, one by one.


sudo apt-get remove docker docker-engine docker.io containerd runc

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg cifs-utils

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg


echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
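
Optionally, verify the Docker installation before continuing (these are the standard verification commands from Docker's documentation):

sudo docker --version

sudo docker run hello-world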

If the installation asks for any selections, don't change anything and hit "OK".

3.2. Configurations

The storage holds most of the configuration and provides slots for storing different files, so it is recommended to start by designing the storage. The configuration steps depend on one another, so it is recommended to perform the configuration topics one by one, in order.

3.2.1. Preparing Storage

The sample storage structure should contain the following components:

  • Voice Recordings

  • Database Backups

  • Project Based Config

  • Data Processing Rules

  • Database Connection Configuration

  • APP UI Web Files

Click here to download sample storage files.

Storage folders can be configured externally or from the Ubuntu command line interface (CLI) by mounting storage to the Ubuntu server. 

Mount storage to Ubuntu Server and create folders


sudo mkdir -p /mnt/storage

sudo mount XXXX_STORAGE_XXXX /mnt/storage

sudo mount -t cifs -o username=USERNAME,password=PASSWD //192.168.1.88/shares /mnt/storage

echo "mount -t cifs -o username=USERNAME,password=PASSWD //192.168.1.88/shares /mnt/storage" >> /etc/rc.local


sudo mkdir -p /mnt/storage/sampleprj1

sudo mkdir -p /mnt/storage/app

sudo mkdir -p /mnt/storage/dbconfig

sudo mkdir -p /mnt/storage/dockersys/proxyconfig
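
After the folders are created and the files from the following sections are placed, the storage layout should look roughly like the sketch below (the version tag and project name are examples only):

/mnt/storage/
  sampleprj1/index.php          (project redirect file, section 3.2.1.1)
  sampleprj1/config.php         (project session configuration, section 3.2.1.1)
  app/20230417/                 (UI web files for one version, section 3.2.1.2)
  dbconfig/                     (database connection and data processing configs, section 3.2.1.3)
  dockersys/proxyconfig/        (proxy.conf and Dockerfile, section 3.2.1.4)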

3.2.1.1 Project Config


index.php File

The contents of the index.php file are as follows. The "src" attribute should be customized according to which application version will be used (the app folder created earlier) and the path of the config file to be mapped.


<?php 
header("Access-Control-Allow-Origin: *");
header("Access-Control-Allow-Credentials: true");
header("Access-Control-Allow-Methods: GET, OPTIONS, HEAD, PUT, POST, DELETE, PATCH, TRACE");
header("Access-Control-Allow-Headers: application-user-agent,application-date,application-key,application-client-key,content-type,authorization");
header("Cache-Control: no-cache, no-store, must-revalidate");
header("Pragma: no-cache");
header("X-UA-Compatible: IE=edge");
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {    
   return 0;    
}
?>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<html>
<head>
<meta data-fr-http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="robots" content="noindex">
<meta name="googlebot" content="noindex">
<meta name="googlebot-news" content="noindex" />
</head>
<body style="padding:0px;margin:0px;"><iframe style="padding:0px;margin:0px;" allow="camera *; geolocation *; microphone *; autoplay *; display-capture *;"  width="100%" height="100%" frameborder="0" src='/app/20230417/?sessionConfigUrl=/sampleprj1/config.php'></iframe></body>
</html>

➤ config.php File

There are multiple ways to serve the cluster configuration JSON. The config server can be one of the Tegsoft servers, any web server, or a Docker instance running on a Kubernetes cluster.

These are the configuration parameters:

  • loginSwitchRule: This is the rule for selecting the cluster member compute instance during login

    • None: Login selection is disabled; the cluster will not be active

    • Priority: The compute instance with the lowest priority value in the config (1, 2, 3, ...) will always be selected

    • LoginCount: The compute server with the fewest logged-in agents will be selected

    • CallCount: The compute server with the fewest calls will be selected

    • WorkLoad: The compute server with the least workload will be selected

  • failoverSwitch: This is the rule set for selecting a cluster member compute instance when communication with the actively selected server fails.

    • rule:

      • None: Failover is disabled

      • Priority: On failure, the member with the lowest priority value will be selected

      • LoginCount: On failure, the compute server with the fewest logged-in agents will be selected

      • CallCount: On failure, the compute server with the fewest calls will be selected

      • WorkLoad: On failure, the compute server with the least workload will be selected

    • timeoutSeconds: The connection timeout, in seconds

    • timeoutCount: The number of consecutive connection failures before switching to another member

  • clusterMembers: Array parameter for defining cluster members

    • name: A simple name / label to identify the cluster member

    • hostname: The cluster member hostname (FQDN; HTTPS access must be valid)

    • priority: The priority of the cluster member

The contents of the config.php file are as follows. 

Headers

Please note that the headers are important for proper integration.


<?php 
header("Access-Control-Allow-Origin: *");
header("Access-Control-Allow-Credentials: true");
header("Access-Control-Allow-Methods: GET, OPTIONS, HEAD, PUT, POST, DELETE, PATCH, TRACE");
header("Access-Control-Allow-Headers: application-user-agent,application-date,application-key,application-client-key,content-type,authorization");
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {    
   return 0;    
}
?>
{
    "loginSwitchRule": "Priority",  
    "failoverSwitch": {
        "rule": "Priority", 
        "timeoutSeconds": 20,
        "timeoutCount": 2
    },
    "switchback": {
        "rule": "None", 
        "successfulCount": 10
    },
    "clusterMembers": [
        {"name": "server1", "hostname": "arge17.tegsoftcloud.com", "priority":1},
        {"name": "server3", "hostname": "arge209.tegsoftcloud.com", "priority":2}
    ]
}

3.2.1.2 Application Versions

All the files under the "/opt/jboss/server/default/deploy/Tobe.war/forms/TegsoftVue/" folder of a compute instance need to be copied under the app/VERSION_TAG folder, e.g. StorageRoot/app/20230417

3.2.1.3 dbconfig folder

Example database connection config


<?php 
header("Access-Control-Allow-Origin: *");
header("Access-Control-Allow-Credentials: true");
header("Access-Control-Allow-Methods: GET, OPTIONS, HEAD, PUT, POST, DELETE, PATCH, TRACE");
header("Access-Control-Allow-Headers: application-user-agent,application-date,application-key,application-client-key,content-type,authorization");
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {    
   return 0;    
}
?>
{
	"initialConnection": {
		"rule": "Priority", 
		"failureSeconds": 90,
		"failureCount": 2
	},
	"switchback": {
		"rule": "None", 
		"successfulCount": 10
	},
	"clusterMembers": [
		{
			"dbUser": "tobe",
			"dbPassword": "ab2037ef5bb349a1a46116581cb8fec5",
			"dbDriver": "com.ibm.db2.jcc.DB2Driver",
			"dbUrl": "jdbc:db2://192.168.47.17:50000/tobe",
			"priority":2
		},
		{
			"dbUser": "tobe",
			"dbPassword": "ab2037ef5bb349a1a46116581cb8fec5",
			"dbDriver": "com.ibm.db2.jcc.DB2Driver",
			"dbUrl": "jdbc:db2://192.168.47.209:50000/tobe",
			"priority":3
		}
	]
}


Example data processing configuration

<?php 
header("Access-Control-Allow-Origin: *");
header("Access-Control-Allow-Credentials: true");
header("Access-Control-Allow-Methods: GET, OPTIONS, HEAD, PUT, POST, DELETE, PATCH, TRACE");
header("Access-Control-Allow-Headers: application-user-agent,application-date,application-key,application-client-key,content-type,authorization");
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {    
   return 0;    
}
?>
{
	"loopInterval": 1,
	"activePeriods": [
		{
			"timeBegin": 1902,
			"timeEnd": 1901
		}
	],
	"replicationRules": [
		{
			"name": "Data transfer",
			"description": "This is for replicating source database to target database",
			"source": {
				"dbUser": "tobe",
				"dbPassword": "ab2037ef5bb349a1a46116581cb8fec5",
				"dbDriver": "com.ibm.db2.jcc.DB2Driver",
				"dbUrl": "jdbc:db2://192.168.47.17:50000/tobe",
				"PBXID": "MYPBXID",
				"UNITUID": "MYUNITIUD"
			},
			"tableSet": "ALL_TBL",
			"excludedTables": [
				"TBLLICSERVER",
				"TBLLIC",
				"TBLHRSTUFF",
				"TBLCCCDR2",
				"TBLCRMCOMPCCARD",
				"TBLLICCOMPBREAKDOWN",
				"TBLPBXTTSLOG",
				"TBLPRICERULES",
				"TBLSALESCAMPACC",
				"TBLSALESCAMPAIGN",
				"TBLSALESCOUPON"
			],
			"excludedColumns": [
				"TBLPBXCONF.ADMINTALKTYPE",
				"TBLCRMINV.BEGINDATE",
				"TBLCRMINV.ENDDATE",
				"TBLCRMCOMPANIES.SERVICE",
				"TBLSTKPRD.COMMITMENT",
				"TBLSTKPRD.PREPAID",
				"TBLSTKPRD.MONTHLY"
			],
			"targets": [
				{
					"targetType": "jdbc",
					"dbUser": "tobe",
					"dbPassword": "ab2037ef5bb349a1a46116581cb8fec5",
					"dbDriver": "com.ibm.db2.jcc.DB2Driver",
					"dbUrl": "jdbc:db2://192.168.47.209:50000/tobe",
					"PBXID": "MYPBXID209",
					"UNITUID": "MYUNITIUD"
				}
			]
		}
	]
}


3.2.1.4 dockersys/proxyconfig folder

Place all the content below into the file, /mnt/storage/dockersys/proxyconfig/proxy.conf 

Please update 192.168.47.81 with the IP address of the Ubuntu server. 


<Proxy balancer://tegsoftwebservers> 
  BalancerMember http://192.168.47.81:8282 
  BalancerMember http://192.168.47.81:8283 
  BalancerMember http://192.168.47.81:8284 
  ProxySet lbmethod=byrequests 
</Proxy> 

ProxyPass "/" "balancer://tegsoftwebservers/" 
ProxyPassReverse "/" "balancer://tegsoftwebservers/" 
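
The three BalancerMember lines correspond to the three web server containers started in section 3.3.1. If you run a different number of containers, keep this list in sync; for example, a hypothetical fourth container published on port 8285 would need an extra BalancerMember http://192.168.47.81:8285 line here and a matching docker run -p 8285:80 command in section 3.3.1.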


If you are going to use default tegsoftcloud.com certificates, place all the content below into the file, /mnt/storage/dockersys/proxyconfig/Dockerfile


FROM tegsoft/tegsoftwebserver:8
COPY proxy.conf /etc/apache2/mods-enabled/proxy.conf

If you need to use custom certificates,

  • prepare and place all the certificate files (key, certificate, and bundle certificate files) under /mnt/storage/dockersys/proxyconfig/

  • use content below for /mnt/storage/dockersys/proxyconfig/Dockerfile


FROM tegsoft/tegsoftwebserver:8
COPY proxy.conf /etc/apache2/mods-enabled/proxy.conf
COPY bundle.crt  /certificates/
COPY certificate.crt  /certificates/
COPY certificate.key  /certificates/

3.3. Finalizing Configuration

Configuration is done; now we need to activate the servers one by one to initiate the High Availability Cluster:

  • Activate Web Server Proxy

  • Activate Web Server Docker Containers

  • Initialize Database Instances

  • Activate Compute Instances

3.3.1 Activating Web Server Instances & Docker Containers

To build the "loadbalancer" Docker image and start the Docker instances for the web servers, run the commands below.

Please run the commands on all Docker manager servers (each Ubuntu instance individually).

The timezone parameter needs to match the domain.

The TZ parameter sets the Docker instance timezone; please check the timezone (tz database) list for the correct value.


cd /mnt/storage/dockersys/proxyconfig/

sudo docker build -t loadbalancer:latest .

sudo docker run -d --restart unless-stopped -p 443:443 --env TZ=DESIRED_TIMEZONE loadbalancer:latest

sudo docker run -d --restart unless-stopped -p 8282:80 --env TZ=DESIRED_TIMEZONE --mount type=bind,source=/mnt/storage/,target=/var/www/html/ tegsoft/tegsoftwebserver:8

sudo docker run -d --restart unless-stopped -p 8283:80 --env TZ=DESIRED_TIMEZONE --mount type=bind,source=/mnt/storage/,target=/var/www/html/ tegsoft/tegsoftwebserver:8

sudo docker run -d --restart unless-stopped -p 8284:80 --env TZ=DESIRED_TIMEZONE --mount type=bind,source=/mnt/storage/,target=/var/www/html/ tegsoft/tegsoftwebserver:8


Please run the following command on only one Docker Manager instance (Ubuntu instance)

#Run only on one Instance

sudo docker run -d --restart unless-stopped -p 8380:80 --env TZ=DESIRED_TIMEZONE --env configUrl=https://loc.tegsoftcloud.com/dbconfig/repconfig.json --env dbConnectionConfigUrl=https://CLUSTER_DB_CONFIG_URL --env UNITUID=CLUSTER_UNITUID tegsoft/tegsofttouchdataprocessingserver:84
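
As an optional sanity check (assuming the example IP 192.168.47.81 used in proxy.conf and the sample project from section 3.2.1.1), confirm that the containers are running and that the project configuration is served:

sudo docker ps

curl -s http://192.168.47.81:8282/sampleprj1/config.php

curl -sk https://192.168.47.81/sampleprj1/config.php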

3.3.2 Initializing Database Servers

Please connect to any compute instance and run the following commands, changing DATABASE_SERVER_IP_HERE to each database server IP (including active, report, backup, or config databases).

The following command (starting with unset...) is a single-line command. Please run it as a single line, once for each database server.


unset dbConnectionConfigUrl;
unset tobe_dburl;
export tobe_dburl="jdbc:db2://DATABASE_SERVER_IP_HERE:50000/tobe";
/root/tegsoft_prepareUpdate.sh 

Please run the following command in a separate session on each compute instance (please disconnect and reconnect before running the script):


service tegsoft restart

4. Upgrade Process

The upgrade process consists of completing the following tasks,

  • Upgrade compute instances

  • Apply the upgrade to database servers

  • Create a new UI Web Tegsoft version 

  • Update application path mapping and activate the new version

Users will notice a notification bar asking for a refresh; once they reload, they will continue with the new version.

4.1. Upgrade compute instances

  • If you have an "active-standby" configuration, please start with the most active member so that users migrate to the standby server; then, after the standby member is upgraded, they will migrate back to the original active cluster member.

  • If you have an "active-active" configuration, you can start with any cluster member; after the last member is completed, the load distribution will become even again.


#You can use alpha / beta or sr for release name
/root/tegsoft_performUpdate.sh alpha

You can continue to run the command above for the remaining compute instances and complete the upgrade on all compute instances.

Please do not start the next instance before the active update completes on the current one. All compute instances need to be upgraded one by one.

4.2. Apply update to database servers

Connect to an updated compute instance (only one is enough) and run the following commands, changing DATABASE_SERVER_IP_HERE to each database server IP (including active, report, backup, or config databases).

The following command (starting with unset...) is a single-line command. Please run it as a single line, once for each database server.


unset dbConnectionConfigUrl;
unset tobe_dburl;
export tobe_dburl="jdbc:db2://DATABASE_SERVER_IP_HERE:50000/tobe";
/root/tegsoft_prepareUpdate.sh

4.3. Create a new UI Web version

Connect to any “Docker Manager Instance” via SSH and run the following commands after replacing VERSIONTYPE with one of the values (alpha, beta, or sr).

You need to create a new folder with a version tag under the storage app folder, and all UI files must be installed under that folder.

sudo su - 

export VERSION_TAG=`date '+%Y%m%d'`
export VERSIONTYPE=alpha
echo $VERSION_TAG UI version will be installed

mkdir -p /mnt/storage/app/
mkdir -p /mnt/storage/downloads/

rm -rf /mnt/storage/downloads/Tegsoft_$VERSIONTYPE.tgz
rm -rf /mnt/storage/downloads/Tobe.war

cd /mnt/storage/downloads/
wget setup.tegsoftcloud.com/TegsoftVersions/Tegsoft_$VERSIONTYPE.tgz

tar -xzf /mnt/storage/downloads/Tegsoft_$VERSIONTYPE.tgz --directory /mnt/storage/downloads/ Tobe.war/forms/TegsoftVue
tar -xzf /mnt/storage/downloads/Tegsoft_$VERSIONTYPE.tgz --directory /mnt/storage/downloads/ Tobe.war/image

mv /mnt/storage/downloads/Tobe.war/forms/TegsoftVue /mnt/storage/app/$VERSION_TAG

rm -rf /mnt/storage/Tobe/
mkdir -p /mnt/storage/Tobe/

mv /mnt/storage/downloads/Tobe.war/image /mnt/storage/Tobe/

rm -rf /mnt/storage/image
ln -s /mnt/storage/Tobe/image /mnt/storage/image

rm -rf /mnt/storage/downloads/Tobe.war

#Update paths in project index.php files.

Once the above process is done, you need to update the VERSION_TAG in the index.php files of the projects, as in the example below.
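
For example (a sketch only, assuming the sample project from section 3.2.1.1, the old version tag 20230417, and the same shell session in which VERSION_TAG was exported above):

# Example only - point sampleprj1 at the newly installed UI version
sed -i "s|/app/20230417/|/app/$VERSION_TAG/|" /mnt/storage/sampleprj1/index.php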

References

https://docs.docker.com/engine/install/ubuntu/

https://ubuntu.com/tutorials/install-ubuntu-server#1-overview

