Backup and Restore
- Updated on 04 Jul 2024
The backup and restore method explained in this article allows system administrators to build backup strategies.
Prerequisites
To perform the procedures in this article smoothly, Tegsoft recommends that you have basic knowledge of these topics:
Networking
SSH connection
Linux Command Line Interface
The Backup Process
To execute the backup process you need to prepare the backup config file and the environment. Once these preparations are complete, you can build a backup strategy such as weekly full and daily incremental backups. Backup data processing supports only command-line execution, and the whole environment needs to be prepared on the Tegsoft database server.
Workspace and Environment
All the data management processes are handled by the “Tegsoft Touch Data Processing” service. For backup operations, this service can only be executed from a command line interface. The process requires the preparation of a config file.
Backup Config File
The config file is in JSON format and has several sections. The file can be accessed via an HTTP URL or a local file path.
A sample config file is shown below:
{
"activePeriods": [
{
"timeBegin": 200,
"timeEnd": 500
}
],
"backupRules": [
{
"name": "Full backup rule",
"description": "Full backup on Sunday",
"activePeriods": [
{
"dayOfWeek": 7
}
],
"source": {
"dbUser": "tobe",
"dbPassword": "ab2037ef5bb349a1a46116581cb8fec5",
"dbDriver": "com.ibm.db2.jcc.DB2Driver",
"dbUrl": "jdbc:db2://127.0.0.1:50000/tobe",
"PBXID": "ebd37af5-260a-436f-9bdf-44bcc9b1b946",
"UNITUID": "4a55c1e3-edd5-46ef-b66f-d74634e8469a"
},
"backupType": "full",
"destination": "/home/tobe/backup/"
},
{
"name": "Incremental backup rule",
"description": "Daily incremental backup on other days",
"activePeriods": [
{
"dayOfWeekBegin": 1,
"dayOfWeekEnd": 6
}
],
"source": {
"dbUser": "tobe",
"dbPassword": "ab2037ef5bb349a1a46116581cb8fec5",
"dbDriver": "com.ibm.db2.jcc.DB2Driver",
"dbUrl": "jdbc:db2://127.0.0.1:50000/tobe",
"PBXID": "ebd37af5-260a-436f-9bdf-44bcc9b1b946",
"UNITUID": "4a55c1e3-edd5-46ef-b66f-d74634e8469a"
},
"backupType": "incremental",
"destination": "/home/tobe/backup/"
}
]
}
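Since a malformed config file will stop the process before it begins, it can help to validate the JSON syntax before handing the file to the service. A minimal sketch, assuming python3 is available on the server; check_config is a hypothetical helper, not part of the Tegsoft tooling:

```shell
# Hypothetical helper: verify that a backup config file is valid JSON
# before the data-processing service reads it (requires python3).
check_config() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "config JSON is well-formed"
  else
    echo "config JSON has a syntax error" >&2
    return 1
  fi
}
```

For example, run check_config /root/dataProcessingWorkspace/backupConfig.json before setting configUrl.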
Execution Periods
Execution of the process can be limited to one or more specific periods; this is handled via the “activePeriods” definition. The definition is an array of allowed periods. Each period is defined as a block of conditions marked with beginning and ending values. If a period has multiple conditions, such as “time between” and “date between”, all of its conditions need to match.
If any period matches, whether through a single condition or multiple conditions, execution can start; if none matches, the process will not execute. If no period is defined, the process will always execute.
The “activePeriods” element can be used globally in the config file or under each rule individually. If the global conditions don’t match, nothing is executed; if the conditions under a rule don’t match, only that rule is skipped.
Syntax,
"activePeriods":[
{ // Period 1
Condition 1, Condition 2, ... Condition N
},
{
// Period 2
},
....
{
// Period n
}
]
Both the begin and end parameters are inclusive.
timeBegin: The beginning time of the period in 24-hour format. The value needs to be in decimal form. Examples,
11pm or 23:00 → 2300
8:30am or 08:30 → 830
One past midnight 00:01 → 1
timeEnd: The ending time of the period in 24-hour format. The value needs to be in decimal form. Examples,
11pm or 23:00 → 2300
8:30am or 08:30 → 830
One past midnight 00:01 → 1
time: The exact time of the period in 24-hour format. The value needs to be in decimal form. Examples,
11pm or 23:00 → 2300
8:30am or 08:30 → 830
One past midnight 00:01 → 1
The period is compared with the current time. For example, with timeBegin: 1 and timeEnd: 2359, both 22:10 and 08:30 match, but 00:00 does not.
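The matching rule can be sketched as a small shell check; in_period is a hypothetical helper name used only for illustration:

```shell
# Sketch of time-period matching: both bounds are inclusive, and times
# are compared in the decimal HHMM form used by the config file.
in_period() {
  t=$1; begin=$2; end=$3
  [ "$t" -ge "$begin" ] && [ "$t" -le "$end" ]
}

in_period 2210 1 2359 && echo "22:10 matches"
in_period 830 1 2359 && echo "08:30 matches"
in_period 0 1 2359 || echo "00:00 does not match"
```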
dayOfMonthBegin: This is the starting point of the “day of the month” condition. This field takes integer values between 1 and 31.
dayOfMonthEnd: This is the ending point of the “day of the month” condition. This field takes integer values between 1 and 31.
dayOfMonth: This is the exact day of the month. This field takes integer values between 1 and 31.
dayOfWeekBegin: This is the starting point of the “day of the week” condition. This field takes integer values between 1 and 7. The week starts on Monday and ends on Sunday: Monday is 1 and Sunday is 7.
dayOfWeekEnd: This is the ending point of the “day of the week” condition. This field takes integer values between 1 and 7. The week starts on Monday and ends on Sunday: Monday is 1 and Sunday is 7.
dayOfWeek: This is the exact “day of the week” condition. This field takes integer values between 1 and 7. The week starts on Monday and ends on Sunday: Monday is 1 and Sunday is 7.
dateBegin: This is the starting point of the date condition. This field takes a string value in YYYYMMDD format; for example, 20230727 is the 27th of July 2023.
dateEnd: This is the ending point of the date condition. This field takes a string value in YYYYMMDD format; for example, 20230727 is the 27th of July 2023.
date: This is the exact date condition. This field takes a string value in YYYYMMDD format; for example, 20230727 is the 27th of July 2023.
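Conditions can be combined inside one period block, in which case all of them must match. A hedged fragment with illustrative values, restricting execution to January 2024 between 02:00 and 05:00:

```json
"activePeriods": [
  {
    "dateBegin": "20240101",
    "dateEnd": "20240131",
    "timeBegin": 200,
    "timeEnd": 500
  }
]
```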
Backup Rules
This section is for defining backup actions and parameters. This article mainly covers topics related to this section.
name: It is always good to give a rule a name so that the process can be tracked, managed, and monitored. The name needs to be unique across all rules in the file.
description: An optional description area for taking notes and sharing more information about the rule.
source: Defines source database connection parameters including dbUser, dbPassword (encrypted), dbDriver, dbUrl, PBXID, and UNITUID.
backupType: This field defines which process will be executed. The value can be one of “full” (full backup) or “incremental” (incremental backup).
destination: This is the backup destination folder. This parameter is mandatory. The destination can be a local file system, a mounted NFS storage area, or an FTP/SFTP-like target; any kind of mountable destination can be used. This folder is the base folder for the backup action. A YYYYMMDD-formatted subfolder will be created inside it during backup execution.
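The dated subfolder layout can be sketched as follows; the destination path is the example value from the config above:

```shell
# Compute the YYYYMMDD-named subfolder a backup run would create
# under the configured destination folder.
destination=/home/tobe/backup
subfolder="$destination/$(date +%Y%m%d)"
echo "today's backup files would land in: $subfolder"
```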
Command Line Execution
Before you start, the command-line environment needs to be prepared. The environment must be prepared on a Tegsoft database instance.
mkdir -p /root/dataProcessingWorkspace
cd /root/dataProcessingWorkspace
wget -O update.sh https://setup.tegsoftcloud.com/resources/dataProcessingWorkspace/update.sh
chmod +x /root/dataProcessingWorkspace/update.sh
/root/dataProcessingWorkspace/update.sh
You can always run the command below for help.
/root/dataProcessingWorkspace/help.sh
Once you have prepared your config file, you can execute the process with the commands listed below (please mind changing the config file name if needed).
export configUrl=/root/dataProcessingWorkspace/backupConfig.json
nohup /root/dataProcessingWorkspace/runProcess.sh > output.txt 2>&1 &
During execution, you can exit the shell. The execution will continue and logs will be accessible via the output.txt file.
If you want to skip the logs and see only the output:
grep 'START\|DATA_OUTPUT\|DATA_FORMAT\|PROJECT_COMPLETE\|DONE' /root/dataProcessingWorkspace/output.txt
Restoring backup
Restoring from a cold backup file will allow you to restore,
Basic and advanced functionality, including standard Tegsoft services and configuration details.
Logs and reports, including agent activities, call details, etc.
All IVR capabilities, including the IVR announcements and advanced IVR plug-ins.
Backup files don’t hold the following items, so an external backup technique (usually storage backups) needs to be applied for these files.
Voice recordings
Voice mail files
TTS files
Webchat or WhatsApp media attachments
Here is the summary of the restoration process,
Transfer backup files and prepare the environment
Prepare a new database
Restore database
Start services
Tegsoft services are mission-critical for most enterprises, so short restore and activation times may be crucial. With this in mind, the restore process is documented as two different procedures.
Both procedures start the same; they diverge after step 3. The quick restore divides step 3 into two stages.
Preparation
You may have three kinds of backup files.
A single compressed backup file (in YYYYMMDD-MAC.tgz format, e.g. 20230819-06ccf01ae1d7.tgz): This file is the output of the traditional backup method, which keeps a complete full backup of the database. This standalone file is sufficient for a complete restore to the backup moment.
A single compressed full backup file (in YYYYMMDD-MAC-DB-NAME-TYPE.tgz format, e.g. 20231203-06fd44e64c41-db004-full.tgz): The next-generation backup method described in this article generates this file, which keeps a complete full backup of the database. This standalone file is sufficient for a complete restore to the backup moment. If needed, you can apply incremental backup data after restoring this file.
One or more compressed incremental backup files (in YYYYMMDD-MAC-DB-NAME-TYPE.tgz format, e.g. 20231203-06fd44e64c41-db004-incremental.tgz): The next-generation backup method described in this article generates this file, which keeps an incremental backup of the database for the related date. This file cannot restore the database on its own; it only holds data changed after a full backup for the related date/time.
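The three naming patterns can be told apart mechanically; classify_backup below is a hypothetical helper used only to illustrate the conventions described above:

```shell
# Classify a backup file name by the naming patterns described above.
classify_backup() {
  case "$1" in
    *-full.tgz)        echo "full" ;;
    *-incremental.tgz) echo "incremental" ;;
    *.tgz)             echo "traditional full" ;;
    *)                 echo "unknown" ;;
  esac
}

classify_backup 20231203-06fd44e64c41-db004-full.tgz          # full
classify_backup 20231203-06fd44e64c41-db004-incremental.tgz   # incremental
classify_backup 20230819-06ccf01ae1d7.tgz                     # traditional full
```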
All the processes will be handled under the restore folder, so we start by deleting and recreating it.
Fresh start
The restore folder may contain remnants from previous work. All of its data will be deleted.
rm -rf /home/tobe/restore/
mkdir -p /home/tobe/restore/
Please transfer all the backup files into the restore folder. You may use scp or any other suitable transfer method. If you are going to restore incremental backup files, please transfer all of them at once.
scp *.tgz root@TEGSOFT_DBSERVER_IP:/home/tobe/restore/
Extracting all compressed files at once is possible with the following script. You may also do it one by one manually.
cd /home/tobe/restore/
for backupFile in ./*.tgz; do tar -xzvf "$backupFile"; done
Cut-over and Creating the New Database
All possibly running services need to be deactivated before continuing further. If you are restoring a live Tegsoft server, downtime starts with this step.
killall java
killall java
killall java
killall java
# Repeat the command above until you see "java: no process found"
The restoration process will continue with the database user. Before we switch away from the root user, we need to transfer permissions and ownership to the database user.
chown -R tobe.tobe /home/tobe/restore/
su - tobe
Before we create a new database, all the active data will be deleted.
All the data will be deleted
This step will delete any existing data, so please be extremely careful with the following steps. In case of failure, important data loss may occur.
#This will DROP Existing DATABASE
db2 "DROP DATABASE TOBE"
# Creating the new fresh database will take a long time; please wait
db2 "CREATE DATABASE tobe automatic storage yes USING CODESET UTF-8 TERRITORY TR pagesize 32 K"
New database log files and required folders will be created with the following commands.
mkdir -p /home/tobe/tobe/NODE0000/MIRRORLOGPATH
chown tobe.tobe /home/tobe/tobe/NODE0000/MIRRORLOGPATH
db2 update db cfg for tobe using MIRRORLOGPATH /home/tobe/tobe/NODE0000/MIRRORLOGPATH
db2 update db cfg for tobe using logfilsiz 10000 logprimary 30 logsecond 30
# This step will take a long time; please wait
db2stop;db2start;
Change directory into the related restore folder. As there are different cold backup files and restore methods, please use the correct directory name.
Here are some examples of possible directory names,
/home/tobe/restore/tegsoft/0617fbd294e1-db016/20231126full/
/home/tobe/restore/20231126full/
/home/tobe/restore/home/tobe/backup/20231126full/
/home/tobe/restore/home/tobe/backup/20231126/
Schema generation
The following commands will start schema generation and will output many log lines.
During execution, please use t1be as the password when prompted.
cd /home/tobe/restore/tegsoft/0617fbd294e1-db016/20231126full/
db2 connect to tobe
db2 -Ctvf ./allddl.sql
Importing Data
Data needs to be imported in stages.
Essential data
Actual data
Archive data
The following commands will prepare the files and perform the import process.
db2 connect to tobe
db2 LOAD FROM TBLUNITS.ixf OF IXF INSERT INTO TBLUNITS
db2 connect to tobe
db2 LOAD FROM TBLPBXEXT.ixf OF IXF MODIFIED BY generatedignore INSERT INTO TBLPBXEXT
db2 connect to tobe
db2 LOAD FROM TBLCRMCONTACTS.ixf OF IXF MODIFIED BY generatedignore INSERT INTO TBLCRMCONTACTS
mv db2move.lst alltablesdb2move.lst
grep -v "TBLUNITS.ixf\|TBLPBXEXT.ixf\|TBLCRMCONTACTS.ixf" alltablesdb2move.lst > processdb2move.lst
grep "\!ARC" processdb2move.lst > arctablesdb2move.lst
grep "\!TBL" processdb2move.lst > tbltablesdb2move.lst
rm -rf db2move.lst
cp tbltablesdb2move.lst db2move.lst
db2move tobe LOAD -lo replace -u tobe -p t1be
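The list-splitting steps above exclude the three tables already loaded and then separate archive entries (!ARC) from actual-table entries (!TBL) in db2move.lst. The effect can be sketched on a sample list; the line format below is illustrative, not the exact db2move output:

```shell
# Build an illustrative db2move-style list and split it the same way
# the restore commands do (!ARC = archive tables, !TBL = actual tables).
cat > db2move.sample <<'EOF'
!TBLUNITS!TBLUNITS.ixf!
!ARCCALLS!ARCCALLS.ixf!
!TBLPBXEXT!TBLPBXEXT.ixf!
EOF
grep "\!ARC" db2move.sample > arc.sample
grep "\!TBL" db2move.sample > tbl.sample
wc -l < arc.sample   # one archive entry
wc -l < tbl.sample   # two actual-table entries
```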
Importing Incremental Actual Data
If you don’t have incremental files, this step is not necessary.
Please perform the following steps for all incremental folders, date by date. Date order is important; follow the date sequence properly.
cd /home/tobe/restore/tegsoft/0617fbd294e1-db016/20231127incremental/
db2 connect to tobe
db2 LOAD FROM TBLUNITS.ixf OF IXF INSERT INTO TBLUNITS
db2 connect to tobe
db2 LOAD FROM TBLPBXEXT.ixf OF IXF MODIFIED BY generatedignore INSERT INTO TBLPBXEXT
db2 connect to tobe
db2 LOAD FROM TBLCRMCONTACTS.ixf OF IXF MODIFIED BY generatedignore INSERT INTO TBLCRMCONTACTS
mv db2move.lst alltablesdb2move.lst
grep -v "TBLUNITS.ixf\|TBLPBXEXT.ixf\|TBLCRMCONTACTS.ixf" alltablesdb2move.lst > processdb2move.lst
grep "\!ARC" processdb2move.lst > arctablesdb2move.lst
grep "\!TBL" processdb2move.lst > tbltablesdb2move.lst
rm -rf db2move.lst
cp tbltablesdb2move.lst db2move.lst
db2move tobe LOAD -lo replace -u tobe -p t1be
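Because the YYYYMMDD prefixes sort lexicographically in date order, the incremental folders can be enumerated in the correct sequence with a plain sort. A convenience sketch; list_incrementals is a hypothetical helper, and the folder layout is the example path used above:

```shell
# Print incremental restore folders in chronological order; the
# YYYYMMDD prefix makes a lexicographic sort match date order.
list_incrementals() {
  find "$1" -maxdepth 1 -type d -name '*incremental' | sort
}

list_incrementals /home/tobe/restore/tegsoft/0617fbd294e1-db016
```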
Activating Services
Optional Step
If finishing the full import process before starting the services is possible, skipping this step is recommended.
Please note that importing “archive data” during the offline stage is much faster than the online import process.
Now that the essential and actual data are imported, Tegsoft services can be activated.
After loading data into tables, the table structure and integrity need to be revalidated. Please execute the commands below.
The two scripts below need to be executed repeatedly until the first script displays “0 ./fixIntegrity.sh”.
db2 connect to tobe
db2 "SELECT 'db2 connect to tobe; db2 \"SET INTEGRITY FOR '||RTRIM(TABSCHEMA)||'.'||TABNAME||' IMMEDIATE CHECKED\"' FROM SYSCAT.TABLES WHERE STATUS = 'C'" |grep db2 > fixIntegrity.sh
chmod +x fixIntegrity.sh
wc -l ./fixIntegrity.sh
./fixIntegrity.sh
If the script’s output number doesn’t decrease, data integrity needs to be fixed with exceptions.
Please continue executing the scripts below until “0 ./fixWithException.sh” is displayed.
db2 connect to tobe
db2 "SELECT 'db2 connect to tobe; db2 \"CREATE TABLE '||RTRIM(TABSCHEMA)||'.'||TABNAME||'2 LIKE '||RTRIM(TABSCHEMA)||'.'||TABNAME||' \"' FROM SYSCAT.TABLES WHERE STATUS = 'C'" |grep db2 > fixWithException.sh
db2 "SELECT 'db2 connect to tobe; db2 \"SET INTEGRITY FOR '||RTRIM(TABSCHEMA)||'.'||TABNAME||' IMMEDIATE CHECKED FOR EXCEPTION IN '||RTRIM(TABSCHEMA)||'.'||TABNAME||' USE '||RTRIM(TABSCHEMA)||'.'||TABNAME||'2 \"' FROM SYSCAT.TABLES WHERE STATUS = 'C'" |grep db2 >> fixWithException.sh
chmod +x fixWithException.sh
wc -l ./fixWithException.sh
./fixWithException.sh
Once the integrity process finishes, you can start Tegsoft services by connecting to the compute instance and executing the following commands.
service tegsoft restart
Importing Archive Data
The following commands will prepare files, and perform the archive import process.
Please change the directory to the full backup folder (e.g. /home/tobe/restore/tegsoft/0617fbd294e1-db016/20231126full/).
cd /home/tobe/restore/tegsoft/0617fbd294e1-db016/20231126full/
rm -rf db2move.lst
cp arctablesdb2move.lst db2move.lst
db2move tobe LOAD -lo replace -u tobe -p t1be
Importing Incremental Archive Data
If you don’t have incremental files, this step is not necessary.
Please perform the following steps for all incremental folders, date by date. Date order is important; follow the date sequence properly.
cd /home/tobe/restore/tegsoft/0617fbd294e1-db016/20231127incremental/
rm -rf db2move.lst
cp arctablesdb2move.lst db2move.lst
db2move tobe LOAD -lo replace -u tobe -p t1be
After finishing the import process, please follow the steps under the title “Activating Services”. If you have already activated services, only the service restart command at the end of that section needs to be executed again.