# DataIO Export/Import 

## Basic Concept

DataIO business data import and export is one of the core components of InsureMO. It aims to solve issues of configuration data deployment in multiple environments and tenants in the context of microservice architecture. 

In principle, the MC or Portal environment is viewed as a configuration environment, and business data is released by exporting it from the MC environment and importing it to other runtime environments.

DataIO facilitates such a deployment process and makes it easy and friendly for end users to operate.

## User Scenario

For anybody who wants to perform configuration data deployment.

## Composition of DataIO

### Basic Concept

In summary, there are three types of data:

- Instance data: Such as policy, claim, user or sales channel information, typically maintained directly in production or in a runtime environment.
- Configuration data: For example, products or ratings, where no one would like to configure ratings directly in production for troubleshooting and debugging purposes.
- Conditionally, it can be either instance or configuration data: For example, branches or roles where some people might want to add directly in production, while others might want to keep it updated in the MC environment and deploy it to production as well as other testing environments.

Our DataIO engine can be highly helpful in tackling the above scenario 2 and 3, which will involve a special process of configuration and deployment.

### The Difference between tenant ConfigData and platform ConfigData

- The tenant ConfigData is those data which is configured and exported by tenant users. The internal `record_usage` for tenant-config-data is 5.

- The platform ConfigData is those data which is generically embedded with the tenant setup and regularly upgraded on behalf of InsureMO platform team. It is exported from our platform and is not allowed to change. The internal `record_usage` for platform-config-data is 2.

- There's also another category of baseline ConfigData. This is rather a special category that some ConfigData will not belong to InsureMO platform team but can be deployed in a multi-tenant way. This is not quite frequent to use, and tenants can ignore it if not applicable.


### Characteristics and Benefits

- Target Data Import
    - Compared with the traditional dump data mode, DataIO can only import the data required by businesses and does not cause data redundancy.
- Centralized Data Publishing 
    - Since the data on each microservice is independent, consumers cannot import data into a server by executing DB scripts. Therefore, a common business data publishing function is needed.
- Multi-Tenant Support 
    - To meet the publishing requirements of multiple tenants, the default data of the platform can be sent to all tenants, and the tenants only publish their own definition data. The tenants are isolated from each other.
-  Cross-Environment/Version Releases 
    - It supports cross-environment and cross-version releases, eliminating concerns about the inconsistent program versions between the exported and imported environments.
- Standardized Workflow
    - **Configuration > test > online**, fast iteration of standard process.

### Supported Export Types

The following list outlines the common export types generically supported on the platform. It covers all configuration components available in our platform.

Read the respective component guide first to understand its features and perform correct changes.


|Module|Export By|Export Range|Submit Folder| Mark |
|--|--|--|--|--|
|product|Byproduct|Product|`product`| Product-related dd/rating/rule/ratetable will be exported together, and there is no need to export separately again. |
|plan|Module|Single|`plan`|It is recommended to export by product. |
|dd object & model|Module/By Product|Group/Product|`ddobject`| Only common DD needs to be exported separately. |
|code table|Module|Group|`codetable`|  |
|data table|Module|Group|`datatable`|  |
|rating|Module/By Product|Group/Product|`rating`| Only common rating needs to be exported separately.|
|rule|Module/By Product|Single/Group/Product|`rule`| Only common rule needs to be exported separately.|
|snippet|Module|Group|`config_snippet`| |
|dsl|Module|Group|`dsl_script`| |
|rate table & parameter table|Module/By Product|Group/Product|`ratetable`| Only common rate table needs to be exported separately.|
|i18n|Module/By Product|Group/Product|`i18n`| |
|message|Module|Group|`context`| |
|ui-avm|Module/By Product|Group/Product|`uic`| |
|urp & branch|Module|Group|`urp`| |
|batch|Module|Single/Multiple|`batch`| |
|search|Module|Single/Multiple|`search`| |
|global parameter|Module|Single|`configtable`| |
|numbering template|Module|Single|`numbering_template`| |
|configuration group|Module|Single/Multiple|`context`| |
|upload|Module|Single/Multiple|`upload`| |
|language|All|All|`language`| |
|workday|All|All|`workday`| |
|content template|Module|Group|`content_template`| |
|validation rule (deprecate)|Module/By Product|Group/Product|`validation_rule`| |


## Key Configuration Deployment Steps

The process resembles the source code program deployment. Configuration data deployment involves the following key operation steps:

### 1. Configuration

Data configuration is always the first step. During the process, users must clearly understand what has been configured, the actual context, and the group category for deployment.

<div class="docs-caution"><span class="docs-admonitions-text">caution</span>

**Whenever configuration is to be deployed, ensure it always starts from the MC environment before exporting.**

</div>


### 2. Data Export and Check-in

The process consists of three steps:


#### 2.1 Manual Check-in

##### Export Configuration File

There are two places that users can export configuration files. Taking rules as an example, users can choose one of the following options:

*  Export rule configuration data from the rule operation UI.
*  Export rule configuration from the common DataIO operation UI.

Once exported, the system will generate a zip file for the user to download to the local machine.

<div class="docs-caution"><span class="docs-admonitions-text">caution</span>

**For any exported data files, unzip them only and do not make any data or structure changes to the exported files.**

</div>


##### Unzip Configuration File

After configuration file export is performed, users will obtain a zip file. The next step is to unzip downloaded file.

Once file is unzipped, it can be placed into a local folder or directly into the designated folder of the GIT repository.

###### Sample Exported File Structure

As shown in the figure below, taking I18n as an example, we select the configuration, and then click **Search**. After the data record is displayed, click **Export All**, and the packed `.zip` data file will be exported. For other files, see [Export Types](#supported-export-types).  

![Export](./image/dataio/2-1-1.png)

The exported file `.zip` contains:

- One or more files named with suffix `_zip`. It contains:
  - `config.json`
  - `criteria.json`
- One or more files named with suffix `.csv`. It contains:
  - `env.txt`
  - `service_mappings.txt`

For an exported product data file, it contains the following folders:

- `product_master`
- `context_master`
- `{productCode}_{productVersion}`
  + `context_version`
  + `ddobject`
  + `product`
  + `uic / i18n /` ...
- `rating / rule / ratetable /` ...
- `env.txt`

Each folder contains a folder of the same name with suffix `_zip`.

<div class="docs-caution"><span class="docs-admonitions-text">caution</span>

Also, for exported data files, please only unzip them and do not make any data or structure changes.

</div>


##### Check-in Configuration File

With the tenant setup, there will be a configuration data GIT repository created at the same time. If you have new configuration data deployment requirements, please check with your SiteOps team and they will make necessary arrangements for you.

Once it is properly placed in the directory path, users will need to check in or commit changes to corresponding repository.


#### 2.2 Automated Process for Export and Unzip and Check-in Together

In the past version, the export, unzip and check-in steps need to be performed offline and manually.

With the release of the latest platform in early 2022, users can conduct all these operations online via the configuration UI, and they no longer need to perform these extra steps one by one.


### 3. Build and Deploy

Once checking in, users can use our CICD tool to build a package and deploy it to all runtime environments.

For the detailed steps of CICD, users can see the [InsureMO guide](https://docs.insuremo.com/non_insurance_service/docs/md/InsureMO_DevOps).


### 4. View DataIO Status
DataIO deployment might take time and occasionally encounter unexpected errors.

In order to let users know deployment status, there will be an automatic email sent to notify users of the status.

Also, users can enter **Data Import/Export** to view and query the deployment status in a target environment.

#### Deployment Principle

In order to ensure the deployment performance, our deployment will be conducted at an incremental level instead of entire package all the time.

1. Each modification will be deployed only once. System will check the deployment history of folders or files. If the modification has already been deployed in any of previous deployment, the modification will NEVER be updated again. 
- Take products for example.
   - If product A and B both exists, they are packaged and deployed. Later on, a new product C is added, then packaged and deployed. Although they are all in the same deployment package, only product C will be deployed, and the product A and B deployment will NOT be executed again.

   - If, let’s say, product A is updated with rating, then packaged and deployed, only product A rating will be deployed. Other configuration elements of product A will NOT be executed, and the product B and C will NOT be executed either. 

   - This is an important fact to notice. If someone mistakenly deletes the product A rating in target environment and re-deploys the package, the rating will not be resumed back because from deployment history, it has already been deployed. The only way is to change something (if not, it will not take effect either) and build a new package to deploy again.

2.	Under the same module folder, the data will be executed by name orders.

- Take products for example. 
    - If product A and B both exists, the product A will be executed first, then B.


## UI Operation of Configuration Deployment

### Illustration of Data Export in a Typical Tenant

**The following steps take tenants as an example.**

Before start, it's important to differentiate two types of configuration data and make sure there's no overlapping among them:

1. Product Configuration Data

All the Product owned DD, rating and rule, rate, rate table will be exported with Product. If the configuration data's Context is set to be Product, it will be exported when exporting the Product.
  
- Technical Product 
- Market Product
  
2. Common Configuration Data

The common data's Context is NOT set to be Product, so when exporting, please make sure to set Context as Common. For all common data, only current tenant-related data will be exported, and the platform maintained will NOT be exported.
  
  - DD Common Data Model
  - Code Table
  - Data Table
  - Context-Message (for rule)
  - Common Rating
  - Common Rule
  - Common Rate Table
  - I18n
  - URP
  - Global Parameter
  - Context- Group
  - ...


### Auto Check-In Illustration (Commit to Git Directly)

The system supports to commit configuration data in MC or commit to Git directly in UI.
1. Click **Export** to export data. 
2. A dialog pops up and you can set needed inputs as below.

* **GIT Repository**: the bizasset git you want to commit to.
* **GIT Branch**: the branch to commit to.
* **Function**: the folder will be exported to, and it will be created automatically under the `/src/main/bizdata`.
* **Step**: for some modules, the **Step** does not show up. Some are set to be fixed value, some a default value but with dropdown choices. The dropdown choices for **Step** include the default value just set, the existed folders under the  **Function** and related folders. 
    - If **Step** is set to a default value but with dropdown choices, you had better check whether data records with the same filter conditions have exported or not. 
    - If they have been exported before, you should choose existed folders.
    - If there is no expected folder, you can use the default one or input the expected name as **Step**, and the system will create it automatically when committing.
* **Commit Message**: mark information for committing. Space and special symbols are not allowed.
* **Complete Path**: the git paths where the data will be commit to. You can copy it to a browser to check whether the path exists or not.

If there is an identical issue between git and MC environment, the automatic check-in may not work. You can try to click **Download Release Package** under the **Manual Check-In**.

![Auto Check-In](./image/config_data_release/autocheckin_001.png)



#### Auto Checkin for Other Third Party Data Including Container Configuration

If the configuration data comes from appframework, users can directly check-in them in the configuration screen.

However, there are scenarios that configuration is performed in other system, e.g. InsureMO container screen while users want to deploy them together.

Now, if users go to "Common Import/Export" screen, there's an option for users to perform "Commit Other Config-Data".

Take container API registration for example. After the tenant customized APIs are developed, if the APIs needed to be registered to container, you can submit to git under the folder `container_api` and upload your swagger into it.

![commit](./image/dataio/other_config_data_commit.png)


Once that's done, users will be able to find it to be committed in GIT repo.
  
![DATAIO](./image/dataio/api_reg_001.png)

Now all supported types are listed as below. Users can contact InsureMO support team if wanting to add new types for deployment.


|Data Category| Type|Folder| Remark|
|--|--|--|--
|platform workflow|Other|`wf_activiti/wf_zip`| |
|container swagger api|Other|`container_api`| |
|container logo|Other|`container_logo`| |
|container print template|Other|`print_template`| |
|container email template|Other|`container_email_template`| |
|container audit transaction|Other|`container_audit_trans`| |
|container audit module|Other|`container_audit_module`| |
|container activity workflow (deprecate)|Other|`wf_xml`| |




### Quick Illustration on Whole Deployment Process -- GI Product Deployment as Example

![commit product to git](./image/config_data_release/pd_git.png)

The confirmation information pops up as:

* Commit Path is: 
    `https://oss***bizdata/product/TR_POC`

*  Branch is: master

![commit confirmation information](./image/config_data_release/pd_git_confirm.png)

After the committing is done, you can see its result.

![commit result](./image/config_data_release/pd_git_commit_result.png)

* Click **Git Diff View** to check the difference between this commit and the last one.
* Click **Revert** to revert this commit to be last one.
* Click **Package & Deploy** to enter the **Devops** to do the **CICD**. 

For more details, see [Package and Release](#package-and-release-in-portal).


### Package and Release in Portal

#### 1. Package Build -- CICD UI

- Log in to [portal](https://portal.insuremo.com/).

- Go the menu **CI/CD** under **DevOps** link [CICD](https://portal.insuremo.com/#/home/utility-and-admin/devOps).

![CICD](./image/config_data_release/portal-cicd.png)

![CICD](./image/config_data_release/portal-cicd-build.png)


#### 2. Bizasset Compile Pre-check
Since the system supports to automatically commit data, and some existed data and new submitted data are duplicate, this scenario will cause data being updated to the latest and not be reverted to older one. To avoid this scenario happens again, CI adds the logic to check duplicate data.

When doing CI to bizasset, the system will check the **Criteria**, **Dataset** and **ParamMap** in the file `criteria.json` of each folder (each step in dataio task) before compiling.

The content of the file `criteria.json` is similar to the below.

```
{
  "Criteria" : "versionIdList",
  "Dataset" : "RatingCalculationVersion",
  "ParamMap" : {
    "versionIdList" : "370107469"
  },
  "RootRecordList" : [ {
    "Action" : "UNKNOWN",
    "ColumnValueList" : [ ... ],
    "ReferencedRecord" : false
  } ],
  "Strategy" : "Replace"
}
```

So there are 2 scenarios to check:
1. All the **Criteria**, **Dataset** are the same, but **ParamMap** or **RootRecordList** is different.
2. All the **Criteria**, **Dataset**,  **ParamMap** and **RootRecordList** are the same.

##### Build Failed Solution for Pre-check

Then we should check which one is the latest as required. Keep it and remove the other one from GIT, or you can remove both folders, and submit a new one.

Then go through the CICD process again.

There are 2 results for pre-check.
* Failed: the system will pop up error as below, and the building will be broken off. You must fix them.
* Warning: the building will **NOT** be stopped, and it is still built. It is also recommended to check pop-up warnings and fix them.

For troubleshooting detailed error message, see [**Error List**](#appendix-a-error-list) in this doc.


#### 3. Build History

![Build History](./image/config_data_release/portal-cicd-buildhistory.png)


#### 4. Deployment History

![Deploy History](./image/config_data_release/portal-cicd-deployhistory.png)



### Post-Deployment Status Check

#### Notice Email

- After deployment is done, you will receive an email with task status (success or failed).

The email you got is with the title like: **[Unicorn-BizData-Deploy][SUCCESS]**. 

That if email subject is with **[FAIL]** means some errors occurred when deploying. Errors are also attached in the email. If failed, you can check the error message in the email.

![deploy_check1](./image/config_data_release/deploy_check1.png)

If you have never received such email or failed to receive such email for some time, please contact InsureMO support team for assistance.


#### DataIO Auto Deploy Status View and Republish

If during deployment, you find any abnormal behavior, you can locate **Data Import Export > Auto Deploy Status**.

![Task Status Check](./image/config_data_release/task_info3.PNG)

Sometimes if there's deployment error in specific deployment step, you can enter **Step Details** to view detailed deployment information, and you may need to pay attention to the following factors:

1. Locate any of the **Failed** tasks and see detailed error message.

![Message](./image/config_data_release/step_message.PNG)

2. Check whether the field **Executed Before?** has been deployed or not.

*  True: It means that the step has already been executed in the deployment before this execution. So there's no deployment this time and duration is always zero.
*  False: It means that the step is being executed in this execution, and **Duration** shows the period of execution of a step.
 
3. Check detailed deployment file by clicking **Download** task file.

![Step Details](./image/config_data_release/step_info2.PNG)

4. Occasionally, tasks is **FAIL** due to network connection or some data conflicts with existing data in the environment. After cleaning up dirty data in the environment, users can redeploy the package again by clicking **Republish**. 

**Note** that here the system will only redeploy previously deployed packages. If there's any package built, clicking **Republish** will not trigger that deployment.


## Troubleshooting

Usually, users will find errors during deployment, or records are not updated after deployment. There are multiple processes involved to troubleshoot the issue first:

### Step 1 Deployment Task Check

Each deployment will create a deployment log. As the first step for troubleshooting, users must make sure that the deployment log is present in effective status. 

If there's any error present, there will be an alert email sent to the tenant operation mail group. Users need to capture error information and see [**Error List**](#appendix-a-error-list) in this doc for further troubleshooting.

Also, for any of the failure records, users must copy `trace_id` or **task UUID** and paste them in the log monitor to further check on error information.

![Task status check](./image/dataio/troubleshoot_taskstatus.png)


### Step 2 Deployment Step Check

If the deployment task is present, the next step is to check on the specific deployment step.

1. Users need to make sure that there's no error status for all the deployment steps.

![Step status check](./image/dataio/troubleshoot_stepstatus.png)

If there's any error present, there will be an alert email sent to tenant operation mail group. Users need to capture error information and see [**Error List**](#appendix-a-error-list) in this doc for more information.

Also, for any of the failure records, users must copy `trace_id` or **step UUID** and paste them in the log monitor to further check on error information.


2. Users need to make sure that the file itself has been updated with correct steps.

![Step executed check](./image/dataio/troubleshoot_stepexecuted.png)

If the file hasn't been updated, it might be because:


#### Check-in Issue

The file might not have been properly checked in. To take a further look, users need to follow the steps on the screen to download an actual deployment file to check further whether the required deployment file is present or not. 

If the expected check-in data is not present, it is mostly caused by a check-in issue, and users must go to the MC environment to re-check the data again.

![File download check](./image/dataio/troubleshoot_stepdownload.png)


#### File Executed Before

Currently, if the file has been executed in previous deployment before, it will not be deployed again unless changes are made. 

In order to allow the deployment to be triggered again, users can either go to MC to adjust deployment data or follow steps below to delete deployment step records and re-trigger deployment.

1. Search the target bizasset.
2. Find the steps (the same as those of folders' names in git).
3. Delete the data-related steps.
4. Republish the task.

![republish](./image/dataio/dataio-republish_executed_steps.png)

After republishing, there should be expected records under **Data Change Record**. 

![Data Change Record](./image/dataio/dataio-republish_executed_steps-changerecord.png)


### Step 3 Deployment Record Check

If all the deployment tasks and steps are correct but the expected record is not updated, you need to check the movement of that record.

Let's take a rule as an example. Suppose there's a rule fails to update as expected. Users need to conduct two levels of checks:

1. Configuration Data Check Based on Data

First of all, since we are talking about the rule, users must go to **Rule Management** for both the MC environment and target environment, and check whether the rule actually exists or not.

2. Configuration Data Check Based on Primary Key
* If the rule exists in the target environment, it is just an update error.
    - Users can go to the target environment and locate the rule and then go to the audit screen directly:

![Rule Audit Jump](./image/dataio/rule_audit_jump.png)

* If the rule is missing in the target environment, users need to:
1) Go to the MC environment to find the record ID first. 
2) Go to target deployment to check by selecting **Public Setting > Audit Log**.  
3) Query by Primary Key, then you can see the records with change history.  

**Note** that for automatic deployment, users will always be **VIRTUAL** to differentiate those UI changes. If you have doubts about how to locate the primary key, you can contact InsureMO support team for details.

![Data Change Record](./image/dataio/troubleshoot_ruleaudit.png)

Once record change detail is located, there can be multiple scenarios:


#### Mistaken Manual Operation (Update and Delete)

The case that there’s not any record means no update triggered. Users need to download original check-in files to check on the check-in status again. 

The case that there are multiple adding and deletion records with the user not related to **VIRTUAL** means someone who mistakenly operated the record in the target environment. Such operation should be prohibited as it's the basic principle that all configuration task should be performed in MC environment. You can check with the mentioned user for details.

BTW, DataIO includes a built-in mechanism to detect duplicate data based on business keys (e.g., for data tables, it checks by table name and record usage). However, this process is memory-intensive. To ensure performance, large files (such as data table records or rate table records) are processed using a fast delete-and-insert strategy without in-memory analysis. As a result, you’ll need to manually remove any conflicting data in the target environment beforehand.

#### Duplicate Check-in and Manual Change

The case that there are multiple adding and deletion records with users related to **VIRTUAL** may be caused by duplicate check-in. One common scenario is that a tenant may have more than two products, not all of which are re-exported and committed to GIT. 

- For instance, multiple products are deployed within one job. Suppose configuration data is submitted to two folders with the suffix `_zip`. One folder contains the latest data, while the other does not.

    - During deployment, the latest data is conducted first, followed by the older data. In the target environment, under **Data Import Export > Data Change Record**, you will find that the expected latest data is initially updated but is subsequently removed when the older data is deployed.

- A usual example is that the same data (rule/rating) is under more than two products, not all of which are re-exported and committed to SVN or GIT. The following is an example when more than two products are deployed together in one job:

    1. A is updated, but B is not.
    2. When deployed, A is executed first and its record is updated to the latest.
    3. B is executed but its record is not updated to the latest.

If that happens, please check the config data in GIT and confirm whether to remove old records.

#### Cache Issue

For example, the rule/rating is updated after checking in UI, but they cannot work when doing the quotation, which is usually caused by the cache that is not updated.
- If that happens, please try to clear the cache in **Monitor > Local Cache**.

For example, the rule/rating is updated when checking in the UI, but cannot work when doing the quotation. We can clear the cache in:
* **Platform-busi-config > rating/rule/DSL**
* **Quotation > rating/rule/DSL**
* **Pa > rating/rule/DSL**


#### Improper MC Setup by Copy

Unlike newly created tenants, there are many scenarios that one tenant can be created by copying data from another tenant. For such copy process, it's very important that MC environment to be setup currently.

Our standard MC setup process for such copy is:

1. Application team to raise application copy request 
2. InsureMO SiteOps team to setup the git repo especially for config-data
3. Application team to clean-up the git repo to remove unnecessary config-data especially those products from old projects
4. Application team to contact InsureMO SiteOps team to deploy updated config-data to MC just for once
5. Application team start to conduct normal configuration, check-in and deploy to other environments

If this setup is not properly done before the application team starts any project configuration, data might inevitably be wrong.


### Step 4 Seek InsureMO Support

If you have checked all above-mentioned points and everything seems good, you can contact InsureMO support team for help.

For InsureMO team to quickly conduct troubleshooting and identify the root cause, please :
1. Remember to copy/paste all screenshot evidence mentioned above.
2. Share your tenant environment account (if not for production). 



## Q&A

### If my project phase 1 is in production and I don't want to deploy certain things which are still in development for phase 2

Basically, there are two scenarios:

- Configuration like entire one code table/data table which can be differentiated by configuration group.

Suppose there is a group-X, which contains table01, table02, table03, and it has been deployed to the production environment with the branch named ‘pkg’ on Git.

Now, there are new tables, table04 and table05, but we do not wish for them to be deployed to the production environment.

For this situation, a new group-Y can be created for managing the ‘dev’ branch tables, which can include table04 and table05. This way, the ‘pkg’ and ‘dev’ branches will not affect each other.

Migration:

If table04 is added to group-X, you can edit table04 to change its group to group-Y. As a result, when exporting group-X to the production environment, table04 will be deleted. For the internal development environment, after publishing both group-X and group-Y, table04 will first be deleted (from group-X) and then inserted (into group-Y). It is necessary to ensure the publication order of group-X and group-Y.

Since the export zip files of the same type of data (e.g., DataTable) are all placed in one Git directory, group-X and group-Y need to ensure the order of precedence. There are two strategies to ensure this:

Manually create the export directory for group-Y, ensuring it follows group-X alphabetically, and subsequently commit the tables of group-Y in the export interface by selecting this directory.

When creating group-Y as a config group, make special definitions in the group code and group name to ensure that the exported bizdata on Git is sorted alphabetically in the directory name.

- Configuration like the row data in specific code table/data table
Suppose table 01 contains two piece of data, row A and row B, and it has been deployed to the production environment with the branch named ‘pkg’ on Git.

Now there are new values row C,  but we do not wish for them to be deployed to the production environment.

For this situation, a new branch named ‘dev’ is required to be created and only check-in the row C change into this branch.

If there’s a case that there’s a row D both required for development and production,  then after check-in the ‘dev’ branch following standard process, user can specifically click ‘hotfix’ button in the deployment screen and select the ‘pkg’ branch, which will only ‘cherry-pick’ this specific commit (only row D NOT row C) into the ‘pkg’ for production release.

Please make sure that all changes follow the same process and all committed into ‘dev’. Later on, during the merge of ‘dev’ and ‘pkg’, user can use ‘dev’ entirely.

![dataio_deploy_hotfix](./image/dataio/dataio_deploy_hotfix.png)


### If I have large table with a lot of data, how to check-in?

If you encounter issues during auto-check-in with large ata table or configuration table (say records > 100,000), follow these steps:

**1. Configuration Segregation**

Large tables significantly slow down the process as it contains too much data. To prevent impacting other configuration data deployments, it is necessary to segregate the table and assign it to a dedicated configuration group.

* Steps to Segregate Configuration Data:
    - Most configurations belong to either the product level or the common level. If it is a product-level table, avoid keeping it at the product level (frequent changes to other product configurations may interfere). Instead, set the context to "Common".
    - Once it belongs to "Common", deployment is based on the configuration group. To keep it independent, we suggest creating a new configuration group and moving the table into it.
  
  For details about how to create configuration group, see [New Configuration Group](https://docs.insuremo.com/ics/app_framework/context#configuration-group1).
- Avoid Duplicated Check-in
  - If the table was previously checked in at the product level or under another configuration group, re-check in the previous file after changing the group, to prevent a duplicated check-in.

![group-large table](./image/dataio/datatable_group-large-table.png)

**2. GIT Repository and CICD Segregation**

After creating the configuration group, segregate the Git and CICD to avoid slowing down the entire project’s build and deployment.
- Git and CICD Creation:<br>
Users can contact the InsureMO SiteOps team (siteops@insuremo.com) for guidance on creation. If the table is big, we suggest creating a special big-bizasset to handle such large data.

**3. Check-In**

* Auto Check-in (After 25.01)

   - Users can check in the big-bizasset via the auto-check-in method.
      - As long as big-bizasset is adopted for deployment, the system checks in the zip file instead of the unzipped files, which gives much better deployment speed.
      - Note: The detailed change log in GIT will not be visible, which is acceptable for large data tables.

* Manual Check-in (Before 25.01)
   - For detailed steps, see [Manual Check-In](#21-manual-check-in) process illustration in this doc.

   1. Create folder in the GIT

   2. Download and unzip the file in the GIT and check in
   
       For more details, see [DATA Export & Checkin](#2-data-export-and-check-in)

   3. Conduct CICD


### If I want to move a table from group A to group B, what do I need to pay attention to?

There can be two scenarios:

1. Just change the group, but both groups are still in one GIT.
2. Change the group, and the groups belong to separate GITs.

Let us focus on the more complex scenario 2: assume two separate groups and GITs:

1. GIT A for group A  
2. GIT B for group B

So if you want to move any configuration from group A to group B, you need to:

1. Check in both A and B. Internally, this means a delete in A and an insert in B.
2. The next time you deploy, you must manually make sure that B runs together with A and that B is triggered after A.
    - If B triggers first and A afterwards, the table might be deleted. (An improvement is in the platform backlog.)

Changing groups is high-risk due to potential data deletion or inconsistency, so users should always pay special attention to this move.


### Is there any guideline for us to create config-data repos?

For a simple, small project, one repo might be enough. However, for big implementation projects, it is encouraged to create multiple repos, just like creating multiple micro-services.

Usually there are two principles for such creation:

1. By module/offering - Separate config-data into different modules or offerings, such as policy admin, bcp or claim. This helps later on: if there is a need to sell bcp without selling policy admin, it will be easy to achieve.

2. By different implementation teams - If there are multiple teams managing delivery, it is encouraged to separate repos so that one team's changes do not impact another team. Sometimes, even if a project contains both the policy admin and bcp modules, it can be a small-scale project delivered by one team, and then one repo can be enough.


## Appendix A Error List

### Build Error
#### Pre-check Fail Same Data

If duplicated records with some differences are found in the config-data, the system pops up an error as below, and the build is broken off.
Please check the config data, remove the unused data and redo the package.

```
Pre-check Fail!!! Same DataSet, Criteria and Param, But diff root records found!!!

Path A: [../src/main/bizdata/rating/52_InstallmentRating_zip/criteria.json]
Path B: [../src/main/bizdata/rating/09_NBInstallmentRating_zip/criteria.json]
=================================================================================================================
Path A: [../src/main/bizdata/configtable/SystemConfigTable_Data_zip/criteria.json]
Path B: [../src/main/bizdata/configtable/ConfigTableData_defIdListAndRecordUsage_SystemConfigTable_zip/criteria.json]
=================================================================================================================
Path A: [../src/main/bizdata/configtable/ConfigTableData_defIdListAndRecordUsage_UIConfig_zip/criteria.json]
Path B: [../src/main/bizdata/configtable/UIConfig_Data_zip/criteria.json]
=================================================================================================================
Path A: [../src/main/bizdata/plan/ANGBENPlan_zip/criteria.json]
Path B: [../src/main/bizdata/plan/AGNBEN_PDPlan_zip/criteria.json]
=================================================================================================================
Path A: [../src/main/bizdata/codetable/DataTableNew_zip/criteria.json] 
Path B: [../src/main/bizdata/datatable/DataTableWithRecord_groupId_PA_COMMON_PA Common_zip/criteria.json] 
===============================================================================================================================
Path A: [../src/main/bizdata/product/MI00002/MI00002_1.0/product/product_zip/criteria.json] 
Path B: [../bizasset/src/main/bizdata/product/MI002/MI00002_1.0/product/product_zip/criteria.json] 
===========================================================================================================================
Path A: [../src/main/bizdata/product/MI00002/rule/rule_zip/criteria.json] 
Path B: [../src/main/bizdata/product/MI002/rule/rule_zip/criteria.json] 
========================================================================================================================
```
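Such duplicate pairs can also be detected locally before the build runs. Below is a minimal sketch (the `criteria.json` file name is taken from the paths above; comparing files by their normalized JSON content is an assumption about how the pre-check matches them, not the platform's actual implementation):

```python
import hashlib
import json
from collections import defaultdict
from pathlib import Path

def find_duplicate_criteria(bizdata_root):
    """Group criteria.json files by normalized JSON content and
    return the groups that contain more than one path."""
    groups = defaultdict(list)
    for path in Path(bizdata_root).rglob("criteria.json"):
        content = json.loads(path.read_text(encoding="utf-8"))
        # Serialize with sorted keys so semantically equal files match.
        digest = hashlib.sha256(
            json.dumps(content, sort_keys=True).encode("utf-8")
        ).hexdigest()
        groups[digest].append(str(path))
    return [paths for paths in groups.values() if len(paths) > 1]
```

Running this against the local checkout lists candidate duplicate folders so they can be cleaned up before the CICD pre-check fails.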

##### Fail Example - Rating Duplicated 
Take the two folders below as an example; we find a `criteria.json` under both folders.

```
Path A: [../src/main/bizdata/rating/52_InstallmentRating_zip/criteria.json]
Path B: [../src/main/bizdata/rating/09_NBInstallmentRating_zip/criteria.json]
```

The **Criteria**, **Dataset** and **ParamMap** are the same; both folders store the same record's details.

![criteria.json compare](./image/config_data_release/criteria_json_compare_001.png)

Then we should check which one is the latest as required, keep it, and remove the other one from GIT. Alternatively, we can remove both folders and submit a new one.

##### Fail Example - Datatable Duplicated
Take the following two folders as an example. The `DataTableNew_zip` was wrongly submitted to the codetable folder, or was submitted a long time ago.

Previously, it was normal to submit data tables to the codetable folder, and the system separated them into different folders later.

Just remove the data table under the codetable folder, and make sure the latest data is submitted to the datatable folder.

```
Path A: [../src/main/bizdata/codetable/DataTableNew_zip/criteria.json] 
Path B: [../src/main/bizdata/datatable/DataTableWithRecord_groupId_PA_COMMON_PA Common_zip/criteria.json] 
```

```
Path A: [../src/main/bizdata/codetable/DataTableWithoutRecord_groupId_PA_COMMON_PA Common_zip/criteria.json] 
Path B: [../src/main/bizdata/datatable/DataTableWithoutRecord_groupId_PA_COMMON_PA Common_zip/criteria.json]  
```

##### Fail Example - Product Rule Duplicated

Products are submitted automatically as a whole, with the product, DD, rating, rule and related rate tables together. If duplicate-submission messages appear, one of the folders must have been manually submitted with an incorrect product code. Please check and remove the incorrect one.

```
Path A: [../src/main/bizdata/product/MI00002/MI00002_1.0/product/product_zip/criteria.json] 
Path B: [../src/main/bizdata/product/MI002/MI00002_1.0/product/product_zip/criteria.json] 
===========================================================================================================================
Path A: [../src/main/bizdata/product/MI00002/rule/rule_zip/criteria.json] 
Path B: [../src/main/bizdata/product/MI002/rule/rule_zip/criteria.json] 
========================================================================================================================
```


##### Fail Example - I18n Duplicated

One I18n folder is automatically submitted, and the other one is manually created.
So if the duplicated message below pops up, please check and remove the old one.


```
 Path A: [../src/main/bizdata/i18n/I18nRefData_REF_UI_PA_zip/criteria.json] 
 Path B: [../src/main/bizdata/i18n/UI_Translation_zip/criteria.json] 
```

#### Pre-check Warning Duplicate Data

If duplicated records are found in the config-data, the system pops up **Warnings** as below. The build will **NOT** be stopped and continues.

In this case, the **Criteria**, **Dataset** and **ParamMap** are all the same, and the **RootRecordList** is the same too: both folders store the same records' details. Please check the config-data. If there is any incorrect data, remove it and redo the CICD. If all the data is correct, just ignore the Warnings.

```
Pre-check Warning!!! Duplicate DataSet, Criteria and Param found!!!

Path A: [../src/main/bizdata/i18n/I18nRefData_zip/criteria.json] 
Path B: [../src/main/bizdata/i18n/I18n_RuleMessage_zip/criteria.json] 
================================================================================================================================================
Path A: [../src/main/bizdata/product/APS0001/ratetable_def/ratetable_def_zip/criteria.json] 
Path B: [../src/main/bizdata/product/VBT001/ratetable_def/ratetable_def_zip/criteria.json] 
================================================================================================================================================
Path A: [../src/main/bizdata/product/ASAN001/ratetable_def/ratetable_def_zip/criteria.json] 
Path B: [../src/main/bizdata/product/VBT001/ratetable_def/ratetable_def_zip/criteria.json] 
================================================================================================================================================
Path A: [../src/main/bizdata/product/ASAN001/rating/rating_zip/criteria.json] 
Path B: [../src/main/bizdata/product/TOPUPH1/rating/rating_zip/criteria.json] 
===============================================================================================================================================
Path A: [../src/main/bizdata/product/ASAN001/rule/rule_zip/criteria.json] 
Path B: [../src/main/bizdata/product/TOPUPH1/rule/rule_zip/criteria.json] 
================================================================================================================================================
Path A: [../src/main/bizdata/product/BGRP01/ratetable_def/ratetable_def_zip/criteria.json] 
Path B: [../src/main/bizdata/ratetable/Endor_Common_zip/criteria.json] 
================================================================================================================================================
```

##### Warning Example - Rule Duplicated

Take the two folders below as an example; we find the `criteria.json` under both folders.

```
================================================================================================================================================
Path A: [../src/main/bizdata/product/ASAN001/ratetable_def/ratetable_def_zip/criteria.json] 
Path B: [../src/main/bizdata/product/VBT001/ratetable_def/ratetable_def_zip/criteria.json] 
================================================================================================================================================
Path A: [../src/main/bizdata/product/ASAN001/rule/rule_zip/criteria.json] 
Path B: [../src/main/bizdata/product/TOPUPH1/rule/rule_zip/criteria.json] 
```

The **Criteria**, **Dataset** and **ParamMap** are the same, and both folders store the same records' details.

![criteria.json compare](./image/config_data_release/criteria_json_compare_002.png)

For rate tables and rules, these Warnings always occur when they are mapped to multiple products. If this is expected, you can ignore them.

#####  Warning Example - Ratetable

Take the following two folders as an example: some rate tables are submitted both under a product and as common rate tables. If a rate table is mapped to a product, it cannot be set as common, so such data is always wrongly submitted.

Please check and fix the config-data, then go through the CICD process again.

```
Path A: [../src/main/bizdata/product/BGRP01/ratetable_def/ratetable_def_zip/criteria.json] 
Path B: [../src/main/bizdata/ratetable/Endor_Common_zip/criteria.json] 
```

##### Warning Example - I18n

The I18n data is always exported by module, so if two folders are duplicated, the same module data must have been submitted twice with different names.

Then we should check which one is the latest as required, keep it, and remove the other one from GIT. Alternatively, we can remove both folders and submit a new one.

```
Path A: [../src/main/bizdata/i18n/I18nRefData_zip/criteria.json] 
Path B: [../src/main/bizdata/i18n/I18n_RuleMessage_zip/criteria.json] 
```

#### Pre-check Warning Wrong RecordUsage Data

As designed, all tenant-maintained data has its record usage set to 5. If records with a non-5 usage are found in the config-data, the system pops up a warning as below. The build is **NOT** stopped; the process continues uninterrupted.

If this warning occurs, carefully review the data records; some exported data may NOT be required. If the export operation is correct, the issue likely stems from a defect in the system export function. Please contact the platform team for confirmation.
 
```
 Pre-check Warning!!! Wrong RecordUsage data found!!!

 Path: [../src/main/bizdata/codetable/CodeTableI18nByGroup_groupId_PUB_Platform Pub_zip/T_I18N_CODE.csv] Some RecordUsage data is 2!
 Path: [../src/main/bizdata/codetable/CodeTableI18nByGroup_groupId_PUB_Platform Pub_zip/T_PUB_RESOURCE_BUNDLE.csv] Some RecordUsage data is 2!
 Path: [../src/main/bizdata/codetable/CodeTableI18nRefData_REF_Array_zip/T_I18N_CODE.csv] Some RecordUsage data is 2!
 Path: [../src/main/bizdata/codetable/CodeTableI18nRefData_REF_Array_zip/T_PUB_RESOURCE_BUNDLE.csv] Some RecordUsage data is 2!
 Path: [../src/main/bizdata/codetable/CodeTableNew_groupId_PA_COMMON_PA Common_zip/T_PUB_RES_CTX_RELATION.csv] Some RecordUsage data is 1!
```
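The exported CSVs can also be pre-scanned locally before check-in. The sketch below (assuming plain CSV exports with a `RECORD_USAGE` header column, as in the paths above) reports every file that contains a usage value other than 5:

```python
import csv
from pathlib import Path

def find_wrong_record_usage(bizdata_root, expected="5"):
    """Map each CSV file to the unexpected RECORD_USAGE values it contains."""
    bad = {}
    for path in Path(bizdata_root).rglob("*.csv"):
        with path.open(newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            if reader.fieldnames is None or "RECORD_USAGE" not in reader.fieldnames:
                continue  # table without a RECORD_USAGE column
            wrong = {row["RECORD_USAGE"] for row in reader} - {expected}
            if wrong:
                bad[str(path)] = sorted(wrong)
    return bad
```

Files flagged here can then be checked against the rules above: usage 2 is platform data, 5 is tenant data.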

Sometimes, if you get strange record-usage errors like the following example, they might be caused by code-merge errors. Please double-check your business data file first.

For example, when users get the following message:

```
Path: [/home/jenkins/agent/workspace/TenantCode/app/src/main/bizdata/product/THAVMI/THAVMI_1.0/codetable/codetable_zip/T_DD_BUSI_CODE_TABLE.csv] 
Error occurred when parse csv file: Index for header 'RECORD_USAGE' is 18 but CSVRecord only has 1 value!!
Path: [/home/jenkins/agent/workspace/TenantCode/app/src/main/bizdata/product/THAVMI/THAVMI_1.0/codetable/codetable_zip/T_PUB_RES_CTX_RELATION.csv] 
Error occurred when parse csv file: Index for header 'RECORD_USAGE' is 7 but CSVRecord only has 1 value!!
Path: [/home/jenkins/agent/workspace/TenantCode/app/src/main/bizdata/product/THAVMI/THAVMI_1.0/context_version/context_version_zip/T_PUB_RES_CTX_RELATION.csv] 
Error occurred when parse csv file: Index for header 'RECORD_USAGE' is 7 but CSVRecord only has 1 value!!
Path: [/home/jenkins/agent/workspace/TenantCode/app/src/main/bizdata/product/THAVMI/THAVMI_1.0/datatable/datatable_zip/T_DD_BUSI_DATA_TABLE.csv] 
Error occurred when parse csv file: Index for header 'RECORD_USAGE' is 9 but CSVRecord only has 1 value!!
Path: [/home/jenkins/agent/workspace/TenantCode/app/src/main/bizdata/product/THAVMI/THAVMI_1.0/datatable/datatable_zip/T_DD_BUSI_DATA_TABLE_RECORD.csv] 
Error occurred when parse csv file: Index for header 'RECORD_USAGE' is 7 but CSVRecord only has 1 value!!
```

It might be caused by a wrong automatic merge, as below:

![Task RUNNING](./image/dataio/dataio_recordusage_merge_error.png)
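Since the parser complains that a record "only has 1 value", the damage can be located locally by comparing each record's field count with the header's. A minimal sketch (assuming plain comma-separated exports):

```python
import csv
from pathlib import Path

def find_ragged_rows(bizdata_root):
    """Return (path, record number, row width, header width) for every
    CSV record whose field count differs from its header's.
    The header counts as record 1."""
    problems = []
    for path in Path(bizdata_root).rglob("*.csv"):
        with path.open(newline="", encoding="utf-8") as f:
            reader = csv.reader(f)
            header = next(reader, None)
            if header is None:
                continue  # empty file
            for rec_no, row in enumerate(reader, start=2):
                if row and len(row) != len(header):
                    problems.append((str(path), rec_no, len(row), len(header)))
    return problems
```

Any record reported here is a candidate for the kind of merge damage shown in the screenshot above.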


### Deployment Error

####  xxx-config-data Task Is Still Running 
##### Common Cause
The platform-busi-config is restarted when the deployment job is executed.

![Task RUNNING](./image/dataio/task_running_001.PNG)

##### Troubleshooting

* Reset the task status to **FAIL** in the DB:

  ```sql
  UPDATE t_pub_deploy_task SET status = 'FAIL' WHERE status = 'RUNNING';
  ```

* We will support updating the status in the UI later. Please wait for that function to arrive.

Then re-deploy the config-data.


#### No such File or Directory

`java.io.FileNotFoundException: No such file or directory`

##### Common Cause
When re-deploying or downloading a source file, the file cannot be found. This happens when the task is executed and platform-busi-config is then restarted: when platform-busi-config restarts, the existing files under it are removed.

##### Troubleshooting

Restart the application of the package, and the package will be re-deployed.

##### Message Example

```json
{"timestamp":"2021-07-22T16:49:16","timestampServer":"2021-07-22T08:49:16.473","status":500,"error":"Internal Server Error","path":"/v1/downloadS3File","traceId":"3ae0973224cfb256","traceIdContainer":null,"exception":"...platform.bizdataio.exception.FileParseException","code":"TenantCode-PLATFORM-COMMON-E9999","message":"Exception has been thrown, check it in ELK with trace id: 3ae0973224cfb256, message content is: java.io.FileNotFoundException: /app/repos/party_party_D97B37ED2C7BAAEDD3888E68C187D76F:party_D97B37ED2C7BAAEDD3888E68C187D76F.zip (No such file or directory)"}
```

#### Task Shows **RUNNING** All the Time
The deployment task does not usually last very long. As a rule of thumb, a full deployment of a package into a new environment should take no more than 15 minutes; for newly added or updated records, the process should finish within 5 minutes.

You can check the steps of the task: if the steps change over time, the deployment is really in progress; if not, there might be an issue.

##### Common Cause
It usually happens when the task is executed and then platform-busi-config is restarted. 

When platform-busi-config is restarted, the task cannot continue to be executed.

During deployment, excessive configuration data from certain tenants can trigger a restart of platform-busi-config (you can check the live duration of platform-busi-config in boot-admin or eureka).

##### Troubleshooting
Please execute the following SQL in the configuration DB of the target environment. Then the task will be re-executed automatically. You can check the status of the task later.

```sql
UPDATE t_pub_deploy_task t SET t.`STATUS`='SUBMIT' WHERE t.`STATUS`='RUNNING';
```


#### Duplicate Entry xxx for Key xxxxx
##### Common Cause

1. Data has been published to the current environment, but the record in the MC environment was deleted and a new record with the same Code or Name was created. When the configuration data is pushed again, the primary key (Code or Name) of the old and new records is the same, but their IDs in the DB are inconsistent, and a conflict error pops up.

2. For temporary testing or validation, data is created in the current environment, and then identical configuration data is created in the MC environment. When pushing from MC to the current environment, the primary key (Code or Name) of the two is the same, but their IDs in the DB are inconsistent, and a conflict error pops up.

3. The platform configuration data conflicts with the tenant configuration data. For example, the tenant has already created the relevant data, and the platform adds the same data later, causing a conflict.

##### Troubleshooting

* For Cause 1 and 2: Delete the existing data in the current environment and re-execute the deployment task of configuration data.
* For Cause 3: Delete the data in the tenant MC environment and current environment and use the data in the platform.

##### Message Example

```java
record = DataRecord{action=Insert, isReferencedRecord=true, businessColumnList_=[DataBusinessColumn{key='T_DD_BUSI_DATA_TABLE.DATA_TABLE_ID', tableName='T_DD_BUSI_DATA_TABLE', columnName='DATA_TABLE_ID', columnValue=373321088}], columnValueList=[373321088, Gender, Gender, {"DataValue":{"DataType":-4,"IsIndex":"N","IsPrimaryKey":"N","Label":"DataValue","Name":"DataValue","NeedMultipleLanguage":"N","PhysicalTableColumn":""},"DisplayValue":{"DataType":-4,"IsIndex":"N","IsPrimaryKey":"N","Label":"DisplayValue","Name":"DisplayValue","NeedMultipleLanguage":"N","PhysicalTableColumn":""}}, -1, 17312387, 17312387, 2, -1, null, Fri Apr 03 16:56:00 IST 2020, Fri Apr 03 16:56:00 IST 2020, 130, Gender]}
sql = INSERT INTO T_DD_BUSI_DATA_TABLE(`DATA_TABLE_ID`,`NAME`,`DESCRIPTION`,`FIELDS`,`CONTEXT_ID`,`INSERT_BY`,`UPDATE_BY`,`RECORD_USAGE`,`DATA_TABLE_TYPE`,`PHYSICAL_TABLE_NAME`,`INSERT_TIME`,`UPDATE_TIME`,`GROUP_ID`,`CODE`) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?)

Duplicate entry 'Gender' for key 'UNI_DD_BUSI_DATA_TABLE__CODE'
...platform.bizdataiogateway.StandardDeployTaskEngine.doImport(StandardDeployTaskEngine.java:505)
...platform.bizdataiogateway.StandardDeployTaskEngine.execute(StandardDeployTaskEngine.java:423)
...platform.bizdataiogateway.StandardDeployTaskEngine.scheduleExecuteDeployTask(StandardDeployTaskEngine.java:387)
...platform.bizdataiogateway.StandardDeployTaskEngine$$FastClassBySpringCGLIB$$5a5f3629.invoke()
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
org.springframework.cloud.sleuth.instrument.scheduling.TraceSchedulingAspect.traceBackgroundThread(TraceSchedulingAspect.java:69)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633)
org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
...platform.bizdataiogateway.StandardDeployTaskEngine$$EnhancerBySpringCGLIB$$2b2acece.scheduleExecuteDeployTask()
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
```

#### Cannot Add or Update a Child Row 

A Foreign Key Constraint Failed.

##### Common Cause

1. The foreign key data is missing from the exported data. If the relevant foreign key data does not exist at import time, an error pops up.
2. The execution sequence of the data needs to be adjusted: the dependent foreign key data should be pushed before the current data.

The InsureMO-related data contains:
* Market Product and Technical Product
  * When importing Market Product, the related technical product should have been imported.

* Field and DD, CodeTable and DD
  * When importing DD, the related Field and CodeTable should have been imported.

* CodeTable and DataTable
  * When importing CodeTable, the related DataTable should have been imported.

* Rule and Rule Group, Rule Group and Rule Event, Rule Event and Driver
   * When importing Rule, the related Rule Group should have been imported.
   * When importing Rule Group, the related Rule Event and Driver should have been imported.
   * When importing Rule Event, the related Rule Driver should have been imported.

*  Index and DD
   * When importing Index, the related DD should have been imported.

##### Troubleshooting
* For Cause 1: Export the related data and submit to SVN/GIT, then re-push the configuration data.
* For Cause 2: Add a prefix to the folder, for example `01_xxx`, to ensure the dependent data is executed before the data that depends on it.
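The prefix works because the bizdata folders are processed in alphabetical order, so a numeric prefix pulls the dependency ahead of everything that needs it. A tiny illustration with hypothetical folder names:

```python
# Hypothetical folder names: prefixing the rule-event folder with "01_"
# makes it sort (and therefore deploy) before the folders that depend on it.
folders = ["rule_group", "01_rule_event", "rule"]
print(sorted(folders))  # ['01_rule_event', 'rule', 'rule_group']
```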

##### Message Example
As shown below, when executing INSERT INTO `T_RM_EVENT_GROUP_MAPPING`, the related `GROUP_ID` does not exist in the referenced table `T_RM_GROUP`.

```java
record = DataRecord{action=Insert, isReferencedRecord=false, businessColumnList_=[DataBusinessColumn{key='T_RM_EVENT_GROUP_MAPPING.MAPPING_ID', tableName='T_RM_EVENT_GROUP_MAPPING', columnName='MAPPING_ID', columnValue=373330762}], columnValueList=[373330762, 373277151, null, 373277146, 17312387, 17312387, 5, 5, Tue Apr 07 11:19:32 CST 2020, Tue Apr 07 11:19:32 CST 2020]}
sql = INSERT INTO T_RM_EVENT_GROUP_MAPPING(`MAPPING_ID`,`GROUP_ID`,`APPLICABLE_CONDITION`,`EVENT_ID`,`INSERT_BY`,`UPDATE_BY`,`RECORD_USAGE`,`RUN_PRIORITY`,`INSERT_TIME`,`UPDATE_TIME`) VALUES(?,?,?,?,?,?,?,?,?,?)

Cannot add or update a child row: a foreign key constraint fails (`vela_dev_ms_cfg`.`t_rm_event_group_mapping`, CONSTRAINT `FK_RM_EVENT_GROUP_MAPPING_G` FOREIGN KEY (`GROUP_ID`) REFERENCES `t_rm_group` (`GROUP_ID`) ON DELETE NO ACTION ON UPDATE NO ACTION)
...platform.bizdataiogateway.StandardDeployTaskEngine.doImport(StandardDeployTaskEngine.java:514)
...platform.bizdataiogateway.StandardDeployTaskEngine.execute(StandardDeployTaskEngine.java:432)
...platform.bizdataiogateway.StandardDeployTaskEngine.scheduleExecuteDeployTask(StandardDeployTaskEngine.java:396)
...platform.bizdataiogateway.StandardDeployTaskEngine$$FastClassBySpringCGLIB$$5a5f3629.invoke()
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
org.springframework.cloud.sleuth.instrument.scheduling.TraceSchedulingAspect.traceBackgroundThread(TraceSchedulingAspect.java:69)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633)
org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
...platform.bizdataiogateway.StandardDeployTaskEngine$$EnhancerBySpringCGLIB$$1ab147a0.scheduleExecuteDeployTask()
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

```

#### Cannot Delete or Update A Parent Row

A Foreign Key Constraint Fails.

##### Common Causes


1. The foreign key data is missing from the exported data. If the relevant foreign key data does not exist at import time, an error pops up.
2. The execution sequence of the data needs to be adjusted: the dependent foreign key data should be pushed before the current data.

The InsureMO-related data contains:
* Market Product and Technical Product
  * When importing Market Product, the related Technical Product should have been imported.

* Field and DD, CodeTable and DD
  * When importing DD, the related Field and CodeTable should have been imported.

* CodeTable and DataTable
  * When importing CodeTable, the related DataTable should have been imported.

* Rule and Rule Group, Rule Group and Rule Event, Rule Event and Driver
   * When importing Rule, the related Rule Group should have been imported.
   * When importing Rule Group, the related Rule Event and Driver should have been imported.
   * When importing Rule Event, the related Rule Driver should have been imported.

* Index and DD
  * When importing Index, the related DD should have been imported.

##### Troubleshooting
* For Cause 1: Export the related data and submit to SVN/GIT, then re-push the configuration data.
* For Cause 2: Add a prefix to the folder, for example `01_xxx`, to ensure the dependent data is executed before the data that depends on it.

##### Message Example
As shown below, when executing DELETE FROM `T_DD_BUSI_DATA_TABLE`, child records referencing the `DATA_TABLE_ID` still exist in the table `T_DD_BUSI_DATA_TABLE_RECORD`, so the parent row cannot be deleted.

```java
record = DataRecord{action=Delete, isReferencedRecord=false, businessColumnList_=[DataBusinessColumn{key='T_DD_BUSI_DATA_TABLE.DATA_TABLE_ID', tableName='T_DD_BUSI_DATA_TABLE', columnName='DATA_TABLE_ID', columnValue=370466619}], columnValueList=[370466619, CountryCode, {"CharacterCode":{"DataType":-4,"IsIndex":"N","IsOverride":"N","IsPrimaryKey":"N","Label":"CharacterCode","Name":"CharacterCode","NeedMultipleLanguage":"N","PhysicalTableColumn":""},"DigitalCode":{"DataType":-4,"IsIndex":"N","IsOverride":"N","IsPrimaryKey":"N","Label":"DigitalCode","Name":"DigitalCode","NeedMultipleLanguage":"N","PhysicalTableColumn":""},"EnglishName":{"DataType":-4,"IsIndex":"N","IsOverride":"N","IsPrimaryKey":"N","Label":"EnglishName","Name":"EnglishName","NeedMultipleLanguage":"N","PhysicalTableColumn":""},"Name":{"DataType":-4,"IsIndex":"N","IsOverride":"N","IsPrimaryKey":"N","Label":"Name","Name":"Name","NeedMultipleLanguage":"N","PhysicalTableColumn":""}}, -1, 17312387, 17312387, 5, -1, null, 2020-05-13 15:08:03.0, 2020-05-13 15:08:03.0, 127, null]}
sql = DELETE FROM T_DD_BUSI_DATA_TABLE WHERE DATA_TABLE_ID = ?

Cannot delete or update a parent row: a foreign key constraint fails (`vela_smk_ms_cfg`.`t_dd_busi_data_table_record`, CONSTRAINT `FK_DD_BUSI_DATA_TB_RECORD__DID` FOREIGN KEY (`DATA_TABLE_ID`) REFERENCES `t_dd_busi_data_table` (`DATA_TABLE_ID`) ON DELETE NO ACTI)
...platform.bizdataiogateway.StandardDeployTaskEngine.doImport(StandardDeployTaskEngine.java:505)
...platform.bizdataiogateway.StandardDeployTaskEngine.execute(StandardDeployTaskEngine.java:423)
...platform.bizdataiogateway.StandardDeployTaskEngine.scheduleExecuteDeployTask(StandardDeployTaskEngine.java:387)
...platform.bizdataiogateway.StandardDeployTaskEngine$$FastClassBySpringCGLIB$$5a5f3629.invoke()
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
org.springframework.cloud.sleuth.instrument.scheduling.TraceSchedulingAspect.traceBackgroundThread(TraceSchedulingAspect.java:69)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633)
org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
...platform.bizdataiogateway.StandardDeployTaskEngine$$EnhancerBySpringCGLIB$$99185dde.scheduleExecuteDeployTask()
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

```

#### Table Does Not Exist
##### Common Cause
1. The current environment has been upgraded, so the structure of the data table no longer matches the MC environment.
2. The current environment has been upgraded, and the structure of the data table has diverged too far from the MC environment.

##### Troubleshooting
1. Confirm with AD how the table structure changed.
2. Update the current environment or the MC environment so that the versions match.

##### Message Example

As shown below, after the current environment was upgraded, the structure of the data table in SVN/GIT no longer matches. Export or update the MC environment and then re-export the data.

```java
Table 'vela_smk_ms_cfg.t_prd_plan_co_insurence' doesn't exist
...platform.bizdataiogateway.StandardDeployTaskEngine.doImport(StandardDeployTaskEngine.java:505)
...platform.bizdataiogateway.StandardDeployTaskEngine.execute(StandardDeployTaskEngine.java:423)
...platform.bizdataiogateway.StandardDeployTaskEngine.scheduleExecuteDeployTask(StandardDeployTaskEngine.java:387)
...platform.bizdataiogateway.StandardDeployTaskEngine$$FastClassBySpringCGLIB$$5a5f3629.invoke()
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
org.springframework.cloud.sleuth.instrument.scheduling.TraceSchedulingAspect.traceBackgroundThread(TraceSchedulingAspect.java:69)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644)
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633)
org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
...platform.bizdataiogateway.StandardDeployTaskEngine$$EnhancerBySpringCGLIB$$fe2a7b94.scheduleExecuteDeployTask()
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

```

#### The Data Package zip Is Not Valid According to Spec
##### Common Cause
If the error message is displayed as follows:

the data package `indeez-config-data-indeez-1.10/context/2052_151724_MESSAGE/PubMessage_20201121_105043000.zip` is not valid according to spec

In the above data package, **context** is a folder under bizdata. By design, only a one-layer folder structure is allowed.
- The red path on the right contains 2 folder layers.
- The green path on the left contains 1 folder layer.

![folders only1 layer allowed](./image/dataio/folders_only1_layer_allowed.png)
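The one-layer rule can be checked before committing. A minimal sketch, assuming paths relative to the bizdata root; the helper function is hypothetical, not part of DataIO:

```python
# Each data package zip must sit exactly one folder below the bizdata root.
# "context/2052_151724_MESSAGE/PubMessage_....zip" has two folder layers
# and is rejected; "context/PubMessage_....zip" has one and is accepted.
def is_valid_layout(relative_path: str) -> bool:
    # relative_path is relative to the bizdata root
    parts = relative_path.split("/")
    # exactly one folder, then the zip file itself
    return len(parts) == 2 and parts[-1].endswith(".zip")

print(is_valid_layout("context/PubMessage_20201121_105043000.zip"))  # True
print(is_valid_layout(
    "context/2052_151724_MESSAGE/PubMessage_20201121_105043000.zip"))  # False
```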

##### Message Example
```json
{"uuid":"indeez-config-data-bizasset_9DF58A599C448BF2C4A5E39679F16E86","status":"FAIL","traceId":"63b682af8fcf5290","message":"task [indeez-config-data-bizasset_9DF58A599C448BF2C4A5E39679F16E86] deploy failed, file [/app/repos/indeez-config-data-bizasset_9DF58A599C448BF2C4A5E39679F16E86.zip] <br/>the data package indeez-config-data-indeez-1.10/context/2052_151724_MESSAGE/PubMessage_20201121_105043000.zip is not valid according to spec 
```

### Data Abnormal

#### Expected data is not updated

##### Debug Step

The data deployment workflow is:
1. Maintain the data in MC.
2. Commit it to GIT.
3. Run CICD to build a package and deploy the package to a test environment.
4. Deploy the tested package to a formal environment.

So, for debugging purposes, we need to check the data step by step. Hereinafter, any deployed environment is referred to as the target environment.

1. Get the Id of the data record that is expected to be updated (you can use the F12 developer mode).
- For example: field Id, object Id, mapping Id, rule Id, batch Id, data table Id, data record Id, etc. The example below uses Id **552263146**.

2. Check the change history of the record by record Id under menu **Public Setting > Audit Log** in the target environment.
- If the user is **VIRTUAL**, the record was updated by a bizasset deployment.
- If it is a user's name, the record was manually added/edited/deleted from the UI.
- If it was manually changed by someone and you want to revert to the deployed data, please rerun the executed step by following the guide [Rerun Executed Step](#file-executed-before).

![debug step 001](./image/dataio/datarecord_debug_0011.png)

- If you cannot access the UI but can access the DB, run the SQL below to collect the information. Some configuration information is in the pub DB and some in the cfg DB; try both, or confirm with AD first.

```sql
SELECT * FROM t_pub_database_audit_log t WHERE t.`ENTITY_ID`=552263146 ORDER BY t.`AUDIT_TIME` DESC;
```

3. If the record was not manually updated, check the change history for the primary key under menu "**Data Import Export > Data Change Record**". Download the file and share it with AD for help, or check the file content yourself to see whether the expected data is in the file.
    * If the expected data exists, ask AD for help.
    * If the expected data does not exist, go to step 4.

![debug step 003](./image/dataio/datarecord_debug_001.png)

- If you cannot access the UI but can access the DB, run the SQL below in the cfg DB to collect the information.

```sql
SELECT * FROM t_pub_dataio_import_record t WHERE t.`PRIMARY_KEY`=552263146 ORDER BY t.`IMPORT_TIME` DESC;
```

4. Compare the GIT data with the file downloaded in step 3.
    * If they are the same, ask AD for help to do further investigation.
    * If they are not the same, go to step 5.

5. Check the submitted data in GIT.
    * If the expected data does not exist, please re-submit from MC to GIT, then run the CICD again.
    * If the expected data exists, please check the CICD build history and deployment history in [portal devOps CICD](https://portal.insuremo.com/#/home/utility-and-admin/devOps). Make sure the expected package was built after the expected data was committed to GIT.

6. Check the original data in MC.
    * If the expected data does not exist, please update it and run the CICD again. To see how the data changed, refer to step 2.



#### Error Message Displays a Single Null

##### Common Cause

There is an extra, unnecessary file in the check-in folder.

##### Troubleshooting

Check the GIT folder to see whether there is any unnecessary file, as shown below.

![dataio_error_checksum](./image/dataio/dataio_error_checksum.jpg)



##### Message Example

```java
[traceId:66617a0eb96811bc]
null 
...data.transfer.exception.DataTransferException
	at ...platform.bizdataiogateway.schedule.job.StepJob.doImport(StepJob.java:92)
	at ...platform.bizdataiogateway.schedule.job.StepJob$$FastClassBySpringCGLIB$$18308ae7.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
```
	


## Appendix B Detailed Deployment Component Sample

### GI Product Configuration Data Export and Check-In UI Reference Operation

#### 1. Technical Product 

![Product](./image/config_data_release/pd_tech.png)

##### Git

![Git](./image/config_data_release/pd_svn.png)


#### 2. Market Product 

  The operation is the same as that of technical product.

#### 3. Plan
Plans can be exported selectively, so we can export all plans or export them by product.

![Plan](./image/config_data_release/plan.PNG)

##### Git

![Git](./image/config_data_release/plan_svn.PNG)


### Common Configuration Data Export and Check-In UI Reference Operation

#### DD-Common Model

![dd_common_model](./image/config_data_release/dd_common_model.png)


##### Git

`..\src\main\bizdata\ddobject`

DataIO executes the folders in name order (0-9, then A-Z).

Please adjust the folder names so that when a field is bound to an object, both the field and the object already exist:
1. Object
2. Field
3. FieldBinding

Or

1. Field
2. Object
3. FieldBinding
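Either ordering above works because DataIO imports folders in name order. A quick way to sanity-check a planned layout; the prefixed names and the dependency map below are illustrative examples, not DataIO requirements:

```python
# Verify that dependency folders sort before their dependents under
# DataIO's lexicographic execution order (0-9, then A-Z).
# The folder names and the dependency map are illustrative only.
DEPENDS_ON = {
    "03_FieldBinding_zip": ["01_Object_zip", "02_Field_zip"],
}

def order_is_safe(folders, depends_on=DEPENDS_ON):
    # Map each folder name to its position in the execution order.
    order = {name: i for i, name in enumerate(sorted(folders))}
    return all(
        order[dep] < order[folder]
        for folder, deps in depends_on.items()
        if folder in order
        for dep in deps
        if dep in order
    )

print(order_is_safe(["03_FieldBinding_zip", "01_Object_zip", "02_Field_zip"]))  # True
```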


![Git](./image/config_data_release/dd_common_model_svn.png)


#### DD-Data Table

   Data tables used to be exported together with code tables. However, some data tables have no code table (for example, data tables for configuration items) and need to be exported on their own.
   
   So the platform changed the design to export data tables separately.
   
   ![DDTable](./image/config_data_release/dd_datatable.png)
   

##### Git
 The following folders are automatically generated when committing.
 
* `../src/main/bizdata/datatable/DataTableWithRecord_groupId_PA_COMMON_PA Common_zip`
* `../src/main/bizdata/datatable/DataTableWithoutRecord_groupId_PA_COMMON_PA Common_zip`

#### DD-CodeTable

The DataIO will execute the folders by names (from 0-9, A-Z).


![DDTable](./image/config_data_release/dd_codetable.png)


##### Git

* `..\src\main\bizdata\codetable\ group126_pa_common_codetable_zip`
* `..\src\main\bizdata\codetable\ group126_pa_common_datatable_zip`

The codetables and the related datatables are always exported together, so their folders come in pairs.

<div class="docs-tip"><span class="docs-admonitions-text">tip</span>

 If the related data table is set to "**Update Records Remotely**", the records will not be exported; only the data table definition-related information will be exported. Clients can then maintain the records by API.

</div>

As the picture below shows, 
* The DataTableWithoutRecords folder is created for data tables that are set to "**Update Records Remotely**".
* The DataTableWithRecords folder is created for data tables that are NOT set to "**Update Records Remotely**".

![Git](./image/config_data_release/dd_codetable_svn.png)


#### DD-CodeTable-Large Table

If a code table is too large (>20K records), the logic for it is different: it should be set up under Large Table.

![Table](./image/config_data_release/dd_codetable_largetable.png)


##### Git

`..\src\main\bizdata\codetable\Large_CodeTable_zip`


#### Context-Pub Message

![Table](./image/config_data_release/context_pub_message.png)


##### Git

`..\src\main\bizdata\context`


![Table](./image/config_data_release/context_pub_message_svn.png)


#### Rating-Common

![Table](./image/config_data_release/common_rating.png)



##### Git

`..\src\main\bizdata\Rating`

Rating used to be exported per individual rating; now it is exported by group only. Please export by group and remove from GIT any individually exported formulas that are included in a group.

![Table](./image/config_data_release/common_rating_svn.png)


#### Rule-Common

DataIO executes the folders in name order (0-9, then A-Z).

Please adjust the folder names to make sure:
- When importing rule groups, the drivers and events exist.
- When importing rules, the rule groups exist.

Please order the folders as:
- Drivers and events
- Rule groups
- Rules 

Recommended folders to submit are listed below. To reduce the effort of choosing folders, we recommend exporting all of the drivers and events, groups, or common rules in one commit.


* `01_ruleDriverEvent_xxxEvent_zip`
* `01_ruleDriverEvent_zip`   
* `02_ruleGroup_zip`
* `03_rule_EventGroupMapping_zip`
* `04_rule_CommonGroovyRule_zip`

![Table](./image/config_data_release/rule_svn.PNG)


#### Rule Driver and Event

Rule Driver and Event need to be exported **ONE BY ONE**. The system does NOT support deploying all the drivers and events in a batch.

Please export **one** each time.

It is recommended to submit to the folder like `01_ruleDriverEvent_zip`  or  `01_ruleDriverEvent_xxxEvent_zip`. 

![Table](./image/config_data_release/rule_driverevent.PNG)

![Table](./image/config_data_release/rule_driverevent_all.PNG)


##### Rule Group
Export all Rule Groups. The exported zip contains 2 folders:
  *   `rule_EventGroupById_groupIdAndRecordUsage_20210623_155654000_zip`, which records the event-group mappings. It is recommended to submit it to a folder like `03_ruleEventGroupMapping_zip`.
  *   `rule_GroupById_groupIdAndRecordUsage_20210623_155654000_zip`, which records the groups. It is recommended to submit it to a folder like `02_ruleGroup_zip`.

![Table](./image/config_data_release/rule_group.PNG)

##### Common Groovy Rules (NOT Mapped to Product)

Export all the common rules. It is recommended to submit to the folder like `04_rule_CommonGroovyRule_zip`.

![Table](./image/config_data_release/rule_common.PNG)


##### Common Validation Rules (NOT Mapped to Product)

The validation rules are no longer recommended. For validation, please use Groovy rules or DD-related validation when mapping fields.


#### Rate Table-Common

![Table](./image/config_data_release/ratetable_common.png)


##### Git

`..\src\main\bizdata\ratetable`

#### Search Index

If the index is not newly added by a tenant, only the index fields need to be exported.
If the index is newly added, the index domain needs to be exported as well as the index fields.


![Table](./image/config_data_release/index.png)


##### Git

`..\src\main\bizdata\search`

To make sure the index can be deployed successfully, please adjust the folder names so that the index is deployed before the fields.

1.  `01_index_zip`
2.  `01_index_fields_zip`


#### URP-API Auth

Some tenants have customized APIs. Users are only allowed to access authorized APIs, so we need to configure the APIs in URP.

![Export by Module](./image/config_data_release/urp_permissioncode_export.png)


##### Git

`..\src\main\bizdata\urp`

![Git](./image/config_data_release/urp_permissioncode_git.png)

##### Global Parameter

Some tenants customized the global parameters under **Public Setting > Global Parameter**. 

![Global Parameter](./image/config_data_release/globalparameter.png)


###### Git

`..\src\main\bizdata\configtable`

![Git](./image/config_data_release/git_configtable.png)


##### Batch

Batch - Job Definition

![Table](./image/dataio/3-1-1.png)

Select the target job and click **Export**.

Unzip the exported "`.zip`" and create a folder with suffix `_zip` under ***`batch`***.

![Table](./image/dataio/3-1-2.png)

##### Content Template


![Table](./image/dataio/3-1-8.png)

Select the target Group and click **Export**.

If the Group is newly added, please make sure the related context definition data is also exported. Please refer to the configuration group section below.

Unzip the exported "`.zip`" and create a folder with suffix `_zip` under `content_template`.

##### Context - Configuration Group


![Table](./image/dataio/3-1-9.png)

Select the target Group and click **Export**.

Unzip the exported "`.zip`" and create a folder with suffix `_zip` under `context`.

![Table](./image/dataio/3-1-10.png)



