The Incremental flag has moved from the source section to the target section in both the UI and yml. If it is used in existing processing configs, the upgrade utility will move this property to the target section during the upgrade.
The Additional Columns section has moved from the source section to the target section in both the UI and yml. If it is used in existing processing configs, the upgrade utility will move this section to the target section during the upgrade.
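A minimal sketch of how the upgrade utility rearranges an existing config for both of the moves described above, assuming a yml layout with top-level source and target sections; the table names, column name, and value are hypothetical placeholders, and the exact shape of the additional_columns entries may differ in real configs.

```yaml
# Before upgrade (hypothetical pre-2.1.0 layout)
source:
  table: customers_raw          # hypothetical source table
  incremental: true             # moved to target during upgrade
  additional_columns:           # moved to target during upgrade
    load_region: "EMEA"         # hypothetical additional column
target:
  table: customers_curated      # hypothetical target table

# After upgrade
source:
  table: customers_raw
target:
  table: customers_curated
  incremental: true
  additional_columns:
    load_region: "EMEA"
```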
The framework generated flag has been removed for framework columns specified in additional columns. If a value is specified for a framework column, the specified value is used; otherwise the value is always framework generated. If the framework_generated flag is used in existing processing configs, the upgrade utility will remove it from additional columns when its value is "true". If any other value is specified in the yml, the upgrade utility will keep that framework_generated flag as is and mark the respective config as non-upgradable. When the framework_generated flag has a value other than "true", the upgrade utility still moves the "incremental" flag and the "additional_columns" section to the target section, but that config is later marked as non-upgradable. This may result in the config being partially upgraded if the previous phases of the upgrade have succeeded.
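A sketch of the framework_generated cleanup, assuming additional_columns entries that carry per-column properties; the column names and the value key are hypothetical, and the real config shape may differ.

```yaml
# Before upgrade
target:
  additional_columns:
    w_insert_ts:
      framework_generated: "true"   # removed by the upgrade utility; value stays framework generated
    load_region:
      value: "EMEA"                 # explicitly specified value is used as-is

# After upgrade
target:
  additional_columns:
    w_insert_ts: {}                 # no value specified, so the value is always framework generated
    load_region:
      value: "EMEA"
# Any framework_generated value other than "true" is left untouched and the config
# is marked non-upgradable (and may end up only partially upgraded).
```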
Specifying a value for a framework column is now supported only for certain framework columns, depending on the operation and incremental settings of the config. New config validation rules have been introduced for this, so a config that uses invalid framework column settings will be considered invalid. Refer to the operation v2 sheet in processing_module_2.1.0.xlsx for more details.
Support for the w_created_busines_ts framework column has been removed.
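Existing configs that still reference this column would need to drop it; a hypothetical example of an entry that is no longer valid:

```yaml
target:
  additional_columns:
    w_created_busines_ts: {}   # no longer supported; remove this entry from existing configs
```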
The processing template engine previously treated the specified value of an additional column as a column from the source. This behavior has been changed to be consistent with the processing spark engine: all values specified in additional columns are now treated as literal (constant) values.
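A sketch of the behavior change, assuming an additional column whose value happens to match a source column name; both names are hypothetical.

```yaml
target:
  additional_columns:
    region_code: src_region
# Pre-2.1.0 template engine: region_code was populated from the source column src_region.
# From 2.1.0 onward, both engines treat the value as a literal, so every row gets the
# constant string "src_region" in region_code.
```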
The usage and behavior of some framework columns in the processing module has been updated. Refer to the clarifications sheet in processing_module_2.1.0.xlsx for details.
When using the spark engine, statistics of a processing job run (such as inserted, updated, and soft deleted counts) are stored in the job_info record. Values in these columns might not be correct.
When using the template engine, if string values are provided with single quotes in existing configs, the quotes should be removed manually.
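A sketch of the manual cleanup this implies, assuming the single quotes are embedded in the value itself (for example, to mark it as a string for the old template engine behavior); the column name is hypothetical.

```yaml
# Existing config: the value carries single quotes
target:
  additional_columns:
    load_region: "'EMEA'"   # the quotes would now become part of the literal value

# Manually edited config
target:
  additional_columns:
    load_region: EMEA       # quotes removed; treated as the literal string EMEA
```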