Patch Level Update

This guide explains how to perform a patch level update. The patch level is the version number “after the second dot”. Example: update from 7.14.2 to 7.14.3.

Enterprise Feature

Please note that Patch Level Updates are only provided to enterprise customers; they are not available in the community edition.

Check the Camunda enterprise homepage for more information or to get your free trial version.

Database Patches

Between patch levels, the structure of the database schema is not changed. The database structure of all patch releases is backward compatible with the corresponding minor version. Our database schema update guide provides details on the update procedure as well as available database patches.

Special Considerations

This section describes noteworthy changes between individual patch levels.

7.3.2 to 7.3.3

By default, it is no longer possible to pass arbitrary expressions as parameters of task queries.

Reason: Passing EL expressions in a task query enables execution of arbitrary code when the query is evaluated.

The process engine no longer evaluates these expressions by default and throws an exception instead. The previous behavior can be re-enabled by setting the process engine configuration property enableExpressionsInAdhocQueries to true.
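
For example, using the same property syntax as the configuration example further below in this guide, re-enabling the previous behavior could look like this:

<property name="enableExpressionsInAdhocQueries">true</property>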

See the user guide on security considerations for custom code for details.

7.6.10 to 7.6.11 / 7.7.5 to 7.7.6 / 7.8.0 to 7.8.1

Java serialization format

You can now configure whether to forbid the use of the Java serialization format when passing object variables in their Java-serialized representation.

The new configuration parameter javaSerializationFormatEnabled defaults to true but can be set to false in the Camunda engine configuration.

The following use cases are affected:

  • REST API:
PUT /process-instance/{id}/variables/{varName}

{
  "value" : "ab",
  "type" : "Object",
  "valueInfo" : {
    "objectTypeName": "com.example.MyObject",
    "serializationDataFormat": "application/x-java-serialized-object"
  }
}
  • Java API:
runtimeService.setVariable(processInstanceId, "varName",
        Variables
          .serializedObjectValue("ab")
          .serializationDataFormat("application/x-java-serialized-object")
          .objectTypeName("com.example.MyObject")
          .create());

You can disable the use of the Java serialization format with the following configuration parameter:

<property name="javaSerializationFormatEnabled">false</property>
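
If you configure an embedded engine programmatically instead, the same switch can be set in Java. This is only a sketch: the setter name is assumed from the javaSerializationFormatEnabled property via standard bean naming and should be verified against your engine version.

ProcessEngineConfigurationImpl configuration = (ProcessEngineConfigurationImpl)
    ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration();
// setter name assumed from the javaSerializationFormatEnabled property (bean naming)
configuration.setJavaSerializationFormatEnabled(false);
ProcessEngine processEngine = configuration.buildProcessEngine();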

Groovy version

The pre-built Camunda distributions of versions 7.6.10, 7.7.5, and 7.8.0 ship with the Groovy library in version 2.4.5, whereas newer versions come with Groovy 2.4.13. Please update the library groovy-all-$GROOVY_VERSION.jar in the lib folder of your application server.

7.8.1 to 7.8.2

Restrict heatmap/statistics by time period

In the historic process definition diagram, it is possible to select the time period for which activity instance badges are displayed.

By default, the displayed time period is set to ‘today’, but it can be extended to show badges for ‘this week’, ‘this month’ or the ‘complete’ history.

This feature can be configured in two ways:

  1. The default time period can be changed to ‘this week’, ‘this month’ or ‘complete’.
  2. The manual selection of the time period within Cockpit can be disabled.

These attributes can be modified in the Cockpit configuration file.

7.8.6 to 7.8.7

History cleanup can be parallelized

As of 7.8.7, history cleanup can be parallelized, which leads to the creation of several jobs in the database. For this reason:

  • A call to HistoryService#cleanupHistoryAsync is not guaranteed to return the correct Job object, and you should not rely on the returned value anymore. The same is valid for the REST call POST /history/cleanup.
  • HistoryService#findHistoryCleanupJob is deprecated (as well as GET /history/cleanup/job); use HistoryService#findHistoryCleanupJobs instead, as shown in the sketch below.
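
A minimal sketch of the adjusted usage, assuming processEngine references an already built process engine:

HistoryService historyService = processEngine.getHistoryService();
// trigger cleanup; with parallelized cleanup, the returned Job should not be relied upon
historyService.cleanupHistoryAsync(true);
// inspect all cleanup jobs via the non-deprecated method
List<Job> cleanupJobs = historyService.findHistoryCleanupJobs();
for (Job cleanupJob : cleanupJobs) {
  System.out.println("history cleanup job: " + cleanupJob.getId());
}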

7.10.16 to 7.10.17 / 7.11.10 to 7.11.11 / 7.12.3 to 7.12.4

DMN 1.3 Support in Cockpit

With this release, Cockpit adds support for DMN 1.3, the next version of the DMN standard. If you edit and deploy DMN diagrams in Cockpit that use earlier versions of DMN, they will automatically be migrated to DMN 1.3.

The Camunda engine already supports the DMN 1.3 namespace by default, so no further steps are required to migrate. Make sure you have the latest version of the Camunda Modeler installed to edit DMN 1.3 files locally.

7.12.5 to 7.12.6

Oracle JDBC Driver Removed from Camunda Docker Images

The Docker images for Camunda 7.13 no longer provide an Oracle JDBC driver out of the box. If you relied on this, apply the strategy outlined in https://github.com/camunda/docker-camunda-bpm-platform#database-environment-variables: Add the driver to the container and configure the database settings manually by linking the configuration file into the container.

7.13.6 to 7.13.7 / 7.12.11 to 7.12.12 / 7.11.18 to 7.11.19

In the patches mentioned above, telemetry functionality is introduced. For more information, please visit the telemetry page. Before you update to a Camunda Platform Runtime version >= 7.14.0-alpha1, 7.13.7+, 7.12.12+, or 7.11.19+, or activate the telemetry functionality, please make sure that you are authorized to take this step and that the installation or activation of the telemetry functionality does not conflict with any company-internal policies, compliance guidelines, or any contractual or other provisions or obligations of your company.

Camunda cannot be held responsible in the event of unauthorized installation or activation of this function.

Custom REST API

In case you are deploying a custom REST API that builds upon the one provided by Camunda, please make sure to add the following listener to the web.xml:

<web-app ...>
  ...
  <listener>
    <listener-class>org.camunda.bpm.engine.rest.impl.web.bootstrap.RestContainerBootstrap</listener-class>
  </listener>
  ...
</web-app>

This servlet context listener is used for bootstrapping the REST API and should therefore be included in your custom application setup.

7.14.0 to 7.14.1 / 7.13.6 to 7.13.7

FEEL Engine: Changed Module Structure

With the above-mentioned patch releases, the module structure related to the FEEL engine has changed. From now on, the FEEL engine is delivered as a dedicated module, feel-engine; it is no longer part of the camunda-engine-feel-scala module. The feel-engine module follows its own versioning.

The following modules depend on the newly introduced feel-engine module (a Maven sketch follows the list):

  • camunda-engine-plugin-spin
  • camunda-engine-feel-scala
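
If you reference the FEEL engine directly in your own dependency management, the Maven coordinates of the new module could look like the sketch below. The group id is an assumption (it is not stated in this guide), and the version is left open because the FEEL engine follows its own versioning:

<dependency>
  <!-- group id assumed; verify against your distribution -->
  <groupId>org.camunda.feel</groupId>
  <artifactId>feel-engine</artifactId>
  <!-- use the FEEL engine version shipped with your patch release -->
  <version>...</version>
</dependency>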

7.14.3 to 7.14.4 / 7.13.9 to 7.13.10 / 7.12.15 to 7.12.16

Update of MySQL JDBC Driver in Camunda Docker Images

With this release, the Docker images contain a new version of the MySQL JDBC Driver.

Old Version: 5.1.21
New Version: 8.0.23

Behavior Changes

The driver's new version introduces two significant behavioral changes that you should take care of when migrating your Docker-based Camunda Runtime installation.

Downgrade to 5.1.21

If you don't want to migrate to the new version, you can replace the new MySQL JDBC Driver with the old one to restore the previous behavior. To do so, create a new Dockerfile based on one of our official Docker images and add custom commands that replace the MySQL JDBC Driver.

For the Wildfly image, additionally make sure to adjust the data source class in the standalone.xml file located under /camunda/standalone/configuration/ from com.mysql.cj.jdbc.MysqlXADataSource back to com.mysql.jdbc.jdbc2.optional.MysqlXADataSource:

<!-- ... -->
<driver name="mysql" module="mysql.mysql-connector-java">
    <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
</driver>
<!-- ... -->
1) Milliseconds Precision for Date/Time values

The new version of the driver changes how date/time values are handled. Please make sure to configure the driver as described in MySQL Database Configuration to avoid breaking behavior.

2) Changed Time Zone Handling

In case the process engine and the MySQL Server operate in different time zones, and you use the MySQL JDBC Driver’s default configuration, migrating to the new release leads to a wrong conversion of date values (e.g., the due date of a task can change).

You can configure the driver to convert date values from and to the MySQL Server into the time zone in which the process engine/JVM operates. This ensures that values stored before the migration are returned correctly afterwards and that new date values are stored correctly as well. You can achieve this by specifying the correct time zone via the property serverTimezone in your JDBC connection URL.
For instance, if your process engine operates in CET but your MySQL Server does not, set the property to serverTimezone=CET.
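
For example, a JDBC connection URL carrying this property could look like the following (host, port, and database name are placeholders):

jdbc:mysql://localhost:3306/process-engine?serverTimezone=CET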

Heads-up!

Changing the time zone of the MySQL Server to the one the process engine operates in can have unwanted side effects on date values stored in columns of type TIMESTAMP: MySQL converts TIMESTAMP values from the server time zone to UTC for storage, and back from UTC to the current time zone for retrieval. Read more about it in the MySQL Docs.

7.15.3 to 7.15.4 / 7.14.9 to 7.14.10 / 7.13.15 to 7.13.16

Spin: updated dependencies

With this release, the following dependencies of Spin have been updated:

  • json-path from 2.4.0 to 2.6.0
  • accessors-smart from 1.2 to 2.4.7
  • json-smart from 2.3 to 2.4.7

Standalone Webapplication: HikariCP replaces Commons DBCP

The standalone web applications now use HikariCP for data source configuration instead of Apache Commons DBCP. Replace the org.apache.commons.dbcp.BasicDataSource class in your applicationContext.xml with a com.zaxxer.hikari.HikariDataSource. The HikariCP data source expects the JDBC URL on a property called jdbcUrl instead of url. Thus, you need to rename the url property. Your data source description should look similar to this, with DB-DRIVER-CLASS, JDBC-URL, DB-USER, and DB-PASSWORD replaced by values related to your database:

<bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
  <property name="targetDataSource">
    <bean class="com.zaxxer.hikari.HikariDataSource">
      <property name="driverClassName" value="DB-DRIVER-CLASS" />
      <property name="jdbcUrl" value="JDBC-URL" />
      <property name="username" value="DB-USER" />
      <property name="password" value="DB-PASSWORD" />
    </bean>
  </property>
</bean>

7.15.5 to 7.15.6 / 7.14.11 to 7.14.12 / 7.13.17 to 7.13.18

The patches include version 2.1.0 of the org.camunda.template-engines artifacts, in particular camunda-template-engines-freemarker, camunda-template-engines-velocity, and camunda-template-engines-xquery-saxon.

This updates the versions of the bundled template engines.

Please note that the new version of Freemarker contains changes that are not compatible with the previous version. We strongly recommend testing the execution of your templates before applying the update. Alternatively, you can replace the version 2.1.0 artifacts with the old artifacts in version 2.0.0 to continue using the old versions of Freemarker and Velocity, as shown in the Maven sketch below.
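
For example, with Maven, pinning the previous Freemarker artifact could look like the following sketch (group and artifact names taken from above; repeat accordingly for the other template engine artifacts you use):

<dependency>
  <groupId>org.camunda.template-engines</groupId>
  <artifactId>camunda-template-engines-freemarker</artifactId>
  <version>2.0.0</version>
</dependency>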

Full Distribution

This section is applicable if you installed the Full Distribution with a shared process engine. In this case you need to update the libraries and applications installed inside the application server.

Please note that the following procedure may differ for cluster scenarios. Contact our support team if you need further assistance.

  • Shut down the server
  • Exchange Camunda Platform libraries, tools and webapps (EAR, RAR, Subsystem (JBoss), Shared Libs) - essentially, follow the installation guide for your server.
  • Restart the server

Application With Embedded Process Engine

In case you use an embedded process engine inside your Java Application, you need to

  1. update the Process Engine library in your dependency management (Apache Maven, Gradle …), as shown in the example below,
  2. re-package the application,
  3. deploy the new version of the application.
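
For example, with Maven, step 1 could amount to raising the engine version to the target patch level from the introduction. The coordinates below are the commonly used ones for the process engine and should be checked against your project setup:

<dependency>
  <groupId>org.camunda.bpm</groupId>
  <artifactId>camunda-engine</artifactId>
  <version>7.14.3</version>
</dependency>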

Standalone Webapplication Distribution

In case you installed the Standalone Webapplication Distribution you need to

  1. undeploy the previous version of the webapplication,
  2. deploy the new version of the webapplication.

Applying Multiple Patches at Once

It is possible to apply multiple patches in one go (e.g., updating from 7.1.0 to 7.1.4).
