| Name | Description | Introduced in Version |
|---|---|---|
| IBM DB2 - Bulk table Database CDC<br>Self and Cloud Managed<br>Source | The initial release of the IBM Db2 CDC Bulk Micro-Integration enables seamless data flow from IBM Db2 databases to Solace event brokers. It captures database changes (inserts, updates, and deletes) from multiple tracked tables simultaneously and publishes them as events to Solace, allowing downstream applications to react to changes in operational systems in real time. For more details, see the full documentation.<br><br>This Micro-Integration is designed for event-driven integration rather than full database replication, making it ideal for use cases such as analytics, notifications, and real-time data processing. | 1.6.0 |
| Reference Number | Description | Introduced in Version |
|---|---|---|
| DATAGO-132479 | Upgrade connector base image to Eclipse Temurin 17.0.18_8-jre-alpine-3.23 (sha256:88c0002860cda56384d5ed3b2da4d0d9a2b44dc2ee4dc02344be985bd8b524bc). | 1.7.1 |
| DATAGO-133191 | Upgrade connector to Spring Boot 3.5.14. | 1.7.1 |
| DATAGO-128439 | Add support for Prometheus monitoring tooling for observability. | 1.6.0 |
| DATAGO-130790 | Upgrade connector to Spring Cloud 2025.0.2. | 1.6.0 |
| DATAGO-130215 | Upgrade connector to JCSMP 10.28.3. | 1.6.0 |
| DATAGO-124644 | Upgrade connector base image to Eclipse Temurin 17.0.18_8-jre-alpine (sha256:7aa804a1824d18d06c68598fe1c2953b5b203823731be7b9298bb3e0f1920b0d). | 1.6.0 |
| DATAGO-128563 | Upgrade Spring Boot Admin to 3.5.8. | 1.6.0 |
| DATAGO-130061 | Upgrade connector to Spring Boot 3.5.13. | 1.6.0 |
| Resolved in Version | Severity (CVSS v3 Score) | Vulnerability ID | Solace Reference Number | Affected Products | Description |
|---|---|---|---|---|---|
| 1.7.1 | CVSS v3: 9.1 (CRITICAL) | | DATAGO-134038 | JAR | A possible security vulnerability has been identified in Apache Kafka. By default, the broker property `sasl.oauthbearer.jwt.validator.class` is set to `org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator`, which accepts any JWT token without validating its signature, issuer, or audience. An attacker can generate a JWT token from any issuer with `preferred_username` set to any user, and the broker will accept it. Kafka users running v4.1.0 or v4.1.1 are advised to explicitly set `sasl.oauthbearer.jwt.validator.class` to `org.apache.kafka.common.security.oauthbearer.BrokerJwtValidator` to avoid this vulnerability. In Kafka v4.1.2, v4.2.0, and later, the issue is fixed and JWT tokens are correctly validated. A sketch of the recommended configuration appears after this table. |
| 1.6.0 | CVSS v3: 8.7 (HIGH) | | DATAGO-126998 | JAR | **Summary:** The non-blocking (async) JSON parser in `jackson-core` bypasses the `maxNumberLength` constraint (default: 1000 characters) defined in `StreamReadConstraints`. This allows an attacker to send JSON with arbitrarily long numbers through the async parser API, leading to excessive memory allocation and potential CPU exhaustion, resulting in a Denial of Service (DoS). The standard synchronous parser correctly enforces this limit, but the async parser fails to do so, creating an inconsistent enforcement policy.<br><br>**Details:** The root cause is that the async parsing path in `NonBlockingUtf8JsonParserBase` (and related classes) does not call the methods responsible for number length validation. The number parsing methods (e.g., `_finishNumberIntegralPart`) accumulate digits into the `TextBuffer` without any length checks. After parsing, they call `_valueComplete()`, which finalizes the token but does *not* call `resetInt()` or `resetFloat()`. The `resetInt()`/`resetFloat()` methods in `ParserBase` are where the `validateIntegerLength()` and `validateFPLength()` checks are performed. Because this validation step is skipped, the `maxNumberLength` constraint is never enforced in the async code path.<br><br>**PoC:** A JUnit 5 test can demonstrate the vulnerability by showing that the async parser accepts a 5,000-digit number, whereas the limit should be 1,000; a reconstructed sketch appears after this table. |
| 1.6.0 | CVSS v3: 8.2 (HIGH) | | DATAGO-109708 | JAR | A flaw was found in GnuTLS. A double-free vulnerability exists in GnuTLS due to incorrect ownership handling in the export logic of Subject Alternative Name (SAN) entries containing an otherName. If the type-id OID is invalid or malformed, GnuTLS will call `asn1_delete_structure()` on an ASN.1 node it does not own, leading to a double-free condition when the parent function or caller later attempts to free the same structure. This vulnerability can be triggered using only public GnuTLS APIs and may result in denial of service or memory corruption, depending on allocator behavior. |
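For the DATAGO-134038 entry, the advisory's recommended mitigation is a single broker configuration change. The following is a minimal sketch of that change as a `server.properties` entry, using only the property name and validator class named in the advisory text; note that it applies to Kafka 4.1.0/4.1.1 brokers themselves, not to the Micro-Integration's own configuration.

```properties
# Mitigation for Kafka 4.1.0 / 4.1.1 (per the advisory above): explicitly
# select the validator that checks signature, issuer, and audience instead
# of the default DefaultJwtValidator, which accepts unverified JWTs.
sasl.oauthbearer.jwt.validator.class=org.apache.kafka.common.security.oauthbearer.BrokerJwtValidator
```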
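The DATAGO-126998 entry references a JUnit 5 proof-of-concept that was truncated in the advisory text. Below is a hedged reconstruction of what such a test could look like, assuming jackson-core 2.15+ (where `StreamReadConstraints` was introduced) and JUnit 5; the class and test names are illustrative and not taken from the original advisory.

```java
// Hypothetical reconstruction of the truncated PoC referenced above.
// Assumes jackson-core 2.15+ (StreamReadConstraints) and JUnit 5;
// class and test names are illustrative, not from the advisory.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.core.async.ByteArrayFeeder;
import com.fasterxml.jackson.core.exc.StreamConstraintsException;
import java.nio.charset.StandardCharsets;
import org.junit.jupiter.api.Test;

class AsyncNumberLengthBypassTest {

    // A JSON document that is a single 5,000-digit integer -- five times
    // the default maxNumberLength of 1,000 in StreamReadConstraints.
    private static final byte[] HUGE_NUMBER =
            "1".repeat(5_000).getBytes(StandardCharsets.UTF_8);

    @Test
    void syncParserEnforcesNumberLength() {
        JsonFactory factory = new JsonFactory();
        // The blocking parser validates number length when the token is
        // completed/decoded and rejects the oversized value.
        assertThrows(StreamConstraintsException.class, () -> {
            try (JsonParser p = factory.createParser(HUGE_NUMBER)) {
                p.nextToken();
                p.getNumberValue(); // forces decoding if not already validated
            }
        });
    }

    @Test
    void asyncParserAcceptsOversizedNumber() throws Exception {
        JsonFactory factory = new JsonFactory();
        try (JsonParser p = factory.createNonBlockingByteArrayParser()) {
            ByteArrayFeeder feeder =
                    (ByteArrayFeeder) p.getNonBlockingInputFeeder();
            feeder.feedInput(HUGE_NUMBER, 0, HUGE_NUMBER.length);
            feeder.endOfInput();
            // On affected versions this returns a number token instead of
            // throwing StreamConstraintsException -- the constraint bypass.
            assertEquals(JsonToken.VALUE_NUMBER_INT, p.nextToken());
        }
    }
}
```

On a patched jackson-core, the second test is expected to fail, because the async parser then throws `StreamConstraintsException` just as the synchronous parser does.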