---
layout: post
title: Spark Release 3.5.7
categories: []
tags: []
status: publish
type: post
published: true
meta:
  _edit_last: '4'
  _wpas_done_all: '1'
---

Spark 3.5.7 is the seventh maintenance release containing security and correctness fixes. This release is based on the branch-3.5 maintenance branch of Spark. We strongly recommend that all 3.5 users upgrade to this stable release.
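
Picking up the release is typically just a version bump in your build. As a minimal sketch, assuming an sbt project that depends on the usual Spark modules (adjust the module list to match your project):

```scala
// build.sbt (sketch): point the Spark artifacts at 3.5.7
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.5.7",
  "org.apache.spark" %% "spark-sql"  % "3.5.7"
)
```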

### Notable changes

- [[SPARK-52721]](https://issues.apache.org/jira/browse/SPARK-52721): Wrong message parameter for CANNOT_PARSE_DATATYPE
- [[SPARK-52749]](https://issues.apache.org/jira/browse/SPARK-52749): Replace preview1 with dev1 in its PyPI package name
- [[SPARK-53518]](https://issues.apache.org/jira/browse/SPARK-53518): catalogString of User Defined Type should not be truncated
- [[SPARK-46941]](https://issues.apache.org/jira/browse/SPARK-46941): Can't insert window group limit node for top-k computation if contains SizeBasedWindowFunction
- [[SPARK-49872]](https://issues.apache.org/jira/browse/SPARK-49872): Spark History UI -- StreamConstraintsException: String length (20054016) exceeds the maximum length (20000000)
- [[SPARK-52023]](https://issues.apache.org/jira/browse/SPARK-52023): udaf returning Option can cause data corruption and crashes
- [[SPARK-52032]](https://issues.apache.org/jira/browse/SPARK-52032): ORC filter pushdown causes incorrect results with eqNullSafe (<=>) in DataFrame filter (see the sketch after this list)
- [[SPARK-52240]](https://issues.apache.org/jira/browse/SPARK-52240): VectorizedDeltaLengthByteArrayReader throws ParquetDecodingException: Failed to read X bytes
- [[SPARK-52339]](https://issues.apache.org/jira/browse/SPARK-52339): Relations may appear equal even though they are different
- [[SPARK-52516]](https://issues.apache.org/jira/browse/SPARK-52516): Memory Leak with coalesce foreachpartitions and v2 datasources
- [[SPARK-52611]](https://issues.apache.org/jira/browse/SPARK-52611): Fix SQLConf version for excludeSubqueryRefsFromRemoveRedundantAliases configuration
- [[SPARK-52684]](https://issues.apache.org/jira/browse/SPARK-52684): Make CACHE TABLE Commands atomic while encountering execution errors
- [[SPARK-52737]](https://issues.apache.org/jira/browse/SPARK-52737): SHS performance regression when visiting homepage
- [[SPARK-52776]](https://issues.apache.org/jira/browse/SPARK-52776): ProcfsMetricsGetter splits the comm field if it contains space characters
- [[SPARK-52791]](https://issues.apache.org/jira/browse/SPARK-52791): Inferring a UDT errors when first element is null
- [[SPARK-52809]](https://issues.apache.org/jira/browse/SPARK-52809): Don't hold reader and iterator references for all partitions in task completion listeners for metric update
- [[SPARK-52873]](https://issues.apache.org/jira/browse/SPARK-52873): Hint causes semi join results to vary
- [[SPARK-53054]](https://issues.apache.org/jira/browse/SPARK-53054): The Scala Spark Connect DataFrameReader does not use the correct default format
- [[SPARK-53094]](https://issues.apache.org/jira/browse/SPARK-53094): Cube-related data quality problem
- [[SPARK-53155]](https://issues.apache.org/jira/browse/SPARK-53155): Global lower aggregation should not be removed
- [[SPARK-53435]](https://issues.apache.org/jira/browse/SPARK-53435): race condition in CachedRDDBuilder
- [[SPARK-53560]](https://issues.apache.org/jira/browse/SPARK-53560): Crash looping when retrying uncommitted batch in Kafka source and AvailableNow trigger
- [[SPARK-53581]](https://issues.apache.org/jira/browse/SPARK-53581): Potential thread-safety issue for mapTaskIds.add() in IndexShuffleBlockResolver

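For context on the eqNullSafe fix ([SPARK-52032](https://issues.apache.org/jira/browse/SPARK-52032)), the affected operation is a null-safe equality filter pushed down to the ORC reader. A minimal sketch, assuming a hypothetical ORC path and column name:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("eqNullSafe-example").getOrCreate()

// Hypothetical ORC path and column name, for illustration only.
// With ORC filter pushdown enabled (the default), a null-safe equality
// filter like this could previously return incorrect results.
val df = spark.read.orc("/tmp/example.orc")
df.filter(col("category") <=> "toys").show()
```
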
### Dependency changes

While this is a maintenance release, we did still upgrade some dependencies. They are:
- [[SPARK-52635]](https://issues.apache.org/jira/browse/SPARK-52635): Upgrade ORC to 1.9.7
- [[SPARK-53532]](https://issues.apache.org/jira/browse/SPARK-53532): Upgrade Jetty to 9.4.58.v20250814

You can consult JIRA for the [detailed changes](https://s.apache.org/spark-3.5.7).

We would like to acknowledge all community members for contributing patches to this release.