These features and Azure Databricks platform improvements were released in December 2025.
Note
Releases are staged. Your Azure Databricks account might not be updated until a week or more after the initial release date.
Delta Sharing to external Iceberg clients is in Public Preview
December 15, 2025
You can now share tables, foreign tables, materialized views, and streaming tables to external Iceberg clients such as Snowflake, Trino, Flink, and Spark. External Iceberg clients can query shared Delta tables with zero-copy access. For details, see Enable sharing to external Iceberg clients and Iceberg clients: Read shared Delta tables.
Disable legacy features settings are now GA
December 11, 2025
To help migrate accounts and workspaces to Unity Catalog, two admin settings that disable legacy features are now generally available:
- Disable legacy features: Account-level setting that disables access to DBFS, Hive Metastore, and No-isolation shared compute in new workspaces.
- Disable access to Hive metastore: Workspace-level setting that disables access to the Hive metastore used by your workspace.
Customizable SharePoint connector (Beta)
December 10, 2025
The customizable SharePoint connector offers more flexibility than the managed SharePoint connector. It lets you ingest structured, semi-structured, and unstructured files into Delta tables with full control over schema inference, parsing options, and transformations. To get started, see Ingest files from SharePoint.
For an in-depth comparison of the SharePoint connectors, see Choose your SharePoint connector.
NetSuite connector (Public Preview)
December 10, 2025
You can now ingest data from the NetSuite2.com data source programmatically using the Azure Databricks API, the Databricks CLI, or an Azure Databricks notebook. See Configure NetSuite for ingestion into Azure Databricks.
Change owner for materialized views or streaming tables defined in Databricks SQL
December 10, 2025
You can now change the owner for materialized views or streaming tables defined in Databricks SQL through Catalog Explorer. For materialized view details, see Configure materialized views in Databricks SQL. For streaming table details, see Use streaming tables in Databricks SQL.
Discover files in Auto Loader efficiently using file events
December 10, 2025
Auto Loader with file events is now generally available. With this feature, Auto Loader discovers files with the efficiency of file notifications while retaining the setup simplicity of directory listing. This is the recommended way to use Auto Loader, and file notification mode in particular, with Unity Catalog.
To start using Auto Loader with file events, see the following:
- (Prerequisite) Enable file events for an external location
- File notification mode with and without file events enabled on external locations
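Enabling file events in Auto Loader comes down to one reader option. The sketch below is a minimal illustration, assuming the documented `cloudFiles.useManagedFileEvents` option; the source path and file format are hypothetical placeholders.

```python
# Auto Loader options for file-events-based discovery. The format
# ("json") and the option values here are illustrative assumptions.
autoloader_options = {
    "cloudFiles.format": "json",
    # Discover new files via file events on the external location
    # instead of directory listing or a self-managed queue.
    "cloudFiles.useManagedFileEvents": "true",
}

def read_with_file_events(spark, source_path):
    """Build an Auto Loader streaming DataFrame that discovers
    new files through file events."""
    reader = spark.readStream.format("cloudFiles")
    for key, value in autoloader_options.items():
        reader = reader.option(key, value)
    return reader.load(source_path)
```

The prerequisite still applies: file events must be enabled on the external location before this option takes effect.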
ForEachBatch for Lakeflow Spark Declarative Pipelines is available (Public Preview)
December 9, 2025
You can now process streams in Lakeflow Spark Declarative Pipelines as a series of micro-batches in Python, using a ForEachBatch sink. The ForEachBatch sink is available in public preview.
See Use ForEachBatch to write to arbitrary data sinks in pipelines.
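The core idea is the same as Structured Streaming's `foreachBatch`: your function receives each micro-batch as an ordinary DataFrame plus a batch ID. The sketch below shows that callback shape under classic Structured Streaming, not the exact pipelines sink API; the target path is a hypothetical placeholder.

```python
def upsert_batch(batch_df, batch_id):
    """Callback invoked once per micro-batch. Because batch_df is a
    regular DataFrame, batch-only operations (MERGE, JDBC writes,
    multi-table writes) are available here."""
    (batch_df.write
        .format("delta")
        .mode("append")
        .save("/tmp/example_sink"))  # hypothetical target path

# In classic Structured Streaming, the same callback attaches as:
#   stream.writeStream.foreachBatch(upsert_batch).start()
# The pipelines ForEachBatch sink wires an equivalent callback into
# a declarative pipeline; see the linked docs for the exact API.
```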
Databricks Runtime 18.0 and Databricks Runtime 18.0 ML are in Beta
December 9, 2025
Databricks Runtime 18.0 and Databricks Runtime 18.0 ML are now in Beta, powered by Apache Spark 4.0.0. The release includes JDK 21 as the default, new features for jobs and streaming, and library upgrades.
See Databricks Runtime 18.0 (Beta) and Databricks Runtime 18.0 for Machine Learning (Beta).
Databricks Runtime maintenance updates (12/09)
December 9, 2025
New maintenance updates are available for supported Databricks Runtime versions. These updates include bug fixes, security patches, and performance improvements. For details, see:
- Databricks Runtime 17.3 LTS
- Databricks Runtime 17.2
- Databricks Runtime 17.1
- Databricks Runtime 17.0
- Databricks Runtime 16.4 LTS
- Databricks Runtime 15.4 LTS
- Databricks Runtime 14.3 LTS
- Databricks Runtime 13.3 LTS
- Databricks Runtime 12.2 LTS
New columns in Lakeflow system tables (Public Preview)
December 9, 2025
New columns are now available in the Lakeflow system tables to provide enhanced job monitoring and troubleshooting capabilities:
- jobs table: trigger, trigger_type, run_as_user_name, creator_user_name, paused, timeout_seconds, health_rules, deployment, create_time
- job_tasks table: timeout_seconds, health_rules
- job_run_timeline table: source_task_run_id, root_task_run_id, compute, termination_type, setup_duration_seconds, queue_duration_seconds, run_duration_seconds, cleanup_duration_seconds, execution_duration_seconds
- job_task_run_timeline table: compute, termination_type, task_parameters, setup_duration_seconds, cleanup_duration_seconds, execution_duration_seconds
- pipelines table: create_time
These columns are not populated for rows emitted before early December 2025. See Jobs system table reference.
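The new duration columns make it straightforward to find slow or queued runs directly from SQL. A minimal sketch, assuming the standard system catalog table name and its existing period_start_time column; run the query with spark.sql on a Databricks cluster.

```python
# Find the 20 longest-running job runs in the last week, using the
# new duration and termination columns from the release note.
QUERY = """
SELECT job_id,
       run_id,
       queue_duration_seconds,
       execution_duration_seconds,
       termination_type
FROM system.lakeflow.job_run_timeline
WHERE period_start_time >= current_date() - INTERVAL 7 DAYS
ORDER BY execution_duration_seconds DESC
LIMIT 20
"""
# On a Databricks cluster:
# results = spark.sql(QUERY)
```

Remember that these columns are empty for rows emitted before early December 2025, so filter accordingly when aggregating over longer windows.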
New token expiration policy for open Delta Sharing
December 8, 2025
All new Delta Sharing open sharing recipient tokens are issued with a maximum expiration of one year from the date of creation. Tokens with an expiration period longer than one year or no expiration date can no longer be created.
Existing open sharing recipient tokens issued before December 8, 2025, with expiration dates after December 8, 2026, or with no expiration date, automatically expire on December 8, 2026. If you currently use recipient tokens with long or unlimited lifetimes, review your integrations and renew tokens as needed to avoid breaking changes after this date.
See Create a recipient object for non-Databricks users using bearer tokens (open sharing).
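If you need to renew a long-lived token before the cutover, rotation can be scripted. A hedged sketch using the Databricks SDK for Python (databricks-sdk): rotate_token issues a fresh token and expires the existing one after a grace period you choose; the recipient name and grace period below are hypothetical.

```python
from datetime import timedelta

# Let the old token keep working for 7 days while consumers switch over.
GRACE_PERIOD = timedelta(days=7)

def rotate_recipient_token(recipient_name: str):
    """Issue a new bearer token for an open-sharing recipient and
    schedule the existing token to expire after GRACE_PERIOD."""
    from databricks.sdk import WorkspaceClient  # requires databricks-sdk
    w = WorkspaceClient()
    return w.recipients.rotate_token(
        name=recipient_name,
        existing_token_expire_in_seconds=int(GRACE_PERIOD.total_seconds()),
    )
```

Newly issued tokens are subject to the one-year maximum expiration described above.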
Vector Search reranker is now generally available
December 8, 2025
The Vector Search reranker is now generally available. Reranking re-scores and reorders initial retrieval results, which can improve retrieval quality. For more information, see Use the reranker in a query.
Built-in Excel file format support (Beta)
December 2, 2025
Databricks now provides built-in support for reading Excel files. You can query Excel files directly using Spark DataFrames without external libraries. See Read Excel files.
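Reading an Excel file then looks like any other DataFrame source. A minimal sketch, assuming "excel" is the built-in format name (check Read Excel files for the exact name and supported options); the path is a placeholder.

```python
def read_excel(spark, path):
    """Load an Excel file into a DataFrame using the built-in reader.
    The "excel" format name is an assumption based on the release note."""
    return spark.read.format("excel").load(path)

# On a Databricks cluster with Beta support enabled:
# df = read_excel(spark, "/Volumes/main/default/files/report.xlsx")
```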