
SQL Server 2025 AI Features: What's New for Data Engineers

March 21, 2026 · 5 min read

SQL Server 2025 brings AI capabilities directly into the database engine without sacrificing the performance characteristics that make it the backbone of enterprise systems. Let's explore what's genuinely useful, how to implement it, and where the real wins are for data engineers and .NET developers.

Native Vector Search

The headline feature: store, index, and query vector embeddings directly in relational tables. No separate vector database needed. This means you can combine semantic similarity search with traditional SQL filtering, transactions, and security — all in one place.

CREATE TABLE Products (
    ProductId INT PRIMARY KEY IDENTITY,
    Name NVARCHAR(200) NOT NULL,
    Description NVARCHAR(MAX),
    Price DECIMAL(10, 2),
    Category NVARCHAR(100),
    DescriptionEmbedding VECTOR(1536),
    CreatedAt DATETIME2 DEFAULT GETUTCDATE()
);

CREATE VECTOR INDEX IX_Products_Embedding
ON Products(DescriptionEmbedding)
WITH (METRIC = 'COSINE', LISTS = 100);

Querying combines vector similarity with standard SQL filters — something that would otherwise require a complex multi-system architecture with a separate vector database:

DECLARE @searchVector VECTOR(1536) = (
    SELECT DescriptionEmbedding FROM Products WHERE ProductId = 42
);

SELECT TOP 10 ProductId, Name, Price,
    VECTOR_DISTANCE('cosine', DescriptionEmbedding, @searchVector) AS Similarity
FROM Products
WHERE Category = 'Electronics'
ORDER BY VECTOR_DISTANCE('cosine', DescriptionEmbedding, @searchVector);

Performance is solid for datasets up to ~5 million vectors. An 800K product catalog search returns results in under 50ms with the vector index — comparable to dedicated vector databases. The LISTS parameter in the index controls the number of clusters for approximate nearest neighbor search — higher values improve recall at the cost of index build time.

For teams already running SQL Server, this eliminates the operational overhead of managing a separate vector database like Pinecone or Qdrant. Your existing backup, security, and monitoring infrastructure covers vector data automatically. You can also join vector search results with relational data in a single query — something that requires application-level joins when using a separate vector store.
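That single-query join looks like this. A sketch: the Inventory table here is hypothetical, everything else reuses the schema above.

```sql
-- Vector search and a relational join in one statement
DECLARE @searchVector VECTOR(1536) = (
    SELECT DescriptionEmbedding FROM Products WHERE ProductId = 42
);

SELECT TOP 10 p.ProductId, p.Name, i.UnitsInStock,
    VECTOR_DISTANCE('cosine', p.DescriptionEmbedding, @searchVector) AS Similarity
FROM Products p
INNER JOIN Inventory i ON i.ProductId = p.ProductId  -- hypothetical stock table
WHERE i.UnitsInStock > 0
ORDER BY VECTOR_DISTANCE('cosine', p.DescriptionEmbedding, @searchVector);
```

With a separate vector store, the stock filter would have to happen as a post-filter in application code, potentially returning fewer than 10 usable results per round trip.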

In-Database AI Model Invocation

sp_invoke_external_model lets you call Azure OpenAI directly from T-SQL for one-off procedural calls; for set-based work, the inline AI_GENERATE_EMBEDDINGS function (backed by a model registered with CREATE EXTERNAL MODEL) slots straight into an UPDATE. Both are best suited for batch processing, not real-time request paths:

-- One-time setup: register the Azure OpenAI deployment as an external model
-- (endpoint path is illustrative; add a CREDENTIAL clause per your auth setup)
CREATE EXTERNAL MODEL MyEmbeddingModel
WITH (
    LOCATION = 'https://myinstance.openai.azure.com/openai/deployments/text-embedding-ada-002',
    API_FORMAT = 'Azure OpenAI',
    MODEL_TYPE = EMBEDDINGS,
    MODEL = 'text-embedding-ada-002'
);

-- Generate embeddings for new products in batches
WITH NewProducts AS (
    SELECT TOP 100 ProductId, Description
    FROM Products
    WHERE DescriptionEmbedding IS NULL
    ORDER BY CreatedAt DESC
)
UPDATE p
SET p.DescriptionEmbedding = AI_GENERATE_EMBEDDINGS(np.Description USE MODEL MyEmbeddingModel)
FROM Products p
INNER JOIN NewProducts np ON p.ProductId = np.ProductId;

Use this in scheduled batch jobs, not row-by-row triggers. For real-time embedding generation, generate embeddings in your application layer and pass them to SQL Server as parameters.
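For that application-layer path, a common pattern is to serialize the embedding as a JSON array string and cast it to VECTOR on the server. A sketch: GetEmbeddingAsync is a hypothetical wrapper around your embedding client, and the string-to-VECTOR cast assumes the JSON-array conversion the vector type supports.

```csharp
// Compute the embedding in the app layer, then hand it to SQL Server as a parameter
float[] embedding = await GetEmbeddingAsync(description); // hypothetical helper
string json = JsonSerializer.Serialize(embedding);        // e.g. "[0.01,-0.2,...]"

await using var cmd = new SqlCommand(
    "UPDATE Products SET DescriptionEmbedding = CAST(@emb AS VECTOR(1536)) " +
    "WHERE ProductId = @id", connection);
cmd.Parameters.AddWithValue("@emb", json);
cmd.Parameters.AddWithValue("@id", productId);
await cmd.ExecuteNonQueryAsync();
```

This keeps the slow external call out of the database engine while the storage and search stay in SQL Server.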

Integrating with .NET Applications

From the .NET side, you can query vector data using EF Core's raw SQL APIs, plain ADO.NET, or Dapper. Here's a semantic search endpoint in an ASP.NET Core minimal API using EF Core:

app.MapGet("/api/products/similar/{id}", async (int id, AppDbContext db) =>
{
    var results = await db.Database.SqlQueryRaw<ProductSimilarity>("""
        DECLARE @v VECTOR(1536) = (
            SELECT DescriptionEmbedding FROM Products WHERE ProductId = {0}
        );
        SELECT TOP 10 ProductId, Name, Price,
            VECTOR_DISTANCE('cosine', DescriptionEmbedding, @v) AS Similarity
        FROM Products
        WHERE ProductId != {0}
        ORDER BY VECTOR_DISTANCE('cosine', DescriptionEmbedding, @v)
        """, id).ToListAsync();

    return Results.Ok(results);
});
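If you prefer Dapper, the same batch maps cleanly. A sketch, assuming an open SqlConnection and the same ProductSimilarity record type:

```csharp
// Same semantic search via Dapper; @id binds from the anonymous parameter object
var results = (await connection.QueryAsync<ProductSimilarity>("""
    DECLARE @v VECTOR(1536) = (
        SELECT DescriptionEmbedding FROM Products WHERE ProductId = @id
    );
    SELECT TOP 10 ProductId, Name, Price,
        VECTOR_DISTANCE('cosine', DescriptionEmbedding, @v) AS Similarity
    FROM Products
    WHERE ProductId <> @id
    ORDER BY VECTOR_DISTANCE('cosine', DescriptionEmbedding, @v)
    """, new { id })).AsList();
```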

Performance Improvements

Beyond AI features, several improvements matter for daily workloads:

  • Optimized Locking — reduces lock memory and blocking under read-committed isolation; roughly 25% throughput improvement on write-heavy tables with zero code changes. This is the kind of improvement that benefits every application on upgrade.
  • Parameter-Sensitive Plan Optimization — the optimizer maintains multiple plans for the same query based on parameter distribution, taming the parameter sniffing issues that have plagued SQL Server applications for years. No more OPTION (RECOMPILE) hints on problematic stored procedures.
  • DOP Feedback — the engine learns the optimal degree of parallelism for recurring queries automatically, eliminating most manual MAXDOP hints.
  • Approximate Query Processing — APPROX_PERCENTILE_CONT and APPROX_PERCENTILE_DISC for fast statistical analysis on large datasets without the memory and sort cost of exact percentiles.
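As a quick sketch of the approximate percentile functions against the Products table from earlier:

```sql
-- Median and 95th-percentile price per category, approximated
SELECT Category,
       APPROX_PERCENTILE_CONT(0.5)  WITHIN GROUP (ORDER BY Price) AS MedianPrice,
       APPROX_PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY Price) AS P95Price
FROM Products
GROUP BY Category;
```

The trade-off is a small, bounded error in exchange for a fraction of the memory grant an exact PERCENTILE_CONT would need.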

Migration Path

Set compatibility level 170 to enable new features, but test in staging first using Query Store:

-- Step 1: Enable Query Store to capture baseline
ALTER DATABASE YourDatabase SET QUERY_STORE = ON;
ALTER DATABASE YourDatabase SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1000
);

-- Step 2: Capture baseline for at least one business cycle

-- Step 3: Upgrade compatibility level
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 170;

-- Step 4: Compare plans and identify regressions
SELECT * FROM sys.query_store_plan
WHERE last_compile_start_time > DATEADD(DAY, -1, GETUTCDATE())
ORDER BY avg_compile_duration DESC;
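If a regression does surface, Query Store can pin the last-known-good plan while you investigate. The @query_id and @plan_id values below are placeholders; look them up in the Query Store views or the SSMS regression reports:

```sql
-- Step 5 (if needed): force the known-good plan for a regressed query
EXEC sp_query_store_force_plan @query_id = 1234, @plan_id = 5678;

-- Undo the forcing once the root cause is fixed
EXEC sp_query_store_unforce_plan @query_id = 1234, @plan_id = 5678;
```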

Licensing note: Vector search and AI features are included in Enterprise and Developer editions. Standard edition has vector index size limitations.

Key Takeaways

  • Vector search is production-ready for datasets under 5M vectors — eliminates the need for a separate vector database in most enterprise scenarios.
  • In-database AI invocation works best for batch embedding generation and enrichment jobs.
  • Performance improvements in locking, parameter sensitivity, and DOP feedback deliver measurable gains without code changes.
  • Upgrade carefully — use Query Store as your safety net when changing compatibility levels.

References

  1. SQL Server 2025 documentation — Official overview of new features in SQL Server 2025
  2. Vector search in SQL Server — How to use native vector data types and indexes
  3. Intelligent Query Processing — Details on parameter-sensitive plans, DOP feedback, and adaptive joins
  4. Query Store overview — Using Query Store for performance monitoring and regression detection
  5. Azure OpenAI integration with SQL Server — Calling AI models from T-SQL with sp_invoke_external_model

Ajit Gangurde

Software Engineer II at Microsoft | 15+ years in .NET & Azure