Azure Data Services for .NET Developers
Introduction
Azure offers a comprehensive suite of data services covering relational databases, NoSQL, caching, messaging, and analytics. This guide walks through the essential Azure data services for .NET developers, with practical implementation examples.
Table of Contents
- Azure SQL Database
- Azure Cosmos DB
- Azure Storage Services
- Azure Cache for Redis
- Messaging Services
- Azure Data Factory
- Azure Synapse Analytics
- Choosing the Right Service
- Connection String Management
- Performance Tuning
- Interview Questions
Azure SQL Database
Azure SQL Database is a fully managed relational database service based on the SQL Server engine.
Service Tiers
DTU-Based (Database Transaction Units)
| Tier | DTUs | Max Size | Use Case |
|---|---|---|---|
| Basic | 5 | 2 GB | Light workloads, dev/test |
| Standard | 10-3000 | 1 TB | Most production workloads |
| Premium | 125-4000 | 4 TB | High-performance, low-latency |
vCore-Based (Recommended)
| Tier | vCores | Max Size | Features |
|---|---|---|---|
| General Purpose | 2-80 | 4 TB | Balanced compute/storage |
| Business Critical | 2-128 | 4 TB | Low latency, built-in HA |
| Hyperscale | 2-128 | 100 TB | Very large databases, fast scaling |
DTU vs vCore Comparison
| Feature | DTU | vCore |
|---|---|---|
| Pricing model | Bundled (CPU + Memory + IO) | Separate compute/storage |
| Azure Hybrid Benefit | No | Yes (use existing licenses) |
| Reserved capacity | No | Yes (up to 65% savings) |
| Max storage | 4 TB | 100 TB (Hyperscale) |
| Best for | Simple, predictable workloads | Advanced features, cost optimization |
Entity Framework Core with Azure SQL
// DbContext configuration
public class ApplicationDbContext : DbContext
{
public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
: base(options) { }
public DbSet<Order> Orders { get; set; }
public DbSet<Customer> Customers { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// Optimized for Azure SQL
modelBuilder.Entity<Order>()
.HasIndex(o => o.CustomerId);
modelBuilder.Entity<Order>()
.HasIndex(o => o.OrderDate);
}
}
// Program.cs - Configuration for Azure SQL
builder.Services.AddDbContext<ApplicationDbContext>(options =>
{
options.UseSqlServer(
builder.Configuration.GetConnectionString("Database"),
sqlOptions =>
{
// Enable retry for transient failures
sqlOptions.EnableRetryOnFailure(
maxRetryCount: 5,
maxRetryDelay: TimeSpan.FromSeconds(30),
errorNumbersToAdd: null);
// Command timeout
sqlOptions.CommandTimeout(30);
});
// Enable sensitive data logging in development
if (builder.Environment.IsDevelopment())
{
options.EnableSensitiveDataLogging();
options.EnableDetailedErrors();
}
});
Managed Identity Authentication
// Connection string with managed identity (no password!)
// Server=tcp:myserver.database.windows.net;Database=mydb;Authentication=Active Directory Managed Identity;
// Or programmatically
public class DatabaseConnectionFactory
{
public async Task<SqlConnection> CreateConnectionAsync()
{
var credential = new DefaultAzureCredential();
var token = await credential.GetTokenAsync(
new TokenRequestContext(new[] { "https://database.windows.net/.default" }));
var connection = new SqlConnection(
"Server=tcp:myserver.database.windows.net;Database=mydb;");
connection.AccessToken = token.Token;
return connection;
}
}
// Entity Framework Core with Azure AD authentication
builder.Services.AddDbContext<ApplicationDbContext>(options =>
{
options.UseSqlServer(
builder.Configuration.GetConnectionString("Database"),
sqlOptions =>
{
sqlOptions.EnableRetryOnFailure();
})
.AddInterceptors(new AzureAdAuthenticationDbConnectionInterceptor());
});
// Custom interceptor for Azure AD token
public class AzureAdAuthenticationDbConnectionInterceptor : DbConnectionInterceptor
{
public override async ValueTask<InterceptionResult> ConnectionOpeningAsync(
DbConnection connection,
ConnectionEventData eventData,
InterceptionResult result,
CancellationToken cancellationToken = default)
{
if (connection is SqlConnection sqlConnection)
{
// Cache and reuse the credential in production rather than creating one per connection open
var credential = new DefaultAzureCredential();
var token = await credential.GetTokenAsync(
new TokenRequestContext(new[] { "https://database.windows.net/.default" }),
cancellationToken);
sqlConnection.AccessToken = token.Token;
}
return result;
}
}
Elastic Pools
Elastic pools share resources among multiple databases for cost efficiency.
// All databases in the pool share the same resources
// Good for: Multi-tenant apps, SaaS applications
// Connection per tenant
public class TenantDbContextFactory
{
private readonly string _elasticPoolServer = "myserver.database.windows.net";
public ApplicationDbContext CreateContextForTenant(string tenantId)
{
var connectionString =
$"Server=tcp:{_elasticPoolServer};Database=tenant_{tenantId};" +
"Authentication=Active Directory Managed Identity;";
var options = new DbContextOptionsBuilder<ApplicationDbContext>()
.UseSqlServer(connectionString)
.Options;
return new ApplicationDbContext(options);
}
}
Azure Cosmos DB
Azure Cosmos DB is a globally distributed, multi-model NoSQL database service.
Data Models
| API | Data Model | Use Case |
|---|---|---|
| NoSQL | Document (JSON) | Most common, flexible schema |
| MongoDB | Document (BSON) | MongoDB compatibility |
| Cassandra | Wide-column | High-throughput writes |
| Table | Key-value | Simple key-value storage |
| Gremlin | Graph | Relationship-heavy data |
Consistency Levels
| Level | Guarantee | Latency | Use Case |
|---|---|---|---|
| Strong | Linearizable reads | Highest | Financial transactions |
| Bounded Staleness | Reads lag by at most K versions or T interval | High | Leaderboards, inventory |
| Session | Read-your-writes within session | Medium | Most applications (default) |
| Consistent Prefix | No out-of-order reads | Low | Social feeds, activity logs |
| Eventual | No ordering guarantee | Lowest | Analytics, IoT telemetry |
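The account default can be relaxed per request when a read can tolerate staleness. A minimal sketch, assuming the Microsoft.Azure.Cosmos SDK; container, id, and customerId mirror the repository example shown later, and a request can only be weaker than the account default, never stronger:
// Per-request consistency override (weaker than the account default only)
var response = await container.ReadItemAsync<Order>(
    id,
    new PartitionKey(customerId),
    new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });
var iterator = container.GetItemQueryIterator<Order>(
    new QueryDefinition("SELECT * FROM c WHERE c.customerId = @customerId")
        .WithParameter("@customerId", customerId),
    requestOptions: new QueryRequestOptions
    {
        PartitionKey = new PartitionKey(customerId),
        ConsistencyLevel = ConsistencyLevel.Eventual // e.g., analytics-style reads
    });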
Partitioning Strategy
// Partition key is CRITICAL for performance
// Good partition keys have:
// - High cardinality (many distinct values)
// - Even data distribution
// - Frequently used in queries
// ✅ Good partition keys
public class Order
{
[JsonProperty("id")]
public string Id { get; set; }
[JsonProperty("customerId")] // Good: High cardinality, even distribution
public string CustomerId { get; set; }
[JsonProperty("orderDate")]
public DateTime OrderDate { get; set; }
}
// ❌ Bad partition keys
// - Country (low cardinality - hot partitions)
// - Status (only a few values - uneven distribution)
// - OrderDate (monotonically increasing - hot partition)
// Physical partition limits:
// - 50 GB storage per physical partition
// - 10,000 RU/s per physical partition
Cosmos DB SDK for .NET
// Program.cs - Configuration
builder.Services.AddSingleton<CosmosClient>(sp =>
{
var configuration = sp.GetRequiredService<IConfiguration>();
return new CosmosClient(
configuration["CosmosDb:Endpoint"],
new DefaultAzureCredential(), // Managed identity
new CosmosClientOptions
{
ApplicationName = "MyApp",
ConnectionMode = ConnectionMode.Direct, // Better performance
ConsistencyLevel = ConsistencyLevel.Session,
SerializerOptions = new CosmosSerializationOptions
{
PropertyNamingPolicy = CosmosPropertyNamingPolicy.CamelCase
}
});
});
// Repository pattern
public class OrderRepository : IOrderRepository
{
private readonly Container _container;
public OrderRepository(CosmosClient cosmosClient)
{
_container = cosmosClient.GetContainer("OrdersDb", "Orders");
}
// Point read (most efficient - O(1))
public async Task<Order?> GetByIdAsync(string id, string customerId)
{
try
{
var response = await _container.ReadItemAsync<Order>(
id,
new PartitionKey(customerId));
return response.Resource;
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.NotFound)
{
return null;
}
}
// Query within partition (efficient)
public async Task<IEnumerable<Order>> GetByCustomerAsync(string customerId)
{
var query = new QueryDefinition(
"SELECT * FROM c WHERE c.customerId = @customerId ORDER BY c.orderDate DESC")
.WithParameter("@customerId", customerId);
var iterator = _container.GetItemQueryIterator<Order>(
query,
requestOptions: new QueryRequestOptions
{
PartitionKey = new PartitionKey(customerId),
MaxItemCount = 100
});
var results = new List<Order>();
while (iterator.HasMoreResults)
{
var response = await iterator.ReadNextAsync();
results.AddRange(response);
}
return results;
}
// Cross-partition query (expensive - use sparingly)
public async Task<IEnumerable<Order>> SearchOrdersAsync(
DateTime startDate,
DateTime endDate)
{
var query = new QueryDefinition(
"SELECT * FROM c WHERE c.orderDate >= @start AND c.orderDate <= @end")
.WithParameter("@start", startDate)
.WithParameter("@end", endDate);
var iterator = _container.GetItemQueryIterator<Order>(
query,
requestOptions: new QueryRequestOptions
{
MaxConcurrency = -1 // Max parallelism for cross-partition
});
var results = new List<Order>();
while (iterator.HasMoreResults)
{
var response = await iterator.ReadNextAsync();
results.AddRange(response);
}
return results;
}
// Upsert (create or update)
public async Task UpsertAsync(Order order)
{
await _container.UpsertItemAsync(
order,
new PartitionKey(order.CustomerId));
}
// Transactional batch (same partition only)
public async Task CreateOrderWithItemsAsync(Order order, List<OrderItem> items)
{
var batch = _container.CreateTransactionalBatch(
new PartitionKey(order.CustomerId));
batch.CreateItem(order);
foreach (var item in items)
{
batch.CreateItem(item);
}
var response = await batch.ExecuteAsync();
if (!response.IsSuccessStatusCode)
{
throw new InvalidOperationException(
$"Batch failed with status {response.StatusCode}");
}
}
}
Change Feed Processing
// React to changes in Cosmos DB
public class OrderChangeFeedProcessor : BackgroundService
{
private readonly CosmosClient _cosmosClient;
private readonly IOrderEventHandler _eventHandler;
private ChangeFeedProcessor? _processor;
public OrderChangeFeedProcessor(CosmosClient cosmosClient, IOrderEventHandler eventHandler)
{
_cosmosClient = cosmosClient;
_eventHandler = eventHandler;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
var leaseContainer = _cosmosClient.GetContainer("OrdersDb", "leases");
var monitoredContainer = _cosmosClient.GetContainer("OrdersDb", "Orders");
_processor = monitoredContainer
.GetChangeFeedProcessorBuilder<Order>("OrderProcessor", HandleChangesAsync)
.WithInstanceName("Instance1")
.WithLeaseContainer(leaseContainer)
.WithStartTime(DateTime.UtcNow)
.Build();
await _processor.StartAsync();
await Task.Delay(Timeout.Infinite, stoppingToken);
}
private async Task HandleChangesAsync(
ChangeFeedProcessorContext context,
IReadOnlyCollection<Order> changes,
CancellationToken cancellationToken)
{
foreach (var order in changes)
{
await _eventHandler.HandleOrderChangedAsync(order);
}
}
public override async Task StopAsync(CancellationToken cancellationToken)
{
if (_processor != null)
await _processor.StopAsync();
await base.StopAsync(cancellationToken);
}
}
Azure Storage Services
Blob Storage
// Program.cs
builder.Services.AddSingleton<BlobServiceClient>(sp =>
{
return new BlobServiceClient(
new Uri(builder.Configuration["Storage:BlobEndpoint"]),
new DefaultAzureCredential());
});
// Blob storage service
public class BlobStorageService : IFileStorageService
{
private readonly BlobContainerClient _containerClient;
public BlobStorageService(BlobServiceClient blobServiceClient)
{
_containerClient = blobServiceClient.GetBlobContainerClient("documents");
}
public async Task<string> UploadAsync(Stream content, string fileName, string contentType)
{
var blobClient = _containerClient.GetBlobClient(fileName);
await blobClient.UploadAsync(content, new BlobUploadOptions
{
HttpHeaders = new BlobHttpHeaders
{
ContentType = contentType
},
AccessTier = AccessTier.Hot
});
return blobClient.Uri.ToString();
}
public async Task<Stream> DownloadAsync(string fileName)
{
var blobClient = _containerClient.GetBlobClient(fileName);
var response = await blobClient.DownloadStreamingAsync();
return response.Value.Content;
}
// Generate SAS URL for temporary access
// Note: GenerateSasUri requires a shared key credential; with Azure AD auth (DefaultAzureCredential),
// build a user delegation SAS via BlobServiceClient.GetUserDelegationKeyAsync instead
public string GenerateSasUrl(string fileName, TimeSpan expiry, BlobSasPermissions permissions)
{
var blobClient = _containerClient.GetBlobClient(fileName);
var sasUri = blobClient.GenerateSasUri(
permissions,
DateTimeOffset.UtcNow.Add(expiry));
return sasUri.ToString();
}
// List blobs with pagination
public async IAsyncEnumerable<BlobItem> ListBlobsAsync(
string prefix = "",
[EnumeratorCancellation] CancellationToken cancellationToken = default)
{
await foreach (var blob in _containerClient.GetBlobsAsync(
prefix: prefix,
cancellationToken: cancellationToken))
{
yield return blob;
}
}
}
Storage Tiers
| Tier | Access Frequency | Minimum Storage Duration | Use Case |
|---|---|---|---|
| Hot | Frequent | None | Active data, frequently accessed |
| Cool | Infrequent | 30 days | Backups, short-term archival |
| Cold | Rare | 90 days | Compliance data, infrequent access |
| Archive | Very rare | 180 days | Long-term retention, compliance |
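Tiers can also be set per blob from code. A small sketch, assuming the Azure.Storage.Blobs SDK and the _containerClient field from the BlobStorageService above; the method names are illustrative:
// Move an existing blob to a cheaper tier
public async Task SetTierAsync(string fileName, AccessTier tier)
{
    var blobClient = _containerClient.GetBlobClient(fileName);
    await blobClient.SetAccessTierAsync(tier);
}

// Rehydrate an archived blob (asynchronous; can take hours)
public async Task RehydrateAsync(string fileName)
{
    var blobClient = _containerClient.GetBlobClient(fileName);
    await blobClient.SetAccessTierAsync(
        AccessTier.Hot,
        rehydratePriority: RehydratePriority.Standard);
}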
Queue Storage
public class QueueStorageService
{
private readonly QueueClient _queueClient;
public QueueStorageService(QueueServiceClient queueServiceClient)
{
_queueClient = queueServiceClient.GetQueueClient("orders");
}
public async Task SendMessageAsync<T>(T message)
{
var json = JsonSerializer.Serialize(message);
var base64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(json));
await _queueClient.SendMessageAsync(base64);
}
public async Task SendMessageWithDelayAsync<T>(T message, TimeSpan delay)
{
var json = JsonSerializer.Serialize(message);
var base64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(json));
await _queueClient.SendMessageAsync(
base64,
visibilityTimeout: delay,
timeToLive: TimeSpan.FromDays(7));
}
}
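The receive side is not shown above. Here is a minimal sketch of a method you might add to QueueStorageService, assuming the same base64-encoded JSON payloads; ReceiveMessagesAsync and DeleteMessageAsync are the Azure.Storage.Queues QueueClient API, while the wrapper name and defaults are illustrative:
public async Task<IReadOnlyList<T>> ReceiveAsync<T>(int maxMessages = 10)
{
    var results = new List<T>();
    // Received messages stay invisible for the visibility timeout; delete them after successful processing
    var response = await _queueClient.ReceiveMessagesAsync(
        maxMessages,
        visibilityTimeout: TimeSpan.FromMinutes(1));
    foreach (var message in response.Value)
    {
        var json = Encoding.UTF8.GetString(Convert.FromBase64String(message.MessageText));
        var item = JsonSerializer.Deserialize<T>(json);
        if (item != null)
        {
            results.Add(item);
            await _queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        }
    }
    return results;
}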
Table Storage
public class TableStorageService
{
private readonly TableClient _tableClient;
public TableStorageService(TableServiceClient tableServiceClient)
{
_tableClient = tableServiceClient.GetTableClient("users");
}
public async Task UpsertAsync(UserEntity entity)
{
await _tableClient.UpsertEntityAsync(entity);
}
public async Task<UserEntity?> GetAsync(string partitionKey, string rowKey)
{
try
{
var response = await _tableClient.GetEntityAsync<UserEntity>(
partitionKey, rowKey);
return response.Value;
}
catch (RequestFailedException ex) when (ex.Status == 404)
{
return null;
}
}
public async IAsyncEnumerable<UserEntity> QueryAsync(string filter)
{
await foreach (var entity in _tableClient.QueryAsync<UserEntity>(filter))
{
yield return entity;
}
}
}
public class UserEntity : ITableEntity
{
public string PartitionKey { get; set; } = string.Empty; // e.g., "tenant_123"
public string RowKey { get; set; } = string.Empty; // e.g., user ID
public DateTimeOffset? Timestamp { get; set; }
public ETag ETag { get; set; }
public string Email { get; set; } = string.Empty;
public string Name { get; set; } = string.Empty;
}
Azure Cache for Redis
Configuration
// Program.cs
builder.Services.AddStackExchangeRedisCache(options =>
{
options.Configuration = builder.Configuration.GetConnectionString("Redis");
options.InstanceName = "MyApp_";
});
// Or with connection multiplexer for advanced scenarios
builder.Services.AddSingleton<IConnectionMultiplexer>(sp =>
{
var configuration = ConfigurationOptions.Parse(
builder.Configuration.GetConnectionString("Redis"));
configuration.AbortOnConnectFail = false;
configuration.ConnectRetry = 3;
configuration.ConnectTimeout = 5000;
return ConnectionMultiplexer.Connect(configuration);
});
Caching Patterns
public class CacheService : ICacheService
{
private readonly IDistributedCache _cache;
private readonly IConnectionMultiplexer _redis;
public CacheService(IDistributedCache cache, IConnectionMultiplexer redis)
{
_cache = cache;
_redis = redis;
}
// Cache-aside pattern
public async Task<T?> GetOrSetAsync<T>(
string key,
Func<Task<T>> factory,
TimeSpan? expiry = null) where T : class
{
var cached = await _cache.GetStringAsync(key);
if (cached != null)
{
return JsonSerializer.Deserialize<T>(cached);
}
var value = await factory();
if (value != null)
{
await _cache.SetStringAsync(key, JsonSerializer.Serialize(value),
new DistributedCacheEntryOptions
{
AbsoluteExpirationRelativeToNow = expiry ?? TimeSpan.FromMinutes(5)
});
}
return value;
}
// Direct Redis operations for advanced scenarios
public async Task<long> IncrementAsync(string key)
{
var db = _redis.GetDatabase();
return await db.StringIncrementAsync(key);
}
public async Task PublishAsync<T>(string channel, T message)
{
var subscriber = _redis.GetSubscriber();
await subscriber.PublishAsync(channel, JsonSerializer.Serialize(message));
}
// Distributed locking
public async Task<bool> AcquireLockAsync(string key, TimeSpan expiry)
{
var db = _redis.GetDatabase();
return await db.StringSetAsync(
$"lock:{key}",
Environment.MachineName,
expiry,
When.NotExists);
}
// Note: a production-grade lock should verify ownership (compare the stored value, typically via a Lua script) before deleting
public async Task ReleaseLockAsync(string key)
{
var db = _redis.GetDatabase();
await db.KeyDeleteAsync($"lock:{key}");
}
}
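The PublishAsync method above needs a subscriber on the receiving side. A sketch of a counterpart that could sit alongside it in CacheService, using the injected _redis multiplexer; the channel name and Order payload are illustrative:
public void SubscribeToOrderEvents()
{
    var subscriber = _redis.GetSubscriber();
    // Handler runs for every message published to the channel
    subscriber.Subscribe(
        RedisChannel.Literal("order-events"), // RedisChannel.Literal needs a recent StackExchange.Redis; older versions accept a plain string
        (channel, message) =>
        {
            var order = JsonSerializer.Deserialize<Order>(message.ToString());
            // React to the event here
        });
}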
Redis Tiers
| Tier | Features | Use Case |
|---|---|---|
| Basic | Single node, no SLA | Dev/test |
| Standard | Replicated (primary/secondary), 99.9% SLA | Production |
| Premium | Clustering, persistence, VNet, geo-replication | Enterprise |
| Enterprise | Redis Modules, active geo-replication | High-performance |
Messaging Services
Service Bus vs Event Hub vs Event Grid
| Feature | Service Bus | Event Hub | Event Grid |
|---|---|---|---|
| Pattern | Enterprise messaging | Event streaming | Event routing |
| Throughput | Moderate | Very high (millions/sec) | High |
| Message size | 256 KB (Standard) / 100 MB (Premium) | 1 MB | 1 MB |
| Ordering | FIFO (sessions) | Per partition | No guarantee |
| Dead-lettering | Yes | No | Yes |
| Use case | Business transactions | Telemetry, IoT | Event-driven architecture |
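Event Hub is compared above but does not get its own walkthrough below, so here is a brief producer sketch for contrast with Service Bus. It assumes the Azure.Messaging.EventHubs package; the namespace, hub name ("telemetry"), and class name are illustrative:
using Azure.Identity;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

public class TelemetryPublisher : IAsyncDisposable
{
    private readonly EventHubProducerClient _producer;

    public TelemetryPublisher(string fullyQualifiedNamespace)
    {
        // e.g., "mynamespace.servicebus.windows.net"
        _producer = new EventHubProducerClient(
            fullyQualifiedNamespace, "telemetry", new DefaultAzureCredential());
    }

    public async Task PublishAsync<T>(IEnumerable<T> events)
    {
        // Batch events to reduce round trips; each event body is JSON
        using var batch = await _producer.CreateBatchAsync();
        foreach (var item in events)
        {
            if (!batch.TryAdd(new EventData(BinaryData.FromObjectAsJson(item))))
                throw new InvalidOperationException("Event too large for the batch.");
        }
        await _producer.SendAsync(batch);
    }

    public ValueTask DisposeAsync() => _producer.DisposeAsync();
}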
Service Bus Implementation
// Publisher
public class ServiceBusPublisher
{
private readonly ServiceBusSender _sender;
public ServiceBusPublisher(ServiceBusClient client)
{
_sender = client.CreateSender("orders");
}
public async Task PublishAsync<T>(T message, string? sessionId = null)
{
var busMessage = new ServiceBusMessage(
BinaryData.FromObjectAsJson(message))
{
ContentType = "application/json",
MessageId = Guid.NewGuid().ToString()
};
if (!string.IsNullOrEmpty(sessionId))
{
busMessage.SessionId = sessionId; // For ordered processing
}
await _sender.SendMessageAsync(busMessage);
}
// Batch sending for efficiency
public async Task PublishBatchAsync<T>(IEnumerable<T> messages)
{
var batch = await _sender.CreateMessageBatchAsync();
foreach (var message in messages)
{
var busMessage = new ServiceBusMessage(
BinaryData.FromObjectAsJson(message));
if (!batch.TryAddMessage(busMessage))
{
// Batch is full: send it, start a new batch, and re-add the message
await _sender.SendMessagesAsync(batch);
batch.Dispose();
batch = await _sender.CreateMessageBatchAsync();
if (!batch.TryAddMessage(busMessage))
{
throw new InvalidOperationException("Message is too large for an empty batch.");
}
}
}
if (batch.Count > 0)
{
await _sender.SendMessagesAsync(batch);
}
batch.Dispose();
}
}
// Consumer
public class ServiceBusConsumer : BackgroundService
{
private readonly ServiceBusProcessor _processor;
private readonly IOrderService _orderService;
public ServiceBusConsumer(ServiceBusClient client, IOrderService orderService)
{
_orderService = orderService;
_processor = client.CreateProcessor("orders", new ServiceBusProcessorOptions
{
MaxConcurrentCalls = 10,
AutoCompleteMessages = false,
PrefetchCount = 20
});
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
_processor.ProcessMessageAsync += ProcessMessageAsync;
_processor.ProcessErrorAsync += ProcessErrorAsync;
await _processor.StartProcessingAsync(stoppingToken);
}
private async Task ProcessMessageAsync(ProcessMessageEventArgs args)
{
try
{
var order = args.Message.Body.ToObjectFromJson<Order>();
await _orderService.ProcessAsync(order);
await args.CompleteMessageAsync(args.Message);
}
catch (Exception ex)
{
// Dead-letter after max retries
if (args.Message.DeliveryCount > 5)
{
await args.DeadLetterMessageAsync(args.Message, ex.Message);
}
else
{
await args.AbandonMessageAsync(args.Message);
}
}
}
private Task ProcessErrorAsync(ProcessErrorEventArgs args)
{
// Log error
return Task.CompletedTask;
}
}
Event Grid
// Publishing events
public class EventGridPublisher
{
private readonly EventGridPublisherClient _client;
public EventGridPublisher(string topicEndpoint, string accessKey)
{
_client = new EventGridPublisherClient(
new Uri(topicEndpoint),
new AzureKeyCredential(accessKey));
}
public async Task PublishEventAsync<T>(string eventType, T data)
{
var cloudEvent = new CloudEvent(
source: "/myapp/orders",
type: eventType,
data: data);
await _client.SendEventAsync(cloudEvent);
}
}
// Handling in Azure Function
[Function("HandleOrderCreated")]
public async Task HandleOrderCreated(
[EventGridTrigger] CloudEvent cloudEvent)
{
var order = cloudEvent.Data.ToObjectFromJson<Order>();
await ProcessNewOrderAsync(order);
}
Azure Data Factory
Data Factory is used for ETL (Extract, Transform, Load) pipelines.
// Triggering a pipeline from .NET
// Note: this sketch assumes the Microsoft.Azure.Management.DataFactory SDK
// (DataFactoryManagementClient); the newer Azure.ResourceManager.DataFactory
// package exposes the same operations through ArmClient and DataFactoryResource.
public class DataFactoryService
{
private readonly DataFactoryManagementClient _client;
private readonly string _resourceGroup;
public DataFactoryService(DataFactoryManagementClient client, string resourceGroup)
{
_client = client; // configured with SubscriptionId and credentials
_resourceGroup = resourceGroup;
}
public async Task<string> TriggerPipelineAsync(
string factoryName,
string pipelineName,
IDictionary<string, object>? parameters = null)
{
var response = await _client.Pipelines.CreateRunAsync(
_resourceGroup,
factoryName,
pipelineName,
parameters: parameters);
return response.RunId;
}
public async Task<PipelineRun> GetPipelineRunStatusAsync(
string factoryName,
string runId)
{
return await _client.PipelineRuns.GetAsync(
_resourceGroup,
factoryName,
runId);
}
}
Choosing the Right Service
Decision Matrix
What type of data?
├── Relational (SQL, joins, ACID)
│   ├── < 4 TB, single region → Azure SQL Database
│   ├── > 4 TB or massive scale → Azure SQL Hyperscale
│   └── MySQL/PostgreSQL → Azure Database for MySQL/PostgreSQL
├── Document/NoSQL
│   ├── Global distribution needed → Azure Cosmos DB
│   ├── MongoDB compatibility → Cosmos DB for MongoDB
│   └── Simple key-value → Azure Table Storage
├── Caching
│   └── Azure Cache for Redis
├── Files/Blobs
│   └── Azure Blob Storage
├── Messaging
│   ├── Enterprise transactions → Service Bus
│   ├── High-throughput streaming → Event Hub
│   └── Event routing → Event Grid
└── Analytics
    └── Azure Synapse Analytics
Cost Comparison (Approximate)
| Service | Minimum | Typical Production |
|---|---|---|
| Azure SQL (Gen Purpose) | ~$15/month | ~$300-1000/month |
| Cosmos DB | ~$25/month (400 RU/s) | ~$100-500/month |
| Redis Basic | ~$16/month | ~$100-500/month |
| Service Bus Basic | ~$10/month | ~$50-200/month |
| Blob Storage | ~$20/TB/month | Variable |
Connection String Management
Key Vault Integration
// Store connection strings in Key Vault
builder.Configuration.AddAzureKeyVault(
new Uri(builder.Configuration["KeyVaultEndpoint"]),
new DefaultAzureCredential());
// Access as regular configuration
var sqlConnection = builder.Configuration.GetConnectionString("Database");
var cosmosEndpoint = builder.Configuration["CosmosDb:Endpoint"];
App Service Configuration
# Reference Key Vault secrets in App Settings
az webapp config appsettings set \
--name myapp \
--resource-group myrg \
--settings \
"ConnectionStrings__Database=@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/SqlConnection/)" \
"CosmosDb__Endpoint=@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/CosmosEndpoint/)"
Performance Tuning
Azure SQL Optimization
-- Query Store (enabled by default on Azure SQL Database) powers performance insights
ALTER DATABASE [MyDb] SET QUERY_STORE = ON;
-- Add missing indexes (from DMVs)
SELECT
'CREATE INDEX IX_' + t.name + '_' + c.column_name
+ ' ON ' + s.name + '.' + t.name + ' (' + c.column_name + ')'
FROM sys.dm_db_missing_index_details AS d
INNER JOIN sys.tables AS t ON d.object_id = t.object_id
INNER JOIN sys.schemas AS s ON t.schema_id = s.schema_id
CROSS APPLY sys.dm_db_missing_index_columns(d.index_handle) AS c
WHERE d.database_id = DB_ID();
Cosmos DB Optimization
// Use point reads instead of queries when possible
// Point read: O(1), ~1 RU
var item = await container.ReadItemAsync<Order>(id, new PartitionKey(customerId));
// Query: O(n), multiple RUs
var query = container.GetItemQueryIterator<Order>(
"SELECT * FROM c WHERE c.id = @id");
// Use projection to reduce RU cost
var projectionQuery = new QueryDefinition(
"SELECT c.id, c.status FROM c WHERE c.customerId = @customerId");
// Optimize indexing policy
{
"indexingMode": "consistent",
"includedPaths": [
{ "path": "/customerId/*" },
{ "path": "/status/*" }
],
"excludedPaths": [
{ "path": "/description/*" },
{ "path": "/*" } // Exclude all others
]
}
Interview Questions
1. When would you choose Cosmos DB over Azure SQL?
Answer: Choose Cosmos DB when you need:
- Global distribution with multi-region writes
- Flexible schema (document, key-value, graph)
- Massive horizontal scale (millions of ops/sec)
- Single-digit millisecond latency guarantees
- Multi-model support (NoSQL, MongoDB, Cassandra, Gremlin)
Choose Azure SQL when you need:
- Complex relational queries (joins, aggregations)
- ACID transactions across multiple tables
- Existing SQL Server compatibility
- Mature tooling (SSMS, Entity Framework)
- Stored procedures and complex T-SQL
2. Explain Cosmos DB consistency levels and when to use each.
Answer:
- Strong: Financial transactions - guaranteed latest data
- Bounded Staleness: Inventory systems - acceptable lag within bounds
- Session (default): Most apps - read-your-writes within session
- Consistent Prefix: Activity feeds - no out-of-order reads
- Eventual: Analytics, IoT - maximum throughput, may be stale
Trade-off: Stronger consistency = higher latency and cost.
3. How do you choose a partition key for Cosmos DB?
Answer: Good partition key should have:
- High cardinality (many distinct values)
- Even distribution (no hot partitions)
- Common in queries (avoid cross-partition queries)
- Not monotonically increasing (like timestamps)
Example: For an e-commerce order system, customerId is good (high cardinality, even distribution, queries are per customer).
Bad examples: country (low cardinality), status (few values), orderDate (hot partition for recent dates).
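The partition key is locked in when the container is created, so it helps to see where it is declared. A minimal sketch, assuming the Microsoft.Azure.Cosmos SDK and a cosmosClient instance for the OrdersDb database used earlier:
// The partition key path is declared at container creation and cannot be changed later
var database = cosmosClient.GetDatabase("OrdersDb");
await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "Orders", partitionKeyPath: "/customerId"),
    throughput: 400); // manual throughput; autoscale is also an option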
4. What's the difference between Service Bus, Event Hub, and Event Grid?
Answer:
| | Service Bus | Event Hub | Event Grid |
|---|---|---|---|
| Purpose | Reliable messaging | High-throughput streaming | Event routing |
| Pattern | Queue/Pub-Sub | Event streaming | Push-based events |
| Ordering | FIFO (sessions) | Per partition | None |
| Use case | Order processing | IoT telemetry | Azure resource events |
| Dead-letter | Yes | No | Yes |
5. How do you handle connection resiliency for Azure SQL?
Answer:
// Entity Framework Core built-in retry
options.UseSqlServer(connectionString, sqlOptions =>
{
sqlOptions.EnableRetryOnFailure(
maxRetryCount: 5,
maxRetryDelay: TimeSpan.FromSeconds(30),
errorNumbersToAdd: null);
});
// Polly for more control (PollyRetryInterceptor is a custom interceptor you implement yourself)
services.AddDbContext<MyContext>(options =>
options.UseSqlServer(connectionString)
.AddInterceptors(new PollyRetryInterceptor()));
Key points:
- Azure SQL has transient errors (network, throttling)
- EF Core has built-in retry logic
- Use managed identity to avoid credential issues
- Monitor with Azure Monitor and Query Performance Insights
Key Takeaways
- Use managed identity - No connection strings with passwords
- Choose the right consistency - Cosmos DB default (Session) works for most apps
- Partition key is critical - Plan carefully for Cosmos DB
- Enable retry policies - Azure services have transient failures
- Use appropriate tiers - Don't over-provision, use elastic pools for multi-tenant
- Monitor performance - Query Store for SQL, Request Charge for Cosmos
- Cache strategically - Redis for frequently accessed data