In the last post, we looked at the ‘Direct’ DCR that simplifies API-based data ingestion. Today, we’re looking at the AgentDirectToStore Data Collection Rule type, which gives you more options for where to send your data.
The ‘AgentDirectToStore’ DCR uses the Azure Monitor Agent to gather Azure VM data and route it directly to Azure Storage accounts (Blob/Table) or Event Hubs - bypassing Log Analytics workspaces - giving you greater flexibility in handling your monitoring data.
While this DCR comes with very strict limitations right now, it remains an excellent choice for simpler setups and smaller organizations. If you are seeking an all-in-one solution for Azure VMs that avoids the overhead of managing complex data pipelines or integrating third-party tools, this DCR can streamline your workflow.
Read another episode of this four-part series:
- Direct: In the previous post, I focused on the Direct DCR type, which streamlines log collection configuration for API-based sources.
- AgentDirectToStore: In this latest post, I demonstrate how to use this DCR to natively send events from your Azure VMs to a Storage Account or Event Hub.
- Upcoming
- Upcoming
AgentDirectToStore DCR for Azure VMs
The ‘AgentDirectToStore’ Data Collection Rule (DCR) is a special type of DCR that routes data collected by the Azure Monitor Agent directly to Azure Storage - either to a Blob container or to Table storage - or to Azure Event Hubs, rather than sending it to a Log Analytics workspace.
This approach is especially useful when you want to archive telemetry for long-term retention, store less-critical monitoring data in cost-effective storage, or connect VM-hosted logs with external systems for real-time processing and analytics. By supporting these alternative destinations, AgentDirectToStore DCRs provide added flexibility for organizations looking to simplify their monitoring workflows and integrate with a wider range of data pipelines.
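For reference, here is a rough sketch of what the three direct destination types look like in the destinations section of a DCR definition. This is based on my reading of the current DCR API reference, so treat the property names as assumptions to verify; the resource IDs and destination names are placeholders.

```json
"destinations": {
    "eventHubsDirect": [
        {
            // placeholder name and resource ID - replace with your own
            "name": "myEventHubDestination",
            "eventHubResourceId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<hub>"
        }
    ],
    "storageBlobsDirect": [
        {
            "name": "myBlobDestination",
            "storageAccountResourceId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>",
            "containerName": "vm-logs"
        }
    ],
    "storageTablesDirect": [
        {
            "name": "myTableDestination",
            "storageAccountResourceId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>",
            "tableName": "vmlogs"
        }
    ]
}
```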
AgentDirectToStore DCRs on the GUI
Constraints
While it is a truly unique DCR type, several significant limitations make the ‘AgentDirectToStore’ DCR a poor fit for most scenarios. For full details, refer to the official Microsoft documentation.
Here are the most important constraints:
- Azure VM Only: This DCR works exclusively with Azure VMs; Microsoft has stated there are no plans to support Arc-enabled machines.
- Event Log Structure: While you can collect Windows Event logs, the SecurityEvent and WindowsEvent (WEF) structures are not available. All logs arrive in the native Event log format.
Unsupported Windows Security Event schema
- Syslog Format: Syslog is supported, but not in the CEF (CommonSecurityLog) format. All Syslog-based sources are ingested as raw Syslog, which can limit downstream processing and compatibility.
- No GUI support: While not an issue for most, it is good to know that, as of now, you cannot deploy this DCR from the GUI; you have to use an ARM template instead.
- No transformKQL: Data transformation via KQL queries in this DCR type is not supported.
No support for transformKql
- Different format: The logs will have a slightly different format from what you are used to in Sentinel, which could require some queries to be rewritten.
Syslog log format in Event Hub - see the Message field instead of SyslogMessage
Scenarios
Despite its limitations, there are situations where this DCR can provide value and its constraints are not a problem:
- Azure-Hosted Log Collectors: Many organizations already run collector VMs (such as Syslog collectors) in Azure. These machines often handle large log volumes, and since this DCR is already available, it can be utilized for this dataset right away. AgentDirectToStore DCRs can forward all these noisy logs to a Storage Account or Event Hub (and subsequently to Azure Data Explorer) without complex or custom setups.
- Simpler Pipelines: Currently, most companies use Logstash or similar tools to send data to ADX. While these tools offer flexibility, they require additional management. The AgentDirectToStore DCR offers a simpler - though more limited - alternative for direct forwarding. For smaller companies, reducing the number of tools to manage (no need for Logstash or Fluent Bit) while gaining cloud-based administration is a major advantage of this DCR type.
- Compliance and Data Lakes: When logs are retained for compliance or data lake purposes, the precise format is less important. The lack of pre-parsing and format limitations are more tolerable in these use cases.
Note: The log format will differ from what you’d see in standard Log Analytics tables. Some enrichment data may be missing, and field names could vary (for example, Syslog messages may lack fields like SyslogMessage or TimeGenerated). So, do not consider this a one-to-one copy of data in Sentinel.
Cloud-managed Azure Log collectors
While this DCR has some limitations and is mainly suited for testing or simple deployments, it can offer real value as a cheap, cloud-managed storage alternative, especially for small companies looking to move complexity from on-premises infrastructure into the cloud. It is an ideal fit for an already Azure-based collector setup: high-volume, low-security-value network data - such as logs from routers, switches, and access points - can be redirected away from costly Sentinel storage and routed to Event Hubs and then to ADX for long-term storage and non-detection queries, while Sentinel can still access the extra context when needed.
If you are already comfortable with Logstash or require support for on-premises machines, there are currently few advantages for you, as shown below:
Comparison of the two designs
Adopting this DCR does not exactly simplify things compared to Logstash; instead, it shifts the complexity to Azure by introducing the need for a new DCR and an Event Hub. However, for many situations (e.g., for MSSPs without direct access to a collector VM), handling this complexity in the cloud solves a range of challenges. Plus, DCRs and Event Hubs are fundamental Azure components most cloud engineers are already comfortable with.
Deployment steps
If you still want to use this DCR type, or you just want to test it, you can do so by following these steps:
- Create Destination Storage: Set up the Event Hub namespace and instance, or the Storage Account, that will be used as the destination. The destination must be valid and must exist before the next step; otherwise, the AgentDirectToStore DCR deployment will fail.
Deployment step 1
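If you prefer infrastructure-as-code for this step as well, a minimal ARM template for an Event Hub namespace and instance could look like the sketch below. The names, SKU, and retention settings are just example values I picked; adjust them to your environment.

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "namespaceName": { "type": "string" },
        "eventHubName": { "type": "string", "defaultValue": "vm-logs" }
    },
    "resources": [
        {
            // Event Hub namespace - Standard SKU is an example choice
            "type": "Microsoft.EventHub/namespaces",
            "apiVersion": "2021-11-01",
            "name": "[parameters('namespaceName')]",
            "location": "[resourceGroup().location]",
            "sku": { "name": "Standard", "tier": "Standard", "capacity": 1 }
        },
        {
            // Event Hub instance inside the namespace
            "type": "Microsoft.EventHub/namespaces/eventhubs",
            "apiVersion": "2021-11-01",
            "name": "[format('{0}/{1}', parameters('namespaceName'), parameters('eventHubName'))]",
            "dependsOn": [
                "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaceName'))]"
            ],
            "properties": { "partitionCount": 2, "messageRetentionInDays": 1 }
        }
    ]
}
```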
- Create the DCR: Deploy the AgentDirectToStore DCR. You can use the sample code in my GitLab repo. At this time, the GUI cannot be used to create this DCR, so your best option is to use the ARM template.
Deployment step 2
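If you would rather not pull from my repo, the skeleton below shows the rough shape of an AgentDirectToStore DCR that sends Syslog to an Event Hub. Treat it as a sketch based on the publicly documented DCR schema: verify the kind value, API version, and property names against the current reference, and note that the eventHubResourceId parameter is just a placeholder for the Event Hub instance created in step 1.

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "dcrName": { "type": "string", "defaultValue": "dcr-agentdirecttostore-syslog" },
        "eventHubResourceId": { "type": "string" }
    },
    "resources": [
        {
            "type": "Microsoft.Insights/dataCollectionRules",
            "apiVersion": "2022-06-01",
            "name": "[parameters('dcrName')]",
            "location": "[resourceGroup().location]",
            // the special DCR kind covered in this post
            "kind": "AgentDirectToStore",
            "properties": {
                "dataSources": {
                    "syslog": [
                        {
                            // collect every facility and level as an example
                            "name": "syslogDataSource",
                            "streams": [ "Microsoft-Syslog" ],
                            "facilityNames": [ "*" ],
                            "logLevels": [ "*" ]
                        }
                    ]
                },
                "destinations": {
                    "eventHubsDirect": [
                        {
                            "name": "ehDestination",
                            "eventHubResourceId": "[parameters('eventHubResourceId')]"
                        }
                    ]
                },
                "dataFlows": [
                    {
                        // no transformKql here - not supported by this DCR type
                        "streams": [ "Microsoft-Syslog" ],
                        "destinations": [ "ehDestination" ]
                    }
                ]
            }
        }
    ]
}
```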
- Associate the VM: Connect the VM to the DCR using a Data Collection Rule Association (DCRA). You can use any method to do so; for testing, you can use my simple ARM template.
Deployment step 3
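For reference, a bare-bones DCRA template could look like the sketch below. The association name and parameters are placeholders, and the scope assumes the VM lives in the resource group you deploy to.

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vmName": { "type": "string" },
        "dataCollectionRuleId": { "type": "string" }
    },
    "resources": [
        {
            // DCRA is an extension resource scoped to the target VM
            "type": "Microsoft.Insights/dataCollectionRuleAssociations",
            "apiVersion": "2022-06-01",
            "name": "agentdirecttostore-dcra",
            "scope": "[format('Microsoft.Compute/virtualMachines/{0}', parameters('vmName'))]",
            "properties": {
                "dataCollectionRuleId": "[parameters('dataCollectionRuleId')]"
            }
        }
    ]
}
```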
- Assign Permissions: Grant the VM’s (user- or system-assigned) managed identity the necessary role, depending on your destination. You can do this in the GUI by going to the IAM page of your resource (Event Hub or Storage Account) and assigning one of the following roles to the identity:
- Storage Table Data Contributor (for table storage)
- Storage Blob Data Contributor (for blob storage)
- Azure Event Hubs Data Sender (for Event Hubs)
Deployment step 4
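If you would rather assign the role as code instead of clicking through IAM, a hedged sketch for the Event Hub case is below. The role definition GUID is the built-in ID I have for ‘Azure Event Hubs Data Sender’, so verify it (and swap in the Storage roles as needed) before deploying; the vmPrincipalId parameter is the object ID of the VM’s managed identity, and the role is assigned at namespace level here, though you can scope it to the individual hub instead.

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "namespaceName": { "type": "string" },
        "vmPrincipalId": { "type": "string" }
    },
    "resources": [
        {
            "type": "Microsoft.Authorization/roleAssignments",
            "apiVersion": "2022-04-01",
            // role assignment names must be GUIDs; derive one deterministically
            "name": "[guid(resourceGroup().id, parameters('vmPrincipalId'), 'EventHubsDataSender')]",
            "scope": "[format('Microsoft.EventHub/namespaces/{0}', parameters('namespaceName'))]",
            "properties": {
                "principalId": "[parameters('vmPrincipalId')]",
                "principalType": "ServicePrincipal",
                // built-in role ID I have on record for 'Azure Event Hubs Data Sender' - verify before use
                "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2b629674-e913-4c01-ae53-ef4638d8f975')]"
            }
        }
    ]
}
```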
Alternative Destination Capabilities
As mentioned earlier, Microsoft’s AgentDirectToStore DCR currently has several notable limitations, which often restrict its practical application.
While the DCR API documentation mentions additional destinations, these are not yet supported by general DCRs like those for Linux or Windows. Some destinations can be used in particular cases, such as metric collection with the PlatformMetrics DCR type, while others appear to be accessible only through private preview features.
Storage destinations in the API documentation.
So, Azure’s log collection pipeline already offers a variety of destination options, even if they are not yet available for general log collection DCRs.
While Microsoft is not expected to extend AgentDirectToStore DCR support to on-premises machines, hopefully support for these destinations will be broadened to other log collection DCRs in the future. An expansion like this would greatly improve the flexibility and integration options of Azure’s DCR-based logging pipeline.
Microsoft’s general log collection pipeline, with Event Hub, ADX, or Storage Account integration capabilities, could easily push third-party telemetry collectors and processors aside for many projects. I’m genuinely excited to see what Microsoft introduces next in this area.
Continuing the series
If you are interested in learning about some other lesser-known aspects of DCRs, check out another article in this four-part series:
- Direct DCRs: In the previous post, I focused on the Direct DCR type, which streamlines log collection configuration for API-based sources.
- AgentDirectToStore DCRs: In this latest post, I demonstrate how to use this DCR to natively send events from your Azure VMs to a Storage Account or Event Hub.
- Coming soon
- Coming soon