
Microsoft.Diagnostics.EventFlow

Introduction

The EventFlow library suite allows applications to define what diagnostics data to collect and where it should be sent. Diagnostics data can be anything from performance counters to application traces. EventFlow runs in the same process as the application, so communication overhead is minimized. It also has an extensibility mechanism, so additional inputs and outputs can be created and plugged into the framework.

The EventFlow suite supports .NET applications and .NET Core applications. The core of the library, as well as the inputs and outputs listed below, are available as NuGet packages.

Topics

Getting started

Inputs

Outputs

Filters

Standard metadata types

Health reporter

Pipeline settings

Service Fabric support

Filter expressions

Secret storage

Extensibility

Troubleshooting

Platform support

Contributing to EventFlow

Getting Started

The EventFlow pipeline is built around three core concepts: inputs, outputs, and filters. The number of inputs, outputs, and filters depends on the application's diagnostics needs. The configuration also has healthReporter and settings sections for configuring settings fundamental to pipeline operation. Finally, the extensions section allows declaration of custom-developed plugins. These extension declarations act like references: on pipeline initialization, EventFlow searches the extensions to instantiate custom inputs, outputs, or filters.
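
Putting these concepts together, a minimal configuration skeleton might look like the following (a sketch only; the inputs, outputs, and filters described in later topics go inside the respective arrays, and the healthReporter, settings, and extensions sections can be omitted when the defaults suffice):

{
    "inputs": [ ],
    "filters": [ ],
    "outputs": [ ],
    "healthReporter": { "type": "CsvHealthReporter" },
    "extensions": [ ],
    "schemaVersion": "2016-08-11"
}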

  1. To quickly get started, you can create a simple console application in Visual Studio and install the following NuGet packages:

    • Microsoft.Diagnostics.EventFlow.Inputs.Trace
    • Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsights
    • Microsoft.Diagnostics.EventFlow.Outputs.StdOutput
  2. Add a JSON file named "eventFlowConfig.json" to your project and set the file's Copy to Output Directory property to "Copy if newer". Set the content of the file to the following:

{
    "inputs": [
    {
      "type": "Trace",
      "traceLevel": "Warning"
    }
  ],
  "outputs": [
    // Please update the instrumentationKey.
    {
      "type": "ApplicationInsights",
      "instrumentationKey": "00000000-0000-0000-0000-000000000000"
    }
  ],
  "schemaVersion": "2016-08-11"
}

Note: if you are using Visual Studio for Mac, you might need to edit the project file directly. Make sure the following snippet for the eventFlowConfig.json file is included in the project definition:

<ItemGroup>
 <None Include="eventFlowConfig.json">
   <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
 </None>
</ItemGroup>
  3. If you wish to send diagnostics data to Application Insights, fill in the value for the instrumentationKey. If not, you can send traces to console output instead by replacing the Application Insights output with the standard output. The outputs property in the configuration should then look like this:
    "outputs": [
    {
      "type": "StdOutput"
    }
  ],
  4. Create an EventFlow pipeline in your application code using the code below. Make sure there is at least one output defined in the configuration file. Run your application and see your traces in console output or in Application Insights.
    using (var pipeline = DiagnosticPipelineFactory.CreatePipeline("eventFlowConfig.json"))
    {
        System.Diagnostics.Trace.TraceWarning("EventFlow is working!");
        Console.WriteLine("Trace sent to Application Insights. Press any key to exit...");
        Console.ReadKey(intercept: true);
    }

It usually takes a couple of minutes for the traces to show up in the Application Insights portal.

Back to Topics

Inputs

These define what data will flow into the engine. At least one input is required. Each input type has its own set of parameters.

Trace

Nuget Package: Microsoft.Diagnostics.EventFlow.Inputs.Trace

This input listens to traces written with the System.Diagnostics.Trace API. Here is an example showing all possible settings:

{
    "type": "Trace",
    "traceLevel":  "Warning"
}
Field Values/Types Required Description
type "Trace" Yes Specifies the input type. For this input, it must be "Trace".
traceLevel Critical, Error, Warning, Information, Verbose, All No Specifies the collection trace level. Traces with equal or higher severity than specified are collected. For example, if Warning is specified, then Critical, Error, and Warning traces are collected. Default is Error.
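
For example, assuming the configuration above (traceLevel set to Warning), traces like these would be handled as follows:

using System.Diagnostics;

// Collected: Warning severity meets the configured traceLevel
Trace.TraceWarning("Disk space is running low");

// Not collected: Information is below the Warning threshold
Trace.TraceInformation("Request processed normally");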

Back to Topics

EventSource

Nuget Package: Microsoft.Diagnostics.EventFlow.Inputs.EventSource

This input listens to EventSource traces. EventSource classes can be created in the application by deriving from the System.Diagnostics.Tracing.EventSource class. Here is an example showing all possible settings:

{
    "type": "EventSource",
    "sources": [
        {
            "providerName": "MyEventSource",
            "level": "Informational",
            "keywords": "0x7F"
        }
    ]
}

Top object

Field Values/Types Required Description
type "EventSource" Yes Specifies the input type. For this input, it must be "EventSource".
sources JSON array Yes Specifies the EventSource objects to collect.

Source object (element of the sources array)

Field Values/Types Required Description
providerName EventSource name Yes(*) Specifies the name of the EventSource to track.
providerNamePrefix EventSource name prefix Yes(*) Specifies the name prefix of EventSource(s) to track. For example, if the value is "Microsoft-ServiceFabric", all EventSources that have names starting with Microsoft-ServiceFabric (Microsoft-ServiceFabric-Services, Microsoft-ServiceFabric-Actors and so on) will be tracked.
disabledProviderNamePrefix provider name Yes(*) Specifies the name prefix of the EventSource(s) that must be ignored. No events from these sources will be captured(***).
level Critical, Error, Warning, Informational, Verbose, LogAlways No(**) Specifies the collection trace level. Traces with equal or higher severity than specified are collected. For example, if Warning is specified, then Critical, Error, and Warning traces are collected. Default is LogAlways, which means "provider decides what events are raised", which usually results in all events being raised.
keywords An integer No(**) A bitmask that specifies what events to collect. Only events with keyword matching the bitmask are collected, except if it's 0, which means everything is collected. Default is 0.
eventCountersSamplingInterval An integer No Specifies the sampling interval in seconds for collecting data from EventCounters. Default is 0, which means no EventCounter data is collected.

Remarks

(*) Out of providerName, providerNamePrefix and disabledProviderNamePrefix, only one can be used for a single source. In other words, with a single source one can enable an EventSource by name, or enable a set of EventSources by prefix, or disable a set of EventSources by prefix.

(**) level and keywords can be used for enabling EventSources, but not for disabling them. Disabling events using level and keywords is not supported (but one can use level and/or keywords to selectively enable a subset of events from a given EventSource).

(***) There is an issue with .NET Framework 4.6 and 4.7, and .NET Core 1.1 and 2.0, where dynamically created EventSource events are dispatched to all listeners, regardless of whether the listeners subscribe to events from these EventSources; for more information see https://github.com/dotnet/coreclr/issues/14434. The disabledProviderNamePrefix property can be used to suppress these events.
Disabling EventSources is not recommended under normal circumstances, as it introduces a slight performance penalty. Instead, selectively enable necessary events through a combination of EventSource names, event levels, and keywords.
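
As a sketch, an EventSource matching the "MyEventSource" configuration above could be defined as follows (the event name, event ID, and keyword value are illustrative):

using System.Diagnostics.Tracing;

[EventSource(Name = "MyEventSource")]
public sealed class MyEventSource : EventSource
{
    public static readonly MyEventSource Log = new MyEventSource();

    // Collected when the input's level setting is Informational or more verbose,
    // and the event's keyword bit is covered by the configured keywords mask (0x7F)
    [Event(1, Level = EventLevel.Informational, Keywords = (EventKeywords)0x1)]
    public void RequestStarted(string requestId) { WriteEvent(1, requestId); }
}

// Usage: MyEventSource.Log.RequestStarted("request-42");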

Back to Topics

DiagnosticSource

Nuget Package: Microsoft.Diagnostics.EventFlow.Inputs.DiagnosticSource

This input listens to System.Diagnostics.DiagnosticSource sources. See the DiagnosticSource User's Guide for details of usage. Here is an example showing all possible settings:

{
    "type": "DiagnosticSource",
    "sources": [
        { "providerName": "MyDiagnosticSource" }
    ]
}

Top object

Field Values/Types Required Description
type "DiagnosticSource" Yes Specifies the input type. For this input, it must be "DiagnosticSource".
sources JSON array Yes Specifies the DiagnosticSource objects to collect.

Source object (element of the sources array)

Field Values/Types Required Description
providerName DiagnosticSource name Yes Specifies the name of the DiagnosticSource to track.
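
For example, events written to a DiagnosticListener named "MyDiagnosticSource" (matching the configuration above) would be captured by this input; the event name and payload are illustrative:

using System.Diagnostics;

DiagnosticSource diagnosticSource = new DiagnosticListener("MyDiagnosticSource");

if (diagnosticSource.IsEnabled("OrderReceived"))
{
    // Payload properties become event data properties in EventFlow
    diagnosticSource.Write("OrderReceived", new { OrderId = 42, Amount = 19.99 });
}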

Back to Topics

ActivitySource

Nuget Package: Microsoft.Diagnostics.EventFlow.Inputs.ActivitySource

This input listens to System.Diagnostics.ActivitySource sources (introduced in .NET 5.0). ActivitySource is designed to emit telemetry in a way that is compatible with OpenTelemetry specification. Here is an example showing all possible settings:

{
    "type": "ActivitySource",
    "sources": [
        { 
            "ActivitySourceName": "JobSchedulingActivitySource",
            "ActivityName": "JobStatusRequest",
            "CapturedData": "PropagationData"
        },
        { 
            "ActivitySourceName": "JobSchedulingActivitySource",
            "ActivityName": "NewJobRequest",
            "CapturedData": "AllDataAndRecorded",
            "CapturedEvents": "Both" 
        }
    ]
}

Top object

Field Values/Types Required Description
type "ActivitySource" Yes Specifies the input type. For this input, it must be "ActivitySource".
sources JSON array Yes Specifies the activity data to collect.

Source object (element of the sources array)

Field Values/Types Required Description
ActivitySourceName string No The name of the ActivitySource to track. If left empty or omitted, all ActivitySources available in the process will be tracked.
ActivityName string No The name of the activity to track. If left empty or omitted, all activities from a given source will be captured.
CapturedData AllData, AllDataAndRecorded, PropagationData, or None No Specifies what data will be captured for the activity. For more information about what each option means see ActivitySamplingResult documentation

The default value for this setting is AllData.
CapturedEvents Start, Stop, Both, or None No Specifies when activity data gets captured. Start means activity data will be captured just after the activity is started. Stop means the activity data will be captured just after the activity is completed. Both means the activity data will be captured twice: at the beginning, and at the end of the activity.

The default value for this setting is Stop.

Notes

Be careful about leaving ActivitySourceName and ActivityName blank. It might be tempting to capture all activity data for the process, but doing so might have a significant, negative impact on performance and result in a lot of data that is expensive to process and store.

It is possible to come up with a configuration that "matches" a given activity multiple times. Records in the sources array are matched to activities in the order in which they appear in the configuration; the first record "wins" and determines what settings will be used. For example, if the configuration is

{
    "Type": "ActivitySource",
    "Sources": [
        { "ActivitySourceName": "S", "CapturedData": "PropagationData" },
        { "ActivitySourceName": "S", "ActivityName": "A", "CapturedData": "AllData" }
    ]
}

and activity A from source S occurs, the first sources record will match the activity, and only propagation data will be captured for activity A. That is why more specific sources should generally precede less specific sources (ones that omit ActivitySourceName or ActivityName). A common scenario where this rule comes in handy is when it is necessary to capture all data from a specific subset of activities from a given source, and then capture propagation data for all other activities from the same source.
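
For reference, here is a minimal sketch of code that produces activities matching the earlier configuration (the tag name and the work inside the activity are illustrative):

using System.Diagnostics;

class JobScheduler
{
    // Name matches the ActivitySourceName in the configuration above
    private static readonly ActivitySource Source = new ActivitySource("JobSchedulingActivitySource");

    public void ScheduleJob(string jobId)
    {
        // Activity name matches the "NewJobRequest" source record
        using (Activity activity = Source.StartActivity("NewJobRequest"))
        {
            activity?.SetTag("jobId", jobId);
            // ... schedule the job ...
        }
    }
}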

Back to Topics

PerformanceCounter

Nuget Package: Microsoft.Diagnostics.EventFlow.Inputs.PerformanceCounter

This input enables gathering data from Windows performance counters. Only process-specific counters are supported, that is, the counter must have an instance that is associated with the current process. For machine-wide counters use an external agent such as Azure Diagnostics Agent or create a custom input.

Finding the counter instance that corresponds to the current process

In general there is no canonical way to find a performance counter instance that corresponds to the current process. Two methods are commonly used in practice:

  • A special performance counter that provides instance name to process ID mapping. This solution involves a set of counters that use the same instance name for a given process. Among them there is a special counter whose value is the process ID of the corresponding process. Searching for the instance of the special counter with a value equal to the current process ID reveals which instance name is used for the current process. Examples of this approach include the Windows Process category (special counter "ID Process") and all .NET counters (special counter "Process ID" in the ".NET CLR Memory" category).

  • Process ID can be encoded directly into the instance name. .NET performance counters can use this approach when the ProcessNameFormat flag is set in the registry.

The EventFlow PerformanceCounter input supports the first method of determining the counter instance name for the current process via configuration settings. It also supports the second method, but only for .NET performance counters.

Configuration example

{
    "type": "PerformanceCounter",
    "sampleIntervalMsec": "5000",
    "counters": [
        {
            "counterCategory": "Process",
            "counterName": "Private Bytes"
        }, 
        {
            "counterCategory": ".NET CLR Exceptions",
            "counterName": "# of Exceps Thrown / sec"
        }
    ]
}

Top-level configuration settings

Field Values/Types Required Description
type "PerformanceCounter" Yes Specifies the input type. For this input, it must be "PerformanceCounter".
sampleIntervalMsec integer No Specifies the sampling rate for the whole input (in milliseconds). This is the rate at which the collection loop for the whole input executes. Default is 10 seconds.
counters JSON array of Counter objects Yes Specifies performance counters to collect data from.

Counter class

Field Values/Types Required Description
counterCategory string Yes Category of the performance counter to monitor
counterName string Yes Name of the counter to monitor.
collectionIntervalMsec integer No Sampling interval for the counter (in milliseconds). Values for the counter are read no more often than this interval. Default is 30 seconds.
processIdCounterCategory and processIdCounterName string No The category and name of the performance counter that provides process ID to counter instance name mapping. It is not necessary to specify these for the "Process" counter category and for .NET performance counters.
useDotNetInstanceNameConvention boolean No Indicates that the counter instance names include process ID as described in ProcessNameFormat documentation.
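
As an illustration, a hypothetical custom counter category that follows the "special process ID counter" convention described above could be configured like this (the category and counter names are made up):

{
    "type": "PerformanceCounter",
    "counters": [
        {
            "counterCategory": "MyServiceCounters",
            "counterName": "Queue Length",
            "processIdCounterCategory": "MyServiceCounters",
            "processIdCounterName": "Process ID",
            "collectionIntervalMsec": 10000
        }
    ]
}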

Important usage note

Some performance counters require the user to be a member of the Performance Monitor Users system group. This can manifest itself by the health reporter reporting "category does not exist" errors from the PerformanceCounter input, despite the fact that the category and counter are properly configured and clearly visible in Windows Performance Monitor. If you need to consume such counters, make sure the account your process runs under belongs to the Performance Monitor Users group.

Back to Topics

Serilog

Nuget package: Microsoft.Diagnostics.EventFlow.Inputs.Serilog

This input enables capturing diagnostic data created through the Serilog library.

Configuration example

{
  "type": "Serilog",
  "useSerilogDepthLevel": true
}

Top-level configuration settings

Field Values/Types Required Description
type "Serilog" Yes Specifies the input type. For this input, it must be "Serilog".
useSerilogDepthLevel bool No If true the input will try to preserve the structure of data passed to it, up to a maximum depth determined by Serilog maximum destructuring depth setting; otherwise all objects will be flattened. Defaults to false for backward compatibility.

Example: instantiating a Serilog logger that uses EventFlow Serilog input

using System;
using System.Linq;
using Microsoft.Diagnostics.EventFlow;
using Serilog;

namespace SerilogEventFlow
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var pipeline = DiagnosticPipelineFactory.CreatePipeline(".\\eventFlowConfig.json"))
            {
                Log.Logger = new LoggerConfiguration()
                    .WriteTo.EventFlow(pipeline)
                    .CreateLogger();

                Log.Information("Hello from {friend} for {family}!", "SerilogInput", "EventFlow");
                
                Log.CloseAndFlush();
                Console.ReadKey();
            }
        }
    }
}

Back to Topics

Microsoft.Extensions.Logging

Nuget package: Microsoft.Diagnostics.EventFlow.Inputs.MicrosoftLogging

This input enables capturing diagnostic data created through Microsoft.Extensions.Logging library and ILogger interface.

Configuration example

The ILogger input has no configuration, other than the "type" property that specifies the type of the input (must be "Microsoft.Extensions.Logging"):

{
  "type": "Microsoft.Extensions.Logging"
}

Example: instantiating an ILogger that uses the EventFlow ILogger input

using Microsoft.Diagnostics.EventFlow;
using Microsoft.Extensions.Logging;

namespace LoggerEventFlow
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var pipeline = DiagnosticPipelineFactory.CreatePipeline(".\\eventFlowConfig.json"))
            {
                var factory = new LoggerFactory()
                    .AddEventFlow(pipeline);

                var logger = new Logger<Program>(factory);
                var myState = new { Operation = "Demo" }; // any object can serve as the scope state
                using (logger.BeginScope(myState))
                {
                    logger.LogInformation("Hello from {friend} for {family}!", "LoggerInput", "EventFlow");
                }
            }
        }
    }
}

Example: using EventFlow ILogger input with ASP.NET Core

The following example shows how to enable the EventFlow ILogger input inside a Service Fabric stateless service that uses ASP.NET Core.

  1. Create the diagnostic pipeline in the service host's Main() method and pass it to the service class constructor (the service class stores the pipeline, e.g. in a diagnosticPipeline field, for use in later steps):
        private static void Main()
        {
            try
            {
                using (ManualResetEvent terminationEvent = new ManualResetEvent(initialState: false))
                using (var pipeline = ServiceFabricDiagnosticPipelineFactory.CreatePipeline("CoreBasedFabricPlusEventFlow-Diagnostics"))
                {
                    Console.CancelKeyPress += (sender, eventArgs) => Shutdown(pipeline, terminationEvent);

                    AppDomain.CurrentDomain.UnhandledException += (sender, unhandledExceptionArgs) =>
                    {
                        ServiceEventSource.Current.UnhandledException(unhandledExceptionArgs.ExceptionObject?.ToString() ?? "(no exception information)");
                        Shutdown(pipeline, terminationEvent);
                    };

                    ServiceRuntime.RegisterServiceAsync("Web1Type",
                        context => new Web1(context, pipeline)).GetAwaiter().GetResult();

                    ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id, typeof(Web1).Name);

                    terminationEvent.WaitOne();
                }
            }
            catch (Exception e)
            {
                ServiceEventSource.Current.ServiceHostInitializationFailed(e.ToString());
                throw;
            }
        }

        private static void Shutdown(IDisposable disposable, ManualResetEvent terminationEvent)
        {
            try
            {
                disposable.Dispose();
            }
            finally
            {
                terminationEvent.Set();
            }
        }
  2. In the CreateServiceInstanceListeners() method, add the pipeline as a singleton service to the ASP.NET dependency injection container:
        protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
        {
            return new ServiceInstanceListener[]
            {
                new ServiceInstanceListener(serviceContext =>
                    new WebListenerCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
                    {
                        ServiceEventSource.Current.ServiceMessage(serviceContext, $"Starting WebListener on {url}");

                        return new WebHostBuilder().UseWebListener()
                                    .ConfigureServices(
                                        services => services
                                            .AddSingleton<StatelessServiceContext>(serviceContext)
                                            .AddSingleton<DiagnosticPipeline>(this.diagnosticPipeline))
                                    .UseContentRoot(Directory.GetCurrentDirectory())
                                    .UseStartup<Startup>()
                                    .UseApplicationInsights()
                                    .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                                    .UseUrls(url)
                                    .Build();
                    }))
            };
        }
  3. In the Startup class configure the loggerFactory by calling AddEventFlow on it:
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();
            var diagnosticPipeline = app.ApplicationServices.GetRequiredService<DiagnosticPipeline>();
            loggerFactory.AddEventFlow(diagnosticPipeline);

            app.UseMvc();
        }
  4. Now you can rely on ILogger<T> instances being constructor-injected into your controllers:
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        private readonly ILogger<ValuesController> logger;

        public ValuesController(ILogger<ValuesController> logger)
        {
            this.logger = logger;
        }

        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            this.logger.LogInformation("Hey, someone just called us!");
            return new string[] { "value1", "value2" };
        }

        // (rest of the controller code omitted)
    }

Back to Topics

ETW (Event Tracing for Windows)

Nuget package: Microsoft.Diagnostics.EventFlow.Inputs.Etw

This input captures data from Event Tracing for Windows (ETW) providers. Both manifest-based providers and providers based on the managed EventSource infrastructure are supported. The data is captured machine-wide, which requires that the identity the process runs under belongs to the built-in Performance Log Users group.

Note

To capture data from EventSources running in the same process as EventFlow, the EventSource input is a better choice, with better performance and no additional security requirements.

Configuration example

{
    "type": "ETW",
    "sessionNamePrefix": "MyCompany-MyApplication",
    "cleanupOldSessions": true,
    "reuseExistingSession": true,
    "providers": [
        {
            "providerName": "Microsoft-ServiceFabric",
            "level": "Warning",
            "keywords": "0x7F"
        }
    ]
}

Top Object

Field Values/Types Required Description
type "ETW" Yes Specifies the input type. For this input, it must be "ETW".
sessionNamePrefix string No The ETW trace session will be created with this name prefix, which helps in differentiating ETW input instances owned by multiple applications. If not set, a default prefix will be used.
cleanupOldSessions boolean No If set, existing ETW trace sessions matching the sessionNamePrefix will be closed. This helps clean up leftover sessions, as there is a limit on the number of concurrent ETW sessions.
reuseExistingSession boolean No If turned on, then an existing trace session matching the sessionNamePrefix will be re-used. If cleanupOldSessions is also turned on, then it will leave one session open for re-use.
providers JSON array Yes Specifies ETW providers to collect data from.

Providers object

Field Values/Types Required Description
providerName provider name Yes(*) Specifies the name of the ETW provider to track.
providerGuid provider GUID Yes(*) Specifies the GUID of the ETW provider to track.
level Critical, Error, Warning, Informational, Verbose, LogAlways No Specifies the collection trace level. Traces with equal or higher severity than specified are collected. For example, if Warning is specified, then Critical, Error, and Warning traces are collected. Default is LogAlways, which means "provider decides what events are raised", which usually results in all events being raised.
keywords An integer No A bitmask that specifies what events to collect. Only events with keyword matching the bitmask are collected, except if it's 0, which means everything is collected. Default is 0.

(*) Either providerName, or providerGuid must be specified. When both are specified, provider GUID takes precedence.

Back to Topics

Application Insights input

Nuget package: Microsoft.Diagnostics.EventFlow.Inputs.ApplicationInsights

Application Insights input is designed for the following scenario:

  1. You have an application that uses Application Insights for monitoring and diagnostics.
  2. You want to send a portion of your Application Insights telemetry to some destination other than Application Insights (e.g. Azure EventHub or Elasticsearch; the assumption is there is an EventFlow output for where the data needs to go).

For example, you might want to leverage Application Insights sampling capabilities to reduce the amount of data analyzed by Application Insights without losing analysis fidelity, while sending full raw logs to Elasticsearch to do detailed log search during problem troubleshooting.

Application Insights input supports all standard Application Insights telemetry types: trace, request, event, dependency, metric, exception, page view and availability.

Usage: leveraging the EventFlow configuration file

To use Application Insights input in an application that creates EventFlow pipeline using a configuration file, do the following:

  1. Add the EventFlowTelemetryProcessor to your Application Insights configuration file (it goes into TelemetryProcessors element):

    <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" >
      <!-- ... -->
      <TelemetryProcessors>
         <!-- ... -->
         <Add Type="Microsoft.Diagnostics.EventFlow.ApplicationInsights.EventFlowTelemetryProcessor, Microsoft.Diagnostics.EventFlow.Inputs.ApplicationInsights" />
         <!-- ... -->
      </TelemetryProcessors>
      <!-- ... -->
    </ApplicationInsights>

    Note that the order of telemetry processors does matter. In particular, if the EventFlowTelemetryProcessor is placed before the Application Insights sampling processor, EventFlow will capture all telemetry, but if the EventFlowTelemetryProcessor is placed after the sampling processor, it will only "see" telemetry that was sampled in. For more information on configuring Application Insights see Application Insights configuration documentation.

  2. In the EventFlow configuration make sure to include the Application Insights input. It does not take any parameters:

    { "type": "ApplicationInsights" }
  3. In your application code, after the EventFlow pipeline is created, find the EventFlowTelemetryProcessor and set its Pipeline property to the instance of the EventFlow pipeline:

    using (var pipeline = DiagnosticPipelineFactory.CreatePipeline("eventFlowConfig.json"))
    {
        // ...
        EventFlowTelemetryProcessor efTelemetryProcessor = TelemetryConfiguration.Active.TelemetryProcessors.OfType<EventFlowTelemetryProcessor>().First();
        efTelemetryProcessor.Pipeline = pipeline;
        // ...
    }

That is it: after the EventFlowTelemetryProcessor.Pipeline property is set, the EventFlowTelemetryProcessor will start sending Application Insights telemetry into the EventFlow pipeline.

Usage: ASP.NET Core

ASP.NET Core applications tend to use code to create various parts of their request processing pipeline, and both EventFlow and Application Insights follow that model. The Application Insights input in EventFlow provides a class called EventFlowTelemetryProcessorFactory that helps connect Application Insights with EventFlow in an ASP.NET Core environment. Here is an example of how one could set up the Application Insights input to send telemetry to Elasticsearch:

using System;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Diagnostics.EventFlow;
using Microsoft.Diagnostics.EventFlow.ApplicationInsights;
using Microsoft.Diagnostics.EventFlow.HealthReporters;
using Microsoft.Diagnostics.EventFlow.Inputs;
using Microsoft.Diagnostics.EventFlow.Outputs;
using Microsoft.Diagnostics.EventFlow.Configuration;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ApplicationInsights.AspNetCore;

namespace AspNetCoreEventFlow
{
    public class Program
    {
        public static void Main(string[] args)
        {
            using (var eventFlow = CreateEventFlow(args))
            {
                BuildWebHost(args, eventFlow).Run();
            }
        }

        public static IWebHost BuildWebHost(string[] args, DiagnosticPipeline eventFlow) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureServices(services => services.AddSingleton<ITelemetryProcessorFactory>(sp => new EventFlowTelemetryProcessorFactory(eventFlow)))
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .Build();

        private static DiagnosticPipeline CreateEventFlow(string[] args)
        {
            // Create configuration instance to access configuration information for EventFlow pipeline
            // To learn about common configuration sources take a peek at https://github.com/aspnet/MetaPackages/blob/master/src/Microsoft.AspNetCore/WebHost.cs (CreateDefaultBuilder method). 
            // Here we assume all necessary information comes from command-line arguments and environment variables.
            var configBuilder = new ConfigurationBuilder()
                .AddEnvironmentVariables();
            if (args != null)
            {
                configBuilder.AddCommandLine(args);
            }
            var config = configBuilder.Build();

            var healthReporter = new CsvHealthReporter(new CsvHealthReporterConfiguration());
            var aiInput = new ApplicationInsightsInputFactory().CreateItem(null, healthReporter);
            var inputs = new IObservable<EventData>[] { aiInput };
            var sinks = new EventSink[]
            {
                new EventSink(new ElasticSearchOutput(new ElasticSearchOutputConfiguration {
                    ServiceUri = config["ElasticsearchServiceUri"]
                    // Set other configuration settings, as necessary
                }, healthReporter), null)
            };

            return new DiagnosticPipeline(healthReporter, inputs, null, sinks, null, disposeDependencies: true);
        }
    }
}

Back to Topics

Log4net

Nuget package: Microsoft.Diagnostics.EventFlow.Inputs.Log4net

This input enables capturing diagnostic data logged through the Log4net library.

Configuration example

The Log4net input has one configuration setting, the Log4net level:

{
  "type": "Log4net",
  "logLevel": "Debug"
}
Field Values/Types Required Description
type "Log4net" Yes Specifies the output type. For this output, it must be "Log4net".
logLevel "Debug", "Info", "Warn", "Error", or "Fatal" Yes Specifies minimum Log4net Level for captured events. For example, if the level is Warn, the input will capture events with levels equal to Warn, Error, or Fatal.

Example: instantiating a Log4net logger that uses EventFlow Log4net input

using System;
using Microsoft.Diagnostics.EventFlow;
using log4net;

namespace ConsoleApp2
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var pipeline = DiagnosticPipelineFactory.CreatePipeline(".\\eventFlowConfig.json"))
            {
                var logger = LogManager.GetLogger("EventFlowRepo", "MY_LOGGER_NAME");

                logger.Debug("Hey! Listen!", new Exception("uhoh"));
            }
        }
    }
}

Back to Topics

NLog

Nuget package: Microsoft.Diagnostics.EventFlow.Inputs.NLog

This input enables capturing diagnostic data logged through the NLog library.

Configuration example

The NLog input has no configuration, other than the "type" property that specifies the type of the input (must be "NLog"):

{
  "type": "NLog"
}

Example: instantiating an NLog logger that uses the EventFlow NLog input

using System;
using Microsoft.Diagnostics.EventFlow;
using NLog;

namespace NLogEventFlow
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var pipeline = DiagnosticPipelineFactory.CreatePipeline(".\\eventFlowConfig.json"))
            {
                var nlogTarget = pipeline.ConfigureNLogInput(NLog.LogLevel.Info);
                var logger = NLog.LogManager.GetCurrentClassLogger();
                logger.Info("Hello from {friend} for {family}!", "NLogInput", "EventFlow");
                NLog.LogManager.Shutdown();
                Console.ReadKey();
            }
        }
    }
}

Back to Topics

Outputs

Outputs define where data will be published from the engine. It's an error if there are no outputs defined. Each output type has its own set of parameters.

StdOutput

Nuget Package: Microsoft.Diagnostics.EventFlow.Outputs.StdOutput

This output writes data to the console window. Here is an example showing all possible settings:

{
    "type": "StdOutput"
}
Field Values/Types Required Description
type "StdOutput" Yes Specifies the output type. For this output, it must be "StdOutput".

Back to Topics

Http

Nuget Package: Microsoft.Diagnostics.EventFlow.Outputs.HttpOutput

This output writes data to a web server using different encoding methods (Json or JsonLines, e.g. for Logstash). Here is an example showing all possible settings:

{
    "type": "Http",
    "serviceUri": "https://example.com/",
    "format": "Json",
    "httpContentType": "application/x-custom-type",
    "basicAuthenticationUserName": "httpUser1",
    "basicAuthenticationUserPassword": "<MyPassword>"
}
Field Values/Types Required Description
type "Http" Yes Specifies the output type. For this output, it must be "Http".
serviceUri string Yes Target service URL endpoint (can be HTTP and HTTPS)
format "Json", "JsonLines" No Defines the message format (and the default HTTP Content-Type header). "Json" a json object with multiple array items and "JsonLines" one line per json object (multiple objects)
basicAuthenticationUserName string No Specifies the user name used to authenticate with webserver.
basicAuthenticationUserPassword string No Specifies the password used to authenticate with webserver. This field should be used only if basicAuthenticationUserName is specified.
httpContentType string No Defines the HTTP Content-Type header
headers object No Specifies custom headers that will be added to event upload request. Each property of the object becomes a separate header.
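
For example, a hypothetical configuration that posts events one per line to a Logstash-style endpoint and adds a custom header might look like this (the URL and header value are placeholders):

{
    "type": "Http",
    "serviceUri": "https://example.com/logs",
    "format": "JsonLines",
    "headers": {
        "X-Api-Key": "<my-api-key>"
    }
}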

Back to Topics

Event Hub

Nuget Package: Microsoft.Diagnostics.EventFlow.Outputs.EventHub

This output writes data to Azure Event Hubs.

Here is an example showing configuration using connection string:

{
    "type": "EventHub",
    "eventHubName": "myEventHub",
    "connectionString": "Endpoint=sb://<myEventHubNamespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<MySharedAccessKey>"
}

Here is an example showing configuration using Azure Identity:

{
    "type": "EventHub",
    "useAzureIdentity": true,
    "fullyQualifiedNamespace": "<myEventHubNamespace>.servicebus.windows.net",
    "eventHubName": "myEventHub"
}
Field Values/Types Required Description
type "EventHub" Yes Specifies the output type. For this output, it must be "EventHub".
connectionString connection string Yes(*) Specifies the connection string for the event hub. The corresponding shared access policy must have send permission. If the event hub name does not appear in the connection string, then it must be specified in the eventHubName field.
eventHubName event hub name No(**) Specifies the name of the event hub.
useAzureIdentity boolean Yes(*) Specifies that the Event Hub client will use Azure Identity for authentication. The identity (managed or user-assigned) must be granted a policy with send permission.
fullyQualifiedNamespace string Yes(**) Specifies the fully qualified Event Hub namespace, i.e. <namespace>.servicebus.windows.net.
partitionKeyProperty string No The name of the event property that will be used as the PartitionKey for the Event Hub events.

(*) Either connectionString or useAzureIdentity must be specified. useAzureIdentity takes precedence.

(**) When useAzureIdentity is set to true then both eventHubName and fullyQualifiedNamespace must be specified.
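
For instance, to route all events carrying the same value of a (hypothetical) machineName property to the same Event Hub partition, the partitionKeyProperty setting could be used like this:

{
    "type": "EventHub",
    "eventHubName": "myEventHub",
    "connectionString": "Endpoint=sb://<myEventHubNamespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<MySharedAccessKey>",
    "partitionKeyProperty": "machineName"
}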

Back to Topics

Application Insights

Nuget Package: Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsights

This output writes data to the Azure Application Insights service. Here is an example showing all possible settings:

{
    "type": "ApplicationInsights",
    "instrumentationKey": "00000000-0000-0000-0000-000000000000",
    "configurationFilePath": "path-to-ApplicationInsights.config"
}
Field Values/Types Required Description
type "ApplicationInsights" Yes Specifies the output type. For this output, it must be "ApplicationInsights".
instrumentationKey string No (*) Specifies the instrumentation key for the targeted Application Insights resource. The key is typically a GUID; it can be found on the Azure Monitor Application Insights blade in Azure portal.
The value in this field overrides any value in the Application Insights configuration file (see the configurationFilePath parameter) and in the connection string (connectionString parameter).
connectionString string No (*) Specifies the Application Insights connection string.
The value in this field overrides any value in the Application Insights configuration file (see the configurationFilePath parameter).
configurationFilePath string No (*) Specifies the path to the Application Insights configuration file. This parameter is optional; if no value is specified, the default configuration for the Application Insights output will be used. For more information see Application Insights documentation.

(*) At least one of the parameters: instrumentationKey, connectionString, or configurationFilePath must be provided for the Application Insights output configuration to be valid.

In a Service Fabric environment the Application Insights configuration file should be part of the default service configuration package (the 'Config' package). To resolve the path of the configuration file within the service configuration package, set the value of configurationFilePath to servicefabricfile:/ApplicationInsights.config. For more information on this syntax see Service Fabric support.

Standard metadata support

Application Insights output supports all standard metadata (request, metric, dependency and exception). Each of these metadata types corresponds to a native Application Insights telemetry type, enabling the rich support for visualization and alerting that Application Insights provides. The Application Insights output also supports event metadata (corresponding to the AI event telemetry type). This metadata is meant to represent significant application events, like a new user registering with the system or a new version of the code being deployed. Event metadata supports the following properties:

Field Values/Types Required Description
metadata "ai_event" Yes Indicates Application Insights event metadata; must be "ai_event".
eventNameProperty string (see Remarks) The name of the event property that will be used as the name of the AI event telemetry.
eventName string (see Remarks) The name of the event (if the name is supposed to be taken verbatim from metadata).

Remarks:

  1. Either eventNameProperty or eventName must be given.

All other events will be reported as Application Insights traces (telemetry of type Trace).
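
As a sketch, event metadata could be applied with a metadata filter like the following (the include expression and event name are illustrative; the filter syntax is described in the Filters section):

{
    "type": "metadata",
    "metadata": "ai_event",
    "include": "EventName==UserRegistered",
    "eventName": "UserRegistered"
}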

Back to Topics

Elasticsearch

Nuget Package: Microsoft.Diagnostics.EventFlow.Outputs.ElasticSearch

This output writes data to Elasticsearch. Here is an example showing all possible settings:

{
    "type": "ElasticSearch",
    "indexNamePrefix": "app1",
    "indexFormat": "yyyy.MM.dd",
    "serviceUri": "https://myElasticSearchCluster:9200;https://myElasticSearchCluster:9201;https://myElasticSearchCluster:9202",
    "connectionPoolType": "Sniffing",
    "basicAuthenticationUserName": "esUser1",
    "basicAuthenticationUserPassword": "<MyPassword>",
    "numberOfShards": 1,
    "numberOfReplicas": 1,
    "refreshInterval": "15s",
    "defaultPipeline": "my-pipeline",
    "mappings": {
        "properties": {
            "timestamp": {
                "type": "date_nanos"
            }
        }
    },
    "proxy": {
        "uri": "https://myESProxy/"
    }
}
Field Values/Types Required Description
type "ElasticSearch" Yes Specifies the output type. For this output, it must be "ElasticSearch".
indexNamePrefix string No Specifies the prefix to be used when creating the Elasticsearch index. This prefix, together with the date of when the data was generated, will be used to form the name of the Elasticsearch index. If not specified, a prefix will not be used.
indexFormat string No Specifies the format to be used when creating the Elasticsearch index, in C# date/time formatting conventions, applied to the date of when the data was generated. There is also a 'q' for Quarter (shown as 'q1', etc.), 'w' for Week of Month (shown as 'w1', etc.), and 'W' and 'WW' for Week of Year (also shown as 'w1', where 'WW' includes leading zeros). This will be used to form the name of the Elasticsearch index (defaults to 'yyyy.MM.dd' if not specified). If a prefix is specified, the prefix and a dash will be prepended.
serviceUri URL:port Yes Specifies where the Elasticsearch cluster is. This is needed for EventFlow to locate the cluster and send the data. A single URL or semicolon-separated URLs for connection pooling are accepted.
connectionPoolType "Static", "Sniffing", or "Sticky" No Specifies the Connection Pool that takes care of registering what nodes there are in the cluster.
basicAuthenticationUserName string No Specifies the user name used to authenticate with Elasticsearch. To protect the cluster, authentication is often setup on the cluster.
basicAuthenticationUserPassword string No Specifies the password used to authenticate with Elasticsearch. This field should be used only if basicAuthenticationUserName is specified.
eventDocumentTypeName string Yes (ver < 2.7.0)
N/A (ver >= 2.7.0)
Specifies the document type to be applied when data is written. Elasticsearch allows documents to be typed, so they can be distinguished from other types. This type name is user-defined.

Starting with Elasticsearch 7.x the mapping types have been removed. Consequently this configuration setting has been removed from Elasticsearch output version 2.7.0 and newer.
numberOfShards int No Specifies how many shards to create the index with. If not specified, it defaults to 1.
numberOfReplicas int No Specifies how many replicas the index is created with. If not specified, it defaults to 5.
refreshInterval string No Specifies what refresh interval the index is created with. If not specified, it defaults to 15s.
defaultPipeline string No Specifies the default ingest node pipeline the index is created with. If not specified, a default pipeline will not be used.
mappings object No Specifies how documents created by the Elasticsearch output are stored and indexed (index mappings). For more information refer to Elasticsearch documentation on index mappings.
mappings.properties object No Specifies property mappings for documents created by the Elasticsearch output. Currently only property type mappings can be specified. Supported types are: text, keyword, date, date_nanos, boolean, long, integer, short, byte, double, float, half_float, scaled_float, ip, geo_point, geo_shape, and completion.
proxy object No Specifies connection proxy settings. Valid properties are uri (proxy URI, i.e. address), userName and userPassword.

Standard metadata support

Elasticsearch output supports all standard metadata types. Events decorated with metadata will get additional properties when sent to Elasticsearch.

Fields injected by metric metadata are:

Field Description
MetricName The name of the metric, read directly from the metadata.
Value The value of the metric, read from the event property specified by metricValueProperty.

Fields injected by the request metadata are:

Field Description
RequestName The name of the request, read from the event property specified by requestNameProperty.
Duration Request duration, read from the event property specified by durationProperty (if available).
IsSuccess Success indicator, read from the event property specified by isSuccessProperty (if available).
ResponseCode Response code for the request, read from the event property specified by responseCodeProperty (if available).

Elasticsearch version support

Elasticsearch output package version Supported Elasticsearch server version
1.x 2.x
2.6.x 6.x
2.7.x 7.x

Back to Topics

Azure Monitor Logs

Nuget package: Microsoft.Diagnostics.EventFlow.Outputs.AzureMonitorLogs

The Azure Monitor Logs output writes data to the Azure Monitor Logs service (also known as the Log Analytics service) via the HTTP Data Collector API. You will need to create a Log Analytics workspace in Azure and know its ID and key before using the Azure Monitor Logs output. Here is a sample configuration fragment enabling the output:

{
  "type": "AzureMonitorLogs",
  "workspaceId": "<workspace-GUID>",
  "workspaceKey": "<base-64-encoded workspace key>",
  "serviceDomain" : "<optional domain for Log Analytics workspace>"
}

Supported configuration settings are:

Field Values/Types Required Description
type "AzureMonitorLogs" Yes Specifies the output type. For this output, it must be "AzureMonitorLogs".
workspaceId string (GUID) Yes Specifies the workspace identifier.
workspaceKey string (base-64) Yes Specifies the workspace authentication key.
logTypeName string No Specifies the log entry type created by the output. Default value for this setting is "Event", which results in "Event_CL" entries being created in Log Analytics (the "_CL" suffix is appended automatically by Log Analytics Data Collector).
serviceDomain string No Specifies the domain for your Log Analytics workspace. Default value is "ods.opinsights.azure.com", for Azure Commercial.
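
For example, a hypothetical configuration that writes entries as MyServiceLogs_CL instead of the default Event_CL would set logTypeName accordingly:

{
  "type": "AzureMonitorLogs",
  "workspaceId": "<workspace-GUID>",
  "workspaceKey": "<base-64-encoded workspace key>",
  "logTypeName": "MyServiceLogs"
}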

Back to Topics

Filters

As data comes through the EventFlow pipeline, the application can add extra processing or tagging to it. These optional operations are accomplished with filters. Filters can transform, drop, or tag data with extra metadata, with rules based on custom expressions. With metadata tags, filters and outputs operating further down the pipeline can apply different processing to different data. For example, an output component can choose to send only data with a certain tag. Each filter type has its own set of parameters.

Filters can appear in two places in the EventFlow configuration: on the same level as inputs and outputs (global filters) and as part of output declaration (output-specific filters). Global filters are applied to all data coming from the inputs. Output-specific filters are applied to just one output, just before the data reaches the output. Here is an example with two global filters and one output-specific filter:

{
    "inputs": [...],

    "filters": [
        // The following are global filters
        {
            "type": "drop",
            "include": "..."
        },
        {
            "type": "drop",
            "include": "..."
        }
    ],

    "outputs": [
        {
            "type": "ApplicationInsights",
            "instrumentationKey": "00000000-0000-0000-0000-000000000000",
            "filters": [
                {
                    "type": "metadata",
                    "metadata": "metric",
                    // ... 
                }
            ]
        }
    ]
}

EventFlow comes with two standard filter types: drop and metadata.

Drop

Nuget Package: Microsoft.Diagnostics.EventFlow.Core

This filter discards all data that satisfies the include expression. Here is an example showing all possible settings:

{
    "type": "drop",
    "include": "Level == Verbose || Level == Informational"
}
Field Values/Types Required Description
type "drop" Yes Specifies the filter type. For this filter, it must be "drop".
include logical expression Yes Specifies the logical expression that determines if the action should apply to the event data or not. For information about the logical expression, please see section Logical Expressions.
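
For instance, a hypothetical filter that drops a particularly chatty event might look like this (the provider name and event ID are made up):

{
    "type": "drop",
    "include": "ProviderName == MyNoisyProvider && EventId == 1234"
}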

Back to Topics

Metadata

Nuget Package: Microsoft.Diagnostics.EventFlow.Core

This filter adds additional metadata to all event data that satisfies the include expression. The filter recognizes a few standard properties (type, metadata and include); the rest are custom properties, specific for the given metadata type:

{
    "type": "metadata",
    "metadata": "metric",
    "include": "ProviderName == MyEventProvider && EventId == 3",
    "customTag1": "tag1",
    "customTag2": "tag2"
}
Field Values/Types Required Description
type "metadata" Yes Specifies the filter type. For this filter, it must be "metadata".
metadata string Yes Specifies the metadata type. This field is used only if type is "metadata", so it shouldn't appear in other filter types. The metadata type is user-defined and is persisted along with metadata tag added to the event data.
include logical expression Yes Specifies the logical expression that determines if the metadata is applied to the event data. For information about the logical expression, please see section Logical Expressions.
[others] string No Specifies custom properties that should be added along with this metadata object. When the event data is processed by other filters or outputs, these properties can be accessed. The names of these properties are custom-defined and the possible set is open-ended. For a particular filter, zero or more custom properties can be defined. In the example above, customTag1 and customTag2 are such properties.

Here are a few examples of using the metadata filter:

  1. Submit a metric with a value of 1 (a counter) whenever there is a Service Fabric stateful service run failure

    {
      "type": "metadata",
      "metadata": "metric",
      "include": "ProviderName==Microsoft-ServiceFabric-Services 
                  && EventName==StatefulRunAsyncFailure",
      "metricName": "StatefulRunAsyncFailure",
      "metricValue": "1.0"
    }
  2. Turn processor time performance counter into a metric

    {
      "type": "metadata",
      "metadata": "metric",
      "include": "ProviderName==EventFlow-PerformanceCounterInput && CounterCategory==Process 
                  && CounterName==\"% Processor Time\"",
      "metricName": "MyServiceProcessorTimePercent",
      "metricValueProperty": "Value"
    }
  3. Turn a custom EventSource event into a request. The event has three interesting properties: requestTypeName indicates what kind of request it was; durationMsec holds the total request processing duration; and isSuccess indicates whether the processing succeeded or failed:

    {
      "type": "metadata",
      "metadata": "request",
      "include": "ProviderName==MyCompany-MyApplication-FrontEndService 
                  && EventName==ServiceRequestStop",
      "requestNameProperty": "requestTypeName",
      "durationProperty": "durationMsec",
      "isSuccessProperty": "isSuccess"
    }

Back to Topics

Standard metadata types

The EventFlow core library defines several standard metadata types. They have a pre-defined set of fields and are recognized by the Application Insights and Elasticsearch outputs (see the documentation for each output, respectively, to learn how they handle standard metadata).

Metric metadata type

Metrics are named time series of floating-point values. Metric metadata defines how metrics are derived from ordinary events. The following fields are supported:

Field Values/Types Required Description
metadata "metric" Yes Indicates metric metadata definition; must be "metric".
metricName string (see Remarks) The name of the metric.
metricNameProperty string (see Remarks) The name of the event property that holds metric name.
metricValue double (see Remarks) The value of the metric. This is useful for a "counter" type of metric, where each occurrence of a particular event should result in an increment of the counter.
metricValueProperty string (see Remarks) The name of the event property that holds the metric value.

Remarks:

  1. Either metricName or metricNameProperty must be specified.
  2. Either metricValue or metricValueProperty must be specified.

Back to Topics

Request metadata type

Requests are special events that represent invocations of a network service by its clients. Request metadata defines how requests are derived from ordinary events. The following fields are supported:

Field Values/Types Required Description
metadata "request" Yes Indicates request metadata definition; must be "request".
requestNameProperty string Yes The name of the event property that contains the name of the request (to distinguish between different kinds of requests).
isSuccessProperty string No The name of the event property that specifies whether the request ended successfully. It is expected that the event property is, or can be converted to a boolean.
durationProperty string No The name of the event property that specifies the request duration (execution time).
durationUnit "TimeSpan", "milliseconds", "seconds", "minutes" or "hours" No Specifies the type of data used by request duration property. If not set, it is assumed that request duration is expressed as a double value, representing milliseconds.
responseCodeProperty string No The name of the event property that specifies response code associated with the request. A response code describes in more detail the outcome of the request. It is expected that the event property is, or can be converted to a string.

Back to Topics

Dependency metadata type

A dependency event represents the act of calling a service that your service depends on. It has the following properties:

Field Values/Types Required Description
metadata "dependency" Yes Indicates dependency metadata definition; must be "dependency".
isSuccessProperty string No The name of the event property that specifies whether the request ended successfully. It is expected that the event property is, or can be converted to a boolean.
durationProperty string No The name of the event property that specifies the request duration (execution time).
durationUnit "TimeSpan", "milliseconds", "seconds", "minutes" or "hours" No Specifies the type of data used by request duration property. If not set, it is assumed that request duration is expressed as a double value, representing milliseconds.
responseCodeProperty string No The name of the event property that specifies response code associated with the request. A response code describes in more detail the outcome of the request. It is expected that the event property is, or can be converted to a string.
targetProperty string Yes The name of the event property that specifies the target of the call, i.e. the identifier of the service that your service depends on.
dependencyType string No An optional, user-defined designation of the dependency type. For example, it could be "SQL", "cache", "customer_data_service" or similar.
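
As an illustrative sketch (the provider, event, and property names are hypothetical), dependency metadata could be applied with a metadata filter like this:

{
    "type": "metadata",
    "metadata": "dependency",
    "include": "ProviderName==MyCompany-MyApplication-FrontEndService && EventName==DatabaseCallStop",
    "targetProperty": "databaseName",
    "durationProperty": "durationMsec",
    "isSuccessProperty": "isSuccess",
    "dependencyType": "SQL"
}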

Back to Topics

Exception metadata type

An exception event corresponds to an occurrence of an unexpected exception. Usually a small number of exceptions is continuously being thrown, caught, and handled by a .NET process; this is normal and should not raise concern. On the other hand, if an exception is unhandled, or unexpected, it needs to be logged and examined. This metadata is meant to cover the second case. It has the following properties:

Field Values/Types Required Description
metadata "exception" Yes Indicates exception metadata definition; must be "exception".
exceptionProperty string Yes The name of the event property that carries the (unexpected) exception object. Note that (for maximum information fidelity) the expected type of the event property is System.Exception. In other words, the actual exception is expected to be part of event data, and not just a stringified version of it.
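
For example (the provider and event names, and the exception property name, are placeholders):

{
  "type": "metadata",
  "metadata": "exception",
  "include": "ProviderName == MyEventProvider && EventName == UnexpectedError",
  "exceptionProperty": "exception"
}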

Also see the Application Insights Exception metadata type with EventSource input issue.

Back to Topics

Health Reporter

Every software component can generate errors or warnings the developer should be aware of. The EventFlow library is no exception. An EventFlow health reporter reports errors and warnings generated by any component in the EventFlow pipeline. The format in which the report is presented depends on the implementation of the health reporter. The EventFlow library suite includes two health reporters: CsvHealthReporter and ServiceFabricHealthReporter.

The CsvHealthReporter is the default health reporter for EventFlow library and is used if the health reporter configuration section is omitted. Its configuration parameters are described below.

ServiceFabricHealthReporter is described in the Service Fabric support section. It is designed to be used in the context of Service Fabric applications and does not need any configuration.

CsvHealthReporter

Nuget Package: Microsoft.Diagnostics.EventFlow.Core

This health reporter writes all errors, warnings, and informational traces generated from the pipeline into a CSV file. Here is an example showing all possible settings:

"healthReporter": {
    "type": "CsvHealthReporter",
    "logFileFolder": ".",
    "logFilePrefix": "HealthReport",
    "minReportLevel": "Warning",
    "throttlingPeriodMsec": "1000",
    "singleLogFileMaximumSizeInMBytes": "8192",
    "logRetentionInDays": "30",
    "ensureOutputCanBeSaved": "false",
    "assumeSharedLogFolder": "false"
}
Field Values/Types Required Description
type "CsvHealthReporter" Yes Specifies the health reporter type. For this reporter, it must be "CsvHealthReporter".
logFileFolder file path No Specifies a path for the CSV log file to be written. It can be an absolute path, or a relative path. If it's a relative path, then it's computed relative to the current working directory of the program. However, if it's an ASP.NET application, it's relative to the app_data folder.
logFilePrefix file path No Specifies a prefix used for the CSV log file. CsvHealthReporter creates the log file name by combining this prefix with the date of when the file is generated. If the prefix is omitted, then a default prefix of "HealthReport" is used.
minReportLevel Error, Warning, Message No Specifies the collection report level. Report traces with equal or higher severity than specified are collected. For example, if Warning is specified, then Error and Warning traces are collected. Default is Error.
throttlingPeriodMsec number of milliseconds No Specifies the throttling time period. This setting protects the health reporter from being overwhelmed, which can happen if a message is repeatedly generated due to an error in the pipeline. Default is 0, for no throttling.
singleLogFileMaximumSizeInMBytes number (MB) No Specifies the size of the log file in MB before rotation happens. The default value is 8192 MB (8 GB). Once the size of the log file exceeds this value, it will be renamed from fileName.csv to fileName_last.csv and logs will be written to a new fileName.csv. This setting prevents a single log file from becoming too big.
logRetentionInDays number of days No Specifies how long log files are retained. The default value is 30 days. Any log files created earlier than the specified number of days ago are removed automatically. This prevents continuous generation of logs that might lead to storage exhaustion.
ensureOutputCanBeSaved boolean No Specifies whether the health reporter verifies during initialization that it has permission to write to the log folder. The default value is false. When set to true, pipeline creation fails if the log file cannot be written; otherwise the error is ignored.
assumeSharedLogFolder boolean No Specifies whether the health reporter appends a random string to the log file name to make it unique. This is useful if multiple processes using EventFlow use the same folder to store their health reporter files. Default value is false.

CsvHealthReporter will try to open the log file for writing during initialization. If it cannot, by default a debug message is written to the debugger output (for example, the Visual Studio Output window). This can happen in particular when no log file path is provided (so the default, the application executable folder, is used) and the application executables reside on a read-only file system. Docker tools for Visual Studio use this configuration during debugging, so for containerized services the recommended practice is to specify the log file path explicitly.

Back to Topics

Pipeline Settings

The EventFlow configuration has settings that allow the application to adjust certain behaviors of the pipeline, ranging from how many events the pipeline buffers to the timeout the pipeline uses when waiting for an operation. If this section is omitted, the pipeline uses default settings. Here is an example of all the possible settings:

"settings": {
    "pipelineBufferSize": "1000",
    "maxEventBatchSize": "100",
    "maxBatchDelayMsec": "500",
    "maxConcurrency": "8",
    "pipelineCompletionTimeoutMsec": "30000"
}
Field Values/Types Required Description
pipelineBufferSize number No Specifies how many events the pipeline can buffer if the events cannot flow through the pipeline fast enough. This buffer protects against loss of data in cases where there is a sudden burst of data.
maxEventBatchSize number No Specifies the maximum number of events to be batched before the batch gets pushed through the pipeline to filters and outputs. The batch is pushed down when it reaches the maxEventBatchSize, or its oldest event has been in the batch for more than maxBatchDelayMsec milliseconds.
maxBatchDelayMsec number of milliseconds No Specifies the maximum time that events are held in a batch before the batch gets pushed through the pipeline to filters and outputs. The batch is pushed down when it reaches the maxEventBatchSize, or its oldest event has been in the batch for more than maxBatchDelayMsec milliseconds.
maxConcurrency number No Specifies the maximum number of threads used to process events. Each event is processed by a single thread, but multiple threads can process different events simultaneously.
pipelineCompletionTimeoutMsec number of milliseconds No Specifies the timeout to wait for the pipeline to shut down and clean up. The shutdown process starts when the DiagnosticPipeline object is disposed, which usually happens on application exit.

Back to Topics

Service Fabric Support

Nuget Package: Microsoft.Diagnostics.EventFlow.ServiceFabric

This package contains two components that make it easier to include EventFlow in Service Fabric applications: the ServiceFabricDiagnosticPipelineFactory and ServiceFabricHealthReporter. ServiceFabricHealthReporter is used automatically by ServiceFabricDiagnosticPipelineFactory. It does not require any configuration and does not need to be listed in the pipeline configuration file.

The ServiceFabricDiagnosticPipelineFactory is a replacement for the standard DiagnosticPipelineFactory, one that uses Service Fabric configuration support to load pipeline configuration. The resulting pipeline reports any execution problems through the Service Fabric health subsystem. The factory exposes a static CreatePipeline() method that takes the following parameters:

Parameter Default Value Description
healthEntityName (none) The name of the health entity that will be used to report EventFlow pipeline health to Service Fabric. Usually it is set to a value that helps you identify the service using the pipeline, for example "MyApplication-MyService-DiagnosticsPipeline".
configurationFileName "eventFlowConfig.json" The name of the configuration file that contains pipeline configuration. The file is expected to be part of a (Service Fabric) service configuration package and is typically placed in PackageRoot/Config folder under the service project folder.
configurationPackageName "Config" The name of the Service Fabric configuration package that contains the pipeline configuration file.

The recommended place to create the diagnostic pipeline is the service Main() method. The following code works for all types of Service Fabric services (stateful, stateless, and actor):

public static void Main(string[] args)
{
    try
    {
        using (ManualResetEvent terminationEvent = new ManualResetEvent(initialState: false))
        using (var diagnosticsPipeline = ServiceFabricDiagnosticPipelineFactory.CreatePipeline("MyApplication-MyService-DiagnosticsPipeline"))
        {
            Console.CancelKeyPress += (sender, eventArgs) => Shutdown(diagnosticsPipeline, terminationEvent);

            AppDomain.CurrentDomain.UnhandledException += (sender, unhandledExceptionArgs) =>
            {
                ServiceEventSource.Current.UnhandledException(unhandledExceptionArgs.ExceptionObject?.ToString() ?? "(no exception information)");
                Shutdown(diagnosticsPipeline, terminationEvent);
            };

            ServiceRuntime.RegisterServiceAsync("MyServiceType", ctx => new MyService(ctx)).Wait();

            ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id, typeof(MyService).Name);

            terminationEvent.WaitOne();
        }
    }
    catch (Exception e)
    {
        ServiceEventSource.Current.ServiceHostInitializationFailed(e.ToString());
        throw;
    }    
}

private static void Shutdown(IDisposable disposable, ManualResetEvent terminationEvent)
{
    try
    {
        disposable.Dispose();
    }
    finally
    {
        terminationEvent.Set();
    }
}

The purpose of handling CancelKeyPress and UnhandledException events (the latter for full .NET Framework only) is to ensure that the EventFlow pipeline is cleanly disposed. This is important for pipeline elements that rely on system-level resources. For example, the Event Tracing for Windows (ETW) input creates a system-wide ETW listening session, which must be disposed when the EventFlow pipeline is shut down. The Ctrl-C signal is the standard way the Service Fabric runtime notifies service processes that they need to perform cleanup and exit. By default the process has 30 seconds to react.

The UnhandledException event method is a very simple addition to the standard ServiceEventSource:

    [Event(UnhandledExceptionEventId, Level = EventLevel.Error, Message = "An unhandled exception has occurred")]
    public void UnhandledException(string exception)
    {
       WriteEvent(UnhandledExceptionEventId, exception);
    }

Depending on the type of inputs and outputs used, additional startup code may be necessary. For example, the Microsoft.Extensions.Logging input requires a call to the LoggerFactory.AddEventFlow() method to register the EventFlow logger provider.

Back to Topics

Support for Service Fabric settings and application parameters

Version 1.0.1 of the EventFlow Service Fabric NuGet package introduced the ability to refer to Service Fabric settings from EventFlow configuration using special syntax for values:

servicefabric:/<section-name>/<setting-name>

where <section-name> is the name of the Service Fabric configuration section and <setting-name> is the name of the Service Fabric configuration setting that provides the value for some EventFlow setting. For example:

"basicAuthenticationUserPassword": "servicefabric:/DiagnosticPipelineParameters/ElasticSearchUserPassword"

The EventFlow configuration entry above means "take the ElasticSearchUserPassword setting from the DiagnosticPipelineParameters section of the Service Fabric service configuration and use its value as the value for the EventFlow basicAuthenticationUserPassword setting". As with other Service Fabric settings, the values can also be overridden by Service Fabric application parameters. For more information, see the Manage application parameters for multiple environments topic in the Service Fabric documentation.
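
On the Service Fabric side, the referenced setting is an ordinary entry in the service's Settings.xml. Continuing the example above, it might look like this (the empty value would typically be supplied through an application parameter override):

<Section Name="DiagnosticPipelineParameters">
  <Parameter Name="ElasticSearchUserPassword" Value="" />
</Section>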

Version 1.1.2 added support for resolving paths to other configuration files that are part of the default service configuration package. The syntax is:

servicefabricfile:/<configuration-file-name>

At run time this value will be substituted with a full path to the configuration file with the given name. This is especially useful if an EventFlow pipeline element wraps an existing library that has its own configuration file format (as is the case with Application Insights, for example).

Back to Topics

Using Application Insights ServerTelemetryChannel

For Service Fabric applications running in production, we recommend using the Application Insights ServerTelemetryChannel. This channel can store data on disk during periods of intermittent connectivity and is a better choice for long-running server processes than the default in-memory channel.

To use the ServerTelemetryChannel you need to create an EventFlow configuration file and an Application Insights configuration file in the same Service Fabric configuration package:

eventFlowConfig.json

{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "Microsoft-ServiceFabric-Services" },
        // Other event sources, as necessary...
      ]
    }
  ],
  "filters": [
    // Filters, as necessary
  ],
  "outputs": [
    {
      "type": "ApplicationInsights",      
      "configurationFilePath":  "servicefabricfile:/ApplicationInsights.config"
    }
  ]
}

ApplicationInsights.config

<?xml version="1.0" encoding="utf-8" ?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">

  <TelemetryModules>    
  <!-- Telemetry module configuration, as necessary -->
  </TelemetryModules>

  <TelemetryProcessors>
  <!-- Telemetry processor configuration, as necessary -->    
  </TelemetryProcessors> 

  <TelemetryChannel Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel, Microsoft.AI.ServerTelemetryChannel"/>

</ApplicationInsights>

Ensure that your service consumes the Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel NuGet package.

Back to Topics

Filter Expressions

Filter expressions allow you to filter events based on event properties. For example, you can have an expression like ProviderName == MyEventProvider && EventId == 3, where you specify the event property name on the left side and the value to compare against on the right side. If the value on the right side contains special characters, you can enclose it in double quotes.

Standard and custom (payload) properties are treated equally; there is no need to prefix custom properties in any way. For example, expression TenantId != Unknown && ProviderName == MyEventProvider will evaluate to true for events that

  • were created by MyEventProvider provider (ProviderName is a standard property available for all events), and
  • have a TenantId custom property, with a value different from the string "Unknown".

The following table lists the operators supported by filter expressions.

Class Operators Description
Comparison ==, !=, <, >, <=, >= Evaluates to true if the property (left side of the expression) successfully compares to the value (right side of the expression).
Property presence and absence hasproperty, hasnoproperty hasproperty transactionId will only allow events that have a property named "transactionId". hasnoproperty transactionId does the opposite: it will only allow events that do not have a property named "transactionId".
Bitwise Equality &== Evaluates to true if the bit mask is set, i.e., (lhsValue & rhsValue) == rhsValue. This is useful to filter on properties like Keywords.
Regular Expression ~= Evaluates to true if the property value matches a regular expression pattern on the right.
Logical &&, || Combines expressions: && requires both sides to evaluate to true, || requires at least one.
Grouping (expression) Grouping can be used to change the evaluation order of expressions with logical operators.
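
For example, a drop filter combining several of these operators might look like this (the provider name, event ID, and keyword mask are placeholders):

"filters": [
  {
    "type": "drop",
    "include": "ProviderName == MyEventProvider && (EventId == 7 || Keywords &== 0x4)"
  }
]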

Back to Topics

Store Secrets Securely

If you don't want to put sensitive information in the EventFlow configuration file, you can store the information in a secure place and inject it into the configuration at run time. Here is sample code:

string configFilePath = @".\eventFlowConfig.json";
IConfiguration config = new ConfigurationBuilder().AddJsonFile(configFilePath).Build();
IConfiguration eventHubOutput = config.GetSection("outputs").GetChildren().FirstOrDefault(c => c["type"] == "EventHub");

if (eventHubOutput != null)
{
    string eventHubConnectionString = GetEventHubConnectionStringFromSecuredPlace();
    eventHubOutput["connectionString"] = eventHubConnectionString;
}

using (DiagnosticPipeline pipeline = DiagnosticPipelineFactory.CreatePipeline(config))
{
    // ...
}

Back to Topics

Extensibility

Every pipeline element type (input, filter, output and health reporter) is a point of extensibility, that is, custom elements of these types can be used inside the pipeline. Contracts for all EventFlow element types are provided by the EventFlow.Core assembly (except for the input type; see below).

EventData class

EventFlow pipelines operate on EventData objects. Each EventData object represents a single telemetry record (event) and has the following public properties and methods:

Name Type Description
Timestamp DateTimeOffset Indicates the time when the event was created.
ProviderName string Identifies the source of the event.
Level LogLevel (enumeration) Provides a basic severity assessment of the event: is it a critical error, regular error, a warning, etc.
Keywords long Provides a means to efficiently classify events. The field is supposed to be interpreted as a set of bits (64 bits available). The meaning of each bit is specific to the event provider; EventFlow does not interpret them in any way.
Payload IDictionary<string, object> Stores event properties.
TryGetMetadata(string kind, out IReadOnlyCollection<EventMetadata> metadata) bool Retrieves metadata of a given kind (if any) from the event. Returns true if metadata of the given kind was found, otherwise false.
SetMetadata(EventMetadata metadata) void Adds (attaches) a new piece of metadata to the event.
GetValueFromPayload<T>(string name, ProcessPayload<T> handler) bool Retrieves a value from the payload (the set of event properties). Although the payload can be accessed directly via the Payload property, this method is useful because it checks whether the property exists and performs basic type conversion as necessary.
AddPayloadProperty(string key, object value, IHealthReporter healthReporter, string context) void Adds a new payload property to the event, performing property name disambiguation as necessary. The healthReporter and context parameters are used to produce a warning in case name disambiguation is necessary (i.e., the name had to be changed because the event already had a property with a name equal to the key parameter).
DeepClone() EventData Performs a deep clone operation on the EventData. The resulting copy is independent from the original and either can be modified without affecting the other (e.g. properties or metadata can be added or removed). The only exception is that payload values are not cloned (and thus are shared between the copies).

Note that the EventData type is not thread-safe. Do not use it concurrently from multiple threads.
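
To illustrate, here is a hedged sketch of how a pipeline element might use these methods (the "requestId" and "processedBy" property names and the context string are hypothetical):

// Read a payload property with basic type conversion; the handler runs only if the property exists.
string requestId = null;
bool found = eventData.GetValueFromPayload<string>("requestId", value => requestId = value);

// Add a new payload property; the name is disambiguated (and a warning reported through
// healthReporter) if the event already carries a "processedBy" property.
eventData.AddPayloadProperty("processedBy", "MyCustomFilter", healthReporter, context: "MyCustomFilter");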

Back to Topics

EventFlow pipeline element types

Inputs

Inputs produce new events (EventData instances). Anything that implements IObservable<EventData> can be used as an input for EventFlow. IObservable<T> is a standard .NET interface in the System namespace.
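
For instance, a minimal, manually driven input could be sketched like this (a simplified, single-observer implementation; real inputs normally support multiple observers and thread safety, and this sketch assumes EventData exposes settable properties and a mutable Payload dictionary, as described above):

using System;
using Microsoft.Diagnostics.EventFlow;

public class ManualInput : IObservable<EventData>
{
    private IObserver<EventData> observer;

    public IDisposable Subscribe(IObserver<EventData> observer)
    {
        // Single observer for brevity; the pipeline subscribes during construction.
        this.observer = observer;
        return new NoopDisposable();
    }

    // Pushes one event into the pipeline.
    public void Report(string message)
    {
        var eventData = new EventData
        {
            ProviderName = nameof(ManualInput),
            Timestamp = DateTimeOffset.UtcNow,
            Level = LogLevel.Informational
        };
        eventData.Payload["Message"] = message;
        this.observer?.OnNext(eventData);
    }

    private class NoopDisposable : IDisposable
    {
        public void Dispose() { }
    }
}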

Filters

Filters have a dual role: they modify events (e.g. by decorating them with metadata) and instruct EventFlow to keep or discard the event. They are expected to implement the IFilter interface:

public enum FilterResult
{
    KeepEvent = 0,
    DiscardEvent = 1
}
public interface IFilter
{
    FilterResult Evaluate(EventData eventData);
}
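
For example, a trivial custom filter that discards verbose events and keeps everything else might look like this sketch:

using Microsoft.Diagnostics.EventFlow;

public class DropVerboseFilter : IFilter
{
    public FilterResult Evaluate(EventData eventData)
    {
        // Discard verbose-level events; keep all others.
        return eventData.Level == LogLevel.Verbose
            ? FilterResult.DiscardEvent
            : FilterResult.KeepEvent;
    }
}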

Outputs

An output's purpose is to send data to its final destination. Outputs are expected to implement the IOutput interface:

public interface IOutput
{
    Task SendEventsAsync(IReadOnlyCollection<EventData> events, long transmissionSequenceNumber, CancellationToken cancellationToken);
}

The output receives a batch of events, along with a transmission sequence number and a cancellation token. The transmission sequence number can be treated as an identifier for the SendEventsAsync() method invocation; it is guaranteed to be unique for each invocation, but there is no guarantee that there will be no "gaps", nor that it will be strictly increasing. The cancellation token should be used to cancel long-running operations if necessary; typically it is passed as a parameter to asynchronous I/O operations.

Back to Topics

Pipeline structure and threading considerations

Every EventFlow pipeline can be created imperatively, i.e. in the program code. The structure of every pipeline is reflected in the constructor of the DiagnosticPipeline class and is as follows:

  1. One or more inputs produce events.
  2. A set of common filters (also known as 'global' filters) does initial filtering.
  3. Then the events are sent to one or more outputs (by cloning them as necessary). Each output can have its own set of filters. A combination of an output and a set of filters associated with that output is called a 'sink'.

EventFlow employs the following policies with regards to concurrency:

  1. Inputs are free to call OnNext() on associated observers using any thread.
  2. EventFlow will ensure that only one filter at a time is evaluating any given EventData object. That said, the same filter can be invoked concurrently for different events.
  3. Outputs will be invoked concurrently for different batches of data.

Back to Topics

Using custom pipeline items imperatively

The simplest way to use custom EventFlow elements is to create a pipeline imperatively, in code. You just need to create read-only collections of the inputs, global filters, and sinks, and pass them to the DiagnosticPipeline constructor, as in the sketch below. Custom and standard elements can be combined freely; each of the standard pipeline elements has a public constructor and an associated public configuration class and can be created imperatively.
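
Here is a hedged sketch reusing the ManualInput and DropVerboseFilter examples from the previous sections (the exact DiagnosticPipeline and EventSink constructor shapes may differ slightly between versions):

using System;
using Microsoft.Diagnostics.EventFlow;
using Microsoft.Diagnostics.EventFlow.HealthReporters;
using Microsoft.Diagnostics.EventFlow.Outputs;

var healthReporter = new CsvHealthReporter(new CsvHealthReporterConfiguration());
var input = new ManualInput();

// A sink couples an output with its (optional) output-specific filters.
var sinks = new[] { new EventSink(new StdOutput(healthReporter), filters: null) };

using (var pipeline = new DiagnosticPipeline(
    healthReporter,
    inputs: new IObservable<EventData>[] { input },
    globalFilters: new IFilter[] { new DropVerboseFilter() },
    sinks: sinks))
{
    input.Report("Pipeline created imperatively!");
}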

Creating a pipeline with custom elements using configuration

To create an EventFlow pipeline with custom elements from configuration each custom element needs a factory. The factory is an object implementing IPipelineItemFactory<TPipelineItem> and is expected to have a parameter-less constructor.

The factory's CreateItem(IConfiguration configuration, IHealthReporter healthReporter) method will receive a configuration fragment that represents the pipeline item being created. The health reporter is available to report issues in case the configuration is corrupt or some other problem occurs during item creation. The health reporter can also be passed to, and used by, the created pipeline item.

For EventFlow to know about the item factory, it must appear in the 'extensions' section of the configuration file. Each extension record has 3 properties:

  1. "category" identifies extension type. Currently types recognized by DiagnosticPipelineFactory are inputFactory, filterFactory, outputFactory or healthReporter.
  2. "type" is the tag that identifies the extension in other parts of the configuration document(s). It is totally up to the user of an extension what she uses here.
  3. "qualifiedTypeName" is the string that allows DiagnosticPipelineFactory to instantiate the extension factory

Here is a very simple example that illustrates how to create a custom output and instantiate it from a configuration file.

namespace EventFlowCustomOutput
{
    class Program
    {
        static void Main(string[] args)
        {
            using (DiagnosticPipelineFactory.CreatePipeline("eventFlowConfig.json"))
            {
                Trace.TraceError("Hello, world!");
                Thread.Sleep(1000);
            }
        }
    }

    class CustomOutput : IOutput
    {
        public Task SendEventsAsync(IReadOnlyCollection<EventData> events, long transmissionSequenceNumber, CancellationToken cancellationToken)
        {
            foreach(var e in events)
            {
                Console.WriteLine($"CustomOutput: event says '{e.Payload["Message"]?.ToString() ?? "nothing"}'");
            }
            return Task.CompletedTask;
        }
    }

    class CustomOutputFactory : IPipelineItemFactory<CustomOutput>
    {
        public CustomOutput CreateItem(IConfiguration configuration, IHealthReporter healthReporter)
        {
            return new CustomOutput();
        }
    }
}

(content of eventFlowConfig.json)

{
  "inputs": [
    {
      "type": "Trace"
    }
  ],
  "filters": [
  ],
  "outputs": [
    {
      "type":  "CustomOutput"
    }
  ],
  "schemaVersion": "2016-08-11",

  "extensions": [
    {
      "category": "outputFactory",
      "type": "CustomOutput",
      "qualifiedTypeName": "EventFlowCustomOutput.CustomOutputFactory, EventFlowCustomOutput"
    }
  ]
}

Back to Topics

Troubleshooting

Events are getting dropped

Under high load, EventFlow might emit a health warning that says:

An event was dropped from the diagnostic pipeline because there was not enough capacity

The "capacity" referred to in the message is the size of EventFlow's event buffer. By default the buffer can hold 1000 events, although that can be changed via the pipeline settings. The "not enough capacity" warning indicates that the EventFlow output(s) cannot consume incoming events fast enough: the buffer is exhausted and events are starting to get dropped.

If the event inflow is bursty, increasing the buffer size might help; for example (the value is illustrative):
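
"settings": {
    "pipelineBufferSize": "5000"
}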

For the ApplicationInsights output, make sure you are using the ServerTelemetryChannel for sending data to Application Insights. This channel has the ability to buffer data on disk if connectivity to Application Insights is intermittent. It can be enabled via the output configuration, using a separate Application Insights configuration file (see Using Application Insights ServerTelemetryChannel above).

Ultimately it might be necessary to simply reduce the number of events produced by the service.

Back to Topics

Platform Support

EventFlow supports full .NET Framework (.NET 4.5 series and 4.6 series) and .NET Core, but not all inputs and outputs are supported on all platforms. The following table lists platform support for standard inputs and outputs.

Name .NET 4.5.1 .NET 4.6 .NET Core and .NET 5
Inputs
System.Diagnostics.Trace Yes Yes Yes
EventSource No Yes Yes
PerformanceCounter Yes Yes No
Serilog Yes Yes Yes
Microsoft.Extensions.Logging Yes Yes Yes
ETW (Event Tracing for Windows) Yes Yes No
Application Insights input Yes Yes Yes
Log4net Yes Yes Yes
NLog Yes Yes Yes
Outputs
StdOutput (console output) Yes Yes Yes
Application Insights Yes Yes Yes
Azure EventHub No Yes Yes
Elasticsearch Yes Yes Yes
Azure Monitor Logs Yes Yes Yes
HTTP (json via http) Yes Yes Yes

Back to Topics

Contributions

Refer to the contribution guide.

Code of Conduct

Refer to the Code of Conduct guide.

Back to Topics


diagnostics-eventflow's Issues

Roadmap

Hi there,

can you share what the roadmap is for this library? I've noticed not all input and outputs are supported on .Net Core. Will that change in the (near) future or is there a limitation that causes it to never reach the .Net Core platform?

What additional standard metadata items should be included in EventFlow?

The primary reason for attaching metadata to events is to enable outputs to produce various semantically-rich telemetry types. In general, metadata has no fixed schema, but for the sake of standardization and to ease output implementation EventFlow has the notion of "standard metadata". The standard metadata defines property names and types associated with well-known metadata tags. Currently two kinds of standard metadata are defined: "metric" and "request": https://github.com/Azure/diagnostics-eventflow#standard-metadata-types

We are planning to add some more standard metadata types and would appreciate feedback on what additional standard metadata types should be added to EventFlow, either to its core, or as a well-known metadata for one of its outputs?

For example, Microsoft Application Insights has notions of "dependency call", "event" (representing a significant user interaction or significant configuration change), "exception" (unexpected error) and "page view". The dependency call seems like a useful and generic concept that could be added to the core, but the others I am less sure about.

Duplicated properties

Some of the inputs allow additional properties to be passed with message and such properties are added to EventData.Payload.
Such inputs are

  • EventSource
  • Serilog

However, they may have the same key as predefined payload properties such as Message or EventId.
We should make sure inputs handle this case in the same way. See Karol's comment on #18

Agreed that all inputs should behave the same way. Feel free to open an issue about this and we can deal with it as a separate change.

I think whatever the original event carries should take priority over what the input adds. So if there is a "Message" property on the event, we should not overwrite it. We could either skip input additions completely or make the added property name unique, e.g. by appending random number at the end of the name. These are, essentially, variations of the (1) option you have. I have slight preference for the latter.

I do not think using a separate dictionary is a good idea. It will make the output code and, ultimately, consumption of the data much more complicated, for a marginal benefit.

Oh, and I would just report a warning, not an error, if a name collision happens, esp. if you decide to make the added property name unique and include the data, mitigating the conflict.

EventFlow vs. Semantic Logging

Hi, since Semantic Logging is also from Microsoft, I'm wondering what your guidance is on when we should use which.

It seems like the main difference is that Semantic Logging can be used out-of-process, however there hasn't been much activity lately and the new .NET 4.6 features (rich payload, ...) are not yet supported.

Warning re. duplicate "scope" keys in Service Fabric

We have set up a Service Fabric application hosted on Azure. In the application we have a ASP.Net Core stateless service configured with Event Flow logging with two input sources (EventSource and Microsoft.Extensions.Logging) and a single output destination (ApplicationInsights). The EventSource input only uses a single providerName (the one for the ServiceEventSource created by Service Fabric).
The diagnostics pipeline is hooked into the logger factory as per the documentation.

Generally it works great, but we are quite often (but not all the time) seeing a warning in the Service Fabric Explorer similar to this:
Unhealthy event: SourceId='...-DiagnosticsPipeline', Property='Connectivity', HealthState='Warning', ConsiderWarningAsError=false.
The property with the key 'Scope' already exist in the event payload. Value was added under key 'Scope_8'

Inspecting the Application Insights traces it seems to be related to the events logged using the ILogger by the ASP.Net Core parts of the code.

Inspecting the issues list it seems others have previously had similar issues, but that these issues should have been fixed in the latest packages. However we are using all of the latest packages (except for the ServiceFabric package, but as far as I know, nothing has changed related to this behavior):
<PackageReference Include="Microsoft.Diagnostics.EventFlow" Version="1.1.0" /> <PackageReference Include="Microsoft.Diagnostics.EventFlow.Inputs.EventSource" Version="1.1.1" /> <PackageReference Include="Microsoft.Diagnostics.EventFlow.Inputs.MicrosoftLogging" Version="1.1.1" /> <PackageReference Include="Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsights" Version="1.1.5" /> <PackageReference Include="Microsoft.Diagnostics.EventFlow.ServiceFabric" Version="1.1.3" />
Is this a known issue?

System.ArgumentException: The key already existed in the dictionary

Hello,

I am using the Application Insights output and the EventSource input with an application that I wrote for Service Fabric. My application itself is heathy, but the EventSource pipeline is throwing this exception (copied from the Service Fabric Explorer window):

Error event: SourceId='WeatherAPI-WeatherAPIService-DiagnosticsPipeline', Property='Connectivity'.
Diagnostics data upload has failed.
System.ArgumentException: The key already existed in the dictionary.
at System.Collections.Concurrent.ConcurrentDictionary2.System.Collections.Generic.IDictionary.Add(TKey key, TValue value) at Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsightsOutput.AddProperties(ISupportProperties item, EventData e, Boolean setTimestampFromEventData) at Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsightsOutput.SendEventsAsync(IReadOnlyCollection1 events, Int64 transmissionSequenceNumber, CancellationToken cancellationToken)

What is this error telling me? I'm not sure what dictionary it is keeping, its keys, or what that has to do with connectivity.

Event Reclassification Using Logger Input/Application Insights Output

EventFlow appears to reclassify events. Here is how I logged an error in my code:

   this.logger.LogError($"No response from WebCacheService, which indicates an exception was thrown attempting to open a connection to server hosting the url. Please check the WebCacheService logs for errors related to this url: '{id.ToString()}'");

The logger was injected from the Asp.Net Core startup.cs.

Attached is a screenshot of Application Insights. I see what was originally an error logged instead with Trace status with severity level of verbose.
eventflow

Changes to eventFlowConfig.json at runtime

Are there any plans to support changes to the eventFlowConfig.json file at runtime? I'm thinking a bit into the future of a project that we're starting and how we might want to change the Level of events that EventFlow handles, particularly to enable lower-level messages for troubleshooting. Since the idea is for EventFlow to forward everything to a central logging system, I'd rather not have Verbose messages consuming bandwidth all the time. If we could change a filter or even add a new output (say, local to a file) on demand for scenarios like this one, it would be nice.

I took a quick look at the code and it seems from the DiagnosticPipelineFactory that the configuration file is only loaded once when building the pipeline. Would adding reloadOnChange: true work? Is it even something you'd want to enable?

Application Insights, how can I prevent Exceptions from being recorded as Traces

Hi,

I'm using Event Flow with Service Fabric, SeriLog and Application Insights.

When an Exception is logged by my application, Serilog's Log.Error is called and I can see the Error in Application Insights Traces, however, I was hoping to see the Error recorded as an Exception within Application Insights.

The Custom Data section of the Trace shows that the Level was Error, so I think I've got everything wired up correctly. As is stands, the exceptions will be hard to spot along with all other legitimate Traces.

Is there a way to force the Application Insights output library to do this or is this a responsibility of the Serilog input library?

Thanks

tools\restore.cmd failed with the error

Have investigated yet. I do not have pre-VS2017 and FW 3.5 on this machine. So all sorts of unexpected failures were happening here...

I'll try to get to it tomorrow on not-that-fresh machine.

Build started 7/10/2017 11:40:41 PM.
Project "C:\src\github\diagnostics-eventflow\src\Microsoft.Diagnostics.EventFlow.FilterParserGenerator\Microsoft.Diagnostics.EventFlow.FilterParserGenera
tor.csproj" on node 1 (CompilePegGrammars target(s)).
C:\src\github\diagnostics-eventflow\src\Microsoft.Diagnostics.EventFlow.FilterParserGenerator\Microsoft.Diagnostics.EventFlow.FilterParserGenerator.cspro
j(1,1): error MSB4041: The default XML namespace of the project must be the MSBuild XML namespace. If the project is authored in the MSBuild 2003 format,
 please add xmlns="http://schemas.microsoft.com/developer/msbuild/2003" to the <Project> element. If the project has been authored in the old 1.0 or 1.2
format, please convert it to MSBuild 2003 format.
Done Building Project "C:\src\github\diagnostics-eventflow\src\Microsoft.Diagnostics.EventFlow.FilterParserGenerator\Microsoft.Diagnostics.EventFlow.Filt
erParserGenerator.csproj" (CompilePegGrammars target(s)) -- FAILED.


Build FAILED.

"C:\src\github\diagnostics-eventflow\src\Microsoft.Diagnostics.EventFlow.FilterParserGenerator\Microsoft.Diagnostics.EventFlow.FilterParserGenerator.cspr
oj" (CompilePegGrammars target) (1) ->
  C:\src\github\diagnostics-eventflow\src\Microsoft.Diagnostics.EventFlow.FilterParserGenerator\Microsoft.Diagnostics.EventFlow.FilterParserGenerator.csp
roj(1,1): error MSB4041: The default XML namespace of the project must be the MSBuild XML namespace. If the project is authored in the MSBuild 2003 forma
t, please add xmlns="http://schemas.microsoft.com/developer/msbuild/2003" to the <Project> element. If the project has been authored in the old 1.0 or 1.
2 format, please convert it to MSBuild 2003 format.

    0 Warning(s)
    1 Error(s)

Time Elapsed 00:00:00.11
C:\src\github\diagnostics-eventflow [master ≡ +0 ~3 -0 !]>

Template missing from Serilog input

The template in Serilog messages is a great way to consistently match the source of the message independent of the values in the message. Optionally including the template as part of the output would be very useful.

OMS Connection Issue Unclear

I am implementing EventFlow in a Service Fabric project that is using Serilog for the input and OMS for the output. The Service was failing to start due to a "Bad Request" but no further information was given about the Error Code as "Bad Request" can have 9 different Sub-Codes as listed on https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-data-collector-api. It would be nice if the error message also contained the response.Content data which specifies the Sub-Code so it would be easier to determine the issue.

Collecting performance counters from other process into counters of current process

Hi team!

We're trying to collect performance counters from other process and send it via EventFlow of the current process of a console application that will act as an agent (or even in a test).

To achieve this we replace counter and category names before creating diagnostic pipeline. You can find code sample and configuration in the attached archive.

We assume that our attempts fail because performance counters of current process are read only (as per ProcessInstanceNameCache.cs file

Could you please have a look and let me know if our approach is feasible or should we try implementing custom input for this?

EventFlowPOC.zip

P.S. Sorry if this is wrong place to post such questions but didn't find other places to ask.

Configuration of eventFlowConfig.json in Service Fabric at Application level

I would like to be able to override settings in eventFlowConfig.json when using Service Fabric at the Application level.

This is order that I can have specify a different Application Insights Key in an Application Parameter file i.e. Dev, Pre-Prod and Prod and then have the Service use that key, rather than hard-coding it at the service level.

There is a health event with same SourceId and Property with equal or higher sequence number. ServiceFabric + ApplicationInsights

Hi @karolz-ms,

I'm now seeing this error while upgrading a ServiceFabric application via VSTS release management.
This causes the release to fail, occasionally the release succeeds without any errors, or if the release is rolled back due to the error after 15 - 30 minutes the error disappears and you no longer see it in the ServiceFabric explorer.

Error event: SourceId='My-DiagnosticPipeline', Property='Connectivity'.
Diagnostics data upload has failed.
System.Fabric.FabricException: There is a health event with same SourceId and Property with equal or higher sequence number. Health report versus existing sequence numbers: 131341925004487551, 131341925006674981. ---> System.Runtime.InteropServices.COMException: Exception from HRESULT: 0x80071C11
at System.Fabric.Interop.NativeClient.IFabricHealthClient3.ReportHealth(IntPtr healthReport)
at System.Fabric.FabricClient.HealthClient.ReportHealthHelper(HealthReport healthReport)
at System.Fabric.Interop.Utility.<>c__DisplayClass13.b__12()
at System.Fabric.Interop.Utility.WrapNativeSyncInvoke[TResult](Func1 func, String functionTag, String >functionArgs) --- End of inner exception stack trace --- at System.Fabric.Interop.Utility.WrapNativeSyncInvoke[TResult](Func1 func, String functionTag, String >functionArgs)
at System.Fabric.Interop.Utility.RunInMTA(Action action)
at System.Fabric.FabricClient.HealthClient.ReportHealth(HealthReport healthReport)
at >Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsightsOutput.SendEventsAsync(IReadOnlyCollection`1 events, Int64 transmissionSequenceNumber, CancellationToken cancellationToken)

Any help again would be much appreciated.
Cheers
Keagan

filter documentation or filter may need to be modified

for the json filtering section it seems like it is looking for EventId instead of id as the documentation indicates. The include statement I used was "include": "ProviderName == MyProvider && EventId == 7", instead of "include": "ProviderName == MyProvider && id == 7",

Verify all output types produce UTC timestamps

Currently the EventHub output does not produce the 'time' field using a proper UTC string. It would be prudent to double-check that all the existing outputs use the same DateTime output format.

Application Insight Exception metadata type with EvenSource input

I'm using EventSource as the input and AI as an output. Looking at the documentation
here, in the section that relates to applying metadata to AI exception type,

The name of the event property that carries the (unexpected) exception object. Note that (for maximum information fidelity) the expected type of the event property is System.Exception. In other words, the actual exception is expected to be part of event data, and not just a stringified version of it.

However, EventSource does not allow you to supply a System.Exception type, nor even an object as a message argument.

Any ideas on how to get this to work with EventSource? Is this even supported for this output type? I've looked at the tests related to outputs and there isn't anything there for AI exception logging. I'm hopping that I won't have to go down the route of writing a custom output to achieve this.

{ "type": "metadata", "include": "ProviderName == PersonalLoanWebService-ApplicationEvents && EventName == ApplicationError", "metadata": "exception", "exceptionProperty": "exception" }

Using Microsoft.Extensions.Logging as input and logging an exception does not TrackException when using Application Insights as output

I have a test .Net Core application that is setup with the standard .net core logging as an input and application insights as an output.

My issue is that I see a trace message in application insights but not the exception as a tracked exception. Is this a bug?

The Code:

           using (var pipeline = DiagnosticPipelineFactory.CreatePipeline("eventFlowConfig.json"))
            {
                var factory = new LoggerFactory()
                    .AddEventFlow(pipeline);

                var logger = new Logger<Program>(factory);
                
                logger.LogError(new EventId(12345), new Exception("My exception"), "Some exception happened");
            }

The Config: "eventFlowConfig.json"

{
  "inputs": [
    {
      "type": "Trace",
      "traceLevel": "Verbose"
    },
    {
      "type": "Microsoft.Extensions.Logging"
    }

  ],
  "filters": [
  ],
  "outputs": [
    {
      "type": "ApplicationInsights",
      "instrumentationKey": "[MyKey]"
    }
  ],
  "schemaVersion": "2016-08-11",
 "extensions": []
}

Project Dependencies:

<ItemGroup>
    <PackageReference Include="Microsoft.Diagnostics.EventFlow" Version="1.1.0" />
    <PackageReference Include="Microsoft.Diagnostics.EventFlow.Inputs.ApplicationInsights" Version="1.1.0" />
    <PackageReference Include="Microsoft.Diagnostics.EventFlow.Inputs.EventSource" Version="1.1.1" />
    <PackageReference Include="Microsoft.Diagnostics.EventFlow.Inputs.MicrosoftLogging" Version="1.1.2" />
    <PackageReference Include="Microsoft.Diagnostics.EventFlow.Inputs.Trace" Version="1.1.0" />
    <PackageReference Include="Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsights" Version="1.2.0" />
    <PackageReference Include="Microsoft.Diagnostics.EventFlow.Outputs.StdOutput" Version="1.1.0" />
    <PackageReference Include="System.Diagnostics.Tracing" Version="4.3.0" />
  </ItemGroup>

Cannot add custom output via extensions section

The DiagnosticPipelineFactory ignore extensions for custom outputs specified in the extensions section (following instruction given is #41).

Way to reproduce, Change "StdOutput" to any other name (such as "MyOutput" not of the provided standard outputs in the CanOverrideDefaultPipelineItems unit test. The test will fail.

Debugging through the code, ProcessSection in DiagnosticsPipelineFactory only process predefined outputs keys. Since "MyOutput" is not processed, so it is ignored even though outputFactories already have the entry defined.

Service Fabric overrides don't allow null

In some sections eg the drop filter it seems that null is a valid value for the include property

if (string.IsNullOrWhiteSpace(value))
{
this.Evaluator = PositiveEvaluator.Instance.Value; // Empty condition == include everything
}

However in the service fabric settings overrides, null gets ignored when using replacement

string newValue = configurationRoot[valueReferencePath];
if (string.IsNullOrEmpty(newValue))
{
healthReporter.ReportWarning(
$"Configuration value reference '{kvp.Value}' was encountered but no corresponding configuration value was found using path '{valueReferencePath}'",
EventFlowContextIdentifiers.Configuration);
}

If i am translating the intention here then this flag check should instead trigger if the setting is not available from configurationRoot and not when the returned value is null or empty

ElasticSearchOutput failed to send health data

Trying to send logs from service fabric to elasticsearch (setup using the documentation from: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostic-how-to-use-elasticsearch). Followed through the instruction and try to send log from service fabric. Got the following exceptions:

...
ElasticSearchOutput: diagnostics data upload has failed.
System.ArgumentException: Value does not fall within the expected range.
at System.Fabric.Interop.NativeClient.IFabricHealthClient3.ReportHealth(IntPtr healthReport)
at System.Fabric.FabricClient.HealthClient.ReportHealthHelper(HealthReport healthReport)
at System.Fabric.Interop.Utility.<>c__DisplayClass13.b__12()
at System.Fabric.Interop.Utility.WrapNativeSyncInvoke[TResult](Func`1 func, String functionTag, String functionArgs)
at System.Fabric.Interop.Utility.RunInMTA(Action action)
at System.Fabric.FabricClient.HealthClient.ReportHealth(HealthReport healthReport)
at Microsoft.Diagnostics.EventFlow.ServiceFabric.ServiceFabricHealthReporter.ReportMessage(HealthState healthState, String description)
at Microsoft.Diagnostics.EventFlow.Outputs.ElasticSearchOutput.ReportEsRequestError(IResponse response, String request)
at Microsoft.Diagnostics.EventFlow.Outputs.ElasticSearchOutput.d__10.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Diagnostics.EventFlow.Outputs.ElasticSearchOutput.d__6.MoveNext()

The same set up works without an issues in an local cluster. ElasticSearch version is 5.1.1. Eventflow is the using the latest version 1.1.0.

eventFlowConfig.json not added when ServiceFabric extension is added from nuget

When I add the service fabric the json config file is not added. Per the current implementation in preview 3, the json file should probably be added and placed in the Config folder so that once packaged the config file will be found in the config package as the Event Flow pipeline for Service Fabric is expecting.

Need an operator for the "is keyword set" test

In the current implementation none of the operators supported by expression allow for easy testing of individual keywords. We need an operator for that (to test that all bits in the keyword mask are set)

Something like this:

  "filters" : [
    {
      "type": "drop",
      "include": "Keywords &== 0x4"
    }
  ]

We probably also need an operator that checks whether none of the bits in the keyword mask are set (perhaps !== makes sense as a symbol for this operation)

Shutdown of DataPipeline may lead to data loss

Repro:

bool cont = true;

            Console.WriteLine("String Logger!");

            while (cont)
            {
                Console.WriteLine();
                Console.WriteLine("Enter any text to log. Entering 'X' will exit.");
                string input = Console.ReadLine().Trim();

                if (input.Equals("X", StringComparison.InvariantCultureIgnoreCase))
                {
                    cont = false;
                }
                else
                {
                    using (var pipeline = DiagnosticPipelineFactory.CreatePipeline("eventFlowConfig.json"))
                    {
                        
                        System.Diagnostics.Trace.TraceWarning($"TraceWarning: {input}");
                        Console.WriteLine($"{input} logged.");
                    }
                }
            }

Expected -- if you run the program above with Trace input and StdOutput you should see the logs

Actual -- because the pipeline is destroyed immediately after the trace is submitted, no logs show up in the output

The ability to add Nested Scopes when using MicrosoftLogging input

Would it be possible to add the ability to add all nested scopes in the payload.

using (logger.BeginScope("Test:{Test}", "Test")) { logger.LogInformation("Test Scoped Info"); using (logger.BeginScope("Test1:{Test1}", "Test1")) { logger.LogInformation("Test Scoped 2 Info"); } logger.LogInformation("Test Scoped Info"); }

Here it would be nice for the payload to have a property of "Test" with value "Test" for log entries before and after the nested scope. Then the log entry in the nested scope would have a payload with properties, "Test" = "Test" and "Test1" = "Test1".

Microsoft.Diagnostics.EventFlow.Inputs.Serilog Nuget package Dependency

Microsoft.Diagnostics.EventFlow.Inputs.Serilog 1.2.2 is dependent on Microsoft.Diagnostics.EventFlow.Core (>= 1.1.6) which currently can not be met as 1.1.6 is not released yet.

Microsoft.Diagnostics.EventFlow.Inputs.Serilog 1.2.1 was dependent on Microsoft.Diagnostics.EventFlow.Core (>= 1.1.0) maybe it is a possibility to step back to this Version dependency?

ServiceFabric + ApplicationInsights - System.NullReferenceException

Hi Team,

I'm currently experiencing an issue where a Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsightsOutput.AddProperties is throwing a NullReferenceException in a ServiceFabric cluster. I cannot replicate the issue running the applications in my local cluster.

Here is the error I'm seeing in the Service Fabric Explorer:

Error event: SourceId='My-DiagnosticPipeline', Property='Connectivity'.
Diagnostics data upload has failed.
System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsightsOutput.AddProperties(ISupportProperties item, EventData e)
at Microsoft.Diagnostics.EventFlow.Outputs.ApplicationInsightsOutput.SendEventsAsync(IReadOnlyCollection`1 events, Int64 transmissionSequenceNumber, CancellationToken cancellationToken)

Any help would be much appreciated.
Cheers
Keagan

Creating Custom Output Extensions

How does one create a custom output. I've take the code form the standard output sink but when hooking things up it does not work (as in, no breakpoint are hit, no error messages appear.

I think I need more info about what to put into the extensions configuration of the config file.

ServiceFabric + ETW : behavior issue

Hello,

Based on your documentation, the code with service fabric is :

        using (var diagnosticsPipeline = ServiceFabricDiagnosticPipelineFactory.CreatePipeline("MyApplication-MyService-DiagnosticsPipeline"))
        {
            ServiceRuntime.RegisterServiceAsync("MyServiceType", ctx => new MyService(ctx)).Wait();

            ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id, typeof(MyService).Name);

            Thread.Sleep(Timeout.Infinite);
        }

I use the ETW input but each time the service starts, a new TraceEventSession is created by this code in StandardTraceEventSession.cs :

this.inner = new TraceEventSession($"EventFlow-{nameof(EtwInput)}-{Guid.NewGuid().ToString()}", TraceEventSessionOptions.Create);

But when i request a stop to the service, Service Fabric engine should force the program exit : the DiagnosticPipeline Dispose seems to be never called. So the TraceEventSession stays running. After several restarts I have a system resource error because of the limit of maximum running TraceEventSession allowed by Windows.
I did a workaround with an event in my Stateless service which is triggered on OnClose and OnAbort methods to be able to call the DiagnosticPipeline Dispose method to free the running TraceEventSession.

        using (var diagnosticsPipeline = ServiceFabricDiagnosticPipelineFactory.CreatePipeline("MyApplication-MyService-DiagnosticsPipeline"))
        {
            ServiceRuntime.RegisterServiceAsync("MyServiceType", ctx => {
                    var service = new MyService(ctx);
                    service.OnStopping += diagnosticsPipeline.Dispose();
                    return service;
            }).Wait();

            ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id, typeof(MyService).Name);

            Thread.Sleep(Timeout.Infinite);
            // Code here never called
        }

Is there a better way to Dispose the TraceEventSession ? Maybe it is a question for ServiceFabric team.

Seeing TPL events as traces in AppInsights

When deploying an app with Service Fabric and pushing its diagnostics to Application Insights through EventFlow, TPL event traces show up.

[Screenshot: TPL event traces appearing in Application Insights]

We were able to use a filter to stop these from showing up, but in general they should probably be filtered out by default.
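For reference, a drop filter along these lines can suppress them. This is a sketch only; it assumes the TPL events arrive with ProviderName set to System.Threading.Tasks.TplEventSource, which is the usual name of the TPL EventSource:

    "filters": [
      {
        // Drop everything coming from the TPL EventSource (provider name is an assumption).
        "type": "drop",
        "include": "ProviderName == System.Threading.Tasks.TplEventSource"
      }
    ]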

Microsoft.Extensions.Logging input factory namespace issue

When trying to use the Microsoft.Extensions.Logging input factory, I found that I had to install
"Microsoft.Diagnostics.EventFlow.Inputs.MicrosoftLogging" rather than "Microsoft.Diagnostics.EventFlow.Inputs.Microsoft.Extensions.Logging" as specified in the readme. Then, at run time, I found that the DiagnosticPipelineFactory fell over when trying to reflect on "Microsoft.Diagnostics.EventFlow.Inputs.Microsoft.Extensions.Logging":

DiagnosticPipelineFactory: item of type 'Microsoft.Extensions.Logging' could not be created
System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Diagnostics.EventFlow.Inputs.Microsoft.Extensions.Logging, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.
File name: 'Microsoft.Diagnostics.EventFlow.Inputs.Microsoft.Extensions.Logging, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName, ObjectHandleOnStack type)
at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName)
at System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
at System.Type.GetType(String typeName, Boolean throwOnError)
at Microsoft.Diagnostics.EventFlow.DiagnosticPipelineFactory.ProcessSection[PipelineItemType,PipelineItemChildType](IConfigurationSection configurationSection, IHealthReporter healthReporter, IDictionary`2 itemFactories, IDictionary`2 childFactories, String childSectionName)

I have made a change and am happy to raise a pull request, but wanted to check which namespace you were looking to use.
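Until the lookup is fixed, one possible workaround (a sketch only; the factory type name LoggerInputFactory is an assumption and must be verified against the actual Microsoft.Diagnostics.EventFlow.Inputs.MicrosoftLogging assembly) is to bypass the built-in name resolution by declaring the input explicitly in the extensions section:

    {
      "extensions": [
        {
          "category": "inputFactory",
          "type": "Microsoft.Extensions.Logging",
          // NOTE: the factory type name below is assumed; check the installed assembly.
          "qualifiedTypeName": "Microsoft.Diagnostics.EventFlow.Inputs.LoggerInputFactory, Microsoft.Diagnostics.EventFlow.Inputs.MicrosoftLogging"
        }
      ]
    }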

Output provider for Splunk

We have a need for an output provider that writes events to a Splunk indexer, and I'm quite surprised that Elasticsearch support is provided while Splunk is not.

Given how (relatively) trivial it is to implement an output provider, we can certainly build our own. But given Splunk's market penetration, it is odd that one doesn't exist already.

Is there one planned on the roadmap, and if not, are you folks open to accepting one here, or should we open our own GitHub repo for this?
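For anyone exploring this in the meantime, below is a minimal sketch of such an output against Splunk's HTTP Event Collector (HEC). The endpoint and token are placeholders; batching, retries, and health reporting are omitted; and this is not an official implementation:

    // Sketch: posts each batch of EventFlow events to Splunk's HTTP Event Collector.
    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Diagnostics.EventFlow;
    using Newtonsoft.Json;

    public class SplunkOutput : IOutput
    {
        private readonly HttpClient httpClient;
        private readonly Uri hecEndpoint;

        public SplunkOutput(string serverUrl, string hecToken)
        {
            // HEC listens on /services/collector and authenticates via a token header.
            this.hecEndpoint = new Uri(new Uri(serverUrl), "/services/collector");
            this.httpClient = new HttpClient();
            this.httpClient.DefaultRequestHeaders.Add("Authorization", $"Splunk {hecToken}");
        }

        public async Task SendEventsAsync(
            IReadOnlyCollection<EventData> events,
            long transmissionSequenceNumber,
            CancellationToken cancellationToken)
        {
            // HEC accepts multiple JSON event objects concatenated in a single request body.
            var payload = new StringBuilder();
            foreach (EventData e in events)
            {
                payload.Append(JsonConvert.SerializeObject(new { @event = e }));
            }

            HttpResponseMessage response = await this.httpClient.PostAsync(
                this.hecEndpoint,
                new StringContent(payload.ToString(), Encoding.UTF8, "application/json"),
                cancellationToken);
            response.EnsureSuccessStatusCode();
        }
    }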
