The Apache website's introduction to Appenders (in English)
The following content is reproduced from: http://logging.apache.org/log4j/2.x/manual/appenders.html
Appenders
Appenders are responsible for delivering LogEvents to their destination. Every Appender must implement the Appender interface. Most Appenders will extend AbstractAppender, which adds Lifecycle and Filterable support. Lifecycle allows components to finish initialization after configuration has completed and to perform cleanup during shutdown. Filterable allows the component to have Filters attached to it which are evaluated during event processing.
Appenders usually are only responsible for writing the event data to the target destination. In most cases they delegate responsibility for formatting the event to a layout. Some appenders wrap other appenders so that they can modify the LogEvent, handle a failure in an Appender, route the event to a subordinate Appender based on advanced Filter criteria or provide similar functionality that does not directly format the event for viewing.
Appenders always have a name so that they can be referenced from Loggers.
In the tables below, the "Type" column corresponds to the Java type expected. For non-JDK classes, these should usually be in Log4j Core unless otherwise noted.
AsyncAppender
The AsyncAppender accepts references to other Appenders and causes LogEvents to be written to them on a separate Thread. Note that exceptions while writing to those Appenders will be hidden from the application. The AsyncAppender should be configured after the appenders it references to allow it to shut down properly.
By default, AsyncAppender uses java.util.concurrent.ArrayBlockingQueue, which does not require any external libraries. Note that multi-threaded applications should exercise care when using this appender: the blocking queue is susceptible to lock contention, and our tests showed that performance may become worse as more threads log concurrently. Consider using lock-free Async Loggers for optimal performance.
| Parameter Name | Type | Description |
|---|---|---|
| AppenderRef | String | The name of an Appender to invoke asynchronously. Multiple AppenderRef elements can be configured. |
| blocking | boolean | If true, the appender will wait until there are free slots in the queue. If false, the event will be written to the error appender if the queue is full. The default is true. |
| shutdownTimeout | integer | How many milliseconds the Appender should wait to flush outstanding log events in the queue on shutdown. The default is zero which means to wait forever. |
| bufferSize | integer | Specifies the maximum number of events that can be queued. The default is 1024. Note that when using a disruptor-style BlockingQueue, this buffer size must be a power of 2. When the application logs faster than the underlying appender can keep up with for long enough to fill the queue, the behaviour is determined by the AsyncQueueFullPolicy. |
| errorRef | String | The name of the Appender to invoke if none of the appenders can be called, either due to errors in the appenders or because the queue is full. If not specified then errors will be ignored. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| name | String | The name of the Appender. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| includeLocation | boolean | Extracting location is an expensive operation (it can make logging 5 - 20 times slower). To improve performance, location is not included by default when adding a log event to the queue. You can change this by setting includeLocation="true". |
| BlockingQueueFactory | BlockingQueueFactory | This element overrides what type of BlockingQueue to use. See the documentation below for more details. |
There are also a few system properties that can be used to maintain application throughput even when the underlying appender cannot keep up with the logging rate and the queue is filling up. See the details for system properties log4j2.AsyncQueueFullPolicy and log4j2.DiscardThreshold.
A typical AsyncAppender configuration might look like:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <File name="MyFile" fileName="logs/app.log">
      <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
      </PatternLayout>
    </File>
    <Async name="Async">
      <AppenderRef ref="MyFile"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="Async"/>
    </Root>
  </Loggers>
</Configuration>
```
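Building on the parameters above, a non-blocking variant that routes events to an error appender when the queue is full might look like the following sketch (the appender names and sizes are illustrative):

```xml
<Appenders>
  <Console name="Errors" target="SYSTEM_ERR">
    <PatternLayout pattern="%m%n"/>
  </Console>
  <File name="MyFile" fileName="logs/app.log">
    <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  </File>
  <!-- blocking="false": when the queue is full, events go to the errorRef
       appender instead of making the application thread wait -->
  <Async name="Async" blocking="false" errorRef="Errors" bufferSize="2048">
    <AppenderRef ref="MyFile"/>
  </Async>
</Appenders>
```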
Starting in Log4j 2.7, a custom implementation of BlockingQueue or TransferQueue can be specified using a BlockingQueueFactory plugin. To override the default BlockingQueueFactory, specify the plugin inside an <Async/> element like so:
```xml
<Configuration name="LinkedTransferQueueExample">
  <Appenders>
    <List name="List"/>
    <Async name="Async" bufferSize="262144">
      <AppenderRef ref="List"/>
      <LinkedTransferQueue/>
    </Async>
  </Appenders>
  <Loggers>
    <Root>
      <AppenderRef ref="Async"/>
    </Root>
  </Loggers>
</Configuration>
```
Log4j ships with the following implementations:
| Plugin Name | Description |
|---|---|
| ArrayBlockingQueue | This is the default implementation that uses ArrayBlockingQueue. |
| DisruptorBlockingQueue | This uses the Conversant Disruptor implementation of BlockingQueue. This plugin takes a single optional attribute, spinPolicy, which corresponds to the Conversant Disruptor SpinPolicy. |
| JCToolsBlockingQueue | This uses JCTools, specifically the MPSC bounded lock-free queue. |
| LinkedTransferQueue | This uses LinkedTransferQueue, introduced in Java 7. Note that this queue ignores the bufferSize configuration attribute from AsyncAppender, as LinkedTransferQueue does not support a maximum capacity. |
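As an illustration of the table above, overriding the default queue with the Conversant Disruptor implementation might look like this sketch (the spinPolicy value shown is an assumption, not taken from the text above; check the Conversant Disruptor documentation for the supported values):

```xml
<Async name="Async" bufferSize="4096"> <!-- must be a power of 2 for disruptor-style queues -->
  <AppenderRef ref="List"/>
  <!-- spinPolicy value "BLOCKING" is assumed -->
  <DisruptorBlockingQueue spinPolicy="BLOCKING"/>
</Async>
```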
CassandraAppender
The CassandraAppender writes its output to an Apache Cassandra database. A keyspace and table must be configured ahead of time, and the columns of that table are mapped in a configuration file. Each column can specify either a StringLayout (e.g., a PatternLayout) along with an optional conversion type, or only a conversion type for org.apache.logging.log4j.spi.ThreadContextMap or org.apache.logging.log4j.spi.ThreadContextStack to store the MDC or NDC in a map or list column respectively. A conversion type compatible with java.util.Date will use the log event timestamp converted to that type (e.g., use java.util.Date to fill a timestamp column type in Cassandra).
| Parameter Name | Type | Description |
|---|---|---|
| batched | boolean | Whether or not to use batch statements to write log messages to Cassandra. By default, this is false. |
| batchType | BatchStatement.Type | The batch type to use when using batched writes. By default, this is LOGGED. |
| bufferSize | int | The number of log messages to buffer or batch before writing. By default, no buffering is done. |
| clusterName | String | The name of the Cassandra cluster to connect to. |
| columns | ColumnMapping[] | A list of column mapping configurations. Each column must specify a column name. Each column can have a conversion type specified by its fully qualified class name. By default, the conversion type is String. If the configured type is assignment-compatible with ReadOnlyStringMap / ThreadContextMap or ThreadContextStack, then that column will be populated with the MDC or NDC respectively. If the configured type is assignment-compatible with java.util.Date, then the log timestamp will be converted to that configured date type. If a literal attribute is given, then its value will be used as is in the INSERT query without any escaping. Otherwise, the layout or pattern specified will be converted into the configured type and stored in that column. |
| contactPoints | SocketAddress[] | A list of hosts and ports of Cassandra nodes to connect to. These must be valid hostnames or IP addresses. By default, if a port is not specified for a host or it is set to 0, then the default Cassandra port of 9042 will be used. By default, localhost:9042 will be used. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| keyspace | String | The name of the keyspace containing the table that log messages will be written to. |
| name | String | The name of the Appender. |
| password | String | The password to use (along with the username) to connect to Cassandra. |
| table | String | The name of the table to write log messages to. |
| useClockForTimestampGenerator | boolean | Whether or not to use the configured org.apache.logging.log4j.core.util.Clock as aTimestampGenerator. By default, this is false. |
| username | String | The username to use to connect to Cassandra. By default, no username or password is used. |
| useTls | boolean | Whether or not to use TLS/SSL to connect to Cassandra. This is false by default. |
Here is an example CassandraAppender configuration:
```xml
<Configuration name="CassandraAppenderTest">
  <Appenders>
    <Cassandra name="Cassandra" clusterName="Test Cluster" keyspace="test"
               table="logs" bufferSize="10" batched="true">
      <SocketAddress host="localhost" port="9042"/>
      <ColumnMapping name="id" pattern="%uuid{TIME}" type="java.util.UUID"/>
      <ColumnMapping name="timeid" literal="now()"/>
      <ColumnMapping name="message" pattern="%message"/>
      <ColumnMapping name="level" pattern="%level"/>
      <ColumnMapping name="marker" pattern="%marker"/>
      <ColumnMapping name="logger" pattern="%logger"/>
      <ColumnMapping name="timestamp" type="java.util.Date"/>
      <ColumnMapping name="mdc" type="org.apache.logging.log4j.spi.ThreadContextMap"/>
      <ColumnMapping name="ndc" type="org.apache.logging.log4j.spi.ThreadContextStack"/>
    </Cassandra>
  </Appenders>
  <Loggers>
    <Logger name="org.apache.logging.log4j.nosql.appender.cassandra" level="DEBUG">
      <AppenderRef ref="Cassandra"/>
    </Logger>
    <Root level="ERROR"/>
  </Loggers>
</Configuration>
```
This example configuration uses the following table schema:
```sql
CREATE TABLE logs (
    id timeuuid PRIMARY KEY,
    timeid timeuuid,
    message text,
    level text,
    marker text,
    logger text,
    timestamp timestamp,
    mdc map<text,text>,
    ndc list<text>
);
```
ConsoleAppender
As one might expect, the ConsoleAppender writes its output to either System.out or System.err with System.out being the default target. A Layout must be provided to format the LogEvent.
| Parameter Name | Type | Description |
|---|---|---|
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| layout | Layout | The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout of "%m%n" will be used. |
| follow | boolean | Identifies whether the appender honors reassignments of System.out or System.err via System.setOut or System.setErr made after configuration. Note that the follow attribute cannot be used with Jansi on Windows. Cannot be used with direct. |
| direct | boolean | Write directly to java.io.FileDescriptor and bypass java.lang.System.out/.err. Can give up to 10x performance boost when the output is redirected to file or other process. Cannot be used with Jansi on Windows. Cannot be used with follow. Output will not respect java.lang.System.setOut()/.setErr() and may get intertwined with other output to java.lang.System.out/.err in a multi-threaded application. New since 2.6.2. Be aware that this is a new addition, and it has only been tested with Oracle JVM on Linux and Windows so far. |
| name | String | The name of the Appender. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| target | String | Either "SYSTEM_OUT" or "SYSTEM_ERR". The default is "SYSTEM_OUT". |
A typical Console configuration might look like:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>
```
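A sketch of the direct attribute described in the table above (per that table, direct cannot be combined with follow, and cannot be used with Jansi on Windows):

```xml
<!-- Writes straight to the file descriptor, bypassing System.out;
     output will not respect later System.setOut() calls -->
<Console name="STDOUT" target="SYSTEM_OUT" direct="true">
  <PatternLayout pattern="%m%n"/>
</Console>
```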
FailoverAppender
The FailoverAppender wraps a set of appenders. If the primary Appender fails, the secondary appenders will be tried in order until one succeeds or there are no more secondaries to try.
| Parameter Name | Type | Description |
|---|---|---|
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| primary | String | The name of the primary Appender to use. |
| failovers | String[] | The names of the secondary Appenders to use. |
| name | String | The name of the Appender. |
| retryIntervalSeconds | integer | The number of seconds that should pass before retrying the primary Appender. The default is 60. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. |
| target | String | Either "SYSTEM_OUT" or "SYSTEM_ERR". The default is "SYSTEM_ERR". |
A Failover configuration might look like:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <RollingFile name="RollingFile" fileName="logs/app.log"
                 filePattern="logs/app-%d{MM-dd-yyyy}.log.gz"
                 ignoreExceptions="false">
      <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
      </PatternLayout>
      <TimeBasedTriggeringPolicy />
    </RollingFile>
    <Console name="STDOUT" target="SYSTEM_OUT" ignoreExceptions="false">
      <PatternLayout pattern="%m%n"/>
    </Console>
    <Failover name="Failover" primary="RollingFile">
      <Failovers>
        <AppenderRef ref="Console"/>
      </Failovers>
    </Failover>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="Failover"/>
    </Root>
  </Loggers>
</Configuration>
```
FileAppender
The FileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter. The FileAppender uses a FileManager (which extends OutputStreamManager) to actually perform the file I/O. While FileAppenders from different Configurations cannot be shared, the FileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.
| Parameter Name | Type | Description |
|---|---|---|
| append | boolean | When true - the default, records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. |
| bufferedIO | boolean | When true - the default, records will be written to a buffer and the data will be written to disk when the buffer is full or, if immediateFlush is set, when the record is written. File locking cannot be used with bufferedIO. Performance tests have shown that using buffered I/O significantly improves performance, even if immediateFlush is enabled. |
| bufferSize | int | When bufferedIO is true, this is the buffer size, the default is 8192 bytes. |
| createOnDemand | boolean | When true, the appender creates the file on demand: only when a log event passes all filters and is routed to this appender. Defaults to false. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| fileName | String | The name of the file to write to. If the file, or any of its parent directories, do not exist, they will be created. |
| immediateFlush | boolean | When set to true - the default, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. Flushing after every write is only useful when using this appender with synchronous loggers. Asynchronous loggers and appenders will automatically flush at the end of a batch of events, even if immediateFlush is set to false. This also guarantees the data is written to disk but is more efficient. |
| layout | Layout | The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout of "%m%n" will be used. |
| locking | boolean | When set to true, I/O operations will occur only while the file lock is held allowing FileAppenders in multiple JVMs and potentially multiple hosts to write to the same file simultaneously. This will significantly impact performance so should be used carefully. Furthermore, on many systems the file lock is "advisory" meaning that other applications can perform operations on the file without acquiring a lock. The default value is false. |
| name | String | The name of the Appender. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| filePermissions | String | File attribute permissions in POSIX format to apply whenever the file is created. The underlying file system must support the POSIX file attribute view. Examples: rw------- or rw-rw-rw-. |
| fileOwner | String | File owner to set whenever the file is created. Changing a file's owner may be restricted for security reasons, in which case an IOException ("Operation not permitted") is thrown. If _POSIX_CHOWN_RESTRICTED is in effect for the path, only a process with an effective user ID equal to the file's user ID, or with appropriate privileges, may change the ownership of the file. The underlying file system must support the file owner attribute view. |
| fileGroup | String | File group to set whenever the file is created. The underlying file system must support the POSIX file attribute view. |
Here is a sample File configuration:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <File name="MyFile" fileName="logs/app.log">
      <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
      </PatternLayout>
    </File>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="MyFile"/>
    </Root>
  </Loggers>
</Configuration>
```
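On a file system that supports the POSIX attribute views described above, the permission and ownership attributes could be combined like this (the owner name is illustrative, and changing ownership may require privileges):

```xml
<File name="MyFile" fileName="logs/app.log"
      filePermissions="rw-r-----"
      fileOwner="appuser">
  <!-- "appuser" is a hypothetical owner; filePermissions uses POSIX notation -->
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
</File>
```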
FlumeAppender
This is an optional component supplied in a separate jar.
Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. The FlumeAppender takes LogEvents and sends them to a Flume agent as serialized Avro events for consumption.
The Flume Appender supports three modes of operation.
- It can act as a remote Flume client which sends Flume events via Avro to a Flume Agent configured with an Avro Source.
- It can act as an embedded Flume Agent where Flume events pass directly into Flume for processing.
- It can persist events to a local BerkeleyDB data store and then asynchronously send the events to Flume, similar to the embedded Flume Agent but without most of the Flume dependencies.
Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel and then control will be immediately returned to the application. All interaction with remote agents will occur asynchronously. Setting the "type" attribute to "Embedded" will force the use of the embedded agent. In addition, configuring agent properties in the appender configuration will also cause the embedded agent to be used.
| Parameter Name | Type | Description |
|---|---|---|
| agents | Agent[] | An array of Agents to which the logging events should be sent. If more than one agent is specified, the first Agent will be the primary and subsequent Agents will be used, in the order specified, as secondaries should the primary Agent fail. Each Agent definition supplies the Agent's host and port. The specification of agents and properties is mutually exclusive. If both are configured, an error will result. |
| agentRetries | integer | The number of times the agent should be retried before failing to a secondary. This parameter is ignored when type="persistent" is specified (agents are tried once before failing to the next). |
| batchSize | integer | Specifies the number of events that should be sent as a batch. The default is 1. This parameter only applies to the Flume Appender. |
| compress | boolean | When set to true the message body will be compressed using gzip. |
| connectTimeoutMillis | integer | The number of milliseconds Flume will wait before timing out the connection. |
| dataDir | String | Directory where the Flume write ahead log should be written. Valid only when embedded is set to true and Agent elements are used instead of Property elements. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| eventPrefix | String | The character string to prepend to each event attribute in order to distinguish it from MDC attributes. The default is an empty string. |
| flumeEventFactory | FlumeEventFactory | Factory that generates the Flume events from Log4j events. The default factory is the FlumeAvroAppender itself. |
| layout | Layout | The Layout to use to format the LogEvent. If no layout is specified RFC5424Layout will be used. |
| lockTimeoutRetries | integer | The number of times to retry if a LockConflictException occurs while writing to Berkeley DB. The default is 5. |
| maxDelayMillis | integer | The maximum number of milliseconds to wait for batchSize events before publishing the batch. |
| mdcExcludes | String | A comma separated list of mdc keys that should be excluded from the FlumeEvent. This is mutually exclusive with the mdcIncludes attribute. |
| mdcIncludes | String | A comma separated list of mdc keys that should be included in the FlumeEvent. Any keys in the MDC not found in the list will be excluded. This option is mutually exclusive with the mdcExcludes attribute. |
| mdcRequired | String | A comma separated list of mdc keys that must be present in the MDC. If a key is not present a LoggingException will be thrown. |
| mdcPrefix | String | A string that should be prepended to each MDC key in order to distinguish it from event attributes. The default string is "mdc:". |
| name | String | The name of the Appender. |
| properties | Property[] | One or more Property elements that are used to configure the Flume Agent. The properties must be configured without the agent name (the appender name is used for this) and no sources can be configured. Interceptors can be specified for the source using "sources.log4j-source.interceptors". All other Flume configuration properties are allowed. Specifying both Agent and Property elements will result in an error. When the appender is configured in Persistent mode, a different set of properties is valid (for example, keyProvider). |
| requestTimeoutMillis | integer | The number of milliseconds Flume will wait before timing out the request. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| type | enumeration | One of "Avro", "Embedded", or "Persistent" to indicate which variation of the Appender is desired. |
A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, and formats the body using the RFC5424Layout:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <Flume name="eventLogger" compress="true">
      <Agent host="192.168.10.101" port="8800"/>
      <Agent host="192.168.10.102" port="8800"/>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
    </Flume>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="eventLogger"/>
    </Root>
  </Loggers>
</Configuration>
```
A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, formats the body using the RFC5424Layout, and persists encrypted events to disk:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <Flume name="eventLogger" compress="true" type="persistent" dataDir="./logData">
      <Agent host="192.168.10.101" port="8800"/>
      <Agent host="192.168.10.102" port="8800"/>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
      <Property name="keyProvider">MySecretProvider</Property>
    </Flume>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="eventLogger"/>
    </Root>
  </Loggers>
</Configuration>
```
A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume Agent.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <Flume name="eventLogger" compress="true" type="Embedded">
      <Agent host="192.168.10.101" port="8800"/>
      <Agent host="192.168.10.102" port="8800"/>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
    </Flume>
    <Console name="STDOUT">
      <PatternLayout pattern="%d [%p] %c %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Logger name="EventLogger" level="info">
      <AppenderRef ref="eventLogger"/>
    </Logger>
    <Root level="warn">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>
```
A sample FlumeAppender configuration that is configured with a primary and a secondary agent using Flume configuration properties, compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume Agent.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error" name="MyApp" packages="">
  <Appenders>
    <Flume name="eventLogger" compress="true" type="Embedded">
      <Property name="channels">file</Property>
      <Property name="channels.file.type">file</Property>
      <Property name="channels.file.checkpointDir">target/file-channel/checkpoint</Property>
      <Property name="channels.file.dataDirs">target/file-channel/data</Property>
      <Property name="sinks">agent1 agent2</Property>
      <Property name="sinks.agent1.channel">file</Property>
      <Property name="sinks.agent1.type">avro</Property>
      <Property name="sinks.agent1.hostname">192.168.10.101</Property>
      <Property name="sinks.agent1.port">8800</Property>
      <Property name="sinks.agent1.batch-size">100</Property>
      <Property name="sinks.agent2.channel">file</Property>
      <Property name="sinks.agent2.type">avro</Property>
      <Property name="sinks.agent2.hostname">192.168.10.102</Property>
      <Property name="sinks.agent2.port">8800</Property>
      <Property name="sinks.agent2.batch-size">100</Property>
      <Property name="sinkgroups">group1</Property>
      <Property name="sinkgroups.group1.sinks">agent1 agent2</Property>
      <Property name="sinkgroups.group1.processor.type">failover</Property>
      <Property name="sinkgroups.group1.processor.priority.agent1">10</Property>
      <Property name="sinkgroups.group1.processor.priority.agent2">5</Property>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
    </Flume>
    <Console name="STDOUT">
      <PatternLayout pattern="%d [%p] %c %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Logger name="EventLogger" level="info">
      <AppenderRef ref="eventLogger"/>
    </Logger>
    <Root level="warn">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>
```
JDBCAppender
The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured to obtain JDBC connections using a JNDI DataSource or a custom factory method. Whichever approach you take, it must be backed by a connection pool. Otherwise, logging performance will suffer greatly. If batch statements are supported by the configured JDBC driver and a bufferSize is configured to be a positive number, then log events will be batched. Note that as of Log4j 2.8, there are two ways to configure the mapping of log events to columns: the original ColumnConfig style that only allows strings and timestamps, and the new ColumnMapping plugin that uses Log4j's built-in type conversion to allow for more data types (this is the same plugin as in the Cassandra Appender).
| Parameter Name | Type | Description |
|---|---|---|
| name | String | Required. The name of the Appender. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| bufferSize | int | If an integer greater than 0, this causes the appender to buffer log events and flush whenever the buffer reaches this size. |
| connectionSource | ConnectionSource | Required. The connections source from which database connections should be retrieved. |
| tableName | String | Required. The name of the database table to insert log events into. |
| columnConfigs | ColumnConfig[] | Required (and/or columnMappings). Information about the columns that log event data should be inserted into and how to insert that data. This is represented with multiple <Column> elements. |
| columnMappings | ColumnMapping[] | Required (and/or columnConfigs). A list of column mapping configurations. Each column must specify a column name. Each column can have a conversion type specified by its fully qualified class name. By default, the conversion type is String. If the configured type is assignment-compatible with ReadOnlyStringMap / ThreadContextMap or ThreadContextStack, then that column will be populated with the MDC or NDC respectively (how a Map or List value is inserted is database-specific). If the configured type is assignment-compatible with java.util.Date, then the log timestamp will be converted to that configured date type. If the configured type is assignment-compatible with java.sql.Clob or java.sql.NClob, then the formatted event will be set as a Clob or NClob respectively (similar to the traditional ColumnConfig plugin). If a literal attribute is given, then its value will be used as is in the INSERT query without any escaping. Otherwise, the layout or pattern specified will be converted into the configured type and stored in that column. |
When configuring the JDBCAppender, you must specify a ConnectionSource implementation from which the Appender gets JDBC connections. You must use exactly one of the <DataSource> or <ConnectionFactory> nested elements.
If a <DataSource> element is used, it takes the following parameter:
| Parameter Name | Type | Description |
|---|---|---|
| jndiName | String | Required. The full, prefixed JNDI name that the javax.sql.DataSource is bound to, such as java:/comp/env/jdbc/LoggingDatabase. The DataSource must be backed by a connection pool; otherwise, logging will be very slow. |
The <ConnectionFactory> element takes the following parameters:
| Parameter Name | Type | Description |
|---|---|---|
| class | Class | Required. The fully qualified name of a class containing a static factory method for obtaining JDBC connections. |
| method | Method | Required. The name of a static factory method for obtaining JDBC connections. This method must have no parameters and its return type must be either java.sql.Connection or DataSource. If the method returns Connections, it must obtain them from a connection pool (and they will be returned to the pool when Log4j is done with them); otherwise, logging will be very slow. If the method returns a DataSource, the DataSource will only be retrieved once, and it must be backed by a connection pool for the same reasons. |
When configuring the JDBCAppender, use the nested <Column> elements to specify which columns in the table should be written to and how to write to them. The JDBCAppender uses this information to formulate a PreparedStatement to insert records without SQL injection vulnerability.
| Parameter Name | Type | Description |
|---|---|---|
| name | String | Required. The name of the database column. |
| pattern | String | Use this attribute to insert a value or values from the log event in this column using a PatternLayout pattern. Simply specify any legal pattern in this attribute. Either this attribute, literal, or isEventTimestamp="true" must be specified, but not more than one of these. |
| literal | String | Use this attribute to insert a literal value in this column. The value will be included directly in the insert SQL, without any quoting (which means that if you want this to be a string, your value should contain single quotes around it like this: literal="'Literal String'"). This is especially useful for databases that don't support identity columns. For example, if you are using Oracle you could specify literal="NAME_OF_YOUR_SEQUENCE.NEXTVAL" to insert a unique ID in an ID column. Either this attribute, pattern, or isEventTimestamp="true" must be specified, but not more than one of these. |
| isEventTimestamp | boolean | Use this attribute to insert the event timestamp in this column, which should be a SQL datetime. The value will be inserted as a java.sql.Types.TIMESTAMP. Either this attribute (set to true), pattern, or literal must be specified, but not more than one of these. |
| isUnicode | boolean | This attribute is ignored unless pattern is specified. If true or omitted (default), the value will be inserted as unicode (setNString or setNClob). Otherwise, the value will be inserted non-unicode (setString or setClob). |
| isClob | boolean | This attribute is ignored unless pattern is specified. Use this attribute to indicate that the column stores Character Large Objects (CLOBs). If true, the value will be inserted as a CLOB (setClob or setNClob). If false or omitted (default), the value will be inserted as a VARCHAR or NVARCHAR (setString or setNString). |
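To illustrate how these column definitions turn into a parameterized statement, here is a minimal sketch of how literal columns are inlined into the SQL while pattern and timestamp columns become `?` placeholders. The class and helper names are hypothetical; this is not Log4j's actual code.

```java
import java.util.List;

public class InsertSqlSketch {
    // Build a parameterized INSERT like the one the JDBCAppender prepares.
    // Each column is a {name, literal} pair; a null literal means the value
    // comes from the event (pattern or timestamp) and becomes a ? placeholder.
    static String buildInsert(String table, List<String[]> columns) {
        StringBuilder names = new StringBuilder();
        StringBuilder values = new StringBuilder();
        for (String[] col : columns) {
            if (names.length() > 0) { names.append(", "); values.append(", "); }
            names.append(col[0]);
            values.append(col[1] != null ? col[1] : "?"); // literals are inlined verbatim
        }
        return "INSERT INTO " + table + " (" + names + ") VALUES (" + values + ")";
    }

    public static void main(String[] args) {
        String sql = buildInsert("dbo.application_log", List.of(
                new String[] { "eventDate", null },
                new String[] { "level", null },
                new String[] { "message", null }));
        // INSERT INTO dbo.application_log (eventDate, level, message) VALUES (?, ?, ?)
        System.out.println(sql);
    }
}
```

Because event data only ever travels through placeholders, the resulting PreparedStatement is not vulnerable to SQL injection.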
Here are a couple sample configurations for the JDBCAppender, as well as a sample factory implementation that uses Commons Pooling and Commons DBCP to pool database connections:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
  <Appenders>
    <JDBC name="databaseAppender" tableName="dbo.application_log">
      <DataSource jndiName="java:/comp/env/jdbc/LoggingDataSource" />
      <Column name="eventDate" isEventTimestamp="true" />
      <Column name="level" pattern="%level" />
      <Column name="logger" pattern="%logger" />
      <Column name="message" pattern="%message" />
      <Column name="exception" pattern="%ex{full}" />
    </JDBC>
  </Appenders>
  <Loggers>
    <Root level="warn">
      <AppenderRef ref="databaseAppender"/>
    </Root>
  </Loggers>
</Configuration>
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
  <Appenders>
    <JDBC name="databaseAppender" tableName="LOGGING.APPLICATION_LOG">
      <ConnectionFactory class="net.example.db.ConnectionFactory" method="getDatabaseConnection" />
      <Column name="EVENT_ID" literal="LOGGING.APPLICATION_LOG_SEQUENCE.NEXTVAL" />
      <Column name="EVENT_DATE" isEventTimestamp="true" />
      <Column name="LEVEL" pattern="%level" />
      <Column name="LOGGER" pattern="%logger" />
      <Column name="MESSAGE" pattern="%message" />
      <Column name="THROWABLE" pattern="%ex{full}" />
    </JDBC>
  </Appenders>
  <Loggers>
    <Root level="warn">
      <AppenderRef ref="databaseAppender"/>
    </Root>
  </Loggers>
</Configuration>
```
```java
package net.example.db;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

import javax.sql.DataSource;

import org.apache.commons.dbcp.DriverManagerConnectionFactory;
import org.apache.commons.dbcp.PoolableConnection;
import org.apache.commons.dbcp.PoolableConnectionFactory;
import org.apache.commons.dbcp.PoolingDataSource;
import org.apache.commons.pool.impl.GenericObjectPool;

public class ConnectionFactory {

    private static interface Singleton {
        final ConnectionFactory INSTANCE = new ConnectionFactory();
    }

    private final DataSource dataSource;

    private ConnectionFactory() {
        Properties properties = new Properties();
        properties.setProperty("user", "logging");
        properties.setProperty("password", "abc123"); // or get properties from some configuration file

        GenericObjectPool<PoolableConnection> pool = new GenericObjectPool<PoolableConnection>();
        DriverManagerConnectionFactory connectionFactory = new DriverManagerConnectionFactory(
                "jdbc:mysql://example.org:3306/exampleDb", properties
        );
        new PoolableConnectionFactory(
                connectionFactory, pool, null, "SELECT 1", 3, false, false, Connection.TRANSACTION_READ_COMMITTED
        );

        this.dataSource = new PoolingDataSource(pool);
    }

    public static Connection getDatabaseConnection() throws SQLException {
        return Singleton.INSTANCE.dataSource.getConnection();
    }
}
```
JMS Appender
The JMS Appender sends the formatted log event to a JMS Destination.
Note that in Log4j 2.0, this appender was split into a JMSQueueAppender and a JMSTopicAppender. Starting in Log4j 2.1, these appenders were combined into the JMS Appender which makes no distinction between queues and topics. However, configurations written for 2.0 which use the <JMSQueue/> or <JMSTopic/> elements will continue to work with the new <JMS/> configuration element.
| Parameter Name | Type | Default | Description |
|---|---|---|---|
| factoryBindingName | String | Required | The name to locate in the Context that provides the ConnectionFactory. This can be any subinterface of ConnectionFactory as well. |
| factoryName | String | Required | The fully qualified class name that should be used to define the Initial Context Factory as defined in INITIAL_CONTEXT_FACTORY. If a factoryName is specified without a providerURL, a warning message will be logged, as this is likely to cause problems. |
| filter | Filter | null | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| layout | Layout | Required | The Layout to use to format the LogEvent. Required since 2.9; in previous versions, SerializedLayout was the default. |
| name | String | Required | The name of the Appender. |
| password | String | null | The password to use to create the JMS connection. |
| providerURL | String | Required | The URL of the provider to use as defined by PROVIDER_URL. |
| destinationBindingName | String | Required | The name to use to locate the Destination. This can be a Queue or Topic, and as such, the attribute names queueBindingName and topicBindingName are aliases to maintain compatibility with the Log4j 2.0 JMS appenders. |
| securityPrincipalName | String | null | The name of the identity of the Principal as specified by SECURITY_PRINCIPAL. If a securityPrincipalName is specified without securityCredentials a warning message will be logged as this is likely to cause problems. |
| securityCredentials | String | null | The security credentials for the principal as specified by SECURITY_CREDENTIALS. |
| ignoreExceptions | boolean | true | When true, exceptions caught while appending events are internally logged and then ignored. When false, exceptions are propagated to the caller. You must set this to false when wrapping this Appender in a FailoverAppender. |
| immediateFail | boolean | false | When set to true, log events will not wait to try to reconnect and will fail immediately if the JMS resources are not available. New in 2.9. |
| reconnectIntervalMillis | long | 5000 | If set to a value greater than 0, after an error, the JMSManager will attempt to reconnect to the broker after waiting the specified number of milliseconds. If the reconnect fails then an exception will be thrown (which can be caught by the application if ignoreExceptions is set to false). New in 2.9. |
| urlPkgPrefixes | String | null | A colon-separated list of package prefixes for the class name of the factory class that will create a URL context factory as defined by URL_PKG_PREFIXES. |
| userName | String | null | The user id used to create the JMS connection. |
Here is a sample JMS Appender configuration:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
  <Appenders>
    <JMS name="jmsQueue" destinationBindingName="MyQueue"
         factoryBindingName="MyQueueConnectionFactory">
      <JsonLayout properties="true"/>
    </JMS>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="jmsQueue"/>
    </Root>
  </Loggers>
</Configuration>
```
To map your Log4j MapMessages to JMS javax.jms.MapMessages, set the layout of the appender to MessageLayout with <MessageLayout /> (since 2.9):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
  <Appenders>
    <JMS name="jmsQueue" destinationBindingName="MyQueue"
         factoryBindingName="MyQueueConnectionFactory">
      <MessageLayout />
    </JMS>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="jmsQueue"/>
    </Root>
  </Loggers>
</Configuration>
```
JPAAppender
The JPAAppender writes log events to a relational database table using the Java Persistence API 2.1. It requires the API and a provider implementation to be on the classpath. It also requires a decorated entity configured to persist to the desired table. The entity should either extend org.apache.logging.log4j.core.appender.db.jpa.BasicLogEventEntity (if you mostly want to use the default mappings) and provide at least an @Id property, or org.apache.logging.log4j.core.appender.db.jpa.AbstractLogEventWrapperEntity (if you want to significantly customize the mappings). See the Javadoc for these two classes for more information. You can also consult the source code of these two classes as an example of how to implement the entity.
| Parameter Name | Type | Description |
|---|---|---|
| name | String | Required. The name of the Appender. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| bufferSize | int | If an integer greater than 0, this causes the appender to buffer log events and flush whenever the buffer reaches this size. |
| entityClassName | String | Required. The fully qualified name of the concrete LogEventWrapperEntity implementation that has JPA annotations mapping it to a database table. |
| persistenceUnitName | String | Required. The name of the JPA persistence unit that should be used for persisting log events. |
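The bufferSize behaviour described in the table (hold events, then flush the whole batch at once) can be sketched as follows. The class and method names here are hypothetical illustrations, not Log4j's implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BufferingSketch {
    // Buffer events and flush them in one batch once bufferSize is reached,
    // as the bufferSize parameter describes (a sketch, not Log4j's code).
    private final int bufferSize;
    private final List<String> buffer = new ArrayList<>();
    private final Consumer<List<String>> writer;

    BufferingSketch(int bufferSize, Consumer<List<String>> writer) {
        this.bufferSize = bufferSize;
        this.writer = writer;
    }

    void append(String event) {
        buffer.add(event);
        if (buffer.size() >= bufferSize) {
            writer.accept(new ArrayList<>(buffer)); // one batched write to the database
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        List<List<String>> batches = new ArrayList<>();
        BufferingSketch appender = new BufferingSketch(3, batches::add);
        for (int i = 1; i <= 7; i++) appender.append("event-" + i);
        System.out.println(batches.size() + " batches flushed"); // prints "2 batches flushed"
    }
}
```

Batching like this trades durability (unflushed events are lost on a crash) for far fewer round trips to the database.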
Here is a sample configuration for the JPAAppender. The first XML sample is the Log4j configuration file, the second is the persistence.xml file. EclipseLink is assumed here, but any JPA 2.1 or higher provider will do. You should always create a separate persistence unit for logging, for two reasons. First, <shared-cache-mode> must be set to "NONE," which is usually not desired in normal JPA usage. Also, for performance reasons the logging entity should be isolated in its own persistence unit away from all other entities and you should use a non-JTA data source. Note that your persistence unit must also contain <class> elements for all of the org.apache.logging.log4j.core.appender.db.jpa.converter converter classes.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
  <Appenders>
    <JPA name="databaseAppender" persistenceUnitName="loggingPersistenceUnit"
         entityClassName="com.example.logging.JpaLogEntity" />
  </Appenders>
  <Loggers>
    <Root level="warn">
      <AppenderRef ref="databaseAppender"/>
    </Root>
  </Loggers>
</Configuration>
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
             version="2.1">
  <persistence-unit name="loggingPersistenceUnit" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapAttributeConverter</class>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapJsonAttributeConverter</class>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackAttributeConverter</class>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackJsonAttributeConverter</class>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.MarkerAttributeConverter</class>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.MessageAttributeConverter</class>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.StackTraceElementAttributeConverter</class>
    <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ThrowableAttributeConverter</class>
    <class>com.example.logging.JpaLogEntity</class>
    <non-jta-data-source>jdbc/LoggingDataSource</non-jta-data-source>
    <shared-cache-mode>NONE</shared-cache-mode>
  </persistence-unit>
</persistence>
```
```java
package com.example.logging;

...

@Entity
@Table(name = "application_log", schema = "dbo")
public class JpaLogEntity extends BasicLogEventEntity {
    private static final long serialVersionUID = 1L;
    private long id = 0L;

    public JpaLogEntity() {
        super(null);
    }

    public JpaLogEntity(LogEvent wrappedEvent) {
        super(wrappedEvent);
    }

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    public long getId() {
        return this.id;
    }

    public void setId(long id) {
        this.id = id;
    }

    // If you want to override the mapping of any properties mapped in BasicLogEventEntity,
    // just override the getters and re-specify the annotations.
}
```
```java
package com.example.logging;

...

@Entity
@Table(name = "application_log", schema = "dbo")
public class JpaLogEntity extends AbstractLogEventWrapperEntity {
    private static final long serialVersionUID = 1L;
    private long id = 0L;

    public JpaLogEntity() {
        super(null);
    }

    public JpaLogEntity(LogEvent wrappedEvent) {
        super(wrappedEvent);
    }

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "logEventId")
    public long getId() {
        return this.id;
    }

    public void setId(long id) {
        this.id = id;
    }

    @Override
    @Enumerated(EnumType.STRING)
    @Column(name = "level")
    public Level getLevel() {
        return this.getWrappedEvent().getLevel();
    }

    @Override
    @Column(name = "logger")
    public String getLoggerName() {
        return this.getWrappedEvent().getLoggerName();
    }

    @Override
    @Column(name = "message")
    @Convert(converter = MyMessageConverter.class)
    public Message getMessage() {
        return this.getWrappedEvent().getMessage();
    }

    ...
}
```
HttpAppender
The HttpAppender sends log events over HTTP. A Layout must be provided to format the LogEvent.
The appender sets the Content-Type header according to the layout; additional headers can be specified with embedded Property elements. It waits for a response from the server and raises an error if no 2xx response is received.
The appender is implemented with HttpURLConnection.
| Parameter Name | Type | Description |
|---|---|---|
| name | String | The name of the Appender. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| layout | Layout | The Layout to use to format the LogEvent. |
| Ssl | SslConfiguration | Contains the configuration for the KeyStore and TrustStore for https. Optional, uses Java runtime defaults if not specified. |
| verifyHostname | boolean | Whether to verify the server hostname against the certificate. Only valid for https. Optional, defaults to true. |
| url | string | The URL to use. The URL scheme must be "http" or "https". |
| method | string | The HTTP method to use. Optional, default is "POST". |
| connectTimeoutMillis | integer | The connect timeout in milliseconds. Optional, default is 0 (infinite timeout). |
| readTimeoutMillis | integer | The socket read timeout in milliseconds. Optional, default is 0 (infinite timeout). |
| headers | Property[] | Additional HTTP headers to use. The values support lookups. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
Here is a sample HttpAppender configuration snippet:
```xml
<?xml version="1.0" encoding="UTF-8"?>
...
<Appenders>
  <Http name="Http" url="https://localhost:9200/test/log4j/">
    <Property name="X-Java-Runtime" value="$${java:runtime}" />
    <JsonLayout properties="true"/>
    <SSL>
      <KeyStore location="log4j2-keystore.jks" password="changeme"/>
      <TrustStore location="truststore.jks" password="changeme"/>
    </SSL>
  </Http>
</Appenders>
```
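The send-and-wait cycle described above (POST the formatted event, then fail unless a 2xx response arrives) can be sketched with HttpURLConnection against a tiny in-process test server. All class, method, and URL names here are hypothetical, not Log4j's implementation.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HttpAppendSketch {
    // Send one formatted event over HTTP and throw unless a 2xx response arrives,
    // mirroring the HttpAppender's behaviour (a sketch, not Log4j's actual code).
    static void sendEvent(String url, String body) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");                               // the appender's default method
        conn.setRequestProperty("Content-Type", "application/json"); // taken from the layout
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode();                         // waits for the server's response
        if (status / 100 != 2) {
            throw new IOException("HTTP appender got status " + status);
        }
    }

    public static void main(String[] args) throws Exception {
        // Tiny in-process server standing in for the real log collector.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/log", exchange -> {
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
        sendEvent("http://localhost:" + server.getAddress().getPort() + "/log",
                "{\"message\":\"hello\"}");
        System.out.println("event accepted");
        server.stop(0);
    }
}
```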
KafkaAppender
The KafkaAppender logs events to an Apache Kafka topic. Each log event is sent as a Kafka record with no key.
| Parameter Name | Type | Description |
|---|---|---|
| topic | String | The Kafka topic to use. Required. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| layout | Layout | The Layout to use to format the LogEvent. Required since 2.9; there is no default. In previous versions, <PatternLayout pattern="%m"/> was the default. |
| name | String | The name of the Appender. Required. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| syncSend | boolean | The default is true, causing sends to block until the record has been acknowledged by the Kafka server. When set to false, sends return immediately, allowing for lower latency and significantly higher throughput. New since 2.8. Be aware that this is a new addition, and it has not been extensively tested. Any failure sending to Kafka will be reported as an error to the StatusLogger and the log event will be dropped (the ignoreExceptions parameter will not be effective). Log events may arrive out of order at the Kafka server. |
| properties | Property[] | Properties to pass through to the Kafka producer. You must set the bootstrap.servers property; there are sensible default values for the others. Do not set the value.serializer property. |
Here is a sample KafkaAppender configuration snippet:
```xml
<?xml version="1.0" encoding="UTF-8"?>
...
<Appenders>
  <Kafka name="Kafka" topic="log-test">
    <PatternLayout pattern="%date %message"/>
    <Property name="bootstrap.servers">localhost:9092</Property>
  </Kafka>
</Appenders>
```
This appender is synchronous by default and will block until the record has been acknowledged by the Kafka server; the timeout for this can be set with the timeout.ms property (defaults to 30 seconds). Wrap with an Async appender and/or set syncSend to false to log asynchronously.
This appender requires the Kafka client library. Note that you need to use a version of the Kafka client library matching the Kafka server used.
Note: Make sure not to let org.apache.kafka log to a Kafka appender at DEBUG level, since that will cause recursive logging:
```xml
<?xml version="1.0" encoding="UTF-8"?>
...
<Loggers>
  <Root level="DEBUG">
    <AppenderRef ref="Kafka"/>
  </Root>
  <Logger name="org.apache.kafka" level="INFO" /> <!-- avoid recursive logging -->
</Loggers>
```
MemoryMappedFileAppender
New since 2.1. Be aware that this is a new addition, and although it has been tested on several platforms, it does not have as much track record as the other file appenders.
The MemoryMappedFileAppender maps a part of the specified file into memory and writes log events to this memory, relying on the operating system's virtual memory manager to synchronize the changes to the storage device. The main benefit of using memory mapped files is I/O performance. Instead of making system calls to write to disk, this appender can simply change the program's local memory, which is orders of magnitude faster. Also, in most operating systems the memory region mapped actually is the kernel's page cache (file cache), meaning that no copies need to be created in user space. (TODO: performance tests that compare performance of this appender to RandomAccessFileAppender and FileAppender.)
There is some overhead with mapping a file region into memory, especially very large regions (half a gigabyte or more). The default region size is 32 MB, which should strike a reasonable balance between the frequency and the duration of remap operations. (TODO: performance test remapping various sizes.)
Similar to the FileAppender and the RandomAccessFileAppender, MemoryMappedFileAppender uses a MemoryMappedFileManager to actually perform the file I/O. While MemoryMappedFileAppender from different Configurations cannot be shared, the MemoryMappedFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.
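The core write path described above, mapping a file region and mutating it directly in memory, can be sketched with java.nio. This is a simplified illustration, not Log4j's MemoryMappedFileManager.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mmap-log", ".log");
        byte[] event = "2024-01-01 INFO hello\n".getBytes(StandardCharsets.UTF_8);
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map a region of the file; writes to the buffer mutate the page
            // cache directly, with no write() system call per event.
            MappedByteBuffer region = channel.map(FileChannel.MapMode.READ_WRITE, 0, event.length);
            region.put(event);
            region.force(); // what immediateFlush="true" does after every event
        }
        System.out.print(new String(Files.readAllBytes(file), StandardCharsets.UTF_8));
        Files.delete(file);
    }
}
```

The operating system writes the dirty pages back on its own schedule, which is why an explicit force() is only needed when durability on OS crash matters.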
| Parameter Name | Type | Description |
|---|---|---|
| append | boolean | When true (the default), records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. |
| fileName | String | The name of the file to write to. If the file, or any of its parent directories, do not exist, they will be created. |
| filters | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| immediateFlush | boolean | When set to true, each write will be followed by a call to MappedByteBuffer.force(). This will guarantee the data is written to the storage device. The default for this parameter is false. This means that the data is written to the storage device even if the Java process crashes, but there may be data loss if the operating system crashes. Note that manually forcing a sync on every log event loses most of the performance benefits of using a memory mapped file. Flushing after every write is only useful when using this appender with synchronous loggers. Asynchronous loggers and appenders will automatically flush at the end of a batch of events, even if immediateFlush is set to false. This also guarantees the data is written to disk but is more efficient. |
| regionLength | int | The length of the mapped region, defaults to 32 MB (32 * 1024 * 1024 bytes). This parameter must be a value between 256 and 1,073,741,824 (1 GB or 2^30); values outside this range will be adjusted to the closest valid value. Log4j will round the specified value up to the nearest power of two. |
| layout | Layout | The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout of "%m%n" will be used. |
| name | String | The name of the Appender. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
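The rounding rule for regionLength, rounding the configured value up to the nearest power of two, can be sketched as follows. The class and method names are hypothetical, not Log4j's actual code.

```java
public class RegionLength {
    // Round up to the next power of two, as described for the
    // MemoryMappedFileAppender's regionLength parameter (a sketch).
    static int ceilingNextPowerOfTwo(int x) {
        return 1 << (32 - Integer.numberOfLeadingZeros(x - 1));
    }

    public static void main(String[] args) {
        System.out.println(ceilingNextPowerOfTwo(1000));             // prints 1024
        System.out.println(ceilingNextPowerOfTwo(32 * 1024 * 1024)); // already a power of two: 33554432
    }
}
```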
Here is a sample MemoryMappedFile configuration:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp" packages="">
  <Appenders>
    <MemoryMappedFile name="MyFile" fileName="logs/app.log">
      <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
      </PatternLayout>
    </MemoryMappedFile>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="MyFile"/>
    </Root>
  </Loggers>
</Configuration>
```
NoSQLAppender
The NoSQLAppender writes log events to a NoSQL database using an internal lightweight provider interface. Provider implementations currently exist for MongoDB and Apache CouchDB, and writing a custom provider is quite simple.
| Parameter Name | Type | Description |
|---|---|---|
| name | String | Required. The name of the Appender. |
| ignoreExceptions | boolean | The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false exceptions will be propagated to the caller, instead. You must set this to false when wrapping this Appender in a FailoverAppender. |
| filter | Filter | A Filter to determine if the event should be handled by this Appender. More than one Filter may be used by using a CompositeFilter. |
| bufferSize | int | If an integer greater than 0, this causes the appender to buffer log events and flush whenever the buffer reaches this size. |
| NoSqlProvider | NoSQLProvider<C extends NoSQLConnection<W, T extends NoSQLObject<W>>> | Required. The NoSQL provider that provides connections to the chosen NoSQL database. |
You specify which NoSQL provider to use by specifying the appropriate configuration element within the <NoSql> element. The types currently supported are <MongoDb> and <CouchDb>. To create your own custom provider, read the JavaDoc for the NoSQLProvider, NoSQLConnection, and NoSQLObject classes and the documentation about creating Log4j plugins. We recommend you review the source code for the MongoDB and CouchDB providers as a guide for creating your own provider.
The <MongoDb> provider element takes the following parameters:
| Parameter Name | Type | Description |
|---|---|---|
| collectionName | String | Required. The name of the MongoDB collection to insert the events into. |
| writeConcernConstant | Field | By default, the MongoDB provider inserts records with the instructions com.mongodb.WriteConcern.ACKNOWLEDGED. Use this optional attribute to specify the name of a constant other than ACKNOWLEDGED. |
| writeConcernConstantClass | Class | If you specify writeConcernConstant, you can use this attribute to specify a class other than com.mongodb.WriteConcern to find the constant on (to create your own custom instructions). |
| factoryClassName | Class | To provide a connection to the MongoDB database, you can use this attribute and factoryMethodName to specify a class and static method to get the connection from. The method must return a com.mongodb.DB or a com.mongodb.MongoClient. If the DB is not authenticated, you must also specify a username and password. If you use the factory method for providing a connection, you must not specify the databaseName, server, or port attributes. |
| factoryMethodName | Method | See the documentation for attribute factoryClassName. |
| databaseName | String | If you do not specify a factoryClassName and factoryMethodName for providing a MongoDB connection, you must specify a MongoDB database name using this attribute. You must also specify a username and password. You can optionally also specify a server (defaults to localhost), and a port (defaults to the default MongoDB port). |
| server | String | See the documentation for attribute databaseName. |
| port | int | See the documentation for attribute databaseName. |
| username | String | See the documentation for attributes databaseName and factoryClassName. |
| password | String | See the documentation for attributes databaseName and factoryClassName. |
| capped | boolean | Enable support for capped collections. |
| collectionSize | int | Specify the size in bytes of the capped collection to use if enabled. The minimum size is 4096 bytes, and larger sizes will be increased to the nearest integer multiple of 256. See the capped collection documentation linked above for more information. |
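The size adjustment described for collectionSize (a 4096-byte floor, then rounding to an integer multiple of 256, here assumed to round up) can be sketched as follows; the names are hypothetical, not Log4j's actual code.

```java
public class CappedSize {
    // Apply the documented collectionSize rule: a 4096-byte minimum, then
    // round up to the next multiple of 256 (rounding direction is an assumption).
    static int effectiveSize(int requested) {
        int size = Math.max(requested, 4096);
        return ((size + 255) / 256) * 256;
    }

    public static void main(String[] args) {
        System.out.println(effectiveSize(5000)); // prints 5120
        System.out.println(effectiveSize(100));  // prints 4096
    }
}
```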
The <CouchDb> provider element takes the following parameters:
| Parameter Name | Type | Description |
|---|---|---|
| factoryClassName | Class | To provide a connection to the CouchDB database, you can use this attribute and factoryMethodName to specify a class and static method to get the connection from. The method must return a org.lightcouch.CouchDbClient or aorg.lightcouch.CouchDbProperties. If you use the factory method for providing a connection, you must not specify the databaseName, protocol, server, port, username, or password attributes. |
| factoryMethodName | Method | See the documentation for attribute factoryClassName. |
| databaseName | String | If you do not specify a factoryClassName and factoryMethodName for providing a CouchDB connection, you must specify a CouchDB database name using this attribute. You must also specify a username and password. You can optionally also specify a protocol (defaults to http), server (defaults to localhost), and a port (defaults to 80 for http and 443 for https). |
| protocol | String | Must either be "http" or "https." See the documentation for attribute databaseName. |
| server | String | See the documentation for attribute databaseName. |
| port | int | See the documentation for attribute databaseName. |
| username | String | See the documentation for attribute databaseName. |
| password | String | See the documentation for attribute databaseName. |
Here are a few sample configurations for the NoSQLAppender:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
  <Appenders>
    <NoSql name="databaseAppender">
      <MongoDb databaseName="applicationDb" collectionName="applicationLog" server="mongo.example.org"
               username="loggingUser" password="abc123" />
    </NoSql>
  </Appenders>
  <Loggers>
    <Root level="warn">
      <AppenderRef ref="databaseAppender"/>
    </Root>
  </Loggers>
</Configuration>
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
  <Appenders>
    <NoSql name="databaseAppender">
      <MongoDb collectionName="applicationLog" factoryClassName="org.example.db.ConnectionFactory"
               factoryMethodName="getNewMongoClient" />
    </NoSql>
  </Appenders>
  <Loggers>
    <Root level="warn">
      <AppenderRef ref="databaseAppender"/>
    </Root>
  </Loggers>
</Configuration>
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
  <Appenders>
    <NoSql name="databaseAppender">
      <CouchDb databaseName="applicationDb" protocol="https" server="couch.example.org"
               username="loggingUser" password="abc123" />
    </NoSql>
  </Appenders>
  <Loggers>
    <Root level="warn">
      <AppenderRef ref="databaseAppender"/>
    </Root>
  </Loggers>
</Configuration>
```
The following example demonstrates how log events are persisted in NoSQL databases if represented in a JSON format:
{
"level": "WARN",
"loggerName": "com.example.application.MyClass",
"message": "Something happened that you might want to know about.",
"source": {
"className": "com.example.application.MyClass",
"methodName": "exampleMethod",
"fileName": "MyClass.java",
"lineNumber": 81
},
"marker": {
"name": "SomeMarker",
"parent" {
"name": "SomeParentMarker"
}
},
"threadName": "Thread-1",
"millis": 1368844166761,
"date": "2013-05-18T02:29:26.761Z",
"thrown": {
"type": "java.sql.SQLException",
"message": "Could not insert record. Connection lost.",
"stackTrace": [
{ "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1049 },
{ "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 },
{ "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 },
{ "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 }
],
"cause": {
"type": "java.io.IOException",
"message": "Connection lost.",
"stackTrace": [
{