Spring Integration’s File support extends the Spring Integration Core with a dedicated vocabulary to deal with reading, writing, and transforming files. It provides a namespace that enables elements defining Channel Adapters dedicated to files and support for Transformers that can read file contents into strings or byte arrays.
This section explains the workings of FileReadingMessageSource and FileWritingMessageHandler and how to configure them as beans. The support for dealing with files through file-specific implementations of Transformer is also discussed. Finally, the file-specific namespace is explained.
A FileReadingMessageSource can be used to consume files from the filesystem. It is an implementation of MessageSource that creates messages from a file system directory.
<bean id="pollableFileSource"
      class="org.springframework.integration.file.FileReadingMessageSource"
      p:directory="${input.directory}"/>
To prevent creating messages for certain files, you may supply a FileListFilter. By default, the following two filters are used:

IgnoreHiddenFileListFilter
AcceptOnceFileListFilter
The IgnoreHiddenFileListFilter ensures that hidden files are not processed. Keep in mind that the exact definition of hidden is system-dependent. For example, on UNIX-based systems, a file beginning with a period character is considered hidden, while Microsoft Windows has a dedicated file attribute to indicate hidden files.
The AcceptOnceFileListFilter ensures files are picked up only once from the directory.
Note: Since version 4.0, this filter requires a …; since version 4.1.5, this filter has a new property ….
<bean id="pollableFileSource"
      class="org.springframework.integration.file.FileReadingMessageSource"
      p:directory="${input.directory}"
      p:filter-ref="customFilterBean"/>
A common problem with reading files is that a file may be detected before it is ready, and the default AcceptOnceFileListFilter does not prevent this. In most cases, this can be prevented if the file-writing process renames each file as soon as it is ready for reading. A filename-pattern or filename-regex filter that accepts only files that are ready (e.g. based on a known suffix), composed with the default AcceptOnceFileListFilter, allows for this. The CompositeFileListFilter enables the composition.
<bean id="pollableFileSource"
      class="org.springframework.integration.file.FileReadingMessageSource"
      p:directory="${input.directory}"
      p:filter-ref="compositeFilter"/>

<bean id="compositeFilter"
      class="org.springframework.integration.file.filters.CompositeFileListFilter">
    <constructor-arg>
        <list>
            <bean class="o.s.i.file.filters.AcceptOnceFileListFilter"/>
            <bean class="o.s.i.file.filters.RegexPatternFileListFilter">
                <constructor-arg value="^test.*$"/>
            </bean>
        </list>
    </constructor-arg>
</bean>
If it is not possible to create the file with a temporary name and rename it to the final name, another alternative is provided: the LastModifiedFileListFilter, added in version 4.2. This filter can be configured with an age property, and only files older than this age are passed by the filter. The age defaults to 60 seconds, but you should choose an age large enough to avoid picking up a file early due to, say, network glitches.
<bean id="filter" class="org.springframework.integration.file.filters.LastModifiedFileListFilter">
    <property name="age" value="120" />
</bean>
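The age check itself amounts to comparing a file's lastModified timestamp against a cutoff. The following is a minimal plain-Java sketch of that idea; the class and method names are illustrative and this is not the actual Spring Integration implementation:

```java
import java.io.File;

public class AgeFilterSketch {

    // Accept only files whose last modification is at least ageSeconds in the past.
    static boolean oldEnough(File file, long ageSeconds) {
        long cutoff = System.currentTimeMillis() - (ageSeconds * 1000);
        // File.lastModified() returns 0 for a nonexistent file, which would
        // trivially pass; a real filter would also check existence.
        return file.lastModified() <= cutoff;
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("age", ".txt");
        file.deleteOnExit();
        // A file written just now is not yet 60 seconds old.
        System.out.println(oldEnough(file, 60));   // false
    }
}
```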
Starting with version 4.3.7, a ChainFileListFilter (an extension of CompositeFileListFilter) has been introduced to allow scenarios where subsequent filters should only see the result of the previous filter. (With the CompositeFileListFilter, all filters see all the files, but only files that pass all filters are passed by the CompositeFileListFilter.) An example of where the new behavior is required is a combination of LastModifiedFileListFilter and AcceptOnceFileListFilter, when we do not wish to accept the file until some amount of time has elapsed. With the CompositeFileListFilter, since the AcceptOnceFileListFilter sees all the files on the first pass, it will not pass a file later when the other filter does.
The CompositeFileListFilter approach is useful when a pattern filter is combined with a custom filter that looks for a secondary file indicating that the transfer is complete. The pattern filter might only pass the primary file (e.g. foo.txt), but the "done" filter needs to see whether, say, foo.done is present. Say we have files a.txt, a.done, and b.txt. The pattern filter passes only a.txt and b.txt, while the "done" filter sees all three files and passes only a.txt. The final result of the composite filter is that only a.txt is released.
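The difference between the two composition semantics can be seen with a small stand-alone simulation. Plain Java predicates stand in for the actual filter classes here; nothing in this sketch is Spring Integration API:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

public class ChainVsComposite {

    // Two-poll simulation of a single file passing through a stateful
    // "accept once" filter combined with a "not modified recently" filter.
    // Returns whether the file is emitted on [poll 1, poll 2].
    static boolean[] simulate(boolean chained) {
        Set<String> seen = new HashSet<>();
        Predicate<String> acceptOnce = seen::add;          // true only the first time
        boolean[] oldEnough = {false};                     // file is too new on poll 1
        Predicate<String> lastModified = f -> oldEnough[0];

        boolean[] result = new boolean[2];
        for (int poll = 0; poll < 2; poll++) {
            if (chained) {
                // Chain semantics: acceptOnce only sees files passed by lastModified.
                result[poll] = lastModified.test("a.txt") && acceptOnce.test("a.txt");
            } else {
                // Composite semantics: both filters see the file on every poll
                // (note the non-short-circuiting '&').
                result[poll] = acceptOnce.test("a.txt") & lastModified.test("a.txt");
            }
            oldEnough[0] = true;                           // file has aged by poll 2
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(simulate(false))); // [false, false] - file lost
        System.out.println(java.util.Arrays.toString(simulate(true)));  // [false, true]  - emitted on poll 2
    }
}
```

With composite semantics, the accept-once filter "uses up" the file on the first poll even though the age filter rejects it, so the file is never emitted; with chain semantics, it is emitted once the file is old enough.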
Directory scanning and polling
The FileReadingMessageSource doesn't produce messages for files from the directory immediately. It uses an internal queue for eligible files returned by the scanner. The scanEachPoll option is used to ensure that the internal queue is refreshed with the latest input directory content on each poll. By default (scanEachPoll = false), the FileReadingMessageSource empties its queue before scanning the directory again. This default behavior is particularly useful to reduce scans of large numbers of files in a directory.
However, in cases where custom ordering is required, it is important to consider the effects of setting this flag to true; the order in which files are processed may not be as expected. By default, files in the queue are processed in their natural (path) order. New files added by a scan, even when the queue already has files, are inserted in the appropriate position to maintain that natural order.
To customize the order, the FileReadingMessageSource can accept a Comparator<File> as a constructor argument. It is used by the internal PriorityBlockingQueue to reorder its content according to the business requirements. Therefore, to process files in a specific order, you should provide a comparator to the FileReadingMessageSource rather than ordering the list produced by a custom DirectoryScanner.
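The same reordering mechanism can be reproduced in isolation with a stand-alone PriorityBlockingQueue. The comparator and file names below are made up for illustration:

```java
import java.io.File;
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class FileOrderingSketch {

    // Drain a PriorityBlockingQueue ordered by the given comparator and
    // return the names in poll order.
    static String pollOrder(Comparator<File> comparator, File... files) {
        PriorityBlockingQueue<File> queue = new PriorityBlockingQueue<>(11, comparator);
        for (File f : files) {
            queue.add(f);
        }
        StringBuilder order = new StringBuilder();
        while (!queue.isEmpty()) {
            order.append(queue.poll().getName()).append(' ');
        }
        return order.toString().trim();
    }

    static String demo() {
        // Hypothetical business rule: process files in reverse name order.
        Comparator<File> reverseName = Comparator.comparing(File::getName).reversed();
        return pollOrder(reverseName,
                new File("a.txt"), new File("c.txt"), new File("b.txt"));
    }

    public static void main(String[] args) {
        System.out.println(demo());   // c.txt b.txt a.txt
    }
}
```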
The configuration for file reading can be simplified by using the file-specific namespace. To do so, use the following template:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-file="http://www.springframework.org/schema/integration/file"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/integration
           http://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/integration/file
           http://www.springframework.org/schema/integration/file/spring-integration-file.xsd">
</beans>
Within this namespace, you can reduce the FileReadingMessageSource configuration by wrapping it in an inbound channel adapter, as follows:
<int-file:inbound-channel-adapter id="filesIn1"
    directory="file:${input.directory}"
    prevent-duplicates="true"
    ignore-hidden="true"/>

<int-file:inbound-channel-adapter id="filesIn2"
    directory="file:${input.directory}"
    filter="customFilterBean"/>

<int-file:inbound-channel-adapter id="filesIn3"
    directory="file:${input.directory}"
    filename-pattern="test*"/>

<int-file:inbound-channel-adapter id="filesIn4"
    directory="file:${input.directory}"
    filename-regex="test[0-9]+\.txt"/>
The first channel adapter example relies on the default FileListFilter implementations:

IgnoreHiddenFileListFilter (do not process hidden files)
AcceptOnceFileListFilter (prevent duplication)

Therefore, you can also omit the prevent-duplicates and ignore-hidden attributes, as they are true by default.
The second channel adapter example uses a custom filter, the third uses the filename-pattern attribute to add an AntPathMatcher based filter, and the fourth uses the filename-regex attribute to add a regular expression Pattern based filter to the FileReadingMessageSource. The filename-pattern and filename-regex attributes are each mutually exclusive with the regular filter reference attribute. However, you can use the filter attribute to reference an instance of CompositeFileListFilter that combines any number of filters, including one or more pattern based filters, to fit your particular needs.
When multiple processes are reading from the same directory, it can be desirable to lock files to prevent them from being picked up concurrently. To do so, you can use a FileLocker. There is a java.nio based implementation available out of the box, but it is also possible to implement your own locking scheme. The nio locker can be injected as follows:
<int-file:inbound-channel-adapter id="filesIn"
    directory="file:${input.directory}" prevent-duplicates="true">
    <int-file:nio-locker/>
</int-file:inbound-channel-adapter>
A custom locker can be configured as follows:
<int-file:inbound-channel-adapter id="filesIn"
    directory="file:${input.directory}" prevent-duplicates="true">
    <int-file:locker ref="customLocker"/>
</int-file:inbound-channel-adapter>
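Under the covers, a java.nio based locker relies on file channel locks. The following is a rough stand-alone sketch of the acquire/release cycle; it is illustrative only and not the actual locker implementation:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class NioLockSketch {

    // Try to take an exclusive lock on the file and release it again;
    // returns true if the lock could be acquired.
    static boolean lockAndRelease(File file) {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileLock lock = raf.getChannel().tryLock()) {
            return lock != null;
        }
        catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        File file = new File("lock-sketch.tmp");
        System.out.println(lockAndRelease(file));  // true: nobody else holds the lock
        file.delete();
    }
}
```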
Note: When a file inbound adapter is configured with a locker, it takes responsibility for acquiring a lock before the file is allowed to be received. It does not assume the responsibility to unlock the file. If you have processed the file and keep the locks hanging around, you have a potential memory leak. If this is a problem in your case, you should call FileLocker.unlock(File file) yourself at appropriate times.
When filtering and locking files is not enough, it might be necessary to control entirely the way files are listed. To implement this type of requirement, you can use an implementation of DirectoryScanner. This scanner lets you determine exactly which files are listed in each poll. This is also the interface that Spring Integration uses internally to wire FileListFilter and FileLocker implementations to the FileReadingMessageSource. A custom DirectoryScanner can be injected into the <int-file:inbound-channel-adapter/> via the scanner attribute.
<int-file:inbound-channel-adapter id="filesIn" directory="file:${input.directory}" scanner="customDirectoryScanner"/>
This gives you full freedom to choose the ordering, listing and locking strategies.
It is also important to understand that filters (including patterns, regex, prevent-duplicates, and so on) and lockers are actually used by the scanner. Any of these attributes set on the adapter are subsequently injected into the internal scanner. For the case of an external scanner, all filter and locker attributes are prohibited on the FileReadingMessageSource; they must be specified (if required) on that custom DirectoryScanner. In other words, if you inject a scanner into the FileReadingMessageSource, you should supply the filter and locker on that scanner, not on the FileReadingMessageSource.
The WatchServiceDirectoryScanner was added in version 4.2. It replaces the existing RecursiveLeafOnlyDirectoryScanner, which is inefficient for large directory trees. The FileReadingMessageSource.WatchServiceDirectoryScanner requires Java 7 or above.
This scanner relies on file system events when new files are added to the directory. During initialization, the directory is registered to generate events; the initial file list is also built. While walking the directory tree, any subdirectories encountered are also registered to generate events. On the first poll, the initial file list from walking the directory is returned. On subsequent polls, files from new creation events are returned. If a new subdirectory is added, its creation event is used to walk the new subtree to find existing files, as well as registering any new subdirectories found.
Since version 4.3, the top level WatchServiceDirectoryScanner has been deprecated in favor of FileReadingMessageSource internal logic for the WatchService. This can now be enabled via the use-watch-service option, which is mutually exclusive with the scanner option. An internal FileReadingMessageSource.WatchServiceDirectoryScanner instance is populated for the provided directory.
In addition, the WatchService polling logic can now track StandardWatchEventKinds.ENTRY_MODIFY and StandardWatchEventKinds.ENTRY_DELETE as well. The ENTRY_MODIFY events logic should be implemented properly in the FileListFilter to track not only new files but also modifications, if that is a requirement; otherwise, the files from those events are treated the same way. The ENTRY_DELETE events take effect for ResettableFileListFilter implementations, whose files are provided for the remove() operation. This means that (when this event is enabled) filters such as the AcceptOnceFileListFilter have the file removed, so that, if a file with the same name appears again, it passes the filter and is sent as a message.
For this purpose, the watch-events option (FileReadingMessageSource.setWatchEvents(WatchEventType... watchEvents)) has been introduced (WatchEventType is a public inner enum in FileReadingMessageSource). With such an option, we can implement scenarios where one downstream flow handles new files and another handles modified files. We can achieve that with different <int-file:inbound-channel-adapter> definitions for the same directory:
<int-file:inbound-channel-adapter id="newFiles"
    directory="${input.directory}"
    use-watch-service="true"/>

<int-file:inbound-channel-adapter id="modifiedFiles"
    directory="${input.directory}"
    use-watch-service="true"
    filter="acceptAllFilter"
    watch-events="MODIFY"/> <!-- CREATE by default -->
A HeadDirectoryScanner can be used to limit the number of files retained in memory. This can be useful when scanning large directories. With XML configuration, this is enabled by setting the queue-size property on the inbound channel adapter. Prior to version 4.2, this setting was incompatible with the use of any other filters; any other filters (including prevent-duplicates="true") overwrote the filter used to limit the size.
The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:
@SpringBootApplication
public class FileReadingJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FileReadingJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public MessageChannel fileInputChannel() {
        return new DirectChannel();
    }

    @Bean
    @InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
    public MessageSource<File> fileReadingMessageSource() {
        FileReadingMessageSource source = new FileReadingMessageSource();
        source.setDirectory(new File(INBOUND_PATH));
        source.setFilter(new SimplePatternFileListFilter("*.txt"));
        return source;
    }

    @Bean
    @Transformer(inputChannel = "fileInputChannel", outputChannel = "processFileChannel")
    public FileToStringTransformer fileToStringTransformer() {
        return new FileToStringTransformer();
    }

}
The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:
@SpringBootApplication
public class FileReadingJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FileReadingJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public IntegrationFlow fileReadingFlow() {
        return IntegrationFlows
                .from(s -> s.file(new File(INBOUND_PATH))
                            .patternFilter("*.txt"),
                        e -> e.poller(Pollers.fixedDelay(1000)))
                .transform(Transformers.fileToString())
                .channel("processFileChannel")
                .get();
    }

}
Another popular use case is to get lines from the end (or tail) of a file, capturing new lines as they are added. Two implementations are provided. The first, OSDelegatingFileTailingMessageProducer, uses the native tail command (on operating systems that have one); this is likely the most efficient implementation on those platforms. For operating systems that do not have a tail command, the second implementation, ApacheCommonsFileTailingMessageProducer, uses the Apache commons-io Tailer class.
In both cases, file system events, such as files being unavailable, are published as ApplicationEvents using the normal Spring event publishing mechanism. Examples of such events are:
[message=tail: cannot open `/tmp/foo' for reading:
No such file or directory, file=/tmp/foo]
[message=tail: `/tmp/foo' has become accessible, file=/tmp/foo]
[message=tail: `/tmp/foo' has become inaccessible:
No such file or directory, file=/tmp/foo]
[message=tail: `/tmp/foo' has appeared;
following end of new file, file=/tmp/foo]
This sequence of events might occur, for example, when a file is rotated.
Example configurations:
<int-file:tail-inbound-channel-adapter id="native" channel="input" task-executor="exec" file="/tmp/foo"/>
This creates a native adapter with default -F -n 0 options (follow the file name from the current end).
<int-file:tail-inbound-channel-adapter id="native"
    channel="input"
    native-options="-F -n +0"
    task-executor="exec"
    file-delay="10000"
    file="/tmp/foo"/>
This creates a native adapter with -F -n +0 options (follow the file name, emitting all existing lines).
If the tail command fails (on some platforms, a missing file causes the tail
to fail, even with -F
specified), the command will be retried every 10 seconds.
<int-file:tail-inbound-channel-adapter id="native"
    channel="input"
    enable-status-reader="false"
    task-executor="exec"
    file="/tmp/foo"/>
By default, the native adapter captures lines from standard output and sends them as messages; it captures lines from standard error to raise events. Starting with version 4.3.6, you can discard the standard error events by setting enable-status-reader to false.
<int-file:tail-inbound-channel-adapter id="apache"
    channel="input"
    task-executor="exec"
    file="/tmp/bar"
    delay="2000"
    end="false"
    reopen="true"
    file-delay="10000"/>
This creates an Apache commons-io Tailer adapter that examines the file for new lines every 2 seconds and checks for the existence of a missing file every 10 seconds. The file is tailed from the beginning (end="false") instead of the end (which is the default), and the file is reopened for each chunk (the default is to keep the file open).
To write messages to the file system, you can use a FileWritingMessageHandler. This class can deal with File, String, and byte[] payloads. You can configure the encoding and the charset that will be used in case of a String payload.
To make things easier, you can configure the FileWritingMessageHandler as part of an outbound channel adapter or outbound gateway by using the provided XML namespace support. Starting with version 4.3, you can specify the buffer size to use when writing files.
In its simplest form, the FileWritingMessageHandler requires only a destination directory for writing the files. The name of the file to be written is determined by the handler's FileNameGenerator. The default implementation looks for a message header whose key matches the constant defined as FileHeaders.FILENAME. Alternatively, you can specify an expression to be evaluated against the message in order to generate a file name, e.g. headers[myCustomHeader] + '.foo'. The expression must evaluate to a String. For convenience, the DefaultFileNameGenerator also provides the setHeaderName method, letting you explicitly specify the message header whose value is to be used as the filename.
Once set up, the DefaultFileNameGenerator employs the following resolution steps to determine the filename for a given message payload:

1. Evaluate the expression against the message; if the result is a non-empty String, use it as the filename.
2. Otherwise, if the payload is a java.io.File, use the file's filename.
3. Otherwise, use the message ID appended with .msg as the filename.
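Those steps amount to a simple fall-through. The following is a simplified plain-Java model; the real generator evaluates a SpEL expression against the Message, while this sketch just takes the evaluation result directly:

```java
import java.io.File;

public class FileNameResolutionSketch {

    // A simplified model of the filename resolution steps (illustrative only).
    static String resolve(Object evaluationResult, Object messageId) {
        if (evaluationResult instanceof String && !((String) evaluationResult).trim().isEmpty()) {
            return (String) evaluationResult;          // step 1: non-empty String result
        }
        if (evaluationResult instanceof File) {
            return ((File) evaluationResult).getName(); // step 2: File value
        }
        return messageId + ".msg";                      // step 3: fall back to the message id
    }

    public static void main(String[] args) {
        System.out.println(resolve("report.txt", "id-1"));           // report.txt
        System.out.println(resolve(new File("/tmp/a.csv"), "id-2")); // a.csv
        System.out.println(resolve(null, "id-3"));                   // id-3.msg
    }
}
```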
When using the XML namespace support, both the File Outbound Channel Adapter and the File Outbound Gateway support the following two mutually exclusive configuration attributes:

filename-generator (a reference to a FileNameGenerator implementation)
filename-generator-expression (an expression evaluating to a String)
While writing files, a temporary file suffix is used (default: .writing). It is appended to the filename while the file is being written. To customize the suffix, you can set the temporary-file-suffix attribute on both the File Outbound Channel Adapter and the File Outbound Gateway.
Note: When using the APPEND file mode, the temporary-file-suffix attribute is ignored, since the data is appended to the file directly.
Starting with version 4.2.5, the generated file name (as a result of filename-generator/filename-generator-expression evaluation) can represent a sub-path together with the target file name. It is used as the second constructor argument for File(File parent, String child) as before, but in the past the directories for the sub-path were not created (mkdirs()); only the file name was assumed. This approach is useful when you need to restore the file system tree to match the source directory, for example when unzipping an archive and saving all the files in the target directory in the same structure.
Both the File Outbound Channel Adapter and the File Outbound Gateway provide two configuration attributes for specifying the output directory:
Note: The directory-expression attribute is available since Spring Integration 2.2.
Using the directory attribute
When using the directory attribute, the output directory is set to a fixed value that is set at initialization time of the FileWritingMessageHandler. If you do not specify this attribute, you must use the directory-expression attribute.
Using the directory-expression attribute
If you want full SpEL support, use the directory-expression attribute. This attribute accepts a SpEL expression that is evaluated for each message being processed; thus, you have full access to a message's payload and headers to dynamically specify the output file directory. The SpEL expression must resolve to either a String or a java.io.File, and the resulting String or File must point to a directory. If you do not specify the directory-expression attribute, you must set the directory attribute.
Using the auto-create-directory attribute
If the destination directory does not yet exist, by default the respective destination directory and any non-existing parent directories are created automatically. You can set the auto-create-directory attribute to false to prevent that. This attribute applies to both the directory and the directory-expression attributes.
Note: Instead of checking for the existence of the destination directory at initialization time of the adapter, this check is performed for each message being processed.
When writing files and the destination file already exists, the default behavior is to overwrite the target file. This behavior can be changed by setting the mode attribute on the respective File Outbound components. The following options exist:
Note: The mode attribute and the options APPEND, FAIL, and IGNORE are available since Spring Integration 2.2.
REPLACE
If the target file already exists, it is overwritten. If the mode attribute is not specified, this is the default behavior when writing files.
APPEND
This mode lets you append message content to the existing file instead of creating a new file each time. Note that this attribute is mutually exclusive with the temporary-file-suffix attribute, since, when appending content to the existing file, the adapter no longer uses a temporary file. The file is closed after each message.
APPEND_NO_FLUSH
This has the same semantics as APPEND, but the data is not flushed and the file is not closed after each message. This can provide a significant performance improvement at the risk of data loss in the case of a failure. See Section 14.3.4, "Flushing Files When using APPEND_NO_FLUSH" for more information.
FAIL
If the target file exists, a MessageHandlingException is thrown.
IGNORE
If the target file exists, the message payload is silently ignored.
The APPEND_NO_FLUSH mode was added in version 4.3. It can improve performance because the file is not closed after each message; however, this can cause data loss in the event of a failure. Several flushing strategies are provided to mitigate this data loss:

Use the flushInterval - if a file is not written to for this period of time, it is automatically flushed. This is approximate and may be up to 1.33x this time.
Send a message to the message handler's trigger method containing a regular expression; files with absolute path names matching the pattern are flushed.
Provide the handler with a custom MessageFlushPredicate implementation to modify the action taken when a message is sent to the trigger method.
Invoke one of the handler's flushIfNeeded methods, passing in a custom FileWritingMessageHandler.FlushPredicate or FileWritingMessageHandler.MessageFlushPredicate implementation.

The predicates are called for each open file. See the javadocs for these interfaces for more information.
When using flushInterval, the interval starts at the last write; the file is flushed only if it is idle for the interval. Starting with version 4.3.7, an additional property, flushWhenIdle, can be set to false, meaning that the interval starts with the first write to a previously flushed (or new) file.
By default, the destination file's lastModified timestamp is the time the file was created (except that an in-place rename retains the current timestamp). Starting with version 4.3, you can configure preserve-timestamp (or setPreserveTimestamp(true) when using Java configuration). For File payloads, this transfers the timestamp from the inbound file to the outbound file (regardless of whether a copy was required). For other payloads, if the FileHeaders.SET_MODIFIED header (file_setModified) is present, it is used to set the destination file's lastModified timestamp, as long as the header is a Number.
<int-file:outbound-channel-adapter id="filesOut" directory="${input.directory.property}"/>
The namespace based configuration also supports a delete-source-files attribute. If set to true, it triggers the deletion of the original source files after writing to a destination. The default value for that flag is false.
<int-file:outbound-channel-adapter id="filesOut" directory="${output.directory}" delete-source-files="true"/>
Starting with version 4.2, the FileWritingMessageHandler supports an append-new-line option. If set to true, a new line is appended to the file after a message is written. The default attribute value is false.
<int-file:outbound-channel-adapter id="newlineAdapter" append-new-line="true" directory="${output.directory}"/>
In cases where you want to continue processing messages based on the written file, you can use the outbound-gateway instead. It plays a role very similar to that of the outbound-channel-adapter; however, after writing the file, it also sends it to the reply channel as the payload of a message.
<int-file:outbound-gateway id="mover"
    request-channel="moveInput"
    reply-channel="output"
    directory="${output.directory}"
    mode="REPLACE"
    delete-source-files="true"/>
As mentioned earlier, you can also specify the mode attribute, which defines the behavior of how to deal with situations where the destination file already exists. Please see Section 14.3.3, “Dealing with Existing Destination Files” for further details. Generally, when using the File Outbound Gateway, the result file is returned as the Message payload on the reply channel.
This also applies when specifying the IGNORE mode. In that case the pre-existing destination file is returned. If the payload of the request message was a file, you still have access to that original file through the Message Header FileHeaders.ORIGINAL_FILE.
Note: The outbound-gateway works well in cases where you want to first move a file and then send it through a processing pipeline. In such cases, you may connect the file namespace's inbound-channel-adapter to the outbound-gateway.
If you have more elaborate requirements or need to support additional payload types as input to be converted to file content, you could extend the FileWritingMessageHandler, but a much better option is to rely on a Transformer.
The following Spring Boot application provides an example of configuring the outbound adapter using Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class FileWritingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(FileWritingJavaApplication.class)
                    .web(false)
                    .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.writeToFile("foo.txt", new File(tmpDir.getRoot(), "fileWritingFlow"), "foo");
    }

    @Bean
    @ServiceActivator(inputChannel = "writeToFileChannel")
    public MessageHandler fileWritingMessageHandler() {
        Expression directoryExpression = new SpelExpressionParser().parseExpression("headers.directory");
        FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
        handler.setFileExistsMode(FileExistsMode.APPEND);
        return handler;
    }

    @MessagingGateway(defaultRequestChannel = "writeToFileChannel")
    public interface MyGateway {

        void writeToFile(@Header(FileHeaders.FILENAME) String fileName,
                @Header("directory") File directory, String data);

    }

}
The following Spring Boot application provides an example of configuring the outbound adapter using the Java DSL:
@SpringBootApplication
public class FileWritingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(FileWritingJavaApplication.class)
                    .web(false)
                    .run(args);
        MessageChannel fileWritingInput = context.getBean("fileWritingInput", MessageChannel.class);
        fileWritingInput.send(new GenericMessage<>("foo"));
    }

    @Bean
    public IntegrationFlow fileWritingFlow() {
        return IntegrationFlows.from("fileWritingInput")
                .enrichHeaders(h -> h.header(FileHeaders.FILENAME, "foo.txt")
                        .header("directory", new File(tmpDir.getRoot(), "fileWritingFlow")))
                .handleWithAdapter(a -> a.fileGateway(m -> m.getHeaders().get("directory")))
                .channel(MessageChannels.queue("fileWritingResultChannel"))
                .get();
    }

}
To transform data read from the file system to objects, and the other way around, you need to do some work. Unlike FileReadingMessageSource and, to a lesser extent, FileWritingMessageHandler, it is very likely that you will need your own mechanism to get the job done. For this, you can implement the Transformer interface or extend the AbstractFilePayloadTransformer for inbound messages. Some obvious implementations have been provided.
FileToByteArrayTransformer transforms Files into byte[] using Spring's FileCopyUtils. It is often better to use a sequence of transformers than to put all transformations in a single class; in that case, the File to byte[] conversion might be a logical first step. FileToStringTransformer converts Files to Strings, as the name suggests. If nothing else, this can be useful for debugging (consider using it with a wire tap).
To configure file-specific transformers, you can use the appropriate elements from the file namespace.
<int-file:file-to-bytes-transformer input-channel="input"
    output-channel="output"
    delete-files="true"/>

<int-file:file-to-string-transformer input-channel="input"
    output-channel="output"
    delete-files="true"
    charset="UTF-8"/>
The delete-files option signals to the transformer that it should delete the inbound file after the transformation is complete. This is in no way a replacement for using the AcceptOnceFileListFilter when the FileReadingMessageSource is being used in a multi-threaded environment (e.g. Spring Integration in general).
The FileSplitter was added in version 4.1.2, and its namespace support was added in version 4.2. The FileSplitter splits text files into individual lines, based on BufferedReader.readLine(). By default, the splitter uses an Iterator to emit lines one at a time as they are read from the file. Setting the iterator property to false causes it to read all the lines into memory before emitting them as messages. One use case for this might be if you want to detect I/O errors on the file before sending any messages containing lines. However, it is only practical for relatively short files.
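The underlying line extraction is just BufferedReader.readLine() in a loop. The following stand-alone sketch shows the non-iterator (read-everything-first) mode in miniature; it is illustrative and not the FileSplitter implementation:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class LineSplitSketch {

    // Read all lines up front, as the splitter does when iterator="false".
    // An I/O error here is detected before any line is emitted downstream.
    static List<String> readAll(String text) {
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        catch (IOException e) {
            throw new IllegalStateException(e);
        }
        return lines;
    }

    public static void main(String[] args) {
        System.out.println(readAll("first\nsecond\nthird")); // [first, second, third]
    }
}
```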
Inbound payloads can be File, String (a File path), InputStream, or Reader. Other payload types are emitted unchanged.
<int-file:splitter id="splitter"
    iterator=""
    markers=""
    markers-json=""
    apply-sequence=""
    requires-reply=""
    charset=""
    input-channel=""
    output-channel=""
    send-timeout=""
    auto-startup=""
    order=""
    phase="" />
id: The bean name of the splitter.
iterator: Set to false to read all lines into memory before emitting them as messages (see above); defaults to true.
markers: Set to …
markers-json: When …
apply-sequence: Set to …
requires-reply: Set to …
charset: Set the charset name to be used when reading the text data into …
input-channel: Set the input channel used to send messages to the splitter.
output-channel: Set the output channel to which messages will be sent.
send-timeout: Set the send timeout; only applies if the …
auto-startup: Set to …
order: Set the order of this endpoint if the …
phase: Set the startup phase for the splitter (used when …).
Java Configuration
@Splitter(inputChannel = "toSplitter")
@Bean
public MessageHandler fileSplitter() {
    FileSplitter splitter = new FileSplitter(true, true);
    splitter.setApplySequence(true);
    splitter.setOutputChannel(outputChannel);
    return splitter;
}
The FileSplitter also splits any text-based InputStream into lines. Starting with version 4.3, when used in conjunction with an FTP or SFTP streaming inbound channel adapter, or an FTP or SFTP outbound gateway using the stream option to retrieve a file, the splitter automatically closes the session supporting the stream when the file is completely consumed. See Section 15.5, "FTP Streaming Inbound Channel Adapter" and Section 27.8, "SFTP Streaming Inbound Channel Adapter", as well as Section 15.7, "FTP Outbound Gateway" and Section 27.10, "SFTP Outbound Gateway", for more information about these facilities.
When using Java configuration, an additional constructor is available:
public FileSplitter(boolean iterator, boolean markers, boolean markersJson)
When markersJson is true, the markers are represented as a JSON string, as long as a suitable JSON processor library (such as Jackson or Boon) is on the classpath.