I have a Java application that writes content to a file in an AWS S3 bucket.

The writer is created with this code:

import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectWriter;
import com.fasterxml.jackson.databind.SequenceWriter;

import alexmojaki.s3.upload.StreamTransferManager;

// bucket, client, PART_SIZE_MB, manager and outputStream are fields of the enclosing class
SequenceWriter getBufferedWriter(final ObjectWriter newWriter) throws IOException {
    var key = "myFile.csv";

    // Multipart upload manager from the s3-stream-upload library
    manager = new StreamTransferManager(bucket, key, client.getClient())
            .numStreams(1)
            .numUploadThreads(1)
            .queueCapacity(1)
            .partSize(PART_SIZE_MB);
    outputStream = manager.getMultiPartOutputStreams().get(0);

    // Jackson SequenceWriter that serializes values straight into the multipart stream
    return newWriter.writeValues(outputStream);
}

Then I write values with

writer.write(myData);

The application works fine, and when it finishes, the data is in the S3 file. However, I'd like the content to be written (and flushed) while the application is running, so that if the application crashes for any reason, I still get the partial content in the file.

I'd actually like to "flush" it programmatically: when a certain event occurs in my application, I want to force the flush.

I've tried using writer.flush(), but it didn't achieve what I wanted.
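
Roughly what I tried, where writer is the SequenceWriter returned by getBufferedWriter and the event-handler name is just illustrative:

// Called when the relevant event fires in my application (handler name is illustrative)
void onCheckpointEvent() throws IOException {
    // Flushes Jackson's buffers into the underlying MultiPartOutputStream,
    // but the flushed data still doesn't become readable content in S3
    writer.flush();
}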

How can I force the content to be written to S3?

  • If you want the data to be uploaded "earlier", reduce the partSize.
    – luk2302
    Commented Feb 8, 2023 at 10:51
  • I'd actually like to "flush" it programmatically, so when a certain event occurs in my application, I force the flush (I'll add it to the question).
    Commented Feb 8, 2023 at 10:52
  • You can't. S3 is an object store, not a filesystem. Objects become available in S3 when they have been fully written, not before. You could play around with multi-part uploads and recover the upload after failure, but this doesn't seem to be what you're asking for (and isn't going to happen automatically in any case). If you need to capture incremental results, either use a filesystem such as EFS, or write smaller files and combine later (see the sketch after these comments).
    – kdgregory
    Commented Feb 8, 2023 at 15:34
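
Following the comments: lowering partSize makes the StreamTransferManager hand parts to S3 sooner, but the object still only becomes readable once manager.complete() has run, so crash-safe partial content really means writing each checkpoint as its own complete object. Below is a minimal sketch of that "write smaller files and combine later" idea, assuming client.getClient() returns the AWS SDK v1 AmazonS3 client used above; the key naming and event hook are illustrative, not part of the original code.

import com.amazonaws.services.s3.AmazonS3;

// Illustrative sketch: upload each checkpoint as its own small, complete object
// instead of flushing into the in-progress multipart upload
void onCheckpointEvent(AmazonS3 s3, String csvChunk, int chunkNumber) {
    // Every putObject call creates an immediately visible object, so a later
    // crash cannot lose chunks that were already uploaded
    s3.putObject(bucket, "myFile.part-" + chunkNumber + ".csv", csvChunk);
}

The chunks can later be stitched back together, either by downloading and concatenating them or, if each chunk is at least 5 MB, with a server-side multipart copy (UploadPartCopy).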
