const fs = require('fs');
const filename = 'binary.bin';
fs.readFile(filename, (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }
  console.log(data);
  // process the Buffer data using Buffer methods (e.g., slice, copy)
});
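To make that closing comment concrete, here's a minimal sketch of a few common Buffer operations on the data we just read. The four-byte "header" is an invented assumption for illustration; binary.bin could hold anything:

const fs = require('fs');
fs.readFile('binary.bin', (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }
  const header = data.subarray(0, 4);     // a view of the first four bytes (subarray is the modern name for slice)
  console.log(header.toString('hex'));    // render those bytes as hex
  const headerCopy = Buffer.alloc(header.length);
  header.copy(headerCopy);                // copy the bytes into a separate Buffer
  console.log(header.equals(headerCopy)); // true: same byte contents
});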
Streaming files in JavaScript
Another aspect of dealing with files is streaming data in chunks, which becomes a necessity when dealing with large files. Here's a contrived example of writing out in streaming chunks:
const fs = require('fs');
const filename = 'large_file.txt';
const chunkSize = 1024 * 1024; // (1)
const content = 'This is some content to be written in chunks.'; // (2)
const fileSizeLimit = 5 * 1024 * 1024; // (3)
let writtenBytes = 0; // (4)
const writeStream = fs.createWriteStream(filename, { highWaterMark: chunkSize }); // (5)
function writeChunk() { // (6)
  const chunk = content.repeat(Math.ceil(chunkSize / content.length)); // (7)
  if (writtenBytes + chunk.length > fileSizeLimit) {
    console.error('File size limit reached');
    writeStream.end();
    return;
  }
  if (!writeStream.write(chunk)) { // (8)
    writeStream.once('drain', writeChunk);
  }
  writtenBytes += chunk.length; // (9)
  console.log(`Wrote chunk of size: ${chunk.length}, Total written: ${writtenBytes}`);
}
writeStream.on('error', (err) => { // (10)
  console.error('Error writing file:', err);
});
writeStream.on('finish', () => { // (10)
  console.log('Finished writing file');
});
writeChunk();
Streaming gives you more power, but you'll notice it involves more work. The work you are doing is in setting chunk sizes and then responding to events based on those chunks. This is the essence of avoiding loading too much of a huge file into memory at once. Instead, you break it into chunks and deal with each one. Here are my notes about the interesting parts of the above write example:
- We specify a chunk size in bytes; in this case, a 1MB chunk (1024 * 1024 bytes), which is how much content will be written at a time.
- Here's some fake content to write.
- Now, we create a file-size limit, in this case, 5MB.
- This variable tracks how many bytes we've written (so we can stop writing after 5MB).
- We create the actual writeStream object. The highWaterMark option tells it how big the chunks are that it will accept.
- The writeChunk() function is recursive. Whenever a chunk needs to be handled, it calls itself. It does this until the file limit has been reached, at which point it exits.
- Here, we're just repeating the sample text until it reaches the 1MB size.
- Here's the interesting part. If the file size limit is not exceeded, then we call writeStream.write(chunk). writeStream.write(chunk) returns false if the internal buffer is full, which means we can't fit any more in the buffer. Once the buffer has emptied, the drain event fires, and we handle it with writeStream.once('drain', writeChunk);. Notice that this is a recursive callback to writeChunk. (A stripped-down sketch of this pattern appears after this list.)
- This keeps track of how much we've written.
- This handles the case where we're done writing and finishes the stream writer with writeStream.end();.
- This demonstrates adding event handlers for error and finish.
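As promised in the note on drain above, here's the same backpressure pattern in isolation: write until write() returns false, then resume once the buffer drains. The file name, line count, and the pump() helper are invented for illustration; the drain wiring is the part that matters:

const fs = require('fs');
const ws = fs.createWriteStream('backpressure_demo.txt');
let line = 0;
function pump() {
  while (line < 100000) {
    line++;
    if (!ws.write(`line ${line}\n`)) { // buffer is full...
      ws.once('drain', pump);          // ...so resume once it empties
      return;
    }
  }
  ws.end(); // all lines queued; finish the stream
}
pump();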
And to read it back off the disk, we can use a similar approach: