Streams in Node.js
Process large files efficiently with streams. Handle data in chunks instead of loading everything into memory.
The Problem
const fs = require('fs');
const data = fs.readFileSync('large-file.txt');
console.log(data);
If the file is 1 GB, this loads all 1 GB into memory at once. With very large files or limited memory, your app can run out of memory and crash.
The Solution: Streams
const fs = require('fs');
const readStream = fs.createReadStream('large-file.txt');
readStream.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes`);
});

readStream.on('end', () => {
  console.log('Finished reading');
});
Now the file is read in small chunks (64 KB by default for file streams), so memory usage stays low no matter how large the file is.
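If you want a different chunk size, fs.createReadStream accepts a highWaterMark option, given in bytes. A minimal sketch; the 1 MB value and the error handler are just illustrative additions:

const fs = require('fs');

// Read in 1 MB chunks instead of the default 64 KB
const readStream = fs.createReadStream('large-file.txt', { highWaterMark: 1024 * 1024 });

readStream.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes`);
});

readStream.on('error', (err) => {
  // fires if the file is missing or unreadable
  console.error('Read failed:', err);
});

Larger chunks mean fewer 'data' events but more memory held per chunk.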
Real Example: Copy File
const fs = require('fs');
const readStream = fs.createReadStream('input.txt');
const writeStream = fs.createWriteStream('output.txt');
readStream.pipe(writeStream);
writeStream.on('finish', () => {
  console.log('File copied!');
});
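One caveat: pipe() moves the data but does not forward errors, so in real code you should listen for 'error' on both streams (or use pipeline, covered below). A sketch of the same copy with basic error handling, using the same placeholder file names:

const fs = require('fs');

const readStream = fs.createReadStream('input.txt');
const writeStream = fs.createWriteStream('output.txt');

// errors must be handled on each stream individually
readStream.on('error', (err) => console.error('Read failed:', err));
writeStream.on('error', (err) => console.error('Write failed:', err));

readStream.pipe(writeStream);

writeStream.on('finish', () => {
  console.log('File copied!');
});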
Real Example: Compress File
const fs = require('fs');
const zlib = require('zlib');
fs.createReadStream('input.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('input.txt.gz'));
This compresses a file without loading it all into memory!
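Decompression is the same pattern in reverse, using zlib.createGunzip(). A minimal sketch; 'input-copy.txt' is just a placeholder output name:

const fs = require('fs');
const zlib = require('zlib');

// Stream the .gz file through gunzip and write the plain text back out
fs.createReadStream('input.txt.gz')
  .pipe(zlib.createGunzip())
  .pipe(fs.createWriteStream('input-copy.txt'));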
Real Example: HTTP Response
const express = require('express');
const fs = require('fs');
const app = express();
app.get('/download', (req, res) => {
  const fileStream = fs.createReadStream('large-video.mp4');
  res.setHeader('Content-Type', 'video/mp4');
  fileStream.pipe(res);
});
app.listen(3000);
The response starts streaming immediately, so the client can begin playback before the whole file has been read, and the server never holds the full video in memory.
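In a real route you would also handle a missing or unreadable file, otherwise the request can hang. A sketch of the same route with basic error handling, assuming the app and fs setup from the example above (the headersSent check avoids writing a status after data has already been sent):

app.get('/download', (req, res) => {
  const fileStream = fs.createReadStream('large-video.mp4');

  fileStream.on('error', (err) => {
    // e.g. file not found: end the response instead of leaving it open
    console.error('Stream failed:', err);
    if (!res.headersSent) {
      res.sendStatus(500);
    } else {
      res.end();
    }
  });

  res.setHeader('Content-Type', 'video/mp4');
  fileStream.pipe(res);
});

Seeking within the video would additionally need HTTP Range support, which this sketch leaves out.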
Transform Streams
const { Transform } = require('stream');
const fs = require('fs');
const uppercase = new Transform({
  transform(chunk, encoding, callback) {
    this.push(chunk.toString().toUpperCase());
    callback();
  }
});

fs.createReadStream('input.txt')
  .pipe(uppercase)
  .pipe(fs.createWriteStream('output.txt'));
This converts the text to uppercase chunk by chunk as it streams. (If the file contains multi-byte characters, pass { encoding: 'utf8' } to createReadStream so a chunk boundary can't split a character.)
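Transforms don't have to change the data; they can also just observe it. A sketch of a pass-through transform that counts bytes while copying (the byteCounter name is purely illustrative):

const { Transform } = require('stream');
const fs = require('fs');

let totalBytes = 0;

const byteCounter = new Transform({
  transform(chunk, encoding, callback) {
    totalBytes += chunk.length; // count the bytes flowing through
    callback(null, chunk);      // pass the chunk along unchanged
  }
});

fs.createReadStream('input.txt')
  .pipe(byteCounter)
  .pipe(fs.createWriteStream('output.txt'))
  .on('finish', () => console.log(`Copied ${totalBytes} bytes`));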
Error Handling
const { pipeline } = require('stream');
const fs = require('fs');
const zlib = require('zlib');
pipeline(
  fs.createReadStream('input.txt'),
  zlib.createGzip(),
  fs.createWriteStream('input.txt.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed:', err);
    } else {
      console.log('Pipeline succeeded');
    }
  }
);
pipeline forwards an error from any stream in the chain to its callback and destroys all of the streams on failure, cleanup that pipe() doesn't do on its own.
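On Node 15 and later there is also a promise-based variant in 'stream/promises' that fits async/await code. A minimal sketch of the same compression pipeline:

const { pipeline } = require('stream/promises');
const fs = require('fs');
const zlib = require('zlib');

async function compress() {
  await pipeline(
    fs.createReadStream('input.txt'),
    zlib.createGzip(),
    fs.createWriteStream('input.txt.gz')
  );
  console.log('Pipeline succeeded');
}

compress().catch((err) => console.error('Pipeline failed:', err));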
Types of Streams
Readable → Read data (fs.createReadStream)
Writable → Write data (fs.createWriteStream)
Duplex → Both read and write (TCP socket)
Transform → Modify data while streaming (zlib.createGzip)
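You can also implement your own streams by extending these base classes. A minimal sketch of a custom Writable that just logs the size of each chunk it receives (purely illustrative):

const { Writable } = require('stream');
const fs = require('fs');

const logger = new Writable({
  write(chunk, encoding, callback) {
    console.log(`Got ${chunk.length} bytes`);
    callback(); // signal that this chunk has been handled
  }
});

fs.createReadStream('input.txt').pipe(logger);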
Performance Comparison
Without Streams (1 GB file):
- Memory: ~1 GB (the entire file is buffered)
- Time: ~5 seconds
- Risk: out-of-memory crash

With Streams (1 GB file):
- Memory: roughly one chunk at a time (~64 KB plus internal buffering)
- Time: ~5 seconds
- Risk: minimal

The win is memory and responsiveness, not raw speed.
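The exact numbers depend on the machine and the file, so it's worth measuring on your own workload. A sketch that reports resident memory after streaming a file, using process.memoryUsage():

const fs = require('fs');

const readStream = fs.createReadStream('large-file.txt');
let bytes = 0;

readStream.on('data', (chunk) => {
  bytes += chunk.length;
});

readStream.on('end', () => {
  const rssMb = (process.memoryUsage().rss / 1024 / 1024).toFixed(1);
  console.log(`Read ${bytes} bytes, resident memory ~${rssMb} MB`);
});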
Key Takeaway
Streams handle large data efficiently by processing it in chunks. Use createReadStream for reading, pipe() to connect streams, and pipeline() for error handling and cleanup. They're a good fit for large files, video delivery, and real-time data.