All notable changes to this project will be documented in this file.
- Updated FFmpeg with new encoders, JPEG-XS support, swscale Vulkan support, and more
- Regenerated constants, encoders, and decoders from updated FFmpeg headers
- Various bug fixes and stability improvements
- Updated dependencies
New high-level SharedTexture class for zero-copy import of Electron's offscreen-rendering GPU textures as FFmpeg hardware frames.
Platform support:
| Platform | GPU Format | Handle Type |
|---|---|---|
| macOS | AV_PIX_FMT_VIDEOTOOLBOX | IOSurface |
| Windows | AV_PIX_FMT_D3D11 | DXGI shared handle |
| Linux | AV_PIX_FMT_DRM_PRIME | DMA-BUF |
Example:

```typescript
import { HardwareContext, SharedTexture, AV_HWDEVICE_TYPE_VIDEOTOOLBOX } from 'node-av';

// Create hardware context (platform-specific)
const hw = HardwareContext.create(AV_HWDEVICE_TYPE_VIDEOTOOLBOX);
using sharedTexture = SharedTexture.create(hw);

// In Electron paint event with offscreen rendering
offscreen.webContents.on('paint', (event) => {
  const texture = event.texture;
  if (!texture?.textureInfo) return;

  // Import as hardware frame (zero-copy)
  using frame = sharedTexture.importTexture(texture.textureInfo, { pts: 0n });
  // frame.format === AV_PIX_FMT_VIDEOTOOLBOX (macOS)
  // frame.format === AV_PIX_FMT_D3D11 (Windows)
  // frame.format === AV_PIX_FMT_DRM_PRIME (Linux)

  texture.release();
});
```

New `mapTo()` helper for mapping frames between hardware formats (e.g., DRM PRIME → VAAPI):
```typescript
// Import DRM PRIME frame
const drmFrame = sharedTexture.importTexture(textureInfo, { pts: 0n });

// Map to VAAPI for encoding
const vaapiHw = HardwareContext.create(AV_HWDEVICE_TYPE_VAAPI);
const vaapiFrame = sharedTexture.mapTo(drmFrame, vaapiHw);
```

New FMP4Stream features:
- `fragments()` async generator: Yields media fragments (moof+mdat) for streaming, separate from the init segment
- `initSegment` property: Promise that resolves with ftyp+moov data once available (box mode only)
- AbortSignal support: `signal` option for graceful stream cancellation
Example:

```typescript
const stream = FMP4Stream.create('rtsp://camera/stream', {
  supportedCodecs: 'avc1.640029,mp4a.40.2',
  boxMode: true,
  signal: controller.signal,
});

await stream.start();

// Get init segment (ftyp+moov) for MSE SourceBuffer initialization
const init = await stream.initSegment;
sourceBuffer.appendBuffer(init);

// Stream media fragments via async generator
for await (const fragment of stream.fragments()) {
  sourceBuffer.appendBuffer(fragment.data);
}
```

New RTPStream features:
- AbortSignal support: `signal` option for graceful stream cancellation, consistent with FMP4Stream
Example:

```typescript
const controller = new AbortController();

const stream = RTPStream.create('rtsp://camera/stream', {
  signal: controller.signal,
  onVideoPacket: (rtp) => peer.sendRtp(rtp),
  onAudioPacket: (rtp) => peer.sendRtp(rtp),
});

await stream.start();

// Cancel after timeout
setTimeout(() => controller.abort(), 30000);
```

New high-level DeviceAPI for cross-platform device capture with native bindings for macOS, Linux, and Windows.
Example:

```typescript
import { DeviceAPI } from 'node-av/api';

// List devices
const devices = await DeviceAPI.list();

// Camera capture
await using camera = await DeviceAPI.openCamera({
  width: 1280, height: 720, frameRate: 30,
});

// Combined video + audio capture (macOS/Windows)
await using device = await DeviceAPI.openDevice({
  videoDevice: 0, audioDevice: 0,
  width: 1280, height: 720, frameRate: 30,
});

// Screen capture with system audio (macOS 13.0+)
await using screen = await DeviceAPI.openScreen({
  frameRate: 30, drawMouse: true,
  avfoundation: { captureSystemAudio: true, audioSampleRate: 48000 },
});
```

Platform support:
| Feature | macOS | Linux | Windows |
|---|---|---|---|
| Camera | AVFoundation | V4L2 | DirectShow |
| Microphone | AVFoundation | ALSA | DirectShow |
| Combined | AVFoundation | — | DirectShow |
| Screen | ScreenCaptureKit | x11grab | GDI grab |
All high-level API classes now support AbortSignal for cancellation via an optional signal property in their options:
- `Demuxer`, `Muxer`, `Decoder`, `Encoder`, `FilterAPI`, `BitStreamFilterAPI` — pass `signal` in options
- `pipeline()` — pass `{ signal }` as the last argument
- Async generators (`packets()`, `frames()`, etc.) stop yielding on abort
- Async methods (`decode()`, `encode()`, `writePacket()`, etc.) throw `AbortError` on abort
- Pre-aborted signals are rejected immediately
- `close()` is never affected — cleanup always runs
Example:

```typescript
const controller = new AbortController();

await using input = await Demuxer.open('input.mp4', { signal: controller.signal });
using decoder = await Decoder.create(input.video()!, { signal: controller.signal });

// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);

try {
  for await (const packet of input.packets()) {
    for await (const frame of decoder.frames(packet)) {
      // Process frame...
    }
  }
} catch (err) {
  if (err.name === 'AbortError') {
    console.log('Processing cancelled');
  }
}

// Pipeline with signal
const control = pipeline(input, decoder, encoder, output, { signal: controller.signal });
await control.completion;
```

`Demuxer.open()` now accepts a Node.js `Readable` stream as input, and `Muxer.open()` now accepts a `Writable` stream as output. This enables seamless integration with Node.js stream APIs.
Example:

```typescript
import { createReadStream, createWriteStream } from 'fs';

// Demux from a Readable stream
const readable = createReadStream('input.mkv');
await using input = await Demuxer.open(readable, { format: 'matroska' });

// Mux to a Writable stream (use non-seekable formats like mpegts or matroska)
const writable = createWriteStream('output.ts');
await using output = await Muxer.open(writable, { format: 'mpegts' });
```

`IOContext` now implements the synchronous `Disposable` interface (`Symbol.dispose`) in addition to `AsyncDisposable`. This allows using `using` (synchronous) instead of `await using` when async cleanup is not needed.

`Demuxer.open()` and `Demuxer.openSync()` now accept a pre-created `IOContext` as input, enabling advanced custom I/O scenarios with more control over buffering and seeking.
- New `startTime` option in `Muxer` stream options for controlling packet timestamp offsets.
Updated to the latest FFmpeg version with new codec and hardware acceleration support:
- New codec: `AV_CODEC_ID_JPEGXS` (JPEG XS)
- New decoder: `FF_DECODER_LIBSVTJPEGXS` (SVT-JPEG XS)
- New encoder: `FF_ENCODER_LIBSVTJPEGXS` (SVT-JPEG XS)
- New hardware encoders: `FF_ENCODER_AV1_D3D12VA`, `FF_ENCODER_H264_D3D12VA`, `FF_ENCODER_HEVC_D3D12VA` (Direct3D 12)
- Stricter typings across the entire codebase — branded types are now enforced at compile time, preventing accidental use of raw number literals
- Replaced inline magic numbers and objects with predefined constants
`FMP4Stream.create()`, `WebRTCStream.create()`, and `RTPStream.create()` now accept a pre-opened `Demuxer` instance as input in addition to URL strings. This enables using device capture or custom I/O as input for streaming (see the sketch below).
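A minimal sketch of passing a pre-opened `Demuxer` instead of a URL; the option values are illustrative and carried over from the FMP4Stream example above:

```typescript
import { Demuxer, FMP4Stream } from 'node-av/api';

// Open any supported input (file, device capture, custom I/O) as a Demuxer...
await using input = await Demuxer.open('rtsp://camera/stream');

// ...and hand the instance to the streaming helper in place of a URL string
const stream = FMP4Stream.create(input, {
  supportedCodecs: 'avc1.640029,mp4a.40.2',
  boxMode: true,
});
await stream.start();
```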
- Enhanced symbol visibility flags and exports for native bindings
- Updated linker flags for improved compatibility
- Fixed external buffer access in Electron environments by refactoring native buffer creation to use `NewOrCopy`
- Ensures safe buffer handling when Node.js buffers are accessed from native code in Electron's process model
- Fixed channel layout formatting for audio frames in filter graphs.
New comprehensive benchmark tool for comparing node-av performance against FFmpeg CLI.
- Transcode speed benchmarks (software and hardware encoding)
- Memory usage measurements
- Latency metrics
Thread count now defaults to 0 (auto-detect) when not explicitly specified. This allows FFmpeg to automatically determine the optimal number of threads based on the system.
Decoders and encoders now buffer output internally; results are retrieved via `receive()`. Proper flushing is required to retrieve all buffered frames at stream end.
Example:

```typescript
// Using async generators - flushing is handled automatically
// input.packets() yields null at EOF which flushes the decoder
for await (const packet of input.packets()) {
  await decoder.decode(packet); // null packet at EOF triggers flush
  while (true) {
    const frame = await decoder.receive();
    if (!frame) break; // EAGAIN - no more frames available yet
    // Process frame...
  }
}
```

- Enhanced `setOption()` to support optional filter-specific parameters
- Allows passing codec-specific options to bitstream filters
- Methods now properly handle `null` frames/packets for explicit EOF signaling
- Enables manual flushing of internal buffers in encoding/decoding chains (see the sketch below)
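A minimal sketch of manual flushing with an explicit `null`, reusing the `encoder`/`output` objects of a typical transcode loop; the drain loop mirrors the decode example above:

```typescript
// Signal EOF to the encoder explicitly with a null frame...
await encoder.encode(null);

// ...then drain all buffered packets until receive() returns null
while (true) {
  const packet = await encoder.receive();
  if (!packet) break;
  await output.writePacket(packet, outputStreamIndex);
}
```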
HardwareContext.auto() now caches the successful hardware device type instead of testing all hardware types on every call. Subsequent calls skip the full hardware test and directly create the cached type (see the sketch below).
- First call: Tests all hardware types, caches the successful device type
- Subsequent calls: Directly creates the cached hardware type (much faster)
- `resetAutoCache()`: Clears cache, forces re-testing on next call
- Custom options bypass cache (always tests)
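A minimal sketch of the caching behavior; `resetAutoCache()` is shown as a static method here, which is an assumption:

```typescript
import { HardwareContext } from 'node-av';

// First call tests all hardware types and caches the successful device type
const hw1 = HardwareContext.auto();

// Subsequent calls skip the full test and create the cached type directly
const hw2 = HardwareContext.auto();

// Clear the cache to force a full re-test on the next auto() call
HardwareContext.resetAutoCache();
```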
- Muxer Option Validation: `Muxer` now throws errors when setting invalid options instead of silently failing
- VAAPI Runtime Check: Added FFmpeg patch for dynamic VAAPI/DRM library loading. Gracefully handles missing libraries instead of crashing.
- `HardwareContext.testDecoder()`: Fixed logic bug where hardware types without codec support (like DRM without VAAPI) were incorrectly accepted. Now properly returns `false` when the hardware doesn't support decoding, ensuring `HardwareContext.auto()` only returns functional hardware acceleration.
- Stability Fix: Improved resource cleanup order in `close()` to properly invalidate FilterContext references before freeing the graph.
The encode, decode, filter, and process methods now follow FFmpeg's send/receive pattern more closely. FFmpeg can produce multiple output frames/packets for a single input (e.g., B-frames in encoding, frame buffering in decoding).
Changes:
- Methods now return `void` instead of a single `Frame` or `Packet`
- You must call `receive()` / `receiveSync()` to retrieve output frames/packets
- Supports proper multi-frame/packet output handling
Migration Example:
```typescript
// Before
const frame = await decoder.decode(packet);
const packet = await encoder.encode(frame);

// After
await decoder.decode(packet);
const frame = await decoder.receive(); // May need to call multiple times

await encoder.encode(frame);
const packet = await encoder.receive(); // May return multiple packets
```

- `Fifo` - Generic FIFO buffer bindings (AVFifo) for arbitrary data types
- `FilterComplexAPI` - Support for complex filtergraphs with multiple inputs/outputs
  - Advanced multi-input/multi-output filter operations
  - Direct mapping to FFmpeg's filtergraph functionality
  - Use cases: overlay, picture-in-picture, side-by-side, multi-stream mixing
- `WhisperTranscriber` - High-level API for automatic speech recognition
  - Based on OpenAI's Whisper model with whisper.cpp integration
  - GPU acceleration support (Metal/Vulkan/OpenCL)
  - Voice Activity Detection (VAD) for better audio segmentation
  - Automatic model downloading from HuggingFace
  - Multiple model sizes: tiny, base, small, medium, large
  - Type-safe transcription segments with precise timestamps
FilterComplexAPI - Picture-in-Picture Effect:

```typescript
import { FilterComplexAPI } from 'node-av/api';

using complex = FilterComplexAPI.create(
  '[1:v]scale=320:240[pip];[0:v][pip]overlay=x=W-w-10:y=H-h-10[out]',
  {
    inputs: [{ label: '0:v' }, { label: '1:v' }],
    outputs: [{ label: 'out' }],
  }
);

for await (using frame of complex.frames('out', {
  '0:v': decoder1.frames(input1.packets(streamIndex1)),
  '1:v': decoder2.frames(input2.packets(streamIndex2)),
})) {
  for await (using packet of encoder.packets(frame)) {
    await output.writePacket(packet, outputStreamIndex);
  }
}
```

WhisperTranscriber - Audio Transcription:
```typescript
import { Demuxer, Decoder, WhisperTranscriber } from 'node-av/api';

using transcriber = await WhisperTranscriber.create({
  model: 'base.en',
  modelDir: './models',
  language: 'en',
  useGpu: true,
});

await using input = await Demuxer.open('podcast.mp3');
using decoder = await Decoder.create(input.audio());

for await (const segment of transcriber.transcribe(decoder.frames(input.packets()))) {
  const timestamp = `[${(segment.start / 1000).toFixed(1)}s - ${(segment.end / 1000).toFixed(1)}s]`;
  console.log(`${timestamp}: ${segment.text}`);
  // [0.0s - 5.2s]: Welcome to the podcast...
  // [5.2s - 10.8s]: Today we will discuss...
  // ...
}
```

Comprehensive improvements to end-of-file handling across the entire API stack, ensuring data integrity and preventing frame/packet loss during stream termination:
- Decoder - Proper EOF propagation through decode/receive pipeline with complete buffer flushing
- Encoder - Correct EOF handling in encode/receive pipeline guaranteeing all buffered packets output
- FilterAPI - Consistent EOF processing through filter chains preventing dropped frames during flush
- Demuxer - Reliable EOF detection and signaling for all stream types
- Muxer - Proper finalization and trailer writing on EOF
- Various bug fixes and stability improvements across the codebase
This release brings the High-Level API closer to FFmpeg CLI behavior, making it more intuitive, stable, and robust for production use.
Class Renaming: The High-Level API classes have been renamed to better reflect their FFmpeg terminology:
- `MediaInput` → `Demuxer`
- `MediaOutput` → `Muxer`
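A minimal before/after sketch of the renaming (the muxer side follows the same pattern):

```typescript
// Before (v2.x)
await using input = await MediaInput.open('input.mp4');

// After (v3.x)
await using input = await Demuxer.open('input.mp4');
// MediaOutput → Muxer is renamed analogously
```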
High-Level API Refactoring: All High-Level API classes (Demuxer, Muxer, Decoder, Encoder, FilterAPI, BitStreamFilterAPI) have been refactored with improved type definitions, option handling, and significantly enhanced stability. Many aspects have been brought closer to FFmpeg CLI behavior, including automatic parameter propagation, metadata preservation, robust error handling, and better defaults. This makes the API more intuitive and production-ready.
Native Bindings Enhancement: Many additional useful utility functions have been added to the native bindings for improved low-level control and functionality.
Migration: Update your imports and class references. Review your High-Level API usage - some option property names/types may have changed. The Low-Level API remains stable.
- FFmpeg Update: Updated to latest FFmpeg master version with newest features, performance improvements, and bug fixes
- Numerous bug fixes and stability improvements across the entire codebase
- RTSP Backchannel/Talkback Support: New methods for bi-directional RTSP communication with IP cameras
- `FormatContext.getRTSPStreamInfo()`: Retrieve detailed stream information including:
  - Transport type (TCP/UDP)
  - Stream direction (sendonly/recvonly/sendrecv)
  - Codec details (ID, MIME type, payload type)
  - Audio properties (sample rate, channels)
  - MIME type
  - FMTP parameters
- `FormatContext.sendRTSPPacket()`: Send RTP packets to RTSP streams with automatic transport handling. Supports both TCP (interleaved) and UDP modes, enabling audio transmission to camera backchannel streams for two-way communication.
- Use cases: IP camera talkback/intercom functionality, security system audio announcements, remote audio injection, WebRTC integration with original SDP parameters
- See `examples/rtsp-stream-info.ts` for detailed RTSP stream inspection including FMTP parameters
- See `examples/browser/webrtc` for a complete implementation of RTSP talkback.
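A hypothetical usage sketch; the setup calls and the argument shapes of `getRTSPStreamInfo()` and `sendRTSPPacket()` are assumptions here — see the referenced examples for the actual signatures:

```typescript
import { FormatContext } from 'node-av';

// Hypothetical low-level setup (actual open/configuration calls may differ)
const fmt = new FormatContext();
await fmt.openInput('rtsp://camera.local/stream');

// Inspect negotiated RTSP streams: transport, direction, codec, FMTP, ...
const streams = fmt.getRTSPStreamInfo();
const backchannel = streams.find((s) => s.direction === 'sendonly'); // assumed shape

// Send an RTP audio packet toward the camera backchannel (argument shape assumed)
fmt.sendRTSPPacket(backchannel.index, rtpPacket);
```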
`MediaInput`: Custom I/O callbacks support via `IOInputCallbacks`
```typescript
import { MediaInput } from 'node-av/api';
import type { IOInputCallbacks } from 'node-av/api';

const callbacks: IOInputCallbacks = {
  read: (size: number) => {
    // Read data from custom source
    return buffer; // or null for EOF
  },
  seek: (offset: bigint, whence: AVSeekWhence) => {
    // Seek in custom source
    return offset;
  }
};

await using input = await MediaInput.open(callbacks, {
  format: 'mp4',
  bufferSize: 8192
});
```

`MediaInput`: Buffer input support in synchronous mode
- `MediaInput.openSync()` now accepts `Buffer` input
- Previously restricted due to callback requirements
- Enabled by direct callback invocation improvements

- Critical: Fixed deadlock when using the `using` keyword with `IOOutputCallbacks`
- `MediaOutput` with custom I/O callbacks now properly closes synchronously
- Direct callback invocation in the same thread eliminates event loop dependency
```typescript
// This now works without deadlock!
try {
  using output = MediaOutput.openSync(callbacks, { format: 'mp4' });
  // ... write packets
  // Automatically closes without deadlock
} catch (e) {
  console.error('Error caught correctly!', e); // ✅ Works now
}
```

`WebRTCSession`: Complete WebRTC streaming with SDP negotiation and ICE handling
- Automatic codec detection and transcoding (H.264, H.265, VP8, VP9, AV1 video; Opus, PCMA, PCMU audio)
- Hardware acceleration support
- Werift integration for peer connection management
```typescript
import { WebRTCSession } from 'node-av/api';

const session = await WebRTCSession.create('rtsp://camera.local/stream', {
  hardware: 'auto'
});

// Handle signaling
session.onIceCandidate = (candidate) => ws.send({ type: 'candidate', candidate });
const answer = await session.setOffer(sdpOffer);
await session.start();
```

`WebRTCStream`: Library-agnostic WebRTC streaming with RTP callbacks for custom WebRTC implementations
`FMP4Stream`: Fragmented MP4 streaming for Media Source Extensions
- Browser codec negotiation (H.264, H.265, AV1 video; AAC, FLAC, Opus audio)
- Automatic transcoding based on browser support
- Hardware acceleration support
```typescript
import { FMP4Stream, FMP4_CODECS } from 'node-av/api';

const stream = await FMP4Stream.create('input.mp4', {
  supportedCodecs: 'avc1.640029,mp4a.40.2', // From browser
  hardware: 'auto',
  onChunk: (chunk) => ws.send(chunk)
});

const codecString = stream.getCodecString(); // For MSE addSourceBuffer()
await stream.start();
```

`FMP4_CODECS`: Predefined codec strings (H.264, H.265, AV1, AAC, FLAC, Opus)

⚠️ Version 3.x is NOT compatible with version 2.x due to FFmpeg major version upgrade
- Native bindings rebuilt against FFmpeg 8.0 (was 7.1.2 in v2.x)
- Updated FFmpeg from 7.1.2 to 8.0
- Deprecated FFmpeg 7.x APIs and constants that were removed in FFmpeg 8.0
version <next>:
- ffprobe -codec option
- EXIF Metadata Parsing
- gfxcapture: Windows.Graphics.Capture based window/monitor capture
- hxvs demuxer for HXVS/HXVT IP camera format
- MPEG-H 3D Audio decoding via mpeghdec
version 8.0:
- Whisper filter
- Drop support for OpenSSL < 1.1.0
- Enable TLS peer certificate verification by default (on next major version bump)
- Drop support for OpenSSL < 1.1.1
- yasm support dropped, users need to use nasm
- VVC VAAPI decoder
- RealVideo 6.0 decoder
- OpenMAX encoders deprecated
- libx265 alpha layer encoding
- ADPCM IMA Xbox decoder
- Enhanced FLV v2: Multitrack audio/video, modern codec support
- Animated JPEG XL encoding (via libjxl)
- VVC in Matroska
- CENC AV1 support in MP4 muxer
- pngenc: set default prediction method to PAETH
- APV decoder and APV raw bitstream muxing and demuxing
- APV parser
- APV encoding support through a libopenapv wrapper
- VVC decoder supports all content of SCC (Screen Content Coding): IBC (Inter Block Copy), Palette Mode and ACT (Adaptive Color Transform)
- G.728 decoder
- pad_cuda filter
- Sanyo LD-ADPCM decoder
- APV in MP4/ISOBMFF muxing and demuxing
- OpenHarmony hardware decoder/encoder
- Colordetect filter
- Add vf_scale_d3d11 filter
- No longer disabling GCC autovectorization, on X86, ARM and AArch64
- VP9 Vulkan hwaccel
- AV1 Vulkan encoder
- ProRes RAW decoder
- ProRes RAW Vulkan hwaccel
- ffprobe -codec option
- HDR10+ metadata passthrough when decoding/encoding with libaom-av1
- The Whisper filter from FFmpeg 8.0 is not yet available in this release and will be implemented in a future update
- Automatic Hardware Decoder Selection: `Decoder.create()` now automatically selects hardware decoders when a hardware context is provided
- Mimics FFmpeg CLI behavior: "Selecting decoder 'hevc_qsv' because of requested hwaccel method qsv"
- New `HardwareContext.getDecoderCodec()` method to find hardware-specific decoders (e.g., `hevc_qsv`)
- Falls back to software decoder if no hardware decoder is available
- Works with both async `create()` and sync `createSync()` methods (see the sketch below)
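A minimal sketch, assuming the hardware context is passed through a `hardware` option on the decoder:

```typescript
import { Demuxer, Decoder } from 'node-av/api';
import { HardwareContext } from 'node-av';

await using input = await Demuxer.open('input.mp4');
const hw = HardwareContext.auto();

// With a hardware context, a matching hardware decoder (e.g. hevc_qsv) is
// selected automatically; otherwise the software decoder is used as fallback
using decoder = await Decoder.create(input.video()!, { hardware: hw });
```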
- QSV Filter Support: `FilterPresets` now uses the `vpp_qsv` filter for Intel Quick Sync Video instead of `scale_qsv`
- Hardware Frame Allocation Control: Added `extraHWFrames` option to `DecoderOptions` and `FilterOptions` for controlling hardware frame buffer size (see the sketch below)
  - Low-level access via `codecContext.extraHWFrames` and `filterContext.extraHWFrames`
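Continuing the decoder setup from the sketch above, a minimal example of the option; the value is illustrative:

```typescript
// Reserve a few extra frames in the hardware decoder's frame pool
using decoder = await Decoder.create(input.video()!, {
  hardware: hw,       // assumed option name, as above
  extraHWFrames: 8,   // illustrative value
});
```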
- High-Level API Error Handling: All high-level API methods now return `null` instead of throwing errors when resources are closed
  - Affected methods: `decode()`, `encode()`, `process()`, `flush()` and their sync variants
  - Generator methods (`frames()`, `packets()`, etc.) now exit gracefully when the `closed`/`isClosed` flag is set
  - Improves error handling in cleanup scenarios (see the sketch below)
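A minimal sketch of the closed-state behavior during cleanup:

```typescript
decoder.close();

// Previously this would throw on a closed decoder; now it returns null
const frame = await decoder.decode(packet);
if (frame === null) {
  // Resource already closed - nothing left to process
}
```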
- BitStreamFilterAPI Lifecycle Management:
  - Renamed `dispose()` method to `close()` for consistency with other high-level APIs
  - Added `isBitstreamFilterOpen` getter to check filter state
  - `Symbol.dispose` still supported for automatic cleanup with the `using` statement

- Consistent Closed State Behavior:
  - Methods check closed state and return `null` instead of throwing exceptions
  - Generator loops respect closed state
- Log Callback Event Loop: Fixed Node.js process not exiting when using `Log.setCallback()` (see the sketch below)
  - ThreadSafeFunction is now unref'd to prevent keeping the event loop alive
  - Proper cleanup order in `SetCallback()` and `ResetCallback()`
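A minimal sketch of installing a log callback; the import path and the callback parameter shape are assumptions:

```typescript
import { Log } from 'node-av';

// Route FFmpeg log output into the application logger; with the callback
// installed, the Node.js process can still exit normally
Log.setCallback((level, message) => {
  console.log(`[ffmpeg:${level}] ${message}`);
});
```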
- VideoToolbox Patch: Fixed "Duplicated pixel format" error in hardware acceleration
  - Corrected patch 1006 to avoid duplicate `AV_PIX_FMT_BGRA` entries in `supported_formats[]`
  - Added `AV_PIX_FMT_GRAY8` and `AV_PIX_FMT_RGB24` to VideoToolbox format support
Added FrameUtils class for efficient image processing of NV12 video frames. This native implementation provides crop, resize, and format conversion operations with internal resource pooling for improved performance in streaming scenarios.
Usage:

```typescript
import { FrameUtils } from 'node-av/lib';

// Initialize once for your input dimensions
const processor = new FrameUtils(1920, 1080);

// Process frames with various operations
const output = processor.process(nv12Buffer, {
  crop: { left: 100, top: 100, width: 640, height: 480 },
  resize: { width: 1280, height: 720 },
  format: { to: 'rgba' }
});

processor.close();

// Automatic cleanup with using statement
{
  using processor = new FrameUtils(320, 180);
  // Process frames...
} // Automatically disposed
```

- FFmpeg Binary Access: New `node-av/ffmpeg` entry point provides easy access to FFmpeg binaries (see the sketch below)
  - `ffmpegPath()` - Get path to FFmpeg executable
  - `isFfmpegAvailable()` - Check if FFmpeg binary is available
  - Automatic download and installation of platform-specific FFmpeg binaries from GitHub releases
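A minimal sketch using the new entry point; whether the helpers return synchronously is an assumption:

```typescript
import { ffmpegPath, isFfmpegAvailable } from 'node-av/ffmpeg';

// Check for the bundled binary before using it
if (isFfmpegAvailable()) {
  console.log('FFmpeg binary at:', ffmpegPath());
}
```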
- Added Windows MSVC builds alongside existing MinGW builds for better compatibility
- Now distributes both `@seydx/node-av-win32-x64-msvc` and `@seydx/node-av-win32-x64-mingw` packages
- Now includes standalone FFmpeg v7.1.2 binaries as release assets for all supported platforms:
  - Jellyfin builds: `ffmpeg-v7.1.2-{platform}-jellyfin.zip` (Windows MinGW, Linux, macOS)
  - MSVC builds: `ffmpeg-v7.1.2-win-{arch}.zip` (Windows MSVC only)
- Updated FFmpeg from 7.1 to 7.1.2 with latest performance improvements and bug fixes
Added synchronous variants for all async methods to eliminate AsyncWorker overhead and achieve near-native FFmpeg performance. Every async method now has a corresponding variant with a `Sync` suffix.
Performance Improvements:
- Eliminates N-API AsyncWorker overhead for CPU-bound operations
- Near-native FFmpeg performance for sequential processing
Usage:
```typescript
// Async version (non-blocking, good for concurrent operations)
const frame = await decoder.decode(packet);
for await (const packet of input.packets()) { /* ... */ }

// Sync version (faster for sequential processing)
const frame = decoder.decodeSync(packet);
for (const packet of input.packetsSync()) { /* ... */ }
```

The hardware option has been removed from Encoder.create(). Hardware context is now automatically detected from input frames.
```typescript
// Before (v1.x)
const hw = HardwareContext.auto();
const encoderCodec = hw.getEncoderCodec('h264'); // e.g., returns FF_ENCODER_H264_VIDEOTOOLBOX
const encoder = await Encoder.create(encoderCodec.name, streamInfo, {
  hardware: hw,
  bitrate: 5000000
});

// After (v2.0)
const hw = HardwareContext.auto();
const encoderCodec = hw.getEncoderCodec('h264');
const encoder = await Encoder.create(encoderCodec, {
  bitrate: 5000000
});
// Hardware context automatically detected from input frames

// Or using typed constants directly:
import { FF_ENCODER_H264_VIDEOTOOLBOX } from '@seydx/av/constants';
const encoder = await Encoder.create(FF_ENCODER_H264_VIDEOTOOLBOX, { bitrate: 5000000 });
```

HardwareFilterPresets class has been removed. Use FilterPreset with chain() for hardware acceleration.
```typescript
// Before (v1.x)
const hw = HardwareContext.auto();
const hwFilter = new HardwareFilterPresets(hw);
const filter = hwFilter.scale(1920, 1080);

// After (v2.0)
const hw = HardwareContext.auto();
const filterChain = FilterPreset.chain(hw).scale(1920, 1080).build(); // Pass hardware context to chain
```

No longer need to manually manage headers and trailers.
```typescript
// Before (v1.x)
const output = await MediaOutput.create('output.mp4');
await output.writeHeader();
// ... write packets ...
await output.writeTrailer();
await output.close();

// After (v2.0)
await using output = await MediaOutput.create('output.mp4');
// ... write packets ...
// Header/trailer handled automatically, close on dispose
```

- More Filter presets
- Better error messages throughout the API
- Video duration calculation issues (was showing 10000+ seconds instead of actual duration)
- Memory management in filter buffer handling
- Dictionary.fromObject to properly handle number values
- Codec context initialization and cleanup
- `HardwareFilterPresets` class (replaced by enhanced `FilterPreset`)
- Manual `writeHeader()` and `writeTrailer()` requirements in MediaOutput
- Unused stream information types from type exports
- Initial Release