Guide to nGene Media Player v 1.8 (c)
Topic
Details
Purpose
Self-contained, resizable HTML5 player for audio (MP3/M4A) and video (MP4/MOV/WEBM). Pure vanilla JS—no frameworks required. New since v 1.8: tempo-aware track-list showing BPM (integer-rounded), auto-loading from tempo_meta.json; initial volume defaults to 17 % at page-load.
File locations
Place nmp.html anywhere.
Media files live in a sibling /media/ folder.
Ensure readable permissions with chmod 644 *.
Playlist
Optional /media/playlist.json—an array of paths (order preserved). If absent, the player simply waits for user uploads.
Tempo metadata
Run extract_meta_from_media.py v 2.4 to generate tempo_meta.json (single integer-rounded bpm). Player displays it beside each track and in the title-bar as “### BPM”.
Uploads
➕ Upload button and drag-&-drop. Files become blob-URLs, so nothing is written to disk.
First-30-second attention cue
Uploader border, hint-text and container gently pulse, glow and scale every 2 s for the first 30 s after page-load.
A-B Looping
Seek-bar sports two cerulean “brackets”:
• A handle “[” — left edge marks loop-start.
• B handle “]” — right edge marks loop-end.
Drag to set; ultramarine bar fills the loop range. ✖ Clear button instantly resets the loop.
Click-to-toggle video
Click anywhere on the visible video to play/pause; the ⏸︎/▶︎ button stays synchronised.
Autoplay
The first track auto-starts; subsequent behaviour follows Repeat Mode.
Repeat Mode
Begins at 🔂 One (loop current). Button cycles: 🔂 One → 🔁 All → 🔁 Off.
Controls
⏮︎ Prev • ⏸︎/▶︎ Toggle • ⏭︎ Next • Repeat — plus ✖ Loop-Clear beside the seek-bar.
Seek & Time
Sleek seek-bar with live “elapsed / total” timer, integrated A-B loop handles and ultramarine fill.
Volume
Smooth 0–100 % slider with live percentage label; initial default 17 % (0.17).
Speed
0.70× – 2.00× slider with − / + step buttons and 1× reset. Applies to audio & video.
Resizable wrapper
Outer .wrapper uses resize:both; default width governed by --w (360 px). Track-list is vertically resizable.
Accent colour
Edit --accent (default #1e90ff) to rebrand buttons, slider thumbs, active-track row and uploader pulse.
Source-code reveal
Built-in “Full Source Code” accordion shows the entire page, syntax-highlighted via Highlight.js.
Namespace
All logic wrapped in an IIFE; CSS uses local class names—safe to embed anywhere.
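The Namespace row can be illustrated with a minimal sketch of the pattern; the names nmp, load, and next are illustrative, not the player's actual internals:

```javascript
// Minimal sketch of the IIFE pattern the guide describes: all player
// state lives inside the function scope, and only a small handle is
// exposed. Names here are hypothetical, not the player's real API.
const nmp = (function () {
  let currentIndex = 0;          // private state, invisible outside
  const playlist = [];
  function load(paths) { playlist.push(...paths); }
  function next() {
    if (playlist.length === 0) return null;
    currentIndex = (currentIndex + 1) % playlist.length;
    return playlist[currentIndex];
  }
  return { load, next };         // the only surface exposed
})();
```

Because `playlist` and `currentIndex` never reach the enclosing scope, the player can be embedded in any page without clobbering (or being clobbered by) other scripts.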
Media Format and Codec Overview
Modern media players should support a variety of audio and video file formats. Below is an overview of commonly used formats, including their typical use cases, compatibility considerations, licensing issues, technical notes, and recommendations for use. Emphasis is placed on desktop and HTML5/JavaScript environments.
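Since the overview below identifies formats by file extension, here is a sketch of the extension-to-MIME mapping a player might use when constructing sources. The mapping reflects common registrations; the helper name is ours:

```javascript
// Hypothetical helper mapping file extensions to the MIME types a
// browser expects for the formats discussed in this overview.
const MIME_BY_EXT = {
  mp3:  "audio/mpeg",
  m4a:  "audio/mp4",
  aac:  "audio/aac",
  ogg:  "audio/ogg",
  opus: "audio/ogg",            // Ogg Opus is usually served as audio/ogg
  flac: "audio/flac",
  wav:  "audio/wav",
  mp4:  "video/mp4",
  mov:  "video/quicktime",
  webm: "video/webm",
};

function mimeForFile(name) {
  const ext = name.toLowerCase().split(".").pop();
  return MIME_BY_EXT[ext] || "application/octet-stream";
}
```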
Common Audio Formats
MP3 (MPEG Audio Layer III)
Typical Use Cases & Popularity:
MP3 is one of the most ubiquitous audio formats for music and podcasts. It gained popularity for its efficient compression and acceptable quality, making it the standard for digital music distribution for decades. It is commonly used for streaming audio, music libraries, and virtually any scenario where audio files are shared.
Browser & Platform Support:
Support for MP3 is universal across modern browsers and operating systems. All major browsers (Chrome, Firefox, Safari, Edge, etc.) can play MP3 files in an HTML5 <audio> element. Likewise, almost every media player and mobile device supports MP3 out of the box. This wide compatibility makes MP3 a safe choice for any web-based player.
Licensing & Limitations:
MP3 was historically patented, but as of 2017 all relevant patents have expired, meaning there are no longer licensing fees required to use MP3 encoding or decoding, and no significant legal restrictions on its use. The format does not support multichannel audio beyond stereo (no native surround sound), and it is a lossy format (audio quality is reduced compared to the original).
Technical Considerations:
MP3 provides lossy compression with file sizes roughly 1/10 of raw audio, depending on bitrate (common bitrates are 128–320 kbps for music). It supports metadata via ID3 tags (ID3v1 and ID3v2), which can store title, artist, album, cover art, etc., within the file. MP3 files are easily streamable and seekable; an MP3 can be progressively downloaded or streamed, and VBR encoders typically write a Xing/Info header that lets players seek accurately to different timestamps. Being an older format, it lacks some technical improvements of newer codecs (for example, it struggles at very low bitrates compared to modern codecs), but it remains efficient for most purposes.
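The size arithmetic above can be made concrete with a small sketch (decimal megabytes, ignoring ID3 tags and container overhead):

```javascript
// Rough file-size estimate for constant-bitrate lossy audio.
// bitrateKbps is kilobits per second; result is decimal megabytes.
function lossyFileSizeMB(bitrateKbps, durationSeconds) {
  const bits = bitrateKbps * 1000 * durationSeconds;
  return bits / 8 / 1e6;
}

// A 4-minute track at 320 kbps comes to about 9.6 MB;
// the same track at 128 kbps is about 3.8 MB.
const high = lossyFileSizeMB(320, 240);
const low  = lossyFileSizeMB(128, 240);
```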
Recommendation:
MP3 is still a recommended default format for audio in a general-purpose media player due to its universal support and lack of licensing hurdles. For any desktop or web application where broad compatibility is needed, including MP3 support is essential. Its audio quality at higher bitrates is adequate for most users, though for pristine quality or more efficient compression other formats may be considered as supplements.
AAC / M4A (Advanced Audio Coding)
Typical Use Cases & Popularity:
AAC is the audio codec often packaged in the M4A or MP4 container. It is the successor to MP3 in many ways, offering better audio quality at similar bitrates. AAC is widely used in streaming (e.g., Apple Music, YouTube audio, etc.), radio broadcasts (DAB+ uses AAC), and is the default for many modern platforms. M4A files (which are essentially MP4 containers with only audio, usually AAC) are common for music purchased or downloaded from services like iTunes and are used when a slightly higher quality or more modern codec than MP3 is desired.
Browser & Platform Support:
AAC audio is supported by all major browsers, primarily when contained in an MP4 or M4A file. For example, an M4A file with AAC audio will play in an HTML5 <audio> element in Chrome, Firefox, Safari, Edge, etc. (Firefox historically relied on OS codecs for AAC, but on modern systems this is seamless). Virtually all smartphones and tablets support AAC playback (it is the default for iOS devices). In summary, AAC in MP4/M4A has near-universal support similar to MP3, though very old browsers or devices may lack it.
Licensing & Limitations:
AAC is a patented format (under MPEG-LA/Via Licensing). Technically, implementers of AAC encoders/decoders are supposed to obtain a license. However, for a media player using the browser’s built-in decoding, this is not a direct concern (browser vendors have taken care of licensing). There are no fees for end-users to play AAC. AAC offers lossy compression; its quality at a given bitrate generally surpasses MP3, especially at lower bitrates. Like MP3, it’s typically stereo for music (though AAC can support multichannel audio in other contexts and is used for surround sound in movies). One limitation is that creating or distributing an independent AAC encoder requires dealing with licensing. Also, older devices or software (pre-2000s era) might not support AAC, whereas they might support MP3.
Technical Considerations:
AAC supports a range of profiles (AAC-LC, HE-AAC, HE-AAC v2, etc.), where newer profiles are optimized for extremely low bitrates (HE-AAC uses spectral band replication and is used in streaming radio at 48 kbps or less, for example). In a typical scenario, AAC-LC (Low Complexity) at 128-256 kbps provides excellent audio quality. M4A files can contain metadata similar to MP3’s ID3 tags (in MP4, metadata atoms can store title, artist, album, cover art, etc.). Seeking and streaming of AAC in MP4 is very good: MP4 containers have moov atoms that index the file for quick seeking and are designed for progressive download and streaming. AAC is also used in video files (MP4) as the audio track, meaning a player already handling MP4 video implicitly has AAC audio support via the browser.
Recommendation:
AAC (in M4A/MP4) is highly recommended as a modern audio format, often side by side with MP3. For a web media player, supporting AAC is important (and typically comes with supporting MP4). It can be considered a default for high-quality audio if one doesn’t mind the patent status, as its quality/compression is superior. Many platforms have already shifted to AAC as the default (e.g., streaming services), so a player intended for broad use should handle it. In practice, having both MP3 and AAC support covers virtually all common audio content a user will have.
Ogg Vorbis (and Opus)
Typical Use Cases & Popularity:
Ogg Vorbis was one of the first successful open-source audio codecs, often used in open content projects (like Wikipedia audio, many open-source games, and earlier digital music stores focused on Linux). While not as ubiquitous as MP3 or AAC, Vorbis saw adoption in applications like Spotify (early on) and is still used in some streaming (internet radio, etc.). Opus is a more recent codec (standardized in 2012) that combines technologies from Vorbis and Skype’s SILK codec. Opus is now widely used for real-time communication (it’s the audio codec for WebRTC) and for streaming in Discord, WhatsApp, etc., and is considered state-of-the-art for lossy audio compression at a wide range of bitrates. Opus can be stored in an Ogg container (usually with a .opus or .ogg extension) or in a WebM container for web video/audio.
Browser & Platform Support:
Ogg Vorbis audio is supported in most modern browsers. Chrome, Firefox, Opera, and the Chromium-based Edge all support Vorbis in an <audio> element (.ogg files). Safari is the exception: as of Safari 15 (macOS Monterey and iOS 15) it added support for WebM, including Opus in WebM, but it still does not natively play .ogg Vorbis files unless additional components are installed. Vorbis support is therefore near-universal on desktop except in Safari. Opus is supported in Chrome, Firefox, Opera, and Edge; Safari added Opus support when contained in WebM (Safari 15+), but even recent Safari may not play a standalone .opus or Ogg Opus file, since its Opus support is tied to the WebM container. On desktop, most third-party audio players support Vorbis, and many now support Opus as it gains popularity. In summary, for HTML5: Vorbis is widely supported except in Safari; Opus is supported by all major browsers except older Safari (though Safari is catching up via WebM).
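The support matrix above is exactly what canPlayType feature detection answers at runtime. A sketch, with the media element injected so the logic can also run outside a browser; the codec strings used here are the common ones, but treat the exact set as an assumption:

```javascript
// Probe audio codec support via HTMLMediaElement.canPlayType, which
// returns "probably", "maybe", or "". In a browser you would pass
// document.createElement("audio"); the element is injected here so
// the logic is testable without one.
function detectAudioSupport(audioEl) {
  const can = (type) => audioEl.canPlayType(type) !== "";
  return {
    mp3:    can("audio/mpeg"),
    aac:    can('audio/mp4; codecs="mp4a.40.2"'),  // AAC-LC in MP4
    vorbis: can('audio/ogg; codecs="vorbis"'),
    opus:   can('audio/ogg; codecs="opus"'),
    flac:   can("audio/flac"),
  };
}
```

A player can use the result to grey out or hide unsupported entries before attempting playback.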
Licensing & Limitations:
Both Vorbis and Opus are royalty-free and open. Vorbis was developed by the Xiph.org Foundation explicitly to avoid patent issues, and Opus was standardized through IETF with contributors making it royalty-free. There are no licensing fees to use these codecs or include them in applications. As for limitations: Vorbis, being older, does not perform as well at very low bitrates (below ~64 kbps) and isn’t as efficient as Opus or AAC in certain cases. Opus, while excellent, is more complex to implement encoding for (but decoding is lightweight). Another limitation is ecosystem: MP3/AAC are so entrenched that .ogg or .opus files might be rarer in a user’s personal library unless they specifically seek open formats. Also, the Ogg container doesn’t officially support certain metadata as richly as ID3 (it uses Vorbis comments, which are flexible but lack standardized fields for some less common tags).
Technical Considerations:
Vorbis offers better audio quality than MP3 at a given bitrate, especially noticeable at medium to high quality settings. It typically uses the .ogg container for audio-only files, which can also multiplex Theora video (.ogv files) or Opus audio. Seeking in Ogg is reasonably well supported; the container has no global index, so players seek by bisection over timestamps, and browsers can seek if the file is fully downloaded or if the web server supports byte-range requests. Opus is very flexible: it can seamlessly adapt from very low bitrate speech to high-quality music. Opus files can use a variety of sample rates internally but are typically presented as 48 kHz. Both Vorbis and Opus use the Ogg container for standalone files, and both use Vorbis-style comment headers for metadata (title, artist, album, etc., and even cover art if a METADATA_BLOCK_PICTURE field is used). For streaming: Vorbis was widely used in Icecast/Shoutcast streams; Opus is now used in WebRTC and some streaming radio as well. Both are well suited to streaming, with low latency and small frame sizes (Opus especially excels at low-latency streaming).
Recommendation:
For an open-source oriented media player, supporting Ogg Vorbis and Opus is highly encouraged. They provide freedom from patent worries and excellent quality (Opus in particular often outperforms other codecs). However, because of Safari’s historical lack of support, it may not be wise to use these as the only format for web content if targeting a broad audience. In practice, one would offer Vorbis/Opus in addition to MP3/AAC. For instance, nGene Media Player can support .ogg/.opus files so that users who have audio in those formats can play them, and possibly use Opus internally if it ever encodes or records audio. Opus is the recommended choice for any new project that needs a versatile audio format (especially if targeting modern environments or uses like chat, recordings, etc.), while Vorbis ensures compatibility with legacy open audio. They are not the “default” for general consumer media (which is still MP3/AAC), but they are important in a comprehensive media player feature set.
FLAC (Free Lossless Audio Codec)
Typical Use Cases & Popularity:
FLAC is a popular format for lossless audio. It is widely used by audiophiles for music archival, by musicians and studios for distributing masters, and by anyone who wants to preserve exact audio quality. FLAC compresses audio without any loss in quality (unlike MP3/AAC), typically reducing file size to about 50–60% of the original WAV. It's common to find FLAC versions of albums on band websites or as downloads accompanying vinyl or CD purchases. While rarely used for mass-market streaming (due to its size), it's popular for personal music collections and any scenario where storage and bandwidth can accommodate it and quality is paramount.
Browser & Platform Support:
As of the mid-2010s, browser support for FLAC has become quite good. Chrome and Firefox support FLAC playback in <audio> (Chrome since version 56, Firefox since 51). Safari added FLAC support in version 11 (around macOS High Sierra). This means modern versions of all major browsers can play .flac files directly, though older browsers or old mobile devices might not. Outside the browser, FLAC is supported by many desktop music players (e.g., VLC, foobar2000) and even by some car audio systems and high-end portable music players. On Windows and macOS, FLAC can be played with native or easily available codecs (Windows 10 added native FLAC support to its media player). One caveat: some browsers may only support FLAC in certain container forms (usually the .flac extension with the FLAC codec; FLAC-in-Ogg has a different support matrix). In general, .flac files (the official container/extension) are recognized by modern browsers.
Licensing & Limitations:
FLAC is open-source and royalty-free (its reference implementation is BSD licensed), with no known patent concerns. The main limitation of FLAC is file size compared to lossy formats: a 5-minute song in FLAC might be 20–30 MB (at CD quality, 44.1 kHz/16-bit), whereas the same song is around 12 MB as a 320 kbps MP3, or 5 MB at 128 kbps. Thus, FLAC is not efficient for streaming over limited bandwidth. Another limitation is that FLAC, being lossless, cannot scale down to low bitrates at all; it is always full quality. For distribution, users have to explicitly choose FLAC for quality; otherwise, many casual listeners prefer smaller files. As storage and bandwidth increase, FLAC's popularity in consumer use is slowly growing (some streaming platforms even offer FLAC to premium subscribers). There are no playback performance issues: decoding FLAC is not CPU intensive, and while encoding is heavier than encoding MP3, that is usually done offline.
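The 5-minute-song comparison works out as follows; the 55% ratio below is a mid-range assumption taken from the 50–60% compression figure quoted earlier:

```javascript
// Uncompressed PCM size for a recording, then a FLAC estimate.
// The 0.55 ratio is an assumed mid-range value; the real ratio
// depends on the audio content.
function pcmSizeMB(sampleRate, bitDepth, channels, seconds) {
  return (sampleRate * (bitDepth / 8) * channels * seconds) / 1e6;
}

const wavMB  = pcmSizeMB(44100, 16, 2, 300); // 5-minute CD-quality track
const flacMB = wavMB * 0.55;                 // lossless, but ~45% smaller
// wavMB is about 52.9 MB; flacMB is about 29 MB.
```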
Technical Considerations:
FLAC supports various bit depths and sampling rates (from 16-bit/44.1 kHz CD quality up to 24-bit/192 kHz and beyond), making it suitable for high-resolution audio. It compresses by finding patterns in the audio data (lossless compression similar to ZIP but optimized for audio). FLAC files carry metadata as Vorbis comments, a flexible tagging system, and can also embed album art images and cue sheets for gapless playback indexing. Streaming a FLAC file is possible (the <audio> element will progressively download it), but users will experience delays on slow connections due to file size. Seeking in FLAC is typically good because frames contain markers that allow jumping to approximate positions, and most encoders write a seek table into the file. For our context (a desktop web player), a local FLAC file should play smoothly; a FLAC hosted online requires downloading a large amount of data but can start playing once a little is buffered. There is no technical issue playing partial FLAC data aside from the bandwidth concern.
Recommendation:
Supporting FLAC in a media player is highly beneficial for users who value audio quality. For a desktop-focused player, it is quite likely that some users will have FLAC files (since on desktop, people often manage large music libraries). It is recommended to include FLAC support so that such users can play their lossless files. However, FLAC should not replace MP3/AAC as a default for general use on the web, because of the much larger file sizes. Think of FLAC as a premium option: the player should handle it, display its metadata (which often includes detailed tags and high-resolution cover art), and perhaps even indicate that the track is lossless. For distribution or general sharing, one would still default to a lossy format, but having FLAC capability makes the player versatile. In summary: include FLAC support for completeness, but use it when lossless audio is required; do not use FLAC as the primary format for streaming or casual listening scenarios.
WAV (Waveform Audio File Format / PCM)
Typical Use Cases & Popularity:
WAV is a raw audio format (often containing PCM – Pulse Code Modulation – data). It is the standard audio format for uncompressed audio on Windows and is widely used in professional audio recording and editing. When someone exports audio from a digital audio workstation (DAW) for mixing or mastering, they often use WAV to preserve quality. WAV files are also common for short sound clips, system sounds, or any case where compression isn't applied. In consumer use, one doesn't often encounter long music tracks in WAV form (due to size), but it's not unheard of (some people maintain WAV libraries or share WAVs to avoid any loss). Essentially, WAV is popular as the lowest-common-denominator audio format and for its simplicity (no complex encoding, just raw samples).
Browser & Platform Support:
Browser support for WAV is broad. All major browsers that support the audio element can play PCM WAV files (.wav extension). This has been supported for a long time because WAV is such a basic format. On operating systems, WAV can be played natively on Windows, and on other OSes there is built-in or easily available support (macOS can play WAV through QuickTime/CoreAudio, etc.). One nuance: WAV is a container that can technically hold compressed audio (like ADPCM), but the most common usage is uncompressed PCM at 44.1 kHz 16-bit stereo (CD quality) or similar. Browsers typically support PCM in WAV; they may not support an obscure codec in a WAV container. In practice, virtually any WAV that comes from standard sources will play in the browser. The only limitation is file size handling (very large WAV files might be slow to load, but the browser streams them like other media).
Licensing & Limitations:
There are no licensing issues with WAV or PCM audio – it's an open format (created by Microsoft and IBM as part of the RIFF specification in 1991). It's essentially just raw data. The big limitation of WAV is the lack of compression: files are huge. CD-quality stereo uses roughly 10 MB per minute (about 5 MB per minute per channel), so a 3-minute song is around 30 MB in WAV, roughly ten times larger than a decent-quality MP3. Because of this, WAV is impractical for distributing lots of music over the internet or storing on devices with limited capacity. Another limitation is metadata: WAV has an INFO chunk and can carry ID3 or LIST chunks, but tagging is not as standardized or rich as ID3 in MP3 or Vorbis comments. Most WAV files have minimal metadata aside from perhaps an embedded ID3 tag or just a filename. WAV also typically supports only up to 4 GB per file due to its 32-bit length fields (the RF64 variant extends this), which is plenty for audio but can be hit with extremely long recordings or high sample rates.
Technical Considerations:
WAV is straightforward: it usually contains PCM audio data in various bit depths and sampling rates. Because it's raw, playback uses more bandwidth and storage but minimal CPU (no decoding needed beyond byte-order handling). Streaming a WAV in the browser works like streaming any file, but with no compression the download time for a given duration is much longer than for a compressed file. Seeking in a WAV file is easy and precise because the byte position of any sample frame can be computed directly (no time index needed); this is one reason WAV is used in editing, where random access is trivial. For a media player, handling WAV just means passing it to the audio element. Unusual sample rates are resampled by the browser's audio pipeline, so the player rarely needs to convert anything itself.
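Because each PCM frame has a fixed size, the byte offset of any timestamp can be computed directly, which is why seeking needs no time index. A sketch assuming the minimal 44-byte canonical header (real files may carry extra chunks before the data chunk, so treat the default as an assumption):

```javascript
// Compute the byte offset of a timestamp in a canonical PCM WAV.
// Each frame is (bitDepth / 8) * channels bytes; the 44-byte header
// is the minimal canonical layout, not a guarantee for every file.
function wavByteOffset(seconds, sampleRate, bitDepth, channels, headerBytes = 44) {
  const bytesPerFrame = (bitDepth / 8) * channels;
  const frame = Math.floor(seconds * sampleRate);
  return headerBytes + frame * bytesPerFrame;
}

// 10 s into CD-quality stereo: 44 + 441000 frames * 4 bytes = 1,764,044.
```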
Recommendation:
It is worthwhile for nGene Media Player to support WAV, mainly to ensure that if a user tries to play an uncompressed audio file, it works. Many desktop users, especially in professional or archival contexts, might have WAV files (e.g., recordings or sound effects). While WAV is not a distribution format, including support is low-effort (browsers handle it) and adds robustness. That said, WAV should not be used as a default format for everyday use when a compressed alternative can be used, given the file sizes. The player can treat WAV as a source format—if someone drops a WAV file in, it plays—but in terms of guiding users, one would typically convert WAV to FLAC for lossless storage or to MP3/AAC for lossy needs. In summary, support WAV for completeness and as part of being a “desktop” player that might encounter many file types, but do not expect or encourage routine use of WAV for general media consumption.
Common Video Formats
MP4 (H.264 Video in MP4 Container)
Typical Use Cases & Popularity:
MP4 is the most prevalent video format for consumer use. When someone refers to an “MP4 video,” they usually mean a file with an .mp4 extension containing H.264/AVC video and AAC audio (the most standard codecs). MP4 is used everywhere: from video streaming services (which often deliver MP4 fragments or files for progressive download) to home video recordings on phones and cameras (most record directly to MP4/H.264 now). It’s the format of choice for platforms like YouTube (for legacy compatibility, though YouTube now leans on adaptive streaming with multiple formats, they still provide MP4 as an option), Vimeo, and virtually any site offering downloadable video. Its popularity comes from the balance of efficiency, quality, and widespread support.
Browser & Platform Support:
MP4 (with H.264 video and AAC or MP3 audio) is supported by all major browsers in the HTML5 <video> element. This was a cornerstone of HTML5 video adoption — while initially there was debate over open formats, all browser vendors converged on supporting H.264 in MP4 by around the mid-2010s (with the lone holdout, Firefox, eventually relying on OS decoders to avoid licensing fees). In practical terms, any user on a modern browser (whether on Windows, Mac, Linux, or mobile) can play MP4 video. Additionally, all desktop and mobile operating systems have native support: Windows' Movies & TV app, macOS QuickTime, iOS, Android, Smart TVs, etc., all handle MP4/H.264. This ubiquity is currently unmatched by any other video format.
Licensing & Limitations:
The MP4 container itself is specified in ISO/IEC 14496-14 (built on the ISO base media file format, ISO/IEC 14496-12) and is open to use; however, the typical codecs inside (H.264 for video, AAC for audio) are patented. H.264 (also known as AVC) is patented by many parties organized under MPEG-LA, and AAC is also patented. Browser makers and device manufacturers have licensed these technologies, so end-users generally need not worry. For an independent developer, shipping an H.264/AAC encoder would require licensing, but relying on browser capabilities carries no direct liability. In terms of limitations: H.264 is a lossy codec (a lossless mode exists but is rarely used) and was originally designed for up to HD/Full HD content (extensions exist for 4K, but at 4K and beyond it is less efficient than newer codecs). Older browsers also might not support newer codecs in MP4 (for instance, H.265/HEVC or Dolby Vision in MP4 will not play in most browsers, Safari's HEVC support aside). Essentially, MP4 is a broad standard, but used colloquially it implies H.264/AAC; within those codecs there are few limitations aside from licensing.
Technical Considerations:
MP4 is a robust container: it can hold video, audio, subtitles (timed text or image-based), and metadata (title, encoder info, chapters). It supports streaming: common practice is to place the "moov" atom (which contains the index and header info) at the beginning of the file for fast start, allowing a video to begin playback before the entire file is downloaded; tools and libraries exist to "fast-start" an MP4 if needed. MP4/H.264 video offers good compression at relatively low computational cost; even modest devices can decode 1080p H.264 smoothly with hardware acceleration. Seeking in an MP4 is efficient thanks to indexed keyframes; the player can jump to the nearest keyframe timestamp and resume decoding. MP4 also supports progressive and adaptive streaming (as in MPEG-DASH or HLS, where MP4 segments are used). For metadata, MP4 files can carry info tags (similar to ID3) in a "udta" atom – not commonly used for general videos, but used in broadcast and professional contexts. Overall, MP4's technical profile makes it suitable for almost any video application.
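The fast-start layout can be checked by scanning the file's top-level boxes ("atoms": a 32-bit big-endian size followed by a four-character type). A sketch that handles only the plain 32-bit size form, not 64-bit largesize boxes:

```javascript
// Scan top-level MP4 boxes and report whether moov precedes mdat,
// i.e. whether the file is laid out for fast-start playback.
// Sketch only: ignores 64-bit largesize boxes and malformed input.
function isFastStart(buf) {
  let pos = 0;
  const order = [];
  while (pos + 8 <= buf.length) {
    const size =
      (((buf[pos] << 24) | (buf[pos + 1] << 16) |
        (buf[pos + 2] << 8) | buf[pos + 3]) >>> 0);
    const type = String.fromCharCode(
      buf[pos + 4], buf[pos + 5], buf[pos + 6], buf[pos + 7]);
    order.push(type);
    if (size < 8) break;        // malformed or 64-bit size; stop scanning
    pos += size;
  }
  const moov = order.indexOf("moov");
  const mdat = order.indexOf("mdat");
  return moov !== -1 && (mdat === -1 || moov < mdat);
}
```

In a browser, `buf` would come from reading the first part of the file into a Uint8Array (e.g. via FileReader or fetch).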
Recommendation:
MP4 with H.264 video (and AAC audio) is the de facto default for video and should be the primary format supported in nGene Media Player. It will cover the vast majority of use cases. Whenever sharing or supporting video on the web, having an MP4 option ensures even older and less flexible clients can play it. The recommendation is to always include H.264/MP4 as a baseline. Only in specialized closed environments would one omit it (due to licensing), but for a desktop app relying on browser tech, it can be assumed present. In short, for general use and maximum reach: use MP4/H.264 as the standard video format.
WebM (VP8/VP9 Video in WebM Container)
Typical Use Cases & Popularity:
WebM is a multimedia format sponsored by Google as a royalty-free alternative to MP4. It usually contains video encoded with VP8 or VP9 (which are video codecs developed by Google) and audio encoded with Vorbis or Opus. WebM was adopted early on by YouTube for its HTML5 player (YouTube serves videos in WebM VP9 to browsers that support it for better compression than H.264). It’s also used in contexts like web video conferencing or anywhere an open format is desired. While not as common as MP4 for end-user files, WebM is popular in the web developer community for embedding videos without patent concerns, and for high-quality streaming (VP9 can achieve similar quality to H.265 and better than H.264 at the same bitrate).
Browser & Platform Support:
WebM support in browsers is now widespread: Chrome, Firefox, Opera, and Edge (Chromium) have long supported WebM (both VP8 and VP9 codecs). Safari was the last major browser to add support; as of Safari 14.1 (2021) on macOS Big Sur, Safari plays WebM video with VP8/VP9, with iOS support following later. Current versions of all major browsers can therefore play WebM, but older Safari versions (or older iOS devices that cannot upgrade) will not. Internet Explorer (now obsolete) never supported WebM natively. On the platform side, support is more hit-or-miss: Windows 10's native player doesn't play WebM without additional codecs, and older Android phones might have only partial hardware support for VP9. Within a browser on a modern desktop, though, WebM is fine. It is worth noting that some hardware-accelerated environments prefer specific codecs: for example, some low-power devices accelerate H.264 but not VP9, which can affect performance for large videos in WebM. For desktop-class machines, VP8/VP9 playback is generally smooth where supported.
Licensing & Limitations:
WebM and its codecs (VP8, VP9, and now AV1 which is often mentioned in the same context) are royalty-free. Google made the VP8 codec open source in 2010, and VP9 in 2013. There were some historical patent concerns from other companies, but Google either resolved or indemnified those, and now WebM is considered safe to use without licensing fees. One limitation is that WebM as a container is relatively limited in what it can hold: it was designed specifically for those codecs (VP8/VP9 video, Vorbis/Opus audio) and doesn’t support arbitrary codecs. This is by design to keep it simple and free. Another limitation is that while VP9 offers great compression, it is more CPU-intensive to encode (and to a lesser extent decode) than H.264. For real-time applications or editing, that can be a factor. But for playback of finished content, decoding VP9 is usually fine on modern CPUs. WebM also doesn’t have as mature an ecosystem for things like embedded subtitles or chapters (though one could combine WebM in an MKV context to get those, but then it’s essentially MKV). In typical use, WebM relies on external subtitle tracks (like VTT for web captions) if needed.
Technical Considerations:
VP8 (the older codec in WebM) is roughly on par with H.264 Baseline/Main profile in quality. VP9 (the successor) is significantly better, comparable to H.265 (HEVC) in efficiency, especially at higher resolutions. VP9 supports resolutions up to 4K and beyond, and is used by YouTube for 4K streaming. WebM container is based on a subset of the Matroska format (which is why .webm is similar to .mkv internally). It is well-suited for streaming and adaptive bitrate (Google uses it in DASH streaming by providing multiple WebM versions of a video). Seeking in WebM works, though not quite as instantly as in MP4 if the file isn’t indexed; however, players typically can seek by locating the nearest keyframe within the WebM file (which is structured in clusters for that purpose). Opus audio in WebM is a common combination (giving very high audio quality). For metadata, WebM can embed some info (like color profile, and simple tags), but it’s not commonly used for rich metadata the way MP4 can be. Also, with the rise of AV1 (the next-gen codec), WebM is one of the primary carriers of AV1 video on the web (alongside MP4, which can also carry AV1 now). That means the WebM format is here to stay as a vessel for new open codecs. In summary, technically WebM offers cutting-edge compression (with VP9/AV1) at the cost of higher computational load and slightly less legacy support.
Recommendation:
It is advisable for a modern media player to support WebM playback, especially if targeting an audience that values open standards or if the media player might play content from sources like WebRTC or certain web video streams. For nGene Media Player, adding support for .webm files (for video and audio) would expand the range of playable content (for instance, someone might have downloaded a WebM video from a site or created screen recordings in WebM). However, MP4 should remain the primary default due to guaranteed support. WebM can be offered as an additional option: for example, if one is building a site and wants to provide both MP4 and WebM sources to users, the player can choose the one the browser supports. Given that Safari’s support for WebM is relatively recent, a truly universal approach might still need MP4 fallback. But looking forward, WebM (especially with the AV1 codec) is a key format. So, the recommendation is: include WebM/VP9 support to future-proof and to cater to modern content, but do not rely on it as the sole format if broad compatibility (including older devices/browsers) is a concern. Both MP4 and WebM can co-exist as supported formats, with WebM being a great choice for high-quality and patent-free requirements.
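One common pattern for offering both MP4 and WebM is to list several encodings of the same clip and let the player pick the first one the browser reports playable. A minimal sketch, with illustrative file names and helper name (`canPlayType` returns "probably", "maybe", or an empty string, so any non-empty answer is treated as playable):

```javascript
// Sketch: pick the first candidate source the current environment can play.
// `canPlayType` is passed in so the logic stays testable outside a browser.
function pickPlayableSource(candidates, canPlayType) {
  for (const c of candidates) {
    if (canPlayType(c.type) !== '') return c;
  }
  return null; // nothing playable: fall back or warn the user
}

// Illustrative candidate list; in a browser one would call:
//   const video = document.createElement('video');
//   pickPlayableSource(sources, (t) => video.canPlayType(t));
const sources = [
  { src: 'clip.webm', type: 'video/webm; codecs="vp9, opus"' },
  { src: 'clip.mp4',  type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' },
];
```

Listing WebM first prefers the open codec where supported, while the MP4 entry remains the universal fallback.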
AV1 (Next-Generation Open Video Codec)
Typical Use Cases & Popularity:
AV1 is a relatively new video codec (finalized in 2018 by the Alliance for Open Media) designed to be the successor to VP9/HEVC with about 30% better compression efficiency than those. It’s not a format by itself but a codec usually used inside either a WebM or MP4 container. Its use is growing in streaming services – for example, YouTube and Netflix have started encoding some content in AV1 for capable devices. The typical use case is high-resolution streaming (4K and beyond) where the bandwidth savings are significant. Also, as an open, royalty-free codec, it’s intended to be widely adopted across the industry to avoid patent fees of HEVC. While still emerging, AV1 is relevant to consider in a forward-looking media player context.
Browser & Platform Support:
Browser support for AV1 is good in the latest versions: Chrome (and other Chromium browsers) supports AV1, Firefox supports AV1, and Microsoft Edge (Chromium) does as well. Safari’s support came later: Apple added AV1 playback in Safari 17 (2023), and only on hardware that has an AV1 decoder (the A17 Pro and M3-generation chips and later). So Safari may still be a gap on older Apple hardware. Many browsers rely on hardware acceleration to play AV1 smoothly because software decoding is very CPU-intensive at high resolutions. As of 2025, many new GPUs and mobile chipsets include AV1 decode support (e.g., newer NVIDIA/AMD GPUs, and mobile SoCs from 2020 onward often have it). On the platform side, Windows 10/11 support AV1 via an optional install of the AV1 Video Extension (using hardware where available, or the CPU otherwise). Android has support where the hardware does. So, support is improving rapidly, but one cannot assume every user’s device can play AV1, especially if it is a few years old.
Licensing & Limitations:
AV1 is royalty-free; all contributors (including big names like Google, Mozilla, Microsoft, Cisco, Intel, Netflix, etc.) agreed not to charge for its use. This is a big advantage over H.265/HEVC which has hefty licensing fees in some cases. There are no licensing costs to support AV1 playback or encoding. The limitations of AV1 currently mostly revolve around computational complexity: encoding AV1 is extremely slow compared to H.264 or even HEVC (by an order of magnitude slower in software), which means not all content providers offer it yet for live or quick turnaround content. Decoding is also heavier, so on devices without hardware support, playing an AV1 video (especially high-res) can tax the CPU heavily or might not be feasible in real-time. Another limitation is that, since it’s new, some software and workflows haven’t fully integrated AV1; for example, older video editors might not import AV1 footage yet without updates.
Technical Considerations:
AV1 can be stored in .webm (often with AV1 video and Opus audio) or in .mp4 (the MP4 container was extended to support an AV1 codec identifier). It supports all the modern video features (alpha channel, HDR metadata, wide color gamut, etc.). In terms of quality, AV1 shines at high resolutions and low bitrates: it can maintain decent 1080p quality at bitrates where H.264 would appear very blocky. It also has tools for better compressing grain (film grain synthesis) and other complex scenes. For a media player, handling AV1 means the underlying browser must have the codec. If using the native HTML5 <video> element, one is reliant on browser support. If the browser supports it, the media player just needs to be ready to supply an AV1 source; if not, the player might need to fall back to a different format. Detecting support may therefore be necessary (using canPlayType() or similar). For local files: if a user opens an .mkv or .webm with AV1 content in nGene Media Player, Chrome or Firefox would likely play it, but in a non-supporting environment it would fail. Thus, handling it gracefully (perhaps an error stating that this format is not supported on the user's system) could be needed.
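As noted, support can be probed with canPlayType() before offering an AV1 source. A hedged sketch; the codec string "av01.0.05M.08" (Main profile, level 3.1, 8-bit) is one common example, and real content may require a different string:

```javascript
// Illustrative AV1 capability probe. A browser would supply
// (t) => videoEl.canPlayType(t); passing the function in keeps this testable.
const AV1_MP4  = 'video/mp4; codecs="av01.0.05M.08"';
const AV1_WEBM = 'video/webm; codecs="av01.0.05M.08"';

function av1Support(canPlayType) {
  return {
    mp4:  canPlayType(AV1_MP4)  !== '', // non-empty => "probably"/"maybe"
    webm: canPlayType(AV1_WEBM) !== '',
  };
}
```

If both flags come back false, the player would fall back to VP9 or H.264, or surface a "this format is not supported on your system" message as suggested above.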
Recommendation:
As AV1 becomes more common, it is wise for nGene Media Player to be aware of it. Ensuring that the player’s file-opening logic and UI recognizes .webm or .mkv files with AV1, and attempting to play them, will cover the needs of advanced users who have started collecting AV1-encoded videos. It is not yet recommended to choose AV1 as the only default format because not every device can handle it; however, including support costs little (mostly relying on the browser) and positions the player as up-to-date. For instance, if providing sample videos or if the player is part of a pipeline, one could include an AV1 version for those who can use it. In summary, support AV1 where possible (the player should try to play it if the environment allows), but continue to provide H.264 or VP9 alternatives as default until AV1 penetration is total.
MKV (Matroska Video Container)
Typical Use Cases & Popularity:
Matroska (MKV) is an open container format often used for video files, especially high-quality downloads such as HD movies, TV shows, and anime in fan communities. It gained popularity because it can hold multiple audio tracks (different languages, commentaries), multiple subtitle tracks, and chapter information, and it supports virtually any codec. Many “scene” releases or user-created video rips are in MKV format to leverage this flexibility. However, MKV is more popular for local playback (with VLC, MPC-HC, etc.) than in web streaming, where MP4 or WebM are typically used. It is also the container behind WebM (WebM is essentially a limited MKV). In summary, MKV is a favorite of power users and video enthusiasts who need an all-encompassing container for multimedia.
Browser & Platform Support:
Native browser support for .mkv files in the HTML5 video element is inconsistent. While the underlying codecs might be supported (for instance, an MKV containing H.264/AAC might technically play if the browser’s engine recognizes MKV), in practice many browsers do not list .mkv as a supported extension. Chrome and Firefox have historically not advertised MKV support; Chrome’s media stack (based on FFmpeg) could potentially parse MKV, but Chrome does not enable it fully for <video>, and Edge (Chromium) behaves similarly. Safari does not support MKV. Therefore, one generally cannot rely on dragging an MKV file into a browser and having it play, unless the MKV contains exactly the same streams as a WebM (VP9/Opus), and even then it might fail due to container recognition. Some users have reported that Chrome can play certain MKV files, but this is not officially documented. On desktop platforms, MKV is well supported by third-party players (VLC, etc.) but not by default OS players (Windows Media Player does not natively play MKV without codec packs; older QuickTime on Mac did not either). The newer Windows 10/11 Movies & TV app does support MKV to an extent (Microsoft added MKV support to its player in 2015), so a Windows user might double-click an MKV and have it play if the contained codecs (H.264, etc.) are supported by the OS. Overall, for a web app, MKV is not a safe format to rely on without conversion or a custom player library that can demux MKV in JavaScript.
Licensing & Limitations:
MKV is completely open and free to use. It is governed by the Matroska project (which began as a fork of the earlier Multimedia Container Format (MCF) effort) and has no licensing costs. The limitations of MKV mostly revolve around the lack of uniform support rather than the format itself. Because it can contain anything, an MKV might hold a codec that the playback system doesn’t support (e.g., an MKV with old RealMedia video or Dolby TrueHD audio; browsers definitely won’t handle those streams even if they could parse the container). So the burden is ensuring the contained codecs are supported by the playback engine. MKV files also tend not to be optimized for streaming by default (MKV does allow some streaming optimization, but it is not as standardized as MP4’s progressive-download structure). Additionally, certain web-based DRM and streaming features (like the Common Media Application Format) use MP4, so MKV is excluded from those scenarios. For a standalone desktop player that does not need DRM, MKV’s only real limitations are compatibility and slightly higher parsing overhead.
Technical Considerations:
Matroska is very flexible: it can incorporate virtually any video codec (H.264, H.265/HEVC, VP9, AV1, MPEG-2, etc.) and any audio codec (AAC, MP3, Vorbis, Opus, FLAC, AC-3, DTS, etc.), plus subtitles (SRT, ASS, or image-based formats like PGS). It uses EBML (a binary, XML-like schema) for metadata, making it extensible. It supports chapters (with titles), file attachments (fonts for subtitles, cover images), and menu structures (rarely used). For seeking, MKV typically contains an index of clusters and keyframes, so seeking is efficient if the player reads that index (often located at the end of the file). Streaming an MKV over HTTP requires byte-range requests to fetch the index or to seek; browsers can do this, but if they don’t natively support MKV, it is moot. MKV doesn’t inherently reduce compression or quality (it is just a container), so it is often chosen to avoid any loss or limitation; for example, MP4 cannot store a DTS audio track, but MKV can. From the perspective of a web app, supporting MKV might mean integrating a demuxer library (there are JS libraries that can demux MKV to get at the frames and feed them to Media Source Extensions). This is advanced and rarely done unless there is a strong need.
Recommendation:
For nGene Media Player, outright relying on native browser support for MKV is not recommended. Instead, if the scope allows, the player could detect an MKV file and politely prompt the user to convert it to a supported format (or possibly use a library to play it, if going that route). However, since this is a desktop-focused player and possibly used in controlled environments, it might be viable to incorporate some MKV handling. For example, if using Electron or a custom environment where you have more control (Node.js could leverage FFmpeg), MKV could be supported. If sticking strictly to in-browser capabilities, assume MKV will not play and thus it’s a format to handle as an exception. The best practice is to convert MKV to MP4 or WebM for web playback. If a user base commonly has MKVs (which is likely for a desktop media player), consider integrating a conversion utility or at least documentation telling them to convert using a tool. In sum: acknowledge MKV as a common desktop format, but for the “general use” default, do not plan on it working in a vanilla JS web app without extra help. Prioritize adding support for the codecs inside (H.264, etc.) through supported containers instead.
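The "detect and politely prompt" idea above can be reduced to a small extension gate. A sketch under stated assumptions: the natively supported list mirrors the formats this player already handles (MP3/M4A audio, MP4/MOV/WEBM video), and the classification labels are illustrative:

```javascript
// Sketch: route a dropped file by extension instead of letting the
// <video>/<audio> element fail silently on unsupported containers.
const NATIVE_OK = new Set(['mp3', 'm4a', 'mp4', 'mov', 'webm']);

function classifyFile(name) {
  const ext = name.split('.').pop().toLowerCase();
  if (NATIVE_OK.has(ext)) return 'play';            // hand to the media element
  if (ext === 'mkv' || ext === 'avi') return 'convert'; // prompt: convert to MP4/WebM
  return 'unknown';                                  // let the browser try, or warn
}
```

On 'convert', the player would show a message such as "Please convert this file to MP4 for playback" rather than attempting playback.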
AVI (Audio Video Interleave)
Typical Use Cases & Popularity:
AVI is an older video container format introduced by Microsoft in the early 1990s. It was very common in the early days of digital video and the internet (late 90s and early 2000s); for instance, many early DV camcorders captured to AVI (with the DV codec), and formats like DivX/Xvid (MPEG-4 Part 2) often used AVI as the container. Over time, AVI has fallen out of favor for new content, replaced by MP4/MKV, but a lot of legacy content still exists as .avi files. People might have old video files or downloads in AVI format (often with codecs like Xvid and MP3 or AC3 audio). It is also sometimes used in simple screen-recording or surveillance systems (due to ease of implementation). In modern contexts, AVI is mostly encountered when dealing with older archives or when a specific device outputs it.
Browser & Platform Support:
Browsers do not support AVI in the HTML5 <video> tag. There is virtually no push to include AVI support in browsers because the format is outdated and the codecs inside are often unsupported (e.g., MPEG-4 ASP, which browsers don’t decode, or various obscure codecs). So an AVI file will not play in an HTML5 player without conversion. On desktop, Windows has long had native support for AVI: if the proper codec is installed, Windows Media Player can play an AVI, and by default Windows can play AVIs that use older standard codecs (Cinepak, uncompressed, DV). For DivX/Xvid AVIs, users often needed to install codec packs or use third-party players like VLC. macOS never supported AVI natively in QuickTime without plugins; again, third-party players are used. In summary, a web app cannot directly play .avi, and users themselves typically rely on software like VLC to play their AVI files.
Licensing & Limitations:
The AVI container is freely implementable (it is an old Microsoft format whose specification has long been openly documented); no license is needed to implement the container. The codecs often found in AVIs, however, may have patents (e.g., DivX/Xvid are implementations of MPEG-4 Part 2, which was patented, though those patents are expiring around now; MP3/AC3 audio in AVIs had patents, etc.). But since any support would likely come via existing libraries or the OS, a developer typically doesn’t license them directly. The limitations of AVI are quite severe by modern standards: it does not natively support modern compression features like B-frames without hacks (OpenDML extensions allow indexes for those, but support can be flaky). It has trouble keeping VBR (variable bitrate) audio in sync unless handled carefully. It doesn’t support subtitle tracks or advanced metadata well. And the maximum file size was historically 2 GB (or 4 GB with extensions). Essentially, it is a simple chunk-based container not designed for today’s high-resolution, long-duration content. Streaming an AVI is also problematic: since the index (the idx1 chunk) sits at the end of the file, you must often download the whole file, or otherwise obtain metadata, in order to seek, making it poor for progressive playback. Some early-2000s streaming solutions used server-side hacks to serve AVI in a streaming manner, but that is obsolete now.
Technical Considerations:
If one absolutely needed to play an AVI in a browser, it would require a custom JavaScript decoder for both the container and the codec. For example, a JS library could theoretically demux AVI and decode a codec like MJPEG or uncompressed frames, but for something like DivX (MPEG-4 ASP), there’s no native browser decoder available, so you’d need a complete video decoder in JS/WASM. That is far too much effort for little gain when conversion is an option. Therefore, technically, the approach for dealing with AVI is conversion to a modern format. Many tools (ffmpeg, etc.) can losslessly repackage or transcode AVI content to MP4 or MKV with minimal quality loss (depending on codecs). For instance, an AVI with Xvid/MP3 could be transcoded to MP4 with H.264/AAC for broad compatibility. This is generally the advised path rather than trying to support playback natively.
Recommendation:
It is not recommended to attempt native AVI support in a web-based media player. If a user of nGene Media Player needs to play an AVI, the best course is to guide them to convert the file. For a desktop-centric scenario, one might integrate conversion behind the scenes (like detect .avi, and use a tool to convert then play), but that adds complexity. Unless there is a strong demand or a controlled environment where AVIs are common (and one could bundle a decoder), it’s reasonable to state that AVI is not supported for playback. Providing documentation or a message like “Please convert .avi files to MP4 for playback” could be sufficient. In a comprehensive media app, you might include a converter using ffmpeg to automate this. But as a default strategy: focus on more modern formats and treat AVI as a legacy format that lies outside the scope of direct support.
MOV (QuickTime File Format)
Typical Use Cases & Popularity:
MOV is the file extension for the QuickTime File Format, the predecessor and close relative of MP4. Apple’s QuickTime framework uses .mov for a variety of media files, especially in professional video editing, camera captures, and older multimedia CD-ROMs. Many professional cameras (DSLRs, action cams) save video in .mov format (often with codecs like ProRes, or older formats like Motion JPEG or H.264). In consumer use, .mov was more common in the past; today, casual users encounter .mov mainly if they use Apple devices or software that outputs it. For example, an iPhone might record video as .mov (though the streams inside are actually H.264 or HEVC), and iMovie or Final Cut might export .mov files by default. So MOV is popular in production, but for final distribution those files are often converted to MP4 for compatibility.
Browser & Platform Support:
Safari (on Mac and iOS) can play .mov files, because QuickTime is integrated. For instance, if a .mov contains H.264/AAC, Safari will treat it much like an MP4 and play it. However, Chrome, Firefox, and Edge do not list .mov as supported. If the .mov contains codecs they support (H.264, etc.), sometimes renaming to .mp4 would even allow it, implying the barrier is partly just the container recognition. But typically, a .mov file served in a webpage will prompt a download in those browsers or just fail to play. On Windows, .mov files can be played by installing QuickTime (deprecated now) or by using players like VLC. Modern Windows 10/11 might play simple .mov (H.264) in its Films & TV app, but anything exotic (ProRes, etc.) won’t play without specific software. So for a web app, .mov is not a reliably playable format except in Safari. This makes it a poor choice for cross-browser support.
Licensing & Limitations:
MOV as a format is controlled by Apple but was published as the basis for the ISO MP4 standard. Playing or creating .mov files is not itself encumbered, though implementing the full spec requires understanding Apple’s extensions; there is no royalty for using .mov as such. The codecs inside .mov often have licensing considerations (e.g., H.264, HEVC, AAC; the same story as MP4). The limitations of MOV primarily revolve around compatibility: technically, MOV can do almost everything MP4 can, but it also allows some older codec integrations that MP4 doesn’t (like Sorenson Video, Cinepak, or animation codecs). Those older codecs would definitely not be supported in browsers. Also, some MOV files use Apple-specific features (chapter tracks, timecode tracks, reference movies that link to external media) which are not widely supported outside QuickTime. As a container it is fine, but since MP4 became the standard, .mov support hasn’t kept up outside Apple’s ecosystem.
Technical Considerations:
Under the hood, MOV and MP4 share the same structure (atoms/boxes). Many MOV files could be converted to MP4 simply by changing the container without re-encoding, provided they contain compatible codecs. One technical consideration is that some .mov files (particularly from editing software or high-end cameras) use codecs like Apple ProRes or even uncompressed video, which are huge and not meant for streaming. Those will not play in browsers (except possibly Safari if the codec is installed at OS level). Another issue: .mov files might not be optimized for progressive download (they might have the index at the end, or not have interleaved streams well for streaming) which can cause delays in playback start. There’s a concept of “fast start” in QuickTime too (similar to MP4) but not all .mov files you get will have that. For a web player, if one wanted to support .mov, one might utilize MediaSource Extensions to feed the content if the codecs are known and supported. But this is complex and rarely done because it’s easier to convert .mov to .mp4 externally. On the positive side, any .mov that contains standard codecs could be converted with minimal fuss. For example, an iPhone .mov (H.264 video, AAC audio) can be turned into .mp4 by rewriting the container, making it then playable everywhere.
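The container rewrite described above is a stream copy (no re-encode). With the ffmpeg CLI or ffmpeg.wasm it comes down to an argument list like the one this illustrative helper builds (the helper name is an assumption, not part of any API; "-movflags +faststart" additionally moves the moov index to the front for progressive playback):

```javascript
// Sketch: arguments for a lossless .mov -> .mp4 remux. The same list can be
// passed to a spawned ffmpeg CLI or to ffmpeg.wasm's run() method.
function remuxToMp4Args(input, output) {
  return [
    '-i', input,        // source .mov
    '-c', 'copy',       // copy streams, no re-encoding (lossless, fast)
    '-movflags', '+faststart', // put the index up front ("fast start")
    output,             // destination .mp4
  ];
}
// e.g. remuxToMp4Args('iphone.mov', 'iphone.mp4')
```

This only works when the contained codecs (H.264/AAC, say) are MP4-compatible; ProRes or uncompressed video would need a real transcode instead.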
Recommendation:
It is recommended to treat .mov much like .avi: as a format to convert rather than play directly in a cross-platform web player. For nGene Media Player, if someone tries to open a .mov file, the best approach is either to leverage the browser’s capability (on Safari it may just work) or to alert the user that conversion is needed. If this player is for a controlled environment (say, an internal tool where everyone is on Mac/Safari), then .mov could be supported. But for general distribution, not all users will be on Safari, so relying on .mov is not wise. Therefore, encourage or perform conversion of .mov files to MP4 for actual playback. In an ideal scenario, the player could offer a small feature: “Detected .mov file – converting to .mp4 for playback…” (using ffmpeg.wasm in JavaScript, or prompting the user). But that may be overkill; simply informing users is acceptable. In summary, .mov is a notable format (especially in Apple ecosystems and professional video), so be aware of it, but default to MP4 for compatibility. Ensuring that any content provided for the player is in MP4 avoids .mov issues altogether.
Recommended Default Formats:
Considering the above, for broadest compatibility and ease of use in a web-based desktop player, the recommended default formats are MP3 for audio and MP4 (H.264/AAC) for video. These two cover nearly all browsers and platforms with no special setup. In practice, this means the player should primarily handle MP3 for music and MP4 for video. However, to make nGene Media Player more robust and appealing, it should also support the common alternatives: including AAC (M4A) ensures high-quality audio support, Ogg Vorbis/Opus provides open-format options, and FLAC allows for lossless audio playback. On the video side, adding support for WebM (VP8/VP9) is advisable for modern browsers, and being mindful of AV1 will keep the player up to date with emerging standards. Less common or legacy formats like MKV, AVI, and MOV can be acknowledged, but the strategy should be to handle them via conversion or not at all, rather than as primary supported formats. By focusing on MP3 and MP4 as the core and supplementing with the next tier of formats, the player will cater to most use cases while maintaining reliability.
Written on March 9, 2025
Meta Information Extraction (Audio and Video)
A media player like nGene Media Player not only plays audio and video but often also presents information about the media to the user. This includes basic details (duration, title) and possibly more advanced metadata (like album name, video resolution, etc.). Below, we outline what metadata can be obtained from media files and discuss methods to extract this information using web technologies (JavaScript in the browser) and Python (which could be used server-side or via PyScript in-browser). We also provide guidance on when to use client-side vs. server-side (or local) analysis based on the depth of metadata required.
Types of Media Metadata
Basic Properties:
Fundamental attributes are available for almost all media files:
Duration: the total play time of the media (e.g., 3 minutes 45 seconds for a song, or 1 hour 30 minutes for a movie).
Format/Codec: the container and codec information (e.g., “MP3 audio”, “H.264 video, AAC audio in MP4”). This can include the codec name, profile, and version.
File Size: the size of the file on disk, which can indirectly indicate quality or compression (though not a linear relationship).
Bitrate: the overall bitrate or per-stream bitrate. For audio, this might be 128 kbps, 320 kbps, etc. For video, there can be separate video and audio bitrates. Bitrate influences quality and file size.
Dimensions (Video): the resolution in pixels (width × height, e.g., 1920×1080) and the aspect ratio (16:9, 4:3, etc.).
Frame Rate (Video): frames per second (fps), e.g., 24 fps, 30 fps, 60 fps. This affects motion smoothness.
Audio Channels: the number of audio channels (mono, stereo, 5.1 surround, etc.). For example, an audio file might be 2-channel stereo; a movie might have 6-channel 5.1 surround.
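As a small worked example of how these basic properties relate: when a file carries no explicit bitrate field, an overall bitrate can be estimated from file size and duration alone (the helper name is illustrative):

```javascript
// Sketch: estimate overall bitrate in kbps from file size and duration.
// bits = bytes * 8; bitrate = bits / seconds; divide by 1000 for kbps.
function estimateBitrateKbps(fileSizeBytes, durationSeconds) {
  if (durationSeconds <= 0) return null; // duration unknown or invalid
  return Math.round((fileSizeBytes * 8) / durationSeconds / 1000);
}

// A 4.8 MB MP3 lasting 4 minutes works out to about 160 kbps.
```

Note this is the combined bitrate of all streams plus container overhead, so for video it will sit slightly above the sum of the nominal video and audio bitrates.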
Descriptive Tags (Embedded Metadata):
Many media files include human-readable metadata tags:
Title & Artist (for audio): song title, artist name, album name, track number, genre, release year, etc., often stored in ID3 tags (MP3), Vorbis comments (FLAC/Ogg), or metadata atoms (MP4).
Album Art: an image embedded in audio files (like the album cover in an MP3 or M4A). Video files might also have a poster image or thumbnail embedded.
Video Titles and Chapters: some video containers (MP4, MKV) can carry a title for the piece and chapter markers with titles (like DVD chapters). This metadata can describe scenes or sections of the video.
Creator/Software: metadata about how the file was created, e.g., the encoding software name or, for video, the camera model, stored in certain metadata fields.
Lyrics or Subtitle Tracks: audio files can carry synchronized or unsynchronized lyrics in tags. Video files often contain subtitle tracks or closed captions that can be considered metadata (timed text streams separate from the main video).
Technical Audio Attributes:
Specific to audio files or audio tracks:
Sample Rate: e.g., 44,100 Hz (CD quality), 48,000 Hz (video standard), or higher for high-res audio. Indicates the fidelity of the audio sampling.
Bit Depth: (for PCM/lossless) e.g., 16-bit, 24-bit, indicating dynamic-range capability. Compressed formats might not explicitly expose this, but for WAV/FLAC it is relevant.
Audio Codec Details: such as profile (AAC-LC vs HE-AAC), bitrate mode (CBR vs VBR), etc.
Loudness/Volume Metadata: some files carry ReplayGain or Sound Check values indicating average loudness so players can normalize volume between tracks. Modern streaming standards also use LUFS (Loudness Units); a file might carry an integrated loudness value as metadata.
Additional Music Tags: BPM (beats per minute, tempo) and key. These are not standard in all files, but ID3 includes a “TBPM” frame for BPM and a “TKEY” frame for musical key. If present (often in electronic music or DJ-tagged files), they give the song’s tempo and key as tagged by a human or software.
Advanced Derived Data (Analysis-Based):
These are characteristics one might compute rather than find explicitly in the file:
Waveform Data: the amplitude over time, used to draw a waveform display. Not stored explicitly (though some formats allow an embedded “peak file” or waveform preview), but it can be calculated by reading the audio samples.
Spectrum or Equalizer Bands: the frequencies present in the audio; a snapshot can drive visualizers. Again, this is computed via Fourier transforms on the audio, not stored in the file (except perhaps as proprietary data in some production formats).
BPM and Key Detection: if not tagged, an algorithm can estimate the BPM (tempo) or musical key of a track by analyzing the audio content. This requires signal processing (e.g., onset detection for BPM, chroma analysis for key).
Video Key Frames and Scene Changes: analysis can detect where scene cuts occur or identify key frames (which might coincide with the codec’s keyframes, but not always). This can be used to generate thumbnails for seeking.
Color Histogram or Dominant Colors (Video): not a common need for a media player, but one could analyze video frames to pick a dominant color for UI theming (e.g., a background color to match the video content) or simply as an advanced feature.
Most of the above metadata can be accessed or computed with the right tools. The next sections describe how to retrieve these details using JavaScript in the browser and using Python, respectively.
Client-Side JavaScript Methods
In a purely browser-based environment (vanilla JavaScript), one can extract a subset of the above information. The HTML5 media elements and additional libraries are the primary means to do so:
HTMLMediaElement API (Built-in):
The <audio> and <video> elements provide some basic metadata once a media file is loaded. For example, after setting audio.src = URL.createObjectURL(file) (for a File object) and waiting for the metadata to load, the property audio.duration gives the length in seconds. For video, video.videoWidth and video.videoHeight provide the pixel dimensions, and video.duration the length. There is also the video.poster attribute (for an assigned poster image), but it does not expose embedded thumbnails. The readyState and networkState properties can tell whether metadata is loaded. Additionally, the textTracks, audioTracks, and videoTracks properties can list tracks (like subtitle tracks or multiple audio tracks) if the format/container supports it (for instance, an MP4 with subtitles might expose textTracks). However, the HTMLMediaElement does not expose detailed codec info (there is no direct way to get “this is MP3” or “this is H.264” from the element) and does not give access to content tags like title or artist. It is limited to playback-related info. So, while this API easily gives duration and resolution and allows retrieving the current playback time (for sync or manual analysis), it will not retrieve, say, ID3 tags.
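The element properties just described are typically read inside a loadedmetadata handler; a small promise-based wrapper (the helper name is illustrative) keeps that tidy:

```javascript
// Sketch: load a file URL into an <audio>/<video> element and resolve with
// basic metadata once the element fires `loadedmetadata`.
function readBasicMetadata(mediaEl, objectUrl) {
  return new Promise((resolve, reject) => {
    mediaEl.addEventListener(
      'loadedmetadata',
      () =>
        resolve({
          duration: mediaEl.duration,        // seconds
          width: mediaEl.videoWidth || null, // 0 for audio-only, mapped to null
          height: mediaEl.videoHeight || null,
        }),
      { once: true }
    );
    mediaEl.addEventListener(
      'error',
      () => reject(new Error('format not supported')),
      { once: true }
    );
    mediaEl.src = objectUrl; // e.g. URL.createObjectURL(file)
  });
}
```

In the player this would be called with a freshly created element, e.g. readBasicMetadata(document.createElement('video'), URL.createObjectURL(file)).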
File API + JavaScript Metadata Libraries:
To get richer metadata (titles, cover art, codec names, etc.), one can use the File API to read the raw file bytes in JavaScript, then parse those bytes with a library. For audio files, a popular choice is the music-metadata library (available as an NPM package, with a build optimized for browser use). This library can parse many formats: MP3 (ID3v1 and v2 tags), MP4/M4A (metadata atoms, iTunes tags), FLAC (Vorbis comments), Ogg Vorbis/Opus, WAV (INFO tags or RIFF chunks), and more. Using it is straightforward: you provide either an ArrayBuffer or a File stream and it returns a metadata object. For example, parsing an MP3 might return an object with common tags (title, artist, album, track number, genre), an array of native tags (the raw ID3 frames), and a format object (with info like codec: “MPEG 1 Layer 3”, sampleRate: 44100, duration in seconds, bitrate: 128000, etc.). Similarly, it can extract the embedded picture (album art) as binary data if present. Another library is jsmediatags, which focuses on ID3 and MP4 tags. For just MP3, one could even write a simple parser that reads ID3 frames using DataView (if only a couple of fields are needed), but using a well-tested library is better. For video files, pure JS libraries are less common, because video metadata is more complex to parse. However, mp4box.js can parse the MP4 container structure in JavaScript and retrieve things like track codec info and track titles. For MKV, there is a library called matroska-js, or one could use the more general mux.js (which has some capabilities for TS, but not MKV). In summary, by reading file bytes in the browser, the player can get a lot of metadata (titles, artists, cover images, codec names, bitrate, etc.), and this approach runs fully client-side with no need for external services.
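To make the “simple parser with DataView” idea concrete, here is a minimal sketch that reads the legacy ID3v1 tag, which is a fixed 128-byte block at the end of an MP3. Real-world files are better served by music-metadata; this only handles the old v1 layout, and the function name is my own.

```javascript
// Read an ID3v1 tag (last 128 bytes of an MP3) from an ArrayBuffer.
// Returns null if no tag is present.
function parseID3v1(arrayBuffer) {
  if (arrayBuffer.byteLength < 128) return null;
  const tag = new Uint8Array(arrayBuffer, arrayBuffer.byteLength - 128, 128);
  const decoder = new TextDecoder("latin1");
  // ID3v1 fields are fixed-width, null-padded strings.
  const field = (start, len) =>
    decoder.decode(tag.subarray(start, start + len)).replace(/\0.*$/, "").trim();
  if (field(0, 3) !== "TAG") return null; // magic bytes missing: no v1 tag
  return { title: field(3, 30), artist: field(33, 30), album: field(63, 30) };
}
```

The same File-API flow applies: read the File into an ArrayBuffer with FileReader (or file.arrayBuffer()) and pass it in.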
WebAssembly Tools (ffprobe.wasm / MediaInfo):
For comprehensive technical metadata (especially for video), one can compile existing C/C++ tools to WebAssembly. ffprobe (part of FFmpeg) and MediaInfo are two robust metadata extraction tools. There are projects that provide ffprobe as a WASM module, and mediainfo.js is the MediaInfo library compiled to WASM. Using these, a web app can get extremely detailed information. For example, MediaInfo will return data like the video codec profile (Main@L4.1 for H.264), bit depth (8-bit vs 10-bit video), chroma subsampling, exact frame rate (e.g., 23.976), encoder library name, and audio channel layout (5.1, 2.0), along with tags like title and chapters if present. The output can be JSON or text. The trade-off is that loading these WASM libraries (which might be 1-2 MB or more) adds overhead, and running them is somewhat heavy: parsing a large file in WASM takes time and CPU. But they are very powerful. For instance, if nGene Media Player wants to display a “Media Info” panel similar to what VLC or Media Player Classic shows, using mediainfo.js would be ideal: you feed it the file (File object or ArrayBuffer) and get a structured report. ffprobe.wasm could similarly be invoked with arguments to show stream and format info. A practical approach is to load such a tool on demand (e.g., only if the user opens a “Details” pane) to avoid unnecessary performance cost during normal playback.
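The on-demand loading can be reduced to a tiny memoization pattern, so the heavy module is fetched at most once and only when first requested. A sketch (the module path and names are illustrative):

```javascript
// Wrap any promise-returning loader so it runs at most once; repeated
// calls get the same cached promise. This keeps a multi-megabyte WASM
// module out of the initial page load.
function lazy(loader) {
  let cached = null;
  return () => (cached ??= loader());
}

// Usage sketch (hypothetical module path):
//   const getMediaInfo = lazy(() => import("./mediainfo-bundle.js"));
//   detailsButton.onclick = async () => render(await getMediaInfo());
```

Because the cached value is the promise itself, concurrent callers during the initial fetch all await the same load.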
Web Audio API for Signal Analysis:
JavaScript also provides the Web Audio API, which, while mainly for audio processing and synthesis, can decode audio data for analysis. Using an AudioContext, one can take an audio file (via fetch or FileReader) and call decodeAudioData to get an AudioBuffer containing raw PCM samples. This is limited by file size (very large files might be too much to hold in memory at once), but for moderate files it is fine. Once the AudioBuffer is obtained, the script can analyze it: for example, compute a waveform array (by sampling the amplitude periodically or calculating RMS levels for segments), which is great for drawing waveforms. It could also do an FFT to get frequency data for visualization, or even attempt automatic BPM detection (by looking for periodic peaks in the time domain or using autocorrelation techniques). The Web Audio API can also be used in real time: connecting the media element to an AnalyserNode allows you to get live frequency data for visualization (good for showing a live EQ or bars that jump with the music). However, this is more about visuals; it is not a robust way to get static metadata like “this song’s BPM is 120” (that would require more algorithmic work in JS or a library). Still, it is client-side and leverages the browser’s audio decoding capabilities. Note that for protected content or some streaming formats, decodeAudioData might not work due to CORS or codec restrictions, but for local files and common codecs it should. Overall, the Web Audio API complements metadata parsing by providing the means to derive new data (waveforms, loudness, etc.) from the raw audio.
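The waveform step mentioned above is plain arithmetic once the PCM samples are in hand. A sketch (function name is mine; in the browser the input would come from audioBuffer.getChannelData(0)):

```javascript
// Reduce a PCM channel to N peak values for drawing a waveform bar per
// pixel column. Works on any Float32Array of samples in [-1, 1].
function waveformPeaks(samples, buckets) {
  const peaks = new Array(buckets).fill(0);
  const perBucket = Math.ceil(samples.length / buckets);
  for (let i = 0; i < samples.length; i++) {
    const b = Math.floor(i / perBucket);
    const v = Math.abs(samples[i]); // peak = max absolute amplitude in bucket
    if (v > peaks[b]) peaks[b] = v;
  }
  return peaks;
}
```

A canvas renderer would then draw one vertical bar per bucket, scaled to the element height.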
Using the above methods, a web-based media player can gather a wealth of information without leaving the browser. For instance, on loading a file, the player could immediately display the duration via the duration property, show the title/artist by parsing tags with music-metadata, show the resolution via videoWidth/videoHeight, and perhaps generate a waveform preview using Web Audio, all done client-side. The main constraints are performance (very large files or very detailed analysis can be slow) and the necessity to include libraries or WASM modules (increasing app size). When extremely detailed info or heavy computation is needed, one might then consider Python or server-side tools, as described next.
Python and PyScript Approaches
Python has a rich ecosystem for media processing, and it can be used in two ways: on a backend server (or a local machine, outside the browser) to preprocess or analyze media, or via PyScript/WebAssembly to run Python code in the browser. Here we outline how Python libraries can extract metadata and do deeper analysis, and how that might fit into the architecture of the media player.
FFmpeg/ffprobe for Technical Metadata:
FFmpeg is the Swiss army knife of media. Within FFmpeg, ffprobe is a tool specifically for reading media information. Running ffprobe on a media file can output details in a structured format (e.g., JSON or XML) that includes essentially everything about the file: container format, file size, duration, bitrates, codec names and profiles, frame rate, resolution, pixel format, audio sample rate, channel layout, and even the contents of metadata tags (title, artist, etc., if present). For example,
ffprobe -v quiet -print_format json -show_format -show_streams file.mp4
will produce JSON with a “format” section (with tags and duration/size/bitrate) and a “streams” array (each stream having codec type, codec name, width, height, channel count, language, etc.). In a Python context, one could invoke ffprobe via subprocess and parse this JSON. There are also wrapper libraries (like ffmpeg-python, or pymediainfo for MediaInfo) that retrieve similar info. If nGene Media Player has access to a local Python environment or a server, using ffprobe is one of the most straightforward ways to get a comprehensive metadata dump. The output can then be filtered to display relevant info in the UI, for instance: “Video: H.264, 1080p, 30fps, ~5 Mbps; Audio: AAC, 2 channels, 128 kbps; File Size: 700MB; Duration: 01:30:00”. FFmpeg can also extract thumbnails (e.g., generate an image at a certain timestamp) or even waveform images, which can be part of metadata enrichment (though those outputs are more content-derived). If running Python server-side, the player could send the file (or its path) to the server to analyze with ffprobe and return results. If using PyScript, one could compile ffprobe or use a Python binding, though it is likely simpler to call a JavaScript ffprobe as mentioned earlier and avoid double overhead.
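Wrapping the ffprobe invocation above in Python might look like the following sketch. The summarizer is pure (it only reads the parsed JSON), so it works equally on output returned from a server; the function names are my own, while the field names follow ffprobe's JSON output.

```python
import json
import subprocess

FFPROBE_CMD = ["ffprobe", "-v", "quiet", "-print_format", "json",
               "-show_format", "-show_streams"]

def probe(path):
    """Run ffprobe on a file and return its parsed JSON report."""
    result = subprocess.run(FFPROBE_CMD + [path], capture_output=True, check=True)
    return json.loads(result.stdout)

def summarize(info):
    """Condense an ffprobe report into a short UI string."""
    parts = []
    for stream in info.get("streams", []):
        if stream.get("codec_type") == "video":
            parts.append(f"Video: {stream['codec_name']}, "
                         f"{stream['width']}x{stream['height']}")
        elif stream.get("codec_type") == "audio":
            parts.append(f"Audio: {stream['codec_name']}, "
                         f"{stream.get('channels', '?')} ch")
    duration = float(info.get("format", {}).get("duration", 0))
    parts.append(f"Duration: {int(duration // 60)}:{int(duration % 60):02d}")
    return "; ".join(parts)
```

In a backend setting, summarize(probe(path)) yields the one-line description; the front-end only ever sees the string or the JSON, never the file again.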
Mutagen and eyeD3 for Audio Tags:
Python’s mutagen library is a pure Python module that handles metadata tags for many audio formats. It supports ID3 (for MP3, AIFF, etc.), MP4 metadata, Vorbis/FLAC comments, ASF (for WMA), and more. Using mutagen, one can open a file and inspect tags easily. For example:
from mutagen.mp3 import MP3
audio = MP3("song.mp3")
print(audio.info.length, audio.info.bitrate)  # duration in seconds, bitrate in bps
print(audio.tags.get("TIT2"), audio.tags.get("TPE1"))  # Title and Artist ID3 frames
Mutagen reads the ID3 frames and allows access by frame identifier or via a common interface (mutagen also has mutagen.File(), which auto-detects the format and returns a generic object). Similarly, for FLAC:
from mutagen.flac import FLAC
audio = FLAC("song.flac")
print(audio.info.length, audio.info.sample_rate)  # duration in seconds, sample rate in Hz
print(audio.get("title"), audio.get("artist"))  # Vorbis comment fields
This will give the Vorbis comment tags. Mutagen also handles pictures in tags (it can extract the image bytes). The library is lightweight and fast for tag reading. Another library, eyeD3, is specialized for MP3 and focuses on ID3v2. It provides a slightly higher-level interface for MP3 metadata and can also do things like calculate BPM (if a plugin is used) or manage cover art. eyeD3 could tell you, for instance, whether an ID3 tag uses a certain encoding or whether multiple tag versions are present. However, for most use cases, mutagen suffices and works across formats. In context, Python with mutagen could be used to scan a library of songs and build a database of metadata that the JS player then uses. If PyScript is considered, one could load a smaller subset (perhaps just mutagen’s ID3 logic) to parse a file in-browser, but that might be redundant given the existing JS libraries. Mutagen truly shines server-side or in batch-processing scenarios.
Librosa and Essentia for Audio Analysis:
For deeper audio analysis, Python has powerful libraries:
Librosa:
A popular library for music and audio analysis. It can compute a wide range of features: tempo (BPM), beats, onset times, spectral features (chromagrams for pitch content, MFCCs for timbral texture, etc.), and even perform pitch detection. For example, using librosa one could do:
import librosa

y, sr = librosa.load("song.mp3")  # decodes audio to a waveform (requires ffmpeg or an audioread backend)
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
print("Estimated tempo:", tempo, "BPM")
This will output an estimated BPM for the track. Librosa might mis-estimate if the track has variable tempo or unclear beats, but it’s generally good for reasonably rhythmic music. To get the key, one approach is to compute the chroma (which gives an energy for each pitch class over time) and then use a heuristic or a simple algorithm to guess the key from the aggregate chroma. Librosa doesn’t directly give “key = C# minor” in one call, but it provides tools to derive it. Essentia (next point) does have a built-in for key.
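The chroma-to-key heuristic mentioned above can be sketched with the classic Krumhansl-Schmuckler approach: correlate the aggregate 12-bin chroma (e.g., librosa.feature.chroma_cqt summed over time) against the major and minor key profiles at every rotation and keep the best match. This is a simplified sketch, not librosa API; the profile values are the standard Krumhansl-Kessler weightings.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def _corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def estimate_key(chroma):
    """Guess (tonic, mode) from a 12-bin aggregate chroma vector."""
    best = None
    for shift in range(12):
        rotated = chroma[shift:] + chroma[:shift]  # put candidate tonic first
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            r = _corr(rotated, profile)
            if best is None or r > best[0]:
                best = (r, NOTES[shift], mode)
    return best[1], best[2]
```

Feeding it the chroma of an actual recording is where librosa comes in; the decision logic itself needs nothing beyond the 12 aggregated bins.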
Essentia:
A comprehensive C++ library with Python bindings (developed by the Music Technology Group, Barcelona) for audio analysis. It includes algorithms for tempo, key, loudness (the EBU R128 standard, which gives LUFS values), danceability, mood, and much more. Essentia can, for example, take an audio file and output: BPM = 128, Key = G minor, Loudness = -7.5 LUFS integrated, etc., along with detailed descriptors. Using Essentia in Python might look like:
import essentia.standard as es

audio = es.MonoLoader(filename="song.mp3")()  # load and decode to mono
bpm, beats, confidence, _, _ = es.RhythmExtractor2013()(audio)
key, scale, strength = es.KeyExtractor()(audio)
print("BPM:", bpm, "Key:", key, "scale:", scale)
This might output “BPM: 127.9, Key: G, scale: minor”, for example. Essentia is very powerful but also heavy; running it in real time in a browser via PyScript would be challenging. It is better suited to offline analysis or backend processing.
Other Python Tools:
There are other specialized tools as well: Pydub (which wraps ffmpeg) can quickly get duration and basic info, and aubio can do onset detection and pitch tracking, for things like detecting beats or notes in real time. There is also OpenCV if one wanted to analyze video frames (to detect scene changes, for example, one could use OpenCV’s frame differencing or a dedicated scene-detection library). When working with video in Python, one might use MoviePy or OpenCV to extract frames or sections. For metadata like EXIF (if a video carries camera EXIF metadata), one could use exiftool bindings. But those are niche for a media player context.
The idea is that Python can handle the heavy lifting of analyzing media content deeply, beyond what a browser would normally do. This could be used to enrich the user experience (imagine the player showing “Song Key: G minor, BPM: 128” for a music track, which is something a DJ or musician might appreciate). Achieving that purely in JS is possible but much more effort, whereas using an existing Python library might be quicker if the infrastructure allows it.
Architecture – PyScript vs Server-side:
If one opts to use Python for these tasks, there are two main ways:
Local in-browser via PyScript:
PyScript is an initiative to run Python in the browser by loading the Python interpreter compiled to WebAssembly (Pyodide). The benefit is everything stays client-side (no server needed, privacy of local files preserved). One could load, for instance, mutagen and a small BPM detection algorithm and run it in the page. However, the overhead is significant: Pyodide is dozens of megabytes, and heavy libraries like Essentia are not trivial to bring in. PyScript is still an evolving tech; currently, it’s great for demos but for a production media player, relying on it might make the app weighty and possibly slower to start. If only a small part of Python is needed, an alternative is to compile a specific Python algorithm to WebAssembly (like take Essentia’s C++ core and compile just the needed part to WASM, bypassing Python entirely). That’s essentially what some JS libraries do. So PyScript in practice would be justified if the media player’s environment is known and controlled (e.g., an internal tool where the user doesn’t mind loading that overhead to get advanced analysis), or if implementing the logic in JS would be prohibitively complex. A possible middle ground is to use PyScript for analysis as an optional feature: e.g., a user clicks “Analyze track” and then the Python runtime loads to do BPM, rather than loading it for every user upfront.
Server-side or Preprocessing:
This approach treats the media player as a client and uses a server (or a separate local process) to do heavy analysis. For example, if nGene Media Player were a web application with a backend, when a user uploads a file or opens a file, the file could be sent (or a hash) to the server, the server runs ffprobe, mutagen, etc., and returns the metadata which the front-end displays. This offloads the work from the browser (keeping the front-end light) but requires transferring the file or having a backend environment. For large media files, uploading just to get metadata might be inefficient, so one might only do this for certain features (like “compute BPM” or “get detailed codec info”) while basic info is gotten locally. In a desktop scenario, server-side could also mean a local background service. If nGene Media Player is an Electron app, for example, it could include Node.js or Python in the backend and directly call libraries on the user’s machine. That is a powerful option: the player UI (front-end) can send a message to the backend “give me metadata for file X”, and the backend (with full ffmpeg, mutagen, etc.) responds with everything needed. This way, the user experience is seamless and rich, and no internet is needed. The trade-off is increased complexity in development (maintaining the backend code, bundling ffmpeg or Python environment, etc.).
In summary, Python is capable of extracting essentially any piece of information one might want from media, given the right libraries. The decision of where to use it depends on the use case: for very advanced analysis or batch processing of many files, a backend (or offline script) is very useful. For lightweight, immediate needs, the browser alone is often sufficient. One could imagine a hybrid: the browser gets what it can quickly (e.g., duration, basic tags), and perhaps the user can trigger a deeper analysis (which might use a server or PyScript). The key is to align the approach with performance and complexity constraints of the project.
Architectural Considerations
When implementing metadata extraction in nGene Media Player, it’s important to choose the right tool for the job to provide a good user experience without unnecessary overhead. Here are some guidelines on when to use client-side JS vs. Python/back-end solutions:
Use Browser JS for Immediate and Basic Metadata:
For quick access to information like duration, file name, and basic tags (title, artist), and for showing something as soon as a file is loaded, the built-in HTML5 media properties and a lightweight JS tag parser are ideal. They are fast and require no internet connection or heavy computation. For example, when a user opens a song, the player can almost instantly display the title (from ID3 via music-metadata) and length (from audio.duration). This keeps the interface responsive. Also, showing waveforms or simple visualizations using Web Audio can be done progressively (e.g., decode small chunks or downsampled audio) so that the UI remains interactive.
Leverage Python/Server for Heavy Lifting and Batch Operations:
If the media player will include features like analyzing an entire music library for BPM/key or scanning videos to generate preview thumbnails for a timeline, doing this in pure JS might be slow or impossible due to browser sandbox limitations. A Python backend or a one-time preprocessing step can efficiently handle this. For instance, a server could precompute a waveform and store it, so the front-end just fetches it. Or a Python script could run ffprobe on every new file added to a library, populating a database with codecs and quality info. The player front-end then just queries that database. In a desktop context, this could be a background thread in the app. This approach excels when there are many files to process or when using algorithms not available in JS. The downside is the complexity of setting up that infrastructure and requiring the user to possibly install additional components or having the application manage an internal Python environment.
Consider PyScript for Isolated Advanced Features:
If choosing to avoid any server/backend and keep everything self-contained in the browser, PyScript can be a middle ground for advanced features. The idea would be to load it only when needed. For example, if 99% of users never care about BPM/key detection, there’s no need to always load Essentia or librosa. But for the curious or advanced user who clicks an “Analyze Music” button, the app could then load the Pyodide runtime and run a predefined Python function to compute and display those advanced metrics. This way, the core experience remains lightweight, and only those who opt-in pay the cost (in time and bandwidth) for the advanced analysis. It’s important to communicate that something is happening (e.g., show a loading indicator “Calculating audio features...”) because loading PyScript might introduce a noticeable delay before analysis begins.
Security and Privacy:
Metadata extraction, especially on the client side, means dealing with user’s local files. One advantage of doing it in the browser or in a local app is that the file’s content and metadata stay private to the user (no uploading to a server). This is likely a priority for a desktop-focused player. If a server is used, ensure it’s secure and perhaps give users the choice (maybe the user opts in to an online metadata fetch for convenience). Additionally, running WASM or Python in the browser is safe in terms of not exfiltrating data (unless coded to do so), but one must be mindful of performance and memory (loading entire files into memory can be heavy; streaming approaches are preferred for large files).
Incremental Enhancement:
It’s not necessary to implement everything at once. The architecture can be designed so that adding a backend or PyScript later is possible. Perhaps start with JS-only metadata that gives the essentials. If down the road there’s demand for more, a module can be added. By modularizing (for example, having a separate component/service for “MediaInfo”), one can swap out implementations. Maybe initially it calls music-metadata JS, and later it can call an API. Maintaining a clear interface (like a function that given a file returns a metadata object with certain fields) will allow experimentation with different approaches behind the scenes without changing the rest of the app.
In conclusion, the strategy for metadata should match the needs of the user base and the resources available. For a relatively small-scale or personal project, sticking to client-side solutions keeps things simple and respects user privacy. For a larger-scale application with many users and files, investing in backend services for richer metadata could greatly enhance the user experience. nGene Media Player can start by extracting what’s easy (duration, basic tags via JS) and progressively incorporate more advanced metadata features using Python tools as needed, ensuring that the architecture remains flexible for such upgrades.
Written on March 9, 2025
Design and UX Improvements for Desktop
With the functionality in place, attention turns to improving the user interface and experience of nGene Media Player. A desktop-focused web media player should leverage the larger screen and input options (mouse, keyboard) to provide an engaging and efficient experience. Below are suggestions for design and UX enhancements, organized into layout/visual improvements, interaction improvements, and the use of modern libraries to add polish. The tone of these suggestions is to enhance usability and aesthetics in a professional, subtle way without overwhelming the user.
Enhanced Layout and Visualizations
Dedicated Metadata Display Area:
The player should have a clear section in the UI where track or video information is shown. For audio tracks, this could be a header or side panel showing the track title, artist, album, and possibly the album cover art if available. For video, a small overlay or a line below the video could display the video title or filename, resolution (e.g., “1080p”), and length. By giving metadata a designated space, users can immediately identify what is playing. The design should use readable typography and maybe icons (a music note icon for song title, a video camera icon for video title, etc.) to visually distinguish types of info. Keeping this info visible (rather than hidden in a menu) is useful on desktop where space is available. That said, it should not be so large as to distract from playback; a balance is needed with font sizing and placement (for instance, text could be semi-transparent over a video and become fully opaque on hover).
Waveform Progress Bar:
Replacing or augmenting the traditional seek slider with a waveform visualization can significantly improve the player’s look and functionality. A waveform gives a quick visual cue of the audio’s dynamics—silence vs loud parts, song structure, etc. Practically, this could be implemented as a canvas or SVG element that displays the waveform of the current track. The waveform can serve as the clickable seek bar: users can click or drag on it to seek to specific points. This is particularly helpful for long audio (podcasts, DJ mixes) where a waveform helps mark sections. It also just looks more impressive than a plain bar. For implementation, the waveform data could be precomputed when the file loads (using Web Audio API to get sample peaks). If precomputing for a large file takes too long, consider showing a simplified waveform (maybe downsampled resolution) initially, or a placeholder “loading waveform” animation, then refine it. Visually, the waveform could be styled with a neutral color (or matching a theme color) and the current playhead position indicated by a contrasting line or a shaded region (e.g., left of current time is one color, right is another or faded). For videos, a waveform might be less common, but it could still be shown if the focus is on audio analysis (for instance, a video of a music track). Alternatively for video, consider a thumbnail strip as progress indicator (see next point in seeking).
Segment Loop (A–B Repeat):
Introducing an A–B loop feature can greatly benefit users who need to rehearse or closely study part of a media file. The UI for this could consist of a “Loop” toggle button and a way to set points A and B. One design approach: when Loop mode is activated, two extra buttons or markers appear near the timeline – one to mark the start point (A) and one to mark the end point (B). The user would seek to a desired start time and click “Mark A” (this could also be done via a keyboard shortcut for precision), then seek to an end time and click “Mark B”. The segment between A and B could then be visually highlighted on the timeline (perhaps with a different color or a block above the timeline). When loop is active, playback would constrain to that segment. The design should also allow resetting the loop easily (maybe a “Clear loop” option or just toggling off the loop mode resets the markers). A small indicator (like A and B letters) on the timeline helps the user remember where their loop points are. This feature should be somewhat tucked away (maybe in an “advanced controls” section or only visible when activated) so that casual users are not confused by extra buttons. But for those who need it, it should be easily accessible. From a UX perspective, providing a tooltip or mini-label showing the A and B times when set would be helpful (e.g., “Loop from 00:30 to 00:45”). The loop button might glow or highlight when active so the user remembers they are in loop mode.
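The enforcement side of A–B looping reduces to a small policy: on each timeupdate, snap the playhead back to A once it passes B. A sketch with the policy factored into a pure function (names are mine) so it can be tested apart from the DOM:

```javascript
// Decide where the playhead should be, given loop markers a/b (in
// seconds, possibly unset) and whether loop mode is active.
function loopPosition(t, a, b, active) {
  if (!active || a == null || b == null || b <= a) return t; // no valid loop
  return t >= b ? a : t; // wrap back to A once B is reached
}

// Usage sketch:
//   media.addEventListener("timeupdate", () => {
//     const p = loopPosition(media.currentTime, markA, markB, looping);
//     if (p !== media.currentTime) media.currentTime = p;
//   });
```

Guarding the assignment keeps the handler from touching currentTime on every tick, which avoids needless seeks.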
Smooth Animations and Feedback:
Adding subtle animations improves perceived responsiveness and quality. For example, when the user hits play, instead of an instant switch of the play/pause icon, one could animate the icon (a common approach is morphing the play triangle into a pause bars using an SVG animation). Progress bar movements can be animated so if a user seeks to a new position, the indicator glides to the new spot rather than jumping abruptly. Visual feedback on interactions is also crucial: buttons should have hover states (change color or slight scale-up on hover) and active states (perhaps a slight glow or depress effect when clicked). If the player has a volume slider, dragging it could show the volume level numerically in a tooltip that follows the thumb, fading out after release. Another idea is a tiny “blip” animation on the waveform or an equalizer icon bouncing to indicate audio is playing (especially when the window is out of focus or the player is compact). All these micro-interactions make the player feel modern. Using easing (non-linear movement) makes the animations feel natural. It’s important to keep them quick and not interfere with user control (e.g., don’t animate things in a way that delays a response to a click). The overall visual theme of these animations should match the design language (for instance, if the player is sleek and minimal, animations should be subtle and not cartoonish).
Responsive and Adaptive Layout:
Even though the player is desktop-focused, it’s good design practice to ensure the layout can adapt to different window sizes or be docked in a corner of the screen. For example, nGene Media Player might sometimes run in a small window or a half-screen view. The layout could switch to a more compact mode if the width gets too small (hiding text labels and showing just icons, or collapsing the playlist/metadata panel). Conversely, if there’s plenty of space, perhaps show additional information or larger album art. Using CSS flexbox or grid will help in making the layout flexible. On desktop, also consider multi-column layouts: one could have, for instance, the media element (video or an album art visualization) on the left and a list of upcoming tracks or metadata on the right. That kind of layout takes advantage of horizontal space. If implementing a playlist, that panel could be collapsible. Overall, the design should not be static; it should accommodate both a minimal player view and an expanded detail view gracefully. This gives users control over how much information they see at once.
Improved User Interaction
Keyboard Shortcuts for Common Actions:
Desktop users often expect keyboard controls in addition to mouse interactions. Implementing a set of intuitive shortcuts greatly enhances the user experience for power users. Some standard ones:
Spacebar – Toggle Play/Pause (when the player is focused). This is almost universal in media players.
Arrow Left/Right – Seek backward/forward by a small step (e.g., 5 seconds). This allows quick rewind or skip without reaching for the mouse.
Ctrl + Arrow Left/Right (or Alt+Arrow) – Seek backward/forward by a larger step (e.g., 15 or 30 seconds). Useful for navigating longer content quickly.
Arrow Up/Down – Increase/Decrease Volume. Perhaps in increments of 5 or 10%. Combining with a modifier (Shift+Up/Down) could jump to max or min quickly.
M – Mute/Unmute audio. A single key toggle for mute is handy.
L – Toggle Loop mode (if A–B loop is off, this might loop the whole track; if A–B points are set, it could toggle the segment looping). Note that this deliberately departs from YouTube’s convention, where L jumps forward 10 seconds; here L is reserved for looping, since looping is the more relevant concept in this player.
A/B – Set Loop Start/End points. If implementing A–B looping via keyboard, pressing “A” could mark the current time as point A, and “B” as point B. This provides precision for those who prefer keyboard.
F – Enter/exit fullscreen (for video). Many video players use F for fullscreen toggling on desktop.
These shortcuts should be documented in a help dialog or tooltip. For example, hovering over the play button could show “Play (Space)” to let the user know the shortcut. Additionally, an always-available “help” icon or a mention in a README accessible from the player can list all shortcuts. It’s important that keyboard interactions don’t interfere with other page shortcuts (so scoping them to when the player has focus or when no text input is active). By implementing these, the player becomes much faster to control, especially for advanced users who might be doing other tasks and quickly want to pause or skip a track using the keyboard.
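The shortcut list above maps naturally onto a small dispatch function. A sketch (the player object and its method names are illustrative; a real handler would also check event.target so typing in a text field does not trigger shortcuts):

```javascript
// Map a keydown to a player action. Returns true if the key was
// handled, so the caller knows whether to preventDefault().
function handleShortcut(key, ctrl, player) {
  const step = ctrl ? 30 : 5; // Ctrl enlarges the seek step
  switch (key) {
    case " ":          player.paused ? player.play() : player.pause(); return true;
    case "ArrowRight": player.seekBy(step);  return true;
    case "ArrowLeft":  player.seekBy(-step); return true;
    case "ArrowUp":    player.setVolume(player.volume + 0.05); return true;
    case "ArrowDown":  player.setVolume(player.volume - 0.05); return true;
    case "m":          player.toggleMute(); return true;
    default:           return false; // let the browser handle anything else
  }
}

// Usage sketch:
//   window.addEventListener("keydown", (e) => {
//     if (handleShortcut(e.key, e.ctrlKey, player)) e.preventDefault();
//   });
```

Returning a boolean makes the scoping rule easy: only call preventDefault for keys the player actually consumed.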
Advanced Seeking Aids:
For video content, scrubbing through the timeline can be improved by showing a preview thumbnail. This feature, common in YouTube and other modern players, displays a small image of the frame at the hovered time on the progress bar. Implementing this requires either precomputed thumbnails or generating them on the fly (precomputing is typically done on the server or beforehand; generating in the browser is possible by drawing video frames to a canvas element during load, but can be complex and heavy). Even without thumbnails, a tooltip with the timestamp at the cursor is very useful (e.g., hover over a point on the timeline and see “1:23:45”, so the user knows where a click will jump). If chapters or track markers are known (say an album is playing and each track in a DJ mix has a timestamp, or a video has chapter metadata), the timeline could incorporate small tick marks or icons to indicate them, and clicking one could jump to that chapter. Another idea is “speed scrubbing”: while dragging the seek handle, moving the cursor farther up or down could change the seek speed (some video editors do this, but it might be too advanced for a simple player). At the very least, clicking on the progress bar should be easy and precise (for instance, by making the clickable area tall enough). Additionally, for long audio tracks, one could allow direct input of a time to jump to (e.g., clicking the elapsed-time display could turn it into an editable field where the user types a timestamp like “5:00” to jump to 5 minutes). Such features cater to advanced usage scenarios.
Drag-and-Drop and File Management:
On desktop, drag-and-drop is a natural way to open files. nGene Media Player’s interface can include a drop target – likely the whole window or a specific area (perhaps an “Open file” panel) – where users can drop an audio/video file from their file explorer. When a file is dragged over, the UI can highlight (e.g., outline the player with a colored border or show a big “Drop to play” message). This feedback reassures users that the action is recognized. On drop, the player should immediately load the file and start playback, or at least get ready to play. This is often quicker than using an “Open” dialog. That said, an “Open File” button is also useful (it can trigger a file picker). If possible, integrate with system file associations (in an Electron context, for example, the app could be made the default for certain file types so they open on double-click). Once a file is loaded, showing the file name (somewhere in the metadata area, as mentioned) is good. If multiple files are queued, a simple playlist view should be available. The playlist could be a sidebar or a dropdown list of the filenames/tracks the user has loaded in this session. It would allow selecting another track without returning to the file picker repeatedly. When the user drags in a folder or multiple files at once, the player could treat that as a playlist (queue them up). Controls for “Next” and “Previous” track (and maybe “Shuffle” or “Repeat All”) could then appear or activate when more than one item is in the list. These controls can be small next/prev buttons near the play button or at the top of the playlist panel. In terms of UX, ensure that adding files to the playlist does not interrupt current playback unless the user initiates it; for example, dragging new files in could simply add them to the list while the current song continues, rather than switching immediately.
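The queueing rules described above (append without interrupting the current track, then step with Next/Prev) can be modeled as pure functions; the function names and the playlist shape here are illustrative only:

```javascript
// Append dropped files to the queue without touching the current track.
function enqueue(playlist, names) {
  return playlist.concat(names.map((name) => ({ name })));
}

// Advance to the next track; stay on the last one unless Repeat All is on.
function nextIndex(current, length, repeatAll) {
  if (length === 0) return -1;
  if (current + 1 < length) return current + 1;
  return repeatAll ? 0 : current;
}

// Step back; wrap to the end only when Repeat All is on.
function prevIndex(current, length, repeatAll) {
  if (length === 0) return -1;
  if (current > 0) return current - 1;
  return repeatAll ? length - 1 : 0;
}
```

Because `enqueue` returns a new list and never changes the current index, a drop event can add tracks mid-song without interrupting playback.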
Context Menu and Right-Click Options:
Desktop users often right-click expecting more options. The player can offer a context menu on right-clicking the video or audio area. Options might include: “Play/Pause”, “Mute”, “Loop On/Off”, “Show in Folder” (if it’s a local file and such integration is possible, e.g., in an Electron app), or “Properties/Details” to show metadata. If implementing a custom context menu, one must disable the default browser menu for that element. This is an optional enhancement, but it can make the app feel more like a native desktop app. Another interactive improvement is keyboard focus management: allowing the player to be tabbed into and then controlled with arrow keys or the space bar (for accessibility and for power users who navigate via keyboard). Proper ARIA roles on the player controls would also improve the experience for screen-reader users and others using assistive tech, which is part of UX quality on desktop as well.
Modern UI Libraries and Frameworks
Smooth Animations with Anime.js or GSAP:
While CSS can handle basic transitions, complex or coordinated animations are much easier with a dedicated library. Anime.js is a lightweight library that can animate CSS properties, SVG, canvas, and more. It could be used, for example, to create the play/pause icon morph animation or to animate the waveform’s appearance (like a nice reveal from left to right when a track loads). GSAP (GreenSock Animation Platform) is a more powerful (but slightly heavier) library that can handle sequencing of animations and has a rich plugin ecosystem. If the media player were to have an introduction animation (say, a logo or splash, or a transition when opening a new file), GSAP could orchestrate that easily. GSAP can also be used for more subtle things like smoothly scrolling a playlist or fading elements in and out. The choice between Anime.js and GSAP may come down to the complexity of animations: Anime.js covers most needs and is small; GSAP is enterprise-grade for very elaborate effects. In either case, these libraries ensure that animations are performant (using requestAnimationFrame under the hood and handling browser quirks) and can be tuned with different easing functions for a professional feel. By using them, the developer avoids reinventing the wheel for timing functions, sequences, or cross-browser behavior. For example, one could animate the volume icon on mute (line through it) with a little rotation or scale using Anime.js in just a line or two of code. It’s important not to overdo animations – they should support the user experience (feedback or delight) without slowing down interaction.
Building UI Components with Lit:
As the application grows in features (waveform display, playlist, etc.), maintaining a clean structure is key. Lit (formerly LitElement / lit-html) is a library for building Web Components using a simple, efficient approach. Using Lit, one could encapsulate parts of the UI into custom elements. For instance, a <media-player> main component might contain sub-components like <media-controls> (play, pause, volume, timeline), <media-playlist>, and <media-metadata-display>. Each of those could be a Lit component with its own styles and reactive properties. The advantage is modularity: the code for the playlist doesn’t directly interfere with the code for the controls, and each component can be developed and tested somewhat in isolation. Lit makes it straightforward to reflect properties to the DOM and update when data changes (for example, if the track title changes, the <media-metadata-display> component will automatically re-render that part). Web Components also ensure that if the player is integrated into a larger page or reused, it has a self-contained scope (shadow DOM can prevent styles from leaking in or out). Since nGene Media Player is desktop-focused, other page content may not be a concern, but the developer ergonomics of Lit still apply. It’s a humble framework in the sense that it doesn’t impose heavy structures; it just helps create elements. Another benefit is theming: with Web Components, one can define CSS custom properties for theming that users of the component (or a global theme) can set. For instance, the player could expose --player-accent-color, which would propagate to the play button, progress bar, etc., to easily change the color scheme. In summary, adopting Lit can future-proof the codebase as the UI grows while ensuring performance (Lit updates are efficient) and maintainability.
UI Component Library (Shoelace):
If the aim is to speed up development while ensuring a consistent look, Shoelace is an excellent choice. Shoelace is a collection of pre-built Web Components for common UI elements, with a modern look-and-feel. Instead of styling basic HTML elements from scratch or using a heavy CSS framework, one can use Shoelace components such as:
• <sl-button> – for play/pause and other buttons (it supports variants, toggling, and slotted icons). For example, a play button could be <sl-button circle><sl-icon name="play-fill" label="Play"></sl-icon></sl-button>, giving a circular icon button with hover effects built in.
• <sl-range> – Shoelace’s slider component, which could serve as the volume or progress bar (or a brightness control for video). It is themable and accessible, and it can show a tooltip with the current value while dragging.
• <sl-dialog> – useful for a “Preferences” or “About” dialog, or a confirmation (like “Are you sure you want to clear the playlist?”). It provides a responsive, accessible modal out of the box.
• <sl-menu> and <sl-menu-item> – can help build a context menu or dropdown menus for settings.
• <sl-icon> – Shoelace ships with an icon library (or you can plug in your own SVGs) for consistent icons.
Shoelace components are built on web standards, so they integrate nicely with Lit or any framework (or no framework). They also allow customization via CSS variables and classes, so the player can have a unique theme if desired while still using Shoelace’s base styling and functionality. By using such components, much of the cross-browser CSS work and the accessibility ARIA wiring is handled by the library, freeing the developer to focus on functionality. For example, <sl-range> already works with keyboard arrow keys and is screen-reader friendly, whereas a custom range input might need additional handling to reach the same level. The end result is a more polished UI with less effort. One just has to include the Shoelace script so the custom elements are defined; after that, it’s plug-and-play.
Consistency and Theming:
Whether using Shoelace or custom components, maintaining a consistent style is key to a professional look. Define a color scheme (perhaps based on the nGene Media Player brand or a neutral palette). Use CSS variables so that changing the theme is easy. For instance, --accent-color could be used for the progress-bar fill, button highlights, etc. For desktop, a dark theme is often preferred for media apps (think of VLC, Spotify, etc.), but it should be a tasteful dark: dark gray backgrounds with light text, using accent colors for highlights (like the play button on hover, or the progress fill). Provide enough contrast for readability. Additionally, ensure the design scales for high-DPI screens (using SVG or font icons so they don’t blur). All text should use a clear font (the default system font is usually fine, or a clean sans-serif). For a high-end feel, subtle shadows and blurs can be used (for example, a slight shadow behind the control bar to keep it visible over a video). The goal is a UI that feels cohesive; every element’s style should appear part of the same family. Modern libraries like Shoelace already follow a coherent design system, which helps. If custom-building, one can draw inspiration from Material Design or Fluent but adapt them lightly to avoid a generic look. Testing the UI on different screens and lighting conditions (monitor vs. laptop, bright room vs. dark room) can inform tweaks to contrast or sizing to ensure usability.
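A sketch of this variable-driven theming, with hypothetical class names; only --accent-color (plus whatever other tokens the player defines) needs to change to re-skin every control:

```css
:root {
  --accent-color: #1e90ff;   /* single knob for the brand color */
  --bg: #1f1f1f;             /* tasteful dark: dark gray background */
  --fg: #eaeaea;             /* light text for contrast */
}

.player        { background: var(--bg); color: var(--fg); }
.progress-fill { background: var(--accent-color); }
.control:hover,
.track.active  { color: var(--accent-color); }
```

Swapping the theme then reduces to overriding the custom properties on :root (or on the player’s host element, if it is a Web Component).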
By implementing these design and UX improvements, nGene Media Player will not only be functionally robust but also user-friendly and visually appealing. It will feel like a modern desktop application, with responsive controls, rich visuals like waveforms, and thoughtful details (like shortcuts and drag-drop) that desktop users appreciate. The use of web technologies and libraries means the player can achieve a high level of polish comparable to native apps, while remaining customizable and lightweight. As always, incremental enhancement is wise: features can be added step by step, gathering user feedback to refine the UX. Over time, these improvements can significantly elevate the user’s enjoyment and efficiency when using the media player, fulfilling the goal of a comprehensive and professional media playback experience.
Written on May 9, 2025
Script
Editing Videos
Trimming Videos with FFmpeg or Graphical Editors on Windows and macOS, and Resolving Homebrew PATH Issues
This guide provides a structured approach to trimming a specific section from a video using either FFmpeg or a graphical editor on Windows and macOS. It explains how to install and use FFmpeg, explores alternative editing methods, and offers troubleshooting steps to resolve PATH issues when installing FFmpeg via Homebrew on macOS. Every step and consideration is presented to ensure a smooth and professional workflow.
Trimming Videos with FFmpeg
Installing FFmpeg
FFmpeg is a free, open-source tool that supports a wide range of audio and video operations. It is available on both Windows and macOS.
Ensure that the FFmpeg directory is included in the system’s PATH.
Note: If Homebrew is used on Apple Silicon (M1/M2) Macs, binaries often reside in /opt/homebrew/bin. On Intel Macs, they often reside in /usr/local/bin.
Basic Command to Trim a Video
Once FFmpeg is installed, the following command trims a segment from abc.mp4—starting at 00:00:49 and ending at 00:04:41—and saves the trimmed content into abc_edited.mp4:
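The exact invocation is not reproduced above, so treat the following as a reconstruction of the command the text describes, in both its lossless and re-encoding forms:

```shell
# Lossless trim: copy streams from 00:00:49 to 00:04:41 without re-encoding
ffmpeg -i abc.mp4 -ss 00:00:49 -to 00:04:41 -c copy abc_edited.mp4

# Re-encoding variant (slower, but allows changing codec or container)
ffmpeg -i abc.mp4 -ss 00:00:49 -to 00:04:41 -c:v libx264 -c:a aac abc_edited.mp4
```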
Re-encoding may reduce quality and increase processing time, but it allows changing codecs or formats.
Keyframe Alignment:
Using -ss before -i can cause FFmpeg to seek to the nearest keyframe, which occasionally introduces slight timing differences. If necessary, experiment with placing -ss either before or after -i. In most scenarios, -ss after -i is sufficient with -c copy.
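In concrete terms, the two placements look like this (note that with input seeking the timestamps reset, so a duration via -t is safer than an absolute end time; 00:03:52 is the length of the 00:49–04:41 span):

```shell
# -ss before -i: fast input seek (snaps to a keyframe with -c copy);
# timestamps reset, so specify a duration with -t
ffmpeg -ss 00:00:49 -i abc.mp4 -t 00:03:52 -c copy abc_edited.mp4

# -ss after -i: decodes from the start, slower but more accurate
ffmpeg -i abc.mp4 -ss 00:00:49 -to 00:04:41 -c copy abc_edited.mp4
```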
Trimming Videos Using Graphical Editors
Although FFmpeg is command-line based, some may prefer graphical methods. These editors typically re-encode video, which can take longer and potentially reduce quality, but they offer an intuitive visual interface.
Windows (Built-In Photos/Video Editor on Windows 10/11)
Right-click abc.mp4 and select Open with → Photos (or Video Editor).
Choose the Trim option (in Photos) or create a New video project (in Video Editor).
Drag the slider handles to select content from 00:49 to 04:41.
Save the trimmed section by choosing Save a copy (in Photos) or Finish video (in Video Editor).
Export the result as abc_edited.mp4.
macOS (Using iMovie)
Launch iMovie and import abc.mp4 into a new or existing project timeline.
Position the playhead at 00:49 and use Command + B (or the Split Clip command) to split the clip.
Repeat the split at 04:41 to isolate the desired segment.
Delete content before 00:49 and after 04:41.
Go to File → Share → File to export the trimmed result as abc_edited.mp4.
Troubleshooting FFmpeg PATH Issues on macOS with Homebrew
Occasionally, macOS users who install FFmpeg via Homebrew experience “command not found” errors. This typically indicates that the shell cannot locate the installed FFmpeg binary, often due to PATH misconfiguration.
Verifying FFmpeg Installation
brew list ffmpeg
or
brew info ffmpeg
These commands display details about the FFmpeg package. If no information appears, consider reinstalling:
brew reinstall ffmpeg
Locating the FFmpeg Binary
Homebrew generally installs software in one of the following directories:
Apple Silicon (M1/M2): /opt/homebrew/bin
Intel Macs: /usr/local/bin
To confirm the exact location of FFmpeg, run:
find "$(brew --prefix)" -name ffmpeg -type f
This command returns the full path to the installed ffmpeg binary (for example, /opt/homebrew/bin/ffmpeg).
Checking the PATH Environment Variable
To see if the correct installation directory is in the PATH, run:
echo $PATH
If /opt/homebrew/bin (Apple Silicon) or /usr/local/bin (Intel) is absent, the shell will not be able to locate FFmpeg.
Updating the PATH
Apple Silicon (M1/M2) Macs
If /opt/homebrew/bin is missing, add the following line to the shell configuration file (e.g., ~/.zshrc), then reload:
export PATH="/opt/homebrew/bin:$PATH"
Intel Macs
If /usr/local/bin is missing (uncommon, but possible), add the following line to the shell configuration file (e.g., ~/.zshrc or ~/.bash_profile):
export PATH="/usr/local/bin:$PATH"
source ~/.zshrc # or source ~/.bash_profile if using bash
Running ffmpeg -version afterward verifies a successful configuration.
Written on February 12, 2025
Trimming Videos on macOS with FFmpeg
FFmpeg is a powerful, open‐source multimedia framework capable of handling a wide range of video and audio operations. On macOS, it provides an efficient way to trim, concatenate, and re‐encode video clips via command‐line instructions. This guide focuses on installing and configuring FFmpeg on macOS, trimming videos (both single and multiple segments), and verifying the tool’s installation path.
Installing and Configuring FFmpeg on macOS
Homebrew Installation (Recommended)
Install Homebrew if it is not already present; instructions are available at https://brew.sh/. Then install FFmpeg and verify it:
brew install ffmpeg
ffmpeg -version
A successful version printout indicates FFmpeg is correctly installed and accessible.
Locating the FFmpeg Installation
In some cases, it may be necessary to confirm the exact path where FFmpeg has been installed (for example, when configuring external tools or diagnosing “command not found” errors). The following command uses Homebrew’s prefix to locate the ffmpeg binary:
find "$(brew --prefix)" -name ffmpeg -type f
brew --prefix returns the base directory where Homebrew is installed. On Apple Silicon systems, this is commonly /opt/homebrew; on Intel-based Macs, /usr/local.
Substitution with $(...) instructs the shell to execute brew --prefix and insert that output into the find command.
find "$(brew --prefix)" -name ffmpeg -type f searches all subdirectories under Homebrew’s prefix for any file named ffmpeg, restricting results to regular files (-type f).
Outcome: Provides the precise file path to the FFmpeg binary, enabling verification or troubleshooting of installation and PATH issues.
Trimming a Single Segment with FFmpeg
Trimming one continuous portion of a video is simple using -ss (start time), -to (end time), and -c copy (stream copy). Stream copy avoids re-encoding, preserving original quality and saving time.
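With those flags, a single-segment trim looks like this (filenames and timestamps are placeholders):

```shell
# Keep 00:00:30 through 00:01:00 of input.mp4, copying streams losslessly
ffmpeg -i input.mp4 -ss 00:00:30 -to 00:01:00 -c copy clip.mp4
```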
Approach A: Split and Concatenate (Stream Copy)
To cut out a middle section, export the parts to keep and then rejoin them with FFmpeg’s concat demuxer, which reads the segment list from a text file such as mylist.txt:
-safe 0: Allows absolute or relative paths in mylist.txt.
-c copy: Maintains source quality by copying streams without re-encoding.
Note: This two-step method is fast and lossless but requires creating multiple intermediate files.
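Putting the two steps together (assuming, for illustration, that the section between 00:49 and 04:41 is being cut out of input.mp4):

```shell
# Step 1: export the segments to keep, with stream copy
ffmpeg -i input.mp4 -to 00:00:49 -c copy part1.mp4
ffmpeg -i input.mp4 -ss 00:04:41 -c copy part2.mp4

# Step 2: list the parts and join them with the concat demuxer
printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > mylist.txt
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4
```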
Approach B: Filter Complex (Single Command)
For a one-step method or when advanced processing (like overlays, resizing, or format changes) is needed, FFmpeg’s filter_complex can be used. This process involves re-encoding:
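A sketch of the filter_complex route for the same cut (times in seconds; this path re-encodes, so codecs and formats can be chosen freely):

```shell
# Keep 0-49 s and 281 s-to-end, re-encoding the joined result
ffmpeg -i input.mp4 -filter_complex \
  "[0:v]trim=end=49,setpts=PTS-STARTPTS[v0]; \
   [0:a]atrim=end=49,asetpts=PTS-STARTPTS[a0]; \
   [0:v]trim=start=281,setpts=PTS-STARTPTS[v1]; \
   [0:a]atrim=start=281,asetpts=PTS-STARTPTS[a1]; \
   [v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" output.mp4
```

The setpts/asetpts resets are required so each kept segment starts at timestamp zero before the concat filter joins them.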
Keyframe Alignment: Placing -ss before -i can sometimes lead to frame-inaccurate trims in stream copy mode. If exact frame accuracy is critical, consider placing -ss after -i or re-encode for more precise cuts.
Audio/Video Synchronization: Always verify final outputs to ensure audio and video stay in sync, especially when using advanced filters or re-encoding.
Written on February 21, 2025
Meta Information
Python Script for BPM & Tempo Extraction from Multiple M4A Files (Written May 18, 2025)
This document describes extract_meta_from_media.py (v1.1), an enhanced Python script that
computes the global BPM of every .m4a file in ~/Desktop/m4a
and—new in this release—extracts tempo metadata and an instantaneous tempo curve
for deeper musical analysis.
1. Objective
The script will:
Locate all .m4a files in the m4a folder on your Desktop.
The complete v1.1 source code is reproduced below.
#!/usr/bin/env python3
"""
Filename : extract_meta_from_media.py
Version : 1.1
Author : Hyunsuk Frank Roh
Description
-----------
Walk through ~/Desktop/m4a, estimate the *global* BPM of every .m4a file,
**and** (new in v1.1) extract extra tempo information:
• Embedded tempo/BPM tag from the file’s metadata (iTunes ‘tmpo’ atom).
• An instantaneous tempo curve so you can see how BPM fluctuates over time.
Dependencies
------------
pip install librosa mutagen numpy
Usage
-----
python extract_meta_from_media.py
"""
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import os
from typing import List, Tuple, Optional
import numpy as np
import librosa
from mutagen.mp4 import MP4
# --------------------------------------------------------------------------- #
# Core routines #
# --------------------------------------------------------------------------- #
def compute_tempo(
    audio_file_path: str,
    sr_target: int | None = None
) -> Tuple[float, List[float]]:
    """
    Return (global_bpm, tempo_curve).

    Parameters
    ----------
    audio_file_path : str
        Path to an audio file (.m4a).
    sr_target : int | None
        Target sample-rate for librosa.load (None = original file rate).

    Returns
    -------
    global_bpm : float
        Single BPM estimate from librosa's beat tracker.
    tempo_curve : list[float]
        Frame-level BPMs returned by librosa.beat.tempo(..., aggregate=None).
    """
    y, sr = librosa.load(audio_file_path, sr=sr_target)

    # Global BPM via beat tracking
    global_bpm, _ = librosa.beat.beat_track(y=y, sr=sr)

    # Instantaneous tempo curve
    tempo_curve = librosa.beat.tempo(y=y, sr=sr, aggregate=None)

    return float(global_bpm), tempo_curve.tolist()
def read_tagged_tempo(audio_file_path: str) -> Optional[float]:
    """
    Fetch embedded tempo/BPM tag (iTunes 'tmpo' atom) if present.

    Returns None when no tag is found or the file type is unsupported.
    """
    try:
        audio = MP4(audio_file_path)
        if "tmpo" in audio.tags:  # 'tmpo' is usually a single int
            return float(audio.tags["tmpo"][0])
    except Exception:
        pass  # Unsupported container or no tag
    return None
# --------------------------------------------------------------------------- #
# Main driver #
# --------------------------------------------------------------------------- #
def main() -> None:
    desktop_path = os.path.join(os.path.expanduser("~"), "Desktop")
    m4a_folder = os.path.join(desktop_path, "m4a")

    if not os.path.isdir(m4a_folder):
        print(f"Folder not found: {m4a_folder}")
        return

    m4a_files = sorted(
        f for f in os.listdir(m4a_folder) if f.lower().endswith(".m4a")
    )
    if not m4a_files:
        print(f"No .m4a files found in {m4a_folder}")
        return

    for filename in m4a_files:
        file_path = os.path.join(m4a_folder, filename)
        print(f"\nProcessing {filename} …")
        try:
            global_bpm, tempo_curve = compute_tempo(file_path)
            tagged_tempo = read_tagged_tempo(file_path)

            print(f"Estimated global BPM : {global_bpm:.2f}")
            if tagged_tempo is not None:
                print(f"Embedded tempo tag   : {tagged_tempo:.2f} BPM")
            else:
                print("Embedded tempo tag   : – (none)")

            if tempo_curve:
                arr = np.array(tempo_curve)
                print(
                    "Instantaneous tempo stats:"
                    f" min {arr.min():.2f}"
                    f" | mean {arr.mean():.2f}"
                    f" | max {arr.max():.2f} BPM"
                )
                # Uncomment if you want to peek at the first few entries
                # print('Tempo curve (first 10):', ', '.join(f'{v:.2f}' for v in arr[:10]))
        except Exception as exc:
            print(f"Error processing {filename}: {exc}")


if __name__ == "__main__":
    main()
4. Explanation of Key Enhancements

Component           | v1.0 Behaviour               | v1.1 Upgrade
--------------------|------------------------------|--------------------------------------------------------------
read_tagged_tempo() | —                            | Uses mutagen to pull the iTunes BPM tag (tmpo) if it exists.
compute_tempo()     | Returned a single BPM value. | Also returns a frame-level tempo curve via librosa.beat.tempo(..., aggregate=None).
Console output      | Only global BPM printed.     | Adds embedded tag (if present) plus min/mean/max of the tempo curve for quick insight.
Dependencies        | librosa, soundfile           | Now librosa, mutagen, numpy (soundfile is still auto-pulled by librosa).
5. Program Flow Diagram (Updated)
┌────────────────────────────┐
│ Start Script │
└────────────────────────────┘
│
▼
┌────────────────────────────┐
│ 1. Verify ~/Desktop/m4a │
└────────────────────────────┘
│
▼
┌────────────────────────────┐
│ 2. List all .m4a files │
└────────────────────────────┘
│
┌────────┴─────────┐
│ Any files found? │
└────────┬─────────┘
Yes │ No
│
▼
┌────────────────────────────────────┐
│ 3. For each file: │
│ • Estimate global BPM │
│ • Read embedded BPM tag │
│ • Compute tempo curve │
│ • Print results │
└────────────────────────────────────┘
│
▼
┌────────────────────────────┐
│ End │
└────────────────────────────┘
6. Usage Instructions
Activate your venv each session (from ~/Desktop):
source venv/bin/activate
Run the script:
python extract_meta_from_media.py
Inspect output—for each track you’ll see:
Processing song1.m4a …
Estimated global BPM : 128.12
Embedded tempo tag : 128.00 BPM
Instantaneous tempo stats: min 127.50 | mean 128.05 | max 128.60 BPM
When finished, deactivate:
deactivate
Written on May 18, 2025
Python Script for BPM & Tempo Extraction from Multiple Media Files (Written June 21, 2025)
This document presents extract_meta_from_media.py (v1.2), an upgraded
Python script that scans ~/Desktop/media for audio-capable files
(.m4a, .mp3, .mp4), computes each track’s global BPM, and
extracts embedded tempo tags plus an instantaneous tempo curve for detailed
musical analysis.
1. Objective
The script will:
Locate all supported files (.m4a, .mp3, .mp4)
in the media folder on your Desktop.
The complete v1.2 source code is reproduced below.
#!/usr/bin/env python3
"""
Filename : extract_meta_from_media.py
Version : 1.2
Author : Hyunsuk Frank Roh
Description
-----------
Walk through ~/Desktop/media, estimate the *global* BPM of every audio-capable
file (.m4a, .mp3, .mp4), **and** extract extra tempo information:
• Embedded tempo/BPM tag from the file’s metadata
– iTunes 'tmpo' atom for .m4a / .mp4
– ID3 'TBPM' (or EasyID3 "bpm") for .mp3
• An instantaneous tempo curve so you can see how BPM fluctuates over time.
Dependencies
------------
pip install librosa mutagen numpy
Usage
-----
python extract_meta_from_media.py
"""
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import os
from typing import List, Tuple, Optional
import numpy as np
import librosa
from mutagen.mp4 import MP4
from mutagen import File as MutagenFile
# --------------------------------------------------------------------------- #
# Core routines #
# --------------------------------------------------------------------------- #
def compute_tempo(
    audio_file_path: str,
    sr_target: int | None = None
) -> Tuple[float, List[float]]:
    """
    Return (global_bpm, tempo_curve).
    """
    y, sr = librosa.load(audio_file_path, sr=sr_target, mono=True)

    # Global BPM via beat tracking
    global_bpm, _ = librosa.beat.beat_track(y=y, sr=sr)

    # Instantaneous tempo curve
    tempo_curve = librosa.beat.tempo(y=y, sr=sr, aggregate=None)

    return float(global_bpm), tempo_curve.tolist()
def read_tagged_tempo(audio_file_path: str) -> Optional[float]:
    """
    Return embedded BPM tag (if any) or None.
    """
    ext = os.path.splitext(audio_file_path)[1].lower()
    try:
        if ext in {".m4a", ".mp4"}:
            audio = MP4(audio_file_path)
            if "tmpo" in audio.tags:
                return float(audio.tags["tmpo"][0])
        elif ext == ".mp3":
            audio = MutagenFile(audio_file_path)
            if audio and audio.tags:
                if "bpm" in audio.tags:
                    return float(audio.tags["bpm"][0])
                if "TBPM" in audio.tags:
                    return float(audio.tags["TBPM"].text[0])
    except Exception:
        pass
    return None
# --------------------------------------------------------------------------- #
# Main driver #
# --------------------------------------------------------------------------- #
def main() -> None:
    desktop_path = os.path.join(os.path.expanduser("~"), "Desktop")
    media_folder = os.path.join(desktop_path, "media")

    if not os.path.isdir(media_folder):
        print(f"Folder not found: {media_folder}")
        return

    audio_exts = {".m4a", ".mp3", ".mp4"}
    media_files = sorted(
        f for f in os.listdir(media_folder)
        if os.path.splitext(f)[1].lower() in audio_exts
    )
    if not media_files:
        print(f"No supported audio files found in {media_folder}")
        return

    for filename in media_files:
        file_path = os.path.join(media_folder, filename)
        print(f"\nProcessing {filename} …")
        try:
            global_bpm, tempo_curve = compute_tempo(file_path)
            tagged_tempo = read_tagged_tempo(file_path)

            print(f"Estimated global BPM : {global_bpm:.2f}")
            if tagged_tempo is not None:
                print(f"Embedded tempo tag   : {tagged_tempo:.2f} BPM")
            else:
                print("Embedded tempo tag   : – (none)")

            if tempo_curve:
                arr = np.array(tempo_curve)
                print(
                    "Instantaneous tempo stats:"
                    f" min {arr.min():.2f}"
                    f" | mean {arr.mean():.2f}"
                    f" | max {arr.max():.2f} BPM"
                )
        except Exception as exc:
            print(f"Error processing {filename}: {exc}")


if __name__ == "__main__":
    main()
4. Key Enhancements over v1.1

Component            | v1.1 Behavior              | v1.2 Upgrade
---------------------|----------------------------|---------------------------------------------
Target folder        | ~/Desktop/m4a              | ~/Desktop/media with mixed formats
Supported extensions | .m4a                       | .m4a, .mp3, .mp4
read_tagged_tempo()  | iTunes tmpo only           | Adds ID3 TBPM / EasyID3 “bpm” for .mp3
Error handling       | Basic                      | Robust across multiple formats
Console output       | Per-track stats for .m4a   | Same stats for all supported formats
5. Program Flow Diagram (Updated)
┌────────────────────────────┐
│ Start Script │
└────────────────────────────┘
│
▼
┌────────────────────────────┐
│ 1. Verify ~/Desktop/media │
└────────────────────────────┘
│
▼
┌────────────────────────────┐
│ 2. List .m4a/.mp3/.mp4 │
└────────────────────────────┘
│
┌────────┴─────────┐
│ Any files found? │
└────────┬─────────┘
Yes │ No
│
▼
┌──────────────────────────────────────────────┐
│ 3. For each file: │
│ • Estimate global BPM │
│ • Read embedded BPM tag (if any) │
│ • Compute tempo curve │
│ • Print results │
└──────────────────────────────────────────────┘
│
▼
┌────────────────────────────┐
│ End │
└────────────────────────────┘
6. Usage Instructions
Activate your venv (each session):
source venv/bin/activate
Run the script:
python extract_meta_from_media.py
Inspect output — example:
Processing track2.mp3 …
Estimated global BPM : 124.37
Embedded tempo tag : 125.00 BPM
Instantaneous tempo stats: min 123.90 | mean 124.25 | max 125.10 BPM