Articles & Transcripts

Notes from Bob Ludwig on the History of Recording and Mastering

Bob Ludwig received his master's degree from the Eastman School of Music and became the principal trumpet player for the Utica Symphony Orchestra.  While training as a classical musician at Eastman, he worked in the sound department and started out cutting mono and stereo vinyl records on a fixed-pitch lathe.

Later on, lathe technology progressed to using computer (digital) control of the groove spacing.

Any nostalgia we see today for vinyl pressings does not mean the intrinsic technical limitations of vinyl have been overcome.

For example, you can hear "mold grain," the noise that supposedly "blank" vinyl generates between cuts.  Only a few cutting houses created high quality vinyl pressings with low noise; the cutting houses in the U.S. were some of the worst for pressing records with high noise levels.  If a record's noise level is not constant, it is hard for the ear to tune out, and it detracts from focusing on the music.

Notable papers on the engineering of the vinyl disc are:

"Performance Characteristics of the Commercial Stereo Disc" by John Eargle

"The Dynamic Range of Disc and Tape Records" by Gravereaux, Gust & Bauer

Eventually, Bob became an engineer for Sterling Sound and would go on to work for Masterdisk.  Bob would ultimately start his own mastering facility in Portland, Maine named "Gateway Mastering".

Notable dates from Bob:

1968 - Bob masters Led Zeppelin II and Jimi Hendrix's "Cry of Love" with the bluish cover.

1971 - Dolby B noise reduction for cassettes.

1974 - Improvements in technology.

While Dolby B did not provide full spectrum noise reduction it was enough to propel cassette tape technology into the consumer hi-fi world.  The portability of the cassette would make taping your records and bringing them with you outside the house all the rage via the Sony Walkman, boom boxes, and other portable players.  More advanced tape formulations and broadband noise reduction (Dolby C) were to follow and convince even some audiophiles that cassette technology had something to offer in comparison with the far more expensive consumer version of open reel tape technology.  It's worth noting that in the professional audio world, DBX noise reduction was popular (and provided full spectrum noise reduction), but it was only ever adopted in a few consumer cassette decks because of the relatively higher cost of the electronics.

1975 - Cassette sales really take off.

1976 - Bob changes from working at Sterling to Masterdisk which is located at 110 W57th in Manhattan.  Masterdisk and Mercury Records are in the same building.

1977 - Advent of professional digital recording machines.  Sony introduces the PCM-1.

The first digital recording machines for audio were so expensive that some studios could only afford to rent them.  There was no standardization of digital formats across different manufacturers' machines.  Thomas Stockham, considered by some to be the father of digital recording, used a custom digital recorder based upon a Honeywell data recorder.  This is not surprising, since computing technology had already been recording digital data on magnetic tape.  See the Soundstream digital recorder.  Stockham also worked on noise reduction algorithms that would become part of NoNoise, ultimately available in the future Sonic Solutions platform.

1978 - Professional digital audio recording arrives with a vengeance.

The first 32 track digital multitrack recorder debuts from 3M ($115,000).  Sony introduces the successor to the PCM-1, the PCM-1600 digital recorder, eventually followed by the 1610 and the 1630.  The technology was progressing from data converters borrowed from the computer world to converters designed specifically for audio.

1979 - Sony Walkman pushes consumer audio forward to the point where consumers want decent sounding headphones because the headphones provided with the Walkman are so cheap.

1981 - IBM personal computer.

1982 - First commercial CD players: the Philips CD-100, followed shortly by the Sony CDP-101 ($730).

1983 - Cassettes now outsell vinyl LPs and the CD is introduced in the U.S.

1984 - Macintosh personal computer

1984 - Advent of "Sound Designer" audio editing software ($995) from Digidesign in California, which would later become the ProTools DAW.

1985 - The CD starts to get attention for well produced mixes and mastering, e.g., Dire Straits - Brothers in Arms.  The Sony PCM-1630 is used for the soundtrack of the movie "The Falcon and the Snowman".

1986 - Direct metal mastering lathe becomes the ultimate vinyl disc cutting tool.

1987 - Mastering software.

James Moorer develops a hard disk based, non-linear audio editor named SoundDroid, drawing on his work at Lucasfilm, where he designed an audio signal processor to record the soundtracks for various films.  This would lead to the development of the Sonic Solutions workstation, which would become the software mastering standard of its time.

1987 - 2005  digital audio tape (DAT) format evolves, e.g., the Sony PCM-2500 DAT machine.

The 4mm DAT tape was used for computer data backup and subsequently larger formats with higher data capacity were also used (8mm).  Silicon Graphics workstations had a unique 4mm DAT drive with custom firmware that could format/read/write 4mm DAT tapes for audio (as well as use them for standard data backups).  While the 4mm audio DAT format was used by audio professionals, it never took off in the consumer world as an interactive (read/write) digital medium for audio.

1988 - Masterdisk gets the first Philips system to write/read red book CD-R discs.

Philips knows they have to make the CD format interactive to compete with the convenience of magnetic tape in the consumer world (which allows consumers to make their own compilations and playlists).  Philips commissioned Yamaha to build the CD-R system, which cost $35,000.  The cost of each blank CD-R was $80, but we charged the customer $300/ea. for them.

1990 - SADiE audio software.

1991 - ProTools.

CDs now outsell vinyl LPs and cassettes combined.  The first CD pressed in the USA is Bruce Springsteen's Born in the U.S.A.  Consumers have to buy an entire CD at ~$15/ea. even if they only like one or two songs on an album, and they are unhappy about it.  This sets the stage for a future consumer appetite for sharing small files of individual album tracks via peer to peer (P2P) network file sharing, both legally and illegally (Napster).

1992 - Studer Dyaxis digital recorder.

1992 - 2013  Sony develops the MiniDisc, which they hoped would be the digital successor to the analog cassette.

Sony had hoped that DAT tape would replace the cassette, but the hardware was too expensive and not popular with consumers; only the professional audio market could afford DAT hardware.  The MiniDisc used lossy compression of audio to achieve the same program lengths as the cassette.

1995 - HDCD (High Definition Compatible Digital) introduced by Pacific Microsonics.  

HDCD hardware provided the first really good A/D and D/A for PCM audio, and it was used for both stereo and surround sound.  HDCD used a unique dithering scheme to encode and recover 20 bits of data otherwise written as 16 bits on a red book CD.  A standard red book CD player would recover the standard 16 bit PCM audio, but a machine equipped with the licensed Pacific Microsonics hardware would recover 20 bit PCM in the D/A.  This is just a stop-gap higher resolution band-aid until we get to ...
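The real HDCD encoding is proprietary and far more sophisticated (it hides its extension data in a pseudo-random LSB dither stream), but the core idea of recovering extra resolution from a 16 bit container can be sketched with simple bit arithmetic.  This is a toy illustration only, not the actual algorithm:

```python
# Conceptual sketch: split a 20 bit sample into the 16 bit word a legacy
# CD player reproduces, plus 4 hidden bits an HDCD-aware D/A recovers.

def split_20_bit(sample_20):
    """Split a 20 bit sample into a 16 bit word plus 4 extension LSBs."""
    top16 = sample_20 >> 4          # what a legacy CD player reproduces
    low4 = sample_20 & 0xF          # extension data a decoder recovers
    return top16, low4

def recombine(top16, low4):
    """An HDCD-aware D/A recombines both parts into the full 20 bit word."""
    return (top16 << 4) | low4

original = 0x8F3A7                  # arbitrary 20 bit sample value
t, l = split_20_bit(original)
assert recombine(t, l) == original  # lossless round trip for the decoder
print(t, l)
```

A legacy player simply plays `top16` and hears ordinary 16 bit audio; only a licensed decoder knows how to find and reattach the extra bits.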

1997 - THE DVD.

The increased storage capacity of the standard definition DVD for movies allows consumers to play back music in a higher quality format than the red book CD.  Movie soundtracks are 24 bit 48kHz compared to CDs at 16 bit 44.1kHz.  Gateway Mastering becomes an authorized DVD authoring studio.  Some say this indicates the AES should have standardized on the video industry's 24 bit 48kHz linear PCM instead of accepting 16 bit 44.1kHz for the CD format.
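The raw data rates behind that comparison are easy to verify (assuming uncompressed stereo linear PCM):

```python
# Raw PCM data rates of the two formats mentioned above.

def pcm_rate_bps(bits, sample_rate_hz, channels=2):
    """Bits per second for uncompressed linear PCM."""
    return bits * sample_rate_hz * channels

cd = pcm_rate_bps(16, 44_100)    # red book CD: 1,411,200 bits/s
dvd = pcm_rate_bps(24, 48_000)   # DVD soundtrack: 2,304,000 bits/s
print(cd, dvd, dvd / cd)         # DVD carries ~1.6x the data of a CD
```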

1999 - Not to be left behind by the DVD, Tascam introduces a 24 bit professional DAT machine, the DA-45HR ($4600).

1999 - Napster is founded.

The software rips albums using lossy/low resolution MPEG audio encoding (which sounds terrible at low bit rates).  Napster will ultimately store and cache countless individual audio tracks for users to pick and choose from across a distributed network consisting of each member's home machine.  Freely available digital codecs running on consumers' PCs and Macs eventually erode demand for the physical CD and propel the industry towards a future of digital streaming.  First via Napster (and ultimately via streaming), consumers no longer have to buy an entire CD to hear the individual tracks they like.

1999 into the 2000's - Optical disc technology increases in storage capacity to accommodate longer and higher resolution video programs; audio benefits as well.

1999 - SACD (Super Audio CD). 

Philips and Sony roll out take 2 on the physical CD with a high resolution audio format that stores encryption keys on the physical disc.  The CD was defined by the "red book", so it follows that the SACD is the "scarlet book".  The high resolution audio format (1 bit serial sigma/delta encoding at 2.8MHz) is protected and cannot be extracted from the disc into a file that can be stored on a computer.  Only lower resolution PCM conversions of the original SACD stream are available from the digital output of a player using licensed SACD hardware.
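Assuming the nominal DSD rate of 64 × 44.1 kHz = 2.8224 MHz, the stereo SACD stream carries exactly four times the raw data of a red book CD:

```python
# Raw data rate of the SACD's 1 bit DSD stream vs. red book PCM.

dsd_rate = 1 * 2_822_400 * 2   # 1 bit x 2.8224 MHz x 2 channels = 5,644,800 bits/s
cd_rate = 16 * 44_100 * 2      # 16 bit x 44.1 kHz x 2 channels = 1,411,200 bits/s
print(dsd_rate / cd_rate)      # exactly 4x the red book data rate
```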

2001 - DVD-A (DVD audio)

DVD-A allows for stereo and multi-channel/surround mixes.

2002 - Hybrid SACD

The "hybrid-SACD" contains both a red book CD layer and a SACD layer.  The hope was that the hybrid design would appeal to more consumers.  When played in a conventional or older CD player, the player's laser will only see the standard red book layer for playback.  But when played in a SACD player, the laser hardware can focus on the SACD layer and play back the high resolution audio.  A physical SACD can hold over 4GB of data, enabling it to deliver both high resolution stereo mixes and 5.1 surround sound mixes.

SACD audio remains popular with audiophiles but never became popular with consumers because of the expense and the limited number of available SACD titles, i.e., major labels only pay for SACD re-issues of very popular titles.  However, classical music labels continue to release SACD's.

2002 - Gateway Mastering does re-mastering for the SACD re-issues of early Rolling Stones albums on the ABKCO label.


2013 - BluRay audio, or High Fidelity Pure Audio (HFPA), from Sony/Universal Music Group.

BluRay audio allows for the distribution of high resolution 24 bit PCM audio at sample rates up to 192 kHz on optical disc, in both stereo and surround sound.  BluRay and universal players are inexpensive now, and they play CDs, standard and high resolution DVDs, as well as BluRay discs (but not usually SACDs).  Once again, only a limited number of popular music titles are being re-issued in BluRay audio.  BluRay audio discs will play on existing BluRay video disc players.

2014 - MQA (Master Quality Authenticated) launched by Meridian Audio.

The idea is to provide better than red book CD quality audio in a small file format that reduces the signal bandwidth needed for streaming.  To achieve this, Meridian created a proprietary way of sampling and filtering audio in the A/D process that includes what they call a "de-blurring" filter.  Meridian claims their A/D process can improve the sound of existing analog source material.  Like HDCD, MQA uses psychoacoustically based encoding, and it also reduces bandwidth by encoding some high frequency content into the noise floor via its own dithering scheme.

A 24 bit "touch up" stream represents the difference between the original and modified signal.  As few as 13 bits in this stream can be reserved for the original PCM audio such that playback without an MQA decoder will result in the high order bits being rendered as noise.  However, Meridian claims their modified sampling process applied to the original audio results in that 13 bits of original PCM sounding better than conventionally sampled 16 bit 48kHz PCM audio.  If the audio is both MQA encoded and subsequently MQA decoded, then you get an "authenticated" reconstruction of the original audio but with an improved impulse and frequency response in the D/A process.

Audio hardware manufacturers have been reluctant to adopt MQA because of the licensing requirements, but it has become popular with streaming services.

2015 - Apple announces "Mastered for iTunes" and tries to stop the "loudness wars".

Apple releases software to prepare music for release on iTunes.  The maximum loudness is restricted, and Apple states that if you submit a master with too high a level, the level will be reduced.  Or if you master music to be heavily compressed and subsequently raise the level to achieve maximum loudness, then Apple will penalize your mix by reducing the overall level.  Your mix will lose whatever dynamic range it had and may end up sounding less loud compared to mixes mastered with more dynamic range.

2017 - MQA CD debuts.

2017 - Tidal online streaming service provides MQA as part of their "HiFi" subscription plan (lossy 24 bit 96kHz encoding).

2017 - Mobile carrier Sprint buys 33% stake in Tidal.

2018 - Streaming services well established.

Online streaming accounts for ~75% of industry revenue compared with ~12% for digital downloads and ~10% for physical media.

2019 - Deutsche Grammophon releases the most BluRay Audio titles of any classical music label.

Additional Comments:

DAW plugins need to get more professional.  Some plugins limit the bandwidth internally, so that even if your project sample rate is set to 96kHz, you would see your frequency response cut off below 30kHz if you passed a test sweep through the system.  Vendors like HDtracks check the frequency response of audio files, and if they find the frequency content of a file to be curtailed with respect to the sampling rate, they reject your mastered files.
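One way such a check might work (a hypothetical sketch, not any vendor's actual tool) is to examine the magnitude spectrum of the file and find the highest frequency with significant energy, then compare it against the Nyquist frequency implied by the sample rate.  A pure-Python DFT is used here for illustration; a real tool would use an FFT library:

```python
import cmath
import math

def highest_active_bin(samples, threshold_db=-90.0):
    """Return the highest DFT bin whose level is within threshold_db of the peak."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):          # magnitude spectrum up to Nyquist
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(samples))
        mags.append(abs(s) / n)
    peak = max(mags) or 1.0
    active = [k for k, m in enumerate(mags)
              if 20 * math.log10(m / peak + 1e-12) > threshold_db]
    return max(active)

# Synthetic check: a 96 kHz "file" whose content stops at 15 kHz would fail
# a test that expects content approaching the 48 kHz Nyquist limit.
rate, n = 96_000, 512
tone = [math.sin(2 * math.pi * 15_000 * i / rate) for i in range(n)]
cutoff_hz = highest_active_bin(tone) * rate / n
print(cutoff_hz, rate / 2)   # detected content cutoff vs. Nyquist frequency
```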

Music production is a global business now, where anyone who has a copy of ProTools thinks they can be a mastering engineer.  Many things work against project and home studios: the rooms are acoustically bad, the lowest two octaves cannot be reproduced, and the quality of the equipment and signal chain won't produce a good mix.  It's hard to learn how to listen and mix if you never work in a correct room where you hear the truth.  The result is that people self-master or pre-master their mixes out of the sweet spot, where a well equipped mastering facility could do much more to help them.  There's no substitute for starting with a world class mix.  If you don't have the resources for that, then often "less is more" when it comes to production and mixing.

Remarks from Bob Ludwig on His Credits Over the Years

Bob created the reference lacquer for The Band, “Music from Big Pink”.  He tweaked cutting the disc in order to try and get more low bass on it.  However, the bass was subsequently filtered out on the commercial pressing - probably in order to prevent skipping on cheaper turntable/stylus setups that would be unable to track the grooves.

Bob did the lacquer for the Frank Zappa album, "Sheik Yerbouti".  There were a large number of tweaks, including EQ on individual words and filtering for certain passages; he states he had to memorize parts of the music in order to get all his modifications done correctly.

Bob did the lacquer from the master of Steely Dan's record, "Gaucho", which was mixed to 15 ips tape using Dolby A noise reduction (mixed by Elliot Scheiner).  This album utilized a drum machine built by the band's recording engineer, Roger Nichols.  Bob loved the mix and the record.  Bob also did some mastering for Donald Fagen's solo album, "The Nightfly".  By that time Roger Nichols was using an early Sony digital system for recording, but Bob says it still sounded great (despite the limitations of that early digital technology).

Bob re-edited/remastered the entire Rolling Stones catalog for Virgin Records in the 1990's; some feel these are still the best remasters to date.  The album "Tattoo You" was assembled mostly from outtakes of previous recording sessions.  Apparently this was the last Stones album to reach the top of the charts.  It was also a long record, and it was hard to fit onto a vinyl pressing.

Audio and MP3 (by SynMuse Productions)

A Look (back) at the Mechanisms that Drive MPEG-compressed Audio

MPEG (Moving Picture Experts Group) audio and video have been with us now for more than 10 years. The MPEG group was formed in the late 1980s to create standards for the compression of digital audio and video signals. In 1992, MPEG became a standard as agreed upon by the International Standards Organization (ISO) and the International Electrotechnical Commission (IEC). MPEG 2 became a standard in 1994 and added the ability to encode content at lower sampling rates (16 kHz, 22.05 kHz, 24 kHz) and to encode a signal according to psychoacoustic models. The psychoacoustic models exploit masking and threshold effects in human hearing to discard encoded data that falls below the audibility threshold. For video encoding, MPEG may decrease the number of bits per frame for less complex pictures because the human eye will not notice the loss of quality. There are three layers of the MPEG specification that apply to audio (only) encoding: layer one, layer two, and the most recent, layer three.

Interestingly, the most prolific users of MP3 are musicians and Web surfers who have all but created an online culture around the exchange of MP3 (music only) files, so most of the information gleaned about this cutting-edge technology has been through its application on the Internet.  The format of this transmission, compression and decoding scheme will no doubt undergo many changes over time to suit the needs of its users and exploit what will be an ever-changing computer hardware marketplace.

MPEG offers a unique way of transmitting and receiving audio and video data over the Internet. In the future, manufacturers will be able to design devices that can be updated with such new program material as music and messages as well as program material that is developed, modified and mixed via computer. An advantage to this technology is its media—it does not wear out like tape or other magnetic media, nor does it require more permanent media to be produced, such as CD-ROM or DVD. It can be continuously updated via an Internet connection, thereby avoiding costly on-site service. Venues where extremely high-fidelity program material is not paramount, such as background music or video in restaurants, bars and retail environments, are likely to benefit first as the technology develops further.

Although MPEG 2 delivers audio and picture quality equivalent to TV studio standards, it is not perfect. MPEG is not a loss-free encoding or compression scheme, and its application results in a loss in signal quality. The way to think about compression loss is not to expect 100% signal quality after compression, but to set the encoding parameters in advance of the compression so that you can ensure the material you encode and subsequently transmit, download or read from a CD is of high enough quality for your target audience. Another point to remember is that the encoding setups will differ across applications, using different schemes for broadcast, downloading over the Internet, or producing a CD.

Defining the technology

A single piece of hardware (or software) that can do both encoding and decoding is sometimes referred to as a codec (encoder/decoder). In audio, the current MPEG specifications are broken up into layers, termed 1, 2, and 3. The popular abbreviation, MP3, refers to audio layer three encoding. No one seems to know why 128 kbps MP3 became the choice for downloading files from the Internet instead of 128 kbps MP2. In all likelihood, it happened this way because MP3 is a more recent development than MP2, and because MP3 carries a higher revision number, people sometimes assume it is superior. MP3's predecessor was audio layer two, or MP2, encoding, and many people believe MP2 to be superior to MP3 at bit rates of 128 kbps or higher.

Higher fidelity encoding, however, requires more resources, and this means more bandwidth and an increased demand for data storage space. Higher layers increase the amount of audio data compression and the complexity of encoding the audio signal. It is less mathematically intensive (and therefore takes less time) to encode a signal as layer one audio than it is to encode the same signal as layer two audio. The layers are hierarchical, so a layer three decoder should be able to decode layer two audio. Layer three is built on the features of layer two, adding a modified FFT (Fast Fourier Transform) and a modified discrete cosine transform to the encoding process. Encoding for layer three is more computationally intensive than layer two or layer one. The more complex encoding schemes and algorithms of the higher layers can improve audio quality, despite having greater compression. Even with increased compression and a lower bit rate, layer three audio encoding offers equal or greater quality than layer two audio encoding. For the overall effectiveness of MPEG audio's different layers, refer to table 1.

Economy of scale: time vs. audio quality

Increased computation time on the encoding side is a small price to pay for the quality and compression that even MP3 affords. Thus, MP3 encoding is starting to be applied at even the professional audio level. For example, 4 minutes of audio from a standard audio CD requires about 40 MB of disk or server space. The equivalent MP3 or MP2 file encoded at a 128 kbps constant bit rate takes up about 4 MB of space, a tenth of the space (a 10:1 compression ratio).
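The arithmetic behind those figures checks out (assuming 44.1 kHz, 16 bit stereo for the CD and a 128 kbps constant bit rate for the MP3):

```python
# Verifying the sizes quoted above for 4 minutes of audio.

seconds = 4 * 60
cd_bytes = 44_100 * 2 * 2 * seconds   # 44.1 kHz x 16 bit (2 bytes) x stereo
mp3_bytes = 128_000 / 8 * seconds     # 128 kbps constant bit rate

# Roughly 42 MB for the CD audio and ~4 MB for the MP3, about a 10:1 ratio.
print(cd_bytes / 1e6, mp3_bytes / 1e6, cd_bytes / mp3_bytes)
```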

Some audiophiles describe the quality of MP3 audio at 128 kbps as not being even remotely close to CD. Most people, however, hear 128 kbps constant bit rate MP3 audio as comparable to a Dolby B or Dolby C cassette recording of a state-of-the-art CD; there is a reduction in the dynamic range and some loss of highs and imaging, but the content remains a far cry from unlistenable. Different codecs can provide varying levels of audio quality, and more importantly, such encoding parameters as the encoding model or algorithm, the low-pass cutoff frequency, and the choice of stereo modes can affect the sound quality any MP3 encoder will produce. Decoders can vary in quality in similar ways.

Tradeoffs: bit rate vs. bandwidth

An important consideration in encoding audio is the relationship between audio quality and bit rate (or bandwidth) and how much space the data requires on disk or in memory. If you encode at lower bit rates, audio quality can suffer, but lower bit rates are better suited to slower network and transmission lines. Similarly, files encoded at lower bit rates take up less space in memory or data storage. If you are willing to double your bandwidth from 128 kbps to 256 kbps, then constant bit rate MP2 or MP3 audio is fairly close to, and perhaps indistinguishable from, CD quality. The 4 minute selection example mentioned earlier now requires about 8 MB of disk space when encoded as 256 kbps constant bit rate MP2 audio, a 5:1 compression ratio.

Further, doubling the bandwidth from 128 kbps to 256 kbps to increase the audio quality halves the compression ratio from 10:1 to 5:1 and doubles the storage needed to hold the entire file on disk or in memory. Broadcasters will also need to rent or buy faster network connections to transmit audio at higher bit rates.


Downloading audio means receiving audio data from a server over the network. Downloads of files usually require the entire file to be copied to disk before anything can subsequently be done with the file (like playing it). Therefore, downloading files of MP3 music means the end user waits for the entire file to be copied over the network to his local disk before playing it. If you are transmitting data to a client with a 56 kbps modem, the 4 minute, 4 MB MP3 file will take about 10 minutes to download, assuming the network connection between your computer and the server does not encounter severe degradation or bottlenecks. The 8 MB MP2 file would take twice as long, or about 20 minutes to download, but compare this to the amount of time it would take to download the original 40 MB CD audio file—100 minutes.
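These download times follow directly from file size and line speed (ignoring protocol overhead and congestion):

```python
# Download-time arithmetic from the paragraph above.

def minutes_to_download(file_mb, link_kbps):
    """Minutes to transfer a file of file_mb megabytes over a link_kbps link."""
    bits = file_mb * 1e6 * 8
    return bits / (link_kbps * 1000) / 60

print(minutes_to_download(4, 56))    # the 4 MB MP3: ~10 minutes
print(minutes_to_download(8, 56))    # the 8 MB MP2: ~20 minutes
print(minutes_to_download(40, 56))   # raw 40 MB CD audio: ~100 minutes
```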

Downloading music files in their entirety is an expensive operation in terms of time, and as such, it is not great for broadcasting or real-time applications. If the download gets interrupted by a network or transmission line failure, you usually have to start again, unless some rather smart network protocols are employed. However time consuming and tedious, an advantage of downloading is that you only have to do this operation once, and then you have your own private copy of the music or other data on your local disk. If there are no copy protections on this file, you can duplicate it as many times as you like. With such programs as WinAmp for PC users and MacAmp for Apple users, many people on the Web are beginning to collect MP3 files on their computers and trade them with others. These jukebox programs create playlists and let you organize your music files and program their playback in any order or fashion. If a song in the form of an MP3 file gets popular, it can be copied and transmitted over the network among fans hundreds of times in a day or two. Musicians, of course, love this. Record labels, however, typically loathe this practice.


Streaming audio is the ability to start playing audio before it has been downloaded into your system from the Web as a complete file. This is necessary because of the time needed for a complete download, and it allows the listener to have access to the material much more quickly in the process. By buffering and assembling the bits as they are received, an MP3 decoder can start to play audio almost right away. The stream is played in real-time, and a copy of the entire file need not be assembled and saved to your local disk.

For example, a player with a buffering scheme that stores up to 30 seconds of music might start to play music after it has downloaded only the first 5 seconds of music from the Internet. The 5 second or so time lag between receiving and playing audio is a small price to pay for the improved real-time performance. Also, decreasing the playback bit rate to something less than 56 kbps (like 28 kbps) ensures that there is a steady stream of music; the player will not run out of music to play before enough new music is downloaded to and buffered in the player. Streaming is really a clever tradeoff that delays playing music in real-time but is not so costly in terms of time as waiting for the music to download in its entirety.
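The buffering tradeoff described above can be sketched as a toy simulation; the rates, buffer sizes and one-second time steps here are illustrative only, not taken from any real player:

```python
# Toy model: the player pre-buffers a few seconds, then playback drains the
# buffer at the audio bit rate while the network refills it at the link rate.

def simulate(stream_kbps, link_kbps, prebuffer_s, duration_s):
    """Return the number of underruns (audible skips) during playback."""
    buffered = prebuffer_s * link_kbps      # kbits downloaded before playing
    downloaded = buffered
    total = duration_s * stream_kbps        # kbits in the whole program
    underruns = 0
    for _ in range(duration_s):             # one step per second of playback
        if downloaded < total:              # network keeps filling the buffer
            downloaded += link_kbps
            buffered += link_kbps
        if buffered >= stream_kbps:
            buffered -= stream_kbps         # play one second of audio
        else:
            underruns += 1                  # buffer ran dry: audible skip
    return underruns

# A 28 kbps stream over a 56 kbps link never starves...
print(simulate(stream_kbps=28, link_kbps=56, prebuffer_s=5, duration_s=240))
# ...while a 128 kbps stream over the same link skips constantly.
print(simulate(stream_kbps=128, link_kbps=56, prebuffer_s=5, duration_s=240))
```

The point the simulation makes is the one in the text: keeping the stream's bit rate safely below the link rate is what keeps the buffer from running dry.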

Unlike downloading entire files that provide a complete copy of the music on your hard disk, the piecemeal technique of streaming is sensitive to problems with the network and transmission lines. If the network gets interrupted for longer than the player can buffer music, then the stream of music will be broken and the player will produce an audible skip, which sounds like static or background noise. Because most people are wired to the Internet over consumer-grade phone lines, they will inevitably experience bit rates of much less than even 53 kbps from network congestion and bottlenecks. Streaming is going to skip sometimes as a result, and it is going to take even longer to download the complete file or broadcast. High-end users with cable modems, ISDN and ADSL may still encounter bottlenecks downloading data from a server, but they can generally stream audio at higher bit rates. When streaming audio, we are normally constrained by the bit rates available over conventional telephone networks, and audio quality suffers. It is not yet a perfectly networked world.

Let me add a quick note about codecs. Codec manufacturers are still developing their algorithms. It may not be surprising to find that algorithms that sound good for encoding speech can actually sound lousy when encoding music. What is surprising is that algorithms designed to extend the high end for encoding broadcast music in MP3 often do not sound good for speech range material. The best way to find out which algorithms work best for your program material is to audition your program material with the different encoding schemes available on your codec.

Legal issues

No one paid much attention to the legal implications MP3 files bring to use of commercial music on the Web until publicity broke about a Stanford University sophomore who had posted his collection of favorite music on a university server. The server was taking so many hits that it began to attract attention. University networks are in no way immune from the law, but what occurs behind private university or corporate network firewalls is unlikely to be scrutinized heavily from a legal standpoint despite official regulations, and therein lies the copyright issue that owners of material being posted or moved around on the Web most fear.

It is probably inappropriate for this article to describe at length all the legal ramifications of posting commercial music or intellectual property on the internet, but some guidelines are in order for streaming or downloaded audio. There is nothing illegal about the copyright holder posting MPEG audio files of his or her work on the network, and anyone can subsequently copy those files as many times as they like or propagate them anywhere on the network. The music industry claims that this is not what concerns the authorities, and they mostly dismiss this to be a fringe market populated by musicians seeking publicity.

The major recording labels claim that they do care about anyone’s duplicating or ripping off the music or intellectual property of their artists for the purpose of making it freely available on the Web because they receive no royalty payment. Although the copyright law allows you to make a reasonable number of copies for your private use, this concept of reasonable use does not include posting your favorite copy-protected songs on your home page at AOL as far as the record labels are concerned.

Copyright ID and protection

MPEG audio can be encoded with different kinds of ID tags to identify such things as the copyright owner, song title, artist and album. Unfortunately, these tags do not provide any kind of physical copy protection. The music industry is clamoring to provide its own secure electronic music distribution scheme known as SDMI (Secure Digital Music Initiative).
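The original ID3v1 tag shows how simple these tags are: a fixed 128 byte block appended after the audio data, holding the marker "TAG" plus fixed-width title, artist, album, year, comment and genre fields. A minimal reader/writer sketch (field values here are illustrative):

```python
# Build and parse an ID3v1 tag (128 bytes at the very end of an MP3 file).

def make_id3v1(title, artist, album, year, comment, genre=255):
    def pad(s, n):                      # truncate/null-pad to fixed width
        return s.encode("latin-1")[:n].ljust(n, b"\x00")
    return (b"TAG" + pad(title, 30) + pad(artist, 30) + pad(album, 30)
            + pad(year, 4) + pad(comment, 30) + bytes([genre]))

def read_id3v1(data):
    tag = data[-128:]                   # the tag lives in the last 128 bytes
    if tag[:3] != b"TAG":
        return None                     # no ID3v1 tag present
    field = lambda a, b: tag[a:b].rstrip(b"\x00").decode("latin-1")
    return {"title": field(3, 33), "artist": field(33, 63),
            "album": field(63, 93), "year": field(93, 97),
            "comment": field(97, 127), "genre": tag[127]}

tag = make_id3v1("Big Pink", "The Band", "Music from Big Pink",
                 "1968", "example tag")
info = read_id3v1(b"...fake mp3 audio data..." + tag)
print(info["artist"], info["year"])
```

As the text notes, nothing here protects the audio itself: the tag is plain metadata that any program can read, rewrite or strip.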

Like schemes before it, SDMI provides a digital watermarking scheme that embeds a virtually inaudible digital signature in the file, as well as a copy protection scheme. If everyone uses an SDMI-compliant player, then SDMI watermarked files can be played, their distribution can be tracked, and royalties collected. Once the copy count is exceeded, playing or copying the file is no longer possible. Although SDMI has an impressive number of companies as members, there is no reason to believe, based upon Internet culture, that SDMI will replace free MP3.  (Now we have MP4 as a de facto standard for Internet audio and video.)

History has shown that wherever intellectual property is distributed, regulation of intellectual property law is sure to follow. First, there were printing rights, then came audio and video rights.

View the Figures

For more information on MPEG compression, visit


The engineers who developed the MPEG technology leading to .mp3 files did not intend for it to become a mechanism for piracy.  For some insight into both the engineering and consumer perspectives on how compressed audio changed the course of the music industry in unexpected ways, read the book "How Music Got Free".

No content may be copied or reproduced without permission.                            © 2001-2021 SynMuse Productions