[HN Gopher] Why does MPEG Transport Stream still exist?
       ___________________________________________________________________
        
       Why does MPEG Transport Stream still exist?
        
       Author : mmcclure
       Score  : 60 points
       Date   : 2023-06-05 17:58 UTC (5 hours ago)
        
 (HTM) web link (www.obe.tv)
 (TXT) w3m dump (www.obe.tv)
        
       | andersced wrote:
       | Another MPEG-TS alternative used by some projects.
       | 
       | https://ieeexplore.ieee.org/document/9828725
       | 
       | https://github.com/agilecontent/efp
        
       | tonymillion wrote:
       | The simple answer:
       | 
        | MPEG-TS still exists because it's _still_ the best media
        | container/transport system in existence.
       | 
       | It was designed and has evolved over a long time based on solid
       | foundations by _very_ smart people in the industry, across
       | multiple institutions and professions.
       | 
        | As opposed to the "format du jour" that was quickly thrown
        | together by some Eastern Bloc script kiddie who saw the basic
        | outline of an AVI file and figured they could do better...
       | 
        | Case in point: MPEG-TS has scaled from DVD across Blu-ray (and
        | HD DVD) and is the backbone of both satellite and terrestrial
        | digital broadcasting.
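To make the discussion concrete, here is a minimal sketch of what a transport stream packet looks like on the wire. The field layout follows the standard 188-byte TS packet header (sync byte 0x47, 13-bit PID, 4-bit continuity counter); the example packet bytes are invented for illustration.

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    """Extract the 4-byte transport stream packet header fields."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid 188-byte TS packet")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "transport_error": bool(b1 & 0x80),
        "payload_unit_start": bool(b1 & 0x40),
        "pid": ((b1 & 0x1F) << 8) | b2,   # 13-bit packet identifier
        "continuity_counter": b3 & 0x0F,  # 4-bit counter, detects loss
    }

# A null packet (PID 0x1FFF), used for bitrate padding in real muxes:
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet)["pid"])  # 8191 (0x1FFF)
```

The fixed packet size and the continuity counter are a large part of why the format survives: a receiver can resynchronise and detect loss from any point in the stream.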
        
         | keithwinstein wrote:
          | (FWIW, DVDs use MPEG-2 program streams, not transport
          | streams. Both are MPEG-2 part 1 systems streams, just
          | different kinds.)
        
       | [deleted]
        
       | Sean-Der wrote:
       | This article is incorrect about WebRTC. I don't know about other
       | protocols and what they offer.
       | 
       | * Clock Recovery
       | 
        | I have had no problems measuring NTP drift. As the clocks
        | change, I simply measure again.
       | 
       | * Common Clock for Audio and Video
       | 
       | Sender Reports contain a mapping of Sequence Number to RTP
       | Sequence Numbers. This is respected by every player I have used.
       | My guess is author put their media in different MediaStreams. If
       | you want all your tracks to be synced you need to mark them as
       | one MediaStream.
       | 
       | * Defined latency
       | 
       | WebRTC provides playoutDelay.
       | https://webrtc.googlesource.com/src/+/main/docs/native-code/....
       | This allows the sender to add delay to give a better experience.
       | 
       | * Legacy transport
       | 
       | You can transport anything you want via RTP or DataChannels.
       | Maybe I am missing something with this one?
        
         | derf_ wrote:
         | _> I have had no problems with measuring NTP drift._
         | 
         | Yeah, their claim is just weird. RTP does not impose an
         | accuracy requirement on its timestamps (despite the name "NTP
         | timestamp" in the Sender Reports, they are not actually
         | expected to be synchronized with an NTP source), but I am
         | skeptical such requirements would be met in practice if they
         | did exist. The author only talks about video, but audio is a
         | much bigger problem: if you do not send the right number of
         | samples to the sound card at the right rate, you are going to
         | get annoying clicks and pops, and "dropping or duplicating"
         | audio frames will only cause more problems. You do not need to
          | send media for a day to have these issues. Oscillators are
          | bad enough that these issues show up in minutes, and WebRTC
          | obviously has strategies for dealing with them.
         | 
         |  _> If you want all your tracks to be synced you need to mark
         | them as one MediaStream._
         | 
         | More specifically, the underlying mechanism of giving tracks
         | that need to be synchronized the same CNAME has existed in RTP
         | since the original RFC 1889 from the year 1996. The timestamp
         | wrapping does require some care to get right, but is basically
         | a non-issue.
         | 
         | That said, a lot of WebRTC applications will not ask for
         | synchronization because it necessarily introduces latency, and
         | for interactive use cases you are often better served by
         | sacrificing exact sync for lower latency (as long as latency is
         | low enough, sync is never going to get too bad, anyway). But
         | that is very different from saying the standards do not support
         | it if you want it.
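The "timestamp wrapping" care mentioned above can be sketched in a few lines: RTP timestamps are 32-bit and wrap, so receivers track an extended timestamp by detecting backward jumps larger than half the range. This is a simplified illustration (it ignores heavy packet reordering around the wrap point); the class name is mine.

```python
MOD = 1 << 32  # RTP timestamps are unsigned 32-bit

class TimestampUnwrapper:
    """Extend wrapping 32-bit RTP timestamps to a monotonic 64-bit value."""

    def __init__(self):
        self.last = None    # last raw 32-bit timestamp seen
        self.cycles = 0     # number of wraparounds observed

    def unwrap(self, ts: int) -> int:
        if self.last is not None:
            # A large backwards jump means the 32-bit counter wrapped;
            # half the range is the usual disambiguation threshold.
            if ts < self.last and (self.last - ts) > MOD // 2:
                self.cycles += 1
        self.last = ts
        return self.cycles * MOD + ts

u = TimestampUnwrapper()
u.unwrap(0xFFFFF000)   # near the top of the 32-bit range
u.unwrap(0x00000500)   # after the wrap: extended past 2**32
```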
        
           | kierank wrote:
           | The RFC does not mandate that the RTCP timestamp (which you
           | need to handle wraparound if you join a stream halfway
           | through) needs to be the same as the video/audio clock.
           | 
           | In practice this clock is generated via the PC clock so it
           | isn't the same clock at all: https://chromium.googlesource.co
           | m/external/webrtc/+/lkgr/mod...
           | 
            | RTCP SRs are sent quite rarely (defaulting to 1s for video,
            | 5s for audio), so they are quite poor for the precise clock
            | recovery required in professional applications.
           | 
            | In practice, implementations probably just use buffer
            | fullness to drive their resampler.
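The clock-recovery idea under discussion can be sketched as follows: from two RTCP SR (NTP time, RTP timestamp) pairs, a receiver can estimate how fast the sender's media clock runs relative to wall time. The function name and numbers are invented; real receivers filter many such samples rather than trusting a single pair, which is part of why sparse SRs make precise recovery hard.

```python
def estimate_drift_ppm(sr1, sr2, clock_rate_hz):
    """Each sr is an (ntp_seconds, rtp_timestamp) pair from a Sender
    Report. Returns the sender media-clock drift in parts per million."""
    ntp_elapsed = sr2[0] - sr1[0]                    # wall-clock seconds
    rtp_elapsed = (sr2[1] - sr1[1]) / clock_rate_hz  # media-clock seconds
    return (rtp_elapsed / ntp_elapsed - 1.0) * 1e6

# 90 kHz video clock, SRs 5 s apart, sender oscillator ~50 ppm fast:
ppm = estimate_drift_ppm((0.0, 0), (5.0, 450022), 90000)
```

With SRs only every few seconds, the per-sample quantisation and network jitter dominate the estimate, so broadcast-grade receivers lean on a continuous reference like the PCR in MPEG-TS instead.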
        
            | izacus wrote:
            | Don't get myopic here - the timing constraints are critical
            | for systems like DVB-T/C/S (and whatever the US equivalent
            | is), not so much the web. The things you're talking about
            | might be dismissible when you're sending things from your
            | blog app, but TS is primarily used in broadcasting.
           | 
           | The DVB machines I've worked with were very sensitive to any
           | kind of jitter and clock skew.
        
         | kierank wrote:
         | Author here:
         | 
         | >I have had no problems with measuring NTP drift. As the clocks
         | change I would measure.
         | 
         | Did you read the article? NTP is not the same as the
         | video/audio clock which is what you need to care about. I have
         | to now take a drink even though it's 5am here in Singapore.
         | 
         | > Common Clock for Audio and Video
         | 
         | No idea what sequence numbers have to do with clocks here.
         | Maybe you mean a mapping of absolute time to relative time in
         | RTCP? If the RTCP SR value for absolute time is using NTP (or
         | any other wrong clock as it's not mandated to match audio/video
         | clock), then it's by definition impossible to know how to sync
         | audio and video after several wraparounds of each RTP
         | timestamp.
         | 
         | > WebRTC provides playoutDelay.
         | 
         | This is not the same as a defined, video frame or audio sample
         | accurate delay (ten milliseconds as a unit...) to allow for
         | variations in frame size to maximise quality. It also appears
         | to mix up network delays vs VBV delays. They are separate
         | delays and are handled at different layers of the stack.
         | 
         | > You can transport anything you want via RTP or DataChannels
         | 
         | None of this is standardised and therefore requires control of
         | both ends. Also high end applications need Uncompressed Audio
         | and for the above RTP timestamp reasons this can't be precisely
         | synced with video.
        
           | asabil wrote:
           | Each RTP packet has a 32bit timestamp, and a 32 bit SSRC.
            | Each "sender" in an RTP session must use the same SSRC;
            | this is how synchronisation between audio and video streams
            | from the same sender (lip-sync) is achieved.
           | 
           | The timestamps have a resolution defined by the clock rate
           | communicated externally through a signalling channel.
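The lip-sync computation that the SR mapping enables can be sketched like this: each stream's RTP timestamps are projected onto the sender's shared wall clock via that stream's last (NTP, RTP) correspondence, and samples with equal projected times are played together. The function name, clock rates, and SR values below are invented examples.

```python
def rtp_to_wallclock(rtp_ts, sr_ntp, sr_rtp, clock_rate_hz):
    """Project an RTP timestamp onto the sender's NTP timeline using the
    (NTP, RTP) pair carried in that stream's last Sender Report."""
    return sr_ntp + (rtp_ts - sr_rtp) / clock_rate_hz

# Video at 90 kHz and audio at 48 kHz, each with its own SR mapping:
video_t = rtp_to_wallclock(1_090_000, sr_ntp=100.0,
                           sr_rtp=1_000_000, clock_rate_hz=90_000)
audio_t = rtp_to_wallclock(148_000, sr_ntp=100.0,
                           sr_rtp=100_000, clock_rate_hz=48_000)
print(video_t, audio_t)  # both 101.0: these samples play together
```

This also illustrates the author's objection above: the scheme only yields correct sync if the NTP values in both streams' Sender Reports come from the same, well-behaved clock.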
        
         | kuschku wrote:
         | Regarding clock, ideally you'd want to be able to genlock them.
         | 
          | The goal is to ensure that all input sources send their video
          | frames at the exact same point in time, and that each audio
          | device also samples at the exact same points in time.
         | 
         | In the past that was even more important, as you'd want to make
         | sure the scanline of CRTs in the studio and of the camera were
         | perfectly synced.
        
       | numpad0 wrote:
       | MPEG-TS is incorporated in many digital TV standards. They will
       | continue to be around for a long time simply because of that,
       | regardless of technical points.
        
         | drmpeg wrote:
         | Fun fact. DOCSIS 1.0 through 3.0 cable Internet uses MPEG-2
         | Transport Streams to deliver the IP packets. It has to, because
         | the QAM specification (ANSI/SCTE 07) is built around 188 byte
         | TS packets.
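The framing idea drmpeg describes can be sketched as chopping a variable-length datagram across fixed 188-byte TS cells (4-byte header plus 184-byte payload). Real DOCSIS/MPE framing has pointer fields, stuffing rules, and section encapsulation that this deliberately omits; the function and PID choice are mine.

```python
TS_SIZE, HEADER, PAYLOAD = 188, 4, 184

def segment(datagram: bytes, pid: int):
    """Split a datagram across fixed-size 188-byte TS packets."""
    packets = []
    for off in range(0, len(datagram), PAYLOAD):
        chunk = datagram[off:off + PAYLOAD]
        header = bytes([0x47,                       # sync byte
                        (pid >> 8) & 0x1F,          # PID high bits
                        pid & 0xFF,                 # PID low byte
                        0x10 | (len(packets) & 0x0F)])  # continuity counter
        packets.append(header + chunk.ljust(PAYLOAD, b"\xff"))
    return packets

cells = segment(bytes(1500), pid=0x1FFE)  # a 1500-byte IP datagram
print(len(cells))  # 9 packets: ceil(1500 / 184)
```

The fixed cell size is exactly why the QAM layer can carry either video or data transparently: the modulator only ever sees 188-byte packets.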
        
           | lxgr wrote:
           | I've always wondered if that was done to allow mixed video
           | and DOCSIS channels, shared hardware on either end, or just
           | to ensure that TVs and STBs can quickly and safely skip
           | DOCSIS channels that they won't be able to decode anyway.
        
         | Taniwha wrote:
         | exactly - almost every cable and satellite system is using them
        
         | wmf wrote:
          | Right, but transport stream should _only_ be used in
          | broadcasting. It shouldn't be used on discs (ahem Blu-ray) or
          | other storage, and it shouldn't be used over the Internet.
          | Program stream is usually a better choice.
        
           | lxgr wrote:
           | What about bridging broadcast media to IP or vice versa?
           | 
           | One of the advantages of MPEG-TS is that it's dead simple to
           | map it to RTP or even plain UDP and back even with packet
           | loss and data errors.
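The "dead simple" mapping can be sketched in a couple of lines: the common convention packs seven 188-byte TS packets into one datagram (7 * 188 = 1316 bytes, which fits a 1500-byte Ethernet MTU with room for IP/UDP/RTP headers). The function name is mine; a receiver resynchronises by checking for the 0x47 sync byte every 188 bytes.

```python
TS_SIZE = 188
PACKETS_PER_DATAGRAM = 7            # 7 * 188 = 1316 bytes per datagram

def ts_to_datagrams(ts_stream: bytes):
    """Chunk a byte-aligned TS stream into UDP-sized payloads."""
    step = TS_SIZE * PACKETS_PER_DATAGRAM
    return [ts_stream[i:i + step] for i in range(0, len(ts_stream), step)]

stream = (bytes([0x47]) + bytes(187)) * 14   # 14 dummy TS packets
datagrams = ts_to_datagrams(stream)
print(len(datagrams), len(datagrams[0]))     # 2 datagrams of 1316 bytes
```

Because every packet is self-describing (sync byte, PID, continuity counter), a lost datagram costs exactly seven packets and the receiver carries on, which is what makes the bridging robust against loss.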
        
             | wmf wrote:
             | Extract the PS from the TS and send that over the network.
        
       | aeturnum wrote:
       | I am not an expert in this area, but I've worked around its
       | edges, and video has always struck me as one of tech's great HARD
       | problems. It's a really frustrating combination of: meant for
       | human consumption, difficult to characterize algorithmically,
       | realtime, having a distinct future temporal envelope, etc. The
        | problem is precisely that many people want to do many different
        | things with video - and depending on what you want to do with
        | it you may prefer an entirely different stack!
       | 
       | I almost want to compare it to making good vaccines in the
       | medical world - some of the most beneficial work to all parties,
       | but also some of the least commercially rewarding.
        
         | wheybags wrote:
         | All those patents don't help either (in medicine it's more
         | debatable)
        
       ___________________________________________________________________
       (page generated 2023-06-05 23:01 UTC)