LipTracker™ Frequently Asked Questions

What makes LipTracker™ different from all other types of lip sync analyzers?

How are the LipTracker™ audio offset measurements used to correct the lip sync error?

Does LipTracker™ affect the program source material?


How long does it take to measure the lip sync error?


Does LipTracker™ work in any language?


Is a special test signal required?


Does LipTracker™ work with any kind of programming?


How is LipTracker™ better than an operator watching the program for lip sync errors and making adjustments?


How does LipTracker™ maintain optimum performance even though face sizes may change during a program?


What happens with LipTracker™ when the face is moving around on the screen?


Does an operator need to interpret the LipTracker™ results?


Does LipTracker™ interface to video and audio delays from other manufacturers for automatic correction?


How can the SD version of LipTracker™ analyze HD program material?


Can LipTracker™ analyze the lip sync of an MPEG encoded stream?


Does LipTracker™ accept a Dolby encoded audio input?



What makes LipTracker™ different from all other types of lip sync analyzers?
 
LipTracker™ is the only product on the market that can analyze and measure A/V synchronization from program material at any point in the production or distribution chain. Products that use any form of signal marking technology have two significant limitations. First, they require known "good" A/V timing at every program source. Second, they require a watermark or other mark embedder at every source. Watermarking technology then attempts to preserve the A/V timing downstream using the embedded mark in the video. LipTracker™ has neither of these limitations because it measures A/V offsets directly from the program material.
How are the LipTracker™ audio offset measurements used to correct the lip sync error?
 
The same way it is done now - by manually adjusting an existing audio delay in the facility. The difficult task is not adjusting the audio delay but determining the magnitude of the error. LipTracker™ improves operator productivity by supplementing subjective, time-consuming analysis of lip sync with rapid, objective results measured in real time from the program material. LipTracker™’s rapid and accurate results allow the operator to quickly decide when and how to correct lip sync errors. In addition, all results are logged for future verification or analysis. The bottom line - LipTracker™ saves time, improves overall performance, increases operator efficiency and, most importantly, ensures consistent, accurate lip sync correction.
Does LipTracker™ affect the program source material?
 
No. LipTracker™ is completely non-invasive. No code or watermarks of any kind are added to the program material and no pre-processing of any kind is required. Any program from live broadcasts to those that were recorded years ago can be tested in service without any special encoding.
How long does it take to measure the lip sync error?
 
LipTracker™ looks for specific sounds and mouth shapes in the program material, so the time to make a measurement is content dependent. The first result is often generated as soon as 4 seconds after a face is detected, and it is then updated every 2 seconds.
Does LipTracker™ work in any language?
 
Yes. The unique method of comparing sounds and mouth shapes allows measurements to be made with all common languages.
Is a special test signal required?
 
No. LipTracker™ is designed for on-air use with live program material. Of course it can also be used for offline testing with the same type of material.
Does LipTracker™ work with any kind of programming?
 
LipTracker™ is most effective with material commonly used in news programming, sporting events, talk shows, infomercials and other areas where the elimination of lip sync errors is most critical. LipTracker™ is also a beneficial quality assurance tool for other types of programming.
How is LipTracker™ better than an operator watching the program for lip sync errors and making adjustments?
 
LipTracker™ significantly improves operator efficiency. LipTracker™ eliminates the subjective and time-consuming analysis required of a human operator. LipTracker™’s numeric and graphic displays allow the operator to quickly decide if and how any given lip sync error should be corrected. LipTracker™ also creates a log of lip sync errors, providing a “proof of performance” report in the event that questions arise as to where an error originated.

Tests have shown that most people are not consciously aware of lip sync errors when the audio offset is between 20 milliseconds early and 90 milliseconds late. However, these “small” lip sync errors cannot be ignored: research has shown that viewers can become annoyed to the point of changing channels due to subconscious irritation caused by them. Even for larger errors, people generally have difficulty judging how much error there is, especially under the pressure of an on-air environment. LipTracker™, on the other hand, generates scientifically accurate and objective results and has infinite patience to analyze the program material.
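The detectability window described above can be expressed as a simple classification check. This is a hypothetical helper, not part of LipTracker™; only the -20 ms / +90 ms thresholds come from the tests cited above (negative meaning the audio is early):

```python
def lip_sync_severity(offset_ms: float) -> str:
    """Classify an A/V offset in milliseconds.

    Convention: negative = audio early, positive = audio late.
    The -20 ms to +90 ms window is the range most viewers do not
    consciously notice (though it can still irritate subconsciously).
    """
    if -20 <= offset_ms <= 90:
        return "subliminal"
    return "noticeable"

print(lip_sync_severity(50))   # subliminal
print(lip_sync_severity(-45))  # noticeable
```

An objective measurement matters precisely because offsets in the "subliminal" band cannot be confirmed by eye.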
How does LipTracker™ maintain optimum performance even though face sizes may change during a program?
 
LipTracker™ will lock to faces that are approximately ¼ of the picture height (from the top of the forehead to the chin) or larger. As you might expect, when the face size gets smaller, lip sync errors become less obvious and therefore less of an issue. In the absence of a suitable face to lock to, LipTracker™ continuously searches the input video until it finds one.
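As a concrete illustration of the quarter-picture-height threshold, the minimum forehead-to-chin face height in video lines is simply the active picture height divided by four. This is a hypothetical calculation based only on the approximate ¼ figure above; the example picture heights are assumptions for illustration:

```python
def min_face_height_lines(picture_height_lines: int) -> int:
    """Smallest forehead-to-chin face height (in video lines) that
    meets the approximate 1/4-picture-height lock threshold."""
    return picture_height_lines // 4

print(min_face_height_lines(480))   # 120 (SD with 480 active lines)
print(min_face_height_lines(1080))  # 270 (HD, for comparison)
```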
 
What happens with LipTracker™ when the face is moving around on the screen?
 
LipTracker™ will stay locked onto a face through normal head motion and camera pans and zooms. If the face turns to a profile view, looks down at a script, and so on, LipTracker™ continuously monitors the video, immediately reacquiring the face and resuming analysis when the lips return to view.
Does an operator need to interpret the LipTracker™ results?
 
Yes - in some situations. Consider this example - a typical news segment where archive video of a politician giving a speech is broadcast with an off screen news anchor’s commentary. In a case like this, the mouth shapes in the video are not related to the sounds in the audio. A human operator will interpret the LipTracker™ results appropriately in situations like this.
Does LipTracker™ interface to video and audio delays from other manufacturers for automatic correction?
 
Unfortunately, there is no industry standard protocol for controlling such devices. Preliminary work on an interface standard has been started by a SMPTE working group. We are currently researching this issue to determine the best approach to use.
How can the SD version of LipTracker™ analyze HD program material?
 
Until the HD version is available, an external HD downconverter can be used. The downconverter will have a known video delay (check the user manual for your downconverter) and this delay value can be used as the LipTracker™ Measurement Offset parameter to ensure that LipTracker™ operates in the center of its range.
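The arithmetic behind the Measurement Offset parameter can be sketched as follows. This is a hypothetical illustration, not LipTracker™'s implementation, assuming the convention that a positive offset means the audio is late: a converter that delays its video path by some number of milliseconds makes the audio appear that much earlier than it really is, so the known delay is added back to recover the true program error.

```python
def true_av_error_ms(measured_ms: float, video_delay_ms: float) -> float:
    """Remove a converter's known video-path delay from a raw measurement.

    Convention: positive = audio late. A downconverter (or decoder)
    that delays video by video_delay_ms shifts the apparent audio
    timing early by the same amount, so that delay is added back.
    """
    return measured_ms + video_delay_ms

# Example: a downconverter with a 2-frame (about 67 ms) video delay.
# A raw reading of -67 ms means the program itself is in perfect sync.
print(true_av_error_ms(-67, 67))  # 0
```

Entering the converter's delay as the Measurement Offset performs this compensation inside the analyzer, keeping it centered in its measurement range.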
Can LipTracker™ analyze the lip sync of an MPEG encoded stream?
 
Not while the stream is in MPEG form. The stream must be decoded to SDI video (standard definition now – stay tuned for availability in HD) and AES-3id audio to be analyzed. If the decoder has a known audio/video offset or latency (check the user manual for your decoder), this value can be used as the LipTracker™ Measurement Offset parameter to ensure that LipTracker™ operates in the center of its range.
Does LipTracker™ accept a Dolby encoded audio input?
 
No. An external Dolby decoder must be used. If the decoder has a known audio processing delay or latency (check the user manual for your decoder) this value can be used as the LipTracker™ Measurement Offset parameter to ensure that LipTracker™ operates in the center of its range.