Monday, 29 April 2013

Process of digital video interface


So when light enters the lens it passes through a prism, which separates it into the three primary colours. This happens in 3CCD cameras, where CCD stands for charge-coupled device. The reason there are three CCDs is so that each one can process one colour's information separately. So here we have the light entering one of the CCDs (fig1), and this light is then converted into a voltage depending on its brightness.
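Below is a minimal sketch of that brightness-to-voltage idea for a single photosite. The linear response and the 0 to 1 V output range are purely illustrative assumptions, not figures from any real sensor.

```python
# Minimal sketch of one CCD photosite's light-to-voltage response.
# The linear response and the 0-1 V output range are illustrative
# assumptions, not values from a real sensor datasheet.

def photosite_voltage(light_level, full_scale_volts=1.0):
    """Map a normalised brightness (0.0-1.0) to an output voltage,
    clipping at saturation (the 'full well' of the photosite)."""
    return min(max(light_level, 0.0), 1.0) * full_scale_volts

# Brighter light on the photosite gives a higher voltage out.
print(photosite_voltage(0.25))  # 0.25 V
print(photosite_voltage(0.90))  # 0.9 V
print(photosite_voltage(1.50))  # 1.0 V - clipped at saturation
```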



The different voltage levels are shifted vertically down the sensor through the vertical shift registers and then pushed out through the horizontal shift register.
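As a rough picture of that readout order, here is a small sketch that shifts a made-up 3x4 array of voltages out row by row through a horizontal register; the values and array size are invented purely for illustration.

```python
# Minimal sketch of CCD readout: charge is shifted down the vertical
# registers one row at a time, then the horizontal register clocks
# that row out pixel by pixel. The 3x4 array of voltages is made up
# purely for illustration.

pixels = [
    [0.2, 0.5, 0.7, 0.1],
    [0.9, 0.3, 0.4, 0.6],
    [0.8, 0.2, 0.5, 0.3],
]

def read_out(sensor):
    """Yield samples in the order a readout like this would produce them."""
    for row in reversed(sensor):          # vertical shift: one row per cycle
        horizontal_register = list(row)   # row lands in the horizontal register
        while horizontal_register:        # horizontal shift: one pixel per clock
            yield horizontal_register.pop(0)

print(list(read_out(pixels)))
```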



Next we move on to what happens at the output end of the camera.

This information is then passed through a linear matrix, converting the sampled gamma-corrected RGB signal into a Y', B-Y', R-Y' signal so that less data is needed. This sampling method is known as 4:2:2, as discussed before. The analogue signal is band-limited to roughly 5.5 MHz : 2.75 MHz : 2.75 MHz for Y : B-Y : R-Y respectively. The signal then goes through an analogue-to-digital converter, where the channels are clocked at 13.5 MHz : 6.75 MHz : 6.75 MHz respectively. These are then multiplexed together at 27 MHz with 10-bit samples.
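A quick worked check of those figures: the 4:2:2 sample rates multiplex into a 27 MHz word rate, and at 10 bits per sample that gives the familiar 270 Mb/s SD data rate.

```python
# Quick check of the numbers above: the 4:2:2 sample rates multiplex
# into a 27 MHz word rate, and at 10 bits per sample that gives the
# 270 Mb/s SD data rate.

y_rate  = 13.5e6   # luma samples per second
cb_rate = 6.75e6   # B-Y samples per second
cr_rate = 6.75e6   # R-Y samples per second
bits_per_sample = 10

word_rate = y_rate + cb_rate + cr_rate    # 27,000,000 words/s
data_rate = word_rate * bits_per_sample   # 270,000,000 bit/s

print(f"{word_rate/1e6:.1f} MHz multiplexed, {data_rate/1e6:.0f} Mb/s")
```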

 
 
 
 
This process applies to any broadcast 3CCD camera, and is just here to help you understand how cameras work in a simple form. We will go into more detail on each process soon. Enjoy!

RGB Component

This method of transmission separates the red, green and blue information. Each colour signal travels down its own cable and carries that colour's information along with luminance. In most cases there will be a fourth cable carrying the sync information, which is known as RGBs. Other versions include RGBHV, which uses five cables carrying red, green, blue, horizontal sync and vertical sync. Again, just as with composite, the amplitude of all three signals is one volt peak to peak.
Another transmission method that separates the signal is YUV. This signal is also known as Y', R'-Y', B'-Y', where Y' carries the luminance (derived mostly from green) while the other two channels carry the blue and red information minus the luminance, so overall it uses less information than R'G'B'. With the PAL signal clocked at 270 Mb/s, splitting it equally as R'G'B' would give each colour signal 90 MHz.
The signal is created at the start, where the video camera captures the light and splits it into the three primary colours. Depending on the brightness of each pixel's colour, it is sampled as a voltage that can be read later on in the conversion.
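To make the matrix step above concrete, here is a small sketch forming Y' and the two colour difference signals from gamma-corrected R'G'B', using the standard Rec. 601 luma weights for SD.

```python
# Sketch of the linear matrix step: forming Y' and the two colour
# difference signals from gamma-corrected R'G'B'. The luma weights
# are the standard Rec. 601 values used for SD video.

def rgb_to_colour_difference(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma, dominated by green
    return y, b - y, r - y                   # Y', B'-Y', R'-Y'

# Pure white: both colour difference signals fall to (essentially)
# zero, which is why Y'/B-Y'/R-Y' needs less data than R'G'B'.
print(rgb_to_colour_difference(1.0, 1.0, 1.0))
```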



RGB component is used in various ways, such as:
  • Scart
  • Phono
  • DVI
  • VGA
These are just a few of the universal connectors widely used in the world.

Phono is easily recognised by the red, green and blue casings around the connectors, as above. These go into the red, green and blue video inputs/outputs on the machine.
Scart is another easy one, with the universal plug going male to male as above.
VGA is a 15-pin D-type connector but can carry HD signals if the transmitter and receiver support it.
DVI is different, as it uses fewer of its pins when sending/receiving RGB SD.

DVI-A only carries analogue video, so it will not work with DVI-D (dual/single link).

Thursday, 4 April 2013

Sampling Frequency

Sampling frequency, in the 4:2:2 sense, is best described as how the video is divided up between luminance and chrominance. SD PAL video runs at 270 Mb/s, so the total frequency this source runs at can be thought of as 270 MHz; this will help in understanding how the sampling frequency is worked out. An example is 4:2:2, with the first number representing the luminance/green (Y/G) channel and the last two representing the two colour channels: red and blue / R and B / Cr and Cb / R-Y and B-Y. For every four luminance packets there will be two each of blue and red information.


 
We will look at these as individual sampling packets of 8. So 270/8 = 33.75 MHz per packet. With four luminance packets and two each of chrominance, the signal information divides up like this: 135 MHz : 67.5 MHz : 67.5 MHz. This applies to all video signals and describes the ratio of luminance to colour information.
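Here is the same 4:2:2 split worked through as a quick calculation, dividing the 270 MHz total into eight packets and handing four to luma and two each to the colour difference channels.

```python
# The 4:2:2 split worked through in code: divide the 270 MHz total
# into 8 equal "packets" and hand 4 to luma and 2 each to the colour
# difference channels.

total_mhz = 270
packet = total_mhz / 8    # 33.75 MHz per packet

luma = 4 * packet         # 135.0 MHz
blue = 2 * packet         #  67.5 MHz
red  = 2 * packet         #  67.5 MHz

print(f"Y: {luma} MHz, B-Y: {blue} MHz, R-Y: {red} MHz")
```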

Wednesday, 3 April 2013

Moving Picture Experts Group (MPEG) - MPEG-1



Moving Picture Experts Group, known as MPEG for short, is a group of experts who design standards for video/audio transmission, formed by the IEC and ISO. MPEG has many standards, but we are just going to look at the few that are used, or have been used, within the industry.


MPEG-1


Part 1 - Systems


This part carries the system information. The system stream carries the video, audio, sync and other non-video/audio data signals, and keeps all of this information in time with each other once it has been multiplexed. The multiplexed, encoded signal is transmitted at around 1.5 Mb/s. MPEG introduced this kind of multiplexed digital signal; it is based on on-time, error-free delivery of a stream of bytes. To maintain a constant delay, a clock is embedded in the byte stream that the receiver can use to control its own clock reference.
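As a rough illustration of how a clock can ride inside a constant-rate byte stream, here is a simplified sketch that stamps a 90 kHz system clock reference (SCR) value onto packs at given byte offsets. The 1.5 Mb/s mux rate matches the figure above, but the byte offsets are invented and real MPEG-1 pack headers carry far more than this.

```python
# Simplified sketch of embedding a clock in a constant-rate byte
# stream: at a fixed mux rate, the byte position of a pack tells you
# when it should arrive, so a 90 kHz system clock reference (SCR)
# value can be stamped into it. The byte offsets below are made up
# for illustration only.

MUX_RATE_BPS = 1_500_000   # bits per second, as quoted above
SCR_CLOCK_HZ = 90_000      # MPEG-1 system clock reference rate

def scr_for_byte_offset(byte_offset):
    """SCR value for a pack that starts at this byte offset."""
    seconds_into_stream = byte_offset * 8 / MUX_RATE_BPS
    return int(seconds_into_stream * SCR_CLOCK_HZ)

for offset in (0, 2048, 4096):
    print(offset, scr_for_byte_offset(offset))
```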



Part 2 - Video



This part contains the video data. It has a few guidelines, being:

- The video must be replayable forwards and backwards

- Fast forward/reverse modes have to be supported.

- Editing must be possible, such as replacing frames etc.



Part 3 - Audio



This is the audio part; its Layer III is better known as MP3, one of the most popular music formats for storage on PC systems and transmission over the internet. MPEG-1 audio operates at bit rates in the range of 32 to 448 kb/s and supports sampling frequencies of 32, 44.1 and 48 kHz. Usually for Layer II the bit rate ranges between 128 and 256 kb/s, with 384 kb/s for professional applications. This part can carry one or two channels of audio. Layer I is designed for low-complexity encoding and decoding, and Layer II for slightly higher complexity. Layer I can compress high-quality audio at a bit rate of 384 kb/s while still recovering high quality through the decoding process.

Layer II has low complexity decoding combined with high robustness against cascaded encoding/decoding and transmission errors. This is ideal for digital video/audio broadcast applications (DVB and DAB).

So if the sampling frequency is 32 kHz, the input signal is transformed into 32 subbands. Layer I takes 12 samples per subband, giving 384 subband samples per frame, while Layer II takes 36 samples per subband, giving 1152 subband samples per frame.
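Those subband figures give fixed frame sizes, which the short calculation below works through for a 32 kHz sampling frequency.

```python
# Frame sizes from the subband figures above: 32 subbands with 12
# samples each for Layer I and 36 samples each for Layer II, and the
# time each frame covers at a 32 kHz sampling frequency.

SUBBANDS = 32
sample_rate = 32_000   # Hz

layer_i_samples  = SUBBANDS * 12   # 384 samples per frame
layer_ii_samples = SUBBANDS * 36   # 1152 samples per frame

print(f"Layer I : {layer_i_samples} samples = "
      f"{1000 * layer_i_samples / sample_rate:.0f} ms per frame")
print(f"Layer II: {layer_ii_samples} samples = "
      f"{1000 * layer_ii_samples / sample_rate:.0f} ms per frame")
```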


More to come; next will be MPEG-2.

Composite Video Blank and Sync (CVBS)

Okay, let's start with probably the most important part of broadcast transmission, which is the signal. There are a number of video signals, but we are going to look at the main ones used frequently in this country. Each signal may go by different names, but all have been set to the same standards, and these names need to be known to better help in the industry when talking to other engineers. When I say standards, I mean the parameters of a signal that have been agreed and produced by a group of engineers, which then get built into the systems to help distribution from one machine to another, even when they are made by different manufacturers. While it can be argued that there is a lot out there, we tend to use the signals provided by top manufacturers such as Sony, Grass Valley, EVS and so on.

Composite Video Blank and Sync (CVBS)

The composite signal is a single signal which carries all of the active video information, red, green and blue, along with the luminance, blanking and sync. When viewing this signal on a waveform monitor we can see that the ideal amplitude is one volt peak to peak.

Fig. 1

Fig. 1 shows the amplitude being 1.2 volts peak to peak. Even though the amplitude is 200 mV above the ideal value, this is still safe enough to push through a monitor. The waveform monitor will warn if the amplitude is too high by highlighting it in red. An amplifier can be used here, provided it has the ability to adjust the gain manually to bring the signal to the correct level.
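The warning behaviour described above can be pictured as a simple check against the 1 V nominal; the 200 mV tolerance used here is only an illustrative figure, not a value taken from any standard.

```python
# A waveform-monitor-style check like the one described above: warn
# when the measured peak-to-peak amplitude drifts away from the 1 V
# nominal. The 200 mV tolerance is an illustrative figure only.

NOMINAL_VPP = 1.0
TOLERANCE_V = 0.2

def check_amplitude(measured_vpp):
    if abs(measured_vpp - NOMINAL_VPP) <= TOLERANCE_V:
        return f"{measured_vpp:.2f} V p-p: OK"
    return f"{measured_vpp:.2f} V p-p: out of range, adjust gain"

print(check_amplitude(1.2))   # the Fig. 1 case - just inside tolerance
print(check_amplitude(1.4))   # too hot - bring the gain down
```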
Composite has the ability to distribute the PAL and NTSC signal standards, just to name a couple. PAL is what we use in England, at a rate of 25 frames per second with 625 horizontal lines in total, of which 576 are active. Fig. 1 is colour bars set to 100% and is used to accurately test equipment. It just makes it easier for us when we go E-E to confirm that the signal coming out of the equipment matches the signal going in, or when correctly adjusting monitor brightness, contrast and colour. Most equipment now has a colour bar generator built in to correctly set up external equipment.
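A quick check on that PAL line structure: 625 lines at 25 frames per second gives the 64 µs line period mentioned again in the video sync section below.

```python
# Quick check on the PAL timing mentioned here: 625 lines at 25
# frames per second gives the 64 us line period quoted in the video
# sync section below.

lines_per_frame = 625
frames_per_second = 25

line_rate = lines_per_frame * frames_per_second   # 15,625 lines/s
line_period_us = 1e6 / line_rate                  # 64.0 us

print(f"{line_rate} lines per second, {line_period_us:.0f} us per line")
```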

Video Sync

Video synchronisation is an analogue signal which is used to sync a number of video signals together, and this method is used in all forms of transmission. Black and burst is the SD sync; this signal contains the sync and colour burst information that sits at the start of every active video line. Different video signals will have a different start-of-active-video time, so when using vision mixer transitions each picture would be placed at a different point on the screen, as in fig 1. The other sync is tri-level sync, the name "tri" meaning that there are three levels of sync, but without the colour burst found in black and burst. Tri-level is the only HD sync signal which is analogue and can be seen on an analogue waveform monitor, as in fig 2. A useful piece of information: for SD, an active video line takes 64 µs.

Fig. 1
Fig. 2
 
 
When using multiple cameras, the problem you will find is that each camera will be timed from SAV (start of active video) to EAV (end of active video); this is the information that will be displayed on the screen. When on air, the cameras go through a video router/switcher in order to cut between two or more cameras and see different locations. So when a tri-level sync or black and burst reference is fed to each source, this corrects the timing for every camera, and each transition will be smooth and error-free. The timing of each source can also be adjusted so that it matches the TLS/BB input to the microsecond. This is needed because the timing signal might endure some lag in processing and so can still be slightly out of time. This is not noticeable in SDI, but we need everything to be perfect for broadcast, and it will help lessen any future faults and fault-finding.
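To picture that per-source adjustment, here is a small sketch that takes some invented timing offsets measured against the reference sync and works out the correction to dial into each camera; the source names and numbers are purely illustrative.

```python
# Sketch of the timing adjustment described above: each camera is
# measured against the station reference (black and burst / tri-level
# sync) and a per-source offset is applied so every feed lines up at
# the switcher. The sources and measured offsets are invented purely
# for illustration.

reference_offsets_us = {
    "camera_1": +3.2,   # arrives 3.2 us late relative to reference
    "camera_2": -1.5,   # arrives 1.5 us early
    "camera_3": +0.4,
}

def correction(offset_us):
    """Timing adjustment to dial into the source so it matches the
    reference to within a microsecond."""
    return -offset_us

for source, offset in reference_offsets_us.items():
    print(f"{source}: measured {offset:+.1f} us, apply {correction(offset):+.1f} us")
```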