Television Production Handbook 
1980-2009 Roger Inman & Greg Smith. All rights reserved.

Editing and Program Continuity

Editing a television program is much more than just putting shots together on a single piece of tape so they can be viewed as a whole. It is in the editing phase that a production is finally made to conform to the producer's vision of it. Editing is both the last and one of the most powerful opportunities the producer has to influence whether a program will successfully communicate the information it was meant to convey, and whether it will affect the emotions and moods of the audience as the producer wished. The editing process is also a difficult and demanding craft, and presents many opportunities for the unprepared to go astray. Indeed, the failure to communicate effectively in many beginners' television programs stems much more often from the editor's failure to develop a coherent conception of the flow of the program than from any problem with the other technical aspects of the art of television, such as camera work or sound recording.

Conversely, it is possible for a skilled editor to take even seriously flawed original footage and produce a program which still manages to say what the writer and director had in mind - but we certainly hope you will never be forced to work that way.

In commercial television and film production, the "shooting ratio," or amount of film or tape shot divided by the length of the finished program, averages about 10:1. In other words, for every minute of edited program you see, roughly ten minutes of tape were recorded by the camera crew. In some specialized and difficult programs the ratio routinely climbs above 100:1. On the other hand, in live television production in the relatively comfortable atmosphere of a studio, it is often possible to use every second of the original recording, with a shooting ratio of 1:1.
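The ratio arithmetic is trivial, but it can be handy when logging tape. Here is a minimal sketch in Python (the function name and figures are invented for illustration, not taken from any real production):

```python
def shooting_ratio(minutes_shot, program_minutes):
    """Return the shooting ratio: minutes of tape shot per finished minute."""
    return minutes_shot / program_minutes

# 300 minutes of raw tape cut down to a 30-minute program:
ratio = shooting_ratio(300, 30)
print(f"{ratio:.0f}:1")  # 10:1
```

A studio program that uses every second of the recording would come out at 1:1 by the same formula.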

Both of these represent extremes which you will not ordinarily encounter. Whatever your situation, you can reduce editing time by keeping the overall shooting ratio as low as practical, by grouping scenes shot in the same place or time together on the original tape, and, when possible, by "slating" shots - holding up a card in front of the camera showing the place, time, date, crew members present, and any other information you think may be helpful in identifying material later.

You may have inferred from the above that the process of editing in a sense begins long before any tape is recorded. In those productions which are scripted in detail before shooting, the script may in fact dictate almost exactly how the finished program will be edited. So a knowledge of some of the rules of visual and aural continuity is as essential for the script writer, director, and camera operator as for the editor. Again, the more planning and preparation before entering the editing room, the less time and money spent there. Even if your program doesn't lend itself well to detailed scripting in advance, it is usually possible to write up an outline, or perhaps even an exact "editing script," based on several viewings of the original footage, which can guide the editor in putting things together properly in the minimum amount of time. This is not to imply that many decisions won't have to be made during the process of editing - especially the specifics of exactly on which frame to make cuts - but good planning can make the actual work with the editing machines much quicker and more pleasant.

There are basically three purposes in editing a program. They are:

1. To eliminate material which is unusable because it is technically flawed;

2. To remove footage which is irrelevant to the information to be presented in the program;

3. To assemble what is left in a way which communicates the important information in the program, so that, at worst, the editing isn't distracting to viewers and, at best, the program is both interesting and entertaining.

There is almost never a justifiable reason for making a program longer than the absolute minimum necessary to cover the topic adequately. If in doubt about the relevance of a particular shot or sequence, it is almost always better to leave it out. Keep cutting away at what you have available (while watching the shooting ratio soar) until every frame of the remaining tape has something valuable to say. Your audience will be grateful for it, and may reward you by staying awake through the whole program.

What follows is a selection of various rules and concepts of editing which have been developed through experience over the last century of editing motion pictures and, more recently, television programs. The rules are, to some degree, flexible, but they should be violated blatantly only with great trepidation and after carefully considering all of the alternate ways of editing the sequence. However, if you have something important to say, and only unconventional editing will communicate it effectively, then the content should take precedence over any editing rules. Remember, your final consideration is your audience and their reactions.

1. WS - MS - CU sequences; using close-ups

Beginning television news camera people are always taught to take three shots of each subject, using different distances, angles, or lens settings to yield a long shot, medium shot, and close-up of the subject. Then these three pieces of tape are edited together in that order to give the viewer an impression of moving in on the action from the outside, finally becoming involved (through the close-ups) in the action itself. Watch television news coverage of a fire or similar event the next time you can. Typically, you'll see something like:

LONG SHOT, the burning building;

MEDIUM SHOT, firefighters pulling hoses off the truck and carrying them toward the fire;

CLOSE-UP, firefighter's face or hands as he wrestles with the hose.

Additional shots would tend to be more close-ups: equipment on the truck, hands holding the hose, the faces of spectators, etc. At some time, determined by the pacing of the editing and, in this case, the severity of the fire, there might be a return to the long shot to reestablish the overall layout of the scene in the viewer's mind.

Notice the emphasis on close-ups. Television is a relatively low-definition medium, and subjects seen in medium shots or wider just don't come across powerfully on a television screen. It is details, sometimes shot at very close distances, which are most effective in adding visual interest to a story. Use close shots - details of objects, a person's surroundings, or especially of the human face - whenever it is possible and meaningful to do so. They do more than anything else to add excitement and interest to your program.

As the amount of information on the screen decreases, so does the viewer's ability to watch it for a long time. This implies that close-ups, which contain relatively little visual information but spread it over the entire area of the screen, shouldn't be held on the screen for a very long time. When presented with a single detail of something, an audience can look at it for no more than three or four seconds before it becomes bored and turns away. Exceptions to this rule occur when there is something going on to maintain interest. For example, moving objects can be held longer than totally inanimate ones. A narrator heard in a voice over may point out interesting details in what is shown so that the audience is continually discovering new aspects of the picture. In these cases, even extreme close-ups can remain on the screen for a relatively long time.

Close-ups of faces, on the other hand, can be held for a long time, as we seem to find endless fascination in that particular subject. Close-ups are very powerful and revealing for interviews and tend to give at least the illusion of great insight into the speaker.

By contrast, long and medium shots are used less in television, although they certainly have a very important place in most programs. The audience can become disoriented if they are not occasionally reminded of where they are, of the overall arrangement of objects and people in the location, and of the relationships between them. This is the purpose of the long shot. Medium shots reveal more about a single subject, without the emotional commitment of a close-up, and they are also most often used for showing the progress of action, as in our fire example above. Of course, there are an infinite number of possible variations. It is the subject matter and overall mood and structure of your program which determine in the end how sequences of differently angled views will fit together.

2. Jump cuts and the thirty degree rule

One of the most distracting mistakes it is possible to make in editing is called a "jump cut." If you've watched commercial television all your life, you may in fact never have seen a jump cut. They are so visually distracting, and such pains are taken to avoid them, that this particular error almost never makes it to your home screen.

A jump cut happens when you attempt to edit together two shots which are too similar in visual content. The most obvious example might occur if you remove a single sentence from an interview shot while the camera (which recorded the interview originally as a single shot) remained static. What would happen is that the background would remain still, while the subject's head might "jump" a bit to one side, or lips which were closed might appear instantly to open.

The result is a very jarring interruption in the otherwise smooth flow of action. There are several solutions to this problem. In the case of the interview, an appropriate response would be a cutaway shot - about which more will be said later. In other situations application of the "rule of thirty degrees" can help.

[Diagram: The Thirty-degree Rule]

The thirty degree rule states that if you make sure that the angle of view of the camera to the subject changes by thirty degrees or more between two shots cut together, the background will move enough that the shots will cut together well without any apparent visual discontinuity. The diagram above illustrates the rule. Failure to move the camera at least thirty degrees between shots almost invariably leads to a jump cut. There are exceptions, of course. Cutting from a medium shot to a close-up, for example, can be done even if the camera is not moved an inch. The point is to make each shot as visually different from the one preceding it as possible, so that the viewer's point of view of the scene appears to change in a natural manner. Notice that if you invariably follow the long shot - medium shot - close-up sequence suggested earlier, the problem of jump cuts caused by violations of the thirty degree rule never develops. And cutting from one subject to an entirely different one, or from one location to another, rarely presents any problems.
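The rule is geometric, so it can be checked with a little trigonometry. This sketch (the coordinates and function are invented for illustration, not part of any editing system) computes the angle swept at the subject between two floor-plan camera positions:

```python
import math

def camera_angle_change(subject, cam_a, cam_b):
    """Angle in degrees, as seen from the subject, between two camera
    setups.  All points are (x, y) floor-plan coordinates."""
    def bearing(cam):
        return math.atan2(cam[1] - subject[1], cam[0] - subject[0])
    diff = abs(bearing(cam_a) - bearing(cam_b))
    diff = min(diff, 2 * math.pi - diff)   # take the smaller arc
    return math.degrees(diff)

subject = (0.0, 0.0)
shot_1 = (0.0, 5.0)   # camera straight in front of the subject
shot_2 = (3.0, 4.0)   # camera moved around for the second shot
change = camera_angle_change(subject, shot_1, shot_2)
print(f"{change:.0f} degrees -> {'OK' if change >= 30 else 'risk of jump cut'}")
```

For these two positions the camera has swept about 37 degrees, so the cut should be safe.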

There are other types of visual discontinuities you must guard against. Most of these are essentially the responsibility of the camera crew at the time of shooting, and the editor's only involvement is to try to cover them up when they occur.

Errors of this sort usually creep in when shooting is spread out over several hours, days, or months. A typical problem might be shooting part of an interview one day, then coming back a few days later to get additional material and finding the subject dressed in different clothing. (Invariably the clothes the subject wore the first time will be in the laundry and totally unavailable.) You can't cut together parts of the two interviews without making it look as though the subject instantly changed clothes. Errors like these can only be avoided if you are aware of them and plan around their occurrence. It is also among the editor's responsibilities to point out continuity problems when they occur, so that a way can be found to avoid them or minimize their effects. (For example, in this case, it might be possible to group the footage in such a way that the clothing appears to change during a long interlude while other subjects are being shown. When the action returns to the interview the change won't be obvious.)

3. Direction of motion and the 180 degree rule

By this time you may think that editors and directors have to be experts in geometry. Not so! There is really only one other "angle rule," which is designed to keep people facing and moving in the same direction on the screen. One of the more distressing visual discontinuities is reversal in direction of motion, or in screen position of people. An example:

Someone walks out of one room and into another. In the first shot, you set up the camera in the middle of the first room and the subject walks from the left side of the screen and exits through a door on the right. Now on entering the second room, into which the subject is to walk, you find a large expanse of windows on one wall, with the door on the left as you face them. You don't want to shoot into those bright windows, as the camera will give you a poor picture, so you set up the camera on the window side of the room looking toward the middle. Now the subject walks in, but this time WILL BE ENTERING FROM THE RIGHT SIDE OF THE SCREEN. When this is edited together with the previous shot, it will look like the subject changed direction in mid-stride.


The solution is to keep the camera always on the same side of the moving subject. If something is traveling from left to right, it should continue to go in the same direction in every shot you think might be edited into the same sequence. Sometimes, as in our window example, this takes considerable pre-planning and can be something of a headache if there are many locations involved. But it is necessary if the audience is to be able to follow the action without confusion. Again, if you just can't think of any other way, a cutaway may be used between two shots where the action reverses direction. But this technique is all too obvious to the viewer much of the time.

A somewhat related situation arises when you are dealing with multiple subjects, perhaps in a discussion situation. It is necessary to arrange your shots, and their editing, so that subjects don't appear to be moving from one side of the screen to the other, or looking in different directions at different times.

Study the diagram below, which represents a simple situation with two people seated at a table. If the camera always stays in the area indicated by the number "1" no matter what two shots you cut together, subject "A" will be looking toward the right side of the screen at subject "B," who faces left. This will result in proper visual continuity.

[Diagram: The 180 Degree Rule]

If, however, the camera crosses over to the other side of the table, the apparent positions of the two subjects will be reversed. If you cut two shots of "B" together in succession, one from each side of the table, "B" will suddenly appear to be looking back at himself from the opposite direction.

This is called the 180 degree rule. When dealing with two or more subjects, visualize a straight line drawn through both of them. As long as the camera always stays on one side or the other of this line, apparent visual continuity can be maintained. The name derives from the fact that the camera can move through an arc of 180 degrees relative to the center point between the subjects.
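The "which side of the line" test is a standard bit of plane geometry: the sign of a two-dimensional cross product. A small sketch, with hypothetical floor coordinates:

```python
def same_side_of_line(a, b, cam_1, cam_2):
    """True if both camera positions lie on the same side of the line
    drawn through subjects a and b (all points are (x, y) coordinates)."""
    def side(p):
        # sign of the 2-D cross product (b - a) x (p - a)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return side(cam_1) * side(cam_2) > 0

a, b = (0.0, 0.0), (4.0, 0.0)        # two people seated at a table
print(same_side_of_line(a, b, (1.0, 3.0), (3.0, 2.0)))   # True  - cuts are safe
print(same_side_of_line(a, b, (1.0, 3.0), (3.0, -2.0)))  # False - crosses the line
```

Any pair of setups that returns False risks reversing the subjects' apparent screen positions when cut together.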

Now remember the thirty degree rule. We have set up a situation in which, in order to maintain effective visual continuity, the camera has to move at least thirty degrees between shots but has an overall field of only 180 degrees to work in. It can be frustrating, to say the least, to the editor who has to work with material shot without regard for these two rules. Yet the editor, too, has the responsibility of putting shots together in such a sequence that these rules are observed, while still trying to make sense out of the overall meaning of the program. Good luck!

4. Cutaways and inserts

Often the availability of proper cutaway or insert shots is the only thing which saves the editor from, at best, profound frustration, or possibly even insanity.

You see cutaways frequently in television news interviews. These are the shots of the interviewer looking at his notes, or nodding at what the subject is saying. Usually the cutaway shot has been used to cover up an edit in a continuous shot of the interview subject when removing a few words or sentences would otherwise have produced an unacceptable jump cut. As such, the cutaway is a valuable device, but it should be used with great discretion, as it looks rather contrived. (Often these shots are done after the interview is completed, and the interviewer isn't really reacting to anything - which often shows.) Over-the-shoulder shots are common too - and also often don't work because the sound is noticeably out of sync with the subject's body or facial movements.

It is also possible to cut away to other subjects, like the crowd reacting to a speaker's words, or a close-up of the subject's hands. These sometimes work well and can cover many otherwise embarrassing gaps in visual continuity.

Insert shots have another function in that they actually contribute to the meaning of the program. Inserts are shots, or sequences, which usually show something that a speaker is talking about. While an interview subject describes a process, an insert sequence can be designed to show that process actually taking place. While these segments can be used to cover continuity difficulties they also tend to make a program more interesting and meaningful.

Achieving the proper balance between "people" footage (interviews and speakers) and "process" material is difficult, and of course depends on the nature of the specific program you are making. In general, though, it is better to show something than to talk about it; television is a visual medium and only by making use of its ability to show things as they actually happen can it truly be distinguished from radio or audio tape. Of course, there are programs where the emphasis is on the people involved as much as the things they are doing, and in these cases there is nothing more beautiful or interesting than the human face. So keep your purpose in mind when deciding on an overall plan for the editing of your programs.

Insert and cutaway footage is so important in editing that it is essential to be aware of the need to produce this type of material at the time of shooting. Many discontinuities are not the result of ignorance of the rules on the part of the editor, but happen simply because the required quantity or quality of insert material was never shot. Much of a good camera person's time is spent looking for and recording reaction shots and various close-ups and cutaways which can be used by the editor to cover any difficulties later. It is definitely something to keep in mind when you start to shoot tape.

5. Shot timing and pacing

The need to keep certain shots fairly short was discussed earlier. In general, close-ups do not hold interest as long as medium shots and it is uncommon to see any shot that lasts longer than about ten seconds, but certain types of action can be held longer if it seems appropriate. It is a good idea to vary the length of shots, particularly if many of your shots are fairly short, unless the building of a definite and predictable rhythm is what you have in mind.

One final note. Changes in shots that are too frequent and done for no apparent reason can be worse than long static shots. Editing should never be allowed to interfere with or distract from program content.

6. Cutting sound

In professional film editing, it is not considered much of a problem at all to restructure sentences or even words by precise editing to change the meaning of what someone says. Digital audio files and the audio portion of digital video files can be processed using computers to change not only the sequence of sounds, but volume, pitch, and other characteristics. Even so, most audio editing for video is restricted to making cuts between phrases or sentences, trying to fit together the sometimes random-sounding ramblings of your subjects into a smooth whole.

It is beyond the scope of this chapter to try to dictate how your subjects' thoughts should be fitted together, so we will concentrate on a few rules and suggestions as to how the continuity of sound fits into the overall production of a program.

Most programs of the informational or educational genre have their continuity dictated almost entirely by the content of the sound track. In fact, the spoken word probably conveys most of the actual information in most of the television programs you have seen.

This is a difficult problem, because errors in visual continuity (which is what most of this chapter has been about) are usually much more obvious and distracting to the viewer than a more abstract lack of logic or coherence in what the interview subjects or narrator say. So in many cases the content will have to be adapted to the needs of maintaining a smooth VISUAL flow. Cutting within interview footage, which is often necessary from a content perspective, almost always generates a visually offensive jump cut which requires a cutaway or something similar to reduce the distraction.

The use of sound other than voices should be considered. The natural sounds of many settings - chirping crickets or the cacophony of a factory - can be used to make some kinds of points more effectively than any narration. The use of music in setting moods is fairly obvious; indeed it is possible, and often very powerful, to edit the visual portion of a program to fit, in both rhythm and content, a prerecorded piece of music.

One technical detail about editing sound which seems relevant here involves the timing of different spoken segments to be edited together. In average conversation, most people pause about half a second between sentences. If you are trying to edit dialogue together so that it still sounds natural and flowing, you should try to maintain the time between utterances at about this figure. Most people do not pause noticeably between individual words, although a gap of a tenth of a second or so will go unnoticed if it doesn't occur too often. Naturally, these times have to be adjusted somewhat to fit the specific speech patterns of the individuals involved, so they are only a guide.
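As a rough sketch of the half-second guideline, here is how speech clips might be spaced on an edited track (the function and timings are invented for illustration):

```python
SENTENCE_PAUSE = 0.5   # seconds - the natural gap between spoken sentences

def lay_out_clips(durations, pause=SENTENCE_PAUSE):
    """Return the start time of each speech clip on the edited track,
    leaving a uniform pause between clips (all times in seconds)."""
    starts, t = [], 0.0
    for d in durations:
        starts.append(round(t, 3))
        t += d + pause
    return starts

# Three sentences lasting 2.0 s, 3.5 s, and 1.2 s:
print(lay_out_clips([2.0, 3.5, 1.2]))   # [0.0, 2.5, 6.5]
```

In practice the pause would be nudged clip by clip to match each speaker's own rhythm, as the text suggests; a fixed half second is only the starting point.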

A second consideration in sound editing might be thought of as an audio jump cut. Every recording location has a characteristic sound, or "presence," which may even change slightly during different times of the day. People, too, vary the quality of their voices according to many conditions from stuffy noses to fatigue or emotion. Very few narrators can duplicate the sound of their own voices from one day to the next. Even though two sequences might be recorded using the same equipment in the same location, the quality of the audio may well be so different that a noticeable and objectionable change in sound occurs at an edit point. In editing narrative sequences, the speaking pace must also be fairly constant if the edit is to be "believable."

This barely begins to scratch the surface of the field of editing, yet the rules and ideas presented here are basic ones you will have to keep in mind while shooting and editing videotape. Watching television critically, with an eye toward the contribution of the editor to the finished program, then editing your own work, will teach you more about the craft than any stack of books could. Don't be afraid of all those buttons!


Video editing is the selective copying of material from one videotape (or computer file) to another. The process is entirely electronic. Nothing is cut, glued, or pasted. The original is not altered in any way by the editing process. Successful and efficient editing requires some specialized equipment, some knowledge of how the equipment works, and a great deal of planning and preparation both in shooting original footage and in editing itself.


The necessary editing equipment includes two videotape recorders, two television monitors, and an edit controller. The original tape is played back on the source recorder, which is sometimes called the master recorder. This recorder must be designed to be run by remote control. The audio and video outputs of the source recorder are connected to the inputs on the editing recorder, sometimes called the slave recorder. The editing recorder, in addition to being operated by remote control, also needs some features not found on most videotape recorders. First, it must operate in sync with the playback recorder. That is, its internal timing circuits have to lock to the sync portion of the incoming composite video signal. Second, to make clean edits between old and new video, it must be able to go from the playback mode to the record mode and back only in the vertical interval, or the brief time between pictures. Finally, to accomplish this, it must have special erase heads, called "flying" erase heads, actually mounted on the video head assembly. Most videotape recorders have erase heads that are fixed and erase the entire width of the tape. Because the video signal is laid down on tape in long diagonal passes across the tape, the conventional erase head would erase portions of many frames of video. The erase heads mounted on the video drum can erase video one field at a time, allowing very clean transitions between old and new video.

The audio and video signals from each recorder are also connected to television monitors. This allows the operator to see and hear what is on either tape at any time. Finally, both recorders are connected to a compatible edit controller. The controller includes the basic transport controls for both recorders, such as fast forward, rewind, play, pause, and stop, plus special editing functions.

Editing modes:

In the ASSEMBLE mode it is assumed that there is nothing recorded on the edited videotape after the selected edit point. Each new sequence is edited onto the end of the previous sequence until the tape is completed. No picture or sound which might already have been on the tape is used.

In the INSERT mode, it is assumed that material already on the tape is to be retained. New material is inserted into old. Not all of the signals during the edit need to be replaced. The operator sets the editing machine to change the picture or either of the sound channels or any combination of the three. At the end of the edit, the recorder will return from the record mode to the playback mode.

The Control Track:

Almost all videotape recorders record and play back a special sixty hertz pulse called the control track. This track is used in playback to make sure the video heads are positioned correctly to read video information recorded on the tape. Any break in the control track, or sudden shift in phase, or loss of signal level will cause a videotape recorder to vary its speed until it returns to lock. This in turn usually causes the picture to break up or disappear entirely. The essential difference between the assemble and insert edit modes is that in the assemble mode new control track is recorded from the edit point on, while in the insert mode prerecorded control track is used and no new control track is generated. Therefore, the picture will always break up at the end of an assemble edit and, conversely, there must always be good continuous control track already on the tape throughout an insert edit or the picture will break up on later playback wherever the control track was flawed, even though no trouble was observed during the actual insert edit.

Many editors commonly in use are called control track editors because they use the control track as a reference for all of the editing functions. It is critical to know and understand this. Without a good and continuous control track from at least six seconds prior to an edit point to at least two or three seconds after an edit point it may be impossible to make an edit at all. Actual requirements vary from machine to machine, so it is advisable to make sure there is always at least ten seconds of control track in front of and behind every shot recorded.
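The pre-roll and post-roll requirement can be expressed as a simple check. This sketch assumes the figures suggested above; as the text notes, actual requirements vary from machine to machine:

```python
PRE_ROLL = 6.0    # seconds of good control track needed before the edit point
POST_ROLL = 3.0   # seconds needed after it (figures vary by machine)

def edit_point_ok(edit_time, track_start, track_end,
                  pre=PRE_ROLL, post=POST_ROLL):
    """True if a continuous stretch of control track, running from
    track_start to track_end (seconds), covers the pre-roll and
    post-roll around a planned edit point."""
    return track_start <= edit_time - pre and edit_time + post <= track_end

print(edit_point_ok(20.0, 10.0, 30.0))  # True
print(edit_point_ok(12.0, 10.0, 30.0))  # False - only 2 s of pre-roll
```

This is why the text recommends recording at least ten seconds of control track in front of and behind every shot: it guarantees the check passes on any control track editor.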

SMPTE Editing:

The Society of Motion Picture and Television Engineers devised a special audio signal that can be recorded on tape and used to identify the location of each frame of video precisely. This SMPTE code is used in many edit suites for three reasons. First, edits using SMPTE code are frame-accurate and repeatable. Second, the code can be used to trigger events in other equipment, such as special effects generators, computers, and audio recorders. Third, preview copies of raw tapes can be made with the frame numbers showing on the screen, so you can make editing decisions "off-line."
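The structure of the code is easy to see in a frame-number conversion. This sketch assumes non-drop-frame counting at an even 30 frames per second; real NTSC work at 29.97 Hz uses a drop-frame numbering scheme not shown here:

```python
def frames_to_timecode(frame, fps=30):
    """Convert a frame count to non-drop-frame SMPTE HH:MM:SS:FF."""
    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc, fps=30):
    """Convert HH:MM:SS:FF back to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

print(frames_to_timecode(107892))        # 00:59:56:12
print(timecode_to_frames("00:59:56:12")) # 107892
```

Because every frame has a unique address, an edit decision made off-line from a numbered preview copy can be repeated exactly in the edit suite.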

Most SMPTE code is recorded on a linear audio channel. That means you have to have at least two audio channels to use it. Three are better. Most one-inch and many 3/4-inch VTRs have three audio channels, leaving two for program content.

SMPTE is not the only time code in use today. A number of companies have their own proprietary codes. They all serve the same purpose - allowing precise control over editing equipment.

Computer Editing
The rules for editing are the same for film, videotape, and computer editing.  The medium changes, not the message.

In addition to playback and record VCRs, video monitors, and audio gear, computer editing requires a computer that is fast enough and has enough memory to process video, a device to capture video and audio and turn them into computer files, and hard drives that are big enough and fast enough to handle all of the video and audio you will need to store. On Windows computers you want to make sure your capture drive uses the NTFS file format.

Editing video on a computer offers several advantages over using videotape.  The first is the ability to change the content or length of any part of a program without having to re-edit everything from that point to the end.  In computer editing you are constructing a list of instructions, describing how the program is to be assembled by the computer.  Because you can work on any part of the program without adversely affecting subsequent parts, computer editing is commonly referred to as "non-linear editing."

A second advantage of computer-based editing is the ability to use more audio and video tracks. Tape-to-tape editing allows for only one or two video tracks and (for most systems) two monaural audio tracks. In theory, computer-based editors could have virtually unlimited audio and video tracks available. In practice, four or five video tracks and the same number of stereo audio tracks are usually sufficient. On the video side, this allows the usual "A" and "B" video rolls, the transitions between the A and B rolls, a track for titles and other luminance or chroma keys, and even enough tracks to do a "quad split." Audio would generally consist of the location sound (two tracks to cross fade with the video transitions), the narration, and two music tracks (to cross fade between cuts).

A third advantage of computer editing lies in the ability to duplicate files without loss. As you move files from memory to a hard drive or to tape backup or to a CD or DVD-ROM and back again, there is no loss of quality.

On the "down" side, video has to be "captured," or transferred from tape into the computer. Capturing is in itself an editing process. Clips have to be identified and transferred in real time. You might expect capturing to take up to twice as long as the total length of the video you are transferring even if the process is automated.

The way video is captured and stored in one editing system may not be compatible with another.  In other words, while you can be sure that an NTSC VHS videocassette can be played back on any NTSC VHS machine, a video computer file may not be readable by any software other than the software used to create it.

Almost all video on computers is "compressed."  Uncompressed video is equivalent, more or less, to a 20 Megabyte per second data stream.  The fastest "safe" video data rate for most hard drives is half of the tested sustained speed, or between two and eight Megabytes per second.  Compression schemes that reduce the effective data rate to three to six Megabytes per second produce excellent video and manageable file sizes.  For example, one hour of video at four Megabytes per second would fit on a fifteen gigabyte hard drive.
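As a quick sanity check on the figures above, the disk-space arithmetic can be worked out directly (Python is used here purely as a calculator):

```python
# One hour of video at a compressed data rate of 4 Megabytes per second.
rate_mb_per_s = 4
seconds_per_hour = 60 * 60
total_mb = rate_mb_per_s * seconds_per_hour
print(total_mb)  # 14400 MB, about 14.4 GB: comfortably within a 15 GB drive
```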

Depending on the sophistication of your editing hardware, some, most, or all of the transitions, keys, and other effects you apply to clips in your editing program have to be created and saved on disk by the program before they can be viewed or played back.  This process is called "rendering."  It can be quite time-consuming, especially on low-end editing systems.

As far as creating a “first draft” is concerned, non-linear (computer) editing is not really any faster than tape-to-tape editing, especially when the time required to digitize clips is considered.  It is, however, much more powerful and more flexible.

The first step in nonlinear editing is capturing the source video and audio.  There are, broadly speaking, two types of video: analog and digital.

Analog Video Capture
To capture analog video you will need a video capture card that can convert composite video or S-video to digital video.  High-end cards may also be able to convert component video to digital.  Some capture cards convert both audio and video and some rely on your sound card to handle the audio.

The conversion process may be carried out entirely by the hardware on the card, by software running on your computer, or by some combination of both.  In general, hardware conversion is more reliable than software conversion.  On Windows computers the product of the conversion is generally an AVI file.  An AVI (Audio Video Interleaved) file is a sound and motion picture file that conforms to the Microsoft Windows Resource Interchange File Format (RIFF) specification.  Macintosh files conform to Apple's QuickTime format.  In either case, the converted file is almost always compressed.  That is, much of the picture information is truncated or discarded according to a compression scheme called a codec.  If the file format depends on hardware on the capture card, or deviates from one of the generally accepted codecs, your ability to play back files will be limited.  Do not assume that all AVI files are alike in the way that all VHS tapes are alike.  They aren’t.

Most capture cards do not compress audio.  In fact, rather than using the 44,100 Hz sample rate found on commercial audio CDs, audio for video is sampled at 48,000 Hz.  At 16 bits per sample in stereo, that equates to 1.536 Mbps.
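The audio data rate quoted above follows from simple arithmetic.  The sketch below (Python, assuming 16-bit stereo PCM as described in this paragraph) works it out:

```python
# Uncompressed PCM data rate for audio-for-video.
# Assumptions: 48,000 Hz sample rate, 16-bit samples, stereo.
sample_rate = 48_000      # samples per second, per channel
bit_depth = 16            # bits per sample
channels = 2              # stereo
bits_per_second = sample_rate * bit_depth * channels
print(bits_per_second / 1_000_000)  # 1.536 (Mbps)
```

By comparison, CD audio at 44,100 Hz works out to about 1.41 Mbps.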

You probably will not be able to monitor audio or video levels on the computer as your video is captured.  If possible, you should use a time base corrector and waveform monitor to make sure the signal going to the computer meets broadcast standards.  Although some capture cards have time base correctors built in, most do not.

It is not sufficient to monitor the audio input, since the computer has its own software audio level and balance controls.

[Screen shot: the Windows audio control panel, Audio Recording Control]

In general, the audio signal can be recorded only if “line in” is selected in the audio recording window.  This is true whether or not playback of “line in” is muted.  Use an audio recording program to verify the presence of the signal and to check the loudest part of your program.

[Screen shot: Windows Sound Recorder]

These programs will show you a graphic representation of audio amplitude in real time.  Set the incoming audio at zero VU if you can, then adjust the line-in record level to make sure the loudest audio on your tape is not “clipped.”  The dynamic range from noise floor to clipping is so great in digital audio that there is no excuse for clipping a digital audio signal.  Only when you are satisfied that the record audio level is correct should you begin the capture process.
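If you would rather check a captured file than watch a meter, the peak level of a test recording can be measured directly.  The sketch below (Python, standard library only; the filename is hypothetical) reports the loudest sample in a 16-bit WAV file as a fraction of full scale, where 1.0 means the signal clipped:

```python
import array
import wave

def peak_level(path):
    """Return the loudest sample in a 16-bit WAV file as a fraction of
    full scale (1.0 = digital clipping)."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "16-bit samples assumed"
        frames = w.readframes(w.getnframes())
    samples = array.array("h", frames)  # signed 16-bit integers
    return max(abs(s) for s in samples) / 32768.0

# Example (hypothetical file recorded from your line-in):
# peak_level("capture_check.wav")  # aim for a peak below 1.0
```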

Among the options you can select in most capture programs is the division of the source video into discrete clips.  When capturing composite or S-Video, the capture program attempts to detect scene changes by measuring the difference between frames.  You may also be able to set an arbitrary interval and create a new clip, for example, every ten or twenty seconds.  There are two types of clips.  In some programs each clip is a separate file.  In others the captured video is a single file with software-defined clips.  You should consider your application before using video clip detection.  If, for example, you are going to be making minor changes to an event video, you would not want to have to re-assemble the event from a large number of discrete clips.  On the other hand, if you are looking for short segments from your tape to be used in another program, or you need to alter the sequence of events, clip detection can be very helpful.

Your digital recording will never be better than the tape you digitize.  Because the tape signal degrades rapidly in quality from one generation to the next, you should make every reasonable effort to digitize the original raw video and audio.

Except in rare cases you will not have computer control over your video playback machine.  You will have to start the tape machine and the digitizing process separately.  Some capture programs will detect the absence of video.  Some will wait to begin digitizing until the video signal amplitude is greater than color black.  Your goal should be to start digitizing before any useful video or audio begins and, obviously, to stop within five or ten seconds of the end of your material.  You are going to be consuming enormous chunks of hard drive space.  You don’t want to waste it on color black or snow.

You can estimate the amount of disk space you will need.  At 4.5 MB per second for audio and video, you will use 270 MB per minute and about 16 GB per hour of recording.  Your digital file may hang around on your hard drive for weeks or months.
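That estimate is easy to generalize.  Here is a small helper (a Python sketch; the function name is invented, and the 4.5 MB-per-second default is simply the rate assumed in this paragraph):

```python
def capture_size_mb(minutes, rate_mb_per_s=4.5):
    """Estimated disk space, in Megabytes, for captured audio and video."""
    return minutes * 60 * rate_mb_per_s

print(capture_size_mb(1))   # 270.0 MB for one minute
print(capture_size_mb(60))  # 16200.0 MB, about 16 GB for one hour
```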

Digital Capture

Digital capture is a much simpler process than analog capture.  The most common digital recording format is “DV.”  This format is already digital, already compressed to about 4 MB/sec, and already compatible with the Microsoft AVI format.  To move it to your computer you need to connect your camcorder or DV recorder to your computer using the IEEE-1394 interface, also called “FireWire.”  There is no loss of audio or video quality in the transfer.  That is the good news.  The bad news is that there is no way to adjust the video (level, setup, chroma, hue) or audio (level, balance, equalization) during the transfer.

Capture software for digital transfer generally offers the additional advantage of “machine control”: the playback device can be controlled by the computer.  DV tapes can have two different times embedded in the video signal.  One is zeroed at the beginning of the recording and shows the time on tape; it can be displayed in the upper right hand corner of the camcorder monitor.  Your capture software depends on this tape time.  If the signal is not continuous, it will zero itself and start over.  This is confusing for the operator, and it can be fatal for some digital capture software, since the tape times are no longer unique and the software uses the machine control interface to search the tape for specific time points.

DV tapes can also have a digital time stamp that records the actual date and time each frame is recorded. Clip detection can be based on the digital time stamp on the tape.  A discontinuity in the time stamp indicates the tape was stopped and restarted, ending one clip and beginning a new one.  It may also be possible to detect clips by looking for sudden changes in video content.  Whether you want to detect clips depends on how your software treats clips and the nature of your project.

With the exception of very high-end systems, nonlinear editing systems use AVI or QuickTime files that are compressed from 20 Megabytes per second down to around 4 MB per second.  One CD-R will hold about three minutes.  One DVD-R will hold about 16 minutes per layer.
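The capacities quoted above follow from the data rate.  A rough calculation (Python; the usable-capacity figures are approximate, and the 4.5 MB-per-second rate used earlier in this chapter is assumed):

```python
def minutes_on_media(capacity_mb, rate_mb_per_s=4.5):
    """Approximate minutes of compressed video a disc can hold."""
    return capacity_mb / rate_mb_per_s / 60

print(round(minutes_on_media(700)))   # CD-R, ~700 MB: about 3 minutes
print(round(minutes_on_media(4482)))  # single-layer DVD-R, ~4.38 GB usable: about 17 minutes
```

Small differences from the figures in the text come down to the assumed data rate and usable capacity.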
