The Journey to Non-Linear Editing (Part 2)



Welcome to filmmakerIQ.com and part 2 of our
journey to modern editing. In the first part we looked at the accomplishments of video
engineers that made it possible to record video signals and edit them. But now the story
turns from engineers to programmers and computer scientists as we look at the explosion of
digital and how it made filmmaking accessible to practically everyone.

What is the difference between analog and digital? To explain, let's imagine an audio recording of a tone. An analog recording would look like the original wave – all the details intact. It's a copy, an analog of the original. A digital recording, on the other hand (pun intended), breaks the wave into chunks called samples, measures the amplitude of the wave at each sample, and stores these measurements in a stream of binary code – a square wave of 0s and 1s. A digital player would reconstruct the wave using these measurements.
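To make that concrete, here's a minimal sketch of the sample-and-measure step in Python. The 8 kHz sample rate and 8-bit depth are arbitrary illustrative choices, not any particular format:

```python
import math

# Digitize one second of a 440 Hz tone (an "A" note).
SAMPLE_RATE = 8000      # samples per second (illustrative choice)
BIT_DEPTH = 8           # bits per sample -> 256 quantization levels
LEVELS = 2 ** BIT_DEPTH

def analog_tone(t):
    """The 'original wave': a continuous function of time."""
    return math.sin(2 * math.pi * 440 * t)

# Sampling: measure the amplitude at discrete points in time.
samples = [analog_tone(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]

# Quantization: store each measurement as an integer (the 0s and 1s).
digital = [round((s + 1) / 2 * (LEVELS - 1)) for s in samples]

# Playback: reconstruct an approximation of the wave from the integers.
reconstructed = [d / (LEVELS - 1) * 2 - 1 for d in digital]

# The reconstruction error is bounded by the quantization step size.
max_error = max(abs(a - b) for a, b in zip(samples, reconstructed))
print(f"worst-case quantization error: {max_error:.4f}")  # about 1/255
```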
So right off the bat, you may think that analog is the better of the two formats – and you aren't alone. There are plenty of people who swear that analog audio recordings are the best. But digital comes with some great advantages that analog simply doesn't have.

The first is resistance to noise – introduce
noise into an analog signal and you’re going to destroy the signal. Digital signals, because
they’re either 0 or 1 and nothing in between, can withstand some noise and not lose any
quality at all.

Digital is also easier to copy: there is no generation loss, whereas analog loses a bit of quality every time it's copied – like a game of telephone. Digital signals can also be synced up and read by computers, which analog signals can't. And very importantly for video, patterns can be found in the sequence of 1s and 0s in digital signals, so digital can be compressed – and that is key for making video as ubiquitous as it is today.

II. The First Digital Tapes

By the late 1970s and into the 80s, electronics
manufacturers were experimenting with digital recording. The first commercially available digital video tape format arrived in 1986 with the Sony D1 video tape recorder. This
machine recorded an uncompressed standard definition component signal onto a ¾” tape at a whopping 173 million bits per second! That's a lot of zeros and ones in a single second! In comparison, you are watching this video
in HD at a bit rate of only 5 million bits per second. The D1 was expensive and only large networks
could afford it. But it soon proved its worth as a rugged format. The Sony D1 would
be challenged by Ampex with D2 in 1988, and Panasonic with D3 in 1991. Sony's follow-up to the D1 was the Digital Betacam format in 1993. DigiBeta was cheaper than D1; it used tapes similar to Betacam SP, which was a standard television industry tape at the time; it offered composite connections to match how most TV studios were wired (though internally it recorded a component signal); and it used a 3 to 1 Discrete Cosine Transform video compression to get the bitrate down to 90 million bits per second.

Before we dive too deeply into how data is
compressed, let's talk about chroma subsampling – a type of compression that was used even on the "uncompressed" D1 digital video recorder.

The human eye contains light-sensitive cells called rods and cones. Rods are sensitive to changes in brightness only and provide images to the brain in black and white. Cones are sensitive to either red, green, or blue and provide our brains with the information to see in color. But we have a lot more rods
in our eyes than cones – 120 million rods to only 6 million cones. Because of this, we're more responsive to changes in brightness, which means you can take an image and throw away some of the color information while keeping the brightness, and it will still look as crisp and bright as fully colored video.

So to compress color images, first we have to pull the brightness information out of the signal. Video is made of the primary colors Red, Green,
and Blue – but storing signals in RGB leads to a lot of redundancies. So the RGB signal is converted to what's called a YCbCr colorspace. Y stands for luma, or brightness, Cb is the difference in the blue channel, and Cr is the difference in the red channel.
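For the curious, here's that separation as code – a minimal sketch using the BT.601 coefficients (the standard-definition weights; HD uses slightly different BT.709 numbers):

```python
def rgb_to_ycbcr(r, g, b):
    """Split a normalized [0,1] RGB pixel into luma plus two color
    differences (BT.601 coefficients). Green dominates luma because
    our eyes are most sensitive to it."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma (brightness)
    cb = 0.564 * (b - y)                   # blue-difference chroma
    cr = 0.713 * (r - y)                   # red-difference chroma
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 0.5, 0.25))  # an orange pixel -> mostly luma
```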
Now, by separating out color from brightness, we can start to compress the color information by reducing the resolution of the Cb and Cr channels.

The amount of subsampling – how much we're reducing the color resolution – is expressed in a ratio, J:a:b, where J is the number of horizontal pixels in the sample region (usually 4), "a" is the number of Cb and Cr pixels in the first row of that sample, and "b" is the number of different Cb and Cr pixels in the second row of pixels.

Let's illustrate what this means. A 4:4:4 signal is said to have NO chroma subsampling.
There are 4 pixels in our sample – that's four pixels of Y. Each of those 4 pixels has its own Cb and Cr values – so 4 Cb and Cr pixels. And in the next line there are 4 more Cb and Cr pixels.

Now let's start subsampling. In a 4:2:2 subsample we again have 4 pixels in the sample – four pixels of Y; we don't throw away the Y values. But now we only have 2 pixels of Cb and Cr: pairs of pixels share the same values. And in the next line, again, we have 2 pixels of Cb and Cr. The information needed to construct a 4:2:2 image is a third smaller than 4:4:4 and is considered good enough for most professional uses.

Another common one is 4:1:1 – 4 pixels in the sample, and this time only 1 pixel of Cb and Cr in the sample row, and one on the following line. Here's 4:2:0 – four in the sample, 2 pixels of Cb and Cr in the sample row, and zero in the next line; essentially the 2 get carried over to the next line. Both 4:1:1 and 4:2:0 need half as much data as 4:4:4.
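Those savings are easy to check with a little arithmetic. Here's a sketch that counts samples over a two-row region of J pixels – 2×J luma samples plus (a + b) samples each of Cb and Cr – against three samples per pixel for full 4:4:4:

```python
def data_fraction(j, a, b):
    """Fraction of 4:4:4 data needed by a J:a:b chroma subsampling
    scheme, counted over a 2-row-by-J-pixel region: 2*J luma samples
    plus (a + b) each of Cb and Cr, versus 3 samples per pixel."""
    return (2 * j + 2 * (a + b)) / (3 * 2 * j)

for j, a, b in [(4, 4, 4), (4, 2, 2), (4, 1, 1), (4, 2, 0)]:
    print(f"{j}:{a}:{b} needs {data_fraction(j, a, b):.0%} of 4:4:4")
# 4:4:4 -> 100%, 4:2:2 -> 67%, 4:1:1 -> 50%, 4:2:0 -> 50%
```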
Chroma subsampling is a good start, but we have ways to get the video data even smaller. One of the most important ways is the Discrete
Cosine Transform. DCT is a seriously brilliant mathematical achievement. Basically, what it does is approximate a signal – in video, a block of pixel values – as a sum of different cosine waves. The mathematics is nothing short of amazing and seriously well beyond my capability to explain. In the simplest terms, the more cosine waves you use to describe the signal, the more accurate you can be. But because the little bumps here and there don't visibly affect the quality, you don't need that many waves to get a convincing result – and the waves you leave out are data you don't have to store. DCT is an important part of nearly every video compression scheme.
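To give a feel for it, here's a sketch of a one-dimensional DCT in Python (an unnormalized DCT-II; real codecs like JPEG apply it to 8×8 blocks in two dimensions). Rebuild the row from only the first few cosine amplitudes and it comes back nearly intact:

```python
import math

def dct(pixels):
    """Unnormalized DCT-II: turn a row of pixel values into the
    amplitudes of cosine waves of increasing frequency."""
    N = len(pixels)
    return [sum(p * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, p in enumerate(pixels))
            for k in range(N)]

def idct(coeffs, keep):
    """Rebuild the row using only the first `keep` cosine waves."""
    N = len(coeffs)
    return [(coeffs[0] / 2 +
             sum(coeffs[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                 for k in range(1, keep))) * 2 / N
            for n in range(N)]

row = [52, 55, 61, 66, 70, 61, 64, 73]           # eight pixel values
coeffs = dct(row)
print([round(v) for v in idct(coeffs, keep=8)])  # exact reconstruction
print([round(v) for v in idct(coeffs, keep=4)])  # close, with half the data
```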
The first compression widely used for editing video was Motion-JPEG in the early 90s. Motion JPEG is an intraframe compression: it uses DCT to break individual frames down into macroblocks. It basically looks at the frame, finds chunks of the image that are similar, and then simplifies them. Now, it didn't look that great – the first Avid editing systems in the early 90s used an early form of Motion JPEG compression, and the quality was about that of VHS tape. But since the compression was done frame by frame, the codec wasn't too taxing for the computer hardware of the time – and it was good enough for offline editing.

Major breakthroughs came in 1995 with two
important technological releases. On the distribution side, 1995 saw the introduction
of DVD optical discs. These discs used a new kind of compression called MPEG-2 – not to
be confused with motion-JPEG. MPEG-2 was developed by the Moving Picture Experts Group who had
a rather novel approach to handling compression. Instead of standardizing the way video signals
were encoded, they standardized the way video was decoded from a digital stream. The way an MPEG-2 stream was decoded stayed the same no matter where it was done – on a DVD player, on your computer, or even on a modern-day DVR. How that digital stream was encoded – what algorithms were used to compress the original data – was left open, so that media companies could continually fight it out and develop more and more efficient encoders.

MPEG-2 used interframe compression. Unlike
intraframe compression, which compresses frames individually, interframe compression puts frames into GOPs – groups of pictures. Each GOP starts with an I-frame, or reference frame – a full image. Then the encoder waits a few frames and records a P-frame – a predictive frame. This frame only contains the information that is DIFFERENT from the I-frame. Then the encoder goes back and calculates the differences between the I- and P-frames and records them as B-frames – bidirectional predictive frames.
Describing the process almost sounds like magic – building frames based on reference frames and how they changed. That's very computationally taxing – it would take a while before computers could muster the processing power to edit this type of compression. But in 1995 they didn't have to, as that was the same year the DV format was introduced.

Intended to be a consumer-grade video tape format, DV recorded video at a 4:1:1 color subsample using an intraframe DCT compression, giving 25 million bits per second – quite an improvement in size from the original D1. This wasn't considered a professional quality standard, but it was a huge step up from consumer analog formats like VHS or Hi8. And all the DV cameras had IEEE 1394 (FireWire) connections, which meant people could get a perfect digital copy of their video onto their computer without needing specialized hardware to encode the file. The tapes themselves were extremely cheap – $3-5 per hour.

Armed with relatively inexpensive cameras,
digital video production began to take off. In Hollywood during the 90s, Avid was the king of nonlinear editing systems, but it was still a fairly expensive system, and several companies tried to compete for a share of that video production market.

Beginning in 1990, NewTek released the first Video Toaster on the Amiga system. Though technically it was more of a video switcher with only limited linear editing capabilities until they added the Flyer, the Video Toaster brought video production to lots of small television studios, production shops, and schools. Costing only a few thousand dollars but loaded with effects, a character generator, and even a 3D package called LightWave 3D, the Video Toaster proved there was a market for small-scale media production.

"So unleash your potential, infiltrate the networks. Make money, make a statement. And whatever kind of television you make, make it yours with the Video Toaster 4000."

As computers continued getting more powerful
and storage cheaper and cheaper, software-based nonlinear editors like Adobe Premiere and Media 100 kept nipping at the heels of Avid, forcing the company to release cheaper and cheaper systems.

A media company called Macromedia wanted to get in the game. They hired the lead developer of Adobe Premiere, Randy Ubillos, to create a program called "KeyGrip" based on Apple's QuickTime. The product was fairly well developed when Macromedia realized it would be unable to release the program, as it interfered with licensing agreements they had with their partners Truevision and Microsoft. So Macromedia sought a company to buy the product they had developed, and they found one at a private demonstration at NAB in 1998. The buyer's name was Steve Jobs, and his company Apple would release the software the following year, 1999, as Final Cut Pro.

The divide between television/video production
and film production began to close with the adoption of high definition video production.
Engineering commissions had been working on the standardization of High Def video since
the 70s and experiments in HD broadcast were being conducted by the late 80s in Japan.
The first public HD broadcast in the United States occurred on July 23, 1996. Now about this same time, the mid to late
90s, Hollywood studios were beginning to use DI or digital intermediaries to create special
effects. A DIs were created by sending 35mm celluoid film through a telecine which scanned
the film to created a digital files. These could be manipulated and composited in the
computer and when they were satisfied, the final shot would sent to an optical printer
which put the digital images back on film. Hence the term Intermediary. In 1992, Visual Effects Supervisor/Producer,
Chris Woods overcame several technological barriers with telecine to create the visual
effects for 1993’s release of Super Mario Bros. Over 700 visual effects plates were
created at a 2K resolution – that's roughly 2,000 pixels across.

Chris Watts further revolutionized the DI process with 1998's Pleasantville. Pleasantville held the record for the most visual effects shots in a single film, as almost every shot once the characters visit the fictional idyllic 1950s town of Pleasantville required some kind of color effects work.

"Ok, right here. Alright stop. STOP!! Where is it? Hey, here – grab the nozzle. But where's the cat? C'mon, just hold on tight. Whoa! So that's what these things do."

The first Hollywood film which utilized the
DI process for the ENTIRE length of the movie was the Coen brothers' O Brother, Where Art Thou? in 2000. After trying bleach processes but never quite getting the right look, cinematographer
Roger Deakins suggested doing it digitally with a DI. He spent 11 weeks pushing the color of the scanned 2K DI, fine-tuning the look of the old-timey American South.

"Appears to be some kinda congregation – care for some gopher?" "No thank you, Delmer. A third of gopher would only rouse my appetite without bedding her back down." "Oh, you can have the whole thing – me and Pete already had one. We ran across a whole gopher village."

The thing is, HD video and 2K film scans share
roughly the same resolution – HD being 1920×1080 whereas 2K is 2048×1080. So it wasn’t long
before Hollywood started asking: can we just skip the whole 35mm film step altogether?

The first major motion picture shot entirely on digital was Star Wars: Episode II – Attack of the Clones in 2002, and it was shot on a preproduction model of the Sony HDW-F900.

And by the latter half of the 2000s, with faster
computers and storage, better cameras, and even 4K resolution, it became conceivable to capture straight onto a digital format, edit online – which means working with the original full-quality files rather than low-quality working files – and even project digital files, all without celluloid film.

Moving into the second decade of the 21st
century we’re adding even faster computer and video processors, incredibly efficient
compression techniques like MPEG-4 and H.265 and a powerful network of data distribution
with broadband internet capable of sending video across the globe.

The journey to modern-day film and video editing traces all the way back to TV networks needing to delay the broadcast of their shows. Everything we have now is built on the sparks of genius that electronics engineers, software engineers, and mathematicians had over the past 60 years – coming up with incredibly brilliant solutions to problems that hounded electronics from the start. Each step, each advancement, added more and more tools for us filmmakers to realize our dreams. How can you look at the momentum of history and how we got here and not wonder in awe that so much has changed in so little time – and it's all so we can just tell stories to each other. Filmmaking is the technological fulfillment of our most basic human need, the need to communicate. So go out there and communicate! Use these tools that are available. Be part of the next chapter in filmmaking history. I'm John Hess, and I'll see you at filmmakerIQ.com.

 

100 Responses

  1. Artūrs Savickis

    December 13, 2013 7:35 pm

Really, really great educational videos – a lot of useful information and facts, all interestingly presented! Thank you very much.

  2. FilmNerden

    December 29, 2013 3:17 am

How you are not more famous than you are is a mystery to me. Love the high quality and production value. It's inspiring and very informative. Good job, and keep up the good work.

  3. kepa gainza

    January 2, 2014 1:21 am

Congrats, you are doing great work depicting the beauty of the technical side of video creation history!!! Best channel find of the year so far 😉
Keep it up! Thank you very much!

  4. Ian Tester

    January 20, 2014 1:51 am

Great video, but you went off the rails with the DCT explanation. The DCT is used to represent the pixel values, not the "square wave" of the bits.
    Seriously, why do people always focus on the bits in a computer? It's like saying that mathematics is based on the digits 0 through to 9. It totally misses the point. You usually only have to worry about actual bits when they get stored or transmitted. Otherwise, they're used to store numbers, which represent other things (pixels, letters, etc).
    /rant

  5. wyxvt

    February 11, 2014 10:49 pm

Nice video and you really communicate well. But it's flawed. How can you tell the story of NLE without mentioning Lightworks? Lightworks was there at the start with Avid, before Toaster and especially much before FCP and Premiere. Given Lightworks' recent comeback, it makes it an even more blatant mistake to omit it.
Also, more time should have been given to the EditDroid. Basically, if it wasn't for it we would probably not have NLE as we do today.

  6. Ilya Malov

    February 26, 2014 3:19 pm

Very informative! Finally got an understanding of 4:4:4 and 4:2:2 sampling. Thanks a lot and greetings from Russia!

  7. Gorkab

    March 9, 2014 8:43 pm

Thanks for that information on Super Mario Bros, I didn't know its effects were the first to use the 2K resolution standard! 😉

  8. Jeevan Jayaram

    March 25, 2014 6:19 pm

    I came here to watch a video about non linear editing. Got a master class about video compression from my college syllabus. That's an entire unit! Thanks dude

  9. Drongo Brothers

    June 6, 2014 8:56 am

    Beautifully produced, well presented, accurate and concisely constructed narration of a highly technical field in easily digestible layman's terms.  Much appreciated John.  I will use your video for educational purposes when discussing the history of motion picture production and technology to undergraduate tertiary students, complete with citation (plug) to you and your work.  

Thank you for this powerfully coherent contribution to the body of freely accessible material covering this exciting and increasingly ubiquitous field.  After all, nowadays a 12-year-old 'kid' can produce broadcast quality media on a home PC straight out-of-the-box – something that would have either cost millions or been practically impossible only a couple of decades ago.

This historical account of motion picture technology represents the passage from an age of pioneering film production and delivery to an era of ceaselessly emerging media technology that continues, in some way, to shape millions of lives.

    Thank you so much for the time and energy put into this presentation.  You are a gentleman and a scholar.  

    Now, awaiting the definitive update sir… UHD, 3D and beyond. wink

    Cue: Applause
    (Roll Credits)

  10. travis whitcher

    June 12, 2014 6:08 am

I will never forget the first time I got into digital video. I went to my friend's house and he showed me his editing system: a Sony VAIO PCV-RX650 with Sony Vegas 4 and a Sony Digital8 DCR-TRV740, in 2003. That was the start of it all for me, lol

  11. LeDodger1

    June 15, 2014 1:12 pm

    What a marvelous series of videos, explained in layman terms and very well presented. Great stuff indeed! 

  12. Dennis Degan

    September 17, 2014 1:29 pm

    I absolutely love this series of videos on film history.  They are clearly written and beautifully presented.  You are a master at connecting all the dots of film/video history and processes together.  Each episode leaves me wanting more!
    As a video editor and engineer, I do have one small critical comment:  Sony's Digital BetaCam actually WAS a component format internally.  However, it was D-2 that was the first digital COMPOSITE system of recording, where the video actually recorded digitally on tape was the composite video signal.  Digital BetaCam came out at a time when component video was not as common as it is today, so Sony included analog composite inputs and outputs to allow it to be used in composite systems immediately.  But the machines did have Y/R-Y/B-Y inputs and outputs as well and recorded the video as digital components.  Digital BetaCam also had BOTH analog and digital inputs and outputs, making it the first machine to bridge the gap between both analog and digital as well as between composite and component video.  And unlike the D-1 format, Digital Beta recordings were mildly compressed.
    Other than this error, this series is amazingly informative and entertaining.  I look forward to viewing each one, excited at what's in store.  Keep up this great work.

  13. TVperson1

    October 9, 2014 11:09 am

    Hold on John, Digital Betacam is component, D2 is composite. DigiBeta is also 10-bit colour over the 8-bit of D1. 

  14. TheLokiLokes

    January 7, 2015 4:00 am

    I'm a media student and I must say, I am eternally grateful for these videos. They are very intuitive and enlightening. They have aided my research and essays tremendously and for that I am thankful. Keep up the fantastic job, I look forward to much more of your content! 

  15. Becky Morris-Ashton

    February 11, 2015 11:52 pm

Hey, I was just wondering: is everything you are talking about in this video non-linear, or are some parts of it linear as well? Please help 🙂

  16. Filmmaker IQ

    February 12, 2015 12:25 am

    @Becky Morris-Ashton. This lesson isn't really about non-linear or linear – it's mostly about digital and how it came to use in video and filmmaking. Though most digital stuff really became more on the non-linear side.

  17. RMoribayashi

    May 19, 2015 8:58 pm

Musician/record producer Todd Rundgren built a music video studio around the Video Toaster and LightWave 3D. He made several videos NewTek used on their demo tape. A year or two later music videos converted to film and Todd quit the video business. In '93, Babylon 5 used Toasters and LightWave 3D to create groundbreaking CGI effects for TV.

  18. g silvax

    July 1, 2015 4:02 am

Is there a better way to honor you than to rise from my chair and applaud you right now?!
We have here a masterpiece for every aspiring filmmaker on the entirety of YT.

  19. The Great Agitator

    August 7, 2015 1:50 am

    Excellent video! I still remember the greatness of the Video Toaster/Flyer. Edited so much stuff on that thing. Great times. 🙂

  20. lobachevscki

    December 3, 2015 10:34 pm

As a mathematician (one focused on computer graphics) I'm really thankful for this video. It is not only informative, but it does justice to the science behind the field, which most people don't know.

    Thanks.

  21. Jeff Billings

    January 8, 2016 8:21 am

FYI – The media for most DIs would be acquired with a film data scan, not telecine. And DI output to film is done with a film recorder, not an optical printer.

  22. Ihab Hassan

    February 27, 2016 9:34 pm

    Thanks a lot for those lovely lectures, I really appreciate you and your skills!! Thanks a lot again and again!

  23. marcinswidzinski

    March 14, 2016 2:30 pm

    As a total amateur who just briefly touched the topic of filmmaking and is just before buying his own filmmaker equipment – I have to thank you for those videos. I watch them constantly, learning, discovering new stuff. You even made me improve in my primary "work/hobby" – photography. You have a great way of giving your vast knowledge to others and inspire them – now I can't wait to lay my hands on my own camera and making something. Thank you so, so much!

  24. Олег Ярыгин

    July 9, 2016 9:22 pm

8:21 – In simple words, DCT (Discrete Cosine Transform) breaks each block into a bunch of differently sized horizontal and vertical sine stripes plus grid-adjusted “hills” (or “goosebumps”).

  25. T'Proxy

    August 27, 2016 6:16 am

Man, I love your videos. They're not only very informative, but also motivating. The way you're presenting the story moved and motivated me to keep learning about filmmaking. Thank you so much.

  26. 8068

    September 6, 2016 10:21 pm

    I work at Eastman Kodak in Rochester, NY.  While significantly less than 20 years ago, a number of big-budget films are still using Kodak 35mm Color Negative film for capture.

  27. Kirk Nelson

    September 24, 2016 8:48 am

You skipped over Video CDs, which used the original MPEG-1 compression. Not widely available in the US – usually found being demo'd at computer expos – but I know it was very popular in Asia; not sure about anywhere else. It allowed full-screen playback on a PC from a 1x speed CD-ROM drive. If I recall, that means it could play video and audio with a combined data rate of 150 kilobytes per second. Even my x86-based 386 could keep up with that, and if necessary they sold ISA cards to offload the work from the CPU. The ISA cards used hardware to decode MPEG-1 and let you watch full-screen movies on even really slow computers. Can't explain how the ISA cards did the math, but that's what they did, freeing up the CPU to do other things. I used to have a 486 with an MJPEG capture card; I used it to encode home movies, edit them on the computer, and output back to video. The files were not playable on a computer without the MJPEG card installed, and while the quality was not great it was still cool to add titling and scene transitions. The biggest problem I had was balancing compression level and file size due to the limitations of a 16-bit OS at the time – I think I was using Windows 3.11 – and that one took me some time to figure out, since the software would crash while saving without an explanation.

  28. Exquisite Corpse

    November 21, 2016 2:36 am

I worked the control room in local TV when I was a teenager and literally never knew what the 'toaster' was... my main memory of it is the station owner shouting "TOASTER'S OUT! HARD PATCH IT! HARD PATCH IT!!!"

    These videos are awesomely informative.

  29. Kyle Tekaucic

    March 6, 2017 5:40 am

    This is a very good video—I got here as a related link from looking up BBC training films on the VT department. All the thanks for your work.

    I'm not a mathematician (and have more experience with audio editing than video), but I think I can try and explain the DCT for those interested…

    The DCT is a specific implementation of something called a Fourier transform. The Fourier transform takes a signal represented as points in time (like the luma or chroma signals in a video recording, or the level of an audio signal), and transforms it into a signal represented as levels of sine and cosine waves at different frequencies. If you add up enough of these waves together, you can reproduce any signal you care to be interested in. (The D stands for Discrete, describing the fact that the signal is represented by sampled points (the digital world) rather than a continuous signal (the analog world).)

    However, the classic Fourier transform uses both sine and cosine waves to do this transformation—this allows for shifts of the waves in one direction or another in time or space, called phase. This can be important information, but it's usually OK to discard either the sine part or the cosine part of the transform to reduce the size of the information and not heavily compromise quality. And so the DCT, as its name implies, uses only the cosine part of the signal. The specific reason for this involves complicated math concepts I couldn't really explain properly, but in general, using cosines to represent a signal better matches many real-world signals (especially images), and usually leads to less noise/aliasing caused by using just sine waves.

    The end result: compact encoding of a video signal that doesn't lose a lot of quality in the process.

    Hope that's a sensible explanation.

  30. Rex Romanillos

    May 24, 2017 3:26 am

    The best video that talks about the history of editing! And I love the last part, so inspiring!

  31. videolabguy

    July 10, 2017 9:11 pm

The DCT does not compress the information – you get the same amount out as you put in. The information is then rearranged to produce a serial stream with a lot of long stretches of zeros and ones. Huffman coding then compresses this stream. Even learning that much made my brain hurt! The math behind it all is amazing.

  32. Daniel Martinez

    July 31, 2017 7:08 pm

    Excellent explanation of something I use at work every day and take for granted as an editor.

    Thanks

  33. Michael Elliott

    August 8, 2017 9:53 pm

    Man, if I could like this video (and the whole series so far, really) twice, I would. Thanks for these.

  34. Steven Watchorn

    August 28, 2017 11:09 pm

    Damn, that last speech was really inspiring. It's like the "Patton" speech of film-making. It really makes me want to go see what stories others want to tell. So I'm off to see The Emoji Movie! 😀

    Really good video. I just discovered these, and I am devouring them. 🙂

  35. Gary Peterson

    September 5, 2017 2:51 am

When I first started at my current TV station in 1980, the older engineers there showed me the microscope-and-razor editing equipment that they still had on hand. That method had been phased out years before in favor of machine-to-machine editing (on Ampex 2" Quads). 3/4" U-matic was the new kid on the block. Seeing how you had to view the iron filings and slice between them gave me enormous respect for those guys. Of note though, even back then, the herculean editing on "Laugh-In" was legendary.

  36. Darren D. MacDonald

    September 10, 2017 11:09 pm

My professor would play these videos in class to fill time. His theory was: "Why retell everything he is going to say, when I can show you someone that has already said it?" They were really helpful. Three years after graduation and I'm still watching these videos.

  37. WeWereYoungandCrazy

    September 12, 2017 4:11 am

Best series about video and film on the entire world wide web. Should be equivalent to a college course in broadcast and film production. The history, the technology, the art – it's all discussed and seamlessly woven into interesting and easy-to-understand sessions. I love them all. Having said that, at 3:44 he states that DigiBeta is a composite video format, but it isn't. DigiBeta is most certainly a component system, 4:2:2 to be exact. And the DigiBeta cassette is based not so much on the BetaSP cassette as it is on the original Betamax cassette. BetaSP was also based on the Betamax cassette. I will forgive them this one error, and admit that DigiBeta was used in facilities that had only composite routing, patching, and switching because the DigiBeta model A500 deck had a composite video option. The DigiBeta decks also all had both analog and serial digital inputs and outputs. I recently tossed 4 of them into an electronics recycling dumpster. A sad day. (But I kept 2.)

  38. Nahuel Martínez

    September 27, 2017 4:53 am

    I said the same on the first part of this, it's so nice and refreshing to see you speaking so passionately about this topic, it's really lovely

  39. Stephen Baldassarre

    October 21, 2017 1:15 am

Yes folks, now anybody can express themselves via full-length online HD video editing. It's a pity most of them have nothing to say but make movies anyway.
On a side note, I thought the color grading of O Brother… was nauseating. Compared to current movies, it is downright tame. I am watching a mainstream movie right now and every scene is orange or blue, with crushed shadows, pasty skin tones and cut-out windows where they are completely unnecessary. Why? Because COMPUTERS!

  40. Vegh Atilla

    October 30, 2017 5:43 pm

This guy makes you feel like the smartest boy in class is explaining the thing that the teacher could never get across to you properly.
    Bravo
    Love it!!!

  41. Alexandr Vladimirovich

    June 17, 2018 5:58 pm

We still use the AVI PAL DV format (and anamorphic PAL DV widescreen) with 25 Mbps compression for SD television in Russia.

  42. Clay3613

    August 27, 2018 8:14 pm

    I have a full set of Video Toaster floppies right next to my desk right now.

    Nice to know the SMB movie did one thing right!

  43. CarlsTechShed

    September 21, 2018 2:41 pm

    16:10 The colour grading special effects in "Pleasantville" and "O' Brother Where Art Thou" was done using a system called a 'MegaDef' which was designed by Pandora International.

  44. MontePrideProductions

    January 29, 2019 9:58 pm

I showed both part one and part two of this series to my high school students. They were excellent!

  45. gpwgpw555

    February 25, 2019 3:49 am

I saw an optical illusion that illustrates why we do not need a high amount of color information. I was looking at an image that had a narrow serpentine black line down the middle of the image. The left side of the line was white, the right side was yellow. Luminance was the same on each side of the black line. The black line was on a clear acetate sheet. When you lifted the sheet you could see the divide between the white and yellow was straight and not serpentine. Put the clear sheet down and it would appear to be serpentine again. Great illusion.

  46. Schießstand - Foto & Video

    March 10, 2019 8:12 pm

I just spent a week in a hospital. Netflix and Amazon were blocked on the WiFi, so I watched 20 of your videos and now I feel a lot less stupid than before. This is great – thank you for these great videos explaining almost everything. Even though I understand and use a lot of this, in each and every video there are things I didn't know.

  47. Roger Rabbit

    March 23, 2019 12:01 pm

I have to wonder: what was the point of creating 1080p? Why didn't they just create 2K TVs? Were the extra 128 pixels just that much trouble?

  48. lohphat

    April 15, 2019 5:24 pm

A former colleague (who was there when MPEG-2 was being developed) has a correction for you relating to the reasons and timing of its development:

    "Development started in 1992, by General Instrument, for cable but its first wide scale deployment was in the mid-nineties for DirecTV…."

  49. A Limmi

    May 22, 2019 9:27 pm

    Thank you very much for this well researched documentary! It helped me a lot to prepare a presentation on the history of early video editing!

