HDV on the Set of 24 part 2
Posted: March 24, 2006 at 12:29 am, by Isaac
A couple of months ago I posted a link to a Showreel magazine article on the DPs of 24 testing HDV cameras for show production. In part one they tried the Sony Z1 and the JVC HD100, and here in part two they put the Canon XL H1 and Panasonic HVX200 through their paces. Very interesting stuff, and I recommend that you read the full article. They comment on a number of workflow and camera issues, but their main complaints seem to be that the Canon’s stock lens and viewfinder are useless for drama work, and that the Panasonic shows more image noise than the other cameras.
They also point out that while all network sitcoms have now switched over to digital, all the drama shows and even drama pilots are still being shot on film. In terms of speed, flexibility, and range, 35mm is still unbeatable. Particularly for a show like 24, which is shot fast and moves fast, with chases and pyro work, shot during the day and at night, over a variety of sets and locations. Fortunately, much of the show is shot in the CTU headquarters set, which is fully enclosed and rigged with fully controllable lighting, which made it perfect for video tests.
Posted: March 24, 2006 at 12:18 am, by Isaac
It’s been a long time since I’ve updated the site because I’m swamped with work. Here are a few things that I’ve been reading about recently. The South by Southwest Film Festival recently ended, and HD for Indies covered it very well. Mike has posted his notes from lectures and panels on things like theater technology, theatrical vs. DVD release, and some great info on Sony’s XDCAM system that they showed off there. CinemaMinima has a writeup on the documentary production panel, and Cinematech posted links to some more articles.
I also found a neat blog on After Effects Techniques posted by Stu Maschwitz, filmmaker, compositor, and “accidental technologist.” He’s one of the founders of The Orphanage, and helped develop Magic Bullet and eLin with Red Giant. eLin is a compositing tool that offers better control over transparencies and color correction by using linear correction rather than gamma (if I understand it right), which is particularly useful for those outputting to film. This is Stu’s area of expertise, and his blog is filled with explanations of the differences between linear and gamma, logarithmic color depth, and how to use After Effects to manage the different color workflows.
Instant HD 1.0
Posted: March 10, 2006 at 1:56 pm, by Isaac
A few years ago, Magic Bullet was the most widely used film-look tool in the video filmmaker’s kit. It had loads of film stock simulation presets, the ability to repair digital artifacts and add analog ones, and a very good deinterlacer. I didn’t use it much, because I personally preferred RE:Vision’s FieldsKit for all my various field adjusting needs, and After Effects had enough image tools that I could make my own presets. However, a few days ago, Red Giant Software introduced a brand new tool that fills a relatively new void.
Instant HD is a plug-in for Adobe Premiere, After Effects, and Final Cut Pro that up-converts standard definition video to HD. Using specialized sharpening and antialiasing algorithms, Instant HD tries to fill in the missing data and smooth the edge details as it increases resolution. There’s a limit to how well this can work, but after a few minutes, I got some pretty good results with some DV footage shot on an XL2.
What you see in that tiny thumbnail is part of a football helmet facemask. When the original image (left) is resized in AE (center), it is much softer than Instant HD’s result (right). You can see the original DV frame here (500k), and the full 1080 results here (2mb). For the sake of a more direct comparison, I didn’t alter the aspect ratios.
Nevertheless, the end result is quite good. Since the software can only use a progressive image source, I deinterlaced the footage with FieldsKit, and then added the plugin and selected a 1920×1080 image from its many presets. It works best with high-contrast edges, like the player’s name and the facemask; the edges are sharpened but not blocky. The texture on the grass, socks, and skin is a little less convincing, but still low-noise and not obtrusively low-res. The next test was much harder.
Again, check the original DV and resulting HD frames for exact output; once again it’s the hard edges that look best. The collar and jersey number are punched up pretty well, but the soft highlights on the face stay soft. It does an amazing job on the creases of the clothing and the shadows under the bleacher seats, but because there isn’t much detail to begin with, the face looks out of focus. Obviously, a shot with less overwhelming background detail would have produced a much better result.
Obviously this is no replacement for a real HD camera, but for emergency last-minute projects or adding SD footage to an HD edit, the $99 pricetag is incredibly reasonable. I’m sure we’ll see more such tools in the future, as HD broadcasts and media become more widespread. Quantel has a good upscaling algorithm, and Algolith has its own suite of noise-reduction, deinterlacing, and upscaling tools, but I haven’t had a chance to use those. In any case, I’m very impressed with Red Giant’s latest effort.
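For the curious, the gap between naive resizing and a smarter upscaler can be sketched in a few lines of Python. This is only a toy bilinear interpolation for illustration, not Instant HD’s proprietary algorithm; the point is just why plain pixel-repetition looks blocky while interpolation looks smooth (and why interpolation alone looks soft, which is where the sharpening step comes in).

```python
# Toy comparison: nearest-neighbor vs. bilinear upscaling of a tiny
# grayscale "image" (a list of rows of pixel values, 0-255).

def nearest_upscale(rows, factor):
    """Repeat each pixel `factor` times in both directions (blocky)."""
    out = []
    for row in rows:
        wide = [p for p in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

def bilinear_upscale(rows, factor):
    """Linearly interpolate between neighboring pixels (smooth/soft)."""
    h, w = len(rows), len(rows[0])
    out = []
    for y in range(h * factor):
        fy = y / factor
        y0 = min(int(fy), h - 1)
        y1 = min(y0 + 1, h - 1)
        ty = fy - y0
        row = []
        for x in range(w * factor):
            fx = x / factor
            x0 = min(int(fx), w - 1)
            x1 = min(x0 + 1, w - 1)
            tx = fx - x0
            top = rows[y0][x0] * (1 - tx) + rows[y0][x1] * tx
            bot = rows[y1][x0] * (1 - tx) + rows[y1][x1] * tx
            row.append(top * (1 - ty) + bot * ty)
        out.append(row)
    return out

frame = [[0, 255], [255, 0]]         # a 2x2 high-contrast edge
blocky = nearest_upscale(frame, 4)   # hard 8x8 blocks
smooth = bilinear_upscale(frame, 4)  # gradient across the edge
```

A real upscaler like Instant HD then fights the softness with edge-directed sharpening and antialiasing, which is why hard edges (the facemask, the jersey lettering) survive the trip better than low-contrast texture.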
Frame Rates and Interlacing
Posted: March 6, 2006 at 1:53 pm, by Isaac
Due to the electrical constraints that existed during the invention of video, television has relied on interlaced fields since the 1930s. Due to the chemical developments that guided the invention of photo-projection, film hasn’t. Due to the technological advances of recent years, computer displays are completely different. Up until a few years ago, using three (or more) totally separate display systems was fine, but today we’re trying to merge them all together so we can watch movies on our cellphones and check email on our televisions.
For now, I’m going to ignore LCD panels, plasma screens, and DLP projectors. How a CRT actually displays video is complicated to explain in full, but reasonably simple in outline: NTSC video is 30 frames per second (technically, 29.97fps), and usually 480 lines of pixels high. However, each of these frames contains two interlaced fields, which means the video is in effect 60 fps, each field only 240 lines high. Played at speed, this gives a very fluid moving image that resembles a full-resolution picture. Unfortunately, the full frames look something like this:
This is not a problem unless the video is shown on a non-interlaced display, such as a film projector. Film is shot and projected at 24 fps, and each frame is a full-resolution exposure just like any other. When movies are broadcast on television, they must be converted to video’s frame rate by doubling up certain frames. Every 1/6th of a second, four film frames must be turned into five video frames, or ten video fields, spaced as evenly as possible. The best way to do this is called a 3:2 pull-down.
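The cadence is simple enough to sketch in code. Here’s a minimal Python illustration of the 3-2-3-2 field pattern described above; the frame names A–D are just labels for the four film frames in each 1/6th-second group.

```python
# 3:2 pull-down sketch: four film frames become ten interlaced fields,
# which pair up into five NTSC video frames.

def pulldown_32(film_frames):
    """Map four film frames to video frames in a 3-2-3-2 field cadence."""
    cadence = [3, 2, 3, 2]  # fields contributed by each film frame
    fields = []
    for frame, count in zip(film_frames, cadence):
        fields.extend([frame] * count)
    # Pair consecutive fields into interlaced video frames.
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]

video = pulldown_32(["A", "B", "C", "D"])
print(video)  # [('A','A'), ('A','B'), ('B','C'), ('C','C'), ('D','D')]
```

Note that two of the five video frames mix fields from different film frames; those "dirty" frames are why pulled-down footage stutters slightly on fast motion, and why pull-down removal tools have to detect the cadence before they can reverse it.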
As you can see, it spreads the frames across the fields 3-2-3-2 at a time. It’s not as smooth and even as true 24fps playback, but it’s the least stuttery option for NTSC video. It’s also a readily available option; nearly every video camera now has some sort of 24fps shooting mode. This has led to plenty of confusion: over how we label the different formats, how many different formats there actually are (60i, 59.94, 50i, 30p, 29.97, 25p, 24, 23.976, etc, etc…), and when to use each one.
Too many filmmakers think that the 24p mode on their new camcorder will magically make everything look better, so regardless of what they are shooting or how they plan to edit, they shoot 24fps. Yes, film looks better than video, but the strobing flicker of 24p video that has been improperly converted to NTSC (or played natively on the wrong hardware) looks nothing like the smooth but slightly slower rate of real film. In my opinion, 24p should be reserved for productions that will be printed to film, made into a 23.976fps DVD, or matched to real telecine’d film.
For projects shot on video and displayed on video, a true video format can give better results; particularly if your project is a documentary, industrial video, news report, or anything that is traditionally in a broadcast format. Furthermore, unless your camera has a true progressive CCD array, you might get a better image (in addition to more flexibility) by shooting an interlaced image and deinterlacing in post. RE:Vision’s FieldsKit is the best tool for pull-downs, pull-ups, and any other field management processes.
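FieldsKit uses motion-adaptive reconstruction, but the basic idea of deinterlacing can be sketched with the crudest method, "bob" deinterlacing: split each interlaced frame into its two fields and line-double each one back to full height. A minimal Python sketch, treating a frame as a list of scanlines:

```python
# "Bob" deinterlacing sketch: one interlaced frame (two fields woven
# together line-by-line) becomes two progressive frames at half the
# vertical resolution, line-doubled back to full height.

def bob_deinterlace(frame):
    """Split an interlaced frame (a list of scanlines) into two
    line-doubled progressive frames, one per field."""
    top = frame[0::2]     # even scanlines = first field
    bottom = frame[1::2]  # odd scanlines = second field
    def double(field):
        return [line for line in field for _ in (0, 1)]
    return double(top), double(bottom)

# A 4-line frame whose two fields were captured 1/60 s apart:
frame = ["t0", "b0", "t1", "b1"]
first, second = bob_deinterlace(frame)
print(first)   # ['t0', 't0', 't1', 't1']
print(second)  # ['b0', 'b0', 'b1', 'b1']
```

This recovers the full 60-per-second motion at the cost of vertical resolution; smarter tools interpolate the missing lines and blend fields adaptively where there’s no motion, which is why they’re worth paying for.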
Of course, this process is greatly simplified if you happen to be using PAL gear. You can shoot at 50i, ready for non-American broadcast or deinterlaced playback at 25p. Going from 25fps to 24fps is a slowdown of about 4%, which means that a 90 minute feature on PAL would only be about 4 minutes longer on film. Since nearly all NLEs and video tools support PAL, and proprietary 24p support can still be a little sketchy, it is a simpler option. Carefully research all the options available for specific projects.
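The arithmetic behind that PAL-to-film conversion is worth spelling out, assuming every frame is kept and simply played back at the slower rate:

```python
# PAL-to-film runtime arithmetic: 25 fps material played at 24 fps
# keeps every frame, so the running time scales by 25/24.

pal_minutes = 90.0
film_minutes = pal_minutes * 25 / 24  # same frame count, slower rate
extra = film_minutes - pal_minutes
print(round(extra, 2))  # 3.75 extra minutes on a 90 minute feature
```

The audio has to be slowed by the same ~4% to stay in sync, and usually pitch-corrected so voices don’t drop noticeably; that’s the one extra step the PAL route demands.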
Getting the Most out of HD(V)
Posted: March 4, 2006 at 1:51 pm, by Isaac
Studio Monthly has just put up an article featuring lots of first-person advice from eighteen directors, camera operators, cinematographers, and engineers on how to maximize the quality of your HD productions. Most of the tips are for SD folks stepping up to the new resolution and aspect ratio (and frame rates, processing, latitude, workflow, etc…), but there’s some detailed data on presets, filters, and editing techniques as well. Most helpful.
And now, in the interest of fairness, I’d like to follow my previous Panasonic coverage with a report of an AG-HVX200 shoot that did run smoothly. I’m not totally sold on this camera or its recording media, but Shane Ross has had few hiccups with the P2 workflow. His trick was to hire a tech to spend the whole day dumping the cards onto external drives, and he avoided many issues by skipping the Mac and using a PC laptop. Here are parts one and two of his three-part report.