What Every Matchmove Artist Needs to Know About Lenses

…but is afraid to ask

In this article we focus purely on the lens, a component many consider to be the most influential factor in the look of a film. But how does it influence the way we matchmove?

Because of lens distortion, features in the captured image do not appear where an ideal, distortion-free camera would place them. Matchmoving applications, however, often assume exactly such an ideal camera as their underlying model when reengineering the camera and movement of a shot.

Where image features deviate from the assumed position in a perfect camera, their corresponding reengineered 3D positions will not match their real world locations. In the worst cases, this could cause your camera track to fail.

But that’s not where lens distortion’s influence in visual effects ends. For example, the mathematically perfect cameras in 3D animation packages do not exhibit any lens distortion either. Undistorted CG images, however, would not fit the distorted live action plate. Even where 3D packages can artificially distort the renders, the distortion will have to exactly match the real lens’ distortion for the composite to work.

In practice, the effects of lens distortion on the plate (the live action image) are removed during camera tracking, which makes the matchmoving artist responsible for dealing with lens distortion. The result is a mathematically perfect virtual camera and undistorted plates. That virtual camera is used to render the CG elements, which are then composited into the undistorted plates.

At this point, we have perfectly matched CG integrated in the undistorted live action plate. However, with other (non-VFX) parts of the footage still exhibiting lens distortion, your undistorted VFX shots may stand out, even if the CG is perfectly matched. That is why, at the end of this process, the original lens distortion is re-applied to the composited frames. As a consequence, matchmoving not only needs the ability to remove lens distortion and export undistorted plates, it also has to provide a means to re-apply the same lens distortion to the composited result.
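
To make the round trip concrete, here is a minimal sketch of the undistort / re-distort steps using OpenCV and NumPy. The camera matrix, distortion coefficients, and file names are placeholder values; in a real pipeline the lens model comes from a lens grid or the matchmove solve, and most matchmoving and compositing packages provide their own tools for these steps.

```python
import cv2
import numpy as np

width, height = 1920, 1080

# Hypothetical intrinsics: focal length and principal point in pixels.
camera_matrix = np.array([[1800.0,    0.0, width / 2],
                          [   0.0, 1800.0, height / 2],
                          [   0.0,    0.0,        1.0]])
# Hypothetical distortion coefficients (k1, k2, p1, p2, k3).
dist_coeffs = np.array([-0.15, 0.02, 0.0, 0.0, 0.0])

# 1) Remove lens distortion from the plate before tracking / compositing.
plate = cv2.imread("plate.0001.png")            # distorted live action frame (example name)
undistorted_plate = cv2.undistort(plate, camera_matrix, dist_coeffs)
cv2.imwrite("plate_undistorted.0001.png", undistorted_plate)

# 2) After the CG has been composited over the undistorted plate,
#    re-apply the same distortion to the result.
comp = cv2.imread("comp.0001.png")              # undistorted composite (example name)

# For every pixel of the distorted output, look up where it lives in the
# undistorted image, then resample with remap.
xs, ys = np.meshgrid(np.arange(width, dtype=np.float32),
                     np.arange(height, dtype=np.float32))
pixel_grid = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2)
undistorted_coords = cv2.undistortPoints(pixel_grid, camera_matrix,
                                         dist_coeffs, P=camera_matrix)
maps = undistorted_coords.reshape(height, width, 2)
map_x = np.ascontiguousarray(maps[..., 0])
map_y = np.ascontiguousarray(maps[..., 1])
redistorted_comp = cv2.remap(comp, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("comp_redistorted.0001.png", redistorted_comp)
```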

Prime lenses cannot change their focal length (more on focal length below), whereas zoom lenses can do so within their zoom range. Not being able to change the focal length comes with some advantages for prime lenses. The simpler design and fewer optical elements normally result in a higher quality image, for example one exhibiting less distortion than comparable zoom lenses.

A rule of thumb for matchmoving is that the more information about the real live camera you have, the easier it is to get a good solution. When it comes to collecting this camera information to assist camera tracking, prime lenses have the additional advantage that if you know which lens was used for a shot, you automatically also know its focal length. This is much harder with zoom lenses. Even if you know which lens was used for a shot, you still don't know the actual focal length the lens was set to, and it is a lot harder to keep track, ideally frame accurately, of any focal length changes. The good news is that knowing the type of zoom lens can still help in matchmoving. If nothing else, knowing the range of a zoom lens provides boundaries when calculating the actual focal length for a frame during matchmoving.
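
As a toy illustration of that last point, the snippet below clamps per-frame focal length estimates to the range of a hypothetical 24-70 mm zoom. The frame numbers and solved values are made up, and real solvers apply such constraints internally rather than as a post-process.

```python
# Constrain per-frame focal length estimates to the known range of a
# (hypothetical) 24-70 mm zoom lens reported on set.
zoom_range_mm = (24.0, 70.0)

# Made-up per-frame focal lengths as a solver might estimate them (in mm).
solved_focal_lengths = {1001: 23.1, 1002: 24.6, 1003: 25.9}

constrained = {frame: min(max(f, zoom_range_mm[0]), zoom_range_mm[1])
               for frame, f in solved_focal_lengths.items()}
print(constrained)  # {1001: 24.0, 1002: 24.6, 1003: 25.9}
```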

Anamorphic lenses’ breakthrough in filmmaking began with the adoption of widescreen formats. In order to utilise as much of the film surface area as possible, the scene was squeezed horizontally.

With digital sensors, the need for anamorphic lenses is reduced to aesthetic considerations. Common anamorphic lenses squeeze the horizontal by a factor of 2, which means that in the digitised image a single pixel effectively covers an area twice as wide as it is high, compared to the square pixels of spherical lenses.

When matchmoving anamorphic footage, make sure to account for the correct pixel aspect ratio. In the above example, this ratio would be the common 2:1, but there are also lenses with different ratios.
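
As a small worked example, here is how the squeeze translates into the numbers you would enter in a matchmove application. The 2x squeeze and the 2880 x 2160 recording resolution are illustrative values only, not tied to any particular camera or lens.

```python
squeeze_factor = 2.0                       # horizontal squeeze of the lens
recorded_width, recorded_height = 2880, 2160

pixel_aspect_ratio = squeeze_factor        # each pixel is twice as wide as it is high
desqueezed_width = int(recorded_width * squeeze_factor)

print(pixel_aspect_ratio)                  # 2.0 -> pixel aspect ratio to set in the matchmove app
print(desqueezed_width, recorded_height)   # 5760 2160 -> desqueezed display resolution
```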

Anamorphic lenses are available as both prime and zoom lenses.

Focal length is the most prominent property of a lens. It is often the first thing mentioned in any listing of lenses to distinguish them, and also what differentiates prime and zoom lenses. The focal length, usually denoted in millimetres (mm), defines, for a given camera, the extent of the scene that is captured through the lens. This is also referred to as the (angular) field of view (FOV).

It comes as no surprise that focal length plays its part in matchmoving as well. It may surprise you, however, to hear that focal length is only half the story when it comes to camera tracking. Matchmoving applications are really interested in the field of view rather than any focal length value in mm, and in order to calculate this field of view, they not only need to know the focal length, but also the size of the camera's sensor, or film back.

The relationship between sensor size, focal length (f) and angular field of view (FOV). Note how for the same focal length f, the field of view (FOV) differs for both sensor sizes.
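
The relationship in the figure can be written as FOV = 2 · arctan(sensor width / (2 · focal length)). The short sketch below evaluates it for a 35 mm lens on a full frame (36 mm wide) and a Super 35 (roughly 24.9 mm wide) sensor; the lens and sensor widths are just typical example values.

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angular field of view from focal length and the used sensor width."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

print(horizontal_fov_deg(35.0, 36.0))   # ~54.4 degrees on a full frame sensor
print(horizontal_fov_deg(35.0, 24.9))   # ~39.2 degrees on a Super 35 sensor
```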

The matter gets a bit more complicated through today's plethora of different sensor sizes, and the fact that, depending on the format, not all of the sensor is used to capture images. In the above illustration, it doesn't matter whether the sensor in the bottom camera is actually smaller than in the top camera, or if just a smaller part of the sensor has been used due to the chosen format. For example, your camera's resolution may be 4500 x 3000, which is a 3:2 aspect ratio. If you now plan to shoot HD video, which has an aspect ratio of 16:9, some parts of the sensor will not be recorded in the video. For a full frame sensor, this would reduce the effective sensor size for HD video from 36 x 24 mm to 36 x 20.25 mm, as is illustrated below.

Cropped imaging area (36 x 20.25mm) vs full sensor size (36 x 24mm)
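
The crop can be verified with a couple of lines; the full frame dimensions and the 16:9 target are taken from the example above.

```python
# Effective sensor size when a full frame sensor (36 x 24 mm) records 16:9 video.
full_width_mm, full_height_mm = 36.0, 24.0
target_aspect = 16.0 / 9.0

# The full sensor width is kept; the height is cropped to match 16:9.
effective_height_mm = full_width_mm / target_aspect

print(full_width_mm, effective_height_mm)   # 36.0 20.25 -> effective imaging area in mm
```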

Depending on the sensor size and format, cropping may occur at the top and bottom, as in the example above, or at the sides of the sensor.
