
Quick Referential Workflow


1. Prepare Data Receiving

1.1. Video Input

Select the feed from your camera here. You can refresh the list of video sources using the button in the top-right.

After changing the source, you have to press "Open Stream" to receive video again.

Once you see the video in the middle area, you can move on to the next step.

Using OBS to interface with a video capture card

1.2. Tracking Input

1.2.1. Select the tracking protocol

FreeD or TCD. If you don't use EZtrack, the only possible option is FreeD.

1.2.2. Provide lens data

Depending on the tracking protocol used, there are a few options to provide the calibrated lens data:

  • FreeD:

    • Lens File: Load a JSON file exported by EZprofile using the Open Lens File export

  • TCD:

    • Embedded Lens Data: the TCD protocol already carries calibrated lens data

    • ST-Map: for advanced users whose lens uses a distortion model other than Brown-Conrady (a sketch of the Brown-Conrady model follows below the links).

Detailed Page: FreeD input + Lens File
Detailed Page: TCD input
Detailed Page: TCD input + ST-Map
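
For reference, the Brown-Conrady model mentioned above maps undistorted normalized image coordinates to distorted ones. Here is a minimal sketch of that mapping; the coefficient names (k1, k2, k3 for the radial terms, p1, p2 for the tangential terms) follow the usual convention and are not values taken from a specific lens file:

```python
def brown_conrady(x, y, k1, k2, k3, p1, p2):
    """Map undistorted normalized coordinates (x, y) to distorted ones
    using the standard Brown-Conrady radial + tangential model."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

If your lens does not fit this model, the ST-Map option above is the alternative.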

1.2.3. Choose the receiving IP and Port

The "Receiving IP" drop down lists all available network interfaces on the machine. To listen on all interfaces use "0.0.0.0".

The port is specified below.

Filter by source ID

Only accept messages from a specific source, identified by an index carried in the tracking messages. This is only useful in some multi-camera setups.
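
As an illustration of what happens when the application listens for tracking, here is a minimal sketch of receiving FreeD packets over UDP and filtering them by source ID. The listen address, port, wanted source ID, and the assumption that the source ID is the second byte of a standard FreeD D1 message are illustrative, not settings taken from this software:

```python
import socket

# Illustrative values: listen on all interfaces, example port, keep source 1.
LISTEN_IP, LISTEN_PORT, WANTED_SOURCE_ID = "0.0.0.0", 6301, 1

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((LISTEN_IP, LISTEN_PORT))

while True:
    data, sender = sock.recvfrom(1024)
    # In a standard FreeD D1 message, byte 0 is the message type (0xD1)
    # and byte 1 is the camera (source) ID.
    if len(data) >= 2 and data[0] == 0xD1 and data[1] == WANTED_SOURCE_ID:
        print(f"tracking packet from source {data[1]}: {len(data)} bytes")
```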

1.2.4. Start Receiving

Press the "Start Receiving" button.

To verify the tracking data is complete, check that all fields have a value in the panel on the right:

If you move the camera, the values should move as well. Perfectly static tracking values are suspicious.
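
If you want to automate that sanity check, here is a minimal sketch of the "values should move" test, assuming you have collected a short history of received values (the function name and tolerance are illustrative):

```python
def looks_frozen(recent_values, tolerance=1e-9):
    """Return True if, e.g., the last few hundred received pan angles
    do not vary at all, which suggests the tracking data is frozen."""
    return max(recent_values) - min(recent_values) <= tolerance
```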

2. Take Pictures

When everything is set to start the calibration, the "Take Picture" button becomes available:

This means you are ready to take calibration pictures of the cube.

Note

The following process gives a robust and fast way to proceed, but once the tracking and video are set, you can go wherever you please with the camera to take pictures of the cube, as long as the camera is completely static at the moment the picture is taken.

2.1. Pictures for the current position

For this step the camera should be static, for example on a tripod.

Go to each of these 6 positions using Pan and Tilt:

For each position, lock the camera and then take a picture.

Tip

To take a picture, it is also possible to use a remote instead of the "Take Picture" button.

2.2. Move the camera to a new position

Danger

Only do this step if your tracking system allows you to move the camera. If you are using an encoded Pan-Tilt head or a PTZ without a slider, stop here after the first 6 pictures.

Move the camera to another position in the tracked area and repeat step 2.1.

The idea is to roughly cover the tracking area.

Example

Imagine you have a cyclorama against a wall. The camera will mostly move left and right while pointing toward the wall.

In this case, three positions would be enough. They are placed near the main angles the camera will have to cover during production:

3. Calibrate

Once you have taken enough pictures, press the "Calibrate" button:

A progress bar will appear at the top and the viewport will switch to a 3D scene where camera positions are added one by one:

Note

You can still go back after calibration and add new pictures to the dataset by clicking on "Display Video".

4. Result

4.1. Offset Transforms

These values are the result of the calibration:

  • The Referential Offset Transform gives the transform to apply to go from the studio referential to the tracking referential.
  • The Tracker Offset Transform gives the transform to apply to go from the tracker to the camera sensor.

If you are using EZtrack, you can quickly copy and paste the transforms into the web-app.
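
As an illustration of how these two offsets chain together, here is a minimal sketch using 4x4 homogeneous matrices. It assumes a pose-matrix convention in which each matrix expresses a child frame in its parent frame; the exact convention (and whether an inverse is needed) depends on the engine you feed the transforms into:

```python
import numpy as np

# Placeholder 4x4 homogeneous transforms (identity for the example):
referential_offset = np.eye(4)  # studio referential -> tracking referential
tracker_pose = np.eye(4)        # live tracker pose in the tracking referential
tracker_offset = np.eye(4)      # tracker -> camera sensor

# Under the assumed convention, the pose of the camera sensor expressed
# in the studio referential is the product of the three transforms:
sensor_in_studio = referential_offset @ tracker_pose @ tracker_offset
```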

4.2. Verify the result

The viewport displays each picture with a reprojection of the cube:

You can assess the quality of the alignment by stepping through the pictures using the "Previous" and "Next" buttons.
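
The same check can be reproduced numerically: project the known cube corners through the calibrated camera and measure how far they land from the corners detected in the picture. Here is a minimal sketch with OpenCV, where all input data are placeholders to replace with your own calibration and detections:

```python
import numpy as np
import cv2

# Placeholders: the 8 corners of a 1 m cube, a camera 5 m away, generic
# intrinsics and no distortion. Replace with your calibration results.
cube_points_3d = np.array([[x, y, z] for x in (0., 1.)
                                     for y in (0., 1.)
                                     for z in (0., 1.)])
rvec = np.zeros(3)                     # rotation of the calibrated pose
tvec = np.array([0., 0., 5.])          # translation of the calibrated pose
camera_matrix = np.array([[1000., 0., 960.],
                          [0., 1000., 540.],
                          [0., 0., 1.]])
dist_coeffs = np.zeros(5)

projected, _ = cv2.projectPoints(cube_points_3d, rvec, tvec,
                                 camera_matrix, dist_coeffs)
detected_points_2d = projected.reshape(-1, 2)  # replace with real detections

# Mean pixel distance between reprojected and detected cube corners.
error = np.linalg.norm(projected.reshape(-1, 2) - detected_points_2d,
                       axis=1).mean()
print(f"mean reprojection error: {error:.2f} px")
```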

Detailed Page on Troubleshooting