Tuning Process

This section describes how to tune Detect for different circumstances and environments, covering the subset of tuning parameters most useful for adjusting the performance of Detect. The tuning parameters are referred to as settings. See Settings for instructions on adjusting the settings through the Detect Viewer.

Profiles

By adjusting settings, Detect can be optimized for different use cases and applications. Detect provides predefined groups of settings values referred to as settings profiles. Detect ships with six preconfigured settings profiles:

people_settings

Default settings profile configured in Detect, optimized for detecting people. The classification settings in this profile are strongly biased against classifying vehicles: it has higher precision for identifying people close together, but it will not detect vehicles. This profile uses background subtraction. Objects stationary for 3 minutes will stop being detected.

its_settings

This profile is optimized for outdoor environments with people, bicycles and vehicles. The classification is balanced between classifying these types of objects. This profile uses background subtraction. Objects stationary for 3 minutes will stop being detected.

security_settings

This profile is optimized for applications where people crawling or running over a short distance must be detected. It is tuned to pick up people even if they are crouching or crawling, and to detect people right away if they are moving fast. This profile will have more false positives for smaller objects than the people_settings or its_settings profiles, because small objects cannot be filtered out as aggressively based on size.

dl_people_settings

This profile is optimized for detecting people using deep learning. This profile uses background subtraction. Objects stationary for 5.5 hours will stop being detected. Note this profile requires a GPU.

dl_its_settings

This profile is optimized for outdoor environments with people, bicycles and vehicles using deep learning. This profile uses background subtraction. Objects stationary for 5.5 hours will stop being detected. Note this profile requires a GPU.

dl_parking_settings

This profile is optimized for parking lots using deep learning. Unlike the other profiles, it does not use background subtraction to filter false positives. By not relying on background subtraction, this profile detects objects immediately on startup without requiring them to move first. It is ideal for scenarios where detection is required for objects that remain stationary for long periods of time. Note this settings profile requires a GPU.

Note that when switching to a profile that uses background subtraction, objects will not be visible for 20 seconds while the background filter initializes. Profiles can be configured using the Detect Viewer. See Settings for instructions on switching profiles.

For details on the differences between the profiles please refer to Perception Tuning in the appendix.

Additional Changes

Improving Computation Time

In some cases, we may need to improve computation time. If the system is not keeping up with the sensor data, it will sporadically drop frames to stay live, resulting in a diagnostic warning and reduced accuracy. To check how close the system is to keeping up, the lowest_frame_rate field is available in telemetry. If lowest_frame_rate is lower than the configured sensor frame rate, data will sometimes be dropped.
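
As a quick check, this comparison can be scripted against telemetry. The sketch below is a minimal Python example; the telemetry URL and JSON layout are assumptions, so substitute the endpoint your Detect server actually exposes (listed in Swagger).

```python
# Sketch: check whether Detect is keeping up with the sensor data rate by
# comparing lowest_frame_rate from telemetry against the configured sensor
# frame rate. The host, endpoint path, and JSON layout are assumptions.
import requests

DETECT_HOST = "http://detect-server:8000"   # hypothetical address
SENSOR_FRAME_RATE_HZ = 10.0                 # e.g. a 1024x10 or 512x10 mode

telemetry = requests.get(f"{DETECT_HOST}/api/v1/telemetry", timeout=5).json()
lowest = telemetry["lowest_frame_rate"]     # field name from this guide

if lowest < SENSOR_FRAME_RATE_HZ:
    print(f"Frames are being dropped: lowest_frame_rate={lowest:.1f} Hz "
          f"< sensor rate {SENSOR_FRAME_RATE_HZ:.1f} Hz")
else:
    print("System is keeping up with the sensor data.")
```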

Reducing the amount of data processed

The simplest way to improve computation is to reduce the amount of data that the system processes.

  • If a lidar is placed against a wall or pole, removing the azimuths that point towards the near-range blocking object will improve computation with no effect on detection performance.

  • If only close-range objects are important, reduce the resolution of the lidar. For ranges less than 20 meters, reducing to 512x10 will have minimal impact on detection (see the configuration sketch after this list).

  • If there are areas in the lidar field of view that are not of interest, an exclusion zone can be used.
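
The sensor-side changes above (azimuth window and resolution) can be applied directly on the lidar. Below is a minimal sketch assuming the ouster-sdk Python client is available and the sensor is reachable by hostname; module paths and field names can differ between SDK versions, and the hostname and azimuth window shown are only examples. In a Detect deployment the sensor may instead be configured through the viewer.

```python
# Sketch: reduce the data a sensor produces before it reaches Detect.
# Assumes direct sensor access with the ouster-sdk Python client.
from ouster import client

SENSOR_HOSTNAME = "os-122abc.local"  # hypothetical sensor hostname

config = client.SensorConfig()

# Drop azimuths facing a wall or pole directly behind the sensor by keeping
# only the 45-315 degree window (values are in millidegrees).
config.azimuth_window = (45000, 315000)

# If only close-range objects matter (< ~20 m), a lower resolution such as
# 512x10 cuts the number of points to process with minimal detection impact.
config.lidar_mode = client.LidarMode.MODE_512x10

client.set_config(SENSOR_HOSTNAME, config, persist=True)
```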

Improving computation through tuning

If the system is still running slowly, the next step is to determine which part of the system is too slow. The benchmark timer endpoint, available through Swagger, can be used to pinpoint the slowest section of the system. The benchmark timer lists the modules from slowest to fastest.
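
A small script can also rank the modules reported by the benchmark timer. The endpoint path and response shape below are assumptions for illustration; the actual endpoint is listed on the server's Swagger page.

```python
# Sketch: rank pipeline modules by processing time using the benchmark
# timer endpoint mentioned above. URL and response schema are assumptions.
import requests

DETECT_HOST = "http://detect-server:8000"   # hypothetical address

timers = requests.get(f"{DETECT_HOST}/api/v1/benchmark_timers", timeout=5).json()

# Assume a mapping of module name -> average processing time in milliseconds.
for module, avg_ms in sorted(timers.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{module:30s} {avg_ms:8.2f} ms")
```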

There are two circumstances that need to be handled differently:

  • The object_pipeline running slow due to a large number of people.

  • The lidar_pipeline running slow due to either slow ML inference or a slow background_filter.

For the Object Pipeline:

The user can adjust the data_association/distance_threshold setting:

Object Pipeline

Setting: /object_pipeline/data_association/distance_threshold
Description: The distance used to check for associations between the predicted position and the cluster positions. The smaller this value, the more likely we are to fail to associate to the correct object; however, the computation time will also be reduced.
Original: 1.5
Proposed: 1.0
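
For illustration, the sketch below applies the proposed value programmatically rather than through the Detect Viewer. The settings endpoint and payload shape are assumptions; the Settings section describes the supported workflow. The same pattern applies to the other settings tables in this section.

```python
# Sketch: apply a proposed tuning value through a hypothetical settings API.
import requests

DETECT_HOST = "http://detect-server:8000"   # hypothetical address

setting_path = "/object_pipeline/data_association/distance_threshold"
proposed_value = 1.0                        # from the table above

resp = requests.put(
    f"{DETECT_HOST}/api/v1/settings{setting_path}",
    json={"value": proposed_value},
    timeout=5,
)
resp.raise_for_status()
```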

For the Lidar Pipeline:

When using ML, the user should decrease the ML AOI, as shown in the Gemini Detect Viewer section of the guide.

When using classic mode, the user should adjust the background filter settings as shown below.

Lidar Pipeline

Setting: /lidar_pipeline/background_filter/range_resolution
Description: The depth of each bin in the range image. Each bin stores an occupancy percentage; if a bin is below that percentage, it is considered typically not occupied. Increasing this value decreases the number of bins to update, which reduces computation. It also lowers the resolution of the background, resulting in more points being treated as background.
Original: 0.25
Proposed: 0.35

Setting: /lidar_pipeline/background_filter/max_range
Description: The maximum range used for analysis. Decreasing the range will reduce memory usage and computation.
Original: 150
Proposed: 100
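
As a rough illustration of why these two changes help, the number of range-image bins each beam must maintain scales with max_range divided by range_resolution. The sketch below only demonstrates that scaling; it does not reflect the exact internal data layout.

```python
# Sketch: rough effect of the proposed background_filter changes on the
# number of range-image bins per beam (bins ~= max_range / range_resolution).
def bins_per_beam(max_range_m: float, range_resolution_m: float) -> int:
    return int(max_range_m / range_resolution_m)

original = bins_per_beam(150, 0.25)   # 600 bins per beam
proposed = bins_per_beam(100, 0.35)   # ~285 bins per beam

print(f"original: {original} bins, proposed: {proposed} bins "
      f"({100 * (1 - proposed / original):.0f}% fewer)")
```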

Track People and Vehicles

If the user does not want to track bicycles or large vehicles, it is recommended to switch to the heuristic classifier. The heuristic classifier will only classify people and vehicles, while the probabilistic classifiers will classify people, vehicles, bicycles, and large vehicles. Because a probabilistic classifier needs to classify more types of objects, its accuracy will be lower.

Note

For the heuristic classifier, bicycle classification is primarily based on speed, since people and bicycles are similar in size. Slow-moving bicycles will likely show up as people, and fast-moving bicycles will likely show up as vehicles.

object_pipeline/classifier

Setting: /object_pipeline/classifier/algorithm
Description: The classification algorithm to use. Supported algorithms are: heuristic, bayes, random_forest.
Original: bayes
Proposed: heuristic

Decreasing the Number of Unknowns

The user may want to decrease the number of unknowns within their scene. This can be configured with various settings; however, there are some drawbacks:

  • The reason for requiring a minimum distance travelled before classification is to get a better understanding of the object’s speed; a moving object is easier to classify.

  • The minimum distance travelled also gives objects entering the scene enough time to get better coverage within the scene, so that the object’s size is better captured.

  • More objects will be misclassified throughout the scene. Stationary objects such as chairs, boxes, and desks could all be classified, since they no longer need to meet a required travel distance.

object_pipeline/classifier/heuristic

Setting: /object_pipeline/classifier/heuristic/person_min_duration and /object_pipeline/classifier/heuristic/vehicle_min_duration, or /object_pipeline/classifier/bayes/classification_min_duration
Description: The minimum time for a prospect to be detected before it is promoted to an unknown.
Original: 1
Proposed: 0

Setting: /object_pipeline/classifier/heuristic/person_min_distance_travelled and /object_pipeline/classifier/heuristic/vehicle_min_distance_travelled, or /object_pipeline/classifier/bayes/classification_min_distance_travelled
Description: The minimum distance for a prospect to travel before it is promoted to an unknown.
Original: 2
Proposed: 0

Preventing Objects from Turning to Background

If an object is stationary in the scene for an extended period of time, you may not want it to change to background as quickly. This is configurable: the time objects persist in the scene before fading to background can be increased. Some drawbacks to this:

  • Stationary objects that have been moved (chairs, boxes, desks) will stay as objects for a longer period of time.

  • If the server is restarted, it will take much longer to initialize the background.

lidar_pipeline/background_filter

Setting: /lidar_pipeline/background_filter/fade_time
Description: Maximum time in seconds for an object to fade to background.
Original: 200
Proposed: 6000

Setting: /lidar_pipeline/background_filter/fade_to_free_ratio
Description: How fast an occupied space changes to background versus how fast an occupied space changes to be considered unoccupied. This ratio gets scaled by free_space_rate_doubling_frames so that continuously unoccupied areas free up quickly. If vegetation is showing up as objects, turn this up to make it more likely to be filtered out.
Original: 3
Proposed: 5

Improving Detections in Heavy Wind

In windy conditions resulting in sensor movement, Detect can create multiple false positive objects by clustering ground points together. The background filter can be made more aggressive to handle this, which will reduce false positive objects throughout the scene. However, with a more aggressive background filter the likelihood of some objects not being tracked is higher, meaning that there is a higher chance of false negative objects at farther ranges.

lidar_pipeline/background_filter

Setting: /lidar_pipeline/background_filter/neighbour_check_range
Description: The background filter checks neighbouring cells to see if the return is typically occupied in those cells. Increasing this increases the tolerance to movement.
Original: 1
Proposed: 2

Setting: /lidar_pipeline/background_filter/check_diagonal_neighbours
Description: Setting this to true enables checking additional (diagonal) neighbours for the occupancy threshold, allowing for more aggressive background filtering.
Original: false
Proposed: true

Preventing Small (in any dimension) Objects from Disappearing

There is a ClusterFilter module which filters clusters based on the number of points, the minimum side length, and the minimum vertical size. This module exists to prevent small, noisy foreground point clusters from appearing and being tracked, reducing the number of false positives.

Sometimes there is a need to track small clusters, such as people lying down.

lidar_pipeline/cluster_filter

Setting: /lidar_pipeline/cluster_filter/min_side_length
Description: The minimum side length for an object to be tracked. If a cluster has a side length smaller than this, it will be filtered.
Original: 0.15
Proposed: <0.15

Setting: /lidar_pipeline/cluster_filter/min_vertical_size
Description: The minimum height for a cluster. A cluster with a height smaller than this will be filtered.
Original: 0.3
Proposed: <0.3

Increasing the Likelihood of Detecting Small Objects at Far Ranges

There are several filters being used to reduce noisy detections in a general scene. In some cases it may be acceptable to increase the probability of false positive objects while simultaneously increasing the probability of detection for small objects at range.

Increasing the resolution of the sensor can often extend detection ranges, since the higher resolution increases the likelihood that multiple beams hit a single object. Increasing the horizontal resolution from 1024 to 2048 is recommended.

The settings in Preventing Small (in any dimension) Objects from Disappearing should be applied as well as the following settings:

lidar_pipeline

Setting: /lidar_pipeline/foreground_filter/min_num_neighbours
Description: The number of directly adjacent points a given point must have to prevent it from being discarded as noise.
Original: 3
Proposed: <3

Setting: /lidar_pipeline/cluster_filter/min_num_points
Description: The number of points a cluster must contain to prevent it from being discarded as noise.
Original: 7
Proposed: <7

Increasing NearIR Image Publish Frequency

It may be preferable to increase the image publish frequency for display purposes.

image

Setting: /image/image_publish_period
Description: The period of time between published images (s).
Original: 2
Proposed: 0.1

Setting: /image/image_channel
Description: The image to publish. Supports either reflectivity or near_ir.
Original: reflectivity
Proposed: near_ir

Adjusting the sensor blockage detector

The table below lists the settings for adjusting the blockage detection functionality.

lidar_pipeline/blockage_detection

Setting: max_range
Value range: 0 - 200
Default value: 3.0
Description: Limits the range at which something is considered an obstruction. Points beyond this value are not considered for blockage detection. Can be increased if users care about obstructions beyond 3 meters.

Setting: max_range_ratio_threshold
Value range: 1.0 - 3.0
Default value: 1.2
Description: Controls how sensitive the detector is to range changes. Represents the allowed ratio between the current max range measurements and the average of past measurements. Increasing this number will make the blockage detector less sensitive.

Setting: max_return_ratio_threshold
Value range: 1.0 - 1.5
Default value: 1.2
Description: Controls how sensitive the detector is to measurement changes (i.e., lack of returns that might happen due to very close objects). Increasing this number will make the blockage detector less sensitive.

Setting: memory_factor
Value range: 0.0 - 1.0
Default value: 0.995
Description: Controls how fast the system adjusts to changes in range measurements. If the value is close to 1, the blockage detector will slowly adjust to movements in the scene. If the value is close to 0, the blockage detector will quickly adjust to movements in the scene. If the blockage detector is giving false positives due to weather conditions (i.e., rain or fog), reducing this value may remove those false positives.
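
To build intuition for memory_factor and max_range_ratio_threshold, the sketch below models the described behaviour as a running average whose weight is memory_factor, and flags a blockage when the current measurement deviates from that average by more than the allowed ratio. This is an illustration of the behaviour described above, not Detect's actual implementation.

```python
# Illustration only: a memory_factor close to 1.0 makes the running average
# of range measurements adapt slowly, so a sudden close obstruction trips
# the ratio threshold while gradual scene changes are absorbed.
def update_average(average: float, measurement: float, memory_factor: float) -> float:
    return memory_factor * average + (1.0 - memory_factor) * measurement

def looks_blocked(measurement: float, average: float, ratio_threshold: float = 1.2) -> bool:
    # Flag when the current measurement deviates from the averaged past
    # measurements by more than the allowed ratio.
    return max(measurement, average) / max(min(measurement, average), 1e-6) > ratio_threshold

average = 50.0                      # long-run max range in metres
for measurement in [50, 50, 2, 2]:  # a close obstruction appears
    blocked = looks_blocked(measurement, average)
    average = update_average(average, measurement, memory_factor=0.995)
    print(f"measurement={measurement:5.1f}  average={average:6.2f}  blocked={blocked}")
```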

Reducing the effect of reflections, wind or rain

Using different returns from the lidar can minimize the effect of reflections or environmental factors. The return mode is configured with the /lidar_manager/return_selection setting. Changing this setting will only affect sensors once perception is reset or the lidar is removed and re-added.


The three return modes are:

  • Farthest: This is the default for outdoor profiles. It means that Gemini will process the furthest return from each detection. This is optimal for outdoor scenarios because it reduces the chance that snow, rain, or dust prevent getting a return from objects of interest. It also helps see objects through partial obstructions such as light foliage or a wire fence. It does increase the chance that reflections (such as off wet ground or windows) are detected as objects, but exclusion areas can be used to filter those areas.

  • Nearest: This is the default for indoor profiles. It reduces the chance that reflections will affect the system.

  • Strongest: This is a compromise between Farthest and Nearest. It should be used when reflection areas cannot be filtered out but there is a risk that close returns prevent detection of the objects of interest.


Gemini cannot operate on multiple returns simultaneously, so a sensor set in dual return mode will still function, but Gemini will only use the primary return it receives.
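
For illustration, the sketch below switches the return mode and then resets perception so the change takes effect without removing and re-adding the lidar. The endpoint paths, payload shape, and accepted value strings are assumptions; check Swagger and the Settings section for the supported workflow.

```python
# Sketch: change /lidar_manager/return_selection and reset perception so
# the new return mode is picked up. All endpoints here are assumptions.
import requests

DETECT_HOST = "http://detect-server:8000"   # hypothetical address

# Pick one of the three modes described above; the exact accepted strings
# may differ, so confirm them in the viewer or Swagger.
requests.put(
    f"{DETECT_HOST}/api/v1/settings/lidar_manager/return_selection",
    json={"value": "strongest"},
    timeout=5,
).raise_for_status()

# The new return mode only applies once perception is reset.
requests.post(f"{DETECT_HOST}/api/v1/perception/reset", timeout=5).raise_for_status()
```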