
Correspondence-Free Fast and Robust Spherical Point Pattern Registration

Anik Sarker¹, Alan T. Asbeck¹
¹Virginia Tech
The spherical point pattern registration algorithm estimates the rotation that aligns a noisy, rotated source spherical point cloud with the template point cloud.

Overview

We address the problem of rotation estimation between two spherical patterns on S², a task fundamental to many applications in computer vision and robotics. Traditional correspondence-based or correlation-based methods often fail on the sphere due to unreliable surface correspondences and high computational cost (greater than O(n³)). We propose a linear-time, O(n), spherical alignment framework that reformulates rotation estimation as a correspondence-free point-set alignment problem on the unit sphere. Three algorithms (SPMC, FRS, and a hybrid SPMC+FRS) run over 10× faster and achieve over 10× lower error than current state-of-the-art methods on our new Robust Vector Alignment Dataset.

Furthermore, we adapt our methods to two real-world tasks: (i) Point Cloud Registration (PCR) and (ii) rotation estimation for spherical images. In the PCR task, our approach successfully registers point clouds exhibiting overlap ratios as low as 65%. In spherical image alignment, we show that our method robustly estimates rotations even under challenging conditions involving substantial clutter (over 19%) and large rotational offsets. Our results highlight the effectiveness and robustness of our algorithms in realistic, complex scenarios.


Method 1: Spherical Pattern Matching by Correlation (SPMC)


To align the Template and Source spherical patterns, three rotations are applied sequentially:

  1. Template Alignment (Rz): Rotate the Template so that its mean direction points toward the North Pole. This normalizes the Template orientation.
  2. Source Alignment (Rm): Similarly, rotate the Source so that its mean direction also aligns with the North Pole. After this step, both Template and Source share a common reference frame centered at the pole.
  3. Correlation Rotation (Rs*): Perform circular cross-correlation between the Template and Source histograms (in azimuthal direction) to find the optimal rotation Rs* that maximizes alignment around the pole.

The final optimal rotation that aligns the Source with the Template is then obtained by combining these three rotations:

Ropt = Rz⁻¹ · Rs* · Rm

This process first standardizes both patterns to the same pole-aligned frame, then finds their best azimuthal alignment through correlation, achieving robust spherical pattern matching.
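The three-step procedure above can be sketched in NumPy. This is a minimal illustration rather than the authors' implementation: the Rodrigues-formula pole alignment, the 360-bin azimuthal histogram, and the FFT-based circular cross-correlation are all assumptions made for the sketch.

```python
import numpy as np

def rot_to_pole(v):
    """Rotation taking unit vector v to the north pole [0, 0, 1] (Rodrigues' formula)."""
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(v, z)
    s, c = np.linalg.norm(axis), np.dot(v, z)
    if s < 1e-12:                        # v already at a pole
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def azimuth_hist(points, bins):
    """Histogram of azimuth angles of pole-aligned points."""
    phi = np.arctan2(points[:, 1], points[:, 0])
    return np.histogram(phi, bins=bins, range=(-np.pi, np.pi))[0].astype(float)

def spmc(template, source, bins=360):
    """Coarse rotation R with R @ source_i ~ template_i (both N x 3 unit vectors)."""
    Rz = rot_to_pole(template.mean(0) / np.linalg.norm(template.mean(0)))
    Rm = rot_to_pole(source.mean(0) / np.linalg.norm(source.mean(0)))
    ht = azimuth_hist(template @ Rz.T, bins)
    hs = azimuth_hist(source @ Rm.T, bins)
    # circular cross-correlation via FFT; the peak gives the azimuthal offset
    corr = np.fft.ifft(np.fft.fft(ht) * np.conj(np.fft.fft(hs))).real
    a = 2 * np.pi * np.argmax(corr) / bins
    Rs = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])           # rotation about the pole
    return Rz.T @ Rs @ Rm                # Ropt = Rz^-1 · Rs* · Rm
```

Note that the pole alignment assumes the pattern has a well-defined (nonzero) mean direction; the bin count trades angular resolution against histogram noise.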

Method 2: Fast Rotation Search (FRS)


At a high level, the Fast Rotation Search (FRS) algorithm is an iterative optimization method that estimates the rotation matrix aligning the Source to the Template. The detailed pseudocode is provided in the paper.

The algorithm begins by computing fixed histograms from the axis direction angles of the Template. At each iteration, moving histograms are computed from the corresponding axis direction angles of the Source. A 1D circular cross-correlation is then performed between the fixed and moving histograms to determine the optimal angular shifts along each axis. From these shifts, an intermediate rotation matrix is computed, and the Source is rotated accordingly.

This process repeats iteratively—updating the Source, recomputing its histograms, and refining the rotation—until convergence, i.e., when the optimal shifts become negligible. Despite its iterative nature, FRS achieves near-linear time complexity.
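The iteration described above can be sketched as follows. This is a plausible reading of the algorithm, not the paper's pseudocode: the per-axis angular histograms, the FFT-based correlation, and the order in which per-axis updates are applied are assumptions for the sketch.

```python
import numpy as np

def axis_hist(points, axis, bins):
    """Histogram of the angular coordinate of `points` around a coordinate axis."""
    i, j = [(1, 2), (2, 0), (0, 1)][axis]          # plane perpendicular to the axis
    ang = np.arctan2(points[:, j], points[:, i])
    return np.histogram(ang, bins=bins, range=(-np.pi, np.pi))[0].astype(float)

def circ_shift(h_fixed, h_moving, bins):
    """Angular shift maximizing the circular cross-correlation of two histograms."""
    corr = np.fft.ifft(np.fft.fft(h_fixed) * np.conj(np.fft.fft(h_moving))).real
    k = int(np.argmax(corr))
    return 2 * np.pi * (k - bins if k > bins // 2 else k) / bins

def axis_rot(axis, a):
    """Rotation by angle `a` about coordinate axis 0 (x), 1 (y), or 2 (z)."""
    i, j = [(1, 2), (2, 0), (0, 1)][axis]
    R = np.eye(3)
    R[i, i], R[i, j], R[j, i], R[j, j] = np.cos(a), -np.sin(a), np.sin(a), np.cos(a)
    return R

def frs(template, source, bins=360, max_iter=50, tol=1e-3):
    """Iterative rotation R with R @ source_i ~ template_i (both N x 3 unit vectors)."""
    fixed = [axis_hist(template, a, bins) for a in range(3)]   # fixed histograms
    R = np.eye(3)
    for _ in range(max_iter):
        moved = source @ R.T                                   # current Source estimate
        shifts = [circ_shift(fixed[a], axis_hist(moved, a, bins), bins)
                  for a in range(3)]
        if max(abs(s) for s in shifts) < tol:                  # shifts negligible: done
            break
        for a in range(3):                                     # apply per-axis updates
            R = axis_rot(a, shifts[a]) @ R
    return R
```

Rotating about an axis shifts every point's angular coordinate around that axis by exactly the rotation angle, which is why a 1D circular correlation per axis can recover the update.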

Method 3: SPMC + FRS

This hybrid approach combines the strengths of Algorithm 1 (SPMC) and Algorithm 2 (FRS). The procedure is straightforward: the SPMC algorithm first provides a coarse initialization by estimating an initial alignment between the Template and Source. This initial rotation is then used to initialize the FRS algorithm, which refines the alignment through iterative optimization.

This hybrid method leverages the global efficiency of SPMC and the local precision of FRS, yielding faster convergence and improved robustness against noise and partial overlap.


Robust Vector Alignment Dataset

To evaluate our algorithms, we introduce a novel synthetic dataset, termed the Robust Vector Alignment Dataset. The dataset consists of five base Template patterns (A1–A5), representing diverse geometric and physical structures, including complex shapes, robot trajectories, locally symmetric maps, and randomly scattered point configurations. For each Template pattern, seven Source variants are generated by introducing varying levels of outliers (up to 90%) and additive Gaussian noise. Each Template–Source pair is further tested under 100 randomly sampled rotations across the rotation group SO(3), providing a comprehensive benchmark for robustness and rotational accuracy.
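A Source variant in the spirit of this dataset can be generated as below. The exact outlier model, noise levels, and outlier-fraction convention used in the actual dataset may differ; the quaternion-based uniform SO(3) sampling and the uniform-on-sphere outliers are assumptions for the sketch.

```python
import numpy as np

def random_rotation(rng):
    """Uniformly random rotation in SO(3) from a normalized quaternion."""
    q = rng.normal(size=4)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def make_source(template, outlier_frac=0.5, noise_std=0.02, rng=None):
    """Source variant: random rotation + Gaussian noise + uniform outliers.
    `outlier_frac` is the fraction of the final Source that is outliers."""
    rng = np.random.default_rng() if rng is None else rng
    R = random_rotation(rng)
    pts = template @ R.T + rng.normal(scale=noise_std, size=template.shape)
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)           # back onto the sphere
    n_out = int(round(outlier_frac / (1.0 - outlier_frac) * len(pts)))
    out = rng.normal(size=(n_out, 3))
    out /= np.linalg.norm(out, axis=1, keepdims=True)           # uniform directions
    return rng.permutation(np.vstack([pts, out])), R
```

With `outlier_frac=0.9` this appends nine outliers per inlier, matching the dataset's most contaminated setting of up to 90% outliers.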


Qualitative results of the SPMC+FRS algorithm on the Robust Vector Alignment Dataset. For each of the five base datasets, the left panel shows the 3D spherical point cloud and its 2D projection of the Template pattern, along with one corresponding Source configuration. The right panel displays the result after registration. The visual alignment confirms that the Source pattern has been successfully registered to the Template. Here, Ai denotes the Template pattern number, Bi the Source configuration, and Ri the rotation instance used for evaluation.


Application 1: Point Cloud Registration

Overview

For point cloud registration, our approach primarily focuses on rotation estimation between two point clouds. Once the optimal rotation is obtained, the translation can be computed using adaptive voting or centroid-based shifting, depending on the scenario: complete-to-complete, partial-to-complete, or partial-to-partial alignment.
In the first stage, both the Template and Source point clouds are converted into spherical representations. Our proposed algorithms (SPMC, FRS, and the hybrid SPMC+FRS) are then applied to estimate the rotation between the two spherical signals.
We evaluated our algorithms on the ModelNet40, KITTI, and 3DMatch datasets.
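For the complete-to-complete case, centroid-based shifting reduces to one line once the rotation is known: t = centroid(template) - R · centroid(source). The adaptive-voting variant for partial overlap is not shown; this is only the simple centroid sketch.

```python
import numpy as np

def centroid_translation(template, source, R):
    """Translation completing the registration once rotation R is known
    (complete-to-complete case): t = centroid(template) - R @ centroid(source)."""
    return template.mean(axis=0) - R @ source.mean(axis=0)
```

The registered Source is then `source @ R.T + t`.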

Converting point cloud to spherical signal


Two representations are used: (1) CASE, which projects all points from the cloud’s centroid onto the unit sphere, and (2) EGI (Extended Gaussian Image), which maps the surface-normal distribution of the point cloud onto the sphere. These spherical embeddings serve as the input patterns for the proposed alignment algorithms.
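Both embeddings can be sketched in a few lines. The EGI sketch assumes surface normals are already estimated (normal estimation itself is not shown), and the exact preprocessing in the paper may differ.

```python
import numpy as np

def case_embedding(points):
    """CASE: project every point from the cloud's centroid onto the unit sphere."""
    v = points - points.mean(axis=0)
    n = np.linalg.norm(v, axis=1)
    keep = n > 1e-12                 # drop points coinciding with the centroid
    return v[keep] / n[keep, None]

def egi_embedding(normals):
    """EGI: place each (pre-estimated) unit surface normal on the sphere."""
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)
```

CASE is invariant to translation and uniform scale and equivariant to rotation: for a rotation R, scale s, and shift t, `case_embedding(s * points @ R.T + t)` equals `case_embedding(points) @ R.T`, which is what makes rotation-only alignment on the sphere sufficient.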

ModelNet40

Qualitative results on the ModelNet40 dataset.
ModelNet40 quantitative results: rotational error in degrees (top row) and translation error in meters (bottom row) for both complete-to-complete (Comp2Comp) and partial-to-complete (Part2Comp) registration, across 4 cases with different amounts of noise and correspondence.

KITTI & 3DMatch

Qualitative (left) and quantitative (right) results on the 3DMatch and KITTI datasets. Point clouds were voxel-downsampled with a voxel size of 0.03. For KITTI, an additional preprocessing step removed ground-reflection points by discarding points with very low elevation.

Bunny Dataset: Scale Invariance and Robustness

Robustness and Invariance of CASE to Rotation, Translation, and Scale. The first column depicts the input point clouds, where the source set is first scaled to fit into a unit cube. Isotropic Gaussian noise is added to the source set with a mean of 0 and a standard deviation of 0.01 (third row) or 0.1 (fourth row). The second column visualizes the CASE directly derived from the input. The third column shows the CASE after applying SPMC+FRS, and the final column presents the registration results in ℝ³ space. The top row illustrates a simple one-to-one case. The second row demonstrates a source set with 50% outliers and a scale four times that of the template. The third row depicts a source set with the same scale as the template but includes 0.01 Gaussian noise and 90% added outliers. The bottom row shows a source set with the same scale as the template, 0.1 Gaussian noise, and 90% added outliers.

Application 2: Spherical Image Registration (Rotation estimation)

Overview

For spherical image registration, the first step is to convert the Template and Source spherical images into spherical point clouds. Our proposed algorithms are then applied to align the two point clouds. The resulting estimated rotation corresponds to the desired rotation between the two spherical images.

Converting spherical image to spherical point cloud


For each pixel in the spherical image, we compute its normalized intensity value in the range [0, 1]. To obtain a binary representation, a threshold is applied: pixels with intensity below the threshold are assigned a value of 0, while those at or above it are set to 1. Each pixel with a value of 1 is then mapped to a corresponding point on the sphere. A lower threshold increases the number of points, potentially capturing more overlapping features but also introducing additional noise and clutter; a higher threshold yields fewer points, which can make registration more challenging if insufficient structural features are preserved. In our experiments, we set the intensity threshold to 0.21, balancing feature richness against noise suppression.
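The conversion can be sketched as below, assuming an equirectangular H×W intensity image where rows span the polar angle and columns span the azimuth; the paper's exact pixel-to-direction convention may differ.

```python
import numpy as np

def spherical_image_to_points(img, threshold=0.21):
    """Map pixels at or above `threshold` in an H x W equirectangular intensity
    image (values in [0, 1]) to unit vectors on the sphere (pixel centers)."""
    H, W = img.shape
    rows, cols = np.nonzero(img >= threshold)       # binarize at the threshold
    theta = (rows + 0.5) / H * np.pi                # polar angle in (0, pi)
    phi = (cols + 0.5) / W * 2.0 * np.pi            # azimuth in (0, 2*pi)
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)
```

Lowering `threshold` keeps more pixels (more points, more clutter); raising it keeps fewer, exactly the trade-off described above.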

Map Data: Robustness to Clutter, Noise and Large Rotations

Results of rotation estimation from spherical images. The top row shows the Template spherical image and its 2D projection. The next six rows display Source spherical images with varying levels of pixelwise difference (D%) or thresholded clutter (C%). For this example, the input rotations were given by Euler angles (α, β, γ) = (55.27°, 12.11°, 11.02°). The estimated rotation error from our algorithm is presented in the rightmost columns.

BibTeX

@article{sarker2025-cfsppr-arxiv,
  title         = {Correspondence-Free Fast and Robust Spherical Point Pattern Registration},
  author        = {Sarker, Anik and Asbeck, Alan T.},
  journal       = {arXiv preprint arXiv:2508.02339},
  year          = {2025},
  doi           = {10.48550/arXiv.2508.02339},
  url           = {https://arxiv.org/abs/2508.02339},
  eprint        = {2508.02339},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}