Deep Learning.
large-deformation models.
It is very common in medical imaging, as well as in satellite image analysis and optical flow estimation.
intensity-based methods because they are less commonly used.
Traditional Feature-based Approaches
In brief, we select points of interest in both images, associate each point of interest in the reference image with its equivalent in the sensed image, and transform the sensed image so that both images are aligned.
keypoint detection and feature description:
- SIFT (Scale-Invariant Feature Transform) is the original algorithm used for keypoint detection, but it is not free for commercial use. The SIFT feature descriptor is invariant to uniform scaling, orientation, and brightness changes, and partially invariant to affine distortion.
- SURF (Speeded-Up Robust Features) is a detector and descriptor largely inspired by SIFT. It has the advantage of being several times faster, but it is also patented.
- ORB (Oriented FAST and Rotated BRIEF) is a fast binary descriptor based on the combination of the FAST (Features from Accelerated Segment Test) keypoint detector and the BRIEF (Binary robust independent elementary features) descriptor. It is rotation invariant and robust to noise. It was developed in OpenCV Labs and it is an efficient and free alternative to SIFT.
- AKAZE (Accelerated-KAZE) is a sped-up version of KAZE. It presents a fast multiscale feature detection and description approach for non-linear scale spaces. It is both scale and rotation invariant. It is also free!
These algorithms are all available and easily usable in OpenCV. In the example below, we used the OpenCV implementation of AKAZE. The code remains roughly the same for the other algorithms: only the name of the algorithm needs to be modified.
```python
import numpy as np
import cv2 as cv

img = cv.imread('image.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

akaze = cv.AKAZE_create()
kp, descriptor = akaze.detectAndCompute(gray, None)

img = cv.drawKeypoints(gray, kp, img)
cv.imwrite('keypoints.jpg', img)
```
Feature Matching
best matches with the minimal distance.
We then apply a ratio filter to keep only the correct matches: for a match to be reliable, the best candidate should be significantly closer than the second-best candidate, otherwise the match is ambiguous and is discarded.
```python
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

img1 = cv.imread('image1.jpg', cv.IMREAD_GRAYSCALE)  # reference image
img2 = cv.imread('image2.jpg', cv.IMREAD_GRAYSCALE)  # sensed image

# Initiate AKAZE detector
akaze = cv.AKAZE_create()

# Find the keypoints and descriptors with AKAZE
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good_matches = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good_matches.append([m])

# Draw matches
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good_matches, None,
                         flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imwrite('matches.jpg', img3)
```
for other feature matching methods implemented in OpenCV.
Image Warping
and apply it to the sensed image.
Least-Median robust method.
```python
# Select good matched keypoints
ref_matched_kpts = np.float32(
    [kp1[m[0].queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
sensed_matched_kpts = np.float32(
    [kp2[m[0].trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)

# Compute the homography that maps the sensed keypoints onto the reference keypoints
H, status = cv.findHomography(sensed_matched_kpts, ref_matched_kpts, cv.RANSAC, 5.0)

# Warp the sensed image into the reference frame
warped_image = cv.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))
cv.imwrite('warped.jpg', warped_image)
```
series of useful tutorials.
Deep Learning Approaches
such as image classification, object detection, and segmentation. There is no reason why this couldn’t be the case for Image Registration.
Feature Extraction
learn task-specific features. Since 2014, researchers have applied these networks to the feature extraction step rather than SIFT or similar algorithms.
- robust to transformations. These features, or descriptors, outperformed SIFT descriptors for matching tasks.
here. While we were able to test this registration method on our own images within 15 minutes, the algorithm is approximately 70 times slower than the SIFT-like methods implemented earlier in this article.
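Whatever network produces them, learned descriptors plug into the same matching stage as SIFT's. As an illustrative sketch (the descriptors below are random placeholders standing in for actual CNN outputs, and mutual nearest-neighbor filtering is just one common choice), matching reduces to a similarity search:

```python
import numpy as np

def mutual_nn_matches(desc1, desc2):
    """Match two sets of L2-normalized descriptors by cosine similarity,
    keeping only pairs that are each other's nearest neighbor."""
    sim = desc1 @ desc2.T        # cosine similarity matrix
    nn12 = sim.argmax(axis=1)    # best match in image 2 for each descriptor of image 1
    nn21 = sim.argmax(axis=0)    # best match in image 1 for each descriptor of image 2
    idx1 = np.arange(len(desc1))
    mutual = nn21[nn12] == idx1  # keep pairs that agree in both directions
    return np.stack([idx1[mutual], nn12[mutual]], axis=1)

# Placeholder "learned" descriptors: in practice these come from a CNN.
rng = np.random.default_rng(0)
desc1 = rng.standard_normal((50, 128))
desc1 /= np.linalg.norm(desc1, axis=1, keepdims=True)
desc2 = desc1[rng.permutation(50)]  # same descriptors, shuffled
matches = mutual_nn_matches(desc1, desc2)
```

Since the second set is just a permutation of the first, every descriptor recovers its exact copy here; on real images, the mutual check is what discards ambiguous matches.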
Homography Learning
learn the geometric transformation to align two images.
end-to-end fashion: no need for the previous two-stage process!
ground-truth homography.
expensive to do so on real data.
between the reference image and the sensed transformed image.
and performance compared to the supervised method.
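The unsupervised training signal itself is just a photometric distance between the two images. A minimal NumPy sketch, using the mean absolute intensity difference as one common choice of photometric loss:

```python
import numpy as np

def photometric_loss(reference, warped_sensed):
    """Mean absolute intensity difference between the reference image and
    the sensed image after warping: the quantity minimized during training."""
    ref = reference.astype(np.float32)
    war = warped_sensed.astype(np.float32)
    return float(np.mean(np.abs(ref - war)))

# Perfectly aligned images give zero loss; misalignment increases it.
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 255
shifted = np.roll(img, 4, axis=1)  # simulate a 4-pixel misalignment
```

A differentiable warp (a spatial transformer) lets the gradient of this loss flow back into the network predicting the transformation, so no ground-truth homography is ever needed.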
Other Approaches
trained agent to perform the registration.
- only used for rigid transformations.
- artificial agent to optimize the parameters of a deformation model. This method was evaluated on inter-subject registration of prostate MRI images and showed promising results in 2-D and 3-D.
More complex transformation models are necessary, such as diffeomorphisms, which can be represented by displacement vector fields.
that have many parameters.
- A first example is Krebs et al.’s Reinforcement Learning method mentioned just above.
- to warp the sensed image according to the reference image.
- Quicksilver registration tackles a similar problem. Quicksilver uses a deep encoder-decoder network to predict patch-wise deformations directly from image appearance.
on deep learning in Medical Image Registration could be a good place to look for more information.