Perspective Transform (Advanced)


1. Preface

I previously published an article on this account, "Affine and Perspective Transforms in Image Processing"; this article takes the perspective transform a step further.

2. Perspective Transform

Perspective transforms are more general than affine transforms. They do not necessarily preserve parallelism between lines, but precisely because they are more general they are also more practical: almost every transform you encounter in everyday images is a perspective transform. Ever wondered why two rail tracks seem to meet in the distance?




Figure 1.1 Railroad tracks

This is because your eye applies something like a perspective transform to the image, and a perspective transform does not necessarily keep parallel lines parallel. If you stood directly above the tracks in Figure 1.1 and looked down, they would not appear to meet at all.

Given a 3×3 perspective transform matrix M, warpPerspective() applies the following transform:


dst(x, y) = src( (M11·x + M12·y + M13) / (M31·x + M32·y + M33),
                 (M21·x + M22·y + M23) / (M31·x + M32·y + M33) )


Note that the determinant of the top-left 2×2 part of a perspective transform matrix need not be +1. Moreover, because of the division in the transform shown above, multiplying every element of a perspective transform matrix by a constant makes no difference to the transform it represents. It is therefore common to set M33 = 1 when computing a perspective transform matrix. That leaves eight free numbers in M, so four pairs of corresponding points are enough to recover the perspective transform between two images. The OpenCV function findHomography() does this for you. Interestingly, if you specify the CV_RANSAC flag when calling it, it can even take more than four points and use the RANSAC algorithm to estimate the transform robustly from all of them. RANSAC makes the estimation process immune to noisy, "wrong" correspondences. The code below reads two images (related by a perspective transform), asks the user to click on eight pairs of points, estimates the perspective transform robustly with RANSAC, and displays the difference between the original and the newly perspective-transformed image to verify the estimated transform. Full test project: https://github.com/QiYongBETTER/warpPerspective_RANSAC
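The perspective division and the scale-invariance argument above can be checked directly with a few lines of plain C++ (a standalone sketch, not using OpenCV; the matrix values here are arbitrary examples):

```cpp
#include <cmath>

// Apply a 3x3 homography M (row-major) to the point (x, y). The division
// by the third homogeneous coordinate w is what distinguishes a perspective
// transform from an affine one.
void apply_homography(const double M[9], double x, double y,
                      double &u, double &v)
{
    double w = M[6] * x + M[7] * y + M[8];
    u = (M[0] * x + M[1] * y + M[2]) / w;
    v = (M[3] * x + M[4] * y + M[5]) / w;
}
```

Because the same scale factor appears in both the numerator and the denominator, multiplying all nine entries of M by any nonzero constant leaves the mapped point unchanged, which is why fixing M33 = 1 loses no generality.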

#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

void on_mouse(int event, int x, int y, int, void* _p)
{
    Point2f* p = (Point2f*)_p;
    if (event == CV_EVENT_LBUTTONUP)
    {
        p->x = x;
        p->y = y;
    }
}

class perspective_transformer
{
private:
    Mat im, im_transformed, im_perspective_transformed;
    Mat im_show, im_transformed_show;
    vector<Point2f> points, points_transformed;
    Mat M;
    Point2f get_click(string, Mat);

public:
    perspective_transformer();
    void estimate_perspective();
    void show_diff();
};

perspective_transformer::perspective_transformer()
{
    im = imread("./DataFiles/image.bmp");
    im_transformed = imread("./DataFiles/transformed.bmp");
}

// Show the image and block until the user clicks in the window
Point2f perspective_transformer::get_click(string window_name, Mat im)
{
    Point2f p(-1, -1);
    setMouseCallback(window_name, on_mouse, (void*)&p);
    while (p.x == -1 && p.y == -1)
    {
        imshow(window_name, im);
        waitKey(20);
    }
    return p;
}

void perspective_transformer::estimate_perspective()
{
    namedWindow("Original", 1);
    namedWindow("Transformed", 1);
    imshow("Original", im);
    imshow("Transformed", im_transformed);
    cout << "To estimate the Perspective transform between the original and transformed images you will have to click on 8 matching pairs of points" << endl;
    im_show = im.clone();
    im_transformed_show = im_transformed.clone();
    Point2f p;
    for (int i = 0; i < 8; i++)
    {
        cout << "POINT " << i << endl;
        cout << "Click on a distinguished point in the ORIGINAL image" << endl;
        p = get_click("Original", im_show);
        cout << p << endl;
        points.push_back(p);
        circle(im_show, p, 2, Scalar(0, 0, 255), -1);
        imshow("Original", im_show);
        cout << "Click on a distinguished point in the TRANSFORMED image" << endl;
        p = get_click("Transformed", im_transformed_show);
        cout << p << endl;
        points_transformed.push_back(p);
        circle(im_transformed_show, p, 2, Scalar(0, 0, 255), -1);
        imshow("Transformed", im_transformed_show);
    }
    // Estimate the perspective transform robustly with RANSAC
    // (reprojection threshold of 2 pixels)
    M = findHomography(points, points_transformed, CV_RANSAC, 2);
    cout << "Estimated Perspective transform = " << M << endl;
    // Apply the estimated perspective transform
    warpPerspective(im, im_perspective_transformed, M, im.size());
    namedWindow("Estimated Perspective transform", 1);
    imshow("Estimated Perspective transform", im_perspective_transformed);
    imwrite("./DataFiles/im_perspective_transformed.bmp", im_perspective_transformed);
}

void perspective_transformer::show_diff()
{
    imshow("Difference", im_transformed - im_perspective_transformed);
}

int main()
{
    perspective_transformer a;
    a.estimate_perspective();
    cout << "Press 'd' to show difference, 'q' to end" << endl;
    if (char(waitKey(-1)) == 'd')
    {
        a.show_diff();
        cout << "Press 'q' to end" << endl;
        if (char(waitKey(-1)) == 'q')
            return 0;
    }
    return 0;
}
Figure 1.2

Figure 1.3

From the analysis above: for a perspective transform, when multiple point pairs are selected (here ≥ 8 are required), the homography matrix M obtained with findHomography() fits the correspondences better than the one obtained with getPerspectiveTransform(), which, given more than four point pairs, simply uses the first four.
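The fourth argument passed to findHomography() above (the value 2) is RANSAC's reprojection threshold in pixels: a clicked pair counts as an inlier only if the transformed source point lands within 2 px of the clicked destination point. A minimal sketch of that inlier test in plain C++ (a standalone illustration, not OpenCV's actual implementation):

```cpp
#include <cmath>

// Reprojection error of a correspondence (x, y) -> (xt, yt) under the
// homography H (row-major 3x3).
double reproj_error(const double H[9], double x, double y,
                    double xt, double yt)
{
    double w = H[6] * x + H[7] * y + H[8];
    double u = (H[0] * x + H[1] * y + H[2]) / w;
    double v = (H[3] * x + H[4] * y + H[5]) / w;
    return std::sqrt((u - xt) * (u - xt) + (v - yt) * (v - yt));
}

// Count correspondences whose reprojection error is below the threshold --
// the inlier criterion RANSAC uses to score each candidate homography.
int count_inliers(const double H[9], const double src[][2],
                  const double dst[][2], int n, double thresh)
{
    int inliers = 0;
    for (int i = 0; i < n; i++)
        if (reproj_error(H, src[i][0], src[i][1],
                         dst[i][0], dst[i][1]) < thresh)
            inliers++;
    return inliers;
}
```

RANSAC repeatedly fits a candidate matrix from four random pairs, scores it with this inlier count, and keeps the best-scoring candidate, so one badly clicked pair cannot corrupt the final estimate.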

3. Practice

We want to apply a perspective transform to the coded marker in the image below, so that its ellipse is transformed into a circle and the surrounding ring band is rectified as well. To do this we need to find at least eight pairs of corresponding points between the image below and the rectified image.




Figure 3.1 Original image

After the ellipse is perspective-transformed into a circle, the relationship between the point coordinates on the ellipse and the corresponding points on the circle is as shown in Figure 3.2:


Figure 3.2 Perspective projection transform

From this relationship it is easy to find nine pairs of corresponding points: the four intersections of the major and minor axes with the coordinate axes, the four diagonal points, and the center point.
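Under the ellipse-to-circle mapping of Figure 3.2, the ellipse point at parameter angle θ corresponds to the circle point at the same angle θ, so the nine pairs can be generated directly. A sketch, assuming an axis-aligned ellipse with semi-axes a and b centered at the origin (in practice the coordinates would be offset by the fitted ellipse center in the image):

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Nine correspondences between an axis-aligned ellipse (semi-axes a, b)
// and the target circle of radius r: the shared center, the four axis
// endpoints (parameter angles 0, 90, 180, 270 degrees), and the four
// bounding-box diagonal points (45, 135, 225, 315 degrees).
std::vector<std::pair<Pt, Pt>> ellipse_circle_pairs(double a, double b,
                                                    double r)
{
    const double PI = std::acos(-1.0);
    std::vector<std::pair<Pt, Pt>> pairs;
    pairs.push_back({ {0.0, 0.0}, {0.0, 0.0} });      // center -> center
    for (int k = 0; k < 8; k++)
    {
        double t = k * PI / 4.0;                      // 0, 45, ..., 315 deg
        Pt e = { a * std::cos(t), b * std::sin(t) };  // point on the ellipse
        Pt c = { r * std::cos(t), r * std::sin(t) };  // matching circle point
        pairs.push_back({ e, c });
    }
    return pairs;
}
```

The point (a·cos45°, b·sin45°) lies on the line y/x = b/a, i.e. on the diagonal of the ellipse's bounding box, which is why the 45° parameter angles yield exactly the four diagonal points.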

The result after a perspective transform using these nine pairs of corresponding points is shown in Figure 3.3:


Figure 3.3 Perspective transform with multiple pairs of corresponding points

4. Summary

This article both supplements the previous one and presents a different method; choose between them according to your needs.

Finally, thanks to Zhu Yuke for his help with the ellipse correspondence problem, and to Prof. Zhang for his great care and guidance in my exploration of new methods and ideas.

