
A Detailed Explanation of Implementing Bicubic Image Interpolation in C++ with OpenCV

2024-04-02 19:55


Preface

Recently I have been studying some traditional image processing algorithms, such as the classic image interpolation methods: nearest-neighbor interpolation, bilinear interpolation, and bicubic interpolation. Nearest-neighbor and bilinear interpolation are explained in detail in many places online, along with C++ implementations. For bicubic interpolation, however, although there are plenty of write-ups on the principle and some accompanying code, most of them are not very detailed. So, based on my own understanding of the principle, I wrote a C++ OpenCV implementation of the bicubic interpolation algorithm and am recording it here.

I. Principle of the Bicubic Image Interpolation Algorithm

First, the principle. In bicubic interpolation, every pixel of the destination image is obtained by taking the 4x4 = 16 source pixels around the corresponding point in the source image, weighting them, and summing the results. The weighting uses a cubic function, which is presumably where the name of the algorithm comes from. The cubic function used is the following:
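With a as the kernel's free parameter (and the function taken to be zero elsewhere), this is the same piecewise cubic that appears in the comments of the code below:

$$
W(x) =
\begin{cases}
(a+2)\,|x|^{3} - (a+3)\,|x|^{2} + 1, & |x| \le 1 \\
a\,|x|^{3} - 5a\,|x|^{2} + 8a\,|x| - 4a, & 1 < |x| < 2 \\
0, & \text{otherwise}
\end{cases}
$$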

The key question is what the input and output of this cubic function represent. Briefly: the inputs are the coordinates of the 4x4 neighborhood in the source image, measured relative to the mapped point, so their magnitudes lie between 0 and 2; the outputs are the weights for those row or column offsets. Four horizontal offsets and four vertical offsets give, after pairwise multiplication, a 4x4 weight matrix. Multiplying this weight matrix element-wise with the corresponding 4x4 region of the source image and summing the products gives the pixel value of the destination point.

The concrete example below should make this clearer.

First, what are u and v? Take a 100x100 grayscale image that is to be enlarged to 500x500. The scale factors are sx = 500/100 = 5 and sy = 500/100 = 5. The destination image has 500x500 pixels that must be filled from the 100x100 source pixels. From src_x = i/sx and src_y = j/sy, the destination pixel coordinate (i, j) is mapped to the source coordinate (src_x, src_y); the fractional parts of src_x and src_y are the u and v mentioned above.
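As a minimal sketch of this mapping (my own illustration with hypothetical values, using the plain src = dst/scale form from this paragraph; the full code in section II additionally applies geometric-center alignment):

#include <cmath>
#include <cstdio>

//Map a destination pixel (i, j) of the 500x500 image back to the 100x100 source
//image and take the fractional parts of the source coordinates (the u and v above)
void mappingExample()
{
	double sx = 500.0 / 100.0, sy = 500.0 / 100.0; //scale factors from the example
	int i = 123, j = 321;                          //an arbitrary destination pixel
	double src_x = i / sx, src_y = j / sy;         //corresponding source coordinates
	double u = src_x - std::floor(src_x);          //fractional part of src_x
	double v = src_y - std::floor(src_y);          //fractional part of src_y
	std::printf("src = (%.1f, %.1f), u = %.1f, v = %.1f\n", src_x, src_y, u, v);
	//prints: src = (24.6, 64.2), u = 0.6, v = 0.2
}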

Once u and v are understood, the bicubic weights can be computed from them. As stated above, the inputs to the cubic function are the coordinates of the surrounding 4x4 region relative to the mapped point: the four horizontal inputs are 1+u, u, 1-u and 2-u, and the four vertical inputs are 1+v, v, 1-v and 2-v. Feeding these into the cubic function and multiplying the resulting weights pairwise gives the 4x4 weight matrix; a small sketch of this structure follows.
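The sketch below (my own illustration with hypothetical names, not the code from this article) shows how the 4x4 weight matrix and the weighted sum are formed from the two sets of four weights:

//wrow: the four weights along the row direction, wcol: the four weights along the
//column direction, patch: the 4x4 source neighborhood around the mapped point
double weightedSum(const double wrow[4], const double wcol[4], const double patch[4][4])
{
	double value = 0.0;
	for (int m = 0; m < 4; ++m)                   //rows of the neighborhood
		for (int n = 0; n < 4; ++n)               //columns of the neighborhood
			value += wrow[m] * wcol[n] * patch[m][n]; //weight matrix entry (m, n) = wrow[m] * wcol[n]
	return value;
}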

Once the weight matrix can be computed, the whole image can be traversed and interpolated. Below is the C++ OpenCV code I wrote based on my understanding of the principle. It is not optimized, but it should give an intuitive picture of how bicubic interpolation works.

II. C++ OpenCV Code

1. Computing the weight matrix

As mentioned above, the weight matrix is just the product of the four horizontal outputs and the four vertical outputs, so it is enough to compute those eight values, four per direction.

The code is as follows:


#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

std::vector<double> getWeight(double c, double a = 0.5)
{
	//c is u or v; the horizontal and vertical weights are computed the same way.
	//a is the kernel's free parameter (a = -0.5 is the classic Catmull-Rom choice).
	//Distances from the mapped point to the four neighboring rows/columns:
	std::vector<double> temp(4);
	temp[0] = 1 + c; temp[1] = c;
	temp[2] = 1 - c; temp[3] = 2 - c;

	//W(x) = (a+2)|x|^3 - (a+3)|x|^2 + 1       for |x| <= 1
	//W(x) = a|x|^3 - 5a|x|^2 + 8a|x| - 4a     for 1 < |x| < 2
	std::vector<double> weight(4);
	weight[0] = a * std::pow(std::abs(temp[0]), 3) - 5 * a * std::pow(std::abs(temp[0]), 2) + 8 * a * std::abs(temp[0]) - 4 * a;
	weight[1] = (a + 2) * std::pow(std::abs(temp[1]), 3) - (a + 3) * std::pow(std::abs(temp[1]), 2) + 1;
	weight[2] = (a + 2) * std::pow(std::abs(temp[2]), 3) - (a + 3) * std::pow(std::abs(temp[2]), 2) + 1;
	weight[3] = a * std::pow(std::abs(temp[3]), 3) - 5 * a * std::pow(std::abs(temp[3]), 2) + 8 * a * std::abs(temp[3]) - 4 * a;

	return weight;
}
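As a quick sanity check (my own addition, not part of the original post): for any fractional offset c, the four weights returned by getWeight sum to 1 (up to rounding), which is what keeps flat regions of the image unchanged by the interpolation. A minimal snippet to verify this:

#include <cstdio>

//Standalone check: the four weights should always sum to 1
void checkWeightSums()
{
	const double cs[] = { 0.0, 0.25, 0.5, 0.75 };
	for (double c : cs)
	{
		std::vector<double> w = getWeight(c);
		std::printf("c = %.2f  sum = %.4f\n", c, w[0] + w[1] + w[2] + w[3]);
	}
}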

2. Traversing the image and interpolating

The code is as follows:


void bicubic(cv::Mat& src, cv::Mat& dst, int dst_rows, int dst_cols)
{
	dst.create(dst_rows, dst_cols, src.type());
	double sy = static_cast<double>(dst_rows) / static_cast<double>(src.rows);
	double sx = static_cast<double>(dst_cols) / static_cast<double>(src.cols);
	cv::Mat border;
	cv::copyMakeBorder(src, border, 1, 1, 1, 1, cv::BORDER_REFLECT_101);

	//Grayscale image
	if (src.channels() == 1)
	{
		for (int i = 1; i < dst_rows + 1; ++i)
		{
			//Map the destination row (i - 1, since the loop starts at 1) back to the
			//source image using geometric-center alignment
			double src_y = (i - 1 + 0.5) / sy - 0.5;
			if (src_y < 0) src_y = 0;
			if (src_y > src.rows - 1) src_y = src.rows - 1;
			src_y += 1; //shift into the coordinates of the padded image
			//The four source rows surrounding the mapped position
			int i1 = std::floor(src_y);
			int i2 = std::ceil(src_y);
			int i0 = i1 - 1;
			int i3 = i2 + 1;
			double u = src_y - i1;
			std::vector<double> weight_y = getWeight(u); //row weights

			for (int j = 1; j < dst_cols + 1; ++j)
			{
				double src_x = (j - 1 + 0.5) / sx - 0.5;
				if (src_x < 0) src_x = 0;
				if (src_x > src.cols - 1) src_x = src.cols - 1;
				src_x += 1;
				//The four source columns surrounding the mapped position
				int j1 = std::floor(src_x);
				int j2 = std::ceil(src_x);
				int j0 = j1 - 1;
				int j3 = j2 + 1;
				double v = src_x - j1;
				std::vector<double> weight_x = getWeight(v); //column weights

				//Weighted sum over the 4x4 source neighborhood (the actual interpolation)
				double pix = weight_x[0] * weight_y[0] * border.at<uchar>(i0, j0) + weight_x[1] * weight_y[0] * border.at<uchar>(i0, j1)
					+ weight_x[2] * weight_y[0] * border.at<uchar>(i0, j2) + weight_x[3] * weight_y[0] * border.at<uchar>(i0, j3)
					+ weight_x[0] * weight_y[1] * border.at<uchar>(i1, j0) + weight_x[1] * weight_y[1] * border.at<uchar>(i1, j1)
					+ weight_x[2] * weight_y[1] * border.at<uchar>(i1, j2) + weight_x[3] * weight_y[1] * border.at<uchar>(i1, j3)
					+ weight_x[0] * weight_y[2] * border.at<uchar>(i2, j0) + weight_x[1] * weight_y[2] * border.at<uchar>(i2, j1)
					+ weight_x[2] * weight_y[2] * border.at<uchar>(i2, j2) + weight_x[3] * weight_y[2] * border.at<uchar>(i2, j3)
					+ weight_x[0] * weight_y[3] * border.at<uchar>(i3, j0) + weight_x[1] * weight_y[3] * border.at<uchar>(i3, j1)
					+ weight_x[2] * weight_y[3] * border.at<uchar>(i3, j2) + weight_x[3] * weight_y[3] * border.at<uchar>(i3, j3);
				if (pix < 0) pix = 0;
				if (pix > 255)pix = 255;

				dst.at<uchar>(i - 1, j - 1) = static_cast<uchar>(pix);
			}
		}
	}
	//Color (3-channel) image
	else if (src.channels() == 3)
	{
		for (int i = 1; i < dst_rows + 1; ++i)
		{
			double src_y = (i - 1 + 0.5) / sy - 0.5;
			if (src_y < 0) src_y = 0;
			if (src_y > src.rows - 1) src_y = src.rows - 1;
			src_y += 1;
			int i1 = std::floor(src_y);
			int i2 = std::ceil(src_y);
			int i0 = i1 - 1;
			int i3 = i2 + 1;
			double u = src_y - i1;
			std::vector<double> weight_y = getWeight(u); //row weights

			for (int j = 1; j < dst_cols + 1; ++j)
			{
				double src_x = (j - 1 + 0.5) / sx - 0.5;
				if (src_x < 0) src_x = 0;
				if (src_x > src.cols - 1) src_x = src.cols - 1;
				src_x += 1;
				int j1 = std::floor(src_x);
				int j2 = std::ceil(src_x);
				int j0 = j1 - 1;
				int j3 = j2 + 1;
				double v = src_x - j1;
				std::vector<double> weight_x = getWeight(v); //column weights

				//Accumulate each channel in double precision to avoid premature truncation
				cv::Vec3d pix;

				pix[0] = weight_x[0] * weight_y[0] * border.at<cv::Vec3b>(i0, j0)[0] + weight_x[1] * weight_y[0] * border.at<cv::Vec3b>(i0, j1)[0]
					+ weight_x[2] * weight_y[0] * border.at<cv::Vec3b>(i0, j2)[0] + weight_x[3] * weight_y[0] * border.at<cv::Vec3b>(i0, j3)[0]
					+ weight_x[0] * weight_y[1] * border.at<cv::Vec3b>(i1, j0)[0] + weight_x[1] * weight_y[1] * border.at<cv::Vec3b>(i1, j1)[0]
					+ weight_x[2] * weight_y[1] * border.at<cv::Vec3b>(i1, j2)[0] + weight_x[3] * weight_y[1] * border.at<cv::Vec3b>(i1, j3)[0]
					+ weight_x[0] * weight_y[2] * border.at<cv::Vec3b>(i2, j0)[0] + weight_x[1] * weight_y[2] * border.at<cv::Vec3b>(i2, j1)[0]
					+ weight_x[2] * weight_y[2] * border.at<cv::Vec3b>(i2, j2)[0] + weight_x[3] * weight_y[2] * border.at<cv::Vec3b>(i2, j3)[0]
					+ weight_x[0] * weight_y[3] * border.at<cv::Vec3b>(i3, j0)[0] + weight_x[1] * weight_y[3] * border.at<cv::Vec3b>(i3, j1)[0]
					+ weight_x[2] * weight_y[3] * border.at<cv::Vec3b>(i3, j2)[0] + weight_x[3] * weight_y[3] * border.at<cv::Vec3b>(i3, j3)[0];
				pix[1] = weight_x[0] * weight_y[0] * border.at<cv::Vec3b>(i0, j0)[1] + weight_x[1] * weight_y[0] * border.at<cv::Vec3b>(i0, j1)[1]
					+ weight_x[2] * weight_y[0] * border.at<cv::Vec3b>(i0, j2)[1] + weight_x[3] * weight_y[0] * border.at<cv::Vec3b>(i0, j3)[1]
					+ weight_x[0] * weight_y[1] * border.at<cv::Vec3b>(i1, j0)[1] + weight_x[1] * weight_y[1] * border.at<cv::Vec3b>(i1, j1)[1]
					+ weight_x[2] * weight_y[1] * border.at<cv::Vec3b>(i1, j2)[1] + weight_x[3] * weight_y[1] * border.at<cv::Vec3b>(i1, j3)[1]
					+ weight_x[0] * weight_y[2] * border.at<cv::Vec3b>(i2, j0)[1] + weight_x[1] * weight_y[2] * border.at<cv::Vec3b>(i2, j1)[1]
					+ weight_x[2] * weight_y[2] * border.at<cv::Vec3b>(i2, j2)[1] + weight_x[3] * weight_y[2] * border.at<cv::Vec3b>(i2, j3)[1]
					+ weight_x[0] * weight_y[3] * border.at<cv::Vec3b>(i3, j0)[1] + weight_x[1] * weight_y[3] * border.at<cv::Vec3b>(i3, j1)[1]
					+ weight_x[2] * weight_y[3] * border.at<cv::Vec3b>(i3, j2)[1] + weight_x[3] * weight_y[3] * border.at<cv::Vec3b>(i3, j3)[1];
				pix[2] = weight_x[0] * weight_y[0] * border.at<cv::Vec3b>(i0, j0)[2] + weight_x[1] * weight_y[0] * border.at<cv::Vec3b>(i0, j1)[2]
					+ weight_x[2] * weight_y[0] * border.at<cv::Vec3b>(i0, j2)[2] + weight_x[3] * weight_y[0] * border.at<cv::Vec3b>(i0, j3)[2]
					+ weight_x[0] * weight_y[1] * border.at<cv::Vec3b>(i1, j0)[2] + weight_x[1] * weight_y[1] * border.at<cv::Vec3b>(i1, j1)[2]
					+ weight_x[2] * weight_y[1] * border.at<cv::Vec3b>(i1, j2)[2] + weight_x[3] * weight_y[1] * border.at<cv::Vec3b>(i1, j3)[2]
					+ weight_x[0] * weight_y[2] * border.at<cv::Vec3b>(i2, j0)[2] + weight_x[1] * weight_y[2] * border.at<cv::Vec3b>(i2, j1)[2]
					+ weight_x[2] * weight_y[2] * border.at<cv::Vec3b>(i2, j2)[2] + weight_x[3] * weight_y[2] * border.at<cv::Vec3b>(i2, j3)[2]
					+ weight_x[0] * weight_y[3] * border.at<cv::Vec3b>(i3, j0)[2] + weight_x[1] * weight_y[3] * border.at<cv::Vec3b>(i3, j1)[2]
					+ weight_x[2] * weight_y[3] * border.at<cv::Vec3b>(i3, j2)[2] + weight_x[3] * weight_y[3] * border.at<cv::Vec3b>(i3, j3)[2];

				//Clamp each channel to [0, 255] and write the destination pixel
				cv::Vec3b out;
				for (int c = 0; c < src.channels(); ++c)
					out[c] = cv::saturate_cast<uchar>(pix[c]);
				dst.at<cv::Vec3b>(i - 1, j - 1) = out;
			}
		}	
	}	
}

3. Test and results


int main()
{
	cv::Mat src = cv::imread("C:\\Users\\Echo\\Pictures\\Saved Pictures\\bilateral.png");
	if (src.empty()) return -1; //make sure the image actually loaded
	cv::Mat dst;
	bicubic(src, dst, 309 / 0.5, 338 / 0.5); //destination size = source size / 0.5, i.e. a 2x upscale
	cv::imshow("dst", dst);
	cv::imshow("src", src);
	cv::waitKey(0);
	return 0;
}

Color image (upscaled by a factor of 2)
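As an optional cross-check (my own addition, not from the original article), the same scaling can be done with OpenCV's built-in bicubic resize and compared against this implementation. The two results will not be bit-identical, since cv::resize uses its own kernel coefficient and border handling, but they should be very close. The snippet below can be appended to the end of main() above (it needs #include <iostream> for the output):

	//Cross-check against OpenCV's built-in bicubic interpolation
	cv::Mat ref;
	cv::resize(src, ref, dst.size(), 0, 0, cv::INTER_CUBIC);
	cv::Mat diff;
	cv::absdiff(dst, ref, diff);
	double maxDiff = 0.0;
	cv::minMaxLoc(diff.reshape(1), nullptr, &maxDiff); //max per-channel absolute difference
	std::cout << "max difference vs cv::resize: " << maxDiff << std::endl;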

This concludes the detailed explanation of implementing the bicubic image interpolation algorithm in C++ with OpenCV.
