How to Build a Simple AR Application on Android with OpenCV + OpenGL
- October 7, 2019
- Notes
In previous articles, I wrote about how to build a simple AR program on the Kinect Azure DK platform with OpenGL 1:
用 OpenGL 1 在 Kinect Azure DK 平台實現一個簡單的 AR 程序 (無雨森, WeChat official account 無雨森的技術分享: KinectAzureDK編程實戰_OpenCV + OpenGL 的 AR 應用的實現)
and how to build a simple AR program on the Kinect Azure DK platform with OpenGL 3 shaders:
用 OpenGL 3 Shader 程序在 Kinect Azure DK 平台實現一個簡單的 AR 程序 (無雨森, WeChat official account 無雨森的技術分享: KinectAzureDK編程實戰_使用OpenGL 3 shader 的 AR 應用的實現)
In this article, we port that program entirely to the Android platform.
First, let's look at the result.

The phone I am using here is the only phone on the market built on the Tango platform. It has a depth camera module on the back, reportedly made by pmd.

As the screenshot above shows, the elongated lens on the left is the depth camera module, in this case a depth camera based on the time-of-flight (ToF) principle. I have previously written a series of articles on the new Kinect Azure DK, which also carries a ToF depth camera. For a brief explanation of the ToF principle, see:
關於 TOF 原理的簡單講解 (無雨森, WeChat official account 無雨森的技術分享: KinectAzureDK編程實戰_用OpenCV、PCL操作和可視化Kinect數據)
The large lens in the middle is a fisheye camera, and next to it is a round fingerprint sensor.
Why am I using this phone for a simple AR application? The reason is simple: it is the only Android phone I own, and I don't do iOS development. More importantly, because a Tango phone ships with a SLAM platform, the parameter files for its cameras are stored on the device. The first step of an AR application should be calibrating the camera; with a Tango phone that step can be skipped, and the intrinsics can be read straight from the file.
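For example, if the intrinsics are exported to a YAML file, they can be read with OpenCV's FileStorage. This is only a minimal sketch under that assumption; the file name and key names below are placeholders, not the actual Tango parameter format:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Hypothetical file exported from the phone; the keys are illustrative only.
    cv::FileStorage fs("tango_intrinsics.yml", cv::FileStorage::READ);
    if (!fs.isOpened()) {
        std::cerr << "cannot open intrinsics file" << std::endl;
        return 1;
    }

    cv::Mat cameraMatrix, distCoeffs;   // 3x3 K matrix and distortion coefficients
    fs["camera_matrix"] >> cameraMatrix;
    fs["distortion_coefficients"] >> distCoeffs;

    std::cout << "K = " << cameraMatrix << std::endl
              << "dist = " << distCoeffs << std::endl;
    return 0;
}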
Enough preamble; let's get to the point.
Starting from the code in the previous articles, I ported it directly to Android. Since I am not very familiar with Android development, I built on the mixedprocessing sample in the OpenCV 4.1.0 Android SDK, which saved me from setting up the build environment from scratch.

Anyone familiar with OpenCV Android development will notice that I added a lot to the mixedprocessing project; both the Java code and the C++ code are now completely different.
Our Android AR program is built mainly on Android's JNI interface, with the AR logic implemented in C++.
Let's start with how to port the desktop code to Android and rebuild it with the Android NDK.
cmake_minimum_required(VERSION 3.6)

# Enable C++11
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)

set(target gl3cv4jni)
project(${target} CXX)

set(ANDROID_OPENCV_COMPONENTS "opencv_java" CACHE STRING "")
message(STATUS "ANDROID_ABI=${ANDROID_ABI}")
find_package(OpenCV REQUIRED COMPONENTS ${ANDROID_OPENCV_COMPONENTS})

find_library(EGL_LIBRARY EGL "EGL 1.4 library")

file(GLOB srcs *.cpp *.c)
file(GLOB hdrs *.hpp *.h)

include_directories("./aruco/")

set(sources
  "./aruco/cameraparameters.cpp"
  "./aruco/debug.cpp"
  "./aruco/dictionary.cpp"
  "./aruco/ippe.cpp"
  "./aruco/markerdetector.cpp"
  "./aruco/markerlabeler.cpp"
  "./aruco/posetracker.cpp"
  "./aruco/cvdrawingutils.cpp"
  "./aruco/dictionary_based.cpp"
  "./aruco/marker.cpp"
  "./aruco/markerdetector_impl.cpp"
  "./aruco/markermap.cpp"
  "./aruco/fractaldetector.cpp"
)

set(headers
  "./aruco/aruco_cvversioning.h"
  "./aruco/cameraparameters.h"
  "./aruco/dictionary_based.h"
  "./aruco/ippe.h"
  "./aruco/markerdetector_impl.h"
  "./aruco/markermap.h"
  "./aruco/timers.h"
  "./aruco/aruco_export.h"
  "./aruco/cvdrawingutils.h"
  "./aruco/dictionary.h"
  "./aruco/levmarq.h"
  "./aruco/marker.h"
  "./aruco/picoflann.h"
  "./aruco/aruco.h"
  "./aruco/debug.h"
  "./aruco/markerdetector.h"
  "./aruco/markerlabeler.h"
  "./aruco/posetracker.h"
  "./aruco/fractaldetector.h"
)

set(fractal_sources
  "./aruco/fractallabelers/fractalposetracker.cpp"
  "./aruco/fractallabelers/fractalmarkerset.cpp"
  "./aruco/fractallabelers/fractalmarker.cpp"
  "./aruco/fractallabelers/fractallabeler.cpp"
)

set(fractal_headers
  "./aruco/fractallabelers/fractalposetracker.h"
  "./aruco/fractallabelers/fractalmarkerset.h"
  "./aruco/fractallabelers/fractalmarker.h"
  "./aruco/fractallabelers/fractallabeler.h"
)

include_directories("${CMAKE_CURRENT_LIST_DIR}")
include_directories(${PROJECT_SOURCE_DIR}/aruco/3rdparty/eigen3)
include_directories(${PROJECT_SOURCE_DIR})

add_library(${target} SHARED
  ${srcs} ${hdrs} ${sources} ${fractal_sources}
  Shader.cpp Texture.cpp VideoRenderer.cpp ARRenderer.cpp
)

target_link_libraries(${target} ${ANDROID_OPENCV_COMPONENTS} ${EGL_LIBRARY} -lGLESv1_CM -lGLESv3)
Happily, Android projects can now be configured with CMake, a scripting language C++ programmers know well.
For convenience, I copied the aruco sources directly into the project directory and wired them in through CMake.
This CMake script is fairly simple; there are two key points:
1) Enable C++11.
2) When linking with target_link_libraries, link not only the Android OpenCV libraries but also the GLES 3 library.
After that, the remaining C++ source files can simply be copied over.
Now for the most important file, jni_part.cpp.
There are three key JNI-related functions; a rough sketch of their shape follows the list below.
init(): initializes the OpenGL rendering environment; it calls setupGraphics().
step(): the counterpart of the earlier while() render loop; it calls display().
setImage(): the counterpart of the earlier frameRetriever() function. It manages the shared data buffer and performs the image processing, which here means detecting the ArUco marker and returning the marker's pose relative to the Android phone's camera coordinate system.
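For orientation, here is a minimal sketch of what these three exported functions look like on the C++ side. The JNI names follow the package and class declared in GL3CV4Lib.java below; processFrame() is a hypothetical placeholder for the buffer handling and ArUco detection step, not the actual function name in the repository.

#include <jni.h>
#include <opencv2/core.hpp>

// Renderer-side helpers described in the text (setupGraphics, display),
// plus a hypothetical processFrame() standing in for the detection step.
void setupGraphics(int width, int height);
void display();
void processFrame(cv::Mat& frameRGBA);

extern "C" {

JNIEXPORT void JNICALL
Java_org_opencv_samples_tutorial2_GL3CV4Lib_init(JNIEnv*, jclass, jint width, jint height)
{
    // Set up the OpenGL ES 3 rendering environment (shaders, textures, viewport).
    setupGraphics(width, height);
}

JNIEXPORT void JNICALL
Java_org_opencv_samples_tutorial2_GL3CV4Lib_step(JNIEnv*, jclass)
{
    // Render one frame: the video background plus the AR overlay.
    display();
}

JNIEXPORT void JNICALL
Java_org_opencv_samples_tutorial2_GL3CV4Lib_setImage(JNIEnv*, jclass, jlong imageRGBA)
{
    // The Java side passes Mat.nativeObj, i.e. a pointer to the underlying cv::Mat.
    cv::Mat& frame = *reinterpret_cast<cv::Mat*>(imageRGBA);
    // Copy into the shared buffer and run ArUco marker detection / pose estimation.
    processFrame(frame);
}

}  // extern "C"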
There is not much else to say about the code; it is the same as in my previous article.
If you are not familiar with the functions mentioned here, please see my earlier article:
OpenGL 3 實現的 AR 應用 (無雨森, WeChat official account 無雨森的技術分享: KinectAzureDK編程實戰_使用OpenGL 3 shader 的 AR 應用的實現)
As for the shaders, they were all converted into strings; for that I used code someone else had written:
OpenGL GLSL shader as a string http://hamelot.io/visualization/opengl-glsl-shader-as-a-string/
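The idea is simply to embed the GLSL source in the C++ code. As an illustration, a shader can also be embedded as a C++11 raw string literal; this is a sketch, not the project's actual shader:

// A GLSL ES 3.00 fragment shader embedded directly in C++ source
// as a raw string literal, avoiding a separate shader file on disk.
static const char* kVideoFragmentShader = R"glsl(
#version 300 es
precision mediump float;
in vec2 vTexCoord;
uniform sampler2D uTexture;
out vec4 fragColor;
void main()
{
    fragColor = texture(uTexture, vTexCoord);
}
)glsl";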
Now for the Java files.
GL3CV4Lib.java
package org.opencv.samples.tutorial2;

public class GL3CV4Lib {
    public static native void init(int width, int height);
    public static native void step();
    public static native void setImage(long imageRGBA);
}
This declares the three native methods, corresponding to the three functions in jni_part.cpp described above.
GL3CV4View.java
package org.opencv.samples.tutorial2;

import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

class GL3CV4View extends GLSurfaceView {

    private Renderer renderer;

    public GL3CV4View(Context context) {
        super(context);
        setZOrderOnTop(true);
        setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        setEGLContextClientVersion(3);
        getHolder().setFormat(PixelFormat.TRANSLUCENT);
        renderer = new Renderer();
        setRenderer(renderer);
    }

    private class Renderer implements GLSurfaceView.Renderer {
        public void onDrawFrame(GL10 gl) {
            GL3CV4Lib.step();
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            GL3CV4Lib.init(width, height);
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        }
    }
}
This class extends Android's GLSurfaceView. The implementation is also simple: it hands rendering of the Android OpenGL context over to GL3CV4Lib.
ARApplication.java
package org.opencv.samples.tutorial2;

import android.app.Activity;
import android.content.Context;
import android.content.pm.ActivityInfo;
import android.os.Bundle;
import android.util.Log;
import android.view.SurfaceView;
import android.widget.RelativeLayout;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

public class ARApplication extends Activity implements CvCameraViewListener2 {

    private GL3CV4View mView = null;
    private Context context;
    private boolean gl3_loaded = false;
    private CameraBridgeViewBase mOpenCvCameraView;
    private RelativeLayout l_layout;

    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS: {
                    Log.i("Loading", "OpenCV loaded successfully");
                    System.loadLibrary("gl3cv4jni");
                    gl3_loaded = true;
                    mView = new GL3CV4View(getApplication());
                    l_layout.addView(mView);
                    setContentView(l_layout);
                    mOpenCvCameraView.enableView();
                } break;
                default: {
                    super.onManagerConnected(status);
                } break;
            }
        }
    };

    public void onCameraViewStarted(int width, int height) {
    }

    public void onCameraViewStopped() {
    }

    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat input = inputFrame.rgba();
        if (gl3_loaded) {
            GL3CV4Lib.setImage(input.nativeObj);
        }
        return null;
    }

    @Override
    protected void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
        setContentView(R.layout.ar);
        l_layout = (RelativeLayout) findViewById(R.id.linearLayoutRest);
        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.opencv_camera_surface_view);
        mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
        mOpenCvCameraView.setMaxFrameSize(1920, 1080);
        mOpenCvCameraView.setCvCameraViewListener(this);
        mOpenCvCameraView.disableView();
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (mView != null) {
            mView.onPause();
        }
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
        gl3_loaded = false;
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, this, mLoaderCallback);
        } else {
            mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }
        if (mView != null) {
            mView.onResume();
        }
    }
}
For how this Java file is written, refer to OpenCV's camerapreview sample for opening the camera, and the mixedprocessing sample for how to load a C++ JNI library into a Java program.
Let me also mention a Zhihu answer I wrote five years ago:
想用OpenCV做AR該如何入手? https://www.zhihu.com/question/26983174/answer/35328819
At the time I was using a Samsung Galaxy S III; Android was still at version 4.4, and the OpenCV SDK was 2.4.13.
Writing an OpenCV + OpenGL Android AR program was very simple back then, because the OpenCV Android SDK still supported the native camera, meaning the camera could be opened directly from OpenCV C++ code, and the AR scene could still be rendered with the OpenGL 1 fixed-function pipeline. Let's look at the 2014 code.
#include <jni.h>
#include <GLES/gl.h>
#include <GLES/glext.h>
#include <android/log.h>
#include <pthread.h>
#include <time.h>
#include <math.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv/highgui.h>
#include <opencv/cxcore.h>
#include <vector>
#include "./build/include/TestDLL.h"
#include "./build/include/GeometryTypes.hpp"
#include "./build/include/CameraCalibration.hpp"
#include "./build/include/TransformPoints.hpp"

using namespace std;
using namespace cv;

#define LOG_TAG "CAMERA_RENDERER"
#define LOG(...) __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)

GLuint texture;
cv::VideoCapture capture;
cv::Mat buffer[30];
cv::Mat rgbFrame;
cv::Mat inframe;
cv::Mat outframe;
int bufferIndex;
int rgbIndex;
int frameWidth;
int frameHeight;
int screenWidth;
int screenHeight;
int orientation;
pthread_mutex_t FGmutex;
pthread_t frameGrabber;
pthread_attr_t attr;
struct sched_param param;
cv::Mat backPxls;
cv::Mat Rvec;
cv::Mat raux, taux;
cv::Mat_<float> Tvec;
cv::Mat camMatrix;
cv::Mat op;

GLfloat vertices[] = {
    -1.0f, -1.0f, 0.0f,  // V1 - bottom left
    -1.0f,  1.0f, 0.0f,  // V2 - top left
     1.0f, -1.0f, 0.0f,  // V3 - bottom right
     1.0f,  1.0f, 0.0f   // V4 - top right
};
GLfloat textures[8];

vector<Point3f> modelPoints;
double _cm[9] = { 447.58258057, 0, 320, 0, 444.56341553, 240, 0, 0, 1 };
CameraCalibration calibration(447.58258057, 444.56341553, 320, 240);
vector<Point2f> imagePoints;
bool m_isTextureInitialized;
bool isPatternPresent;
Transformation patternPose;

//=====
const GLfloat gVertices[] = {
    -1.0f,         0.990435689f,  0.64491773f,
     1.0f,         0.990435689f,  0.64491773f,
     1.0f,        -1.086405201f,  0.596322212f,
    -1.0f,        -1.086405201f,  0.596322212f,
    -0.262226342f, 0.790170874f,  0.646569088f,
     0.262226342f, 0.790170874f,  0.646569088f,
    -0.293826291f, 0.20654355f,   0.903273518f,
     0.293826291f, 0.20654355f,   0.903273518f
};
const GLfloat gColors[] = {
    1.0f, 0.0f, 0.0f, 1.0f,
    0.0f, 1.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f, 1.0f,
    1.0f, 0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f, 1.0f,
    0.0f, 1.0f, 0.0f, 1.0f,
    1.0f, 0.0f, 0.0f, 1.0f,
    0.0f, 1.0f, 0.0f, 1.0f
};
static GLfloat rtri;
static GLfloat rquad;
const GLfloat PI = 3.1415f;
//=====

TransformPoints trans;

extern "C" {

void drawBackground();
void createTexture();
void destroyTexture();
void *frameRetriever(void*);
void buildProjectionMatrix(const CameraCalibration& calibration, int screen_width, int screen_height, Matrix44& projectionMatrix);
void drawAugmentedScene();

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_initCamera(JNIEnv*, jobject, jint width, jint height)
{
    LOG("Camera Created");
    capture.open(CV_CAP_ANDROID + 1);
    capture.set(CV_CAP_PROP_FRAME_WIDTH, width);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, height);
    frameWidth = width;
    frameHeight = height;
    LOG("frameWidth = %d", frameWidth);
    LOG("frameHeight = %d", frameHeight);
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    memset(&param, 0, sizeof(param));
    param.sched_priority = 100;
    pthread_attr_setschedparam(&attr, &param);
    pthread_create(&frameGrabber, &attr, frameRetriever, NULL);
    pthread_attr_destroy(&attr);
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_surfaceChanged(JNIEnv*, jobject, jint width, jint height, jint orien)
{
    LOG("Surface Changed");
    LOG("surface changed screenwidth = %d, screenheight = %d", width, height);
    createTexture();
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_releaseCamera(JNIEnv*, jobject)
{
    LOG("Camera Released");
    capture.release();
    destroyTexture();
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_InitFaceDetect(JNIEnv* env, jobject)
{
    Init_Detect();
    modelPoints.push_back(Point3f(-0.186790768, 1, 0.969336463));           // 34
    modelPoints.push_back(Point3f(0.186790768, 1, 0.969336463));            // 29
    modelPoints.push_back(Point3f(-0.495664225, 0.699452693, 0.69330426));  // 27
    modelPoints.push_back(Point3f(0.495664225, 0.699452693, 0.69330426));   // 19
    modelPoints.push_back(Point3f(0, 0, 0.94963452));                       // 52
    op = Mat(modelPoints);
    Scalar m = mean(Mat(modelPoints));
    op = op - m;
    op = op + Scalar(0, 0.5, 0);
    camMatrix = Mat(3, 3, CV_64F, _cm);
    trans.set_points_3d(modelPoints);
    LOG("InitFaceDetect after trans.get_points_3d()[0].x = %f, trans.get_points_3d()[0].y = %f, trans.get_points_3d()[0].z = %f",
        trans.get_points_3d()[0].x, trans.get_points_3d()[0].y, trans.get_points_3d()[0].z);
}

void createTexture()
{
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void destroyTexture()
{
    LOG("Texture destroyed");
    glDeleteTextures(1, &texture);
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_renderBackground(JNIEnv*, jobject)
{
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    drawBackground();
    drawAugmentedScene();
    glFlush();
}

void drawBackground()
{
    LOG("drawBackgroud start");
    int w, h;
    w = 640;
    h = 480;
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, texture);
    if (bufferIndex > 0) {
        pthread_mutex_lock(&FGmutex);
        cvtColor(buffer[(bufferIndex - 1) % 30], outframe, CV_BGR2BGR565);
        pthread_mutex_unlock(&FGmutex);
        cv::flip(outframe, rgbFrame, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, rgbFrame.ptr());
    }
    LOG("drawBackgroud after pthread");
    const GLfloat bgTextureVertices[] = { 0, 0, w, 0, 0, h, w, h };
    const GLfloat bgTextureCoords[] = { 1, 0, 1, 1, 0, 0, 0, 1 };
    const GLfloat proj[] = { 0, -2.f / w, 0, 0, -2.f / h, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1 };
    float outputpoints[3] = { 0, 0, 0 };
    vector<cv::Point3f> temp_output;
    cv::Point3f temp_point;
    temp_output.clear();
    LOG("drawBackground before Pixel2Vertex!");
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(proj);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    // Update attribute values.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, bgTextureVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, bgTextureCoords);
    glColor4f(1, 1, 1, 1);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);
}

void buildProjectionMatrix(const CameraCalibration& calibration, int screen_width, int screen_height, Matrix44& projectionMatrix)
{
    float nearPlane = 0.01f;
    float farPlane = 100.0f;
    double max_w_h = MAX(640, 480);
    float f_x = 637.40528;
    float f_y = 638.37828;
    float c_x = 320;
    float c_y = 240;
    projectionMatrix.data[0] = -2.0f * f_x / 640;
    projectionMatrix.data[1] = 0.0f;
    projectionMatrix.data[2] = 0.0f;
    projectionMatrix.data[3] = 0.0f;
    projectionMatrix.data[4] = 0.0f;
    projectionMatrix.data[5] = 2.0f * f_y / 480;
    projectionMatrix.data[6] = 0.0f;
    projectionMatrix.data[7] = 0.0f;
    projectionMatrix.data[8] = 2.0f * c_x / 640 - 1.0f;
    projectionMatrix.data[9] = 2.0f * c_y / 480 - 1.0f;
    projectionMatrix.data[10] = 0 - (farPlane + nearPlane) / (farPlane - nearPlane);
    projectionMatrix.data[11] = -1.0f;
    projectionMatrix.data[12] = 0.0f;
    projectionMatrix.data[13] = 0.0f;
    projectionMatrix.data[14] = 0 - 2.0f * farPlane * nearPlane / (farPlane - nearPlane);
    projectionMatrix.data[15] = 0.0f;
}

void drawAugmentedScene()
{
    Matrix44 projectionMatrix;
    int w = 640;
    int h = 480;
    buildProjectionMatrix(calibration, w, h, projectionMatrix);
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(projectionMatrix.data);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    Matrix44 glMatrix = patternPose.getMat44();
    glLoadMatrixf(reinterpret_cast<const GLfloat*>(&glMatrix.data[0]));
    GLfloat gVertices[8 * 3];
    for (int line_number_1 = 0; line_number_1 < 8; line_number_1++) {
        gVertices[line_number_1 * 3] = points_3d[line_number_1].x;
        gVertices[line_number_1 * 3 + 1] = points_3d[line_number_1].y;
        gVertices[line_number_1 * 3 + 2] = points_3d[line_number_1].z;
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glLineWidth(6);
    glColor4f(1, 0, 0, 1);
    glVertexPointer(3, GL_FLOAT, 0, gVertices);
    glDrawArrays(GL_LINES, 0, 8);
    glDisableClientState(GL_VERTEX_ARRAY);
    for (int line_number_1 = 0; line_number_1 < 8; line_number_1++) {
        gVertices[line_number_1 * 3 + 0] = output_points_3d[line_number_1].x;
        gVertices[line_number_1 * 3 + 1] = output_points_3d[line_number_1].y;
        gVertices[line_number_1 * 3 + 2] = output_points_3d[line_number_1].z;
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glLineWidth(6);
    glColor4f(0, 1, 0, 1);
    glVertexPointer(3, GL_FLOAT, 0, gVertices);
    glDrawArrays(GL_LINES, 0, 8);
    glDisableClientState(GL_VERTEX_ARRAY);
}

void *frameRetriever(void*)
{
    CvPoint2D32f *asmPT;
    IplImage temp_image;
    LOG("frameRetriever start");
    while (capture.isOpened()) {
        capture.read(inframe);
        if (!inframe.empty()) {
            LOG("frameRetriever before pthread_mutex_lock!");
            pthread_mutex_lock(&FGmutex);
            temp_image = inframe;
            asmPT = GetFacePoint(&temp_image);
            if (asmPT) {
                DrawFacePoint(&temp_image, asmPT);
                LOG("frameRetriever after DrawFacePoint!");
                imagePoints.clear();
                imagePoints.push_back(Point2f(asmPT[5].x, asmPT[5].y));
                imagePoints.push_back(Point2f(asmPT[12].x, asmPT[12].y));
                imagePoints.push_back(Point2f(asmPT[3].x, asmPT[3].y));
                imagePoints.push_back(Point2f(asmPT[11].x, asmPT[11].y));
                imagePoints.push_back(Point2f(asmPT[16].x, asmPT[16].y));
                trans.set_points_2d(imagePoints);
                LOG("frameRetriever after points_2d.clear()");
            }
            Mat ip(imagePoints);
            Mat img = inframe;
            double _dc[] = { 0, 0, 0, 0 };
            solvePnP(op, ip, camMatrix, Mat(1, 4, CV_64F, _dc), raux, taux);
            raux.convertTo(Rvec, CV_32F);
            taux.convertTo(Tvec, CV_32F);
            Mat_<float> rotMat(3, 3);
            Rodrigues(Rvec, rotMat);
            for (int col = 0; col < 3; col++) {
                for (int row = 0; row < 3; row++) {
                    patternPose.r().mat[row][col] = -rotMat(row, col);
                }
                patternPose.t().data[col] = Tvec(col);
            }
            patternPose = patternPose.getInverted();
            Mat temp_inframe;
            flip(inframe, temp_inframe, 1);
            temp_inframe.copyTo(buffer[(bufferIndex++) % 30]);
            pthread_mutex_unlock(&FGmutex);
        }
    }
    LOG("Camera Closed");
    pthread_exit(NULL);
}

}  // extern "C"
This code no longer runs at all. OpenCV no longer supports the native camera, so the camera must be opened in Java code and the live video frames passed to the JNI C++ functions. Moreover, today's Android OpenGL essentially no longer supports fixed-function GL 1 code.
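To make the new data flow concrete, here is a minimal sketch of the handoff that replaces the old native-camera thread: the Java onCameraFrame() callback ends up in setImage(), which stores the frame, and the GL thread picks it up inside display(). This is my own illustration under those assumptions; pushFrame and popFrame are illustrative names, not the repository's actual API.

#include <mutex>
#include <opencv2/core.hpp>

// Shared state between the camera (Java) thread and the GL render thread.
static std::mutex gFrameMutex;
static cv::Mat gSharedFrame;   // latest RGBA frame received from Java
static bool gFrameReady = false;

// Called from setImage() on the camera thread: store a copy of the new frame.
void pushFrame(const cv::Mat& rgba)
{
    std::lock_guard<std::mutex> lock(gFrameMutex);
    rgba.copyTo(gSharedFrame);
    gFrameReady = true;
}

// Called from display() on the GL thread: fetch the frame if a new one arrived.
bool popFrame(cv::Mat& dst)
{
    std::lock_guard<std::mutex> lock(gFrameMutex);
    if (!gFrameReady)
        return false;
    gSharedFrame.copyTo(dst);
    gFrameReady = false;
    return true;
}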
All of the code in this article will be open-sourced on GitHub and Coding. Follow my GitHub and Coding pages for updates.
GitHub profile https://github.com/forestsen
Coding profile https://coding.net/u/forestsen