How to Build a Simple AR Application on Android with OpenCV + OpenGL

  • October 7, 2019
  • Notes

In the previous articles, I wrote about how to build a simple AR program on the Kinect Azure DK platform with OpenGL 1:

Building a simple AR program on the Kinect Azure DK platform with OpenGL 1, by 无雨森 on the WeChat public account 无雨森的技术分享: KinectAzureDK编程实战_OpenCV + OpenGL 的 AR 应用的实现

and how to build a simple AR program on the Kinect Azure DK platform with OpenGL 3 shader programs:

Building a simple AR program on the Kinect Azure DK platform with OpenGL 3 shaders, by 无雨森 on the WeChat public account 无雨森的技术分享: KinectAzureDK编程实战_使用OpenGL 3 shader 的 AR 应用的实现

In this article, we port those programs entirely to the Android platform.

First, let's look at the result.

The phone I am using here is one of the few phones on the market built on Google's Tango platform. It carries a depth camera module on its back, reportedly made by the company pmd.

As the screenshot above shows, the long lens on the left is the depth camera module, a depth camera based on the time-of-flight (TOF) principle. I have already covered the new Kinect Azure DK, which also uses a TOF depth camera, in my earlier article series. For a brief explanation of the TOF principle, see:

A brief explanation of the TOF principle, by 无雨森 on the WeChat public account 无雨森的技术分享: KinectAzureDK编程实战_用OpenCV、PCL操作和可视化Kinect数据

The large lens in the middle is a fisheye camera, and next to it is a round fingerprint sensor.

Why use this phone for a simple AR application? The reason is simple: it is the only Android phone I have, and I do not do iOS development. More importantly, because a Tango phone ships with a SLAM stack, the parameters of its cameras are already stored on the device. Building an AR application normally starts with calibrating the camera; with a Tango phone that step can be skipped, and the intrinsics can be read straight from the parameter file.
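As a rough illustration, here is a minimal sketch of what feeding those stored intrinsics into the bundled ArUco code might look like. The function name, the fx/fy/cx/cy placeholders, and the zero-distortion assumption are mine, and I am assuming the CameraParameters(cameraMatrix, distortion, size) constructor of the aruco sources copied into the project:

#include <opencv2/core.hpp>
#include "aruco/cameraparameters.h"

// Build ArUco camera parameters from intrinsics read out of the Tango
// parameter file (fx, fy, cx, cy stand in for the stored values).
aruco::CameraParameters makeCameraParameters(double fx, double fy,
                                             double cx, double cy,
                                             cv::Size imageSize)
{
    // 3x3 pinhole camera matrix built from the stored intrinsics.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << fx, 0.0, cx,
                                                      0.0, fy, cy,
                                                      0.0, 0.0, 1.0);
    // Assume an already-rectified image, i.e. zero distortion coefficients.
    cv::Mat distortion = cv::Mat::zeros(5, 1, CV_64F);
    return aruco::CameraParameters(cameraMatrix, distortion, imageSize);
}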

Enough preamble; let's get to the point.

Starting from the code of the previous articles, I ported it directly to Android. Since I am not very familiar with Android development, I built on the mixedprocessing sample that ships with the OpenCV 4.1.0 Android SDK, which saved me from setting up the build environment from scratch.

Anyone familiar with OpenCV Android development will notice that I added a great deal to the mixedprocessing project; both the Java code and the C++ code are now completely different.

Our Android AR program is built mainly on Android's JNI layer, with the AR logic implemented in C++.

First, let's look at how to port the desktop code to Android and rebuild it with the Android NDK.

cmake_minimum_required(VERSION 3.6)

# Enable C++11
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)

set(target gl3cv4jni)
project(${target} CXX)

set(ANDROID_OPENCV_COMPONENTS "opencv_java" CACHE STRING "")
message(STATUS "ANDROID_ABI=${ANDROID_ABI}")
find_package(OpenCV REQUIRED COMPONENTS ${ANDROID_OPENCV_COMPONENTS})
find_library(EGL_LIBRARY EGL "EGL 1.4 library")
file(GLOB srcs *.cpp *.c)
file(GLOB hdrs *.hpp *.h)

include_directories("./aruco/")

SET(sources
    "./aruco/cameraparameters.cpp"
    "./aruco/debug.cpp"
    "./aruco/dictionary.cpp"
    "./aruco/ippe.cpp"
    "./aruco/markerdetector.cpp"
    "./aruco/markerlabeler.cpp"
    "./aruco/posetracker.cpp"
    "./aruco/cvdrawingutils.cpp"
    "./aruco/dictionary_based.cpp"
    "./aruco/marker.cpp"
    "./aruco/markerdetector_impl.cpp"
    "./aruco/markermap.cpp"
    "./aruco/fractaldetector.cpp"
    )
SET(headers
    "./aruco/aruco_cvversioning.h"
    "./aruco/cameraparameters.h"
    "./aruco/dictionary_based.h"
    "./aruco/ippe.h"
    "./aruco/markerdetector_impl.h"
    "./aruco/markermap.h"
    "./aruco/timers.h"
    "./aruco/aruco_export.h"
    "./aruco/cvdrawingutils.h"
    "./aruco/dictionary.h"
    "./aruco/levmarq.h"
    "./aruco/marker.h"
    "./aruco/picoflann.h"
    "./aruco/aruco.h"
    "./aruco/debug.h"
    "./aruco/markerdetector.h"
    "./aruco/markerlabeler.h"
    "./aruco/posetracker.h"
    "./aruco/fractaldetector.h"
    )

set(fractal_sources
    "./aruco/fractallabelers/fractalposetracker.cpp"
    "./aruco/fractallabelers/fractalmarkerset.cpp"
    "./aruco/fractallabelers/fractalmarker.cpp"
    "./aruco/fractallabelers/fractallabeler.cpp"
    )
set(fractal_headers
    "./aruco/fractallabelers/fractalposetracker.h"
    "./aruco/fractallabelers/fractalmarkerset.h"
    "./aruco/fractallabelers/fractalmarker.h"
    "./aruco/fractallabelers/fractallabeler.h"
    )

include_directories("${CMAKE_CURRENT_LIST_DIR}")
include_directories(${PROJECT_SOURCE_DIR}/aruco/3rdparty/eigen3)
include_directories(${PROJECT_SOURCE_DIR})

add_library(${target} SHARED
            ${srcs} ${hdrs} ${sources} ${fractal_sources}
            Shader.cpp Texture.cpp VideoRenderer.cpp ARRenderer.cpp)
target_link_libraries(${target} ${ANDROID_OPENCV_COMPONENTS}
                      ${EGL_LIBRARY} -lGLESv1_CM -lGLESv3)

Happily, Android projects can now be configured with CMake, a scripting language that C++ programmers know very well.

For convenience, I copied the aruco source code straight into the project directory and wired it in through CMake.

This CMake file is fairly simple; there are two key points.

1) Enable C++11.

2) In target_link_libraries, link not only the Android OpenCV component but also the GLES 3 libraries.

After that, the remaining C++ source files can simply be copied over.

Next, the most important file: jni_part.cpp.

It has three key JNI-related functions.

init(): initializes the OpenGL rendering environment. It calls setupGraphics().

step(): the equivalent of the old while() rendering loop. It calls display().

setImage(): the equivalent of the old frameRetriever(). It manages the shared data buffer and does the image processing, which here means detecting the ArUco marker image and returning its pose relative to the Android phone's camera coordinate frame. A minimal sketch of these three entry points follows.
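Since I am not pasting the whole of jni_part.cpp here, the following is only a minimal sketch of how the three entry points could be wired up. The JNI names follow from the Java package org.opencv.samples.tutorial2 shown below; setupGraphics() and display() are the functions mentioned above, while detectMarkersAndUpdatePose() is a hypothetical stand-in for the actual ArUco detection code:

#include <jni.h>
#include <mutex>
#include <opencv2/core.hpp>

// Frame shared between the Java camera callback thread and the GL render thread.
static cv::Mat    g_frame;
static std::mutex g_frameMutex;

// Implemented elsewhere in the project (names per the text above; the last one is hypothetical).
void setupGraphics(int width, int height);            // compile shaders, create textures
void display();                                       // draw video background + AR overlay
void detectMarkersAndUpdatePose(const cv::Mat &rgba); // ArUco detection, updates the pose used by display()

extern "C" {

JNIEXPORT void JNICALL
Java_org_opencv_samples_tutorial2_GL3CV4Lib_init(JNIEnv *, jclass, jint width, jint height)
{
    setupGraphics(width, height);     // called from onSurfaceChanged()
}

JNIEXPORT void JNICALL
Java_org_opencv_samples_tutorial2_GL3CV4Lib_step(JNIEnv *, jclass)
{
    display();                        // called once per frame from onDrawFrame()
}

JNIEXPORT void JNICALL
Java_org_opencv_samples_tutorial2_GL3CV4Lib_setImage(JNIEnv *, jclass, jlong imageRGBA)
{
    // The Java side passes Mat.nativeObj, i.e. the address of a cv::Mat.
    cv::Mat &input = *reinterpret_cast<cv::Mat *>(imageRGBA);
    std::lock_guard<std::mutex> lock(g_frameMutex);
    input.copyTo(g_frame);            // hand the frame over to the render thread
    detectMarkersAndUpdatePose(g_frame);
}

} // extern "C"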

There is not much else to say about the code; it is the same as in my previous article.

If the functions mentioned here are unfamiliar, please see my earlier article:

The AR application implemented with OpenGL 3, by 无雨森 on the WeChat public account 无雨森的技术分享: KinectAzureDK编程实战_使用OpenGL 3 shader 的 AR 应用的实现

As for the shader files, they have all been turned into strings; for that I borrowed an approach someone else wrote up (linked below), and a small sketch of the idea follows the link.

OpenGL GLSL shader as a string http://hamelot.io/visualization/opengl-glsl-shader-as-a-string/
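For illustration only, this is roughly what the idea looks like with a C++11 raw string literal. The shader below is a trivial pass-through vertex shader I made up, not one of the project's actual shaders:

// GLSL ES 3.0 vertex shader embedded as a C++11 raw string literal,
// so it is compiled into the .so instead of being loaded from a file.
static const char *kVideoVertexShader = R"glsl(
#version 300 es
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aTexCoord;
out vec2 vTexCoord;
void main() {
    vTexCoord = aTexCoord;
    gl_Position = vec4(aPosition, 1.0);
}
)glsl";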

Now for the Java files.

GL3CV4Lib.java

package org.opencv.samples.tutorial2;

public class GL3CV4Lib {

    public static native void init(int width, int height);
    public static native void step();
    public static native void setImage(long imageRGBA);
}

It declares the three native methods that correspond to the three functions in jni_part.cpp described above; on the C++ side they are resolved through the standard Java_<package>_<class>_<method> JNI naming convention.

GL3CV4View.java

package org.opencv.samples.tutorial2;

import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

class GL3CV4View extends GLSurfaceView {

    private Renderer renderer;

    public GL3CV4View(Context context) {
        super(context);
        setZOrderOnTop(true);
        setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        setEGLContextClientVersion(3);
        getHolder().setFormat(PixelFormat.TRANSLUCENT);
        renderer = new Renderer();
        setRenderer(renderer);
    }

    private class Renderer implements GLSurfaceView.Renderer {
        public void onDrawFrame(GL10 gl) {
            GL3CV4Lib.step();
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            GL3CV4Lib.init(width, height);
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
    }
}

This class extends Android's GLSurfaceView. Its implementation is also simple: it essentially hands the rendering of the Android OpenGL surface over to GL3CV4Lib.

ARApplication.java

package org.opencv.samples.tutorial2;

import android.app.Activity;
import android.content.pm.ActivityInfo;
import android.os.Bundle;
import android.util.Log;
import android.content.Context;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;

import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.core.Mat;

import android.widget.RelativeLayout;
import android.view.SurfaceView;

public class ARApplication extends Activity implements CvCameraViewListener2 {

    private GL3CV4View mView = null;
    private Context context;

    private boolean gl3_loaded = false;
    private CameraBridgeViewBase mOpenCvCameraView;
    private RelativeLayout l_layout;

    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS: {
                    Log.i("Loading", "OpenCV loaded successfully");

                    System.loadLibrary("gl3cv4jni");
                    gl3_loaded = true;

                    mView = new GL3CV4View(getApplication());
                    l_layout.addView(mView);
                    setContentView(l_layout);

                    mOpenCvCameraView.enableView();
                }
                break;
                default: {
                    super.onManagerConnected(status);
                }
                break;
            }
        }
    };

    public void onCameraViewStarted(int width, int height) {
    }

    public void onCameraViewStopped() {
    }

    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat input = inputFrame.rgba();
        if (gl3_loaded) {
            GL3CV4Lib.setImage(input.nativeObj);
        }
        return null;
    }

    @Override
    protected void onCreate(Bundle icicle) {
        super.onCreate(icicle);

        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);

        setContentView(R.layout.ar);

        l_layout = (RelativeLayout) findViewById(R.id.linearLayoutRest);
        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.opencv_camera_surface_view);
        mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
        mOpenCvCameraView.setMaxFrameSize(1920, 1080);
        mOpenCvCameraView.setCvCameraViewListener(this);
        mOpenCvCameraView.disableView();
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (mView != null) {
            mView.onPause();
        }
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
        gl3_loaded = false;
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, this, mLoaderCallback);
        } else {
            mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }

        if (mView != null) {
            mView.onResume();
        }
    }
}

For how this file is put together, see the OpenCV camerapreview sample for opening the camera, and the mixedprocessing sample for loading a JNI C++ library into a Java program.

While we are on the subject, here is a Zhihu answer I wrote five years ago.

想用OpenCV做AR该如何入手? (How do I get started doing AR with OpenCV?) https://www.zhihu.com/question/26983174/answer/35328819

Back then I was using a Samsung Galaxy S III; Android was still at version 4.4, and the OpenCV SDK was version 2.4.13.

Writing an OpenCV + OpenGL Android AR program was very simple at the time, because the OpenCV Android SDK still supported the native camera, meaning the camera could be opened directly from OpenCV C++ code, and the AR scene could still be rendered with the OpenGL 1 fixed-function pipeline. Here is the 2014 code.

#include <jni.h>
#include <GLES/gl.h>
#include <GLES/glext.h>
#include <android/log.h>
#include <pthread.h>
#include <time.h>
#include <Math.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv/cxcore.h>
#include <vector>
#include "./build/include/TestDLL.h"
#include "./build/include/GeometryTypes.hpp"
#include "./build/include/CameraCalibration.hpp"
#include "./build/include/TransformPoints.hpp"
using namespace std;
using namespace cv;

#define LOG_TAG    "CAMERA_RENDERER"
#define LOG(...)  __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)

GLuint texture;
cv::VideoCapture capture;
cv::Mat buffer[30];
cv::Mat rgbFrame;
cv::Mat inframe;
cv::Mat outframe;
int bufferIndex;
int rgbIndex;
int frameWidth;
int frameHeight;
int screenWidth;
int screenHeight;
int orientation;
pthread_mutex_t FGmutex;
pthread_t frameGrabber;
pthread_attr_t attr;
struct sched_param param;

cv::Mat backPxls;
cv::Mat Rvec;
cv::Mat raux, taux;
cv::Mat_<float> Tvec;
cv::Mat camMatrix;
cv::Mat op;
GLfloat vertices[] = { -1.0f, -1.0f, 0.0f, // V1 - bottom left
        -1.0f, 1.0f, 0.0f, // V2 - top left
        1.0f, -1.0f, 0.0f, // V3 - bottom right
        1.0f, 1.0f, 0.0f // V4 - top right
        };
GLfloat textures[8];
vector<Point3f> modelPoints;
double _cm[9] = { 447.58258057, 0, 320, 0, 444.56341553,
        240, 0, 0, 1 };
CameraCalibration calibration(447.58258057, 444.56341553, 320, 240);
vector<Point2f> imagePoints;
bool m_isTextureInitialized;
bool isPatternPresent;
Transformation patternPose;
//=====
const GLfloat gVertices[] = { -1.0f, 0.990435689f, 0.64491773f, 1.0f,
        0.990435689f, 0.64491773f, 1.0f, -1.086405201f, 0.596322212f, -1.0f,
        -1.086405201f, 0.596322212f, -0.262226342f, 0.790170874f, 0.646569088f,
        0.262226342f, 0.790170874f, 0.646569088f, -0.293826291f, 0.20654355f,
        0.903273518f, 0.293826291f, 0.20654355f, 0.903273518f };

const GLfloat gColors[] = { 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f };

static GLfloat rtri;
static GLfloat rquad;

const GLfloat PI = 3.1415f;
//=====

TransformPoints trans;

extern "C" {
void drawBackground();
void createTexture();
void destroyTexture();
void *frameRetriever(void*);
void buildProjectionMatrix(const CameraCalibration& calibration,
        int screen_width, int screen_height, Matrix44& projectionMatrix);
void drawAugmentedScene();

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_initCamera(JNIEnv*,
        jobject, jint width, jint height)
{
    LOG("Camera Created");
    capture.open(CV_CAP_ANDROID + 1);
    capture.set(CV_CAP_PROP_FRAME_WIDTH, width);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, height);
    frameWidth = width;
    frameHeight = height;
    LOG("frameWidth = %d", frameWidth);
    LOG("frameHeight = %d", frameHeight);

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    memset(&param, 0, sizeof(param));
    param.sched_priority = 100;

    pthread_attr_setschedparam(&attr, &param);
    pthread_create(&frameGrabber, &attr, frameRetriever, NULL);
    pthread_attr_destroy(&attr);
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_surfaceChanged(JNIEnv*,
        jobject, jint width, jint height, jint orien)
{
    LOG("Surface Changed");
    LOG("surface changed screenwidth = %d,screenheight = %d", width, height);
    createTexture();
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_releaseCamera(JNIEnv*,
        jobject)
{
    LOG("Camera Released");
    capture.release();
    destroyTexture();
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_InitFaceDetect(JNIEnv* env, jobject)
{
    Init_Detect();
    modelPoints.push_back(Point3f(-0.186790768, 1, 0.969336463)); // 34
    modelPoints.push_back(Point3f(0.186790768, 1, 0.969336463)); // 29
    modelPoints.push_back(Point3f(-0.495664225, 0.699452693, 0.69330426)); // 27
    modelPoints.push_back(Point3f(0.495664225, 0.699452693, 0.69330426)); // 19
    modelPoints.push_back(Point3f(0, 0, 0.94963452)); // 52
    op = Mat(modelPoints);
    Scalar m = mean(Mat(modelPoints));
    op = op - m;
    op = op + Scalar(0, 0.5, 0);
    camMatrix = Mat(3, 3, CV_64F, _cm);
    trans.set_points_3d(modelPoints);

    LOG("InitFaceDetect after trans.get_points_3d()[0].x = %f, trans.get_points_3d()[0].y = %f, trans.get_points_3d()[0].z = %f",
            trans.get_points_3d()[0].x, trans.get_points_3d()[0].y, trans.get_points_3d()[0].z);
}

void createTexture()
{
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void destroyTexture()
{
    LOG("Texture destroyed");
    glDeleteTextures(1, &texture);
}

JNIEXPORT void JNICALL Java_hznu_whp_blognativecamera_Native_renderBackground(JNIEnv*,
        jobject)
{
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    drawBackground();
    drawAugmentedScene();
    glFlush();
}

void drawBackground()
{
    LOG("drawBackgroud start");
    int w, h;
    w = 640;
    h = 480;
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, texture);
    if (bufferIndex > 0) {
        pthread_mutex_lock(&FGmutex);
        cvtColor(buffer[(bufferIndex - 1) % 30], outframe, CV_BGR2BGR565);
        pthread_mutex_unlock(&FGmutex);
        cv::flip(outframe, rgbFrame, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB,
                GL_UNSIGNED_SHORT_5_6_5, rgbFrame.ptr());
    }
    LOG("drawBackgroud after pthread");
    const GLfloat bgTextureVertices[] = { 0, 0, w, 0, 0, h, w, h };
    const GLfloat bgTextureCoords[] = { 1, 0, 1, 1, 0, 0, 0, 1 };
    const GLfloat proj[] = { 0, -2.f / w, 0, 0, -2.f / h, 0, 0, 0, 0, 0, 1, 0,
            1, 1, 0, 1 };
    float outputpoints[3] = { 0, 0, 0 };
    vector<cv::Point3f> temp_output;
    cv::Point3f temp_point;
    temp_output.clear();
    LOG("drawBackground before Pixel2Vertex!");

    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(proj);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);

    // Update attribute values.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, bgTextureVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, bgTextureCoords);

    glColor4f(1, 1, 1, 1);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    glDisable(GL_TEXTURE_2D);
}

void buildProjectionMatrix(const CameraCalibration& calibration,
        int screen_width, int screen_height, Matrix44& projectionMatrix)
{
    float nearPlane = 0.01f;
    float farPlane = 100.0f;
    double max_w_h = MAX(640, 480);

    float f_x = 637.40528;
    float f_y = 638.37828;
    float c_x = 320;
    float c_y = 240;

    projectionMatrix.data[0] = -2.0f * f_x / 640;
    projectionMatrix.data[1] = 0.0f;
    projectionMatrix.data[2] = 0.0f;
    projectionMatrix.data[3] = 0.0f;

    projectionMatrix.data[4] = 0.0f;
    projectionMatrix.data[5] = 2.0f * f_y / 480;
    projectionMatrix.data[6] = 0.0f;
    projectionMatrix.data[7] = 0.0f;

    projectionMatrix.data[8] = 2.0f * c_x / 640 - 1.0f;
    projectionMatrix.data[9] = 2.0f * c_y / 480 - 1.0f;
    projectionMatrix.data[10] = 0
            - (farPlane + nearPlane) / (farPlane - nearPlane);
    projectionMatrix.data[11] = -1.0f;

    projectionMatrix.data[12] = 0.0f;
    projectionMatrix.data[13] = 0.0f;
    projectionMatrix.data[14] = 0
            - 2.0f * farPlane * nearPlane / (farPlane - nearPlane);
    projectionMatrix.data[15] = 0.0f;
}

void drawAugmentedScene()
{
    Matrix44 projectionMatrix;
    int w = 640;
    int h = 480;
    buildProjectionMatrix(calibration, w, h, projectionMatrix);

    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(projectionMatrix.data);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    Matrix44 glMatrix = patternPose.getMat44();
    glLoadMatrixf(reinterpret_cast<const GLfloat*>(&glMatrix.data[0]));

    GLfloat gVertices[8 * 3];
    for (int line_number_1 = 0; line_number_1 < 8; line_number_1++)
    {
        gVertices[line_number_1 * 3] = points_3d[line_number_1].x;
        gVertices[line_number_1 * 3 + 1] = points_3d[line_number_1].y;
        gVertices[line_number_1 * 3 + 2] = points_3d[line_number_1].z;
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glLineWidth(6);
    glColor4f(1, 0, 0, 1);
    glVertexPointer(3, GL_FLOAT, 0, gVertices);
    glDrawArrays(GL_LINES, 0, 8);
    glDisableClientState(GL_VERTEX_ARRAY);

    for (int line_number_1 = 0; line_number_1 < 8; line_number_1++)
    {
        gVertices[line_number_1 * 3 + 0] = output_points_3d[line_number_1].x;
        gVertices[line_number_1 * 3 + 1] = output_points_3d[line_number_1].y;
        gVertices[line_number_1 * 3 + 2] = output_points_3d[line_number_1].z;
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glLineWidth(6);
    glColor4f(0, 1, 0, 1);
    glVertexPointer(3, GL_FLOAT, 0, gVertices);
    glDrawArrays(GL_LINES, 0, 8);
    glDisableClientState(GL_VERTEX_ARRAY);
}

void *frameRetriever(void*)
{
    CvPoint2D32f *asmPT;
    IplImage temp_image;
    LOG("frameRetriever start");
    while (capture.isOpened())
    {
        capture.read(inframe);
        if (!inframe.empty())
        {
            LOG("frameRetriever before pthread_mutex_lock!");
            pthread_mutex_lock(&FGmutex);
            temp_image = inframe;
            asmPT = GetFacePoint(&temp_image);
            if (asmPT)
            {
                DrawFacePoint(&temp_image, asmPT);
                LOG("frameRetriever after DrawFacePoint!");
                imagePoints.clear();
                imagePoints.push_back(Point2f(asmPT[5].x, asmPT[5].y));
                imagePoints.push_back(Point2f(asmPT[12].x, asmPT[12].y));
                imagePoints.push_back(Point2f(asmPT[3].x, asmPT[3].y));
                imagePoints.push_back(Point2f(asmPT[11].x, asmPT[11].y));
                imagePoints.push_back(Point2f(asmPT[16].x, asmPT[16].y));
                trans.set_points_2d(imagePoints);
                LOG("frameRetriever after points_2d.clear()");
            }

            Mat ip(imagePoints);
            Mat img = inframe;

            double _dc[] = { 0, 0, 0, 0 };
            solvePnP(op, ip, camMatrix, Mat(1, 4, CV_64F, _dc), raux, taux);

            raux.convertTo(Rvec, CV_32F);
            taux.convertTo(Tvec, CV_32F);
            Mat_<float> rotMat(3, 3);
            Rodrigues(Rvec, rotMat);
            for (int col = 0; col < 3; col++) {
                for (int row = 0; row < 3; row++) {
                    patternPose.r().mat[row][col] = -rotMat(row, col);
                }
                patternPose.t().data[col] = Tvec(col);
            }
            patternPose = patternPose.getInverted();
            Mat temp_inframe;
            flip(inframe, temp_inframe, 1);
            temp_inframe.copyTo(buffer[(bufferIndex++) % 30]);
            pthread_mutex_unlock(&FGmutex);
        }
    }
    LOG("Camera Closed");
    pthread_exit(NULL);
}
}

But this code no longer runs at all. OpenCV no longer supports the native camera, so the camera has to be opened in Java code and the live video frames passed down to the JNI C++ functions. In addition, Android OpenGL today has largely dropped support for fixed-function GL 1 code.

All of the code for this article will be open-sourced on Github and Coding. Follow my Github and Coding pages for updates.

Github profile: https://github.com/forestsen

Coding profile: https://coding.net/u/forestsen