I am trying to deploy the Android TensorFlow-Lite example, specifically the detector activity.
I deployed it successfully on a tablet. The app works well there: it detects objects, draws a bounding rectangle around them, and shows a label and a confidence score.
I then set up a Raspberry Pi 3 Model B board, installed Android Things on it, connected over ADB, and deployed the same program from Android Studio. However, the screen attached to the Raspberry Pi board stays blank.
After examining the Android Things camera demo tutorial at https://github.com/weichen2046/CameraDemoForAndroidThings, I had the idea of enabling hardware acceleration to support the camera preview. I added
android:hardwareAccelerated="true"
to the application tag of the manifest.
I also added the following inside the application tag:
<uses-library android:name="com.google.android.things" />
My activity tag contains an intent filter:
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.IOT_LAUNCHER" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
so that the TensorFlow app runs automatically after boot.
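Putting these manifest changes together, the relevant portion looks roughly like this (the activity name is illustrative; the rest matches what I added):

```xml
<application
    android:hardwareAccelerated="true">

    <!-- Required so the app can link against the Android Things support library. -->
    <uses-library android:name="com.google.android.things" />

    <activity android:name=".DetectorActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <!-- IOT_LAUNCHER makes Android Things start this activity on boot. -->
            <category android:name="android.intent.category.IOT_LAUNCHER" />
            <category android:name="android.intent.category.DEFAULT" />
        </intent-filter>
    </activity>
</application>
```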
I deployed the app again, but the same error persists: I cannot configure the preview capture session.
Here is the relevant code included in the TensorFlow example:
private void createCameraPreviewSession() {
  try {
    final SurfaceTexture texture = textureView.getSurfaceTexture();
    assert texture != null;

    // We configure the size of default buffer to be the size of camera preview we want.
    texture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());

    // This is the output Surface we need to start preview.
    final Surface surface = new Surface(texture);

    // We set up a CaptureRequest.Builder with the output Surface.
    previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    previewRequestBuilder.addTarget(surface);

    LOGGER.e("Opening camera preview: " + previewSize.getWidth() + "x" + previewSize.getHeight());

    // Create the reader for the preview frames.
    previewReader =
        ImageReader.newInstance(
            previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);
    previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
    previewRequestBuilder.addTarget(previewReader.getSurface());

    // Here, we create a CameraCaptureSession for camera preview.
    cameraDevice.createCaptureSession(
        Arrays.asList(surface, previewReader.getSurface()),
        new CameraCaptureSession.StateCallback() {

          @Override
          public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
            // The camera is already closed.
            if (null == cameraDevice) {
              return;
            }

            // When the session is ready, we start displaying the preview.
            captureSession = cameraCaptureSession;
            try {
              // Auto focus should be continuous for camera preview.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AF_MODE,
                  CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
              // Flash is automatically enabled when necessary.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

              // Finally, we start displaying the camera preview.
              previewRequest = previewRequestBuilder.build();
              captureSession.setRepeatingRequest(
                  previewRequest, captureCallback, backgroundHandler);
            } catch (final CameraAccessException e) {
              LOGGER.e(e, "Exception!");
              LOGGER.e("camera access exception!");
            }
          }

          @Override
          public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
            showToast("Failed");
            LOGGER.e("configure failed!!");
          }
        },
        null);
  } catch (final CameraAccessException e) {
    LOGGER.e("camera access exception!");
    LOGGER.e(e, "Exception!");
  }
}
The error is logged from the onConfigureFailed override, and the log entries leading up to that statement are:
11-12 14:02:40.677 1991-2035/org.tensorflow.demo E/CameraCaptureSession: Session 0: Failed to create capture session; configuration failed
11-12 14:02:40.679 1991-2035/org.tensorflow.demo E/tensorflow: CameraConnectionFragment: configure failed!!
However, I was unable to trace the `Session 0:` failure any further in the stack trace.
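One way I can try to get more context (assuming the board is still reachable over ADB) is to dump the full logcat buffer once and search it offline for the camera classes around the failure:

```
# Dump the whole buffer instead of streaming, so it can be searched offline.
adb logcat -d > pi_logcat.txt

# Look for the session failure and anything the camera stack logged around it.
grep -n -E "CameraCaptureSession|CameraDevice|camera" pi_logcat.txt
```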
Apart from enabling hardware acceleration and adding the few other tags to the manifest, I have not tried anything else.
I have done my research and looked at other examples, but they only take a picture on a button click. I need a working live camera preview.
I also have the CameraDemoForAndroidThings sample, but I am afraid I do not know Kotlin well enough to work out how it operates.
If anyone has managed to run the Java version of the TensorFlow detector activity on Android Things on a Raspberry Pi, please contribute and tell us how you did it.
UPDATE:
Apparently, the camera only supports one stream configuration at a time. I was also able to deduce that I had to modify the createCaptureSession() call to use only one surface, so my function now looks like this:
cameraDevice.createCaptureSession(
    // Arrays.asList(surface, previewReader.getSurface()),
    Arrays.asList(surface),
    new CameraCaptureSession.StateCallback() {

      @Override
      public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
        // The camera is already closed.
        if (null == cameraDevice) {
          return;
        }

        // When the session is ready, we start displaying the preview.
        captureSession = cameraCaptureSession;
        try {
          // Auto focus should be continuous for camera preview.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AF_MODE,
          //     CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
          // Flash is automatically enabled when necessary.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

          // Finally, we start displaying the camera preview.
          previewRequest = previewRequestBuilder.build();
          captureSession.setRepeatingRequest(
              previewRequest, captureCallback, backgroundHandler);
          previewRequestBuilder.addTarget(previewReader.getSurface());
        } catch (final CameraAccessException e) {
          LOGGER.e("exception hit while configuring camera!");
          LOGGER.e(e, "Exception!");
        }
      }

      @Override
      public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
        LOGGER.e("Configure failed!");
        showToast("Failed");
      }
    },
    null);
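To sanity-check what the Raspberry Pi camera HAL actually advertises, the stream configuration map can be queried before opening the session. A sketch, assuming a `cameraId` string is available and this runs inside the activity (so `getSystemService()` is in scope); note this lists supported output sizes per format rather than the simultaneous-stream limit itself:

```
CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);

// The map of output formats and sizes the camera HAL claims to support.
StreamConfigurationMap map =
    characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

for (Size size : map.getOutputSizes(ImageFormat.YUV_420_888)) {
  LOGGER.i("Supported YUV_420_888 size: " + size);
}
```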
This gives me a live preview. However, the code no longer passes images from the preview on to the processImage() block.
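For reference, in the original sample the frames reach processImage() through the ImageReader's OnImageAvailableListener; since previewReader.getSurface() is no longer among the session's surfaces, that listener presumably never fires. The relevant wiring from the original code is:

```
// Frames reach processImage() via this listener; it only fires when
// previewReader.getSurface() is one of the capture session's targets,
// which is exactly the target I had to remove.
previewReader =
    ImageReader.newInstance(
        previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);
previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
previewRequestBuilder.addTarget(previewReader.getSurface());
```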
Has anyone successfully implemented the TensorFlow-Lite example with a live camera preview on Android Things?