Custom Camera For Android

Android Custom Camera is based on Android's Camera2 API.

Hello there! This is my first story here on Medium, and today I'm addressing the Camera2 API. I did a lot of digging before I came up with my own wrapper around the Camera2 functionality, to improve the readability of the camera code and make it easier for you to use. Google did a fantastic job on the API, but I found the codebase quite complex, so I wrote my own implementation in Kotlin that can be embedded into your projects. I wrote the wrapper in such a way that you can further modify its functionality as you see fit. With it you will be able to take a picture and save it to external storage smoothly. I will try to explain the complex parts in simpler terms where possible. You can find the complete project with a sample on GitHub.

This implementation is based on Google's basic Camera2 API sample. I faced many issues with the custom camera activity I developed, because I couldn't get the code to run optimally on every device.

Hence there is no guarantee that it will behave the same on every device, and it may even behave abnormally on some. I also concluded that this may be why each device ships a different default camera app, regardless of API level: the camera hardware differs from device to device.

Content:
Android's Camera2 API is one of the hardest Android APIs to work with. The purpose of this tutorial is to give you basic know-how about how to properly consume the Camera2 API in a custom camera. For starters, you will be able to build a custom camera, take a picture, and provide the user with the ability to set the flash. The photo taken is then saved to external storage asynchronously using RxJava, for a smooth user experience. The Camera2 wrapper class is also left open to enhancements and modifications as your work requires.

Pre-Requisites:
We will be using Kotlin, so you must have that configured, along with RxJava for converting and saving the picture taken from the camera to storage. I assume you are familiar with the basics of reactive programming; even if you aren't, I'll leave some hints along the way. So, let's get to work.

First of all, you will need to add these two permissions to your AndroidManifest and also request them at runtime before letting the user open your camera activity. The permissions are required for using the camera and saving data to device storage.
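The two manifest entries in question are the standard camera and external-storage permissions, declared inside the `<manifest>` element of AndroidManifest.xml:

```xml
<!-- Required to open and control the device camera -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Required to write the captured picture to external storage -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```

Keep in mind that on Android 6.0 (API 23) and above both of these are dangerous-level permissions, so declaring them in the manifest is not enough; they must also be granted by the user at runtime.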


Then you will need to add the following dependency to your build.gradle (Module) file.

implementation 'io.reactivex.rxjava2:rxandroid:2.0.2'

The dependency above imports RxAndroid (and with it RxJava) into your project; together with Kotlin's extension features it helps keep the code concise.

Basically, AutoFitTextureView is a subclassed TextureView, a native component that provides the surface on which the camera preview is drawn, additionally extended so that an aspect ratio can be set on it. So, create a layout with it, and add buttons to enable/disable the flash, take a picture, and switch the camera between front and back. After this, declare a Camera2 field in your activity class, and then in the onCreate method initialize it with your texture view declared in the XML layout or in code.
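As a sketch, such a layout might look like the following. The view IDs match the ones used in the click listeners later in this article, but the package path of AutoFitTextureView, the choice of ImageView for the buttons, and the omitted positioning attributes are assumptions you should adapt to your own project:

```xml
<!-- Hypothetical layout sketch: adjust the AutoFitTextureView package and positioning to your project -->
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Surface on which the camera preview is drawn -->
    <com.example.camera.AutoFitTextureView
        android:id="@+id/camera_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!-- Flash mode toggles -->
    <ImageView
        android:id="@+id/iv_camera_flash_on"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
    <ImageView
        android:id="@+id/iv_camera_flash_auto"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
    <ImageView
        android:id="@+id/iv_camera_flash_off"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <!-- Capture and front/back switch buttons -->
    <ImageView
        android:id="@+id/captureimage"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
    <ImageView
        android:id="@+id/rotatecamera"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
</FrameLayout>
```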

// Field declared here; it must be initialized later.
private lateinit var camera2: Camera2

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_custom_camera_ui)
    camera2 = Camera2(camera_view) // like this
}

Now, after doing this, you need to call the wrapper class methods from your click handlers for the intended actions. Then, to start and close the camera, override the onResume and onPause methods. All the boilerplate has been taken care of inside the wrapper: the close() method efficiently releases the camera and the resources it holds, and the onResume() method starts or resumes the camera.

rotatecamera.setOnClickListener {
    if (camera2.isFlashEnabled()) {
        // update UI
    }
    camera2.switchCamera()
}

captureimage.setOnClickListener { v ->
    camera2.takePhoto { bitmap ->
        Toast.makeText(v.context, "Saving Picture", Toast.LENGTH_SHORT).show()
        Converters.convertBitmapToFile(bitmap) { file ->
            Toast.makeText(v.context, "Saved Picture Path ${file.path}", Toast.LENGTH_SHORT).show()
        }
    }
}
iv_camera_flash_on.setOnClickListener {
    camera2.setFlash(Camera2.FLASH.ON)
    it.alpha = 1f
    iv_camera_flash_auto.alpha = 0.4f
    iv_camera_flash_off.alpha = 0.4f
}

iv_camera_flash_auto.setOnClickListener {
    iv_camera_flash_off.alpha = 0.4f
    iv_camera_flash_on.alpha = 0.4f
    it.alpha = 1f
    camera2.setFlash(Camera2.FLASH.AUTO)
}

iv_camera_flash_off.setOnClickListener {
    camera2.setFlash(Camera2.FLASH.OFF)
    it.alpha = 1f
    iv_camera_flash_on.alpha = 0.4f
    iv_camera_flash_auto.alpha = 0.4f
}

override fun onPause() {
    camera2.close()
    super.onPause()
}

override fun onResume() {
    camera2.onResume()
    super.onResume()
}

A summary of the methods used above is listed below.

switchCamera(): This method switches your camera between the back-facing and front-facing lenses. If the device doesn't have a front camera, it prints the exception instead. A camera switch in Android requires a quick restart of the camera in order to load the new configuration.

// This method switches the camera lens between front and back by restarting the camera.
fun switchCamera() {
    close()
    cameraFacing = if (cameraFacing == CameraCharacteristics.LENS_FACING_BACK)
        CameraCharacteristics.LENS_FACING_FRONT
    else
        CameraCharacteristics.LENS_FACING_BACK
    onResume()
}

setFlash(ON/OFF/AUTO): This sets the flash mode for the camera's current capture session. It first checks whether a flash is supported; if it is, and the lens is rear-facing, the flash mode is applied to the current session (front-camera flash isn't supported by this wrapper yet). It has three possible values: AUTO for auto-flash, ON for continuous flash, and OFF. The default is AUTO.

fun setFlash(flash: FLASH) {
    this.flash = flash
    if (textureView.context.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA_FLASH)) {
        when (cameraFacing) {
            CameraCharacteristics.LENS_FACING_FRONT ->
                Log.e("Camera2", "Front Camera Flash isn't supported yet.")
        }
    }
}

takePhoto((Bitmap) -> Unit): This method captures the rendered image and returns it to your activity/fragment as a bitmap. It fires the flash, stills the preview, and checks whether the texture is available for rendering; if it is, a bitmap is taken from the view and the preview is restored.

fun takePhoto(onBitmapReady: (Bitmap) -> Unit) {
    this.onBitmapReady = onBitmapReady
    lockPreview()
}

So we have the image captured from the camera as a Bitmap. Now we need to save this image to device storage, but without compromising the user experience. Here we'll use RxJava. When the photo is captured, its bitmap is returned in the callback, so this is where you can do your image processing. Converting the Bitmap to a File takes some time depending on the size of the image, so make sure to update the UI accordingly.

Use the method in the Converters object to convert your bitmap to a File. The conversion method takes the bitmap instance to be converted and compressed, plus a callback that returns the result as a file written to device storage.

// Update the UI here to show progress for the image processing.
disposable = Converters.convertBitmapToFile(bitmap) { file ->
    Toast.makeText(context, "Saved Picture Path ${file.path}", Toast.LENGTH_SHORT).show()
}

Now let's see the magic behind this converter method. The RxJava chain runs the compression and conversion of the bitmap on a background thread, then delivers the result on the main thread. The subscription returns a Disposable, since it holds system resources until it is disposed.

The code below converts and compresses your bitmap to a file, then writes it to the public Pictures directory of external storage, under an "Android Custom Camera" folder.

// This subscription needs to be disposed of to release the system resources it holds.

@JvmStatic // required so that a caller class written in Java recognizes this method as static
fun convertBitmapToFile(bitmap: Bitmap, onBitmapConverted: (File) -> Unit): Disposable {
    return Single.fromCallable { compressBitmap(bitmap) }
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe({
            if (it != null) {
                Log.i("convertedPicturePath", it.path)
                onBitmapConverted(it)
            }
        }, { it.printStackTrace() })
}

private fun compressBitmap(bitmap: Bitmap): File? {
    // Create a file to write the bitmap data to.
    try {
        val myStuff = File(
            Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES),
            "Android Custom Camera"
        )
        if (!myStuff.exists()) myStuff.mkdirs()
        val picture = File(myStuff, "Mobin-" + System.currentTimeMillis() + ".jpeg")
        // Convert the bitmap to a byte array.
        val bos = ByteArrayOutputStream()
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100 /*ignored for PNG*/, bos)
        val bitmapData = bos.toByteArray()
        // Write the bytes to the file.
        val fos = FileOutputStream(picture)
        fos.write(bitmapData)
        fos.flush()
        fos.close()
        return picture
    } catch (e: IOException) {
        e.printStackTrace()
    }
    return null
}

Finally, override the onDestroy method of your activity to dispose of the resources held during conversion, and it's done. Voilà!

override fun onDestroy() {
disposable.dispose()
super.onDestroy()
}

Many thanks to everyone who reads this, and even more to those who use it. Let me know if you liked it, and which Android development topics you would like me to write about next. You can follow me here for updates and/or reach me on LinkedIn or Facebook. You can also help out greatly by starring and forking the project on GitHub.