Comments (17)
Google, is there a proper workaround for this issue? There are many questions on Stack Overflow about the dark preview produced by this demo, and no good solution for it. It would be great if you could suggest a fix.
from android-camera2basic.
This is frustrating because I know the cause of this bug, but I don't know of an adequate solution that's easy to implement.
The problem is that for some reason the surface that's displaying the captured preview frames is acting as if the colors need to be converted from 'tv' ranges to 'full' ranges. This might be because the frames are captured in a YUV format. Some background:
TV signals (which also use YUV, or in the US often YIQ which is similar) need to have some room for the signals to go too high or too low, for various reasons. Thus, for the Y channel the range is 16-235 instead of from 0-255, and for the U and V channels the range is 16-240 (again instead of 0-255).
Most YUV signals are encoded this way, such as mp4 files and so on... But jpeg (and motion jpeg) are not encoded this way - instead using 0-255 for all channels. Overall it depends on what the hardware reports.
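To make the ranges concrete, here is a small standalone sketch (mine, not from the sample) of the linear rescale between full (0-255) and limited/'tv' luma ranges; chroma works the same way but with 16-240 (a 224/255 scale) instead:

```java
public class RangeScale {
    // Full-range (0-255) luma to limited/"tv" range (16-235), and back.
    // Chroma would use 16-240 (scale 224/255) instead of 16-235 (scale 219/255).
    static int fullToLimitedLuma(int y) {
        return Math.round(y * 219f / 255f) + 16;    // 0 -> 16, 255 -> 235
    }

    static int limitedToFullLuma(int y) {
        int v = Math.round((y - 16) * 255f / 219f); // 16 -> 0, 235 -> 255
        return Math.max(0, Math.min(255, v));       // clamp out-of-range "tv" values
    }

    public static void main(String[] args) {
        System.out.println(fullToLimitedLuma(255)); // 235
        System.out.println(limitedToFullLuma(16));  // 0
    }
}
```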
Now, how do I know this is the fault of the surface view? Because in the 'Shader Editor' app, I can rescale the color values from the camera and fix them. And yes, it pulls values out from being negative... Which means that the values are being sent to the texture correctly, but the surface is rescaling and displaying the colors incorrectly.
After some Googling, it seems that some people report this behavior on certain devices while other devices are fine. I've just spent the last hour or two tonight trying to find somewhere to set whether the preview surface uses 'tv' or 'pc' ranges (sometimes known as 'limited' vs. 'full'), and haven't found anything.
It really doesn't help that 'limited' and 'full' are both the technical terms for this setting and the terms Android uses to distinguish levels of support for the Camera2 API. A terrible naming conflict, in my opinion.
Basically... The 'fix' is to write some GLSL code that rescales the values. Woo :/
Has anyone found a solution for this? It is really painful.
I guess configuring the FPS range can help.
private Range<Integer> fpsRange; // chosen AE target FPS range

private void initFPS(CameraCharacteristics cameraCharacteristics) {
    try {
        Range<Integer>[] ranges = cameraCharacteristics.get(
                CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
        if (ranges != null) {
            for (Range<Integer> range : ranges) {
                int upper = range.getUpper();
                Log.i("Camera", "[FPS Range Available] is: " + range);
                // Prefer the range with the lowest upper bound that is still >= 10 fps
                if (upper >= 10 && (fpsRange == null || upper < fpsRange.getUpper())) {
                    fpsRange = range;
                }
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    Log.i("Camera", "[FPS Range] is: " + fpsRange);
}
And add this config to builders:
if (fpsRange != null) {
    captureRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, fpsRange);
    captureRequestBuilderImageReader.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, fpsRange);
}
The issue happens on some devices whose default FPS range has equal lower and upper bounds, which limits how the AE algorithm can adjust to light changes. In addition, some devices report the FPS ranges multiplied by 1000, which needs to be normalized.
This is my current solution. It seeks an FPS range between 0 and 30 (you can change these constants), picking up the range with the widest spread between the lower and upper bounds, and also taking care of normalizing 1000 ranges. Please add any modifications you think it may need to improve it.
Take into account that this method is only meant for photos, as for videos it makes sense to have an FPS range with equal lower and upper bounds.
@Nullable
public static Range<Integer> getOptimalFpsRange(@NonNull final CameraCharacteristics characteristics) {
    final int MIN_FPS_RANGE = 0;
    final int MAX_FPS_RANGE = 30;
    final Range<Integer>[] rangeList =
            characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
    if (rangeList == null || rangeList.length == 0) {
        Log.e(TAG, "Failed to get FPS ranges.");
        return null;
    }
    Range<Integer> result = null;
    for (final Range<Integer> entry : rangeList) {
        int candidateLower = entry.getLower();
        int candidateUpper = entry.getUpper();
        if (candidateUpper > 1000) {
            Log.w(TAG, "Device uses FPS ranges in a 1000 scale. Normalizing.");
            candidateLower /= 1000;
            candidateUpper /= 1000;
        }
        // Discard candidates with equal or out-of-range bounds
        final boolean discard = (candidateLower == candidateUpper)
                || (candidateLower < MIN_FPS_RANGE)
                || (candidateUpper > MAX_FPS_RANGE);
        if (!discard) {
            // Update if none chosen yet, or if the candidate has an upper bound
            // and a spread at least as large as the current result's
            final boolean update = (result == null)
                    || ((candidateUpper >= result.getUpper())
                        && ((candidateUpper - candidateLower) >= (result.getUpper() - result.getLower())));
            if (update) {
                result = Range.create(candidateLower, candidateUpper);
            }
        }
    }
    return result;
}
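To see what this selection actually does, here's a plain-JVM sketch of the same logic using int pairs instead of android.util.Range, with made-up candidate data (the class name and the example ranges are mine, for illustration only):

```java
public class FpsRangeDemo {
    static final int MIN_FPS = 0;
    static final int MAX_FPS = 30;

    // Same selection logic as getOptimalFpsRange, on plain {lower, upper} int pairs
    static int[] pickRange(int[][] candidates) {
        int[] result = null;
        for (int[] c : candidates) {
            int lower = c[0], upper = c[1];
            if (upper > 1000) {       // normalize 1000-scale ranges
                lower /= 1000;
                upper /= 1000;
            }
            boolean discard = lower == upper || lower < MIN_FPS || upper > MAX_FPS;
            if (!discard && (result == null
                    || (upper >= result[1] && upper - lower >= result[1] - result[0]))) {
                result = new int[] { lower, upper };
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical ranges as some devices report them:
        // {15000,15000} is normalized to {15,15} and discarded (equal bounds),
        // {30,30} is discarded, and {7,30} beats {15,30} on spread.
        int[][] reported = { {15000, 15000}, {7, 30}, {15, 30}, {30, 30} };
        int[] chosen = pickRange(reported);
        System.out.println(chosen[0] + ".." + chosen[1]); // 7..30
    }
}
```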
I am experiencing the same issue on Sony Z5 (android 6)
I did not manage to find a good solution. For now I stopped trying.
This is what I did to stop this problem blocking me:
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, 12);
mPreviewRequestBuilder.set(CaptureRequest.COLOR_CORRECTION_GAINS, new RggbChannelVector(86, 86, 86, 86));
The above had no effect for me, but this did:
builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
Experiencing this on a OnePlus 3 (API 25); it worked fine on my Note 4 running an older API version. Neither of the above solutions worked for me.
I am experiencing the same issue on Xiaomi 6 (android 7.1.1)
As a followup to the late-night rant I typed up last night, I figured you might appreciate the GLSL code I've found works best. I used to just directly rescale the RGB values, but the colors remained slightly 'off'. I've since found that the best approach seems to be converting into YUV first, rescaling, and finally converting back into RGB.
Here's the code, at least for Shader Editor:
#version 300 es
#extension GL_OES_EGL_image_external_essl3: require
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
// Tell Shader Editor to provide a texture for the back camera
uniform vec2 resolution;
uniform vec2 cameraAddent;
uniform mat2 cameraOrientation;
uniform samplerExternalOES cameraBack;
// For consistency with previous versions of GLSL ES
out vec4 gl_FragColor;
// YUV matrix for BT.601
const mat3 BT601 = mat3(
    0.299, -299.0/1772.0, 0.5,
    0.587, -587.0/1772.0, -587.0/1402.0,
    0.114, 0.5, -57.0/701.0
);
void main(void) {
    vec3 color;
    vec2 texCoord = gl_FragCoord.xy / resolution;
    // Orient the camera texture correctly
    texCoord = texCoord * cameraOrientation;
    texCoord += cameraAddent;
    color = texture(cameraBack, texCoord).rgb;
    // Fix incorrect scaling of YUV values by Android
    color = BT601 * color;
    color.rgb *= vec3(219.0/255.0, vec2(224.0/255.0));
    color.r += 16.0/255.0;
    color = inverse(BT601) * color;
    // Old color correction (doesn't use YUV)
    //color = color*73.0/85.0 + 16.0/255.0;
    gl_FragColor = vec4(color, 1.0);
}
For devices not supporting OpenGL ES 3.0 and up, you have to provide a pre-computed inverse matrix (and change a few other minor tidbits):
#extension GL_OES_EGL_image_external : require
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
// Tell Shader Editor to use the back camera
uniform vec2 resolution;
uniform vec2 cameraAddent;
uniform mat2 cameraOrientation;
uniform samplerExternalOES cameraBack;
// YUV matrix for BT.601
const mat3 BT601 = mat3(
    0.299, -299.0/1772.0, 0.5,
    0.587, -587.0/1772.0, -587.0/1402.0,
    0.114, 0.5, -57.0/701.0
);
// Inverse of the YUV matrix for BT.601
const mat3 BT601_INV = mat3(
    1.0, 1.0, 1.0,
    0.0, -25251.0/73375.0, 1.772,
    1.402, -209599.0/293500.0, 0.0
);
void main(void) {
    vec3 color;
    vec2 texCoord = gl_FragCoord.xy / resolution;
    // Orient the camera texture correctly
    texCoord = texCoord * cameraOrientation;
    texCoord += cameraAddent;
    color = texture2D(cameraBack, texCoord).rgb;
    // Fix incorrect scaling of YUV values by Android
    color = BT601 * color;
    color.rgb *= vec3(219.0/255.0, vec2(224.0/255.0));
    color.r += 16.0/255.0;
    color = BT601_INV * color;
    // Old color correction (doesn't use YUV)
    //color = color*73.0/85.0 + 16.0/255.0;
    gl_FragColor = vec4(color, 1.0);
}
It's honestly probably much more efficient to use a pre-converted inverse matrix anyway, so you might want to use it even in OpenGL ES 3.0+. I don't because I kinda like having as few things defined as possible, and instead deriving everything from a minimal set of data. But that's just me.
The matrices I provide here should be as accurate as possible, as I derived them myself with a calculator application (Qalculate) that lets me see the results as fractions. It may not be a 'minimal set of data', but it's more accurate than any rounding one could do!
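If you want to double-check those fractions yourself, multiplying the forward matrix by the given inverse should give (numerically) the identity. Here's a small JVM sketch of that check (the class is mine, not part of the shader; the matrices are written row-major here, whereas GLSL mat3 constructors are column-major):

```java
public class Bt601Check {
    // Row-major BT.601 RGB -> YCbCr matrix, same fractions as the shader's BT601
    static final double[][] FWD = {
        {  0.299,            0.587,            0.114          },
        { -299.0 / 1772.0,  -587.0 / 1772.0,   0.5            },
        {  0.5,             -587.0 / 1402.0,  -57.0 / 701.0   },
    };
    // Row-major YCbCr -> RGB matrix, same fractions as the shader's BT601_INV
    static final double[][] INV = {
        { 1.0,  0.0,                  1.402                },
        { 1.0, -25251.0 / 73375.0,   -209599.0 / 293500.0  },
        { 1.0,  1.772,                0.0                  },
    };

    // Largest deviation of INV * FWD from the identity matrix
    static double maxErrorFromIdentity() {
        double max = 0;
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                double sum = 0;
                for (int k = 0; k < 3; k++) sum += INV[i][k] * FWD[k][j];
                max = Math.max(max, Math.abs(sum - (i == j ? 1 : 0)));
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // Prints something on the order of 1e-16: the fractions are exact
        // inverses up to double rounding.
        System.out.println(maxErrorFromIdentity());
    }
}
```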
I use BT.601 because it's what the JPEG standard specifies, and because ffmpeg's swscale library (used by default as ffmpeg's main library for converting to/from YUV and for resizing frames) uses it as the default/fallback YUV matrix when it can't determine which one to use, or when the user doesn't specify one. As a result, it's what's in use the vast majority of the time, especially when someone makes a mistake.
And by the way, before anyone says anything: yes, I understand the difference between YUV and YCbCr, and yes, I know I'm talking almost exclusively about YCbCr in these posts. However, YUV is the most useful thing to call it in this context, since YCbCr is essentially the digital version of YUV, and most people, when talking about software colorspaces, call it YUV - including ffmpeg's code, documentation, and command-line parameters.
Just figured I'd get that out of the way in case anyone felt like correcting me :)
Experiencing this on a Meizu PRO 6s (API 25), and setting CONTROL_AE_EXPOSURE_COMPENSATION doesn't work for me.
I can confirm that @Djek-grif's solution works.
Could anybody explain the meaning of the values in the Range, to help make the most appropriate choice? At the moment I have settled on choosing the widest available range, [5, 30] in my case, because the default uses the maximum, [30, 30], which makes the preview dark on the LG devices I have tested. Given this and other LG issues, it seems the older Camera API would suit LG devices (or limited-level cameras in general) better than Camera2.
I have the same issue, and I solved it on other devices with this method from Stack Overflow.
For capture:
// set brightness
captureBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange());
For preview:
// set brightness
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange());
And this method for getRange():
/**
 * A {@link CameraCharacteristics} helper so the camera preview is not dark.
 */
private Range<Integer> getRange() {
    CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        CameraCharacteristics chars = manager.getCameraCharacteristics(mCameraId);
        Range<Integer>[] ranges = chars.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
        Range<Integer> result = null;
        for (Range<Integer> range : ranges) {
            int upper = range.getUpper();
            // 10 - min range upper for my needs
            if (upper >= 10 && (result == null || upper < result.getUpper())) {
                result = range;
            }
            Log.e("Available frame fps:", "" + range);
        }
        if (result == null) {
            result = ranges[0];
        }
        Log.e("frame fps:", "" + result);
        return result;
    } catch (CameraAccessException e) {
        e.printStackTrace();
        return null;
    }
}
This works on most devices, but I got a complaint from a client that it doesn't work on some Sony devices (not all of them).
Please share if you have any new ideas for solving this issue. Thanks.
And @Djek-grif, if I use your method it does make the preview brighter, but the camera shows low quality.
Same as my testing with the getOptimalFpsRange solution above.
Same issue on Asus Zenfone 3 and Asus Zenfone 5.
Setting up the FPS range does not help at all.
This sample has been migrated to a new location where we can accept Pull Requests (check README for more information).
As recommended by GitHub, we are closing all issues and pull requests now that this older repo will be archived.
If you still see this issue in the updated repo, please reopen the issue/PR there. Thank you!