Android development: real-time processing of camera preview frames — a look at combining PreviewCallback, onPreviewFrame, and AsyncTask

The overall architecture was covered earlier, but many readers have questioned the processing inside onPreviewFrame(), arguing that the following conversion is redundant:
// Wrap the raw NV21 preview bytes, compress them to JPEG, then decode to a Bitmap
final YuvImage image = new YuvImage(mData, ImageFormat.NV21, w, h, null);
ByteArrayOutputStream os = new ByteArrayOutputStream(mData.length);
if (!image.compressToJpeg(new Rect(0, 0, w, h), 100, os)) {
    return null;
}
byte[] tmp = os.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(tmp, 0, tmp.length);
Since mData is already a byte[], the conversion chain is: byte[] → YuvImage → ByteArrayOutputStream → byte[] → Bitmap. At first glance that really does look redundant. But look at Google's API documentation:
public abstract void onPreviewFrame (byte[] data, Camera camera)
Added in API level 1
Called as preview frames are displayed. This callback is invoked on the event thread open(int) was called from.
If using the YV12 format, refer to the equations in setPreviewFormat(int) for the arrangement of the pixel data in the preview callback buffers.
Parameters
data the contents of the preview frame in the format defined by ImageFormat, which can be queried with getPreviewFormat(). If setPreviewFormat(int) is never called, the default will be the YCbCr_420_SP (NV21) format.
camera the Camera service object.
Roughly: the preview frame format can be queried with getPreviewFormat(). If setPreviewFormat(int) is never called, the default is the YCbCr_420_SP (NV21) format.
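As an illustration, here is a minimal sketch of querying and pinning the preview format with the android.hardware.Camera API this article is based on (the camera and TAG variables are assumed to exist in the surrounding code):

Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();

// NV21 is guaranteed to appear in this list on every device
List<Integer> formats = params.getSupportedPreviewFormats();
Log.i(TAG, "supported preview formats: " + formats);

// Explicitly request NV21 (the default anyway) before starting the preview
params.setPreviewFormat(ImageFormat.NV21);
camera.setParameters(params);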
The documentation for setPreviewFormat adds:
public void setPreviewFormat (int pixel_format)
Added in API level 1
Sets the image format for preview pictures.
If this is never called, the default format will be NV21, which uses the NV21 encoding format.
Use getSupportedPreviewFormats() to get a list of the available preview formats.
It is strongly recommended that either NV21 or YV12 is used, since they are supported by all camera devices.
For YV12, the image buffer that is received is not necessarily tightly packed, as there may be padding at the end of each row of pixel data, as described in YV12. For camera callback data, it can be assumed that the stride of the Y and UV data is the smallest possible that meets the alignment requirements. That is, if the preview size is width x height, then the following equations describe the buffer index for the beginning of row y for the Y plane and row c for the U and V planes:
yStride = (int) ceil(width / 16.0) * 16;
uvStride = (int) ceil( (yStride / 2) / 16.0) * 16;
ySize = yStride * height;
uvSize = uvStride * height / 2;
yRowIndex = yStride * y;
uRowIndex = ySize + uvSize + uvStride * c;
vRowIndex = ySize + uvStride * c;
size = ySize + uvSize * 2;
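For clarity, a direct Java transcription of those equations (a sketch: width and height are the preview size, y is a row index into the Y plane, c a row index into the U/V planes):

// YV12 row strides: Y rows and U/V rows are each padded to a 16-byte multiple
int yStride  = (int) Math.ceil(width / 16.0) * 16;
int uvStride = (int) Math.ceil((yStride / 2) / 16.0) * 16;
int ySize  = yStride * height;
int uvSize = uvStride * height / 2;
int yRowIndex = yStride * y;                    // start of row y in the Y plane
int uRowIndex = ySize + uvSize + uvStride * c;  // start of row c in the U plane
int vRowIndex = ySize + uvStride * c;           // start of row c in the V plane
int size = ySize + uvSize * 2;                  // total buffer size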
It strongly recommends NV21 or YV12, and the default is NV21, i.e. YUV420SP. So decoding mData directly with BitmapFactory, without any conversion, cannot succeed: BitmapFactory only understands compressed formats such as JPEG and PNG, and decodeByteArray() simply returns null for raw NV21 bytes. That is exactly what happens in practice; decoding mData directly produces a decode error:

[screenshot of the decode failure not reproduced here]
The docs for getSupportedPreviewFormats() likewise point out that NV21 is universal:
getSupportedPreviewFormats()
Added in API level 5
Gets the supported preview formats. NV21 is always supported. YV12 is always supported since API level 12.
If the YuvImage compress-then-decode round trip is too slow for you, the only option is to write the conversion yourself. Three variants are commonly seen online:
1. A skeleton of an encoding pipeline (see: Android 實時視訊采集—Camera預覽采集):
// Callback that receives each preview frame
mJpegPreviewCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // The incoming data is YUV420SP (NV21) by default
        try {
            Log.i(TAG, "going into onPreviewFrame");
            // mYUV420sp = data; // the raw YUV420SP data
            YUVIMGLEN = data.length;
            // Copy the raw YUV420SP frame under a semaphore so the encoder
            // thread never reads a half-written buffer
            mYuvBufferlock.acquire();
            System.arraycopy(data, 0, mYUV420SPSendBuffer, 0, data.length);
            // System.arraycopy(data, 0, mWrtieBuffer, 0, data.length);
            mYuvBufferlock.release();
            // Kick off the encoding thread (e.g. a JPEG-encoding thread).
            // Note: a Thread can only be start()ed once, so real code should
            // signal an already-running worker rather than call start() on
            // every frame.
            mSendThread1.start();
        } catch (Exception e) {
            Log.v("System.out", e.toString());
        }
    }
};
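As an aside, allocating and copying a fresh byte[] on every frame puts real pressure on the GC; since API 8 the callback-buffer variant avoids that. A minimal sketch (buffer sizing assumes the NV21 default of 12 bits per pixel):

Camera.Size size = camera.getParameters().getPreviewSize();
// NV21 uses 12 bits per pixel: width * height * 3 / 2 bytes per frame
byte[] buffer = new byte[size.width * size.height * 3 / 2];
camera.addCallbackBuffer(buffer);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... hand `data` to the worker thread ...
        camera.addCallbackBuffer(data); // return the buffer for reuse
    }
});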
2. Converting YUV420SP to RGB (see: android視訊采集). Note that this snippet comes from a desktop Java demo — DataBufferByte, Raster, and BufferedImage are java.awt classes that do not exist on Android — but the decodeYUV420SP() routine itself is the interesting part:
private void updateIM() {
    try {
        // Decode the YUV frame into the RGB byte buffer
        decodeYUV420SP(byteArray, yuv420sp, width, height);
        DataBuffer dataBuffer = new DataBufferByte(byteArray, numBands);
        WritableRaster wr = Raster.createWritableRaster(sampleModel,
                dataBuffer, new Point(0, 0));
        im = new BufferedImage(cm, wr, false, null);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

private static void decodeYUV420SP(byte[] rgbBuf, byte[] yuv420sp,
        int width, int height) {
    final int frameSize = width * height;
    if (rgbBuf == null)
        throw new NullPointerException("buffer 'rgbBuf' is null");
    if (rgbBuf.length < frameSize * 3)
        throw new IllegalArgumentException("buffer 'rgbBuf' size "
                + rgbBuf.length + " < minimum " + frameSize * 3);
    if (yuv420sp == null)
        throw new NullPointerException("buffer 'yuv420sp' is null");
    if (yuv420sp.length < frameSize * 3 / 2)
        throw new IllegalArgumentException("buffer 'yuv420sp' size "
                + yuv420sp.length + " < minimum " + frameSize * 3 / 2);

    int i = 0, y = 0;
    int uvp = 0, u = 0, v = 0;
    int y1192 = 0, r = 0, g = 0, b = 0;
    for (int j = 0, yp = 0; j < height; j++) {
        // Chroma rows start after the Y plane; one VU row serves two Y rows
        uvp = frameSize + (j >> 1) * width;
        u = 0;
        v = 0;
        for (i = 0; i < width; i++, yp++) {
            y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            // NV21 interleaves chroma as V,U; each pair covers two pixels
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            // Fixed-point YUV -> RGB (coefficients scaled by 1024)
            y1192 = 1192 * y;
            r = (y1192 + 1634 * v);
            g = (y1192 - 833 * v - 400 * u);
            b = (y1192 + 2066 * u);
            // Clamp to the 18-bit range so the final >> 10 yields 0..255
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgbBuf[yp * 3] = (byte) (r >> 10);
            rgbBuf[yp * 3 + 1] = (byte) (g >> 10);
            rgbBuf[yp * 3 + 2] = (byte) (b >> 10);
        }
    }
}

public static void main(String[] args) {
    Frame f = new FlushMe();
}
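On Android you would want an android.graphics.Bitmap rather than an AWT BufferedImage. Here is a sketch of a hypothetical helper (my own adaptation, not from the referenced article) that repacks the 3-bytes-per-pixel output of decodeYUV420SP() into packed ARGB:

// Repack the RGB byte triples into packed ARGB ints and wrap them in a Bitmap
public static Bitmap rgbBufToBitmap(byte[] rgbBuf, int width, int height) {
    int[] pixels = new int[width * height];
    for (int i = 0; i < pixels.length; i++) {
        pixels[i] = 0xff000000
                | ((rgbBuf[i * 3] & 0xff) << 16)
                | ((rgbBuf[i * 3 + 1] & 0xff) << 8)
                | (rgbBuf[i * 3 + 2] & 0xff);
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}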
三、将YUV420SP轉成YUV420格式
參考這裡:Android如何實作邊采集邊上傳
private byte[] changeYUV420SP2P(byte[] data, int length) {
    // Note: the frame size is hardcoded to QCIF (176x144) in the original;
    // in practice it should come from the camera's preview size
    int width = 176;
    int height = 144;
    byte[] str = new byte[length];
    // The Y plane is identical in both layouts: copy it straight across
    System.arraycopy(data, 0, str, 0, width * height);
    int strIndex = width * height;
    // NV21 interleaves chroma as V,U,V,U...; gather the U samples first
    for (int i = width * height + 1; i < length; i += 2) {
        str[strIndex++] = data[i];
    }
    // then the V samples, yielding planar Y + U + V (I420)
    for (int i = width * height; i < length; i += 2) {
        str[strIndex++] = data[i];
    }
    return str;
}
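In real code the hardcoded QCIF dimensions should come from the camera itself, e.g.:

Camera.Size previewSize = camera.getParameters().getPreviewSize();
int width = previewSize.width;   // instead of the hardcoded 176
int height = previewSize.height; // instead of the hardcoded 144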
As for extracting the Y component directly from YUV420SP for subsequent detection work, that still needs some study. Advice from anyone who knows is welcome.
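For what it's worth, NV21 lays out the full-resolution Y plane first, so pulling out the luma should be a single array copy; a minimal sketch (data is the callback frame, width and height the preview size):

// In NV21 the first width*height bytes are the luma (Y) plane
byte[] yPlane = new byte[width * height];
System.arraycopy(data, 0, yPlane, 0, width * height);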
----------------------------------------------------------------------------------------
This article is original work. Please credit the author when reposting: yanzi1225627