In the previous article we implemented AVI decoding on Android and displayed the result with a SurfaceView:
ffmpeg实战教程(七)Android CMake avi解码后SurfaceView显示
In this article we build on that to add filter effects: black-and-white, watermarks, and more.
If you are not yet familiar with FFmpeg, see: ffmpeg源码简析(一)结构总览
First, two screenshots of the results:
Black and white: const char *filters_descr = "lutyuv='u=128:v=128'";
Watermark: const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
In the earlier articles in this series we learned how to encode and decode audio and video with FFmpeg. Now let's look at libavfilter, the FFmpeg library that applies effects (filters) to audio and video.
The key libavfilter functions are the following (a minimal call-order sketch follows the list):
avfilter_register_all(): registers all AVFilters.
avfilter_graph_alloc(): allocates an AVFilterGraph.
avfilter_graph_create_filter(): creates a filter instance and adds it to a FilterGraph.
avfilter_graph_parse_ptr(): parses a graph described by a string and adds it to the FilterGraph.
avfilter_graph_config(): checks the FilterGraph's configuration and links it up.
av_buffersrc_add_frame(): pushes an AVFrame into the FilterGraph.
av_buffersink_get_frame(): pulls a filtered AVFrame out of the FilterGraph.
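To see how these fit together, here is a minimal call-order sketch (error handling and the AVFilterInOut endpoint setup are omitted; buffersrc_args stands for the parameter string we build later from the codec context):

// Minimal libavfilter lifecycle sketch (FFmpeg 3.x API, checks omitted).
avfilter_register_all();                              // register filters (once)
AVFilterGraph *graph = avfilter_graph_alloc();
AVFilterContext *src_ctx, *sink_ctx;
avfilter_graph_create_filter(&src_ctx, avfilter_get_by_name("buffer"),
                             "in", buffersrc_args, NULL, graph);   // frames enter here
avfilter_graph_create_filter(&sink_ctx, avfilter_get_by_name("buffersink"),
                             "out", NULL, NULL, graph);            // frames exit here
avfilter_graph_parse_ptr(graph, filters_descr, &inputs, &outputs, NULL);
avfilter_graph_config(graph, NULL);                   // validate links and formats
// ... then, for every decoded frame:
av_buffersrc_add_frame(src_ctx, frame);               // push into the graph
av_buffersink_get_frame(sink_ctx, frame);             // pull the filtered frame
// ... and at shutdown:
avfilter_graph_free(&graph);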
Today's sample program provides several effects to choose from (uncomment the one you want):
const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";
The black-and-white and watermark screenshots above use these two:
const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
For more filters, see the official documentation: http://www.ffmpeg.org/ffmpeg-filters.html
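Because filters_descr is hardcoded here, switching effects means recompiling. A natural extension (my sketch, not part of the demo; playWithFilter is a hypothetical method name) is to pass the filter string in from Java:

// Hypothetical variant: receive the filter description from Java.
JNIEXPORT jint JNICALL
Java_com_ws_ffmpegandroidavfilter_MainActivity_playWithFilter
        (JNIEnv *env, jclass clazz, jobject surface, jstring jdescr) {
    const char *descr = env->GetStringUTFChars(jdescr, NULL);
    // ... run the same pipeline as play(), using descr wherever
    // filters_descr is used ...
    env->ReleaseStringUTFChars(jdescr, descr);
    return 0;
}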
Now for the implementation.
In MainActivity we initialize a SurfaceView and declare a native function that passes the Surface down to native code (the native side renders the filtered frames into this Surface for display):
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view);
surfaceHolder = surfaceView.getHolder();
surfaceHolder.addCallback(this);
...
public native int play(Object surface);
play() is called from surfaceCreated():
@Override
public void surfaceCreated(SurfaceHolder holder) {
    // Decoding and rendering block, so keep them off the UI thread.
    new Thread(new Runnable() {
        @Override
        public void run() {
            play(surfaceHolder.getSurface());
        }
    }).start();
}
So the heart of the matter is what the JNI-level play() function does.
First, on top of the previous article's play(), we add the headers that libavfilter needs, inside the extern "C" block:
extern "C" {
...
//added by ws for AVfilter start
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
//added by ws for AVfilter end
};
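Note that this code targets the FFmpeg 3.x API. In FFmpeg 4.0 and later, avfiltergraph.h was removed and its declarations merged into avfilter.h, so there you would write instead:

#include <libavfilter/avfilter.h>  // FFmpeg >= 4.0: replaces avfiltergraph.h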
Then we declare the globals we will need:
//added by ws for AVfilter start
const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
//added by ws for AVfilter end
Now we can initialize the filter graph proper. There is a fair bit of code; it is easiest to follow with the key-function list above at hand.
//added by ws for AVfilter start----------init AVfilter--------------------------ws
char args[512];
int ret;
AVFilter *buffersrc = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("buffersink"); // newer FFmpeg builds name this "buffersink"
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };
AVBufferSinkParams *buffersink_params;
filter_graph = avfilter_graph_alloc();

/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
         pCodecCtx->time_base.num, pCodecCtx->time_base.den,
         pCodecCtx->sample_aspect_ratio.num, pCodecCtx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                   args, NULL, filter_graph);
if (ret < 0) {
    LOGD("Cannot create buffer source\n");
    return ret;
}

/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                   NULL, buffersink_params, filter_graph);
av_free(buffersink_params);
if (ret < 0) {
    LOGD("Cannot create buffer sink\n");
    return ret;
}

/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;

// avfilter_link(buffersrc_ctx, 0, buffersink_ctx, 0);
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
                                    &inputs, &outputs, NULL)) < 0) {
    LOGD("Cannot avfilter_graph_parse_ptr\n");
    return ret;
}
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
    LOGD("Cannot avfilter_graph_config\n");
    return ret;
}
//added by ws for AVfilter end------------init AVfilter------------------------------ws
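One optional improvement (mine, not in the demo): the LOGD calls above throw away the actual FFmpeg error code, which makes failures such as a mistyped filter string hard to diagnose. av_strerror(), declared in libavutil/error.h, converts ret into a readable message:

// Optional: log the underlying FFmpeg error text, not just a generic message.
char errbuf[AV_ERROR_MAX_STRING_SIZE];
av_strerror(ret, errbuf, sizeof(errbuf));
LOGD("avfilter_graph_config failed: %s\n", errbuf);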
Once initialization succeeds, we run each frame the decoder produces through the filter graph:
//added by ws for AVfilter start
pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);
// push the decoded frame into the filtergraph
if (av_buffersrc_add_frame(buffersrc_ctx, pFrame) < 0) {
    LOGD("Could not av_buffersrc_add_frame");
    break;
}
// pull the filtered frame back out
ret = av_buffersink_get_frame(buffersink_ctx, pFrame);
if (ret < 0) {
    LOGD("Could not av_buffersink_get_frame");
    break;
}
//added by ws for AVfilter end
The frame that comes back from the sink already has the effect applied.
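A note on frame handling: the demo reuses pFrame for both input and output, which works because av_buffersrc_add_frame() takes ownership of the frame's data and leaves it empty for av_buffersink_get_frame() to fill. FFmpeg's own filtering_video.c example uses a separate output frame instead, which also handles the case where one input frame yields several output frames; a minimal sketch of that pattern (render() is a placeholder for the sws_scale-and-draw code shown below):

// Alternative pattern, as in FFmpeg's filtering_video.c example.
AVFrame *filt_frame = av_frame_alloc();
if (av_buffersrc_add_frame(buffersrc_ctx, pFrame) < 0)
    break;
// One input frame may produce zero or more output frames.
while (av_buffersink_get_frame(buffersink_ctx, filt_frame) >= 0) {
    render(filt_frame);          // placeholder: convert and draw the frame
    av_frame_unref(filt_frame);  // release its data before the next pull
}
// av_frame_free(&filt_frame); when playback ends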
Remember to free the graph when you are done:
avfilter_graph_free(&filter_graph); //added by ws for avfilter
That completes today's feature.
I recommend reading this alongside the full code; otherwise it's like the blind men and the elephant.
This article may help you understand libavfilter:
libavfilter实践指南: http://blog.csdn.net/king1425/article/details/71215686
If you are not comfortable with C and JNI, see:
http://blog.csdn.net/King1425/article/category/6865816
Here are a few more effects, followed by the full source:
const char *filters_descr = "hue='h=60:s=-3'";
const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";
const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
Here is the JNI source:
#include <jni.h>
#include <android/log.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>
#include "native-lib.h"
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
//added by ws for AVfilter start
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
//added by ws for AVfilter end
};
#define LOG_TAG "ffmpegandroidplayer"
#define LOGD(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
//added by ws for AVfilter start
const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
//added by ws for AVfilter end
JNIEXPORT jint JNICALL
Java_com_ws_ffmpegandroidavfilter_MainActivity_play
        (JNIEnv *env, jclass clazz, jobject surface) {
    LOGD("play");
    // Path of the video file on the SD card; change it as needed or pass it in via JNI.
    char *file_name = "/storage/emulated/0/ws2.mp4";
    //char *file_name = "/storage/emulated/0/video.avi";
    av_register_all();
    avfilter_register_all();//added by ws for AVfilter
    AVFormatContext *pFormatCtx = avformat_alloc_context();
    // Open video file
    if (avformat_open_input(&pFormatCtx, file_name, NULL, NULL) != 0) {
        LOGD("Couldn't open file:%s\n", file_name);
        return -1; // Couldn't open file
    }
    // Retrieve stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        LOGD("Couldn't find stream information.");
        return -1;
    }
    // Find the first video stream
    int videoStream = -1, i;
    for (i = 0; i < pFormatCtx->nb_streams; i++) {
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO
            && videoStream < 0) {
            videoStream = i;
        }
    }
    if (videoStream == -1) {
        LOGD("Didn't find a video stream.");
        return -1; // Didn't find a video stream
    }
    // Get a pointer to the codec context for the video stream
    AVCodecContext *pCodecCtx = pFormatCtx->streams[videoStream]->codec;
    //added by ws for AVfilter start----------init AVfilter--------------------------ws
    char args[512];
    int ret;
    AVFilter *buffersrc = avfilter_get_by_name("buffer");
    AVFilter *buffersink = avfilter_get_by_name("buffersink"); // newer FFmpeg builds name this "buffersink"
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs = avfilter_inout_alloc();
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };
    AVBufferSinkParams *buffersink_params;
    filter_graph = avfilter_graph_alloc();
    /* buffer video source: the decoded frames from the decoder will be inserted here. */
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
             pCodecCtx->time_base.num, pCodecCtx->time_base.den,
             pCodecCtx->sample_aspect_ratio.num, pCodecCtx->sample_aspect_ratio.den);
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                       args, NULL, filter_graph);
    if (ret < 0) {
        LOGD("Cannot create buffer source\n");
        return ret;
    }
    /* buffer video sink: to terminate the filter chain. */
    buffersink_params = av_buffersink_params_alloc();
    buffersink_params->pixel_fmts = pix_fmts;
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                       NULL, buffersink_params, filter_graph);
    av_free(buffersink_params);
    if (ret < 0) {
        LOGD("Cannot create buffer sink\n");
        return ret;
    }
    /* Endpoints for the filter graph. */
    outputs->name = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx = 0;
    outputs->next = NULL;
    inputs->name = av_strdup("out");
    inputs->filter_ctx = buffersink_ctx;
    inputs->pad_idx = 0;
    inputs->next = NULL;
    // avfilter_link(buffersrc_ctx, 0, buffersink_ctx, 0);
    if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
                                        &inputs, &outputs, NULL)) < 0) {
        LOGD("Cannot avfilter_graph_parse_ptr\n");
        return ret;
    }
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
        LOGD("Cannot avfilter_graph_config\n");
        return ret;
    }
    //added by ws for AVfilter end------------init AVfilter------------------------------ws
    // Find the decoder for the video stream
    AVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL) {
        LOGD("Codec not found.");
        return -1; // Codec not found
    }
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
        LOGD("Could not open codec.");
        return -1; // Could not open codec
    }
    // Get the ANativeWindow from the Java Surface
    ANativeWindow *nativeWindow = ANativeWindow_fromSurface(env, surface);
    // Video dimensions
    int videoWidth = pCodecCtx->width;
    int videoHeight = pCodecCtx->height;
    // Set the native window buffer size; the window scales the buffer when drawing
    ANativeWindow_setBuffersGeometry(nativeWindow, videoWidth, videoHeight,
                                     WINDOW_FORMAT_RGBA_8888);
    ANativeWindow_Buffer windowBuffer;
    // Allocate video frame
    AVFrame *pFrame = av_frame_alloc();
    // Frame holding the RGBA data used for rendering
    AVFrame *pFrameRGBA = av_frame_alloc();
    if (pFrameRGBA == NULL || pFrame == NULL) {
        LOGD("Could not allocate video frame.");
        return -1;
    }
    // Determine required buffer size and allocate buffer;
    // the buffer holds the RGBA pixels that get rendered
    int numBytes = av_image_get_buffer_size(AV_PIX_FMT_RGBA, pCodecCtx->width, pCodecCtx->height,
                                            1);
    uint8_t *buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
    av_image_fill_arrays(pFrameRGBA->data, pFrameRGBA->linesize, buffer, AV_PIX_FMT_RGBA,
                         pCodecCtx->width, pCodecCtx->height, 1);
    // Decoded frames are not RGBA, so convert them before rendering
    struct SwsContext *sws_ctx = sws_getContext(pCodecCtx->width,
                                                pCodecCtx->height,
                                                pCodecCtx->pix_fmt,
                                                pCodecCtx->width,
                                                pCodecCtx->height,
                                                AV_PIX_FMT_RGBA,
                                                SWS_BILINEAR,
                                                NULL,
                                                NULL,
                                                NULL);
    int frameFinished;
    AVPacket packet;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if (packet.stream_index == videoStream) {
            // Decode video frame
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            // A single decode call does not always produce a complete frame
            if (frameFinished) {
                //added by ws for AVfilter start
                pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);
                // push the decoded frame into the filtergraph
                if (av_buffersrc_add_frame(buffersrc_ctx, pFrame) < 0) {
                    LOGD("Could not av_buffersrc_add_frame");
                    break;
                }
                // pull the filtered frame back out
                ret = av_buffersink_get_frame(buffersink_ctx, pFrame);
                if (ret < 0) {
                    LOGD("Could not av_buffersink_get_frame");
                    break;
                }
                //added by ws for AVfilter end
                // lock native window buffer
                ANativeWindow_lock(nativeWindow, &windowBuffer, 0);
                // Convert to RGBA
                sws_scale(sws_ctx, (uint8_t const *const *) pFrame->data,
                          pFrame->linesize, 0, pCodecCtx->height,
                          pFrameRGBA->data, pFrameRGBA->linesize);
                // Copy into the window buffer
                uint8_t *dst = (uint8_t *) windowBuffer.bits;
                int dstStride = windowBuffer.stride * 4;
                uint8_t *src = (pFrameRGBA->data[0]);
                int srcStride = pFrameRGBA->linesize[0];
                // The window stride and the frame stride differ, so copy line by line
                int h;
                for (h = 0; h < videoHeight; h++) {
                    memcpy(dst + h * dstStride, src + h * srcStride, srcStride);
                }
                ANativeWindow_unlockAndPost(nativeWindow);
            }
        }
        av_packet_unref(&packet);
    }
    sws_freeContext(sws_ctx);
    av_free(buffer);
    av_frame_free(&pFrameRGBA);
    // Free the YUV frame
    av_frame_free(&pFrame);
    avfilter_graph_free(&filter_graph); //added by ws for avfilter
    // Close the codecs
    avcodec_close(pCodecCtx);
    // Close the video file
    avformat_close_input(&pFormatCtx);
    ANativeWindow_release(nativeWindow);
    return 0;
}
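One caveat about the decoding call itself: avcodec_decode_video2() is deprecated in FFmpeg 3.1+, where the send/receive API replaces it. If you build against a newer FFmpeg, the loop body would look roughly like this (a sketch under that assumption, reusing pCodecCtx, packet, and pFrame from above):

// FFmpeg 3.1+ decode sketch, replacing avcodec_decode_video2().
if (avcodec_send_packet(pCodecCtx, &packet) == 0) {
    while (avcodec_receive_frame(pCodecCtx, pFrame) == 0) {
        // ... filter and render pFrame exactly as above ...
    }
}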
Finally, native-lib.h:
#include <jni.h>
#ifndef FFMPEGANDROID_NATIVE_LIB_H
#define FFMPEGANDROID_NATIVE_LIB_H
#ifdef __cplusplus
extern "C" {
#endif
JNIEXPORT jint JNICALL Java_com_ws_ffmpegandroidavfilter_MainActivity_play
(JNIEnv *, jclass, jobject);
#ifdef __cplusplus
}
#endif
#endif
demo:https://github.com/WangShuo1143368701/FFmpegAndroid/tree/master/ffmpegandroidavfilter