#### 1. Common FFmpeg Commands

- Video recording commands
- Demuxing/muxing commands for multimedia files
- Trimming and concatenation commands
- Live-streaming commands
- Filter commands

1. Video -- H264 encoding/decoding
2. Audio -- AAC encoding/decoding

#### 2. Application Scenarios

- Live streaming: audio/video conferencing, online education, entertainment/games
- Short video: Douyin (TikTok), Kuaishou
- Online video: Tencent Video, Youku, iQIYI
- Video calls: WeChat, QQ
- Video surveillance: kindergartens, parking lots
- AI: face recognition in video, smart speakers

#### 3. Basic Flow

1. Demux --- decode audio/video (FFmpeg) --- play audio / render video (SDL)
2. YUV data --- renderer turns it into a texture --- GPU computes and swaps buffers --- window displays it

#### 4. History of FFmpeg

- 2000: created by Fabrice Bellard
- 2004: taken over by Michael Niedermayer
- 2011: Libav forked from FFmpeg

#### 5. Download

```
git clone https://git.ffmpeg.org/ffmpeg.git

./configure --list-filters
./configure --help | more

--disable-gpl
```

#### 6. FFmpeg Command Categories

- Basic information queries
- Recording
- Demuxing/muxing
- Raw data processing
- Trimming and concatenation
- Image/video conversion
- Live streaming
- Filters

#### 7. FFmpeg Processing Pipeline

input file -- demuxer -- encoded packets -- decoder -- decoded frames -- encoder -- encoded packets -- muxer -- output file

#### 8. FFmpeg Commands: Basic Information Queries

| Command | Description |
| --- | --- |
| -version | show version |
| -demuxers | show available demuxers |
| -muxers | show available muxers |
| -devices | show available devices |
| -codecs | show all codecs |
| -decoders | show available decoders |
| -encoders | show all encoders |
| -bsfs | show bitstream filters |
| -formats | show available formats |
| -protocols | show available protocols |
| -filters | show available filters |
| -pix_fmts | show available pixel formats |
| -sample_fmts | show available sample formats |
| -layouts | show channel names and layouts |
| -colors | show recognized color names |

#### 9. FFmpeg Commands: Recording

```
1. Record the screen: ffmpeg -f avfoundation -i 1 -r 30 out.yuv
-f: capture with avfoundation
-i: where to capture from; a device index
-r: frame rate
out.yuv: file the captured data is written to

2. Record audio: ffmpeg -f avfoundation -i :0 out.wav
-f: capture with avfoundation
-i: where to capture from; a device index (":0" selects audio device 0; the number before the colon selects video)
out.wav: file the captured data is written to
```

#### 10. FFmpeg Commands: Demuxing and Muxing

input file -- demuxer -- encoded packets -- muxer -- output file

```
1. Container format conversion: ffmpeg -i out.mp4 -vcodec copy -acodec copy out.flv
-i: input file
-vcodec copy: pass the video stream through without re-encoding
-acodec copy: pass the audio stream through without re-encoding

2. Extract video: ffmpeg -i f35.mp4 -an -vcodec copy out.h264
3. Extract audio: ffmpeg -i f35.mp4 -vn -acodec copy out.aac
```

#### 11. FFmpeg Commands: Processing Raw Data

Raw data is what FFmpeg produces after decoding: PCM for audio, YUV for video.

```
1. Extract YUV data: ffmpeg -i input.mp4 -an -c:v rawvideo -pix_fmt yuv420p out.yuv
-i: input file
-an: audio no, i.e. drop the audio
-c:v: video codec; here the rawvideo "codec" writes raw frames
-pix_fmt: pixel format

2. Extract PCM data: ffmpeg -i out.mp4 -vn -ar 44100 -ac 2 -f s16le out.pcm
-i: input file
-vn: video no, i.e. drop the video
-ar: audio sample rate
-ac: audio channel count (mono, stereo, surround, ...)
-f: storage format of the extracted data


When playing raw data with ffplay, some parameters must be given so the data can be interpreted:
video: -video_size resolution, -pixel_format pixel format, e.g. ffplay -f rawvideo -video_size 640x480 -pixel_format yuv420p out.yuv
audio: -ar sample rate, -ac channel count, -f storage format, e.g. ffplay -ar 44100 -ac 2 -f s16le out.pcm
```

#### 12. FFmpeg Commands: Filters

Watermarks, logos, picture-in-picture, and so on. Filters operate on decoded frames:

decoded frames -- filter -- filtered frames -- encoder -- encoded data

```
1. Crop a video: ffmpeg -i in.mov -vf crop=in_w-200:in_h-200 -c:v libx264 -c:a copy out.mp4
-i: input file
-vf: video filter; crop takes the output width and height as w:h
-c:v: video encoder
-c:a: audio encoder
```

#### 13. FFmpeg Commands: Trimming and Concatenation

```
1. Trim a video: ffmpeg -i in.mp4 -ss 00:00:00 -t 10 out.ts
-i: input file
-ss: start position of the cut
-t: duration of the cut

2. Concatenate videos: ffmpeg -f concat -i inputs.txt out.flv
-f concat: use the concat demuxer
-i: list of input files; each line of inputs.txt has the form file 'filename', e.g. file '1.ts'
```

#### 14. FFmpeg Commands: Converting Between Images and Video

For machine learning, a video can be sliced into individual images and fed to image recognition; conversely, a set of images can be assembled into a video.

```
1. Video to images: ffmpeg -i in.flv -r 1 -f image2 image-%3d.jpeg
-i: input file
-r: frame rate of the conversion (here one image per second)
-f: output format

2. Images to video: ffmpeg -i image-%3d.jpeg out.mp4
-i: input files
```

#### 15. FFmpeg Commands: Live Publishing and Pulling

```
1. Publish a stream: ffmpeg -re -i out.mp4 -c copy -f flv rtmp://server/live/streamName
-re: read the input at its native frame rate, keeping playback speed in sync
-i: file to publish
-c: audio/video codec handling (:a audio, :v video); copy passes streams through unchanged
-f: output format for publishing

2. Pull a stream: ffmpeg -i rtmp://server/live/streamName -c copy dump.flv
-i: RTMP address where the live stream is published
-c: audio/video codec handling
```

#### 16. The vim Editor

- Command mode: copy/delete/paste; i or a switches to insert mode
- Insert mode: Esc returns to command mode
- Create a file: vim filename
- Save: :w
- Quit: :q
- Save and quit: :wq
- Copy: yy / yw (yy copies a line, yw copies a word)
- Paste: p
- Delete: dd / dw (dd deletes a line, dw deletes a word)
- Left/down/up/right: h/j/k/l
- Jump to the start of the file: gg
- Jump to the end of the file: G
- Move to the start of the line: ^
- Move to the end of the line: $
- Move by word: forward w/2w, backward b/2b
- Search: /keyword, n next match, N previous match
- Search and replace: :%s/keyword/replacement/gc (c asks for confirmation on each match)
- Split windows: :split / :vsplit
- Jump between windows: Ctrl+w w or Ctrl+w h/j/k/l; Ctrl+w = equalizes window sizes; Ctrl+w | maximizes the current window's width; Ctrl+w _ maximizes its height

#### 17. C Basics

```c
#include <stdio.h> // standard I/O header

int main(int argc, char* argv[]) { // entry point
    int a = 100;
    float b = 7.79;
    char c = 't'; // a char holds a single character
    printf("Hello World!\n");
    printf("a=%d\n", a);
    printf("b=%f\n", b);
    printf("c=%c\n", c);
    return 0;
}

// clang -g -o helloworld helloworld.c   (-g emits debug info, -o names the output executable)
// ./helloworld
```

#### 18. C Basics: Common Primitive Types

- short, int, long
- float, double
- char
- void
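
A quick way to see how large these types are on the current platform is sizeof; a minimal sketch (the file name sizes.c is hypothetical):

```c
#include <stdio.h>

int main(void) {
    // sizeof yields the size in bytes; results vary by platform/ABI
    printf("short : %zu\n", sizeof(short));
    printf("int   : %zu\n", sizeof(int));
    printf("long  : %zu\n", sizeof(long));
    printf("float : %zu\n", sizeof(float));
    printf("double: %zu\n", sizeof(double));
    printf("char  : %zu\n", sizeof(char)); // always 1 by definition
    return 0;
}
```
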
#### 19. C Basics: Pointers and Arrays

- A pointer is a memory address: void*, char*
- Arrays: char c[2], int arr[20] -- contiguous memory holding elements of one type
- Arithmetic on the pointer itself
- Operations on what the pointer points to
- How does the OS manage memory? Stack, heap, memory mappings
- Allocate memory: void* mem = malloc(size);
- Release memory: free(mem);
- Memory that is allocated but never freed causes memory leaks
- A pointer into memory you do not own is called a wild pointer
- Function pointers: return_type (*pointer_name)(parameter list)

```c
int func(int x);   // declare a function
int (*f)(int x);   // declare a function pointer
f = func;          // assign func's address to the pointer f
```
```c
#include <stdio.h>
#include <stdlib.h>

int sum(int a, int b) {
    return a + b;
}

int sub(int a, int b) {
    return a - b;
}

int main(int argc, char* argv[]) {
    int *a, *b; // two pointer variables
    a = (int*)malloc(sizeof(int)); // allocate space on the heap
    b = (int*)malloc(sizeof(int));
    *a = 1;
    *b = 2;
    int c[3] = {0, 1, 2}; // an array
    printf("addr of a:%p, %p, %d\n", &a, a, *a);
    printf("addr of b:%p, %p, %d\n", &b, b, *b);
    printf("addr of c:%p, %p, %d, %d, %d\n", &c, c, c[0], c[1], c[2]);

    int (*f)(int, int); // a function pointer
    int result;
    int r;
    f = sum;
    result = f(3, 5); // call a function through the pointer

    f = sub;
    r = f(result, 5);
    printf("3+5=%d\n", result);
    printf("8-5=%d\n", r);
    free(a); // release the heap allocations
    free(b);
    return 0;
}
```

#### 20. C Basics: Structs

Besides the primitive types there are user-defined types: structs and enums.

```c
#include <stdio.h>

struct st {
    int a;
    int b;
};

enum etype {
    red = 10,
    green = 20,
    blue = 30
};

int main(int argc, char* argv[]) {
    struct st sst;
    sst.a = 10;
    sst.b = 20;
    printf("struct content is:%d, %d\n", sst.a, sst.b);

    enum etype et;
    et = red;
    printf("the color is %d\n", et);

    et = blue;
    printf("the color is %d\n", et);
    return 0;
}
```

#### 21. C Basics: Arithmetic and Comparison Operators

- +, -, *, /, %
- <=, <, >, >=

```c
#include <stdio.h>

int main(int argc, char* argv[]) {
    int a = 10;
    int b = 20;
    int c = a + b;
    printf("c=%d\n", c);
    return 0;
}
```

#### 22. C Basics: Loops

```c
#include <stdio.h>

int main(int argc, char* argv[]) {
    for (int i = 0; i < 10; i++) {
        printf("i=%d\n", i);
    }

    int j = 0;
    while (j < 10) {
        printf("j=%d\n", j); // string literals use double quotes
        j++;
    }
    return 0;
}
```

#### 23. C Basics: Functions

```c
#include <stdio.h>

int sum(int a, int b) {
    return a + b;
}

void log_info() { // renamed from log() to avoid clashing with the C library's log()
    printf("this is log info...\n");
}

int main(int argc, char* argv[]) {
    int result;
    result = sum(1, 2);
    printf("1 + 2 =%d\n", result);

    log_info();
    return 0;
}
```

#### 24. C Basics: File Operations

- File type: FILE* file;
- Open a file: FILE* fopen(path, mode);
- Close a file: fclose(FILE*);

```c
#include <stdio.h>

int main(int argc, char* argv[]) {
    FILE* file;
    char buf[1024] = {0,};
    file = fopen("1.txt", "a+"); // open a file, creating it if it does not exist
    fwrite("Hello World", 1, 11, file);
    rewind(file); // move the file cursor back to the beginning
    fread(buf, 1, 11, file);
    fclose(file);
    printf("buf: %s\n", buf);
    return 0;
}
```

#### 25. C Basics: Compilers

- clang on macOS, gcc on Linux

```
1. gcc/clang -g -O2 -o test test.c -I... -L... -l...
-g: put debug info into the output file
-O: optimize the generated instructions
-o: name of the output file
-I: header file search path
-L: library search path
-l: which library to link
```

- Preprocessing: header file contents are copied in and merged with the project code
- Compilation
- Linking: dynamic linking and static linking
- clang -g -c add.c ==> add.o
- libtool -static -o libmylib.a add.o generates a static library
- Static library code (add.h below; an assumed add.c sketch follows it):
```c
// add.h
int add(int a, int b);
```
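
The notes only show the header; the matching add.c that gets compiled into the library would plausibly look like this (the implementation is assumed):

```c
// add.c
#include "add.h"

int add(int a, int b) {
    return a + b;
}
```
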
- Using the static library:

```c
#include <stdio.h>
#include "add.h"

int main(int argc, char* argv[]) {
    printf("add=%d\n", add(3, 3));
    return 0;
}
```
- Compile: clang -g -o testlib testlib.c -I. -L. -lmylib

#### 26. C Basics: Debuggers

LLDB on macOS, GDB on Linux.

- Compile the program with debug info included
- The debug info contains instruction addresses plus the corresponding source code and line numbers

| Action | gdb/lldb |
| --- | --- |
| set a breakpoint | b |
| run the program | r |
| step over | n |
| step into | s |
| step out | finish |
| print a value | p |

- lldb testlib

```
break list // list breakpoints
p xxx      // print a variable
c          // continue to the next breakpoint or to completion
n          // next step
quit       // quit the debugger
xxx.dSYM   // the file holding debug info after compilation; inspect it with dwarfdump xxx
```

#### 27. FFmpeg Code Structure

| Directory | Description |
| --- | --- |
| libavcodec | implementations of a wide range of codecs |
| libavformat | streaming protocols, container formats, and basic I/O access |
| libavutil | hashes, decompressors, and miscellaneous utility functions |
| libavfilter | audio and video filters |
| libavdevice | interfaces to capture and playback devices |
| libswresample | audio mixing and resampling |
| libswscale | color conversion and scaling |

#### 28. FFmpeg Logging

- #include <libavutil/log.h> -- the logging header
- av_log_set_level(AV_LOG_DEBUG) -- set the log level
- av_log(NULL, AV_LOG_INFO, "..%s\n", op) -- print a log line
- Common log levels:
  - AV_LOG_ERROR
  - AV_LOG_WARNING
  - AV_LOG_INFO
  - AV_LOG_DEBUG

```c
#include <libavutil/log.h>

int main(int argc, char* argv[]) {
    av_log_set_level(AV_LOG_DEBUG);
    av_log(NULL, AV_LOG_INFO, "Hello World!:%s\n", "Hi~");
    return 0;
}
```

- Compile: clang -g -o ffmpeg_log ffmpeg_log.c -lavutil

#### 29. FFmpeg File Deletion and Renaming

- avpriv_io_delete()
- avpriv_io_move()

```c
#include <libavformat/avformat.h>

// avpriv_io_move/avpriv_io_delete are FFmpeg-internal (avpriv) functions: exported
// by libavformat but not declared in its public headers, so declare them here.
int avpriv_io_move(const char *url_src, const char *url_dst);
int avpriv_io_delete(const char *url);

int main(int argc, char* argv[]) {
    int ret;
    ret = avpriv_io_move("111.txt", "222.txt"); // rename a file
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Failed to rename\n");
        return -1;
    }
    av_log(NULL, AV_LOG_INFO, "Success to rename\n");
    ret = avpriv_io_delete("./mytestfile.txt"); // delete a file
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Failed to delete file mytestfile.txt\n");
        return -1;
    }
    av_log(NULL, AV_LOG_INFO, "Success to delete mytestfile.txt\n");
    return 0;
}
```

- Compile: clang -g -o ffmpeg_del ffmpeg_file.c `pkg-config --libs libavformat`

#### 30. FFmpeg Directory Functions

- avio_open_dir()
- avio_read_dir()
- avio_close_dir()
- AVIODirContext -- the context for directory operations
- AVIODirEntry -- a directory entry holding the file name, attributes, etc.

```c
#include <libavutil/log.h>
#include <libavformat/avformat.h>

int main(int argc, char* argv[]) {
    int ret;
    AVIODirEntry *entry = NULL;
    AVIODirContext *ctx = NULL;
    av_log_set_level(AV_LOG_INFO);
    ret = avio_open_dir(&ctx, "./", NULL);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cant open dir:%s\n", av_err2str(ret));
        return -1;
    }
    while (1) {
        ret = avio_read_dir(ctx, &entry);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "Cant read dir:%s\n", av_err2str(ret));
            goto __fail;
        }
        if (!entry) {
            break;
        }
        av_log(NULL, AV_LOG_INFO, "%12"PRId64" %s \n", entry->size, entry->name);
        avio_free_directory_entry(&entry); // release the entry
    }
__fail:
    avio_close_dir(&ctx);
    return 0;
}
```

#### 31. Multimedia Files

- A multimedia file is really just a container
- Inside the container there are many streams (stream/track)
- Each stream is produced by a different encoder
- The data read from a stream is called a packet
- One packet contains one or more frames
- AVFormatContext -- the context for reading a multimedia file
- AVStream
- AVPacket

```
demux -- get the streams -- read packets -- release resources
```
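
As a preview of the next two sections, the four steps above map onto the API roughly like this (a minimal sketch; error handling omitted, the file name test.mp4 is assumed):

```c
#include <libavformat/avformat.h>

int main(void) {
    AVFormatContext *fmt_ctx = NULL;
    AVPacket pkt;

    avformat_open_input(&fmt_ctx, "test.mp4", NULL, NULL); // 1. demux: open the container
    int stream_idx = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO,
                                         -1, -1, NULL, 0); // 2. get a stream

    while (av_read_frame(fmt_ctx, &pkt) >= 0) {            // 3. read packets
        if (pkt.stream_index == stream_idx) {
            // pkt.data / pkt.size hold one encoded packet of that stream
        }
        av_packet_unref(&pkt); // drop our reference to the packet's buffer
    }

    avformat_close_input(&fmt_ctx);                        // 4. release resources
    return 0;
}
```
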
#### 32. Printing Audio/Video Information

- av_register_all()
- avformat_open_input() / avformat_close_input()
- av_dump_format()

```c
#include <libavformat/avformat.h>

int main(int argc, char* argv[]) {
    int ret;
    AVFormatContext *fmt_ctx = NULL; // format context
    av_log_set_level(AV_LOG_INFO);
    av_register_all(); // register codecs, protocols, etc.
    ret = avformat_open_input(&fmt_ctx, "./test.mp4", NULL, NULL);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cant open file: %s\n", av_err2str(ret));
        return -1;
    }
    av_dump_format(fmt_ctx, 0, "./test.mp4", 0); // print the metadata
    avformat_close_input(&fmt_ctx);
    return 0;
}
```

- Compile: clang -g -o mediainfo mediainfo.c `pkg-config --libs libavutil libavformat`

#### 33. Extracting Audio Data

- av_init_packet() initializes the packet struct
- av_find_best_stream() finds the best stream of a given type in a multimedia file
- av_read_frame() / av_packet_unref() read a packet from the stream / drop the reference afterwards (refcount minus one) to avoid leaking memory

```c
#include <stdio.h>
#include <libavutil/log.h>
#include <libavformat/avformat.h>

void adts_header(char *szAdtsHeader, int dataLen) {
    int audio_object_type = 2;        // AAC LC
    int sampling_frequency_index = 7; // 7 = 22050 Hz in the ADTS table; must match the source's real rate
    int channel_config = 2;           // stereo

    int adtsLen = dataLen + 7;        // payload plus the 7-byte ADTS header

    szAdtsHeader[0] = 0xff;           // syncword, high bits
    szAdtsHeader[1] = 0xf0;           // syncword, low bits
    szAdtsHeader[1] |= (0 << 3);      // MPEG-4
    szAdtsHeader[1] |= (0 << 1);      // layer
    szAdtsHeader[1] |= 1;             // protection absent (no CRC)
    szAdtsHeader[2] = (audio_object_type - 1) << 6;
    szAdtsHeader[2] |= (sampling_frequency_index & 0x0f) << 2;
    szAdtsHeader[2] |= (0 << 1);      // private bit
    szAdtsHeader[2] |= (channel_config & 0x04) >> 2;
    szAdtsHeader[3] = (channel_config & 0x03) << 6;
    szAdtsHeader[3] |= (0 << 5);      // original/copy
    szAdtsHeader[3] |= (0 << 4);      // home
    szAdtsHeader[3] |= (0 << 3);      // copyright id bit
    szAdtsHeader[3] |= (0 << 2);      // copyright id start
    szAdtsHeader[3] |= ((adtsLen & 0x1800) >> 11);
    szAdtsHeader[4] = (uint8_t)((adtsLen & 0x7f8) >> 3);
    szAdtsHeader[5] = (uint8_t)((adtsLen & 0x7) << 5);
    szAdtsHeader[5] |= 0x1f;          // buffer fullness (|=, not =, so the length bits survive)
    szAdtsHeader[6] = 0xfc;
}

int main(int argc, char* argv[]) {
    int ret;
    int len;
    int audio_index;
    char* src = NULL;
    char* dst = NULL;
    AVPacket pkt;
    AVFormatContext *fmt_ctx = NULL;

    av_log_set_level(AV_LOG_INFO);
    // av_register_all(); // registering codecs/protocols manually is no longer needed since FFmpeg 4.0

    // Step 1: read the two arguments from the command line
    if (argc < 3) {
        av_log(NULL, AV_LOG_ERROR, "the count of params should be more than three! \n");
        return -1;
    }
    src = argv[1];
    dst = argv[2];
    if (!src || !dst) {
        av_log(NULL, AV_LOG_ERROR, "src or dst is null! \n");
        return -1;
    }
    ret = avformat_open_input(&fmt_ctx, src, NULL, NULL); // open the multimedia file
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "can not open file: %s\n", av_err2str(ret));
        return -1;
    }
    FILE* dst_fd = fopen(dst, "wb"); // open a binary file for writing, creating it if it does not exist
    if (!dst_fd) { // check that the output file could be opened
        av_log(NULL, AV_LOG_ERROR, "can not open out file! \n");
        avformat_close_input(&fmt_ctx);
        return -1;
    }
    av_dump_format(fmt_ctx, 0, src, 0); // print the metadata
    // Step 2: find the audio stream
    ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "can not find the best stream!\n");
        avformat_close_input(&fmt_ctx);
        fclose(dst_fd);
        return -1;
    }
    audio_index = ret;
    av_init_packet(&pkt);
    while (av_read_frame(fmt_ctx, &pkt) >= 0) { // read every packet in the file
        if (pkt.stream_index == audio_index) {
            char adts_header_buf[7];
            adts_header(adts_header_buf, pkt.size);
            fwrite(adts_header_buf, 1, 7, dst_fd);
            // Step 3: write the packet's payload
            len = fwrite(pkt.data, 1, pkt.size, dst_fd);
            if (len != pkt.size) {
                av_log(NULL, AV_LOG_WARNING, "warning length of data is not equal size of pkt! \n");
            }
        }
        av_packet_unref(&pkt);
    }
    avformat_close_input(&fmt_ctx);
    if (dst_fd) {
        fclose(dst_fd);
    }
    return 0;
}
```


#### 34. Extracting Video Data

- Start code: the signature that separates NAL units
- SPS/PPS: the parameters needed to decode the video; tiny, and usually placed before each keyframe
- codec->extradata: where SPS/PPS are fetched from -- the codec's extradata space

The empty main below is the placeholder from the notes; a sketch of one way to fill it in follows.
```c
int main(int argc, char* argv[]) {
    return 0;
}
```
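
One plausible way to complete it, assuming the h264_mp4toannexb bitstream filter and the av_bsf_* API (FFmpeg >= 3.1) are acceptable here: the filter prepends start codes and the SPS/PPS taken from extradata, exactly as the bullets above describe. Error handling is trimmed for brevity.

```c
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h> // av_bsf_* (newer FFmpeg also has libavcodec/bsf.h)

int main(int argc, char* argv[]) {
    AVFormatContext *fmt_ctx = NULL;
    AVBSFContext *bsf_ctx = NULL;
    const AVBitStreamFilter *bsf = av_bsf_get_by_name("h264_mp4toannexb");
    AVPacket pkt;
    int video_index, ret;

    if (argc < 3) return -1;
    if (avformat_open_input(&fmt_ctx, argv[1], NULL, NULL) < 0) return -1;
    video_index = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (video_index < 0) return -1;

    FILE *out = fopen(argv[2], "wb");
    if (!out) return -1;

    // Hand the stream's codec parameters (including extradata with SPS/PPS) to the filter
    av_bsf_alloc(bsf, &bsf_ctx);
    avcodec_parameters_copy(bsf_ctx->par_in, fmt_ctx->streams[video_index]->codecpar);
    av_bsf_init(bsf_ctx);

    av_init_packet(&pkt);
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        if (pkt.stream_index != video_index) {
            av_packet_unref(&pkt);
            continue;
        }
        av_bsf_send_packet(bsf_ctx, &pkt); // the filter takes ownership of the packet data
        while ((ret = av_bsf_receive_packet(bsf_ctx, &pkt)) == 0) {
            fwrite(pkt.data, 1, pkt.size, out); // now Annex B: start codes + SPS/PPS inserted
            av_packet_unref(&pkt);
        }
    }

    fclose(out);
    av_bsf_free(&bsf_ctx);
    avformat_close_input(&fmt_ctx);
    return 0;
}
```
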
#### 35. Converting MP4 to FLV

- avformat_alloc_output_context2() / avformat_free_context()
- avformat_new_stream()
- avcodec_parameters_copy()
- avformat_write_header()
- av_write_frame() / av_interleaved_write_frame()
- av_write_trailer()

```c
#include <stdio.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>

static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
{
    AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

    printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           tag,
           av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
           pkt->stream_index);
}

int main(int argc, char **argv)
{
    AVOutputFormat *ofmt = NULL;
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    const char *in_filename, *out_filename;
    int ret, i;
    int stream_index = 0;
    int *stream_mapping = NULL;
    int stream_mapping_size = 0;

    if (argc < 3) {
        printf("usage: %s input output\n"
               "API example program to remux a media file with libavformat and libavcodec.\n"
               "The output format is guessed according to the file extension.\n"
               "\n", argv[0]);
        return 1;
    }

    in_filename = argv[1];
    out_filename = argv[2];

    av_register_all();

    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
        fprintf(stderr, "Could not open input file '%s'", in_filename);
        goto end;
    }

    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
        fprintf(stderr, "Failed to retrieve input stream information");
        goto end;
    }

    av_dump_format(ifmt_ctx, 0, in_filename, 0);

    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
    if (!ofmt_ctx) {
        fprintf(stderr, "Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        goto end;
    }

    stream_mapping_size = ifmt_ctx->nb_streams;
    stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
    if (!stream_mapping) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    ofmt = ofmt_ctx->oformat;

    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
        AVStream *out_stream;
        AVStream *in_stream = ifmt_ctx->streams[i];
        AVCodecParameters *in_codecpar = in_stream->codecpar;

        if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
            stream_mapping[i] = -1;
            continue;
        }

        stream_mapping[i] = stream_index++;

        out_stream = avformat_new_stream(ofmt_ctx, NULL);
        if (!out_stream) {
            fprintf(stderr, "Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }

        ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
        if (ret < 0) {
            fprintf(stderr, "Failed to copy codec parameters\n");
            goto end;
        }
        out_stream->codecpar->codec_tag = 0;
    }
    av_dump_format(ofmt_ctx, 0, out_filename, 1);

    if (!(ofmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            fprintf(stderr, "Could not open output file '%s'", out_filename);
            goto end;
        }
    }

    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file\n");
        goto end;
    }

    while (1) {
        AVStream *in_stream, *out_stream;

        ret = av_read_frame(ifmt_ctx, &pkt);
        if (ret < 0)
            break;

        in_stream = ifmt_ctx->streams[pkt.stream_index];
        if (pkt.stream_index >= stream_mapping_size ||
            stream_mapping[pkt.stream_index] < 0) {
            av_packet_unref(&pkt);
            continue;
        }

        pkt.stream_index = stream_mapping[pkt.stream_index];
        out_stream = ofmt_ctx->streams[pkt.stream_index];
        log_packet(ifmt_ctx, &pkt, "in");

        /* copy packet */
        pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
        pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;
        log_packet(ofmt_ctx, &pkt, "out");

        ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error muxing packet\n");
            break;
        }
        av_packet_unref(&pkt);
    }

    av_write_trailer(ofmt_ctx);
end:

    avformat_close_input(&ifmt_ctx);

    /* close output */
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
        avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);

    av_freep(&stream_mapping);

    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
        return 1;
    }

    return 0;
}
```


#### 36. Cutting a Segment out of an MP4

- av_seek_frame()

```c
#include <stdlib.h>
#include <string.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>

static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
{
    AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

    printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           tag,
           av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
           pkt->stream_index);
}

int cut_video(double from_seconds, double end_seconds, const char* in_filename, const char* out_filename) {
    AVOutputFormat *ofmt = NULL;
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    int ret, i;

    av_register_all();

    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
        fprintf(stderr, "Could not open input file '%s'", in_filename);
        goto end;
    }

    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
        fprintf(stderr, "Failed to retrieve input stream information");
        goto end;
    }

    av_dump_format(ifmt_ctx, 0, in_filename, 0);

    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
    if (!ofmt_ctx) {
        fprintf(stderr, "Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        goto end;
    }

    ofmt = ofmt_ctx->oformat;

    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
        AVStream *in_stream = ifmt_ctx->streams[i];
        AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
        if (!out_stream) {
            fprintf(stderr, "Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }

        ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
        if (ret < 0) {
            fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
            goto end;
        }
        out_stream->codec->codec_tag = 0;
        if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
            out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    av_dump_format(ofmt_ctx, 0, out_filename, 1);

    if (!(ofmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            fprintf(stderr, "Could not open output file '%s'", out_filename);
            goto end;
        }
    }

    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file\n");
        goto end;
    }

    ret = av_seek_frame(ifmt_ctx, -1, from_seconds*AV_TIME_BASE, AVSEEK_FLAG_ANY);
    if (ret < 0) {
        fprintf(stderr, "Error seek\n");
        goto end;
    }

    int64_t *dts_start_from = malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
    memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
    int64_t *pts_start_from = malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
    memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);

    while (1) {
        AVStream *in_stream, *out_stream;

        ret = av_read_frame(ifmt_ctx, &pkt);
        if (ret < 0)
            break;

        in_stream = ifmt_ctx->streams[pkt.stream_index];
        out_stream = ofmt_ctx->streams[pkt.stream_index];

        log_packet(ifmt_ctx, &pkt, "in");

        if (av_q2d(in_stream->time_base) * pkt.pts > end_seconds) {
            av_free_packet(&pkt);
            break;
        }

        if (dts_start_from[pkt.stream_index] == 0) {
            dts_start_from[pkt.stream_index] = pkt.dts;
            printf("dts_start_from: %s\n", av_ts2str(dts_start_from[pkt.stream_index]));
        }
        if (pts_start_from[pkt.stream_index] == 0) {
            pts_start_from[pkt.stream_index] = pkt.pts;
            printf("pts_start_from: %s\n", av_ts2str(pts_start_from[pkt.stream_index]));
        }

        /* copy packet */
        pkt.pts = av_rescale_q_rnd(pkt.pts - pts_start_from[pkt.stream_index], in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
        pkt.dts = av_rescale_q_rnd(pkt.dts - dts_start_from[pkt.stream_index], in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
        if (pkt.pts < 0) {
            pkt.pts = 0;
        }
        if (pkt.dts < 0) {
            pkt.dts = 0;
        }
        pkt.duration = (int)av_rescale_q((int64_t)pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;
        log_packet(ofmt_ctx, &pkt, "out");
        printf("\n");

        ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error muxing packet\n");
            break;
        }
        av_free_packet(&pkt);
    }
    free(dts_start_from);
    free(pts_start_from);

    av_write_trailer(ofmt_ctx);
end:

    avformat_close_input(&ifmt_ctx);

    /* close output */
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
        avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);

    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
        return 1;
    }

    return 0;
}

int main(int argc, char *argv[]) {
    if (argc < 5) {
        fprintf(stderr, "Usage: \
command startime, endtime, srcfile, outfile");
        return -1;
    }

    double startime = atof(argv[1]); // atof, so fractional seconds work (atoi would truncate)
    double endtime = atof(argv[2]);
    cut_video(startime, endtime, argv[3], argv[4]);

    return 0;
}
```


#### 37. H264 Decoding

- Common structs
  - AVCodec -- the codec
  - AVCodecContext -- the codec context
  - AVFrame -- a decoded frame
  - av_frame_alloc() / av_frame_free()
  - avcodec_alloc_context3()
  - avcodec_free_context()
- Decoding steps
  1. Find the decoder (avcodec_find_decoder / avcodec_find_decoder_by_name)
  2. Open the decoder (avcodec_open2)
  3. Decode (avcodec_decode_video2)

- H264 encoding steps (see the sketch right after this list)
  1. Find the encoder (avcodec_find_encoder_by_name)
  2. Set the encoding parameters and open the encoder (avcodec_open2)
  3. Encode (avcodec_encode_video2)

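The encoding path named in that list has no example elsewhere in these notes, so here is a minimal sketch using the same legacy API (assumptions: libx264 is compiled into FFmpeg, and the pre-4.0 avcodec_encode_video2 call is available):

```c
#include <stdio.h>
#include <string.h>
#include <libavcodec/avcodec.h>

int main(void) {
    avcodec_register_all(); // not needed since FFmpeg 4.0

    AVCodec *codec = avcodec_find_encoder_by_name("libx264"); // 1. find the encoder
    if (!codec) return -1;
    AVCodecContext *c = avcodec_alloc_context3(codec);
    c->width = 640;
    c->height = 480;
    c->time_base = (AVRational){1, 25};
    c->framerate = (AVRational){25, 1};
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->bit_rate = 400000;
    if (avcodec_open2(c, codec, NULL) < 0) return -1;       // 2. open it

    AVFrame *frame = av_frame_alloc();
    frame->format = c->pix_fmt;
    frame->width = c->width;
    frame->height = c->height;
    av_frame_get_buffer(frame, 32);

    FILE *out = fopen("out.h264", "wb");
    AVPacket pkt;
    int got_output, i;

    for (i = 0; i < 25; i++) {
        av_init_packet(&pkt);
        pkt.data = NULL; // the encoder allocates the packet data
        pkt.size = 0;
        av_frame_make_writable(frame);
        // Fill a flat gray frame: Y=128 everywhere, U/V=128 (no color)
        memset(frame->data[0], 128, frame->linesize[0] * c->height);
        memset(frame->data[1], 128, frame->linesize[1] * c->height / 2);
        memset(frame->data[2], 128, frame->linesize[2] * c->height / 2);
        frame->pts = i;

        if (avcodec_encode_video2(c, &pkt, frame, &got_output) < 0) return -1; // 3. encode
        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, out);
            av_packet_unref(&pkt);
        }
    }
    // Flush the delayed frames
    for (got_output = 1; got_output; ) {
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        if (avcodec_encode_video2(c, &pkt, NULL, &got_output) < 0) break;
        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, out);
            av_packet_unref(&pkt);
        }
    }
    fclose(out);
    av_frame_free(&frame);
    avcodec_free_context(&c);
    return 0;
}
```
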
```c
#include <stdio.h>
#include <stdlib.h>

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

// WORD/DWORD/LONG are Windows typedefs; defined here so the BMP headers compile elsewhere
typedef unsigned short WORD;
typedef unsigned int   DWORD;
typedef int            LONG;

// The BMP headers must be laid out without padding (14 and 40 bytes on disk)
#pragma pack(push, 2)
typedef struct tagBITMAPFILEHEADER {
    WORD bfType;
    DWORD bfSize;
    WORD bfReserved1;
    WORD bfReserved2;
    DWORD bfOffBits;
} BITMAPFILEHEADER, *PBITMAPFILEHEADER;

typedef struct tagBITMAPINFOHEADER {
    DWORD biSize;
    LONG biWidth;
    LONG biHeight;
    WORD biPlanes;
    WORD biBitCount;
    DWORD biCompression;
    DWORD biSizeImage;
    LONG biXPelsPerMeter;
    LONG biYPelsPerMeter;
    DWORD biClrUsed;
    DWORD biClrImportant;
} BITMAPINFOHEADER, *PBITMAPINFOHEADER;
#pragma pack(pop)

void saveBMP(struct SwsContext *img_convert_ctx, AVFrame *frame, char *filename)
{
    // 1. Convert first: YUV420 => BGR24 (BMP stores pixels as BGR)
    int w = frame->width;
    int h = frame->height;

    int numBytes = avpicture_get_size(AV_PIX_FMT_BGR24, w, h);
    uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

    AVFrame *pFrameRGB = av_frame_alloc();
    /* buffer is going to be written to rawvideo file, no alignment */
    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_BGR24, w, h);

    sws_scale(img_convert_ctx, frame->data, frame->linesize,
              0, h, pFrameRGB->data, pFrameRGB->linesize);

    // 2. Build the BITMAPINFOHEADER
    BITMAPINFOHEADER header;
    header.biSize = sizeof(BITMAPINFOHEADER);
    header.biWidth = w;
    header.biHeight = h * (-1); // negative height = top-down bitmap
    header.biBitCount = 24;
    header.biCompression = 0;
    header.biSizeImage = 0;
    header.biClrImportant = 0;
    header.biClrUsed = 0;
    header.biXPelsPerMeter = 0;
    header.biYPelsPerMeter = 0;
    header.biPlanes = 1;

    // 3. Build the file header
    BITMAPFILEHEADER bmpFileHeader = {0,};

    bmpFileHeader.bfType = 0x4d42; // 'BM'
    bmpFileHeader.bfSize = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) + numBytes;
    bmpFileHeader.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);

    FILE* pf = fopen(filename, "wb");
    fwrite(&bmpFileHeader, sizeof(BITMAPFILEHEADER), 1, pf);
    fwrite(&header, sizeof(BITMAPINFOHEADER), 1, pf);
    fwrite(pFrameRGB->data[0], 1, numBytes, pf);
    fclose(pf);

    // Release resources
    av_freep(&buffer);
    av_frame_free(&pFrameRGB);
}

static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,
                     char *filename)
{
    FILE *f;
    int i;

    f = fopen(filename, "wb");
    fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
    for (i = 0; i < ysize; i++)
        fwrite(buf + i * wrap, 1, xsize, f);
    fclose(f);
}

static int decode_write_frame(const char *outfilename, AVCodecContext *avctx,
                              struct SwsContext *img_convert_ctx, AVFrame *frame, int *frame_count, AVPacket *pkt, int last)
{
    int len, got_frame;
    char buf[1024];

    len = avcodec_decode_video2(avctx, frame, &got_frame, pkt);
    if (len < 0) {
        fprintf(stderr, "Error while decoding frame %d\n", *frame_count);
        return len;
    }
    if (got_frame) {
        printf("Saving %sframe %3d\n", last ? "last " : "", *frame_count);
        fflush(stdout);

        /* the picture is allocated by the decoder, no need to free it */
        snprintf(buf, sizeof(buf), "%s-%d.bmp", outfilename, *frame_count);

        /*
        pgm_save(frame->data[0], frame->linesize[0],
                 frame->width, frame->height, buf);
        */

        saveBMP(img_convert_ctx, frame, buf);

        (*frame_count)++;
    }
    if (pkt->data) {
        pkt->size -= len;
        pkt->data += len;
    }
    return 0;
}

int main(int argc, char **argv)
{
    int ret;

    const char *filename, *outfilename;

    AVFormatContext *fmt_ctx = NULL;

    const AVCodec *codec;
    AVCodecContext *c = NULL;

    AVStream *st = NULL;
    int stream_index;

    int frame_count;
    AVFrame *frame;

    struct SwsContext *img_convert_ctx;

    AVPacket avpkt;

    if (argc <= 2) {
        fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
        exit(0);
    }
    filename = argv[1];
    outfilename = argv[2];

    /* register all formats and codecs */
    av_register_all();

    /* open input file, and allocate format context */
    if (avformat_open_input(&fmt_ctx, filename, NULL, NULL) < 0) {
        fprintf(stderr, "Could not open source file %s\n", filename);
        exit(1);
    }

    /* retrieve stream information */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "Could not find stream information\n");
        exit(1);
    }

    /* dump input information to stderr */
    av_dump_format(fmt_ctx, 0, filename, 0);

    av_init_packet(&avpkt);

    ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream in input file '%s'\n",
                av_get_media_type_string(AVMEDIA_TYPE_VIDEO), filename);
        return ret;
    }

    stream_index = ret;
    st = fmt_ctx->streams[stream_index];

    /* find decoder for the stream */
    codec = avcodec_find_decoder(st->codecpar->codec_id);
    if (!codec) {
        fprintf(stderr, "Failed to find %s codec\n",
                av_get_media_type_string(AVMEDIA_TYPE_VIDEO));
        return AVERROR(EINVAL);
    }

    c = avcodec_alloc_context3(NULL);
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }

    /* Copy codec parameters from input stream to output codec context */
    if ((ret = avcodec_parameters_to_context(c, st->codecpar)) < 0) {
        fprintf(stderr, "Failed to copy %s codec parameters to decoder context\n",
                av_get_media_type_string(AVMEDIA_TYPE_VIDEO));
        return ret;
    }

    /* For some codecs, such as msmpeg4 and mpeg4, width and height
       MUST be initialized there because this information is not
       available in the bitstream. */

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    // saveBMP expects BGR24, so convert to that (not RGB24)
    img_convert_ctx = sws_getContext(c->width, c->height,
                                     c->pix_fmt,
                                     c->width, c->height,
                                     AV_PIX_FMT_BGR24,
                                     SWS_BICUBIC, NULL, NULL, NULL);

    if (img_convert_ctx == NULL)
    {
        fprintf(stderr, "Cannot initialize the conversion context\n");
        exit(1);
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    frame_count = 0;
    while (av_read_frame(fmt_ctx, &avpkt) >= 0) {
        /* NOTE1: some codecs are stream based (mpegvideo, mpegaudio)
           and this is the only method to use them because you cannot
           know the compressed data size before analysing it.

           BUT some other codecs (msmpeg4, mpeg4) are inherently frame
           based, so you must call them with all the data for one
           frame exactly. You must also initialize 'width' and
           'height' before initializing them. */

        /* NOTE2: some codecs allow the raw parameters (frame size,
           sample rate) to be changed at any frame. We handle this, so
           you should also take care of it */

        if (avpkt.stream_index == stream_index) {
            if (decode_write_frame(outfilename, c, img_convert_ctx, frame, &frame_count, &avpkt, 0) < 0)
                exit(1);
        }

        av_packet_unref(&avpkt);
    }

    /* Some codecs, such as MPEG, transmit the I- and P-frame with a
       latency of one frame. You must do the following to have a
       chance to get the last frame of the video. */
    avpkt.data = NULL;
    avpkt.size = 0;
    decode_write_frame(outfilename, c, img_convert_ctx, frame, &frame_count, &avpkt, 1);

    avformat_close_input(&fmt_ctx);

    sws_freeContext(img_convert_ctx);
    avcodec_free_context(&c);
    av_frame_free(&frame);

    return 0;
}
```
- Compile: clang -g -o decode_video decode_video.c `pkg-config --libs libavformat libavcodec libswscale libavutil`


#### 38. Video to Images

The code is identical to the Section 37 example above (decode each video frame and save it as a BMP via saveBMP); see that section for the full source.

#### 39. AAC Encoding

- The flow is the same as for video
- Encoding function: avcodec_encode_audio2
- Steps:
  1. Add the headers
  2. Register the codecs
  3. Find the encoder by name (the sample below uses the MP2 encoder; for AAC, look up AV_CODEC_ID_AAC instead)
  4. Set the parameters and open the encoder
  5. Take frames of samples and encode them

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/frame.h>
#include <libavutil/samplefmt.h>

/* check that a given sample format is supported by the encoder */
static int check_sample_fmt(const AVCodec *codec, enum AVSampleFormat sample_fmt)
{
    const enum AVSampleFormat *p = codec->sample_fmts;

    while (*p != AV_SAMPLE_FMT_NONE) {
        if (*p == sample_fmt)
            return 1;
        p++;
    }
    return 0;
}

/* just pick the highest supported samplerate */
static int select_sample_rate(const AVCodec *codec)
{
    const int *p;
    int best_samplerate = 0;

    if (!codec->supported_samplerates)
        return 44100;

    p = codec->supported_samplerates;
    while (*p) {
        if (!best_samplerate || abs(44100 - *p) < abs(44100 - best_samplerate))
            best_samplerate = *p;
        p++;
    }
    return best_samplerate;
}

/* select layout with the highest channel count */
static int select_channel_layout(const AVCodec *codec)
{
    const uint64_t *p;
    uint64_t best_ch_layout = 0;
    int best_nb_channels = 0;

    if (!codec->channel_layouts)
        return AV_CH_LAYOUT_STEREO;

    p = codec->channel_layouts;
    while (*p) {
        int nb_channels = av_get_channel_layout_nb_channels(*p);

        if (nb_channels > best_nb_channels) {
            best_ch_layout = *p;
            best_nb_channels = nb_channels;
        }
        p++;
    }
    return best_ch_layout;
}

int main(int argc, char **argv)
{
    const char *filename;
    const AVCodec *codec;
    AVCodecContext *c = NULL;
    AVFrame *frame;
    AVPacket pkt;
    int i, j, k, ret, got_output;
    FILE *f;
    uint16_t *samples;
    float t, tincr;

    if (argc <= 1) {
        fprintf(stderr, "Usage: %s <output file>\n", argv[0]);
        return 0;
    }
    filename = argv[1];

    /* register all the codecs */
    avcodec_register_all();

    /* find the MP2 encoder */
    codec = avcodec_find_encoder(AV_CODEC_ID_MP2);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }

    c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate audio codec context\n");
        exit(1);
    }

    /* put sample parameters */
    c->bit_rate = 64000;

    /* check that the encoder supports s16 pcm input */
    c->sample_fmt = AV_SAMPLE_FMT_S16;
    if (!check_sample_fmt(codec, c->sample_fmt)) {
        fprintf(stderr, "Encoder does not support sample format %s",
                av_get_sample_fmt_name(c->sample_fmt));
        exit(1);
    }

    /* select other audio parameters supported by the encoder */
    c->sample_rate = select_sample_rate(codec);
    c->channel_layout = select_channel_layout(codec);
    c->channels = av_get_channel_layout_nb_channels(c->channel_layout);

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    f = fopen(filename, "wb");
    if (!f) {
        fprintf(stderr, "Could not open %s\n", filename);
        exit(1);
    }

    /* frame containing input raw audio */
    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate audio frame\n");
        exit(1);
    }

    frame->nb_samples = c->frame_size;
    frame->format = c->sample_fmt;
    frame->channel_layout = c->channel_layout;

    /* allocate the data buffers */
    ret = av_frame_get_buffer(frame, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate audio data buffers\n");
        exit(1);
    }

    /* encode a single tone sound */
    t = 0;
    tincr = 2 * M_PI * 440.0 / c->sample_rate;
    for (i = 0; i < 200; i++) {
        av_init_packet(&pkt);
        pkt.data = NULL; // packet data will be allocated by the encoder
        pkt.size = 0;

        /* make sure the frame is writable -- makes a copy if the encoder
         * kept a reference internally */
        ret = av_frame_make_writable(frame);
        if (ret < 0)
            exit(1);
        samples = (uint16_t*)frame->data[0];

        for (j = 0; j < c->frame_size; j++) {
            samples[2*j] = (int)(sin(t) * 10000);

            for (k = 1; k < c->channels; k++)
                samples[2*j + k] = samples[2*j];
            t += tincr;
        }
        /* encode the samples */
        ret = avcodec_encode_audio2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding audio frame\n");
            exit(1);
        }
        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, f);
            av_packet_unref(&pkt);
        }
    }

    /* get the delayed frames */
    for (got_output = 1; got_output; i++) {
        ret = avcodec_encode_audio2(c, &pkt, NULL, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding frame\n");
            exit(1);
        }

        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, f);
            av_packet_unref(&pkt);
        }
    }
    fclose(f);

    av_frame_free(&frame);
    avcodec_free_context(&c);

    return 0;
}
```


#### 40. SDL: Introduction, Building, and Installing

- SDL stands for Simple DirectMedia Layer
- A cross-platform open-source media library written in C
- Widely used for games, emulators, media players, and other multimedia applications
- Website: https://www.libsdl.org
- Building and installing:
  1. Download the SDL source code
  2. Generate the Makefile: ./configure --prefix=/usr/local (--prefix is the install directory)
  3. Build and install: make -j 8 && sudo make install (-j 8 runs 8 build jobs in parallel, roughly cores x 2)


#### 41. Using SDL: Basic Steps

- Add the header: #include <SDL.h>
- Initialize SDL
- Quit SDL
- SDL is mainly used here to render the window
- SDL_Init() / SDL_Quit()
- SDL_CreateWindow() / SDL_DestroyWindow()
- SDL_CreateRenderer() creates a renderer that frames are rendered onto

```c
#include <SDL.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    SDL_Window *window = NULL;
    // Initialize
    SDL_Init(SDL_INIT_VIDEO);
    // Create the window
    window = SDL_CreateWindow("SDL2 Window", 200, 200, 640, 480, SDL_WINDOW_SHOWN); // title, x, y, width, height, flags
    if (!window) {
        printf("Failed to Create Window!");
        goto __EXIT;
    }
    SDL_DestroyWindow(window);
__EXIT:
    // Quit
    SDL_Quit();
    return 0;
}
// The window is created in memory; to show anything, its contents must be pushed to the video driver
```

- Compile: clang -g -o firstsdl firstsdl.c `pkg-config --cflags --libs sdl2`

1314 | |
1315 | #### 42.SDL渲染窗口 |
1316 | - SDL_CreateRenderer / SDL_DestroyRenderer |
1317 | - SDL_RenderClear |
1318 | - SDL_RenderPresent 推送数据包 |
1319 | |
```c
#include <SDL.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    SDL_Window *window = NULL;
    SDL_Renderer *render = NULL;
    // Initialize SDL
    SDL_Init(SDL_INIT_VIDEO);
    // Create the window: title, x, y, width, height, flags
    window = SDL_CreateWindow("SDL2 Window", 200, 200, 640, 480, SDL_WINDOW_SHOWN);
    if (!window) {
        printf("Failed to Create Window!");
        goto __EXIT;
    }
    // Create the renderer
    render = SDL_CreateRenderer(window, -1, 0);
    if (!render) {
        SDL_Log("Failed to Create Renderer!");
        goto __DWINDOW;
    }
    SDL_SetRenderDrawColor(render, 255, 0, 0, 255); // rgba
    // Clear the back buffer with the draw color
    SDL_RenderClear(render);
    // Push the rendered frame to the graphics card
    SDL_RenderPresent(render);
    // Delay so the result stays visible
    SDL_Delay(30000);
    // Destroy the window
__DWINDOW:
    SDL_DestroyWindow(window);
    // Quit SDL
__EXIT:
    SDL_Quit();
    return 0;
}
```
- Build: clang -g -o firstsdl firstsdl.c `pkg-config --cflags --libs sdl2`


#### 43. How SDL Handles Events
- SDL stores all events in a single queue
- Every operation on events is really an operation on that queue
- SDL event categories
  - SDL_WindowEvent: window events
  - SDL_KeyboardEvent: keyboard events
  - SDL_MouseMotionEvent: mouse events
- SDL event handling
  - SDL_PollEvent polls the queue without blocking
  - SDL_WaitEvent blocks until an event arrives

```c
#include <SDL.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int quit = 1;
    SDL_Event event;
    SDL_Window *window = NULL;
    SDL_Renderer *render = NULL;
    // Initialize SDL
    SDL_Init(SDL_INIT_VIDEO);
    // Create the window: title, x, y, width, height, flags
    window = SDL_CreateWindow("SDL2 Window", 200, 200, 640, 480, SDL_WINDOW_SHOWN);
    if (!window) {
        printf("Failed to Create Window!");
        goto __EXIT;
    }
    // Create the renderer
    render = SDL_CreateRenderer(window, -1, 0);
    if (!render) {
        SDL_Log("Failed to Create Renderer!");
        goto __DWINDOW;
    }
    SDL_SetRenderDrawColor(render, 255, 0, 0, 255); // rgba
    // Clear the back buffer
    SDL_RenderClear(render);
    // Push to the graphics card
    SDL_RenderPresent(render);
    do {
        // Block until an event arrives
        SDL_WaitEvent(&event);
        switch(event.type) {
        case SDL_QUIT:
            quit = 0;
            break;
        default:
            SDL_Log("event type is %d", event.type);
        }
    } while(quit);
    // Destroy the window
__DWINDOW:
    SDL_DestroyWindow(window);
    // Quit SDL
__EXIT:
    SDL_Quit();
    return 0;
}
```
- Build: clang -g -o eventsdl eventsdl.c `pkg-config --cflags --libs sdl2`


#### 44. Texture Rendering
- memory image --(renderer)--> texture --(swap: computed by the GPU)--> window display
- Texture APIs
  - SDL_CreateTexture()
    - format: pixel format, e.g. YUV or RGB
    - access: how the texture is used, e.g. Target or Streaming
  - SDL_DestroyTexture()
- Rendering APIs
  - SDL_SetRenderTarget() // set the render target
  - SDL_RenderClear()
  - SDL_RenderCopy()
  - SDL_RenderPresent() // present the result
```c
#include <SDL.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int quit = 1;
    SDL_Rect rect;
    rect.w = 30;
    rect.h = 30;
    SDL_Event event;
    SDL_Texture *texture = NULL;
    SDL_Window *window = NULL;
    SDL_Renderer *render = NULL;
    // Initialize SDL
    SDL_Init(SDL_INIT_VIDEO);
    // Create the window: title, x, y, width, height, flags
    window = SDL_CreateWindow("SDL2 Window", 200, 200, 640, 480, SDL_WINDOW_SHOWN);
    if (!window) {
        printf("Failed to Create Window!");
        goto __EXIT;
    }
    // Create the renderer
    render = SDL_CreateRenderer(window, -1, 0);
    if (!render) {
        SDL_Log("Failed to Create Renderer!");
        goto __DWINDOW;
    }
    // Create the texture (used as a render target)
    texture = SDL_CreateTexture(render, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 640, 480);
    if (!texture) {
        SDL_Log("Failed to Create Texture!");
        goto __RENDER;
    }
    do {
        // Poll for events without blocking
        SDL_PollEvent(&event);
        switch(event.type) {
        case SDL_QUIT:
            quit = 0;
            break;
        default:
            SDL_Log("event type is %d", event.type);
        }
        // Pick a random position for the square
        rect.x = rand() % 600;
        rect.y = rand() % 450;
        SDL_SetRenderTarget(render, texture);
        SDL_SetRenderDrawColor(render, 0, 0, 0, 0);
        SDL_RenderClear(render);
        // Draw the square
        SDL_RenderDrawRect(render, &rect);
        SDL_SetRenderDrawColor(render, 255, 0, 0, 0);
        SDL_RenderFillRect(render, &rect);
        // Push to the graphics card
        SDL_SetRenderTarget(render, NULL);
        SDL_RenderCopy(render, texture, NULL, NULL); // copy the texture to the default target
        SDL_RenderPresent(render); // tell the GPU to display what was just drawn
    } while(quit);
    // Destroy the texture
__RENDER:
    SDL_DestroyTexture(texture);
    // Destroy the window
__DWINDOW:
    SDL_DestroyWindow(window);
    // Quit SDL
__EXIT:
    SDL_Quit();
    return 0;
}
```
- Build: clang -g -o texturesdl texturesdl.c `pkg-config --cflags --libs sdl2`


#### 45. A YUV Player
- Creating a thread
  - SDL_CreateThread
    - fn: the thread entry function
    - name: the thread name
    - data: the argument passed to the entry function
  - SDL_WaitThread
- Updating a texture
  - SDL_UpdateTexture()
  - SDL_UpdateYUVTexture() // takes each plane's data separately
```c
#include <SDL.h>
#include <stdio.h>
#include <string.h>

// Custom events pushed by the timer thread. The original values were lost
// in extraction; anything above SDL_USEREVENT works.
#define REFRESH_EVENT (SDL_USEREVENT + 1)
#define QUIT_EVENT    (SDL_USEREVENT + 2)

// Read-buffer size (assumed value; the original was lost in extraction)
#define BLOCK_SIZE 4096000

const int bpp = 12; // bits per pixel of YUV420P (leftover from the original sample, unused below)

int screen_w = 500, screen_h = 500; // also unused leftovers

//event message
int thread_exit = 0;

// Timer thread: push a REFRESH_EVENT every 40ms (~25 fps)
int refresh_video_timer(void *udata) {

    thread_exit = 0;

    while (!thread_exit) {
        SDL_Event event;
        event.type = REFRESH_EVENT;
        SDL_PushEvent(&event);
        SDL_Delay(40);
    }

    thread_exit = 0;

    //push quit event
    SDL_Event event;
    event.type = QUIT_EVENT;
    SDL_PushEvent(&event);

    return 0;
}

int main(int argc, char* argv[])
{
    FILE *video_fd = NULL;

    SDL_Event event;
    SDL_Rect rect;

    Uint32 pixformat = 0;

    SDL_Window *win = NULL;
    SDL_Renderer *renderer = NULL;
    SDL_Texture *texture = NULL;

    SDL_Thread *timer_thread = NULL;

    int w_width = 640, w_height = 480;
    const int video_width = 320, video_height = 180;

    Uint8 *video_pos = NULL;
    Uint8 *video_end = NULL;

    unsigned int remain_len = 0;
    unsigned int video_buff_len = 0;
    unsigned int blank_space_len = 0;
    // static so the multi-MB buffer does not overflow the stack
    // (the original notes declared it as an array of pointers by mistake)
    static Uint8 video_buf[BLOCK_SIZE];

    const char *path = "test_yuv420p_320x180.yuv";

    const unsigned int yuv_frame_len = video_width * video_height * 12 / 8;

    //initialize SDL
    if(SDL_Init(SDL_INIT_VIDEO)) {
        fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
        return -1;
    }

    //create window from SDL
    win = SDL_CreateWindow("YUV Player",
                           SDL_WINDOWPOS_UNDEFINED,
                           SDL_WINDOWPOS_UNDEFINED,
                           w_width, w_height,
                           SDL_WINDOW_OPENGL|SDL_WINDOW_RESIZABLE);
    if(!win) {
        fprintf(stderr, "Failed to create window, %s\n", SDL_GetError());
        goto __FAIL;
    }

    renderer = SDL_CreateRenderer(win, -1, 0);

    //IYUV: Y + U + V (3 planes)
    //YV12: Y + V + U (3 planes)
    pixformat = SDL_PIXELFORMAT_IYUV;

    //create texture for render
    texture = SDL_CreateTexture(renderer,
                                pixformat,
                                SDL_TEXTUREACCESS_STREAMING,
                                video_width,
                                video_height);

    //open yuv file
    video_fd = fopen(path, "rb");
    if( !video_fd ){
        fprintf(stderr, "Failed to open yuv file\n");
        goto __FAIL;
    }

    //read a block of data
    if((video_buff_len = fread(video_buf, 1, BLOCK_SIZE, video_fd)) <= 0){
        fprintf(stderr, "Failed to read data from yuv file!\n");
        goto __FAIL;
    }

    //set video position
    video_pos = video_buf;
    video_end = video_buf + video_buff_len;
    blank_space_len = BLOCK_SIZE - video_buff_len;

    timer_thread = SDL_CreateThread(refresh_video_timer,
                                    NULL,
                                    NULL);

    do {
        //wait for an event
        SDL_WaitEvent(&event);
        if(event.type==REFRESH_EVENT){
            //not enough data left for a whole frame
            if((video_pos + yuv_frame_len) > video_end){

                //data remains, but there is no blank space at the tail
                remain_len = video_end - video_pos;
                if(remain_len && !blank_space_len) {
                    //copy the remainder to the head of the buffer
                    memcpy(video_buf, video_pos, remain_len);

                    blank_space_len = BLOCK_SIZE - remain_len;
                    video_pos = video_buf;
                    video_end = video_buf + remain_len;
                }

                //at the end of the buffer, so rotate to the head of the buffer
                if(video_end == (video_buf + BLOCK_SIZE)){
                    video_pos = video_buf;
                    video_end = video_buf;
                    blank_space_len = BLOCK_SIZE;
                }

                //read data from the yuv file into the buffer
                if((video_buff_len = fread(video_end, 1, blank_space_len, video_fd)) <= 0){
                    fprintf(stderr, "eof, exit thread!");
                    thread_exit = 1;
                    continue; // wait for the QUIT_EVENT pushed by the timer thread
                }

                //reset video_end
                video_end += video_buff_len;
            }

            SDL_UpdateTexture( texture, NULL, video_pos, video_width);
            // advance to the next frame (this line was lost in the original
            // notes, but playback cannot progress without it)
            video_pos += yuv_frame_len;

            //FIX: if the window is resized
            rect.x = 0;
            rect.y = 0;
            rect.w = w_width;
            rect.h = w_height;

            SDL_RenderClear( renderer );
            SDL_RenderCopy( renderer, texture, NULL, &rect);
            SDL_RenderPresent( renderer );

        }else if(event.type==SDL_WINDOWEVENT){
            //if resized, refresh the window size
            SDL_GetWindowSize(win, &w_width, &w_height);
        }else if(event.type==SDL_QUIT){
            thread_exit = 1;
        }else if(event.type==QUIT_EVENT){
            break;
        }
    }while ( 1 );

__FAIL:
    //close file
    if(video_fd){
        fclose(video_fd);
    }

    SDL_Quit();

    return 0;
}
```


#### 46. Playing Audio with SDL

- Demux the media file into audio, video, subtitle tracks (demultiplexing)
- The video track is decoded into YUV data
- The YUV data is handed to SDL
- SDL pushes the YUV data to the graphics card
- The graphics card shows it on screen
- The audio track is decoded into PCM data
- The PCM data is handed to SDL
- SDL drives the sound card to play the sound
- Basic audio playback flow
  - 1. Open the audio device
  - 2. Set the audio parameters // channel count, sample rate, sample size
  - 3. Feed data to the sound card
  - 4. Play the audio
  - 5. Close the device
- Basic principles of audio playback
  - The sound card pulls data from you; you do not push data to it
  - How much data it pulls is determined by the audio parameters
- SDL audio APIs
  - SDL_OpenAudio / SDL_CloseAudio
  - SDL_PauseAudio  pause/resume playback
  - SDL_MixAudio  mixing

```c
#include <SDL.h>

// Empty skeleton in the original notes; section 47 turns it into a working PCM player.
int main(int argc, char* argv[]) {
    return 0;
}
```


#### 47. A PCM Audio Player

```c
#include <SDL.h>
#include <stdio.h>
#include <stdlib.h>

// Read-buffer size (assumed value; the original was lost in extraction)
#define BLOCK_SIZE 4096000

// Shared between main and the audio callback. The original keeps this simple;
// a real player would protect these globals with a lock (see section 50).
static size_t buffer_len = 0;
static Uint8 *audio_buf = NULL;
static Uint8 *audio_pos = NULL;

// Callback through which the sound card pulls data
void read_audio_data(void *udata, Uint8 *stream, int len) {
    // Nothing left in the current block
    if (buffer_len == 0) {
        return;
    }
    // Clear SDL's stream buffer to avoid artifacts
    SDL_memset(stream, 0, len);
    // Feed at most what we have buffered
    len = (len < buffer_len) ? len : buffer_len;
    // Mix-copy into the stream
    SDL_MixAudio(stream, audio_pos, len, SDL_MIX_MAXVOLUME);
    // Advance the read position
    audio_pos += len;
    buffer_len -= len;
}

int main(int argc, char* argv[]) {
    int ret = -1;
    char *path = "./1.pcm";
    FILE *audio_fd = NULL;

    // Initialize SDL
    if (SDL_Init(SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
        SDL_Log("Failed to init SDL!");
        return ret;
    }

    // Open the pcm file
    audio_fd = fopen(path, "rb");
    if (!audio_fd) {
        SDL_Log("Failed to open audio file!");
        goto __FAIL;
    }

    // Allocate the read buffer
    audio_buf = (Uint8*)malloc(BLOCK_SIZE);
    if (!audio_buf) {
        SDL_Log("Failed to alloc memory!");
        goto __FAIL;
    }

    // Open the audio device
    SDL_AudioSpec spec;
    spec.freq = 44100;               // sample rate
    spec.channels = 2;               // channel count
    spec.format = AUDIO_S16SYS;      // sample size/format
    spec.silence = 0;
    spec.samples = 1024;             // samples per callback; the original notes omit this field, but it must be set
    spec.callback = read_audio_data; // the callback through which the sound card asks for data
    spec.userdata = NULL;            // argument passed to the callback
    if(SDL_OpenAudio(&spec, NULL)) {
        SDL_Log("Failed to open audio device!");
        goto __FAIL;
    }

    // Start playback (0 = play, 1 = pause)
    SDL_PauseAudio(0);

    // Keep refilling the buffer from the file
    do {
        buffer_len = fread(audio_buf, 1, BLOCK_SIZE, audio_fd);
        audio_pos = audio_buf;
        // Wait until the callback has consumed the block
        while (audio_pos < (audio_buf + buffer_len)) {
            SDL_Delay(1);
        }
    } while (buffer_len != 0);

    // Close the audio device
    SDL_CloseAudio();
    ret = 0;

    // Close SDL and the pcm file, release memory
__FAIL:
    if (audio_buf) {
        free(audio_buf);
    }
    if (audio_fd) {
        fclose(audio_fd);
    }
    SDL_Quit();
    return ret;
}
```
- Build: clang -g -o pcmplay pcmplay.c `pkg-config --cflags --libs sdl2`


#### 48. A Minimal Player

- Implements playback only
- Combines ffmpeg with SDL
- ffmpeg decodes the video data
- SDL renders it

```c
#include <stdio.h>
#include <stdlib.h>
#include <SDL.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

// compatibility with newer API
#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
#define av_frame_alloc avcodec_alloc_frame
#define av_frame_free  avcodec_free_frame
#endif

int main(int argc, char *argv[]) {

  int ret = -1;

  AVFormatContext *pFormatCtx = NULL; //for opening multi-media file

  int i, videoStream;

  AVCodecContext *pCodecCtxOrig = NULL; //codec context
  AVCodecContext *pCodecCtx = NULL;

  struct SwsContext *sws_ctx = NULL;

  AVCodec *pCodec = NULL; // the decoder
  AVFrame *pFrame = NULL;
  AVPacket packet;

  int frameFinished;
  float aspect_ratio;

  AVPicture *pict = NULL;

  SDL_Rect rect;
  Uint32 pixformat;

  //for render
  SDL_Window *win = NULL;
  SDL_Renderer *renderer = NULL;
  SDL_Texture *texture = NULL;

  //set default size of window
  int w_width = 640;
  int w_height = 480;

  if(argc < 2) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Usage: command <file>");
    return ret;
  }

  if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Could not initialize SDL - %s\n", SDL_GetError());
    return ret;
  }

  //Register all formats and codecs
  av_register_all();

  // Open video file
  if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to open video file!");
    goto __FAIL; // Couldn't open file
  }

  // Retrieve stream information
  if(avformat_find_stream_info(pFormatCtx, NULL)<0){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to find stream information!");
    goto __FAIL; // Couldn't find stream information
  }

  // Dump information about file onto standard error
  av_dump_format(pFormatCtx, 0, argv[1], 0);

  // Find the first video stream
  videoStream=-1;
  for(i=0; i<pFormatCtx->nb_streams; i++) {
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
      videoStream=i;
      break;
    }
  }

  if(videoStream==-1){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Didn't find a video stream!");
    goto __FAIL;// Didn't find a video stream
  }

  // Get a pointer to the codec context for the video stream
  pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;

  // Find the decoder for the video stream
  pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
  if(pCodec==NULL) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Unsupported codec!\n");
    goto __FAIL; // Codec not found
  }

  // Copy context
  pCodecCtx = avcodec_alloc_context3(pCodec);
  if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Couldn't copy codec context");
    goto __FAIL;// Error copying codec context
  }

  // Open codec
  if(avcodec_open2(pCodecCtx, pCodec, NULL)<0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to open decoder!\n");
    goto __FAIL; // Could not open codec
  }

  // Allocate video frame
  pFrame=av_frame_alloc();

  w_width = pCodecCtx->width;
  w_height = pCodecCtx->height;

  win = SDL_CreateWindow("Media Player",
                         SDL_WINDOWPOS_UNDEFINED,
                         SDL_WINDOWPOS_UNDEFINED,
                         w_width, w_height,
                         SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
  if(!win){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to create window by SDL");
    goto __FAIL;
  }

  renderer = SDL_CreateRenderer(win, -1, 0);
  if(!renderer){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to create Renderer by SDL");
    goto __FAIL;
  }

  pixformat = SDL_PIXELFORMAT_IYUV;
  texture = SDL_CreateTexture(renderer,
                              pixformat,
                              SDL_TEXTUREACCESS_STREAMING,
                              w_width,
                              w_height);

  // initialize SWS context for software scaling
  sws_ctx = sws_getContext(pCodecCtx->width,
                           pCodecCtx->height,
                           pCodecCtx->pix_fmt,
                           pCodecCtx->width,
                           pCodecCtx->height,
                           AV_PIX_FMT_YUV420P,
                           SWS_BILINEAR,
                           NULL,
                           NULL,
                           NULL
                           );

  pict = (AVPicture*)malloc(sizeof(AVPicture));
  avpicture_alloc(pict,
                  AV_PIX_FMT_YUV420P,
                  pCodecCtx->width,
                  pCodecCtx->height);


  // Read packets and render every decoded frame
  while(av_read_frame(pFormatCtx, &packet)>=0) {
    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream) {
      // Decode video frame
      avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

      // Did we get a video frame?
      if(frameFinished) {

        // Convert the image into the YUV format that SDL uses
        sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                  pFrame->linesize, 0, pCodecCtx->height,
                  pict->data, pict->linesize);

        SDL_UpdateYUVTexture(texture, NULL,
                             pict->data[0], pict->linesize[0],
                             pict->data[1], pict->linesize[1],
                             pict->data[2], pict->linesize[2]);

        // Set size of the render area
        rect.x = 0;
        rect.y = 0;
        rect.w = pCodecCtx->width;
        rect.h = pCodecCtx->height;

        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, texture, NULL, &rect);
        SDL_RenderPresent(renderer);

      }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);

    /*
    SDL_Event event;
    SDL_PollEvent(&event);
    switch(event.type) {
    case SDL_QUIT:
      goto __QUIT;
      break;
    default:
      break;
    }
    */

  }

__QUIT:
  ret = 0;

__FAIL:
  // Free the YUV frame
  if(pFrame){
    av_frame_free(&pFrame);
  }

  // Close the codec
  if(pCodecCtx){
    avcodec_close(pCodecCtx);
  }

  if(pCodecCtxOrig){
    avcodec_close(pCodecCtxOrig);
  }

  // Close the video file
  if(pFormatCtx){
    avformat_close_input(&pFormatCtx);
  }

  if(pict){
    avpicture_free(pict);
    free(pict);
  }

  if(win){
    SDL_DestroyWindow(win);
  }

  if(renderer){
    SDL_DestroyRenderer(renderer);
  }

  if(texture){
    SDL_DestroyTexture(texture);
  }

  SDL_Quit();

  return ret;
}
```
- Build: clang -g -o player2va player2va.c `pkg-config --cflags --libs sdl2 libavutil libavformat libavcodec libswscale`


#### 49. A Minimal Player (with Video and Audio)
```c
#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <SDL.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>

// compatibility with newer API
#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
#define av_frame_alloc avcodec_alloc_frame
#define av_frame_free  avcodec_free_frame
#endif

// The exact original values were lost in extraction; these are the usual
// values from the ffmpeg tutorial this code follows.
#define SDL_AUDIO_BUFFER_SIZE 1024
#define MAX_AUDIO_FRAME_SIZE 192000

struct SwrContext *audio_convert_ctx = NULL;

typedef struct PacketQueue {
  AVPacketList *first_pkt, *last_pkt;
  int nb_packets;
  int size;
  SDL_mutex *mutex;
  SDL_cond *cond;
} PacketQueue;

PacketQueue audioq;

int quit = 0;

void packet_queue_init(PacketQueue *q) {
  memset(q, 0, sizeof(PacketQueue));
  q->mutex = SDL_CreateMutex();
  q->cond = SDL_CreateCond();
}

int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

  AVPacketList *pkt1;
  if(av_dup_packet(pkt) < 0) {
    return -1;
  }
  pkt1 = av_malloc(sizeof(AVPacketList));
  if (!pkt1)
    return -1;
  pkt1->pkt = *pkt;
  pkt1->next = NULL;

  SDL_LockMutex(q->mutex);

  if (!q->last_pkt) {
    q->first_pkt = pkt1;
  }else{
    q->last_pkt->next = pkt1;
  }

  q->last_pkt = pkt1;
  q->nb_packets++;
  q->size += pkt1->pkt.size;
  SDL_CondSignal(q->cond);

  SDL_UnlockMutex(q->mutex);
  return 0;
}

int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block)
{
  AVPacketList *pkt1;
  int ret;

  SDL_LockMutex(q->mutex);

  for(;;) {

    if(quit) {
      ret = -1;
      break;
    }

    pkt1 = q->first_pkt;
    if (pkt1) {
      q->first_pkt = pkt1->next;
      if (!q->first_pkt)
        q->last_pkt = NULL;
      q->nb_packets--;
      q->size -= pkt1->pkt.size;
      *pkt = pkt1->pkt;
      av_free(pkt1);
      ret = 1;
      break;
    } else if (!block) {
      ret = 0;
      break;
    } else {
      SDL_CondWait(q->cond, q->mutex);
    }
  }
  SDL_UnlockMutex(q->mutex);
  return ret;
}

int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf, int buf_size) {

  static AVPacket pkt;
  static uint8_t *audio_pkt_data = NULL;
  static int audio_pkt_size = 0;
  static AVFrame frame;

  int len1, data_size = 0;

  for(;;) {
    while(audio_pkt_size > 0) {
      int got_frame = 0;
      len1 = avcodec_decode_audio4(aCodecCtx, &frame, &got_frame, &pkt);
      if(len1 < 0) {
        /* if error, skip frame */
        audio_pkt_size = 0;
        break;
      }
      audio_pkt_data += len1;
      audio_pkt_size -= len1;
      data_size = 0;
      if(got_frame) {
        //fprintf(stderr, "channels:%d, nb_samples:%d, sample_fmt:%d \n", aCodecCtx->channels, frame.nb_samples, aCodecCtx->sample_fmt);
        /*
        data_size = av_samples_get_buffer_size(NULL,
                                               aCodecCtx->channels,
                                               frame.nb_samples,
                                               aCodecCtx->sample_fmt,
                                               1);
        */
        data_size = 2 * 2 * frame.nb_samples; // S16 stereo: 2 bytes x 2 channels per sample

        assert(data_size <= buf_size);
        swr_convert(audio_convert_ctx,
                    &audio_buf,
                    MAX_AUDIO_FRAME_SIZE*3/2,
                    (const uint8_t **)frame.data,
                    frame.nb_samples);

        //memcpy(audio_buf, frame.data[0], data_size);
      }
      if(data_size <= 0) {
        /* No data yet, get more frames */
        continue;
      }
      /* We have data, return it and come back for more later */
      return data_size;
    }
    if(pkt.data)
      av_free_packet(&pkt);

    if(quit) {
      return -1;
    }

    if(packet_queue_get(&audioq, &pkt, 1) < 0) {
      return -1;
    }
    audio_pkt_data = pkt.data;
    audio_pkt_size = pkt.size;
  }
}

void audio_callback(void *userdata, Uint8 *stream, int len) {

  AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;
  int len1, audio_size;

  static uint8_t audio_buf[(MAX_AUDIO_FRAME_SIZE * 3) / 2];
  static unsigned int audio_buf_size = 0;
  static unsigned int audio_buf_index = 0;

  while(len > 0) {
    if(audio_buf_index >= audio_buf_size) {
      /* We have already sent all our data; get more */
      audio_size = audio_decode_frame(aCodecCtx, audio_buf, sizeof(audio_buf));
      if(audio_size < 0) {
        /* If error, output silence */
        audio_buf_size = 1024; // arbitrary?
        memset(audio_buf, 0, audio_buf_size);
      } else {
        audio_buf_size = audio_size;
      }
      audio_buf_index = 0;
    }
    len1 = audio_buf_size - audio_buf_index;
    if(len1 > len)
      len1 = len;
    fprintf(stderr, "index=%d, len1=%d, len=%d\n",
            audio_buf_index,
            len1,
            len);
    memcpy(stream, (uint8_t *)audio_buf + audio_buf_index, len1);
    len -= len1;
    stream += len1;
    audio_buf_index += len1;
  }
}

int main(int argc, char *argv[]) {

  int ret = -1;
  int i, videoStream, audioStream;

  AVFormatContext *pFormatCtx = NULL;

  //for video decode
  AVCodecContext *pCodecCtxOrig = NULL;
  AVCodecContext *pCodecCtx = NULL;
  AVCodec *pCodec = NULL;

  struct SwsContext *sws_ctx = NULL;

  AVPicture *pict = NULL;
  AVFrame *pFrame = NULL;
  AVPacket packet;
  int frameFinished;

  //for audio decode
  AVCodecContext *aCodecCtxOrig = NULL;
  AVCodecContext *aCodecCtx = NULL;
  AVCodec *aCodec = NULL;


  int64_t in_channel_layout;
  int64_t out_channel_layout;

  //for video render
  int w_width = 640;
  int w_height = 480;

  int pixformat;
  SDL_Rect rect;

  SDL_Window *win;
  SDL_Renderer *renderer;
  SDL_Texture *texture;

  //for event
  SDL_Event event;

  //for audio
  SDL_AudioSpec wanted_spec, spec;

  if(argc < 2) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Usage: command <file>");
    return ret;
  }

  // Register all formats and codecs
  av_register_all();

  if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Could not initialize SDL - %s\n", SDL_GetError());
    return ret;
  }

  // Open video file
  if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to open multi-media file");
    goto __FAIL; // Couldn't open file
  }

  // Retrieve stream information
  if(avformat_find_stream_info(pFormatCtx, NULL)<0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Couldn't find stream information");
    goto __FAIL;
  }

  // Dump information about file onto standard error
  av_dump_format(pFormatCtx, 0, argv[1], 0);

  // Find the first video and audio streams
  videoStream=-1;
  audioStream=-1;

  for(i=0; i<pFormatCtx->nb_streams; i++) {
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO &&
       videoStream < 0) {
      videoStream=i;
    }
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO &&
       audioStream < 0) {
      audioStream=i;
    }
  }

  if(videoStream==-1) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Didn't find a video stream");
    goto __FAIL; // Didn't find a video stream
  }

  if(audioStream==-1) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Didn't find an audio stream");
    goto __FAIL; // Didn't find an audio stream
  }

  aCodecCtxOrig=pFormatCtx->streams[audioStream]->codec;
  aCodec = avcodec_find_decoder(aCodecCtxOrig->codec_id);
  if(!aCodec) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Unsupported codec!");
    goto __FAIL;
  }

  // Copy context
  aCodecCtx = avcodec_alloc_context3(aCodec);
  if(avcodec_copy_context(aCodecCtx, aCodecCtxOrig) != 0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Couldn't copy codec context!");
    goto __FAIL;
  }

  // Set audio settings from codec info
  wanted_spec.freq = aCodecCtx->sample_rate;
  wanted_spec.format = AUDIO_S16SYS;
  wanted_spec.channels = aCodecCtx->channels;
  wanted_spec.silence = 0;
  wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
  wanted_spec.callback = audio_callback;
  wanted_spec.userdata = aCodecCtx;

  if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to open audio device - %s!", SDL_GetError());
    goto __FAIL;
  }

  avcodec_open2(aCodecCtx, aCodec, NULL);

  packet_queue_init(&audioq);

  in_channel_layout = av_get_default_channel_layout(aCodecCtx->channels);
  out_channel_layout = in_channel_layout; //AV_CH_LAYOUT_STEREO;
  fprintf(stderr, "in layout:%lld, out layout:%lld \n", in_channel_layout, out_channel_layout);

  audio_convert_ctx = swr_alloc();
  if(audio_convert_ctx){
    swr_alloc_set_opts(audio_convert_ctx,
                       out_channel_layout,
                       AV_SAMPLE_FMT_S16,
                       aCodecCtx->sample_rate,
                       in_channel_layout,
                       aCodecCtx->sample_fmt,
                       aCodecCtx->sample_rate,
                       0,
                       NULL);
  }
  swr_init(audio_convert_ctx);

  SDL_PauseAudio(0);

  // Get a pointer to the codec context for the video stream
  pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;

  // Find the decoder for the video stream
  pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
  if(pCodec==NULL) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Unsupported codec!");
    goto __FAIL;
  }

  // Copy context
  pCodecCtx = avcodec_alloc_context3(pCodec);
  if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to copy context of codec!");
    goto __FAIL;
  }

  // Open codec
  if(avcodec_open2(pCodecCtx, pCodec, NULL)<0) {
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to open video decoder!");
    goto __FAIL;
  }

  // Allocate video frame
  pFrame=av_frame_alloc();

  w_width = pCodecCtx->width;
  w_height = pCodecCtx->height;

  fprintf(stderr, "width:%d, height:%d\n", w_width, w_height);

  win = SDL_CreateWindow("Media Player",
                         SDL_WINDOWPOS_UNDEFINED,
                         SDL_WINDOWPOS_UNDEFINED,
                         w_width, w_height,
                         SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
  if(!win){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to create window!");
    goto __FAIL;
  }

  renderer = SDL_CreateRenderer(win, -1, 0);
  if(!renderer){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to create renderer!");
    goto __FAIL;
  }

  pixformat = SDL_PIXELFORMAT_IYUV;
  texture = SDL_CreateTexture(renderer,
                              pixformat,
                              SDL_TEXTUREACCESS_STREAMING,
                              w_width,
                              w_height);
  if(!texture){
    SDL_LogError(SDL_LOG_CATEGORY_APPLICATION, "Failed to create Texture!");
    goto __FAIL;
  }

  // initialize SWS context for software scaling
  sws_ctx = sws_getContext(pCodecCtx->width,
                           pCodecCtx->height,
                           pCodecCtx->pix_fmt,
                           pCodecCtx->width,
                           pCodecCtx->height,
                           AV_PIX_FMT_YUV420P,
                           SWS_BILINEAR,
                           NULL,
                           NULL,
                           NULL);

  pict = (AVPicture*)malloc(sizeof(AVPicture));
  avpicture_alloc(pict,
                  AV_PIX_FMT_YUV420P,
                  pCodecCtx->width,
                  pCodecCtx->height);

  // Read packets: render video frames, queue audio packets
  while(av_read_frame(pFormatCtx, &packet)>=0) {
    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream) {
      // Decode video frame
      avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

      // Did we get a video frame?
      if(frameFinished) {

        // Convert the image into YUV format that SDL uses
        sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                  pFrame->linesize, 0, pCodecCtx->height,
                  pict->data, pict->linesize);

        SDL_UpdateYUVTexture(texture, NULL,
                             pict->data[0], pict->linesize[0],
                             pict->data[1], pict->linesize[1],
                             pict->data[2], pict->linesize[2]);

        rect.x = 0;
        rect.y = 0;
        rect.w = pCodecCtx->width;
        rect.h = pCodecCtx->height;

        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, texture, NULL, &rect);
        SDL_RenderPresent(renderer);

        av_free_packet(&packet);
      }
    } else if(packet.stream_index==audioStream) { //for audio
      packet_queue_put(&audioq, &packet);
    } else {
      av_free_packet(&packet);
    }

    // Collect events so the window stays responsive
    SDL_PollEvent(&event);
    switch(event.type) {
    case SDL_QUIT:
      quit = 1;
      goto __QUIT;
      break;
    default:
      break;
    }

  }

__QUIT:
  ret = 0;

__FAIL:
  // Free the YUV frame
  if(pFrame){
    av_frame_free(&pFrame);
  }

  // Close the codecs
  if(pCodecCtxOrig){
    avcodec_close(pCodecCtxOrig);
  }

  if(pCodecCtx){
    avcodec_close(pCodecCtx);
  }

  if(aCodecCtxOrig) {
    avcodec_close(aCodecCtxOrig);
  }

  if(aCodecCtx) {
    avcodec_close(aCodecCtx);
  }

  // Close the video file
  if(pFormatCtx){
    avformat_close_input(&pFormatCtx);
  }

  if(pict){
    avpicture_free(pict);
    free(pict);
  }

  if(win){
    SDL_DestroyWindow(win);
  }

  if(renderer){
    SDL_DestroyRenderer(renderer);
  }

  if(texture){
    SDL_DestroyTexture(texture);
  }

  SDL_Quit();

  return ret;
}
```
- Build: clang -g -o player2va player2va.c `pkg-config --cflags --libs sdl2 libavutil libavformat libavcodec libswscale libswresample`


#### 50. Multithreading and Locks

- Benefits of multithreading
  - Makes full use of CPU resources
- Thread mutual exclusion and synchronization
  - Mutual exclusion (competing for the lock's key)
  - Synchronization (a signaling mechanism)
- Locks and semaphores
- Kinds of locks
  - 1. Read-write locks
  - 2. Spin locks (busy-waiting; only for short waits)
  - 3. Reentrant locks
- Synchronizing via signals
- SDL thread creation/waiting (a minimal sketch follows this list)
  - 1. SDL_CreateThread
  - 2. SDL_WaitThread
- SDL locks
  - 1. SDL_CreateMutex / SDL_DestroyMutex
  - 2. SDL_LockMutex / SDL_UnlockMutex
- SDL condition variables (semaphores)
  - 1. SDL_CreateCond / SDL_DestroyCond
  - 2. SDL_CondWait / SDL_CondSignal

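The list above only names the APIs; here is a minimal, self-contained sketch of creating a thread and collecting its result (everything except the SDL calls is invented for the example):

```c
#include <SDL.h>

// Worker entry point: receives the `data` pointer passed to SDL_CreateThread
static int worker(void *data) {
    const char *msg = (const char *)data;
    SDL_Log("worker running: %s", msg);
    return 42; // handed back through SDL_WaitThread
}

int main(int argc, char *argv[]) {
    int status = 0;
    SDL_Init(0);
    // fn, thread name, argument for fn
    SDL_Thread *t = SDL_CreateThread(worker, "worker", (void *)"hello");
    // Block until the thread exits and collect its return value
    SDL_WaitThread(t, &status);
    SDL_Log("worker returned %d", status);
    SDL_Quit();
    return 0;
}
```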
#### 51. Using Locks and Condition Variables

```c
#include <string.h>
#include <SDL.h>
#include <libavformat/avformat.h>

// Excerpt: the packet queue used by the player in section 52;
// global_video_state (and its quit flag) is defined there.

// The queue structure
typedef struct PacketQueue {
  AVPacketList *first_pkt, *last_pkt; // head and tail of the queue (AVPacketList comes from ffmpeg)
  int nb_packets;                     // number of packets in the queue
  int size;                           // total bytes queued
  SDL_mutex *mutex;                   // mutual exclusion
  SDL_cond *cond;                     // synchronization (condition variable)
} PacketQueue;

void packet_queue_init(PacketQueue *q) {
  memset(q, 0, sizeof(PacketQueue));
  q->mutex = SDL_CreateMutex();
  q->cond = SDL_CreateCond();
}

// Enqueue
int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

  AVPacketList *pkt1;
  if(av_dup_packet(pkt) < 0) { // take our own reference to the packet data
    return -1;
  }
  // Allocate a queue element, then link it in
  pkt1 = av_malloc(sizeof(AVPacketList));
  if (!pkt1)
    return -1;
  pkt1->pkt = *pkt;
  pkt1->next = NULL;

  // Lock
  SDL_LockMutex(q->mutex);

  // Is this the first element?
  if (!q->last_pkt)
    q->first_pkt = pkt1;
  else
    q->last_pkt->next = pkt1;
  q->last_pkt = pkt1;        // move the tail pointer
  q->nb_packets++;
  q->size += pkt1->pkt.size; // account for the packet size
  SDL_CondSignal(q->cond);   // signal: wake a waiting consumer

  SDL_UnlockMutex(q->mutex); // unlock
  return 0;
}

// Dequeue
int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block)
{
  AVPacketList *pkt1;
  int ret;

  SDL_LockMutex(q->mutex); // lock

  for(;;) { // loop until we get a packet or are told to stop

    if(global_video_state->quit) {
      ret = -1;
      break;
    }

    pkt1 = q->first_pkt; // take the element at the head of the queue
    if (pkt1) {
      q->first_pkt = pkt1->next; // advance the head
      if (!q->first_pkt)         // the queue is now empty
        q->last_pkt = NULL;
      q->nb_packets--;
      q->size -= pkt1->pkt.size;
      *pkt = pkt1->pkt; // hand out the packet
      av_free(pkt1);    // free the list node
      ret = 1;
      break;
    } else if (!block) {
      ret = 0;
      break;
    } else {
      SDL_CondWait(q->cond, q->mutex); // wait: releases the mutex until signaled
    }
  }
  SDL_UnlockMutex(q->mutex); // unlock
  return ret;
}
```

#### 52. The Player's Thread Model

- The main thread handles command-line arguments, events, and video rendering; complex logic is kept off it

```text
            ---------> video packet queue <----------
            ^                                       |
            |                                       |
input file --- [create thread] --- demux --- [create thread] --- video decode thread
            |                                       |
            V                                       |
   audio packet queue ------------------------ audio rendering (SDL)
            |
            V
   video rendering --------------------------> decoded picture queue
```

```c
// tutorial04.c
// A pedagogical video player that will stream through every video frame as fast as it can,
// and play audio (out of sync).
//
// Code based on FFplay, Copyright (c) 2003 Fabrice Bellard,
// and a tutorial by Martin Bohme (boehme@inb.uni-luebeckREMOVETHIS.de)
// Tested on Gentoo, CVS version 5/01/07 compiled with GCC 4.1.1
// With updates from https://github.com/chelyaev/ffmpeg-tutorial
// Updates tested on:
// LAVC 54.59.100, LAVF 54.29.104, LSWS 2.1.101, SDL 1.2.15
// on GCC 4.7.2 in Debian February 2015
// Use
//
// gcc -o tutorial04 tutorial04.c -lavformat -lavcodec -lswscale -lz -lm `sdl-config --cflags --libs`
// to build (assuming libavformat and libavcodec are correctly installed,
// and assuming you have sdl-config. Please refer to SDL docs for your installation.)
//
// Run using
// tutorial04 myvideofile.mpg
//
// to play the video stream on your screen.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <SDL.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#include <libavutil/avstring.h>

// compatibility with newer API
#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
#define av_frame_alloc avcodec_alloc_frame
#define av_frame_free  avcodec_free_frame
#endif

// The original #defines were lost in extraction; these are the values from
// the tutorial this file is based on.
#define SDL_AUDIO_BUFFER_SIZE 1024
#define MAX_AUDIO_FRAME_SIZE 192000

#define MAX_AUDIOQ_SIZE (5 * 16 * 1024)
#define MAX_VIDEOQ_SIZE (5 * 256 * 1024)

#define FF_REFRESH_EVENT (SDL_USEREVENT)
#define FF_QUIT_EVENT (SDL_USEREVENT + 1)

#define VIDEO_PICTURE_QUEUE_SIZE 1

typedef struct PacketQueue {
  AVPacketList *first_pkt, *last_pkt;
  int nb_packets;
  int size;
  SDL_mutex *mutex;
  SDL_cond *cond;
} PacketQueue;


typedef struct VideoPicture {
  AVPicture *pict;
  int width, height; /* source height & width */
  int allocated;
} VideoPicture;

typedef struct VideoState {

  //for multi-media file
  char filename[1024];
  AVFormatContext *pFormatCtx;

  int videoStream, audioStream;

  //for audio
  AVStream *audio_st;
  AVCodecContext *audio_ctx;
  PacketQueue audioq;
  uint8_t audio_buf[(MAX_AUDIO_FRAME_SIZE * 3) / 2];
  unsigned int audio_buf_size;
  unsigned int audio_buf_index;
  AVFrame audio_frame;
  AVPacket audio_pkt;
  uint8_t *audio_pkt_data;
  int audio_pkt_size;
  struct SwrContext *audio_swr_ctx;

  //for video
  AVStream *video_st;
  AVCodecContext *video_ctx;
  PacketQueue videoq;
  struct SwsContext *sws_ctx;

  VideoPicture pictq[VIDEO_PICTURE_QUEUE_SIZE];
  int pictq_size, pictq_rindex, pictq_windex;

  //for thread
  SDL_mutex *pictq_mutex;
  SDL_cond *pictq_cond;

  SDL_Thread *parse_tid;
  SDL_Thread *video_tid;

  int quit;

} VideoState;

SDL_mutex *texture_mutex;
SDL_Window *win;
SDL_Renderer *renderer;
SDL_Texture *texture;

FILE *audiofd = NULL;
FILE *audiofd1 = NULL;

/* Since we only have one decoding thread, the Big Struct
   can be global in case we need it. */
VideoState *global_video_state;

void packet_queue_init(PacketQueue *q) {
  memset(q, 0, sizeof(PacketQueue));
  q->mutex = SDL_CreateMutex();
  q->cond = SDL_CreateCond();
}

int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

  AVPacketList *pkt1;
  if(av_dup_packet(pkt) < 0) {
    return -1;
  }
  pkt1 = av_malloc(sizeof(AVPacketList));
  if (!pkt1)
    return -1;
  pkt1->pkt = *pkt;
  pkt1->next = NULL;

  SDL_LockMutex(q->mutex);

  if (!q->last_pkt)
    q->first_pkt = pkt1;
  else
    q->last_pkt->next = pkt1;
  q->last_pkt = pkt1;
  q->nb_packets++;
  q->size += pkt1->pkt.size;
  //fprintf(stderr, "enqueue, packets:%d, send cond signal\n", q->nb_packets);
  SDL_CondSignal(q->cond);

  SDL_UnlockMutex(q->mutex);
  return 0;
}

int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block)
{
  AVPacketList *pkt1;
  int ret;

  SDL_LockMutex(q->mutex);

  for(;;) {

    if(global_video_state->quit) {
      fprintf(stderr, "quit from queue_get\n");
      ret = -1;
      break;
    }

    pkt1 = q->first_pkt;
    if (pkt1) {
      q->first_pkt = pkt1->next;
      if (!q->first_pkt)
        q->last_pkt = NULL;
      q->nb_packets--;
      q->size -= pkt1->pkt.size;
      *pkt = pkt1->pkt;
      av_free(pkt1);
      ret = 1;
      break;
    } else if (!block) {
      ret = 0;
      break;
    } else {
      fprintf(stderr, "queue is empty, so wait a moment and wait a cond signal\n");
      SDL_CondWait(q->cond, q->mutex);
    }
  }
  SDL_UnlockMutex(q->mutex);
  return ret;
}

int audio_decode_frame(VideoState *is, uint8_t *audio_buf, int buf_size) {

  int len1, data_size = 0;
  AVPacket *pkt = &is->audio_pkt;

  for(;;) {
    while(is->audio_pkt_size > 0) {

      int got_frame = 0;
      len1 = avcodec_decode_audio4(is->audio_ctx, &is->audio_frame, &got_frame, pkt);
      if(len1 < 0) {
        /* if error, skip frame */
        fprintf(stderr, "Failed to decode audio ......\n");
        is->audio_pkt_size = 0;
        break;
      }

      data_size = 0;
      if(got_frame) {
        /*
        fprintf(stderr, "audio: channels:%d, nb_samples:%d, sample_fmt:%d\n",
                is->audio_ctx->channels,
                is->audio_frame.nb_samples,
                is->audio_ctx->sample_fmt);

        data_size = av_samples_get_buffer_size(NULL,
                                               is->audio_ctx->channels,
                                               is->audio_frame.nb_samples,
                                               is->audio_ctx->sample_fmt,
                                               1);
        */
        data_size = 2 * is->audio_frame.nb_samples * 2;
        assert(data_size <= buf_size);
        //memcpy(audio_buf, is->audio_frame.data[0], data_size);

        swr_convert(is->audio_swr_ctx,
                    &audio_buf,
                    MAX_AUDIO_FRAME_SIZE*3/2,
                    (const uint8_t **)is->audio_frame.data,
                    is->audio_frame.nb_samples);


        fwrite(audio_buf, 1, data_size, audiofd);
        fflush(audiofd);
      }

      is->audio_pkt_data += len1;
      is->audio_pkt_size -= len1;
      if(data_size <= 0) {
        /* No data yet, get more frames */
        continue;
      }
      /* We have data, return it and come back for more later */
      return data_size;
    }

    if(pkt->data)
      av_free_packet(pkt);

    if(is->quit) {
      fprintf(stderr, "will quit program......\n");
      return -1;
    }

    /* next packet */
    if(packet_queue_get(&is->audioq, pkt, 1) < 0) {
      return -1;
    }

    is->audio_pkt_data = pkt->data;
    is->audio_pkt_size = pkt->size;
  }
}

void audio_callback(void *userdata, Uint8 *stream, int len) {

  VideoState *is = (VideoState *)userdata;
  int len1, audio_size;

  SDL_memset(stream, 0, len);

  while(len > 0) {
    if(is->audio_buf_index >= is->audio_buf_size) {
      /* We have already sent all our data; get more */
      audio_size = audio_decode_frame(is, is->audio_buf, sizeof(is->audio_buf));
      if(audio_size < 0) {
        /* If error, output silence */
        is->audio_buf_size = 1024*2*2;
        memset(is->audio_buf, 0, is->audio_buf_size);
      } else {
        is->audio_buf_size = audio_size;
      }
      is->audio_buf_index = 0;
    }
    len1 = is->audio_buf_size - is->audio_buf_index;
    fprintf(stderr, "stream addr:%p, audio_buf_index:%d, len1:%d, len:%d\n",
            stream,
            is->audio_buf_index,
            len1,
            len);
    if(len1 > len)
      len1 = len;
    //memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
    fwrite(is->audio_buf, 1, len1, audiofd1);
    fflush(audiofd1);
    // mix from the current read position (the original mixed from the buffer
    // start, which repeats already-played samples)
    SDL_MixAudio(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1, SDL_MIX_MAXVOLUME);
    len -= len1;
    stream += len1;
    is->audio_buf_index += len1;
  }
}

static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return 0; /* 0 means stop timer */
}

/* schedule a video refresh in 'delay' ms */
static void schedule_refresh(VideoState *is, int delay) {
  SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}

void video_display(VideoState *is) {

  SDL_Rect rect;
  VideoPicture *vp;
  float aspect_ratio;
  int w, h, x, y;
  int i;

  vp = &is->pictq[is->pictq_rindex];
  if(vp->pict) {

    if(is->video_ctx->sample_aspect_ratio.num == 0) {
      aspect_ratio = 0;
    } else {
      aspect_ratio = av_q2d(is->video_ctx->sample_aspect_ratio) *
        is->video_ctx->width / is->video_ctx->height;
    }

    if(aspect_ratio <= 0.0) {
      aspect_ratio = (float)is->video_ctx->width /
        (float)is->video_ctx->height;
    }

    /*
    h = screen->h;
    w = ((int)rint(h * aspect_ratio)) & -3;
    if(w > screen->w) {
      w = screen->w;
      h = ((int)rint(w / aspect_ratio)) & -3;
    }
    x = (screen->w - w) / 2;
    y = (screen->h - h) / 2;
    */

    SDL_UpdateYUVTexture( texture, NULL,
                          vp->pict->data[0], vp->pict->linesize[0],
                          vp->pict->data[1], vp->pict->linesize[1],
                          vp->pict->data[2], vp->pict->linesize[2]);

    rect.x = 0;
    rect.y = 0;
    rect.w = is->video_ctx->width;
    rect.h = is->video_ctx->height;

    SDL_LockMutex(texture_mutex);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, NULL, &rect);
    SDL_RenderPresent(renderer);
    SDL_UnlockMutex(texture_mutex);

  }
}

void video_refresh_timer(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  if(is->video_st) {
    if(is->pictq_size == 0) {
      schedule_refresh(is, 1); //if the queue is empty, check the picture queue again as soon as possible
    } else {
      vp = &is->pictq[is->pictq_rindex];
      /* Now, normally here goes a ton of code
         about timing, etc. we're just going to
         guess at a delay for now. You can
         increase and decrease this value and hard code
         the timing - but I don't suggest that ;)
         We'll learn how to do it for real later.
      */
      schedule_refresh(is, 40);

      /* show the picture! */
      video_display(is);

      /* update queue for next picture! */
      if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
        is->pictq_rindex = 0;
      }
      SDL_LockMutex(is->pictq_mutex);
      is->pictq_size--;
      SDL_CondSignal(is->pictq_cond);
      SDL_UnlockMutex(is->pictq_mutex);
    }
  } else {
    schedule_refresh(is, 100);
  }
}

void alloc_picture(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  vp = &is->pictq[is->pictq_windex];
  if(vp->pict) { //free space if vp->pict is not NULL
    avpicture_free(vp->pict);
    free(vp->pict);
  }

  // Allocate a place to put our YUV image on that screen
  SDL_LockMutex(texture_mutex);
  vp->pict = (AVPicture*)malloc(sizeof(AVPicture));
  if(vp->pict){
    avpicture_alloc(vp->pict,
                    AV_PIX_FMT_YUV420P,
                    is->video_ctx->width,
                    is->video_ctx->height);
  }
  SDL_UnlockMutex(texture_mutex);

  vp->width = is->video_ctx->width;
  vp->height = is->video_ctx->height;
  vp->allocated = 1;

}

int queue_picture(VideoState *is, AVFrame *pFrame) {

  VideoPicture *vp;
  int dst_pix_fmt;
  AVPicture pict;

  /* wait until we have space for a new pic */
  SDL_LockMutex(is->pictq_mutex);
  while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
        !is->quit) {
    SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  }
  SDL_UnlockMutex(is->pictq_mutex);

  if(is->quit){
    fprintf(stderr, "quit from queue_picture....\n");
    return -1;
  }

  // windex is set to 0 initially
  vp = &is->pictq[is->pictq_windex];

  /*
  fprintf(stderr, "vp.width=%d, vp.height=%d, video_ctx.width=%d, video_ctx.height=%d\n",
          vp->width,
          vp->height,
          is->video_ctx->width,
          is->video_ctx->height);
  */

  /* allocate or resize the buffer! */
  if(!vp->pict ||
     vp->width != is->video_ctx->width ||
     vp->height != is->video_ctx->height) {

    vp->allocated = 0;
    alloc_picture(is);
    if(is->quit) {
      fprintf(stderr, "quit from queue_picture2....\n");
      return -1;
    }
  }

  /* We have a place to put our picture on the queue */

  if(vp->pict) {

    // Convert the image into YUV format that SDL uses
    sws_scale(is->sws_ctx, (uint8_t const * const *)pFrame->data,
              pFrame->linesize, 0, is->video_ctx->height,
              vp->pict->data, vp->pict->linesize);

    /* now we inform our display thread that we have a pic ready */
    if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
      is->pictq_windex = 0;
    }
    SDL_LockMutex(is->pictq_mutex);
    is->pictq_size++;
    SDL_UnlockMutex(is->pictq_mutex);
  }
  return 0;
}

int video_thread(void *arg) {
  VideoState *is = (VideoState *)arg;
  AVPacket pkt1, *packet = &pkt1;
  int frameFinished;
  AVFrame *pFrame;

  pFrame = av_frame_alloc();

  for(;;) {
    if(packet_queue_get(&is->videoq, packet, 1) < 0) {
      // means we quit getting packets
      break;
    }

    // Decode video frame
    avcodec_decode_video2(is->video_ctx, pFrame, &frameFinished, packet);

    // Did we get a video frame?
    if(frameFinished) {
      if(queue_picture(is, pFrame) < 0) {
        break;
      }
    }

    av_free_packet(packet);
  }
  av_frame_free(&pFrame);
  return 0;
}

int stream_component_open(VideoState *is, int stream_index) {

  int64_t in_channel_layout, out_channel_layout;

  AVFormatContext *pFormatCtx = is->pFormatCtx;
  AVCodecContext *codecCtx = NULL;
  AVCodec *codec = NULL;
  SDL_AudioSpec wanted_spec, spec;

  if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
    return -1;
  }

  codec = avcodec_find_decoder(pFormatCtx->streams[stream_index]->codec->codec_id);
  if(!codec) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  codecCtx = avcodec_alloc_context3(codec);
  if(avcodec_copy_context(codecCtx, pFormatCtx->streams[stream_index]->codec) != 0) {
    fprintf(stderr, "Couldn't copy codec context");
    return -1; // Error copying codec context
  }


  if(codecCtx->codec_type == AVMEDIA_TYPE_AUDIO) {
    // Set audio settings from codec info
    wanted_spec.freq = codecCtx->sample_rate;
    wanted_spec.format = AUDIO_S16SYS;
    wanted_spec.channels = codecCtx->channels;
    wanted_spec.silence = 0;
    wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
    wanted_spec.callback = audio_callback;
    wanted_spec.userdata = is;

    if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
      fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
      return -1;
    }
  }

  if(avcodec_open2(codecCtx, codec, NULL) < 0) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  switch(codecCtx->codec_type) {
  case AVMEDIA_TYPE_AUDIO:
    is->audioStream = stream_index;
    is->audio_st = pFormatCtx->streams[stream_index];
    is->audio_ctx = codecCtx;
    is->audio_buf_size = 0;
    is->audio_buf_index = 0;
    memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
    packet_queue_init(&is->audioq);
    SDL_PauseAudio(0);

    in_channel_layout=av_get_default_channel_layout(is->audio_ctx->channels);
    out_channel_layout = in_channel_layout;

    is->audio_swr_ctx = swr_alloc();
    swr_alloc_set_opts(is->audio_swr_ctx,
                       out_channel_layout,
                       AV_SAMPLE_FMT_S16,
                       is->audio_ctx->sample_rate,
                       in_channel_layout,
                       is->audio_ctx->sample_fmt,
                       is->audio_ctx->sample_rate,
                       0,
                       NULL);

    fprintf(stderr, "swr opts: out_channel_layout:%lld, out_sample_fmt:%d, out_sample_rate:%d, in_channel_layout:%lld, in_sample_fmt:%d, in_sample_rate:%d",
            out_channel_layout,
            AV_SAMPLE_FMT_S16,
            is->audio_ctx->sample_rate,
            in_channel_layout,
            is->audio_ctx->sample_fmt,
            is->audio_ctx->sample_rate);

    swr_init(is->audio_swr_ctx);

    break;

  case AVMEDIA_TYPE_VIDEO:
    is->videoStream = stream_index;
    is->video_st = pFormatCtx->streams[stream_index];
    is->video_ctx = codecCtx;
    packet_queue_init(&is->videoq);
    is->video_tid = SDL_CreateThread(video_thread, "video_thread", is);
    is->sws_ctx = sws_getContext(is->video_ctx->width,
                                 is->video_ctx->height,
                                 is->video_ctx->pix_fmt,
                                 is->video_ctx->width,
                                 is->video_ctx->height,
                                 AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR,
                                 NULL, NULL, NULL);
    break;
  default:
    break;
  }

  return 0;
}

int decode_thread(void *arg) {

  Uint32 pixformat;

  VideoState *is = (VideoState *)arg;
  AVFormatContext *pFormatCtx;
  AVPacket pkt1, *packet = &pkt1;

  int i;
  int video_index = -1;
  int audio_index = -1;

  is->videoStream = -1;
  is->audioStream = -1;

  global_video_state = is;

  // Open video file
  if(avformat_open_input(&pFormatCtx, is->filename, NULL, NULL)!=0)
    return -1; // Couldn't open file

  is->pFormatCtx = pFormatCtx;

  // Retrieve stream information
  if(avformat_find_stream_info(pFormatCtx, NULL)<0)
    return -1; // Couldn't find stream information

  // Dump information about file onto standard error
  av_dump_format(pFormatCtx, 0, is->filename, 0);

  // Find the first video and audio streams
  for(i=0; i<pFormatCtx->nb_streams; i++) {
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO &&
       video_index < 0) {
      video_index=i;
    }
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO &&
       audio_index < 0) {
      audio_index=i;
    }
  }

  if(audio_index >= 0) {
    stream_component_open(is, audio_index);
  }
  if(video_index >= 0) {
    stream_component_open(is, video_index);
  }

  if(is->videoStream < 0 || is->audioStream < 0) {
    fprintf(stderr, "%s: could not open codecs\n", is->filename);
    goto fail;
  }

  fprintf(stderr, "video context: width=%d, height=%d\n", is->video_ctx->width, is->video_ctx->height);
  win = SDL_CreateWindow("Media Player",
                         SDL_WINDOWPOS_UNDEFINED,
                         SDL_WINDOWPOS_UNDEFINED,
                         is->video_ctx->width, is->video_ctx->height,
                         SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);

  renderer = SDL_CreateRenderer(win, -1, 0);

  pixformat = SDL_PIXELFORMAT_IYUV;
  texture = SDL_CreateTexture(renderer,
                              pixformat,
                              SDL_TEXTUREACCESS_STREAMING,
                              is->video_ctx->width,
                              is->video_ctx->height);

  // main decode loop
  for(;;) {

    if(is->quit) {
      SDL_CondSignal(is->videoq.cond);
      SDL_CondSignal(is->audioq.cond);
      break;
    }

    // seek stuff goes here
    if(is->audioq.size > MAX_AUDIOQ_SIZE ||
       is->videoq.size > MAX_VIDEOQ_SIZE) {
      SDL_Delay(10);
      continue;
    }

    if(av_read_frame(is->pFormatCtx, packet) < 0) {
      if(is->pFormatCtx->pb->error == 0) {
        SDL_Delay(100); /* no error; wait for user input */
        continue;
      } else {
        break;
      }
    }

    // Is this a packet from the video stream?
    if(packet->stream_index == is->videoStream) {
      packet_queue_put(&is->videoq, packet);
      fprintf(stderr, "put video queue, size :%d\n", is->videoq.nb_packets);
    } else if(packet->stream_index == is->audioStream) {
      packet_queue_put(&is->audioq, packet);
      fprintf(stderr, "put audio queue, size :%d\n", is->audioq.nb_packets);
    } else {
      av_free_packet(packet);
    }

  }

  /* all done - wait for it */
  while(!is->quit) {
    SDL_Delay(100);
  }

 fail:
  if(1){
    SDL_Event event;
    event.type = FF_QUIT_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);
  }

  return 0;
}

int main(int argc, char *argv[]) {

  int ret = -1;

  SDL_Event event;

  VideoState *is;

  if(argc < 2) {
    fprintf(stderr, "Usage: test <file>\n");
    exit(1);
  }

  audiofd = fopen("testout.pcm", "wb+");
  audiofd1 = fopen("testout1.pcm", "wb+");

  //big struct, it's the core
  is = av_mallocz(sizeof(VideoState));

  // Register all formats and codecs
  av_register_all();

  if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
    fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
    exit(1);
  }

  texture_mutex = SDL_CreateMutex();

  av_strlcpy(is->filename, argv[1], sizeof(is->filename));

  is->pictq_mutex = SDL_CreateMutex();
  is->pictq_cond = SDL_CreateCond();

  //set timer
  schedule_refresh(is, 40);

  is->parse_tid = SDL_CreateThread(decode_thread, "decode_thread", is);
  if(!is->parse_tid) {
    av_free(is);
    goto __FAIL;
  }

  for(;;) {

    SDL_WaitEvent(&event);
    switch(event.type) {
    case FF_QUIT_EVENT:
    case SDL_QUIT:
      fprintf(stderr, "receive a QUIT event: %d\n", event.type);
      is->quit = 1;
      //SDL_Quit();
      //return 0;
      goto __QUIT;
      break;
    case FF_REFRESH_EVENT:
      //fprintf(stderr, "receive a refresh event: %d\n", event.type);
      video_refresh_timer(event.user.data1);
      break;
    default:
      break;
    }
  }

__QUIT:
  ret = 0;


__FAIL:
  SDL_Quit();
  if(audiofd){
    fclose(audiofd);
  }
  if(audiofd1){
    fclose(audiofd1);
  }
  return ret;

}
```
1509 | |
1510 | |
1511 | #### 53.线程的退出机制 |
1512 | |
1513 | - 主线程接收到退出事件 |
1514 | - 解复用线程在循环分流时对quit进行判断 |
1515 | - 视频解码线程从视频流队列中取包时对quit进行判断 |
1516 | - 音频解码线程从音频流队列中取包时对quit进行判断 |
1517 | - 音视频循环解码时对quit进行判断 |
1518 | - 在收到信号变量消息时对quit进行判断(典型写法见下方示意代码) |
1519 | |
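上面这些判断落到代码里,就是每个线程的循环都把 quit 作为退出条件;阻塞在条件变量上的线程被 SDL_CondSignal 唤醒后,也要先再查一次 quit。下面是对上、下两段播放器代码中这一写法的示意(非完整实现):

```c
// 解复用/解码线程的主循环:每一轮先判断quit
for(;;) {
    if(is->quit) {
        break;              // 主线程置位quit后,本线程自行退出
    }
    /* ... 读包 / 取包 / 解码 / 渲染 ... */
}

// 阻塞在条件变量上的线程:被唤醒后先查quit,再看队列里有没有数据
SDL_LockMutex(q->mutex);
for(;;) {
    if(global_video_state->quit) {   // 对应packet_queue_get中的判断
        break;
    }
    if(q->first_pkt) {               // 队列中有数据,正常往下处理
        break;
    }
    SDL_CondWait(q->cond, q->mutex);
}
SDL_UnlockMutex(q->mutex);
```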
1520 | |
1521 | #### 54.音视频同步 |
1522 | |
1523 | - 时间戳 |
1524 | - PTS: Presentation timestamp 用于最终渲染用的 |
1525 | - DTS: Decoding timestamp 用于视频解码 |
1526 | - I(intra) / B(bidirectional) / P(predicted) 帧 |
1527 | - I 关键帧,帧内压缩 |
1528 | - B 双向参考帧,可向前向后参考,且可参考多帧 |
1529 | - P 向前参考帧,1帧 |
1530 | - 时间戳顺序 |
1531 | - 实际帧顺序:I B B P |
1532 | - 存放帧顺序:I P B B |
1533 | - 解码时间戳:1 2 3 4(按存放顺序I P B B解码,单调递增) |
1534 | - 展示时间戳:1 4 2 3(P帧最后展示) |
1535 | - 从哪儿获取PTS |
1536 | - AVPacket中的PTS【解复用的数据包里】 |
1537 | - AVFrame中的PTS【解码数据帧里】 |
1538 | - av_frame_get_best_effort_timestamp() |
1539 | - 时间基 |
1540 | - 不同的时间的基数 |
1541 | - tbr: 帧率 【1/25】 |
1542 | - tbn: time base of stream |
1543 | - tbc: time base of codec |
1544 | - 计算当前帧的PTS |
1545 | - PTS = PTS * av_q2d(video_stream->time_base) |
1546 | - av_q2d(AVRational a) { return a.num / (double)a.den; }(下方有具体换算示例) |
1547 | - 计算下一帧的PTS |
1548 | - video_clock: 预测的下一帧视频的PTS |
1549 | - frame_delay: 1/tbr |
1550 | - audio_clock: 音频当前播放的时间戳 |
1551 | - 音视频同步的方式 |
1552 | - 视频同步到音频(易) |
1553 | - 音频同步到视频(难) |
1554 | - 音频和视频都同步到系统时钟 |
1555 | - 视频播放的基本思路 |
1556 | - 一般的做法,展示第一帧视频帧后,获得要显示的下一个帧视频的PTS,然后设置一个定时器,当定时器超时后,刷新新的视频帧,如此反复操作。 |
1557 | |
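按上面的时间基做一个具体换算(示意;这里假设流的 time_base 为 1/90000,即常见的90kHz时钟):

```c
AVRational time_base = {1, 90000};         // 假设:视频流的time_base为1/90000
int64_t pts = 180000;                      // 某一帧的PTS,以time_base为刻度
double seconds = pts * av_q2d(time_base);  // 180000 * (1/90000) = 2.0 秒
```

下面是在前一版播放器基础上加入音视频同步后的完整代码: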
1558 | ```c |
1559 | #include <libavcodec/avcodec.h> |
1560 | #include <libavformat/avformat.h> |
1561 | #include <libswscale/swscale.h> |
1562 | #include <libswresample/swresample.h> |
1563 | #include <libavutil/time.h> |
1564 | #include <libavutil/avstring.h> |
1565 | #include <SDL.h> |
1566 | #include <SDL_thread.h> |
1567 | #include <stdio.h> |
1568 | #include <assert.h> |
1569 | #include <math.h> |
1570 | // compatibility with newer API |
1571 | #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1) |
1572 | #define av_frame_alloc avcodec_alloc_frame |
1573 | #define av_frame_free  avcodec_free_frame |
1574 | #endif |
1575 | |
1576 | /* 注:以上头文件与以下宏定义在原稿中缺失,按下文代码的引用补全,取值沿用常见教程写法 */ |
1577 | #define SDL_AUDIO_BUFFER_SIZE 1024 |
1578 | #define MAX_AUDIO_FRAME_SIZE 192000 |
1579 | #define MAX_AUDIOQ_SIZE (5 * 16 * 1024) |
1580 | #define MAX_VIDEOQ_SIZE (5 * 256 * 1024) |
1581 | #define AV_SYNC_THRESHOLD 0.01 |
1582 | #define AV_NOSYNC_THRESHOLD 10.0 |
1583 | #define FF_REFRESH_EVENT (SDL_USEREVENT) |
1584 | #define FF_QUIT_EVENT (SDL_USEREVENT + 1) |
1585 | #define VIDEO_PICTURE_QUEUE_SIZE 1 |
1586 | |
1587 | |
1588 | |
1589 | |
1590 | typedef struct PacketQueue { |
1591 | AVPacketList *first_pkt, *last_pkt; |
1592 | int nb_packets; |
1593 | int size; |
1594 | SDL_mutex *mutex; |
1595 | SDL_cond *cond; |
1596 | } PacketQueue; |
1597 | |
1598 | |
1599 | typedef struct VideoPicture { |
1600 | AVPicture *bmp; |
1601 | int width, height; /* source height & width */ |
1602 | int allocated; |
1603 | double pts; |
1604 | } VideoPicture; |
1605 | |
1606 | typedef struct VideoState { |
1607 | |
1608 | AVFormatContext *pFormatCtx; |
1609 | int videoStream, audioStream; |
1610 | |
1611 | AVStream *audio_st; |
1612 | AVCodecContext *audio_ctx; |
1613 | PacketQueue audioq; |
1614 | uint8_t audio_buf[(MAX_AUDIO_FRAME_SIZE * 3) / 2]; |
1615 | unsigned int audio_buf_size; |
1616 | unsigned int audio_buf_index; |
1617 | AVFrame audio_frame; |
1618 | AVPacket audio_pkt; |
1619 | uint8_t *audio_pkt_data; |
1620 | int audio_pkt_size; |
1621 | int audio_hw_buf_size; |
1622 | struct SwrContext *audio_swr_ctx; |
1623 | |
1624 | double audio_clock; |
1625 | double video_clock; ///<pts of last decoded frame / predicted pts of next decoded frame |
1626 | |
1627 | double frame_timer; |
1628 | double frame_last_pts; |
1629 | double frame_last_delay; |
1630 | |
1631 | AVStream *video_st; |
1632 | AVCodecContext *video_ctx; |
1633 | PacketQueue videoq; |
1634 | struct SwsContext *video_sws_ctx; |
1635 | |
1636 | VideoPicture pictq[VIDEO_PICTURE_QUEUE_SIZE]; |
1637 | int pictq_size, pictq_rindex, pictq_windex; |
1638 | SDL_mutex *pictq_mutex; |
1639 | SDL_cond *pictq_cond; |
1640 | |
1641 | SDL_Thread *parse_tid; |
1642 | SDL_Thread *video_tid; |
1643 | |
1644 | char filename[1024]; |
1645 | int quit; |
1646 | } VideoState; |
1647 | |
1648 | SDL_mutex *text_mutex; |
1649 | SDL_Window *win; |
1650 | SDL_Renderer *renderer; |
1651 | SDL_Texture *texture; |
1652 | |
1653 | /* Since we only have one decoding thread, the Big Struct |
1654 | can be global in case we need it. */ |
1655 | VideoState *global_video_state; |
1656 | |
1657 | void packet_queue_init(PacketQueue *q) { |
1658 | memset(q, 0, sizeof(PacketQueue)); |
1659 | q->mutex = SDL_CreateMutex(); |
1660 | q->cond = SDL_CreateCond(); |
1661 | } |
1662 | int packet_queue_put(PacketQueue *q, AVPacket *pkt) { |
1663 | |
1664 | AVPacketList *pkt1; |
1665 | if(av_dup_packet(pkt) < 0) { |
1666 | return -1; |
1667 | } |
1668 | pkt1 = av_malloc(sizeof(AVPacketList)); |
1669 | if (!pkt1) |
1670 | return -1; |
1671 | pkt1->pkt = *pkt; |
1672 | pkt1->next = NULL; |
1673 | |
1674 | SDL_LockMutex(q->mutex); |
1675 | |
1676 | if (!q->last_pkt) |
1677 | q->first_pkt = pkt1; |
1678 | else |
1679 | q->last_pkt->next = pkt1; |
1680 | q->last_pkt = pkt1; |
1681 | q->nb_packets++; |
1682 | q->size += pkt1->pkt.size; |
1683 | SDL_CondSignal(q->cond); |
1684 | |
1685 | SDL_UnlockMutex(q->mutex); |
1686 | return 0; |
1687 | } |
1688 | |
1689 | int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block) |
1690 | { |
1691 | AVPacketList *pkt1; |
1692 | int ret; |
1693 | |
1694 | SDL_LockMutex(q->mutex); |
1695 | |
1696 | for(;;) { |
1697 | |
1698 | if(global_video_state->quit) { |
1699 | ret = -1; |
1700 | break; |
1701 | } |
1702 | |
1703 | pkt1 = q->first_pkt; |
1704 | if (pkt1) { |
1705 | q->first_pkt = pkt1->next; |
1706 | if (!q->first_pkt) |
1707 | q->last_pkt = NULL; |
1708 | q->nb_packets--; |
1709 | q->size -= pkt1->pkt.size; |
1710 | *pkt = pkt1->pkt; |
1711 | av_free(pkt1); |
1712 | ret = 1; |
1713 | break; |
1714 | } else if (!block) { |
1715 | ret = 0; |
1716 | break; |
1717 | } else { |
1718 | SDL_CondWait(q->cond, q->mutex); |
1719 | } |
1720 | } |
1721 | SDL_UnlockMutex(q->mutex); |
1722 | return ret; |
1723 | } |
1724 | |
1725 | double get_audio_clock(VideoState *is) { |
1726 | double pts; |
1727 | int hw_buf_size, bytes_per_sec, n; |
1728 | |
1729 | pts = is->audio_clock; /* maintained in the audio thread */ |
1730 | hw_buf_size = is->audio_buf_size - is->audio_buf_index; |
1731 | bytes_per_sec = 0; |
1732 | n = is->audio_ctx->channels * 2; |
1733 | if(is->audio_st) { |
1734 | bytes_per_sec = is->audio_ctx->sample_rate * n; |
1735 | } |
1736 | if(bytes_per_sec) { |
1737 | pts -= (double)hw_buf_size / bytes_per_sec; /* 减去声卡缓冲区中尚未播出数据对应的时长 */ |
1738 | } |
1739 | return pts; |
1740 | } |
1741 | |
1742 | int audio_decode_frame(VideoState *is, uint8_t *audio_buf, int buf_size, double *pts_ptr) { |
1743 | |
1744 | int len1, data_size = 0; |
1745 | AVPacket *pkt = &is->audio_pkt; |
1746 | double pts; |
1747 | int n; |
1748 | |
1749 | for(;;) { |
1750 | while(is->audio_pkt_size > 0) { |
1751 | int got_frame = 0; |
1752 | len1 = avcodec_decode_audio4(is->audio_ctx, &is->audio_frame, &got_frame, pkt); |
1753 | if(len1 < 0) { |
1754 | /* if error, skip frame */ |
1755 | is->audio_pkt_size = 0; |
1756 | break; |
1757 | } |
1758 | data_size = 0; |
1759 | if(got_frame) { |
1760 | /* |
1761 | data_size = av_samples_get_buffer_size(NULL, |
1762 | is->audio_ctx->channels, |
1763 | is->audio_frame.nb_samples, |
1764 | is->audio_ctx->sample_fmt, |
1765 | 1); |
1766 | */ |
1767 | data_size = 2 * is->audio_frame.nb_samples * 2; /* 双声道 × 每样本2字节(S16) */ |
1768 | assert(data_size <= buf_size); |
1769 | |
1770 | swr_convert(is->audio_swr_ctx, |
1771 | &audio_buf, |
1772 | MAX_AUDIO_FRAME_SIZE*3/2, |
1773 | (const uint8_t **)is->audio_frame.data, |
1774 | is->audio_frame.nb_samples); |
1775 | |
1776 | //memcpy(audio_buf, is->audio_frame.data[0], data_size); |
1777 | } |
1778 | is->audio_pkt_data += len1; |
1779 | is->audio_pkt_size -= len1; |
1780 | if(data_size <= 0) { |
1781 | /* No data yet, get more frames */ |
1782 | continue; |
1783 | } |
1784 | pts = is->audio_clock; |
1785 | *pts_ptr = pts; |
1786 | n = 2 * is->audio_ctx->channels; |
1787 | is->audio_clock += (double)data_size / |
1788 | (double)(n * is->audio_ctx->sample_rate); |
1789 | /* We have data, return it and come back for more later */ |
1790 | return data_size; |
1791 | } |
1792 | if(pkt->data) |
1793 | av_free_packet(pkt); |
1794 | |
1795 | if(is->quit) { |
1796 | return -1; |
1797 | } |
1798 | /* next packet */ |
1799 | if(packet_queue_get(&is->audioq, pkt, 1) < 0) { |
1800 | return -1; |
1801 | } |
1802 | is->audio_pkt_data = pkt->data; |
1803 | is->audio_pkt_size = pkt->size; |
1804 | /* if update, update the audio clock w/pts */ |
1805 | if(pkt->pts != AV_NOPTS_VALUE) { |
1806 | is->audio_clock = av_q2d(is->audio_st->time_base)*pkt->pts; |
1807 | } |
1808 | } |
1809 | } |
1810 | |
1811 | void audio_callback(void *userdata, Uint8 *stream, int len) { |
1812 | |
1813 | VideoState *is = (VideoState *)userdata; |
1814 | int len1, audio_size; |
1815 | double pts; |
1816 | |
1817 | SDL_memset(stream, 0, len); |
1818 | |
1819 | while(len > 0) { |
1820 | if(is->audio_buf_index >= is->audio_buf_size) { |
1821 | /* We have already sent all our data; get more */ |
1822 | audio_size = audio_decode_frame(is, is->audio_buf, sizeof(is->audio_buf), &pts); |
1823 | if(audio_size < 0) { |
1824 | /* If error, output silence */ |
1825 | is->audio_buf_size = 1024 * 2 * 2; |
1826 | memset(is->audio_buf, 0, is->audio_buf_size); |
1827 | } else { |
1828 | is->audio_buf_size = audio_size; |
1829 | } |
1830 | is->audio_buf_index = 0; |
1831 | } |
1832 | len1 = is->audio_buf_size - is->audio_buf_index; |
1833 | if(len1 > len) |
1834 | len1 = len; |
1835 | SDL_MixAudio(stream,(uint8_t *)is->audio_buf + is->audio_buf_index, len1, SDL_MIX_MAXVOLUME); |
1836 | //memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1); |
1837 | len -= len1; |
1838 | stream += len1; |
1839 | is->audio_buf_index += len1; |
1840 | } |
1841 | } |
1842 | |
1843 | static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) { |
1844 | SDL_Event event; |
1845 | event.type = FF_REFRESH_EVENT; |
1846 | event.user.data1 = opaque; |
1847 | SDL_PushEvent(&event); |
1848 | return 0; /* 0 means stop timer */ |
1849 | } |
1850 | |
1851 | /* schedule a video refresh in 'delay' ms */ |
1852 | static void schedule_refresh(VideoState *is, int delay) { |
1853 | SDL_AddTimer(delay, sdl_refresh_timer_cb, is); |
1854 | } |
1855 | |
1856 | void video_display(VideoState *is) { |
1857 | |
1858 | SDL_Rect rect; |
1859 | VideoPicture *vp; |
1860 | float aspect_ratio; |
1861 | int w, h, x, y; |
1862 | int i; |
1863 | |
1864 | vp = &is->pictq[is->pictq_rindex]; |
1865 | if(vp->bmp) { |
1866 | |
1867 | SDL_UpdateYUVTexture( texture, NULL, |
1868 | vp->bmp->data[0], vp->bmp->linesize[0], |
1869 | vp->bmp->data[1], vp->bmp->linesize[1], |
1870 | vp->bmp->data[2], vp->bmp->linesize[2]); |
1871 | |
1872 | rect.x = 0; |
1873 | rect.y = 0; |
1874 | rect.w = is->video_ctx->width; |
1875 | rect.h = is->video_ctx->height; |
1876 | SDL_LockMutex(text_mutex); |
1877 | SDL_RenderClear( renderer ); |
1878 | SDL_RenderCopy( renderer, texture, NULL, &rect); |
1879 | SDL_RenderPresent( renderer ); |
1880 | SDL_UnlockMutex(text_mutex); |
1881 | |
1882 | } |
1883 | } |
1884 | |
1885 | void video_refresh_timer(void *userdata) { |
1886 | |
1887 | VideoState *is = (VideoState *)userdata; |
1888 | VideoPicture *vp; |
1889 | double actual_delay, delay, sync_threshold, ref_clock, diff; |
1890 | |
1891 | if(is->video_st) { |
1892 | if(is->pictq_size == 0) { |
1893 | schedule_refresh(is, 1); |
1894 | } else { |
1895 | vp = &is->pictq[is->pictq_rindex]; |
1896 | |
1897 | delay = vp->pts - is->frame_last_pts; /* the pts from last time */ |
1898 | if(delay <= 0 || delay >= 1.0) { |
1899 | /* if incorrect delay, use previous one */ |
1900 | delay = is->frame_last_delay; |
1901 | } |
1902 | /* save for next time */ |
1903 | is->frame_last_delay = delay; |
1904 | is->frame_last_pts = vp->pts; |
1905 | |
1906 | /* update delay to sync to audio */ |
1907 | ref_clock = get_audio_clock(is); |
1908 | diff = vp->pts - ref_clock; |
1909 | |
1910 | /* Skip or repeat the frame. Take delay into account |
1911 | FFPlay still doesn't "know if this is the best guess." */ |
1912 | sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay : AV_SYNC_THRESHOLD; |
1913 | if(fabs(diff) < AV_NOSYNC_THRESHOLD) { |
1914 | if(diff <= -sync_threshold) { |
1915 | delay = 0; |
1916 | } else if(diff >= sync_threshold) { |
1917 | delay = 2 * delay; |
1918 | } |
1919 | } |
1920 | is->frame_timer += delay; |
1921 | /* compute the REAL delay */ |
1922 | actual_delay = is->frame_timer - (av_gettime() / 1000000.0); |
1923 | if(actual_delay < 0.010) { |
1924 | /* Really it should skip the picture instead */ |
1925 | actual_delay = 0.010; |
1926 | } |
1927 | schedule_refresh(is, (int)(actual_delay * 1000 + 0.5)); |
1928 | |
1929 | /* show the picture! */ |
1930 | video_display(is); |
1931 | |
1932 | /* update queue for next picture! */ |
1933 | if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) { |
1934 | is->pictq_rindex = 0; |
1935 | } |
1936 | SDL_LockMutex(is->pictq_mutex); |
1937 | is->pictq_size--; |
1938 | SDL_CondSignal(is->pictq_cond); |
1939 | SDL_UnlockMutex(is->pictq_mutex); |
1940 | } |
1941 | } else { |
1942 | schedule_refresh(is, 100); |
1943 | } |
1944 | } |
1945 | |
1946 | void alloc_picture(void *userdata) { |
1947 | |
1948 | int ret = -1; |
1949 | |
1950 | VideoState *is = (VideoState *)userdata; |
1951 | VideoPicture *vp; |
1952 | |
1953 | vp = &is->pictq[is->pictq_windex]; |
1954 | if(vp->bmp) { |
1955 | |
1956 | // we already have one make another, bigger/smaller |
1957 | avpicture_free(vp->bmp); |
1958 | free(vp->bmp); |
1959 | |
1960 | vp->bmp = NULL; |
1961 | } |
1962 | |
1963 | // Allocate a place to put our YUV image on that screen |
1964 | SDL_LockMutex(text_mutex); |
1965 | vp->bmp = (AVPicture*)malloc(sizeof(AVPicture)); |
1966 | ret = avpicture_alloc(vp->bmp, AV_PIX_FMT_YUV420P, is->video_ctx->width, is->video_ctx->height); |
1967 | if (ret < 0) { |
1968 | fprintf(stderr, "Could not allocate temporary picture: %s\n", av_err2str(ret)); |
1969 | } |
1970 | |
1971 | SDL_UnlockMutex(text_mutex); |
1972 | |
1973 | vp->width = is->video_ctx->width; |
1974 | vp->height = is->video_ctx->height; |
1975 | vp->allocated = 1; |
1976 | |
1977 | } |
1978 | |
1979 | int queue_picture(VideoState *is, AVFrame *pFrame, double pts) { |
1980 | |
1981 | VideoPicture *vp; |
1982 | |
1983 | /* wait until we have space for a new pic */ |
1984 | SDL_LockMutex(is->pictq_mutex); |
1985 | while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE && |
1986 | !is->quit) { |
1987 | SDL_CondWait(is->pictq_cond, is->pictq_mutex); |
1988 | } |
1989 | SDL_UnlockMutex(is->pictq_mutex); |
1990 | |
1991 | if(is->quit) |
1992 | return -1; |
1993 | |
1994 | // windex is set to 0 initially |
1995 | vp = &is->pictq[is->pictq_windex]; |
1996 | |
1997 | /* allocate or resize the buffer! */ |
1998 | if(!vp->bmp || |
1999 | vp->width != is->video_ctx->width || |
2000 | vp->height != is->video_ctx->height) { |
2001 | |
2002 | vp->allocated = 0; |
2003 | alloc_picture(is); |
2004 | if(is->quit) { |
2005 | return -1; |
2006 | } |
2007 | } |
2008 | |
2009 | /* We have a place to put our picture on the queue */ |
2010 | if(vp->bmp) { |
2011 | |
2012 | vp->pts = pts; |
2013 | |
2014 | // Convert the image into YUV format that SDL uses |
2015 | sws_scale(is->video_sws_ctx, (uint8_t const * const *)pFrame->data, |
2016 | pFrame->linesize, 0, is->video_ctx->height, |
2017 | vp->bmp->data, vp->bmp->linesize); |
2018 | |
2019 | /* now we inform our display thread that we have a pic ready */ |
2020 | if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) { |
2021 | is->pictq_windex = 0; |
2022 | } |
2023 | SDL_LockMutex(is->pictq_mutex); |
2024 | is->pictq_size++; |
2025 | SDL_UnlockMutex(is->pictq_mutex); |
2026 | } |
2027 | return 0; |
2028 | } |
2029 | |
2030 | double synchronize_video(VideoState *is, AVFrame *src_frame, double pts) { |
2031 | |
2032 | double frame_delay; |
2033 | |
2034 | if(pts != 0) { |
2035 | /* if we have pts, set video clock to it */ |
2036 | is->video_clock = pts; |
2037 | } else { |
2038 | /* if we aren't given a pts, set it to the clock */ |
2039 | pts = is->video_clock; |
2040 | } |
2041 | /* update the video clock */ |
2042 | frame_delay = av_q2d(is->video_ctx->time_base); |
2043 | /* if we are repeating a frame, adjust clock accordingly */ |
2044 | frame_delay += src_frame->repeat_pict * (frame_delay * 0.5); |
2045 | is->video_clock += frame_delay; |
2046 | return pts; |
2047 | } |
2048 | |
2049 | int decode_video_thread(void *arg) { |
2050 | VideoState *is = (VideoState *)arg; |
2051 | AVPacket pkt1, *packet = &pkt1; |
2052 | int frameFinished; |
2053 | AVFrame *pFrame; |
2054 | double pts; |
2055 | |
2056 | pFrame = av_frame_alloc(); |
2057 | |
2058 | for(;;) { |
2059 | if(packet_queue_get(&is->videoq, packet, 1) < 0) { |
2060 | // means we quit getting packets |
2061 | break; |
2062 | } |
2063 | pts = 0; |
2064 | |
2065 | // Decode video frame |
2066 | avcodec_decode_video2(is->video_ctx, pFrame, &frameFinished, packet); |
2067 | |
2068 | if((pts = av_frame_get_best_effort_timestamp(pFrame)) == AV_NOPTS_VALUE) { |
2069 | pts = 0; |
2070 | } |
2071 | pts *= av_q2d(is->video_st->time_base); |
2072 | |
2073 | // Did we get a video frame? |
2074 | if(frameFinished) { |
2075 | pts = synchronize_video(is, pFrame, pts); |
2076 | if(queue_picture(is, pFrame, pts) < 0) { |
2077 | break; |
2078 | } |
2079 | } |
2080 | av_free_packet(packet); |
2081 | } |
2082 | av_frame_free(&pFrame); |
2083 | return 0; |
2084 | } |
2085 | |
2086 | int stream_component_open(VideoState *is, int stream_index) { |
2087 | |
2088 | AVFormatContext *pFormatCtx = is->pFormatCtx; |
2089 | AVCodecContext *codecCtx = NULL; |
2090 | AVCodec *codec = NULL; |
2091 | SDL_AudioSpec wanted_spec, spec; |
2092 | |
2093 | if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) { |
2094 | return -1; |
2095 | } |
2096 | |
2097 | codecCtx = avcodec_alloc_context3(NULL); |
2098 | |
2099 | int ret = avcodec_parameters_to_context(codecCtx, pFormatCtx->streams[stream_index]->codecpar); |
2100 | if (ret < 0) |
2101 | return -1; |
2102 | |
2103 | codec = avcodec_find_decoder(codecCtx->codec_id); |
2104 | if(!codec) { |
2105 | fprintf(stderr, "Unsupported codec!\n"); |
2106 | return -1; |
2107 | } |
2108 | |
2109 | |
2110 | if(codecCtx->codec_type == AVMEDIA_TYPE_AUDIO) { |
2111 | |
2112 | // Set audio settings from codec info |
2113 | wanted_spec.freq = codecCtx->sample_rate; |
2114 | wanted_spec.format = AUDIO_S16SYS; |
2115 | wanted_spec.channels = 2;//codecCtx->channels; |
2116 | wanted_spec.silence = 0; |
2117 | wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE; |
2118 | wanted_spec.callback = audio_callback; |
2119 | wanted_spec.userdata = is; |
2120 | |
2121 | if(SDL_OpenAudio(&wanted_spec, &spec) < 0) { |
2122 | fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError()); |
2123 | return -1; |
2124 | } |
2125 | is->audio_hw_buf_size = spec.size; |
2126 | } |
2127 | if(avcodec_open2(codecCtx, codec, NULL) < 0) { |
2128 | fprintf(stderr, "Unsupported codec!\n"); |
2129 | return -1; |
2130 | } |
2131 | |
2132 | switch(codecCtx->codec_type) { |
2133 | case AVMEDIA_TYPE_AUDIO: |
2134 | is->audioStream = stream_index; |
2135 | is->audio_st = pFormatCtx->streams[stream_index]; |
2136 | is->audio_ctx = codecCtx; |
2137 | is->audio_buf_size = 0; |
2138 | is->audio_buf_index = 0; |
2139 | memset(&is->audio_pkt, 0, sizeof(is->audio_pkt)); |
2140 | packet_queue_init(&is->audioq); |
2141 | |
2142 | //Out Audio Param |
2143 | uint64_t out_channel_layout=AV_CH_LAYOUT_STEREO; |
2144 | |
2145 | //AAC:1024 MP3:1152 |
2146 | int out_nb_samples= is->audio_ctx->frame_size; |
2147 | //AVSampleFormat out_sample_fmt = AV_SAMPLE_FMT_S16; |
2148 | |
2149 | int out_sample_rate=is->audio_ctx->sample_rate; |
2150 | int out_channels=av_get_channel_layout_nb_channels(out_channel_layout); |
2151 | //Out Buffer Size |
2152 | /* |
2153 | int out_buffer_size=av_samples_get_buffer_size(NULL, |
2154 | out_channels, |
2155 | out_nb_samples, |
2156 | AV_SAMPLE_FMT_S16, |
2157 | 1); |
2158 | */ |
2159 | |
2160 | //uint8_t *out_buffer=(uint8_t *)av_malloc(MAX_AUDIO_FRAME_SIZE*2); |
2161 | int64_t in_channel_layout=av_get_default_channel_layout(is->audio_ctx->channels); |
2162 | |
2163 | struct SwrContext *audio_convert_ctx; |
2164 | audio_convert_ctx = swr_alloc(); |
2165 | swr_alloc_set_opts(audio_convert_ctx, |
2166 | out_channel_layout, |
2167 | AV_SAMPLE_FMT_S16, |
2168 | out_sample_rate, |
2169 | in_channel_layout, |
2170 | is->audio_ctx->sample_fmt, |
2171 | is->audio_ctx->sample_rate, |
2172 | 0, |
2173 | NULL); |
2174 | fprintf(stderr, "swr opts: out_channel_layout:%lld, out_sample_fmt:%d, out_sample_rate:%d, in_channel_layout:%lld, in_sample_fmt:%d, in_sample_rate:%d", |
2175 | out_channel_layout, AV_SAMPLE_FMT_S16, out_sample_rate, in_channel_layout, is->audio_ctx->sample_fmt, is->audio_ctx->sample_rate); |
2176 | swr_init(audio_convert_ctx); |
2177 | |
2178 | is->audio_swr_ctx = audio_convert_ctx; |
2179 | |
2180 | SDL_PauseAudio(0); |
2181 | break; |
2182 | case AVMEDIA_TYPE_VIDEO: |
2183 | is->videoStream = stream_index; |
2184 | is->video_st = pFormatCtx->streams[stream_index]; |
2185 | is->video_ctx = codecCtx; |
2186 | |
2187 | is->frame_timer = (double)av_gettime() / 1000000.0; |
2188 | is->frame_last_delay = 40e-3; |
2189 | |
2190 | packet_queue_init(&is->videoq); |
2191 | is->video_sws_ctx = sws_getContext(is->video_ctx->width, is->video_ctx->height, |
2192 | is->video_ctx->pix_fmt, is->video_ctx->width, |
2193 | is->video_ctx->height, AV_PIX_FMT_YUV420P, |
2194 | SWS_BILINEAR, NULL, NULL, NULL |
2195 | ); |
2196 | is->video_tid = SDL_CreateThread(decode_video_thread, "decode_video_thread", is); |
2197 | break; |
2198 | default: |
2199 | break; |
2200 | } |
2201 | return 0; /* 补上返回值 */ |
2202 | } |
2203 | int demux_thread(void *arg) { |
2204 | |
2205 | Uint32 pixformat; |
2206 | |
2207 | VideoState *is = (VideoState *)arg; |
2208 | AVFormatContext *pFormatCtx = NULL; // 必须初始化为NULL,否则avformat_open_input读到野指针 |
2209 | AVPacket pkt1, *packet = &pkt1; |
2210 | |
2211 | int video_index = -1; |
2212 | int audio_index = -1; |
2213 | int i; |
2214 | |
2215 | is->videoStream=-1; |
2216 | is->audioStream=-1; |
2217 | |
2218 | global_video_state = is; |
2219 | |
2220 | // Open video file |
2221 | if(avformat_open_input(&pFormatCtx, is->filename, NULL, NULL)!=0) |
2222 | return -1; // Couldn't open file |
2223 | |
2224 | is->pFormatCtx = pFormatCtx; |
2225 | |
2226 | // Retrieve stream information |
2227 | if(avformat_find_stream_info(pFormatCtx, NULL)<0) |
2228 | return -1; // Couldn't find stream information |
2229 | |
2230 | // Dump information about file onto standard error |
2231 | av_dump_format(pFormatCtx, 0, is->filename, 0); |
2232 | |
2233 | // Find the first video stream |
2234 | for(i=0; i<pFormatCtx->nb_streams; i++) { |
2235 | if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO && |
2236 | video_index < 0) { |
2237 | video_index=i; |
2238 | } |
2239 | if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO && |
2240 | audio_index < 0) { |
2241 | audio_index=i; |
2242 | } |
2243 | } |
2244 | |
2245 | if(audio_index >= 0) { |
2246 | stream_component_open(is, audio_index); |
2247 | } |
2248 | if(video_index >= 0) { |
2249 | stream_component_open(is, video_index); |
2250 | } |
2251 | |
2252 | if(is->videoStream < 0 || is->audioStream < 0) { |
2253 | fprintf(stderr, "%s: could not open codecs\n", is->filename); |
2254 | goto fail; |
2255 | } |
2256 | |
2257 | win = SDL_CreateWindow("Media Player", |
2258 | SDL_WINDOWPOS_UNDEFINED, |
2259 | SDL_WINDOWPOS_UNDEFINED, |
2260 | is->video_ctx->width, is->video_ctx->height, |
2261 | SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE); |
2262 | |
2263 | renderer = SDL_CreateRenderer(win, -1, 0); |
2264 | |
2265 | pixformat = SDL_PIXELFORMAT_IYUV; |
2266 | texture = SDL_CreateTexture(renderer, |
2267 | pixformat, |
2268 | SDL_TEXTUREACCESS_STREAMING, |
2269 | is->video_ctx->width, |
2270 | is->video_ctx->height); |
2271 | |
2272 | // main decode loop |
2273 | |
2274 | for(;;) { |
2275 | |
2276 | if(is->quit) { |
2277 | SDL_CondSignal(is->videoq.cond); |
2278 | SDL_CondSignal(is->audioq.cond); |
2279 | break; |
2280 | } |
2281 | // seek stuff goes here |
2282 | if(is->audioq.size > MAX_AUDIOQ_SIZE || |
2283 | is->videoq.size > MAX_VIDEOQ_SIZE) { |
2284 | SDL_Delay(10); |
2285 | continue; |
2286 | } |
2287 | if(av_read_frame(is->pFormatCtx, packet) < 0) { |
2288 | if(is->pFormatCtx->pb->error == 0) { |
2289 | SDL_Delay(100); /* no error; wait for user input */ |
2290 | continue; |
2291 | } else { |
2292 | break; |
2293 | } |
2294 | } |
2295 | // Is this a packet from the video stream? |
2296 | if(packet->stream_index == is->videoStream) { |
2297 | packet_queue_put(&is->videoq, packet); |
2298 | } else if(packet->stream_index == is->audioStream) { |
2299 | packet_queue_put(&is->audioq, packet); |
2300 | } else { |
2301 | av_free_packet(packet); |
2302 | } |
2303 | } |
2304 | /* all done - wait for it */ |
2305 | while(!is->quit) { |
2306 | SDL_Delay(100); |
2307 | } |
2308 | |
2309 | fail: |
2310 | if(1){ |
2311 | SDL_Event event; |
2312 | event.type = FF_QUIT_EVENT; |
2313 | event.user.data1 = is; |
2314 | SDL_PushEvent(&event); |
2315 | } |
2316 | return 0; |
2317 | } |
2318 | |
2319 | int main(int argc, char *argv[]) { |
2320 | |
2321 | int ret = -1; |
2322 | |
2323 | SDL_Event event; |
2324 | |
2325 | VideoState *is; |
2326 | |
2327 | is = av_mallocz(sizeof(VideoState)); |
2328 | |
2329 | if(argc < 2) { |
2330 | fprintf(stderr, "Usage: test <file>\n"); |
2331 | exit(1); |
2332 | } |
2333 | // Register all formats and codecs |
2334 | av_register_all(); |
2335 | |
2336 | if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) { |
2337 | fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError()); |
2338 | exit(1); |
2339 | } |
2340 | |
2341 | text_mutex = SDL_CreateMutex(); |
2342 | |
2343 | av_strlcpy(is->filename, argv[1], sizeof(is->filename)); |
2344 | |
2345 | is->pictq_mutex = SDL_CreateMutex(); |
2346 | is->pictq_cond = SDL_CreateCond(); |
2347 | |
2348 | schedule_refresh(is, 40); |
2349 | |
2350 | is->parse_tid = SDL_CreateThread(demux_thread, "demux_thread", is); |
2351 | if(!is->parse_tid) { |
2352 | av_free(is); |
2353 | goto __FAIL; |
2354 | } |
2355 | for(;;) { |
2356 | |
2357 | SDL_WaitEvent(&event); |
2358 | switch(event.type) { |
2359 | case FF_QUIT_EVENT: |
2360 | case SDL_QUIT: |
2361 | is->quit = 1; |
2362 | //SDL_Quit(); |
2363 | //return 0; |
2364 | goto __QUIT; |
2365 | break; |
2366 | case FF_REFRESH_EVENT: |
2367 | video_refresh_timer(event.user.data1); |
2368 | break; |
2369 | default: |
2370 | break; |
2371 | } |
2372 | } |
2373 | |
2374 | __QUIT: |
2375 | ret = 0; |
2376 | |
2377 | __FAIL: |
2378 | |
2379 | SDL_Quit(); |
2380 | /* |
2381 | if(audiofd){ |
2382 | fclose(audiofd); |
2383 | } |
2384 | if(audiofd1){ |
2385 | fclose(audiofd1); |
2386 | } |
2387 | */ |
2388 | return ret; |
2389 | |
2390 | } |
2391 | ``` |
2392 | |
2393 | |
2394 | #### 55.Android中使用FFmpeg |
2395 | |
2396 | - Java与C之间的相互调用 |
2397 | - Android下FFmpeg的编译 |
2398 | - Android下如何使用FFmpeg |
2399 | - JNI基本概念 |
2400 | - JNIEnv Java本地化环境,C/C++要访问Java相关的代码都需要它 |
2401 | - JavaVM 一个进程对应一个JavaVM,用于获取JNIEnv;一个JavaVM里可以有很多线程,一个线程对应一个JNIEnv |
2402 | - 线程:C/C++自行创建的线程需先通过 AttachCurrentThread 挂接到 JavaVM,才能获得自己的 JNIEnv |
2403 | - [Java调用C/C++方法一](#Java调用C/C++) |
2404 | - 在Java层定义native关键字函数 |
2405 | - 在C/C++层创建`Java_packname_classname_methodname`函数 |
2406 | - [Java调用C/C++方法二](#Java调用C/C++) |
2407 | - 在Java层定义native关键字函数 |
2408 | - [RegisterNatives](#方法二的定义) |
2409 | - jint JNI_OnLoad(JavaVM *vm, void* reserved)【so加载时调用,在这里RegisterNatives】 |
2410 | - jint JNI_OnUnload(JavaVM *vm, void* reserved)【so卸载时调用】 |
2411 | - 什么是Signature |
2412 | - Java与C/C++相互调用时,用于描述函数参数的描述符【可以理解为映射表的key】 |
2413 | - 输入参数放在()内,输出参数放在()外 |
2414 | - 多个参数按顺序依次排列,参数之间没有分隔符;引用类型的描述符本身以`;`结尾 |
2415 | - 原始类型的Signature |
2416 | |
2417 | Java类型 | 符号 |
2418 | ---|--- |
2419 | boolean | Z |
2420 | byte | B |
2421 | char | C |
2422 | short | S |
2423 | int | I |
2424 | long | J |
2425 | float | F |
2426 | double | D |
2427 | void | V |
2428 | |
2429 | - 类的Signature(完整的方法签名例子见下方示例) |
2430 | - Java对象参数`L包路径/类名;`(描述符以分号结尾) |
2431 | - Java数组`[` |
2432 | - `([LStudent;)[LStudent;` ==> Student[] Xxx(Student[] s) |
2433 | - `([Ljava/lang/String;)[Ljava/lang/Object;` ==> Object[] Xxx(String[] s) |
2434 | |
2435 | - [C/C++调Java方法](#C/C++调Java方法) |
2436 | - 1. FindClass 获取Java中的类 |
2437 | - 2. GetMethodID / GetFieldID 获取Java类中指定方法/属性的ID |
2438 | - 3. NewObject 创建Java对象 |
2439 | - 4. Call<TYPE>Method / [G/S]et<Type>Field 调用方法 / 读写属性(属性读写的示意见本节C/C++代码之后) |
2440 | |
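结合上表再看几个完整的方法签名(注意:long 对应的是 `J` 而不是 `L`,引用类型描述符要以分号结尾;下面的类名、方法名仅为示意):

```cpp
// Java:  long calc(int n, String s)    ->  (ILjava/lang/String;)J
// Java:  boolean check(Student stu)    ->  (Lcom/muziyu/apple/firstjni/Student;)Z
// Java:  Object[] conv(String[] arr)   ->  ([Ljava/lang/String;)[Ljava/lang/Object;

// 在C/C++里按签名查找方法时用的就是这串描述符(clazz为事先FindClass得到的类):
jmethodID mid = env->GetMethodID(clazz, "calc", "(ILjava/lang/String;)J");
```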
2441 | |
2442 | ###### 方法二的定义 |
2443 | |
2444 | ```c |
2445 | typedef struct { |
2446 | const char* name; // 与Java定义的name同名 |
2447 | const char* signature; // 标注输入输出参数 |
2448 | void* fnPtr; // 具体API |
2449 | }JNINativeMethod; |
2450 | ``` |

###### Java调用C/C++

- C++端代码

```cpp
#include <jni.h>
#include <string>

// 假设的类路径宏:指向Java端的MainActivity(原稿未给出定义,按包名补全)
#define JNI_CLASS_PATH "com/muziyu/apple/firstjni/MainActivity"

extern "C" JNIEXPORT jstring JNICALL
Java_com_muziyu_apple_firstjni_MainActivity_stringFromJNI(
        JNIEnv *env,
        jobject /* this */) {
    std::string hello = "Hello from C++";
    return env->NewStringUTF(hello.c_str());
}

/// 方法一
extern "C"
JNIEXPORT jstring JNICALL
Java_com_muziyu_apple_firstjni_MainActivity_mStringFromJNI(JNIEnv *env, jobject thiz, jstring str) {
    const char *mStr = env->GetStringUTFChars(str, 0);
    jstring result = env->NewStringUTF(mStr); // 先用内容生成新的Java字符串
    env->ReleaseStringUTFChars(str, mStr);    // 再释放,避免使用已释放的内存
    return result;
}

/// 方法二
extern "C"
JNIEXPORT jstring JNICALL
my_test_register(JNIEnv *env, jobject thiz) { /// 实现一个Native对应的函数
    return env->NewStringUTF("This is a test of register!");
}

/// 映射表:第一个参数是Java端定义的Native方法名,第二个是方法签名(输入输出参数),第三个是对应的C/C++函数
static JNINativeMethod g_methods[] = {
    { "_test", "()Ljava/lang/String;", (void*)my_test_register },
};

jint JNI_OnLoad(JavaVM *vm, void *reserved) {
    JNIEnv *env = NULL;
    vm->GetEnv((void**)&env, JNI_VERSION_1_6);     /// 获取Java虚拟机环境
    jclass clazz = env->FindClass(JNI_CLASS_PATH); /// 根据Java端的类路径找到这个类
    env->RegisterNatives(clazz, g_methods, sizeof(g_methods) / sizeof(g_methods[0])); /// 在Java虚拟机里建立C/C++到Java的映射关系
    return JNI_VERSION_1_6;
}
```
- Java端代码

```java
package com.muziyu.apple.firstjni;

import androidx.appcompat.app.AppCompatActivity;

import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {

    // Used to load the 'native-lib' library on application startup.
    static {
        System.loadLibrary("native-lib");
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Example of a call to a native method
        TextView tv = findViewById(R.id.sample_text);
        String hello = stringFromJNI() + mStringFromJNI("ABC") + ' ' + _test();
        tv.setText(hello);
    }

    /**
     * A native method that is implemented by the 'native-lib' native library,
     * which is packaged with this application.
     */
    public native String stringFromJNI();

    // 方法一
    public native String mStringFromJNI(String str);

    // 方法二
    public native String _test();
}
```
###### C/C++调Java方法

- Java主入口
1 | package com.muziyu.apple.firstjni; |
2 | |
3 | import androidx.appcompat.app.AppCompatActivity; |
4 | |
5 | import android.os.Bundle; |
6 | import android.widget.TextView; |
7 | |
8 | public class MainActivity extends AppCompatActivity { |
9 | |
10 | // Used to load the 'native-lib' library on application startup. |
11 | static { |
12 | System.loadLibrary("native-lib"); |
13 | } |
14 | |
15 | |
16 | protected void onCreate(Bundle savedInstanceState) { |
17 | super.onCreate(savedInstanceState); |
18 | setContentView(R.layout.activity_main); |
19 | |
20 | // Example of a call to a native method |
21 | TextView tv = findViewById(R.id.sample_text); |
22 | String hello = mTest(); |
23 | tv.setText(hello); |
24 | } |
25 | |
26 | /** |
27 | * A native method that is implemented by the 'native-lib' native library, |
28 | * which is packaged with this application. |
29 | */ |
30 | // C/C++调用Java |
31 | public native String mTest(); |
32 | |
33 | } |
- Java外部类
1 | package com.muziyu.apple.firstjni; |
2 | |
3 | public class Student { |
4 | private int year; |
5 | |
6 | public int getYear() { |
7 | return year; |
8 | } |
9 | |
10 | public void setYear(int year) { |
11 | this.year = year; |
12 | } |
13 | |
14 | } |
- C/C++代码
```cpp
#include <jni.h>
#include <string>
#include <cstdio>

// 假设的类路径宏:指向Java端的Student类(原稿未给出定义,按包名补全)
#define JNI_CLAZZ_PATH "com/muziyu/apple/firstjni/Student"

// C/C++调用Java
extern "C"
JNIEXPORT jstring JNICALL
Java_com_muziyu_apple_firstjni_MainActivity_mTest(JNIEnv *env, jobject thiz) {
    /// 第一步:获取Java类
    jclass clazz = env->FindClass(JNI_CLAZZ_PATH);
    /// 第二步:获取Java类中的方法和属性
    jmethodID method_init_id = env->GetMethodID(clazz, "<init>", "()V");
    jmethodID method_set_id = env->GetMethodID(clazz, "setYear", "(I)V");
    jmethodID method_get_id = env->GetMethodID(clazz, "getYear", "()I");
    /// 第三步:生成一个新的对象
    jobject obj = env->NewObject(clazz, method_init_id);
    /// 第四步:调用Java中的方法
    env->CallVoidMethod(obj, method_set_id, 18);
    int year = env->CallIntMethod(obj, method_get_id);

    char tmp[50];
    sprintf(tmp, "%d", year);
    std::string hello = "Hello from C++, year=";
    hello.append(tmp);
    return env->NewStringUTF(hello.c_str());
}
```
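上面第4步提到的 [G/S]et<Type>Field 还可以不经过 getter/setter 直接存取属性,补一个示意(沿用上面代码中的 clazz 与 obj):

```cpp
jfieldID year_id = env->GetFieldID(clazz, "year", "I"); // Student中的int字段year,签名为"I"
env->SetIntField(obj, year_id, 20);                     // 直接写属性
jint y = env->GetIntField(obj, year_id);                // 读回,y == 20
```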
#### 56.Android下的播放器

- CMakeLists.txt
1 | # 设置CMake版本 |
2 | cmake_minimum_required(VERSION 3.4.1) |
3 | |
4 | # 新增的Lib库:第一个参数是自己定义的名字,第二参数是指定是动态库(.so)还是静态库(.a),第三个参数是Lib的路径 |
5 | add_library( |
6 | native-lib |
7 | SHARED |
8 | native-lib.cpp) |
9 | |
10 | # 指定ffmpeg编译好之后的Lib库目录 |
11 | set(JNI_LIBS_DIR ${CMAKE_SOURCE_DIR}/src/main/jniLibs) |
12 | |
13 | # 在系统下找对应的Lib库 |
14 | find_library( |
15 | log-lib |
16 | log) |
17 | find_library( |
18 | android-lib |
19 | android) |
20 | |
21 | # 引入FFmpeg中的libavutil(预编译好的.so要声明为 SHARED IMPORTED,而不是再去编译native-lib.cpp) |
22 | add_library( |
23 | avutil |
24 | SHARED |
25 | IMPORTED) |
26 | |
27 | set_target_properties( |
28 | avutil |
29 | PROPERTIES IMPORTED_LOCATION |
30 | ${JNI_LIBS_DIR}/${ANDROID_ABI}/libavutil.so) |
31 | |
32 | # 引入FFmpeg中的libswresample |
33 | add_library( |
34 | swresample |
35 | SHARED |
36 | IMPORTED) |
37 | |
38 | set_target_properties( |
39 | swresample |
40 | PROPERTIES IMPORTED_LOCATION |
41 | ${JNI_LIBS_DIR}/${ANDROID_ABI}/libswresample.so) |
42 | |
43 | # 引入FFmpeg中的libswscale |
44 | add_library( |
45 | swscale |
46 | SHARED |
47 | IMPORTED) |
48 | |
49 | set_target_properties( |
50 | swscale |
51 | PROPERTIES IMPORTED_LOCATION |
52 | ${JNI_LIBS_DIR}/${ANDROID_ABI}/libswscale.so) |
53 | |
54 | # 引入FFmpeg中的libavcodec |
55 | add_library( |
56 | avcodec |
57 | SHARED |
58 | IMPORTED) |
59 | |
60 | set_target_properties( |
61 | avcodec |
62 | PROPERTIES IMPORTED_LOCATION |
63 | ${JNI_LIBS_DIR}/${ANDROID_ABI}/libavcodec.so) |
64 | |
65 | # 引入FFmpeg中的libavformat |
66 | add_library( |
67 | avformat |
68 | SHARED |
69 | IMPORTED) |
70 | |
71 | set_target_properties( |
72 | avformat |
73 | PROPERTIES IMPORTED_LOCATION |
74 | ${JNI_LIBS_DIR}/${ANDROID_ABI}/libavformat.so) |
75 | |
76 | # 引入FFmpeg中的libavfilter |
77 | add_library( |
78 | avfilter |
79 | SHARED |
80 | IMPORTED) |
81 | |
82 | set_target_properties( |
83 | avfilter |
84 | PROPERTIES IMPORTED_LOCATION |
85 | ${JNI_LIBS_DIR}/${ANDROID_ABI}/libavfilter.so) |
86 | |
87 | # 引入FFmpeg中的libavdevice |
88 | add_library( |
89 | avdevice |
90 | SHARED |
91 | IMPORTED) |
92 | |
93 | set_target_properties( |
94 | avdevice |
95 | PROPERTIES IMPORTED_LOCATION |
96 | ${JNI_LIBS_DIR}/${ANDROID_ABI}/libavdevice.so) |
97 | |
98 | # 设置第三方lib库的头文件路径 |
99 | include_directories(${JNI_LIBS_DIR}/includes) |
100 | |
101 | # 链接所有Lib库 |
102 | target_link_libraries( |
103 | native-lib |
104 | avutil swresample swscale avcodec avformat avfilter avdevice |
105 | ${log-lib} ${android-lib}) |
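链接是否成功,可以在 native-lib.cpp 里调用一个FFmpeg的API快速验证。下面是一个最小示意(假设沿用前面JNI示例的包名与Activity;avcodec_configuration() 返回FFmpeg的编译配置字符串):

```cpp
#include <jni.h>
#include <string>

extern "C" {                        // FFmpeg是C库,C++中引用头文件需加extern "C"
#include "libavcodec/avcodec.h"
}

extern "C" JNIEXPORT jstring JNICALL
Java_com_muziyu_apple_firstjni_MainActivity_stringFromJNI(JNIEnv *env, jobject /* this */) {
    std::string info = avcodec_configuration(); // 能返回配置串,说明libavcodec已正确链接
    return env->NewStringUTF(info.c_str());
}
```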
#### 57.iOS下使用FFmpeg
- build-ffmpeg.sh

```sh
# directories
FF_VERSION="4.2"
#FF_VERSION="snapshot-git"
if [[ $FFMPEG_VERSION != "" ]]; then
FF_VERSION=$FFMPEG_VERSION
fi
SOURCE="ffmpeg-$FF_VERSION"
FAT="FFmpeg-iOS"
SCRATCH="scratch"
# must be an absolute path
THIN=`pwd`/"thin"
# absolute path to x264 library
#X264=`pwd`/fat-x264
#FDK_AAC=`pwd`/../fdk-aac-build-script-for-iOS/fdk-aac-ios
CONFIGURE_FLAGS="--enable-cross-compile --disable-debug --disable-programs \
--disable-doc --enable-pic"
if [ "$X264" ]
then
CONFIGURE_FLAGS="$CONFIGURE_FLAGS --enable-gpl --enable-libx264"
fi
if [ "$FDK_AAC" ]
then
CONFIGURE_FLAGS="$CONFIGURE_FLAGS --enable-libfdk-aac --enable-nonfree"
fi
# avresample
#CONFIGURE_FLAGS="$CONFIGURE_FLAGS --enable-avresample"
ARCHS="arm64 armv7 x86_64 i386"
COMPILE="y"
LIPO="y"
DEPLOYMENT_TARGET="8.0"
if [ "$*" ]
then
if [ "$*" = "lipo" ]
then
# skip compile
COMPILE=
else
ARCHS="$*"
if [ $# -eq 1 ]
then
# skip lipo
LIPO=
fi
fi
fi
if [ "$COMPILE" ]
then
if [ ! `which yasm` ]
then
echo 'Yasm not found'
if [ ! `which brew` ]
then
echo 'Homebrew not found. Trying to install...'
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" \
|| exit 1
fi
echo 'Trying to install Yasm...'
brew install yasm || exit 1
fi
if [ ! `which gas-preprocessor.pl` ]
then
echo 'gas-preprocessor.pl not found. Trying to install...'
(curl -L https://github.com/libav/gas-preprocessor/raw/master/gas-preprocessor.pl \
-o /usr/local/bin/gas-preprocessor.pl \
&& chmod +x /usr/local/bin/gas-preprocessor.pl) \
|| exit 1
fi
if [ ! -r $SOURCE ]
then
echo 'FFmpeg source not found. Trying to download...'
curl http://www.ffmpeg.org/releases/$SOURCE.tar.bz2 | tar xj \
|| exit 1
fi
CWD=`pwd`
for ARCH in $ARCHS
do
echo "building $ARCH..."
mkdir -p "$SCRATCH/$ARCH"
cd "$SCRATCH/$ARCH"
CFLAGS="-arch $ARCH"
if [ "$ARCH" = "i386" -o "$ARCH" = "x86_64" ]
then
PLATFORM="iPhoneSimulator"
CFLAGS="$CFLAGS -mios-simulator-version-min=$DEPLOYMENT_TARGET"
else
PLATFORM="iPhoneOS"
CFLAGS="$CFLAGS -mios-version-min=$DEPLOYMENT_TARGET -fembed-bitcode"
if [ "$ARCH" = "arm64" ]
then
EXPORT="GASPP_FIX_XCODE5=1"
fi
fi
XCRUN_SDK=`echo $PLATFORM | tr '[:upper:]' '[:lower:]'`
CC="xcrun -sdk $XCRUN_SDK clang"
# force "configure" to use "gas-preprocessor.pl" (FFmpeg 3.3)
if [ "$ARCH" = "arm64" ]
then
AS="gas-preprocessor.pl -arch aarch64 -- $CC"
else
AS="gas-preprocessor.pl -- $CC"
fi
CXXFLAGS="$CFLAGS"
LDFLAGS="$CFLAGS"
if [ "$X264" ]
then
CFLAGS="$CFLAGS -I$X264/include"
LDFLAGS="$LDFLAGS -L$X264/lib"
fi
if [ "$FDK_AAC" ]
then
CFLAGS="$CFLAGS -I$FDK_AAC/include"
LDFLAGS="$LDFLAGS -L$FDK_AAC/lib"
fi
TMPDIR=${TMPDIR/%\/} $CWD/$SOURCE/configure \
--target-os=darwin \
--arch=$ARCH \
--cc="$CC" \
--as="$AS" \
$CONFIGURE_FLAGS \
--extra-cflags="$CFLAGS" \
--extra-ldflags="$LDFLAGS" \
--prefix="$THIN/$ARCH" \
|| exit 1
make -j3 install $EXPORT || exit 1
cd $CWD
done
fi
if [ "$LIPO" ]
then
echo "building fat binaries..."
mkdir -p $FAT/lib
set - $ARCHS
CWD=`pwd`
cd $THIN/$1/lib
for LIB in *.a
do
cd $CWD
echo lipo -create `find $THIN -name $LIB` -output $FAT/lib/$LIB 1>&2
lipo -create `find $THIN -name $LIB` -output $FAT/lib/$LIB || exit 1
done
cd $CWD
cp -rf $THIN/$1/include $FAT
fi
echo Done
```
#### 58.音视频进阶
- FFmpeg Filter的使用与开发
- FFmpeg裁剪与优化
- 视频渲染(OpenGL / Metal)
- 声音特效
- 网络传输
- WebRTC 在浏览器之间进行P2P的传输,视频会议
- AR技术
- OpenCV
- 回音消除
- 降噪
- 视频秒开
- 多人多视频实时互动