SDL2: Audio Processing
Overview
From the SDL wiki:
Simple DirectMedia Layer is a cross-platform development library designed to provide low level access to audio, keyboard, mouse, joystick, and graphics hardware via OpenGL/Direct3D/Metal/Vulkan. It is used by video playback software, emulators, and popular games.
SDL is written in C, works natively with C++, and there are bindings available for several other languages, including C# and Python.
Usage
Installation and build
$ sudo apt-get install libsdl2-dev
$ sdl2-config -h
Usage: /usr/bin/sdl2-config [--prefix[=DIR]] [--exec-prefix[=DIR]] [--version] [--cflags] [--libs] [--static-libs]
$ sdl2-config --cflags --libs --static-libs
-I/usr/include/SDL2 -D_REENTRANT
-lSDL2
-lSDL2 -Wl,--no-undefined -lm -ldl -lasound -lm -ldl -lpthread -lpulse-simple -lpulse -lX11 -lXext -lXcursor -lXinerama -lXi -lXrandr -lXss -lXxf86vm -lwayland-egl -lwayland-client -lwayland-cursor -lxkbcommon -lpthread -lrt
$ gcc -o ffplayer ffplayer.c -lavutil -lavformat -lavcodec -lswscale -lswresample -lSDL2
C++ library: libSDL2pp, a C++11 bindings/wrapper for SDL2.
Audio Processing
1. Opening the audio device
// Open the audio device. We ask for the parameters in desired; SDL returns
// the parameters the device actually supports in obtained (pass NULL for
// obtained if you do not care).
wanted_spec.freq = p_codec_ctx->sample_rate; // sample rate
wanted_spec.format = AUDIO_S16SYS; // S = signed, 16 = bits per sample, SYS = native byte order
wanted_spec.channels = p_codec_ctx->channels; // number of channels
wanted_spec.silence = 0; // silence value
wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE; // SDL audio buffer size, in sample frames (one frame = one sample per channel)
wanted_spec.callback = sdl_audio_callback; // callback; if NULL, the SDL_QueueAudio() mechanism should be used instead
wanted_spec.userdata = p_codec_ctx; // opaque pointer handed to the callback
// SDL_OpenAudio(), always acts on device ID 1
int SDL_OpenAudio(SDL_AudioSpec * desired, SDL_AudioSpec * obtained);
//Open a specific audio device. Passing in a device name of NULL requests
// the most reasonable default (and is equivalent to calling SDL_OpenAudio()).
//\return 0 on error, a valid device ID that is >= 2 on success.
SDL_AudioDeviceID SDL_OpenAudioDevice(const char *device, int iscapture,const SDL_AudioSpec *desired, SDL_AudioSpec *obtained, int allowed_changes);
The first parameter of SDL_OpenAudioDevice() is the audio device name; SDL_GetAudioDeviceName() maps a device index to its name. The following code prints the names of all devices:
int count = SDL_GetNumAudioDevices(0);
for (int i = 0; i < count; i++)
{
cout << "Audio device " << i << " : " << SDL_GetAudioDeviceName(i, 0) << endl;
}
- iscapture: pass 0 to open a playback (output) device; a non-zero value requests a capture device (supported since SDL 2.0.5).
- allowed_changes: the desired and obtained specs usually differ; this flag controls which parameters SDL may change when they do.
Pass 0 to forbid any change; otherwise OR together any of:
SDL_AUDIO_ALLOW_FREQUENCY_CHANGE
SDL_AUDIO_ALLOW_FORMAT_CHANGE
SDL_AUDIO_ALLOW_CHANNELS_CHANGE
SDL_AUDIO_ALLOW_ANY_CHANGE
SDL offers two ways of getting audio data to the device:
- a. pull: SDL calls the user callback at the device's pace and pulls audio data from it.
- b. push: the application calls SDL_QueueAudio() at its own pace to push data to the device; in this case wanted_spec.callback = NULL.
After the device is opened it plays silence and the callback is not yet running; calling SDL_PauseAudio(0) starts the callback and normal playback begins.
2. Configuring the audio data
Build the audio resampling parameters from the SDL audio spec. wanted_spec holds the parameters we ask for and actual_spec the parameters we actually get; both are SDL-side structures.
audio_param holds the FFmpeg-side parameters and must describe a format SDL can play, since the resampling step below relies on it. The format of a decoded audio frame is not necessarily one SDL supports: the frame may, for example, use a planar layout, which SDL2 does not accept, so feeding such a frame straight into the SDL audio buffer produces garbled sound. The frame must therefore first be resampled (format-converted) into an SDL-supported format and only then written into the SDL audio buffer.
3. Starting the audio callback
// Pause/resume audio callback processing. A pause_on of 1 pauses, 0 resumes.
// The callback is not running right after the device is opened; call SDL_PauseAudio(0) to start it.
// This lets you safely initialize the callback's data after opening the device and start the callback only once everything is ready.
// While paused, silence is written to the audio device.
void SDL_PauseAudio(int pause_on)
void SDL_PauseAudioDevice(SDL_AudioDeviceID dev, int pause_on)
4. The audio callback function
// Audio processing callback.
// Called by SDL whenever the device needs data. It does not run on the user's main thread, so shared data must be protected.
// \param[in] userdata the pointer registered together with the callback
// \param[out] stream address of the audio buffer; write the decoded audio data into it
// \param[in] len size of the audio buffer, in bytes
// Once the callback returns, the buffer pointed to by stream becomes invalid.
// Stereo samples are interleaved in LRLRLR order.
/* desired->callback should be set to a function that will be called
* when the audio device is ready for more data. It is passed a pointer
* to the audio buffer, and the length in bytes of the audio buffer.
* This function usually runs in a separate thread, and so you should
* protect data structures that it accesses by calling SDL_LockAudio()
* and SDL_UnlockAudio() in your code. Alternately, you may pass a NULL
* pointer here, and call SDL_QueueAudio() with some frequency, to queue
* more audio samples to be played (or for capture devices, call
* SDL_DequeueAudio() with some frequency, to obtain audio samples).
*\c desired->userdata is passed as the first parameter to your callback
 * function. If you passed a NULL callback, this value is ignored.
 */
/**
* This function is called when the audio device needs more data.
*
* \param userdata An application-specific parameter saved in
* the SDL_AudioSpec structure
* \param stream A pointer to the audio data buffer.
* \param len The length of that buffer in bytes.
*
* Once the callback returns, the buffer will no longer be valid.
* Stereo samples are stored in a LRLRLR ordering.
*
* You can choose to avoid callbacks and use SDL_QueueAudio() instead, if
* you like. Just open your audio device with a NULL callback.
*/
typedef void (SDLCALL * SDL_AudioCallback) (void *userdata, Uint8 * stream, int len);
Inside the callback you must first fill the stream buffer, and once the callback returns the buffer is no longer valid. Multi-channel audio data is stored interleaved in the buffer:
- stereo: LRLR
- quad: front-left / front-right / rear-left / rear-right
- 5.1: front-left / front-right / center / low-freq / rear-left / rear-right
Mixing audio data
#define SDL_MIX_MAXVOLUME 128
/**
* This takes two audio buffers of the playing audio format and mixes
* them, performing addition, volume adjustment, and overflow clipping.
* The volume ranges from 0 - 128, and should be set to ::SDL_MIX_MAXVOLUME
* for full audio volume. Note this does not change hardware volume.
* This is provided for convenience -- you can mix your own audio data.
*/
extern DECLSPEC void SDLCALL SDL_MixAudio(Uint8 * dst, const Uint8 * src, Uint32 len, int volume);
/**
* This works like SDL_MixAudio(), but you specify the audio format instead of
* using the format of audio device 1. Thus it can be used when no audio
* device is open at all.
*/
extern DECLSPEC void SDLCALL SDL_MixAudioFormat(Uint8 * dst, const Uint8 * src, SDL_AudioFormat format, Uint32 len, int volume);
Outside the callback, take the audio lock for mutually exclusive access to shared data:
/**
* \name Audio lock functions
*
* The lock manipulated by these functions protects the callback function.
* During a SDL_LockAudio()/SDL_UnlockAudio() pair, you can be guaranteed that
* the callback function is not running. Do not call these from the callback
* function or you will cause deadlock.
*/
extern DECLSPEC void SDLCALL SDL_LockAudio(void);
extern DECLSPEC void SDLCALL SDL_LockAudioDevice(SDL_AudioDeviceID dev);
extern DECLSPEC void SDLCALL SDL_UnlockAudio(void);
extern DECLSPEC void SDLCALL SDL_UnlockAudioDevice(SDL_AudioDeviceID dev);
5. Closing the device
extern DECLSPEC void SDLCALL SDL_CloseAudio(void);
extern DECLSPEC void SDLCALL SDL_CloseAudioDevice(SDL_AudioDeviceID dev);
Audio stream conversion
Before SDL 2.0.7 the only way to convert audio was the SDL_AudioCVT interface; SDL 2.0.7 introduced SDL_AudioStream.
The SDL_AudioStream structure is used to convert audio data between different formats in arbitrarily-sized blocks. It is meant to be a replacement for the SDL_AudioCVT-related interfaces.
SDL_NewAudioStream
SDL_AudioStreamPut
SDL_AudioStreamAvailable
SDL_AudioStreamGet
SDL_AudioStreamFlush
SDL_AudioStreamClear
SDL_FreeAudioStream
Examples
#ifdef __cplusplus
extern "C"
{
#endif
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <SDL2/SDL.h>
#include <alsa/asoundlib.h>
#ifdef __cplusplus
}
#endif
#include <atomic>        // std::atomic_bool / std::atomic_uint8_t
#include <ros/console.h> // ROS_INFO / ROS_ERROR logging macros
#define MAX_AUDIO_FRAME_SIZE 192000 // 1 second of 48khz 32bit audio
//48000 * (32/8)
unsigned int audioLen = 0;
unsigned char *audioPos = NULL;
unsigned char audioVolume = 0;
void fill_audio(void * udata, Uint8 * stream, int len)
{
SDL_memset(stream, 0, len);
if (audioLen == 0)
return;
len = (len>audioLen?audioLen:len);
SDL_MixAudio(stream,audioPos,len,audioVolume);
audioPos += len;
audioLen -= len;
}
int play_audio(char *file, std::atomic_bool *stop_flag, std::atomic_uint8_t *volume_adjust)
{
AVFormatContext *pavfc = NULL;
AVCodecContext *pavcc = NULL;
AVCodec *pavc = NULL;
AVPacket *pPkt = NULL;
AVFrame*pFrame = NULL;
struct SwrContext *pswrc = NULL;
SDL_AudioSpec wantSpec;
int ret = 0;
int audio_index = -1;
int i = 0;
enum AVSampleFormat out_sample_fmt = AV_SAMPLE_FMT_S16; // output sample format
uint64_t out_chn_layout = -1; // channel layout, taken from the audio file
int out_sample_rate = -1; // sample rate, taken from the audio file
int out_nb_samples = -1; // samples per frame, taken from the audio file
int out_channels = -1; // channel count, taken from the audio file
int out_buffer_size = -1; // output buffer size
int out_convert_size = -1;
unsigned char *outBuff = NULL;
uint64_t in_chn_layout = -1; // input channel layout
if ((ret = avformat_open_input(&pavfc, file, NULL, NULL)) < 0) {
ROS_ERROR("[%s] avformat_open_input error", file);
return ret;
}
if ((ret = avformat_find_stream_info(pavfc, NULL)) < 0) {
ROS_ERROR("[%s] avformat_find_stream_info error", file);
avformat_close_input(&pavfc);
return ret;
}
/* find audio index */
for (i = 0; i < pavfc->nb_streams; i++) {
if (pavfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO){
audio_index = i;
break;
}
}
if (-1 == audio_index) {
ROS_ERROR("[%s] can not find audio stream!", file);
avformat_close_input(&pavfc);
return -1;
}
if(!(pavc = avcodec_find_decoder(pavfc->streams[audio_index]->codecpar->codec_id))){
ROS_ERROR("[%s] can not find decoder!", file);
avformat_close_input(&pavfc);
return -1;
}
if(!(pavcc = avcodec_alloc_context3(pavc))){
ROS_ERROR("[%s] Could not allocate a decoding context", file);
avformat_close_input(&pavfc);
return -1;
}
if((ret = avcodec_parameters_to_context(pavcc, pavfc->streams[audio_index]->codecpar)) < 0){
ROS_ERROR("[%s] avcodec_parameters_to_context error", file);
avcodec_free_context(&pavcc);
avformat_close_input(&pavfc);
return ret;
}
pavcc->pkt_timebase = pavfc->streams[audio_index]->time_base;
if ((ret = avcodec_open2(pavcc, pavc, NULL)) < 0){
ROS_ERROR("[%s] Could not open codec", file);
avcodec_free_context(&pavcc);
avformat_close_input(&pavfc);
return ret;
}
//out parameter
out_sample_rate = pavcc->sample_rate;
out_nb_samples = pavcc->frame_size;
out_channels = pavcc->channels;
out_chn_layout = av_get_default_channel_layout(pavcc->channels);
out_buffer_size = av_samples_get_buffer_size(NULL, out_channels, out_nb_samples, out_sample_fmt, 1);
ROS_INFO("[%s] audio in: channels(%d), nb_samples(%d), sample_fmt(%d:%dB:%s), sample_rate(%d)\n", file, pavcc->channels, pavcc->frame_size,
pavcc->sample_fmt, av_get_bytes_per_sample(pavcc->sample_fmt), av_get_sample_fmt_name(pavcc->sample_fmt), pavcc->sample_rate);
ROS_INFO("[%s] audio_out: buffer_size(%d), channels(%d), nb_samples(%d), sample_fmt(%d:%dB:%s), sample_rate(%d)\n", file, out_buffer_size, out_channels, out_nb_samples,
out_sample_fmt, av_get_bytes_per_sample(out_sample_fmt), av_get_sample_fmt_name(out_sample_fmt), out_sample_rate);
in_chn_layout = av_get_default_channel_layout(pavcc->channels);
//Swr
pswrc = swr_alloc_set_opts(NULL, out_chn_layout, out_sample_fmt, out_sample_rate,
in_chn_layout, pavcc->sample_fmt, pavcc->sample_rate, 0, NULL);
if(!pswrc || (swr_init(pswrc) < 0)) {
ROS_ERROR("[%s] swresample init error", file);
ret = -1;
goto Cleanup;
}
//SDL
wantSpec.freq = out_sample_rate;
wantSpec.format = AUDIO_S16SYS;
wantSpec.channels = out_channels;
wantSpec.silence = 0;
wantSpec.samples = out_nb_samples;
wantSpec.callback = fill_audio;
wantSpec.userdata = pavfc;
if (SDL_OpenAudio(&wantSpec, NULL) < 0){
ROS_ERROR("[%s] can not open SDL!", file);
ret = -1;
goto Cleanup;
}
outBuff = (unsigned char *)av_malloc(MAX_AUDIO_FRAME_SIZE*2); // x2 for stereo
if(! outBuff){
ROS_ERROR("[%s] audio buffer malloc failure", file);
ret = -1;
goto Cleanup;
}
if (!(pPkt = (AVPacket *)av_malloc(sizeof(AVPacket)))){
ROS_ERROR("[%s] AVPacket malloc failure", file);
ret = -1;
goto Cleanup;
}
if(!(pFrame = av_frame_alloc())){
ROS_ERROR("[%s] AVFrame malloc fail", file);
ret = -1;
goto Cleanup;
}
SDL_PauseAudio(0);
while(!*stop_flag && (av_read_frame(pavfc, pPkt) >= 0)){
if(pPkt->stream_index == audio_index){
if((ret = avcodec_send_packet(pavcc, pPkt)) < 0){
ROS_ERROR("[%s] avcodec send packet error", file);
break;
}
while(ret >= 0){
ret = avcodec_receive_frame(pavcc, pFrame);
if ((ret == AVERROR_EOF) || (ret == AVERROR(EAGAIN))){
ret = 0;
break;
} else if(ret < 0){
ROS_ERROR("[%s] avcodec receive frame error", file);
goto Cleanup;
}
#if 1
out_buffer_size = av_rescale_rnd(
swr_get_delay(pswrc, pavcc->sample_rate) + pFrame->nb_samples,
out_sample_rate, pavcc->sample_rate, AV_ROUND_UP);
out_convert_size = swr_convert(pswrc, &outBuff, out_buffer_size,(const uint8_t **)pFrame->data , pFrame->nb_samples);
#else
out_convert_size = swr_convert(pswrc, &outBuff, MAX_AUDIO_FRAME_SIZE,(const uint8_t **)pFrame->data , pFrame->nb_samples);
#endif
printf("out_buffer_size is %d, convert_size is %d - channels(%d), nb_samples(%d), sample_fmt(%d) - Frame: nb_samples(%d)\n",
out_buffer_size, out_convert_size, out_channels, out_nb_samples, out_sample_fmt, pFrame->nb_samples);
while(audioLen > 0) // wait until the callback has consumed the previous buffer
SDL_Delay(1);
SDL_LockAudio();
audioVolume = *volume_adjust;
audioVolume = (audioVolume>100?100:audioVolume) * SDL_MIX_MAXVOLUME * 0.01;
audioLen = out_convert_size * out_channels * av_get_bytes_per_sample(out_sample_fmt);
audioPos = outBuff;
SDL_UnlockAudio();
av_frame_unref(pFrame);
}
}
av_packet_unref(pPkt);
}
Cleanup:
SDL_CloseAudio();
SDL_Quit();
av_frame_free(&pFrame);
av_packet_free(&pPkt);
av_freep(&outBuff);
swr_free(&pswrc);
if(pavcc){ avcodec_free_context(&pavcc);}
if(pavfc){ avformat_close_input(&pavfc); }
audioLen = 0;
audioPos = NULL;
return ret;
}
Queue-based example
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
}
#include <SDL2/SDL.h>
#include <iostream>
#include <queue>
using namespace std;
bool quit = false;
typedef struct PacketQueue
{
queue<AVPacket> av_queue;
int nb_packets;
int size;
SDL_mutex *mutex;
SDL_cond *cond;
PacketQueue()
{
nb_packets = 0;
size = 0;
mutex = SDL_CreateMutex();
cond = SDL_CreateCond();
}
bool enQueue(const AVPacket *packet)
{
AVPacket pkt;
if (av_packet_ref(&pkt, packet) < 0)
return false;
SDL_LockMutex(mutex);
av_queue.push(pkt);
size += pkt.size;
nb_packets++;
SDL_CondSignal(cond);
SDL_UnlockMutex(mutex);
return true;
}
bool deQueue(AVPacket *packet, bool block)
{
bool ret = false;
SDL_LockMutex(mutex);
while (true)
{
if (quit)
{
ret = false;
break;
}
if (!av_queue.empty())
{
// move the packet reference out of the queue so the queued copy is not leaked
av_packet_move_ref(packet, &av_queue.front());
av_queue.pop();
nb_packets--;
size -= packet->size;
ret = true;
break;
}
else if (!block)
{
ret = false;
break;
}
else
{
SDL_CondWait(cond, mutex);
}
}
SDL_UnlockMutex(mutex);
return ret;
}
bool queueEmpty()
{
bool ret = false;
SDL_LockMutex(mutex);
ret = av_queue.empty();
SDL_UnlockMutex(mutex);
return ret;
}
void queueSignal()
{
SDL_CondSignal(cond);
}
}PacketQueue;
PacketQueue audioq;
// Decode one packet from the queue; returns the size in bytes of the decoded data
int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf, int buf_size)
{
int data_size = 0;
AVPacket pkt;
SwrContext *swr_ctx = nullptr;
if (quit)
return -1;
if (!audioq.deQueue(&pkt, true))
return -1;
AVFrame *frame = av_frame_alloc();
int ret = avcodec_send_packet(aCodecCtx, &pkt);
av_packet_unref(&pkt); // the decoder now holds its own reference
if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
{
av_frame_free(&frame);
return -1;
}
ret = avcodec_receive_frame(aCodecCtx, frame);
if (ret < 0 && ret != AVERROR_EOF)
{
av_frame_free(&frame);
return -1;
}
// fill in whichever of channels / channel_layout is missing
if (frame->channels > 0 && frame->channel_layout == 0)
frame->channel_layout = av_get_default_channel_layout(frame->channels);
else if (frame->channels == 0 && frame->channel_layout > 0)
frame->channels = av_get_channel_layout_nb_channels(frame->channel_layout);
AVSampleFormat dst_format = AV_SAMPLE_FMT_S16;
Uint64 dst_layout = av_get_default_channel_layout(frame->channels);
// set up the conversion (a real player would create the SwrContext once and reuse it rather than per frame)
swr_ctx = swr_alloc_set_opts(nullptr, dst_layout, dst_format, frame->sample_rate,
frame->channel_layout, (AVSampleFormat)frame->format, frame->sample_rate, 0, nullptr);
if (!swr_ctx || swr_init(swr_ctx) < 0)
{
av_frame_free(&frame);
return -1;
}
// number of output samples per channel: a * b / c, rounded up
int dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, frame->sample_rate) + frame->nb_samples, frame->sample_rate, frame->sample_rate, AV_ROUND_UP);
// convert; the return value is the number of samples per channel actually converted
int nb = swr_convert(swr_ctx, &audio_buf, dst_nb_samples, (const uint8_t**)frame->data, frame->nb_samples);
data_size = frame->channels * nb * av_get_bytes_per_sample(dst_format);
av_frame_free(&frame);
swr_free(&swr_ctx);
return data_size;
}
static const int MAX_AUDIO_FRAME_SIZE = 192000;
static const int SDL_AUDIO_BUFFER_SIZE = 1024;
void audio_callback(void* userdata, Uint8 *stream, int len)
{
AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;
int len1 = 0, audio_size = 0;
static uint8_t audio_buff[(MAX_AUDIO_FRAME_SIZE * 3) / 2];
static unsigned int audio_buf_size = 0;
static unsigned int audio_buf_index = 0;
SDL_memset(stream, 0, len);
while (len > 0) // keep going until len bytes have been supplied to the device
{
if (audio_buf_index >= audio_buf_size) // local buffer exhausted
{
// decode more data from the packet queue
audio_size = audio_decode_frame(aCodecCtx, audio_buff, sizeof(audio_buff));
if (audio_size < 0) // no data decoded or an error: leave the silence written above and stop
{
audio_buf_size = 0;
break;
}
else
audio_buf_size = audio_size;
audio_buf_index = 0;
}
len1 = audio_buf_size - audio_buf_index; // bytes left in the local buffer
if (len1 > len) // never copy more than the device asked for
len1 = len;
SDL_MixAudio(stream, audio_buff + audio_buf_index, len1, SDL_MIX_MAXVOLUME);
len -= len1;
stream += len1;
audio_buf_index += len1;
}
}
int main(int argc, char* argv[])
{
av_register_all();
SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER);
char* filename = argv[1];
AVFormatContext *pFormatCtx = nullptr;
if (avformat_open_input(&pFormatCtx, filename, nullptr, nullptr) != 0)
return -1;
if (avformat_find_stream_info(pFormatCtx, nullptr) < 0)
return -1;
av_dump_format(pFormatCtx, 0, filename, 0);
int audioStream = -1;
for (int i = 0; i < pFormatCtx->nb_streams; i++)
{
if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
{
audioStream = i;
break;
}
}
if (audioStream == -1)
return -1;
AVCodecContext* pCodecCtx = nullptr;
AVCodec* pCodec = nullptr;
// find a decoder for the audio stream
pCodec = avcodec_find_decoder(pFormatCtx->streams[audioStream]->codecpar->codec_id);
if (!pCodec)
{
cout << "Unsupported codec!" << endl;
return -1;
}
// do not use a codec context owned by AVFormatContext; allocate our own
// and copy the stream parameters into it
pCodecCtx = avcodec_alloc_context3(pCodec);
if (avcodec_parameters_to_context(pCodecCtx, pFormatCtx->streams[audioStream]->codecpar) < 0)
{
cout << "Could not copy codec parameters!" << endl;
return -1;
}
// Set audio settings from codec info
SDL_AudioSpec wanted_spec, spec;
wanted_spec.freq = pCodecCtx->sample_rate;
wanted_spec.format = AUDIO_S16SYS;
wanted_spec.channels = pCodecCtx->channels;
wanted_spec.silence = 0;
wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
wanted_spec.callback = audio_callback;
wanted_spec.userdata = pCodecCtx;
if (SDL_OpenAudio(&wanted_spec, &spec) < 0)
{
cout << "Open audio failed:" << SDL_GetError() << endl;
getchar();
return -1;
}
avcodec_open2(pCodecCtx, pCodec, nullptr);
SDL_PauseAudio(0);
AVPacket packet;
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
if (packet.stream_index == audioStream)
audioq.enQueue(&packet);
else
av_packet_unref(&packet);
}
while(!audioq.queueEmpty()){
SDL_Delay(100);
}
quit = true;
audioq.queueSignal();
printf("thread cleanup...\n");
SDL_CloseAudio();
SDL_Quit();
avformat_close_input(&pFormatCtx);
printf("thread over...\n");
// getchar();
return 0;
}