32 Commits
v0.1 ... v0.2

Author SHA1 Message Date
9a0e28cab0 build: Update library version info 2015-11-04 13:20:05 +05:30
f7c9b269a0 doc: Add release notes about changes and API breakage 2015-11-04 13:15:21 +05:30
34abadd258 Update code to current Chromium master
This corresponds to:

Chromium: 6555f9456074c0c0e5f7713564b978588ac04a5d
webrtc: c8b569e0a7ad0b369e15f0197b3a558699ec8efa
2015-11-04 13:11:30 +05:30
9bc60d3e10 doc: Add a pro-tip to update instructions 2015-11-04 13:11:30 +05:30
5ac4d6aa8a build: Dist ancillary documentation 2015-11-04 13:11:30 +05:30
ff94c386ca build: Install trace.h to allow clients access to the Trace API 2015-11-04 13:11:30 +05:30
1cba3b05b9 doc: Split out and expand on updating notes
Expands on instructions for updating the code when upstream changes.
Also renamed with the '.md' extension for tools that understand
Markdown.
2015-11-04 13:11:30 +05:30
66cdc2e923 common_audio: Remove extraneous header
This one is left over from a previous version of the code base.
2015-11-04 13:11:30 +05:30
602b0e5f56 build: Don't install a top level copy of audio_processing.h
If we're breaking API, then clients need to be modified and recompiled
anyway, so we can avoid the cruft of trying to be backwards compatible.

Clients now need to include the file as it is in the upstream sources:
<webrtc/modules/audio_processing/include/audio_processing.h>
2015-11-04 13:11:30 +05:30
360faa363d build: Install module_common_types.h and dependencies
This is needed for at least the AudioFrame class.

Unfortunately, this does add a bit of ugliness because
module_common_types.h has video bits that are hidden behind our own
define, which now becomes part of pkg-config CFLAGS.

This could be made less ugly, potentially, but I'm not sure how right
now.
2015-11-04 13:11:30 +05:30
7771f4d7f3 doc: Add upstream repo URL to README 2015-11-04 13:11:30 +05:30
8d0c073e56 doc: Update README 2015-10-19 11:51:55 +05:30
a6e73f4d94 build: Conditionally build C variants of assembler-optimised code 2015-10-19 11:48:52 +05:30
7d9c65b625 build: Define assembler flags where required 2015-10-19 11:41:13 +05:30
88a8b62f3f build: Define ARM arch preprocessor macros 2015-10-19 11:40:05 +05:30
2f0b9411d3 system_wrappers: Add missing file for ARM builds 2015-10-19 11:32:48 +05:30
d6a338cb01 build: Use CXXFLAGS instead of CFLAGS in compile testing
This is needed since we're using AC_LANG_CPLUSPLUS
2015-10-19 11:29:40 +05:30
e4b1cac207 build: Minor whitespace changes
Makes syntax highlighting in vim unbreak.
2015-10-19 11:24:40 +05:30
e5a6e18f13 Drop redundant header 2015-10-15 16:18:47 +05:30
9d68f7efef build: Fix distcheck 2015-10-15 16:18:47 +05:30
98454ed265 build: Add architecture checks for x86 and ARM
On x86, SSE optimisations are always compiled in, and used based on
runtime checks.

On ARM, we try to autodetect NEON support (with an option of runtime
detection). This has not been build-tested on ARM yet.

This leaves MIPS to be done.
2015-10-15 16:18:47 +05:30
f6941fbf6a build: Stop hard-coding OS/platform CFLAGS 2015-10-15 16:18:47 +05:30
12e9e1eafd build: Fix up include file paths 2015-10-15 16:18:47 +05:30
9b4e8dc83c debug: Update protobuf file
This isn't used yet, but let's keep it up to date
2015-10-15 16:18:47 +05:30
926b543a2f build: Drop old gpyi file 2015-10-15 16:18:47 +05:30
7fcd4d2df5 build: More build fixes and cleanups 2015-10-15 16:18:47 +05:30
65a2860806 Update .gitignore for .dirstamp files 2015-10-15 16:18:47 +05:30
e68571d456 build: Some fixes for make distcheck 2015-10-15 16:18:47 +05:30
407bfbf651 build: Make build succeed without test and non-audio deps 2015-10-15 16:18:47 +05:30
753eada3aa Update audio_processing module
Corresponds to upstream commit 524e9b043e7e86fd72353b987c9d5f6a1ebf83e1

Update notes:

 * Pull in third party license file

 * Replace .gypi files with BUILD.gn to keep track of what changes
   upstream

 * Bunch of new files pulled in as dependencies

 * Won't build yet due to changes needed on top of these
2015-10-15 16:18:45 +05:30
5ae7a5d6cd Update system_wrappers
Corresponds to upstream commit 524e9b043e7e86fd72353b987c9d5f6a1ebf83e1
2015-10-15 16:18:39 +05:30
c4fb4e38de Update common_audio
Corresponds to upstream commit 524e9b043e7e86fd72353b987c9d5f6a1ebf83e1

Update notes:

 * Moved src/ to webrtc/ to easily diff against the third_party/webrtc
   in the chromium tree

 * ARM/NEON/MIPS support is not yet hooked up

 * Tests have not been copied
2015-10-15 16:18:25 +05:30
528 changed files with 70644 additions and 28921 deletions

.gitignore

@ -5,6 +5,7 @@
 .*.swp
 *~
 .deps*
+.dirstamp
 .libs*
 Makefile
 Makefile.in
@ -27,4 +28,3 @@ ltmain.sh
 missing
 mkinstalldirs
 stamp-*

Makefile.am

@ -1,6 +1,23 @@
-SUBDIRS = src
+SUBDIRS = webrtc
 pkgconfigdir = $(libdir)/pkgconfig
 pkgconfig_DATA = webrtc-audio-processing.pc
-EXTRA_DIST = PATENTS
+webrtcincludedir = $(includedir)/webrtc_audio_processing
+nobase_webrtcinclude_HEADERS = webrtc/base/arraysize.h \
+webrtc/base/checks.h \
+webrtc/base/constructormagic.h \
+webrtc/base/basictypes.h \
+webrtc/base/maybe.h \
+webrtc/base/platform_file.h \
+webrtc/common.h \
+webrtc/common_types.h \
+webrtc/typedefs.h \
+webrtc/modules/audio_processing/beamformer/array_util.h \
+webrtc/modules/audio_processing/include/audio_processing.h \
+webrtc/modules/interface/module_common_types.h \
+webrtc/system_wrappers/include/trace.h
+EXTRA_DIST = NEWS \
+README.md \
+UPDATING.md

NEWS

@ -1,3 +1,47 @@
Release 0.2
-----------
Updated AudioProcessing code to be more current.
Contains API breaking changes.
Upstream changes include:
* Rewritten AGC and voice activity detection
* Intelligibility enhancer
* Extended AEC filter
* Beamformer
* Transient suppressor
* ARM, NEON and MIPS optimisations (MIPS optimisations are not hooked up)
API changes:
* We no longer include a top-level audio_processing.h. The webrtc tree format
is used, so use webrtc/modules/audio_processing/include/audio_processing.h
* The top-level module_common_types.h has also been moved to
webrtc/modules/interface/module_common_types.h
* C++11 support is now required while compiling client code
* AudioProcessing::Create() does not take any arguments any more
* AudioProcessing::Destroy() is gone, use standard C++ "delete" instead
* Stream parameters are now configured via StreamConfig and ProcessingConfig
rather than set_sample_rate(), set_num_channels(), etc.
* AudioFrame field names have changed
* Use config API for newer audio processing options
* Use ProcessReverseStream() instead of AnalyzeReverseStream(), particularly
when using the intelligibility enhancer
* GainControl::set_analog_level_limits() is broken. The AGC implementation
hard codes 0-255 as the volume range
Other notes:
* The new audio processing parameters are not all tested, and a few are not
enabled upstream (in Chromium) either
* The rewritten AGC appears to be less sensitive, and it might make sense to
initialise the capture volume to something reasonable (33% or 50%, for
example) to make sure there is sufficient energy in the stream to trigger
the AGC mechanism
Release 0.1
-----------
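To make the API breakage listed in the NEWS entry above more concrete, here is a rough migration sketch for client code moving from 0.1 to 0.2. It is not taken from this repository: the helper names, the 48 kHz mono stream parameters and the Initialize(ProcessingConfig) call are assumptions, and the exact signatures should be checked against the installed audio_processing.h.

```cpp
// Hypothetical before/after sketch for the 0.2 API; verify against
// webrtc/modules/audio_processing/include/audio_processing.h.
#include <webrtc/modules/audio_processing/include/audio_processing.h>
// 0.1 installed a top-level header instead: #include <audio_processing.h>

webrtc::AudioProcessing* CreateConfiguredApm() {
  // 0.1: Create() took an argument; 0.2: no arguments.
  webrtc::AudioProcessing* apm = webrtc::AudioProcessing::Create();

  // 0.1: set_sample_rate(), set_num_channels(), etc.
  // 0.2: describe all streams up front with StreamConfig/ProcessingConfig.
  webrtc::ProcessingConfig config = {{
      webrtc::StreamConfig(48000, 1),  // input (capture)
      webrtc::StreamConfig(48000, 1),  // output (capture)
      webrtc::StreamConfig(48000, 1),  // reverse input (render)
      webrtc::StreamConfig(48000, 1),  // reverse output (render)
  }};
  apm->Initialize(config);

  apm->echo_cancellation()->Enable(true);
  apm->gain_control()->Enable(true);
  return apm;
}

void ReleaseApm(webrtc::AudioProcessing* apm) {
  // 0.1: AudioProcessing::Destroy(apm); 0.2: plain C++ delete.
  delete apm;
}
```

When the intelligibility enhancer is in use, the render side should go through ProcessReverseStream() rather than AnalyzeReverseStream(), as noted above; the smoke-test sketch after UPDATING.md below shows that call in context.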

README

@ -1,40 +1 @@
-About
-=====
-This is meant to be a more Linux packaging friendly copy of the AudioProcessing
-module from the WebRTC[1] project. The ideal case is that we make no changes to
-the code to make tracking upstream code easy.
-This package currently only includes the AudioProcessing bits, but I am very
-open to collaborating with other projects that wish to distribute other bits of
-the code and hopefully eventually have a single point of packaging all the
-WebRTC code to help people reuse the code and avoid keeping private copies in
-several different projects.
-[1] http://code.google.com/p/webrtc/
-Feedback
-========
-Patches, suggestions welcome. You can send them to the PulseAudio mailing
-list[2] or to me at the address below.
--- Arun Raghavan <arun.raghavan@collabora.co.uk>
-[2] http://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss
-Notes
-=====
-Assembling some quick notes on maintaining this tree vs. the original WebRTC
-project source code.
-1. Running meld on a pristine tree's src/ vs. the same directory in the WebRTC
-code should produce a fairly easy-to-parse set of differences that can be
-merged. I've kept the test code out for now, but this might get merged in
-the future. The .gyp/.gypi files are included to make tracking addition and
-removal of files easy to track.
-2. Be sure to keep the keep the comment on top of the version field in
-configure.ac up-to-date so that it's easy to track what changes went in at
-what point.
+See README.md

README.md

@ -0,0 +1,36 @@
About
=====
This is meant to be a more Linux packaging friendly copy of the AudioProcessing
module from the WebRTC[1][2] project. The ideal case is that we make no changes to
the code to make tracking upstream code easy.
This package currently only includes the AudioProcessing bits, but I am very
open to collaborating with other projects that wish to distribute other bits of
the code and hopefully eventually have a single point of packaging all the
WebRTC code to help people reuse the code and avoid keeping private copies in
several different projects.
[1] http://code.google.com/p/webrtc/
[2] https://chromium.googlesource.com/external/webrtc/trunk/webrtc.git
Feedback
========
Patches, suggestions welcome. You can send them to the PulseAudio mailing
list[3] or to me at the address below.
-- Arun Raghavan <mail@arunraghavan.net>
[3] http://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss
Notes
====
1. Some files need to be patched to avoid pulling in the gtest framework. This
should ideally be pushed upstream in some way so we're able to just pull
in what we need without changing anything.
2. It might be nice to try LTO on the library. We build a lot of code as part
of the main AudioProcessing module deps, and it's possible that this could
provide significant space savings.

UPDATING.md

@ -0,0 +1,67 @@
Updating
=====
Assembling some quick notes on maintaining this tree vs. the upstream WebRTC
project source code.
1. The code is currently synced against whatever revision of the upstream
webrtc git repository Chromium uses.
2. Instructions on checking out the Chromium tree are on the
[Chromium site][get-chromium]. As a shortcut, you can look at the DEPS file
in the Chromium tree for the current webrtc version being used, and then
just use that commit hash with the webrtc tree.
3. [Meld][meld] is a great tool for diffing two directories. Start by running
it on ```webrtc-audio-processing/webrtc``` and
```chromium/third_party/webrtc```.
* For each directory in the ```webrtc-audio-processing``` tree, go over the
corresponding code in the ```chromium``` tree.
* Examine changed files, and pick any new changes. A small number of files
in the ```webrtc-audio-processing``` tree have been changed by hand, make
sure that those are not overwritten.
* unittest files have been left out since they are not built or used.
* BUILD.gn files have been copied to keep track of changes to the build
system upstream.
* Arch-specific files usually have special handling in the corresponding
Makefile.am.
4. Once everything has been copied and updated, the tree needs to be built.
Missing dependencies (files that were not copied, or new modules that are
being depended on) will first turn up here.
* Copy new deps as needed, leaving out testing-only dependencies insofar as
this is possible.
5. ```webrtc/modules/audio_processing/include/audio_processing.h``` is the main
include file, so look for API changes here.
* The current policy is that we mirror upstream API as-is.
* Update configure.ac with the appropriate version info based on how the
code has changed. Details on how to do this are included in the
[libtool documentation][libtool-version-info].
6. Build PulseAudio (and/or any other dependent projects) against the new code.
The easy way to do this is via a prefixed install.
* Run ```configure``` on webrtc-audio-processing with
```--prefix=/some/local/path```, then do a ```make``` and
```make install```.
* Run ```configure``` on PulseAudio with
```PKG_CONFIG_PATH=/some/local/path/lib/pkgconfig```, which will cause the
build to pick up the prefixed install. Then do a ```make```, run the built
PulseAudio, and load ```module-echo-cancel``` to make sure it loads fine.
* Run some test streams through the canceller to make sure it is working
fine.
[get-chromium]: http://dev.chromium.org/developers/how-tos/get-the-code
[meld]: http://meldmerge.org/
[libtool-version-info]: https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
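Alongside the PulseAudio/module-echo-cancel test described above, a tiny standalone client can be compiled against the prefixed install to confirm that the headers, the C++11 requirement and the library all resolve. The following is only a sketch: the file name, the build command in the comment, the 16 kHz frame parameters and the loop count are illustrative assumptions, not part of this repository.

```cpp
// smoke_test.cc -- hypothetical build command (adjust the prefix):
//   PKG_CONFIG_PATH=/some/local/path/lib/pkgconfig \
//   g++ -std=c++11 smoke_test.cc \
//       $(pkg-config --cflags --libs webrtc-audio-processing) -o smoke_test
#include <cstdio>
#include <cstring>

#include <webrtc/modules/audio_processing/include/audio_processing.h>
#include <webrtc/modules/interface/module_common_types.h>

int main() {
  webrtc::AudioProcessing* apm = webrtc::AudioProcessing::Create();
  apm->echo_cancellation()->Enable(true);

  // One 10 ms mono frame at 16 kHz: 160 samples of silence.
  webrtc::AudioFrame frame;
  frame.sample_rate_hz_ = 16000;
  frame.num_channels_ = 1;
  frame.samples_per_channel_ = 160;
  std::memset(frame.data_, 0, sizeof(frame.data_));

  for (int i = 0; i < 100; ++i) {
    apm->ProcessReverseStream(&frame);  // far-end (render) side
    apm->set_stream_delay_ms(0);        // the echo canceller needs a delay estimate
    if (apm->ProcessStream(&frame) != webrtc::AudioProcessing::kNoError) {
      std::printf("ProcessStream() failed at frame %d\n", i);
      delete apm;
      return 1;
    }
  }

  std::printf("Processed 100 frames without error\n");
  delete apm;
  return 0;
}
```

If this builds and runs against the prefixed install, the packaging side is in order; the PulseAudio steps above then exercise the canceller with real audio.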

configure.ac

@ -1,16 +1,18 @@
-# Revision changelog (version - date, svn rev. from upstream that was merged)
-# 0.1 - 21 Oct 2011, r789
-AC_INIT([webrtc-audio-processing], [0.1])
-AM_INIT_AUTOMAKE([dist-xz tar-ustar])
+AC_INIT([webrtc-audio-processing], [0.2])
+AM_INIT_AUTOMAKE([dist-xz subdir-objects tar-ustar])
-AC_SUBST(LIBWEBRTC_AUDIO_PROCESSING_VERSION_INFO, [0:0:0])
+AC_SUBST(LIBWEBRTC_AUDIO_PROCESSING_VERSION_INFO, [1:0:0])
 AM_SILENT_RULES([yes])
+# Set up the host_* variables
+AC_CANONICAL_HOST
 AC_PROG_CC
 AC_PROG_CXX
 AC_PROG_LIBTOOL
 AC_PROG_INSTALL
+AM_PROG_AS
 AC_LANG_C
 AC_LANG_CPLUSPLUS
@ -24,26 +26,98 @@ AS_CASE(["x${with_ns_mode}"],
 [NS_FIXED=0])
 AM_CONDITIONAL(NS_FIXED, [test "x${NS_FIXED}" = "x1"])
-COMMON_CFLAGS="-DNDEBUG -I\$(srcdir)/interface -I\$(srcdir)/main/interface -I\$(top_srcdir)/src -I\$(top_srcdir)/src/modules/interface"
-COMMON_CXXFLAGS="-DNDEBUG -I\$(srcdir)/interface -I\$(srcdir)/main/interface -I\$(top_srcdir)/src -I\$(top_srcdir)/src/modules/interface"
+# Borrowed from gst-plugins-bad
+AC_CHECK_HEADER(MobileCoreServices/MobileCoreServices.h, HAVE_IOS="yes", HAVE_IOS="no", [-])
+# Based on gst-plugins-bad configure.ac and defines in
+# <chromium source>/build/config/BUILDCONFIG.gn and
+# webrtc/BUILD.gn
+AS_CASE(["${host}"],
+[*android*],
+[
+OS_CFLAGS="-DWEBRTC_ANDROID -DWEBRTC_LINUX"
+PLATFORM_CFLAGS="-DWEBRTC_POSIX"
+],
+[*-*linux*],
+[
+OS_CFLAGS="-DWEBRTC_LINUX"
+PLATFORM_CFLAGS="-DWEBRTC_POSIX"
+],
+[*-*darwin*],
+[
+AS_IF([test "$HAVE_IOS" = "yes"],
+[OS_FLAGS="-DWEBRTC_MAC -DWEBRTC_IOS"],
+[OS_FLAGS="-DWEBRTC_MAC"])
+PLATFORM_CFLAGS="-DWEBRTC_POSIX"
+]
+# FIXME: Add Windows support
+)
+AC_SUBST(PLATFORM_CFLAGS)
+AS_CASE(["${host_cpu}"],
+[i?86|x86_64],
+[
+HAVE_X86=1
+],
+[armv7*|armv8*],
+[
+HAVE_ARM=1
+HAVE_ARMV7=1
+ARCH_CFLAGS="-DWEBRTC_ARCH_ARM -DWEBRTC_ARCH_ARM_V7"
+],
+[arm*],
+[
+HAVE_ARM=1
+ARCH_CFLAGS="-DWEBRTC_ARCH_ARM"
+]
+# FIXME: Add MIPS support, see webrtc/BUILD.gn for defines
+)
+AM_CONDITIONAL(HAVE_X86, [test "x${HAVE_X86}" = "x1"])
+AM_CONDITIONAL(HAVE_ARM, [test "x${HAVE_ARM}" = "x1"])
+AM_CONDITIONAL(HAVE_ARMV7, [test "x${HAVE_ARMV7}" = "x1"])
+# Borrowed from pulseaudio's configure.ac
+AC_ARG_ENABLE([neon],
+AS_HELP_STRING([--enable-neon], [Enable NEON optimisations on ARM CPUs that support it (yes|no|auto|runtime)]))
+AS_IF([test "x$enable_neon" != "xno"],
+AS_IF([test "x$enable_neon" != "xruntime"],
+[
+save_CXXFLAGS="$CXXFLAGS"; CXXFLAGS="-mfpu=neon $CXXFLAGS"
+AC_COMPILE_IFELSE(
+[AC_LANG_PROGRAM(
+[
+#include <arm_neon.h>
+], [])],
+[
+HAVE_NEON=1
+ARCH_CFLAGS="$ARCH_CFLAGS -DWEBRTC_HAS_NEON -mfpu=neon"
+])
+CXXFLAGS="$save_CXXFLAGS"
+],
+[
+HAVE_NEON=1
+ARCH_CFLAGS="$ARCH_CFLAGS -DWEBRTC_DETECT_NEON -mfpu=neon"
+])
+)
+AM_CONDITIONAL([HAVE_NEON], [test "x$HAVE_NEON" = "x1"])
+COMMON_CFLAGS="-DWEBRTC_AUDIO_PROCESSING_ONLY_BUILD ${PLATFORM_CFLAGS} ${OS_CFLAGS} ${ARCH_CFLAGS} -DNDEBUG -I\$(top_srcdir)"
+COMMON_CXXFLAGS="-std=c++11 -DWEBRTC_AUDIO_PROCESSING_ONLY_BUILD ${PLATFORM_CFLAGS} ${OS_CFLAGS} ${ARCH_CFLAGS} -DNDEBUG -I\$(top_srcdir)"
 AC_SUBST([COMMON_CFLAGS])
 AC_SUBST([COMMON_CXXFLAGS])
 AC_CONFIG_FILES([
 webrtc-audio-processing.pc
 Makefile
-src/Makefile
-src/common_audio/Makefile
-src/common_audio/signal_processing_library/Makefile
-src/common_audio/vad/Makefile
-src/system_wrappers/Makefile
-src/modules/Makefile
-src/modules/audio_processing/Makefile
-src/modules/audio_processing/utility/Makefile
-src/modules/audio_processing/ns/Makefile
-src/modules/audio_processing/aec/Makefile
-src/modules/audio_processing/aecm/Makefile
-src/modules/audio_processing/agc/Makefile
+webrtc/Makefile
+webrtc/base/Makefile
+webrtc/common_audio/Makefile
+webrtc/system_wrappers/Makefile
+webrtc/modules/Makefile
+webrtc/modules/audio_coding/Makefile
+webrtc/modules/audio_processing/Makefile
 ])
 AC_OUTPUT

src/Makefile.am

@ -1 +0,0 @@
SUBDIRS = common_audio system_wrappers modules

src/common_audio/Makefile.am

@ -1 +0,0 @@
SUBDIRS = signal_processing_library vad

src/common_audio/signal_processing_library/Makefile.am

@ -1,44 +0,0 @@
noinst_LTLIBRARIES = libspl.la
libspl_la_SOURCES = main/interface/signal_processing_library.h \
main/interface/spl_inl.h \
main/source/auto_corr_to_refl_coef.c \
main/source/auto_correlation.c \
main/source/complex_fft.c \
main/source/complex_ifft.c \
main/source/complex_bit_reverse.c \
main/source/copy_set_operations.c \
main/source/cos_table.c \
main/source/cross_correlation.c \
main/source/division_operations.c \
main/source/dot_product_with_scale.c \
main/source/downsample_fast.c \
main/source/energy.c \
main/source/filter_ar.c \
main/source/filter_ar_fast_q12.c \
main/source/filter_ma_fast_q12.c \
main/source/get_hanning_window.c \
main/source/get_scaling_square.c \
main/source/hanning_table.c \
main/source/ilbc_specific_functions.c \
main/source/levinson_durbin.c \
main/source/lpc_to_refl_coef.c \
main/source/min_max_operations.c \
main/source/randn_table.c \
main/source/randomization_functions.c \
main/source/refl_coef_to_lpc.c \
main/source/resample.c \
main/source/resample_48khz.c \
main/source/resample_by_2.c \
main/source/resample_by_2_internal.c \
main/source/resample_by_2_internal.h \
main/source/resample_fractional.c \
main/source/sin_table.c \
main/source/sin_table_1024.c \
main/source/spl_sqrt.c \
main/source/spl_sqrt_floor.c \
main/source/spl_version.c \
main/source/splitting_filter.c \
main/source/sqrt_of_one_minus_x_squared.c \
main/source/vector_scaling_operations.c
libspl_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS)

src/common_audio/signal_processing_library/main/interface/spl_inl.h

@ -1,159 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
// This header file includes the inline functions in
// the fix point signal processing library.
#ifndef WEBRTC_SPL_SPL_INL_H_
#define WEBRTC_SPL_SPL_INL_H_
#ifdef WEBRTC_ARCH_ARM_V7A
#include "spl_inl_armv7.h"
#else
static __inline WebRtc_Word16 WebRtcSpl_SatW32ToW16(WebRtc_Word32 value32) {
WebRtc_Word16 out16 = (WebRtc_Word16) value32;
if (value32 > 32767)
out16 = 32767;
else if (value32 < -32768)
out16 = -32768;
return out16;
}
static __inline WebRtc_Word16 WebRtcSpl_AddSatW16(WebRtc_Word16 a,
WebRtc_Word16 b) {
return WebRtcSpl_SatW32ToW16((WebRtc_Word32) a + (WebRtc_Word32) b);
}
static __inline WebRtc_Word32 WebRtcSpl_AddSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_sum;
// perform long addition
l_sum = l_var1 + l_var2;
// check for under or overflow
if (WEBRTC_SPL_IS_NEG(l_var1)) {
if (WEBRTC_SPL_IS_NEG(l_var2) && !WEBRTC_SPL_IS_NEG(l_sum)) {
l_sum = (WebRtc_Word32)0x80000000;
}
} else {
if (!WEBRTC_SPL_IS_NEG(l_var2) && WEBRTC_SPL_IS_NEG(l_sum)) {
l_sum = (WebRtc_Word32)0x7FFFFFFF;
}
}
return l_sum;
}
static __inline WebRtc_Word16 WebRtcSpl_SubSatW16(WebRtc_Word16 var1,
WebRtc_Word16 var2) {
return WebRtcSpl_SatW32ToW16((WebRtc_Word32) var1 - (WebRtc_Word32) var2);
}
static __inline WebRtc_Word32 WebRtcSpl_SubSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_diff;
// perform subtraction
l_diff = l_var1 - l_var2;
// check for underflow
if ((l_var1 < 0) && (l_var2 > 0) && (l_diff > 0))
l_diff = (WebRtc_Word32)0x80000000;
// check for overflow
if ((l_var1 > 0) && (l_var2 < 0) && (l_diff < 0))
l_diff = (WebRtc_Word32)0x7FFFFFFF;
return l_diff;
}
static __inline WebRtc_Word16 WebRtcSpl_GetSizeInBits(WebRtc_UWord32 n) {
int bits;
if (0xFFFF0000 & n) {
bits = 16;
} else {
bits = 0;
}
if (0x0000FF00 & (n >> bits)) bits += 8;
if (0x000000F0 & (n >> bits)) bits += 4;
if (0x0000000C & (n >> bits)) bits += 2;
if (0x00000002 & (n >> bits)) bits += 1;
if (0x00000001 & (n >> bits)) bits += 1;
return bits;
}
static __inline int WebRtcSpl_NormW32(WebRtc_Word32 a) {
int zeros;
if (a <= 0) a ^= 0xFFFFFFFF;
if (!(0xFFFF8000 & a)) {
zeros = 16;
} else {
zeros = 0;
}
if (!(0xFF800000 & (a << zeros))) zeros += 8;
if (!(0xF8000000 & (a << zeros))) zeros += 4;
if (!(0xE0000000 & (a << zeros))) zeros += 2;
if (!(0xC0000000 & (a << zeros))) zeros += 1;
return zeros;
}
static __inline int WebRtcSpl_NormU32(WebRtc_UWord32 a) {
int zeros;
if (a == 0) return 0;
if (!(0xFFFF0000 & a)) {
zeros = 16;
} else {
zeros = 0;
}
if (!(0xFF000000 & (a << zeros))) zeros += 8;
if (!(0xF0000000 & (a << zeros))) zeros += 4;
if (!(0xC0000000 & (a << zeros))) zeros += 2;
if (!(0x80000000 & (a << zeros))) zeros += 1;
return zeros;
}
static __inline int WebRtcSpl_NormW16(WebRtc_Word16 a) {
int zeros;
if (a <= 0) a ^= 0xFFFF;
if (!(0xFF80 & a)) {
zeros = 8;
} else {
zeros = 0;
}
if (!(0xF800 & (a << zeros))) zeros += 4;
if (!(0xE000 & (a << zeros))) zeros += 2;
if (!(0xC000 & (a << zeros))) zeros += 1;
return zeros;
}
static __inline int32_t WebRtc_MulAccumW16(int16_t a,
int16_t b,
int32_t c) {
return (a * b + c);
}
#endif // WEBRTC_ARCH_ARM_V7A
#endif // WEBRTC_SPL_SPL_INL_H_

src/common_audio/signal_processing_library/main/interface/spl_inl_armv7.h

@ -1,137 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
// This header file includes the inline functions for ARM processors in
// the fix point signal processing library.
#ifndef WEBRTC_SPL_SPL_INL_ARMV7_H_
#define WEBRTC_SPL_SPL_INL_ARMV7_H_
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_16_32_RSFT16(WebRtc_Word16 a,
WebRtc_Word32 b) {
WebRtc_Word32 tmp;
__asm__("smulwb %0, %1, %2":"=r"(tmp):"r"(b), "r"(a));
return tmp;
}
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_32_32_RSFT32(WebRtc_Word16 a,
WebRtc_Word16 b,
WebRtc_Word32 c) {
WebRtc_Word32 tmp;
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(tmp) : "r"(b), "r"(a));
__asm__("smmul %0, %1, %2":"=r"(tmp):"r"(tmp), "r"(c));
return tmp;
}
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_32_32_RSFT32BI(WebRtc_Word32 a,
WebRtc_Word32 b) {
WebRtc_Word32 tmp;
__asm__("smmul %0, %1, %2":"=r"(tmp):"r"(a), "r"(b));
return tmp;
}
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_16_16(WebRtc_Word16 a,
WebRtc_Word16 b) {
WebRtc_Word32 tmp;
__asm__("smulbb %0, %1, %2":"=r"(tmp):"r"(a), "r"(b));
return tmp;
}
static __inline int32_t WebRtc_MulAccumW16(int16_t a,
int16_t b,
int32_t c) {
int32_t tmp = 0;
__asm__("smlabb %0, %1, %2, %3":"=r"(tmp):"r"(a), "r"(b), "r"(c));
return tmp;
}
static __inline WebRtc_Word16 WebRtcSpl_AddSatW16(WebRtc_Word16 a,
WebRtc_Word16 b) {
WebRtc_Word32 s_sum;
__asm__("qadd16 %0, %1, %2":"=r"(s_sum):"r"(a), "r"(b));
return (WebRtc_Word16) s_sum;
}
static __inline WebRtc_Word32 WebRtcSpl_AddSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_sum;
__asm__("qadd %0, %1, %2":"=r"(l_sum):"r"(l_var1), "r"(l_var2));
return l_sum;
}
static __inline WebRtc_Word16 WebRtcSpl_SubSatW16(WebRtc_Word16 var1,
WebRtc_Word16 var2) {
WebRtc_Word32 s_sub;
__asm__("qsub16 %0, %1, %2":"=r"(s_sub):"r"(var1), "r"(var2));
return (WebRtc_Word16)s_sub;
}
static __inline WebRtc_Word32 WebRtcSpl_SubSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_sub;
__asm__("qsub %0, %1, %2":"=r"(l_sub):"r"(l_var1), "r"(l_var2));
return l_sub;
}
static __inline WebRtc_Word16 WebRtcSpl_GetSizeInBits(WebRtc_UWord32 n) {
WebRtc_Word32 tmp;
__asm__("clz %0, %1":"=r"(tmp):"r"(n));
return (WebRtc_Word16)(32 - tmp);
}
static __inline int WebRtcSpl_NormW32(WebRtc_Word32 a) {
WebRtc_Word32 tmp;
if (a <= 0) a ^= 0xFFFFFFFF;
__asm__("clz %0, %1":"=r"(tmp):"r"(a));
return tmp - 1;
}
static __inline int WebRtcSpl_NormU32(WebRtc_UWord32 a) {
int tmp;
if (a == 0) return 0;
__asm__("clz %0, %1":"=r"(tmp):"r"(a));
return tmp;
}
static __inline int WebRtcSpl_NormW16(WebRtc_Word16 a) {
WebRtc_Word32 tmp;
if (a <= 0) a ^= 0xFFFFFFFF;
__asm__("clz %0, %1":"=r"(tmp):"r"(a));
return tmp - 17;
}
static __inline WebRtc_Word16 WebRtcSpl_SatW32ToW16(WebRtc_Word32 value32) {
WebRtc_Word16 out16;
__asm__("ssat %r0, #16, %r1" : "=r"(out16) : "r"(value32));
return out16;
}
#endif // WEBRTC_SPL_SPL_INL_ARMV7_H_

src/common_audio/signal_processing_library/main/source/auto_correlation.c

@ -1,141 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_AutoCorrelation().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
int WebRtcSpl_AutoCorrelation(G_CONST WebRtc_Word16* in_vector,
int in_vector_length,
int order,
WebRtc_Word32* result,
int* scale)
{
WebRtc_Word32 sum;
int i, j;
WebRtc_Word16 smax; // Sample max
G_CONST WebRtc_Word16* xptr1;
G_CONST WebRtc_Word16* xptr2;
WebRtc_Word32* resultptr;
int scaling = 0;
#ifdef _ARM_OPT_
#pragma message("NOTE: _ARM_OPT_ optimizations are used")
WebRtc_Word16 loops4;
#endif
if (order < 0)
order = in_vector_length;
// Find the max. sample
smax = WebRtcSpl_MaxAbsValueW16(in_vector, in_vector_length);
// In order to avoid overflow when computing the sum we should scale the samples so that
// (in_vector_length * smax * smax) will not overflow.
if (smax == 0)
{
scaling = 0;
} else
{
int nbits = WebRtcSpl_GetSizeInBits(in_vector_length); // # of bits in the sum loop
int t = WebRtcSpl_NormW32(WEBRTC_SPL_MUL(smax, smax)); // # of bits to normalize smax
if (t > nbits)
{
scaling = 0;
} else
{
scaling = nbits - t;
}
}
resultptr = result;
// Perform the actual correlation calculation
for (i = 0; i < order + 1; i++)
{
int loops = (in_vector_length - i);
sum = 0;
xptr1 = in_vector;
xptr2 = &in_vector[i];
#ifndef _ARM_OPT_
for (j = loops; j > 0; j--)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1++, *xptr2++, scaling);
}
#else
loops4 = (loops >> 2) << 2;
if (scaling == 0)
{
for (j = 0; j < loops4; j = j + 4)
{
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
}
for (j = loops4; j < loops; j++)
{
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
}
}
else
{
for (j = 0; j < loops4; j = j + 4)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
}
for (j = loops4; j < loops; j++)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
}
}
#endif
*resultptr++ = sum;
}
*scale = scaling;
return order + 1;
}

src/common_audio/signal_processing_library/main/source/complex_bit_reverse.c

@ -1,51 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_ComplexBitReverse().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_ComplexBitReverse(WebRtc_Word16 frfi[], int stages)
{
int mr, nn, n, l, m;
WebRtc_Word16 tr, ti;
n = 1 << stages;
mr = 0;
nn = n - 1;
// decimation in time - re-order data
for (m = 1; m <= nn; ++m)
{
l = n;
do
{
l >>= 1;
} while (mr + l > nn);
mr = (mr & (l - 1)) + l;
if (mr <= m)
continue;
tr = frfi[2 * m];
frfi[2 * m] = frfi[2 * mr];
frfi[2 * mr] = tr;
ti = frfi[2 * m + 1];
frfi[2 * m + 1] = frfi[2 * mr + 1];
frfi[2 * mr + 1] = ti;
}
}

src/common_audio/signal_processing_library/main/source/complex_fft.c

@ -1,150 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_ComplexFFT().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define CFFTSFT 14
#define CFFTRND 1
#define CFFTRND2 16384
int WebRtcSpl_ComplexFFT(WebRtc_Word16 frfi[], int stages, int mode)
{
int i, j, l, k, istep, n, m;
WebRtc_Word16 wr, wi;
WebRtc_Word32 tr32, ti32, qr32, qi32;
/* The 1024-value is a constant given from the size of WebRtcSpl_kSinTable1024[],
* and should not be changed depending on the input parameter 'stages'
*/
n = 1 << stages;
if (n > 1024)
return -1;
l = 1;
k = 10 - 1; /* Constant for given WebRtcSpl_kSinTable1024[]. Do not change
depending on the input parameter 'stages' */
if (mode == 0)
{
// mode==0: Low-complexity and Low-accuracy mode
while (l < n)
{
istep = l << 1;
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = -WebRtcSpl_kSinTable1024[j];
for (i = m; i < n; i += istep)
{
j = i + l;
tr32 = WEBRTC_SPL_RSHIFT_W32((WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j])
- WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j + 1])), 15);
ti32 = WEBRTC_SPL_RSHIFT_W32((WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j + 1])
+ WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j])), 15);
qr32 = (WebRtc_Word32)frfi[2 * i];
qi32 = (WebRtc_Word32)frfi[2 * i + 1];
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 - tr32, 1);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 - ti32, 1);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 + tr32, 1);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 + ti32, 1);
}
}
--k;
l = istep;
}
} else
{
// mode==1: High-complexity and High-accuracy mode
while (l < n)
{
istep = l << 1;
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = -WebRtcSpl_kSinTable1024[j];
#ifdef WEBRTC_ARCH_ARM_V7A
WebRtc_Word32 wri;
WebRtc_Word32 frfi_r;
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(wri) :
"r"((WebRtc_Word32)wr), "r"((WebRtc_Word32)wi));
#endif
for (i = m; i < n; i += istep)
{
j = i + l;
#ifdef WEBRTC_ARCH_ARM_V7A
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(frfi_r) :
"r"((WebRtc_Word32)frfi[2*j]), "r"((WebRtc_Word32)frfi[2*j +1]));
__asm__("smlsd %0, %1, %2, %3" : "=r"(tr32) :
"r"(wri), "r"(frfi_r), "r"(CFFTRND));
__asm__("smladx %0, %1, %2, %3" : "=r"(ti32) :
"r"(wri), "r"(frfi_r), "r"(CFFTRND));
#else
tr32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j])
- WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j + 1]) + CFFTRND;
ti32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j + 1])
+ WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j]) + CFFTRND;
#endif
tr32 = WEBRTC_SPL_RSHIFT_W32(tr32, 15 - CFFTSFT);
ti32 = WEBRTC_SPL_RSHIFT_W32(ti32, 15 - CFFTSFT);
qr32 = ((WebRtc_Word32)frfi[2 * i]) << CFFTSFT;
qi32 = ((WebRtc_Word32)frfi[2 * i + 1]) << CFFTSFT;
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qr32 - tr32 + CFFTRND2), 1 + CFFTSFT);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 - ti32 + CFFTRND2), 1 + CFFTSFT);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qr32 + tr32 + CFFTRND2), 1 + CFFTSFT);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 + ti32 + CFFTRND2), 1 + CFFTSFT);
}
}
--k;
l = istep;
}
}
return 0;
}

src/common_audio/signal_processing_library/main/source/complex_ifft.c

@ -1,161 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_ComplexIFFT().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define CIFFTSFT 14
#define CIFFTRND 1
int WebRtcSpl_ComplexIFFT(WebRtc_Word16 frfi[], int stages, int mode)
{
int i, j, l, k, istep, n, m, scale, shift;
WebRtc_Word16 wr, wi;
WebRtc_Word32 tr32, ti32, qr32, qi32;
WebRtc_Word32 tmp32, round2;
/* The 1024-value is a constant given from the size of WebRtcSpl_kSinTable1024[],
* and should not be changed depending on the input parameter 'stages'
*/
n = 1 << stages;
if (n > 1024)
return -1;
scale = 0;
l = 1;
k = 10 - 1; /* Constant for given WebRtcSpl_kSinTable1024[]. Do not change
depending on the input parameter 'stages' */
while (l < n)
{
// variable scaling, depending upon data
shift = 0;
round2 = 8192;
tmp32 = (WebRtc_Word32)WebRtcSpl_MaxAbsValueW16(frfi, 2 * n);
if (tmp32 > 13573)
{
shift++;
scale++;
round2 <<= 1;
}
if (tmp32 > 27146)
{
shift++;
scale++;
round2 <<= 1;
}
istep = l << 1;
if (mode == 0)
{
// mode==0: Low-complexity and Low-accuracy mode
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = WebRtcSpl_kSinTable1024[j];
for (i = m; i < n; i += istep)
{
j = i + l;
tr32 = WEBRTC_SPL_RSHIFT_W32((WEBRTC_SPL_MUL_16_16_RSFT(wr, frfi[2 * j], 0)
- WEBRTC_SPL_MUL_16_16_RSFT(wi, frfi[2 * j + 1], 0)), 15);
ti32 = WEBRTC_SPL_RSHIFT_W32(
(WEBRTC_SPL_MUL_16_16_RSFT(wr, frfi[2 * j + 1], 0)
+ WEBRTC_SPL_MUL_16_16_RSFT(wi,frfi[2*j],0)), 15);
qr32 = (WebRtc_Word32)frfi[2 * i];
qi32 = (WebRtc_Word32)frfi[2 * i + 1];
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 - tr32, shift);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 - ti32, shift);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 + tr32, shift);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 + ti32, shift);
}
}
} else
{
// mode==1: High-complexity and High-accuracy mode
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = WebRtcSpl_kSinTable1024[j];
#ifdef WEBRTC_ARCH_ARM_V7A
WebRtc_Word32 wri;
WebRtc_Word32 frfi_r;
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(wri) :
"r"((WebRtc_Word32)wr), "r"((WebRtc_Word32)wi));
#endif
for (i = m; i < n; i += istep)
{
j = i + l;
#ifdef WEBRTC_ARCH_ARM_V7A
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(frfi_r) :
"r"((WebRtc_Word32)frfi[2*j]), "r"((WebRtc_Word32)frfi[2*j +1]));
__asm__("smlsd %0, %1, %2, %3" : "=r"(tr32) :
"r"(wri), "r"(frfi_r), "r"(CIFFTRND));
__asm__("smladx %0, %1, %2, %3" : "=r"(ti32) :
"r"(wri), "r"(frfi_r), "r"(CIFFTRND));
#else
tr32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j])
- WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j + 1]) + CIFFTRND;
ti32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j + 1])
+ WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j]) + CIFFTRND;
#endif
tr32 = WEBRTC_SPL_RSHIFT_W32(tr32, 15 - CIFFTSFT);
ti32 = WEBRTC_SPL_RSHIFT_W32(ti32, 15 - CIFFTSFT);
qr32 = ((WebRtc_Word32)frfi[2 * i]) << CIFFTSFT;
qi32 = ((WebRtc_Word32)frfi[2 * i + 1]) << CIFFTSFT;
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((qr32 - tr32+round2),
shift+CIFFTSFT);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 - ti32 + round2), shift + CIFFTSFT);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((qr32 + tr32 + round2),
shift + CIFFTSFT);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 + ti32 + round2), shift + CIFFTSFT);
}
}
}
--k;
l = istep;
}
return scale;
}

src/common_audio/signal_processing_library/main/source/copy_set_operations.c

@ -1,108 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the implementation of functions
* WebRtcSpl_MemSetW16()
* WebRtcSpl_MemSetW32()
* WebRtcSpl_MemCpyReversedOrder()
* WebRtcSpl_CopyFromEndW16()
* WebRtcSpl_ZerosArrayW16()
* WebRtcSpl_ZerosArrayW32()
* WebRtcSpl_OnesArrayW16()
* WebRtcSpl_OnesArrayW32()
*
* The description header can be found in signal_processing_library.h
*
*/
#include <string.h>
#include "signal_processing_library.h"
void WebRtcSpl_MemSetW16(WebRtc_Word16 *ptr, WebRtc_Word16 set_value, int length)
{
int j;
WebRtc_Word16 *arrptr = ptr;
for (j = length; j > 0; j--)
{
*arrptr++ = set_value;
}
}
void WebRtcSpl_MemSetW32(WebRtc_Word32 *ptr, WebRtc_Word32 set_value, int length)
{
int j;
WebRtc_Word32 *arrptr = ptr;
for (j = length; j > 0; j--)
{
*arrptr++ = set_value;
}
}
void WebRtcSpl_MemCpyReversedOrder(WebRtc_Word16* dest, WebRtc_Word16* source, int length)
{
int j;
WebRtc_Word16* destPtr = dest;
WebRtc_Word16* sourcePtr = source;
for (j = 0; j < length; j++)
{
*destPtr-- = *sourcePtr++;
}
}
WebRtc_Word16 WebRtcSpl_CopyFromEndW16(G_CONST WebRtc_Word16 *vector_in,
WebRtc_Word16 length,
WebRtc_Word16 samples,
WebRtc_Word16 *vector_out)
{
// Copy the last <samples> of the input vector to vector_out
WEBRTC_SPL_MEMCPY_W16(vector_out, &vector_in[length - samples], samples);
return samples;
}
WebRtc_Word16 WebRtcSpl_ZerosArrayW16(WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtcSpl_MemSetW16(vector, 0, length);
return length;
}
WebRtc_Word16 WebRtcSpl_ZerosArrayW32(WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtcSpl_MemSetW32(vector, 0, length);
return length;
}
WebRtc_Word16 WebRtcSpl_OnesArrayW16(WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 i;
WebRtc_Word16 *tmpvec = vector;
for (i = 0; i < length; i++)
{
*tmpvec++ = 1;
}
return length;
}
WebRtc_Word16 WebRtcSpl_OnesArrayW32(WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 i;
WebRtc_Word32 *tmpvec = vector;
for (i = 0; i < length; i++)
{
*tmpvec++ = 1;
}
return length;
}

src/common_audio/signal_processing_library/main/source/cos_table.c

@ -1,60 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the 360 degree cos table.
*
*/
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_kCosTable[] = {
8192, 8190, 8187, 8180, 8172, 8160, 8147, 8130, 8112,
8091, 8067, 8041, 8012, 7982, 7948, 7912, 7874, 7834,
7791, 7745, 7697, 7647, 7595, 7540, 7483, 7424, 7362,
7299, 7233, 7164, 7094, 7021, 6947, 6870, 6791, 6710,
6627, 6542, 6455, 6366, 6275, 6182, 6087, 5991, 5892,
5792, 5690, 5586, 5481, 5374, 5265, 5155, 5043, 4930,
4815, 4698, 4580, 4461, 4341, 4219, 4096, 3971, 3845,
3719, 3591, 3462, 3331, 3200, 3068, 2935, 2801, 2667,
2531, 2395, 2258, 2120, 1981, 1842, 1703, 1563, 1422,
1281, 1140, 998, 856, 713, 571, 428, 285, 142,
0, -142, -285, -428, -571, -713, -856, -998, -1140,
-1281, -1422, -1563, -1703, -1842, -1981, -2120, -2258, -2395,
-2531, -2667, -2801, -2935, -3068, -3200, -3331, -3462, -3591,
-3719, -3845, -3971, -4095, -4219, -4341, -4461, -4580, -4698,
-4815, -4930, -5043, -5155, -5265, -5374, -5481, -5586, -5690,
-5792, -5892, -5991, -6087, -6182, -6275, -6366, -6455, -6542,
-6627, -6710, -6791, -6870, -6947, -7021, -7094, -7164, -7233,
-7299, -7362, -7424, -7483, -7540, -7595, -7647, -7697, -7745,
-7791, -7834, -7874, -7912, -7948, -7982, -8012, -8041, -8067,
-8091, -8112, -8130, -8147, -8160, -8172, -8180, -8187, -8190,
-8191, -8190, -8187, -8180, -8172, -8160, -8147, -8130, -8112,
-8091, -8067, -8041, -8012, -7982, -7948, -7912, -7874, -7834,
-7791, -7745, -7697, -7647, -7595, -7540, -7483, -7424, -7362,
-7299, -7233, -7164, -7094, -7021, -6947, -6870, -6791, -6710,
-6627, -6542, -6455, -6366, -6275, -6182, -6087, -5991, -5892,
-5792, -5690, -5586, -5481, -5374, -5265, -5155, -5043, -4930,
-4815, -4698, -4580, -4461, -4341, -4219, -4096, -3971, -3845,
-3719, -3591, -3462, -3331, -3200, -3068, -2935, -2801, -2667,
-2531, -2395, -2258, -2120, -1981, -1842, -1703, -1563, -1422,
-1281, -1140, -998, -856, -713, -571, -428, -285, -142,
0, 142, 285, 428, 571, 713, 856, 998, 1140,
1281, 1422, 1563, 1703, 1842, 1981, 2120, 2258, 2395,
2531, 2667, 2801, 2935, 3068, 3200, 3331, 3462, 3591,
3719, 3845, 3971, 4095, 4219, 4341, 4461, 4580, 4698,
4815, 4930, 5043, 5155, 5265, 5374, 5481, 5586, 5690,
5792, 5892, 5991, 6087, 6182, 6275, 6366, 6455, 6542,
6627, 6710, 6791, 6870, 6947, 7021, 7094, 7164, 7233,
7299, 7362, 7424, 7483, 7540, 7595, 7647, 7697, 7745,
7791, 7834, 7874, 7912, 7948, 7982, 8012, 8041, 8067,
8091, 8112, 8130, 8147, 8160, 8172, 8180, 8187, 8190
};

src/common_audio/signal_processing_library/main/source/cross_correlation.c

@ -1,267 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_CrossCorrelation().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_CrossCorrelation(WebRtc_Word32* cross_correlation, WebRtc_Word16* seq1,
WebRtc_Word16* seq2, WebRtc_Word16 dim_seq,
WebRtc_Word16 dim_cross_correlation,
WebRtc_Word16 right_shifts,
WebRtc_Word16 step_seq2)
{
int i, j;
WebRtc_Word16* seq1Ptr;
WebRtc_Word16* seq2Ptr;
WebRtc_Word32* CrossCorrPtr;
#ifdef _XSCALE_OPT_
#ifdef _WIN32
#pragma message("NOTE: _XSCALE_OPT_ optimizations are used (overrides _ARM_OPT_ and requires /QRxscale compiler flag)")
#endif
__int64 macc40;
int iseq1[250];
int iseq2[250];
int iseq3[250];
int * iseq1Ptr;
int * iseq2Ptr;
int * iseq3Ptr;
int len, i_len;
seq1Ptr = seq1;
iseq1Ptr = iseq1;
for(i = 0; i < ((dim_seq + 1) >> 1); i++)
{
*iseq1Ptr = (unsigned short)*seq1Ptr++;
*iseq1Ptr++ |= (WebRtc_Word32)*seq1Ptr++ << 16;
}
if(dim_seq%2)
{
*(iseq1Ptr-1) &= 0x0000ffff;
}
*iseq1Ptr = 0;
iseq1Ptr++;
*iseq1Ptr = 0;
iseq1Ptr++;
*iseq1Ptr = 0;
if(step_seq2 < 0)
{
seq2Ptr = seq2 - dim_cross_correlation + 1;
CrossCorrPtr = &cross_correlation[dim_cross_correlation - 1];
}
else
{
seq2Ptr = seq2;
CrossCorrPtr = cross_correlation;
}
len = dim_seq + dim_cross_correlation - 1;
i_len = (len + 1) >> 1;
iseq2Ptr = iseq2;
iseq3Ptr = iseq3;
for(i = 0; i < i_len; i++)
{
*iseq2Ptr = (unsigned short)*seq2Ptr++;
*iseq3Ptr = (unsigned short)*seq2Ptr;
*iseq2Ptr++ |= (WebRtc_Word32)*seq2Ptr++ << 16;
*iseq3Ptr++ |= (WebRtc_Word32)*seq2Ptr << 16;
}
if(len % 2)
{
iseq2[i_len - 1] &= 0x0000ffff;
iseq3[i_len - 1] = 0;
}
else
iseq3[i_len - 1] &= 0x0000ffff;
iseq2[i_len] = 0;
iseq3[i_len] = 0;
iseq2[i_len + 1] = 0;
iseq3[i_len + 1] = 0;
iseq2[i_len + 2] = 0;
iseq3[i_len + 2] = 0;
// Set pointer to start value
iseq2Ptr = iseq2;
iseq3Ptr = iseq3;
i_len = (dim_seq + 7) >> 3;
for (i = 0; i < dim_cross_correlation; i++)
{
iseq1Ptr = iseq1;
macc40 = 0;
_WriteCoProcessor(macc40, 0);
if((i & 1))
{
iseq3Ptr = iseq3 + (i >> 1);
for (j = i_len; j > 0; j--)
{
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
}
}
else
{
iseq2Ptr = iseq2 + (i >> 1);
for (j = i_len; j > 0; j--)
{
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
}
}
macc40 = _ReadCoProcessor(0);
*CrossCorrPtr = (WebRtc_Word32)(macc40 >> right_shifts);
CrossCorrPtr += step_seq2;
}
#else // #ifdef _XSCALE_OPT_
#ifdef _ARM_OPT_
WebRtc_Word16 dim_seq8 = (dim_seq >> 3) << 3;
#endif
CrossCorrPtr = cross_correlation;
for (i = 0; i < dim_cross_correlation; i++)
{
// Set the pointer to the static vector, set the pointer to the sliding vector
// and initialize cross_correlation
seq1Ptr = seq1;
seq2Ptr = seq2 + (step_seq2 * i);
(*CrossCorrPtr) = 0;
#ifndef _ARM_OPT_
#ifdef _WIN32
#pragma message("NOTE: default implementation is used")
#endif
// Perform the cross correlation
for (j = 0; j < dim_seq; j++)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr), right_shifts);
seq1Ptr++;
seq2Ptr++;
}
#else
#ifdef _WIN32
#pragma message("NOTE: _ARM_OPT_ optimizations are used")
#endif
if (right_shifts == 0)
{
// Perform the optimized cross correlation
for (j = 0; j < dim_seq8; j = j + 8)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
}
for (j = dim_seq8; j < dim_seq; j++)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
}
}
else // right_shifts != 0
{
// Perform the optimized cross correlation
for (j = 0; j < dim_seq8; j = j + 8)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
}
for (j = dim_seq8; j < dim_seq; j++)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
}
}
#endif
CrossCorrPtr++;
}
#endif
}

src/common_audio/signal_processing_library/main/source/division_operations.c

@ -1,144 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the divisions
* WebRtcSpl_DivU32U16()
* WebRtcSpl_DivW32W16()
* WebRtcSpl_DivW32W16ResW16()
* WebRtcSpl_DivResultInQ31()
* WebRtcSpl_DivW32HiLow()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
WebRtc_UWord32 WebRtcSpl_DivU32U16(WebRtc_UWord32 num, WebRtc_UWord16 den)
{
// Guard against division with 0
if (den != 0)
{
return (WebRtc_UWord32)(num / den);
} else
{
return (WebRtc_UWord32)0xFFFFFFFF;
}
}
WebRtc_Word32 WebRtcSpl_DivW32W16(WebRtc_Word32 num, WebRtc_Word16 den)
{
// Guard against division with 0
if (den != 0)
{
return (WebRtc_Word32)(num / den);
} else
{
return (WebRtc_Word32)0x7FFFFFFF;
}
}
WebRtc_Word16 WebRtcSpl_DivW32W16ResW16(WebRtc_Word32 num, WebRtc_Word16 den)
{
// Guard against division with 0
if (den != 0)
{
return (WebRtc_Word16)(num / den);
} else
{
return (WebRtc_Word16)0x7FFF;
}
}
WebRtc_Word32 WebRtcSpl_DivResultInQ31(WebRtc_Word32 num, WebRtc_Word32 den)
{
WebRtc_Word32 L_num = num;
WebRtc_Word32 L_den = den;
WebRtc_Word32 div = 0;
int k = 31;
int change_sign = 0;
if (num == 0)
return 0;
if (num < 0)
{
change_sign++;
L_num = -num;
}
if (den < 0)
{
change_sign++;
L_den = -den;
}
while (k--)
{
div <<= 1;
L_num <<= 1;
if (L_num >= L_den)
{
L_num -= L_den;
div++;
}
}
if (change_sign == 1)
{
div = -div;
}
return div;
}
WebRtc_Word32 WebRtcSpl_DivW32HiLow(WebRtc_Word32 num, WebRtc_Word16 den_hi,
WebRtc_Word16 den_low)
{
WebRtc_Word16 approx, tmp_hi, tmp_low, num_hi, num_low;
WebRtc_Word32 tmpW32;
approx = (WebRtc_Word16)WebRtcSpl_DivW32W16((WebRtc_Word32)0x1FFFFFFF, den_hi);
// result in Q14 (Note: 3FFFFFFF = 0.5 in Q30)
// tmpW32 = 1/den = approx * (2.0 - den * approx) (in Q30)
tmpW32 = (WEBRTC_SPL_MUL_16_16(den_hi, approx) << 1)
+ ((WEBRTC_SPL_MUL_16_16(den_low, approx) >> 15) << 1);
// tmpW32 = den * approx
tmpW32 = (WebRtc_Word32)0x7fffffffL - tmpW32; // result in Q30 (tmpW32 = 2.0-(den*approx))
// Store tmpW32 in hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmpW32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((tmpW32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// tmpW32 = 1/den in Q29
tmpW32 = ((WEBRTC_SPL_MUL_16_16(tmp_hi, approx) + (WEBRTC_SPL_MUL_16_16(tmp_low, approx)
>> 15)) << 1);
// 1/den in hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmpW32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((tmpW32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// Store num in hi and low format
num_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(num, 16);
num_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((num
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)num_hi, 16)), 1);
// num * (1/den) by 32 bit multiplication (result in Q28)
tmpW32 = (WEBRTC_SPL_MUL_16_16(num_hi, tmp_hi) + (WEBRTC_SPL_MUL_16_16(num_hi, tmp_low)
>> 15) + (WEBRTC_SPL_MUL_16_16(num_low, tmp_hi) >> 15));
// Put result in Q31 (convert from Q28)
tmpW32 = WEBRTC_SPL_LSHIFT_W32(tmpW32, 3);
return tmpW32;
}

src/common_audio/signal_processing_library/main/source/dot_product_with_scale.c

@ -1,91 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_DotProductWithScale().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
WebRtc_Word32 WebRtcSpl_DotProductWithScale(WebRtc_Word16 *vector1, WebRtc_Word16 *vector2,
int length, int scaling)
{
WebRtc_Word32 sum;
int i;
#ifdef _ARM_OPT_
#pragma message("NOTE: _ARM_OPT_ optimizations are used")
WebRtc_Word16 len4 = (length >> 2) << 2;
#endif
sum = 0;
#ifndef _ARM_OPT_
for (i = 0; i < length; i++)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1++, *vector2++, scaling);
}
#else
if (scaling == 0)
{
for (i = 0; i < len4; i = i + 4)
{
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
}
for (i = len4; i < length; i++)
{
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
}
}
else
{
for (i = 0; i < len4; i = i + 4)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
}
for (i = len4; i < length; i++)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
}
}
#endif
return sum;
}

src/common_audio/signal_processing_library/main/source/downsample_fast.c

@ -1,59 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_DownsampleFast().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
int WebRtcSpl_DownsampleFast(WebRtc_Word16 *in_ptr, WebRtc_Word16 in_length,
WebRtc_Word16 *out_ptr, WebRtc_Word16 out_length,
WebRtc_Word16 *B, WebRtc_Word16 B_length, WebRtc_Word16 factor,
WebRtc_Word16 delay)
{
WebRtc_Word32 o;
int i, j;
WebRtc_Word16 *downsampled_ptr = out_ptr;
WebRtc_Word16 *b_ptr;
WebRtc_Word16 *x_ptr;
WebRtc_Word16 endpos = delay
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16(factor, (out_length - 1)) + 1;
if (in_length < endpos)
{
return -1;
}
for (i = delay; i < endpos; i += factor)
{
b_ptr = &B[0];
x_ptr = &in_ptr[i];
o = (WebRtc_Word32)2048; // Round val
for (j = 0; j < B_length; j++)
{
o += WEBRTC_SPL_MUL_16_16(*b_ptr++, *x_ptr--);
}
o = WEBRTC_SPL_RSHIFT_W32(o, 12);
// If output is higher than 32768, saturate it. Same with negative side
*downsampled_ptr++ = WebRtcSpl_SatW32ToW16(o);
}
return 0;
}

src/common_audio/signal_processing_library/main/source/filter_ar.c

@ -1,89 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_FilterAR().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
int WebRtcSpl_FilterAR(G_CONST WebRtc_Word16* a,
int a_length,
G_CONST WebRtc_Word16* x,
int x_length,
WebRtc_Word16* state,
int state_length,
WebRtc_Word16* state_low,
int state_low_length,
WebRtc_Word16* filtered,
WebRtc_Word16* filtered_low,
int filtered_low_length)
{
WebRtc_Word32 o;
WebRtc_Word32 oLOW;
int i, j, stop;
G_CONST WebRtc_Word16* x_ptr = &x[0];
WebRtc_Word16* filteredFINAL_ptr = filtered;
WebRtc_Word16* filteredFINAL_LOW_ptr = filtered_low;
for (i = 0; i < x_length; i++)
{
// Calculate filtered[i] and filtered_low[i]
G_CONST WebRtc_Word16* a_ptr = &a[1];
WebRtc_Word16* filtered_ptr = &filtered[i - 1];
WebRtc_Word16* filtered_low_ptr = &filtered_low[i - 1];
WebRtc_Word16* state_ptr = &state[state_length - 1];
WebRtc_Word16* state_low_ptr = &state_low[state_length - 1];
o = (WebRtc_Word32)(*x_ptr++) << 12;
oLOW = (WebRtc_Word32)0;
stop = (i < a_length) ? i + 1 : a_length;
for (j = 1; j < stop; j++)
{
o -= WEBRTC_SPL_MUL_16_16(*a_ptr, *filtered_ptr--);
oLOW -= WEBRTC_SPL_MUL_16_16(*a_ptr++, *filtered_low_ptr--);
}
for (j = i + 1; j < a_length; j++)
{
o -= WEBRTC_SPL_MUL_16_16(*a_ptr, *state_ptr--);
oLOW -= WEBRTC_SPL_MUL_16_16(*a_ptr++, *state_low_ptr--);
}
o += (oLOW >> 12);
*filteredFINAL_ptr = (WebRtc_Word16)((o + (WebRtc_Word32)2048) >> 12);
*filteredFINAL_LOW_ptr++ = (WebRtc_Word16)(o - ((WebRtc_Word32)(*filteredFINAL_ptr++)
<< 12));
}
// Save the filter state
if (x_length >= state_length)
{
WebRtcSpl_CopyFromEndW16(filtered, x_length, a_length - 1, state);
WebRtcSpl_CopyFromEndW16(filtered_low, x_length, a_length - 1, state_low);
} else
{
for (i = 0; i < state_length - x_length; i++)
{
state[i] = state[i + x_length];
state_low[i] = state_low[i + x_length];
}
for (i = 0; i < x_length; i++)
{
state[state_length - x_length + i] = filtered[i];
state_low[state_length - x_length + i] = filtered_low[i];
}
}
return x_length;
}
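WebRtcSpl_FilterAR evaluates the all-pole recursion filtered[n] = x[n] - sum_{k>=1} a[k]*filtered[n-k] with Q12 coefficients, carrying an extra low word per sample (filtered_low, state_low) to preserve precision. A floating-point reference of the bare recursion, ignoring the hi/lo bookkeeping and the saved state (history before n = 0 is simply taken as zero here), may make the structure easier to see:

/* Direct-form all-pole filter: y[n] = x[n] - sum_{k=1}^{order} a[k] * y[n-k].
 * a[0] is assumed to be 1 and is not used; history before n = 0 is zero here,
 * whereas the fixed-point routine above carries it in state/state_low. */
void filter_ar_float(const double *a, int order,
                     const double *x, int x_length, double *y)
{
    for (int n = 0; n < x_length; n++) {
        double acc = x[n];
        for (int k = 1; k <= order && k <= n; k++)
            acc -= a[k] * y[n - k];
        y[n] = acc;
    }
}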


@@ -1,49 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_FilterARFastQ12().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_FilterARFastQ12(WebRtc_Word16 *in, WebRtc_Word16 *out, WebRtc_Word16 *A,
WebRtc_Word16 A_length, WebRtc_Word16 length)
{
WebRtc_Word32 o;
int i, j;
WebRtc_Word16 *x_ptr = &in[0];
WebRtc_Word16 *filtered_ptr = &out[0];
for (i = 0; i < length; i++)
{
// Calculate filtered[i]
G_CONST WebRtc_Word16 *a_ptr = &A[0];
WebRtc_Word16 *state_ptr = &out[i - 1];
o = WEBRTC_SPL_MUL_16_16(*x_ptr++, *a_ptr++);
for (j = 1; j < A_length; j++)
{
o -= WEBRTC_SPL_MUL_16_16(*a_ptr++,*state_ptr--);
}
// Saturate the output
o = WEBRTC_SPL_SAT((WebRtc_Word32)134215679, o, (WebRtc_Word32)-134217728);
*filtered_ptr++ = (WebRtc_Word16)((o + (WebRtc_Word32)2048) >> 12);
}
return;
}
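WebRtcSpl_FilterARFastQ12 is the single-precision variant: everything stays in Q12, the accumulator is clamped before rounding back to Q0, and the output buffer itself serves as the filter history. A standalone sketch (with the simplification that history before index 0 is zero, whereas the routine above reads whatever samples precede out[0]):

#include <stdint.h>

/* Q12 all-pole filter as in WebRtcSpl_FilterARFastQ12: A[0] (typically 4096)
 * scales the input, A[1..] are the denominator coefficients subtracted in the
 * loop. History before index 0 is treated as zero here for simplicity. */
void filter_ar_fast_q12(const int16_t *in, int16_t *out,
                        const int16_t *A, int A_length, int length)
{
    for (int i = 0; i < length; i++) {
        int32_t acc = (int32_t)in[i] * A[0];
        for (int j = 1; j < A_length && j <= i; j++)
            acc -= (int32_t)A[j] * out[i - j];
        /* Clamp to the same asymmetric bounds used above, then round to Q0. */
        if (acc > 134215679) acc = 134215679;
        if (acc < -134217728) acc = -134217728;
        out[i] = (int16_t)((acc + 2048) >> 12);
    }
}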


@@ -1,41 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_GetHanningWindow().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_GetHanningWindow(WebRtc_Word16 *v, WebRtc_Word16 size)
{
int jj;
WebRtc_Word16 *vptr1;
WebRtc_Word32 index;
WebRtc_Word32 factor = ((WebRtc_Word32)0x40000000);
factor = WebRtcSpl_DivW32W16(factor, size);
if (size < 513)
index = (WebRtc_Word32)-0x200000;
else
index = (WebRtc_Word32)-0x100000;
vptr1 = v;
for (jj = 0; jj < size; jj++)
{
index += factor;
(*vptr1++) = WebRtcSpl_kHanningTable[index >> 22];
}
}
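The routine fills v with the rising half of a Hanning window in Q14 by stepping through the table WebRtcSpl_kHanningTable at a stride derived from 'size'. A floating-point reference of roughly what it returns (the table's exact sampling and rounding may differ slightly):

#include <math.h>
#include <stdint.h>

/* Floating-point counterpart: the rising half of a Hanning window, scaled to
 * Q14 (16384 represents 1.0). The table lookup above returns approximately
 * these values; its exact sampling and rounding may differ. */
void get_hanning_window_ref(int16_t *v, int size)
{
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < size; i++) {
        double w = 0.5 - 0.5 * cos(pi * (double)i / (double)size);
        v[i] = (int16_t)(16384.0 * w + 0.5);
    }
}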


@@ -1,120 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the iLBC specific functions
* WebRtcSpl_ScaleAndAddVectorsWithRound()
* WebRtcSpl_ReverseOrderMultArrayElements()
* WebRtcSpl_ElementwiseVectorMult()
* WebRtcSpl_AddVectorsAndShift()
* WebRtcSpl_AddAffineVectorToVector()
* WebRtcSpl_AffineTransformVector()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_ScaleAndAddVectorsWithRound(WebRtc_Word16 *vector1, WebRtc_Word16 scale1,
WebRtc_Word16 *vector2, WebRtc_Word16 scale2,
WebRtc_Word16 right_shifts, WebRtc_Word16 *out,
WebRtc_Word16 vector_length)
{
int i;
WebRtc_Word16 roundVal;
roundVal = 1 << right_shifts;
roundVal = roundVal >> 1;
for (i = 0; i < vector_length; i++)
{
out[i] = (WebRtc_Word16)((WEBRTC_SPL_MUL_16_16(vector1[i], scale1)
+ WEBRTC_SPL_MUL_16_16(vector2[i], scale2) + roundVal) >> right_shifts);
}
}
void WebRtcSpl_ReverseOrderMultArrayElements(WebRtc_Word16 *out, G_CONST WebRtc_Word16 *in,
G_CONST WebRtc_Word16 *win,
WebRtc_Word16 vector_length,
WebRtc_Word16 right_shifts)
{
int i;
WebRtc_Word16 *outptr = out;
G_CONST WebRtc_Word16 *inptr = in;
G_CONST WebRtc_Word16 *winptr = win;
for (i = 0; i < vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(*inptr++,
*winptr--, right_shifts);
}
}
void WebRtcSpl_ElementwiseVectorMult(WebRtc_Word16 *out, G_CONST WebRtc_Word16 *in,
G_CONST WebRtc_Word16 *win, WebRtc_Word16 vector_length,
WebRtc_Word16 right_shifts)
{
int i;
WebRtc_Word16 *outptr = out;
G_CONST WebRtc_Word16 *inptr = in;
G_CONST WebRtc_Word16 *winptr = win;
for (i = 0; i < vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(*inptr++,
*winptr++, right_shifts);
}
}
void WebRtcSpl_AddVectorsAndShift(WebRtc_Word16 *out, G_CONST WebRtc_Word16 *in1,
G_CONST WebRtc_Word16 *in2, WebRtc_Word16 vector_length,
WebRtc_Word16 right_shifts)
{
int i;
WebRtc_Word16 *outptr = out;
G_CONST WebRtc_Word16 *in1ptr = in1;
G_CONST WebRtc_Word16 *in2ptr = in2;
for (i = vector_length; i > 0; i--)
{
(*outptr++) = (WebRtc_Word16)(((*in1ptr++) + (*in2ptr++)) >> right_shifts);
}
}
void WebRtcSpl_AddAffineVectorToVector(WebRtc_Word16 *out, WebRtc_Word16 *in,
WebRtc_Word16 gain, WebRtc_Word32 add_constant,
WebRtc_Word16 right_shifts, int vector_length)
{
WebRtc_Word16 *inPtr;
WebRtc_Word16 *outPtr;
int i;
inPtr = in;
outPtr = out;
for (i = 0; i < vector_length; i++)
{
(*outPtr++) += (WebRtc_Word16)((WEBRTC_SPL_MUL_16_16((*inPtr++), gain)
+ (WebRtc_Word32)add_constant) >> right_shifts);
}
}
void WebRtcSpl_AffineTransformVector(WebRtc_Word16 *out, WebRtc_Word16 *in,
WebRtc_Word16 gain, WebRtc_Word32 add_constant,
WebRtc_Word16 right_shifts, int vector_length)
{
WebRtc_Word16 *inPtr;
WebRtc_Word16 *outPtr;
int i;
inPtr = in;
outPtr = out;
for (i = 0; i < vector_length; i++)
{
(*outPtr++) = (WebRtc_Word16)((WEBRTC_SPL_MUL_16_16((*inPtr++), gain)
+ (WebRtc_Word32)add_constant) >> right_shifts);
}
}
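All of these helpers reduce to "multiply by a Q-scaled gain, add, shift right". A standalone illustration of the last two, WebRtcSpl_AffineTransformVector and WebRtcSpl_AddAffineVectorToVector, with plain types and illustrative names (no saturation, matching the originals):

#include <stdint.h>

/* out[i] = (in[i] * gain + add_constant) >> right_shifts
 * (what WebRtcSpl_AffineTransformVector computes, without saturation). */
void affine_transform_vector(int16_t *out, const int16_t *in,
                             int16_t gain, int32_t add_constant,
                             int right_shifts, int length)
{
    for (int i = 0; i < length; i++)
        out[i] = (int16_t)(((int32_t)in[i] * gain + add_constant) >> right_shifts);
}

/* Same expression, but accumulated into the existing output
 * (WebRtcSpl_AddAffineVectorToVector). */
void add_affine_vector_to_vector(int16_t *out, const int16_t *in,
                                 int16_t gain, int32_t add_constant,
                                 int right_shifts, int length)
{
    for (int i = 0; i < length; i++)
        out[i] += (int16_t)(((int32_t)in[i] * gain + add_constant) >> right_shifts);
}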


@@ -1,259 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_LevinsonDurbin().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define SPL_LEVINSON_MAXORDER 20
WebRtc_Word16 WebRtcSpl_LevinsonDurbin(WebRtc_Word32 *R, WebRtc_Word16 *A, WebRtc_Word16 *K,
WebRtc_Word16 order)
{
WebRtc_Word16 i, j;
// Auto-correlation coefficients in high precision
WebRtc_Word16 R_hi[SPL_LEVINSON_MAXORDER + 1], R_low[SPL_LEVINSON_MAXORDER + 1];
// LPC coefficients in high precision
WebRtc_Word16 A_hi[SPL_LEVINSON_MAXORDER + 1], A_low[SPL_LEVINSON_MAXORDER + 1];
// LPC coefficients for next iteration
WebRtc_Word16 A_upd_hi[SPL_LEVINSON_MAXORDER + 1], A_upd_low[SPL_LEVINSON_MAXORDER + 1];
// Reflection coefficient in high precision
WebRtc_Word16 K_hi, K_low;
// Prediction gain Alpha in high precision and with scale factor
WebRtc_Word16 Alpha_hi, Alpha_low, Alpha_exp;
WebRtc_Word16 tmp_hi, tmp_low;
WebRtc_Word32 temp1W32, temp2W32, temp3W32;
WebRtc_Word16 norm;
// Normalize the autocorrelation R[0]...R[order]
norm = WebRtcSpl_NormW32(R[0]);
for (i = order; i >= 0; i--)
{
temp1W32 = WEBRTC_SPL_LSHIFT_W32(R[i], norm);
// Put R in hi and low format
R_hi[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
R_low[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_hi[i], 16)), 1);
}
// K = A[1] = -R[1] / R[0]
temp2W32 = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_hi[1],16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_low[1],1); // R[1] in Q31
temp3W32 = WEBRTC_SPL_ABS_W32(temp2W32); // abs R[1]
temp1W32 = WebRtcSpl_DivW32HiLow(temp3W32, R_hi[0], R_low[0]); // abs(R[1])/R[0] in Q31
// Apply the sign so that temp1W32 = -R[1]/R[0]
if (temp2W32 > 0)
{
temp1W32 = -temp1W32;
}
// Put K in hi and low format
K_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
K_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)K_hi, 16)), 1);
// Store first reflection coefficient
K[0] = K_hi;
temp1W32 = WEBRTC_SPL_RSHIFT_W32(temp1W32, 4); // A[1] in Q27
// Put A[1] in hi and low format
A_hi[1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
A_low[1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_hi[1], 16)), 1);
// Alpha = R[0] * (1-K^2)
temp1W32 = (((WEBRTC_SPL_MUL_16_16(K_hi, K_low) >> 14) + WEBRTC_SPL_MUL_16_16(K_hi, K_hi))
<< 1); // temp1W32 = k^2 in Q31
temp1W32 = WEBRTC_SPL_ABS_W32(temp1W32); // Guard against <0
temp1W32 = (WebRtc_Word32)0x7fffffffL - temp1W32; // temp1W32 = (1 - K[0]*K[0]) in Q31
// Store temp1W32 = 1 - K[0]*K[0] on hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// Calculate Alpha in Q31
temp1W32 = ((WEBRTC_SPL_MUL_16_16(R_hi[0], tmp_hi)
+ (WEBRTC_SPL_MUL_16_16(R_hi[0], tmp_low) >> 15)
+ (WEBRTC_SPL_MUL_16_16(R_low[0], tmp_hi) >> 15)) << 1);
// Normalize Alpha and put it in hi and low format
Alpha_exp = WebRtcSpl_NormW32(temp1W32);
temp1W32 = WEBRTC_SPL_LSHIFT_W32(temp1W32, Alpha_exp);
Alpha_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
Alpha_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)Alpha_hi, 16)), 1);
// Perform the iterative calculations in the Levinson-Durbin algorithm
for (i = 2; i <= order; i++)
{
/* temp1W32 = R[i] + sum_{j=1..i-1} R[j]*A[i-j] */
temp1W32 = 0;
for (j = 1; j < i; j++)
{
// temp1W32 is in Q31
temp1W32 += ((WEBRTC_SPL_MUL_16_16(R_hi[j], A_hi[i-j]) << 1)
+ (((WEBRTC_SPL_MUL_16_16(R_hi[j], A_low[i-j]) >> 15)
+ (WEBRTC_SPL_MUL_16_16(R_low[j], A_hi[i-j]) >> 15)) << 1));
}
temp1W32 = WEBRTC_SPL_LSHIFT_W32(temp1W32, 4);
temp1W32 += (WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_hi[i], 16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_low[i], 1));
// K = -temp1W32 / Alpha
temp2W32 = WEBRTC_SPL_ABS_W32(temp1W32); // abs(temp1W32)
temp3W32 = WebRtcSpl_DivW32HiLow(temp2W32, Alpha_hi, Alpha_low); // abs(temp1W32)/Alpha
// Apply the sign so that temp3W32 = -temp1W32/Alpha
if (temp1W32 > 0)
{
temp3W32 = -temp3W32;
}
// Use the Alpha shifts from earlier to de-normalize
norm = WebRtcSpl_NormW32(temp3W32);
if ((Alpha_exp <= norm) || (temp3W32 == 0))
{
temp3W32 = WEBRTC_SPL_LSHIFT_W32(temp3W32, Alpha_exp);
} else
{
if (temp3W32 > 0)
{
temp3W32 = (WebRtc_Word32)0x7fffffffL;
} else
{
temp3W32 = (WebRtc_Word32)0x80000000L;
}
}
// Put K on hi and low format
K_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp3W32, 16);
K_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp3W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)K_hi, 16)), 1);
// Store Reflection coefficient in Q15
K[i - 1] = K_hi;
// Test for unstable filter.
// If unstable return 0 and let the user decide what to do in that case
if ((WebRtc_Word32)WEBRTC_SPL_ABS_W16(K_hi) > (WebRtc_Word32)32750)
{
return 0; // Unstable filter
}
/*
Compute updated LPC coefficient: Anew[i]
Anew[j]= A[j] + K*A[i-j] for j=1..i-1
Anew[i]= K
*/
for (j = 1; j < i; j++)
{
// temp1W32 = A[j] in Q27
temp1W32 = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_hi[j],16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_low[j],1);
// temp1W32 += K*A[i-j] in Q27
temp1W32 += ((WEBRTC_SPL_MUL_16_16(K_hi, A_hi[i-j])
+ (WEBRTC_SPL_MUL_16_16(K_hi, A_low[i-j]) >> 15)
+ (WEBRTC_SPL_MUL_16_16(K_low, A_hi[i-j]) >> 15)) << 1);
// Put Anew in hi and low format
A_upd_hi[j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
A_upd_low[j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_upd_hi[j], 16)), 1);
}
// temp3W32 = K in Q27 (Convert from Q31 to Q27)
temp3W32 = WEBRTC_SPL_RSHIFT_W32(temp3W32, 4);
// Store Anew in hi and low format
A_upd_hi[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp3W32, 16);
A_upd_low[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp3W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_upd_hi[i], 16)), 1);
// Alpha = Alpha * (1-K^2)
temp1W32 = (((WEBRTC_SPL_MUL_16_16(K_hi, K_low) >> 14)
+ WEBRTC_SPL_MUL_16_16(K_hi, K_hi)) << 1); // K*K in Q31
temp1W32 = WEBRTC_SPL_ABS_W32(temp1W32); // Guard against <0
temp1W32 = (WebRtc_Word32)0x7fffffffL - temp1W32; // 1 - K*K in Q31
// Convert 1- K^2 in hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// Calculate Alpha = Alpha * (1-K^2) in Q31
temp1W32 = ((WEBRTC_SPL_MUL_16_16(Alpha_hi, tmp_hi)
+ (WEBRTC_SPL_MUL_16_16(Alpha_hi, tmp_low) >> 15)
+ (WEBRTC_SPL_MUL_16_16(Alpha_low, tmp_hi) >> 15)) << 1);
// Normalize Alpha and store it on hi and low format
norm = WebRtcSpl_NormW32(temp1W32);
temp1W32 = WEBRTC_SPL_LSHIFT_W32(temp1W32, norm);
Alpha_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
Alpha_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)Alpha_hi, 16)), 1);
// Update the total normalization of Alpha
Alpha_exp = Alpha_exp + norm;
// Update A[]
for (j = 1; j <= i; j++)
{
A_hi[j] = A_upd_hi[j];
A_low[j] = A_upd_low[j];
}
}
/*
Set A[0] to 1.0 and store the A[i] i=1...order in Q12
(Convert from Q27 and use rounding)
*/
A[0] = 4096;
for (i = 1; i <= order; i++)
{
// temp1W32 in Q27
temp1W32 = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_hi[i], 16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_low[i], 1);
// Round and store upper word
A[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32<<1)+(WebRtc_Word32)32768, 16);
}
return 1; // Stable filters
}
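The routine above keeps every intermediate in split hi/lo 16-bit words so the recursion effectively runs in Q31. A compact floating-point reference of the same Levinson-Durbin recursion, without the Q-format bookkeeping, returning 0 for an unstable filter much like the original (the instability threshold here is simply |k| >= 1, and the names are illustrative):

#include <math.h>

/* Floating-point Levinson-Durbin: from autocorrelation R[0..order], compute
 * LPC coefficients A[0..order] (A[0] = 1) and reflection coefficients
 * K[0..order-1]. Returns 1 when stable, 0 otherwise. Order is capped at 20,
 * mirroring SPL_LEVINSON_MAXORDER above. */
enum { LEVINSON_MAX_ORDER = 20 };

int levinson_durbin_ref(const double *R, double *A, double *K, int order)
{
    double A_new[LEVINSON_MAX_ORDER + 1];
    double alpha = R[0];

    if (order < 1 || order > LEVINSON_MAX_ORDER || alpha <= 0.0)
        return 0;

    A[0] = 1.0;
    for (int i = 1; i <= order; i++) {
        /* acc = R[i] + sum_{j=1..i-1} R[j]*A[i-j], the same sum as above */
        double acc = R[i];
        for (int j = 1; j < i; j++)
            acc += R[j] * A[i - j];

        double k = -acc / alpha;
        if (fabs(k) >= 1.0)
            return 0;                 /* unstable filter */
        K[i - 1] = k;

        /* Anew[j] = A[j] + k*A[i-j] for j = 1..i-1, Anew[i] = k */
        for (int j = 1; j < i; j++)
            A_new[j] = A[j] + k * A[i - j];
        for (int j = 1; j < i; j++)
            A[j] = A_new[j];
        A[i] = k;

        alpha *= (1.0 - k * k);       /* Alpha = Alpha * (1 - K^2) */
    }
    return 1;
}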


@@ -1,265 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the implementation of functions
* WebRtcSpl_MaxAbsValueW16()
* WebRtcSpl_MaxAbsIndexW16()
* WebRtcSpl_MaxAbsValueW32()
* WebRtcSpl_MaxValueW16()
* WebRtcSpl_MaxIndexW16()
* WebRtcSpl_MaxValueW32()
* WebRtcSpl_MaxIndexW32()
* WebRtcSpl_MinValueW16()
* WebRtcSpl_MinIndexW16()
* WebRtcSpl_MinValueW32()
* WebRtcSpl_MinIndexW32()
*
* The description header can be found in signal_processing_library.h.
*
*/
#include "signal_processing_library.h"
#if !(defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON))
// Maximum absolute value of word16 vector.
WebRtc_Word16 WebRtcSpl_MaxAbsValueW16(const WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMax = 0;
WebRtc_Word32 absVal;
WebRtc_Word16 totMax;
int i;
G_CONST WebRtc_Word16 *tmpvector = vector;
for (i = 0; i < length; i++)
{
absVal = WEBRTC_SPL_ABS_W32((*tmpvector));
if (absVal > tempMax)
{
tempMax = absVal;
}
tmpvector++;
}
totMax = (WebRtc_Word16)WEBRTC_SPL_MIN(tempMax, WEBRTC_SPL_WORD16_MAX);
return totMax;
}
#endif
// Index of maximum absolute value in a word16 vector.
WebRtc_Word16 WebRtcSpl_MaxAbsIndexW16(G_CONST WebRtc_Word16* vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMax;
WebRtc_Word16 absTemp;
WebRtc_Word16 tempMaxIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word16 *tmpvector = vector;
tempMax = WEBRTC_SPL_ABS_W16(*tmpvector);
tmpvector++;
for (i = 1; i < length; i++)
{
absTemp = WEBRTC_SPL_ABS_W16(*tmpvector);
tmpvector++;
if (absTemp > tempMax)
{
tempMax = absTemp;
tempMaxIndex = i;
}
}
return tempMaxIndex;
}
// Maximum absolute value of word32 vector.
WebRtc_Word32 WebRtcSpl_MaxAbsValueW32(G_CONST WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtc_UWord32 tempMax = 0;
WebRtc_UWord32 absVal;
WebRtc_Word32 retval;
int i;
G_CONST WebRtc_Word32 *tmpvector = vector;
for (i = 0; i < length; i++)
{
absVal = WEBRTC_SPL_ABS_W32((*tmpvector));
if (absVal > tempMax)
{
tempMax = absVal;
}
tmpvector++;
}
retval = (WebRtc_Word32)(WEBRTC_SPL_MIN(tempMax, WEBRTC_SPL_WORD32_MAX));
return retval;
}
// Maximum value of word16 vector.
#ifndef XSCALE_OPT
WebRtc_Word16 WebRtcSpl_MaxValueW16(G_CONST WebRtc_Word16* vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMax;
WebRtc_Word16 i;
G_CONST WebRtc_Word16 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
tempMax = vector[i];
}
return tempMax;
}
#else
#pragma message(">> WebRtcSpl_MaxValueW16 is excluded from this build")
#endif
// Index of maximum value in a word16 vector.
WebRtc_Word16 WebRtcSpl_MaxIndexW16(G_CONST WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMax;
WebRtc_Word16 tempMaxIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word16 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
{
tempMax = vector[i];
tempMaxIndex = i;
}
}
return tempMaxIndex;
}
// Maximum value of word32 vector.
#ifndef XSCALE_OPT
WebRtc_Word32 WebRtcSpl_MaxValueW32(G_CONST WebRtc_Word32* vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMax;
WebRtc_Word16 i;
G_CONST WebRtc_Word32 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
tempMax = vector[i];
}
return tempMax;
}
#else
#pragma message(">> WebRtcSpl_MaxValueW32 is excluded from this build")
#endif
// Index of maximum value in a word32 vector.
WebRtc_Word16 WebRtcSpl_MaxIndexW32(G_CONST WebRtc_Word32* vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMax;
WebRtc_Word16 tempMaxIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word32 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
{
tempMax = vector[i];
tempMaxIndex = i;
}
}
return tempMaxIndex;
}
// Minimum value of word16 vector.
WebRtc_Word16 WebRtcSpl_MinValueW16(G_CONST WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMin;
WebRtc_Word16 i;
G_CONST WebRtc_Word16 *tmpvector = vector;
// Find the minimum value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
tempMin = (vector[i]);
}
return tempMin;
}
// Index of minimum value in a word16 vector.
#ifndef XSCALE_OPT
WebRtc_Word16 WebRtcSpl_MinIndexW16(G_CONST WebRtc_Word16* vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMin;
WebRtc_Word16 tempMinIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word16* tmpvector = vector;
// Find index of smallest value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
{
tempMin = vector[i];
tempMinIndex = i;
}
}
return tempMinIndex;
}
#else
#pragma message(">> WebRtcSpl_MinIndexW16 is excluded from this build")
#endif
// Minimum value of word32 vector.
WebRtc_Word32 WebRtcSpl_MinValueW32(G_CONST WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMin;
WebRtc_Word16 i;
G_CONST WebRtc_Word32 *tmpvector = vector;
// Find the minimum value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
tempMin = (vector[i]);
}
return tempMin;
}
// Index of minimum value in a word32 vector.
#ifndef XSCALE_OPT
WebRtc_Word16 WebRtcSpl_MinIndexW32(G_CONST WebRtc_Word32* vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMin;
WebRtc_Word16 tempMinIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word32 *tmpvector = vector;
// Find index of smallest value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
{
tempMin = vector[i];
tempMinIndex = i;
}
}
return tempMinIndex;
}
#else
#pragma message(">> WebRtcSpl_MinIndexW32 is excluded from this build")
#endif
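These are straightforward scans; the only subtlety is that abs(-32768) does not fit in a 16-bit word, which is why the W16 max-abs variant accumulates in 32 bits and clamps to 32767 at the end. A small standalone check of that behaviour (illustrative name, plain types, not the library API):

#include <stdint.h>
#include <stdio.h>

/* Portable reference for the "maximum absolute value" operation above:
 * accumulate the absolute value in 32 bits and clamp to 32767 at the end,
 * exactly as the library function does. */
static int16_t max_abs_value_w16(const int16_t *v, int length)
{
    int32_t max = 0;
    for (int i = 0; i < length; i++) {
        int32_t a = v[i] < 0 ? -(int32_t)v[i] : v[i];
        if (a > max) max = a;
    }
    return (int16_t)(max > 32767 ? 32767 : max);
}

int main(void)
{
    int16_t v[5] = {12, -32768, 7, 900, -901};
    printf("%d\n", max_abs_value_w16(v, 5));  /* prints 32767 (clamped) */
    return 0;
}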


@@ -1,47 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#if (defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON))
#include <arm_neon.h>
#include "signal_processing_library.h"
// Maximum absolute value of word16 vector.
WebRtc_Word16 WebRtcSpl_MaxAbsValueW16(const WebRtc_Word16* vector,
WebRtc_Word16 length) {
WebRtc_Word32 temp_max = 0;
WebRtc_Word32 abs_val;
WebRtc_Word16 tot_max;
int i;
__asm__("vmov.i16 d25, #0" : : : "d25");
for (i = 0; i < length - 7; i += 8) {
__asm__("vld1.16 {d26, d27}, [%0]" : : "r"(&vector[i]) : "q13");
__asm__("vabs.s16 q13, q13" : : : "q13");
__asm__("vpmax.s16 d26, d27" : : : "q13");
__asm__("vpmax.s16 d25, d26" : : : "d25", "d26");
}
__asm__("vpmax.s16 d25, d25" : : : "d25");
__asm__("vpmax.s16 d25, d25" : : : "d25");
__asm__("vmov.s16 %0, d25[0]" : "=r"(temp_max): : "d25");
for (; i < length; i++) {
abs_val = WEBRTC_SPL_ABS_W32((vector[i]));
if (abs_val > temp_max) {
temp_max = abs_val;
}
}
tot_max = (WebRtc_Word16)WEBRTC_SPL_MIN(temp_max, WEBRTC_SPL_WORD16_MAX);
return tot_max;
}
#endif
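The inline assembly keeps a running pairwise maximum in d25 and folds it down after the main loop, with a scalar tail for the last few samples. A sketch of the same approach using NEON intrinsics instead of inline assembly (not the library's code; as in the assembly, vabs leaves -32768 unchanged rather than treating it specially):

#if defined(__ARM_NEON) || defined(__ARM_NEON__)
#include <arm_neon.h>
#include <stdint.h>

/* Max absolute value over int16 samples with NEON intrinsics: vector abs,
 * running vector max, horizontal reduction, scalar tail, final clamp. */
int16_t max_abs_value_w16_neon(const int16_t *v, int length)
{
    int i = 0;
    int16x8_t vmax8 = vdupq_n_s16(0);
    for (; i + 8 <= length; i += 8)
        vmax8 = vmaxq_s16(vmax8, vabsq_s16(vld1q_s16(&v[i])));

    /* Reduce the 8 lanes to a single value. */
    int16x4_t vmax4 = vmax_s16(vget_low_s16(vmax8), vget_high_s16(vmax8));
    vmax4 = vpmax_s16(vmax4, vmax4);
    vmax4 = vpmax_s16(vmax4, vmax4);
    int32_t max = vget_lane_s16(vmax4, 0);

    /* Scalar tail for the remaining 0-7 samples. */
    for (; i < length; i++) {
        int32_t a = v[i] < 0 ? -(int32_t)v[i] : v[i];
        if (a > max) max = a;
    }
    return (int16_t)(max > 32767 ? 32767 : max);
}
#endif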


@@ -1,52 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the randomization functions
* WebRtcSpl_IncreaseSeed()
* WebRtcSpl_RandU()
* WebRtcSpl_RandN()
* WebRtcSpl_RandUArray()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
WebRtc_UWord32 WebRtcSpl_IncreaseSeed(WebRtc_UWord32 *seed)
{
seed[0] = (seed[0] * ((WebRtc_Word32)69069) + 1) & (WEBRTC_SPL_MAX_SEED_USED - 1);
return seed[0];
}
WebRtc_Word16 WebRtcSpl_RandU(WebRtc_UWord32 *seed)
{
return (WebRtc_Word16)(WebRtcSpl_IncreaseSeed(seed) >> 16);
}
WebRtc_Word16 WebRtcSpl_RandN(WebRtc_UWord32 *seed)
{
return WebRtcSpl_kRandNTable[WebRtcSpl_IncreaseSeed(seed) >> 23];
}
// Creates an array of uniformly distributed variables
WebRtc_Word16 WebRtcSpl_RandUArray(WebRtc_Word16* vector,
WebRtc_Word16 vector_length,
WebRtc_UWord32* seed)
{
int i;
for (i = 0; i < vector_length; i++)
{
vector[i] = WebRtcSpl_RandU(seed);
}
return vector_length;
}
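The generator is a 69069 linear congruential generator whose state is masked to stay below WEBRTC_SPL_MAX_SEED_USED; RandU returns the top bits of the updated seed and RandN maps them through a precomputed table of normally distributed values. A standalone sketch of the uniform part, assuming WEBRTC_SPL_MAX_SEED_USED is 0x80000000 (the mask value here is an assumption, and the names are illustrative):

#include <stdint.h>
#include <stdio.h>

/* 69069 linear congruential generator, masked to 31 bits
 * (WEBRTC_SPL_MAX_SEED_USED is assumed to be 0x80000000 here). */
static uint32_t increase_seed(uint32_t *seed)
{
    *seed = (*seed * 69069u + 1u) & 0x7fffffffu;
    return *seed;
}

/* Uniform value: the top bits of the updated seed, as in WebRtcSpl_RandU. */
static int16_t rand_u(uint32_t *seed)
{
    return (int16_t)(increase_seed(seed) >> 16);
}

int main(void)
{
    uint32_t seed = 777;
    for (int i = 0; i < 5; i++)
        printf("%d\n", rand_u(&seed));
    return 0;
}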


@@ -1,47 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file contains some internal resampling functions.
*
*/
#ifndef WEBRTC_SPL_RESAMPLE_BY_2_INTERNAL_H_
#define WEBRTC_SPL_RESAMPLE_BY_2_INTERNAL_H_
#include "typedefs.h"
/*******************************************************************
* resample_by_2_fast.c
* Functions for internal use in the other resample functions
******************************************************************/
void WebRtcSpl_DownBy2IntToShort(WebRtc_Word32 *in, WebRtc_Word32 len, WebRtc_Word16 *out,
WebRtc_Word32 *state);
void WebRtcSpl_DownBy2ShortToInt(const WebRtc_Word16 *in, WebRtc_Word32 len,
WebRtc_Word32 *out, WebRtc_Word32 *state);
void WebRtcSpl_UpBy2ShortToInt(const WebRtc_Word16 *in, WebRtc_Word32 len,
WebRtc_Word32 *out, WebRtc_Word32 *state);
void WebRtcSpl_UpBy2IntToInt(const WebRtc_Word32 *in, WebRtc_Word32 len, WebRtc_Word32 *out,
WebRtc_Word32 *state);
void WebRtcSpl_UpBy2IntToShort(const WebRtc_Word32 *in, WebRtc_Word32 len,
WebRtc_Word16 *out, WebRtc_Word32 *state);
void WebRtcSpl_LPBy2ShortToInt(const WebRtc_Word16* in, WebRtc_Word32 len,
WebRtc_Word32* out, WebRtc_Word32* state);
void WebRtcSpl_LPBy2IntToInt(const WebRtc_Word32* in, WebRtc_Word32 len, WebRtc_Word32* out,
WebRtc_Word32* state);
#endif // WEBRTC_SPL_RESAMPLE_BY_2_INTERNAL_H_


@@ -1,60 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the 360 degree sine table.
*
*/
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_kSinTable[] = {
0, 142, 285, 428, 571, 713, 856, 998, 1140,
1281, 1422, 1563, 1703, 1842, 1981, 2120, 2258, 2395,
2531, 2667, 2801, 2935, 3068, 3200, 3331, 3462, 3591,
3719, 3845, 3971, 4095, 4219, 4341, 4461, 4580, 4698,
4815, 4930, 5043, 5155, 5265, 5374, 5481, 5586, 5690,
5792, 5892, 5991, 6087, 6182, 6275, 6366, 6455, 6542,
6627, 6710, 6791, 6870, 6947, 7021, 7094, 7164, 7233,
7299, 7362, 7424, 7483, 7540, 7595, 7647, 7697, 7745,
7791, 7834, 7874, 7912, 7948, 7982, 8012, 8041, 8067,
8091, 8112, 8130, 8147, 8160, 8172, 8180, 8187, 8190,
8191, 8190, 8187, 8180, 8172, 8160, 8147, 8130, 8112,
8091, 8067, 8041, 8012, 7982, 7948, 7912, 7874, 7834,
7791, 7745, 7697, 7647, 7595, 7540, 7483, 7424, 7362,
7299, 7233, 7164, 7094, 7021, 6947, 6870, 6791, 6710,
6627, 6542, 6455, 6366, 6275, 6182, 6087, 5991, 5892,
5792, 5690, 5586, 5481, 5374, 5265, 5155, 5043, 4930,
4815, 4698, 4580, 4461, 4341, 4219, 4096, 3971, 3845,
3719, 3591, 3462, 3331, 3200, 3068, 2935, 2801, 2667,
2531, 2395, 2258, 2120, 1981, 1842, 1703, 1563, 1422,
1281, 1140, 998, 856, 713, 571, 428, 285, 142,
0, -142, -285, -428, -571, -713, -856, -998, -1140,
-1281, -1422, -1563, -1703, -1842, -1981, -2120, -2258, -2395,
-2531, -2667, -2801, -2935, -3068, -3200, -3331, -3462, -3591,
-3719, -3845, -3971, -4095, -4219, -4341, -4461, -4580, -4698,
-4815, -4930, -5043, -5155, -5265, -5374, -5481, -5586, -5690,
-5792, -5892, -5991, -6087, -6182, -6275, -6366, -6455, -6542,
-6627, -6710, -6791, -6870, -6947, -7021, -7094, -7164, -7233,
-7299, -7362, -7424, -7483, -7540, -7595, -7647, -7697, -7745,
-7791, -7834, -7874, -7912, -7948, -7982, -8012, -8041, -8067,
-8091, -8112, -8130, -8147, -8160, -8172, -8180, -8187, -8190,
-8191, -8190, -8187, -8180, -8172, -8160, -8147, -8130, -8112,
-8091, -8067, -8041, -8012, -7982, -7948, -7912, -7874, -7834,
-7791, -7745, -7697, -7647, -7595, -7540, -7483, -7424, -7362,
-7299, -7233, -7164, -7094, -7021, -6947, -6870, -6791, -6710,
-6627, -6542, -6455, -6366, -6275, -6182, -6087, -5991, -5892,
-5792, -5690, -5586, -5481, -5374, -5265, -5155, -5043, -4930,
-4815, -4698, -4580, -4461, -4341, -4219, -4096, -3971, -3845,
-3719, -3591, -3462, -3331, -3200, -3068, -2935, -2801, -2667,
-2531, -2395, -2258, -2120, -1981, -1842, -1703, -1563, -1422,
-1281, -1140, -998, -856, -713, -571, -428, -285, -142
};


@@ -1,150 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the 1024 point sine table.
*
*/
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_kSinTable1024[] =
{
0, 201, 402, 603, 804, 1005, 1206, 1406,
1607, 1808, 2009, 2209, 2410, 2610, 2811, 3011,
3211, 3411, 3611, 3811, 4011, 4210, 4409, 4608,
4807, 5006, 5205, 5403, 5601, 5799, 5997, 6195,
6392, 6589, 6786, 6982, 7179, 7375, 7571, 7766,
7961, 8156, 8351, 8545, 8739, 8932, 9126, 9319,
9511, 9703, 9895, 10087, 10278, 10469, 10659, 10849,
11038, 11227, 11416, 11604, 11792, 11980, 12166, 12353,
12539, 12724, 12909, 13094, 13278, 13462, 13645, 13827,
14009, 14191, 14372, 14552, 14732, 14911, 15090, 15268,
15446, 15623, 15799, 15975, 16150, 16325, 16499, 16672,
16845, 17017, 17189, 17360, 17530, 17699, 17868, 18036,
18204, 18371, 18537, 18702, 18867, 19031, 19194, 19357,
19519, 19680, 19840, 20000, 20159, 20317, 20474, 20631,
20787, 20942, 21096, 21249, 21402, 21554, 21705, 21855,
22004, 22153, 22301, 22448, 22594, 22739, 22883, 23027,
23169, 23311, 23452, 23592, 23731, 23869, 24006, 24143,
24278, 24413, 24546, 24679, 24811, 24942, 25072, 25201,
25329, 25456, 25582, 25707, 25831, 25954, 26077, 26198,
26318, 26437, 26556, 26673, 26789, 26905, 27019, 27132,
27244, 27355, 27466, 27575, 27683, 27790, 27896, 28001,
28105, 28208, 28309, 28410, 28510, 28608, 28706, 28802,
28897, 28992, 29085, 29177, 29268, 29358, 29446, 29534,
29621, 29706, 29790, 29873, 29955, 30036, 30116, 30195,
30272, 30349, 30424, 30498, 30571, 30643, 30713, 30783,
30851, 30918, 30984, 31049,
31113, 31175, 31236, 31297,
31356, 31413, 31470, 31525, 31580, 31633, 31684, 31735,
31785, 31833, 31880, 31926, 31970, 32014, 32056, 32097,
32137, 32176, 32213, 32249, 32284, 32318, 32350, 32382,
32412, 32441, 32468, 32495, 32520, 32544, 32567, 32588,
32609, 32628, 32646, 32662, 32678, 32692, 32705, 32717,
32727, 32736, 32744, 32751, 32757, 32761, 32764, 32766,
32767, 32766, 32764, 32761, 32757, 32751, 32744, 32736,
32727, 32717, 32705, 32692, 32678, 32662, 32646, 32628,
32609, 32588, 32567, 32544, 32520, 32495, 32468, 32441,
32412, 32382, 32350, 32318, 32284, 32249, 32213, 32176,
32137, 32097, 32056, 32014, 31970, 31926, 31880, 31833,
31785, 31735, 31684, 31633, 31580, 31525, 31470, 31413,
31356, 31297, 31236, 31175, 31113, 31049, 30984, 30918,
30851, 30783, 30713, 30643, 30571, 30498, 30424, 30349,
30272, 30195, 30116, 30036, 29955, 29873, 29790, 29706,
29621, 29534, 29446, 29358, 29268, 29177, 29085, 28992,
28897, 28802, 28706, 28608, 28510, 28410, 28309, 28208,
28105, 28001, 27896, 27790, 27683, 27575, 27466, 27355,
27244, 27132, 27019, 26905, 26789, 26673, 26556, 26437,
26318, 26198, 26077, 25954, 25831, 25707, 25582, 25456,
25329, 25201, 25072, 24942, 24811, 24679, 24546, 24413,
24278, 24143, 24006, 23869, 23731, 23592, 23452, 23311,
23169, 23027, 22883, 22739, 22594, 22448, 22301, 22153,
22004, 21855, 21705, 21554, 21402, 21249, 21096, 20942,
20787, 20631, 20474, 20317, 20159, 20000, 19840, 19680,
19519, 19357, 19194, 19031, 18867, 18702, 18537, 18371,
18204, 18036, 17868, 17699, 17530, 17360, 17189, 17017,
16845, 16672, 16499, 16325, 16150, 15975, 15799, 15623,
15446, 15268, 15090, 14911, 14732, 14552, 14372, 14191,
14009, 13827, 13645, 13462, 13278, 13094, 12909, 12724,
12539, 12353, 12166, 11980, 11792, 11604, 11416, 11227,
11038, 10849, 10659, 10469, 10278, 10087, 9895, 9703,
9511, 9319, 9126, 8932, 8739, 8545, 8351, 8156,
7961, 7766, 7571, 7375, 7179, 6982, 6786, 6589,
6392, 6195, 5997, 5799, 5601, 5403, 5205, 5006,
4807, 4608, 4409, 4210, 4011, 3811, 3611, 3411,
3211, 3011, 2811, 2610, 2410, 2209, 2009, 1808,
1607, 1406, 1206, 1005, 804, 603, 402, 201,
0, -201, -402, -603, -804, -1005, -1206, -1406,
-1607, -1808, -2009, -2209, -2410, -2610, -2811, -3011,
-3211, -3411, -3611, -3811, -4011, -4210, -4409, -4608,
-4807, -5006, -5205, -5403, -5601, -5799, -5997, -6195,
-6392, -6589, -6786, -6982, -7179, -7375, -7571, -7766,
-7961, -8156, -8351, -8545, -8739, -8932, -9126, -9319,
-9511, -9703, -9895, -10087, -10278, -10469, -10659, -10849,
-11038, -11227, -11416, -11604, -11792, -11980, -12166, -12353,
-12539, -12724, -12909, -13094, -13278, -13462, -13645, -13827,
-14009, -14191, -14372, -14552, -14732, -14911, -15090, -15268,
-15446, -15623, -15799, -15975, -16150, -16325, -16499, -16672,
-16845, -17017, -17189, -17360, -17530, -17699, -17868, -18036,
-18204, -18371, -18537, -18702, -18867, -19031, -19194, -19357,
-19519, -19680, -19840, -20000, -20159, -20317, -20474, -20631,
-20787, -20942, -21096, -21249, -21402, -21554, -21705, -21855,
-22004, -22153, -22301, -22448, -22594, -22739, -22883, -23027,
-23169, -23311, -23452, -23592, -23731, -23869, -24006, -24143,
-24278, -24413, -24546, -24679, -24811, -24942, -25072, -25201,
-25329, -25456, -25582, -25707, -25831, -25954, -26077, -26198,
-26318, -26437, -26556, -26673, -26789, -26905, -27019, -27132,
-27244, -27355, -27466, -27575, -27683, -27790, -27896, -28001,
-28105, -28208, -28309, -28410, -28510, -28608, -28706, -28802,
-28897, -28992, -29085, -29177, -29268, -29358, -29446, -29534,
-29621, -29706, -29790, -29873, -29955, -30036, -30116, -30195,
-30272, -30349, -30424, -30498, -30571, -30643, -30713, -30783,
-30851, -30918, -30984, -31049, -31113, -31175, -31236, -31297,
-31356, -31413, -31470, -31525, -31580, -31633, -31684, -31735,
-31785, -31833, -31880, -31926, -31970, -32014, -32056, -32097,
-32137, -32176, -32213, -32249, -32284, -32318, -32350, -32382,
-32412, -32441, -32468, -32495, -32520, -32544, -32567, -32588,
-32609, -32628, -32646, -32662, -32678, -32692, -32705, -32717,
-32727, -32736, -32744, -32751, -32757, -32761, -32764, -32766,
-32767, -32766, -32764, -32761, -32757, -32751, -32744, -32736,
-32727, -32717, -32705, -32692, -32678, -32662, -32646, -32628,
-32609, -32588, -32567, -32544, -32520, -32495, -32468, -32441,
-32412, -32382, -32350, -32318, -32284, -32249, -32213, -32176,
-32137, -32097, -32056, -32014, -31970, -31926, -31880, -31833,
-31785, -31735, -31684, -31633, -31580, -31525, -31470, -31413,
-31356, -31297, -31236, -31175, -31113, -31049, -30984, -30918,
-30851, -30783, -30713, -30643, -30571, -30498, -30424, -30349,
-30272, -30195, -30116, -30036, -29955, -29873, -29790, -29706,
-29621, -29534, -29446, -29358, -29268, -29177, -29085, -28992,
-28897, -28802, -28706, -28608, -28510, -28410, -28309, -28208,
-28105, -28001, -27896, -27790, -27683, -27575, -27466, -27355,
-27244, -27132, -27019, -26905, -26789, -26673, -26556, -26437,
-26318, -26198, -26077, -25954, -25831, -25707, -25582, -25456,
-25329, -25201, -25072, -24942, -24811, -24679, -24546, -24413,
-24278, -24143, -24006, -23869, -23731, -23592, -23452, -23311,
-23169, -23027, -22883, -22739, -22594, -22448, -22301, -22153,
-22004, -21855, -21705, -21554, -21402, -21249, -21096, -20942,
-20787, -20631, -20474, -20317, -20159, -20000, -19840, -19680,
-19519, -19357, -19194, -19031, -18867, -18702, -18537, -18371,
-18204, -18036, -17868, -17699, -17530, -17360, -17189, -17017,
-16845, -16672, -16499, -16325, -16150, -15975, -15799, -15623,
-15446, -15268, -15090, -14911, -14732, -14552, -14372, -14191,
-14009, -13827, -13645, -13462, -13278, -13094, -12909, -12724,
-12539, -12353, -12166, -11980, -11792, -11604, -11416, -11227,
-11038, -10849, -10659, -10469, -10278, -10087, -9895, -9703,
-9511, -9319, -9126, -8932, -8739, -8545, -8351, -8156,
-7961, -7766, -7571, -7375, -7179, -6982, -6786, -6589,
-6392, -6195, -5997, -5799, -5601, -5403, -5205, -5006,
-4807, -4608, -4409, -4210, -4011, -3811, -3611, -3411,
-3211, -3011, -2811, -2610, -2410, -2209, -2009, -1808,
-1607, -1406, -1206, -1005, -804, -603, -402, -201,
};


@@ -1,73 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'spl',
'type': '<(library)',
'include_dirs': [
'../interface',
],
'direct_dependent_settings': {
'include_dirs': [
'../interface',
],
},
'sources': [
'../interface/signal_processing_library.h',
'../interface/spl_inl.h',
'auto_corr_to_refl_coef.c',
'auto_correlation.c',
'complex_fft.c',
'complex_ifft.c',
'complex_bit_reverse.c',
'copy_set_operations.c',
'cos_table.c',
'cross_correlation.c',
'division_operations.c',
'dot_product_with_scale.c',
'downsample_fast.c',
'energy.c',
'filter_ar.c',
'filter_ar_fast_q12.c',
'filter_ma_fast_q12.c',
'get_hanning_window.c',
'get_scaling_square.c',
'hanning_table.c',
'ilbc_specific_functions.c',
'levinson_durbin.c',
'lpc_to_refl_coef.c',
'min_max_operations.c',
'randn_table.c',
'randomization_functions.c',
'refl_coef_to_lpc.c',
'resample.c',
'resample_48khz.c',
'resample_by_2.c',
'resample_by_2_internal.c',
'resample_by_2_internal.h',
'resample_fractional.c',
'sin_table.c',
'sin_table_1024.c',
'spl_sqrt.c',
'spl_sqrt_floor.c',
'spl_version.c',
'splitting_filter.c',
'sqrt_of_one_minus_x_squared.c',
'vector_scaling_operations.c',
],
},
],
}
# Local Variables:
# tab-width:2
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=2 shiftwidth=2:


@@ -1,53 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_SqrtFloor().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define WEBRTC_SPL_SQRT_ITER(N) \
try1 = root + (1 << (N)); \
if (value >= try1 << (N)) \
{ \
value -= try1 << (N); \
root |= 2 << (N); \
}
// (out) Square root of input parameter
WebRtc_Word32 WebRtcSpl_SqrtFloor(WebRtc_Word32 value)
{
// new routine for performance, 4 cycles/bit in ARM
// output precision is 16 bits
WebRtc_Word32 root = 0, try1;
WEBRTC_SPL_SQRT_ITER (15);
WEBRTC_SPL_SQRT_ITER (14);
WEBRTC_SPL_SQRT_ITER (13);
WEBRTC_SPL_SQRT_ITER (12);
WEBRTC_SPL_SQRT_ITER (11);
WEBRTC_SPL_SQRT_ITER (10);
WEBRTC_SPL_SQRT_ITER ( 9);
WEBRTC_SPL_SQRT_ITER ( 8);
WEBRTC_SPL_SQRT_ITER ( 7);
WEBRTC_SPL_SQRT_ITER ( 6);
WEBRTC_SPL_SQRT_ITER ( 5);
WEBRTC_SPL_SQRT_ITER ( 4);
WEBRTC_SPL_SQRT_ITER ( 3);
WEBRTC_SPL_SQRT_ITER ( 2);
WEBRTC_SPL_SQRT_ITER ( 1);
WEBRTC_SPL_SQRT_ITER ( 0);
return root >> 1;
}
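Each WEBRTC_SPL_SQRT_ITER step resolves one bit of the result, so after sixteen steps root >> 1 should equal floor(sqrt(value)) for any non-negative 32-bit input. A portable brute-force reference for comparison (illustrative name, not part of the library):

#include <stdint.h>
#include <stdio.h>

/* Reference: floor(sqrt(v)) by simple search; the bit-by-bit routine above
 * should produce the same result for v >= 0, one result bit per iteration. */
static int32_t isqrt_ref(int32_t v)
{
    int32_t r = 0;
    while ((int64_t)(r + 1) * (r + 1) <= v)
        r++;
    return r;
}

int main(void)
{
    int32_t tests[] = {0, 1, 2, 3, 4, 15, 16, 1000000, 2147483647};
    for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
        printf("isqrt(%ld) = %ld\n", (long)tests[i], (long)isqrt_ref(tests[i]));
    return 0;
}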


@@ -1,25 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_get_version().
* The description header can be found in signal_processing_library.h
*
*/
#include <string.h>
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_get_version(char* version, WebRtc_Word16 length_in_bytes)
{
strncpy(version, "1.2.0", length_in_bytes);
return 0;
}


@@ -1,151 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the functions
* WebRtcSpl_VectorBitShiftW16()
* WebRtcSpl_VectorBitShiftW32()
* WebRtcSpl_VectorBitShiftW32ToW16()
* WebRtcSpl_ScaleVector()
* WebRtcSpl_ScaleVectorWithSat()
* WebRtcSpl_ScaleAndAddVectors()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_VectorBitShiftW16(WebRtc_Word16 *res,
WebRtc_Word16 length,
G_CONST WebRtc_Word16 *in,
WebRtc_Word16 right_shifts)
{
int i;
if (right_shifts > 0)
{
for (i = length; i > 0; i--)
{
(*res++) = ((*in++) >> right_shifts);
}
} else
{
for (i = length; i > 0; i--)
{
(*res++) = ((*in++) << (-right_shifts));
}
}
}
void WebRtcSpl_VectorBitShiftW32(WebRtc_Word32 *out_vector,
WebRtc_Word16 vector_length,
G_CONST WebRtc_Word32 *in_vector,
WebRtc_Word16 right_shifts)
{
int i;
if (right_shifts > 0)
{
for (i = vector_length; i > 0; i--)
{
(*out_vector++) = ((*in_vector++) >> right_shifts);
}
} else
{
for (i = vector_length; i > 0; i--)
{
(*out_vector++) = ((*in_vector++) << (-right_shifts));
}
}
}
void WebRtcSpl_VectorBitShiftW32ToW16(WebRtc_Word16 *res,
WebRtc_Word16 length,
G_CONST WebRtc_Word32 *in,
WebRtc_Word16 right_shifts)
{
int i;
if (right_shifts >= 0)
{
for (i = length; i > 0; i--)
{
(*res++) = (WebRtc_Word16)((*in++) >> right_shifts);
}
} else
{
WebRtc_Word16 left_shifts = -right_shifts;
for (i = length; i > 0; i--)
{
(*res++) = (WebRtc_Word16)((*in++) << left_shifts);
}
}
}
void WebRtcSpl_ScaleVector(G_CONST WebRtc_Word16 *in_vector, WebRtc_Word16 *out_vector,
WebRtc_Word16 gain, WebRtc_Word16 in_vector_length,
WebRtc_Word16 right_shifts)
{
// Performs vector operation: out_vector = (gain*in_vector)>>right_shifts
int i;
G_CONST WebRtc_Word16 *inptr;
WebRtc_Word16 *outptr;
inptr = in_vector;
outptr = out_vector;
for (i = 0; i < in_vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(*inptr++, gain, right_shifts);
}
}
void WebRtcSpl_ScaleVectorWithSat(G_CONST WebRtc_Word16 *in_vector, WebRtc_Word16 *out_vector,
WebRtc_Word16 gain, WebRtc_Word16 in_vector_length,
WebRtc_Word16 right_shifts)
{
// Performs vector operation: out_vector = (gain*in_vector)>>right_shifts
int i;
WebRtc_Word32 tmpW32;
G_CONST WebRtc_Word16 *inptr;
WebRtc_Word16 *outptr;
inptr = in_vector;
outptr = out_vector;
for (i = 0; i < in_vector_length; i++)
{
tmpW32 = WEBRTC_SPL_MUL_16_16_RSFT(*inptr++, gain, right_shifts);
(*outptr++) = WebRtcSpl_SatW32ToW16(tmpW32);
}
}
void WebRtcSpl_ScaleAndAddVectors(G_CONST WebRtc_Word16 *in1, WebRtc_Word16 gain1, int shift1,
G_CONST WebRtc_Word16 *in2, WebRtc_Word16 gain2, int shift2,
WebRtc_Word16 *out, int vector_length)
{
// Performs vector operation: out = (gain1*in1)>>shift1 + (gain2*in2)>>shift2
int i;
G_CONST WebRtc_Word16 *in1ptr;
G_CONST WebRtc_Word16 *in2ptr;
WebRtc_Word16 *outptr;
in1ptr = in1;
in2ptr = in2;
outptr = out;
for (i = 0; i < vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(gain1, *in1ptr++, shift1)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(gain2, *in2ptr++, shift2);
}
}
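WebRtcSpl_ScaleAndAddVectors combines two inputs, each scaled by a gain and shifted into range, with each term truncated to 16 bits before the add and no saturation on the sum. A standalone sketch with plain types and an illustrative name:

#include <stdint.h>

/* out[i] = ((gain1*in1[i]) >> shift1) + ((gain2*in2[i]) >> shift2),
 * each term truncated to 16 bits before the add, matching the function above
 * (no saturation is applied). */
void scale_and_add_vectors(const int16_t *in1, int16_t gain1, int shift1,
                           const int16_t *in2, int16_t gain2, int shift2,
                           int16_t *out, int length)
{
    for (int i = 0; i < length; i++) {
        int16_t t1 = (int16_t)(((int32_t)gain1 * in1[i]) >> shift1);
        int16_t t2 = (int16_t)(((int32_t)gain2 * in2[i]) >> shift2);
        out[i] = (int16_t)(t1 + t2);
    }
}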


@@ -1,704 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the Q14 radix-8 tables used in ARM9e optimizations.
*
*/
extern const int s_Q14S_8;
const int s_Q14S_8 = 1024;
extern const unsigned short t_Q14S_8[2032];
const unsigned short t_Q14S_8[2032] = {
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x22a3,0x187e ,0x3249,0x0c7c ,0x11a8,0x238e ,
0x0000,0x2d41 ,0x22a3,0x187e ,0xdd5d,0x3b21 ,
0xdd5d,0x3b21 ,0x11a8,0x238e ,0xb4be,0x3ec5 ,
0xc000,0x4000 ,0x0000,0x2d41 ,0xa57e,0x2d41 ,
0xac61,0x3b21 ,0xee58,0x3537 ,0xb4be,0x0c7c ,
0xa57e,0x2d41 ,0xdd5d,0x3b21 ,0xdd5d,0xe782 ,
0xac61,0x187e ,0xcdb7,0x3ec5 ,0x11a8,0xcac9 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x396b,0x0646 ,0x3cc8,0x0324 ,0x35eb,0x0964 ,
0x3249,0x0c7c ,0x396b,0x0646 ,0x2aaa,0x1294 ,
0x2aaa,0x1294 ,0x35eb,0x0964 ,0x1e7e,0x1b5d ,
0x22a3,0x187e ,0x3249,0x0c7c ,0x11a8,0x238e ,
0x1a46,0x1e2b ,0x2e88,0x0f8d ,0x0471,0x2afb ,
0x11a8,0x238e ,0x2aaa,0x1294 ,0xf721,0x3179 ,
0x08df,0x289a ,0x26b3,0x1590 ,0xea02,0x36e5 ,
0x0000,0x2d41 ,0x22a3,0x187e ,0xdd5d,0x3b21 ,
0xf721,0x3179 ,0x1e7e,0x1b5d ,0xd178,0x3e15 ,
0xee58,0x3537 ,0x1a46,0x1e2b ,0xc695,0x3fb1 ,
0xe5ba,0x3871 ,0x15fe,0x20e7 ,0xbcf0,0x3fec ,
0xdd5d,0x3b21 ,0x11a8,0x238e ,0xb4be,0x3ec5 ,
0xd556,0x3d3f ,0x0d48,0x2620 ,0xae2e,0x3c42 ,
0xcdb7,0x3ec5 ,0x08df,0x289a ,0xa963,0x3871 ,
0xc695,0x3fb1 ,0x0471,0x2afb ,0xa678,0x3368 ,
0xc000,0x4000 ,0x0000,0x2d41 ,0xa57e,0x2d41 ,
0xba09,0x3fb1 ,0xfb8f,0x2f6c ,0xa678,0x2620 ,
0xb4be,0x3ec5 ,0xf721,0x3179 ,0xa963,0x1e2b ,
0xb02d,0x3d3f ,0xf2b8,0x3368 ,0xae2e,0x1590 ,
0xac61,0x3b21 ,0xee58,0x3537 ,0xb4be,0x0c7c ,
0xa963,0x3871 ,0xea02,0x36e5 ,0xbcf0,0x0324 ,
0xa73b,0x3537 ,0xe5ba,0x3871 ,0xc695,0xf9ba ,
0xa5ed,0x3179 ,0xe182,0x39db ,0xd178,0xf073 ,
0xa57e,0x2d41 ,0xdd5d,0x3b21 ,0xdd5d,0xe782 ,
0xa5ed,0x289a ,0xd94d,0x3c42 ,0xea02,0xdf19 ,
0xa73b,0x238e ,0xd556,0x3d3f ,0xf721,0xd766 ,
0xa963,0x1e2b ,0xd178,0x3e15 ,0x0471,0xd094 ,
0xac61,0x187e ,0xcdb7,0x3ec5 ,0x11a8,0xcac9 ,
0xb02d,0x1294 ,0xca15,0x3f4f ,0x1e7e,0xc625 ,
0xb4be,0x0c7c ,0xc695,0x3fb1 ,0x2aaa,0xc2c1 ,
0xba09,0x0646 ,0xc338,0x3fec ,0x35eb,0xc0b1 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3e69,0x0192 ,0x3f36,0x00c9 ,0x3d9a,0x025b ,
0x3cc8,0x0324 ,0x3e69,0x0192 ,0x3b1e,0x04b5 ,
0x3b1e,0x04b5 ,0x3d9a,0x025b ,0x388e,0x070e ,
0x396b,0x0646 ,0x3cc8,0x0324 ,0x35eb,0x0964 ,
0x37af,0x07d6 ,0x3bf4,0x03ed ,0x3334,0x0bb7 ,
0x35eb,0x0964 ,0x3b1e,0x04b5 ,0x306c,0x0e06 ,
0x341e,0x0af1 ,0x3a46,0x057e ,0x2d93,0x1050 ,
0x3249,0x0c7c ,0x396b,0x0646 ,0x2aaa,0x1294 ,
0x306c,0x0e06 ,0x388e,0x070e ,0x27b3,0x14d2 ,
0x2e88,0x0f8d ,0x37af,0x07d6 ,0x24ae,0x1709 ,
0x2c9d,0x1112 ,0x36ce,0x089d ,0x219c,0x1937 ,
0x2aaa,0x1294 ,0x35eb,0x0964 ,0x1e7e,0x1b5d ,
0x28b2,0x1413 ,0x3505,0x0a2b ,0x1b56,0x1d79 ,
0x26b3,0x1590 ,0x341e,0x0af1 ,0x1824,0x1f8c ,
0x24ae,0x1709 ,0x3334,0x0bb7 ,0x14ea,0x2193 ,
0x22a3,0x187e ,0x3249,0x0c7c ,0x11a8,0x238e ,
0x2093,0x19ef ,0x315b,0x0d41 ,0x0e61,0x257e ,
0x1e7e,0x1b5d ,0x306c,0x0e06 ,0x0b14,0x2760 ,
0x1c64,0x1cc6 ,0x2f7b,0x0eca ,0x07c4,0x2935 ,
0x1a46,0x1e2b ,0x2e88,0x0f8d ,0x0471,0x2afb ,
0x1824,0x1f8c ,0x2d93,0x1050 ,0x011c,0x2cb2 ,
0x15fe,0x20e7 ,0x2c9d,0x1112 ,0xfdc7,0x2e5a ,
0x13d5,0x223d ,0x2ba4,0x11d3 ,0xfa73,0x2ff2 ,
0x11a8,0x238e ,0x2aaa,0x1294 ,0xf721,0x3179 ,
0x0f79,0x24da ,0x29af,0x1354 ,0xf3d2,0x32ef ,
0x0d48,0x2620 ,0x28b2,0x1413 ,0xf087,0x3453 ,
0x0b14,0x2760 ,0x27b3,0x14d2 ,0xed41,0x35a5 ,
0x08df,0x289a ,0x26b3,0x1590 ,0xea02,0x36e5 ,
0x06a9,0x29ce ,0x25b1,0x164c ,0xe6cb,0x3812 ,
0x0471,0x2afb ,0x24ae,0x1709 ,0xe39c,0x392b ,
0x0239,0x2c21 ,0x23a9,0x17c4 ,0xe077,0x3a30 ,
0x0000,0x2d41 ,0x22a3,0x187e ,0xdd5d,0x3b21 ,
0xfdc7,0x2e5a ,0x219c,0x1937 ,0xda4f,0x3bfd ,
0xfb8f,0x2f6c ,0x2093,0x19ef ,0xd74e,0x3cc5 ,
0xf957,0x3076 ,0x1f89,0x1aa7 ,0xd45c,0x3d78 ,
0xf721,0x3179 ,0x1e7e,0x1b5d ,0xd178,0x3e15 ,
0xf4ec,0x3274 ,0x1d72,0x1c12 ,0xcea5,0x3e9d ,
0xf2b8,0x3368 ,0x1c64,0x1cc6 ,0xcbe2,0x3f0f ,
0xf087,0x3453 ,0x1b56,0x1d79 ,0xc932,0x3f6b ,
0xee58,0x3537 ,0x1a46,0x1e2b ,0xc695,0x3fb1 ,
0xec2b,0x3612 ,0x1935,0x1edc ,0xc40c,0x3fe1 ,
0xea02,0x36e5 ,0x1824,0x1f8c ,0xc197,0x3ffb ,
0xe7dc,0x37b0 ,0x1711,0x203a ,0xbf38,0x3fff ,
0xe5ba,0x3871 ,0x15fe,0x20e7 ,0xbcf0,0x3fec ,
0xe39c,0x392b ,0x14ea,0x2193 ,0xbabf,0x3fc4 ,
0xe182,0x39db ,0x13d5,0x223d ,0xb8a6,0x3f85 ,
0xdf6d,0x3a82 ,0x12bf,0x22e7 ,0xb6a5,0x3f30 ,
0xdd5d,0x3b21 ,0x11a8,0x238e ,0xb4be,0x3ec5 ,
0xdb52,0x3bb6 ,0x1091,0x2435 ,0xb2f2,0x3e45 ,
0xd94d,0x3c42 ,0x0f79,0x24da ,0xb140,0x3daf ,
0xd74e,0x3cc5 ,0x0e61,0x257e ,0xafa9,0x3d03 ,
0xd556,0x3d3f ,0x0d48,0x2620 ,0xae2e,0x3c42 ,
0xd363,0x3daf ,0x0c2e,0x26c1 ,0xacd0,0x3b6d ,
0xd178,0x3e15 ,0x0b14,0x2760 ,0xab8e,0x3a82 ,
0xcf94,0x3e72 ,0x09fa,0x27fe ,0xaa6a,0x3984 ,
0xcdb7,0x3ec5 ,0x08df,0x289a ,0xa963,0x3871 ,
0xcbe2,0x3f0f ,0x07c4,0x2935 ,0xa87b,0x374b ,
0xca15,0x3f4f ,0x06a9,0x29ce ,0xa7b1,0x3612 ,
0xc851,0x3f85 ,0x058d,0x2a65 ,0xa705,0x34c6 ,
0xc695,0x3fb1 ,0x0471,0x2afb ,0xa678,0x3368 ,
0xc4e2,0x3fd4 ,0x0355,0x2b8f ,0xa60b,0x31f8 ,
0xc338,0x3fec ,0x0239,0x2c21 ,0xa5bc,0x3076 ,
0xc197,0x3ffb ,0x011c,0x2cb2 ,0xa58d,0x2ee4 ,
0xc000,0x4000 ,0x0000,0x2d41 ,0xa57e,0x2d41 ,
0xbe73,0x3ffb ,0xfee4,0x2dcf ,0xa58d,0x2b8f ,
0xbcf0,0x3fec ,0xfdc7,0x2e5a ,0xa5bc,0x29ce ,
0xbb77,0x3fd4 ,0xfcab,0x2ee4 ,0xa60b,0x27fe ,
0xba09,0x3fb1 ,0xfb8f,0x2f6c ,0xa678,0x2620 ,
0xb8a6,0x3f85 ,0xfa73,0x2ff2 ,0xa705,0x2435 ,
0xb74d,0x3f4f ,0xf957,0x3076 ,0xa7b1,0x223d ,
0xb600,0x3f0f ,0xf83c,0x30f9 ,0xa87b,0x203a ,
0xb4be,0x3ec5 ,0xf721,0x3179 ,0xa963,0x1e2b ,
0xb388,0x3e72 ,0xf606,0x31f8 ,0xaa6a,0x1c12 ,
0xb25e,0x3e15 ,0xf4ec,0x3274 ,0xab8e,0x19ef ,
0xb140,0x3daf ,0xf3d2,0x32ef ,0xacd0,0x17c4 ,
0xb02d,0x3d3f ,0xf2b8,0x3368 ,0xae2e,0x1590 ,
0xaf28,0x3cc5 ,0xf19f,0x33df ,0xafa9,0x1354 ,
0xae2e,0x3c42 ,0xf087,0x3453 ,0xb140,0x1112 ,
0xad41,0x3bb6 ,0xef6f,0x34c6 ,0xb2f2,0x0eca ,
0xac61,0x3b21 ,0xee58,0x3537 ,0xb4be,0x0c7c ,
0xab8e,0x3a82 ,0xed41,0x35a5 ,0xb6a5,0x0a2b ,
0xaac8,0x39db ,0xec2b,0x3612 ,0xb8a6,0x07d6 ,
0xaa0f,0x392b ,0xeb16,0x367d ,0xbabf,0x057e ,
0xa963,0x3871 ,0xea02,0x36e5 ,0xbcf0,0x0324 ,
0xa8c5,0x37b0 ,0xe8ef,0x374b ,0xbf38,0x00c9 ,
0xa834,0x36e5 ,0xe7dc,0x37b0 ,0xc197,0xfe6e ,
0xa7b1,0x3612 ,0xe6cb,0x3812 ,0xc40c,0xfc13 ,
0xa73b,0x3537 ,0xe5ba,0x3871 ,0xc695,0xf9ba ,
0xa6d3,0x3453 ,0xe4aa,0x38cf ,0xc932,0xf763 ,
0xa678,0x3368 ,0xe39c,0x392b ,0xcbe2,0xf50f ,
0xa62c,0x3274 ,0xe28e,0x3984 ,0xcea5,0xf2bf ,
0xa5ed,0x3179 ,0xe182,0x39db ,0xd178,0xf073 ,
0xa5bc,0x3076 ,0xe077,0x3a30 ,0xd45c,0xee2d ,
0xa599,0x2f6c ,0xdf6d,0x3a82 ,0xd74e,0xebed ,
0xa585,0x2e5a ,0xde64,0x3ad3 ,0xda4f,0xe9b4 ,
0xa57e,0x2d41 ,0xdd5d,0x3b21 ,0xdd5d,0xe782 ,
0xa585,0x2c21 ,0xdc57,0x3b6d ,0xe077,0xe559 ,
0xa599,0x2afb ,0xdb52,0x3bb6 ,0xe39c,0xe33a ,
0xa5bc,0x29ce ,0xda4f,0x3bfd ,0xe6cb,0xe124 ,
0xa5ed,0x289a ,0xd94d,0x3c42 ,0xea02,0xdf19 ,
0xa62c,0x2760 ,0xd84d,0x3c85 ,0xed41,0xdd19 ,
0xa678,0x2620 ,0xd74e,0x3cc5 ,0xf087,0xdb26 ,
0xa6d3,0x24da ,0xd651,0x3d03 ,0xf3d2,0xd93f ,
0xa73b,0x238e ,0xd556,0x3d3f ,0xf721,0xd766 ,
0xa7b1,0x223d ,0xd45c,0x3d78 ,0xfa73,0xd59b ,
0xa834,0x20e7 ,0xd363,0x3daf ,0xfdc7,0xd3df ,
0xa8c5,0x1f8c ,0xd26d,0x3de3 ,0x011c,0xd231 ,
0xa963,0x1e2b ,0xd178,0x3e15 ,0x0471,0xd094 ,
0xaa0f,0x1cc6 ,0xd085,0x3e45 ,0x07c4,0xcf07 ,
0xaac8,0x1b5d ,0xcf94,0x3e72 ,0x0b14,0xcd8c ,
0xab8e,0x19ef ,0xcea5,0x3e9d ,0x0e61,0xcc21 ,
0xac61,0x187e ,0xcdb7,0x3ec5 ,0x11a8,0xcac9 ,
0xad41,0x1709 ,0xcccc,0x3eeb ,0x14ea,0xc983 ,
0xae2e,0x1590 ,0xcbe2,0x3f0f ,0x1824,0xc850 ,
0xaf28,0x1413 ,0xcafb,0x3f30 ,0x1b56,0xc731 ,
0xb02d,0x1294 ,0xca15,0x3f4f ,0x1e7e,0xc625 ,
0xb140,0x1112 ,0xc932,0x3f6b ,0x219c,0xc52d ,
0xb25e,0x0f8d ,0xc851,0x3f85 ,0x24ae,0xc44a ,
0xb388,0x0e06 ,0xc772,0x3f9c ,0x27b3,0xc37b ,
0xb4be,0x0c7c ,0xc695,0x3fb1 ,0x2aaa,0xc2c1 ,
0xb600,0x0af1 ,0xc5ba,0x3fc4 ,0x2d93,0xc21d ,
0xb74d,0x0964 ,0xc4e2,0x3fd4 ,0x306c,0xc18e ,
0xb8a6,0x07d6 ,0xc40c,0x3fe1 ,0x3334,0xc115 ,
0xba09,0x0646 ,0xc338,0x3fec ,0x35eb,0xc0b1 ,
0xbb77,0x04b5 ,0xc266,0x3ff5 ,0x388e,0xc064 ,
0xbcf0,0x0324 ,0xc197,0x3ffb ,0x3b1e,0xc02c ,
0xbe73,0x0192 ,0xc0ca,0x3fff ,0x3d9a,0xc00b ,
0x4000,0x0000 ,0x3f9b,0x0065 ,0x3f36,0x00c9 ,
0x3ed0,0x012e ,0x3e69,0x0192 ,0x3e02,0x01f7 ,
0x3d9a,0x025b ,0x3d31,0x02c0 ,0x3cc8,0x0324 ,
0x3c5f,0x0388 ,0x3bf4,0x03ed ,0x3b8a,0x0451 ,
0x3b1e,0x04b5 ,0x3ab2,0x051a ,0x3a46,0x057e ,
0x39d9,0x05e2 ,0x396b,0x0646 ,0x38fd,0x06aa ,
0x388e,0x070e ,0x381f,0x0772 ,0x37af,0x07d6 ,
0x373f,0x0839 ,0x36ce,0x089d ,0x365d,0x0901 ,
0x35eb,0x0964 ,0x3578,0x09c7 ,0x3505,0x0a2b ,
0x3492,0x0a8e ,0x341e,0x0af1 ,0x33a9,0x0b54 ,
0x3334,0x0bb7 ,0x32bf,0x0c1a ,0x3249,0x0c7c ,
0x31d2,0x0cdf ,0x315b,0x0d41 ,0x30e4,0x0da4 ,
0x306c,0x0e06 ,0x2ff4,0x0e68 ,0x2f7b,0x0eca ,
0x2f02,0x0f2b ,0x2e88,0x0f8d ,0x2e0e,0x0fee ,
0x2d93,0x1050 ,0x2d18,0x10b1 ,0x2c9d,0x1112 ,
0x2c21,0x1173 ,0x2ba4,0x11d3 ,0x2b28,0x1234 ,
0x2aaa,0x1294 ,0x2a2d,0x12f4 ,0x29af,0x1354 ,
0x2931,0x13b4 ,0x28b2,0x1413 ,0x2833,0x1473 ,
0x27b3,0x14d2 ,0x2733,0x1531 ,0x26b3,0x1590 ,
0x2632,0x15ee ,0x25b1,0x164c ,0x252f,0x16ab ,
0x24ae,0x1709 ,0x242b,0x1766 ,0x23a9,0x17c4 ,
0x2326,0x1821 ,0x22a3,0x187e ,0x221f,0x18db ,
0x219c,0x1937 ,0x2117,0x1993 ,0x2093,0x19ef ,
0x200e,0x1a4b ,0x1f89,0x1aa7 ,0x1f04,0x1b02 ,
0x1e7e,0x1b5d ,0x1df8,0x1bb8 ,0x1d72,0x1c12 ,
0x1ceb,0x1c6c ,0x1c64,0x1cc6 ,0x1bdd,0x1d20 ,
0x1b56,0x1d79 ,0x1ace,0x1dd3 ,0x1a46,0x1e2b ,
0x19be,0x1e84 ,0x1935,0x1edc ,0x18ad,0x1f34 ,
0x1824,0x1f8c ,0x179b,0x1fe3 ,0x1711,0x203a ,
0x1688,0x2091 ,0x15fe,0x20e7 ,0x1574,0x213d ,
0x14ea,0x2193 ,0x145f,0x21e8 ,0x13d5,0x223d ,
0x134a,0x2292 ,0x12bf,0x22e7 ,0x1234,0x233b ,
0x11a8,0x238e ,0x111d,0x23e2 ,0x1091,0x2435 ,
0x1005,0x2488 ,0x0f79,0x24da ,0x0eed,0x252c ,
0x0e61,0x257e ,0x0dd4,0x25cf ,0x0d48,0x2620 ,
0x0cbb,0x2671 ,0x0c2e,0x26c1 ,0x0ba1,0x2711 ,
0x0b14,0x2760 ,0x0a87,0x27af ,0x09fa,0x27fe ,
0x096d,0x284c ,0x08df,0x289a ,0x0852,0x28e7 ,
0x07c4,0x2935 ,0x0736,0x2981 ,0x06a9,0x29ce ,
0x061b,0x2a1a ,0x058d,0x2a65 ,0x04ff,0x2ab0 ,
0x0471,0x2afb ,0x03e3,0x2b45 ,0x0355,0x2b8f ,
0x02c7,0x2bd8 ,0x0239,0x2c21 ,0x01aa,0x2c6a ,
0x011c,0x2cb2 ,0x008e,0x2cfa ,0x0000,0x2d41 ,
0xff72,0x2d88 ,0xfee4,0x2dcf ,0xfe56,0x2e15 ,
0xfdc7,0x2e5a ,0xfd39,0x2e9f ,0xfcab,0x2ee4 ,
0xfc1d,0x2f28 ,0xfb8f,0x2f6c ,0xfb01,0x2faf ,
0xfa73,0x2ff2 ,0xf9e5,0x3034 ,0xf957,0x3076 ,
0xf8ca,0x30b8 ,0xf83c,0x30f9 ,0xf7ae,0x3139 ,
0xf721,0x3179 ,0xf693,0x31b9 ,0xf606,0x31f8 ,
0xf579,0x3236 ,0xf4ec,0x3274 ,0xf45f,0x32b2 ,
0xf3d2,0x32ef ,0xf345,0x332c ,0xf2b8,0x3368 ,
0xf22c,0x33a3 ,0xf19f,0x33df ,0xf113,0x3419 ,
0xf087,0x3453 ,0xeffb,0x348d ,0xef6f,0x34c6 ,
0xeee3,0x34ff ,0xee58,0x3537 ,0xedcc,0x356e ,
0xed41,0x35a5 ,0xecb6,0x35dc ,0xec2b,0x3612 ,
0xeba1,0x3648 ,0xeb16,0x367d ,0xea8c,0x36b1 ,
0xea02,0x36e5 ,0xe978,0x3718 ,0xe8ef,0x374b ,
0xe865,0x377e ,0xe7dc,0x37b0 ,0xe753,0x37e1 ,
0xe6cb,0x3812 ,0xe642,0x3842 ,0xe5ba,0x3871 ,
0xe532,0x38a1 ,0xe4aa,0x38cf ,0xe423,0x38fd ,
0xe39c,0x392b ,0xe315,0x3958 ,0xe28e,0x3984 ,
0xe208,0x39b0 ,0xe182,0x39db ,0xe0fc,0x3a06 ,
0xe077,0x3a30 ,0xdff2,0x3a59 ,0xdf6d,0x3a82 ,
0xdee9,0x3aab ,0xde64,0x3ad3 ,0xdde1,0x3afa ,
0xdd5d,0x3b21 ,0xdcda,0x3b47 ,0xdc57,0x3b6d ,
0xdbd5,0x3b92 ,0xdb52,0x3bb6 ,0xdad1,0x3bda ,
0xda4f,0x3bfd ,0xd9ce,0x3c20 ,0xd94d,0x3c42 ,
0xd8cd,0x3c64 ,0xd84d,0x3c85 ,0xd7cd,0x3ca5 ,
0xd74e,0x3cc5 ,0xd6cf,0x3ce4 ,0xd651,0x3d03 ,
0xd5d3,0x3d21 ,0xd556,0x3d3f ,0xd4d8,0x3d5b ,
0xd45c,0x3d78 ,0xd3df,0x3d93 ,0xd363,0x3daf ,
0xd2e8,0x3dc9 ,0xd26d,0x3de3 ,0xd1f2,0x3dfc ,
0xd178,0x3e15 ,0xd0fe,0x3e2d ,0xd085,0x3e45 ,
0xd00c,0x3e5c ,0xcf94,0x3e72 ,0xcf1c,0x3e88 ,
0xcea5,0x3e9d ,0xce2e,0x3eb1 ,0xcdb7,0x3ec5 ,
0xcd41,0x3ed8 ,0xcccc,0x3eeb ,0xcc57,0x3efd ,
0xcbe2,0x3f0f ,0xcb6e,0x3f20 ,0xcafb,0x3f30 ,
0xca88,0x3f40 ,0xca15,0x3f4f ,0xc9a3,0x3f5d ,
0xc932,0x3f6b ,0xc8c1,0x3f78 ,0xc851,0x3f85 ,
0xc7e1,0x3f91 ,0xc772,0x3f9c ,0xc703,0x3fa7 ,
0xc695,0x3fb1 ,0xc627,0x3fbb ,0xc5ba,0x3fc4 ,
0xc54e,0x3fcc ,0xc4e2,0x3fd4 ,0xc476,0x3fdb ,
0xc40c,0x3fe1 ,0xc3a1,0x3fe7 ,0xc338,0x3fec ,
0xc2cf,0x3ff1 ,0xc266,0x3ff5 ,0xc1fe,0x3ff8 ,
0xc197,0x3ffb ,0xc130,0x3ffd ,0xc0ca,0x3fff ,
0xc065,0x4000 ,0xc000,0x4000 ,0xbf9c,0x4000 ,
0xbf38,0x3fff ,0xbed5,0x3ffd ,0xbe73,0x3ffb ,
0xbe11,0x3ff8 ,0xbdb0,0x3ff5 ,0xbd50,0x3ff1 ,
0xbcf0,0x3fec ,0xbc91,0x3fe7 ,0xbc32,0x3fe1 ,
0xbbd4,0x3fdb ,0xbb77,0x3fd4 ,0xbb1b,0x3fcc ,
0xbabf,0x3fc4 ,0xba64,0x3fbb ,0xba09,0x3fb1 ,
0xb9af,0x3fa7 ,0xb956,0x3f9c ,0xb8fd,0x3f91 ,
0xb8a6,0x3f85 ,0xb84f,0x3f78 ,0xb7f8,0x3f6b ,
0xb7a2,0x3f5d ,0xb74d,0x3f4f ,0xb6f9,0x3f40 ,
0xb6a5,0x3f30 ,0xb652,0x3f20 ,0xb600,0x3f0f ,
0xb5af,0x3efd ,0xb55e,0x3eeb ,0xb50e,0x3ed8 ,
0xb4be,0x3ec5 ,0xb470,0x3eb1 ,0xb422,0x3e9d ,
0xb3d5,0x3e88 ,0xb388,0x3e72 ,0xb33d,0x3e5c ,
0xb2f2,0x3e45 ,0xb2a7,0x3e2d ,0xb25e,0x3e15 ,
0xb215,0x3dfc ,0xb1cd,0x3de3 ,0xb186,0x3dc9 ,
0xb140,0x3daf ,0xb0fa,0x3d93 ,0xb0b5,0x3d78 ,
0xb071,0x3d5b ,0xb02d,0x3d3f ,0xafeb,0x3d21 ,
0xafa9,0x3d03 ,0xaf68,0x3ce4 ,0xaf28,0x3cc5 ,
0xaee8,0x3ca5 ,0xaea9,0x3c85 ,0xae6b,0x3c64 ,
0xae2e,0x3c42 ,0xadf2,0x3c20 ,0xadb6,0x3bfd ,
0xad7b,0x3bda ,0xad41,0x3bb6 ,0xad08,0x3b92 ,
0xacd0,0x3b6d ,0xac98,0x3b47 ,0xac61,0x3b21 ,
0xac2b,0x3afa ,0xabf6,0x3ad3 ,0xabc2,0x3aab ,
0xab8e,0x3a82 ,0xab5b,0x3a59 ,0xab29,0x3a30 ,
0xaaf8,0x3a06 ,0xaac8,0x39db ,0xaa98,0x39b0 ,
0xaa6a,0x3984 ,0xaa3c,0x3958 ,0xaa0f,0x392b ,
0xa9e3,0x38fd ,0xa9b7,0x38cf ,0xa98d,0x38a1 ,
0xa963,0x3871 ,0xa93a,0x3842 ,0xa912,0x3812 ,
0xa8eb,0x37e1 ,0xa8c5,0x37b0 ,0xa89f,0x377e ,
0xa87b,0x374b ,0xa857,0x3718 ,0xa834,0x36e5 ,
0xa812,0x36b1 ,0xa7f1,0x367d ,0xa7d0,0x3648 ,
0xa7b1,0x3612 ,0xa792,0x35dc ,0xa774,0x35a5 ,
0xa757,0x356e ,0xa73b,0x3537 ,0xa71f,0x34ff ,
0xa705,0x34c6 ,0xa6eb,0x348d ,0xa6d3,0x3453 ,
0xa6bb,0x3419 ,0xa6a4,0x33df ,0xa68e,0x33a3 ,
0xa678,0x3368 ,0xa664,0x332c ,0xa650,0x32ef ,
0xa63e,0x32b2 ,0xa62c,0x3274 ,0xa61b,0x3236 ,
0xa60b,0x31f8 ,0xa5fb,0x31b9 ,0xa5ed,0x3179 ,
0xa5e0,0x3139 ,0xa5d3,0x30f9 ,0xa5c7,0x30b8 ,
0xa5bc,0x3076 ,0xa5b2,0x3034 ,0xa5a9,0x2ff2 ,
0xa5a1,0x2faf ,0xa599,0x2f6c ,0xa593,0x2f28 ,
0xa58d,0x2ee4 ,0xa588,0x2e9f ,0xa585,0x2e5a ,
0xa581,0x2e15 ,0xa57f,0x2dcf ,0xa57e,0x2d88 ,
0xa57e,0x2d41 ,0xa57e,0x2cfa ,0xa57f,0x2cb2 ,
0xa581,0x2c6a ,0xa585,0x2c21 ,0xa588,0x2bd8 ,
0xa58d,0x2b8f ,0xa593,0x2b45 ,0xa599,0x2afb ,
0xa5a1,0x2ab0 ,0xa5a9,0x2a65 ,0xa5b2,0x2a1a ,
0xa5bc,0x29ce ,0xa5c7,0x2981 ,0xa5d3,0x2935 ,
0xa5e0,0x28e7 ,0xa5ed,0x289a ,0xa5fb,0x284c ,
0xa60b,0x27fe ,0xa61b,0x27af ,0xa62c,0x2760 ,
0xa63e,0x2711 ,0xa650,0x26c1 ,0xa664,0x2671 ,
0xa678,0x2620 ,0xa68e,0x25cf ,0xa6a4,0x257e ,
0xa6bb,0x252c ,0xa6d3,0x24da ,0xa6eb,0x2488 ,
0xa705,0x2435 ,0xa71f,0x23e2 ,0xa73b,0x238e ,
0xa757,0x233b ,0xa774,0x22e7 ,0xa792,0x2292 ,
0xa7b1,0x223d ,0xa7d0,0x21e8 ,0xa7f1,0x2193 ,
0xa812,0x213d ,0xa834,0x20e7 ,0xa857,0x2091 ,
0xa87b,0x203a ,0xa89f,0x1fe3 ,0xa8c5,0x1f8c ,
0xa8eb,0x1f34 ,0xa912,0x1edc ,0xa93a,0x1e84 ,
0xa963,0x1e2b ,0xa98d,0x1dd3 ,0xa9b7,0x1d79 ,
0xa9e3,0x1d20 ,0xaa0f,0x1cc6 ,0xaa3c,0x1c6c ,
0xaa6a,0x1c12 ,0xaa98,0x1bb8 ,0xaac8,0x1b5d ,
0xaaf8,0x1b02 ,0xab29,0x1aa7 ,0xab5b,0x1a4b ,
0xab8e,0x19ef ,0xabc2,0x1993 ,0xabf6,0x1937 ,
0xac2b,0x18db ,0xac61,0x187e ,0xac98,0x1821 ,
0xacd0,0x17c4 ,0xad08,0x1766 ,0xad41,0x1709 ,
0xad7b,0x16ab ,0xadb6,0x164c ,0xadf2,0x15ee ,
0xae2e,0x1590 ,0xae6b,0x1531 ,0xaea9,0x14d2 ,
0xaee8,0x1473 ,0xaf28,0x1413 ,0xaf68,0x13b4 ,
0xafa9,0x1354 ,0xafeb,0x12f4 ,0xb02d,0x1294 ,
0xb071,0x1234 ,0xb0b5,0x11d3 ,0xb0fa,0x1173 ,
0xb140,0x1112 ,0xb186,0x10b1 ,0xb1cd,0x1050 ,
0xb215,0x0fee ,0xb25e,0x0f8d ,0xb2a7,0x0f2b ,
0xb2f2,0x0eca ,0xb33d,0x0e68 ,0xb388,0x0e06 ,
0xb3d5,0x0da4 ,0xb422,0x0d41 ,0xb470,0x0cdf ,
0xb4be,0x0c7c ,0xb50e,0x0c1a ,0xb55e,0x0bb7 ,
0xb5af,0x0b54 ,0xb600,0x0af1 ,0xb652,0x0a8e ,
0xb6a5,0x0a2b ,0xb6f9,0x09c7 ,0xb74d,0x0964 ,
0xb7a2,0x0901 ,0xb7f8,0x089d ,0xb84f,0x0839 ,
0xb8a6,0x07d6 ,0xb8fd,0x0772 ,0xb956,0x070e ,
0xb9af,0x06aa ,0xba09,0x0646 ,0xba64,0x05e2 ,
0xbabf,0x057e ,0xbb1b,0x051a ,0xbb77,0x04b5 ,
0xbbd4,0x0451 ,0xbc32,0x03ed ,0xbc91,0x0388 ,
0xbcf0,0x0324 ,0xbd50,0x02c0 ,0xbdb0,0x025b ,
0xbe11,0x01f7 ,0xbe73,0x0192 ,0xbed5,0x012e ,
0xbf38,0x00c9 ,0xbf9c,0x0065 };
extern const int s_Q14R_8;
const int s_Q14R_8 = 1024;
extern const unsigned short t_Q14R_8[2032];
const unsigned short t_Q14R_8[2032] = {
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3b21,0x187e ,0x3ec5,0x0c7c ,0x3537,0x238e ,
0x2d41,0x2d41 ,0x3b21,0x187e ,0x187e,0x3b21 ,
0x187e,0x3b21 ,0x3537,0x238e ,0xf384,0x3ec5 ,
0x0000,0x4000 ,0x2d41,0x2d41 ,0xd2bf,0x2d41 ,
0xe782,0x3b21 ,0x238e,0x3537 ,0xc13b,0x0c7c ,
0xd2bf,0x2d41 ,0x187e,0x3b21 ,0xc4df,0xe782 ,
0xc4df,0x187e ,0x0c7c,0x3ec5 ,0xdc72,0xcac9 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3fb1,0x0646 ,0x3fec,0x0324 ,0x3f4f,0x0964 ,
0x3ec5,0x0c7c ,0x3fb1,0x0646 ,0x3d3f,0x1294 ,
0x3d3f,0x1294 ,0x3f4f,0x0964 ,0x39db,0x1b5d ,
0x3b21,0x187e ,0x3ec5,0x0c7c ,0x3537,0x238e ,
0x3871,0x1e2b ,0x3e15,0x0f8d ,0x2f6c,0x2afb ,
0x3537,0x238e ,0x3d3f,0x1294 ,0x289a,0x3179 ,
0x3179,0x289a ,0x3c42,0x1590 ,0x20e7,0x36e5 ,
0x2d41,0x2d41 ,0x3b21,0x187e ,0x187e,0x3b21 ,
0x289a,0x3179 ,0x39db,0x1b5d ,0x0f8d,0x3e15 ,
0x238e,0x3537 ,0x3871,0x1e2b ,0x0646,0x3fb1 ,
0x1e2b,0x3871 ,0x36e5,0x20e7 ,0xfcdc,0x3fec ,
0x187e,0x3b21 ,0x3537,0x238e ,0xf384,0x3ec5 ,
0x1294,0x3d3f ,0x3368,0x2620 ,0xea70,0x3c42 ,
0x0c7c,0x3ec5 ,0x3179,0x289a ,0xe1d5,0x3871 ,
0x0646,0x3fb1 ,0x2f6c,0x2afb ,0xd9e0,0x3368 ,
0x0000,0x4000 ,0x2d41,0x2d41 ,0xd2bf,0x2d41 ,
0xf9ba,0x3fb1 ,0x2afb,0x2f6c ,0xcc98,0x2620 ,
0xf384,0x3ec5 ,0x289a,0x3179 ,0xc78f,0x1e2b ,
0xed6c,0x3d3f ,0x2620,0x3368 ,0xc3be,0x1590 ,
0xe782,0x3b21 ,0x238e,0x3537 ,0xc13b,0x0c7c ,
0xe1d5,0x3871 ,0x20e7,0x36e5 ,0xc014,0x0324 ,
0xdc72,0x3537 ,0x1e2b,0x3871 ,0xc04f,0xf9ba ,
0xd766,0x3179 ,0x1b5d,0x39db ,0xc1eb,0xf073 ,
0xd2bf,0x2d41 ,0x187e,0x3b21 ,0xc4df,0xe782 ,
0xce87,0x289a ,0x1590,0x3c42 ,0xc91b,0xdf19 ,
0xcac9,0x238e ,0x1294,0x3d3f ,0xce87,0xd766 ,
0xc78f,0x1e2b ,0x0f8d,0x3e15 ,0xd505,0xd094 ,
0xc4df,0x187e ,0x0c7c,0x3ec5 ,0xdc72,0xcac9 ,
0xc2c1,0x1294 ,0x0964,0x3f4f ,0xe4a3,0xc625 ,
0xc13b,0x0c7c ,0x0646,0x3fb1 ,0xed6c,0xc2c1 ,
0xc04f,0x0646 ,0x0324,0x3fec ,0xf69c,0xc0b1 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3ffb,0x0192 ,0x3fff,0x00c9 ,0x3ff5,0x025b ,
0x3fec,0x0324 ,0x3ffb,0x0192 ,0x3fd4,0x04b5 ,
0x3fd4,0x04b5 ,0x3ff5,0x025b ,0x3f9c,0x070e ,
0x3fb1,0x0646 ,0x3fec,0x0324 ,0x3f4f,0x0964 ,
0x3f85,0x07d6 ,0x3fe1,0x03ed ,0x3eeb,0x0bb7 ,
0x3f4f,0x0964 ,0x3fd4,0x04b5 ,0x3e72,0x0e06 ,
0x3f0f,0x0af1 ,0x3fc4,0x057e ,0x3de3,0x1050 ,
0x3ec5,0x0c7c ,0x3fb1,0x0646 ,0x3d3f,0x1294 ,
0x3e72,0x0e06 ,0x3f9c,0x070e ,0x3c85,0x14d2 ,
0x3e15,0x0f8d ,0x3f85,0x07d6 ,0x3bb6,0x1709 ,
0x3daf,0x1112 ,0x3f6b,0x089d ,0x3ad3,0x1937 ,
0x3d3f,0x1294 ,0x3f4f,0x0964 ,0x39db,0x1b5d ,
0x3cc5,0x1413 ,0x3f30,0x0a2b ,0x38cf,0x1d79 ,
0x3c42,0x1590 ,0x3f0f,0x0af1 ,0x37b0,0x1f8c ,
0x3bb6,0x1709 ,0x3eeb,0x0bb7 ,0x367d,0x2193 ,
0x3b21,0x187e ,0x3ec5,0x0c7c ,0x3537,0x238e ,
0x3a82,0x19ef ,0x3e9d,0x0d41 ,0x33df,0x257e ,
0x39db,0x1b5d ,0x3e72,0x0e06 ,0x3274,0x2760 ,
0x392b,0x1cc6 ,0x3e45,0x0eca ,0x30f9,0x2935 ,
0x3871,0x1e2b ,0x3e15,0x0f8d ,0x2f6c,0x2afb ,
0x37b0,0x1f8c ,0x3de3,0x1050 ,0x2dcf,0x2cb2 ,
0x36e5,0x20e7 ,0x3daf,0x1112 ,0x2c21,0x2e5a ,
0x3612,0x223d ,0x3d78,0x11d3 ,0x2a65,0x2ff2 ,
0x3537,0x238e ,0x3d3f,0x1294 ,0x289a,0x3179 ,
0x3453,0x24da ,0x3d03,0x1354 ,0x26c1,0x32ef ,
0x3368,0x2620 ,0x3cc5,0x1413 ,0x24da,0x3453 ,
0x3274,0x2760 ,0x3c85,0x14d2 ,0x22e7,0x35a5 ,
0x3179,0x289a ,0x3c42,0x1590 ,0x20e7,0x36e5 ,
0x3076,0x29ce ,0x3bfd,0x164c ,0x1edc,0x3812 ,
0x2f6c,0x2afb ,0x3bb6,0x1709 ,0x1cc6,0x392b ,
0x2e5a,0x2c21 ,0x3b6d,0x17c4 ,0x1aa7,0x3a30 ,
0x2d41,0x2d41 ,0x3b21,0x187e ,0x187e,0x3b21 ,
0x2c21,0x2e5a ,0x3ad3,0x1937 ,0x164c,0x3bfd ,
0x2afb,0x2f6c ,0x3a82,0x19ef ,0x1413,0x3cc5 ,
0x29ce,0x3076 ,0x3a30,0x1aa7 ,0x11d3,0x3d78 ,
0x289a,0x3179 ,0x39db,0x1b5d ,0x0f8d,0x3e15 ,
0x2760,0x3274 ,0x3984,0x1c12 ,0x0d41,0x3e9d ,
0x2620,0x3368 ,0x392b,0x1cc6 ,0x0af1,0x3f0f ,
0x24da,0x3453 ,0x38cf,0x1d79 ,0x089d,0x3f6b ,
0x238e,0x3537 ,0x3871,0x1e2b ,0x0646,0x3fb1 ,
0x223d,0x3612 ,0x3812,0x1edc ,0x03ed,0x3fe1 ,
0x20e7,0x36e5 ,0x37b0,0x1f8c ,0x0192,0x3ffb ,
0x1f8c,0x37b0 ,0x374b,0x203a ,0xff37,0x3fff ,
0x1e2b,0x3871 ,0x36e5,0x20e7 ,0xfcdc,0x3fec ,
0x1cc6,0x392b ,0x367d,0x2193 ,0xfa82,0x3fc4 ,
0x1b5d,0x39db ,0x3612,0x223d ,0xf82a,0x3f85 ,
0x19ef,0x3a82 ,0x35a5,0x22e7 ,0xf5d5,0x3f30 ,
0x187e,0x3b21 ,0x3537,0x238e ,0xf384,0x3ec5 ,
0x1709,0x3bb6 ,0x34c6,0x2435 ,0xf136,0x3e45 ,
0x1590,0x3c42 ,0x3453,0x24da ,0xeeee,0x3daf ,
0x1413,0x3cc5 ,0x33df,0x257e ,0xecac,0x3d03 ,
0x1294,0x3d3f ,0x3368,0x2620 ,0xea70,0x3c42 ,
0x1112,0x3daf ,0x32ef,0x26c1 ,0xe83c,0x3b6d ,
0x0f8d,0x3e15 ,0x3274,0x2760 ,0xe611,0x3a82 ,
0x0e06,0x3e72 ,0x31f8,0x27fe ,0xe3ee,0x3984 ,
0x0c7c,0x3ec5 ,0x3179,0x289a ,0xe1d5,0x3871 ,
0x0af1,0x3f0f ,0x30f9,0x2935 ,0xdfc6,0x374b ,
0x0964,0x3f4f ,0x3076,0x29ce ,0xddc3,0x3612 ,
0x07d6,0x3f85 ,0x2ff2,0x2a65 ,0xdbcb,0x34c6 ,
0x0646,0x3fb1 ,0x2f6c,0x2afb ,0xd9e0,0x3368 ,
0x04b5,0x3fd4 ,0x2ee4,0x2b8f ,0xd802,0x31f8 ,
0x0324,0x3fec ,0x2e5a,0x2c21 ,0xd632,0x3076 ,
0x0192,0x3ffb ,0x2dcf,0x2cb2 ,0xd471,0x2ee4 ,
0x0000,0x4000 ,0x2d41,0x2d41 ,0xd2bf,0x2d41 ,
0xfe6e,0x3ffb ,0x2cb2,0x2dcf ,0xd11c,0x2b8f ,
0xfcdc,0x3fec ,0x2c21,0x2e5a ,0xcf8a,0x29ce ,
0xfb4b,0x3fd4 ,0x2b8f,0x2ee4 ,0xce08,0x27fe ,
0xf9ba,0x3fb1 ,0x2afb,0x2f6c ,0xcc98,0x2620 ,
0xf82a,0x3f85 ,0x2a65,0x2ff2 ,0xcb3a,0x2435 ,
0xf69c,0x3f4f ,0x29ce,0x3076 ,0xc9ee,0x223d ,
0xf50f,0x3f0f ,0x2935,0x30f9 ,0xc8b5,0x203a ,
0xf384,0x3ec5 ,0x289a,0x3179 ,0xc78f,0x1e2b ,
0xf1fa,0x3e72 ,0x27fe,0x31f8 ,0xc67c,0x1c12 ,
0xf073,0x3e15 ,0x2760,0x3274 ,0xc57e,0x19ef ,
0xeeee,0x3daf ,0x26c1,0x32ef ,0xc493,0x17c4 ,
0xed6c,0x3d3f ,0x2620,0x3368 ,0xc3be,0x1590 ,
0xebed,0x3cc5 ,0x257e,0x33df ,0xc2fd,0x1354 ,
0xea70,0x3c42 ,0x24da,0x3453 ,0xc251,0x1112 ,
0xe8f7,0x3bb6 ,0x2435,0x34c6 ,0xc1bb,0x0eca ,
0xe782,0x3b21 ,0x238e,0x3537 ,0xc13b,0x0c7c ,
0xe611,0x3a82 ,0x22e7,0x35a5 ,0xc0d0,0x0a2b ,
0xe4a3,0x39db ,0x223d,0x3612 ,0xc07b,0x07d6 ,
0xe33a,0x392b ,0x2193,0x367d ,0xc03c,0x057e ,
0xe1d5,0x3871 ,0x20e7,0x36e5 ,0xc014,0x0324 ,
0xe074,0x37b0 ,0x203a,0x374b ,0xc001,0x00c9 ,
0xdf19,0x36e5 ,0x1f8c,0x37b0 ,0xc005,0xfe6e ,
0xddc3,0x3612 ,0x1edc,0x3812 ,0xc01f,0xfc13 ,
0xdc72,0x3537 ,0x1e2b,0x3871 ,0xc04f,0xf9ba ,
0xdb26,0x3453 ,0x1d79,0x38cf ,0xc095,0xf763 ,
0xd9e0,0x3368 ,0x1cc6,0x392b ,0xc0f1,0xf50f ,
0xd8a0,0x3274 ,0x1c12,0x3984 ,0xc163,0xf2bf ,
0xd766,0x3179 ,0x1b5d,0x39db ,0xc1eb,0xf073 ,
0xd632,0x3076 ,0x1aa7,0x3a30 ,0xc288,0xee2d ,
0xd505,0x2f6c ,0x19ef,0x3a82 ,0xc33b,0xebed ,
0xd3df,0x2e5a ,0x1937,0x3ad3 ,0xc403,0xe9b4 ,
0xd2bf,0x2d41 ,0x187e,0x3b21 ,0xc4df,0xe782 ,
0xd1a6,0x2c21 ,0x17c4,0x3b6d ,0xc5d0,0xe559 ,
0xd094,0x2afb ,0x1709,0x3bb6 ,0xc6d5,0xe33a ,
0xcf8a,0x29ce ,0x164c,0x3bfd ,0xc7ee,0xe124 ,
0xce87,0x289a ,0x1590,0x3c42 ,0xc91b,0xdf19 ,
0xcd8c,0x2760 ,0x14d2,0x3c85 ,0xca5b,0xdd19 ,
0xcc98,0x2620 ,0x1413,0x3cc5 ,0xcbad,0xdb26 ,
0xcbad,0x24da ,0x1354,0x3d03 ,0xcd11,0xd93f ,
0xcac9,0x238e ,0x1294,0x3d3f ,0xce87,0xd766 ,
0xc9ee,0x223d ,0x11d3,0x3d78 ,0xd00e,0xd59b ,
0xc91b,0x20e7 ,0x1112,0x3daf ,0xd1a6,0xd3df ,
0xc850,0x1f8c ,0x1050,0x3de3 ,0xd34e,0xd231 ,
0xc78f,0x1e2b ,0x0f8d,0x3e15 ,0xd505,0xd094 ,
0xc6d5,0x1cc6 ,0x0eca,0x3e45 ,0xd6cb,0xcf07 ,
0xc625,0x1b5d ,0x0e06,0x3e72 ,0xd8a0,0xcd8c ,
0xc57e,0x19ef ,0x0d41,0x3e9d ,0xda82,0xcc21 ,
0xc4df,0x187e ,0x0c7c,0x3ec5 ,0xdc72,0xcac9 ,
0xc44a,0x1709 ,0x0bb7,0x3eeb ,0xde6d,0xc983 ,
0xc3be,0x1590 ,0x0af1,0x3f0f ,0xe074,0xc850 ,
0xc33b,0x1413 ,0x0a2b,0x3f30 ,0xe287,0xc731 ,
0xc2c1,0x1294 ,0x0964,0x3f4f ,0xe4a3,0xc625 ,
0xc251,0x1112 ,0x089d,0x3f6b ,0xe6c9,0xc52d ,
0xc1eb,0x0f8d ,0x07d6,0x3f85 ,0xe8f7,0xc44a ,
0xc18e,0x0e06 ,0x070e,0x3f9c ,0xeb2e,0xc37b ,
0xc13b,0x0c7c ,0x0646,0x3fb1 ,0xed6c,0xc2c1 ,
0xc0f1,0x0af1 ,0x057e,0x3fc4 ,0xefb0,0xc21d ,
0xc0b1,0x0964 ,0x04b5,0x3fd4 ,0xf1fa,0xc18e ,
0xc07b,0x07d6 ,0x03ed,0x3fe1 ,0xf449,0xc115 ,
0xc04f,0x0646 ,0x0324,0x3fec ,0xf69c,0xc0b1 ,
0xc02c,0x04b5 ,0x025b,0x3ff5 ,0xf8f2,0xc064 ,
0xc014,0x0324 ,0x0192,0x3ffb ,0xfb4b,0xc02c ,
0xc005,0x0192 ,0x00c9,0x3fff ,0xfda5,0xc00b ,
0x4000,0x0000 ,0x4000,0x0065 ,0x3fff,0x00c9 ,
0x3ffd,0x012e ,0x3ffb,0x0192 ,0x3ff8,0x01f7 ,
0x3ff5,0x025b ,0x3ff1,0x02c0 ,0x3fec,0x0324 ,
0x3fe7,0x0388 ,0x3fe1,0x03ed ,0x3fdb,0x0451 ,
0x3fd4,0x04b5 ,0x3fcc,0x051a ,0x3fc4,0x057e ,
0x3fbb,0x05e2 ,0x3fb1,0x0646 ,0x3fa7,0x06aa ,
0x3f9c,0x070e ,0x3f91,0x0772 ,0x3f85,0x07d6 ,
0x3f78,0x0839 ,0x3f6b,0x089d ,0x3f5d,0x0901 ,
0x3f4f,0x0964 ,0x3f40,0x09c7 ,0x3f30,0x0a2b ,
0x3f20,0x0a8e ,0x3f0f,0x0af1 ,0x3efd,0x0b54 ,
0x3eeb,0x0bb7 ,0x3ed8,0x0c1a ,0x3ec5,0x0c7c ,
0x3eb1,0x0cdf ,0x3e9d,0x0d41 ,0x3e88,0x0da4 ,
0x3e72,0x0e06 ,0x3e5c,0x0e68 ,0x3e45,0x0eca ,
0x3e2d,0x0f2b ,0x3e15,0x0f8d ,0x3dfc,0x0fee ,
0x3de3,0x1050 ,0x3dc9,0x10b1 ,0x3daf,0x1112 ,
0x3d93,0x1173 ,0x3d78,0x11d3 ,0x3d5b,0x1234 ,
0x3d3f,0x1294 ,0x3d21,0x12f4 ,0x3d03,0x1354 ,
0x3ce4,0x13b4 ,0x3cc5,0x1413 ,0x3ca5,0x1473 ,
0x3c85,0x14d2 ,0x3c64,0x1531 ,0x3c42,0x1590 ,
0x3c20,0x15ee ,0x3bfd,0x164c ,0x3bda,0x16ab ,
0x3bb6,0x1709 ,0x3b92,0x1766 ,0x3b6d,0x17c4 ,
0x3b47,0x1821 ,0x3b21,0x187e ,0x3afa,0x18db ,
0x3ad3,0x1937 ,0x3aab,0x1993 ,0x3a82,0x19ef ,
0x3a59,0x1a4b ,0x3a30,0x1aa7 ,0x3a06,0x1b02 ,
0x39db,0x1b5d ,0x39b0,0x1bb8 ,0x3984,0x1c12 ,
0x3958,0x1c6c ,0x392b,0x1cc6 ,0x38fd,0x1d20 ,
0x38cf,0x1d79 ,0x38a1,0x1dd3 ,0x3871,0x1e2b ,
0x3842,0x1e84 ,0x3812,0x1edc ,0x37e1,0x1f34 ,
0x37b0,0x1f8c ,0x377e,0x1fe3 ,0x374b,0x203a ,
0x3718,0x2091 ,0x36e5,0x20e7 ,0x36b1,0x213d ,
0x367d,0x2193 ,0x3648,0x21e8 ,0x3612,0x223d ,
0x35dc,0x2292 ,0x35a5,0x22e7 ,0x356e,0x233b ,
0x3537,0x238e ,0x34ff,0x23e2 ,0x34c6,0x2435 ,
0x348d,0x2488 ,0x3453,0x24da ,0x3419,0x252c ,
0x33df,0x257e ,0x33a3,0x25cf ,0x3368,0x2620 ,
0x332c,0x2671 ,0x32ef,0x26c1 ,0x32b2,0x2711 ,
0x3274,0x2760 ,0x3236,0x27af ,0x31f8,0x27fe ,
0x31b9,0x284c ,0x3179,0x289a ,0x3139,0x28e7 ,
0x30f9,0x2935 ,0x30b8,0x2981 ,0x3076,0x29ce ,
0x3034,0x2a1a ,0x2ff2,0x2a65 ,0x2faf,0x2ab0 ,
0x2f6c,0x2afb ,0x2f28,0x2b45 ,0x2ee4,0x2b8f ,
0x2e9f,0x2bd8 ,0x2e5a,0x2c21 ,0x2e15,0x2c6a ,
0x2dcf,0x2cb2 ,0x2d88,0x2cfa ,0x2d41,0x2d41 ,
0x2cfa,0x2d88 ,0x2cb2,0x2dcf ,0x2c6a,0x2e15 ,
0x2c21,0x2e5a ,0x2bd8,0x2e9f ,0x2b8f,0x2ee4 ,
0x2b45,0x2f28 ,0x2afb,0x2f6c ,0x2ab0,0x2faf ,
0x2a65,0x2ff2 ,0x2a1a,0x3034 ,0x29ce,0x3076 ,
0x2981,0x30b8 ,0x2935,0x30f9 ,0x28e7,0x3139 ,
0x289a,0x3179 ,0x284c,0x31b9 ,0x27fe,0x31f8 ,
0x27af,0x3236 ,0x2760,0x3274 ,0x2711,0x32b2 ,
0x26c1,0x32ef ,0x2671,0x332c ,0x2620,0x3368 ,
0x25cf,0x33a3 ,0x257e,0x33df ,0x252c,0x3419 ,
0x24da,0x3453 ,0x2488,0x348d ,0x2435,0x34c6 ,
0x23e2,0x34ff ,0x238e,0x3537 ,0x233b,0x356e ,
0x22e7,0x35a5 ,0x2292,0x35dc ,0x223d,0x3612 ,
0x21e8,0x3648 ,0x2193,0x367d ,0x213d,0x36b1 ,
0x20e7,0x36e5 ,0x2091,0x3718 ,0x203a,0x374b ,
0x1fe3,0x377e ,0x1f8c,0x37b0 ,0x1f34,0x37e1 ,
0x1edc,0x3812 ,0x1e84,0x3842 ,0x1e2b,0x3871 ,
0x1dd3,0x38a1 ,0x1d79,0x38cf ,0x1d20,0x38fd ,
0x1cc6,0x392b ,0x1c6c,0x3958 ,0x1c12,0x3984 ,
0x1bb8,0x39b0 ,0x1b5d,0x39db ,0x1b02,0x3a06 ,
0x1aa7,0x3a30 ,0x1a4b,0x3a59 ,0x19ef,0x3a82 ,
0x1993,0x3aab ,0x1937,0x3ad3 ,0x18db,0x3afa ,
0x187e,0x3b21 ,0x1821,0x3b47 ,0x17c4,0x3b6d ,
0x1766,0x3b92 ,0x1709,0x3bb6 ,0x16ab,0x3bda ,
0x164c,0x3bfd ,0x15ee,0x3c20 ,0x1590,0x3c42 ,
0x1531,0x3c64 ,0x14d2,0x3c85 ,0x1473,0x3ca5 ,
0x1413,0x3cc5 ,0x13b4,0x3ce4 ,0x1354,0x3d03 ,
0x12f4,0x3d21 ,0x1294,0x3d3f ,0x1234,0x3d5b ,
0x11d3,0x3d78 ,0x1173,0x3d93 ,0x1112,0x3daf ,
0x10b1,0x3dc9 ,0x1050,0x3de3 ,0x0fee,0x3dfc ,
0x0f8d,0x3e15 ,0x0f2b,0x3e2d ,0x0eca,0x3e45 ,
0x0e68,0x3e5c ,0x0e06,0x3e72 ,0x0da4,0x3e88 ,
0x0d41,0x3e9d ,0x0cdf,0x3eb1 ,0x0c7c,0x3ec5 ,
0x0c1a,0x3ed8 ,0x0bb7,0x3eeb ,0x0b54,0x3efd ,
0x0af1,0x3f0f ,0x0a8e,0x3f20 ,0x0a2b,0x3f30 ,
0x09c7,0x3f40 ,0x0964,0x3f4f ,0x0901,0x3f5d ,
0x089d,0x3f6b ,0x0839,0x3f78 ,0x07d6,0x3f85 ,
0x0772,0x3f91 ,0x070e,0x3f9c ,0x06aa,0x3fa7 ,
0x0646,0x3fb1 ,0x05e2,0x3fbb ,0x057e,0x3fc4 ,
0x051a,0x3fcc ,0x04b5,0x3fd4 ,0x0451,0x3fdb ,
0x03ed,0x3fe1 ,0x0388,0x3fe7 ,0x0324,0x3fec ,
0x02c0,0x3ff1 ,0x025b,0x3ff5 ,0x01f7,0x3ff8 ,
0x0192,0x3ffb ,0x012e,0x3ffd ,0x00c9,0x3fff ,
0x0065,0x4000 ,0x0000,0x4000 ,0xff9b,0x4000 ,
0xff37,0x3fff ,0xfed2,0x3ffd ,0xfe6e,0x3ffb ,
0xfe09,0x3ff8 ,0xfda5,0x3ff5 ,0xfd40,0x3ff1 ,
0xfcdc,0x3fec ,0xfc78,0x3fe7 ,0xfc13,0x3fe1 ,
0xfbaf,0x3fdb ,0xfb4b,0x3fd4 ,0xfae6,0x3fcc ,
0xfa82,0x3fc4 ,0xfa1e,0x3fbb ,0xf9ba,0x3fb1 ,
0xf956,0x3fa7 ,0xf8f2,0x3f9c ,0xf88e,0x3f91 ,
0xf82a,0x3f85 ,0xf7c7,0x3f78 ,0xf763,0x3f6b ,
0xf6ff,0x3f5d ,0xf69c,0x3f4f ,0xf639,0x3f40 ,
0xf5d5,0x3f30 ,0xf572,0x3f20 ,0xf50f,0x3f0f ,
0xf4ac,0x3efd ,0xf449,0x3eeb ,0xf3e6,0x3ed8 ,
0xf384,0x3ec5 ,0xf321,0x3eb1 ,0xf2bf,0x3e9d ,
0xf25c,0x3e88 ,0xf1fa,0x3e72 ,0xf198,0x3e5c ,
0xf136,0x3e45 ,0xf0d5,0x3e2d ,0xf073,0x3e15 ,
0xf012,0x3dfc ,0xefb0,0x3de3 ,0xef4f,0x3dc9 ,
0xeeee,0x3daf ,0xee8d,0x3d93 ,0xee2d,0x3d78 ,
0xedcc,0x3d5b ,0xed6c,0x3d3f ,0xed0c,0x3d21 ,
0xecac,0x3d03 ,0xec4c,0x3ce4 ,0xebed,0x3cc5 ,
0xeb8d,0x3ca5 ,0xeb2e,0x3c85 ,0xeacf,0x3c64 ,
0xea70,0x3c42 ,0xea12,0x3c20 ,0xe9b4,0x3bfd ,
0xe955,0x3bda ,0xe8f7,0x3bb6 ,0xe89a,0x3b92 ,
0xe83c,0x3b6d ,0xe7df,0x3b47 ,0xe782,0x3b21 ,
0xe725,0x3afa ,0xe6c9,0x3ad3 ,0xe66d,0x3aab ,
0xe611,0x3a82 ,0xe5b5,0x3a59 ,0xe559,0x3a30 ,
0xe4fe,0x3a06 ,0xe4a3,0x39db ,0xe448,0x39b0 ,
0xe3ee,0x3984 ,0xe394,0x3958 ,0xe33a,0x392b ,
0xe2e0,0x38fd ,0xe287,0x38cf ,0xe22d,0x38a1 ,
0xe1d5,0x3871 ,0xe17c,0x3842 ,0xe124,0x3812 ,
0xe0cc,0x37e1 ,0xe074,0x37b0 ,0xe01d,0x377e ,
0xdfc6,0x374b ,0xdf6f,0x3718 ,0xdf19,0x36e5 ,
0xdec3,0x36b1 ,0xde6d,0x367d ,0xde18,0x3648 ,
0xddc3,0x3612 ,0xdd6e,0x35dc ,0xdd19,0x35a5 ,
0xdcc5,0x356e ,0xdc72,0x3537 ,0xdc1e,0x34ff ,
0xdbcb,0x34c6 ,0xdb78,0x348d ,0xdb26,0x3453 ,
0xdad4,0x3419 ,0xda82,0x33df ,0xda31,0x33a3 ,
0xd9e0,0x3368 ,0xd98f,0x332c ,0xd93f,0x32ef ,
0xd8ef,0x32b2 ,0xd8a0,0x3274 ,0xd851,0x3236 ,
0xd802,0x31f8 ,0xd7b4,0x31b9 ,0xd766,0x3179 ,
0xd719,0x3139 ,0xd6cb,0x30f9 ,0xd67f,0x30b8 ,
0xd632,0x3076 ,0xd5e6,0x3034 ,0xd59b,0x2ff2 ,
0xd550,0x2faf ,0xd505,0x2f6c ,0xd4bb,0x2f28 ,
0xd471,0x2ee4 ,0xd428,0x2e9f ,0xd3df,0x2e5a ,
0xd396,0x2e15 ,0xd34e,0x2dcf ,0xd306,0x2d88 ,
0xd2bf,0x2d41 ,0xd278,0x2cfa ,0xd231,0x2cb2 ,
0xd1eb,0x2c6a ,0xd1a6,0x2c21 ,0xd161,0x2bd8 ,
0xd11c,0x2b8f ,0xd0d8,0x2b45 ,0xd094,0x2afb ,
0xd051,0x2ab0 ,0xd00e,0x2a65 ,0xcfcc,0x2a1a ,
0xcf8a,0x29ce ,0xcf48,0x2981 ,0xcf07,0x2935 ,
0xcec7,0x28e7 ,0xce87,0x289a ,0xce47,0x284c ,
0xce08,0x27fe ,0xcdca,0x27af ,0xcd8c,0x2760 ,
0xcd4e,0x2711 ,0xcd11,0x26c1 ,0xccd4,0x2671 ,
0xcc98,0x2620 ,0xcc5d,0x25cf ,0xcc21,0x257e ,
0xcbe7,0x252c ,0xcbad,0x24da ,0xcb73,0x2488 ,
0xcb3a,0x2435 ,0xcb01,0x23e2 ,0xcac9,0x238e ,
0xca92,0x233b ,0xca5b,0x22e7 ,0xca24,0x2292 ,
0xc9ee,0x223d ,0xc9b8,0x21e8 ,0xc983,0x2193 ,
0xc94f,0x213d ,0xc91b,0x20e7 ,0xc8e8,0x2091 ,
0xc8b5,0x203a ,0xc882,0x1fe3 ,0xc850,0x1f8c ,
0xc81f,0x1f34 ,0xc7ee,0x1edc ,0xc7be,0x1e84 ,
0xc78f,0x1e2b ,0xc75f,0x1dd3 ,0xc731,0x1d79 ,
0xc703,0x1d20 ,0xc6d5,0x1cc6 ,0xc6a8,0x1c6c ,
0xc67c,0x1c12 ,0xc650,0x1bb8 ,0xc625,0x1b5d ,
0xc5fa,0x1b02 ,0xc5d0,0x1aa7 ,0xc5a7,0x1a4b ,
0xc57e,0x19ef ,0xc555,0x1993 ,0xc52d,0x1937 ,
0xc506,0x18db ,0xc4df,0x187e ,0xc4b9,0x1821 ,
0xc493,0x17c4 ,0xc46e,0x1766 ,0xc44a,0x1709 ,
0xc426,0x16ab ,0xc403,0x164c ,0xc3e0,0x15ee ,
0xc3be,0x1590 ,0xc39c,0x1531 ,0xc37b,0x14d2 ,
0xc35b,0x1473 ,0xc33b,0x1413 ,0xc31c,0x13b4 ,
0xc2fd,0x1354 ,0xc2df,0x12f4 ,0xc2c1,0x1294 ,
0xc2a5,0x1234 ,0xc288,0x11d3 ,0xc26d,0x1173 ,
0xc251,0x1112 ,0xc237,0x10b1 ,0xc21d,0x1050 ,
0xc204,0x0fee ,0xc1eb,0x0f8d ,0xc1d3,0x0f2b ,
0xc1bb,0x0eca ,0xc1a4,0x0e68 ,0xc18e,0x0e06 ,
0xc178,0x0da4 ,0xc163,0x0d41 ,0xc14f,0x0cdf ,
0xc13b,0x0c7c ,0xc128,0x0c1a ,0xc115,0x0bb7 ,
0xc103,0x0b54 ,0xc0f1,0x0af1 ,0xc0e0,0x0a8e ,
0xc0d0,0x0a2b ,0xc0c0,0x09c7 ,0xc0b1,0x0964 ,
0xc0a3,0x0901 ,0xc095,0x089d ,0xc088,0x0839 ,
0xc07b,0x07d6 ,0xc06f,0x0772 ,0xc064,0x070e ,
0xc059,0x06aa ,0xc04f,0x0646 ,0xc045,0x05e2 ,
0xc03c,0x057e ,0xc034,0x051a ,0xc02c,0x04b5 ,
0xc025,0x0451 ,0xc01f,0x03ed ,0xc019,0x0388 ,
0xc014,0x0324 ,0xc00f,0x02c0 ,0xc00b,0x025b ,
0xc008,0x01f7 ,0xc005,0x0192 ,0xc003,0x012e ,
0xc001,0x00c9 ,0xc000,0x0065 };

@@ -1,27 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the Q14 radix-2 tables used in ARM9E optimization routines.
*
*/
extern const unsigned short t_Q14S_rad8[2];
const unsigned short t_Q14S_rad8[2] = { 0x0000,0x2d41 };
//extern const int t_Q30S_rad8[2];
//const int t_Q30S_rad8[2] = { 0x00000000,0x2d413ccd };
extern const unsigned short t_Q14R_rad8[2];
const unsigned short t_Q14R_rad8[2] = { 0x2d41,0x2d41 };
//extern const int t_Q30R_rad8[2];
//const int t_Q30R_rad8[2] = { 0x2d413ccd,0x2d413ccd };
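As a brief aside (added for clarity, not part of the upstream diff): each 16-bit entry in these tables is a sine or cosine value in Q14, i.e. scaled by 2^14 = 16384. The recurring value 0x2d41 above is simply cos(pi/4) rounded in Q14, which a small standalone program can reproduce:

/* Illustrative sketch, not from the WebRTC tree: regenerate the Q14
 * radix-8 twiddle entries, e.g. 0x2d41 = round(cos(pi/4) * 2^14).
 * Compile with a C99 compiler and link against libm (-lm). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double kPi = 3.14159265358979323846;
    int q14_cos = (int)lround(cos(kPi / 4.0) * 16384.0); /* Q14 scaling */
    int q14_sin = (int)lround(sin(kPi / 4.0) * 16384.0);
    printf("0x%04x 0x%04x\n", q14_cos, q14_sin); /* prints: 0x2d41 0x2d41 */
    return 0;
}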

@@ -1,494 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the SPL unit_test.
*
*/
#include "unit_test.h"
#include "signal_processing_library.h"
class SplEnvironment : public ::testing::Environment {
public:
virtual void SetUp() {
}
virtual void TearDown() {
}
};
SplTest::SplTest()
{
}
void SplTest::SetUp() {
}
void SplTest::TearDown() {
}
TEST_F(SplTest, MacroTest) {
// Macros with inputs.
int A = 10;
int B = 21;
int a = -3;
int b = WEBRTC_SPL_WORD32_MAX;
int nr = 2;
int d_ptr1 = 0;
int d_ptr2 = 0;
EXPECT_EQ(10, WEBRTC_SPL_MIN(A, B));
EXPECT_EQ(21, WEBRTC_SPL_MAX(A, B));
EXPECT_EQ(3, WEBRTC_SPL_ABS_W16(a));
EXPECT_EQ(3, WEBRTC_SPL_ABS_W32(a));
EXPECT_EQ(0, WEBRTC_SPL_GET_BYTE(&B, nr));
WEBRTC_SPL_SET_BYTE(&d_ptr2, 1, nr);
EXPECT_EQ(65536, d_ptr2);
EXPECT_EQ(-63, WEBRTC_SPL_MUL(a, B));
EXPECT_EQ(-2147483645, WEBRTC_SPL_MUL(a, b));
EXPECT_EQ(-2147483645, WEBRTC_SPL_UMUL(a, b));
b = WEBRTC_SPL_WORD16_MAX >> 1;
EXPECT_EQ(65535, WEBRTC_SPL_UMUL_RSFT16(a, b));
EXPECT_EQ(1073627139, WEBRTC_SPL_UMUL_16_16(a, b));
EXPECT_EQ(16382, WEBRTC_SPL_UMUL_16_16_RSFT16(a, b));
EXPECT_EQ(-49149, WEBRTC_SPL_UMUL_32_16(a, b));
EXPECT_EQ(65535, WEBRTC_SPL_UMUL_32_16_RSFT16(a, b));
EXPECT_EQ(-49149, WEBRTC_SPL_MUL_16_U16(a, b));
a = b;
b = -3;
EXPECT_EQ(-5461, WEBRTC_SPL_DIV(a, b));
EXPECT_EQ(0, WEBRTC_SPL_UDIV(a, b));
EXPECT_EQ(-1, WEBRTC_SPL_MUL_16_32_RSFT16(a, b));
EXPECT_EQ(-1, WEBRTC_SPL_MUL_16_32_RSFT15(a, b));
EXPECT_EQ(-3, WEBRTC_SPL_MUL_16_32_RSFT14(a, b));
EXPECT_EQ(-24, WEBRTC_SPL_MUL_16_32_RSFT11(a, b));
int a32 = WEBRTC_SPL_WORD32_MAX;
int a32a = (WEBRTC_SPL_WORD32_MAX >> 16);
int a32b = (WEBRTC_SPL_WORD32_MAX & 0x0000ffff);
EXPECT_EQ(5, WEBRTC_SPL_MUL_32_32_RSFT32(a32a, a32b, A));
EXPECT_EQ(5, WEBRTC_SPL_MUL_32_32_RSFT32BI(a32, A));
EXPECT_EQ(-49149, WEBRTC_SPL_MUL_16_16(a, b));
EXPECT_EQ(-12288, WEBRTC_SPL_MUL_16_16_RSFT(a, b, 2));
EXPECT_EQ(-12287, WEBRTC_SPL_MUL_16_16_RSFT_WITH_ROUND(a, b, 2));
EXPECT_EQ(-1, WEBRTC_SPL_MUL_16_16_RSFT_WITH_FIXROUND(a, b));
EXPECT_EQ(16380, WEBRTC_SPL_ADD_SAT_W32(a, b));
EXPECT_EQ(21, WEBRTC_SPL_SAT(a, A, B));
EXPECT_EQ(21, WEBRTC_SPL_SAT(a, B, A));
EXPECT_EQ(-49149, WEBRTC_SPL_MUL_32_16(a, b));
EXPECT_EQ(16386, WEBRTC_SPL_SUB_SAT_W32(a, b));
EXPECT_EQ(16380, WEBRTC_SPL_ADD_SAT_W16(a, b));
EXPECT_EQ(16386, WEBRTC_SPL_SUB_SAT_W16(a, b));
EXPECT_TRUE(WEBRTC_SPL_IS_NEG(b));
// Shifting with negative numbers allowed
// Positive means left shift
EXPECT_EQ(32766, WEBRTC_SPL_SHIFT_W16(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_SHIFT_W32(a, 1));
// Shifting with negative numbers not allowed
// We cannot do casting here due to signed/unsigned problem
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_W16(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_W16(a, 1));
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_W32(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_W32(a, 1));
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_U16(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_U16(a, 1));
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_U32(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_U32(a, 1));
EXPECT_EQ(1470, WEBRTC_SPL_RAND(A));
}
TEST_F(SplTest, InlineTest) {
WebRtc_Word16 a = 121;
WebRtc_Word16 b = -17;
WebRtc_Word32 A = 111121;
WebRtc_Word32 B = -1711;
char bVersion[8];
EXPECT_EQ(104, WebRtcSpl_AddSatW16(a, b));
EXPECT_EQ(138, WebRtcSpl_SubSatW16(a, b));
EXPECT_EQ(109410, WebRtcSpl_AddSatW32(A, B));
EXPECT_EQ(112832, WebRtcSpl_SubSatW32(A, B));
EXPECT_EQ(17, WebRtcSpl_GetSizeInBits(A));
EXPECT_EQ(14, WebRtcSpl_NormW32(A));
EXPECT_EQ(4, WebRtcSpl_NormW16(B));
EXPECT_EQ(15, WebRtcSpl_NormU32(A));
EXPECT_EQ(0, WebRtcSpl_get_version(bVersion, 8));
}
TEST_F(SplTest, MathOperationsTest) {
int A = 117;
WebRtc_Word32 num = 117;
WebRtc_Word32 den = -5;
WebRtc_UWord16 denU = 5;
EXPECT_EQ(10, WebRtcSpl_Sqrt(A));
EXPECT_EQ(10, WebRtcSpl_SqrtFloor(A));
EXPECT_EQ(-91772805, WebRtcSpl_DivResultInQ31(den, num));
EXPECT_EQ(-23, WebRtcSpl_DivW32W16ResW16(num, (WebRtc_Word16)den));
EXPECT_EQ(-23, WebRtcSpl_DivW32W16(num, (WebRtc_Word16)den));
EXPECT_EQ(23, WebRtcSpl_DivU32U16(num, denU));
EXPECT_EQ(0, WebRtcSpl_DivW32HiLow(128, 0, 256));
}
TEST_F(SplTest, BasicArrayOperationsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, 1100};
int Bs[] = {2, 6, 66, 550};
WebRtc_UWord8 b8[kVectorSize];
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_UWord8 bTmp8[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
WebRtc_Word32 bTmp32[kVectorSize];
WebRtcSpl_MemSetW16(b16, 3, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(3, b16[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_ZerosArrayW16(b16, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(0, b16[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_OnesArrayW16(b16, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(1, b16[kk]);
}
WebRtcSpl_MemSetW32(b32, 3, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(3, b32[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_ZerosArrayW32(b32, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(0, b32[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_OnesArrayW32(b32, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(1, b32[kk]);
}
for (int kk = 0; kk < kVectorSize; ++kk) {
bTmp8[kk] = (WebRtc_Word8)kk;
bTmp16[kk] = (WebRtc_Word16)kk;
bTmp32[kk] = (WebRtc_Word32)kk;
}
WEBRTC_SPL_MEMCPY_W8(b8, bTmp8, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(b8[kk], bTmp8[kk]);
}
WEBRTC_SPL_MEMCPY_W16(b16, bTmp16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(b16[kk], bTmp16[kk]);
}
// WEBRTC_SPL_MEMCPY_W32(b32, bTmp32, kVectorSize);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(b32[kk], bTmp32[kk]);
// }
EXPECT_EQ(2, WebRtcSpl_CopyFromEndW16(b16, kVectorSize, 2, bTmp16));
for (int kk = 0; kk < 2; ++kk) {
EXPECT_EQ(kk+2, bTmp16[kk]);
}
for (int kk = 0; kk < kVectorSize; ++kk) {
b32[kk] = B[kk];
b16[kk] = (WebRtc_Word16)B[kk];
}
WebRtcSpl_VectorBitShiftW32ToW16(bTmp16, kVectorSize, b32, 1);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]>>1), bTmp16[kk]);
}
WebRtcSpl_VectorBitShiftW16(bTmp16, kVectorSize, b16, 1);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]>>1), bTmp16[kk]);
}
WebRtcSpl_VectorBitShiftW32(bTmp32, kVectorSize, b32, 1);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]>>1), bTmp32[kk]);
}
WebRtcSpl_MemCpyReversedOrder(&bTmp16[3], b16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(b16[3-kk], bTmp16[kk]);
}
}
TEST_F(SplTest, MinMaxOperationsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, -1100};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = B[kk];
b32[kk] = B[kk];
}
EXPECT_EQ(1100, WebRtcSpl_MaxAbsValueW16(b16, kVectorSize));
EXPECT_EQ(1100, WebRtcSpl_MaxAbsValueW32(b32, kVectorSize));
EXPECT_EQ(133, WebRtcSpl_MaxValueW16(b16, kVectorSize));
EXPECT_EQ(133, WebRtcSpl_MaxValueW32(b32, kVectorSize));
EXPECT_EQ(3, WebRtcSpl_MaxAbsIndexW16(b16, kVectorSize));
EXPECT_EQ(2, WebRtcSpl_MaxIndexW16(b16, kVectorSize));
EXPECT_EQ(2, WebRtcSpl_MaxIndexW32(b32, kVectorSize));
EXPECT_EQ(-1100, WebRtcSpl_MinValueW16(b16, kVectorSize));
EXPECT_EQ(-1100, WebRtcSpl_MinValueW32(b32, kVectorSize));
EXPECT_EQ(3, WebRtcSpl_MinIndexW16(b16, kVectorSize));
EXPECT_EQ(3, WebRtcSpl_MinIndexW32(b32, kVectorSize));
EXPECT_EQ(0, WebRtcSpl_GetScalingSquare(b16, kVectorSize, 1));
}
TEST_F(SplTest, VectorOperationsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, 1100};
WebRtc_Word16 a16[kVectorSize];
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
for (int kk = 0; kk < kVectorSize; ++kk) {
a16[kk] = B[kk];
b16[kk] = B[kk];
}
WebRtcSpl_AffineTransformVector(bTmp16, b16, 3, 7, 2, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]*3+7)>>2, bTmp16[kk]);
}
WebRtcSpl_ScaleAndAddVectorsWithRound(b16, 3, b16, 2, 2, bTmp16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]*3+B[kk]*2+2)>>2, bTmp16[kk]);
}
WebRtcSpl_AddAffineVectorToVector(bTmp16, b16, 3, 7, 2, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(((B[kk]*3+B[kk]*2+2)>>2)+((b16[kk]*3+7)>>2), bTmp16[kk]);
}
WebRtcSpl_CrossCorrelation(b32, b16, bTmp16, kVectorSize, 2, 2, 0);
for (int kk = 0; kk < 2; ++kk) {
EXPECT_EQ(614236, b32[kk]);
}
// EXPECT_EQ(, WebRtcSpl_DotProduct(b16, bTmp16, 4));
EXPECT_EQ(306962, WebRtcSpl_DotProductWithScale(b16, b16, kVectorSize, 2));
WebRtcSpl_ScaleVector(b16, bTmp16, 13, kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((b16[kk]*13)>>2, bTmp16[kk]);
}
WebRtcSpl_ScaleVectorWithSat(b16, bTmp16, 13, kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((b16[kk]*13)>>2, bTmp16[kk]);
}
WebRtcSpl_ScaleAndAddVectors(a16, 13, 2, b16, 7, 2, bTmp16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(((a16[kk]*13)>>2)+((b16[kk]*7)>>2), bTmp16[kk]);
}
WebRtcSpl_AddVectorsAndShift(bTmp16, a16, b16, kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(B[kk] >> 1, bTmp16[kk]);
}
WebRtcSpl_ReverseOrderMultArrayElements(bTmp16, a16, &b16[3], kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((a16[kk]*b16[3-kk])>>2, bTmp16[kk]);
}
WebRtcSpl_ElementwiseVectorMult(bTmp16, a16, b16, kVectorSize, 6);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((a16[kk]*b16[kk])>>6, bTmp16[kk]);
}
WebRtcSpl_SqrtOfOneMinusXSquared(b16, kVectorSize, bTmp16);
for (int kk = 0; kk < kVectorSize - 1; ++kk) {
EXPECT_EQ(32767, bTmp16[kk]);
}
EXPECT_EQ(32749, bTmp16[kVectorSize - 1]);
}
TEST_F(SplTest, EstimatorsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, 1100};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = B[kk];
b32[kk] = B[kk];
}
EXPECT_EQ(0, WebRtcSpl_LevinsonDurbin(b32, b16, bTmp16, 2));
}
TEST_F(SplTest, FilterTest) {
const int kVectorSize = 4;
WebRtc_Word16 A[] = {1, 2, 33, 100};
WebRtc_Word16 A5[] = {1, 2, 33, 100, -5};
WebRtc_Word16 B[] = {4, 12, 133, 110};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
WebRtc_Word16 bTmp16Low[kVectorSize];
WebRtc_Word16 bState[kVectorSize];
WebRtc_Word16 bStateLow[kVectorSize];
WebRtcSpl_ZerosArrayW16(bState, kVectorSize);
WebRtcSpl_ZerosArrayW16(bStateLow, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = A[kk];
}
// MA filters
WebRtcSpl_FilterMAFastQ12(b16, bTmp16, B, kVectorSize, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
//EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
}
// AR filters
WebRtcSpl_FilterARFastQ12(b16, bTmp16, A, kVectorSize, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_FilterAR(A5,
5,
b16,
kVectorSize,
bState,
kVectorSize,
bStateLow,
kVectorSize,
bTmp16,
bTmp16Low,
kVectorSize));
}
TEST_F(SplTest, RandTest) {
const int kVectorSize = 4;
WebRtc_Word16 BU[] = {3653, 12446, 8525, 30691};
WebRtc_Word16 BN[] = {3459, -11689, -258, -3738};
WebRtc_Word16 b16[kVectorSize];
WebRtc_UWord32 bSeed = 100000;
EXPECT_EQ(464449057, WebRtcSpl_IncreaseSeed(&bSeed));
EXPECT_EQ(31565, WebRtcSpl_RandU(&bSeed));
EXPECT_EQ(-9786, WebRtcSpl_RandN(&bSeed));
EXPECT_EQ(kVectorSize, WebRtcSpl_RandUArray(b16, kVectorSize, &bSeed));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(BU[kk], b16[kk]);
}
}
TEST_F(SplTest, SignalProcessingTest) {
const int kVectorSize = 4;
int A[] = {1, 2, 33, 100};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
WebRtc_Word32 bTmp32[kVectorSize];
int bScale = 0;
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = A[kk];
b32[kk] = A[kk];
}
EXPECT_EQ(2, WebRtcSpl_AutoCorrelation(b16, kVectorSize, 1, bTmp32, &bScale));
WebRtcSpl_ReflCoefToLpc(b16, kVectorSize, bTmp16);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
// }
WebRtcSpl_LpcToReflCoef(bTmp16, kVectorSize, b16);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(a16[kk], b16[kk]);
// }
WebRtcSpl_AutoCorrToReflCoef(b32, kVectorSize, bTmp16);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
// }
WebRtcSpl_GetHanningWindow(bTmp16, kVectorSize);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
// }
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = A[kk];
}
EXPECT_EQ(11094 , WebRtcSpl_Energy(b16, kVectorSize, &bScale));
EXPECT_EQ(0, bScale);
}
TEST_F(SplTest, FFTTest) {
WebRtc_Word16 B[] = {1, 2, 33, 100,
2, 3, 34, 101,
3, 4, 35, 102,
4, 5, 36, 103};
EXPECT_EQ(0, WebRtcSpl_ComplexFFT(B, 3, 1));
// for (int kk = 0; kk < 16; ++kk) {
// EXPECT_EQ(A[kk], B[kk]);
// }
EXPECT_EQ(0, WebRtcSpl_ComplexIFFT(B, 3, 1));
// for (int kk = 0; kk < 16; ++kk) {
// EXPECT_EQ(A[kk], B[kk]);
// }
WebRtcSpl_ComplexBitReverse(B, 3);
for (int kk = 0; kk < 16; ++kk) {
//EXPECT_EQ(A[kk], B[kk]);
}
}
int main(int argc, char** argv) {
::testing::InitGoogleTest(&argc, argv);
SplEnvironment* env = new SplEnvironment;
::testing::AddGlobalTestEnvironment(env);
return RUN_ALL_TESTS();
}

@@ -1,30 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file declares the SplTest fixture used by the SPL unit tests
* in signal_processing_unittest.cc.
*
*/
#ifndef WEBRTC_SPL_UNIT_TEST_H_
#define WEBRTC_SPL_UNIT_TEST_H_
#include <gtest/gtest.h>
class SplTest: public ::testing::Test
{
protected:
SplTest();
virtual void SetUp();
virtual void TearDown();
};
#endif // WEBRTC_SPL_UNIT_TEST_H_

@@ -1,15 +0,0 @@
noinst_LTLIBRARIES = libvad.la
libvad_la_SOURCES = main/interface/webrtc_vad.h \
main/source/webrtc_vad.c \
main/source/vad_core.c \
main/source/vad_core.h \
main/source/vad_defines.h \
main/source/vad_filterbank.c \
main/source/vad_filterbank.h \
main/source/vad_gmm.c \
main/source/vad_gmm.h \
main/source/vad_sp.c \
main/source/vad_sp.h
libvad_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface

@@ -1,159 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the VAD API calls. Specific function calls are given below.
*/
#ifndef WEBRTC_VAD_WEBRTC_VAD_H_
#define WEBRTC_VAD_WEBRTC_VAD_H_
#include "typedefs.h"
typedef struct WebRtcVadInst VadInst;
#ifdef __cplusplus
extern "C"
{
#endif
/****************************************************************************
* WebRtcVad_get_version(...)
*
* This function returns the version number of the code.
*
* Output:
* - version : Pointer to a buffer where the version info will
* be stored.
* Input:
* - size_bytes : Size of the buffer.
*
*/
WebRtc_Word16 WebRtcVad_get_version(char *version, size_t size_bytes);
/****************************************************************************
* WebRtcVad_AssignSize(...)
*
* This function gets the size needed for storing a VAD instance.
*
* Input/Output:
* - size_in_bytes : Pointer to integer where the size is returned
*
* Return value : 0
*/
WebRtc_Word16 WebRtcVad_AssignSize(int *size_in_bytes);
/****************************************************************************
* WebRtcVad_Assign(...)
*
* This function assigns memory for the instance.
*
* Input:
* - vad_inst_addr : Address to where to assign memory
* Output:
* - vad_inst : Pointer to the instance that should be created
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Assign(VadInst **vad_inst, void *vad_inst_addr);
/****************************************************************************
* WebRtcVad_Create(...)
*
* This function creates an instance of the VAD structure
*
* Input:
* - vad_inst : Pointer to VAD instance that should be created
*
* Output:
* - vad_inst : Pointer to created VAD instance
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Create(VadInst **vad_inst);
/****************************************************************************
* WebRtcVad_Free(...)
*
* This function frees the dynamic memory of a specified VAD instance
*
* Input:
* - vad_inst : Pointer to VAD instance that should be freed
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Free(VadInst *vad_inst);
/****************************************************************************
* WebRtcVad_Init(...)
*
* This function initializes a VAD instance
*
* Input:
* - vad_inst : Instance that should be initialized
*
* Output:
* - vad_inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Init(VadInst *vad_inst);
/****************************************************************************
* WebRtcVad_set_mode(...)
*
* This function sets the aggressiveness mode of a VAD instance
*
* Input:
* - vad_inst : VAD instance
* - mode : Aggressiveness setting (0, 1, 2, or 3)
*
* Output:
* - vad_inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_set_mode(VadInst *vad_inst, WebRtc_Word16 mode);
/****************************************************************************
* WebRtcVad_Process(...)
*
* This function performs VAD on the provided speech frame
*
* Input
* - vad_inst : VAD instance. Needs to be initialized before the call.
* - fs : sampling frequency (Hz): 8000, 16000, or 32000
* - speech_frame : Pointer to speech frame buffer
* - frame_length : Length of speech frame buffer in number of samples
*
* Output:
* - vad_inst : Updated VAD instance
*
* Return value : 1 - Active Voice
* 0 - Non-active Voice
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Process(VadInst *vad_inst,
WebRtc_Word16 fs,
WebRtc_Word16 *speech_frame,
WebRtc_Word16 frame_length);
#ifdef __cplusplus
}
#endif
#endif // WEBRTC_VAD_WEBRTC_VAD_H_
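For orientation, here is a minimal usage sketch of the API declared above (added for illustration, not part of the upstream sources; it assumes an 80-sample, 10 ms frame at 8 kHz, one of the frame sizes handled by the core implementation):

#include "webrtc_vad.h"

/* Classify one speech frame. Returns 1 (active voice), 0 (non-active voice)
 * or -1 on error, as documented for WebRtcVad_Process() above. */
WebRtc_Word16 classify_frame(WebRtc_Word16 *frame, WebRtc_Word16 frame_length)
{
    VadInst *vad = NULL;
    WebRtc_Word16 result = -1;
    if (WebRtcVad_Create(&vad) == 0 &&
        WebRtcVad_Init(vad) == 0 &&
        WebRtcVad_set_mode(vad, 2) == 0) /* 2 = aggressive */
    {
        result = WebRtcVad_Process(vad, 8000, frame, frame_length);
    }
    if (vad != NULL)
    {
        WebRtcVad_Free(vad);
    }
    return result;
}

Calling classify_frame(buffer, 80) would then classify a single 10 ms frame at 8 kHz.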

@@ -1,46 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'vad',
'type': '<(library)',
'dependencies': [
'spl',
],
'include_dirs': [
'../interface',
],
'direct_dependent_settings': {
'include_dirs': [
'../interface',
],
},
'sources': [
'../interface/webrtc_vad.h',
'webrtc_vad.c',
'vad_core.c',
'vad_core.h',
'vad_defines.h',
'vad_filterbank.c',
'vad_filterbank.h',
'vad_gmm.c',
'vad_gmm.h',
'vad_sp.c',
'vad_sp.h',
],
},
],
}
# Local Variables:
# tab-width:2
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=2 shiftwidth=2:

@@ -1,723 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the core functionality in VAD.
* For function description, see vad_core.h.
*/
#include "vad_core.h"
#include "signal_processing_library.h"
#include "typedefs.h"
#include "vad_defines.h"
#include "vad_filterbank.h"
#include "vad_gmm.h"
#include "vad_sp.h"
// Spectrum Weighting
static const WebRtc_Word16 kSpectrumWeight[6] = { 6, 8, 10, 12, 14, 16 };
static const WebRtc_Word16 kNoiseUpdateConst = 655; // Q15
static const WebRtc_Word16 kSpeechUpdateConst = 6554; // Q15
static const WebRtc_Word16 kBackEta = 154; // Q8
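// Note (added for clarity): in Q15, 655/32768 is roughly 0.02 and 6554/32768
// roughly 0.2, so the noise means are updated about ten times more slowly than
// the speech means; kBackEta in Q8 is 154/256, roughly 0.6.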
// Minimum difference between the two models, Q5
static const WebRtc_Word16 kMinimumDifference[6] = {
544, 544, 576, 576, 576, 576 };
// Upper limit of mean value for speech model, Q7
static const WebRtc_Word16 kMaximumSpeech[6] = {
11392, 11392, 11520, 11520, 11520, 11520 };
// Minimum value for mean value
static const WebRtc_Word16 kMinimumMean[2] = { 640, 768 };
// Upper limit of mean value for noise model, Q7
static const WebRtc_Word16 kMaximumNoise[6] = {
9216, 9088, 8960, 8832, 8704, 8576 };
// Start values for the Gaussian models, Q7
// Weights for the two Gaussians for the six channels (noise)
static const WebRtc_Word16 kNoiseDataWeights[12] = {
34, 62, 72, 66, 53, 25, 94, 66, 56, 62, 75, 103 };
// Weights for the two Gaussians for the six channels (speech)
static const WebRtc_Word16 kSpeechDataWeights[12] = {
48, 82, 45, 87, 50, 47, 80, 46, 83, 41, 78, 81 };
// Means for the two Gaussians for the six channels (noise)
static const WebRtc_Word16 kNoiseDataMeans[12] = {
6738, 4892, 7065, 6715, 6771, 3369, 7646, 3863, 7820, 7266, 5020, 4362 };
// Means for the two Gaussians for the six channels (speech)
static const WebRtc_Word16 kSpeechDataMeans[12] = {
8306, 10085, 10078, 11823, 11843, 6309, 9473, 9571, 10879, 7581, 8180, 7483
};
// Stds for the two Gaussians for the six channels (noise)
static const WebRtc_Word16 kNoiseDataStds[12] = {
378, 1064, 493, 582, 688, 593, 474, 697, 475, 688, 421, 455 };
// Stds for the two Gaussians for the six channels (speech)
static const WebRtc_Word16 kSpeechDataStds[12] = {
555, 505, 567, 524, 585, 1231, 509, 828, 492, 1540, 1079, 850 };
static const int kInitCheck = 42;
// Initialize VAD
int WebRtcVad_InitCore(VadInstT *inst, short mode)
{
int i;
// Initialization of struct
inst->vad = 1;
inst->frame_counter = 0;
inst->over_hang = 0;
inst->num_of_speech = 0;
// Initialization of downsampling filter state
inst->downsampling_filter_states[0] = 0;
inst->downsampling_filter_states[1] = 0;
inst->downsampling_filter_states[2] = 0;
inst->downsampling_filter_states[3] = 0;
// Read initial PDF parameters
for (i = 0; i < NUM_TABLE_VALUES; i++)
{
inst->noise_means[i] = kNoiseDataMeans[i];
inst->speech_means[i] = kSpeechDataMeans[i];
inst->noise_stds[i] = kNoiseDataStds[i];
inst->speech_stds[i] = kSpeechDataStds[i];
}
// Index and Minimum value vectors are initialized
for (i = 0; i < 16 * NUM_CHANNELS; i++)
{
inst->low_value_vector[i] = 10000;
inst->index_vector[i] = 0;
}
for (i = 0; i < 5; i++)
{
inst->upper_state[i] = 0;
inst->lower_state[i] = 0;
}
for (i = 0; i < 4; i++)
{
inst->hp_filter_state[i] = 0;
}
// Init mean value memory, for FindMin function
inst->mean_value[0] = 1600;
inst->mean_value[1] = 1600;
inst->mean_value[2] = 1600;
inst->mean_value[3] = 1600;
inst->mean_value[4] = 1600;
inst->mean_value[5] = 1600;
if (mode == 0)
{
// Quality mode
inst->over_hang_max_1[0] = OHMAX1_10MS_Q; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_Q; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_Q; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_Q; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_Q; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_Q; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_Q;
inst->individual[1] = INDIVIDUAL_20MS_Q;
inst->individual[2] = INDIVIDUAL_30MS_Q;
inst->total[0] = TOTAL_10MS_Q;
inst->total[1] = TOTAL_20MS_Q;
inst->total[2] = TOTAL_30MS_Q;
} else if (mode == 1)
{
// Low bitrate mode
inst->over_hang_max_1[0] = OHMAX1_10MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_LBR; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_LBR; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_LBR;
inst->individual[1] = INDIVIDUAL_20MS_LBR;
inst->individual[2] = INDIVIDUAL_30MS_LBR;
inst->total[0] = TOTAL_10MS_LBR;
inst->total[1] = TOTAL_20MS_LBR;
inst->total[2] = TOTAL_30MS_LBR;
} else if (mode == 2)
{
// Aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_AGG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_AGG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_AGG;
inst->individual[1] = INDIVIDUAL_20MS_AGG;
inst->individual[2] = INDIVIDUAL_30MS_AGG;
inst->total[0] = TOTAL_10MS_AGG;
inst->total[1] = TOTAL_20MS_AGG;
inst->total[2] = TOTAL_30MS_AGG;
} else
{
// Very aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_VAG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_VAG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_VAG;
inst->individual[1] = INDIVIDUAL_20MS_VAG;
inst->individual[2] = INDIVIDUAL_30MS_VAG;
inst->total[0] = TOTAL_10MS_VAG;
inst->total[1] = TOTAL_20MS_VAG;
inst->total[2] = TOTAL_30MS_VAG;
}
inst->init_flag = kInitCheck;
return 0;
}
// Set aggressiveness mode
int WebRtcVad_set_mode_core(VadInstT *inst, short mode)
{
if (mode == 0)
{
// Quality mode
inst->over_hang_max_1[0] = OHMAX1_10MS_Q; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_Q; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_Q; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_Q; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_Q; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_Q; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_Q;
inst->individual[1] = INDIVIDUAL_20MS_Q;
inst->individual[2] = INDIVIDUAL_30MS_Q;
inst->total[0] = TOTAL_10MS_Q;
inst->total[1] = TOTAL_20MS_Q;
inst->total[2] = TOTAL_30MS_Q;
} else if (mode == 1)
{
// Low bitrate mode
inst->over_hang_max_1[0] = OHMAX1_10MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_LBR; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_LBR; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_LBR;
inst->individual[1] = INDIVIDUAL_20MS_LBR;
inst->individual[2] = INDIVIDUAL_30MS_LBR;
inst->total[0] = TOTAL_10MS_LBR;
inst->total[1] = TOTAL_20MS_LBR;
inst->total[2] = TOTAL_30MS_LBR;
} else if (mode == 2)
{
// Aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_AGG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_AGG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_AGG;
inst->individual[1] = INDIVIDUAL_20MS_AGG;
inst->individual[2] = INDIVIDUAL_30MS_AGG;
inst->total[0] = TOTAL_10MS_AGG;
inst->total[1] = TOTAL_20MS_AGG;
inst->total[2] = TOTAL_30MS_AGG;
} else if (mode == 3)
{
// Very aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_VAG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_VAG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_VAG;
inst->individual[1] = INDIVIDUAL_20MS_VAG;
inst->individual[2] = INDIVIDUAL_30MS_VAG;
inst->total[0] = TOTAL_10MS_VAG;
inst->total[1] = TOTAL_20MS_VAG;
inst->total[2] = TOTAL_30MS_VAG;
} else
{
return -1;
}
return 0;
}
// Calculate the VAD decision by first extracting feature values and then
// calculating the probability of both speech and background noise.
WebRtc_Word16 WebRtcVad_CalcVad32khz(VadInstT *inst, WebRtc_Word16 *speech_frame,
int frame_length)
{
WebRtc_Word16 len, vad;
WebRtc_Word16 speechWB[480]; // Result of downsampling the 960-sample (30 ms SWB) input once
WebRtc_Word16 speechNB[240]; // Result of downsampling speechWB again, to 240 samples (8 kHz)
// Downsample signal 32->16->8 before doing VAD
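// Note (added for clarity): each WebRtcVad_Downsampling() pass halves the
// sample rate, so a 30 ms super-wideband frame goes 960 -> 480 -> 240 samples
// before the 8 kHz VAD below; hence frame_length is right-shifted by one after
// each stage.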
WebRtcVad_Downsampling(speech_frame, speechWB, &(inst->downsampling_filter_states[2]),
frame_length);
len = WEBRTC_SPL_RSHIFT_W16(frame_length, 1);
WebRtcVad_Downsampling(speechWB, speechNB, inst->downsampling_filter_states, len);
len = WEBRTC_SPL_RSHIFT_W16(len, 1);
// Do VAD on an 8 kHz signal
vad = WebRtcVad_CalcVad8khz(inst, speechNB, len);
return vad;
}
WebRtc_Word16 WebRtcVad_CalcVad16khz(VadInstT *inst, WebRtc_Word16 *speech_frame,
int frame_length)
{
WebRtc_Word16 len, vad;
WebRtc_Word16 speechNB[240]; // Result of downsampling the 480-sample (30 ms WB) input
// Wideband: Downsample signal before doing VAD
WebRtcVad_Downsampling(speech_frame, speechNB, inst->downsampling_filter_states,
frame_length);
len = WEBRTC_SPL_RSHIFT_W16(frame_length, 1);
vad = WebRtcVad_CalcVad8khz(inst, speechNB, len);
return vad;
}
WebRtc_Word16 WebRtcVad_CalcVad8khz(VadInstT *inst, WebRtc_Word16 *speech_frame,
int frame_length)
{
WebRtc_Word16 feature_vector[NUM_CHANNELS], total_power;
// Get power in the bands
total_power = WebRtcVad_get_features(inst, speech_frame, frame_length, feature_vector);
// Make a VAD
inst->vad = WebRtcVad_GmmProbability(inst, feature_vector, total_power, frame_length);
return inst->vad;
}
// Calculate probability for both speech and background noise, and perform a
// hypothesis-test.
WebRtc_Word16 WebRtcVad_GmmProbability(VadInstT *inst, WebRtc_Word16 *feature_vector,
WebRtc_Word16 total_power, int frame_length)
{
int n, k;
WebRtc_Word16 backval;
WebRtc_Word16 h0, h1;
WebRtc_Word16 ratvec, xval;
WebRtc_Word16 vadflag;
WebRtc_Word16 shifts0, shifts1;
WebRtc_Word16 tmp16, tmp16_1, tmp16_2;
WebRtc_Word16 diff, nr, pos;
WebRtc_Word16 nmk, nmk2, nmk3, smk, smk2, nsk, ssk;
WebRtc_Word16 delt, ndelt;
WebRtc_Word16 maxspe, maxmu;
WebRtc_Word16 deltaN[NUM_TABLE_VALUES], deltaS[NUM_TABLE_VALUES];
WebRtc_Word16 ngprvec[NUM_TABLE_VALUES], sgprvec[NUM_TABLE_VALUES];
WebRtc_Word32 h0test, h1test;
WebRtc_Word32 tmp32_1, tmp32_2;
WebRtc_Word32 dotVal;
WebRtc_Word32 nmid, smid;
WebRtc_Word32 probn[NUM_MODELS], probs[NUM_MODELS];
WebRtc_Word16 *nmean1ptr, *nmean2ptr, *smean1ptr, *smean2ptr, *nstd1ptr, *nstd2ptr,
*sstd1ptr, *sstd2ptr;
WebRtc_Word16 overhead1, overhead2, individualTest, totalTest;
// Set the thresholds to different values based on frame length
if (frame_length == 80)
{
// 80 input samples
overhead1 = inst->over_hang_max_1[0];
overhead2 = inst->over_hang_max_2[0];
individualTest = inst->individual[0];
totalTest = inst->total[0];
} else if (frame_length == 160)
{
// 160 input samples
overhead1 = inst->over_hang_max_1[1];
overhead2 = inst->over_hang_max_2[1];
individualTest = inst->individual[1];
totalTest = inst->total[1];
} else
{
// 240 input samples
overhead1 = inst->over_hang_max_1[2];
overhead2 = inst->over_hang_max_2[2];
individualTest = inst->individual[2];
totalTest = inst->total[2];
}
if (total_power > MIN_ENERGY)
{ // If signal present at all
// Set pointers to the gaussian parameters
nmean1ptr = &inst->noise_means[0];
nmean2ptr = &inst->noise_means[NUM_CHANNELS];
smean1ptr = &inst->speech_means[0];
smean2ptr = &inst->speech_means[NUM_CHANNELS];
nstd1ptr = &inst->noise_stds[0];
nstd2ptr = &inst->noise_stds[NUM_CHANNELS];
sstd1ptr = &inst->speech_stds[0];
sstd2ptr = &inst->speech_stds[NUM_CHANNELS];
vadflag = 0;
dotVal = 0;
for (n = 0; n < NUM_CHANNELS; n++)
{ // For all channels
pos = WEBRTC_SPL_LSHIFT_W16(n, 1);
xval = feature_vector[n];
// Probability for Noise, Q7 * Q20 = Q27
tmp32_1 = WebRtcVad_GaussianProbability(xval, *nmean1ptr++, *nstd1ptr++,
&deltaN[pos]);
probn[0] = (WebRtc_Word32)(kNoiseDataWeights[n] * tmp32_1);
tmp32_1 = WebRtcVad_GaussianProbability(xval, *nmean2ptr++, *nstd2ptr++,
&deltaN[pos + 1]);
probn[1] = (WebRtc_Word32)(kNoiseDataWeights[n + NUM_CHANNELS] * tmp32_1);
h0test = probn[0] + probn[1]; // Q27
h0 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(h0test, 12); // Q15
// Probability for Speech
tmp32_1 = WebRtcVad_GaussianProbability(xval, *smean1ptr++, *sstd1ptr++,
&deltaS[pos]);
probs[0] = (WebRtc_Word32)(kSpeechDataWeights[n] * tmp32_1);
tmp32_1 = WebRtcVad_GaussianProbability(xval, *smean2ptr++, *sstd2ptr++,
&deltaS[pos + 1]);
probs[1] = (WebRtc_Word32)(kSpeechDataWeights[n + NUM_CHANNELS] * tmp32_1);
h1test = probs[0] + probs[1]; // Q27
h1 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(h1test, 12); // Q15
// Get likelihood ratio. Approximate log2(H1/H0) with shifts0 - shifts1
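// Note (added for clarity, consistent with the SPL unit tests earlier in this
// diff): WebRtcSpl_NormW32(x) returns the number of left shifts needed to
// normalise a positive x, roughly 30 - floor(log2(x)), so shifts0 - shifts1
// approximates log2(h1test / h0test) in whole bits.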
shifts0 = WebRtcSpl_NormW32(h0test);
shifts1 = WebRtcSpl_NormW32(h1test);
if ((h0test > 0) && (h1test > 0))
{
ratvec = shifts0 - shifts1;
} else if (h1test > 0)
{
ratvec = 31 - shifts1;
} else if (h0test > 0)
{
ratvec = shifts0 - 31;
} else
{
ratvec = 0;
}
// VAD decision with spectrum weighting
dotVal += WEBRTC_SPL_MUL_16_16(ratvec, kSpectrumWeight[n]);
// Individual channel test
if ((ratvec << 2) > individualTest)
{
vadflag = 1;
}
// Probabilities used when updating model
if (h0 > 0)
{
tmp32_1 = probn[0] & 0xFFFFF000; // Q27
tmp32_2 = WEBRTC_SPL_LSHIFT_W32(tmp32_1, 2); // Q29
ngprvec[pos] = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_2, h0);
ngprvec[pos + 1] = 16384 - ngprvec[pos];
} else
{
ngprvec[pos] = 16384;
ngprvec[pos + 1] = 0;
}
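// Note (added for clarity): 16384 represents 1.0 in Q14, so ngprvec[] (and
// sgprvec[] below) hold the per-Gaussian responsibilities used when updating
// the model further down.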
// Probabilities used when updating model
if (h1 > 0)
{
tmp32_1 = probs[0] & 0xFFFFF000;
tmp32_2 = WEBRTC_SPL_LSHIFT_W32(tmp32_1, 2);
sgprvec[pos] = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_2, h1);
sgprvec[pos + 1] = 16384 - sgprvec[pos];
} else
{
sgprvec[pos] = 0;
sgprvec[pos + 1] = 0;
}
}
// Overall test
if (dotVal >= totalTest)
{
vadflag |= 1;
}
// Set pointers to the means and standard deviations.
nmean1ptr = &inst->noise_means[0];
smean1ptr = &inst->speech_means[0];
nstd1ptr = &inst->noise_stds[0];
sstd1ptr = &inst->speech_stds[0];
maxspe = 12800;
// Update the model's parameters
for (n = 0; n < NUM_CHANNELS; n++)
{
pos = WEBRTC_SPL_LSHIFT_W16(n, 1);
// Get min value in past which is used for long term correction
backval = WebRtcVad_FindMinimum(inst, feature_vector[n], n); // Q4
// Compute the "global" mean, i.e. the weighted sum of the two Gaussian means
nmid = WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n], *nmean1ptr); // Q7 * Q7
nmid += WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n+NUM_CHANNELS],
*(nmean1ptr+NUM_CHANNELS));
tmp16_1 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(nmid, 6); // Q8
for (k = 0; k < NUM_MODELS; k++)
{
nr = pos + k;
nmean2ptr = nmean1ptr + k * NUM_CHANNELS;
smean2ptr = smean1ptr + k * NUM_CHANNELS;
nstd2ptr = nstd1ptr + k * NUM_CHANNELS;
sstd2ptr = sstd1ptr + k * NUM_CHANNELS;
nmk = *nmean2ptr;
smk = *smean2ptr;
nsk = *nstd2ptr;
ssk = *sstd2ptr;
// Update noise mean vector if the frame consists of noise only
nmk2 = nmk;
if (!vadflag)
{
// deltaN = (x-mu)/sigma^2
// ngprvec[k] = probn[k]/(probn[0] + probn[1])
delt = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(ngprvec[nr],
deltaN[nr], 11); // Q14*Q11
nmk2 = nmk + (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(delt,
kNoiseUpdateConst,
22); // Q7+(Q14*Q15>>22)
}
// Long term correction of the noise mean
ndelt = WEBRTC_SPL_LSHIFT_W16(backval, 4);
ndelt -= tmp16_1; // Q8 - Q8
nmk3 = nmk2 + (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(ndelt,
kBackEta,
9); // Q7+(Q8*Q8)>>9
// Control that the noise mean does not drift too much
tmp16 = WEBRTC_SPL_LSHIFT_W16(k+5, 7);
if (nmk3 < tmp16)
nmk3 = tmp16;
tmp16 = WEBRTC_SPL_LSHIFT_W16(72+k-n, 7);
if (nmk3 > tmp16)
nmk3 = tmp16;
*nmean2ptr = nmk3;
if (vadflag)
{
// Update speech mean vector:
// deltaS = (x-mu)/sigma^2
// sgprvec[k] = probs[k]/(probs[0] + probs[1])
delt = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(sgprvec[nr],
deltaS[nr],
11); // (Q14*Q11)>>11=Q14
tmp16 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(delt,
kSpeechUpdateConst,
21) + 1;
smk2 = smk + (tmp16 >> 1); // Q7 + (Q14 * Q15 >> 22)
// Control that the speech mean does not drift too much
maxmu = maxspe + 640;
if (smk2 < kMinimumMean[k])
smk2 = kMinimumMean[k];
if (smk2 > maxmu)
smk2 = maxmu;
*smean2ptr = smk2;
// (Q7>>3) = Q4
tmp16 = WEBRTC_SPL_RSHIFT_W16((smk + 4), 3);
tmp16 = feature_vector[n] - tmp16; // Q4
tmp32_1 = WEBRTC_SPL_MUL_16_16_RSFT(deltaS[nr], tmp16, 3);
tmp32_2 = tmp32_1 - (WebRtc_Word32)4096; // Q12
tmp16 = WEBRTC_SPL_RSHIFT_W16((sgprvec[nr]), 2);
tmp32_1 = (WebRtc_Word32)(tmp16 * tmp32_2);// (Q15>>3)*(Q14>>2)=Q12*Q12=Q24
tmp32_2 = WEBRTC_SPL_RSHIFT_W32(tmp32_1, 4); // Q20
// 0.1 * Q20 / Q7 = Q13
if (tmp32_2 > 0)
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_2, ssk * 10);
else
{
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(-tmp32_2, ssk * 10);
tmp16 = -tmp16;
}
// divide by 4 giving an update factor of 0.025
tmp16 += 128; // Rounding
ssk += WEBRTC_SPL_RSHIFT_W16(tmp16, 8);
// Division with 8 plus Q7
if (ssk < MIN_STD)
ssk = MIN_STD;
*sstd2ptr = ssk;
} else
{
// Update GMM variance vectors
// deltaN * (feature_vector[n] - nmk) - 1, Q11 * Q4
tmp16 = feature_vector[n] - WEBRTC_SPL_RSHIFT_W16(nmk, 3);
// (Q15>>3) * (Q14>>2) = Q12 * Q12 = Q24
tmp32_1 = WEBRTC_SPL_MUL_16_16_RSFT(deltaN[nr], tmp16, 3) - 4096;
tmp16 = WEBRTC_SPL_RSHIFT_W16((ngprvec[nr]+2), 2);
tmp32_2 = (WebRtc_Word32)(tmp16 * tmp32_1);
tmp32_1 = WEBRTC_SPL_RSHIFT_W32(tmp32_2, 14);
// Q20 * approx 0.001 (2^-10=0.0009766)
// Q20 / Q7 = Q13
if (tmp32_1 > 0)
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_1, nsk);
else
{
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(-tmp32_1, nsk);
tmp16 = -tmp16;
}
tmp16 += 32; // Rounding
nsk += WEBRTC_SPL_RSHIFT_W16(tmp16, 6);
if (nsk < MIN_STD)
nsk = MIN_STD;
*nstd2ptr = nsk;
}
}
// Separate models if they are too close - nmid in Q14
nmid = WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n], *nmean1ptr);
nmid += WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n+NUM_CHANNELS], *nmean2ptr);
// smid in Q14
smid = WEBRTC_SPL_MUL_16_16(kSpeechDataWeights[n], *smean1ptr);
smid += WEBRTC_SPL_MUL_16_16(kSpeechDataWeights[n+NUM_CHANNELS], *smean2ptr);
// diff = "global" speech mean - "global" noise mean
diff = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(smid, 9);
tmp16 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(nmid, 9);
diff -= tmp16;
if (diff < kMinimumDifference[n])
{
tmp16 = kMinimumDifference[n] - diff; // Q5
// tmp16_1 = ~0.8 * (kMinimumDifference - diff) in Q7
// tmp16_2 = ~0.2 * (kMinimumDifference - diff) in Q7
tmp16_1 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(13, tmp16, 2);
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(3, tmp16, 2);
// First Gauss, speech model
tmp16 = tmp16_1 + *smean1ptr;
*smean1ptr = tmp16;
smid = WEBRTC_SPL_MUL_16_16(tmp16, kSpeechDataWeights[n]);
// Second Gauss, speech model
tmp16 = tmp16_1 + *smean2ptr;
*smean2ptr = tmp16;
smid += WEBRTC_SPL_MUL_16_16(tmp16, kSpeechDataWeights[n+NUM_CHANNELS]);
// First Gauss, noise model
tmp16 = *nmean1ptr - tmp16_2;
*nmean1ptr = tmp16;
nmid = WEBRTC_SPL_MUL_16_16(tmp16, kNoiseDataWeights[n]);
// Second Gauss, noise model
tmp16 = *nmean2ptr - tmp16_2;
*nmean2ptr = tmp16;
nmid += WEBRTC_SPL_MUL_16_16(tmp16, kNoiseDataWeights[n+NUM_CHANNELS]);
}
// Control that the speech & noise means do not drift too much
maxspe = kMaximumSpeech[n];
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(smid, 7);
if (tmp16_2 > maxspe)
{ // Upper limit of speech model
tmp16_2 -= maxspe;
*smean1ptr -= tmp16_2;
*smean2ptr -= tmp16_2;
}
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(nmid, 7);
if (tmp16_2 > kMaximumNoise[n])
{
tmp16_2 -= kMaximumNoise[n];
*nmean1ptr -= tmp16_2;
*nmean2ptr -= tmp16_2;
}
nmean1ptr++;
smean1ptr++;
nstd1ptr++;
sstd1ptr++;
}
inst->frame_counter++;
} else
{
vadflag = 0;
}
// Hangover smoothing
if (!vadflag)
{
if (inst->over_hang > 0)
{
vadflag = 2 + inst->over_hang;
inst->over_hang = inst->over_hang - 1;
}
inst->num_of_speech = 0;
} else
{
inst->num_of_speech = inst->num_of_speech + 1;
if (inst->num_of_speech > NSP_MAX)
{
inst->num_of_speech = NSP_MAX;
inst->over_hang = overhead2;
} else
inst->over_hang = overhead1;
}
return vadflag;
}
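The likelihood-ratio step in WebRtcVad_GmmProbability() above leans on WebRtcSpl_NormW32(), which returns how many left shifts are needed to normalize a 32-bit value; the difference of two such counts is an integer approximation of log2(H1/H0). A standalone sketch of that idea (norm_w32() below is a simplified stand-in for the real routine and only handles positive inputs):
#include <stdio.h>
#include <stdint.h>
// Count how many left shifts are needed until bit 30 is set (positive input).
static int norm_w32(int32_t a)
{
    int shifts = 0;
    while ((a & 0x40000000) == 0)
    {
        a <<= 1;
        shifts++;
    }
    return shifts;
}
int main(void)
{
    int32_t h0 = 1 << 10;  // hypothetical noise likelihood, Q20
    int32_t h1 = 1 << 14;  // hypothetical speech likelihood, Q20
    int ratvec = norm_w32(h0) - norm_w32(h1);  // approximates log2(h1/h0)
    printf("approx log2(h1/h0) = %d\n", ratvec);  // prints 4
    return 0;
}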

View File

@ -1,132 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the descriptions of the core VAD calls.
*/
#ifndef WEBRTC_VAD_CORE_H_
#define WEBRTC_VAD_CORE_H_
#include "typedefs.h"
#include "vad_defines.h"
typedef struct VadInstT_
{
WebRtc_Word16 vad;
WebRtc_Word32 downsampling_filter_states[4];
WebRtc_Word16 noise_means[NUM_TABLE_VALUES];
WebRtc_Word16 speech_means[NUM_TABLE_VALUES];
WebRtc_Word16 noise_stds[NUM_TABLE_VALUES];
WebRtc_Word16 speech_stds[NUM_TABLE_VALUES];
WebRtc_Word32 frame_counter;
WebRtc_Word16 over_hang; // Over Hang
WebRtc_Word16 num_of_speech;
WebRtc_Word16 index_vector[16 * NUM_CHANNELS];
WebRtc_Word16 low_value_vector[16 * NUM_CHANNELS];
WebRtc_Word16 mean_value[NUM_CHANNELS];
WebRtc_Word16 upper_state[5];
WebRtc_Word16 lower_state[5];
WebRtc_Word16 hp_filter_state[4];
WebRtc_Word16 over_hang_max_1[3];
WebRtc_Word16 over_hang_max_2[3];
WebRtc_Word16 individual[3];
WebRtc_Word16 total[3];
short init_flag;
} VadInstT;
/****************************************************************************
* WebRtcVad_InitCore(...)
*
* This function initializes a VAD instance
*
* Input:
* - inst : Instance that should be initialized
* - mode : Aggressiveness degree
* 0 (High quality) - 3 (Highly aggressive)
*
* Output:
* - inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcVad_InitCore(VadInstT* inst, short mode);
/****************************************************************************
* WebRtcVad_set_mode_core(...)
*
* This function changes the VAD settings
*
* Input:
* - inst : VAD instance
* - mode : Aggressiveness degree
* 0 (High quality) - 3 (Highly aggressive)
*
* Output:
* - inst : Changed instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcVad_set_mode_core(VadInstT* inst, short mode);
/****************************************************************************
* WebRtcVad_CalcVad32khz(...)
* WebRtcVad_CalcVad16khz(...)
* WebRtcVad_CalcVad8khz(...)
*
* Calculate probability for active speech and make VAD decision.
*
* Input:
* - inst : Pointer to initialized VAD instance
* - speech_frame : Input speech frame
* - frame_length : Number of input samples
*
* Output:
* - inst : Updated filter states etc.
*
* Return value : VAD decision
* 0 - No active speech
* 1-6 - Active speech
*/
WebRtc_Word16 WebRtcVad_CalcVad32khz(VadInstT* inst, WebRtc_Word16* speech_frame,
int frame_length);
WebRtc_Word16 WebRtcVad_CalcVad16khz(VadInstT* inst, WebRtc_Word16* speech_frame,
int frame_length);
WebRtc_Word16 WebRtcVad_CalcVad8khz(VadInstT* inst, WebRtc_Word16* speech_frame,
int frame_length);
/****************************************************************************
* WebRtcVad_GmmProbability(...)
*
* This function calculates the probabilities for background noise and
* speech using Gaussian Mixture Models. A hypothesis-test is performed to decide
* which type of signal is most probable.
*
* Input:
* - inst : Pointer to VAD instance
* - feature_vector : Feature vector = log10(energy in frequency band)
* - total_power : Total power in frame.
* - frame_length : Number of input samples
*
* Output:
* VAD decision : 0 - noise, 1 - speech
*
*/
WebRtc_Word16 WebRtcVad_GmmProbability(VadInstT* inst, WebRtc_Word16* feature_vector,
WebRtc_Word16 total_power, int frame_length);
#endif // WEBRTC_VAD_CORE_H_
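A minimal sketch of how these core-level calls fit together, assuming a 10 ms frame at 8 kHz (80 samples); initialisation would normally be done once rather than per frame, and error handling is collapsed:
#include "vad_core.h"
// Hypothetical driver for the core layer. Returns 0 for noise,
// 1-6 for active speech, -1 on error.
WebRtc_Word16 core_classify(VadInstT *inst, WebRtc_Word16 *frame80)
{
    if (WebRtcVad_InitCore(inst, 0) != 0)       // mode 0 = high quality
    {
        return -1;
    }
    if (WebRtcVad_set_mode_core(inst, 3) != 0)  // mode 3 = highly aggressive
    {
        return -1;
    }
    return WebRtcVad_CalcVad8khz(inst, frame80, 80);
}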

View File

@ -1,95 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the macros used in VAD.
*/
#ifndef WEBRTC_VAD_DEFINES_H_
#define WEBRTC_VAD_DEFINES_H_
#define NUM_CHANNELS 6 // Six frequency bands
#define NUM_MODELS 2 // Number of Gaussian models
#define NUM_TABLE_VALUES NUM_CHANNELS * NUM_MODELS
#define MIN_ENERGY 10
#define ALPHA1 6553 // 0.2 in Q15
#define ALPHA2 32439 // 0.99 in Q15
#define NSP_MAX 6 // Maximum number of VAD=1 frames in a row counted
#define MIN_STD 384 // Minimum standard deviation
// Mode 0, Quality thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_Q 24
#define INDIVIDUAL_20MS_Q 21 // (log10(2)*66)<<2 ~=16
#define INDIVIDUAL_30MS_Q 24
#define TOTAL_10MS_Q 57
#define TOTAL_20MS_Q 48
#define TOTAL_30MS_Q 57
#define OHMAX1_10MS_Q 8 // Max Overhang 1
#define OHMAX2_10MS_Q 14 // Max Overhang 2
#define OHMAX1_20MS_Q 4 // Max Overhang 1
#define OHMAX2_20MS_Q 7 // Max Overhang 2
#define OHMAX1_30MS_Q 3
#define OHMAX2_30MS_Q 5
// Mode 1, Low bitrate thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_LBR 37
#define INDIVIDUAL_20MS_LBR 32
#define INDIVIDUAL_30MS_LBR 37
#define TOTAL_10MS_LBR 100
#define TOTAL_20MS_LBR 80
#define TOTAL_30MS_LBR 100
#define OHMAX1_10MS_LBR 8 // Max Overhang 1
#define OHMAX2_10MS_LBR 14 // Max Overhang 2
#define OHMAX1_20MS_LBR 4
#define OHMAX2_20MS_LBR 7
#define OHMAX1_30MS_LBR 3
#define OHMAX2_30MS_LBR 5
// Mode 2, Very aggressive thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_AGG 82
#define INDIVIDUAL_20MS_AGG 78
#define INDIVIDUAL_30MS_AGG 82
#define TOTAL_10MS_AGG 285 //580
#define TOTAL_20MS_AGG 260
#define TOTAL_30MS_AGG 285
#define OHMAX1_10MS_AGG 6 // Max Overhang 1
#define OHMAX2_10MS_AGG 9 // Max Overhang 2
#define OHMAX1_20MS_AGG 3
#define OHMAX2_20MS_AGG 5
#define OHMAX1_30MS_AGG 2
#define OHMAX2_30MS_AGG 3
// Mode 3, Super aggressive thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_VAG 94
#define INDIVIDUAL_20MS_VAG 94
#define INDIVIDUAL_30MS_VAG 94
#define TOTAL_10MS_VAG 1100 //1700
#define TOTAL_20MS_VAG 1050
#define TOTAL_30MS_VAG 1100
#define OHMAX1_10MS_VAG 6 // Max Overhang 1
#define OHMAX2_10MS_VAG 9 // Max Overhang 2
#define OHMAX1_20MS_VAG 3
#define OHMAX2_20MS_VAG 5
#define OHMAX1_30MS_VAG 2
#define OHMAX2_30MS_VAG 3
#endif // WEBRTC_VAD_DEFINES_H_
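The thresholds above are grouped by aggressiveness mode (quality, low bitrate, aggressive, very aggressive) and by frame length (10, 20, 30 ms). The per-mode assignment into an instance is done by WebRtcVad_set_mode_core(); the sketch below is only an illustration of how the mode-1 ("low bitrate") values could be copied into the corresponding VadInstT arrays, not a copy of that function:
#include "vad_core.h"
#include "vad_defines.h"
// Hypothetical helper: index 0 = 10 ms, 1 = 20 ms, 2 = 30 ms frames.
static void load_low_bitrate_thresholds(VadInstT *inst)
{
    inst->individual[0] = INDIVIDUAL_10MS_LBR;
    inst->individual[1] = INDIVIDUAL_20MS_LBR;
    inst->individual[2] = INDIVIDUAL_30MS_LBR;
    inst->total[0] = TOTAL_10MS_LBR;
    inst->total[1] = TOTAL_20MS_LBR;
    inst->total[2] = TOTAL_30MS_LBR;
    inst->over_hang_max_1[0] = OHMAX1_10MS_LBR;
    inst->over_hang_max_1[1] = OHMAX1_20MS_LBR;
    inst->over_hang_max_1[2] = OHMAX1_30MS_LBR;
    inst->over_hang_max_2[0] = OHMAX2_10MS_LBR;
    inst->over_hang_max_2[1] = OHMAX2_20MS_LBR;
    inst->over_hang_max_2[2] = OHMAX2_30MS_LBR;
}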

View File

@ -1,279 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the internal filterbank associated functions.
* For function description, see vad_filterbank.h.
*/
#include "vad_filterbank.h"
#include "signal_processing_library.h"
#include "typedefs.h"
#include "vad_defines.h"
// Constant 160*log10(2) in Q9
static const WebRtc_Word16 kLogConst = 24660;
// Coefficients used by WebRtcVad_HpOutput, Q14
static const WebRtc_Word16 kHpZeroCoefs[3] = {6631, -13262, 6631};
static const WebRtc_Word16 kHpPoleCoefs[3] = {16384, -7756, 5620};
// Allpass filter coefficients, upper and lower, in Q15
// Upper: 0.64, Lower: 0.17
static const WebRtc_Word16 kAllPassCoefsQ15[2] = {20972, 5571};
// Adjustment for division with two in WebRtcVad_SplitFilter
static const WebRtc_Word16 kOffsetVector[6] = {368, 368, 272, 176, 176, 176};
void WebRtcVad_HpOutput(WebRtc_Word16 *in_vector,
WebRtc_Word16 in_vector_length,
WebRtc_Word16 *out_vector,
WebRtc_Word16 *filter_state)
{
WebRtc_Word16 i, *pi, *outPtr;
WebRtc_Word32 tmpW32;
pi = &in_vector[0];
outPtr = &out_vector[0];
// The sum of the absolute values of the impulse response:
// The zero/pole-filter has a max amplification of a single sample of: 1.4546
// Impulse response: 0.4047 -0.6179 -0.0266 0.1993 0.1035 -0.0194
// The all-zero section has a max amplification of a single sample of: 1.6189
// Impulse response: 0.4047 -0.8094 0.4047 0 0 0
// The all-pole section has a max amplification of a single sample of: 1.9931
// Impulse response: 1.0000 0.4734 -0.1189 -0.2187 -0.0627 0.04532
for (i = 0; i < in_vector_length; i++)
{
// all-zero section (filter coefficients in Q14)
tmpW32 = (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpZeroCoefs[0], (*pi));
tmpW32 += (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpZeroCoefs[1], filter_state[0]);
tmpW32 += (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpZeroCoefs[2], filter_state[1]); // Q14
filter_state[1] = filter_state[0];
filter_state[0] = *pi++;
// all-pole section
tmpW32 -= (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpPoleCoefs[1], filter_state[2]); // Q14
tmpW32 -= (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpPoleCoefs[2], filter_state[3]);
filter_state[3] = filter_state[2];
filter_state[2] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32 (tmpW32, 14);
*outPtr++ = filter_state[2];
}
}
void WebRtcVad_Allpass(WebRtc_Word16 *in_vector,
WebRtc_Word16 *out_vector,
WebRtc_Word16 filter_coefficients,
int vector_length,
WebRtc_Word16 *filter_state)
{
// The filter can only cause overflow (in the w16 output variable)
// if more than 4 consecutive input numbers are of maximum value and
// have the same sign as the first taps of the impulse response.
// First 6 taps of the impulse response: 0.6399 0.5905 -0.3779
// 0.2418 -0.1547 0.0990
int n;
WebRtc_Word16 tmp16;
WebRtc_Word32 tmp32, in32, state32;
state32 = WEBRTC_SPL_LSHIFT_W32(((WebRtc_Word32)(*filter_state)), 16); // Q31
for (n = 0; n < vector_length; n++)
{
tmp32 = state32 + WEBRTC_SPL_MUL_16_16(filter_coefficients, (*in_vector));
tmp16 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 16);
*out_vector++ = tmp16;
in32 = WEBRTC_SPL_LSHIFT_W32(((WebRtc_Word32)(*in_vector)), 14);
state32 = in32 - WEBRTC_SPL_MUL_16_16(filter_coefficients, tmp16);
state32 = WEBRTC_SPL_LSHIFT_W32(state32, 1);
in_vector += 2;
}
*filter_state = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(state32, 16);
}
void WebRtcVad_SplitFilter(WebRtc_Word16 *in_vector,
WebRtc_Word16 *out_vector_hp,
WebRtc_Word16 *out_vector_lp,
WebRtc_Word16 *upper_state,
WebRtc_Word16 *lower_state,
int in_vector_length)
{
WebRtc_Word16 tmpOut;
int k, halflen;
// Downsampling by 2 and get two branches
halflen = WEBRTC_SPL_RSHIFT_W16(in_vector_length, 1);
// All-pass filtering upper branch
WebRtcVad_Allpass(&in_vector[0], out_vector_hp, kAllPassCoefsQ15[0], halflen, upper_state);
// All-pass filtering lower branch
WebRtcVad_Allpass(&in_vector[1], out_vector_lp, kAllPassCoefsQ15[1], halflen, lower_state);
// Make LP and HP signals
for (k = 0; k < halflen; k++)
{
tmpOut = *out_vector_hp;
*out_vector_hp++ -= *out_vector_lp;
*out_vector_lp++ += tmpOut;
}
}
WebRtc_Word16 WebRtcVad_get_features(VadInstT *inst,
WebRtc_Word16 *in_vector,
int frame_size,
WebRtc_Word16 *out_vector)
{
int curlen, filtno;
WebRtc_Word16 vecHP1[120], vecLP1[120];
WebRtc_Word16 vecHP2[60], vecLP2[60];
WebRtc_Word16 *ptin;
WebRtc_Word16 *hptout, *lptout;
WebRtc_Word16 power = 0;
// Split at 2000 Hz and downsample
filtno = 0;
ptin = in_vector;
hptout = vecHP1;
lptout = vecLP1;
curlen = frame_size;
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Split at 3000 Hz and downsample
filtno = 1;
ptin = vecHP1;
hptout = vecHP2;
lptout = vecLP2;
curlen = WEBRTC_SPL_RSHIFT_W16(frame_size, 1);
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 3000 Hz - 4000 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP2, &out_vector[5], &power, kOffsetVector[5], curlen);
// Energy in 2000 Hz - 3000 Hz
WebRtcVad_LogOfEnergy(vecLP2, &out_vector[4], &power, kOffsetVector[4], curlen);
// Split at 1000 Hz and downsample
filtno = 2;
ptin = vecLP1;
hptout = vecHP2;
lptout = vecLP2;
curlen = WEBRTC_SPL_RSHIFT_W16(frame_size, 1);
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 1000 Hz - 2000 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP2, &out_vector[3], &power, kOffsetVector[3], curlen);
// Split at 500 Hz
filtno = 3;
ptin = vecLP2;
hptout = vecHP1;
lptout = vecLP1;
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 500 Hz - 1000 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP1, &out_vector[2], &power, kOffsetVector[2], curlen);
// Split at 250 Hz
filtno = 4;
ptin = vecLP1;
hptout = vecHP2;
lptout = vecLP2;
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 250 Hz - 500 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP2, &out_vector[1], &power, kOffsetVector[1], curlen);
// Remove DC and LFs
WebRtcVad_HpOutput(vecLP2, curlen, vecHP1, inst->hp_filter_state);
// Power in 80 Hz - 250 Hz
WebRtcVad_LogOfEnergy(vecHP1, &out_vector[0], &power, kOffsetVector[0], curlen);
return power;
}
void WebRtcVad_LogOfEnergy(WebRtc_Word16 *vector,
WebRtc_Word16 *enerlogval,
WebRtc_Word16 *power,
WebRtc_Word16 offset,
int vector_length)
{
WebRtc_Word16 enerSum = 0;
WebRtc_Word16 zeros, frac, log2;
WebRtc_Word32 energy;
int shfts = 0, shfts2;
energy = WebRtcSpl_Energy(vector, vector_length, &shfts);
if (energy > 0)
{
shfts2 = 16 - WebRtcSpl_NormW32(energy);
shfts += shfts2;
// "shfts" is the total number of right shifts that has been done to enerSum.
enerSum = (WebRtc_Word16)WEBRTC_SPL_SHIFT_W32(energy, -shfts2);
// Find:
// 160*log10(enerSum*2^shfts) = 160*log10(2)*log2(enerSum*2^shfts) =
// 160*log10(2)*(log2(enerSum) + log2(2^shfts)) =
// 160*log10(2)*(log2(enerSum) + shfts)
zeros = WebRtcSpl_NormU32(enerSum);
frac = (WebRtc_Word16)(((WebRtc_UWord32)((WebRtc_Word32)(enerSum) << zeros)
& 0x7FFFFFFF) >> 21);
log2 = (WebRtc_Word16)(((31 - zeros) << 10) + frac);
*enerlogval = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(kLogConst, log2, 19)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(shfts, kLogConst, 9);
if (*enerlogval < 0)
{
*enerlogval = 0;
}
} else
{
*enerlogval = 0;
shfts = -15;
enerSum = 0;
}
*enerlogval += offset;
// Total power in frame
if (*power <= MIN_ENERGY)
{
if (shfts > 0)
{
*power += MIN_ENERGY + 1;
} else if (WEBRTC_SPL_SHIFT_W16(enerSum, shfts) > MIN_ENERGY)
{
*power += MIN_ENERGY + 1;
} else
{
*power += WEBRTC_SPL_SHIFT_W16(enerSum, shfts);
}
}
}
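For reference, the half-band split performed by WebRtcVad_Allpass() and WebRtcVad_SplitFilter() above can be written compactly in floating point: even samples run through a first-order all-pass with coefficient 0.64, odd samples through one with coefficient 0.17, and their sum/difference give the low/high band at half the input rate. This float sketch is not part of the library and ignores the Q-format bookkeeping:
// First-order all-pass section: y = c*x + s, s = x - c*y.
static float allpass_ref(float x, float c, float *state)
{
    float y = c * x + *state;
    *state = x - c * y;
    return y;
}
// Split 'len' input samples into low/high bands of len/2 samples each.
void split_ref(const float *in, int len, float *lp, float *hp,
               float *upper_state, float *lower_state)
{
    int k;
    for (k = 0; k + 1 < len; k += 2)
    {
        float a = allpass_ref(in[k], 0.64f, upper_state);      // upper branch
        float b = allpass_ref(in[k + 1], 0.17f, lower_state);  // lower branch
        hp[k / 2] = a - b;  // high band
        lp[k / 2] = a + b;  // low band
    }
}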

View File

@ -1,143 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the description of the internal VAD call
* WebRtcVad_GaussianProbability.
*/
#ifndef WEBRTC_VAD_FILTERBANK_H_
#define WEBRTC_VAD_FILTERBANK_H_
#include "vad_core.h"
/****************************************************************************
* WebRtcVad_HpOutput(...)
*
* This function removes DC from the lowest frequency band
*
* Input:
* - in_vector : Samples in the frequency interval 0 - 250 Hz
* - in_vector_length : Length of input and output vector
* - filter_state : Current state of the filter
*
* Output:
* - out_vector : Samples in the frequency interval 80 - 250 Hz
* - filter_state : Updated state of the filter
*
*/
void WebRtcVad_HpOutput(WebRtc_Word16* in_vector,
WebRtc_Word16 in_vector_length,
WebRtc_Word16* out_vector,
WebRtc_Word16* filter_state);
/****************************************************************************
* WebRtcVad_Allpass(...)
*
* This function is used before splitting a speech signal into
* different frequency bands
*
* Note! Do NOT let the arrays in_vector and out_vector correspond to the same address.
*
* Input:
* - in_vector : (Q0)
* - filter_coefficients : (Q15)
* - vector_length : Length of input and output vector
* - filter_state : Current state of the filter (Q(-1))
*
* Output:
* - out_vector : Output speech signal (Q(-1))
* - filter_state : Updated state of the filter (Q(-1))
*
*/
void WebRtcVad_Allpass(WebRtc_Word16* in_vector,
WebRtc_Word16* outw16,
WebRtc_Word16 filter_coefficients,
int vector_length,
WebRtc_Word16* filter_state);
/****************************************************************************
* WebRtcVad_SplitFilter(...)
*
* This function is used before splitting a speech signal into
* different frequency bands
*
* Input:
* - in_vector : Input signal to be split into two frequency bands.
* - upper_state : Current state of the upper filter
* - lower_state : Current state of the lower filter
* - in_vector_length : Length of input vector
*
* Output:
* - out_vector_hp : Upper half of the spectrum
* - out_vector_lp : Lower half of the spectrum
* - upper_state : Updated state of the upper filter
* - lower_state : Updated state of the lower filter
*
*/
void WebRtcVad_SplitFilter(WebRtc_Word16* in_vector,
WebRtc_Word16* out_vector_hp,
WebRtc_Word16* out_vector_lp,
WebRtc_Word16* upper_state,
WebRtc_Word16* lower_state,
int in_vector_length);
/****************************************************************************
* WebRtcVad_get_features(...)
*
* This function is used to get the logarithm of the power of each of the
* 6 frequency bands used by the VAD:
* 80 Hz - 250 Hz
* 250 Hz - 500 Hz
* 500 Hz - 1000 Hz
* 1000 Hz - 2000 Hz
* 2000 Hz - 3000 Hz
* 3000 Hz - 4000 Hz
*
* Input:
* - inst : Pointer to VAD instance
* - in_vector : Input speech signal
* - frame_size : Frame size, in number of samples
*
* Output:
* - out_vector : 10*log10(power in each freq. band), Q4
*
* Return: total power in the signal (NOTE! This value is not exact since it
* is only used in a comparison.)
*/
WebRtc_Word16 WebRtcVad_get_features(VadInstT* inst,
WebRtc_Word16* in_vector,
int frame_size,
WebRtc_Word16* out_vector);
/****************************************************************************
* WebRtcVad_LogOfEnergy(...)
*
* This function is used to get the logarithm of the power of one frequency band.
*
* Input:
* - vector : Input speech samples for one frequency band
* - offset : Offset value for the current frequency band
* - vector_length : Length of input vector
*
* Output:
* - enerlogval : 10*log10(energy);
* - power : Updated total power in the speech frame. NOTE! This value
* is not exact since it is only used in a comparison.
*
*/
void WebRtcVad_LogOfEnergy(WebRtc_Word16* vector,
WebRtc_Word16* enerlogval,
WebRtc_Word16* power,
WebRtc_Word16 offset,
int vector_length);
#endif // WEBRTC_VAD_FILTERBANK_H_
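The Q4 output documented above can be sanity-checked in floating point: kLogConst (24660) in vad_filterbank.c is 160*log10(2) in Q9, and since 160 = 16 * 10, the fixed-point result approximates 10*log10(energy) scaled into Q4, plus the per-band offset. A small float sketch with arbitrary numbers:
#include <math.h>
#include <stdio.h>
int main(void)
{
    double energy = 123456.0;  // hypothetical sum of squared samples
    double offset = 272.0;     // e.g. kOffsetVector[2], already in the Q4 domain
    double enerlogval = 10.0 * log10(energy) * 16.0 + offset;  // Q4
    printf("enerlogval ~= %.0f (Q4)\n", enerlogval);
    return 0;
}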

View File

@ -1,75 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the internal VAD call
* WebRtcVad_GaussianProbability. For function description, see vad_gmm.h.
*/
#include "vad_gmm.h"
#include "signal_processing_library.h"
#include "typedefs.h"
static const WebRtc_Word32 kCompVar = 22005;
// Constant log2(exp(1)) in Q12
static const WebRtc_Word16 kLog10Const = 5909;
WebRtc_Word32 WebRtcVad_GaussianProbability(WebRtc_Word16 in_sample,
WebRtc_Word16 mean,
WebRtc_Word16 std,
WebRtc_Word16 *delta)
{
WebRtc_Word16 tmp16, tmpDiv, tmpDiv2, expVal, tmp16_1, tmp16_2;
WebRtc_Word32 tmp32, y32;
// Calculate tmpDiv=1/std, in Q10
tmp32 = (WebRtc_Word32)WEBRTC_SPL_RSHIFT_W16(std,1) + (WebRtc_Word32)131072; // 1 in Q17
tmpDiv = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32, std); // Q17/Q7 = Q10
// Calculate tmpDiv2=1/std^2, in Q14
tmp16 = WEBRTC_SPL_RSHIFT_W16(tmpDiv, 2); // From Q10 to Q8
tmpDiv2 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(tmp16, tmp16, 2); // (Q8 * Q8)>>2 = Q14
tmp16 = WEBRTC_SPL_LSHIFT_W16(in_sample, 3); // Q7
tmp16 = tmp16 - mean; // Q7 - Q7 = Q7
// To be used later, when updating noise/speech model
// delta = (x-m)/std^2, in Q11
*delta = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(tmpDiv2, tmp16, 10); //(Q14*Q7)>>10 = Q11
// Calculate tmp32=(x-m)^2/(2*std^2), in Q10
tmp32 = (WebRtc_Word32)WEBRTC_SPL_MUL_16_16_RSFT(*delta, tmp16, 9); // One shift for /2
// Calculate expVal ~= exp(-(x-m)^2/(2*std^2)) ~= exp2(-log2(exp(1))*tmp32)
if (tmp32 < kCompVar)
{
// Calculate tmp16 = log2(exp(1))*tmp32 , in Q10
tmp16 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((WebRtc_Word16)tmp32,
kLog10Const, 12);
tmp16 = -tmp16;
tmp16_2 = (WebRtc_Word16)(0x0400 | (tmp16 & 0x03FF));
tmp16_1 = (WebRtc_Word16)(tmp16 ^ 0xFFFF);
tmp16 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W16(tmp16_1, 10);
tmp16 += 1;
// Calculate expVal ~= exp2(-log2(exp(1)) * tmp32), in Q10
expVal = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((WebRtc_Word32)tmp16_2, tmp16);
} else
{
expVal = 0;
}
// Calculate y32=(1/std)*exp(-(x-m)^2/(2*std^2)), in Q20
y32 = WEBRTC_SPL_MUL_16_16(tmpDiv, expVal); // Q10 * Q10 = Q20
return y32; // Q20
}
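A float cross-check of what WebRtcVad_GaussianProbability() above returns: the Q20 result approximates (1/std) * exp(-(x-m)^2 / (2*std^2)) with the input in Q4 and mean/std in Q7, and the delta output approximates (x-m)/std^2 in Q11. The sample values below are arbitrary; this is not part of the library:
#include <math.h>
#include <stdio.h>
int main(void)
{
    int in_sample_q4 = 400;  // x   = 25.0 in Q4
    int mean_q7 = 2560;      // m   = 20.0 in Q7
    int std_q7 = 640;        // std =  5.0 in Q7
    double x = in_sample_q4 / 16.0;
    double m = mean_q7 / 128.0;
    double s = std_q7 / 128.0;
    double prob = (1.0 / s) * exp(-(x - m) * (x - m) / (2.0 * s * s));
    double delta = (x - m) / (s * s);
    printf("probability ~= %d (Q20)\n", (int)(prob * (1 << 20)));
    printf("delta       ~= %d (Q11)\n", (int)(delta * (1 << 11)));
    return 0;
}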

View File

@ -1,47 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the description of the internal VAD call
* WebRtcVad_GaussianProbability.
*/
#ifndef WEBRTC_VAD_GMM_H_
#define WEBRTC_VAD_GMM_H_
#include "typedefs.h"
/****************************************************************************
* WebRtcVad_GaussianProbability(...)
*
* This function calculates the probability for the value 'in_sample', given that in_sample
* comes from a normal distribution with mean 'mean' and standard deviation 'std'.
*
* Input:
* - in_sample : Input sample in Q4
* - mean : mean value in the statistical model, Q7
* - std : standard deviation, Q7
*
* Output:
*
* - delta : Value used when updating the model, Q11
*
* Return:
* - out : out = 1/std * exp(-(x-m)^2/(2*std^2));
* Probability for x.
*
*/
WebRtc_Word32 WebRtcVad_GaussianProbability(WebRtc_Word16 in_sample,
WebRtc_Word16 mean,
WebRtc_Word16 std,
WebRtc_Word16 *delta);
#endif // WEBRTC_VAD_GMM_H_

View File

@ -1,236 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the VAD internal calls for
* Downsampling and FindMinimum.
* For function call descriptions; See vad_sp.h.
*/
#include "vad_sp.h"
#include "signal_processing_library.h"
#include "typedefs.h"
#include "vad_defines.h"
// Allpass filter coefficients, upper and lower, in Q13
// Upper: 0.64, Lower: 0.17
static const WebRtc_Word16 kAllPassCoefsQ13[2] = {5243, 1392}; // Q13
// Downsampling filter based on the splitting filter and the allpass functions
// in vad_filterbank.c
void WebRtcVad_Downsampling(WebRtc_Word16* signal_in,
WebRtc_Word16* signal_out,
WebRtc_Word32* filter_state,
int inlen)
{
WebRtc_Word16 tmp16_1, tmp16_2;
WebRtc_Word32 tmp32_1, tmp32_2;
int n, halflen;
// Downsampling by 2 and get two branches
halflen = WEBRTC_SPL_RSHIFT_W16(inlen, 1);
tmp32_1 = filter_state[0];
tmp32_2 = filter_state[1];
// Filter coefficients in Q13, filter state in Q0
for (n = 0; n < halflen; n++)
{
// All-pass filtering upper branch
tmp16_1 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32_1, 1)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[0]),
*signal_in, 14);
*signal_out = tmp16_1;
tmp32_1 = (WebRtc_Word32)(*signal_in++)
- (WebRtc_Word32)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[0]), tmp16_1, 12);
// All-pass filtering lower branch
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32_2, 1)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[1]),
*signal_in, 14);
*signal_out++ += tmp16_2;
tmp32_2 = (WebRtc_Word32)(*signal_in++)
- (WebRtc_Word32)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[1]), tmp16_2, 12);
}
filter_state[0] = tmp32_1;
filter_state[1] = tmp32_2;
}
WebRtc_Word16 WebRtcVad_FindMinimum(VadInstT* inst,
WebRtc_Word16 x,
int n)
{
int i, j, k, II = -1, offset;
WebRtc_Word16 meanV, alpha;
WebRtc_Word32 tmp32, tmp32_1;
WebRtc_Word16 *valptr, *idxptr, *p1, *p2, *p3;
// Offset to beginning of the 16 minimum values in memory
offset = WEBRTC_SPL_LSHIFT_W16(n, 4);
// Pointer to memory for the 16 minimum values and the age of each value
idxptr = &inst->index_vector[offset];
valptr = &inst->low_value_vector[offset];
// Each value in low_value_vector is getting 1 loop older.
// Update age of each value in indexVal, and remove old values.
for (i = 0; i < 16; i++)
{
p3 = idxptr + i;
if (*p3 != 100)
{
*p3 += 1;
} else
{
p1 = valptr + i + 1;
p2 = p3 + 1;
for (j = i; j < 16; j++)
{
*(valptr + j) = *p1++;
*(idxptr + j) = *p2++;
}
*(idxptr + 15) = 101;
*(valptr + 15) = 10000;
}
}
// Check if x smaller than any of the values in low_value_vector.
// If so, find position.
if (x < *(valptr + 7))
{
if (x < *(valptr + 3))
{
if (x < *(valptr + 1))
{
if (x < *valptr)
{
II = 0;
} else
{
II = 1;
}
} else if (x < *(valptr + 2))
{
II = 2;
} else
{
II = 3;
}
} else if (x < *(valptr + 5))
{
if (x < *(valptr + 4))
{
II = 4;
} else
{
II = 5;
}
} else if (x < *(valptr + 6))
{
II = 6;
} else
{
II = 7;
}
} else if (x < *(valptr + 15))
{
if (x < *(valptr + 11))
{
if (x < *(valptr + 9))
{
if (x < *(valptr + 8))
{
II = 8;
} else
{
II = 9;
}
} else if (x < *(valptr + 10))
{
II = 10;
} else
{
II = 11;
}
} else if (x < *(valptr + 13))
{
if (x < *(valptr + 12))
{
II = 12;
} else
{
II = 13;
}
} else if (x < *(valptr + 14))
{
II = 14;
} else
{
II = 15;
}
}
// Put new min value on right position and shift bigger values up
if (II > -1)
{
for (i = 15; i > II; i--)
{
k = i - 1;
*(valptr + i) = *(valptr + k);
*(idxptr + i) = *(idxptr + k);
}
*(valptr + II) = x;
*(idxptr + II) = 1;
}
meanV = 0;
if ((inst->frame_counter) > 4)
{
j = 5;
} else
{
j = inst->frame_counter;
}
if (j > 2)
{
meanV = *(valptr + 2);
} else if (j > 0)
{
meanV = *valptr;
} else
{
meanV = 1600;
}
if (inst->frame_counter > 0)
{
if (meanV < inst->mean_value[n])
{
alpha = (WebRtc_Word16)ALPHA1; // 0.2 in Q15
} else
{
alpha = (WebRtc_Word16)ALPHA2; // 0.99 in Q15
}
} else
{
alpha = 0;
}
tmp32 = WEBRTC_SPL_MUL_16_16((alpha+1), inst->mean_value[n]);
tmp32_1 = WEBRTC_SPL_MUL_16_16(WEBRTC_SPL_WORD16_MAX - alpha, meanV);
tmp32 += tmp32_1;
tmp32 += 16384;
inst->mean_value[n] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 15);
return inst->mean_value[n];
}
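The last few lines of WebRtcVad_FindMinimum() above are a one-pole smoother in Q15: the tracked minimum follows decreases quickly (ALPHA1, about 0.2) and increases slowly (ALPHA2, about 0.99). A float sketch with made-up numbers:
#include <stdio.h>
int main(void)
{
    double mean = 1600.0;    // previously smoothed minimum
    double new_min = 900.0;  // hypothetical minimum from the current window
    double alpha = (new_min < mean) ? 0.2 : 0.99;
    mean = alpha * mean + (1.0 - alpha) * new_min;  // 0.2*1600 + 0.8*900 = 1040
    printf("smoothed minimum ~= %.1f\n", mean);
    return 0;
}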

View File

@ -1,60 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the VAD internal calls for Downsampling and FindMinimum.
* Specific function calls are given below.
*/
#ifndef WEBRTC_VAD_SP_H_
#define WEBRTC_VAD_SP_H_
#include "vad_core.h"
/****************************************************************************
* WebRtcVad_Downsampling(...)
*
* Downsamples the signal by a factor of 2, e.g. 32->16 kHz or 16->8 kHz
*
* Input:
* - signal_in : Input signal
* - in_length : Length of input signal in samples
*
* Input & Output:
* - filter_state : Filter state for first all-pass filters
*
* Output:
* - signal_out : Downsampled signal (of length in_length/2)
*/
void WebRtcVad_Downsampling(WebRtc_Word16* signal_in,
WebRtc_Word16* signal_out,
WebRtc_Word32* filter_state,
int in_length);
/****************************************************************************
* WebRtcVad_FindMinimum(...)
*
* Find the five lowest values of x in a 100 frames long window. Return a mean
* value of these five values.
*
* Input:
* - feature_value : Feature value
* - channel : Channel number
*
* Input & Output:
* - inst : State information
*
* Output:
* return value : Weighted minimum value for a moving window.
*/
WebRtc_Word16 WebRtcVad_FindMinimum(VadInstT* inst, WebRtc_Word16 feature_value, int channel);
#endif // WEBRTC_VAD_SP_H_
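A hedged usage sketch of WebRtcVad_Downsampling(): one call halves the rate, so a 10 ms frame at 16 kHz (160 samples) becomes 80 samples at 8 kHz. Which pair of the instance's four filter-state words the real 16 kHz path uses is an assumption here; the snippet only illustrates the call shape:
#include "vad_core.h"
#include "vad_sp.h"
void downsample_10ms_16k_to_8k(VadInstT *inst, WebRtc_Word16 *in160,
                               WebRtc_Word16 *out80)
{
    WebRtcVad_Downsampling(in160, out80,
                           &inst->downsampling_filter_states[0], 160);
}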

View File

@ -1,197 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the VAD API calls. For a specific function call description,
* see webrtc_vad.h
*/
#include <stdlib.h>
#include <string.h>
#include "webrtc_vad.h"
#include "vad_core.h"
static const int kInitCheck = 42;
WebRtc_Word16 WebRtcVad_get_version(char *version, size_t size_bytes)
{
const char my_version[] = "VAD 1.2.0";
if (version == NULL)
{
return -1;
}
if (size_bytes < sizeof(my_version))
{
return -1;
}
memcpy(version, my_version, sizeof(my_version));
return 0;
}
WebRtc_Word16 WebRtcVad_AssignSize(int *size_in_bytes)
{
*size_in_bytes = sizeof(VadInstT) * 2 / sizeof(WebRtc_Word16);
return 0;
}
WebRtc_Word16 WebRtcVad_Assign(VadInst **vad_inst, void *vad_inst_addr)
{
if (vad_inst == NULL)
{
return -1;
}
if (vad_inst_addr != NULL)
{
*vad_inst = (VadInst*)vad_inst_addr;
return 0;
} else
{
return -1;
}
}
WebRtc_Word16 WebRtcVad_Create(VadInst **vad_inst)
{
VadInstT *vad_ptr = NULL;
if (vad_inst == NULL)
{
return -1;
}
*vad_inst = NULL;
vad_ptr = (VadInstT *)malloc(sizeof(VadInstT));
*vad_inst = (VadInst *)vad_ptr;
if (vad_ptr == NULL)
{
return -1;
}
vad_ptr->init_flag = 0;
return 0;
}
WebRtc_Word16 WebRtcVad_Free(VadInst *vad_inst)
{
if (vad_inst == NULL)
{
return -1;
}
free(vad_inst);
return 0;
}
WebRtc_Word16 WebRtcVad_Init(VadInst *vad_inst)
{
short mode = 0; // Default high quality
if (vad_inst == NULL)
{
return -1;
}
return WebRtcVad_InitCore((VadInstT*)vad_inst, mode);
}
WebRtc_Word16 WebRtcVad_set_mode(VadInst *vad_inst, WebRtc_Word16 mode)
{
VadInstT* vad_ptr;
if (vad_inst == NULL)
{
return -1;
}
vad_ptr = (VadInstT*)vad_inst;
if (vad_ptr->init_flag != kInitCheck)
{
return -1;
}
return WebRtcVad_set_mode_core((VadInstT*)vad_inst, mode);
}
WebRtc_Word16 WebRtcVad_Process(VadInst *vad_inst,
WebRtc_Word16 fs,
WebRtc_Word16 *speech_frame,
WebRtc_Word16 frame_length)
{
WebRtc_Word16 vad;
VadInstT* vad_ptr;
if (vad_inst == NULL)
{
return -1;
}
vad_ptr = (VadInstT*)vad_inst;
if (vad_ptr->init_flag != kInitCheck)
{
return -1;
}
if (speech_frame == NULL)
{
return -1;
}
if (fs == 32000)
{
if ((frame_length != 320) && (frame_length != 640) && (frame_length != 960))
{
return -1;
}
vad = WebRtcVad_CalcVad32khz((VadInstT*)vad_inst, speech_frame, frame_length);
} else if (fs == 16000)
{
if ((frame_length != 160) && (frame_length != 320) && (frame_length != 480))
{
return -1;
}
vad = WebRtcVad_CalcVad16khz((VadInstT*)vad_inst, speech_frame, frame_length);
} else if (fs == 8000)
{
if ((frame_length != 80) && (frame_length != 160) && (frame_length != 240))
{
return -1;
}
vad = WebRtcVad_CalcVad8khz((VadInstT*)vad_inst, speech_frame, frame_length);
} else
{
return -1; // Not a supported sampling frequency
}
if (vad > 0)
{
return 1;
} else if (vad == 0)
{
return 0;
} else
{
return -1;
}
}
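A minimal calling sequence for the public API implemented above, assuming 8 kHz input and 10 ms (80-sample) frames. In real use the instance would be created and initialised once and reused across frames; it is recreated here only to keep the sketch self-contained:
#include "webrtc_vad.h"
// Hypothetical wrapper: returns 1 for speech, 0 for noise/silence, -1 on error.
int classify_frame(WebRtc_Word16 *frame80)
{
    VadInst *vad = NULL;
    WebRtc_Word16 result;
    if (WebRtcVad_Create(&vad) != 0)
    {
        return -1;
    }
    if (WebRtcVad_Init(vad) != 0 || WebRtcVad_set_mode(vad, 2) != 0)
    {
        WebRtcVad_Free(vad);
        return -1;
    }
    result = WebRtcVad_Process(vad, 8000, frame80, 80);
    WebRtcVad_Free(vad);
    return (int)result;
}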

View File

@ -1,123 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the VAD unit tests.
*/
#include <cstring>
#include "unit_test.h"
#include "webrtc_vad.h"
class VadEnvironment : public ::testing::Environment {
public:
virtual void SetUp() {
}
virtual void TearDown() {
}
};
VadTest::VadTest()
{
}
void VadTest::SetUp() {
}
void VadTest::TearDown() {
}
TEST_F(VadTest, ApiTest) {
VadInst *vad_inst;
int i, j, k;
short zeros[960];
short speech[960];
char version[32];
// Valid test cases
int fs[3] = {8000, 16000, 32000};
int nMode[4] = {0, 1, 2, 3};
int framelen[3][3] = {{80, 160, 240},
{160, 320, 480}, {320, 640, 960}} ;
int vad_counter = 0;
memset(zeros, 0, sizeof(short) * 960);
memset(speech, 1, sizeof(short) * 960);
speech[13] = 1374;
speech[73] = -3747;
// WebRtcVad_get_version()
WebRtcVad_get_version(version, sizeof(version));
//printf("API Test for %s\n", version);
// Null instance tests
EXPECT_EQ(-1, WebRtcVad_Create(NULL));
EXPECT_EQ(-1, WebRtcVad_Init(NULL));
EXPECT_EQ(-1, WebRtcVad_Assign(NULL, NULL));
EXPECT_EQ(-1, WebRtcVad_Free(NULL));
EXPECT_EQ(-1, WebRtcVad_set_mode(NULL, nMode[0]));
EXPECT_EQ(-1, WebRtcVad_Process(NULL, fs[0], speech, framelen[0][0]));
EXPECT_EQ(WebRtcVad_Create(&vad_inst), 0);
// Not initialized tests
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, fs[0], speech, framelen[0][0]));
EXPECT_EQ(-1, WebRtcVad_set_mode(vad_inst, nMode[0]));
// WebRtcVad_Init() tests
EXPECT_EQ(WebRtcVad_Init(vad_inst), 0);
// WebRtcVad_set_mode() tests
EXPECT_EQ(-1, WebRtcVad_set_mode(vad_inst, -1));
EXPECT_EQ(-1, WebRtcVad_set_mode(vad_inst, 4));
for (i = 0; i < sizeof(nMode)/sizeof(nMode[0]); i++) {
EXPECT_EQ(WebRtcVad_set_mode(vad_inst, nMode[i]), 0);
}
// WebRtcVad_Process() tests
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, fs[0], NULL, framelen[0][0]));
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, 12000, speech, framelen[0][0]));
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, fs[0], speech, framelen[1][1]));
EXPECT_EQ(WebRtcVad_Process(vad_inst, fs[0], zeros, framelen[0][0]), 0);
for (i = 0; i < sizeof(fs)/sizeof(fs[0]); i++) {
for (j = 0; j < sizeof(framelen[0])/sizeof(framelen[0][0]); j++) {
for (k = 0; k < sizeof(nMode)/sizeof(nMode[0]); k++) {
EXPECT_EQ(WebRtcVad_set_mode(vad_inst, nMode[k]), 0);
// printf("%d\n", WebRtcVad_Process(vad_inst, fs[i], speech, framelen[i][j]));
if (vad_counter < 9)
{
EXPECT_EQ(WebRtcVad_Process(vad_inst, fs[i], speech, framelen[i][j]), 1);
} else
{
EXPECT_EQ(WebRtcVad_Process(vad_inst, fs[i], speech, framelen[i][j]), 0);
}
vad_counter++;
}
}
}
EXPECT_EQ(0, WebRtcVad_Free(vad_inst));
}
int main(int argc, char** argv) {
::testing::InitGoogleTest(&argc, argv);
VadEnvironment* env = new VadEnvironment;
::testing::AddGlobalTestEnvironment(env);
return RUN_ALL_TESTS();
}

View File

@ -1,28 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the declaration of the VAD unit test.
*/
#ifndef WEBRTC_VAD_UNIT_TEST_H_
#define WEBRTC_VAD_UNIT_TEST_H_
#include <gtest/gtest.h>
class VadTest : public ::testing::Test {
protected:
VadTest();
virtual void SetUp();
virtual void TearDown();
};
#endif // WEBRTC_VAD_UNIT_TEST_H_

View File

@ -1,610 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_COMMON_TYPES_H
#define WEBRTC_COMMON_TYPES_H
#include "typedefs.h"
#ifdef WEBRTC_EXPORT
#define WEBRTC_DLLEXPORT _declspec(dllexport)
#elif WEBRTC_DLL
#define WEBRTC_DLLEXPORT _declspec(dllimport)
#else
#define WEBRTC_DLLEXPORT
#endif
#ifndef NULL
#define NULL 0
#endif
namespace webrtc {
class InStream
{
public:
virtual int Read(void *buf,int len) = 0;
virtual int Rewind() {return -1;}
virtual ~InStream() {}
protected:
InStream() {}
};
class OutStream
{
public:
virtual bool Write(const void *buf,int len) = 0;
virtual int Rewind() {return -1;}
virtual ~OutStream() {}
protected:
OutStream() {}
};
enum TraceModule
{
// not a module, triggered from the engine code
kTraceVoice = 0x0001,
// not a module, triggered from the engine code
kTraceVideo = 0x0002,
// not a module, triggered from the utility code
kTraceUtility = 0x0003,
kTraceRtpRtcp = 0x0004,
kTraceTransport = 0x0005,
kTraceSrtp = 0x0006,
kTraceAudioCoding = 0x0007,
kTraceAudioMixerServer = 0x0008,
kTraceAudioMixerClient = 0x0009,
kTraceFile = 0x000a,
kTraceAudioProcessing = 0x000b,
kTraceVideoCoding = 0x0010,
kTraceVideoMixer = 0x0011,
kTraceAudioDevice = 0x0012,
kTraceVideoRenderer = 0x0014,
kTraceVideoCapture = 0x0015,
kTraceVideoPreocessing = 0x0016
};
enum TraceLevel
{
kTraceNone = 0x0000, // no trace
kTraceStateInfo = 0x0001,
kTraceWarning = 0x0002,
kTraceError = 0x0004,
kTraceCritical = 0x0008,
kTraceApiCall = 0x0010,
kTraceDefault = 0x00ff,
kTraceModuleCall = 0x0020,
kTraceMemory = 0x0100, // memory info
kTraceTimer = 0x0200, // timing info
kTraceStream = 0x0400, // "continuous" stream of data
// used for debug purposes
kTraceDebug = 0x0800, // debug
kTraceInfo = 0x1000, // debug info
kTraceAll = 0xffff
};
// External Trace API
class TraceCallback
{
public:
virtual void Print(const TraceLevel level,
const char *traceString,
const int length) = 0;
protected:
virtual ~TraceCallback() {}
TraceCallback() {}
};
enum FileFormats
{
kFileFormatWavFile = 1,
kFileFormatCompressedFile = 2,
kFileFormatAviFile = 3,
kFileFormatPreencodedFile = 4,
kFileFormatPcm16kHzFile = 7,
kFileFormatPcm8kHzFile = 8,
kFileFormatPcm32kHzFile = 9
};
enum ProcessingTypes
{
kPlaybackPerChannel = 0,
kPlaybackAllChannelsMixed,
kRecordingPerChannel,
kRecordingAllChannelsMixed
};
// Encryption enums
enum CipherTypes
{
kCipherNull = 0,
kCipherAes128CounterMode = 1
};
enum AuthenticationTypes
{
kAuthNull = 0,
kAuthHmacSha1 = 3
};
enum SecurityLevels
{
kNoProtection = 0,
kEncryption = 1,
kAuthentication = 2,
kEncryptionAndAuthentication = 3
};
class Encryption
{
public:
virtual void encrypt(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
virtual void decrypt(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
virtual void encrypt_rtcp(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
virtual void decrypt_rtcp(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
protected:
virtual ~Encryption() {}
Encryption() {}
};
// External transport callback interface
class Transport
{
public:
virtual int SendPacket(int channel, const void *data, int len) = 0;
virtual int SendRTCPPacket(int channel, const void *data, int len) = 0;
protected:
virtual ~Transport() {}
Transport() {}
};
// ==================================================================
// Voice specific types
// ==================================================================
// Each codec supported can be described by this structure.
struct CodecInst
{
int pltype;
char plname[32];
int plfreq;
int pacsize;
int channels;
int rate;
};
enum FrameType
{
kFrameEmpty = 0,
kAudioFrameSpeech = 1,
kAudioFrameCN = 2,
kVideoFrameKey = 3, // independent frame
kVideoFrameDelta = 4, // depends on the previous frame
kVideoFrameGolden = 5, // depends on an old known previous frame
kVideoFrameAltRef = 6
};
// RTP
enum {kRtpCsrcSize = 15}; // RFC 3550 page 13
enum RTPDirections
{
kRtpIncoming = 0,
kRtpOutgoing
};
enum PayloadFrequencies
{
kFreq8000Hz = 8000,
kFreq16000Hz = 16000,
kFreq32000Hz = 32000
};
enum VadModes // degree of bandwidth reduction
{
kVadConventional = 0, // lowest reduction
kVadAggressiveLow,
kVadAggressiveMid,
kVadAggressiveHigh // highest reduction
};
struct NetworkStatistics // NETEQ statistics
{
// current jitter buffer size in ms
WebRtc_UWord16 currentBufferSize;
// preferred (optimal) buffer size in ms
WebRtc_UWord16 preferredBufferSize;
// loss rate (network + late) in percent (in Q14)
WebRtc_UWord16 currentPacketLossRate;
// late loss rate in percent (in Q14)
WebRtc_UWord16 currentDiscardRate;
// fraction (of original stream) of synthesized speech inserted through
// expansion (in Q14)
WebRtc_UWord16 currentExpandRate;
// fraction of synthesized speech inserted through pre-emptive expansion
// (in Q14)
WebRtc_UWord16 currentPreemptiveRate;
// fraction of data removed through acceleration (in Q14)
WebRtc_UWord16 currentAccelerateRate;
};
struct JitterStatistics
{
// smallest Jitter Buffer size during call in ms
WebRtc_UWord32 jbMinSize;
// largest Jitter Buffer size during call in ms
WebRtc_UWord32 jbMaxSize;
// the average JB size, measured over time - ms
WebRtc_UWord32 jbAvgSize;
// number of times the Jitter Buffer changed (using Accelerate or
// Pre-emptive Expand)
WebRtc_UWord32 jbChangeCount;
// amount (in ms) of audio data received late
WebRtc_UWord32 lateLossMs;
// milliseconds removed to reduce jitter buffer size
WebRtc_UWord32 accelerateMs;
// milliseconds discarded through buffer flushing
WebRtc_UWord32 flushedMs;
// milliseconds of generated silence
WebRtc_UWord32 generatedSilentMs;
// milliseconds of synthetic audio data (non-background noise)
WebRtc_UWord32 interpolatedVoiceMs;
// milliseconds of synthetic audio data (background noise level)
WebRtc_UWord32 interpolatedSilentMs;
// count of tiny expansions in output audio
WebRtc_UWord32 countExpandMoreThan120ms;
// count of small expansions in output audio
WebRtc_UWord32 countExpandMoreThan250ms;
// count of medium expansions in output audio
WebRtc_UWord32 countExpandMoreThan500ms;
// count of long expansions in output audio
WebRtc_UWord32 countExpandMoreThan2000ms;
// duration of longest audio drop-out
WebRtc_UWord32 longestExpandDurationMs;
// count of times we got small network outage (inter-arrival time in
// [500, 1000) ms)
WebRtc_UWord32 countIAT500ms;
// count of times we got medium network outage (inter-arrival time in
// [1000, 2000) ms)
WebRtc_UWord32 countIAT1000ms;
// count of times we got large network outage (inter-arrival time >=
// 2000 ms)
WebRtc_UWord32 countIAT2000ms;
// longest packet inter-arrival time in ms
WebRtc_UWord32 longestIATms;
// min time incoming Packet "waited" to be played
WebRtc_UWord32 minPacketDelayMs;
// max time incoming Packet "waited" to be played
WebRtc_UWord32 maxPacketDelayMs;
// avg time incoming Packet "waited" to be played
WebRtc_UWord32 avgPacketDelayMs;
};
typedef struct
{
int min; // minimum
int max; // maximum
int average; // average
} StatVal;
typedef struct // All levels are reported in dBm0
{
StatVal speech_rx; // long-term speech levels on receiving side
StatVal speech_tx; // long-term speech levels on transmitting side
StatVal noise_rx; // long-term noise/silence levels on receiving side
StatVal noise_tx; // long-term noise/silence levels on transmitting side
} LevelStatistics;
typedef struct // All levels are reported in dB
{
StatVal erl; // Echo Return Loss
StatVal erle; // Echo Return Loss Enhancement
StatVal rerl; // RERL = ERL + ERLE
// Echo suppression inside EC at the point just before its NLP
StatVal a_nlp;
} EchoStatistics;
enum TelephoneEventDetectionMethods
{
kInBand = 0,
kOutOfBand = 1,
kInAndOutOfBand = 2
};
enum NsModes // type of Noise Suppression
{
kNsUnchanged = 0, // previously set mode
kNsDefault, // platform default
kNsConference, // conferencing default
kNsLowSuppression, // lowest suppression
kNsModerateSuppression,
kNsHighSuppression,
kNsVeryHighSuppression, // highest suppression
};
enum AgcModes // type of Automatic Gain Control
{
kAgcUnchanged = 0, // previously set mode
kAgcDefault, // platform default
// adaptive mode for use when analog volume control exists (e.g. for
// PC softphone)
kAgcAdaptiveAnalog,
// scaling takes place in the digital domain (e.g. for conference servers
// and embedded devices)
kAgcAdaptiveDigital,
// can be used on embedded devices where the capture signal level
// is predictable
kAgcFixedDigital
};
// EC modes
enum EcModes // type of Echo Control
{
kEcUnchanged = 0, // previously set mode
kEcDefault, // platform default
kEcConference, // conferencing default (aggressive AEC)
kEcAec, // Acoustic Echo Cancellation
kEcAecm, // AEC mobile
};
// AECM modes
enum AecmModes // mode of AECM
{
kAecmQuietEarpieceOrHeadset = 0,
// Quiet earpiece or headset use
kAecmEarpiece, // most earpiece use
kAecmLoudEarpiece, // Loud earpiece or quiet speakerphone use
kAecmSpeakerphone, // most speakerphone use (default)
kAecmLoudSpeakerphone // Loud speakerphone
};
// AGC configuration
typedef struct
{
unsigned short targetLeveldBOv;
unsigned short digitalCompressionGaindB;
bool limiterEnable;
} AgcConfig; // AGC configuration parameters
enum StereoChannel
{
kStereoLeft = 0,
kStereoRight,
kStereoBoth
};
// Audio device layers
enum AudioLayers
{
kAudioPlatformDefault = 0,
kAudioWindowsWave = 1,
kAudioWindowsCore = 2,
kAudioLinuxAlsa = 3,
kAudioLinuxPulse = 4
};
enum NetEqModes // NetEQ playout configurations
{
// Optimized trade-off between low delay and jitter robustness for two-way
// communication.
kNetEqDefault = 0,
// Improved jitter robustness at the cost of increased delay. Can be
// used in one-way communication.
kNetEqStreaming = 1,
// Optimized for decodability of fax signals rather than for perceived audio
// quality.
kNetEqFax = 2,
};
enum NetEqBgnModes // NetEQ Background Noise (BGN) configurations
{
// BGN is always on and will be generated when the incoming RTP stream
// stops (default).
kBgnOn = 0,
// The BGN is faded to zero (complete silence) after a few seconds.
kBgnFade = 1,
// BGN is not used at all. Silence is produced after speech extrapolation
// has faded.
kBgnOff = 2,
};
enum OnHoldModes // On Hold direction
{
kHoldSendAndPlay = 0, // Put both sending and playing in on-hold state.
kHoldSendOnly, // Put only sending in on-hold state.
kHoldPlayOnly // Put only playing in on-hold state.
};
enum AmrMode
{
kRfc3267BwEfficient = 0,
kRfc3267OctetAligned = 1,
kRfc3267FileStorage = 2,
};
// ==================================================================
// Video specific types
// ==================================================================
// Raw video types
enum RawVideoType
{
kVideoI420 = 0,
kVideoYV12 = 1,
kVideoYUY2 = 2,
kVideoUYVY = 3,
kVideoIYUV = 4,
kVideoARGB = 5,
kVideoRGB24 = 6,
kVideoRGB565 = 7,
kVideoARGB4444 = 8,
kVideoARGB1555 = 9,
kVideoMJPEG = 10,
kVideoNV12 = 11,
kVideoNV21 = 12,
kVideoUnknown = 99
};
// Video codec
enum { kConfigParameterSize = 128};
enum { kPayloadNameSize = 32};
enum { kMaxSimulcastStreams = 4};
// H.263 specific
struct VideoCodecH263
{
char quality;
};
// H.264 specific
enum H264Packetization
{
kH264SingleMode = 0,
kH264NonInterleavedMode = 1
};
enum VideoCodecComplexity
{
kComplexityNormal = 0,
kComplexityHigh = 1,
kComplexityHigher = 2,
kComplexityMax = 3
};
enum VideoCodecProfile
{
kProfileBase = 0x00,
kProfileMain = 0x01
};
struct VideoCodecH264
{
H264Packetization packetization;
VideoCodecComplexity complexity;
VideoCodecProfile profile;
char level;
char quality;
bool useFMO;
unsigned char configParameters[kConfigParameterSize];
unsigned char configParametersSize;
};
// VP8 specific
struct VideoCodecVP8
{
bool pictureLossIndicationOn;
bool feedbackModeOn;
VideoCodecComplexity complexity;
unsigned char numberOfTemporalLayers;
};
// MPEG-4 specific
struct VideoCodecMPEG4
{
unsigned char configParameters[kConfigParameterSize];
unsigned char configParametersSize;
char level;
};
// Unknown specific
struct VideoCodecGeneric
{
};
// Video codec types
enum VideoCodecType
{
kVideoCodecH263,
kVideoCodecH264,
kVideoCodecVP8,
kVideoCodecMPEG4,
kVideoCodecI420,
kVideoCodecRED,
kVideoCodecULPFEC,
kVideoCodecUnknown
};
union VideoCodecUnion
{
VideoCodecH263 H263;
VideoCodecH264 H264;
VideoCodecVP8 VP8;
VideoCodecMPEG4 MPEG4;
VideoCodecGeneric Generic;
};
/*
* Simulcast is when the same stream is encoded multiple times with different
* settings such as resolution.
*/
struct SimulcastStream
{
unsigned short width;
unsigned short height;
unsigned char numberOfTemporalLayers;
unsigned int maxBitrate;
unsigned int qpMax; // minimum quality
};
// Common video codec properties
struct VideoCodec
{
VideoCodecType codecType;
char plName[kPayloadNameSize];
unsigned char plType;
unsigned short width;
unsigned short height;
unsigned int startBitrate;
unsigned int maxBitrate;
unsigned int minBitrate;
unsigned char maxFramerate;
VideoCodecUnion codecSpecific;
unsigned int qpMax;
unsigned char numberOfSimulcastStreams;
SimulcastStream simulcastStream[kMaxSimulcastStreams];
};
} // namespace webrtc
#endif // WEBRTC_COMMON_TYPES_H

View File

@ -1 +0,0 @@
SUBDIRS = audio_processing

View File

@ -1,59 +0,0 @@
SUBDIRS = utility ns aec aecm agc
lib_LTLIBRARIES = libwebrtc_audio_processing.la
if NS_FIXED
COMMON_CXXFLAGS += -DWEBRTC_NS_FIXED=1
NS_LIB = libns_fix
else
COMMON_CXXFLAGS += -DWEBRTC_NS_FLOAT=1
NS_LIB = libns
endif
webrtcincludedir = $(includedir)/webrtc_audio_processing
webrtcinclude_HEADERS = $(top_srcdir)/src/typedefs.h \
$(top_srcdir)/src/modules/interface/module.h \
interface/audio_processing.h \
$(top_srcdir)/src/common_types.h \
$(top_srcdir)/src/modules/interface/module_common_types.h
libwebrtc_audio_processing_la_SOURCES = interface/audio_processing.h \
audio_buffer.cc \
audio_buffer.h \
audio_processing_impl.cc \
audio_processing_impl.h \
echo_cancellation_impl.cc \
echo_cancellation_impl.h \
echo_control_mobile_impl.cc \
echo_control_mobile_impl.h \
gain_control_impl.cc \
gain_control_impl.h \
high_pass_filter_impl.cc \
high_pass_filter_impl.h \
level_estimator_impl.cc \
level_estimator_impl.h \
noise_suppression_impl.cc \
noise_suppression_impl.h \
splitting_filter.cc \
splitting_filter.h \
processing_component.cc \
processing_component.h \
voice_detection_impl.cc \
voice_detection_impl.h
libwebrtc_audio_processing_la_CXXFLAGS = $(AM_CXXFLAGS) $(COMMON_CXXFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/common_audio/vad/main/interface \
-I$(top_srcdir)/src/system_wrappers/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility \
-I$(top_srcdir)/src/modules/audio_processing/ns/interface \
-I$(top_srcdir)/src/modules/audio_processing/aec/interface \
-I$(top_srcdir)/src/modules/audio_processing/aecm/interface \
-I$(top_srcdir)/src/modules/audio_processing/agc/interface
libwebrtc_audio_processing_la_LIBADD = $(top_builddir)/src/system_wrappers/libsystem_wrappers.la \
$(top_builddir)/src/common_audio/signal_processing_library/libspl.la \
$(top_builddir)/src/common_audio/vad/libvad.la \
$(top_builddir)/src/modules/audio_processing/utility/libapm_util.la \
$(top_builddir)/src/modules/audio_processing/ns/$(NS_LIB).la \
$(top_builddir)/src/modules/audio_processing/aec/libaec.la \
$(top_builddir)/src/modules/audio_processing/aecm/libaecm.la \
$(top_builddir)/src/modules/audio_processing/agc/libagc.la
libwebrtc_audio_processing_la_LDFLAGS = $(AM_LDFLAGS) -version-info $(LIBWEBRTC_AUDIO_PROCESSING_VERSION_INFO)

View File

@ -1,2 +0,0 @@
andrew@webrtc.org
bjornv@webrtc.org

View File

@ -1,16 +0,0 @@
noinst_LTLIBRARIES = libaec.la
libaec_la_SOURCES = interface/echo_cancellation.h \
echo_cancellation.c \
aec_core.h \
aec_core.c \
aec_core_sse2.c \
aec_rdft.h \
aec_rdft.c \
aec_rdft_sse2.c \
resampler.h \
resampler.c
libaec_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/system_wrappers/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility

View File

@ -1,40 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'aec',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
'apm_util'
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/echo_cancellation.h',
'echo_cancellation.c',
'aec_core.h',
'aec_core.c',
'aec_core_sse2.c',
'aec_rdft.h',
'aec_rdft.c',
'aec_rdft_sse2.c',
'resampler.h',
'resampler.c',
],
},
],
}

File diff suppressed because it is too large


@ -1,181 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* Specifies the interface for the AEC core.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_CORE_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_CORE_H_
#include <stdio.h>
#include "signal_processing_library.h"
#include "typedefs.h"
//#define AEC_DEBUG // for recording files
#define FRAME_LEN 80
#define PART_LEN 64 // Length of partition
#define PART_LEN1 (PART_LEN + 1) // Unique fft coefficients
#define PART_LEN2 (PART_LEN * 2) // Length of partition * 2
#define NR_PART 12 // Number of partitions
#define FILT_LEN (PART_LEN * NR_PART) // Filter length
#define FILT_LEN2 (FILT_LEN * 2) // Double filter length
#define FAR_BUF_LEN (FILT_LEN2 * 2)
#define PREF_BAND_SIZE 24
#define BLOCKL_MAX FRAME_LEN
// Maximum delay in fixed point delay estimator, used for logging
enum {kMaxDelay = 100};
typedef float complex_t[2];
// For performance reasons, some arrays of complex numbers are replaced by twice
// as long arrays of float, all the real parts followed by all the imaginary
// ones (complex_t[SIZE] -> float[2][SIZE]). This allows SIMD optimizations and
// is better than two arrays (one for the real parts and one for the imaginary
// parts) as this other way would require two pointers instead of one and cause
// extra register spilling. This also allows the offsets to be calculated at
// compile time.
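// For example, xfBuf below keeps all real parts in xfBuf[0] and all imaginary
// parts in xfBuf[1], so a four-wide SIMD load from &xfBuf[0][i] fetches four
// consecutive real parts without any de-interleaving.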
// Metrics
enum {offsetLevel = -100};
typedef struct {
float sfrsum;
int sfrcounter;
float framelevel;
float frsum;
int frcounter;
float minlevel;
float averagelevel;
} power_level_t;
typedef struct {
float instant;
float average;
float min;
float max;
float sum;
float hisum;
float himean;
int counter;
int hicounter;
} stats_t;
typedef struct {
int farBufWritePos, farBufReadPos;
int knownDelay;
int inSamples, outSamples;
int delayEstCtr;
void *farFrBuf, *nearFrBuf, *outFrBuf;
void *nearFrBufH;
void *outFrBufH;
float xBuf[PART_LEN2]; // farend
float dBuf[PART_LEN2]; // nearend
float eBuf[PART_LEN2]; // error
float dBufH[PART_LEN2]; // nearend
float xPow[PART_LEN1];
float dPow[PART_LEN1];
float dMinPow[PART_LEN1];
float dInitMinPow[PART_LEN1];
float *noisePow;
float xfBuf[2][NR_PART * PART_LEN1]; // farend fft buffer
float wfBuf[2][NR_PART * PART_LEN1]; // filter fft
complex_t sde[PART_LEN1]; // cross-psd of nearend and error
complex_t sxd[PART_LEN1]; // cross-psd of farend and nearend
complex_t xfwBuf[NR_PART * PART_LEN1]; // farend windowed fft buffer
float sx[PART_LEN1], sd[PART_LEN1], se[PART_LEN1]; // far, near and error psd
float hNs[PART_LEN1];
float hNlFbMin, hNlFbLocalMin;
float hNlXdAvgMin;
int hNlNewMin, hNlMinCtr;
float overDrive, overDriveSm;
float targetSupp, minOverDrive;
float outBuf[PART_LEN];
int delayIdx;
short stNearState, echoState;
short divergeState;
int xfBufBlockPos;
short farBuf[FILT_LEN2 * 2];
short mult; // sampling frequency multiple
int sampFreq;
WebRtc_UWord32 seed;
float mu; // stepsize
float errThresh; // error threshold
int noiseEstCtr;
power_level_t farlevel;
power_level_t nearlevel;
power_level_t linoutlevel;
power_level_t nlpoutlevel;
int metricsMode;
int stateCounter;
stats_t erl;
stats_t erle;
stats_t aNlp;
stats_t rerl;
// Quantities to control H band scaling for SWB input
int freq_avg_ic; // initial bin for averaging nlp gain
int flag_Hband_cn; // for comfort noise
float cn_scale_Hband; // scale for comfort noise in H band
int delay_histogram[kMaxDelay];
int delay_logging_enabled;
void* delay_estimator;
#ifdef AEC_DEBUG
FILE *farFile;
FILE *nearFile;
FILE *outFile;
FILE *outLpFile;
#endif
} aec_t;
typedef void (*WebRtcAec_FilterFar_t)(aec_t *aec, float yf[2][PART_LEN1]);
extern WebRtcAec_FilterFar_t WebRtcAec_FilterFar;
typedef void (*WebRtcAec_ScaleErrorSignal_t)(aec_t *aec, float ef[2][PART_LEN1]);
extern WebRtcAec_ScaleErrorSignal_t WebRtcAec_ScaleErrorSignal;
typedef void (*WebRtcAec_FilterAdaptation_t)
(aec_t *aec, float *fft, float ef[2][PART_LEN1]);
extern WebRtcAec_FilterAdaptation_t WebRtcAec_FilterAdaptation;
typedef void (*WebRtcAec_OverdriveAndSuppress_t)
(aec_t *aec, float hNl[PART_LEN1], const float hNlFb, float efw[2][PART_LEN1]);
extern WebRtcAec_OverdriveAndSuppress_t WebRtcAec_OverdriveAndSuppress;
int WebRtcAec_CreateAec(aec_t **aec);
int WebRtcAec_FreeAec(aec_t *aec);
int WebRtcAec_InitAec(aec_t *aec, int sampFreq);
void WebRtcAec_InitAec_SSE2(void);
void WebRtcAec_InitMetrics(aec_t *aec);
void WebRtcAec_ProcessFrame(aec_t *aec, const short *farend,
const short *nearend, const short *nearendH,
short *out, short *outH,
int knownDelay);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_CORE_H_


@ -1,417 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* The core AEC algorithm, SSE2 version of speed-critical functions.
*/
#include "typedefs.h"
#if defined(WEBRTC_USE_SSE2)
#include <emmintrin.h>
#include <math.h>
#include "aec_core.h"
#include "aec_rdft.h"
__inline static float MulRe(float aRe, float aIm, float bRe, float bIm)
{
return aRe * bRe - aIm * bIm;
}
__inline static float MulIm(float aRe, float aIm, float bRe, float bIm)
{
return aRe * bIm + aIm * bRe;
}
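// Frequency-domain filtering: for each partition, accumulate the complex
// product of the stored far-end spectrum block (xfBuf) and the adaptive
// filter taps (wfBuf) into the output spectrum yf.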
static void FilterFarSSE2(aec_t *aec, float yf[2][PART_LEN1])
{
int i;
for (i = 0; i < NR_PART; i++) {
int j;
int xPos = (i + aec->xfBufBlockPos) * PART_LEN1;
int pos = i * PART_LEN1;
// Check for wrap
if (i + aec->xfBufBlockPos >= NR_PART) {
xPos -= NR_PART*(PART_LEN1);
}
// vectorized code (four at once)
for (j = 0; j + 3 < PART_LEN1; j += 4) {
const __m128 xfBuf_re = _mm_loadu_ps(&aec->xfBuf[0][xPos + j]);
const __m128 xfBuf_im = _mm_loadu_ps(&aec->xfBuf[1][xPos + j]);
const __m128 wfBuf_re = _mm_loadu_ps(&aec->wfBuf[0][pos + j]);
const __m128 wfBuf_im = _mm_loadu_ps(&aec->wfBuf[1][pos + j]);
const __m128 yf_re = _mm_loadu_ps(&yf[0][j]);
const __m128 yf_im = _mm_loadu_ps(&yf[1][j]);
const __m128 a = _mm_mul_ps(xfBuf_re, wfBuf_re);
const __m128 b = _mm_mul_ps(xfBuf_im, wfBuf_im);
const __m128 c = _mm_mul_ps(xfBuf_re, wfBuf_im);
const __m128 d = _mm_mul_ps(xfBuf_im, wfBuf_re);
const __m128 e = _mm_sub_ps(a, b);
const __m128 f = _mm_add_ps(c, d);
const __m128 g = _mm_add_ps(yf_re, e);
const __m128 h = _mm_add_ps(yf_im, f);
_mm_storeu_ps(&yf[0][j], g);
_mm_storeu_ps(&yf[1][j], h);
}
// scalar code for the remaining items.
for (; j < PART_LEN1; j++) {
yf[0][j] += MulRe(aec->xfBuf[0][xPos + j], aec->xfBuf[1][xPos + j],
aec->wfBuf[0][ pos + j], aec->wfBuf[1][ pos + j]);
yf[1][j] += MulIm(aec->xfBuf[0][xPos + j], aec->xfBuf[1][xPos + j],
aec->wfBuf[0][ pos + j], aec->wfBuf[1][ pos + j]);
}
}
}
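// Error-signal scaling for the adaptive-filter update: normalise the error
// spectrum by the far-end power (regularised with 1e-10), clamp its magnitude
// at errThresh, and apply the step size mu.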
static void ScaleErrorSignalSSE2(aec_t *aec, float ef[2][PART_LEN1])
{
const __m128 k1e_10f = _mm_set1_ps(1e-10f);
const __m128 kThresh = _mm_set1_ps(aec->errThresh);
const __m128 kMu = _mm_set1_ps(aec->mu);
int i;
// vectorized code (four at once)
for (i = 0; i + 3 < PART_LEN1; i += 4) {
const __m128 xPow = _mm_loadu_ps(&aec->xPow[i]);
const __m128 ef_re_base = _mm_loadu_ps(&ef[0][i]);
const __m128 ef_im_base = _mm_loadu_ps(&ef[1][i]);
const __m128 xPowPlus = _mm_add_ps(xPow, k1e_10f);
__m128 ef_re = _mm_div_ps(ef_re_base, xPowPlus);
__m128 ef_im = _mm_div_ps(ef_im_base, xPowPlus);
const __m128 ef_re2 = _mm_mul_ps(ef_re, ef_re);
const __m128 ef_im2 = _mm_mul_ps(ef_im, ef_im);
const __m128 ef_sum2 = _mm_add_ps(ef_re2, ef_im2);
const __m128 absEf = _mm_sqrt_ps(ef_sum2);
const __m128 bigger = _mm_cmpgt_ps(absEf, kThresh);
__m128 absEfPlus = _mm_add_ps(absEf, k1e_10f);
const __m128 absEfInv = _mm_div_ps(kThresh, absEfPlus);
__m128 ef_re_if = _mm_mul_ps(ef_re, absEfInv);
__m128 ef_im_if = _mm_mul_ps(ef_im, absEfInv);
ef_re_if = _mm_and_ps(bigger, ef_re_if);
ef_im_if = _mm_and_ps(bigger, ef_im_if);
ef_re = _mm_andnot_ps(bigger, ef_re);
ef_im = _mm_andnot_ps(bigger, ef_im);
ef_re = _mm_or_ps(ef_re, ef_re_if);
ef_im = _mm_or_ps(ef_im, ef_im_if);
ef_re = _mm_mul_ps(ef_re, kMu);
ef_im = _mm_mul_ps(ef_im, kMu);
_mm_storeu_ps(&ef[0][i], ef_re);
_mm_storeu_ps(&ef[1][i], ef_im);
}
// scalar code for the remaining items.
for (; i < (PART_LEN1); i++) {
float absEf;
ef[0][i] /= (aec->xPow[i] + 1e-10f);
ef[1][i] /= (aec->xPow[i] + 1e-10f);
absEf = sqrtf(ef[0][i] * ef[0][i] + ef[1][i] * ef[1][i]);
if (absEf > aec->errThresh) {
absEf = aec->errThresh / (absEf + 1e-10f);
ef[0][i] *= absEf;
ef[1][i] *= absEf;
}
// Stepsize factor
ef[0][i] *= aec->mu;
ef[1][i] *= aec->mu;
}
}
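// Filter adaptation: per partition, form conj(X) * E in the frequency domain,
// transform to the time domain, zero the second half of the block (the usual
// gradient constraint in partitioned-block frequency-domain LMS), transform
// back and accumulate the result into the filter taps wfBuf.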
static void FilterAdaptationSSE2(aec_t *aec, float *fft, float ef[2][PART_LEN1]) {
int i, j;
for (i = 0; i < NR_PART; i++) {
int xPos = (i + aec->xfBufBlockPos)*(PART_LEN1);
int pos = i * PART_LEN1;
// Check for wrap
if (i + aec->xfBufBlockPos >= NR_PART) {
xPos -= NR_PART * PART_LEN1;
}
// Process the whole array...
for (j = 0; j < PART_LEN; j+= 4) {
// Load xfBuf and ef.
const __m128 xfBuf_re = _mm_loadu_ps(&aec->xfBuf[0][xPos + j]);
const __m128 xfBuf_im = _mm_loadu_ps(&aec->xfBuf[1][xPos + j]);
const __m128 ef_re = _mm_loadu_ps(&ef[0][j]);
const __m128 ef_im = _mm_loadu_ps(&ef[1][j]);
// Calculate the product of conjugate(xfBuf) by ef.
// re(conjugate(a) * b) = aRe * bRe + aIm * bIm
// im(conjugate(a) * b)= aRe * bIm - aIm * bRe
const __m128 a = _mm_mul_ps(xfBuf_re, ef_re);
const __m128 b = _mm_mul_ps(xfBuf_im, ef_im);
const __m128 c = _mm_mul_ps(xfBuf_re, ef_im);
const __m128 d = _mm_mul_ps(xfBuf_im, ef_re);
const __m128 e = _mm_add_ps(a, b);
const __m128 f = _mm_sub_ps(c, d);
// Interleave real and imaginary parts.
const __m128 g = _mm_unpacklo_ps(e, f);
const __m128 h = _mm_unpackhi_ps(e, f);
// Store
_mm_storeu_ps(&fft[2*j + 0], g);
_mm_storeu_ps(&fft[2*j + 4], h);
}
// ... and fixup the first imaginary entry.
fft[1] = MulRe(aec->xfBuf[0][xPos + PART_LEN],
-aec->xfBuf[1][xPos + PART_LEN],
ef[0][PART_LEN], ef[1][PART_LEN]);
aec_rdft_inverse_128(fft);
memset(fft + PART_LEN, 0, sizeof(float)*PART_LEN);
// fft scaling
{
float scale = 2.0f / PART_LEN2;
const __m128 scale_ps = _mm_load_ps1(&scale);
for (j = 0; j < PART_LEN; j+=4) {
const __m128 fft_ps = _mm_loadu_ps(&fft[j]);
const __m128 fft_scale = _mm_mul_ps(fft_ps, scale_ps);
_mm_storeu_ps(&fft[j], fft_scale);
}
}
aec_rdft_forward_128(fft);
{
float wt1 = aec->wfBuf[1][pos];
aec->wfBuf[0][pos + PART_LEN] += fft[1];
for (j = 0; j < PART_LEN; j+= 4) {
__m128 wtBuf_re = _mm_loadu_ps(&aec->wfBuf[0][pos + j]);
__m128 wtBuf_im = _mm_loadu_ps(&aec->wfBuf[1][pos + j]);
const __m128 fft0 = _mm_loadu_ps(&fft[2 * j + 0]);
const __m128 fft4 = _mm_loadu_ps(&fft[2 * j + 4]);
const __m128 fft_re = _mm_shuffle_ps(fft0, fft4, _MM_SHUFFLE(2, 0, 2 ,0));
const __m128 fft_im = _mm_shuffle_ps(fft0, fft4, _MM_SHUFFLE(3, 1, 3 ,1));
wtBuf_re = _mm_add_ps(wtBuf_re, fft_re);
wtBuf_im = _mm_add_ps(wtBuf_im, fft_im);
_mm_storeu_ps(&aec->wfBuf[0][pos + j], wtBuf_re);
_mm_storeu_ps(&aec->wfBuf[1][pos + j], wtBuf_im);
}
aec->wfBuf[1][pos] = wt1;
}
}
}
static __m128 mm_pow_ps(__m128 a, __m128 b)
{
// a^b = exp2(b * log2(a))
// exp2(x) and log2(x) are calculated using polynomial approximations.
__m128 log2_a, b_log2_a, a_exp_b;
// Calculate log2(x), x = a.
{
// To calculate log2(x), we decompose x like this:
// x = y * 2^n
// n is an integer
// y is in the [1.0, 2.0) range
//
// log2(x) = log2(y) + n
// n can be evaluated by playing with float representation.
// log2(y) in a small range can be approximated, this code uses an order
// five polynomial approximation. The coefficients have been
// estimated with the Remez algorithm and the resulting
// polynomial has a maximum relative error of 0.00086%.
// Compute n.
// This is done by masking the exponent, shifting it into the top bit of
// the mantissa, putting eight into the biased exponent (to shift/
// compensate the fact that the exponent has been shifted in the top/
// fractional part and finally getting rid of the implicit leading one
// from the mantissa by subtracting it out.
static const ALIGN16_BEG int float_exponent_mask[4] ALIGN16_END =
{0x7F800000, 0x7F800000, 0x7F800000, 0x7F800000};
static const ALIGN16_BEG int eight_biased_exponent[4] ALIGN16_END =
{0x43800000, 0x43800000, 0x43800000, 0x43800000};
static const ALIGN16_BEG int implicit_leading_one[4] ALIGN16_END =
{0x43BF8000, 0x43BF8000, 0x43BF8000, 0x43BF8000};
static const int shift_exponent_into_top_mantissa = 8;
const __m128 two_n = _mm_and_ps(a, *((__m128 *)float_exponent_mask));
const __m128 n_1 = _mm_castsi128_ps(_mm_srli_epi32(_mm_castps_si128(two_n),
shift_exponent_into_top_mantissa));
const __m128 n_0 = _mm_or_ps(n_1, *((__m128 *)eight_biased_exponent));
const __m128 n = _mm_sub_ps(n_0, *((__m128 *)implicit_leading_one));
// Compute y.
static const ALIGN16_BEG int mantissa_mask[4] ALIGN16_END =
{0x007FFFFF, 0x007FFFFF, 0x007FFFFF, 0x007FFFFF};
static const ALIGN16_BEG int zero_biased_exponent_is_one[4] ALIGN16_END =
{0x3F800000, 0x3F800000, 0x3F800000, 0x3F800000};
const __m128 mantissa = _mm_and_ps(a, *((__m128 *)mantissa_mask));
const __m128 y = _mm_or_ps(
mantissa, *((__m128 *)zero_biased_exponent_is_one));
// Approximate log2(y) ~= (y - 1) * pol5(y).
// pol5(y) = C5 * y^5 + C4 * y^4 + C3 * y^3 + C2 * y^2 + C1 * y + C0
static const ALIGN16_BEG float ALIGN16_END C5[4] =
{-3.4436006e-2f, -3.4436006e-2f, -3.4436006e-2f, -3.4436006e-2f};
static const ALIGN16_BEG float ALIGN16_END C4[4] =
{3.1821337e-1f, 3.1821337e-1f, 3.1821337e-1f, 3.1821337e-1f};
static const ALIGN16_BEG float ALIGN16_END C3[4] =
{-1.2315303f, -1.2315303f, -1.2315303f, -1.2315303f};
static const ALIGN16_BEG float ALIGN16_END C2[4] =
{2.5988452f, 2.5988452f, 2.5988452f, 2.5988452f};
static const ALIGN16_BEG float ALIGN16_END C1[4] =
{-3.3241990f, -3.3241990f, -3.3241990f, -3.3241990f};
static const ALIGN16_BEG float ALIGN16_END C0[4] =
{3.1157899f, 3.1157899f, 3.1157899f, 3.1157899f};
const __m128 pol5_y_0 = _mm_mul_ps(y, *((__m128 *)C5));
const __m128 pol5_y_1 = _mm_add_ps(pol5_y_0, *((__m128 *)C4));
const __m128 pol5_y_2 = _mm_mul_ps(pol5_y_1, y);
const __m128 pol5_y_3 = _mm_add_ps(pol5_y_2, *((__m128 *)C3));
const __m128 pol5_y_4 = _mm_mul_ps(pol5_y_3, y);
const __m128 pol5_y_5 = _mm_add_ps(pol5_y_4, *((__m128 *)C2));
const __m128 pol5_y_6 = _mm_mul_ps(pol5_y_5, y);
const __m128 pol5_y_7 = _mm_add_ps(pol5_y_6, *((__m128 *)C1));
const __m128 pol5_y_8 = _mm_mul_ps(pol5_y_7, y);
const __m128 pol5_y = _mm_add_ps(pol5_y_8, *((__m128 *)C0));
const __m128 y_minus_one = _mm_sub_ps(
y, *((__m128 *)zero_biased_exponent_is_one));
const __m128 log2_y = _mm_mul_ps(y_minus_one , pol5_y);
// Combine parts.
log2_a = _mm_add_ps(n, log2_y);
}
// b * log2(a)
b_log2_a = _mm_mul_ps(b, log2_a);
// Calculate exp2(x), x = b * log2(a).
{
// To calculate 2^x, we decompose x like this:
// x = n + y
// n is an integer, the value of x - 0.5 rounded down, therefore
// y is in the [0.5, 1.5) range
//
// 2^x = 2^n * 2^y
// 2^n can be evaluated by playing with float representation.
// 2^y in a small range can be approximated, this code uses an order two
// polynomial approximation. The coefficients have been estimated
// with the Remez algorithm and the resulting polynomial has a
// maximum relative error of 0.17%.
// To avoid over/underflow, we reduce the range of input to ]-127, 129].
static const ALIGN16_BEG float max_input[4] ALIGN16_END =
{129.f, 129.f, 129.f, 129.f};
static const ALIGN16_BEG float min_input[4] ALIGN16_END =
{-126.99999f, -126.99999f, -126.99999f, -126.99999f};
const __m128 x_min = _mm_min_ps(b_log2_a, *((__m128 *)max_input));
const __m128 x_max = _mm_max_ps(x_min, *((__m128 *)min_input));
// Compute n.
static const ALIGN16_BEG float half[4] ALIGN16_END =
{0.5f, 0.5f, 0.5f, 0.5f};
const __m128 x_minus_half = _mm_sub_ps(x_max, *((__m128 *)half));
const __m128i x_minus_half_floor = _mm_cvtps_epi32(x_minus_half);
// Compute 2^n.
static const ALIGN16_BEG int float_exponent_bias[4] ALIGN16_END =
{127, 127, 127, 127};
static const int float_exponent_shift = 23;
const __m128i two_n_exponent = _mm_add_epi32(
x_minus_half_floor, *((__m128i *)float_exponent_bias));
const __m128 two_n = _mm_castsi128_ps(_mm_slli_epi32(
two_n_exponent, float_exponent_shift));
// Compute y.
const __m128 y = _mm_sub_ps(x_max, _mm_cvtepi32_ps(x_minus_half_floor));
// Approximate 2^y ~= C2 * y^2 + C1 * y + C0.
static const ALIGN16_BEG float C2[4] ALIGN16_END =
{3.3718944e-1f, 3.3718944e-1f, 3.3718944e-1f, 3.3718944e-1f};
static const ALIGN16_BEG float C1[4] ALIGN16_END =
{6.5763628e-1f, 6.5763628e-1f, 6.5763628e-1f, 6.5763628e-1f};
static const ALIGN16_BEG float C0[4] ALIGN16_END =
{1.0017247f, 1.0017247f, 1.0017247f, 1.0017247f};
const __m128 exp2_y_0 = _mm_mul_ps(y, *((__m128 *)C2));
const __m128 exp2_y_1 = _mm_add_ps(exp2_y_0, *((__m128 *)C1));
const __m128 exp2_y_2 = _mm_mul_ps(exp2_y_1, y);
const __m128 exp2_y = _mm_add_ps(exp2_y_2, *((__m128 *)C0));
// Combine parts.
a_exp_b = _mm_mul_ps(exp2_y, two_n);
}
return a_exp_b;
}
extern const float WebRtcAec_weightCurve[65];
extern const float WebRtcAec_overDriveCurve[65];
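// Echo suppression: where hNl exceeds the feedback value hNlFb, pull it
// towards hNlFb (weighted by WebRtcAec_weightCurve), raise it to the
// overDriveSm * overDriveCurve power, and apply the result as a gain to the
// error spectrum efw, flipping the sign of the imaginary part to match the
// Ooura FFT convention before comfort noise is added.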
static void OverdriveAndSuppressSSE2(aec_t *aec, float hNl[PART_LEN1],
const float hNlFb,
float efw[2][PART_LEN1]) {
int i;
const __m128 vec_hNlFb = _mm_set1_ps(hNlFb);
const __m128 vec_one = _mm_set1_ps(1.0f);
const __m128 vec_minus_one = _mm_set1_ps(-1.0f);
const __m128 vec_overDriveSm = _mm_set1_ps(aec->overDriveSm);
// vectorized code (four at once)
for (i = 0; i + 3 < PART_LEN1; i+=4) {
// Weight subbands
__m128 vec_hNl = _mm_loadu_ps(&hNl[i]);
const __m128 vec_weightCurve = _mm_loadu_ps(&WebRtcAec_weightCurve[i]);
const __m128 bigger = _mm_cmpgt_ps(vec_hNl, vec_hNlFb);
const __m128 vec_weightCurve_hNlFb = _mm_mul_ps(
vec_weightCurve, vec_hNlFb);
const __m128 vec_one_weightCurve = _mm_sub_ps(vec_one, vec_weightCurve);
const __m128 vec_one_weightCurve_hNl = _mm_mul_ps(
vec_one_weightCurve, vec_hNl);
const __m128 vec_if0 = _mm_andnot_ps(bigger, vec_hNl);
const __m128 vec_if1 = _mm_and_ps(
bigger, _mm_add_ps(vec_weightCurve_hNlFb, vec_one_weightCurve_hNl));
vec_hNl = _mm_or_ps(vec_if0, vec_if1);
{
const __m128 vec_overDriveCurve = _mm_loadu_ps(
&WebRtcAec_overDriveCurve[i]);
const __m128 vec_overDriveSm_overDriveCurve = _mm_mul_ps(
vec_overDriveSm, vec_overDriveCurve);
vec_hNl = mm_pow_ps(vec_hNl, vec_overDriveSm_overDriveCurve);
_mm_storeu_ps(&hNl[i], vec_hNl);
}
// Suppress error signal
{
__m128 vec_efw_re = _mm_loadu_ps(&efw[0][i]);
__m128 vec_efw_im = _mm_loadu_ps(&efw[1][i]);
vec_efw_re = _mm_mul_ps(vec_efw_re, vec_hNl);
vec_efw_im = _mm_mul_ps(vec_efw_im, vec_hNl);
// Ooura fft returns incorrect sign on imaginary component. It matters
// here because we are making an additive change with comfort noise.
vec_efw_im = _mm_mul_ps(vec_efw_im, vec_minus_one);
_mm_storeu_ps(&efw[0][i], vec_efw_re);
_mm_storeu_ps(&efw[1][i], vec_efw_im);
}
}
// scalar code for the remaining items.
for (; i < PART_LEN1; i++) {
// Weight subbands
if (hNl[i] > hNlFb) {
hNl[i] = WebRtcAec_weightCurve[i] * hNlFb +
(1 - WebRtcAec_weightCurve[i]) * hNl[i];
}
hNl[i] = powf(hNl[i], aec->overDriveSm * WebRtcAec_overDriveCurve[i]);
// Suppress error signal
efw[0][i] *= hNl[i];
efw[1][i] *= hNl[i];
// Ooura fft returns incorrect sign on imaginary component. It matters
// here because we are making an additive change with comfort noise.
efw[1][i] *= -1;
}
}
void WebRtcAec_InitAec_SSE2(void) {
WebRtcAec_FilterFar = FilterFarSSE2;
WebRtcAec_ScaleErrorSignal = ScaleErrorSignalSSE2;
WebRtcAec_FilterAdaptation = FilterAdaptationSSE2;
WebRtcAec_OverdriveAndSuppress = OverdriveAndSuppressSSE2;
}
#endif // WEBRTC_USE_SSE2
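For reference, the decomposition that mm_pow_ps() above vectorises can be written in scalar form with the standard C library. This is only an illustrative sketch (scalar_pow is an arbitrary name, not part of the sources); the real code replaces log2f/exp2f with the low-order polynomial approximations documented in the comments so that four lanes are handled per iteration:

#include <math.h>

// Scalar reference for mm_pow_ps(): a^b evaluated as exp2(b * log2(a)),
// assuming a > 0. The SSE2 version approximates log2f() and exp2f() with
// small polynomials (maximum relative errors of roughly 0.00086% and 0.17%
// respectively, per the comments above) to stay branch-free and vectorisable.
static float scalar_pow(float a, float b) {
  return exp2f(b * log2f(a));
}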


@ -1,57 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_RDFT_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_RDFT_H_
// These intrinsics were unavailable before VS 2008.
// TODO(andrew): move to a common file.
#if defined(_MSC_VER) && _MSC_VER < 1500
#include <emmintrin.h>
static __inline __m128 _mm_castsi128_ps(__m128i a) { return *(__m128*)&a; }
static __inline __m128i _mm_castps_si128(__m128 a) { return *(__m128i*)&a; }
#endif
#ifdef _MSC_VER /* visual c++ */
# define ALIGN16_BEG __declspec(align(16))
# define ALIGN16_END
#else /* gcc or icc */
# define ALIGN16_BEG
# define ALIGN16_END __attribute__((aligned(16)))
#endif
// constants shared by all paths (C, SSE2).
extern float rdft_w[64];
// constants used by the C path.
extern float rdft_wk3ri_first[32];
extern float rdft_wk3ri_second[32];
// constants used by SSE2 but initialized in C path.
extern float rdft_wk1r[32];
extern float rdft_wk2r[32];
extern float rdft_wk3r[32];
extern float rdft_wk1i[32];
extern float rdft_wk2i[32];
extern float rdft_wk3i[32];
extern float cftmdl_wk1r[4];
// code path selection function pointers
typedef void (*rft_sub_128_t)(float *a);
extern rft_sub_128_t rftfsub_128;
extern rft_sub_128_t rftbsub_128;
extern rft_sub_128_t cft1st_128;
extern rft_sub_128_t cftmdl_128;
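// These pointers implement the runtime code-path selection: aec_rdft_init()
// presumably installs the plain-C implementations, and aec_rdft_init_sse2()
// (see aec_rdft_sse2.c) overwrites them with the SSE2 variants when that
// path is enabled.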
// entry points
void aec_rdft_init(void);
void aec_rdft_init_sse2(void);
void aec_rdft_forward_128(float *a);
void aec_rdft_inverse_128(float *a);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_RDFT_H_


@ -1,431 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "typedefs.h"
#if defined(WEBRTC_USE_SSE2)
#include <emmintrin.h>
#include "aec_rdft.h"
static const ALIGN16_BEG float ALIGN16_END k_swap_sign[4] =
{-1.f, 1.f, -1.f, 1.f};
static void cft1st_128_SSE2(float *a) {
const __m128 mm_swap_sign = _mm_load_ps(k_swap_sign);
int j, k2;
for (k2 = 0, j = 0; j < 128; j += 16, k2 += 4) {
__m128 a00v = _mm_loadu_ps(&a[j + 0]);
__m128 a04v = _mm_loadu_ps(&a[j + 4]);
__m128 a08v = _mm_loadu_ps(&a[j + 8]);
__m128 a12v = _mm_loadu_ps(&a[j + 12]);
__m128 a01v = _mm_shuffle_ps(a00v, a08v, _MM_SHUFFLE(1, 0, 1 ,0));
__m128 a23v = _mm_shuffle_ps(a00v, a08v, _MM_SHUFFLE(3, 2, 3 ,2));
__m128 a45v = _mm_shuffle_ps(a04v, a12v, _MM_SHUFFLE(1, 0, 1 ,0));
__m128 a67v = _mm_shuffle_ps(a04v, a12v, _MM_SHUFFLE(3, 2, 3 ,2));
const __m128 wk1rv = _mm_load_ps(&rdft_wk1r[k2]);
const __m128 wk1iv = _mm_load_ps(&rdft_wk1i[k2]);
const __m128 wk2rv = _mm_load_ps(&rdft_wk2r[k2]);
const __m128 wk2iv = _mm_load_ps(&rdft_wk2i[k2]);
const __m128 wk3rv = _mm_load_ps(&rdft_wk3r[k2]);
const __m128 wk3iv = _mm_load_ps(&rdft_wk3i[k2]);
__m128 x0v = _mm_add_ps(a01v, a23v);
const __m128 x1v = _mm_sub_ps(a01v, a23v);
const __m128 x2v = _mm_add_ps(a45v, a67v);
const __m128 x3v = _mm_sub_ps(a45v, a67v);
__m128 x0w;
a01v = _mm_add_ps(x0v, x2v);
x0v = _mm_sub_ps(x0v, x2v);
x0w = _mm_shuffle_ps(x0v, x0v, _MM_SHUFFLE(2, 3, 0 ,1));
{
const __m128 a45_0v = _mm_mul_ps(wk2rv, x0v);
const __m128 a45_1v = _mm_mul_ps(wk2iv, x0w);
a45v = _mm_add_ps(a45_0v, a45_1v);
}
{
__m128 a23_0v, a23_1v;
const __m128 x3w = _mm_shuffle_ps(x3v, x3v, _MM_SHUFFLE(2, 3, 0 ,1));
const __m128 x3s = _mm_mul_ps(mm_swap_sign, x3w);
x0v = _mm_add_ps(x1v, x3s);
x0w = _mm_shuffle_ps(x0v, x0v, _MM_SHUFFLE(2, 3, 0 ,1));
a23_0v = _mm_mul_ps(wk1rv, x0v);
a23_1v = _mm_mul_ps(wk1iv, x0w);
a23v = _mm_add_ps(a23_0v, a23_1v);
x0v = _mm_sub_ps(x1v, x3s);
x0w = _mm_shuffle_ps(x0v, x0v, _MM_SHUFFLE(2, 3, 0 ,1));
}
{
const __m128 a67_0v = _mm_mul_ps(wk3rv, x0v);
const __m128 a67_1v = _mm_mul_ps(wk3iv, x0w);
a67v = _mm_add_ps(a67_0v, a67_1v);
}
a00v = _mm_shuffle_ps(a01v, a23v, _MM_SHUFFLE(1, 0, 1 ,0));
a04v = _mm_shuffle_ps(a45v, a67v, _MM_SHUFFLE(1, 0, 1 ,0));
a08v = _mm_shuffle_ps(a01v, a23v, _MM_SHUFFLE(3, 2, 3 ,2));
a12v = _mm_shuffle_ps(a45v, a67v, _MM_SHUFFLE(3, 2, 3 ,2));
_mm_storeu_ps(&a[j + 0], a00v);
_mm_storeu_ps(&a[j + 4], a04v);
_mm_storeu_ps(&a[j + 8], a08v);
_mm_storeu_ps(&a[j + 12], a12v);
}
}
static void cftmdl_128_SSE2(float *a) {
const int l = 8;
const __m128 mm_swap_sign = _mm_load_ps(k_swap_sign);
int j0;
__m128 wk1rv = _mm_load_ps(cftmdl_wk1r);
for (j0 = 0; j0 < l; j0 += 2) {
const __m128i a_00 = _mm_loadl_epi64((__m128i*)&a[j0 + 0]);
const __m128i a_08 = _mm_loadl_epi64((__m128i*)&a[j0 + 8]);
const __m128i a_32 = _mm_loadl_epi64((__m128i*)&a[j0 + 32]);
const __m128i a_40 = _mm_loadl_epi64((__m128i*)&a[j0 + 40]);
const __m128 a_00_32 = _mm_shuffle_ps(_mm_castsi128_ps(a_00),
_mm_castsi128_ps(a_32),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_08_40 = _mm_shuffle_ps(_mm_castsi128_ps(a_08),
_mm_castsi128_ps(a_40),
_MM_SHUFFLE(1, 0, 1 ,0));
__m128 x0r0_0i0_0r1_x0i1 = _mm_add_ps(a_00_32, a_08_40);
const __m128 x1r0_1i0_1r1_x1i1 = _mm_sub_ps(a_00_32, a_08_40);
const __m128i a_16 = _mm_loadl_epi64((__m128i*)&a[j0 + 16]);
const __m128i a_24 = _mm_loadl_epi64((__m128i*)&a[j0 + 24]);
const __m128i a_48 = _mm_loadl_epi64((__m128i*)&a[j0 + 48]);
const __m128i a_56 = _mm_loadl_epi64((__m128i*)&a[j0 + 56]);
const __m128 a_16_48 = _mm_shuffle_ps(_mm_castsi128_ps(a_16),
_mm_castsi128_ps(a_48),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_24_56 = _mm_shuffle_ps(_mm_castsi128_ps(a_24),
_mm_castsi128_ps(a_56),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 x2r0_2i0_2r1_x2i1 = _mm_add_ps(a_16_48, a_24_56);
const __m128 x3r0_3i0_3r1_x3i1 = _mm_sub_ps(a_16_48, a_24_56);
const __m128 xx0 = _mm_add_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 xx1 = _mm_sub_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 x3i0_3r0_3i1_x3r1 = _mm_castsi128_ps(
_mm_shuffle_epi32(_mm_castps_si128(x3r0_3i0_3r1_x3i1),
_MM_SHUFFLE(2, 3, 0, 1)));
const __m128 x3_swapped = _mm_mul_ps(mm_swap_sign, x3i0_3r0_3i1_x3r1);
const __m128 x1_x3_add = _mm_add_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 x1_x3_sub = _mm_sub_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 yy0 = _mm_shuffle_ps(x1_x3_add, x1_x3_sub,
_MM_SHUFFLE(2, 2, 2 ,2));
const __m128 yy1 = _mm_shuffle_ps(x1_x3_add, x1_x3_sub,
_MM_SHUFFLE(3, 3, 3 ,3));
const __m128 yy2 = _mm_mul_ps(mm_swap_sign, yy1);
const __m128 yy3 = _mm_add_ps(yy0, yy2);
const __m128 yy4 = _mm_mul_ps(wk1rv, yy3);
_mm_storel_epi64((__m128i*)&a[j0 + 0], _mm_castps_si128(xx0));
_mm_storel_epi64((__m128i*)&a[j0 + 32],
_mm_shuffle_epi32(_mm_castps_si128(xx0),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 16], _mm_castps_si128(xx1));
_mm_storel_epi64((__m128i*)&a[j0 + 48],
_mm_shuffle_epi32(_mm_castps_si128(xx1),
_MM_SHUFFLE(2, 3, 2, 3)));
a[j0 + 48] = -a[j0 + 48];
_mm_storel_epi64((__m128i*)&a[j0 + 8], _mm_castps_si128(x1_x3_add));
_mm_storel_epi64((__m128i*)&a[j0 + 24], _mm_castps_si128(x1_x3_sub));
_mm_storel_epi64((__m128i*)&a[j0 + 40], _mm_castps_si128(yy4));
_mm_storel_epi64((__m128i*)&a[j0 + 56],
_mm_shuffle_epi32(_mm_castps_si128(yy4),
_MM_SHUFFLE(2, 3, 2, 3)));
}
{
int k = 64;
int k1 = 2;
int k2 = 2 * k1;
const __m128 wk2rv = _mm_load_ps(&rdft_wk2r[k2+0]);
const __m128 wk2iv = _mm_load_ps(&rdft_wk2i[k2+0]);
const __m128 wk1iv = _mm_load_ps(&rdft_wk1i[k2+0]);
const __m128 wk3rv = _mm_load_ps(&rdft_wk3r[k2+0]);
const __m128 wk3iv = _mm_load_ps(&rdft_wk3i[k2+0]);
wk1rv = _mm_load_ps(&rdft_wk1r[k2+0]);
for (j0 = k; j0 < l + k; j0 += 2) {
const __m128i a_00 = _mm_loadl_epi64((__m128i*)&a[j0 + 0]);
const __m128i a_08 = _mm_loadl_epi64((__m128i*)&a[j0 + 8]);
const __m128i a_32 = _mm_loadl_epi64((__m128i*)&a[j0 + 32]);
const __m128i a_40 = _mm_loadl_epi64((__m128i*)&a[j0 + 40]);
const __m128 a_00_32 = _mm_shuffle_ps(_mm_castsi128_ps(a_00),
_mm_castsi128_ps(a_32),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_08_40 = _mm_shuffle_ps(_mm_castsi128_ps(a_08),
_mm_castsi128_ps(a_40),
_MM_SHUFFLE(1, 0, 1 ,0));
__m128 x0r0_0i0_0r1_x0i1 = _mm_add_ps(a_00_32, a_08_40);
const __m128 x1r0_1i0_1r1_x1i1 = _mm_sub_ps(a_00_32, a_08_40);
const __m128i a_16 = _mm_loadl_epi64((__m128i*)&a[j0 + 16]);
const __m128i a_24 = _mm_loadl_epi64((__m128i*)&a[j0 + 24]);
const __m128i a_48 = _mm_loadl_epi64((__m128i*)&a[j0 + 48]);
const __m128i a_56 = _mm_loadl_epi64((__m128i*)&a[j0 + 56]);
const __m128 a_16_48 = _mm_shuffle_ps(_mm_castsi128_ps(a_16),
_mm_castsi128_ps(a_48),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_24_56 = _mm_shuffle_ps(_mm_castsi128_ps(a_24),
_mm_castsi128_ps(a_56),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 x2r0_2i0_2r1_x2i1 = _mm_add_ps(a_16_48, a_24_56);
const __m128 x3r0_3i0_3r1_x3i1 = _mm_sub_ps(a_16_48, a_24_56);
const __m128 xx = _mm_add_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 xx1 = _mm_sub_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 xx2 = _mm_mul_ps(xx1 , wk2rv);
const __m128 xx3 = _mm_mul_ps(wk2iv,
_mm_castsi128_ps(_mm_shuffle_epi32(_mm_castps_si128(xx1),
_MM_SHUFFLE(2, 3, 0, 1))));
const __m128 xx4 = _mm_add_ps(xx2, xx3);
const __m128 x3i0_3r0_3i1_x3r1 = _mm_castsi128_ps(
_mm_shuffle_epi32(_mm_castps_si128(x3r0_3i0_3r1_x3i1),
_MM_SHUFFLE(2, 3, 0, 1)));
const __m128 x3_swapped = _mm_mul_ps(mm_swap_sign, x3i0_3r0_3i1_x3r1);
const __m128 x1_x3_add = _mm_add_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 x1_x3_sub = _mm_sub_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 xx10 = _mm_mul_ps(x1_x3_add, wk1rv);
const __m128 xx11 = _mm_mul_ps(wk1iv,
_mm_castsi128_ps(_mm_shuffle_epi32(_mm_castps_si128(x1_x3_add),
_MM_SHUFFLE(2, 3, 0, 1))));
const __m128 xx12 = _mm_add_ps(xx10, xx11);
const __m128 xx20 = _mm_mul_ps(x1_x3_sub, wk3rv);
const __m128 xx21 = _mm_mul_ps(wk3iv,
_mm_castsi128_ps(_mm_shuffle_epi32(_mm_castps_si128(x1_x3_sub),
_MM_SHUFFLE(2, 3, 0, 1))));
const __m128 xx22 = _mm_add_ps(xx20, xx21);
_mm_storel_epi64((__m128i*)&a[j0 + 0], _mm_castps_si128(xx));
_mm_storel_epi64((__m128i*)&a[j0 + 32],
_mm_shuffle_epi32(_mm_castps_si128(xx),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 16], _mm_castps_si128(xx4));
_mm_storel_epi64((__m128i*)&a[j0 + 48],
_mm_shuffle_epi32(_mm_castps_si128(xx4),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 8], _mm_castps_si128(xx12));
_mm_storel_epi64((__m128i*)&a[j0 + 40],
_mm_shuffle_epi32(_mm_castps_si128(xx12),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 24], _mm_castps_si128(xx22));
_mm_storel_epi64((__m128i*)&a[j0 + 56],
_mm_shuffle_epi32(_mm_castps_si128(xx22),
_MM_SHUFFLE(3, 2, 3, 2)));
}
}
}
static void rftfsub_128_SSE2(float *a) {
const float *c = rdft_w + 32;
int j1, j2, k1, k2;
float wkr, wki, xr, xi, yr, yi;
static const ALIGN16_BEG float ALIGN16_END k_half[4] =
{0.5f, 0.5f, 0.5f, 0.5f};
const __m128 mm_half = _mm_load_ps(k_half);
// Vectorized code (four at once).
// Note: the commented numbers are the indexes for the first iteration of the loop.
for (j1 = 1, j2 = 2; j2 + 7 < 64; j1 += 4, j2 += 8) {
// Load 'wk'.
const __m128 c_j1 = _mm_loadu_ps(&c[ j1]); // 1, 2, 3, 4,
const __m128 c_k1 = _mm_loadu_ps(&c[29 - j1]); // 28, 29, 30, 31,
const __m128 wkrt = _mm_sub_ps(mm_half, c_k1); // 28, 29, 30, 31,
const __m128 wkr_ =
_mm_shuffle_ps(wkrt, wkrt, _MM_SHUFFLE(0, 1, 2, 3)); // 31, 30, 29, 28,
const __m128 wki_ = c_j1; // 1, 2, 3, 4,
// Load and shuffle 'a'.
const __m128 a_j2_0 = _mm_loadu_ps(&a[0 + j2]); // 2, 3, 4, 5,
const __m128 a_j2_4 = _mm_loadu_ps(&a[4 + j2]); // 6, 7, 8, 9,
const __m128 a_k2_0 = _mm_loadu_ps(&a[122 - j2]); // 120, 121, 122, 123,
const __m128 a_k2_4 = _mm_loadu_ps(&a[126 - j2]); // 124, 125, 126, 127,
const __m128 a_j2_p0 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(2, 0, 2 ,0)); // 2, 4, 6, 8,
const __m128 a_j2_p1 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(3, 1, 3 ,1)); // 3, 5, 7, 9,
const __m128 a_k2_p0 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(0, 2, 0 ,2)); // 126, 124, 122, 120,
const __m128 a_k2_p1 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(1, 3, 1 ,3)); // 127, 125, 123, 121,
// Calculate 'x'.
const __m128 xr_ = _mm_sub_ps(a_j2_p0, a_k2_p0);
// 2-126, 4-124, 6-122, 8-120,
const __m128 xi_ = _mm_add_ps(a_j2_p1, a_k2_p1);
// 3-127, 5-125, 7-123, 9-121,
// Calculate product into 'y'.
// yr = wkr * xr - wki * xi;
// yi = wkr * xi + wki * xr;
const __m128 a_ = _mm_mul_ps(wkr_, xr_);
const __m128 b_ = _mm_mul_ps(wki_, xi_);
const __m128 c_ = _mm_mul_ps(wkr_, xi_);
const __m128 d_ = _mm_mul_ps(wki_, xr_);
const __m128 yr_ = _mm_sub_ps(a_, b_); // 2-126, 4-124, 6-122, 8-120,
const __m128 yi_ = _mm_add_ps(c_, d_); // 3-127, 5-125, 7-123, 9-121,
// Update 'a'.
// a[j2 + 0] -= yr;
// a[j2 + 1] -= yi;
// a[k2 + 0] += yr;
// a[k2 + 1] -= yi;
const __m128 a_j2_p0n = _mm_sub_ps(a_j2_p0, yr_); // 2, 4, 6, 8,
const __m128 a_j2_p1n = _mm_sub_ps(a_j2_p1, yi_); // 3, 5, 7, 9,
const __m128 a_k2_p0n = _mm_add_ps(a_k2_p0, yr_); // 126, 124, 122, 120,
const __m128 a_k2_p1n = _mm_sub_ps(a_k2_p1, yi_); // 127, 125, 123, 121,
// Shuffle in right order and store.
const __m128 a_j2_0n = _mm_unpacklo_ps(a_j2_p0n, a_j2_p1n);
// 2, 3, 4, 5,
const __m128 a_j2_4n = _mm_unpackhi_ps(a_j2_p0n, a_j2_p1n);
// 6, 7, 8, 9,
const __m128 a_k2_0nt = _mm_unpackhi_ps(a_k2_p0n, a_k2_p1n);
// 122, 123, 120, 121,
const __m128 a_k2_4nt = _mm_unpacklo_ps(a_k2_p0n, a_k2_p1n);
// 126, 127, 124, 125,
const __m128 a_k2_0n = _mm_shuffle_ps(a_k2_0nt, a_k2_0nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 120, 121, 122, 123,
const __m128 a_k2_4n = _mm_shuffle_ps(a_k2_4nt, a_k2_4nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 124, 125, 126, 127,
_mm_storeu_ps(&a[0 + j2], a_j2_0n);
_mm_storeu_ps(&a[4 + j2], a_j2_4n);
_mm_storeu_ps(&a[122 - j2], a_k2_0n);
_mm_storeu_ps(&a[126 - j2], a_k2_4n);
}
// Scalar code for the remaining items.
for (; j2 < 64; j1 += 1, j2 += 2) {
k2 = 128 - j2;
k1 = 32 - j1;
wkr = 0.5f - c[k1];
wki = c[j1];
xr = a[j2 + 0] - a[k2 + 0];
xi = a[j2 + 1] + a[k2 + 1];
yr = wkr * xr - wki * xi;
yi = wkr * xi + wki * xr;
a[j2 + 0] -= yr;
a[j2 + 1] -= yi;
a[k2 + 0] += yr;
a[k2 + 1] -= yi;
}
}
static void rftbsub_128_SSE2(float *a) {
const float *c = rdft_w + 32;
int j1, j2, k1, k2;
float wkr, wki, xr, xi, yr, yi;
static const ALIGN16_BEG float ALIGN16_END k_half[4] =
{0.5f, 0.5f, 0.5f, 0.5f};
const __m128 mm_half = _mm_load_ps(k_half);
a[1] = -a[1];
// Vectorized code (four at once).
// Note: the commented numbers are the indexes for the first iteration of the loop.
for (j1 = 1, j2 = 2; j2 + 7 < 64; j1 += 4, j2 += 8) {
// Load 'wk'.
const __m128 c_j1 = _mm_loadu_ps(&c[ j1]); // 1, 2, 3, 4,
const __m128 c_k1 = _mm_loadu_ps(&c[29 - j1]); // 28, 29, 30, 31,
const __m128 wkrt = _mm_sub_ps(mm_half, c_k1); // 28, 29, 30, 31,
const __m128 wkr_ =
_mm_shuffle_ps(wkrt, wkrt, _MM_SHUFFLE(0, 1, 2, 3)); // 31, 30, 29, 28,
const __m128 wki_ = c_j1; // 1, 2, 3, 4,
// Load and shuffle 'a'.
const __m128 a_j2_0 = _mm_loadu_ps(&a[0 + j2]); // 2, 3, 4, 5,
const __m128 a_j2_4 = _mm_loadu_ps(&a[4 + j2]); // 6, 7, 8, 9,
const __m128 a_k2_0 = _mm_loadu_ps(&a[122 - j2]); // 120, 121, 122, 123,
const __m128 a_k2_4 = _mm_loadu_ps(&a[126 - j2]); // 124, 125, 126, 127,
const __m128 a_j2_p0 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(2, 0, 2 ,0)); // 2, 4, 6, 8,
const __m128 a_j2_p1 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(3, 1, 3 ,1)); // 3, 5, 7, 9,
const __m128 a_k2_p0 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(0, 2, 0 ,2)); // 126, 124, 122, 120,
const __m128 a_k2_p1 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(1, 3, 1 ,3)); // 127, 125, 123, 121,
// Calculate 'x'.
const __m128 xr_ = _mm_sub_ps(a_j2_p0, a_k2_p0);
// 2-126, 4-124, 6-122, 8-120,
const __m128 xi_ = _mm_add_ps(a_j2_p1, a_k2_p1);
// 3-127, 5-125, 7-123, 9-121,
// Calculate product into 'y'.
// yr = wkr * xr + wki * xi;
// yi = wkr * xi - wki * xr;
const __m128 a_ = _mm_mul_ps(wkr_, xr_);
const __m128 b_ = _mm_mul_ps(wki_, xi_);
const __m128 c_ = _mm_mul_ps(wkr_, xi_);
const __m128 d_ = _mm_mul_ps(wki_, xr_);
const __m128 yr_ = _mm_add_ps(a_, b_); // 2-126, 4-124, 6-122, 8-120,
const __m128 yi_ = _mm_sub_ps(c_, d_); // 3-127, 5-125, 7-123, 9-121,
// Update 'a'.
// a[j2 + 0] = a[j2 + 0] - yr;
// a[j2 + 1] = yi - a[j2 + 1];
// a[k2 + 0] = yr + a[k2 + 0];
// a[k2 + 1] = yi - a[k2 + 1];
const __m128 a_j2_p0n = _mm_sub_ps(a_j2_p0, yr_); // 2, 4, 6, 8,
const __m128 a_j2_p1n = _mm_sub_ps(yi_, a_j2_p1); // 3, 5, 7, 9,
const __m128 a_k2_p0n = _mm_add_ps(a_k2_p0, yr_); // 126, 124, 122, 120,
const __m128 a_k2_p1n = _mm_sub_ps(yi_, a_k2_p1); // 127, 125, 123, 121,
// Shuffle in right order and store.
const __m128 a_j2_0n = _mm_unpacklo_ps(a_j2_p0n, a_j2_p1n);
// 2, 3, 4, 5,
const __m128 a_j2_4n = _mm_unpackhi_ps(a_j2_p0n, a_j2_p1n);
// 6, 7, 8, 9,
const __m128 a_k2_0nt = _mm_unpackhi_ps(a_k2_p0n, a_k2_p1n);
// 122, 123, 120, 121,
const __m128 a_k2_4nt = _mm_unpacklo_ps(a_k2_p0n, a_k2_p1n);
// 126, 127, 124, 125,
const __m128 a_k2_0n = _mm_shuffle_ps(a_k2_0nt, a_k2_0nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 120, 121, 122, 123,
const __m128 a_k2_4n = _mm_shuffle_ps(a_k2_4nt, a_k2_4nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 124, 125, 126, 127,
_mm_storeu_ps(&a[0 + j2], a_j2_0n);
_mm_storeu_ps(&a[4 + j2], a_j2_4n);
_mm_storeu_ps(&a[122 - j2], a_k2_0n);
_mm_storeu_ps(&a[126 - j2], a_k2_4n);
}
// Scalar code for the remaining items.
for (; j2 < 64; j1 += 1, j2 += 2) {
k2 = 128 - j2;
k1 = 32 - j1;
wkr = 0.5f - c[k1];
wki = c[j1];
xr = a[j2 + 0] - a[k2 + 0];
xi = a[j2 + 1] + a[k2 + 1];
yr = wkr * xr + wki * xi;
yi = wkr * xi - wki * xr;
a[j2 + 0] = a[j2 + 0] - yr;
a[j2 + 1] = yi - a[j2 + 1];
a[k2 + 0] = yr + a[k2 + 0];
a[k2 + 1] = yi - a[k2 + 1];
}
a[65] = -a[65];
}
void aec_rdft_init_sse2(void) {
cft1st_128 = cft1st_128_SSE2;
cftmdl_128 = cftmdl_128_SSE2;
rftfsub_128 = rftfsub_128_SSE2;
rftbsub_128 = rftbsub_128_SSE2;
}
#endif // WEBRTC_USE_SSE2


@ -1,901 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* Contains the API functions for the AEC.
*/
#include "echo_cancellation.h"
#include <math.h>
#ifdef AEC_DEBUG
#include <stdio.h>
#endif
#include <stdlib.h>
#include <string.h>
#include "aec_core.h"
#include "resampler.h"
#include "ring_buffer.h"
#define BUF_SIZE_FRAMES 50 // buffer size (frames)
// Maximum length of resampled signal. Must be an integer multiple of frames
// (ceil(1/(1 + MIN_SKEW)*2) + 1)*FRAME_LEN
// The factor of 2 handles wb, and the + 1 is as a safety margin
#define MAX_RESAMP_LEN (5 * FRAME_LEN)
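// Worked out for the lowest skew estimate used below (minSkewEst = -0.5f):
// (ceil(1 / (1 - 0.5) * 2) + 1) * FRAME_LEN = (4 + 1) * FRAME_LEN = 5 * FRAME_LEN.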
static const int bufSizeSamp = BUF_SIZE_FRAMES * FRAME_LEN; // buffer size (samples)
static const int sampMsNb = 8; // samples per ms in nb
// Target suppression levels for nlp modes
// log{0.001, 0.00001, 0.00000001}
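// (natural logarithms: ln(1e-3) ~= -6.9, ln(1e-5) ~= -11.5, ln(1e-8) ~= -18.4)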
static const float targetSupp[3] = {-6.9f, -11.5f, -18.4f};
static const float minOverDrive[3] = {1.0f, 2.0f, 5.0f};
static const int initCheck = 42;
typedef struct {
int delayCtr;
int sampFreq;
int splitSampFreq;
int scSampFreq;
float sampFactor; // scSampRate / sampFreq
short nlpMode;
short autoOnOff;
short activity;
short skewMode;
short bufSizeStart;
//short bufResetCtr; // counts number of noncausal frames
int knownDelay;
// Stores the last frame added to the farend buffer
short farendOld[2][FRAME_LEN];
short initFlag; // indicates if AEC has been initialized
// Variables used for averaging far end buffer size
short counter;
short sum;
short firstVal;
short checkBufSizeCtr;
// Variables used for delay shifts
short msInSndCardBuf;
short filtDelay;
int timeForDelayChange;
int ECstartup;
int checkBuffSize;
int delayChange;
short lastDelayDiff;
#ifdef AEC_DEBUG
FILE *bufFile;
FILE *delayFile;
FILE *skewFile;
FILE *preCompFile;
FILE *postCompFile;
#endif // AEC_DEBUG
// Structures
void *farendBuf;
void *resampler;
int skewFrCtr;
int resample; // if the skew is small enough we don't resample
int highSkewCtr;
float skew;
int lastError;
aec_t *aec;
} aecpc_t;
// Estimates delay to set the position of the farend buffer read pointer
// (controlled by knownDelay)
static int EstBufDelay(aecpc_t *aecInst, short msInSndCardBuf);
// Stuffs the farend buffer if the estimated delay is too large
static int DelayComp(aecpc_t *aecInst);
WebRtc_Word32 WebRtcAec_Create(void **aecInst)
{
aecpc_t *aecpc;
if (aecInst == NULL) {
return -1;
}
aecpc = malloc(sizeof(aecpc_t));
*aecInst = aecpc;
if (aecpc == NULL) {
return -1;
}
if (WebRtcAec_CreateAec(&aecpc->aec) == -1) {
WebRtcAec_Free(aecpc);
aecpc = NULL;
return -1;
}
if (WebRtcApm_CreateBuffer(&aecpc->farendBuf, bufSizeSamp) == -1) {
WebRtcAec_Free(aecpc);
aecpc = NULL;
return -1;
}
if (WebRtcAec_CreateResampler(&aecpc->resampler) == -1) {
WebRtcAec_Free(aecpc);
aecpc = NULL;
return -1;
}
aecpc->initFlag = 0;
aecpc->lastError = 0;
#ifdef AEC_DEBUG
aecpc->aec->farFile = fopen("aecFar.pcm","wb");
aecpc->aec->nearFile = fopen("aecNear.pcm","wb");
aecpc->aec->outFile = fopen("aecOut.pcm","wb");
aecpc->aec->outLpFile = fopen("aecOutLp.pcm","wb");
aecpc->bufFile = fopen("aecBuf.dat", "wb");
aecpc->skewFile = fopen("aecSkew.dat", "wb");
aecpc->delayFile = fopen("aecDelay.dat", "wb");
aecpc->preCompFile = fopen("preComp.pcm", "wb");
aecpc->postCompFile = fopen("postComp.pcm", "wb");
#endif // AEC_DEBUG
return 0;
}
WebRtc_Word32 WebRtcAec_Free(void *aecInst)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
#ifdef AEC_DEBUG
fclose(aecpc->aec->farFile);
fclose(aecpc->aec->nearFile);
fclose(aecpc->aec->outFile);
fclose(aecpc->aec->outLpFile);
fclose(aecpc->bufFile);
fclose(aecpc->skewFile);
fclose(aecpc->delayFile);
fclose(aecpc->preCompFile);
fclose(aecpc->postCompFile);
#endif // AEC_DEBUG
WebRtcAec_FreeAec(aecpc->aec);
WebRtcApm_FreeBuffer(aecpc->farendBuf);
WebRtcAec_FreeResampler(aecpc->resampler);
free(aecpc);
return 0;
}
WebRtc_Word32 WebRtcAec_Init(void *aecInst, WebRtc_Word32 sampFreq, WebRtc_Word32 scSampFreq)
{
aecpc_t *aecpc = aecInst;
AecConfig aecConfig;
if (aecpc == NULL) {
return -1;
}
if (sampFreq != 8000 && sampFreq != 16000 && sampFreq != 32000) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->sampFreq = sampFreq;
if (scSampFreq < 1 || scSampFreq > 96000) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->scSampFreq = scSampFreq;
// Initialize echo canceller core
if (WebRtcAec_InitAec(aecpc->aec, aecpc->sampFreq) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
// Initialize farend buffer
if (WebRtcApm_InitBuffer(aecpc->farendBuf) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
if (WebRtcAec_InitResampler(aecpc->resampler, aecpc->scSampFreq) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
aecpc->initFlag = initCheck; // indicates that initialization has been done
if (aecpc->sampFreq == 32000) {
aecpc->splitSampFreq = 16000;
}
else {
aecpc->splitSampFreq = sampFreq;
}
aecpc->skewFrCtr = 0;
aecpc->activity = 0;
aecpc->delayChange = 1;
aecpc->delayCtr = 0;
aecpc->sum = 0;
aecpc->counter = 0;
aecpc->checkBuffSize = 1;
aecpc->firstVal = 0;
aecpc->ECstartup = 1;
aecpc->bufSizeStart = 0;
aecpc->checkBufSizeCtr = 0;
aecpc->filtDelay = 0;
aecpc->timeForDelayChange = 0;
aecpc->knownDelay = 0;
aecpc->lastDelayDiff = 0;
aecpc->skew = 0;
aecpc->resample = kAecFalse;
aecpc->highSkewCtr = 0;
aecpc->sampFactor = (aecpc->scSampFreq * 1.0f) / aecpc->splitSampFreq;
memset(&aecpc->farendOld[0][0], 0, 160);
// Default settings.
aecConfig.nlpMode = kAecNlpModerate;
aecConfig.skewMode = kAecFalse;
aecConfig.metricsMode = kAecFalse;
aecConfig.delay_logging = kAecFalse;
if (WebRtcAec_set_config(aecpc, aecConfig) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
return 0;
}
// only buffer L band for farend
WebRtc_Word32 WebRtcAec_BufferFarend(void *aecInst, const WebRtc_Word16 *farend,
WebRtc_Word16 nrOfSamples)
{
aecpc_t *aecpc = aecInst;
WebRtc_Word32 retVal = 0;
short newNrOfSamples;
short newFarend[MAX_RESAMP_LEN];
float skew;
if (aecpc == NULL) {
return -1;
}
if (farend == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
// number of samples == 160 for SWB input
if (nrOfSamples != 80 && nrOfSamples != 160) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
skew = aecpc->skew;
// TODO: Is this really a good idea?
if (!aecpc->ECstartup) {
DelayComp(aecpc);
}
if (aecpc->skewMode == kAecTrue && aecpc->resample == kAecTrue) {
// Resample and get a new number of samples
newNrOfSamples = WebRtcAec_ResampleLinear(aecpc->resampler,
farend,
nrOfSamples,
skew,
newFarend);
WebRtcApm_WriteBuffer(aecpc->farendBuf, newFarend, newNrOfSamples);
#ifdef AEC_DEBUG
fwrite(farend, 2, nrOfSamples, aecpc->preCompFile);
fwrite(newFarend, 2, newNrOfSamples, aecpc->postCompFile);
#endif
}
else {
WebRtcApm_WriteBuffer(aecpc->farendBuf, farend, nrOfSamples);
}
return retVal;
}
WebRtc_Word32 WebRtcAec_Process(void *aecInst, const WebRtc_Word16 *nearend,
const WebRtc_Word16 *nearendH, WebRtc_Word16 *out, WebRtc_Word16 *outH,
WebRtc_Word16 nrOfSamples, WebRtc_Word16 msInSndCardBuf, WebRtc_Word32 skew)
{
aecpc_t *aecpc = aecInst;
WebRtc_Word32 retVal = 0;
short i;
short farend[FRAME_LEN];
short nmbrOfFilledBuffers;
short nBlocks10ms;
short nFrames;
#ifdef AEC_DEBUG
short msInAECBuf;
#endif
// Limit resampling to doubling/halving of signal
const float minSkewEst = -0.5f;
const float maxSkewEst = 1.0f;
if (aecpc == NULL) {
return -1;
}
if (nearend == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (out == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
// number of samples == 160 for SWB input
if (nrOfSamples != 80 && nrOfSamples != 160) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
// Check for valid pointers based on sampling rate
if (aecpc->sampFreq == 32000 && nearendH == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (msInSndCardBuf < 0) {
msInSndCardBuf = 0;
aecpc->lastError = AEC_BAD_PARAMETER_WARNING;
retVal = -1;
}
else if (msInSndCardBuf > 500) {
msInSndCardBuf = 500;
aecpc->lastError = AEC_BAD_PARAMETER_WARNING;
retVal = -1;
}
msInSndCardBuf += 10;
aecpc->msInSndCardBuf = msInSndCardBuf;
if (aecpc->skewMode == kAecTrue) {
if (aecpc->skewFrCtr < 25) {
aecpc->skewFrCtr++;
}
else {
retVal = WebRtcAec_GetSkew(aecpc->resampler, skew, &aecpc->skew);
if (retVal == -1) {
aecpc->skew = 0;
aecpc->lastError = AEC_BAD_PARAMETER_WARNING;
}
aecpc->skew /= aecpc->sampFactor*nrOfSamples;
if (aecpc->skew < 1.0e-3 && aecpc->skew > -1.0e-3) {
aecpc->resample = kAecFalse;
}
else {
aecpc->resample = kAecTrue;
}
if (aecpc->skew < minSkewEst) {
aecpc->skew = minSkewEst;
}
else if (aecpc->skew > maxSkewEst) {
aecpc->skew = maxSkewEst;
}
#ifdef AEC_DEBUG
fwrite(&aecpc->skew, sizeof(aecpc->skew), 1, aecpc->skewFile);
#endif
}
}
nFrames = nrOfSamples / FRAME_LEN;
nBlocks10ms = nFrames / aecpc->aec->mult;
if (aecpc->ECstartup) {
if (nearend != out) {
// Only needed if they don't already point to the same place.
memcpy(out, nearend, sizeof(short) * nrOfSamples);
}
nmbrOfFilledBuffers = WebRtcApm_get_buffer_size(aecpc->farendBuf) / FRAME_LEN;
// The AEC is in the start up mode
// AEC is disabled until the soundcard buffer and farend buffers are OK
// Mechanism to ensure that the soundcard buffer is reasonably stable.
if (aecpc->checkBuffSize) {
aecpc->checkBufSizeCtr++;
// Before we fill up the far end buffer we require the amount of data on the
// sound card to be stable (+/-8 ms) compared to the first value. This
// comparison is made during the following 4 consecutive frames. If it seems
// to be stable then we start to fill up the far end buffer.
if (aecpc->counter == 0) {
aecpc->firstVal = aecpc->msInSndCardBuf;
aecpc->sum = 0;
}
if (abs(aecpc->firstVal - aecpc->msInSndCardBuf) <
WEBRTC_SPL_MAX(0.2 * aecpc->msInSndCardBuf, sampMsNb)) {
aecpc->sum += aecpc->msInSndCardBuf;
aecpc->counter++;
}
else {
aecpc->counter = 0;
}
if (aecpc->counter*nBlocks10ms >= 6) {
// The farend buffer size is determined in blocks of 80 samples
// Use 75% of the average value of the soundcard buffer
aecpc->bufSizeStart = WEBRTC_SPL_MIN((int) (0.75 * (aecpc->sum *
aecpc->aec->mult) / (aecpc->counter * 10)), BUF_SIZE_FRAMES);
// buffersize has now been determined
aecpc->checkBuffSize = 0;
}
if (aecpc->checkBufSizeCtr * nBlocks10ms > 50) {
// for really bad sound cards, don't disable the echo canceller for more than 0.5 sec
aecpc->bufSizeStart = WEBRTC_SPL_MIN((int) (0.75 * (aecpc->msInSndCardBuf *
aecpc->aec->mult) / 10), BUF_SIZE_FRAMES);
aecpc->checkBuffSize = 0;
}
}
// if checkBuffSize changed in the if-statement above
if (!aecpc->checkBuffSize) {
// soundcard buffer is now reasonably stable
// When the far end buffer is filled with approximately the same amount of
// data as the amount on the sound card we end the start up phase and start
// to cancel echoes.
if (nmbrOfFilledBuffers == aecpc->bufSizeStart) {
aecpc->ECstartup = 0; // Enable the AEC
}
else if (nmbrOfFilledBuffers > aecpc->bufSizeStart) {
WebRtcApm_FlushBuffer(aecpc->farendBuf, WebRtcApm_get_buffer_size(aecpc->farendBuf) -
aecpc->bufSizeStart * FRAME_LEN);
aecpc->ECstartup = 0;
}
}
}
else {
// AEC is enabled
// Note only 1 block supported for nb and 2 blocks for wb
for (i = 0; i < nFrames; i++) {
nmbrOfFilledBuffers = WebRtcApm_get_buffer_size(aecpc->farendBuf) / FRAME_LEN;
// Check that there is data in the far end buffer
if (nmbrOfFilledBuffers > 0) {
// Get the next 80 samples from the farend buffer
WebRtcApm_ReadBuffer(aecpc->farendBuf, farend, FRAME_LEN);
// Always store the last frame for use when we run out of data
memcpy(&(aecpc->farendOld[i][0]), farend, FRAME_LEN * sizeof(short));
}
else {
// We have no data so we use the last played frame
memcpy(farend, &(aecpc->farendOld[i][0]), FRAME_LEN * sizeof(short));
}
// Call buffer delay estimator when all data is extracted,
// i.e. i = 0 for NB and i = 1 for WB or SWB
if ((i == 0 && aecpc->splitSampFreq == 8000) ||
(i == 1 && (aecpc->splitSampFreq == 16000))) {
EstBufDelay(aecpc, aecpc->msInSndCardBuf);
}
// Call the AEC
WebRtcAec_ProcessFrame(aecpc->aec, farend, &nearend[FRAME_LEN * i], &nearendH[FRAME_LEN * i],
&out[FRAME_LEN * i], &outH[FRAME_LEN * i], aecpc->knownDelay);
}
}
#ifdef AEC_DEBUG
msInAECBuf = WebRtcApm_get_buffer_size(aecpc->farendBuf) / (sampMsNb*aecpc->aec->mult);
fwrite(&msInAECBuf, 2, 1, aecpc->bufFile);
fwrite(&(aecpc->knownDelay), sizeof(aecpc->knownDelay), 1, aecpc->delayFile);
#endif
return retVal;
}
WebRtc_Word32 WebRtcAec_set_config(void *aecInst, AecConfig config)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
if (config.skewMode != kAecFalse && config.skewMode != kAecTrue) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->skewMode = config.skewMode;
if (config.nlpMode != kAecNlpConservative && config.nlpMode !=
kAecNlpModerate && config.nlpMode != kAecNlpAggressive) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->nlpMode = config.nlpMode;
aecpc->aec->targetSupp = targetSupp[aecpc->nlpMode];
aecpc->aec->minOverDrive = minOverDrive[aecpc->nlpMode];
if (config.metricsMode != kAecFalse && config.metricsMode != kAecTrue) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->aec->metricsMode = config.metricsMode;
if (aecpc->aec->metricsMode == kAecTrue) {
WebRtcAec_InitMetrics(aecpc->aec);
}
if (config.delay_logging != kAecFalse && config.delay_logging != kAecTrue) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->aec->delay_logging_enabled = config.delay_logging;
if (aecpc->aec->delay_logging_enabled == kAecTrue) {
memset(aecpc->aec->delay_histogram, 0, sizeof(aecpc->aec->delay_histogram));
}
return 0;
}
WebRtc_Word32 WebRtcAec_get_config(void *aecInst, AecConfig *config)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (config == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
config->nlpMode = aecpc->nlpMode;
config->skewMode = aecpc->skewMode;
config->metricsMode = aecpc->aec->metricsMode;
config->delay_logging = aecpc->aec->delay_logging_enabled;
return 0;
}
WebRtc_Word32 WebRtcAec_get_echo_status(void *aecInst, WebRtc_Word16 *status)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (status == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
*status = aecpc->aec->echoState;
return 0;
}
WebRtc_Word32 WebRtcAec_GetMetrics(void *aecInst, AecMetrics *metrics)
{
const float upweight = 0.7f;
float dtmp;
short stmp;
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (metrics == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
// ERL
metrics->erl.instant = (short) aecpc->aec->erl.instant;
if ((aecpc->aec->erl.himean > offsetLevel) && (aecpc->aec->erl.average > offsetLevel)) {
// Use a mix between regular average and upper part average
dtmp = upweight * aecpc->aec->erl.himean + (1 - upweight) * aecpc->aec->erl.average;
metrics->erl.average = (short) dtmp;
}
else {
metrics->erl.average = offsetLevel;
}
metrics->erl.max = (short) aecpc->aec->erl.max;
if (aecpc->aec->erl.min < (offsetLevel * (-1))) {
metrics->erl.min = (short) aecpc->aec->erl.min;
}
else {
metrics->erl.min = offsetLevel;
}
// ERLE
metrics->erle.instant = (short) aecpc->aec->erle.instant;
if ((aecpc->aec->erle.himean > offsetLevel) && (aecpc->aec->erle.average > offsetLevel)) {
// Use a mix between regular average and upper part average
dtmp = upweight * aecpc->aec->erle.himean + (1 - upweight) * aecpc->aec->erle.average;
metrics->erle.average = (short) dtmp;
}
else {
metrics->erle.average = offsetLevel;
}
metrics->erle.max = (short) aecpc->aec->erle.max;
if (aecpc->aec->erle.min < (offsetLevel * (-1))) {
metrics->erle.min = (short) aecpc->aec->erle.min;
} else {
metrics->erle.min = offsetLevel;
}
// RERL
if ((metrics->erl.average > offsetLevel) && (metrics->erle.average > offsetLevel)) {
stmp = metrics->erl.average + metrics->erle.average;
}
else {
stmp = offsetLevel;
}
metrics->rerl.average = stmp;
// No other statistics needed, but returned for completeness
metrics->rerl.instant = stmp;
metrics->rerl.max = stmp;
metrics->rerl.min = stmp;
// A_NLP
metrics->aNlp.instant = (short) aecpc->aec->aNlp.instant;
if ((aecpc->aec->aNlp.himean > offsetLevel) && (aecpc->aec->aNlp.average > offsetLevel)) {
// Use a mix between regular average and upper part average
dtmp = upweight * aecpc->aec->aNlp.himean + (1 - upweight) * aecpc->aec->aNlp.average;
metrics->aNlp.average = (short) dtmp;
}
else {
metrics->aNlp.average = offsetLevel;
}
metrics->aNlp.max = (short) aecpc->aec->aNlp.max;
if (aecpc->aec->aNlp.min < (offsetLevel * (-1))) {
metrics->aNlp.min = (short) aecpc->aec->aNlp.min;
}
else {
metrics->aNlp.min = offsetLevel;
}
return 0;
}
int WebRtcAec_GetDelayMetrics(void* handle, int* median, int* std) {
aecpc_t* self = handle;
int i = 0;
int delay_values = 0;
int num_delay_values = 0;
int my_median = 0;
const int kMsPerBlock = (PART_LEN * 1000) / self->splitSampFreq;
float l1_norm = 0;
if (self == NULL) {
return -1;
}
if (median == NULL) {
self->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (std == NULL) {
self->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (self->initFlag != initCheck) {
self->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
if (self->aec->delay_logging_enabled == 0) {
// Logging disabled
self->lastError = AEC_UNSUPPORTED_FUNCTION_ERROR;
return -1;
}
// Get number of delay values since last update
for (i = 0; i < kMaxDelay; i++) {
num_delay_values += self->aec->delay_histogram[i];
}
if (num_delay_values == 0) {
// We have no new delay value data
*median = -1;
*std = -1;
return 0;
}
delay_values = num_delay_values >> 1; // Start value for median count down
// Get median of delay values since last update
for (i = 0; i < kMaxDelay; i++) {
delay_values -= self->aec->delay_histogram[i];
if (delay_values < 0) {
my_median = i;
break;
}
}
*median = my_median * kMsPerBlock;
// Calculate the L1 norm, with median value as central moment
for (i = 0; i < kMaxDelay; i++) {
l1_norm += (float) (fabs(i - my_median) * self->aec->delay_histogram[i]);
}
*std = (int) (l1_norm / (float) num_delay_values + 0.5f) * kMsPerBlock;
// Reset histogram
memset(self->aec->delay_histogram, 0, sizeof(self->aec->delay_histogram));
return 0;
}
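The statistics above reduce to a single pass over the delay histogram: the median is the first bin where the cumulative count crosses half the total, and the value reported as the standard deviation is really the mean absolute (L1) deviation from that median, both scaled by the block duration. A self-contained sketch of that computation follows; the helper name and parameters are hypothetical and carry no tie to the AEC instance.

#include <stdlib.h>

/* Hypothetical standalone helper mirroring the computation above: `histogram`
 * holds `num_bins` block counts and `ms_per_block` plays the role of
 * kMsPerBlock. Returns 0 on success, -1 if the histogram is empty. */
static int DelayStatsFromHistogram(const int* histogram, int num_bins,
                                   int ms_per_block, int* median_ms,
                                   int* spread_ms) {
  int i;
  int total = 0;
  int count_down;
  int median_bin = 0;
  float l1_norm = 0.0f;
  for (i = 0; i < num_bins; i++) {
    total += histogram[i];
  }
  if (total == 0) {
    return -1;
  }
  count_down = total >> 1;  /* Count down from half the samples. */
  for (i = 0; i < num_bins; i++) {
    count_down -= histogram[i];
    if (count_down < 0) {
      median_bin = i;
      break;
    }
  }
  for (i = 0; i < num_bins; i++) {
    l1_norm += (float)(abs(i - median_bin) * histogram[i]);
  }
  *median_ms = median_bin * ms_per_block;
  *spread_ms = (int)(l1_norm / (float)total + 0.5f) * ms_per_block;
  return 0;
}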
WebRtc_Word32 WebRtcAec_get_version(WebRtc_Word8 *versionStr, WebRtc_Word16 len)
{
const char version[] = "AEC 2.5.0";
const short versionLen = (short)strlen(version) + 1; // +1 for null-termination
if (versionStr == NULL) {
return -1;
}
if (versionLen > len) {
return -1;
}
strncpy(versionStr, version, versionLen);
return 0;
}
WebRtc_Word32 WebRtcAec_get_error_code(void *aecInst)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
return aecpc->lastError;
}
static int EstBufDelay(aecpc_t *aecpc, short msInSndCardBuf)
{
short delayNew, nSampFar, nSampSndCard;
short diff;
nSampFar = WebRtcApm_get_buffer_size(aecpc->farendBuf);
nSampSndCard = msInSndCardBuf * sampMsNb * aecpc->aec->mult;
delayNew = nSampSndCard - nSampFar;
// Account for resampling frame delay
if (aecpc->skewMode == kAecTrue && aecpc->resample == kAecTrue) {
delayNew -= kResamplingDelay;
}
if (delayNew < FRAME_LEN) {
WebRtcApm_FlushBuffer(aecpc->farendBuf, FRAME_LEN);
delayNew += FRAME_LEN;
}
aecpc->filtDelay = WEBRTC_SPL_MAX(0, (short)(0.8*aecpc->filtDelay + 0.2*delayNew));
diff = aecpc->filtDelay - aecpc->knownDelay;
if (diff > 224) {
if (aecpc->lastDelayDiff < 96) {
aecpc->timeForDelayChange = 0;
}
else {
aecpc->timeForDelayChange++;
}
}
else if (diff < 96 && aecpc->knownDelay > 0) {
if (aecpc->lastDelayDiff > 224) {
aecpc->timeForDelayChange = 0;
}
else {
aecpc->timeForDelayChange++;
}
}
else {
aecpc->timeForDelayChange = 0;
}
aecpc->lastDelayDiff = diff;
if (aecpc->timeForDelayChange > 25) {
aecpc->knownDelay = WEBRTC_SPL_MAX((int)aecpc->filtDelay - 160, 0);
}
return 0;
}
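To make the smoothing above concrete (hypothetical numbers): with a previous filtDelay of 100 samples and a new estimate delayNew of 200 samples, the update yields 0.8 * 100 + 0.2 * 200 = 120, so abrupt jumps in the reported sound-card delay are absorbed over several frames; knownDelay itself only moves once the 96/224-sample hysteresis band has been exceeded for more than 25 consecutive calls.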
static int DelayComp(aecpc_t *aecpc)
{
int nSampFar, nSampSndCard, delayNew, nSampAdd;
const int maxStuffSamp = 10 * FRAME_LEN;
nSampFar = WebRtcApm_get_buffer_size(aecpc->farendBuf);
nSampSndCard = aecpc->msInSndCardBuf * sampMsNb * aecpc->aec->mult;
delayNew = nSampSndCard - nSampFar;
// Account for resampling frame delay
if (aecpc->skewMode == kAecTrue && aecpc->resample == kAecTrue) {
delayNew -= kResamplingDelay;
}
if (delayNew > FAR_BUF_LEN - FRAME_LEN*aecpc->aec->mult) {
// The difference of the buffersizes is larger than the maximum
// allowed known delay. Compensate by stuffing the buffer.
nSampAdd = (int)(WEBRTC_SPL_MAX((int)(0.5 * nSampSndCard - nSampFar),
FRAME_LEN));
nSampAdd = WEBRTC_SPL_MIN(nSampAdd, maxStuffSamp);
WebRtcApm_StuffBuffer(aecpc->farendBuf, nSampAdd);
aecpc->delayChange = 1; // the delay needs to be updated
}
return 0;
}


@@ -1,278 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_INTERFACE_ECHO_CANCELLATION_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_INTERFACE_ECHO_CANCELLATION_H_
#include "typedefs.h"
// Errors
#define AEC_UNSPECIFIED_ERROR 12000
#define AEC_UNSUPPORTED_FUNCTION_ERROR 12001
#define AEC_UNINITIALIZED_ERROR 12002
#define AEC_NULL_POINTER_ERROR 12003
#define AEC_BAD_PARAMETER_ERROR 12004
// Warnings
#define AEC_BAD_PARAMETER_WARNING 12050
enum {
kAecNlpConservative = 0,
kAecNlpModerate,
kAecNlpAggressive
};
enum {
kAecFalse = 0,
kAecTrue
};
typedef struct {
WebRtc_Word16 nlpMode; // default kAecNlpModerate
WebRtc_Word16 skewMode; // default kAecFalse
WebRtc_Word16 metricsMode; // default kAecFalse
int delay_logging; // default kAecFalse
//float realSkew;
} AecConfig;
typedef struct {
WebRtc_Word16 instant;
WebRtc_Word16 average;
WebRtc_Word16 max;
WebRtc_Word16 min;
} AecLevel;
typedef struct {
AecLevel rerl;
AecLevel erl;
AecLevel erle;
AecLevel aNlp;
} AecMetrics;
#ifdef __cplusplus
extern "C" {
#endif
/*
* Allocates the memory needed by the AEC. The memory needs to be initialized
* separately using the WebRtcAec_Init() function.
*
* Inputs Description
* -------------------------------------------------------------------
* void **aecInst Pointer to the AEC instance to be created
* and initialized
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Create(void **aecInst);
/*
* This function releases the memory allocated by WebRtcAec_Create().
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Free(void *aecInst);
/*
* Initializes an AEC instance.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* WebRtc_Word32 sampFreq Sampling frequency of data
* WebRtc_Word32 scSampFreq Soundcard sampling frequency
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Init(void *aecInst,
WebRtc_Word32 sampFreq,
WebRtc_Word32 scSampFreq);
/*
* Inserts an 80 or 160 sample block of data into the farend buffer.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* WebRtc_Word16 *farend In buffer containing one frame of
* farend signal for L band
* WebRtc_Word16 nrOfSamples Number of samples in farend buffer
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_BufferFarend(void *aecInst,
const WebRtc_Word16 *farend,
WebRtc_Word16 nrOfSamples);
/*
* Runs the echo canceller on an 80 or 160 sample blocks of data.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* WebRtc_Word16 *nearend In buffer containing one frame of
* nearend+echo signal for L band
* WebRtc_Word16 *nearendH In buffer containing one frame of
* nearend+echo signal for H band
* WebRtc_Word16 nrOfSamples Number of samples in nearend buffer
* WebRtc_Word16 msInSndCardBuf Delay estimate for sound card and
* system buffers
* WebRtc_Word16 skew Difference between number of samples played
* and recorded at the soundcard (for clock skew
* compensation)
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word16 *out Out buffer, one frame of processed nearend
* for L band
* WebRtc_Word16 *outH Out buffer, one frame of processed nearend
* for H band
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Process(void *aecInst,
const WebRtc_Word16 *nearend,
const WebRtc_Word16 *nearendH,
WebRtc_Word16 *out,
WebRtc_Word16 *outH,
WebRtc_Word16 nrOfSamples,
WebRtc_Word16 msInSndCardBuf,
WebRtc_Word32 skew);
/*
* This function enables the user to set certain parameters on-the-fly.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* AecConfig config Config instance that contains all
* properties to be set
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_set_config(void *aecInst, AecConfig config);
/*
* Gets the on-the-fly parameters.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* AecConfig *config Pointer to the config instance that
* all properties will be written to
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_get_config(void *aecInst, AecConfig *config);
/*
* Gets the current echo status of the nearend signal.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word16 *status 0: Almost certainly nearend single-talk
* 1: Might not be nearend single-talk
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_get_echo_status(void *aecInst, WebRtc_Word16 *status);
/*
* Gets the current echo metrics for the session.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* AecMetrics *metrics Struct which will be filled out with the
* current echo metrics.
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_GetMetrics(void *aecInst, AecMetrics *metrics);
/*
* Gets the current delay metrics for the session.
*
* Inputs Description
* -------------------------------------------------------------------
* void* handle Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* int* median Delay median value.
* int* std Delay standard deviation.
*
* int return 0: OK
* -1: error
*/
int WebRtcAec_GetDelayMetrics(void* handle, int* median, int* std);
/*
* Gets the last error code.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 12000-12050: error code
*/
WebRtc_Word32 WebRtcAec_get_error_code(void *aecInst);
/*
* Gets a version string.
*
* Inputs Description
* -------------------------------------------------------------------
* char *versionStr Pointer to a string array
* WebRtc_Word16 len The maximum length of the string
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word8 *versionStr Pointer to a string array
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_get_version(WebRtc_Word8 *versionStr, WebRtc_Word16 len);
#ifdef __cplusplus
}
#endif
#endif /* WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_INTERFACE_ECHO_CANCELLATION_H_ */
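A minimal usage sketch of the API declared above (this header is removed by the commit, so the sketch is illustrative only): the caller creates and initializes an instance, keeps far-end audio buffered ahead of playback, and then processes each near-end frame. Error handling is omitted; the NULL high-band pointers assume a sample rate below 32 kHz; `run_aec_sketch` and its parameters are hypothetical.

#include "echo_cancellation.h"

void run_aec_sketch(const WebRtc_Word16* far_frame,
                    const WebRtc_Word16* near_frame,
                    WebRtc_Word16* out_frame,
                    WebRtc_Word16 ms_in_sndcard_buf) {
  void* aec = NULL;
  if (WebRtcAec_Create(&aec) != 0) {
    return;
  }
  /* 16 kHz capture with a 16 kHz sound card. */
  if (WebRtcAec_Init(aec, 16000, 16000) == 0) {
    /* One 10 ms frame at 16 kHz is 160 samples. */
    WebRtcAec_BufferFarend(aec, far_frame, 160);
    WebRtcAec_Process(aec, near_frame, NULL, out_frame, NULL,
                      160, ms_in_sndcard_buf, 0);
  }
  WebRtcAec_Free(aec);
}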


@@ -1,233 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/* Resamples a signal to an arbitrary rate. Used by the AEC to compensate for clock
* skew by resampling the farend signal.
*/
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include "resampler.h"
#include "aec_core.h"
enum { kFrameBufferSize = FRAME_LEN * 4 };
enum { kEstimateLengthFrames = 400 };
typedef struct {
short buffer[kFrameBufferSize];
float position;
int deviceSampleRateHz;
int skewData[kEstimateLengthFrames];
int skewDataIndex;
float skewEstimate;
} resampler_t;
static int EstimateSkew(const int* rawSkew,
int size,
int deviceSampleRateHz,
float *skewEst);
int WebRtcAec_CreateResampler(void **resampInst)
{
resampler_t *obj = malloc(sizeof(resampler_t));
*resampInst = obj;
if (obj == NULL) {
return -1;
}
return 0;
}
int WebRtcAec_InitResampler(void *resampInst, int deviceSampleRateHz)
{
resampler_t *obj = (resampler_t*) resampInst;
memset(obj->buffer, 0, sizeof(obj->buffer));
obj->position = 0.0;
obj->deviceSampleRateHz = deviceSampleRateHz;
memset(obj->skewData, 0, sizeof(obj->skewData));
obj->skewDataIndex = 0;
obj->skewEstimate = 0.0;
return 0;
}
int WebRtcAec_FreeResampler(void *resampInst)
{
resampler_t *obj = (resampler_t*) resampInst;
free(obj);
return 0;
}
int WebRtcAec_ResampleLinear(void *resampInst,
const short *inspeech,
int size,
float skew,
short *outspeech)
{
resampler_t *obj = (resampler_t*) resampInst;
short *y;
float be, tnew, interp;
int tn, outsize, mm;
if (size < 0 || size > 2 * FRAME_LEN) {
return -1;
}
// Add new frame data in lookahead
memcpy(&obj->buffer[FRAME_LEN + kResamplingDelay],
inspeech,
size * sizeof(short));
// Sample rate ratio
be = 1 + skew;
// Loop over input frame
mm = 0;
y = &obj->buffer[FRAME_LEN]; // Point at current frame
tnew = be * mm + obj->position;
tn = (int) tnew;
while (tn < size) {
// Interpolation
interp = y[tn] + (tnew - tn) * (y[tn+1] - y[tn]);
if (interp > 32767) {
interp = 32767;
}
else if (interp < -32768) {
interp = -32768;
}
outspeech[mm] = (short) interp;
mm++;
tnew = be * mm + obj->position;
tn = (int) tnew;
}
outsize = mm;
obj->position += outsize * be - size;
// Shift buffer
memmove(obj->buffer,
&obj->buffer[size],
(kFrameBufferSize - size) * sizeof(short));
return outsize;
}
int WebRtcAec_GetSkew(void *resampInst, int rawSkew, float *skewEst)
{
resampler_t *obj = (resampler_t*)resampInst;
int err = 0;
if (obj->skewDataIndex < kEstimateLengthFrames) {
obj->skewData[obj->skewDataIndex] = rawSkew;
obj->skewDataIndex++;
}
else if (obj->skewDataIndex == kEstimateLengthFrames) {
err = EstimateSkew(obj->skewData,
kEstimateLengthFrames,
obj->deviceSampleRateHz,
skewEst);
obj->skewEstimate = *skewEst;
obj->skewDataIndex++;
}
else {
*skewEst = obj->skewEstimate;
}
return err;
}
int EstimateSkew(const int* rawSkew,
int size,
int deviceSampleRateHz,
float *skewEst)
{
const int absLimitOuter = (int)(0.04f * deviceSampleRateHz);
const int absLimitInner = (int)(0.0025f * deviceSampleRateHz);
int i = 0;
int n = 0;
float rawAvg = 0;
float err = 0;
float rawAbsDev = 0;
int upperLimit = 0;
int lowerLimit = 0;
float cumSum = 0;
float x = 0;
float x2 = 0;
float y = 0;
float xy = 0;
float xAvg = 0;
float denom = 0;
float skew = 0;
*skewEst = 0; // Set in case of error below.
for (i = 0; i < size; i++) {
if ((rawSkew[i] < absLimitOuter && rawSkew[i] > -absLimitOuter)) {
n++;
rawAvg += rawSkew[i];
}
}
if (n == 0) {
return -1;
}
assert(n > 0);
rawAvg /= n;
for (i = 0; i < size; i++) {
if ((rawSkew[i] < absLimitOuter && rawSkew[i] > -absLimitOuter)) {
err = rawSkew[i] - rawAvg;
rawAbsDev += err >= 0 ? err : -err;
}
}
assert(n > 0);
rawAbsDev /= n;
upperLimit = (int)(rawAvg + 5 * rawAbsDev + 1); // +1 for ceiling.
lowerLimit = (int)(rawAvg - 5 * rawAbsDev - 1); // -1 for floor.
n = 0;
for (i = 0; i < size; i++) {
if ((rawSkew[i] < absLimitInner && rawSkew[i] > -absLimitInner) ||
(rawSkew[i] < upperLimit && rawSkew[i] > lowerLimit)) {
n++;
cumSum += rawSkew[i];
x += n;
x2 += n*n;
y += cumSum;
xy += n * cumSum;
}
}
if (n == 0) {
return -1;
}
assert(n > 0);
xAvg = x / n;
denom = x2 - xAvg*x;
if (denom != 0) {
skew = (xy - xAvg*y) / denom;
}
*skewEst = skew;
return 0;
}
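In other words, the second loop fits a straight line to the cumulative skew. Using the code's own accumulators, with x, x2, y and xy the sums over the n accepted frames and xAvg = x / n, the estimated drift per frame is

skew = (xy - xAvg * y) / (x2 - xAvg * x)

which is the ordinary least-squares slope of the cumulative skew against the frame index; the outer/inner limits and the ±5 mean-absolute-deviation band merely reject outliers before the fit.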


@@ -1,32 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_RESAMPLER_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_RESAMPLER_H_
enum { kResamplingDelay = 1 };
// Unless otherwise specified, functions return 0 on success and -1 on error
int WebRtcAec_CreateResampler(void **resampInst);
int WebRtcAec_InitResampler(void *resampInst, int deviceSampleRateHz);
int WebRtcAec_FreeResampler(void *resampInst);
// Estimates skew from raw measurement.
int WebRtcAec_GetSkew(void *resampInst, int rawSkew, float *skewEst);
// Resamples input using linear interpolation.
// Returns size of resampled array.
int WebRtcAec_ResampleLinear(void *resampInst,
const short *inspeech,
int size,
float skew,
short *outspeech);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_RESAMPLER_H_
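A sketch of the intended call order, inferred from the declarations above rather than taken from the upstream sources; the 44.1 kHz device rate, the frame length restated below, and the function name are illustrative assumptions.

#include "resampler.h"

/* FRAME_LEN comes from aec_core.h in the real code; restated here so the
 * sketch is self-contained. */
#ifndef FRAME_LEN
#define FRAME_LEN 80
#endif

void resample_farend_sketch(const short* farend, short* resampled,
                            int raw_skew) {
  void* inst = NULL;
  float skew_est = 0.0f;
  int out_size;
  if (WebRtcAec_CreateResampler(&inst) != 0) {
    return;
  }
  WebRtcAec_InitResampler(inst, 44100);          /* sound-card rate */
  WebRtcAec_GetSkew(inst, raw_skew, &skew_est);  /* accumulate skew estimate */
  out_size = WebRtcAec_ResampleLinear(inst, farend, FRAME_LEN,
                                      skew_est, resampled);
  (void)out_size;  /* number of output samples actually produced */
  WebRtcAec_FreeResampler(inst);
}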


@@ -1,9 +0,0 @@
noinst_LTLIBRARIES = libaecm.la
libaecm_la_SOURCES = interface/echo_control_mobile.h \
echo_control_mobile.c \
aecm_core.c \
aecm_core.h
libaecm_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility


@@ -1,34 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'aecm',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
'apm_util'
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/echo_control_mobile.h',
'echo_control_mobile.c',
'aecm_core.c',
'aecm_core.h',
],
},
],
}

File diff suppressed because it is too large.


@@ -1,358 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
// Performs echo control (suppression) with fft routines in fixed-point
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AECM_MAIN_SOURCE_AECM_CORE_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AECM_MAIN_SOURCE_AECM_CORE_H_
#define AECM_DYNAMIC_Q // turn on/off dynamic Q-domain
//#define AECM_WITH_ABS_APPROX
//#define AECM_SHORT // for 32 sample partition length (otherwise 64)
#include "typedefs.h"
#include "signal_processing_library.h"
// Algorithm parameters
#define FRAME_LEN 80 // Total frame length, 10 ms
#ifdef AECM_SHORT
#define PART_LEN 32 // Length of partition
#define PART_LEN_SHIFT 6 // Length of (PART_LEN * 2) in base 2
#else
#define PART_LEN 64 // Length of partition
#define PART_LEN_SHIFT 7 // Length of (PART_LEN * 2) in base 2
#endif
#define PART_LEN1 (PART_LEN + 1) // Unique fft coefficients
#define PART_LEN2 (PART_LEN << 1) // Length of partition * 2
#define PART_LEN4 (PART_LEN << 2) // Length of partition * 4
#define FAR_BUF_LEN PART_LEN4 // Length of buffers
#define MAX_DELAY 100
// Counter parameters
#ifdef AECM_SHORT
#define CONV_LEN 1024 // Convergence length used at startup
#else
#define CONV_LEN 512 // Convergence length used at startup
#endif
#define CONV_LEN2 (CONV_LEN << 1) // Convergence length * 2 used at startup
// Energy parameters
#define MAX_BUF_LEN 64 // History length of energy signals
#define FAR_ENERGY_MIN 1025 // Lowest Far energy level: At least 2 in energy
#define FAR_ENERGY_DIFF 929 // Allowed difference between max and min
#define ENERGY_DEV_OFFSET 0 // The energy error offset in Q8
#define ENERGY_DEV_TOL 400 // The energy estimation tolerance in Q8
#define FAR_ENERGY_VAD_REGION 230 // Far VAD tolerance region
// Stepsize parameters
#define MU_MIN 10 // Min stepsize 2^-MU_MIN (far end energy dependent)
#define MU_MAX 1 // Max stepsize 2^-MU_MAX (far end energy dependent)
#define MU_DIFF 9 // MU_MIN - MU_MAX
// Channel parameters
#define MIN_MSE_COUNT 20 // Min number of consecutive blocks with enough far end
// energy to compare channel estimates
#define MIN_MSE_DIFF 29 // The ratio between adapted and stored channel to
// accept a new storage (0.8 in Q-MSE_RESOLUTION)
#define MSE_RESOLUTION 5 // MSE parameter resolution
#define RESOLUTION_CHANNEL16 12 // W16 Channel in Q-RESOLUTION_CHANNEL16
#define RESOLUTION_CHANNEL32 28 // W32 Channel in Q-RESOLUTION_CHANNEL32
#define CHANNEL_VAD 16 // Minimum energy in frequency band to update channel
// Suppression gain parameters: SUPGAIN_ parameters in Q-(RESOLUTION_SUPGAIN)
#define RESOLUTION_SUPGAIN 8 // Channel in Q-(RESOLUTION_SUPGAIN)
#define SUPGAIN_DEFAULT (1 << RESOLUTION_SUPGAIN) // Default suppression gain
#define SUPGAIN_ERROR_PARAM_A 3072 // Estimation error parameter (Maximum gain) (8 in Q8)
#define SUPGAIN_ERROR_PARAM_B 1536 // Estimation error parameter (Gain before going down)
#define SUPGAIN_ERROR_PARAM_D SUPGAIN_DEFAULT // Estimation error parameter
// (Should be the same as Default) (1 in Q8)
#define SUPGAIN_EPC_DT 200 // = SUPGAIN_ERROR_PARAM_C * ENERGY_DEV_TOL
// Defines for "check delay estimation"
#define CORR_WIDTH 31 // Number of samples to correlate over.
#define CORR_MAX 16 // Maximum correlation offset
#define CORR_MAX_BUF 63
#define CORR_DEV 4
#define CORR_MAX_LEVEL 20
#define CORR_MAX_LOW 4
#define CORR_BUF_LEN ((CORR_MAX << 1) + 1)
// Note that CORR_WIDTH + 2*CORR_MAX <= MAX_BUF_LEN
#define ONE_Q14 (1 << 14)
// NLP defines
#define NLP_COMP_LOW 3277 // 0.2 in Q14
#define NLP_COMP_HIGH ONE_Q14 // 1 in Q14
extern const WebRtc_Word16 WebRtcAecm_kSqrtHanning[];
typedef struct {
WebRtc_Word16 real;
WebRtc_Word16 imag;
} complex16_t;
typedef struct
{
int farBufWritePos;
int farBufReadPos;
int knownDelay;
int lastKnownDelay;
int firstVAD; // Parameter to control poorly initialized channels
void *farFrameBuf;
void *nearNoisyFrameBuf;
void *nearCleanFrameBuf;
void *outFrameBuf;
WebRtc_Word16 farBuf[FAR_BUF_LEN];
WebRtc_Word16 mult;
WebRtc_UWord32 seed;
// Delay estimation variables
void* delay_estimator;
WebRtc_UWord16 currentDelay;
WebRtc_Word16 nlpFlag;
WebRtc_Word16 fixedDelay;
WebRtc_UWord32 totCount;
WebRtc_Word16 dfaCleanQDomain;
WebRtc_Word16 dfaCleanQDomainOld;
WebRtc_Word16 dfaNoisyQDomain;
WebRtc_Word16 dfaNoisyQDomainOld;
WebRtc_Word16 nearLogEnergy[MAX_BUF_LEN];
WebRtc_Word16 farLogEnergy;
WebRtc_Word16 echoAdaptLogEnergy[MAX_BUF_LEN];
WebRtc_Word16 echoStoredLogEnergy[MAX_BUF_LEN];
// The extra 16 or 32 bytes in the following buffers are for alignment based Neon code.
// It's designed this way since the current GCC compiler can't align a buffer in 16 or 32
// byte boundaries properly.
WebRtc_Word16 channelStored_buf[PART_LEN1 + 8];
WebRtc_Word16 channelAdapt16_buf[PART_LEN1 + 8];
WebRtc_Word32 channelAdapt32_buf[PART_LEN1 + 8];
WebRtc_Word16 xBuf_buf[PART_LEN2 + 16]; // farend
WebRtc_Word16 dBufClean_buf[PART_LEN2 + 16]; // nearend
WebRtc_Word16 dBufNoisy_buf[PART_LEN2 + 16]; // nearend
WebRtc_Word16 outBuf_buf[PART_LEN + 8];
// Pointers to the above buffers
WebRtc_Word16 *channelStored;
WebRtc_Word16 *channelAdapt16;
WebRtc_Word32 *channelAdapt32;
WebRtc_Word16 *xBuf;
WebRtc_Word16 *dBufClean;
WebRtc_Word16 *dBufNoisy;
WebRtc_Word16 *outBuf;
WebRtc_Word32 echoFilt[PART_LEN1];
WebRtc_Word16 nearFilt[PART_LEN1];
WebRtc_Word32 noiseEst[PART_LEN1];
int noiseEstTooLowCtr[PART_LEN1];
int noiseEstTooHighCtr[PART_LEN1];
WebRtc_Word16 noiseEstCtr;
WebRtc_Word16 cngMode;
WebRtc_Word32 mseAdaptOld;
WebRtc_Word32 mseStoredOld;
WebRtc_Word32 mseThreshold;
WebRtc_Word16 farEnergyMin;
WebRtc_Word16 farEnergyMax;
WebRtc_Word16 farEnergyMaxMin;
WebRtc_Word16 farEnergyVAD;
WebRtc_Word16 farEnergyMSE;
int currentVADValue;
WebRtc_Word16 vadUpdateCount;
WebRtc_Word16 startupState;
WebRtc_Word16 mseChannelCount;
WebRtc_Word16 supGain;
WebRtc_Word16 supGainOld;
WebRtc_Word16 supGainErrParamA;
WebRtc_Word16 supGainErrParamD;
WebRtc_Word16 supGainErrParamDiffAB;
WebRtc_Word16 supGainErrParamDiffBD;
#ifdef AEC_DEBUG
FILE *farFile;
FILE *nearFile;
FILE *outFile;
#endif
} AecmCore_t;
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_CreateCore(...)
//
// Allocates the memory needed by the AECM. The memory needs to be
// initialized separately using the WebRtcAecm_InitCore() function.
//
// Input:
// - aecm : Instance that should be created
//
// Output:
// - aecm : Created instance
//
// Return value : 0 - Ok
// -1 - Error
//
int WebRtcAecm_CreateCore(AecmCore_t **aecm);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_InitCore(...)
//
// This function initializes the AECM instance created with WebRtcAecm_CreateCore(...)
// Input:
// - aecm : Pointer to the AECM instance
// - samplingFreq : Sampling Frequency
//
// Output:
// - aecm : Initialized instance
//
// Return value : 0 - Ok
// -1 - Error
//
int WebRtcAecm_InitCore(AecmCore_t * const aecm, int samplingFreq);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_FreeCore(...)
//
// This function releases the memory allocated by WebRtcAecm_CreateCore()
// Input:
// - aecm : Pointer to the AECM instance
//
// Return value : 0 - Ok
// -1 - Error
// 11001-11016: Error
//
int WebRtcAecm_FreeCore(AecmCore_t *aecm);
int WebRtcAecm_Control(AecmCore_t *aecm, int delay, int nlpFlag);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_InitEchoPathCore(...)
//
// This function resets the echo channel adaptation with the specified channel.
// Input:
// - aecm : Pointer to the AECM instance
// - echo_path : Pointer to the data that should initialize the echo path
//
// Output:
// - aecm : Initialized instance
//
void WebRtcAecm_InitEchoPathCore(AecmCore_t* aecm, const WebRtc_Word16* echo_path);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_ProcessFrame(...)
//
// This function processes frames and sends blocks to WebRtcAecm_ProcessBlock(...)
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one frame of echo signal
// - nearendNoisy : In buffer containing one frame of nearend+echo signal without NS
// - nearendClean : In buffer containing one frame of nearend+echo signal with NS
//
// Output:
// - out : Out buffer, one frame of nearend signal :
//
//
int WebRtcAecm_ProcessFrame(AecmCore_t * aecm, const WebRtc_Word16 * farend,
const WebRtc_Word16 * nearendNoisy,
const WebRtc_Word16 * nearendClean,
WebRtc_Word16 * out);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_ProcessBlock(...)
//
// This function is called for every block within one frame
// This function is called by WebRtcAecm_ProcessFrame(...)
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one block of echo signal
// - nearendNoisy : In buffer containing one frame of nearend+echo signal without NS
// - nearendClean : In buffer containing one frame of nearend+echo signal with NS
//
// Output:
// - out : Out buffer, one block of nearend signal :
//
//
int WebRtcAecm_ProcessBlock(AecmCore_t * aecm, const WebRtc_Word16 * farend,
const WebRtc_Word16 * nearendNoisy,
const WebRtc_Word16 * nearendClean,
WebRtc_Word16 * out);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_BufferFarFrame()
//
// Inserts a frame of data into farend buffer.
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one frame of farend signal
// - farLen : Length of frame
//
void WebRtcAecm_BufferFarFrame(AecmCore_t * const aecm, const WebRtc_Word16 * const farend,
const int farLen);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_FetchFarFrame()
//
// Read the farend buffer to account for known delay
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one frame of farend signal
// - farLen : Length of frame
// - knownDelay : known delay
//
void WebRtcAecm_FetchFarFrame(AecmCore_t * const aecm, WebRtc_Word16 * const farend,
const int farLen, const int knownDelay);
///////////////////////////////////////////////////////////////////////////////////////////////
// Some internal functions shared by ARM NEON and generic C code:
//
void WebRtcAecm_CalcLinearEnergies(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echoEst,
WebRtc_UWord32* far_energy,
WebRtc_UWord32* echo_energy_adapt,
WebRtc_UWord32* echo_energy_stored);
void WebRtcAecm_StoreAdaptiveChannel(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echo_est);
void WebRtcAecm_ResetAdaptiveChannel(AecmCore_t *aecm);
void WebRtcAecm_WindowAndFFT(WebRtc_Word16* fft,
const WebRtc_Word16* time_signal,
complex16_t* freq_signal,
int time_signal_scaling);
void WebRtcAecm_InverseFFTAndWindow(AecmCore_t* aecm,
WebRtc_Word16* fft,
complex16_t* efw,
WebRtc_Word16* output,
const WebRtc_Word16* nearendClean);
#endif
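For orientation, a hedged sketch of how these core entry points fit together, inferred from the comments above rather than copied from the real driver in echo_control_mobile.c (also removed by this commit); the function name, the 8 kHz rate, and the fixed-delay/NLP settings are illustrative assumptions.

#include "aecm_core.h"

void run_aecm_core_sketch(const WebRtc_Word16* farend,
                          const WebRtc_Word16* nearend_noisy,
                          WebRtc_Word16* out) {
  AecmCore_t* aecm = NULL;
  if (WebRtcAecm_CreateCore(&aecm) != 0) {
    return;
  }
  if (WebRtcAecm_InitCore(aecm, 8000) == 0 &&
      WebRtcAecm_Control(aecm, 0, 1) == 0) {  /* delay 0, NLP enabled */
    WebRtcAecm_BufferFarFrame(aecm, farend, FRAME_LEN);
    /* No noise-suppressed signal available, so nearendClean is NULL. */
    WebRtcAecm_ProcessFrame(aecm, farend, nearend_noisy, NULL, out);
  }
  WebRtcAecm_FreeCore(aecm);
}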


@@ -1,314 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#if defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON)
#include "aecm_core.h"
#include <arm_neon.h>
#include <assert.h>
// Square root of Hanning window in Q14.
static const WebRtc_Word16 kSqrtHanningReversed[] __attribute__ ((aligned (8))) = {
16384, 16373, 16354, 16325,
16286, 16237, 16179, 16111,
16034, 15947, 15851, 15746,
15631, 15506, 15373, 15231,
15079, 14918, 14749, 14571,
14384, 14189, 13985, 13773,
13553, 13325, 13089, 12845,
12594, 12335, 12068, 11795,
11514, 11227, 10933, 10633,
10326, 10013, 9695, 9370,
9040, 8705, 8364, 8019,
7668, 7313, 6954, 6591,
6224, 5853, 5478, 5101,
4720, 4337, 3951, 3562,
3172, 2780, 2386, 1990,
1594, 1196, 798, 399
};
void WebRtcAecm_WindowAndFFT(WebRtc_Word16* fft,
const WebRtc_Word16* time_signal,
complex16_t* freq_signal,
int time_signal_scaling)
{
int i, j;
int16x4_t tmp16x4_scaling = vdup_n_s16(time_signal_scaling);
__asm__("vmov.i16 d21, #0" ::: "d21");
for(i = 0, j = 0; i < PART_LEN; i += 4, j += 8)
{
int16x4_t tmp16x4_0;
int16x4_t tmp16x4_1;
int32x4_t tmp32x4_0;
/* Window near end */
// fft[j] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((time_signal[i]
// << time_signal_scaling), WebRtcAecm_kSqrtHanning[i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&time_signal[i]));
tmp16x4_0 = vshl_s16(tmp16x4_0, tmp16x4_scaling);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&WebRtcAecm_kSqrtHanning[i]));
tmp32x4_0 = vmull_s16(tmp16x4_0, tmp16x4_1);
__asm__("vshrn.i32 d20, %q0, #14" : : "w"(tmp32x4_0) : "d20");
__asm__("vst2.16 {d20, d21}, [%0, :128]" : : "r"(&fft[j]) : "q10");
// fft[PART_LEN2 + j] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(
// (time_signal[PART_LEN + i] << time_signal_scaling),
// WebRtcAecm_kSqrtHanning[PART_LEN - i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&time_signal[i + PART_LEN]));
tmp16x4_0 = vshl_s16(tmp16x4_0, tmp16x4_scaling);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&kSqrtHanningReversed[i]));
tmp32x4_0 = vmull_s16(tmp16x4_0, tmp16x4_1);
__asm__("vshrn.i32 d20, %q0, #14" : : "w"(tmp32x4_0) : "d20");
__asm__("vst2.16 {d20, d21}, [%0, :128]" : : "r"(&fft[PART_LEN2 + j]) : "q10");
}
WebRtcSpl_ComplexBitReverse(fft, PART_LEN_SHIFT);
WebRtcSpl_ComplexFFT(fft, PART_LEN_SHIFT, 1);
// Take only the first PART_LEN2 samples, and switch the sign of the imaginary part.
for(i = 0, j = 0; j < PART_LEN2; i += 8, j += 16)
{
__asm__("vld2.16 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&fft[j]) : "q10", "q11");
__asm__("vneg.s16 d22, d22" : : : "q10");
__asm__("vneg.s16 d23, d23" : : : "q11");
__asm__("vst2.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&freq_signal[i].real): "q10", "q11");
}
}
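Gathering the commented reference lines above into one place, the windowing step amounts to the following scalar sketch; it is assembled from those comments, not copied from the generic C path that this commit also deletes.

/* Scalar sketch of the windowing in WebRtcAecm_WindowAndFFT(): fft[] is
 * interleaved real/imaginary with the imaginary parts zeroed, and the second
 * half of the frame uses the reversed square-root Hanning window. */
static void WindowNearendScalarSketch(WebRtc_Word16* fft,
                                      const WebRtc_Word16* time_signal,
                                      int time_signal_scaling) {
  int i;
  for (i = 0; i < PART_LEN; i++) {
    fft[2 * i] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(
        (time_signal[i] << time_signal_scaling),
        WebRtcAecm_kSqrtHanning[i], 14);
    fft[2 * i + 1] = 0;
    fft[PART_LEN2 + 2 * i] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(
        (time_signal[PART_LEN + i] << time_signal_scaling),
        WebRtcAecm_kSqrtHanning[PART_LEN - i], 14);
    fft[PART_LEN2 + 2 * i + 1] = 0;
  }
}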
void WebRtcAecm_InverseFFTAndWindow(AecmCore_t* aecm,
WebRtc_Word16* fft,
complex16_t* efw,
WebRtc_Word16* output,
const WebRtc_Word16* nearendClean)
{
int i, j, outCFFT;
WebRtc_Word32 tmp32no1;
// Synthesis
for(i = 0, j = 0; i < PART_LEN; i += 4, j += 8)
{
// We overwrite two more elements in fft[], but it's ok.
__asm__("vld2.16 {d20, d21}, [%0, :128]" : : "r"(&(efw[i].real)) : "q10");
__asm__("vmov q11, q10" : : : "q10", "q11");
__asm__("vneg.s16 d23, d23" : : : "q11");
__asm__("vst2.16 {d22, d23}, [%0, :128]" : : "r"(&fft[j]): "q11");
__asm__("vrev64.16 q10, q10" : : : "q10");
__asm__("vst2.16 {d20, d21}, [%0]" : : "r"(&fft[PART_LEN4 - j - 6]): "q10");
}
fft[PART_LEN2] = efw[PART_LEN].real;
fft[PART_LEN2 + 1] = -efw[PART_LEN].imag;
// Inverse FFT, result should be scaled with outCFFT.
WebRtcSpl_ComplexBitReverse(fft, PART_LEN_SHIFT);
outCFFT = WebRtcSpl_ComplexIFFT(fft, PART_LEN_SHIFT, 1);
// Take only the real values and scale with outCFFT.
for (i = 0, j = 0; i < PART_LEN2; i += 8, j+= 16)
{
__asm__("vld2.16 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&fft[j]) : "q10", "q11");
__asm__("vst1.16 {d20, d21}, [%0, :128]" : : "r"(&fft[i]): "q10");
}
int32x4_t tmp32x4_2;
__asm__("vdup.32 %q0, %1" : "=w"(tmp32x4_2) : "r"((WebRtc_Word32)
(outCFFT - aecm->dfaCleanQDomain)));
for (i = 0; i < PART_LEN; i += 4)
{
int16x4_t tmp16x4_0;
int16x4_t tmp16x4_1;
int32x4_t tmp32x4_0;
int32x4_t tmp32x4_1;
// fft[i] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT_WITH_ROUND(
// fft[i], WebRtcAecm_kSqrtHanning[i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&fft[i]));
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&WebRtcAecm_kSqrtHanning[i]));
__asm__("vmull.s16 %q0, %P1, %P2" : "=w"(tmp32x4_0) : "w"(tmp16x4_0), "w"(tmp16x4_1));
__asm__("vrshr.s32 %q0, %q1, #14" : "=w"(tmp32x4_0) : "0"(tmp32x4_0));
// tmp32no1 = WEBRTC_SPL_SHIFT_W32((WebRtc_Word32)fft[i],
// outCFFT - aecm->dfaCleanQDomain);
__asm__("vshl.s32 %q0, %q1, %q2" : "=w"(tmp32x4_0) : "0"(tmp32x4_0), "w"(tmp32x4_2));
// fft[i] = (WebRtc_Word16)WEBRTC_SPL_SAT(WEBRTC_SPL_WORD16_MAX,
// tmp32no1 + outBuf[i], WEBRTC_SPL_WORD16_MIN);
// output[i] = fft[i];
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&aecm->outBuf[i]));
__asm__("vmovl.s16 %q0, %P1" : "=w"(tmp32x4_1) : "w"(tmp16x4_0));
__asm__("vadd.i32 %q0, %q1" : : "w"(tmp32x4_0), "w"(tmp32x4_1));
__asm__("vqshrn.s32 %P0, %q1, #0" : "=w"(tmp16x4_0) : "w"(tmp32x4_0));
__asm__("vst1.16 %P0, [%1, :64]" : : "w"(tmp16x4_0), "r"(&fft[i]));
__asm__("vst1.16 %P0, [%1, :64]" : : "w"(tmp16x4_0), "r"(&output[i]));
// tmp32no1 = WEBRTC_SPL_MUL_16_16_RSFT(
// fft[PART_LEN + i], WebRtcAecm_kSqrtHanning[PART_LEN - i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&fft[PART_LEN + i]));
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&kSqrtHanningReversed[i]));
__asm__("vmull.s16 %q0, %P1, %P2" : "=w"(tmp32x4_0) : "w"(tmp16x4_0), "w"(tmp16x4_1));
__asm__("vshr.s32 %q0, %q1, #14" : "=w"(tmp32x4_0) : "0"(tmp32x4_0));
// tmp32no1 = WEBRTC_SPL_SHIFT_W32(tmp32no1, outCFFT - aecm->dfaCleanQDomain);
__asm__("vshl.s32 %q0, %q1, %q2" : "=w"(tmp32x4_0) : "0"(tmp32x4_0), "w"(tmp32x4_2));
// outBuf[i] = (WebRtc_Word16)WEBRTC_SPL_SAT(
// WEBRTC_SPL_WORD16_MAX, tmp32no1, WEBRTC_SPL_WORD16_MIN);
__asm__("vqshrn.s32 %P0, %q1, #0" : "=w"(tmp16x4_0) : "w"(tmp32x4_0));
__asm__("vst1.16 %P0, [%1, :64]" : : "w"(tmp16x4_0), "r"(&aecm->outBuf[i]));
}
// Copy the current block to the old position (outBuf is shifted elsewhere).
for (i = 0; i < PART_LEN; i += 16)
{
__asm__("vld1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->xBuf[i + PART_LEN]) : "q10");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&aecm->xBuf[i]): "q10");
}
for (i = 0; i < PART_LEN; i += 16)
{
__asm__("vld1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufNoisy[i + PART_LEN]) : "q10");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufNoisy[i]): "q10");
}
if (nearendClean != NULL) {
for (i = 0; i < PART_LEN; i += 16)
{
__asm__("vld1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufClean[i + PART_LEN]) : "q10");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufClean[i]): "q10");
}
}
}
void WebRtcAecm_CalcLinearEnergies(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echo_est,
WebRtc_UWord32* far_energy,
WebRtc_UWord32* echo_energy_adapt,
WebRtc_UWord32* echo_energy_stored)
{
int i;
register WebRtc_UWord32 far_energy_r;
register WebRtc_UWord32 echo_energy_stored_r;
register WebRtc_UWord32 echo_energy_adapt_r;
uint32x4_t tmp32x4_0;
__asm__("vmov.i32 q14, #0" : : : "q14"); // far_energy
__asm__("vmov.i32 q8, #0" : : : "q8"); // echo_energy_stored
__asm__("vmov.i32 q9, #0" : : : "q9"); // echo_energy_adapt
for(i = 0; i < PART_LEN -7; i += 8)
{
// far_energy += (WebRtc_UWord32)(far_spectrum[i]);
__asm__("vld1.16 {d26, d27}, [%0]" : : "r"(&far_spectrum[i]) : "q13");
__asm__("vaddw.u16 q14, q14, d26" : : : "q14", "q13");
__asm__("vaddw.u16 q14, q14, d27" : : : "q14", "q13");
// Get estimated echo energies for adaptive channel and stored channel.
// echoEst[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
__asm__("vld1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelStored[i]) : "q12");
__asm__("vmull.u16 q10, d26, d24" : : : "q12", "q13", "q10");
__asm__("vmull.u16 q11, d27, d25" : : : "q12", "q13", "q11");
__asm__("vst1.32 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&echo_est[i]):
"q10", "q11");
// echo_energy_stored += (WebRtc_UWord32)echoEst[i];
__asm__("vadd.u32 q8, q10" : : : "q10", "q8");
__asm__("vadd.u32 q8, q11" : : : "q11", "q8");
// echo_energy_adapt += WEBRTC_SPL_UMUL_16_16(
// aecm->channelAdapt16[i], far_spectrum[i]);
__asm__("vld1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelAdapt16[i]) : "q12");
__asm__("vmull.u16 q10, d26, d24" : : : "q12", "q13", "q10");
__asm__("vmull.u16 q11, d27, d25" : : : "q12", "q13", "q11");
__asm__("vadd.u32 q9, q10" : : : "q9", "q15");
__asm__("vadd.u32 q9, q11" : : : "q9", "q11");
}
__asm__("vadd.u32 d28, d29" : : : "q14");
__asm__("vpadd.u32 d28, d28" : : : "q14");
__asm__("vmov.32 %0, d28[0]" : "=r"(far_energy_r): : "q14");
__asm__("vadd.u32 d18, d19" : : : "q9");
__asm__("vpadd.u32 d18, d18" : : : "q9");
__asm__("vmov.32 %0, d18[0]" : "=r"(echo_energy_adapt_r): : "q9");
__asm__("vadd.u32 d16, d17" : : : "q8");
__asm__("vpadd.u32 d16, d16" : : : "q8");
__asm__("vmov.32 %0, d16[0]" : "=r"(echo_energy_stored_r): : "q8");
// Get estimated echo energies for adaptive channel and stored channel.
echo_est[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
*echo_energy_stored = echo_energy_stored_r + (WebRtc_UWord32)echo_est[i];
*far_energy = far_energy_r + (WebRtc_UWord32)(far_spectrum[i]);
*echo_energy_adapt = echo_energy_adapt_r + WEBRTC_SPL_UMUL_16_16(
aecm->channelAdapt16[i], far_spectrum[i]);
}
void WebRtcAecm_StoreAdaptiveChannel(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echo_est)
{
int i;
// During startup we store the channel every block.
// Recalculate echo estimate.
for(i = 0; i < PART_LEN -7; i += 8)
{
// aecm->channelStored[i] = aecm->channelAdapt16[i];
// echo_est[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
__asm__("vld1.16 {d26, d27}, [%0]" : : "r"(&far_spectrum[i]) : "q13");
__asm__("vld1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelAdapt16[i]) : "q12");
__asm__("vst1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelStored[i]) : "q12");
__asm__("vmull.u16 q10, d26, d24" : : : "q12", "q13", "q10");
__asm__("vmull.u16 q11, d27, d25" : : : "q12", "q13", "q11");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&echo_est[i]) : "q10", "q11");
}
aecm->channelStored[i] = aecm->channelAdapt16[i];
echo_est[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
}
void WebRtcAecm_ResetAdaptiveChannel(AecmCore_t* aecm)
{
int i;
for(i = 0; i < PART_LEN -7; i += 8)
{
// aecm->channelAdapt16[i] = aecm->channelStored[i];
// aecm->channelAdapt32[i] = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)
// aecm->channelStored[i], 16);
__asm__("vld1.16 {d24, d25}, [%0, :128]" : :
"r"(&aecm->channelStored[i]) : "q12");
__asm__("vst1.16 {d24, d25}, [%0, :128]" : :
"r"(&aecm->channelAdapt16[i]) : "q12");
__asm__("vshll.s16 q10, d24, #16" : : : "q12", "q13", "q10");
__asm__("vshll.s16 q11, d25, #16" : : : "q12", "q13", "q11");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->channelAdapt32[i]): "q10", "q11");
}
aecm->channelAdapt16[i] = aecm->channelStored[i];
aecm->channelAdapt32[i] = WEBRTC_SPL_LSHIFT_W32(
(WebRtc_Word32)aecm->channelStored[i], 16);
}
#endif // #if defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON)


@@ -1,10 +0,0 @@
noinst_LTLIBRARIES = libagc.la
libagc_la_SOURCES = interface/gain_control.h \
analog_agc.c \
analog_agc.h \
digital_agc.c \
digital_agc.h
libagc_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility


@@ -1,34 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'agc',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/gain_control.h',
'analog_agc.c',
'analog_agc.h',
'digital_agc.c',
'digital_agc.h',
],
},
],
}


@ -1,133 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_ANALOG_AGC_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_ANALOG_AGC_H_
#include "typedefs.h"
#include "gain_control.h"
#include "digital_agc.h"
//#define AGC_DEBUG
//#define MIC_LEVEL_FEEDBACK
#ifdef AGC_DEBUG
#include <stdio.h>
#endif
/* Analog Automatic Gain Control variables:
* Constant declarations (inner limits inside which no changes are done)
* In the beginning the range is narrower to widen as soon as the measure
* 'Rxx160_LP' is inside it. Currently the starting limits are -22.2+/-1dBm0
* and the final limits -22.2+/-2.5dBm0. These levels make the speech signal
* go towards -25.4dBm0 (-31.4dBov). Tuned with wbfile-31.4dBov.pcm
* The limits are created by running the AGC with a file having the desired
* signal level and thereafter plotting Rxx160_LP in the dBm0-domain defined
* by out=10*log10(in/260537279.7); Set the target level to the average level
* of our measure Rxx160_LP. Remember that the levels are in blocks of 16 in
* Q(-7). (Example matlab code: round(db2pow(-21.2)*16/2^7) )
*/
#define RXX_BUFFER_LEN 10
static const WebRtc_Word16 kMsecSpeechInner = 520;
static const WebRtc_Word16 kMsecSpeechOuter = 340;
static const WebRtc_Word16 kNormalVadThreshold = 400;
static const WebRtc_Word16 kAlphaShortTerm = 6; // 2^(-6) = 0.0156
static const WebRtc_Word16 kAlphaLongTerm = 10; // 2^(-10) = 0.000977
typedef struct
{
// Configurable parameters/variables
WebRtc_UWord32 fs; // Sampling frequency
WebRtc_Word16 compressionGaindB; // Fixed gain level in dB
WebRtc_Word16 targetLevelDbfs; // Target level in -dBfs of envelope (default -3)
WebRtc_Word16 agcMode; // Hard coded mode (adaptAna/adaptDig/fixedDig)
WebRtc_UWord8 limiterEnable; // Enabling limiter (on/off (default off))
WebRtcAgc_config_t defaultConfig;
WebRtcAgc_config_t usedConfig;
// General variables
WebRtc_Word16 initFlag;
WebRtc_Word16 lastError;
// Target level parameters
// Based on the above: analogTargetLevel = round((32767*10^(-22/20))^2*16/2^7)
WebRtc_Word32 analogTargetLevel; // = RXX_BUFFER_LEN * 846805; -22 dBfs
WebRtc_Word32 startUpperLimit; // = RXX_BUFFER_LEN * 1066064; -21 dBfs
WebRtc_Word32 startLowerLimit; // = RXX_BUFFER_LEN * 672641; -23 dBfs
WebRtc_Word32 upperPrimaryLimit; // = RXX_BUFFER_LEN * 1342095; -20 dBfs
WebRtc_Word32 lowerPrimaryLimit; // = RXX_BUFFER_LEN * 534298; -24 dBfs
WebRtc_Word32 upperSecondaryLimit;// = RXX_BUFFER_LEN * 2677832; -17 dBfs
WebRtc_Word32 lowerSecondaryLimit;// = RXX_BUFFER_LEN * 267783; -27 dBfs
WebRtc_UWord16 targetIdx; // Table index for corresponding target level
#ifdef MIC_LEVEL_FEEDBACK
WebRtc_UWord16 targetIdxOffset; // Table index offset for level compensation
#endif
WebRtc_Word16 analogTarget; // Digital reference level in ENV scale
// Analog AGC specific variables
WebRtc_Word32 filterState[8]; // For downsampling wb to nb
WebRtc_Word32 upperLimit; // Upper limit for mic energy
WebRtc_Word32 lowerLimit; // Lower limit for mic energy
WebRtc_Word32 Rxx160w32; // Average energy for one frame
WebRtc_Word32 Rxx16_LPw32; // Low pass filtered subframe energies
WebRtc_Word32 Rxx160_LPw32; // Low pass filtered frame energies
WebRtc_Word32 Rxx16_LPw32Max; // Keeps track of largest energy subframe
WebRtc_Word32 Rxx16_vectorw32[RXX_BUFFER_LEN];// Array with subframe energies
WebRtc_Word32 Rxx16w32_array[2][5];// Energy values of microphone signal
WebRtc_Word32 env[2][10]; // Envelope values of subframes
WebRtc_Word16 Rxx16pos; // Current position in the Rxx16_vectorw32
WebRtc_Word16 envSum; // Filtered scaled envelope in subframes
WebRtc_Word16 vadThreshold; // Threshold for VAD decision
WebRtc_Word16 inActive; // Inactive time in milliseconds
WebRtc_Word16 msTooLow; // Milliseconds of speech at a too low level
WebRtc_Word16 msTooHigh; // Milliseconds of speech at a too high level
WebRtc_Word16 changeToSlowMode; // Change to slow mode after some time at target
WebRtc_Word16 firstCall; // First call to the process-function
WebRtc_Word16 msZero; // Milliseconds of zero input
WebRtc_Word16 msecSpeechOuterChange;// Min ms of speech between volume changes
WebRtc_Word16 msecSpeechInnerChange;// Min ms of speech between volume changes
WebRtc_Word16 activeSpeech; // Milliseconds of active speech
WebRtc_Word16 muteGuardMs; // Counter to prevent mute action
WebRtc_Word16 inQueue; // 10 ms batch indicator
// Microphone level variables
WebRtc_Word32 micRef; // Remember ref. mic level for virtual mic
WebRtc_UWord16 gainTableIdx; // Current position in virtual gain table
WebRtc_Word32 micGainIdx; // Gain index of mic level to increase slowly
WebRtc_Word32 micVol; // Remember volume between frames
WebRtc_Word32 maxLevel; // Max possible vol level, incl dig gain
WebRtc_Word32 maxAnalog; // Maximum possible analog volume level
WebRtc_Word32 maxInit; // Initial value of "max"
WebRtc_Word32 minLevel; // Minimum possible volume level
WebRtc_Word32 minOutput; // Minimum output volume level
WebRtc_Word32 zeroCtrlMax; // Remember max gain => don't amp low input
WebRtc_Word16 scale; // Scale factor for internal volume levels
#ifdef MIC_LEVEL_FEEDBACK
WebRtc_Word16 numBlocksMicLvlSat;
WebRtc_UWord8 micLvlSat;
#endif
// Structs for VAD and digital_agc
AgcVad_t vadMic;
DigitalAgc_t digitalAgc;
#ifdef AGC_DEBUG
FILE* fpt;
FILE* agcLog;
WebRtc_Word32 fcount;
#endif
WebRtc_Word16 lowLevelSignal;
} Agc_t;
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_ANALOG_AGC_H_


@ -1,76 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_DIGITAL_AGC_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_DIGITAL_AGC_H_
#ifdef AGC_DEBUG
#include <stdio.h>
#endif
#include "typedefs.h"
#include "signal_processing_library.h"
// the 32 most significant bits of A(19) * B(26) >> 13
#define AGC_MUL32(A, B) (((B)>>13)*(A) + ( ((0x00001FFF & (B))*(A)) >> 13 ))
// C + the 32 most significant bits of A * B
#define AGC_SCALEDIFF32(A, B, C) ((C) + ((B)>>16)*(A) + ( ((0x0000FFFF & (B))*(A)) >> 16 ))
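Both macros split B into its upper bits and its low 13 (or 16) bits so the scaled product can be formed without a 64-bit multiply. A hedged self-check against a plain 64-bit computation follows; the helper is hypothetical, not part of the header, and it assumes arithmetic right shifts and operands within the ranges noted in the comments (A around 19 bits and B around 26 bits for AGC_MUL32).

#include <assert.h>

/* Hypothetical self-check: for in-range operands, AGC_MUL32(A, B) matches
 * (A * B) >> 13 computed in 64-bit arithmetic. */
static void CheckAgcMul32(WebRtc_Word32 a, WebRtc_Word32 b) {
  WebRtc_Word64 reference = ((WebRtc_Word64)a * b) >> 13;
  assert(AGC_MUL32(a, b) == (WebRtc_Word32)reference);
}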
typedef struct
{
WebRtc_Word32 downState[8];
WebRtc_Word16 HPstate;
WebRtc_Word16 counter;
WebRtc_Word16 logRatio; // log( P(active) / P(inactive) ) (Q10)
WebRtc_Word16 meanLongTerm; // Q10
WebRtc_Word32 varianceLongTerm; // Q8
WebRtc_Word16 stdLongTerm; // Q10
WebRtc_Word16 meanShortTerm; // Q10
WebRtc_Word32 varianceShortTerm; // Q8
WebRtc_Word16 stdShortTerm; // Q10
} AgcVad_t; // total = 54 bytes
typedef struct
{
WebRtc_Word32 capacitorSlow;
WebRtc_Word32 capacitorFast;
WebRtc_Word32 gain;
WebRtc_Word32 gainTable[32];
WebRtc_Word16 gatePrevious;
WebRtc_Word16 agcMode;
AgcVad_t vadNearend;
AgcVad_t vadFarend;
#ifdef AGC_DEBUG
FILE* logFile;
int frameCounter;
#endif
} DigitalAgc_t;
WebRtc_Word32 WebRtcAgc_InitDigital(DigitalAgc_t *digitalAgcInst, WebRtc_Word16 agcMode);
WebRtc_Word32 WebRtcAgc_ProcessDigital(DigitalAgc_t *digitalAgcInst, const WebRtc_Word16 *inNear,
const WebRtc_Word16 *inNear_H, WebRtc_Word16 *out,
WebRtc_Word16 *out_H, WebRtc_UWord32 FS,
WebRtc_Word16 lowLevelSignal);
WebRtc_Word32 WebRtcAgc_AddFarendToDigital(DigitalAgc_t *digitalAgcInst, const WebRtc_Word16 *inFar,
WebRtc_Word16 nrSamples);
void WebRtcAgc_InitVad(AgcVad_t *vadInst);
WebRtc_Word16 WebRtcAgc_ProcessVad(AgcVad_t *vadInst, // (i) VAD state
const WebRtc_Word16 *in, // (i) Speech signal
WebRtc_Word16 nrSamples); // (i) number of samples
WebRtc_Word32 WebRtcAgc_CalculateGainTable(WebRtc_Word32 *gainTable, // Q16
WebRtc_Word16 compressionGaindB, // Q0 (in dB)
WebRtc_Word16 targetLevelDbfs,// Q0 (in dB)
WebRtc_UWord8 limiterEnable, WebRtc_Word16 analogTarget);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_DIGITAL_AGC_H_


@ -1,100 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'variables': {
'protoc_out_dir': '<(SHARED_INTERMEDIATE_DIR)/protoc_out',
'protoc_out_relpath': 'webrtc/audio_processing',
},
'targets': [
{
'target_name': 'audioproc_unittest',
'type': 'executable',
'conditions': [
['prefer_fixed_point==1', {
'defines': ['WEBRTC_APM_UNIT_TEST_FIXED_PROFILE'],
}, {
'defines': ['WEBRTC_APM_UNIT_TEST_FLOAT_PROFILE'],
}],
],
'dependencies': [
'audioproc_unittest_proto',
'audio_processing',
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
'<(webrtc_root)/system_wrappers/source/system_wrappers.gyp:system_wrappers',
'<(webrtc_root)/../test/test.gyp:test_support',
'<(webrtc_root)/../testing/gtest.gyp:gtest',
'<(webrtc_root)/../third_party/protobuf/protobuf.gyp:protobuf_lite',
],
'include_dirs': [
'<(webrtc_root)/../testing/gtest/include',
'<(protoc_out_dir)',
],
'sources': [
'test/unit_test.cc',
'<(protoc_out_dir)/<(protoc_out_relpath)/unittest.pb.cc',
'<(protoc_out_dir)/<(protoc_out_relpath)/unittest.pb.h',
],
},
{
# Protobuf compiler / generate rule for audioproc_unittest
'target_name': 'audioproc_unittest_proto',
'type': 'none',
'variables': {
'proto_relpath':
'<(webrtc_root)/modules/audio_processing/test',
},
'sources': [
'<(proto_relpath)/unittest.proto',
],
'rules': [
{
'rule_name': 'genproto',
'extension': 'proto',
'inputs': [
'<(PRODUCT_DIR)/<(EXECUTABLE_PREFIX)protoc<(EXECUTABLE_SUFFIX)',
],
'outputs': [
'<(protoc_out_dir)/<(protoc_out_relpath)/<(RULE_INPUT_ROOT).pb.cc',
'<(protoc_out_dir)/<(RULE_INPUT_ROOT).pb.h',
],
'action': [
'<(PRODUCT_DIR)/<(EXECUTABLE_PREFIX)protoc<(EXECUTABLE_SUFFIX)',
'--proto_path=<(proto_relpath)',
'<(proto_relpath)/<(RULE_INPUT_NAME)',
'--cpp_out=<(protoc_out_dir)/<(protoc_out_relpath)',
],
'message': 'Generating C++ code from <(RULE_INPUT_PATH)',
},
],
'dependencies': [
'<(webrtc_root)/../third_party/protobuf/protobuf.gyp:protoc#host',
],
# This target exports a hard dependency because it generates header
# files.
'hard_dependency': 1,
},
{
'target_name': 'audioproc_process_test',
'type': 'executable',
'dependencies': [
'audio_processing',
'<(webrtc_root)/system_wrappers/source/system_wrappers.gyp:system_wrappers',
'<(webrtc_root)/../testing/gtest.gyp:gtest',
'<(webrtc_root)/../third_party/protobuf/protobuf.gyp:protobuf_lite',
],
'include_dirs': [
'<(webrtc_root)/../testing/gtest/include',
'<(protoc_out_dir)',
],
'sources': [
'test/process_test.cc',
],
},
],
}


@@ -1,288 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "audio_buffer.h"
namespace webrtc {
namespace {
enum {
kSamplesPer8kHzChannel = 80,
kSamplesPer16kHzChannel = 160,
kSamplesPer32kHzChannel = 320
};
void StereoToMono(const WebRtc_Word16* left, const WebRtc_Word16* right,
WebRtc_Word16* out, int samples_per_channel) {
WebRtc_Word32 data_int32 = 0;
for (int i = 0; i < samples_per_channel; i++) {
data_int32 = (left[i] + right[i]) >> 1;
if (data_int32 > 32767) {
data_int32 = 32767;
} else if (data_int32 < -32768) {
data_int32 = -32768;
}
out[i] = static_cast<WebRtc_Word16>(data_int32);
}
}
} // namespace
struct AudioChannel {
AudioChannel() {
memset(data, 0, sizeof(data));
}
WebRtc_Word16 data[kSamplesPer32kHzChannel];
};
struct SplitAudioChannel {
SplitAudioChannel() {
memset(low_pass_data, 0, sizeof(low_pass_data));
memset(high_pass_data, 0, sizeof(high_pass_data));
memset(analysis_filter_state1, 0, sizeof(analysis_filter_state1));
memset(analysis_filter_state2, 0, sizeof(analysis_filter_state2));
memset(synthesis_filter_state1, 0, sizeof(synthesis_filter_state1));
memset(synthesis_filter_state2, 0, sizeof(synthesis_filter_state2));
}
WebRtc_Word16 low_pass_data[kSamplesPer16kHzChannel];
WebRtc_Word16 high_pass_data[kSamplesPer16kHzChannel];
WebRtc_Word32 analysis_filter_state1[6];
WebRtc_Word32 analysis_filter_state2[6];
WebRtc_Word32 synthesis_filter_state1[6];
WebRtc_Word32 synthesis_filter_state2[6];
};
// TODO(andrew): check range of input parameters?
AudioBuffer::AudioBuffer(int max_num_channels,
int samples_per_channel)
: max_num_channels_(max_num_channels),
num_channels_(0),
num_mixed_channels_(0),
num_mixed_low_pass_channels_(0),
samples_per_channel_(samples_per_channel),
samples_per_split_channel_(samples_per_channel),
reference_copied_(false),
activity_(AudioFrame::kVadUnknown),
data_(NULL),
channels_(NULL),
split_channels_(NULL),
mixed_low_pass_channels_(NULL),
low_pass_reference_channels_(NULL) {
if (max_num_channels_ > 1) {
channels_ = new AudioChannel[max_num_channels_];
mixed_low_pass_channels_ = new AudioChannel[max_num_channels_];
}
low_pass_reference_channels_ = new AudioChannel[max_num_channels_];
if (samples_per_channel_ == kSamplesPer32kHzChannel) {
split_channels_ = new SplitAudioChannel[max_num_channels_];
samples_per_split_channel_ = kSamplesPer16kHzChannel;
}
}
AudioBuffer::~AudioBuffer() {
if (channels_ != NULL) {
delete [] channels_;
}
if (mixed_low_pass_channels_ != NULL) {
delete [] mixed_low_pass_channels_;
}
if (low_pass_reference_channels_ != NULL) {
delete [] low_pass_reference_channels_;
}
if (split_channels_ != NULL) {
delete [] split_channels_;
}
}
WebRtc_Word16* AudioBuffer::data(int channel) const {
assert(channel >= 0 && channel < num_channels_);
if (data_ != NULL) {
return data_;
}
return channels_[channel].data;
}
WebRtc_Word16* AudioBuffer::low_pass_split_data(int channel) const {
assert(channel >= 0 && channel < num_channels_);
if (split_channels_ == NULL) {
return data(channel);
}
return split_channels_[channel].low_pass_data;
}
WebRtc_Word16* AudioBuffer::high_pass_split_data(int channel) const {
assert(channel >= 0 && channel < num_channels_);
if (split_channels_ == NULL) {
return NULL;
}
return split_channels_[channel].high_pass_data;
}
WebRtc_Word16* AudioBuffer::mixed_low_pass_data(int channel) const {
assert(channel >= 0 && channel < num_mixed_low_pass_channels_);
return mixed_low_pass_channels_[channel].data;
}
WebRtc_Word16* AudioBuffer::low_pass_reference(int channel) const {
assert(channel >= 0 && channel < num_channels_);
if (!reference_copied_) {
return NULL;
}
return low_pass_reference_channels_[channel].data;
}
WebRtc_Word32* AudioBuffer::analysis_filter_state1(int channel) const {
assert(channel >= 0 && channel < num_channels_);
return split_channels_[channel].analysis_filter_state1;
}
WebRtc_Word32* AudioBuffer::analysis_filter_state2(int channel) const {
assert(channel >= 0 && channel < num_channels_);
return split_channels_[channel].analysis_filter_state2;
}
WebRtc_Word32* AudioBuffer::synthesis_filter_state1(int channel) const {
assert(channel >= 0 && channel < num_channels_);
return split_channels_[channel].synthesis_filter_state1;
}
WebRtc_Word32* AudioBuffer::synthesis_filter_state2(int channel) const {
assert(channel >= 0 && channel < num_channels_);
return split_channels_[channel].synthesis_filter_state2;
}
void AudioBuffer::set_activity(AudioFrame::VADActivity activity) {
activity_ = activity;
}
AudioFrame::VADActivity AudioBuffer::activity() {
return activity_;
}
int AudioBuffer::num_channels() const {
return num_channels_;
}
int AudioBuffer::samples_per_channel() const {
return samples_per_channel_;
}
int AudioBuffer::samples_per_split_channel() const {
return samples_per_split_channel_;
}
// TODO(andrew): Do deinterleaving and mixing in one step?
void AudioBuffer::DeinterleaveFrom(AudioFrame* frame) {
assert(frame->_audioChannel <= max_num_channels_);
assert(frame->_payloadDataLengthInSamples == samples_per_channel_);
num_channels_ = frame->_audioChannel;
num_mixed_channels_ = 0;
num_mixed_low_pass_channels_ = 0;
reference_copied_ = false;
activity_ = frame->_vadActivity;
if (num_channels_ == 1) {
// We can get away with a pointer assignment in this case.
data_ = frame->_payloadData;
return;
}
WebRtc_Word16* interleaved = frame->_payloadData;
for (int i = 0; i < num_channels_; i++) {
WebRtc_Word16* deinterleaved = channels_[i].data;
int interleaved_idx = i;
for (int j = 0; j < samples_per_channel_; j++) {
deinterleaved[j] = interleaved[interleaved_idx];
interleaved_idx += num_channels_;
}
}
}
void AudioBuffer::InterleaveTo(AudioFrame* frame) const {
assert(frame->_audioChannel == num_channels_);
assert(frame->_payloadDataLengthInSamples == samples_per_channel_);
frame->_vadActivity = activity_;
if (num_channels_ == 1) {
if (num_mixed_channels_ == 1) {
memcpy(frame->_payloadData,
channels_[0].data,
sizeof(WebRtc_Word16) * samples_per_channel_);
} else {
// These should point to the same buffer in this case.
assert(data_ == frame->_payloadData);
}
return;
}
WebRtc_Word16* interleaved = frame->_payloadData;
for (int i = 0; i < num_channels_; i++) {
WebRtc_Word16* deinterleaved = channels_[i].data;
int interleaved_idx = i;
for (int j = 0; j < samples_per_channel_; j++) {
interleaved[interleaved_idx] = deinterleaved[j];
interleaved_idx += num_channels_;
}
}
}
// TODO(andrew): would be good to support the no-mix case with pointer
// assignment.
// TODO(andrew): handle mixing to multiple channels?
void AudioBuffer::Mix(int num_mixed_channels) {
// We currently only support the stereo to mono case.
assert(num_channels_ == 2);
assert(num_mixed_channels == 1);
StereoToMono(channels_[0].data,
channels_[1].data,
channels_[0].data,
samples_per_channel_);
num_channels_ = num_mixed_channels;
num_mixed_channels_ = num_mixed_channels;
}
void AudioBuffer::CopyAndMixLowPass(int num_mixed_channels) {
// We currently only support the stereo to mono case.
assert(num_channels_ == 2);
assert(num_mixed_channels == 1);
StereoToMono(low_pass_split_data(0),
low_pass_split_data(1),
mixed_low_pass_channels_[0].data,
samples_per_split_channel_);
num_mixed_low_pass_channels_ = num_mixed_channels;
}
void AudioBuffer::CopyLowPassToReference() {
reference_copied_ = true;
for (int i = 0; i < num_channels_; i++) {
memcpy(low_pass_reference_channels_[i].data,
low_pass_split_data(i),
sizeof(WebRtc_Word16) * samples_per_split_channel_);
}
}
} // namespace webrtc

View File

@ -1,71 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_AUDIO_BUFFER_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_AUDIO_BUFFER_H_
#include "module_common_types.h"
#include "typedefs.h"
namespace webrtc {
struct AudioChannel;
struct SplitAudioChannel;
class AudioBuffer {
public:
AudioBuffer(int max_num_channels, int samples_per_channel);
virtual ~AudioBuffer();
int num_channels() const;
int samples_per_channel() const;
int samples_per_split_channel() const;
WebRtc_Word16* data(int channel) const;
WebRtc_Word16* low_pass_split_data(int channel) const;
WebRtc_Word16* high_pass_split_data(int channel) const;
WebRtc_Word16* mixed_low_pass_data(int channel) const;
WebRtc_Word16* low_pass_reference(int channel) const;
WebRtc_Word32* analysis_filter_state1(int channel) const;
WebRtc_Word32* analysis_filter_state2(int channel) const;
WebRtc_Word32* synthesis_filter_state1(int channel) const;
WebRtc_Word32* synthesis_filter_state2(int channel) const;
void set_activity(AudioFrame::VADActivity activity);
AudioFrame::VADActivity activity();
void DeinterleaveFrom(AudioFrame* audioFrame);
void InterleaveTo(AudioFrame* audioFrame) const;
void Mix(int num_mixed_channels);
void CopyAndMixLowPass(int num_mixed_channels);
void CopyLowPassToReference();
private:
const int max_num_channels_;
int num_channels_;
int num_mixed_channels_;
int num_mixed_low_pass_channels_;
const int samples_per_channel_;
int samples_per_split_channel_;
bool reference_copied_;
AudioFrame::VADActivity activity_;
WebRtc_Word16* data_;
// TODO(andrew): use vectors here.
AudioChannel* channels_;
SplitAudioChannel* split_channels_;
// TODO(andrew): improve this, we don't need the full 32 kHz space here.
AudioChannel* mixed_low_pass_channels_;
AudioChannel* low_pass_reference_channels_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_AUDIO_BUFFER_H_
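
The class above is driven only from AudioProcessingImpl, so as orientation, here is a minimal sketch of the intended call pattern. It is not part of the upstream sources: the function name ExampleMixToMono is hypothetical, and it assumes a 10 ms stereo AudioFrame at 16 kHz (160 samples per channel).

#include "audio_buffer.h"           // AudioBuffer
#include "module_common_types.h"    // AudioFrame

void ExampleMixToMono(webrtc::AudioFrame* frame) {
  // Up to two channels, 160 samples per channel (10 ms at 16 kHz).
  webrtc::AudioBuffer capture(2, 160);
  capture.DeinterleaveFrom(frame);       // split interleaved L/R samples per channel
  if (capture.num_channels() == 2) {
    capture.Mix(1);                      // stereo -> mono via StereoToMono()
  }
  // ...per-channel processing on capture.data(0) would go here...
  frame->_audioChannel = capture.num_channels();
  capture.InterleaveTo(frame);           // write the (possibly mixed) result back
}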

View File

@ -1,130 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'variables': {
'protoc_out_dir': '<(SHARED_INTERMEDIATE_DIR)/protoc_out',
'protoc_out_relpath': 'webrtc/audio_processing',
},
'targets': [
{
'target_name': 'audio_processing',
'type': '<(library)',
'conditions': [
['prefer_fixed_point==1', {
'dependencies': ['ns_fix'],
'defines': ['WEBRTC_NS_FIXED'],
}, {
'dependencies': ['ns'],
'defines': ['WEBRTC_NS_FLOAT'],
}],
['build_with_chromium==1', {
'dependencies': [
'<(webrtc_root)/../protobuf/protobuf.gyp:protobuf_lite',
],
}, {
'dependencies': [
'<(webrtc_root)/../third_party/protobuf/protobuf.gyp:protobuf_lite',
],
}],
],
'dependencies': [
'debug_proto',
'aec',
'aecm',
'agc',
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
'<(webrtc_root)/common_audio/common_audio.gyp:vad',
'<(webrtc_root)/system_wrappers/source/system_wrappers.gyp:system_wrappers',
],
'include_dirs': [
'interface',
'../interface',
'<(protoc_out_dir)',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
'../interface',
],
},
'sources': [
'interface/audio_processing.h',
'audio_buffer.cc',
'audio_buffer.h',
'audio_processing_impl.cc',
'audio_processing_impl.h',
'echo_cancellation_impl.cc',
'echo_cancellation_impl.h',
'echo_control_mobile_impl.cc',
'echo_control_mobile_impl.h',
'gain_control_impl.cc',
'gain_control_impl.h',
'high_pass_filter_impl.cc',
'high_pass_filter_impl.h',
'level_estimator_impl.cc',
'level_estimator_impl.h',
'noise_suppression_impl.cc',
'noise_suppression_impl.h',
'splitting_filter.cc',
'splitting_filter.h',
'processing_component.cc',
'processing_component.h',
'voice_detection_impl.cc',
'voice_detection_impl.h',
'<(protoc_out_dir)/<(protoc_out_relpath)/debug.pb.cc',
'<(protoc_out_dir)/<(protoc_out_relpath)/debug.pb.h',
],
},
{
# Protobuf compiler / generate rule for audio_processing
'target_name': 'debug_proto',
'type': 'none',
'variables': {
'proto_relpath': '<(webrtc_root)/modules/audio_processing',
},
'sources': [
'<(proto_relpath)/debug.proto',
],
'rules': [
{
'rule_name': 'genproto',
'extension': 'proto',
'inputs': [
'<(PRODUCT_DIR)/<(EXECUTABLE_PREFIX)protoc<(EXECUTABLE_SUFFIX)',
],
'outputs': [
'<(protoc_out_dir)/<(protoc_out_relpath)/<(RULE_INPUT_ROOT).pb.cc',
'<(protoc_out_dir)/<(protoc_out_relpath)/<(RULE_INPUT_ROOT).pb.h',
],
'action': [
'<(PRODUCT_DIR)/<(EXECUTABLE_PREFIX)protoc<(EXECUTABLE_SUFFIX)',
'--proto_path=<(proto_relpath)',
'<(proto_relpath)/<(RULE_INPUT_NAME)',
'--cpp_out=<(protoc_out_dir)/<(protoc_out_relpath)',
],
'message': 'Generating C++ code from <(RULE_INPUT_PATH)',
},
],
'conditions': [
['build_with_chromium==1', {
'dependencies': [
'<(webrtc_root)/../protobuf/protobuf.gyp:protoc#host',
],
}, {
'dependencies': [
'<(webrtc_root)/../third_party/protobuf/protobuf.gyp:protoc#host',
],
}],
],
# This target exports a hard dependency because it generates header
# files.
'hard_dependency': 1,
},
],
}

View File

@ -1,673 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "audio_processing_impl.h"
#include <assert.h>
#include "audio_buffer.h"
#include "critical_section_wrapper.h"
#include "echo_cancellation_impl.h"
#include "echo_control_mobile_impl.h"
#ifndef NDEBUG
#include "file_wrapper.h"
#endif
#include "high_pass_filter_impl.h"
#include "gain_control_impl.h"
#include "level_estimator_impl.h"
#include "module_common_types.h"
#include "noise_suppression_impl.h"
#include "processing_component.h"
#include "splitting_filter.h"
#include "voice_detection_impl.h"
#ifndef NDEBUG
#ifdef WEBRTC_ANDROID
#include "external/webrtc/src/modules/audio_processing/main/source/debug.pb.h"
#else
#include "webrtc/audio_processing/debug.pb.h"
#endif
#endif /* NDEBUG */
namespace webrtc {
AudioProcessing* AudioProcessing::Create(int id) {
/*WEBRTC_TRACE(webrtc::kTraceModuleCall,
webrtc::kTraceAudioProcessing,
id,
"AudioProcessing::Create()");*/
AudioProcessingImpl* apm = new AudioProcessingImpl(id);
if (apm->Initialize() != kNoError) {
delete apm;
apm = NULL;
}
return apm;
}
void AudioProcessing::Destroy(AudioProcessing* apm) {
delete static_cast<AudioProcessingImpl*>(apm);
}
AudioProcessingImpl::AudioProcessingImpl(int id)
: id_(id),
echo_cancellation_(NULL),
echo_control_mobile_(NULL),
gain_control_(NULL),
high_pass_filter_(NULL),
level_estimator_(NULL),
noise_suppression_(NULL),
voice_detection_(NULL),
#ifndef NDEBUG
debug_file_(FileWrapper::Create()),
event_msg_(new audioproc::Event()),
#endif
crit_(CriticalSectionWrapper::CreateCriticalSection()),
render_audio_(NULL),
capture_audio_(NULL),
sample_rate_hz_(kSampleRate16kHz),
split_sample_rate_hz_(kSampleRate16kHz),
samples_per_channel_(sample_rate_hz_ / 100),
stream_delay_ms_(0),
was_stream_delay_set_(false),
num_reverse_channels_(1),
num_input_channels_(1),
num_output_channels_(1) {
echo_cancellation_ = new EchoCancellationImpl(this);
component_list_.push_back(echo_cancellation_);
echo_control_mobile_ = new EchoControlMobileImpl(this);
component_list_.push_back(echo_control_mobile_);
gain_control_ = new GainControlImpl(this);
component_list_.push_back(gain_control_);
high_pass_filter_ = new HighPassFilterImpl(this);
component_list_.push_back(high_pass_filter_);
level_estimator_ = new LevelEstimatorImpl(this);
component_list_.push_back(level_estimator_);
noise_suppression_ = new NoiseSuppressionImpl(this);
component_list_.push_back(noise_suppression_);
voice_detection_ = new VoiceDetectionImpl(this);
component_list_.push_back(voice_detection_);
}
AudioProcessingImpl::~AudioProcessingImpl() {
while (!component_list_.empty()) {
ProcessingComponent* component = component_list_.front();
component->Destroy();
delete component;
component_list_.pop_front();
}
#ifndef NDEBUG
if (debug_file_->Open()) {
debug_file_->CloseFile();
}
delete debug_file_;
debug_file_ = NULL;
delete event_msg_;
event_msg_ = NULL;
#endif
delete crit_;
crit_ = NULL;
if (render_audio_) {
delete render_audio_;
render_audio_ = NULL;
}
if (capture_audio_) {
delete capture_audio_;
capture_audio_ = NULL;
}
}
CriticalSectionWrapper* AudioProcessingImpl::crit() const {
return crit_;
}
int AudioProcessingImpl::split_sample_rate_hz() const {
return split_sample_rate_hz_;
}
int AudioProcessingImpl::Initialize() {
CriticalSectionScoped crit_scoped(*crit_);
return InitializeLocked();
}
int AudioProcessingImpl::InitializeLocked() {
if (render_audio_ != NULL) {
delete render_audio_;
render_audio_ = NULL;
}
if (capture_audio_ != NULL) {
delete capture_audio_;
capture_audio_ = NULL;
}
render_audio_ = new AudioBuffer(num_reverse_channels_,
samples_per_channel_);
capture_audio_ = new AudioBuffer(num_input_channels_,
samples_per_channel_);
was_stream_delay_set_ = false;
// Initialize all components.
std::list<ProcessingComponent*>::iterator it;
for (it = component_list_.begin(); it != component_list_.end(); it++) {
int err = (*it)->Initialize();
if (err != kNoError) {
return err;
}
}
#ifndef NDEBUG
if (debug_file_->Open()) {
int err = WriteInitMessage();
if (err != kNoError) {
return err;
}
}
#endif
return kNoError;
}
int AudioProcessingImpl::set_sample_rate_hz(int rate) {
CriticalSectionScoped crit_scoped(*crit_);
if (rate != kSampleRate8kHz &&
rate != kSampleRate16kHz &&
rate != kSampleRate32kHz) {
return kBadParameterError;
}
sample_rate_hz_ = rate;
samples_per_channel_ = rate / 100;
if (sample_rate_hz_ == kSampleRate32kHz) {
split_sample_rate_hz_ = kSampleRate16kHz;
} else {
split_sample_rate_hz_ = sample_rate_hz_;
}
return InitializeLocked();
}
int AudioProcessingImpl::sample_rate_hz() const {
return sample_rate_hz_;
}
int AudioProcessingImpl::set_num_reverse_channels(int channels) {
CriticalSectionScoped crit_scoped(*crit_);
// Only stereo supported currently.
if (channels > 2 || channels < 1) {
return kBadParameterError;
}
num_reverse_channels_ = channels;
return InitializeLocked();
}
int AudioProcessingImpl::num_reverse_channels() const {
return num_reverse_channels_;
}
int AudioProcessingImpl::set_num_channels(
int input_channels,
int output_channels) {
CriticalSectionScoped crit_scoped(*crit_);
if (output_channels > input_channels) {
return kBadParameterError;
}
// Only stereo supported currently.
if (input_channels > 2 || input_channels < 1) {
return kBadParameterError;
}
if (output_channels > 2 || output_channels < 1) {
return kBadParameterError;
}
num_input_channels_ = input_channels;
num_output_channels_ = output_channels;
return InitializeLocked();
}
int AudioProcessingImpl::num_input_channels() const {
return num_input_channels_;
}
int AudioProcessingImpl::num_output_channels() const {
return num_output_channels_;
}
int AudioProcessingImpl::ProcessStream(AudioFrame* frame) {
CriticalSectionScoped crit_scoped(*crit_);
int err = kNoError;
if (frame == NULL) {
return kNullPointerError;
}
if (frame->_frequencyInHz != sample_rate_hz_) {
return kBadSampleRateError;
}
if (frame->_audioChannel != num_input_channels_) {
return kBadNumberChannelsError;
}
if (frame->_payloadDataLengthInSamples != samples_per_channel_) {
return kBadDataLengthError;
}
#ifndef NDEBUG
if (debug_file_->Open()) {
event_msg_->set_type(audioproc::Event::STREAM);
audioproc::Stream* msg = event_msg_->mutable_stream();
const size_t data_size = sizeof(WebRtc_Word16) *
frame->_payloadDataLengthInSamples *
frame->_audioChannel;
msg->set_input_data(frame->_payloadData, data_size);
msg->set_delay(stream_delay_ms_);
msg->set_drift(echo_cancellation_->stream_drift_samples());
msg->set_level(gain_control_->stream_analog_level());
}
#endif
capture_audio_->DeinterleaveFrom(frame);
// TODO(ajm): experiment with mixing and AEC placement.
if (num_output_channels_ < num_input_channels_) {
capture_audio_->Mix(num_output_channels_);
frame->_audioChannel = num_output_channels_;
}
if (sample_rate_hz_ == kSampleRate32kHz) {
for (int i = 0; i < num_input_channels_; i++) {
// Split into a low and high band.
SplittingFilterAnalysis(capture_audio_->data(i),
capture_audio_->low_pass_split_data(i),
capture_audio_->high_pass_split_data(i),
capture_audio_->analysis_filter_state1(i),
capture_audio_->analysis_filter_state2(i));
}
}
err = high_pass_filter_->ProcessCaptureAudio(capture_audio_);
if (err != kNoError) {
return err;
}
err = gain_control_->AnalyzeCaptureAudio(capture_audio_);
if (err != kNoError) {
return err;
}
err = echo_cancellation_->ProcessCaptureAudio(capture_audio_);
if (err != kNoError) {
return err;
}
if (echo_control_mobile_->is_enabled() &&
noise_suppression_->is_enabled()) {
capture_audio_->CopyLowPassToReference();
}
err = noise_suppression_->ProcessCaptureAudio(capture_audio_);
if (err != kNoError) {
return err;
}
err = echo_control_mobile_->ProcessCaptureAudio(capture_audio_);
if (err != kNoError) {
return err;
}
err = voice_detection_->ProcessCaptureAudio(capture_audio_);
if (err != kNoError) {
return err;
}
err = gain_control_->ProcessCaptureAudio(capture_audio_);
if (err != kNoError) {
return err;
}
//err = level_estimator_->ProcessCaptureAudio(capture_audio_);
//if (err != kNoError) {
// return err;
//}
if (sample_rate_hz_ == kSampleRate32kHz) {
for (int i = 0; i < num_output_channels_; i++) {
// Recombine low and high bands.
SplittingFilterSynthesis(capture_audio_->low_pass_split_data(i),
capture_audio_->high_pass_split_data(i),
capture_audio_->data(i),
capture_audio_->synthesis_filter_state1(i),
capture_audio_->synthesis_filter_state2(i));
}
}
capture_audio_->InterleaveTo(frame);
#ifndef NDEBUG
if (debug_file_->Open()) {
audioproc::Stream* msg = event_msg_->mutable_stream();
const size_t data_size = sizeof(WebRtc_Word16) *
frame->_payloadDataLengthInSamples *
frame->_audioChannel;
msg->set_output_data(frame->_payloadData, data_size);
err = WriteMessageToDebugFile();
if (err != kNoError) {
return err;
}
}
#endif
return kNoError;
}
int AudioProcessingImpl::AnalyzeReverseStream(AudioFrame* frame) {
CriticalSectionScoped crit_scoped(*crit_);
int err = kNoError;
if (frame == NULL) {
return kNullPointerError;
}
if (frame->_frequencyInHz != sample_rate_hz_) {
return kBadSampleRateError;
}
if (frame->_audioChannel != num_reverse_channels_) {
return kBadNumberChannelsError;
}
if (frame->_payloadDataLengthInSamples != samples_per_channel_) {
return kBadDataLengthError;
}
#ifndef NDEBUG
if (debug_file_->Open()) {
event_msg_->set_type(audioproc::Event::REVERSE_STREAM);
audioproc::ReverseStream* msg = event_msg_->mutable_reverse_stream();
const size_t data_size = sizeof(WebRtc_Word16) *
frame->_payloadDataLengthInSamples *
frame->_audioChannel;
msg->set_data(frame->_payloadData, data_size);
err = WriteMessageToDebugFile();
if (err != kNoError) {
return err;
}
}
#endif
render_audio_->DeinterleaveFrom(frame);
// TODO(ajm): turn the splitting filter into a component?
if (sample_rate_hz_ == kSampleRate32kHz) {
for (int i = 0; i < num_reverse_channels_; i++) {
// Split into low and high band.
SplittingFilterAnalysis(render_audio_->data(i),
render_audio_->low_pass_split_data(i),
render_audio_->high_pass_split_data(i),
render_audio_->analysis_filter_state1(i),
render_audio_->analysis_filter_state2(i));
}
}
// TODO(ajm): warnings possible from components?
err = echo_cancellation_->ProcessRenderAudio(render_audio_);
if (err != kNoError) {
return err;
}
err = echo_control_mobile_->ProcessRenderAudio(render_audio_);
if (err != kNoError) {
return err;
}
err = gain_control_->ProcessRenderAudio(render_audio_);
if (err != kNoError) {
return err;
}
//err = level_estimator_->AnalyzeReverseStream(render_audio_);
//if (err != kNoError) {
// return err;
//}
was_stream_delay_set_ = false;
return err; // TODO(ajm): this is for returning warnings; necessary?
}
int AudioProcessingImpl::set_stream_delay_ms(int delay) {
was_stream_delay_set_ = true;
if (delay < 0) {
return kBadParameterError;
}
// TODO(ajm): the max is rather arbitrarily chosen; investigate.
if (delay > 500) {
stream_delay_ms_ = 500;
return kBadStreamParameterWarning;
}
stream_delay_ms_ = delay;
return kNoError;
}
int AudioProcessingImpl::stream_delay_ms() const {
return stream_delay_ms_;
}
bool AudioProcessingImpl::was_stream_delay_set() const {
return was_stream_delay_set_;
}
int AudioProcessingImpl::StartDebugRecording(
const char filename[AudioProcessing::kMaxFilenameSize]) {
#ifndef NDEBUG
CriticalSectionScoped crit_scoped(*crit_);
assert(kMaxFilenameSize == FileWrapper::kMaxFileNameSize);
if (filename == NULL) {
return kNullPointerError;
}
// Stop any ongoing recording.
if (debug_file_->Open()) {
if (debug_file_->CloseFile() == -1) {
return kFileError;
}
}
if (debug_file_->OpenFile(filename, false) == -1) {
debug_file_->CloseFile();
return kFileError;
}
int err = WriteInitMessage();
if (err != kNoError) {
return err;
}
#endif
return kNoError;
}
int AudioProcessingImpl::StopDebugRecording() {
#ifndef NDEBUG
CriticalSectionScoped crit_scoped(*crit_);
// We just return if recording hasn't started.
if (debug_file_->Open()) {
if (debug_file_->CloseFile() == -1) {
return kFileError;
}
}
#endif
return kNoError;
}
EchoCancellation* AudioProcessingImpl::echo_cancellation() const {
return echo_cancellation_;
}
EchoControlMobile* AudioProcessingImpl::echo_control_mobile() const {
return echo_control_mobile_;
}
GainControl* AudioProcessingImpl::gain_control() const {
return gain_control_;
}
HighPassFilter* AudioProcessingImpl::high_pass_filter() const {
return high_pass_filter_;
}
LevelEstimator* AudioProcessingImpl::level_estimator() const {
return level_estimator_;
}
NoiseSuppression* AudioProcessingImpl::noise_suppression() const {
return noise_suppression_;
}
VoiceDetection* AudioProcessingImpl::voice_detection() const {
return voice_detection_;
}
WebRtc_Word32 AudioProcessingImpl::Version(WebRtc_Word8* version,
WebRtc_UWord32& bytes_remaining, WebRtc_UWord32& position) const {
if (version == NULL) {
/*WEBRTC_TRACE(webrtc::kTraceError,
webrtc::kTraceAudioProcessing,
-1,
"Null version pointer");*/
return kNullPointerError;
}
memset(&version[position], 0, bytes_remaining);
char my_version[] = "AudioProcessing 1.0.0";
// Includes null termination.
WebRtc_UWord32 length = static_cast<WebRtc_UWord32>(strlen(my_version));
if (bytes_remaining < length) {
/*WEBRTC_TRACE(webrtc::kTraceError,
webrtc::kTraceAudioProcessing,
-1,
"Buffer of insufficient length");*/
return kBadParameterError;
}
memcpy(&version[position], my_version, length);
bytes_remaining -= length;
position += length;
std::list<ProcessingComponent*>::const_iterator it;
for (it = component_list_.begin(); it != component_list_.end(); it++) {
char component_version[256];
strcpy(component_version, "\n");
int err = (*it)->get_version(&component_version[1],
sizeof(component_version) - 1);
if (err != kNoError) {
return err;
}
if (strncmp(&component_version[1], "\0", 1) == 0) {
// Assume empty if first byte is NULL.
continue;
}
length = static_cast<WebRtc_UWord32>(strlen(component_version));
if (bytes_remaining < length) {
/*WEBRTC_TRACE(webrtc::kTraceError,
webrtc::kTraceAudioProcessing,
-1,
"Buffer of insufficient length");*/
return kBadParameterError;
}
memcpy(&version[position], component_version, length);
bytes_remaining -= length;
position += length;
}
return kNoError;
}
WebRtc_Word32 AudioProcessingImpl::ChangeUniqueId(const WebRtc_Word32 id) {
CriticalSectionScoped crit_scoped(*crit_);
/*WEBRTC_TRACE(webrtc::kTraceModuleCall,
webrtc::kTraceAudioProcessing,
id_,
"ChangeUniqueId(new id = %d)",
id);*/
id_ = id;
return kNoError;
}
#ifndef NDEBUG
int AudioProcessingImpl::WriteMessageToDebugFile() {
int32_t size = event_msg_->ByteSize();
if (size <= 0) {
return kUnspecifiedError;
}
#if defined(WEBRTC_BIG_ENDIAN)
// TODO(ajm): Use little-endian "on the wire". For the moment, we can be
// pretty safe in assuming little-endian.
#endif
if (!event_msg_->SerializeToString(&event_str_)) {
return kUnspecifiedError;
}
// Write message preceded by its size.
if (!debug_file_->Write(&size, sizeof(int32_t))) {
return kFileError;
}
if (!debug_file_->Write(event_str_.data(), event_str_.length())) {
return kFileError;
}
event_msg_->Clear();
return 0;
}
int AudioProcessingImpl::WriteInitMessage() {
event_msg_->set_type(audioproc::Event::INIT);
audioproc::Init* msg = event_msg_->mutable_init();
msg->set_sample_rate(sample_rate_hz_);
msg->set_device_sample_rate(echo_cancellation_->device_sample_rate_hz());
msg->set_num_input_channels(num_input_channels_);
msg->set_num_output_channels(num_output_channels_);
msg->set_num_reverse_channels(num_reverse_channels_);
int err = WriteMessageToDebugFile();
if (err != kNoError) {
return err;
}
return kNoError;
}
#endif
} // namespace webrtc
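
For orientation, a minimal sketch of how a client drives the implementation above. It is not part of the diff: ExampleApmUse is a hypothetical name, error handling is elided, and it assumes 10 ms mono frames at 16 kHz plus an Enable(bool) method on the component interfaces declared in audio_processing.h (matching the private overrides in the Impl classes).

#include "audio_processing.h"        // AudioProcessing and component interfaces
#include "module_common_types.h"     // AudioFrame

int ExampleApmUse(webrtc::AudioFrame* render, webrtc::AudioFrame* capture) {
  webrtc::AudioProcessing* apm = webrtc::AudioProcessing::Create(0);
  if (apm == NULL) return -1;
  apm->set_sample_rate_hz(16000);            // 160 samples per 10 ms channel
  apm->set_num_reverse_channels(1);
  apm->set_num_channels(1, 1);               // capture: 1 in, 1 out
  apm->echo_cancellation()->Enable(true);    // assumed public on EchoCancellation
  apm->AnalyzeReverseStream(render);         // far-end audio, analysis only
  apm->set_stream_delay_ms(50);              // render-to-capture delay estimate
  apm->ProcessStream(capture);               // near-end audio, modified in place
  webrtc::AudioProcessing::Destroy(apm);
  return 0;
}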

View File

@ -1,117 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_AUDIO_PROCESSING_IMPL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_AUDIO_PROCESSING_IMPL_H_
#include <list>
#include <string>
#include "audio_processing.h"
namespace webrtc {
namespace audioproc {
class Event;
} // audioproc
class AudioBuffer;
class CriticalSectionWrapper;
class EchoCancellationImpl;
class EchoControlMobileImpl;
class FileWrapper;
class GainControlImpl;
class HighPassFilterImpl;
class LevelEstimatorImpl;
class NoiseSuppressionImpl;
class ProcessingComponent;
class VoiceDetectionImpl;
class AudioProcessingImpl : public AudioProcessing {
public:
enum {
kSampleRate8kHz = 8000,
kSampleRate16kHz = 16000,
kSampleRate32kHz = 32000
};
explicit AudioProcessingImpl(int id);
virtual ~AudioProcessingImpl();
CriticalSectionWrapper* crit() const;
int split_sample_rate_hz() const;
bool was_stream_delay_set() const;
// AudioProcessing methods.
virtual int Initialize();
virtual int InitializeLocked();
virtual int set_sample_rate_hz(int rate);
virtual int sample_rate_hz() const;
virtual int set_num_channels(int input_channels, int output_channels);
virtual int num_input_channels() const;
virtual int num_output_channels() const;
virtual int set_num_reverse_channels(int channels);
virtual int num_reverse_channels() const;
virtual int ProcessStream(AudioFrame* frame);
virtual int AnalyzeReverseStream(AudioFrame* frame);
virtual int set_stream_delay_ms(int delay);
virtual int stream_delay_ms() const;
virtual int StartDebugRecording(const char filename[kMaxFilenameSize]);
virtual int StopDebugRecording();
virtual EchoCancellation* echo_cancellation() const;
virtual EchoControlMobile* echo_control_mobile() const;
virtual GainControl* gain_control() const;
virtual HighPassFilter* high_pass_filter() const;
virtual LevelEstimator* level_estimator() const;
virtual NoiseSuppression* noise_suppression() const;
virtual VoiceDetection* voice_detection() const;
// Module methods.
virtual WebRtc_Word32 Version(WebRtc_Word8* version,
WebRtc_UWord32& remainingBufferInBytes,
WebRtc_UWord32& position) const;
virtual WebRtc_Word32 ChangeUniqueId(const WebRtc_Word32 id);
private:
int WriteMessageToDebugFile();
int WriteInitMessage();
int id_;
EchoCancellationImpl* echo_cancellation_;
EchoControlMobileImpl* echo_control_mobile_;
GainControlImpl* gain_control_;
HighPassFilterImpl* high_pass_filter_;
LevelEstimatorImpl* level_estimator_;
NoiseSuppressionImpl* noise_suppression_;
VoiceDetectionImpl* voice_detection_;
std::list<ProcessingComponent*> component_list_;
FileWrapper* debug_file_;
audioproc::Event* event_msg_; // Protobuf message.
std::string event_str_; // Memory for protobuf serialization.
CriticalSectionWrapper* crit_;
AudioBuffer* render_audio_;
AudioBuffer* capture_audio_;
int sample_rate_hz_;
int split_sample_rate_hz_;
int samples_per_channel_;
int stream_delay_ms_;
bool was_stream_delay_set_;
int num_reverse_channels_;
int num_input_channels_;
int num_output_channels_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_AUDIO_PROCESSING_IMPL_H_

View File

@ -1,37 +0,0 @@
syntax = "proto2";
option optimize_for = LITE_RUNTIME;
package webrtc.audioproc;
message Init {
optional int32 sample_rate = 1;
optional int32 device_sample_rate = 2;
optional int32 num_input_channels = 3;
optional int32 num_output_channels = 4;
optional int32 num_reverse_channels = 5;
}
message ReverseStream {
optional bytes data = 1;
}
message Stream {
optional bytes input_data = 1;
optional bytes output_data = 2;
optional int32 delay = 3;
optional sint32 drift = 4;
optional int32 level = 5;
}
message Event {
enum Type {
INIT = 0;
REVERSE_STREAM = 1;
STREAM = 2;
}
required Type type = 1;
optional Init init = 2;
optional ReverseStream reverse_stream = 3;
optional Stream stream = 4;
}
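
Taken together with WriteMessageToDebugFile() above, the debug dump is a flat sequence of records: a native-endian int32 length followed by that many bytes of a serialized webrtc.audioproc.Event. A hedged reader sketch follows; ReadDebugDump is a hypothetical name, and the include path follows the non-Android branch above.

#include <stdint.h>
#include <cstdio>
#include <string>
#include <vector>
#include "webrtc/audio_processing/debug.pb.h"  // generated from debug.proto

bool ReadDebugDump(const char* path, std::vector<webrtc::audioproc::Event>* out) {
  FILE* f = fopen(path, "rb");
  if (f == NULL) return false;
  int32_t size = 0;
  while (fread(&size, sizeof(size), 1, f) == 1 && size > 0) {
    std::string buf(static_cast<size_t>(size), '\0');
    if (fread(&buf[0], 1, buf.size(), f) != buf.size()) break;  // truncated record
    webrtc::audioproc::Event event;
    if (!event.ParseFromString(buf)) break;                     // protobuf-lite parse
    out->push_back(event);
  }
  fclose(f);
  return true;
}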

View File

@ -1,76 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_ECHO_CANCELLATION_IMPL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_ECHO_CANCELLATION_IMPL_H_
#include "audio_processing.h"
#include "processing_component.h"
namespace webrtc {
class AudioProcessingImpl;
class AudioBuffer;
class EchoCancellationImpl : public EchoCancellation,
public ProcessingComponent {
public:
explicit EchoCancellationImpl(const AudioProcessingImpl* apm);
virtual ~EchoCancellationImpl();
int ProcessRenderAudio(const AudioBuffer* audio);
int ProcessCaptureAudio(AudioBuffer* audio);
// EchoCancellation implementation.
virtual bool is_enabled() const;
virtual int device_sample_rate_hz() const;
virtual int stream_drift_samples() const;
// ProcessingComponent implementation.
virtual int Initialize();
virtual int get_version(char* version, int version_len_bytes) const;
private:
// EchoCancellation implementation.
virtual int Enable(bool enable);
virtual int enable_drift_compensation(bool enable);
virtual bool is_drift_compensation_enabled() const;
virtual int set_device_sample_rate_hz(int rate);
virtual int set_stream_drift_samples(int drift);
virtual int set_suppression_level(SuppressionLevel level);
virtual SuppressionLevel suppression_level() const;
virtual int enable_metrics(bool enable);
virtual bool are_metrics_enabled() const;
virtual bool stream_has_echo() const;
virtual int GetMetrics(Metrics* metrics);
virtual int enable_delay_logging(bool enable);
virtual bool is_delay_logging_enabled() const;
virtual int GetDelayMetrics(int* median, int* std);
// ProcessingComponent implementation.
virtual void* CreateHandle() const;
virtual int InitializeHandle(void* handle) const;
virtual int ConfigureHandle(void* handle) const;
virtual int DestroyHandle(void* handle) const;
virtual int num_handles_required() const;
virtual int GetHandleError(void* handle) const;
const AudioProcessingImpl* apm_;
bool drift_compensation_enabled_;
bool metrics_enabled_;
SuppressionLevel suppression_level_;
int device_sample_rate_hz_;
int stream_drift_samples_;
bool was_stream_drift_set_;
bool stream_has_echo_;
bool delay_logging_enabled_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_ECHO_CANCELLATION_IMPL_H_

View File

@ -1,62 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_ECHO_CONTROL_MOBILE_IMPL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_ECHO_CONTROL_MOBILE_IMPL_H_
#include "audio_processing.h"
#include "processing_component.h"
namespace webrtc {
class AudioProcessingImpl;
class AudioBuffer;
class EchoControlMobileImpl : public EchoControlMobile,
public ProcessingComponent {
public:
explicit EchoControlMobileImpl(const AudioProcessingImpl* apm);
virtual ~EchoControlMobileImpl();
int ProcessRenderAudio(const AudioBuffer* audio);
int ProcessCaptureAudio(AudioBuffer* audio);
// EchoControlMobile implementation.
virtual bool is_enabled() const;
// ProcessingComponent implementation.
virtual int Initialize();
virtual int get_version(char* version, int version_len_bytes) const;
private:
// EchoControlMobile implementation.
virtual int Enable(bool enable);
virtual int set_routing_mode(RoutingMode mode);
virtual RoutingMode routing_mode() const;
virtual int enable_comfort_noise(bool enable);
virtual bool is_comfort_noise_enabled() const;
virtual int SetEchoPath(const void* echo_path, size_t size_bytes);
virtual int GetEchoPath(void* echo_path, size_t size_bytes) const;
// ProcessingComponent implementation.
virtual void* CreateHandle() const;
virtual int InitializeHandle(void* handle) const;
virtual int ConfigureHandle(void* handle) const;
virtual int DestroyHandle(void* handle) const;
virtual int num_handles_required() const;
virtual int GetHandleError(void* handle) const;
const AudioProcessingImpl* apm_;
RoutingMode routing_mode_;
bool comfort_noise_enabled_;
unsigned char* external_echo_path_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_ECHO_CONTROL_MOBILE_IMPL_H_

View File

@ -1,80 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_GAIN_CONTROL_IMPL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_GAIN_CONTROL_IMPL_H_
#include <vector>
#include "audio_processing.h"
#include "processing_component.h"
namespace webrtc {
class AudioProcessingImpl;
class AudioBuffer;
class GainControlImpl : public GainControl,
public ProcessingComponent {
public:
explicit GainControlImpl(const AudioProcessingImpl* apm);
virtual ~GainControlImpl();
int ProcessRenderAudio(AudioBuffer* audio);
int AnalyzeCaptureAudio(AudioBuffer* audio);
int ProcessCaptureAudio(AudioBuffer* audio);
// ProcessingComponent implementation.
virtual int Initialize();
virtual int get_version(char* version, int version_len_bytes) const;
// GainControl implementation.
virtual bool is_enabled() const;
virtual int stream_analog_level();
private:
// GainControl implementation.
virtual int Enable(bool enable);
virtual int set_stream_analog_level(int level);
virtual int set_mode(Mode mode);
virtual Mode mode() const;
virtual int set_target_level_dbfs(int level);
virtual int target_level_dbfs() const;
virtual int set_compression_gain_db(int gain);
virtual int compression_gain_db() const;
virtual int enable_limiter(bool enable);
virtual bool is_limiter_enabled() const;
virtual int set_analog_level_limits(int minimum, int maximum);
virtual int analog_level_minimum() const;
virtual int analog_level_maximum() const;
virtual bool stream_is_saturated() const;
// ProcessingComponent implementation.
virtual void* CreateHandle() const;
virtual int InitializeHandle(void* handle) const;
virtual int ConfigureHandle(void* handle) const;
virtual int DestroyHandle(void* handle) const;
virtual int num_handles_required() const;
virtual int GetHandleError(void* handle) const;
const AudioProcessingImpl* apm_;
Mode mode_;
int minimum_capture_level_;
int maximum_capture_level_;
bool limiter_enabled_;
int target_level_dbfs_;
int compression_gain_db_;
std::vector<int> capture_levels_;
int analog_capture_level_;
bool was_analog_level_set_;
bool stream_is_saturated_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_GAIN_CONTROL_IMPL_H_

View File

@ -1,51 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_HIGH_PASS_FILTER_IMPL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_HIGH_PASS_FILTER_IMPL_H_
#include "audio_processing.h"
#include "processing_component.h"
namespace webrtc {
class AudioProcessingImpl;
class AudioBuffer;
class HighPassFilterImpl : public HighPassFilter,
public ProcessingComponent {
public:
explicit HighPassFilterImpl(const AudioProcessingImpl* apm);
virtual ~HighPassFilterImpl();
int ProcessCaptureAudio(AudioBuffer* audio);
// HighPassFilter implementation.
virtual bool is_enabled() const;
// ProcessingComponent implementation.
virtual int get_version(char* version, int version_len_bytes) const;
private:
// HighPassFilter implementation.
virtual int Enable(bool enable);
// ProcessingComponent implementation.
virtual void* CreateHandle() const;
virtual int InitializeHandle(void* handle) const;
virtual int ConfigureHandle(void* handle) const;
virtual int DestroyHandle(void* handle) const;
virtual int num_handles_required() const;
virtual int GetHandleError(void* handle) const;
const AudioProcessingImpl* apm_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_HIGH_PASS_FILTER_IMPL_H_

View File

@ -1,182 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "level_estimator_impl.h"
#include <cassert>
#include <cstring>
#include "critical_section_wrapper.h"
#include "audio_processing_impl.h"
#include "audio_buffer.h"
// TODO(ajm): implement the underlying level estimator component.
namespace webrtc {
typedef void Handle;
namespace {
/*int EstimateLevel(AudioBuffer* audio, Handle* my_handle) {
assert(audio->samples_per_split_channel() <= 160);
WebRtc_Word16* mixed_data = audio->low_pass_split_data(0);
if (audio->num_channels() > 1) {
audio->CopyAndMixLowPass(1);
mixed_data = audio->mixed_low_pass_data(0);
}
int err = UpdateLvlEst(my_handle,
mixed_data,
audio->samples_per_split_channel());
if (err != AudioProcessing::kNoError) {
return GetHandleError(my_handle);
}
return AudioProcessing::kNoError;
}
int GetMetricsLocal(Handle* my_handle, LevelEstimator::Metrics* metrics) {
level_t levels;
memset(&levels, 0, sizeof(levels));
int err = ExportLevels(my_handle, &levels, 2);
if (err != AudioProcessing::kNoError) {
return err;
}
metrics->signal.instant = levels.instant;
metrics->signal.average = levels.average;
metrics->signal.maximum = levels.max;
metrics->signal.minimum = levels.min;
err = ExportLevels(my_handle, &levels, 1);
if (err != AudioProcessing::kNoError) {
return err;
}
metrics->speech.instant = levels.instant;
metrics->speech.average = levels.average;
metrics->speech.maximum = levels.max;
metrics->speech.minimum = levels.min;
err = ExportLevels(my_handle, &levels, 0);
if (err != AudioProcessing::kNoError) {
return err;
}
metrics->noise.instant = levels.instant;
metrics->noise.average = levels.average;
metrics->noise.maximum = levels.max;
metrics->noise.minimum = levels.min;
return AudioProcessing::kNoError;
}*/
} // namespace
LevelEstimatorImpl::LevelEstimatorImpl(const AudioProcessingImpl* apm)
: ProcessingComponent(apm),
apm_(apm) {}
LevelEstimatorImpl::~LevelEstimatorImpl() {}
int LevelEstimatorImpl::AnalyzeReverseStream(AudioBuffer* /*audio*/) {
return apm_->kUnsupportedComponentError;
/*if (!is_component_enabled()) {
return apm_->kNoError;
}
return EstimateLevel(audio, static_cast<Handle*>(handle(1)));*/
}
int LevelEstimatorImpl::ProcessCaptureAudio(AudioBuffer* /*audio*/) {
return apm_->kUnsupportedComponentError;
/*if (!is_component_enabled()) {
return apm_->kNoError;
}
return EstimateLevel(audio, static_cast<Handle*>(handle(0)));*/
}
int LevelEstimatorImpl::Enable(bool /*enable*/) {
CriticalSectionScoped crit_scoped(*apm_->crit());
return apm_->kUnsupportedComponentError;
//return EnableComponent(enable);
}
bool LevelEstimatorImpl::is_enabled() const {
return is_component_enabled();
}
int LevelEstimatorImpl::GetMetrics(LevelEstimator::Metrics* /*metrics*/,
LevelEstimator::Metrics* /*reverse_metrics*/) {
return apm_->kUnsupportedComponentError;
/*if (!is_component_enabled()) {
return apm_->kNotEnabledError;
}
int err = GetMetricsLocal(static_cast<Handle*>(handle(0)), metrics);
if (err != apm_->kNoError) {
return err;
}
err = GetMetricsLocal(static_cast<Handle*>(handle(1)), reverse_metrics);
if (err != apm_->kNoError) {
return err;
}
return apm_->kNoError;*/
}
int LevelEstimatorImpl::get_version(char* version,
int version_len_bytes) const {
// An empty string is used to indicate no version information.
memset(version, 0, version_len_bytes);
return apm_->kNoError;
}
void* LevelEstimatorImpl::CreateHandle() const {
Handle* handle = NULL;
/*if (CreateLvlEst(&handle) != apm_->kNoError) {
handle = NULL;
} else {
assert(handle != NULL);
}*/
return handle;
}
int LevelEstimatorImpl::DestroyHandle(void* /*handle*/) const {
return apm_->kUnsupportedComponentError;
//return FreeLvlEst(static_cast<Handle*>(handle));
}
int LevelEstimatorImpl::InitializeHandle(void* /*handle*/) const {
return apm_->kUnsupportedComponentError;
/*const double kIntervalSeconds = 1.5;
return InitLvlEst(static_cast<Handle*>(handle),
apm_->sample_rate_hz(),
kIntervalSeconds);*/
}
int LevelEstimatorImpl::ConfigureHandle(void* /*handle*/) const {
return apm_->kUnsupportedComponentError;
//return apm_->kNoError;
}
int LevelEstimatorImpl::num_handles_required() const {
return apm_->kUnsupportedComponentError;
//return 2;
}
int LevelEstimatorImpl::GetHandleError(void* handle) const {
// The component has no detailed errors.
assert(handle != NULL);
return apm_->kUnspecifiedError;
}
} // namespace webrtc

View File

@ -1,53 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_LEVEL_ESTIMATOR_IMPL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_LEVEL_ESTIMATOR_IMPL_H_
#include "audio_processing.h"
#include "processing_component.h"
namespace webrtc {
class AudioProcessingImpl;
class AudioBuffer;
class LevelEstimatorImpl : public LevelEstimator,
public ProcessingComponent {
public:
explicit LevelEstimatorImpl(const AudioProcessingImpl* apm);
virtual ~LevelEstimatorImpl();
int AnalyzeReverseStream(AudioBuffer* audio);
int ProcessCaptureAudio(AudioBuffer* audio);
// LevelEstimator implementation.
virtual bool is_enabled() const;
// ProcessingComponent implementation.
virtual int get_version(char* version, int version_len_bytes) const;
private:
// LevelEstimator implementation.
virtual int Enable(bool enable);
virtual int GetMetrics(Metrics* metrics, Metrics* reverse_metrics);
// ProcessingComponent implementation.
virtual void* CreateHandle() const;
virtual int InitializeHandle(void* handle) const;
virtual int ConfigureHandle(void* handle) const;
virtual int DestroyHandle(void* handle) const;
virtual int num_handles_required() const;
virtual int GetHandleError(void* handle) const;
const AudioProcessingImpl* apm_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_LEVEL_ESTIMATOR_IMPL_H_

View File

@ -1,54 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_NOISE_SUPPRESSION_IMPL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_NOISE_SUPPRESSION_IMPL_H_
#include "audio_processing.h"
#include "processing_component.h"
namespace webrtc {
class AudioProcessingImpl;
class AudioBuffer;
class NoiseSuppressionImpl : public NoiseSuppression,
public ProcessingComponent {
public:
explicit NoiseSuppressionImpl(const AudioProcessingImpl* apm);
virtual ~NoiseSuppressionImpl();
int ProcessCaptureAudio(AudioBuffer* audio);
// NoiseSuppression implementation.
virtual bool is_enabled() const;
// ProcessingComponent implementation.
virtual int get_version(char* version, int version_len_bytes) const;
private:
// NoiseSuppression implementation.
virtual int Enable(bool enable);
virtual int set_level(Level level);
virtual Level level() const;
// ProcessingComponent implementation.
virtual void* CreateHandle() const;
virtual int InitializeHandle(void* handle) const;
virtual int ConfigureHandle(void* handle) const;
virtual int DestroyHandle(void* handle) const;
virtual int num_handles_required() const;
virtual int GetHandleError(void* handle) const;
const AudioProcessingImpl* apm_;
Level level_;
};
} // namespace webrtc
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_MAIN_SOURCE_NOISE_SUPPRESSION_IMPL_H_

View File

@ -1,20 +0,0 @@
noinst_LTLIBRARIES = libns.la libns_fix.la
libns_la_SOURCES = interface/noise_suppression.h \
noise_suppression.c \
windows_private.h \
defines.h \
ns_core.c \
ns_core.h
libns_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility
libns_fix_la_SOURCES = interface/noise_suppression_x.h \
noise_suppression_x.c \
nsx_defines.h \
nsx_core.c \
nsx_core.h
libns_fix_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility

View File

@ -1,124 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_NS_MAIN_INTERFACE_NOISE_SUPPRESSION_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_NS_MAIN_INTERFACE_NOISE_SUPPRESSION_H_
#include "typedefs.h"
typedef struct NsHandleT NsHandle;
#ifdef __cplusplus
extern "C" {
#endif
/*
* This function returns the version number of the code.
*
* Input:
* - version : Pointer to a character array where the version
* info is stored.
* - length : Length of version.
*
* Return value : 0 - Ok
* -1 - Error (probably length is not sufficient)
*/
int WebRtcNs_get_version(char* version, short length);
/*
* This function creates an instance of the noise reduction structure
*
* Input:
* - NS_inst : Pointer to noise reduction instance that should be
* created
*
* Output:
* - NS_inst : Pointer to created noise reduction instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNs_Create(NsHandle** NS_inst);
/*
* This function frees the dynamic memory of a specified Noise Reduction
* instance.
*
* Input:
* - NS_inst : Pointer to NS instance that should be freed
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNs_Free(NsHandle* NS_inst);
/*
* This function initializes a NS instance
*
* Input:
* - NS_inst : Instance that should be initialized
* - fs : sampling frequency
*
* Output:
* - NS_inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNs_Init(NsHandle* NS_inst, WebRtc_UWord32 fs);
/*
* This changes the aggressiveness of the noise suppression method.
*
* Input:
* - NS_inst : Instance that should be initialized
* - mode : 0: Mild, 1: Medium , 2: Aggressive
*
* Output:
* - NS_inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNs_set_policy(NsHandle* NS_inst, int mode);
/*
* This function does Noise Suppression for the inserted speech frame. The
* input and output signals should always be 10ms (80 or 160 samples).
*
* Input
* - NS_inst : NS Instance. Needs to be initiated before call.
* - spframe : Pointer to speech frame buffer for L band
* - spframe_H : Pointer to speech frame buffer for H band
* - fs : sampling frequency
*
* Output:
* - NS_inst : Updated NS instance
* - outframe : Pointer to output frame for L band
* - outframe_H : Pointer to output frame for H band
*
* Return value : 0 - OK
* -1 - Error
*/
int WebRtcNs_Process(NsHandle* NS_inst,
short* spframe,
short* spframe_H,
short* outframe,
short* outframe_H);
#ifdef __cplusplus
}
#endif
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_NS_MAIN_INTERFACE_NOISE_SUPPRESSION_H_
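
A hedged usage sketch of the API documented above, assuming 10 ms mono frames at 16 kHz (160 samples, low band only; the high-band pointers only matter at 32 kHz). ExampleNsUse is a hypothetical name.

#include "noise_suppression.h"

int ExampleNsUse(short* frame /* 160 input samples */, short* out /* 160 samples */) {
  NsHandle* ns = NULL;
  short dummy_hb[160] = {0};    // H-band input, unused below 32 kHz
  short dummy_out_hb[160];      // H-band output, unused below 32 kHz
  if (WebRtcNs_Create(&ns) != 0) return -1;
  if (WebRtcNs_Init(ns, 16000) != 0 ||        // sampling frequency in Hz
      WebRtcNs_set_policy(ns, 1) != 0) {      // 0: Mild, 1: Medium, 2: Aggressive
    WebRtcNs_Free(ns);
    return -1;
  }
  int err = WebRtcNs_Process(ns, frame, dummy_hb, out, dummy_out_hb);
  WebRtcNs_Free(ns);
  return err;
}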

View File

@ -1,123 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_NS_MAIN_INTERFACE_NOISE_SUPPRESSION_X_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_NS_MAIN_INTERFACE_NOISE_SUPPRESSION_X_H_
#include "typedefs.h"
typedef struct NsxHandleT NsxHandle;
#ifdef __cplusplus
extern "C" {
#endif
/*
* This function returns the version number of the code.
*
* Input:
* - version : Pointer to a character array where the version
* info is stored.
* - length : Length of version.
*
* Return value : 0 - Ok
* -1 - Error (probably length is not sufficient)
*/
int WebRtcNsx_get_version(char* version, short length);
/*
* This function creates an instance of the noise reduction structure
*
* Input:
* - nsxInst : Pointer to noise reduction instance that should be
* created
*
* Output:
* - nsxInst : Pointer to created noise reduction instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNsx_Create(NsxHandle** nsxInst);
/*
* This function frees the dynamic memory of a specified Noise Suppression
* instance.
*
* Input:
* - nsxInst : Pointer to NS instance that should be freed
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNsx_Free(NsxHandle* nsxInst);
/*
* This function initializes a NS instance
*
* Input:
* - nsxInst : Instance that should be initialized
* - fs : sampling frequency
*
* Output:
* - nsxInst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNsx_Init(NsxHandle* nsxInst, WebRtc_UWord32 fs);
/*
* This changes the aggressiveness of the noise suppression method.
*
* Input:
* - nsxInst : Instance that should be initialized
* - mode : 0: Mild, 1: Medium , 2: Aggressive
*
* Output:
* - nsxInst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcNsx_set_policy(NsxHandle* nsxInst, int mode);
/*
* This function does noise suppression for the inserted speech frame. The
* input and output signals should always be 10ms (80 or 160 samples).
*
* Input
* - nsxInst : NSx instance. Needs to be initiated before call.
* - speechFrame : Pointer to speech frame buffer for L band
* - speechFrameHB : Pointer to speech frame buffer for H band
* - fs : sampling frequency
*
* Output:
* - nsxInst : Updated NSx instance
* - outFrame : Pointer to output frame for L band
* - outFrameHB : Pointer to output frame for H band
*
* Return value : 0 - OK
* -1 - Error
*/
int WebRtcNsx_Process(NsxHandle* nsxInst,
short* speechFrame,
short* speechFrameHB,
short* outFrame,
short* outFrameHB);
#ifdef __cplusplus
}
#endif
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_NS_MAIN_INTERFACE_NOISE_SUPPRESSION_X_H_

View File

@ -1,65 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include <stdlib.h>
#include <string.h>
#include "noise_suppression.h"
#include "ns_core.h"
#include "defines.h"
int WebRtcNs_get_version(char* versionStr, short length) {
const char version[] = "NS 2.2.0";
const short versionLen = (short)strlen(version) + 1; // +1: null-termination
if (versionStr == NULL) {
return -1;
}
if (versionLen > length) {
return -1;
}
strncpy(versionStr, version, versionLen);
return 0;
}
int WebRtcNs_Create(NsHandle** NS_inst) {
*NS_inst = (NsHandle*) malloc(sizeof(NSinst_t));
if (*NS_inst != NULL) {
(*(NSinst_t**)NS_inst)->initFlag = 0;
return 0;
} else {
return -1;
}
}
int WebRtcNs_Free(NsHandle* NS_inst) {
free(NS_inst);
return 0;
}
int WebRtcNs_Init(NsHandle* NS_inst, WebRtc_UWord32 fs) {
return WebRtcNs_InitCore((NSinst_t*) NS_inst, fs);
}
int WebRtcNs_set_policy(NsHandle* NS_inst, int mode) {
return WebRtcNs_set_policy_core((NSinst_t*) NS_inst, mode);
}
int WebRtcNs_Process(NsHandle* NS_inst, short* spframe, short* spframe_H,
short* outframe, short* outframe_H) {
return WebRtcNs_ProcessCore(
(NSinst_t*) NS_inst, spframe, spframe_H, outframe, outframe_H);
}

View File

@ -1,65 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include <stdlib.h>
#include <string.h>
#include "noise_suppression_x.h"
#include "nsx_core.h"
#include "nsx_defines.h"
int WebRtcNsx_get_version(char* versionStr, short length) {
const char version[] = "NS\t3.1.0";
const short versionLen = (short)strlen(version) + 1; // +1: null-termination
if (versionStr == NULL) {
return -1;
}
if (versionLen > length) {
return -1;
}
strncpy(versionStr, version, versionLen);
return 0;
}
int WebRtcNsx_Create(NsxHandle** nsxInst) {
*nsxInst = (NsxHandle*)malloc(sizeof(NsxInst_t));
if (*nsxInst != NULL) {
(*(NsxInst_t**)nsxInst)->initFlag = 0;
return 0;
} else {
return -1;
}
}
int WebRtcNsx_Free(NsxHandle* nsxInst) {
free(nsxInst);
return 0;
}
int WebRtcNsx_Init(NsxHandle* nsxInst, WebRtc_UWord32 fs) {
return WebRtcNsx_InitCore((NsxInst_t*)nsxInst, fs);
}
int WebRtcNsx_set_policy(NsxHandle* nsxInst, int mode) {
return WebRtcNsx_set_policy_core((NsxInst_t*)nsxInst, mode);
}
int WebRtcNsx_Process(NsxHandle* nsxInst, short* speechFrame,
short* speechFrameHB, short* outFrame,
short* outFrameHB) {
return WebRtcNsx_ProcessCore(
(NsxInst_t*)nsxInst, speechFrame, speechFrameHB, outFrame, outFrameHB);
}

View File

@ -1,58 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'ns',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
'apm_util'
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/noise_suppression.h',
'noise_suppression.c',
'windows_private.h',
'defines.h',
'ns_core.c',
'ns_core.h',
],
},
{
'target_name': 'ns_fix',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/noise_suppression_x.h',
'noise_suppression_x.c',
'nsx_defines.h',
'nsx_core.c',
'nsx_core.h',
],
},
],
}

Some files were not shown because too many files have changed in this diff.