150 Commits
v0.1 ... v2.0

Author SHA1 Message Date
0743cb5ce5 doc: Document changes for 2.0 2025-01-08 11:58:17 -05:00
971f026d55 doc: Backfill 1.x releases 2025-01-08 11:51:04 -05:00
774ac54e71 meson: Only disable SIMD for non-SSE machines on x86
This ended up disabling SIMD everywhere except where SSE/AVX was
enabled.
2024-12-30 16:21:20 -05:00
54a632f018 meson: Convert the 'neon' option into a feature
Easier to express things, now that runtime is a no-op.
2024-12-30 16:21:08 -05:00
d63a2c9714 meson: pffft: Warn about not having runtime neon checks
The pffft.c file does not have runtime checks for NEON, and silently
falls back to disabling it when the neon option is 'runtime'. Print a
warning in this case.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:19:07 -05:00
73aed233b2 meson: Use neon_opt to control building neon files
Using the have_neon boolean to enable NEON code means we have to either
fully enable it or fully disable it. When using -Dneon=runtime, ideally
only the parts that support runtime checks would be built for NEON, and
those that don't would be built without NEON. Though, there are no
longer any runtime checks for NEON anywhere, so it's equivalent to 'no'
with a warning.

In general, we should use have_* variables to indicate compiler support,
and *_opt options to choose if and how we want to utilize that. Use
neon_opt to control NEON compilation and avoid modifying have_neon which
now would fully refer to compiler support.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:18:44 -05:00
2493b6e659 meson: Raise error for setting 'neon' when unsupported
We can set -Dneon=yes on x86, which will fail during build because the
x86 compiler doesn't understand the resulting `-mfpu=neon` flag. Make the
'neon' build option cause an error in the setup stage if we didn't
detect hardware support for NEON.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:18:00 -05:00
fc5a4946af meson: Set 'auto' as the default neon option value
The default for the neon build option is 'no', which disabled NEON code
for 32-bit ARM but enabled it for ARM64. Now that 'no' can disable NEON
code for ARM64, the default should be 'auto' which would enable it where
possible. Handle the 'auto' value, and set it as the default.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:17:32 -05:00
b7a194f824 meson: Check arm neon support before parsing neon option
The main if statement for the NEON option has been quite convoluted. The
previous commits reduced what it does to a simple case: check NEON
support and set have_neon on 32-bit ARM CPUs. Do that near the
architecture definitions.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:16:43 -05:00
6c914be933 meson: Make -Dneon=no disable neon-specific code
When the neon build option is set to 'no', we should disable optimized
implementations that use NEON. Change have_neon to false in that case,
so that we skip the flags and skip building NEON-specific files.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:16:43 -05:00
8b0255991e meson: Set WEBRTC_HAS_NEON macro after handling neon build option
The WEBRTC_HAS_NEON macro that enables using NEON implementations is
unconditionally set for arm64 and the 'neon' build option is ignored,
assuming we always want to use the NEON-specific implementations instead
of generic ones. This is an OK assumption to make because arm64 CPUs
always support NEON.

But the code handling the build option ended up quite convoluted. As
part of cleaning up, set the relevant cflags after we handle the build
option. This also means that we can make 'runtime' fall back to 'no',
and disable NEON-specific code with -Dneon=no.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:16:43 -05:00
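As an illustration of what that compile-time switch means in practice, here is a minimal sketch (not taken from the WebRTC sources; the function names are made up for the example) of how a WEBRTC_HAS_NEON-style macro selects an optimised path instead of the generic one:

```cpp
// Hypothetical example of compile-time dispatch on WEBRTC_HAS_NEON.
#include <cstddef>
#include <cstdint>

#if defined(WEBRTC_HAS_NEON)
#include <arm_neon.h>
#endif

// Generic fallback, always available.
void HalveGeneric(const int16_t* in, int16_t* out, size_t len) {
  for (size_t i = 0; i < len; ++i)
    out[i] = static_cast<int16_t>(in[i] >> 1);
}

#if defined(WEBRTC_HAS_NEON)
// NEON variant, only compiled when the macro is set (always on arm64,
// or on 32-bit ARM with -Dneon=yes).
void HalveNeon(const int16_t* in, int16_t* out, size_t len) {
  size_t i = 0;
  for (; i + 8 <= len; i += 8)
    vst1q_s16(out + i, vshrq_n_s16(vld1q_s16(in + i), 1));
  for (; i < len; ++i)
    out[i] = static_cast<int16_t>(in[i] >> 1);
}
#endif

void Halve(const int16_t* in, int16_t* out, size_t len) {
#if defined(WEBRTC_HAS_NEON)
  HalveNeon(in, out, len);
#else
  HalveGeneric(in, out, len);  // -Dneon=no (and now 'runtime') ends up here
#endif
}
```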
b22ce018c8 meson: Drop obsolete WEBRTC_DETECT_NEON macro
Upstream dropped runtime checks for NEON instructions around 2016, and
the WEBRTC_DETECT_NEON macro was removed along with them. Disable NEON
when building with -Dneon=runtime and emit a warning instead.

Link: https://webrtc.googlesource.com/src/+/e305d956c0717a28ca88cd8547e5b310dfa74594
Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 14:16:43 -05:00
a1ccd6c700 Track patches to WebRTC code in patches/
This should make it easier to not lose track during the next bump.
2024-12-30 14:05:31 -05:00
297fd4f2ef AECM: MIPS: Use uintptr_t for pointer arithmetic
Trying to compile the MIPS-specific AECM audio processing file for
mips64el on Debian results in the following errors:

  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc: In function ‘int webrtc::WebRtcAecm_ProcessBlock(AecmCore*, const int16_t*, const int16_t*, const int16_t*, int16_t*)’:
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:955:30: error: cast from ‘int16_t*’ {aka ‘short int*’} to ‘uint32_t’ {aka ‘unsigned int’} loses precision [-fpermissive]
    955 |   int16_t* fft = (int16_t*)(((uint32_t)fft_buf + 31) & ~31);
        |                              ^~~~~~~~~~~~~~~~~
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:955:18: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
    955 |   int16_t* fft = (int16_t*)(((uint32_t)fft_buf + 31) & ~31);
        |                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:956:36: error: cast from ‘int32_t*’ {aka ‘int*’} to ‘uint32_t’ {aka ‘unsigned int’} loses precision [-fpermissive]
    956 |   int32_t* echoEst32 = (int32_t*)(((uint32_t)echoEst32_buf + 31) & ~31);
        |                                    ^~~~~~~~~~~~~~~~~~~~~~~
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:956:24: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
    956 |   int32_t* echoEst32 = (int32_t*)(((uint32_t)echoEst32_buf + 31) & ~31);
        |                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:957:40: error: cast from ‘int32_t*’ {aka ‘int*’} to ‘uint32_t’ {aka ‘unsigned int’} loses precision [-fpermissive]
    957 |   ComplexInt16* dfw = (ComplexInt16*)(((uint32_t)dfw_buf + 31) & ~31);
        |                                        ^~~~~~~~~~~~~~~~~
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:957:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
    957 |   ComplexInt16* dfw = (ComplexInt16*)(((uint32_t)dfw_buf + 31) & ~31);
        |                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:958:40: error: cast from ‘int32_t*’ {aka ‘int*’} to ‘uint32_t’ {aka ‘unsigned int’} loses precision [-fpermissive]
    958 |   ComplexInt16* efw = (ComplexInt16*)(((uint32_t)efw_buf + 31) & ~31);
        |                                        ^~~~~~~~~~~~~~~~~
  ../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:958:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
    958 |   ComplexInt16* efw = (ComplexInt16*)(((uint32_t)efw_buf + 31) & ~31);
        |                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Presumably, this file was written for 32-bit MIPS so the author used
uint32_t to do pointer arithmetic over these arrays. Fix the errors by
using uintptr_t to work with pointers.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 18:11:05 +00:00
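The fix boils down to the following cast pattern (a sketch of the change, not the exact upstream diff): round a buffer address up to a 32-byte boundary using uintptr_t, which is pointer-sized on every target, instead of uint32_t, which truncates pointers on 64-bit MIPS.

```cpp
#include <cstdint>

// Round a pointer up to the next 32-byte boundary without truncating it.
int16_t* AlignTo32Bytes(int16_t* buf) {
  return reinterpret_cast<int16_t*>(
      (reinterpret_cast<uintptr_t>(buf) + 31) & ~static_cast<uintptr_t>(31));
}
```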
ecb8817972 Remove some unused files 2024-12-30 12:48:46 -05:00
6119c05d7d meson: Drop WEBRTC_THREAD_RR
The define was dropped upstream a while back and is unused.
2024-12-30 12:20:27 -05:00
2a318149f8 Add support for BSD systems
webrtc/rtc_base/checks.cc:158:28: error: use of undeclared identifier 'LAST_SYSTEM_ERROR'
  158 |                file, line, LAST_SYSTEM_ERROR, message);
      |                            ^
webrtc/rtc_base/checks.cc:220:16: error: use of undeclared identifier 'LAST_SYSTEM_ERROR'
  220 |                LAST_SYSTEM_ERROR);
      |                ^
In file included from webrtc/rtc_base/platform_thread_types.cc:11:
webrtc/rtc_base/platform_thread_types.h:47:1: error: unknown type name 'PlatformThreadId'
   47 | PlatformThreadId CurrentThreadId();
      | ^
webrtc/rtc_base/platform_thread_types.h:52:1: error: unknown type name 'PlatformThreadRef'
   52 | PlatformThreadRef CurrentThreadRef();
      | ^
webrtc/rtc_base/platform_thread_types.h:55:29: error: unknown type name 'PlatformThreadRef'
   55 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b);
      |                             ^
webrtc/rtc_base/platform_thread_types.h:55:57: error: unknown type name 'PlatformThreadRef'
   55 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b);
      |                                                         ^
webrtc/rtc_base/platform_thread_types.cc:37:1: error: unknown type name 'PlatformThreadId'
   37 | PlatformThreadId CurrentThreadId() {
      | ^
webrtc/rtc_base/platform_thread_types.cc:58:1: error: unknown type name 'PlatformThreadRef'
   58 | PlatformThreadRef CurrentThreadRef() {
      | ^
webrtc/rtc_base/platform_thread_types.cc:68:29: error: unknown type name 'PlatformThreadRef'
   68 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b) {
      |                             ^
webrtc/rtc_base/platform_thread_types.cc:68:57: error: unknown type name 'PlatformThreadRef'
   68 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b) {
      |                                                         ^
In file included from webrtc/rtc_base/event_tracer.cc:30:
In file included from webrtc/api/sequence_checker.h:15:
In file included from webrtc/rtc_base/synchronization/sequence_checker_internal.h:18:
webrtc/rtc_base/synchronization/mutex.h:28:2: error: Unsupported platform.
   28 | #error Unsupported platform.
      |  ^
webrtc/rtc_base/synchronization/mutex.h:52:3: error: unknown type name 'MutexImpl'
   52 |   MutexImpl impl_;
      |   ^
2024-12-30 17:18:54 +00:00
4a17c682e9 common_audio: Add MIPS_DSP_R1_LE guard for vector scaling ops
The MIPS-specific source for vector scaling operations fails to build on
Debian's mips64el:

  [97/303] Compiling C object webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o
  FAILED: webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o
  cc [...] webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o.d -o webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o -c ../webrtc/common_audio/signal_processing/vector_scaling_operations_mips.c
  /tmp/cc7UGPkY.s: Assembler messages:
  /tmp/cc7UGPkY.s:57: Error: opcode not supported on this processor: mips64r2 (mips64r2) `extrv_r.w $3,$ac0,$8'
  ninja: build stopped: subcommand failed.

The EXTRV_R.W instruction it uses is part of DSP extensions for this
architecture. In signal_processing_library.h, this function's prototype
is guarded with #if defined(MIPS_DSP_R1_LE). Guard the implementation
like that as well to fix the error.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 12:06:56 -05:00
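A sketch of the shape of the fix (the function name and body are illustrative, not the real vector-scaling code): the DSP-ASE implementation is only compiled when MIPS_DSP_R1_LE is defined, mirroring the guard already present around the prototype in signal_processing_library.h.

```cpp
#include <cstddef>
#include <cstdint>

#if defined(MIPS_DSP_R1_LE)
// Only built when the DSP R1 extensions are available; otherwise the
// plain-C implementation from the generic file is used instead.
int ScaleVectorWithDspAse(const int16_t* in, int16_t* out, size_t length) {
  // ... inline assembly using DSP-ASE instructions such as extrv_r.w ...
  (void)in;
  (void)out;
  (void)length;
  return 0;
}
#endif  // defined(MIPS_DSP_R1_LE)
```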
5252919799 meson: Avoid default AECM implementation on MIPS
Trying to link both aecm/aecm_core_mips.cc and aecm/aecm_core_c.cc into
the same library results in an error because they both try to implement
webrtc::WebRtcAecm_ProcessBlock():

  [306/306] Linking target webrtc/modules/audio_processing/libwebrtc-audio-processing-1.so.3
  FAILED: webrtc/modules/audio_processing/libwebrtc-audio-processing-1.so.3
  c++ @webrtc/modules/audio_processing/libwebrtc-audio-processing-1.so.3.rsp
  /usr/bin/ld: webrtc/modules/audio_processing/libwebrtc-audio-processing-1.so.3.p/aecm_aecm_core_mips.cc.o: in function `webrtc::WebRtcAecm_ProcessBlock(webrtc::AecmCore*, short const*, short const*, short const*, short*)':
  [...]/webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:934: multiple definition of `webrtc::WebRtcAecm_ProcessBlock(webrtc::AecmCore*, short const*, short const*, short const*, short*)'; webrtc/modules/audio_processing/libwebrtc-audio-processing-1.so.3.p/aecm_aecm_core_c.cc.o:[...]/webrtc/modules/audio_processing/aecm/aecm_core_c.cc:377: first defined here
  collect2: error: ld returned 1 exit status
  ninja: build stopped: subcommand failed.

The MIPS-specific file is a replacement for the other, unlike the NEON
case. Don't add the default implementation unconditionally, add it only
for non-MIPS builds.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 12:06:55 -05:00
85556fd38b meson: Drop malformed WEBRTC_ARCH_MIPS_FAMILY cflags argument
The top-level meson.build file adds WEBRTC_ARCH_MIPS_FAMILY to
arch_cflags for mips architectures, which causes the following error:

  [1/306] Compiling C++ object webrtc/rtc_base/liblibbase.a.p/synchronization_yield.cc.o
  FAILED: webrtc/rtc_base/liblibbase.a.p/synchronization_yield.cc.o
  c++ [...] -DWEBRTC_THREAD_RR WEBRTC_ARCH_MIPS_FAMILY -MD [...] ../webrtc/rtc_base/synchronization/yield.cc
  c++: warning: WEBRTC_ARCH_MIPS_FAMILY: linker input file unused because linking not done
  c++: error: WEBRTC_ARCH_MIPS_FAMILY: linker input file not found: No such file or directory

It is supposed to be "-DWEBRTC_ARCH_MIPS_FAMILY". But, that macro is
already defined in arch.h when building for mips:

  [30/306] Compiling C++ object webrtc/system_wrappers/libsystem_wrappers.a.p/source_cpu_features.cc.o
  In file included from ../webrtc/system_wrappers/source/cpu_features.cc:13:
  ../webrtc/rtc_base/system/arch.h:47:9: warning: "WEBRTC_ARCH_MIPS_FAMILY" redefined
     47 | #define WEBRTC_ARCH_MIPS_FAMILY
        |         ^~~~~~~~~~~~~~~~~~~~~~~
  <command-line>: note: this is the location of the previous definition

Drop the broken, unnecessary argument from cflags.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-30 12:06:49 -05:00
da757ff09f Fix MIPS-specific source path
Link: https://sources.debian.org/patches/webrtc-audio-processing/1.0-0.2/fix-mips-source-path.patch
2024-12-30 12:06:46 -05:00
fed81a77c9 Allow disabling inline SSE
Should make building on i686 without SSE feasible.

Fixes: https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/issues/5
2024-12-26 17:35:08 -05:00
c144c53039 Drop WAV-related files
These are only used when WEBRTC_APM_DEBUG_DUMP=1, which we do not set.
Hopefully this makes building on big-endian machines possible.

Fixes: https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/issues/31
2024-12-26 13:07:40 -05:00
0d4c5f27b5 build: Bump version to 2.0 2024-12-26 12:55:16 -05:00
9b194c2d99 ci: Switch CI over to Fedora
Ubuntu/Debian absl are currently lagging. We should switch back to
Debian once recent abseil lands in unstable/testing.
2024-12-26 12:55:16 -05:00
d090f7a927 ci: Bump iOS version to 11.0
This is needed for std::optional to be usable (specifically for .value()
to be available).
2024-12-26 12:55:16 -05:00
ad563b095c Fix up XMM intrinsics usage on MSVC
Reapplying 0a0050746b after the M131 bump.
2024-12-26 12:55:16 -05:00
b5c48b97f6 Bump to WebRTC M131 release
Ongoing fixes and improvements, transient suppressor is gone. Also,
dropping isac because it doesn't seem to be useful, and is just build
system deadweight now.

Upstream references:

  Version: 131.0.6778.200
  WebRTC: 79aff54b0fa9238ce3518dd9eaf9610cd6f22e82
  Chromium: 2a19506ad24af755f2a215a4c61f775393e0db42
2024-12-26 12:55:16 -05:00
8bdb53d91c meson: Update abseil wrap to 20240722
We need something recent enough with the stringify API for the M131 bump.
2024-12-26 12:55:12 -05:00
ced0ff6765 ci: Fix up workflow rules for MR vs. branch pipelines 2024-12-26 09:27:14 -05:00
1ff1a0b860 Decode base64-encoded third-party files
Some files were committed into the repository as base64 encoded files.
Presumably, this is because the "text" download links on Google's
Gitiles web interface sends them as such. These can be found by running
`git grep "^[[:alnum:]]\{128,\}=*$"`. Decode them with `base64 -d`.

Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
2024-12-24 19:42:56 -05:00
867e2d875b Add a trivial example to run AEC offline
Just allows for some sanity testing for now; will improve configurability
and add some sample data in the future.
2024-12-24 18:57:37 -05:00
a729ccfe0f ci: Bump to Ubuntu 24.04 2024-12-24 18:57:37 -05:00
4c81b31652 ci: Bump Windows to cpp_std=c++20
Required for designated initializers.
2024-12-24 12:02:19 -05:00
0a0050746b Fix up XMM intrinsics usage on MSVC 2024-12-24 12:02:19 -05:00
06157f1659 build: Use Visual Studio-specific flags for AVX
Needed for now, but unstable-simd is likely a better fix for all our
SIMD building.
2024-12-24 12:02:19 -05:00
c6abf6cd3f Bump to WebRTC M120 release
Some API deprecation -- ExperimentalAgc and ExperimentalNs are gone.
We're continuing to carry iSAC even though it's gone upstream, but maybe
we'll want to drop that soon.
2024-12-24 11:05:39 -05:00
9a202fb8c2 file_wrapper.h: Fix build with GCC13
It is a missed instance of cdec109331 (!31).

Fixes #32
2024-04-04 18:32:39 -03:00
a949f1de2d build: Appease MSYS2 UCRT64 GCC 13
Undefining this macro makes GCC in standards C++ mode very unhappy:

In file included from D:/msys64/ucrt64/include/c++/13.2.0/bits/requires_hosted.h:31,
                 from D:/msys64/ucrt64/include/c++/13.2.0/string:38,
                 from ..\subprojects\webrtc-audio-processing\webrtc/rtc_base/system/file_wrapper.h:17,
                 from ../subprojects/webrtc-audio-processing/webrtc/rtc_base/system/file_wrapper.cc:11:
D:/msys64/ucrt64/include/c++/13.2.0/x86_64-w64-mingw32/bits/c++config.h:666:2: warning: #warning "__STRICT_ANSI__ seems to have been undefined; this is not supported" [-Wcpp]
  666 | #warning "__STRICT_ANSI__ seems to have been undefined; this is not supported"
      |  ^~~~~~~

See: https://github.com/fmtlib/fmt/issues/2059#issue-761209068

See: #32
2024-03-23 16:34:50 -03:00
f89958d824 Bring arch.h in line with upstream webrtc
Largely to bring in preprocessor support for additional architectures as
based on 6215ba804eb500f3e28b39088c73af3c4f4cd10a by
Timothy Gu <timothygu99@gmail.com>:

Add preprocessor support for additional architectures

- _M_ARM is used by Microsoft [1]
- __riscv and __riscv_xlen are defined by [2]
- __sparc and __sparc__ are documented at [3]
- __MIPSEB__, __PPC__, __PPC64__ are documented at [3] and used in
  Chromium's build/build_config.h [4]
  Note: Chromium assumes that all PowerPC architectures are 64-bit. This
  is in fact not true.

[1]: https://docs.microsoft.com/en-us/cpp/preprocessor/predefined-macros?view=msvc-160
[2]: feca479356 (cc-preprocessor-definitions)
[3]: https://sourceforge.net/p/predef/wiki/Architectures/
[4]: https://source.chromium.org/chromium/chromium/src/+/master:build/build_config.h;drc=e12bf2e5ff1eacb9aca3e9a26bdeebdbdad5965a
2023-11-29 16:59:12 +00:00
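A condensed sketch of this kind of preprocessor detection; the EXAMPLE_ARCH_* names are placeholders rather than the real WEBRTC_ARCH_* macros, while _M_ARM, __riscv, __riscv_xlen, __sparc__, __MIPSEB__, __PPC__ and __PPC64__ are the compiler-provided macros listed above:

```cpp
#if defined(_M_ARM) || defined(__arm__)
#define EXAMPLE_ARCH_ARM_FAMILY
#elif defined(__riscv) && (__riscv_xlen == 64)
#define EXAMPLE_ARCH_RISCV64
#elif defined(__riscv)
#define EXAMPLE_ARCH_RISCV32
#elif defined(__sparc) || defined(__sparc__)
#define EXAMPLE_ARCH_SPARC_FAMILY
#elif defined(__MIPSEB__)
#define EXAMPLE_ARCH_MIPS_BIG_ENDIAN
#elif defined(__PPC64__)
#define EXAMPLE_ARCH_PPC64
#elif defined(__PPC__)
#define EXAMPLE_ARCH_PPC
#else
// Fall through: no architecture-specific macro defined.
#endif
```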
8e258a1933 build: Bump version to 1.3 2023-09-05 11:19:47 -04:00
0691ae20d8 meson: Fix generation of pkgconfig files
Too much information was specified manually. All this is deduced
automatically if you specify the library as the first positional
argument.

Only absl_base needs to be in Requires: because absl_optional's header
file is needed at build time.

Also add a check in the CI for the pc files being usable.
2023-09-05 01:50:51 +05:30
c9b0a675e4 build: Bump version to 1.2 2023-09-01 11:05:31 -04:00
315b2222a8 meson: Update minimum version based on what abseil wrap needs 2023-08-13 17:42:29 -04:00
bc401b3cbf build: Expose absl as a dependency of webrtc-audio-processing
This is needed because the audio processing header references
abseil's optional.h. Clean up the declared dependencies while we're at
it.

Fixes: https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/merge_requests/34
2023-08-13 17:42:29 -04:00
92a4765a7e meson: Update to latest wrap, install required absl headers 2023-06-01 17:46:28 +05:30
c76b8bf340 doc: Update tarball generation process
Use meson dist, include subproject tarballs, and sha256sum files are
also autogenerated.
2023-05-25 18:25:51 -04:00
cdec109331 file_utils.h: Fix build with gcc-13
* add missing include as reported by gcc-13:
webrtc/modules/audio_processing/transient/file_utils.cc:11:
../webrtc-audio-processing-1.0/webrtc/modules/audio_processing/transient/file_utils.h:36:35: error: 'uint8_t' does not name a type
   36 | int ConvertByteArrayToFloat(const uint8_t bytes[4], float* out);
      |                                   ^~~~~~~
webrtc/modules/audio_processing/transient/file_utils.h:17:1: note: 'uint8_t' is defined in header '<cstdint>'; did you forget to '#include <cstdint>'?
   16 | #include "rtc_base/system/file_wrapper.h"
  +++ |+#include <cstdint>
   17 |

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
2023-05-25 18:13:04 -04:00
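The class of fix, sketched (the prototype is the one from the error output above): headers that use fixed-width integer types now need to include &lt;cstdint&gt; themselves instead of relying on transitive includes that GCC 13's libstdc++ no longer provides.

```cpp
#include <cstdint>  // the include that was missing from file_utils.h

int ConvertByteArrayToFloat(const uint8_t bytes[4], float* out);
```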
096b0eaed2 meson: Fixes for MSVC build
winsock2.h must be included before windows.h or alternative
definitions of `struct sockaddr` are defined.

```
FAILED: webrtc/rtc_base/liblibbase.a.p/logging.cc.obj
"cl" "-Iwebrtc\rtc_base\liblibbase.a.p" "-Iwebrtc\rtc_base" "-I..\webrtc\rtc_base" "-Iwebrtc" "-I..\webrtc" "-Isubprojects\abseil-cpp-20230125.1" "-I..\subprojects\abseil-cpp-20230125.1" "/MD" "/nologo" "/showIncludes" "/utf-8" "/Zc:__cplusplus" "/W2" "/EHsc" "/std:c++17" "/permissive-" "/O2" "/Zi" "-DWEBRTC_LIBRARY_
IMPL" "-DWEBRTC_ENABLE_SYMBOL_EXPORT" "-DNDEBUG" "-DWEBRTC_WIN" "-D_WIN32" "-U__STRICT_ANSI__" "-D__STDC_FORMAT_MACROS=1" "-DNOMINMAX" "-DWEBRTC_ENABLE_AVX2" "/Fdwebrtc\rtc_base\liblibbase.a.p\logging.cc.pdb" /Fowebrtc/rtc_base/liblibbase.a.p/logging.cc.obj "/c" ../webrtc/rtc_base/logging.cc
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(103): warning C4005: 'AF_IPX': macro redefinition
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um\winsock.h(457): note: see previous definition of 'AF_IPX'
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(147): warning C4005: 'AF_MAX': macro redefinition
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um\winsock.h(476): note: see previous definition of 'AF_MAX'
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(187): warning C4005: 'SO_DONTLINGER': macro redefinition
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um\winsock.h(399): note: see previous definition of 'SO_DONTLINGER'
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(240): error C2011: 'sockaddr': 'struct' type redefinition
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um\winsock.h(482): note: see declaration of 'sockaddr'
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(442): error C2143: syntax error: missing '}' before 'constant'
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(442): error C2059: syntax error: 'constant'
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(496): error C2143: syntax error: missing ';' before '}'
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(496): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared\ws2def.h(496): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
...
```
2023-05-26 03:17:31 +05:30
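The include-order rule the fix enforces, as a minimal sketch: pulling in winsock2.h before windows.h keeps windows.h from dragging in the legacy winsock.h and its conflicting struct sockaddr (the C2011 error above).

```cpp
#if defined(_WIN32)
#include <winsock2.h>  // must come before windows.h
#include <windows.h>   // would otherwise include the old winsock.h
#endif
```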
b24229cbbc meson: Ensure that abseil is built with c++17 too
subprojects do not inherit $lang_std default values from the project.
2023-05-26 03:17:31 +05:30
55239c4ca2 ci: Add jobs for MSVC, macOS, iOS, Android 2023-05-26 03:17:31 +05:30
a47df351ca ci: Bump ubuntu version to 22.10
Contains absl with pkgconfig files.
2023-05-26 03:17:31 +05:30
4125ace620 meson: Fix compatibility with Fedora's abseil-cpp package
1. Fedora abseil-cpp package is built with C++17:
   https://src.fedoraproject.org/rpms/abseil-cpp/blob/rawhide/f/abseil-cpp.spec
2. There is no `absl_types` pkgconfig file, and it's only needed on iOS
2023-05-26 03:16:38 +05:30
aa32d179d0 meson: Update abseil-cpp to latest wrap 2023-05-26 00:45:29 +05:30
9a362bd149 meson: Don't require cross files to set host_system = ios
It's not specified as a host_system by meson, so people will often not
set it.
2023-05-26 00:45:29 +05:30
8366ff0ce0 meson: Get rid of cmake and manual library searching
Simplify fallback, and prefer it. `[provide]` section requires meson
0.55, so require that.

pkg-config lookup is only provided for distros, since they dislike
static linking / vendoring.
2023-05-26 00:45:00 +05:30
ca1186946d build: don't detect neon again when building on aarch64
Otherwise it will try to add -mfpu=neon to cflags, which is not accepted
by the compiler on aarch64 since NEON is mandatory there
2022-05-21 14:10:48 +02:00
26f4493405 build: fix -Dneon=runtime 2022-05-21 14:08:15 +02:00
e31340c243 Add builds for distro and vendored versions of abseil 2021-10-20 11:16:19 -04:00
5a5aa66ada Add an abseil subproject and correctly specify fallback deps 2021-10-20 11:16:18 -04:00
0cc2ebeda2 Add missing absl library for bad_optional_access 2021-10-20 11:15:57 -04:00
6064932abf Add missing header for C++17 compatibility
Hopefully we can drop this change with the next update.
2021-10-19 18:06:37 -04:00
8bf9efad15 Use pkg-config for abseil-cpp detection if available
This should make things a bit easier.
2021-10-19 18:06:37 -04:00
ff85c98683 Some fixes for MinGW
* Rename Windows.h uses to windows.h
  * Comment out structured exception handling usage

Makes MinGW happier. Mostly the same as previous work by
Nicolas Dufresne <nicolas.dufresne@collabora.com>, with the exception
that we now don't try to invoke RaiseException which would fail in MinGW
as it raises a Windows structured exception.
2021-10-19 16:09:07 -04:00
57ec282d4f Remove rnn_vad_tool.cc that contains main(). 2021-09-08 12:21:50 +00:00
6e37f37c4e build: Split out iSAC VAD sources into a separate dependency
Avoids having to link webrtc-audio-processing with webrtc-audio-coding,
and makes the required symbols directly available.

Part-of: <https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/merge_requests/22>
2021-06-19 13:06:12 -04:00
b8ad0dfc22 build: Add framework deps on macOS and iOS
Part-of: <https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/merge_requests/21>
2021-06-18 23:44:18 -04:00
e47b68df57 arch.h: Add RISC-V support 2021-06-17 01:48:37 +00:00
e74894baeb build: Add library-based absl detection as a fallback
This should help for cases where users can make abseil-cpp available but
wiring up the CMake-build isn't that easy (for example, while
cross-compiling).
2021-06-05 18:37:23 -04:00
589a744585 Fix build on Android
There's a bit of system integration that we haven't pulled in (as it has
transitive dependencies), so we manually stub it out.
2021-06-05 18:16:18 -04:00
8ac052ad6f doc: Add some build instructions to README 2021-02-12 15:44:49 -05:00
b34c1d5746 build: Fix ARM ISA detection
armv7 isn't a real cpu_family in meson, so drop that. The detection for
__ARM_ARCH_ISA_ARM was also inverted.

Fixes: https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/issues/6
2020-12-11 08:16:04 -05:00
3f9907f93d build: Use cmake to look up abseil dependency
This should be much more robust than looking up the library directly.

Fixes: https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/issues/4
2020-12-10 19:20:09 -05:00
ce1a78887a build: Revert top-level project name to not have a prefix
Should make meson dist easier to work with.
2020-12-10 18:24:05 -05:00
8ce8bebb7d build: Bump project version to 1.1 2020-12-10 18:24:05 -05:00
00ae7eb234 doc: Fix up release process
Missed a trailing '/' while generating the archive with disastrous
results.
2020-11-27 16:47:09 -05:00
d353e92425 doc: Fix up links in markdown 2020-11-27 14:50:02 -05:00
6a4d14d5c0 doc: Some minor README cleanups 2020-11-27 14:46:36 -05:00
6a67b5ba7e doc: Add some documentation about the release process 2020-11-27 14:30:53 -05:00
e5402cd638 build: Fix up some ARM-related mistakes 2020-11-27 14:03:06 -05:00
e23c10c5e0 ci: Add an aarch64 build 2020-11-27 13:20:12 -05:00
d938d2cf52 meson: override dependency
Will allow us to build the libs as part of gst-build as subprojects.
2020-10-28 16:03:08 +01:00
593986ec5e ci: Add a gitlab-ci.yml 2020-10-26 14:58:04 -04:00
2fabea79e0 gitignore: Drop autotools-related paths 2020-10-23 13:30:23 -04:00
21d78a4267 build: Make packages versioned
Since we cannot rely on the API to be stable upstream, let's start
making the pkg-config, library, and include dir have a version suffix.
This will allow different downstream projects depending on us to
independently switch versions without packagers having to jump through
hoops.
2020-10-23 13:30:23 -04:00
bcec8b0b21 Update to current webrtc library
This is from the upstream library commit id
3326535126e435f1ba647885ce43a8f0f3d317eb, corresponding to Chromium
88.0.4290.1.
2020-10-23 13:30:23 -04:00
b1b02581d3 gitignore: Add install/ for local prefixed installs 2020-10-20 17:22:19 -04:00
a54ffa1220 Add build directory to gitignore
This is what is expected to commonly be used with the meson build
system.
2020-10-12 11:25:23 -04:00
34efc689c2 add webrtc-audio-coding public library
This new lib contains the bare minimum to implement an iSAC encoder and
decoder.

The webrtc files have been copied from the same revision as the existing
imported files (c8b569e0a7ad0b369e15f0197b3a558699ec8efa).
2020-03-27 14:52:22 +01:00
f13529b5b8 UPDATING: update with meson instructions 2020-03-24 15:00:53 +01:00
f2003f80d1 meson: fix pkgconfig generation
The bug preventing us from passing the library object to
pkgconfig.generate() has been fixed in meson 0.52.

By doing so the generated pc file has the right -L linker flag, making
it easier to test the lib from a non-standard location.
We also no longer have to pass libraries_private, it will handle it
automatically.
2020-03-24 15:00:53 +01:00
301110c655 remove autotools
In Meson we trust.
2020-03-24 14:10:59 +01:00
9def8cf10d Add support for non-Linux GNU
GNU/Hurd and GNU/kFreeBSD have basically the same userland as GNU/Linux,
just not the same kernel.
2019-08-31 23:00:29 +02:00
27e93ee86b build/meson: fix compilation on arm64
The assembly files used don't use the right comments for arm64
2018-11-08 20:56:52 +11:00
682857751b build: Factor out common POSIX flag setting in meson build 2018-10-28 14:57:00 +00:00
b47c302cef build: Fix project() invocation in meson build 2018-10-28 14:56:54 +00:00
eb398328ab Initial meson build files 2018-10-28 23:25:18 +11:00
e882a5442a build: Update version to 0.3.1 2018-07-23 18:28:08 +05:30
ee8cfef49b build: Fix configure option '--with-ns-mode'
Make *really* take '--with-ns-mode'-option into account.
Before it was bogus (wrong if-check) and it always resulted
in the float version being used.

Signed-off-by: Mirko Vogt <mirko-dev@nanl.de>
2017-01-06 10:31:17 +05:30
0d937fbc71 doc: fix invalid reference to pulseaudio mailing list
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
2016-08-10 20:37:31 +05:30
ff77a85c28 build: fix architecture detection
The current architecture detection, based on the "host_cpu" part of the
tuple, does not work properly for a number of reasons:

 - The code assumes that if host_cpu starts with "arm" then ARM
   instructions are available, which is incorrect. Indeed, Cortex-M
   platforms can run Linux, they are ARM platforms (so host_cpu = arm),
   but they don't support ARM instructions: they support only the
   Thumb-2 instruction set.

 - The armv7 case is also not very useful, as it is not standard at all
   to pass armv7 as host_cpu even if the host system is actually ARMv7
   based.

 - For the same reason, the armv8 case is not very useful: ARMv8 is
   AArch64, and there is already a separate case to handle this
   architecture.

So, this commit moves away from a host_cpu based logic, and instead
tests using AC_CHECK_DECLS() the built-in definitions of the compiler:

 - If we have __ARM_ARCH_ISA_ARM defined, then it's an ARM processor
   that supports the ARM instruction set (this allows to exclude Thumb-2
   only processors).

 - If we have __ARM_ARCH_7A__, then we have an ARMv7-A processor, and
   we can enable the corresponding optimizations

 - Same for __aarch64__, __i386__ and __x86_64__.

In addition, we remove the AC_MSG_ERROR() that makes the build fail for
all architectures but the ones that are explicitly supported. Indeed,
webrtc-audio-processing builds just fine for other architectures (tested
on MIPS), it's just that none of the architecture-specific optimizations
will be used.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
2016-08-10 20:37:27 +05:30
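The same checks, sketched as plain preprocessor conditions (the actual change performs them in configure.ac via AC_CHECK_DECLS rather than in a header):

```cpp
#if defined(__ARM_ARCH_ISA_ARM)
// ARM instruction set available; excludes Thumb-2-only cores such as Cortex-M.
#endif

#if defined(__ARM_ARCH_7A__)
// ARMv7-A: the ARMv7 optimisations can be enabled.
#endif

#if defined(__aarch64__)
// 64-bit ARM.
#elif defined(__i386__) || defined(__x86_64__)
// x86 family.
#endif
```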
7d15b10fb0 build: Add ARM 64bit support 2016-07-14 12:57:43 +09:00
7a37a8bca3 build: Re-add pthread linking on linux 2016-07-14 12:57:43 +09:00
b8be6d1095 build: Use -no-undefined to support both clang and gcc 2016-07-14 12:57:43 +09:00
c9cffb9e3d build: Sync defines and libs with build.gn 2016-07-14 12:57:43 +09:00
1378babdf0 osx: Fix typo OS_FLAGS instead of OS_CFLAGS 2016-07-14 12:57:43 +09:00
75ef0de241 build: Protect against unsupported CPU types 2016-07-14 12:57:43 +09:00
6ad2f51e9e Add missing windows condition variables
Those are used by the generic RW lock implementation.

https://bugs.freedesktop.org/show_bug.cgi?id=96754
2016-07-14 12:57:43 +09:00
db2f422578 build: Define MSVC _WIN32 so we can build on mingw
https://bugs.freedesktop.org/show_bug.cgi?id=96754
2016-07-14 12:57:43 +09:00
bf6b9de143 build: Properly select the right system wrappers
This is needed for windows build to be usable.

https://bugs.freedesktop.org/show_bug.cgi?id=96754
2016-07-14 12:57:43 +09:00
12ac8441f7 build: Add required define for Windows
This will also add it to the .pc file, as WEBRTC_WIN leaks into the
public interface, and undefine __STRICT_ANSI__ so that M_PI is available.

https://bugs.freedesktop.org/show_bug.cgi?id=96754
2016-07-14 12:57:43 +09:00
44cf7726ca build: Don't blindly link to pthread
This otherwise breaks the build on Android and Windows. The flag is
required on some Linux builds, and is readded in a subsequent commit.

https://bugs.freedesktop.org/show_bug.cgi?id=96754
2016-07-14 12:57:28 +09:00
560f300a3d build: Add cerbero gnustl support for Android 2016-07-14 12:49:29 +09:00
bf25c45e54 Add missing windows specific headers
https://bugs.freedesktop.org/show_bug.cgi?id=96754
2016-07-14 12:38:42 +09:00
fc0e761394 build: Bump version to 0.3 2016-06-22 12:16:50 +05:30
df47d74bc3 doc: Update NEWS for release 2016-06-22 12:12:10 +05:30
066cf53da7 build: Make sure files with SSE2 code are compiled with -msse2
Signed-off-by: Arun Raghavan <arun@arunraghavan.net>
2016-06-21 16:46:48 +05:30
d58164e4d8 build: enforce linking with --no-undefined, add explicit -lpthread
In investigating x86/sse2 issues in recent webrtc-audio-processing-0.2
release, I found that it was possible for libwebrtc_audio_processing to
contain undefined symbols.

Attached is a patch that addresses this:
* adds -Wl,--no-undefined to libwebrtc_audio_processing_la_LDFLAGS
* adds explicit -lpthread linkage (else, there are undefined references
  to pthread-related symbols)

Signed-off-by: Arun Raghavan <arun@arunraghavan.net>
2016-06-01 10:09:45 +05:30
9a0e28cab0 build: Update library version info 2015-11-04 13:20:05 +05:30
f7c9b269a0 doc: Add release notes about changes and API breakage 2015-11-04 13:15:21 +05:30
34abadd258 Update code to current Chromium master
This corresponds to:

Chromium: 6555f9456074c0c0e5f7713564b978588ac04a5d
webrtc: c8b569e0a7ad0b369e15f0197b3a558699ec8efa
2015-11-04 13:11:30 +05:30
9bc60d3e10 doc: Add a pro-tip to update instructions 2015-11-04 13:11:30 +05:30
5ac4d6aa8a build: Dist ancillary documentation 2015-11-04 13:11:30 +05:30
ff94c386ca build: Install trace.h to allow clients access to the Trace API 2015-11-04 13:11:30 +05:30
1cba3b05b9 doc: Split out and expand on updating notes
Expands on instructions for updating the code when upstream changes.
Also renaming with the '.md' extension for things that understand
Markdown.
2015-11-04 13:11:30 +05:30
66cdc2e923 common_audio: Remove extraneous header
This one is left over from a previous version of the code base.
2015-11-04 13:11:30 +05:30
602b0e5f56 build: Don't install a top level copy of audio_processing.h
If we're breaking API, then clients need to be modified and recompiled
anyway, so we can avoid the cruft of trying to be backwards compatible.

Clients now need to include the file as it is in the upstream sources:
<webrtc/modules/audio_processing/include/audio_processing.h>
2015-11-04 13:11:30 +05:30
360faa363d build: Install module_common_types.h and dependencies
This is needed for at least the AudioFrame class.

Unfortunately, this does add a bit of ugliness because
module_common_types.h has video bits that are hidden behind our own
define, which now becomes part of pkg-config CFLAGS.

This could be made less ugly, potentially, but I'm not sure how right
now.
2015-11-04 13:11:30 +05:30
7771f4d7f3 doc: Add upstream repo URL to README 2015-11-04 13:11:30 +05:30
8d0c073e56 doc: Update README 2015-10-19 11:51:55 +05:30
a6e73f4d94 build: Conditionally build C variants of assembler-optimised code 2015-10-19 11:48:52 +05:30
7d9c65b625 build: Define assembler flags where required 2015-10-19 11:41:13 +05:30
88a8b62f3f build: Define ARM arch preprocessor macros 2015-10-19 11:40:05 +05:30
2f0b9411d3 system_wrappers: Add missing file for ARM builds 2015-10-19 11:32:48 +05:30
d6a338cb01 build: Use CXXFLAGS instead of CFLAGS in compile testing
This is needed since we're using AC_LANG_CPLUSPLUS
2015-10-19 11:29:40 +05:30
e4b1cac207 build: Minor whitespace changes
Makes syntax highlighting in vim unbreak.
2015-10-19 11:24:40 +05:30
e5a6e18f13 Drop redundant header 2015-10-15 16:18:47 +05:30
9d68f7efef build: Fix distcheck 2015-10-15 16:18:47 +05:30
98454ed265 build: Add architecture checks for x86 and ARM
On x86, SSE optimisations are always compiled in, and used based on
runtime checks.

On ARM, we try to autodetect NEON support (with an option of runtime
detection). This has not been build-tested on ARM yet.

This leaves MIPS to be done.
2015-10-15 16:18:47 +05:30
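To illustrate the runtime-check idea on x86 (this sketch uses the GCC/Clang __builtin_cpu_supports builtin for brevity; the project's actual SSE dispatch goes through its own CPU-feature helpers, and the function names here are made up):

```cpp
void ProcessSse2(float* data, int len) { (void)data; (void)len; /* SSE2 path */ }
void ProcessGeneric(float* data, int len) { (void)data; (void)len; /* plain C */ }

void Process(float* data, int len) {
#if defined(__i386__) || defined(__x86_64__)
  if (__builtin_cpu_supports("sse2")) {  // decided at runtime
    ProcessSse2(data, len);
    return;
  }
#endif
  ProcessGeneric(data, len);
}
```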
f6941fbf6a build: Stop hard-coding OS/platform CFLAGS 2015-10-15 16:18:47 +05:30
12e9e1eafd build: Fix up include file paths 2015-10-15 16:18:47 +05:30
9b4e8dc83c debug: Update protobuf file
This isn't used, but let's keep it up to date
2015-10-15 16:18:47 +05:30
926b543a2f build: Drop old gpyi file 2015-10-15 16:18:47 +05:30
7fcd4d2df5 build: More build fixes and cleanups 2015-10-15 16:18:47 +05:30
65a2860806 Update .gitignore for .dirstamp files 2015-10-15 16:18:47 +05:30
e68571d456 build: Some fixes for make distcheck 2015-10-15 16:18:47 +05:30
407bfbf651 build: Make build succeed without test and non-audio deps 2015-10-15 16:18:47 +05:30
753eada3aa Update audio_processing module
Corresponds to upstream commit 524e9b043e7e86fd72353b987c9d5f6a1ebf83e1

Update notes:

 * Pull in third party license file

 * Replace .gypi files with BUILD.gn to keep track of what changes
   upstream

 * Bunch of new files pulled in as dependencies

 * Won't build yet due to changes needed on top of these
2015-10-15 16:18:45 +05:30
5ae7a5d6cd Update system_wrappers
Corresponds to upstream commit 524e9b043e7e86fd72353b987c9d5f6a1ebf83e1
2015-10-15 16:18:39 +05:30
c4fb4e38de Update common_audio
Corresponds to upstream commit 524e9b043e7e86fd72353b987c9d5f6a1ebf83e1

Update notes:

 * Moved src/ to webrtc/ to easily diff against the third_party/webrtc
   in the chromium tree

 * ARM/NEON/MIPS support is not yet hooked up

 * Tests have not been copied
2015-10-15 16:18:25 +05:30
814 changed files with 106218 additions and 38177 deletions

.gitignore (vendored)

@@ -1,30 +1,11 @@
 *.o
-*.lo
-*.la
 *.pc
 .*.swp
 *~
 .deps*
-.dirstamp
 .libs*
-Makefile
+build/
-Makefile.in
-aclocal.m4
-autom4te.cache
-compile
-config.guess
-config.h
-config.h.in
-config.log
-config.rpath
-config.status
-config.sub
-configure
 depcomp
-install-sh
+install/
-libltdl
+subprojects/*/
-libtool
-ltmain.sh
-missing
-mkinstalldirs
-stamp-*

.gitlab-ci.yml (new file)

@@ -0,0 +1,299 @@
# The build has two stages. The 'container' stage is used to build a Docker
# container and push it to the project's container registry on fd.o GitLab.
# This step is only run when the tag for the container changes, else it is
# effectively a no-op. All of this infrastructure is inherited from the
# wayland/ci-templates repository which is the recommended way to set up CI
# infrastructure on fd.o GitLab.
#
# Once the container stage is done, we move on to the 'build' stage where we
# run an autotools and meson build in parallel. Currently, tests are also run
# as part of the build stage as there doesn't seem to be significant value to
# splitting the stages at the moment.

# Create merge request pipelines for open merge requests, branch pipelines
# otherwise. This allows MRs for new users to run CI, and prevents duplicate
# pipelines for branches with open MRs.
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    - if: $CI_COMMIT_BRANCH

stages:
  - container
  - build

variables:
  # Update this tag when you want to trigger a rebuild of the container in which
  # CI runs, for example when adding new packages to FDO_DISTRIBUTION_PACKAGES.
  # The tag is an arbitrary string that identifies the exact container
  # contents.
  BASE_TAG: '2024-12-26.0'
  FDO_DISTRIBUTION_VERSION: '41'
  FDO_UPSTREAM_REPO: 'pulseaudio/webrtc-audio-processing'

include:
  # We pull templates from master to avoid the overhead of periodically
  # scanning for changes upstream. This does mean builds might occasionally
  # break due to upstream changing things, so if you see unexpected build
  # failures, this might be one cause.
  - project: 'freedesktop/ci-templates'
    ref: 'master'
    file: '/templates/fedora.yml'

# Common container build template
.fedora-container-build:
  variables:
    GIT_STRATEGY: none # no need to pull the whole tree for rebuilding the image
    # Remember to update FDO_DISTRIBUTION_TAG when modifying this package list!
    # Otherwise the changes won't have effect since an old container image will
    # be used.
    FDO_DISTRIBUTION_PACKAGES: >-
      ca-certificates
      g++
      gcc
      git-core
      cmake
      abseil-cpp-devel
      meson
      ninja-build
      pkg-config
      python3-setuptools

# Used to extend both container and build jobs
.fedora-x86_64:
  variables:
    FDO_DISTRIBUTION_TAG: "x86_64-$BASE_TAG"

# Used to extend both container and build jobs
.fedora-aarch64:
  tags:
    - aarch64
  variables:
    FDO_DISTRIBUTION_TAG: "aarch64-$BASE_TAG"

build-container-x86_64:
  extends:
    - .fdo.container-build@fedora@x86_64
    - .fedora-container-build
    - .fedora-x86_64
  stage: container

build-container-aarch64:
  extends:
    - .fdo.container-build@fedora@aarch64
    - .fedora-container-build
    - .fedora-aarch64
  stage: container

# Common build template
.build-distro-absl:
  stage: build
  extends:
    - .fdo.distribution-image@fedora
  script:
    - meson setup --wrap-mode=nofallback --prefix=/usr --libdir=lib builddir
    - ninja -C builddir
    - DESTDIR=$PWD/_install ninja install -C builddir
    # Test that the pc files are usable
    - PKG_CONFIG_PATH=$PWD/_install/usr/lib/pkgconfig pkg-config --cflags --libs webrtc-audio-processing-2
  artifacts:
    expire_in: '5 days'
    when: 'always'
    paths:
      - "builddir/meson-logs/*txt"

.build-vendored-absl:
  stage: build
  extends:
    - .fdo.distribution-image@fedora
  script:
    - meson setup --force-fallback-for=abseil-cpp --prefix=/usr --libdir=lib builddir
    - ninja -C builddir
    - DESTDIR=$PWD/_install ninja install -C builddir
    # Test that the pc files are usable
    - PKG_CONFIG_LIBDIR=$PWD/_install/usr/lib/pkgconfig pkg-config --cflags --libs webrtc-audio-processing-2
  artifacts:
    expire_in: '5 days'
    when: 'always'
    paths:
      - "builddir/meson-logs/*txt"

build-distro-absl-x86_64:
  extends:
    - .build-distro-absl
    - .fedora-x86_64

build-vendored-absl-x86_64:
  extends:
    - .build-vendored-absl
    - .fedora-x86_64

build-distro-absl-aarch64:
  extends:
    - .build-distro-absl
    - .fedora-aarch64

build-vendored-absl-aarch64:
  extends:
    - .build-vendored-absl
    - .fedora-aarch64

# Update from:
# https://gitlab.freedesktop.org/gstreamer/gstreamer/-/blob/main/.gitlab-ci.yml
# https://gitlab.freedesktop.org/gstreamer/orc/-/blob/main/.gitlab-ci.yml
vs2019 amd64:
  # Update from https://gitlab.freedesktop.org/gstreamer/gstreamer/container_registry
  image: 'registry.freedesktop.org/gstreamer/gstreamer/amd64/windows:2023-04-21.0-main'
  stage: 'build'
  tags:
    - 'docker'
    - 'windows'
    - '2022'
  artifacts:
    expire_in: '5 days'
    when: 'always'
    paths:
      - "builddir/meson-logs/*txt"
  variables:
    # Make sure any failure in PowerShell scripts is fatal
    ErrorActionPreference: 'Stop'
    WarningPreference: 'Stop'
    ARCH: 'amd64'
    PLAT: 'Desktop'
  before_script:
    # Make sure meson is up to date, so we don't need to rebuild the image with each release
    - pip3 install -U meson ninja
  script:
    # Gitlab executes PowerShell in docker, but VsDevCmd.bat is a batch script.
    # Environment variable substitution is done by PowerShell before calling
    # cmd.exe, that's why we use $env:FOO instead of %FOO%
    - cmd.exe /C "C:\BuildTools\Common7\Tools\VsDevCmd.bat -host_arch=amd64 -arch=$env:ARCH -app_platform=$env:PLAT &&
      meson setup builddir -Dcpp_std=c++20 &&
      meson compile --verbose -C builddir"

# Update from:
# https://gitlab.freedesktop.org/gstreamer/cerbero/-/blob/main/.gitlab-ci.yml
# https://gitlab.freedesktop.org/gstreamer/orc/-/blob/main/.gitlab-ci.yml
macos x86_64:
  stage: 'build'
  tags:
    - gst-macos-13
  artifacts:
    expire_in: '5 days'
    when: 'always'
    paths:
      - "builddir/meson-logs/*txt"
  before_script:
    - pip3 install --upgrade pip
    # Need to install certificates for python
    - pip3 install --upgrade certifi
    # Another way to install certificates
    - open /Applications/Python\ 3.8/Install\ Certificates.command
    # Make sure meson and ninja are up to date
    - pip3 install -U meson ninja
  script:
    - CERT_PATH=$(python3 -m certifi) && export SSL_CERT_FILE=${CERT_PATH} && export REQUESTS_CA_BUNDLE=${CERT_PATH}
    - meson setup builddir
    - meson compile --verbose -C builddir

# Update from:
# https://gitlab.freedesktop.org/gstreamer/cerbero/-/blob/main/.gitlab-ci.yml
# https://gitlab.freedesktop.org/gstreamer/orc/-/blob/main/.gitlab-ci.yml
ios arm64:
  stage: 'build'
  tags:
    - gst-ios-16
  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_SHA}"
    expire_in: '5 days'
    when: 'always'
    paths:
      - "builddir/meson-logs/*txt"
  before_script:
    - pip3 install --upgrade pip
    # Need to install certificates for python
    - pip3 install --upgrade certifi
    # Another way to install certificates
    - open /Applications/Python\ 3.8/Install\ Certificates.command
    # Make sure meson and ninja are up to date
    - pip3 install -U meson ninja
  script:
    - CERT_PATH=$(python3 -m certifi) && export SSL_CERT_FILE=${CERT_PATH} && export REQUESTS_CA_BUNDLE=${CERT_PATH}
    - |
      cat > ios-cross-file.txt <<EOF
      [host_machine]
      system = 'darwin'
      cpu_family = 'aarch64'
      cpu = 'aarch64'
      endian = 'little'
      [properties]
      c_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      objc_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      cpp_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      objcpp_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      c_link_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      objc_link_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      cpp_link_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      objcpp_link_args = ['-arch', 'arm64', '--sysroot=$(xcrun --sdk iphoneos --show-sdk-path)', '-miphoneos-version-min=11.0']
      [binaries]
      ar = '$(xcrun --find --sdk iphoneos ar)'
      c = '$(xcrun --find --sdk iphoneos clang)'
      objc = '$(xcrun --find --sdk iphoneos clang)'
      cpp = '$(xcrun --find --sdk iphoneos clang++)'
      objcpp = '$(xcrun --find --sdk iphoneos clang++)'
      ranlib = '$(xcrun --find --sdk iphoneos ranlib)'
      strip = '$(xcrun --find --sdk iphoneos strip)'
      pkgconfig = 'false'
      cmake = 'false'
      EOF
    - meson setup --cross-file ios-cross-file.txt builddir
    - meson compile --verbose -C builddir

# Update from:
# https://gitlab.freedesktop.org/gstreamer/cerbero/-/blob/main/.gitlab-ci.yml
# https://gitlab.freedesktop.org/gstreamer/orc/-/blob/main/.gitlab-ci.yml
android fedora arm64:
  # Update from https://gitlab.freedesktop.org/gstreamer/cerbero/container_registry
  image: 'registry.freedesktop.org/gstreamer/cerbero/amd64/android-fedora:2021-10-22.0-1.18'
  stage: 'build'
  artifacts:
    expire_in: '5 days'
    when: 'always'
    paths:
      - "builddir/meson-logs/*.txt"
  before_script:
    - dnf install -y python3-pip gcc ninja-build
    - pip3 install --user meson
  script:
    - export PATH="$HOME/.local/bin:$PATH"
    - |
      cat > android-cross-file.txt <<EOF
      [constants]
      ndk_path = '/android/ndk'
      toolchain = ndk_path + '/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android'
      api = '28'
      [host_machine]
      system = 'android'
      cpu_family = 'aarch64'
      cpu = 'aarch64'
      endian = 'little'
      [properties]
      sys_root = ndk_path + '/sysroot'
      c_link_args = ['-fuse-ld=gold']
      cpp_link_args = ['-fuse-ld=gold']
      [binaries]
      c = toolchain + api + '-clang'
      cpp = toolchain + api + '-clang++'
      ar = toolchain + '-ar'
      strip = toolchain + '-strip'
      EOF
    - meson setup --cross-file android-cross-file.txt builddir
    - meson compile --verbose -C builddir

Makefile.am (deleted)

@@ -1,6 +0,0 @@
SUBDIRS = src
pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = webrtc-audio-processing.pc
EXTRA_DIST = PATENTS

NEWS

@@ -1,3 +1,116 @@
Release 2.0
-----------

Bump to code from WebRTC M131 version.

Changes include:

* Minor (breaking) API changes upstream
* Various improvements to the AEC implementation
* Transient suppression is removed
* ExperimentalAgc and ExperimentalNs are removed
* iSAC and the webrtc-audio-coding library were removed
* abseil-cpp dependency bumped to 20240722
* NEON runtime detection dropped following upstream
* Fixes for building on i686 and MIPS
* Support for BSDs is added
* Other build-system cleanups
* Patches to upstream are now also tracked in patches/

Release 1.3
-----------

Fix for pkg-config file generation.

Release 1.2
-----------

Improvements for building with abseil-cpp as a subproject, and pkg-config
improvements for abseil dependency detection.

Release 1.1
-----------

Build fixes for various platforms.

Release 1.0
-----------

This is an API breaking release (as a reminder, the AudioProcessing module does
not provide a stable public API, so we expose whatever API exists in the
upstream project).

In order to make package management easier with these inevitable breakages, the
package is now suffixed with a version (currently it is
webrtc-audio-processing-1). When the next API break happens, we will bump the
major version, allowing incompatible versions to coexist. This also means that
the previous version can also coexist with this one. Non-breaking changes will
see a minor version update only.

Changes:

* The code base is now updated to correspond to the version shipping with the
  Chromium 88.0.4290.1 tag
* There are a very large number of changes to the underlying AEC implementation
  since the last update was a while ago. Most visibly the use of the AEC3
  canceller by default, the deletion of the beamformer code
* The autotools build system is replaced by meson
* The pkg-config name is changed as described above

Release 0.3
-----------

Minor build fixes.

Release 0.2
-----------

Updated AudioProcessing code to be more current.

Contains API breaking changes.

Upstream changes include:

* Rewritten AGC and voice activity detection
* Intelligibility enhancer
* Extended AEC filter
* Beamformer
* Transient suppressor
* ARM, NEON and MIPS optimisations (MIPS optimisations are not hooked up)

API changes:

* We no longer include a top-level audio_processing.h. The webrtc tree format
  is used, so use webrtc/modules/audio_processing/include/audio_processing.h
* The top-level module_common_types.h has also been moved to
  webrtc/modules/interface/module_common_types.h
* C++11 support is now required while compiling client code
* AudioProcessing::Create() does not take any arguments any more
* AudioProcessing::Destroy() is gone, use standard C++ "delete" instead
* Stream parameters are now configured via StreamConfig and ProcessingConfig
  rather than set_sample_rate(), set_num_channels(), etc.
* AudioFrame field names have changed
* Use config API for newer audio processing options
* Use ProcessReverseStream() instead of AnalyzeReverseStream(), particularly
  when using the intelligibility enhancer
* GainControl::set_analog_level_limits() is broken. The AGC implementation
  hard codes 0-255 as the volume range

Other notes:

* The new audio processing parameters are not all tested, and a few are not
  enabled upstream (in Chromium) either
* The rewritten AGC appears to be less sensitive, and it might make sense to
  initialise the capture volume to something reasonable (33% or 50%, for
  example) to make sure there is sufficient energy in the stream to trigger
  the AGC mechanism

Release 0.1
-----------

README

@@ -1,40 +1 @@
See README.md

About
=====
This is meant to be a more Linux packaging friendly copy of the AudioProcessing
module from the WebRTC[1] project. The ideal case is that we make no changes to
the code to make tracking upstream code easy.
This package currently only includes the AudioProcessing bits, but I am very
open to collaborating with other projects that wish to distribute other bits of
the code and hopefully eventually have a single point of packaging all the
WebRTC code to help people reuse the code and avoid keeping private copies in
several different projects.
[1] http://code.google.com/p/webrtc/
Feedback
========
Patches, suggestions welcome. You can send them to the PulseAudio mailing
list[2] or to me at the address below.
-- Arun Raghavan <arun.raghavan@collabora.co.uk>
[2] http://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss
Notes
=====
Assembling some quick notes on maintaining this tree vs. the original WebRTC
project source code.
1. Running meld on a pristine tree's src/ vs. the same directory in the WebRTC
code should produce a fairly easy-to-parse set of differences that can be
merged. I've kept the test code out for now, but this might get merged in
the future. The .gyp/.gypi files are included to make tracking addition and
removal of files easy to track.
2. Be sure to keep the comment on top of the version field in
configure.ac up-to-date so that it's easy to track what changes went in at
what point.

README.md (new file)

@@ -0,0 +1,43 @@
# About
This is meant to be a more Linux packaging friendly copy of the AudioProcessing
module from the [ WebRTC ](https://webrtc.googlesource.com/src) project. The
ideal case is that we make no changes to the code to make tracking upstream
code easy.
This package currently only includes the AudioProcessing bits, but I am very
open to collaborating with other projects that wish to distribute other bits of
the code and hopefully eventually have a single point of packaging all the
WebRTC code to help people reuse the code and avoid keeping private copies in
several different projects.
# Building
This project uses the [Meson build system](https://mesonbuild.com/). The
quickest way to build is:
```sh
# Initialise into the build/ directory, for a prefixed install into the
# install/ directory
meson . build -Dprefix=$PWD/install
# Run the actual build
ninja -C build
# Install locally
ninja -C build install
# The libraries, headers, and pkg-config files are now in the install/
# directory
```
# Feedback
Patches, suggestions welcome. You can file an issue on our Gitlab
[repository](https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/).
# Notes
1. It might be nice to try LTO on the library. We build a lot of code as part
of the main AudioProcessing module deps, and it's possible that this could
provide significant space savings.

RELEASING.md (new file)

@@ -0,0 +1,64 @@
# Release process
## Update the code
Follow the instructions in `UPDATING.md` to update the code.
## Update the package version
If there is no API breakage, update the minor version (X.y -> X.y+1). If there
is API breakage, update the major version (X.y -> X+1.0).
## Make sure builds are successful on all platforms
There is CI for `x86_64` and `aarch64` builds, but 32-bit ARM and MIPS builds
need manual verification (or testing downstream).
## Tag the release
Tag the release with:
```sh
git tag -s -m 'WebRTC AudioProcessing v<X.y>' v<X.y>
```
## Make a tarball
```sh
# The output will be in build/meson-dist/
meson dist -C build --formats=gztar,xztar --include-subprojects
```
## Do a test build
```sh
tar xvf webrtc-audio-processing-X.y.tar.xz
cd webrtc-audio-processing-X.y
meson . build -Dprefix=$PWD/install
ninja -C build
ninja -C build install
cd ..
```
## Publish the files
```sh
scp webrtc-audio-processing-*.tar.* \
annarchy.freedesktop.org:/srv/www.freedesktop.org/www/software/pulseaudio/webrtc-audio-processing/
```
## Push the tag
```sh
git push origin master
git push origin vX.y
```
## Update the website
This is currently an embarrassing manual process.
## Send out a release announcement
This goes to the `pulseaudio-discuss` and `gstreamer-devel` mailing lists, and
possibly `discuss-webrtc` if it seems relevant.

UPDATING.md

@ -0,0 +1,66 @@
Updating
========
Assembling some quick notes on maintaining this tree vs. the upstream WebRTC
project source code.
1. The code is currently synced against whatever revision of the upstream
webrtc git repository Chromium uses.
2. Instructions on checking out the Chromium tree are on the
[WebRTC repo][get-webrtc]. As a shortcut, you can look at the DEPS file
in the Chromium tree for the current webrtc version being used, and then
just use that commit hash with the webrtc tree.
3. [Meld][meld] is a great tool for diffing two directories. Start by running
it on ```webrtc-audio-processing/webrtc``` and
```chromium/third_party/webrtc```.
* For each directory in the ```webrtc-audio-processing``` tree, go over the
corresponding code in the ```chromium``` tree.
* Examine changed files, and pick any new changes. A small number of files
in the ```webrtc-audio-processing``` tree have been changed by hand, so make
sure that those are not overwritten.
* unittest files have been left out since they are not built or used.
* BUILD.gn files have been copied to keep track of changes to the build
system upstream.
* Arch-specific files usually have special handling in the corresponding
meson.build.
4. Once everything has been copied and updated, everything needs to be built.
Missing dependencies (files that were not copied, or new modules that are
being depended on) will first turn up here.
* Copy new deps as needed, leaving out testing-only dependencies insofar as
this is possible.
5. ```webrtc/modules/audio_processing/include/audio_processing.h``` is the main
include file, so look for API changes here.
* The current policy is that we mirror upstream API as-is.
* Update soversion in meson.build with the appropriate version info based on how the
code has changed. Details on how to do this are included in the
[libtool documentation][libtool-version-info].
6. Build PulseAudio (and/or any other dependent projects) against the new code.
The easy way to do this is via a prefixed install.
* Configure webrtc-audio-processing with
```meson build -D prefix=$(pwd)/install```, then do a ```ninja -C build/ install```
* Configure PulseAudio with
```meson build -D pkg_config_path=/path/to/webrtc-audio-processing/install/lib64/pkgconfig/```, which will cause the
build to pick up the prefixed install. Then do a ```ninja -C build```, run the built
PulseAudio, and load ```module-echo-cancel``` to make sure it loads fine.
* Run some test streams through the canceller to make sure it is working
fine.
[get-webrtc]: https://webrtc.googlesource.com/src/
[meld]: http://meldmerge.org/
[libtool-version-info]: https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html


@ -1,6 +0,0 @@
#!/bin/sh
libtoolize
aclocal
automake --add-missing --copy
autoconf
./configure ${@}


@ -1,49 +0,0 @@
# Revision changelog (version - date, svn rev. from upstream that was merged)
# 0.1 - 21 Oct 2011, r789
AC_INIT([webrtc-audio-processing], [0.1])
AM_INIT_AUTOMAKE([dist-xz tar-ustar])
AC_SUBST(LIBWEBRTC_AUDIO_PROCESSING_VERSION_INFO, [0:0:0])
AM_SILENT_RULES([yes])
AC_PROG_CC
AC_PROG_CXX
AC_PROG_LIBTOOL
AC_PROG_INSTALL
AC_LANG_C
AC_LANG_CPLUSPLUS
AC_ARG_WITH([ns-mode],
AS_HELP_STRING([--with-ns-mode=float|fixed], [Noise suppresion mode to use. Default is float]))
AS_CASE(["x${with_ns_mode}"],
["fixed"], [NS_FIXED=1],
["float"], [NS_FIXED=0],
[NS_FIXED=0])
AM_CONDITIONAL(NS_FIXED, [test "x${NS_FIXED}" = "x1"])
COMMON_CFLAGS="-DNDEBUG -I\$(srcdir)/interface -I\$(srcdir)/main/interface -I\$(top_srcdir)/src -I\$(top_srcdir)/src/modules/interface"
COMMON_CXXFLAGS="-DNDEBUG -I\$(srcdir)/interface -I\$(srcdir)/main/interface -I\$(top_srcdir)/src -I\$(top_srcdir)/src/modules/interface"
AC_SUBST([COMMON_CFLAGS])
AC_SUBST([COMMON_CXXFLAGS])
AC_CONFIG_FILES([
webrtc-audio-processing.pc
Makefile
src/Makefile
src/common_audio/Makefile
src/common_audio/signal_processing_library/Makefile
src/common_audio/vad/Makefile
src/system_wrappers/Makefile
src/modules/Makefile
src/modules/audio_processing/Makefile
src/modules/audio_processing/utility/Makefile
src/modules/audio_processing/ns/Makefile
src/modules/audio_processing/aec/Makefile
src/modules/audio_processing/aecm/Makefile
src/modules/audio_processing/agc/Makefile
])
AC_OUTPUT

examples/meson.build

@ -0,0 +1,8 @@
top_incdir = include_directories('..')
executable('run-offline',
'run-offline.cpp',
install: false,
include_directories: top_incdir,
dependencies: [audio_processing_dep, absl_dep]
)

examples/run-offline.cpp

@ -0,0 +1,67 @@
/*
* Copyright (c) 2024 Asymptotic Inc. All Rights Reserved.
* Author: Arun Raghavan <arun@asymptotic.io>
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "api/scoped_refptr.h"
#include <cstdlib>
#include <iostream>
#include <fstream>
#include <webrtc/modules/audio_processing/include/audio_processing.h>
#define DEFAULT_BLOCK_MS 10
#define DEFAULT_RATE 32000
#define DEFAULT_CHANNELS 1
int main(int argc, char **argv) {
if (argc != 4) {
std::cerr << "Usage: " << argv[0] << " <play_file> <rec_file> <out_file>" << std::endl;
return EXIT_FAILURE;
}
std::ifstream play_file(argv[1], std::ios::binary);
std::ifstream rec_file(argv[2], std::ios::binary);
std::ofstream aec_file(argv[3], std::ios::binary);
rtc::scoped_refptr<webrtc::AudioProcessing> apm = webrtc::AudioProcessingBuilder().Create();
webrtc::AudioProcessing::Config config;
config.echo_canceller.enabled = true;
config.echo_canceller.mobile_mode = false;
config.gain_controller1.enabled = true;
config.gain_controller1.mode = webrtc::AudioProcessing::Config::GainController1::kAdaptiveAnalog;
config.gain_controller2.enabled = true;
config.high_pass_filter.enabled = true;
apm->ApplyConfig(config);
webrtc::StreamConfig stream_config(DEFAULT_RATE, DEFAULT_CHANNELS);
while (!play_file.eof() && !rec_file.eof()) {
int16_t play_frame[DEFAULT_RATE * DEFAULT_BLOCK_MS / 1000 * DEFAULT_CHANNELS];
int16_t rec_frame[DEFAULT_RATE * DEFAULT_BLOCK_MS / 1000 * DEFAULT_CHANNELS];
play_file.read(reinterpret_cast<char *>(play_frame), sizeof(play_frame));
rec_file.read(reinterpret_cast<char *>(rec_frame), sizeof(rec_frame));
apm->ProcessReverseStream(play_frame, stream_config, stream_config, play_frame);
apm->ProcessStream(rec_frame, stream_config, stream_config, rec_frame);
aec_file.write(reinterpret_cast<char *>(rec_frame), sizeof(rec_frame));
}
play_file.close();
rec_file.close();
aec_file.close();
return EXIT_SUCCESS;
}
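The example above reads headerless 16-bit mono audio at 32 kHz (per the
`DEFAULT_*` constants) and writes the processed capture stream back out. A
hypothetical invocation after the build described in `README.md`, with made-up
file names, might look like:
```sh
# play.raw: far-end (loudspeaker) signal, rec.raw: near-end (microphone) capture
# Both are raw S16LE, mono, 32 kHz; the echo-cancelled capture goes to out.raw
./build/examples/run-offline play.raw rec.raw out.raw
```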

meson.build

@ -0,0 +1,215 @@
project('webrtc-audio-processing', 'c', 'cpp',
version : '2.0',
meson_version : '>= 0.63',
default_options : [ 'warning_level=1',
'buildtype=debugoptimized',
'c_std=c11',
'cpp_std=c++17',
]
)
version_split = meson.project_version().split('.')
# This will be incremented each time a breaking API change occurs
major_version = version_split[0]
# This will be incremented when there are backwards-compatible changes
minor_version = version_split[1]
# We maintain per-package versions to not have to break API for one if only the
# other has breaking changes
apm_major_version = major_version
apm_minor_version = minor_version
apm_project_name = 'webrtc-audio-processing-' + apm_major_version
include_subdir = apm_project_name
cc = meson.get_compiler('c')
cpp = meson.get_compiler('cpp')
host_system = host_machine.system()
# Don't rely on the cross file setting the system properly when targeting ios
if host_system == 'darwin' and meson.is_cross_build()
ios_test_code = '''#include <TargetConditionals.h>
#if ! TARGET_OS_IPHONE
#error "Not iOS/tvOS/watchOS/iPhoneSimulator"
#endif'''
if cc.compiles(ios_test_code, name : 'building for iOS')
host_system = 'ios'
endif
endif
platform_cflags = []
os_cflags = []
os_deps = []
have_posix = false
have_win = false
# Let's use pkg-config if available. This will also fallback to the subproject
# if pkg-config is not found, which is really the most reliable way of building
# abseil due to strict C++ standard match requirements.
absl_dep = [
dependency('absl_base', default_options: ['cpp_std=c++17'], version: '>=20240722'),
dependency('absl_flags'),
dependency('absl_strings'),
dependency('absl_synchronization'),
dependency('absl_bad_optional_access'),
]
if absl_dep[0].type_name() == 'internal'
absl_subproj = subproject('abseil-cpp')
headers = [
absl_subproj.get_variable('absl_base_headers'),
absl_subproj.get_variable('absl_flags_headers'),
absl_subproj.get_variable('absl_strings_headers'),
absl_subproj.get_variable('absl_synchronization_headers'),
absl_subproj.get_variable('absl_types_headers'),
]
install_headers(headers, preserve_path: true)
pc_requires = []
else
pc_requires = [absl_dep[0]]
endif
if ['darwin', 'ios'].contains(host_system)
os_cflags = ['-DWEBRTC_MAC']
if host_system == 'ios'
os_cflags += ['-DWEBRTC_IOS']
endif
have_posix = true
elif host_system == 'android'
os_cflags += ['-DWEBRTC_ANDROID', '-DWEBRTC_LINUX']
os_deps += [cc.find_library('log')]
os_deps += [dependency('gnustl', required : get_option('gnustl'))]
have_posix = true
elif host_system == 'linux'
os_cflags += ['-DWEBRTC_LINUX']
os_deps += [cc.find_library('rt', required : false)]
os_deps += [dependency('threads')]
have_posix = true
elif (host_system == 'dragonfly' or host_system == 'freebsd' or
host_system == 'netbsd' or host_system == 'openbsd')
os_cflags += ['-DWEBRTC_BSD']
os_deps += [dependency('threads')]
have_posix = true
elif host_system == 'windows'
platform_cflags += ['-DWEBRTC_WIN', '-D_WIN32']
# this one is for MinGW to get format specifiers from inttypes.h in C++
platform_cflags += ['-D__STDC_FORMAT_MACROS=1']
# Avoid min/max from windows.h which breaks std::min/max
platform_cflags += ['-DNOMINMAX']
# Ensure M_PI etc are defined
platform_cflags += ['-D_USE_MATH_DEFINES']
os_deps += [cc.find_library('winmm')]
have_win = true
endif
if have_posix
platform_cflags += ['-DWEBRTC_POSIX']
endif
arch_cflags = []
have_arm = false
have_armv7 = false
have_arm64 = false
have_neon = false
have_mips = false
have_mips64 = false
have_x86 = false
have_inline_sse = false
have_avx2 = false
if host_machine.cpu_family() == 'arm'
if cc.compiles('''#ifndef __ARM_ARCH_ISA_ARM
#error no arm arch
#endif''')
have_arm = true
arch_cflags += ['-DWEBRTC_ARCH_ARM']
endif
if cc.compiles('''#ifndef __ARM_ARCH_7A__
#error no armv7 arch
#endif''')
have_armv7 = true
arch_cflags += ['-DWEBRTC_ARCH_ARM_V7']
endif
if cc.compiles('#include <arm_neon.h>', args : '-mfpu=neon')
have_neon = true
endif
endif
if host_machine.cpu_family() == 'aarch64'
have_arm64 = true
have_neon = true
arch_cflags += ['-DWEBRTC_ARCH_ARM64']
endif
if ['mips', 'mips64'].contains(host_machine.cpu_family())
have_mips = true
endif
if host_machine.cpu_family() == 'mips64'
have_mips64 = true
endif
if ['x86', 'x86_64'].contains(host_machine.cpu_family())
have_x86 = true
# AVX2 support is unconditionally available, since all the code (compiled
# with -mavx2) is in separate files from runtime detection (which should not
# be compiled with SIMD flags for cases where the CPU does not support it).
# Unfortunately, a bunch of SSE code is inline with the runtime detection,
# and we can't support that on systems that don't support SSE.
have_avx2 = true
arch_cflags += ['-DWEBRTC_ENABLE_AVX2']
if get_option('inline-sse')
have_inline_sse = true
else
have_inline_sse = false
arch_cflags += ['-DWAP_DISABLE_INLINE_SSE']
endif
endif
neon_opt = get_option('neon').require(have_neon)
if neon_opt.enabled()
arch_cflags += ['-DWEBRTC_HAS_NEON']
if not have_arm64
arch_cflags += ['-mfpu=neon']
endif
endif
common_cflags = [
'-DWEBRTC_LIBRARY_IMPL',
'-DWEBRTC_ENABLE_SYMBOL_EXPORT',
# avoid windows.h/winsock2.h conflicts
'-D_WINSOCKAPI_',
'-DNDEBUG'
] + platform_cflags + os_cflags + arch_cflags
common_cxxflags = common_cflags
common_deps = os_deps + [absl_dep]
webrtc_inc = include_directories('.')
# FIXME: use the unstable-simd module instead
if cc.get_define('_MSC_VER') != ''
avx_flags = ['/arch:AVX2']
else
avx_flags = ['-mavx2', '-mfma']
endif
subdir('webrtc')
pkgconfig = import('pkgconfig')
pkgconfig.generate(
libwebrtc_audio_processing,
description: 'WebRTC Audio Processing library',
subdirs: include_subdir,
requires: pc_requires,
extra_cflags: [
'-DWEBRTC_LIBRARY_IMPL',
] + platform_cflags,
)
audio_processing_dep = declare_dependency(
link_with: libwebrtc_audio_processing,
dependencies: [absl_dep],
include_directories: [webrtc_inc]
)
meson.override_dependency(apm_project_name, audio_processing_dep)
subdir('examples')

meson_options.txt

@ -0,0 +1,9 @@
option('gnustl', type: 'feature',
value: 'auto',
description: 'Use gnustl for a c++ library implementation (only used on Android)')
option('neon', type: 'feature',
value: 'auto',
description: 'Enable NEON optimisations')
option('inline-sse', type: 'boolean',
value: true,
description: 'Enable inline SSE/SSE2 optimisations (i.e. assume CPU supports SSE/SSE2)')
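As an illustration of how these options are used, the inline SSE paths can be
turned off for older x86 CPUs, and NEON can be disabled on ARM, at configure
time. The values below are examples, not defaults:
```sh
# x86 without SSE/SSE2: drop the inline SSE code paths
meson . build -Dprefix=$PWD/install -Dinline-sse=false
# ARM without NEON (or to rule it out explicitly): disable the feature
meson . build-arm -Dprefix=$PWD/install -Dneon=disabled
```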


@ -0,0 +1,68 @@
From 297fd4f2efc53b6d49433eaad91a8e09a0f9cbec Mon Sep 17 00:00:00 2001
From: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Date: Fri, 25 Oct 2024 00:40:59 +0300
Subject: [PATCH] AECM: MIPS: Use uintptr_t for pointer arithmetic
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Trying to compile the MIPS-specific AECM audio processing file for
mips64el on Debian results in the following errors:
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc: In function int webrtc::WebRtcAecm_ProcessBlock(AecmCore*, const int16_t*, const int16_t*, const int16_t*, int16_t*):
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:955:30: error: cast from int16_t* {aka short int*} to uint32_t {aka unsigned int} loses precision [-fpermissive]
955 | int16_t* fft = (int16_t*)(((uint32_t)fft_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:955:18: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
955 | int16_t* fft = (int16_t*)(((uint32_t)fft_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:956:36: error: cast from int32_t* {aka int*} to uint32_t {aka unsigned int} loses precision [-fpermissive]
956 | int32_t* echoEst32 = (int32_t*)(((uint32_t)echoEst32_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~~~~~~~
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:956:24: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
956 | int32_t* echoEst32 = (int32_t*)(((uint32_t)echoEst32_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:957:40: error: cast from int32_t* {aka int*} to uint32_t {aka unsigned int} loses precision [-fpermissive]
957 | ComplexInt16* dfw = (ComplexInt16*)(((uint32_t)dfw_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:957:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
957 | ComplexInt16* dfw = (ComplexInt16*)(((uint32_t)dfw_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:958:40: error: cast from int32_t* {aka int*} to uint32_t {aka unsigned int} loses precision [-fpermissive]
958 | ComplexInt16* efw = (ComplexInt16*)(((uint32_t)efw_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~
../webrtc/modules/audio_processing/aecm/aecm_core_mips.cc:958:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
958 | ComplexInt16* efw = (ComplexInt16*)(((uint32_t)efw_buf + 31) & ~31);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Presumably, this file was written for 32-bit MIPS so the author used
uint32_t to do pointer arithmetic over these arrays. Fix the errors by
using uintptr_t to work with pointers.
Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
webrtc/modules/audio_processing/aecm/aecm_core_mips.cc | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/webrtc/modules/audio_processing/aecm/aecm_core_mips.cc b/webrtc/modules/audio_processing/aecm/aecm_core_mips.cc
index 16b03cf..07c785e 100644
--- a/webrtc/modules/audio_processing/aecm/aecm_core_mips.cc
+++ b/webrtc/modules/audio_processing/aecm/aecm_core_mips.cc
@@ -952,10 +952,10 @@ int WebRtcAecm_ProcessBlock(AecmCore* aecm,
int32_t dfw_buf[PART_LEN2 + 8];
int32_t efw_buf[PART_LEN2 + 8];
- int16_t* fft = (int16_t*)(((uint32_t)fft_buf + 31) & ~31);
- int32_t* echoEst32 = (int32_t*)(((uint32_t)echoEst32_buf + 31) & ~31);
- ComplexInt16* dfw = (ComplexInt16*)(((uint32_t)dfw_buf + 31) & ~31);
- ComplexInt16* efw = (ComplexInt16*)(((uint32_t)efw_buf + 31) & ~31);
+ int16_t* fft = (int16_t*)(((uintptr_t)fft_buf + 31) & ~31);
+ int32_t* echoEst32 = (int32_t*)(((uintptr_t)echoEst32_buf + 31) & ~31);
+ ComplexInt16* dfw = (ComplexInt16*)(((uintptr_t)dfw_buf + 31) & ~31);
+ ComplexInt16* efw = (ComplexInt16*)(((uintptr_t)efw_buf + 31) & ~31);
int16_t hnl[PART_LEN1];
int16_t numPosCoef = 0;
--
2.47.1


@ -0,0 +1,110 @@
From 2a318149f8d5094c82306b8091a7a8b5194bf9c1 Mon Sep 17 00:00:00 2001
From: Jan Beich <jbeich@FreeBSD.org>
Date: Tue, 7 Jan 2020 18:08:24 +0000
Subject: [PATCH] Add support for BSD systems
webrtc/rtc_base/checks.cc:158:28: error: use of undeclared identifier 'LAST_SYSTEM_ERROR'
158 | file, line, LAST_SYSTEM_ERROR, message);
| ^
webrtc/rtc_base/checks.cc:220:16: error: use of undeclared identifier 'LAST_SYSTEM_ERROR'
220 | LAST_SYSTEM_ERROR);
| ^
In file included from webrtc/rtc_base/platform_thread_types.cc:11:
webrtc/rtc_base/platform_thread_types.h:47:1: error: unknown type name 'PlatformThreadId'
47 | PlatformThreadId CurrentThreadId();
| ^
webrtc/rtc_base/platform_thread_types.h:52:1: error: unknown type name 'PlatformThreadRef'
52 | PlatformThreadRef CurrentThreadRef();
| ^
webrtc/rtc_base/platform_thread_types.h:55:29: error: unknown type name 'PlatformThreadRef'
55 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b);
| ^
webrtc/rtc_base/platform_thread_types.h:55:57: error: unknown type name 'PlatformThreadRef'
55 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b);
| ^
webrtc/rtc_base/platform_thread_types.cc:37:1: error: unknown type name 'PlatformThreadId'
37 | PlatformThreadId CurrentThreadId() {
| ^
webrtc/rtc_base/platform_thread_types.cc:58:1: error: unknown type name 'PlatformThreadRef'
58 | PlatformThreadRef CurrentThreadRef() {
| ^
webrtc/rtc_base/platform_thread_types.cc:68:29: error: unknown type name 'PlatformThreadRef'
68 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b) {
| ^
webrtc/rtc_base/platform_thread_types.cc:68:57: error: unknown type name 'PlatformThreadRef'
68 | bool IsThreadRefEqual(const PlatformThreadRef& a, const PlatformThreadRef& b) {
| ^
In file included from webrtc/rtc_base/event_tracer.cc:30:
In file included from webrtc/api/sequence_checker.h:15:
In file included from webrtc/rtc_base/synchronization/sequence_checker_internal.h:18:
webrtc/rtc_base/synchronization/mutex.h:28:2: error: Unsupported platform.
28 | #error Unsupported platform.
| ^
webrtc/rtc_base/synchronization/mutex.h:52:3: error: unknown type name 'MutexImpl'
52 | MutexImpl impl_;
| ^
---
meson.build | 5 +++++
webrtc/rtc_base/platform_thread_types.cc | 16 ++++++++++++++++
2 files changed, 21 insertions(+)
diff --git a/meson.build b/meson.build
index 8d85d56..05a434a 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,11 @@ elif host_system == 'linux'
os_deps += [cc.find_library('rt', required : false)]
os_deps += [dependency('threads')]
have_posix = true
+elif (host_system == 'dragonfly' or host_system == 'freebsd' or
+ host_system == 'netbsd' or host_system == 'openbsd')
+ os_cflags += ['-DWEBRTC_BSD', '-DWEBRTC_THREAD_RR']
+ os_deps += [dependency('threads')]
+ have_posix = true
elif host_system == 'windows'
platform_cflags += ['-DWEBRTC_WIN', '-D_WIN32']
# this one is for MinGW to get format specifiers from inttypes.h in C++
diff --git a/webrtc/rtc_base/platform_thread_types.cc b/webrtc/rtc_base/platform_thread_types.cc
index d64ea68..e98e8ec 100644
--- a/webrtc/rtc_base/platform_thread_types.cc
+++ b/webrtc/rtc_base/platform_thread_types.cc
@@ -15,6 +15,12 @@
#include <sys/syscall.h>
#endif
+#if defined(__DragonFly__) || defined(__FreeBSD__) || defined(__OpenBSD__) // WEBRTC_BSD
+#include <pthread_np.h>
+#elif defined(__NetBSD__) // WEBRTC_BSD
+#include <lwp.h>
+#endif
+
#if defined(WEBRTC_WIN)
#include "rtc_base/arraysize.h"
@@ -46,6 +52,12 @@ PlatformThreadId CurrentThreadId() {
return zx_thread_self();
#elif defined(WEBRTC_LINUX)
return syscall(__NR_gettid);
+#elif defined(__DragonFly__) || defined(__FreeBSD__) // WEBRTC_BSD
+ return pthread_getthreadid_np();
+#elif defined(__NetBSD__) // WEBRTC_BSD
+ return _lwp_self();
+#elif defined(__OpenBSD__) // WEBRTC_BSD
+ return getthrid();
#elif defined(__EMSCRIPTEN__)
return static_cast<PlatformThreadId>(pthread_self());
#else
@@ -116,6 +128,10 @@ void SetCurrentThreadName(const char* name) {
prctl(PR_SET_NAME, reinterpret_cast<unsigned long>(name)); // NOLINT
#elif defined(WEBRTC_MAC) || defined(WEBRTC_IOS)
pthread_setname_np(name);
+#elif defined(__DragonFly__) || defined(__FreeBSD__) || defined(__OpenBSD__) // WEBRTC_BSD
+ pthread_set_name_np(pthread_self(), name);
+#elif defined(__NetBSD__) // WEBRTC_BSD
+ pthread_setname_np(pthread_self(), "%s", (void*)name);
#elif defined(WEBRTC_FUCHSIA)
zx_status_t status = zx_object_set_property(zx_thread_self(), ZX_PROP_NAME,
name, strlen(name));
--
2.47.1


@ -0,0 +1,336 @@
From fed81a77c9a9bc366556f732324cdc5f9e7b09e9 Mon Sep 17 00:00:00 2001
From: Arun Raghavan <arun@asymptotic.io>
Date: Thu, 26 Dec 2024 14:24:40 -0500
Subject: [PATCH] Allow disabling inline SSE
Should make building on i686 without SSE feasible.
Fixes: https://gitlab.freedesktop.org/pulseaudio/webrtc-audio-processing/-/issues/5
---
meson.build | 14 ++++++++++++--
meson_options.txt | 5 ++++-
.../audio_processing/aec3/adaptive_fir_filter.cc | 14 ++++++++++----
.../aec3/adaptive_fir_filter_erl.cc | 6 ++++--
webrtc/modules/audio_processing/aec3/fft_data.h | 4 +++-
.../audio_processing/aec3/matched_filter.cc | 6 ++++--
webrtc/modules/audio_processing/aec3/vector_math.h | 8 +++++---
.../audio_processing/agc2/rnn_vad/vector_math.h | 4 +++-
webrtc/third_party/pffft/meson.build | 2 +-
9 files changed, 46 insertions(+), 17 deletions(-)
diff --git a/meson.build b/meson.build
index 811d795..ebf053a 100644
--- a/meson.build
+++ b/meson.build
@@ -110,6 +110,7 @@ have_neon = false
have_mips = false
have_mips64 = false
have_x86 = false
+have_inline_sse = false
have_avx2 = false
if host_machine.cpu_family() == 'arm'
if cc.compiles('''#ifndef __ARM_ARCH_ISA_ARM
@@ -140,10 +141,19 @@ if host_machine.cpu_family() == 'mips64'
endif
if ['x86', 'x86_64'].contains(host_machine.cpu_family())
have_x86 = true
- # This is unconditionally enabled for now, actual usage is determined by
- # runtime CPU detection, so we're just assuming the compiler supports avx2
+ # AVX2 support is unconditionally available, since all the code (compiled
+ # with -mavx2) is in separate files from runtime detection (which should not
+ # be compiled with SIMD flags for cases where the CPU does not support it).
+ # Unfortunately, a bunch of SSE code is inline with the runtime detection,
+ # and we can't support that on systems that don't support SSE.
have_avx2 = true
arch_cflags += ['-DWEBRTC_ENABLE_AVX2']
+ if get_option('inline-sse')
+ have_inline_sse = true
+ else
+ have_inline_sse = false
+ arch_cflags += ['-DWAP_DISABLE_INLINE_SSE']
+ endif
endif
neon_opt = get_option('neon')
diff --git a/meson_options.txt b/meson_options.txt
index c939fb9..d08f356 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -3,4 +3,7 @@ option('gnustl', type: 'feature',
description: 'Use gnustl for a c++ library implementation (only used on Android)')
option('neon', type: 'combo',
choices: ['no', 'yes', 'auto', 'runtime'],
- description: '')
+ description: 'Enable NEON optimisations')
+option('inline-sse', type: 'boolean',
+ value: true,
+ description: 'Enable inline SSE/SSE2 optimisations (i.e. assume CPU supports SSE/SSE2)')
diff --git a/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc b/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc
index 917aa95..ded0511 100644
--- a/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc
+++ b/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc
@@ -16,7 +16,7 @@
#if defined(WEBRTC_HAS_NEON)
#include <arm_neon.h>
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
#include <emmintrin.h>
#endif
#include <math.h>
@@ -88,7 +88,7 @@ void ComputeFrequencyResponse_Neon(
}
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
// Computes and stores the frequency response of the filter.
void ComputeFrequencyResponse_Sse2(
size_t num_partitions,
@@ -212,7 +212,7 @@ void AdaptPartitions_Neon(const RenderBuffer& render_buffer,
}
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
// Adapts the filter partitions. (SSE2 variant)
void AdaptPartitions_Sse2(const RenderBuffer& render_buffer,
const FftData& G,
@@ -377,7 +377,7 @@ void ApplyFilter_Neon(const RenderBuffer& render_buffer,
}
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
// Produces the filter output (SSE2 variant).
void ApplyFilter_Sse2(const RenderBuffer& render_buffer,
size_t num_partitions,
@@ -557,9 +557,11 @@ void AdaptiveFirFilter::Filter(const RenderBuffer& render_buffer,
RTC_DCHECK(S);
switch (optimization_) {
#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2:
aec3::ApplyFilter_Sse2(render_buffer, current_size_partitions_, H_, S);
break;
+#endif
case Aec3Optimization::kAvx2:
aec3::ApplyFilter_Avx2(render_buffer, current_size_partitions_, H_, S);
break;
@@ -601,9 +603,11 @@ void AdaptiveFirFilter::ComputeFrequencyResponse(
switch (optimization_) {
#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2:
aec3::ComputeFrequencyResponse_Sse2(current_size_partitions_, H_, H2);
break;
+#endif
case Aec3Optimization::kAvx2:
aec3::ComputeFrequencyResponse_Avx2(current_size_partitions_, H_, H2);
break;
@@ -626,10 +630,12 @@ void AdaptiveFirFilter::AdaptAndUpdateSize(const RenderBuffer& render_buffer,
// Adapt the filter.
switch (optimization_) {
#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2:
aec3::AdaptPartitions_Sse2(render_buffer, G, current_size_partitions_,
&H_);
break;
+#endif
case Aec3Optimization::kAvx2:
aec3::AdaptPartitions_Avx2(render_buffer, G, current_size_partitions_,
&H_);
diff --git a/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_erl.cc b/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_erl.cc
index 45b8813..920d51c 100644
--- a/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_erl.cc
+++ b/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_erl.cc
@@ -16,7 +16,7 @@
#if defined(WEBRTC_HAS_NEON)
#include <arm_neon.h>
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
#include <emmintrin.h>
#endif
@@ -54,7 +54,7 @@ void ErlComputer_NEON(
}
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
// Computes and stores the echo return loss estimate of the filter, which is the
// sum of the partition frequency responses.
void ErlComputer_SSE2(
@@ -82,9 +82,11 @@ void ComputeErl(const Aec3Optimization& optimization,
// Update the frequency response and echo return loss for the filter.
switch (optimization) {
#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2:
aec3::ErlComputer_SSE2(H2, erl);
break;
+#endif
case Aec3Optimization::kAvx2:
aec3::ErlComputer_AVX2(H2, erl);
break;
diff --git a/webrtc/modules/audio_processing/aec3/fft_data.h b/webrtc/modules/audio_processing/aec3/fft_data.h
index 9c25e78..892407d 100644
--- a/webrtc/modules/audio_processing/aec3/fft_data.h
+++ b/webrtc/modules/audio_processing/aec3/fft_data.h
@@ -14,7 +14,7 @@
// Defines WEBRTC_ARCH_X86_FAMILY, used below.
#include "rtc_base/system/arch.h"
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
#include <emmintrin.h>
#endif
#include <algorithm>
@@ -49,6 +49,7 @@ struct FftData {
RTC_DCHECK_EQ(kFftLengthBy2Plus1, power_spectrum.size());
switch (optimization) {
#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2: {
constexpr int kNumFourBinBands = kFftLengthBy2 / 4;
constexpr int kLimit = kNumFourBinBands * 4;
@@ -63,6 +64,7 @@ struct FftData {
power_spectrum[kFftLengthBy2] = re[kFftLengthBy2] * re[kFftLengthBy2] +
im[kFftLengthBy2] * im[kFftLengthBy2];
} break;
+#endif
case Aec3Optimization::kAvx2:
SpectrumAVX2(power_spectrum);
break;
diff --git a/webrtc/modules/audio_processing/aec3/matched_filter.cc b/webrtc/modules/audio_processing/aec3/matched_filter.cc
index 59a3b46..86f365a 100644
--- a/webrtc/modules/audio_processing/aec3/matched_filter.cc
+++ b/webrtc/modules/audio_processing/aec3/matched_filter.cc
@@ -15,7 +15,7 @@
#if defined(WEBRTC_HAS_NEON)
#include <arm_neon.h>
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
#include <emmintrin.h>
#endif
#include <algorithm>
@@ -286,7 +286,7 @@ void MatchedFilterCore_NEON(size_t x_start_index,
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
void MatchedFilterCore_AccumulatedError_SSE2(
size_t x_start_index,
@@ -695,12 +695,14 @@ void MatchedFilter::Update(const DownsampledRenderBuffer& render_buffer,
switch (optimization_) {
#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2:
aec3::MatchedFilterCore_SSE2(
x_start_index, x2_sum_threshold, smoothing, render_buffer.buffer, y,
filters_[n], &filters_updated, &error_sum, compute_pre_echo,
instantaneous_accumulated_error_, scratch_memory_);
break;
+#endif
case Aec3Optimization::kAvx2:
aec3::MatchedFilterCore_AVX2(
x_start_index, x2_sum_threshold, smoothing, render_buffer.buffer, y,
diff --git a/webrtc/modules/audio_processing/aec3/vector_math.h b/webrtc/modules/audio_processing/aec3/vector_math.h
index e4d1381..1506a44 100644
--- a/webrtc/modules/audio_processing/aec3/vector_math.h
+++ b/webrtc/modules/audio_processing/aec3/vector_math.h
@@ -17,7 +17,7 @@
#if defined(WEBRTC_HAS_NEON)
#include <arm_neon.h>
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
#include <emmintrin.h>
#endif
#include <math.h>
@@ -43,7 +43,7 @@ class VectorMath {
void SqrtAVX2(rtc::ArrayView<float> x);
void Sqrt(rtc::ArrayView<float> x) {
switch (optimization_) {
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2: {
const int x_size = static_cast<int>(x.size());
const int vector_limit = x_size >> 2;
@@ -123,7 +123,7 @@ class VectorMath {
RTC_DCHECK_EQ(z.size(), x.size());
RTC_DCHECK_EQ(z.size(), y.size());
switch (optimization_) {
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2: {
const int x_size = static_cast<int>(x.size());
const int vector_limit = x_size >> 2;
@@ -174,6 +174,7 @@ class VectorMath {
RTC_DCHECK_EQ(z.size(), x.size());
switch (optimization_) {
#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if !defined(WAP_DISABLE_INLINE_SSE)
case Aec3Optimization::kSse2: {
const int x_size = static_cast<int>(x.size());
const int vector_limit = x_size >> 2;
@@ -190,6 +191,7 @@ class VectorMath {
z[j] += x[j];
}
} break;
+#endif
case Aec3Optimization::kAvx2:
AccumulateAVX2(x, z);
break;
diff --git a/webrtc/modules/audio_processing/agc2/rnn_vad/vector_math.h b/webrtc/modules/audio_processing/agc2/rnn_vad/vector_math.h
index 47f6811..f965086 100644
--- a/webrtc/modules/audio_processing/agc2/rnn_vad/vector_math.h
+++ b/webrtc/modules/audio_processing/agc2/rnn_vad/vector_math.h
@@ -17,7 +17,7 @@
#if defined(WEBRTC_HAS_NEON)
#include <arm_neon.h>
#endif
-#if defined(WEBRTC_ARCH_X86_FAMILY)
+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(WAP_DISABLE_INLINE_SSE)
#include <emmintrin.h>
#endif
@@ -47,6 +47,7 @@ class VectorMath {
if (cpu_features_.avx2) {
return DotProductAvx2(x, y);
} else if (cpu_features_.sse2) {
+#if !defined(WAP_DISABLE_INLINE_SSE)
__m128 accumulator = _mm_setzero_ps();
constexpr int kBlockSizeLog2 = 2;
constexpr int kBlockSize = 1 << kBlockSizeLog2;
@@ -72,6 +73,7 @@ class VectorMath {
dot_product += x[i] * y[i];
}
return dot_product;
+#endif
}
#elif defined(WEBRTC_HAS_NEON) && defined(WEBRTC_ARCH_ARM64)
if (cpu_features_.neon) {
diff --git a/webrtc/third_party/pffft/meson.build b/webrtc/third_party/pffft/meson.build
index c1eb5c6..cf4c9c7 100644
--- a/webrtc/third_party/pffft/meson.build
+++ b/webrtc/third_party/pffft/meson.build
@@ -4,7 +4,7 @@ pffft_sources = [
pffft_cflags = [ '-D_GNU_SOURCE' ]
-if (have_arm and not have_neon) or (have_mips and host_machine.endian() == 'little') or have_mips64
+if not have_inline_sse or (have_arm and not have_neon) or (have_mips and host_machine.endian() == 'little') or have_mips64
pffft_cflags += [ '-DPFFFT_SIMD_DISABLE' ]
endif
--
2.47.1


@ -0,0 +1,68 @@
From ad563b095cea13730ca95e77d50e352ea9e344a9 Mon Sep 17 00:00:00 2001
From: Arun Raghavan <arun@asymptotic.io>
Date: Fri, 15 Dec 2023 16:06:05 -0500
Subject: [PATCH] Fix up XMM intrinsics usage on MSVC
Reapplying 0a0050746bc20ef970b9f260d485e4367c7ba854 after the M131 bump.
---
.../aec3/matched_filter_avx2.cc | 30 ++++++++++++-------
1 file changed, 20 insertions(+), 10 deletions(-)
diff --git a/webrtc/modules/audio_processing/aec3/matched_filter_avx2.cc b/webrtc/modules/audio_processing/aec3/matched_filter_avx2.cc
index 8c2ffcb..65a1b76 100644
--- a/webrtc/modules/audio_processing/aec3/matched_filter_avx2.cc
+++ b/webrtc/modules/audio_processing/aec3/matched_filter_avx2.cc
@@ -13,6 +13,16 @@
#include "modules/audio_processing/aec3/matched_filter.h"
#include "rtc_base/checks.h"
+#ifdef _MSC_VER
+// Visual Studio
+#define LOOKUP_M128(v, i) v.m128_f32[i]
+#define LOOKUP_M256(v, i) v.m256_f32[i]
+#else
+// GCC/Clang
+#define LOOKUP_M128(v, i) v[i]
+#define LOOKUP_M256(v, i) v[i]
+#endif
+
namespace webrtc {
namespace aec3 {
@@ -81,14 +91,14 @@ void MatchedFilterCore_AccumulatedError_AVX2(
s_inst_256_8 = _mm256_mul_ps(h_k_8, x_k_8);
s_inst_hadd_256 = _mm256_hadd_ps(s_inst_256, s_inst_256_8);
s_inst_hadd_256 = _mm256_hadd_ps(s_inst_hadd_256, s_inst_hadd_256);
- s_acum += s_inst_hadd_256[0];
- e_128[0] = s_acum - y[i];
- s_acum += s_inst_hadd_256[4];
- e_128[1] = s_acum - y[i];
- s_acum += s_inst_hadd_256[1];
- e_128[2] = s_acum - y[i];
- s_acum += s_inst_hadd_256[5];
- e_128[3] = s_acum - y[i];
+ s_acum += LOOKUP_M256(s_inst_hadd_256, 0);
+ LOOKUP_M128(e_128, 0) = s_acum - y[i];
+ s_acum += LOOKUP_M256(s_inst_hadd_256,4);
+ LOOKUP_M128(e_128, 1) = s_acum - y[i];
+ s_acum += LOOKUP_M256(s_inst_hadd_256,1);
+ LOOKUP_M128(e_128, 2) = s_acum - y[i];
+ s_acum += LOOKUP_M256(s_inst_hadd_256,5);
+ LOOKUP_M128(e_128, 3) = s_acum - y[i];
__m128 accumulated_error = _mm_load_ps(a_p);
accumulated_error = _mm_fmadd_ps(e_128, e_128, accumulated_error);
@@ -209,8 +219,8 @@ void MatchedFilterCore_AVX2(size_t x_start_index,
x2_sum_256 = _mm256_add_ps(x2_sum_256, x2_sum_256_8);
s_256 = _mm256_add_ps(s_256, s_256_8);
__m128 sum = hsum_ab(x2_sum_256, s_256);
- x2_sum += sum[0];
- s += sum[1];
+ x2_sum += LOOKUP_M128(sum, 0);
+ s += LOOKUP_M128(sum, 1);
// Compute the matched filter error.
float e = y[i] - s;
--
2.47.1


@ -0,0 +1,45 @@
From 4a17c682e9a173c27feec9e67fb8c4c36090b1a6 Mon Sep 17 00:00:00 2001
From: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Date: Fri, 25 Oct 2024 01:53:16 +0300
Subject: [PATCH] common_audio: Add MIPS_DSP_R1_LE guard for vector scaling ops
The MIPS-specific source for vector scaling operations fails to build on
Debian's mips64el:
[97/303] Compiling C object webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o
FAILED: webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o
cc [...] webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o.d -o webrtc/common_audio/libcommon_audio.a.p/signal_processing_vector_scaling_operations_mips.c.o -c ../webrtc/common_audio/signal_processing/vector_scaling_operations_mips.c
/tmp/cc7UGPkY.s: Assembler messages:
/tmp/cc7UGPkY.s:57: Error: opcode not supported on this processor: mips64r2 (mips64r2) `extrv_r.w $3,$ac0,$8'
ninja: build stopped: subcommand failed.
The EXTRV_R.W instruction it uses is part of DSP extensions for this
architecture. In signal_processing_library.h, this function's prototype
is guarded with #if defined(MIPS_DSP_R1_LE). Guard the implementation
like that as well to fix the error.
Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
.../signal_processing/vector_scaling_operations_mips.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/webrtc/common_audio/signal_processing/vector_scaling_operations_mips.c b/webrtc/common_audio/signal_processing/vector_scaling_operations_mips.c
index ba2d26d..08ca293 100644
--- a/webrtc/common_audio/signal_processing/vector_scaling_operations_mips.c
+++ b/webrtc/common_audio/signal_processing/vector_scaling_operations_mips.c
@@ -16,6 +16,7 @@
#include "common_audio/signal_processing/include/signal_processing_library.h"
+#if defined(MIPS_DSP_R1_LE)
int WebRtcSpl_ScaleAndAddVectorsWithRound_mips(const int16_t* in_vector1,
int16_t in_vector1_scale,
const int16_t* in_vector2,
@@ -55,3 +56,4 @@ int WebRtcSpl_ScaleAndAddVectorsWithRound_mips(const int16_t* in_vector1,
}
return 0;
}
+#endif
--
2.47.1


@ -1 +0,0 @@
SUBDIRS = common_audio system_wrappers modules


@ -1 +0,0 @@
SUBDIRS = signal_processing_library vad


@ -1,44 +0,0 @@
noinst_LTLIBRARIES = libspl.la
libspl_la_SOURCES = main/interface/signal_processing_library.h \
main/interface/spl_inl.h \
main/source/auto_corr_to_refl_coef.c \
main/source/auto_correlation.c \
main/source/complex_fft.c \
main/source/complex_ifft.c \
main/source/complex_bit_reverse.c \
main/source/copy_set_operations.c \
main/source/cos_table.c \
main/source/cross_correlation.c \
main/source/division_operations.c \
main/source/dot_product_with_scale.c \
main/source/downsample_fast.c \
main/source/energy.c \
main/source/filter_ar.c \
main/source/filter_ar_fast_q12.c \
main/source/filter_ma_fast_q12.c \
main/source/get_hanning_window.c \
main/source/get_scaling_square.c \
main/source/hanning_table.c \
main/source/ilbc_specific_functions.c \
main/source/levinson_durbin.c \
main/source/lpc_to_refl_coef.c \
main/source/min_max_operations.c \
main/source/randn_table.c \
main/source/randomization_functions.c \
main/source/refl_coef_to_lpc.c \
main/source/resample.c \
main/source/resample_48khz.c \
main/source/resample_by_2.c \
main/source/resample_by_2_internal.c \
main/source/resample_by_2_internal.h \
main/source/resample_fractional.c \
main/source/sin_table.c \
main/source/sin_table_1024.c \
main/source/spl_sqrt.c \
main/source/spl_sqrt_floor.c \
main/source/spl_version.c \
main/source/splitting_filter.c \
main/source/sqrt_of_one_minus_x_squared.c \
main/source/vector_scaling_operations.c
libspl_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS)


@ -1,159 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
// This header file includes the inline functions in
// the fix point signal processing library.
#ifndef WEBRTC_SPL_SPL_INL_H_
#define WEBRTC_SPL_SPL_INL_H_
#ifdef WEBRTC_ARCH_ARM_V7A
#include "spl_inl_armv7.h"
#else
static __inline WebRtc_Word16 WebRtcSpl_SatW32ToW16(WebRtc_Word32 value32) {
WebRtc_Word16 out16 = (WebRtc_Word16) value32;
if (value32 > 32767)
out16 = 32767;
else if (value32 < -32768)
out16 = -32768;
return out16;
}
static __inline WebRtc_Word16 WebRtcSpl_AddSatW16(WebRtc_Word16 a,
WebRtc_Word16 b) {
return WebRtcSpl_SatW32ToW16((WebRtc_Word32) a + (WebRtc_Word32) b);
}
static __inline WebRtc_Word32 WebRtcSpl_AddSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_sum;
// perform long addition
l_sum = l_var1 + l_var2;
// check for under or overflow
if (WEBRTC_SPL_IS_NEG(l_var1)) {
if (WEBRTC_SPL_IS_NEG(l_var2) && !WEBRTC_SPL_IS_NEG(l_sum)) {
l_sum = (WebRtc_Word32)0x80000000;
}
} else {
if (!WEBRTC_SPL_IS_NEG(l_var2) && WEBRTC_SPL_IS_NEG(l_sum)) {
l_sum = (WebRtc_Word32)0x7FFFFFFF;
}
}
return l_sum;
}
static __inline WebRtc_Word16 WebRtcSpl_SubSatW16(WebRtc_Word16 var1,
WebRtc_Word16 var2) {
return WebRtcSpl_SatW32ToW16((WebRtc_Word32) var1 - (WebRtc_Word32) var2);
}
static __inline WebRtc_Word32 WebRtcSpl_SubSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_diff;
// perform subtraction
l_diff = l_var1 - l_var2;
// check for underflow
if ((l_var1 < 0) && (l_var2 > 0) && (l_diff > 0))
l_diff = (WebRtc_Word32)0x80000000;
// check for overflow
if ((l_var1 > 0) && (l_var2 < 0) && (l_diff < 0))
l_diff = (WebRtc_Word32)0x7FFFFFFF;
return l_diff;
}
static __inline WebRtc_Word16 WebRtcSpl_GetSizeInBits(WebRtc_UWord32 n) {
int bits;
if (0xFFFF0000 & n) {
bits = 16;
} else {
bits = 0;
}
if (0x0000FF00 & (n >> bits)) bits += 8;
if (0x000000F0 & (n >> bits)) bits += 4;
if (0x0000000C & (n >> bits)) bits += 2;
if (0x00000002 & (n >> bits)) bits += 1;
if (0x00000001 & (n >> bits)) bits += 1;
return bits;
}
static __inline int WebRtcSpl_NormW32(WebRtc_Word32 a) {
int zeros;
if (a <= 0) a ^= 0xFFFFFFFF;
if (!(0xFFFF8000 & a)) {
zeros = 16;
} else {
zeros = 0;
}
if (!(0xFF800000 & (a << zeros))) zeros += 8;
if (!(0xF8000000 & (a << zeros))) zeros += 4;
if (!(0xE0000000 & (a << zeros))) zeros += 2;
if (!(0xC0000000 & (a << zeros))) zeros += 1;
return zeros;
}
static __inline int WebRtcSpl_NormU32(WebRtc_UWord32 a) {
int zeros;
if (a == 0) return 0;
if (!(0xFFFF0000 & a)) {
zeros = 16;
} else {
zeros = 0;
}
if (!(0xFF000000 & (a << zeros))) zeros += 8;
if (!(0xF0000000 & (a << zeros))) zeros += 4;
if (!(0xC0000000 & (a << zeros))) zeros += 2;
if (!(0x80000000 & (a << zeros))) zeros += 1;
return zeros;
}
static __inline int WebRtcSpl_NormW16(WebRtc_Word16 a) {
int zeros;
if (a <= 0) a ^= 0xFFFF;
if (!(0xFF80 & a)) {
zeros = 8;
} else {
zeros = 0;
}
if (!(0xF800 & (a << zeros))) zeros += 4;
if (!(0xE000 & (a << zeros))) zeros += 2;
if (!(0xC000 & (a << zeros))) zeros += 1;
return zeros;
}
static __inline int32_t WebRtc_MulAccumW16(int16_t a,
int16_t b,
int32_t c) {
return (a * b + c);
}
#endif // WEBRTC_ARCH_ARM_V7A
#endif // WEBRTC_SPL_SPL_INL_H_


@ -1,137 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
// This header file includes the inline functions for ARM processors in
// the fix point signal processing library.
#ifndef WEBRTC_SPL_SPL_INL_ARMV7_H_
#define WEBRTC_SPL_SPL_INL_ARMV7_H_
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_16_32_RSFT16(WebRtc_Word16 a,
WebRtc_Word32 b) {
WebRtc_Word32 tmp;
__asm__("smulwb %0, %1, %2":"=r"(tmp):"r"(b), "r"(a));
return tmp;
}
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_32_32_RSFT32(WebRtc_Word16 a,
WebRtc_Word16 b,
WebRtc_Word32 c) {
WebRtc_Word32 tmp;
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(tmp) : "r"(b), "r"(a));
__asm__("smmul %0, %1, %2":"=r"(tmp):"r"(tmp), "r"(c));
return tmp;
}
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_32_32_RSFT32BI(WebRtc_Word32 a,
WebRtc_Word32 b) {
WebRtc_Word32 tmp;
__asm__("smmul %0, %1, %2":"=r"(tmp):"r"(a), "r"(b));
return tmp;
}
static __inline WebRtc_Word32 WEBRTC_SPL_MUL_16_16(WebRtc_Word16 a,
WebRtc_Word16 b) {
WebRtc_Word32 tmp;
__asm__("smulbb %0, %1, %2":"=r"(tmp):"r"(a), "r"(b));
return tmp;
}
static __inline int32_t WebRtc_MulAccumW16(int16_t a,
int16_t b,
int32_t c) {
int32_t tmp = 0;
__asm__("smlabb %0, %1, %2, %3":"=r"(tmp):"r"(a), "r"(b), "r"(c));
return tmp;
}
static __inline WebRtc_Word16 WebRtcSpl_AddSatW16(WebRtc_Word16 a,
WebRtc_Word16 b) {
WebRtc_Word32 s_sum;
__asm__("qadd16 %0, %1, %2":"=r"(s_sum):"r"(a), "r"(b));
return (WebRtc_Word16) s_sum;
}
static __inline WebRtc_Word32 WebRtcSpl_AddSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_sum;
__asm__("qadd %0, %1, %2":"=r"(l_sum):"r"(l_var1), "r"(l_var2));
return l_sum;
}
static __inline WebRtc_Word16 WebRtcSpl_SubSatW16(WebRtc_Word16 var1,
WebRtc_Word16 var2) {
WebRtc_Word32 s_sub;
__asm__("qsub16 %0, %1, %2":"=r"(s_sub):"r"(var1), "r"(var2));
return (WebRtc_Word16)s_sub;
}
static __inline WebRtc_Word32 WebRtcSpl_SubSatW32(WebRtc_Word32 l_var1,
WebRtc_Word32 l_var2) {
WebRtc_Word32 l_sub;
__asm__("qsub %0, %1, %2":"=r"(l_sub):"r"(l_var1), "r"(l_var2));
return l_sub;
}
static __inline WebRtc_Word16 WebRtcSpl_GetSizeInBits(WebRtc_UWord32 n) {
WebRtc_Word32 tmp;
__asm__("clz %0, %1":"=r"(tmp):"r"(n));
return (WebRtc_Word16)(32 - tmp);
}
static __inline int WebRtcSpl_NormW32(WebRtc_Word32 a) {
WebRtc_Word32 tmp;
if (a <= 0) a ^= 0xFFFFFFFF;
__asm__("clz %0, %1":"=r"(tmp):"r"(a));
return tmp - 1;
}
static __inline int WebRtcSpl_NormU32(WebRtc_UWord32 a) {
int tmp;
if (a == 0) return 0;
__asm__("clz %0, %1":"=r"(tmp):"r"(a));
return tmp;
}
static __inline int WebRtcSpl_NormW16(WebRtc_Word16 a) {
WebRtc_Word32 tmp;
if (a <= 0) a ^= 0xFFFFFFFF;
__asm__("clz %0, %1":"=r"(tmp):"r"(a));
return tmp - 17;
}
static __inline WebRtc_Word16 WebRtcSpl_SatW32ToW16(WebRtc_Word32 value32) {
WebRtc_Word16 out16;
__asm__("ssat %r0, #16, %r1" : "=r"(out16) : "r"(value32));
return out16;
}
#endif // WEBRTC_SPL_SPL_INL_ARMV7_H_


@ -1,141 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_AutoCorrelation().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
int WebRtcSpl_AutoCorrelation(G_CONST WebRtc_Word16* in_vector,
int in_vector_length,
int order,
WebRtc_Word32* result,
int* scale)
{
WebRtc_Word32 sum;
int i, j;
WebRtc_Word16 smax; // Sample max
G_CONST WebRtc_Word16* xptr1;
G_CONST WebRtc_Word16* xptr2;
WebRtc_Word32* resultptr;
int scaling = 0;
#ifdef _ARM_OPT_
#pragma message("NOTE: _ARM_OPT_ optimizations are used")
WebRtc_Word16 loops4;
#endif
if (order < 0)
order = in_vector_length;
// Find the max. sample
smax = WebRtcSpl_MaxAbsValueW16(in_vector, in_vector_length);
// In order to avoid overflow when computing the sum we should scale the samples so that
// (in_vector_length * smax * smax) will not overflow.
if (smax == 0)
{
scaling = 0;
} else
{
int nbits = WebRtcSpl_GetSizeInBits(in_vector_length); // # of bits in the sum loop
int t = WebRtcSpl_NormW32(WEBRTC_SPL_MUL(smax, smax)); // # of bits to normalize smax
if (t > nbits)
{
scaling = 0;
} else
{
scaling = nbits - t;
}
}
resultptr = result;
// Perform the actual correlation calculation
for (i = 0; i < order + 1; i++)
{
int loops = (in_vector_length - i);
sum = 0;
xptr1 = in_vector;
xptr2 = &in_vector[i];
#ifndef _ARM_OPT_
for (j = loops; j > 0; j--)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1++, *xptr2++, scaling);
}
#else
loops4 = (loops >> 2) << 2;
if (scaling == 0)
{
for (j = 0; j < loops4; j = j + 4)
{
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
}
for (j = loops4; j < loops; j++)
{
sum += WEBRTC_SPL_MUL_16_16(*xptr1, *xptr2);
xptr1++;
xptr2++;
}
}
else
{
for (j = 0; j < loops4; j = j + 4)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
}
for (j = loops4; j < loops; j++)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*xptr1, *xptr2, scaling);
xptr1++;
xptr2++;
}
}
#endif
*resultptr++ = sum;
}
*scale = scaling;
return order + 1;
}


@ -1,51 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_ComplexBitReverse().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_ComplexBitReverse(WebRtc_Word16 frfi[], int stages)
{
int mr, nn, n, l, m;
WebRtc_Word16 tr, ti;
n = 1 << stages;
mr = 0;
nn = n - 1;
// decimation in time - re-order data
for (m = 1; m <= nn; ++m)
{
l = n;
do
{
l >>= 1;
} while (mr + l > nn);
mr = (mr & (l - 1)) + l;
if (mr <= m)
continue;
tr = frfi[2 * m];
frfi[2 * m] = frfi[2 * mr];
frfi[2 * mr] = tr;
ti = frfi[2 * m + 1];
frfi[2 * m + 1] = frfi[2 * mr + 1];
frfi[2 * mr + 1] = ti;
}
}


@ -1,150 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_ComplexFFT().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define CFFTSFT 14
#define CFFTRND 1
#define CFFTRND2 16384
int WebRtcSpl_ComplexFFT(WebRtc_Word16 frfi[], int stages, int mode)
{
int i, j, l, k, istep, n, m;
WebRtc_Word16 wr, wi;
WebRtc_Word32 tr32, ti32, qr32, qi32;
/* The 1024-value is a constant given from the size of WebRtcSpl_kSinTable1024[],
* and should not be changed depending on the input parameter 'stages'
*/
n = 1 << stages;
if (n > 1024)
return -1;
l = 1;
k = 10 - 1; /* Constant for given WebRtcSpl_kSinTable1024[]. Do not change
depending on the input parameter 'stages' */
if (mode == 0)
{
// mode==0: Low-complexity and Low-accuracy mode
while (l < n)
{
istep = l << 1;
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = -WebRtcSpl_kSinTable1024[j];
for (i = m; i < n; i += istep)
{
j = i + l;
tr32 = WEBRTC_SPL_RSHIFT_W32((WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j])
- WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j + 1])), 15);
ti32 = WEBRTC_SPL_RSHIFT_W32((WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j + 1])
+ WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j])), 15);
qr32 = (WebRtc_Word32)frfi[2 * i];
qi32 = (WebRtc_Word32)frfi[2 * i + 1];
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 - tr32, 1);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 - ti32, 1);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 + tr32, 1);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 + ti32, 1);
}
}
--k;
l = istep;
}
} else
{
// mode==1: High-complexity and High-accuracy mode
while (l < n)
{
istep = l << 1;
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = -WebRtcSpl_kSinTable1024[j];
#ifdef WEBRTC_ARCH_ARM_V7A
WebRtc_Word32 wri;
WebRtc_Word32 frfi_r;
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(wri) :
"r"((WebRtc_Word32)wr), "r"((WebRtc_Word32)wi));
#endif
for (i = m; i < n; i += istep)
{
j = i + l;
#ifdef WEBRTC_ARCH_ARM_V7A
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(frfi_r) :
"r"((WebRtc_Word32)frfi[2*j]), "r"((WebRtc_Word32)frfi[2*j +1]));
__asm__("smlsd %0, %1, %2, %3" : "=r"(tr32) :
"r"(wri), "r"(frfi_r), "r"(CFFTRND));
__asm__("smladx %0, %1, %2, %3" : "=r"(ti32) :
"r"(wri), "r"(frfi_r), "r"(CFFTRND));
#else
tr32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j])
- WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j + 1]) + CFFTRND;
ti32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j + 1])
+ WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j]) + CFFTRND;
#endif
tr32 = WEBRTC_SPL_RSHIFT_W32(tr32, 15 - CFFTSFT);
ti32 = WEBRTC_SPL_RSHIFT_W32(ti32, 15 - CFFTSFT);
qr32 = ((WebRtc_Word32)frfi[2 * i]) << CFFTSFT;
qi32 = ((WebRtc_Word32)frfi[2 * i + 1]) << CFFTSFT;
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qr32 - tr32 + CFFTRND2), 1 + CFFTSFT);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 - ti32 + CFFTRND2), 1 + CFFTSFT);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qr32 + tr32 + CFFTRND2), 1 + CFFTSFT);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 + ti32 + CFFTRND2), 1 + CFFTSFT);
}
}
--k;
l = istep;
}
}
return 0;
}


@ -1,161 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_ComplexIFFT().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define CIFFTSFT 14
#define CIFFTRND 1
int WebRtcSpl_ComplexIFFT(WebRtc_Word16 frfi[], int stages, int mode)
{
int i, j, l, k, istep, n, m, scale, shift;
WebRtc_Word16 wr, wi;
WebRtc_Word32 tr32, ti32, qr32, qi32;
WebRtc_Word32 tmp32, round2;
/* The 1024-value is a constant given from the size of WebRtcSpl_kSinTable1024[],
* and should not be changed depending on the input parameter 'stages'
*/
n = 1 << stages;
if (n > 1024)
return -1;
scale = 0;
l = 1;
k = 10 - 1; /* Constant for given WebRtcSpl_kSinTable1024[]. Do not change
depending on the input parameter 'stages' */
while (l < n)
{
// variable scaling, depending upon data
shift = 0;
round2 = 8192;
tmp32 = (WebRtc_Word32)WebRtcSpl_MaxAbsValueW16(frfi, 2 * n);
if (tmp32 > 13573)
{
shift++;
scale++;
round2 <<= 1;
}
if (tmp32 > 27146)
{
shift++;
scale++;
round2 <<= 1;
}
istep = l << 1;
if (mode == 0)
{
// mode==0: Low-complexity and Low-accuracy mode
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = WebRtcSpl_kSinTable1024[j];
for (i = m; i < n; i += istep)
{
j = i + l;
tr32 = WEBRTC_SPL_RSHIFT_W32((WEBRTC_SPL_MUL_16_16_RSFT(wr, frfi[2 * j], 0)
- WEBRTC_SPL_MUL_16_16_RSFT(wi, frfi[2 * j + 1], 0)), 15);
ti32 = WEBRTC_SPL_RSHIFT_W32(
(WEBRTC_SPL_MUL_16_16_RSFT(wr, frfi[2 * j + 1], 0)
+ WEBRTC_SPL_MUL_16_16_RSFT(wi,frfi[2*j],0)), 15);
qr32 = (WebRtc_Word32)frfi[2 * i];
qi32 = (WebRtc_Word32)frfi[2 * i + 1];
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 - tr32, shift);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 - ti32, shift);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qr32 + tr32, shift);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(qi32 + ti32, shift);
}
}
} else
{
// mode==1: High-complexity and High-accuracy mode
for (m = 0; m < l; ++m)
{
j = m << k;
/* The 256-value is a constant given as 1/4 of the size of
* WebRtcSpl_kSinTable1024[], and should not be changed depending on the input
* parameter 'stages'. It will result in 0 <= j < N_SINE_WAVE/2
*/
wr = WebRtcSpl_kSinTable1024[j + 256];
wi = WebRtcSpl_kSinTable1024[j];
#ifdef WEBRTC_ARCH_ARM_V7A
WebRtc_Word32 wri;
WebRtc_Word32 frfi_r;
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(wri) :
"r"((WebRtc_Word32)wr), "r"((WebRtc_Word32)wi));
#endif
for (i = m; i < n; i += istep)
{
j = i + l;
#ifdef WEBRTC_ARCH_ARM_V7A
__asm__("pkhbt %0, %1, %2, lsl #16" : "=r"(frfi_r) :
"r"((WebRtc_Word32)frfi[2*j]), "r"((WebRtc_Word32)frfi[2*j +1]));
__asm__("smlsd %0, %1, %2, %3" : "=r"(tr32) :
"r"(wri), "r"(frfi_r), "r"(CIFFTRND));
__asm__("smladx %0, %1, %2, %3" : "=r"(ti32) :
"r"(wri), "r"(frfi_r), "r"(CIFFTRND));
#else
tr32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j])
- WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j + 1]) + CIFFTRND;
ti32 = WEBRTC_SPL_MUL_16_16(wr, frfi[2 * j + 1])
+ WEBRTC_SPL_MUL_16_16(wi, frfi[2 * j]) + CIFFTRND;
#endif
tr32 = WEBRTC_SPL_RSHIFT_W32(tr32, 15 - CIFFTSFT);
ti32 = WEBRTC_SPL_RSHIFT_W32(ti32, 15 - CIFFTSFT);
qr32 = ((WebRtc_Word32)frfi[2 * i]) << CIFFTSFT;
qi32 = ((WebRtc_Word32)frfi[2 * i + 1]) << CIFFTSFT;
frfi[2 * j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((qr32 - tr32+round2),
shift+CIFFTSFT);
frfi[2 * j + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 - ti32 + round2), shift + CIFFTSFT);
frfi[2 * i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((qr32 + tr32 + round2),
shift + CIFFTSFT);
frfi[2 * i + 1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(
(qi32 + ti32 + round2), shift + CIFFTSFT);
}
}
}
--k;
l = istep;
}
return scale;
}
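A minimal usage sketch for the inverse transform above. The interleaved real/imaginary buffer layout, the 1024-point limit and the returned scale follow from the code; the stage count and the choice of the high-accuracy mode are illustrative only.

#include "signal_processing_library.h"

#define STAGES 8   /* n = 1 << STAGES = 256 complex values, must stay <= 1024 */

void inverse_transform_example(WebRtc_Word16 frfi[2 * (1 << STAGES)])
{
    /* frfi holds interleaved values: frfi[2*k] = Re(X_k), frfi[2*k+1] = Im(X_k). */
    int scale = WebRtcSpl_ComplexIFFT(frfi, STAGES, 1 /* high-accuracy mode */);
    if (scale < 0)
    {
        return;  /* stages too large for the 1024-point sine table */
    }
    /* The output has been right-shifted 'scale' times in total; the caller
     * accounts for the 2^scale attenuation in later gain stages. */
}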


@ -1,108 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the implementation of functions
* WebRtcSpl_MemSetW16()
* WebRtcSpl_MemSetW32()
* WebRtcSpl_MemCpyReversedOrder()
* WebRtcSpl_CopyFromEndW16()
* WebRtcSpl_ZerosArrayW16()
* WebRtcSpl_ZerosArrayW32()
* WebRtcSpl_OnesArrayW16()
* WebRtcSpl_OnesArrayW32()
*
* The description header can be found in signal_processing_library.h
*
*/
#include <string.h>
#include "signal_processing_library.h"
void WebRtcSpl_MemSetW16(WebRtc_Word16 *ptr, WebRtc_Word16 set_value, int length)
{
int j;
WebRtc_Word16 *arrptr = ptr;
for (j = length; j > 0; j--)
{
*arrptr++ = set_value;
}
}
void WebRtcSpl_MemSetW32(WebRtc_Word32 *ptr, WebRtc_Word32 set_value, int length)
{
int j;
WebRtc_Word32 *arrptr = ptr;
for (j = length; j > 0; j--)
{
*arrptr++ = set_value;
}
}
void WebRtcSpl_MemCpyReversedOrder(WebRtc_Word16* dest, WebRtc_Word16* source, int length)
{
int j;
WebRtc_Word16* destPtr = dest;
WebRtc_Word16* sourcePtr = source;
for (j = 0; j < length; j++)
{
*destPtr-- = *sourcePtr++;
}
}
WebRtc_Word16 WebRtcSpl_CopyFromEndW16(G_CONST WebRtc_Word16 *vector_in,
WebRtc_Word16 length,
WebRtc_Word16 samples,
WebRtc_Word16 *vector_out)
{
// Copy the last <samples> of the input vector to vector_out
WEBRTC_SPL_MEMCPY_W16(vector_out, &vector_in[length - samples], samples);
return samples;
}
WebRtc_Word16 WebRtcSpl_ZerosArrayW16(WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtcSpl_MemSetW16(vector, 0, length);
return length;
}
WebRtc_Word16 WebRtcSpl_ZerosArrayW32(WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtcSpl_MemSetW32(vector, 0, length);
return length;
}
WebRtc_Word16 WebRtcSpl_OnesArrayW16(WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 i;
WebRtc_Word16 *tmpvec = vector;
for (i = 0; i < length; i++)
{
*tmpvec++ = 1;
}
return length;
}
WebRtc_Word16 WebRtcSpl_OnesArrayW32(WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 i;
WebRtc_Word32 *tmpvec = vector;
for (i = 0; i < length; i++)
{
*tmpvec++ = 1;
}
return length;
}
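A short sketch of how the helpers above compose. Buffer sizes and fill values are arbitrary; note that WebRtcSpl_MemCpyReversedOrder writes backwards from the destination pointer, so it is handed the last element of the destination range.

#include "signal_processing_library.h"

void buffer_helpers_example(void)
{
    WebRtc_Word16 a[8], b[8], tail[4];

    WebRtcSpl_ZerosArrayW16(a, 8);      /* a = {0, 0, ..., 0} */
    WebRtcSpl_OnesArrayW16(b, 8);       /* b = {1, 1, ..., 1} */
    WebRtcSpl_MemSetW16(b, 3, 4);       /* b = {3, 3, 3, 3, 1, 1, 1, 1} */

    /* Reversed copy of b into a; the destination pointer is the last element. */
    WebRtcSpl_MemCpyReversedOrder(&a[7], b, 8);

    /* Copy the last 4 samples of b into tail. */
    WebRtcSpl_CopyFromEndW16(b, 8, 4, tail);
}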


@ -1,60 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the 360 degree cos table.
*
*/
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_kCosTable[] = {
8192, 8190, 8187, 8180, 8172, 8160, 8147, 8130, 8112,
8091, 8067, 8041, 8012, 7982, 7948, 7912, 7874, 7834,
7791, 7745, 7697, 7647, 7595, 7540, 7483, 7424, 7362,
7299, 7233, 7164, 7094, 7021, 6947, 6870, 6791, 6710,
6627, 6542, 6455, 6366, 6275, 6182, 6087, 5991, 5892,
5792, 5690, 5586, 5481, 5374, 5265, 5155, 5043, 4930,
4815, 4698, 4580, 4461, 4341, 4219, 4096, 3971, 3845,
3719, 3591, 3462, 3331, 3200, 3068, 2935, 2801, 2667,
2531, 2395, 2258, 2120, 1981, 1842, 1703, 1563, 1422,
1281, 1140, 998, 856, 713, 571, 428, 285, 142,
0, -142, -285, -428, -571, -713, -856, -998, -1140,
-1281, -1422, -1563, -1703, -1842, -1981, -2120, -2258, -2395,
-2531, -2667, -2801, -2935, -3068, -3200, -3331, -3462, -3591,
-3719, -3845, -3971, -4095, -4219, -4341, -4461, -4580, -4698,
-4815, -4930, -5043, -5155, -5265, -5374, -5481, -5586, -5690,
-5792, -5892, -5991, -6087, -6182, -6275, -6366, -6455, -6542,
-6627, -6710, -6791, -6870, -6947, -7021, -7094, -7164, -7233,
-7299, -7362, -7424, -7483, -7540, -7595, -7647, -7697, -7745,
-7791, -7834, -7874, -7912, -7948, -7982, -8012, -8041, -8067,
-8091, -8112, -8130, -8147, -8160, -8172, -8180, -8187, -8190,
-8191, -8190, -8187, -8180, -8172, -8160, -8147, -8130, -8112,
-8091, -8067, -8041, -8012, -7982, -7948, -7912, -7874, -7834,
-7791, -7745, -7697, -7647, -7595, -7540, -7483, -7424, -7362,
-7299, -7233, -7164, -7094, -7021, -6947, -6870, -6791, -6710,
-6627, -6542, -6455, -6366, -6275, -6182, -6087, -5991, -5892,
-5792, -5690, -5586, -5481, -5374, -5265, -5155, -5043, -4930,
-4815, -4698, -4580, -4461, -4341, -4219, -4096, -3971, -3845,
-3719, -3591, -3462, -3331, -3200, -3068, -2935, -2801, -2667,
-2531, -2395, -2258, -2120, -1981, -1842, -1703, -1563, -1422,
-1281, -1140, -998, -856, -713, -571, -428, -285, -142,
0, 142, 285, 428, 571, 713, 856, 998, 1140,
1281, 1422, 1563, 1703, 1842, 1981, 2120, 2258, 2395,
2531, 2667, 2801, 2935, 3068, 3200, 3331, 3462, 3591,
3719, 3845, 3971, 4095, 4219, 4341, 4461, 4580, 4698,
4815, 4930, 5043, 5155, 5265, 5374, 5481, 5586, 5690,
5792, 5892, 5991, 6087, 6182, 6275, 6366, 6455, 6542,
6627, 6710, 6791, 6870, 6947, 7021, 7094, 7164, 7233,
7299, 7362, 7424, 7483, 7540, 7595, 7647, 7697, 7745,
7791, 7834, 7874, 7912, 7948, 7982, 8012, 8041, 8067,
8091, 8112, 8130, 8147, 8160, 8172, 8180, 8187, 8190
};


@ -1,267 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_CrossCorrelation().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_CrossCorrelation(WebRtc_Word32* cross_correlation, WebRtc_Word16* seq1,
WebRtc_Word16* seq2, WebRtc_Word16 dim_seq,
WebRtc_Word16 dim_cross_correlation,
WebRtc_Word16 right_shifts,
WebRtc_Word16 step_seq2)
{
int i, j;
WebRtc_Word16* seq1Ptr;
WebRtc_Word16* seq2Ptr;
WebRtc_Word32* CrossCorrPtr;
#ifdef _XSCALE_OPT_
#ifdef _WIN32
#pragma message("NOTE: _XSCALE_OPT_ optimizations are used (overrides _ARM_OPT_ and requires /QRxscale compiler flag)")
#endif
__int64 macc40;
int iseq1[250];
int iseq2[250];
int iseq3[250];
int * iseq1Ptr;
int * iseq2Ptr;
int * iseq3Ptr;
int len, i_len;
seq1Ptr = seq1;
iseq1Ptr = iseq1;
for(i = 0; i < ((dim_seq + 1) >> 1); i++)
{
*iseq1Ptr = (unsigned short)*seq1Ptr++;
*iseq1Ptr++ |= (WebRtc_Word32)*seq1Ptr++ << 16;
}
if(dim_seq%2)
{
*(iseq1Ptr-1) &= 0x0000ffff;
}
*iseq1Ptr = 0;
iseq1Ptr++;
*iseq1Ptr = 0;
iseq1Ptr++;
*iseq1Ptr = 0;
if(step_seq2 < 0)
{
seq2Ptr = seq2 - dim_cross_correlation + 1;
CrossCorrPtr = &cross_correlation[dim_cross_correlation - 1];
}
else
{
seq2Ptr = seq2;
CrossCorrPtr = cross_correlation;
}
len = dim_seq + dim_cross_correlation - 1;
i_len = (len + 1) >> 1;
iseq2Ptr = iseq2;
iseq3Ptr = iseq3;
for(i = 0; i < i_len; i++)
{
*iseq2Ptr = (unsigned short)*seq2Ptr++;
*iseq3Ptr = (unsigned short)*seq2Ptr;
*iseq2Ptr++ |= (WebRtc_Word32)*seq2Ptr++ << 16;
*iseq3Ptr++ |= (WebRtc_Word32)*seq2Ptr << 16;
}
if(len % 2)
{
iseq2[i_len - 1] &= 0x0000ffff;
iseq3[i_len - 1] = 0;
}
else
iseq3[i_len - 1] &= 0x0000ffff;
iseq2[i_len] = 0;
iseq3[i_len] = 0;
iseq2[i_len + 1] = 0;
iseq3[i_len + 1] = 0;
iseq2[i_len + 2] = 0;
iseq3[i_len + 2] = 0;
// Set pointer to start value
iseq2Ptr = iseq2;
iseq3Ptr = iseq3;
i_len = (dim_seq + 7) >> 3;
for (i = 0; i < dim_cross_correlation; i++)
{
iseq1Ptr = iseq1;
macc40 = 0;
_WriteCoProcessor(macc40, 0);
if((i & 1))
{
iseq3Ptr = iseq3 + (i >> 1);
for (j = i_len; j > 0; j--)
{
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq3Ptr++);
}
}
else
{
iseq2Ptr = iseq2 + (i >> 1);
for (j = i_len; j > 0; j--)
{
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
_SmulAddPack_2SW_ACC(*iseq1Ptr++, *iseq2Ptr++);
}
}
macc40 = _ReadCoProcessor(0);
*CrossCorrPtr = (WebRtc_Word32)(macc40 >> right_shifts);
CrossCorrPtr += step_seq2;
}
#else // #ifdef _XSCALE_OPT_
#ifdef _ARM_OPT_
WebRtc_Word16 dim_seq8 = (dim_seq >> 3) << 3;
#endif
CrossCorrPtr = cross_correlation;
for (i = 0; i < dim_cross_correlation; i++)
{
// Set the pointer to the static vector, set the pointer to the sliding vector
// and initialize cross_correlation
seq1Ptr = seq1;
seq2Ptr = seq2 + (step_seq2 * i);
(*CrossCorrPtr) = 0;
#ifndef _ARM_OPT_
#ifdef _WIN32
#pragma message("NOTE: default implementation is used")
#endif
// Perform the cross correlation
for (j = 0; j < dim_seq; j++)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr), right_shifts);
seq1Ptr++;
seq2Ptr++;
}
#else
#ifdef _WIN32
#pragma message("NOTE: _ARM_OPT_ optimizations are used")
#endif
if (right_shifts == 0)
{
// Perform the optimized cross correlation
for (j = 0; j < dim_seq8; j = j + 8)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
}
for (j = dim_seq8; j < dim_seq; j++)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16((*seq1Ptr), (*seq2Ptr));
seq1Ptr++;
seq2Ptr++;
}
}
else // right_shifts != 0
{
// Perform the optimized cross correlation
for (j = 0; j < dim_seq8; j = j + 8)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
}
for (j = dim_seq8; j < dim_seq; j++)
{
(*CrossCorrPtr) += WEBRTC_SPL_MUL_16_16_RSFT((*seq1Ptr), (*seq2Ptr),
right_shifts);
seq1Ptr++;
seq2Ptr++;
}
}
#endif
CrossCorrPtr++;
}
#endif
}
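A sketch of driving the default (non-XScale, non-ARM) path above: correlating a fixed sequence against successive lags of a longer one. The frame length, lag count and shift amount are illustrative values, not library constants.

#include "signal_processing_library.h"

void cross_correlation_example(WebRtc_Word16* frame /* >= 99 samples (80 + 20 - 1) */)
{
    WebRtc_Word32 corr[20];

    WebRtcSpl_CrossCorrelation(corr,
                               frame,      /* seq1: fixed 80-sample template    */
                               frame,      /* seq2: sliding sequence            */
                               80,         /* dim_seq                           */
                               20,         /* dim_cross_correlation (lags)      */
                               2,          /* right_shifts for headroom         */
                               1);         /* step_seq2: advance one sample/lag */
}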


@ -1,144 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the divisions
* WebRtcSpl_DivU32U16()
* WebRtcSpl_DivW32W16()
* WebRtcSpl_DivW32W16ResW16()
* WebRtcSpl_DivResultInQ31()
* WebRtcSpl_DivW32HiLow()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
WebRtc_UWord32 WebRtcSpl_DivU32U16(WebRtc_UWord32 num, WebRtc_UWord16 den)
{
// Guard against division with 0
if (den != 0)
{
return (WebRtc_UWord32)(num / den);
} else
{
return (WebRtc_UWord32)0xFFFFFFFF;
}
}
WebRtc_Word32 WebRtcSpl_DivW32W16(WebRtc_Word32 num, WebRtc_Word16 den)
{
// Guard against division with 0
if (den != 0)
{
return (WebRtc_Word32)(num / den);
} else
{
return (WebRtc_Word32)0x7FFFFFFF;
}
}
WebRtc_Word16 WebRtcSpl_DivW32W16ResW16(WebRtc_Word32 num, WebRtc_Word16 den)
{
// Guard against division with 0
if (den != 0)
{
return (WebRtc_Word16)(num / den);
} else
{
return (WebRtc_Word16)0x7FFF;
}
}
WebRtc_Word32 WebRtcSpl_DivResultInQ31(WebRtc_Word32 num, WebRtc_Word32 den)
{
WebRtc_Word32 L_num = num;
WebRtc_Word32 L_den = den;
WebRtc_Word32 div = 0;
int k = 31;
int change_sign = 0;
if (num == 0)
return 0;
if (num < 0)
{
change_sign++;
L_num = -num;
}
if (den < 0)
{
change_sign++;
L_den = -den;
}
while (k--)
{
div <<= 1;
L_num <<= 1;
if (L_num >= L_den)
{
L_num -= L_den;
div++;
}
}
if (change_sign == 1)
{
div = -div;
}
return div;
}
WebRtc_Word32 WebRtcSpl_DivW32HiLow(WebRtc_Word32 num, WebRtc_Word16 den_hi,
WebRtc_Word16 den_low)
{
WebRtc_Word16 approx, tmp_hi, tmp_low, num_hi, num_low;
WebRtc_Word32 tmpW32;
approx = (WebRtc_Word16)WebRtcSpl_DivW32W16((WebRtc_Word32)0x1FFFFFFF, den_hi);
// result in Q14 (Note: 3FFFFFFF = 0.5 in Q30)
// tmpW32 = 1/den = approx * (2.0 - den * approx) (in Q30)
tmpW32 = (WEBRTC_SPL_MUL_16_16(den_hi, approx) << 1)
+ ((WEBRTC_SPL_MUL_16_16(den_low, approx) >> 15) << 1);
// tmpW32 = den * approx
tmpW32 = (WebRtc_Word32)0x7fffffffL - tmpW32; // result in Q30 (tmpW32 = 2.0-(den*approx))
// Store tmpW32 in hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmpW32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((tmpW32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// tmpW32 = 1/den in Q29
tmpW32 = ((WEBRTC_SPL_MUL_16_16(tmp_hi, approx) + (WEBRTC_SPL_MUL_16_16(tmp_low, approx)
>> 15)) << 1);
// 1/den in hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmpW32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((tmpW32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// Store num in hi and low format
num_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(num, 16);
num_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((num
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)num_hi, 16)), 1);
// num * (1/den) by 32 bit multiplication (result in Q28)
tmpW32 = (WEBRTC_SPL_MUL_16_16(num_hi, tmp_hi) + (WEBRTC_SPL_MUL_16_16(num_hi, tmp_low)
>> 15) + (WEBRTC_SPL_MUL_16_16(num_low, tmp_hi) >> 15));
// Put result in Q31 (convert from Q28)
tmpW32 = WEBRTC_SPL_LSHIFT_W32(tmpW32, 3);
return tmpW32;
}
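Two small worked calls against the helpers above; the operands are arbitrary. For WebRtcSpl_DivResultInQ31 the magnitude of num must stay below den, otherwise the Q31 result overflows.

#include "signal_processing_library.h"

void division_example(void)
{
    /* Guarded integer division: would return 0x7FFFFFFF if den were 0. */
    WebRtc_Word32 q = WebRtcSpl_DivW32W16(100000, 3);                 /* 33333 */

    /* Fractional division in Q31: (2^29)/(2^30) = 0.25/0.5 = 0.5,
     * i.e. 0x40000000 in Q31. */
    WebRtc_Word32 half = WebRtcSpl_DivResultInQ31(1 << 29, 1 << 30);

    (void)q;
    (void)half;
}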


@ -1,91 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_DotProductWithScale().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
WebRtc_Word32 WebRtcSpl_DotProductWithScale(WebRtc_Word16 *vector1, WebRtc_Word16 *vector2,
int length, int scaling)
{
WebRtc_Word32 sum;
int i;
#ifdef _ARM_OPT_
#pragma message("NOTE: _ARM_OPT_ optimizations are used")
WebRtc_Word16 len4 = (length >> 2) << 2;
#endif
sum = 0;
#ifndef _ARM_OPT_
for (i = 0; i < length; i++)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1++, *vector2++, scaling);
}
#else
if (scaling == 0)
{
for (i = 0; i < len4; i = i + 4)
{
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
}
for (i = len4; i < length; i++)
{
sum += WEBRTC_SPL_MUL_16_16(*vector1, *vector2);
vector1++;
vector2++;
}
}
else
{
for (i = 0; i < len4; i = i + 4)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
}
for (i = len4; i < length; i++)
{
sum += WEBRTC_SPL_MUL_16_16_RSFT(*vector1, *vector2, scaling);
vector1++;
vector2++;
}
}
#endif
return sum;
}
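A typical use of the dot product above is frame energy, i.e. the product of a vector with itself. The scaling value is an illustrative guess at the headroom needed for full-scale 16-bit input.

#include "signal_processing_library.h"

WebRtc_Word32 frame_energy_example(WebRtc_Word16* frame, int length)
{
    /* Accumulates (frame[i] * frame[i]) >> 4; scaling trades precision
     * for 32-bit accumulator headroom. */
    return WebRtcSpl_DotProductWithScale(frame, frame, length, 4);
}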


@ -1,59 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_DownsampleFast().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
int WebRtcSpl_DownsampleFast(WebRtc_Word16 *in_ptr, WebRtc_Word16 in_length,
WebRtc_Word16 *out_ptr, WebRtc_Word16 out_length,
WebRtc_Word16 *B, WebRtc_Word16 B_length, WebRtc_Word16 factor,
WebRtc_Word16 delay)
{
WebRtc_Word32 o;
int i, j;
WebRtc_Word16 *downsampled_ptr = out_ptr;
WebRtc_Word16 *b_ptr;
WebRtc_Word16 *x_ptr;
WebRtc_Word16 endpos = delay
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16(factor, (out_length - 1)) + 1;
if (in_length < endpos)
{
return -1;
}
for (i = delay; i < endpos; i += factor)
{
b_ptr = &B[0];
x_ptr = &in_ptr[i];
o = (WebRtc_Word32)2048; // Round val
for (j = 0; j < B_length; j++)
{
o += WEBRTC_SPL_MUL_16_16(*b_ptr++, *x_ptr--);
}
o = WEBRTC_SPL_RSHIFT_W32(o, 12);
// If output is higher than 32768, saturate it. Same with negative side
*downsampled_ptr++ = WebRtcSpl_SatW32ToW16(o);
}
return 0;
}
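A sketch of a decimate-by-two call. The endpos check in the code dictates the minimum input length; the 3-tap Q12 filter here is a hypothetical placeholder, not a coefficient set shipped with the library.

#include "signal_processing_library.h"

void downsample_example(WebRtc_Word16* in /* >= 81 samples */, WebRtc_Word16* out /* 40 samples */)
{
    /* Hypothetical low-pass in Q12; the taps sum to 4096 (unity gain). */
    WebRtc_Word16 B[3] = { 1024, 2048, 1024 };

    /* Needs in_length >= delay + factor * (out_length - 1) + 1 = 2 + 2*39 + 1 = 81. */
    if (WebRtcSpl_DownsampleFast(in, 81, out, 40, B, 3, 2 /* factor */, 2 /* delay */) != 0)
    {
        /* Input too short for the requested output length. */
    }
}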


@ -1,89 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_FilterAR().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
int WebRtcSpl_FilterAR(G_CONST WebRtc_Word16* a,
int a_length,
G_CONST WebRtc_Word16* x,
int x_length,
WebRtc_Word16* state,
int state_length,
WebRtc_Word16* state_low,
int state_low_length,
WebRtc_Word16* filtered,
WebRtc_Word16* filtered_low,
int filtered_low_length)
{
WebRtc_Word32 o;
WebRtc_Word32 oLOW;
int i, j, stop;
G_CONST WebRtc_Word16* x_ptr = &x[0];
WebRtc_Word16* filteredFINAL_ptr = filtered;
WebRtc_Word16* filteredFINAL_LOW_ptr = filtered_low;
for (i = 0; i < x_length; i++)
{
// Calculate filtered[i] and filtered_low[i]
G_CONST WebRtc_Word16* a_ptr = &a[1];
WebRtc_Word16* filtered_ptr = &filtered[i - 1];
WebRtc_Word16* filtered_low_ptr = &filtered_low[i - 1];
WebRtc_Word16* state_ptr = &state[state_length - 1];
WebRtc_Word16* state_low_ptr = &state_low[state_length - 1];
o = (WebRtc_Word32)(*x_ptr++) << 12;
oLOW = (WebRtc_Word32)0;
stop = (i < a_length) ? i + 1 : a_length;
for (j = 1; j < stop; j++)
{
o -= WEBRTC_SPL_MUL_16_16(*a_ptr, *filtered_ptr--);
oLOW -= WEBRTC_SPL_MUL_16_16(*a_ptr++, *filtered_low_ptr--);
}
for (j = i + 1; j < a_length; j++)
{
o -= WEBRTC_SPL_MUL_16_16(*a_ptr, *state_ptr--);
oLOW -= WEBRTC_SPL_MUL_16_16(*a_ptr++, *state_low_ptr--);
}
o += (oLOW >> 12);
*filteredFINAL_ptr = (WebRtc_Word16)((o + (WebRtc_Word32)2048) >> 12);
*filteredFINAL_LOW_ptr++ = (WebRtc_Word16)(o - ((WebRtc_Word32)(*filteredFINAL_ptr++)
<< 12));
}
// Save the filter state
if (x_length >= state_length)
{
WebRtcSpl_CopyFromEndW16(filtered, x_length, a_length - 1, state);
WebRtcSpl_CopyFromEndW16(filtered_low, x_length, a_length - 1, state_low);
} else
{
for (i = 0; i < state_length - x_length; i++)
{
state[i] = state[i + x_length];
state_low[i] = state_low[i + x_length];
}
for (i = 0; i < x_length; i++)
{
state[state_length - x_length + i] = filtered[i];
state_low[state_length - x_length + i] = filtered_low[i];
}
}
return x_length;
}


@ -1,49 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_FilterARFastQ12().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_FilterARFastQ12(WebRtc_Word16 *in, WebRtc_Word16 *out, WebRtc_Word16 *A,
WebRtc_Word16 A_length, WebRtc_Word16 length)
{
WebRtc_Word32 o;
int i, j;
WebRtc_Word16 *x_ptr = &in[0];
WebRtc_Word16 *filtered_ptr = &out[0];
for (i = 0; i < length; i++)
{
// Calculate filtered[i]
G_CONST WebRtc_Word16 *a_ptr = &A[0];
WebRtc_Word16 *state_ptr = &out[i - 1];
o = WEBRTC_SPL_MUL_16_16(*x_ptr++, *a_ptr++);
for (j = 1; j < A_length; j++)
{
o -= WEBRTC_SPL_MUL_16_16(*a_ptr++,*state_ptr--);
}
// Saturate the output
o = WEBRTC_SPL_SAT((WebRtc_Word32)134215679, o, (WebRtc_Word32)-134217728);
*filtered_ptr++ = (WebRtc_Word16)((o + (WebRtc_Word32)2048) >> 12);
}
return;
}
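A sketch of calling the AR filter above. Because the recursion reads out[i-1] .. out[i-(A_length-1)], the caller leaves A_length-1 history samples directly in front of the output region; the order, frame length and Q12 coefficients below are hypothetical.

#include "signal_processing_library.h"

#define AR_ORDER 10            /* A_length - 1 */
#define FRAME_LEN 160

void ar_filter_example(WebRtc_Word16* in /* FRAME_LEN samples */)
{
    /* History (here zeros) sits directly in front of the samples being written. */
    WebRtc_Word16 buf[AR_ORDER + FRAME_LEN] = { 0 };
    WebRtc_Word16* out = buf + AR_ORDER;

    /* A[0] scales the input (4096 = 1.0 in Q12); A[1..] are the AR taps. */
    WebRtc_Word16 A[AR_ORDER + 1] = { 4096, -2048, 512 };  /* rest zero */

    WebRtcSpl_FilterARFastQ12(in, out, A, AR_ORDER + 1, FRAME_LEN);
}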


@ -1,49 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_FilterMAFastQ12().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_FilterMAFastQ12(WebRtc_Word16* in_ptr,
WebRtc_Word16* out_ptr,
WebRtc_Word16* B,
WebRtc_Word16 B_length,
WebRtc_Word16 length)
{
WebRtc_Word32 o;
int i, j;
for (i = 0; i < length; i++)
{
G_CONST WebRtc_Word16* b_ptr = &B[0];
G_CONST WebRtc_Word16* x_ptr = &in_ptr[i];
o = (WebRtc_Word32)0;
for (j = 0; j < B_length; j++)
{
o += WEBRTC_SPL_MUL_16_16(*b_ptr++, *x_ptr--);
}
// If output is higher than 32768, saturate it. Same with negative side
// 2^27 = 134217728, which corresponds to 32768 in Q12
// Saturate the output
o = WEBRTC_SPL_SAT((WebRtc_Word32)134215679, o, (WebRtc_Word32)-134217728);
*out_ptr++ = (WebRtc_Word16)((o + (WebRtc_Word32)2048) >> 12);
}
return;
}


@ -1,41 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_GetHanningWindow().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_GetHanningWindow(WebRtc_Word16 *v, WebRtc_Word16 size)
{
int jj;
WebRtc_Word16 *vptr1;
WebRtc_Word32 index;
WebRtc_Word32 factor = ((WebRtc_Word32)0x40000000);
factor = WebRtcSpl_DivW32W16(factor, size);
if (size < 513)
index = (WebRtc_Word32)-0x200000;
else
index = (WebRtc_Word32)-0x100000;
vptr1 = v;
for (jj = 0; jj < size; jj++)
{
index += factor;
(*vptr1++) = WebRtcSpl_kHanningTable[index >> 22];
}
}
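A sketch of generating and applying the window above. It is assumed that WebRtcSpl_kHanningTable holds Q14 values covering the rising half of a Hanning window, so the taper below shifts by 14; the frame length and usage are illustrative.

#include "signal_processing_library.h"

void window_example(WebRtc_Word16* frame /* 128 samples */)
{
    WebRtc_Word16 win[128];
    int i;

    WebRtcSpl_GetHanningWindow(win, 128);

    /* Taper the frame before spectral analysis (assuming Q14 window values). */
    for (i = 0; i < 128; i++)
    {
        frame[i] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(frame[i], win[i], 14);
    }
}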


@ -1,120 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the iLBC specific functions
* WebRtcSpl_ScaleAndAddVectorsWithRound()
* WebRtcSpl_ReverseOrderMultArrayElements()
* WebRtcSpl_ElementwiseVectorMult()
* WebRtcSpl_AddVectorsAndShift()
* WebRtcSpl_AddAffineVectorToVector()
* WebRtcSpl_AffineTransformVector()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_ScaleAndAddVectorsWithRound(WebRtc_Word16 *vector1, WebRtc_Word16 scale1,
WebRtc_Word16 *vector2, WebRtc_Word16 scale2,
WebRtc_Word16 right_shifts, WebRtc_Word16 *out,
WebRtc_Word16 vector_length)
{
int i;
WebRtc_Word16 roundVal;
roundVal = 1 << right_shifts;
roundVal = roundVal >> 1;
for (i = 0; i < vector_length; i++)
{
out[i] = (WebRtc_Word16)((WEBRTC_SPL_MUL_16_16(vector1[i], scale1)
+ WEBRTC_SPL_MUL_16_16(vector2[i], scale2) + roundVal) >> right_shifts);
}
}
void WebRtcSpl_ReverseOrderMultArrayElements(WebRtc_Word16 *out, G_CONST WebRtc_Word16 *in,
G_CONST WebRtc_Word16 *win,
WebRtc_Word16 vector_length,
WebRtc_Word16 right_shifts)
{
int i;
WebRtc_Word16 *outptr = out;
G_CONST WebRtc_Word16 *inptr = in;
G_CONST WebRtc_Word16 *winptr = win;
for (i = 0; i < vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(*inptr++,
*winptr--, right_shifts);
}
}
void WebRtcSpl_ElementwiseVectorMult(WebRtc_Word16 *out, G_CONST WebRtc_Word16 *in,
G_CONST WebRtc_Word16 *win, WebRtc_Word16 vector_length,
WebRtc_Word16 right_shifts)
{
int i;
WebRtc_Word16 *outptr = out;
G_CONST WebRtc_Word16 *inptr = in;
G_CONST WebRtc_Word16 *winptr = win;
for (i = 0; i < vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(*inptr++,
*winptr++, right_shifts);
}
}
void WebRtcSpl_AddVectorsAndShift(WebRtc_Word16 *out, G_CONST WebRtc_Word16 *in1,
G_CONST WebRtc_Word16 *in2, WebRtc_Word16 vector_length,
WebRtc_Word16 right_shifts)
{
int i;
WebRtc_Word16 *outptr = out;
G_CONST WebRtc_Word16 *in1ptr = in1;
G_CONST WebRtc_Word16 *in2ptr = in2;
for (i = vector_length; i > 0; i--)
{
(*outptr++) = (WebRtc_Word16)(((*in1ptr++) + (*in2ptr++)) >> right_shifts);
}
}
void WebRtcSpl_AddAffineVectorToVector(WebRtc_Word16 *out, WebRtc_Word16 *in,
WebRtc_Word16 gain, WebRtc_Word32 add_constant,
WebRtc_Word16 right_shifts, int vector_length)
{
WebRtc_Word16 *inPtr;
WebRtc_Word16 *outPtr;
int i;
inPtr = in;
outPtr = out;
for (i = 0; i < vector_length; i++)
{
(*outPtr++) += (WebRtc_Word16)((WEBRTC_SPL_MUL_16_16((*inPtr++), gain)
+ (WebRtc_Word32)add_constant) >> right_shifts);
}
}
void WebRtcSpl_AffineTransformVector(WebRtc_Word16 *out, WebRtc_Word16 *in,
WebRtc_Word16 gain, WebRtc_Word32 add_constant,
WebRtc_Word16 right_shifts, int vector_length)
{
WebRtc_Word16 *inPtr;
WebRtc_Word16 *outPtr;
int i;
inPtr = in;
outPtr = out;
for (i = 0; i < vector_length; i++)
{
(*outPtr++) = (WebRtc_Word16)((WEBRTC_SPL_MUL_16_16((*inPtr++), gain)
+ (WebRtc_Word32)add_constant) >> right_shifts);
}
}
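Two small sketches against the routines above: a crossfade with Q14 gains that sum to one, and an affine gain-plus-offset. The gain values and shift amounts are illustrative only.

#include "signal_processing_library.h"

void vector_ops_example(WebRtc_Word16* a, WebRtc_Word16* b, WebRtc_Word16* out,
                        WebRtc_Word16 length)
{
    /* out[i] = (a[i]*12288 + b[i]*4096 + round) >> 14, a 0.75/0.25 mix. */
    WebRtcSpl_ScaleAndAddVectorsWithRound(a, 12288, b, 4096, 14, out, length);

    /* out[i] = (a[i]*4096 + 2048) >> 12, i.e. unity Q12 gain with rounding. */
    WebRtcSpl_AffineTransformVector(out, a, 4096, 2048, 12, length);
}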


@ -1,259 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_LevinsonDurbin().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define SPL_LEVINSON_MAXORDER 20
WebRtc_Word16 WebRtcSpl_LevinsonDurbin(WebRtc_Word32 *R, WebRtc_Word16 *A, WebRtc_Word16 *K,
WebRtc_Word16 order)
{
WebRtc_Word16 i, j;
// Auto-correlation coefficients in high precision
WebRtc_Word16 R_hi[SPL_LEVINSON_MAXORDER + 1], R_low[SPL_LEVINSON_MAXORDER + 1];
// LPC coefficients in high precision
WebRtc_Word16 A_hi[SPL_LEVINSON_MAXORDER + 1], A_low[SPL_LEVINSON_MAXORDER + 1];
// LPC coefficients for next iteration
WebRtc_Word16 A_upd_hi[SPL_LEVINSON_MAXORDER + 1], A_upd_low[SPL_LEVINSON_MAXORDER + 1];
// Reflection coefficient in high precision
WebRtc_Word16 K_hi, K_low;
// Prediction gain Alpha in high precision and with scale factor
WebRtc_Word16 Alpha_hi, Alpha_low, Alpha_exp;
WebRtc_Word16 tmp_hi, tmp_low;
WebRtc_Word32 temp1W32, temp2W32, temp3W32;
WebRtc_Word16 norm;
// Normalize the autocorrelation R[0]...R[order+1]
norm = WebRtcSpl_NormW32(R[0]);
for (i = order; i >= 0; i--)
{
temp1W32 = WEBRTC_SPL_LSHIFT_W32(R[i], norm);
// Put R in hi and low format
R_hi[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
R_low[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_hi[i], 16)), 1);
}
// K = A[1] = -R[1] / R[0]
temp2W32 = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_hi[1],16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_low[1],1); // R[1] in Q31
temp3W32 = WEBRTC_SPL_ABS_W32(temp2W32); // abs R[1]
temp1W32 = WebRtcSpl_DivW32HiLow(temp3W32, R_hi[0], R_low[0]); // abs(R[1])/R[0] in Q31
// Put back the sign on R[1]
if (temp2W32 > 0)
{
temp1W32 = -temp1W32;
}
// Put K in hi and low format
K_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
K_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)K_hi, 16)), 1);
// Store first reflection coefficient
K[0] = K_hi;
temp1W32 = WEBRTC_SPL_RSHIFT_W32(temp1W32, 4); // A[1] in Q27
// Put A[1] in hi and low format
A_hi[1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
A_low[1] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_hi[1], 16)), 1);
// Alpha = R[0] * (1-K^2)
temp1W32 = (((WEBRTC_SPL_MUL_16_16(K_hi, K_low) >> 14) + WEBRTC_SPL_MUL_16_16(K_hi, K_hi))
<< 1); // temp1W32 = k^2 in Q31
temp1W32 = WEBRTC_SPL_ABS_W32(temp1W32); // Guard against <0
temp1W32 = (WebRtc_Word32)0x7fffffffL - temp1W32; // temp1W32 = (1 - K[0]*K[0]) in Q31
// Store temp1W32 = 1 - K[0]*K[0] on hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// Calculate Alpha in Q31
temp1W32 = ((WEBRTC_SPL_MUL_16_16(R_hi[0], tmp_hi)
+ (WEBRTC_SPL_MUL_16_16(R_hi[0], tmp_low) >> 15)
+ (WEBRTC_SPL_MUL_16_16(R_low[0], tmp_hi) >> 15)) << 1);
// Normalize Alpha and put it in hi and low format
Alpha_exp = WebRtcSpl_NormW32(temp1W32);
temp1W32 = WEBRTC_SPL_LSHIFT_W32(temp1W32, Alpha_exp);
Alpha_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
Alpha_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)Alpha_hi, 16)), 1);
// Perform the iterative calculations in the Levinson-Durbin algorithm
for (i = 2; i <= order; i++)
{
/* temp1W32 = R[i] + sum_{j=1..i-1} R[j] * A[i-j] */
temp1W32 = 0;
for (j = 1; j < i; j++)
{
// temp1W32 is in Q31
temp1W32 += ((WEBRTC_SPL_MUL_16_16(R_hi[j], A_hi[i-j]) << 1)
+ (((WEBRTC_SPL_MUL_16_16(R_hi[j], A_low[i-j]) >> 15)
+ (WEBRTC_SPL_MUL_16_16(R_low[j], A_hi[i-j]) >> 15)) << 1));
}
temp1W32 = WEBRTC_SPL_LSHIFT_W32(temp1W32, 4);
temp1W32 += (WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_hi[i], 16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)R_low[i], 1));
// K = -temp1W32 / Alpha
temp2W32 = WEBRTC_SPL_ABS_W32(temp1W32); // abs(temp1W32)
temp3W32 = WebRtcSpl_DivW32HiLow(temp2W32, Alpha_hi, Alpha_low); // abs(temp1W32)/Alpha
// Put the sign of temp1W32 back again
if (temp1W32 > 0)
{
temp3W32 = -temp3W32;
}
// Use the Alpha shifts from earlier to de-normalize
norm = WebRtcSpl_NormW32(temp3W32);
if ((Alpha_exp <= norm) || (temp3W32 == 0))
{
temp3W32 = WEBRTC_SPL_LSHIFT_W32(temp3W32, Alpha_exp);
} else
{
if (temp3W32 > 0)
{
temp3W32 = (WebRtc_Word32)0x7fffffffL;
} else
{
temp3W32 = (WebRtc_Word32)0x80000000L;
}
}
// Put K on hi and low format
K_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp3W32, 16);
K_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp3W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)K_hi, 16)), 1);
// Store Reflection coefficient in Q15
K[i - 1] = K_hi;
// Test for unstable filter.
// If unstable return 0 and let the user decide what to do in that case
if ((WebRtc_Word32)WEBRTC_SPL_ABS_W16(K_hi) > (WebRtc_Word32)32750)
{
return 0; // Unstable filter
}
/*
Compute updated LPC coefficient: Anew[i]
Anew[j]= A[j] + K*A[i-j] for j=1..i-1
Anew[i]= K
*/
for (j = 1; j < i; j++)
{
// temp1W32 = A[j] in Q27
temp1W32 = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_hi[j],16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_low[j],1);
// temp1W32 += K*A[i-j] in Q27
temp1W32 += ((WEBRTC_SPL_MUL_16_16(K_hi, A_hi[i-j])
+ (WEBRTC_SPL_MUL_16_16(K_hi, A_low[i-j]) >> 15)
+ (WEBRTC_SPL_MUL_16_16(K_low, A_hi[i-j]) >> 15)) << 1);
// Put Anew in hi and low format
A_upd_hi[j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
A_upd_low[j] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_upd_hi[j], 16)), 1);
}
// temp3W32 = K in Q27 (Convert from Q31 to Q27)
temp3W32 = WEBRTC_SPL_RSHIFT_W32(temp3W32, 4);
// Store Anew in hi and low format
A_upd_hi[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp3W32, 16);
A_upd_low[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp3W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_upd_hi[i], 16)), 1);
// Alpha = Alpha * (1-K^2)
temp1W32 = (((WEBRTC_SPL_MUL_16_16(K_hi, K_low) >> 14)
+ WEBRTC_SPL_MUL_16_16(K_hi, K_hi)) << 1); // K*K in Q31
temp1W32 = WEBRTC_SPL_ABS_W32(temp1W32); // Guard against <0
temp1W32 = (WebRtc_Word32)0x7fffffffL - temp1W32; // 1 - K*K in Q31
// Convert 1- K^2 in hi and low format
tmp_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
tmp_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)tmp_hi, 16)), 1);
// Calculate Alpha = Alpha * (1-K^2) in Q31
temp1W32 = ((WEBRTC_SPL_MUL_16_16(Alpha_hi, tmp_hi)
+ (WEBRTC_SPL_MUL_16_16(Alpha_hi, tmp_low) >> 15)
+ (WEBRTC_SPL_MUL_16_16(Alpha_low, tmp_hi) >> 15)) << 1);
// Normalize Alpha and store it on hi and low format
norm = WebRtcSpl_NormW32(temp1W32);
temp1W32 = WEBRTC_SPL_LSHIFT_W32(temp1W32, norm);
Alpha_hi = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(temp1W32, 16);
Alpha_low = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32
- WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)Alpha_hi, 16)), 1);
// Update the total normalization of Alpha
Alpha_exp = Alpha_exp + norm;
// Update A[]
for (j = 1; j <= i; j++)
{
A_hi[j] = A_upd_hi[j];
A_low[j] = A_upd_low[j];
}
}
/*
Set A[0] to 1.0 and store the A[i] i=1...order in Q12
(Convert from Q27 and use rounding)
*/
A[0] = 4096;
for (i = 1; i <= order; i++)
{
// temp1W32 in Q27
temp1W32 = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_hi[i], 16)
+ WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)A_low[i], 1);
// Round and store upper word
A[i] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((temp1W32<<1)+(WebRtc_Word32)32768, 16);
}
return 1; // Stable filters
}
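A sketch of the calling convention above: R holds order+1 autocorrelation values (R[0] being the energy), A receives order+1 Q12 LPC coefficients with A[0] fixed at 4096, and K receives order Q15 reflection coefficients. The order of 10 is an arbitrary choice within the SPL_LEVINSON_MAXORDER limit.

#include "signal_processing_library.h"

#define LPC_ORDER 10    /* must not exceed SPL_LEVINSON_MAXORDER (20) */

int lpc_example(WebRtc_Word32 R[LPC_ORDER + 1])
{
    WebRtc_Word16 A[LPC_ORDER + 1];   /* LPC coefficients in Q12, A[0] = 4096 */
    WebRtc_Word16 K[LPC_ORDER];       /* reflection coefficients in Q15       */

    if (WebRtcSpl_LevinsonDurbin(R, A, K, LPC_ORDER) == 0)
    {
        return -1;   /* unstable filter; the caller picks a fallback */
    }
    return 0;
}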


@ -1,265 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the implementation of functions
* WebRtcSpl_MaxAbsValueW16()
* WebRtcSpl_MaxAbsIndexW16()
* WebRtcSpl_MaxAbsValueW32()
* WebRtcSpl_MaxValueW16()
* WebRtcSpl_MaxIndexW16()
* WebRtcSpl_MaxValueW32()
* WebRtcSpl_MaxIndexW32()
* WebRtcSpl_MinValueW16()
* WebRtcSpl_MinIndexW16()
* WebRtcSpl_MinValueW32()
* WebRtcSpl_MinIndexW32()
*
* The description header can be found in signal_processing_library.h.
*
*/
#include "signal_processing_library.h"
#if !(defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON))
// Maximum absolute value of word16 vector.
WebRtc_Word16 WebRtcSpl_MaxAbsValueW16(const WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMax = 0;
WebRtc_Word32 absVal;
WebRtc_Word16 totMax;
int i;
G_CONST WebRtc_Word16 *tmpvector = vector;
for (i = 0; i < length; i++)
{
absVal = WEBRTC_SPL_ABS_W32((*tmpvector));
if (absVal > tempMax)
{
tempMax = absVal;
}
tmpvector++;
}
totMax = (WebRtc_Word16)WEBRTC_SPL_MIN(tempMax, WEBRTC_SPL_WORD16_MAX);
return totMax;
}
#endif
// Index of maximum absolute value in a word16 vector.
WebRtc_Word16 WebRtcSpl_MaxAbsIndexW16(G_CONST WebRtc_Word16* vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMax;
WebRtc_Word16 absTemp;
WebRtc_Word16 tempMaxIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word16 *tmpvector = vector;
tempMax = WEBRTC_SPL_ABS_W16(*tmpvector);
tmpvector++;
for (i = 1; i < length; i++)
{
absTemp = WEBRTC_SPL_ABS_W16(*tmpvector);
tmpvector++;
if (absTemp > tempMax)
{
tempMax = absTemp;
tempMaxIndex = i;
}
}
return tempMaxIndex;
}
// Maximum absolute value of word32 vector.
WebRtc_Word32 WebRtcSpl_MaxAbsValueW32(G_CONST WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtc_UWord32 tempMax = 0;
WebRtc_UWord32 absVal;
WebRtc_Word32 retval;
int i;
G_CONST WebRtc_Word32 *tmpvector = vector;
for (i = 0; i < length; i++)
{
absVal = WEBRTC_SPL_ABS_W32((*tmpvector));
if (absVal > tempMax)
{
tempMax = absVal;
}
tmpvector++;
}
retval = (WebRtc_Word32)(WEBRTC_SPL_MIN(tempMax, WEBRTC_SPL_WORD32_MAX));
return retval;
}
// Maximum value of word16 vector.
#ifndef XSCALE_OPT
WebRtc_Word16 WebRtcSpl_MaxValueW16(G_CONST WebRtc_Word16* vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMax;
WebRtc_Word16 i;
G_CONST WebRtc_Word16 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
tempMax = vector[i];
}
return tempMax;
}
#else
#pragma message(">> WebRtcSpl_MaxValueW16 is excluded from this build")
#endif
// Index of maximum value in a word16 vector.
WebRtc_Word16 WebRtcSpl_MaxIndexW16(G_CONST WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMax;
WebRtc_Word16 tempMaxIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word16 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
{
tempMax = vector[i];
tempMaxIndex = i;
}
}
return tempMaxIndex;
}
// Maximum value of word32 vector.
#ifndef XSCALE_OPT
WebRtc_Word32 WebRtcSpl_MaxValueW32(G_CONST WebRtc_Word32* vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMax;
WebRtc_Word16 i;
G_CONST WebRtc_Word32 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
tempMax = vector[i];
}
return tempMax;
}
#else
#pragma message(">> WebRtcSpl_MaxValueW32 is excluded from this build")
#endif
// Index of maximum value in a word32 vector.
WebRtc_Word16 WebRtcSpl_MaxIndexW32(G_CONST WebRtc_Word32* vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMax;
WebRtc_Word16 tempMaxIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word32 *tmpvector = vector;
tempMax = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ > tempMax)
{
tempMax = vector[i];
tempMaxIndex = i;
}
}
return tempMaxIndex;
}
// Minimum value of word16 vector.
WebRtc_Word16 WebRtcSpl_MinValueW16(G_CONST WebRtc_Word16 *vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMin;
WebRtc_Word16 i;
G_CONST WebRtc_Word16 *tmpvector = vector;
// Find the minimum value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
tempMin = (vector[i]);
}
return tempMin;
}
// Index of minimum value in a word16 vector.
#ifndef XSCALE_OPT
WebRtc_Word16 WebRtcSpl_MinIndexW16(G_CONST WebRtc_Word16* vector, WebRtc_Word16 length)
{
WebRtc_Word16 tempMin;
WebRtc_Word16 tempMinIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word16* tmpvector = vector;
// Find index of smallest value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
{
tempMin = vector[i];
tempMinIndex = i;
}
}
return tempMinIndex;
}
#else
#pragma message(">> WebRtcSpl_MinIndexW16 is excluded from this build")
#endif
// Minimum value of word32 vector.
WebRtc_Word32 WebRtcSpl_MinValueW32(G_CONST WebRtc_Word32 *vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMin;
WebRtc_Word16 i;
G_CONST WebRtc_Word32 *tmpvector = vector;
// Find the minimum value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
tempMin = (vector[i]);
}
return tempMin;
}
// Index of minimum value in a word32 vector.
#ifndef XSCALE_OPT
WebRtc_Word16 WebRtcSpl_MinIndexW32(G_CONST WebRtc_Word32* vector, WebRtc_Word16 length)
{
WebRtc_Word32 tempMin;
WebRtc_Word16 tempMinIndex = 0;
WebRtc_Word16 i = 0;
G_CONST WebRtc_Word32 *tmpvector = vector;
// Find index of smallest value
tempMin = *tmpvector++;
for (i = 1; i < length; i++)
{
if (*tmpvector++ < tempMin)
{
tempMin = vector[i];
tempMinIndex = i;
}
}
return tempMinIndex;
}
#else
#pragma message(">> WebRtcSpl_MinIndexW32 is excluded from this build")
#endif
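A few representative calls against the routines above; the uses named in the comments are typical but illustrative.

#include "signal_processing_library.h"

void min_max_example(WebRtc_Word16* frame, WebRtc_Word16 length)
{
    /* Peak magnitude, e.g. to choose a scale factor before a transform. */
    WebRtc_Word16 peak = WebRtcSpl_MaxAbsValueW16(frame, length);

    /* Index of the largest sample and the smallest signed value. */
    WebRtc_Word16 max_idx = WebRtcSpl_MaxIndexW16(frame, length);
    WebRtc_Word16 min_val = WebRtcSpl_MinValueW16(frame, length);

    (void)peak;
    (void)max_idx;
    (void)min_val;
}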


@ -1,47 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#if (defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON))
#include <arm_neon.h>
#include "signal_processing_library.h"
// Maximum absolute value of word16 vector.
WebRtc_Word16 WebRtcSpl_MaxAbsValueW16(const WebRtc_Word16* vector,
WebRtc_Word16 length) {
WebRtc_Word32 temp_max = 0;
WebRtc_Word32 abs_val;
WebRtc_Word16 tot_max;
int i;
__asm__("vmov.i16 d25, #0" : : : "d25");
for (i = 0; i < length - 7; i += 8) {
__asm__("vld1.16 {d26, d27}, [%0]" : : "r"(&vector[i]) : "q13");
__asm__("vabs.s16 q13, q13" : : : "q13");
__asm__("vpmax.s16 d26, d27" : : : "q13");
__asm__("vpmax.s16 d25, d26" : : : "d25", "d26");
}
__asm__("vpmax.s16 d25, d25" : : : "d25");
__asm__("vpmax.s16 d25, d25" : : : "d25");
__asm__("vmov.s16 %0, d25[0]" : "=r"(temp_max): : "d25");
for (; i < length; i++) {
abs_val = WEBRTC_SPL_ABS_W32((vector[i]));
if (abs_val > temp_max) {
temp_max = abs_val;
}
}
tot_max = (WebRtc_Word16)WEBRTC_SPL_MIN(temp_max, WEBRTC_SPL_WORD16_MAX);
return tot_max;
}
#endif


@ -1,52 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the randomization functions
* WebRtcSpl_IncreaseSeed()
* WebRtcSpl_RandU()
* WebRtcSpl_RandN()
* WebRtcSpl_RandUArray()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
WebRtc_UWord32 WebRtcSpl_IncreaseSeed(WebRtc_UWord32 *seed)
{
seed[0] = (seed[0] * ((WebRtc_Word32)69069) + 1) & (WEBRTC_SPL_MAX_SEED_USED - 1);
return seed[0];
}
WebRtc_Word16 WebRtcSpl_RandU(WebRtc_UWord32 *seed)
{
return (WebRtc_Word16)(WebRtcSpl_IncreaseSeed(seed) >> 16);
}
WebRtc_Word16 WebRtcSpl_RandN(WebRtc_UWord32 *seed)
{
return WebRtcSpl_kRandNTable[WebRtcSpl_IncreaseSeed(seed) >> 23];
}
// Creates an array of uniformly distributed variables
WebRtc_Word16 WebRtcSpl_RandUArray(WebRtc_Word16* vector,
WebRtc_Word16 vector_length,
WebRtc_UWord32* seed)
{
int i;
for (i = 0; i < vector_length; i++)
{
vector[i] = WebRtcSpl_RandU(seed);
}
return vector_length;
}
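A sketch of the seed handling above: the generator state is a caller-owned 32-bit word that every call advances. The start value and buffer length are arbitrary.

#include "signal_processing_library.h"

void noise_example(WebRtc_Word16* noise, WebRtc_Word16 length)
{
    WebRtc_UWord32 seed = 777;   /* caller-owned generator state */
    WebRtc_Word16 u, g;

    WebRtcSpl_RandUArray(noise, length, &seed);   /* vector of uniform samples   */
    u = WebRtcSpl_RandU(&seed);                   /* one uniform draw            */
    g = WebRtcSpl_RandN(&seed);                   /* one table-based normal draw */
    (void)u;
    (void)g;
}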


@ -1,47 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file contains some internal resampling functions.
*
*/
#ifndef WEBRTC_SPL_RESAMPLE_BY_2_INTERNAL_H_
#define WEBRTC_SPL_RESAMPLE_BY_2_INTERNAL_H_
#include "typedefs.h"
/*******************************************************************
* resample_by_2_fast.c
* Functions for internal use in the other resample functions
******************************************************************/
void WebRtcSpl_DownBy2IntToShort(WebRtc_Word32 *in, WebRtc_Word32 len, WebRtc_Word16 *out,
WebRtc_Word32 *state);
void WebRtcSpl_DownBy2ShortToInt(const WebRtc_Word16 *in, WebRtc_Word32 len,
WebRtc_Word32 *out, WebRtc_Word32 *state);
void WebRtcSpl_UpBy2ShortToInt(const WebRtc_Word16 *in, WebRtc_Word32 len,
WebRtc_Word32 *out, WebRtc_Word32 *state);
void WebRtcSpl_UpBy2IntToInt(const WebRtc_Word32 *in, WebRtc_Word32 len, WebRtc_Word32 *out,
WebRtc_Word32 *state);
void WebRtcSpl_UpBy2IntToShort(const WebRtc_Word32 *in, WebRtc_Word32 len,
WebRtc_Word16 *out, WebRtc_Word32 *state);
void WebRtcSpl_LPBy2ShortToInt(const WebRtc_Word16* in, WebRtc_Word32 len,
WebRtc_Word32* out, WebRtc_Word32* state);
void WebRtcSpl_LPBy2IntToInt(const WebRtc_Word32* in, WebRtc_Word32 len, WebRtc_Word32* out,
WebRtc_Word32* state);
#endif // WEBRTC_SPL_RESAMPLE_BY_2_INTERNAL_H_


@ -1,60 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the 360 degree sine table.
*
*/
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_kSinTable[] = {
0, 142, 285, 428, 571, 713, 856, 998, 1140,
1281, 1422, 1563, 1703, 1842, 1981, 2120, 2258, 2395,
2531, 2667, 2801, 2935, 3068, 3200, 3331, 3462, 3591,
3719, 3845, 3971, 4095, 4219, 4341, 4461, 4580, 4698,
4815, 4930, 5043, 5155, 5265, 5374, 5481, 5586, 5690,
5792, 5892, 5991, 6087, 6182, 6275, 6366, 6455, 6542,
6627, 6710, 6791, 6870, 6947, 7021, 7094, 7164, 7233,
7299, 7362, 7424, 7483, 7540, 7595, 7647, 7697, 7745,
7791, 7834, 7874, 7912, 7948, 7982, 8012, 8041, 8067,
8091, 8112, 8130, 8147, 8160, 8172, 8180, 8187, 8190,
8191, 8190, 8187, 8180, 8172, 8160, 8147, 8130, 8112,
8091, 8067, 8041, 8012, 7982, 7948, 7912, 7874, 7834,
7791, 7745, 7697, 7647, 7595, 7540, 7483, 7424, 7362,
7299, 7233, 7164, 7094, 7021, 6947, 6870, 6791, 6710,
6627, 6542, 6455, 6366, 6275, 6182, 6087, 5991, 5892,
5792, 5690, 5586, 5481, 5374, 5265, 5155, 5043, 4930,
4815, 4698, 4580, 4461, 4341, 4219, 4096, 3971, 3845,
3719, 3591, 3462, 3331, 3200, 3068, 2935, 2801, 2667,
2531, 2395, 2258, 2120, 1981, 1842, 1703, 1563, 1422,
1281, 1140, 998, 856, 713, 571, 428, 285, 142,
0, -142, -285, -428, -571, -713, -856, -998, -1140,
-1281, -1422, -1563, -1703, -1842, -1981, -2120, -2258, -2395,
-2531, -2667, -2801, -2935, -3068, -3200, -3331, -3462, -3591,
-3719, -3845, -3971, -4095, -4219, -4341, -4461, -4580, -4698,
-4815, -4930, -5043, -5155, -5265, -5374, -5481, -5586, -5690,
-5792, -5892, -5991, -6087, -6182, -6275, -6366, -6455, -6542,
-6627, -6710, -6791, -6870, -6947, -7021, -7094, -7164, -7233,
-7299, -7362, -7424, -7483, -7540, -7595, -7647, -7697, -7745,
-7791, -7834, -7874, -7912, -7948, -7982, -8012, -8041, -8067,
-8091, -8112, -8130, -8147, -8160, -8172, -8180, -8187, -8190,
-8191, -8190, -8187, -8180, -8172, -8160, -8147, -8130, -8112,
-8091, -8067, -8041, -8012, -7982, -7948, -7912, -7874, -7834,
-7791, -7745, -7697, -7647, -7595, -7540, -7483, -7424, -7362,
-7299, -7233, -7164, -7094, -7021, -6947, -6870, -6791, -6710,
-6627, -6542, -6455, -6366, -6275, -6182, -6087, -5991, -5892,
-5792, -5690, -5586, -5481, -5374, -5265, -5155, -5043, -4930,
-4815, -4698, -4580, -4461, -4341, -4219, -4096, -3971, -3845,
-3719, -3591, -3462, -3331, -3200, -3068, -2935, -2801, -2667,
-2531, -2395, -2258, -2120, -1981, -1842, -1703, -1563, -1422,
-1281, -1140, -998, -856, -713, -571, -428, -285, -142
};


@ -1,150 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the 1024 point sine table.
*
*/
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_kSinTable1024[] =
{
0, 201, 402, 603, 804, 1005, 1206, 1406,
1607, 1808, 2009, 2209, 2410, 2610, 2811, 3011,
3211, 3411, 3611, 3811, 4011, 4210, 4409, 4608,
4807, 5006, 5205, 5403, 5601, 5799, 5997, 6195,
6392, 6589, 6786, 6982, 7179, 7375, 7571, 7766,
7961, 8156, 8351, 8545, 8739, 8932, 9126, 9319,
9511, 9703, 9895, 10087, 10278, 10469, 10659, 10849,
11038, 11227, 11416, 11604, 11792, 11980, 12166, 12353,
12539, 12724, 12909, 13094, 13278, 13462, 13645, 13827,
14009, 14191, 14372, 14552, 14732, 14911, 15090, 15268,
15446, 15623, 15799, 15975, 16150, 16325, 16499, 16672,
16845, 17017, 17189, 17360, 17530, 17699, 17868, 18036,
18204, 18371, 18537, 18702, 18867, 19031, 19194, 19357,
19519, 19680, 19840, 20000, 20159, 20317, 20474, 20631,
20787, 20942, 21096, 21249, 21402, 21554, 21705, 21855,
22004, 22153, 22301, 22448, 22594, 22739, 22883, 23027,
23169, 23311, 23452, 23592, 23731, 23869, 24006, 24143,
24278, 24413, 24546, 24679, 24811, 24942, 25072, 25201,
25329, 25456, 25582, 25707, 25831, 25954, 26077, 26198,
26318, 26437, 26556, 26673, 26789, 26905, 27019, 27132,
27244, 27355, 27466, 27575, 27683, 27790, 27896, 28001,
28105, 28208, 28309, 28410, 28510, 28608, 28706, 28802,
28897, 28992, 29085, 29177, 29268, 29358, 29446, 29534,
29621, 29706, 29790, 29873, 29955, 30036, 30116, 30195,
30272, 30349, 30424, 30498, 30571, 30643, 30713, 30783,
30851, 30918, 30984, 31049, 31113, 31175, 31236, 31297,
31356, 31413, 31470, 31525, 31580, 31633, 31684, 31735,
31785, 31833, 31880, 31926, 31970, 32014, 32056, 32097,
32137, 32176, 32213, 32249, 32284, 32318, 32350, 32382,
32412, 32441, 32468, 32495, 32520, 32544, 32567, 32588,
32609, 32628, 32646, 32662, 32678, 32692, 32705, 32717,
32727, 32736, 32744, 32751, 32757, 32761, 32764, 32766,
32767, 32766, 32764, 32761, 32757, 32751, 32744, 32736,
32727, 32717, 32705, 32692, 32678, 32662, 32646, 32628,
32609, 32588, 32567, 32544, 32520, 32495, 32468, 32441,
32412, 32382, 32350, 32318, 32284, 32249, 32213, 32176,
32137, 32097, 32056, 32014, 31970, 31926, 31880, 31833,
31785, 31735, 31684, 31633, 31580, 31525, 31470, 31413,
31356, 31297, 31236, 31175, 31113, 31049, 30984, 30918,
30851, 30783, 30713, 30643, 30571, 30498, 30424, 30349,
30272, 30195, 30116, 30036, 29955, 29873, 29790, 29706,
29621, 29534, 29446, 29358, 29268, 29177, 29085, 28992,
28897, 28802, 28706, 28608, 28510, 28410, 28309, 28208,
28105, 28001, 27896, 27790, 27683, 27575, 27466, 27355,
27244, 27132, 27019, 26905, 26789, 26673, 26556, 26437,
26318, 26198, 26077, 25954, 25831, 25707, 25582, 25456,
25329, 25201, 25072, 24942, 24811, 24679, 24546, 24413,
24278, 24143, 24006, 23869, 23731, 23592, 23452, 23311,
23169, 23027, 22883, 22739, 22594, 22448, 22301, 22153,
22004, 21855, 21705, 21554, 21402, 21249, 21096, 20942,
20787, 20631, 20474, 20317, 20159, 20000, 19840, 19680,
19519, 19357, 19194, 19031, 18867, 18702, 18537, 18371,
18204, 18036, 17868, 17699, 17530, 17360, 17189, 17017,
16845, 16672, 16499, 16325, 16150, 15975, 15799, 15623,
15446, 15268, 15090, 14911, 14732, 14552, 14372, 14191,
14009, 13827, 13645, 13462, 13278, 13094, 12909, 12724,
12539, 12353, 12166, 11980, 11792, 11604, 11416, 11227,
11038, 10849, 10659, 10469, 10278, 10087, 9895, 9703,
9511, 9319, 9126, 8932, 8739, 8545, 8351, 8156,
7961, 7766, 7571, 7375, 7179, 6982, 6786, 6589,
6392, 6195, 5997, 5799, 5601, 5403, 5205, 5006,
4807, 4608, 4409, 4210, 4011, 3811, 3611, 3411,
3211, 3011, 2811, 2610, 2410, 2209, 2009, 1808,
1607, 1406, 1206, 1005, 804, 603, 402, 201,
0, -201, -402, -603, -804, -1005, -1206, -1406,
-1607, -1808, -2009, -2209, -2410, -2610, -2811, -3011,
-3211, -3411, -3611, -3811, -4011, -4210, -4409, -4608,
-4807, -5006, -5205, -5403, -5601, -5799, -5997, -6195,
-6392, -6589, -6786, -6982, -7179, -7375, -7571, -7766,
-7961, -8156, -8351, -8545, -8739, -8932, -9126, -9319,
-9511, -9703, -9895, -10087, -10278, -10469, -10659, -10849,
-11038, -11227, -11416, -11604, -11792, -11980, -12166, -12353,
-12539, -12724, -12909, -13094, -13278, -13462, -13645, -13827,
-14009, -14191, -14372, -14552, -14732, -14911, -15090, -15268,
-15446, -15623, -15799, -15975, -16150, -16325, -16499, -16672,
-16845, -17017, -17189, -17360, -17530, -17699, -17868, -18036,
-18204, -18371, -18537, -18702, -18867, -19031, -19194, -19357,
-19519, -19680, -19840, -20000, -20159, -20317, -20474, -20631,
-20787, -20942, -21096, -21249, -21402, -21554, -21705, -21855,
-22004, -22153, -22301, -22448, -22594, -22739, -22883, -23027,
-23169, -23311, -23452, -23592, -23731, -23869, -24006, -24143,
-24278, -24413, -24546, -24679, -24811, -24942, -25072, -25201,
-25329, -25456, -25582, -25707, -25831, -25954, -26077, -26198,
-26318, -26437, -26556, -26673, -26789, -26905, -27019, -27132,
-27244, -27355, -27466, -27575, -27683, -27790, -27896, -28001,
-28105, -28208, -28309, -28410, -28510, -28608, -28706, -28802,
-28897, -28992, -29085, -29177, -29268, -29358, -29446, -29534,
-29621, -29706, -29790, -29873, -29955, -30036, -30116, -30195,
-30272, -30349, -30424, -30498, -30571, -30643, -30713, -30783,
-30851, -30918, -30984, -31049, -31113, -31175, -31236, -31297,
-31356, -31413, -31470, -31525, -31580, -31633, -31684, -31735,
-31785, -31833, -31880, -31926, -31970, -32014, -32056, -32097,
-32137, -32176, -32213, -32249, -32284, -32318, -32350, -32382,
-32412, -32441, -32468, -32495, -32520, -32544, -32567, -32588,
-32609, -32628, -32646, -32662, -32678, -32692, -32705, -32717,
-32727, -32736, -32744, -32751, -32757, -32761, -32764, -32766,
-32767, -32766, -32764, -32761, -32757, -32751, -32744, -32736,
-32727, -32717, -32705, -32692, -32678, -32662, -32646, -32628,
-32609, -32588, -32567, -32544, -32520, -32495, -32468, -32441,
-32412, -32382, -32350, -32318, -32284, -32249, -32213, -32176,
-32137, -32097, -32056, -32014, -31970, -31926, -31880, -31833,
-31785, -31735, -31684, -31633, -31580, -31525, -31470, -31413,
-31356, -31297, -31236, -31175, -31113, -31049, -30984, -30918,
-30851, -30783, -30713, -30643, -30571, -30498, -30424, -30349,
-30272, -30195, -30116, -30036, -29955, -29873, -29790, -29706,
-29621, -29534, -29446, -29358, -29268, -29177, -29085, -28992,
-28897, -28802, -28706, -28608, -28510, -28410, -28309, -28208,
-28105, -28001, -27896, -27790, -27683, -27575, -27466, -27355,
-27244, -27132, -27019, -26905, -26789, -26673, -26556, -26437,
-26318, -26198, -26077, -25954, -25831, -25707, -25582, -25456,
-25329, -25201, -25072, -24942, -24811, -24679, -24546, -24413,
-24278, -24143, -24006, -23869, -23731, -23592, -23452, -23311,
-23169, -23027, -22883, -22739, -22594, -22448, -22301, -22153,
-22004, -21855, -21705, -21554, -21402, -21249, -21096, -20942,
-20787, -20631, -20474, -20317, -20159, -20000, -19840, -19680,
-19519, -19357, -19194, -19031, -18867, -18702, -18537, -18371,
-18204, -18036, -17868, -17699, -17530, -17360, -17189, -17017,
-16845, -16672, -16499, -16325, -16150, -15975, -15799, -15623,
-15446, -15268, -15090, -14911, -14732, -14552, -14372, -14191,
-14009, -13827, -13645, -13462, -13278, -13094, -12909, -12724,
-12539, -12353, -12166, -11980, -11792, -11604, -11416, -11227,
-11038, -10849, -10659, -10469, -10278, -10087, -9895, -9703,
-9511, -9319, -9126, -8932, -8739, -8545, -8351, -8156,
-7961, -7766, -7571, -7375, -7179, -6982, -6786, -6589,
-6392, -6195, -5997, -5799, -5601, -5403, -5205, -5006,
-4807, -4608, -4409, -4210, -4011, -3811, -3611, -3411,
-3211, -3011, -2811, -2610, -2410, -2209, -2009, -1808,
-1607, -1406, -1206, -1005, -804, -603, -402, -201,
};
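
The values above, with their single 32767 peak and symmetric negative half, are consistent with a 1024-entry Q15 sine table. As a hedged illustration only, the following hypothetical generator (not part of the deleted sources) reproduces the visible values by truncating 32767*sin(2*pi*i/1024) toward zero:

/* Hypothetical generator sketch; the 32767 scale and truncation toward
 * zero are assumptions inferred from the visible entries (step of ~201
 * near zero, 23169 at the quarter-of-quarter point, peak of 32767). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double kPi = 3.14159265358979323846;
    int i;
    for (i = 0; i < 1024; ++i) {
        int v = (int)(32767.0 * sin(2.0 * kPi * (double)i / 1024.0));
        printf("%6d,%s", v, (i % 8 == 7) ? "\n" : " ");
    }
    return 0;
}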

@ -1,73 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
  'targets': [
    {
      'target_name': 'spl',
      'type': '<(library)',
      'include_dirs': [
        '../interface',
      ],
      'direct_dependent_settings': {
        'include_dirs': [
          '../interface',
        ],
      },
      'sources': [
        '../interface/signal_processing_library.h',
        '../interface/spl_inl.h',
        'auto_corr_to_refl_coef.c',
        'auto_correlation.c',
        'complex_fft.c',
        'complex_ifft.c',
        'complex_bit_reverse.c',
        'copy_set_operations.c',
        'cos_table.c',
        'cross_correlation.c',
        'division_operations.c',
        'dot_product_with_scale.c',
        'downsample_fast.c',
        'energy.c',
        'filter_ar.c',
        'filter_ar_fast_q12.c',
        'filter_ma_fast_q12.c',
        'get_hanning_window.c',
        'get_scaling_square.c',
        'hanning_table.c',
        'ilbc_specific_functions.c',
        'levinson_durbin.c',
        'lpc_to_refl_coef.c',
        'min_max_operations.c',
        'randn_table.c',
        'randomization_functions.c',
        'refl_coef_to_lpc.c',
        'resample.c',
        'resample_48khz.c',
        'resample_by_2.c',
        'resample_by_2_internal.c',
        'resample_by_2_internal.h',
        'resample_fractional.c',
        'sin_table.c',
        'sin_table_1024.c',
        'spl_sqrt.c',
        'spl_sqrt_floor.c',
        'spl_version.c',
        'splitting_filter.c',
        'sqrt_of_one_minus_x_squared.c',
        'vector_scaling_operations.c',
      ],
    },
  ],
}
# Local Variables:
# tab-width:2
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=2 shiftwidth=2:

@ -1,53 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_SqrtFloor().
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
#define WEBRTC_SPL_SQRT_ITER(N)          \
    try1 = root + (1 << (N));            \
    if (value >= try1 << (N))            \
    {                                    \
        value -= try1 << (N);            \
        root |= 2 << (N);                \
    }
// (out) Square root of input parameter
WebRtc_Word32 WebRtcSpl_SqrtFloor(WebRtc_Word32 value)
{
// new routine for performance, 4 cycles/bit in ARM
// output precision is 16 bits
WebRtc_Word32 root = 0, try1;
WEBRTC_SPL_SQRT_ITER (15);
WEBRTC_SPL_SQRT_ITER (14);
WEBRTC_SPL_SQRT_ITER (13);
WEBRTC_SPL_SQRT_ITER (12);
WEBRTC_SPL_SQRT_ITER (11);
WEBRTC_SPL_SQRT_ITER (10);
WEBRTC_SPL_SQRT_ITER ( 9);
WEBRTC_SPL_SQRT_ITER ( 8);
WEBRTC_SPL_SQRT_ITER ( 7);
WEBRTC_SPL_SQRT_ITER ( 6);
WEBRTC_SPL_SQRT_ITER ( 5);
WEBRTC_SPL_SQRT_ITER ( 4);
WEBRTC_SPL_SQRT_ITER ( 3);
WEBRTC_SPL_SQRT_ITER ( 2);
WEBRTC_SPL_SQRT_ITER ( 1);
WEBRTC_SPL_SQRT_ITER ( 0);
return root >> 1;
}
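
As a quick sanity check, a hedged sketch (not from the original tree): the non-restoring iteration above should reproduce floor(sqrt(x)) for non-negative 32-bit inputs, so a hypothetical comparison against libm could look like this, assuming the declarations above are in scope:

/* Hypothetical check routine, not part of the deleted file. */
#include <assert.h>
#include <math.h>

static void CheckSqrtFloor(void)
{
    WebRtc_Word32 x;
    for (x = 0; x < 2000000; x += 1237) {
        WebRtc_Word32 expected = (WebRtc_Word32)floor(sqrt((double)x));
        assert(WebRtcSpl_SqrtFloor(x) == expected);
    }
}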

@ -1,25 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the function WebRtcSpl_get_version().
* The description header can be found in signal_processing_library.h
*
*/
#include <string.h>
#include "signal_processing_library.h"
WebRtc_Word16 WebRtcSpl_get_version(char* version, WebRtc_Word16 length_in_bytes)
{
strncpy(version, "1.2.0", length_in_bytes);
return 0;
}

@ -1,198 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the splitting filter functions.
*
*/
#include "signal_processing_library.h"
// Number of samples in a low/high-band frame.
enum
{
kBandFrameLength = 160
};
// QMF filter coefficients in Q16.
static const WebRtc_UWord16 WebRtcSpl_kAllPassFilter1[3] = {6418, 36982, 57261};
static const WebRtc_UWord16 WebRtcSpl_kAllPassFilter2[3] = {21333, 49062, 63010};
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcSpl_AllPassQMF(...)
//
// Allpass filter used by the analysis and synthesis parts of the QMF filter.
//
// Input:
// - in_data : Input data sequence (Q10)
// - data_length : Length of data sequence (>2)
// - filter_coefficients : Filter coefficients (length 3, Q16)
//
// Input & Output:
// - filter_state : Filter state (length 6, Q10).
//
// Output:
// - out_data : Output data sequence (Q10), length equal to
// |data_length|
//
void WebRtcSpl_AllPassQMF(WebRtc_Word32* in_data, const WebRtc_Word16 data_length,
WebRtc_Word32* out_data, const WebRtc_UWord16* filter_coefficients,
WebRtc_Word32* filter_state)
{
// The procedure is to filter the input with three first order all pass filters
// (cascade operations).
//
//        a_3 + q^-1    a_2 + q^-1    a_1 + q^-1
// y[n] = -----------   -----------   ----------- x[n]
//        1 + a_3q^-1   1 + a_2q^-1   1 + a_1q^-1
//
// The input vector |filter_coefficients| includes these three filter coefficients.
// The filter state contains the in_data state, in_data[-1], followed by
// the out_data state, out_data[-1]. This is repeated for each cascade.
// The first cascade filter will filter the |in_data| and store the output in
// |out_data|. The second will take the |out_data| as input and make an
// intermediate storage in |in_data|, to save memory. The third, and final, cascade
// filter operation takes the |in_data| (which is the output from the previous cascade
// filter) and stores the output in |out_data|.
// Note that the input vector values are changed during the process.
WebRtc_Word16 k;
WebRtc_Word32 diff;
// First all-pass cascade; filter from in_data to out_data.
// Let y_i[n] indicate the output of cascade filter i (with filter coefficient a_i) at
// vector position n. Then the final output will be y[n] = y_3[n]
// First loop, use the states stored in memory.
// "diff" should be safe from wrap around since max values are 2^25
diff = WEBRTC_SPL_SUB_SAT_W32(in_data[0], filter_state[1]); // = (x[0] - y_1[-1])
// y_1[0] = x[-1] + a_1 * (x[0] - y_1[-1])
out_data[0] = WEBRTC_SPL_SCALEDIFF32(filter_coefficients[0], diff, filter_state[0]);
// For the remaining loops, use previous values.
for (k = 1; k < data_length; k++)
{
diff = WEBRTC_SPL_SUB_SAT_W32(in_data[k], out_data[k - 1]); // = (x[n] - y_1[n-1])
// y_1[n] = x[n-1] + a_1 * (x[n] - y_1[n-1])
out_data[k] = WEBRTC_SPL_SCALEDIFF32(filter_coefficients[0], diff, in_data[k - 1]);
}
// Update states.
filter_state[0] = in_data[data_length - 1]; // x[N-1], becomes x[-1] next time
filter_state[1] = out_data[data_length - 1]; // y_1[N-1], becomes y_1[-1] next time
// Second all-pass cascade; filter from out_data to in_data.
diff = WEBRTC_SPL_SUB_SAT_W32(out_data[0], filter_state[3]); // = (y_1[0] - y_2[-1])
// y_2[0] = y_1[-1] + a_2 * (y_1[0] - y_2[-1])
in_data[0] = WEBRTC_SPL_SCALEDIFF32(filter_coefficients[1], diff, filter_state[2]);
for (k = 1; k < data_length; k++)
{
diff = WEBRTC_SPL_SUB_SAT_W32(out_data[k], in_data[k - 1]); // =(y_1[n] - y_2[n-1])
// y_2[n] = y_1[n-1] + a_2 * (y_1[n] - y_2[n-1])
in_data[k] = WEBRTC_SPL_SCALEDIFF32(filter_coefficients[1], diff, out_data[k-1]);
}
filter_state[2] = out_data[data_length - 1]; // y_1[N-1], becomes y_1[-1] next time
filter_state[3] = in_data[data_length - 1]; // y_2[N-1], becomes y_2[-1] next time
// Third all-pass cascade; filter from in_data to out_data.
diff = WEBRTC_SPL_SUB_SAT_W32(in_data[0], filter_state[5]); // = (y_2[0] - y[-1])
// y[0] = y_2[-1] + a_3 * (y_2[0] - y[-1])
out_data[0] = WEBRTC_SPL_SCALEDIFF32(filter_coefficients[2], diff, filter_state[4]);
for (k = 1; k < data_length; k++)
{
diff = WEBRTC_SPL_SUB_SAT_W32(in_data[k], out_data[k - 1]); // = (y_2[n] - y[n-1])
// y[n] = y_2[n-1] + a_3 * (y_2[n] - y[n-1])
out_data[k] = WEBRTC_SPL_SCALEDIFF32(filter_coefficients[2], diff, in_data[k-1]);
}
filter_state[4] = in_data[data_length - 1]; // y_2[N-1], becomes y_2[-1] next time
filter_state[5] = out_data[data_length - 1]; // y[N-1], becomes y[-1] next time
}
void WebRtcSpl_AnalysisQMF(const WebRtc_Word16* in_data, WebRtc_Word16* low_band,
WebRtc_Word16* high_band, WebRtc_Word32* filter_state1,
WebRtc_Word32* filter_state2)
{
WebRtc_Word16 i;
WebRtc_Word16 k;
WebRtc_Word32 tmp;
WebRtc_Word32 half_in1[kBandFrameLength];
WebRtc_Word32 half_in2[kBandFrameLength];
WebRtc_Word32 filter1[kBandFrameLength];
WebRtc_Word32 filter2[kBandFrameLength];
// Split even and odd samples. Also shift them to Q10.
for (i = 0, k = 0; i < kBandFrameLength; i++, k += 2)
{
half_in2[i] = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)in_data[k], 10);
half_in1[i] = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)in_data[k + 1], 10);
}
// All pass filter even and odd samples, independently.
WebRtcSpl_AllPassQMF(half_in1, kBandFrameLength, filter1, WebRtcSpl_kAllPassFilter1,
filter_state1);
WebRtcSpl_AllPassQMF(half_in2, kBandFrameLength, filter2, WebRtcSpl_kAllPassFilter2,
filter_state2);
// Take the sum and difference of the filtered versions of the odd and even
// branches to get the upper & lower bands.
for (i = 0; i < kBandFrameLength; i++)
{
tmp = filter1[i] + filter2[i] + 1024;
tmp = WEBRTC_SPL_RSHIFT_W32(tmp, 11);
low_band[i] = WebRtcSpl_SatW32ToW16(tmp);
tmp = filter1[i] - filter2[i] + 1024;
tmp = WEBRTC_SPL_RSHIFT_W32(tmp, 11);
high_band[i] = WebRtcSpl_SatW32ToW16(tmp);
}
}
void WebRtcSpl_SynthesisQMF(const WebRtc_Word16* low_band, const WebRtc_Word16* high_band,
WebRtc_Word16* out_data, WebRtc_Word32* filter_state1,
WebRtc_Word32* filter_state2)
{
WebRtc_Word32 tmp;
WebRtc_Word32 half_in1[kBandFrameLength];
WebRtc_Word32 half_in2[kBandFrameLength];
WebRtc_Word32 filter1[kBandFrameLength];
WebRtc_Word32 filter2[kBandFrameLength];
WebRtc_Word16 i;
WebRtc_Word16 k;
// Obtain the sum and difference channels out of upper and lower-band channels.
// Also shift to Q10 domain.
for (i = 0; i < kBandFrameLength; i++)
{
tmp = (WebRtc_Word32)low_band[i] + (WebRtc_Word32)high_band[i];
half_in1[i] = WEBRTC_SPL_LSHIFT_W32(tmp, 10);
tmp = (WebRtc_Word32)low_band[i] - (WebRtc_Word32)high_band[i];
half_in2[i] = WEBRTC_SPL_LSHIFT_W32(tmp, 10);
}
// all-pass filter the sum and difference channels
WebRtcSpl_AllPassQMF(half_in1, kBandFrameLength, filter1, WebRtcSpl_kAllPassFilter2,
filter_state1);
WebRtcSpl_AllPassQMF(half_in2, kBandFrameLength, filter2, WebRtcSpl_kAllPassFilter1,
filter_state2);
// The filtered signals are even and odd samples of the output. Combine
// them. The signals are in Q10; shift them back to Q0 and take care of
// saturation.
for (i = 0, k = 0; i < kBandFrameLength; i++)
{
tmp = WEBRTC_SPL_RSHIFT_W32(filter2[i] + 512, 10);
out_data[k++] = WebRtcSpl_SatW32ToW16(tmp);
tmp = WEBRTC_SPL_RSHIFT_W32(filter1[i] + 512, 10);
out_data[k++] = WebRtcSpl_SatW32ToW16(tmp);
}
}
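
The fixed-point cascades above can be hard to follow because they ping-pong between |in_data| and |out_data|. As a hedged floating-point sketch (not part of the original file), each stage is just the first-order all-pass recursion y[n] = x[n-1] + a*(x[n] - y[n-1]) applied in place, with the Q16 coefficients above divided by 65536:

/* Hypothetical reference model; 'x' is filtered in place through the three
 * stages, and 'state' holds {x_prev, y_prev} per stage, mirroring the
 * filter_state layout used by WebRtcSpl_AllPassQMF above. */
static void AllPassChainFloat(double* x, int n, const double a[3],
                              double state[6])
{
    int stage, k;
    for (stage = 0; stage < 3; ++stage) {
        double in_prev = state[2 * stage];       /* x[-1] for this stage  */
        double out_prev = state[2 * stage + 1];  /* y[-1] for this stage  */
        for (k = 0; k < n; ++k) {
            double in_cur = x[k];
            double out = in_prev + a[stage] * (in_cur - out_prev);
            x[k] = out;            /* output of this stage feeds the next */
            in_prev = in_cur;
            out_prev = out;
        }
        state[2 * stage] = in_prev;
        state[2 * stage + 1] = out_prev;
    }
}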

@ -1,151 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains implementations of the functions
* WebRtcSpl_VectorBitShiftW16()
* WebRtcSpl_VectorBitShiftW32()
* WebRtcSpl_VectorBitShiftW32ToW16()
* WebRtcSpl_ScaleVector()
* WebRtcSpl_ScaleVectorWithSat()
* WebRtcSpl_ScaleAndAddVectors()
*
* The description header can be found in signal_processing_library.h
*
*/
#include "signal_processing_library.h"
void WebRtcSpl_VectorBitShiftW16(WebRtc_Word16 *res,
WebRtc_Word16 length,
G_CONST WebRtc_Word16 *in,
WebRtc_Word16 right_shifts)
{
int i;
if (right_shifts > 0)
{
for (i = length; i > 0; i--)
{
(*res++) = ((*in++) >> right_shifts);
}
} else
{
for (i = length; i > 0; i--)
{
(*res++) = ((*in++) << (-right_shifts));
}
}
}
void WebRtcSpl_VectorBitShiftW32(WebRtc_Word32 *out_vector,
WebRtc_Word16 vector_length,
G_CONST WebRtc_Word32 *in_vector,
WebRtc_Word16 right_shifts)
{
int i;
if (right_shifts > 0)
{
for (i = vector_length; i > 0; i--)
{
(*out_vector++) = ((*in_vector++) >> right_shifts);
}
} else
{
for (i = vector_length; i > 0; i--)
{
(*out_vector++) = ((*in_vector++) << (-right_shifts));
}
}
}
void WebRtcSpl_VectorBitShiftW32ToW16(WebRtc_Word16 *res,
WebRtc_Word16 length,
G_CONST WebRtc_Word32 *in,
WebRtc_Word16 right_shifts)
{
int i;
if (right_shifts >= 0)
{
for (i = length; i > 0; i--)
{
(*res++) = (WebRtc_Word16)((*in++) >> right_shifts);
}
} else
{
WebRtc_Word16 left_shifts = -right_shifts;
for (i = length; i > 0; i--)
{
(*res++) = (WebRtc_Word16)((*in++) << left_shifts);
}
}
}
void WebRtcSpl_ScaleVector(G_CONST WebRtc_Word16 *in_vector, WebRtc_Word16 *out_vector,
WebRtc_Word16 gain, WebRtc_Word16 in_vector_length,
WebRtc_Word16 right_shifts)
{
// Performs vector operation: out_vector = (gain*in_vector)>>right_shifts
int i;
G_CONST WebRtc_Word16 *inptr;
WebRtc_Word16 *outptr;
inptr = in_vector;
outptr = out_vector;
for (i = 0; i < in_vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(*inptr++, gain, right_shifts);
}
}
void WebRtcSpl_ScaleVectorWithSat(G_CONST WebRtc_Word16 *in_vector, WebRtc_Word16 *out_vector,
WebRtc_Word16 gain, WebRtc_Word16 in_vector_length,
WebRtc_Word16 right_shifts)
{
// Performs vector operation: out_vector = (gain*in_vector)>>right_shifts
int i;
WebRtc_Word32 tmpW32;
G_CONST WebRtc_Word16 *inptr;
WebRtc_Word16 *outptr;
inptr = in_vector;
outptr = out_vector;
for (i = 0; i < in_vector_length; i++)
{
tmpW32 = WEBRTC_SPL_MUL_16_16_RSFT(*inptr++, gain, right_shifts);
(*outptr++) = WebRtcSpl_SatW32ToW16(tmpW32);
}
}
void WebRtcSpl_ScaleAndAddVectors(G_CONST WebRtc_Word16 *in1, WebRtc_Word16 gain1, int shift1,
G_CONST WebRtc_Word16 *in2, WebRtc_Word16 gain2, int shift2,
WebRtc_Word16 *out, int vector_length)
{
// Performs vector operation: out = (gain1*in1)>>shift1 + (gain2*in2)>>shift2
int i;
G_CONST WebRtc_Word16 *in1ptr;
G_CONST WebRtc_Word16 *in2ptr;
WebRtc_Word16 *outptr;
in1ptr = in1;
in2ptr = in2;
outptr = out;
for (i = 0; i < vector_length; i++)
{
(*outptr++) = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(gain1, *in1ptr++, shift1)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(gain2, *in2ptr++, shift2);
}
}
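
For reference, a hedged usage sketch (not from the original tree) of WebRtcSpl_ScaleVector as declared above, scaling by a gain of 13 with a right shift of 2, i.e. out[i] = (13*in[i]) >> 2:

/* Hypothetical example; the expected values follow directly from
 * out[i] = (gain * in[i]) >> right_shifts. */
#include <stdio.h>

static void ScaleVectorExample(void)
{
    WebRtc_Word16 in[4] = {4, 12, 133, 1100};
    WebRtc_Word16 out[4];
    int i;
    WebRtcSpl_ScaleVector(in, out, 13, 4, 2);
    for (i = 0; i < 4; ++i) {
        printf("%d -> %d\n", in[i], out[i]);   /* 4 -> 13, 12 -> 39, ... */
    }
}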

@ -1,704 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the Q14 radix-8 tables used in ARM9e optimizations.
*
*/
extern const int s_Q14S_8;
const int s_Q14S_8 = 1024;
extern const unsigned short t_Q14S_8[2032];
const unsigned short t_Q14S_8[2032] = {
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x22a3,0x187e ,0x3249,0x0c7c ,0x11a8,0x238e ,
0x0000,0x2d41 ,0x22a3,0x187e ,0xdd5d,0x3b21 ,
0xdd5d,0x3b21 ,0x11a8,0x238e ,0xb4be,0x3ec5 ,
0xc000,0x4000 ,0x0000,0x2d41 ,0xa57e,0x2d41 ,
0xac61,0x3b21 ,0xee58,0x3537 ,0xb4be,0x0c7c ,
0xa57e,0x2d41 ,0xdd5d,0x3b21 ,0xdd5d,0xe782 ,
0xac61,0x187e ,0xcdb7,0x3ec5 ,0x11a8,0xcac9 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x396b,0x0646 ,0x3cc8,0x0324 ,0x35eb,0x0964 ,
0x3249,0x0c7c ,0x396b,0x0646 ,0x2aaa,0x1294 ,
0x2aaa,0x1294 ,0x35eb,0x0964 ,0x1e7e,0x1b5d ,
0x22a3,0x187e ,0x3249,0x0c7c ,0x11a8,0x238e ,
0x1a46,0x1e2b ,0x2e88,0x0f8d ,0x0471,0x2afb ,
0x11a8,0x238e ,0x2aaa,0x1294 ,0xf721,0x3179 ,
0x08df,0x289a ,0x26b3,0x1590 ,0xea02,0x36e5 ,
0x0000,0x2d41 ,0x22a3,0x187e ,0xdd5d,0x3b21 ,
0xf721,0x3179 ,0x1e7e,0x1b5d ,0xd178,0x3e15 ,
0xee58,0x3537 ,0x1a46,0x1e2b ,0xc695,0x3fb1 ,
0xe5ba,0x3871 ,0x15fe,0x20e7 ,0xbcf0,0x3fec ,
0xdd5d,0x3b21 ,0x11a8,0x238e ,0xb4be,0x3ec5 ,
0xd556,0x3d3f ,0x0d48,0x2620 ,0xae2e,0x3c42 ,
0xcdb7,0x3ec5 ,0x08df,0x289a ,0xa963,0x3871 ,
0xc695,0x3fb1 ,0x0471,0x2afb ,0xa678,0x3368 ,
0xc000,0x4000 ,0x0000,0x2d41 ,0xa57e,0x2d41 ,
0xba09,0x3fb1 ,0xfb8f,0x2f6c ,0xa678,0x2620 ,
0xb4be,0x3ec5 ,0xf721,0x3179 ,0xa963,0x1e2b ,
0xb02d,0x3d3f ,0xf2b8,0x3368 ,0xae2e,0x1590 ,
0xac61,0x3b21 ,0xee58,0x3537 ,0xb4be,0x0c7c ,
0xa963,0x3871 ,0xea02,0x36e5 ,0xbcf0,0x0324 ,
0xa73b,0x3537 ,0xe5ba,0x3871 ,0xc695,0xf9ba ,
0xa5ed,0x3179 ,0xe182,0x39db ,0xd178,0xf073 ,
0xa57e,0x2d41 ,0xdd5d,0x3b21 ,0xdd5d,0xe782 ,
0xa5ed,0x289a ,0xd94d,0x3c42 ,0xea02,0xdf19 ,
0xa73b,0x238e ,0xd556,0x3d3f ,0xf721,0xd766 ,
0xa963,0x1e2b ,0xd178,0x3e15 ,0x0471,0xd094 ,
0xac61,0x187e ,0xcdb7,0x3ec5 ,0x11a8,0xcac9 ,
0xb02d,0x1294 ,0xca15,0x3f4f ,0x1e7e,0xc625 ,
0xb4be,0x0c7c ,0xc695,0x3fb1 ,0x2aaa,0xc2c1 ,
0xba09,0x0646 ,0xc338,0x3fec ,0x35eb,0xc0b1 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3e69,0x0192 ,0x3f36,0x00c9 ,0x3d9a,0x025b ,
0x3cc8,0x0324 ,0x3e69,0x0192 ,0x3b1e,0x04b5 ,
0x3b1e,0x04b5 ,0x3d9a,0x025b ,0x388e,0x070e ,
0x396b,0x0646 ,0x3cc8,0x0324 ,0x35eb,0x0964 ,
0x37af,0x07d6 ,0x3bf4,0x03ed ,0x3334,0x0bb7 ,
0x35eb,0x0964 ,0x3b1e,0x04b5 ,0x306c,0x0e06 ,
0x341e,0x0af1 ,0x3a46,0x057e ,0x2d93,0x1050 ,
0x3249,0x0c7c ,0x396b,0x0646 ,0x2aaa,0x1294 ,
0x306c,0x0e06 ,0x388e,0x070e ,0x27b3,0x14d2 ,
0x2e88,0x0f8d ,0x37af,0x07d6 ,0x24ae,0x1709 ,
0x2c9d,0x1112 ,0x36ce,0x089d ,0x219c,0x1937 ,
0x2aaa,0x1294 ,0x35eb,0x0964 ,0x1e7e,0x1b5d ,
0x28b2,0x1413 ,0x3505,0x0a2b ,0x1b56,0x1d79 ,
0x26b3,0x1590 ,0x341e,0x0af1 ,0x1824,0x1f8c ,
0x24ae,0x1709 ,0x3334,0x0bb7 ,0x14ea,0x2193 ,
0x22a3,0x187e ,0x3249,0x0c7c ,0x11a8,0x238e ,
0x2093,0x19ef ,0x315b,0x0d41 ,0x0e61,0x257e ,
0x1e7e,0x1b5d ,0x306c,0x0e06 ,0x0b14,0x2760 ,
0x1c64,0x1cc6 ,0x2f7b,0x0eca ,0x07c4,0x2935 ,
0x1a46,0x1e2b ,0x2e88,0x0f8d ,0x0471,0x2afb ,
0x1824,0x1f8c ,0x2d93,0x1050 ,0x011c,0x2cb2 ,
0x15fe,0x20e7 ,0x2c9d,0x1112 ,0xfdc7,0x2e5a ,
0x13d5,0x223d ,0x2ba4,0x11d3 ,0xfa73,0x2ff2 ,
0x11a8,0x238e ,0x2aaa,0x1294 ,0xf721,0x3179 ,
0x0f79,0x24da ,0x29af,0x1354 ,0xf3d2,0x32ef ,
0x0d48,0x2620 ,0x28b2,0x1413 ,0xf087,0x3453 ,
0x0b14,0x2760 ,0x27b3,0x14d2 ,0xed41,0x35a5 ,
0x08df,0x289a ,0x26b3,0x1590 ,0xea02,0x36e5 ,
0x06a9,0x29ce ,0x25b1,0x164c ,0xe6cb,0x3812 ,
0x0471,0x2afb ,0x24ae,0x1709 ,0xe39c,0x392b ,
0x0239,0x2c21 ,0x23a9,0x17c4 ,0xe077,0x3a30 ,
0x0000,0x2d41 ,0x22a3,0x187e ,0xdd5d,0x3b21 ,
0xfdc7,0x2e5a ,0x219c,0x1937 ,0xda4f,0x3bfd ,
0xfb8f,0x2f6c ,0x2093,0x19ef ,0xd74e,0x3cc5 ,
0xf957,0x3076 ,0x1f89,0x1aa7 ,0xd45c,0x3d78 ,
0xf721,0x3179 ,0x1e7e,0x1b5d ,0xd178,0x3e15 ,
0xf4ec,0x3274 ,0x1d72,0x1c12 ,0xcea5,0x3e9d ,
0xf2b8,0x3368 ,0x1c64,0x1cc6 ,0xcbe2,0x3f0f ,
0xf087,0x3453 ,0x1b56,0x1d79 ,0xc932,0x3f6b ,
0xee58,0x3537 ,0x1a46,0x1e2b ,0xc695,0x3fb1 ,
0xec2b,0x3612 ,0x1935,0x1edc ,0xc40c,0x3fe1 ,
0xea02,0x36e5 ,0x1824,0x1f8c ,0xc197,0x3ffb ,
0xe7dc,0x37b0 ,0x1711,0x203a ,0xbf38,0x3fff ,
0xe5ba,0x3871 ,0x15fe,0x20e7 ,0xbcf0,0x3fec ,
0xe39c,0x392b ,0x14ea,0x2193 ,0xbabf,0x3fc4 ,
0xe182,0x39db ,0x13d5,0x223d ,0xb8a6,0x3f85 ,
0xdf6d,0x3a82 ,0x12bf,0x22e7 ,0xb6a5,0x3f30 ,
0xdd5d,0x3b21 ,0x11a8,0x238e ,0xb4be,0x3ec5 ,
0xdb52,0x3bb6 ,0x1091,0x2435 ,0xb2f2,0x3e45 ,
0xd94d,0x3c42 ,0x0f79,0x24da ,0xb140,0x3daf ,
0xd74e,0x3cc5 ,0x0e61,0x257e ,0xafa9,0x3d03 ,
0xd556,0x3d3f ,0x0d48,0x2620 ,0xae2e,0x3c42 ,
0xd363,0x3daf ,0x0c2e,0x26c1 ,0xacd0,0x3b6d ,
0xd178,0x3e15 ,0x0b14,0x2760 ,0xab8e,0x3a82 ,
0xcf94,0x3e72 ,0x09fa,0x27fe ,0xaa6a,0x3984 ,
0xcdb7,0x3ec5 ,0x08df,0x289a ,0xa963,0x3871 ,
0xcbe2,0x3f0f ,0x07c4,0x2935 ,0xa87b,0x374b ,
0xca15,0x3f4f ,0x06a9,0x29ce ,0xa7b1,0x3612 ,
0xc851,0x3f85 ,0x058d,0x2a65 ,0xa705,0x34c6 ,
0xc695,0x3fb1 ,0x0471,0x2afb ,0xa678,0x3368 ,
0xc4e2,0x3fd4 ,0x0355,0x2b8f ,0xa60b,0x31f8 ,
0xc338,0x3fec ,0x0239,0x2c21 ,0xa5bc,0x3076 ,
0xc197,0x3ffb ,0x011c,0x2cb2 ,0xa58d,0x2ee4 ,
0xc000,0x4000 ,0x0000,0x2d41 ,0xa57e,0x2d41 ,
0xbe73,0x3ffb ,0xfee4,0x2dcf ,0xa58d,0x2b8f ,
0xbcf0,0x3fec ,0xfdc7,0x2e5a ,0xa5bc,0x29ce ,
0xbb77,0x3fd4 ,0xfcab,0x2ee4 ,0xa60b,0x27fe ,
0xba09,0x3fb1 ,0xfb8f,0x2f6c ,0xa678,0x2620 ,
0xb8a6,0x3f85 ,0xfa73,0x2ff2 ,0xa705,0x2435 ,
0xb74d,0x3f4f ,0xf957,0x3076 ,0xa7b1,0x223d ,
0xb600,0x3f0f ,0xf83c,0x30f9 ,0xa87b,0x203a ,
0xb4be,0x3ec5 ,0xf721,0x3179 ,0xa963,0x1e2b ,
0xb388,0x3e72 ,0xf606,0x31f8 ,0xaa6a,0x1c12 ,
0xb25e,0x3e15 ,0xf4ec,0x3274 ,0xab8e,0x19ef ,
0xb140,0x3daf ,0xf3d2,0x32ef ,0xacd0,0x17c4 ,
0xb02d,0x3d3f ,0xf2b8,0x3368 ,0xae2e,0x1590 ,
0xaf28,0x3cc5 ,0xf19f,0x33df ,0xafa9,0x1354 ,
0xae2e,0x3c42 ,0xf087,0x3453 ,0xb140,0x1112 ,
0xad41,0x3bb6 ,0xef6f,0x34c6 ,0xb2f2,0x0eca ,
0xac61,0x3b21 ,0xee58,0x3537 ,0xb4be,0x0c7c ,
0xab8e,0x3a82 ,0xed41,0x35a5 ,0xb6a5,0x0a2b ,
0xaac8,0x39db ,0xec2b,0x3612 ,0xb8a6,0x07d6 ,
0xaa0f,0x392b ,0xeb16,0x367d ,0xbabf,0x057e ,
0xa963,0x3871 ,0xea02,0x36e5 ,0xbcf0,0x0324 ,
0xa8c5,0x37b0 ,0xe8ef,0x374b ,0xbf38,0x00c9 ,
0xa834,0x36e5 ,0xe7dc,0x37b0 ,0xc197,0xfe6e ,
0xa7b1,0x3612 ,0xe6cb,0x3812 ,0xc40c,0xfc13 ,
0xa73b,0x3537 ,0xe5ba,0x3871 ,0xc695,0xf9ba ,
0xa6d3,0x3453 ,0xe4aa,0x38cf ,0xc932,0xf763 ,
0xa678,0x3368 ,0xe39c,0x392b ,0xcbe2,0xf50f ,
0xa62c,0x3274 ,0xe28e,0x3984 ,0xcea5,0xf2bf ,
0xa5ed,0x3179 ,0xe182,0x39db ,0xd178,0xf073 ,
0xa5bc,0x3076 ,0xe077,0x3a30 ,0xd45c,0xee2d ,
0xa599,0x2f6c ,0xdf6d,0x3a82 ,0xd74e,0xebed ,
0xa585,0x2e5a ,0xde64,0x3ad3 ,0xda4f,0xe9b4 ,
0xa57e,0x2d41 ,0xdd5d,0x3b21 ,0xdd5d,0xe782 ,
0xa585,0x2c21 ,0xdc57,0x3b6d ,0xe077,0xe559 ,
0xa599,0x2afb ,0xdb52,0x3bb6 ,0xe39c,0xe33a ,
0xa5bc,0x29ce ,0xda4f,0x3bfd ,0xe6cb,0xe124 ,
0xa5ed,0x289a ,0xd94d,0x3c42 ,0xea02,0xdf19 ,
0xa62c,0x2760 ,0xd84d,0x3c85 ,0xed41,0xdd19 ,
0xa678,0x2620 ,0xd74e,0x3cc5 ,0xf087,0xdb26 ,
0xa6d3,0x24da ,0xd651,0x3d03 ,0xf3d2,0xd93f ,
0xa73b,0x238e ,0xd556,0x3d3f ,0xf721,0xd766 ,
0xa7b1,0x223d ,0xd45c,0x3d78 ,0xfa73,0xd59b ,
0xa834,0x20e7 ,0xd363,0x3daf ,0xfdc7,0xd3df ,
0xa8c5,0x1f8c ,0xd26d,0x3de3 ,0x011c,0xd231 ,
0xa963,0x1e2b ,0xd178,0x3e15 ,0x0471,0xd094 ,
0xaa0f,0x1cc6 ,0xd085,0x3e45 ,0x07c4,0xcf07 ,
0xaac8,0x1b5d ,0xcf94,0x3e72 ,0x0b14,0xcd8c ,
0xab8e,0x19ef ,0xcea5,0x3e9d ,0x0e61,0xcc21 ,
0xac61,0x187e ,0xcdb7,0x3ec5 ,0x11a8,0xcac9 ,
0xad41,0x1709 ,0xcccc,0x3eeb ,0x14ea,0xc983 ,
0xae2e,0x1590 ,0xcbe2,0x3f0f ,0x1824,0xc850 ,
0xaf28,0x1413 ,0xcafb,0x3f30 ,0x1b56,0xc731 ,
0xb02d,0x1294 ,0xca15,0x3f4f ,0x1e7e,0xc625 ,
0xb140,0x1112 ,0xc932,0x3f6b ,0x219c,0xc52d ,
0xb25e,0x0f8d ,0xc851,0x3f85 ,0x24ae,0xc44a ,
0xb388,0x0e06 ,0xc772,0x3f9c ,0x27b3,0xc37b ,
0xb4be,0x0c7c ,0xc695,0x3fb1 ,0x2aaa,0xc2c1 ,
0xb600,0x0af1 ,0xc5ba,0x3fc4 ,0x2d93,0xc21d ,
0xb74d,0x0964 ,0xc4e2,0x3fd4 ,0x306c,0xc18e ,
0xb8a6,0x07d6 ,0xc40c,0x3fe1 ,0x3334,0xc115 ,
0xba09,0x0646 ,0xc338,0x3fec ,0x35eb,0xc0b1 ,
0xbb77,0x04b5 ,0xc266,0x3ff5 ,0x388e,0xc064 ,
0xbcf0,0x0324 ,0xc197,0x3ffb ,0x3b1e,0xc02c ,
0xbe73,0x0192 ,0xc0ca,0x3fff ,0x3d9a,0xc00b ,
0x4000,0x0000 ,0x3f9b,0x0065 ,0x3f36,0x00c9 ,
0x3ed0,0x012e ,0x3e69,0x0192 ,0x3e02,0x01f7 ,
0x3d9a,0x025b ,0x3d31,0x02c0 ,0x3cc8,0x0324 ,
0x3c5f,0x0388 ,0x3bf4,0x03ed ,0x3b8a,0x0451 ,
0x3b1e,0x04b5 ,0x3ab2,0x051a ,0x3a46,0x057e ,
0x39d9,0x05e2 ,0x396b,0x0646 ,0x38fd,0x06aa ,
0x388e,0x070e ,0x381f,0x0772 ,0x37af,0x07d6 ,
0x373f,0x0839 ,0x36ce,0x089d ,0x365d,0x0901 ,
0x35eb,0x0964 ,0x3578,0x09c7 ,0x3505,0x0a2b ,
0x3492,0x0a8e ,0x341e,0x0af1 ,0x33a9,0x0b54 ,
0x3334,0x0bb7 ,0x32bf,0x0c1a ,0x3249,0x0c7c ,
0x31d2,0x0cdf ,0x315b,0x0d41 ,0x30e4,0x0da4 ,
0x306c,0x0e06 ,0x2ff4,0x0e68 ,0x2f7b,0x0eca ,
0x2f02,0x0f2b ,0x2e88,0x0f8d ,0x2e0e,0x0fee ,
0x2d93,0x1050 ,0x2d18,0x10b1 ,0x2c9d,0x1112 ,
0x2c21,0x1173 ,0x2ba4,0x11d3 ,0x2b28,0x1234 ,
0x2aaa,0x1294 ,0x2a2d,0x12f4 ,0x29af,0x1354 ,
0x2931,0x13b4 ,0x28b2,0x1413 ,0x2833,0x1473 ,
0x27b3,0x14d2 ,0x2733,0x1531 ,0x26b3,0x1590 ,
0x2632,0x15ee ,0x25b1,0x164c ,0x252f,0x16ab ,
0x24ae,0x1709 ,0x242b,0x1766 ,0x23a9,0x17c4 ,
0x2326,0x1821 ,0x22a3,0x187e ,0x221f,0x18db ,
0x219c,0x1937 ,0x2117,0x1993 ,0x2093,0x19ef ,
0x200e,0x1a4b ,0x1f89,0x1aa7 ,0x1f04,0x1b02 ,
0x1e7e,0x1b5d ,0x1df8,0x1bb8 ,0x1d72,0x1c12 ,
0x1ceb,0x1c6c ,0x1c64,0x1cc6 ,0x1bdd,0x1d20 ,
0x1b56,0x1d79 ,0x1ace,0x1dd3 ,0x1a46,0x1e2b ,
0x19be,0x1e84 ,0x1935,0x1edc ,0x18ad,0x1f34 ,
0x1824,0x1f8c ,0x179b,0x1fe3 ,0x1711,0x203a ,
0x1688,0x2091 ,0x15fe,0x20e7 ,0x1574,0x213d ,
0x14ea,0x2193 ,0x145f,0x21e8 ,0x13d5,0x223d ,
0x134a,0x2292 ,0x12bf,0x22e7 ,0x1234,0x233b ,
0x11a8,0x238e ,0x111d,0x23e2 ,0x1091,0x2435 ,
0x1005,0x2488 ,0x0f79,0x24da ,0x0eed,0x252c ,
0x0e61,0x257e ,0x0dd4,0x25cf ,0x0d48,0x2620 ,
0x0cbb,0x2671 ,0x0c2e,0x26c1 ,0x0ba1,0x2711 ,
0x0b14,0x2760 ,0x0a87,0x27af ,0x09fa,0x27fe ,
0x096d,0x284c ,0x08df,0x289a ,0x0852,0x28e7 ,
0x07c4,0x2935 ,0x0736,0x2981 ,0x06a9,0x29ce ,
0x061b,0x2a1a ,0x058d,0x2a65 ,0x04ff,0x2ab0 ,
0x0471,0x2afb ,0x03e3,0x2b45 ,0x0355,0x2b8f ,
0x02c7,0x2bd8 ,0x0239,0x2c21 ,0x01aa,0x2c6a ,
0x011c,0x2cb2 ,0x008e,0x2cfa ,0x0000,0x2d41 ,
0xff72,0x2d88 ,0xfee4,0x2dcf ,0xfe56,0x2e15 ,
0xfdc7,0x2e5a ,0xfd39,0x2e9f ,0xfcab,0x2ee4 ,
0xfc1d,0x2f28 ,0xfb8f,0x2f6c ,0xfb01,0x2faf ,
0xfa73,0x2ff2 ,0xf9e5,0x3034 ,0xf957,0x3076 ,
0xf8ca,0x30b8 ,0xf83c,0x30f9 ,0xf7ae,0x3139 ,
0xf721,0x3179 ,0xf693,0x31b9 ,0xf606,0x31f8 ,
0xf579,0x3236 ,0xf4ec,0x3274 ,0xf45f,0x32b2 ,
0xf3d2,0x32ef ,0xf345,0x332c ,0xf2b8,0x3368 ,
0xf22c,0x33a3 ,0xf19f,0x33df ,0xf113,0x3419 ,
0xf087,0x3453 ,0xeffb,0x348d ,0xef6f,0x34c6 ,
0xeee3,0x34ff ,0xee58,0x3537 ,0xedcc,0x356e ,
0xed41,0x35a5 ,0xecb6,0x35dc ,0xec2b,0x3612 ,
0xeba1,0x3648 ,0xeb16,0x367d ,0xea8c,0x36b1 ,
0xea02,0x36e5 ,0xe978,0x3718 ,0xe8ef,0x374b ,
0xe865,0x377e ,0xe7dc,0x37b0 ,0xe753,0x37e1 ,
0xe6cb,0x3812 ,0xe642,0x3842 ,0xe5ba,0x3871 ,
0xe532,0x38a1 ,0xe4aa,0x38cf ,0xe423,0x38fd ,
0xe39c,0x392b ,0xe315,0x3958 ,0xe28e,0x3984 ,
0xe208,0x39b0 ,0xe182,0x39db ,0xe0fc,0x3a06 ,
0xe077,0x3a30 ,0xdff2,0x3a59 ,0xdf6d,0x3a82 ,
0xdee9,0x3aab ,0xde64,0x3ad3 ,0xdde1,0x3afa ,
0xdd5d,0x3b21 ,0xdcda,0x3b47 ,0xdc57,0x3b6d ,
0xdbd5,0x3b92 ,0xdb52,0x3bb6 ,0xdad1,0x3bda ,
0xda4f,0x3bfd ,0xd9ce,0x3c20 ,0xd94d,0x3c42 ,
0xd8cd,0x3c64 ,0xd84d,0x3c85 ,0xd7cd,0x3ca5 ,
0xd74e,0x3cc5 ,0xd6cf,0x3ce4 ,0xd651,0x3d03 ,
0xd5d3,0x3d21 ,0xd556,0x3d3f ,0xd4d8,0x3d5b ,
0xd45c,0x3d78 ,0xd3df,0x3d93 ,0xd363,0x3daf ,
0xd2e8,0x3dc9 ,0xd26d,0x3de3 ,0xd1f2,0x3dfc ,
0xd178,0x3e15 ,0xd0fe,0x3e2d ,0xd085,0x3e45 ,
0xd00c,0x3e5c ,0xcf94,0x3e72 ,0xcf1c,0x3e88 ,
0xcea5,0x3e9d ,0xce2e,0x3eb1 ,0xcdb7,0x3ec5 ,
0xcd41,0x3ed8 ,0xcccc,0x3eeb ,0xcc57,0x3efd ,
0xcbe2,0x3f0f ,0xcb6e,0x3f20 ,0xcafb,0x3f30 ,
0xca88,0x3f40 ,0xca15,0x3f4f ,0xc9a3,0x3f5d ,
0xc932,0x3f6b ,0xc8c1,0x3f78 ,0xc851,0x3f85 ,
0xc7e1,0x3f91 ,0xc772,0x3f9c ,0xc703,0x3fa7 ,
0xc695,0x3fb1 ,0xc627,0x3fbb ,0xc5ba,0x3fc4 ,
0xc54e,0x3fcc ,0xc4e2,0x3fd4 ,0xc476,0x3fdb ,
0xc40c,0x3fe1 ,0xc3a1,0x3fe7 ,0xc338,0x3fec ,
0xc2cf,0x3ff1 ,0xc266,0x3ff5 ,0xc1fe,0x3ff8 ,
0xc197,0x3ffb ,0xc130,0x3ffd ,0xc0ca,0x3fff ,
0xc065,0x4000 ,0xc000,0x4000 ,0xbf9c,0x4000 ,
0xbf38,0x3fff ,0xbed5,0x3ffd ,0xbe73,0x3ffb ,
0xbe11,0x3ff8 ,0xbdb0,0x3ff5 ,0xbd50,0x3ff1 ,
0xbcf0,0x3fec ,0xbc91,0x3fe7 ,0xbc32,0x3fe1 ,
0xbbd4,0x3fdb ,0xbb77,0x3fd4 ,0xbb1b,0x3fcc ,
0xbabf,0x3fc4 ,0xba64,0x3fbb ,0xba09,0x3fb1 ,
0xb9af,0x3fa7 ,0xb956,0x3f9c ,0xb8fd,0x3f91 ,
0xb8a6,0x3f85 ,0xb84f,0x3f78 ,0xb7f8,0x3f6b ,
0xb7a2,0x3f5d ,0xb74d,0x3f4f ,0xb6f9,0x3f40 ,
0xb6a5,0x3f30 ,0xb652,0x3f20 ,0xb600,0x3f0f ,
0xb5af,0x3efd ,0xb55e,0x3eeb ,0xb50e,0x3ed8 ,
0xb4be,0x3ec5 ,0xb470,0x3eb1 ,0xb422,0x3e9d ,
0xb3d5,0x3e88 ,0xb388,0x3e72 ,0xb33d,0x3e5c ,
0xb2f2,0x3e45 ,0xb2a7,0x3e2d ,0xb25e,0x3e15 ,
0xb215,0x3dfc ,0xb1cd,0x3de3 ,0xb186,0x3dc9 ,
0xb140,0x3daf ,0xb0fa,0x3d93 ,0xb0b5,0x3d78 ,
0xb071,0x3d5b ,0xb02d,0x3d3f ,0xafeb,0x3d21 ,
0xafa9,0x3d03 ,0xaf68,0x3ce4 ,0xaf28,0x3cc5 ,
0xaee8,0x3ca5 ,0xaea9,0x3c85 ,0xae6b,0x3c64 ,
0xae2e,0x3c42 ,0xadf2,0x3c20 ,0xadb6,0x3bfd ,
0xad7b,0x3bda ,0xad41,0x3bb6 ,0xad08,0x3b92 ,
0xacd0,0x3b6d ,0xac98,0x3b47 ,0xac61,0x3b21 ,
0xac2b,0x3afa ,0xabf6,0x3ad3 ,0xabc2,0x3aab ,
0xab8e,0x3a82 ,0xab5b,0x3a59 ,0xab29,0x3a30 ,
0xaaf8,0x3a06 ,0xaac8,0x39db ,0xaa98,0x39b0 ,
0xaa6a,0x3984 ,0xaa3c,0x3958 ,0xaa0f,0x392b ,
0xa9e3,0x38fd ,0xa9b7,0x38cf ,0xa98d,0x38a1 ,
0xa963,0x3871 ,0xa93a,0x3842 ,0xa912,0x3812 ,
0xa8eb,0x37e1 ,0xa8c5,0x37b0 ,0xa89f,0x377e ,
0xa87b,0x374b ,0xa857,0x3718 ,0xa834,0x36e5 ,
0xa812,0x36b1 ,0xa7f1,0x367d ,0xa7d0,0x3648 ,
0xa7b1,0x3612 ,0xa792,0x35dc ,0xa774,0x35a5 ,
0xa757,0x356e ,0xa73b,0x3537 ,0xa71f,0x34ff ,
0xa705,0x34c6 ,0xa6eb,0x348d ,0xa6d3,0x3453 ,
0xa6bb,0x3419 ,0xa6a4,0x33df ,0xa68e,0x33a3 ,
0xa678,0x3368 ,0xa664,0x332c ,0xa650,0x32ef ,
0xa63e,0x32b2 ,0xa62c,0x3274 ,0xa61b,0x3236 ,
0xa60b,0x31f8 ,0xa5fb,0x31b9 ,0xa5ed,0x3179 ,
0xa5e0,0x3139 ,0xa5d3,0x30f9 ,0xa5c7,0x30b8 ,
0xa5bc,0x3076 ,0xa5b2,0x3034 ,0xa5a9,0x2ff2 ,
0xa5a1,0x2faf ,0xa599,0x2f6c ,0xa593,0x2f28 ,
0xa58d,0x2ee4 ,0xa588,0x2e9f ,0xa585,0x2e5a ,
0xa581,0x2e15 ,0xa57f,0x2dcf ,0xa57e,0x2d88 ,
0xa57e,0x2d41 ,0xa57e,0x2cfa ,0xa57f,0x2cb2 ,
0xa581,0x2c6a ,0xa585,0x2c21 ,0xa588,0x2bd8 ,
0xa58d,0x2b8f ,0xa593,0x2b45 ,0xa599,0x2afb ,
0xa5a1,0x2ab0 ,0xa5a9,0x2a65 ,0xa5b2,0x2a1a ,
0xa5bc,0x29ce ,0xa5c7,0x2981 ,0xa5d3,0x2935 ,
0xa5e0,0x28e7 ,0xa5ed,0x289a ,0xa5fb,0x284c ,
0xa60b,0x27fe ,0xa61b,0x27af ,0xa62c,0x2760 ,
0xa63e,0x2711 ,0xa650,0x26c1 ,0xa664,0x2671 ,
0xa678,0x2620 ,0xa68e,0x25cf ,0xa6a4,0x257e ,
0xa6bb,0x252c ,0xa6d3,0x24da ,0xa6eb,0x2488 ,
0xa705,0x2435 ,0xa71f,0x23e2 ,0xa73b,0x238e ,
0xa757,0x233b ,0xa774,0x22e7 ,0xa792,0x2292 ,
0xa7b1,0x223d ,0xa7d0,0x21e8 ,0xa7f1,0x2193 ,
0xa812,0x213d ,0xa834,0x20e7 ,0xa857,0x2091 ,
0xa87b,0x203a ,0xa89f,0x1fe3 ,0xa8c5,0x1f8c ,
0xa8eb,0x1f34 ,0xa912,0x1edc ,0xa93a,0x1e84 ,
0xa963,0x1e2b ,0xa98d,0x1dd3 ,0xa9b7,0x1d79 ,
0xa9e3,0x1d20 ,0xaa0f,0x1cc6 ,0xaa3c,0x1c6c ,
0xaa6a,0x1c12 ,0xaa98,0x1bb8 ,0xaac8,0x1b5d ,
0xaaf8,0x1b02 ,0xab29,0x1aa7 ,0xab5b,0x1a4b ,
0xab8e,0x19ef ,0xabc2,0x1993 ,0xabf6,0x1937 ,
0xac2b,0x18db ,0xac61,0x187e ,0xac98,0x1821 ,
0xacd0,0x17c4 ,0xad08,0x1766 ,0xad41,0x1709 ,
0xad7b,0x16ab ,0xadb6,0x164c ,0xadf2,0x15ee ,
0xae2e,0x1590 ,0xae6b,0x1531 ,0xaea9,0x14d2 ,
0xaee8,0x1473 ,0xaf28,0x1413 ,0xaf68,0x13b4 ,
0xafa9,0x1354 ,0xafeb,0x12f4 ,0xb02d,0x1294 ,
0xb071,0x1234 ,0xb0b5,0x11d3 ,0xb0fa,0x1173 ,
0xb140,0x1112 ,0xb186,0x10b1 ,0xb1cd,0x1050 ,
0xb215,0x0fee ,0xb25e,0x0f8d ,0xb2a7,0x0f2b ,
0xb2f2,0x0eca ,0xb33d,0x0e68 ,0xb388,0x0e06 ,
0xb3d5,0x0da4 ,0xb422,0x0d41 ,0xb470,0x0cdf ,
0xb4be,0x0c7c ,0xb50e,0x0c1a ,0xb55e,0x0bb7 ,
0xb5af,0x0b54 ,0xb600,0x0af1 ,0xb652,0x0a8e ,
0xb6a5,0x0a2b ,0xb6f9,0x09c7 ,0xb74d,0x0964 ,
0xb7a2,0x0901 ,0xb7f8,0x089d ,0xb84f,0x0839 ,
0xb8a6,0x07d6 ,0xb8fd,0x0772 ,0xb956,0x070e ,
0xb9af,0x06aa ,0xba09,0x0646 ,0xba64,0x05e2 ,
0xbabf,0x057e ,0xbb1b,0x051a ,0xbb77,0x04b5 ,
0xbbd4,0x0451 ,0xbc32,0x03ed ,0xbc91,0x0388 ,
0xbcf0,0x0324 ,0xbd50,0x02c0 ,0xbdb0,0x025b ,
0xbe11,0x01f7 ,0xbe73,0x0192 ,0xbed5,0x012e ,
0xbf38,0x00c9 ,0xbf9c,0x0065 };
extern const int s_Q14R_8;
const int s_Q14R_8 = 1024;
extern const unsigned short t_Q14R_8[2032];
const unsigned short t_Q14R_8[2032] = {
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3b21,0x187e ,0x3ec5,0x0c7c ,0x3537,0x238e ,
0x2d41,0x2d41 ,0x3b21,0x187e ,0x187e,0x3b21 ,
0x187e,0x3b21 ,0x3537,0x238e ,0xf384,0x3ec5 ,
0x0000,0x4000 ,0x2d41,0x2d41 ,0xd2bf,0x2d41 ,
0xe782,0x3b21 ,0x238e,0x3537 ,0xc13b,0x0c7c ,
0xd2bf,0x2d41 ,0x187e,0x3b21 ,0xc4df,0xe782 ,
0xc4df,0x187e ,0x0c7c,0x3ec5 ,0xdc72,0xcac9 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3fb1,0x0646 ,0x3fec,0x0324 ,0x3f4f,0x0964 ,
0x3ec5,0x0c7c ,0x3fb1,0x0646 ,0x3d3f,0x1294 ,
0x3d3f,0x1294 ,0x3f4f,0x0964 ,0x39db,0x1b5d ,
0x3b21,0x187e ,0x3ec5,0x0c7c ,0x3537,0x238e ,
0x3871,0x1e2b ,0x3e15,0x0f8d ,0x2f6c,0x2afb ,
0x3537,0x238e ,0x3d3f,0x1294 ,0x289a,0x3179 ,
0x3179,0x289a ,0x3c42,0x1590 ,0x20e7,0x36e5 ,
0x2d41,0x2d41 ,0x3b21,0x187e ,0x187e,0x3b21 ,
0x289a,0x3179 ,0x39db,0x1b5d ,0x0f8d,0x3e15 ,
0x238e,0x3537 ,0x3871,0x1e2b ,0x0646,0x3fb1 ,
0x1e2b,0x3871 ,0x36e5,0x20e7 ,0xfcdc,0x3fec ,
0x187e,0x3b21 ,0x3537,0x238e ,0xf384,0x3ec5 ,
0x1294,0x3d3f ,0x3368,0x2620 ,0xea70,0x3c42 ,
0x0c7c,0x3ec5 ,0x3179,0x289a ,0xe1d5,0x3871 ,
0x0646,0x3fb1 ,0x2f6c,0x2afb ,0xd9e0,0x3368 ,
0x0000,0x4000 ,0x2d41,0x2d41 ,0xd2bf,0x2d41 ,
0xf9ba,0x3fb1 ,0x2afb,0x2f6c ,0xcc98,0x2620 ,
0xf384,0x3ec5 ,0x289a,0x3179 ,0xc78f,0x1e2b ,
0xed6c,0x3d3f ,0x2620,0x3368 ,0xc3be,0x1590 ,
0xe782,0x3b21 ,0x238e,0x3537 ,0xc13b,0x0c7c ,
0xe1d5,0x3871 ,0x20e7,0x36e5 ,0xc014,0x0324 ,
0xdc72,0x3537 ,0x1e2b,0x3871 ,0xc04f,0xf9ba ,
0xd766,0x3179 ,0x1b5d,0x39db ,0xc1eb,0xf073 ,
0xd2bf,0x2d41 ,0x187e,0x3b21 ,0xc4df,0xe782 ,
0xce87,0x289a ,0x1590,0x3c42 ,0xc91b,0xdf19 ,
0xcac9,0x238e ,0x1294,0x3d3f ,0xce87,0xd766 ,
0xc78f,0x1e2b ,0x0f8d,0x3e15 ,0xd505,0xd094 ,
0xc4df,0x187e ,0x0c7c,0x3ec5 ,0xdc72,0xcac9 ,
0xc2c1,0x1294 ,0x0964,0x3f4f ,0xe4a3,0xc625 ,
0xc13b,0x0c7c ,0x0646,0x3fb1 ,0xed6c,0xc2c1 ,
0xc04f,0x0646 ,0x0324,0x3fec ,0xf69c,0xc0b1 ,
0x4000,0x0000 ,0x4000,0x0000 ,0x4000,0x0000 ,
0x3ffb,0x0192 ,0x3fff,0x00c9 ,0x3ff5,0x025b ,
0x3fec,0x0324 ,0x3ffb,0x0192 ,0x3fd4,0x04b5 ,
0x3fd4,0x04b5 ,0x3ff5,0x025b ,0x3f9c,0x070e ,
0x3fb1,0x0646 ,0x3fec,0x0324 ,0x3f4f,0x0964 ,
0x3f85,0x07d6 ,0x3fe1,0x03ed ,0x3eeb,0x0bb7 ,
0x3f4f,0x0964 ,0x3fd4,0x04b5 ,0x3e72,0x0e06 ,
0x3f0f,0x0af1 ,0x3fc4,0x057e ,0x3de3,0x1050 ,
0x3ec5,0x0c7c ,0x3fb1,0x0646 ,0x3d3f,0x1294 ,
0x3e72,0x0e06 ,0x3f9c,0x070e ,0x3c85,0x14d2 ,
0x3e15,0x0f8d ,0x3f85,0x07d6 ,0x3bb6,0x1709 ,
0x3daf,0x1112 ,0x3f6b,0x089d ,0x3ad3,0x1937 ,
0x3d3f,0x1294 ,0x3f4f,0x0964 ,0x39db,0x1b5d ,
0x3cc5,0x1413 ,0x3f30,0x0a2b ,0x38cf,0x1d79 ,
0x3c42,0x1590 ,0x3f0f,0x0af1 ,0x37b0,0x1f8c ,
0x3bb6,0x1709 ,0x3eeb,0x0bb7 ,0x367d,0x2193 ,
0x3b21,0x187e ,0x3ec5,0x0c7c ,0x3537,0x238e ,
0x3a82,0x19ef ,0x3e9d,0x0d41 ,0x33df,0x257e ,
0x39db,0x1b5d ,0x3e72,0x0e06 ,0x3274,0x2760 ,
0x392b,0x1cc6 ,0x3e45,0x0eca ,0x30f9,0x2935 ,
0x3871,0x1e2b ,0x3e15,0x0f8d ,0x2f6c,0x2afb ,
0x37b0,0x1f8c ,0x3de3,0x1050 ,0x2dcf,0x2cb2 ,
0x36e5,0x20e7 ,0x3daf,0x1112 ,0x2c21,0x2e5a ,
0x3612,0x223d ,0x3d78,0x11d3 ,0x2a65,0x2ff2 ,
0x3537,0x238e ,0x3d3f,0x1294 ,0x289a,0x3179 ,
0x3453,0x24da ,0x3d03,0x1354 ,0x26c1,0x32ef ,
0x3368,0x2620 ,0x3cc5,0x1413 ,0x24da,0x3453 ,
0x3274,0x2760 ,0x3c85,0x14d2 ,0x22e7,0x35a5 ,
0x3179,0x289a ,0x3c42,0x1590 ,0x20e7,0x36e5 ,
0x3076,0x29ce ,0x3bfd,0x164c ,0x1edc,0x3812 ,
0x2f6c,0x2afb ,0x3bb6,0x1709 ,0x1cc6,0x392b ,
0x2e5a,0x2c21 ,0x3b6d,0x17c4 ,0x1aa7,0x3a30 ,
0x2d41,0x2d41 ,0x3b21,0x187e ,0x187e,0x3b21 ,
0x2c21,0x2e5a ,0x3ad3,0x1937 ,0x164c,0x3bfd ,
0x2afb,0x2f6c ,0x3a82,0x19ef ,0x1413,0x3cc5 ,
0x29ce,0x3076 ,0x3a30,0x1aa7 ,0x11d3,0x3d78 ,
0x289a,0x3179 ,0x39db,0x1b5d ,0x0f8d,0x3e15 ,
0x2760,0x3274 ,0x3984,0x1c12 ,0x0d41,0x3e9d ,
0x2620,0x3368 ,0x392b,0x1cc6 ,0x0af1,0x3f0f ,
0x24da,0x3453 ,0x38cf,0x1d79 ,0x089d,0x3f6b ,
0x238e,0x3537 ,0x3871,0x1e2b ,0x0646,0x3fb1 ,
0x223d,0x3612 ,0x3812,0x1edc ,0x03ed,0x3fe1 ,
0x20e7,0x36e5 ,0x37b0,0x1f8c ,0x0192,0x3ffb ,
0x1f8c,0x37b0 ,0x374b,0x203a ,0xff37,0x3fff ,
0x1e2b,0x3871 ,0x36e5,0x20e7 ,0xfcdc,0x3fec ,
0x1cc6,0x392b ,0x367d,0x2193 ,0xfa82,0x3fc4 ,
0x1b5d,0x39db ,0x3612,0x223d ,0xf82a,0x3f85 ,
0x19ef,0x3a82 ,0x35a5,0x22e7 ,0xf5d5,0x3f30 ,
0x187e,0x3b21 ,0x3537,0x238e ,0xf384,0x3ec5 ,
0x1709,0x3bb6 ,0x34c6,0x2435 ,0xf136,0x3e45 ,
0x1590,0x3c42 ,0x3453,0x24da ,0xeeee,0x3daf ,
0x1413,0x3cc5 ,0x33df,0x257e ,0xecac,0x3d03 ,
0x1294,0x3d3f ,0x3368,0x2620 ,0xea70,0x3c42 ,
0x1112,0x3daf ,0x32ef,0x26c1 ,0xe83c,0x3b6d ,
0x0f8d,0x3e15 ,0x3274,0x2760 ,0xe611,0x3a82 ,
0x0e06,0x3e72 ,0x31f8,0x27fe ,0xe3ee,0x3984 ,
0x0c7c,0x3ec5 ,0x3179,0x289a ,0xe1d5,0x3871 ,
0x0af1,0x3f0f ,0x30f9,0x2935 ,0xdfc6,0x374b ,
0x0964,0x3f4f ,0x3076,0x29ce ,0xddc3,0x3612 ,
0x07d6,0x3f85 ,0x2ff2,0x2a65 ,0xdbcb,0x34c6 ,
0x0646,0x3fb1 ,0x2f6c,0x2afb ,0xd9e0,0x3368 ,
0x04b5,0x3fd4 ,0x2ee4,0x2b8f ,0xd802,0x31f8 ,
0x0324,0x3fec ,0x2e5a,0x2c21 ,0xd632,0x3076 ,
0x0192,0x3ffb ,0x2dcf,0x2cb2 ,0xd471,0x2ee4 ,
0x0000,0x4000 ,0x2d41,0x2d41 ,0xd2bf,0x2d41 ,
0xfe6e,0x3ffb ,0x2cb2,0x2dcf ,0xd11c,0x2b8f ,
0xfcdc,0x3fec ,0x2c21,0x2e5a ,0xcf8a,0x29ce ,
0xfb4b,0x3fd4 ,0x2b8f,0x2ee4 ,0xce08,0x27fe ,
0xf9ba,0x3fb1 ,0x2afb,0x2f6c ,0xcc98,0x2620 ,
0xf82a,0x3f85 ,0x2a65,0x2ff2 ,0xcb3a,0x2435 ,
0xf69c,0x3f4f ,0x29ce,0x3076 ,0xc9ee,0x223d ,
0xf50f,0x3f0f ,0x2935,0x30f9 ,0xc8b5,0x203a ,
0xf384,0x3ec5 ,0x289a,0x3179 ,0xc78f,0x1e2b ,
0xf1fa,0x3e72 ,0x27fe,0x31f8 ,0xc67c,0x1c12 ,
0xf073,0x3e15 ,0x2760,0x3274 ,0xc57e,0x19ef ,
0xeeee,0x3daf ,0x26c1,0x32ef ,0xc493,0x17c4 ,
0xed6c,0x3d3f ,0x2620,0x3368 ,0xc3be,0x1590 ,
0xebed,0x3cc5 ,0x257e,0x33df ,0xc2fd,0x1354 ,
0xea70,0x3c42 ,0x24da,0x3453 ,0xc251,0x1112 ,
0xe8f7,0x3bb6 ,0x2435,0x34c6 ,0xc1bb,0x0eca ,
0xe782,0x3b21 ,0x238e,0x3537 ,0xc13b,0x0c7c ,
0xe611,0x3a82 ,0x22e7,0x35a5 ,0xc0d0,0x0a2b ,
0xe4a3,0x39db ,0x223d,0x3612 ,0xc07b,0x07d6 ,
0xe33a,0x392b ,0x2193,0x367d ,0xc03c,0x057e ,
0xe1d5,0x3871 ,0x20e7,0x36e5 ,0xc014,0x0324 ,
0xe074,0x37b0 ,0x203a,0x374b ,0xc001,0x00c9 ,
0xdf19,0x36e5 ,0x1f8c,0x37b0 ,0xc005,0xfe6e ,
0xddc3,0x3612 ,0x1edc,0x3812 ,0xc01f,0xfc13 ,
0xdc72,0x3537 ,0x1e2b,0x3871 ,0xc04f,0xf9ba ,
0xdb26,0x3453 ,0x1d79,0x38cf ,0xc095,0xf763 ,
0xd9e0,0x3368 ,0x1cc6,0x392b ,0xc0f1,0xf50f ,
0xd8a0,0x3274 ,0x1c12,0x3984 ,0xc163,0xf2bf ,
0xd766,0x3179 ,0x1b5d,0x39db ,0xc1eb,0xf073 ,
0xd632,0x3076 ,0x1aa7,0x3a30 ,0xc288,0xee2d ,
0xd505,0x2f6c ,0x19ef,0x3a82 ,0xc33b,0xebed ,
0xd3df,0x2e5a ,0x1937,0x3ad3 ,0xc403,0xe9b4 ,
0xd2bf,0x2d41 ,0x187e,0x3b21 ,0xc4df,0xe782 ,
0xd1a6,0x2c21 ,0x17c4,0x3b6d ,0xc5d0,0xe559 ,
0xd094,0x2afb ,0x1709,0x3bb6 ,0xc6d5,0xe33a ,
0xcf8a,0x29ce ,0x164c,0x3bfd ,0xc7ee,0xe124 ,
0xce87,0x289a ,0x1590,0x3c42 ,0xc91b,0xdf19 ,
0xcd8c,0x2760 ,0x14d2,0x3c85 ,0xca5b,0xdd19 ,
0xcc98,0x2620 ,0x1413,0x3cc5 ,0xcbad,0xdb26 ,
0xcbad,0x24da ,0x1354,0x3d03 ,0xcd11,0xd93f ,
0xcac9,0x238e ,0x1294,0x3d3f ,0xce87,0xd766 ,
0xc9ee,0x223d ,0x11d3,0x3d78 ,0xd00e,0xd59b ,
0xc91b,0x20e7 ,0x1112,0x3daf ,0xd1a6,0xd3df ,
0xc850,0x1f8c ,0x1050,0x3de3 ,0xd34e,0xd231 ,
0xc78f,0x1e2b ,0x0f8d,0x3e15 ,0xd505,0xd094 ,
0xc6d5,0x1cc6 ,0x0eca,0x3e45 ,0xd6cb,0xcf07 ,
0xc625,0x1b5d ,0x0e06,0x3e72 ,0xd8a0,0xcd8c ,
0xc57e,0x19ef ,0x0d41,0x3e9d ,0xda82,0xcc21 ,
0xc4df,0x187e ,0x0c7c,0x3ec5 ,0xdc72,0xcac9 ,
0xc44a,0x1709 ,0x0bb7,0x3eeb ,0xde6d,0xc983 ,
0xc3be,0x1590 ,0x0af1,0x3f0f ,0xe074,0xc850 ,
0xc33b,0x1413 ,0x0a2b,0x3f30 ,0xe287,0xc731 ,
0xc2c1,0x1294 ,0x0964,0x3f4f ,0xe4a3,0xc625 ,
0xc251,0x1112 ,0x089d,0x3f6b ,0xe6c9,0xc52d ,
0xc1eb,0x0f8d ,0x07d6,0x3f85 ,0xe8f7,0xc44a ,
0xc18e,0x0e06 ,0x070e,0x3f9c ,0xeb2e,0xc37b ,
0xc13b,0x0c7c ,0x0646,0x3fb1 ,0xed6c,0xc2c1 ,
0xc0f1,0x0af1 ,0x057e,0x3fc4 ,0xefb0,0xc21d ,
0xc0b1,0x0964 ,0x04b5,0x3fd4 ,0xf1fa,0xc18e ,
0xc07b,0x07d6 ,0x03ed,0x3fe1 ,0xf449,0xc115 ,
0xc04f,0x0646 ,0x0324,0x3fec ,0xf69c,0xc0b1 ,
0xc02c,0x04b5 ,0x025b,0x3ff5 ,0xf8f2,0xc064 ,
0xc014,0x0324 ,0x0192,0x3ffb ,0xfb4b,0xc02c ,
0xc005,0x0192 ,0x00c9,0x3fff ,0xfda5,0xc00b ,
0x4000,0x0000 ,0x4000,0x0065 ,0x3fff,0x00c9 ,
0x3ffd,0x012e ,0x3ffb,0x0192 ,0x3ff8,0x01f7 ,
0x3ff5,0x025b ,0x3ff1,0x02c0 ,0x3fec,0x0324 ,
0x3fe7,0x0388 ,0x3fe1,0x03ed ,0x3fdb,0x0451 ,
0x3fd4,0x04b5 ,0x3fcc,0x051a ,0x3fc4,0x057e ,
0x3fbb,0x05e2 ,0x3fb1,0x0646 ,0x3fa7,0x06aa ,
0x3f9c,0x070e ,0x3f91,0x0772 ,0x3f85,0x07d6 ,
0x3f78,0x0839 ,0x3f6b,0x089d ,0x3f5d,0x0901 ,
0x3f4f,0x0964 ,0x3f40,0x09c7 ,0x3f30,0x0a2b ,
0x3f20,0x0a8e ,0x3f0f,0x0af1 ,0x3efd,0x0b54 ,
0x3eeb,0x0bb7 ,0x3ed8,0x0c1a ,0x3ec5,0x0c7c ,
0x3eb1,0x0cdf ,0x3e9d,0x0d41 ,0x3e88,0x0da4 ,
0x3e72,0x0e06 ,0x3e5c,0x0e68 ,0x3e45,0x0eca ,
0x3e2d,0x0f2b ,0x3e15,0x0f8d ,0x3dfc,0x0fee ,
0x3de3,0x1050 ,0x3dc9,0x10b1 ,0x3daf,0x1112 ,
0x3d93,0x1173 ,0x3d78,0x11d3 ,0x3d5b,0x1234 ,
0x3d3f,0x1294 ,0x3d21,0x12f4 ,0x3d03,0x1354 ,
0x3ce4,0x13b4 ,0x3cc5,0x1413 ,0x3ca5,0x1473 ,
0x3c85,0x14d2 ,0x3c64,0x1531 ,0x3c42,0x1590 ,
0x3c20,0x15ee ,0x3bfd,0x164c ,0x3bda,0x16ab ,
0x3bb6,0x1709 ,0x3b92,0x1766 ,0x3b6d,0x17c4 ,
0x3b47,0x1821 ,0x3b21,0x187e ,0x3afa,0x18db ,
0x3ad3,0x1937 ,0x3aab,0x1993 ,0x3a82,0x19ef ,
0x3a59,0x1a4b ,0x3a30,0x1aa7 ,0x3a06,0x1b02 ,
0x39db,0x1b5d ,0x39b0,0x1bb8 ,0x3984,0x1c12 ,
0x3958,0x1c6c ,0x392b,0x1cc6 ,0x38fd,0x1d20 ,
0x38cf,0x1d79 ,0x38a1,0x1dd3 ,0x3871,0x1e2b ,
0x3842,0x1e84 ,0x3812,0x1edc ,0x37e1,0x1f34 ,
0x37b0,0x1f8c ,0x377e,0x1fe3 ,0x374b,0x203a ,
0x3718,0x2091 ,0x36e5,0x20e7 ,0x36b1,0x213d ,
0x367d,0x2193 ,0x3648,0x21e8 ,0x3612,0x223d ,
0x35dc,0x2292 ,0x35a5,0x22e7 ,0x356e,0x233b ,
0x3537,0x238e ,0x34ff,0x23e2 ,0x34c6,0x2435 ,
0x348d,0x2488 ,0x3453,0x24da ,0x3419,0x252c ,
0x33df,0x257e ,0x33a3,0x25cf ,0x3368,0x2620 ,
0x332c,0x2671 ,0x32ef,0x26c1 ,0x32b2,0x2711 ,
0x3274,0x2760 ,0x3236,0x27af ,0x31f8,0x27fe ,
0x31b9,0x284c ,0x3179,0x289a ,0x3139,0x28e7 ,
0x30f9,0x2935 ,0x30b8,0x2981 ,0x3076,0x29ce ,
0x3034,0x2a1a ,0x2ff2,0x2a65 ,0x2faf,0x2ab0 ,
0x2f6c,0x2afb ,0x2f28,0x2b45 ,0x2ee4,0x2b8f ,
0x2e9f,0x2bd8 ,0x2e5a,0x2c21 ,0x2e15,0x2c6a ,
0x2dcf,0x2cb2 ,0x2d88,0x2cfa ,0x2d41,0x2d41 ,
0x2cfa,0x2d88 ,0x2cb2,0x2dcf ,0x2c6a,0x2e15 ,
0x2c21,0x2e5a ,0x2bd8,0x2e9f ,0x2b8f,0x2ee4 ,
0x2b45,0x2f28 ,0x2afb,0x2f6c ,0x2ab0,0x2faf ,
0x2a65,0x2ff2 ,0x2a1a,0x3034 ,0x29ce,0x3076 ,
0x2981,0x30b8 ,0x2935,0x30f9 ,0x28e7,0x3139 ,
0x289a,0x3179 ,0x284c,0x31b9 ,0x27fe,0x31f8 ,
0x27af,0x3236 ,0x2760,0x3274 ,0x2711,0x32b2 ,
0x26c1,0x32ef ,0x2671,0x332c ,0x2620,0x3368 ,
0x25cf,0x33a3 ,0x257e,0x33df ,0x252c,0x3419 ,
0x24da,0x3453 ,0x2488,0x348d ,0x2435,0x34c6 ,
0x23e2,0x34ff ,0x238e,0x3537 ,0x233b,0x356e ,
0x22e7,0x35a5 ,0x2292,0x35dc ,0x223d,0x3612 ,
0x21e8,0x3648 ,0x2193,0x367d ,0x213d,0x36b1 ,
0x20e7,0x36e5 ,0x2091,0x3718 ,0x203a,0x374b ,
0x1fe3,0x377e ,0x1f8c,0x37b0 ,0x1f34,0x37e1 ,
0x1edc,0x3812 ,0x1e84,0x3842 ,0x1e2b,0x3871 ,
0x1dd3,0x38a1 ,0x1d79,0x38cf ,0x1d20,0x38fd ,
0x1cc6,0x392b ,0x1c6c,0x3958 ,0x1c12,0x3984 ,
0x1bb8,0x39b0 ,0x1b5d,0x39db ,0x1b02,0x3a06 ,
0x1aa7,0x3a30 ,0x1a4b,0x3a59 ,0x19ef,0x3a82 ,
0x1993,0x3aab ,0x1937,0x3ad3 ,0x18db,0x3afa ,
0x187e,0x3b21 ,0x1821,0x3b47 ,0x17c4,0x3b6d ,
0x1766,0x3b92 ,0x1709,0x3bb6 ,0x16ab,0x3bda ,
0x164c,0x3bfd ,0x15ee,0x3c20 ,0x1590,0x3c42 ,
0x1531,0x3c64 ,0x14d2,0x3c85 ,0x1473,0x3ca5 ,
0x1413,0x3cc5 ,0x13b4,0x3ce4 ,0x1354,0x3d03 ,
0x12f4,0x3d21 ,0x1294,0x3d3f ,0x1234,0x3d5b ,
0x11d3,0x3d78 ,0x1173,0x3d93 ,0x1112,0x3daf ,
0x10b1,0x3dc9 ,0x1050,0x3de3 ,0x0fee,0x3dfc ,
0x0f8d,0x3e15 ,0x0f2b,0x3e2d ,0x0eca,0x3e45 ,
0x0e68,0x3e5c ,0x0e06,0x3e72 ,0x0da4,0x3e88 ,
0x0d41,0x3e9d ,0x0cdf,0x3eb1 ,0x0c7c,0x3ec5 ,
0x0c1a,0x3ed8 ,0x0bb7,0x3eeb ,0x0b54,0x3efd ,
0x0af1,0x3f0f ,0x0a8e,0x3f20 ,0x0a2b,0x3f30 ,
0x09c7,0x3f40 ,0x0964,0x3f4f ,0x0901,0x3f5d ,
0x089d,0x3f6b ,0x0839,0x3f78 ,0x07d6,0x3f85 ,
0x0772,0x3f91 ,0x070e,0x3f9c ,0x06aa,0x3fa7 ,
0x0646,0x3fb1 ,0x05e2,0x3fbb ,0x057e,0x3fc4 ,
0x051a,0x3fcc ,0x04b5,0x3fd4 ,0x0451,0x3fdb ,
0x03ed,0x3fe1 ,0x0388,0x3fe7 ,0x0324,0x3fec ,
0x02c0,0x3ff1 ,0x025b,0x3ff5 ,0x01f7,0x3ff8 ,
0x0192,0x3ffb ,0x012e,0x3ffd ,0x00c9,0x3fff ,
0x0065,0x4000 ,0x0000,0x4000 ,0xff9b,0x4000 ,
0xff37,0x3fff ,0xfed2,0x3ffd ,0xfe6e,0x3ffb ,
0xfe09,0x3ff8 ,0xfda5,0x3ff5 ,0xfd40,0x3ff1 ,
0xfcdc,0x3fec ,0xfc78,0x3fe7 ,0xfc13,0x3fe1 ,
0xfbaf,0x3fdb ,0xfb4b,0x3fd4 ,0xfae6,0x3fcc ,
0xfa82,0x3fc4 ,0xfa1e,0x3fbb ,0xf9ba,0x3fb1 ,
0xf956,0x3fa7 ,0xf8f2,0x3f9c ,0xf88e,0x3f91 ,
0xf82a,0x3f85 ,0xf7c7,0x3f78 ,0xf763,0x3f6b ,
0xf6ff,0x3f5d ,0xf69c,0x3f4f ,0xf639,0x3f40 ,
0xf5d5,0x3f30 ,0xf572,0x3f20 ,0xf50f,0x3f0f ,
0xf4ac,0x3efd ,0xf449,0x3eeb ,0xf3e6,0x3ed8 ,
0xf384,0x3ec5 ,0xf321,0x3eb1 ,0xf2bf,0x3e9d ,
0xf25c,0x3e88 ,0xf1fa,0x3e72 ,0xf198,0x3e5c ,
0xf136,0x3e45 ,0xf0d5,0x3e2d ,0xf073,0x3e15 ,
0xf012,0x3dfc ,0xefb0,0x3de3 ,0xef4f,0x3dc9 ,
0xeeee,0x3daf ,0xee8d,0x3d93 ,0xee2d,0x3d78 ,
0xedcc,0x3d5b ,0xed6c,0x3d3f ,0xed0c,0x3d21 ,
0xecac,0x3d03 ,0xec4c,0x3ce4 ,0xebed,0x3cc5 ,
0xeb8d,0x3ca5 ,0xeb2e,0x3c85 ,0xeacf,0x3c64 ,
0xea70,0x3c42 ,0xea12,0x3c20 ,0xe9b4,0x3bfd ,
0xe955,0x3bda ,0xe8f7,0x3bb6 ,0xe89a,0x3b92 ,
0xe83c,0x3b6d ,0xe7df,0x3b47 ,0xe782,0x3b21 ,
0xe725,0x3afa ,0xe6c9,0x3ad3 ,0xe66d,0x3aab ,
0xe611,0x3a82 ,0xe5b5,0x3a59 ,0xe559,0x3a30 ,
0xe4fe,0x3a06 ,0xe4a3,0x39db ,0xe448,0x39b0 ,
0xe3ee,0x3984 ,0xe394,0x3958 ,0xe33a,0x392b ,
0xe2e0,0x38fd ,0xe287,0x38cf ,0xe22d,0x38a1 ,
0xe1d5,0x3871 ,0xe17c,0x3842 ,0xe124,0x3812 ,
0xe0cc,0x37e1 ,0xe074,0x37b0 ,0xe01d,0x377e ,
0xdfc6,0x374b ,0xdf6f,0x3718 ,0xdf19,0x36e5 ,
0xdec3,0x36b1 ,0xde6d,0x367d ,0xde18,0x3648 ,
0xddc3,0x3612 ,0xdd6e,0x35dc ,0xdd19,0x35a5 ,
0xdcc5,0x356e ,0xdc72,0x3537 ,0xdc1e,0x34ff ,
0xdbcb,0x34c6 ,0xdb78,0x348d ,0xdb26,0x3453 ,
0xdad4,0x3419 ,0xda82,0x33df ,0xda31,0x33a3 ,
0xd9e0,0x3368 ,0xd98f,0x332c ,0xd93f,0x32ef ,
0xd8ef,0x32b2 ,0xd8a0,0x3274 ,0xd851,0x3236 ,
0xd802,0x31f8 ,0xd7b4,0x31b9 ,0xd766,0x3179 ,
0xd719,0x3139 ,0xd6cb,0x30f9 ,0xd67f,0x30b8 ,
0xd632,0x3076 ,0xd5e6,0x3034 ,0xd59b,0x2ff2 ,
0xd550,0x2faf ,0xd505,0x2f6c ,0xd4bb,0x2f28 ,
0xd471,0x2ee4 ,0xd428,0x2e9f ,0xd3df,0x2e5a ,
0xd396,0x2e15 ,0xd34e,0x2dcf ,0xd306,0x2d88 ,
0xd2bf,0x2d41 ,0xd278,0x2cfa ,0xd231,0x2cb2 ,
0xd1eb,0x2c6a ,0xd1a6,0x2c21 ,0xd161,0x2bd8 ,
0xd11c,0x2b8f ,0xd0d8,0x2b45 ,0xd094,0x2afb ,
0xd051,0x2ab0 ,0xd00e,0x2a65 ,0xcfcc,0x2a1a ,
0xcf8a,0x29ce ,0xcf48,0x2981 ,0xcf07,0x2935 ,
0xcec7,0x28e7 ,0xce87,0x289a ,0xce47,0x284c ,
0xce08,0x27fe ,0xcdca,0x27af ,0xcd8c,0x2760 ,
0xcd4e,0x2711 ,0xcd11,0x26c1 ,0xccd4,0x2671 ,
0xcc98,0x2620 ,0xcc5d,0x25cf ,0xcc21,0x257e ,
0xcbe7,0x252c ,0xcbad,0x24da ,0xcb73,0x2488 ,
0xcb3a,0x2435 ,0xcb01,0x23e2 ,0xcac9,0x238e ,
0xca92,0x233b ,0xca5b,0x22e7 ,0xca24,0x2292 ,
0xc9ee,0x223d ,0xc9b8,0x21e8 ,0xc983,0x2193 ,
0xc94f,0x213d ,0xc91b,0x20e7 ,0xc8e8,0x2091 ,
0xc8b5,0x203a ,0xc882,0x1fe3 ,0xc850,0x1f8c ,
0xc81f,0x1f34 ,0xc7ee,0x1edc ,0xc7be,0x1e84 ,
0xc78f,0x1e2b ,0xc75f,0x1dd3 ,0xc731,0x1d79 ,
0xc703,0x1d20 ,0xc6d5,0x1cc6 ,0xc6a8,0x1c6c ,
0xc67c,0x1c12 ,0xc650,0x1bb8 ,0xc625,0x1b5d ,
0xc5fa,0x1b02 ,0xc5d0,0x1aa7 ,0xc5a7,0x1a4b ,
0xc57e,0x19ef ,0xc555,0x1993 ,0xc52d,0x1937 ,
0xc506,0x18db ,0xc4df,0x187e ,0xc4b9,0x1821 ,
0xc493,0x17c4 ,0xc46e,0x1766 ,0xc44a,0x1709 ,
0xc426,0x16ab ,0xc403,0x164c ,0xc3e0,0x15ee ,
0xc3be,0x1590 ,0xc39c,0x1531 ,0xc37b,0x14d2 ,
0xc35b,0x1473 ,0xc33b,0x1413 ,0xc31c,0x13b4 ,
0xc2fd,0x1354 ,0xc2df,0x12f4 ,0xc2c1,0x1294 ,
0xc2a5,0x1234 ,0xc288,0x11d3 ,0xc26d,0x1173 ,
0xc251,0x1112 ,0xc237,0x10b1 ,0xc21d,0x1050 ,
0xc204,0x0fee ,0xc1eb,0x0f8d ,0xc1d3,0x0f2b ,
0xc1bb,0x0eca ,0xc1a4,0x0e68 ,0xc18e,0x0e06 ,
0xc178,0x0da4 ,0xc163,0x0d41 ,0xc14f,0x0cdf ,
0xc13b,0x0c7c ,0xc128,0x0c1a ,0xc115,0x0bb7 ,
0xc103,0x0b54 ,0xc0f1,0x0af1 ,0xc0e0,0x0a8e ,
0xc0d0,0x0a2b ,0xc0c0,0x09c7 ,0xc0b1,0x0964 ,
0xc0a3,0x0901 ,0xc095,0x089d ,0xc088,0x0839 ,
0xc07b,0x07d6 ,0xc06f,0x0772 ,0xc064,0x070e ,
0xc059,0x06aa ,0xc04f,0x0646 ,0xc045,0x05e2 ,
0xc03c,0x057e ,0xc034,0x051a ,0xc02c,0x04b5 ,
0xc025,0x0451 ,0xc01f,0x03ed ,0xc019,0x0388 ,
0xc014,0x0324 ,0xc00f,0x02c0 ,0xc00b,0x025b ,
0xc008,0x01f7 ,0xc005,0x0192 ,0xc003,0x012e ,
0xc001,0x00c9 ,0xc000,0x0065 };
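
The pairs in t_Q14R_8 appear to be (cos, sin) twiddle factors in Q14; for example the first pair of the second row, 0x3b21,0x187e, is 15137,6270, i.e. round(16384*cos(pi/8)) and round(16384*sin(pi/8)). A hedged verification sketch (not part of the original file):

/* Hypothetical check of a few t_Q14R_8 entries, assuming plain Q14 rounding. */
#include <assert.h>
#include <math.h>

static void CheckQ14Twiddles(void)
{
    const double kPi = 3.14159265358979323846;
    assert(lround(cos(kPi / 8.0) * 16384.0) == 0x3b21);   /* 15137 */
    assert(lround(sin(kPi / 8.0) * 16384.0) == 0x187e);   /*  6270 */
    assert(lround(cos(kPi / 16.0) * 16384.0) == 0x3ec5);  /* 16069 */
    assert(lround(sin(kPi / 16.0) * 16384.0) == 0x0c7c);  /*  3196 */
}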

@ -1,27 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the Q14 radix-2 tables used in ARM9E optimization routines.
*
*/
extern const unsigned short t_Q14S_rad8[2];
const unsigned short t_Q14S_rad8[2] = { 0x0000,0x2d41 };
//extern const int t_Q30S_rad8[2];
//const int t_Q30S_rad8[2] = { 0x00000000,0x2d413ccd };
extern const unsigned short t_Q14R_rad8[2];
const unsigned short t_Q14R_rad8[2] = { 0x2d41,0x2d41 };
//extern const int t_Q30R_rad8[2];
//const int t_Q30R_rad8[2] = { 0x2d413ccd,0x2d413ccd };
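
Similarly, 0x2d41 is 11585 = round(sqrt(2)/2 * 2^14), and the commented-out Q30 value 0x2d413ccd is 759250125 = round(sqrt(2)/2 * 2^30). A hedged check (not part of the original file):

/* Hypothetical check of the sqrt(2)/2 constants above. */
#include <assert.h>
#include <math.h>

static void CheckRad8Constants(void)
{
    assert(lround(sqrt(0.5) * 16384.0) == 0x2d41);            /* Q14 */
    assert(lround(sqrt(0.5) * 1073741824.0) == 0x2d413ccdL);  /* Q30 */
}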

@ -1,494 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file contains the SPL unit_test.
*
*/
#include "unit_test.h"
#include "signal_processing_library.h"
class SplEnvironment : public ::testing::Environment {
public:
virtual void SetUp() {
}
virtual void TearDown() {
}
};
SplTest::SplTest()
{
}
void SplTest::SetUp() {
}
void SplTest::TearDown() {
}
TEST_F(SplTest, MacroTest) {
// Macros with inputs.
int A = 10;
int B = 21;
int a = -3;
int b = WEBRTC_SPL_WORD32_MAX;
int nr = 2;
int d_ptr1 = 0;
int d_ptr2 = 0;
EXPECT_EQ(10, WEBRTC_SPL_MIN(A, B));
EXPECT_EQ(21, WEBRTC_SPL_MAX(A, B));
EXPECT_EQ(3, WEBRTC_SPL_ABS_W16(a));
EXPECT_EQ(3, WEBRTC_SPL_ABS_W32(a));
EXPECT_EQ(0, WEBRTC_SPL_GET_BYTE(&B, nr));
WEBRTC_SPL_SET_BYTE(&d_ptr2, 1, nr);
EXPECT_EQ(65536, d_ptr2);
EXPECT_EQ(-63, WEBRTC_SPL_MUL(a, B));
EXPECT_EQ(-2147483645, WEBRTC_SPL_MUL(a, b));
EXPECT_EQ(-2147483645, WEBRTC_SPL_UMUL(a, b));
b = WEBRTC_SPL_WORD16_MAX >> 1;
EXPECT_EQ(65535, WEBRTC_SPL_UMUL_RSFT16(a, b));
EXPECT_EQ(1073627139, WEBRTC_SPL_UMUL_16_16(a, b));
EXPECT_EQ(16382, WEBRTC_SPL_UMUL_16_16_RSFT16(a, b));
EXPECT_EQ(-49149, WEBRTC_SPL_UMUL_32_16(a, b));
EXPECT_EQ(65535, WEBRTC_SPL_UMUL_32_16_RSFT16(a, b));
EXPECT_EQ(-49149, WEBRTC_SPL_MUL_16_U16(a, b));
a = b;
b = -3;
EXPECT_EQ(-5461, WEBRTC_SPL_DIV(a, b));
EXPECT_EQ(0, WEBRTC_SPL_UDIV(a, b));
EXPECT_EQ(-1, WEBRTC_SPL_MUL_16_32_RSFT16(a, b));
EXPECT_EQ(-1, WEBRTC_SPL_MUL_16_32_RSFT15(a, b));
EXPECT_EQ(-3, WEBRTC_SPL_MUL_16_32_RSFT14(a, b));
EXPECT_EQ(-24, WEBRTC_SPL_MUL_16_32_RSFT11(a, b));
int a32 = WEBRTC_SPL_WORD32_MAX;
int a32a = (WEBRTC_SPL_WORD32_MAX >> 16);
int a32b = (WEBRTC_SPL_WORD32_MAX & 0x0000ffff);
EXPECT_EQ(5, WEBRTC_SPL_MUL_32_32_RSFT32(a32a, a32b, A));
EXPECT_EQ(5, WEBRTC_SPL_MUL_32_32_RSFT32BI(a32, A));
EXPECT_EQ(-49149, WEBRTC_SPL_MUL_16_16(a, b));
EXPECT_EQ(-12288, WEBRTC_SPL_MUL_16_16_RSFT(a, b, 2));
EXPECT_EQ(-12287, WEBRTC_SPL_MUL_16_16_RSFT_WITH_ROUND(a, b, 2));
EXPECT_EQ(-1, WEBRTC_SPL_MUL_16_16_RSFT_WITH_FIXROUND(a, b));
EXPECT_EQ(16380, WEBRTC_SPL_ADD_SAT_W32(a, b));
EXPECT_EQ(21, WEBRTC_SPL_SAT(a, A, B));
EXPECT_EQ(21, WEBRTC_SPL_SAT(a, B, A));
EXPECT_EQ(-49149, WEBRTC_SPL_MUL_32_16(a, b));
EXPECT_EQ(16386, WEBRTC_SPL_SUB_SAT_W32(a, b));
EXPECT_EQ(16380, WEBRTC_SPL_ADD_SAT_W16(a, b));
EXPECT_EQ(16386, WEBRTC_SPL_SUB_SAT_W16(a, b));
EXPECT_TRUE(WEBRTC_SPL_IS_NEG(b));
// Shifting with negative numbers allowed
// Positive means left shift
EXPECT_EQ(32766, WEBRTC_SPL_SHIFT_W16(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_SHIFT_W32(a, 1));
// Shifting with negative numbers not allowed
// We cannot do casting here due to signed/unsigned problem
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_W16(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_W16(a, 1));
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_W32(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_W32(a, 1));
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_U16(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_U16(a, 1));
EXPECT_EQ(8191, WEBRTC_SPL_RSHIFT_U32(a, 1));
EXPECT_EQ(32766, WEBRTC_SPL_LSHIFT_U32(a, 1));
EXPECT_EQ(1470, WEBRTC_SPL_RAND(A));
}
TEST_F(SplTest, InlineTest) {
WebRtc_Word16 a = 121;
WebRtc_Word16 b = -17;
WebRtc_Word32 A = 111121;
WebRtc_Word32 B = -1711;
char bVersion[8];
EXPECT_EQ(104, WebRtcSpl_AddSatW16(a, b));
EXPECT_EQ(138, WebRtcSpl_SubSatW16(a, b));
EXPECT_EQ(109410, WebRtcSpl_AddSatW32(A, B));
EXPECT_EQ(112832, WebRtcSpl_SubSatW32(A, B));
EXPECT_EQ(17, WebRtcSpl_GetSizeInBits(A));
EXPECT_EQ(14, WebRtcSpl_NormW32(A));
EXPECT_EQ(4, WebRtcSpl_NormW16(B));
EXPECT_EQ(15, WebRtcSpl_NormU32(A));
EXPECT_EQ(0, WebRtcSpl_get_version(bVersion, 8));
}
TEST_F(SplTest, MathOperationsTest) {
int A = 117;
WebRtc_Word32 num = 117;
WebRtc_Word32 den = -5;
WebRtc_UWord16 denU = 5;
EXPECT_EQ(10, WebRtcSpl_Sqrt(A));
EXPECT_EQ(10, WebRtcSpl_SqrtFloor(A));
EXPECT_EQ(-91772805, WebRtcSpl_DivResultInQ31(den, num));
EXPECT_EQ(-23, WebRtcSpl_DivW32W16ResW16(num, (WebRtc_Word16)den));
EXPECT_EQ(-23, WebRtcSpl_DivW32W16(num, (WebRtc_Word16)den));
EXPECT_EQ(23, WebRtcSpl_DivU32U16(num, denU));
EXPECT_EQ(0, WebRtcSpl_DivW32HiLow(128, 0, 256));
}
TEST_F(SplTest, BasicArrayOperationsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, 1100};
int Bs[] = {2, 6, 66, 550};
WebRtc_UWord8 b8[kVectorSize];
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_UWord8 bTmp8[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
WebRtc_Word32 bTmp32[kVectorSize];
WebRtcSpl_MemSetW16(b16, 3, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(3, b16[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_ZerosArrayW16(b16, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(0, b16[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_OnesArrayW16(b16, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(1, b16[kk]);
}
WebRtcSpl_MemSetW32(b32, 3, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(3, b32[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_ZerosArrayW32(b32, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(0, b32[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_OnesArrayW32(b32, kVectorSize));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(1, b32[kk]);
}
for (int kk = 0; kk < kVectorSize; ++kk) {
bTmp8[kk] = (WebRtc_Word8)kk;
bTmp16[kk] = (WebRtc_Word16)kk;
bTmp32[kk] = (WebRtc_Word32)kk;
}
WEBRTC_SPL_MEMCPY_W8(b8, bTmp8, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(b8[kk], bTmp8[kk]);
}
WEBRTC_SPL_MEMCPY_W16(b16, bTmp16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(b16[kk], bTmp16[kk]);
}
// WEBRTC_SPL_MEMCPY_W32(b32, bTmp32, kVectorSize);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(b32[kk], bTmp32[kk]);
// }
EXPECT_EQ(2, WebRtcSpl_CopyFromEndW16(b16, kVectorSize, 2, bTmp16));
for (int kk = 0; kk < 2; ++kk) {
EXPECT_EQ(kk+2, bTmp16[kk]);
}
for (int kk = 0; kk < kVectorSize; ++kk) {
b32[kk] = B[kk];
b16[kk] = (WebRtc_Word16)B[kk];
}
WebRtcSpl_VectorBitShiftW32ToW16(bTmp16, kVectorSize, b32, 1);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]>>1), bTmp16[kk]);
}
WebRtcSpl_VectorBitShiftW16(bTmp16, kVectorSize, b16, 1);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]>>1), bTmp16[kk]);
}
WebRtcSpl_VectorBitShiftW32(bTmp32, kVectorSize, b32, 1);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]>>1), bTmp32[kk]);
}
WebRtcSpl_MemCpyReversedOrder(&bTmp16[3], b16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(b16[3-kk], bTmp16[kk]);
}
}
TEST_F(SplTest, MinMaxOperationsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, -1100};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = B[kk];
b32[kk] = B[kk];
}
EXPECT_EQ(1100, WebRtcSpl_MaxAbsValueW16(b16, kVectorSize));
EXPECT_EQ(1100, WebRtcSpl_MaxAbsValueW32(b32, kVectorSize));
EXPECT_EQ(133, WebRtcSpl_MaxValueW16(b16, kVectorSize));
EXPECT_EQ(133, WebRtcSpl_MaxValueW32(b32, kVectorSize));
EXPECT_EQ(3, WebRtcSpl_MaxAbsIndexW16(b16, kVectorSize));
EXPECT_EQ(2, WebRtcSpl_MaxIndexW16(b16, kVectorSize));
EXPECT_EQ(2, WebRtcSpl_MaxIndexW32(b32, kVectorSize));
EXPECT_EQ(-1100, WebRtcSpl_MinValueW16(b16, kVectorSize));
EXPECT_EQ(-1100, WebRtcSpl_MinValueW32(b32, kVectorSize));
EXPECT_EQ(3, WebRtcSpl_MinIndexW16(b16, kVectorSize));
EXPECT_EQ(3, WebRtcSpl_MinIndexW32(b32, kVectorSize));
EXPECT_EQ(0, WebRtcSpl_GetScalingSquare(b16, kVectorSize, 1));
}
TEST_F(SplTest, VectorOperationsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, 1100};
WebRtc_Word16 a16[kVectorSize];
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
for (int kk = 0; kk < kVectorSize; ++kk) {
a16[kk] = B[kk];
b16[kk] = B[kk];
}
WebRtcSpl_AffineTransformVector(bTmp16, b16, 3, 7, 2, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]*3+7)>>2, bTmp16[kk]);
}
WebRtcSpl_ScaleAndAddVectorsWithRound(b16, 3, b16, 2, 2, bTmp16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((B[kk]*3+B[kk]*2+2)>>2, bTmp16[kk]);
}
WebRtcSpl_AddAffineVectorToVector(bTmp16, b16, 3, 7, 2, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(((B[kk]*3+B[kk]*2+2)>>2)+((b16[kk]*3+7)>>2), bTmp16[kk]);
}
WebRtcSpl_CrossCorrelation(b32, b16, bTmp16, kVectorSize, 2, 2, 0);
for (int kk = 0; kk < 2; ++kk) {
EXPECT_EQ(614236, b32[kk]);
}
// EXPECT_EQ(, WebRtcSpl_DotProduct(b16, bTmp16, 4));
EXPECT_EQ(306962, WebRtcSpl_DotProductWithScale(b16, b16, kVectorSize, 2));
WebRtcSpl_ScaleVector(b16, bTmp16, 13, kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((b16[kk]*13)>>2, bTmp16[kk]);
}
WebRtcSpl_ScaleVectorWithSat(b16, bTmp16, 13, kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((b16[kk]*13)>>2, bTmp16[kk]);
}
WebRtcSpl_ScaleAndAddVectors(a16, 13, 2, b16, 7, 2, bTmp16, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(((a16[kk]*13)>>2)+((b16[kk]*7)>>2), bTmp16[kk]);
}
WebRtcSpl_AddVectorsAndShift(bTmp16, a16, b16, kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(B[kk] >> 1, bTmp16[kk]);
}
WebRtcSpl_ReverseOrderMultArrayElements(bTmp16, a16, &b16[3], kVectorSize, 2);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((a16[kk]*b16[3-kk])>>2, bTmp16[kk]);
}
WebRtcSpl_ElementwiseVectorMult(bTmp16, a16, b16, kVectorSize, 6);
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ((a16[kk]*b16[kk])>>6, bTmp16[kk]);
}
WebRtcSpl_SqrtOfOneMinusXSquared(b16, kVectorSize, bTmp16);
for (int kk = 0; kk < kVectorSize - 1; ++kk) {
EXPECT_EQ(32767, bTmp16[kk]);
}
EXPECT_EQ(32749, bTmp16[kVectorSize - 1]);
}
TEST_F(SplTest, EstimatorsTest) {
const int kVectorSize = 4;
int B[] = {4, 12, 133, 1100};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = B[kk];
b32[kk] = B[kk];
}
EXPECT_EQ(0, WebRtcSpl_LevinsonDurbin(b32, b16, bTmp16, 2));
}
TEST_F(SplTest, FilterTest) {
const int kVectorSize = 4;
WebRtc_Word16 A[] = {1, 2, 33, 100};
WebRtc_Word16 A5[] = {1, 2, 33, 100, -5};
WebRtc_Word16 B[] = {4, 12, 133, 110};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
WebRtc_Word16 bTmp16Low[kVectorSize];
WebRtc_Word16 bState[kVectorSize];
WebRtc_Word16 bStateLow[kVectorSize];
WebRtcSpl_ZerosArrayW16(bState, kVectorSize);
WebRtcSpl_ZerosArrayW16(bStateLow, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = A[kk];
}
// MA filters
WebRtcSpl_FilterMAFastQ12(b16, bTmp16, B, kVectorSize, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
//EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
}
// AR filters
WebRtcSpl_FilterARFastQ12(b16, bTmp16, A, kVectorSize, kVectorSize);
for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
}
EXPECT_EQ(kVectorSize, WebRtcSpl_FilterAR(A5,
5,
b16,
kVectorSize,
bState,
kVectorSize,
bStateLow,
kVectorSize,
bTmp16,
bTmp16Low,
kVectorSize));
}
TEST_F(SplTest, RandTest) {
const int kVectorSize = 4;
WebRtc_Word16 BU[] = {3653, 12446, 8525, 30691};
WebRtc_Word16 BN[] = {3459, -11689, -258, -3738};
WebRtc_Word16 b16[kVectorSize];
WebRtc_UWord32 bSeed = 100000;
EXPECT_EQ(464449057, WebRtcSpl_IncreaseSeed(&bSeed));
EXPECT_EQ(31565, WebRtcSpl_RandU(&bSeed));
EXPECT_EQ(-9786, WebRtcSpl_RandN(&bSeed));
EXPECT_EQ(kVectorSize, WebRtcSpl_RandUArray(b16, kVectorSize, &bSeed));
for (int kk = 0; kk < kVectorSize; ++kk) {
EXPECT_EQ(BU[kk], b16[kk]);
}
}
TEST_F(SplTest, SignalProcessingTest) {
const int kVectorSize = 4;
int A[] = {1, 2, 33, 100};
WebRtc_Word16 b16[kVectorSize];
WebRtc_Word32 b32[kVectorSize];
WebRtc_Word16 bTmp16[kVectorSize];
WebRtc_Word32 bTmp32[kVectorSize];
int bScale = 0;
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = A[kk];
b32[kk] = A[kk];
}
EXPECT_EQ(2, WebRtcSpl_AutoCorrelation(b16, kVectorSize, 1, bTmp32, &bScale));
WebRtcSpl_ReflCoefToLpc(b16, kVectorSize, bTmp16);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
// }
WebRtcSpl_LpcToReflCoef(bTmp16, kVectorSize, b16);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(a16[kk], b16[kk]);
// }
WebRtcSpl_AutoCorrToReflCoef(b32, kVectorSize, bTmp16);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
// }
WebRtcSpl_GetHanningWindow(bTmp16, kVectorSize);
// for (int kk = 0; kk < kVectorSize; ++kk) {
// EXPECT_EQ(aTmp16[kk], bTmp16[kk]);
// }
for (int kk = 0; kk < kVectorSize; ++kk) {
b16[kk] = A[kk];
}
EXPECT_EQ(11094, WebRtcSpl_Energy(b16, kVectorSize, &bScale));
EXPECT_EQ(0, bScale);
}
TEST_F(SplTest, FFTTest) {
WebRtc_Word16 B[] = {1, 2, 33, 100,
2, 3, 34, 101,
3, 4, 35, 102,
4, 5, 36, 103};
EXPECT_EQ(0, WebRtcSpl_ComplexFFT(B, 3, 1));
// for (int kk = 0; kk < 16; ++kk) {
// EXPECT_EQ(A[kk], B[kk]);
// }
EXPECT_EQ(0, WebRtcSpl_ComplexIFFT(B, 3, 1));
// for (int kk = 0; kk < 16; ++kk) {
// EXPECT_EQ(A[kk], B[kk]);
// }
WebRtcSpl_ComplexBitReverse(B, 3);
for (int kk = 0; kk < 16; ++kk) {
//EXPECT_EQ(A[kk], B[kk]);
}
}
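// Illustrative note (added, not in the original test): B holds 8 interleaved
// complex (re, im) pairs, so the second argument to WebRtcSpl_ComplexFFT and
// WebRtcSpl_ComplexIFFT is presumably the number of stages, i.e. a 2^3 = 8-point
// transform over the 16 values above.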
int main(int argc, char** argv) {
::testing::InitGoogleTest(&argc, argv);
SplEnvironment* env = new SplEnvironment;
::testing::AddGlobalTestEnvironment(env);
return RUN_ALL_TESTS();
}

View File

@ -1,30 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file declares the SplTest test fixture used by the signal
* processing library unit tests.
*
*/
#ifndef WEBRTC_SPL_UNIT_TEST_H_
#define WEBRTC_SPL_UNIT_TEST_H_
#include <gtest/gtest.h>
class SplTest: public ::testing::Test
{
protected:
SplTest();
virtual void SetUp();
virtual void TearDown();
};
#endif // WEBRTC_SPL_UNIT_TEST_H_

View File

@ -1,15 +0,0 @@
noinst_LTLIBRARIES = libvad.la
libvad_la_SOURCES = main/interface/webrtc_vad.h \
main/source/webrtc_vad.c \
main/source/vad_core.c \
main/source/vad_core.h \
main/source/vad_defines.h \
main/source/vad_filterbank.c \
main/source/vad_filterbank.h \
main/source/vad_gmm.c \
main/source/vad_gmm.h \
main/source/vad_sp.c \
main/source/vad_sp.h
libvad_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface

View File

@ -1,159 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the VAD API calls. Specific function calls are given below.
*/
#ifndef WEBRTC_VAD_WEBRTC_VAD_H_
#define WEBRTC_VAD_WEBRTC_VAD_H_
#include "typedefs.h"
typedef struct WebRtcVadInst VadInst;
#ifdef __cplusplus
extern "C"
{
#endif
/****************************************************************************
* WebRtcVad_get_version(...)
*
* This function returns the version number of the code.
*
* Output:
* - version : Pointer to a buffer where the version info will
* be stored.
* Input:
* - size_bytes : Size of the buffer.
*
*/
WebRtc_Word16 WebRtcVad_get_version(char *version, size_t size_bytes);
/****************************************************************************
* WebRtcVad_AssignSize(...)
*
* This function returns the size needed for storing a VAD instance.
*
* Input/Output:
* - size_in_bytes : Pointer to integer where the size is returned
*
* Return value : 0
*/
WebRtc_Word16 WebRtcVad_AssignSize(int *size_in_bytes);
/****************************************************************************
* WebRtcVad_Assign(...)
*
* This function assigns memory for the instance.
*
* Input:
* - vad_inst_addr : Address to where to assign memory
* Output:
* - vad_inst : Pointer to the instance that should be created
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Assign(VadInst **vad_inst, void *vad_inst_addr);
/****************************************************************************
* WebRtcVad_Create(...)
*
* This function creates an instance of the VAD structure
*
* Input:
* - vad_inst : Pointer to VAD instance that should be created
*
* Output:
* - vad_inst : Pointer to created VAD instance
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Create(VadInst **vad_inst);
/****************************************************************************
* WebRtcVad_Free(...)
*
* This function frees the dynamic memory of a specified VAD instance
*
* Input:
* - vad_inst : Pointer to VAD instance that should be freed
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Free(VadInst *vad_inst);
/****************************************************************************
* WebRtcVad_Init(...)
*
* This function initializes a VAD instance
*
* Input:
* - vad_inst : Instance that should be initialized
*
* Output:
* - vad_inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Init(VadInst *vad_inst);
/****************************************************************************
* WebRtcVad_set_mode(...)
*
* This function sets the aggressiveness mode of a VAD instance
*
* Input:
* - vad_inst : VAD instance
* - mode : Aggressiveness setting (0, 1, 2, or 3)
*
* Output:
* - vad_inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_set_mode(VadInst *vad_inst, WebRtc_Word16 mode);
/****************************************************************************
* WebRtcVad_Process(...)
*
* This function performs VAD on the inserted speech frame
*
* Input
* - vad_inst : VAD Instance. Needs to be initiated before call.
* - fs : sampling frequency (Hz): 8000, 16000, or 32000
* - speech_frame : Pointer to speech frame buffer
* - frame_length : Length of speech frame buffer in number of samples
*
* Output:
* - vad_inst : Updated VAD instance
*
* Return value : 1 - Active Voice
* 0 - Non-active Voice
* -1 - Error
*/
WebRtc_Word16 WebRtcVad_Process(VadInst *vad_inst,
WebRtc_Word16 fs,
WebRtc_Word16 *speech_frame,
WebRtc_Word16 frame_length);
#ifdef __cplusplus
}
#endif
#endif // WEBRTC_VAD_WEBRTC_VAD_H_
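/*
 * Illustrative usage sketch (added, not part of the original tree): a minimal,
 * hypothetical helper showing the call sequence for the API declared above,
 * assuming a single 8 kHz, 10 ms frame (80 samples).
 */
#include "webrtc_vad.h"

static WebRtc_Word16 ClassifyFrame(WebRtc_Word16* frame /* 80 samples @ 8 kHz */)
{
    VadInst* vad = NULL;
    WebRtc_Word16 decision;
    if (WebRtcVad_Create(&vad) != 0)
        return -1;
    /* Aggressiveness 2 is an arbitrary choice; valid values are 0..3 per the docs above. */
    if (WebRtcVad_Init(vad) != 0 || WebRtcVad_set_mode(vad, 2) != 0)
    {
        WebRtcVad_Free(vad);
        return -1;
    }
    /* Returns 1 for active voice, 0 for non-active voice, -1 on error. */
    decision = WebRtcVad_Process(vad, 8000, frame, 80);
    WebRtcVad_Free(vad);
    return decision;
}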

View File

@ -1,46 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'vad',
'type': '<(library)',
'dependencies': [
'spl',
],
'include_dirs': [
'../interface',
],
'direct_dependent_settings': {
'include_dirs': [
'../interface',
],
},
'sources': [
'../interface/webrtc_vad.h',
'webrtc_vad.c',
'vad_core.c',
'vad_core.h',
'vad_defines.h',
'vad_filterbank.c',
'vad_filterbank.h',
'vad_gmm.c',
'vad_gmm.h',
'vad_sp.c',
'vad_sp.h',
],
},
],
}
# Local Variables:
# tab-width:2
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=2 shiftwidth=2:

View File

@ -1,723 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the core functionality in VAD.
* For function description, see vad_core.h.
*/
#include "vad_core.h"
#include "signal_processing_library.h"
#include "typedefs.h"
#include "vad_defines.h"
#include "vad_filterbank.h"
#include "vad_gmm.h"
#include "vad_sp.h"
// Spectrum Weighting
static const WebRtc_Word16 kSpectrumWeight[6] = { 6, 8, 10, 12, 14, 16 };
static const WebRtc_Word16 kNoiseUpdateConst = 655; // Q15
static const WebRtc_Word16 kSpeechUpdateConst = 6554; // Q15
static const WebRtc_Word16 kBackEta = 154; // Q8
// Minimum difference between the two models, Q5
static const WebRtc_Word16 kMinimumDifference[6] = {
544, 544, 576, 576, 576, 576 };
// Upper limit of mean value for speech model, Q7
static const WebRtc_Word16 kMaximumSpeech[6] = {
11392, 11392, 11520, 11520, 11520, 11520 };
// Minimum value for mean value
static const WebRtc_Word16 kMinimumMean[2] = { 640, 768 };
// Upper limit of mean value for noise model, Q7
static const WebRtc_Word16 kMaximumNoise[6] = {
9216, 9088, 8960, 8832, 8704, 8576 };
// Start values for the Gaussian models, Q7
// Weights for the two Gaussians for the six channels (noise)
static const WebRtc_Word16 kNoiseDataWeights[12] = {
34, 62, 72, 66, 53, 25, 94, 66, 56, 62, 75, 103 };
// Weights for the two Gaussians for the six channels (speech)
static const WebRtc_Word16 kSpeechDataWeights[12] = {
48, 82, 45, 87, 50, 47, 80, 46, 83, 41, 78, 81 };
// Means for the two Gaussians for the six channels (noise)
static const WebRtc_Word16 kNoiseDataMeans[12] = {
6738, 4892, 7065, 6715, 6771, 3369, 7646, 3863, 7820, 7266, 5020, 4362 };
// Means for the two Gaussians for the six channels (speech)
static const WebRtc_Word16 kSpeechDataMeans[12] = {
8306, 10085, 10078, 11823, 11843, 6309, 9473, 9571, 10879, 7581, 8180, 7483
};
// Stds for the two Gaussians for the six channels (noise)
static const WebRtc_Word16 kNoiseDataStds[12] = {
378, 1064, 493, 582, 688, 593, 474, 697, 475, 688, 421, 455 };
// Stds for the two Gaussians for the six channels (speech)
static const WebRtc_Word16 kSpeechDataStds[12] = {
555, 505, 567, 524, 585, 1231, 509, 828, 492, 1540, 1079, 850 };
static const int kInitCheck = 42;
// Initialize VAD
int WebRtcVad_InitCore(VadInstT *inst, short mode)
{
int i;
// Initialization of struct
inst->vad = 1;
inst->frame_counter = 0;
inst->over_hang = 0;
inst->num_of_speech = 0;
// Initialization of downsampling filter state
inst->downsampling_filter_states[0] = 0;
inst->downsampling_filter_states[1] = 0;
inst->downsampling_filter_states[2] = 0;
inst->downsampling_filter_states[3] = 0;
// Read initial PDF parameters
for (i = 0; i < NUM_TABLE_VALUES; i++)
{
inst->noise_means[i] = kNoiseDataMeans[i];
inst->speech_means[i] = kSpeechDataMeans[i];
inst->noise_stds[i] = kNoiseDataStds[i];
inst->speech_stds[i] = kSpeechDataStds[i];
}
// Index and Minimum value vectors are initialized
for (i = 0; i < 16 * NUM_CHANNELS; i++)
{
inst->low_value_vector[i] = 10000;
inst->index_vector[i] = 0;
}
for (i = 0; i < 5; i++)
{
inst->upper_state[i] = 0;
inst->lower_state[i] = 0;
}
for (i = 0; i < 4; i++)
{
inst->hp_filter_state[i] = 0;
}
// Init mean value memory, for FindMin function
inst->mean_value[0] = 1600;
inst->mean_value[1] = 1600;
inst->mean_value[2] = 1600;
inst->mean_value[3] = 1600;
inst->mean_value[4] = 1600;
inst->mean_value[5] = 1600;
if (mode == 0)
{
// Quality mode
inst->over_hang_max_1[0] = OHMAX1_10MS_Q; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_Q; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_Q; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_Q; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_Q; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_Q; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_Q;
inst->individual[1] = INDIVIDUAL_20MS_Q;
inst->individual[2] = INDIVIDUAL_30MS_Q;
inst->total[0] = TOTAL_10MS_Q;
inst->total[1] = TOTAL_20MS_Q;
inst->total[2] = TOTAL_30MS_Q;
} else if (mode == 1)
{
// Low bitrate mode
inst->over_hang_max_1[0] = OHMAX1_10MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_LBR; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_LBR; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_LBR;
inst->individual[1] = INDIVIDUAL_20MS_LBR;
inst->individual[2] = INDIVIDUAL_30MS_LBR;
inst->total[0] = TOTAL_10MS_LBR;
inst->total[1] = TOTAL_20MS_LBR;
inst->total[2] = TOTAL_30MS_LBR;
} else if (mode == 2)
{
// Aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_AGG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_AGG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_AGG;
inst->individual[1] = INDIVIDUAL_20MS_AGG;
inst->individual[2] = INDIVIDUAL_30MS_AGG;
inst->total[0] = TOTAL_10MS_AGG;
inst->total[1] = TOTAL_20MS_AGG;
inst->total[2] = TOTAL_30MS_AGG;
} else
{
// Very aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_VAG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_VAG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_VAG;
inst->individual[1] = INDIVIDUAL_20MS_VAG;
inst->individual[2] = INDIVIDUAL_30MS_VAG;
inst->total[0] = TOTAL_10MS_VAG;
inst->total[1] = TOTAL_20MS_VAG;
inst->total[2] = TOTAL_30MS_VAG;
}
inst->init_flag = kInitCheck;
return 0;
}
// Set aggressiveness mode
int WebRtcVad_set_mode_core(VadInstT *inst, short mode)
{
if (mode == 0)
{
// Quality mode
inst->over_hang_max_1[0] = OHMAX1_10MS_Q; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_Q; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_Q; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_Q; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_Q; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_Q; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_Q;
inst->individual[1] = INDIVIDUAL_20MS_Q;
inst->individual[2] = INDIVIDUAL_30MS_Q;
inst->total[0] = TOTAL_10MS_Q;
inst->total[1] = TOTAL_20MS_Q;
inst->total[2] = TOTAL_30MS_Q;
} else if (mode == 1)
{
// Low bitrate mode
inst->over_hang_max_1[0] = OHMAX1_10MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_LBR; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_LBR; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_LBR; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_LBR; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_LBR;
inst->individual[1] = INDIVIDUAL_20MS_LBR;
inst->individual[2] = INDIVIDUAL_30MS_LBR;
inst->total[0] = TOTAL_10MS_LBR;
inst->total[1] = TOTAL_20MS_LBR;
inst->total[2] = TOTAL_30MS_LBR;
} else if (mode == 2)
{
// Aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_AGG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_AGG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_AGG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_AGG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_AGG;
inst->individual[1] = INDIVIDUAL_20MS_AGG;
inst->individual[2] = INDIVIDUAL_30MS_AGG;
inst->total[0] = TOTAL_10MS_AGG;
inst->total[1] = TOTAL_20MS_AGG;
inst->total[2] = TOTAL_30MS_AGG;
} else if (mode == 3)
{
// Very aggressive mode
inst->over_hang_max_1[0] = OHMAX1_10MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[1] = OHMAX1_20MS_VAG; // Overhang short speech burst
inst->over_hang_max_1[2] = OHMAX1_30MS_VAG; // Overhang short speech burst
inst->over_hang_max_2[0] = OHMAX2_10MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[1] = OHMAX2_20MS_VAG; // Overhang long speech burst
inst->over_hang_max_2[2] = OHMAX2_30MS_VAG; // Overhang long speech burst
inst->individual[0] = INDIVIDUAL_10MS_VAG;
inst->individual[1] = INDIVIDUAL_20MS_VAG;
inst->individual[2] = INDIVIDUAL_30MS_VAG;
inst->total[0] = TOTAL_10MS_VAG;
inst->total[1] = TOTAL_20MS_VAG;
inst->total[2] = TOTAL_30MS_VAG;
} else
{
return -1;
}
return 0;
}
// Calculate the VAD decision by first extracting feature values and then
// calculating the probability for both speech and background noise.
WebRtc_Word16 WebRtcVad_CalcVad32khz(VadInstT *inst, WebRtc_Word16 *speech_frame,
int frame_length)
{
WebRtc_Word16 len, vad;
WebRtc_Word16 speechWB[480]; // Downsampled speech frame: 960 samples (30ms in SWB)
WebRtc_Word16 speechNB[240]; // Downsampled speech frame: 480 samples (30ms in WB)
// Downsample signal 32->16->8 before doing VAD
WebRtcVad_Downsampling(speech_frame, speechWB, &(inst->downsampling_filter_states[2]),
frame_length);
len = WEBRTC_SPL_RSHIFT_W16(frame_length, 1);
WebRtcVad_Downsampling(speechWB, speechNB, inst->downsampling_filter_states, len);
len = WEBRTC_SPL_RSHIFT_W16(len, 1);
// Do VAD on an 8 kHz signal
vad = WebRtcVad_CalcVad8khz(inst, speechNB, len);
return vad;
}
WebRtc_Word16 WebRtcVad_CalcVad16khz(VadInstT *inst, WebRtc_Word16 *speech_frame,
int frame_length)
{
WebRtc_Word16 len, vad;
WebRtc_Word16 speechNB[240]; // Downsampled speech frame: 480 samples (30ms in WB)
// Wideband: Downsample signal before doing VAD
WebRtcVad_Downsampling(speech_frame, speechNB, inst->downsampling_filter_states,
frame_length);
len = WEBRTC_SPL_RSHIFT_W16(frame_length, 1);
vad = WebRtcVad_CalcVad8khz(inst, speechNB, len);
return vad;
}
WebRtc_Word16 WebRtcVad_CalcVad8khz(VadInstT *inst, WebRtc_Word16 *speech_frame,
int frame_length)
{
WebRtc_Word16 feature_vector[NUM_CHANNELS], total_power;
// Get power in the bands
total_power = WebRtcVad_get_features(inst, speech_frame, frame_length, feature_vector);
// Make a VAD
inst->vad = WebRtcVad_GmmProbability(inst, feature_vector, total_power, frame_length);
return inst->vad;
}
// Calculate probability for both speech and background noise, and perform a
// hypothesis-test.
WebRtc_Word16 WebRtcVad_GmmProbability(VadInstT *inst, WebRtc_Word16 *feature_vector,
WebRtc_Word16 total_power, int frame_length)
{
int n, k;
WebRtc_Word16 backval;
WebRtc_Word16 h0, h1;
WebRtc_Word16 ratvec, xval;
WebRtc_Word16 vadflag;
WebRtc_Word16 shifts0, shifts1;
WebRtc_Word16 tmp16, tmp16_1, tmp16_2;
WebRtc_Word16 diff, nr, pos;
WebRtc_Word16 nmk, nmk2, nmk3, smk, smk2, nsk, ssk;
WebRtc_Word16 delt, ndelt;
WebRtc_Word16 maxspe, maxmu;
WebRtc_Word16 deltaN[NUM_TABLE_VALUES], deltaS[NUM_TABLE_VALUES];
WebRtc_Word16 ngprvec[NUM_TABLE_VALUES], sgprvec[NUM_TABLE_VALUES];
WebRtc_Word32 h0test, h1test;
WebRtc_Word32 tmp32_1, tmp32_2;
WebRtc_Word32 dotVal;
WebRtc_Word32 nmid, smid;
WebRtc_Word32 probn[NUM_MODELS], probs[NUM_MODELS];
WebRtc_Word16 *nmean1ptr, *nmean2ptr, *smean1ptr, *smean2ptr, *nstd1ptr, *nstd2ptr,
*sstd1ptr, *sstd2ptr;
WebRtc_Word16 overhead1, overhead2, individualTest, totalTest;
// Set the thresholds to different values based on frame length
if (frame_length == 80)
{
// 80 input samples
overhead1 = inst->over_hang_max_1[0];
overhead2 = inst->over_hang_max_2[0];
individualTest = inst->individual[0];
totalTest = inst->total[0];
} else if (frame_length == 160)
{
// 160 input samples
overhead1 = inst->over_hang_max_1[1];
overhead2 = inst->over_hang_max_2[1];
individualTest = inst->individual[1];
totalTest = inst->total[1];
} else
{
// 240 input samples
overhead1 = inst->over_hang_max_1[2];
overhead2 = inst->over_hang_max_2[2];
individualTest = inst->individual[2];
totalTest = inst->total[2];
}
if (total_power > MIN_ENERGY)
{ // If signal present at all
// Set pointers to the gaussian parameters
nmean1ptr = &inst->noise_means[0];
nmean2ptr = &inst->noise_means[NUM_CHANNELS];
smean1ptr = &inst->speech_means[0];
smean2ptr = &inst->speech_means[NUM_CHANNELS];
nstd1ptr = &inst->noise_stds[0];
nstd2ptr = &inst->noise_stds[NUM_CHANNELS];
sstd1ptr = &inst->speech_stds[0];
sstd2ptr = &inst->speech_stds[NUM_CHANNELS];
vadflag = 0;
dotVal = 0;
for (n = 0; n < NUM_CHANNELS; n++)
{ // For all channels
pos = WEBRTC_SPL_LSHIFT_W16(n, 1);
xval = feature_vector[n];
// Probability for Noise, Q7 * Q20 = Q27
tmp32_1 = WebRtcVad_GaussianProbability(xval, *nmean1ptr++, *nstd1ptr++,
&deltaN[pos]);
probn[0] = (WebRtc_Word32)(kNoiseDataWeights[n] * tmp32_1);
tmp32_1 = WebRtcVad_GaussianProbability(xval, *nmean2ptr++, *nstd2ptr++,
&deltaN[pos + 1]);
probn[1] = (WebRtc_Word32)(kNoiseDataWeights[n + NUM_CHANNELS] * tmp32_1);
h0test = probn[0] + probn[1]; // Q27
h0 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(h0test, 12); // Q15
// Probability for Speech
tmp32_1 = WebRtcVad_GaussianProbability(xval, *smean1ptr++, *sstd1ptr++,
&deltaS[pos]);
probs[0] = (WebRtc_Word32)(kSpeechDataWeights[n] * tmp32_1);
tmp32_1 = WebRtcVad_GaussianProbability(xval, *smean2ptr++, *sstd2ptr++,
&deltaS[pos + 1]);
probs[1] = (WebRtc_Word32)(kSpeechDataWeights[n + NUM_CHANNELS] * tmp32_1);
h1test = probs[0] + probs[1]; // Q27
h1 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(h1test, 12); // Q15
// Get likelihood ratio. Approximate log2(H1/H0) with shifts0 - shifts1
shifts0 = WebRtcSpl_NormW32(h0test);
shifts1 = WebRtcSpl_NormW32(h1test);
if ((h0test > 0) && (h1test > 0))
{
ratvec = shifts0 - shifts1;
} else if (h1test > 0)
{
ratvec = 31 - shifts1;
} else if (h0test > 0)
{
ratvec = shifts0 - 31;
} else
{
ratvec = 0;
}
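// Illustrative note (added, not in the original source): WebRtcSpl_NormW32(x)
// is roughly 30 - floor(log2(x)) for positive x, so shifts0 - shifts1
// ~= log2(h1test) - log2(h0test) = log2(H1/H0). For example, h0test = 1 << 20
// and h1test = 1 << 24 give shifts0 = 10 and shifts1 = 6, i.e. ratvec = 4 ~= log2(16).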
// VAD decision with spectrum weighting
dotVal += WEBRTC_SPL_MUL_16_16(ratvec, kSpectrumWeight[n]);
// Individual channel test
if ((ratvec << 2) > individualTest)
{
vadflag = 1;
}
// Probabilities used when updating model
if (h0 > 0)
{
tmp32_1 = probn[0] & 0xFFFFF000; // Q27
tmp32_2 = WEBRTC_SPL_LSHIFT_W32(tmp32_1, 2); // Q29
ngprvec[pos] = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_2, h0);
ngprvec[pos + 1] = 16384 - ngprvec[pos];
} else
{
ngprvec[pos] = 16384;
ngprvec[pos + 1] = 0;
}
// Probabilities used when updating model
if (h1 > 0)
{
tmp32_1 = probs[0] & 0xFFFFF000;
tmp32_2 = WEBRTC_SPL_LSHIFT_W32(tmp32_1, 2);
sgprvec[pos] = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_2, h1);
sgprvec[pos + 1] = 16384 - sgprvec[pos];
} else
{
sgprvec[pos] = 0;
sgprvec[pos + 1] = 0;
}
}
// Overall test
if (dotVal >= totalTest)
{
vadflag |= 1;
}
// Set pointers to the means and standard deviations.
nmean1ptr = &inst->noise_means[0];
smean1ptr = &inst->speech_means[0];
nstd1ptr = &inst->noise_stds[0];
sstd1ptr = &inst->speech_stds[0];
maxspe = 12800;
// Update the model's parameters
for (n = 0; n < NUM_CHANNELS; n++)
{
pos = WEBRTC_SPL_LSHIFT_W16(n, 1);
// Get min value in past which is used for long term correction
backval = WebRtcVad_FindMinimum(inst, feature_vector[n], n); // Q4
// Compute the "global" mean, that is the sum of the two means weighted
nmid = WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n], *nmean1ptr); // Q7 * Q7
nmid += WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n+NUM_CHANNELS],
*(nmean1ptr+NUM_CHANNELS));
tmp16_1 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(nmid, 6); // Q8
for (k = 0; k < NUM_MODELS; k++)
{
nr = pos + k;
nmean2ptr = nmean1ptr + k * NUM_CHANNELS;
smean2ptr = smean1ptr + k * NUM_CHANNELS;
nstd2ptr = nstd1ptr + k * NUM_CHANNELS;
sstd2ptr = sstd1ptr + k * NUM_CHANNELS;
nmk = *nmean2ptr;
smk = *smean2ptr;
nsk = *nstd2ptr;
ssk = *sstd2ptr;
// Update noise mean vector if the frame consists of noise only
nmk2 = nmk;
if (!vadflag)
{
// deltaN = (x-mu)/sigma^2
// ngprvec[k] = probn[k]/(probn[0] + probn[1])
delt = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(ngprvec[nr],
deltaN[nr], 11); // Q14*Q11
nmk2 = nmk + (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(delt,
kNoiseUpdateConst,
22); // Q7+(Q14*Q15>>22)
}
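// Illustrative note (added, not in the original source): ignoring the Q-format
// bookkeeping, kNoiseUpdateConst is 655/32768 ~= 0.02, so this is a small
// gradient-style step of roughly 0.02 * responsibility * (x - mu)/sigma^2 added
// to the noise mean, with ngprvec as the responsibility and deltaN as
// (x - mu)/sigma^2 per the comments above.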
// Long term correction of the noise mean
ndelt = WEBRTC_SPL_LSHIFT_W16(backval, 4);
ndelt -= tmp16_1; // Q8 - Q8
nmk3 = nmk2 + (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(ndelt,
kBackEta,
9); // Q7+(Q8*Q8)>>9
// Ensure that the noise mean does not drift too much
tmp16 = WEBRTC_SPL_LSHIFT_W16(k+5, 7);
if (nmk3 < tmp16)
nmk3 = tmp16;
tmp16 = WEBRTC_SPL_LSHIFT_W16(72+k-n, 7);
if (nmk3 > tmp16)
nmk3 = tmp16;
*nmean2ptr = nmk3;
if (vadflag)
{
// Update speech mean vector:
// deltaS = (x-mu)/sigma^2
// sgprvec[k] = probn[k]/(probn[0] + probn[1])
delt = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(sgprvec[nr],
deltaS[nr],
11); // (Q14*Q11)>>11=Q14
tmp16 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(delt,
kSpeechUpdateConst,
21) + 1;
smk2 = smk + (tmp16 >> 1); // Q7 + (Q14 * Q15 >> 22)
// Ensure that the speech mean does not drift too much
maxmu = maxspe + 640;
if (smk2 < kMinimumMean[k])
smk2 = kMinimumMean[k];
if (smk2 > maxmu)
smk2 = maxmu;
*smean2ptr = smk2;
// (Q7>>3) = Q4
tmp16 = WEBRTC_SPL_RSHIFT_W16((smk + 4), 3);
tmp16 = feature_vector[n] - tmp16; // Q4
tmp32_1 = WEBRTC_SPL_MUL_16_16_RSFT(deltaS[nr], tmp16, 3);
tmp32_2 = tmp32_1 - (WebRtc_Word32)4096; // Q12
tmp16 = WEBRTC_SPL_RSHIFT_W16((sgprvec[nr]), 2);
tmp32_1 = (WebRtc_Word32)(tmp16 * tmp32_2);// (Q15>>3)*(Q14>>2)=Q12*Q12=Q24
tmp32_2 = WEBRTC_SPL_RSHIFT_W32(tmp32_1, 4); // Q20
// 0.1 * Q20 / Q7 = Q13
if (tmp32_2 > 0)
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_2, ssk * 10);
else
{
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(-tmp32_2, ssk * 10);
tmp16 = -tmp16;
}
// divide by 4 giving an update factor of 0.025
tmp16 += 128; // Rounding
ssk += WEBRTC_SPL_RSHIFT_W16(tmp16, 8);
// Division with 8 plus Q7
if (ssk < MIN_STD)
ssk = MIN_STD;
*sstd2ptr = ssk;
} else
{
// Update GMM variance vectors
// deltaN * (feature_vector[n] - nmk) - 1, Q11 * Q4
tmp16 = feature_vector[n] - WEBRTC_SPL_RSHIFT_W16(nmk, 3);
// (Q15>>3) * (Q14>>2) = Q12 * Q12 = Q24
tmp32_1 = WEBRTC_SPL_MUL_16_16_RSFT(deltaN[nr], tmp16, 3) - 4096;
tmp16 = WEBRTC_SPL_RSHIFT_W16((ngprvec[nr]+2), 2);
tmp32_2 = (WebRtc_Word32)(tmp16 * tmp32_1);
tmp32_1 = WEBRTC_SPL_RSHIFT_W32(tmp32_2, 14);
// Q20 * approx 0.001 (2^-10=0.0009766)
// Q20 / Q7 = Q13
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_1, nsk);
if (tmp32_1 > 0)
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32_1, nsk);
else
{
tmp16 = (WebRtc_Word16)WebRtcSpl_DivW32W16(-tmp32_1, nsk);
tmp16 = -tmp16;
}
tmp16 += 32; // Rounding
nsk += WEBRTC_SPL_RSHIFT_W16(tmp16, 6);
if (nsk < MIN_STD)
nsk = MIN_STD;
*nstd2ptr = nsk;
}
}
// Separate models if they are too close - nmid in Q14
nmid = WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n], *nmean1ptr);
nmid += WEBRTC_SPL_MUL_16_16(kNoiseDataWeights[n+NUM_CHANNELS], *nmean2ptr);
// smid in Q14
smid = WEBRTC_SPL_MUL_16_16(kSpeechDataWeights[n], *smean1ptr);
smid += WEBRTC_SPL_MUL_16_16(kSpeechDataWeights[n+NUM_CHANNELS], *smean2ptr);
// diff = "global" speech mean - "global" noise mean
diff = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(smid, 9);
tmp16 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(nmid, 9);
diff -= tmp16;
if (diff < kMinimumDifference[n])
{
tmp16 = kMinimumDifference[n] - diff; // Q5
// tmp16_1 = ~0.8 * (kMinimumDifference - diff) in Q7
// tmp16_2 = ~0.2 * (kMinimumDifference - diff) in Q7
tmp16_1 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(13, tmp16, 2);
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(3, tmp16, 2);
// First Gauss, speech model
tmp16 = tmp16_1 + *smean1ptr;
*smean1ptr = tmp16;
smid = WEBRTC_SPL_MUL_16_16(tmp16, kSpeechDataWeights[n]);
// Second Gauss, speech model
tmp16 = tmp16_1 + *smean2ptr;
*smean2ptr = tmp16;
smid += WEBRTC_SPL_MUL_16_16(tmp16, kSpeechDataWeights[n+NUM_CHANNELS]);
// First Gauss, noise model
tmp16 = *nmean1ptr - tmp16_2;
*nmean1ptr = tmp16;
nmid = WEBRTC_SPL_MUL_16_16(tmp16, kNoiseDataWeights[n]);
// Second Gauss, noise model
tmp16 = *nmean2ptr - tmp16_2;
*nmean2ptr = tmp16;
nmid += WEBRTC_SPL_MUL_16_16(tmp16, kNoiseDataWeights[n+NUM_CHANNELS]);
}
// Ensure that the speech & noise means do not drift too much
maxspe = kMaximumSpeech[n];
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(smid, 7);
if (tmp16_2 > maxspe)
{ // Upper limit of speech model
tmp16_2 -= maxspe;
*smean1ptr -= tmp16_2;
*smean2ptr -= tmp16_2;
}
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(nmid, 7);
if (tmp16_2 > kMaximumNoise[n])
{
tmp16_2 -= kMaximumNoise[n];
*nmean1ptr -= tmp16_2;
*nmean2ptr -= tmp16_2;
}
nmean1ptr++;
smean1ptr++;
nstd1ptr++;
sstd1ptr++;
}
inst->frame_counter++;
} else
{
vadflag = 0;
}
// Hangover smoothing
if (!vadflag)
{
if (inst->over_hang > 0)
{
vadflag = 2 + inst->over_hang;
inst->over_hang = inst->over_hang - 1;
}
inst->num_of_speech = 0;
} else
{
inst->num_of_speech = inst->num_of_speech + 1;
if (inst->num_of_speech > NSP_MAX)
{
inst->num_of_speech = NSP_MAX;
inst->over_hang = overhead2;
} else
inst->over_hang = overhead1;
}
return vadflag;
}

View File

@ -1,132 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the descriptions of the core VAD calls.
*/
#ifndef WEBRTC_VAD_CORE_H_
#define WEBRTC_VAD_CORE_H_
#include "typedefs.h"
#include "vad_defines.h"
typedef struct VadInstT_
{
WebRtc_Word16 vad;
WebRtc_Word32 downsampling_filter_states[4];
WebRtc_Word16 noise_means[NUM_TABLE_VALUES];
WebRtc_Word16 speech_means[NUM_TABLE_VALUES];
WebRtc_Word16 noise_stds[NUM_TABLE_VALUES];
WebRtc_Word16 speech_stds[NUM_TABLE_VALUES];
WebRtc_Word32 frame_counter;
WebRtc_Word16 over_hang; // Over Hang
WebRtc_Word16 num_of_speech;
WebRtc_Word16 index_vector[16 * NUM_CHANNELS];
WebRtc_Word16 low_value_vector[16 * NUM_CHANNELS];
WebRtc_Word16 mean_value[NUM_CHANNELS];
WebRtc_Word16 upper_state[5];
WebRtc_Word16 lower_state[5];
WebRtc_Word16 hp_filter_state[4];
WebRtc_Word16 over_hang_max_1[3];
WebRtc_Word16 over_hang_max_2[3];
WebRtc_Word16 individual[3];
WebRtc_Word16 total[3];
short init_flag;
} VadInstT;
/****************************************************************************
* WebRtcVad_InitCore(...)
*
* This function initializes a VAD instance
*
* Input:
* - inst : Instance that should be initialized
* - mode : Aggressiveness degree
* 0 (High quality) - 3 (Highly aggressive)
*
* Output:
* - inst : Initialized instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcVad_InitCore(VadInstT* inst, short mode);
/****************************************************************************
* WebRtcVad_set_mode_core(...)
*
* This function changes the VAD settings
*
* Input:
* - inst : VAD instance
* - mode : Aggressiveness degree
* 0 (High quality) - 3 (Highly aggressive)
*
* Output:
* - inst : Changed instance
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcVad_set_mode_core(VadInstT* inst, short mode);
/****************************************************************************
* WebRtcVad_CalcVad32khz(...)
* WebRtcVad_CalcVad16khz(...)
* WebRtcVad_CalcVad8khz(...)
*
* Calculate probability for active speech and make VAD decision.
*
* Input:
* - inst : Instance that should be initialized
* - speech_frame : Input speech frame
* - frame_length : Number of input samples
*
* Output:
* - inst : Updated filter states etc.
*
* Return value : VAD decision
* 0 - No active speech
* 1-6 - Active speech
*/
WebRtc_Word16 WebRtcVad_CalcVad32khz(VadInstT* inst, WebRtc_Word16* speech_frame,
int frame_length);
WebRtc_Word16 WebRtcVad_CalcVad16khz(VadInstT* inst, WebRtc_Word16* speech_frame,
int frame_length);
WebRtc_Word16 WebRtcVad_CalcVad8khz(VadInstT* inst, WebRtc_Word16* speech_frame,
int frame_length);
/****************************************************************************
* WebRtcVad_GmmProbability(...)
*
* This function calculates the probabilities for background noise and
* speech using Gaussian Mixture Models. A hypothesis-test is performed to decide
* which type of signal is most probable.
*
* Input:
* - inst : Pointer to VAD instance
* - feature_vector : Feature vector = log10(energy in frequency band)
* - total_power : Total power in frame.
* - frame_length : Number of input samples
*
* Output:
* VAD decision : 0 - noise, 1 - speech
*
*/
WebRtc_Word16 WebRtcVad_GmmProbability(VadInstT* inst, WebRtc_Word16* feature_vector,
WebRtc_Word16 total_power, int frame_length);
#endif // WEBRTC_VAD_CORE_H_
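/*
 * Illustrative note (added, not part of the original header): return values
 * greater than 1 come from the hangover smoothing in WebRtcVad_GmmProbability(),
 * where a non-speech frame that closely follows speech returns 2 plus the
 * remaining over_hang count instead of a plain 0/1 decision.
 */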

View File

@ -1,95 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the macros used in VAD.
*/
#ifndef WEBRTC_VAD_DEFINES_H_
#define WEBRTC_VAD_DEFINES_H_
#define NUM_CHANNELS 6 // Six frequency bands
#define NUM_MODELS 2 // Number of Gaussian models
#define NUM_TABLE_VALUES (NUM_CHANNELS * NUM_MODELS)
#define MIN_ENERGY 10
#define ALPHA1 6553 // 0.2 in Q15
#define ALPHA2 32439 // 0.99 in Q15
#define NSP_MAX 6 // Maximum number of VAD=1 frames in a row counted
#define MIN_STD 384 // Minimum standard deviation
// Mode 0, Quality thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_Q 24
#define INDIVIDUAL_20MS_Q 21 // (log10(2)*66)<<2 ~=16
#define INDIVIDUAL_30MS_Q 24
#define TOTAL_10MS_Q 57
#define TOTAL_20MS_Q 48
#define TOTAL_30MS_Q 57
#define OHMAX1_10MS_Q 8 // Max Overhang 1
#define OHMAX2_10MS_Q 14 // Max Overhang 2
#define OHMAX1_20MS_Q 4 // Max Overhang 1
#define OHMAX2_20MS_Q 7 // Max Overhang 2
#define OHMAX1_30MS_Q 3
#define OHMAX2_30MS_Q 5
// Mode 1, Low bitrate thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_LBR 37
#define INDIVIDUAL_20MS_LBR 32
#define INDIVIDUAL_30MS_LBR 37
#define TOTAL_10MS_LBR 100
#define TOTAL_20MS_LBR 80
#define TOTAL_30MS_LBR 100
#define OHMAX1_10MS_LBR 8 // Max Overhang 1
#define OHMAX2_10MS_LBR 14 // Max Overhang 2
#define OHMAX1_20MS_LBR 4
#define OHMAX2_20MS_LBR 7
#define OHMAX1_30MS_LBR 3
#define OHMAX2_30MS_LBR 5
// Mode 2, Very aggressive thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_AGG 82
#define INDIVIDUAL_20MS_AGG 78
#define INDIVIDUAL_30MS_AGG 82
#define TOTAL_10MS_AGG 285 //580
#define TOTAL_20MS_AGG 260
#define TOTAL_30MS_AGG 285
#define OHMAX1_10MS_AGG 6 // Max Overhang 1
#define OHMAX2_10MS_AGG 9 // Max Overhang 2
#define OHMAX1_20MS_AGG 3
#define OHMAX2_20MS_AGG 5
#define OHMAX1_30MS_AGG 2
#define OHMAX2_30MS_AGG 3
// Mode 3, Super aggressive thresholds - Different thresholds for the different frame lengths
#define INDIVIDUAL_10MS_VAG 94
#define INDIVIDUAL_20MS_VAG 94
#define INDIVIDUAL_30MS_VAG 94
#define TOTAL_10MS_VAG 1100 //1700
#define TOTAL_20MS_VAG 1050
#define TOTAL_30MS_VAG 1100
#define OHMAX1_10MS_VAG 6 // Max Overhang 1
#define OHMAX2_10MS_VAG 9 // Max Overhang 2
#define OHMAX1_20MS_VAG 3
#define OHMAX2_20MS_VAG 5
#define OHMAX1_30MS_VAG 2
#define OHMAX2_30MS_VAG 3
#endif // WEBRTC_VAD_DEFINES_H_

View File

@ -1,279 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the internal filterbank associated functions.
* For function description, see vad_filterbank.h.
*/
#include "vad_filterbank.h"
#include "signal_processing_library.h"
#include "typedefs.h"
#include "vad_defines.h"
// Constant 160*log10(2) in Q9
static const WebRtc_Word16 kLogConst = 24660;
// Coefficients used by WebRtcVad_HpOutput, Q14
static const WebRtc_Word16 kHpZeroCoefs[3] = {6631, -13262, 6631};
static const WebRtc_Word16 kHpPoleCoefs[3] = {16384, -7756, 5620};
// Allpass filter coefficients, upper and lower, in Q15
// Upper: 0.64, Lower: 0.17
static const WebRtc_Word16 kAllPassCoefsQ15[2] = {20972, 5571};
// Adjustment for division with two in WebRtcVad_SplitFilter
static const WebRtc_Word16 kOffsetVector[6] = {368, 368, 272, 176, 176, 176};
void WebRtcVad_HpOutput(WebRtc_Word16 *in_vector,
WebRtc_Word16 in_vector_length,
WebRtc_Word16 *out_vector,
WebRtc_Word16 *filter_state)
{
WebRtc_Word16 i, *pi, *outPtr;
WebRtc_Word32 tmpW32;
pi = &in_vector[0];
outPtr = &out_vector[0];
// The sum of the absolute values of the impulse response:
// The zero/pole-filter has a max amplification of a single sample of: 1.4546
// Impulse response: 0.4047 -0.6179 -0.0266 0.1993 0.1035 -0.0194
// The all-zero section has a max amplification of a single sample of: 1.6189
// Impulse response: 0.4047 -0.8094 0.4047 0 0 0
// The all-pole section has a max amplification of a single sample of: 1.9931
// Impulse response: 1.0000 0.4734 -0.1189 -0.2187 -0.0627 0.04532
for (i = 0; i < in_vector_length; i++)
{
// all-zero section (filter coefficients in Q14)
tmpW32 = (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpZeroCoefs[0], (*pi));
tmpW32 += (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpZeroCoefs[1], filter_state[0]);
tmpW32 += (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpZeroCoefs[2], filter_state[1]); // Q14
filter_state[1] = filter_state[0];
filter_state[0] = *pi++;
// all-pole section
tmpW32 -= (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpPoleCoefs[1], filter_state[2]); // Q14
tmpW32 -= (WebRtc_Word32)WEBRTC_SPL_MUL_16_16(kHpPoleCoefs[2], filter_state[3]);
filter_state[3] = filter_state[2];
filter_state[2] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32 (tmpW32, 14);
*outPtr++ = filter_state[2];
}
}
void WebRtcVad_Allpass(WebRtc_Word16 *in_vector,
WebRtc_Word16 *out_vector,
WebRtc_Word16 filter_coefficients,
int vector_length,
WebRtc_Word16 *filter_state)
{
// The filter can only cause overflow (in the w16 output variable)
// if more than 4 consecutive input numbers are of maximum value and
// have the same sign as the impulse response's first taps.
// First 6 taps of the impulse response: 0.6399 0.5905 -0.3779
// 0.2418 -0.1547 0.0990
int n;
WebRtc_Word16 tmp16;
WebRtc_Word32 tmp32, in32, state32;
state32 = WEBRTC_SPL_LSHIFT_W32(((WebRtc_Word32)(*filter_state)), 16); // Q31
for (n = 0; n < vector_length; n++)
{
tmp32 = state32 + WEBRTC_SPL_MUL_16_16(filter_coefficients, (*in_vector));
tmp16 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 16);
*out_vector++ = tmp16;
in32 = WEBRTC_SPL_LSHIFT_W32(((WebRtc_Word32)(*in_vector)), 14);
state32 = in32 - WEBRTC_SPL_MUL_16_16(filter_coefficients, tmp16);
state32 = WEBRTC_SPL_LSHIFT_W32(state32, 1);
in_vector += 2;
}
*filter_state = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(state32, 16);
}
void WebRtcVad_SplitFilter(WebRtc_Word16 *in_vector,
WebRtc_Word16 *out_vector_hp,
WebRtc_Word16 *out_vector_lp,
WebRtc_Word16 *upper_state,
WebRtc_Word16 *lower_state,
int in_vector_length)
{
WebRtc_Word16 tmpOut;
int k, halflen;
// Downsampling by 2 and get two branches
halflen = WEBRTC_SPL_RSHIFT_W16(in_vector_length, 1);
// All-pass filtering upper branch
WebRtcVad_Allpass(&in_vector[0], out_vector_hp, kAllPassCoefsQ15[0], halflen, upper_state);
// All-pass filtering lower branch
WebRtcVad_Allpass(&in_vector[1], out_vector_lp, kAllPassCoefsQ15[1], halflen, lower_state);
// Make LP and HP signals
for (k = 0; k < halflen; k++)
{
tmpOut = *out_vector_hp;
*out_vector_hp++ -= *out_vector_lp;
*out_vector_lp++ += tmpOut;
}
}
WebRtc_Word16 WebRtcVad_get_features(VadInstT *inst,
WebRtc_Word16 *in_vector,
int frame_size,
WebRtc_Word16 *out_vector)
{
int curlen, filtno;
WebRtc_Word16 vecHP1[120], vecLP1[120];
WebRtc_Word16 vecHP2[60], vecLP2[60];
WebRtc_Word16 *ptin;
WebRtc_Word16 *hptout, *lptout;
WebRtc_Word16 power = 0;
// Split at 2000 Hz and downsample
filtno = 0;
ptin = in_vector;
hptout = vecHP1;
lptout = vecLP1;
curlen = frame_size;
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Split at 3000 Hz and downsample
filtno = 1;
ptin = vecHP1;
hptout = vecHP2;
lptout = vecLP2;
curlen = WEBRTC_SPL_RSHIFT_W16(frame_size, 1);
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 3000 Hz - 4000 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP2, &out_vector[5], &power, kOffsetVector[5], curlen);
// Energy in 2000 Hz - 3000 Hz
WebRtcVad_LogOfEnergy(vecLP2, &out_vector[4], &power, kOffsetVector[4], curlen);
// Split at 1000 Hz and downsample
filtno = 2;
ptin = vecLP1;
hptout = vecHP2;
lptout = vecLP2;
curlen = WEBRTC_SPL_RSHIFT_W16(frame_size, 1);
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 1000 Hz - 2000 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP2, &out_vector[3], &power, kOffsetVector[3], curlen);
// Split at 500 Hz
filtno = 3;
ptin = vecLP2;
hptout = vecHP1;
lptout = vecLP1;
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 500 Hz - 1000 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP1, &out_vector[2], &power, kOffsetVector[2], curlen);
// Split at 250 Hz
filtno = 4;
ptin = vecLP1;
hptout = vecHP2;
lptout = vecLP2;
WebRtcVad_SplitFilter(ptin, hptout, lptout, &inst->upper_state[filtno],
&inst->lower_state[filtno], curlen);
// Energy in 250 Hz - 500 Hz
curlen = WEBRTC_SPL_RSHIFT_W16(curlen, 1);
WebRtcVad_LogOfEnergy(vecHP2, &out_vector[1], &power, kOffsetVector[1], curlen);
// Remove DC and LFs
WebRtcVad_HpOutput(vecLP2, curlen, vecHP1, inst->hp_filter_state);
// Power in 80 Hz - 250 Hz
WebRtcVad_LogOfEnergy(vecHP1, &out_vector[0], &power, kOffsetVector[0], curlen);
return power;
}
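// Illustrative summary (added, not in the original file) of the split cascade
// implemented above, for an 8 kHz input:
//   split @ 2000 Hz -> high: split @ 3000 Hz -> 3000-4000 Hz (out_vector[5])
//                                               2000-3000 Hz (out_vector[4])
//                   -> low:  split @ 1000 Hz -> 1000-2000 Hz (out_vector[3])
//                            split @  500 Hz ->  500-1000 Hz (out_vector[2])
//                            split @  250 Hz ->  250- 500 Hz (out_vector[1])
//                            HP filter       ->   80- 250 Hz (out_vector[0])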
void WebRtcVad_LogOfEnergy(WebRtc_Word16 *vector,
WebRtc_Word16 *enerlogval,
WebRtc_Word16 *power,
WebRtc_Word16 offset,
int vector_length)
{
WebRtc_Word16 enerSum = 0;
WebRtc_Word16 zeros, frac, log2;
WebRtc_Word32 energy;
int shfts = 0, shfts2;
energy = WebRtcSpl_Energy(vector, vector_length, &shfts);
if (energy > 0)
{
shfts2 = 16 - WebRtcSpl_NormW32(energy);
shfts += shfts2;
// "shfts" is the total number of right shifts that has been done to enerSum.
enerSum = (WebRtc_Word16)WEBRTC_SPL_SHIFT_W32(energy, -shfts2);
// Find:
// 160*log10(enerSum*2^shfts) = 160*log10(2)*log2(enerSum*2^shfts) =
// 160*log10(2)*(log2(enerSum) + log2(2^shfts)) =
// 160*log10(2)*(log2(enerSum) + shfts)
zeros = WebRtcSpl_NormU32(enerSum);
frac = (WebRtc_Word16)(((WebRtc_UWord32)((WebRtc_Word32)(enerSum) << zeros)
& 0x7FFFFFFF) >> 21);
log2 = (WebRtc_Word16)(((31 - zeros) << 10) + frac);
*enerlogval = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(kLogConst, log2, 19)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(shfts, kLogConst, 9);
if (*enerlogval < 0)
{
*enerlogval = 0;
}
} else
{
*enerlogval = 0;
shfts = -15;
enerSum = 0;
}
*enerlogval += offset;
// Total power in frame
if (*power <= MIN_ENERGY)
{
if (shfts > 0)
{
*power += MIN_ENERGY + 1;
} else if (WEBRTC_SPL_SHIFT_W16(enerSum, shfts) > MIN_ENERGY)
{
*power += MIN_ENERGY + 1;
} else
{
*power += WEBRTC_SPL_SHIFT_W16(enerSum, shfts);
}
}
}
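/*
 * Illustrative note (added, not part of the original file): ignoring the
 * fixed-point details above, the stored band value is approximately
 * 160*log10(energy) + offset, i.e. 10*log10(energy) expressed in Q4, which is
 * consistent with the "10*log10(power in each freq. band), Q4" description in
 * vad_filterbank.h.
 */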

View File

@ -1,143 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the description of the internal VAD call
* WebRtcVad_GaussianProbability.
*/
#ifndef WEBRTC_VAD_FILTERBANK_H_
#define WEBRTC_VAD_FILTERBANK_H_
#include "vad_core.h"
/****************************************************************************
* WebRtcVad_HpOutput(...)
*
* This function removes DC from the lowest frequency band
*
* Input:
* - in_vector : Samples in the frequency interval 0 - 250 Hz
* - in_vector_length : Length of input and output vector
* - filter_state : Current state of the filter
*
* Output:
* - out_vector : Samples in the frequency interval 80 - 250 Hz
* - filter_state : Updated state of the filter
*
*/
void WebRtcVad_HpOutput(WebRtc_Word16* in_vector,
WebRtc_Word16 in_vector_length,
WebRtc_Word16* out_vector,
WebRtc_Word16* filter_state);
/****************************************************************************
* WebRtcVad_Allpass(...)
*
* This function is used before splitting a speech signal into
* different frequency bands
*
* Note! Do NOT let the arrays in_vector and out_vector correspond to the same address.
*
* Input:
* - in_vector : (Q0)
* - filter_coefficients : (Q15)
* - vector_length : Length of input and output vector
* - filter_state : Current state of the filter (Q(-1))
*
* Output:
* - out_vector : Output speech signal (Q(-1))
* - filter_state : Updated state of the filter (Q(-1))
*
*/
void WebRtcVad_Allpass(WebRtc_Word16* in_vector,
WebRtc_Word16* outw16,
WebRtc_Word16 filter_coefficients,
int vector_length,
WebRtc_Word16* filter_state);
/****************************************************************************
* WebRtcVad_SplitFilter(...)
*
* This function is used before splitting a speech signal into
* different frequency bands
*
* Input:
* - in_vector : Input signal to be split into two frequency bands.
* - upper_state : Current state of the upper filter
* - lower_state : Current state of the lower filter
* - in_vector_length : Length of input vector
*
* Output:
* - out_vector_hp : Upper half of the spectrum
* - out_vector_lp : Lower half of the spectrum
* - upper_state : Updated state of the upper filter
* - lower_state : Updated state of the lower filter
*
*/
void WebRtcVad_SplitFilter(WebRtc_Word16* in_vector,
WebRtc_Word16* out_vector_hp,
WebRtc_Word16* out_vector_lp,
WebRtc_Word16* upper_state,
WebRtc_Word16* lower_state,
int in_vector_length);
/****************************************************************************
* WebRtcVad_get_features(...)
*
* This function is used to get the logarithm of the power of each of the
* 6 frequency bands used by the VAD:
* 80 Hz - 250 Hz
* 250 Hz - 500 Hz
* 500 Hz - 1000 Hz
* 1000 Hz - 2000 Hz
* 2000 Hz - 3000 Hz
* 3000 Hz - 4000 Hz
*
* Input:
* - inst : Pointer to VAD instance
* - in_vector : Input speech signal
* - frame_size : Frame size, in number of samples
*
* Output:
* - out_vector : 10*log10(power in each freq. band), Q4
*
* Return: total power in the signal (NOTE! This value is not exact since it
* is only used in a comparison.)
*/
WebRtc_Word16 WebRtcVad_get_features(VadInstT* inst,
WebRtc_Word16* in_vector,
int frame_size,
WebRtc_Word16* out_vector);
/****************************************************************************
* WebRtcVad_LogOfEnergy(...)
*
* This function is used to get the logarithm of the power of one frequency band.
*
* Input:
* - vector : Input speech samples for one frequency band
* - offset : Offset value for the current frequency band
* - vector_length : Length of input vector
*
* Output:
* - enerlogval : 10*log10(energy);
* - power : Update total power in speech frame. NOTE! This value
* is not exact since it is only used in a comparison.
*
*/
void WebRtcVad_LogOfEnergy(WebRtc_Word16* vector,
WebRtc_Word16* enerlogval,
WebRtc_Word16* power,
WebRtc_Word16 offset,
int vector_length);
#endif // WEBRTC_VAD_FILTERBANK_H_

View File

@ -1,75 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the internal VAD call
* WebRtcVad_GaussianProbability. For function description, see vad_gmm.h.
*/
#include "vad_gmm.h"
#include "signal_processing_library.h"
#include "typedefs.h"
static const WebRtc_Word32 kCompVar = 22005;
// Constant log2(exp(1)) in Q12
static const WebRtc_Word16 kLog10Const = 5909;
WebRtc_Word32 WebRtcVad_GaussianProbability(WebRtc_Word16 in_sample,
WebRtc_Word16 mean,
WebRtc_Word16 std,
WebRtc_Word16 *delta)
{
WebRtc_Word16 tmp16, tmpDiv, tmpDiv2, expVal, tmp16_1, tmp16_2;
WebRtc_Word32 tmp32, y32;
// Calculate tmpDiv=1/std, in Q10
tmp32 = (WebRtc_Word32)WEBRTC_SPL_RSHIFT_W16(std,1) + (WebRtc_Word32)131072; // 1 in Q17
tmpDiv = (WebRtc_Word16)WebRtcSpl_DivW32W16(tmp32, std); // Q17/Q7 = Q10
// Calculate tmpDiv2=1/std^2, in Q14
tmp16 = WEBRTC_SPL_RSHIFT_W16(tmpDiv, 2); // From Q10 to Q8
tmpDiv2 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(tmp16, tmp16, 2); // (Q8 * Q8)>>2 = Q14
tmp16 = WEBRTC_SPL_LSHIFT_W16(in_sample, 3); // Q7
tmp16 = tmp16 - mean; // Q7 - Q7 = Q7
// To be used later, when updating noise/speech model
// delta = (x-m)/std^2, in Q11
*delta = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(tmpDiv2, tmp16, 10); //(Q14*Q7)>>10 = Q11
// Calculate tmp32=(x-m)^2/(2*std^2), in Q10
tmp32 = (WebRtc_Word32)WEBRTC_SPL_MUL_16_16_RSFT(*delta, tmp16, 9); // One shift for /2
// Calculate expVal ~= exp(-(x-m)^2/(2*std^2)) ~= exp2(-log2(exp(1))*tmp32)
if (tmp32 < kCompVar)
{
// Calculate tmp16 = log2(exp(1))*tmp32 , in Q10
tmp16 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((WebRtc_Word16)tmp32,
kLog10Const, 12);
tmp16 = -tmp16;
tmp16_2 = (WebRtc_Word16)(0x0400 | (tmp16 & 0x03FF));
tmp16_1 = (WebRtc_Word16)(tmp16 ^ 0xFFFF);
tmp16 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W16(tmp16_1, 10);
tmp16 += 1;
// Calculate expVal ~= exp2(-log2(exp(1))*tmp32), in Q10
expVal = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((WebRtc_Word32)tmp16_2, tmp16);
} else
{
expVal = 0;
}
// Calculate y32=(1/std)*exp(-(x-m)^2/(2*std^2)), in Q20
y32 = WEBRTC_SPL_MUL_16_16(tmpDiv, expVal); // Q10 * Q10 = Q20
return y32; // Q20
}


@ -1,47 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the description of the internal VAD call
* WebRtcVad_GaussianProbability.
*/
#ifndef WEBRTC_VAD_GMM_H_
#define WEBRTC_VAD_GMM_H_
#include "typedefs.h"
/****************************************************************************
* WebRtcVad_GaussianProbability(...)
*
* This function calculates the probability for the value 'in_sample', given that in_sample
* comes from a normal distribution with mean 'mean' and standard deviation 'std'.
*
* Input:
* - in_sample : Input sample in Q4
* - mean : mean value in the statistical model, Q7
* - std : standard deviation, Q7
*
* Output:
*
* - delta : Value used when updating the model, Q11
*
* Return:
* - out : out = 1/std * exp(-(x-m)^2/(2*std^2));
* Probability for x.
*
*/
WebRtc_Word32 WebRtcVad_GaussianProbability(WebRtc_Word16 in_sample,
WebRtc_Word16 mean,
WebRtc_Word16 std,
WebRtc_Word16 *delta);
#endif // WEBRTC_VAD_GMM_H_
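/* Illustrative sketch (not part of the original header): the floating-point
 * counterpart of WebRtcVad_GaussianProbability, following the return value
 * documented above. The real API works on Q-format fixed-point inputs and
 * returns a Q20 value; plain floats are used here to keep the formula
 * visible. The helper name is hypothetical. */
#include <math.h>
static float GaussianProbabilityFloat(float x, float mean, float std, float* delta)
{
    const float inv_std = 1.0f / std;
    const float diff = x - mean;
    /* delta = (x - m) / std^2, used later when updating the GMM model. */
    *delta = diff * inv_std * inv_std;
    /* Unnormalized Gaussian: (1/std) * exp(-(x - m)^2 / (2 * std^2)). */
    return inv_std * expf(-0.5f * diff * diff * inv_std * inv_std);
}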


@ -1,236 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the VAD internal calls for
* Downsampling and FindMinimum.
* For function call descriptions, see vad_sp.h.
*/
#include "vad_sp.h"
#include "signal_processing_library.h"
#include "typedefs.h"
#include "vad_defines.h"
// Allpass filter coefficients, upper and lower, in Q13
// Upper: 0.64, Lower: 0.17
static const WebRtc_Word16 kAllPassCoefsQ13[2] = {5243, 1392}; // Q13
// Downsampling filter based on the splitting filter and the allpass functions
// in vad_filterbank.c
void WebRtcVad_Downsampling(WebRtc_Word16* signal_in,
WebRtc_Word16* signal_out,
WebRtc_Word32* filter_state,
int inlen)
{
WebRtc_Word16 tmp16_1, tmp16_2;
WebRtc_Word32 tmp32_1, tmp32_2;
int n, halflen;
// Downsampling by 2 and get two branches
halflen = WEBRTC_SPL_RSHIFT_W16(inlen, 1);
tmp32_1 = filter_state[0];
tmp32_2 = filter_state[1];
// Filter coefficients in Q13, filter state in Q0
for (n = 0; n < halflen; n++)
{
// All-pass filtering upper branch
tmp16_1 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32_1, 1)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[0]),
*signal_in, 14);
*signal_out = tmp16_1;
tmp32_1 = (WebRtc_Word32)(*signal_in++)
- (WebRtc_Word32)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[0]), tmp16_1, 12);
// All-pass filtering lower branch
tmp16_2 = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32_2, 1)
+ (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[1]),
*signal_in, 14);
*signal_out++ += tmp16_2;
tmp32_2 = (WebRtc_Word32)(*signal_in++)
- (WebRtc_Word32)WEBRTC_SPL_MUL_16_16_RSFT((kAllPassCoefsQ13[1]), tmp16_2, 12);
}
filter_state[0] = tmp32_1;
filter_state[1] = tmp32_2;
}
WebRtc_Word16 WebRtcVad_FindMinimum(VadInstT* inst,
WebRtc_Word16 x,
int n)
{
int i, j, k, II = -1, offset;
WebRtc_Word16 meanV, alpha;
WebRtc_Word32 tmp32, tmp32_1;
WebRtc_Word16 *valptr, *idxptr, *p1, *p2, *p3;
// Offset to beginning of the 16 minimum values in memory
offset = WEBRTC_SPL_LSHIFT_W16(n, 4);
// Pointer to memory for the 16 minimum values and the age of each value
idxptr = &inst->index_vector[offset];
valptr = &inst->low_value_vector[offset];
// Each value in low_value_vector is getting 1 loop older.
// Update age of each value in indexVal, and remove old values.
for (i = 0; i < 16; i++)
{
p3 = idxptr + i;
if (*p3 != 100)
{
*p3 += 1;
} else
{
p1 = valptr + i + 1;
p2 = p3 + 1;
for (j = i; j < 16; j++)
{
*(valptr + j) = *p1++;
*(idxptr + j) = *p2++;
}
*(idxptr + 15) = 101;
*(valptr + 15) = 10000;
}
}
// Check if x is smaller than any of the values in low_value_vector.
// If so, find its position.
if (x < *(valptr + 7))
{
if (x < *(valptr + 3))
{
if (x < *(valptr + 1))
{
if (x < *valptr)
{
II = 0;
} else
{
II = 1;
}
} else if (x < *(valptr + 2))
{
II = 2;
} else
{
II = 3;
}
} else if (x < *(valptr + 5))
{
if (x < *(valptr + 4))
{
II = 4;
} else
{
II = 5;
}
} else if (x < *(valptr + 6))
{
II = 6;
} else
{
II = 7;
}
} else if (x < *(valptr + 15))
{
if (x < *(valptr + 11))
{
if (x < *(valptr + 9))
{
if (x < *(valptr + 8))
{
II = 8;
} else
{
II = 9;
}
} else if (x < *(valptr + 10))
{
II = 10;
} else
{
II = 11;
}
} else if (x < *(valptr + 13))
{
if (x < *(valptr + 12))
{
II = 12;
} else
{
II = 13;
}
} else if (x < *(valptr + 14))
{
II = 14;
} else
{
II = 15;
}
}
// Put the new min value in the right position and shift bigger values up
if (II > -1)
{
for (i = 15; i > II; i--)
{
k = i - 1;
*(valptr + i) = *(valptr + k);
*(idxptr + i) = *(idxptr + k);
}
*(valptr + II) = x;
*(idxptr + II) = 1;
}
meanV = 0;
if ((inst->frame_counter) > 4)
{
j = 5;
} else
{
j = inst->frame_counter;
}
if (j > 2)
{
meanV = *(valptr + 2);
} else if (j > 0)
{
meanV = *valptr;
} else
{
meanV = 1600;
}
if (inst->frame_counter > 0)
{
if (meanV < inst->mean_value[n])
{
alpha = (WebRtc_Word16)ALPHA1; // 0.2 in Q15
} else
{
alpha = (WebRtc_Word16)ALPHA2; // 0.99 in Q15
}
} else
{
alpha = 0;
}
tmp32 = WEBRTC_SPL_MUL_16_16((alpha+1), inst->mean_value[n]);
tmp32_1 = WEBRTC_SPL_MUL_16_16(WEBRTC_SPL_WORD16_MAX - alpha, meanV);
tmp32 += tmp32_1;
tmp32 += 16384;
inst->mean_value[n] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 15);
return inst->mean_value[n];
}
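/* Illustrative sketch (not part of the original file): the smoothing step at
 * the end of WebRtcVad_FindMinimum in floating point. With alpha taken as a
 * fraction in [0, 1) (the Q15 value divided by 32768), the update above is
 * approximately an exponential moving average; the +16384 term is the usual
 * rounding constant before the >>15. The helper name is hypothetical. */
static float SmoothMinimumFloat(float old_mean, float new_min_mean, float alpha)
{
    return alpha * old_mean + (1.0f - alpha) * new_min_mean;
}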


@ -1,60 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the VAD internal calls for Downsampling and FindMinimum.
* Specific function calls are given below.
*/
#ifndef WEBRTC_VAD_SP_H_
#define WEBRTC_VAD_SP_H_
#include "vad_core.h"
/****************************************************************************
* WebRtcVad_Downsampling(...)
*
* Downsamples the signal by a factor of 2, e.g. 32->16 or 16->8 kHz
*
* Input:
* - signal_in : Input signal
* - in_length : Length of input signal in samples
*
* Input & Output:
* - filter_state : Filter state for first all-pass filters
*
* Output:
* - signal_out : Downsampled signal (of length in_length/2)
*/
void WebRtcVad_Downsampling(WebRtc_Word16* signal_in,
WebRtc_Word16* signal_out,
WebRtc_Word32* filter_state,
int in_length);
/****************************************************************************
* WebRtcVad_FindMinimum(...)
*
* Find the five lowest values of x in a 100-frame-long window. Return the mean
* value of these five values.
*
* Input:
* - feature_value : Feature value
* - channel : Channel number
*
* Input & Output:
* - inst : State information
*
* Output:
* return value : Weighted minimum value for a moving window.
*/
WebRtc_Word16 WebRtcVad_FindMinimum(VadInstT* inst, WebRtc_Word16 feature_value, int channel);
#endif // WEBRTC_VAD_SP_H_
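/* Illustrative usage sketch (not part of the original header): decimate a
 * 10 ms, 16 kHz frame to 8 kHz and feed one feature value into the minimum
 * tracker for channel 0. The frame length, the feature choice and the local
 * filter state are assumptions for illustration only; in real use the filter
 * state must persist across frames. */
static void ExampleDownsampleAndTrack(VadInstT* inst, WebRtc_Word16* frame_16k)
{
    WebRtc_Word16 frame_8k[80];               /* 160 input samples -> 80 out. */
    WebRtc_Word32 filter_state[2] = {0, 0};   /* Should live in the VAD state. */
    WebRtc_Word16 minimum;
    WebRtcVad_Downsampling(frame_16k, frame_8k, filter_state, 160);
    /* Track the minimum of some feature value (a placeholder here). */
    minimum = WebRtcVad_FindMinimum(inst, frame_8k[0], 0);
    (void)minimum;
}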


@ -1,197 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the VAD API calls. For a specific function call description,
* see webrtc_vad.h
*/
#include <stdlib.h>
#include <string.h>
#include "webrtc_vad.h"
#include "vad_core.h"
static const int kInitCheck = 42;
WebRtc_Word16 WebRtcVad_get_version(char *version, size_t size_bytes)
{
const char my_version[] = "VAD 1.2.0";
if (version == NULL)
{
return -1;
}
if (size_bytes < sizeof(my_version))
{
return -1;
}
memcpy(version, my_version, sizeof(my_version));
return 0;
}
WebRtc_Word16 WebRtcVad_AssignSize(int *size_in_bytes)
{
*size_in_bytes = sizeof(VadInstT) * 2 / sizeof(WebRtc_Word16);
return 0;
}
WebRtc_Word16 WebRtcVad_Assign(VadInst **vad_inst, void *vad_inst_addr)
{
if (vad_inst == NULL)
{
return -1;
}
if (vad_inst_addr != NULL)
{
*vad_inst = (VadInst*)vad_inst_addr;
return 0;
} else
{
return -1;
}
}
WebRtc_Word16 WebRtcVad_Create(VadInst **vad_inst)
{
VadInstT *vad_ptr = NULL;
if (vad_inst == NULL)
{
return -1;
}
*vad_inst = NULL;
vad_ptr = (VadInstT *)malloc(sizeof(VadInstT));
*vad_inst = (VadInst *)vad_ptr;
if (vad_ptr == NULL)
{
return -1;
}
vad_ptr->init_flag = 0;
return 0;
}
WebRtc_Word16 WebRtcVad_Free(VadInst *vad_inst)
{
if (vad_inst == NULL)
{
return -1;
}
free(vad_inst);
return 0;
}
WebRtc_Word16 WebRtcVad_Init(VadInst *vad_inst)
{
short mode = 0; // Default high quality
if (vad_inst == NULL)
{
return -1;
}
return WebRtcVad_InitCore((VadInstT*)vad_inst, mode);
}
WebRtc_Word16 WebRtcVad_set_mode(VadInst *vad_inst, WebRtc_Word16 mode)
{
VadInstT* vad_ptr;
if (vad_inst == NULL)
{
return -1;
}
vad_ptr = (VadInstT*)vad_inst;
if (vad_ptr->init_flag != kInitCheck)
{
return -1;
}
return WebRtcVad_set_mode_core((VadInstT*)vad_inst, mode);
}
WebRtc_Word16 WebRtcVad_Process(VadInst *vad_inst,
WebRtc_Word16 fs,
WebRtc_Word16 *speech_frame,
WebRtc_Word16 frame_length)
{
WebRtc_Word16 vad;
VadInstT* vad_ptr;
if (vad_inst == NULL)
{
return -1;
}
vad_ptr = (VadInstT*)vad_inst;
if (vad_ptr->init_flag != kInitCheck)
{
return -1;
}
if (speech_frame == NULL)
{
return -1;
}
if (fs == 32000)
{
if ((frame_length != 320) && (frame_length != 640) && (frame_length != 960))
{
return -1;
}
vad = WebRtcVad_CalcVad32khz((VadInstT*)vad_inst, speech_frame, frame_length);
} else if (fs == 16000)
{
if ((frame_length != 160) && (frame_length != 320) && (frame_length != 480))
{
return -1;
}
vad = WebRtcVad_CalcVad16khz((VadInstT*)vad_inst, speech_frame, frame_length);
} else if (fs == 8000)
{
if ((frame_length != 80) && (frame_length != 160) && (frame_length != 240))
{
return -1;
}
vad = WebRtcVad_CalcVad8khz((VadInstT*)vad_inst, speech_frame, frame_length);
} else
{
return -1; // Not a supported sampling frequency
}
if (vad > 0)
{
return 1;
} else if (vad == 0)
{
return 0;
} else
{
return -1;
}
}
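/* Illustrative usage sketch (not part of the original file): the typical call
 * sequence for the API above, on one 10 ms frame of 8 kHz audio. Error
 * handling is collapsed for brevity. */
static int ExampleRunVad(WebRtc_Word16* frame_80_samples)
{
    VadInst* vad = NULL;
    WebRtc_Word16 decision = -1;
    if (WebRtcVad_Create(&vad) == 0 && WebRtcVad_Init(vad) == 0)
    {
        WebRtcVad_set_mode(vad, 0);  /* 0 = least aggressive ("high quality"). */
        decision = WebRtcVad_Process(vad, 8000, frame_80_samples, 80);
    }
    WebRtcVad_Free(vad);
    return decision;  /* 1 = speech, 0 = non-speech, -1 = error. */
}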


@ -1,123 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This file includes the implementation of the VAD unit tests.
*/
#include <cstring>
#include "unit_test.h"
#include "webrtc_vad.h"
class VadEnvironment : public ::testing::Environment {
public:
virtual void SetUp() {
}
virtual void TearDown() {
}
};
VadTest::VadTest()
{
}
void VadTest::SetUp() {
}
void VadTest::TearDown() {
}
TEST_F(VadTest, ApiTest) {
VadInst *vad_inst;
int i, j, k;
short zeros[960];
short speech[960];
char version[32];
// Valid test cases
int fs[3] = {8000, 16000, 32000};
int nMode[4] = {0, 1, 2, 3};
int framelen[3][3] = {{80, 160, 240},
{160, 320, 480}, {320, 640, 960}} ;
int vad_counter = 0;
memset(zeros, 0, sizeof(short) * 960);
memset(speech, 1, sizeof(short) * 960);
speech[13] = 1374;
speech[73] = -3747;
// WebRtcVad_get_version()
WebRtcVad_get_version(version, sizeof(version));
//printf("API Test for %s\n", version);
// Null instance tests
EXPECT_EQ(-1, WebRtcVad_Create(NULL));
EXPECT_EQ(-1, WebRtcVad_Init(NULL));
EXPECT_EQ(-1, WebRtcVad_Assign(NULL, NULL));
EXPECT_EQ(-1, WebRtcVad_Free(NULL));
EXPECT_EQ(-1, WebRtcVad_set_mode(NULL, nMode[0]));
EXPECT_EQ(-1, WebRtcVad_Process(NULL, fs[0], speech, framelen[0][0]));
EXPECT_EQ(WebRtcVad_Create(&vad_inst), 0);
// Not initialized tests
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, fs[0], speech, framelen[0][0]));
EXPECT_EQ(-1, WebRtcVad_set_mode(vad_inst, nMode[0]));
// WebRtcVad_Init() tests
EXPECT_EQ(WebRtcVad_Init(vad_inst), 0);
// WebRtcVad_set_mode() tests
EXPECT_EQ(-1, WebRtcVad_set_mode(vad_inst, -1));
EXPECT_EQ(-1, WebRtcVad_set_mode(vad_inst, 4));
for (i = 0; i < sizeof(nMode)/sizeof(nMode[0]); i++) {
EXPECT_EQ(WebRtcVad_set_mode(vad_inst, nMode[i]), 0);
}
// WebRtcVad_Process() tests
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, fs[0], NULL, framelen[0][0]));
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, 12000, speech, framelen[0][0]));
EXPECT_EQ(-1, WebRtcVad_Process(vad_inst, fs[0], speech, framelen[1][1]));
EXPECT_EQ(WebRtcVad_Process(vad_inst, fs[0], zeros, framelen[0][0]), 0);
for (i = 0; i < sizeof(fs)/sizeof(fs[0]); i++) {
for (j = 0; j < sizeof(framelen[0])/sizeof(framelen[0][0]); j++) {
for (k = 0; k < sizeof(nMode)/sizeof(nMode[0]); k++) {
EXPECT_EQ(WebRtcVad_set_mode(vad_inst, nMode[k]), 0);
// printf("%d\n", WebRtcVad_Process(vad_inst, fs[i], speech, framelen[i][j]));
if (vad_counter < 9)
{
EXPECT_EQ(WebRtcVad_Process(vad_inst, fs[i], speech, framelen[i][j]), 1);
} else
{
EXPECT_EQ(WebRtcVad_Process(vad_inst, fs[i], speech, framelen[i][j]), 0);
}
vad_counter++;
}
}
}
EXPECT_EQ(0, WebRtcVad_Free(vad_inst));
}
int main(int argc, char** argv) {
::testing::InitGoogleTest(&argc, argv);
VadEnvironment* env = new VadEnvironment;
::testing::AddGlobalTestEnvironment(env);
return RUN_ALL_TESTS();
}


@ -1,28 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* This header file includes the declaration of the VAD unit test.
*/
#ifndef WEBRTC_VAD_UNIT_TEST_H_
#define WEBRTC_VAD_UNIT_TEST_H_
#include <gtest/gtest.h>
class VadTest : public ::testing::Test {
protected:
VadTest();
virtual void SetUp();
virtual void TearDown();
};
#endif // WEBRTC_VAD_UNIT_TEST_H_


@ -1,610 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_COMMON_TYPES_H
#define WEBRTC_COMMON_TYPES_H
#include "typedefs.h"
#ifdef WEBRTC_EXPORT
#define WEBRTC_DLLEXPORT _declspec(dllexport)
#elif WEBRTC_DLL
#define WEBRTC_DLLEXPORT _declspec(dllimport)
#else
#define WEBRTC_DLLEXPORT
#endif
#ifndef NULL
#define NULL 0
#endif
namespace webrtc {
class InStream
{
public:
virtual int Read(void *buf,int len) = 0;
virtual int Rewind() {return -1;}
virtual ~InStream() {}
protected:
InStream() {}
};
class OutStream
{
public:
virtual bool Write(const void *buf,int len) = 0;
virtual int Rewind() {return -1;}
virtual ~OutStream() {}
protected:
OutStream() {}
};
enum TraceModule
{
// not a module, triggered from the engine code
kTraceVoice = 0x0001,
// not a module, triggered from the engine code
kTraceVideo = 0x0002,
// not a module, triggered from the utility code
kTraceUtility = 0x0003,
kTraceRtpRtcp = 0x0004,
kTraceTransport = 0x0005,
kTraceSrtp = 0x0006,
kTraceAudioCoding = 0x0007,
kTraceAudioMixerServer = 0x0008,
kTraceAudioMixerClient = 0x0009,
kTraceFile = 0x000a,
kTraceAudioProcessing = 0x000b,
kTraceVideoCoding = 0x0010,
kTraceVideoMixer = 0x0011,
kTraceAudioDevice = 0x0012,
kTraceVideoRenderer = 0x0014,
kTraceVideoCapture = 0x0015,
kTraceVideoPreocessing = 0x0016
};
enum TraceLevel
{
kTraceNone = 0x0000, // no trace
kTraceStateInfo = 0x0001,
kTraceWarning = 0x0002,
kTraceError = 0x0004,
kTraceCritical = 0x0008,
kTraceApiCall = 0x0010,
kTraceDefault = 0x00ff,
kTraceModuleCall = 0x0020,
kTraceMemory = 0x0100, // memory info
kTraceTimer = 0x0200, // timing info
kTraceStream = 0x0400, // "continuous" stream of data
// used for debug purposes
kTraceDebug = 0x0800, // debug
kTraceInfo = 0x1000, // debug info
kTraceAll = 0xffff
};
// External Trace API
class TraceCallback
{
public:
virtual void Print(const TraceLevel level,
const char *traceString,
const int length) = 0;
protected:
virtual ~TraceCallback() {}
TraceCallback() {}
};
enum FileFormats
{
kFileFormatWavFile = 1,
kFileFormatCompressedFile = 2,
kFileFormatAviFile = 3,
kFileFormatPreencodedFile = 4,
kFileFormatPcm16kHzFile = 7,
kFileFormatPcm8kHzFile = 8,
kFileFormatPcm32kHzFile = 9
};
enum ProcessingTypes
{
kPlaybackPerChannel = 0,
kPlaybackAllChannelsMixed,
kRecordingPerChannel,
kRecordingAllChannelsMixed
};
// Encryption enums
enum CipherTypes
{
kCipherNull = 0,
kCipherAes128CounterMode = 1
};
enum AuthenticationTypes
{
kAuthNull = 0,
kAuthHmacSha1 = 3
};
enum SecurityLevels
{
kNoProtection = 0,
kEncryption = 1,
kAuthentication = 2,
kEncryptionAndAuthentication = 3
};
class Encryption
{
public:
virtual void encrypt(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
virtual void decrypt(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
virtual void encrypt_rtcp(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
virtual void decrypt_rtcp(
int channel_no,
unsigned char* in_data,
unsigned char* out_data,
int bytes_in,
int* bytes_out) = 0;
protected:
virtual ~Encryption() {}
Encryption() {}
};
// External transport callback interface
class Transport
{
public:
virtual int SendPacket(int channel, const void *data, int len) = 0;
virtual int SendRTCPPacket(int channel, const void *data, int len) = 0;
protected:
virtual ~Transport() {}
Transport() {}
};
// ==================================================================
// Voice specific types
// ==================================================================
// Each codec supported can be described by this structure.
struct CodecInst
{
int pltype;
char plname[32];
int plfreq;
int pacsize;
int channels;
int rate;
};
enum FrameType
{
kFrameEmpty = 0,
kAudioFrameSpeech = 1,
kAudioFrameCN = 2,
kVideoFrameKey = 3, // independent frame
kVideoFrameDelta = 4, // depends on the previous frame
kVideoFrameGolden = 5, // depends on an old known previous frame
kVideoFrameAltRef = 6
};
// RTP
enum {kRtpCsrcSize = 15}; // RFC 3550 page 13
enum RTPDirections
{
kRtpIncoming = 0,
kRtpOutgoing
};
enum PayloadFrequencies
{
kFreq8000Hz = 8000,
kFreq16000Hz = 16000,
kFreq32000Hz = 32000
};
enum VadModes // degree of bandwidth reduction
{
kVadConventional = 0, // lowest reduction
kVadAggressiveLow,
kVadAggressiveMid,
kVadAggressiveHigh // highest reduction
};
struct NetworkStatistics // NETEQ statistics
{
// current jitter buffer size in ms
WebRtc_UWord16 currentBufferSize;
// preferred (optimal) buffer size in ms
WebRtc_UWord16 preferredBufferSize;
// loss rate (network + late) in percent (in Q14)
WebRtc_UWord16 currentPacketLossRate;
// late loss rate in percent (in Q14)
WebRtc_UWord16 currentDiscardRate;
// fraction (of original stream) of synthesized speech inserted through
// expansion (in Q14)
WebRtc_UWord16 currentExpandRate;
// fraction of synthesized speech inserted through pre-emptive expansion
// (in Q14)
WebRtc_UWord16 currentPreemptiveRate;
// fraction of data removed through acceleration (in Q14)
WebRtc_UWord16 currentAccelerateRate;
};
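// Illustrative sketch (not part of the original header): reading the Q14
// fields above. Q14 means the value is scaled by 2^14 = 16384, so dividing
// by 16384.0 recovers the underlying fraction; multiplying by 100 then gives
// a percentage, assuming the field represents a fraction of the stream.
// The helper name is hypothetical.
inline float Q14ToPercent(WebRtc_UWord16 q14_value)
{
    return 100.0f * (float)q14_value / 16384.0f;
}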
struct JitterStatistics
{
// smallest Jitter Buffer size during call in ms
WebRtc_UWord32 jbMinSize;
// largest Jitter Buffer size during call in ms
WebRtc_UWord32 jbMaxSize;
// the average JB size, measured over time - ms
WebRtc_UWord32 jbAvgSize;
// number of times the Jitter Buffer changed (using Accelerate or
// Pre-emptive Expand)
WebRtc_UWord32 jbChangeCount;
// amount (in ms) of audio data received late
WebRtc_UWord32 lateLossMs;
// milliseconds removed to reduce jitter buffer size
WebRtc_UWord32 accelerateMs;
// milliseconds discarded through buffer flushing
WebRtc_UWord32 flushedMs;
// milliseconds of generated silence
WebRtc_UWord32 generatedSilentMs;
// milliseconds of synthetic audio data (non-background noise)
WebRtc_UWord32 interpolatedVoiceMs;
// milliseconds of synthetic audio data (background noise level)
WebRtc_UWord32 interpolatedSilentMs;
// count of tiny expansions in output audio
WebRtc_UWord32 countExpandMoreThan120ms;
// count of small expansions in output audio
WebRtc_UWord32 countExpandMoreThan250ms;
// count of medium expansions in output audio
WebRtc_UWord32 countExpandMoreThan500ms;
// count of long expansions in output audio
WebRtc_UWord32 countExpandMoreThan2000ms;
// duration of longest audio drop-out
WebRtc_UWord32 longestExpandDurationMs;
// count of times we got small network outage (inter-arrival time in
// [500, 1000) ms)
WebRtc_UWord32 countIAT500ms;
// count of times we got medium network outage (inter-arrival time in
// [1000, 2000) ms)
WebRtc_UWord32 countIAT1000ms;
// count of times we got large network outage (inter-arrival time >=
// 2000 ms)
WebRtc_UWord32 countIAT2000ms;
// longest packet inter-arrival time in ms
WebRtc_UWord32 longestIATms;
// min time incoming Packet "waited" to be played
WebRtc_UWord32 minPacketDelayMs;
// max time incoming Packet "waited" to be played
WebRtc_UWord32 maxPacketDelayMs;
// avg time incoming Packet "waited" to be played
WebRtc_UWord32 avgPacketDelayMs;
};
typedef struct
{
int min; // minimum
int max; // maximum
int average; // average
} StatVal;
typedef struct // All levels are reported in dBm0
{
StatVal speech_rx; // long-term speech levels on receiving side
StatVal speech_tx; // long-term speech levels on transmitting side
StatVal noise_rx; // long-term noise/silence levels on receiving side
StatVal noise_tx; // long-term noise/silence levels on transmitting side
} LevelStatistics;
typedef struct // All levels are reported in dB
{
StatVal erl; // Echo Return Loss
StatVal erle; // Echo Return Loss Enhancement
StatVal rerl; // RERL = ERL + ERLE
// Echo suppression inside EC at the point just before its NLP
StatVal a_nlp;
} EchoStatistics;
enum TelephoneEventDetectionMethods
{
kInBand = 0,
kOutOfBand = 1,
kInAndOutOfBand = 2
};
enum NsModes // type of Noise Suppression
{
kNsUnchanged = 0, // previously set mode
kNsDefault, // platform default
kNsConference, // conferencing default
kNsLowSuppression, // lowest suppression
kNsModerateSuppression,
kNsHighSuppression,
kNsVeryHighSuppression, // highest suppression
};
enum AgcModes // type of Automatic Gain Control
{
kAgcUnchanged = 0, // previously set mode
kAgcDefault, // platform default
// adaptive mode for use when analog volume control exists (e.g. for
// PC softphone)
kAgcAdaptiveAnalog,
// scaling takes place in the digital domain (e.g. for conference servers
// and embedded devices)
kAgcAdaptiveDigital,
// can be used on embedded devices where the capture signal level
// is predictable
kAgcFixedDigital
};
// EC modes
enum EcModes // type of Echo Control
{
kEcUnchanged = 0, // previously set mode
kEcDefault, // platform default
kEcConference, // conferencing default (aggressive AEC)
kEcAec, // Acoustic Echo Cancellation
kEcAecm, // AEC mobile
};
// AECM modes
enum AecmModes // mode of AECM
{
kAecmQuietEarpieceOrHeadset = 0,
// Quiet earpiece or headset use
kAecmEarpiece, // most earpiece use
kAecmLoudEarpiece, // Loud earpiece or quiet speakerphone use
kAecmSpeakerphone, // most speakerphone use (default)
kAecmLoudSpeakerphone // Loud speakerphone
};
// AGC configuration
typedef struct
{
unsigned short targetLeveldBOv;
unsigned short digitalCompressionGaindB;
bool limiterEnable;
} AgcConfig; // AGC configuration parameters
enum StereoChannel
{
kStereoLeft = 0,
kStereoRight,
kStereoBoth
};
// Audio device layers
enum AudioLayers
{
kAudioPlatformDefault = 0,
kAudioWindowsWave = 1,
kAudioWindowsCore = 2,
kAudioLinuxAlsa = 3,
kAudioLinuxPulse = 4
};
enum NetEqModes // NetEQ playout configurations
{
// Optimized trade-off between low delay and jitter robustness for two-way
// communication.
kNetEqDefault = 0,
// Improved jitter robustness at the cost of increased delay. Can be
// used in one-way communication.
kNetEqStreaming = 1,
// Optimized for decodability of fax signals rather than for perceived audio
// quality.
kNetEqFax = 2,
};
enum NetEqBgnModes // NetEQ Background Noise (BGN) configurations
{
// BGN is always on and will be generated when the incoming RTP stream
// stops (default).
kBgnOn = 0,
// The BGN is faded to zero (complete silence) after a few seconds.
kBgnFade = 1,
// BGN is not used at all. Silence is produced after speech extrapolation
// has faded.
kBgnOff = 2,
};
enum OnHoldModes // On Hold direction
{
kHoldSendAndPlay = 0, // Put both sending and playing in on-hold state.
kHoldSendOnly, // Put only sending in on-hold state.
kHoldPlayOnly // Put only playing in on-hold state.
};
enum AmrMode
{
kRfc3267BwEfficient = 0,
kRfc3267OctetAligned = 1,
kRfc3267FileStorage = 2,
};
// ==================================================================
// Video specific types
// ==================================================================
// Raw video types
enum RawVideoType
{
kVideoI420 = 0,
kVideoYV12 = 1,
kVideoYUY2 = 2,
kVideoUYVY = 3,
kVideoIYUV = 4,
kVideoARGB = 5,
kVideoRGB24 = 6,
kVideoRGB565 = 7,
kVideoARGB4444 = 8,
kVideoARGB1555 = 9,
kVideoMJPEG = 10,
kVideoNV12 = 11,
kVideoNV21 = 12,
kVideoUnknown = 99
};
// Video codec
enum { kConfigParameterSize = 128};
enum { kPayloadNameSize = 32};
enum { kMaxSimulcastStreams = 4};
// H.263 specific
struct VideoCodecH263
{
char quality;
};
// H.264 specific
enum H264Packetization
{
kH264SingleMode = 0,
kH264NonInterleavedMode = 1
};
enum VideoCodecComplexity
{
kComplexityNormal = 0,
kComplexityHigh = 1,
kComplexityHigher = 2,
kComplexityMax = 3
};
enum VideoCodecProfile
{
kProfileBase = 0x00,
kProfileMain = 0x01
};
struct VideoCodecH264
{
H264Packetization packetization;
VideoCodecComplexity complexity;
VideoCodecProfile profile;
char level;
char quality;
bool useFMO;
unsigned char configParameters[kConfigParameterSize];
unsigned char configParametersSize;
};
// VP8 specific
struct VideoCodecVP8
{
bool pictureLossIndicationOn;
bool feedbackModeOn;
VideoCodecComplexity complexity;
unsigned char numberOfTemporalLayers;
};
// MPEG-4 specific
struct VideoCodecMPEG4
{
unsigned char configParameters[kConfigParameterSize];
unsigned char configParametersSize;
char level;
};
// Unknown specific
struct VideoCodecGeneric
{
};
// Video codec types
enum VideoCodecType
{
kVideoCodecH263,
kVideoCodecH264,
kVideoCodecVP8,
kVideoCodecMPEG4,
kVideoCodecI420,
kVideoCodecRED,
kVideoCodecULPFEC,
kVideoCodecUnknown
};
union VideoCodecUnion
{
VideoCodecH263 H263;
VideoCodecH264 H264;
VideoCodecVP8 VP8;
VideoCodecMPEG4 MPEG4;
VideoCodecGeneric Generic;
};
/*
* Simulcast is when the same stream is encoded multiple times with different
* settings such as resolution.
*/
struct SimulcastStream
{
unsigned short width;
unsigned short height;
unsigned char numberOfTemporalLayers;
unsigned int maxBitrate;
unsigned int qpMax; // minimum quality
};
// Common video codec properties
struct VideoCodec
{
VideoCodecType codecType;
char plName[kPayloadNameSize];
unsigned char plType;
unsigned short width;
unsigned short height;
unsigned int startBitrate;
unsigned int maxBitrate;
unsigned int minBitrate;
unsigned char maxFramerate;
VideoCodecUnion codecSpecific;
unsigned int qpMax;
unsigned char numberOfSimulcastStreams;
SimulcastStream simulcastStream[kMaxSimulcastStreams];
};
} // namespace webrtc
#endif // WEBRTC_COMMON_TYPES_H


@ -1 +0,0 @@
SUBDIRS = audio_processing


@ -1,59 +0,0 @@
SUBDIRS = utility ns aec aecm agc
lib_LTLIBRARIES = libwebrtc_audio_processing.la
if NS_FIXED
COMMON_CXXFLAGS += -DWEBRTC_NS_FIXED=1
NS_LIB = libns_fix
else
COMMON_CXXFLAGS += -DWEBRTC_NS_FLOAT=1
NS_LIB = libns
endif
webrtcincludedir = $(includedir)/webrtc_audio_processing
webrtcinclude_HEADERS = $(top_srcdir)/src/typedefs.h \
$(top_srcdir)/src/modules/interface/module.h \
interface/audio_processing.h \
$(top_srcdir)/src/common_types.h \
$(top_srcdir)/src/modules/interface/module_common_types.h
libwebrtc_audio_processing_la_SOURCES = interface/audio_processing.h \
audio_buffer.cc \
audio_buffer.h \
audio_processing_impl.cc \
audio_processing_impl.h \
echo_cancellation_impl.cc \
echo_cancellation_impl.h \
echo_control_mobile_impl.cc \
echo_control_mobile_impl.h \
gain_control_impl.cc \
gain_control_impl.h \
high_pass_filter_impl.cc \
high_pass_filter_impl.h \
level_estimator_impl.cc \
level_estimator_impl.h \
noise_suppression_impl.cc \
noise_suppression_impl.h \
splitting_filter.cc \
splitting_filter.h \
processing_component.cc \
processing_component.h \
voice_detection_impl.cc \
voice_detection_impl.h
libwebrtc_audio_processing_la_CXXFLAGS = $(AM_CXXFLAGS) $(COMMON_CXXFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/common_audio/vad/main/interface \
-I$(top_srcdir)/src/system_wrappers/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility \
-I$(top_srcdir)/src/modules/audio_processing/ns/interface \
-I$(top_srcdir)/src/modules/audio_processing/aec/interface \
-I$(top_srcdir)/src/modules/audio_processing/aecm/interface \
-I$(top_srcdir)/src/modules/audio_processing/agc/interface
libwebrtc_audio_processing_la_LIBADD = $(top_builddir)/src/system_wrappers/libsystem_wrappers.la \
$(top_builddir)/src/common_audio/signal_processing_library/libspl.la \
$(top_builddir)/src/common_audio/vad/libvad.la \
$(top_builddir)/src/modules/audio_processing/utility/libapm_util.la \
$(top_builddir)/src/modules/audio_processing/ns/$(NS_LIB).la \
$(top_builddir)/src/modules/audio_processing/aec/libaec.la \
$(top_builddir)/src/modules/audio_processing/aecm/libaecm.la \
$(top_builddir)/src/modules/audio_processing/agc/libagc.la
libwebrtc_audio_processing_la_LDFLAGS = $(AM_LDFLAGS) -version-info $(LIBWEBRTC_AUDIO_PROCESSING_VERSION_INFO)


@ -1,2 +0,0 @@
andrew@webrtc.org
bjornv@webrtc.org


@ -1,16 +0,0 @@
noinst_LTLIBRARIES = libaec.la
libaec_la_SOURCES = interface/echo_cancellation.h \
echo_cancellation.c \
aec_core.h \
aec_core.c \
aec_core_sse2.c \
aec_rdft.h \
aec_rdft.c \
aec_rdft_sse2.c \
resampler.h \
resampler.c
libaec_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/system_wrappers/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility


@ -1,40 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'aec',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
'apm_util'
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/echo_cancellation.h',
'echo_cancellation.c',
'aec_core.h',
'aec_core.c',
'aec_core_sse2.c',
'aec_rdft.h',
'aec_rdft.c',
'aec_rdft_sse2.c',
'resampler.h',
'resampler.c',
],
},
],
}

File diff suppressed because it is too large


@ -1,181 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* Specifies the interface for the AEC core.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_CORE_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_CORE_H_
#include <stdio.h>
#include "signal_processing_library.h"
#include "typedefs.h"
//#define AEC_DEBUG // for recording files
#define FRAME_LEN 80
#define PART_LEN 64 // Length of partition
#define PART_LEN1 (PART_LEN + 1) // Unique fft coefficients
#define PART_LEN2 (PART_LEN * 2) // Length of partition * 2
#define NR_PART 12 // Number of partitions
#define FILT_LEN (PART_LEN * NR_PART) // Filter length
#define FILT_LEN2 (FILT_LEN * 2) // Double filter length
#define FAR_BUF_LEN (FILT_LEN2 * 2)
#define PREF_BAND_SIZE 24
#define BLOCKL_MAX FRAME_LEN
// Maximum delay in fixed point delay estimator, used for logging
enum {kMaxDelay = 100};
typedef float complex_t[2];
// For performance reasons, some arrays of complex numbers are replaced by twice
// as long arrays of float, all the real parts followed by all the imaginary
// ones (complex_t[SIZE] -> float[2][SIZE]). This allows SIMD optimizations and
// is better than two arrays (one for the real parts and one for the imaginary
// parts) as this other way would require two pointers instead of one and cause
// extra register spilling. This also allows the offsets to be calculated at
// compile time.
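// Illustrative sketch (not part of the original header): converting an
// interleaved complex buffer into the split layout described above (all the
// real parts followed by all the imaginary parts), which is what lets the
// SIMD kernels do plain four-wide loads over each half. The helper name is
// hypothetical.
static __inline void SplitComplexExample(const complex_t* in,
                                         float out[2][PART_LEN1]) {
    int i;
    for (i = 0; i < PART_LEN1; i++) {
        out[0][i] = in[i][0];  // real parts
        out[1][i] = in[i][1];  // imaginary parts
    }
}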
// Metrics
enum {offsetLevel = -100};
typedef struct {
float sfrsum;
int sfrcounter;
float framelevel;
float frsum;
int frcounter;
float minlevel;
float averagelevel;
} power_level_t;
typedef struct {
float instant;
float average;
float min;
float max;
float sum;
float hisum;
float himean;
int counter;
int hicounter;
} stats_t;
typedef struct {
int farBufWritePos, farBufReadPos;
int knownDelay;
int inSamples, outSamples;
int delayEstCtr;
void *farFrBuf, *nearFrBuf, *outFrBuf;
void *nearFrBufH;
void *outFrBufH;
float xBuf[PART_LEN2]; // farend
float dBuf[PART_LEN2]; // nearend
float eBuf[PART_LEN2]; // error
float dBufH[PART_LEN2]; // nearend
float xPow[PART_LEN1];
float dPow[PART_LEN1];
float dMinPow[PART_LEN1];
float dInitMinPow[PART_LEN1];
float *noisePow;
float xfBuf[2][NR_PART * PART_LEN1]; // farend fft buffer
float wfBuf[2][NR_PART * PART_LEN1]; // filter fft
complex_t sde[PART_LEN1]; // cross-psd of nearend and error
complex_t sxd[PART_LEN1]; // cross-psd of farend and nearend
complex_t xfwBuf[NR_PART * PART_LEN1]; // farend windowed fft buffer
float sx[PART_LEN1], sd[PART_LEN1], se[PART_LEN1]; // far, near and error psd
float hNs[PART_LEN1];
float hNlFbMin, hNlFbLocalMin;
float hNlXdAvgMin;
int hNlNewMin, hNlMinCtr;
float overDrive, overDriveSm;
float targetSupp, minOverDrive;
float outBuf[PART_LEN];
int delayIdx;
short stNearState, echoState;
short divergeState;
int xfBufBlockPos;
short farBuf[FILT_LEN2 * 2];
short mult; // sampling frequency multiple
int sampFreq;
WebRtc_UWord32 seed;
float mu; // stepsize
float errThresh; // error threshold
int noiseEstCtr;
power_level_t farlevel;
power_level_t nearlevel;
power_level_t linoutlevel;
power_level_t nlpoutlevel;
int metricsMode;
int stateCounter;
stats_t erl;
stats_t erle;
stats_t aNlp;
stats_t rerl;
// Quantities to control H band scaling for SWB input
int freq_avg_ic; //initial bin for averaging nlp gain
int flag_Hband_cn; //for comfort noise
float cn_scale_Hband; //scale for comfort noise in H band
int delay_histogram[kMaxDelay];
int delay_logging_enabled;
void* delay_estimator;
#ifdef AEC_DEBUG
FILE *farFile;
FILE *nearFile;
FILE *outFile;
FILE *outLpFile;
#endif
} aec_t;
typedef void (*WebRtcAec_FilterFar_t)(aec_t *aec, float yf[2][PART_LEN1]);
extern WebRtcAec_FilterFar_t WebRtcAec_FilterFar;
typedef void (*WebRtcAec_ScaleErrorSignal_t)(aec_t *aec, float ef[2][PART_LEN1]);
extern WebRtcAec_ScaleErrorSignal_t WebRtcAec_ScaleErrorSignal;
typedef void (*WebRtcAec_FilterAdaptation_t)
(aec_t *aec, float *fft, float ef[2][PART_LEN1]);
extern WebRtcAec_FilterAdaptation_t WebRtcAec_FilterAdaptation;
typedef void (*WebRtcAec_OverdriveAndSuppress_t)
(aec_t *aec, float hNl[PART_LEN1], const float hNlFb, float efw[2][PART_LEN1]);
extern WebRtcAec_OverdriveAndSuppress_t WebRtcAec_OverdriveAndSuppress;
int WebRtcAec_CreateAec(aec_t **aec);
int WebRtcAec_FreeAec(aec_t *aec);
int WebRtcAec_InitAec(aec_t *aec, int sampFreq);
void WebRtcAec_InitAec_SSE2(void);
void WebRtcAec_InitMetrics(aec_t *aec);
void WebRtcAec_ProcessFrame(aec_t *aec, const short *farend,
const short *nearend, const short *nearendH,
short *out, short *outH,
int knownDelay);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_CORE_H_
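/* Illustrative usage sketch (not part of the original header): the function
 * pointers above implement a simple dispatch scheme. Generic kernels are
 * presumably installed by the core implementation in aec_core.c (not shown
 * here), and on SSE2-capable builds WebRtcAec_InitAec_SSE2() re-points them
 * at the vectorized versions. Error handling is omitted. */
static int ExampleCreateAec(aec_t** aec)
{
    if (WebRtcAec_CreateAec(aec) != 0) return -1;
    if (WebRtcAec_InitAec(*aec, 16000) != 0) return -1;
#if defined(WEBRTC_USE_SSE2)
    WebRtcAec_InitAec_SSE2();  /* swaps in the SSE2 kernels */
#endif
    return 0;
}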


@ -1,417 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* The core AEC algorithm, SSE2 version of speed-critical functions.
*/
#include "typedefs.h"
#if defined(WEBRTC_USE_SSE2)
#include <emmintrin.h>
#include <math.h>
#include "aec_core.h"
#include "aec_rdft.h"
__inline static float MulRe(float aRe, float aIm, float bRe, float bIm)
{
return aRe * bRe - aIm * bIm;
}
__inline static float MulIm(float aRe, float aIm, float bRe, float bIm)
{
return aRe * bIm + aIm * bRe;
}
static void FilterFarSSE2(aec_t *aec, float yf[2][PART_LEN1])
{
int i;
for (i = 0; i < NR_PART; i++) {
int j;
int xPos = (i + aec->xfBufBlockPos) * PART_LEN1;
int pos = i * PART_LEN1;
// Check for wrap
if (i + aec->xfBufBlockPos >= NR_PART) {
xPos -= NR_PART*(PART_LEN1);
}
// vectorized code (four at once)
for (j = 0; j + 3 < PART_LEN1; j += 4) {
const __m128 xfBuf_re = _mm_loadu_ps(&aec->xfBuf[0][xPos + j]);
const __m128 xfBuf_im = _mm_loadu_ps(&aec->xfBuf[1][xPos + j]);
const __m128 wfBuf_re = _mm_loadu_ps(&aec->wfBuf[0][pos + j]);
const __m128 wfBuf_im = _mm_loadu_ps(&aec->wfBuf[1][pos + j]);
const __m128 yf_re = _mm_loadu_ps(&yf[0][j]);
const __m128 yf_im = _mm_loadu_ps(&yf[1][j]);
const __m128 a = _mm_mul_ps(xfBuf_re, wfBuf_re);
const __m128 b = _mm_mul_ps(xfBuf_im, wfBuf_im);
const __m128 c = _mm_mul_ps(xfBuf_re, wfBuf_im);
const __m128 d = _mm_mul_ps(xfBuf_im, wfBuf_re);
const __m128 e = _mm_sub_ps(a, b);
const __m128 f = _mm_add_ps(c, d);
const __m128 g = _mm_add_ps(yf_re, e);
const __m128 h = _mm_add_ps(yf_im, f);
_mm_storeu_ps(&yf[0][j], g);
_mm_storeu_ps(&yf[1][j], h);
}
// scalar code for the remaining items.
for (; j < PART_LEN1; j++) {
yf[0][j] += MulRe(aec->xfBuf[0][xPos + j], aec->xfBuf[1][xPos + j],
aec->wfBuf[0][ pos + j], aec->wfBuf[1][ pos + j]);
yf[1][j] += MulIm(aec->xfBuf[0][xPos + j], aec->xfBuf[1][xPos + j],
aec->wfBuf[0][ pos + j], aec->wfBuf[1][ pos + j]);
}
}
}
static void ScaleErrorSignalSSE2(aec_t *aec, float ef[2][PART_LEN1])
{
const __m128 k1e_10f = _mm_set1_ps(1e-10f);
const __m128 kThresh = _mm_set1_ps(aec->errThresh);
const __m128 kMu = _mm_set1_ps(aec->mu);
int i;
// vectorized code (four at once)
for (i = 0; i + 3 < PART_LEN1; i += 4) {
const __m128 xPow = _mm_loadu_ps(&aec->xPow[i]);
const __m128 ef_re_base = _mm_loadu_ps(&ef[0][i]);
const __m128 ef_im_base = _mm_loadu_ps(&ef[1][i]);
const __m128 xPowPlus = _mm_add_ps(xPow, k1e_10f);
__m128 ef_re = _mm_div_ps(ef_re_base, xPowPlus);
__m128 ef_im = _mm_div_ps(ef_im_base, xPowPlus);
const __m128 ef_re2 = _mm_mul_ps(ef_re, ef_re);
const __m128 ef_im2 = _mm_mul_ps(ef_im, ef_im);
const __m128 ef_sum2 = _mm_add_ps(ef_re2, ef_im2);
const __m128 absEf = _mm_sqrt_ps(ef_sum2);
const __m128 bigger = _mm_cmpgt_ps(absEf, kThresh);
__m128 absEfPlus = _mm_add_ps(absEf, k1e_10f);
const __m128 absEfInv = _mm_div_ps(kThresh, absEfPlus);
__m128 ef_re_if = _mm_mul_ps(ef_re, absEfInv);
__m128 ef_im_if = _mm_mul_ps(ef_im, absEfInv);
ef_re_if = _mm_and_ps(bigger, ef_re_if);
ef_im_if = _mm_and_ps(bigger, ef_im_if);
ef_re = _mm_andnot_ps(bigger, ef_re);
ef_im = _mm_andnot_ps(bigger, ef_im);
ef_re = _mm_or_ps(ef_re, ef_re_if);
ef_im = _mm_or_ps(ef_im, ef_im_if);
ef_re = _mm_mul_ps(ef_re, kMu);
ef_im = _mm_mul_ps(ef_im, kMu);
_mm_storeu_ps(&ef[0][i], ef_re);
_mm_storeu_ps(&ef[1][i], ef_im);
}
// scalar code for the remaining items.
for (; i < (PART_LEN1); i++) {
float absEf;
ef[0][i] /= (aec->xPow[i] + 1e-10f);
ef[1][i] /= (aec->xPow[i] + 1e-10f);
absEf = sqrtf(ef[0][i] * ef[0][i] + ef[1][i] * ef[1][i]);
if (absEf > aec->errThresh) {
absEf = aec->errThresh / (absEf + 1e-10f);
ef[0][i] *= absEf;
ef[1][i] *= absEf;
}
// Stepsize factor
ef[0][i] *= aec->mu;
ef[1][i] *= aec->mu;
}
}
static void FilterAdaptationSSE2(aec_t *aec, float *fft, float ef[2][PART_LEN1]) {
int i, j;
for (i = 0; i < NR_PART; i++) {
int xPos = (i + aec->xfBufBlockPos)*(PART_LEN1);
int pos = i * PART_LEN1;
// Check for wrap
if (i + aec->xfBufBlockPos >= NR_PART) {
xPos -= NR_PART * PART_LEN1;
}
// Process the whole array...
for (j = 0; j < PART_LEN; j+= 4) {
// Load xfBuf and ef.
const __m128 xfBuf_re = _mm_loadu_ps(&aec->xfBuf[0][xPos + j]);
const __m128 xfBuf_im = _mm_loadu_ps(&aec->xfBuf[1][xPos + j]);
const __m128 ef_re = _mm_loadu_ps(&ef[0][j]);
const __m128 ef_im = _mm_loadu_ps(&ef[1][j]);
// Calculate the product of conjugate(xfBuf) by ef.
// re(conjugate(a) * b) = aRe * bRe + aIm * bIm
// im(conjugate(a) * b)= aRe * bIm - aIm * bRe
const __m128 a = _mm_mul_ps(xfBuf_re, ef_re);
const __m128 b = _mm_mul_ps(xfBuf_im, ef_im);
const __m128 c = _mm_mul_ps(xfBuf_re, ef_im);
const __m128 d = _mm_mul_ps(xfBuf_im, ef_re);
const __m128 e = _mm_add_ps(a, b);
const __m128 f = _mm_sub_ps(c, d);
// Interleave real and imaginary parts.
const __m128 g = _mm_unpacklo_ps(e, f);
const __m128 h = _mm_unpackhi_ps(e, f);
// Store
_mm_storeu_ps(&fft[2*j + 0], g);
_mm_storeu_ps(&fft[2*j + 4], h);
}
// ... and fixup the first imaginary entry.
fft[1] = MulRe(aec->xfBuf[0][xPos + PART_LEN],
-aec->xfBuf[1][xPos + PART_LEN],
ef[0][PART_LEN], ef[1][PART_LEN]);
aec_rdft_inverse_128(fft);
memset(fft + PART_LEN, 0, sizeof(float)*PART_LEN);
// fft scaling
{
float scale = 2.0f / PART_LEN2;
const __m128 scale_ps = _mm_load_ps1(&scale);
for (j = 0; j < PART_LEN; j+=4) {
const __m128 fft_ps = _mm_loadu_ps(&fft[j]);
const __m128 fft_scale = _mm_mul_ps(fft_ps, scale_ps);
_mm_storeu_ps(&fft[j], fft_scale);
}
}
aec_rdft_forward_128(fft);
{
float wt1 = aec->wfBuf[1][pos];
aec->wfBuf[0][pos + PART_LEN] += fft[1];
for (j = 0; j < PART_LEN; j+= 4) {
__m128 wtBuf_re = _mm_loadu_ps(&aec->wfBuf[0][pos + j]);
__m128 wtBuf_im = _mm_loadu_ps(&aec->wfBuf[1][pos + j]);
const __m128 fft0 = _mm_loadu_ps(&fft[2 * j + 0]);
const __m128 fft4 = _mm_loadu_ps(&fft[2 * j + 4]);
const __m128 fft_re = _mm_shuffle_ps(fft0, fft4, _MM_SHUFFLE(2, 0, 2 ,0));
const __m128 fft_im = _mm_shuffle_ps(fft0, fft4, _MM_SHUFFLE(3, 1, 3 ,1));
wtBuf_re = _mm_add_ps(wtBuf_re, fft_re);
wtBuf_im = _mm_add_ps(wtBuf_im, fft_im);
_mm_storeu_ps(&aec->wfBuf[0][pos + j], wtBuf_re);
_mm_storeu_ps(&aec->wfBuf[1][pos + j], wtBuf_im);
}
aec->wfBuf[1][pos] = wt1;
}
}
}
static __m128 mm_pow_ps(__m128 a, __m128 b)
{
// a^b = exp2(b * log2(a))
// exp2(x) and log2(x) are calculated using polynomial approximations.
__m128 log2_a, b_log2_a, a_exp_b;
// Calculate log2(x), x = a.
{
// To calculate log2(x), we decompose x like this:
// x = y * 2^n
// n is an integer
// y is in the [1.0, 2.0) range
//
// log2(x) = log2(y) + n
// n can be evaluated by playing with float representation.
// log2(y) in a small range can be approximated, this code uses an order
// five polynomial approximation. The coefficients have been
// estimated with the Remez algorithm and the resulting
// polynomial has a maximum relative error of 0.00086%.
// Compute n.
// This is done by masking the exponent, shifting it into the top bit of
// the mantissa, putting eight into the biased exponent (to shift/
// compensate for the fact that the exponent has been shifted into the top/
// fractional part), and finally getting rid of the implicit leading one
// from the mantissa by subtracting it out.
static const ALIGN16_BEG int float_exponent_mask[4] ALIGN16_END =
{0x7F800000, 0x7F800000, 0x7F800000, 0x7F800000};
static const ALIGN16_BEG int eight_biased_exponent[4] ALIGN16_END =
{0x43800000, 0x43800000, 0x43800000, 0x43800000};
static const ALIGN16_BEG int implicit_leading_one[4] ALIGN16_END =
{0x43BF8000, 0x43BF8000, 0x43BF8000, 0x43BF8000};
static const int shift_exponent_into_top_mantissa = 8;
const __m128 two_n = _mm_and_ps(a, *((__m128 *)float_exponent_mask));
const __m128 n_1 = _mm_castsi128_ps(_mm_srli_epi32(_mm_castps_si128(two_n),
shift_exponent_into_top_mantissa));
const __m128 n_0 = _mm_or_ps(n_1, *((__m128 *)eight_biased_exponent));
const __m128 n = _mm_sub_ps(n_0, *((__m128 *)implicit_leading_one));
// Compute y.
static const ALIGN16_BEG int mantissa_mask[4] ALIGN16_END =
{0x007FFFFF, 0x007FFFFF, 0x007FFFFF, 0x007FFFFF};
static const ALIGN16_BEG int zero_biased_exponent_is_one[4] ALIGN16_END =
{0x3F800000, 0x3F800000, 0x3F800000, 0x3F800000};
const __m128 mantissa = _mm_and_ps(a, *((__m128 *)mantissa_mask));
const __m128 y = _mm_or_ps(
mantissa, *((__m128 *)zero_biased_exponent_is_one));
// Approximate log2(y) ~= (y - 1) * pol5(y).
// pol5(y) = C5 * y^5 + C4 * y^4 + C3 * y^3 + C2 * y^2 + C1 * y + C0
static const ALIGN16_BEG float ALIGN16_END C5[4] =
{-3.4436006e-2f, -3.4436006e-2f, -3.4436006e-2f, -3.4436006e-2f};
static const ALIGN16_BEG float ALIGN16_END C4[4] =
{3.1821337e-1f, 3.1821337e-1f, 3.1821337e-1f, 3.1821337e-1f};
static const ALIGN16_BEG float ALIGN16_END C3[4] =
{-1.2315303f, -1.2315303f, -1.2315303f, -1.2315303f};
static const ALIGN16_BEG float ALIGN16_END C2[4] =
{2.5988452f, 2.5988452f, 2.5988452f, 2.5988452f};
static const ALIGN16_BEG float ALIGN16_END C1[4] =
{-3.3241990f, -3.3241990f, -3.3241990f, -3.3241990f};
static const ALIGN16_BEG float ALIGN16_END C0[4] =
{3.1157899f, 3.1157899f, 3.1157899f, 3.1157899f};
const __m128 pol5_y_0 = _mm_mul_ps(y, *((__m128 *)C5));
const __m128 pol5_y_1 = _mm_add_ps(pol5_y_0, *((__m128 *)C4));
const __m128 pol5_y_2 = _mm_mul_ps(pol5_y_1, y);
const __m128 pol5_y_3 = _mm_add_ps(pol5_y_2, *((__m128 *)C3));
const __m128 pol5_y_4 = _mm_mul_ps(pol5_y_3, y);
const __m128 pol5_y_5 = _mm_add_ps(pol5_y_4, *((__m128 *)C2));
const __m128 pol5_y_6 = _mm_mul_ps(pol5_y_5, y);
const __m128 pol5_y_7 = _mm_add_ps(pol5_y_6, *((__m128 *)C1));
const __m128 pol5_y_8 = _mm_mul_ps(pol5_y_7, y);
const __m128 pol5_y = _mm_add_ps(pol5_y_8, *((__m128 *)C0));
const __m128 y_minus_one = _mm_sub_ps(
y, *((__m128 *)zero_biased_exponent_is_one));
const __m128 log2_y = _mm_mul_ps(y_minus_one , pol5_y);
// Combine parts.
log2_a = _mm_add_ps(n, log2_y);
}
// b * log2(a)
b_log2_a = _mm_mul_ps(b, log2_a);
// Calculate exp2(x), x = b * log2(a).
{
// To calculate 2^x, we decompose x like this:
// x = n + y
// n is an integer, the value of x - 0.5 rounded down, therefore
// y is in the [0.5, 1.5) range
//
// 2^x = 2^n * 2^y
// 2^n can be evaluated by playing with float representation.
// 2^y in a small range can be approximated, this code uses an order two
// polynomial approximation. The coefficients have been estimated
// with the Remez algorithm and the resulting polynomial has a
// maximum relative error of 0.17%.
// To avoid over/underflow, we reduce the range of input to ]-127, 129].
static const ALIGN16_BEG float max_input[4] ALIGN16_END =
{129.f, 129.f, 129.f, 129.f};
static const ALIGN16_BEG float min_input[4] ALIGN16_END =
{-126.99999f, -126.99999f, -126.99999f, -126.99999f};
const __m128 x_min = _mm_min_ps(b_log2_a, *((__m128 *)max_input));
const __m128 x_max = _mm_max_ps(x_min, *((__m128 *)min_input));
// Compute n.
static const ALIGN16_BEG float half[4] ALIGN16_END =
{0.5f, 0.5f, 0.5f, 0.5f};
const __m128 x_minus_half = _mm_sub_ps(x_max, *((__m128 *)half));
const __m128i x_minus_half_floor = _mm_cvtps_epi32(x_minus_half);
// Compute 2^n.
static const ALIGN16_BEG int float_exponent_bias[4] ALIGN16_END =
{127, 127, 127, 127};
static const int float_exponent_shift = 23;
const __m128i two_n_exponent = _mm_add_epi32(
x_minus_half_floor, *((__m128i *)float_exponent_bias));
const __m128 two_n = _mm_castsi128_ps(_mm_slli_epi32(
two_n_exponent, float_exponent_shift));
// Compute y.
const __m128 y = _mm_sub_ps(x_max, _mm_cvtepi32_ps(x_minus_half_floor));
// Approximate 2^y ~= C2 * y^2 + C1 * y + C0.
static const ALIGN16_BEG float C2[4] ALIGN16_END =
{3.3718944e-1f, 3.3718944e-1f, 3.3718944e-1f, 3.3718944e-1f};
static const ALIGN16_BEG float C1[4] ALIGN16_END =
{6.5763628e-1f, 6.5763628e-1f, 6.5763628e-1f, 6.5763628e-1f};
static const ALIGN16_BEG float C0[4] ALIGN16_END =
{1.0017247f, 1.0017247f, 1.0017247f, 1.0017247f};
const __m128 exp2_y_0 = _mm_mul_ps(y, *((__m128 *)C2));
const __m128 exp2_y_1 = _mm_add_ps(exp2_y_0, *((__m128 *)C1));
const __m128 exp2_y_2 = _mm_mul_ps(exp2_y_1, y);
const __m128 exp2_y = _mm_add_ps(exp2_y_2, *((__m128 *)C0));
// Combine parts.
a_exp_b = _mm_mul_ps(exp2_y, two_n);
}
return a_exp_b;
}
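// Illustrative sketch (not part of the original file): the scalar identity
// that mm_pow_ps() vectorizes. The SSE2 version replaces log2f/exp2f with
// the polynomial approximations described in the comments above.
__inline static float scalar_pow(float a, float b)
{
    // a^b = exp2(b * log2(a)), valid for a > 0.
    return exp2f(b * log2f(a));
}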
extern const float WebRtcAec_weightCurve[65];
extern const float WebRtcAec_overDriveCurve[65];
static void OverdriveAndSuppressSSE2(aec_t *aec, float hNl[PART_LEN1],
const float hNlFb,
float efw[2][PART_LEN1]) {
int i;
const __m128 vec_hNlFb = _mm_set1_ps(hNlFb);
const __m128 vec_one = _mm_set1_ps(1.0f);
const __m128 vec_minus_one = _mm_set1_ps(-1.0f);
const __m128 vec_overDriveSm = _mm_set1_ps(aec->overDriveSm);
// vectorized code (four at once)
for (i = 0; i + 3 < PART_LEN1; i+=4) {
// Weight subbands
__m128 vec_hNl = _mm_loadu_ps(&hNl[i]);
const __m128 vec_weightCurve = _mm_loadu_ps(&WebRtcAec_weightCurve[i]);
const __m128 bigger = _mm_cmpgt_ps(vec_hNl, vec_hNlFb);
const __m128 vec_weightCurve_hNlFb = _mm_mul_ps(
vec_weightCurve, vec_hNlFb);
const __m128 vec_one_weightCurve = _mm_sub_ps(vec_one, vec_weightCurve);
const __m128 vec_one_weightCurve_hNl = _mm_mul_ps(
vec_one_weightCurve, vec_hNl);
const __m128 vec_if0 = _mm_andnot_ps(bigger, vec_hNl);
const __m128 vec_if1 = _mm_and_ps(
bigger, _mm_add_ps(vec_weightCurve_hNlFb, vec_one_weightCurve_hNl));
vec_hNl = _mm_or_ps(vec_if0, vec_if1);
{
const __m128 vec_overDriveCurve = _mm_loadu_ps(
&WebRtcAec_overDriveCurve[i]);
const __m128 vec_overDriveSm_overDriveCurve = _mm_mul_ps(
vec_overDriveSm, vec_overDriveCurve);
vec_hNl = mm_pow_ps(vec_hNl, vec_overDriveSm_overDriveCurve);
_mm_storeu_ps(&hNl[i], vec_hNl);
}
// Suppress error signal
{
__m128 vec_efw_re = _mm_loadu_ps(&efw[0][i]);
__m128 vec_efw_im = _mm_loadu_ps(&efw[1][i]);
vec_efw_re = _mm_mul_ps(vec_efw_re, vec_hNl);
vec_efw_im = _mm_mul_ps(vec_efw_im, vec_hNl);
// Ooura fft returns incorrect sign on imaginary component. It matters
// here because we are making an additive change with comfort noise.
vec_efw_im = _mm_mul_ps(vec_efw_im, vec_minus_one);
_mm_storeu_ps(&efw[0][i], vec_efw_re);
_mm_storeu_ps(&efw[1][i], vec_efw_im);
}
}
// scalar code for the remaining items.
for (; i < PART_LEN1; i++) {
// Weight subbands
if (hNl[i] > hNlFb) {
hNl[i] = WebRtcAec_weightCurve[i] * hNlFb +
(1 - WebRtcAec_weightCurve[i]) * hNl[i];
}
hNl[i] = powf(hNl[i], aec->overDriveSm * WebRtcAec_overDriveCurve[i]);
// Suppress error signal
efw[0][i] *= hNl[i];
efw[1][i] *= hNl[i];
// Ooura fft returns incorrect sign on imaginary component. It matters
// here because we are making an additive change with comfort noise.
efw[1][i] *= -1;
}
}
void WebRtcAec_InitAec_SSE2(void) {
WebRtcAec_FilterFar = FilterFarSSE2;
WebRtcAec_ScaleErrorSignal = ScaleErrorSignalSSE2;
WebRtcAec_FilterAdaptation = FilterAdaptationSSE2;
WebRtcAec_OverdriveAndSuppress = OverdriveAndSuppressSSE2;
}
#endif // WEBRTC_USE_SSE2


@ -1,57 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_RDFT_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_RDFT_H_
// These intrinsics were unavailable before VS 2008.
// TODO(andrew): move to a common file.
#if defined(_MSC_VER) && _MSC_VER < 1500
#include <emmintrin.h>
static __inline __m128 _mm_castsi128_ps(__m128i a) { return *(__m128*)&a; }
static __inline __m128i _mm_castps_si128(__m128 a) { return *(__m128i*)&a; }
#endif
#ifdef _MSC_VER /* visual c++ */
# define ALIGN16_BEG __declspec(align(16))
# define ALIGN16_END
#else /* gcc or icc */
# define ALIGN16_BEG
# define ALIGN16_END __attribute__((aligned(16)))
#endif
// constants shared by all paths (C, SSE2).
extern float rdft_w[64];
// constants used by the C path.
extern float rdft_wk3ri_first[32];
extern float rdft_wk3ri_second[32];
// constants used by SSE2 but initialized in C path.
extern float rdft_wk1r[32];
extern float rdft_wk2r[32];
extern float rdft_wk3r[32];
extern float rdft_wk1i[32];
extern float rdft_wk2i[32];
extern float rdft_wk3i[32];
extern float cftmdl_wk1r[4];
// code path selection function pointers
typedef void (*rft_sub_128_t)(float *a);
extern rft_sub_128_t rftfsub_128;
extern rft_sub_128_t rftbsub_128;
extern rft_sub_128_t cft1st_128;
extern rft_sub_128_t cftmdl_128;
// entry points
void aec_rdft_init(void);
void aec_rdft_init_sse2(void);
void aec_rdft_forward_128(float *a);
void aec_rdft_inverse_128(float *a);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_AEC_RDFT_H_
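// Illustrative usage sketch (not part of the original header): the intended
// calling pattern for the entry points above. aec_rdft_init() fills the
// twiddle tables and picks the C or SSE2 code path; the 2/N scaling after
// the inverse transform follows the convention used by the AEC filter
// adaptation code. The helper name is hypothetical.
static void ExampleRdftRoundTrip(float buf[128])
{
    int i;
    aec_rdft_init();             // once, at startup
    aec_rdft_forward_128(buf);   // in-place forward packed-real FFT
    aec_rdft_inverse_128(buf);   // in-place inverse
    for (i = 0; i < 128; i++) {
        buf[i] *= 2.0f / 128.0f;
    }
}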


@ -1,431 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "typedefs.h"
#if defined(WEBRTC_USE_SSE2)
#include <emmintrin.h>
#include "aec_rdft.h"
static const ALIGN16_BEG float ALIGN16_END k_swap_sign[4] =
{-1.f, 1.f, -1.f, 1.f};
static void cft1st_128_SSE2(float *a) {
const __m128 mm_swap_sign = _mm_load_ps(k_swap_sign);
int j, k2;
for (k2 = 0, j = 0; j < 128; j += 16, k2 += 4) {
__m128 a00v = _mm_loadu_ps(&a[j + 0]);
__m128 a04v = _mm_loadu_ps(&a[j + 4]);
__m128 a08v = _mm_loadu_ps(&a[j + 8]);
__m128 a12v = _mm_loadu_ps(&a[j + 12]);
__m128 a01v = _mm_shuffle_ps(a00v, a08v, _MM_SHUFFLE(1, 0, 1 ,0));
__m128 a23v = _mm_shuffle_ps(a00v, a08v, _MM_SHUFFLE(3, 2, 3 ,2));
__m128 a45v = _mm_shuffle_ps(a04v, a12v, _MM_SHUFFLE(1, 0, 1 ,0));
__m128 a67v = _mm_shuffle_ps(a04v, a12v, _MM_SHUFFLE(3, 2, 3 ,2));
const __m128 wk1rv = _mm_load_ps(&rdft_wk1r[k2]);
const __m128 wk1iv = _mm_load_ps(&rdft_wk1i[k2]);
const __m128 wk2rv = _mm_load_ps(&rdft_wk2r[k2]);
const __m128 wk2iv = _mm_load_ps(&rdft_wk2i[k2]);
const __m128 wk3rv = _mm_load_ps(&rdft_wk3r[k2]);
const __m128 wk3iv = _mm_load_ps(&rdft_wk3i[k2]);
__m128 x0v = _mm_add_ps(a01v, a23v);
const __m128 x1v = _mm_sub_ps(a01v, a23v);
const __m128 x2v = _mm_add_ps(a45v, a67v);
const __m128 x3v = _mm_sub_ps(a45v, a67v);
__m128 x0w;
a01v = _mm_add_ps(x0v, x2v);
x0v = _mm_sub_ps(x0v, x2v);
x0w = _mm_shuffle_ps(x0v, x0v, _MM_SHUFFLE(2, 3, 0 ,1));
{
const __m128 a45_0v = _mm_mul_ps(wk2rv, x0v);
const __m128 a45_1v = _mm_mul_ps(wk2iv, x0w);
a45v = _mm_add_ps(a45_0v, a45_1v);
}
{
__m128 a23_0v, a23_1v;
const __m128 x3w = _mm_shuffle_ps(x3v, x3v, _MM_SHUFFLE(2, 3, 0 ,1));
const __m128 x3s = _mm_mul_ps(mm_swap_sign, x3w);
x0v = _mm_add_ps(x1v, x3s);
x0w = _mm_shuffle_ps(x0v, x0v, _MM_SHUFFLE(2, 3, 0 ,1));
a23_0v = _mm_mul_ps(wk1rv, x0v);
a23_1v = _mm_mul_ps(wk1iv, x0w);
a23v = _mm_add_ps(a23_0v, a23_1v);
x0v = _mm_sub_ps(x1v, x3s);
x0w = _mm_shuffle_ps(x0v, x0v, _MM_SHUFFLE(2, 3, 0 ,1));
}
{
const __m128 a67_0v = _mm_mul_ps(wk3rv, x0v);
const __m128 a67_1v = _mm_mul_ps(wk3iv, x0w);
a67v = _mm_add_ps(a67_0v, a67_1v);
}
a00v = _mm_shuffle_ps(a01v, a23v, _MM_SHUFFLE(1, 0, 1 ,0));
a04v = _mm_shuffle_ps(a45v, a67v, _MM_SHUFFLE(1, 0, 1 ,0));
a08v = _mm_shuffle_ps(a01v, a23v, _MM_SHUFFLE(3, 2, 3 ,2));
a12v = _mm_shuffle_ps(a45v, a67v, _MM_SHUFFLE(3, 2, 3 ,2));
_mm_storeu_ps(&a[j + 0], a00v);
_mm_storeu_ps(&a[j + 4], a04v);
_mm_storeu_ps(&a[j + 8], a08v);
_mm_storeu_ps(&a[j + 12], a12v);
}
}
static void cftmdl_128_SSE2(float *a) {
const int l = 8;
const __m128 mm_swap_sign = _mm_load_ps(k_swap_sign);
int j0;
__m128 wk1rv = _mm_load_ps(cftmdl_wk1r);
for (j0 = 0; j0 < l; j0 += 2) {
const __m128i a_00 = _mm_loadl_epi64((__m128i*)&a[j0 + 0]);
const __m128i a_08 = _mm_loadl_epi64((__m128i*)&a[j0 + 8]);
const __m128i a_32 = _mm_loadl_epi64((__m128i*)&a[j0 + 32]);
const __m128i a_40 = _mm_loadl_epi64((__m128i*)&a[j0 + 40]);
const __m128 a_00_32 = _mm_shuffle_ps(_mm_castsi128_ps(a_00),
_mm_castsi128_ps(a_32),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_08_40 = _mm_shuffle_ps(_mm_castsi128_ps(a_08),
_mm_castsi128_ps(a_40),
_MM_SHUFFLE(1, 0, 1 ,0));
__m128 x0r0_0i0_0r1_x0i1 = _mm_add_ps(a_00_32, a_08_40);
const __m128 x1r0_1i0_1r1_x1i1 = _mm_sub_ps(a_00_32, a_08_40);
const __m128i a_16 = _mm_loadl_epi64((__m128i*)&a[j0 + 16]);
const __m128i a_24 = _mm_loadl_epi64((__m128i*)&a[j0 + 24]);
const __m128i a_48 = _mm_loadl_epi64((__m128i*)&a[j0 + 48]);
const __m128i a_56 = _mm_loadl_epi64((__m128i*)&a[j0 + 56]);
const __m128 a_16_48 = _mm_shuffle_ps(_mm_castsi128_ps(a_16),
_mm_castsi128_ps(a_48),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_24_56 = _mm_shuffle_ps(_mm_castsi128_ps(a_24),
_mm_castsi128_ps(a_56),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 x2r0_2i0_2r1_x2i1 = _mm_add_ps(a_16_48, a_24_56);
const __m128 x3r0_3i0_3r1_x3i1 = _mm_sub_ps(a_16_48, a_24_56);
const __m128 xx0 = _mm_add_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 xx1 = _mm_sub_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 x3i0_3r0_3i1_x3r1 = _mm_castsi128_ps(
_mm_shuffle_epi32(_mm_castps_si128(x3r0_3i0_3r1_x3i1),
_MM_SHUFFLE(2, 3, 0, 1)));
const __m128 x3_swapped = _mm_mul_ps(mm_swap_sign, x3i0_3r0_3i1_x3r1);
const __m128 x1_x3_add = _mm_add_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 x1_x3_sub = _mm_sub_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 yy0 = _mm_shuffle_ps(x1_x3_add, x1_x3_sub,
_MM_SHUFFLE(2, 2, 2 ,2));
const __m128 yy1 = _mm_shuffle_ps(x1_x3_add, x1_x3_sub,
_MM_SHUFFLE(3, 3, 3 ,3));
const __m128 yy2 = _mm_mul_ps(mm_swap_sign, yy1);
const __m128 yy3 = _mm_add_ps(yy0, yy2);
const __m128 yy4 = _mm_mul_ps(wk1rv, yy3);
_mm_storel_epi64((__m128i*)&a[j0 + 0], _mm_castps_si128(xx0));
_mm_storel_epi64((__m128i*)&a[j0 + 32],
_mm_shuffle_epi32(_mm_castps_si128(xx0),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 16], _mm_castps_si128(xx1));
_mm_storel_epi64((__m128i*)&a[j0 + 48],
_mm_shuffle_epi32(_mm_castps_si128(xx1),
_MM_SHUFFLE(2, 3, 2, 3)));
a[j0 + 48] = -a[j0 + 48];
_mm_storel_epi64((__m128i*)&a[j0 + 8], _mm_castps_si128(x1_x3_add));
_mm_storel_epi64((__m128i*)&a[j0 + 24], _mm_castps_si128(x1_x3_sub));
_mm_storel_epi64((__m128i*)&a[j0 + 40], _mm_castps_si128(yy4));
_mm_storel_epi64((__m128i*)&a[j0 + 56],
_mm_shuffle_epi32(_mm_castps_si128(yy4),
_MM_SHUFFLE(2, 3, 2, 3)));
}
{
int k = 64;
int k1 = 2;
int k2 = 2 * k1;
const __m128 wk2rv = _mm_load_ps(&rdft_wk2r[k2+0]);
const __m128 wk2iv = _mm_load_ps(&rdft_wk2i[k2+0]);
const __m128 wk1iv = _mm_load_ps(&rdft_wk1i[k2+0]);
const __m128 wk3rv = _mm_load_ps(&rdft_wk3r[k2+0]);
const __m128 wk3iv = _mm_load_ps(&rdft_wk3i[k2+0]);
wk1rv = _mm_load_ps(&rdft_wk1r[k2+0]);
for (j0 = k; j0 < l + k; j0 += 2) {
const __m128i a_00 = _mm_loadl_epi64((__m128i*)&a[j0 + 0]);
const __m128i a_08 = _mm_loadl_epi64((__m128i*)&a[j0 + 8]);
const __m128i a_32 = _mm_loadl_epi64((__m128i*)&a[j0 + 32]);
const __m128i a_40 = _mm_loadl_epi64((__m128i*)&a[j0 + 40]);
const __m128 a_00_32 = _mm_shuffle_ps(_mm_castsi128_ps(a_00),
_mm_castsi128_ps(a_32),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_08_40 = _mm_shuffle_ps(_mm_castsi128_ps(a_08),
_mm_castsi128_ps(a_40),
_MM_SHUFFLE(1, 0, 1 ,0));
__m128 x0r0_0i0_0r1_x0i1 = _mm_add_ps(a_00_32, a_08_40);
const __m128 x1r0_1i0_1r1_x1i1 = _mm_sub_ps(a_00_32, a_08_40);
const __m128i a_16 = _mm_loadl_epi64((__m128i*)&a[j0 + 16]);
const __m128i a_24 = _mm_loadl_epi64((__m128i*)&a[j0 + 24]);
const __m128i a_48 = _mm_loadl_epi64((__m128i*)&a[j0 + 48]);
const __m128i a_56 = _mm_loadl_epi64((__m128i*)&a[j0 + 56]);
const __m128 a_16_48 = _mm_shuffle_ps(_mm_castsi128_ps(a_16),
_mm_castsi128_ps(a_48),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 a_24_56 = _mm_shuffle_ps(_mm_castsi128_ps(a_24),
_mm_castsi128_ps(a_56),
_MM_SHUFFLE(1, 0, 1 ,0));
const __m128 x2r0_2i0_2r1_x2i1 = _mm_add_ps(a_16_48, a_24_56);
const __m128 x3r0_3i0_3r1_x3i1 = _mm_sub_ps(a_16_48, a_24_56);
const __m128 xx = _mm_add_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 xx1 = _mm_sub_ps(x0r0_0i0_0r1_x0i1, x2r0_2i0_2r1_x2i1);
const __m128 xx2 = _mm_mul_ps(xx1 , wk2rv);
const __m128 xx3 = _mm_mul_ps(wk2iv,
_mm_castsi128_ps(_mm_shuffle_epi32(_mm_castps_si128(xx1),
_MM_SHUFFLE(2, 3, 0, 1))));
const __m128 xx4 = _mm_add_ps(xx2, xx3);
const __m128 x3i0_3r0_3i1_x3r1 = _mm_castsi128_ps(
_mm_shuffle_epi32(_mm_castps_si128(x3r0_3i0_3r1_x3i1),
_MM_SHUFFLE(2, 3, 0, 1)));
const __m128 x3_swapped = _mm_mul_ps(mm_swap_sign, x3i0_3r0_3i1_x3r1);
const __m128 x1_x3_add = _mm_add_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 x1_x3_sub = _mm_sub_ps(x1r0_1i0_1r1_x1i1, x3_swapped);
const __m128 xx10 = _mm_mul_ps(x1_x3_add, wk1rv);
const __m128 xx11 = _mm_mul_ps(wk1iv,
_mm_castsi128_ps(_mm_shuffle_epi32(_mm_castps_si128(x1_x3_add),
_MM_SHUFFLE(2, 3, 0, 1))));
const __m128 xx12 = _mm_add_ps(xx10, xx11);
const __m128 xx20 = _mm_mul_ps(x1_x3_sub, wk3rv);
const __m128 xx21 = _mm_mul_ps(wk3iv,
_mm_castsi128_ps(_mm_shuffle_epi32(_mm_castps_si128(x1_x3_sub),
_MM_SHUFFLE(2, 3, 0, 1))));
const __m128 xx22 = _mm_add_ps(xx20, xx21);
_mm_storel_epi64((__m128i*)&a[j0 + 0], _mm_castps_si128(xx));
_mm_storel_epi64((__m128i*)&a[j0 + 32],
_mm_shuffle_epi32(_mm_castps_si128(xx),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 16], _mm_castps_si128(xx4));
_mm_storel_epi64((__m128i*)&a[j0 + 48],
_mm_shuffle_epi32(_mm_castps_si128(xx4),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 8], _mm_castps_si128(xx12));
_mm_storel_epi64((__m128i*)&a[j0 + 40],
_mm_shuffle_epi32(_mm_castps_si128(xx12),
_MM_SHUFFLE(3, 2, 3, 2)));
_mm_storel_epi64((__m128i*)&a[j0 + 24], _mm_castps_si128(xx22));
_mm_storel_epi64((__m128i*)&a[j0 + 56],
_mm_shuffle_epi32(_mm_castps_si128(xx22),
_MM_SHUFFLE(3, 2, 3, 2)));
}
}
}
static void rftfsub_128_SSE2(float *a) {
const float *c = rdft_w + 32;
int j1, j2, k1, k2;
float wkr, wki, xr, xi, yr, yi;
static const ALIGN16_BEG float ALIGN16_END k_half[4] =
{0.5f, 0.5f, 0.5f, 0.5f};
const __m128 mm_half = _mm_load_ps(k_half);
// Vectorized code (four at once).
// Note: commented numbers are indexes for the first iteration of the loop.
for (j1 = 1, j2 = 2; j2 + 7 < 64; j1 += 4, j2 += 8) {
// Load 'wk'.
const __m128 c_j1 = _mm_loadu_ps(&c[ j1]); // 1, 2, 3, 4,
const __m128 c_k1 = _mm_loadu_ps(&c[29 - j1]); // 28, 29, 30, 31,
const __m128 wkrt = _mm_sub_ps(mm_half, c_k1); // 28, 29, 30, 31,
const __m128 wkr_ =
_mm_shuffle_ps(wkrt, wkrt, _MM_SHUFFLE(0, 1, 2, 3)); // 31, 30, 29, 28,
const __m128 wki_ = c_j1; // 1, 2, 3, 4,
// Load and shuffle 'a'.
const __m128 a_j2_0 = _mm_loadu_ps(&a[0 + j2]); // 2, 3, 4, 5,
const __m128 a_j2_4 = _mm_loadu_ps(&a[4 + j2]); // 6, 7, 8, 9,
const __m128 a_k2_0 = _mm_loadu_ps(&a[122 - j2]); // 120, 121, 122, 123,
const __m128 a_k2_4 = _mm_loadu_ps(&a[126 - j2]); // 124, 125, 126, 127,
const __m128 a_j2_p0 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(2, 0, 2 ,0)); // 2, 4, 6, 8,
const __m128 a_j2_p1 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(3, 1, 3 ,1)); // 3, 5, 7, 9,
const __m128 a_k2_p0 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(0, 2, 0 ,2)); // 126, 124, 122, 120,
const __m128 a_k2_p1 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(1, 3, 1 ,3)); // 127, 125, 123, 121,
// Calculate 'x'.
const __m128 xr_ = _mm_sub_ps(a_j2_p0, a_k2_p0);
// 2-126, 4-124, 6-122, 8-120,
const __m128 xi_ = _mm_add_ps(a_j2_p1, a_k2_p1);
// 3-127, 5-125, 7-123, 9-121,
// Calculate product into 'y'.
// yr = wkr * xr - wki * xi;
// yi = wkr * xi + wki * xr;
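// Editor's note: i.e. the complex product (wkr + j*wki) * (xr + j*xi), computed four lanes at a time below.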
const __m128 a_ = _mm_mul_ps(wkr_, xr_);
const __m128 b_ = _mm_mul_ps(wki_, xi_);
const __m128 c_ = _mm_mul_ps(wkr_, xi_);
const __m128 d_ = _mm_mul_ps(wki_, xr_);
const __m128 yr_ = _mm_sub_ps(a_, b_); // 2-126, 4-124, 6-122, 8-120,
const __m128 yi_ = _mm_add_ps(c_, d_); // 3-127, 5-125, 7-123, 9-121,
// Update 'a'.
// a[j2 + 0] -= yr;
// a[j2 + 1] -= yi;
// a[k2 + 0] += yr;
// a[k2 + 1] -= yi;
const __m128 a_j2_p0n = _mm_sub_ps(a_j2_p0, yr_); // 2, 4, 6, 8,
const __m128 a_j2_p1n = _mm_sub_ps(a_j2_p1, yi_); // 3, 5, 7, 9,
const __m128 a_k2_p0n = _mm_add_ps(a_k2_p0, yr_); // 126, 124, 122, 120,
const __m128 a_k2_p1n = _mm_sub_ps(a_k2_p1, yi_); // 127, 125, 123, 121,
// Shuffle in right order and store.
const __m128 a_j2_0n = _mm_unpacklo_ps(a_j2_p0n, a_j2_p1n);
// 2, 3, 4, 5,
const __m128 a_j2_4n = _mm_unpackhi_ps(a_j2_p0n, a_j2_p1n);
// 6, 7, 8, 9,
const __m128 a_k2_0nt = _mm_unpackhi_ps(a_k2_p0n, a_k2_p1n);
// 122, 123, 120, 121,
const __m128 a_k2_4nt = _mm_unpacklo_ps(a_k2_p0n, a_k2_p1n);
// 126, 127, 124, 125,
const __m128 a_k2_0n = _mm_shuffle_ps(a_k2_0nt, a_k2_0nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 120, 121, 122, 123,
const __m128 a_k2_4n = _mm_shuffle_ps(a_k2_4nt, a_k2_4nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 124, 125, 126, 127,
_mm_storeu_ps(&a[0 + j2], a_j2_0n);
_mm_storeu_ps(&a[4 + j2], a_j2_4n);
_mm_storeu_ps(&a[122 - j2], a_k2_0n);
_mm_storeu_ps(&a[126 - j2], a_k2_4n);
}
// Scalar code for the remaining items.
for (; j2 < 64; j1 += 1, j2 += 2) {
k2 = 128 - j2;
k1 = 32 - j1;
wkr = 0.5f - c[k1];
wki = c[j1];
xr = a[j2 + 0] - a[k2 + 0];
xi = a[j2 + 1] + a[k2 + 1];
yr = wkr * xr - wki * xi;
yi = wkr * xi + wki * xr;
a[j2 + 0] -= yr;
a[j2 + 1] -= yi;
a[k2 + 0] += yr;
a[k2 + 1] -= yi;
}
}
static void rftbsub_128_SSE2(float *a) {
const float *c = rdft_w + 32;
int j1, j2, k1, k2;
float wkr, wki, xr, xi, yr, yi;
static const ALIGN16_BEG float ALIGN16_END k_half[4] =
{0.5f, 0.5f, 0.5f, 0.5f};
const __m128 mm_half = _mm_load_ps(k_half);
a[1] = -a[1];
// Vectorized code (four at once).
// Note: commented numbers are indexes for the first iteration of the loop.
for (j1 = 1, j2 = 2; j2 + 7 < 64; j1 += 4, j2 += 8) {
// Load 'wk'.
const __m128 c_j1 = _mm_loadu_ps(&c[ j1]); // 1, 2, 3, 4,
const __m128 c_k1 = _mm_loadu_ps(&c[29 - j1]); // 28, 29, 30, 31,
const __m128 wkrt = _mm_sub_ps(mm_half, c_k1); // 28, 29, 30, 31,
const __m128 wkr_ =
_mm_shuffle_ps(wkrt, wkrt, _MM_SHUFFLE(0, 1, 2, 3)); // 31, 30, 29, 28,
const __m128 wki_ = c_j1; // 1, 2, 3, 4,
// Load and shuffle 'a'.
const __m128 a_j2_0 = _mm_loadu_ps(&a[0 + j2]); // 2, 3, 4, 5,
const __m128 a_j2_4 = _mm_loadu_ps(&a[4 + j2]); // 6, 7, 8, 9,
const __m128 a_k2_0 = _mm_loadu_ps(&a[122 - j2]); // 120, 121, 122, 123,
const __m128 a_k2_4 = _mm_loadu_ps(&a[126 - j2]); // 124, 125, 126, 127,
const __m128 a_j2_p0 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(2, 0, 2 ,0)); // 2, 4, 6, 8,
const __m128 a_j2_p1 = _mm_shuffle_ps(a_j2_0, a_j2_4,
_MM_SHUFFLE(3, 1, 3 ,1)); // 3, 5, 7, 9,
const __m128 a_k2_p0 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(0, 2, 0 ,2)); // 126, 124, 122, 120,
const __m128 a_k2_p1 = _mm_shuffle_ps(a_k2_4, a_k2_0,
_MM_SHUFFLE(1, 3, 1 ,3)); // 127, 125, 123, 121,
// Calculate 'x'.
const __m128 xr_ = _mm_sub_ps(a_j2_p0, a_k2_p0);
// 2-126, 4-124, 6-122, 8-120,
const __m128 xi_ = _mm_add_ps(a_j2_p1, a_k2_p1);
// 3-127, 5-125, 7-123, 9-121,
// Calculate product into 'y'.
// yr = wkr * xr + wki * xi;
// yi = wkr * xi - wki * xr;
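// Editor's note: i.e. the conjugate product (wkr - j*wki) * (xr + j*xi), computed four lanes at a time below.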
const __m128 a_ = _mm_mul_ps(wkr_, xr_);
const __m128 b_ = _mm_mul_ps(wki_, xi_);
const __m128 c_ = _mm_mul_ps(wkr_, xi_);
const __m128 d_ = _mm_mul_ps(wki_, xr_);
const __m128 yr_ = _mm_add_ps(a_, b_); // 2-126, 4-124, 6-122, 8-120,
const __m128 yi_ = _mm_sub_ps(c_, d_); // 3-127, 5-125, 7-123, 9-121,
// Update 'a'.
// a[j2 + 0] = a[j2 + 0] - yr;
// a[j2 + 1] = yi - a[j2 + 1];
// a[k2 + 0] = yr + a[k2 + 0];
// a[k2 + 1] = yi - a[k2 + 1];
const __m128 a_j2_p0n = _mm_sub_ps(a_j2_p0, yr_); // 2, 4, 6, 8,
const __m128 a_j2_p1n = _mm_sub_ps(yi_, a_j2_p1); // 3, 5, 7, 9,
const __m128 a_k2_p0n = _mm_add_ps(a_k2_p0, yr_); // 126, 124, 122, 120,
const __m128 a_k2_p1n = _mm_sub_ps(yi_, a_k2_p1); // 127, 125, 123, 121,
// Shuffle in right order and store.
const __m128 a_j2_0n = _mm_unpacklo_ps(a_j2_p0n, a_j2_p1n);
// 2, 3, 4, 5,
const __m128 a_j2_4n = _mm_unpackhi_ps(a_j2_p0n, a_j2_p1n);
// 6, 7, 8, 9,
const __m128 a_k2_0nt = _mm_unpackhi_ps(a_k2_p0n, a_k2_p1n);
// 122, 123, 120, 121,
const __m128 a_k2_4nt = _mm_unpacklo_ps(a_k2_p0n, a_k2_p1n);
// 126, 127, 124, 125,
const __m128 a_k2_0n = _mm_shuffle_ps(a_k2_0nt, a_k2_0nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 120, 121, 122, 123,
const __m128 a_k2_4n = _mm_shuffle_ps(a_k2_4nt, a_k2_4nt,
_MM_SHUFFLE(1, 0, 3 ,2)); // 124, 125, 126, 127,
_mm_storeu_ps(&a[0 + j2], a_j2_0n);
_mm_storeu_ps(&a[4 + j2], a_j2_4n);
_mm_storeu_ps(&a[122 - j2], a_k2_0n);
_mm_storeu_ps(&a[126 - j2], a_k2_4n);
}
// Scalar code for the remaining items.
for (; j2 < 64; j1 += 1, j2 += 2) {
k2 = 128 - j2;
k1 = 32 - j1;
wkr = 0.5f - c[k1];
wki = c[j1];
xr = a[j2 + 0] - a[k2 + 0];
xi = a[j2 + 1] + a[k2 + 1];
yr = wkr * xr + wki * xi;
yi = wkr * xi - wki * xr;
a[j2 + 0] = a[j2 + 0] - yr;
a[j2 + 1] = yi - a[j2 + 1];
a[k2 + 0] = yr + a[k2 + 0];
a[k2 + 1] = yi - a[k2 + 1];
}
a[65] = -a[65];
}
void aec_rdft_init_sse2(void) {
cft1st_128 = cft1st_128_SSE2;
cftmdl_128 = cftmdl_128_SSE2;
rftfsub_128 = rftfsub_128_SSE2;
rftbsub_128 = rftbsub_128_SSE2;
}
#endif // WEBRTC_USE_SSE2
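Editor's note: aec_rdft_init_sse2() above only swaps the cft1st_128/cftmdl_128/rftfsub_128/rftbsub_128 function pointers declared in aec_rdft.h, so callers always go through the same entry points regardless of instruction set. A minimal standalone sketch of that dispatch pattern follows; the names (my_kernel_c, my_kernel_sse2, my_init) are illustrative and not part of WebRTC, and the CPU check uses the GCC/Clang __builtin_cpu_supports() helper rather than whatever the original build relied on.

#include <stdio.h>

typedef void (*kernel_fn)(float *a);

static void my_kernel_c(float *a) { a[0] += 1.0f; }  /* portable fallback */

#if defined(__SSE2__)
#include <emmintrin.h>
static void my_kernel_sse2(float *a) {               /* SSE2 variant */
  __m128 v = _mm_loadu_ps(a);
  _mm_storeu_ps(a, _mm_add_ps(v, _mm_set1_ps(1.0f)));
}
#endif

static kernel_fn my_kernel = my_kernel_c;            /* default to the C path */

static void my_init(void) {
#if defined(__SSE2__) && (defined(__GNUC__) || defined(__clang__))
  if (__builtin_cpu_supports("sse2")) {
    my_kernel = my_kernel_sse2;                      /* register the optimized kernel */
  }
#endif
}

int main(void) {
  float buf[4] = {0.0f, 0.0f, 0.0f, 0.0f};
  my_init();
  my_kernel(buf);
  printf("%.1f\n", buf[0]);                          /* prints 1.0 on either path */
  return 0;
}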


@@ -1,901 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*
* Contains the API functions for the AEC.
*/
#include "echo_cancellation.h"
#include <math.h>
#ifdef AEC_DEBUG
#include <stdio.h>
#endif
#include <stdlib.h>
#include <string.h>
#include "aec_core.h"
#include "resampler.h"
#include "ring_buffer.h"
#define BUF_SIZE_FRAMES 50 // buffer size (frames)
// Maximum length of resampled signal. Must be an integer multiple of frames
// (ceil(1/(1 + MIN_SKEW)*2) + 1)*FRAME_LEN
// The factor of 2 handles wb, and the + 1 is a safety margin
#define MAX_RESAMP_LEN (5 * FRAME_LEN)
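// Editor's note (assumption): with the minimum skew estimate of -0.5 used in
// WebRtcAec_Process() below, ceil(1/(1 - 0.5) * 2) + 1 = ceil(4) + 1 = 5,
// which is where the factor 5 in MAX_RESAMP_LEN above comes from.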
static const int bufSizeSamp = BUF_SIZE_FRAMES * FRAME_LEN; // buffer size (samples)
static const int sampMsNb = 8; // samples per ms in nb
// Target suppression levels for nlp modes
// log{0.001, 0.00001, 0.00000001}
static const float targetSupp[3] = {-6.9f, -11.5f, -18.4f};
static const float minOverDrive[3] = {1.0f, 2.0f, 5.0f};
static const int initCheck = 42;
typedef struct {
int delayCtr;
int sampFreq;
int splitSampFreq;
int scSampFreq;
float sampFactor; // scSampRate / sampFreq
short nlpMode;
short autoOnOff;
short activity;
short skewMode;
short bufSizeStart;
//short bufResetCtr; // counts number of noncausal frames
int knownDelay;
// Stores the last frame added to the farend buffer
short farendOld[2][FRAME_LEN];
short initFlag; // indicates if AEC has been initialized
// Variables used for averaging far end buffer size
short counter;
short sum;
short firstVal;
short checkBufSizeCtr;
// Variables used for delay shifts
short msInSndCardBuf;
short filtDelay;
int timeForDelayChange;
int ECstartup;
int checkBuffSize;
int delayChange;
short lastDelayDiff;
#ifdef AEC_DEBUG
FILE *bufFile;
FILE *delayFile;
FILE *skewFile;
FILE *preCompFile;
FILE *postCompFile;
#endif // AEC_DEBUG
// Structures
void *farendBuf;
void *resampler;
int skewFrCtr;
int resample; // if the skew is small enough we don't resample
int highSkewCtr;
float skew;
int lastError;
aec_t *aec;
} aecpc_t;
// Estimates delay to set the position of the farend buffer read pointer
// (controlled by knownDelay)
static int EstBufDelay(aecpc_t *aecInst, short msInSndCardBuf);
// Stuffs the farend buffer if the estimated delay is too large
static int DelayComp(aecpc_t *aecInst);
WebRtc_Word32 WebRtcAec_Create(void **aecInst)
{
aecpc_t *aecpc;
if (aecInst == NULL) {
return -1;
}
aecpc = malloc(sizeof(aecpc_t));
*aecInst = aecpc;
if (aecpc == NULL) {
return -1;
}
if (WebRtcAec_CreateAec(&aecpc->aec) == -1) {
WebRtcAec_Free(aecpc);
aecpc = NULL;
return -1;
}
if (WebRtcApm_CreateBuffer(&aecpc->farendBuf, bufSizeSamp) == -1) {
WebRtcAec_Free(aecpc);
aecpc = NULL;
return -1;
}
if (WebRtcAec_CreateResampler(&aecpc->resampler) == -1) {
WebRtcAec_Free(aecpc);
aecpc = NULL;
return -1;
}
aecpc->initFlag = 0;
aecpc->lastError = 0;
#ifdef AEC_DEBUG
aecpc->aec->farFile = fopen("aecFar.pcm","wb");
aecpc->aec->nearFile = fopen("aecNear.pcm","wb");
aecpc->aec->outFile = fopen("aecOut.pcm","wb");
aecpc->aec->outLpFile = fopen("aecOutLp.pcm","wb");
aecpc->bufFile = fopen("aecBuf.dat", "wb");
aecpc->skewFile = fopen("aecSkew.dat", "wb");
aecpc->delayFile = fopen("aecDelay.dat", "wb");
aecpc->preCompFile = fopen("preComp.pcm", "wb");
aecpc->postCompFile = fopen("postComp.pcm", "wb");
#endif // AEC_DEBUG
return 0;
}
WebRtc_Word32 WebRtcAec_Free(void *aecInst)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
#ifdef AEC_DEBUG
fclose(aecpc->aec->farFile);
fclose(aecpc->aec->nearFile);
fclose(aecpc->aec->outFile);
fclose(aecpc->aec->outLpFile);
fclose(aecpc->bufFile);
fclose(aecpc->skewFile);
fclose(aecpc->delayFile);
fclose(aecpc->preCompFile);
fclose(aecpc->postCompFile);
#endif // AEC_DEBUG
WebRtcAec_FreeAec(aecpc->aec);
WebRtcApm_FreeBuffer(aecpc->farendBuf);
WebRtcAec_FreeResampler(aecpc->resampler);
free(aecpc);
return 0;
}
WebRtc_Word32 WebRtcAec_Init(void *aecInst, WebRtc_Word32 sampFreq, WebRtc_Word32 scSampFreq)
{
aecpc_t *aecpc = aecInst;
AecConfig aecConfig;
if (aecpc == NULL) {
return -1;
}
if (sampFreq != 8000 && sampFreq != 16000 && sampFreq != 32000) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->sampFreq = sampFreq;
if (scSampFreq < 1 || scSampFreq > 96000) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->scSampFreq = scSampFreq;
// Initialize echo canceller core
if (WebRtcAec_InitAec(aecpc->aec, aecpc->sampFreq) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
// Initialize farend buffer
if (WebRtcApm_InitBuffer(aecpc->farendBuf) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
if (WebRtcAec_InitResampler(aecpc->resampler, aecpc->scSampFreq) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
aecpc->initFlag = initCheck; // indicates that initialization has been done
if (aecpc->sampFreq == 32000) {
aecpc->splitSampFreq = 16000;
}
else {
aecpc->splitSampFreq = sampFreq;
}
aecpc->skewFrCtr = 0;
aecpc->activity = 0;
aecpc->delayChange = 1;
aecpc->delayCtr = 0;
aecpc->sum = 0;
aecpc->counter = 0;
aecpc->checkBuffSize = 1;
aecpc->firstVal = 0;
aecpc->ECstartup = 1;
aecpc->bufSizeStart = 0;
aecpc->checkBufSizeCtr = 0;
aecpc->filtDelay = 0;
aecpc->timeForDelayChange = 0;
aecpc->knownDelay = 0;
aecpc->lastDelayDiff = 0;
aecpc->skew = 0;
aecpc->resample = kAecFalse;
aecpc->highSkewCtr = 0;
aecpc->sampFactor = (aecpc->scSampFreq * 1.0f) / aecpc->splitSampFreq;
memset(&aecpc->farendOld[0][0], 0, 160);
// Default settings.
aecConfig.nlpMode = kAecNlpModerate;
aecConfig.skewMode = kAecFalse;
aecConfig.metricsMode = kAecFalse;
aecConfig.delay_logging = kAecFalse;
if (WebRtcAec_set_config(aecpc, aecConfig) == -1) {
aecpc->lastError = AEC_UNSPECIFIED_ERROR;
return -1;
}
return 0;
}
// only buffer L band for farend
WebRtc_Word32 WebRtcAec_BufferFarend(void *aecInst, const WebRtc_Word16 *farend,
WebRtc_Word16 nrOfSamples)
{
aecpc_t *aecpc = aecInst;
WebRtc_Word32 retVal = 0;
short newNrOfSamples;
short newFarend[MAX_RESAMP_LEN];
float skew;
if (aecpc == NULL) {
return -1;
}
if (farend == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
// number of samples == 160 for SWB input
if (nrOfSamples != 80 && nrOfSamples != 160) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
skew = aecpc->skew;
// TODO: Is this really a good idea?
if (!aecpc->ECstartup) {
DelayComp(aecpc);
}
if (aecpc->skewMode == kAecTrue && aecpc->resample == kAecTrue) {
// Resample and get a new number of samples
newNrOfSamples = WebRtcAec_ResampleLinear(aecpc->resampler,
farend,
nrOfSamples,
skew,
newFarend);
WebRtcApm_WriteBuffer(aecpc->farendBuf, newFarend, newNrOfSamples);
#ifdef AEC_DEBUG
fwrite(farend, 2, nrOfSamples, aecpc->preCompFile);
fwrite(newFarend, 2, newNrOfSamples, aecpc->postCompFile);
#endif
}
else {
WebRtcApm_WriteBuffer(aecpc->farendBuf, farend, nrOfSamples);
}
return retVal;
}
WebRtc_Word32 WebRtcAec_Process(void *aecInst, const WebRtc_Word16 *nearend,
const WebRtc_Word16 *nearendH, WebRtc_Word16 *out, WebRtc_Word16 *outH,
WebRtc_Word16 nrOfSamples, WebRtc_Word16 msInSndCardBuf, WebRtc_Word32 skew)
{
aecpc_t *aecpc = aecInst;
WebRtc_Word32 retVal = 0;
short i;
short farend[FRAME_LEN];
short nmbrOfFilledBuffers;
short nBlocks10ms;
short nFrames;
#ifdef AEC_DEBUG
short msInAECBuf;
#endif
// Limit resampling to doubling/halving of signal
const float minSkewEst = -0.5f;
const float maxSkewEst = 1.0f;
if (aecpc == NULL) {
return -1;
}
if (nearend == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (out == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
// number of samples == 160 for SWB input
if (nrOfSamples != 80 && nrOfSamples != 160) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
// Check for valid pointers based on sampling rate
if (aecpc->sampFreq == 32000 && nearendH == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (msInSndCardBuf < 0) {
msInSndCardBuf = 0;
aecpc->lastError = AEC_BAD_PARAMETER_WARNING;
retVal = -1;
}
else if (msInSndCardBuf > 500) {
msInSndCardBuf = 500;
aecpc->lastError = AEC_BAD_PARAMETER_WARNING;
retVal = -1;
}
msInSndCardBuf += 10;
aecpc->msInSndCardBuf = msInSndCardBuf;
if (aecpc->skewMode == kAecTrue) {
if (aecpc->skewFrCtr < 25) {
aecpc->skewFrCtr++;
}
else {
retVal = WebRtcAec_GetSkew(aecpc->resampler, skew, &aecpc->skew);
if (retVal == -1) {
aecpc->skew = 0;
aecpc->lastError = AEC_BAD_PARAMETER_WARNING;
}
aecpc->skew /= aecpc->sampFactor*nrOfSamples;
if (aecpc->skew < 1.0e-3 && aecpc->skew > -1.0e-3) {
aecpc->resample = kAecFalse;
}
else {
aecpc->resample = kAecTrue;
}
if (aecpc->skew < minSkewEst) {
aecpc->skew = minSkewEst;
}
else if (aecpc->skew > maxSkewEst) {
aecpc->skew = maxSkewEst;
}
#ifdef AEC_DEBUG
fwrite(&aecpc->skew, sizeof(aecpc->skew), 1, aecpc->skewFile);
#endif
}
}
nFrames = nrOfSamples / FRAME_LEN;
nBlocks10ms = nFrames / aecpc->aec->mult;
if (aecpc->ECstartup) {
if (nearend != out) {
// Only needed if they don't already point to the same place.
memcpy(out, nearend, sizeof(short) * nrOfSamples);
}
nmbrOfFilledBuffers = WebRtcApm_get_buffer_size(aecpc->farendBuf) / FRAME_LEN;
// The AEC is in the start up mode
// AEC is disabled until the soundcard buffer and farend buffers are OK
// Mechanism to ensure that the soundcard buffer is reasonably stable.
if (aecpc->checkBuffSize) {
aecpc->checkBufSizeCtr++;
// Before we fill up the far end buffer we require the amount of data on the
// sound card to be stable (+/-8 ms) compared to the first value. This
// comparison is made during the following 4 consecutive frames. If it seems
// to be stable then we start to fill up the far end buffer.
if (aecpc->counter == 0) {
aecpc->firstVal = aecpc->msInSndCardBuf;
aecpc->sum = 0;
}
if (abs(aecpc->firstVal - aecpc->msInSndCardBuf) <
WEBRTC_SPL_MAX(0.2 * aecpc->msInSndCardBuf, sampMsNb)) {
aecpc->sum += aecpc->msInSndCardBuf;
aecpc->counter++;
}
else {
aecpc->counter = 0;
}
if (aecpc->counter*nBlocks10ms >= 6) {
// The farend buffer size is determined in blocks of 80 samples
// Use 75% of the average value of the soundcard buffer
aecpc->bufSizeStart = WEBRTC_SPL_MIN((int) (0.75 * (aecpc->sum *
aecpc->aec->mult) / (aecpc->counter * 10)), BUF_SIZE_FRAMES);
// buffersize has now been determined
aecpc->checkBuffSize = 0;
}
if (aecpc->checkBufSizeCtr * nBlocks10ms > 50) {
// for really bad sound cards, don't disable the echo canceller for more than 0.5 sec
aecpc->bufSizeStart = WEBRTC_SPL_MIN((int) (0.75 * (aecpc->msInSndCardBuf *
aecpc->aec->mult) / 10), BUF_SIZE_FRAMES);
aecpc->checkBuffSize = 0;
}
}
// if checkBuffSize changed in the if-statement above
if (!aecpc->checkBuffSize) {
// soundcard buffer is now reasonably stable
// When the far end buffer is filled with approximately the same amount of
// data as the amount on the sound card we end the start up phase and start
// to cancel echoes.
if (nmbrOfFilledBuffers == aecpc->bufSizeStart) {
aecpc->ECstartup = 0; // Enable the AEC
}
else if (nmbrOfFilledBuffers > aecpc->bufSizeStart) {
WebRtcApm_FlushBuffer(aecpc->farendBuf, WebRtcApm_get_buffer_size(aecpc->farendBuf) -
aecpc->bufSizeStart * FRAME_LEN);
aecpc->ECstartup = 0;
}
}
}
else {
// AEC is enabled
// Note only 1 block supported for nb and 2 blocks for wb
for (i = 0; i < nFrames; i++) {
nmbrOfFilledBuffers = WebRtcApm_get_buffer_size(aecpc->farendBuf) / FRAME_LEN;
// Check that there is data in the far end buffer
if (nmbrOfFilledBuffers > 0) {
// Get the next 80 samples from the farend buffer
WebRtcApm_ReadBuffer(aecpc->farendBuf, farend, FRAME_LEN);
// Always store the last frame for use when we run out of data
memcpy(&(aecpc->farendOld[i][0]), farend, FRAME_LEN * sizeof(short));
}
else {
// We have no data so we use the last played frame
memcpy(farend, &(aecpc->farendOld[i][0]), FRAME_LEN * sizeof(short));
}
// Call buffer delay estimator when all data is extracted,
// i.e. i = 0 for NB and i = 1 for WB or SWB
if ((i == 0 && aecpc->splitSampFreq == 8000) ||
(i == 1 && (aecpc->splitSampFreq == 16000))) {
EstBufDelay(aecpc, aecpc->msInSndCardBuf);
}
// Call the AEC
WebRtcAec_ProcessFrame(aecpc->aec, farend, &nearend[FRAME_LEN * i], &nearendH[FRAME_LEN * i],
&out[FRAME_LEN * i], &outH[FRAME_LEN * i], aecpc->knownDelay);
}
}
#ifdef AEC_DEBUG
msInAECBuf = WebRtcApm_get_buffer_size(aecpc->farendBuf) / (sampMsNb*aecpc->aec->mult);
fwrite(&msInAECBuf, 2, 1, aecpc->bufFile);
fwrite(&(aecpc->knownDelay), sizeof(aecpc->knownDelay), 1, aecpc->delayFile);
#endif
return retVal;
}
WebRtc_Word32 WebRtcAec_set_config(void *aecInst, AecConfig config)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
if (config.skewMode != kAecFalse && config.skewMode != kAecTrue) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->skewMode = config.skewMode;
if (config.nlpMode != kAecNlpConservative && config.nlpMode !=
kAecNlpModerate && config.nlpMode != kAecNlpAggressive) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->nlpMode = config.nlpMode;
aecpc->aec->targetSupp = targetSupp[aecpc->nlpMode];
aecpc->aec->minOverDrive = minOverDrive[aecpc->nlpMode];
if (config.metricsMode != kAecFalse && config.metricsMode != kAecTrue) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->aec->metricsMode = config.metricsMode;
if (aecpc->aec->metricsMode == kAecTrue) {
WebRtcAec_InitMetrics(aecpc->aec);
}
if (config.delay_logging != kAecFalse && config.delay_logging != kAecTrue) {
aecpc->lastError = AEC_BAD_PARAMETER_ERROR;
return -1;
}
aecpc->aec->delay_logging_enabled = config.delay_logging;
if (aecpc->aec->delay_logging_enabled == kAecTrue) {
memset(aecpc->aec->delay_histogram, 0, sizeof(aecpc->aec->delay_histogram));
}
return 0;
}
WebRtc_Word32 WebRtcAec_get_config(void *aecInst, AecConfig *config)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (config == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
config->nlpMode = aecpc->nlpMode;
config->skewMode = aecpc->skewMode;
config->metricsMode = aecpc->aec->metricsMode;
config->delay_logging = aecpc->aec->delay_logging_enabled;
return 0;
}
WebRtc_Word32 WebRtcAec_get_echo_status(void *aecInst, WebRtc_Word16 *status)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (status == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
*status = aecpc->aec->echoState;
return 0;
}
WebRtc_Word32 WebRtcAec_GetMetrics(void *aecInst, AecMetrics *metrics)
{
const float upweight = 0.7f;
float dtmp;
short stmp;
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
if (metrics == NULL) {
aecpc->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (aecpc->initFlag != initCheck) {
aecpc->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
// ERL
metrics->erl.instant = (short) aecpc->aec->erl.instant;
if ((aecpc->aec->erl.himean > offsetLevel) && (aecpc->aec->erl.average > offsetLevel)) {
// Use a mix between regular average and upper part average
dtmp = upweight * aecpc->aec->erl.himean + (1 - upweight) * aecpc->aec->erl.average;
metrics->erl.average = (short) dtmp;
}
else {
metrics->erl.average = offsetLevel;
}
metrics->erl.max = (short) aecpc->aec->erl.max;
if (aecpc->aec->erl.min < (offsetLevel * (-1))) {
metrics->erl.min = (short) aecpc->aec->erl.min;
}
else {
metrics->erl.min = offsetLevel;
}
// ERLE
metrics->erle.instant = (short) aecpc->aec->erle.instant;
if ((aecpc->aec->erle.himean > offsetLevel) && (aecpc->aec->erle.average > offsetLevel)) {
// Use a mix between regular average and upper part average
dtmp = upweight * aecpc->aec->erle.himean + (1 - upweight) * aecpc->aec->erle.average;
metrics->erle.average = (short) dtmp;
}
else {
metrics->erle.average = offsetLevel;
}
metrics->erle.max = (short) aecpc->aec->erle.max;
if (aecpc->aec->erle.min < (offsetLevel * (-1))) {
metrics->erle.min = (short) aecpc->aec->erle.min;
} else {
metrics->erle.min = offsetLevel;
}
// RERL
if ((metrics->erl.average > offsetLevel) && (metrics->erle.average > offsetLevel)) {
stmp = metrics->erl.average + metrics->erle.average;
}
else {
stmp = offsetLevel;
}
metrics->rerl.average = stmp;
// No other statistics needed, but returned for completeness
metrics->rerl.instant = stmp;
metrics->rerl.max = stmp;
metrics->rerl.min = stmp;
// A_NLP
metrics->aNlp.instant = (short) aecpc->aec->aNlp.instant;
if ((aecpc->aec->aNlp.himean > offsetLevel) && (aecpc->aec->aNlp.average > offsetLevel)) {
// Use a mix between regular average and upper part average
dtmp = upweight * aecpc->aec->aNlp.himean + (1 - upweight) * aecpc->aec->aNlp.average;
metrics->aNlp.average = (short) dtmp;
}
else {
metrics->aNlp.average = offsetLevel;
}
metrics->aNlp.max = (short) aecpc->aec->aNlp.max;
if (aecpc->aec->aNlp.min < (offsetLevel * (-1))) {
metrics->aNlp.min = (short) aecpc->aec->aNlp.min;
}
else {
metrics->aNlp.min = offsetLevel;
}
return 0;
}
int WebRtcAec_GetDelayMetrics(void* handle, int* median, int* std) {
aecpc_t* self = handle;
int i = 0;
int delay_values = 0;
int num_delay_values = 0;
int my_median = 0;
const int kMsPerBlock = (PART_LEN * 1000) / self->splitSampFreq;
float l1_norm = 0;
if (self == NULL) {
return -1;
}
if (median == NULL) {
self->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (std == NULL) {
self->lastError = AEC_NULL_POINTER_ERROR;
return -1;
}
if (self->initFlag != initCheck) {
self->lastError = AEC_UNINITIALIZED_ERROR;
return -1;
}
if (self->aec->delay_logging_enabled == 0) {
// Logging disabled
self->lastError = AEC_UNSUPPORTED_FUNCTION_ERROR;
return -1;
}
// Get number of delay values since last update
for (i = 0; i < kMaxDelay; i++) {
num_delay_values += self->aec->delay_histogram[i];
}
if (num_delay_values == 0) {
// We have no new delay value data
*median = -1;
*std = -1;
return 0;
}
delay_values = num_delay_values >> 1; // Start value for median count down
// Get median of delay values since last update
for (i = 0; i < kMaxDelay; i++) {
delay_values -= self->aec->delay_histogram[i];
if (delay_values < 0) {
my_median = i;
break;
}
}
*median = my_median * kMsPerBlock;
// Calculate the L1 norm, with median value as central moment
for (i = 0; i < kMaxDelay; i++) {
l1_norm += (float) (fabs(i - my_median) * self->aec->delay_histogram[i]);
}
*std = (int) (l1_norm / (float) num_delay_values + 0.5f) * kMsPerBlock;
// Reset histogram
memset(self->aec->delay_histogram, 0, sizeof(self->aec->delay_histogram));
return 0;
}
WebRtc_Word32 WebRtcAec_get_version(WebRtc_Word8 *versionStr, WebRtc_Word16 len)
{
const char version[] = "AEC 2.5.0";
const short versionLen = (short)strlen(version) + 1; // +1 for null-termination
if (versionStr == NULL) {
return -1;
}
if (versionLen > len) {
return -1;
}
strncpy(versionStr, version, versionLen);
return 0;
}
WebRtc_Word32 WebRtcAec_get_error_code(void *aecInst)
{
aecpc_t *aecpc = aecInst;
if (aecpc == NULL) {
return -1;
}
return aecpc->lastError;
}
static int EstBufDelay(aecpc_t *aecpc, short msInSndCardBuf)
{
short delayNew, nSampFar, nSampSndCard;
short diff;
nSampFar = WebRtcApm_get_buffer_size(aecpc->farendBuf);
nSampSndCard = msInSndCardBuf * sampMsNb * aecpc->aec->mult;
delayNew = nSampSndCard - nSampFar;
// Account for resampling frame delay
if (aecpc->skewMode == kAecTrue && aecpc->resample == kAecTrue) {
delayNew -= kResamplingDelay;
}
if (delayNew < FRAME_LEN) {
WebRtcApm_FlushBuffer(aecpc->farendBuf, FRAME_LEN);
delayNew += FRAME_LEN;
}
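// First-order (exponential) smoothing of the delay estimate: 80% old value, 20% new.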
aecpc->filtDelay = WEBRTC_SPL_MAX(0, (short)(0.8*aecpc->filtDelay + 0.2*delayNew));
diff = aecpc->filtDelay - aecpc->knownDelay;
if (diff > 224) {
if (aecpc->lastDelayDiff < 96) {
aecpc->timeForDelayChange = 0;
}
else {
aecpc->timeForDelayChange++;
}
}
else if (diff < 96 && aecpc->knownDelay > 0) {
if (aecpc->lastDelayDiff > 224) {
aecpc->timeForDelayChange = 0;
}
else {
aecpc->timeForDelayChange++;
}
}
else {
aecpc->timeForDelayChange = 0;
}
aecpc->lastDelayDiff = diff;
if (aecpc->timeForDelayChange > 25) {
aecpc->knownDelay = WEBRTC_SPL_MAX((int)aecpc->filtDelay - 160, 0);
}
return 0;
}
static int DelayComp(aecpc_t *aecpc)
{
int nSampFar, nSampSndCard, delayNew, nSampAdd;
const int maxStuffSamp = 10 * FRAME_LEN;
nSampFar = WebRtcApm_get_buffer_size(aecpc->farendBuf);
nSampSndCard = aecpc->msInSndCardBuf * sampMsNb * aecpc->aec->mult;
delayNew = nSampSndCard - nSampFar;
// Account for resampling frame delay
if (aecpc->skewMode == kAecTrue && aecpc->resample == kAecTrue) {
delayNew -= kResamplingDelay;
}
if (delayNew > FAR_BUF_LEN - FRAME_LEN*aecpc->aec->mult) {
// The difference of the buffersizes is larger than the maximum
// allowed known delay. Compensate by stuffing the buffer.
nSampAdd = (int)(WEBRTC_SPL_MAX((int)(0.5 * nSampSndCard - nSampFar),
FRAME_LEN));
nSampAdd = WEBRTC_SPL_MIN(nSampAdd, maxStuffSamp);
WebRtcApm_StuffBuffer(aecpc->farendBuf, nSampAdd);
aecpc->delayChange = 1; // the delay needs to be updated
}
return 0;
}


@@ -1,278 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_INTERFACE_ECHO_CANCELLATION_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_INTERFACE_ECHO_CANCELLATION_H_
#include "typedefs.h"
// Errors
#define AEC_UNSPECIFIED_ERROR 12000
#define AEC_UNSUPPORTED_FUNCTION_ERROR 12001
#define AEC_UNINITIALIZED_ERROR 12002
#define AEC_NULL_POINTER_ERROR 12003
#define AEC_BAD_PARAMETER_ERROR 12004
// Warnings
#define AEC_BAD_PARAMETER_WARNING 12050
enum {
kAecNlpConservative = 0,
kAecNlpModerate,
kAecNlpAggressive
};
enum {
kAecFalse = 0,
kAecTrue
};
typedef struct {
WebRtc_Word16 nlpMode; // default kAecNlpModerate
WebRtc_Word16 skewMode; // default kAecFalse
WebRtc_Word16 metricsMode; // default kAecFalse
int delay_logging; // default kAecFalse
//float realSkew;
} AecConfig;
typedef struct {
WebRtc_Word16 instant;
WebRtc_Word16 average;
WebRtc_Word16 max;
WebRtc_Word16 min;
} AecLevel;
typedef struct {
AecLevel rerl;
AecLevel erl;
AecLevel erle;
AecLevel aNlp;
} AecMetrics;
#ifdef __cplusplus
extern "C" {
#endif
/*
* Allocates the memory needed by the AEC. The memory needs to be initialized
* separately using the WebRtcAec_Init() function.
*
* Inputs Description
* -------------------------------------------------------------------
* void **aecInst Pointer to the AEC instance to be created
* and initialized
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Create(void **aecInst);
/*
* This function releases the memory allocated by WebRtcAec_Create().
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Free(void *aecInst);
/*
* Initializes an AEC instance.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* WebRtc_Word32 sampFreq Sampling frequency of data
* WebRtc_Word32 scSampFreq Soundcard sampling frequency
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Init(void *aecInst,
WebRtc_Word32 sampFreq,
WebRtc_Word32 scSampFreq);
/*
* Inserts an 80 or 160 sample block of data into the farend buffer.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* WebRtc_Word16 *farend In buffer containing one frame of
* farend signal for L band
* WebRtc_Word16 nrOfSamples Number of samples in farend buffer
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_BufferFarend(void *aecInst,
const WebRtc_Word16 *farend,
WebRtc_Word16 nrOfSamples);
/*
* Runs the echo canceller on an 80 or 160 sample block of data.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* WebRtc_Word16 *nearend In buffer containing one frame of
* nearend+echo signal for L band
* WebRtc_Word16 *nearendH In buffer containing one frame of
* nearend+echo signal for H band
* WebRtc_Word16 nrOfSamples Number of samples in nearend buffer
* WebRtc_Word16 msInSndCardBuf Delay estimate for sound card and
* system buffers
* WebRtc_Word16 skew Difference between number of samples played
* and recorded at the soundcard (for clock skew
* compensation)
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word16 *out Out buffer, one frame of processed nearend
* for L band
* WebRtc_Word16 *outH Out buffer, one frame of processed nearend
* for H band
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_Process(void *aecInst,
const WebRtc_Word16 *nearend,
const WebRtc_Word16 *nearendH,
WebRtc_Word16 *out,
WebRtc_Word16 *outH,
WebRtc_Word16 nrOfSamples,
WebRtc_Word16 msInSndCardBuf,
WebRtc_Word32 skew);
/*
* This function enables the user to set certain parameters on-the-fly.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
* AecConfig config Config instance that contains all
* properties to be set
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_set_config(void *aecInst, AecConfig config);
/*
* Gets the on-the-fly parameters.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* AecConfig *config Pointer to the config instance that
* all properties will be written to
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_get_config(void *aecInst, AecConfig *config);
/*
* Gets the current echo status of the nearend signal.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word16 *status 0: Almost certainly nearend single-talk
* 1: Might not be nearend single-talk
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_get_echo_status(void *aecInst, WebRtc_Word16 *status);
/*
* Gets the current echo metrics for the session.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* AecMetrics *metrics Struct which will be filled out with the
* current echo metrics.
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_GetMetrics(void *aecInst, AecMetrics *metrics);
/*
* Gets the current delay metrics for the session.
*
* Inputs Description
* -------------------------------------------------------------------
* void* handle Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* int* median Delay median value.
* int* std Delay standard deviation.
*
* int return 0: OK
* -1: error
*/
int WebRtcAec_GetDelayMetrics(void* handle, int* median, int* std);
/*
* Gets the last error code.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecInst Pointer to the AEC instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 12000-12050: error code
*/
WebRtc_Word32 WebRtcAec_get_error_code(void *aecInst);
/*
* Gets a version string.
*
* Inputs Description
* -------------------------------------------------------------------
* char *versionStr Pointer to a string array
* WebRtc_Word16 len The maximum length of the string
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word8 *versionStr Pointer to a string array
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAec_get_version(WebRtc_Word8 *versionStr, WebRtc_Word16 len);
#ifdef __cplusplus
}
#endif
#endif /* WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_INTERFACE_ECHO_CANCELLATION_H_ */
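Editor's note: the comments above fully describe the expected call sequence, so a short usage sketch may help. This is an illustrative example built only from the declarations in this header (16 kHz mono, silent buffers, error paths collapsed); it is not code from the WebRTC tree, and the 40 ms soundcard delay is a made-up value.

#include <string.h>
#include "echo_cancellation.h"

int run_aec_once(void) {
  void *aec = NULL;
  WebRtc_Word16 farend[160];   /* one 10 ms far-end (render) frame at 16 kHz */
  WebRtc_Word16 nearend[160];  /* one 10 ms near-end (capture) frame at 16 kHz */
  WebRtc_Word16 out[160];
  memset(farend, 0, sizeof(farend));
  memset(nearend, 0, sizeof(nearend));
  if (WebRtcAec_Create(&aec) != 0) return -1;
  if (WebRtcAec_Init(aec, 16000, 16000) != 0) return -1;
  /* Far-end audio must be buffered before processing the matching capture frame. */
  if (WebRtcAec_BufferFarend(aec, farend, 160) != 0) return -1;
  /* nearendH/outH are only required at 32 kHz, so NULL is fine here. */
  if (WebRtcAec_Process(aec, nearend, NULL, out, NULL, 160, 40, 0) != 0) return -1;
  return WebRtcAec_Free(aec);
}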


@@ -1,233 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/* Resamples a signal to an arbitrary rate. Used by the AEC to compensate for clock
* skew by resampling the farend signal.
*/
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include "resampler.h"
#include "aec_core.h"
enum { kFrameBufferSize = FRAME_LEN * 4 };
enum { kEstimateLengthFrames = 400 };
typedef struct {
short buffer[kFrameBufferSize];
float position;
int deviceSampleRateHz;
int skewData[kEstimateLengthFrames];
int skewDataIndex;
float skewEstimate;
} resampler_t;
static int EstimateSkew(const int* rawSkew,
int size,
int absLimit,
float *skewEst);
int WebRtcAec_CreateResampler(void **resampInst)
{
resampler_t *obj = malloc(sizeof(resampler_t));
*resampInst = obj;
if (obj == NULL) {
return -1;
}
return 0;
}
int WebRtcAec_InitResampler(void *resampInst, int deviceSampleRateHz)
{
resampler_t *obj = (resampler_t*) resampInst;
memset(obj->buffer, 0, sizeof(obj->buffer));
obj->position = 0.0;
obj->deviceSampleRateHz = deviceSampleRateHz;
memset(obj->skewData, 0, sizeof(obj->skewData));
obj->skewDataIndex = 0;
obj->skewEstimate = 0.0;
return 0;
}
int WebRtcAec_FreeResampler(void *resampInst)
{
resampler_t *obj = (resampler_t*) resampInst;
free(obj);
return 0;
}
int WebRtcAec_ResampleLinear(void *resampInst,
const short *inspeech,
int size,
float skew,
short *outspeech)
{
resampler_t *obj = (resampler_t*) resampInst;
short *y;
float be, tnew, interp;
int tn, outsize, mm;
if (size < 0 || size > 2 * FRAME_LEN) {
return -1;
}
// Add new frame data in lookahead
memcpy(&obj->buffer[FRAME_LEN + kResamplingDelay],
inspeech,
size * sizeof(short));
// Sample rate ratio
be = 1 + skew;
// Loop over input frame
mm = 0;
y = &obj->buffer[FRAME_LEN]; // Point at current frame
tnew = be * mm + obj->position;
tn = (int) tnew;
while (tn < size) {
// Interpolation
interp = y[tn] + (tnew - tn) * (y[tn+1] - y[tn]);
if (interp > 32767) {
interp = 32767;
}
else if (interp < -32768) {
interp = -32768;
}
outspeech[mm] = (short) interp;
mm++;
tnew = be * mm + obj->position;
tn = (int) tnew;
}
outsize = mm;
obj->position += outsize * be - size;
// Shift buffer
memmove(obj->buffer,
&obj->buffer[size],
(kFrameBufferSize - size) * sizeof(short));
return outsize;
}
int WebRtcAec_GetSkew(void *resampInst, int rawSkew, float *skewEst)
{
resampler_t *obj = (resampler_t*)resampInst;
int err = 0;
if (obj->skewDataIndex < kEstimateLengthFrames) {
obj->skewData[obj->skewDataIndex] = rawSkew;
obj->skewDataIndex++;
}
else if (obj->skewDataIndex == kEstimateLengthFrames) {
err = EstimateSkew(obj->skewData,
kEstimateLengthFrames,
obj->deviceSampleRateHz,
skewEst);
obj->skewEstimate = *skewEst;
obj->skewDataIndex++;
}
else {
*skewEst = obj->skewEstimate;
}
return err;
}
int EstimateSkew(const int* rawSkew,
int size,
int deviceSampleRateHz,
float *skewEst)
{
const int absLimitOuter = (int)(0.04f * deviceSampleRateHz);
const int absLimitInner = (int)(0.0025f * deviceSampleRateHz);
int i = 0;
int n = 0;
float rawAvg = 0;
float err = 0;
float rawAbsDev = 0;
int upperLimit = 0;
int lowerLimit = 0;
float cumSum = 0;
float x = 0;
float x2 = 0;
float y = 0;
float xy = 0;
float xAvg = 0;
float denom = 0;
float skew = 0;
*skewEst = 0; // Set in case of error below.
for (i = 0; i < size; i++) {
if ((rawSkew[i] < absLimitOuter && rawSkew[i] > -absLimitOuter)) {
n++;
rawAvg += rawSkew[i];
}
}
if (n == 0) {
return -1;
}
assert(n > 0);
rawAvg /= n;
for (i = 0; i < size; i++) {
if ((rawSkew[i] < absLimitOuter && rawSkew[i] > -absLimitOuter)) {
err = rawSkew[i] - rawAvg;
rawAbsDev += err >= 0 ? err : -err;
}
}
assert(n > 0);
rawAbsDev /= n;
upperLimit = (int)(rawAvg + 5 * rawAbsDev + 1); // +1 for ceiling.
lowerLimit = (int)(rawAvg - 5 * rawAbsDev - 1); // -1 for floor.
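// The loop below accumulates sums for a least-squares line fit of the cumulative
// skew (cumSum) against the inlier index (n); the fitted slope is the skew estimate.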
n = 0;
for (i = 0; i < size; i++) {
if ((rawSkew[i] < absLimitInner && rawSkew[i] > -absLimitInner) ||
(rawSkew[i] < upperLimit && rawSkew[i] > lowerLimit)) {
n++;
cumSum += rawSkew[i];
x += n;
x2 += n*n;
y += cumSum;
xy += n * cumSum;
}
}
if (n == 0) {
return -1;
}
assert(n > 0);
xAvg = x / n;
denom = x2 - xAvg*x;
if (denom != 0) {
skew = (xy - xAvg*y) / denom;
}
*skewEst = skew;
return 0;
}


@@ -1,32 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_RESAMPLER_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_RESAMPLER_H_
enum { kResamplingDelay = 1 };
// Unless otherwise specified, functions return 0 on success and -1 on error
int WebRtcAec_CreateResampler(void **resampInst);
int WebRtcAec_InitResampler(void *resampInst, int deviceSampleRateHz);
int WebRtcAec_FreeResampler(void *resampInst);
// Estimates skew from raw measurement.
int WebRtcAec_GetSkew(void *resampInst, int rawSkew, float *skewEst);
// Resamples input using linear interpolation.
// Returns size of resampled array.
int WebRtcAec_ResampleLinear(void *resampInst,
const short *inspeech,
int size,
float skew,
short *outspeech);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AEC_MAIN_SOURCE_RESAMPLER_H_
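Editor's note: an equally hedged sketch of this resampler interface, again based only on the declarations above; the 48 kHz device rate and buffer sizes are illustrative, and out must be large enough for the worst-case output (echo_cancellation.c sizes it as 5 * FRAME_LEN).

#include "resampler.h"

int resample_one_frame(const short *in80, short *out, float skew) {
  void *rs = NULL;
  int produced;
  if (WebRtcAec_CreateResampler(&rs) != 0) return -1;
  if (WebRtcAec_InitResampler(rs, 48000) != 0) return -1;  /* illustrative device rate */
  /* Linearly resample one 80-sample frame; the return value is the output length. */
  produced = WebRtcAec_ResampleLinear(rs, in80, 80, skew, out);
  WebRtcAec_FreeResampler(rs);
  return produced;
}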


@@ -1,9 +0,0 @@
noinst_LTLIBRARIES = libaecm.la
libaecm_la_SOURCES = interface/echo_control_mobile.h \
echo_control_mobile.c \
aecm_core.c \
aecm_core.h
libaecm_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility


@@ -1,34 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'aecm',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
'apm_util'
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/echo_control_mobile.h',
'echo_control_mobile.c',
'aecm_core.c',
'aecm_core.h',
],
},
],
}

File diff suppressed because it is too large


@@ -1,358 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
// Performs echo control (suppression) with fft routines in fixed-point
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AECM_MAIN_SOURCE_AECM_CORE_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AECM_MAIN_SOURCE_AECM_CORE_H_
#define AECM_DYNAMIC_Q // turn on/off dynamic Q-domain
//#define AECM_WITH_ABS_APPROX
//#define AECM_SHORT // for 32 sample partition length (otherwise 64)
#include "typedefs.h"
#include "signal_processing_library.h"
// Algorithm parameters
#define FRAME_LEN 80 // Total frame length, 10 ms
#ifdef AECM_SHORT
#define PART_LEN 32 // Length of partition
#define PART_LEN_SHIFT 6 // Length of (PART_LEN * 2) in base 2
#else
#define PART_LEN 64 // Length of partition
#define PART_LEN_SHIFT 7 // Length of (PART_LEN * 2) in base 2
#endif
#define PART_LEN1 (PART_LEN + 1) // Unique fft coefficients
#define PART_LEN2 (PART_LEN << 1) // Length of partition * 2
#define PART_LEN4 (PART_LEN << 2) // Length of partition * 4
#define FAR_BUF_LEN PART_LEN4 // Length of buffers
#define MAX_DELAY 100
// Counter parameters
#ifdef AECM_SHORT
#define CONV_LEN 1024 // Convergence length used at startup
#else
#define CONV_LEN 512 // Convergence length used at startup
#endif
#define CONV_LEN2 (CONV_LEN << 1) // Convergence length * 2 used at startup
// Energy parameters
#define MAX_BUF_LEN 64 // History length of energy signals
#define FAR_ENERGY_MIN 1025 // Lowest Far energy level: At least 2 in energy
#define FAR_ENERGY_DIFF 929 // Allowed difference between max and min
#define ENERGY_DEV_OFFSET 0 // The energy error offset in Q8
#define ENERGY_DEV_TOL 400 // The energy estimation tolerance in Q8
#define FAR_ENERGY_VAD_REGION 230 // Far VAD tolerance region
// Stepsize parameters
#define MU_MIN 10 // Min stepsize 2^-MU_MIN (far end energy dependent)
#define MU_MAX 1 // Max stepsize 2^-MU_MAX (far end energy dependent)
#define MU_DIFF 9 // MU_MIN - MU_MAX
// Channel parameters
#define MIN_MSE_COUNT 20 // Min number of consecutive blocks with enough far end
// energy to compare channel estimates
#define MIN_MSE_DIFF 29 // The ratio between adapted and stored channel to
// accept a new storage (0.8 in Q-MSE_RESOLUTION)
#define MSE_RESOLUTION 5 // MSE parameter resolution
#define RESOLUTION_CHANNEL16 12 // W16 Channel in Q-RESOLUTION_CHANNEL16
#define RESOLUTION_CHANNEL32 28 // W32 Channel in Q-RESOLUTION_CHANNEL32
#define CHANNEL_VAD 16 // Minimum energy in frequency band to update channel
// Suppression gain parameters: SUPGAIN_ parameters in Q-(RESOLUTION_SUPGAIN)
#define RESOLUTION_SUPGAIN 8 // Channel in Q-(RESOLUTION_SUPGAIN)
#define SUPGAIN_DEFAULT (1 << RESOLUTION_SUPGAIN) // Default suppression gain
#define SUPGAIN_ERROR_PARAM_A 3072 // Estimation error parameter (Maximum gain) (12 in Q8)
#define SUPGAIN_ERROR_PARAM_B 1536 // Estimation error parameter (Gain before going down)
#define SUPGAIN_ERROR_PARAM_D SUPGAIN_DEFAULT // Estimation error parameter
// (Should be the same as Default) (1 in Q8)
#define SUPGAIN_EPC_DT 200 // = SUPGAIN_ERROR_PARAM_C * ENERGY_DEV_TOL
// Defines for "check delay estimation"
#define CORR_WIDTH 31 // Number of samples to correlate over.
#define CORR_MAX 16 // Maximum correlation offset
#define CORR_MAX_BUF 63
#define CORR_DEV 4
#define CORR_MAX_LEVEL 20
#define CORR_MAX_LOW 4
#define CORR_BUF_LEN (CORR_MAX << 1) + 1
// Note that CORR_WIDTH + 2*CORR_MAX <= MAX_BUF_LEN
#define ONE_Q14 (1 << 14)
// NLP defines
#define NLP_COMP_LOW 3277 // 0.2 in Q14
#define NLP_COMP_HIGH ONE_Q14 // 1 in Q14
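// Editor's note: Q14 fixed point stores value * 2^14, e.g. 0.2 * 16384 = 3276.8 -> 3277 above.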
extern const WebRtc_Word16 WebRtcAecm_kSqrtHanning[];
typedef struct {
WebRtc_Word16 real;
WebRtc_Word16 imag;
} complex16_t;
typedef struct
{
int farBufWritePos;
int farBufReadPos;
int knownDelay;
int lastKnownDelay;
int firstVAD; // Parameter to control poorly initialized channels
void *farFrameBuf;
void *nearNoisyFrameBuf;
void *nearCleanFrameBuf;
void *outFrameBuf;
WebRtc_Word16 farBuf[FAR_BUF_LEN];
WebRtc_Word16 mult;
WebRtc_UWord32 seed;
// Delay estimation variables
void* delay_estimator;
WebRtc_UWord16 currentDelay;
WebRtc_Word16 nlpFlag;
WebRtc_Word16 fixedDelay;
WebRtc_UWord32 totCount;
WebRtc_Word16 dfaCleanQDomain;
WebRtc_Word16 dfaCleanQDomainOld;
WebRtc_Word16 dfaNoisyQDomain;
WebRtc_Word16 dfaNoisyQDomainOld;
WebRtc_Word16 nearLogEnergy[MAX_BUF_LEN];
WebRtc_Word16 farLogEnergy;
WebRtc_Word16 echoAdaptLogEnergy[MAX_BUF_LEN];
WebRtc_Word16 echoStoredLogEnergy[MAX_BUF_LEN];
// The extra 16 or 32 bytes in the following buffers are for alignment-based NEON code.
// It's designed this way since the current GCC compiler can't align a buffer on 16- or
// 32-byte boundaries properly.
WebRtc_Word16 channelStored_buf[PART_LEN1 + 8];
WebRtc_Word16 channelAdapt16_buf[PART_LEN1 + 8];
WebRtc_Word32 channelAdapt32_buf[PART_LEN1 + 8];
WebRtc_Word16 xBuf_buf[PART_LEN2 + 16]; // farend
WebRtc_Word16 dBufClean_buf[PART_LEN2 + 16]; // nearend
WebRtc_Word16 dBufNoisy_buf[PART_LEN2 + 16]; // nearend
WebRtc_Word16 outBuf_buf[PART_LEN + 8];
// Pointers to the above buffers
WebRtc_Word16 *channelStored;
WebRtc_Word16 *channelAdapt16;
WebRtc_Word32 *channelAdapt32;
WebRtc_Word16 *xBuf;
WebRtc_Word16 *dBufClean;
WebRtc_Word16 *dBufNoisy;
WebRtc_Word16 *outBuf;
WebRtc_Word32 echoFilt[PART_LEN1];
WebRtc_Word16 nearFilt[PART_LEN1];
WebRtc_Word32 noiseEst[PART_LEN1];
int noiseEstTooLowCtr[PART_LEN1];
int noiseEstTooHighCtr[PART_LEN1];
WebRtc_Word16 noiseEstCtr;
WebRtc_Word16 cngMode;
WebRtc_Word32 mseAdaptOld;
WebRtc_Word32 mseStoredOld;
WebRtc_Word32 mseThreshold;
WebRtc_Word16 farEnergyMin;
WebRtc_Word16 farEnergyMax;
WebRtc_Word16 farEnergyMaxMin;
WebRtc_Word16 farEnergyVAD;
WebRtc_Word16 farEnergyMSE;
int currentVADValue;
WebRtc_Word16 vadUpdateCount;
WebRtc_Word16 startupState;
WebRtc_Word16 mseChannelCount;
WebRtc_Word16 supGain;
WebRtc_Word16 supGainOld;
WebRtc_Word16 supGainErrParamA;
WebRtc_Word16 supGainErrParamD;
WebRtc_Word16 supGainErrParamDiffAB;
WebRtc_Word16 supGainErrParamDiffBD;
#ifdef AEC_DEBUG
FILE *farFile;
FILE *nearFile;
FILE *outFile;
#endif
} AecmCore_t;
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_CreateCore(...)
//
// Allocates the memory needed by the AECM. The memory needs to be
// initialized separately using the WebRtcAecm_InitCore() function.
//
// Input:
// - aecm : Instance that should be created
//
// Output:
// - aecm : Created instance
//
// Return value : 0 - Ok
// -1 - Error
//
int WebRtcAecm_CreateCore(AecmCore_t **aecm);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_InitCore(...)
//
// This function initializes the AECM instance created with WebRtcAecm_CreateCore(...)
// Input:
// - aecm : Pointer to the AECM instance
// - samplingFreq : Sampling Frequency
//
// Output:
// - aecm : Initialized instance
//
// Return value : 0 - Ok
// -1 - Error
//
int WebRtcAecm_InitCore(AecmCore_t * const aecm, int samplingFreq);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_FreeCore(...)
//
// This function releases the memory allocated by WebRtcAecm_CreateCore()
// Input:
// - aecm : Pointer to the AECM instance
//
// Return value : 0 - Ok
// -1 - Error
//
int WebRtcAecm_FreeCore(AecmCore_t *aecm);
int WebRtcAecm_Control(AecmCore_t *aecm, int delay, int nlpFlag);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_InitEchoPathCore(...)
//
// This function resets the echo channel adaptation with the specified channel.
// Input:
// - aecm : Pointer to the AECM instance
// - echo_path : Pointer to the data that should initialize the echo path
//
// Output:
// - aecm : Initialized instance
//
void WebRtcAecm_InitEchoPathCore(AecmCore_t* aecm, const WebRtc_Word16* echo_path);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_ProcessFrame(...)
//
// This function processes frames and sends blocks to WebRtcAecm_ProcessBlock(...)
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one frame of echo signal
// - nearendNoisy : In buffer containing one frame of nearend+echo signal without NS
// - nearendClean : In buffer containing one frame of nearend+echo signal with NS
//
// Output:
// - out : Out buffer, one frame of processed nearend signal
//
int WebRtcAecm_ProcessFrame(AecmCore_t * aecm, const WebRtc_Word16 * farend,
const WebRtc_Word16 * nearendNoisy,
const WebRtc_Word16 * nearendClean,
WebRtc_Word16 * out);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_ProcessBlock(...)
//
// This function is called for every block within one frame
// This function is called by WebRtcAecm_ProcessFrame(...)
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one block of echo signal
// - nearendNoisy : In buffer containing one block of nearend+echo signal without NS
// - nearendClean : In buffer containing one block of nearend+echo signal with NS
//
// Output:
// - out : Out buffer, one block of processed nearend signal
//
int WebRtcAecm_ProcessBlock(AecmCore_t * aecm, const WebRtc_Word16 * farend,
const WebRtc_Word16 * nearendNoisy,
const WebRtc_Word16 * nearendClean,
WebRtc_Word16 * out);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_BufferFarFrame()
//
// Inserts a frame of data into farend buffer.
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one frame of farend signal
// - farLen : Length of frame
//
void WebRtcAecm_BufferFarFrame(AecmCore_t * const aecm, const WebRtc_Word16 * const farend,
const int farLen);
///////////////////////////////////////////////////////////////////////////////////////////////
// WebRtcAecm_FetchFarFrame()
//
// Read the farend buffer to account for known delay
//
// Inputs:
// - aecm : Pointer to the AECM instance
// - farend : In buffer containing one frame of farend signal
// - farLen : Length of frame
// - knownDelay : known delay
//
void WebRtcAecm_FetchFarFrame(AecmCore_t * const aecm, WebRtc_Word16 * const farend,
const int farLen, const int knownDelay);
///////////////////////////////////////////////////////////////////////////////////////////////
// Some internal functions shared by ARM NEON and generic C code:
//
void WebRtcAecm_CalcLinearEnergies(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echoEst,
WebRtc_UWord32* far_energy,
WebRtc_UWord32* echo_energy_adapt,
WebRtc_UWord32* echo_energy_stored);
void WebRtcAecm_StoreAdaptiveChannel(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echo_est);
void WebRtcAecm_ResetAdaptiveChannel(AecmCore_t *aecm);
void WebRtcAecm_WindowAndFFT(WebRtc_Word16* fft,
const WebRtc_Word16* time_signal,
complex16_t* freq_signal,
int time_signal_scaling);
void WebRtcAecm_InverseFFTAndWindow(AecmCore_t* aecm,
WebRtc_Word16* fft,
complex16_t* efw,
WebRtc_Word16* output,
const WebRtc_Word16* nearendClean);
#endif
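The declarations above define the AECM core lifecycle: allocate with WebRtcAecm_CreateCore(), initialize with WebRtcAecm_InitCore(), feed frames through WebRtcAecm_ProcessFrame(), and release with WebRtcAecm_FreeCore(). A minimal sketch of a caller, assuming 8 kHz audio, FRAME_LEN-sample buffers filled elsewhere, and no noise-suppressed signal (so nearendClean is passed as NULL, as echo_control_mobile.c does); this is an illustration, not code from the original tree:

    AecmCore_t* core = NULL;
    WebRtc_Word16 farend[FRAME_LEN];        // hypothetical caller-provided buffers
    WebRtc_Word16 nearendNoisy[FRAME_LEN];
    WebRtc_Word16 out[FRAME_LEN];

    if (WebRtcAecm_CreateCore(&core) == -1 || WebRtcAecm_InitCore(core, 8000) == -1)
    {
        // handle allocation / initialization failure
    }
    // ... fill farend and nearendNoisy with one frame of audio ...
    if (WebRtcAecm_ProcessFrame(core, farend, nearendNoisy, NULL, out) == -1)
    {
        // handle processing error
    }
    WebRtcAecm_FreeCore(core);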


@@ -1,314 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#if defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON)
#include "aecm_core.h"
#include <arm_neon.h>
#include <assert.h>
// Square root of Hanning window in Q14.
static const WebRtc_Word16 kSqrtHanningReversed[] __attribute__ ((aligned (8))) = {
16384, 16373, 16354, 16325,
16286, 16237, 16179, 16111,
16034, 15947, 15851, 15746,
15631, 15506, 15373, 15231,
15079, 14918, 14749, 14571,
14384, 14189, 13985, 13773,
13553, 13325, 13089, 12845,
12594, 12335, 12068, 11795,
11514, 11227, 10933, 10633,
10326, 10013, 9695, 9370,
9040, 8705, 8364, 8019,
7668, 7313, 6954, 6591,
6224, 5853, 5478, 5101,
4720, 4337, 3951, 3562,
3172, 2780, 2386, 1990,
1594, 1196, 798, 399
};
void WebRtcAecm_WindowAndFFT(WebRtc_Word16* fft,
const WebRtc_Word16* time_signal,
complex16_t* freq_signal,
int time_signal_scaling)
{
int i, j;
int16x4_t tmp16x4_scaling = vdup_n_s16(time_signal_scaling);
__asm__("vmov.i16 d21, #0" ::: "d21");
for(i = 0, j = 0; i < PART_LEN; i += 4, j += 8)
{
int16x4_t tmp16x4_0;
int16x4_t tmp16x4_1;
int32x4_t tmp32x4_0;
/* Window near end */
// fft[j] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT((time_signal[i]
// << time_signal_scaling), WebRtcAecm_kSqrtHanning[i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&time_signal[i]));
tmp16x4_0 = vshl_s16(tmp16x4_0, tmp16x4_scaling);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&WebRtcAecm_kSqrtHanning[i]));
tmp32x4_0 = vmull_s16(tmp16x4_0, tmp16x4_1);
__asm__("vshrn.i32 d20, %q0, #14" : : "w"(tmp32x4_0) : "d20");
__asm__("vst2.16 {d20, d21}, [%0, :128]" : : "r"(&fft[j]) : "q10");
// fft[PART_LEN2 + j] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT(
// (time_signal[PART_LEN + i] << time_signal_scaling),
// WebRtcAecm_kSqrtHanning[PART_LEN - i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&time_signal[i + PART_LEN]));
tmp16x4_0 = vshl_s16(tmp16x4_0, tmp16x4_scaling);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&kSqrtHanningReversed[i]));
tmp32x4_0 = vmull_s16(tmp16x4_0, tmp16x4_1);
__asm__("vshrn.i32 d20, %q0, #14" : : "w"(tmp32x4_0) : "d20");
__asm__("vst2.16 {d20, d21}, [%0, :128]" : : "r"(&fft[PART_LEN2 + j]) : "q10");
}
WebRtcSpl_ComplexBitReverse(fft, PART_LEN_SHIFT);
WebRtcSpl_ComplexFFT(fft, PART_LEN_SHIFT, 1);
// Take only the first PART_LEN2 samples, and switch the sign of the imaginary part.
for(i = 0, j = 0; j < PART_LEN2; i += 8, j += 16)
{
__asm__("vld2.16 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&fft[j]) : "q10", "q11");
__asm__("vneg.s16 d22, d22" : : : "q10");
__asm__("vneg.s16 d23, d23" : : : "q11");
__asm__("vst2.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&freq_signal[i].real): "q10", "q11");
}
}
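// Illustrative note (not part of the original file): the NEON loop above windows four
// samples per iteration and stores them interleaved with zeroed imaginary parts (d21).
// A rough scalar equivalent, reconstructed from the commented-out formulas above (the
// reference generic implementation lives in aecm_core.c and may differ in detail):
//
//   for (i = 0; i < PART_LEN; i++) {
//       fft[2 * i] = (WebRtc_Word16) WEBRTC_SPL_MUL_16_16_RSFT(
//           time_signal[i] << time_signal_scaling, WebRtcAecm_kSqrtHanning[i], 14);
//       fft[2 * i + 1] = 0;
//       fft[PART_LEN2 + 2 * i] = (WebRtc_Word16) WEBRTC_SPL_MUL_16_16_RSFT(
//           time_signal[PART_LEN + i] << time_signal_scaling,
//           WebRtcAecm_kSqrtHanning[PART_LEN - i], 14);
//       fft[PART_LEN2 + 2 * i + 1] = 0;
//   }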
void WebRtcAecm_InverseFFTAndWindow(AecmCore_t* aecm,
WebRtc_Word16* fft,
complex16_t* efw,
WebRtc_Word16* output,
const WebRtc_Word16* nearendClean)
{
int i, j, outCFFT;
WebRtc_Word32 tmp32no1;
// Synthesis
for(i = 0, j = 0; i < PART_LEN; i += 4, j += 8)
{
// We overwrite two more elements in fft[], but it's ok.
__asm__("vld2.16 {d20, d21}, [%0, :128]" : : "r"(&(efw[i].real)) : "q10");
__asm__("vmov q11, q10" : : : "q10", "q11");
__asm__("vneg.s16 d23, d23" : : : "q11");
__asm__("vst2.16 {d22, d23}, [%0, :128]" : : "r"(&fft[j]): "q11");
__asm__("vrev64.16 q10, q10" : : : "q10");
__asm__("vst2.16 {d20, d21}, [%0]" : : "r"(&fft[PART_LEN4 - j - 6]): "q10");
}
fft[PART_LEN2] = efw[PART_LEN].real;
fft[PART_LEN2 + 1] = -efw[PART_LEN].imag;
// Inverse FFT, result should be scaled with outCFFT.
WebRtcSpl_ComplexBitReverse(fft, PART_LEN_SHIFT);
outCFFT = WebRtcSpl_ComplexIFFT(fft, PART_LEN_SHIFT, 1);
// Take only the real values and scale with outCFFT.
for (i = 0, j = 0; i < PART_LEN2; i += 8, j+= 16)
{
__asm__("vld2.16 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&fft[j]) : "q10", "q11");
__asm__("vst1.16 {d20, d21}, [%0, :128]" : : "r"(&fft[i]): "q10");
}
int32x4_t tmp32x4_2;
__asm__("vdup.32 %q0, %1" : "=w"(tmp32x4_2) : "r"((WebRtc_Word32)
(outCFFT - aecm->dfaCleanQDomain)));
for (i = 0; i < PART_LEN; i += 4)
{
int16x4_t tmp16x4_0;
int16x4_t tmp16x4_1;
int32x4_t tmp32x4_0;
int32x4_t tmp32x4_1;
// fft[i] = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16_RSFT_WITH_ROUND(
// fft[i], WebRtcAecm_kSqrtHanning[i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&fft[i]));
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&WebRtcAecm_kSqrtHanning[i]));
__asm__("vmull.s16 %q0, %P1, %P2" : "=w"(tmp32x4_0) : "w"(tmp16x4_0), "w"(tmp16x4_1));
__asm__("vrshr.s32 %q0, %q1, #14" : "=w"(tmp32x4_0) : "0"(tmp32x4_0));
// tmp32no1 = WEBRTC_SPL_SHIFT_W32((WebRtc_Word32)fft[i],
// outCFFT - aecm->dfaCleanQDomain);
__asm__("vshl.s32 %q0, %q1, %q2" : "=w"(tmp32x4_0) : "0"(tmp32x4_0), "w"(tmp32x4_2));
// fft[i] = (WebRtc_Word16)WEBRTC_SPL_SAT(WEBRTC_SPL_WORD16_MAX,
// tmp32no1 + outBuf[i], WEBRTC_SPL_WORD16_MIN);
// output[i] = fft[i];
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&aecm->outBuf[i]));
__asm__("vmovl.s16 %q0, %P1" : "=w"(tmp32x4_1) : "w"(tmp16x4_0));
__asm__("vadd.i32 %q0, %q1" : : "w"(tmp32x4_0), "w"(tmp32x4_1));
__asm__("vqshrn.s32 %P0, %q1, #0" : "=w"(tmp16x4_0) : "w"(tmp32x4_0));
__asm__("vst1.16 %P0, [%1, :64]" : : "w"(tmp16x4_0), "r"(&fft[i]));
__asm__("vst1.16 %P0, [%1, :64]" : : "w"(tmp16x4_0), "r"(&output[i]));
// tmp32no1 = WEBRTC_SPL_MUL_16_16_RSFT(
// fft[PART_LEN + i], WebRtcAecm_kSqrtHanning[PART_LEN - i], 14);
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_0) : "r"(&fft[PART_LEN + i]));
__asm__("vld1.16 %P0, [%1, :64]" : "=w"(tmp16x4_1) : "r"(&kSqrtHanningReversed[i]));
__asm__("vmull.s16 %q0, %P1, %P2" : "=w"(tmp32x4_0) : "w"(tmp16x4_0), "w"(tmp16x4_1));
__asm__("vshr.s32 %q0, %q1, #14" : "=w"(tmp32x4_0) : "0"(tmp32x4_0));
// tmp32no1 = WEBRTC_SPL_SHIFT_W32(tmp32no1, outCFFT - aecm->dfaCleanQDomain);
__asm__("vshl.s32 %q0, %q1, %q2" : "=w"(tmp32x4_0) : "0"(tmp32x4_0), "w"(tmp32x4_2));
// outBuf[i] = (WebRtc_Word16)WEBRTC_SPL_SAT(
// WEBRTC_SPL_WORD16_MAX, tmp32no1, WEBRTC_SPL_WORD16_MIN);
__asm__("vqshrn.s32 %P0, %q1, #0" : "=w"(tmp16x4_0) : "w"(tmp32x4_0));
__asm__("vst1.16 %P0, [%1, :64]" : : "w"(tmp16x4_0), "r"(&aecm->outBuf[i]));
}
// Copy the current block to the old position (outBuf is shifted elsewhere).
for (i = 0; i < PART_LEN; i += 16)
{
__asm__("vld1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->xBuf[i + PART_LEN]) : "q10");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&aecm->xBuf[i]): "q10");
}
for (i = 0; i < PART_LEN; i += 16)
{
__asm__("vld1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufNoisy[i + PART_LEN]) : "q10");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufNoisy[i]): "q10");
}
if (nearendClean != NULL) {
for (i = 0; i < PART_LEN; i += 16)
{
__asm__("vld1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufClean[i + PART_LEN]) : "q10");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->dBufClean[i]): "q10");
}
}
}
void WebRtcAecm_CalcLinearEnergies(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echo_est,
WebRtc_UWord32* far_energy,
WebRtc_UWord32* echo_energy_adapt,
WebRtc_UWord32* echo_energy_stored)
{
int i;
register WebRtc_UWord32 far_energy_r;
register WebRtc_UWord32 echo_energy_stored_r;
register WebRtc_UWord32 echo_energy_adapt_r;
uint32x4_t tmp32x4_0;
__asm__("vmov.i32 q14, #0" : : : "q14"); // far_energy
__asm__("vmov.i32 q8, #0" : : : "q8"); // echo_energy_stored
__asm__("vmov.i32 q9, #0" : : : "q9"); // echo_energy_adapt
for(i = 0; i < PART_LEN -7; i += 8)
{
// far_energy += (WebRtc_UWord32)(far_spectrum[i]);
__asm__("vld1.16 {d26, d27}, [%0]" : : "r"(&far_spectrum[i]) : "q13");
__asm__("vaddw.u16 q14, q14, d26" : : : "q14", "q13");
__asm__("vaddw.u16 q14, q14, d27" : : : "q14", "q13");
// Get estimated echo energies for adaptive channel and stored channel.
// echoEst[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
__asm__("vld1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelStored[i]) : "q12");
__asm__("vmull.u16 q10, d26, d24" : : : "q12", "q13", "q10");
__asm__("vmull.u16 q11, d27, d25" : : : "q12", "q13", "q11");
__asm__("vst1.32 {d20, d21, d22, d23}, [%0, :256]" : : "r"(&echo_est[i]):
"q10", "q11");
// echo_energy_stored += (WebRtc_UWord32)echoEst[i];
__asm__("vadd.u32 q8, q10" : : : "q10", "q8");
__asm__("vadd.u32 q8, q11" : : : "q11", "q8");
// echo_energy_adapt += WEBRTC_SPL_UMUL_16_16(
// aecm->channelAdapt16[i], far_spectrum[i]);
__asm__("vld1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelAdapt16[i]) : "q12");
__asm__("vmull.u16 q10, d26, d24" : : : "q12", "q13", "q10");
__asm__("vmull.u16 q11, d27, d25" : : : "q12", "q13", "q11");
__asm__("vadd.u32 q9, q10" : : : "q9", "q15");
__asm__("vadd.u32 q9, q11" : : : "q9", "q11");
}
__asm__("vadd.u32 d28, d29" : : : "q14");
__asm__("vpadd.u32 d28, d28" : : : "q14");
__asm__("vmov.32 %0, d28[0]" : "=r"(far_energy_r): : "q14");
__asm__("vadd.u32 d18, d19" : : : "q9");
__asm__("vpadd.u32 d18, d18" : : : "q9");
__asm__("vmov.32 %0, d18[0]" : "=r"(echo_energy_adapt_r): : "q9");
__asm__("vadd.u32 d16, d17" : : : "q8");
__asm__("vpadd.u32 d16, d16" : : : "q8");
__asm__("vmov.32 %0, d16[0]" : "=r"(echo_energy_stored_r): : "q8");
// Get estimated echo energies for adaptive channel and stored channel.
echo_est[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
*echo_energy_stored = echo_energy_stored_r + (WebRtc_UWord32)echo_est[i];
*far_energy = far_energy_r + (WebRtc_UWord32)(far_spectrum[i]);
*echo_energy_adapt = echo_energy_adapt_r + WEBRTC_SPL_UMUL_16_16(
aecm->channelAdapt16[i], far_spectrum[i]);
}
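// Illustrative note (not part of the original file): the vectorized loop in
// WebRtcAecm_CalcLinearEnergies() above accumulates, eight bins at a time, the same
// per-bin quantities that its scalar tail computes for the last bin, i.e. roughly
// (accumulator names are hypothetical):
//
//   for (i = 0; i < PART_LEN1; i++) {
//       far_energy_acc += (WebRtc_UWord32) far_spectrum[i];
//       echo_est[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
//       echo_energy_stored_acc += (WebRtc_UWord32) echo_est[i];
//       echo_energy_adapt_acc += WEBRTC_SPL_UMUL_16_16(aecm->channelAdapt16[i],
//                                                      far_spectrum[i]);
//   }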
void WebRtcAecm_StoreAdaptiveChannel(AecmCore_t* aecm,
const WebRtc_UWord16* far_spectrum,
WebRtc_Word32* echo_est)
{
int i;
// During startup we store the channel every block.
// Recalculate echo estimate.
for(i = 0; i < PART_LEN -7; i += 8)
{
// aecm->channelStored[i] = aecm->channelAdapt16[i];
// echo_est[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
__asm__("vld1.16 {d26, d27}, [%0]" : : "r"(&far_spectrum[i]) : "q13");
__asm__("vld1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelAdapt16[i]) : "q12");
__asm__("vst1.16 {d24, d25}, [%0, :128]" : : "r"(&aecm->channelStored[i]) : "q12");
__asm__("vmull.u16 q10, d26, d24" : : : "q12", "q13", "q10");
__asm__("vmull.u16 q11, d27, d25" : : : "q12", "q13", "q11");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&echo_est[i]) : "q10", "q11");
}
aecm->channelStored[i] = aecm->channelAdapt16[i];
echo_est[i] = WEBRTC_SPL_MUL_16_U16(aecm->channelStored[i], far_spectrum[i]);
}
void WebRtcAecm_ResetAdaptiveChannel(AecmCore_t* aecm)
{
int i;
for(i = 0; i < PART_LEN -7; i += 8)
{
// aecm->channelAdapt16[i] = aecm->channelStored[i];
// aecm->channelAdapt32[i] = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)
// aecm->channelStored[i], 16);
__asm__("vld1.16 {d24, d25}, [%0, :128]" : :
"r"(&aecm->channelStored[i]) : "q12");
__asm__("vst1.16 {d24, d25}, [%0, :128]" : :
"r"(&aecm->channelAdapt16[i]) : "q12");
__asm__("vshll.s16 q10, d24, #16" : : : "q12", "q13", "q10");
__asm__("vshll.s16 q11, d25, #16" : : : "q12", "q13", "q11");
__asm__("vst1.16 {d20, d21, d22, d23}, [%0, :256]" : :
"r"(&aecm->channelAdapt32[i]): "q10", "q11");
}
aecm->channelAdapt16[i] = aecm->channelStored[i];
aecm->channelAdapt32[i] = WEBRTC_SPL_LSHIFT_W32(
(WebRtc_Word32)aecm->channelStored[i], 16);
}
#endif // #if defined(WEBRTC_ANDROID) && defined(WEBRTC_ARCH_ARM_NEON)


@@ -1,800 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include <stdlib.h>
#include <string.h> // memcpy/memset/strlen/strncpy are used in this file
#include "echo_control_mobile.h"
#include "aecm_core.h"
#include "ring_buffer.h"
#ifdef AEC_DEBUG
#include <stdio.h>
#endif
#ifdef MAC_IPHONE_PRINT
#include <time.h>
#include <stdio.h>
#elif defined ARM_WINM_LOG
#include "windows.h"
extern HANDLE logFile;
#endif
#define BUF_SIZE_FRAMES 50 // buffer size (frames)
// Maximum length of resampled signal. Must be an integer multiple of frames
// (ceil(1/(1 + MIN_SKEW)*2) + 1)*FRAME_LEN
// The factor of 2 handles wb, and the + 1 is as a safety margin
#define MAX_RESAMP_LEN (5 * FRAME_LEN)
static const int kBufSizeSamp = BUF_SIZE_FRAMES * FRAME_LEN; // buffer size (samples)
static const int kSampMsNb = 8; // samples per ms in nb
// Target suppression levels for nlp modes
// log{0.001, 0.00001, 0.00000001}
static const int kInitCheck = 42;
typedef struct
{
int sampFreq;
int scSampFreq;
short bufSizeStart;
int knownDelay;
// Stores the last frame added to the farend buffer
short farendOld[2][FRAME_LEN];
short initFlag; // indicates if AEC has been initialized
// Variables used for averaging far end buffer size
short counter;
short sum;
short firstVal;
short checkBufSizeCtr;
// Variables used for delay shifts
short msInSndCardBuf;
short filtDelay;
int timeForDelayChange;
int ECstartup;
int checkBuffSize;
int delayChange;
short lastDelayDiff;
WebRtc_Word16 echoMode;
#ifdef AEC_DEBUG
FILE *bufFile;
FILE *delayFile;
FILE *preCompFile;
FILE *postCompFile;
#endif // AEC_DEBUG
// Structures
void *farendBuf;
int lastError;
AecmCore_t *aecmCore;
} aecmob_t;
// Estimates delay to set the position of the farend buffer read pointer
// (controlled by knownDelay)
static int WebRtcAecm_EstBufDelay(aecmob_t *aecmInst, short msInSndCardBuf);
// Stuffs the farend buffer if the estimated delay is too large
static int WebRtcAecm_DelayComp(aecmob_t *aecmInst);
WebRtc_Word32 WebRtcAecm_Create(void **aecmInst)
{
aecmob_t *aecm;
if (aecmInst == NULL)
{
return -1;
}
aecm = malloc(sizeof(aecmob_t));
*aecmInst = aecm;
if (aecm == NULL)
{
return -1;
}
if (WebRtcAecm_CreateCore(&aecm->aecmCore) == -1)
{
WebRtcAecm_Free(aecm);
aecm = NULL;
return -1;
}
if (WebRtcApm_CreateBuffer(&aecm->farendBuf, kBufSizeSamp) == -1)
{
WebRtcAecm_Free(aecm);
aecm = NULL;
return -1;
}
aecm->initFlag = 0;
aecm->lastError = 0;
#ifdef AEC_DEBUG
aecm->aecmCore->farFile = fopen("aecFar.pcm","wb");
aecm->aecmCore->nearFile = fopen("aecNear.pcm","wb");
aecm->aecmCore->outFile = fopen("aecOut.pcm","wb");
//aecm->aecmCore->outLpFile = fopen("aecOutLp.pcm","wb");
aecm->bufFile = fopen("aecBuf.dat", "wb");
aecm->delayFile = fopen("aecDelay.dat", "wb");
aecm->preCompFile = fopen("preComp.pcm", "wb");
aecm->postCompFile = fopen("postComp.pcm", "wb");
#endif // AEC_DEBUG
return 0;
}
WebRtc_Word32 WebRtcAecm_Free(void *aecmInst)
{
aecmob_t *aecm = aecmInst;
if (aecm == NULL)
{
return -1;
}
#ifdef AEC_DEBUG
fclose(aecm->aecmCore->farFile);
fclose(aecm->aecmCore->nearFile);
fclose(aecm->aecmCore->outFile);
//fclose(aecm->aecmCore->outLpFile);
fclose(aecm->bufFile);
fclose(aecm->delayFile);
fclose(aecm->preCompFile);
fclose(aecm->postCompFile);
#endif // AEC_DEBUG
WebRtcAecm_FreeCore(aecm->aecmCore);
WebRtcApm_FreeBuffer(aecm->farendBuf);
free(aecm);
return 0;
}
WebRtc_Word32 WebRtcAecm_Init(void *aecmInst, WebRtc_Word32 sampFreq)
{
aecmob_t *aecm = aecmInst;
AecmConfig aecConfig;
if (aecm == NULL)
{
return -1;
}
if (sampFreq != 8000 && sampFreq != 16000)
{
aecm->lastError = AECM_BAD_PARAMETER_ERROR;
return -1;
}
aecm->sampFreq = sampFreq;
// Initialize AECM core
if (WebRtcAecm_InitCore(aecm->aecmCore, aecm->sampFreq) == -1)
{
aecm->lastError = AECM_UNSPECIFIED_ERROR;
return -1;
}
// Initialize farend buffer
if (WebRtcApm_InitBuffer(aecm->farendBuf) == -1)
{
aecm->lastError = AECM_UNSPECIFIED_ERROR;
return -1;
}
aecm->initFlag = kInitCheck; // indicates that initialization has been done
aecm->delayChange = 1;
aecm->sum = 0;
aecm->counter = 0;
aecm->checkBuffSize = 1;
aecm->firstVal = 0;
aecm->ECstartup = 1;
aecm->bufSizeStart = 0;
aecm->checkBufSizeCtr = 0;
aecm->filtDelay = 0;
aecm->timeForDelayChange = 0;
aecm->knownDelay = 0;
aecm->lastDelayDiff = 0;
memset(aecm->farendOld, 0, sizeof(aecm->farendOld)); // clear both stored farend frames
// Default settings.
aecConfig.cngMode = AecmTrue;
aecConfig.echoMode = 3;
if (WebRtcAecm_set_config(aecm, aecConfig) == -1)
{
aecm->lastError = AECM_UNSPECIFIED_ERROR;
return -1;
}
return 0;
}
WebRtc_Word32 WebRtcAecm_BufferFarend(void *aecmInst, const WebRtc_Word16 *farend,
WebRtc_Word16 nrOfSamples)
{
aecmob_t *aecm = aecmInst;
WebRtc_Word32 retVal = 0;
if (aecm == NULL)
{
return -1;
}
if (farend == NULL)
{
aecm->lastError = AECM_NULL_POINTER_ERROR;
return -1;
}
if (aecm->initFlag != kInitCheck)
{
aecm->lastError = AECM_UNINITIALIZED_ERROR;
return -1;
}
if (nrOfSamples != 80 && nrOfSamples != 160)
{
aecm->lastError = AECM_BAD_PARAMETER_ERROR;
return -1;
}
// TODO: Is this really a good idea?
if (!aecm->ECstartup)
{
WebRtcAecm_DelayComp(aecm);
}
WebRtcApm_WriteBuffer(aecm->farendBuf, farend, nrOfSamples);
return retVal;
}
WebRtc_Word32 WebRtcAecm_Process(void *aecmInst, const WebRtc_Word16 *nearendNoisy,
const WebRtc_Word16 *nearendClean, WebRtc_Word16 *out,
WebRtc_Word16 nrOfSamples, WebRtc_Word16 msInSndCardBuf)
{
aecmob_t *aecm = aecmInst;
WebRtc_Word32 retVal = 0;
short i;
short farend[FRAME_LEN];
short nmbrOfFilledBuffers;
short nBlocks10ms;
short nFrames;
#ifdef AEC_DEBUG
short msInAECBuf;
#endif
#ifdef ARM_WINM_LOG
__int64 freq, start, end, diff;
unsigned int milliseconds;
DWORD temp;
#elif defined MAC_IPHONE_PRINT
// double endtime = 0, starttime = 0;
struct timeval starttime;
struct timeval endtime;
static long int timeused = 0;
static int timecount = 0;
#endif
if (aecm == NULL)
{
return -1;
}
if (nearendNoisy == NULL)
{
aecm->lastError = AECM_NULL_POINTER_ERROR;
return -1;
}
if (out == NULL)
{
aecm->lastError = AECM_NULL_POINTER_ERROR;
return -1;
}
if (aecm->initFlag != kInitCheck)
{
aecm->lastError = AECM_UNINITIALIZED_ERROR;
return -1;
}
if (nrOfSamples != 80 && nrOfSamples != 160)
{
aecm->lastError = AECM_BAD_PARAMETER_ERROR;
return -1;
}
if (msInSndCardBuf < 0)
{
msInSndCardBuf = 0;
aecm->lastError = AECM_BAD_PARAMETER_WARNING;
retVal = -1;
} else if (msInSndCardBuf > 500)
{
msInSndCardBuf = 500;
aecm->lastError = AECM_BAD_PARAMETER_WARNING;
retVal = -1;
}
msInSndCardBuf += 10;
aecm->msInSndCardBuf = msInSndCardBuf;
nFrames = nrOfSamples / FRAME_LEN;
nBlocks10ms = nFrames / aecm->aecmCore->mult;
if (aecm->ECstartup)
{
if (nearendClean == NULL)
{
memcpy(out, nearendNoisy, sizeof(short) * nrOfSamples);
} else
{
memcpy(out, nearendClean, sizeof(short) * nrOfSamples);
}
nmbrOfFilledBuffers = WebRtcApm_get_buffer_size(aecm->farendBuf) / FRAME_LEN;
// The AECM is in the start up mode
// AECM is disabled until the soundcard buffer and farend buffers are OK
// Mechanism to ensure that the soundcard buffer is reasonably stable.
if (aecm->checkBuffSize)
{
aecm->checkBufSizeCtr++;
// Before we fill up the far end buffer we require the amount of data on the
// sound card to be stable (+/-8 ms) compared to the first value. This
// comparison is made during the following 4 consecutive frames. If it seems
// to be stable then we start to fill up the far end buffer.
if (aecm->counter == 0)
{
aecm->firstVal = aecm->msInSndCardBuf;
aecm->sum = 0;
}
if (abs(aecm->firstVal - aecm->msInSndCardBuf)
< WEBRTC_SPL_MAX(0.2 * aecm->msInSndCardBuf, kSampMsNb))
{
aecm->sum += aecm->msInSndCardBuf;
aecm->counter++;
} else
{
aecm->counter = 0;
}
if (aecm->counter * nBlocks10ms >= 6)
{
// The farend buffer size is determined in blocks of 80 samples
// Use 75% of the average value of the soundcard buffer
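// (averageMs = sum / counter; expressed in 80-sample frames that is
// averageMs * kSampMsNb * mult / 80 = averageMs * mult / 10, and taking 75% of it
// yields the 3 * sum * mult / (counter * 40) expression below.)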
aecm->bufSizeStart
= WEBRTC_SPL_MIN((3 * aecm->sum
* aecm->aecmCore->mult) / (aecm->counter * 40), BUF_SIZE_FRAMES);
// buffersize has now been determined
aecm->checkBuffSize = 0;
}
if (aecm->checkBufSizeCtr * nBlocks10ms > 50)
{
// for really bad sound cards, don't disable the echo canceller for more than 0.5 sec
aecm->bufSizeStart = WEBRTC_SPL_MIN((3 * aecm->msInSndCardBuf
* aecm->aecmCore->mult) / 40, BUF_SIZE_FRAMES);
aecm->checkBuffSize = 0;
}
}
// if checkBuffSize changed in the if-statement above
if (!aecm->checkBuffSize)
{
// soundcard buffer is now reasonably stable
// When the far end buffer is filled with approximately the same amount of
// data as the amount on the sound card we end the start up phase and start
// to cancel echoes.
if (nmbrOfFilledBuffers == aecm->bufSizeStart)
{
aecm->ECstartup = 0; // Enable the AECM
} else if (nmbrOfFilledBuffers > aecm->bufSizeStart)
{
WebRtcApm_FlushBuffer(
aecm->farendBuf,
WebRtcApm_get_buffer_size(aecm->farendBuf)
- aecm->bufSizeStart * FRAME_LEN);
aecm->ECstartup = 0;
}
}
} else
{
// AECM is enabled
// Note only 1 block supported for nb and 2 blocks for wb
for (i = 0; i < nFrames; i++)
{
nmbrOfFilledBuffers = WebRtcApm_get_buffer_size(aecm->farendBuf) / FRAME_LEN;
// Check that there is data in the far end buffer
if (nmbrOfFilledBuffers > 0)
{
// Get the next 80 samples from the farend buffer
WebRtcApm_ReadBuffer(aecm->farendBuf, farend, FRAME_LEN);
// Always store the last frame for use when we run out of data
memcpy(&(aecm->farendOld[i][0]), farend, FRAME_LEN * sizeof(short));
} else
{
// We have no data so we use the last played frame
memcpy(farend, &(aecm->farendOld[i][0]), FRAME_LEN * sizeof(short));
}
// Call buffer delay estimator when all data is extracted,
// i.e. i = 0 for NB and i = 1 for WB
if ((i == 0 && aecm->sampFreq == 8000) || (i == 1 && aecm->sampFreq == 16000))
{
WebRtcAecm_EstBufDelay(aecm, aecm->msInSndCardBuf);
}
#ifdef ARM_WINM_LOG
// measure tick start
QueryPerformanceFrequency((LARGE_INTEGER*)&freq);
QueryPerformanceCounter((LARGE_INTEGER*)&start);
#elif defined MAC_IPHONE_PRINT
// starttime = clock()/(double)CLOCKS_PER_SEC;
gettimeofday(&starttime, NULL);
#endif
// Call the AECM
/*WebRtcAecm_ProcessFrame(aecm->aecmCore, farend, &nearend[FRAME_LEN * i],
&out[FRAME_LEN * i], aecm->knownDelay);*/
if (nearendClean == NULL)
{
if (WebRtcAecm_ProcessFrame(aecm->aecmCore,
farend,
&nearendNoisy[FRAME_LEN * i],
NULL,
&out[FRAME_LEN * i]) == -1)
{
return -1;
}
} else
{
if (WebRtcAecm_ProcessFrame(aecm->aecmCore,
farend,
&nearendNoisy[FRAME_LEN * i],
&nearendClean[FRAME_LEN * i],
&out[FRAME_LEN * i]) == -1)
{
return -1;
}
}
#ifdef ARM_WINM_LOG
// measure tick end
QueryPerformanceCounter((LARGE_INTEGER*)&end);
if(end > start)
{
diff = ((end - start) * 1000) / (freq/1000);
milliseconds = (unsigned int)(diff & 0xffffffff);
WriteFile (logFile, &milliseconds, sizeof(unsigned int), &temp, NULL);
}
#elif defined MAC_IPHONE_PRINT
// endtime = clock()/(double)CLOCKS_PER_SEC;
// printf("%f\n", endtime - starttime);
gettimeofday(&endtime, NULL);
if( endtime.tv_usec > starttime.tv_usec)
{
timeused += endtime.tv_usec - starttime.tv_usec;
} else
{
timeused += endtime.tv_usec + 1000000 - starttime.tv_usec;
}
if(++timecount == 1000)
{
timecount = 0;
printf("AEC: %ld\n", timeused);
timeused = 0;
}
#endif
}
}
#ifdef AEC_DEBUG
msInAECBuf = WebRtcApm_get_buffer_size(aecm->farendBuf) / (kSampMsNb*aecm->aecmCore->mult);
fwrite(&msInAECBuf, 2, 1, aecm->bufFile);
fwrite(&(aecm->knownDelay), sizeof(aecm->knownDelay), 1, aecm->delayFile);
#endif
return retVal;
}
WebRtc_Word32 WebRtcAecm_set_config(void *aecmInst, AecmConfig config)
{
aecmob_t *aecm = aecmInst;
if (aecm == NULL)
{
return -1;
}
if (aecm->initFlag != kInitCheck)
{
aecm->lastError = AECM_UNINITIALIZED_ERROR;
return -1;
}
if (config.cngMode != AecmFalse && config.cngMode != AecmTrue)
{
aecm->lastError = AECM_BAD_PARAMETER_ERROR;
return -1;
}
aecm->aecmCore->cngMode = config.cngMode;
if (config.echoMode < 0 || config.echoMode > 4)
{
aecm->lastError = AECM_BAD_PARAMETER_ERROR;
return -1;
}
aecm->echoMode = config.echoMode;
if (aecm->echoMode == 0)
{
aecm->aecmCore->supGain = SUPGAIN_DEFAULT >> 3;
aecm->aecmCore->supGainOld = SUPGAIN_DEFAULT >> 3;
aecm->aecmCore->supGainErrParamA = SUPGAIN_ERROR_PARAM_A >> 3;
aecm->aecmCore->supGainErrParamD = SUPGAIN_ERROR_PARAM_D >> 3;
aecm->aecmCore->supGainErrParamDiffAB = (SUPGAIN_ERROR_PARAM_A >> 3)
- (SUPGAIN_ERROR_PARAM_B >> 3);
aecm->aecmCore->supGainErrParamDiffBD = (SUPGAIN_ERROR_PARAM_B >> 3)
- (SUPGAIN_ERROR_PARAM_D >> 3);
} else if (aecm->echoMode == 1)
{
aecm->aecmCore->supGain = SUPGAIN_DEFAULT >> 2;
aecm->aecmCore->supGainOld = SUPGAIN_DEFAULT >> 2;
aecm->aecmCore->supGainErrParamA = SUPGAIN_ERROR_PARAM_A >> 2;
aecm->aecmCore->supGainErrParamD = SUPGAIN_ERROR_PARAM_D >> 2;
aecm->aecmCore->supGainErrParamDiffAB = (SUPGAIN_ERROR_PARAM_A >> 2)
- (SUPGAIN_ERROR_PARAM_B >> 2);
aecm->aecmCore->supGainErrParamDiffBD = (SUPGAIN_ERROR_PARAM_B >> 2)
- (SUPGAIN_ERROR_PARAM_D >> 2);
} else if (aecm->echoMode == 2)
{
aecm->aecmCore->supGain = SUPGAIN_DEFAULT >> 1;
aecm->aecmCore->supGainOld = SUPGAIN_DEFAULT >> 1;
aecm->aecmCore->supGainErrParamA = SUPGAIN_ERROR_PARAM_A >> 1;
aecm->aecmCore->supGainErrParamD = SUPGAIN_ERROR_PARAM_D >> 1;
aecm->aecmCore->supGainErrParamDiffAB = (SUPGAIN_ERROR_PARAM_A >> 1)
- (SUPGAIN_ERROR_PARAM_B >> 1);
aecm->aecmCore->supGainErrParamDiffBD = (SUPGAIN_ERROR_PARAM_B >> 1)
- (SUPGAIN_ERROR_PARAM_D >> 1);
} else if (aecm->echoMode == 3)
{
aecm->aecmCore->supGain = SUPGAIN_DEFAULT;
aecm->aecmCore->supGainOld = SUPGAIN_DEFAULT;
aecm->aecmCore->supGainErrParamA = SUPGAIN_ERROR_PARAM_A;
aecm->aecmCore->supGainErrParamD = SUPGAIN_ERROR_PARAM_D;
aecm->aecmCore->supGainErrParamDiffAB = SUPGAIN_ERROR_PARAM_A - SUPGAIN_ERROR_PARAM_B;
aecm->aecmCore->supGainErrParamDiffBD = SUPGAIN_ERROR_PARAM_B - SUPGAIN_ERROR_PARAM_D;
} else if (aecm->echoMode == 4)
{
aecm->aecmCore->supGain = SUPGAIN_DEFAULT << 1;
aecm->aecmCore->supGainOld = SUPGAIN_DEFAULT << 1;
aecm->aecmCore->supGainErrParamA = SUPGAIN_ERROR_PARAM_A << 1;
aecm->aecmCore->supGainErrParamD = SUPGAIN_ERROR_PARAM_D << 1;
aecm->aecmCore->supGainErrParamDiffAB = (SUPGAIN_ERROR_PARAM_A << 1)
- (SUPGAIN_ERROR_PARAM_B << 1);
aecm->aecmCore->supGainErrParamDiffBD = (SUPGAIN_ERROR_PARAM_B << 1)
- (SUPGAIN_ERROR_PARAM_D << 1);
}
return 0;
}
WebRtc_Word32 WebRtcAecm_get_config(void *aecmInst, AecmConfig *config)
{
aecmob_t *aecm = aecmInst;
if (aecm == NULL)
{
return -1;
}
if (config == NULL)
{
aecm->lastError = AECM_NULL_POINTER_ERROR;
return -1;
}
if (aecm->initFlag != kInitCheck)
{
aecm->lastError = AECM_UNINITIALIZED_ERROR;
return -1;
}
config->cngMode = aecm->aecmCore->cngMode;
config->echoMode = aecm->echoMode;
return 0;
}
WebRtc_Word32 WebRtcAecm_InitEchoPath(void* aecmInst,
const void* echo_path,
size_t size_bytes)
{
aecmob_t *aecm = aecmInst;
const WebRtc_Word16* echo_path_ptr = echo_path;
if (aecm == NULL)
{
return -1;
}
if (echo_path == NULL)
{
aecm->lastError = AECM_NULL_POINTER_ERROR;
return -1;
}
if (size_bytes != WebRtcAecm_echo_path_size_bytes())
{
// Input channel size does not match the size of AECM
aecm->lastError = AECM_BAD_PARAMETER_ERROR;
return -1;
}
if (aecm->initFlag != kInitCheck)
{
aecm->lastError = AECM_UNINITIALIZED_ERROR;
return -1;
}
WebRtcAecm_InitEchoPathCore(aecm->aecmCore, echo_path_ptr);
return 0;
}
WebRtc_Word32 WebRtcAecm_GetEchoPath(void* aecmInst,
void* echo_path,
size_t size_bytes)
{
aecmob_t *aecm = aecmInst;
WebRtc_Word16* echo_path_ptr = echo_path;
if (aecm == NULL)
{
return -1;
}
if (echo_path == NULL)
{
aecm->lastError = AECM_NULL_POINTER_ERROR;
return -1;
}
if (size_bytes != WebRtcAecm_echo_path_size_bytes())
{
// Input channel size does not match the size of AECM
aecm->lastError = AECM_BAD_PARAMETER_ERROR;
return -1;
}
if (aecm->initFlag != kInitCheck)
{
aecm->lastError = AECM_UNINITIALIZED_ERROR;
return -1;
}
memcpy(echo_path_ptr, aecm->aecmCore->channelStored, size_bytes);
return 0;
}
size_t WebRtcAecm_echo_path_size_bytes(void)
{
return (PART_LEN1 * sizeof(WebRtc_Word16));
}
WebRtc_Word32 WebRtcAecm_get_version(WebRtc_Word8 *versionStr, WebRtc_Word16 len)
{
const char version[] = "AECM 1.2.0";
const short versionLen = (short)strlen(version) + 1; // +1 for null-termination
if (versionStr == NULL)
{
return -1;
}
if (versionLen > len)
{
return -1;
}
strncpy(versionStr, version, versionLen);
return 0;
}
WebRtc_Word32 WebRtcAecm_get_error_code(void *aecmInst)
{
aecmob_t *aecm = aecmInst;
if (aecm == NULL)
{
return -1;
}
return aecm->lastError;
}
static int WebRtcAecm_EstBufDelay(aecmob_t *aecm, short msInSndCardBuf)
{
short delayNew, nSampFar, nSampSndCard;
short diff;
nSampFar = WebRtcApm_get_buffer_size(aecm->farendBuf);
nSampSndCard = msInSndCardBuf * kSampMsNb * aecm->aecmCore->mult;
delayNew = nSampSndCard - nSampFar;
if (delayNew < FRAME_LEN)
{
WebRtcApm_FlushBuffer(aecm->farendBuf, FRAME_LEN);
delayNew += FRAME_LEN;
}
aecm->filtDelay = WEBRTC_SPL_MAX(0, (8 * aecm->filtDelay + 2 * delayNew) / 10);
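// (i.e. filtDelay is an IIR-smoothed delay estimate: new = 0.8 * old + 0.2 * delayNew,
// floored at 0.)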
diff = aecm->filtDelay - aecm->knownDelay;
if (diff > 224)
{
if (aecm->lastDelayDiff < 96)
{
aecm->timeForDelayChange = 0;
} else
{
aecm->timeForDelayChange++;
}
} else if (diff < 96 && aecm->knownDelay > 0)
{
if (aecm->lastDelayDiff > 224)
{
aecm->timeForDelayChange = 0;
} else
{
aecm->timeForDelayChange++;
}
} else
{
aecm->timeForDelayChange = 0;
}
aecm->lastDelayDiff = diff;
if (aecm->timeForDelayChange > 25)
{
aecm->knownDelay = WEBRTC_SPL_MAX((int)aecm->filtDelay - 160, 0);
}
return 0;
}
static int WebRtcAecm_DelayComp(aecmob_t *aecm)
{
int nSampFar, nSampSndCard, delayNew, nSampAdd;
const int maxStuffSamp = 10 * FRAME_LEN;
nSampFar = WebRtcApm_get_buffer_size(aecm->farendBuf);
nSampSndCard = aecm->msInSndCardBuf * kSampMsNb * aecm->aecmCore->mult;
delayNew = nSampSndCard - nSampFar;
if (delayNew > FAR_BUF_LEN - FRAME_LEN * aecm->aecmCore->mult)
{
// The difference of the buffer sizes is larger than the maximum
// allowed known delay. Compensate by stuffing the buffer.
nSampAdd = (int)(WEBRTC_SPL_MAX(((nSampSndCard >> 1) - nSampFar),
FRAME_LEN));
nSampAdd = WEBRTC_SPL_MIN(nSampAdd, maxStuffSamp);
WebRtcApm_StuffBuffer(aecm->farendBuf, nSampAdd);
aecm->delayChange = 1; // the delay needs to be updated
}
return 0;
}


@@ -1,250 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AECM_MAIN_INTERFACE_ECHO_CONTROL_MOBILE_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AECM_MAIN_INTERFACE_ECHO_CONTROL_MOBILE_H_
#include "typedefs.h"
enum {
AecmFalse = 0,
AecmTrue
};
// Errors
#define AECM_UNSPECIFIED_ERROR 12000
#define AECM_UNSUPPORTED_FUNCTION_ERROR 12001
#define AECM_UNINITIALIZED_ERROR 12002
#define AECM_NULL_POINTER_ERROR 12003
#define AECM_BAD_PARAMETER_ERROR 12004
// Warnings
#define AECM_BAD_PARAMETER_WARNING 12100
typedef struct {
WebRtc_Word16 cngMode; // AECM_FALSE, AECM_TRUE (default)
WebRtc_Word16 echoMode; // 0, 1, 2, 3 (default), 4
} AecmConfig;
#ifdef __cplusplus
extern "C" {
#endif
/*
* Allocates the memory needed by the AECM. The memory needs to be
* initialized separately using the WebRtcAecm_Init() function.
*
* Inputs Description
* -------------------------------------------------------------------
* void **aecmInst Pointer to the AECM instance to be
* created and initialized
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_Create(void **aecmInst);
/*
* This function releases the memory allocated by WebRtcAecm_Create()
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecmInst Pointer to the AECM instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_Free(void *aecmInst);
/*
* Initializes an AECM instance.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecmInst Pointer to the AECM instance
* WebRtc_Word32 sampFreq Sampling frequency of data
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_Init(void* aecmInst,
WebRtc_Word32 sampFreq);
/*
* Inserts an 80 or 160 sample block of data into the farend buffer.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecmInst Pointer to the AECM instance
* WebRtc_Word16 *farend In buffer containing one frame of
* farend signal
* WebRtc_Word16 nrOfSamples Number of samples in farend buffer
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_BufferFarend(void* aecmInst,
const WebRtc_Word16* farend,
WebRtc_Word16 nrOfSamples);
/*
* Runs the AECM on an 80 or 160 sample block of data.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecmInst Pointer to the AECM instance
* WebRtc_Word16 *nearendNoisy In buffer containing one frame of
* reference nearend+echo signal. If
* noise reduction is active, provide
* the noisy signal here.
* WebRtc_Word16 *nearendClean In buffer containing one frame of
* nearend+echo signal. If noise
* reduction is active, provide the
* clean signal here. Otherwise pass a
* NULL pointer.
* WebRtc_Word16 nrOfSamples Number of samples in nearend buffer
* WebRtc_Word16 msInSndCardBuf Delay estimate for sound card and
* system buffers
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word16 *out Out buffer, one frame of processed nearend
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_Process(void* aecmInst,
const WebRtc_Word16* nearendNoisy,
const WebRtc_Word16* nearendClean,
WebRtc_Word16* out,
WebRtc_Word16 nrOfSamples,
WebRtc_Word16 msInSndCardBuf);
/*
* This function enables the user to set certain parameters on-the-fly
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecmInst Pointer to the AECM instance
* AecmConfig config Config instance that contains all
* properties to be set
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_set_config(void* aecmInst,
AecmConfig config);
/*
* This function enables the user to get the currently used configuration on-the-fly
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecmInst Pointer to the AECM instance
*
* Outputs Description
* -------------------------------------------------------------------
* AecmConfig *config Pointer to the config instance that
* all properties will be written to
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_get_config(void *aecmInst,
AecmConfig *config);
/*
* This function enables the user to set the echo path on-the-fly.
*
* Inputs Description
* -------------------------------------------------------------------
* void* aecmInst Pointer to the AECM instance
* void* echo_path Pointer to the echo path to be set
* size_t size_bytes Size in bytes of the echo path
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_InitEchoPath(void* aecmInst,
const void* echo_path,
size_t size_bytes);
/*
* This function enables the user to get the currently used echo path
* on-the-fly
*
* Inputs Description
* -------------------------------------------------------------------
* void* aecmInst Pointer to the AECM instance
* void* echo_path Pointer to echo path
* size_t size_bytes Size in bytes of the echo path
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_GetEchoPath(void* aecmInst,
void* echo_path,
size_t size_bytes);
/*
* This function enables the user to get the echo path size in bytes
*
* Outputs Description
* -------------------------------------------------------------------
* size_t return : size in bytes
*/
size_t WebRtcAecm_echo_path_size_bytes(void);
/*
* Gets the last error code.
*
* Inputs Description
* -------------------------------------------------------------------
* void *aecmInst Pointer to the AECM instance
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word32 return 12000-12100: error code
*/
WebRtc_Word32 WebRtcAecm_get_error_code(void *aecmInst);
/*
* Gets a version string
*
* Inputs Description
* -------------------------------------------------------------------
* char *versionStr Pointer to a string array
* WebRtc_Word16 len The maximum length of the string
*
* Outputs Description
* -------------------------------------------------------------------
* WebRtc_Word8 *versionStr Pointer to a string array
* WebRtc_Word32 return 0: OK
* -1: error
*/
WebRtc_Word32 WebRtcAecm_get_version(WebRtc_Word8 *versionStr,
WebRtc_Word16 len);
#ifdef __cplusplus
}
#endif
#endif /* WEBRTC_MODULES_AUDIO_PROCESSING_AECM_MAIN_INTERFACE_ECHO_CONTROL_MOBILE_H_ */
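A minimal usage sketch for the public API declared above, assuming narrowband (8 kHz) audio processed in 80-sample (10 ms) blocks with noise suppression disabled, so nearendClean is passed as NULL. The buffers farend and nearendNoisy and the delay estimate msInSndCardBuf are assumed to be provided by the caller; this is an illustration, not code from the original tree:

    void* aecm = NULL;
    WebRtc_Word16 out[80];

    if (WebRtcAecm_Create(&aecm) != 0 || WebRtcAecm_Init(aecm, 8000) != 0)
    {
        // inspect WebRtcAecm_get_error_code(aecm) and bail out
    }
    // For every 10 ms block: buffer the farend first, then process the nearend.
    WebRtcAecm_BufferFarend(aecm, farend, 80);
    WebRtcAecm_Process(aecm, nearendNoisy, NULL, out, 80, msInSndCardBuf);
    WebRtcAecm_Free(aecm);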


@@ -1,10 +0,0 @@
noinst_LTLIBRARIES = libagc.la
libagc_la_SOURCES = interface/gain_control.h \
analog_agc.c \
analog_agc.h \
digital_agc.c \
digital_agc.h
libagc_la_CFLAGS = $(AM_CFLAGS) $(COMMON_CFLAGS) \
-I$(top_srcdir)/src/common_audio/signal_processing_library/main/interface \
-I$(top_srcdir)/src/modules/audio_processing/utility


@@ -1,34 +0,0 @@
# Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
{
'targets': [
{
'target_name': 'agc',
'type': '<(library)',
'dependencies': [
'<(webrtc_root)/common_audio/common_audio.gyp:spl',
],
'include_dirs': [
'interface',
],
'direct_dependent_settings': {
'include_dirs': [
'interface',
],
},
'sources': [
'interface/gain_control.h',
'analog_agc.c',
'analog_agc.h',
'digital_agc.c',
'digital_agc.h',
],
},
],
}

File diff suppressed because it is too large


@@ -1,133 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_ANALOG_AGC_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_ANALOG_AGC_H_
#include "typedefs.h"
#include "gain_control.h"
#include "digital_agc.h"
//#define AGC_DEBUG
//#define MIC_LEVEL_FEEDBACK
#ifdef AGC_DEBUG
#include <stdio.h>
#endif
/* Analog Automatic Gain Control variables:
* Constant declarations (inner limits inside which no changes are done)
* In the beginning the range is narrower to widen as soon as the measure
* 'Rxx160_LP' is inside it. Currently the starting limits are -22.2+/-1dBm0
* and the final limits -22.2+/-2.5dBm0. These levels make the speech signal
* go towards -25.4dBm0 (-31.4dBov). Tuned with wbfile-31.4dBov.pcm
* The limits are created by running the AGC with a file having the desired
* signal level and thereafter plotting Rxx160_LP in the dBm0-domain defined
* by out=10*log10(in/260537279.7); Set the target level to the average level
* of our measure Rxx160_LP. Remember that the levels are in blocks of 16 in
* Q(-7). (Example matlab code: round(db2pow(-21.2)*16/2^7) )
*/
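/* Worked example for the constants referenced above and below (illustrative, not part of
 * the original header): for the -22 dBfs analog target,
 * (32767 * 10^(-22/20))^2 * 16 / 2^7 = 32767^2 * 10^(-2.2) * 16 / 128 ~= 846805,
 * which is the per-block value used in analogTargetLevel (= RXX_BUFFER_LEN * 846805). */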
#define RXX_BUFFER_LEN 10
static const WebRtc_Word16 kMsecSpeechInner = 520;
static const WebRtc_Word16 kMsecSpeechOuter = 340;
static const WebRtc_Word16 kNormalVadThreshold = 400;
static const WebRtc_Word16 kAlphaShortTerm = 6; // 2^-6 ~= 0.0156
static const WebRtc_Word16 kAlphaLongTerm = 10; // 2^-10 ~= 0.000977
typedef struct
{
// Configurable parameters/variables
WebRtc_UWord32 fs; // Sampling frequency
WebRtc_Word16 compressionGaindB; // Fixed gain level in dB
WebRtc_Word16 targetLevelDbfs; // Target level in -dBfs of envelope (default -3)
WebRtc_Word16 agcMode; // Hard coded mode (adaptAna/adaptDig/fixedDig)
WebRtc_UWord8 limiterEnable; // Enabling limiter (on/off (default off))
WebRtcAgc_config_t defaultConfig;
WebRtcAgc_config_t usedConfig;
// General variables
WebRtc_Word16 initFlag;
WebRtc_Word16 lastError;
// Target level parameters
// Based on the above: analogTargetLevel = round((32767*10^(-22/20))^2*16/2^7)
WebRtc_Word32 analogTargetLevel; // = RXX_BUFFER_LEN * 846805; -22 dBfs
WebRtc_Word32 startUpperLimit; // = RXX_BUFFER_LEN * 1066064; -21 dBfs
WebRtc_Word32 startLowerLimit; // = RXX_BUFFER_LEN * 672641; -23 dBfs
WebRtc_Word32 upperPrimaryLimit; // = RXX_BUFFER_LEN * 1342095; -20 dBfs
WebRtc_Word32 lowerPrimaryLimit; // = RXX_BUFFER_LEN * 534298; -24 dBfs
WebRtc_Word32 upperSecondaryLimit;// = RXX_BUFFER_LEN * 2677832; -17 dBfs
WebRtc_Word32 lowerSecondaryLimit;// = RXX_BUFFER_LEN * 267783; -27 dBfs
WebRtc_UWord16 targetIdx; // Table index for corresponding target level
#ifdef MIC_LEVEL_FEEDBACK
WebRtc_UWord16 targetIdxOffset; // Table index offset for level compensation
#endif
WebRtc_Word16 analogTarget; // Digital reference level in ENV scale
// Analog AGC specific variables
WebRtc_Word32 filterState[8]; // For downsampling wb to nb
WebRtc_Word32 upperLimit; // Upper limit for mic energy
WebRtc_Word32 lowerLimit; // Lower limit for mic energy
WebRtc_Word32 Rxx160w32; // Average energy for one frame
WebRtc_Word32 Rxx16_LPw32; // Low pass filtered subframe energies
WebRtc_Word32 Rxx160_LPw32; // Low pass filtered frame energies
WebRtc_Word32 Rxx16_LPw32Max; // Keeps track of largest energy subframe
WebRtc_Word32 Rxx16_vectorw32[RXX_BUFFER_LEN];// Array with subframe energies
WebRtc_Word32 Rxx16w32_array[2][5];// Energy values of microphone signal
WebRtc_Word32 env[2][10]; // Envelope values of subframes
WebRtc_Word16 Rxx16pos; // Current position in the Rxx16_vectorw32
WebRtc_Word16 envSum; // Filtered scaled envelope in subframes
WebRtc_Word16 vadThreshold; // Threshold for VAD decision
WebRtc_Word16 inActive; // Inactive time in milliseconds
WebRtc_Word16 msTooLow; // Milliseconds of speech at a too low level
WebRtc_Word16 msTooHigh; // Milliseconds of speech at a too high level
WebRtc_Word16 changeToSlowMode; // Change to slow mode after some time at target
WebRtc_Word16 firstCall; // First call to the process-function
WebRtc_Word16 msZero; // Milliseconds of zero input
WebRtc_Word16 msecSpeechOuterChange;// Min ms of speech between volume changes
WebRtc_Word16 msecSpeechInnerChange;// Min ms of speech between volume changes
WebRtc_Word16 activeSpeech; // Milliseconds of active speech
WebRtc_Word16 muteGuardMs; // Counter to prevent mute action
WebRtc_Word16 inQueue; // 10 ms batch indicator
// Microphone level variables
WebRtc_Word32 micRef; // Remember ref. mic level for virtual mic
WebRtc_UWord16 gainTableIdx; // Current position in virtual gain table
WebRtc_Word32 micGainIdx; // Gain index of mic level to increase slowly
WebRtc_Word32 micVol; // Remember volume between frames
WebRtc_Word32 maxLevel; // Max possible vol level, incl dig gain
WebRtc_Word32 maxAnalog; // Maximum possible analog volume level
WebRtc_Word32 maxInit; // Initial value of "max"
WebRtc_Word32 minLevel; // Minimum possible volume level
WebRtc_Word32 minOutput; // Minimum output volume level
WebRtc_Word32 zeroCtrlMax; // Remember max gain => don't amp low input
WebRtc_Word16 scale; // Scale factor for internal volume levels
#ifdef MIC_LEVEL_FEEDBACK
WebRtc_Word16 numBlocksMicLvlSat;
WebRtc_UWord8 micLvlSat;
#endif
// Structs for VAD and digital_agc
AgcVad_t vadMic;
DigitalAgc_t digitalAgc;
#ifdef AGC_DEBUG
FILE* fpt;
FILE* agcLog;
WebRtc_Word32 fcount;
#endif
WebRtc_Word16 lowLevelSignal;
} Agc_t;
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_ANALOG_AGC_H_


@@ -1,786 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/* digital_agc.c
*
*/
#include <string.h>
#ifdef AGC_DEBUG
#include <stdio.h>
#endif
#include "digital_agc.h"
#include "gain_control.h"
// To generate the gaintable, copy&paste the following lines to a Matlab window:
// MaxGain = 6; MinGain = 0; CompRatio = 3; Knee = 1;
// zeros = 0:31; lvl = 2.^(1-zeros);
// A = -10*log10(lvl) * (CompRatio - 1) / CompRatio;
// B = MaxGain - MinGain;
// gains = round(2^16*10.^(0.05 * (MinGain + B * ( log(exp(-Knee*A)+exp(-Knee*B)) - log(1+exp(-Knee*B)) ) / log(1/(1+exp(Knee*B))))));
// fprintf(1, '\t%i, %i, %i, %i,\n', gains);
// % Matlab code for plotting the gain and input/output level characteristic (copy/paste the following 3 lines):
// in = 10*log10(lvl); out = 20*log10(gains/65536);
// subplot(121); plot(in, out); axis([-30, 0, -5, 20]); grid on; xlabel('Input (dB)'); ylabel('Gain (dB)');
// subplot(122); plot(in, in+out); axis([-30, 0, -30, 5]); grid on; xlabel('Input (dB)'); ylabel('Output (dB)');
// zoom on;
// Generator table for y=log2(1+e^x) in Q8.
static const WebRtc_UWord16 kGenFuncTable[128] = {
256, 485, 786, 1126, 1484, 1849, 2217, 2586,
2955, 3324, 3693, 4063, 4432, 4801, 5171, 5540,
5909, 6279, 6648, 7017, 7387, 7756, 8125, 8495,
8864, 9233, 9603, 9972, 10341, 10711, 11080, 11449,
11819, 12188, 12557, 12927, 13296, 13665, 14035, 14404,
14773, 15143, 15512, 15881, 16251, 16620, 16989, 17359,
17728, 18097, 18466, 18836, 19205, 19574, 19944, 20313,
20682, 21052, 21421, 21790, 22160, 22529, 22898, 23268,
23637, 24006, 24376, 24745, 25114, 25484, 25853, 26222,
26592, 26961, 27330, 27700, 28069, 28438, 28808, 29177,
29546, 29916, 30285, 30654, 31024, 31393, 31762, 32132,
32501, 32870, 33240, 33609, 33978, 34348, 34717, 35086,
35456, 35825, 36194, 36564, 36933, 37302, 37672, 38041,
38410, 38780, 39149, 39518, 39888, 40257, 40626, 40996,
41365, 41734, 42104, 42473, 42842, 43212, 43581, 43950,
44320, 44689, 45058, 45428, 45797, 46166, 46536, 46905
};
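// Sanity check for the table above (illustrative, not in the original file): entry 0 is
// round(2^8 * log2(1 + e^0)) = round(256 * log2(2)) = 256 and entry 1 is
// round(2^8 * log2(1 + e^1)) = round(256 * 1.896) = 485, matching the first two values.
// Later code indexes the table with the integer part of a Q14 argument (intPart) and
// interpolates linearly between adjacent entries using the fractional part (fracPart).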
static const WebRtc_Word16 kAvgDecayTime = 250; // frames; < 3000
WebRtc_Word32 WebRtcAgc_CalculateGainTable(WebRtc_Word32 *gainTable, // Q16
WebRtc_Word16 digCompGaindB, // Q0
WebRtc_Word16 targetLevelDbfs,// Q0
WebRtc_UWord8 limiterEnable,
WebRtc_Word16 analogTarget) // Q0
{
// This function generates the compressor gain table used in the fixed digital part.
WebRtc_UWord32 tmpU32no1, tmpU32no2, absInLevel, logApprox;
WebRtc_Word32 inLevel, limiterLvl;
WebRtc_Word32 tmp32, tmp32no1, tmp32no2, numFIX, den, y32;
const WebRtc_UWord16 kLog10 = 54426; // log2(10) in Q14
const WebRtc_UWord16 kLog10_2 = 49321; // 10*log10(2) in Q14
const WebRtc_UWord16 kLogE_1 = 23637; // log2(e) in Q14
WebRtc_UWord16 constMaxGain;
WebRtc_UWord16 tmpU16, intPart, fracPart;
const WebRtc_Word16 kCompRatio = 3;
const WebRtc_Word16 kSoftLimiterLeft = 1;
WebRtc_Word16 limiterOffset = 0; // Limiter offset
WebRtc_Word16 limiterIdx, limiterLvlX;
WebRtc_Word16 constLinApprox, zeroGainLvl, maxGain, diffGain;
WebRtc_Word16 i, tmp16, tmp16no1;
int zeros, zerosScale;
// Constants
// kLogE_1 = 23637; // log2(e) in Q14
// kLog10 = 54426; // log2(10) in Q14
// kLog10_2 = 49321; // 10*log10(2) in Q14
// Calculate maximum digital gain and zero gain level
tmp32no1 = WEBRTC_SPL_MUL_16_16(digCompGaindB - analogTarget, kCompRatio - 1);
tmp16no1 = analogTarget - targetLevelDbfs;
tmp16no1 += WebRtcSpl_DivW32W16ResW16(tmp32no1 + (kCompRatio >> 1), kCompRatio);
maxGain = WEBRTC_SPL_MAX(tmp16no1, (analogTarget - targetLevelDbfs));
tmp32no1 = WEBRTC_SPL_MUL_16_16(maxGain, kCompRatio);
zeroGainLvl = digCompGaindB;
zeroGainLvl -= WebRtcSpl_DivW32W16ResW16(tmp32no1 + ((kCompRatio - 1) >> 1),
kCompRatio - 1);
if ((digCompGaindB <= analogTarget) && (limiterEnable))
{
zeroGainLvl += (analogTarget - digCompGaindB + kSoftLimiterLeft);
limiterOffset = 0;
}
// Calculate the difference between maximum gain and gain at 0dB0v:
// diffGain = maxGain + (compRatio-1)*zeroGainLvl/compRatio
// = (compRatio-1)*digCompGaindB/compRatio
tmp32no1 = WEBRTC_SPL_MUL_16_16(digCompGaindB, kCompRatio - 1);
diffGain = WebRtcSpl_DivW32W16ResW16(tmp32no1 + (kCompRatio >> 1), kCompRatio);
if (diffGain < 0)
{
return -1;
}
// Calculate the limiter level and index:
// limiterLvlX = analogTarget - limiterOffset
// limiterLvl = targetLevelDbfs + limiterOffset/compRatio
limiterLvlX = analogTarget - limiterOffset;
limiterIdx = 2
+ WebRtcSpl_DivW32W16ResW16(WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)limiterLvlX, 13),
WEBRTC_SPL_RSHIFT_U16(kLog10_2, 1));
tmp16no1 = WebRtcSpl_DivW32W16ResW16(limiterOffset + (kCompRatio >> 1), kCompRatio);
limiterLvl = targetLevelDbfs + tmp16no1;
// Calculate (through table lookup):
// constMaxGain = log2(1+2^(log2(e)*diffGain)); (in Q8)
constMaxGain = kGenFuncTable[diffGain]; // in Q8
// Calculate a parameter used to approximate the fractional part of 2^x with a
// piecewise linear function in Q14:
// constLinApprox = round(3/2*(4*(3-2*sqrt(2))/(log(2)^2)-0.5)*2^14);
constLinApprox = 22817; // in Q14
// Calculate a denominator used in the exponential part to convert from dB to linear scale:
// den = 20*constMaxGain (in Q8)
den = WEBRTC_SPL_MUL_16_U16(20, constMaxGain); // in Q8
for (i = 0; i < 32; i++)
{
// Calculate scaled input level (compressor):
// inLevel = fix((-constLog10_2*(compRatio-1)*(1-i)+fix(compRatio/2))/compRatio)
tmp16 = (WebRtc_Word16)WEBRTC_SPL_MUL_16_16(kCompRatio - 1, i - 1); // Q0
tmp32 = WEBRTC_SPL_MUL_16_U16(tmp16, kLog10_2) + 1; // Q14
inLevel = WebRtcSpl_DivW32W16(tmp32, kCompRatio); // Q14
// Calculate diffGain-inLevel, to map using the genFuncTable
inLevel = WEBRTC_SPL_LSHIFT_W32((WebRtc_Word32)diffGain, 14) - inLevel; // Q14
// Make calculations on abs(inLevel) and compensate for the sign afterwards.
absInLevel = (WebRtc_UWord32)WEBRTC_SPL_ABS_W32(inLevel); // Q14
// LUT with interpolation
intPart = (WebRtc_UWord16)WEBRTC_SPL_RSHIFT_U32(absInLevel, 14);
fracPart = (WebRtc_UWord16)(absInLevel & 0x00003FFF); // extract the fractional part
tmpU16 = kGenFuncTable[intPart + 1] - kGenFuncTable[intPart]; // Q8
tmpU32no1 = WEBRTC_SPL_UMUL_16_16(tmpU16, fracPart); // Q22
tmpU32no1 += WEBRTC_SPL_LSHIFT_U32((WebRtc_UWord32)kGenFuncTable[intPart], 14); // Q22
logApprox = WEBRTC_SPL_RSHIFT_U32(tmpU32no1, 8); // Q14
// Compensate for negative exponent using the relation:
// log2(1 + 2^-x) = log2(1 + 2^x) - x
if (inLevel < 0)
{
zeros = WebRtcSpl_NormU32(absInLevel);
zerosScale = 0;
if (zeros < 15)
{
// Not enough space for multiplication
tmpU32no2 = WEBRTC_SPL_RSHIFT_U32(absInLevel, 15 - zeros); // Q(zeros-1)
tmpU32no2 = WEBRTC_SPL_UMUL_32_16(tmpU32no2, kLogE_1); // Q(zeros+13)
if (zeros < 9)
{
tmpU32no1 = WEBRTC_SPL_RSHIFT_U32(tmpU32no1, 9 - zeros); // Q(zeros+13)
zerosScale = 9 - zeros;
} else
{
tmpU32no2 = WEBRTC_SPL_RSHIFT_U32(tmpU32no2, zeros - 9); // Q22
}
} else
{
tmpU32no2 = WEBRTC_SPL_UMUL_32_16(absInLevel, kLogE_1); // Q28
tmpU32no2 = WEBRTC_SPL_RSHIFT_U32(tmpU32no2, 6); // Q22
}
logApprox = 0;
if (tmpU32no2 < tmpU32no1)
{
logApprox = WEBRTC_SPL_RSHIFT_U32(tmpU32no1 - tmpU32no2, 8 - zerosScale); //Q14
}
}
numFIX = WEBRTC_SPL_LSHIFT_W32(WEBRTC_SPL_MUL_16_U16(maxGain, constMaxGain), 6); // Q14
numFIX -= WEBRTC_SPL_MUL_32_16((WebRtc_Word32)logApprox, diffGain); // Q14
// Calculate ratio
// Shift numFIX as much as possible
zeros = WebRtcSpl_NormW32(numFIX);
numFIX = WEBRTC_SPL_LSHIFT_W32(numFIX, zeros); // Q(14+zeros)
// Shift den so we end up in Qy1
tmp32no1 = WEBRTC_SPL_SHIFT_W32(den, zeros - 8); // Q(zeros)
if (numFIX < 0)
{
numFIX -= WEBRTC_SPL_RSHIFT_W32(tmp32no1, 1);
} else
{
numFIX += WEBRTC_SPL_RSHIFT_W32(tmp32no1, 1);
}
y32 = WEBRTC_SPL_DIV(numFIX, tmp32no1); // in Q14
if (limiterEnable && (i < limiterIdx))
{
tmp32 = WEBRTC_SPL_MUL_16_U16(i - 1, kLog10_2); // Q14
tmp32 -= WEBRTC_SPL_LSHIFT_W32(limiterLvl, 14); // Q14
y32 = WebRtcSpl_DivW32W16(tmp32 + 10, 20);
}
if (y32 > 39000)
{
tmp32 = WEBRTC_SPL_MUL(y32 >> 1, kLog10) + 4096; // in Q27
tmp32 = WEBRTC_SPL_RSHIFT_W32(tmp32, 13); // in Q14
} else
{
tmp32 = WEBRTC_SPL_MUL(y32, kLog10) + 8192; // in Q28
tmp32 = WEBRTC_SPL_RSHIFT_W32(tmp32, 14); // in Q14
}
tmp32 += WEBRTC_SPL_LSHIFT_W32(16, 14); // in Q14 (Make sure final output is in Q16)
// Calculate power
if (tmp32 > 0)
{
intPart = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 14);
fracPart = (WebRtc_UWord16)(tmp32 & 0x00003FFF); // in Q14
if (WEBRTC_SPL_RSHIFT_W32(fracPart, 13))
{
tmp16 = WEBRTC_SPL_LSHIFT_W16(2, 14) - constLinApprox;
tmp32no2 = WEBRTC_SPL_LSHIFT_W32(1, 14) - fracPart;
tmp32no2 = WEBRTC_SPL_MUL_32_16(tmp32no2, tmp16);
tmp32no2 = WEBRTC_SPL_RSHIFT_W32(tmp32no2, 13);
tmp32no2 = WEBRTC_SPL_LSHIFT_W32(1, 14) - tmp32no2;
} else
{
tmp16 = constLinApprox - WEBRTC_SPL_LSHIFT_W16(1, 14);
tmp32no2 = WEBRTC_SPL_MUL_32_16(fracPart, tmp16);
tmp32no2 = WEBRTC_SPL_RSHIFT_W32(tmp32no2, 13);
}
fracPart = (WebRtc_UWord16)tmp32no2;
gainTable[i] = WEBRTC_SPL_LSHIFT_W32(1, intPart)
+ WEBRTC_SPL_SHIFT_W32(fracPart, intPart - 14);
} else
{
gainTable[i] = 0;
}
}
return 0;
}
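/*
 * Illustrative sketch (not part of the WebRTC sources): the gainTable filled
 * above holds linear gain factors in Q16, so a value of 65536 corresponds to
 * unity gain (0 dB). The helpers below only show how such a table can be
 * inspected in floating point; q16_to_db and dump_gain_table are hypothetical
 * names, not WebRTC API.
 */
#include <math.h>
#include <stdio.h>

/* Convert a Q16 linear gain to dB: 65536 (1.0 in Q16) maps to 0 dB. */
static double q16_to_db(WebRtc_Word32 q16_gain)
{
    if (q16_gain <= 0)
    {
        return -100.0; /* zero entries (silence) have no finite dB value */
    }
    return 20.0 * log10((double)q16_gain / 65536.0);
}

/* Dump a 32-entry table produced by WebRtcAgc_CalculateGainTable(). */
static void dump_gain_table(const WebRtc_Word32 gainTable[32])
{
    int i;
    for (i = 0; i < 32; i++)
    {
        printf("gainTable[%2d] = %10d (%6.2f dB)\n",
               i, (int)gainTable[i], q16_to_db(gainTable[i]));
    }
}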
WebRtc_Word32 WebRtcAgc_InitDigital(DigitalAgc_t *stt, WebRtc_Word16 agcMode)
{
if (agcMode == kAgcModeFixedDigital)
{
// start at minimum to find correct gain faster
stt->capacitorSlow = 0;
} else
{
// start out with 0 dB gain
stt->capacitorSlow = 134217728; // (WebRtc_Word32)(0.125f * 32768.0f * 32768.0f);
}
stt->capacitorFast = 0;
stt->gain = 65536;
stt->gatePrevious = 0;
stt->agcMode = agcMode;
#ifdef AGC_DEBUG
stt->frameCounter = 0;
#endif
// initialize VADs
WebRtcAgc_InitVad(&stt->vadNearend);
WebRtcAgc_InitVad(&stt->vadFarend);
return 0;
}
WebRtc_Word32 WebRtcAgc_AddFarendToDigital(DigitalAgc_t *stt, const WebRtc_Word16 *in_far,
WebRtc_Word16 nrSamples)
{
// Check for valid pointer
if (stt == NULL)
{
return -1;
}
// VAD for far end
WebRtcAgc_ProcessVad(&stt->vadFarend, in_far, nrSamples);
return 0;
}
WebRtc_Word32 WebRtcAgc_ProcessDigital(DigitalAgc_t *stt, const WebRtc_Word16 *in_near,
const WebRtc_Word16 *in_near_H, WebRtc_Word16 *out,
WebRtc_Word16 *out_H, WebRtc_UWord32 FS,
WebRtc_Word16 lowlevelSignal)
{
// array for gains (one value per ms, incl start & end)
WebRtc_Word32 gains[11];
WebRtc_Word32 out_tmp, tmp32;
WebRtc_Word32 env[10];
WebRtc_Word32 nrg, max_nrg;
WebRtc_Word32 cur_level;
WebRtc_Word32 gain32, delta;
WebRtc_Word16 logratio;
WebRtc_Word16 lower_thr, upper_thr;
WebRtc_Word16 zeros, zeros_fast, frac;
WebRtc_Word16 decay;
WebRtc_Word16 gate, gain_adj;
WebRtc_Word16 k, n;
WebRtc_Word16 L, L2; // samples per 1 ms subframe (L) and its log2 (L2)
// determine number of samples per ms
if (FS == 8000)
{
L = 8;
L2 = 3;
} else if (FS == 16000)
{
L = 16;
L2 = 4;
} else if (FS == 32000)
{
L = 16;
L2 = 4;
} else
{
return -1;
}
// TODO(andrew): again, we don't need input and output pointers...
if (in_near != out)
{
// Only needed if they don't already point to the same place.
memcpy(out, in_near, 10 * L * sizeof(WebRtc_Word16));
}
if (FS == 32000)
{
if (in_near_H != out_H)
{
memcpy(out_H, in_near_H, 10 * L * sizeof(WebRtc_Word16));
}
}
// VAD for near end
logratio = WebRtcAgc_ProcessVad(&stt->vadNearend, out, L * 10);
// Account for far end VAD
if (stt->vadFarend.counter > 10)
{
tmp32 = WEBRTC_SPL_MUL_16_16(3, logratio);
logratio = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32 - stt->vadFarend.logRatio, 2);
}
// Determine decay factor depending on VAD
// upper_thr = 1.0f;
// lower_thr = 0.25f;
upper_thr = 1024; // Q10
lower_thr = 0; // Q10
if (logratio > upper_thr)
{
// decay = -2^17 / DecayTime; -> -65
decay = -65;
} else if (logratio < lower_thr)
{
decay = 0;
} else
{
// decay = (WebRtc_Word16)(((lower_thr - logratio)
// * (2^27/(DecayTime*(upper_thr-lower_thr)))) >> 10);
// SUBSTITUTED: 2^27/(DecayTime*(upper_thr-lower_thr)) -> 65
tmp32 = WEBRTC_SPL_MUL_16_16((lower_thr - logratio), 65);
decay = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 10);
}
// adjust decay factor for long silence (detected as low standard deviation)
// This is only done in the adaptive modes
if (stt->agcMode != kAgcModeFixedDigital)
{
if (stt->vadNearend.stdLongTerm < 4000)
{
decay = 0;
} else if (stt->vadNearend.stdLongTerm < 8096)
{
// decay = (WebRtc_Word16)(((stt->vadNearend.stdLongTerm - 4000) * decay) >> 12);
tmp32 = WEBRTC_SPL_MUL_16_16((stt->vadNearend.stdLongTerm - 4000), decay);
decay = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 12);
}
if (lowlevelSignal != 0)
{
decay = 0;
}
}
#ifdef AGC_DEBUG
stt->frameCounter++;
fprintf(stt->logFile, "%5.2f\t%d\t%d\t%d\t", (float)(stt->frameCounter) / 100, logratio, decay, stt->vadNearend.stdLongTerm);
#endif
// Find max amplitude per sub frame
// iterate over sub frames
for (k = 0; k < 10; k++)
{
// iterate over samples
max_nrg = 0;
for (n = 0; n < L; n++)
{
nrg = WEBRTC_SPL_MUL_16_16(out[k * L + n], out[k * L + n]);
if (nrg > max_nrg)
{
max_nrg = nrg;
}
}
env[k] = max_nrg;
}
// Calculate gain per sub frame
gains[0] = stt->gain;
for (k = 0; k < 10; k++)
{
// Fast envelope follower
// decay time = -131000 / -1000 = 131 (ms)
stt->capacitorFast = AGC_SCALEDIFF32(-1000, stt->capacitorFast, stt->capacitorFast);
if (env[k] > stt->capacitorFast)
{
stt->capacitorFast = env[k];
}
// Slow envelope follower
if (env[k] > stt->capacitorSlow)
{
// increase capacitorSlow
stt->capacitorSlow
= AGC_SCALEDIFF32(500, (env[k] - stt->capacitorSlow), stt->capacitorSlow);
} else
{
// decrease capacitorSlow
stt->capacitorSlow
= AGC_SCALEDIFF32(decay, stt->capacitorSlow, stt->capacitorSlow);
}
// use maximum of both capacitors as current level
if (stt->capacitorFast > stt->capacitorSlow)
{
cur_level = stt->capacitorFast;
} else
{
cur_level = stt->capacitorSlow;
}
// Translate signal level into gain, using a piecewise linear approximation
// find number of leading zeros
zeros = WebRtcSpl_NormU32((WebRtc_UWord32)cur_level);
if (cur_level == 0)
{
zeros = 31;
}
tmp32 = (WEBRTC_SPL_LSHIFT_W32(cur_level, zeros) & 0x7FFFFFFF);
frac = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 19); // Q12
tmp32 = WEBRTC_SPL_MUL((stt->gainTable[zeros-1] - stt->gainTable[zeros]), frac);
gains[k + 1] = stt->gainTable[zeros] + WEBRTC_SPL_RSHIFT_W32(tmp32, 12);
#ifdef AGC_DEBUG
if (k == 0)
{
fprintf(stt->logFile, "%d\t%d\t%d\t%d\t%d\n", env[0], cur_level, stt->capacitorFast, stt->capacitorSlow, zeros);
}
#endif
}
// Gate processing (lower gain during absence of speech)
zeros = WEBRTC_SPL_LSHIFT_W16(zeros, 9) - WEBRTC_SPL_RSHIFT_W16(frac, 3);
// find number of leading zeros
zeros_fast = WebRtcSpl_NormU32((WebRtc_UWord32)stt->capacitorFast);
if (stt->capacitorFast == 0)
{
zeros_fast = 31;
}
tmp32 = (WEBRTC_SPL_LSHIFT_W32(stt->capacitorFast, zeros_fast) & 0x7FFFFFFF);
zeros_fast = WEBRTC_SPL_LSHIFT_W16(zeros_fast, 9);
zeros_fast -= (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 22);
gate = 1000 + zeros_fast - zeros - stt->vadNearend.stdShortTerm;
if (gate < 0)
{
stt->gatePrevious = 0;
} else
{
tmp32 = WEBRTC_SPL_MUL_16_16(stt->gatePrevious, 7);
gate = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32((WebRtc_Word32)gate + tmp32, 3);
stt->gatePrevious = gate;
}
// gate < 0 -> no gate
// gate > 2500 -> max gate
if (gate > 0)
{
if (gate < 2500)
{
gain_adj = WEBRTC_SPL_RSHIFT_W16(2500 - gate, 5);
} else
{
gain_adj = 0;
}
for (k = 0; k < 10; k++)
{
if ((gains[k + 1] - stt->gainTable[0]) > 8388608)
{
// To prevent wraparound
tmp32 = WEBRTC_SPL_RSHIFT_W32((gains[k+1] - stt->gainTable[0]), 8);
tmp32 = WEBRTC_SPL_MUL(tmp32, (178 + gain_adj));
} else
{
tmp32 = WEBRTC_SPL_MUL((gains[k+1] - stt->gainTable[0]), (178 + gain_adj));
tmp32 = WEBRTC_SPL_RSHIFT_W32(tmp32, 8);
}
gains[k + 1] = stt->gainTable[0] + tmp32;
}
}
// Limit gain to avoid overload distortion
for (k = 0; k < 10; k++)
{
// To prevent wrap around
zeros = 10;
if (gains[k + 1] > 47453132)
{
zeros = 16 - WebRtcSpl_NormW32(gains[k + 1]);
}
gain32 = WEBRTC_SPL_RSHIFT_W32(gains[k+1], zeros) + 1;
gain32 = WEBRTC_SPL_MUL(gain32, gain32);
// check for overflow
while (AGC_MUL32(WEBRTC_SPL_RSHIFT_W32(env[k], 12) + 1, gain32)
> WEBRTC_SPL_SHIFT_W32((WebRtc_Word32)32767, 2 * (1 - zeros + 10)))
{
// multiply by 253/256 ==> -0.1 dB
if (gains[k + 1] > 8388607)
{
// Prevent wrap around
gains[k + 1] = WEBRTC_SPL_MUL(WEBRTC_SPL_RSHIFT_W32(gains[k+1], 8), 253);
} else
{
gains[k + 1] = WEBRTC_SPL_RSHIFT_W32(WEBRTC_SPL_MUL(gains[k+1], 253), 8);
}
gain32 = WEBRTC_SPL_RSHIFT_W32(gains[k+1], zeros) + 1;
gain32 = WEBRTC_SPL_MUL(gain32, gain32);
}
}
// gain reductions should be done 1 ms earlier than gain increases
for (k = 1; k < 10; k++)
{
if (gains[k] > gains[k + 1])
{
gains[k] = gains[k + 1];
}
}
// save start gain for next frame
stt->gain = gains[10];
// Apply gain
// handle first sub frame separately
delta = WEBRTC_SPL_LSHIFT_W32(gains[1] - gains[0], (4 - L2));
gain32 = WEBRTC_SPL_LSHIFT_W32(gains[0], 4);
// iterate over samples
for (n = 0; n < L; n++)
{
// For lower band
tmp32 = WEBRTC_SPL_MUL((WebRtc_Word32)out[n], WEBRTC_SPL_RSHIFT_W32(gain32 + 127, 7));
out_tmp = WEBRTC_SPL_RSHIFT_W32(tmp32 , 16);
if (out_tmp > 4095)
{
out[n] = (WebRtc_Word16)32767;
} else if (out_tmp < -4096)
{
out[n] = (WebRtc_Word16)-32768;
} else
{
tmp32 = WEBRTC_SPL_MUL((WebRtc_Word32)out[n], WEBRTC_SPL_RSHIFT_W32(gain32, 4));
out[n] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32 , 16);
}
// For higher band
if (FS == 32000)
{
tmp32 = WEBRTC_SPL_MUL((WebRtc_Word32)out_H[n],
WEBRTC_SPL_RSHIFT_W32(gain32 + 127, 7));
out_tmp = WEBRTC_SPL_RSHIFT_W32(tmp32 , 16);
if (out_tmp > 4095)
{
out_H[n] = (WebRtc_Word16)32767;
} else if (out_tmp < -4096)
{
out_H[n] = (WebRtc_Word16)-32768;
} else
{
tmp32 = WEBRTC_SPL_MUL((WebRtc_Word32)out_H[n],
WEBRTC_SPL_RSHIFT_W32(gain32, 4));
out_H[n] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32 , 16);
}
}
// advance the interpolated gain for the next sample
gain32 += delta;
}
// iterate over subframes
for (k = 1; k < 10; k++)
{
delta = WEBRTC_SPL_LSHIFT_W32(gains[k+1] - gains[k], (4 - L2));
gain32 = WEBRTC_SPL_LSHIFT_W32(gains[k], 4);
// iterate over samples
for (n = 0; n < L; n++)
{
// For lower band
tmp32 = WEBRTC_SPL_MUL((WebRtc_Word32)out[k * L + n],
WEBRTC_SPL_RSHIFT_W32(gain32, 4));
out[k * L + n] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32 , 16);
// For higher band
if (FS == 32000)
{
tmp32 = WEBRTC_SPL_MUL((WebRtc_Word32)out_H[k * L + n],
WEBRTC_SPL_RSHIFT_W32(gain32, 4));
out_H[k * L + n] = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32 , 16);
}
gain32 += delta;
}
}
return 0;
}
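/*
 * Illustrative floating-point sketch (not part of the WebRTC sources) of the
 * gain ramp applied above: within each 1 ms subframe the Q16 gain is
 * interpolated linearly from gains[k] to gains[k + 1] (gain32 starts at
 * gains[k] << 4 and delta spreads the difference over the L = 2^L2 samples
 * of the subframe). apply_gain_ramp is a hypothetical helper.
 */
static void apply_gain_ramp(WebRtc_Word16 *samples, int L,
                            WebRtc_Word32 g0, WebRtc_Word32 g1)
{
    int n;
    for (n = 0; n < L; n++)
    {
        /* Linear interpolation of the Q16 gain across the subframe. */
        double gain = (g0 + (g1 - g0) * (double)n / L) / 65536.0;
        double out = samples[n] * gain;
        /* Saturate to 16 bits, as the fixed-point code does for the first
         * subframe. */
        if (out > 32767.0)
        {
            out = 32767.0;
        } else if (out < -32768.0)
        {
            out = -32768.0;
        }
        samples[n] = (WebRtc_Word16)out;
    }
}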
void WebRtcAgc_InitVad(AgcVad_t *state)
{
WebRtc_Word16 k;
state->HPstate = 0; // state of high pass filter
state->logRatio = 0; // log( P(active) / P(inactive) )
// average input level (Q10)
state->meanLongTerm = WEBRTC_SPL_LSHIFT_W16(15, 10);
// variance of input level (Q8)
state->varianceLongTerm = WEBRTC_SPL_LSHIFT_W32(500, 8);
state->stdLongTerm = 0; // standard deviation of input level in dB
// short-term average input level (Q10)
state->meanShortTerm = WEBRTC_SPL_LSHIFT_W16(15, 10);
// short-term variance of input level (Q8)
state->varianceShortTerm = WEBRTC_SPL_LSHIFT_W32(500, 8);
state->stdShortTerm = 0; // short-term standard deviation of input level in dB
state->counter = 3; // counts updates
for (k = 0; k < 8; k++)
{
// downsampling filter
state->downState[k] = 0;
}
}
WebRtc_Word16 WebRtcAgc_ProcessVad(AgcVad_t *state, // (i) VAD state
const WebRtc_Word16 *in, // (i) Speech signal
WebRtc_Word16 nrSamples) // (i) number of samples
{
WebRtc_Word32 out, nrg, tmp32, tmp32b;
WebRtc_UWord16 tmpU16;
WebRtc_Word16 k, subfr, tmp16;
WebRtc_Word16 buf1[8];
WebRtc_Word16 buf2[4];
WebRtc_Word16 HPstate;
WebRtc_Word16 zeros, dB;
// process in 10 sub frames of 1 ms (to save on memory)
nrg = 0;
HPstate = state->HPstate;
for (subfr = 0; subfr < 10; subfr++)
{
// downsample to 4 kHz
if (nrSamples == 160)
{
for (k = 0; k < 8; k++)
{
tmp32 = (WebRtc_Word32)in[2 * k] + (WebRtc_Word32)in[2 * k + 1];
tmp32 = WEBRTC_SPL_RSHIFT_W32(tmp32, 1);
buf1[k] = (WebRtc_Word16)tmp32;
}
in += 16;
WebRtcSpl_DownsampleBy2(buf1, 8, buf2, state->downState);
} else
{
WebRtcSpl_DownsampleBy2(in, 8, buf2, state->downState);
in += 8;
}
// high pass filter and compute energy
for (k = 0; k < 4; k++)
{
out = buf2[k] + HPstate;
tmp32 = WEBRTC_SPL_MUL(600, out);
HPstate = (WebRtc_Word16)(WEBRTC_SPL_RSHIFT_W32(tmp32, 10) - buf2[k]);
tmp32 = WEBRTC_SPL_MUL(out, out);
nrg += WEBRTC_SPL_RSHIFT_W32(tmp32, 6);
}
}
state->HPstate = HPstate;
// find number of leading zeros
if (!(0xFFFF0000 & nrg))
{
zeros = 16;
} else
{
zeros = 0;
}
if (!(0xFF000000 & (nrg << zeros)))
{
zeros += 8;
}
if (!(0xF0000000 & (nrg << zeros)))
{
zeros += 4;
}
if (!(0xC0000000 & (nrg << zeros)))
{
zeros += 2;
}
if (!(0x80000000 & (nrg << zeros)))
{
zeros += 1;
}
// energy level (range {-32..30}) (Q10)
dB = WEBRTC_SPL_LSHIFT_W16(15 - zeros, 11);
// Update statistics
if (state->counter < kAvgDecayTime)
{
// decay time = AvgDecTime * 10 ms
state->counter++;
}
// update short-term estimate of mean energy level (Q10)
tmp32 = (WEBRTC_SPL_MUL_16_16(state->meanShortTerm, 15) + (WebRtc_Word32)dB);
state->meanShortTerm = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 4);
// update short-term estimate of variance in energy level (Q8)
tmp32 = WEBRTC_SPL_RSHIFT_W32(WEBRTC_SPL_MUL_16_16(dB, dB), 12);
tmp32 += WEBRTC_SPL_MUL(state->varianceShortTerm, 15);
state->varianceShortTerm = WEBRTC_SPL_RSHIFT_W32(tmp32, 4);
// update short-term estimate of standard deviation in energy level (Q10)
tmp32 = WEBRTC_SPL_MUL_16_16(state->meanShortTerm, state->meanShortTerm);
tmp32 = WEBRTC_SPL_LSHIFT_W32(state->varianceShortTerm, 12) - tmp32;
state->stdShortTerm = (WebRtc_Word16)WebRtcSpl_Sqrt(tmp32);
// update long-term estimate of mean energy level (Q10)
tmp32 = WEBRTC_SPL_MUL_16_16(state->meanLongTerm, state->counter) + (WebRtc_Word32)dB;
state->meanLongTerm = WebRtcSpl_DivW32W16ResW16(tmp32,
WEBRTC_SPL_ADD_SAT_W16(state->counter, 1));
// update long-term estimate of variance in energy level (Q8)
tmp32 = WEBRTC_SPL_RSHIFT_W32(WEBRTC_SPL_MUL_16_16(dB, dB), 12);
tmp32 += WEBRTC_SPL_MUL(state->varianceLongTerm, state->counter);
state->varianceLongTerm = WebRtcSpl_DivW32W16(tmp32,
WEBRTC_SPL_ADD_SAT_W16(state->counter, 1));
// update long-term estimate of standard deviation in energy level (Q10)
tmp32 = WEBRTC_SPL_MUL_16_16(state->meanLongTerm, state->meanLongTerm);
tmp32 = WEBRTC_SPL_LSHIFT_W32(state->varianceLongTerm, 12) - tmp32;
state->stdLongTerm = (WebRtc_Word16)WebRtcSpl_Sqrt(tmp32);
// update voice activity measure (Q10)
tmp16 = WEBRTC_SPL_LSHIFT_W16(3, 12);
tmp32 = WEBRTC_SPL_MUL_16_16(tmp16, (dB - state->meanLongTerm));
tmp32 = WebRtcSpl_DivW32W16(tmp32, state->stdLongTerm);
tmpU16 = WEBRTC_SPL_LSHIFT_U16((WebRtc_UWord16)13, 12);
tmp32b = WEBRTC_SPL_MUL_16_U16(state->logRatio, tmpU16);
tmp32 += WEBRTC_SPL_RSHIFT_W32(tmp32b, 10);
state->logRatio = (WebRtc_Word16)WEBRTC_SPL_RSHIFT_W32(tmp32, 6);
// limit
if (state->logRatio > 2048)
{
state->logRatio = 2048;
}
if (state->logRatio < -2048)
{
state->logRatio = -2048;
}
return state->logRatio; // Q10
}
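/*
 * Illustrative sketch (not part of the WebRTC sources): the bit-test cascade
 * above is a branch-based count of leading zeros, and the resulting "dB"
 * value is a coarse log2-based level in Q10, i.e. 2 * (floor(log2(nrg)) - 16)
 * in steps of 2.0 for nrg > 1. coarse_level_q10 is a hypothetical helper that
 * reproduces the same mapping.
 */
static WebRtc_Word16 coarse_level_q10(WebRtc_UWord32 nrg)
{
    int zeros = 0;
    /* Count leading zeros; like the cascade above, this yields 31 for both
     * nrg == 0 and nrg == 1. */
    while (zeros < 31 && !(nrg & (0x80000000u >> zeros)))
    {
        zeros++;
    }
    /* Energy level in Q10, range {-32..30}; (15 - zeros) << 11 written as a
     * multiplication to stay well-defined for negative values. */
    return (WebRtc_Word16)((15 - zeros) * 2048);
}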


@@ -1,76 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_DIGITAL_AGC_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_DIGITAL_AGC_H_
#ifdef AGC_DEBUG
#include <stdio.h>
#endif
#include "typedefs.h"
#include "signal_processing_library.h"
// the 32 most significant bits of A(19) * B(26) >> 13
#define AGC_MUL32(A, B) (((B)>>13)*(A) + ( ((0x00001FFF & (B))*(A)) >> 13 ))
// C + the 32 most significant bits of A * B
#define AGC_SCALEDIFF32(A, B, C) ((C) + ((B)>>16)*(A) + ( ((0x0000FFFF & (B))*(A)) >> 16 ))
typedef struct
{
WebRtc_Word32 downState[8];
WebRtc_Word16 HPstate;
WebRtc_Word16 counter;
WebRtc_Word16 logRatio; // log( P(active) / P(inactive) ) (Q10)
WebRtc_Word16 meanLongTerm; // Q10
WebRtc_Word32 varianceLongTerm; // Q8
WebRtc_Word16 stdLongTerm; // Q10
WebRtc_Word16 meanShortTerm; // Q10
WebRtc_Word32 varianceShortTerm; // Q8
WebRtc_Word16 stdShortTerm; // Q10
} AgcVad_t; // total = 54 bytes
typedef struct
{
WebRtc_Word32 capacitorSlow;
WebRtc_Word32 capacitorFast;
WebRtc_Word32 gain;
WebRtc_Word32 gainTable[32];
WebRtc_Word16 gatePrevious;
WebRtc_Word16 agcMode;
AgcVad_t vadNearend;
AgcVad_t vadFarend;
#ifdef AGC_DEBUG
FILE* logFile;
int frameCounter;
#endif
} DigitalAgc_t;
WebRtc_Word32 WebRtcAgc_InitDigital(DigitalAgc_t *digitalAgcInst, WebRtc_Word16 agcMode);
WebRtc_Word32 WebRtcAgc_ProcessDigital(DigitalAgc_t *digitalAgcInst, const WebRtc_Word16 *inNear,
const WebRtc_Word16 *inNear_H, WebRtc_Word16 *out,
WebRtc_Word16 *out_H, WebRtc_UWord32 FS,
WebRtc_Word16 lowLevelSignal);
WebRtc_Word32 WebRtcAgc_AddFarendToDigital(DigitalAgc_t *digitalAgcInst, const WebRtc_Word16 *inFar,
WebRtc_Word16 nrSamples);
void WebRtcAgc_InitVad(AgcVad_t *vadInst);
WebRtc_Word16 WebRtcAgc_ProcessVad(AgcVad_t *vadInst, // (i) VAD state
const WebRtc_Word16 *in, // (i) Speech signal
WebRtc_Word16 nrSamples); // (i) number of samples
WebRtc_Word32 WebRtcAgc_CalculateGainTable(WebRtc_Word32 *gainTable, // Q16
WebRtc_Word16 compressionGaindB, // Q0 (in dB)
WebRtc_Word16 targetLevelDbfs,// Q0 (in dB)
WebRtc_UWord8 limiterEnable, WebRtc_Word16 analogTarget);
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_SOURCE_DIGITAL_AGC_H_
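/*
 * Illustrative sketch (not part of the WebRTC sources): for non-negative
 * operands in their intended ranges, the two macros above split a wide
 * product so that it fits in 32-bit arithmetic, and are equivalent to the
 * 64-bit reference expressions below. agc_mul32_ref and agc_scalediff32_ref
 * are hypothetical helpers for checking that equivalence.
 */
static WebRtc_Word32 agc_mul32_ref(WebRtc_Word32 a, WebRtc_Word32 b)
{
    /* AGC_MUL32(A, B) ~= (A * B) >> 13 */
    return (WebRtc_Word32)(((long long)a * b) >> 13);
}

static WebRtc_Word32 agc_scalediff32_ref(WebRtc_Word32 a, WebRtc_Word32 b,
                                         WebRtc_Word32 c)
{
    /* AGC_SCALEDIFF32(A, B, C) ~= C + ((A * B) >> 16) */
    return (WebRtc_Word32)(c + (((long long)a * b) >> 16));
}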


@@ -1,273 +0,0 @@
/*
* Copyright (c) 2011 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_INTERFACE_GAIN_CONTROL_H_
#define WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_INTERFACE_GAIN_CONTROL_H_
#include "typedefs.h"
// Errors
#define AGC_UNSPECIFIED_ERROR 18000
#define AGC_UNSUPPORTED_FUNCTION_ERROR 18001
#define AGC_UNINITIALIZED_ERROR 18002
#define AGC_NULL_POINTER_ERROR 18003
#define AGC_BAD_PARAMETER_ERROR 18004
// Warnings
#define AGC_BAD_PARAMETER_WARNING 18050
enum
{
kAgcModeUnchanged,
kAgcModeAdaptiveAnalog,
kAgcModeAdaptiveDigital,
kAgcModeFixedDigital
};
enum
{
kAgcFalse = 0,
kAgcTrue
};
typedef struct
{
WebRtc_Word16 targetLevelDbfs; // default 3 (-3 dBOv)
WebRtc_Word16 compressionGaindB; // default 9 dB
WebRtc_UWord8 limiterEnable; // default kAgcTrue (on)
} WebRtcAgc_config_t;
#if defined(__cplusplus)
extern "C"
{
#endif
/*
* This function processes a 10/20ms frame of far-end speech to determine
* if there is active speech. Far-end speech length can be either 10ms or
* 20ms. The length of the input speech vector must be given in samples
* (80/160 when FS=8000, and 160/320 when FS=16000 or FS=32000).
*
* Input:
* - agcInst : AGC instance.
* - inFar : Far-end input speech vector (10 or 20ms)
* - samples : Number of samples in input vector
*
* Return value:
* : 0 - Normal operation.
* : -1 - Error
*/
int WebRtcAgc_AddFarend(void* agcInst,
const WebRtc_Word16* inFar,
WebRtc_Word16 samples);
/*
* This function processes a 10/20ms frame of microphone speech to determine
* if there is active speech. Microphone speech length can be either 10ms or
* 20ms. The length of the input speech vector must be given in samples
* (80/160 when FS=8000, and 160/320 when FS=16000 or FS=32000). For very low
* input levels, the input signal is increased in level by multiplying and
* overwriting the samples in inMic[].
*
* This function should be called before any further processing of the
* near-end microphone signal.
*
* Input:
* - agcInst : AGC instance.
* - inMic : Microphone input speech vector (10 or 20 ms) for
* L band
* - inMic_H : Microphone input speech vector (10 or 20 ms) for
* H band
* - samples : Number of samples in input vector
*
* Return value:
* : 0 - Normal operation.
* : -1 - Error
*/
int WebRtcAgc_AddMic(void* agcInst,
WebRtc_Word16* inMic,
WebRtc_Word16* inMic_H,
WebRtc_Word16 samples);
/*
* This function replaces the analog microphone with a virtual one.
* It is a digital gain applied to the input signal and is used in the
* agcAdaptiveDigital mode where no microphone level is adjustable.
* Microphone speech length can be either 10ms or 20ms. The length of the
* input speech vector must be given in samples (80/160 when FS=8000, and
* 160/320 when FS=16000 or FS=32000).
*
* Input:
* - agcInst : AGC instance.
* - inMic : Microphone input speech vector for (10 or 20 ms)
* L band
* - inMic_H : Microphone input speech vector for (10 or 20 ms)
* H band
* - samples : Number of samples in input vector
* - micLevelIn : Input level of microphone (static)
*
* Output:
* - inMic : Microphone output after processing (L band)
* - inMic_H : Microphone output after processing (H band)
* - micLevelOut : Adjusted microphone level after processing
*
* Return value:
* : 0 - Normal operation.
* : -1 - Error
*/
int WebRtcAgc_VirtualMic(void* agcInst,
WebRtc_Word16* inMic,
WebRtc_Word16* inMic_H,
WebRtc_Word16 samples,
WebRtc_Word32 micLevelIn,
WebRtc_Word32* micLevelOut);
/*
* This function processes a 10/20ms frame and adjusts (normalizes) the gain
* both analog and digitally. The gain adjustments are done only during
* active periods of speech. The input speech length can be either 10ms or
* 20ms and the output is of the same length. The length of the speech
* vectors must be given in samples (80/160 when FS=8000, and 160/320 when
* FS=16000 or FS=32000). The echo parameter can be used to ensure the AGC will
* not adjust upward in the presence of echo.
*
* This function should be called after processing the near-end microphone
* signal, in any case after any echo cancellation.
*
* Input:
* - agcInst : AGC instance
* - inNear : Near-end input speech vector (10 or 20 ms) for
* L band
* - inNear_H : Near-end input speech vector (10 or 20 ms) for
* H band
* - samples : Number of samples in input/output vector
* - inMicLevel : Current microphone volume level
* - echo : Set to 0 if the signal passed to add_mic is
* almost certainly free of echo; otherwise set
* to 1. If you have no information regarding echo
* set to 0.
*
* Output:
* - outMicLevel : Adjusted microphone volume level
* - out : Gain-adjusted near-end speech vector (L band)
* : May be the same vector as the input.
* - out_H : Gain-adjusted near-end speech vector (H band)
* - saturationWarning : A returned value of 1 indicates a saturation event
* has occurred and the volume cannot be further
* reduced. Otherwise will be set to 0.
*
* Return value:
* : 0 - Normal operation.
* : -1 - Error
*/
int WebRtcAgc_Process(void* agcInst,
const WebRtc_Word16* inNear,
const WebRtc_Word16* inNear_H,
WebRtc_Word16 samples,
WebRtc_Word16* out,
WebRtc_Word16* out_H,
WebRtc_Word32 inMicLevel,
WebRtc_Word32* outMicLevel,
WebRtc_Word16 echo,
WebRtc_UWord8* saturationWarning);
/*
* This function sets the config parameters (targetLevelDbfs,
* compressionGaindB and limiterEnable).
*
* Input:
* - agcInst : AGC instance
* - config : config struct
*
* Output:
*
* Return value:
* : 0 - Normal operation.
* : -1 - Error
*/
int WebRtcAgc_set_config(void* agcInst, WebRtcAgc_config_t config);
/*
* This function returns the config parameters (targetLevelDbfs,
* compressionGaindB and limiterEnable).
*
* Input:
* - agcInst : AGC instance
*
* Output:
* - config : config struct
*
* Return value:
* : 0 - Normal operation.
* : -1 - Error
*/
int WebRtcAgc_get_config(void* agcInst, WebRtcAgc_config_t* config);
/*
* This function creates an AGC instance, which will contain the state
* information for one (duplex) channel.
*
* Return value : AGC instance if successful
* : 0 (i.e., a NULL pointer) if unsuccessful
*/
int WebRtcAgc_Create(void **agcInst);
/*
* This function frees the AGC instance created at the beginning.
*
* Input:
* - agcInst : AGC instance.
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcAgc_Free(void *agcInst);
/*
* This function initializes an AGC instance.
*
* Input:
* - agcInst : AGC instance.
* - minLevel : Minimum possible mic level
* - maxLevel : Maximum possible mic level
* - agcMode : 0 - Unchanged
* : 1 - Adaptive Analog Automatic Gain Control -3dBOv
* : 2 - Adaptive Digital Automatic Gain Control -3dBOv
* : 3 - Fixed Digital Gain 0dB
* - fs : Sampling frequency
*
* Return value : 0 - Ok
* -1 - Error
*/
int WebRtcAgc_Init(void *agcInst,
WebRtc_Word32 minLevel,
WebRtc_Word32 maxLevel,
WebRtc_Word16 agcMode,
WebRtc_UWord32 fs);
/*
* This function returns a text string containing the version.
*
* Input:
* - length : Length of the char array pointed to by versionStr
* Output:
* - versionStr : Pointer to a char array to which the version
* : string will be copied.
*
* Return value : 0 - OK
* -1 - Error
*/
int WebRtcAgc_Version(WebRtc_Word8 *versionStr, WebRtc_Word16 length);
#if defined(__cplusplus)
}
#endif
#endif // WEBRTC_MODULES_AUDIO_PROCESSING_AGC_MAIN_INTERFACE_GAIN_CONTROL_H_
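/*
 * Illustrative usage sketch (not part of the WebRTC sources), assuming a
 * 16 kHz mono capture path, 10 ms frames (160 samples) and the adaptive
 * digital mode. The mic-level range 0..255, the mid-scale input level 127
 * and the helper name run_agc_frame are assumptions; a real pipeline would
 * also feed WebRtcAgc_AddFarend()/WebRtcAgc_AddMic() as documented above.
 */
static int run_agc_frame(const WebRtc_Word16 in_frame[160],
                         WebRtc_Word16 out_frame[160])
{
    void *agc = NULL;
    WebRtc_Word32 micLevelOut = 0;
    WebRtc_UWord8 saturationWarning = 0;
    WebRtc_Word16 dummy_H[160] = {0};   /* high band unused at 16 kHz */
    WebRtcAgc_config_t config;

    if (WebRtcAgc_Create(&agc) != 0)
    {
        return -1;
    }
    if (WebRtcAgc_Init(agc, 0, 255, kAgcModeAdaptiveDigital, 16000) != 0)
    {
        WebRtcAgc_Free(agc);
        return -1;
    }
    /* Documented defaults: -3 dBOv target, 9 dB compression gain, limiter on. */
    config.targetLevelDbfs = 3;
    config.compressionGaindB = 9;
    config.limiterEnable = kAgcTrue;
    if (WebRtcAgc_set_config(agc, config) != 0)
    {
        WebRtcAgc_Free(agc);
        return -1;
    }
    /* echo = 0: no information about echo (see WebRtcAgc_Process() above). */
    if (WebRtcAgc_Process(agc, in_frame, dummy_H, 160, out_frame, dummy_H,
                          127, &micLevelOut, 0, &saturationWarning) != 0)
    {
        WebRtcAgc_Free(agc);
        return -1;
    }
    WebRtcAgc_Free(agc);
    return 0;
}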

Some files were not shown because too many files have changed in this diff.