MULTI-TASK DEEP RESIDUAL ECHO SUPPRESSION WITH ECHO-AWARE LOSS
0. Contents
- Abstract
- Demos -- ICASSP 2022 AEC-Challenge blind test (near-end single-talk)
- Demos -- ICASSP 2022 AEC-Challenge blind test (far-end single-talk)
- Demos -- ICASSP 2022 AEC-Challenge blind test (double-talk)
- Demos -- Full 300 clips of the ICASSP 2022 AEC-Challenge blind test (far-end single-talk)
1. Abstract
This paper introduces the NWPU Team's entry to the ICASSP 2022 AEC Challenge. We take a hybrid approach that cascades a linear AEC with a neural post-filter: the former handles the linear echo components, while the latter suppresses the residual non-linear echo components. We use a gated convolutional F-T-LSTM neural network (GFTNN) as the backbone and shape the post-filter with a multi-task learning (MTL) framework, where a voice activity detection (VAD) module is adopted as an auxiliary task alongside echo suppression, aiming to avoid over-suppression that may cause speech distortion. Moreover, we adopt an echo-aware loss function, in which the mean square error (MSE) loss is optimized particularly for each time-frequency bin (TF-bin) according to its signal-to-echo ratio (SER), leading to further suppression of the echo. An extensive ablation study shows that the time delay estimation (TDE) module in the neural post-filter leads to better perceptual quality, and that an adaptive filter with better convergence brings a consistent performance gain for the post-filter. Besides, we find that using the linear echo as the input to our neural post-filter is a better choice than using the reference signal directly. In the ICASSP 2022 AEC-Challenge, our approach ranked 1st on word acceptance rate (WAcc) (0.817) and 3rd on both mean opinion score (MOS) (4.502) and the final score (0.864).
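To make the echo-aware loss and the MTL objective concrete, below is a minimal PyTorch sketch. It is an illustration, not the paper's exact formulation: the function names (`echo_aware_mse`, `mtl_loss`), the sigmoid mapping from SER to per-bin weight, and the constants `alpha` and `lam` are assumptions chosen for clarity.

```python
import torch
import torch.nn.functional as F

def echo_aware_mse(est_mag, target_mag, echo_mag, alpha=10.0, eps=1e-8):
    """SER-weighted MSE over TF-bins (sketch, not the paper's exact loss).

    est_mag / target_mag / echo_mag: [B, F, T] magnitude spectrograms of the
    post-filter estimate, the near-end target, and the echo component.
    Echo-dominated bins (low SER) receive a larger weight, pushing the
    network to suppress residual echo there.
    """
    ser = 10.0 * torch.log10((target_mag ** 2 + eps) / (echo_mag ** 2 + eps))
    weight = 1.0 + alpha * torch.sigmoid(-ser)  # low SER -> weight near 1 + alpha
    return (weight * (est_mag - target_mag) ** 2).mean()

def mtl_loss(est_mag, target_mag, echo_mag, vad_logits, vad_labels, lam=0.1):
    """Multi-task objective: echo-aware MSE plus an auxiliary VAD BCE term.

    vad_logits / vad_labels: [B, T] frame-level VAD predictions and targets.
    `lam` balances the auxiliary task; its value here is a placeholder.
    """
    l_aec = echo_aware_mse(est_mag, target_mag, echo_mag)
    l_vad = F.binary_cross_entropy_with_logits(vad_logits, vad_labels.float())
    return l_aec + lam * l_vad
```

The intent of the weighting is that echo-dominated bins contribute more to the loss, while the auxiliary VAD task discourages the network from silencing near-end speech, i.e. the over-suppression mentioned above.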
2. Demos -- Near-end single-talk
Models | Sample 1 | Sample 2 | Sample 3 | Sample 4 |
---|---|---|---|---|
Microphone | | | | |
Reference | | | | |
Baseline | | | | |
GFTNN-VAD-L | | | | |
3. Demos -- Far-end single-talk
Models | Sample 1 | Sample 2 | Sample 3 | Sample 4 |
---|---|---|---|---|
Microphone | | | | |
Reference | | | | |
Baseline | | | | |
GFTNN-VAD-L | | | | |
4. Demos -- Double-talk
Models | Sample 1 | Sample 2 | Sample 3 | Sample 4 |
---|---|---|---|---|
Microphone | | | | |
Reference | | | | |
Baseline | | | | |
GFTNN-VAD-L | | | | |
5. Demos -- Full 300 far-end single-talk clips
Note that 4ecd5889-aa9e-4c02-a81a-ff87ad6e9c38_farend-singletalk_mic.wav, 51bdf2f1-bb37-4eba-a5ee-39102a0fbb9e_farend-singletalk-with-movement_mic.wav and f783b002-4a43-4e89-ad6e-b9f999e8e39f_farend-singletalk_mic.wav actually belong to the double-talk scenario, so the near-end speech audible in those clips is not residual echo.
The full double-talk and near-end single-talk clip sets are about 800 MB and 200 MB respectively, so they are not included here.