
esphome-sound-level-meter's Introduction

ESPHome Sound Level Meter CI

This component measures environmental noise levels (Leq, Lmin, Lmax, Lpeak) with different frequency weightings over configurable time intervals. It is heavily based on the excellent work by Ivan Kostoski: esp32-i2s-slm (his hackaday.io project).


Typical weekly traffic noise recorded with a microphone located 50 m from a medium-traffic road:

Add it to your ESPHome config:

external_components:
  - source: github://stas-sl/esphome-sound-level-meter  # add @tag if you want to use a specific version (e.g @v1.0.0)

For configuration options see minimal-example-config.yaml or advanced-example-config.yaml:

i2s:
  bck_pin: 23
  ws_pin: 18
  din_pin: 19
  sample_rate: 48000            # default: 48000
  bits_per_sample: 32           # default: 32
  dma_buf_count: 8              # default: 8
  dma_buf_len: 256              # default: 256
  use_apll: true                # default: false

  # according to the datasheet, when the L/R pin is connected to GND
  # the mic should output its signal on the left channel;
  # however, in my experience it's the opposite: when I connect
  # L/R to GND the signal is on the right channel
  channel: right                # default: right

  # right-shift samples.
  # for example, if the mic has 24-bit resolution and
  # i2s is configured for 32 bits, the audio data will be left-aligned (MSB)
  # and the LSB padded with zeros, so you may want to shift samples right by 8 bits
  bits_shift: 8                 # default: 0

sound_level_meter:
  id: sound_level_meter1

  # update_interval specifies the interval over which to aggregate audio data.
  # you can specify a default update_interval at the top level and
  # override it per sensor
  update_interval: 60s           # default: 60s

  # you can disable (turn off) the component by default (on boot)
  # and turn it on later when needed via the sound_level_meter.turn_on/toggle actions.
  # when used with a switch this might conflict with (or be overridden by)
  # the switch's state restoration logic, so either disable restoration in the
  # switch config (then the is_on property here takes effect), or rely
  # entirely on the switch's state restoration/initialization (then any
  # value set here is ignored)
  is_on: true                   # default: true

  # buffer_size is in samples (not bytes), so for the float data type
  # the number of bytes will be buffer_size * 4
  buffer_size: 1024             # default: 1024

  # ignore audio data at startup for this long
  warmup_interval: 500ms        # default: 500ms

  # audio processing runs in a separate task, you can change its settings below
  task_stack_size: 4096         # default: 4096
  task_priority: 2              # default: 2
  task_core: 1                  # default: 1

  # see your mic's datasheet for its sensitivity and reference SPL.
  # these are used to convert dB FS to dB SPL
  mic_sensitivity: -26dB        # default: empty
  mic_sensitivity_ref: 94dB     # default: empty
  # additional offset, if needed
  offset: 0dB                   # default: empty

  # for flexibility, sensors are organized hierarchically into groups. each group
  # can have any number of filters, sensors and nested groups.
  # for example, if there is a top-level group A with filter A and a nested group B
  # with filter B, then for sensors inside group B filter A and then filter B will be
  # applied:
  # groups:
  #   # group A
  #   - filters:
  #       - filter A
  #     groups:
  #       # group B
  #       - filters:
  #           - filter B
  #         sensors:
  #           - sensor X
  groups:
    # group 1 (mic eq)
    - filters:
        # for now only the SOS filter type is supported; see math/filter-design.ipynb
        # to learn how to create or convert other filter types to SOS
        - type: sos
          coeffs:
            # INMP441:
            #      b0            b1           b2          a1            a2
            - [ 1.0019784 , -1.9908513  , 0.9889158 , -1.9951786  , 0.99518436]

      # nested groups
      groups:
        # group 1.1 (no weighting)
        - sensors:
            # an 'eq' type sensor calculates the Leq (average) sound level over the specified period
            - type: eq
              name: LZeq_1s
              id: LZeq_1s
              # you can override the update_interval specified at the top level
              # individually for each sensor
              update_interval: 1s

            # you can have as many sensors of the same type, but with different
            # other parameters (e.g. update_interval), as needed
            - type: eq
              name: LZeq_1min
              id: LZeq_1min
              unit_of_measurement: dBZ

            # the 'max' sensor type calculates Lmax with the specified window_size.
            # for example, if update_interval is 60s and window_size is 1s,
            # it will calculate 60 Leq values, one for each second of audio data,
            # and report the maximum of them
            - type: max
              name: LZmax_1s_1min
              id: LZmax_1s_1min
              window_size: 1s
              unit_of_measurement: dBZ

            # same as 'max', but for the minimum
            - type: min
              name: LZmin_1s_1min
              id: LZmin_1s_1min
              window_size: 1s
              unit_of_measurement: dBZ

            # 'peak' finds the maximum single sample over the whole update_interval
            - type: peak
              name: LZpeak_1min
              id: LZpeak_1min
              unit_of_measurement: dBZ

        # group 1.2 (A-weighting)
        - filters:
            # for now only the SOS filter type is supported; see math/filter-design.ipynb
            # to learn how to create or convert other filter types to SOS
            - type: sos
              coeffs:
                # A-weighting:
                #       b0           b1            b2             a1            a2
                - [ 0.16999495 ,  0.741029   ,  0.52548885 , -0.11321865 , -0.056549273]
                - [ 1.         , -2.00027    ,  1.0002706  , -0.03433284 , -0.79215795 ]
                - [ 1.         , -0.709303   , -0.29071867 , -1.9822421  ,  0.9822986  ]
          sensors:
            - type: eq
              name: LAeq_1min
              id: LAeq_1min
              unit_of_measurement: dBA
            - type: max
              name: LAmax_1s_1min
              id: LAmax_1s_1min
              window_size: 1s
              unit_of_measurement: dBA
            - type: min
              name: LAmin_1s_1min
              id: LAmin_1s_1min
              window_size: 1s
              unit_of_measurement: dBA
            - type: peak
              name: LApeak_1min
              id: LApeak_1min
              unit_of_measurement: dBA

        # group 1.3 (C-weighting)
        - filters:
            # for now only the SOS filter type is supported; see math/filter-design.ipynb
            # to learn how to create or convert other filter types to SOS
            - type: sos
              coeffs:
                # C-weighting:
                #       b0             b1             b2             a1             a2
                - [-0.49651518  , -0.12296628  , -0.0076134163, -0.37165618   , 0.03453208  ]
                - [ 1.          ,  1.3294908   ,  0.44188643  ,  1.2312505    , 0.37899444  ]
                - [ 1.          , -2.          ,  1.          , -1.9946145    , 0.9946217   ]
          sensors:
            - type: eq
              name: LCeq_1min
              id: LCeq_1min
              unit_of_measurement: dBC
            - type: max
              name: LCmax_1s_1min
              id: LCmax_1s_1min
              window_size: 1s
              unit_of_measurement: dBC
            - type: min
              name: LCmin_1s_1min
              id: LCmin_1s_1min
              window_size: 1s
              unit_of_measurement: dBC
            - type: peak
              name: LCpeak_1min
              id: LCpeak_1min
              unit_of_measurement: dBC
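The mic_sensitivity, mic_sensitivity_ref and offset options above define how the component maps digital levels (dB FS) to sound pressure levels (dB SPL). A minimal sketch of that conversion and of an Leq calculation, assuming the INMP441 figures from the config (−26 dBFS sensitivity at 94 dB SPL); the function names here are illustrative and not part of the component:

```python
import math

MIC_SENSITIVITY = -26.0      # dB FS output at the reference SPL (INMP441 datasheet)
MIC_SENSITIVITY_REF = 94.0   # SPL at which the sensitivity is specified, dB
OFFSET = 0.0                 # additional calibration offset, dB

def dbfs_to_spl(dbfs: float) -> float:
    """Convert a level in dB FS to dB SPL using the mic's sensitivity spec."""
    return dbfs - MIC_SENSITIVITY + MIC_SENSITIVITY_REF + OFFSET

def leq(samples: list[float]) -> float:
    """Leq in dB SPL: RMS of normalized samples (full scale = 1.0) -> dB FS -> dB SPL."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return dbfs_to_spl(20 * math.log10(rms))
```

For example, a signal sitting at −26 dB FS maps back to the reference 94 dB SPL, which is how the component turns filtered sample buffers into the dBZ/dBA/dBC values reported by the sensors.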


# automation
# available actions:
#   - sound_level_meter.turn_on
#   - sound_level_meter.turn_off
#   - sound_level_meter.toggle
switch:
  - platform: template
    name: "Sound Level Meter Switch"
    # if you want the is_on property on the component to take effect, set
    # restore_mode to DISABLED; alternatively you can use the other modes
    # (more on them in the esphome docs), in which case the is_on property on
    # the component will be overridden by the switch
    restore_mode: DISABLED # ALWAYS_OFF | ALWAYS_ON | RESTORE_DEFAULT_OFF | RESTORE_DEFAULT_ON
    lambda: |-
      return id(sound_level_meter1).is_on();
    turn_on_action:
      - sound_level_meter.turn_on
    turn_off_action:
      - sound_level_meter.turn_off

button:
  - platform: template
    name: "Sound Level Meter Toggle Button"
    on_press:
      - sound_level_meter.toggle: sound_level_meter1

binary_sensor:
  - platform: gpio
    pin: GPIO0
    name: "Sound Level Meter GPIO Toggle"
    on_press:
      - sound_level_meter.toggle: sound_level_meter1

Filter design (math)

Check out the filter-design notebook to learn how these SOS coefficients were calculated.
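As a rough sketch of what such a notebook does, here is one way to derive an A-weighting SOS cascade with scipy (an assumption on my part; the analog poles/zeros below are the standard IEC 61672 values). The resulting coefficients will not match the README's values digit for digit, since section ordering and gain normalization can differ:

```python
import numpy as np
from scipy import signal

fs = 48000  # must match the i2s sample_rate

# Analog A-weighting transfer function: 4 zeros at DC, 6 real poles (IEC 61672)
z = [0.0, 0.0, 0.0, 0.0]
p = -2 * np.pi * np.array([20.598997, 20.598997, 107.65265,
                           737.86223, 12194.217, 12194.217])

# Normalize the analog gain to 0 dB at 1 kHz
b, a = signal.zpk2tf(z, p, 1.0)
_, h = signal.freqs(b, a, worN=[2 * np.pi * 1000.0])
k = 1.0 / abs(h[0])

# Discretize with the bilinear transform and factor into second-order sections
zd, pd, kd = signal.bilinear_zpk(z, p, k, fs)
sos = signal.zpk2sos(zd, pd, kd)
print(sos)  # 3 sections, each row is [b0, b1, b2, a0, a1, a2]
```

Note that scipy emits rows as [b0, b1, b2, a0, a1, a2] with a0 = 1, while the component's coeffs lists omit a0, so each row must be trimmed to [b0, b1, b2, a1, a2] before pasting it into the YAML.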

Performance

In Ivan's project the SOS filters are implemented in ESP32 assembly, so they are really fast. A quote from him:

Well, now you can lower the frequency of ESP32 down to 80MHz (i.e. for battery operation) and filtering and summation of I2S data will still take less than 15% of single core processing time. At 240MHz, filtering 1/8sec worth of samples with 2 x 6th-order IIR filters takes less than 5ms.

I'm not that familiar with assembly, and it is hard to understand and maintain, so I implemented the filtering in plain C++. The performance turns out not to be bad. At 80 MHz, filtering and summation take ~210 ms per 1 s of audio (48000 samples), which is 21% of single-core processing time (vs. 15% for the ASM implementation). At 240 MHz the same task takes 67 ms (vs. 5 ms × 8 = 40 ms in ASM, since Ivan's figure is 5 ms per 1/8 s of audio).
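The work being benchmarked is a cascade of biquads (second-order sections) applied per sample. A plain, unoptimized sketch of the recurrence the C++ implementation has to run (direct form II transposed, assuming the component's [b0, b1, b2, a1, a2] coefficient layout with a0 = 1):

```python
def sos_filter(sections, samples):
    """Apply a cascade of second-order sections (direct form II transposed).

    sections: list of [b0, b1, b2, a1, a2] rows (a0 is assumed to be 1).
    """
    state = [[0.0, 0.0] for _ in sections]  # two delay elements per section
    out = []
    for x in samples:
        for (b0, b1, b2, a1, a2), w in zip(sections, state):
            y = b0 * x + w[0]
            w[0] = b1 * x - a1 * y + w[1]
            w[1] = b2 * x - a2 * y
            x = y  # feed this section's output into the next section
        out.append(x)
    return out
```

Each section costs 5 multiplies and 4 adds per sample, so at 48 kHz a 6-section cascade is roughly 1.4 million multiply-accumulates per second of audio, which is why the inner loop dominates the timings in the table below.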

| CPU Freq | # SOS | Sensors                        | Sample Rate | Buffer Size | Time (per 1 s audio) |
|----------|-------|--------------------------------|-------------|-------------|----------------------|
| 80 MHz   | 0     | 1 Leq                          | 48000       | 1024        | 57 ms                |
| 80 MHz   | 6     | 1 Leq                          | 48000       | 1024        | 204 ms               |
| 80 MHz   | 6     | 1 Lmax                         | 48000       | 1024        | 211 ms               |
| 80 MHz   | 6     | 1 Lpeak                        | 48000       | 1024        | 207 ms               |
| 240 MHz  | 0     | 1 Leq                          | 48000       | 1024        | 18 ms                |
| 240 MHz  | 6     | 1 Leq                          | 48000       | 1024        | 67 ms                |
| 240 MHz  | 6     | 1 Leq, 1 Lpeak, 1 Lmax, 1 Lmin | 48000       | 1024        | 90 ms                |

Supported platforms

Tested with ESPHome version 2023.2.0 on the following platforms:

  • ESP32 (Arduino v2.0.5, ESP-IDF v4.4.2)
  • ESP32-IDF (ESP-IDF v4.4.2)

Sending data to sensor.community

See sensor-community-example-config.yaml

References

  1. ESP32-I2S-SLM hackaday.io project
  2. Measuring Audible Noise in Real-Time hackaday.io project
  3. What are LAeq and LAFmax?
  4. Noise measuring @ smartcitizen.me
  5. EspAudioSensor
  6. Design of a digital A-weighting filter with arbitrary sample rate (dsp.stackexchange.com)
  7. How to compute dBFS? (dsp.stackexchange.com)
  8. Microphone Specification Explained
  9. esp32-i2s-slm source code
  10. DNMS source code
  11. NoiseLevel source code

esphome-sound-level-meter's People

Contributors

raymiiorg, stas-sl


esphome-sound-level-meter's Issues

Is it possible to stream the captured audio to Home Assistant?

Hi.

I know this is not the purpose of this cool project, but can anybody think of a way to capture the audio and stream it to Home Assistant, so we can feed it to any media_player device?

Voice Assistant already does this, but I cannot find a way to use the audio for other purposes apart from STT, or even to replay it in HA. So I'm looking for other approaches, like streaming to the FFmpeg integration or something similar.

Any ideas are welcome. Thanks!!

WiFi implementation

Hey,
I hope you can help me. Everything works fine, but when I add WiFi it somehow stops working under FreeRTOS; I constantly get this error:
Guru Meditation Error: Core 0 panic'ed (Unhandled debug exception).
Debug exception reason: Stack canary watchpoint triggered (Mic I2S Reader)
Core 0 register dump:
PC : 0x4008f75b PS : 0x00060736 A0 : 0x8008589a A1 : 0x3ffe89c0
A2 : 0x3ffbf6b0 A3 : 0xb33fffff A4 : 0x0000abab A5 : 0x00060723
A6 : 0x00060720 A7 : 0x0000cdcd A8 : 0xb33fffff A9 : 0xffffffff
A10 : 0x3ffe9230 A11 : 0x3ffb6884 A12 : 0x3ffb6af8 A13 : 0x00000042
A14 : 0x007bf6b0 A15 : 0x003fffff SAR : 0x00000004 EXCCAUSE: 0x00000001
EXCVADDR: 0x00000000 LBEG : 0x4008a505 LEND : 0x4008a515 LCOUNT : 0xfffffffa
and here is my WiFi code:
/*
WiFi Connectivity Implementation

This C++ file manages WiFi connectivity using the WiFi library and NeoPixel feedback.
It includes functions to connect to WiFi networks, handle multiple network options, and indicate connection status.

Purpose:

  • Connect to available WiFi networks using stored credentials.
  • Provide feedback using NeoPixel LEDs to indicate successful or failed WiFi connection attempts.

Author: [Latif Faghiri]
*/

#include "Arduino.h"
#include "Wifi.h"

// Constructor
MyWiFi::MyWiFi() : pixels(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800) {
  // Do nothing
}

// Register WiFi networks
void MyWiFi::begin() {
  wifiMulti.addAP(WIFI_SSID1, WIFI_PASSWORD1);
  wifiMulti.addAP(WIFI_SSID2, WIFI_PASSWORD2);
}

// Connect to the strongest WiFi network
void MyWiFi::connect() {
  begin();
  Serial.print("Connecting: ");
  int WiFiAttempt = 0;
  int WiFiTimeout = 10;

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED && WiFiAttempt < WiFiTimeout) {
    delay(100);
    Serial.println(".");
    WiFiAttempt++;
  }

  if (wifiMulti.run() == WL_CONNECTED) {
    Serial.print("\nSuccessfully connected to: ");
    Serial.println(WiFi.SSID());
    Serial.println(" ");
    Serial.print("IP address: ");
    Serial.println(WiFi.localIP());

    // NeoPixel blinks blue to confirm connection
    for (int i = 0; i < 2; i++) {
      pixels.setPixelColor(0, pixels.Color(0, 0, 255));
      pixels.show();
      delay(500);
      pixels.clear();
      pixels.show();
      delay(500);
    }
  } else {
    Serial.println("Failed to establish WiFi connection");
    Serial.println("Please try again!!!!!!!");
  }
}

int MyWiFi::status() {
  return WiFi.status();
}

What am I doing wrong?

SLM doesn't start automatically on boot when a switch is configured

Hello,

First thanks for the nice component. Works very well! I use it as a barking detector when we're not home ;)

I faced a small issue: despite having is_on: true in the yaml (as defined in the example config in the repo), when the ESP boots the sound level meter is turned off. I have to use the switch in HA to enable it.

While digging through the logs, I saw that the switch has a default restore mode of off. Adding restore_mode: ALWAYS_ON solved my issue. But this makes the is_on: true parameter useless (when a switch is configured). So maybe there's a nicer way to work around this, or a small note to add to the docs/examples.

Hope this helps.
Thanks again for the nice work!
Regards,
Olivier B.

Can't get value

Hey, I have an issue: it only works with the wrong channel. E.g., when I set L/R to GND the channel should be left, but it only works with the right channel, which is theoretically wrong. Also, the lowest reading I get is 75 dBA in a quiet room, which is totally wrong.
Do you have any tips?
I am using an INMP441 mic and an ESP32 DevKitC v4.
Did you also have these issues when working with Ivan's code?

SPH0645: values are wrong

Hello

I tried your component: with the advanced example config I get 121 dB, and if I speak or clap my hands nothing happens.

I tried the minimal example config and I get 104 dB, and likewise there is not much difference: if I speak or clap my hands close to the mic nothing changes.

I am using an SPH0645 like this:

i2s:
  bck_pin: GPIO23
  ws_pin: GPIO18
  din_pin: GPIO19
  sample_rate: 48000            # default: 48000
  bits_per_sample: 32           # default: 32

Any idea why it doesn't work?
