

EMHASS

Energy Management for Home Assistant




If you like this work please consider buying a coffee ;-)


EMHASS is a Python module designed to optimize your home energy interfacing with Home Assistant.

Introduction

EMHASS (Energy Management for Home Assistant) is an optimization tool designed for residential households. The package uses a Linear Programming approach to optimize energy usage while considering factors such as electricity prices, power generation from solar panels, and energy storage from batteries. EMHASS provides a high degree of configurability, making it easy to integrate with Home Assistant and other smart home systems. Whether you have solar panels, energy storage, or just a controllable load, EMHASS can provide an optimized daily schedule for your devices, allowing you to save money and minimize your environmental impact.

The complete documentation for this package is available here.

What is Energy Management for Home Assistant (EMHASS)?

EMHASS and Home Assistant provide a comprehensive energy management solution that can optimize energy usage and reduce costs for households. By integrating these two systems, households can take advantage of advanced energy management features that provide significant cost savings, increased energy efficiency, and greater sustainability.

EMHASS is a powerful energy management tool that generates an optimization plan based on variables such as solar power production, energy usage, and energy costs. The plan provides valuable insights into how energy can be better managed and utilized in the household. Even if households do not have all the necessary equipment, such as solar panels or batteries, EMHASS can still provide a minimal use case solution to optimize energy usage for controllable/deferrable loads.

Home Assistant provides a platform for the automation of household devices based on the optimization plan generated by EMHASS. This includes devices such as batteries, pool pumps, hot water heaters, and electric vehicle (EV) chargers. By automating EV charging and other devices, households can take advantage of off-peak energy rates and optimize their EV charging schedule based on the optimization plan generated by EMHASS.

One of the main benefits of integrating EMHASS and Home Assistant is the ability to customize and tailor the energy management solution to the specific needs and preferences of each household. With EMHASS, households can define their energy management objectives and constraints, such as maximizing self-consumption or minimizing energy costs, and the system will generate an optimization plan accordingly. Home Assistant provides a platform for the automation of devices based on the optimization plan, allowing households to create a fully customized and optimized energy management solution.

Overall, the integration of EMHASS and Home Assistant offers a comprehensive energy management solution that provides significant cost savings, increased energy efficiency, and greater sustainability for households. By leveraging advanced energy management features and automation capabilities, households can achieve their energy management objectives while enjoying the benefits of a more efficient and sustainable energy usage, including optimized EV charging schedules.

The package flow is represented graphically in a diagram available in the documentation.

Configuration and Installation

The package is meant to be highly configurable, with an object-oriented modular approach and a main configuration file defined by the user. EMHASS was designed to be integrated with Home Assistant, hence its name. Installation instructions and example Home Assistant automation configurations are given below.

You must follow these steps to make EMHASS work properly:

  1. Define all the parameters in the configuration file according to your installation. See the description for each parameter in the configuration section.

  2. You will most notably need to define the main data entering EMHASS. This is sensor_power_photovoltaics, the name of your Home Assistant sensor containing the PV produced power, and sensor_power_load_no_var_loads for the load power of your household, excluding the power of the deferrable loads that you want to optimize.

  3. Launch the actual optimization and check the results. This can be done manually using the buttons in the web ui or with a curl command like this: curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/dayahead-optim.

  4. If you're satisfied with the optimization results then you can set the optimization and data publish task commands in an automation. You can read more about this in the usage section below.

  5. The final step is to link the deferrable load variables to real switches in your installation. Example code for this, using automations and the shell command integration, is presented below in the usage section.

A more detailed workflow diagram is available in the documentation.

Method 1) The EMHASS add-on for Home Assistant OS and supervised users

For Home Assistant OS and HA Supervised users, I've developed an add-on that will help you use EMHASS. The add-on is more user-friendly: the configuration can be modified directly in the add-on options pane and, as with the standalone Docker image, it exposes a web UI that can be used to inspect the optimization results and manually trigger a new optimization.

You can find the add-on with the installation instructions here: https://github.com/davidusb-geek/emhass-add-on

The add-on usage instructions can be found on the documentation pane of the add-on once installed or directly here: EMHASS Add-on documentation

These architectures are supported: amd64, armv7, armhf and aarch64.

Method 2) Using Docker in standalone mode

You can also install EMHASS using Docker. This can be on the same machine as Home Assistant (if using the supervised install method) or on a different machine. To install, first pull the latest image from Docker Hub:

docker pull davidusb/emhass-docker-standalone

You can also build the image locally. For this, clone this repository, set up your config_emhass.yaml file and use the provided make file with this command:

make -f deploy_docker.mk clean_deploy

Then load the image from the .tar file:

docker load -i <TarFileName>.tar

Finally, check your image tag with docker images and launch the container:

docker run -it --restart always -p 5000:5000 -e LOCAL_COSTFUN="profit" -v $(pwd)/config_emhass.yaml:/app/config_emhass.yaml -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name DockerEMHASS <REPOSITORY:TAG>
  • If you wish to keep a local, persistent copy of the EMHASS generated data, create a local folder on your device, then mount said folder inside the container.
    mkdir -p $(pwd)/data #linux: create data folder on local device
    
    docker run -it --restart always -p 5000:5000 -e LOCAL_COSTFUN="profit" -v $(pwd)/config_emhass.yaml:/app/config_emhass.yaml -v $(pwd)/data:/app/data  -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name DockerEMHASS <REPOSITORY:TAG>

If you wish to set the web server's diagrams to a timezone other than UTC, set the TZ environment variable:

docker run -it --restart always -p 5000:5000  -e TZ="Europe/Paris"  -e LOCAL_COSTFUN="profit" -v $(pwd)/config_emhass.yaml:/app/config_emhass.yaml -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name DockerEMHASS <REPOSITORY:TAG>

Method 3) Legacy method using a Python virtual environment

With this method it is recommended to install EMHASS in a virtual environment. For this you will need virtualenv; install it using:

sudo apt install python3-virtualenv

Then create and activate the virtual environment:

virtualenv -p /usr/bin/python3 emhassenv
cd emhassenv
source bin/activate

Install using the distribution files:

python3 -m pip install emhass

Clone this repository to obtain the example configuration files. We will suppose that this repository is cloned to:

/home/user/emhass

This will be the root path containing the yaml configuration files (config_emhass.yaml and secrets_emhass.yaml) and the different needed folders (a data folder to store the optimization results and a scripts folder containing the bash scripts described further below).

To upgrade the installation in the future just use:

python3 -m pip install --upgrade emhass

Usage

Method 1) Add-on and docker standalone

If using the add-on or the standalone Docker installation, a simple web server is exposed on port 5000. You can access it directly in your browser, e.g.: http://localhost:5000.

With this web server you can perform RESTful POST commands on multiple endpoints with the prefix action/*:

  • A POST call to action/perfect-optim to perform a perfect optimization task on the historical data.
  • A POST call to action/dayahead-optim to perform a day-ahead optimization task of your home energy.
  • A POST call to action/naive-mpc-optim to perform a naive Model Predictive Controller optimization task. If using this option you will need to define the correct runtimeparams (see further below).
  • A POST call to action/publish-data to publish the optimization results data for the current timestamp.
  • A POST call to action/forecast-model-fit to train a machine learning forecaster model with the passed data (see the dedicated section for more help).
  • A POST call to action/forecast-model-predict to obtain a forecast from a pre-trained machine learning forecaster model (see the dedicated section for more help).
  • A POST call to action/forecast-model-tune to optimize the machine learning forecaster model's hyperparameters using Bayesian optimization (see the dedicated section for more help).

A curl command can then be used to launch an optimization task like this: curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/dayahead-optim.
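For scripted use, the same POST call can be built with the Python standard library. This is just an illustrative sketch, not part of EMHASS itself; the default host and port are assumed, and the request is constructed but not sent:

```python
import json
import urllib.request

def build_emhass_request(action, payload=None, host="http://localhost:5000"):
    """Build (but do not send) a POST request for an EMHASS action endpoint."""
    data = json.dumps(payload or {}).encode("utf-8")
    return urllib.request.Request(
        url=f"{host}/action/{action}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Equivalent of the curl call above:
req = build_emhass_request("dayahead-optim")
# urllib.request.urlopen(req)  # uncomment to actually send the request
print(req.full_url)  # http://localhost:5000/action/dayahead-optim
```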

Method 2) Legacy method using a Python virtual environment

To run a command simply use the emhass CLI command followed by the needed arguments. The available arguments are:

  • --action: That is used to set the desired action, options are: perfect-optim, dayahead-optim, naive-mpc-optim, publish-data, forecast-model-fit, forecast-model-predict and forecast-model-tune.
  • --config: Define the path to the config_emhass.yaml file (including the file name).
  • --costfun: Define the type of cost function; this is optional and the options are: profit (default), cost, self-consumption.
  • --log2file: Define whether to log to a file; this is optional and the options are: True or False (default).
  • --params: Configuration as JSON.
  • --runtimeparams: Data passed at runtime. This can be used to pass your own forecast data to EMHASS.
  • --debug: Use True for testing purposes.
  • --version: Show the current version of EMHASS.

For example, the following command can be used to perform a day-ahead optimization task:

emhass --action 'dayahead-optim' --config '/home/user/emhass/config_emhass.yaml' --costfun 'profit'

Before running any of these commands you need to modify the config_emhass.yaml and secrets_emhass.yaml files. These files should contain the information adapted to your own system. To do this, take a look at the dedicated section in the documentation.

Home Assistant integration

To integrate with Home Assistant we will need to define some shell commands in the configuration.yaml file and some basic automations in the automations.yaml file. In the next few paragraphs we consider the dayahead-optim optimization strategy, which was also the first to be implemented, and we cover how to publish the results. Additional optimization strategies were developed later; these can be used in combination with, or as a replacement for, the dayahead-optim strategy (such as MPC), or to expand the functionalities (such as the machine learning method to predict your household consumption). Each of them has its own specificities and features and will be considered in dedicated sections.

Dayahead Optimization - Method 1) Add-on and docker standalone

In configuration.yaml:

shell_command:
  dayahead_optim: "curl -i -H \"Content-Type:application/json\" -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
  publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{}' http://localhost:5000/action/publish-data"

Dayahead Optimization - Method 2) Legacy method using a Python virtual environment

In configuration.yaml:

shell_command:
  dayahead_optim: /home/user/emhass/scripts/dayahead_optim.sh
  publish_data: /home/user/emhass/scripts/publish_data.sh

Create the file dayahead_optim.sh with the following content:

#!/bin/bash
. /home/user/emhassenv/bin/activate
emhass --action 'dayahead-optim' --config '/home/user/emhass/config_emhass.yaml'

And the file publish_data.sh with the following content:

#!/bin/bash
. /home/user/emhassenv/bin/activate
emhass --action 'publish-data' --config '/home/user/emhass/config_emhass.yaml'

Then make the files executable:

sudo chmod +x /home/user/emhass/scripts/dayahead_optim.sh
sudo chmod +x /home/user/emhass/scripts/publish_data.sh

Common for any installation method

In automations.yaml:

- alias: EMHASS day-ahead optimization
  trigger:
    platform: time
    at: '05:30:00'
  action:
  - service: shell_command.dayahead_optim
- alias: EMHASS publish data
  trigger:
  - minutes: /5
    platform: time_pattern
  action:
  - service: shell_command.publish_data

In these automations the day-ahead optimization is performed every day at 5:30am and the data is published every 5 minutes.

The final action will be to link a sensor value in Home Assistant to the switch of a desired controllable load. For example, imagine that I want to control my water heater, and that the publish-data action is publishing the optimized value of a deferrable load that I want linked to my water heater's desired behavior. In this case we could use an automation like the one below to control the real switch:

automation:
- alias: Water Heater Optimized ON
  trigger:
  - minutes: /5
    platform: time_pattern
  condition:
  - condition: numeric_state
    entity_id: sensor.p_deferrable0
    above: 0.1
  action:
    - service: homeassistant.turn_on
      entity_id: switch.water_heater_switch

A second automation should be used to turn off the switch:

automation:
- alias: Water Heater Optimized OFF
  trigger:
  - minutes: /5
    platform: time_pattern
  condition:
  - condition: numeric_state
    entity_id: sensor.p_deferrable0
    below: 0.1
  action:
    - service: homeassistant.turn_off
      entity_id: switch.water_heater_switch

The publish-data specificities

The publish-data command will push to Home Assistant the optimization results for each deferrable load defined in the configuration. For example, if you have defined two deferrable loads, then the command will publish sensor.p_deferrable0 and sensor.p_deferrable1 to Home Assistant. When dayahead-optim is launched, after the optimization, a csv file is saved on disk. The publish-data command will load the latest csv file and look for the closest timestamp that matches the current time, using the datetime.now() method in Python. This means that if EMHASS is configured for 30-min time step optimizations, the csv will be saved with timestamps 00:00, 00:30, 01:00, 01:30, and so on. If the current time is 00:05, then the closest timestamp of the optimization results that will be published is 00:00. If the current time is 00:25, then the closest timestamp published is 00:30.
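The closest-timestamp behavior described above can be illustrated with a short Python sketch. This is a simplified model of the rounding logic, not the actual EMHASS implementation:

```python
from datetime import datetime, timedelta

def closest_timestep(now, freq_min=30):
    """Round a datetime to the nearest optimization timestep of freq_min minutes."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    step = freq_min * 60  # timestep length in seconds
    elapsed = (now - midnight).total_seconds()
    rounded = round(elapsed / step) * step  # snap to the nearest timestep
    return midnight + timedelta(seconds=rounded)

print(closest_timestep(datetime(2023, 6, 1, 0, 5)))   # 2023-06-01 00:00:00
print(closest_timestep(datetime(2023, 6, 1, 0, 25)))  # 2023-06-01 00:30:00
```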

The publish-data command will also publish PV and load forecast data on sensors p_pv_forecast and p_load_forecast. If using a battery, then the battery optimized power and the SOC will be published on sensors p_batt_forecast and soc_batt_forecast. On these sensors the future values are passed as nested attributes.

It is possible to provide custom sensor names for all the data exported by the publish-data command. To do this, when using the publish-data endpoint just add some runtime parameters as dictionaries like this:

shell_command:
  publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{\"custom_load_forecast_id\": {\"entity_id\": \"sensor.p_load_forecast\", \"unit_of_measurement\": \"W\", \"friendly_name\": \"Load Power Forecast\"}}' http://localhost:5000/action/publish-data"

These keys are available to modify: custom_pv_forecast_id, custom_load_forecast_id, custom_batt_forecast_id, custom_batt_soc_forecast_id, custom_grid_forecast_id, custom_cost_fun_id, custom_deferrable_forecast_id, custom_unit_load_cost_id and custom_unit_prod_price_id.

If you provide the custom_deferrable_forecast_id then the passed data should be a list of dictionaries, like this:

shell_command:
  publish_data: "curl -i -H \"Content-Type:application/json\" -X POST -d '{\"custom_deferrable_forecast_id\": [{\"entity_id\": \"sensor.p_deferrable0\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Deferrable Load 0\"},{\"entity_id\": \"sensor.p_deferrable1\",\"unit_of_measurement\": \"W\", \"friendly_name\": \"Deferrable Load 1\"}]}' http://localhost:5000/action/publish-data"

Be careful that the list of dictionaries has the correct length: one entry per defined deferrable load.
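One way to avoid a length mismatch is to generate the list programmatically. A minimal Python sketch, assuming two deferrable loads (the helper name is illustrative, not part of EMHASS):

```python
import json

def deferrable_payload(num_def_loads):
    """Build a publish-data payload with exactly one entry per deferrable load."""
    entries = [
        {
            "entity_id": f"sensor.p_deferrable{i}",
            "unit_of_measurement": "W",
            "friendly_name": f"Deferrable Load {i}",
        }
        for i in range(num_def_loads)
    ]
    return {"custom_deferrable_forecast_id": entries}

payload = deferrable_payload(2)
# The JSON string to pass after -d in the curl call:
print(json.dumps(payload))
```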

Computed variables and published data

Below you can find a list of the variables resulting from EMHASS computation, shown in the charts and published to Home Assistant through the publish_data command:

EMHASS variable Definition Home Assistant published sensor
P_PV Forecasted power generation from your solar panels (Watts). This helps you predict how much solar energy you will produce during the forecast period. sensor.p_pv_forecast
P_Load Forecasted household power consumption (Watts). This gives you an idea of how much energy your appliances are expected to use. sensor.p_load_forecast
P_deferrableX [X = 0, 1, 2, ...] Forecasted power consumption of deferrable loads (Watts). Deferrable loads are appliances that can be managed by EMHASS. EMHASS helps you optimize energy usage by prioritizing solar self-consumption and minimizing reliance on the grid, or by taking advantage of supply and feed-in tariff volatility. You can have multiple deferrable loads, and you use this sensor in HA to control these loads via a smart switch or other IoT means at your disposal. sensor.p_deferrableX
P_grid_pos Forecasted power imported from the grid (Watts). This indicates the amount of energy you are expected to draw from the grid when your solar production is insufficient to meet your needs or when it is advantageous to consume from the grid. -
P_grid_neg Forecasted power exported to the grid (Watts). This indicates the amount of excess solar energy you are expected to send back to the grid during the forecast period. -
P_batt Forecasted (dis)charge power load (Watts) for the battery (if installed). If negative it indicates the battery is charging, if positive that the battery is discharging. sensor.p_batt_forecast
P_grid Forecasted net power flow between your home and the grid (Watts). This is calculated as P_grid_pos - P_grid_neg. A positive value indicates net import, while a negative value indicates net export. sensor.p_grid_forecast
SOC_opt Forecasted battery optimized Status Of Charge (SOC) percentage level sensor.soc_batt_forecast
unit_load_cost Forecasted cost per unit of energy you pay to the grid (typically "Currency"/kWh). This helps you understand the expected energy cost during the forecast period. sensor.unit_load_cost
unit_prod_price Forecasted price you receive for selling excess solar energy back to the grid (typically "Currency"/kWh). This helps you understand the potential income from your solar production. sensor.unit_prod_price
cost_profit Forecasted profit or loss from your energy usage for the forecast period. This is calculated as unit_load_cost * P_Load - unit_prod_price * P_grid_pos. A positive value indicates a profit, while a negative value indicates a loss. sensor.total_cost_profit_value
cost_fun_cost Forecasted cost associated with deferring loads to maximize solar self-consumption. This helps you evaluate the trade-off between managing the load and not managing and potential cost savings. sensor.total_cost_fun_value
optim_status This contains the status of the latest execution and is the same you can see in the Log following an optimization job. Its values can be Optimal or Infeasible. sensor.optim_status

Passing your own data

In EMHASS we have basically 4 forecasts to deal with:

  • PV power production forecast (internally based on the weather forecast and the characteristics of your PV plant). This is given in Watts.

  • Load power forecast: how much power your house will demand on the next 24h. This is given in Watts.

  • Load cost forecast: the price of the energy from the grid on the next 24h. This is given in EUR/kWh.

  • PV production selling price forecast: at what price are you selling your excess PV production on the next 24h. This is given in EUR/kWh.

The sensor containing the load data should be specified in the parameter var_load in the configuration file. As we want to optimize household energy use, we need to forecast the load power consumption. The default method for this is a naive approach using 1-day persistence. The load data variable should not contain the data from the deferrable loads themselves. For example, let's say that you set your deferrable load to be the washing machine. The variable that you should enter in EMHASS will be: var_load: 'sensor.power_load_no_var_loads', where sensor_power_load_no_var_loads = sensor_power_load - sensor_power_washing_machine. This supposes that the overall load of your house is contained in the variable sensor_power_load. The sensor sensor_power_load_no_var_loads can be easily created with a new template sensor in Home Assistant.
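The arithmetic that the template sensor performs can be illustrated in Python (the Home Assistant template sensor itself is defined in YAML; this sketch only shows the subtraction, with made-up wattages):

```python
def load_no_var_loads(total_load_w, deferrable_loads_w):
    """Household load excluding the deferrable loads that EMHASS will optimize."""
    return total_load_w - sum(deferrable_loads_w)

# Total house load of 2500 W with the washing machine (a deferrable load)
# currently drawing 2000 W:
print(load_no_var_loads(2500.0, [2000.0]))  # 500.0
```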

If you are implementing an MPC controller, then you will also need to provide some data at optimization runtime using the key runtimeparams.

The valid values to pass for both forecast data and MPC related data are explained below.

Forecast data

It is possible to provide EMHASS with your own forecast data. For this, just add the data as a list of values in a dictionary during the call to emhass, using the runtimeparams option.

For example, if using the add-on or the standalone Docker installation you can pass this data as a list of values in the data dictionary during the curl POST:

curl -i -H 'Content-Type:application/json' -X POST -d '{"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}' http://localhost:5000/action/dayahead-optim

Or if using the legacy method using a Python virtual environment:

emhass --action 'dayahead-optim' --config '/home/user/emhass/config_emhass.yaml' --runtimeparams '{"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}'

The possible dictionary keys to pass data are:

  • pv_power_forecast for the PV power production forecast.

  • load_power_forecast for the Load power forecast.

  • load_cost_forecast for the Load cost forecast.

  • prod_price_forecast for the PV production selling price forecast.
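These runtime parameters can be assembled and serialized from Python before being passed after -d in the curl call or to the --runtimeparams argument. A sketch with purely illustrative values (the 48-value length assumes a 24h horizon at a 30-min timestep):

```python
import json

# 48 half-hourly values for a 24h horizon at a 30-min timestep (illustrative)
pv_forecast = [0.0] * 16 + [70, 141.22, 246.18, 513.5] + [0.0] * 28

runtimeparams = {
    "pv_power_forecast": pv_forecast,       # PV production forecast in Watts
    "load_cost_forecast": [0.14] * 48,      # grid energy price, e.g. EUR/kWh
    "prod_price_forecast": [0.06] * 48,     # feed-in price, e.g. EUR/kWh
}

# The JSON string to pass after -d in the curl call or after --runtimeparams:
print(json.dumps(runtimeparams))
```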

Passing other data

It is also possible to pass other data at runtime in order to automate the energy management. For example, it could be useful to dynamically update the total number of hours for each deferrable load (def_total_hours) using, for instance, a correlation with the outdoor temperature (useful for a water heater, for example).

Here is the list of the other additional dictionary keys that can be passed at runtime:

  • num_def_loads for the number of deferrable loads to consider.

  • P_deferrable_nom for the nominal power for each deferrable load in Watts.

  • def_total_hours for the total number of hours that each deferrable load should operate.

  • def_start_timestep for the timestep as from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization timewindow).

  • def_end_timestep for the timestep before which each deferrable load should operate (if you don't want the deferrable load to use the whole optimization timewindow).

  • treat_def_as_semi_cont to define if we should treat each deferrable load as a semi-continuous variable.

  • set_def_constant to define if we should set each deferrable load as a constant fixed value variable with just one startup for each optimization task.

  • solcast_api_key for the SolCast API key if you want to use this service for PV power production forecast.

  • solcast_rooftop_id for the ID of your rooftop for the SolCast service implementation.

  • solar_forecast_kwp for the PV peak installed power in kW used for the solar.forecast API call.

  • SOCtarget for the desired target value of initial and final SOC.

  • publish_prefix use this key to pass a common prefix to all published data. This will add a prefix to the sensor name but also to the forecasts attributes keys within the sensor.

A naive Model Predictive Controller

An MPC controller was introduced in v0.3.0. This is an informal/naive representation of an MPC controller. It can be used in combination with, or as a replacement for, the day-ahead optimization.

An MPC controller performs the following actions:

  • Set the prediction horizon and receding horizon parameters.
  • Perform an optimization on the prediction horizon.
  • Apply the first element of the obtained optimized control variables.
  • Repeat at a relatively high frequency, ex: 5 min.

This is the receding horizon principle.
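The loop above can be sketched in Python, with a stub optimizer standing in for the actual linear program (the 0.5 factor and the forecast values are purely illustrative):

```python
def receding_horizon(forecast, horizon, optimize):
    """Sketch of the receding horizon principle: at each step, optimize over
    the next `horizon` forecast values but apply only the first control action."""
    applied = []
    for t in range(len(forecast) - horizon + 1):
        window = forecast[t:t + horizon]
        plan = optimize(window)   # full optimized plan over the prediction horizon
        applied.append(plan[0])   # only the first element is actually applied
    return applied

# Stub optimizer: "consume" a fixed fraction of the forecast PV at each step.
actions = receding_horizon([100, 200, 300, 400, 500], horizon=3,
                           optimize=lambda w: [0.5 * p for p in w])
print(actions)  # [50.0, 100.0, 150.0]
```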

When applying this controller, the following runtimeparams should be defined:

  • prediction_horizon for the MPC prediction horizon. Set this to at least 5 times the optimization time step.

  • soc_init for the initial value of the battery SOC for the current iteration of the MPC.

  • soc_final for the final value of the battery SOC for the current iteration of the MPC.

  • def_total_hours for the list of deferrable loads functioning hours. These values can decrease as the day advances to take into account receding horizon daily energy objectives for each deferrable load.

  • def_start_timestep for the timestep as from which each deferrable load is allowed to operate (if you don't want the deferrable load to use the whole optimization timewindow). If you specify a value of 0 (or negative), the deferrable load will be optimized as from the beginning of the complete prediction horizon window.

  • def_end_timestep for the timestep before which each deferrable load should operate (if you don't want the deferrable load to use the whole optimization timewindow). If you specify a value of 0 (or negative), the deferrable load optimization window will extend up to the end of the prediction horizon window.

A correct call for a MPC optimization should look like:

curl -i -H 'Content-Type:application/json' -X POST -d '{"pv_power_forecast":[0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93], "prediction_horizon":10, "soc_init":0.5,"soc_final":0.6}' http://192.168.3.159:5000/action/naive-mpc-optim

Example with def_total_hours, def_start_timestep and def_end_timestep:

curl -i -H 'Content-Type:application/json' -X POST -d '{"pv_power_forecast":[0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93], "prediction_horizon":10, "soc_init":0.5,"soc_final":0.6,"def_total_hours":[1,3],"def_start_timestep":[0,3],"def_end_timestep":[0,6]}' http://localhost:5000/action/naive-mpc-optim

A machine learning forecaster

Starting in v0.4.0 a new machine learning forecaster class was introduced. This is intended to provide a new and alternative method to forecast your household consumption and use it when such forecast is needed to optimize your energy through the available strategies. Check the dedicated section in the documentation here: https://emhass.readthedocs.io/en/latest/mlforecaster.html

Development

Pull requests are very welcome on this project. For development you can find some instructions in the Development section of the documentation.

Troubleshooting

Some problems may arise from solver-related issues in the PuLP package. It was found that for arm64 architectures (i.e. Raspberry Pi 4, 64-bit) the default solver is not available. A workaround is to use another solver. The glpk solver is an option.

This can be controlled in the configuration file with parameters lp_solver and lp_solver_path. The options for lp_solver are: 'PULP_CBC_CMD', 'GLPK_CMD' and 'COIN_CMD'. If using 'COIN_CMD' as the solver you will need to provide the correct path to this solver in parameter lp_solver_path, ex: '/usr/bin/cbc'.

License

MIT License

Copyright (c) 2021-2023 David HERNANDEZ

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

emhass's People

Contributors

davidhernandez-cea avatar davidusb-geek avatar dependabot[bot] avatar g1za avatar geoderp avatar majorfrog avatar michaelpiron avatar pail23 avatar purcell-lab avatar siku2 avatar treynaer avatar


emhass's Issues

Feature request: Document setup for native solcast forecast module

I couldn't find the documentation for setting up the native solcast module.

I gather from b22b4a8 that I need to update a secrets_emhass.yaml file, but I couldn't locate it. Is it inside the Docker image or elsewhere?

Can I set these parameters from the normal emhass configuration file?

solcast_api_key: yoursecretsolcastapikey
solcast_rooftop_id: yourrooftopid

prod_cost not updated.

Great to see prod_cost and load_cost published. Could I also suggest including the forecasts in the attributes, like the other sensors?

I'm seeing load_cost updated twice.

2023-05-21 07:49:01,093 - web_server - INFO - Successfully posted to sensor.unit_load_cost = 0.14
2023-05-21 07:49:01,130 - web_server - INFO - Successfully posted to sensor.unit_load_cost = 0.06

I suspect this entity_id should be the prod price sensor:

'custom_unit_prod_price_id': {"entity_id": "sensor.unit_load_cost", "unit_of_measurement": "€", "friendly_name": "Unit Prod Price"},

Feature request: Access EMHASS web interface via ingress

Ingress provides a means for a Home Assistant add-on to expose its web service via the HA UI:

https://www.home-assistant.io/blog/2019/04/15/hassio-ingress/
https://developers.home-assistant.io/docs/add-ons/presentation/#ingress

Currently the EMHASS Web User Interface is accessed via http://homeassistant-ip:5000.

It would greatly simplify the user experience if EMHASS were configured to utilise the ingress interface.

My other add-ons are accessed via the Home Assistant UI and ingress interface:

ESPHome: http://odroid.local:8123/hassio/ingress/5c53de3b_esphome
Google Drive Backup: http://odroid.local:8123/hassio/ingress/cebe7a76_hassio_google_drive_backup
Terminal & SSH: http://odroid.local:8123/hassio/ingress/core_ssh


Upgrade to 0.4.0 - TypeError: string indices must be integers

Describe the bug
TypeError: string indices must be integers

To Reproduce
Upgraded to 0.4.0, no other changes.

Expected behavior
Expected clean upgrade or graceful failure after upgrade.

Screenshots

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
waitress   INFO  Serving on http://0.0.0.0:5000
Exception on /action/publish-data [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 154, in action_call
    _ = publish_data(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 464, in publish_data
    custom_pv_forecast_id = input_data_dict['params']['passed_data']['custom_pv_forecast_id']
TypeError: string indices must be integers

Home Assistant installation type

  • Home Assistant Supervised

Your hardware

  • OS: Linux Debian
  • Architecture: aarch64

EMHASS installation type

  • Add-on

Additional context

  post_mpc_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"load_cost_forecast\":{{(
          ([states('sensor.amber_general_price')|float(0)] +
          state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)[:48])
          }}, \"load_power_forecast\":{{
          [states('sensor.power_load_no_var_loads')|int] +(states('input_text.fi_fo_buffer').split(', ')|map('multiply',1000)|map('int')|list)[1:]
          }}, \"prod_price_forecast\":{{(
          ([states('sensor.amber_feed_in_price')|float(0)] +
          state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list)[:48]) 
          }}, \"pv_power_forecast\":{{[min(15000,(states('sensor.APF_Generation_Entity')|int(0)
                                      /(states('sensor.solaredge_i1_active_power_limit')|int(0)
                                        +states('sensor.solaredge_i2_active_power_limit')|int(0))*200)|int(0))]
            + state_attr('sensor.solcast_forecast_today', 'detailedForecast')|selectattr('period_start','gt',utcnow())
              | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list 
            + state_attr('sensor.solcast_forecast_tomorrow', 'detailedForecast')|selectattr('period_start','gt',utcnow())
              | map(attribute='pv_estimate')|map('multiply',2000)|map('int')|list
          }}, \"prediction_horizon\":{{min(48,
          (state_attr('sensor.amber_feed_in_forecast', 'forecasts')|map(attribute='per_kwh')|list|length)+1)
          }}, \"alpha\":1, \"beta\":0, \"soc_init\":{{(states('sensor.filtered_powerwall_soc')|int(0))/100
          }}, \"soc_final\":0.30, \"def_total_hours\":[{{states('sensor.def_total_hours_pool_filter')
          }},{{states('sensor.def_total_hours_pool_heatpump')
          }},{{states('sensor.def_total_hours_ev')
          }},{{states('sensor.def_total_hours_hvac')
          }},{{states('sensor.def_total_hours_hws')
          }}]}' http://localhost:5000/action/naive-mpc-optim"
post_mpc_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{
  \"load_cost_forecast\":[0.09, 0.04, 0.08, 0.07, 0.11, 0.12, 0.14, 0.11, 0.11, 0.13, 0.13, 0.18, 0.18, 0.32, 0.37, 0.43, 0.65, 0.65, 0.64, 0.59, 0.43, 0.21, 0.19, 0.17, 0.16, 0.17, 0.19, 0.19, 0.16, 0.18, 0.17, 0.17, 0.16, 0.16, 0.16, 0.16, 0.16], 
  \"load_power_forecast\":[5522, 4000, -500, 1000, -1000, -1100, 2300, 300, 5300, 5900, 3600, 3300, 7400, 4500, 3700, 3600, 300, 900, 0, 700, 900, 700, 500, 700, 700, 1100, 800, 300, 200, 300, 200, 300, 300, 200, 200, 300, 200, 300, 200, 200, 400, 200, 4800, -2900, 300, -400, -3200, 1700], 
  \"prod_price_forecast\":[0.01, -0.03, 0.0, -0.0, 0.03, 0.04, 0.06, 0.03, 0.03, 0.05, 0.05, 0.09, 0.09, 0.11, 0.15, 0.21, 0.41, 0.4, 0.4, 0.36, 0.21, 0.12, 0.1, 0.08, 0.08, 0.08, 0.1, 0.1, 0.07, 0.09, 0.08, 0.08, 0.07, 0.07, 0.07, 0.07, 0.07], 
  \"pv_power_forecast\":[11737, 13431, 14034, 14218, 14325, 14315, 14207, 13987, 13698, 13271, 12582, 11319, 9401, 6950, 4720, 2461, 515, 22, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 23, 376, 1535, 3078, 4474, 5707, 6803, 7839, 8732, 9659, 10501, 11161, 11740, 12040, 12052, 11727, 11049, 9914, 8289, 6652, 5176, 3935, 2543, 1139, 276, 11, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 
  \"prediction_horizon\":37, \"alpha\":1, \"beta\":0, \"soc_init\":0.18, \"soc_final\":0.30, \"def_total_hours\":[5,0,2,8,1]}' http://localhost:5000/action/naive-mpc-optim"

Feature request: Combining with thresholds

Hey there

First of all, you've got a nice project here! I was just thinking of implementing something similar as a HASS integration, but I googled before starting such a big undertaking and found your project 👍

If I understand it correctly, you calculate the optimal device load times by solving an LP problem, based on the PV forecast and the expected non-controllable loads derived from past data. My idea was quite similar, but instead of solving for the optimal times to turn the devices on, I wanted to solve for the optimal thresholds at which the devices turn on.

Currently my biggest consumers which I control are:

  • EV
  • Conduction heater for water

Also, I try to estimate when to turn on the dishwasher and the tumbler/washing machine.
This works quite well using a set threshold for the conduction heater. The EV charges depending on the PV power, with a factor controlling how much grid vs. PV power may be used. The only downside is defining the thresholds, which is currently manual guesswork based on my PV forecast (a self-made implementation using a linear regression model on my historic production and weather data; for me it's more precise than Solcast or similar services I have tried).

Do you think it's feasible to implement threshold prediction in your system? I haven't dug through the code yet, but I think it should be possible. It would be great to define different types of devices:

  • Time based control
  • Threshold based
  • Combined (for example, threshold based during the day, but time based during the night if a specific minimum power is required, e.g. if the EV has to have >70% SOC in the morning)

Also, my water heater doesn't have to heat every day, but needs at least 10 kWh every 3 days. This would also be great to implement in the LP.
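The threshold-based control described in this request can be sketched in a few lines. Everything below is a hypothetical illustration, not EMHASS code (names such as `should_turn_on` and `pv_factor` are invented):

```python
# Hypothetical sketch of threshold-based device control; none of these
# names come from EMHASS itself.

def should_turn_on(pv_power: float, threshold: float) -> bool:
    """Switch a load on when PV production exceeds a fixed threshold (W)."""
    return pv_power >= threshold

def ev_charge_power(pv_power: float, house_load: float,
                    pv_factor: float, max_charge: float) -> float:
    """Charge the EV from the PV surplus, scaled by a grid/PV mix factor."""
    surplus = max(0.0, pv_power - house_load)
    return min(max_charge, surplus * pv_factor)
```

Solving for the threshold itself inside the LP, as proposed, would mean treating `threshold` as a decision variable rather than a constant.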

If you think these features make sense for your project, I'm happy to contribute code! We could also implement the PV forecast based on historic data, but that would most likely make more sense as a separate integration.

Best regards
Philip

How to create a deferrable load sensor

First off, great project. Can't wait to start using and benefiting from it.

I need to create a new power sensor that takes my consumed power (A: the house electricity consumption) and subtracts the power consumed by the device I want to exclude (B: the boiler thermostat resistance).

I have the two sensors (A and B) and can build a template sensor in Home Assistant, BUT A returns the net value effectively consumed by the house, meaning that during PV generation this value can become negative, as the house is consuming less than what the solar panels are generating.

What is the right way to go here for the new sensor that I need to add in EMHASS for the load power without the deferrable loads? Just a subtraction (A-B)? Or something else?
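One common approach (not official EMHASS guidance; entity names below are placeholders) is a Home Assistant template sensor that performs the subtraction. Whether PV production must be added back depends on whether sensor A is net of solar:

```yaml
# Hypothetical HA template sensor; entity names are placeholders.
# If sensor.house_power is net of solar generation, you may need to
# add PV production back to recover the true household load.
template:
  - sensor:
      - name: "Power load no var loads"
        unit_of_measurement: "W"
        state: >
          {{ (states('sensor.house_power') | float(0))
             - (states('sensor.boiler_power') | float(0)) }}
```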

FR: Pass additional sensor entities.

Working quite well here now and stable.

Would it be possible to pass some additional sensor entities to HA?

p_grid_forecast & total value of cost function

Thanks.

My optimisation for the next 24 hrs gives a total value of the cost function of $63.

optimisation commands not updating data

Around 26th August, the optimisation process stopped running on my HAOS install.

I run two calls: a day-ahead optimization at 5:47 am, and the perfect optimization regularly throughout the day, with external solar, price, and battery-state data being sent. It has worked fine for 6 months.

When I go to the web UI on port 5000 and run the commands, the charts do not update.

I have uninstalled and reinstalled, and the same data set is maintained.

I think the DB is correct in EMHASS. How can I clear this?

Today is 28 August, but the chart shows 26 August.

Selected module throws "KeyError" in the log

I have updated the module configuration by choosing Trina Solar TSM-340DD14A(II), the one that matches my solar panels, from the PV module database, and I get the following exception in the log.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 76, in set_input_data_dict
    P_PV_forecast = fcst.get_power_from_weather(df_weather)
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 317, in get_power_from_weather
    module = cec_modules[self.plant_conf['module_model'][i]]
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'Trina Solar TSM-340DD14A(II)'

SSL Error SSLCertVerificationError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "src/emhass/web_server.py", line 174, in action_call
    _ = publish_data(input_data_dict, app.logger)
  File "/usr/local/lib/python3.8/site-packages/emhass-0.4.5-py3.8.egg/emhass/command_line.py", line 467, in publish_data
    input_data_dict['rh'].post_data(opt_res_latest['P_PV'], idx_closest,
  File "/usr/local/lib/python3.8/site-packages/emhass-0.4.5-py3.8.egg/emhass/retrieve_hass.py", line 305, in post_data
    response = post(url, headers=headers, data=json.dumps(data))
  File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 563, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='URL', port=xxxx): Max retries exceeded with url: /api/states/sensor.p_pv_forecast (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')))

Docker and Home Assistant installations show a similar issue.

Solar.Forecast service gives error

Describe the bug
I am trying to use the Solar.Forecast service with EMHASS. I earlier reported another Solar.Forecast issue, which you fixed: #47

I have two Solar.Forecast services configured, one for each of my two roofs.

I pass the solar_forecast_kwp key with this shell_command:

shell_command:
  trigger_entsoe_forecast: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"solar_forecast_kwp\":5.3,
    \"load_cost_forecast\":{{((state_attr('sensor.entsoe_average_electricity_price_today', 'prices_today') | map(attribute='price') | list  + state_attr('sensor.entsoe_average_electricity_price_today', 'prices_tomorrow') | map(attribute='price') | list))[now().hour:][:24] }},
    \"prod_price_forecast\":{{((state_attr('sensor.entsoe_average_electricity_price_today', 'prices_today') | map(attribute='price') | list  + state_attr('sensor.entsoe_average_electricity_price_today', 'prices_tomorrow') | map(attribute='price') | list))[now().hour:][:24]}}
    }' http://localhost:5000/action/dayahead-optim"

Emhass gives these error in the log:

2023-03-13 20:10:41,838 - web_server - INFO - Retrieving weather forecast data using method = solar.forecast
2023-03-13 20:10:42,043 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 170, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 89, in set_input_data_dict
    df_weather = fcst.get_weather_forecast(method=optim_conf['weather_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 273, in get_weather_forecast
    data_dict = {'ts':list(data_raw['result']['watts'].keys()), 'yhat':list(data_raw['result']['watts'].values())}
TypeError: 'NoneType' object is not subscriptable

Hassos also logs:

Logger: homeassistant.config_entries
Source: config_entries.py:425
First occurred: 20:09:30 (2 occurrences)
Last logged: 20:09:30

Config entry 'Hustak' for forecast_solar integration not ready yet: Rate limit for API calls reached. (error 429); Retrying in background
Config entry 'Garasjetak' for forecast_solar integration not ready yet: Rate limit for API calls reached. (error 429); Retrying in background

and

Logger: homeassistant.components.forecast_solar
Source: components/forecast_solar/coordinator.py:65
Integration: Forecast.Solar (documentation, issues)
First occurred: 20:09:30 (18 occurrences)
Last logged: 20:16:18

Unexpected error fetching forecast_solar data: Rate limit for API calls reached. (error 429)
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 239, in _async_refresh
    self.data = await self._async_update_data()
  File "/usr/src/homeassistant/homeassistant/components/forecast_solar/coordinator.py", line 65, in _async_update_data
    return await self.forecast.estimate()
  File "/usr/local/lib/python3.10/site-packages/forecast_solar/__init__.py", line 148, in estimate
    data = await self._request(
  File "/usr/local/lib/python3.10/site-packages/forecast_solar/__init__.py", line 122, in _request
    raise ForecastSolarRatelimit(data["message"])
forecast_solar.exceptions.ForecastSolarRatelimit: Rate limit for API calls reached. (error 429)

Home Assistant installation type
Home Assistant 2023.3.3
Supervisor 2023.03.1
Operating System 9.5
Frontend 20230309.0 - latest

Home Assistant installation type

  • Home Assistant OS
  • Home Assistant Supervised
  • Home Assistant Core

Your hardware
Intel Nuc with hassos

EMHASS installation type

  • Add-on 0.3.5

Emhass.log
emhass.log

Passing forecast data do not work

I have posted most of the description of the problem here:
https://community.home-assistant.io/t/emhass-an-energy-management-for-home-assistant/338126/111?u=haraldov

What is the Home Assistant log saying when you execute that shell command?

2022-08-28 20:38:03.701 DEBUG (MainThread) [homeassistant.components.shell_command] Stdout of command: curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim, return code: 0:
2022-08-28 20:38:03.701 DEBUG (MainThread) [homeassistant.components.shell_command] Stderr of command: curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim, return code: 0:
➜ config

However it doesn't get updated in the model.

Home Assistant 2022.8.7
Supervisor 2022.08.3
Operating System 8.5
Frontend 20220802.0 - latest
Raspberry Pi 4

EMHASS
Current version: 0.2.20

[2022-08-28 00:14:04,984] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-08-28 00:14:04,985] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-08-28 00:14:04,985] INFO in web_server: The base path is: /usr/src
[2022-08-28 00:14:04,989] INFO in web_server: Using core emhass version: 0.3.18
[2022-08-28 00:15:24,545] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-28 00:20:29,186] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-28 00:22:31,750] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-28 00:22:33,664] INFO in command_line: Setting up needed data
[2022-08-28 00:22:33,759] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-08-28 00:22:39,605] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-08-28 00:22:39,607] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-08-28 00:22:53,887] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-28 00:22:56,188] INFO in web_server: >> Performing dayahead optimization...
[2022-08-28 00:22:56,189] INFO in command_line: Performing day-ahead forecast optimization
[2022-08-28 00:22:56,198] INFO in optimization: Perform optimization for the day-ahead
[2022-08-28 00:22:56,755] INFO in optimization: Status: Optimal
[2022-08-28 00:22:56,756] INFO in optimization: Total value of the Cost function = -2.02
[2022-08-28 20:37:38,794] INFO in web_server: EMHASS server online, serving index.html...
[2022-08-28 20:38:27,230] INFO in web_server: EMHASS server online, serving index.html...

System Information

version core-2022.8.7
installation_type Home Assistant OS
dev false
hassio true
docker true
user root
virtualenv false
python_version 3.10.5
os_name Linux
os_version 5.15.32-v8
arch aarch64
timezone Europe/Oslo
config_dir /config
Home Assistant Community Store
GitHub API ok
GitHub Content ok
GitHub Web ok
GitHub API Calls Remaining 5000
Installed Version 1.26.2
Stage running
Available Repositories 1164
Downloaded Repositories 12
Home Assistant Cloud
logged_in true
subscription_expiration September 7, 2022 at 02:00
relayer_connected true
remote_enabled true
remote_connected true
alexa_enabled false
google_enabled true
remote_server eu-west-2-3.ui.nabu.casa
can_reach_cert_server ok
can_reach_cloud_auth ok
can_reach_cloud ok
Easee EV Charger
component_version 0.9.44
reach_easee_cloud ok
connected2stream true
Home Assistant Supervisor
host_os Home Assistant OS 8.5
update_channel stable
supervisor_version supervisor-2022.08.3
agent_version 1.2.1
docker_version 20.10.14
disk_total 118.3 GB
disk_used 14.8 GB
healthy true
supported true
board rpi4-64
supervisor_api ok
version_api ok
installed_addons ESPHome (2022.3.1), SSH & Web Terminal (12.0.2), Mosquitto broker (6.1.2), File editor (5.3.3), Frigate NVR Proxy (1.3), Home Assistant Google Drive Backup (0.108.4), Z-Wave JS to MQTT (0.45.0), EMHASS (0.2.20)
Dashboards
dashboards 1
resources 2
views 10
mode storage
Recorder
oldest_recorder_run August 25, 2022 at 08:33
current_recorder_run August 28, 2022 at 09:14
estimated_db_size 575.15 MiB
database_engine sqlite
database_version 3.38.5
Spotify
api_endpoint_reachable ok

Error passing forecast data

When trying to pass forecast data all looks good from the command line:

mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"prod_price_forecast":[0.14, 0.15, 0.27, 0.3, 0.34, 0.29, 0.23, 0.19, 0.16, 0.13, 0.13, 0.12, 0.13, 0.12, 0.15, 0.14, 0.15, 0.14, 0.13, 0.13, 0.13, 0.13, 0.11, 0.11, 0.14, 0.15, 0.15, 0.14, 0.22, 0.37, 0.37, 0.37, 0.37, 0.3, 0.37, 0.37, 0.16, 0.15, 0.15, 0.15, 0.16, 0.22, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 201 CREATED
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Mon, 25 Apr 2022 05:45:55 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 45

EMHASS >> Action dayahead-optim executed... 
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast":[0.25, 0.26, 0.39, 0.42, 0.46, 0.42, 0.35, 0.31, 0.27, 0.24, 0.24, 0.23, 0.24, 0.23, 0.26, 0.26, 0.26, 0.25, 0.24, 0.24, 0.24, 0.24, 0.22, 0.21, 0.25, 0.26, 0.26, 0.25, 0.34, 0.5, 0.51, 0.51, 0.5, 0.42, 0.51, 0.5, 0.28, 0.27, 0.26, 0.26, 0.27, 0.34, 0.5, 0.51, 0.51, 0.51, 0.51, 0.51]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 201 CREATED
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Mon, 25 Apr 2022 05:46:33 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 45

EMHASS >> Action dayahead-optim executed... 
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 201 CREATED
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Mon, 25 Apr 2022 05:47:30 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 45

EMHASS >> Action dayahead-optim executed... 
mark@odroid:~$ 

However it doesn't get updated in the model, and the logs complain that the length needs to be 48, while simultaneously confirming that the length is 48.

[2022-04-25 15:45:40,694] ERROR in app_server: ERROR: The passed data is either not a list or the length is not correct, length should be 48
[2022-04-25 15:45:40,695] ERROR in app_server: Passed type is <class 'list'> and length is 48
[2022-04-25 15:45:55,225] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-25 15:45:55,252] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
172.30.32.1 - - [25/Apr/2022 15:45:55] "POST /action/dayahead-optim HTTP/1.1" 201 -
[2022-04-25 15:46:18,650] ERROR in app_server: ERROR: The passed data is either not a list or the length is not correct, length should be 48
[2022-04-25 15:46:18,653] ERROR in app_server: Passed type is <class 'list'> and length is 48
[2022-04-25 15:46:33,131] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-25 15:46:33,155] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
172.30.32.1 - - [25/Apr/2022 15:46:33] "POST /action/dayahead-optim HTTP/1.1" 201 -
[2022-04-25 15:47:14,370] ERROR in app_server: ERROR: The passed data is either not a list or the length is not correct, length should be 48
[2022-04-25 15:47:14,373] ERROR in app_server: Passed type is <class 'list'> and length is 48
[2022-04-25 15:47:30,050] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-25 15:47:30,083] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
172.30.32.1 - - [25/Apr/2022 15:47:30] "POST /action/dayahead-optim HTTP/1.1" 201 -

How to bypass this on HA?

Logger: homeassistant.components.hassio
Source: components/hassio/websocket_api.py:123
Integration: Home Assistant Supervisor (documentation, issues)
First occurred: 3:30:36 PM (1 occurrences)
Last logged: 3:30:36 PM

Failed to to call /store/repositories - https://github.com/davidusb-geek/emhass is not a valid add-on repository

datetime.now() method consequences for submitted forecasts

The publish-data command will load the latest csv file and look for the closest timestamp that matches the current time using the datetime.now() method in Python. This means that if EMHASS is configured for 30min time step optimizations, the csv will be saved with timestamps 00:00, 00:30, 01:00, 01:30, … and so on. If the current time is 00:05, then the closest timestamp of the optimization results that will be published is 00:00. If the current time is 00:25, then the closest timestamp of the optimization results that will be published is 00:30.
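The closest-timestamp behaviour described above can be sketched as follows; this is a simplified stand-in for what publish-data does, not the actual EMHASS code:

```python
from datetime import datetime, timedelta

def closest_timestep(now: datetime, step_minutes: int = 30) -> datetime:
    """Round `now` to the nearest optimization time step since midnight."""
    step = timedelta(minutes=step_minutes)
    base = now.replace(hour=0, minute=0, second=0, microsecond=0)
    n_steps = round((now - base) / step)  # nearest whole number of steps
    return base + n_steps * step

# 00:05 rounds down to 00:00; 00:25 rounds up to 00:30
print(closest_timestep(datetime(2023, 5, 21, 0, 5)))   # 2023-05-21 00:00:00
print(closest_timestep(datetime(2023, 5, 21, 0, 25)))  # 2023-05-21 00:30:00
```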

One (unintended) consequence of the datetime.now() method arises when submitting forecast data.

Consider the data from my energy provider, which is represented in HA as the following entities and attributes:

sensor.amber_general_price: 0.36
sensor.amber_general_forecast: 0.37
'forecasts.per_kwh': [0.37, 0.38, 0.38, 0.38, 0.38, 0.38, 0.38, 0.38, 0.39, 0.43, 0.45, 0.72, 0.73, 0.73, 0.72, 0.45, 0.7, 0.46, 0.45, 0.39, 0.36, 0.32, 0.32, 0.33, 0.7, 0.7, 0.51, 0.43, 0.51, 0.61, 0.46, 0.47, 0.61, 15.12, 15.08, 0.74, 0.52, 0.61, 0.52, 0.45, 0.44, 0.43, 0.45, 0.44, 0.38, 0.36, 0.43, 0.39]
  • If I conduct an optimisation at 00:35 the optimisation results will be published for 00:30, so the first value needs to be the now value, followed by the forecast for 01:00, then 01:30 and so on.
"load_cost_forecast":(sensor.amber_general_price + forecasts.per_kwh)[:48]
"load_cost_forecast": [0.36, 0.37, 0.38, 0.38, ...]

[:48] truncates to 48 elements, as that list will have 49 elements
  • If I conduct an optimisation at 00:55 the optimisation results will be published for 01:00, so the first value needs to be the forecast value for 01:00, then 01:30 and so on. The now value isn't used.
"load_cost_forecast":(forecasts.per_kwh)
"load_cost_forecast":[0.37, 0.38, 0.38, 0.38, ...]
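The prepend-and-truncate logic for the first case above can be written directly (a sketch using the example values; in practice the lists come from the Amber sensor attributes shown earlier):

```python
# Build load_cost_forecast for an optimisation early in a time step:
# current price first, then the forecast values, truncated to 48 elements.
now_price = 0.36                         # states('sensor.amber_general_price')
per_kwh = [0.37, 0.38, 0.38, 0.38]       # forecasts.per_kwh (48 values in practice)

load_cost_forecast = ([now_price] + per_kwh)[:48]
print(load_cost_forecast)  # [0.36, 0.37, 0.38, 0.38, 0.38]
```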

Whilst not impossible to calculate the changing load_cost_forecast, it is confusing, as the data required changes depending on which 15-minute block the call is made in.

Additionally, I would submit that the first option, including the now value, is superior, as it gives the optimisation more data to work with. This is also important for the MPC, as it will be called at high frequency (maybe every 5 minutes).

Could I propose that the optimisation should always include the now value as the first element of the forecast list, and that for consistency the published optimisation shouldn't map to the closest timestamp (which may be ahead or behind), but always to the start of the current time step? So if I publish at either 00:05 or 00:25 it will be published for 00:00; if I publish at either 00:35 or 00:55 it will be published for 00:30.
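The proposed behaviour amounts to flooring to the start of the time step instead of rounding to the nearest one; a minimal sketch (not EMHASS code):

```python
from datetime import datetime, timedelta

def floor_timestep(now: datetime, step_minutes: int = 30) -> datetime:
    """Truncate `now` to the start of the current optimization time step."""
    step = timedelta(minutes=step_minutes)
    base = now.replace(hour=0, minute=0, second=0, microsecond=0)
    n_steps = (now - base) // step   # floor division of timedeltas gives an int
    return base + n_steps * step

# 00:05 and 00:25 both map to 00:00; 00:35 and 00:55 both map to 00:30
```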

Solar forecast is broken

Describe the bug
When calling EMHASS with the solar forecast method, the system fails.

To Reproduce
curl -i -H "Content-Type: application/json" -X POST -d '{"solar_forecast_kwp":5}' http://localhost:5000/action/dayahead-optim

Screenshots

2023-06-01 13:58:01,021 - web_server - INFO - Retrieving weather forecast data using method = solar.forecast
2023-06-01 13:58:02,577 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 89, in set_input_data_dict
    df_weather = fcst.get_weather_forecast(method=optim_conf['weather_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 273, in get_weather_forecast
    data_dict = {'ts':list(data_raw['result']['watts'].keys()), 'yhat':list(data_raw['result']['watts'].values())}
TypeError: 'NoneType' object is not subscriptable
2023-06-01 13:58:50,660 - web_server - INFO - Setting up needed data
2023-06-01 13:58:50,687 - web_server - INFO - Retrieving weather forecast data using method = solar.forecast
2023-06-01 13:58:51,871 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 174, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 89, in set_input_data_dict
    df_weather = fcst.get_weather_forecast(method=optim_conf['weather_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 273, in get_weather_forecast
    data_dict = {'ts':list(data_raw['result']['watts'].keys()), 'yhat':list(data_raw['result']['watts'].values())}
TypeError: 'NoneType' object is not subscriptable
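The failing line assumes the Forecast.Solar response always contains result.watts, but when the API returns no usable data (often when rate-limited or given bad parameters), the payload has no such key. A defensive parse, sketched here with a hypothetical helper name rather than EMHASS's actual code, would at least surface a readable error:

```python
def parse_solar_forecast(data_raw):
    """Extract timestamps and watt values from a Forecast.Solar-style payload.

    Raises a readable ValueError instead of "'NoneType' object is not
    subscriptable" when the API returned no usable data.
    """
    watts = ((data_raw or {}).get("result") or {}).get("watts")
    if not watts:
        raise ValueError(
            f"Forecast.Solar returned no usable data (rate limit or bad "
            f"parameters?); payload was: {data_raw!r}"
        )
    return {"ts": list(watts.keys()), "yhat": list(watts.values())}
```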

Home Assistant installation type

  • Home Assistant OS

Your hardware

  • OS: HA OS
  • Architecture: amd64

EMHASS installation type

  • Add-on


Feature Request: more verbose logging

Describe the bug
It would be great to get more verbose logging during error conditions.

To Reproduce

From my logs, I am passing some invalid data, but I don't know what it is.

[2023-01-28 14:10:00,382] INFO in command_line: Setting up needed data
[2023-01-28 14:10:00,384] WARNING in utils: There are non numeric values on the passed data, check for missing values (nans, null, etc)
[2023-01-28 14:10:00,385] WARNING in utils: There are non numeric values on the passed data, check for missing values (nans, null, etc)
[2023-01-28 14:10:00,388] INFO in retrieve_hass: Retrieve hass get data method initiated...

Expected behavior
It would be useful to include the out of bounds data in the logs.
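As a sketch of what that could look like (a plain-Python illustration, not EMHASS's actual utils code), the check could name the offending series, index, and value instead of emitting a generic warning:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("utils")

def report_non_numeric(data: dict) -> list:
    """Name the exact series, index, and value behind a non-numeric warning.

    data maps a sensor name to its list of values; returns the offenders so
    callers (and the log) see *what* is out of bounds, not just that
    something is.
    """
    offenders = []
    for name, values in data.items():
        for i, value in enumerate(values):
            # value != value catches NaN; non-numbers fail the isinstance check
            if not isinstance(value, (int, float)) or value != value:
                offenders.append((name, i, value))
                logger.warning("Non-numeric value in %s at index %d: %r",
                               name, i, value)
    return offenders
```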

Home Assistant installation type

  • Home Assistant Supervised

Your hardware

  • OS: Linux
  • Architecture: aarch64

EMHASS installation type

  • Add-on


KeyError: 'P_batt'

Running 0.1.37 and feels like we are getting close :-).

When I enable the battery in the solution space, dayahead doesn't seem to finish; it had been running for over 10 minutes before I called it again:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
 * Serving Flask app 'app_server' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on all addresses (0.0.0.0)
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://127.0.0.1:5000
 * Running on http://172.30.33.4:5000 (Press CTRL+C to quit)
[2022-04-28 13:27:11,810] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-28 13:27:11,866] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
192.168.86.50 - - [28/Apr/2022 13:38:18] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [28/Apr/2022 13:38:18] "GET /static/style.css HTTP/1.1" 304 -
[2022-04-28 13:45:39,692] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-28 13:45:39,742] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD

Then the next time publish runs it fails:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
 * Serving Flask app 'app_server' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on all addresses (0.0.0.0)
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://127.0.0.1:5000
 * Running on http://172.30.33.4:5000 (Press CTRL+C to quit)
192.168.86.50 - - [28/Apr/2022 13:15:43] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [28/Apr/2022 13:15:44] "GET /static/style.css HTTP/1.1" 304 -
[2022-04-28 13:16:10,105] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-28 13:16:10,160] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
[2022-04-28 13:20:17,487] ERROR in app: Exception on /action/publish-data [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3621, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'P_batt'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/src/app_server.py", line 191, in action_call
    _ = publish_data(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 182, in publish_data
    input_data_dict['rh'].post_data(opt_res_dayahead['P_batt'], idx_closest,
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 3505, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3623, in get_loc
    raise KeyError(key) from err
KeyError: 'P_batt'
172.30.32.1 - - [28/Apr/2022 13:20:17] "POST /action/publish-data HTTP/1.1" 500 -
My add-on configuration:

web_ui_url: 0.0.0.0
hass_url: empty
long_lived_token: empty
costfun: profit
optimization_time_step: 30
historic_days_to_retrieve: 2
sensor_power_photovoltaics: sensor.apf_generation_entity
sensor_power_load_no_var_loads: sensor.power_load_no_var_loads
number_of_deferrable_loads: 2
nominal_power_of_deferrable_loads: 5000,1500
operating_hours_of_each_deferrable_load: 5,8
peak_hours_periods_start_hours: 02:54,17:24
peak_hours_periods_end_hours: 15:24,20:24
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 30000
pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
surface_tilt: 30
surface_azimuth: 205
modules_per_string: 16
strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 5000
battery_charge_power_max: 5000
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 13500
battery_minimum_state_of_charge: 0.05
battery_maximum_state_of_charge: 1
battery_target_state_of_charge: 0.2

UnicodeDecodeError

Describe the bug
When I installed your add-on, it worked for a few hours, but then I got this error while running EMHASS as an add-on on HA Supervisor.

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 285, in
injection_dict = pickle.load(fid)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x95 in position 3591584: invalid start byte

Home Assistant installation type

  • Home Assistant Supervised

Your hardware

  • OS: HA OS
  • Architecture: amd64

EMHASS installation type

  • Add-on

Feature request - pass current battery SOC to dayahead-optim to improve optimization

I find that dayahead-optim quite often gets my battery strategy wrong, since the battery has a different SOC than dayahead-optim seems to assume. For various reasons my battery level might not be what the optimization expects, e.g. due to unexpected load or unexpected PV power. As of now it always seems to assume that the battery is at the target SOC at the beginning of an optimization period.

Allow me to pass a parameter to dayahead-optim where I can send the current battery SOC to help the optimization make the right decisions.
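Recent EMHASS versions accept soc_init and soc_final as runtime parameters on the naive-mpc-optim endpoint (fractions of capacity; check your version's documentation for dayahead-optim support). A sketch building such a payload from a battery SOC sensor reading:

```python
import json

def build_mpc_payload(current_soc_percent: float, target_soc: float = 0.2) -> str:
    """Build runtime parameters passing the battery's real SOC to EMHASS.

    soc_init/soc_final are fractions of capacity (0-1); the input here is
    assumed to be a percentage, e.g. from a hypothetical
    sensor.battery_state_of_charge in Home Assistant.
    """
    payload = {
        "soc_init": round(current_soc_percent / 100.0, 3),
        "soc_final": target_soc,
    }
    return json.dumps(payload)

# e.g. POST this body to http://localhost:5000/action/naive-mpc-optim
print(build_mpc_payload(57.0))  # {"soc_init": 0.57, "soc_final": 0.2}
```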

Error on dayahead-optim POST call

Hi

I'm getting an error when triggering the REST call for the dayahead-optim.

This is what I get in the logs:

[2022-06-19 10:45:17,391] INFO in command_line: Setting up needed data
[2022-06-19 10:45:17,399] INFO in forecast: Retrieving weather forecast data using method = scrapper
[2022-06-19 10:45:19,945] INFO in forecast: Retrieving data from hass for load forecast using method = naive
[2022-06-19 10:45:19,947] INFO in retrieve_hass: Retrieve hass get data method initiated...
[2022-06-19 10:45:47,744] INFO in web_server:  >> Performing dayahead optimization...
[2022-06-19 10:45:47,745] INFO in command_line: Performing day-ahead forecast optimization
[2022-06-19 10:45:47,755] INFO in optimization: Perform optimization for the day-ahead
[2022-06-19 10:45:47,894] ERROR in optimization: It was not possible to find a valid solver for Pulp package
[2022-06-19 10:45:47,895] INFO in optimization: Status: Not Solved
[2022-06-19 10:45:47,895] WARNING in optimization: Cost function cannot be evaluated, probably None
[2022-06-19 10:45:47,904] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 151, in action_call
    opt_res = dayahead_forecast_optim(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 195, in dayahead_forecast_optim
    opt_res_dayahead = input_data_dict['opt'].perform_dayahead_forecast_optim(
  File "/usr/local/lib/python3.9/dist-packages/emhass/optimization.py", line 493, in perform_dayahead_forecast_optim
    self.opt_res = self.perform_optimization(df_input_data, P_PV.values.ravel(),
  File "/usr/local/lib/python3.9/dist-packages/emhass/optimization.py", line 384, in perform_optimization
    opt_tp["P_grid"] = [P_grid_pos[i].varValue + P_grid_neg[i].varValue for i in set_I]
  File "/usr/local/lib/python3.9/dist-packages/emhass/optimization.py", line 384, in <listcomp>
    opt_tp["P_grid"] = [P_grid_pos[i].varValue + P_grid_neg[i].varValue for i in set_I]
TypeError: unsupported operand type(s) for +: 'NoneType' and 'NoneType'

This is my config.yaml configuration:

rest_command:
  dayahead_optim: 
    url: http://192.168.86.29:5000/action/dayahead-optim
    method: POST
    content_type: 'application/json'
    payload: '{}'

  publish_data: 
    url: http://192.168.86.29:5000/action/publish-data
    method: POST
    content_type: 'application/json'
    payload: '{}'

Adding real examples in the documentation and better explain parameters/abbreviations

Hello davidusb-geek,
Thank you for your project you have created.

Concerning the settings and documentation: I have been struggling for several days to start the addon correctly without errors in the log. Even though I have read through all the notes, I still had issues. Thus I would propose a more verbose start-up guide. I know documentation was always the least loved part for me too, but seeing a real example (exact wording) would help a lot :-)

The addon is rather sensitive to correct entries (IP address with or without the http prefix, not allowing the word "localhost" but allowing the IP address 0.0.0.0, stating http or https, etc.).

Also, in the next steps, for example when I was hesitating whether to optimize for cost or profit, the difference was not clear to me. And some of the abbreviations were not clear either.

Thank you for considering it, and thank you again for your addon.
Daman

Feature request: Make own forecast data persistent

After the load_cost_forecast and prod_price_forecast from the Nordpool addon are passed as lists to emhass, the data is not persistent. When emhass is restarted the data is lost and the default values are used in unit_load_cost and unit_prod_price.

This causes problems:
When I use the service Shell Command: post_nordpool_forecast to update the data, the data does not exist, because Nordpool prices are only updated at 13:00, and they are only updated once.
Or the data exists, but today's and tomorrow's data from post_nordpool_forecast are not correctly placed in time.
Example: tomorrow's price of 0.351 is placed at 21:00-22:00, but the correct time is 00:00-01:00.

It would be nice if the forecast data could be stored in emhass and not lost when emhass is restarted.
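A sketch of the requested behaviour, persisting the last runtime forecasts to disk so a restart can fall back on them (hypothetical helper names and cache location, not EMHASS's code):

```python
import json
from pathlib import Path
from typing import Optional

def save_forecasts(cache: Path, load_cost: list, prod_price: list) -> None:
    """Persist the last forecasts passed at runtime (e.g. from Nordpool)."""
    cache.write_text(json.dumps({"load_cost_forecast": load_cost,
                                 "prod_price_forecast": prod_price}))

def load_forecasts(cache: Path) -> Optional[dict]:
    """Return the cached forecasts after a restart, or None if none saved."""
    return json.loads(cache.read_text()) if cache.exists() else None
```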

Multiple deferrable_loads

I want to use multiple deferrable_loads, like dishwasher, washing machine, dryer, hot water, and a boost of my thermostat.
Some of my deferrable_loads aren't needed every day.
Is there a way to tell emhass whether a deferrable_load is needed?
Maybe an input_boolean to control every deferrable_load?
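One workable pattern is to drive the runtime parameters from Home Assistant booleans and set a load's operating hours to 0 when it isn't needed; def_total_hours is the runtime parameter name in recent EMHASS versions (verify against your version's docs). A sketch, with hypothetical load names:

```python
import json

def optim_runtime_params(hours_by_load: dict) -> str:
    """Disable deferrable loads that aren't needed today.

    Setting a load's total operating hours to 0 for this run effectively
    removes it from the optimization; the values could be driven by
    input_booleans in Home Assistant.
    """
    return json.dumps({"def_total_hours": list(hours_by_load.values())})

# Dishwasher not needed today, washing machine 2 h, water heater 3 h:
print(optim_runtime_params({"dishwasher": 0, "washing_machine": 2,
                            "water_heater": 3}))
# {"def_total_hours": [0, 2, 3]}
```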

ERROR:web_server:P_batt was not found in results

Having issues with the P_batt not being available :-(

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
[2022-05-19 07:30:45,287] INFO in web_server: Launching the emhass webserver at: http://0.0.0.0:5000
[2022-05-19 07:30:45,287] INFO in web_server: Home Assistant data fetch will be performed using url: http://supervisor/core/api
[2022-05-19 07:30:45,288] INFO in web_server: The base path is: /usr/src
[2022-05-19 07:30:45,292] INFO in web_server: Using core emhass version: 0.3.8
[2022-05-19 07:31:06,375] INFO in command_line: Setting up needed data
INFO:web_server:Setting up needed data
[2022-05-19 07:31:06,431] INFO in forecast: Retrieving weather forecast data using method = scrapper
INFO:web_server:Retrieving weather forecast data using method = scrapper
[2022-05-19 07:31:09,662] INFO in forecast: Retrieving data from hass for load forecast using method = naive
INFO:web_server:Retrieving data from hass for load forecast using method = naive
[2022-05-19 07:31:09,663] INFO in retrieve_hass: Retrieve hass get data method initiated...
INFO:web_server:Retrieve hass get data method initiated...
[2022-05-19 07:31:13,392] INFO in web_server:  >> Performing naive MPC optimization...
INFO:web_server: >> Performing naive MPC optimization...
[2022-05-19 07:31:13,393] INFO in command_line: Performing naive MPC optimization
INFO:web_server:Performing naive MPC optimization
[2022-05-19 07:31:13,416] INFO in optimization: Perform an iteration of a naive MPC controller
INFO:web_server:Perform an iteration of a naive MPC controller
[2022-05-19 07:31:13,563] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
WARNING:web_server:Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-05-19 07:31:13,616] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
WARNING:web_server:Failed LP solve with default Pulp solver, falling back to GLPK_CMD
[2022-05-19 07:31:13,737] INFO in optimization: Status: Optimal
INFO:web_server:Status: Optimal
[2022-05-19 07:31:13,737] INFO in optimization: Total value of the Cost function = -10.1
INFO:web_server:Total value of the Cost function = -10.1
[2022-05-19 07:40:04,528] INFO in command_line: Setting up needed data
INFO:web_server:Setting up needed data
[2022-05-19 07:40:04,539] INFO in web_server:  >> Publishing data...
INFO:web_server: >> Publishing data...
[2022-05-19 07:40:04,540] INFO in command_line: Publishing data to HASS instance
INFO:web_server:Publishing data to HASS instance
[2022-05-19 07:40:04,586] INFO in retrieve_hass: Successfully posted value in existing entity_id
INFO:web_server:Successfully posted value in existing entity_id
[2022-05-19 07:40:04,618] INFO in retrieve_hass: Successfully posted value in existing entity_id
INFO:web_server:Successfully posted value in existing entity_id
[2022-05-19 07:40:04,656] INFO in retrieve_hass: Successfully posted value in existing entity_id
INFO:web_server:Successfully posted value in existing entity_id
[2022-05-19 07:40:04,702] INFO in retrieve_hass: Successfully posted value in existing entity_id
INFO:web_server:Successfully posted value in existing entity_id
[2022-05-19 07:40:04,704] ERROR in command_line: P_batt was not found in results DataFrame. Optimization task may need to be relaunched or it did not converged to a solution.
ERROR:web_server:P_batt was not found in results DataFrame. Optimization task may need to be relaunched or it did not converged to a solution.
My add-on configuration:

web_ui_url: 0.0.0.0
hass_url: empty
long_lived_token: empty
costfun: profit
optimization_time_step: 30
historic_days_to_retrieve: 2
set_total_pv_sell: false
sensor_power_photovoltaics: sensor.apf_generation_entity
sensor_power_load_no_var_loads: sensor.power_load_no_var_loads
number_of_deferrable_loads: 2
list_nominal_power_of_deferrable_loads:
  - nominal_power_of_deferrable_loads: 1500
  - nominal_power_of_deferrable_loads: 5500
  - nominal_power_of_deferrable_loads: 8000
  - nominal_power_of_deferrable_loads: 2400
list_operating_hours_of_each_deferrable_load:
  - operating_hours_of_each_deferrable_load: 4
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 2
  - operating_hours_of_each_deferrable_load: 1
list_peak_hours_periods_start_hours:
  - peak_hours_periods_start_hours: '02:54'
  - peak_hours_periods_start_hours: '17:24'
list_peak_hours_periods_end_hours:
  - peak_hours_periods_end_hours: '15:24'
  - peak_hours_periods_end_hours: '20:24'
list_treat_deferrable_load_as_semi_cont:
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: true
  - treat_deferrable_load_as_semi_cont: false
  - treat_deferrable_load_as_semi_cont: true
load_peak_hours_cost: 0.1907
load_offpeak_hours_cost: 0.1419
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 30000
list_pv_module_model:
  - pv_module_model: Advance_Power_API_M370
  - pv_module_model: Advance_Power_API_M370
list_pv_inverter_model:
  - pv_inverter_model: SolarEdge_Technologies_Ltd___SE7600A_US__208V_
  - pv_inverter_model: SolarEdge_Technologies_Ltd___SE7600A_US__208V_
list_surface_tilt:
  - surface_tilt: 18
  - surface_tilt: 10
list_surface_azimuth:
  - surface_azimuth: 90
  - surface_azimuth: 270
list_modules_per_string:
  - modules_per_string: 29
  - modules_per_string: 21
list_strings_per_inverter:
  - strings_per_inverter: 1
  - strings_per_inverter: 1
set_use_battery: true
battery_discharge_power_max: 5000
battery_charge_power_max: 5000
battery_discharge_efficiency: 1
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 13500
battery_minimum_state_of_charge: 0.01
battery_maximum_state_of_charge: 0.99
battery_target_state_of_charge: 0.15
nominal_power_of_deferrable_loads: 5500,1500,7000,3000
operating_hours_of_each_deferrable_load: 10,16,3,1
peak_hours_periods_start_hours: 02:54,17:24
peak_hours_periods_end_hours: 15:24,20:24
pv_module_model: Advance_Power_API_M370,Advance_Power_API_M370
pv_inverter_model: >-
  SolarEdge_Technologies_Ltd___SE7600A_US__208V_,SolarEdge_Technologies_Ltd___SE7600A_US__208V_
surface_tilt: 10,10
surface_azimuth: 90,270
modules_per_string: 29,21
strings_per_inverter: 1,1

logging significantly reduced with 0.4.0


To Reproduce
Upgrade to 0.3.0+ and logging elements are significantly reduced.

Expected behavior
EMHASS logging verbosity should be able to be tuned up and down.

By default logging should only report significant errors.
At a 'debug' level of logging EMHASS could display the calculated values.

Something like the best-practice framework at https://sematext.com/blog/best-practices-for-efficient-log-management-and-monitoring/#5-use-the-proper-log-level

Screenshots

This is all the logging I have received over 24 hours of operation; normally with my MPC I was receiving the calculated values logged on every call, in my case every minute.

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
waitress   INFO  Serving on http://0.0.0.0:5000
Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 4 (char 3)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 150, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 110, in set_input_data_dict
    rh.get_data(days_list, var_list,
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 124, in get_data
    data = response.json()[0]
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Extra data: line 1 column 4 (char 3)
Set the MPC prediction horizon to at least 5 times the optimization time step
Exception on /action/publish-data [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 154, in action_call
    _ = publish_data(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 458, in publish_data
    idx_closest = opt_res_latest.index.get_indexer([now_precise], method='ffill')[0]
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3729, in get_indexer
    return self._get_indexer_non_comparable(target, method=method, unique=True)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 5894, in _get_indexer_non_comparable
    raise TypeError(f"Cannot compare dtypes {self.dtype} and {other.dtype}")
TypeError: Cannot compare dtypes datetime64[ns] and datetime64[ns, Australia/Brisbane]
Exception on /action/naive-mpc-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.9/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 4 (char 3)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 150, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 110, in set_input_data_dict
    rh.get_data(days_list, var_list,
  File "/usr/local/lib/python3.9/dist-packages/emhass/retrieve_hass.py", line 124, in get_data
    data = response.json()[0]
  File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Extra data: line 1 column 4 (char 3)
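Aside from the reduced logging, the publish-data traceback above is a time-zone mismatch: the cached optimization index is tz-naive while now_precise is tz-aware (Australia/Brisbane). A minimal stdlib illustration of the failure mode and the localize-before-compare fix:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("Australia/Brisbane")
naive = datetime(2023, 4, 1, 10, 0)             # like a tz-naive results index
aware = datetime(2023, 4, 1, 10, 0, tzinfo=tz)  # like a tz-aware now_precise

try:
    naive < aware  # mixing naive and aware datetimes is not allowed
except TypeError as e:
    print(e)  # can't compare offset-naive and offset-aware datetimes

# Fix: localize the naive side before comparing, mirroring what the
# publish step needs to do with the cached optimization index.
assert naive.replace(tzinfo=tz) == aware
```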

Home Assistant installation type

  • Home Assistant Supervised

Your hardware

  • OS: Linux
  • Architecture: aarch64

EMHASS installation type

  • Add-on

Feature request: Grid availability

Hello

Thank you for amazing work.

South Africa currently has significant power constraints. As such, power is rotationally shed according to schedules that are updated based on grid constraints.

Clearly a huge number of Home Assistant users are affected by this.

Someone has developed a Home Assistant integration to pull load-shedding data into HA. This works well.

It would seem that this could be handled generically by EMHASS by having a constraint in the form of a schedule of when not to use grid power (in SA, the applicable load-shedding schedule), or even vice versa, when to always prefer grid power. I can imagine this could be forced via the power cost schedule: making grid power exorbitantly expensive during load shedding and free when it should be used mandatorily.
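The cost-schedule idea sketches easily: overlay a prohibitive price on the time steps covered by the load-shedding schedule before passing load_cost_forecast (illustrative values and a hypothetical helper name):

```python
def apply_load_shedding(load_cost, shed_slots, penalty=100.0):
    """Overlay a prohibitive grid price on time steps under load shedding.

    load_cost: per-step grid prices; shed_slots: indices where the grid is
    unavailable. The penalty only needs to dwarf real prices so the optimizer
    avoids the grid there; all values here are illustrative.
    """
    shed = set(shed_slots)
    return [penalty if i in shed else cost for i, cost in enumerate(load_cost)]

costs = apply_load_shedding([0.14, 0.14, 0.19, 0.19], shed_slots=[1, 2])
print(costs)  # [0.14, 100.0, 100.0, 0.19]
```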

Trying to use the Solar.Forecast service gives error

I am using the EMHASS add-on, release 0.2.29, on HassOS with Home Assistant 2023.2.2. I want to use the Solar.Forecast service for my PV power production forecast.

I pass the solar_forecast_kwp with this shell_command:

shell_command:
  publish_data: "curl -i -H 'Content-Type:application/json' -X POST -d '{}' http://localhost:5000/action/publish-data"

  trigger_nordpool_forecast: 'curl -i -H "Content-Type: application/json" -X POST -d ''{"solar_forecast_kwp":5,"load_cost_forecast":{{((state_attr(''sensor.nordpool'', ''raw_today'') | map(attribute=''value'') | list  + state_attr(''sensor.nordpool'', ''raw_tomorrow'') | map(attribute=''value'') | list))[now().hour:][:24] }},"prod_price_forecast":{{((state_attr(''sensor.nordpool'', ''raw_today'') | map(attribute=''value'') | list  + state_attr(''sensor.nordpool'', ''raw_tomorrow'') | map(attribute=''value'') | list))[now().hour:][:24]}}}'' http://localhost:5000/action/dayahead-optim'

Emhass log gives these errors:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 135, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 81, in set_input_data_dict
    df_weather = fcst.get_weather_forecast(method=optim_conf['weather_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 248, in get_weather_forecast
    "/"+str(self.plant_conf["surface_tilt"])+"/"+str(self.plant_conf["surface_azimuth"]-180)+\
TypeError: unsupported operand type(s) for -: 'list' and 'int'
[2023-02-06 20:13:46,801] INFO in command_line: Setting up needed data
[2023-02-06 20:13:46,804] ERROR in utils: ERROR: The passed data is either not a list or the length is not correct, length should be 24
[2023-02-06 20:13:46,805] ERROR in utils: Passed type is <class 'list'> and length is 24
[2023-02-06 20:13:46,806] ERROR in utils: ERROR: The passed data is either not a list or the length is not correct, length should be 24
[2023-02-06 20:13:46,806] ERROR in utils: Passed type is <class 'list'> and length is 24
[2023-02-06 20:13:46,812] INFO in forecast: Retrieving weather forecast data using method = solar.forecast
[2023-02-06 20:13:46,813] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 135, in action_call
    input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 81, in set_input_data_dict
    df_weather = fcst.get_weather_forecast(method=optim_conf['weather_forecast_method'])
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 248, in get_weather_forecast
    "/"+str(self.plant_conf["surface_tilt"])+"/"+str(self.plant_conf["surface_azimuth"]-180)+\
TypeError: unsupported operand type(s) for -: 'list' and 'int'

The EMHASS config I use for the solar panels, microinverter, azimuth and tilt angle is published here:
(https://community.home-assistant.io/t/emhass-an-energy-management-for-home-assistant/338126/100)
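The traceback points at `self.plant_conf["surface_azimuth"] - 180`: the Solar.Forecast URL builder expects a scalar azimuth, but a multi-plane configuration stores `surface_azimuth` (and `surface_tilt`) as lists, so subtracting an int fails. A minimal reproduction, plus a per-plane workaround sketch (the dict below only mimics the relevant config keys; the values are illustrative):

```python
plant_conf = {"surface_tilt": [30, 30], "surface_azimuth": [205, 120]}

# Reproduces the error: Python does not define list - int
try:
    plant_conf["surface_azimuth"] - 180
except TypeError as err:
    print(err)  # unsupported operand type(s) for -: 'list' and 'int'

# Workaround sketch: normalize to a list and build one URL path per plane
azimuths = plant_conf["surface_azimuth"]
if not isinstance(azimuths, list):
    azimuths = [azimuths]
paths = [f"/{tilt}/{az - 180}"
         for tilt, az in zip(plant_conf["surface_tilt"], azimuths)]
print(paths)  # ['/30/25', '/30/-60']
```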

When using custom names there are no attributes in the sensors

Describe the bug
When I use custom entity names, no attributes are exposed
To Reproduce
Steps to reproduce the behavior

Expected behavior
I expected to see the attributes

Screenshots
(Four screenshots attached: Schermafbeelding 2023-06-26 om 18 19 40, 18 25 17, 18 25 40 and 18 25 52)

Home Assistant installation type

  • Home Assistant OS

Your hardware

  • OS: HA OS
  • Architecture: amd64

EMHASS installation type

  • Add-on

Additional context

publish_data:
  url: http://localhost:5001/action/publish-data
  method: POST
  content_type: "application/json"
  payload: >-
    {
      "custom_deferrable_forecast_id": [
        {"entity_id": "sensor.emhass_wasmachien",
          "unit_of_measurement": "W",
          "friendly_name": "Emhass wasmachien"},
        {"entity_id": "sensor.emhass_droogkast",
          "unit_of_measurement": "W",
          "friendly_name": "Emhass droogkast"},
        {"entity_id": "sensor.emhass_afwasmachien",
          "unit_of_measurement": "W",
          "friendly_name": "Emhass afwasmachien"},
        {"entity_id": "sensor.emhass_warmtepompboiler",
          "unit_of_measurement": "W",
          "friendly_name": "Emhass warmtepompboiler"
        }
      ],
      "custom_pv_forecast_id": {
        "entity_id": "sensor.emhass_pv_forecast",
        "unit_of_measurement": "W",
        "friendly_name": "Emhass pv forecast"},
      "custom_load_forecast_id": {
        "entity_id": "sensor.emhass_load_forecast",
        "unit_of_measurement": "W",
        "friendly_name": "Emhass load forecast"},
      "custom_grid_forecast_id": {
        "entity_id": "sensor.emhass_grid_forecast",
        "unit_of_measurement": "W",
        "friendly_name": "Emhass grid forecast"},
      "custom_unit_load_cost_id": {
        "entity_id": "sensor.emhass_load_cost",
        "unit_of_measurement": "€/kWh",
        "friendly_name": "Emhass load cost"},
      "custom_unit_prod_price_id": {
        "entity_id": "sensor.emhass_prod_price",
        "unit_of_measurement": "€/kWh",
        "friendly_name": "Emhass production price"}
    }

Time Zone not taken from secrets.yaml in Docker

Describe the bug
The time zone settings do not seem to be taken from secrets.yaml when using the Docker image from Docker Hub. If I manually add it to the Docker parameters, it works.

To Reproduce
Set the time zone only in the YAML file, then compare with also passing it as a Docker parameter.

Expected behavior
The time zone should be taken from the YAML file.

Home Assistant installation type

  • Home Assistant Core on Synology Docker

Your hardware

  • OS: Synology

EMHASS installation type

  • Docker from hub

Feature request: Let users change the default entity name "sensor.p_deferrableX" to a friendly name

It would be nice if users could change the default entity name "sensor.p_deferrableX" to, for instance, a friendly name like "sensor.heating_cable_kitchen". I think it can be done if the entity has a unique ID.

This entity ('sensor.p_deferrable0') does not have a unique ID, therefore its settings cannot be managed from the UI. See the [documentation](https://www.home-assistant.io/faq/unique_id) for more detail.

Feature Request: smooth allocation of power to continuous loads

I have three continuous loads in my configuration.

P_deferrable2 (EV charger) can take values between 0W to + 11 kW
P_deferrable3 (HVAC) can take values between 0W to 4 kW
P_batt which can take values from -15 kW to + 15 kW

(Screenshot attached: 2023-04-10 10:18:32)

As you can see in the graph above, the allocation of power to these loads is very lumpy between time slots.

For example, rather than the EV charging at [21530,11500,2173,11500], could this be smoothed to [6831, 6831, 6831, 6831], i.e. the same in aggregate?

Similarly for battery charging: currently [0, -15000, 0, -15000], could this be smoothed to [-7500, -7500, -7500, -7500]?

(Screenshot attached: 2023-04-10 10:29:00)

The benefits of a smoothed allocation are that all continuous loads receive an ongoing proportion of power rather than starting and stopping on alternating time slots (useful for the battery and HVAC), and that if things change, such as the EV departing, it will already have received some power allocation earlier in the day.
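One way to express the requested behaviour outside the optimizer is a post-processing step that spreads each load's total energy evenly across its slots while respecting its nominal power. This is only an illustrative sketch (the function name and interface are invented here, not part of EMHASS):

```python
def smooth_allocation(schedule, p_nom):
    """Spread the total energy of a per-slot power schedule evenly,
    as long as the even share fits under the load's nominal power.

    schedule: list of per-slot power values in W
    p_nom:    maximum power the load can draw in W
    """
    total = sum(schedule)
    n = len(schedule)
    even = total / n
    if abs(even) <= p_nom:
        return [even] * n
    # Even share exceeds the power cap; fall back to the original plan.
    return schedule

# Lumpy EV schedule -> flat schedule with the same aggregate energy
print(smooth_allocation([0, 11500, 0, 11500], p_nom=11500))
# [5750.0, 5750.0, 5750.0, 5750.0]
```

Inside the LP itself, the equivalent idea would be a penalty term on the slot-to-slot power differences in the objective, which trades a little cost optimality for smoother profiles.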

Error: KeyError: 'list_hp_periods' when running optimization with CSV option

When I trigger an optimization, I get the following error logged in the Docker container's output:

[2022-10-01 11:25:58,714] INFO in command_line: Setting up needed data
[2022-10-01 11:25:58,715] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "src/emhass/web_server.py", line 134, in action_call
    input_data_dict = set_input_data_dict(config_path, str(config_path.parent), costfun,
  File "/usr/local/lib/python3.8/site-packages/emhass-0.3.19-py3.8.egg/emhass/command_line.py", line 42, in set_input_data_dict
    retrieve_hass_conf, optim_conf, plant_conf = utils.get_yaml_parse(config_path, params=params)
  File "/usr/local/lib/python3.8/site-packages/emhass-0.3.19-py3.8.egg/emhass/utils.py", line 239, in get_yaml_parse
    optim_conf['list_hp_periods'] = dict((key,d[key]) for d in optim_conf['list_hp_periods'] for key in d)
KeyError: 'list_hp_periods'

I'm running EMHASS in a standalone Docker container. I'm loading the solar forecast from Solcast manually using a script and storing that forecast in a CSV file in the /data folder. I have two sites set up that need to be combined, and I could only add one Solcast site ID. I have confirmed that there is a CSV file in the /data folder with the forecast data.

There is also a script running that downloads the electricity prices from my electricity provider and stores them in two CSV files in the same /data folder: one for the production prices and one for the load costs.

In the /data folder I have these CSV files:

  • data_load_cost_forecast.csv
  • data_load_forecast.csv
  • data_prod_price_forecast.csv
  • data_pv_power_forecast.csv
  • data_weather_forecast.csv

Some content from data_pv_power_forecast.csv

2022-10-02 06:00:00+02:00,0
2022-10-02 06:30:00+02:00,0
2022-10-02 07:00:00+02:00,0
2022-10-02 07:30:00+02:00,0.048100000000000004
2022-10-02 08:00:00+02:00,0.31379999999999997
2022-10-02 08:30:00+02:00,0.8818999999999999
2022-10-02 09:00:00+02:00,1.6953
2022-10-02 09:30:00+02:00,2.6088
2022-10-02 10:00:00+02:00,3.5980999999999996

Some content from data_prod_price_forecast.csv

2022-10-01 20:00:00+02:00,1.0199
2022-10-01 21:00:00+02:00,0.9753
2022-10-01 22:00:00+02:00,1.0066
2022-10-01 23:00:00+02:00,0.998
2022-10-02 00:00:00+02:00,0.9714

Based on the comment # list of different tariff periods (only needed if load_cost_forecast_method='hp_hc_periods') after list_hp_periods, this config option should not be needed. I've tried commenting out that line and I still get the KeyError about list_hp_periods.

I haven't changed what's in list_hp_periods; it's still the same as in the default config that came with the EMHASS download.

My full config_emhass.yaml file:

# Configuration file for EMHASS

retrieve_hass_conf:
  - freq: 5 # The time step to resample retrieved data from hass in minutes
  - days_to_retrieve: 8 # We will retrieve data from now and up to days_to_retrieve days
  - var_PV: 'sensor.total_current_solar_production_watts' # Photovoltaic produced power sensor in Watts
  - var_load: 'sensor.total_household_consumption' # Household power consumption sensor in Watts (deferrable loads should be subtracted)
  - load_negative: False # Set to True if the retrieved load variable is negative by convention
  - set_zero_min: True # A special treatment for a minimum value saturation to zero. Values below zero are replaced by nans
  - var_replace_zero: # A list of retrieved variables for which we want to replace nans with zeros
    - 'sensor.total_current_solar_production_watts'
  - var_interp: # A list of retrieved variables for which we want to interpolate nan values using linear interpolation
    - 'sensor.total_current_solar_production_watts'
    - 'sensor.total_household_consumption'
  - method_ts_round: 'nearest' # Set the method for timestamp rounding, options are: first, last and nearest

optim_conf:
  - set_use_battery: True # consider a battery storage
  - delta_forecast: 1 # days
  - num_def_loads: 0
  - P_deferrable_nom: # Watts
    - 3000.0
    - 750.0
  - def_total_hours: # hours
    - 5
    - 8
  - treat_def_as_semi_cont: # treat this variable as semi continuous 
    - True
    - True
  - set_def_constant: # set as a constant fixed value variable with just one startup for each 24h
    - False
    - False
  - weather_forecast_method: 'scrapper' # options are 'scrapper' and 'csv'
  - load_forecast_method: 'naive' # options are 'csv' to load a custom load forecast from a CSV file or 'naive' for a persistence model
  - load_cost_forecast_method: 'csv' # options are 'hp_hc_periods' for peak and non-peak hours contracts and 'csv' to load custom cost from CSV file 
  - list_hp_periods: # list of different tariff periods (only needed if load_cost_forecast_method='hp_hc_periods')
    - period_hp_1:
      - start: '02:54'
      - end: '15:24'
    - period_hp_2:
      - start: '17:24'
      - end: '20:24'
  - load_cost_hp: 0.1907 # peak hours load cost in €/kWh (only needed if load_cost_forecast_method='hp_hc_periods')
  - load_cost_hc: 0.1419 # non-peak hours load cost in €/kWh (only needed if load_cost_forecast_method='hp_hc_periods')
  - prod_price_forecast_method: 'csv' # options are 'constant' for constant fixed value or 'csv' to load custom price forecast from a CSV file
  - prod_sell_price: 0.065 # power production selling price in €/kWh (only needed if prod_price_forecast_method='constant')
  - set_total_pv_sell: False # consider that all PV power is injected to the grid (self-consumption with total sell)
  - lp_solver: 'PULP_CBC_CMD' # set the name of the linear programming solver that will be used
  - lp_solver_path: 'empty' # set the path to the LP solver

plant_conf:
#   - P_grid_max: 17000 # The maximum power that can be supplied by the utility grid in Watts
#   - Pd_max: 1000 # If your system has a battery (set_use_battery=True), the maximum discharge power in Watts
#   - Pc_max: 1000 # If your system has a battery (set_use_battery=True), the maximum charge power in Watts
#   - eta_disch: 0.95 # If your system has a battery (set_use_battery=True), the discharge efficiency
#   - eta_ch: 0.95 # If your system has a battery (set_use_battery=True), the charge efficiency
#   - Enom: 5000 # If your system has a battery (set_use_battery=True), the total capacity of the battery stack in Wh
#   - SOCmin: 0.3 # If your system has a battery (set_use_battery=True), the minimum allowable battery state of charge
#   - SOCmax: 0.9 # If your system has a battery (set_use_battery=True), the maximum allowable battery state of charge
#   - SOCtarget: 0.6 # If your system has a battery (set_use_battery=True), the desired battery state of charge at the end of each optimization cycle

Feature Request: Provide all sensors / columns in Home Assistant

Describe the bug
As described under https://community.home-assistant.io/t/emhass-an-energy-management-for-home-assistant/338126/445?u=smi, it would be awesome to have all sensors / columns shown in EMHASS available as sensors in Home Assistant: not only those that are currently available (like the deferrable loads), but also unit_load_cost and unit_prod_price.

To Reproduce
None

Expected behavior
No issue, just a request

Screenshots
not applicable

Home Assistant installation type
Applies to all

Your hardware
Applies to all

EMHASS installation type
Applies to all

Additional context
None

0.2.23 AttributeError: module 'numpy' has no attribute 'typeDict'

Describe the bug
0.2.23 fails to start.

Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 14, in <module>
    from emhass.command_line import set_input_data_dict
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 13, in <module>
    from emhass.forecast import forecast
  File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 10, in <module>
    import pvlib
  File "/usr/local/lib/python3.9/dist-packages/pvlib/__init__.py", line 3, in <module>
    from pvlib import (  # noqa: F401
  File "/usr/local/lib/python3.9/dist-packages/pvlib/clearsky.py", line 14, in <module>
    import h5py
  File "/usr/lib/python3/dist-packages/h5py/__init__.py", line 21, in <module>
    from . import _debian_h5py_serial as _h5py
  File "/usr/lib/python3/dist-packages/h5py/_debian_h5py_serial/__init__.py", line 46, in <module>
    from ._conv import register_converters as _register_converters
  File "h5py/_debian_h5py_serial/_conv.pyx", line 1, in init h5py._debian_h5py_serial._conv
  File "h5py/_debian_h5py_serial/h5t.pyx", line 293, in init h5py._debian_h5py_serial.h5t
  File "/usr/local/lib/python3.9/dist-packages/numpy/__init__.py", line 284, in __getattr__
    raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'typeDict'

To Reproduce
Upgraded to 0.2.23

Expected behavior
Functional as per past version


Home Assistant installation type

  • Home Assistant Supervised

Your hardware

  • OS: Debian
  • Architecture: aarch64

EMHASS installation type

  • Add-on

Additional context

'last_window' has missing values error after HASS restart

After a HASS restart, I am seeing the following error when I run the predict and publish actions for the new ML Forecaster. The Fit and Tune steps are successful.

2023-06-14 08:45:33,181 - web_server - INFO - Setting up needed data
2023-06-14 08:45:33,182 - web_server - INFO - Retrieve hass get data method initiated...
2023-06-14 08:45:34,437 - web_server - INFO -  >> Performing a machine learning forecast model predict...
2023-06-14 08:45:34,442 - web_server - ERROR - Exception on /action/forecast-model-predict [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.9/dist-packages/emhass/web_server.py", line 216, in action_call
    df_pred = forecast_model_predict(input_data_dict, app.logger)
  File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 361, in forecast_model_predict
    predictions = mlf.predict(data_last_window)
  File "/usr/local/lib/python3.9/dist-packages/emhass/machine_learning_forecaster.py", line 210, in predict
    predictions = self.forecaster.predict(steps=self.num_lags,
  File "/usr/local/lib/python3.9/dist-packages/skforecast/ForecasterAutoreg/ForecasterAutoreg.py", line 637, in predict
    check_predict_input(
  File "/usr/local/lib/python3.9/dist-packages/skforecast/utils/utils.py", line 620, in check_predict_input
    raise ValueError(
ValueError: `last_window` has missing values.

Comments from David over in HASS community:

This is because there are NaNs in your data, probably caused by the HA restart, and they are not treated for the prediction and the last window of data needed for that prediction. They are only treated for fit and tune. This should be fixed, and a linear interpolation is a good solution for small windows of missing values. However, if you have too many missing values it can lead to unexpected results, but I guess that is the best we can do in that case.
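The interpolation David describes can be sketched with pandas; the series below is illustrative, standing in for the last window of data that skforecast's predict refuses when it contains NaNs:

```python
import numpy as np
import pandas as pd

# A last window with gaps, e.g. caused by a Home Assistant restart
last_window = pd.Series(
    [450.0, np.nan, np.nan, 520.0, 510.0],
    index=pd.date_range("2023-06-14 08:00", periods=5, freq="30min"),
)

# Linear interpolation fills small interior gaps;
# forward/backward fill cover gaps at the window edges
cleaned = last_window.interpolate(method="linear").ffill().bfill()
print(cleaned.isna().any())  # False
```

As David notes, this is only reliable for small gaps: interpolating over long outages fabricates data and can skew the prediction.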

Feature request: Log message in emhass about incomplete cost price data

Nordpool sometimes has missing cost data for tomorrow.
(Screenshot of the Nordpool sensor attached.)

When trying to pass the incomplete cost data as a list from the command line, the log says:

2022-08-28 08:38:29.481 DEBUG (MainThread) [homeassistant.components.shell_command] Stdout of command: `curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim`, return code: 0:
b'HTTP/1.1 400 BAD REQUEST\r\nContent-Length: 167\r\nContent-Type: text/html; charset=utf-8\r\nDate: Sun, 28 Aug 2022 06:38:29 GMT\r\nServer: waitress\r\n\r\n<!doctype html>\n<html lang=en>\n<title>400 Bad Request</title>\n<h1>Bad Request</h1>\n<p>The browser (or proxy) sent a request that this server could not understand.</p>\n'
2022-08-28 08:38:29.481 DEBUG (MainThread) [homeassistant.components.shell_command] Stderr of command: `curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast": {{ ((( state_attr('sensor.nordpool', 'raw_today') + state_attr('sensor.nordpool', 'raw_tomorrow')) |map(attribute='value')|list)[:48]) }}' http://localhost:5000/action/dayahead-optim`, return code: 0:
b'  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r100   501  100   167  100   334  29896  59792 --:--:-- --:--:-- --:--:--   97k\n'

Is there any way EMHASS can log a message about missing cost data for tomorrow?
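Until such a log message exists, a client-side guard can catch the problem before EMHASS answers with a bare 400. This is a hypothetical pre-check, not an existing EMHASS feature: it verifies the list has the expected number of slots before posting.

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("emhass_precheck")

def check_cost_forecast(values, expected_len=48):
    """Warn when the cost forecast is shorter than expected,
    e.g. when Nordpool has not yet published tomorrow's prices."""
    if not isinstance(values, list) or len(values) < expected_len:
        logger.warning(
            "Incomplete cost data: got %s values, expected %s",
            len(values) if isinstance(values, list) else "non-list",
            expected_len,
        )
        return False
    return True

# Only today's 24 hourly prices are available
print(check_cost_forecast([0.21] * 24))  # False
```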

FR: Return forecast values to home assistant sensor entity attributes

A feature request.

The forecast values are currently written to the .csv file within the file system, but are unable to be accessed by home assistant.

Would it be possible to include the forecasts as attributes of the sensor entities published to Home Assistant?

This would enable the creation of the same chart of values as in the web UI, but in Lovelace using the apexcharts-card.

For example, the SolCast integration posts its forecast PV through a series of sensors and attributes, and my energy provider likewise provides future price forecasts in entity attributes; visualized in apexcharts, these make clean charts. (Screenshots attached.)

Add single configuration to prefix all published sensors

Describe the feature request
I want to be able to configure a sensor prefix for all sensors published to HA from EMHASS.

Expected behavior
In the config file, or as a parameter in the call to the publish-data endpoint, a prefix string can be configured that will be prepended to all sensors published to Home Assistant.

E.g. configuring the string "emhass_" as the prefix would automatically name all published sensors in the form sensor.emhass_<sensor_name>.

The reason for this request is that it would make the EMHASS-specific sensors much easier to find in HA. It is also more future-proof, since any sensors added later would be automatically prefixed, making updates to the documentation and the publish-data endpoint less critical for those who rename published sensors just to add a prefix. As an example to prove my point, we have gotten two new parameters, unit_load_cost and unit_prod_price; both are undocumented in the https://emhass.readthedocs.io/en/latest/intro.html#the-publish-data-specificities section and, afaik, have no custom renaming features yet.
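A sketch of how such a prefix could be applied when building entity IDs (the function and the prefix value are hypothetical, not part of EMHASS):

```python
def apply_prefix(entity_id, prefix):
    """Insert a configured prefix after the 'sensor.' domain,
    e.g. 'sensor.p_deferrable0' -> 'sensor.emhass_p_deferrable0'."""
    domain, _, name = entity_id.partition(".")
    return f"{domain}.{prefix}{name}"

print(apply_prefix("sensor.p_deferrable0", "emhass_"))
# sensor.emhass_p_deferrable0
```

Applied at a single point where EMHASS publishes to Home Assistant, this would cover current sensors and any added in the future.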
