
pyscript's Introduction

Pyscript: Python Scripting for Home Assistant


This HACS custom integration allows you to write Python functions and scripts that can implement a wide range of automation, logic and triggers. State variables are bound to Python variables and services are callable as Python functions, so it's easy and concise to implement logic.

Functions you write can be configured to be called as a service or to run on time, state-change or event triggers. Functions can also call any service, fire events and set state variables. Functions can sleep or wait for additional state changes or events without slowing or affecting other operations. You can think of these functions as small programs that run in parallel, independently of each other, and they can remain active for extended periods of time.
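As an illustrative sketch of that model (the entity names are made up; the @service and @state_trigger decorators and the task, light and log builtins are provided by pyscript, so this only runs inside Home Assistant):

```python
# Illustrative pyscript file, e.g. <config>/pyscript/example.py.
# All decorators and builtins here (service, state_trigger, task,
# light, log) are injected by pyscript; the entities are made up.

@service
def flash_porch_light(duration=5):
    """Callable from HASS as the pyscript.flash_porch_light service."""
    light.turn_on(entity_id="light.porch")
    task.sleep(duration)   # sleeping here does not block other triggers
    light.turn_off(entity_id="light.porch")

@state_trigger("binary_sensor.front_door == 'on'")
def door_opened():
    log.info("front door opened")
    flash_porch_light(duration=2)
```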

Pyscript also provides a kernel that interfaces with the Jupyter front ends (e.g., notebook, console, lab and VSCode). That allows you to develop and test pyscript code interactively. You can also interact with much of HASS by looking at state variables, calling services, etc.

Documentation

Here is the pyscript documentation.

For more information about the Jupyter kernel, see the README. There is also a Jupyter notebook tutorial, which can be downloaded and run interactively in a Jupyter notebook connected to your live HASS with pyscript.

Installation

Option 1: HACS

Under HACS -> Integrations, select "+", search for pyscript and install it.

Option 2: Manual

From the latest release, download the zip file hass-custom-pyscript.zip:

cd YOUR_HASS_CONFIG_DIRECTORY    # same place as configuration.yaml
mkdir -p custom_components/pyscript
cd custom_components/pyscript
unzip hass-custom-pyscript.zip

Alternatively, you can install the current GitHub master version by cloning and copying:

mkdir SOME_LOCAL_WORKSPACE
cd SOME_LOCAL_WORKSPACE
git clone https://github.com/custom-components/pyscript.git
mkdir -p YOUR_HASS_CONFIG_DIRECTORY/custom_components
cp -pr pyscript/custom_components/pyscript YOUR_HASS_CONFIG_DIRECTORY/custom_components

Install Jupyter Kernel

Installing the Pyscript Jupyter kernel is optional. The steps to install and use it are in this README.

Configuration

  • Go to the Integrations menu in the Home Assistant Configuration UI and add Pyscript Python scripting from there. Alternatively, add pyscript: to <config>/configuration.yaml. Pyscript has two optional configuration parameters: allow_all_imports, which allows any Python package to be imported, and hass_is_global, which exposes hass as a global variable. Both default to false:
    pyscript:
      allow_all_imports: true
      hass_is_global: true
  • Add files with a suffix of .py in the folder <config>/pyscript.
  • Restart HASS.
  • Whenever you change a script file, call the pyscript.reload service.
  • Watch the HASS log for pyscript errors and logger output from your scripts.

Contributing

Contributions are welcome! You are encouraged to submit PRs, bug reports, feature requests or add to the Wiki with examples and tutorials. It would be fun to hear about unique and clever applications you develop. Please see this README for setting up a development environment and running tests.

Even if you aren't a developer, please participate in our discussions community. Helping other users is another great way to contribute to pyscript!

Copyright

Copyright (c) 2020-2023 Craig Barratt. May be freely used and copied according to the terms of the Apache 2.0 License.


pyscript's Issues

Feature Request: access entity attributes as object attributes

I have pyscript app YAML like this:

- app: calc_setpoint
  sensor_temp: climate.downstairs
  sensor_temp_attr: current_temperature

Notice sensor_temp_attr, which is optional, as consumed by this code:

    sensor_temp_sensor = data.get('sensor_temp')
    if sensor_temp_sensor is None:
        log.error(f'{climate_entity}: sensor_temp is required')
        return

    sensor_temp_attr = data.get('sensor_temp_attr')
    if sensor_temp_attr is None:
        sensor_temp = float(state.get(sensor_temp_sensor))
    else:
        sensor_temp = float(state.get_attr(sensor_temp_sensor)[sensor_temp_attr])

If entity attributes were accessible as object attributes my code would be simpler:

- app: calc_setpoint
  sensor_temp: climate.downstairs.current_temperature

    sensor_temp_sensor = data.get('sensor_temp')
    if sensor_temp_sensor is None:
        log.error(f'{climate_entity}: sensor_temp is required')
        return

    sensor_temp = float(state.get(sensor_temp_sensor))

For other uses, it would also make state_trigger expressions easier:

@state_trigger('climate.downstairs.current_temperature > 75')

vs (and I don't even know if this code works):

@state_trigger("state.get_attr('climate.downstairs')['current_temperature'] > 75")

If there is a namespace collision issue at the root of the state variable, this syntax would be second best:

climate.downstairs.attributes.current_temperature

poorly formed state_trigger makes endless loop

In testing #56 a typo ran me into a weird issue:

@state_trigger('input_boolean.test_1 == "on"', state_hold=3)
@task.unique('test1')
def test():
    log.info('test_1 is on for 3')
    task.wait_until(state_trigger="input_boolean.test_2 = 'on'", state_hold=3, state_check_now=True)
    log.info('test_2 is on for 3')

Notice my task.wait_until improperly uses assignment in state_trigger instead of evaluation (single '=' instead of double '==').

Despite the task.unique and several reloads of pyscript, input_boolean.test_2 stayed on regardless of my attempts to turn it off. I had to restart Home Assistant.

I'm not sure if a user error like this is detectable or preventable, but I imagine it'll bite someone else at some point.
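A user error like this is at least partly detectable: trigger strings are Python expressions, and compiling one in eval mode rejects an assignment statement up front. A minimal sketch of such a pre-flight check (validate_trigger is a hypothetical helper, not a pyscript API):

```python
def validate_trigger(expr):
    """Return None if expr is a valid Python expression, else the error text."""
    try:
        compile(expr, "<state_trigger>", "eval")
        return None
    except SyntaxError as exc:
        return str(exc)

# a comparison is a valid expression
assert validate_trigger("input_boolean.test_2 == 'on'") is None
# the buggy single '=' is an assignment, rejected at compile time
assert validate_trigger("input_boolean.test_2 = 'on'") is not None
```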

Feature Request: Add logging for life-cycle changes

I find myself using the following pattern to make it easier to grep through logs to see if the script has been loaded:

@time_trigger
def run_on_startup_or_reload():
    log.info(f"Loading {__name__}")

It would be great if the "core" would do that on my behalf to avoid code duplication. I have a locally added info-level message under __init__.load_scripts, but I'm not sure if that's really the correct place to add such. Ideas/opinions?

error in star zip

tuples = [(1, 2), (3, 4), (5, 6)]
a, b = zip(*tuples)

gives

Exception in <jupyter_3> line 2:
    a, b = zip(*tuples)
                ^
TypeError: cannot unpack non-iterable object

however a, b should be ((1, 3, 5), (2, 4, 6)).
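For reference, under plain CPython the same two lines do transpose the pairs:

```python
tuples = [(1, 2), (3, 4), (5, 6)]
a, b = zip(*tuples)   # zip(*...) transposes a list of pairs
assert (a, b) == ((1, 3, 5), (2, 4, 6))
```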

Feature Request: Declare new restorable state from pyscript

It is currently possible to declare new state from pyscript very easily like some_domain.abc = 'def', even if some_domain or some_domain.abc does not yet exist.

This is very helpful for scripts that want to define their own state. However these states are not automatically persisted when home assistant restarts. A workaround is for the user to manually precreate a input_text.abc helper then manipulating that from pyscript.

However this is rather inconvenient because for every state variable the user has to perform some manual tasks.

So it would be nice if it was somehow possible to define restorable states from pyscript.
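For concreteness, the workaround described above looks roughly like this in pyscript (entity names are illustrative, the input_text.abc helper must be pre-created in Home Assistant, and this only runs inside the pyscript sandbox):

```python
# Persist a value via a pre-created input_text helper;
# input_text helpers survive a Home Assistant restart.
input_text.set_value(entity_id="input_text.abc", value="def")

# Read it back later (e.g., in a startup-triggered function).
restored = state.get("input_text.abc")
```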

PyCharm IDE inspection warnings

Since the refactoring away from instancing State, Handler and the other classes, my PyCharm IDE warns about two inspections, and I think it's right to do so:

  • Usually first parameter of a method is named 'self'
    To fix this, the @staticmethod decorator should be used (explanation)
  • Unresolved attribute reference 'notify' for class 'State'
    Class variables are defined and initialized in class scope (Python docs).
    This would also do away with the init() class method, as it is not needed anymore. Instead of init(), just set the hass class variable.

import file in pyscript directory

I've tried several different methods of importing another file in the pyscript directory.

pyscript/
  test.py
  followme.py

In test.py I have:

import followme
from . import followme
from .followme import follow_me

I even tried making it a package:

pyscript/
  test.py
  followme/
    __init__.py

In all cases, I'm unable to import the file.

I tried with allow_all_imports set to true as well.

Is there a way to do this?

registered_triggers is not defined in new app

Using the code below for a new app, no matter what I try I can't stop it from throwing an error on registered_triggers:

registered_triggers = []

# start the app
@time_trigger('startup')
def personTrackerStartup():
    loadApp('location', makePersonTracker)


def makePersonTracker(config):
    global registered_triggers
    personID = config['person']
    personName = personID.split('.')[1]

    @task_unique(f'{personName}_tracker')
    @state_trigger(f'{personID} != {personID}.old')
    def tracker(value=None):
        return

        
    # register to global scope
    registered_triggers.append(tracker) # registered_triggers.append(tracker): NameError: global name 'registered_triggers' is not defined


def loadApp(app_name, factory):
    if 'apps' not in pyscript.config:
        return
    
    if app_name not in pyscript.config['apps']:
        return

    for app in pyscript.config['apps'][app_name]:
        factory(app)

I followed the Wiki for the structure but there's a good chance I'm doing something wrong.

functions evaluated before they are called?

def both(msg):
    a(msg)
    b(msg)

def a(msg):
    log.info(f'in a {msg}')

def b(msg):
    log.info(f'in b {msg}')

both('test')

The above does not work in pyscript; however, it works in regular Python.

Exception in </config/pyscript/test.py> line 1:
    def both(msg):
    ^
SyntaxError: no binding for nonlocal 'a' found

Moving the declaration of "both" down below "a" and "b" makes this work. However, mutually recursive functions like these wouldn't work:

def show_hashes(num):
    hashes = ["#" for i in range(0,num)]
    log.info("".join(hashes))
    num -= 1
    if num <= 0:
        return
    show_dots(num)

def show_dots(num):
    dots = ["." for i in range(0,num)]
    log.info("".join(dots))
    num -= 1
    if num <= 0:
        return
    show_hashes(num)

show_dots(6)
show_hashes(6)

There are, of course, ways to rewrite this example to make it work, but I imagine there are some non-trivial cases where rewriting would be difficult.
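The original ordering is legal in regular Python because function bodies resolve names at call time, not at definition time; a condensed, runnable version of the first report:

```python
results = []

def both(msg):
    # 'a' and 'b' are looked up when both() runs, so defining
    # them later in the file is fine in regular Python
    a(msg)
    b(msg)

def a(msg):
    results.append(f"in a {msg}")

def b(msg):
    results.append(f"in b {msg}")

both('test')
assert results == ["in a test", "in b test"]
```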

Possible to access the old state attributes?

I have a state trigger for something I wanted to do that should look like this but doesn't work:

homeTheaterTvId = "media_player.tv"

@state_trigger(f'{homeTheaterTvId}.old.source != "HDMI-5"')
def doSomething():
   ...

I know this doesn't work, since .old is a str, but how would I do something like this? Is it possible to access the old state attributes?

unrelated: pyscript is fantastic and has helped me get into the automation game with such ease!

"old" doesn't work in state_trigger

pyscript:

@state_trigger("input_boolean.test_1 == 'on' and input_boolean.test_1.old == 'off'")
def report_old():
    log.info('report with old ON')

Triggering this results in the following error:

2020-10-05 05:59:40 ERROR (MainThread) [custom_components.pyscript.file.test.report_old] Exception in <file.test.report_old @state_trigger()> line 1:
    input_boolean.test_1 == 'on' and input_boolean.test_1.old == 'off'
                                     ^
AttributeError: 'str' object has no attribute 'old'

HA never restarts with pyscript activated

I'm running HA 0.116.3 in a python3.8 venv on Ubuntu 18.04.
I installed pyscript via HACS and tried both options, the UI integration and pyscript: in configuration.yaml.

The first script runs very well and I like it.
BUT
I cannot restart HA anymore.
It looks like HA stops, because there are no new lines in home-assistant.log, but the screen session where the hass process runs shows tons of tracebacks and I have to kill the process.

After removing pyscript from the integrations or from configuration.yaml, HA restarts normally.

...
Update for sensor.memory_use_percent fails
Traceback (most recent call last):
  File "/srv/ha/lib/python3.8/site-packages/homeassistant/helpers/entity.py", line 278, in async_update_ha_state
    await self.async_device_update()
  File "/srv/ha/lib/python3.8/site-packages/homeassistant/helpers/entity.py", line 471, in async_device_update
    await self.hass.async_add_executor_job(self.update)  # type: ignore
  File "/srv/ha/lib/python3.8/site-packages/homeassistant/core.py", line 350, in async_add_executor_job
    task = self.loop.run_in_executor(None, target, *args)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 783, in run_in_executor
    executor.submit(func, *args), loop=self)
  File "/usr/lib/python3.8/concurrent/futures/thread.py", line 179, in submit
    raise RuntimeError('cannot schedule new futures after shutdown')
RuntimeError: cannot schedule new futures after shutdown
(the identical RuntimeError traceback repeats for sensor.swap_use_percent, sensor.processor_use, media_player.mpd, weather.dark_sky, switch.433_keller, switch.lueften_keller, switch.playstation4, switch.easyvdr_sound, switch.beaglebone, switch.e3dc and switch.terra_nas)
...

Feature Request: service call contexts

Right now, when a pyscript performs a service call, the LogBook has an entry like this:

HallDown Occupied turned off by service input_boolean.turn_off

However, when Home Assistant Automations perform an action, the LogBook looks like this:

halldown_overhead turned off by halldown_occupied_off

It does this through a context object:

        service_task = self._hass.async_create_task(
            self._hass.services.async_call(
                domain,
                service,
                service_data,
                blocking=True,
                context=self._context,
                limit=limit,
            )
        )

pyscript can create one of these with something like:

from homeassistant.core import Context

# context comes from the state_trigger, time_trigger, etc
parent_id = None if context is None else context.id
new_context = Context(parent_id=parent_id)

I can't find much documentation on how to create a context and name the pieces so they show up well in the LogBook, but I think this is a start anyway. I think even this small bit of code would make the logbook show that, for instance, the "binary_sensor.dark" in my @state_trigger is what called input_boolean.turn_on. And even this is better than what we have now.

The final bit would be to get LogBook to show that binary_sensor.dark activated my pyscript called "halldown_at_dark", and that "halldown_at_dark" is what called input_boolean.turn_on.

Feature Request: Automatic reloading for existing scripts?

While a Jupyter notebook is a great place for doing the initial development, at some point you want to move your automations out of it. It would be great if (at least the existing) scripts were automatically reloaded on change, e.g., by using inotify listeners on the existing script files and calling reload() when a modification is detected.

My personal development setup looks like this: I'm using notebooks to do the initial development, and as soon as I'm confident that things are mostly working, I'll move the code over into a new script. For development, I'm using pycharm which I have configured to do an auto-deployment on the production instance whenever a file gets saved. In such cases, the ability to avoid doing a manual reload service call would simplify the workflow.

task.unique and task.sleep interaction bug

@state_trigger("input_boolean.test_office_motion == 'on'")
def office_occupied():
    task.unique('office_occupied')
    log.info('office_occupied running')
    state.set('binary_sensor.test_office_occupied', 'on', {'held': 'yes'})
    task.wait_until(state_trigger="input_boolean.test_office_motion == 'off'")
    state.set('binary_sensor.test_office_occupied', 'on', {'held': 'no'})
    task.sleep(5)
    state.set('binary_sensor.test_office_occupied', 'off')

To see this in action, turn the input_boolean on and see the binary_sensor turn on. Turn the input_boolean off, then back on again, and wait 5 seconds. The binary_sensor will turn off, but task.unique should have prevented that.

It seems like the sleeping task is not killed by task.unique, so it still runs to completion.

Feature Request: pyscript/ subdirectories

In line with recent changes to master to allow for "apps" and YAML based configuration, allowing subdirectories inside of pyscript/ would make keeping separate applications under revision control much easier.

cd pyscript/
git clone http://github.com/dlashua/occupancy_manager.git

which would create files like:

pyscript/
  occupancy_manager/
    occupany_manager.py
    README.md
    configuration.sample.yaml

and allow commands like:

git pull
git commit
git push

This also lines pyscript applications up to be included in HACS (with HACS cooperation, of course) the same way that AppDaemon, NetDaemon, and python_script apps are included in the "Automations" section.

module changes are not loaded on pyscript.reload

at pyscript/modules/test_module.py I have:

def foo():
    return "foo v1"

at pyscript/module_test.py I have:

import test_module

@time_trigger('startup')
def do_this():
    log.info(test_module.foo())

On Home Assistant start, this acts as expected and logs "foo v1".

Then I change test_module.py to return "foo v2". Then I reload pyscript. It still logs "foo v1".

When I restart Home Assistant, it logs "foo v2" as expected.
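In plain Python this stale-module behavior comes from the import cache: a repeated import returns the entry in sys.modules, and importlib.reload is what re-executes the source. A standalone demonstration of the mechanism (this is generic Python, not a claim about how pyscript's loader is implemented):

```python
import importlib
import sys
import tempfile
from pathlib import Path

sys.dont_write_bytecode = True   # avoid stale .pyc interference in this demo

# create a throwaway module on disk
moddir = Path(tempfile.mkdtemp())
(moddir / "demo_module.py").write_text("def foo():\n    return 'foo v1'\n")
sys.path.insert(0, str(moddir))

import demo_module
assert demo_module.foo() == "foo v1"

# change the file; a plain re-import still serves the cached module
(moddir / "demo_module.py").write_text("def foo():\n    return 'foo v2'\n")
import demo_module
assert demo_module.foo() == "foo v1"

# reload re-executes the source and picks up the change
importlib.reload(demo_module)
assert demo_module.foo() == "foo v2"
```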

Feature Request: Reusable Functions

I tend to reuse the same logic for various rooms, the only difference is the entities that get watched and acted on. This is also VERY useful for sharing "apps" that are configurable.

However, I can't seem to find a nice way to make code reusable in pyscript.

For instance, this was the closest I could come, but it still doesn't work:

def follow_me(leader=None,followers=[]):
    log.error('creating trigger for {}'.format(leader))

    @state_trigger("True or {}".format(leader))
    def follow_me_trigger():
        log.error('follow me triggered for {}'.format(leader))
        value = state.get(leader)
        if value == 'on':
            for item in followers:
                homeassistant.turn_on(entity_id=item)
        else:
            for item in followers:
                homeassistant.turn_off(entity_id=item)

    return follow_me_trigger

follow_me('input_boolean.test_1', ['input_boolean.test_2'])
follow_me('input_boolean.test_3', ['input_boolean.test_4'])

Is there a recommended way to do this already? If not, would you consider adding such a feature?

Cannot reference closure in while loop with `task.unique`

When I run the following script, I get an error when I try to reference a closure defined after task.unique is called.

Exception in <file.test_svc.test_svc> line 19:
        method_post_unique()
        ^
NameError: name 'method_post_unique' is not defined

@service
def test_svc(entity_id=None):
  if entity_id is None:
    log.error(f"No entity id passed")
    return

  def method_pre_unique():
    return "pre-unique"

  log.debug(f"pre-unique for {entity_id}")
  task.unique(f"test_svc|{entity_id}")
  log.debug(f"post-unique for {entity_id}")

  def method_post_unique():
    return method_pre_unique() + " post-unique"

  cnt = 0
  def inc_and_test():
    method_post_unique()
    method_pre_unique()
    cnt += 1
    return cnt < 5

  while inc_and_test():
    log.debug(f"running loop {cnt} {method_post_unique()} {method_pre_unique()}")

    task.sleep(0.1)

add function 'entity_ids'

Right now I cannot figure out how to get a list of all defined entities.

Maybe there could be a function

def entity_ids(domain: str = None) -> List[str]:
    """Returns a list of all entity_ids in a domain. If None, all entity_ids are returned."""

prevent overwriting entity variables

I just accidentally did

light.living_room_lights = "off"

and now my lights in Home Assistant became unavailable.

Maybe it's an idea to not allow overwriting entities?

The same goes for state, service, task, etc.

TypeError: 'EvalFuncVar' object is not callable with Voluptuous

This code:

import voluptuous as vol

class IntOrError:
    def __init__(self):
        pass

    def __call__(self, x):
        if isinstance(x, int):
            return x
        raise Exception('Not an int')

SCHEMA = vol.Schema({
    vol.Required('a'): IntOrError(),
    vol.Required('b'): int,
})


def configme(config):
    config = SCHEMA(config)
    doprint(config)


def doprint(x):
    try:
        log.info(x)
    except:
        print(x)


configme({
    "a": 1,
    "b": 2,
})

Produces this error:

2020-10-24 08:28:50 ERROR (MainThread) [custom_components.pyscript.file.broken] Exception in </config/pyscript/broken.py> line 19:
        config = SCHEMA(config)
                        ^
TypeError: 'EvalFuncVar' object is not callable

I tried adding __call__ to EvalFuncVar (just duplicating the existing call method), but that results in a coroutine being passed.

I've also tried writing the IntOrError class as a method and as a lambda, with the same results.

Detected I/O inside the event loop

Hi
What about this warning in the log? Is it my fault?

WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue to the custom component author for pyscript doing I/O at custom_components/pyscript/eval.py, line 1143: return func(*args, **kwargs)
Thank you

Feature Request: state_check_now kwarg for state_trigger

I use this pattern often:

@state_trigger((
    "float(sensor.master_temperature) <= float(climate.hvac_up.temperature)"
    " and input_select.master_mode == 'sleep'"
))
def turn_on_heater():
    log.info('Turning on Heater')
    switch.turn_on(entity_id="switch.master_heater")

The issue is, I'd also like that same check to be performed at startup (i.e. when the trigger is registered). I can't just use @time_trigger('startup') because the conditions in the state_trigger won't be checked. I can't just change @state_trigger to @state_active because then it'll ONLY trigger at startup. So, instead, to accomplish what I want, I have to have a state_trigger, a time_trigger, and a state_active.

Adding state_check_now to @state_trigger would allow me to keep the code short and reduce redundancy.

Create Unique Task Decorator

The convention of declaring task.unique('function_name') seems a bit non-pythonic.

I feel like it might be better to create a @unique decorator that ensures that the method it wraps is only run once.

Could even add a param that will control what that means:

  • @unique or @unique("restart") would kill the previous run
  • @unique("queue") would enqueue subsequent calls.

I'm sure there are more cases, just spit-balling.
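
As a sketch of what "restart" semantics could look like, here is a hypothetical @unique decorator in plain asyncio (the names unique and _running are invented for illustration; this is not pyscript's API):

```python
import asyncio
import functools

_running = {}  # name -> currently running asyncio.Task

def unique(name):
    """Hypothetical @unique decorator: cancel the previous run ("restart" mode)."""
    def wrap(func):
        @functools.wraps(func)
        async def inner(*args, **kwargs):
            prev = _running.get(name)
            if prev is not None and not prev.done():
                prev.cancel()  # kill the previous run of this named task
            task = asyncio.ensure_future(func(*args, **kwargs))
            _running[name] = task
            return await task
        return inner
    return wrap

results = []

@unique("job")
async def job(tag, delay):
    await asyncio.sleep(delay)
    results.append(tag)

async def main():
    first = asyncio.ensure_future(job("first", 0.2))
    await asyncio.sleep(0.05)   # let "first" start sleeping
    await job("second", 0.05)   # cancels "first", then runs
    assert first.done()         # "first" never reached its append

asyncio.run(main())
print(results)  # ['second']
```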

Feature Request: reload single file/app

When developing a new pyscript (without Jupyter) it is often necessary to restart pyscript to load changes made to the new script. If one has many scripts with informational startup messages, parsing the logs becomes tedious. Having the ability to restart only ONE script would be useful in this scenario.

pyscript.reload could take an optional 'script' argument that should be populated with "file.test" or "app.test" to indicate what to reload.

Alternatives:

  • User should make startup less verbose on other scripts.
  • User should use Jupyter for script development.

Feature Request: cleaner @state_trigger('True or ... ')

This syntax is a bit hacky looking and hard for new users to follow:

@state_trigger('True or binary_sensor.dark')

There's no reason to make the above syntax NOT work, but I think a convenience decorator for this use pattern might be beneficial. Something like this:

@state_change_trigger('binary_sensor.dark')

@state_change_trigger([
  'binary_sensor.dark',
  'binary_sensor.motion'
])

Though, naming things is hard. There's probably something better than state_change_trigger.

Open Discussion: How to handle package installations for duplicate but differing requirements

One limitation of the current implementation of package installation is that two different requirements.txt files can list the same package with version requirements that may not be compatible (e.g. pkg==0.0.1 and pkg==1.0.0). The package will then be installed multiple times, in the order the files were parsed, which could lead to issues down the line. What should the right behavior be? There are a couple of decisions to make:

  1. Do we try to resolve this at all or do we let it happen with a caveat in the documentation?
  2. If we try to resolve this, we can make a best effort to find a version that matches all of the requirements, but if we can't, does the newer version or the older version take precedence?

The step of finding a version that matches all of the requirements would normally be achieved by passing a constraints file to pip, but we are using Home Assistant's installation mechanism, which uses its own constraints file, so that's not an option for us. We would essentially have to implement our own version of this, which we may be able to do with pkg_resources; I just haven't had a chance to look into it yet. I wanted to get your thoughts, @craigbarratt, before I spend any time on this.
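
As a rough illustration of the detection half of this (not a resolver), here is a sketch that flags packages pinned to different versions across multiple requirements files. It handles == pins only; real resolution would need full specifier logic (e.g. via the packaging library):

```python
def find_conflicts(requirement_files):
    """Map package -> set of pinned versions, keeping only conflicts ('==' pins only)."""
    pins = {}
    for lines in requirement_files:
        for line in lines:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            pkg, ver = line.split("==", 1)
            pins.setdefault(pkg.strip().lower(), set()).add(ver.strip())
    return {pkg: vers for pkg, vers in pins.items() if len(vers) > 1}

# Two apps pinning the same package differently:
conflicts = find_conflicts([["pkg==0.0.1"], ["pkg==1.0.0", "other==2.0"]])
print(sorted(conflicts["pkg"]))  # ['0.0.1', '1.0.0']
```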

tox.ini missing

I am trying to implement some new functionality which requires me to run the tests.

However, it seems like the Tox configuration file is not committed.

accessing state last_updated and last_changed

Hi there.
I'm fairly new to HASS as well as Python, so forgive me if it's a stupid question.

I can't find a way to get the full state from an entity. Specifically, I want to find out how long a sensor has been on or off; for that I need something like the timeSinceChangedMs value I can get from Node-RED.

I tried get_state, but it only shows the current state...
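
For reference, once the full State object is available (in Home Assistant core it exposes a last_changed timestamp), a timeSinceChangedMs equivalent is plain datetime arithmetic. A sketch, assuming you can obtain last_changed as a timezone-aware datetime:

```python
from datetime import datetime, timedelta, timezone

def time_since_changed_ms(last_changed, now=None):
    """Milliseconds elapsed since a state's last_changed timestamp."""
    if now is None:
        now = datetime.now(timezone.utc)
    return (now - last_changed).total_seconds() * 1000.0

changed = datetime(2020, 10, 1, 12, 0, 0, tzinfo=timezone.utc)
print(time_since_changed_ms(changed, changed + timedelta(seconds=90)))  # 90000.0
```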

0.12: Requirements for pyscript not found: ['zmq==0.0.0']

I just updated and pyscript fails to load with the following error.

2020-08-24 10:18:54 ERROR (MainThread) [homeassistant.config] Package pyscript setup failed. ->
  Integration pyscript Requirements for pyscript not found: ['zmq==0.0.0']. (See /config/integrations/pyscript.yaml:2). 

await wasn't used with future

Code:

In a Class:

    def get_unique_id(self):
        return f'occupancy_manager_{self.state_entity}'

Error:

2020-10-11 06:18:22 ERROR (MainThread) [custom_components.pyscript.apps.occupancy_manager.startup_trigger] Exception in <apps.occupancy_manager.startup_trigger> line 113:
            return f'occupancy_manager_{self.state_entity}'
                                        ^
RuntimeError: await wasn't used with future

Please let me know if you need to see more of the class code.

Feature Request: use state_trigger with a passed function

I find I use this pattern quite a bit (untested):

registered_triggers = []

class FollowMe:
    def __init__(self, config):
        self.config = self.validate_config(config)

        @state_trigger(f'True or {self.config["input"]}')
        def state_trigger_update():
            self.update()

        registered_triggers.append(state_trigger_update)

    def validate_config(self, config):
        # do better
        return config

    def update(self):
        if state.get(self.config["input"]) == 'on':
            homeassistant.turn_on(entity_id=self.config['output'])
        else:
            homeassistant.turn_off(entity_id=self.config['output'])

It would be cleaner if I could do this:

class FollowMe:
    def __init__(self, config):
        pyscript.state_trigger(f'True or {self.config["input"]}')(self.update)

The pyscript.state_trigger method could be named something else if that made more sense. It could also take the function as a parameter (named or positional) if that's better. And this same functionality would be useful on all the decorators.

I can implement something similar in code myself like this (untested):

registered_triggers = []

def make_state_trigger(trigger, cb):
    @state_trigger(trigger)
    def inner_state_trigger():
        cb()
    
    registered_triggers.append(inner_state_trigger)

class FollowMe:
    def __init__(self, config):
        self.config = self.validate_config(config)

        make_state_trigger(
          f'True or {self.config["input"]}',
          self.update
        )

... It's just more readable, useful, and documented if it's built-in.

It would also be amazing if these triggers were removable. Something like this:

registered_triggers = {}

def make_state_trigger(trigger, cb):
    # make some unique ID in a better way
    unique_id = time.time()

    @state_trigger(trigger)
    def inner_state_trigger():
        cb()
    
    registered_triggers[unique_id] = inner_state_trigger

    def cancel_trigger():
        del registered_triggers[unique_id]

    return cancel_trigger


# in some class
cancel = make_state_trigger('domain.entity == "on"', self.update)

# later
cancel()

no way to prevent state.set from removing existing attributes

Hi,

Using state.set("domain.entity", "new state") removes existing attributes. It is possible to supply an attribute dict, but there is no way to collect all of the existing attributes from a state; state.get() only supports pulling a single attribute by name.

Also, state.get() does not return the full State object, only the state as a string or the attribute value.

As a possible way to fix this, I did some changes:

  • Add a states object, known from template extensions, as the new way to do simple state retrievals
  • Add a new method get_new that returns the actual State object, and remap state.get() to it
  • Add a new method set_new that preserves existing attributes by default, but also allows them to be updated or removed

The PR will follow soon.
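
The preserve-by-default behaviour described above amounts to merging the existing attribute dict with the updates. A minimal sketch of that merge (merge_attributes is a hypothetical name, not the proposed API):

```python
def merge_attributes(existing, updates=None, remove=()):
    """Preserve existing attributes, applying updates and dropping named keys."""
    merged = dict(existing)          # never mutate the live attribute dict
    merged.update(updates or {})
    for key in remove:
        merged.pop(key, None)
    return merged

old = {"friendly_name": "Living Room", "brightness": 128, "color_temp": 370}
new = merge_attributes(old, {"brightness": 255}, remove=["color_temp"])
print(new)  # {'friendly_name': 'Living Room', 'brightness': 255}
```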

wrong times for sunrise and sunset, when next occurrence is tomorrow

When the next sunset or sunrise is not on the same day but the following day, the datetime is adjusted to the next day, but today's time value is used.

Sample code:

import datetime as dt

@time_trigger("startup", "once(sunrise)", "once(sunset)")
def test_next_sunrise_tomorrow():
    print(dt.datetime.fromisoformat(sun.sun.next_rising).astimezone())
    print(dt.datetime.fromisoformat(sun.sun.next_setting).astimezone())

[custom_components.pyscript.trigger] trigger: got sunrise = 06:30:35 (t = 2020-09-09 06:30:35+02:00)
[custom_components.pyscript.trigger] trigger: got sunrise = 06:30:35 (t = 2020-09-09 06:30:35+02:00)
[custom_components.pyscript.trigger] trigger: got sunset = 19:36:24 (t = 2020-09-09 19:36:24+02:00)
[custom_components.pyscript.trigger] trigger: got sunset = 19:36:24 (t = 2020-09-09 19:36:24+02:00)
[custom_components.pyscript.trigger] trigger jupyter_2.test_next_sunrise_tomorrow time_next = 2020-09-10 06:30:35, now = 2020-09-09 23:03:19.136877
[custom_components.pyscript.trigger] trigger jupyter_2.test_next_sunrise_tomorrow waiting for 26835.863123 secs
[custom_components.pyscript.eval] jupyter_0.test_next_sunrise_tomorrow: calling fromisoformat("2020-09-10T04:32:14+00:00", {})
[custom_components.pyscript.eval] jupyter_0.test_next_sunrise_tomorrow: calling astimezone(, {})
[custom_components.pyscript.eval] jupyter_0.test_next_sunrise_tomorrow: calling print(2020-09-10 06:32:14+02:00, {})
[custom_components.pyscript.jupyter_0.test_next_sunrise_tomorrow] 2020-09-10 06:32:14+02:00
[custom_components.pyscript.eval] jupyter_0.test_next_sunrise_tomorrow: calling fromisoformat("2020-09-10T17:34:02+00:00", {})
[custom_components.pyscript.eval] jupyter_0.test_next_sunrise_tomorrow: calling astimezone(, {})
[custom_components.pyscript.eval] jupyter_0.test_next_sunrise_tomorrow: calling print(2020-09-10 19:34:02+02:00, {})
[custom_components.pyscript.jupyter_0.test_next_sunrise_tomorrow] 2020-09-10 19:34:02+02:00

variables undefined after task.sleep

This issue is dependent on ... something else ... that I can't quite figure out. It doesn't happen every time, so a simplified test doesn't show the issue immediately.

In a Class Method:

        start_sleep = time.monotonic()
        task.sleep(self.timeout)
        elapsed_sleep = time.monotonic() - start_sleep

Error:

2020-10-11 06:14:12 ERROR (MainThread) [custom_components.pyscript.apps.occupancy_manager.inner_occupied_states] Exception in <apps.occupancy_manager.inner_occupied_states> line 224:
            elapsed_sleep = time.monotonic() - start_sleep
                                               ^
NameError: name 'start_sleep' is not defined

task.unique with kill_me sometimes kills itself when it shouldn't

code:

import time

rt = []

class TaskUniqueTest:

    def kill_others(self):
        log.info(f'{self.name}: killing others')
        task.unique('task_unique_test')
        log.info(f'{self.name}: survived')

    def kill_myself(self):
        log.info(f'{self.name}: killing myself')
        task.unique('task_unique_test', kill_me=True)
        log.info(f'{self.name}: survived (there were no others)')

    def __init__(self, name, seconds):
        self.name = name
        self.seconds = seconds
        log.info(f'{self.name}: starting {__name__}')
        self.kill_others()

        @time_trigger('startup')
        def inner_startup():
            log.info(f'{self.name}: in inner_startup')
            log.info(f'{self.name}: sleeping {self.seconds}')
            start_time = time.time()
            task.sleep(self.seconds)
            elapsed_time = round((time.time() - start_time), 2)
            log.info(f'{self.name}: done sleeping {elapsed_time}/{self.seconds}')
            log.info(f'{self.name}: calling update')
            self.update()

        rt.append(inner_startup)

    def update(self):
        log.info(f'{self.name}: in update')
        self.kill_myself()
        log.info(f'{self.name}: i did not die')



@time_trigger('startup')
def startup():
    task.sleep(2)
    a = TaskUniqueTest('a', 10)
    task.sleep(2)
    b = TaskUniqueTest('b', 5)

logs:

2020-10-12 12:53:04 INFO (MainThread) [custom_components.pyscript.apps.for_test.startup] a: starting apps.for_test
2020-10-12 12:53:04 INFO (MainThread) [custom_components.pyscript.apps.for_test.startup] a: killing others
2020-10-12 12:53:04 INFO (MainThread) [custom_components.pyscript.apps.for_test.startup] a: survived
2020-10-12 12:53:04 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: in inner_startup
2020-10-12 12:53:04 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: sleeping 10
2020-10-12 12:53:06 INFO (MainThread) [custom_components.pyscript.apps.for_test.startup] b: starting apps.for_test
2020-10-12 12:53:06 INFO (MainThread) [custom_components.pyscript.apps.for_test.startup] b: killing others
2020-10-12 12:53:06 INFO (MainThread) [custom_components.pyscript.apps.for_test.startup] b: survived
2020-10-12 12:53:06 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: in inner_startup
2020-10-12 12:53:06 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: sleeping 5
2020-10-12 12:53:11 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: done sleeping 5.0/5
2020-10-12 12:53:11 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: calling update
2020-10-12 12:53:11 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: in update
2020-10-12 12:53:11 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: killing myself
2020-10-12 12:53:11 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: survived (there were no others)
2020-10-12 12:53:11 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] b: i did not die
2020-10-12 12:53:14 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: done sleeping 10.0/10
2020-10-12 12:53:14 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: calling update
2020-10-12 12:53:14 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: in update
2020-10-12 12:53:14 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: killing myself
2020-10-12 12:53:14 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: survived (there were no others)
2020-10-12 12:53:14 INFO (MainThread) [custom_components.pyscript.apps.for_test.inner_startup] a: i did not die

Since "b" killed others with task.unique() while "a" was sleeping, "a" should have been killed. However, "a" completes. Additionally, if "a" was still a running task, then when "b" killed itself with task.unique(kill_me=True), it should not have survived.

The expected outcome of the above is:
a starts
a calls task.unique which kills nothing
a sleeps
b starts
b calls task.unique which kills a
b sleeps
b completes

Feature Request: access to sunset/sunrise in function

This works, but it's complicated to write and follow:

@state_trigger('binary_sensor.dark == "on"')
@time_active("range(sunrise - 120min, sunset + 120min)")
def turn_on_dark():
    turn_on()

@state_trigger('binary_sensor.downstairs_occupied == "on" and binary_sensor.dark == "on"')
def turn_on_occupied():
    turn_on()

@state_trigger('binary_sensor.downstairs_occupied == "off"')
@time_active("range(sunset + 120min, sunrise - 120min)")
def turn_off_notoccupied():
    turn_off()

@state_trigger('binary_sensor.dark == "off"')
def turn_off_notdark():
    turn_off()

@time_trigger('startup')
@state_active('binary_sensor.dark == "on"')
@time_active("range(sunrise - 120min, sunset + 120min)")
def turn_on_startup_dark():
    turn_on()

@time_trigger('startup')
@state_active('binary_sensor.downstairs_occupied == "on" and binary_sensor.dark == "on"')
def turn_on_startup_occupied():
    turn_on()

@time_trigger('startup')
@state_active('binary_sensor.downstairs_occupied == "off"')
@time_active("range(sunset + 120min, sunrise - 120min)")
def turn_off_startup_notoccupied():
    turn_off()

@time_trigger('startup')
@state_active('binary_sensor.dark == "off"')
def turn_off_startup_notdark():
    turn_off()

I would prefer this:

import datetime

@time_trigger('startup')
@state_trigger('True or binary_sensor.dark or binary_sensor.downstairs_occupied')
def set_lights():
    today = datetime.datetime.today()
    start_time = today.replace(hour=5, minute=0) # would like sunrise - 120min
    end_time = today.replace(hour=22, minute=0) # would like sunset + 120min

    if binary_sensor.dark == "off":
        turn_off()
        return

    if start_time < datetime.datetime.now() < end_time:
        turn_on()
        return

    if binary_sensor.downstairs_occupied == "on":
        turn_on()
        return

    turn_off()

However, as you can see in the code comments, there's not an easy way to get to sunrise/sunset offsets.
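
One workaround, assuming the sun.sun entity's next_rising/next_setting attributes are available (ISO-format strings, as shown in other reports here): parse the timestamp and apply the offset with a timedelta. A sketch with a fixed example timestamp:

```python
from datetime import datetime, timedelta

# e.g. the value of sun.sun.next_rising (an ISO-format string)
next_rising = "2020-09-10T04:32:14+00:00"

sunrise = datetime.fromisoformat(next_rising)
start_time = sunrise - timedelta(minutes=120)   # sunrise - 120min
print(start_time.isoformat())  # 2020-09-10T02:32:14+00:00
```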

Feature Request: a decorator to declare a function/class as "native"

Would it be possible to use a decorator to indicate that a function/class should remain native (i.e. not "pyscript") so that it can be called from other native (pure python) code?

As an example of what I mean, in the below code, I'd much prefer to define PyscriptWatchdogEventHandler directly in watchdog.py with some kind of decorator on it to indicate that it shouldn't be "pyscriptified".

https://gist.github.com/dlashua/f7d88f9a5afdcf7af17ce24266925a0b

Related Question: The purpose of the above code is to automatically reload pyscript files when they change (because I'm lazy). I use an input_boolean to indicate this functionality should be enabled that I can turn off when I'm not developing to save resources. Have I missed a situation that could occur and leave me with memory leaks or other unintended effects? Is there a better way to do this?

state_trigger firing although value and old_value are the same.

This state_trigger keeps tripping even though the actual state doesn't appear to have changed. I will be at home and regularly/randomly get notifications on my phone from this function:

@state_trigger("device_tracker.my_iphone == 'home'")
def someone_arrived(**kwargs):
    log.info(f"Someone arrived, disarming system: {kwargs}")
    alarm_control_panel.alarm_disarm(entity_id="alarm_control_panel.ha_alarm")
    notify.mobile_app_my_iphone(title="Status", message="Someone arrived, disarming system.")

The logging above provides this output:

INFO (MainThread) [custom_components.pyscript.file.example.someone_arrived] Someone arrived, disarming system: {'trigger_type': 'state', 'var_name': 'device_tracker.my_iphone', 'value': 'home', 'old_value': 'home', 'context': Context(user_id=None, parent_id=None, id='xxxxxxxxxxxxxxxxxx')}

As you can see, both value and old_value are home; so I'm not sure exactly why this is happening.

My interim fix is to manually check value and old_value inside the function and return if they are equal.

Feature request: debounce decorator

Sometimes you will want to limit triggers to a specific frequency. Therefore I would like to suggest a decorator that provides this functionality.

It would be similar to the existing @time_active decorator, only it would check the time elapsed since the last time a task was called. If that time exceeds a provided threshold, the task is called. Otherwise it is skipped.

Something like this:

@state_trigger('some condition')
@debounce(millis=300)
def some_task():
    pass

Meaning that the task will only be executed if the last execution was more than 300ms ago.

I tried implementing this myself, only to find out that pyscript doesn't support decorators.
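
For what it's worth, outside pyscript's restrictions the decorator itself is small. A sketch of a time-based debounce in plain Python (names are illustrative):

```python
import functools
import time

def debounce(millis):
    """Skip calls that arrive within `millis` ms of the last executed call."""
    def wrap(func):
        last_run = None
        @functools.wraps(func)
        def inner(*args, **kwargs):
            nonlocal last_run
            now = time.monotonic()
            if last_run is not None and (now - last_run) * 1000.0 < millis:
                return None  # too soon; skip this call
            last_run = now
            return func(*args, **kwargs)
        return inner
    return wrap

calls = []

@debounce(millis=300)
def some_task(x):
    calls.append(x)

some_task(1)       # runs
some_task(2)       # skipped: within 300 ms of the last run
time.sleep(0.35)
some_task(3)       # runs again
print(calls)  # [1, 3]
```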

bug with list comprehension since 0.3

Updating to 0.3 I suddenly see this message in my error-logs
SyntaxError: no binding for nonlocal 'entity_id' found

Reproduce with:

def white_or_cozy(group_entity_id):
    entity_ids = state.get_attr(group_entity_id)['entity_id']
    attrs = [state.get_attr(entity_id) for entity_id in entity_ids]

task.sleep doesn't sleep long enough (in a class?)

This script:

registered_triggers = []

class SleepTest:

    def __init__(self, name, seconds):

        @time_trigger('startup')
        def inner_trigger():
            log.error(f"{name}: starting")

            log.error(f"{name}: sleeping {seconds}")
            task.sleep(seconds)
            log.error(f"{name}: done sleeping")

        registered_triggers.append(inner_trigger)

task.sleep(1)
SleepTest('short', 5)
SleepTest('long', 30)

This log:

2020-10-05 15:43:16 ERROR (MainThread) [custom_components.pyscript.file.sleeptest.inner_trigger] short: starting
2020-10-05 15:43:16 ERROR (MainThread) [custom_components.pyscript.file.sleeptest.inner_trigger] short: sleeping 5
2020-10-05 15:43:16 ERROR (MainThread) [custom_components.pyscript.file.sleeptest.inner_trigger] long: starting
2020-10-05 15:43:16 ERROR (MainThread) [custom_components.pyscript.file.sleeptest.inner_trigger] long: sleeping 30
2020-10-05 15:43:21 ERROR (MainThread) [custom_components.pyscript.file.sleeptest.inner_trigger] short: done sleeping
2020-10-05 15:43:26 ERROR (MainThread) [custom_components.pyscript.file.sleeptest.inner_trigger] long: done sleeping

Notice the timestamps: long did not sleep long enough.

Feature Request: methods on entity_ids

It would be a nice language feature if entity IDs could have service calls made against them as though they were real Python objects. When this syntax is invoked, it would result in a service call with the entity_id kwarg set to the entity_id it was called on.

Example Script that works today:

@state_trigger('input_boolean.test_1 == "on"')
def do_it():
  input_boolean.turn_on(entity_id="input_boolean.test_2")

Proposed Possible Syntax:

@state_trigger('input_boolean.test_1 == "on"')
def do_it():
  input_boolean.test_2.turn_on()

This should be enabled for all entity_ids in any domain:

climate.set_preset_mode(entity_id="climate.hvac", preset_mode="home")
climate.hvac.set_preset_mode(preset_mode="home")

input_number.set_value(entity_id="input_number.test", value="13")
input_number.test.set_value(value="13")

input_select.select_option(entity_id="input_select.dishwasher_status", option="clean")
input_select.dishwasher_status.select_option(option="clean")

This makes the code look cleaner and more readable. It also prevents the need to type "input_number" (for instance) twice, and prevents the need to type "entity_id" at all.

An even better extension of this proposal would require a hardcoded list to be kept and maintained in pyscript, since I don't believe this information is provided by HASS (though it might be detectable from the parameter information given when HASS enumerates available service calls). Service calls that require only two kwargs -- entity_id and one other thing -- could have the kwarg for that other thing hardcoded (or somehow detected). For instance, for input_number.set_value, the other kwarg is value. If pyscript knew this, then this syntax would be possible:

input_number.set_value(entity_id="input_number.test", value="13")
input_number.test.set_value(value="13")
input_number.test.set_value("13")

multiple time-based triggers and startup/reload combined

Hi,

first of all, thanks for the amazing component. It's exactly what I was looking for. No extra Docker container like AppDaemon, but also based on Python. Finally it is possible to properly compute in automations, thanks to the type-stable storage and retrieval of state attributes!

My current use-case is to have the brightness of a light be changed, depending on the time of day and state of the sun.

The ugly way:

binary_sensor:
 - platform: tod
   name: morning
   after: '08:00'
   before: '10:00'
 - platform: tod
   name: day
   after: '10:00'
   before: '18:00'
 - platform: tod
   name: evening
   after: '18:00'
   before: '21:00'

sensor:
  - platform: command_line
    name: light_brightness_cmd
    command: 'python3 /config/shell-scripts/set-brightness.py' # extreme customization needed for control
    value_template: "{{ value_json.info.brightnessValue|default('unavailable', false) }}"
    scan_interval: 30
  - platform: template
    sensors:
      light_brightness:
        value_template: "{{ states('sensor.light_brightness_cmd')|float }}"
        attribute_templates:
          brightness: "{{ states('sensor.light_brightness_cmd')|float }}"
          brightness_scaled: "{{ (states('sensor.light_brightness')|float * 100)|int }}"

And on top of that, there is an automation to determine the current time of day, assign and rescale the brightness value and, using another py-script, adjust it. It's way too complicated for the result.

My idea now was to use a time_trigger like
@time_trigger("once(08:00:00)", "once(10:00:00)", "once(18:00:00)", "once(21:00:00)", "once(sunrise)", "once(sunset)"), put whatever fancy logic and calculations in a set_daytime() function, and store the results as state & attributes in a dummy entity.

My problem now is that I would need set_daytime() to also be executed on reload and restart for proper initialization.

Any idea?

Feature Request: Support python package installation somehow

I love that pyscript lets you import any package to use in automations, but I am somewhat limited in the packages I can use because I run HomeAssistant in a Docker container, and I would have to either manually install the packages each time I rebuilt the container or create my own Dockerfile that did the same. It would be great if there was a mechanism through pyscript to install packages if they are not found. I am not sure what the best way to achieve this would be, there are two problems to consider:

  1. How to determine what packages need to be installed and their versions? HomeAssistant achieves this through the use of manifest.json which lists the dependent package and version.
  2. How to install the package? You can probably use the code here as a reference https://github.com/home-assistant/core/blob/dev/homeassistant/requirements.py#L105

Open questions:

  1. How would pyscript deal with version mismatches between what HomeAssistant expects and what pyscript wants? My take is that we should always defer to HomeAssistant requirements to prevent issues with the core application

Thanks for the consideration!
