
aiocoap's People

Contributors

aellwein, bergzand, chriscuts, chrysn, david-drinn, fabaff, geaaru, hrogge, kasroka, kb2ma, kxepal, lp-haw, martinhjelmare, michaelackermann, miri64, mwasilak, mwhudson, phillipberndt, pokgak, redferne, roysjosh, scop, thaulino, weili-jiang


aiocoap's Issues

coap-coap proxy server

aiocoap should ship everything needed to run a vanilla coap-coap proxy server out-of-the-box, which should then be usable for implementing custom proxies.

Run tests with all supported versions (drop Python 3.3 support)

If it isn't tested, it doesn't work. Bad enough that code coverage is nowhere near 100% yet, but Python 3.4 support also keeps breaking due to old syntax glitches, careless use of 3.5 idioms, and the like.

Mitigation:

  • Run CI on all supported versions
  • Drop Python 3.3 support, because while the GitLab CI runners do support it, I can't easily reproduce failures in development; no stable Debian release ever shipped that version.

(Those applications I had in mind that justified Python 3.3 support, PyPy and micropython, would need dedicated tests anyway. If you want to run aiocoap on a particular Python 3.3 setup, open an issue and we'll figure something out.)

clientGET-observe constantly sending updates

Is there any way to code the observe program so that it only sends updates on change? I'm sending data to the server every 10 minutes or so, and I only want it to update then. Currently I have edited the clientGET-observe.py program so that it gets the temperature topic, but it receives and loops through this data constantly, even when no update has been sent. Is there any way to change this? Thanks.
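In CoAP observe, notification timing is decided by the server, not the client: nothing goes on the wire until the server-side resource signals a state change (in aiocoap, roughly, by invoking its notification hook, e.g. updated_state()). So the place to fix this is the server: guard the notification with a change check instead of a timer. The guard itself is plain Python; `ChangeGate` is an illustrative name, not aiocoap API:

```python
class ChangeGate:
    """Report True only when the supplied value differs from the previous one.
    A server-side resource would trigger its observe notification (e.g.
    aiocoap's updated_state()) only when this returns True."""

    def __init__(self):
        self._last = object()  # sentinel: the first value always counts as a change

    def changed(self, value):
        if value == self._last:
            return False
        self._last = value
        return True

gate = ChangeGate()
print([gate.changed(v) for v in (21.5, 21.5, 22.0, 22.0)])  # [True, False, True, False]
```

With a guard like this in the resource's update path, observers only receive notifications when the temperature actually moves.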

All resources are visible to WK/C

draft-ietf-core-resource-directory-09 adds a Resource Directory (RD) to CoAP. It defines a number of resources, such as rd.core-lookup, that should be findable through /.well-known/core (WK/C), but also some, such as rd.core-lookup/res, that probably should not be. Remote resources registered with the RD almost certainly should not be findable via WK/C. I suggest the following changes to resource.py, adding another attribute to a resource; no changes to existing programs should be required. This conforms to RFC 6690 Section 4: "It is, however, up to the application which links are included and how they are organized."

def add_resource(self, path, resource, dont_add_to_WKC=False):
    # Parameter dont_add_to_WKC is to make resource invisible to WKC
    # Default is visible
    self._resources[tuple(path)] = resource
    if dont_add_to_WKC:
        # set a new attribute on the resource
        resource.dont_add_to_WKC = True

and

def get_resources_as_linkheader(self):
    import link_header

    links = []
    for path, resource in self._resources.items():
        # Skip resources flagged with dont_add_to_WKC
        if hasattr(resource, 'dont_add_to_WKC'):
            continue
        if hasattr(resource, "get_link_description"):
            details = resource.get_link_description()
        else:
            details = {}
        links.append(link_header.Link('/' + '/'.join(path), **details))
    return link_header.LinkHeader(links)

Called by

# visible to WK/C, standard use
root.add_resource(('rd-lookup',),
                  ResourceDirectoryLookup(resource_directory))
# not visible to WKC
root.add_resource(('rd-lookup','res'),
                  ResourceDirectoryLookupResources(resource_directory),
                  True)

New release on Pypi

Hello,

Would you mind creating a new release on Pypi? Installing with pip install now gives me version 0.2, which seems to be about a year old.

Regards,
Lars

Token change between each block2 part

I noticed that the token is increased for each request sent for a new block in a block2 transfer, and that the observation list contains the latest token. This causes problems with observe requests whose block2 responses come from Contiki, which uses the token from the initial observe subscription request in each response instead of the token from the last block transferred. From what I can tell from the RFC, the token is expected to be the same for each block in a GET block2 sequence.

MulticastRequest timeout does not put end on QueueWithEnd

In running:

client_context = yield from aiocoap.Context.create_client_context(loggername='client')
responsesFuture = client_context.multicast_request(message)
responses = []
while (yield from responsesFuture.can_peek()):
    r = responsesFuture.get_nowait()
    print('got future response: %s' % r)
    responses.append(r)
print('while done')

the print('while done') line is never reached, even after the _timeout() function has run.

If the queuewithend.py line 822 is changed to

asyncio.async(self.responses.finish())

the end is placed on the queue and the while loop terminates. Is this an issue?

Question about UDP transport

Hi @chrysn,

I am experimenting with tunneling CoAP packets through another transport protocol / framework. The question is: is it possible to completely disable the internal UDP transport and use another one instead?
P.S. I found the text dumper, which is also implemented as a transport, but is there a way to completely disable UDP?

client-side observations don't support renewal

observe-16 notes that clients can send a request inside an observation with the same token "to reinforce [their] interest in a resource".

without such a mechanism, clients will wait indefinitely for updates when the server goes away.

a first implementation should allow the requester to trigger such an action; when that is done we could think about having this done in a configurable interval starting after the last message.

the server side of this is currently untested, and should receive proper testing once the client can do that.

recognize ICMP errors

when a remote is not available at all, there is usually an ICMP response rather than no response at all, resulting in a quick error message (as opposed to the current behavior of timing out after a rather long time).

with client side code, things should in theory be easier because the socket could connect(2) and then receive errors, but the aiocoap implementation always uses unbound sockets that can be a server as well as a client (and in particular support being a client to two different endpoints at the same time).

similar to #8, there are two possible ways:

  • create a socket per transaction
  • convince the os to convey error information in recvmsg
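The connect(2) approach can be tried with a plain socket: once a UDP socket is connected, the kernel (at least on Linux) delivers an ICMP port-unreachable as an error on a subsequent socket operation. A small probe sketch; the port number is arbitrary and assumed closed:

```python
import socket

def probe(host, port, timeout=1.0):
    """Send one datagram over a connected UDP socket and report whether an
    ICMP error was surfaced, a reply arrived, or nothing happened."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((host, port))   # connect(2): ICMP errors now reach this socket
    try:
        s.send(b"probe")
        s.recv(16)            # the error from the send is usually reported here
        return "response"
    except ConnectionRefusedError:
        return "refused"      # ICMP port unreachable was delivered
    except socket.timeout:
        return "timeout"      # nothing came back (filtered, or silent peer)
    finally:
        s.close()

print(probe("127.0.0.1", 9))  # port 9 (discard) is usually closed
```

The unbound, dual-role sockets aiocoap uses don't get this error delivery, which is exactly the trade-off described above.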

Blockwise transfers not queued if conflicting

Simultaneous blockwise (Block1) requests to the same resource cannot be distinguished in CoAP and thus need to be serialized or to originate from different ports.

Neither happens, causing test_blockwise..test_sequential to fail.

Planned fix is the block handler noting in-progress transfers and queueing up (or maybe failing in case of NONs) the second request.

Missing test: Concurrent POSTs to a single resource must be serialized

Assuming a user were to concurrently start two POST requests to a single remote resource, the library would need to delay one of the requests until the other has completed.

aiocoap most probably does not delay the request at all right now (I haven't tested it yet), and there is certainly no test yet to ensure this works.
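One plausible shape for the fix is a per-resource `asyncio.Lock`. The sketch below (hypothetical names, not aiocoap internals) serializes coroutines that share a key, so a second POST only starts after the first has finished:

```python
import asyncio
from collections import defaultdict

class Serializer:
    """Serialize coroutines that target the same key (e.g. a resource path)."""

    def __init__(self):
        self._locks = defaultdict(asyncio.Lock)

    async def run(self, key, coro_factory):
        async with self._locks[key]:
            return await coro_factory()

async def demo():
    order = []
    s = Serializer()

    async def post(tag):
        order.append(("start", tag))
        await asyncio.sleep(0.01)  # stand-in for the network round trip
        order.append(("end", tag))

    await asyncio.gather(s.run("/res", lambda: post(1)),
                         s.run("/res", lambda: post(2)))
    return order

order = asyncio.run(demo())
print(order)  # [('start', 1), ('end', 1), ('start', 2), ('end', 2)]
```

A test for the library would assert exactly this interleaving: the second request must not start before the first completes.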

OSError: [WinError 10022] An invalid argument was supplied

Hi,
On Windows I get this error while running the example fileserver.py, or any other of the examples at http://aiocoap.readthedocs.io/en/latest/examples.html:

Traceback (most recent call last):
  File "fileserver.py", line 173, in <module>
    loop.run_until_complete(main(**vars(p.parse_args())))
  File "c:\program files (x86)\python35-32\Lib\asyncio\base_events.py", line 387, in run_until_complete
    return future.result()
  File "c:\program files (x86)\python35-32\Lib\asyncio\futures.py", line 274, in result
    raise self._exception
  File "c:\program files (x86)\python35-32\Lib\asyncio\tasks.py", line 239, in _step
    result = coro.send(None)
  File "fileserver.py", line 161, in main
    context = await aiocoap.Context.create_server_context(server)
  File "C:\Users\eremtas\Documents\Projects\AppIoT\aiocoap-101\venv\lib\site-packages\aiocoap\protocol.py", line 535, in create_server_context
    transport, protocol = yield from loop.create_datagram_endpoint(protofact, family=socket.AF_INET6)
  File "c:\program files (x86)\python35-32\Lib\asyncio\base_events.py", line 848, in create_datagram_endpoint
    sock, protocol, r_addr, waiter)
  File "c:\program files (x86)\python35-32\Lib\asyncio\selector_events.py", line 90, in _make_datagram_transport
    address, waiter, extra)
  File "c:\program files (x86)\python35-32\Lib\asyncio\selector_events.py", line 997, in __init__
    super().__init__(loop, sock, protocol, extra)
  File "c:\program files (x86)\python35-32\Lib\asyncio\selector_events.py", line 514, in __init__
    self._extra['sockname'] = sock.getsockname()
OSError: [WinError 10022] An invalid argument was supplied
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<FileServer.check_files_for_refreshes() running at fileserver.py:49> wait_for=<Future pending cb=[Task._wakeup()]>>

I don't get this error on Ubuntu. It could be something to do with asyncio.

Remi

Treat URI_PORT as a known option in proxy server

The current proxy server implementation replies with

4.02 Bad Option
Unsafe options in request: URI_PORT

when a client makes a request through it specifying the port it wants to use.

Is there any rationale behind treating URI_PORT as an unknown option in proxy.server.raise_unless_safe?

Adding URI_PORT to the set of known options lets the request reach the desired server and the final response arrive at the client.

Example:
# Server listening on 172.17.0.26:5683
./server.py
# Reverse Proxy listening on 172.17.0.46:6666, forwards requests to the server above
./coap-proxy --reverse --server-address 172.17.0.46 --server-port 6666 --pathbased "xyz":"172.17.0.26"
# Client on 172.17.0.27, requests to the proxy server
./coap-client -v coap://172.17.0.46:6666/xyz/time
4.02 Bad Option
Unsafe options in request: URI_PORT

Adding URI_PORT to the set of known options in proxy.server.raise_unless_safe lets the client get the expected reply from the server:

# Same setup as above
./coap-client.py -v coap://172.17.0.46:6666/xyz/time
2015-07-24 12:39
(No newline at end of message)

No module named 'link_header'

When I use pip to install the LinkHeader package, it says the requirement is already satisfied, yet I still get the error when I try to access /.well-known/core. Thanks in advance for any help.

observations do not complain about being orphaned

client-side observations can linger in the context without anybody listening to them, and without being cancelled.

when a request is sent that contains an observe option, but the .observe property of the request is never fetched, the observation should be freed by the garbage collector, and a warning should be issued about an uncancelled observation being deleted.
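The warn-on-deletion part can be sketched with a finalizer; `ClientObservation` here is a stand-in illustrating the pattern, not aiocoap's actual class:

```python
import warnings

class ClientObservation:
    """Stand-in sketch: warn if collected without ever being cancelled."""

    def __init__(self):
        self.cancelled = False

    def cancel(self):
        self.cancelled = True

    def __del__(self):
        if not self.cancelled:
            warnings.warn("observation was garbage-collected without being cancelled")

obs = ClientObservation()
del obs  # nobody fetched or cancelled it: a warning is emitted here
```

In CPython the refcount drop makes `__del__` run immediately, so the warning points close to the code that leaked the observation.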

observation change semantics

(original question by @ChangSam in #27; moving it here to answer it as it is not actually related to that issue)

I want to ask a question about CoAP observe; thanks for answering.
The IETF defines that an observer can subscribe to listen for "any change" in the state of an observable subject. This means a notification with the observed payload is sent when the state changes, or after a certain time (Max-Age) if no change has occurred. So if an observe server uses the current time as its observable resource, then by the IETF definition it would send a notification to the observer every second, because the time changes every second, rather than at the Max-Age interval.

Or can we just say that, because the time changes too fast, we only need to send a notification every so many seconds? Does that also follow the IETF observe rules?

Create interface for UDP6EndpointAddress & co

The current code accesses implementation details of UDP6EndpointAddress; this is partially due to no interface being defined for that, and details (sockaddr) not being private.

This first surfaced in #35, which is now fixed superficially. A proper solution would probably mean that the EndpointAddress class has its own _build_request_uri method, because not all URI schemes need to use host/port, but that's currently hardcoded in message.py.

Error using GET Client in loop. ("Too many open files")

Hi,

I'm working in the development of a Home Energy Controller, using Contiki OS with Zolertia Z1 motes and CoAP, and I have a problem using aiocoap GET client.

I would really appreciate it if you could help me by giving some clue about where to find a possible solution to this:

I'm calling the GET coroutine in a loop, and after 1016 iterations an exception is raised: "OSError: [Errno 24] Too many open files."

Thank you very much for your help.
Best regards,
Pablo Modernell.

This is a simplified version of my code, to reproduce the error:

def main():
    loop = asyncio.get_event_loop()
    measure = threading.Thread(target=call_loopMeasure, args=(loop,))
    measure.start()    

if __name__ == '__main__':
    main()

@asyncio.coroutine
def loopMeasure():
    if DEBUG: print("Starting Thread\n") 
    while True:
        for n in list.nl:
            try:
                med = yield from WSNCom.getMedidas(n)
            except WSNCom.CoapError:
                print("CoAP server of {0} UNREACHABLE.\n".format(str(n)))
                continue
            else:
                print("POWER {0} = {1}\n".format(str(n),med[2]))
                print("Measurement: "+str(med))
                print("Status: "+str(n._status))

def call_loopMeasure(loop):
    asyncio.set_event_loop(loop)
    loop.run_until_complete(loopMeasure())

This is the GET client code:

@asyncio.coroutine
def getMedidas(node):    
    try:
        protocol = yield from Context.create_client_context()
        request = Message(code=GET)
        request.set_request_uri('coap://['+str(node.ip)+']:5683/Reports/Measurements')
        response = yield from protocol.request(request).response
    except Exception as e:
        if DEBUG: print('Failed to fetch resource:')
        raise CoapError(node,e)
    else:
        return (response.payload).decode().split(",")
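The numbers point at a file-descriptor leak: getMedidas calls Context.create_client_context() on every request, each context opens a fresh UDP socket that is never closed, and the default per-process descriptor limit of 1024 matches the roughly 1016 successful iterations. The likely fix is to create the context once (e.g. in loopMeasure) and pass it into getMedidas. The leak pattern itself can be reproduced with plain stdlib sockets:

```python
import socket

# Anti-pattern: a new socket per iteration, never closed -- descriptors pile up.
leaky = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM) for _ in range(5)]
leaky_fds = {s.fileno() for s in leaky}

# Fix: create once, reuse across iterations -- a single descriptor.
shared = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
shared_fds = {shared.fileno() for _ in range(5)}

print(len(leaky_fds), len(shared_fds))  # 5 1

for s in leaky:
    s.close()
shared.close()
```

The same hoisting applies to the aiocoap code: `protocol = yield from Context.create_client_context()` belongs outside the loop, with `protocol` reused for every request.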

No module named 'IN'

  File "C:\Users\Administrator\Documents\python\aiocoap-master\aiocoap\transports\udp6.py", line 20, in <module>
    import IN
ImportError: No module named 'IN'

Handling out of range Block2 requests

I have a misbehaving client that sometimes continues to request blocks beyond the final block, and I'm hoping to return an error from aiocoap in this case. Currently it misinterprets the first out of range request as a new blockwise request and sends block 0.

In protocol.py, line 983 (https://github.com/chrysn/aiocoap/blob/master/aiocoap/protocol.py#L983), I see a comment about returning an error when a request with Block2 option and a non-zero block number is processed, but I'm having a hard time figuring out how to implement that. My naive attempt to check for this and return an error at the point in the function where that comment appears doesn't work; quite a few of the unit tests break.

Could you suggest a viable approach to fixing this issue?

Thanks.

Error assembling blockwise response, passing on error NotImplemented()

Reading from /.well-known/core on a Contiki (v2.7) device raises error.NotImplemented(). My guess is that something around Block2 is not implemented.

Error assembling blockwise response, passing on error NotImplemented()

Traceback (most recent call last):
  File "aiocoap_examples.py", line 31, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "/usr/lib64/python3.5/asyncio/base_events.py", line 387, in run_until_complete
    return future.result()
  File "/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib64/python3.5/asyncio/tasks.py", line 241, in _step
    result = coro.throw(exc)
  File "aiocoap_examples.py", line 21, in main
    response = await protocol.request(request).response
  File "/usr/lib64/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib64/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/home/gcerar/workspace/logatec2-tools/env35/lib64/python3.5/site-packages/aiocoap/protocol.py", line 742, in process_block2_in_response
    self._assembled_response._append_response_block(response)
  File "/home/gcerar/workspace/logatec2-tools/env35/lib64/python3.5/site-packages/aiocoap/message.py", line 257, in _append_response_block
    raise error.NotImplemented()
aiocoap.error.NotImplemented

Code where this raises:

import aiocoap as coap
import asyncio

# skipped config of address and port
async def main():
    uri = 'coap://[{}]:{}/.well-known/core'.format(address, port)

    protocol = await coap.Context.create_client_context()
    request = coap.Message(code=coap.GET, uri=uri)

    try:
        response = await protocol.request(request).response
    except Exception as e:
        print(e)
        raise
    else:
        print('Result: %s\n%r' % (response.code, response.payload))

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())

file-based demo server

a demo server based on files (like python -m http.server) would be useful for quick demos, and could be helpful to figure out how resources can be managed dynamically.

Server sends responses from different endpoints when different global IPv6 addresses are configured

When a server machine is configured with multiple IPv6 addresses, and an aiocoap server is bound to :: (which is currently the only way to get both v4 and v6 support in a single context), responses will always originate from the v6 address the kernel prefers for packets to their destination, regardless of the destination address of the request that elicited them.

As source addresses can't be set in DatagramTransport.sendto, this needs dedicated endpoints.
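"Dedicated endpoints" here means one socket bound per configured address, so each reply is guaranteed to leave from the address its request arrived on. The mechanism, shown with IPv4 loopback for portability (the IPv6 case is analogous):

```python
import socket

def bound_endpoint(local_addr, port=0):
    """A UDP socket bound to one specific local address: every datagram it
    sends carries that address as its source. A wildcard-bound socket instead
    lets the kernel pick the source address per destination."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((local_addr, port))
    return s

s = bound_endpoint("127.0.0.1")
addr = s.getsockname()[0]
print(addr)  # 127.0.0.1
s.close()
```

With one such socket per configured address, the server would answer each request on the very socket (and hence address) it arrived on.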

Microsoft Windows not fully supported

C:\home\aiocoap (master)
λ py -3 server.py
ERROR:asyncio:Task exception was never retrieved
future: <Task finished coro=<create_server_context() done, defined at C:\home\aiocoap\aiocoap\protocol.py:520> exception=OSError(10022, 'An invalid argument was supplied', None, 10022, None)>
Traceback (most recent call last):
  File "C:\Python34\lib\asyncio\tasks.py", line 238, in _step
    result = next(coro)
  File "C:\home\aiocoap\aiocoap\protocol.py", line 535, in create_server_context
    transport, protocol = yield from loop.create_datagram_endpoint(protofact, family=socket.AF_INET6)
  File "C:\Python34\lib\asyncio\base_events.py", line 744, in create_datagram_endpoint
    waiter)
  File "C:\Python34\lib\asyncio\selector_events.py", line 90, in _make_datagram_transport
    address, waiter, extra)
  File "C:\Python34\lib\asyncio\selector_events.py", line 983, in __init__
    super().__init__(loop, sock, protocol, extra)
  File "C:\Python34\lib\asyncio\selector_events.py", line 513, in __init__
    self._extra['sockname'] = sock.getsockname()
OSError: [WinError 10022] An invalid argument was supplied

and

  C:\home\aiocoap (master)
λ py -3 clientGET.py
Traceback (most recent call last):
  File "clientGET.py", line 34, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "C:\Python34\lib\asyncio\base_events.py", line 316, in run_until_complete
    return future.result()
  File "C:\Python34\lib\asyncio\futures.py", line 275, in result
    raise self._exception
  File "C:\Python34\lib\asyncio\tasks.py", line 238, in _step
    result = next(coro)
  File "clientGET.py", line 20, in main
    protocol = yield from Context.create_client_context()
  File "C:\home\aiocoap\aiocoap\protocol.py", line 510, in create_client_context
    transport, protocol = yield from loop.create_datagram_endpoint(protofact, family=socket.AF_INET6)
  File "C:\Python34\lib\asyncio\base_events.py", line 744, in create_datagram_endpoint
    waiter)
  File "C:\Python34\lib\asyncio\selector_events.py", line 90, in _make_datagram_transport
    address, waiter, extra)
  File "C:\Python34\lib\asyncio\selector_events.py", line 983, in __init__
    super().__init__(loop, sock, protocol, extra)
  File "C:\Python34\lib\asyncio\selector_events.py", line 513, in __init__
    self._extra['sockname'] = sock.getsockname()
OSError: [WinError 10022] An invalid argument was supplied

Evaluate usability of BERT as a model for blockwise interaction

The current code for handling blockwise is relatively complex, is not fully satisfying (#44, #45) and includes library-specific workarounds.

Modelling the process after the BERTs from draft-ietf-core-coap-tcp-tls-05 could clarify things -- a program would ask the library to exchange "jumbo" messages (even without explicit BERT flagging) that are always final (0, 0, 7). The library would then act exactly like a proxy towards the application (unless "hands off blocks" is requested).

Probably a race while retransmitting exchanges

Hi @chrysn,

while doing some performance tests with observation of about 3000 LWM2M (CoAP) clients against the server, I've encountered the following error:

2016-08-29 16:28:54,992 [ERROR] Exception in callback Context._schedule_retransmit.<locals>.retr() at /usr/local/lib/python3.5/site-packages/aiocoap-0.2-py3.5.egg/aiocoap/protocol.py:331
handle: <TimerHandle when=1745359.3182603796 Context._schedule_retransmit.<locals>.retr() at /usr/local/lib/python3.5/site-packages/aiocoap-0.2-py3.5.egg/aiocoap/protocol.py:331>
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/asyncio/events.py", line 125, in _run
    self._callback(*self._args)
  File "/usr/local/lib/python3.5/site-packages/aiocoap-0.2-py3.5.egg/aiocoap/protocol.py", line 337, in retr
    self._retransmit(message, timeout, retransmission_counter)
  File "/usr/local/lib/python3.5/site-packages/aiocoap-0.2-py3.5.egg/aiocoap/protocol.py", line 344, in _retransmit
    exchange_monitor, next_retransmission = self._active_exchanges.pop(key)
KeyError: (('::ffff:185.112.178.101', 5683, 0, 0), 28534)

About 6 of the 3000 clients failed to notify the server of observed changes. The error message always comes from the same line; only the keys differ. I've extracted them from the tracebacks:

KeyError: (('::ffff:185.112.178.101', 5683, 0, 0), 28534)
KeyError: (('::ffff:185.112.178.101', 5683, 0, 0), 27413)
KeyError: (('::ffff:185.112.178.101', 5683, 0, 0), 2950)
KeyError: (('::ffff:185.112.178.101', 5683, 0, 0), 7791)
KeyError: (('::ffff:185.112.178.101', 5683, 0, 0), 28881)
KeyError: (('::ffff:185.112.178.101', 5683, 0, 0), 13036)

I am not certain, but I think it could be a race condition when a key lookup fails (or maybe the exchange times out while so many clients are about to send their notifications to the server?).

Do you have any idea on this?

overhaul resource management

large parts of the way server-side resources are created come from some kind of twisted convention. that does not apply any more, and the interface should focus more on what a resource looks like in coap.

fixing this goes together well with issue #2.

Exception ignored in:

I'm using an async POST call. I get the correct response, but also "Exception ignored in:".

Full debug output:
user@instant-contiki:/git/coap_control$ python3 get3.py
DEBUG:asyncio:Using selector: EpollSelector
bytearray(b'\x00\x01\x01\xe8\x03')
DEBUG:coap:Sending message <aiocoap.Message at 0xb6c8776c: Type.CON POST (ID 18563, token b'\x00\x00\xa08') remote <UDP6EndpointAddress [fd00:c:1::2]:5683>, 2 option(s), 5 byte(s) payload>
DEBUG:coap:Exchange added, message ID: 18563.
DEBUG:coap.requester:Timeout is 93.0
DEBUG:coap.requester:Sending request - Token: 0000a038, Remote: <UDP6EndpointAddress [fd00:c:1::2]:5683>
DEBUG:coap:Incoming message <aiocoap.Message at 0xb6c8798c: Type.ACK EMPTY (ID 18563, token b'') remote <UDP6EndpointAddress [fd00:c:1::2]:5683 with local address>>
DEBUG:coap:New unique message received
DEBUG:coap:Exchange removed, message ID: 18563.
DEBUG:coap:Incoming message <aiocoap.Message at 0xb6c87c8c: Type.CON 2.05 Content (ID 38725, token b'\x00\x00\xa08') remote <UDP6EndpointAddress [fd00:c:1::2]:5683 with local address>, 5 byte(s) payload>
DEBUG:coap:New unique message received
DEBUG:coap:Received Response: <aiocoap.Message at 0xb6c87c8c: Type.CON 2.05 Content (ID 38725, token b'\x00\x00\xa08') remote <UDP6EndpointAddress [fd00:c:1::2]:5683 with local address>, 5 byte(s) payload>
DEBUG:coap:Sending message <aiocoap.Message at 0xb6c87e6c: Type.ACK EMPTY (ID 38725, token b'') remote <UDP6EndpointAddress [fd00:c:1::2]:5683 with local address>>
Result: 2.05 Content
b'\x00\x01\x00\x01\x00'
Exception ignored in: user@instant-contiki:~/git/coap_control$

This is the test program i'm using:

import logging
import asyncio
import struct
from aiocoap import *

logging.basicConfig(level=logging.DEBUG)

node_id = "1"
control_prefix = "fd00:c:"
node_interface_address = "::2"
NodeControlAddress = control_prefix + node_id + node_interface_address

@asyncio.coroutine
def main():
    protocol = yield from Context.create_client_context()

    payload = bytearray()
    payload.append(0)  # connector id
    payload.append(1)  # function id
    payload.append(1)  # num_of_args
    payload.extend(struct.pack("<H", 1000))

    print(repr((payload)))

    request = Message(code=POST, payload=payload, uri='coap://['+NodeControlAddress+']/wishful_funcs')

    try:
        response = yield from protocol.request(request).response
    except Exception as e:
        print('Failed to fetch resource:')
        print(e)
    else:
        print('Result: %s\n%r' % (response.code, response.payload))

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(main())

I found this.
http://stackoverflow.com/questions/29502779/aiohttp-exception-ignored-message
So something is still running?

No response to POST/PUT

Hi, this is more a query than an actual issue,

I have a use case where I send a lot of non-confirmable POSTs and PUTs to my aiocoap server. As described in sections 5.8.2 and 5.8.3 of the standard, the server SHOULD send a response -- and it does. However, in my case I would like the option to not send any responses. I was able to do this with an ugly hack -- do you have any suggestions on how I can achieve this gracefully?

Regards,
Andreas U

request with raising response

it would be nice if there was an easy way to do a request that results in an exception being raised if the response is not successful. aiocoap already has exceptions that represent a particular error code (eg. aiocoap.error.NoResource for 4.04 Not Found), but those are only used server-side.

such a feature would allow writing more straightforward code for simple cases, and would make it easier to tell expected functionality from error handling by moving the latter into except clauses.

possible ways are:

  • having a dedicated method
  • having an additional future next to .response

open questions are:

  • which objects are raised in the exception? can it be a subclass of Message like MessageThatIsAnError, or should it keep the response message in an attribute?
  • how to deal with successful states other than 2.05? (eg. 2.03 -- should there be an option to pass in an etag-to-payload mapping for 2.03 so we can get the validated payload easily? if yes, should the result be a faked 2.05, or should the raising interface directly return the payload?)

in all cases,

  • this must not be the only interface (because some uses want the bare answer)
  • both the constraints on the response and the exception must be tight (eg. "if there is a result, it is guaranteed to be of a successful response type; if there is an exception, it is either a response message of class 4.xx or 5.xx, or a dedicated 'something strange happened' (eg. response has a request code) error") so the user can rely on the error handling and does not need additional checks
  • the exceptions must carry their response codes in a class (no more except IOError as e: if e.errno != x: raise).
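A minimal sketch of the "dedicated method" option, assuming the usual aiocoap request flow (`context.request(msg).response`) and a `code.is_successful()` predicate; `request_raising` and `ResponseError` are hypothetical names, not library API:

```python
class ResponseError(Exception):
    """Carries the unsuccessful response message in an attribute."""

    def __init__(self, response):
        super().__init__("unsuccessful response: %s" % (response.code,))
        self.response = response

async def request_raising(context, msg):
    """Send msg and raise ResponseError unless the response code is successful."""
    response = await context.request(msg).response
    if not response.code.is_successful():
        raise ResponseError(response)
    return response
```

Keeping the message in an attribute (rather than subclassing Message) sidesteps the "can an exception be a Message?" question while still giving `except` clauses full access to the response.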

Different endpoints per context

The socket management functionality should be split out of the Context into one or more Endpoint objects that each could represent different kinds of sockets because of

  • #33
  • IPV6_V6ONLY being set on some systems (some BSDs can't turn it off)
  • TCP endpoints might come along sooner or later
  • This could allow spawning bound sockets for particular communication partners, helping with #9
  • It might help with #8
  • It could help in experimenting with more exotic transports (SMS; CoAP as serial protocol to a microprocessor maybe?)

It might be argued that some of those could just as well be handled by setting up different bound contexts, but then again,

  • something would need to manage the contexts to be spawned for every interface (and who would want to call that ContextManager?),
  • a requester would need to decide beforehand which endpoint to use, and
  • a proxy would possibly need to be introduced to several outgoing endpoints,

so I'd just keep the endpoint management in the contexts. How much of what the context currently does will migrate to the endpoints remains to be seen.

[SyntaxError: invalid syntax] yield from XYZ

I'm not very used to the latest Python installation processes, so I might have missed something during installation. I'm running Raspbian on my Pi and have cloned your repository; trying to run the server.py and clientGET.py examples I get the following errors.

@rpi-7 ~/applications/aiocoap $ ./server.py 
  File "./server.py", line 59
    yield from asyncio.sleep(3)
             ^
SyntaxError: invalid syntax
@rpi-7 ~/applications/aiocoap $ ./clientGET.py 
  File "./clientGET.py", line 20
    protocol = yield from Context.create_client_context()
                        ^
SyntaxError: invalid syntax
@rpi-7 ~/applications/aiocoap $ sudo ./server.py 
  File "./server.py", line 59
    yield from asyncio.sleep(3)
             ^
SyntaxError: invalid syntax

Any pointers are appreciated.
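`yield from` is syntax introduced in Python 3.3, so this error almost always means the scripts ran under the system's Python 2 interpreter (the default `python` on older Raspbian points at 2.7). Checking the interpreters explicitly:

```shell
python --version     # on older Raspbian this prints Python 2.7.x
python3 --version    # needs to be 3.3 or newer for `yield from`
```

Then run the examples as `python3 server.py` rather than `./server.py` (or adjust the shebang to `#!/usr/bin/env python3`).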

How to install aiocoap correctly? I installed aiocoap using pip, but I think it is not the newest version

I installed aiocoap using pip. When working through the usage examples, this error occurred:

class BlockResource(resource.Resource):
AttributeError: 'module' object has no attribute 'Resource'

Then I checked aiocoap.resource. The version I installed does not have this class; it only has three classes:

builtins.object
CoAPResource
LinkParam
Site

That's different from the GitHub version. I ran pip install -U aiocoap, but it still doesn't work.

My Python version: Python 3.4.2
My OS: OS X Yosemite, version 10.10.2

Thanks in advance!~


Update:
I uninstalled aiocoap and reinstalled it using easy_install, but the result is the same as before...
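One way to debug this (a generic Python trick, nothing aiocoap-specific) is to print where the interpreter actually finds the package, since a pip install and an easy_install install can shadow each other:

```python
# Show which installed copy of a package the interpreter would import.
# Substitute "aiocoap" for the module you are debugging; "json" is used
# here only so the snippet runs anywhere.
import importlib.util

spec = importlib.util.find_spec("json")
print(spec.origin if spec else "module not importable")
```

If the printed path points at an old egg or a stale site-packages copy, removing it (or installing the git checkout with `pip install .` from the repository root) makes the import match the GitHub version.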

28 tests failed with python 3.4.1

Hi,
I can install aiocoap without problems using the setup file, but when I try to test it I get 28 errors, all the same. The errors come from server.py or from tests that use server.py.
The end of the traceback is the same every time:

File "/usr/lib64/python3.4/asyncio/base_events.py", line 521, in create_datagram_endpoint
    transport = self._make_datagram_transport(sock, protocol, r_addr)
File "/home/H/Documents/stage/aiocoap/aiocoap/util/asyncio.py", line 105, in _new_mdt
    return RecvmsgSelectorDatagramTransport(self, sock, protocol, address, waiter, extra)
File "/home/H/Documents/stage/aiocoap/aiocoap/util/asyncio.py", line 32, in __init__
    super(RecvmsgSelectorDatagramTransport, self).__init__(*args, **kwargs)
TypeError: __init__() takes from 4 to 6 positional arguments but 7 were given

I'm using Python 3.4.1; asyncio and link_header are installed.
I'm on Fedora 21. Do you have an idea of what's happening?
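The traceback shows a subclass forwarding seven positional arguments to an asyncio base class whose `__init__` accepts fewer positionals on this interpreter (the private transport signatures changed between 3.4 point releases). A defensive pattern, sketched here with hypothetical stand-in classes rather than the real asyncio internals, is to forward by keyword:

```python
# Hypothetical sketch: forwarding by keyword instead of position keeps a
# subclass working when a base-class signature gains or loses parameters.
class BaseTransport:
    # Stands in for asyncio's private datagram transport base class.
    def __init__(self, loop, sock, protocol, address=None, extra=None):
        self.loop, self.sock, self.protocol = loop, sock, protocol
        self.address, self.extra = address, extra

class RecvmsgTransport(BaseTransport):
    def __init__(self, loop, sock, protocol, address=None, waiter=None, extra=None):
        # Pass only what the base class knows about, by name.
        super().__init__(loop, sock, protocol, address=address, extra=extra)
        self.waiter = waiter  # keep the argument the base class lacks

t = RecvmsgTransport("loop", "sock", "proto", waiter="done")
print(t.waiter)  # → done
```

With positional forwarding, the same call would raise exactly the "takes from 4 to 6 positional arguments but 7 were given" TypeError seen above.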

Assertion error when subsequently observing with same source port

A client can cause a server with an observable resource to crash with an assertion error. See the following gist: https://gist.github.com/bbc2/9509c171bc2b6e0fa01d.

While server.py is running, first run client.py, then stop it and run it a second time. The server will throw an exception when notifying the second observer with:

Exception in callback _SelectorDatagramTransport._read_ready()
handle: <Handle _SelectorDatagramTransport._read_ready()>
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/events.py", line 119, in _run
    self._callback(*self._args)
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 932, in _read_ready
    self._protocol.datagram_received(data, addr)
  File "/home/bertrand/util/pyenv/coap/lib/python3.4/site-packages/aiocoap-0.1_git-py3.4.egg/aiocoap/protocol.py", line 159, in datagram_received
    self._dispatch_message(message)
  File "/home/bertrand/util/pyenv/coap/lib/python3.4/site-packages/aiocoap-0.1_git-py3.4.egg/aiocoap/protocol.py", line 194, in _dispatch_message
    self._remove_exchange(message)
  File "/home/bertrand/util/pyenv/coap/lib/python3.4/site-packages/aiocoap-0.1_git-py3.4.egg/aiocoap/protocol.py", line 310, in _remove_exchange
    self._send(next_message, exchange_monitor)
  File "/home/bertrand/util/pyenv/coap/lib/python3.4/site-packages/aiocoap-0.1_git-py3.4.egg/aiocoap/protocol.py", line 439, in _send
    self._add_exchange(message, exchange_monitor)
  File "/home/bertrand/util/pyenv/coap/lib/python3.4/site-packages/aiocoap-0.1_git-py3.4.egg/aiocoap/protocol.py", line 270, in _add_exchange
    assert message.remote not in self._backlogs
AssertionError

I know the client should probably not reuse the source port, but this would be useful in my case. Even if this use case is invalid, I am not sure such an exception should occur.
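For illustration, the `assert message.remote not in self._backlogs` could in principle give way to backlog handling that tolerates a reappearing remote. The sketch below is hypothetical (simplified names, not aiocoap's actual internals):

```python
from collections import defaultdict, deque

# Hypothetical sketch: track per-remote exchanges and queue a message for
# a remote that is still busy, instead of asserting it is unknown.
class ExchangeTracker:
    def __init__(self):
        self._active = set()
        self._backlogs = defaultdict(deque)

    def add_exchange(self, remote, message):
        """Return True if the exchange starts now, False if it was queued."""
        if remote in self._active:
            self._backlogs[remote].append(message)  # queue, don't crash
            return False
        self._active.add(remote)
        return True

tracker = ExchangeTracker()
remote = ("192.0.2.1", 5683)  # a client reusing its source port
assert tracker.add_exchange(remote, "notify-1")
assert not tracker.add_exchange(remote, "notify-2")  # queued, no AssertionError
```

Whether port reuse is a client the server should accommodate or reject, a plain AssertionError in the datagram callback is arguably the wrong failure mode either way.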
