aiohttp's Issues

Traceback error with examples/srv.py and mpsrv.py

Hi,

With the Git version of aiohttp, the examples srv.py and mpsrv.py generate a traceback error at each request:

(venv-aiohttp-master)[lg@lg examples]$ ./srv.py 
serving on ('127.0.0.1', 8080)
method = 'GET'; path = '/'; version = (1, 1)
HOST 127.0.0.1:8080
CONNECTION keep-alive
CACHE-CONTROL max-age=0
ACCEPT text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
ACCEPT-ENCODING gzip,deflate,sdch
ACCEPT-LANGUAGE en-GB,en;q=0.8,en-US;q=0.6,fr;q=0.4
ACCEPT-CHARSET UTF-8,*;q=0.5
ERROR:root:Error handling request
Traceback (most recent call last):
  File "/home/lg/Documents/git/async/venv-aiohttp-master/lib/python3.4/site-packages/aiohttp-0.7.0b6-py3.4.egg/aiohttp/server.py", line 191, in start
    yield from handler
  File "/home/lg/.pythonz/pythons/CPython-3.4.0/lib/python3.4/asyncio/tasks.py", line 84, in coro
    res = func(*args, **kw)
  File "./srv.py", line 100, in handle_request
    response.write_eof()
  File "/home/lg/Documents/git/async/venv-aiohttp-master/lib/python3.4/site-packages/aiohttp-0.7.0b6-py3.4.egg/aiohttp/protocol.py", line 692, in write_eof
    return self.transport.drain()
AttributeError: '_SelectorSocketTransport' object has no attribute 'drain'

Any idea how to fix that?

Make client tracebacks more useful.

Replace the following traceback with something more useful.

  File "/Users/nikolay/dev/files-api/venv/src/aiohttp/aiohttp/client.py", line 890, in read
    chunk = yield from self.content.read()
  File "/Users/nikolay/dev/files-api/venv/src/aiohttp/aiohttp/parsers.py", line 252, in read
    raise self._exception
  File "/Users/nikolay/dev/files-api/venv/src/aiohttp/aiohttp/parsers.py", line 155, in set_parser
    next(p)
  File "/Users/nikolay/dev/files-api/venv/src/aiohttp/aiohttp/protocol.py", line 302, in __call__
    out.feed_eof()
  File "/Users/nikolay/dev/files-api/venv/src/aiohttp/aiohttp/protocol.py", line 377, in feed_eof
    raise errors.IncompleteRead(b'')
nose.proxy.IncompleteRead: IncompleteRead: Bad Request

Incorrect behavior for `timeout` parameter in `aiohttp.request` method

Currently the library uses the timeout only for reading data from an opened socket, not for the connecting stage.
So if the server doesn't answer, I get an error like TimeoutError: [Errno 110] Connect call failed ('66.221.212.5', 8888), where TimeoutError is not asyncio.TimeoutError but an exception derived from OSError, raised by the OS.
Moreover, even if we put the connection stage under the hood of the timeout, the behavior is still a bit strange: the timeout would cover connecting and reading the HTTP headers, but not fetching the HTTP body.

I suggest removing the timeout parameter from the API and recommending asyncio.wait_for directly where needed.

Thoughts?
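The asyncio.wait_for approach can bound the whole operation from the caller's side: connecting, reading headers and fetching the body. A minimal sketch in modern syntax, with asyncio.sleep standing in for a request that never completes:

```python
import asyncio

async def hanging_request():
    # stand-in for a request whose connect or body read never finishes
    await asyncio.sleep(10)
    return "body"

async def fetch_with_deadline():
    try:
        # wait_for bounds everything the wrapped coroutine does
        return await asyncio.wait_for(hanging_request(), timeout=0.1)
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(fetch_with_deadline()))  # prints "timed out"
```

The same wrapper works around any coroutine, so the library would not need its own timeout plumbing.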

allowed_methods fails due to trailing whitespace

When using the allowed_methods parameter in ServerHttpProtocol, the server always returns a Method Not Allowed response.
The "method" variable in HttpPrefixParser contains trailing whitespace and doesn't match the strings in the allowed_methods list.
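A minimal illustration of the mismatch and a whitespace-tolerant comparison (the names here are illustrative, not the aiohttp internals):

```python
allowed_methods = ("GET", "POST", "PUT")

raw_method = "GET "                        # token as parsed, trailing whitespace kept
assert raw_method not in allowed_methods   # why the server always answers 405

method = raw_method.strip().upper()        # normalize before the membership test
assert method in allowed_methods
```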

Test suite warnings on FreeBSD: ResourceWarning / DeprecationWarning

Noted 3 test warnings while updating the aiohttp FreeBSD port:

FreeBSD: 9.2-STABLE r264519 (FreeBSD 9.x)
Python: 3.3.5
aiohttp: 0.8.2
asyncio: 3.4.1

test_del (test_client.ClientResponseTests) ... /mnt/home/user/repos/freebsd/ports/www/py-aiohttp/work/aiohttp-0.8.2/aiohttp/client.py:592: ResourceWarning: ClientResponse has to be closed explicitly! get::http://python.org
  warnings.warn(msg, ResourceWarning)
ok

test_dont_close_explicit_connector (test_client_functional.HttpClientFunctionalTests) ... /mnt/home/user/repos/freebsd/ports/www/py-aiohttp/work/aiohttp-0.8.2/tests/test_client_functional.py:28: ResourceWarning: unclosed <socket.socket object, $
d=10, family=2, type=1, proto=6>
  gc.collect()
ok

test_session_cookies (test_client_functional.HttpClientFunctionalTests) ... /mnt/home/user/repos/freebsd/ports/www/py-aiohttp/work/aiohttp-0.8.2/aiohttp/client.py:634: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
  logging.warn('Can not load response cookies: %s', exc)
ok

Ran 390 tests in 2.977s - OK

HttpResponse doesn't parse response body without Content-Length header and Connection: close

Steps to reproduce:

  1. Create a server that replies to a request with this response:
HTTP/1.1 200 OK
Server: nginx/1.4.3
Date: Mon, 24 Feb 2014 13:38:02 GMT
Content-Type: text/plain
Connection: close
Expires: Mon, 24 Feb 2014 13:38:01 GMT
Cache-Control: no-cache

MESSAGE_BODY
  2. Run the curl example:
python examples/curl.py http://video-1-1.rutube.ru/api/tx_bytes
  3. Get an empty response body on stdout.

That's because in the HttpResponse.start method HttpPayloadParser is constructed with only the message argument. So in HttpPayloadParser.__init__ the field self.readall defaults to False, and the following code in HttpPayloadParser.__call__ is never reached:

    if self.readall:
         yield from self.parse_eof_payload(out, buf)
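For comparison, the read-until-EOF framing that readall would enable can be sketched with a plain asyncio.StreamReader (a stand-in for the aiohttp parser, not the parser itself):

```python
import asyncio

async def read_close_delimited_body():
    # With Connection: close and no Content-Length, the body is simply
    # everything the peer sends before closing the connection.
    body = asyncio.StreamReader()
    body.feed_data(b"MESSAGE_BODY")
    body.feed_eof()                  # the server closed the socket
    return await body.read(-1)       # read until EOF

print(asyncio.run(read_close_delimited_body()))
```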

ServerHttpProtocol handle_request exception handling

ServerHttpProtocol.handle_request does not honor the handled exception's message member variable.

--- ../../../lib/python3.4/site-packages/aiohttp/server.py (revision )
+++ ../../../lib/python3.4/site-packages/aiohttp/server.py (revision )
@@ -192,9 +192,9 @@
                 self.log_debug('Ignored premature client disconnection.')
                 break
             except errors.HttpException as exc:
-                self.handle_error(exc.code, message, None, exc, exc.headers)
+                self.handle_error(exc.code, message if not exc.message else exc.message, None, exc, exc.headers)
             except Exception as exc:
-                self.handle_error(500, message, None, exc)
+                self.handle_error(500, message if not exc.message else exc.message, None, exc)
             finally:
                 if reader.output and not reader.output.at_eof():
                     self.log_debug('Uncompleted request.')

No way to close a response on a timeout

If a timeout occurs, you get a warn-level error in the logs complaining that responses must be closed explicitly, but you don't get a response object to call close() on.

Http over unix socket

Hi,

There are cases where HTTP is used over a unix socket (e.g. to communicate with Docker, or to run unit tests without occupying a TCP port). There should be an easy way to set up such a connection. Currently it's possible to override Session.start, copy-pasting all of its code, or to override loop.create_connection, which is ugly.

I could make a patch, but we should agree on an API for it first.
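For reference, plain asyncio already supports this at the transport level, so the connector mostly needs to expose it. A minimal round-trip over a unix socket (the socket path and request line below are made up for the demo):

```python
import asyncio
import os
import tempfile

async def handle(reader, writer):
    await reader.readline()                  # consume the request line
    writer.write(b"HTTP/1.1 200 OK\r\n")     # pretend to speak HTTP
    await writer.drain()
    writer.close()

async def unix_round_trip():
    path = os.path.join(tempfile.mkdtemp(), "demo.sock")
    server = await asyncio.start_unix_server(handle, path=path)
    reader, writer = await asyncio.open_unix_connection(path)
    writer.write(b"GET /events HTTP/1.1\r\n")
    await writer.drain()
    status = await reader.readline()
    writer.close()
    server.close()
    await server.wait_closed()
    return status

print(asyncio.run(unix_round_trip()))
```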

AssertionError: Transport already set

OS: Arch Linux x86_64
Python: 3.4.0

After upgrading aiohttp from 0.6.5 to 0.7.0 or the current master
(commit d7fd9b1),
I get an error when requesting some URLs from the WSGI server using cURL.
The cURL client hangs, and the WSGI server prints this traceback:

Exception in callback <bound method WSGIServerHttpProtocol.connection_made of <aiohttp.wsgi.WSGIServerHttpProtocol object at 0x7fa200477e10>>(<asyncio.selector_events._SelectorSocketTransport object at 0x7fa1fa92e4e0>,)
handle: Handle(<bound method WSGIServerHttpProtocol.connection_made of <aiohttp.wsgi.WSGIServerHttpProtocol object at 0x7fa200477e10>>, (<asyncio.selector_events._SelectorSocketTransport object at 0x7fa1fa92e4e0>,))
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/events.py", line 39, in _run
    self._callback(*self._args)
  File "/home/zkc/.buildout/eggs/aiohttp-0.7.0-py3.4.egg/aiohttp/server.py", line 88, in connection_made
    super().connection_made(transport)
  File "/home/zkc/.buildout/eggs/aiohttp-0.7.0-py3.4.egg/aiohttp/parsers.py", line 250, in connection_made
    self.reader.set_transport(transport)
  File "/home/zkc/.buildout/eggs/aiohttp-0.7.0-py3.4.egg/aiohttp/parsers.py", line 103, in set_transport
    assert self._transport is None, 'Transport already set'
AssertionError: Transport already set

The HTTP server code looks roughly like this (based on examples/srv.py):

import asyncio
from aiohttp.wsgi import WSGIServerHttpProtocol

from mymodule import simple_app

host, port = '127.0.0.1', 8080
server = WSGIServerHttpProtocol(simple_app)
loop = asyncio.get_event_loop()
f = loop.create_server(lambda: server, host, port)
svr = loop.run_until_complete(f)
socks = svr.sockets
print('serving on', socks[0].getsockname())
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

Is that right? It works well with aiohttp 0.6.5.
Sorry, but I am wondering if I need to make some changes for aiohttp 0.7.0.

aiohttp raises http.cookies.CookieError; how can I disable cookies?

This is my test code:

import aiohttp
import asyncio

def request():
    rsp = yield from aiohttp.request('GET', "http://pic.315che.com/girls/3487_9.htm")
    content = yield from rsp.read()
    print(len(content))


loop = asyncio.get_event_loop()
loop.run_until_complete(request())

It raises CookieError:

http.cookies.CookieError: Illegal key value: ISAWPLB{A7F52349-3531-4DA9-8776-F74BC6F4F1BB}
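The stdlib http.cookies module is strict about cookie-name syntax, and braces are outside its legal character set, so any attempt to store such a key raises. One application-side workaround is to catch CookieError and drop the offending cookie:

```python
from http.cookies import CookieError, SimpleCookie

cookies = SimpleCookie()
try:
    # '{' and '}' are not in the legal cookie-name alphabet
    cookies["ISAWPLB{A7F52349-3531-4DA9-8776-F74BC6F4F1BB}"] = "x"
except CookieError:
    print("illegal cookie key skipped")
```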

ReferenceError: weakly-referenced object no longer exists

@asvetlov, could you look at this exception in ClientResponse.__del__?

ERROR [asyncio] Exception in default exception handler
Traceback (most recent call last):
 File "/usr/lib/python3.4/asyncio/base_events.py", line 719, in call_exception_handler
   self.default_exception_handler(context)
 File "/usr/lib/python3.4/asyncio/base_events.py", line 696, in default_exception_handler
   log_lines.append('{}: {!r}'.format(key, context[key]))
 File "/usr/lib/python3.4/asyncio/tasks.py", line 164, in __repr__
   res = super().__repr__()
 File "/usr/lib/python3.4/asyncio/futures.py", line 157, in __repr__
   res += '<exception={!r}>'.format(self._exception)
 File "/opt/files/lib/python3.4/site-packages/aiohttp/client.py", line 576, in __repr__
   print(self.headers, file=out)
 File "/opt/files/lib/python3.4/site-packages/aiohttp/multidict.py", line 88, in __repr__
   return '<{} {!r}>'.format(self.__class__.__name__, self._items)
 File "/usr/lib/python3.4/reprlib.py", line 24, in wrapper
   result = user_function(self)
 File "/usr/lib/python3.4/collections/__init__.py", line 198, in __repr__
   return '%s(%r)' % (self.__class__.__name__, list(self.items()))
 File "/usr/lib/python3.4/_collections_abc.py", line 484, in __iter__
   for key in self._mapping:
 File "/usr/lib/python3.4/collections/__init__.py", line 87, in __iter__
   curr = root.next
ReferenceError: weakly-referenced object no longer exists

Please document UnixConnection work

My code is similar to the following, where self.url is /run/foo.sock and url is set to /run/foo.sock/events:

         response = yield from aiohttp.request(
             method, url,
             connector=aiohttp.UnixSocketConnector(self.url), 
             headers=headers, data=data
         )

Which raises:

  File "/home/tag/dev/local/aiohttp/aiohttp/client.py", line 97, in request
    loop=loop, expect100=expect100)
  File "/home/tag/dev/local/aiohttp/aiohttp/client.py", line 177, in __init__
    self.update_host(url)
  File "/home/tag/dev/local/aiohttp/aiohttp/client.py", line 203, in update_host
    raise ValueError('Host could not be detected.')
ValueError: Host could not be detected.

From what I can tell from the test code this should work, but better docs would be great, so that other people don't hit similar issues :)

Thanks!

examples/wsclient.py seems to have bitrotted:

kcr@ordinator$ python3 ./wsclient.py --port 9000
Please enter your name: kcr
Future/Task exception was never retrieved
future: Task(<start_client>)<exception=AttributeError("'HttpResponse' object has no attribute 'stream'",)>
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/tasks.py", line 281, in _step
    result = coro.send(value)
  File "./wsclient.py", line 48, in start_client
    stream = response.stream.set_parser(websocket.WebSocketParser)
AttributeError: 'HttpResponse' object has no attribute 'stream'

Tests for new proxy connector

  1. We need tests.
  2. We should think about connection keep-alive for the proxy.
  3. We should probably remove use_proxy from ClientRequest and modify path in the connector's connect() method.

Revisit server logging code

File "/usr/local/lib/python3.4/dist-packages/aiohttp-0.7.2-py3.4.egg/aiohttp/server.py", line 122, in log_access
   self.access_log.info(self.access_log_format % atoms)
 File "/usr/lib/python3.4/logging/__init__.py", line 1251, in info
   self._log(INFO, msg, args, **kwargs)
 File "/usr/lib/python3.4/logging/__init__.py", line 1386, in _log
   self.handle(record)
 File "/usr/lib/python3.4/logging/__init__.py", line 1396, in handle
   self.callHandlers(record)
 File "/usr/lib/python3.4/logging/__init__.py", line 1458, in callHandlers
   hdlr.handle(record)
 File "/usr/lib/python3.4/logging/__init__.py", line 837, in handle
   self.emit(record)
 File "/usr/lib/python3.4/logging/handlers.py", line 894, in emit
   msg = msg.encode('utf-8')
UnicodeEncodeError: 'utf-8' codec can't encode character '\udcc0' in position 115: surrogates not allowed
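The root cause is bytes that were decoded with surrogateescape (e.g. a malformed request line) ending up in a log record: a plain utf-8 encode of such text always fails. A small stdlib demonstration (the request line is made up):

```python
raw = b"GET /\xc0 HTTP/1.1"                    # request line with a bad byte
text = raw.decode("utf-8", "surrogateescape")  # lossless decode yields '\udcc0'
assert "\udcc0" in text

try:
    text.encode("utf-8")                       # what the logging handler does
except UnicodeEncodeError as exc:
    assert "surrogates not allowed" in str(exc)

# logging code can round-trip the surrogates instead of crashing:
assert text.encode("utf-8", "surrogateescape") == raw
```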

TCPConnector with conn_timeout doesn't stop

Slightly modified examples/curl.py:

def curl(url):
    connector = aiohttp.TCPConnector(loop=asyncio.get_event_loop(), conn_timeout=1)
    response = yield from aiohttp.request('GET', url, connector=connector)

And try to connect to an unavailable host:

$> curl --connect-timeout 1 http://192.169.1.1/
curl: (28) Connection timed out after 1001 milliseconds
$> python examples/curl.py http://192.169.1.1/
^CTraceback (most recent call last):
  File "examples/curl.py", line 31, in <module>
    loop.run_until_complete(curl(sys.argv[1]))
  File "/usr/lib/python3.4/asyncio/base_events.py", line 203, in run_until_complete
    self.run_forever()
  File "/usr/lib/python3.4/asyncio/base_events.py", line 184, in run_forever
    self._run_once()
  File "/usr/lib/python3.4/asyncio/base_events.py", line 799, in _run_once
    event_list = self._selector.select(timeout)
  File "/usr/lib/python3.4/selectors.py", line 424, in select
    fd_event_list = self._epoll.poll(timeout, max_ev)
KeyboardInterrupt

python3.4, aiohttp==0.8.1

DataQueue is missing the read(n) method (and some others)

PEP 3156 specifies several utility methods that need to be implemented: http://legacy.python.org/dev/peps/pep-3156/#servers

The really important one is the read(n) method. Two other methods are missing as well: readline() and readexactly(n). If aiohttp doesn't cover these methods it isn't compatible with PEP 3156 and won't work with a lot of projects.

Edit: Here is the class that misses these methods: https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/parsers.py#L281
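For comparison, asyncio's own StreamReader exposes exactly this trio, which is the contract a DataQueue replacement would need to match:

```python
import asyncio

async def consume():
    reader = asyncio.StreamReader()
    reader.feed_data(b"HTTP/1.1 200 OK\r\nbody-bytes")
    reader.feed_eof()

    line = await reader.readline()       # up to and including b"\n"
    exact = await reader.readexactly(4)  # exactly n bytes or IncompleteReadError
    rest = await reader.read(-1)         # everything remaining
    return line, exact, rest

line, exact, rest = asyncio.run(consume())
assert line == b"HTTP/1.1 200 OK\r\n"
assert exact == b"body"
assert rest == b"-bytes"
```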

``aiohttp.server.ServerHttpProtocol`` has no mechanism to prevent broken sockets (e.g. broken from the client side)

Good day!

aiohttp.server.ServerHttpProtocol has no mechanism to prevent broken sockets.

Broken sockets become a resource leak on the server side.

From the client side, an attacker may deliberately create broken sockets to mount an easy DoS attack on our server.

Broken sockets may also accumulate on the server side unintentionally, due to various situations with misbehaving clients.

An example of a solution for this issue: #11
(the variable self._keep_alive_period is not a solution for all cases)

An alternative solution is to add a timeout to aiohttp.server.ServerHttpProtocol.
(a timeout is a good way, but it may have its own problems)

Thanks in advance! :-)
And sorry for my bad English :(

Flow control support for large transfers

Do you folks have a mailing list or IRC channel for aiohttp discussions? Or is creating Issues good enough?

My current thoughts lie in the aiohttp.client area. HttpResponse looks like it needs some love with respect to large responses: it would load a 5G download entirely into memory if read() is called. Using content.read() directly is certainly okay, but I wonder if you already have plans to improve this area?

On the flip side, what about sending large requests? I'm pretty new to all of this (asyncio and aiohttp), so forgive me if this is obvious stuff to you folks. I've tried setting data to a file-like object from the standard open(path), but it just hangs. If I pass open(path).read() instead it works, but of course that's bad for a huge file.

Just to give some context, I'm working on an SDK to work as a client against OpenStack Swift / Rackspace Cloud Files, something I'm quite familiar with ;) as I've been working on that project for years now, but only in the Python 2.6 and 2.7 realm. This is my first real attempt with Python 3, specifically 3.4.
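The usual streaming fix, on both the download and the upload side, is to move data in bounded chunks instead of one giant read(). A generic sketch, with io.BytesIO standing in for the response body (or for the file fed to a request):

```python
import io

CHUNK = 64 * 1024  # upper bound on memory used per read

def stream_copy(src, dst, chunk_size=CHUNK):
    """Copy a file-like src to dst without buffering the whole payload."""
    total = 0
    while True:
        block = src.read(chunk_size)
        if not block:          # b"" signals EOF
            break
        dst.write(block)
        total += len(block)
    return total

# stands in for streaming a 5G response to disk chunk by chunk
src, dst = io.BytesIO(b"x" * 200_000), io.BytesIO()
print(stream_copy(src, dst))  # prints 200000
```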

make 0.8.1 release

I fixed some bugs and I'd like to release 0.8.1; #74 can wait.
@asvetlov, thoughts? Do you want to include anything in the bugfix release?

HTTP POST not working

An HTTP POST example would be really helpful, as I'm struggling to read the payload data from the received request.

The generator returned by payload.read() (DataQueue.read()) never returns anything, so either I'm doing something wrong or the generator is broken. I'm assuming the generator will return the POST payload data (I haven't gone through the aiohttp code in detail to be sure). Some documentation and an example would go a long way.

Is my assumption incorrect or should I be doing something completely different to get to the POST data?

I'm using jquery $.post to make the request.

Thanks,
JP
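For what it's worth, jQuery's $.post sends an application/x-www-form-urlencoded body by default, so once the raw bytes are read from the payload they can be decoded with the stdlib (the body below is made up for illustration):

```python
from urllib.parse import parse_qs

# default jquery $.post content type: application/x-www-form-urlencoded
raw_body = b"name=JP&message=hello+world"

fields = parse_qs(raw_body.decode("utf-8"))
print(fields)  # {'name': ['JP'], 'message': ['hello world']}
```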

ServerHttpProtocol: automate Response object creation

In the current design of ServerHttpProtocol the user needs to create the Response object in handle_request and set up a lot of things manually: setting closing = True if message.version == (1, 0), remembering to call write_eof() after send_headers() if message.method == 'HEAD', setting keep_alive on the ServerHttpProtocol instance if response.keep_alive() returns True, and so on. These are really tedious tasks that the ServerHttpProtocol implementation could handle.

My suggestion is to make the handle_request method abstract and pass a specially prepared response object via an additional argument when calling it. When the request handler coroutine terminates, the response object could be inspected (to set keep-alive). AFAIK Node.js solves this in a similar way.

I'd like to help you with some code, but I'm still quite fresh to asyncio (I think I still don't fully get the differences between transport and protocol).

Best regards,
Tomasz

Possible error in session.py

It could definitely still be a coding error on my part, but I occasionally see an error I can't explain, so I thought I'd report it:

WARNING HttpResponse has to be closed explicitly! DELETE:host:/path
Exception ignored in: <bound method HttpResponse.__del__ of <HttpResponse(host/path) [None None]>>

Traceback (most recent call last):
  File "/home/greg/python3/lib/python3.4/site-packages/aiohttp/client.py", line 801, in __del__
    self.close()
  File "/home/greg/python3/lib/python3.4/site-packages/aiohttp/client.py", line 852, in close
    self.transport.close(force)
  File "/home/greg/python3/lib/python3.4/site-packages/aiohttp/session.py", line 107, in close
    (self.transport, self.protocol))
  File "/home/greg/python3/lib/python3.4/site-packages/aiohttp/session.py", line 79, in _release
    should_close = resp.message.should_close
AttributeError: 'NoneType' object has no attribute 'should_close'

Fix proxy issue

Code to reproduce:

import aiohttp
import asyncio


@asyncio.coroutine
def go():
    connector = aiohttp.ProxyConnector('http://37.44.192.55:8888')
    resp = yield from aiohttp.request(
        'get',
        'http://httpbin.org/cookies/set?k1=v1&k2=v2',
        connector=connector)
    body = yield from resp.read_and_close()
    print(body)


asyncio.get_event_loop().run_until_complete(go())

Proxies support

Are there any plans for client.request to support HTTP proxies?

urllib supports proxies through ProxyHandler, and requests / urllib3 through ProxyManager; would a similar wrapper around client.HttpClient work here?

Number of workers, why?

I need some help understanding this code. Why does it need a Supervisor to spawn ChildProcesses? Can the event loop not handle things itself? Why might I want multiple event loops (as in the default kwarg 2)?
@fafhrd91
Edit: I meant to refer to the Supervisor in the wssrv.py script under "examples". Thanks in advance for any help!

Websocket example fails on Windows

The websocket example fails on Windows because it uses os.fork().
But why does it do this?
Why does an async server need fork?
This example is way too complicated.

Proxy auth proposal

Currently proxy auth can be invoked only by passing a proxy URL like "login:password@host"; however, this is not obvious, especially since HTTP auth can be invoked via the auth parameter of aiohttp.request.
Maybe it makes sense to add an auth parameter to ProxyConnector, e.g.:

connector = ProxyConnector('http://localhost', auth=('login', 'pass'))
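Either spelling ultimately produces the same Proxy-Authorization request header; basic credentials are just base64 of login:password, as this stdlib sketch shows (credentials hypothetical):

```python
import base64

def proxy_auth_header(login: str, password: str) -> str:
    """Build the Proxy-Authorization value for HTTP basic auth."""
    token = base64.b64encode(f"{login}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(proxy_auth_header("login", "pass"))  # prints "Basic bG9naW46cGFzcw=="
```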

Adding socks5 support

Hi, I was wondering if there is any way to make aiohttp, or asyncio in general, work with SOCKS5 proxies. So far only HTTP proxies seem to be supported.

Thanks for your consideration.

Heisenbug in test_HTTP_200_OK_METHOD_ssl

test_HTTP_200_OK_METHOD_ssl may fail from time to time for various reasons:

Traceback (most recent call last):
  File "/home/kxepal/projects/aiohttp/tests/test_client_functional.py", line 121, in test_HTTP_200_OK_METHOD_ssl
    loop=self.loop, connector=connector))
  File "/usr/lib64/python3.4/asyncio/base_events.py", line 208, in run_until_complete
    return future.result()
  File "/usr/lib64/python3.4/asyncio/futures.py", line 243, in result
    raise self._exception
  File "/usr/lib64/python3.4/asyncio/tasks.py", line 279, in _step
    result = coro.throw(exc)
  File "/home/kxepal/projects/aiohttp/aiohttp/client.py", line 109, in request
    yield from resp.start(conn, read_until_eof)
  File "/home/kxepal/projects/aiohttp/aiohttp/client.py", line 606, in start
    self.message = yield from httpstream.read()
  File "/home/kxepal/projects/aiohttp/aiohttp/parsers.py", line 335, in read
    yield from self._waiter
  File "/usr/lib64/python3.4/asyncio/futures.py", line 348, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib64/python3.4/asyncio/tasks.py", line 332, in _wakeup
    value = future.result()
  File "/usr/lib64/python3.4/asyncio/futures.py", line 243, in result
    raise self._exception
  File "/home/kxepal/projects/aiohttp/aiohttp/parsers.py", line 170, in feed_eof
    self._parser.throw(EofStream())
  File "/home/kxepal/projects/aiohttp/aiohttp/protocol.py", line 251, in __call__
    'Can not read status line') from None
nose.proxy.ClientConnectionError: Can not read status line

or

Traceback (most recent call last):
  File "/home/kxepal/projects/aiohttp/tests/test_client_functional.py", line 121, in test_HTTP_200_OK_METHOD_ssl
    loop=self.loop, connector=connector))
  File "/usr/lib64/python3.4/asyncio/base_events.py", line 208, in run_until_complete
    return future.result()
  File "/usr/lib64/python3.4/asyncio/futures.py", line 243, in result
    raise self._exception
  File "/usr/lib64/python3.4/asyncio/tasks.py", line 279, in _step
    result = coro.throw(exc)
  File "/home/kxepal/projects/aiohttp/aiohttp/client.py", line 117, in request
    raise aiohttp.OsConnectionError(exc)
nose.proxy.OsConnectionError: [Errno 104] Connection reset by peer

For the last case, if we raise the original exception instead of aiohttp.OsConnectionError, the traceback becomes more verbose:

Traceback (most recent call last):
  File "/home/kxepal/projects/aiohttp/tests/test_client_functional.py", line 121, in test_HTTP_200_OK_METHOD_ssl
    loop=self.loop, connector=connector))
  File "/usr/lib64/python3.4/asyncio/base_events.py", line 208, in run_until_complete
    return future.result()
  File "/usr/lib64/python3.4/asyncio/futures.py", line 243, in result
    raise self._exception
  File "/usr/lib64/python3.4/asyncio/tasks.py", line 279, in _step
    result = coro.throw(exc)
  File "/home/kxepal/projects/aiohttp/aiohttp/client.py", line 127, in request
    yield from resp.start(conn, read_until_eof)
  File "/home/kxepal/projects/aiohttp/aiohttp/client.py", line 628, in start
    self.message = yield from httpstream.read()
  File "/home/kxepal/projects/aiohttp/aiohttp/parsers.py", line 335, in read
    yield from self._waiter
  File "/usr/lib64/python3.4/asyncio/futures.py", line 348, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib64/python3.4/asyncio/tasks.py", line 332, in _wakeup
    value = future.result()
  File "/usr/lib64/python3.4/asyncio/futures.py", line 243, in result
    raise self._exception
  File "/usr/lib64/python3.4/asyncio/selector_events.py", line 673, in _read_ready
    data = self._sock.recv(self.max_size)
  File "/usr/lib64/python3.4/ssl.py", line 693, in recv
    return self.read(buflen)
  File "/usr/lib64/python3.4/ssl.py", line 582, in read
    v = self._sslobj.read(len or 1024)
nose.proxy.ConnectionResetError: [Errno 104] Connection reset by peer

To reliably reproduce, run:

watch -e nosetests --tests=tests/test_client_functional.py

and wait for a while. I'm not sure yet where the issue is, in test_util.run_server or somewhere deeper, but maybe you have a quick idea about what could be wrong.

Python: 3.3.5 / 3.4.1
aiohttp: 0.8.2 @ #2b19a85e3b
tulip: 3.4.1

Get rid of email.Message, use MultiDict instead

In private conversation with @fafhrd91 we decided to get rid of using email.Message and switch to MultiDict for HTTP headers and so on.

email.Message handles HTTP headers well but pollutes the public API with email-specific methods that just don't work for aiohttp.

The MultiDict implementation will be transplanted from the aiorest library, which borrowed the class from WebOb with some cleanup.

The change will break backward compatibility, sorry, but it's strongly required to make the aiohttp API clean and safe.

I hope the change will not hurt library users too badly.
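To make the intended contract concrete, here is a toy case-insensitive multidict (a sketch of the behaviour only, not the aiorest/WebOb code):

```python
class CIMultiDictSketch:
    """Toy case-insensitive multidict: keeps duplicate keys, folds case."""

    def __init__(self):
        self._items = []                     # list of (key, value) pairs

    def add(self, key, value):
        self._items.append((key, value))

    def getall(self, key, default=()):
        found = [v for k, v in self._items if k.lower() == key.lower()]
        return found if found else list(default)

    def __getitem__(self, key):
        found = self.getall(key)
        if not found:
            raise KeyError(key)
        return found[0]                      # first value wins

headers = CIMultiDictSketch()
headers.add("Set-Cookie", "a=1")
headers.add("set-cookie", "b=2")
assert headers["SET-COOKIE"] == "a=1"
assert headers.getall("Set-Cookie") == ["a=1", "b=2"]
assert headers.getall("X-Missing") == []     # empty list, no exception
```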

aiohttp.HttpClient

What is the purpose of this class?
From my perspective it solves a very specific problem and should be moved to a separate project if you really need it.
For example, I'm now solving a similar problem, but my criteria for a 'good' answer are much more complex: even if a host sends a response, the host may still be marked as 'failed' after analyzing the response status code, body and headers.
I think there is no common solution for this problem, or the solution would be too complicated for a simple, low-level library like aiohttp.

fix header encoding error

Please find embedded a patch fixing benoitc/gunicorn#775:

--- protocol.orig.py    2014-06-07 15:19:01.234911490 +0200
+++ protocol.py 2014-06-07 15:22:23.158921591 +0200
@@ -629,8 +629,9 @@
         # status + headers
         hdrs = ''.join(itertools.chain(
            (self.status_line,),
-           *((k, ': ', v, '\r\n') for k, v in self.headers)))
-        hdrs = hdrs.encode('ascii') + b'\r\n'
+            *((k, ': ', str(v).strip(), '\r\n') for k, v in self.headers)))
+
+        hdrs = hdrs.encode('utf8') + b'\r\n'

         self.output_length += len(hdrs)
         self.transport.write(hdrs)
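A small illustration of why the ascii encode in the original code blows up on non-ASCII header values (the value below is made up):

```python
value = "attachment; filename=r\u00e9sum\u00e9.txt"   # é in a header value

try:
    value.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False
assert not ascii_ok                  # the failure the patch works around

# the patch above switches the whole header block to utf8:
assert b"\xc3\xa9" in value.encode("utf8")
```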

HTTP pipelining support improvements

Request handling in the ServerHttpProtocol implementation executes handle_request coroutines sequentially when HTTP pipelining is used.

Please take a look at the following gist for more details:
https://gist.github.com/telendt/7732021

There are 2 HTTP requests for 2 different resources, each one available after 1s (I use asyncio.sleep here, but it could really be some IO happening there). These coroutines could run concurrently (the second one starting while the previous one "waits"), but they run sequentially instead (the second one starts when the previous one terminates).

Of course, concurrent execution of handle_request coroutines would still need to return responses in the same order the requests were received. In that case HttpMessage shouldn't write to the transport directly (except for the one from the first pending coroutine) but would need to write to some buffer instead.
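The desired scheduling can be sketched with plain asyncio: start both handlers at once, but yield the responses in request order (asyncio.sleep stands in for the per-request IO, shortened from the gist's 1s):

```python
import asyncio
import time

async def handle_request(n):
    await asyncio.sleep(0.2)          # the per-request IO from the gist
    return f"response {n}"

async def serve_pipelined():
    start = time.monotonic()
    # handlers run concurrently; gather still yields results in request order
    responses = await asyncio.gather(handle_request(1), handle_request(2))
    return responses, time.monotonic() - start

responses, elapsed = asyncio.run(serve_pipelined())
assert responses == ["response 1", "response 2"]  # ordered like the requests
assert elapsed < 0.4                              # concurrent, not 0.2 + 0.2
```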

MultiDict issues in http request

  1. Why is MultiDict used for request headers instead of CaseInsensitiveMultiDict?
  2. Why does getall() neither return an empty list when there is no value nor accept a default argument?

HTTPS Support

Hi,

It would be nice to have HTTPS support.

SSL/TLS is easy right? :)

run_in_executor in handle_request blocking I/O

Hi!

I've tried to use your module to handle slow requests.
In the handle_request coroutine I yield from loop.run_in_executor a function that just does time.sleep(10) and returns a "test" string, and then I send the response with the string returned from run_in_executor.

def slow_func():
  time.sleep(10)
  return "test"

class HttpProtocol(aiohttp.server.ServerHttpProtocol):
  def __init__(self, ppexecutor, loop):
    self.loop = loop
    self.ppexecutor = ppexecutor

  ...

  @asyncio.coroutine
  def handle_request(self, message, payload):
    # ppexecutor is a ProcessPoolExecutor instance
    data = yield from self.loop.run_in_executor(self.ppexecutor, slow_func)
    response = aiohttp.Response(self.transport, 200, http_version=message.version, close=True)

    response.add_headers(
      ('Content-Type', 'text'),
      ('Content-Length', str(len(data)))
    )
    response.send_headers()
    response.write(data.encode())
    self.keep_alive(False)

The problem is that the server doesn't accept any requests until the function given to run_in_executor has returned.
Is that normal, or am I doing something wrong?

Note that it works perfectly well with asyncio.start_server in the request handler.
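For what it's worth, run_in_executor by itself should not block the loop. This self-contained sanity check (a ThreadPoolExecutor instead of the ProcessPoolExecutor above, sleeps shortened) shows a concurrent task ticking while the blocking function runs, which suggests the stall comes from elsewhere in the setup:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def slow_func():
    time.sleep(0.3)                   # blocking work, off the event loop
    return "test"

async def heartbeat(log):
    for _ in range(3):                # proves the loop keeps turning
        await asyncio.sleep(0.05)
        log.append("tick")

async def run_demo():
    loop = asyncio.get_running_loop()
    log = []
    with ThreadPoolExecutor() as pool:
        data, _ = await asyncio.gather(
            loop.run_in_executor(pool, slow_func),
            heartbeat(log),
        )
    return data, log

data, log = asyncio.run(run_demo())
assert data == "test"
assert log == ["tick", "tick", "tick"]   # ticks fired while slow_func slept
```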

Would you like to rename SocketConnector to TCPSocketConnector?

I think SocketConnector is too common a name, and it corresponds to BaseConnector rather than to TCP sockets. The naming is ambiguous. I suggest keeping BaseConnector and getting rid of SocketConnector.

Moreover, it would be better to rename SocketConnector -> TCPConnector and UnixSocketConnector -> UnixConnector.

If you'd like to keep backward compatibility (do you really pursue that goal for now?), you can add aliases.

What do you think?
