httplib2 / httplib2

Small, fast HTTP client library for Python. Features persistent connections, caching, and Google App Engine support. Originally written by Joe Gregorio, now supported by the community.

Home Page: http://httplib2.readthedocs.io/

License: Other


Introduction

httplib2 is a comprehensive HTTP client library. httplib2.py supports many features left out of other HTTP libraries.

If you want to help this project with a bug report or code change, the contribution guidelines may contain useful information.

HTTP and HTTPS

HTTPS support is only available if the socket module was compiled with SSL support.

Keep-Alive

Supports HTTP 1.1 Keep-Alive, keeping the socket open and performing multiple requests over the same connection if possible.

Authentication

The following three types of HTTP Authentication are supported. These can be used over both HTTP and HTTPS.

  • Digest
  • Basic
  • WSSE

Caching

The module can optionally operate with a private cache that understands the Cache-Control: header and uses both the ETag and Last-Modified cache validators.

All Methods

The module can handle any HTTP request method, not just GET and POST.

Redirects

Automatically follows 3XX redirects on GETs.

Compression

Handles both 'deflate' and 'gzip' types of compression.

Lost update support

Automatically adds back ETags into PUT requests to resources we have already cached. This implements Section 3.2 of Detecting the Lost Update Problem Using Unreserved Checkout.
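For example (a small sketch of the behavior described above, reusing the URL from the PUT example further down):

import httplib2

h = httplib2.Http(".cache")
# The first GET caches the representation together with its ETag.
resp, content = h.request("http://example.org/chapter/2")
# A later PUT to the same URI automatically carries the cached ETag in an
# If-Match header, guarding against the lost update problem.
resp, content = h.request("http://example.org/chapter/2", "PUT",
                          body="This is text",
                          headers={'content-type': 'text/plain'})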

Unit Tested

A large and growing set of unit tests.

Installation

$ pip install httplib2

Usage

A simple retrieval:

import httplib2
h = httplib2.Http(".cache")
(resp_headers, content) = h.request("http://example.org/", "GET")

'content' is the body retrieved from the URL; it is already decompressed or unzipped if necessary.

To PUT some content to a server that uses SSL and Basic authentication:

import httplib2
h = httplib2.Http(".cache")
h.add_credentials('name', 'password')
(resp, content) = h.request("https://example.org/chapter/2",
                            "PUT", body="This is text",
                            headers={'content-type':'text/plain'} )

Use the Cache-Control: header to control how the caching operates.

import httplib2
h = httplib2.Http(".cache")
(resp, content) = h.request("http://bitworking.org/", "GET")
...
(resp, content) = h.request("http://bitworking.org/", "GET",
                            headers={'cache-control':'no-cache'})

The first request will be cached, and since this is a request to bitworking.org it will be cached for two hours, because that is how I have my server configured. Any subsequent GET to that URI will return the value from the on-disk cache, and no request will be made to the server. You can use the Cache-Control: header to change the cache's behavior: in this example the second request adds a Cache-Control: header with a value of 'no-cache', which tells the library that the cached copy must not be used when handling this request.

More example usage can be found in the documentation at http://httplib2.readthedocs.io/.


httplib2's Issues

what units is the timeout in?

seconds? milliseconds? something else?

class httplib2.Http([cache=None][, timeout=None][, proxy_info=ProxyInfo.from_environment][, ca_certs=None][, disable_ssl_certificate_validation=False])

The timeout parameter is the socket-level timeout.
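Seconds: httplib2 hands the value to the underlying socket, and socket timeouts in Python are expressed in seconds (socket.settimeout() takes seconds). A quick sketch:

import httplib2

# timeout is in seconds: this Http object gives up on a socket operation
# after roughly five seconds, not five milliseconds.
h = httplib2.Http(timeout=5)
resp, content = h.request("http://example.org/")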

connection_type is discarded for redirects

Copying this issue over from https://code.google.com/archive/p/httplib2/issues/305, since it looks like it never got ported over to this repo, and the issue is still present.

"""

What steps will reproduce the problem? 1. Pass in a custom connection_type to request with a uri that returns a 307.

What is the expected output? What do you see instead? It is expected that the custom connection_type is used after the redirect is followed, but instead it is not used.

What version of the product are you using? On what operating system? Linux, python 2.7.3, httplib2 @df8bfce22cc07127164235da2850c3c5b0caae3f

Please provide any additional information below. I plan on submitting a patch request to fix this shortly.
"""

We currently get around this in gsutil by monkey-patching in a nearly-identical _request method, with this line added:
https://github.com/GoogleCloudPlatform/gsutil/blob/d5446e5de350e2377f295e62ccc718a0dd6a5660/gslib/gcs_json_media.py#L393

Seems like the fix there is fairly simple, and we've been using it for ~4 years in gsutil. Unfortunately, I don't currently have a lot of extra time, so it will be a while before I can submit a PR with that fix included, run whatever tests are necessary, etc. -- others are welcome to beat me to it if they have the time :)
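For reference, the essence of that one-line gsutil workaround, as I understand it, is inside the redirect-handling branch of Http._request; the snippet below is a sketch using httplib2's internal names, not the exact patch:

# Sketch only: when following a redirect inside Http._request, thread the
# original connection class back through the recursive call so a custom
# connection_type survives the redirect.
(response, content) = self.request(
    location, method=method, body=body, headers=headers,
    redirections=redirections - 1,
    connection_type=conn.__class__)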

httplib2 fails when FIPS mode is enabled

When trying to use the httplib2 module on a RHEL server where fips_enabled = 1, the module fails: _cnonce uses the MD5 algorithm, which is not FIPS-approved.

RHEL 6.7
httplib2 0.7.7
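One possible direction (a sketch, not an official fix): the client nonce only needs to be unpredictable, not MD5 specifically, so a FIPS-approved digest such as SHA-256 could stand in for it:

import hashlib
import random
import time

def _cnonce():
    # Mirrors the shape of httplib2's _cnonce, but with SHA-256 instead of
    # MD5 so it does not trip the FIPS restriction on hashlib.md5().
    seed = "%s:%s" % (time.ctime(),
                      ["0123456789"[random.randrange(0, 9)] for i in range(20)])
    return hashlib.sha256(seed.encode("utf-8")).hexdigest()[:16]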

OSError: [WinError 10038] an operation was attempted on something that is not a socket

Just installed the package on Win10, Python 3.5.2.

>>> (resp_headers, content) = h.request("https://wtfismyip.com/", "GET")
Traceback (most recent call last):
  File "<pyshell#42>", line 1, in <module>
    (resp_headers, content) = h.request("https://wtfismyip.com/", "GET")
  File "%Python%\Python35\lib\site-packages\httplib2\__init__.py", line 1314, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "%Python%\Python35\lib\site-packages\httplib2\__init__.py", line 1064, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "%Python%\Python35\lib\site-packages\httplib2\__init__.py", line 1017, in _conn_request
    response = conn.getresponse()
  File "%Python%\Python35\lib\http\client.py", line 1197, in getresponse
    response.begin()
  File "%Python%\Python35\lib\http\client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "%Python%\Python35\lib\http\client.py", line 258, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "%Python%\Python35\lib\socket.py", line 575, in readinto
    return self._sock.recv_into(b)
OSError: [WinError 10038] an operation was attempted on something that is not a socket

And also:

>>> (resp_headers, content) = h.request("https://wtfismyip.com", "GET")
Traceback (most recent call last):
  File "<pyshell#43>", line 1, in <module>
    (resp_headers, content) = h.request("https://wtfismyip.com", "GET")
  File "%Python%\Python35\lib\site-packages\httplib2\__init__.py", line 1314, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "%Python%\Python35\lib\site-packages\httplib2\__init__.py", line 1064, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "%Python%\Python35\lib\site-packages\httplib2\__init__.py", line 987, in _conn_request
    conn.connect()
  File "%Python%\Python35\lib\http\client.py", line 1260, in connect
    server_hostname=server_hostname)
  File "%Python%\Python35\lib\ssl.py", line 377, in wrap_socket
    _context=self)
  File "%Python%\Python35\lib\ssl.py", line 752, in __init__
    self.do_handshake()
  File "%Python%\Python35\lib\ssl.py", line 988, in do_handshake
    self._sslobj.do_handshake()
  File "%Python%\Python35\lib\ssl.py", line 633, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)

urllib works fine with this site

>>> r.urlopen('https://wtfismyip.com')
<http.client.HTTPResponse object at 0x000001CD2EF15EB8>
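For the CERTIFICATE_VERIFY_FAILED case, one thing worth trying (a sketch; certifi is a third-party package, and this assumes the failure comes from an out-of-date CA bundle) is to point httplib2 at a current bundle via its ca_certs parameter:

import certifi  # third-party: pip install certifi
import httplib2

# Use certifi's current CA bundle instead of whatever bundle httplib2
# would otherwise pick up on this machine.
h = httplib2.Http(ca_certs=certifi.where())
resp_headers, content = h.request("https://wtfismyip.com/", "GET")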

Fail to validate HTTPS server while using custom CA file

Hello httplib2.
I am using httplib2==0.9.2.

My use case is quite simple: we have a private network and have set up some HTTPS services. These services provide certificates signed by a site-local CA authority. All certificates are signed using SHA-256 as the signing algorithm.

Given that:

  • a self-signed root CA certificate exists inside /etc/ssl/certs/ca-certificates.crt (the standard debian location)
  • the certificate for CN foo.localdomain is signed by this local authority, and is installed to a web server https://foo.localdomain

I'm trying to use httplib2 to communicate with this service:

h = httplib2.Http(ca_certs='/etc/ssl/certs/ca-certificates.crt')
resp, cont = h.request('https://foo.localdomain')

The above fails to perform a proper handshake, but unfortunately it returns poor feedback on where the actual problem is:

Traceback (most recent call last):
  File "example.py", line 3, in <module>
    resp, cont = h.request('https://foo.localdomain')
  File "/home/malex/pyenv/local/lib/python2.7/site-packages/httplib2/__init__.py", line 1609, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/home/malex/pyenv/local/lib/python2.7/site-packages/httplib2/__init__.py", line 1351, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/home/malex/pyenv/local/lib/python2.7/site-packages/httplib2/__init__.py", line 1272, in _conn_request
    conn.connect()
  File "/home/malex/pyenv/local/lib/python2.7/site-packages/httplib2/__init__.py", line 1059, in connect
    raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)

At the same time, the following s_client test succeeds (certificate verification returns 0):

openssl s_client -connect foo.localdomain:443 -servername foo.localdomain -CAfile /etc/ssl/certs/ca-certificates.crt

Also, curl succeeds:

curl --cacert /etc/ssl/certs/ca-certificates.crt https://foo.localdomain
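To narrow down whether the problem is in httplib2 or in the TLS layer itself, a quick check with the stdlib ssl module against the same CA file may help (a sketch; the hostname is reused from the report above, and ssl.create_default_context requires Python 2.7.9+ or 3.x):

import socket
import ssl

# Same CA file and SNI name as the openssl s_client test above.
ctx = ssl.create_default_context(cafile="/etc/ssl/certs/ca-certificates.crt")
sock = socket.create_connection(("foo.localdomain", 443))
tls = ctx.wrap_socket(sock, server_hostname="foo.localdomain")
print(tls.getpeercert()["subject"])  # handshake succeeded if we get here
tls.close()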

More "safename"

This may be a Windows-specific issue. A newline in the cache filename is likely the problem.

>>> import httplib2
>>> http = httplib2.Http(".cache")
>>> headers, contents = http.request("https://skyvector.com/airport/83Q", "GET")
>>> headers, contents = http.request("https://skyvector.com/airport/83Q", "GET")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 1532, in request
    headers=headers, redirections=redirections - 1)
  File "c:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 1593, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "c:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 1396, in _request
    _updateCache(headers, response, content, self.cache, cachekey)
  File "c:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 448, in _updateCache
    cache.set(cachekey, text)
  File "c:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 715, in set
    f = file(cacheFullPath, "wb")
IOError: [Errno 22] invalid mode ('wb') or filename: '.cache\\skyvector.com,airport,83Q,Port-of-Poulsbo-Marina-Moorage-Seaplane\r\n Base,f2669977249d54dcf311752972d65d2f'
>>>

I think safename in __init__.py should replace newlines.
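Something along these lines might do it (a sketch of the suggested fix, not the project's actual code): strip control characters, \r and \n included, from the URL-derived part before it becomes a filename:

import re

_re_controls = re.compile(r"[\x00-\x1f\x7f]+")

def strip_control_chars(name):
    # Remove \r, \n and other control characters that are invalid in
    # Windows filenames before the cache path is built.
    return _re_controls.sub("", name)

print(strip_control_chars("Port-of-Poulsbo-Marina-Moorage-Seaplane\r\n Base"))
# -> 'Port-of-Poulsbo-Marina-Moorage-Seaplane Base'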

socks proxy code not delivered for Python 3

Is there a reason that the Python 2 package delivers socks.py but the Python 3 package doesn't? It looks like the upstream project is dead, and what's in the Python 2 package has fixes that upstream doesn't, but on the Python 3 side you have to make it available separately.

I'm tempted, as part of the Solaris delivery of httplib2, to copy (and modify as necessary) socks.py over to the python3 directory and be done with it. Is there a good reason not to? Would a pull request of that work be accepted?

Timeout makes the next request to the same domain fail

Heya.

Whenever a request fails with a socket.timeout, the following request to the same domain fails. Example:

import httplib2
h = httplib2.Http(timeout=3)
h.force_exception_to_status_code = True

head_1, cont_1 = h.request('https://httpbin.org/delay/60', headers={'cache-control':'no-cache'})
# head_1: {'status': '408', 'content-type': 'text/plain', 'content-length': 15}
# cont_1: b'Request Timeout'

# right after the timeout, request anything to the same domain
head_2, cont_2 = h.request('https://httpbin.org/delay/2', headers={'cache-control':'no-cache'})
# Instead of taking 2 seconds, this reuses the connection made by the previous request, so the response times out / returns the previous request's response.

I'm using the latest httplib2 and python3


If this is expected, is there any way to really drop the connection and force a new one?
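One workaround that may help, relying on Http's undocumented internal connection pool (so treat it as a sketch, not a supported API): close and discard the pooled connections after a timeout, forcing the next request onto a fresh socket.

# After a timeout, drop the pooled connections so the next request to the
# same scheme/authority opens a new socket instead of reusing the dead one.
for conn in h.connections.values():
    conn.close()
h.connections.clear()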

HTTP proxy raises 403 error

Hi,

I get a 403 Forbidden error when I send a request over a proxy. This is my code:

proxy_info = ProxyInfo(proxy_type=socks.PROXY_TYPE_HTTP,
                       proxy_host='my host',
                       proxy_port='my port',
                       proxy_user='my proxy username',
                       proxy_pass='my proxy password',
                       proxy_rdns=False)
http = Http(proxy_info=proxy_info)
response, content = http.request(url, method, headers=headers)

Then I get an exception:

Traceback (most recent call last):
  File "/Users/likel/httplib2test/main.py", line 141, in send_request
    response, content = http.request(url, method, headers=headers)
  File "/Users/likel/httplib2test/httplib2/__init__.py", line 1609, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/Users/likel/httplib2test/httplib2/__init__.py", line 1351, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/Users/likel/httplib2test/httplib2/__init__.py", line 1272, in _conn_request
    conn.connect()
  File "/Users/likel/httplib2test/httplib2/__init__.py", line 1033, in connect
    sock.connect((self.host, self.port))
  File "/Users/likel/httplib2test/httplib2/socks.py", line 424, in connect
    self.__negotiatehttp(destpair[0], destpair[1])
  File "/Users/likel/httplib2test/httplib2/socks.py", line 390, in __negotiatehttp
    raise HTTPError((statuscode, statusline[2]))
HTTPError: (403, 'Forbidden')

I'm sure the REST service didn't respond with a 403, because I can get the correct result when I use curl or the requests lib. I don't know the root cause here and I'm also not sure if this is a bug in httplib2. Thanks in advance.

Install to custom directory fails

I'm trying to install httplib2-0.10.3 to a custom directory on macOS 10.12.2 and it's failing with the following output:

$ PYTHONPATH=/Users/mark/code/p1/lib/python python setup.py -n install --home ~/code/p1
running install
running bdist_egg
running egg_info
writing python2/httplib2.egg-info/PKG-INFO
writing top-level names to python2/httplib2.egg-info/top_level.txt
writing dependency_links to python2/httplib2.egg-info/dependency_links.txt
reading manifest file 'python2/httplib2.egg-info/SOURCES.txt'
writing manifest file 'python2/httplib2.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.12-intel/egg
running install_lib
running build_py
creating build/lib
creating build/lib/httplib2
copying python2/httplib2/__init__.py -> build/lib/httplib2
copying python2/httplib2/iri2uri.py -> build/lib/httplib2
copying python2/httplib2/socks.py -> build/lib/httplib2
copying python2/httplib2/cacerts.txt -> build/lib/httplib2
warning: install_lib: 'build/lib' does not exist -- no Python modules to install

creating build/bdist.macosx-10.12-intel
creating build/bdist.macosx-10.12-intel/egg
creating build/bdist.macosx-10.12-intel/egg/EGG-INFO
copying python2/httplib2.egg-info/SOURCES.txt -> build/bdist.macosx-10.12-intel/egg/EGG-INFO
copying python2/httplib2.egg-info/dependency_links.txt -> build/bdist.macosx-10.12-intel/egg/EGG-INFO
copying python2/httplib2.egg-info/top_level.txt -> build/bdist.macosx-10.12-intel/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist/httplib2-0.10.3-py2.7.egg' and adding 'build/bdist.macosx-10.12-intel/egg' to it
removing 'build/bdist.macosx-10.12-intel/egg' (and everything under it)
Creating /Users/mark/code/p1/lib/python/site.py
error: Not a URL, existing file, or requirement spec: 'dist/httplib2-0.10.3-py2.7.egg'

How can I fix this?
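As a possible workaround (untested against this particular setup): letting pip place the package directly into the custom directory sidesteps the setup.py/easy_install path entirely:

$ pip install --target ~/code/p1/lib/python httplib2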

SSL certificate validation on Python 2.7.3 is broken by 0.10.2

40cbdcc introduced a dependency on ssl.CertificateError, which isn't available in Python 2.7.3. This means that calls to _ssl_wrap_socket will always raise httplib2.CertificateValidationUnsupported:

$ python2
Python 2.7.3 (default, Oct 26 2016, 21:01:49)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
>>> ssl.CertificateError
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'CertificateError'
>>> import httplib2
>>> httplib2._ssl_wrap_socket(None, None, None, None, None, None, None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/srv/jenkins/jobs/GCE-Publish-Daily/workspace/wagon_venv/local/lib/python2.7/site-packages/httplib2/__init__.py", line 101, in _ssl_wrap_socket
    "SSL certificate validation is not supported without "
httplib2.CertificateValidationUnsupported: SSL certificate validation is not supported without the ssl module installed. To avoid this error, install the ssl module, or explicity disable validation.

CertificateValidationUnsupported error with v0.10.2

I upgraded from 0.9.2 to v0.10.2 and now I get the following error:

  File "/home/ubuntu/www/ubm-60.3/lib/python2.7/site-packages/httplib2/__init__.py", line 101, in _ssl_wrap_socket
    "SSL certificate validation is not supported without "
CertificateValidationUnsupported: SSL certificate validation is not supported without the ssl module installed. To avoid this error, install the ssl module, or explicity disable validation.

I'm using Python 2.7.3, which has the ssl module built in.
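Until this is fixed upstream, a shim along these lines might unblock affected hosts (a sketch: it aliases the missing attribute before httplib2 is first imported; verify it against your workload):

import ssl

# Python 2.7.3 predates ssl.CertificateError; alias it so httplib2's
# attribute lookup during import succeeds.
if not hasattr(ssl, "CertificateError"):
    ssl.CertificateError = ssl.SSLError

import httplib2  # must come after the shim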

http.client deprecated key_file, cert_file and check_hostname

Hello! As of Python 3.6, the way HTTPSConnectionWithTimeout calls into the http.client library uses deprecated parameters:

Deprecated since version 3.6: key_file and cert_file are deprecated in favor of context. Please use ssl.SSLContext.load_cert_chain() instead, or let ssl.create_default_context() select the system's trusted CA certificates for you.

The check_hostname parameter is also deprecated; the ssl.SSLContext.check_hostname attribute of context should be used instead.

https://docs.python.org/3/library/http.client.html

What can we do to begin using ssl.SSLContext?

Thanks!
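For comparison, this is the direction the stdlib documentation points to (a minimal sketch using http.client directly, not httplib2 itself; the load_cert_chain line is only needed for client certificates):

import http.client
import ssl

# Build an SSLContext up front instead of passing key_file/cert_file/
# check_hostname to the connection.
context = ssl.create_default_context()   # system's trusted CA certificates
# context.load_cert_chain("client.pem")  # hypothetical client cert, if needed

conn = http.client.HTTPSConnection("example.org", context=context)
conn.request("GET", "/")
print(conn.getresponse().status)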

Version 0.10.2 Deleted from PyPI

This is causing our application's build to fail:

Collecting httplib2==0.10.2 (from -r requirements.txt (line 72))
  Could not find a version that satisfies the requirement httplib2==0.10.2 (from -r requirements.txt (line 72)) (from versions: 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7, 0.8, 0.9, 0.9.1, 0.9.2, 0.10.3)
No matching distribution found for httplib2==0.10.2 (from -r requirements.txt (line 72))

We can upgrade to 0.10.3, but for the most part, versions should really never be deleted or modified once pushed to a package repository. Deletion breaks things for a lot of people.
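One way to soften this failure mode (version bounds here are illustrative) is to pin a bounded range rather than an exact version in requirements.txt, so the build can fall back to a surviving compatible release:

httplib2>=0.10.2,<0.11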

ConnectionRefusedError: [WinError 10061] on Python3 behind Proxy

I have just installed httplib2 on my machine (Windows 10, 64-bit) with pip. The Anaconda Python distribution was already installed on my machine. I tried to run the following but got an error:

import os
os.environ['http_proxy'] = 'http://<proxy_user>:<proxy_pass>@<proxy_host>.com:<proxy_port>'
os.environ['https_proxy'] = 'https://<proxy_user>:<proxy_pass>@<proxy_host>.com:<proxy_port>'

import httplib2
from httplib2 import socks

http = httplib2.Http(proxy_info=httplib2.ProxyInfo(socks.PROXY_TYPE_HTTP, proxy_host='myproxy.com', proxy_port=80))

resp, content = http.request("http://google.com", "GET")

The output (attached as a screenshot in the original issue) shows the ConnectionRefusedError: [WinError 10061] from the title.

Appreciate any words of wisdom anyone can share!
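One thing to check (a sketch; host, port and credentials are placeholders): the explicit ProxyInfo in the snippet above drops the username/password that the environment variables carried, so passing them through proxy_user/proxy_pass may be all that's missing:

import httplib2
from httplib2 import socks

proxy = httplib2.ProxyInfo(
    proxy_type=socks.PROXY_TYPE_HTTP,
    proxy_host="myproxy.com",
    proxy_port=80,
    proxy_user="<proxy_user>",   # placeholder
    proxy_pass="<proxy_pass>",   # placeholder
)
http = httplib2.Http(proxy_info=proxy)
resp, content = http.request("http://google.com", "GET")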

Timeout error when using proxy with httplib2

I have a simple piece of code that uses a proxy to submit a GET request to Google.

import httplib2

http = httplib2.Http(proxy_info = httplib2.ProxyInfo(proxy_type=3,proxy_host=myProxy, proxy_port=myPort))
resp, content = http.request("http://google.com", "GET")
print(resp)
print(content)

For some reason I get a timeout error:

    resp, content = http.request("http://google.com", "GET")
  File "C:\Python35\lib\site-packages\httplib2\__init__.py", line 1322, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "C:\Python35\lib\site-packages\httplib2\__init__.py", line 1072, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "C:\Python35\lib\site-packages\httplib2\__init__.py", line 995, in _conn_request
    conn.connect()
  File "C:\Python35\lib\http\client.py", line 849, in connect
    (self.host, self.port), self.timeout, self.source_address)
  File "C:\Python35\lib\socket.py", line 711, in create_connection
    raise err
  File "C:\Python35\lib\socket.py", line 702, in create_connection
    sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

I'm using a valid proxy, but this module doesn't work. Does anybody know why this happens?

Where is new_request() defined?

flake8 testing of https://github.com/httplib2/httplib2 on Python 2.7.14

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./python2/httplib2test.py:1239:20: F821 undefined name 'new_request'
            return new_request(*args, **kwargs)
                   ^
./python2/httplib2/__init__.py:890:24: F821 undefined name 'httplib2'
        bypass_hosts = httplib2.AllHosts
                       ^
./python3/httplib2test.py:1182:20: F821 undefined name 'new_request'
            return new_request(*args, **kwargs)
                   ^

Where is new_request() defined? https://github.com/httplib2/httplib2/search?q=new_request

httplib2 ignores global proxy setting

Dear Httplib2 team,

this issue puzzled me for a while; you know, in some places in the world you have to use a proxy to connect to Google and the Google APIs. I use "Shadowsocks", lol.

The problem appears when I set Shadowsocks to global mode, which means all my local traffic goes through the Shadowsocks proxy.

The weird thing is that when I use "requests" it works fine; see the code below:

import requests
print requests.get("http://www.google.com").content
print 'this works fine'

However, when I tried to use httplib2, I got the error 'httplib.ResponseNotReady':

import httplib2
h = httplib2.Http()
(resp_headers, content) = h.request("http://www.google.com", "GET")
print 'this will throw httplib.ResponseNotReady error'

If this would be useful to you, here is the call stack:

  File "D:\virtualenv\env\lib\site-packages\httplib2\__init__.py", line 1659, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "D:\virtualenv\env\lib\site-packages\httplib2\__init__.py", line 1399, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "D:\virtualenv\env\lib\site-packages\httplib2\__init__.py", line 1355, in _conn_request
    response = conn.getresponse()
  File "C:\Python27\Lib\httplib.py", line 1108, in getresponse
    raise ResponseNotReady()
httplib.ResponseNotReady

Could you please suggest how, in my situation, I can use httplib2 to access the Google Analytics API?
Thanks in advance.

Wally
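One approach that may work here (a sketch; 127.0.0.1:1080 is Shadowsocks' common local SOCKS5 listener, so check your client's settings): point httplib2 at the local SOCKS5 endpoint explicitly instead of relying on global traffic interception:

import httplib2
from httplib2 import socks

proxy = httplib2.ProxyInfo(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 1080)
h = httplib2.Http(proxy_info=proxy)
resp_headers, content = h.request("http://www.google.com", "GET")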

SyntaxError (Python 2 style 'print') in Python 3 (v0.10.1 from PyPI)

Python 3.4.3 (default, Nov 17 2016, 01:08:31) 
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import httplib2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/srv/venv/lib/python3.4/site-packages/httplib2/__init__.py", line 932
    print "connect: (%s, %s) ************" % (self.host, self.port)
                                         ^
SyntaxError: invalid syntax
>>> 

Looks like Python 2 code was uploaded to PyPI for Python 3.

Proxy not working

import httplib2, socks
h = httplib2.Http(timeout=15, proxy_info=httplib2.ProxyInfo(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', 8000))
h.request('http://www.google.com/')

However, it seems that the proxy isn't used no matter what type constant (PROXY_TYPE_HTTP, PROXY_TYPE_HTTP_NO_TUNNEL or PROXY_TYPE_SOCKS4) and proxy I put in. Has anyone experienced such an issue?

Timeout error when using proxy with httplib2

I'm trying to use httplib2 with a proxy, but for some reason socks is always None.

For example, the following code:

import httplib2
http = httplib2.Http(proxy_info=httplib2.ProxyInfo(httplib2.socks.PROXY_TYPE_HTTP_NO_TUNNEL, myproxy, myPort))
resp, content = http.request("http://google.com", "GET")
print(resp)

Any idea why?

cache handling should separate header and body storage

I know the cache mechanism is not a main function of httplib2; this is just a proposal.

Currently cache handling is a full get/set: when hitting a 304, it still results in a full read/write cycle for FileCache.

This causes a heavy I/O penalty and drags performance down significantly: for small files it takes extra IOPS, and for large files it kills bandwidth, not to mention the heavy CPU used for serialization.
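Since Http(cache=...) accepts any object with get/set/delete, the storage split can at least be prototyped outside the library. The sketch below (Python 3, bytes cache values; the \r\n\r\n split assumes httplib2's "status + headers, blank line, body" cache layout) stores headers and body in separate files. With the current full get/set interface it only illustrates the layout; it cannot yet avoid the body I/O on a 304:

import hashlib
import os

class SplitFileCache(object):
    """Sketch: store the header block and the body of each cached response
    as separate files. httplib2 accepts any cache object exposing
    get/set/delete, e.g. httplib2.Http(cache=SplitFileCache(".cache"))."""

    def __init__(self, directory):
        self.directory = directory
        if not os.path.isdir(directory):
            os.makedirs(directory)

    def _paths(self, key):
        safe = hashlib.sha1(key.encode("utf-8")).hexdigest()
        base = os.path.join(self.directory, safe)
        return base + ".hdr", base + ".body"

    def get(self, key):
        hdr, body = self._paths(key)
        try:
            with open(hdr, "rb") as fh, open(body, "rb") as fb:
                return fh.read() + fb.read()
        except IOError:
            return None

    def set(self, key, value):
        hdr, body = self._paths(key)
        # Split on the first blank line so a future revalidation-aware
        # interface could rewrite only the small header file.
        head, sep, rest = value.partition(b"\r\n\r\n")
        with open(hdr, "wb") as fh:
            fh.write(head + sep)
        with open(body, "wb") as fb:
            fb.write(rest)

    def delete(self, key):
        for path in self._paths(key):
            if os.path.exists(path):
                os.remove(path)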

v0.10.2 is broken on python 2.7.8 Windows platform

import httplib2
conn=httplib2.Http(timeout=60,disable_ssl_certificate_validation=True)
conn.request("https://www.google.com")

>>> conn=httplib2.Http(timeout=60,disable_ssl_certificate_validation=True)
>>> conn.request("https://www.google.com")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ven\lib\site-packages\httplib2\__init__.py", line 1649, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "C:\ven\lib\site-packages\httplib2\__init__.py", line 1389, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "C:\ven\lib\site-packages\httplib2\__init__.py", line 1310, in _conn_request
    conn.request(method, request_uri, body, headers)
  File "C:\Python27\Lib\httplib.py", line 995, in request
    self._send_request(method, url, body, headers)
  File "C:\Python27\Lib\httplib.py", line 1029, in _send_request
    self.endheaders(body)
  File "C:\Python27\Lib\httplib.py", line 991, in endheaders
    self._send_output(message_body)
  File "C:\Python27\Lib\httplib.py", line 844, in _send_output
    self.send(msg)
  File "C:\Python27\Lib\httplib.py", line 820, in send
    self.sock.sendall(data)
AttributeError: sendall

The appengine code is not functional with GAE_USE_SOCKETS_HTTPLIB with Google IP ranges

(This was moved from the old repo)

Currently, there's some code for App Engine in place, but it will only work when GAE_USE_SOCKETS_HTTPLIB is not set, because otherwise the regular httplib will be returned.

This is a problem when trying to use googleapiclient from appengine, with that option set. The workaround, for us, is to monkey patch httplib2.SCHEME_TO_CONNECTION with the versions of the AppEngineHttp[s]Connection objects using the urlfetch httplib and then httplib2.Response to change httplib.HTTPResponse to the correct instance.

This has the drawback of being global, and I also wanted feedback on whether you think this is a problem and, if so, how it should be fixed. My best idea so far is to be able to pass an httplib to httplib2.Http so that you can select per instance what is going to be used.

gcloud on appengine - deadline doesn't work the way it should

Hey,

While using gcloud (which uses httplib2) in the App Engine environment, I got an exception whenever my request to a URL took more than 5 seconds. According to the Google App Engine documentation, when you need more than 5 seconds for a request you should use:

urlfetch.set_default_fetch_deadline(seconds)

So I did, but the exception kept happening. When using fetch, the deadline parameter is passed as (line 1002):

socket.getdefaulttimeout() or 5

but it should be the following to support App Engine users:

deadline = get_default_fetch_deadline() or socket.getdefaulttimeout() or 5

Attaching my code (I'm not familiar with the google3 package):

try:
    try:
        from google.appengine.api import apiproxy_stub_map
        if apiproxy_stub_map.apiproxy.GetStub('urlfetch') is None:
            raise ImportError  # Bail out; we're not actually running on App Engine.
        from google.appengine.api.urlfetch import fetch
        from google.appengine.api.urlfetch import InvalidURLError
        from google.appengine.api.urlfetch import get_default_fetch_deadline
    except (ImportError, AttributeError):
        from google3.apphosting.api import apiproxy_stub_map
        if apiproxy_stub_map.apiproxy.GetStub('urlfetch') is None:
            raise ImportError  # Bail out; we're not actually running on App Engine.
        from google3.apphosting.api.urlfetch import fetch
        from google3.apphosting.api.urlfetch import InvalidURLError

    def _new_fixed_fetch(validate_certificate):
        def fixed_fetch(url, payload=None, method="GET", headers={},
                        allow_truncated=False, follow_redirects=True,
                        deadline=None):
            if deadline is None:
                deadline = get_default_fetch_deadline() or socket.getdefaulttimeout() or 5
            return fetch(url, payload=payload, method=method, headers=headers,
                         allow_truncated=allow_truncated,
                         follow_redirects=follow_redirects, deadline=deadline,
                         validate_certificate=validate_certificate)
        return fixed_fetch

Searching source not supported using github search

This is more about GitHub than the library itself, but since this, the now-official repo, is a fork, searching the source isn't supported; instead you have to search the former official repository, which will become more of an issue as development continues.

Use HTTP basic auth credentials by default when provided

When add_credentials has been called, use the credentials in the initial request instead of expecting a 401 response before retrying with credentials.

If the credentials have been specified, it's reasonable to assume they're required and wouldn't have been provided if they weren't. The current behavior causes many unnecessary requests that are known to fail. Not all servers return 401 on failed auth; I ran into problems where the 401 caused a broken pipe error, and the correctly authenticated call was never made. Other examples exist in the Google Code version of this bug.

For people looking for a workaround (from this blog post), explicitly put the auth in the header initially.

import base64
import httplib2

h = httplib2.Http()
auth = base64.encodestring( 'username' + ':' + 'password' )

resp, content = h.request(
    'http://www.google.com/',
    'GET',
    headers = { 'Authorization' : 'Basic ' + auth }
)
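On Python 3 the same workaround looks like this (base64.encodestring is deprecated there and removed in 3.9; b64encode also avoids the trailing newline that encodestring appends):

import base64
import httplib2

h = httplib2.Http()
auth = base64.b64encode(b"username:password").decode("ascii")

resp, content = h.request(
    "http://www.google.com/",
    "GET",
    headers={"Authorization": "Basic " + auth},
)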

Can't connect to Skyscanner API

Hi, I've asked the same question on StackOverflow and to the Skyscanner support team, but haven't received an answer yet. On both sides people are suggesting I use the skyscanner library, which is based on the requests library. I don't have anything against it, but the purpose of my question is to learn how to connect to an API, because I want to start using APIs more often. I read good things about httplib2 in Dive Into Python 3, so I want to start using it.

I'm trying to establish a connection via an http request to the Skyscanner API. To do so, I wrote the following code:

import sys
import httplib2
from urllib.parse import urlencode


class Connection(object):
    """Connect to Skyscanner Business API and retrieve quotes"""

    API_HOST = "http://partners.api.skyscanner.net"
    PRICING_SESSION_URL = "{api_host}/apiservices/pricing/v1.0".format(
        api_host=API_HOST)
    SESSION_HEADERS = {"Content-Type": "application/x-www-form-urlencode",
                       "Accept": "application/json"}

    def __init__(self, api_key="prtl6749387986743898559646983194", body=None):
        """
        :param api_key: the API key, the default is an API key provided for
        testing purposes in the Skyscanner API documentation
        :param body: the details of the flight we are interested on
        """

        if not api_key:
            raise ValueError("The API key must be provided")

        self.api_key = api_key
        self.body = body
        self.create_session(api_key=self.api_key, body=self.body)

    def get_result(self):
        pass

    def make_request(self, service_url, method="GET", headers=None, body=None):
        """Perform either a POST or GET request

        :param service_url: URL to request
        :param method: request method, default is GET
        :param headers: request headers
        :param body: the body of the request
        """

        if "apikey" not in service_url.lower():
            body.update({
                "apikey": self.api_key
            })

        h = httplib2.Http(".cache")
        response, content = h.request(service_url,
                                      method=method,
                                      body=urlencode(body),
                                      headers=headers)
        print(str(response))

    def create_session(self, api_key, body):
        """Create the Live Pricing Service session"""

        return self.make_request(self.PRICING_SESSION_URL,
                                 method="POST",
                                 headers=self.SESSION_HEADERS,
                                 body=body)

def main():
    body = {
        "country": "UK",
        "currency": "GBP",
        "locale": "en-GB",
        "originplace": "LOND-sky",
        "destinationplace": "NYCA-sky",
        "outbounddate": "2016-05-01",
        "inbounddate": "2016-05-10",
        "adults": 1
    }

    Connection(body=body)


if __name__ == "__main__":
    sys.exit(main())

What the code above does is to send a POST request to their API in order to create a Live Pricing session. To retrieve the data I should then send a GET request with the URL to poll the booking details. This URL is provided in the Location header of the POST response.

The API key is publicly available on their documentation, as it's a generic one they suggest to use for experimenting with their API. I'm sure the key is working as I've replicated the POST request using the same details on the code above using Postman, a Google Chrome extension to test APIs sending HTTP requests. Using Postman I can successfully establish a connection (Status: 201 Created).

However, if I run the code above I get the following response:

{'content-length': '0', 'date': 'Sun, 24 Apr 2016 08:44:23 GMT', 'status': '415', 'cache-control': 'private'}

The documentation doesn't explicitly say what the Status 415 represents. Anyway, do you see anything wrong with how I'm sending the request?

Apologies for the quality of my code. I'm a beginner.

unexpected meterpreter results (Metasploit RPC(MSGRPC))

Hi dears,

Could you help me fix the issue below? I don't know where the problem is. :(
rapid7/metasploit-framework#7878

Note: I'm using httplib2 in msfrpc.py.

I must run the script two times to obtain the expected result: when I change the CMD (for example from getwd to sysinfo), I must call the function twice to obtain the result of sysinfo, because the first execution returns the result of the previous CMD. :(

Thanks a lot

Drop support for Python 2.6 and 3.3

With a positive resolution, we will only support Python 2.7 and 3.x.

Please vote with GitHub reactions 👍 or 👎 on this issue (plus the smile icon in the top right corner of this box) or write your opinion in the comments.

HTTPSConnectionWithTimeout.connect fails if one IP raises timeout

In the iteration below:

https://github.com/httplib2/httplib2/blob/master/python2/httplib2/__init__.py#L1053

If one IP times out, I can't make the request.

For example:

# host www.googleapis.com
www.googleapis.com is an alias for googleapis.l.google.com.
googleapis.l.google.com has address 216.58.198.10
googleapis.l.google.com has address 216.58.198.42
googleapis.l.google.com has address 216.58.205.42
googleapis.l.google.com has address 216.58.205.74
googleapis.l.google.com has address 216.58.205.106
googleapis.l.google.com has address 216.58.205.138
googleapis.l.google.com has address 216.58.205.170
googleapis.l.google.com has address 172.217.23.106
googleapis.l.google.com has IPv6 address 2a00:1450:4002:808::200a

In this case, I don't have route to 2a00:1450:4002:808::200a, but I have to all IPv4s.

Do we need to check all IPs?

If I'm not wrong, in the current implementation we check all of them and only consider the last one, which will fail in my case.

If the first IP is working we should stop the iteration. What do you think?
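Agreed that stopping at the first successful connect seems right; for illustration, the loop would look roughly like this (a sketch, not httplib2's actual code):

import socket

def connect_to_first_working(host, port, timeout=None):
    # Try each resolved address in order and return on the first successful
    # connect, so one unroutable IPv6 address cannot mask a working IPv4 one.
    last_err = None
    for family, socktype, proto, _name, sockaddr in socket.getaddrinfo(
            host, port, 0, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        if timeout is not None:
            sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err or OSError("getaddrinfo returned no addresses")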

httplib2 + gevent = ConcurrentObjectUseError?

Hi, I was using googleapiclient + gevent and hit the following bug. It seems httplib2 calculates conn_key by this line, which doesn't take the greenlet into account; however, greenlets don't like sharing a connection with each other. So I am proposing this method to compute conn_key:

conn_key = ":".join([threading.current_thread().name, scheme, authority])

My script works well with the above patch. The only thing that concerns me is that this bug doesn't happen on the regular Python interpreter, and the patch will prevent regular threads from sharing connections. Any advice?

Full stack trace:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gevent/greenlet.py", line 536, in run
    result = self._run(*self.args, **self.kwargs)
  File "test.py", line 166, in test
    request.next_chunk()
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/util.py", line 135, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/googleapiclient/http.py", line 894, in next_chunk
    headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/client.py", line 621, in new_request
    redirections, connection_type)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", line 1609, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", line 1351, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", line 1307, in _conn_request
    response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1136, in getresponse
    response.begin()
  File "/usr/lib/python2.7/httplib.py", line 453, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 409, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 480, in readline
    data = self._sock.recv(self._rbufsize)
  File "/usr/local/lib/python2.7/dist-packages/gevent/_sslgte279.py", line 458, in recv
    return self.read(buflen)
  File "/usr/local/lib/python2.7/dist-packages/gevent/_sslgte279.py", line 318, in read
    self._wait(self._read_event, timeout_exc=_SSLErrorReadTimeout)
  File "/usr/local/lib/python2.7/dist-packages/gevent/_socket2.py", line 176, in _wait
    raise _socketcommon.ConcurrentObjectUseError('This socket is already used by another greenlet: %r' % (watcher.callback, ))
ConcurrentObjectUseError: This socket is already used by another greenlet: <bound method Waiter.switch of <gevent.hub.Waiter object at 0x7f3b37050910>>

PUT request through ISA Server proxy raises 502 error

I'm going to show you several test cases that I've done.

This is the issue: I can do POST and GET, but not PATCH, PUT or DELETE. I'm not sure what is happening with my proxy setup. I appreciate any help.

This is what I found so far:

import httplib2

httplib2.debuglevel = 2

http = httplib2.Http(proxy_info=httplib2.ProxyInfo(httplib2.socks.PROXY_TYPE_HTTP_NO_TUNNEL, proxy_host, proxy_port))
response, content = http.request("http://www.mocky.io/v2/5185415ba171ea3a00704eed", "PUT", body="{'name':'foo'}", headers={'content-type':'application/json'})

The logs are:

connect: (www.mocky.io, 80) ************
proxy: (proxy_address, proxy_port, True, None, None) ************
send: "PUT /v2/5185415ba171ea3a00704eed HTTP/1.1\r\nHost: www.mocky.io\r\nContent-Length: 14\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: Python-httplib2/0.9.2 (gzip)\r\n\r\n{'name':'foo'}"
reply: 'HTTP/1.1 502 Proxy Error ( The Uniform Resource Locator (URL) does not use a recognized protocol. Either the protocol is not supported or the request was not typed correctly. Confirm that a valid protocol is in use (for example, HTTP for a Web request).  )\r\n'
header: Via: 1.1 proxy_name
header: Connection: close
header: Proxy-Connection: close
header: Pragma: no-cache
header: Cache-Control: no-cache
header: Content-Type: text/html
header: Content-Length: 4220

But if I try using the httplib module directly, then:

import httplib

conn = httplib.HTTPConnection(proxy_host, proxy_port)
conn.request("PUT", "http://www.mocky.io/v2/5185415ba171ea3a00704eed/", body="{\"name\":\"foo\"}", headers={'Content-Type':'application/json'})
response = conn.getresponse()
print response.status, response.reason

200 OK

Certainly, there is a big difference between the two approaches. The httplib2 module uses its socks class or the socket module to set up the proxy connection with the connect function, and then issues a similar conn.request, but with only the path of the real URL rather than the full URL. Those are the fundamental differences I found, and all the know-how I could extract from the source code through hours of debugging my own code. I'm still stuck and can't figure out a workaround.

Note: I also tested with curl and other REST clients (e.g. Postman) and it works perfectly. My corporate proxy is an ISA Server running on Windows 2000; this is all I know.

httplib2.Http() generates cache keys that exceed memcached max length

I'm creating an Http object with a cache like so:

httplib2.Http(cache=cache)

This fails when my API calls have long querystrings, because the max length of a memcached key is 250 characters. As a proposed solution, maybe a hash of the querystring could be used as the cache key instead.
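In the meantime, this can be done from the outside with a small wrapper, since the cache interface is just get/set/delete (a sketch; SHA-256 keeps every key at 64 hex characters, comfortably under memcached's 250-byte limit):

import hashlib

class HashedKeyCache(object):
    """Sketch: wrap any httplib2-style cache and hash the keys so that
    long URLs with big querystrings never exceed memcached's key limit."""

    def __init__(self, cache):
        self.cache = cache

    def _key(self, key):
        return hashlib.sha256(key.encode("utf-8")).hexdigest()

    def get(self, key):
        return self.cache.get(self._key(key))

    def set(self, key, value):
        self.cache.set(self._key(key), value)

    def delete(self, key):
        self.cache.delete(self._key(key))

# Usage sketch: httplib2.Http(cache=HashedKeyCache(cache))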

Confusing response status types

In the Response class, we can observe this inconsistent behavior:

type(response)            # <class 'httplib2.Response'>
type(response.status)     # <class 'int'>
type(response['status'])  # <class 'str'>

I just spent half an hour debugging this before I finally looked into the httplib2 source here. This piece of code seems to have been there for 12 years; may I ask what the reason behind it is?
