
Pyodide-HTTP


Provides patches for widely used HTTP libraries to make them work in Pyodide environments such as JupyterLite.

Usage

# 1. Install this package
import micropip
await micropip.install('pyodide-http')

# 2. Patch requests
import pyodide_http
pyodide_http.patch_all()  # Patch all libraries

# 3. Use requests
import requests
response = requests.get('https://raw.githubusercontent.com/statsbomb/open-data/master/data/lineups/15946.json')

How does this work?

This package applies patches to common HTTP libraries. How each patch works depends on the package.

All non-streaming requests are replaced with calls using XMLHttpRequest.

Streaming requests (i.e. calls with stream=True in requests) are replaced by calls to fetch in a separate web worker, but only if the environment supports web-threading correctly: cross-origin isolation must be enabled, and Pyodide must itself be running in a web worker. Otherwise true streaming isn't possible until WebAssembly stack-switching becomes available, and the patch falls back to an implementation that fetches the entire response and then returns a stream wrapper over an in-memory buffer.
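That fallback can be sketched in a few lines of plain Python: everything is downloaded up front and wrapped in an in-memory buffer that still reads like a stream. Here fetch_all is a hypothetical stand-in for the synchronous XMLHttpRequest download, not the library's actual API:

```python
import io

def fetch_as_stream(fetch_all):
    """Sketch of the non-streaming fallback: download the whole body,
    then expose it through a file-like in-memory buffer."""
    body = fetch_all()  # in the real patch this is a sync XHR download
    return io.BytesIO(body)

# Stubbed-out fetch for illustration:
stream = fetch_as_stream(lambda: b"chunk1chunk2")
first = stream.read(6)   # callers can still read incrementally
rest = stream.read()
```

The whole response lives in memory, so this trades streaming's memory benefits for compatibility.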

Enabling Cross-Origin isolation

The implementation of streaming requests makes use of Atomics.wait and SharedArrayBuffer to do the fetch in a separate web worker. For complicated web-security reasons, SharedArrayBuffers cannot be passed to a web-worker unless you have cross-origin isolation enabled. You enable this by serving the page using the following two headers:

Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp

Be aware that this will have effects on what you are able to embed on the page - check out https://web.dev/cross-origin-isolation-guide/ for more details.
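For local testing, here is a minimal sketch of serving a directory with those two headers using Python's built-in http.server (the COIRequestHandler name is ours, not part of pyodide-http):

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

COI_HEADERS = {
    "Cross-Origin-Opener-Policy": "same-origin",
    "Cross-Origin-Embedder-Policy": "require-corp",
}

class COIRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Add the isolation headers to every response before it is sent.
        for name, value in COI_HEADERS.items():
            self.send_header(name, value)
        super().end_headers()

# To serve the current directory on port 8000 with COI enabled:
# ThreadingHTTPServer(("", 8000), COIRequestHandler).serve_forever()
```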

Supported packages

Currently the following packages can be patched:

Package    Patched
requests   Session, request, head, get, post, put, patch, delete
urllib     urlopen, OpenerDirector

Used by

pyodide-http is used by some awesome projects:

  • Pyodide - included as a standard package
  • Panel - included since 0.14.1 (can be disabled) when running Panel in the Browser using WASM. Read more

pyodide-http's People

Contributors

joemarshall, jonas-w, koenvo, nicornk, rth, xangma


pyodide-http's Issues

Project name searchability

I personally like the name, but it's rather impossible to search for. For instance, googling pyodide-http does not surface the GitHub repo page at all (same with Bing). It also doesn't help that there is a module called pyodide.http in Pyodide.

Just FYI; maybe the situation will improve as this project gets more popular.
Feel free to add it to https://pyodide.org/en/stable/project/related-projects.html as well.

BadStatusLine: HTTP/1.1 0 in urlopen

When executing urlopen (after patching) in a JupyterLite kernel, I get the following error.

---------------------------------------------------------------------------
BadStatusLine                             Traceback (most recent call last)
Cell In[5], line 1
----> 1 with urlopen(pkg_url) as package:
      2     with tarfile.open(fileobj=package, mode="r|gz") as package_tar:
      3         for member in package_tar:

File /lib/python3.11/site-packages/pyodide_http/_urllib.py:53, in urlopen(url, *args, **kwargs)
     41 response_data = (
     42     b"HTTP/1.1 "
     43     + str(resp.status_code).encode("ascii")
   (...)
     49     + resp.body
     50 )
     52 response = HTTPResponse(FakeSock(response_data))
---> 53 response.begin()
     54 return response

File /lib/python311.zip/http/client.py:318, in HTTPResponse.begin(self)
    316 # read until we get a non-100 response
    317 while True:
--> 318     version, status, reason = self._read_status()
    319     if status != CONTINUE:
    320         break

File /lib/python311.zip/http/client.py:306, in HTTPResponse._read_status(self)
    304     status = int(status)
    305     if status < 100 or status > 999:
--> 306         raise BadStatusLine(line)
    307 except ValueError:
    308     raise BadStatusLine(line)

BadStatusLine: HTTP/1.1 0

Add tests

Nice work on this!

I think API-wise, having an explicit patch method is indeed a reasonable compromise to the issue of how to make requests available (related: pyodide/micropip#9). Maybe slightly more flexible:

pyodide_http.patch('requests')  # also accepts a list as input

so that it's easier to extend to other packages, without adding too many of those methods.

In the end, does this work for you for binary files (in the main thread)? I'm no longer sure what the situation is there pyodide/pyodide#3062

It would be nice to add some CI and tests e.g. using pytest-pyodide.

httpx support

Leaving this issue here to think about httpx support. I hope to come back to this one day and take a crack at it. I think it should be relatively simple since httpx is written in a sans-io manner which theoretically means a fetch version would be relatively simple to accomplish.

Refused to set unsafe header "User-Agent"

Hi! First, your work saved part of a project I was trying to port to wasm, thanks for having done that!

I just have a small inconvenience: console errors like the one in the title, and the same for 'Accept-Encoding', 'Connection' and 'Content-Length'.
It does not prevent my program from running; I just don't like red messages in the console ;)
Any way to get rid of them?

Thanks again for the work!
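These warnings come from the browser refusing to let scripts set certain "forbidden" request headers. One possible workaround sketch is filtering them out before they ever reach setRequestHeader. FORBIDDEN_HEADERS below is a hypothetical subset of the Fetch spec's forbidden header names (the real list is longer); pyodide-http's HEADERS_TO_IGNORE appears to play a similar role internally:

```python
# Sketch: drop browser-forbidden request headers before handing them to
# XMLHttpRequest.setRequestHeader, so the console warnings never appear.
FORBIDDEN_HEADERS = {
    "user-agent",
    "accept-encoding",
    "connection",
    "content-length",
    "host",
}

def safe_headers(headers):
    """Return only the headers a browser will actually let us set."""
    return {
        name: value
        for name, value in headers.items()
        if name.lower() not in FORBIDDEN_HEADERS
    }

filtered = safe_headers({"User-Agent": "x", "Accept": "application/json"})
# keeps only the Accept header
```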

Error with some module in pandas_datareader

This nice package has solved most of my use cases with JupyterLite, except for the following problem.

Working env

My working environment is the same as following link:

https://jupyterlite.github.io/demo/lab/index.html

Reproducible code

%pip install -q pyodide-http pandas-datareader requests

import pyodide_http
pyodide_http.patch_all()  # Patch all libraries

import pandas_datareader as pdr
pdr.get_data_fred('GS10')

Error messages

---------------------------------------------------------------------------
JsException                               Traceback (most recent call last)
Cell In[3], line 2
      1 import pandas_datareader as pdr
----> 2 pdr.get_data_fred('GS10')

File /lib/python3.10/site-packages/pandas_datareader/data.py:72, in get_data_fred(*args, **kwargs)
     71 def get_data_fred(*args, **kwargs):
---> 72     return FredReader(*args, **kwargs).read()

File /lib/python3.10/site-packages/pandas_datareader/fred.py:27, in FredReader.read(self)
     18 """Read data
     19 
     20 Returns
   (...)
     24     DataFrame is the outer join of the indicies of each series.
     25 """
     26 try:
---> 27     return self._read()
     28 finally:
     29     self.close()

File /lib/python3.10/site-packages/pandas_datareader/fred.py:62, in FredReader._read(self)
     55             raise IOError(
     56                 "Failed to get the data. Check that "
     57                 "{0!r} is a valid FRED series.".format(name)
     58             )
     59         raise
     61 df = concat(
---> 62     [fetch_data(url, n) for url, n in zip(urls, names)], axis=1, join="outer"
     63 )
     64 return df

File /lib/python3.10/site-packages/pandas_datareader/fred.py:62, in <listcomp>(.0)
     55             raise IOError(
     56                 "Failed to get the data. Check that "
     57                 "{0!r} is a valid FRED series.".format(name)
     58             )
     59         raise
     61 df = concat(
---> 62     [fetch_data(url, n) for url, n in zip(urls, names)], axis=1, join="outer"
     63 )
     64 return df

File /lib/python3.10/site-packages/pandas_datareader/fred.py:41, in FredReader._read.<locals>.fetch_data(url, name)
     39 def fetch_data(url, name):
     40     """Utillity to fetch data"""
---> 41     resp = self._read_url_as_StringIO(url)
     42     data = read_csv(
     43         resp,
     44         index_col=0,
   (...)
     49         na_values=".",
     50     )
     51     try:

File /lib/python3.10/site-packages/pandas_datareader/base.py:119, in _BaseReader._read_url_as_StringIO(self, url, params)
    115 def _read_url_as_StringIO(self, url, params=None):
    116     """
    117     Open url (and retry)
    118     """
--> 119     response = self._get_response(url, params=params)
    120     text = self._sanitize_response(response)
    121     out = StringIO()

File /lib/python3.10/site-packages/pandas_datareader/base.py:155, in _BaseReader._get_response(self, url, params, headers)
    153 last_response_text = ""
    154 for _ in range(self.retry_count + 1):
--> 155     response = self.session.get(
    156         url, params=params, headers=headers, timeout=self.timeout
    157     )
    158     if response.status_code == requests.codes.ok:
    159         return response

File /lib/python3.10/site-packages/requests/sessions.py:600, in Session.get(self, url, **kwargs)
    592 r"""Sends a GET request. Returns :class:`Response` object.
    593 
    594 :param url: URL for the new :class:`Request` object.
    595 :param \*\*kwargs: Optional arguments that ``request`` takes.
    596 :rtype: requests.Response
    597 """
    599 kwargs.setdefault("allow_redirects", True)
--> 600 return self.request("GET", url, **kwargs)

File /lib/python3.10/site-packages/requests/sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    582 send_kwargs = {
    583     "timeout": timeout,
    584     "allow_redirects": allow_redirects,
    585 }
    586 send_kwargs.update(settings)
--> 587 resp = self.send(prep, **send_kwargs)
    589 return resp

File /lib/python3.10/site-packages/requests/sessions.py:701, in Session.send(self, request, **kwargs)
    698 start = preferred_clock()
    700 # Send the request
--> 701 r = adapter.send(request, **kwargs)
    703 # Total elapsed time of the request (approximately)
    704 elapsed = preferred_clock() - start

File /lib/python3.10/site-packages/pyodide_http/_requests.py:42, in PyodideHTTPAdapter.send(self, request, **kwargs)
     40     pyodide_request.set_body(request.body)
     41 try:
---> 42     resp = send(pyodide_request, stream)
     43 except _StreamingTimeout:
     44     from requests import ConnectTimeout

File /lib/python3.10/site-packages/pyodide_http/_core.py:113, in send(request, stream)
    110 for name, value in request.headers.items():
    111     xhr.setRequestHeader(name, value)
--> 113 xhr.send(to_js(request.body))
    115 headers = dict(Parser().parsestr(xhr.getAllResponseHeaders()))
    117 if _IN_WORKER:

JsException: NetworkError: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'https://fred.stlouisfed.org/graph/fredgraph.csv?id=GS10'.

Working case

In the same package, pandas-datareader, following code works well.

from pandas_datareader import wb
matches = wb.search('gdp.*capita.*const')

Use Atomics.wait when possible

Atomics.wait seems to be a great way to make a function blocking without actually blocking the thread: it is a blocking call that waits until values in a SharedArrayBuffer are changed from the outside.

Atomics.wait requires a SharedArrayBuffer, which isn't available by default for security reasons.

pyodide-http could switch to such an implementation when crossOriginIsolated is true. It would also be great to add some documentation on how to enable cross-origin isolation. The implementation could use Atomics.wait with a timeout as a replacement for time.sleep, or (if possible) wait until the response from XMLHttpRequest is available.

More details:

If request.body is a file-like object, send fails

When using the Python requests library, if one sets the body (data) to a file-like object such as io.BytesIO or io.StringIO, pyodide-http sends something like <_io.BytesIO object at 0x1550d50> instead of the actual contents of the file-like object.

Below is a very hacky fix for this problem:

diff --git a/pyodide_http/_core.py b/pyodide_http/_core.py
index 1cb711d..e8de6b9 100644
--- a/pyodide_http/_core.py
+++ b/pyodide_http/_core.py
@@ -118,7 +118,13 @@ def send(request: Request, stream: bool = False) -> Response:
         if name.lower() not in HEADERS_TO_IGNORE:
             xhr.setRequestHeader(name, value)

-    xhr.send(to_js(request.body))
+    body = request.body
+
+    if hasattr(body, 'read'):
+        body = body.read()
+
+    xhr.send(to_js(body))
+    #xhr.send(to_js(request.body))

     headers = dict(Parser().parsestr(xhr.getAllResponseHeaders()))

I suppose that perhaps a stream method could be used with a file-like object? Any ideas on how to best solve this problem?
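A slightly more general sketch of the same idea (normalize_body is a hypothetical helper, not the library's API): drain anything file-like into memory and pass everything else through unchanged.

```python
import io

def normalize_body(body):
    """Return a value suitable for XMLHttpRequest.send."""
    if body is None:
        return None
    if hasattr(body, "read"):
        # io.BytesIO, io.StringIO, open files: read the whole content.
        return body.read()
    return body

buffered = normalize_body(io.BytesIO(b"payload"))
passthrough = normalize_body(b"raw bytes")
```

This reads the whole file into memory, so for large uploads a streaming approach would still be preferable.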

Response object should contain request object


File /lib/python3.10/site-packages/requests_oauthlib/oauth2_session.py:354, in OAuth2Session.fetch_token(self, token_url, code, authorization_response, body, auth, username, password, method, force_querystring, timeout, headers, verify, proxies, include_client_id, client_secret, cert, **kwargs)
    341 r = self.request(
    342     method=method,
    343     url=token_url,
   (...)
    350     **request_kwargs
    351 )
    353 log.debug("Request to fetch token completed with status %s.", r.status_code)
--> 354 log.debug("Request url was %s", r.request.url)
    355 log.debug("Request headers were %s", r.request.headers)
    356 log.debug("Request body was %s", r.request.body)

The response object of requests should contain the request object: response.request.

Integrate functionality into urllib3

👋 Hey, thanks for creating this project! When PyScript was announced at PyCon US 2022 I immediately tried to add support for synchronous HTTP requests w/ PyScript into urllib3 but was unfortunately unsuccessful at the time. I can see that support for this has grown since that time 🚀

I wanted to ask what you were thinking long-term for this project, if you'd like to land support for PyScript into urllib3 natively (and thus, many packages depending on urllib3/Requests would immediately be able to take advantage) versus the current patching approach. Let me know your thoughts on this.

Again, thanks much!

Post request fail with InvalidStateError: Failed to execute 'setRequestHeader' on 'XMLHttpRequest': The object's state must be OPENED.

---------------------------------------------------------------------------
JsException                               Traceback (most recent call last)
Cell In [7], line 2
      1 import requests
----> 2 response = requests.post('https://raw.githubusercontent.com/statsbomb/open-data/master/data/lineups/15946.json')

File /lib/python3.10/site-packages/requests/api.py:115, in post(url, data, json, **kwargs)
    103 def post(url, data=None, json=None, **kwargs):
    104     r"""Sends a POST request.
    105 
    106     :param url: URL for the new :class:`Request` object.
   (...)
    112     :rtype: requests.Response
    113     """
--> 115     return request("post", url, data=data, json=json, **kwargs)

File /lib/python3.10/site-packages/requests/api.py:59, in request(method, url, **kwargs)
     55 # By using the 'with' statement we are sure the session is closed, thus we
     56 # avoid leaving sockets open which can trigger a ResourceWarning in some
     57 # cases, and look like a memory leak in others.
     58 with sessions.Session() as session:
---> 59     return session.request(method=method, url=url, **kwargs)

File /lib/python3.10/site-packages/pyodide_http/_requests.py:24, in Session.request(method, url, **kwargs)
     22 if 'json' in kwargs:
     23     request.set_json(kwargs['json'])
---> 24 resp = send(request)
     26 response = requests.Response()
     27 # Fallback to None if there's no status_code, for whatever reason.

File /lib/python3.10/site-packages/pyodide_http/_core.py:42, in send(request)
     40 xhr = XMLHttpRequest.new()
     41 for name, value in request.headers.items():
---> 42     xhr.setRequestHeader(name, value)
     44 if _IN_WORKER:
     45     xhr.responseType = "arraybuffer"

JsException: InvalidStateError: Failed to execute 'setRequestHeader' on 'XMLHttpRequest': The object's state must be OPENED.

My quick google search found that xhr.open(request.method, request.url, False) needs to be called before xhr.setRequestHeader(name, value).

I will try to setup a dev environment (are there any instructions?) to create a PR.

pyodide_http patch_requests breaks COI expectations

This was erroneously opened here: pyodide/pyodide#4191

🐛 Bug

While testing/demoing one of our apps in PSDC, we noticed that while Chrome/ium managed to load a third-party spreadsheet, both Firefox and Safari were completely broken at the headers/permissions level.

We use code from a worker which requires SharedArrayBuffer, and while we managed to enable it, all requests were blocked by the browsers.

To Reproduce

import requests
from typing import Union, Optional

from xlrd import Book
from xlrd.sheet import Sheet

# Sync Calls
from pyodide_http import patch_requests

def extract():
    """ do stuff """

def sync_load(data_url: str, sheet_name: str = None) -> Optional[Union[Book, Sheet]]:
    """"""
    patch_requests()  # patch requests and 

    r = requests.get(data_url)
    if r.status_code != 200:  # Not OK
        return None
    return extract(r.content, sheet_name=sheet_name)

The error in Safari is about headers messed up

[Error] Refused to set unsafe header "Accept-Encoding"
[Error] Refused to set unsafe header "Connection"
[Error] Preflight response is not successful. Status code: 403
[Error] Failed to load resource: Preflight response is not successful. Status code: 403 (sample_workbook.xls, line 0)
[Error] XMLHttpRequest cannot load https://raw.githubusercontent.com/XXX/sample_workbook.xls due to access control checks.
[Error] Failed to load resource: Preflight response is not successful. Status code: 403 (sample_workbook.xls, line 0)

ending up in Pyodide as "A network error occurred."

Expected behavior

If we change the code to use XHR out of the box everything works without issues and no network warning is ever shown:

def sync_load(data_url: str, sheet_name: str = None) -> Optional[Union[Book, Sheet]]:
    """"""
    xhr = js.XMLHttpRequest.new()
    xhr.open("GET", data_url, False)
    xhr.responseType = "arraybuffer"
    xhr.send(None)
    content = bytes(xhr.response.to_py())
    return extract(content, sheet_name=sheet_name)

I suspect the error is somewhere in here: https://github.com/koenvo/pyodide-http/blob/main/pyodide_http/_core.py#L75

There is a lot of header manipulation, but in some cases browsers really don't like user-land code messing with security-related, server-defined headers; the mime-type override, for example, can be considered insecure, as can anything else that wouldn't already be part of the predefined headers.

I therefore suggest allowing something like patch_requests(ignore_headers=True) so that nothing is changed, but I am also not sure why a non-worker env should change anything about mime-type expectations ... although I think that in our case that value is True.

Environment

  • Browser version: breaks in Safari latest and Firefox latest

AttributeError: module 'urllib.request' has no attribute 'HTTPSHandler' when using astropy

Hello and thanks for this library!

I was unsure about where to post this issue, but I'm wondering why pyodide-http does not work with astropy.

Here is a minimal non-working example:

# do pyodide http magics like in the readme here
from astropy.coordinates import SkyCoord
SkyCoord.from_name("Crab Nebula")

In JupyterLite, the output is like this:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[4], line 1
----> 1 SkyCoord.from_name("M1")

File /lib/python3.10/site-packages/astropy/coordinates/sky_coordinate.py:2218, in SkyCoord.from_name(cls, name, frame, parse, cache)
   2183 """
   2184 Given a name, query the CDS name resolver to attempt to retrieve
   2185 coordinate information for that object. The search database, sesame
   (...)
   2213     Instance of the SkyCoord class.
   2214 """
   2216 from .name_resolve import get_icrs_coordinates
-> 2218 icrs_coord = get_icrs_coordinates(name, parse, cache=cache)
   2219 icrs_sky_coord = cls(icrs_coord)
   2220 if frame in ("icrs", icrs_coord.__class__):

File /lib/python3.10/site-packages/astropy/coordinates/name_resolve.py:170, in get_icrs_coordinates(name, parse, cache)
    167 for url in urls:
    168     try:
    169         resp_data = get_file_contents(
--> 170             download_file(url, cache=cache, show_progress=False)
    171         )
    172         break
    173     except urllib.error.URLError as e:

File /lib/python3.10/site-packages/astropy/utils/data.py:1509, in download_file(remote_url, cache, show_progress, timeout, sources, pkgname, http_headers, ssl_context, allow_insecure)
   1507 for source_url in sources:
   1508     try:
-> 1509         f_name = _download_file_from_source(
   1510             source_url,
   1511             timeout=timeout,
   1512             show_progress=show_progress,
   1513             cache=cache,
   1514             remote_url=remote_url,
   1515             pkgname=pkgname,
   1516             http_headers=http_headers,
   1517             ssl_context=ssl_context,
   1518             allow_insecure=allow_insecure,
   1519         )
   1520         # Success!
   1521         break

File /lib/python3.10/site-packages/astropy/utils/data.py:1293, in _download_file_from_source(source_url, show_progress, timeout, remote_url, cache, pkgname, http_headers, ftp_tls, ssl_context, allow_insecure)
   1290         else:
   1291             raise
-> 1293 with _try_url_open(
   1294     source_url,
   1295     timeout=timeout,
   1296     http_headers=http_headers,
   1297     ftp_tls=ftp_tls,
   1298     ssl_context=ssl_context,
   1299     allow_insecure=allow_insecure,
   1300 ) as remote:
   1301     info = remote.info()
   1302     try:

File /lib/python3.10/site-packages/astropy/utils/data.py:1205, in _try_url_open(source_url, timeout, http_headers, ftp_tls, ssl_context, allow_insecure)
   1201 # Always try first with a secure connection
   1202 # _build_urlopener uses lru_cache, so the ssl_context argument must be
   1203 # converted to a hashshable type (a set of 2-tuples)
   1204 ssl_context = frozenset(ssl_context.items() if ssl_context else [])
-> 1205 urlopener = _build_urlopener(
   1206     ftp_tls=ftp_tls, ssl_context=ssl_context, allow_insecure=False
   1207 )
   1208 req = urllib.request.Request(source_url, headers=http_headers)
   1210 try:

File /lib/python3.10/site-packages/astropy/utils/data.py:1179, in _build_urlopener(ftp_tls, ssl_context, allow_insecure)
   1176 if cert_chain:
   1177     ssl_context.load_cert_chain(**cert_chain)
-> 1179 https_handler = urllib.request.HTTPSHandler(context=ssl_context)
   1181 if ftp_tls:
   1182     urlopener = urllib.request.build_opener(_FTPTLSHandler(), https_handler)

AttributeError: module 'urllib.request' has no attribute 'HTTPSHandler'

You can have a look at it there in the notebook 04-sesame.ipynb :

https://cds-astro.github.io/jupyterlite/lab/index.html

From there, what I understand is that maybe urllib needs more patching in order to work with astropy? Or is it more of an issue that I should post on their side?

Thanks again!

(PS: the example uses a really cool function that outputs the coordinates of any objects for any of their registered names or designations :) )

Auto-rewrite URLs to their CORS counterparts

Some commonly used URLs, like 'github.com/…?raw=true', don't have CORS headers set, which makes the request fail.

Luckily those URLs can be rewritten to URLs that do have CORS headers enabled: 'raw.githubusercontent.com' has those headers enabled.

Three things need to be implemented:

  1. A generic way of rewriting URLs
  2. A user-friendly API to enable this
  3. A rewriter for GitHub (regex can be found here: simonw/datasette-lite@e7ccaf6 )

More context: https://mobile.twitter.com/simonw/status/1569835954365669376
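Item 3 could be sketched like this; the regex is an assumption modeled on the datasette-lite approach, not a final rule set:

```python
import re

# Match github.com "blob" file URLs, with or without a ?raw=true suffix.
_GITHUB_RAW = re.compile(
    r"^https://github\.com/([^/]+)/([^/]+)/blob/(.+?)(\?raw=true)?$"
)

def rewrite_url(url):
    """Rewrite a github.com file URL to its CORS-enabled raw counterpart;
    leave every other URL untouched."""
    match = _GITHUB_RAW.match(url)
    if match:
        user, repo, path = match.group(1), match.group(2), match.group(3)
        return f"https://raw.githubusercontent.com/{user}/{repo}/{path}"
    return url
```

A user-friendly API (item 2) could then be a list of such rewriters that the patched send() runs every URL through before opening the request.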

Show a meaningful error on CORS error

When a request works in a regular Python environment but not in the browser, the problem is often a lack of CORS headers/support, or the request being blocked by a policy.

Make sure the end-user will get a meaningful error message which makes it easier to understand what's going wrong.

  1. On request error try to determine if it's related to CORS (SO says it's not possible - https://stackoverflow.com/questions/19325314/how-to-detect-cross-origin-cors-error-vs-other-types-of-errors-for-xmlhttpreq)
  2. Raise an exception with a pointer to a CORS tester like https://cors-test.codehappy.dev/
  3. Try to give pointers on how to fix this, and more details about CORS - https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

Related issue:

  • CORS headers not set: #25
  • CORS blocked by policy: #22
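Item 2 might look something like this sketch; CORSHintError and the message text are hypothetical, not library API:

```python
# Sketch: wrap an opaque browser network error in an exception that points
# the user at CORS as the likely culprit, with links to a tester and docs.
class CORSHintError(OSError):
    pass

def raise_with_cors_hint(url, original_error):
    raise CORSHintError(
        f"Request to {url} failed in the browser ({original_error}). "
        "If this URL works in regular Python, the server probably does not "
        "send CORS headers. Test it at https://cors-test.codehappy.dev/ and "
        "see https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS"
    ) from original_error
```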

Node support

Please document whether Node is supported and if so, please provide an example. This example failed for me:

const { loadPyodide } = require("pyodide");

async function hello_python() {
  let pyodide = await loadPyodide();
  await pyodide.loadPackage("micropip");
  const micropip = pyodide.pyimport("micropip");
  await micropip.install("requests");
  return pyodide.runPythonAsync(`
  # 1. Install this package
  import micropip
  await micropip.install('pyodide-http')
  
  # 2. Patch requests
  import pyodide_http
  pyodide_http.patch_all()  # Patch all libraries
  
  # 3. Use requests
  import requests
  response = requests.get('http://www.google.com/')
  `);
}

hello_python().then((result) => {
  console.log("Python says that the lineup is", result);
});

Node.js v18.12.1

Error:

: Failed to establish a new connection: [Errno 50] Protocol not available'))

Or with https:

Can't connect to HTTPS URL because the SSL module is not available.

Versions:

        "fs": "^0.0.1-security",
        "node-fetch": "^3.3.0",
        "pyodide": "^0.21.3"

requests not working in Firefox

I've been trying to get the basic example to work in JupyterLite, just changing micropip to instead use the %pip magic:

%pip install -q pyodide-http requests

import pyodide_http
pyodide_http.patch_all()  # Patch all libraries

import requests
response = requests.get('http://raw.githubusercontent.com/statsbomb/open-data/master/data/lineups/15946.json')

The cells run without errors, but the response object has a status code of 0, and response.content is empty. Am I missing something?

The result is the same running locally using jupyter lite serve, and on GitHub Pages.

XMLHttpRequest.responseType async

According to the docs, setting responseType on the XMLHttpRequest object sets the requirement that the call must be async.

This results in the following exception in PyScript, raised from:

xhr.open(request.method, request.url, False)

JsException: InvalidAccessError: Failed to execute 'open' on 'XMLHttpRequest': Synchronous requests from a document must not set a response type.

feature idea: aiohttp patch

I think aiohttp (client only) would be quite easy to patch because it is already async (and pyodide has an event loop), so it could just defer nicely to fetch.

I have a dependency on it according to my lock files, but right now I think it isn't used, so I don't urgently need to patch, but I might get round to it at some point.

SSLError("Can't connect to HTTPS URL because the SSL module is not available.")

Using the pyscript example in the tests folder.

Serve the files using python -m http.server --directory dist 8081

Visit page using Firefox results in the following error in browser console:

APPENDING: True ==> py-8457af36-6e5c-d18f-55ca-9cc14c4d72d4 --> PythonError: Traceback (most recent call last):
  File "/lib/python3.10/site-packages/urllib3/connectionpool.py", line 692, in urlopen
    conn = self._get_conn(timeout=pool_timeout)
  File "/lib/python3.10/site-packages/urllib3/connectionpool.py", line 281, in _get_conn
    return conn or self._new_conn()
  File "/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1011, in _new_conn
    raise SSLError(
urllib3.exceptions.SSLError: Can't connect to HTTPS URL because the SSL module is not available.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /pyodide/pyodide/main/docs/_static/img/pyodide-logo-readme.png (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception
  File "/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)
  File "<exec>", line 12, in init
  File "/lib/python3.10/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/lib/python3.10/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/lib/python3.10/site-packages/requests/adapters.py", line 563, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /pyodide/pyodide/main/docs/_static/img/pyodide-logo-readme.png (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))

Any ideas?

Thanks

Darren
