
pynsee's Introduction

pynsee gives quick access to more than 150,000 macroeconomic series, a dozen local-data datasets, numerous sources available on insee.fr, the geographical limits of administrative areas from IGN, as well as key metadata and the SIRENE database covering all French companies. Have a look at the detailed API page: api.insee.fr.

This package is a contribution to reproducible research and public data transparency. It benefits from the developments made by teams working on APIs at INSEE and IGN.

Installation & API subscription

The files available on insee.fr and the IGN data, i.e. the download and geodata modules, do not require authentication. Credentials are necessary to access some of the INSEE APIs used by the macrodata, localdata, metadata and sirene modules. API credentials can be created at api.insee.fr.

# Download the PyPI package
pip install pynsee[full]

# Get the development version from GitHub
# git clone https://github.com/InseeFrLab/pynsee.git
# cd pynsee
# pip install .[full]

# Subscribe to api.insee.fr and get your credentials!
# Save your credentials with the init_conn function:
from pynsee.utils.init_conn import init_conn
init_conn(insee_key="my_insee_key", insee_secret="my_insee_secret")

# Beware: any change to the keys should be tested after clearing the cache
# Please do: from pynsee.utils import clear_all_cache; clear_all_cache()

Data Search and Collection Advice

  • Macroeconomic data: first use get_dataset_list to find the datasets of interest, then get the series list with get_series_list. Alternatively, make a keyword-based search with search_macrodata, e.g. search_macrodata('GDP'). Finally, get the data with get_dataset or get_series
  • Local data: first use get_local_metadata, then get the data with get_local_data
  • Metadata: e.g. get_activity_list returns the classification of economic activities (NAF/NACE Rev. 2)
  • Sirene (French companies database): first use get_dimension_list, then use search_sirene with dimensions as filtering variables
  • Geodata: get the list of available geographical data with get_geodata_list, then retrieve it with get_geodata
  • Files on insee.fr: get the list of available files with get_file_list, then download them with download_file

For further advice, have a look at the documentation and the gallery of examples.
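To illustrate the keyword search step, here is a rough, self-contained stand-in for the matching logic. The helper, the sample rows and the IDBANK values are made up for illustration; the real search_macrodata queries the INSEE catalogue and may filter differently.

```python
def search_titles(series, keyword):
    """Case-insensitive keyword match over series titles.

    Hypothetical stand-in for the kind of filtering search_macrodata
    performs; the real pynsee implementation may differ.
    """
    kw = keyword.lower()
    return [s for s in series if kw in s["TITLE_EN"].lower()]

# Made-up sample rows, shaped like the search_macrodata output columns.
catalogue = [
    {"IDBANK": "000000001", "TITLE_EN": "Gross domestic product - Total"},
    {"IDBANK": "000000002", "TITLE_EN": "Unemployment rate"},
]

hits = search_titles(catalogue, "gross domestic")
```

Note that the match is case-insensitive, so `search_titles(catalogue, "GDP")` and `search_titles(catalogue, "gdp")` behave identically.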

Example - Population Map

from pynsee.geodata import get_geodata_list, get_geodata, GeoFrDataFrame

import math
import geopandas as gpd
import pandas as pd
from pandas.api.types import CategoricalDtype
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import descartes

import warnings
from shapely.errors import ShapelyDeprecationWarning
warnings.filterwarnings("ignore", category=ShapelyDeprecationWarning)

# get the list of available geographical data
geodata_list = get_geodata_list()
# get the geographical limits of communes
com = get_geodata('ADMINEXPRESS-COG-CARTO.LATEST:commune')

mapcom = gpd.GeoDataFrame(com).set_crs("EPSG:3857")

# area calculations depend on the CRS, which fits metropolitan France but not the overseas departements
# figures should not be considered official statistics
mapcom = mapcom.to_crs(epsg=3035)
mapcom["area"] = mapcom['geometry'].area / 10**6
mapcom = mapcom.to_crs(epsg=3857)

mapcom['REF_AREA'] = 'D' + mapcom['insee_dep']
mapcom['density'] = mapcom['population'] / mapcom['area']

mapcom = GeoFrDataFrame(mapcom)
# move overseas departements closer to the mainland and rescale them
mapcom = mapcom.translate(departement=['971', '972', '974', '973', '976'],
                          factor=[1.5, 1.5, 1.5, 0.35, 1.5])

# zoom in on the Paris region
mapcom = mapcom.zoom(departement=["75", "92", "93", "91", "77", "78", "95", "94"],
                     factor=1.5, startAngle=math.pi * (1 - 3 * 1/9))

mapplot = gpd.GeoDataFrame(mapcom)
mapplot.loc[mapplot.density < 40, 'range'] = "< 40"
mapplot.loc[mapplot.density >= 20000, 'range'] = "> 20 000"

density_ranges = [40, 80, 100, 120, 150, 200, 250, 400, 600, 1000, 2000, 5000, 10000, 20000]
list_ranges = []
list_ranges.append( "< 40")

for i in range(len(density_ranges)-1):
    min_range = density_ranges[i]
    max_range = density_ranges[i+1]
    range_string = "[{}, {}[".format(min_range, max_range)
    mapplot.loc[(mapplot.density >= min_range) & (mapplot.density < max_range), 'range'] = range_string
    list_ranges.append(range_string)

list_ranges.append("> 20 000")

mapplot['range'] = mapplot['range'].astype(CategoricalDtype(categories=list_ranges, ordered=True))

fig, ax = plt.subplots(1, 1, figsize=[15, 15])
mapplot.plot(column='range', cmap=cm.viridis,
             legend=True, ax=ax,
             legend_kwds={'bbox_to_anchor': (1.1, 0.8),
                          'title': 'density per km2'})
ax.set_axis_off()
ax.set(title='Distribution of population in France')
plt.show()

fig.savefig('pop_france.svg',
            format='svg', dpi=1200,
            bbox_inches='tight',
            pad_inches=0)

How to avoid proxy issues?

# Use the http_proxy and https_proxy arguments of the init_conn function
# to set the proxy server address
from pynsee.utils.init_conn import init_conn
init_conn(insee_key="my_insee_key",
          insee_secret="my_insee_secret",
          http_proxy="http://my_proxy_server:port",
          https_proxy="http://my_proxy_server:port")

# Beware: any change to the keys should be tested after clearing the cache
# Please do: from pynsee.utils import clear_all_cache; clear_all_cache()

# Alternatively, you can directly use environment variables as follows.
# Beware not to commit your credentials!
import os
os.environ['insee_key'] = 'my_insee_key'
os.environ['insee_secret'] = 'my_insee_secret'
os.environ['http_proxy'] = "http://my_proxy_server:port"
os.environ['https_proxy'] = "http://my_proxy_server:port"

Support

Feel free to open an issue with any question about this package using the Github repository.

Contributing

All contributions, whatever their forms, are welcome. See CONTRIBUTING.md

pynsee's People

Contributors

elishowk, hadrilec, hadrilec2, linogaliana, raphaeleadjerad, tfardet, tgrandje


pynsee's Issues

Bug installation from github using sspcloud

Cannot install from GitHub directly using sspcloud; problem installing PyYAML:
"ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall."

Error using example 2 map

Error when trying to use the example 2 chunk of code.
To reproduce: on sspcloud, clone the project, then run the chunk of code reproducing the map. It produces this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-4-84396579dd28> in <module>
     22 
     23 # get geographic area list
---> 24 area = get_area_list()
     25 
     26 # get all communes in Paris urban area

~/work/pynsee/pynsee/localdata/get_area_list.py in get_area_list(area)
     62             api_url=api_url, file_format='application/json')
     63 
---> 64         data = request.json()
     65 
     66         for i in range(len(data)):

AttributeError: 'NoneType' object has no attribute 'json'

Example notebooks as artifacts rather than objects within the repository

I think it's a good idea to give access to some notebooks illustrating how to use pynsee. However, version control and notebooks do not mix well together.

I think a better approach than committing the files in the doc directory would be:

  • write examples as .md files
  • use jupytext to transform that into .ipynb files in Github Actions
  • give access to the .ipynb files as artifacts

I face a similar challenge for my Python website and might soon have a general solution in hand. I could try this approach, which would be, in the long run, more flexible and more reliable (notebooks mix code and output and might lead to committing tokens, etc.)

Install from git using poetry

I'm having trouble installing the package from git using poetry.

I get the following traceback :

Because pynsee (0.1.2) @ git+https://github.com/InseeFrLab/pynsee@master depends on pathlib (*) which doesn't match any versions, pynsee is forbidden.
So, because ssp-scraping depends on pynsee (0.1.2) @ git+https://github.com/InseeFrLab/pynsee@master, version solving failed.

This seems to be related to the "requirements.txt" specifying built-in packages (pathlib and urllib3).

As I usually rely on poetry to build packages, I'm not 100% sure: is this syntax standard practice? (I've seen some posts here and there about this being bad practice, but those were mostly opinion pieces, not reputable sources.)

(I'm OK with doing the PR if confirmed, I just don't want to break anything...)
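If that diagnosis is right, the fix would be to drop the standard-library backport from the requirements; a hypothetical excerpt (the package list is illustrative, not the actual file):

```
# requirements.txt (hypothetical excerpt)
pandas
requests
# pathlib   <- remove: part of the standard library since Python 3.4;
#              the old PyPI backport of the same name confuses resolvers like poetry
# urllib3   <- remove if it is only used indirectly through requests
```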

Error with get_location on Sirene

I'm just following this simple example provided in this repo. My first request is:

test = search_sirene(variable = ["activitePrincipaleEtablissement", "codeCommuneEtablissement"],
                         pattern = ["13.20Z", "68*"], kind = 'siret', number=1000)

But when I execute this to get the associated locations:

test = test.get_location()

Got this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_5040/2220752205.py in <module>
----> 1 test = test.get_location()

~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\core\generic.py in __getattr__(self, name)
   5485         ):
   5486             return self[name]
-> 5487         return object.__getattribute__(self, name)
   5488 
   5489     def __setattr__(self, name: str, value) -> None:

AttributeError: 'DataFrame' object has no attribute 'get_location'

I noticed that the get_location method might have been renamed _get_location_openstreetmap, but that didn't solve the issue.

What can I do?

shapely deprecation warning

Warning from the _geojson_parser function:
ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry. value = np.array(value, dtype=object)

Warnings have been disabled on one line in _geojson_parser, and on multiple lines in translate and zoom.
The requirements and setup files have been modified so that a version lower than shapely 2.0 is installed, to avoid any bug.
In due time, _geojson_parser should be modified to work with the future 2.0 version of shapely.
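Until then, a forward-compatible way to iterate over multi-part geometries is a small duck-typed helper that prefers the `geoms` property when it exists; a sketch that does not depend on any particular shapely version (FakeMulti is a stand-in for a shapely multi-part type, not a real class):

```python
def iter_parts(geom):
    """Yield the parts of a (possibly multi-part) geometry.

    Uses the `geoms` property when available (the Shapely 2.0 way) and
    falls back to treating the object as a single geometry otherwise.
    """
    parts = getattr(geom, "geoms", None)
    if parts is not None:
        yield from parts
    else:
        yield geom


# Tiny stand-in for a shapely multi-part object, for illustration only.
class FakeMulti:
    geoms = ("part1", "part2")


multi_parts = list(iter_parts(FakeMulti()))
single_parts = list(iter_parts("single"))
```

With this pattern, _geojson_parser would not need version-specific branches: the same loop works before and after the `geoms` property became mandatory.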

fix tests for better ci (quality)

pynsee's pipelines have been failing since August. Best practice is to keep them green in order to detect bugs and regressions quickly.

It seems there are two errors, one minor and one bigger:

minor: a small indentation typo in test_pynsee_sirene.py
complex: the test downloading the census data fails, because pandas in the GitHub runner cannot load the 4 GB CSV file

get_new_city - error message not relevant for an up-to-date city

When you try the function with an up-to-date city, you get a misleading error message:

import pynsee
r = pynsee.localdata.get_new_city('59350')

Return was :

!!! An error occurred !!!
Query : https://api.insee.fr/metadonnees/V1/geo/commune/59350/suivants?date=2013-01-01


!!! Make sure you have subscribed to all APIs !!!
Click on all APIs' icons one by one, select your application, and click on Subscribe

Error 404

Traceback (most recent call last):

  Cell In[13], line 1
    r = pynsee.localdata.get_new_city('59350')

  File ~\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\pynsee\localdata\get_new_city.py:50 in get_new_city
    request = _request_insee(api_url=api_link, file_format="application/json")

  File ~\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\pynsee\utils\_request_insee.py:116 in _request_insee
    raise ValueError(results.text)

ValueError

The API does seem to return 404 (which IMHO is strange behaviour; a more specific code could have been used). I'm not sure how to proceed regarding the error message, though, as it relates to the API subscription and has nothing to do with the actual problem here.
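One way to make the message relevant would be to branch on the status code before printing the subscription hint; a hypothetical sketch (the helper name and the messages are made up, not pynsee's actual internals):

```python
def explain_api_error(status_code, url):
    """Map an HTTP status code to a user-facing hint (hypothetical helper)."""
    if status_code == 404:
        # For /geo/commune/{code}/suivants, 404 can simply mean the city
        # is already up to date and has no successor.
        return f"No data found at {url}: the city may already be up to date."
    if status_code in (401, 403):
        return "Authentication problem: make sure you have subscribed to all APIs."
    return f"Request to {url} failed with status {status_code}."


msg = explain_api_error(
    404, "https://api.insee.fr/metadonnees/V1/geo/commune/59350/suivants"
)
```

The subscription hint would then only appear for 401/403 responses, where it is actually relevant.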

Remove all `import *` statements

For instance, from pynsee.macrodata import * in the examples.
Explicit is better than implicit; import * is not recommended.
Instead, use explicit function names or a short alias for pynsee.macrodata.

Using the updatedAfter parameter with a date that has no data available returns an error when it should not

Description

As of writing, the last CPIH release was on Oct 14. If I use get_series to get the data updated after Oct 13, it works as expected; however, if I select a later day (Oct 15), I get an error instead of an empty dataframe.

Reproducible example

Works

cpih = get_series('001762616', metadata=True, startPeriod='1950-01-01', includeHistory=False, updatedAfter="2022-10-13T08:45:00")

Returns ExpatError: not well-formed (invalid token): line 2, column 189

cpih = get_series('001762616', metadata=True, startPeriod='1950-01-01', includeHistory=False, updatedAfter="2022-10-15T08:45:00")

Solution

This is the XML returned when no observations are available after the updatedAfter date:

'<?xml version="1.0" encoding="UTF-8"?>\n<mes:Error xmlns:mes="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/message"><mes:ErrorMessage code="100"><com:Text xmlns:com="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/common">À partir de 2022-10-16T08:45:00, aucune donnée n?est disponible pour cette série </com:Text></mes:ErrorMessage></mes:Error>'

We could check whether this is the message returned and return an empty dataframe instead of processing the data as usual. I'm willing to write this fix if it fits the purpose of the package.
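That check can be done with the standard library before handing the payload to the usual SDMX parser; a sketch based on the error XML shown above (the function name is made up, and the error-code convention is inferred from that one payload):

```python
import xml.etree.ElementTree as ET

MES = "{http://www.sdmx.org/resources/sdmxml/schemas/v2_1/message}"


def is_no_data_error(xml_text):
    """Return True if the response is an SDMX <mes:Error> with code 100
    ("no data available"); the caller could then return an empty
    dataframe instead of raising an ExpatError."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    if root.tag != f"{MES}Error":
        return False
    msg = root.find(f"{MES}ErrorMessage")
    return msg is not None and msg.get("code") == "100"


payload = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<mes:Error xmlns:mes="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/message">'
    '<mes:ErrorMessage code="100"><com:Text '
    'xmlns:com="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/common">'
    'no data</com:Text></mes:ErrorMessage></mes:Error>'
)
```

get_series could call such a check on the raw response and short-circuit to an empty dataframe when it returns True.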

Use logging instead of print

It would be nice to be able to hide messages from pynsee.
This would be possible if logging were used rather than calls to print (e.g. for cached data or during requests).

With a logger, the importance of each log could be set independently (e.g. messages about the use of cached data could be INFO while some issues with requests would be WARNING, ERROR, or CRITICAL) which would let people filter the levels they don't care about.

If necessary, the default level can be set to INFO so that all messages are still printed when nothing is changed.

Cf. https://docs.python.org/3/howto/logging.html for examples.
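A sketch of what this could look like on both sides, assuming pynsee were to adopt a logger named "pynsee" (naming the logger after the package is a common convention, not the package's current behaviour):

```python
import logging

# Library side: emit messages through a named logger instead of print().
logger = logging.getLogger("pynsee")
logger.info("Using cached data")             # INFO: hidden unless the user opts in
logger.warning("Request failed, retrying")   # WARNING: shown by default

# User side: silence anything below WARNING coming from pynsee only.
logging.getLogger("pynsee").setLevel(logging.WARNING)
```

Because levels are per-logger, this filters pynsee's chatter without touching the log output of the rest of the application.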

Issue when trying access to macrodata

Salut ! Thx for the great work !

I am trying to get data from the BDM API without success. I registered for the macro API and got access keys but can't access any series. The other APIs seem to work well (sirene, localdata and metadata).

Steps to reproduce:

con = init_conn(insee_key=INSEE_API_KEY, insee_secret=INSEE_API_SECRET)
from pynsee.macrodata import *

idbank_ipc = get_series_list("IPC-2015", "CLIMAT-AFFAIRES")

Yield following error :

!!! An error occurred !!!
Query : https://api.insee.fr/series/BDM/V1/dataflow/FR1/all

!!! SDMX web service used instead of API !!!

Extending geopandas methods and attributes to geodata objects

@hadrilec
I started to use geodata package for my Python class, thanks a lot it is very convenient.

I have noticed a few things:

Object class

communes = get_geodata('ADMINEXPRESS-COG-CARTO.LATEST:commune')

communes is of class pynsee.geodata.GeoDataframe.GeoDataframe:

  • Is the repeated GeoDataframe a typo?
  • Maybe it would be better to follow the same convention as geopandas, i.e. GeoDataFrame

CRS

The CRS of communes is not set. I think we could handle that internally when setting the class (in order to inherit the attribute of the gpd.GeoDataFrame object).

Extension of geopandas methods

The following code will fail if the object is not coerced to a gpd object (while it works with a gpd object).
Something should be done for our class to inherit gpd methods:

import contextily as ctx  # missing import in the original snippet

paris = communes.loc[communes['insee'].str.startswith("75")]

fig, ax = plt.subplots(figsize=(10, 10))
paris.plot(ax=ax, alpha=0.5, edgecolor='blue')
ctx.add_basemap(ax, crs=paris.crs.to_string())
ax.set_axis_off()

Enhancement doc: put get_dataset_list before get_series_list in example 1

For new users, get_dataset_list() should be presented first in order to get the dataset names. Suggestion, before example 1:

# In order to list all datasets, you can do: 
from pynsee.macrodata import get_dataset_list
insee_dataset = get_dataset_list()
insee_dataset.head()

produces:

	id	Name.fr	Name.en	url	n_series
0	BALANCE-PAIEMENTS	Balance des paiements	Balance of payments	https://www.insee.fr/fr/statistiques/series/10...	197
1	CHOMAGE-TRIM-NATIONAL	Chômage, taux de chômage par sexe et âge (sens...	Unemployment, unemployment rate and halo by se...	https://www.insee.fr/fr/statistiques/series/10...	169
2	CLIMAT-AFFAIRES	Indicateurs synthétiques du climat des affaires	Business climate composite indicators	https://www.insee.fr/fr/statistiques/series/10...	3
3	CNA-2010-CONSO-MEN	Consommation des ménages - Résultats par produ...	Households' consumption - Results by product, ...	https://www.insee.fr/fr/statistiques/series/10...	2247
4	CNA-2010-CONSO-SI	Dépenses de consommation finale par secteur in...	Final consumption expenditure by institutional...	https://www.insee.fr/fr/statistiques/series/10...	1391

Then use get_series_list for instance:

from pynsee.macrodata import get_dataset_list, get_series_list
idbank_ipc = get_series_list('BALANCE-PAIEMENTS')

Additional module to directly download files from Insee website

Many Insee datasets are not yet available through the API. I think we can recycle a lot from the doremifasol R package to propose a simple approach in Python. This could be an additional module in pynsee, say pynsee.download.

Within doremifasol, downloads are controlled from a JSON file. We could use this file directly. This would make the module easier to maintain, because both doremifasol and pynsee.download would rely on it.

@hadrilec I volunteer to explore this approach if you think it is an interesting one.

Suggestion for the PR merge method

#13 is my first PR on this repo. I see that all merge methods allowed by GitHub are enabled. @hadrilec I recommend going to Settings and enabling only squash and merge.

The squash and merge method is preferable because the evolution of a master branch with hundreds of commits is harder to grasp. By squashing commits before merging, we will have a better understanding of the changes in master.

get_geodata fails and freezes the main process

When downloading geodata (adminexpress), multiprocessing often fails and freezes the whole process with the following traceback:

Traceback (most recent call last):
  File "C:\Winpython\WinPython-64bit-3.9.2.0\python-3.9.2.amd64\lib\threading.py", line 954, in _bootstrap_inner
    self.run()
  File "C:\Winpython\WinPython-64bit-3.9.2.0\python-3.9.2.amd64\lib\threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Winpython\WinPython-64bit-3.9.2.0\python-3.9.2.amd64\lib\multiprocessing\pool.py", line 576, in _handle_results
    task = get()
  File "C:\Winpython\WinPython-64bit-3.9.2.0\python-3.9.2.amd64\lib\multiprocessing\connection.py", line 256, in recv
    return _ForkingPickler.loads(buf.getbuffer())
  File "C:\Users\thomas.grandjean\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\requests\exceptions.py", line 41, in __init__
    CompatJSONDecodeError.__init__(self, *args)
TypeError: __init__() missing 2 required positional arguments: 'doc' and 'pos'

Might be linked to the handling of 502 return code in _get_data_with_bbox.py. I'll submit a PR.

Should zip data be kept inside the package?

Some files are zipped into the package. Should they be left as such, or requested from the relevant links using appropriate functions?

Cons

  • heavy package
  • possibility that the underlying files, hence data, may change
  • ownership problem (gregoiredavid geojson for instance)

Pros

  • backup if the underlying file disappears from its source or is moved

pynsee.sirene.get_all_columns bug

Problem detected by @cthiounn

pynsee.sirene.get_all_columns()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/coder/.conda/envs/pythonENSAE/lib/python3.9/site-packages/pynsee/sirene/get_all_columns.py", line 35, in get_all_columns
    df = df.iloc[:,[0,2]]
  File "/home/coder/.conda/envs/pythonENSAE/lib/python3.9/site-packages/pandas/core/indexing.py", line 925, in __getitem__
    return self._getitem_tuple(key)
  File "/home/coder/.conda/envs/pythonENSAE/lib/python3.9/site-packages/pandas/core/indexing.py", line 1506, in _getitem_tuple
    self._has_valid_tuple(tup)
  File "/home/coder/.conda/envs/pythonENSAE/lib/python3.9/site-packages/pandas/core/indexing.py", line 754, in _has_valid_tuple
    self._validate_key(k, i)
  File "/home/coder/.conda/envs/pythonENSAE/lib/python3.9/site-packages/pandas/core/indexing.py", line 1424, in _validate_key
    raise IndexError("positional indexers are out-of-bounds")

Update notebooks to remove local data

Notebooks here

  • examples/example_automotive_industry_sirene.ipynb
  • examples/example_gdp_growth_rate_yoy.ipynb
  • examples/example_insee_premises_sirene.ipynb
  • examples/example_doctors.ipynb
  • examples/example_population_pyramid.ipynb
  • examples/example_deaths_births.ipynb
  • examples/example_inflation_yoy.ipynb
  • examples/example_population_map.ipynb
  • examples/example_poverty_paris_urban_area.ipynb

present search_macrodata() in the introduction doc

I think that before presenting example 1 you could present search_macrodata(), which is great when you are not sure what you are looking for.
Simply add an example for search_macrodata() above example 1 regarding GDP:

from pynsee.macrodata import search_macrodata
search_paris = search_macrodata("PARIS")

You get the result:


DATASET | IDBANK | KEY | TITLE_FR | TITLE_EN
-- | -- | -- | -- | --
CONSTRUCTION-LOCAUX | 001761985 | M.BDM.SURF_PLANCHER_AUT.VALEUR_ABSOLUE.CLA.SO.... | Surface de plancher autorisée - Ensemble des l... | Authorized floor area - All premises - Paris -...
CONSTRUCTION-LOCAUX | 001762081 | M.BDM.SURF_PLANCHER_COM.VALEUR_ABSOLUE.CLC.SO.... | Surface de plancher commencée - Ensemble des l... | Started floor area - All premises - Paris - Re...
CONSTRUCTION-LOGEMENTS | 001761793 | M.CUMUL.BDM.NBRE_LOG_AUT.VALEUR_ABSOLUE.CLA.SO... | Nombre de logements autorisés - Cumul sur douz... | Number of authorized dwellings - 12-month aggr...


Also mention that the search is case-insensitive in the short presentation.

How to find latest local data?

Maybe I don't understand exactly how pynsee works, but there does not seem to be a way to update the information about local metadata (i.e. about https://api.insee.fr/donnees-locales/).

When I call get_local_metadata I get the

!!! This function renders only package's internal data,
it might not be the most up-to-date
Have a look at api.insee.fr !!!

message and old metadata (e.g. "GEO2020RP2017" while the most recent seems to be "GEO2022RP2019").

Is there a way to get a get_local_metadata equivalent that fetches the latest data from the INSEE API?

On a somewhat similar note, is there a way to know automatically what the latest version of a dataset is?
For instance, if I want to work with the most recent data from FLORES and the RP, how can I call get_local_data on the most recent dataset? Is there something like GEOlatestRPlatest?

I could just try all dates and ignore calls that return empty dataframes but that is quite cumbersome.
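Failing a GEOlatestRPlatest alias, the probing loop can at least be automated; a sketch with a hypothetical fetch callable standing in for the get_local_data call (the version strings and the fake API below are just examples):

```python
def latest_nonempty(versions, fetch):
    """Return the first version (newest first) for which `fetch` yields rows.

    `fetch` stands in for a call such as get_local_data with a given
    dataset version; it should return a sequence of rows (possibly
    empty), or raise when the version does not exist.
    """
    for version in versions:
        try:
            rows = fetch(version)
        except Exception:
            continue  # unknown version: try the next, older one
        if len(rows) > 0:
            return version, rows
    return None, []


# Candidate vintages ordered from newest to oldest (made-up example values).
candidates = ["GEO2022RP2019", "GEO2021RP2018", "GEO2020RP2017"]
fake_api = {"GEO2020RP2017": [{"pop": 1}]}  # only the oldest vintage has data
version, rows = latest_nonempty(candidates, lambda v: fake_api.get(v, []))
```

This keeps the cumbersome part in one helper, but a server-side "latest" alias would obviously be the cleaner fix.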

Add history file

A HISTORY (or NEWS or CHANGELOG) file describing changes associated with each version makes it easier for users to see what’s changing in the package and how it might impact their workflow

Can't load geodataframe on geopandas 0.12.2

I have an issue on a Windows machine with geopandas 0.12.2 :

com = get_geodata('ADMINEXPRESS-COG-CARTO.LATEST:commune', update=True)
gdf = gpd.GeoDataFrame(com).set_crs("EPSG:4326")

raises the following exception :

File C:\Winpython\WinPython-64bit-3.9.2.0\notebooks\sis_scraping\sis_scraping\utils.py:136 in get_cities_geo
   gdf = gpd.GeoDataFrame(com)

 File ~\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\geopandas\geodataframe.py:173 in __init__
   self["geometry"] = _ensure_geometry(self["geometry"].values, crs)

 File ~\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\geopandas\geodataframe.py:1440 in __setitem__
   value = _ensure_geometry(value, crs=crs)

 File ~\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\geopandas\geodataframe.py:55 in _ensure_geometry
   data.crs = crs

 File ~\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\geopandas\array.py:339 in crs
   self._crs = None if not value else CRS.from_user_input(value)

 File ~\AppData\Local\pypoetry\Cache\virtualenvs\sis-scraping-0fCjv732-py3.9\lib\site-packages\pandas\core\generic.py:1527 in __nonzero__
   raise ValueError(

ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

The same command runs perfectly on a Linux with geopandas 0.10.2 (same version of pandas on both cases, 1.5.3).
