
scipdf_parser's Introduction

SciPDF Parser

A Python parser for scientific PDFs based on GROBID.

Installation

Use pip to install from this GitHub repository:

pip install git+https://github.com/titipata/scipdf_parser

Note

  • You also need the en_core_web_sm model for spaCy; run python -m spacy download en_core_web_sm to download it
  • You can change the GROBID version in serve_grobid.sh to test the parser against a newer GROBID release

Usage

Run GROBID using the provided bash script before parsing PDFs.

NOTE: the recommended way to run GROBID is via Docker, so make sure Docker is running on your machine. Update the script so that you are using the latest GROBID version; each release generally brings substantial improvements.

bash serve_grobid.sh

This script will run GROBID at the default port 8070. To parse a PDF provided in the example_data folder or from a direct URL, use the following:

import scipdf
article_dict = scipdf.parse_pdf_to_dict('example_data/futoma2017improved.pdf')  # returns a dictionary

# option to parse directly from a URL to a PDF; if as_list=True, the 'text' of each
# parsed section is returned as a list of paragraphs instead of a single string
article_dict = scipdf.parse_pdf_to_dict('https://www.biorxiv.org/content/biorxiv/early/2018/11/20/463760.full.pdf', as_list=False)

# output example
>> {
    'title': 'Proceedings of Machine Learning for Healthcare',
    'abstract': '...',
    'sections': [
        {'heading': '...', 'text': '...'},
        {'heading': '...', 'text': '...'},
        ...
    ],
    'references': [
        {'title': '...', 'year': '...', 'journal': '...', 'author': '...'},
        ...
    ],
    'figures': [
        {'figure_label': '...', 'figure_type': '...', 'figure_id': '...', 'figure_caption': '...', 'figure_data': '...'},
        ...
    ],
    'doi': '...'
}

xml = scipdf.parse_pdf('example_data/futoma2017improved.pdf', soup=True)  # option to get the full XML output from GROBID as parsed soup

To parse figures from PDFs using pdffigures2, you can run:

scipdf.parse_figures('example_data', output_folder='figures') # folder should contain only PDF files

You can see example output figures in the figures folder.


scipdf_parser's Issues

[Bug?] Please, restore scipdf_parser! It is not working anymore on Colab :(

Hello everyone,

I used to run scipdf_parser on Google Colab and it worked so well!
Today I tried to run it again with the same commands, but it no longer works!
Please, can someone help me? :(

Here is the code I was using:

from google.colab import drive
drive.mount('/content/drive')

!pip install git+https://github.com/titipata/scipdf_parser

!python -m spacy download en_core_web_sm

import subprocess
subprocess.Popen("bash serve_grobid.sh", shell=True)

(It now returns: <Popen: returncode: None args: 'bash serve_grobid.sh'>)

!bash serve_grobid.sh

(It now returns: Error: Docker is not installed. Please install Docker before running Grobid.)

import scipdf
import os
import pandas as pd
import warnings
from bs4.builder import XMLParsedAsHTMLWarning
warnings.filterwarnings('ignore', category=XMLParsedAsHTMLWarning)

files_to_process = os.listdir("/content/drive/My Drive/PDF/")[:1500]
for idx, filename in enumerate(files_to_process, start=1):
    if filename.lower().endswith(".pdf"):
        try:
            pmid = filename.split(".")[0]
            percorso_file_csv = f"/content/drive/My Drive/CSV/{pmid}.csv"
            dizionario = scipdf.parse_pdf_to_dict(f'/content/drive/My Drive/PDF/{filename}')
            sections_content = [f"{key}: {value}" for section in dizionario.get('sections', []) for key, value in section.items()]
            #references_content = [f"{key}: {value}" for reference in dizionario.get('references', []) for key, value in reference.items()]
            #figures_content = [f"{key}: {value}" for figure in dizionario.get('figures', []) for key, value in figure.items()]

            content = {
                "Title": f"{dizionario.get('title', '')}\n",
                "Authors": f"{dizionario.get('authors', '')}\n",
                "Publication date": f"{dizionario.get('pub_date', '')}\n",
                "Abstract": f"{dizionario.get('abstract', '')}\n",
                "Sections": "\n".join(sections_content),
                #"References": "\n".join(references_content),
                #"Figures": "\n".join(figures_content),
                "Doi": f"{dizionario.get('doi', '')}"
            }

            # content_str was not defined in the snippet as posted; a plausible
            # definition is to flatten the content dict into a single string
            content_str = "\n".join(f"{key}: {value}" for key, value in content.items())

            df = pd.DataFrame([[pmid, content_str]])

            df.to_csv(percorso_file_csv, index=False, header=["pmid", "content"])
            print(f"The CSV file was created successfully for file {filename}")
            print(f"File number {idx}")

        except Exception as e:
            print(e)
            continue

(It now returns:

OSError Traceback (most recent call last)
in <cell line: 9>()
7
8 # Process only the first 10 files
----> 9 files_to_process = os.listdir("/content/drive/My Drive/PDF/")[:1500]
10 for idx, filename in enumerate(files_to_process, start=1):
11 if filename.lower().endswith(".pdf"):

OSError: [Errno 5] Input/output error: '/content/drive/My Drive/PDF/')

What is going on?

Thank you so much in advance!!
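A plausible diagnosis, for anyone hitting the same wall: the logs show two independent failures. serve_grobid.sh fails because Docker is not available on a stock Colab runtime, and the Errno 5 suggests the Drive mount has gone stale. A sketch of a workaround, assuming you have access to a GROBID server hosted somewhere reachable (the URL below is a placeholder, not a real service):

from google.colab import drive
import scipdf

# force_remount recovers a stale Drive mount that raises Errno 5 on listdir
drive.mount('/content/drive', force_remount=True)

# Docker cannot run inside Colab itself, so point scipdf at an external
# GROBID instance via the grobid_url argument (placeholder URL below)
article_dict = scipdf.parse_pdf_to_dict(
    '/content/drive/My Drive/PDF/example.pdf',
    grobid_url='https://your-grobid-host:8070',
)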

Include consolidateHeader option while running scipdf

Hi,

I am trying to run GROBID via scipdf in my Python environment, but I couldn't find a way to pass the "consolidateHeader" option provided by GROBID. Could you please let me know whether such an option exists, or whether I am missing something?

Thanks in advance :)
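Until scipdf exposes this, a minimal sketch is to call the GROBID REST API directly, which accepts consolidateHeader as a form field (0, 1, or 2); the host and port below assume the default serve_grobid.sh setup:

import requests

# POST the PDF to GROBID's full-text endpoint with header consolidation enabled;
# consolidateHeader='1' asks GROBID to consolidate the header metadata
with open('example_data/futoma2017improved.pdf', 'rb') as f:
    r = requests.post(
        'http://localhost:8070/api/processFulltextDocument',
        files={'input': f},
        data={'consolidateHeader': '1'},
    )
tei_xml = r.text  # TEI XML string; parse with BeautifulSoup if needed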

java.lang.IllegalAccessError

grobid-0.6.2 $ ./gradlew run

Configuration(s) specified but the install task does not exist in project :grobid-core.
Configuration(s) specified but the install task does not exist in project :grobid-home.
Configuration(s) specified but the install task does not exist in project :grobid-service.
Configuration(s) specified but the install task does not exist in project :grobid-trainer.

Task :grobid-core:compileJava FAILED
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

FAILURE: Build failed with an exception.

  • What went wrong:
    Execution failed for task ':grobid-core:compileJava'.

java.lang.IllegalAccessError: class org.gradle.internal.compiler.java.ClassNameCollector (in unnamed module @0x6e843276) cannot access class com.sun.tools.javac.code.Symbol$TypeSymbol (in module jdk.compiler) because module jdk.compiler does not export com.sun.tools.javac.code to unnamed module @0x6e843276

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

  • Get more help at https://help.gradle.org

Wrong parsing when dealing with double-column papers

Hope you are well,

I am here to report a bug where the Abstract and Introduction are wrongly parsed. I have emphasized the error with bold text in the parsed JSON below. I have also included the original paper.

Since I need this for my current research, could you take a look and advise me on what might have gone wrong here as soon as possible? I can help fix the bug and make a pull request.

Thank you in advance.

paper11.pdf
{
"authors": "Sahra Ghalebikesabi; Harrison Wilde; Jack Jewson; Arnaud Doucet; Sebastian Vollmer; Chris Holmes",
"pub_date": "",
"title": "Mitigating Statistical Bias within Differentially Private Synthetic Data",
"abstract": "Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data. However, mechanisms of privacy preservation can significantly reduce the utility of synthetic data, which in turn impacts downstream tasks such as learning predictive models or inference. We propose several re-weighting strategies using privatised likelihood ratios that not only mitigate statistical bias of downstream estimators but also have general applicability to differentially private generative models. Through large-scale empirical evaluation, we show that private importance weighting provides simple and effective privacycompliant augmentation for general applications of synthetic data.
Recent literature has proposed techniques to decrease this bias by modifying the training processes of private algorithms. These approaches are specific to a particular synthetic data generating method (Zhang et al., 2018;Frigerio et al., 2019;Neunhoeffer et al., 2020), or are query-based (Hardt and Rothblum, 2010;Liu et al., 2021) and are thus not generally applicable. Hence, we propose several postprocessing approaches that aid mitigating the bias induced by the DP synthetic data. While there has been extensive research into estimating models directly on protected data without leaking privacy, we argue that releasing DP synthetic data is crucial for rigorous statistical analysis. This makes providing a framework to debias inference on this an important direction of future research that goes beyond the applicability of any particular DP estimator. Because of the post-processing theorem (Dwork et al., 2014), any function on the DP synthetic data is itself DP. This allows deployment of standard statistical analysis tooling that may otherwise be unavailable for DP estimation. These include 1) exploratory data analysis, 2) model verification and analysis of model diagnostics, 3) private release of (newly developed) models for which no DP analogue has been derived, 4) the computation of con-",
"sections": [
{
"heading": "INTRODUCTION",
"text": "The prevalence of sensitive datasets, such as electronic health records, contributes to a growing concern for violations of an individual's privacy. In recent years, the notion of Differential Privacy (Dwork et al., 2006) has gained popularity as a privacy metric offering statistical guarantees. This framework bounds how much the likelihood of a randomised algorithm can differ under neighbouring real datasets. We say two datasets D and D are neighbouring when they differ by at most one observation. A randomised algorithm g : M \u2192 R satisfies ( , \u03b4)-differential privacy for , \u03b4 \u2265 0 if and only if for all neighbouring datasets D, D and all subsets S \u2286 R, we have Pr(g(D) \u2208 S) \u2264 \u03b4 + e Pr(g(D ) \u2208 S).\nThe parameter is referred to as the privacy budget; smaller quantities imply more private algorithms.\nInjecting noise into sensitive data according to this paradigm allows for datasets to be published in a private manner. With the rise of generative modelling approaches, such as Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), there has been a surge of literature proposing generative models for differentially private (DP) synthetic data generation and release (Jordon et al., 2019;Xie et al., 2018;Zhang et al., 2017). These generative models often fail to capture the true underlying distribution of the real data, possibly due to flawed parametric assumptions and the injection of noise into their training and release mechanisms.\nThe constraints imposed by privacy-preservation can lead to significant differences between nature's true data generating process (DGP) and the induced synthetic DGP (SDGP) (Wilde et al., 2020). This increases the bias of estimators trained on data from the SDGP which reduces their utility.\nfidence intervals of downstream estimators through the nonparametric bootstrap, and 5) the public release of a data set to a research community whose individual requests would otherwise overload the data curator. This endeavour could facilitate the release of data on public platforms like the UCI Machine Learning Repository (Lichman, 2013) or the creation of data competitions, fuelling research growth for specific modelling areas.\nThis motivates our main contributions, namely the formulation of multiple approaches to generating DP importance weights that correct for synthetic data's issues. In particular, this includes:\n\u2022 The bias estimation of an existing DP importance weight estimation method, and the introduction of an unbiased extension with smaller variance (Section 3.3).\n\u2022 An adjustment to DP Stochastic Gradient Descent's sampling probability and noise injection to facilitate its use in the training of DP-compliant neural networkbased classifiers to estimate importance weights from combinations of real and synthetic data (Section 3.4).\n\u2022 The use of discriminator outputs of DP GANs as importance weights that do not require any additional privacy budget (Section 3.5).\n\u2022 An application of importance weighting to correct for the biases incurred in Bayesian posterior belief updating with synthetic data motivated by the results from (Wilde et al., 2020) and to exhibit our methods' wide applicability in frequentist and Bayesian contexts (Section 3.1).",
"n_publication_ref": 8,
"n_figure_ref": 0
},
}

SyntaxWarning: "is not" with a literal

mypy throws these warnings.

Are you interested in a PR that fixes this?

[...]/.local/lib/python3.10/site-packages/scipdf/pdf/parse_pdf.py:114: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if middlename is not "":
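For context, is/is not compare object identity rather than value, so the result against a string literal is implementation-dependent; the intended check is value equality. A sketch of the fix:

middlename = "A."  # example value; in parse_pdf.py this comes from the parsed author name

# before: `if middlename is not "":` compares identity and triggers the warning
# after: `!=` compares values, which is what was intended
if middlename != "":
    print("has middle name")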

General error during semantic analysis: Unsupported class file major version 61

When I run bash serve_grobid.sh, I encounter the following error

I'm using GROBID version 0.6.2, Java version 17.0.4 and Gradle version 8.1

  General error during semantic analysis: Unsupported class file major version 61
  
  java.lang.IllegalArgumentException: Unsupported class file major version 61
  	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:196)
  	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:177)
  	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:163)
  	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:284)
  	at org.codehaus.groovy.ast.decompiled.AsmDecompiler.parseClass(As ...

Processing in batch and GPU

Hello,

Is there any functionality in the library to process PDFs in batches and/or to use a GPU to accelerate processing?
If not, what would be the recommended way to do this?

Thanks!
J
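There is no built-in batching, but the GROBID server handles concurrent requests, so a thread pool over parse_pdf_to_dict is a reasonable sketch (GPU acceleration would be a property of the GROBID server's models, not of this client library):

import glob
from concurrent.futures import ThreadPoolExecutor

import scipdf

def parse_one(path):
    # each call is an independent HTTP request to the running GROBID server
    try:
        return path, scipdf.parse_pdf_to_dict(path)
    except Exception as e:
        print(path, e)
        return path, None

pdf_paths = glob.glob('example_data/*.pdf')
# GROBID serves several requests at once; tune max_workers to your server
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(parse_one, pdf_paths))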

Import scipdf fails when en_core_web_sm exists in another conda environment

Hi, I encountered this issue when importing scipdf from a conda environment. I have several environments with spacy installed, each most likely at a slightly different version, and perhaps with a different version of en_core_web_sm. The issue appears to be fixed by downloading en_core_web_sm in the current environment.

import scipdf
/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/spacy/util.py:715: UserWarning: [W094] Model 'en_core_web_sm' (2.0.0) specifies an under-constrained spaCy version requirement: >=2.0.0a18. This can lead to compatibility problems with older versions, or as new spaCy versions are released, because the model may say it's compatible when it's not. Consider changing the "spacy_version" in your meta.json to a version range, with a lower and upper pin. For example: >=3.0.5,<3.1.0
  warnings.warn(warn_msg)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/scipdf/__init__.py", line 9, in <module>
    from scipdf.features.text_utils import *
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/scipdf/features/__init__.py", line 1, in <module>
    from .text_utils import compute_readability_stats, compute_text_stats
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/scipdf/features/text_utils.py", line 9, in <module>
    nlp = spacy.load('en_core_web_sm')
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/spacy/__init__.py", line 47, in load
    return util.load_model(name, disable=disable, exclude=exclude, config=config)
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/spacy/util.py", line 322, in load_model
    return load_model_from_package(name, **kwargs)
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/spacy/util.py", line 355, in load_model_from_package
    return cls.load(vocab=vocab, disable=disable, exclude=exclude, config=config)
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/en_core_web_sm/__init__.py", line 12, in load
    return load_model_from_init_py(__file__, **overrides)
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/spacy/util.py", line 520, in load_model_from_init_py
    config=config,
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/spacy/util.py", line 388, in load_model_from_path
    config = load_config(config_path, overrides=dict_to_dot(config))
  File "/Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/spacy/util.py", line 545, in load_config
    raise IOError(Errors.E053.format(path=config_path, name="config.cfg"))
OSError: [E053] Could not read config.cfg from /Users/gabe/opt/miniconda3/envs/papercast/lib/python3.7/site-packages/en_core_web_sm/en_core_web_sm-2.0.0/config.cfg

After running python -m spacy download en_core_web_sm, the import succeeds.

No module named fitz error

PR #22 (d7d537e) added a new dependency, the fitz library (line number 9 in the parse_pdf.py file), which was not added to the dependency list.
As a result, fitz is not installed when installing scipdf_parser, and a module-not-found error is raised when trying to use it.

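A likely workaround until the dependency list is fixed: the fitz module is distributed as part of the PyMuPDF package, so installing it manually resolves the import error.

pip install PyMuPDF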

KeyError: xml:id

Unfortunately I cannot share the PDF as it is proprietary:

KeyError                                  Traceback (most recent call last)
Cell In [6], line 8
      6 for app in apps:
      7     app = app.resolve()
----> 8     article_dict = scipdf.parse_pdf_to_dict(str(app))
      9     ref = article_dict["references"]
     10     ref = [l for l in ref if is_valid(l)]

File ~/.local/lib/python3.10/site-packages/scipdf/pdf/parse_pdf.py:358, in parse_pdf_to_dict(pdf_path, fulltext, soup, as_list, grobid_url)
    339 """
    340 Parse the given PDF and return dictionary of the parsed article
    341 
   (...)
    353 article_dict: dict, dictionary of an article
    354 """
    355 parsed_article = parse_pdf(
    356     pdf_path, fulltext=fulltext, soup=soup, grobid_url=grobid_url
    357 )
--> 358 article_dict = convert_article_soup_to_dict(parsed_article, as_list=as_list)
    359 return article_dict

File ~/.local/lib/python3.10/site-packages/scipdf/pdf/parse_pdf.py:321, in convert_article_soup_to_dict(article, as_list)
    319 article_dict["sections"] = parse_sections(article, as_list=as_list)
    320 article_dict["references"] = parse_references(article)
--> 321 article_dict["figures"] = parse_figure_caption(article)
    323 doi = article.find("idno", attrs={"type": "DOI"})
    324 doi = doi.text if doi is not None else ""

File ~/.local/lib/python3.10/site-packages/scipdf/pdf/parse_pdf.py:260, in parse_figure_caption(article)
    258 for figure in figures:
    259     figure_type = figure.attrs.get("type") or ""
--> 260     figure_id = figure.attrs["xml:id"] or ""
    261     label = figure.find("label").text
    262     if figure_type == "table":

KeyError: 'xml:id'
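A sketch of a defensive patch, mirroring how the type attribute is already read one line earlier in parse_figure_caption: use .get() so a figure element without an xml:id falls back to an empty string instead of raising.

# scipdf/pdf/parse_pdf.py, parse_figure_caption (around line 260)
figure_type = figure.attrs.get("type") or ""
figure_id = figure.attrs.get("xml:id") or ""  # .get() avoids the KeyError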

How to install it on Windows 10?

Thank you for open-sourcing this powerful tool!
May I ask whether there are any other ways to install this package on Windows 10 besides using Docker? I was able to install it successfully on Ubuntu, but unfortunately I have not managed to do so on Windows.

You're parsing an XML document using an HTML parser

I'm running the demo code, pointing at a specific GROBID URL:

import scipdf
article_dict = scipdf.parse_pdf_to_dict('examples/futoma2017improved.pdf',
                                        grobid_url='https://<my-grobid-url>/')

I'm getting the following error:

/anaconda/envs/scipdfparser/lib/python3.9/site-packages/bs4/builder/init.py:545: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument features="xml" into the BeautifulSoup constructor.
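The warning comes from scipdf handing GROBID's TEI XML to BeautifulSoup with an HTML parser; the output is still produced. Until the library passes features="xml" itself, the warning can be filtered on the caller's side:

import warnings
from bs4.builder import XMLParsedAsHTMLWarning

# silence only this specific bs4 warning; parsing results are unaffected
warnings.filterwarnings('ignore', category=XMLParsedAsHTMLWarning)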

I'm running scipdf in a conda environment with Python 3.9.16. Here are the installed packages:

# packages in environment at /anaconda/envs/scipdfparser:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex             5.1                       1_gnu  
asttokens                 2.0.5              pyhd3eb1b0_0  
backcall                  0.2.0              pyhd3eb1b0_0  
beautifulsoup4            4.11.2                   pypi_0    pypi
blas                      1.0                         mkl  
blis                      0.7.9                    pypi_0    pypi
ca-certificates           2023.01.10           h06a4308_0  
catalogue                 2.0.8                    pypi_0    pypi
certifi                   2022.12.7        py39h06a4308_0  
charset-normalizer        3.0.1                    pypi_0    pypi
click                     8.1.3                    pypi_0    pypi
comm                      0.1.2            py39h06a4308_0  
confection                0.0.4                    pypi_0    pypi
cymem                     2.0.7                    pypi_0    pypi
debugpy                   1.5.1            py39h295c915_0  
decorator                 5.1.1              pyhd3eb1b0_0  
en-core-web-sm            3.5.0                    pypi_0    pypi
entrypoints               0.4              py39h06a4308_0  
executing                 0.8.3              pyhd3eb1b0_0  
idna                      3.4                      pypi_0    pypi
intel-openmp              2021.4.0          h06a4308_3561  
ipykernel                 6.19.2           py39hb070fc8_0  
ipython                   8.8.0            py39h06a4308_0  
jedi                      0.18.1           py39h06a4308_1  
jinja2                    3.1.2                    pypi_0    pypi
jupyter_client            7.4.8            py39h06a4308_0  
jupyter_core              5.1.1            py39h06a4308_0  
langcodes                 3.3.0                    pypi_0    pypi
ld_impl_linux-64          2.38                 h1181459_1  
libffi                    3.4.2                h6a678d5_6  
libgcc-ng                 11.2.0               h1234567_1  
libgomp                   11.2.0               h1234567_1  
libsodium                 1.0.18               h7b6447c_0  
libstdcxx-ng              11.2.0               h1234567_1  
lxml                      4.9.2                    pypi_0    pypi
markupsafe                2.1.2                    pypi_0    pypi
matplotlib-inline         0.1.6            py39h06a4308_0  
mkl                       2021.4.0           h06a4308_640  
mkl-service               2.4.0            py39h7f8727e_0  
mkl_fft                   1.3.1            py39hd3c417c_0  
mkl_random                1.2.2            py39h51133e4_0  
murmurhash                1.0.9                    pypi_0    pypi
ncurses                   6.4                  h6a678d5_0  
nest-asyncio              1.5.6            py39h06a4308_0  
numpy                     1.23.5           py39h14f4228_0  
numpy-base                1.23.5           py39h31eccc5_0  
openssl                   1.1.1s               h7f8727e_0  
packaging                 22.0             py39h06a4308_0  
pandas                    1.5.3                    pypi_0    pypi
parso                     0.8.3              pyhd3eb1b0_0  
pathy                     0.10.1                   pypi_0    pypi
pexpect                   4.8.0              pyhd3eb1b0_3  
pickleshare               0.7.5           pyhd3eb1b0_1003  
pip                       22.3.1           py39h06a4308_0  
platformdirs              2.5.2            py39h06a4308_0  
preshed                   3.0.8                    pypi_0    pypi
prompt-toolkit            3.0.36           py39h06a4308_0  
psutil                    5.9.0            py39h5eee18b_0  
ptyprocess                0.7.0              pyhd3eb1b0_2  
pure_eval                 0.2.2              pyhd3eb1b0_0  
pydantic                  1.10.4                   pypi_0    pypi
pygments                  2.11.2             pyhd3eb1b0_0  
pyphen                    0.13.2                   pypi_0    pypi
python                    3.9.16               h7a1cb2a_0  
python-dateutil           2.8.2              pyhd3eb1b0_0  
pytz                      2022.7.1                 pypi_0    pypi
pyzmq                     23.2.0           py39h6a678d5_0  
readline                  8.2                  h5eee18b_0  
requests                  2.28.2                   pypi_0    pypi
scipdf                    0.1.dev0                 pypi_0    pypi
setuptools                65.6.3           py39h06a4308_0  
six                       1.16.0             pyhd3eb1b0_1  
smart-open                6.3.0                    pypi_0    pypi
soupsieve                 2.3.2.post1              pypi_0    pypi
spacy                     3.5.0                    pypi_0    pypi
spacy-legacy              3.0.12                   pypi_0    pypi
spacy-loggers             1.0.4                    pypi_0    pypi
sqlite                    3.40.1               h5082296_0  
srsly                     2.4.5                    pypi_0    pypi
stack_data                0.2.0              pyhd3eb1b0_0  
textstat                  0.7.3                    pypi_0    pypi
thinc                     8.1.7                    pypi_0    pypi
tk                        8.6.12               h1ccaba5_0  
tornado                   6.2              py39h5eee18b_0  
tqdm                      4.64.1                   pypi_0    pypi
traitlets                 5.7.1            py39h06a4308_0  
typer                     0.7.0                    pypi_0    pypi
typing-extensions         4.4.0                    pypi_0    pypi
tzdata                    2022g                h04d1e81_0  
urllib3                   1.26.14                  pypi_0    pypi
wasabi                    1.1.1                    pypi_0    pypi
wcwidth                   0.2.5              pyhd3eb1b0_0  
wheel                     0.37.1             pyhd3eb1b0_0  
xz                        5.2.10               h5eee18b_1  
zeromq                    4.3.4                h2531618_0  
zlib                      1.2.13               h5eee18b_0
