
Comments (48)

jackylee1 commented on May 12, 2024

I set up Python 3.8.16 and installed everything except the last two packages, en_core_web_sm and detectron2.


Pseudoking commented on May 12, 2024

The same issue

More information is as follows:
openmim 0.3.7 depends on Click
nltk 3.8.1 depends on click
typer 0.3.0 depends on click<7.2.0 and >=7.1.1
black 23.3.0 depends on click>=8.0.0


yinanhe commented on May 12, 2024

Which Python version are you using?

We use Python 3.8 and spacy==3.5.1


yinanhe commented on May 12, 2024

The same issue

More information is as follows: openmim 0.3.7 depends on Click nltk 3.8.1 depends on click typer 0.3.0 depends on click<7.2.0 and >=7.1.1 black 23.3.0 depends on click>=8.0.0

In ChatVideo, typer is pinned to 0.7.0.
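If pip still reports the Click conflict, pinning typer to the ChatVideo version should relax the bound, since typer 0.7.0 (as far as I know) accepts click 8.x; this is an untested suggestion:

pip install "typer==0.7.0" "click>=8.0.0"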

Pseudoking commented on May 12, 2024

ubuntu 22.04 LTS (newly installed)
python 3.10.6
I modified the requirements.txt
spacy==3.0.9 -> spacy==3.5.1

and a new issue:
en_core_web_sm 3.0.0 depends on spacy>=3.0.0,<3.1.0


yinanhe commented on May 12, 2024

ubuntu 22.04 LTS (newly installed) python 3.10.6 I modified the requirements.txt spacy==3.0.9 -> spacy==3.5.1

and a new issue: en_core_web_sm 3.0.0 depends on spacy>=3.0.0,<3.1.0

I uploaded my conda environment to https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat/environment.yaml @Pseudoking @jackylee1
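If you only want to resolve the en_core_web_sm/spacy mismatch, you can also install the model release that matches spacy 3.5.x directly; a minimal sketch (the explicit wheel URL follows spaCy's usual release naming, so treat it as an assumption):

python -m spacy download en_core_web_sm
# or pin explicitly:
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0-py3-none-any.whl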


jackylee1 commented on May 12, 2024

(base) PS D:\Ask-Anything\video_chat_with_StableLM> conda env create -f environment.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • numpy==1.23.5=py38h14f4228_0
  • zstd==1.5.5=hc292b87_0
  • idna==3.4=py38h06a4308_0
  • jupyter_client==8.1.0=py38h06a4308_0
  • ca-certificates==2023.01.10=h06a4308_0
  • ipython==8.12.0=py38h06a4308_0
  • pyzmq==23.2.0=py38h6a678d5_0
  • libtasn1==4.19.0=h5eee18b_0
  • psutil==5.9.0=py38h5eee18b_0
  • certifi==2022.12.7=py38h06a4308_0
  • libgomp==11.2.0=h1234567_1
  • ffmpeg==4.3=hf484d3e_0
  • giflib==5.2.1=h5eee18b_3
  • pysocks==1.7.1=py38h06a4308_0
  • lcms2==2.12=h3be6417_0
  • python==3.8.16=h7a1cb2a_3
  • wheel==0.38.4=py38h06a4308_0
  • gnutls==3.6.15=he1e5248_0
  • libunistring==0.9.10=h27cfd23_0
  • flit-core==3.8.0=py38h06a4308_0
  • comm==0.1.2=py38h06a4308_0
  • jupyter_core==5.3.0=py38h06a4308_0
  • pyopenssl==23.0.0=py38h06a4308_0
  • libgfortran5==11.2.0=h1234567_1
  • libstdcxx-ng==11.2.0=h1234567_1
  • libtiff==4.5.0=h6a678d5_2
  • cryptography==39.0.1=py38h9ce1e76_0
  • lame==3.100=h7b6447c_0
  • gmp==6.2.1=h295c915_3
  • tornado==6.2=py38h5eee18b_0
  • cffi==1.15.1=py38h5eee18b_3
  • matplotlib-inline==0.1.6=py38h06a4308_0
  • mkl_random==1.2.2=py38h51133e4_0
  • _openmp_mutex==5.1=1_gnu
  • pip==23.0.1=py38h06a4308_0
  • jedi==0.18.1=py38h06a4308_1
  • nettle==3.7.3=hbbd107a_1
  • zlib==1.2.13=h5eee18b_0
  • tk==8.6.12=h1ccaba5_0
  • openssl==1.1.1t=h7f8727e_0
  • packaging==23.0=py38h06a4308_0
  • libgfortran-ng==11.2.0=h00389a5_1
  • libgcc-ng==11.2.0=h1234567_1
  • libffi==3.4.2=h6a678d5_6
  • typing_extensions==4.4.0=py38h06a4308_0
  • mkl==2021.4.0=h06a4308_640
  • libdeflate==1.17=h5eee18b_0
  • nest-asyncio==1.5.6=py38h06a4308_0
  • scipy==1.10.1=py38h14f4228_0
  • requests==2.28.1=py38h06a4308_1
  • pytorch-cuda==11.7=h778d358_3
  • pillow==9.4.0=py38h6a678d5_0
  • pytorch==1.13.1=py3.8_cuda11.7_cudnn8.5.0_0
  • libpng==1.6.39=h5eee18b_0
  • traitlets==5.7.1=py38h06a4308_0
  • libiconv==1.16=h7f8727e_2
  • numpy-base==1.23.5=py38h31eccc5_0
  • sqlite==3.41.2=h5eee18b_0
  • zeromq==4.3.4=h2531618_0
  • xz==5.2.10=h5eee18b_1
  • libwebp-base==1.2.4=h5eee18b_1
  • libcufile==1.6.0.25=0
  • debugpy==1.5.1=py38h295c915_0
  • jpeg==9e=h5eee18b_1
  • lerc==3.0=h295c915_0
  • mkl_fft==1.3.1=py38hd3c417c_0
  • prompt-toolkit==3.0.36=py38h06a4308_0
  • libidn2==2.3.2=h7f8727e_0
  • mkl-service==2.4.0=py38h7f8727e_0
  • platformdirs==2.5.2=py38h06a4308_0
  • ld_impl_linux-64==2.38=h1181459_1
  • libcufft==10.7.2.124=h4fbf590_0
  • lz4-c==1.9.4=h6a678d5_0
  • readline==8.2=h5eee18b_0
  • openh264==2.1.1=h4ff587b_0
  • libwebp==1.2.4=h11a3e52_1
  • intel-openmp==2021.4.0=h06a4308_3561
  • brotlipy==0.7.0=py38h27cfd23_1003
  • freetype==2.12.1=h4a9f257_0
  • urllib3==1.26.15=py38h06a4308_0
  • bzip2==1.0.8=h7b6447c_0
  • ipykernel==6.19.2=py38hb070fc8_0
  • libsodium==1.0.18=h7b6447c_0
  • ncurses==6.4=h6a678d5_0


jackylee1 commented on May 12, 2024

ERROR: Ignored the following versions that require a different python version: 0.0.100 Requires-Python >=3.8.1,<4.0; 0.0.101 Requires-Python >=3.8.1,<4.0; 0.0.101rc0 Requires-Python >=3.8.1,<4.0; 0.0.102 Requires-Python >=3.8.1,<4.0; 0.0.102rc0 Requires-Python >=3.8.1,<4.0; 0.0.103 Requires-Python >=3.8.1,<4.0; 0.0.104 Requires-Python >=3.8.1,<4.0; 0.0.105 Requires-Python >=3.8.1,<4.0; 0.0.106 Requires-Python >=3.8.1,<4.0; 0.0.107 Requires-Python >=3.8.1,<4.0; 0.0.108 Requires-Python >=3.8.1,<4.0; 0.0.109 Requires-Python >=3.8.1,<4.0; 0.0.110 Requires-Python >=3.8.1,<4.0; 0.0.111 Requires-Python >=3.8.1,<4.0; 0.0.112 Requires-Python >=3.8.1,<4.0; 0.0.113 Requires-Python >=3.8.1,<4.0; 0.0.114 Requires-Python >=3.8.1,<4.0; 0.0.115 Requires-Python >=3.8.1,<4.0; 0.0.116 Requires-Python >=3.8.1,<4.0; 0.0.117 Requires-Python >=3.8.1,<4.0; 0.0.118 Requires-Python >=3.8.1,<4.0; 0.0.119 Requires-Python >=3.8.1,<4.0; 0.0.120 Requires-Python >=3.8.1,<4.0; 0.0.121 Requires-Python >=3.8.1,<4.0; 0.0.122 Requires-Python >=3.8.1,<4.0; 0.0.123 Requires-Python >=3.8.1,<4.0; 0.0.124 Requires-Python >=3.8.1,<4.0; 0.0.125 Requires-Python >=3.8.1,<4.0; 0.0.126 Requires-Python >=3.8.1,<4.0; 0.0.127 Requires-Python >=3.8.1,<4.0; 0.0.128 Requires-Python >=3.8.1,<4.0; 0.0.129 Requires-Python >=3.8.1,<4.0; 0.0.130 Requires-Python >=3.8.1,<4.0; 0.0.131 Requires-Python >=3.8.1,<4.0; 0.0.132 Requires-Python >=3.8.1,<4.0; 0.0.133 Requires-Python >=3.8.1,<4.0; 0.0.134 Requires-Python >=3.8.1,<4.0; 0.0.135 Requires-Python >=3.8.1,<4.0; 0.0.136 Requires-Python >=3.8.1,<4.0; 0.0.137 Requires-Python >=3.8.1,<4.0; 0.0.138 Requires-Python >=3.8.1,<4.0; 0.0.139 Requires-Python >=3.8.1,<4.0; 0.0.140 Requires-Python >=3.8.1,<4.0; 0.0.141 Requires-Python >=3.8.1,<4.0; 0.0.142 Requires-Python >=3.8.1,<4.0; 0.0.143 Requires-Python >=3.8.1,<4.0; 0.0.144 Requires-Python >=3.8.1,<4.0; 0.0.145 Requires-Python >=3.8.1,<4.0; 0.0.146 Requires-Python >=3.8.1,<4.0; 0.0.28 Requires-Python >=3.8.1,<4.0; 0.0.29 Requires-Python >=3.8.1,<4.0; 0.0.30 Requires-Python >=3.8.1,<4.0; 0.0.31 Requires-Python >=3.8.1,<4.0; 0.0.32 Requires-Python >=3.8.1,<4.0; 0.0.33 Requires-Python >=3.8.1,<4.0; 0.0.34 Requires-Python >=3.8.1,<4.0; 0.0.35 Requires-Python >=3.8.1,<4.0; 0.0.36 Requires-Python >=3.8.1,<4.0; 0.0.37 Requires-Python >=3.8.1,<4.0; 0.0.38 Requires-Python >=3.8.1,<4.0; 0.0.39 Requires-Python >=3.8.1,<4.0; 0.0.40 Requires-Python >=3.8.1,<4.0; 0.0.41 Requires-Python >=3.8.1,<4.0; 0.0.42 Requires-Python >=3.8.1,<4.0; 0.0.43 Requires-Python >=3.8.1,<4.0; 0.0.44 Requires-Python >=3.8.1,<4.0; 0.0.45 Requires-Python >=3.8.1,<4.0; 0.0.46 Requires-Python >=3.8.1,<4.0; 0.0.47 Requires-Python >=3.8.1,<4.0; 0.0.48 Requires-Python >=3.8.1,<4.0; 0.0.49 Requires-Python >=3.8.1,<4.0; 0.0.50 Requires-Python >=3.8.1,<4.0; 0.0.51 Requires-Python >=3.8.1,<4.0; 0.0.52 Requires-Python >=3.8.1,<4.0; 0.0.53 Requires-Python >=3.8.1,<4.0; 0.0.54 Requires-Python >=3.8.1,<4.0; 0.0.55 Requires-Python >=3.8.1,<4.0; 0.0.56 Requires-Python >=3.8.1,<4.0; 0.0.57 Requires-Python >=3.8.1,<4.0; 0.0.58 Requires-Python >=3.8.1,<4.0; 0.0.59 Requires-Python >=3.8.1,<4.0; 0.0.60 Requires-Python >=3.8.1,<4.0; 0.0.61 Requires-Python >=3.8.1,<4.0; 0.0.63 Requires-Python >=3.8.1,<4.0; 0.0.64 Requires-Python >=3.8.1,<4.0; 0.0.65 Requires-Python >=3.8.1,<4.0; 0.0.66 Requires-Python >=3.8.1,<4.0; 0.0.67 Requires-Python >=3.8.1,<4.0; 0.0.68 Requires-Python >=3.8.1,<4.0; 0.0.69 Requires-Python >=3.8.1,<4.0; 0.0.70 Requires-Python >=3.8.1,<4.0; 0.0.71 Requires-Python >=3.8.1,<4.0; 0.0.72 
Requires-Python >=3.8.1,<4.0; 0.0.73 Requires-Python >=3.8.1,<4.0; 0.0.74 Requires-Python >=3.8.1,<4.0; 0.0.75 Requires-Python >=3.8.1,<4.0; 0.0.76 Requires-Python >=3.8.1,<4.0; 0.0.77 Requires-Python >=3.8.1,<4.0; 0.0.78 Requires-Python >=3.8.1,<4.0; 0.0.79 Requires-Python >=3.8.1,<4.0; 0.0.80 Requires-Python >=3.8.1,<4.0; 0.0.81 Requires-Python >=3.8.1,<4.0; 0.0.82 Requires-Python >=3.8.1,<4.0; 0.0.83 Requires-Python >=3.8.1,<4.0; 0.0.84 Requires-Python >=3.8.1,<4.0; 0.0.85 Requires-Python >=3.8.1,<4.0; 0.0.86 Requires-Python >=3.8.1,<4.0; 0.0.87 Requires-Python >=3.8.1,<4.0; 0.0.88 Requires-Python >=3.8.1,<4.0; 0.0.89 Requires-Python >=3.8.1,<4.0; 0.0.90 Requires-Python >=3.8.1,<4.0; 0.0.91 Requires-Python >=3.8.1,<4.0; 0.0.92 Requires-Python >=3.8.1,<4.0; 0.0.93 Requires-Python >=3.8.1,<4.0; 0.0.94 Requires-Python >=3.8.1,<4.0; 0.0.95 Requires-Python >=3.8.1,<4.0; 0.0.96 Requires-Python >=3.8.1,<4.0; 0.0.97 Requires-Python >=3.8.1,<4.0; 0.0.98 Requires-Python >=3.8.1,<4.0; 0.0.99 Requires-Python >=3.8.1,<4.0; 0.0.99rc0 Requires-Python >=3.8.1,<4.0
ERROR: Could not find a version that satisfies the requirement langchain==0.0.101 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21, 0.0.22, 0.0.23, 0.0.24, 0.0.25, 0.0.26, 0.0.27)
ERROR: No matching distribution found for langchain==0.0.101


yinanhe commented on May 12, 2024

Here is my conda environment on Windows 11 with Python 3.8, @jackylee1:

name: py38
channels:
  - msys2
  - defaults
dependencies:
  - ca-certificates=2023.01.10=haa95532_0
  - libffi=3.4.2=hd77b12b_6
  - libpython=2.1=py38_0
  - m2w64-binutils=2.25.1=5
  - m2w64-bzip2=1.0.6=6
  - m2w64-crt-git=5.0.0.4636.2595836=2
  - m2w64-gcc=5.3.0=6
  - m2w64-gcc-ada=5.3.0=6
  - m2w64-gcc-fortran=5.3.0=6
  - m2w64-gcc-libgfortran=5.3.0=6
  - m2w64-gcc-libs=5.3.0=7
  - m2w64-gcc-libs-core=5.3.0=7
  - m2w64-gcc-objc=5.3.0=6
  - m2w64-gmp=6.1.0=2
  - m2w64-headers-git=5.0.0.4636.c0ad18a=2
  - m2w64-isl=0.16.1=2
  - m2w64-libiconv=1.14=6
  - m2w64-libmangle-git=5.0.0.4509.2e5a9a2=2
  - m2w64-libwinpthread-git=5.0.0.4634.697f757=2
  - m2w64-make=4.1.2351.a80a8b8=2
  - m2w64-mpc=1.0.3=3
  - m2w64-mpfr=3.1.4=4
  - m2w64-pkg-config=0.29.1=2
  - m2w64-toolchain=5.3.0=7
  - m2w64-tools-git=5.0.0.4592.90b8472=2
  - m2w64-windows-default-manifest=6.4=3
  - m2w64-winpthreads-git=5.0.0.4634.697f757=2
  - m2w64-zlib=1.2.8=10
  - msys2-conda-epoch=20160418=1
  - openssl=1.1.1t=h2bbff1b_0
  - pip=23.0.1=py38haa95532_0
  - python=3.8.16=h6244533_3
  - sqlite=3.41.2=h2bbff1b_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wheel=0.38.4=py38haa95532_0
  - pip:
    - absl-py==1.4.0
    - accelerate==0.18.0
    - addict==2.4.0
    - aiofiles==23.1.0
    - aiohttp==3.8.4
    - aiosignal==1.3.1
    - altair==4.2.2
    - antlr4-python3-runtime==4.9.3
    - anyio==3.6.2
    - async-timeout==4.0.2
    - attrs==23.1.0
    - bitsandbytes==0.38.1
    - blis==0.7.9
    - boto3==1.26.117
    - botocore==1.29.117
    - braceexpand==0.1.7
    - cachetools==5.3.0
    - catalogue==2.0.8
    - certifi==2022.12.7
    - charset-normalizer==3.1.0
    - click==7.1.2
    - colorama==0.4.6
    - contourpy==1.0.7
    - cycler==0.11.0
    - cymem==2.0.7
    - cython==0.29.34
    - dataclasses-json==0.5.7
    - decord==0.6.0
    - detectron2==0.6
    - einops==0.6.1
    - en-core-web-sm==3.0.0
    - entrypoints==0.4
    - fairscale==0.4.4
    - fastapi==0.95.1
    - ffmpy==0.3.0
    - filelock==3.12.0
    - fonttools==4.39.3
    - frozenlist==1.3.3
    - fsspec==2023.4.0
    - future==0.18.3
    - google-auth==2.17.3
    - google-auth-oauthlib==1.0.0
    - gradio==3.27.0
    - gradio-client==0.1.3
    - greenlet==2.0.2
    - grpcio==1.54.0
    - h11==0.14.0
    - httpcore==0.17.0
    - httpx==0.24.0
    - huggingface-hub==0.13.4
    - idna==3.4
    - imageio==2.27.0
    - imageio-ffmpeg==0.4.8
    - importlib-resources==5.12.0
    - jinja2==3.1.2
    - jmespath==1.0.1
    - joblib==1.2.0
    - jsonschema==4.17.3
    - kiwisolver==1.4.4
    - langchain==0.0.101
    - linkify-it-py==2.0.0
    - lvis==0.5.3
    - markdown==3.4.3
    - markdown-it-py==2.2.0
    - markupsafe==2.1.2
    - marshmallow==3.19.0
    - marshmallow-enum==1.5.1
    - matplotlib==3.7.1
    - mdit-py-plugins==0.3.3
    - mdurl==0.1.2
    - mmcv==2.0.0
    - mmengine==0.7.2
    - model-index==0.1.11
    - multidict==6.0.4
    - murmurhash==1.0.9
    - mypy-extensions==1.0.0
    - nltk==3.8.1
    - numpy==1.24.2
    - oauthlib==3.2.2
    - omegaconf==2.3.0
    - openai==0.27.4
    - opencv-python==4.7.0.72
    - openmim==0.3.7
    - ordered-set==4.1.0
    - orjson==3.8.10
    - packaging==23.1
    - pandas==2.0.0
    - pathy==0.10.1
    - pillow==9.5.0
    - pkgutil-resolve-name==1.3.10
    - preshed==3.0.8
    - protobuf==4.22.3
    - psutil==5.9.5
    - pyasn1==0.5.0
    - pyasn1-modules==0.3.0
    - pycocotools-windows==2.0.0.2
    - pydantic==1.8.2
    - pydeprecate==0.3.1
    - pydub==0.25.1
    - pyparsing==3.0.9
    - pyrsistent==0.19.3
    - python-multipart==0.0.6
    - pytorch-lightning==1.5.10
    - pytz==2023.3
    - pyyaml==6.0
    - regex==2023.3.23
    - requests==2.28.2
    - requests-oauthlib==1.3.1
    - rich==13.3.4
    - rsa==4.9
    - s3transfer==0.6.0
    - sacremoses==0.0.53
    - scipy==1.10.0
    - semantic-version==2.10.0
    - sentencepiece==0.1.98
    - setuptools==59.5.0
    - simplet5==0.1.4
    - six==1.16.0
    - smart-open==6.3.0
    - sniffio==1.3.0
    - spacy==3.0.9
    - spacy-legacy==3.0.12
    - sqlalchemy==1.4.47
    - srsly==2.4.6
    - starlette==0.26.1
    - tabulate==0.9.0
    - tenacity==8.2.2
    - tensorboard==2.12.2
    - tensorboard-data-server==0.7.0
    - tensorboard-plugin-wit==1.8.1
    - termcolor==2.2.0
    - thinc==8.0.17
    - timm==0.4.12
    - tokenizers==0.13.3
    - tomli==2.0.1
    - toolz==0.12.0
    - torch==1.13.1
    - torchmetrics==0.11.4
    - torchvision==0.14.1
    - tqdm==4.65.0
    - transformers==4.16.2
    - typer==0.3.2
    - typing-extensions==4.5.0
    - typing-inspect==0.8.0
    - tzdata==2023.3
    - uc-micro-py==1.0.1
    - urllib3==1.26.15
    - uvicorn==0.21.1
    - wasabi==0.10.1
    - webdataset==0.2.48
    - websockets==11.0.2
    - werkzeug==2.2.3
    - wget==3.2
    - yapf==0.33.0
    - yarl==1.8.2
    - zipp==3.15.0
prefix: C:\Users\pjlab\anaconda3\envs\py38
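(A note on the ResolvePackageNotFound errors above: they come from Linux-specific build strings such as h06a4308_0 baked into an exported YAML, which do not exist on Windows. Re-exporting without build strings, e.g.

conda env export --no-builds > environment.yaml

gives a file that is portable across platforms; this is a suggested workaround, not how the uploaded file was produced.)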


yinanhe commented on May 12, 2024

I am using Windows 11 and Python 3.8.16; langchain==0.0.101 exists.


848150331 commented on May 12, 2024

Same here, I am running into all kinds of dependency conflicts.


yinanhe commented on May 12, 2024

I set up Python 3.8.16 and installed everything except the last two packages, en_core_web_sm and detectron2.

Thanks for your feedback! I fixed the requirements.txt and added the instructions for detectron2 and en_core_web_sm in the installation part.


jackylee1 commented on May 12, 2024

INFO: pip is looking at multiple versions of wget to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of fairscale to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of transformers to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of timm to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of langchain to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install -r requirements.txt (line 6) and transformers==4.28.1 because these package versions have conflicting dependencies.

The conflict is caused by:
The user requested transformers==4.28.1
simplet5 0.1.4 depends on transformers==4.16.2
The user requested transformers==4.28.1
simplet5 0.1.3 depends on transformers==4.10.0
The user requested transformers==4.28.1
simplet5 0.1.2 depends on transformers==4.6.1
The user requested transformers==4.28.1
simplet5 0.1.1 depends on transformers==4.8.2
The user requested transformers==4.28.1
simplet5 0.1.0 depends on transformers==4.6.1
The user requested transformers==4.28.1
simplet5 0.0.9 depends on transformers==4.6.1
The user requested transformers==4.28.1
simplet5 0.0.7 depends on transformers==4.6.1

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM>


jackylee1 commented on May 12, 2024

(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
Collecting git+https://github.com/facebookresearch/detectron2.git
Cloning https://github.com/facebookresearch/detectron2.git to c:\users\administrator\appdata\local\temp\pip-req-build-x6pu3ck0
Running command git clone --filter=blob:none --quiet https://github.com/facebookresearch/detectron2.git 'C:\Users\Administrator\AppData\Local\Temp\pip-req-build-x6pu3ck0'
fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': OpenSSL SSL_read: Connection was reset, errno 10054
fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': Failed to connect to github.com port 443 after 21084 ms: Timed out
error: unable to read sha1 file of .clang-format (39b1b3d603ed0cf6b7f94c9c08067f148f35613f)
fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': OpenSSL SSL_read: Connection was reset, errno 10054
error: unable to read sha1 file of .github/CONTRIBUTING.md (9bab709cae689ba3b92dd52f7fbcc0c6926f4a38)
fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': Failed to connect to github.com port 443 after 21122 ms: Timed out


yinanhe commented on May 12, 2024

(quoting the detectron2 clone failure above)

Try exporting your proxy settings for git.
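For example (the proxy address and port are placeholders for your own):

git config --global http.proxy http://127.0.0.1:7890
git config --global https.proxy http://127.0.0.1:7890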


yinanhe commented on May 12, 2024

(quoting the pip dependency-conflict log above)

You can loosen the transformers version in requirements.txt if you only install video_chat.
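For example, a sketch of that loosening in requirements.txt (keeping simplet5, which pins transformers==4.16.2):

transformers>=4.16.2  # was transformers==4.28.1; lets pip resolve to 4.16.2 for simplet5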


jackylee1 commented on May 12, 2024

Try exporting your proxy settings for git.

That doesn't work. This is so frustrating.


jackylee1 commented on May 12, 2024

(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> python setup.py build --force develop
C:\Users\Administrator\miniconda3\envs\videochat\python.exe: can't open file 'D:\askmeany\ask-anything\video_chat_with_StableLM\setup.py': [Errno 2] No such file or directory
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> cd detectron2
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM\detectron2> python setup.py build --force develop
Traceback (most recent call last):
File "D:\askmeany\ask-anything\video_chat_with_StableLM\detectron2\setup.py", line 11, in <module>
from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils\__init__.py", line 4, in <module>
from .throughput_benchmark import ThroughputBenchmark
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils\throughput_benchmark.py", line 2, in <module>
import torch._C
ModuleNotFoundError: No module named 'torch._C'
This is really driving me crazy.


yinanhe commented on May 12, 2024

(quoting the torch._C traceback above)

Reinstall your PyTorch. Do you have a GPU in your machine?
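A quick way to check whether the installed build is CUDA-enabled:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"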


jackylee1 commented on May 12, 2024

Yes


jackylee1 commented on May 12, 2024

[INFO] initialize InternVideo model success!
Traceback (most recent call last):
File "D:\askmeany\ask-anything\video_chat_with_StableLM\app.py", line 33, in
dense_caption_model.initialize_model()
File "D:\askmeany\ask-anything\video_chat_with_StableLM\models\grit_model.py", line 14, in initialize_model
self.demo = init_demo(self.device)
File "D:\askmeany\ask-anything\video_chat_with_StableLM\models\grit_src\image_dense_captions.py", line 81, in init_demo
demo = VisualizationDemo(cfg)
File "D:\askmeany\ask-anything\video_chat_with_StableLM\models\grit_src\grit\predictor.py", line 80, in init
self.predictor = DefaultPredictor(cfg)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\detectron2\engine\defaults.py", line 282, in init
self.model = build_model(self.cfg)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\detectron2\modeling\meta_arch\build.py", line 23, in build_model
model.to(torch.device(cfg.MODEL.DEVICE))
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in apply
param_applied = fn(param)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\cuda_init
.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled


jackylee1 commented on May 12, 2024

I uninstalled the above and reinstalled with:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Traceback (most recent call last):
File "D:\askmeany\ask-anything\video_chat_with_StableLM\app.py", line 36, in
bot = StableLMBot()
File "D:\askmeany\ask-anything\video_chat_with_StableLM\stablelm.py", line 30, in init
self.m = AutoModelForCausalLM.from_pretrained(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\auto\auto_factory.py", line 424, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\auto\configuration_auto.py", line 632, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\auto\configuration_auto.py", line 347, in getitem
raise KeyError(key)
KeyError: 'gpt_neox'


yinanhe commented on May 12, 2024
raise KeyError(key)

KeyError: 'gpt_neox'

Which transformers version are you using? In chat_video_with_stablelm and chat_video_with_moss you should install the latest version, 4.28.1.
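For example:

pip install transformers==4.28.1
python -c "import transformers; print(transformers.__version__)"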


jackylee1 commented on May 12, 2024

I put the project on the D drive; why does the model download to C?

(videochat) PS D:\askmeany\Ask-Anything\video_chat_with_StableLM> python app.py
load checkpoint from pretrained_models/tag2text_swin_14m.pth
[INFO] initialize caption model success!
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Drop path rate: 0.0
No L_MHRA: True
Double L_MHRA: True
Use checkpoint: False
Checkpoint number: [0]
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Downloading (…)l-00003-of-00004.bin: 100%|████████████████████████████████████████| 9.75G/9.75G [14:30<00:00, 11.2MB/s]
Downloading (…)l-00004-of-00004.bin: 100%|████████████████████████████████████████| 2.45G/2.45G [03:37<00:00, 11.3MB/s]
Downloading shards: 100%|███████████████████████████████████████████████████████████████| 4/4 [18:09<00:00, 272.36s/it]
Loading checkpoint shards: 25%|██████████████▎ | 1/4 [00:16<00:48, 16.12s/it]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.p │
│ y:442 in load_state_dict │
│ │
│ 439 │ │ │ ) │
│ 440 │ │ return safe_load_file(checkpoint_file) │
│ 441 │ try: │
│ ❱ 442 │ │ return torch.load(checkpoint_file, map_location="cpu") │
│ 443 │ except Exception as e: │
│ 444 │ │ try: │
│ 445 │ │ │ with open(checkpoint_file) as f: │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\serialization.py:797 in │
│ load │
│ │
│ 794 │ │ │ # If we want to actually tail call to torch.jit.load, we need to │
│ 795 │ │ │ # reset back to the original position. │
│ 796 │ │ │ orig_position = opened_file.tell() │
│ ❱ 797 │ │ │ with _open_zipfile_reader(opened_file) as opened_zipfile: │
│ 798 │ │ │ │ if _is_torchscript_zip(opened_zipfile): │
│ 799 │ │ │ │ │ warnings.warn("'torch.load' received a zip file that looks like a To │
│ 800 │ │ │ │ │ │ │ │ " dispatching to 'torch.jit.load' (call 'torch.jit.loa │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\serialization.py:283 in │
│ __init__ │
│ │
│ 280 │
│ 281 class _open_zipfile_reader(_opener): │
│ 282 │ def __init__(self, name_or_buffer) -> None: │
│ ❱ 283 │ │ super().__init__(torch._C.PyTorchFileReader(name_or_buffer)) │
│ 284 │
│ 285 │
│ 286 class _open_zipfile_writer_file(_opener): │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.p │
│ y:446 in load_state_dict │
│ │
│ 443 │ except Exception as e: │
│ 444 │ │ try: │
│ 445 │ │ │ with open(checkpoint_file) as f: │
│ ❱ 446 │ │ │ │ if f.read(7) == "version": │
│ 447 │ │ │ │ │ raise OSError( │
│ 448 │ │ │ │ │ │ "You seem to have cloned a repository without having git-lfs ins │
│ 449 │ │ │ │ │ │ "git-lfs and run git lfs install followed by git lfs pull in │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\codecs.py:322 in decode │
│ │
│ 319 │ def decode(self, input, final=False): │
│ 320 │ │ # decode input (taking the buffer into account) │
│ 321 │ │ data = self.buffer + input │
│ ❱ 322 │ │ (result, consumed) = self._buffer_decode(data, self.errors, final) │
│ 323 │ │ # keep undecoded input until the next call │
│ 324 │ │ self.buffer = data[consumed:] │
│ 325 │ │ return result │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 128: invalid start byte

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py:36 in <module> │
│ │
│ 33 dense_caption_model.initialize_model() │
│ 34 print("[INFO] initialize dense caption model success!") │
│ 35 │
│ ❱ 36 bot = StableLMBot() │
│ 37 │
│ 38 def inference(video_path, input_tag, progress=gr.Progress()): │
│ 39 │ data = loadvideo_decord_origin(video_path) │
│ │
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py:30 in __init__ │
│ │
│ 27 class StableLMBot: │
│ 28 │ def __init__(self): │
│ 29 │ │ print(f"Starting to load the model to memory") │
│ ❱ 30 │ │ self.m = AutoModelForCausalLM.from_pretrained( │
│ 31 │ │ │ "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda() │
│ 32 │ │ self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b") │
│ 33 │ │ self.generator = pipeline('text-generation', model=self.m, tokenizer=self.tok, d │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\auto\auto │
│ _factory.py:471 in from_pretrained │
│ │
│ 468 │ │ │ ) │
│ 469 │ │ elif type(config) in cls._model_mapping.keys(): │
│ 470 │ │ │ model_class = _get_model_class(config, cls._model_mapping) │
│ ❱ 471 │ │ │ return model_class.from_pretrained( │
│ 472 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │
│ 473 │ │ │ ) │
│ 474 │ │ raise ValueError( │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.p │
│ y:2795 in from_pretrained │
│ │
│ 2792 │ │ │ │ mismatched_keys, │
│ 2793 │ │ │ │ offload_index, │
│ 2794 │ │ │ │ error_msgs, │
│ ❱ 2795 │ │ │ ) = cls._load_pretrained_model( │
│ 2796 │ │ │ │ model, │
│ 2797 │ │ │ │ state_dict, │
│ 2798 │ │ │ │ loaded_state_dict_keys, # XXX: rename? │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.p │
│ y:3109 in _load_pretrained_model │
│ │
│ 3106 │ │ │ │ # Skip the load for shards that only contain disk-offloaded weights when │
│ 3107 │ │ │ │ if shard_file in disk_only_shard_files: │
│ 3108 │ │ │ │ │ continue │
│ ❱ 3109 │ │ │ │ state_dict = load_state_dict(shard_file) │
│ 3110 │ │ │ │ │
│ 3111 │ │ │ │ # Mistmatched keys contains tuples key/shape1/shape2 of weights in the c │
│ 3112 │ │ │ │ # matching the weights in the model. │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.p │
│ y:458 in load_state_dict │
│ │
│ 455 │ │ │ │ │ │ "model. Make sure you have saved the model properly." │
│ 456 │ │ │ │ │ ) from e │
│ 457 │ │ except (UnicodeDecodeError, ValueError): │
│ ❱ 458 │ │ │ raise OSError( │
│ 459 │ │ │ │ f"Unable to load weights from pytorch checkpoint file for '{checkpoint_f │
│ 460 │ │ │ │ f"at '{checkpoint_file}'. " │
│ 461 │ │ │ │ "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please s │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OSError: Unable to load weights from pytorch checkpoint file for
'C:\Users\Administrator/.cache\huggingface\hub\models--stabilityai--stablelm-tuned-alpha-7b\snapshots\25071b093c15c0d1cb
2b2876c6deb621b764fcf5\pytorch_model-00002-of-00004.bin' at
'C:\Users\Administrator/.cache\huggingface\hub\models--stabilityai--stablelm-tuned-alpha-7b\snapshots\25071b093c15c0d1cb
2b2876c6deb621b764fcf5\pytorch_model-00002-of-00004.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint,
please set from_tf=True.


yinanhe commented on May 12, 2024

In Hugging Face, the default cache directory is ~/.cache/huggingface/. You can change the cache location by setting the shell environment variable TRANSFORMERS_CACHE to another directory:

export TRANSFORMERS_CACHE="/path/to/another/directory"

or

change https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat_with_StableLM/stablelm.py#L30 to

 self.m = AutoModelForCausalLM.from_pretrained(
            "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16, cache_dir='./').cuda()
 self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b", cache_dir='./')
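Since you are in PowerShell, the equivalent of the export above would be (the path is only a placeholder):

$env:TRANSFORMERS_CACHE = "D:\hf_cache"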


jackylee1 commented on May 12, 2024

This model seems to be a different size from the one downloaded from the official project on Hugging Face.


jackylee1 commented on May 12, 2024

Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [01:07<00:00, 16.99s/it]
Downloading (…)neration_config.json: 100%|████████████████████████████████████████████| 111/111 [00:00<00:00, 55.5kB/s]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py:36 in <module> │
│ │
│ 33 dense_caption_model.initialize_model() │
│ 34 print("[INFO] initialize dense caption model success!") │
│ 35 │
│ ❱ 36 bot = StableLMBot() │
│ 37 │
│ 38 def inference(video_path, input_tag, progress=gr.Progress()): │
│ 39 │ data = loadvideo_decord_origin(video_path) │
│ │
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py:30 in __init__ │
│ │
│ 27 class StableLMBot: │
│ 28 │ def __init__(self): │
│ 29 │ │ print(f"Starting to load the model to memory") │
│ ❱ 30 │ │ self.m = AutoModelForCausalLM.from_pretrained( │
│ 31 │ │ │ "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda() │
│ 32 │ │ self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b") │
│ 33 │ │ self.generator = pipeline('text-generation', model=self.m, tokenizer=self.tok, d │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:90 │
│ 5 in cuda │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:82 │
│ 0 in _apply │
│ │
│ 817 │ │ │ # track autograd history of param_applied, so we have to use │
│ 818 │ │ │ # with torch.no_grad():
│ 819 │ │ │ with torch.no_grad(): │
│ ❱ 820 │ │ │ │ param_applied = fn(param) │
│ 821 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 822 │ │ │ if should_use_set_data: │
│ 823 │ │ │ │ param.data = param_applied │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:90 │
│ 5 in <lambda> │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 8.00 GiB total capacity; 6.93 GiB already
allocated; 0 bytes free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting
max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

How much GPU memory do I need? Is 8G not enough?


jackylee1 commented on May 12, 2024

[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:48<00:00, 12.04s/it]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py:36 in <module> │
│ │
│ 33 dense_caption_model.initialize_model() │
│ 34 print("[INFO] initialize dense caption model success!") │
│ 35 │
│ ❱ 36 bot = StableLMBot() │
│ 37 │
│ 38 def inference(video_path, input_tag, progress=gr.Progress()): │
│ 39 │ data = loadvideo_decord_origin(video_path) │
│ │
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py:30 in __init__ │
│ │
│ 27 class StableLMBot: │
│ 28 │ def __init__(self): │
│ 29 │ │ print(f"Starting to load the model to memory") │
│ ❱ 30 │ │ self.m = AutoModelForCausalLM.from_pretrained( │
│ 31 │ │ │ "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda() │
│ 32 │ │ self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b") │
│ 33 │ │ self.generator = pipeline('text-generation', model=self.m, tokenizer=self.tok, d │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:90 │
│ 5 in cuda │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:82 │
│ 0 in _apply │
│ │
│ 817 │ │ │ # track autograd history of param_applied, so we have to use │
│ 818 │ │ │ # with torch.no_grad():
│ 819 │ │ │ with torch.no_grad(): │
│ ❱ 820 │ │ │ │ param_applied = fn(param) │
│ 821 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 822 │ │ │ if should_use_set_data: │
│ 823 │ │ │ │ param.data = param_applied │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:90 │
│ 5 in <lambda> │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 8.00 GiB total capacity; 6.93 GiB already
allocated; 0 bytes free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting
max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Is even 8G not enough?


yinanhe commented on May 12, 2024

(quoting the CUDA out-of-memory traceback above)

Is even 8G not enough?

In chat_video, GPU memory should be at least 12G.
StableLM and MOSS may use more GPU memory; we have only tested them on an 80G A100.
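You can check how much memory your GPU actually has with, for example:

python -c "import torch; print(torch.cuda.get_device_properties(0).total_memory / 1024**3)"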


jackylee1 commented on May 12, 2024

[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:48<00:00, 12.04s/it]
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in <module>
    bot = StableLMBot()
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py", line 30, in __init__
    self.m = AutoModelForCausalLM.from_pretrained(
        "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda()
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in cuda
    return self._apply(lambda t: t.cuda(device))
  (several nested _apply frames omitted)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 8.00 GiB total capacity; 6.93 GiB already allocated; 0 bytes free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Is 8 GB really not enough?

In chat_video, GPU memory should be at least 12 GB. StableLM and MOSS may use more GPU memory; we have only tested them on an 80 GB A100.

...

from ask-anything.

yinanhe avatar yinanhe commented on May 12, 2024

In our tests, you can try commenting out T5 in video_chat so that it fits in 8 GB of GPU memory.

According to Stability-AI/StableLM#17 , StableLM with 3B parameters needs a GPU with at least 12 GB of memory.
MOSS, with 16B parameters, needs even more GPU memory.

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

Some of these projects need an API key and some need high-end hardware. Does minigpt4 also need an API key? And what does it take to run MOSS? I was able to run StableLM locally with my setup; I didn't expect that adding video retrieval would push the requirements this high.

from ask-anything.

yinanhe avatar yinanhe commented on May 12, 2024

minigpt4 does not require an API key.
How much GPU memory does StableLM occupy? The video plugins take about 7 GB.

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

from ask-anything.

yinanhe avatar yinanhe commented on May 12, 2024

StableLM-base-7B, which uses 13 GB of VRAM when run with 8-bit quantization.
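As a side note, 8-bit loading goes through the bitsandbytes integration in transformers. A minimal sketch of loading the model in 8-bit (this assumes bitsandbytes and accelerate are installed and a CUDA GPU is available; it is not the repository's code):

from transformers import AutoModelForCausalLM

# Load StableLM with 8-bit weights to reduce VRAM usage.
# load_in_8bit requires the bitsandbytes package; device_map="auto"
# lets accelerate place the weights on the available GPU(s).
m = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b",
    load_in_8bit=True,
    device_map="auto",
)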

Maybe you can consider changing https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat_with_StableLM/stablelm.py#L33 to

self.generator = pipeline('text-generation', model=self.m, tokenizer=self.tok, device=-1)
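For reference, in the transformers pipeline API, device=-1 runs the pipeline on the CPU, while device=0 selects the first CUDA device.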

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

Same issue here:

[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:56<00:00, 14.05s/it]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py:36 in <module> │
│ │
│ 33 dense_caption_model.initialize_model() │
│ 34 print("[INFO] initialize dense caption model success!") │
│ 35 │
│ ❱ 36 bot = StableLMBot() │
│ 37 │
│ 38 def inference(video_path, input_tag, progress=gr.Progress()): │
│ 39 │ data = loadvideo_decord_origin(video_path) │
│ │
│ D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py:30 in __init__ │
│ │
│ 27 class StableLMBot: │
│ 28 │ def __init__(self): │
│ 29 │ │ print(f"Starting to load the model to memory") │
│ ❱ 30 │ │ self.m = AutoModelForCausalLM.from_pretrained( │
│ 31 │ │ │ "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda() │
│ 32 │ │ self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b") │
│ 33 │ │ self.generator = pipeline('text-generation', model=self.m, tokenizer=self.tok, d │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:90 │
│ 5 in cuda │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:79 │
│ 7 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:82 │
│ 0 in _apply │
│ │
│ 817 │ │ │ # track autograd history of param_applied, so we have to use │
│ 818 │ │ │ # with torch.no_grad(): │
│ 819 │ │ │ with torch.no_grad(): │
│ ❱ 820 │ │ │ │ param_applied = fn(param) │
│ 821 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 822 │ │ │ if should_use_set_data: │
│ 823 │ │ │ │ param.data = param_applied │
│ │
│ C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py:90 │
│ 5 in <lambda> │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 8.00 GiB total capacity; 6.93 GiB already
allocated; 0 bytes free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting
max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

from ask-anything.

yinanhe avatar yinanhe commented on May 12, 2024

│ 31 │ │ │ "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda() │

In this line, you could try to put stablelm in CPU by removing .cuda().
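Concretely, the change amounts to dropping the .cuda() call so the weights stay in host memory (a sketch based on the line quoted above, not a tested patch):

from transformers import AutoModelForCausalLM
import torch

# before: from_pretrained(...).cuda() pushed all fp16 weights onto the GPU
# after: without .cuda(), the model is kept on the CPU
m = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16)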

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [01:05<00:00, 16.47s/it]
Sucessfully loaded the model to the memory
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().

but I cannot open the link

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

It opens at 127.0.0.1:7860, but when I ask a question it shows the error below:

[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:58<00:00, 14.64s/it]
Sucessfully loaded the model to the memory
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().
[04/24 02:13:41] asyncio ERROR: Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\asyncio\events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
Second 1:a close up image of a man playing an instrument.
Second 2:a close up image of a man playing an instrument.
Second 3:a close up image of a man playing an instrument.
Second 4:a close up image of a man playing an instrument.
Second 5:a close up image of a man playing an instrument.
Second 6:a close up image of a man playing an instrument.
Second 7:a close up image of a man playing an instrument.
Second 8:a close up image of a man playing an instrument.
Second 9:a close up image of a man playing an instrument.
Second 10:a close up of a person playing a guitar.
Second 1 : the hand of a woman,a woman in the photo.
Second 6 : the hand of a person,a black shirt on a person,a white bracelet.

Setting pad_token_id to eos_token_id:0 for open-end generation.
Traceback (most recent call last):
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\gradio\routes.py", line 401, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\gradio\blocks.py", line 1302, in process_api
result = await self.call_function(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\gradio\blocks.py", line 1025, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py", line 98, in run_text
output = self.generate(history)
File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py", line 75, in generate
result = self.generator(text, max_new_tokens=1024, num_return_sequences=1, num_beams=1, do_sample=True,
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\text_generation.py", line 209, in call
return super().call(text_inputs, **kwargs)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\base.py", line 1109, in call
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\base.py", line 1116, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\base.py", line 1015, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\text_generation.py", line 251, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\generation\utils.py", line 1485, in generate
return self.sample(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\generation\utils.py", line 2524, in sample
outputs = self(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\gpt_neox\modeling_gpt_neox.py", line 662, in forward
outputs = self.gpt_neox(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\gpt_neox\modeling_gpt_neox.py", line 553, in forward
outputs = layer(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\gpt_neox\modeling_gpt_neox.py", line 321, in forward
self.input_layernorm(hidden_states),
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
return F.layer_norm(
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

from ask-anything.

yinanhe avatar yinanhe commented on May 12, 2024

This is because this operator does not support fp16 in the CPU build of PyTorch. It is recommended that you change

"stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda()

to torch_dtype='auto'.
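The loading line would then read roughly as follows (a sketch; torch_dtype='auto' tells transformers to take the dtype from the checkpoint's saved config instead of forcing float16, which in this thread avoided the missing fp16 LayerNorm kernel on CPU):

from transformers import AutoModelForCausalLM

# 'auto' defers the dtype choice to the model's saved config
# instead of forcing float16 on the CPU
m = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype='auto')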

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [01:48<00:00, 27.02s/it]
Sucessfully loaded the model to the memory
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().
Second 1:a close up image of a man playing an instrument.
Second 2:a close up image of a man playing an instrument.
Second 3:a close up image of a man playing an instrument.
Second 4:a close up image of a man playing an instrument.
Second 5:a close up image of a man playing an instrument.
Second 6:a close up image of a man playing an instrument.
Second 7:a close up image of a man playing an instrument.
Second 8:a close up image of a man playing an instrument.
Second 9:a close up image of a man playing an instrument.
Second 10:a close up of a person playing a guitar.
Second 1 : the hand of a woman,a woman in the photo.
Second 6 : the hand of a person,a black shirt on a person,a white bracelet.

Setting pad_token_id to eos_token_id:0 for open-end generation.

I have waited quite a long time without anything being returned; still waiting.

from ask-anything.

yinanhe avatar yinanhe commented on May 12, 2024

I have waited quite a long time without anything being returned; still waiting.

Because this runs StableLM on your CPU, it is slow.

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

I have waited quite a long time without anything being returned; still waiting.

Because this runs StableLM on your CPU, it is slow.

So will it eventually answer? Do I need to make any changes?

from ask-anything.

yinanhe avatar yinanhe commented on May 12, 2024

Yes, but I haven't run StableLM on the CPU, so I don't know how slow it is.

from ask-anything.

jackylee1 avatar jackylee1 commented on May 12, 2024

Yes, but I haven't run StableLM on the CPU, so I don't know how slow it is.

It succeeded, but the result is not good. Thank you.

from ask-anything.
