note's Issues

NO_PUBKEY error & fix for Ubuntu's missing add-apt-repository command

To solve this problem, use this command:

gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv 9BDB3D89CE49EC21

which retrieves the key from the Ubuntu keyserver. Then import it into apt:

gpg --export --armor 9BDB3D89CE49EC21 | sudo apt-key add -

Fix for Ubuntu's missing add-apt-repository command

apt-get install python-software-properties software-properties-common

Docker Tensorflow

pull an image

docker pull tensorflow/tensorflow:latest-gpu-py3

run a container

docker run -it --rm --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 python
docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name tfgpu --ipc=host --network=host -p 6006:6006 -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3 /bin/bash

OpenAlea

Install

Refer to the OpenAlea wiki and the docs:

https://github.com/openalea/openalea/wiki/install
https://openalea.readthedocs.io/en/latest/install.html

conda create -n openalea python=2.7
activate openalea
conda install -y -c openalea openalea.sconsx openalea.visualea openalea.components openalea.oalab

conda install -c openalea/label/unstable openalea.plantgl
conda install -y -c openalea openalea.mtg openalea.lpy pyqglviewer

PlantGL

https://anaconda.org/openalea/openalea.plantgl

Searched for the package on Anaconda and tried several conda install variants; the one from the unstable label finally worked.

To install plantgl, I successfully used the command below:
conda install -c openalea/label/unstable openalea.plantgl

visualea

https://openalea.readthedocs.io/en/latest/tutorials/visualea/beginner.html
conda create -n visualea_tuto
activate visualea_tuto
conda install -y -c openalea openalea.visualea openalea.components openalea.plantgl boost=1.66 -c openalea/label/unstable

R_USER not defined

After the settings above, restart PyCharm; otherwise the problem may persist in PyCharm.
If it still fails, see https://blog.csdn.net/bhcgdh/article/details/81357204
https://blog.csdn.net/lq_520/article/details/83856429?fps=1&locationNum=2

11. Common Linux operations | check version | environment variables | download mirrors | batch extraction

Check the release version

cat /etc/issue

Add an environment variable

https://blog.csdn.net/Bleachswh/article/details/51334661
vim /etc/profile
source /etc/profile
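
As a minimal sketch of the pattern above — the variable name MY_TOOL_HOME, its value, and the scratch file are all made up for illustration; in practice you would append the export line to /etc/profile or ~/.bashrc:

```shell
# Append an export line to a profile-style file, then re-source it.
PROFILE=/tmp/profile_demo                      # stand-in for /etc/profile
echo 'export MY_TOOL_HOME=/opt/mytool' >> "$PROFILE"
. "$PROFILE"                                   # same effect as `source /etc/profile`
echo "$MY_TOOL_HOME"
```

Re-sourcing only affects the current shell; new login shells pick the variable up automatically.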

Switch the download mirror

Switch to the USTC mirror: https://www.linuxidc.com/Linux/2017-01/139319.htm
Switch to the Aliyun mirror: https://www.cnblogs.com/fhychzu/p/7542754.html

Batch extraction

https://blog.csdn.net/silentwolfyh/article/details/53909518
for i in $(ls *.tar);do tar xvf $i;done
for i in $(ls *.7z);do 7z x $i;done
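
The loops above parse ls output, which breaks on filenames containing spaces; a hedged variant is to iterate over the glob directly:

```shell
# Iterate over the glob itself instead of parsing `ls`; quoting "$f"
# keeps filenames with spaces intact.
for f in *.tar; do
  [ -e "$f" ] || continue   # no-op when nothing matches the glob
  tar xvf "$f"
done
```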

Counting

Number of regular files in a directory:
ls -l |grep "^-"|wc -l

Number of subdirectories in a directory:
ls -l |grep "^d"|wc -l
http://blog.sina.com.cn/s/blog_464f6dba01012vwv.html
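
The ls | grep approach works for typical listings; as an alternative sketch, find is robust to unusual filenames:

```shell
# Count regular files directly under the current directory.
find . -maxdepth 1 -type f | wc -l
# Count subdirectories (note: this count includes '.' itself, so subtract 1).
find . -maxdepth 1 -type d | wc -l
```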

Docker commit | push | pull | save/export | load/import

docker commit -m="commit message" -a="author" <containerID> <targetImageName>:[tag]

Push to Docker Hub

https://blog.csdn.net/boonya/article/details/74906927

  1. docker commit -m="message comment" -a='author' containerid imagename:tag
  2. docker login
    <username>
    <password>
  3. docker tag imagename:tag jinsc/imagename:tag
  4. docker push jinsc/imagename:tag

pull from docker hub

docker pull jinsc/imagename:tag

Save locally

https://blog.csdn.net/anxpp/article/details/51810776
docker save -o /home/jinsc/dockertar/ubuntu16py36torch12cuda10.tar ubuntu16py36torch12cuda10:20190914
or
docker export 7691a814370e > ubuntu.tar

Load from a local file

docker load < ubuntu16py36torch12cuda10.tar

You can also use docker import to turn a container snapshot file back into an image, e.g.:

$ cat ubuntu.tar | docker import - test/ubuntu:v1.0
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
test/ubuntu v1.0 9d37a6082e97 About a minute ago 171.3 MB

https://yeasy.gitbooks.io/docker_practice/container/import_export.html

3. Ubuntu 16.04: install the NVIDIA driver, CUDA, cuDNN, Python, Docker, nvidia-docker, Rancher

DeepLearning_ubuntu16.0.4-server.docx

NVIDIA driver installation

  1. Install the NVIDIA driver
    Reference: http://www.52nlp.cn/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0%E4%B8%BB%E6%9C%BA%E7%8E%AF%E5%A2%83%E9%85%8D%E7%BD%AE-ubuntu-16-04-nvidia-gtx-1080-cuda-8
    Use nvidia-smi to check whether the driver has been installed successfully.
    Driver reference for dual-boot systems: https://blog.csdn.net/zhangk9509/article/details/79300976
    Switch the download mirror (USTC): https://www.linuxidc.com/Linux/2017-01/139319.htm
    Switch the download mirror (Aliyun): https://www.cnblogs.com/fhychzu/p/7542754.html
    Driver download address:
    sudo chmod a+x NVIDIA-Linux-x86_64-430.40.run
    sudo sh NVIDIA-Linux-x86_64-430.40.run
    Ctrl+Alt+F1 switches to a tty
    sudo service lightdm stop  # stop the display service
    sudo service lightdm start

Add the Graphics Drivers PPA:
$ sudo add-apt-repository ppa:graphics-drivers/ppa
Update the package lists:
$ sudo apt-get update
In System Settings -> Software & Updates -> Additional Drivers, select the matching driver and click Apply Changes.

CUDA installation

2. Install CUDA (the runfile also bundles an NVIDIA driver):
https://developer.nvidia.com/cuda-90-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=runfilelocal

sudo chmod a+x cuda_9.0.176_384.81_linux.run
sudo sh cuda_9.0.176_384.81_linux.run

Please make sure that

  • PATH includes /usr/local/cuda-9.0/bin
  • LD_LIBRARY_PATH includes /usr/local/cuda-9.0/lib64, or, add /usr/local/cuda-9.0/lib64 to /etc/ld.so.conf and run ldconfig as root

After installation, declare the environment variables and append them to the end of ~/.bashrc (the paths must match the installed version, here cuda-9.0):
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

To add the variables permanently: https://blog.csdn.net/Bleachswh/article/details/51334661
vim /etc/profile
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

cuDNN installation

3. Install cuDNN from the tar.gz archive:
tar -zxf cudnn-9.0-linux-x64-v7.6.2.24.tgz
cd cuda
ls
sudo cp lib64/* /usr/local/cuda/lib64/
sudo cp include/cudnn.h /usr/local/cuda/include/
cd /usr/local/cuda/lib64/
ls
sudo ln -sf /usr/local/cuda/lib64/libcudnn.so.7.6.2 /usr/local/cuda/lib64/libcudnn.so.7
sudo ln -sf /usr/local/cuda/lib64/libcudnn.so.7 /usr/local/cuda/lib64/libcudnn.so
sudo ldconfig
sudo reboot

Python installation

  1. Install a Python environment (Anaconda):
    sudo bash Anaconda3-2019.07-Linux-x86_64.sh
    Edit the environment variables:
    sudo gedit /etc/profile
    source /etc/profile
    conda deactivate
    or
    export Anaconda=/anaconda3/bin/
    export PATH=$PATH:$Anaconda
    source ~/.bashrc

  2. Install the PyTorch environment:
    pip install --user torch torchvision

  3. Optionally install TensorFlow or other frameworks.

Docker installation

1. Install Docker:
https://yeasy.gitbooks.io/docker_practice/content/install/ubuntu.html

The difference between docker and nvidia-docker:
https://blog.csdn.net/u013355826/article/details/89633619

4. Install nvidia-docker
The following installs Docker and nvidia-docker for Linux-based deep learning:
https://blog.csdn.net/junxiacaocao/article/details/79471770
https://github.com/NVIDIA/nvidia-docker

Official installation method:
Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker

Usage

Test nvidia-smi with the latest official CUDA image

$ docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

Start a GPU enabled container on two GPUs

$ docker run --gpus 2 nvidia/cuda:9.0-base nvidia-smi

Starting a GPU enabled container on specific GPUs

$ docker run --gpus '"device=1,2"' nvidia/cuda:9.0-base nvidia-smi
$ docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:9.0-base nvidia-smi

Specifying a capability (graphics, compute, ...) for my container

Note this is rarely if ever used this way

$ docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi

If that fails, the unofficial (legacy) installation method:
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
sudo service nvidia-docker start
sudo nvidia-docker-plugin

Rancher installation

5. Install Rancher and use it to pull and manage images
https://www.cnblogs.com/gentleman-c/p/7387856.html

1.1 Run a Rancher server:
sudo docker run -d --name rancher-server -p 8080:8080 rancher/server
-d runs in the background; --name names the container; -p maps ports
docker run -it --rm --name ds -p 8888:8888 jupyter/datascience-notebook
-it gives an interactive terminal; --rm removes the container when it exits
Log in to Rancher at http://10.11.30.151:8080
Set a username (jinsc) and password (XXXX)

# Check the public IP (159.226.89.150):
curl cip.cc or curl ip.cip.cc or curl http://members.3322.org/dyndns/getip

# Add the host to Rancher

nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:1.10.0-gpu

http://(e5cfdd71b9e0 or 127.0.0.1):8888/?token=437d92fda454fe9759e65a9c26f155ea6b2a50b41bac4299

docker start containerID # docker attach containerID

# Once the host appears, inspect its containers

5. Mounting data volumes in Docker | for already-running containers

1. Mount with the -v flag at run time

Method 1: docker run -it -v /home/jinsc:/share microsoft/dotnet:latest /bin/bash

Method 2: use a volume to transfer files between host and container: https://blog.csdn.net/Leafage_M/article/details/78575205

docker volume create my-vol
docker volume inspect my-vol

Run a container and check that data syncs between host and container.

2. For a container that is already running:

docker ps  # get the target container's ID or name (here the container ID is 52261df2fab6)
docker inspect -f '{{.Id}}' <containerID>  # get the container's full ID
docker inspect -f '{{.Id}}' c996838f46c1  # returns the long ID

With that long ID, transferring files between host and container is simple.
docker cp <localFilePath> <fullID>:<containerPath>
docker cp /home/jinsc/test.abc c996838f46c1261c4422054e3aeda7e301b7645439d3b850b908223751c68cc4:/workspace

docker cp <fullID>:<containerFilePath> <localPath>
docker cp c996838f46c1261c4422054e3aeda7e301b7645439d3b850b908223751c68cc4:/workspace/fromdocker.file /home/jinsc/workspace/

10. Docker container port and volume mapping & running Docker with GPUs

Port mapping


Jupyter's port is 8888 — mapped this way it could not be reached from the browser:

nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu

TensorBoard's port is 6006. Binding 0.0.0.0 makes it reachable from the host's browser and from external machines; when connecting from outside, replace 0.0.0.0 with the host's IP.

nvidia-docker run -p 0.0.0.0:6006:6006 -it tensorflow/tensorflow:latest-gpu bash

With --network=host, TensorBoard/Jupyter inside the container can be reached from outside via the host IP plus the mapped port.

nvidia-docker run -it --name PCNN2 --network=host -p 6006:6006 -p 8888:8888 -v /home/jinsc/workspace/:/share ImageID /bin/bash
The host itself can then open the page directly; other machines can use the host IP, e.g. 10.11.30.151:6007.

Volume mounting

Method 1: docker pull an image, run a container, and install the GPU driver etc. inside it yourself

By default Docker cannot access any device; to access the GPU inside Docker, you must add --privileged=true:
sudo docker run --privileged=true -i -t -v $PWD:/data centos /bin/bash

Method 2: let nvidia-docker pull a GPU-ready image directly, saving configuration time

https://zhuanlan.zhihu.com/p/64493662

nvidia-docker run -it -v /home/jinsc/workspace/:/share --name ubuntu16cuda10torch12 --network=host -p 6006:6006 -p 8888:8888 anibali/pytorch:cuda-10.0 /bin/bash

docker run --gpus all -it -v /home/jinsc/workspace/:/share --name ubuntu16cuda10torch12 --network=host -p 6006:6006 -p 8888:8888 anibali/pytorch:cuda-10.0 /bin/bash

docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name superpointgraph --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:superpointgraph /bin/bash

docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name pytorchgeometric --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:pytorchgeometric /bin/bash

Add --ipc=host to prevent: ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).

docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name pytorchgeometric --ipc=host --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:pytorchgeometricVIM /bin/bash

docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name superpointgraph2 --ipc=host --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:superpointgraphVIM /bin/bash


Reference

http://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html

4. Docker: pull images, start containers, install Python/TensorFlow, manage with Rancher, commit and export images

A good tutorial video:

https://www.youtube.com/watch?v=sB07YOwW0SY

Search for an image

docker search ubuntu

Pull an image

docker pull ubuntu:16.04

List images

docker images
docker image ls

Start a container

docker run -it --name ubuntu16py3tf16 ubuntu:16.04
or (bash is the default):
docker run -it --name ubuntu16py3tf16 ubuntu:16.04 bash
or add --privileged to give the container root-level device access:
docker run -it --name ubuntu16py3tf16 --privileged ubuntu:16.04 bash
nvidia-docker run -it -d -v /home/jinsc/workspace/:/root/workspace --name nvidiatf ubuntu:16.04

Install TensorFlow inside the container

First update apt inside the container:

apt-get update
apt-get install libssl-dev
apt-get install make
apt-get install gcc

Install wget before fetching Python:

apt-get install wget

Find a Python source download link:

mkdir tools
cd tools
wget https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tgz

Extract and build Python:

tar -zxvf Python-3.6.0.tgz
cd Python-3.6.0
./configure
make
make install

Install TensorFlow:

apt-get install python-pip python-dev
pip3 install tensorflow-gpu
pip3 install tensorflow-gpu==1.6  # pin the TensorFlow version

Check that the installation succeeded:

python3
import tensorflow as tf
tf.__version__

Auxiliary development tools

apt-get install git
apt-get install vim

Commit the container to create an image

docker commit containerID
docker images  # list images

Save an image locally

docker save <imageName> > /tmp/givename.tar

Load an image from a local file

docker load < /tmp/givename.tar

List containers

docker container ls
docker container ls -a

docker ps
docker ps -a

Re-enter an exited container

docker start containerID
docker attach containerID

docker exec -it containerID bash

To detach without stopping the container, press Ctrl+P followed by Ctrl+Q.

Copy files between container and host

docker cp nvidiatf:/workspace/hello.py /home/jinsc/workspace/
docker cp <containerName>:/<pathInContainer>/<file> /<hostTargetPath>/

Remove containers and images

docker image rm XXX
docker container stop XX
docker container rm XXX
https://blog.csdn.net/CSDN_duomaomao/article/details/78587103

Start and configure Rancher

sudo docker run -d --name rancher-server -p 8080:8080 rancher/server
Check the public IP:
curl cip.cc or curl ip.cip.cc or curl http://members.3322.org/dyndns/getip

Using the TensorFlow Docker image

https://www.tensorflow.org/install/docker?hl=zh-cn

docker run -it -d --rm --name XX -v ~/hostdir:/notebooks -p 8888:8888 -p 6006:6006 tensorflow/tensorflow
docker exec -it tf tensorboard --logdir tf_logs/
Flag meanings:
-it: interactive terminal
--rm: remove the container automatically when it stops
-d: run the container in the background as a daemon
--name: the container's name; choose freely
-v: host/container file mapping; the container directory is the one declared with VOLUME in the Dockerfile
-p: host/container port mapping; the container port is the one bound with EXPOSE in the Dockerfile


Enter the container's bash shell

docker exec -it 83fd3bb36025 bash
Compared with docker attach containerID, which attaches to the default process, docker exec is more flexible.

1. Installing a deep-learning environment on Windows Server

1. Download a Windows Server ISO and create a bootable USB drive with UltraISO or rufus. Plug it into the server; while rebooting, press Del (or F2, F11, etc., depending on the motherboard) to enter the boot menu, set the USB drive as the first boot device, and the system-reinstall screen appears.
windows2016server64bit.iso: https://pan.baidu.com/s/1yenJ71THEwShPxxLYWzlpw (code: 1m7l)
windows2019server64bit.iso: https://pan.baidu.com/s/1O0KOSF8GUAdoZY_fTXnRhQ (code: nqm3)
Windows 2016/2019 image notes: https://pan.baidu.com/s/1SljjynVfqb5x1TkH35uVRw (code: 5scb)

**2. Install the NVIDIA driver.** Pick the driver for your platform at the link below (the machine reboots several times during installation, so save your work first):
https://www.nvidia.com/Download/index.aspx

**3. Install CUDA.** Pick the build for your platform at the link below:
https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64

cuDNN (optional). The download requires registering and logging in (WeChat login works): https://developer.nvidia.com/rdp/form/cudnn-download-survey
Visual Studio 2015 (C++) is also optional.

4. Install the Anaconda3 Python distribution: https://www.anaconda.com/distribution/. Installing to a drive root such as D:\anaconda3 is convenient.

5. On the PyTorch site, select your platform and pip install torch and torchvision (where the instructions say pip3, use pip; pip lives at D:\anaconda3\Scripts\pip.exe).
After installation, run in Python:
import torch
print(torch.cuda.is_available())  # True means the GPU build of torch was installed successfully

6. Configure EditPlus.

Configuring Vim for Python

Install http://fisadev.github.io/fisa-vim-config/:

apt update
apt install vim
apt install curl vim exuberant-ctags git ack-grep
pip install pep8 flake8 pyflakes isort yapf

git clone https://github.com/fisadev/fisa-vim-config
cd fisa-vim-config/  # the config is a hidden file; use ls -a
cp .vimrc ~/.vimrc

Map F5 to run the current file (https://stackoverflow.com/questions/18948491/running-python-code-in-vim):
map <F5> :exec '!python' shellescape(@%, 1)<CR>

Plugin management: https://blog.csdn.net/zhangpower1993/article/details/52184581

Install plugins you need:
Put each desired plugin's address between vundle#begin and vundle#end in .vimrc.
After saving, there are two ways to install:
(1) open vim and run :PluginInstall
(2) install directly from the command line: vim +PluginInstall +qall
Remove unneeded plugins:
Edit .vimrc and delete the Plugin line for the plugin you want to remove.
Save and quit vim.
Reopen vim and run :BundleClean.
Other common commands:
:BundleUpdate — update plugins
:BundleList — list all plugins
:BundleSearch — search for plugins

Try it:

touch a.py
vim a.py

Using pdb for Python debugging

https://blog.csdn.net/gwzz1228/article/details/78059651
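
Beyond the link above, pdb can also be driven non-interactively for a quick look; the demo script and its /tmp path below are made up for illustration:

```shell
# Write a tiny script, then pipe commands into pdb:
# 'b 4' sets a breakpoint on line 4, 'c' continues, 'p total' prints a variable.
cat > /tmp/pdb_demo.py <<'EOF'
total = 0
for i in range(3):
    total += i
print(total)
EOF
printf 'b 4\nc\np total\nc\n' | python3 -m pdb /tmp/pdb_demo.py
```

Interactively you would run python3 -m pdb script.py and type the same commands (plus n to step over and s to step into) at the (Pdb) prompt.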

SuperPoint Graph installation

cmake error in ply_c

Cleaning cmake output (there is no built-in "cmake clean"): just as make clean removes everything the makefile generated, remove CMake's generated files by hand — cmake_install.cmake, CMakeCache.txt, and the CMakeFiles directory:
rm -rf CMakeCache.txt cmake_install.cmake CMakeFiles/
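
A small wrapper for the command above, demonstrated on dummy files in a scratch directory (the cmake_clean name and the /tmp path are my own):

```shell
# Remove CMake's generated files from the current directory.
cmake_clean() {
  rm -rf CMakeCache.txt cmake_install.cmake CMakeFiles/
}

# Demo on dummy files in a scratch directory.
mkdir -p /tmp/cmake_clean_demo/CMakeFiles
cd /tmp/cmake_clean_demo
touch CMakeCache.txt cmake_install.cmake
cmake_clean
ls -a   # the generated files are gone
```

With an out-of-source build (mkdir build; cd build; cmake .., as used for cut-pursuit below), rm -rf build cleans everything in one step.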

CONDAENV=/root/anaconda3 #YOUR_CONDA_ENVIRONMENT_LOCATION
cd partition/ply_c
cmake . -DPYTHON_LIBRARY=$CONDAENV/lib/libpython3.7m.so -DPYTHON_INCLUDE_DIR=$CONDAENV/include/python3.7m -DBOOST_INCLUDEDIR=$CONDAENV/include -DEIGEN3_INCLUDE_DIR=$CONDAENV/include/eigen3
make
cd ..
cd cut-pursuit
mkdir build
cd build
cmake .. -DPYTHON_LIBRARY=$CONDAENV/lib/libpython3.7m.so -DPYTHON_INCLUDE_DIR=$CONDAENV/include/python3.7m -DBOOST_INCLUDEDIR=$CONDAENV/include -DEIGEN3_INCLUDE_DIR=$CONDAENV/include/eigen3
make

run semantic3D

SEMA3D_DIR=/jinscZ/superpoint_graph/learning/datasets/semantic3d

python partition/partition.py --dataset sema3d --ROOT_PATH $SEMA3D_DIR --voxel_width 0.05 --reg_strength 0.8 --ver_batch 5000000

python learning/sema3d_dataset.py --SEMA3D_PATH $SEMA3D_DIR

Anaconda: add/remove channels; installing packages with conda

Add

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/win-64
conda config --add channels https://conda.anaconda.org/openalea/noarch
conda config --add channels https://conda.anaconda.org/openalea/win-64

Remove

conda config --remove channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch

Show configured sources

conda config --show-sources

Show all configuration info

conda info

Log in / log out

anaconda login  # username jinsc
anaconda logout

conda: create a virtual environment

conda create -n envname python=3.6

Install packages with conda

conda install pyntcloud -c conda-forge

6. Using the host's GPU in Docker

Method 1: docker pull an image, run a container, and install the GPU driver etc. inside it yourself

By default Docker cannot access any device; to access the GPU inside Docker, you must add --privileged=true:
sudo docker run --privileged=true -i -t -v $PWD:/data centos /bin/bash

Method 2: let nvidia-docker pull a GPU-ready image directly, saving configuration time

https://zhuanlan.zhihu.com/p/64493662
