Comments (23)
I had already tried that, but I just found out how to do it.
This should be fixed by #5 & #6, but the fix hasn't been released yet.
You can wait for the next release, or run from source:
# First, you need to have Python(>=3.8) installed on your system.
$ python --version
# Clone this repo
$ git clone https://github.com/dmMaze/BallonsTranslator.git
# Install the dependencies
$ pip install -r requirements.txt
# Download the data folder from https://drive.google.com/drive/folders/1uElIYRLNakJj-YS0Kd3r3HE-wzeEvrWd?usp=sharing and move it into BallonsTranslator/ballontranslator, then run:
$ python ballontranslator/__main__.py
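Before installing anything, it can save time to confirm the interpreter meets the stated minimum; a one-liner sanity check (any Python >= 3.8 should pass):

```python
import sys

# The project states Python >= 3.8; fail fast with a clear message otherwise.
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version.split()[0]}"
print("Python version OK:", sys.version.split()[0])
```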
@dmMaze There is another problem with DeepL: the Japanese -> English translation does not work because of this check.
It seems DeepL's translate_text method accepts List[str], so we have no need to manually split and concatenate; set concate_text = False as below. If it works (passes the assertion above), please make a pull request.
@register_translator('Deepl')
class DeeplTranslator(TranslatorBase):
    concate_text = False
    setup_params: Dict = {
        'api_key': ''
    }
    ...
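To illustrate why concate_text = False removes the manual split-and-concatenate step, here is a minimal sketch with a stand-in translator (FakeTranslator/FakeResult are illustrative stubs, not the real deepl client, which behaves the same way for list input):

```python
from typing import List

class FakeResult:
    """Stand-in for deepl.TextResult, which exposes the translation via .text."""
    def __init__(self, text: str):
        self.text = text

class FakeTranslator:
    """Stand-in for deepl.Translator: translate_text accepts str or List[str]."""
    def translate_text(self, text, source_lang=None, target_lang=None):
        if isinstance(text, list):
            return [FakeResult(f"<{t}>") for t in text]
        return FakeResult(f"<{text}>")

# With list input there is no need to join the lines with a separator
# before translating and split the result apart afterwards.
lines: List[str] = ["line one", "line two"]
results = FakeTranslator().translate_text(lines, source_lang="JA", target_lang="EN-US")
translated = [r.text for r in results]
print(translated)  # ['<line one>', '<line two>']
```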
For some reason on my end, the issue still exists. DeepL works fine with all the languages except English.
I confirmed that I have the newest version with en-US implemented.
DeepL implementation in dl\translators\__init__.py:
@register_translator('Deepl')
class DeeplTranslator(TranslatorBase):
    concate_text = False
    setup_params: Dict = {
        'api_key': ''
    }

    def _setup_translator(self):
        self.lang_map['English'] = 'EN-US'

    def _translate(self, text: Union[str, List]) -> Union[str, List]:
        api_key = self.setup_params['api_key']
        translator = deepl.Translator(api_key)
        source = self.lang_map[self.lang_source]
        target = self.lang_map[self.lang_target]
        if source == 'EN-US':
            source = 'EN'
        result = translator.translate_text(text, source_lang=source, target_lang=target)
        return [i.text for i in result]
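The EN-US/EN special-casing above reflects DeepL's asymmetry: regional variants like EN-US are required as a *target* (plain EN is deprecated there) but not accepted as a *source*. A hedged pure-Python sketch of that normalization (normalize_deepl_langs is a hypothetical helper, not code from the repo):

```python
def normalize_deepl_langs(source: str, target: str):
    """Map language codes to what DeepL accepts on each side of the call."""
    if source.upper().startswith("EN"):
        source = "EN"        # the source side only accepts the plain code
    if target.upper() == "EN":
        target = "EN-US"     # the target side needs a regional variant
    return source, target

print(normalize_deepl_langs("EN-US", "JA"))  # ('EN', 'JA')
print(normalize_deepl_langs("JA", "EN"))     # ('JA', 'EN-US')
```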
I still get the issue that started the thread
It won't allow me to translate to English using DeepL; it says 'target_lang="EN" is deprecated, please use "EN-GB" or "EN-US" instead.' For DeepL you need to use either British or American English for it to work.
With Japanese -> English?
Yes. Also, I'd like to mention that I couldn't find a way to get PyQt5<=5.15.4 installed, so I've installed a newer version.
Can you give me copies of the images? Everything works well for me.
Have you modified any files? I downloaded the latest release from google drive (ver 1.2.0)
> Have you modified any files? I downloaded the latest release from google drive (ver 1.2.0)
No, it's on your side. Try re-downloading the repo and making a virtual Python environment:
$ python -m venv .venv
Then activate it:
$ cd .venv/Scripts
$ activate.bat
and reinstall the requirements.
> Have you modified any files? I downloaded the latest release from google drive (ver 1.2.0)
From what you say, you're using the drive version, which is not up to date, so the error is expected; you need to clone the repo and run __main__.py.
I tried it and got the following error:
[INFO ] import_utils:<module>:50 - PyTorch version 1.12.0+cu116 available.
[INFO ] import_utils:<module>:50 - PyTorch version 1.12.0+cu116 available.
PyTorch version 1.12.0+cu116 available.
Traceback (most recent call last):
File "C:\Users\jassz\BallonsTranslator\ballontranslator\__main__.py", line 44, in <module>
main()
File "C:\Users\jassz\BallonsTranslator\ballontranslator\__main__.py", line 38, in main
ballontrans = MainWindow(app, open_dir=args.proj_dir)
File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\mainwindow.py", line 45, in __init__
self.setupUi()
File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\mainwindow.py", line 62, in setupUi
self.leftBar = LeftBar(self)
File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\mainwindowbars.py", line 197, in __init__
vlayout.setContentsMargins(padding, 0, padding, btn_width/2)
TypeError: arguments did not match any overloaded call:
setContentsMargins(self, int, int, int, int): argument 4 has unexpected type 'float'
setContentsMargins(self, QMargins): argument 1 has unexpected type 'int'
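The TypeError comes from Python 3's true division: btn_width/2 produces a float, and the setContentsMargins(int, int, int, int) overload rejects floats. Floor division keeps the value an int; a Qt-free illustration (btn_width's value here is made up):

```python
btn_width = 13  # illustrative value; the real one comes from the UI code

# True division always yields a float in Python 3, even when it divides evenly.
print(type(btn_width / 2).__name__)   # float -> rejected by the int overload
# Floor division keeps it an int, which setContentsMargins accepts.
print(type(btn_width // 2).__name__)  # int
```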
Try it on Python 3.8.
So I did everything you told me to on python 3.8.10 and copied models and libs from google drive (as they aren't included in the repo). In the end, I got an error I had not seen before. (It crashed)
[INFO ] dl_manager:on_finish_settranslator:645 - Translator set to Deepl
Traceback (most recent call last):
File "C:\Users\jassz\ballonstranslator\ballontranslator\ui\dl_manager.py", line 362, in run
self.job()
File "C:\Users\jassz\ballonstranslator\ballontranslator\ui\dl_manager.py", line 305, in _imgtrans_pipeline
mask, blk_list = self.textdetector.detect(img)
File "C:\Users\jassz\ballonstranslator\ballontranslator\dl\textdetector\__init__.py", line 84, in detect
_, mask, blk_list = self.detector(img)
File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\jassz\ballonstranslator\ballontranslator\dl\textdetector\ctd\inference.py", line 178, in __call__
mask = cv2.resize(mask, (im_w, im_h), interpolation=cv2.INTER_LINEAR)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:3689: error: (-215:Assertion failed) !dsize.empty() in function 'cv::hal::resize'
Delete generated .json file and run it again.
If it didn't work, please upload a copy of the image which caused the crash.
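For reference, the !dsize.empty() assertion fires when either target dimension is 0, typically because the image failed to load or a stale project file recorded a zero size. A cv2-free sketch of the kind of guard that would surface the bad input earlier (check_resize_dims is a hypothetical helper, not code from the repo):

```python
def check_resize_dims(im_w: int, im_h: int, path: str = "<unknown>") -> None:
    """Raise a readable error before cv2.resize can hit the !dsize.empty() assertion."""
    if im_w <= 0 or im_h <= 0:
        raise ValueError(
            f"invalid target size {im_w}x{im_h} for image {path!r}; "
            "the image probably failed to load or its cached size is stale"
        )

check_resize_dims(1024, 1536)  # fine, no output
try:
    check_resize_dims(0, 1536, "page_01.png")
except ValueError as err:
    print(err)
```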
Deleting the .json file didn't work. If it matters, I'd like to mention that for some reason everything is set to CPU and I cannot change it to CUDA.
Ok, so torch was installed without CUDA for some reason. After reinstalling it like this:
pip uninstall torch
pip cache purge
pip install torch -f https://download.pytorch.org/whl/torch_stable.html
I can select CUDA but it still crashes. The traceback while using CPU is the same as before; while using CUDA it is:
Traceback (most recent call last):
File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\dl_manager.py", line 362, in run
self.job()
File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\dl_manager.py", line 305, in _imgtrans_pipeline
mask, blk_list = self.textdetector.detect(img)
File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\__init__.py", line 84, in detect
_, mask, blk_list = self.detector(img)
File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\ctd\inference.py", line 169, in __call__
blks = postprocess_yolo(blks, self.conf_thresh, self.nms_thresh, resize_ratio)
File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\ctd\inference.py", line 106, in postprocess_yolo
det = non_max_suppression(det, conf_thresh, nms_thresh)[0]
File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\yolov5\yolov5_utils.py", line 202, in non_max_suppression
i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\_ops.py", line 143, in __call__
return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [Dense, Negative, UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:125 [kernel]
BackendSelect: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:133 [backend fallback]
Named: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:51 [backend fallback]
AutogradMPS: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:59 [backend fallback]
AutogradXPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:43 [backend fallback]
AutogradHPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:68 [backend fallback]
AutogradLazy: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:55 [backend fallback]
Tracer: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\autograd\TraceTypeManual.cpp:295 [backend fallback]
AutocastCPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\autocast_mode.cpp:481 [backend fallback]
Autocast: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\autocast_mode.cpp:324 [backend fallback]
Batched: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\BatchingRegistrations.cpp:1064 [backend fallback]
VmapMode: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\FunctionalizeFallbackKernel.cpp:89 [backend fallback]
PythonTLSSnapshot: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:137 [backend fallback]
> cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:3689: error: (-215:Assertion failed) !dsize.empty() in function 'cv::hal::resize'
Try installing opencv-python==4.5.*
Try the newest PyTorch:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
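The NotImplementedError above usually means torch is a CUDA build while torchvision is CPU-only, which is why reinstalling both from the same cu116 index helps. A small helper (hypothetical, working on the local-version strings pip reports) that flags such a mismatch:

```python
def build_tag(version: str) -> str:
    """Extract the local build tag: '1.12.0+cu116' -> 'cu116', '0.13.0+cpu' -> 'cpu'."""
    return version.split("+", 1)[1] if "+" in version else "unknown"

def cuda_mismatch(torch_ver: str, torchvision_ver: str) -> bool:
    """True when the two packages were built against different backends."""
    return build_tag(torch_ver) != build_tag(torchvision_ver)

print(cuda_mismatch("1.12.0+cu116", "0.13.0+cpu"))    # True  -> torchvision::nms fails on CUDA
print(cuda_mismatch("1.12.0+cu116", "0.13.0+cu116"))  # False -> matching builds
```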
So I've installed opencv-python==4.5.* and PyTorch:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
and it works perfectly.
For anyone having similar issues, here is everything I did:
- Clone the repo and copy the data folder from Google Drive
- Run these commands on Python 3.8:
python -m venv .venv
cd .venv/Scripts
activate.bat
pip install -r requirements.txt
pip install opencv-python==4.5.*
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
Resolved in v1.3.0.