
Comments (7)

vigneshwaran-nv-10329 commented on June 15, 2024

Logs while the server is running fine:
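(For context: verbose output like the dump below is normally only emitted when Triton is started with verbose logging enabled, for example with an invocation along the lines of the one shown here; the exact command used in this run is an assumption, since it was not shared in the issue.)

tritonserver --model-repository=/path/to/model_repository --log-verbose=1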

I1130 14:04:12.092441 87690 metrics.cc:228] Collecting metrics for GPU 0: Quadro RTX 5000
I1130 14:04:12.093013 87690 shared_library.cc:44] OpenLibraryHandle: tritonserver/backends/onnxruntime/libtriton_onnxruntime.so
I1130 14:04:12.100461 87690 onnxruntime.cc:1830] TRITONBACKEND_Initialize: onnxruntime
I1130 14:04:12.100488 87690 onnxruntime.cc:1843] Triton TRITONBACKEND API version: 1.2
I1130 14:04:12.100498 87690 onnxruntime.cc:1849] 'onnxruntime' TRITONBACKEND API version: 1.2
I1130 14:04:12.354367 87690 pinned_memory_manager.cc:206] Pinned memory pool is created at '0x7f5cfc000000' with size 268435456
I1130 14:04:12.355837 87690 cuda_memory_manager.cc:103] CUDA memory pool is created on device 0 with size 67108864
I1130 14:04:12.357670 87690 backend_factory.h:44] Create TritonBackendFactory
I1130 14:04:12.357689 87690 ensemble_backend_factory.cc:47] Create EnsembleBackendFactory
I1130 14:04:12.363309 87690 model_repository_manager.cc:747] AsyncLoad() 'lm_english'
I1130 14:04:12.363367 87690 model_repository_manager.cc:986] TriggerNextAction() 'lm_english' version 1: 1
I1130 14:04:12.363378 87690 model_repository_manager.cc:1024] Load() 'lm_english' version 1
I1130 14:04:12.363385 87690 model_repository_manager.cc:1043] loading: lm_english:1
I1130 14:04:12.463714 87690 model_repository_manager.cc:747] AsyncLoad() 'gector_spanish'
I1130 14:04:12.463865 87690 model_repository_manager.cc:986] TriggerNextAction() 'gector_spanish' version 1: 1
I1130 14:04:12.463834 87690 model_repository_manager.cc:1103] CreateInferenceBackend() 'lm_english' version 1
I1130 14:04:12.463941 87690 model_repository_manager.cc:1024] Load() 'gector_spanish' version 1
I1130 14:04:12.463949 87690 model_repository_manager.cc:1043] loading: gector_spanish:1
I1130 14:04:12.464956 87690 onnxruntime.cc:1891] TRITONBACKEND_ModelInitialize: lm_english (version 1)
I1130 14:04:12.466373 87690 model_config_utils.cc:1521] ModelConfig 64-bit fields:
I1130 14:04:12.466388 87690 model_config_utils.cc:1523] 	ModelConfig::dynamic_batching::default_queue_policy::default_timeout_microseconds
I1130 14:04:12.466395 87690 model_config_utils.cc:1523] 	ModelConfig::dynamic_batching::max_queue_delay_microseconds
I1130 14:04:12.466401 87690 model_config_utils.cc:1523] 	ModelConfig::dynamic_batching::priority_queue_policy::value::default_timeout_microseconds
I1130 14:04:12.466408 87690 model_config_utils.cc:1523] 	ModelConfig::ensemble_scheduling::step::model_version
I1130 14:04:12.466414 87690 model_config_utils.cc:1523] 	ModelConfig::input::dims
I1130 14:04:12.466420 87690 model_config_utils.cc:1523] 	ModelConfig::input::reshape::shape
I1130 14:04:12.466426 87690 model_config_utils.cc:1523] 	ModelConfig::model_warmup::inputs::value::dims
I1130 14:04:12.466432 87690 model_config_utils.cc:1523] 	ModelConfig::optimization::cuda::graph_spec::graph_lower_bound::input::value::dim
I1130 14:04:12.466438 87690 model_config_utils.cc:1523] 	ModelConfig::optimization::cuda::graph_spec::input::value::dim
I1130 14:04:12.466444 87690 model_config_utils.cc:1523] 	ModelConfig::output::dims
I1130 14:04:12.466450 87690 model_config_utils.cc:1523] 	ModelConfig::output::reshape::shape
I1130 14:04:12.466456 87690 model_config_utils.cc:1523] 	ModelConfig::sequence_batching::direct::max_queue_delay_microseconds
I1130 14:04:12.466462 87690 model_config_utils.cc:1523] 	ModelConfig::sequence_batching::max_sequence_idle_microseconds
I1130 14:04:12.466469 87690 model_config_utils.cc:1523] 	ModelConfig::sequence_batching::oldest::max_queue_delay_microseconds
I1130 14:04:12.466475 87690 model_config_utils.cc:1523] 	ModelConfig::version_policy::specific::versions
WARNING: Since openmp is enabled in this build, this API cannot be used to configure intra op num threads. Please use the openmp environment variables to control the number of threads.
I1130 14:04:12.466889 87690 onnxruntime.cc:1935] TRITONBACKEND_ModelInstanceInitialize: lm_english_0 (GPU device 0)
I1130 14:04:12.469908 87690 backend_model_instance.cc:110] Creating instance lm_english_0 on GPU 0 (7.5) using artifact 'model.onnx'
I1130 14:04:12.471307 87690 onnxruntime.cc:272] CUDA Execution Accelerator is set for 'lm_english' on device 0
2021-11-30 06:04:12.486589017 [I:onnxruntime:, inference_session.cc:230 operator()] Flush-to-zero and denormal-as-zero are off
2021-11-30 06:04:12.486622883 [I:onnxruntime:, inference_session.cc:237 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
I1130 14:04:12.565902 87690 model_repository_manager.cc:747] AsyncLoad() 'gector'
I1130 14:04:12.565971 87690 model_repository_manager.cc:986] TriggerNextAction() 'gector' version 1: 1
I1130 14:04:12.565983 87690 model_repository_manager.cc:1024] Load() 'gector' version 1
I1130 14:04:12.565991 87690 model_repository_manager.cc:1043] loading: gector:1
I1130 14:04:12.565991 87690 model_repository_manager.cc:1103] CreateInferenceBackend() 'gector_spanish' version 1
I1130 14:04:12.566906 87690 onnxruntime.cc:1891] TRITONBACKEND_ModelInitialize: gector_spanish (version 1)
WARNING: Since openmp is enabled in this build, this API cannot be used to configure intra op num threads. Please use the openmp environment variables to control the number of threads.
I1130 14:04:12.567754 87690 onnxruntime.cc:1935] TRITONBACKEND_ModelInstanceInitialize: gector_spanish_0 (GPU device 0)
I1130 14:04:12.568464 87690 backend_model_instance.cc:110] Creating instance gector_spanish_0 on GPU 0 (7.5) using artifact 'model.onnx'
I1130 14:04:12.568522 87690 onnxruntime.cc:272] CUDA Execution Accelerator is set for 'gector_spanish' on device 0
2021-11-30 06:04:12.568550335 [I:onnxruntime:, inference_session.cc:237 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
I1130 14:04:12.666394 87690 model_repository_manager.cc:1103] CreateInferenceBackend() 'gector' version 1
I1130 14:04:12.667198 87690 onnxruntime.cc:1891] TRITONBACKEND_ModelInitialize: gector (version 1)
WARNING: Since openmp is enabled in this build, this API cannot be used to configure intra op num threads. Please use the openmp environment variables to control the number of threads.
I1130 14:04:12.668062 87690 onnxruntime.cc:1935] TRITONBACKEND_ModelInstanceInitialize: gector_0 (GPU device 0)
I1130 14:04:12.668771 87690 backend_model_instance.cc:110] Creating instance gector_0 on GPU 0 (7.5) using artifact 'model.onnx'
I1130 14:04:12.668820 87690 onnxruntime.cc:272] CUDA Execution Accelerator is set for 'gector' on device 0
2021-11-30 06:04:12.668846245 [I:onnxruntime:, inference_session.cc:237 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2021-11-30 06:04:13.254131582 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cuda with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254132063 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cuda with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254174053 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.254195102 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.254219453 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for CudaPinned with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254219878 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for CudaPinned with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254234004 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.254245659 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.254275623 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for CUDA_CPU with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254282821 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for CUDA_CPU with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254286914 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.254306413 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.254362860 [I:onnxruntime:, inference_session.cc:1141 Initialize] Initializing session.
2021-11-30 06:04:13.254364858 [I:onnxruntime:, inference_session.cc:1141 Initialize] Initializing session.
2021-11-30 06:04:13.254382630 [I:onnxruntime:, inference_session.cc:1178 Initialize] Adding default CPU execution provider.
2021-11-30 06:04:13.254391728 [I:onnxruntime:, inference_session.cc:1178 Initialize] Adding default CPU execution provider.
2021-11-30 06:04:13.254395695 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cpu with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254409781 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.254417636 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cpu with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.254429174 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.259693892 [I:onnxruntime:, reshape_fusion.cc:42 ApplyImpl] Total fused reshape node count: 0
2021-11-30 06:04:13.259978761 [I:onnxruntime:, reshape_fusion.cc:42 ApplyImpl] Total fused reshape node count: 0
2021-11-30 06:04:13.260644684 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_9
2021-11-30 06:04:13.261079427 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_1172
2021-11-30 06:04:13.261089536 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_1220
2021-11-30 06:04:13.261098259 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_1179
2021-11-30 06:04:13.261133805 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Where_1196
2021-11-30 06:04:13.261142098 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Equal_1195
2021-11-30 06:04:13.261150008 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Equal_1211
2021-11-30 06:04:13.261157583 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Where_1204
2021-11-30 06:04:13.261165761 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Where_1212
2021-11-30 06:04:13.261173750 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Concat_1221
2021-11-30 06:04:13.261181516 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Equal_1203
2021-11-30 06:04:13.261340983 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Expand_1169
2021-11-30 06:04:13.261397833 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_9 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261412917 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Add_11 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261425174 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_12 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261440977 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_1172 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261461981 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_1179 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261485389 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Equal_1195 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261501090 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Where_1196 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261516698 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Equal_1203 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261532720 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Where_1204 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261547129 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Equal_1211 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261561964 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Where_1212 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261588540 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_1220 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.261601435 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_1221 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.262213499 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_2
2021-11-30 06:04:13.262227587 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_5
2021-11-30 06:04:13.262311794 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_11
2021-11-30 06:04:13.262440313 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_144
2021-11-30 06:04:13.262450126 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_150
2021-11-30 06:04:13.262619856 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_200
2021-11-30 06:04:13.262628363 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_195
2021-11-30 06:04:13.262636216 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_192
2021-11-30 06:04:13.262720742 [I:onnxruntime:, reshape_fusion.cc:42 ApplyImpl] Total fused reshape node count: 0
2021-11-30 06:04:13.262841797 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_381
2021-11-30 06:04:13.262851801 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_387
2021-11-30 06:04:13.263022487 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_437
2021-11-30 06:04:13.263030675 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_432
2021-11-30 06:04:13.263038661 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_429
2021-11-30 06:04:13.263227368 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_618
2021-11-30 06:04:13.263236065 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_624
2021-11-30 06:04:13.263406836 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_674
2021-11-30 06:04:13.263416152 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_669
2021-11-30 06:04:13.263424240 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_666
2021-11-30 06:04:13.263609686 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_855
2021-11-30 06:04:13.263617867 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_861
2021-11-30 06:04:13.263780341 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_911
2021-11-30 06:04:13.263788351 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_906
2021-11-30 06:04:13.263796180 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_903
2021-11-30 06:04:13.263936524 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_37
2021-11-30 06:04:13.264094156 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.264203371 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_5 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264219436 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Add_13 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264241939 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_17 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264255384 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_18 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264270325 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_11 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264281168 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_20 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264293667 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_21 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264312585 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_144 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264323578 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_145 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264339107 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_150 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264349808 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_151 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264367459 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Sub_152 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264378368 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_153 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264389701 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_157 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264409443 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_200 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264423348 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_201 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264434101 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_202 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264446113 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_203 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264461031 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_192 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264471629 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_206 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264483624 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_195 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264494050 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_207 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264507379 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_208 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264525129 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_381 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264535783 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_382 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264550998 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_387 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264561702 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_388 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264573895 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Sub_389 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264584382 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_390 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264597786 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_394 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264617162 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_437 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264627558 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_438 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264637998 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_439 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264649857 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_440 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264663912 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_429 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264674337 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_443 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264686600 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_432 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264699609 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_444 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264713293 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_445 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264730137 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_618 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264740781 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_619 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264755761 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_624 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264769667 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_625 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264782096 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Sub_626 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264792842 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_627 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264803280 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_631 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264822278 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_674 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264834210 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_675 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264844820 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_676 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264856905 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_677 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264871013 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_666 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264881571 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_680 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264893637 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_669 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264903975 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_681 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264917396 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_682 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264937072 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_855 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264947823 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_856 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264962792 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_861 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264973434 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_862 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264985806 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Sub_863 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.264996527 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_864 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265007047 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_868 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265025974 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_911 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265036507 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_912 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265046921 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_913 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265058727 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_914 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265072872 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_903 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265083470 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_917 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265095591 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_906 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265108763 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_918 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265122655 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_919 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265136661 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_2 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265147029 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_998 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265162101 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_37 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265172725 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Squeeze_38 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265183072 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_1000 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.265196503 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_1001 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.266062875 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.267066787 [I:onnxruntime:, reshape_fusion.cc:42 ApplyImpl] Total fused reshape node count: 0
2021-11-30 06:04:13.267957293 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.269851802 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.271799647 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.273592122 [I:onnxruntime:, graph.cc:3214 CleanUnusedInitializers] Removing initializer '1026'. It is no longer used by any node.
2021-11-30 06:04:13.273622377 [I:onnxruntime:, graph.cc:3214 CleanUnusedInitializers] Removing initializer '758'. It is no longer used by any node.
2021-11-30 06:04:13.273636637 [I:onnxruntime:, graph.cc:3214 CleanUnusedInitializers] Removing initializer '490'. It is no longer used by any node.
2021-11-30 06:04:13.273655540 [I:onnxruntime:, graph.cc:3214 CleanUnusedInitializers] Removing initializer '222'. It is no longer used by any node.
2021-11-30 06:04:13.273904433 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.275765543 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.277631148 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.279486733 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.281337533 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.282466056 [V:onnxruntime:, inference_session.cc:909 TransformGraph] Node placements
2021-11-30 06:04:13.282482567 [V:onnxruntime:, inference_session.cc:916 TransformGraph]  Provider: [CPUExecutionProvider]: [Gather (Gather_9), Add (Add_11), Unsqueeze (Unsqueeze_12), Slice (Slice_1220), Gather (Gather_1179), Gather (Gather_1172), Concat (Concat_1221), Equal (Equal_1211), Where (Where_1212), Equal (Equal_1203), Where (Where_1204), Equal (Equal_1195), Where (Where_1196), ]
2021-11-30 06:04:13.282510816 [V:onnxruntime:, inference_session.cc:916 TransformGraph]  Provider: [CUDAExecutionProvider]: [Shape (Shape_7), Slice (Slice_14), Gather (Gather_18), Gather (Gather_16), Gather (Gather_15), Add (Add_17), Add (Add_19), MatMul (MatMul_87), MatMul (MatMul_101), MatMul (MatMul_111), MatMul (MatMul_181), MatMul (MatMul_195), MatMul (MatMul_205), MatMul (MatMul_275), MatMul (MatMul_289), MatMul (MatMul_299), MatMul (MatMul_369), MatMul (MatMul_383), MatMul (MatMul_393), MatMul (MatMul_463), MatMul (MatMul_477), MatMul (MatMul_487), MatMul (MatMul_557), MatMul (MatMul_571), MatMul (MatMul_581), MatMul (MatMul_651), MatMul (MatMul_665), MatMul (MatMul_675), MatMul (MatMul_745), MatMul (MatMul_759), MatMul (MatMul_769), MatMul (MatMul_839), MatMul (MatMul_853), MatMul (MatMul_863), MatMul (MatMul_933), MatMul (MatMul_947), MatMul (MatMul_957), MatMul (MatMul_1027), MatMul (MatMul_1041), MatMul (MatMul_1051), MatMul (MatMul_1121), MatMul (MatMul_1135), MatMul (MatMul_1145), MatMul (MatMul_1159), Add (Add_1160), Softmax (Softmax_1161), Shape (Shape_1216), Range (Range_1183), Reshape (Reshape_1187), Range (Range_1176), Reshape (Reshape_1185), Add (Add_1188), Add (Add_1189), Shape (Shape_1190), Gather (Gather_1163), Shape (Shape_1168), Gather (Gather_1165), Add (Add_1167), Expand (Expand_1169), Reshape (Reshape_1222), Expand (Expand_1213), Unsqueeze (Unsqueeze_1214), Expand (Expand_1205), Unsqueeze (Unsqueeze_1206), Expand (Expand_1197), Unsqueeze (Unsqueeze_1198), Concat (Concat_1215), ScatterND (ScatterND_1223), ArgMax (ArgMax_1224), LayerNormalization (), Cast (), Attention (Attention_1), Attention (Attention_2), Attention (Attention_3), Attention (Attention_4), Attention (Attention_5), Attention (Attention_6), Attention (Attention_7), Attention (Attention_8), Attention (Attention_9), Attention (Attention_10), Attention (Attention_11), Attention (Attention_12), BiasGelu (Gelu_AddBias_1), BiasGelu (Gelu_AddBias_2), BiasGelu (Gelu_AddBias_3), BiasGelu (Gelu_AddBias_4), BiasGelu (Gelu_AddBias_5), BiasGelu (Gelu_AddBias_6), BiasGelu (Gelu_AddBias_7), BiasGelu (Gelu_AddBias_8), BiasGelu (Gelu_AddBias_9), BiasGelu (Gelu_AddBias_10), BiasGelu (Gelu_AddBias_11), BiasGelu (Gelu_AddBias_12), SkipLayerNormalization (SkipLayerNorm_AddBias_25), SkipLayerNormalization (SkipLayerNorm_AddBias_26), SkipLayerNormalization (SkipLayerNorm_AddBias_27), SkipLayerNormalization (SkipLayerNorm_AddBias_28), SkipLayerNormalization (SkipLayerNorm_AddBias_29), SkipLayerNormalization (SkipLayerNorm_AddBias_30), SkipLayerNormalization (SkipLayerNorm_AddBias_31), SkipLayerNormalization (SkipLayerNorm_AddBias_32), SkipLayerNormalization (SkipLayerNorm_AddBias_33), SkipLayerNormalization (SkipLayerNorm_AddBias_34), SkipLayerNormalization (SkipLayerNorm_AddBias_35), SkipLayerNormalization (SkipLayerNorm_AddBias_36), SkipLayerNormalization (SkipLayerNorm_AddBias_37), SkipLayerNormalization (SkipLayerNorm_AddBias_38), SkipLayerNormalization (SkipLayerNorm_AddBias_39), SkipLayerNormalization (SkipLayerNorm_AddBias_40), SkipLayerNormalization (SkipLayerNorm_AddBias_41), SkipLayerNormalization (SkipLayerNorm_AddBias_42), SkipLayerNormalization (SkipLayerNorm_AddBias_43), SkipLayerNormalization (SkipLayerNorm_AddBias_44), SkipLayerNormalization (SkipLayerNorm_AddBias_45), SkipLayerNormalization (SkipLayerNorm_AddBias_46), SkipLayerNormalization (SkipLayerNorm_AddBias_47), SkipLayerNormalization (SkipLayerNorm_AddBias_48), ]
2021-11-30 06:04:13.286018524 [V:onnxruntime:, session_state.cc:77 CreateGraphInfo] SaveMLValueNameIndexMapping
2021-11-30 06:04:13.286191105 [V:onnxruntime:, session_state.cc:123 CreateGraphInfo] Done saving OrtValue mappings.
2021-11-30 06:04:13.295780000 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cuda with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.295801651 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.295815931 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for CudaPinned with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.295824753 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.295836607 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for CUDA_CPU with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.295845383 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.296239440 [I:onnxruntime:, inference_session.cc:1141 Initialize] Initializing session.
2021-11-30 06:04:13.296250620 [I:onnxruntime:, inference_session.cc:1178 Initialize] Adding default CPU execution provider.
2021-11-30 06:04:13.296261708 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cpu with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:13.296270241 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:13.297102479 [I:onnxruntime:, reshape_fusion.cc:42 ApplyImpl] Total fused reshape node count: 0
2021-11-30 06:04:13.298081766 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_2
2021-11-30 06:04:13.298093497 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_5
2021-11-30 06:04:13.298627332 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_1189
2021-11-30 06:04:13.298637714 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Slice_1237
2021-11-30 06:04:13.298646440 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Gather_1196
2021-11-30 06:04:13.298679970 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Where_1213
2021-11-30 06:04:13.298688420 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Equal_1212
2021-11-30 06:04:13.298696481 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Equal_1228
2021-11-30 06:04:13.298704522 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Where_1221
2021-11-30 06:04:13.298712620 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Where_1229
2021-11-30 06:04:13.298720478 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Concat_1238
2021-11-30 06:04:13.298735622 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Equal_1220
2021-11-30 06:04:13.298867929 [I:onnxruntime:log, fallback_cpu_capability.cc:82 operator()] Candidate for fallback CPU execution: Expand_1186
2021-11-30 06:04:13.298947628 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_2 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.298960281 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_6 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.298973710 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_5 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.298984794 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Unsqueeze_7 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.298997293 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_8 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299012702 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_1189 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299028816 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Gather_1196 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299044597 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Equal_1212 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299059836 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Where_1213 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299074266 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Equal_1220 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299088476 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Where_1221 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299103036 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Equal_1228 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299117055 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Where_1229 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299143956 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Slice_1237 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.299156647 [I:onnxruntime:log, fallback_cpu_capability.cc:135 GetCpuPreferredNodes] ORT optimization- Force fallback to CPU execution for node: Concat_1238 because the CPU execution path is deemed faster than overhead involved with execution on other EPs  capable of executing this node
2021-11-30 06:04:13.300328472 [I:onnxruntime:, reshape_fusion.cc:42 ApplyImpl] Total fused reshape node count: 0
2021-11-30 06:04:13.301824743 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.301962323 [V:onnxruntime:, inference_session.cc:909 TransformGraph] Node placements
2021-11-30 06:04:13.301991863 [V:onnxruntime:, inference_session.cc:916 TransformGraph]  Provider: [CPUExecutionProvider]: [Gather (Gather_5), Unsqueeze (Unsqueeze_17), Concat (Concat_18), Add (Add_13), Slice (Slice_37), Squeeze (Squeeze_38), Unsqueeze (Unsqueeze_1000), Gather (Gather_2), Unsqueeze (Unsqueeze_998), Concat (Concat_1001), Slice (Slice_150), Squeeze (Squeeze_151), Unsqueeze (Unsqueeze_157), Slice (Slice_144), Squeeze (Squeeze_145), Sub (Sub_152), Unsqueeze (Unsqueeze_153), Gather (Gather_11), Unsqueeze (Unsqueeze_20), Concat (Concat_21), Gather (Gather_195), Unsqueeze (Unsqueeze_207), Gather (Gather_192), Unsqueeze (Unsqueeze_206), Concat (Concat_208), Slice (Slice_200), Squeeze (Squeeze_201), Unsqueeze (Unsqueeze_202), Concat (Concat_203), Slice (Slice_387), Squeeze (Squeeze_388), Unsqueeze (Unsqueeze_394), Slice (Slice_381), Squeeze (Squeeze_382), Sub (Sub_389), Unsqueeze (Unsqueeze_390), Gather (Gather_432), Unsqueeze (Unsqueeze_444), Gather (Gather_429), Unsqueeze (Unsqueeze_443), Concat (Concat_445), Slice (Slice_437), Squeeze (Squeeze_438), Unsqueeze (Unsqueeze_439), Concat (Concat_440), Slice (Slice_624), Squeeze (Squeeze_625), Unsqueeze (Unsqueeze_631), Slice (Slice_618), Squeeze (Squeeze_619), Sub (Sub_626), Unsqueeze (Unsqueeze_627), Gather (Gather_669), Unsqueeze (Unsqueeze_681), Gather (Gather_666), Unsqueeze (Unsqueeze_680), Concat (Concat_682), Slice (Slice_674), Squeeze (Squeeze_675), Unsqueeze (Unsqueeze_676), Concat (Concat_677), Slice (Slice_861), Squeeze (Squeeze_862), Unsqueeze (Unsqueeze_868), Slice (Slice_855), Squeeze (Squeeze_856), Sub (Sub_863), Unsqueeze (Unsqueeze_864), Gather (Gather_906), Unsqueeze (Unsqueeze_918), Gather (Gather_903), Unsqueeze (Unsqueeze_917), Concat (Concat_919), Slice (Slice_911), Squeeze (Squeeze_912), Unsqueeze (Unsqueeze_913), Concat (Concat_914), ]
2021-11-30 06:04:13.302026468 [V:onnxruntime:, inference_session.cc:916 TransformGraph]  Provider: [CUDAExecutionProvider]: [Slice (Slice_1018), ReduceSum (ReduceSum_1028), Cast (Cast_1029), Cast (Cast_1023), Slice (Slice_1013), Unsqueeze (Unsqueeze_1020), Shape (Shape_3), Range (Range_15), Unsqueeze (Unsqueeze_16), Reshape (Reshape_19), Gather (Gather_31), Reshape (Reshape_8), Gather (Gather_30), Add (Add_32), Shape (Shape_33), Split (Split_70), Reshape (Reshape_Fuse1), Transpose (Transpose_114), Reshape (Reshape_Fuse2), Transpose (Transpose_92), Shape (Shape_146), Slice (Slice_156), Slice (Slice_159), Cast (Cast_160), Where (Where_161), Shape (Shape_9), Reshape (Reshape_22), Unsqueeze (Unsqueeze_23), Unsqueeze (Unsqueeze_24), Cast (Cast_25), Sub (Sub_27), Mul (Mul_29), Add (Add_162), Softmax (Softmax_163), Reshape (Reshape_Fuse3), Transpose (Transpose_136), MatMul (MatMul_164), Transpose (Transpose_165), Reshape (Reshape_Fuse4), Shape (Shape_193), Reshape (Reshape_204), Gemm (Gemm_205), Reshape (Reshape_209), Add (Add_210), Add (Add_275), Split (Split_307), Reshape (Reshape_Fuse5), Transpose (Transpose_351), Reshape (Reshape_Fuse6), Transpose (Transpose_329), Shape (Shape_383), Slice (Slice_393), Slice (Slice_396), Cast (Cast_397), Where (Where_398), Add (Add_399), Softmax (Softmax_400), Reshape (Reshape_Fuse7), Transpose (Transpose_373), MatMul (MatMul_401), Transpose (Transpose_402), Reshape (Reshape_Fuse8), Shape (Shape_430), Reshape (Reshape_441), Gemm (Gemm_442), Reshape (Reshape_446), Add (Add_447), Add (Add_512), Split (Split_544), Reshape (Reshape_Fuse9), Transpose (Transpose_588), Reshape (Reshape_Fuse10), Transpose (Transpose_566), Shape (Shape_620), Slice (Slice_630), Slice (Slice_633), Cast (Cast_634), Where (Where_635), Add (Add_636), Softmax (Softmax_637), Reshape (Reshape_Fuse11), Transpose (Transpose_610), MatMul (MatMul_638), Transpose (Transpose_639), Reshape (Reshape_Fuse12), Shape (Shape_667), Reshape (Reshape_678), Gemm (Gemm_679), Reshape (Reshape_683), Add (Add_684), Add (Add_749), Split (Split_781), Reshape (Reshape_Fuse13), Transpose (Transpose_825), Reshape (Reshape_Fuse14), Transpose (Transpose_803), Shape (Shape_857), Slice (Slice_867), Slice (Slice_870), Cast (Cast_871), Where (Where_872), Add (Add_873), Softmax (Softmax_874), Reshape (Reshape_Fuse15), Transpose (Transpose_847), MatMul (MatMul_875), Transpose (Transpose_876), Reshape (Reshape_Fuse16), Shape (Shape_904), Reshape (Reshape_915), Gemm (Gemm_916), Reshape (Reshape_920), Add (Add_921), Add (Add_986), Reshape (Reshape_1002), MatMul (MatMul_1003), Slice (Slice_1008), LogSoftmax (LogSoftmax_1019), GatherElements (GatherElements_1021), Squeeze (Squeeze_1022), Mul (Mul_1024), ReduceSum (ReduceSum_1025), Mul (Mul_1027), Div (Div_1030), LayerNormalization (), LayerNormalization (), LayerNormalization (), LayerNormalization (), LayerNormalization (), LayerNormalization (), LayerNormalization (), LayerNormalization (), LayerNormalization (), MatMul (FullyConnect_MatMul1), Add (FullyConnect_Add1), MatMul (FullyConnect_MatMul2), MatMul (FullyConnect_MatMul3), Add (FullyConnect_Add3), MatMul (FullyConnect_MatMul4), Add (FullyConnect_Add4), MatMul (FullyConnect_MatMul5), MatMul (FullyConnect_MatMul6), Add (FullyConnect_Add6), MatMul (FullyConnect_MatMul7), Add (FullyConnect_Add7), MatMul (FullyConnect_MatMul8), MatMul (FullyConnect_MatMul9), Add (FullyConnect_Add9), MatMul (FullyConnect_MatMul10), Add (FullyConnect_Add10), MatMul (FullyConnect_MatMul11), MatMul (FullyConnect_MatMul12), Add (FullyConnect_Add12), 
FastGelu (FastGelu_AddBias_5), FastGelu (FastGelu_AddBias_6), FastGelu (FastGelu_AddBias_7), FastGelu (FastGelu_AddBias_8), FusedMatMul (MatMul_137_FusedMatMulAndScale), FusedMatMul (MatMul_374_FusedMatMulAndScale), FusedMatMul (MatMul_611_FusedMatMulAndScale), FusedMatMul (MatMul_848_FusedMatMulAndScale), ]
2021-11-30 06:04:13.305872469 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.307999665 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.310111734 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.312218740 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.314323978 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.316434258 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.318522564 [V:onnxruntime:, session_state.cc:77 CreateGraphInfo] SaveMLValueNameIndexMapping
2021-11-30 06:04:13.318540035 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.318748103 [V:onnxruntime:, session_state.cc:123 CreateGraphInfo] Done saving OrtValue mappings.
2021-11-30 06:04:13.320647148 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.322763040 [V:onnxruntime:, embed_layer_norm_fusion.cc:632 FuseSubGraph] Failed to match position embedding subgraph.
2021-11-30 06:04:13.323949082 [V:onnxruntime:, inference_session.cc:909 TransformGraph] Node placements
2021-11-30 06:04:13.323964313 [V:onnxruntime:, inference_session.cc:916 TransformGraph]  Provider: [CPUExecutionProvider]: [Gather (Gather_5), Unsqueeze (Unsqueeze_7), Gather (Gather_2), Unsqueeze (Unsqueeze_6), Concat (Concat_8), Slice (Slice_1237), Gather (Gather_1196), Gather (Gather_1189), Concat (Concat_1238), Equal (Equal_1228), Where (Where_1229), Equal (Equal_1220), Where (Where_1221), Equal (Equal_1212), Where (Where_1213), ]
2021-11-30 06:04:13.323992881 [V:onnxruntime:, inference_session.cc:916 TransformGraph]  Provider: [CUDAExecutionProvider]: [Equal (Equal_18), Not (Not_19), Cast (Cast_20), CumSum (CumSum_22), Add (Add_24), Mul (Mul_25), Cast (Cast_26), Add (Add_28), Gather (Gather_33), Shape (Shape_3), ConstantOfShape (ConstantOfShape_9), Gather (Gather_31), Gather (Gather_30), Add (Add_32), Add (Add_34), MatMul (MatMul_102), MatMul (MatMul_116), MatMul (MatMul_126), MatMul (MatMul_196), MatMul (MatMul_210), MatMul (MatMul_220), MatMul (MatMul_290), MatMul (MatMul_304), MatMul (MatMul_314), MatMul (MatMul_384), MatMul (MatMul_398), MatMul (MatMul_408), MatMul (MatMul_478), MatMul (MatMul_492), MatMul (MatMul_502), MatMul (MatMul_572), MatMul (MatMul_586), MatMul (MatMul_596), MatMul (MatMul_666), MatMul (MatMul_680), MatMul (MatMul_690), MatMul (MatMul_760), MatMul (MatMul_774), MatMul (MatMul_784), MatMul (MatMul_854), MatMul (MatMul_868), MatMul (MatMul_878), MatMul (MatMul_948), MatMul (MatMul_962), MatMul (MatMul_972), MatMul (MatMul_1042), MatMul (MatMul_1056), MatMul (MatMul_1066), MatMul (MatMul_1136), MatMul (MatMul_1150), MatMul (MatMul_1160), MatMul (MatMul_1176), Gather (Gather_1244), MatMul (MatMul_1174), Shape (Shape_1233), Range (Range_1200), Reshape (Reshape_1204), Range (Range_1193), Reshape (Reshape_1202), Add (Add_1205), Add (Add_1206), Shape (Shape_1207), Gather (Gather_1180), Shape (Shape_1185), Gather (Gather_1182), Add (Add_1184), Expand (Expand_1186), Reshape (Reshape_1239), Expand (Expand_1230), Unsqueeze (Unsqueeze_1231), Expand (Expand_1222), Unsqueeze (Unsqueeze_1223), Expand (Expand_1214), Unsqueeze (Unsqueeze_1215), Concat (Concat_1232), ScatterND (ScatterND_1240), ArgMax (ArgMax_1241), LayerNormalization (), Cast (), Attention (Attention_1), Attention (Attention_2), Attention (Attention_3), Attention (Attention_4), Attention (Attention_5), Attention (Attention_6), Attention (Attention_7), Attention (Attention_8), Attention (Attention_9), Attention (Attention_10), Attention (Attention_11), Attention (Attention_12), BiasGelu (Gelu_AddBias_1), BiasGelu (Gelu_AddBias_2), BiasGelu (Gelu_AddBias_3), BiasGelu (Gelu_AddBias_4), BiasGelu (Gelu_AddBias_5), BiasGelu (Gelu_AddBias_6), BiasGelu (Gelu_AddBias_7), BiasGelu (Gelu_AddBias_8), BiasGelu (Gelu_AddBias_9), BiasGelu (Gelu_AddBias_10), BiasGelu (Gelu_AddBias_11), BiasGelu (Gelu_AddBias_12), SkipLayerNormalization (SkipLayerNorm_AddBias_25), SkipLayerNormalization (SkipLayerNorm_AddBias_26), SkipLayerNormalization (SkipLayerNorm_AddBias_27), SkipLayerNormalization (SkipLayerNorm_AddBias_28), SkipLayerNormalization (SkipLayerNorm_AddBias_29), SkipLayerNormalization (SkipLayerNorm_AddBias_30), SkipLayerNormalization (SkipLayerNorm_AddBias_31), SkipLayerNormalization (SkipLayerNorm_AddBias_32), SkipLayerNormalization (SkipLayerNorm_AddBias_33), SkipLayerNormalization (SkipLayerNorm_AddBias_34), SkipLayerNormalization (SkipLayerNorm_AddBias_35), SkipLayerNormalization (SkipLayerNorm_AddBias_36), SkipLayerNormalization (SkipLayerNorm_AddBias_37), SkipLayerNormalization (SkipLayerNorm_AddBias_38), SkipLayerNormalization (SkipLayerNorm_AddBias_39), SkipLayerNormalization (SkipLayerNorm_AddBias_40), SkipLayerNormalization (SkipLayerNorm_AddBias_41), SkipLayerNormalization (SkipLayerNorm_AddBias_42), SkipLayerNormalization (SkipLayerNorm_AddBias_43), SkipLayerNormalization (SkipLayerNorm_AddBias_44), SkipLayerNormalization (SkipLayerNorm_AddBias_45), SkipLayerNormalization (SkipLayerNorm_AddBias_46), SkipLayerNormalization 
(SkipLayerNorm_AddBias_47), SkipLayerNormalization (SkipLayerNorm_AddBias_48), BiasSoftmax (BiasSoftmax), BiasSoftmax (BiasSoftmax_token_0), ]
2021-11-30 06:04:13.327832514 [V:onnxruntime:, session_state.cc:77 CreateGraphInfo] SaveMLValueNameIndexMapping
2021-11-30 06:04:13.328008259 [V:onnxruntime:, session_state.cc:123 CreateGraphInfo] Done saving OrtValue mappings.
2021-11-30 06:04:14.732447623 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cuda with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:14.732447786 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cuda with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:14.732478459 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:14.732492788 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:14.732791580 [I:onnxruntime:log, bfc_arena.cc:25 BFCArena] Creating BFCArena for Cuda with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-11-30 06:04:14.732814341 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-11-30 06:04:14.734435847 [I:onnxruntime:, session_state_utils.cc:142 SaveInitializedTensors] Saving initialized tensors.
2021-11-30 06:04:14.734723025 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:0 (requested) num_bytes: 16 (actual) rounded_bytes:256
2021-11-30 06:04:14.734739495 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 1048576 bytes.
2021-11-30 06:04:14.734748150 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 1048576
2021-11-30 06:04:14.734757373 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bec8c7380 to 0x7f5bec9c7380
2021-11-30 06:04:14.734790546 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:3 (requested) num_bytes: 3072 (actual) rounded_bytes:3072
2021-11-30 06:04:14.734832841 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 1048576 bytes.
2021-11-30 06:04:14.734842178 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 1048576
2021-11-30 06:04:14.734850165 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5c32adca00 to 0x7f5c32bdca00
2021-11-30 06:04:14.735059882 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.735400292 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 16777216 bytes.
2021-11-30 06:04:14.735413796 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 17825792
2021-11-30 06:04:14.735422571 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bea800000 to 0x7f5beb800000
2021-11-30 06:04:14.735462987 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.735482542 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 16777216 bytes.
2021-11-30 06:04:14.735490856 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 17825792
2021-11-30 06:04:14.735498556 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bec9c7400 to 0x7f5bed9c7400
2021-11-30 06:04:14.735843747 [I:onnxruntime:, session_state_utils.cc:142 SaveInitializedTensors] Saving initialized tensors.
2021-11-30 06:04:14.736214501 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:0 (requested) num_bytes: 8 (actual) rounded_bytes:256
2021-11-30 06:04:14.736250196 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 1048576 bytes.
2021-11-30 06:04:14.736262189 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 1048576
2021-11-30 06:04:14.736272460 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5ca3821100 to 0x7f5ca3921100
2021-11-30 06:04:14.736303494 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:0 (requested) num_bytes: 4 (actual) rounded_bytes:256
2021-11-30 06:04:14.736661445 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 1048576 bytes.
2021-11-30 06:04:14.736677039 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 1048576
2021-11-30 06:04:14.736686847 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5beb800000 to 0x7f5beb900000
2021-11-30 06:04:14.736933632 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:13 (requested) num_bytes: 2359296 (actual) rounded_bytes:2359296
2021-11-30 06:04:14.737252495 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 4194304 bytes.
2021-11-30 06:04:14.737267120 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 5242880
2021-11-30 06:04:14.737277022 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5beba00000 to 0x7f5bebe00000
2021-11-30 06:04:14.737315955 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:13 (requested) num_bytes: 2359296 (actual) rounded_bytes:2359296
2021-11-30 06:04:14.737339530 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 4194304 bytes.
2021-11-30 06:04:14.737349837 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 5242880
2021-11-30 06:04:14.737365586 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5ca3921140 to 0x7f5ca3d21140
2021-11-30 06:04:14.739787209 [I:onnxruntime:, session_state_utils.cc:142 SaveInitializedTensors] Saving initialized tensors.
2021-11-30 06:04:14.740011792 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:0 (requested) num_bytes: 8 (actual) rounded_bytes:256
2021-11-30 06:04:14.740028936 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 1048576 bytes.
2021-11-30 06:04:14.740040197 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 1048576
2021-11-30 06:04:14.740051426 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5ca411df40 to 0x7f5ca421df40
2021-11-30 06:04:14.740082748 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:4 (requested) num_bytes: 6144 (actual) rounded_bytes:6144
2021-11-30 06:04:14.740110016 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 1048576 bytes.
2021-11-30 06:04:14.740117289 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:14 (requested) num_bytes: 7077888 (actual) rounded_bytes:7077888
2021-11-30 06:04:14.740120625 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 1048576
2021-11-30 06:04:14.740147325 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5beb900000 to 0x7f5beba00000
2021-11-30 06:04:14.740536286 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 8388608 bytes.
2021-11-30 06:04:14.740554106 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 13631488
2021-11-30 06:04:14.740564219 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5be8000000 to 0x7f5be8800000
2021-11-30 06:04:14.740633577 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:19 (requested) num_bytes: 154414080 (actual) rounded_bytes:154414080
2021-11-30 06:04:14.740781665 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:14 (requested) num_bytes: 7077888 (actual) rounded_bytes:7077888
2021-11-30 06:04:14.741102081 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 8388608 bytes.
2021-11-30 06:04:14.741111117 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 13631488
2021-11-30 06:04:14.741119050 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bd4000080 to 0x7f5bd4800080
2021-11-30 06:04:14.741375856 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 268435456 bytes.
2021-11-30 06:04:14.741392208 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 269484032
2021-11-30 06:04:14.741402844 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bd8000000 to 0x7f5be8000000
2021-11-30 06:04:14.744438255 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:19 (requested) num_bytes: 154414080 (actual) rounded_bytes:154414080
2021-11-30 06:04:14.744480035 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 268435456 bytes.
2021-11-30 06:04:14.744490741 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 269484032
2021-11-30 06:04:14.744500528 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bc3fff040 to 0x7f5bd3fff040
2021-11-30 06:04:14.749968720 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.750332615 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 16777216 bytes.
2021-11-30 06:04:14.750348464 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 30408704
2021-11-30 06:04:14.750364732 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5be8800000 to 0x7f5be9800000
2021-11-30 06:04:14.750722400 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.750756894 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 16777216 bytes.
2021-11-30 06:04:14.750768046 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 30408704
2021-11-30 06:04:14.750777453 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bd48c0100 to 0x7f5bd58c0100
2021-11-30 06:04:14.761947293 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:13 (requested) num_bytes: 3145728 (actual) rounded_bytes:3145728
2021-11-30 06:04:14.764301078 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 16777216 bytes.
2021-11-30 06:04:14.764315596 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 47185920
2021-11-30 06:04:14.764324501 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bc0000000 to 0x7f5bc1000000
2021-11-30 06:04:14.764482082 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.764832865 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 16777216 bytes.
2021-11-30 06:04:14.764845499 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 34603008
2021-11-30 06:04:14.764854029 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bc1000000 to 0x7f5bc2000000
2021-11-30 06:04:14.787955881 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.788425530 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 33554432 bytes.
2021-11-30 06:04:14.788442829 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 68157440
2021-11-30 06:04:14.788452812 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bbe000000 to 0x7f5bc0000000
2021-11-30 06:04:14.791740323 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:14 (requested) num_bytes: 7077888 (actual) rounded_bytes:7077888
2021-11-30 06:04:14.794119431 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 33554432 bytes.
2021-11-30 06:04:14.794134473 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 80740352
2021-11-30 06:04:14.794143341 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bbc000000 to 0x7f5bbe000000
2021-11-30 06:04:14.825482245 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:13 (requested) num_bytes: 2359296 (actual) rounded_bytes:2359296
2021-11-30 06:04:14.825929630 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 67108864 bytes.
2021-11-30 06:04:14.825944153 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 147849216
2021-11-30 06:04:14.825952887 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bb8000000 to 0x7f5bbc000000
2021-11-30 06:04:14.827862934 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.828367620 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 67108864 bytes.
2021-11-30 06:04:14.828385946 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 135266304
2021-11-30 06:04:14.828395563 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5bb4000000 to 0x7f5bb8000000
2021-11-30 06:04:14.871276511 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:19 (requested) num_bytes: 154389504 (actual) rounded_bytes:154389504
2021-11-30 06:04:14.872057280 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 268435456 bytes.
2021-11-30 06:04:14.872076075 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 416284672
2021-11-30 06:04:14.872086204 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5ba4000000 to 0x7f5bb4000000
2021-11-30 06:04:14.877697828 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:19 (requested) num_bytes: 154389504 (actual) rounded_bytes:154389504
2021-11-30 06:04:14.877731346 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 268435456 bytes.
2021-11-30 06:04:14.877741061 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 298844160
2021-11-30 06:04:14.877751582 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5b93fff040 to 0x7f5ba3fff040
2021-11-30 06:04:14.983605617 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:14.986741577 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 268435456 bytes.
2021-11-30 06:04:14.986763218 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 537919488
2021-11-30 06:04:14.986775069 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5b82000000 to 0x7f5b92000000
2021-11-30 06:04:14.986770017 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:14 (requested) num_bytes: 7077888 (actual) rounded_bytes:7077888
2021-11-30 06:04:14.987413110 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 134217728 bytes.
2021-11-30 06:04:14.987430917 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 269484032
2021-11-30 06:04:14.987440758 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5b7a000000 to 0x7f5b82000000
2021-11-30 06:04:15.180388998 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:18 (requested) num_bytes: 95238144 (actual) rounded_bytes:95238144
2021-11-30 06:04:15.182991227 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 268435456 bytes.
2021-11-30 06:04:15.183021196 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 537919488
2021-11-30 06:04:15.183032169 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5b6a000000 to 0x7f5b7a000000
2021-11-30 06:04:15.188342046 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:18 (requested) num_bytes: 95238144 (actual) rounded_bytes:95238144
2021-11-30 06:04:15.188387894 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 134217728 bytes.
2021-11-30 06:04:15.188399022 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 152043520
2021-11-30 06:04:15.188408476 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5b61fff040 to 0x7f5b69fff040
2021-11-30 06:04:15.228520309 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:15 (requested) num_bytes: 9437184 (actual) rounded_bytes:9437184
2021-11-30 06:04:15.229765738 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 536870912 bytes.
2021-11-30 06:04:15.229787089 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 1074790400
2021-11-30 06:04:15.229798052 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5b40000000 to 0x7f5b60000000
2021-11-30 06:04:15.359677219 [I:onnxruntime:, session_state_utils.cc:263 SaveInitializedTensors] Done saving initialized tensors
2021-11-30 06:04:15.375144189 [I:onnxruntime:, inference_session.cc:1329 Initialize] Session successfully initialized.
I1130 14:04:15.375754 87690 dynamic_batch_scheduler.cc:230] Starting dynamic-batch scheduler thread 0 at nice 5...
I1130 14:04:15.375873 87690 model_repository_manager.cc:1210] successfully loaded 'gector' version 1
I1130 14:04:15.375898 87690 model_repository_manager.cc:986] TriggerNextAction() 'gector' version 1: 0
I1130 14:04:15.375911 87690 model_repository_manager.cc:1001] no next action, trigger OnComplete()
2021-11-30 06:04:15.439794562 [I:onnxruntime:log, bfc_arena.cc:305 AllocateRawInternal] Extending BFCArena for Cuda. bin_num:19 (requested) num_bytes: 154389504 (actual) rounded_bytes:154389504
2021-11-30 06:04:15.440743886 [I:onnxruntime:log, bfc_arena.cc:185 Extend] Extended allocation by 268435456 bytes.
2021-11-30 06:04:15.440773360 [I:onnxruntime:log, bfc_arena.cc:188 Extend] Total allocated bytes: 684720128
2021-11-30 06:04:15.440783952 [I:onnxruntime:log, bfc_arena.cc:191 Extend] Allocated memory at 0x7f5b28000000 to 0x7f5b38000000
2021-11-30 06:04:15.724885651 [I:onnxruntime:, session_state_utils.cc:263 SaveInitializedTensors] Done saving initialized tensors
2021-11-30 06:04:15.736878158 [I:onnxruntime:, inference_session.cc:1329 Initialize] Session successfully initialized.
I1130 14:04:15.737111 87690 dynamic_batch_scheduler.cc:230] Starting dynamic-batch scheduler thread 0 at nice 5...
I1130 14:04:15.737170 87690 model_repository_manager.cc:1210] successfully loaded 'gector_spanish' version 1
I1130 14:04:15.737191 87690 model_repository_manager.cc:986] TriggerNextAction() 'gector_spanish' version 1: 0
I1130 14:04:15.737205 87690 model_repository_manager.cc:1001] no next action, trigger OnComplete()
2021-11-30 06:04:15.794247966 [I:onnxruntime:, session_state_utils.cc:263 SaveInitializedTensors] Done saving initialized tensors
2021-11-30 06:04:15.820579521 [I:onnxruntime:, inference_session.cc:1329 Initialize] Session successfully initialized.
I1130 14:04:15.820772 87690 dynamic_batch_scheduler.cc:230] Starting dynamic-batch scheduler thread 0 at nice 5...
I1130 14:04:15.820805 87690 model_repository_manager.cc:1210] successfully loaded 'lm_english' version 1
I1130 14:04:15.820821 87690 model_repository_manager.cc:986] TriggerNextAction() 'lm_english' version 1: 0
I1130 14:04:15.820832 87690 model_repository_manager.cc:1001] no next action, trigger OnComplete()
I1130 14:04:15.820923 87690 model_repository_manager.cc:592] VersionStates() 'lm_english'
I1130 14:04:15.820967 87690 model_repository_manager.cc:592] VersionStates() 'gector_spanish'
I1130 14:04:15.820993 87690 model_repository_manager.cc:592] VersionStates() 'gector'
I1130 14:04:15.821013 87690 model_repository_manager.cc:592] VersionStates() 'lm_english'
I1130 14:04:15.821021 87690 model_repository_manager.cc:592] VersionStates() 'gector'
I1130 14:04:15.821030 87690 model_repository_manager.cc:592] VersionStates() 'gector_spanish'
I1130 14:04:15.821120 87690 server.cc:504]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I1130 14:04:15.821170 87690 server.cc:543]
+-------------+------------------------------------------------------------+--------+
| Backend     | Path                                                       | Config |
+-------------+------------------------------------------------------------+--------+
| onnxruntime | tritonserver/backends/onnxruntime/libtriton_onnxruntime.so | {}     |
+-------------+------------------------------------------------------------+--------+

I1130 14:04:15.821178 87690 model_repository_manager.cc:568] BackendStates()
I1130 14:04:15.821241 87690 server.cc:586]
+----------------+---------+--------+
| Model          | Version | Status |
+----------------+---------+--------+
| gector         | 1       | READY  |
| gector_spanish | 1       | READY  |
| lm_english     | 1       | READY  |
+----------------+---------+--------+

I1130 14:04:15.821434 87690 tritonserver.cc:1659]
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                                                                                  |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                                                                                 |
| server_version                   | 2.11.0dev                                                                                                                                                                              |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics |
| model_repository_path[0]         | models/                                                                                                                                                                                |
| model_control_mode               | MODE_NONE                                                                                                                                                                              |
| strict_model_config              | 1                                                                                                                                                                                      |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                                                                              |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                                                                               |
| min_supported_compute_capability | 6.0                                                                                                                                                                                    |
| strict_readiness                 | 1                                                                                                                                                                                      |
| exit_timeout                     | 30                                                                                                                                                                                     |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I1130 14:04:15.825711 87690 grpc_server.cc:225] Ready for RPC 'ServerLive', 0
I1130 14:04:15.825755 87690 grpc_server.cc:225] Ready for RPC 'ServerReady', 0
I1130 14:04:15.825765 87690 grpc_server.cc:225] Ready for RPC 'ModelReady', 0
I1130 14:04:15.825779 87690 grpc_server.cc:225] Ready for RPC 'ServerMetadata', 0
I1130 14:04:15.825791 87690 grpc_server.cc:225] Ready for RPC 'ModelMetadata', 0
I1130 14:04:15.825801 87690 grpc_server.cc:225] Ready for RPC 'ModelConfig', 0
I1130 14:04:15.825813 87690 grpc_server.cc:225] Ready for RPC 'ModelStatistics', 0
I1130 14:04:15.825826 87690 grpc_server.cc:225] Ready for RPC 'SystemSharedMemoryStatus', 0
I1130 14:04:15.825839 87690 grpc_server.cc:225] Ready for RPC 'SystemSharedMemoryRegister', 0
I1130 14:04:15.825848 87690 grpc_server.cc:225] Ready for RPC 'SystemSharedMemoryUnregister', 0
I1130 14:04:15.825861 87690 grpc_server.cc:225] Ready for RPC 'CudaSharedMemoryStatus', 0
I1130 14:04:15.825873 87690 grpc_server.cc:225] Ready for RPC 'CudaSharedMemoryRegister', 0
I1130 14:04:15.825886 87690 grpc_server.cc:225] Ready for RPC 'CudaSharedMemoryUnregister', 0
I1130 14:04:15.825895 87690 grpc_server.cc:225] Ready for RPC 'RepositoryIndex', 0
I1130 14:04:15.825909 87690 grpc_server.cc:225] Ready for RPC 'RepositoryModelLoad', 0
I1130 14:04:15.825920 87690 grpc_server.cc:225] Ready for RPC 'RepositoryModelUnload', 0
I1130 14:04:15.825944 87690 grpc_server.cc:416] Thread started for CommonHandler
I1130 14:04:15.826048 87690 grpc_server.cc:3144] New request handler for ModelInferHandler, 1
I1130 14:04:15.826069 87690 grpc_server.cc:2202] Thread started for ModelInferHandler
I1130 14:04:15.826180 87690 grpc_server.cc:3497] New request handler for ModelStreamInferHandler, 3
I1130 14:04:15.826200 87690 grpc_server.cc:2202] Thread started for ModelStreamInferHandler
I1130 14:04:15.826210 87690 grpc_server.cc:4062] Started GRPCInferenceService at 0.0.0.0:8008
I1130 14:04:15.826593 87690 http_server.cc:2744] Started HTTPService at 0.0.0.0:8009
I1130 14:04:15.868895 87690 http_server.cc:162] Started Metrics Service at 0.0.0.0:8002
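With the server up, the endpoints reported above (gRPC on 0.0.0.0:8008, HTTP on 0.0.0.0:8009, metrics on 0.0.0.0:8002) can be sanity-checked against the standard KServe v2 health routes. A minimal check, assuming the server is reachable on localhost, looks like:

# Server liveness and readiness over the HTTP endpoint (port taken from the log above)
curl -v localhost:8009/v2/health/live
curl -v localhost:8009/v2/health/ready

# Readiness of a single model (name taken from the READY table above)
curl -v localhost:8009/v2/models/lm_english/ready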

CoderHam commented on June 15, 2024

@vignesh34v please share the backtrace for the segfault
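In case it helps, a minimal sketch of capturing such a backtrace is to launch the server under gdb and dump the stacks after the fault; the binary and model-repository paths below are assumptions based on the logs above.

# Run the server under gdb (paths are assumptions, adjust to the actual install)
gdb --args tritonserver/bin/tritonserver --model-repository=models/

# Inside gdb:
(gdb) run
# ... wait for the SIGSEGV ...
(gdb) bt                   # backtrace of the faulting thread
(gdb) thread apply all bt  # stacks of all threads (models load on separate threads)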

vigneshwaran-nv-10329 commented on June 15, 2024

@CoderHam
Backtrace:


#0  0x00007fffecd5950a in std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnxruntime::KernelCreateInfo>, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnxruntime::KernelCreateInfo> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnxruntime::KernelCreateInfo> > >::equal_range(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
   from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0
#1  0x00007fffecd59a01 in onnxruntime::KernelRegistry::Register(onnxruntime::KernelCreateInfo&&) () from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0
#2  0x00007fffecd6e9ff in onnxruntime::ProviderHostImpl::KernelRegistry__Register(onnxruntime::KernelRegistry*, onnxruntime::KernelCreateInfo&&) ()
   from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0
#3  0x00007fff0991b6b6 in onnxruntime::CUDAExecutionProvider::GetKernelRegistry() const () from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime_providers_cuda.so
#4  0x00007fffecd5e0a7 in onnxruntime::KernelRegistryManager::RegisterKernels(onnxruntime::ExecutionProviders const&) ()
   from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0
#5  0x00007fffec87377e in onnxruntime::InferenceSession::Initialize() () from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0
#6  0x00007fffec835ae0 in (anonymous namespace)::InitializeSession(OrtSessionOptions const*, std::unique_ptr<onnxruntime::InferenceSession, std::default_delete<onnxruntime::InferenceSession> >&, OrtPrepackedWeightsContainer*) () from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0
#7  0x00007fffec835c5d in OrtApis::CreateSession(OrtEnv const*, char const*, OrtSessionOptions const*, OrtSession**) ()
   from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0
#8  0x00007fffed42398d in triton::backend::onnxruntime::OnnxLoader::LoadSession(bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, OrtSessionOptions const*, OrtSession**) () from tritonserver/backends/onnxruntime/libtriton_onnxruntime.so
#9  0x00007fffed40bacf in triton::backend::onnxruntime::ModelState::LoadModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, TRITONSERVER_instancegroupkind_enum, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, OrtSession**, OrtAllocator**) () from tritonserver/backends/onnxruntime/libtriton_onnxruntime.so
#10 0x00007fffed411abd in triton::backend::onnxruntime::ModelInstanceState::ModelInstanceState(triton::backend::onnxruntime::ModelState*, TRITONBACKEND_ModelInstance*) ()
   from tritonserver/backends/onnxruntime/libtriton_onnxruntime.so
#11 0x00007fffed4120fe in triton::backend::onnxruntime::ModelInstanceState::Create(triton::backend::onnxruntime::ModelState*, TRITONBACKEND_ModelInstance*, triton::backend::onnxruntime::ModelInstanceState**) () from tritonserver/backends/onnxruntime/libtriton_onnxruntime.so
#12 0x00007fffed412506 in TRITONBACKEND_ModelInstanceInitialize () from tritonserver/backends/onnxruntime/libtriton_onnxruntime.so
#13 0x00007ffff7916489 in nvidia::inferenceserver::TritonModelInstance::CreateInstance(nvidia::inferenceserver::TritonModel*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, TRITONSERVER_instancegroupkind_enum, int, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, bool) () from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/bin/../lib/libtritonserver.so
#14 0x00007ffff7916cea in nvidia::inferenceserver::TritonModelInstance::CreateInstances(nvidia::inferenceserver::TritonModel*, inference::ModelConfig const&) ()
   from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/bin/../lib/libtritonserver.so
#15 0x00007ffff7914ad2 in nvidia::inferenceserver::TritonModel::Create(nvidia::inferenceserver::InferenceServer*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::vector<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > > > > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, long, inference::ModelConfig const&, std::unique_ptr<nvidia::inferenceserver::TritonModel, std::default_delete<nvidia::inferenceserver::TritonModel> >*) ()
   from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/bin/../lib/libtritonserver.so
#16 0x00007ffff781605e in nvidia::inferenceserver::ModelRepositoryManager::BackendLifeCycle::CreateInferenceBackend(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, long, nvidia::inferenceserver::ModelRepositoryManager::BackendLifeCycle::BackendInfo*) () from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/bin/../lib/libtritonserver.so
#17 0x00007ffff7822405 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<nvidia::inferenceserver::Status (nvidia::inferenceserver::ModelRepositoryManager::BackendLifeCycle::*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, long, nvidia::inferenceserver::ModelRepositoryManager::BackendLifeCycle::BackendInfo*), nvidia::inferenceserver::ModelRepositoryManager::BackendLifeCycle*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, long, nvidia::inferenceserver::ModelRepositoryManager::BackendLifeCycle::BackendInfo*> > >::_M_run() () from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/bin/../lib/libtritonserver.so
#18 0x00007ffff66f3039 in std::execute_native_thread_routine (__p=0x41e0a20)
    at /home/builder/ktietz/cos6/ci_cos6/ctng-compilers_1622658800915/work/.build/x86_64-conda-linux-gnu/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#19 0x00007ffff710aea5 in start_thread () from /lib64/libpthread.so.0
#20 0x00007ffff601b9fd in clone () from /lib64/libc.so.6

Run:

2021-12-06 10:40:47.943428555 [V:onnxruntime:log, bfc_arena.cc:61 BFCArena] Creating 21 bins of max chunk size 256 to 268435456

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffee47c8700 (LWP 44734)]
0x00007fffecd5950a in std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnxruntime::KernelCreateInfo>, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnxruntime::KernelCreateInfo> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnxruntime::KernelCreateInfo> > >::equal_range(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
   from /home/sas/temp/zlabsnlp/zlabsnlp_api/tritonserver/backends/onnxruntime/libonnxruntime.so.1.8.0

CoderHam commented on June 15, 2024

@GuanLuo have you seen something like this before?

GuanLuo commented on June 15, 2024

No. Can you share the repro steps?

vigneshwaran-nv-10329 commented on June 15, 2024

@GuanLuo @CoderHam May I know where the error is raised from: onnxruntime or Triton?

It is built on top of the Docker image nvidia/cuda:11.2.2-cudnn8-devel-centos7.

yum dependencies:

wget https://yum.oracle.com/repo/OracleLinux/OL8/developer/EPEL/x86_64/getPackage/libb64-devel-1.2-6.el8.x86_64.rpm
yum install -y libb64-devel-1.2-6.el8.x86_64.rpm
yum install -y numactl-devel numactl

Additional dependencies are installed via conda packages:

conda install -y cmake
conda install -c conda-forge -y gcc==8.5.0 cxx-compiler
conda install -y rapidjson boost
conda install -c conda-forge -y cudatoolkit==11.5 cudatoolkit-dev cudnn
conda install -c conda-forge -y numactl-cos6-x86_64
conda install -c conda-forge/label/gcc7 -y re2
conda update -c anaconda -y openssl

The server was built using the usual build.py script.
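For completeness, a typical bare-metal (non-container) invocation with only the onnxruntime backend enabled is sketched below; the flag names are recalled from the upstream build.py and may differ between server versions, so treat them as assumptions rather than the exact command used here.

# Sketch of a non-container build limited to the onnxruntime backend (flags are assumptions)
./build.py --no-container-build --build-dir=$(pwd)/build \
    --enable-logging --enable-stats --enable-metrics --enable-gpu \
    --endpoint=http --endpoint=grpc \
    --backend=onnxruntime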

askhade commented on June 15, 2024

@vignesh34v: You are using an older version of the server. Can you move to the latest and try again? If the issue persists, please share the repro steps along with the model.
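For example, the quickest way to try a current release is to pull the matching NGC container and point it at the same model repository; the tag below is only an assumption of what the latest release was at the time, so substitute the current one.

# Pull and run a recent Triton release against the existing model repository (tag is an assumption)
docker pull nvcr.io/nvidia/tritonserver:21.11-py3
docker run --gpus=all --rm \
    -p8000:8000 -p8001:8001 -p8002:8002 \
    -v $(pwd)/models:/models \
    nvcr.io/nvidia/tritonserver:21.11-py3 \
    tritonserver --model-repository=/models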
