
Tomoya Sawada, Ph.D.

Tomoya Sawada is a Research Scientist/Engineer in Computer Vision at Mitsubishi Electric. He received a B.S. in engineering from the University of Yamanashi, Japan, in 2012, and M.Sc. and Ph.D. degrees in computer science from the same university in 2014 and 2015, respectively. His research interests include scene understanding (object detection, semantic segmentation, object tracking, and action recognition), AI-assisted user interfaces, natural scene generation with GANs and diffusion models, multimodal learning, and domain shift/adaptation/transfer.

Languages and Tools:

Azure, AWS, C, C++, Docker, Git, Java, Linux, MATLAB, MongoDB, MySQL, OpenCV, Photoshop, Python, PyTorch, scikit-learn, TensorFlow

Working Experience

July 2023 - Joint Research, The University of Tokyo (Matsuo Laboratory)

Research on Generative AI for Industries, adopting various IoT devices
Collaborators: Rei Araki, Takaomi Hasegawa, Tadahiro Shidara, Keichi Yokoyama

In this project, we use generative AI to control the temperature settings of air conditioners. When there are multiple air conditioners in a room, generative AI is used to control them in a way that maintains comfort for office workers while minimizing total electricity consumption.
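
As a rough illustration of the control objective only (the project description above does not detail the actual models), the setpoint selection can be framed as minimizing predicted power consumption subject to a comfort constraint. Every function, number, and name below is a hypothetical toy stand-in, not the deployed system:

  import itertools

  # Toy stand-ins for learned predictors of power draw and occupant comfort.
  def predicted_power(setpoints_c):
      # Rough proxy: cooling further below 28 degC costs more power (kW).
      return sum(max(0.0, 28.0 - t) * 0.4 for t in setpoints_c)

  def predicted_comfort(setpoints_c):
      # Rough proxy: comfort peaks when the average setpoint is near 25 degC.
      avg = sum(setpoints_c) / len(setpoints_c)
      return 1.0 - abs(avg - 25.0) / 5.0

  def best_setpoints(n_units=3, candidates=(24.0, 25.0, 26.0, 27.0), min_comfort=0.7):
      # Exhaustively search joint setpoints that keep comfort above a threshold
      # while minimizing predicted power; a generative model could instead
      # propose candidate combinations to avoid exhaustive search.
      feasible = (
          s for s in itertools.product(candidates, repeat=n_units)
          if predicted_comfort(s) >= min_comfort
      )
      return min(feasible, key=predicted_power, default=None)

  print(best_setpoints())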

April 2023 - Researcher/Manager, Mitsubishi Electric

Business Intelligence Strategic Promotion Dept., DX Innovation Center.
Managing Executive Officer: Hiroshi Sakakibara, CDO; Executive Officer: Nobuo Asahi
MITSUBISHI ELECTRIC CORPORATION, Headquarters.

As an expert in this organization, I have advanced research on generative AI and have been involved in its practical application in business, from planning to implementation. I have also actively expanded connections with partner companies to promote its social implementation.

April 2021 - March 2023, Researcher/Manager, Mitsubishi Electric

Maisart Co-Innovation Center, Business Innovation & DX Strategy Div.
Managing Executive Officer: Hiroshi Sakakibara, CDO; Executive Officers: Tomoki Ichikawa, Takashi Mizuochi, Ph.D.
MITSUBISHI ELECTRIC CORPORATION, Headquarters.

I have used Amazon's Working Backwards method to develop new business from the customer's point of view. I have also provided consulting services as an AI expert, proposing algorithms to solve customer problems and assessing business profitability.

October 2018 - October 2021, Joint Research, Massachusetts Institute of Technology (MIT)

Research on Generative Adversarial Networks (GANs), Annotation System for Semantic Segmentation
Collaborator: Professor Antonio Torralba

Generative Adversarial Networks (GANs) have been used to generate high-resolution images, but the placement and detail of individual parts were often unnatural. In this project, we developed a method based on the SPADE network to generate images with natural object placement and detail granularity. To overcome the limitations of GANs in generating detailed edges, we developed a method that uses CG as a base to clarify boundaries and applies real-world reconstructed textures to achieve high-resolution generated images. We investigated the effectiveness of GAN-based reconstruction methods for object generation by applying a method developed by MIT to the Cityscapes dataset. The generated objects were then used to train the state-of-the-art object detector DetectoRS. We also evaluated how much real-world data could be reduced without compromising accuracy by varying the proportion of real images in the training dataset: reducing the proportion of real images by 50% decreased mAP (mean Average Precision) by only about 4%, demonstrating the potential of GAN-based data augmentation to reduce the need for large amounts of real-world data in object detection. We also developed a tool for automatic annotation to enable a smooth transition from data acquisition to training.
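
A minimal sketch of the kind of data-mixing experiment described above, assuming PyTorch-style datasets of real and GAN-generated images (the dataset objects and helper functions in the usage comment are hypothetical placeholders):

  import random
  from torch.utils.data import ConcatDataset, Subset

  def mixed_training_set(real_ds, synthetic_ds, real_fraction=0.5, seed=0):
      # Keep only `real_fraction` of the real images and fill the training set
      # with GAN-generated images instead, as in the 50%-real experiment above.
      rng = random.Random(seed)
      n_real = int(len(real_ds) * real_fraction)
      real_idx = rng.sample(range(len(real_ds)), n_real)
      return ConcatDataset([Subset(real_ds, real_idx), synthetic_ds])

  # Usage sketch (placeholders; the detector and the mAP evaluation would come
  # from whichever detection framework is in use):
  # train_ds = mixed_training_set(cityscapes_real, gan_generated, real_fraction=0.5)
  # detector = train_detector(train_ds)      # hypothetical helper
  # print(evaluate_map(detector, val_ds))    # hypothetical helper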

May and December 2019, Visiting Researcher, Mitsubishi Electric Research Laboratories (MERL)

Research on Object Detection/Segmentation, Face Recognition and Video Anomaly Detection
Advisors: Michael Jones, Tim Marks, Anoop Cherian, Teng-yok Lee and Alan Sullivan

In this project, I developed a variety of computer vision technologies:

  • Object Detection: I developed an object detection system for the Camera Monitoring System (CMS) of automotive products. This system uses deep learning to identify vehicles and other objects around the car using cameras mounted in the rearview mirrors.
  • Semantic Segmentation: I developed a system for automatically extracting people from video footage captured by surveillance cameras. This system combines computer vision techniques such as background subtraction with deep learning to segment people from other objects in the scene (a generic sketch of the background-subtraction step appears after this list).
  • Face Recognition: I developed a face recognition system that can identify people from high/low-angle images, such as those captured by surveillance cameras. This system was trained on a large dataset of human faces, and it can accurately identify people even in challenging conditions such as poor lighting and occlusion.
  • Anomalous Behavior Detection: I developed a system for detecting anomalous behavior in video footage. This system uses a deep learning-based model to learn the patterns of normal behavior and identify deviations from these patterns.
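
As a rough illustration of the background-subtraction step mentioned in the Semantic Segmentation item, here is a generic OpenCV sketch (the video path is a placeholder, and this is not the system built at MERL):

  import cv2

  cap = cv2.VideoCapture("surveillance.mp4")  # placeholder path
  subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      mask = subtractor.apply(frame)          # 255 = moving foreground
      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                              cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
      # The cleaned foreground mask can be used to crop candidate person
      # regions before running a heavier segmentation network on them.
      cv2.imshow("foreground", mask)
      if cv2.waitKey(1) == 27:                # Esc to quit
          break

  cap.release()
  cv2.destroyAllWindows()
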
October 2017 - October 2018, Joint Research, FreshlyGround Pte Ltd.

Research on Design Thinking, Business Incubation
Collaborators: Thierry DO, Shang LIM

In this project, we proposed a system that improves the quality of customer service in Singaporean hotels by using wearable devices such as camera-equipped smart glasses. The system performs real-time facial recognition to identify customers and shows their information on the smart glasses' display. We developed a prototype and demonstrated its value to several hotel operators. We also learned how to create customer value by applying the Design Thinking method.

October 2016 - October 2018, Joint Research, University of Southern California (USC)

Research on Graph Signal Processing
Collaborator: Professor Antonio Ortega

Graph Signal Processing (GSP) is a mathematical framework for processing signals defined on networks (graphs). Conventional signal processing deals with signals on regular structures such as time and space, while graph signal processing handles signals on irregular structures such as social networks and transportation networks. We applied this technology to manufacturing and developed a system that automatically classifies videos of human assembly movements into individual work steps. The system uses a graph that represents the bones of the human hand and can also be used to detect abnormalities in assembly work.
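
A minimal sketch of the underlying idea, assuming a toy hand-skeleton graph (the edge list and per-joint signal below are simplified placeholders, not the graph or features used in the project):

  import numpy as np

  # Toy hand-skeleton graph: nodes are joints, edges connect adjacent bones.
  edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6)]
  n = 7

  A = np.zeros((n, n))
  for i, j in edges:
      A[i, j] = A[j, i] = 1.0
  L = np.diag(A.sum(axis=1)) - A   # combinatorial graph Laplacian

  # A "graph signal" assigns one value per joint, e.g. joint speed in a frame.
  x = np.array([0.1, 0.2, 0.9, 1.0, 0.1, 0.2, 0.1])

  # Graph smoothness x^T L x equals the sum of (x_i - x_j)^2 over edges:
  # low values mean neighbouring joints move similarly. Per-frame features
  # like this can feed a classifier that splits an assembly video into steps.
  print(float(x @ L @ x))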

April 2015 - March 2021, Research Scientist/Engineer, Mitsubishi Electric

Image Analytics and Processing Technology Group,
Smart Information Processing Technology Dept.
INFORMATION TECHNOLOGY R&D CENTER

I developed a variety of computer vision technologies:

  • Training Data Minimization: A state-of-the-art object detector was implemented and evaluated using a dataset of approximately 30,000 infrared images for person tracking. The object detector was trained using only RGB images, and inference was applied after converting the appearance of the infrared images to RGB. The proposed method achieved an average precision of 83.3% and an average detection success rate of 63.3%, demonstrating high-accuracy person tracking.
  • Knowledge Graph Visualization: We applied Scene Graph, which describes the relationships between objects in a video, to production site footage to extract the tacit knowledge and experience-based actions of workers. In addition, we developed an attention mechanism that uses the spatial positions of objects, encoded along the channel direction, as a teacher signal, and confirmed that it recognizes objects more accurately than the latest object recognition technology.
  • Online Learning: We developed a prototype system that can automatically extract learning targets from customer-accumulated information, enabling continuous improvement of AI recognition accuracy over time. We also identified the issue of low performance when the model is not retrained on inference-side data for the semantic segmentation task. By adopting online learning and using background subtraction to generate and refine masks, we demonstrated high-precision inference without inference-side training data.
  • Object Detection: Small objects are difficult to detect due to the limited amount of information they contain in images. Additionally, in some datasets, small objects are often not annotated, leading to poor learning efficiency. To address these challenges, we developed a mechanism using attention to efficiently collect and acquire information from small objects and integrated it into an object detection system. The system was intended for integration into an electronic mirror (camera monitoring system) as a vehicle-mounted device.
  • Driving Warning System: This project aimed to develop an intelligent alert system that combines CMS (camera monitoring system) object recognition technology to assess surrounding vehicle risks with DMS (driver monitoring system) driver information to optimize alert methods, notification content, and notification timing for each driver. The system integrates camera-based sensing of driver information and external risks with AI technology to create an intelligent alert system that adjusts notification levels based on driver awareness.
  • AI on device system: A prototype system was developed to perform real-time object recognition on electronic mirror images and display the recognition results on the electronic mirror display. The demo system was showcased at CES2020 to introduce the company's AI technology to car manufacturers and enhance its presence in the industry. The system achieved an average recognition rate of 88.8% under various weather conditions.
  • Optimizer: We proposed a novel general-purpose optimizer and demonstrated its effectiveness through preliminary experiments on an object recognition dataset that was previously difficult to converge using conventional methods. The proposed method automatically adjusts the learning rate, enabling the optimizer to focus on learning rare and important information more effectively. Additionally, the method prevents overfitting during the initial training phase by stabilizing the gradient field.
  • Attentive Notifications for Drivers: This project involved integrating the company's proprietary object detection technology into a prototype vehicle. The real-time object recognition technology was applied to an ultra-wide-angle 360° camera to detect objects in the vehicle's blind spots and alert the driver. The system linked the object recognition results from the external camera with driver information obtained from the DMS (Driver Monitoring System) for notification. The technology was announced in a press release.
  • Optical Character Recognition: We developed AI technology applied to OCR and symbol recognition, using advanced image processing to improve detection and recognition rates. In response to a request to introduce a conversion system in the second half of the year, we built and delivered a system that automates part of the conversion process with AI and lets users run the conversion through a GUI tool. A deep-learning-based symbol recognition technology for paper documents achieved a recognition rate of 98%, and fine-tuned OCR achieved a recognition rate of 99%.
  • Document Alignment: In this project, I developed and deployed an automated financial document position correction system. The software takes as input a set of images captured by a camera or scanner and warps each image to a front-facing view (a generic sketch of this kind of perspective correction appears after this list). After delivery, processing speed was identified as a new challenge, and the method was incorporated into a simpler system that estimates the document's bounding rectangle.
  • Action Recognition: We engaged in collaborative research with Takenaka Corporation, and I served as the technical representative for Takenaka Technical Research Institute's open technology demonstrations. We developed a system that uses sensors to identify and visualize human active behaviors (affirmation, negation, and smiling) on HoloLens.
  • Anomaly Detection: I evaluated a deep-learning-based person detection method using data from surveillance cameras on actual railway vehicles and conducted a feasibility study. I also developed a low-compute method for crowd analysis and participated in meetings with railway operators to discuss the feasibility of using deep learning. In addition, I constructed an algorithm for detecting suspicious objects in station premises and vehicles and developed a system for visualizing pedestrians.
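
As a rough illustration of the perspective correction described in the Document Alignment item, here is a generic OpenCV sketch that assumes the four page corners have already been detected (paths, corner coordinates, and the output size are placeholders, not the deployed software):

  import cv2
  import numpy as np

  def rectify_document(image, corners, out_w=1240, out_h=1754):
      # Warp a photographed or scanned page to a front-facing view. `corners`
      # are the detected page corners ordered top-left, top-right,
      # bottom-right, bottom-left; the output roughly matches A4 at 150 dpi.
      src = np.asarray(corners, dtype=np.float32)
      dst = np.float32([[0, 0], [out_w - 1, 0],
                        [out_w - 1, out_h - 1], [0, out_h - 1]])
      H = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography
      return cv2.warpPerspective(image, H, (out_w, out_h))

  # Usage sketch:
  # img = cv2.imread("receipt.jpg")
  # flat = rectify_document(img, [(120, 80), (980, 60), (1010, 1400), (90, 1420)])
  # cv2.imwrite("receipt_rectified.png", flat)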

Professional Activities

  • The Visual Computer, International Journal of Computer Graphics (2020-2023)
  • Consumer Devices & Systems (CDS) Transactions, Information Processing Society of Japan (2020)

Education

  1. Registered Scrum Master, completed Scrum Inc.'s Registered Scrum Master Training and passed the credentialing exam, 2023.
  2. CEFR C1 (Advanced Level), completed a 272-lesson program, St Giles International Language Centres (Canada) Ltd., 2020.
  3. Ph.D. in Computer Science (GPA 3.8), University of Yamanashi, 2015.
    'Sequential Images Summarization based on Viewer's Interests and Aesthetic Composition'
    (Advisor: Professor Xiaoyang Mao, Associate Professor Masahiro Toyoura)
  4. M.S. in Computer Science (GPA 3.6), University of Yamanashi, 2014.
    'Film Comic Generation based on Viewer's Interests and Aesthetic Composition' (Best Presentation Award)
    (Advisor: Professor Xiaoyang Mao, Associate Professor Masahiro Toyoura)
  5. B.S. in Engineering, University of Yamanashi, 2012.
    'Automatic Film Comic Generation with Eye-tracking data' (Best Presentation Award)
    (Advisor: Professor Xiaoyang Mao, Associate Professor Masahiro Toyoura)

Awards

  • Director of Export Control Yoshifumi Sakamoto Award, T. Sawada (2022, June) 'Realization of data handling with risk minimization in overseas cloud for data linkage and data-driven business', Mitsubishi Electric Corporation.
  • CTO Masahiro Fujita Award, T. Sawada (2018, March) 'Development and Expansion of Maisart Technology', Mitsubishi Electric Corporation.
  • Information Technology R&D Center GM Tetsuo Nakagawaji Award, T. Sawada (2017, March) 'Fundamental Technology Development of Self-driving Vehicles (Level 3)', Mitsubishi Electric Corporation.
  • Best Student Award, T.Sawada (2014, March), Integrated Graduate School of Medicine and Engineering, University of Yamanashi.
  • Scholarship Full Exemption
    T. Sawada (2014, March), Repayment Exemption for Students with Excellent Grades -FY2014-, Japan Student Services Organization (JASSO) Scholarship.
  • Best Student Award, T.Sawada (2012, March), Faculty of Engineering, University of Yamanashi.
  • Best Poster Award
    T.Sawada, M.Toyoura, X.Mao (2012, March) 'Using Eye-tracking Data for Automatic Film Comic Generation', Forum on Art and Science.

Journal Articles

  • T.Sawada, Y.Goto, M.Toyoura, X.Mao, J.Gyoba (2015, June) 'Saliency Map for Images with Leading Lines', IEICE‐JA, Vol.J98‐A, No.6, pp.446‐449.
  • T.Sawada, M.Toyoura, X.Mao (2013, September) 'Automatic Film Comic Generation Using iMap', The Journal of the Institute of Image Electronics Engineers of Japan, Vol. 42, No. 5, pp. 671‐680.

International Conference (Refereed)

  • T. Sawada and M. Nakamura, "INTELLIGENT WARNING SYSTEM MONITORING VEHICLE SURROUNDING AND DRIVER'S BEHAVIOR," 2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), July, 2022.
  • T. Sawada, T. -Y. Lee and M. Mizuno, "VIDEO OBJECT SEGMENTATION WITH ONLINE MASK REFINEMENT," 2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), July, 2022.
  • T. Sawada, T. -Y. Lee and M. Mizuno, "Bottom-Up Saliency Meets Top-Down Semantics For Object Detection," 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 729-733, doi: 10.1109/ICIP42928.2021.9506475.
  • P. Das, J. Kao, A. Ortega, T. Sawada, H. Mansour, A. Vetro, A. Minezawa (2019, May) 'Hand Graph Representations for Unsupervised Segmentation of Complex Activities', International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp.4075-4079.
  • T. Sawada, M. Toyoura, X. Mao (2017, June) 'Auto‐Framing Based on User Camera Movement', Computer Graphics International (CGI), pp. 1‐6 (Article 18).
  • K. Fujimori, S. Zhu, T. Sawada, M. Toyoura, X. Mao (2014, November) 'Image based Kansei-words Labeling by Game', NICOGRAPH, pp.81-86.
  • T. Sawada, M. Toyoura, X. Mao (2013, January) 'Film Comic Generation with Eye Tracking', International Conference on MultiMedia Modeling (MMM, Lecture Notes in Computer Science), Vol.7732/2013, pp.467‐478.
  • M. Toyoura, T. Sawada, M. Kunihiro, X. Mao (2012, March) 'Using Eye‐Tracking Data for Automatic Film Comic Creation', ACM Symposium on Eye Tracking Research & Applications (ETRA), pp. 373‐376.

Patents

(Domestic Application)

  • "資料生成装置、資料生成方法、およびプログラム"、澤田友哉、(三菱電機株式会社)、出願番号 PCT/JP2024/009477.
  • "人材レーティングシステム、人材レーティング方法、人材レーティング装置、および人材レーティングプログラム"、澤田友哉、(三菱電機株式会社)、出願番号 PCT/JP2023/030235.
  • "人材入札支援システム、人材入札支援方法、人材入札支援装置、および人材入札支援プログラム"、澤田友哉、(三菱電機株式会社)、出願番号 PCT/JP2023/030251.
  • "学習装置"、澤田友哉、(三菱電機株式会社)、登録番号 07274071.
  • "配筋検査装置、学習装置、配筋検査システム及びプログラム"、宮本健、澤田友哉、(三菱電機株式会社)、出願番号 PCT/JP2022/039215.
  • "推論装置、推論方法及び推論プログラム"、澤田友哉、(三菱電機株式会社)、出願番号 PCT/JP2022/029598、登録番号 07345680.
  • "推論装置、推論方法及び推論プログラム"、澤田友哉、(三菱電機株式会社)、出願番号 PCT/JP2022/029597、登録番号 07317246.  
  • "学習済モデル生成システム、学習済モデル生成方法、情報処理装置、プログラム、学習済モデル、および推定装置"、澤田友哉、(三菱電機株式会社)、出願番号 PCT/JP2021/044400、登録番号 07413528.
  • "運転支援制御装置及び運転支援制御方法"、澤田友哉 、福地賢、(三菱電機株式会社)、登録番号 6932269.
  • "推論装置、推論方法、学習装置、学習方法、及びプログラム"、澤田友哉、福地賢、守屋芳美、(三菱電機株式会社)、出願番号 PCT/JP2021/013407.
  • "物体検出装置、モニタリング装置、学習装置、及び、モデル生成方法"、澤田友哉、福地賢、(三菱電機株式会社)、出願番号 PCT/JP2020/048617、登録番号 07031081.
  • "ラベリング装置及び学習装置"、澤田友哉、福地賢、守屋芳美、(三菱電機株式会社)、出願番号 PCT/JP2020/009092、登録番号 07055259.
  • "物体検出装置、モニタリング装置及び学習装置"、澤田友哉、福地賢、守屋芳美、(三菱電機株式会社)、公開番号 WO2021-130881、登録番号 07361949.
  • "対話支援装置、対話支援システム、及び対話支援プログラム"、黒木友裕 、高橋幹雄 、高井勇志 、大塚貴弘 、内出隼人 、澤田友哉 、川島啓吾 、津田由佳 、志田哲郎 、吉田諒 、石川美穂 、飯田隆義、(株式会社竹中工務店、三菱電機株式会社)、公開番号 2020-173714、登録番号 07323098 .
  • "物体検出装置および物体検出方法"、澤田友哉 、三嶋英俊 、前原秀明 、守屋芳美 、宮澤一之 、峯澤彰 、日野百代 、王夢雄 、澁谷直大、(三菱電機株式会社)、公開番号 WO2018-051459.
  • "事故情報収集システムおよび事故情報収集方法"、宮澤一之 、関口俊一 、前原秀明 、守屋芳美 、峯澤彰 、服部亮史 、日野百代 、澤田友哉 、澁谷直大、(三菱電機株式会社)、 公開番号 WO2018-008122.
  • "サーバ装置、ネットワークシステム及びセンサ機器"、峯澤彰 、宮澤一之 、服部亮史 、日野百代 、澤田友哉 、守屋芳美 、関口俊一、(三菱電機株式会社)、公開番号 2018-061110.
  • "物体検出装置及び物体検出方法"、宮澤一之 、関口俊一 、前原秀明 、守屋芳美 、峯澤彰 、服部亮史 、長瀬百代 、澤田友哉、(三菱電機株式会社)、公開番号 WO2017-094140.
  • "画像特徴記述子符号化装置、画像特徴記述子復号装置、画像特徴記述子符号化方法及び画像特徴記述子復号方法"、 峯澤彰 、守屋芳美 、関口俊一 、服部亮史 、宮澤一之 、澤田友哉 、澁谷直大、(三菱電機株式会社)、公開番号 2017-143425.
  • "映像収集システム"、宮澤一之 、関口俊一 、前原秀明 、守屋芳美 、峯澤彰 、服部亮史 、日野百代 、澤田友哉 、澁谷直大、(三菱電機株式会社)、登録番号 06104482.
  • "移動体検出装置およびその方法"、澤田友哉、三嶋英俊 、前原秀明 、守屋芳美 、宮澤一之 、峯澤彰 、日野百代 、王夢雄 、澁谷直大、(三菱電機株式会社)、登録番号 06230751.
  • "映像処理装置および方法"、宮澤一之 、関口俊一 、前原秀明 、守屋芳美 、峯澤彰 、服部亮史 、長瀬百代 、澤田友哉、(三菱電機株式会社)、登録番号 06116765.
  • "ネットワークシステム、ノード装置群、センサ機器群、サーバ装置群およびセンサデータ送受信方法"、峯澤彰 、宮澤一之 、服部亮史 、長瀬百代、澤田友哉、守屋芳美 、関口俊一 、(三菱電機株式会社)、出願番号 2016196292.

--

(PCT Application)

  • "Inference device, inference method, and non-transitory computer-readable medium", Tomoya SAWADA, (MITSUBISHI ELECTRIC CORPORATION)," 62023078099.6
  • "Inference device, inference method, and non-transitory computer-readable medium", Tomoya SAWADA, (MITSUBISHI ELECTRIC CORPORATION), CA3193358A1.
  • "Dispositif d'étiquetage et dispositif d'apprentissage", Tomoya SAWADA, Ken FUKUCHI, Yoshimi MORIYA, (MITSUBISHI ELECTRIC CORPORATION), EP4099263A4.
  • "LABELING DEVICE AND LEARNING DEVICE", Tomoya SAWADA, Ken FUKUCHI, Yoshimi MORIYA, (MITSUBISHI ELECTRIC CORPORATION), Publication number: 20220366676.
  • "OBJECT DETECTION DEVICE, MONITORING DEVICE, TRAINING DEVICE, AND MODEL GENERATION METHOD", Tomoya SAWADA, Ken FUKUCHI, (MITSUBISHI ELECTRIC CORPORATION), Publication number: WO/2022/137476 .
  • "OBJECT DETECTION DEVICE, MONITORING DEVICE, AND LEARNING DEVICE", Tomoya SAWADA, Ken FUKUCHI, Yoshimi MORIYA, (MITSUBISHI ELECTRIC CORPORATION), Publication number: WO/2021/130881.
  • "ACCIDENT INFORMATION COLLECTION SYSTEM, AND ACCIDENT INFORMATION COLLECTION METHOD", Kazuyuki MIYAZAWA, Shunichi SEKIGUCHI, Hideaki MAEHARA, Yoshimi MORIYA, Akira MINEZAWA, Ryoji HATTORI, Momoyo HINO, Tomoya SAWADA, Naohiro SHIBUYA, (MITSUBISHI ELECTRIC CORPORATION), Publication number: 20190193659.
  • "Object detection device and object detection method", Tomoya Sawada, Hidetoshi Mishima, Hideaki Maehara, Yoshimi Moriya, Kazuyuki Miyazawa, Akira Minezawa, Momoyo Hino, Mengxiong Wang, Naohiro Shibuya, (MITSUBISHI ELECTRIC CORPORATION), Patent number: 10943141.
  • "Object detection device and object detection method", Kazuyuki Miyazawa, Shunichi Sekiguchi, Hideaki Maehara, Yoshimi Moriya, Akira Minezawa, Ryoji Hattori, Momoyo Nagase, Tomoya Sawada, (MITSUBISHI ELECTRIC CORPORATION), Patent number: 10643338.

Domestic Conference

  • M. Nakamura, T. Sawada, K. Sugimoto, 'A Lightweight Object Recognition Method Augmented with Temporal Difference Information', Picture Coding Symposium of Japan / Image Media Processing Symposium (PCSJ/IMPS), Vol.34, pp.84-85, 2019-11.
  • S. Wakugawa, T. Sawada, 'An Investigation of the Effect of Image Quality on Object Recognition Accuracy', Picture Coding Symposium of Japan / Image Media Processing Symposium (PCSJ/IMPS), Vol.34, 2019-11.
  • K. Miyazawa, T. Sawada, S. Sekiguchi, 'Moving Object Detection for Rear-Side Video Monitoring with In-Vehicle Cameras', Vision Engineering Workshop (ViEW), 2015-12.
  • T. Sawada, M. Toyoura, X. Mao, 'Auto-Framing Based on Camera Movement', Visual Computing / Graphics and CAD Joint Symposium, Article 5, 2015-6.
  • Y. Goto, T. Sawada, M. Toyoura, X. Mao, J. Gyoba, 'Creating a Saliency Map that Accounts for Leading Lines', Visual Computing / Graphics and CAD Joint Symposium, 2014-6.
  • T. Sawada, M. Toyoura, X. Mao, 'Automatic Comic Generation from Video Content Based on Gaze Patterns', Visual Computing / Graphics and CAD Joint Symposium, 2014-6.
  • T. Sawada, M. Toyoura, X. Mao, 'An Automatic Comic Generation System from Video Using Gaze Information', Yamanashi Industry-Academia-Government Collaboration Research Exchange Program, Article 17, 2012-9.
  • T. Sawada, M. Toyoura, X. Mao, 'Use of Gaze Information in Automatic Film Comic Generation', IPSJ National Convention, 1Q-9, 2012-3.

Tomoya Sawada's Projects

ssd_keras

A Keras port of Single Shot MultiBox Detector

tensorflow

Computation using data flow graphs for scalable machine learning
