Unofficial implementation of "TTNet: Real-time temporal and spatial video analysis of table tennis" (CVPR 2020)
Updated Aug 2, 2024 - Python
An On-Chain Open-Source Platform for Rapid AI Model Productization Using Decentralized Resources with Flexibility and Scalability
Distributed training (multi-node) of a Transformer model
Code for "Active Learning at the ImageNet Scale". This repository implements many popular active learning algorithms and supports training with PyTorch's DDP.
Helmet detector based on CenterNet.
mpify is a simple API for launching Python functions on multiple ranked processes, designed to enable interactive multiprocessing experiments in Jupyter/IPython, such as distributed data parallel training over multiple GPUs.
Unofficial implementation of "Sigmoid Loss for Language Image Pre-Training"
Different template codes for Deep Learning with PyTorch.
Demo for pytorch-distributed.
A tiny version of the original ultralytics/yolov5.
A simulator for access strategies in distributed caching. It models a user equipped with several caches who receives periodic updates from them about the cached content. The problem and algorithms implemented here are detailed in the paper: I. Cohen, G. Einziger, R. Friedman, and G. Scalosub, "Access Strategies for…
[CVPR 2016]You Only Look Once: Unified, Real-Time Object Detection
This repository is intended to be a template for starting new projects with PyTorch, in which deep learning models are trained and evaluated on medical imaging data.
YOLOv3: An Incremental Improvement
Classification of Thoracic Diseases on Chest X-ray Images.
[CVPR 2017]YOLO9000: Better, Faster, Stronger
Final project of the course "Large Scale AI Engineering" at ETH Zürich, FS2025. Implementation and benchmarking of pretokenization and Distributed Data Parallel (DDP) for efficient LLM training on the CSCS Alps supercomputer.
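Several entries above use PyTorch's DistributedDataParallel (DDP). As a point of reference, here is a minimal sketch of a DDP training loop; it is a generic illustration, not code from any of the listed repositories. The model, data, and hyperparameters are placeholders, and the script falls back to a single CPU process (`gloo` backend, world size 1) when not launched via `torchrun`.

```python
"""Minimal DistributedDataParallel sketch.

Multi-process launch: torchrun --nproc_per_node=2 ddp_demo.py
Run standalone, it falls back to a single-process group for a quick check.
"""
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> float:
    # torchrun sets these env vars; defaults allow a single-process run.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    os.environ.setdefault("RANK", "0")
    os.environ.setdefault("WORLD_SIZE", "1")
    dist.init_process_group(backend="gloo")  # use "nccl" for multi-GPU

    model = torch.nn.Linear(10, 1)           # placeholder model
    ddp_model = DDP(model)                   # gradients are all-reduced across ranks

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    loss = None
    for _ in range(3):
        optimizer.zero_grad()
        x = torch.randn(8, 10)               # in practice each rank loads its own
        y = torch.zeros(8, 1)                # data shard, e.g. via DistributedSampler
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                      # DDP synchronizes gradients here
        optimizer.step()

    dist.destroy_process_group()
    return loss.item()

if __name__ == "__main__":
    print(f"final loss: {main():.4f}")
```

When launched with `torchrun`, each process runs the same script; DDP hooks into `backward()` to average gradients across ranks, so all replicas stay in sync after each `optimizer.step()`.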