PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference
Document-level Attitude and Relation Extraction toolkit (AREkit) for sampling and processing large text collections, both with ML and for ML
Batch-scheduler framework for controlling execution in a packet-processing pipeline based on strict service-level objectives
FastDynamicBatcher is a library for batching inputs across requests to accelerate machine learning workloads
TurboBatch accelerates transformer inference by up to 10.2x with dynamic batching. It's lightweight, HuggingFace-compatible, and ideal for real-time NLP tasks.
Production-grade self-hosted LLM inference server optimized for GPU batching, parallel request scheduling, and high-throughput LAN deployment.
Simulation of bucket brigades in production lines.
SHIRT: SHIRT Handles Intense Renaming Transformations - A command-line tool for renaming and encoding files and directories.
Migrates public entity financial data from EDGAR zip into AWS S3 bucket
Batch-aware online task creation for meta-learning.
High-performance async HTTP logging handler for Datadog with batching, retry logic, and comprehensive error handling
🌐 Empower alumni in Pasig City with a data-driven platform that connects education to career opportunities, enhancing employability and growth.
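Several of the projects above (e.g. FastDynamicBatcher, TurboBatch, the LLM inference servers) share the same core idea: requests that arrive within a short window are grouped and processed as one batch, amortizing per-call overhead. A minimal sketch of that pattern is below — all names (`DynamicBatcher`, `submit`, `max_wait_s`) are hypothetical and do not come from any of the listed libraries.

```python
import queue
import threading
import time


class DynamicBatcher:
    """Hypothetical sketch: group requests into batches by size or deadline."""

    def __init__(self, process_batch, max_batch_size=8, max_wait_s=0.01):
        self._process_batch = process_batch  # fn: list[input] -> list[output]
        self._max_batch_size = max_batch_size
        self._max_wait_s = max_wait_s
        self._queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, item):
        # Each request gets a slot: an Event to wait on plus a result field.
        slot = {"done": threading.Event(), "result": None}
        self._queue.put((item, slot))
        return slot

    def _run(self):
        while True:
            batch = [self._queue.get()]  # block until the first request
            deadline = time.monotonic() + self._max_wait_s
            # Keep filling until the batch is full or the wait window expires.
            while len(batch) < self._max_batch_size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self._queue.get(timeout=remaining))
                except queue.Empty:
                    break
            outputs = self._process_batch([item for item, _ in batch])
            for (_, slot), out in zip(batch, outputs):
                slot["result"] = out
                slot["done"].set()


# Usage: four concurrent requests are served by (at most) one batched call.
batcher = DynamicBatcher(lambda xs: [x * 2 for x in xs], max_batch_size=4)
slots = [batcher.submit(i) for i in range(4)]
for s in slots:
    s["done"].wait()
print([s["result"] for s in slots])  # [0, 2, 4, 6]
```

Real inference servers add refinements this sketch omits, such as padding-aware bucketing and continuous batching, but the size-or-deadline trigger is the common starting point.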