InternVL
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance.
LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
DragGAN
Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (a full-featured DragGAN implementation with an online demo and local deployment; code and models fully open-sourced; supports Windows, macOS, and Linux)
InternGPT
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
Ask-Anything
[CVPR 2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
InternImage
[CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions
InternVideo
[ECCV 2024] Video Foundation Models & Data for Multimodal Understanding
VisionLLM
VisionLLM Series
VideoMamba
[ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding
OmniQuant
[ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
VideoMAEv2
[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
DCNv4
[CVPR 2024] Deformable Convolution v4
all-seeing
[ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
GITM
Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory
Multi-Modality-Arena
Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
Vision-RWKV
Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures
CaFo
[CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners
PonderV2
PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
LAMM
[NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
UniFormerV2
[ICCV 2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer
unmasked_teacher
[ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models
OmniCorpus
OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
HumanBench
The official implementation of HumanBench (CVPR 2023)
Instruct2Act
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
EfficientQAT
EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
gv-benchmark
General Vision Benchmark, GV-B, a project from OpenGVLab
ControlLLM
ControlLLM: Augment Language Models with Tools by Searching on Graphs
InternVideo2
UniHCP
Official PyTorch implementation of UniHCP
efficient-video-recognition
SAM-Med2D
Official implementation of SAM-Med2D
EgoVideo
[CVPR 2024 Champions] Solutions for EgoVis Challenges in CVPR 2024
DiffRate
[ICCV 2023] An approach that improves Vision Transformer (ViT) efficiency by jointly applying token pruning and token merging with a differentiable compression rate.
MMT-Bench
ICML 2024 | MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI
Awesome-DragGAN
Awesome-DragGAN: A curated list of papers, tutorials, and repositories related to DragGAN
MM-NIAH
The official implementation of the paper "Needle In A Multimodal Haystack"
M3I-Pretraining
STM-Evaluation
MUTR
[AAAI 2024] Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation
LCL
Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning
ChartAst
ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.
DDPS
Official implementation of "Denoising Diffusion Semantic Segmentation with Mask Prior Modeling"
Awesome-LLM4Tool
A curated list of papers, repositories, tutorials, and anything related to large language models for tools
PIIP
NeurIPS 2024 Spotlight ⭐️ Parameter-Inverted Image Pyramid Networks (PIIP)
InternVL-MMDetSeg
Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed
GUI-Odyssey
GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 7,735 episodes from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combos.
Siamese-Image-Modeling
[CVPR 2023] Implementation of Siamese Image Modeling for Self-Supervised Vision Representation Learning
De-focus-Attention-Networks
Learning 1D Causal Visual Representation with De-focus Attention Networks
Multitask-Model-Selector
Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector"
Official-ConvMAE-Det
perception_test_iccv2023
Champion solutions repository for the Perception Test challenges at the ICCV 2023 workshop.
opengvlab.github.io
MovieMind
EmbodiedGPT
DriveMLM
.github