DomiKnowS

Cross_Modality_Relevance
The source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision".

SpartQA_generation
Generating the SpartQA dataset.

SRLGRN
The source code of an EMNLP 2020 accepted paper.

SpartQA-baselines
All the baselines and experiment settings on SpartQA.

TSLM
The implementation for the paper "Time-Stamped Language Model: Teaching Language Models to Understand the Flow of Events".

DRGN
COLING 2022: Dynamic Relevance Graph Network for Knowledge-Aware Question Answering.

VLN-trans

MRRG
ACL 2022 Findings short paper source code.

RGN
The source code of an IJCAI 2021 paper.

Inference-Masked-Loss

LOViS
Code for LOViS.

HetSaul
This project augments Saul to support working with heterogeneous and multimodal information, e.g., vision and language. It uses a mirror of https://github.com/CogComp/saul.

Object-Grounding-for-VLN
The implementation for "Explicit Object Relation Alignment for Vision and Language Navigation".

Spatial-QA-tasks

SpRL_TextOnly
This repo is for spatial role labeling using text-only data.

Dual-Action-VLN-CE

VisualGenomeSpatialRelations

OntologyBasedLearning

LatentAlignmentProcedural
The PyTorch implementation of the "Latent Alignment of Procedural Concepts in Multimodal Recipes" paper.

SpaRTUN

GIPCOL