MIV-XJTU/JanusVLN

[ICLR2026] Official implementation for "JanusVLN: Decoupling Semantics and Spatiality with Dual Implicit Memory for Vision-Language Navigation"

Quality score: 41/100 (Emerging)

This project helps create AI agents that navigate complex indoor environments from natural language instructions. You give the agent a written command, such as "Go past the kitchen and turn left into the living room," and it plans and executes a path through a simulated 3D environment based on what it sees. It is aimed at researchers and developers working on embodied AI, robotics, and virtual reality applications.


Use this if you are developing or evaluating AI models that need to understand spatial relationships and follow human-like directions within simulated 3D spaces.

Not ideal if you need a pre-built navigation system for real-world robots or applications outside of research on vision-language navigation.

Embodied AI · Robotics · Simulation · Virtual Environment · Navigation · Natural Language Understanding · Spatial Cognition
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 7 / 25
Community 14 / 25


Stars: 508
Forks: 35
Language: Python
License: None
Last pushed: Jan 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MIV-XJTU/JanusVLN"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
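The same endpoint can be called from Python instead of curl. This is a minimal sketch: the `quality_url` helper and the `fetch_quality` function are illustrative names, and the shape of the JSON response is an assumption (the API is presumed to return JSON).

```python
import json
import urllib.request

# Base URL from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, repo: str) -> str:
    # Build the endpoint URL. The "ecosystem" path segment mirrors the
    # curl example above ("transformers" for this repository).
    return f"{API_BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str) -> dict:
    # Fetch and decode the quality data. Assumes a JSON response body;
    # no API key is attached (the free tier allows 100 requests/day).
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)


# Example (performs a live request when run):
# data = fetch_quality("transformers", "MIV-XJTU/JanusVLN")
```

The fetch call is left commented out so the snippet does not hit the network on import; uncomment it to retrieve the live scorecard.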