Project Database
This page contains the database of possible research projects for master and bachelor students in the Biorobotics Laboratory (BioRob). Visiting students are also welcome to join BioRob, but note that no funding is offered for these projects (see https://biorob.epfl.ch/students/ for instructions). To enroll in a project, please contact one of the assistants directly (in their office, by phone, or by email). Spontaneous project proposals are also welcome if they are related to the research topics of BioRob; see the BioRob Research pages and the results of previous student projects.
Search filter: only projects matching the keyword Vision are shown here.
Project categories:
Amphibious robotics
Computational Neuroscience
Dynamical systems
Human-exoskeleton dynamics and control
Humanoid robotics
Miscellaneous
Mobile robotics
Modular robotics
Neuro-muscular modelling
Quadruped robotics
Amphibious robotics
767 – Data collection pipeline for sensorized amphibious robot experiments
Category: semester project, master project (full-time)
Keywords: 3D, C, C++, Communication, Computer Science, Data Processing, Experiments, Firmware, Image Processing, Motion Capture, Programming, Python, Vision
Type: 5% theory, 20% hardware, 75% software
Responsible: (MED 1 1626, phone: 38676)
Description: In this project, the student will work closely with the other team members to develop data collection pipelines for experiments with a sensorized amphibious robot and, optionally, use them to collect and analyze experimental data (a minimal data-logging sketch follows this entry). Specifically, the student needs to:
The student is expected to be familiar with programming in C/C++ and Python, have experience using ROS2, and have learned about robot kinematics. Experience with Docker, the Linux kernel, communication protocols, and computer vision algorithms would be a bonus. Students interested in this project should send the following materials to the assistant: (1) a resume, (2) a transcript showing relevant courses and grades, and (3) other materials that demonstrate their skills and project experience (such as videos, slides, or code repositories). Last edited: 22/11/2025
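The data collection pipeline for this project is open-ended; purely as a point of reference, here is a minimal sketch of a ROS 2 node that time-synchronises camera images with joint states and logs them for offline analysis. The topic names (/camera/image_raw, /joint_states), the 20 ms synchronisation tolerance, and the .npz output are assumptions for illustration and would need to be adapted to the actual robot.

```python
# Minimal ROS 2 logging node: time-synchronises camera frames and joint
# states and buffers them in memory, then dumps the buffer to an .npz file.
# Topic names, tolerance, and output format are illustrative assumptions.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, JointState
from message_filters import Subscriber, ApproximateTimeSynchronizer


class SyncLogger(Node):
    def __init__(self):
        super().__init__('sync_logger')
        image_sub = Subscriber(self, Image, '/camera/image_raw')
        joint_sub = Subscriber(self, JointState, '/joint_states')
        # Accept message pairs whose timestamps differ by at most 20 ms.
        self.sync = ApproximateTimeSynchronizer([image_sub, joint_sub],
                                                queue_size=30, slop=0.02)
        self.sync.registerCallback(self.on_pair)
        self.frames, self.joints, self.stamps = [], [], []

    def on_pair(self, image_msg, joint_msg):
        # Store raw image bytes and joint positions under a common timestamp.
        stamp = image_msg.header.stamp.sec + image_msg.header.stamp.nanosec * 1e-9
        self.frames.append(np.frombuffer(bytes(image_msg.data), dtype=np.uint8))
        self.joints.append(np.array(joint_msg.position))
        self.stamps.append(stamp)

    def save(self, path='run.npz'):
        np.savez(path, stamps=np.array(self.stamps),
                 joints=np.array(self.joints, dtype=object),
                 frames=np.array(self.frames, dtype=object))
        print(f'Saved {len(self.stamps)} synced samples to {path}')


def main():
    rclpy.init()
    node = SyncLogger()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    node.save()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

In a real pipeline the in-memory buffer would typically be replaced by rosbag2 recording or streaming writes, but the synchronisation pattern stays the same.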
757 – Development of radio and vision electronics for a salamander inspired robot
Category: semester project, master project (full-time)
Keywords: Bio-inspiration, Biomimicry, Communication, Electronics, Embedded Systems, Firmware, Programming, Prototyping, Radio, Robotics, Sensor Fusion, Vision, sensor
Type: 70% hardware, 30% software
Responsible: (MED 1 1626, phone: 38676)
Description: This project has been taken. Pleurobot is a salamander-inspired robot that can move in, and transition between, terrestrial and aquatic environments. Some research projects in our lab have demonstrated the effectiveness of vision-guided or human-controlled locomotion transition strategies. However, the present Pleurobot cannot use similar strategies robustly, especially outdoors, because it lacks a vision system and a robust wireless controller. In this project, the student will add a vision system (e.g., an RGB-D camera) to Pleurobot that can operate in amphibious environments. In addition, a robust radio controller is needed to operate the robot outdoors. Alternatively, the student can choose to implement algorithms for the vision system to recognize terrain and obstacles in real time (a toy depth-thresholding sketch follows this entry). Both systems need to be integrated into the ROS 2 controller running on the onboard computer. The major challenges include the waterproofing requirements, the limited space for electronics, and the fusion of multiple sensory systems in an embedded system. The student is expected to have a solid background in circuit design for embedded systems and firmware programming, and familiarity with ROS 2. Students interested in this project should send their transcript, CV, and materials demonstrating their past project experience to qiyuan.fu@epfl.ch. Last edited: 02/09/2025
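To make the optional terrain/obstacle-recognition part of this project more concrete, below is a toy sketch of a depth-based obstacle check on a single RGB-D frame. It is illustrative only: the 0.5 m range threshold, the lower-central region of interest, and the synthetic test frame are all assumptions, and a real implementation would consume live depth images inside the robot's ROS 2 controller.

```python
# Toy obstacle check on a depth image: flag an obstacle if a large fraction
# of valid pixels in the lower-central image region is closer than a threshold.
# All thresholds and the synthetic test frame are illustrative assumptions.
import numpy as np


def obstacle_ahead(depth_m: np.ndarray,
                   max_range_m: float = 0.5,
                   min_fraction: float = 0.2) -> bool:
    """Return True if enough valid pixels in the region of interest are
    closer than max_range_m. depth_m is an HxW array of depths in metres,
    with 0 or NaN marking invalid measurements."""
    h, w = depth_m.shape
    roi = depth_m[h // 2:, w // 4: 3 * w // 4]   # lower-central region
    valid = np.isfinite(roi) & (roi > 0.0)
    if not valid.any():
        return False                              # no usable depth data
    close = valid & (roi < max_range_m)
    return close.sum() / valid.sum() >= min_fraction


if __name__ == '__main__':
    # Synthetic 240x320 frame: flat ground at 1.5 m with a block at 0.3 m.
    depth = np.full((240, 320), 1.5, dtype=np.float32)
    depth[140:230, 120:200] = 0.3
    print('obstacle ahead:', obstacle_ahead(depth))
```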
Mobile robotics
768 – Aria2Robot: Egocentric Meta Zürich Wearable Glasses for Robot Manipulation
Category: semester project, master project (full-time), internship
Keywords: Machine learning, Programming, Python, Robotics, Vision
Type: 30% theory, 10% hardware, 60% software
Responsible: (MED 1 1626, phone: 41783141830)
Description: INTRODUCTION: Egocentric wearable sensing is becoming a key enabler for embodied AI and robotics. Meta's Project Aria (https://www.projectaria.com/) research glasses provide rich, multimodal, first-person observations (RGB video, scene cameras, IMUs, microphones, eye-tracking, and more) in a socially acceptable, all-day wearable form factor, specifically designed to advance egocentric AI, robotics, and contextual perception research. In collaboration with Meta Zürich, we aim to tightly couple Aria research glasses with our existing manipulation platforms at EPFL. This project will 1) integrate the Aria Research Kit with our ROS2-based robot platforms (ViperX 300S and WidowX-250 arms on a mobile base), including calibration and time-synchronisation with RGB-D cameras and robot state; 2) design and execute egocentric data collection in household-like environments (Aria + RealSense + robot joint trajectories + language annotations); 3) explore one or more robotics applications powered by Aria signals, such as intention-aware teleoperation, egocentric demonstrations for policy learning, or vision-language(-action) fine-tuning for assistance tasks; and 4) perform systematic platform testing, validation and documentation to deliver a reusable research pipeline for future projects (a minimal Aria data-loading sketch follows this entry). Excellent programming skills (Python) are a plus.

IMPORTANCE: We have well-documented tutorials on using the robots, teleoperation interfaces for data collection, using the HPC cluster, and a complete pipeline for training robot policies. The Aria Research Kit and tools (recording, calibration, dataset tooling, SDK) will be integrated into this ecosystem, so the student can focus on the research questions rather than low-level setup.

What makes Aria special for robotics? Project Aria glasses are multi-sensor "research smart glasses": multiple cameras (wide FOV), IMUs, microphones, eye gaze, and a Machine Perception Service (MPS) that provides SLAM poses, hand poses, etc. They are explicitly marketed by Meta as a research kit for contextual AI and robotics, i.e., using egocentric sensing to build embodied agents that understand and act in the world. Compared to a normal RGB-D camera, Aria gives you: an egocentric view ("what the human (or robot) sees" while acting); a calibrated head pose/trajectory (via SLAM in MPS); hand/gaze information (depending on which parts you use); and a portable, wearable, socially acceptable form factor.

WHAT WE HAVE: [1] Ready-and-easy-to-use robot platforms: ViperX 300S and WidowX-250 arms, configured with 4 RealSense D405 cameras, various grippers, and a mobile robot platform. [2] Egocentric sensing hardware: Meta Project Aria research glasses (via collaboration with Meta Zürich), including access to the Aria Research Kit and tooling for data recording and processing. [3] Computing resources: two desktop PCs with NVIDIA 5090 and 4090 GPUs.

Interested students can apply by sending an email to sichao.liu@epfl.ch. Please attach your transcript and a short description of your past/current experience on related topics (robotics, computer vision, machine learning, AR/egocentric perception). The position is open until we have final candidates; after that, it will be closed.

Recommended reading: [1] Aria project: https://www.projectaria.com/resources/ [2] Aria GitHub: https://github.com/facebookresearch/projectaria_tools [3] Liu V, Adeniji A, Zhan H, Haldar S, Bhirangi R, Abbeel P, Pinto L. Egozero: Robot learning from smart glasses. arXiv preprint arXiv:2505.20290. [4] Zhu LY, Kuppili P, Punamiya R, Aphiwetsa P, Patel D, Kareer S, Ha S, Xu D. Emma: Scaling mobile manipulation via egocentric human data. arXiv preprint arXiv:2509.04443. [5] Lai Y, Yuan S, Zhang B, Kiefer B, Li P, Deng T, Zell A. Fam-hri: Foundation-model assisted multi-modal human-robot interaction combining gaze and speech. arXiv preprint arXiv:2503.16492. [6] Banerjee P, Shkodrani S, Moulon P, Hampali S, Zhang F, Fountain J, Miller E, Basol S, Newcombe R, Wang R, Engel JJ. Introducing HOT3D: An egocentric dataset for 3D hand and object tracking. arXiv preprint arXiv:2406.09598. Last edited: 25/11/2025
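For orientation only, the sketch below shows how frames can be read from an Aria VRS recording with the projectaria_tools package linked above, following its data-provider quickstart. The file name is a placeholder, and the exact function names should be verified against the current SDK documentation before use.

```python
# Minimal sketch: open an Aria VRS recording and read RGB frames as numpy
# arrays using projectaria_tools (see the Aria GitHub link above).
# "recording.vrs" is a placeholder path; verify the API against the
# current projectaria_tools documentation before relying on it.
from projectaria_tools.core import data_provider

provider = data_provider.create_vrs_data_provider("recording.vrs")

# Look up the RGB camera stream by its label and iterate over its frames.
rgb_stream_id = provider.get_stream_id_from_label("camera-rgb")
num_frames = provider.get_num_data(rgb_stream_id)

for i in range(num_frames):
    image_data, record = provider.get_image_data_by_index(rgb_stream_id, i)
    frame = image_data.to_numpy_array()         # HxW(x3) image array
    timestamp_ns = record.capture_timestamp_ns  # device capture time
    # ... hand the (timestamp_ns, frame) pair to the robot-side pipeline ...
```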
754 – Vision-language model-based mobile robotic manipulation
Category: semester project, master project (full-time), internship
Keywords: Control, Experiments, Learning, Python, Robotics, Vision
Type: 30% theory, 10% hardware, 60% software
Responsible: (MED 1 1626, phone: 41783141830)
Description: INTRODUCTION: Recent vision-language-action models (VLAs) build upon pre-trained vision-language models and leverage diverse robot datasets to demonstrate strong task execution, language-following ability, and semantic generalisation. Despite these successes, VLAs struggle with novel robot setups and require fine-tuning to achieve good performance; however, the most effective way to fine-tune them is unclear, given the numerous possible strategies. This project aims to 1) develop a customised mobile robot platform composed of a customised, ROS2-based mobile base and 6-DOF robot arms (ViperX 300S and WidowX-250); 2) establish a vision system equipped with RGB-D cameras for data collection; 3) deploy a pre-trained VLA model locally for robot manipulation using reinforcement and imitation learning, with a focus on household environments (a generic behavior-cloning sketch follows this entry); and 4) perform platform testing, validation and delivery. Excellent programming skills (Python) are a plus.

IMPORTANCE: We have well-documented tutorials on how to use the robots, teleoperation for data collection, how to use the HPC cluster, and a complete pipeline to train robot policies.

For applicants not from EPFL, the following conditions must be fulfilled to obtain student status at EPFL (an attestation has to be provided during the online registration): [1] be registered at a university for the whole duration of the project; [2] the project must be required in the academic program and recognised by the home university; [3] the duration of the project is a minimum of 2 months and a maximum of 12 months; [4] be accepted by an EPFL professor to do a project under their supervision. For an internship, at least six months is suggested.

WHAT WE HAVE: [1] Ready-and-easy-to-use robot platforms: ViperX 300S and WidowX-250 arms, configured with 4 RealSense D405 cameras, various grippers, and a mobile robot platform. [2] Computing resources: two desktop PCs with NVIDIA 5090 and 4090 GPUs. [3] HPC cluster access with 1000 h/month on NVIDIA A100 and A100-fat GPUs, supporting large-scale training and fine-tuning.

Interested students can apply by sending an email to sichao.liu@epfl.ch. Please attach your transcript and a description of your past/current experience on related topics. The position is open until we have final candidates; after that, it will be closed.

Recommended reading: [1] LeRobot: Making AI for Robotics more accessible with end-to-end learning, https://github.com/huggingface/lerobot [2] Kim, Moo Jin, Chelsea Finn, and Percy Liang. "Fine-tuning vision-language-action models: Optimizing speed and success." arXiv preprint arXiv:2502.19645 (2025). [3] https://docs.trossenrobotics.com/aloha_docs/2.0/specifications.html [4] Lee BK, Hachiuma R, Ro YM, Wang YC, Wu YH. Unified Reinforcement and Imitation Learning for Vision-Language Models. arXiv preprint arXiv:2510.19307. 2025 Oct 22.

Benchmarks: [1] LeRobot: Making AI for Robotics more accessible with end-to-end learning [2] DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset [3] DiT-Block Policy: The Ingredients for Robotic Diffusion Transformers [4] Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Last edited: 10/11/2025
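As a rough illustration of where imitation learning enters the pipeline described above, here is a generic behavior-cloning sketch in PyTorch: a small network regresses actions onto demonstrated (observation, action) pairs. This is not the lab's training pipeline and not LeRobot's API; the observation/action dimensions and the random placeholder dataset are assumptions.

```python
# Generic behavior-cloning sketch: regress actions onto demonstrated
# (observation, action) pairs. Dimensions and the random "dataset" are
# placeholders; a real pipeline would use recorded teleoperation data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACT_DIM = 64, 7   # e.g. visual features + joint states -> 7-DoF action

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Placeholder demonstrations: 1024 random (obs, action) pairs.
obs = torch.randn(1024, OBS_DIM)
actions = torch.randn(1024, ACT_DIM)
loader = DataLoader(TensorDataset(obs, actions), batch_size=64, shuffle=True)

for epoch in range(10):
    total = 0.0
    for batch_obs, batch_act in loader:
        pred = policy(batch_obs)
        loss = nn.functional.mse_loss(pred, batch_act)   # imitation (BC) loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total += loss.item() * batch_obs.size(0)
    print(f"epoch {epoch}: mse {total / len(obs):.4f}")
```

In practice the placeholder dataset would be replaced with teleoperated demonstrations, and the regression head would sit on top of a pre-trained vision-language backbone rather than raw feature vectors.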
4 projects found.