    Project Database

    This page contains the database of possible research projects for master and bachelor students in the Biorobotics Laboratory (BioRob). Visiting students are also welcome to join BioRob, but note that no funding is offered for these projects (see https://biorob.epfl.ch/students/ for instructions). To enroll in a project, please contact one of the assistants directly (in his/her office, by phone, or by e-mail). Spontaneous project proposals are also welcome if they are related to the research topics of BioRob; see the BioRob Research pages and the results of previous student projects.

    Search filter: only projects matching the keyword Programming are shown here.

    Amphibious robotics
    Computational Neuroscience
    Dynamical systems
    Human-exoskeleton dynamics and control
    Humanoid robotics
    Miscellaneous
    Mobile robotics
    Modular robotics
    Neuro-muscular modelling
    Quadruped robotics


    Amphibious robotics

    767 – Data collection pipeline for sensorized amphibious robot experiments
    Category: semester project, master project (full-time)
    Keywords: 3D, C, C++, Communication, Computer Science, Data Processing, Experiments, Firmware, Image Processing, Motion Capture, Programming, Python, Robotics, Synchronization, Vision
    Type: 5% theory, 20% hardware, 75% software
    Responsible: (MED 1 1626, phone: 38676)
    Description:

    This project has been taken.

    In this project, the student will work closely with the other team members to develop data collection pipelines for experiments with a sensorized amphibious robot and, optionally, use them to collect and analyze experimental data. Specifically, the student needs to:

    • Build a multi-camera system for tracking 3-D kinematics of the robots, ideally in real time. The system is expected to work in both indoor and outdoor experiments. We already have a few working setups, and the student needs to replicate them using new hardware, calibrate the system, and implement real-time tracking nodes in ROS2.
    • Synchronize data collected from multiple sources: cameras, force sensors, motors, etc. (a minimal synchronization sketch follows this list).
    • Visualize the data collected in Mujoco viewer, RViz, Blender, or other 3D visualizers.
    • Help collect experimental data.
    • (For master project only) Help analyze data or use learning algorithms to find underlying patterns.
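
    As a rough illustration of the synchronization task, the sketch below shows a minimal ROS2 node that approximately time-aligns two of the data streams above using the standard message_filters package. The topic names, message types, and tolerance are placeholder assumptions, not the lab's actual configuration.

        # Minimal ROS2 synchronization sketch (topic names are hypothetical).
        import rclpy
        from rclpy.node import Node
        from message_filters import Subscriber, ApproximateTimeSynchronizer
        from geometry_msgs.msg import PoseStamped
        from sensor_msgs.msg import JointState

        class DataSyncNode(Node):
            def __init__(self):
                super().__init__('data_sync_node')
                # One subscriber per stream to be aligned.
                pose_sub = Subscriber(self, PoseStamped, '/tracking/robot_pose')
                joint_sub = Subscriber(self, JointState, '/robot/joint_states')
                # Match messages whose header stamps differ by at most 10 ms.
                self.sync = ApproximateTimeSynchronizer(
                    [pose_sub, joint_sub], queue_size=30, slop=0.01)
                self.sync.registerCallback(self.on_synced)

            def on_synced(self, pose_msg, joint_msg):
                # Aligned samples could be written to a rosbag or log file here.
                self.get_logger().info(
                    f'pose stamp {pose_msg.header.stamp.sec}s matched with '
                    f'{len(joint_msg.position)} joint positions')

        def main():
            rclpy.init()
            rclpy.spin(DataSyncNode())

        if __name__ == '__main__':
            main()

    The same pattern extends to more than two streams (e.g., force sensors), and exact-time matching can be used instead when all devices share a hardware clock.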

    The student is expected to be familiar with C/C++ and Python programming, ROS2, and robot kinematics. Experience with Docker, the Linux kernel, communication protocols, and computer vision algorithms would be a bonus.

    Students interested in this project should send the following materials to the assistant: (1) a resume, (2) a transcript showing relevant courses and grades, and (3) other materials that demonstrate their skills and project experience (such as videos, slides, code repositories, etc.).



    Last edited: 17/01/2026
    758 – Optimization of compliant structure designs in a salamander robot using physics simulation
    Category: master project (full-time)
    Keywords: Bio-inspiration, Biomimicry, Compliance, Dynamics Model, Experiments, Locomotion, Optimization, Programming, Python, Robotics, Simulator, Soft robotics
    Type: 30% theory, 20% hardware, 50% software
    Responsibles: (MED 1 1611, phone: 36620)
    (MED 1 1626, phone: 38676)
    Description:

    In nature, animals have many compliant structures that benefit their locomotion. For example, compliant foot/leg structures help adapt to uneven terrain or negotiate obstacles, flexible tails allow efficient undulatory swimming, and muscle-tendon structures help absorb shock and reduce energy loss. Similar compliant structures may benefit salamander-inspired robots as well.

    In this study, the student will simulate compliant structures (the feet of the robot) in Mujoco and optimize their design. To bridge the sim-to-real gap, the student will first work with other lab members to perform experiments measuring the mechanical properties of a few simple compliant structures. Then, the student will simulate these experiments using the flexcomp plugin of Mujoco or theoretical solid mechanics models, and tune the simulation models so that the dynamical response in simulation matches the experiments (a minimal sketch of this tuning step follows below). Afterward, the student will optimize the design parameters of the compliant structures in simulation to improve the locomotion performance of the robot while maintaining a small sim-to-real gap. Finally, prototypes of the optimal design will be tested on the physical robot to verify the results.
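
    To make the model-tuning step concrete, here is a minimal sketch, under illustrative assumptions, of fitting a single elasticity parameter of a Mujoco flexcomp body so that its simulated deflection matches a bench measurement. The XML geometry, the target deflection value, and the choice of fitting Young's modulus alone are placeholders, not the project's actual setup.

        # Hedged sketch: fit the Young's modulus of a soft flexcomp pad so that
        # its simulated settling deflection matches a measured value.
        import mujoco
        from scipy.optimize import minimize_scalar

        XML = """
        <mujoco>
          <worldbody>
            <geom type="plane" size="1 1 0.1"/>
            <flexcomp name="foot" type="grid" count="4 4 2"
                      spacing="0.02 0.02 0.02" pos="0 0 0.05" mass="0.1">
              <elasticity young="{young:.0f}" poisson="0.2"/>
            </flexcomp>
          </worldbody>
        </mujoco>
        """

        MEASURED_DEFLECTION = 0.004  # [m], placeholder for the bench measurement

        def simulated_deflection(young):
            model = mujoco.MjModel.from_xml_string(XML.format(young=young))
            data = mujoco.MjData(model)
            mujoco.mj_forward(model, data)
            z0 = data.flexvert_xpos[:, 2].mean()  # initial flex vertex height
            for _ in range(2000):                 # let the pad settle under gravity
                mujoco.mj_step(model, data)
            return z0 - data.flexvert_xpos[:, 2].mean()

        def loss(young):
            return (simulated_deflection(young) - MEASURED_DEFLECTION) ** 2

        result = minimize_scalar(loss, bounds=(1e3, 1e6), method='bounded')
        print(f"fitted Young's modulus: {result.x:.0f} Pa")

    In the actual project, the objective would likely compare full force-deflection curves from the experiments rather than a single settling value, and several parameters (damping, Poisson's ratio, geometry) could be tuned jointly.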

    The student is thus required to be familiar with Python programming, physics engines (preferably Mujoco), and optimization/learning algorithms. The student should also have basic mechanical design skills to design models and perform experiments. Students who have taken the Computational Motor Control course or who have experience with data-driven design and solid mechanics are preferred.

    Students interested in this project should send the following materials to the assistants: (1) a resume, (2) a transcript showing relevant courses and grades, and (3) other materials that demonstrate their skills and project experience (such as videos, slides, Git repositories, etc.).



    Last edited: 08/12/2025

    Computational Neuroscience

    755 – High-performance encoder-decoder design for computational neural signal processing
    Category: semester project, master project (full-time), internship
    Keywords: Computational Neuroscience, Data Processing, Linux, Programming, Python
    Type: 20% theory, 5% hardware, 75% software
    Responsible: (MED 1 1626, phone: +41 78 314 18 30)
    Description:

    Background: Brain-computer interfaces (BCIs) using signals acquired with intracortical implants have enabled successful high-dimensional robotic device control, making it possible to complete daily tasks. However, the substantial amount of medical and surgical expertise required to correctly implant and operate these systems greatly limits their use beyond a few clinical cases. A non-invasive counterpart that requires less intervention and can provide high-quality control would profoundly improve the integration of BCIs into multiple settings, representing a nascent research field known as brain robotics. This is challenging, however, due to the inherent complexity of neural signals and the difficulty of online neural decoding with efficient algorithms. Moreover, brain signals evoked by an external stimulus (e.g., vision) are the most widely used in BCI-based applications, yet they are impractical and infeasible in dynamic yet constrained environments. This raises several questions: how can the constraints associated with stimulus-based signals be circumvented? Is it feasible to apply non-invasive BCIs to read brain signals, and how? Going a step further, would it be possible to accurately decode complete, semantic-based command phrases in real time, and thereby achieve seamless and natural brain-robot systems for control and interaction?

    The project is for a team of 1-2 Master's students; tasks will be broken down and assigned to each student according to their skill set.

    What needs to be implemented and delivered at the end of the project?
    1) A method package for brain signal (MEG and EEG) pre-processing and feature formulation.
    2) An algorithm package with an encoder and a decoder of neural signals.
    3) A model for training on brain signals with spatial and temporal features.

    Importance: We have well-documented tutorials on how to use the brain signal dataset, how to use the HPC cluster to train the encoder and decoder, and a complete pipeline to decode EEG-image pairs and MEG-audio pairs. Please come to office MED 0 1612.
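
    For orientation only, the following is a minimal sketch of one common shape such an encoder-decoder can take: a compact convolutional encoder with separate temporal and spatial filtering stages (in the spirit of EEGNet-style models) followed by a linear classification decoder, written in PyTorch. All channel counts, window lengths, class counts, and layer choices are illustrative assumptions, not the lab's pipeline.

        # Illustrative encoder-decoder for multichannel neural signals (PyTorch).
        # All shapes and hyperparameters are placeholders.
        import torch
        import torch.nn as nn

        class NeuralSignalEncoder(nn.Module):
            def __init__(self, n_channels=64, n_samples=500, n_features=128):
                super().__init__()
                self.net = nn.Sequential(
                    # Temporal filtering: convolve along the time axis.
                    nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
                    nn.BatchNorm2d(16),
                    # Spatial filtering: mix information across electrodes.
                    nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
                    nn.BatchNorm2d(32),
                    nn.ELU(),
                    nn.AvgPool2d((1, 4)),
                    nn.Dropout(0.5),
                    nn.Flatten(),
                )
                with torch.no_grad():  # infer the flattened size from a dummy input
                    flat = self.net(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
                self.proj = nn.Linear(flat, n_features)

            def forward(self, x):  # x: (batch, 1, channels, time)
                return self.proj(self.net(x))

        class LinearDecoder(nn.Module):
            def __init__(self, n_features=128, n_classes=10):
                super().__init__()
                self.head = nn.Linear(n_features, n_classes)

            def forward(self, z):
                return self.head(z)

        encoder, decoder = NeuralSignalEncoder(), LinearDecoder()
        logits = decoder(encoder(torch.randn(8, 1, 64, 500)))  # -> (8, 10)

    A real pipeline would add pre-processing (filtering, artifact rejection, epoching) before the encoder and could replace the linear decoder with a sequence model for phrase-level decoding.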

    Last edited: 11/12/2025

    Mobile robotics

    768 – Aria2Robot: Egocentric Meta Zürich Wearable Glasses for Robot Manipulation
    Category: semester project, master project (full-time), internship
    Keywords: Machine learning, Programming, Python, Robotics, Vision
    Type: 30% theory, 10% hardware, 60% software
    Responsible: (MED 1 1626, phone: +41 78 314 18 30)
    Description:

    INTRODUCTION: Egocentric wearable sensing is becoming a key enabler for embodied AI and robotics. Meta's Project Aria (https://www.projectaria.com/) research glasses provide rich, multimodal, first-person observations (RGB video, scene cameras, IMUs, microphones, eye-tracking, and more) in a socially acceptable, all-day wearable form factor, specifically designed to advance egocentric AI, robotics, and contextual perception research. In collaboration with Meta Zürich, we aim to tightly couple Aria research glasses with our existing manipulation platforms at EPFL. This project will:
    1) integrate the Aria Research Kit with our ROS2-based robot platforms (ViperX 300S and WidowX-250 arms on a mobile base), including calibration and time-synchronisation with RGB-D cameras and robot state;
    2) design and execute egocentric data collection in household-like environments (Aria + RealSense + robot joint trajectories + language annotations);
    3) explore one or more robotics applications powered by Aria signals, such as intention-aware teleoperation, egocentric demonstrations for policy learning, or vision-language(-action) fine-tuning for assistance tasks; and
    4) perform systematic platform testing, validation, and documentation to deliver a reusable research pipeline for future projects.
    Excellent programming skills (Python) are a plus.

    IMPORTANCE: We have well-documented tutorials on using the robots, teleoperation interfaces for data collection, using the HPC cluster, and a complete pipeline for training robot policies. The Aria Research Kit and tools (recording, calibration, dataset tooling, SDK) will be integrated into this ecosystem, so the student can focus on the research questions rather than low-level setup.

    What makes Aria special for robotics? Project Aria glasses are multi-sensor "research smart glasses": multiple cameras (wide FOV), IMUs, microphones, eye gaze, and a Machine Perception Service (MPS) that provides SLAM poses, hand poses, etc. They are explicitly marketed by Meta as a research kit for contextual AI and robotics, i.e., for using egocentric sensing to build embodied agents that understand and act in the world. Compared to a normal RGB-D camera, Aria gives you:
    • an egocentric view: "what the human (or robot) sees" while acting;
    • a calibrated head pose/trajectory (via SLAM in MPS);
    • hand/gaze information (depending on which parts you use);
    • a portable, wearable, socially acceptable form factor.

    WHAT WE HAVE:
    [1] Ready-and-easy-to-use robot platforms: ViperX 300S and WidowX-250 arms, configured with 4 RealSense D405 cameras, various grippers, and a mobile robot platform.
    [2] Egocentric sensing hardware: Meta Project Aria research glasses (via collaboration with Meta Zürich), including access to the Aria Research Kit and tooling for data recording and processing.
    [3] Computing resources: two desktop PCs with NVIDIA 5090 and 4090 GPUs.

    Interested students can apply by sending an email to sichao.liu@epfl.ch. Please attach your transcript and a short description of your past/current experience on related topics (robotics, computer vision, machine learning, AR/egocentric perception). The position is open until final candidates are found.

    Recommended reading:
    [1] Aria project: https://www.projectaria.com/resources/
    [2] Aria GitHub: https://github.com/facebookresearch/projectaria_tools
    [3] Liu V, Adeniji A, Zhan H, Haldar S, Bhirangi R, Abbeel P, Pinto L. EgoZero: Robot learning from smart glasses. arXiv preprint arXiv:2505.20290.
    [4] Zhu LY, Kuppili P, Punamiya R, Aphiwetsa P, Patel D, Kareer S, Ha S, Xu D. EMMA: Scaling mobile manipulation via egocentric human data. arXiv preprint arXiv:2509.04443.
    [5] Lai Y, Yuan S, Zhang B, Kiefer B, Li P, Deng T, Zell A. FAM-HRI: Foundation-model assisted multi-modal human-robot interaction combining gaze and speech. arXiv preprint arXiv:2503.16492.
    [6] Banerjee P, Shkodrani S, Moulon P, Hampali S, Zhang F, Fountain J, Miller E, Basol S, Newcombe R, Wang R, Engel JJ. Introducing HOT3D: An egocentric dataset for 3D hand and object tracking. arXiv preprint arXiv:2406.09598.

    Please come to office MED 0 1612.
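
    To give a flavour of the data side, here is a minimal sketch of reading RGB frames and device timestamps from an Aria VRS recording with the projectaria_tools Python package; the call names follow its public examples, but treat the exact signatures (and the recording.vrs path) as assumptions to verify against the current Aria documentation.

        # Hedged sketch: iterate over RGB frames in an Aria VRS recording.
        from projectaria_tools.core import data_provider

        provider = data_provider.create_vrs_data_provider("recording.vrs")
        rgb_stream = provider.get_stream_id_from_label("camera-rgb")

        for i in range(provider.get_num_data(rgb_stream)):
            image_data, record = provider.get_image_data_by_index(rgb_stream, i)
            frame = image_data.to_numpy_array()  # decoded image as a numpy array
            t_ns = record.capture_timestamp_ns   # device capture time in ns
            # ... hand (frame, t_ns) to the ROS2 pipeline for alignment with
            # robot joint states and RealSense depth frames.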

    Last edited: 11/12/2025

    4 projects found.
