Project Database
This page contains the database of possible research projects for master and bachelor students in the Biorobotics Laboratory (BioRob). Visiting students are also welcome to join BioRob, but note that no funding is offered for those projects (see https://biorob.epfl.ch/students/ for instructions). To enroll in a project, please contact one of the assistants directly (in person in his/her office, by phone, or by email). Spontaneous project proposals are also welcome if they are related to the research topics of BioRob; see the BioRob Research pages and the results of previous student projects.
Search filter: only projects matching the keyword Python are shown here.
Amphibious robotics
Computational Neuroscience
Dynamical systems
Human-exoskeleton dynamics and control
Humanoid robotics
Miscellaneous
Mobile robotics
Modular robotics
Neuro-muscular modelling
Quadruped robotics
Amphibious robotics
758 – Optimization of compliant structure designs in a salamander robot using physics simulation |
Category: | master project (full-time) | |
Keywords: | Bio-inspiration, Biomimicry, Compliance, Dynamics Model, Experiments, Locomotion, Optimization, Programming, Python, Robotics, Simulator, Soft robotics | |
Type: | 30% theory, 20% hardware, 50% software | |
Responsibles: |
(MED 1 1611, phone: 36620)
(MED 1 1626, phone: 38676) | |
Description: | In nature, animals have many compliant structures that benefit their locomotion. For example, compliant foot/leg structures help adapt to uneven terrain or negotiate obstacles, flexible tails allow efficient undulatory swimming, and muscle-tendon structures help absorb shock and reduce energy loss. Similar compliant structures may benefit salamander-inspired robots as well. In this project, the student will simulate compliant structures (the feet of the robot) in MuJoCo and optimize their design. To bridge the sim-to-real gap, the student will first work with other lab members on experiments to measure the mechanical properties of a few simple compliant structures. Then, the student will simulate these experiments using MuJoCo's flexcomp functionality or theoretical solid mechanics models, and tune the simulation models so that the dynamical response in simulation matches the experiments. Afterward, the student will optimize the design parameters of the compliant structures in simulation to improve the locomotion performance of the robot while keeping the sim-to-real gap small. Finally, prototypes of the optimal design will be tested on the physical robot to verify the results. The student is thus required to be familiar with Python programming, physics engines (preferably MuJoCo), and optimization/learning algorithms. The student should also have basic mechanical design skills to design models and perform experiments. Students who have taken the Computational Motor Control course or have experience with data-driven design and solid mechanics are preferred. Interested students should send the following materials to the assistants: (1) a resume, (2) a transcript showing relevant courses and grades, and (3) other materials that demonstrate your skills and project experience (such as videos, slides, Git repositories, etc.). Last edited: 17/06/2025 |
Computational Neuroscience
755 – High-performance encoder-decoder design for computational neural signal processing |
Category: | semester project, master project (full-time), internship | |
Keywords: | Computational Neuroscience, Data Processing, Linux, Programming, Python | |
Type: | 20% theory, 5% hardware, 75% software | |
Responsible: | (MED 1 1626, phone: 41783141830) | |
Description: | Background: Brain-computer interfaces (BCIs) using signals acquired with intracortical implants have achieved high-dimensional robotic device control useful for completing daily tasks. However, the substantial medical and surgical expertise required to correctly implant and operate these systems greatly limits their use beyond a few clinical cases. A non-invasive counterpart requiring less intervention that can provide high-quality control would profoundly improve the integration of BCIs into multiple settings, and represents a nascent research field: brain robotics. This is challenging, however, due to the inherent complexity of neural signals and the difficulty of online neural decoding with efficient algorithms. Moreover, brain signals evoked by an external stimulus (e.g., vision) are the most widely used in BCI-based applications, but relying on such stimuli is impractical in dynamic yet constrained environments. This raises several questions: How can we circumvent the constraints associated with stimulus-based signals? Is it feasible to read brain signals with non-invasive BCIs, and how? Going a step further, could it be possible to accurately decode complete semantic command phrases in real time, and thereby achieve seamless and natural brain-robot systems for control and interaction? The project is for a team of 1-2 Master's students; tasks will be assigned to each student according to their skill set. What needs to be implemented and delivered at the end of the project? 1) A method package for brain signal pre-processing and feature formulation. 2) An algorithm package with an encoder and a decoder of neural signals. 3) A model trained on brain signals with spatial and temporal features. Last edited: 13/05/2025 |
Human-exoskeleton dynamics and control
735 – Hip exoskeleton to assist walking - multiple projects |
Category: | semester project, master project (full-time), bachelor semester project, internship | |
Keywords: | Bio-inspiration, C, C++, Communication, Compliance, Control, Data Processing, Dynamics Model, Electronics, Experiments, Inverse Dynamics, Kinematics Model, Learning, Locomotion, Machine learning, Online Optimization, Optimization, Programming, Python, Robotics, Treadmill | |
Type: | 30% theory, 35% hardware, 35% software | |
Responsible: | (MED 3 1015, phone: 31153) | |
Description: | Exoskeletons have experienced unprecedented growth in recent years, and hip-targeting active devices have demonstrated their potential in assisting walking. Portable exoskeletons are designed to provide assistive torques while compensating for their added weight, with the overall goals of increasing endurance, reducing energy expenditure, and improving walking performance. The design of exoskeletons involves the development of the sensing, the actuation, the control, and the human-robot interface. In our lab, a hip-joint active orthosis (“eWalk”) has been prototyped and tested in recent years. Currently, multiple projects are available to address open research questions: Does the exoskeleton reduce effort while walking? How can we model human-exoskeleton interaction? How can we design effective controllers? How can we optimize the interfaces and the control? Which movements can we assist with exoskeletons? Addressing these challenges requires knowledge in biology, mechanics, electronics, physiology, informatics (programming, learning algorithms), and human-robot interaction. If you are interested in collaborating on one of these topics, please send an email to giulia.ramella@epfl.ch with (1) your CV and transcripts, (2) your previous experiences that could be relevant to the project, and (3) what interests you most about this research topic (to be discussed during the interview). Last edited: 22/01/2025 |
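As a hint of what "designing effective controllers" can look like, the sketch below implements one of the simplest assistance strategies from the exoskeleton literature: a sinusoidal hip torque profile indexed by gait phase, with phase estimated from heel-strike timing. The profile shape, peak torque, and stride period are illustrative assumptions only; they do not describe eWalk's actual controller.

```python
# Illustrative gait-phase-based assistive torque for a hip exoskeleton.
# Peak torque, profile shape, and timing values are assumptions.
import math

def assistive_torque(phase, peak=6.0):
    """Torque (Nm) as a function of gait phase in [0, 1).

    Positive = hip flexion assistance; a single sinusoid over the stride
    is among the simplest profiles studied in the literature.
    """
    return peak * math.sin(2 * math.pi * phase)

def phase_from_heel_strikes(t, last_strike, stride_period):
    """Estimate gait phase from the time elapsed since the last heel strike."""
    return ((t - last_strike) / stride_period) % 1.0

# Example: 0.3 s after a heel strike, with a 1.2 s stride period,
# the leg is a quarter of the way through the stride.
phase = phase_from_heel_strikes(t=1.3, last_strike=1.0, stride_period=1.2)
tau = assistive_torque(phase)
```

In practice the phase estimate would come from the exoskeleton's IMUs or encoders, and the torque profile would be tuned per subject, e.g. via the online optimization mentioned in the project keywords.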
Mobile robotics
754 – Vision-language model-based mobile robotic manipulation |
Category: | master project (full-time), internship | |
Keywords: | Control, Experiments, Learning, Python, Robotics, Vision | |
Type: | 20% theory, 20% hardware, 60% software | |
Responsible: | (MED 1 1626, phone: 41783141830) | |
Description: | INTRODUCTION Recent vision-language-action models (VLAs) build upon pretrained vision-language models and leverage diverse robot datasets to demonstrate strong task execution, language-following ability, and semantic generalization. Despite these successes, VLAs struggle with novel robot setups and require fine-tuning to achieve good performance, yet how to fine-tune them most effectively is unclear, given the many possible strategies. This project aims to 1) develop a customised mobile robot platform composed of a customised ROS2-based mobile base and 6-DOF robot arms (ViperX 300 S and WidowX 250), 2) establish a vision system equipped with RGBD cameras for data collection, 3) deploy a pre-trained VLA model locally for robot manipulation with a focus on household environments, and 4) test, validate, and deliver the platform. Excellent Python programming skills are a plus. For applicants not from EPFL, the following conditions must be fulfilled to obtain student status at EPFL (an attestation has to be provided during the online registration): [1] be registered at a university for the whole duration of the project; [2] the project must be required in the academic program and recognized by the home university; [3] the duration of the project is a minimum of 2 months and a maximum of 12 months; [4] be accepted by an EPFL professor to do a project under their supervision. For an internship, at least six months is suggested. WHAT WE HAVE: [1] Ready-and-easy-to-use robot platforms, including ViperX 300 S and WidowX 250 arms, configured with 4 RealSense D405 cameras, various grippers, and a mobile robot platform. [2] Computing resources: two desktop PCs with NVIDIA 4090 GPUs. [3] HPC cluster access: 1000 hours per month on NVIDIA A100 and A100fat GPUs, supporting large-scale training and fine-tuning. Interested students can apply by sending an email to sichao.liu@epfl.ch. 
Please attach your transcript and your past/current experience on related topics. The position remains open until final candidates are found. Recommended reading: [1] https://www.physicalintelligence.company/blog [2] Kim, Moo Jin, Chelsea Finn, and Percy Liang. "Fine-tuning vision-language-action models: Optimizing speed and success." arXiv preprint arXiv:2502.19645 (2025). [3] https://docs.trossenrobotics.com/aloha_docs/2.0/specifications.html Benchmarks: [1] LeRobot: Making AI for Robotics more accessible with end-to-end learning [2] DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset [3] DiT-Block Policy: The Ingredients for Robotic Diffusion Transformers [4] Open X-Embodiment: Robotic Learning Datasets and RT-X Models Last edited: 30/06/2025 |
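Structurally, step 3 of the project (local VLA deployment) reduces to an observe-infer-act loop. The sketch below shows that loop with stub classes standing in for the camera, policy, and arm drivers; every class and method name here is a hypothetical placeholder, not the real ROS2 or model API.

```python
# Skeleton of a VLA-driven manipulation loop. StubCamera/StubVLA/StubArm
# are placeholders: a real setup would wrap the RealSense D405 stream, a
# pretrained VLA policy, and the ROS2 drivers for the ViperX/WidowX arms.

class StubCamera:
    """Stands in for an RGBD camera stream."""
    def read(self):
        # A real driver returns aligned RGB and depth frames.
        return {"rgb": [[0, 0, 0]], "depth": [[0.5]]}

class StubVLA:
    """Stands in for a (fine-tuned) vision-language-action policy."""
    def predict(self, observation, instruction):
        # A real model maps (image, text) -> an action, here assumed to be
        # 6 joint targets plus 1 gripper command.
        return [0.0] * 7

class StubArm:
    """Stands in for the interface of a 6-DOF arm."""
    def __init__(self):
        self.history = []
    def apply(self, action):
        self.history.append(action)  # a real driver would command motors

def control_loop(camera, policy, arm, instruction, steps):
    """Observe, infer an action from the instruction, act; repeat."""
    for _ in range(steps):
        obs = camera.read()
        action = policy.predict(obs, instruction)
        arm.apply(action)
    return arm.history

arm = StubArm()
history = control_loop(StubCamera(), StubVLA(), arm, "pick up the cup", steps=5)
```

The fine-tuning question raised in the introduction lives entirely inside `StubVLA.predict`: the loop itself stays the same whichever fine-tuning strategy is chosen.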
744 – Multi-robot coordination of assistive furniture swarm in multiple layers using velocity potential field modulation |
Category: | master project (full-time) | |
Keywords: | 3D, Control, Motion Capture, Programming, Python, Robotics, Simulator | |
Type: | 40% theory, 10% hardware, 50% software | |
Responsible: | (phone: 37432) | |
Description: | We are exploring the concept of a mobile furniture swarm intended to assist users with limited mobility in their daily indoor activities. To facilitate multiple uses of limited space, mobile furniture pieces can autonomously rearrange their formation (e.g., setups for meetings, parties, or cleaning). To enhance daily autonomy, assistive furniture can actively move out of the way of a passing wheelchair user, or follow the user to help carry objects. Our previously proposed algorithm, Velocity Potential Field Modulation (VPFM), addresses the dense coordination problem of a polytopic swarm in 2D scenarios. For more information, please check out our recent publication in IEEE RA-L: -- Title: Velocity Potential Field Modulation for Dense Coordination of Polytopic Swarms and Its Application to Assistive Robotic Furniture -- Paper: https://ieeexplore.ieee.org/document/11027457 -- Code: https://github.com/Mikasatlx/VPFM-BioRob-EPFL In this master thesis, we will focus on extending VPFM to multiple layers (i.e., heights), which can increase the efficiency of using the 3D space. For example, a lower table can move underneath a higher table, and the seat of a chair can go under a higher table, but the back of a chair cannot. The planned approach is to couple the coordination behavior over multiple layers and introduce a stochastic component to break deadlocks/oscillations. We will conduct both simulations and real-world experiments (using a VICON motion capture system) to evaluate its effectiveness and real-time performance. If the thesis yields meaningful results, we will aim for a submission to the 2026 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2026). Last edited: 15/06/2025 |
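The core idea behind velocity-field modulation, bending a nominal goal-directed velocity so an agent slides around a neighbour instead of driving through it, can be illustrated in a toy 2D setting with a single circular obstacle. The potential shape and gains below are invented for illustration; see the linked paper and repository for the actual polytopic VPFM formulation.

```python
# Toy 2D velocity modulation: attraction to a goal plus a repulsive
# gradient inside an influence radius around an obstacle. Gains, radii,
# and the potential itself are illustrative assumptions.
import numpy as np

def modulated_velocity(pos, goal, obstacle, influence=1.0, gain=1.0):
    """Nominal goal-directed velocity, bent by repulsion near the obstacle."""
    v = goal - pos                        # nominal velocity field
    d = np.linalg.norm(pos - obstacle)
    if 0 < d < influence:
        # Repulsive term grows as the agent approaches the obstacle and
        # vanishes at the influence radius.
        v = v + gain * (1.0 / d - 1.0 / influence) * (pos - obstacle) / d
    return v

pos = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacle = np.array([1.0, 0.05])  # almost on the straight-line path

# Integrate a few Euler steps; the agent deviates laterally around
# the obstacle instead of colliding with it.
dt = 0.05
for _ in range(40):
    pos = pos + dt * modulated_velocity(pos, goal, obstacle)
```

The thesis extension would, roughly speaking, run such modulated fields per layer (height), couple them, and add a stochastic term to escape the deadlocks that symmetric configurations like this one can produce.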
5 projects found.