About ExCEllence (ECE Project Showcase)

The ExCEllence (ECE Project Showcase) is an annual flagship event by NUS ECE. The showcase celebrates the ingenuity, technical depth, and real-world impact of student-driven engineering projects across diverse domains — including artificial intelligence, communications, robotics, microelectronics, energy systems, and emerging interdisciplinary technologies.

Following a successful inaugural run, the 2026 edition returns with an even stronger lineup of projects, broader industry participation, and deeper engagement opportunities.

At its core, ExCEllence is about:

🎯Recognising excellence in student innovation and achievement
🤝Bridging academia and industry through authentic engagement
🚀Inspiring the next generation of engineers to design, build, and lead

Engineering Intelligence!

What to Expect at ExCEllence 2026

Know an outstanding project that deserves recognition?

Whether it’s a Capstone project, a course-based innovation, or an award-winning competition entry, we want to spotlight the very best work from our ECE students.

📌 Who can nominate? Students and faculty members
📌 What are we looking for? Innovative, impactful, and well-executed projects
📌 Why nominate? Give deserving projects the spotlight and inspire the ECE community!

🌟Selected projects stand a chance to win attractive prizes. Find out more about the eligibility criteria and prize details.

🌟Submit your nominations today and help us showcase the best of ECE!
Find out more about the eligibility for the competition.

Experience ExCEllence!

Discover bold ideas. Meet future engineers. Celebrate innovation in action.

Explore cutting-edge ECE projects, engage with industry professionals, and witness students competing for top honours.

Be Part of the Excitement!

✅ Explore breakthrough innovations
✅ Vote for the Most Inspiring, Most Impactful, and Most Sustainable projects (note: Student-Run Teams are not eligible to participate)
✅ Support outstanding student achievements
✅ Stand a chance to win attractive prizes as a voter

👉 RSVP by 8 April: https://forms.cloud.microsoft/r/Hm0hSKCmuv

Categories

Competitive

Non-Competitive

Capstone Posters

This category focuses on the rigorous analysis and presentation of Capstone findings through an A1-sized poster.
  • The evaluation criteria include the clarity and design of your poster, the overall impact of your work, your presentation skills, etc.
Capstone_Loh Yin Heng

A Low Cost Intuitive Robot Arm Teleoperation Framework

Teleoperation systems have gained significant attention in robotics, especially for applications where human operators need to control robots remotely in real time. This is important in settings where direct human involvement is difficult, unsafe, or inefficient, and where intuitive robot control is needed for testing, training, or task execution. This project demonstrates a low-cost robotic teleoperation system for a robot arm, with a focus on improving usability and reducing setup complexity. By supporting simulation, real robot deployment, automated calibration, and a custom software launcher, the system makes intuitive, low-cost teleoperation practical to set up and use.
Capstone_Lei Lingkai

Advanced LEGO Robotics Platform

This project develops a low-cost robotics platform that integrates LEGO SPIKE Prime with Raspberry Pi using a custom PCB and UART communication. By enabling ROS2 support on Ubuntu, it overcomes hardware and software limitations of existing systems, allowing flexible sensor integration and advanced control. The platform supports motor control via H-bridge, real-time data handling, and modular expansion, lowering the barrier to entry for robotics education while maintaining high-level functionality.
Capstone_Gordon Hoon

AI/ML-Driven Fault Detection and Diagnosis in Multi-Level Three-phase Inverter Topologies

This project integrates advanced Machine Learning (ML) techniques to address challenges in fault detection and identification. We have achieved 100% accuracy in detecting both two-level and three-level inverter open-circuit faults.
Capstone_Jie Hao Tan

Autonomous Vehicle (AV) Perception Systems: Reviewed and Reengineered

The global autonomous car market has been experiencing rapid growth. This project yields an improved traffic sign detection system using YOLOv11. The model is trained on annotated traffic sign images to detect and classify different road signs in real time. Inspired by recent improvements in object detection research, the project enhances the detection pipeline by optimizing feature extraction and training strategies to improve accuracy and robustness. The trained model is intended for efficient deployment on edge or embedded systems. The final system aims to support intelligent transportation applications by enabling reliable detection of traffic signs under various environmental conditions.
Capstone_Xinyu Li

Biofeedback and Freezing of Gait Detection for Parkinson’s Disease

This project developed a wearable Edge-AI system for real-time detection and biofeedback of Freezing of Gait in patients with Parkinson’s Disease. Motion sensors worn on the ankles monitor a patient’s gait to detect Freezing of Gait using on-device machine learning, and alert them to stop to prevent falls. Subsequently, left and right cues are provided to assist the patient in resuming walking. To improve the reliability of Freezing of Gait detection, the device uses a patient-independent model, and transitions to a patient-dependent model as more Freezing of Gait episodes are detected.

Capstone_Leong Deng Jun

Design of a Redundant Inertial Navigation System with Hardware-Level Voting and Sensor Fusion for Autonomous Platforms

This project presents a novel redundancy architecture for autonomous maritime systems based on hardware-level voting. Supported by ST Engineering, the system integrates multiple GNSS-aided Inertial Navigation Systems (INS) onto a single PCB, producing an aggregated navigation datastream. Unlike conventional software-based failover approaches, the design utilizes an arbitration mechanism on a hardware-level to evaluate sensor consistency in real time. Faulty inputs are identified and excluded through voting logic, while valid measurements are fused to generate a consolidated output. This approach enhances system robustness, improves stability, and enables rapid fault detection for safety-critical autonomous maritime operations.
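The voting idea can be illustrated in miniature: exclude readings that disagree with the consensus, then fuse the survivors. The Python sketch below is a simplified software model of that arbitration (the tolerance and heading values are invented for illustration; the actual design performs this at hardware level):

```python
def vote_and_fuse(readings, tolerance):
    """Exclude readings that deviate from the median by more than
    `tolerance`, then fuse the survivors by averaging (a simplified
    software model of hardware-level voting)."""
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    valid = [r for r in readings if abs(r - median) <= tolerance]
    return sum(valid) / len(valid), valid

# Three GNSS-aided INS heading estimates (degrees); one unit is faulty.
fused, valid = vote_and_fuse([90.2, 90.4, 45.0], tolerance=5.0)
```

Median-based rejection excludes the faulty 45° reading before fusion, so a single failed unit cannot corrupt the consolidated output.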
Capstone_Hannah

Federated Domain Adaptation for Video Action Recognition

Federated Video Domain Adaptation enables collaborative learning across distributed and non-IID video datasets while preserving privacy, but remains largely unexplored. We propose Multi-scalE Temporal domAin aLignment (METAL), a multi-temporal-scale knowledge distillation-based framework that leverages temporal information at multiple resolutions to improve cross-domain video action recognition with only model parameter transfers. METAL trains per-scale transformer encoders on source-clients, then performs independent knowledge voting at each temporal scale to generate robust pseudo-labels on the target-server. A novel L2 variance penalty enforces cross-scale consistency during scale-based knowledge distillation, preventing any single scale from dominating. Late fusion and feature-based knowledge distillation with confidence-weighted ensemble pseudo-labels combine complementary temporal information for final predictions.

High Speed Image Processing on FPGA Using Convolutional Neural Network (CNN)

This project implements a Convolutional Neural Network (CNN) for object detection on a Field-Programmable Gate Array (FPGA), targeting high frame rates, low latency, and reduced power consumption compared to CPU- and GPU-based solutions. Conventional CNNs are computationally intensive and memory-demanding, which poses challenges for resource-constrained FPGA platforms. To address this, model optimization techniques such as pruning and quantization are applied to reduce complexity and memory usage. These optimizations enable efficient deployment of the CNN onto FPGA hardware while maintaining effective object detection performance.
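To give a flavour of the quantization step mentioned above, here is a minimal symmetric int8 quantizer in Python (a generic textbook scheme, not the project's actual tooling):

```python
def quantize_int8(weights):
    """Symmetric uniform quantization of float weights to int8:
    scale by max|w|/127, round, and clamp to the int8 range."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 0.25]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Each weight now occupies one byte instead of four, at the cost of a small, bounded rounding error — the trade that makes CNNs fit on-chip FPGA memory.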
Espressif_Kai Sheng Bock

Human Detection and Tracking in Indoor Spaces Using Nano Drones

In the aftermath of disasters such as earthquakes, rapid and efficient search and rescue operations are critical for saving lives and minimizing casualties. Traditional methods often face significant challenges in navigating dangerous, complex and unstable environments, such as partially collapsed buildings, where visibility and access are severely restricted. The recent and rapid advancement of drone technology has unlocked new possibilities for such applications.

The project aims to develop a thermal-based approach to search for and track humans in indoor environments. This work integrates thermal sensors and develops algorithms that enable palm-sized nano Crazyflie drones, despite their limited computing and sensing capabilities, to autonomously search for, detect, and track humans in indoor environments while avoiding simple obstacles such as walls.

Capstone_Zakariyyaa Chachia

Investigating Ultra-Low Power, Brain-Inspired, Neuromorphic Hardware to Detect Interference Patterns on Board Satellites

Radio frequency interference is an ongoing and evolving issue within the satellite industry, but current detection systems built on deep learning models can be computationally expensive. This project offers a low-power solution to this problem, using cutting-edge neuromorphic hardware modelled after the human brain, combining spiking neural networks with deep learning.
Capstone_Tiara Mirchandani

Multi-Layered Anatomical Hand Model for Physics-Based Interaction

This project presents a fully anatomical, physics-driven digital human hand model. To overcome the limitations of hollow models like MANO, this framework combines a high-fidelity CT-derived skeletal mesh with a multi-layered physical simulation. By coupling a volumetric Finite Element Method (FEM) muscle layer with mass-spring skin constraints and torque-driven rigid body joint mechanics, the model achieves real-time, volume-preserving soft-tissue deformation and realistic collision responses, creating an anatomically accurate model for hand-object interaction.
Capstone_Jia Cheng, Raymand T

Near-Infrared Transmission-Control Mechanism of Metasurface Arrays on Novel Substrate Materials

This project pioneers a highly efficient near-infrared (NIR) photodetector by integrating Titanium (Ti) plasmonic metasurfaces directly onto 4H-SiC, a premier next-generation semiconductor substrate. Through rigorous 3D electromagnetic simulations exploring complex optical phenomena, we engineered an optimized nanoline array that achieves an exceptional ~90% optical absorbance near the target wavelength. By successfully translating these theoretical nanoscale models into comprehensive Electron Beam Lithography (EBL) layouts, this work bridges the gap between computational physics and physical nanofabrication for robust, integrated photonic architectures.
Capstone_Ricky Wang

Non-Invasive Monitoring of Implantable Battery Using Ultrasonics and AI

Reliable monitoring of implantable batteries is critical, but traditional invasive electrical methods drain limited power reserves and pose patient hazards. While ultrasonic sensing provides a non-invasive alternative, varying skin thicknesses severely distort acoustic signals. To resolve this, we propose a robust diagnostic framework utilizing Swept Frequency Ultrasonic Reflection (SFUR) paired with a novel cascaded Orthogonal Spectral Extreme Gradient Boosting (OS-XGBoost) architecture. By employing PCA dimensionality reduction and dynamically coupling battery health with charge estimation, the model successfully filters physiological noise to achieve an R² of 0.99 for both SOH and SOC estimation. Its minimized 6.43 KB memory footprint demonstrates exceptional viability for resource-constrained edge microcontrollers.
Capstone_Ang Hui Jie

Passive Acoustic Monitoring of Melting Glaciers

Glaciers are sensitive indicators of climate change, and their accelerated melting is a visible consequence of global warming. This project investigates melting activity using acoustic data captured from tidewater glaciers, which produce distinct sound signatures. Passive Acoustic Monitoring (PAM) is a non-invasive and cost-effective approach that offers high temporal resolution for studying underwater sound propagation at the Hansbreen glacier terminus in Hornsund Fjord, Spitsbergen. Acoustic simulations were conducted in the Julia ecosystem using UnderwaterAcoustic.jl and Bellhop beam-tracing to model sound transmission under varying environmental conditions, including bathymetry, sound-speed profiles, glacier-front geometry, sound directionality, and seabed properties.
Capstone_Mahir Faysal

Real-Time Underwater Voice Communication

This project presents the design and implementation of a real-time underwater voice streaming system using low-bitrate speech compression and an underwater communication framework. Audio is recorded from one webpage, compressed into encoded chunks, transmitted underwater, and then decoded and played back on a second webpage with low latency. To achieve this, the system integrates Lyra with UnetStack and is evaluated through benchmarking, latency analysis, and parameter tuning. The final system demonstrates reliable webpage-to-webpage underwater voice communication, showing that low-latency digital speech transmission can be successfully achieved in an underwater environment.
Capstone_Mihindukulasooriya S

Self-Balancing Control System for Spacecraft Simulator

The rapid growth of small satellite applications demands highly reliable subsystems, particularly the Attitude Determination and Control System (ADCS), which governs precise satellite orientation for stability, power generation, and payload pointing. However, ADCS failures are often linked to limitations in ground-based testing. Air-bearing simulators, used to emulate near torque-free conditions, are highly sensitive to misalignment between the centre of mass and centre of rotation, introducing disturbance torques. This work presents an automatic balancing system using least squares estimation of CM offset from IMU data and piezoelectric actuation, achieving a fourfold improvement in reaction wheel saturation time and significantly enhancing test fidelity.
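The least-squares estimation step can be sketched for a single axis: if each measured disturbance torque obeys τ = m·g·r plus noise, the least-squares estimate of the offset r has a simple closed form. The Python below is an illustrative 1-D model (the mass and torque values are invented, not the project's data):

```python
def estimate_cm_offset(torques, mass, g=9.81):
    """Least-squares estimate of a single-axis centre-of-mass offset r
    from repeated disturbance-torque measurements tau_i = m*g*r + noise.
    For this constant model the LS solution reduces to the sample mean
    divided by m*g (an illustrative 1-D version of the estimation)."""
    return sum(torques) / len(torques) / (mass * g)

# Simulated IMU-derived torques (N*m) for a 20 kg platform
# whose true CM offset is 1 mm.
taus = [0.197, 0.196, 0.195, 0.197]
r_hat = estimate_cm_offset(taus, mass=20.0)
```

Once r is estimated, the balancing masses (piezo-actuated in the project) are moved to cancel the offset, shrinking the disturbance torque that would otherwise saturate the reaction wheels.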
Capstone_Gayathri Thattarupar

Self-Sustaining Communication Module for Enhanced Resilience in Miniaturized Satellite Operations

This project presents the design and validation of an in-house developed RF-based ground node for satellite communications. A modular RF front-end printed circuit board (PCB) operating in the UHF band was developed to enhance satellite telemetry, tracking, and localisation. The design integrates filtering, low-noise amplification, and power amplification using controlled-impedance RF layout techniques. Experimental testing demonstrated an uplink gain of approximately 16 dB, closely matching theoretical predictions. The system supports resilient, low-cost ground-station infrastructure for emerging small-satellite communication and tracking networks.
Capstone_Ang Jing Neng

Study of a Non-Iterative Method Using Phaseless Data for Solving Inverse Scattering Problems

Solving inverse scattering problems using non-iterative methods and phaseless data is increasingly important in optical imaging, radar detection, biomedical diagnostics, and space-based remote sensing, where phase measurements are costly or physically inaccessible. This project aims to develop an improved phase retrieval algorithm to accurately reconstruct phase information from intensity data, alongside a novel non-iterative inversion algorithm that enhances reconstruction accuracy beyond traditional methods such as Born, Rytov, and modified Born approximation inversion methods. MATLAB simulations are used to validate the performance of the algorithms.
Capstone_Zi Yang Chai

Systematic RF Hardware Design Methodology for TEMPEST Attacks Based on Display Pixel Harmonics

TEMPEST attacks recover video display content from unintentional electromagnetic emissions, where the pixel rate and its harmonics carry the display information. Since different resolutions and refresh rates produce different pixel rates, and different displays amplify different harmonics, reliable carrier capture requires hardware covering a wide frequency range. Existing literature, largely focused on demodulation and image reconstruction algorithms, employs overly wideband hardware, resulting in unnecessarily large and impractical antennas. This work proposes a systematic methodology using a pixel harmonic based design table to identify candidate carriers and select the minimum required bandwidth for a target set of displays.
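The design-table idea is easy to sketch: each display mode's pixel rate follows from its total raster and refresh rate, and the harmonics of that rate are the candidate carriers. A minimal Python version (the 1080p60 totals are the standard CEA/VESA figures; treat the table itself as illustrative):

```python
def pixel_clock_hz(h_total, v_total, refresh_hz):
    """Pixel rate = total pixels per frame (including blanking) x refresh."""
    return h_total * v_total * refresh_hz

def harmonic_table(modes, n_harmonics=3):
    """Build a candidate-carrier table: for each display mode, list the
    pixel-rate harmonics a receiver could tune to."""
    table = {}
    for name, (h_total, v_total, refresh) in modes.items():
        f0 = pixel_clock_hz(h_total, v_total, refresh)
        table[name] = [n * f0 for n in range(1, n_harmonics + 1)]
    return table

# 1920x1080@60 has a 2200x1125 total raster -> 148.5 MHz pixel clock.
modes = {"1080p60": (2200, 1125, 60)}
table = harmonic_table(modes)
```

Intersecting such tables over a target set of displays bounds the frequency range the receive hardware must cover, which is what lets the methodology avoid impractically wideband antennas.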
Capstone_Leipakshi Gupta

Terrain Aware SE(2) Planning for Multifloor Navigation in Indoor Digital Twins

This project studies global path planning for wheeled robots in indoor point cloud based digital twins, with a particular focus on navigation across multiple floors. While existing digital twin systems can reconstruct buildings and support localisation, they do not generally provide a complete method for safe and feasible multifloor navigation for SE(2) constrained wheeled agents such as wheelchairs and service robots. This is especially important in real indoor environments, where narrow corridors, clutter, ramps, floor level changes, and elevator transitions affect whether a path is actually traversable.

The work builds on the SEB Naver framework as a baseline local planner and examines how raw point cloud data can be converted into an SE(2) terrain representation containing elevation, surface normal, risk, and signed distance information. On this representation, local motion planning is performed using Kino A* and trajectory optimisation. To extend this framework beyond a single floor, this project proposes a higher level multifloor planning approach in which each floor is represented by its own terrain map and vertical connections such as elevators or ramps are modelled as transitions in a graph. Global planning is then carried out by combining inter-floor graph search with floor level SE(2) planning.
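The inter-floor graph search can be sketched as Dijkstra over (floor, waypoint) nodes, with elevator or ramp transitions as ordinary weighted edges. The Python below is a toy illustration (node names and edge costs are invented; in the described approach the intra-floor costs would come from the floor-level SE(2) planner):

```python
import heapq

def multifloor_plan(graph, start, goal):
    """Dijkstra over a graph whose nodes are (floor, waypoint) pairs and
    whose edges are either intra-floor path segments or vertical
    transitions such as elevators and ramps."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# (floor, waypoint) nodes; the 'lift' edge connects the two floors.
graph = {
    ("F1", "start"): [(("F1", "lift"), 12.0)],
    ("F1", "lift"):  [(("F2", "lift"), 5.0)],   # elevator transition
    ("F2", "lift"):  [(("F2", "goal"), 8.0)],
}
path, cost = multifloor_plan(graph, ("F1", "start"), ("F2", "goal"))
```

Modelling vertical connections as plain edges keeps the global search uniform: the planner never needs special-case logic for elevators, only per-edge costs.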

Espressif Competition

This category features selected Capstone projects developed in collaboration with Espressif Systems, showcasing innovative applications built using Espressif technologies.

Participation in this category is by invitation only and is limited to Capstone teams whose projects are part of the Espressif collaboration programme.

  • The showcase features presentations and live demonstrations of the prototypes or the completed systems.
  • The evaluation criteria assess the understanding of, and ability to apply, fundamental engineering principles in solving real-world problems, creativity in design and execution, among other things.
Capstone and Espressif_Yalin

Biofeedback and Freezing of Gait Detection for Parkinson’s Disease

This project developed a wearable Edge-AI system for real-time detection and biofeedback of Freezing of Gait in patients with Parkinson’s Disease. Motion sensors worn on the ankles monitor a patient’s gait to detect Freezing of Gait using on-device machine learning, and alert them to stop to prevent falls. Subsequently, left and right cues are provided to assist the patient in resuming walking. To improve the reliability of Freezing of Gait detection, the device uses a patient-independent model, and transitions to a patient-dependent model as more Freezing of Gait episodes are detected.

Centralized & Distributed Object Detection using Autonomous Vehicles on any Unknown Terrain using Machine Learning

Object detection using Machine Learning is a reasonably well-studied topic with several ML algorithms in place. Recent YOLO-series algorithms detect objects in real time and hence become attractive when mounted on an autonomous vehicle. In this project, we consider a cooperative multi-vehicle scenario in which vehicles (mBots) participate in a coordinated and cooperative manner to detect objects on an unknown terrain.
Espressif_Kai Sheng Bock

Human Detection and Tracking in Indoor Spaces Using Nano Drones

In the aftermath of disasters such as earthquakes, rapid and efficient search and rescue operations are critical for saving lives and minimizing casualties. Traditional methods often face significant challenges in navigating dangerous, complex and unstable environments, such as partially collapsed buildings, where visibility and access are severely restricted. The recent and rapid advancement of drone technology has unlocked new possibilities for such applications.

The project aims to develop a thermal-based approach to search for and track humans in indoor environments. This work integrates thermal sensors and develops algorithms that enable palm-sized nano Crazyflie drones, despite their limited computing and sensing capabilities, to autonomously search for, detect, and track humans in indoor environments while avoiding simple obstacles such as walls.

PSC modules

Personal Smart Companion (PSC)

PSC is a next-generation, personalized smart alarm clock and desktop or wearable companion designed for university students. Through voice and touch interaction, PSC supports productivity, healthy routines, emotional wellbeing, and focus in a seamless, non-intrusive way. Grounded in behavioral science principles, it uses IoT-enabled hardware and cloud intelligence to deliver timely nudges that help students stay motivated, organized, and emotionally supported throughout their academic journey. As an ambient companion device, PSC passively and continuously monitors aspects of the user’s routines and wellbeing, responding in subtle and supportive ways. It combines six core features: a Pomodoro productivity timer, habit tracker, multimodal mood logging, motivational audio support, environmental sensing for air quality and ambient conditions, and a Telegram bot for remote access to wellness insights and data. Together, these features make PSC a dependable, trustworthy companion that promotes student wellbeing, productivity, and healthier daily habits.

Smart Traffic Cone System

The smart traffic cone is an integral part of the smart city trend. Traffic cones are bright orange markers used for safety and traffic management: placed on roads or footpaths, they temporarily redirect traffic in a safe manner and create separation or merge lanes during road works or accidents. A smart traffic cone is an upgraded version of the traditional cone, equipped with sensors and connectivity to improve road safety, traffic flow management, and communication. If equipped with mobility, smart cones can be deployed from a remote location and, thanks to their connectivity, monitored remotely. The smart traffic cone system thus enhances safety for road workers compared to passive traffic cones.

EE/CEG Courses Projects

This category highlights the outstanding project executions across various EE- and CEG-coded courses in ECE. 
  • The showcase should ideally include a live demonstration of the system.
  • The evaluation criteria assess the understanding of, and ability to apply, fundamental engineering principles in solving real-world problems, creativity in design and execution, etc.
  • A printed or displayed visual aid is optional. If included, it will serve only to aid in the explanation and will not be evaluated.
CDE2605R_Sai

CDE2605R - Live Video from Stratospheric Weather Balloon

This project developed a weather balloon payload that transmits live video at 1280 MHz using a commercial off-the-shelf video transmitter, alongside a student-developed flight telemetry board transmitting Automatic Packet Reporting System (APRS) packets at 144.390 MHz. This is the first such payload in the region to transmit video from the stratosphere using the amateur radio band. The payload was launched via a weather balloon at 0650H, after securing all required approvals, and ascended to an altitude of 31 km, the edge of space. Throughout the mission, the team maintained a continuous real-time video feed alongside telemetry data.
CG2028_Yi Feng Chan

CG2028 - LIFT (Live Intelligent Fall Tracker)

This smart IoT wearable addresses the "long lie" problem by providing real-time fall detection for elderly wearers. Powered by an STM32 Cortex-M4 and an ESP32 microcontroller, the system uses a two-stage state machine to track a free-fall phase and a subsequent impact phase via accelerometer and gyroscope. An assembly-language moving-average filter processes raw sensor data rapidly. Beyond motion tracking, an integrated heartbeat sensor streams vital-sign data for health analysis. Upon fall detection, a buzzer alerts bystanders, while WhatsApp and Telegram notifications are sent to the wearer's emergency contact. All sensor data is logged to an Ubidots cloud dashboard.
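The two-stage detection logic can be modelled compactly: smooth the accelerometer magnitude with a moving average, arm on a free-fall dip, and confirm on an impact spike. The Python below is illustrative only; the thresholds, window size, and trace are invented, and the real filter runs in assembly on the STM32:

```python
def moving_average(samples, window=2):
    """Causal moving-average filter over accelerometer magnitudes."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def detect_fall(acc_g, free_fall_thresh=0.4, impact_thresh=2.0):
    """Two-stage state machine: arm on free fall (magnitude well below
    1 g), then confirm on a subsequent impact spike."""
    state = "IDLE"
    for a in moving_average(acc_g):
        if state == "IDLE" and a < free_fall_thresh:
            state = "FREE_FALL"
        elif state == "FREE_FALL" and a > impact_thresh:
            return True
    return False

# ~1 g at rest, dip toward 0 g during the fall, spike on impact.
trace = [1.0, 1.0, 0.2, 0.1, 0.1, 2.8, 3.0, 1.0]
fall = detect_fall(trace)
```

Requiring both phases in order is what keeps false alarms low: a bump alone (impact without the preceding free fall) never trips the detector.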
CG2111A_Khushi Madnani

CG2111A - Alex Rover

Alex is a teleoperated search-and-rescue robot developed for a simulated lunar mission environment. The system features a dual-processor architecture, integrating a Raspberry Pi for high-level sensing and mapping with an Arduino for real-time motor and actuator control. It utilizes LiDAR for environment scanning, a color sensor for zone detection, and a constrained camera for visual feedback. A 4-DoF robotic arm enables precise payload manipulation. The robot supports dual-operator control over a networked interface, enabling efficient navigation, object retrieval, and mission execution in an unknown, obstacle-rich environment.
CG2111A_Divyansh Garg

CG2111A - Moonbase SG Rescue Mission (Alex)

Our remotely operated rescue robot navigates an obstacle course to locate, retrieve, and deliver medical supplies to injured astronauts. What sets our implementation apart is a full ROS2-based SLAM pipeline running on a Raspberry Pi inside Docker, using an RPLidar A1M8 and slam_toolbox to build a live occupancy grid map — visualised in real-time on Foxglove Studio over Tailscale VPN. With no wheel encoders, we engineered a scan-matching-only odometry solution. Two operators control the robot simultaneously over a TLS-secured TCP link: one for navigation, one for the 4-DoF robotic arm, supported by a colour sensor, camera, and hardware E-stop.
CG3207_Teng Yi Heng

CG3207 - Dual-Core Pipelined RISC-V Processor with Atomic Instruction Support

This project presents the design and implementation of a dual-core, 5-stage pipelined processor implementing the RV32IM ISA, written in Verilog and tested on the Nexys 4 FPGA. Each core can run its own program, allowing for high thread-level parallelism. To share memory and peripherals between cores while avoiding race conditions, we implemented atomic Load-Reserved/Store-Conditional instructions to enable synchronization primitives such as mutexes to be used in the programs. To demonstrate its utility, we present a rendering demo (moving ball) where each core draws different halves of the screen, sharing variables and a memory-mapped VGA peripheral.

Other enhancements include hazard detection to automatically stall/flush pipeline stages when needed, data forwarding to avoid pipeline stalls where possible, global history-based branch prediction scheme for better speculative execution, and a Karatsuba multiplier for fast and efficient 32b multiplication.

CG3207_The Trung Bui

CG3207 - In-Order Dual-Issue RISC-V32IM Processor on FPGA with CoreMark Benchmarking

This project presents the design and implementation of a processor based on the RV32IM ISA. The processor is developed using Verilog and prototyped on a Nexys 4 FPGA. It supports execution of compiled assembly programs in binary format.

To improve performance, the processor incorporates a dual-issue in-order execution pipeline with a 5-stage architecture, allowing two instructions to be issued simultaneously. Key enhancements include hazard detection, data forwarding, and branch prediction, all designed to mitigate pipeline stalls and improve throughput.
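The dual-issue decision itself reduces to a dependency check between the two candidate instructions. A toy Python model of the issue logic (the tuple encoding and register names are invented, and structural hazards are ignored):

```python
def can_dual_issue(instr_a, instr_b):
    """Decide whether two decoded instructions can issue in the same
    cycle on an in-order dual-issue pipeline: the younger instruction
    must not read a register the older one writes (RAW) nor write the
    same destination (WAW). Instructions are (dest, src_regs) tuples."""
    dest_a, _ = instr_a
    dest_b, srcs_b = instr_b
    if dest_a is not None and (dest_a in srcs_b or dest_a == dest_b):
        return False
    return True

# add x3, x1, x2 ; add x4, x5, x6  -> independent, can dual-issue
ok = can_dual_issue(("x3", ("x1", "x2")), ("x4", ("x5", "x6")))
# add x3, x1, x2 ; add x4, x3, x6  -> RAW on x3, must issue singly
raw = can_dual_issue(("x3", ("x1", "x2")), ("x4", ("x3", "x6")))
```

In hardware this check is a handful of comparators in the decode stage; when it fails, the second instruction simply waits one cycle, preserving in-order semantics.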

The performance is evaluated using the CoreMark benchmark and achieves a score of 2.22 CoreMark/MHz.

CG3207_Wenbo Zhu

CG3207 - Mach-V: In-order SuperScalar RISC-V CPU Implementation on Nexys 4 FPGA

The open-standard RISC-V ISA is driving increasing demand for high-performance, customizable processors that go beyond traditional in-order academic designs. This project investigates the design and implementation of a superscalar RISC-V processor incorporating advanced microarchitectural techniques, including a 5-stage pipeline, dynamic branch prediction, and a dual-issue superscalar architecture. The processor’s performance is evaluated using the CoreMark benchmark, achieving a score of 2.51 CoreMark/MHz and delivering a 2× throughput improvement over the baseline single-issue, single-cycle implementation.
Looking ahead, this work aims to explore further enhancements to improve instruction-level parallelism, overall throughput, and operating frequency. Potential directions include deeper pipelining and more sophisticated branch prediction mechanisms, among other advanced architectural optimizations.

CG4002 - PokemonAR

This project presents a real-time augmented reality Pokémon battle system that transforms physical spaces into interactive arenas. Players command Pokémon through physical gestures captured by a wrist-mounted MPU6050 IMU on a FireBeetle ESP32, and voice commands for hands-free Pokémon switching. Pretrained neural networks for action classification and voice recognition are deployed on the Ultra96 FPGA using fixed-point HLS-synthesized accelerators. The iOS visualizer leverages LiDAR mesh reconstruction for depth-aware occlusion, rendering world-anchored 3D battle scenes in real time. All components communicate over an MQTT broker hosted on AWS EC2, enabling synchronized turn-based PvP and PvE gameplay across a distributed edge-cloud architecture.
EE1111A_Zhu Yuehua

EE1111A - Design of Off-Grid Solar Panel Array with Suntracking for Jakarta

This project presents the design of an off-grid floating solar photovoltaic (PV) system with single-axis sun tracking for coastal regions in North Jakarta. By utilizing water surfaces and incorporating tracking mechanisms, the system overcomes land scarcity while enhancing energy yield. A prototype using Arduino, LDR sensors, and servo motors demonstrates real-time tracking capability. Simulation results show that the tracking system increases annual energy output by approximately 14% compared to fixed-tilt panels. The proposed design offers a scalable, sustainable solution for urban renewable energy generation in tropical environments.
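The tracking loop can be sketched as a simple bang-bang controller: compare the two LDR readings and step the servo toward the brighter side, with a deadband to avoid hunting around noon. The Python below is illustrative; the ADC counts, step size, deadband, and angle limits are invented, not the prototype's tuned constants:

```python
def track_step(ldr_east, ldr_west, angle, step=2.0, deadband=30):
    """One control step of a single-axis sun tracker: rotate toward the
    brighter LDR unless the two readings are within a deadband, and
    clamp the panel angle to its mechanical limits."""
    if ldr_east - ldr_west > deadband:
        angle -= step   # sun is to the east: tilt east
    elif ldr_west - ldr_east > deadband:
        angle += step   # sun is to the west: tilt west
    return max(-60.0, min(60.0, angle))

angle = 0.0
angle = track_step(820, 610, angle)   # east much brighter -> tilt east
```

Run once per control interval, this loop slowly sweeps the panel to follow the sun; the deadband keeps the servo from chattering when both sensors read nearly the same.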
EE1111A_He Jiaxie

EE1111A - OrbitGrow: An Automated Rotary Hydroponic System with Adaptive Nutrient Dosing

This project presents an automated rotary hydroponic system that maximizes space efficiency and crop yield through synchronized mechanical rotation, smart sensing, and adaptive control. A rotating drum and side LED panel enhance uniform light exposure. A centralized watering and nutrient system uses a main reservoir and pump to circulate solution through the hydroponic loop, while EC, pH, and temperature sensors monitor reservoir conditions. Nutrient dosing is applied directly to the reservoir with temperature-compensated EC control to maintain stability. An intelligent atomizer-fan system regulates climate, and adaptive lighting control optimizes growth, delivering precise monitoring and control tailored to each crop's specific needs.
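Temperature-compensated EC control typically normalises readings to 25 °C before dosing, since solution conductivity rises roughly 2% per °C. A minimal Python sketch (the coefficient and thresholds below are the usual textbook values, not necessarily this project's):

```python
def ec_at_25c(ec_measured, temp_c, alpha=0.02):
    """Linear temperature compensation for EC probes: conductivity rises
    about 2%/degC, so readings are normalised to 25 degC before any
    dosing decision is made."""
    return ec_measured / (1.0 + alpha * (temp_c - 25.0))

def dosing_needed(ec_measured, temp_c, target_ec25, tolerance=0.05):
    """Dose nutrients only when compensated EC falls below target."""
    return ec_at_25c(ec_measured, temp_c) < target_ec25 - tolerance

ec25 = ec_at_25c(1.32, 31.0)   # a warm reservoir reads artificially high
```

Without compensation, a warm reservoir would look over-fertilised and suppress dosing; normalising first keeps nutrient strength stable across temperature swings.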
EE2026_Du Derrick

FPGA-Based Guitar Multi-Effect Pedal

This project is a real-time digital guitar effects pedal built on a single Basys3 FPGA. It addresses the cost, bulk, and inflexibility of traditional analogue pedalboards by consolidating an entire signal chain into one reconfigurable device. Audio from an electric guitar is processed through a Pmod I2S2 codec at studio-quality 48 kHz / 24-bit. The pipeline includes a multi-mode preamp, cabinet speaker simulation, tap-tempo delay, reverb, and a chromatic tuner. Users interact through a performance mode for live playing and an edit mode for detailed parameter adjustment, displayed on both an OLED and a VGA monitor with real-time waveform visualisation. By exploiting the FPGA's deterministic parallel processing, the entire pipeline runs within a single sample period with zero perceptible latency.
Hsu Yu-Chen

EE2026 - Metacircuit

This project presents the design and implementation of an interactive FPGA-based analog circuit simulator on the Basys3 board using Verilog. Users construct circuits via a VGA interface with mouse input, placing components such as resistors and sources. The system generates a netlist, builds the system matrix using modified nodal analysis, and solves it using single-precision floating-point computation. Initial support focuses on DC analysis. Visualization includes waveform display and current animation. Switches, LEDs, and an OLED provide a compact status display and control for system interaction.
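Modified nodal analysis reduces a circuit to a linear system A·x = b over node voltages and source currents, which is what the simulator solves in floating point. The self-contained sketch below stamps and solves a hypothetical 5 V source driving a 1 kΩ / 1 kΩ divider; the elimination routine is a generic textbook solver, not the project's hardware datapath.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on copies of A and b."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # choose pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):   # back-substitution
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# MNA stamps for a 5 V source at node 1 feeding a 1 kOhm / 1 kOhm divider.
# Unknowns x = [v1, v2, i_src]; ground is the reference node.
G = [[1e-3, -1e-3, 1.0],    # KCL at node 1 (includes source current)
     [-1e-3, 2e-3, 0.0],    # KCL at node 2
     [1.0,   0.0,  0.0]]    # branch equation: v1 = 5 V
rhs = [0.0, 0.0, 5.0]
v1, v2, i_src = solve(G, rhs)
```

The midpoint voltage comes out at 2.5 V, as expected for equal resistors, and the source-current unknown is what distinguishes MNA from plain nodal analysis.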
Kenneth Wong

EE2026 - Object Tracker with FPGA

This project is a real-time computer vision system built on a single Basys3 FPGA. It serves as an experiential learning tool on the principles of object tracking. A camera feed and graphical user interface overlays are displayed on a VGA monitor. Using a mouse, users can interactively customise the image processing pipeline via intuitive drag-and-drop controls, dynamic sliders, and the scroll wheel. They can also gain insight into our Union-Find Disjoint Set blob detection algorithm through pop-up explanations and modify its parameters. The switches and buttons on the FPGA tune the PD controller of the pan-tilt servo, which smoothly follows the detected object.
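Union-Find blob detection is classic two-pass connected-component labelling: provisional labels are assigned on the first pass, equivalences are merged in a disjoint set, and the second pass flattens each pixel to its root. The Python sketch below assumes 4-connectivity; the project's exact connectivity and hardware optimisations may differ.

```python
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving keeps trees shallow
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def label_blobs(img):
    """Two-pass connected-component labelling (4-connectivity).

    img: 2-D list of 0/1 pixels. Returns a label image where each blob
    carries a distinct positive integer and background stays 0.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]            # index 0 is the unused background slot
    nxt = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = up
                union(parent, up, left)   # two provisional blobs touch: merge
            elif up or left:
                labels[y][x] = up or left
            else:
                labels[y][x] = nxt        # start a new provisional blob
                parent.append(nxt)
                nxt += 1
    for y in range(h):                    # second pass: resolve to roots
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(parent, labels[y][x])
    return labels
```

The tracked object can then be taken as the largest labelled blob, whose centroid feeds the PD controller.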
The Trung Bui

EE4218 - FPGA-Based Hardware Accelerator for Ternary Transformer Sentence Sentiment Analysis

This project designs and implements hardware co-processors to accelerate a ternary Transformer for sentence sentiment analysis on the KV260 FPGA. Modules include a projection engine for Q/K/V/O layers and a score engine for attention computation.

The accelerators interface with the on-board CPU via AXI and AXI-Stream. Two implementations are explored: Verilog HDL and HLS (C/C++), both integrated with software. The model supports sentiment and topic classification (Sports, Business, Sci/Tech, World), with a Python software version used as a performance baseline.

This work demonstrates FPGA acceleration of Transformer models and trade-offs between HDL and HLS.
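The appeal of ternary weights for a projection engine is that every multiply collapses to an add, a subtract, or a skip. The Python sketch below shows one ternary matrix-vector product; the single per-matrix scale factor and the row layout are assumptions for illustration.

```python
def ternary_matvec(W, x, scale=1.0):
    """Multiply a ternary weight matrix W (entries in {-1, 0, +1}) by x.

    Because each weight is -1, 0, or +1, every output element is just a
    signed sum of inputs -- no hardware multipliers are needed, which is
    what makes ternary Transformer projection layers cheap on an FPGA.
    A per-matrix scale restores the dynamic range lost to ternarisation.
    """
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # adder lane
            elif w == -1:
                acc -= xi      # subtractor lane
            # w == 0: skip entirely (sparsity comes for free)
        out.append(scale * acc)
    return out
```

In the accelerator the same structure becomes parallel adder trees, one per output row of the Q/K/V/O projections.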

Trijal Srimal

EE4218 - Hardware-Accelerated Inference of Satellite Imagery on FPGAs

This project presents a hardware-accelerated ship detection system for real-time inference of satellite imagery on the Kria KV260 platform. A sliding-window pipeline extracts seven features from each image patch and feeds them to a trained 7-2-1 MLP classifier. The system compares software, HLS, and custom HDL implementations, using AXI DMA and AXI-Lite to move data and weights between the ARM processor and the FPGA fabric. A Tkinter-based GUI enables real-time testing and timing comparison across modes. Results demonstrate how FPGA acceleration reduces inference latency while preserving the accuracy of the trained model for binary ship-versus-background classification on the board and evaluation dataset.
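A 7-2-1 MLP maps the seven patch features through two hidden units to one ship/background score. The Python sketch below shows the forward pass; the sigmoid activation and all weight values are assumptions, since the abstract fixes only the topology.

```python
import math

def mlp_7_2_1(features, w1, b1, w2, b2):
    """Forward pass of a hypothetical 7-2-1 MLP ship classifier.

    features: 7 per-patch values; w1 is 2x7 with biases b1 (len 2);
    w2 is 1x2 with bias b2 (len 1). Sigmoid activations are assumed.
    """
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sig(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    score = sig(sum(w * h for w, h in zip(w2[0], hidden)) + b2[0])
    return score   # > 0.5 is taken as "ship"
```

A network this small needs only 18 multiply-accumulates per patch, which is why sliding-window inference maps so cleanly onto FPGA fabric.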
Hannah Lee

EE4218 - Hardware Acceleration of a Lightweight YOLO Implementation on an FPGA-based SoC

This project deploys TinyissimoYOLO, a quantized lightweight YOLO model, on the Kria KV260 FPGA-based SoC. The model is trained on a 3-class COCO subset, quantized via post-training quantization (PTQ), and exported to TFLite. Three inference modes are compared: a TFLite software baseline on four ARM A53 cores, an HDL accelerator, and an HLS-generated accelerator, achieving ~2× and ~4× speedups with minimal accuracy loss (<1% mAP) post-quantisation. Built around a single configurable layer, the HDL and HLS designs utilise only 6% of LUTs as logic + 12% of DSPs and 27% of LUTs as logic + 16% of DSPs respectively, leaving significant FPGA logic resource headroom and highlighting the potential of custom hardware accelerators over software solutions for real-world edge machine learning deployment.
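Post-training quantization of the kind used here maps float weights to narrow integers plus a scale factor. The sketch below shows per-tensor symmetric PTQ, one common scheme; the project's actual TFLite flow may use per-channel or asymmetric variants.

```python
def quantize_ptq(weights, bits=8):
    """Symmetric per-tensor post-training quantization to signed integers.

    The scale maps the largest |w| onto the top of the signed range; the
    integer weights plus one float scale are what an edge accelerator stores.
    """
    qmax = (1 << (bits - 1)) - 1                      # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero tensor
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer form."""
    return [qi * scale for qi in q]
```

The round-trip error is bounded by half a quantization step, which is why a well-calibrated 8-bit model can stay within a fraction of a percent of float accuracy.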
Wenbo Zhu

EE4218 - Verilog Neural Network (VNN): Hardware Acceleration of Basic Image Classifier

In this project, we develop a custom hardware accelerator to improve the speed of an image classification system.

Our approach uses a Convolutional Neural Network (CNN) trained on the CIFAR-10 dataset to recognize objects such as animals and vehicles. With an innovative and highly efficient RTL microarchitecture optimized using various RTL transformation techniques, our RTL implementation achieves a 2709× speedup (8127 images/sec), while the HLS implementation delivers a 585× speedup (1705 images/sec) compared to CPU-only execution (3 images/sec) on the KV260 platform.

Through this work, we demonstrate how specialized hardware can accelerate machine learning workloads and offer advantages over traditional software-based solutions.

MSc Projects

This category showcases the year-long MSc projects and celebrates creativity, technical excellence, and real-world application of knowledge.
  • The showcase should ideally include a live demonstration of the system.
  • The evaluation criteria include technical complexity and depth, ingenuity, the broader implications of the project, and presentation skills.
  • A printed or displayed visual aid is optional. If included, it will serve only to aid in the explanation and will not be evaluated.
Jia Mingxuan

FPGA-Accelerated NAND Flash Controller with Parallel LDPC-Based Error Correction

This project develops an FPGA-accelerated NAND Flash controller integrated with a Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) engine to combat bit-flip errors caused by storage wear-out. A zero-latency parallel encoder and a 64-lane folded Layered Min-Sum decoder were designed for high-throughput data recovery. Architectural optimizations, including Q5.2 fixed-point quantization and combinational early-termination, were applied to minimize hardware overhead. The RTL implementation was rigorously verified against a MATLAB golden model, demonstrating highly robust error correction efficiency.
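Q5.2 fixed-point stores values in units of 2⁻² = 0.25, which keeps the decoder's message registers narrow. The sketch below shows quantisation with saturation, assuming Q5.2 here means sign + 5 integer + 2 fraction bits (8 bits total); the project's exact bit allocation is not stated in the abstract.

```python
FRAC_BITS = 2                      # Q5.2: 2 fractional bits
TOTAL_BITS = 8                     # assumption: sign + 5 integer + 2 fraction
QMIN = -(1 << (TOTAL_BITS - 1))    # -128
QMAX = (1 << (TOTAL_BITS - 1)) - 1  # +127, i.e. +31.75 in real units

def to_q5_2(x):
    """Quantise a real LLR-style value to Q5.2 with saturation.

    Values are held as integers in units of 0.25, the way a Min-Sum LDPC
    decoder keeps its messages in narrow fixed-point registers.
    """
    q = round(x * (1 << FRAC_BITS))
    return max(QMIN, min(QMAX, q))   # saturate instead of wrapping

def from_q5_2(q):
    """Convert a Q5.2 integer back to its real value."""
    return q / (1 << FRAC_BITS)
```

Saturating rather than wrapping matters in a decoder: an overflowed message that flips sign would actively push the iteration toward the wrong codeword.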

External Competition Projects

Discover ECE students’ outstanding achievements in competitions beyond NUS!

Knowledge Sharing Projects

ELExAI Chatbot

ELExAI is an intelligent Telegram-based admissions chatbot designed to support NUS ECE outreach by providing accurate, customised responses to frequently asked questions. Built on AWS, ELExAI operates seamlessly in both private and group chats, including admissions Telegram groups. The system uses a Retrieval‑Augmented Generation (RAG) approach to ground AI-generated answers in official sources such as the ECE FAQ website. By delivering context-aware and personalised replies, ELExAI reduces repetitive staff workload while improving information accessibility and engagement for prospective students.
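The retrieval half of a RAG pipeline ranks knowledge-base entries by similarity to the user's question before the generator ever sees them. The bag-of-words sketch below illustrates only that retrieval step, with invented FAQ entries; a production stack like ELExAI's would use embedding vectors over the real ECE FAQ content and pass the top snippets to the LLM as grounding context.

```python
import math
from collections import Counter

FAQ = {
    "What are the admission requirements for ECE?":
        "Applicants need strong mathematics and physics results ...",
    "Does ECE offer scholarships?":
        "Several scholarships are available to incoming students ...",
}   # illustrative entries, not the real FAQ content

def _vec(text):
    """Bag-of-words term counts, lowercased and stripped of punctuation."""
    return Counter(w.strip("?.,!") for w in text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, k=1):
    """Return the k FAQ answers whose questions best match the query."""
    scored = sorted(FAQ.items(),
                    key=lambda kv: _cosine(_vec(question), _vec(kv[0])),
                    reverse=True)
    return [answer for _q, answer in scored[:k]]
```

Grounding the generator in the retrieved snippet is what keeps the chatbot's answers anchored to official sources rather than the model's own guesses.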

Student-Run Teams

Explore existing student-led project teams.

NUS Bumblebee

Bumblebee is a student-run, multi-disciplinary robotics team that designs and builds Autonomous Underwater Vehicles (AUVs) and Autonomous Surface Vessels (ASVs) to navigate the ocean independently, from the shoreline and the water surface down to deep waters.

NUS Calibur Robotics

NUS Calibur Robotics is a student-led competitive team at the National University of Singapore, bringing together talent from various disciplines to design, build, and program autonomous combat robots. Established in 2019, the team competes in the prestigious DJI RoboMaster University Championships, showcasing advanced robotics and control systems. Calibur Robotics aims to field a full fleet of seven specialized robots for the global championships in China, continually pushing the boundaries of student-led robotics innovation.


NUS Formula SAE Race Car

The National University of Singapore Formula SAE (NUS FSAE) team comprises passionate and talented undergraduates from the NUS Innovation and Design Program. Each year, the team designs, builds, tests, and races a formula-style race car to compete in FSAE Michigan, one of the toughest inter-varsity competitions.

With a rich history from combustion engines to cutting-edge electric vehicles, their journey has been a testament to relentless dedication, teamwork, and a pursuit of excellence.

NUS Mars Rover

The NUS Mars Rover is a multidisciplinary group of undergraduate students from the NUS Innovation & Design Program. Our mission is to design and build an autonomous Mars rover to compete annually in the University Rover Challenge, held at the Mars Desert Research Station in the USA.

We are passionate about pushing the boundaries of robotics and engineering, leveraging innovative solutions to tackle the challenges of planetary exploration. Through collaboration and hands-on learning, we aim to develop a robust and versatile rover capable of performing complex tasks in a Mars-like environment.

Judges


A/Prof Tham Chen Khong

Associate Professor

Dr Lee Jun Wei

Principal Research Engineer at DSO

Dr Abhishek Rai

Assistant Professor

Mr Wong Jit Chin

Principal Research Engineer at DSO

Judges for Espressif Competition


Mr Junius Pun

Software Engineer at Espressif

Mr Yogesh Mantri

Software Architect at Espressif

Dr Tang Kok Zuea

Senior Lecturer

Dr Xu Yuecong

Lecturer

Countdown to ExCEllence 2026
