BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northeastern University College of Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Northeastern University College of Engineering
X-ORIGINAL-URL:https://coe.northeastern.edu
X-WR-CALDESC:Events for Northeastern University College of Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240718T110000
DTEND;TZID=America/New_York:20240718T120000
DTSTAMP:20260515T100934Z
CREATED:20240820T182625Z
LAST-MODIFIED:20240820T182625Z
UID:45089-1721300400-1721304000@coe.northeastern.edu
SUMMARY:Jagatpreet Nir PhD Dissertation Defense
DESCRIPTION:Announcing:\nPhD Dissertation Defense \nName:\nJagatpreet Nir \nTitle:\nLow Contrast Visual Sensing and Inertial-Aided Navigation in GPS-Denied Environments \nDate:\n7/18/2024 \nTime:\n11:00:00 AM \nCommittee Members:\nProf. Hanumant Singh (Advisor)\nProf. Martin Ludvigsen\nProf. Michael Everett\nProf. Pau Closas \nAbstract:\nField robots perform complex tasks\, necessitating high autonomy and reliable navigation capabilities. Integrating complementary sensors at the hardware level is crucial to maintaining navigation estimates even during sensor failure. This work is motivated by the need for robust and accurate navigation systems for robotic field applications\, particularly in diverse and challenging environments. The development of such systems involves balancing design requirements with constraints such as size\, weight\, power\, computational capacity\, and cost. Underwater navigation exemplifies navigation in Visually Degraded Environments (VDEs)\, where Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) navigate in challenging conditions. This thesis focuses on exploring methods to enhance the robustness of visual-inertial odometry systems in VDEs. \nCurrent state-of-the-art Visual Inertial Odometry (VIO) techniques provide high-accuracy navigation estimates in texture-rich scenes. However\, robots operating in harsh and unpredictable environments\, such as underwater\, often encounter VDEs due to low texture\, uneven illumination\, or backscatter. During prolonged visual degradation\, Inertial Measurement Units (IMUs) become the primary sensors as visual measurements are unreliable. In this research\, we address the problem of designing an underwater VIO navigation system and algorithmic pipelines to ensure reliable navigation estimates during several seconds of visual degradation\, emphasizing the importance of selecting better Micro Electro Mechanical Systems (MEMS) IMUs for dependable performance within a cost budget. \nA robust VIO system designed for underwater settings is introduced. Our contributions include a general system design approach for underwater VIO and an algorithmic formulation for fusing deep learning-based Visual Odometry (VO) with IMU data. The underwater datasets depict visual degradation in real-world settings with a time-synchronized 8-bit grayscale camera and IMU. Our hybrid VIO pipeline integrates IMU measurements with VO estimates from a deep-learning VO engine\, combining deep learning with classical sensor fusion techniques to achieve accurate metric and gravity-aligned trajectory estimates even in visually degraded conditions. The proposed system outperforms traditional VIO methods\, demonstrating robustness with consistent trajectory estimates and minimal drift during complete visual outages. The extensible design allows for the incorporation of new sensors\, addressing various underwater navigation challenges. \nTo conclude\, this thesis focuses on environments where exteroceptive sensing\, like cameras\, is compromised for extended periods\, relying on proprioceptive sensors such as IMUs to navigate. The aim is to quantify navigation accuracy in harsh environments and improve system design at both hardware and software levels. Specifically\, underwater visual-inertial navigation for small vehicles is used to demonstrate the principles and algorithms developed. The outlined methodology showcases sensor selection\, sensor-fusion algorithms\, and individual improvements that build enhanced visual-inertial systems for VDEs\, as well as the applicability of the proposed approach from controlled settings to field tests.
URL:https://coe.northeastern.edu/event/jagatpreet-nir-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240718T113000
DTEND;TZID=America/New_York:20240718T123000
DTSTAMP:20260515T100934Z
CREATED:20240820T182954Z
LAST-MODIFIED:20240820T182954Z
UID:45085-1721302200-1721305800@coe.northeastern.edu
SUMMARY:Mehrshad Zandigohar PhD Proposal Review
DESCRIPTION:Announcing:\nPhD Proposal Review \nName:\nMehrshad Zandigohar \nTitle:\nDeployable and Multimodal Human Grasp Intent Inference in Prosthetic Hand Control \nDate:\n7/18/2024 \nTime:\n11:30:00 AM \nLocation: https://teams.microsoft.com/l/meetup-join/19%3ameeting_N2QyNzc1MWMtOWJmMi00NGNmLThlNzctN2JlNjU2Y2I1MmI1%40thread.v2/0?context=%7b%22Tid%22%3a%22a8eec281-aaa3-4dae-ac9b-9a398b9215e7%22%2c%22Oid%22%3a%22de13c261-ac42-49d7-8950-6dec3adaca4e%22%7d\nISEC 532 – \nCommittee Members:\nProf. Gunar Schirner (Advisor)\nProf. Deniz Erdogmus\nProf. Mallesham Dasari\nProf. Mariusz P. Furmanek \nAbstract:\nFor transradial amputees\, robotic prosthetic hands promise to restore the capability to perform daily living activities. Among robotic control methods for prosthetic hand actuators\, coarse-grained grasp types are a common means of effortless yet effective control. However\, to advance next-generation prosthetic hand control design\, it is crucial to address current shortcomings in robustness to out-of-lab artifacts\, generalizability to new environments\, and deployment of such compute-intensive grasp estimators. \nFirst and foremost\, current control methods based on a physiological modality such as electromyography (EMG) are prone to yielding poor inference outcomes due to motion artifacts\, muscle fatigue\, and other factors. Similarly\, methods based on the visual modality are susceptible to their own artifacts\, most often due to object occlusion\, lighting changes\, etc. To address such drawbacks of single-modality approaches\, we present a multimodal evidence fusion framework for grasp intent inference using eye-view video\, eye-gaze\, and EMG from the forearm processed by neural network models. Given the lack of a synchronized multimodal dataset for evaluating multimodal grasp estimation\, we propose our own customized HANDSv2 dataset with the most complete EMG profile and visual data synchronized in time. Our experimental results indicate that fusing both modalities\, on average\, improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% and 14.8%\, relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) individually\, resulting in an overall fusion accuracy of 95.3%. \nAlthough visual grasp classification has shown promising results\, generalizability to unseen object classes remains a significant challenge within the research community. This limitation arises from the fixed number of grasp types available in existing models\, contrasted with the virtually infinite variety of objects encountered in the real world. The poor performance of grasp detection models on unseen objects negatively affects users’ independence and quality of life. To address this\, we propose the Grasp Vision Language Model (Grasp-VLM). Grasp-VLM takes advantage of the zero-shot capability of large vision language models and teaches them to perform human-like reasoning to infer a suitable grasp type based on the object’s physical characteristics\, even for previously unseen objects\, resulting in better generalizability in real-life scenarios. Our initial results show that Grasp-VLM achieves 49% accuracy on unseen object types\, compared to the 15.3% accuracy of the current state of the art. \nLastly\, given the computational intensity of such models\, which often contain billions of parameters\, deploying them to edge devices poses a serious challenge. To mitigate this\, we investigate the Hybrid Grasp Network (HGN)\, a deployment infrastructure that combines an edge-specialized model for low-latency operation with a cloud-based universal model that ensures high generalization\, effectively balancing performance and resource constraints. \nThe holistic approach presented in this dissertation tackles four essential areas of robotic prosthetic hand control design. HANDSv2 provides a customized dataset\, filling the gap for a synchronized multimodal dataset. Our multimodal fusion approach effectively outperforms single-modality approaches\, providing accurate and robust grasp type estimates during the entire grasping timeline. In addition\, Grasp-VLM addresses the lack of generalizability to new object types\, providing more realistic grasp estimation. Lastly\, our HGN design aims to provide a real-time solution that addresses both speed and accuracy objectives.
URL:https://coe.northeastern.edu/event/mehrshad-zandigohar-phd-proposal-review/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240718T160000
DTEND;TZID=America/New_York:20240718T170000
DTSTAMP:20260515T100934Z
CREATED:20240517T125021Z
LAST-MODIFIED:20240603T191427Z
UID:44141-1721318400-1721322000@coe.northeastern.edu
SUMMARY:Mock Interview: CommLab Drop-In Workshops
DESCRIPTION:Join the CommLab any Thursday from 4-5pm ET\, where we’ll delve into the intricacies of interviews and unveil effective preparation strategies for any interview scenario. Engage in an interactive setting as we dissect the overall interview experience\, discuss common interview scenarios\, and share insights on what to do during critical moments. Join this hybrid workshop series through Zoom.
URL:https://coe.northeastern.edu/event/mock-interview-commlab-drop-in-workshops/2024-07-18/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240718T170000
DTEND;TZID=America/New_York:20240718T180000
DTSTAMP:20260515T100934Z
CREATED:20240517T125712Z
LAST-MODIFIED:20240701T135456Z
UID:43937-1721322000-1721325600@coe.northeastern.edu
SUMMARY:Poster Design and Presentation: CommLab Drop-In Workshops
DESCRIPTION:The CommLab will host drop-in workshops for poster design and presentation\, focusing on crafting the best visual communication of your research and telling your research story! We will discuss techniques and implement communication strategies to successfully showcase your work. No matter where you are in the process\, whether you are just in the idea phase or polishing your final poster\, we are happy to help you. Join us any Thursday from 5-6pm on Zoom.
URL:https://coe.northeastern.edu/event/poster-design-and-presentation-commlab-drop-in-workshops/2024-07-18/
END:VEVENT
END:VCALENDAR