<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Aptos;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        font-size:12.0pt;
        font-family:"Aptos",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#467886;
        text-decoration:underline;}
span.EmailStyle19
        {mso-style-type:personal-compose;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;
        mso-ligatures:none;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
        {page:WordSection1;}
--></style>
</head>
<body lang="EN-US" link="#467886" vlink="#96607D" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal">CV attached. Zoom link below. <o:p></o:p></p>
<p class="MsoNormal">──────────<o:p></o:p></p>
<p style="margin-bottom:12.0pt"><b>Title:</b><br>
Sequential, Memory-Efficient Perception for Autonomous and Intelligent Systems<br>
<br>
<b>Abstract:</b><br>
Real-time perception is a critical bottleneck for autonomous and intelligent systems operating under strict constraints on latency, memory, and energy. Although modern deep learning models achieve high accuracy, their computational cost often limits deployment in safety-critical and resource-constrained environments such as autonomous vehicles, edge systems, and intelligent infrastructure. Temporal-to-spatial representations that compactly encode continuous visual streams offer a way forward: they preserve semantic information while significantly reducing input complexity.<br>
<br>
This talk presents a sequential, memory-efficient perception framework that enables low-latency semantic inference on streaming data. It introduces Sequential Semantic Segmentation (SE3), an inference-time model that reuses structured computational overlap in encoder–decoder networks through indexed temporal memory, decoupling heavy training from lightweight real-time deployment. The approach is demonstrated in challenging driving conditions, on motion-based pedestrian detection, and on large-scale panorama and aerial imagery. Beyond perception, the talk shows how these semantic outputs directly support downstream decision-making tasks, including multi-level speed control and path planning.<br>
<br>
The talk concludes by outlining future research directions at Texas A&amp;M University–Corpus Christi, including multimodal perception, vision–language integration, and geospatial and coastal monitoring applications that support student-driven, interdisciplinary research in AI and intelligent systems.<o:p></o:p></p>
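<p class="MsoNormal">For anyone curious about the feature-reuse idea before the talk, the sketch below illustrates one way an indexed temporal memory can avoid redundant encoder work on streaming frames: encoder outputs are cached per spatial index and recomputed only where the input changes. This is a minimal, hypothetical Python sketch; the tile granularity, change signature, and stand-in encoder and decoder are assumptions made for illustration, not the SE3 implementation presented in the talk.</p>
<pre>
# Illustrative sketch: an indexed temporal memory that reuses encoder features
# across overlapping frames in a streaming pipeline. The encoder and decoder
# are cheap stand-ins, not the SE3 model described in the abstract.
import numpy as np

TILE = 64  # assumed tile size (pixels) used as the reuse granularity

def encode_tile(tile):
    """Stand-in for a heavy encoder: a cheap per-tile descriptor."""
    return tile.mean(axis=(0, 1))

def decode(features):
    """Stand-in for a lightweight decoder over cached tile features."""
    return {idx: float(f.sum()) for idx, f in features.items()}

class IndexedTemporalMemory:
    """Caches per-tile encoder outputs keyed by tile index; unchanged tiles
    reuse their cached features instead of re-running the encoder."""
    def __init__(self):
        self.features = {}    # tile index -> cached encoder output
        self.signatures = {}  # tile index -> cheap change-detection signature

    def update(self, frame):
        h, w = frame.shape[:2]
        for r in range(0, h, TILE):
            for c in range(0, w, TILE):
                idx = (r // TILE, c // TILE)
                tile = frame[r:r + TILE, c:c + TILE]
                sig = float(tile.astype(np.float32).sum())  # crude signature
                if self.signatures.get(idx) != sig:          # tile changed
                    self.features[idx] = encode_tile(tile)   # re-encode it
                    self.signatures[idx] = sig
                # otherwise the cached features are reused as-is
        return self.features

# Toy stream: identical frames after the first, so later frames hit the cache.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
memory = IndexedTemporalMemory()
for frame in (base, base, base):
    out = decode(memory.update(frame))  # only changed tiles were re-encoded
</pre>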
<p>──────────<br>
Join Zoom Meeting<br>
<a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Ftamucc.zoom.us%2Fj%2F98337340011%3Fpwd%3D43y5k3mPYDjUAoaLCJbGlKS6dquRLt.1&data=05%7C02%7Ccosc-grad-students-list%40listserv.tamucc.edu%7C0cff2a847d6e4b1cbe0808de5aafb5cc%7C34cbfaf167a64781a9ca514eb2550b66%7C0%7C0%7C639047908031874744%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&sdata=9PXX4PMjmRpBE%2BO3ToRr3xR%2FGQTPcWtwFEwRcAqrRGY%3D&reserved=0" originalsrc="https://tamucc.zoom.us/j/98337340011?pwd=43y5k3mPYDjUAoaLCJbGlKS6dquRLt.1">https://tamucc.zoom.us/j/98337340011?pwd=43y5k3mPYDjUAoaLCJbGlKS6dquRLt.1</a><br>
<br>
<br>
Meeting ID: 983 3734 0011<br>
Passcode: 731614<br>
<br>
---<br>
<br>
One tap mobile<br>
+13462487799,,98337340011#,,,,*731614# US (Houston)<br>
+17193594580,,98337340011#,,,,*731614# US<br>
<br>
---<br>
<br>
Join by SIP<br>
• <a href="mailto:98337340011@zoomcrc.com">98337340011@zoomcrc.com</a><br>
Passcode: 731614<br>
<br>
Join instructions<br>
<a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Ftamucc.zoom.us%2Fmeetings%2F98337340011%2Finvitations%3Fsignature%3DtTpGMl2hZI98wXJEDn56ZlzruP5GtfT7IUJhHMw78gE&data=05%7C02%7Ccosc-grad-students-list%40listserv.tamucc.edu%7C0cff2a847d6e4b1cbe0808de5aafb5cc%7C34cbfaf167a64781a9ca514eb2550b66%7C0%7C0%7C639047908031900609%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&sdata=8zh6HdD9AovnLYGhyz7WpdojonCdMuxkv%2FlZNLoU0tQ%3D&reserved=0" originalsrc="https://tamucc.zoom.us/meetings/98337340011/invitations?signature=tTpGMl2hZI98wXJEDn56ZlzruP5GtfT7IUJhHMw78gE">https://tamucc.zoom.us/meetings/98337340011/invitations?signature=tTpGMl2hZI98wXJEDn56ZlzruP5GtfT7IUJhHMw78gE</a><br>
<br>
<br>
──────────<o:p></o:p></p>
</div>
</body>
</html>