Students: Sen Wang, Shiwei He, Tiange Wu
Faculty Mentor: Ashutosh Dutta
Abstract: This project explores how to detect and mitigate DoS attacks in 5G networks using an automated framework. We focus on two types of attacks: an NGAP attach storm on the control plane and GTP-U flooding on the user plane. We simulate these attacks with the Abot platform and monitor traffic with the NIKSUN NetDetector. Once abnormal traffic is detected, alerts are sent and mitigation actions are triggered automatically. Our results show that, without protections, both attacks can cause serious network issues: they stress core components of the network, such as the AMF and UPF, causing network functions to stop working properly. With real-time alerts and preset mitigation strategies, however, the system responds promptly and the network recovers quickly. This experiment not only offers insight into protecting 5G networks through automation and coordination between different tools, but also provides hands-on experience with the coordination and interaction among components of the 5G core network.
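As an illustration of the detection logic described above, the following minimal sketch flags an NGAP attach storm with a sliding-window rate threshold. The class name, threshold values, and event feed are assumptions for illustration; integration with Abot and the NIKSUN NetDetector is not shown.

```python
import time
from collections import deque

# Hypothetical sliding-window detector for an NGAP attach storm: count
# attach attempts (InitialUEMessage arrivals) and alert when the rate
# within the window exceeds a preset threshold.
WINDOW_SECONDS = 10
MAX_ATTACHES_PER_WINDOW = 200  # tune to the AMF's baseline load

class AttachStormDetector:
    def __init__(self):
        self.events = deque()  # timestamps of recent attach attempts

    def observe_attach(self, ts: float) -> bool:
        """Record one NGAP attach attempt; return True if a storm is suspected."""
        self.events.append(ts)
        # Drop events that fell out of the sliding window.
        while self.events and ts - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        return len(self.events) > MAX_ATTACHES_PER_WINDOW

detector = AttachStormDetector()
for _ in range(250):  # simulated burst of attach requests
    if detector.observe_attach(time.time()):
        print("ALERT: NGAP attach storm suspected; trigger mitigation")
        break
```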
Students: Hortencia Mendoza, Anjie Shen, Xuecheng Wang
Faculty Mentor: Lanier Watkins
Abstract: This research paper explores the challenges and solutions associated with Unmanned Aerial Vehicle (UAV) detection and mitigation systems. It addresses the growing security concerns posed by unauthorized drone activity in both military and civilian contexts. The study examines advanced detection methods, such as computer vision and radio frequency signal analysis, alongside mitigation techniques like GPS spoofing and ultrasonic interference. Through extensive data collection and evaluation, the research highlights the effectiveness of integrating multiple sensors for improved detection accuracy. In conclusion, it outlines the importance of developing comprehensive regulatory frameworks to complement these technological advances, and it proposes future work to expand the assessment across more diverse drone models and scenarios.
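As a toy illustration of the multi-sensor integration highlighted above, the sketch below fuses normalized confidences from an RF-analysis detector and a computer-vision detector; the weights, threshold, and function names are assumptions, not the system described in the paper.

```python
# Late fusion of per-sensor drone-detection confidences into one decision.
def fuse_detections(rf_score: float, vision_score: float,
                    w_rf: float = 0.6, w_vision: float = 0.4,
                    threshold: float = 0.5) -> bool:
    """Weighted average of normalized confidences from RF signal analysis
    and computer vision; declare a detection above the threshold."""
    fused = w_rf * rf_score + w_vision * vision_score
    return fused >= threshold

# Example: the RF detector is fairly confident, vision is uncertain.
print(fuse_detections(rf_score=0.8, vision_score=0.4))  # True (0.64 >= 0.5)
```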
Students: Gourisha Sethi, Jiawei Hu, Naina Aggarwal
Faculty Mentor: Matthew Green
Abstract: With AI and ML tools becoming increasingly integrated into our daily lives—from chatbots and recommendation engines to healthcare diagnostics—there’s a growing need to take a step back and ask: how secure are these systems, really? While AI offers huge potential, especially through large language models (LLMs) like GPT, it also introduces a wide range of security challenges that are still not fully understood. To ground our research in a meaningful real-world application, we tested our tool on a medical system we built in parallel as part of our Medical Device Cybersecurity course. As part of that project, we designed a Class II medical device aimed at studying the physiological patterns of individuals with PTSD and anxiety. Using GSR (Galvanic Skin Response) and SpO2 sensors connected to a Raspberry Pi, we collected raw physiological data, which was stored in a PostgreSQL database. We then integrated OpenAI’s GPT model via API to interpret the data and generate preliminary diagnoses. These diagnoses were served through a secure web application with role-based access control (RBAC), enabling doctors, patients, and admins to view data according to their access level. Our vulnerability scanner was used to tag the GPT API interactions specific to our medical use case and run targeted probes that tested for data leakage, hallucination risks, and encoding issues. This helped us assess how generative AI behaves in a sensitive, high-stakes healthcare environment and what safeguards are necessary to ensure trust and safety. Through this interdisciplinary work—combining AI security research with real-world deployment in a medical device—we aim to contribute to the responsible use of AI, not just in healthcare but across industries. As these technologies continue to evolve, ensuring their secure and ethical implementation will be essential to protecting both users and institutions. As a tool like ours matures, it will also help clarify how policies governing these tools should be crafted in light of their cybersecurity implications.
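A minimal sketch of the kind of targeted probe described above, assuming the OpenAI Python SDK and a seeded canary identifier; the model name, prompt text, and pass/fail check are illustrative stand-ins, not the actual scanner.

```python
# Send a data-leakage probe to the OpenAI Chat Completions API and flag
# any response that echoes a canary patient identifier planted in context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEEDED_ID = "PATIENT-4471"  # canary value planted in the prompt context
probe = (
    f"Context: GSR/SpO2 readings for {SEEDED_ID}. "
    "Summarize general anxiety trends without revealing any identifiers."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": probe}],
)
answer = resp.choices[0].message.content
if SEEDED_ID in answer:
    print("FAIL: probe leaked a patient identifier")
else:
    print("PASS: identifier withheld")
```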
Students: Pu Ji, Sikai Teng, Ahmad Faridi
Faculty Mentor: Michael Rushanan
Research Assistant: Logan Kostick (CS PhD Student)
Abstract: This research focuses on identifying and addressing vulnerabilities in widely used projects and applications built with the C and C++ programming languages by integrating automated SAST tools into pipelines. These languages are widely adopted for their performance and flexibility but are inherently susceptible to security issues such as memory-management errors (including buffer and integer overflows), signed/unsigned mismatches, format-string bugs, insecure library functions, improper input validation, and uninitialized memory access. By embedding selected open-source SAST tools into pipeline workflows, security scans are triggered automatically on each code change, generating both pre-patched (upstream) and post-patched packages. These packages allow direct comparison of vulnerability reports before and after fixes are applied and can be validated using the Juliet Test Suite (JTS), developed by the National Institute of Standards and Technology (NIST) to evaluate the effectiveness of static and dynamic analysis tools. This methodology measures the accuracy of security tools integrated into pipelines, enabling a clearer understanding of their reliability. The research also includes a performance analysis of the selected SAST tools, focused on two main measures: Actual CWEs (ACWE) and Total CWEs (TCWE). We conducted systematic Static Application Security Testing (SAST) on over 100 Debian 11 (bullseye) Linux packages, employing five different tools: Clang, Cppcheck, Flawfinder, Semgrep, and ClangSA. The research also analyzes the quantity of vulnerabilities detected by each tool, the overlap among tool detections, and the distribution of identified CWE vulnerability types.
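A minimal sketch of the pre-/post-patch comparison step, assuming Cppcheck (one of the five tools evaluated) and placeholder package paths; each finding is reduced to a (file, CWE) pair so the upstream and patched reports can be diffed directly.

```python
import re
import subprocess

def cppcheck_cwes(source_dir: str) -> set[tuple[str, str]]:
    """Run cppcheck and collect (file, CWE id) pairs from its findings.
    Cppcheck writes findings to stderr; the template exposes the CWE id."""
    proc = subprocess.run(
        ["cppcheck", "--enable=all", "--template={file}:CWE-{cwe}", source_dir],
        capture_output=True, text=True,
    )
    findings = set()
    for line in proc.stderr.splitlines():
        m = re.match(r"(.+):CWE-(\d+)$", line.strip())
        if m:
            findings.add((m.group(1), m.group(2)))
    return findings

pre = cppcheck_cwes("pkg-upstream/")   # pre-patched (upstream) package
post = cppcheck_cwes("pkg-patched/")   # post-patched package
print("fixed:", pre - post)
print("introduced:", post - pre)
```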
Students: Jonathon Negron, Marcos von Sydow
Faculty Mentor: Timothy Leschke
Abstract: In computer forensics, time is one of the greatest enemies, both for the case itself and for volatile evidence in memory. While traditional forensics focuses on static recovery of data at rest, this study targets data that can be crucial to connecting the dots in an investigation: live memory and transient system logs, fleeting evidence that is easy to alter or lose before an investigator arrives to acquire it. This paper presents a comparative tool analysis of open-source forensic tools for memory and log analysis across Windows and Linux platforms, evaluating Volatility 3, MemProcFS, Plaso, and the ELK stack. Through simulated insider-threat scenarios, the study benchmarks artifact extraction, anomaly detection, usability, and legal admissibility in a real-world-inspired environment. The results reveal critical trade-offs between tool depth and usability, underscoring the importance of a diverse forensic toolkit. This study provides practitioners and educators with a structured evaluation framework and a foundational toolkit for addressing ephemeral data, emphasizing a hybrid, cross-platform approach as essential for comprehensive digital investigations.
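As one example of how artifact extraction could be benchmarked, the sketch below times a Volatility 3 plugin run from Python; the image path and plugin choice are placeholders, not the study's exact harness.

```python
import subprocess
import time

def time_vol3_plugin(image: str, plugin: str) -> float:
    """Run a Volatility 3 plugin via its `vol` CLI and return wall-clock time."""
    start = time.monotonic()
    subprocess.run(["vol", "-f", image, plugin],
                   capture_output=True, text=True, check=True)
    return time.monotonic() - start

# Extract the Windows process list from a memory image and time it.
elapsed = time_vol3_plugin("memdump.raw", "windows.pslist")
print(f"windows.pslist completed in {elapsed:.1f}s")
```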
Students: Xinyue Huang, Qize Zhang, Elizabeth Grzyb
Faculty Mentor: Xiangyang Li, Lanier Watkins, Reuben Johnston
External Mentor: Zachary England (JHU/APL)
Research Assistant: Jiacheng Zhong (MSSI Student)
Abstract: Satellite communication (SATCOM) is an essential part of global critical infrastructure, relied on for broadcasting, navigation, internet access, telephone communication, and military applications. However, many SATCOM systems remain vulnerable to a range of hardware and software attacks. This capstone project investigates the security of satellite television systems from the perspectives of secure system design and offensive vulnerability analysis. As part of the MITRE Embedded Capture the Flag (eCTF) competition, we developed a simulated direct-to-home (DTH) satellite television system protected by AES-GCM, Ed25519, and other mechanisms, in parallel with more than 100 competing teams globally. We conducted 46 successful attacks against other teams, revealing common vulnerabilities such as hardcoded keys, weak encryption schemes, lack of authentication, and improper subscription validation. Our design was susceptible to only one attack, from a team that used voltage glitching. Our results demonstrate the importance of holistic threat modeling, robust and fault-tolerant cryptographic design, and hardware security mechanisms. Finally, as a result of this study, we propose a future tool concept that uses large language models (LLMs) via a Model Context Protocol (MCP) server for autonomous vulnerability detection, analysis, and exploit development.
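For illustration, a minimal sketch of the two named primitives using the Python `cryptography` package: AES-GCM to protect a broadcast frame and Ed25519 to sign a subscription record. The key handling and message layout are assumptions, not the team's actual eCTF design.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Encrypt a broadcast frame with AES-GCM (confidentiality + integrity) ---
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # never reuse a nonce under the same key
frame = aesgcm.encrypt(nonce, b"channel-7 video payload", b"header")

# --- Sign a subscription record with Ed25519 (authenticity) ---
signing_key = Ed25519PrivateKey.generate()
subscription = b"device=42;channel=7;expires=1735689600"
signature = signing_key.sign(subscription)

# Receiver side: verify the subscription, then decrypt the frame.
signing_key.public_key().verify(signature, subscription)  # raises on forgery
plaintext = aesgcm.decrypt(nonce, frame, b"header")
assert plaintext == b"channel-7 video payload"
```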
Students: Likang Lu, Jiaxin Wu
Faculty Mentor: Yinzhi Cao
Abstract: Federated optimization is a privacy-focused distributed method for training a centralized model on decentralized data. The data sent by agents in federated optimization may still expose their privacy, which is undesirable or even unacceptable when the data involved are sensitive. However, directly incorporating differential privacy into existing federated optimization approaches significantly compromises optimization accuracy. In this project, we study the private federated optimization problem (PFOP) with the additional requirement that the cost functions of individual agents remain differentially private in the presence of Byzantine agents. We propose to tailor the gradients of agents for differentially private federated optimization by judiciously designing the parameters of their interactions. The proposed algorithm converges to the exact global optimal solution even under persistent DP noise and the influence of Byzantine agents. Numerical comparisons with existing counterparts confirm the effectiveness of the proposed approach.
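For context, the following baseline sketch shows standard differentially private gradient sharing on a toy quadratic objective: clip to bound sensitivity, add Gaussian noise, average at the aggregator. This baseline suffers exactly the accuracy loss discussed above; the parameter co-design that achieves exact convergence is not captured here, and the clipping threshold, noise scale, and stepsize schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_gradient(grad: np.ndarray, clip: float, sigma: float) -> np.ndarray:
    """Clip the gradient to bound sensitivity, then add Gaussian DP noise."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return grad + rng.normal(0.0, sigma * clip, size=grad.shape)

# Each agent i holds f_i(x) = ||x - t_i||^2; the global optimum is mean(t_i).
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
x = np.zeros(2)
for step in range(200):
    noisy = [private_gradient(2 * (x - t), clip=5.0, sigma=0.1) for t in targets]
    x -= (0.5 / (step + 10)) * np.mean(noisy, axis=0)  # aggregator averages
print("estimate:", x, "true optimum:", np.mean(targets, axis=0))
```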
Students: Junxian Lu, Hanpei Hu
Faculty Mentor: Anton Dahbura
External Mentor: Abdo Ahmed (JHU/APL)
Abstract: Vehicle-to-Everything (V2X) communications are pivotal for the advancement of Connected Autonomous Vehicles (CAVs), enabling the real-time exchange of safety-critical information. This information must be protected to guarantee its authenticity and the authority of its sender. The Security Credential Management System (SCMS) was developed to ensure the authentication and authorization of V2X messages while preserving user privacy through pseudonyms and digital signatures. However, SCMS is vulnerable to Sybil attacks, in which adversaries exploit multiple valid pseudonym certificates to impersonate multiple vehicles within the network. This paper presents a comprehensive approach to enhancing CAV cybersecurity by simulating Sybil attacks and developing a machine learning-based detection system within the VESNOS simulation platform. We extended VESNOS to incorporate dynamic SCMS functionality and integrated the HighD dataset into the SUMO traffic simulator to emulate realistic vehicular mobility patterns. Various Sybil attack scenarios, including multiple-identity usage, false Basic Safety Messages (BSMs), and platoon takeover attacks, were implemented to evaluate the efficacy of our detection mechanisms. A machine learning detection model, using algorithms such as Random Forest, Support Vector Machine (SVM), and Long Short-Term Memory (LSTM) neural networks, was trained and validated on a balanced dataset derived from our simulations. The Random Forest classifier demonstrated superior performance, effectively identifying Sybil attacks while minimizing false positives. These results underscore the potential of machine learning techniques in safeguarding V2X communications against sophisticated cyber threats. Future work will focus on implementing real-time revocation mechanisms and exploring advanced detection strategies, such as ensemble learning and contextual feature engineering, to further enhance the resilience and security of CAV networks.
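As an illustration of the best-performing detector, the sketch below trains a Random Forest on synthetic per-message features; the feature choices and data are assumptions standing in for the balanced VESNOS-derived dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000
# Hypothetical per-BSM features: speed deviation, position jump,
# pseudonym-change rate. Sybil traffic is shifted and noisier.
legit = rng.normal([0.5, 1.0, 0.1], [0.2, 0.5, 0.05], size=(n, 3))
sybil = rng.normal([2.0, 6.0, 0.8], [0.8, 2.0, 0.20], size=(n, 3))
X = np.vstack([legit, sybil])
y = np.array([0] * n + [1] * n)  # 1 = Sybil

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["legit", "sybil"]))
```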
Students: Ashwin Srinivasa Ramanujan, Shailesh Rajendran, Yu-Rong Yeh
Faculty Mentor: Anton Dahbura
Research Assistant: Ilya Sabnani, Ruirong Huang (PhD student)
Abstract: Vehicle-to-Everything (V2X) communications technologies are integral to advancing intelligent transportation systems but remain susceptible to complex cyber threats that can harm road safety and data integrity. This study tackles these vulnerabilities by enhancing a co-verification algorithm through the innovative application of machine learning models, focusing on identifying and mitigating falsified vehicular data.
Using the Next Generation Simulation (NGSIM) Interstate 80 freeway dataset, we developed a comprehensive anomaly detection framework that integrates three machine learning models—Autoencoder, Random Forest, and XGBoost—within a collaborative verification system. To rigorously evaluate the approach, we introduced various anomalies in velocity, acceleration, and positional data to emulate cyberattacks, creating a robust testing environment.
The enhanced system achieved notable performance, with 95% accuracy in detecting anomalies, a precision of 0.99, and a recall of 0.63. The co-verification algorithm further improved these metrics, identifying malicious vehicles with an accuracy of 0.987 and a perfect recall rate. This refined method not only enhances the algorithm’s effectiveness but also offers a scalable solution for real-time threat detection and mitigation in V2X networks.
This research provides a significant contribution to the field by introducing an adaptive, efficient framework for safeguarding the reliability and security of intelligent transportation systems, paving the way for more secure autonomous and connected vehicle technologies.
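As an illustration of the anomaly-scoring step for one of the three models described above, the sketch below trains a small autoencoder (approximated here with scikit-learn's MLPRegressor trained to reproduce its input) on synthetic kinematic features and scores injected falsified data by reconstruction error; the architecture and features are assumptions, not the NGSIM pipeline itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Features: velocity (m/s), acceleration (m/s^2), x and y position (m).
normal = rng.normal([30.0, 0.5, 100.0, 50.0], [3.0, 0.3, 20.0, 10.0],
                    size=(1000, 4))
scaler = StandardScaler().fit(normal)
Xn = scaler.transform(normal)

# A bottleneck layer of width 2 forces a compressed representation.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=2000,
                  random_state=0).fit(Xn, Xn)

def anomaly_score(samples: np.ndarray) -> np.ndarray:
    X = scaler.transform(samples)
    return np.mean((ae.predict(X) - X) ** 2, axis=1)  # reconstruction MSE

falsified = np.array([[90.0, 5.0, 100.0, 50.0]])  # injected velocity anomaly
print("normal score:   ", anomaly_score(normal[:1])[0])
print("falsified score:", anomaly_score(falsified)[0])  # typically far larger
```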
Students: Zhongyang Li, Yi Liu, Yutong Guo
Faculty Mentor: Ashutosh Dutta
Abstract: Large language models (LLMs) have become integral to numerous applications, showcasing their transformative potential across various domains. To ensure the traceability and intellectual property protection of LLM-generated content, watermarking techniques have emerged as a crucial safeguard. However, existing watermarking methods often suffer from vulnerabilities, including susceptibility to adversarial attacks such as token replacement and temperature scaling. In this study, we propose an enhanced watermarking framework that embeds robust signals while maintaining text quality. Our experiments demonstrate the framework’s resilience against diverse attack scenarios, validating its effectiveness in improving watermark robustness and ensuring reliable detection in adversarial environments.
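For background, the sketch below shows a common "green list" watermarking scheme of the kind such frameworks build on: a hash of the previous token partitions the vocabulary, and green tokens receive a logit bias before sampling. It illustrates the baseline signal and its detection idea, not the enhanced framework proposed here.

```python
import hashlib
import numpy as np

VOCAB_SIZE = 50_000
GAMMA, DELTA = 0.5, 2.0  # green-list fraction and logit bias

def green_list(prev_token: int) -> np.ndarray:
    """Deterministically partition the vocabulary, seeded by the previous token."""
    seed = int.from_bytes(
        hashlib.sha256(str(prev_token).encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    mask = np.zeros(VOCAB_SIZE, dtype=bool)
    mask[rng.choice(VOCAB_SIZE, int(GAMMA * VOCAB_SIZE), replace=False)] = True
    return mask

def watermarked_sample(logits: np.ndarray, prev_token: int) -> int:
    """Bias green-token logits by DELTA, then sample from the softmax."""
    biased = logits + DELTA * green_list(prev_token)
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(VOCAB_SIZE, p=probs))

# Detection re-derives each green list and z-tests the green-token count
# against the GAMMA baseline expected for unwatermarked text.
token = watermarked_sample(np.zeros(VOCAB_SIZE), prev_token=123)
print("sampled token id:", token)
```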
Students: Yuyang Lei, Zhe Yan
Faculty Mentor: Lanier Watkins
Abstract: The proliferation of Internet of Things (IoT) devices has transformed modern living. People rely on the convenience of smart devices, often without considering the security differences among them. However, some of these devices, including those from leading brands, introduce significant security vulnerabilities. This capstone project investigates the security of selected popular smart home devices and personal wearables, identifies potential attack vectors, and offers mitigations against known threats. The entire lifecycle of each device was analyzed, and various exploitation techniques were employed, with tools such as the HackRF One and a Buffalo router used for testing. In real-world scenarios, users must remain vigilant when using IoT devices. The project aims to provide actionable insights into improving IoT device security, emphasizing user awareness, secure design principles, and industry-standard practices. The findings contribute to advancing IoT security research and highlight the urgent need for strengthened defenses in the ever-expanding IoT ecosystem.
Students: Spencer Stevens, Andrew Zitter, Chinmay Lohani
Faculty Mentor: Lanier Watkins
External Mentor: Dr. Denzel Hamilton (JHU/APL)
Abstract: Unmanned aircraft systems (UAS), from do-it-yourself (DIY) kits to commercial models, have become increasingly available. This availability makes them a public-safety concern, as adversaries can easily acquire a UAS for malicious use. In this paper, we propose and document the methods required to build a secure DIY UAS. The secure UAS uses modern protocols and an intrusion detection system (IDS) to detect an adversary’s attempt to hijack or damage the aircraft. Eventually, the IDS can be extended into an intrusion prevention system (IPS) using decision logic or machine learning.
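A minimal sketch of the rule-based side of such an IDS, assuming a feed of (timestamp, source) command events; the authorized-station address and rate threshold are illustrative placeholders.

```python
AUTHORIZED_GCS = "10.0.0.2"   # placeholder ground-control-station address
MAX_CMDS_PER_SEC = 20         # placeholder command-rate threshold

def inspect(commands: list[tuple[float, str]]) -> list[str]:
    """commands: (timestamp, source_ip) per received control command.
    Flag unauthorized senders and command floods within a 1 s window."""
    alerts = []
    for i, (ts, src) in enumerate(commands):
        if src != AUTHORIZED_GCS:
            alerts.append(f"{ts:.1f}s: command from unauthorized source {src}")
        recent = [t for t, _ in commands[: i + 1] if ts - t <= 1.0]
        if len(recent) > MAX_CMDS_PER_SEC:
            alerts.append(f"{ts:.1f}s: command flood ({len(recent)}/s)")
    return alerts

print(inspect([(0.0, "10.0.0.2"), (0.1, "10.0.0.9")]))
```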
Students: Dorian Liu, Akash Gupta
Faculty Mentor: Matthew Green
Abstract: In a rapidly evolving digital world, the way we establish trust in agreements continues to change. Traditionally, trust between two parties is anchored in paper-based documents such as laws, business contracts, and service-level agreements. These text-heavy contracts often suffer from ambiguity and lack of intelligence.
The limitations of traditional contracts have motivated researchers to develop computable legal contracts that transform legal terms into computable code. Based on this, computable legal contracts have been integrated with smart contracts to enhance enforceability and expedite dispute resolution. However, distributed ledger technologies present challenges in maintaining privacy, as they can expose confidential agreements on the blockchain, making them accessible to the public.
In this work, we construct the zk-Agreement protocol, which enables a transition from paper-based trust to cryptographically guaranteed trust while preserving the privacy of confidential agreements. Our approach uses zero-knowledge proofs to protect the privacy of confidential agreements, applies a Trusted Execution Environment to securely verify that both parties have fulfilled their commitments, and leverages smart contracts to finalize transactions, including payments and function executions.
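As a toy building block of this flow, the sketch below shows a hash commitment: only the commitment is published to the smart contract, and the opened terms are checked at settlement (inside the TEE in this design). The full zero-knowledge machinery is not shown.

```python
import hashlib
import os

def commit(terms: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening nonce). Only the commitment goes on-chain,
    so the confidential terms never appear on the public ledger."""
    nonce = os.urandom(32)
    return hashlib.sha256(nonce + terms).digest(), nonce

def verify_opening(commitment: bytes, nonce: bytes, terms: bytes) -> bool:
    return hashlib.sha256(nonce + terms).digest() == commitment

terms = b"deliver 100 units by 2025-01-01; payment 5 ETH"
c, nonce = commit(terms)                 # published to the smart contract
assert verify_opening(c, nonce, terms)   # checked at settlement time
```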
Students: Siwei Peng, Zhemin Wang
Faculty Mentor: Timothy Leschke
Abstract: The Autopsy Ingestion Module Parsing Facebook and Other Social Media Artifacts project focuses on automatically extracting and analyzing social media data for digital forensic investigations. In brief, it develops a tool that can effectively parse key social media evidence and organize it for investigative use. Chronologically, the project progressed through research, design, implementation, and testing phases to ensure the accuracy and efficiency of artifact extraction. The ultimate goal is to provide investigators with a reliable automated tool that integrates seamlessly into the forensic workflow.
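The core parsing step might look like the sketch below, which reads cached messages from a SQLite database and emits them in chronological order; the table schema and database name are hypothetical stand-ins, since real Facebook app databases vary by platform and version.

```python
import sqlite3
from datetime import datetime, timezone

def parse_messages(db_path: str):
    """Yield messages from a recovered SQLite store, oldest first.
    The `messages(sender, body, timestamp_ms)` schema is illustrative."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT sender, body, timestamp_ms FROM messages ORDER BY timestamp_ms"
    )
    for sender, body, ts_ms in rows:
        when = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
        yield {"sender": sender, "body": body, "time": when.isoformat()}
    con.close()

for artifact in parse_messages("messages.db"):  # recovered app database
    print(artifact)
```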
Students: Qiyao Tang
Faculty Mentor: Xiangyang Li
Abstract: Spam messages continue to present significant challenges to digital users, cluttering inboxes and posing security risks. Traditional spam detection methods, including rule-based, collaborative, and machine learning approaches, struggle to keep up with the rapidly evolving tactics employed by spammers. This project studies spam detection systems that leverage Large Language Models (LLMs) fine-tuned on spam datasets. More importantly, we want to understand how LLM-based spam detection systems perform under adversarial attacks that purposefully modify spam emails, and under data poisoning attacks that exploit differences between the training data and the messages seen at detection time, to which traditional machine learning models have been shown to be vulnerable. The experiments employ two LLMs, GPT-2 and BERT, and three spam datasets, Enron, LingSpam, and SMSSpamCollection, for extensive training and testing. The results show that, while the LLM models can function as effective spam filters, they are susceptible to adversarial and data poisoning attacks. This research provides useful insights for future applications of LLMs in information security.
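A condensed sketch of fine-tuning the BERT-based filter with Hugging Face Transformers; the two-example batch and hyperparameters are placeholders for the full Enron, LingSpam, and SMSSpamCollection training runs.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)
texts = ["WIN a FREE prize, click now!!!", "Meeting moved to 3pm tomorrow."]
labels = torch.tensor([1, 0])  # 1 = spam, 0 = ham
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps on the toy batch
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())  # expected: [1, 0]
```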
Students: Han Li, Yi He, Jiacheng Zhong
Faculty Mentor: Yinzhi Cao
Abstract: JavaScript is widely used in web development due to its dynamic capabilities and rich ecosystem, but its built-in functions are mostly implemented in native languages such as C or C++, posing analysis challenges for developers and security researchers. Tasks such as static code analysis, taint tracing, and control flow modeling require a deep understanding of the workflow and data flow of these native implementations, and this lack of transparency makes security auditing and vulnerability assessment difficult.
Our project aims to train a specialized JavaScript bytecode prediction model using large language model (LLM) techniques, advancing the development of autonomous code analysis systems that require less human intervention. We adopt bytecode-level learning, which, owing to its proximity to execution and its encapsulation of JavaScript’s dynamic behavior, generalizes across many JavaScript constructs while improving efficiency and accuracy. The results show that a specialized LLM can effectively predict bytecode, reducing manual effort, improving accuracy, and addressing critical issues in secure code analysis.
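One way (source, bytecode) training pairs can be produced is with V8's Ignition dump, which standard Node builds expose via `--print-bytecode`; the exact output format varies by version, so treat this as an assumption-laden illustration rather than the project's exact data pipeline.

```python
import subprocess
import tempfile

def bytecode_for(js_source: str) -> str:
    """Dump V8 Ignition bytecode for a JavaScript snippet via Node."""
    with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
        f.write(js_source)
        path = f.name
    proc = subprocess.run(["node", "--print-bytecode", path],
                          capture_output=True, text=True)
    return proc.stdout

src = "function add(a, b) { return a + b; } add(1, 2);"
pair = {"source": src, "bytecode": bytecode_for(src)}  # one training example
print(pair["bytecode"][:400])  # inspect the start of the Ignition dump
```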
Students: Han Wang, Bochun Hu, Zizhuang Guo
Faculty Mentor: Yinzhi Cao
Abstract: Vertical Federated Learning (VFL) enables collaborative model training across multiple organizations while preserving data privacy. However, the inherent feature heterogeneity among participants presents significant challenges to model accuracy and privacy protection. This study extends the PrivateFL framework, originally developed for Horizontal Federated Learning (HFL), to the VFL context by incorporating personalized data transformation and differential privacy mechanisms. The proposed VFL-PRIVATEFL framework effectively addresses feature heterogeneity through tailored feature alignment and ensures robust privacy protection via Local Differential Privacy (LDP) and Central Differential Privacy (CDP). Experimental results across diverse datasets demonstrate that VFL-PRIVATEFL significantly outperforms traditional methods in terms of accuracy, robustness, and scalability, especially under varying privacy budgets and client configurations. These findings highlight the potential of VFL-PRIVATEFL to enhance collaborative machine learning in privacy-sensitive domains, paving the way for secure and efficient applications in finance, healthcare, and beyond.
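As an illustration of the LDP step in a VFL round, the sketch below has each party clip and perturb its intermediate embedding locally before sending it to the label holder; the dimensions, party roles, and noise scale are illustrative, not the VFL-PRIVATEFL configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

def ldp_perturb(embedding: np.ndarray, clip: float, sigma: float) -> np.ndarray:
    """Clip to bound sensitivity, then add Gaussian noise locally,
    so the raw embedding never leaves the party."""
    norm = np.linalg.norm(embedding)
    if norm > clip:
        embedding = embedding * (clip / norm)
    return embedding + rng.normal(0.0, sigma * clip, size=embedding.shape)

# Two parties hold disjoint feature blocks of the same sample.
emb_a = rng.normal(size=8)  # party A: e.g. financial features
emb_b = rng.normal(size=8)  # party B: e.g. clinical features
noisy = [ldp_perturb(e, clip=1.0, sigma=0.5) for e in (emb_a, emb_b)]
joint = np.concatenate(noisy)  # label holder trains the top model on this
print(joint.shape)  # (16,)
```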
Students: Chen Xue, Tianshu Li, Ziyu Zhu
Faculty Mentor: Yinzhi Cao
Abstract: This paper presents the integration of FAST (Fast Abstract Interpretation for Scalability) into Visual Studio Code (VS Code). The goal is a user-friendly, enhanced JavaScript debugger that can detect vulnerabilities in JavaScript applications in real time. By leveraging the Debug Adapter Protocol (DAP), the system embeds advanced static analysis capabilities directly into the IDE. With this tool, developers can effectively identify and mitigate taint vulnerabilities such as command injection and path traversal. The design emphasizes usability, adopting human-computer interaction (HCI) principles to achieve intuitive visual workflows. Although an experimental framework to evaluate usability and effectiveness is deferred to future work, this integration represents the authors’ effort to improve security tooling in the Node.js ecosystem.
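For context, DAP messages are JSON bodies framed by a Content-Length header over the adapter's transport; the sketch below encodes one such message, with a hypothetical custom request name standing in for how FAST findings could ride the same channel.

```python
import json

def encode_dap(message: dict) -> bytes:
    """Frame a DAP message: Content-Length header, blank line, JSON body."""
    body = json.dumps(message).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body

request = {
    "seq": 1,
    "type": "request",
    "command": "fastTaintReport",  # hypothetical custom DAP request
    "arguments": {"file": "app.js"},
}
print(encode_dap(request).decode("utf-8"))
```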