Students: Junxian Lu, Hanpei Hu
Faculty Mentor: Anton Dahbura
External Mentor: Abdo Ahmed (JHU/APL)
Abstract: Vehicle-to-Everything (V2X) communications are pivotal for the advancement of Connected Autonomous Vehicles (CAVs), enabling the real-time exchange of safety-critical information. This safety-critical information must be secured in terms of both authenticity and authorization. The Security Credential Management System (SCMS) was developed to authenticate and authorize V2X messages while preserving user privacy through pseudonyms and digital signatures. However, SCMS is vulnerable to Sybil attacks, in which adversaries exploit multiple valid pseudonym certificates to impersonate multiple vehicles within the network. This paper presents a comprehensive approach to enhancing CAV cybersecurity by simulating Sybil attacks and developing a machine learning-based detection system within the VESNOS simulation platform. We extended VESNOS to incorporate dynamic SCMS functionalities and integrated the HighD dataset into the SUMO traffic simulator to emulate realistic vehicular mobility patterns. Various Sybil attack scenarios, including multiple identity usage, false Basic Safety Messages (BSMs), and platoon takeover attacks, were implemented to evaluate the efficacy of our detection mechanisms. A machine learning detection model, using algorithms such as Random Forest, Support Vector Machine (SVM), and Long Short-Term Memory (LSTM) neural networks, was trained and validated on a balanced dataset derived from our simulations. The Random Forest classifier demonstrated superior performance, effectively identifying Sybil attacks while minimizing false positives. These results underscore the potential of machine learning techniques for safeguarding V2X communications against sophisticated cyber threats. Future work will focus on implementing real-time revocation mechanisms and exploring advanced detection strategies, such as ensemble learning and contextual feature engineering, to further enhance the resilience and security of CAV networks.
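To make the detection stage concrete, the minimal sketch below trains a Random Forest over per-message features of the kind described above. The feature names, the synthetic data, and the labeling rule are illustrative placeholders for the simulation-derived dataset, not the project's actual schema.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-message features: position inconsistency against
# neighboring observations, speed inconsistency, pseudonym-change rate,
# and message frequency.
X = rng.normal(size=(5000, 4))
# Synthetic ground truth standing in for simulation labels (1 = Sybil):
# flag messages whose two inconsistency features are jointly high.
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```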
Students: Ashwin Srinivasa Ramanujan, Shailesh Rajendran, Yu-Rong Yeh
Faculty Mentor: Anton Dahbura
Research Assistants: Ilya Sabnani, Ruirong Huang (PhD student)
Abstract: Vehicle-to-Everything (V2X) communications technologies are integral to advancing intelligent transportation systems but remain susceptible to complex cyber threats that can compromise road safety and data integrity. This study tackles these vulnerabilities by enhancing a co-verification algorithm through the application of machine learning models, focusing on identifying and mitigating falsified vehicular data.
Using the Next Generation Simulation (NGSIM) Interstate 80 freeway dataset, we developed a comprehensive anomaly detection framework that integrates three machine learning models—Autoencoder, Random Forest, and XGBoost—within a collaborative verification system. To rigorously evaluate the approach, we introduced various anomalies in velocity, acceleration, and positional data to emulate cyberattacks, creating a robust testing environment.
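A minimal sketch of this evaluation setup appears below, using synthetic stand-ins for the NGSIM kinematics: anomalies are injected into a velocity trace and three simple detectors vote in a co-verification step. The thresholds, the z-score detector standing in for the autoencoder, and the 2-of-3 vote are illustrative assumptions, not the paper's exact models or decision rule.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(25.0, 2.0, size=1000)          # velocity trace (m/s)
labels = np.zeros(1000, dtype=bool)

attacked = rng.choice(1000, size=50, replace=False)
v[attacked] += rng.normal(15.0, 3.0, size=50) # injected velocity spoofing
labels[attacked] = True

def zscore_flags(x, k=3.0):                   # simple stand-in for the autoencoder
    return np.abs(x - x.mean()) > k * x.std()

flags_a = zscore_flags(v)                     # detector 1: outlier velocity
flags_b = np.abs(np.gradient(v)) > 5.0        # detector 2: implausible acceleration
flags_c = v > 40.0                            # detector 3: hard physical limit

votes = flags_a.astype(int) + flags_b.astype(int) + flags_c.astype(int)
flagged = votes >= 2                          # co-verification: 2-of-3 vote
print("recall:", (flagged & labels).sum() / labels.sum())
```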
The enhanced system achieved notable performance, detecting anomalies with 95% accuracy, a precision of 0.99, and a recall of 0.63. The co-verification algorithm further improved these metrics, identifying malicious vehicles with an accuracy of 0.987 and perfect recall. This refined method not only enhances the algorithm's effectiveness but also offers a scalable solution for real-time threat detection and mitigation in V2X networks.
This research provides a significant contribution to the field by introducing an adaptive, efficient framework for safeguarding the reliability and security of intelligent transportation systems, paving the way for more secure autonomous and connected vehicle technologies.
Students: Zhongyang Li, Yi Liu, Yutong Guo
Faculty Mentor: Ashutosh Dutta
Abstract: Large language models (LLMs) have become integral to numerous applications, showcasing their transformative potential across various domains. To ensure the traceability and intellectual property protection of LLM-generated content, watermarking techniques have emerged as a crucial safeguard. However, existing watermarking methods often suffer from vulnerabilities, including susceptibility to adversarial attacks such as token replacement and temperature scaling. In this study, we propose an enhanced watermarking framework that embeds robust signals while maintaining text quality. Our experiments demonstrate the framework’s resilience against diverse attack scenarios, validating its effectiveness in improving watermark robustness and ensuring reliable detection in adversarial environments.
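As one concrete point of reference, the sketch below shows detection for a green-list watermark in the style of Kirchenbauer et al., a common scheme that attacks such as token replacement target; the hash-based partition, parameters, and z-test are illustrative, not this paper's construction.

```python
import hashlib
import math

GAMMA = 0.5   # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    # Partition the vocabulary pseudorandomly, seeded on the previous token.
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(h[:8], "big") % 1000 < GAMMA * 1000

def z_score(tokens: list[int]) -> float:
    # Count green tokens and test against the null hypothesis that an
    # unwatermarked text hits the green list with probability GAMMA.
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# A large z-score indicates watermarked text; token-replacement attacks
# lower it by swapping green tokens for red ones.
print(z_score([17, 42, 99, 3, 42, 7, 56, 88, 12, 33]))
```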
Students: Yuyang Lei, Zhe Yan
Faculty Mentor: Lanier Watkins
Abstract: The proliferation of Internet of Things (IoT) devices has transformed modern living. People rely on the convenience of smart devices, often without distinguishing between secure and insecure products. However, some of these devices, including those from leading brands, introduce significant security vulnerabilities. This capstone project investigates the security of selected popular smart home devices and personal wearable devices. In addition, the project identifies potential attack vectors and offers mitigations against known threats. The entire lifecycle of each device was analyzed, and various exploitation techniques were employed; tools such as the HackRF One and a Buffalo router were also used in testing. In real-world scenarios, users must remain vigilant when using IoT devices. The project aims to provide actionable insights into improving IoT device security, emphasizing user awareness, secure design principles, and industry-standard practices. The findings contribute to advancing IoT security research and highlight the urgent need for strengthened defenses in the ever-expanding IoT ecosystem.
Students: Spencer Stevens, Andrew Zitter, Chinmay Lohani
Faculty Mentor: Lanier Watkins
External Mentor: Dr. Denzel Hamilton (JHU/APL)
Abstract: Unmanned aircraft systems (UAS) such as do-it-yourself (DIY) kits and commercial UAS have become increasingly available. This availability makes them a public-safety concern, as adversaries can easily acquire UAS for malicious use. In this paper, we propose and document the methods required to create a secure DIY UAS. This secure UAS includes modern protocols and an intrusion detection system (IDS) to detect an adversary's attempt to hijack or damage the UAS. Eventually, the IDS can be extended into an intrusion prevention system (IPS) using decision logic or machine learning.
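To illustrate the starting point before any decision logic or machine learning is added, the sketch below shows the kind of rule-based check such an IDS could run over telemetry frames. The field names, trusted-source whitelist, and thresholds are hypothetical, not the system's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    src_id: int      # hypothetical ID of the station issuing the frame
    lat: float
    lon: float
    alt_m: float

TRUSTED_GCS = {1}            # whitelisted ground-control station IDs
MAX_ALT_JUMP_M = 50.0        # implausible altitude change between frames

def detect(prev: Telemetry, cur: Telemetry) -> list[str]:
    alerts = []
    if cur.src_id not in TRUSTED_GCS:
        alerts.append(f"frame from untrusted source {cur.src_id}")
    if abs(cur.alt_m - prev.alt_m) > MAX_ALT_JUMP_M:
        alerts.append("implausible altitude jump (possible hijack or spoofing)")
    return alerts

print(detect(Telemetry(1, 39.33, -76.62, 100.0),
             Telemetry(9, 39.33, -76.62, 400.0)))
```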
Students: Dorian Liu, Akash Gupta
Faculty Mentor: Matt Green
Abstract: In a rapidly evolving digital world, the way we establish trust in agreements continues to change. Traditionally, trust between two parties has been anchored in paper-based documents such as laws, business contracts, and service-level agreements. These text-heavy contracts often suffer from ambiguity and cannot be interpreted or enforced automatically.
The limitations of traditional contracts have motivated researchers to develop computable legal contracts that transform legal terms into computable code. Based on this, computable legal contracts have been integrated with smart contracts to enhance enforceability and expedite dispute resolution. However, distributed ledger technologies present challenges in maintaining privacy, as they can expose confidential agreements on the blockchain, making them accessible to the public.
In this work, we construct the zk-Agreement protocol, which enables a transition from paper-based trust to cryptographically guaranteed trust while preserving the privacy of confidential agreements. Our approach utilizes zero-knowledge proofs to protect the privacy of confidential agreements, applies a Trusted Execution Environment to securely verify that both parties have fulfilled their commitments, and leverages smart contracts to finalize transactions, including payments and function executions.
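A minimal sketch of the commit-then-verify idea underlying this transition is shown below: each party publishes only a hiding commitment to the confidential agreement and later proves it opens correctly. Real zero-knowledge proofs and the TEE attestation step are elided, and all names and values are hypothetical.

```python
import hashlib
import secrets

def commit(agreement: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(32)                      # blinding randomness
    digest = hashlib.sha256(nonce + agreement).digest()  # the only public value
    return digest, nonce

def verify_opening(digest: bytes, agreement: bytes, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + agreement).digest() == digest

terms = b"party A pays 10 units upon delivery before 2025-01-01"
c, r = commit(terms)          # only c would be posted to the smart contract
assert verify_opening(c, terms, r)
print("commitment verified without revealing the terms:", c.hex()[:16], "...")
```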
Students: Siwei Peng, Zhemin Wang
Faculty Mentor: Tim Leschke
Abstract: The Autopsy Ingestion Module Parsing Facebook and Other Social Media Artifacts project focuses on automatically extracting and analyzing social media data for digital forensic investigations. The project brief emphasizes the development of a tool that can effectively parse key social media evidence and organize it for investigative use. The project progressed through research, design, implementation, and testing phases to ensure the accuracy and efficiency of extracting relevant artifacts. The ultimate goal is to provide investigators with a reliable automated tool that can be seamlessly integrated into the forensic workflow.
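To illustrate the core parsing step, the sketch below pulls messages from an app-side SQLite database and orders them for timeline review; the database path, table, and column names are hypothetical stand-ins, and the real module would map results onto Autopsy's artifact model.

```python
import sqlite3

def parse_messages(db_path: str) -> list[dict]:
    # Table and column names are hypothetical stand-ins for an app's schema.
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT sender_id, text, timestamp_ms FROM messages "
            "ORDER BY timestamp_ms"
        )
        return [{"sender": s, "text": t, "ts_ms": ts} for s, t, ts in rows]
    finally:
        con.close()

# Usage (path is illustrative):
# for msg in parse_messages("threads_db2"):
#     print(msg["ts_ms"], msg["sender"], msg["text"][:60])
```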
Students: Qiyao Tang
Faculty Mentor: Xiangyang Li
Abstract: Spam messages continue to present significant challenges to digital users, cluttering inboxes and posing security risks. Traditional spam detection methods, including rules-based, collaborative, and machine learning approaches, struggle to keep up with the rapidly evolving tactics employed by spammers. This project studies new spam detection systems that leverage Large Language Models (LLMs) fine-tuned with spam datasets. More importantly, we want to understand how LLM-based spam detection systems perform under adversarial attacks that purposefully modify spam emails, and under data poisoning attacks that exploit differences between the training data and the messages seen at detection time, attacks to which traditional machine learning models have been shown to be vulnerable. The experiments employ two LLMs, GPT-2 and BERT, and three spam datasets, Enron, LingSpam, and SMSSpamCollection, for extensive training and testing. The results show that, while they can function as effective spam filters, the LLM models are susceptible to adversarial and data poisoning attacks. This research provides useful insights for future applications of LLMs in information security.
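A minimal sketch of the classification stage, using the Hugging Face transformers API, appears below. The checkpoint name is a placeholder for a model fine-tuned on the spam datasets (so its scores here are illustrative only), and the perturbed message is a toy character-level edit of the kind adversarial attacks employ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint; the study fine-tunes on the spam datasets, so the
# classification head loaded here is untrained.
MODEL = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def spam_prob(text: str) -> float:
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

original = "Claim your free prize now, click here!"
evaded = "Cla1m your fr3e pr1ze n0w, cl1ck here!"   # toy adversarial edit
print(spam_prob(original), spam_prob(evaded))
```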
Students: Han Li, Yi He, Jiacheng Zhong
Faculty Mentor: Yinzhi Cao
Abstract: JavaScript is widely used in web development due to its dynamic capabilities and rich ecosystem, but its built-in functions are mostly implemented in native languages such as C or C++, posing analysis challenges for developers and security researchers. Tasks such as static code analysis, taint tracing, and control flow modeling require a deep understanding of the workflow and data flow of these native implementations, and the lack of transparency makes security auditing and vulnerability assessment difficult.
Our project aims to train a specialized JavaScript bytecode prediction model using large language model (LLM) techniques to advance the development of autonomous code analysis systems requiring less human intervention. We adopt bytecode-level learning because bytecode is close to execution and encapsulates the dynamic nature of JavaScript, enabling generalization across many JavaScript constructs while improving efficiency and accuracy. The results show that a specialized LLM can effectively predict bytecode, reducing manual effort, improving accuracy, and addressing critical issues in secure code analysis.
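As an illustration of how (source, bytecode) training pairs can be produced, the sketch below asks V8 to dump Ignition bytecode through Node.js. This is one plausible data-preparation step under stated assumptions, not necessarily the project's pipeline, and the fine-tuning itself is elided.

```python
import os
import subprocess
import tempfile

def js_to_bytecode(source: str) -> str:
    # --print-bytecode is a V8 flag that Node.js passes through; the output
    # format varies across V8 versions.
    with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        out = subprocess.run(
            ["node", "--print-bytecode", path],
            capture_output=True, text=True, check=True,
        )
        return out.stdout
    finally:
        os.unlink(path)

src = "function add(a, b) { return a + b; } add(1, 2);"
print(js_to_bytecode(src)[:400])   # bytecode listing used as the training label
```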
Students: Han Wang, Bochun Hu, Zizhuang Guo
Faculty Mentor: Yinzhi Cao
Abstract: Vertical Federated Learning (VFL) enables collaborative model training across multiple organizations while preserving data privacy. However, the inherent feature heterogeneity among participants presents significant challenges to model accuracy and privacy protection. This study extends the PrivateFL framework, originally developed for Horizontal Federated Learning (HFL), to the VFL setting by incorporating personalized data transformation and differential privacy mechanisms. The proposed VFL-PrivateFL framework effectively addresses feature heterogeneity through tailored feature alignment and ensures robust privacy protection via Local Differential Privacy (LDP) and Central Differential Privacy (CDP). Experimental results across diverse datasets demonstrate that VFL-PrivateFL significantly outperforms traditional methods in terms of accuracy, robustness, and scalability, especially under varying privacy budgets and client configurations. These findings highlight the potential of VFL-PrivateFL to enhance collaborative machine learning in privacy-sensitive domains, paving the way for secure and efficient applications in finance, healthcare, and beyond.
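A minimal sketch of the local perturbation step is shown below: each client clips and noises its feature embedding before sharing it, following the standard Gaussian-mechanism pattern. The clipping bound and noise scale are illustrative; the framework's actual transformation layer and privacy accounting are not reproduced here.

```python
import numpy as np

def ldp_perturb(embedding: np.ndarray, clip: float = 1.0,
                sigma: float = 0.5, rng=None) -> np.ndarray:
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(embedding)
    clipped = embedding * min(1.0, clip / max(norm, 1e-12))  # bound sensitivity
    return clipped + rng.normal(0.0, sigma * clip, size=embedding.shape)

client_embedding = np.random.default_rng(1).normal(size=16)
noisy = ldp_perturb(client_embedding, rng=np.random.default_rng(2))
print("L2 distance from raw embedding:", np.linalg.norm(noisy - client_embedding))
```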
Students: Chen Xue, Tianshu Li, Ziyu Zhu
Faculty Mentor: Yinzhi Cao
Abstract: This paper presents the integration of FAST (Fast Abstract Interpretation for Scalability) into Visual Studio Code (VS Code). The goal is to create a user-friendly, enhanced JavaScript debugger that detects vulnerabilities in JavaScript applications in real time. By leveraging the Debug Adapter Protocol (DAP), the system embeds advanced static analysis capabilities directly into the IDE. With this tool, developers can effectively identify and mitigate taint vulnerabilities such as command injection and path traversal. The design emphasizes usability and adopts human-computer interaction (HCI) principles to achieve intuitive visual workflows. Although an experimental framework to evaluate usability and effectiveness is deferred to future work, this integration represents the authors' efforts to improve security tooling in the Node.js ecosystem.
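To make the transport concrete, the sketch below frames a hypothetical custom DAP event carrying a taint finding. DAP messages are JSON bodies behind a Content-Length header, and the protocol permits custom events, but the "taintReport" name and its body are assumptions, not FAST's actual wire format.

```python
import json

def dap_frame(message: dict) -> bytes:
    # DAP (like LSP) frames each JSON message behind a Content-Length header.
    body = json.dumps(message).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode() + body

event = {
    "seq": 42,
    "type": "event",
    "event": "taintReport",            # hypothetical custom event name
    "body": {
        "vulnerability": "command injection",
        "source": "req.query.cmd",     # illustrative taint source
        "sink": "child_process.exec",  # illustrative sink
        "line": 17,
    },
}
print(dap_frame(event).decode())
```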