College of Engineering and Computing
Browsing College of Engineering and Computing by Title
Now showing 1 - 20 of 694
Item
2D Cartoon Rigs From Uncorresponded Vector Graphics
Mehmanesh, Nusha; Gingold, Yotam
This thesis introduces a novel algorithm that finds the correspondence between different poses of character(s) in two consecutive vectorized 2D key frames. This algorithm also derives a set of handles that allow the illustrator to animate the character to any desired pose. This is done in the hopes of reducing the time-consuming manual work done by animators in the process of in-betweening between key frames and to allow the animator to produce other frames by using the handles. The automated iterative approach described in this thesis finds the set of best possible transformations between two subsequent key frames by minimizing an energy function and performing a multi-label optimization. The resulting transformations are then used to calculate the blending weights that minimize the displacement error. The final transformations can then either be interpolated to produce intermediate frames at a user-specified rate or modified to animate the character.

Item
3D Model-Assisted Learning for Object Detection and Pose Estimation
(2020) Georgios Georgakis
The supervised learning paradigm for training Deep Convolutional Neural Networks (DCNNs) rests on the availability of large amounts of manually annotated images, which are necessary for training deep models with millions of parameters. In this thesis, we present novel techniques for mitigating the required manual annotation by generating large object instance datasets, compositing textured 3D models onto commonly encountered background scenes to synthesize training images. Models trained on the generated data augmented with real-world annotations outperform models trained only on real data. Non-textured 3D models are subsequently used for keypoint learning and matching, and for 3D object pose estimation from RGB images. The proposed methods show promising generalization results on new and standard benchmark datasets. In the final part of the thesis, we investigate how these perception capabilities can be leveraged and encoded in a spatial map in order to enable an agent to successfully navigate toward a target object.

Item
3D Stress Estimation Using Adapted Finite Element Model Updating Techniques
Khan, Affan Danish; Lattanzi, David
According to a 2016 study by the American Road and Transportation Builders Association (ARTBA), one bridge in every ten is structurally deficient. Two major contributors to structural deficiency are corrosion, which causes material loss and thinning of cross sections, and permanent plastic deformations. Currently, there are no standard methods for understanding how measurements of such damage impact stress and capacity analysis. The research presented in this thesis focuses on the use of 3D images to create "point clouds" for such structural capacity analysis. Using a set of previously developed techniques that measure both section loss and deformations in point clouds, two studies were performed to analyze the effectiveness of using these techniques to update corresponding finite element models. The first study was a sensitivity analysis to quantify the effect of image noise on stress concentration estimates and to better understand the limits of the updating approach. In the second study, point cloud deflection measurements from three-point bending tests were used to induce translations and stresses in a finite element model.
The results of the first study showed that increasing image noise resulted in a higher likelihood that artifacts would form in the finite element model, leading to a localized increase in stress; however, it was also found that subsurface stresses matched the values expected from elastic theory, and methods of analyzing the data with these anomalies are discussed. The findings of the second study showed that applying localized displacements in the 3D finite element model created localized stress concentrations that do not represent the expected stress profiles. While both studies provide important insight into this relatively new technology, future work might include creating methods to better differentiate between artificial stress anomalies and actual states of stress, as well as experimental validation.

Publication
A Behavioral Approach to Worm Detection
(2006-08) Ellis, Daniel R.; Ammann, Paul
This dissertation presents a novel approach to the automatic detection of worms using behavioral signatures. A behavioral signature describes aspects of any worm's behavior that are common across manifestations of the worm and that span its nodes in temporal order. Characteristic patterns of worm behavior in network traffic include 1) engaging in similar network behaviors from one target machine to the next, 2) tree-like propagation, and 3) changing a server into a client. These behavioral signatures are presented within the context of a general worm model. The most significant contribution of this dissertation is the demonstration that an accurate and fast worm detection system can be built using the above patterns. Further, I show that the class of worms detectable using these patterns exceeds what has been claimed in the literature and covers a significant portion of the classes of worms. Another contribution is the introduction of a novel paradigm, the Network Application Architecture (NAA), which concerns possible ways to distribute network application functionality across a network. Three NAAs are discussed. As an NAA becomes more constrained, worm detection gets easier. It is shown that for some NAAs certain classes of worms can be detected with only one packet. The third significant contribution of this dissertation is the capability to evaluate worm detection systems in an operational environment. This capability can be used by other researchers to evaluate their own or others' worm detection systems. The claim is that the capability can emulate practically all worms and that it can do so safely, even in an operational enterprise environment.
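To illustrate the third behavioral signature above (a server turning into a client), here is a minimal sketch over time-ordered flow records of the form (source, destination, destination port); the flow format and helper name are assumptions for illustration, not the dissertation's detection system:

```python
def server_to_client_alerts(flows):
    """Flag flows where a host that previously accepted traffic on a port
    later initiates connections to that same port on other hosts -- a
    rough stand-in for the "server into client" behavioral signature."""
    served = set()   # (host, port) pairs observed acting as servers
    alerts = []
    for src, dst, dport in flows:  # flows assumed ordered in time
        if (src, dport) in served:
            alerts.append((src, dst, dport))
        served.add((dst, dport))
    return alerts

# Example: host B serves port 80, then starts contacting port 80 on C.
print(server_to_client_alerts([("A", "B", 80), ("B", "C", 80)]))
```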
Item
A CNN/MLP Neural Processing Engine, Powered by Novel Temporal-Carry-deferring MACs
(2021) Ali Mirzaeian
The applications of machine learning algorithms are innumerable and cover nearly every domain of modern technology. Amid this area's rapid growth, more and more companies have expressed a desire to utilize machine learning techniques in smaller devices, such as cell phones or smart Internet of Things (IoT) instruments. However, machine learning has so far required a power source with more capacity and higher efficiency than a conventional battery. Therefore, introducing neural network accelerators with low energy demands and low latency for executing machine learning techniques has drawn considerable attention in both academia and industry.
In this work, we first propose the design of the Temporal-Carry-deferring MAC (TCD-MAC) and illustrate how our proposed solution can gain significant energy and performance benefits when utilized to process a stream of input data. We then propose using the TCD-MAC to build a reconfigurable, high-speed, and low-power Neural Processing Engine (TCD-NPE). Furthermore, we expand the idea of the TCD-MAC to present NESTA, a specialized neural engine that reformats convolutions into 3 × 3 batches and uses a hierarchy of Hamming Weight Compressors to process each batch.
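As background on the compressor idea: a Hamming weight compressor reduces a column of bits to the binary count of its ones. The following software-only sketch shows the underlying arithmetic identity for non-negative integers; it illustrates the principle only and is not the TCD-MAC or NESTA hardware design:

```python
def column_count_sum(values, bit_width=16):
    """Add a batch of non-negative integers by counting the ones in each
    bit column (the column's Hamming weight) and weighting the count by
    the column's place value: sum = sum over b of popcount(column_b) << b."""
    total = 0
    for b in range(bit_width):
        ones_in_column = sum((v >> b) & 1 for v in values)  # Hamming weight
        total += ones_in_column << b
    return total

batch = [3, 10, 7, 12]
assert column_count_sum(batch) == sum(batch)  # identical to ordinary addition
```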
Item
A Comparison of Hydraulic Modeling Results between Unmanned Aerial Vehicle with Structure-from-Motion and LIDAR Produced Digital Elevation Models
Zahirieh, Sherwin; Ferreira, Celso
Traditional methods of gathering topographic data for hydraulic modeling can be time-consuming and costly. One potential method to gather topographic data in a more timely and cost-efficient manner is the use of unmanned aerial vehicles (UAVs) and structure-from-motion (SfM) image processing. This thesis compares Digital Elevation Models (DEMs) generated from LIDAR and from structure-from-motion applied to UAV imagery, along with the results of open channel hydraulic modeling based on these models. Unlike LIDAR, UAV+SfM topographic modeling is both time- and cost-efficient; however, it is still necessary to study the accuracy of its model results in comparison to traditional LIDAR models. Using Agisoft Photoscan, photos from the UAV were converted into a DEM through the structure-from-motion process. The two DEMs were then used in the HEC-RAS hydraulic model. By examining several different flows through both sets of cross sections, the wetted area and floodplain areas for different study sites were calculated, and accuracy was assessed by analyzing the differences in elevations, channel areas, and floodplain areas between the two models. Under the assumption that the LIDAR data are correct, the results indicate that the UAV+SfM method shows promise for developing topographic data models similar to LIDAR's, but not with enough accuracy to use in place of LIDAR, given the differences in areas between the two sources.

Item
A Compiler-Based Approach to Implementing Smart Pointers
(2007-12-13) Hoskins, Stephen
Because of the growing popularity of programming languages with garbage collectors, such as C# and Java, there is clearly a desire for languages that support automated memory management. However, because of the inefficiencies of the C# and Java garbage collectors, programmers must understand the underlying implementations of those collectors in order to make applications more robust or able to run on real-time systems. Using an implementation of smart pointers written from scratch, this paper attempts to address this problem by exploring techniques used by garbage collectors, ultimately concluding which features of object-oriented languages make the task of automating efficient garbage collection more difficult.
As a result of the conclusions produced in this paper, it may be possible to create a brand new language with the simplicity and elegance of Java and the robustness and efficiency of C, without the developer ever needing to perform memory management.

Item
A Control Plane for Low Power Lossy Networks
(2016) Pope, James; Simon, Robert
Low-power, lossy networks (LLNs) are becoming a critical infrastructure for future applications that are ad hoc and untethered for periods of years. The applications enabled by LLNs include the Smart Grid, data center power control, industrial networks, and building and home automation systems. LLNs intersect a number of research areas, including the Internet of Things (IoT), Cyber-Physical Systems (CPSs), and Wireless Sensor Networks (WSNs). A number of LLN applications, such as industrial sensor networks, require quality-of-service guarantees. These guarantees are not currently supported by LLN routing protocols that allow dynamic changes in the network structure, specifically the standardized IPv6-based Routing Protocol for Low-Power and Lossy Networks (RPL).

Item
A cost-effective distributed architecture for content delivery and exchange over emerging wireless technologies
(2013) Islam, Khondkar; Pullen, J Mark
Opportunities in education are lacking in many parts of the developed nations and are missing in most parts of the developing nations. This is, in significant part, due to shortages of classroom instructional resources such as quality teaching staff, hardware, and software. Distance education (DE) has proved to be a successful teaching approach and overcomes some of the barriers imposed by classroom instruction, primarily the shortage of teachers.

Item
A Cross-Dataset Evaluation of Genetically Evolved Neural Network Architectures
Gelman, Ben; Domeniconi, Carlotta
The design of deep neural networks is often colloquially described as an "art." Although there are some common guiding principles, such as using convolutions for data with spatial locality or using recurrence for data with temporal characteristics, neural network architectures tend to be manually engineered. Few works currently provide methods to determine optimal architectures and hyperparameters. In this work, we generate empirical evidence for neural network architecture choices. We use a genetic algorithm to evolve 980 neural networks for a variety of common datasets. We analyze the characteristics of the highest-performing architectures, compare those traits across datasets, and present a set of generalizable neural network design patterns.
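For readers unfamiliar with the approach, the following is a minimal sketch of evolving architectures with a genetic algorithm, where a genome is assumed to be a list of hidden-layer widths and `fitness` is a user-supplied evaluation (e.g., validation accuracy); the encoding and operators are illustrative and not the paper's exact setup:

```python
import random

def mutate(genome):
    """Randomly widen or narrow one layer."""
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = max(8, g[i] + random.choice([-16, 16]))
    return g

def crossover(a, b):
    """Single-point crossover between two architectures."""
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(fitness, pop_size=20, generations=10):
    pop = [[random.choice([32, 64, 128]) for _ in range(random.randint(2, 5))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by fitness
        parents = pop[:pop_size // 2]         # keep the best half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: prefer roughly three layers averaging 64 units.
print(evolve(lambda g: -abs(len(g) - 3) - abs(sum(g) / len(g) - 64)))
```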
Item
A Decision-Guided Group Package Recommender Based on Multi-Criteria Optimization and Voting
(2016) Mengash, Hanan Abdullah; Brodsky, Alexander
Recommender systems are intended to help users make effective product and service choices, especially over the Internet. They are used in a variety of applications and have proven valuable for predicting the utility or relevance of a particular item and for providing personalized recommendations. State-of-the-art recommender systems focus on atomic (single) products or services and on individual users.
This dissertation considers three ways of extending recommender systems: (1) to make composite (package) rather than atomic recommendations; (2) to use multiple rather than single criteria for recommendations; and, most importantly, (3) to support groups of diverse users or decision makers who might have different, even strongly conflicting, views on the weights of different criteria.

Item
A Digital Media Similarity Measure for Triage of Digital Forensic Evidence
Lim, Myeong Lyel; Jones, James H Jr
As the volume of potential digital evidence increases, digital forensics investigators are challenged to find the best allocation of their limited resources. While automation will continue to partially mitigate this problem, the preliminary question of which media should be examined by human or machine remains largely unsolved. Prior work has established various methods to assess digital media similarity, which may aid in prioritization decisions. Similarity measures may also be used to establish links between media and, by extension, links between the individuals or organizations associated with that media. Existing similarity measures, however, have high computational costs, which can delay identification of digital media warranting immediate attention or render link establishment across large collections of data impractical. In this work, I propose, develop, and validate a methodology for assessing digital media similarity to assist with digital media triage decisions. The application of my work is predicated on the idea that unexamined media is likely to be relevant and interesting to an investigator if it is similar to other media previously determined to be interesting and relevant. My methodology builds on prior work using sector hashing and the Jaccard index similarity measure. I combine these methods in a novel way and demonstrate the accuracy of my method against a test set of hard disk images with known ground truth. My method, called Jaccard Index with Normalized Frequency (JINF), calculates the similarity between two disk images by normalizing the frequency of the distinct sectors. I also developed and tested two extensions to improve performance. The first extension randomly samples sectors from digital media under examination and applies a modified JINF method; I demonstrate that the JINF disk similarity measure remains useful with sampling rates as low as 5%. The second extension takes advantage of parallel processing: the computation is distributed across multiple processors after partitioning the digital media, then the results are combined into an overall similarity measure that preserves the accuracy of the original method on a single processor. Experimental results showed as much as a 51% reduction in processing time. My work goes beyond interesting file and file fragment matching; rather, I assess the overall similarity of digital media to identify systems that might share applications and user content, and hence be related, even if some common files of interest are encrypted, deleted, or otherwise not available.
In addition to triage decisions, digital media similarity may be used to infer links and associations between the disparate entities owning or using the respective digital devices.
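The abstract does not give JINF's exact normalization, but a minimal sketch in its spirit is a weighted (multiset) Jaccard index over fixed-size sector hashes; the 512-byte sector size and the helper names below are assumptions for illustration, not the dissertation's implementation:

```python
import hashlib
from collections import Counter

SECTOR_SIZE = 512  # assumed sector size for this sketch

def sector_hash_counts(image_path):
    """Count occurrences of each distinct sector hash in a disk image."""
    counts = Counter()
    with open(image_path, "rb") as f:
        while sector := f.read(SECTOR_SIZE):
            counts[hashlib.sha256(sector).hexdigest()] += 1
    return counts

def weighted_jaccard(counts_a, counts_b):
    """Multiset Jaccard similarity: shared sector frequency over total."""
    inter = sum(min(counts_a[h], counts_b[h])
                for h in counts_a.keys() & counts_b.keys())
    union = sum((counts_a | counts_b).values())  # elementwise max of counts
    return inter / union if union else 0.0

# similarity = weighted_jaccard(sector_hash_counts("a.img"), sector_hash_counts("b.img"))
```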
Item
A Distributed Digital Body Farm for Dynamic Monitoring of File Decay Patterns on the NTFS Filesystem
(2022) Agada, Omoche Cheche; Jones, James H.
Forensic recovery of previously deleted data can be a time-consuming activity during a digital forensic investigation. As tedious as the procedure is, it may not produce useful results, or any results at all, as the deleted file data decay process may have rendered such data irrecoverable. Proper insight into the rates, patterns, and factors that influence digital artifact decay is necessary to help investigators determine whether attempting a recovery is a wise use of time. A significant amount of research effort has been invested in the study of deleted artifact decay, but knowledge gaps still exist. This work developed, implemented, tested, and applied a tool to collect deleted file decay data, then analyzed that data to determine decay rates and patterns, as well as factors affecting those rates and patterns. The work describes a methodology and the implementation of a distributed digital body farm (DDBF), a suite of applications that use differential analysis to monitor and record patterns of decay as data are erased or overwritten on a secondary storage medium attached to a live system. The patterns are remotely collected from systems belonging to independent users without violating the privacy of such users. The extracted patterns are subsequently analyzed using multiple data models to provide further insight into deleted file decay rates, processes, and influencing factors.

Item
A Dynamic Dialog System Using Semantic Web Technologies
(2014-05) Ababneh, Mohammad; Wijesekera, Duminda
A dialog system, or conversational agent, provides a means for a human to interact with a computer system. Dialog systems use text, voice, and other means to carry out conversations with humans in order to achieve some objective. Most dialog systems are created with specific objectives in mind and consist of preprogrammed conversations. The primary objective of this dissertation is to show that dialogs can be dynamically generated using semantic technologies. I show the feasibility by constructing a dialog system that can control physical entry points and that can withstand attempts to misuse the system by playing back previously recorded conversations. As a solution, my system randomly generates questions with a pre-specified difficulty level and relevance, thereby not repeating conversations. In order to do so, my system uses policies to govern the entrance rules, and Item Response Theory to select questions derived from ontologies relevant to those policies. Furthermore, by using contextual reasoning to derive facts from a chosen context, my dialog system can be directed to generate questions that are randomized within a topic, yet relevant to the task at hand. My system design has been prototyped using a voice interface, and the prototype demonstrates acceptable performance on benchmark data.
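As a rough illustration of Item Response Theory-based question selection, the sketch below uses the standard two-parameter logistic (2PL) model to pick a question near a target difficulty; the item fields, shortlist size, and selection policy are assumptions, not the dissertation's actual design:

```python
import math
import random

def p_correct(ability, difficulty, discrimination=1.0):
    """2PL IRT model: probability that a user of the given ability
    answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

def pick_question(items, ability, target_p=0.5, shortlist=5):
    """Randomize among the items whose success probability is closest to
    the target, yielding varied but level-appropriate questions."""
    ranked = sorted(items,
                    key=lambda it: abs(p_correct(ability, it["difficulty"]) - target_p))
    return random.choice(ranked[:shortlist])

questions = [{"text": f"Q{i}", "difficulty": d}
             for i, d in enumerate([-1.0, 0.0, 0.4, 1.5])]
print(pick_question(questions, ability=0.3))
```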
Item
A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation
(2013-08) Ngan, Chun-Kit; Brodsky, Alexander; Lin, Jessica
Making decisions over multivariate time series is an important topic that has gained significant interest in the past decade.
A time series is a sequence of data points measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain, which domain experts utilize to make vital decisions. Through studying multivariate time series, specialists are able to understand problems and events from different perspectives within particular domains. Identification and detection of significant events over multivariate time series can lead to better decision-making and actionable recommendations.
Item
A Framework and Methodology for Ontology Mediation through Semantic and Syntactic Mapping
(2008-06-09) Muthaiyah, Saravanan
Ontology mediation is the process of establishing a common ground for interoperability between domain ontologies. Ontology mapping is the task of identifying concept and attribute correspondences between ontologies through a matching process. Ontology mediation and mapping enable ontologists to borrow and reuse rich schema definitions from existing domain ontologies that have already been developed by other ontologists. For example, a white wine distributor could maintain a white wine ontology that has only white wine concepts. This distributor may then decide at some point in the future to include other wine classifications in his ontology. Instead of creating red wine or dessert wine concepts in his existing ontology, the distributor could simply borrow these concepts from existing red wine and dessert wine ontologies; for this, ontology mapping becomes necessary. The practice of matching ontology schemas today is labor-intensive. Although semi-automated systems have been introduced, they are based on syntactic matching algorithms, which do not produce reliable results. Thus my thesis statement is that a hybrid approach, the Semantic Relatedness Score (SRS), which combines both semantic and syntactic matching algorithms, provides better results, in terms of greater reliability and precision, than pure syntactic matching algorithms. This research validates that SRS provides higher precision and relevance than the syntactic matching techniques used previously. SRS was developed by rigorously testing thirteen well-established matching algorithms and choosing a composite measure of the best combination of five of those thirteen measures. This thesis also provides an end-to-end approach through a framework, process methodology, and architecture for ontology mediation. Since implementing a fully automated system without any human intervention would not be a realistic goal, a semi-automated approach is undertaken in this thesis. In this approach, an ontologist is assisted by a mapping system that selects the best candidates to be matched from the source and target ontologies using SRS. The goal was not only to reduce the workload of the ontologist but also to provide results that are reliable. A literature survey of current ontology mediation research initiatives, such as InfoSleuth, XMapper, ONION, FOAM, FCA-Merge, KRAFT, CHIMERA, PROMPT, and OBSERVER, among others, revealed that the state of the art in ontology mediation is largely based on syntactic schema matching that supports binary (1:1) schema matches only. A generic solution for schema matching based on SRS is presented in this thesis to overcome these limitations.
A similarity matrix for concept similarity measures is introduced based on several cognitive and quantitative techniques, such as computational linguistics, Latent Semantic Analysis (LSA), distance vectors, and lexical databases (WordNet). The six-part matching algorithm is used to analyze RDF, OWL, and XML schemas and to provide similarity scores, which are then used to populate a similarity matrix. The contribution here is twofold. Firstly, this approach gives a composite similarity metric and also supports complex mappings (1:n, 1:m, m:1, and n:m). Secondly, it provides higher relevance, reliability, and precision. This approach is validated by comparing SRS results with those of human domain experts. Empirical evidence provided in this document clearly shows that the hybrid method results in higher correlation, better relevance, and more reliable results than purely syntactic matching systems. Predefined Semantic Web Rule Language (SWRL) rules are also introduced to concatenate attributes, discover new relations, and enforce assertion box (ABox) instances. Reasoning for consistency, coherence, ontology classification, and inference measures is also introduced. An actual implementation of this framework and process methodology for the mapping of security policy ontologies (SPRO) is provided as a case study. Another case study on achieving interoperability for e-government services with SWRL rules is also presented. Both SRS and SWRL rules are highlighted in this document as complementary measures for the process of semantic bridging. Several tools were used for a proof-of-concept implementation of the methodology, including Protégé, Racer Pro, Rice, and PROMPT.
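As an illustration of combining semantic and syntactic matching, the sketch below blends a WordNet-based relatedness measure with a string-similarity ratio; the 0.6/0.4 weighting and the use of Wu-Palmer similarity are assumptions for the example, not the five measures that make up SRS (requires NLTK with its WordNet corpus downloaded):

```python
from difflib import SequenceMatcher
from nltk.corpus import wordnet as wn  # needs: nltk.download("wordnet")

def syntactic_sim(a, b):
    """String-level similarity between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a, b):
    """Best Wu-Palmer similarity over all WordNet senses of the labels."""
    synsets_a, synsets_b = wn.synsets(a), wn.synsets(b)
    if not synsets_a or not synsets_b:
        return 0.0
    return max((s1.wup_similarity(s2) or 0.0)
               for s1 in synsets_a for s2 in synsets_b)

def hybrid_score(a, b, w_semantic=0.6):
    """Composite of semantic and syntactic label similarity."""
    return w_semantic * semantic_sim(a, b) + (1 - w_semantic) * syntactic_sim(a, b)

print(hybrid_score("wine", "beverage"))
```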
Item
A Framework for Finding Patterns in Mixed and Streaming Data
(2019) Rohan Khade
Pattern mining is an umbrella term for data mining algorithms whose goal is to find relationships between attributes, such as association rules, frequent sets, contrast patterns, and emerging patterns. While pattern mining is a well-researched topic in data mining, with many applications in diverse disciplines, some open problems remain unaddressed by existing work; this dissertation addresses several of them. In many real-world applications, such as manufacturing, data contain both continuous and categorical attributes. In our work, we propose novel methodologies to find patterns in datasets with such mixed attributes. More specifically, our algorithms dynamically discretize continuous attributes in an itemset in a supervised fashion. We propose a top-down recursive approach to find intervals for continuous attributes that result in statistically significant patterns. As opposed to a global discretization scheme, where each attribute is discretized exactly once, our approach allows local discretization; that is, any continuous attribute can be discretized in different ways based on the consequent. This makes it possible to capture different inter-variable relationships. We evaluate our algorithm with several synthetic and real datasets, including the Intel manufacturing data that motivated this research. The experimental results and analysis indicate that our algorithm is capable of finding more meaningful rules for multivariate data than existing algorithms. Also, in many real-world scenarios the data arrive in a streaming manner, and the goal is to find and maintain the most current representation or model of the data.
General challenges for streaming data include data arriving at high intensity, detecting and handling concept drift, and updating the model in reasonable time. Since the data arrive at high speed, we propose a weighted average method using a sliding window to update patterns. Our updating strategy detects concept drift, detects anomalous patterns, and provides a consistent view of the data. To overcome the challenge of handling large volumes of data, we propose scalable solutions using pruning and parallelization. Most pattern mining algorithms rely on pruning to be computationally feasible. This works well if the data fit in main memory; in a parallel environment, however, it may not be possible to share pruning information among nodes. In our work, we propose a way to divide the data among nodes to maximize pruning. We developed algorithms to find contrast patterns for large, streaming, and mixed data using the methods we developed for finding association rules and frequent sets. By incorporating feedback from users, we improve the quality of the patterns discovered and shown to the user.
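To illustrate the windowed weighted-average idea, here is a minimal sketch that tracks a pattern's support over a sliding window of batches, weighting recent batches more heavily and flagging a large jump as possible drift; the window size, decay factor, and drift threshold are illustrative assumptions, not the dissertation's parameters:

```python
from collections import deque

class SlidingSupport:
    """Weighted-average support of a pattern over a sliding window of batches."""
    def __init__(self, window=10, decay=0.8, drift_threshold=0.2):
        self.batches = deque(maxlen=window)   # newest batch at the right
        self.decay = decay
        self.drift_threshold = drift_threshold

    def support(self):
        """Geometrically decayed average: the newest batch has weight 1."""
        if not self.batches:
            return 0.0
        values = list(reversed(self.batches))           # newest first
        weights = [self.decay ** age for age in range(len(values))]
        return sum(w * v for w, v in zip(weights, values)) / sum(weights)

    def update(self, batch_support):
        """Add a batch's support; return True if it looks like concept drift."""
        drifted = bool(self.batches) and \
            abs(batch_support - self.support()) > self.drift_threshold
        self.batches.append(batch_support)
        return drifted

tracker = SlidingSupport()
for s in [0.30, 0.32, 0.31, 0.70]:   # sudden jump in the last batch
    print(s, tracker.update(s))
```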
Item
A Framework for Testing and Evaluating Secure and Verifiable Computational Offloading in Edge Computing
Crowley, Thomas B; Zeng, Kai
Throughout the world, 8.74 billion Internet of Things (IoT) devices have been deployed, ranging from household thermostats to sensors in remote areas. These IoT devices are resource-constrained, not only in computational speed but often in available electrical power. Computational offloading can provide significant power and latency savings, but it often exposes data and systems to security breaches. Researchers have proposed a plethora of protocols to address these security gaps, yet the published works focus solely on theoretical power and latency savings and do not include end-to-end implementations or data; few, if any, of these protocols have been fielded by either academic or commercial projects. This paper presents the results of an end-to-end implementation of an encryption offloading protocol. Latency and power data were collected to enable comparisons between security computations done solely on the IoT device and those partially outsourced to a nearby device. Using the analysis of this data and lessons learned from the end-to-end implementation, the author also created a generic software library for computational offloading to the edge. The new software library enabled the integration of a known secure and verifiable computing technique into the encryption offloading protocol.

Item
A Geospatial Framework to Estimate Depth of Scour under Buildings Due to Storm Surge in Coastal Areas
Borga, Mariamawit; Tanyu, Burak F
Hurricanes and tropical storms represent one of the major hazards in coastal communities. Storm surge generated by these systems' strong winds and low pressure has the potential to bring extensive flooding to coastal areas. In many cases, the damage caused by the storm surge may exceed the damage from the wind, resulting in the total collapse of buildings. In coastal areas, one major source of structural damage is scour, where the soil below a building that serves as its foundation is swept away by the movement of the water. Existing methodologies to forecast hurricane flood damage do not differentiate between the different damage mechanisms (e.g., inundation vs. scour).
Currently, no tools are available that focus predominantly on forecasting scour-related damage for buildings, although such a tool could provide significant advantages for planning and preparing emergency responses. The focus of this study was therefore to develop a methodology to predict possible scour depth due to hurricane storm surge using an automated ArcGIS tool that incorporates the expected hurricane conditions (flow depth, velocity, and flood duration), site-specific building information, and the associated soil types for the foundation. A case study from Monmouth County (NJ), where scour damage from 2012's Hurricane Sandy was recorded after the storm, was used to evaluate the accuracy of the developed forecasting tool and to relate scour depth to potential scour damage. The results indicate that the developed tool provides results relatively consistent with the field observations.

Item
A Hardware Implementation of the SOM for a Network Intrusion Detection System
(2011-08-18) Roeder, Brent W.; Gaj, Kris
This thesis describes the research and development of a hardware implementation of the Self-Organizing Map (SOM) for a network intrusion detection system. As part of the thesis research, Kohonen's SOM algorithm was examined and different hardware implementations of the SOM were surveyed. This survey resulted in the design and implementation of a conventional SOM, which was then modified for use as a detector of anomalous network traffic as part of a network intrusion detection system. The resulting implementation, known as the port agent SOM, is both smaller in area and supports higher data throughput than the conventional SOM, as was quantified through post-place-and-route analysis. This thesis can serve as a tool for developing hardware implementations of the SOM, especially when the intended application is anomaly detection.
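For reference, Kohonen's SOM algorithm (the starting point of the thesis above) can be sketched in a few lines of software: each input is matched to its best-matching unit, and that unit's grid neighborhood is pulled toward the input with a decaying learning rate and radius. This is a generic software sketch with assumed grid size and schedules, not the port agent SOM hardware design:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal Kohonen SOM trainer; returns the trained weight grid."""
    rng = np.random.default_rng(0)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1)
    steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in data:
            frac = 1.0 - t / steps
            lr, sigma = lr0 * frac, sigma0 * frac + 1e-3
            # Best-matching unit: grid cell whose weight vector is closest to x.
            bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)), grid)
            # Gaussian neighborhood around the BMU on the grid.
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            t += 1
    return weights

som = train_som(np.random.default_rng(1).random((200, 4)))
print(som.shape)  # (8, 8, 4)
```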