College of Engineering and Computing

Permanent URI for this collection


Recent Submissions

Now showing 1 - 20 of 606
  • Item
    A Systematic and Comparative Study in Deep Learning Approaches in Extraocular Muscle Segmentation and Analysis in Orbit Magnetic Resonance Images
    (2023-08) Qureshi, Amad Aamir; Wei, Qi
Strabismus is an ocular condition characterized by binocular misalignment, which impacts about 5% of the global population. It can cause double vision, reduced vision, and impaired quality of life. Accurate diagnosis and treatment planning often benefit from anatomical evaluation of the extraocular muscles (EOMs), which can be obtained through imaging modalities such as magnetic resonance imaging (MRI). Such image-based examination requires segmenting the ocular structures from images, a labor- and time-intensive task that is subject to error when done manually. Deep learning (DL)-based segmentation has shown promise for outlining anatomical structures automatically and objectively. We performed three sets of experiments on EOM segmentation via DL methods. Furthermore, we analyzed the performance of the deep learning methods through F-measure-based metrics, intersection over union (IoU) and the Dice coefficient, and estimation of the EOM centroid (centroid offset). We first investigated the performance of U-Net, U-NeXt, DeepLabV3+, and ConResNet in multi-class pixel-based segmentation of the EOMs from ocular MRI taken in the quasi-coronal plane. Based on the performance evaluation (visual and the quantitative metrics mentioned), the U-Net model achieved the highest overall segmentation accuracy and the lowest centroid offset. Segmentation accuracy varied across spatially different image planes relative to the middle slice (optic nerve junction point) in the MRI stack. In the second set of experiments, we compared the performance of the U-Net model with its variants, U-NeXt, Attention U-Net, and FD-UNet, and subjected the prediction outputs to the same evaluation as before, with U-Net achieving the best performance. We also explored methods to improve model performance, particularly data augmentation and enhancement, where methods such as Adaptive Gamma Correction and CLAHE enhancement were used with the U-Net model.
No significant difference was observed when CLAHE, Adaptive Gamma Correction, and a combined dataset of unenhanced, CLAHE, and adaptive-gamma-corrected images were tested against unenhanced data; however, these enhancement approaches did result in better quantitative performance than the standard augmentation technique. Our study provides insights into the factors that impact the accuracy of deep learning models in segmenting the EOMs, such as spatial slice location, image quality, and contrast, and demonstrates the potential of these models to translate into 3D space for potential diagnosis and treatment planning for patients with strabismus and other ocular conditions.
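As a generic illustration of the evaluation metrics named in this abstract (not code from the thesis), IoU and the Dice coefficient over flattened binary segmentation masks can be sketched as follows; `pred` and `truth` are hypothetical 0/1 lists:

```python
def iou(pred, truth):
    """Intersection over union of two flattened binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0
```

Both metrics reward overlap, but Dice weights the intersection twice, so it is more forgiving of small masks than IoU.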
  • Item
    Finite Element Modeling of Reinforced Concrete Columns Subjected to Air and Underwater Explosions
    Abyu, Getu Zewdie; Urgessa, Girum
    Ever since the tragic events of the 9/11 attacks in New York, global infrastructures have suffered significant damage caused by acts of terrorism, military strikes, and accidental explosions. Coastal regions and critical infrastructure, including bridges, face a significant threat from maritime terrorism. Furthermore, intentional car bomb explosions in acts of terrorism and military assaults also pose substantial risks to the structural integrity of bridges. Among the various components comprising a bridge structure, bridge piers play a crucial role in providing vertical support. Hence, it is crucial to study the structural response of reinforced concrete (RC) columns under blast loading. This study involved the development of two comprehensive numerical models, using LS-DYNA software, to analyze the air blast and underwater explosion (UNDEX) responses of RC columns. The validation process entailed comparing the simulation results with experimental data obtained from previous studies by Yuan et al. (2017), Yang et al. (2019), and Zhuang et al. (2020). Both numerical models exhibited reasonably good agreement with the experimental findings, demonstrating their reliability in replicating real-world air blast and UNDEX scenarios. With the numerically calibrated and verified UNDEX model, a parametric study was conducted to examine the effects of blast loads from TNT explosive charges on RC columns. The study considered various parameters, including stand-off distance, charge weight, and water depth. Nonlinear finite element analysis using LS-DYNA was performed, investigating a total of 60 cases. The simulation results provided valuable insights and findings regarding the behavior of RC columns under different air blast and UNDEX loading scenarios. This study is particularly pioneering in its investigation of RC columns subjected to partially submerged explosions. 
Additionally, the response of RC columns to both contact and non-contact air blast and UNDEX explosions was investigated.
  • Item
    Optimization of the Placement of the Ultrasound Scanlines on the Forearm for an Upper Limb Prosthesis
    De Marzi, Laura; Sikdar, Siddhartha
Sonomyography is an emerging technique that uses ultrasound to detect muscle deformation and is being explored as a real-time alternative to surface electromyography for deriving control signals from functional activity. Many groups have demonstrated the feasibility of using commercial ultrasound systems to control upper limb prostheses; however, these systems are bulky and not optimized for wearable use. In this study, a novel 4-channel ultrasound system with miniaturized electronics optimized for forearm applications was used. While previous work has demonstrated that data from 4 channels may be sufficient to classify multiple grasps, the performance may depend on the anatomical placement of the individual transducers on the forearm. In this study, we evaluated the effects of transducer placement on classification performance and explored different measurements to determine the optimal anatomical region for placement. These metrics consisted of Mutual Information (MI), Structural Similarity Index (SSIM), and Sum of Squared Distances (SSD), which quantify the amount of information derived from each ultrasound transducer. Ultrasound M-mode images of different hand/wrist gestures were collected from 4 subjects with ultrasound transducers placed at three different positions on the forearm. The first position was at the flexor muscles, the second observed the extensor muscles, and the third was a custom placement of the transducers targeting specific muscle compartments and regions on the forearm. MI/SSIM/SSD were used to calculate how much information each ultrasound transducer contained, and the values were correlated with the performance of a linear discriminant analysis (LDA) classifier in differentiating between the gestures.
The results show that the LDA classifier was able to discriminate between the different hand gestures with an average accuracy of 76.4 ± 4.04% for the extensor muscle position, and 97.5 ± 1.72% and 99.4 ± 0.61% for the flexor muscle and custom targeted positions, respectively. No correlation was found between MI and classification performance. A strong, statistically significant correlation was found between SSD and SSIM values and classification performance (p-value < 0.001). This study demonstrates the feasibility of using 4-channel single-element M-mode ultrasound transducers to recognize complex hand gestures and emphasizes the importance of targeting specific muscle compartments and regions on the forearm to obtain high classification accuracy.
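Of the metrics above, SSD is the simplest to state precisely. A minimal sketch (not the thesis's code; frame contents are hypothetical) of SSD between M-mode frames, and an average across consecutive frames as a rough proxy for how much tissue motion a transducer observes, might look like:

```python
def ssd(frame_a, frame_b):
    """Sum of squared differences between two equal-length frames."""
    return sum((a - b) ** 2 for a, b in zip(frame_a, frame_b))

def mean_pairwise_ssd(frames):
    """Average SSD over consecutive frames: larger values suggest more
    frame-to-frame change, i.e. more gesture-related information."""
    diffs = [ssd(a, b) for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)
```

A transducer whose frame sequence yields a higher `mean_pairwise_ssd` would, under this proxy, be observing more muscle deformation.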
  • Item
    Accuracy Analysis of Photogrammetrically Derived Point Clouds for Partially Submerged Models
    Stoiber, Paul; Lattanzi, David
There are many marine applications for 3D reconstruction, ranging from the analysis of coastal erosion and bathymetric mapping using LiDAR to assisting in the structural health assessment of ships using photogrammetrically derived 3D models. As the quantity of data in all sectors of the global economy continues to grow, the historic methods of accomplishing activities such as structural inspections of ships must be succeeded by methods that cost less, save time, and provide for a safer work environment. The benefits of incorporating photogrammetrically derived 3D models can then clearly be seen when performing inspections on ships, with the cost of Unmanned Aerial Systems (UAS), Unmanned Underwater Vehicles (UUV), and mounted camera systems replacing the cost of mobilizing equipment, reducing the time to complete a task, and reducing the risks of in-person inspection. This study aimed to determine how accuracy was affected by merging two sets of photogrammetrically derived point cloud data that were not collected simultaneously, one above and one below the water surface. Due to phenomena such as refraction (Snell’s law) and barrel distortion, the image data and resulting 3D model can be less accurate to the real-world dimensions of a model in underwater sections than in above-water sections. This problem space has been harder to evaluate in prior work because the typical subjects of partially submerged 3D models, such as ships or caves, are large in scale, resulting in non-exhaustive attempts to establish reference and control in complex physical environments. To evaluate the impacts of these distortions, a new benchmark 3D model representative of a ship’s structural hull was designed, fabricated, and tested. This benchmark structure incorporated a uniform coordinate system based on target points along the surface of the hull shape, providing a basis for universal 3D reconstruction error.
The impact of partial submersion on reconstruction accuracy was determined by comparing a fused model derived from a partially submerged benchmark model to a ground truth representation of the unsubmerged benchmark model. The results show that the absolute distance between the reference and fused models was less than 2 millimeters on average, but the maximum distance between the two models reached approximately 34 millimeters because of distortion caused by the water’s surface during 3D model generation. Future efforts should include the application of a benchmark uniform coordinate system on physical features of a greater scale. Additionally, the development of 3D model survey quality standards independent of a geospatial reference system is a critical future work opportunity. This would allow researchers to assess how the level of accuracy captured during one surveying effort compares to the level of accuracy in a subsequent survey.
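The average and maximum distances reported above are cloud-to-cloud comparisons. A minimal brute-force sketch of that comparison (an illustration, not the study's method; the point tuples are hypothetical) computes, for each fused point, the distance to its nearest reference point:

```python
import math

def cloud_to_cloud(reference, fused):
    """Mean and max nearest-neighbour distance from each fused point to
    the reference cloud. Brute force: fine for small benchmark clouds,
    but real point clouds would need a spatial index (e.g. a k-d tree)."""
    dists = [min(math.dist(p, q) for q in reference) for p in fused]
    return sum(dists) / len(dists), max(dists)
```

A low mean with a high maximum, as in the results above, indicates localized distortion (here, near the water surface) rather than a global misalignment.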
  • Item
    Large v Small Organization Software Development: Are Software Development Best Practices One-Size Fits All?
    Longo, Jeffrey; LaToza, Thomas D
This research examines the differences in the way large and small organizations approach software development to better understand whether small organizations should attempt to model large organizations' practices. Practices examined include software development methodology, development tooling, how organizations deal with legacy software, documentation practices, testing practices, and how organizations differ in their code and development practice quality metrics. Mixed-methods research was applied, starting with a qualitative semi-structured survey of 11 participants followed by a broader quantitative survey. Our analysis showed differences that could be statistically tied to organization size in 4 of the 6 practices researched. Semi-structured interview responses did not always match results from the survey, indicating there may be common misconceptions about how large and small organizations actually operate. In one significant finding, a statistical correlation was discovered between large organization size, worker specialization, and the use of formal test plans on the one hand, and a low change failure rate on the other.
  • Item
    Reconfigurable FET Approximate Computing-based Accelerator for Deep Learning Applications
    Saravanan, Raghul; PD, Sai Manoj
Artificial Intelligence (AI) has surged in the last few years, facilitating revolutionary state-of-the-art solutions in healthcare, banking, data & business analytics, transportation, retail, and much more. The tremendous increase in data to deliver AI solutions has led to the need for ML acceleration, enabling improved performance, efficient real-time processing, scalability, and energy and cost efficiency. In recent years, there has been active research on ML acceleration using FPGAs, GPUs, and ASICs. ASIC-based ML accelerators offer superior performance, reduced latency, and greater energy and cost efficiency compared to their counterparts. However, traditional CMOS-based ASIC accelerators lack flexibility, leading to reconfigurability overheads. The hardware’s reconfigurability enables multiple functionalities per computational unit with less resource consumption. Emerging transistor technology devices such as FinFETs, RFETs, and Memristors are adopted in designing accelerators to facilitate reconfigurability at the transistor level. Furthermore, some of these devices, such as Memristors, also support storage along with computations. Among multiple emerging devices, recent research on reconfigurable nanotechnologies such as Silicon Nanowire Reconfigurable Field Effect Transistors (SiNW RFETs) identifies a promising technology that not only facilitates lower power consumption but also supports multiple functionalities per computational unit through reconfigurability. These features motivate us to design a novel state-of-the-art energy-efficient hardware accelerator for implementing memory-intensive applications, including convolutional neural networks (CNNs) and deep neural networks (DNNs). To accelerate the computations, we design Multiply-and-Accumulate (MAC) units, for which we employ silicon nanowire reconfigurable FETs (RFETs).
The use of RFETs leads to nearly 70% power reduction compared to the traditional CMOS implementation, as well as reduced latency in performing the computations. Further, to optimize the overheads and improve memory efficiency, we introduce a novel approximation technique for RFETs. The RFET-based approximate adders lead to reduced power, area, and delay while having a minimal impact on the accuracy of the DNN/CNN. In addition, we carry out a detailed study of varied combinations of architectures involving CMOS, RFETs, accurate adders, and approximate adders to demonstrate the benefits of the proposed RFET-based approximate accelerator. The proposed RFET-based accelerator achieves an accuracy of 94% on the MNIST dataset with 93% and 73% reductions in the area, power, and delay metrics, respectively, compared to state-of-the-art hardware accelerator architectures.
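The thesis's approximate adders are RFET circuit designs; as a generic software illustration of the idea of approximate addition (a lower-part-OR adder is assumed here, which is not necessarily the circuit used in the thesis), the low `k` bits can be approximated with a carry-free OR while the high bits are added exactly:

```python
def loa_add(a, b, k):
    """Lower-part-OR approximate adder: the low k bits are OR-ed
    (dropping carries, hence inexact); the remaining high bits are
    added exactly. k = 0 degenerates to exact addition."""
    mask = (1 << k) - 1
    lower = (a | b) & mask               # carry-free approximation
    upper = ((a >> k) + (b >> k)) << k   # exact on the high part
    return upper | lower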
  • Item
    I ❤️ IQ: A Shader Graphing Calculator for Signed Distance Functions (SDFs)
    Kriel, Henro; Gingold, Yotam
Shaders are useful in real-time graphics and high-performance computing applications as they specify computation to be run in parallel on the GPU. In the community, these shaders are often described in mathematical syntax, and translating that math to executable code can be tedious and error-prone. We present I ❤️ IQ, a graphing calculator of sorts for signed distance functions (SDFs) and materials that expedites the process of prototyping shaders. It provides a math-like syntax using I ❤️ LA and comes with a built-in raycasting architecture, removing the overhead of translation and implementation details. I ❤️ IQ is designed to be responsive and interactive. The system automatically detects free parameters and lets users tweak them using slider controls, allowing for seamless manipulation of the scene in real time. The code generated by I ❤️ IQ can then be exported to third-party programs and used outside of the I ❤️ IQ environment. The I ❤️ IQ repository can be found at
  • Item
    Machine Learning Automation for Virtual Reality
    Murat, Erdem; Yu, Lap-Fai
Virtual Reality (VR) game development techniques are relatively new compared to those for conventional 2-dimensional (2D) content. Although significant research has been conducted in this new field, more work is still needed, as some prevalent issues remain. A significant issue reported by some users is that the perceived difficulty of a game can vary drastically between users. This is because the nature of VR gives more autonomy to users and lets them play games differently than the developer might have intended. To address this, I have proposed a system that tracks user difficulty perception as various game parameters that affect difficulty are manipulated. The collected user data is used to train a machine learning regressor to predict the perceived difficulty of different game levels. The initial findings show a 53% prediction error. However, further analysis has shown that the predictions are realistic and adequate. Anomalies in prediction are explainable, and prediction error can be reduced to 26% through the removal of some outliers. Limitations of this work, like the limited dataset size, are also addressed for future work to improve accuracy and performance. This thesis was primarily written with future work in mind, as the addressed problem is complex and requires further examination before a final and applicable model can be produced. The final model proposed uses MCMC optimization and is aimed at automating the optimization of game parameters to tailor experiences to an intended difficulty and/or emotions. Thus, the main contribution of this paper is that it addresses an insufficiently covered issue, producing a key approach and proposing detailed suggestions for future research.
  • Item
    Soil Water Retention Behavior of Unsaturated Bentonite Polymer Composite Geosynthetic Clay Liners
    Benavides, Monica Paulina; Tian, Kuo
When geosynthetic clay liners (GCLs) are placed in the subsoil to be used as hydraulic barriers in waste containment facilities such as landfills, the in-field hydration driven by differences in suction is an important component of the long-term hydraulic performance of GCLs. The hydration process is generally not simple due to several factors, such as soil properties and the environmental and operating conditions of the landfill. Liquids such as water or leachate can migrate into the GCLs, and the moisture content can increase and decrease along with changes in suction. To provide further insight into the quantification of water movement and distribution in unsaturated soil, analysis of the soil water retention behavior of bentonite polymer composite (BPC) GCLs is essential. To simulate the potential hydration and dehydration processes that GCLs go through in the field, moisture-suction relationships were examined along both wetting and drying paths. Five GCL variants hydrated with three different solutions were analyzed. Two different suction measurement methods were carried out with the purpose of observing how and why the characteristics of each material influence water retention behavior. Both measurement methods were used to form wetting and drying paths in a suction range between 50 and 300 MPa and are presented with their corresponding gravimetric water content. The obtained water retention curves of the different materials tested showed significant variation for suction values below 20 MPa in the wetting path, while for the drying path the variation in suction occurs for values below 10 MPa. This difference indicates that the polymer loading has a significant influence on the swelling behavior of the GCLs. In addition, to relate soil water content and suction, the individual moisture-suction relationships for wet and dry conditions were analyzed mathematically using the sigmoidal van Genuchten (1980) and Lu (2016) functions.
The fitting parameters, obtained from the empirical data for the wetting and drying paths, can offer insight into features of the soil water retention curve such as the air-entry, water-entry, and air-expulsion values, the pore-size distribution, and the residual suction.
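The van Genuchten (1980) retention function referred to above has a standard closed form. A minimal sketch of evaluating it (the parameter values in the test are hypothetical, chosen only for illustration, not fitted values from this thesis):

```python
def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Water content as a function of suction psi (van Genuchten, 1980):
    theta(psi) = theta_r + (theta_s - theta_r) / (1 + (alpha*psi)^n)^m,
    with m = 1 - 1/n. theta_r/theta_s are residual/saturated contents,
    alpha is related to the inverse of the air-entry suction."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m
```

Fitting `theta_r`, `theta_s`, `alpha`, and `n` to measured moisture-suction pairs (e.g. by nonlinear least squares) yields the curve parameters discussed above.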
  • Item
    Designing Agent-based Simulation to Assess the Impact of Coordination Schemes on Infrastructure Networks Resilience
    Dsouza, Mark Herman; Mohebbi, Shima
Critical infrastructure systems are governed by several sectors working together to maintain social, economic, and environmental well-being. Their cyber-physical interdependencies, on the other hand, influence their performance and resilience to routine failures and extreme events. To balance investment and restoration decisions before, during, and after disruptive events, different mathematical formulations and solutions, mainly focused on a centralized view, have been presented in the literature. While necessary and useful, not all physical and dynamic characteristics of infrastructure systems and their decision makers can be captured via mathematical models. In this study, we take a different approach and utilize agent-based modeling to simulate city-scale interdependent infrastructure networks as a complex adaptive system. We first model each infrastructure as a weighted graph with relevant geospatial attributes. Decision makers (e.g., maintenance crews) for each infrastructure sector are represented by intelligent agents. We then define three information and coordination structures among agents: no communication, leader-follower, and decentralized coalitions. The framework is applied to the interdependent water distribution and road networks in the City of Tampa, FL. We simulate different magnitudes of cyber-physical failures, evaluate resource allocation decisions made by agents under each coordination structure, and quantify the aggregated resilience. Specifically, we develop a rank aggregation performance measure to evaluate restoration effectiveness for each scenario. This research helps municipalities quantify the impact of their collective decision making and identify the best coordination structures when interdependencies are modeled in infrastructure systems.
  • Item
    Improving IoT Connection Resiliency in Wireless Networks
    Nguyen, Hieu Thanh; Jabbari, Bijan
Internet of Things (IoT) wireless networks are expected to connect billions of IoT devices in the coming era of modern technologies, as IoT applications have ever-broader applicability in various fields, including services, health, agriculture, and so on. However, along with the significant benefits, IoT requires low latency and high resilience in wireless communications in order to maintain a high quality of service. IoT networks should constantly maintain a high level of resilience in wireless communication in order to sustain the increasing number of new IoT devices connected to the networks. Since IoT networks consist of thousands of devices sharing the frequency spectrum in a given local area, they also face the problem of wireless interference, which results in link degradation and low network connectivity. In this thesis, we propose two technical solutions to improve the resilience of communications in IoT networks by suppressing wireless interference. We develop system models that represent the interference in IoT network access and draw on elements of graph theory for improving the resilience of connections. Our system models include node distribution following a Poisson point process, the wireless network as a graph, interference modeling in IoT network access, node criticality, and elasticity theory. Then, we utilize these models in our proposed solutions for improving the resilience of wireless communications. To avoid channel interference, we implement an algorithm based on graph theory to efficiently allocate the channels used by IoT devices in the network. We observe that the number of colors labeled for each node can be minimized by eliminating several less important nodes, but this is a trade-off between color reduction and network connectivity.
Also, we propose an additional solution using a deep deterministic policy gradient (DDPG) approach based on graph coloring to determine the minimum number of colors used. Our simulation results indicate that the gain from eliminating the least important nodes is color reduction, though, depending on the particular wireless network, the solution achieves this with high probability. Another proposed solution is to determine the chromatic number by using deep reinforcement learning-based channel allocation. Although several nodes in the network may receive the same colors, which leads to invalid/disconnected links, the number of colors used by the DDPG algorithm is always smaller than that of the greedy coloring algorithm.
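The greedy coloring baseline mentioned above has a standard form. A minimal sketch (an illustration of the general technique, not the thesis's implementation): nodes are IoT devices, an edge means two devices would interfere on the same channel, and colors stand for channel indices:

```python
def greedy_coloring(adj):
    """Greedy graph coloring, highest-degree-first. adj maps each node
    to its list of neighbours; returns a node -> color (channel) map
    in which no two adjacent nodes share a color."""
    colors = {}
    for node in sorted(adj, key=lambda v: -len(adj[v])):
        used = {colors[nb] for nb in adj[node] if nb in colors}
        c = 0
        while c in used:      # smallest color not used by a neighbour
            c += 1
        colors[node] = c
    return colors
```

Removing a high-degree ("important") node before coloring can shrink the number of colors needed, which is exactly the color-reduction/connectivity trade-off the abstract describes.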
  • Item
    An Agent Based Distributed Control for Networked SIR Epidemics
    Mubarak, Mohammad; Nowzari, Cameron
This paper revisits a longstanding problem of interest concerning the distributed control of an epidemic process on human contact networks. Due to the stochastic nature and combinatorial complexity of the problem, finding optimal policies is intractable even for small networks. Even if a solution could be found efficiently enough, a potentially larger problem is that such policies are notoriously brittle when confronted with small disturbances or uncooperative agents in the network. Unlike the vast majority of related works in this area, we circumvent the goal of directly solving the intractable problem and instead seek simple control strategies to address it. More specifically, based on the information locally available to a particular person, how should that person make use of this information to better protect themselves? How can that person socialize as much as possible while ensuring some desired level of safety? More formally, the solution to this problem requires a rigorous understanding of the trade-off between socializing with potentially infected individuals and the increased risk of infection. We set this up as a finite-time optimal control problem using a well-known exact Markov chain compartmental Susceptible-Infected-Removed (SIR) model. Unfortunately, the problem setup is intractable and requires a relaxation. Leveraging results from the literature, we employ a commonly used mean-field approximation (MFA) technique to relax the problem. However, the main contribution distinguishing our work from the myriad works that study networked MFA models is that we verify the effectiveness of our solutions on the original stochastic problem, rather than the relaxed problem. We find the optimal solution of the problem to be a form of threshold on the chance of infection of that person's neighbors. Simulations illustrate our results.
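A minimal discrete-time sketch of the mean-field approximation for a networked SIR process, assuming hypothetical per-contact infection probability `beta` and recovery probability `delta` (an illustration of the general MFA technique, not the paper's exact model):

```python
def mfa_sir_step(p_s, p_i, adj, beta, delta):
    """One discrete-time mean-field update on a contact graph. p_s/p_i
    map each node to its probability of being susceptible/infected;
    adj maps each node to its neighbours. Infection pressure on a node
    is accumulated independently from each infected neighbour."""
    new_s, new_i = {}, {}
    for v in adj:
        # probability of avoiding infection from every neighbour this step
        avoid = 1.0
        for u in adj[v]:
            avoid *= 1.0 - beta * p_i[u]
        new_s[v] = p_s[v] * avoid
        new_i[v] = p_i[v] * (1.0 - delta) + p_s[v] * (1.0 - avoid)
    return new_s, new_i
```

A threshold policy of the kind found above would, for example, drop the edge to a neighbor `u` whenever `p_i[u]` exceeds some safety level before taking the step.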
  • Item
    Automated Generation of Geometric Eye Models
    Mutawak, Bassam; Wei, Qi
Visualization of the ocular motor system is an innovative technique for examining the underlying causes of different ocular disorders. Creating three-dimensional (3D) ocular models, including the extraocular muscles and other ocular structures, is one method for ocular system visualization. Effective examination of the different ocular disorders necessitates that these 3D models be developed in a patient-specific manner, using medical imaging techniques to image a patient's ocular structures and laborious post-processing to generate the three-dimensional models. Biomechanical simulators employ these patient-specific models to simulate eye movements such as fixations and saccades in normal or abnormal conditions. Such realistic computational simulation can be helpful for quantitatively studying factors contributing to eye movement disorders and effective surgical treatment procedures. Current patient-specific ocular modeling, however, is limited by the lengthy initial static model creation process. Furthermore, a recognized pipeline to create these static models does not exist. In this thesis, we introduce an automated pipeline to generate patient-specific 3D ocular models that streamlines and unifies the multi-step model creation process. Several solutions are compared at each step to optimize quantitative accuracy against real-world experimental results. The pipeline is implemented as a plugin in Autodesk Maya, and seven subject datasets are used to demonstrate modeling fitness. Model creation time is drastically reduced, enabling quicker turnaround of ocular visualization and allowing a broad set of ocular models to be leveraged in the development of biomechanical simulators.
  • Item
    Oracles for Privacy-Preserving Machine Learning
    Do, Minh Quan; Baldimtsi, Foteini
    Currently, the process of deploying machine learning models in production can leak information about the model such as model parameters. This leakage of information is problematic because it opens the door to a plethora of attacks that can compromise the privacy of the data used to train the model. In this thesis, we will introduce definitions for new primitives that are specifically designed for deploying machine learning models into production in such a way that guarantees the privacy of the model’s parameters and the underlying dataset. We will also provide definitions for security, propose a scheme for deploying a model into production, and informally argue the security of our scheme.
  • Item
    Security Through Frequency Diversity in The 5G NR Standard
    Weitz, Joshua D; Mark, Brian
This thesis explores the use of pseudo-random frequency hopping for added security in the 5G New Radio specification. Frequency hopping makes it more difficult for an attacker to intercept, detect, or jam a wireless connection in a 5G network. Current 5G resource allocation options are examined, and the state-of-the-art literature regarding Orthogonal Frequency Division Multiple Access (OFDMA) frequency hopping under various channel conditions is reviewed. Computer simulations were conducted to compare the throughput performance of the frequency hopping technique against static resource allocation. It is shown that under certain channel conditions and power allocation schemes, the aggregate user throughput under frequency hopping is within 95% of that of static allocation (though lower under more realistic power allocations), while the probabilities of intercept and detection are significantly reduced.
  • Item
    Random Matrix Theory Models for Predicting Dominant Mode Rejection Beamformer Performance
    (2022) Hulbert, Christopher; Wage, Kathleen E
Adaptive beamformers (ABFs) use a spatial sample covariance matrix (SCM) that is estimated from data snapshots, i.e., temporal samples from each sensor, to mitigate directional interference and attenuate uncorrelated noise. Thus, ABFs improve signal-to-interference-plus-noise ratio (SINR), an optimal criterion for many detection and estimation algorithms, over that of a single sensor and the conventional beamformer. SINR is a function of white noise gain (WNG), the beamformer’s array gain against spatial white noise, and interference leakage (IL), the interference power in the beamformer output. Dominant mode rejection (DMR) is a variant of the classic minimum variance distortionless response (MVDR) algorithm that replaces the smallest SCM eigenvalues by their average. By not inverting the smallest eigenvalues, DMR achieves a higher WNG than MVDR. Moreover, DMR still suppresses the loud interferers, as the largest eigenvalues are unmodified, yielding a higher SINR than MVDR. This dissertation derives analytical models of WNG and IL for the DMR ABF. The model predictions are shown to match the sample mean, computed via Monte Carlo simulations, for a broad range of scenarios, including with and without the signal of interest (SOI) in the training data. Both cases for the SOI in the training data are analyzed when the number of interferers is known and when the number of interferers is overestimated. The models leverage a new random matrix theory (RMT) spiked covariance model that is derived in this dissertation. The new RMT model more accurately predicts the SCM eigenspectrum, and hence the ABF metrics, when the number of snapshots is on the same order as or less than the dimension and there are a large number of interferers relative to the SCM dimension.
Assuming the SOI is not in the training data and the number of loud interferers is known, the analytical models show that DMR achieves an average SINR loss of -3 dB when the number of snapshots is approximately twice the number of interferers, a result analogous to the Reed-Mallett-Brennan rule for MVDR.
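The eigenvalue-averaging step that distinguishes DMR from MVDR can be sketched in a few lines of NumPy. This is a generic illustration of the technique described above, not code from the dissertation; the function name and arguments are hypothetical.

```python
import numpy as np

def dmr_weights(scm: np.ndarray, steer: np.ndarray, num_dominant: int) -> np.ndarray:
    """Dominant mode rejection: keep the num_dominant largest eigenvalues
    of the sample covariance matrix, replace the remaining ones by their
    average, then form MVDR-style weights from the modified covariance."""
    eigvals, eigvecs = np.linalg.eigh(scm)   # ascending eigenvalues
    eigvals = eigvals[::-1].copy()           # sort descending
    eigvecs = eigvecs[:, ::-1]
    # Average the small (noise-subspace) eigenvalues instead of inverting them.
    eigvals[num_dominant:] = eigvals[num_dominant:].mean()
    # Inverse of the modified covariance via its eigendecomposition.
    r_inv = (eigvecs / eigvals) @ eigvecs.conj().T
    w = r_inv @ steer
    # Normalize for a distortionless response toward the steering vector.
    return w / (steer.conj().T @ w)
```

Because the large (interferer) eigenvalues are untouched, the weights still place nulls on loud interferers, while the flattened noise floor avoids the WNG loss caused by inverting poorly estimated small eigenvalues.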
  • Item
    Towards Designing Reliable and Efficient Millimeter-Wave Wireless LANs
    (2022) Zhang, Ding; Pathak, Parth
    With the increasing amount of mobile data and the demand for high data rates, current 2.4GHz/5GHz wireless local area networks (WLANs) are facing the problem of limited capacity. Millimeter-wave (mmWave) networks with gigahertz of channel bandwidth can provide multi-gigabit per second data rates, making it possible to support novel applications such as augmented/virtual reality (AR/VR), mobile offloading, high-resolution video streaming, etc. However, despite this potential, the directional nature of mmWave links makes them prone to blockage and sensitive to mobility. The dense deployment of Access Points (APs) brings unpredictable interference and non-negligible beamforming overhead. Moreover, current mmWave WLANs are application agnostic, resulting in inefficient usage of resources in supporting AR/VR-type bandwidth-intensive applications. In this dissertation, I propose novel solutions to four key challenges, aiming to build practical, reliable, and efficient mmWave WLANs. Firstly, I explore a proactive blockage mitigation technique that utilizes joint transmissions from multiple APs to provide blockage resilience. Secondly, I characterize interference in dense mmWave WLANs and implement three interference mitigation techniques using commercial off-the-shelf (COTS) devices. Thirdly, focusing on reducing the beamforming overhead, I propose a "networked beamforming" model to reduce the number of APs that conduct beamforming in dense mmWave WLANs, resulting in significant improvements in network throughput. Lastly, I design novel solutions for blockage prediction and prefetching based on users' six-degree-of-freedom (6DoF) position and orientation information to facilitate high-quality volumetric video streaming over mmWave WLANs.
  • Item
    Towards Flood Resilience in Large Metropolitan Areas: Real-time Flood Forecast and Planning for Climate Uncertainty
    (2022) de Almeida Coelho, Gustavo; Ferreira, Celso M
    Urban floods generated by heavy, short-duration rainfall are a major concern in urban areas due to their potential socioeconomic impacts and threat to life. Moreover, such threats are expected to intensify with continuing trends of population growth, urbanization, and extreme meteorological events. In this dissertation, two main goals are defined to support flood resilience in metropolitan areas on different temporal scales: (1) over a longer-term planning horizon (years to decades), to assess how extreme precipitation is expected to change in the future due to climate change and to incorporate these changes into flood engineering design; (2) over the short term (hours to days), to explore the predictive capability of real-time flood forecast systems that integrate meteorological variables from state-of-the-art numerical weather prediction into urban-scale hydrodynamic models. The long-term analysis of the most recent large-ensemble climate projections revealed that the current engineering design standard is expected to become obsolete by 2080 in most of the United States, and a novel method was presented for incorporating precipitation changes into flood engineering design at a continental scale. This method can support water resources engineers and decision-makers in planning for climate uncertainty. For the short term, the performance of two distinct real-time flood forecast systems for small urban and suburban watersheds (<200 km²) was evaluated for multiple flood events between 2020 and 2021. The first system, based on a fully distributed hydrological model, and the second, consisting of a two-dimensional hydrodynamic model, enabled real-time flood forecasts with 36- and 48-hour lead times, respectively. This research advances the scientific knowledge of urban flood modeling by highlighting key insights into current short-range real-time flood predictability that can guide preparedness and response to approaching flood events.
  • Item
    A Statistical Approach to Point Cloud Analysis for Infrastructure Assessment
    (2022) Graves, William; Lattanzi, David
    Engineers implement structural health monitoring and nondestructive evaluation techniques to assess the status and safety of civil infrastructure. In addition to the financial cost, the logistical burden of traditional techniques includes a large suite of hardware and sensors for data collection, and this data is typically carried forward into computer simulations using finite element models to fully understand the behavior of the structure. However, with the continued emergence of computer vision techniques, engineers are exploring new methods of data collection to support infrastructure assessment. These techniques enable the collection of point cloud data, a compilation of spatial data points (typically defined in the Cartesian coordinate system) located on the surface of the target structure, through sensor packages that can be as simple as a single digital camera. Point cloud data, which can also be collected with laser scanners, is unique in that it captures the full 3D geometry and deformations of a structure, whereas other data typically provide information only at individual sensor locations. However, point clouds are usually unstructured and noisy, requiring statistical techniques for their analysis. As such, this research investigates a new pathway for evaluating and assessing civil infrastructure using point cloud data in a manner that provides an accurate and organized representation of surface deformations, gives explicit quantification of measurement uncertainty, and uses the data to update computer simulation models. A distinctive feature of this approach is that the performance metrics of the target structure are provided as a range of values reflecting the uncertainty in the collected data. 
Such a pathway is significant not only because it demonstrates the implementation of data that is collected with a lower cost and lighter logistical burden than traditional methods, but also because it provides information that allows decision-makers to quantify risk when determining future steps for infrastructure maintenance or remediation. This research pathway is divided into three tasks: point cloud registration and deformation measurement, surface definition and uncertainty quantification, and finite element model updating. The data sets for this research include point clouds at both the laboratory scale and the field scale. The laboratory scale data includes point clouds of 3D printed shapes that represent different deformation patterns and buckling modalities of structural components. The field scale data includes point clouds of a highway bridge in Delaware and point clouds of a stiffened floor panel used in the decks of large maritime vessels.
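The deformation-measurement step above compares a baseline point cloud to a later scan and reports change with an explicit spread. A minimal sketch of that idea follows; it is not the dissertation's registration or model-updating pipeline, and the function name and brute-force nearest-neighbor search are illustrative only.

```python
import numpy as np

def deformation_stats(baseline: np.ndarray, deformed: np.ndarray):
    """Nearest-neighbor surface-change estimate between two point clouds
    (N x 3 arrays): for each baseline point, find the distance to the
    closest point in the deformed cloud, then summarize those distances
    with a mean and a standard deviation so the measurement carries an
    explicit uncertainty band rather than a single number."""
    # Pairwise distances via broadcasting; adequate for small clouds.
    d = np.linalg.norm(baseline[:, None, :] - deformed[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.std()
```

For large scans a k-d tree would replace the O(N²) distance matrix, and the reported spread is what a model-updating step could propagate into a range of structural performance metrics.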
  • Item
    Measuring Attention, Working Memory and Visual Perception to Reduce the Risk of Injuries in the Construction Industry
    Aroke, Olugbemi M.; Esmaeili, Behzad
    The construction industry has consistently held one of the highest injury rates among all sectors, and failure to recognize hazards due to poor selective attention, cognitive overload, and distractibility has been identified as a critical human factor leading to accidents. Considering that falls are the leading cause of death in the construction industry, accounting for over 33% of all construction worker deaths, this project investigated the extent to which worker characteristics (work experience, safety training, and previous injury involvement), personality dimensions (extraversion, neuroticism, conscientiousness, agreeableness, and openness to experience), working memory load, and workplace conditions (e.g., time pressure) interact to influence visual attention and the identification of fall hazards. By continuously monitoring the eye movements of participants using eye-tracking technology, this study identified precursors of human error through a batch of visual search tasks designed to: (i) evaluate the influence of worker characteristics on visual attention and hazard identification as workers viewed 35 construction-scenario images containing 115 fall-hazard areas of interest (AOIs); (ii) investigate the effect of working memory load on hazard recognition across personality traits while participants identified fall hazards across 231 AOIs and memorized 3-digit and 6-digit strings of numbers (simulating low and high memory load conditions, respectively) in a secondary cognitive task; and (iii) examine the impact of time pressure on attention to fall and hand-injury hazards as participants installed 27 pieces of 40 ft² shingles while standing on a low-sloped roof model 4 ft wide, 6 ft long, and 3 ft high in two experimental conditions: a baseline study without time pressure and a second manipulation with a 7-minute time limit. 
Multilevel analyses of the data revealed that work experience, safety training, and individual differences in the conscientiousness, agreeableness, and openness to experience personality dimensions showed significant direct associations with visual attention and superior hazard identification performance. Furthermore, residential roofers may be at heightened risk of slip, trip, fall, and hand injury hazards as a result of visual attention impaired by time pressure. The findings have wide implications for improving safety performance and can assist organizations in assigning workers to suitable tasks based on a combination of their cognitive abilities and personality variables, reducing the risk of injury among vulnerable workers whose attention may become impaired when handling multiple tasks in dynamic environments. In addition, this research serves as a proof of concept for construction managers on the need to avoid tight work schedules that induce time pressure, since such schedules promote risk-taking, degrade hazard awareness, and increase workers' susceptibility to fall and hand injury hazards.
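The AOI-based attention measure described above reduces, at its core, to counting which gaze samples land inside each hazard region. A minimal sketch follows; it is not the study's analysis code, and the class, function, and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AOI:
    """Rectangular area of interest in screen or scene coordinates."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def dwell_proportions(fixations, aois):
    """Fraction of fixation samples landing in each hazard AOI, a crude
    proxy for how visual attention is allocated across hazards."""
    counts = {a.name: 0 for a in aois}
    for x, y in fixations:
        for a in aois:
            if a.contains(x, y):
                counts[a.name] += 1
    n = max(len(fixations), 1)
    return {name: c / n for name, c in counts.items()}
```

Real eye-tracking pipelines add fixation detection (separating fixations from saccades), dwell durations, and first-fixation latency, but the AOI hit test above is the common building block.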