Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge lies in designing a sensing solution that can yield actionable information. This is a difficult task to conduct cost-effectively, because of the large surfaces under consideration and the localized nature of typical defects and damage. There have been significant research efforts to empower conventional measurement technologies for SHM applications in order to improve the performance of the condition assessment process. Yet, the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to provide a path for research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the use of multiple sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multifunctional materials are sensing solutions that combine multiple capabilities, for example those also serving structural functions. Remote sensing solutions are contactless, for example cell phones, drones, and satellites; this category also includes remotely controlled robots.
ISSN: 1361-6501
Launched in 1923, Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Mohammadmahdi Abedi et al 2024 Meas. Sci. Technol. 35 065601
In this study, a self-sensing and self-heating natural fibre-reinforced cementitious composite for the shotcrete technique was developed using Kenaf fibres. For this purpose, a series of Kenaf fibre concentrations were subjected to initial chemical treatment, followed by integration into the cement-based composite containing hybrid carbon nanotubes (CNT) and graphene nanoplatelets (GNP). The investigation encompassed an examination of the mechanical, microstructural, sensing, and Joule heating performances of the environmentally friendly shotcrete mixture, with subsequent comparisons drawn against a counterpart blend featuring a conventionally synthesized polypropylene (PP) fibre. Following the experimental phase, a comprehensive 3D nonlinear finite difference (3D NLFD) model of an urban twin road tunnel, complete with all relevant components, was formulated using the FLAC3D (fast Lagrangian analysis of continua in 3 dimensions) code. This model was subjected to rigorous validation procedures. The performances of this green shotcrete mixture as the lining of the inner shell of the tunnel were assessed comparatively using this 3D numerical model under static and dynamic loading. The twin tunnel was subjected to a harmonic seismic load as a dynamic load with a duration of 15 s. The laboratory findings showed a reduction in the composite's sensing and heating potentials for both Kenaf and PP fibre reinforcement. Incorporating a specific quantity of fibre yields a substantial enhancement in both the mechanical characteristics and microstructural attributes of the composite. A digital image correlation analysis demonstrated that Kenaf fibres were highly effective in controlling cracks in cement-based composites. Furthermore, based on the static and dynamic 3D NLFD analysis, this green cement-based composite demonstrated its potential for shotcrete applications as the lining of the inner shell of the tunnel.
This study opens a valuable perspective on the extensive contribution that natural fibres can make to multifunctional, sustainable, reliable and affordable cement-based composite development for today's world.
Adam Thompson et al 2021 Meas. Sci. Technol. 32 105013
Maximum permissible errors (MPEs) are an important measurement system specification and form the basis of periodic verification of a measurement system's performance. However, there is no standard methodology for determining MPEs, so when they are not provided, or not suitable for the measurement procedure performed, it is unclear how to generate an appropriate value with which to verify the system. Whilst a simple approach might be to take many measurements of a calibrated artefact and then use the maximum observed error as the MPE, this method requires a large number of repeat measurements for high confidence in the calculated MPE. Here, we present a statistical method of MPE determination, capable of providing MPEs with high confidence and minimum data collection. The method is presented with 1000 synthetic experiments and is shown to determine an overestimated MPE within 10% of an analytically true value in 99.2% of experiments, while underestimating the MPE with respect to the analytically true value in 0.8% of experiments (overestimating the value, on average, by 1.24%). The method is then applied to a real test case (probing form error for a commercial fringe projection system), where the efficiently determined MPE is overestimated by 0.3% with respect to an MPE determined using an arbitrarily chosen large number of measurements.
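The abstract notes that the naive approach, using the maximum observed error over many repeats, requires a large number of measurements for high confidence. As a minimal sketch (not the authors' method), a distribution-free order-statistics argument shows why: the probability that all n observed errors fall below a given quantile of the error distribution is that quantile raised to the power n.

```python
import math

def repeats_needed(coverage, confidence):
    """Number of repeat measurements n such that, with the given confidence,
    the largest observed error exceeds the `coverage` quantile of the error
    distribution (distribution-free order-statistics bound)."""
    # P(all n observed errors fall below the coverage quantile) = coverage**n;
    # require this "miss" probability to be at most 1 - confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

# e.g. to have the observed maximum cover 99% of errors with 95% confidence:
n = repeats_needed(0.99, 0.95)  # 299 repeats
```

The steep repeat count motivates the statistical method described in the abstract, which targets the same confidence with minimum data collection.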
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable, compact options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data 'twin-friendly' and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
Liisa M Hirvonen and Klaus Suhling 2017 Meas. Sci. Technol. 28 012003
Time-correlated single photon counting (TCSPC) is a widely used, robust and mature technique to measure the photon arrival time in applications such as fluorescence spectroscopy and microscopy, LIDAR and optical tomography. In the past few years there have been significant developments with wide-field TCSPC detectors, which can record the position as well as the arrival time of the photon simultaneously. In this review, we summarise different approaches used in wide-field TCSPC detection, and discuss their merits for different applications, with emphasis on fluorescence lifetime imaging.
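In TCSPC fluorescence lifetime measurements, the recorded photon delays follow an exponential decay whose time constant is the lifetime. As a toy sketch (illustrative lifetime value, not from the review), the maximum-likelihood lifetime estimate for a pure exponential is simply the mean arrival time:

```python
import random

random.seed(42)
TAU = 2.5  # fluorescence lifetime in ns (illustrative value)

# Simulated TCSPC photon delays: exponentially distributed with mean TAU
arrivals = [random.expovariate(1.0 / TAU) for _ in range(100_000)]

# For an exponential decay, the ML lifetime estimate is the sample mean.
tau_hat = sum(arrivals) / len(arrivals)
```

Real instruments additionally deconvolve the instrument response function and handle background counts, which this sketch omits.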
W Hortschitz et al 2024 Meas. Sci. Technol. 35 052001
Due to the necessary transition to renewable energy, the transport of electricity over long distances will become increasingly important, since the sites of sustainable electricity generation, such as wind or solar power parks, and the place of consumption can be very far apart. Currently, electricity is mainly transported via overhead AC lines. However, studies have shown that for long distances, transport via DC offers decisive advantages. To make optimal use of the existing route infrastructure, simultaneous AC and DC, or hybrid transmission, should be employed. The resulting electric field strengths must not exceed legally prescribed thresholds to avoid potentially harmful effects on humans and the environment. However, accurate quantification of the resulting electric fields is a major challenge in this context, as they can be easily distorted (e.g. by the measurement equipment itself). Nonetheless, knowledge of the undisturbed field strengths from DC up to several multiples of the fundamental frequency of the power grid (up to 1 kHz) is required to ensure compliance with the thresholds. Both AC and DC electric fields can result in the generation of corona ions in the vicinity of the line. In the case of pure AC fields, the corona ions generated typically recombine in the immediate vicinity of the line and, therefore, have no influence on the field measurement further away. Unfortunately, this assumption does not hold for DC fields and hybrid fields, where corona ions can be transported far away from the line (e.g. by wind) and potentially interact with the measurement equipment, yielding incorrect measurement results. This review will provide a comprehensive overview of the current state-of-the-art technologies and methods which have been developed to address the problems of measuring the electric field near hybrid power lines.
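For orientation, the undisturbed field near a conductor can be approximated, far from ground and neighbouring phases, by the textbook field of an infinite line charge. This is a first-order idealization for scale only, not one of the review's measurement methods, and the charge density below is an illustrative value:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def line_charge_field(lambda_c, r):
    """Magnitude of the electric field (V/m) at distance r (m) from an
    infinitely long straight line with charge density lambda_c (C/m)."""
    return lambda_c / (2.0 * math.pi * EPS0 * r)

# e.g. field 10 m from a conductor carrying 1 nC/m of charge
E = line_charge_field(1e-9, 10.0)  # ~1.8 V/m
```

Real compliance assessments must account for ground-plane images, bundle geometry, and the space charge from corona ions that the abstract highlights.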
Fernando Zigunov and John J Charonko 2024 Meas. Sci. Technol. 35 065302
Experimentally measured pressure fields play an important role in understanding many fluid dynamics problems. Unfortunately, pressure fields are difficult to measure directly with non-invasive, spatially resolved diagnostics, and calculations of pressure from velocity have proven sensitive to error in the data. Omnidirectional line integration methods are usually more accurate and robust to these effects as compared to implicit Poisson equations, but have seen slower uptake due to the higher computational and memory costs, particularly in 3D domains. This paper demonstrates how omnidirectional line integration approaches can be converted to a matrix inversion problem. This novel formulation uses an iterative approach so that the boundary conditions are updated each step, preserving the convergence behavior of omnidirectional schemes while also keeping the computational efficiency of Poisson solvers. This method is implemented in Matlab and also as a GPU-accelerated code in CUDA-C++. The behavior of the new method is demonstrated on 2D and 3D synthetic and experimental data. Three-dimensional grid sizes of up to 125 million grid points are tractable with this method, opening exciting opportunities to perform volumetric pressure field estimation from 3D PIV measurements.
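To make the Poisson-solver side of the comparison concrete, here is a deliberately tiny stand-in (not the paper's omnidirectional scheme): Jacobi iterations for a 2D Laplace problem with Dirichlet boundary values, the elementary building block of iterative pressure solvers. Grid size, spacing, and the harmonic test field are all assumptions for illustration:

```python
def solve_pressure_dirichlet(p_bc, n, iters=4000):
    """Jacobi iterations for the 2D Laplace problem on an n x n grid with
    Dirichlet boundary values from p_bc(i, j); unit grid spacing assumed.
    Interior is initialized to zero and relaxed toward the solution."""
    p = [[p_bc(i, j) if (i in (0, n - 1) or j in (0, n - 1)) else 0.0
          for j in range(n)] for i in range(n)]
    for _ in range(iters):
        new = [row[:] for row in p]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # discrete Laplacian = 0  =>  average of the four neighbours
                new[i][j] = 0.25 * (p[i-1][j] + p[i+1][j] + p[i][j-1] + p[i][j+1])
        p = new
    return p

# Harmonic test field p = x*y: the interior must converge to the exact values.
exact = lambda i, j: float(i * j)
p = solve_pressure_dirichlet(exact, 12)
err = max(abs(p[i][j] - exact(i, j)) for i in range(12) for j in range(12))
```

The slow convergence of such point-iterative schemes on large 3D grids is precisely why the matrix formulation and GPU acceleration described in the abstract matter.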
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standardization (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, particle imaging, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
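As a minimal sketch of the Monte Carlo propagation idea mentioned above (illustrative numbers, not from the review): perturb the velocity samples entering a central-difference vorticity stencil with their uncertainty, and compare the spread of the results with the analytic linear propagation.

```python
import random, math

random.seed(1)
SIGMA_U = 0.05   # velocity uncertainty (m/s), illustrative
DX = 0.01        # vector spacing (m), illustrative
N = 20_000

# Vorticity from a 2D central-difference stencil:
#   omega = (v_E - v_W)/(2*dx) - (u_N - u_S)/(2*dx)
# Monte Carlo: perturb the four independent velocity samples and
# observe the spread of the resulting vorticity estimates.
samples = []
for _ in range(N):
    v_e, v_w, u_n, u_s = (random.gauss(0.0, SIGMA_U) for _ in range(4))
    samples.append((v_e - v_w) / (2 * DX) - (u_n - u_s) / (2 * DX))

mean = sum(samples) / N
sigma_mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / (N - 1))

# Analytic (Taylor-series) propagation for four independent samples:
sigma_analytic = SIGMA_U * math.sqrt(4.0) / (2.0 * DX)
```

For this linear stencil the two routes agree; Monte Carlo earns its cost when the derived quantity is nonlinear or the input errors are correlated, cases the review discusses.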
Bora O Cakir et al 2024 Meas. Sci. Technol. 35 075201
The degraded resolution and sensitivity characteristics of background-oriented schlieren (BOS) can be recovered by utilizing an optical flow (OF)-based image processing scheme. However, the background patterns conventionally employed in BOS setups suit the needs of the cross-correlation approach, whereas OF is based on a completely different mathematical background. Thus, in order to characterize the resolution and sensitivity response of OF-based BOS to the background generation configurations, a parametric study is performed. First, a synthetic assessment based on an analytical solution of a one-dimensional shock tube problem is conducted. Then, a numerical assessment utilizing direct numerical simulation data of density-driven turbulence is performed. Finally, the applicability of the documented conclusions in realistic scenarios is tested through an experimental assessment over a plume of a swirling heated jet.
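The brightness-constancy principle underlying OF-based BOS can be illustrated in one dimension (a toy sketch; practical BOS uses 2D variational OF with regularization): the apparent displacement follows from the spatial gradient and the temporal intensity change.

```python
def flow_1d(frame0, frame1, dx=1.0):
    """Per-pixel 1D optical flow from the brightness-constancy equation
    Ix*u + It = 0, with a central-difference spatial gradient."""
    u = []
    for i in range(1, len(frame0) - 1):
        ix = (frame0[i + 1] - frame0[i - 1]) / (2.0 * dx)
        it = frame1[i] - frame0[i]
        u.append(-it / ix if abs(ix) > 1e-12 else 0.0)
    return u

# A linear intensity ramp shifted by 0.3 px is recovered exactly.
ramp0 = [float(i) for i in range(10)]
ramp1 = [v - 0.3 for v in ramp0]
disp = flow_1d(ramp0, ramp1)
```

The dependence of the gradient term on the background texture is exactly why OF-based BOS needs different background patterns than cross-correlation, as the abstract's parametric study investigates.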
Bright Awuku et al 2024 Meas. Sci. Technol. 35 076006
Pipelines are critical arteries in the oil and gas industry and require massive capital investment to safely construct networks that transport hydrocarbons across diverse environments. However, these pipeline systems are prone to integrity failure, which results in significant economic losses and environmental damage. Accurate prediction of pipeline failure events using historical oil pipeline accident data enables asset managers to plan sufficient maintenance, rehabilitation, and repair activities to prevent catastrophic failures. However, learning the complex interdependencies between pipeline attributes and rare failure events presents several analytical challenges. This study proposes a novel machine learning (ML) framework to accurately predict pipeline failure causes on highly class-imbalanced data compiled by the United States Pipeline and Hazardous Materials Safety Administration. Natural language processing techniques were leveraged to extract informative features from unstructured text data. Furthermore, class imbalance in the dataset was addressed via oversampling and intrinsic cost-sensitive learning (CSL) strategies adapted for the multi-class case. Nine machine and deep learning architectures were benchmarked, with LightGBM demonstrating superior performance. The integration of CSL yielded an 86% F1 score and a 0.82 Cohen kappa score, significantly advancing prior research. This study leveraged a comprehensive Shapley Additive explanation analysis to interpret the predictions from the LightGBM algorithm, revealing the key factors driving failure probabilities. Leveraging sentiment analysis allowed the models to capture a richer, more multifaceted representation of the textual data. This study developed a novel CSL approach that integrates domain knowledge regarding the varying cost impacts of misclassifying different failure types into ML models. 
This research demonstrated an effective fusion of text insights from inspection reports with structured pipeline data that enhances model interpretability. The resulting AI modeling framework generated data-driven predictions of the causes of failure that could provide transportation agencies with actionable insights. These insights enable tailored preventative maintenance decisions to proactively mitigate emerging pipeline failures.
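One common way to realize cost-sensitive learning is to turn per-class misclassification costs into sample weights for the loss function. The sketch below uses hypothetical class names and cost values (not from the study) and normalizes so the overall loss scale is preserved:

```python
# Hypothetical per-class misclassification costs (illustrative numbers):
# costlier failure causes receive proportionally larger sample weights.
costs = {'corrosion': 5.0, 'excavation damage': 3.0, 'material failure': 1.0}

mean_cost = sum(costs.values()) / len(costs)
class_weights = {c: v / mean_cost for c, v in costs.items()}
# class_weights average to 1, so the overall loss scale is unchanged
```

Gradient-boosting libraries such as LightGBM accept per-sample weights of this form, which is one way such domain knowledge can enter training.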
Murali R Cholemari et al 2024 Meas. Sci. Technol. 35 075206
In planar laser induced fluorescence (PLIF), scalar concentration fields are obtained from fluorescence images of the flow, where the scalar is 'tagged' with a suitable fluorescent dye and illuminated with a laser sheet. As the fluorescence occurs, the intensity of the laser sheet is attenuated along its passage through the dye field. This paper discusses the effect of correcting for the attenuation of the laser by the dye when obtaining the scalar concentration field from the fluorescent images of the flow. For cases where the attenuation correction cannot be performed, the paper discusses three possible calibration scenarios to minimise the error. It is shown that the error arising from ignoring the attenuation due to turbulent concentration fluctuations cannot be corrected by any means. However, the remaining error can be exactly corrected if it were possible to calibrate with the mean concentration field. It is shown that calibrating with a uniform concentration field, centred around the mean field, greatly reduces the error compared to having no attenuation correction, though the error remains significantly larger than when the mean field is used. Relying on these results, we propose an iterative technique which ideally achieves the accuracy obtainable when calibrating with the mean concentration field, but which requires no reference cell. In addition, the paper explores the removal of striations, a common artefact in PLIF image acquisition that originates in the imaging optics. A removal technique is proposed and its advantages over available techniques are discussed.
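The attenuation correction can be illustrated in a simplified 1D form (a sketch of the Beer-Lambert principle, not the paper's iterative calibration scheme; the coefficient and profile below are illustrative): marching along the ray, each recovered concentration updates the local laser intensity used for the next pixel.

```python
import math

def correct_attenuation(fluorescence, eps, dx, i0=1.0):
    """March along the laser ray: recover concentrations c_i from measured
    fluorescence F_i = I_i * c_i, where the local intensity obeys
    Beer-Lambert attenuation I_{i+1} = I_i * exp(-eps * c_i * dx)."""
    c, intensity = [], i0
    for f in fluorescence:
        ci = f / intensity
        c.append(ci)
        intensity *= math.exp(-eps * ci * dx)
    return c

# Round trip: synthesize F from a known profile, then invert it.
true_c = [0.2, 0.5, 1.0, 0.7, 0.3]
eps, dx, i = 0.8, 0.1, 1.0
fluor = []
for ci in true_c:
    fluor.append(i * ci)
    i *= math.exp(-eps * ci * dx)
recovered = correct_attenuation(fluor, eps, dx)
```

The marching inversion is exact for an instantaneous image; the uncorrectable error discussed in the abstract arises because in practice only time-averaged calibration fields are available while the attenuation fluctuates turbulently.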
Yueyang Huan et al 2024 Meas. Sci. Technol. 35 076305
In the field of global navigation satellite system (GNSS) time series noise analysis, appropriately modeling the noise components plays an important role in determining the velocity of GNSS sites and quantifying the uncertainty associated with the velocity estimation. Over the years, researchers have focused on only one optimal noise model, while other noise models that show similar performance to the optimal model have been ignored. We investigated whether these ignored noise models can be made use of to describe the noise in the GNSS time series after applying a model averaging algorithm. The experimental data were derived from 28 International GNSS Service (IGS) sites in the California region of the United States and 110 IGS sites worldwide. The results showed that for the GNSS time series of the 28 IGS sites in California, the model averaging algorithm could be applied to 79%, 68%, and 75% of the site components in the east/north/up (E/N/U) directions, respectively. Among these, the east direction showed the best performance, with 50% of the site components obtaining more conservative velocity uncertainty after applying the model averaging algorithm compared to the optimal noise model. For the GNSS time series of the 110 IGS stations worldwide, the model averaging algorithm demonstrates excellent performance in all the E/N/U directions: it can be applied to 86%, 94%, and 57% of the site components, respectively. Building upon this, 77%, 65%, and 62% of the site components achieve more conservative velocity uncertainty in the E/N/U directions compared to the optimal noise model. To fully validate the feasibility of the model averaging algorithm, we also tested GNSS time series of varying lengths and different thresholds of the model averaging algorithm. In summary, the model averaging algorithm performs exceptionally well in the noise analysis of GNSS time series. It helps prevent overly optimistic estimation results.
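A standard way to average over competing noise models is Akaike-style weighting: models close to the best information-criterion score retain weight, and the averaged uncertainty is a weighted combination. The model names, scores, and uncertainties below are hypothetical, and this sketch is not necessarily the exact algorithm used in the paper:

```python
import math

# Hypothetical information-criterion scores and velocity uncertainties
# (mm/yr) for three candidate noise models; all numbers are illustrative.
scores = {'FN+WN': 102.3, 'PL+WN': 102.9, 'RW+FN+WN': 106.1}
sigmas = {'FN+WN': 0.12, 'PL+WN': 0.18, 'RW+FN+WN': 0.25}

best = min(scores.values())
raw = {m: math.exp(-0.5 * (s - best)) for m, s in scores.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

# Model-averaged uncertainty: more conservative than the optimal
# model's alone whenever competing models carry non-trivial weight.
sigma_avg = sum(weights[m] * sigmas[m] for m in weights)
```

This illustrates the abstract's central observation: when near-optimal models are not ignored, the resulting velocity uncertainty is larger, i.e. less optimistic, than that of the single best model.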
Jinyu Tong et al 2024 Meas. Sci. Technol. 35 075108
In real industrial environments, vibration signals generated during the operation of rotating machinery are typically accompanied by significant noise. Existing deep learning methods often yield unsatisfactory diagnostic results when dealing with noisy signals. To address this problem, a novel residual global context shrinkage network (RGNet) is proposed in this paper. Firstly, to fully utilize the useful information in the raw vibration signal, a multi-sensor fusion strategy based on dispersion entropy is designed as the input of the deep network. Then, the RGNet is designed, which improves the long-distance modeling capability of the deep network while suppressing noise and optimizing the network's gradient flow and computational performance. Finally, the noise suppression ability and feature extraction ability of the RGNet are intuitively revealed through an interpretability study. The advantages of the proposed method are demonstrated through a series of comparison experiments under noisy backgrounds.
Chi Zhang et al 2024 Meas. Sci. Technol. 35 075107
The parity-time symmetry concept has been utilized to develop high-precision LC passive wireless sensors. However, these sensors often use the traditional frequency-sweeping method for measurements, so the measurement precision and speed are strongly influenced by the performance of the frequency domain analysis instrument. To solve this issue, herein we propose a time domain measurement method that extracts sensing information from the transient response signals of the reader. Its measurement speed is much faster than that achievable with the frequency domain analysis instrument. A distance sensing system was developed to demonstrate the feasibility of the new method. It showed a resolution of better than 300 nm over a centimeter-scale detection range, and the measurement time was as short as 100 μs, at least 1000 times faster than the traditional method. This technology can be explored as an innovative strategy for LC passive telemetry sensing.
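One elementary way to pull a frequency out of a transient ringdown, rather than sweeping in the frequency domain, is to time the zero crossings of the recorded waveform. This toy sketch (illustrative frequency, sampling rate, and decay constant; not the paper's readout scheme) shows the idea:

```python
import math

def freq_from_zero_crossings(signal, dt):
    """Estimate the dominant frequency of an oscillatory transient from
    the average spacing of its upward zero crossings."""
    ups = [i for i in range(1, len(signal))
           if signal[i - 1] < 0.0 <= signal[i]]
    if len(ups) < 2:
        return None
    period = (ups[-1] - ups[0]) * dt / (len(ups) - 1)
    return 1.0 / period

# 49.7 Hz damped ringdown sampled at 10 kHz (illustrative numbers)
dt = 1e-4
sig = [math.exp(-30.0 * t) * math.sin(2.0 * math.pi * 49.7 * t)
       for t in (k * dt for k in range(2000))]
f_est = freq_from_zero_crossings(sig, dt)
```

Practical time-domain readouts are far more refined, but the example conveys why extracting the resonance from a single transient can be orders of magnitude faster than stepping an analyzer through frequencies.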
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery. However, their working environments are harsh and their operating conditions complex, which poses challenges for fault diagnosis. With the development of computer technology, deep learning has been applied to fault diagnosis and has developed rapidly. Among deep learning methods, the convolutional neural network (CNN) has received great attention from researchers due to its powerful data mining and feature adaptive learning abilities. Based on recent research hotspots, the development history and trends of CNNs are summarized and analyzed. Firstly, the basic structure of the CNN is introduced, the important progress of classical CNN models for rolling bearing fault diagnosis in recent years is reviewed, and the problems with classical CNN algorithms are pointed out. Secondly, to solve these problems, various methods and principles for optimizing CNNs, drawn from recent research achievements, are introduced and compared from the perspectives of deep feature extraction, hyperparameter optimization, and network structure optimization. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization, and poor network interpretability. Therefore, the future development trends of CNNs are finally discussed: transfer learning models are introduced to improve the generalization ability of CNNs, and interpretable CNNs are used to increase their interpretability.
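The core CNN operation on a raw vibration signal is a 1D convolution followed by a nonlinearity. As a dependency-free sketch (a hand-written forward pass with a fixed kernel, not a trained diagnostic model):

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation form, as in CNN
    layers) of a vibration signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def relu(xs):
    return [max(0.0, x) for x in xs]

# A difference kernel responds to abrupt changes such as impact transients
# caused by a localized bearing defect passing through the load zone.
feature_map = relu(conv1d([0.0, 0.0, 1.0, 0.0, 0.0], [1.0, -1.0]))
```

In a real diagnosis network the kernels are learned from labeled data and many such layers are stacked, but the feature-extraction mechanism is the same.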
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes since the original contributions of Anderson and Barr, published in 1923 in the first issue of Measurement Science and Technology. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches based on optical fibers.
Weiqing Liao et al 2024 Meas. Sci. Technol. 35 062002
Mechanical fault diagnosis is crucial for ensuring the normal operation of mechanical equipment. With the rapid development of deep learning technology, big data-driven methods provide a new perspective for the fault diagnosis of machinery. However, mechanical equipment operates in the normal condition most of the time, resulting in the collected data being imbalanced, which affects the performance of mechanical fault diagnosis. As a new approach for generating data, the generative adversarial network (GAN) can effectively address the issues of limited data and imbalanced data in practical engineering applications. This paper provides a comprehensive review of GANs for mechanical fault diagnosis. Firstly, the development of GAN-based mechanical fault diagnosis, the basic theory of GAN and various GAN variants (GANs) are briefly introduced. Subsequently, GANs are summarized and categorized from the perspective of labels and models, and the corresponding applications are outlined. Lastly, the limitations of current research, future challenges and trends, and guidance on selecting a GAN for practical applications are discussed.
Jianghong Zhou et al 2024 Meas. Sci. Technol. 35 062001
Predictive maintenance (PdM) is currently the most cost-effective maintenance method for industrial equipment, offering improved safety and availability of mechanical assets. A crucial component of PdM is the remaining useful life (RUL) prediction for machines, which has garnered increasing attention. With the rapid advancements in industrial internet of things and artificial intelligence technologies, RUL prediction methods, particularly those based on pattern recognition (PR) technology, have made significant progress. However, a comprehensive review that systematically analyzes and summarizes these state-of-the-art PR-based prognostic methods is currently lacking. To address this gap, this paper presents a comprehensive review of PR-based RUL prediction methods. Firstly, it summarizes commonly used evaluation indicators based on accuracy metrics, prediction confidence metrics, and prediction stability metrics. Secondly, it provides a comprehensive analysis of typical machine learning methods and deep learning networks employed in RUL prediction. Furthermore, it delves into cutting-edge techniques, including advanced network models and frontier learning theories in RUL prediction. Finally, the paper concludes by discussing the current main challenges and prospects in the field. The intended audience of this article includes practitioners and researchers involved in machinery PdM, aiming to provide them with essential foundational knowledge and a technical overview of the subject matter.
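Among the accuracy metrics for RUL prediction, asymmetric scoring functions are common because predicting a failure too late is costlier than predicting it too early. The sketch below follows the style of the widely used NASA C-MAPSS benchmark score (the exact metrics surveyed in this review may differ):

```python
import math

def asymmetric_rul_score(rul_true, rul_pred):
    """Asymmetric RUL score in the style of the NASA C-MAPSS benchmark:
    late predictions (d > 0) are penalized more heavily than early ones.
    Lower total score is better; a perfect prediction scores zero."""
    total = 0.0
    for t, p in zip(rul_true, rul_pred):
        d = p - t  # positive d: predicted RUL too long (late warning)
        total += math.exp(d / 10.0) - 1.0 if d >= 0 else math.exp(-d / 13.0) - 1.0
    return total

late = asymmetric_rul_score([100.0], [110.0])   # 10 cycles late
early = asymmetric_rul_score([100.0], [90.0])   # 10 cycles early
```

The different denominators (10 versus 13) encode the asymmetry: an equally sized late error contributes more to the score than an early one.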
Zheyu Wang et al 2024 Meas. Sci. Technol. 35 052003
The market for service robots is expanding as labor costs continue to rise. Faced with intricate working environments, fault detection and diagnosis are crucial to ensure the proper functioning of service robots. The objective of this review is to systematically investigate the realm of service robots' fault diagnosis through the application of Structural Topic Modeling. A total of 289 papers were included, culminating in ten topics, including advanced algorithm application, data learning-based evaluation, automated equipment maintenance, actuator diagnosis for manipulator, non-parametric method, distributed diagnosis in multi-agent systems, signal-based anomaly analysis, integrating complex control framework, event knowledge assistance, mobile robot particle filtering method. These topics spanned service robot hardware and software failures, diverse service robot systems, and a range of advanced algorithms for fault detection in service robots. Asia-Pacific, Europe, and the Americas, recognized as three pivotal regions propelling the advancement of service robots, were employed as covariates in this review to investigate regional disparities. The review found that current research tends to favor the use of artificial intelligence (AI) algorithms to address service robots' complex system faults and vast volumes of data. The topics of algorithms, data learning, automated maintenance, and signal analysis are advancing with the support of AI, gaining increasing popularity as a burgeoning trend. Additionally, variations in research focus across different regions were found. The Asia-Pacific region tends to prioritize algorithm-related studies, while Europe and the Americas show a greater emphasis on robot safety issues. 
The integration of diverse technologies holds the potential to bring forth new opportunities for future service robot fault diagnosis. Simultaneously, regional standards on data, communication, and other aspects can streamline the development of fault diagnosis methods for service robots.
Zhang et al
Due to factors such as high temperatures, elevated pressures, and severe high-frequency shocks in the bore, there is considerable noise interference in acceleration test signals. This makes it challenging to accurately measure projectile motion acceleration using existing methods. To address this issue, we propose an advanced measurement approach that uses bottom pressure correction. Our model suggests a significant correlation between projectile motion acceleration and thrust. By utilizing the correlations between bottom pressure and motion acceleration in both temporal and frequency domains, we can improve the accuracy of acceleration measurements. Based on these insights, we have developed a novel testing system that synchronously measures bottom pressure and acceleration, using bottom pressure as a corrective mechanism for the measured acceleration signals. Empirical results show that the maximum relative error in peak motion acceleration is only 4.86%, demonstrating the effectiveness of our proposed method.
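The correction idea can be illustrated with a minimal numpy sketch, assuming (purely for illustration, not as the authors' ballistic model) that acceleration is proportional to bottom pressure through a thrust scale factor; all signals, the pulse shape, and the noise level below are synthetic.

```python
import numpy as np

# Hedged sketch: correct a noisy in-bore acceleration signal using the
# bottom-pressure channel, assuming acceleration ~ k * pressure (thrust model).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.01, 1000)                  # 10 ms record
pressure = np.exp(-((t - 0.003) / 0.001) ** 2)    # idealised pressure pulse
accel_true = 5.0e4 * pressure                     # "true" acceleration, k = 5e4
accel_meas = accel_true + rng.normal(0.0, 5.0e3, t.size)  # heavy shock noise

# Least-squares estimate of the pressure-to-acceleration scale factor k
k_hat = np.dot(pressure, accel_meas) / np.dot(pressure, pressure)
accel_corrected = k_hat * pressure                # pressure-corrected estimate

peak_err = abs(accel_corrected.max() - accel_true.max()) / accel_true.max()
print(f"estimated k = {k_hat:.1f}, peak relative error = {100 * peak_err:.2f} %")
```

The peak relative error of the corrected signal stays at the few-percent level even under noise comparable to the signal itself, which is the kind of improvement the authors report.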
Lv et al
Compared with traditional cameras, event cameras offer the significant advantages of high temporal resolution, low data redundancy, and microsecond-level latency, which are beneficial in structural monitoring for extracting the dense response of structures in both the spatial and temporal dimensions. This study investigates vibration frequency detection based on event cameras and proposes two event-stream algorithms: marker tracking and event counting. Experimental verification is conducted through forced vibration experiments. The results indicate that the event count method achieves high-precision measurement of vibration frequencies in the range of 10-190 Hz for different vibration scales, with a maximum relative error of 1% and an average relative error of 0.673%. The marker tracking method demonstrates a maximum relative error of 1.43% and an average relative error of 0.575% in frequency measurement for large-amplitude vibrations. However, as the amplitude decreases, the frequency measurement error increases; when the amplitude is less than 3 pixels, the error exceeds 30%, rendering the measurement results unreliable. This research provides technical support for high-precision structural vibration frequency monitoring and further expands the application of event cameras in structural monitoring.
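The event-count idea can be sketched with numpy, under the simplifying assumption (illustrative, not the authors' setup) that the per-bin event rate is modulated at the structure's vibration frequency; binning the event stream and locating the spectral peak then recovers the frequency.

```python
import numpy as np

f_vib = 50.0                       # ground-truth vibration frequency (Hz)
fs = 1000.0                        # event-count bin rate (1 ms bins)
T = 2.0                            # 2 s observation window
t = np.arange(0.0, T, 1.0 / fs)

rng = np.random.default_rng(1)
# Assumed model: event rate modulated at the vibration frequency (events/s)
rate = 200.0 * (1.0 + 0.5 * np.sin(2 * np.pi * f_vib * t))
counts = rng.poisson(rate / fs)    # events counted in each bin

# FFT of the event-count series; the vibration shows up as a spectral peak
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(counts.size, d=1.0 / fs)
f_est = freqs[np.argmax(spectrum)]
print(f"estimated frequency: {f_est:.2f} Hz")
```

With a 2 s window the spectral resolution is 0.5 Hz, so the 50 Hz line is recovered to well within the sub-percent errors quoted in the abstract.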
Shao et al
The advancement of deep transfer learning has motivated research into intelligent fault diagnosis schemes for rotating machinery. Nevertheless, existing research rarely examines the relative importance of statistical distance metric-based and adversarial learning-based methods in domain adaptation, and commonly used feature extractors struggle to extract features well suited to domain transformation. In this paper, a dynamic fusion of statistical metric and adversarial learning for domain adaptation network (DFSA-DAN) is proposed to dynamically measure the importance of the different domain adaptation methods. The model uses a local maximum mean discrepancy metric to align the conditional distribution and adversarial training to align the marginal distribution between domains. Meanwhile, to assess the importance of the two distributions, a dynamic adaptation factor is introduced for dynamic evaluation. In addition, to extract features better suited to domain transformation, the model incorporates a dual deep convolutional path with an attention mechanism as a feature extractor, enabling multi-scale feature extraction. Experimental results demonstrate the model's superior generalization capability and robustness, enabling effective cross-domain fault diagnosis in diverse scenarios.
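The statistical-metric half of such a network can be illustrated with a plain maximum mean discrepancy estimate in numpy (global MMD rather than the paper's local, class-conditional variant; the data and kernel bandwidth are arbitrary assumptions): a large MMD between source- and target-domain features signals a distribution shift that the adaptation loss should reduce.

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (200, 2))        # "source domain" features
tgt_near = rng.normal(0.0, 1.0, (200, 2))   # matched target domain
tgt_far = rng.normal(2.0, 1.0, (200, 2))    # shifted target domain

m_near = mmd_rbf(src, tgt_near)
m_far = mmd_rbf(src, tgt_far)
print(f"MMD matched: {m_near:.4f}, MMD shifted: {m_far:.4f}")
```

In the full model this metric loss would be blended with an adversarial loss through the dynamic adaptation factor.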
Guo et al
Recent research has shown that multivariable entropy-based feature extraction can obtain better results for fault diagnosis of planetary gearboxes. However, the intrinsic properties of multivariable entropy have not yet been deeply explored: the reliability of multi-source information fusion and the cluster consistency for the same fault signal. These two properties affect the accuracy of fault diagnosis based on multivariate entropy. This paper aims to reveal these intrinsic properties. Firstly, a rigid-flexible coupling dynamic model of a planetary gearbox is constructed to establish a pure test environment. The generated vibration signals are then used to evaluate the fusion reliability and cluster consistency of multivariable entropy. Additionally, a new multivariable entropy feature extraction method called variational embedding refined composite multiscale diversity entropy (veRCMDE) is proposed. Finally, the simulation and experiment results show that high fusion reliability and high cluster consistency enable multivariate entropy to extract more valuable features, and the proposed veRCMDE performs best in all experiments.
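A rough single-scale sketch of a diversity-entropy-style feature (not the authors' veRCMDE, which adds variational embedding and refined composite multiscale coarse-graining) can be written in numpy: the entropy of the distribution of cosine similarities between successive embedding vectors is low for a regular signal and high for an irregular one. The embedding dimension and bin count below are arbitrary choices.

```python
import numpy as np

def diversity_entropy(x, m=4, num_bins=20):
    """Sketch of diversity entropy: normalized Shannon entropy of the
    distribution of cosine similarities between successive m-dim embeddings."""
    emb = np.lib.stride_tricks.sliding_window_view(x, m)      # (N-m+1, m)
    a, b = emb[:-1], emb[1:]
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    p, _ = np.histogram(cos, bins=num_bins, range=(-1.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(num_bins)

rng = np.random.default_rng(0)
noise = rng.normal(size=2048)                        # irregular signal
tone = np.sin(2 * np.pi * 0.05 * np.arange(2048))    # regular signal
de_noise, de_tone = diversity_entropy(noise), diversity_entropy(tone)
print(f"noise: {de_noise:.3f}, tone: {de_tone:.3f}")
```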
A Spaett and B G Zagar 2024 Meas. Sci. Technol. 35 075013
Fully developed laser speckle patterns are, due to their high contrast and statistical nature, well suited to measuring strain and displacement via an appropriately designed measurement system. Laser speckle patterns are formed when a sufficiently coherent light source, such as a HeNe-laser, illuminates an optically rough surface. Therefore, methods based on laser speckle patterns can be applied to any surface scatterer with a minimum mean surface roughness of about a quarter of the laser's wavelength. This also includes materials such as thin natural and technical fibres as well as foils, for which the presented measurement system, including the digital signal processing, was designed. In order to achieve the best possible resolution of a speckle-based measurement system, combined with a sufficiently small measurement uncertainty, all available design parameters must be optimised. One of these parameters is the speckle size, which is dependent on the properties of the imaging optics. In this paper a subjective laser speckle-based measurement system based on a so-called 4f-optical setup is presented. This setup allows the speckle size to be controlled in the axial and lateral dimensions separately, which is achieved with the help of an aperture in the Fourier plane of the optics. It is shown that the optimal speckle size for the presented measurement system not only depends on the physical setup but also on the signal processing applied. The signal processing routine estimates displacements of the speckle pattern, leading to an estimate for the strain. Additionally, it is demonstrated that the optimal speckle size can be lower than the commonly reported optimum of between two and five pixel pitches required to circumvent aliasing in the image data. While this is shown for a measurement setup using 4f-optics, the results are of general importance to speckle-based strain or displacement measurement systems and should thus be taken into account.
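The displacement-estimation step can be sketched in one dimension with synthetic speckle (the low-pass kernel width standing in for the optically controlled speckle size; this is an illustrative digital-correlation sketch, not the authors' 4f processing chain):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
# Synthetic 1-D speckle trace: low-pass-filtered noise; the Gaussian kernel
# width plays the role of the speckle size set by the imaging optics.
raw = rng.normal(size=n + 64)
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
speckle = np.convolve(raw, kernel, mode="same")

shift = 5                                   # integer-pixel displacement
ref = speckle[32:32 + n]                    # reference pattern
mov = speckle[32 + shift:32 + shift + n]    # displaced pattern

# Cross-correlate (mean-removed) and locate the peak -> displacement estimate
ref0, mov0 = ref - ref.mean(), mov - mov.mean()
xcorr = np.correlate(ref0, mov0, mode="full")
est = int(np.argmax(xcorr)) - (n - 1)
print(f"estimated displacement: {est} px")
```

A sequence of such displacement estimates along the fibre or foil then yields the strain estimate; in practice the correlation peak is interpolated for subpixel resolution, and the speckle size controls the peak width.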
Ata Can Çorakçı et al 2024 Meas. Sci. Technol.
In this paper, the application of a Two-Equations Two-Unknowns (2E-2U) method is described for the calibration of hydrophones and projectors below 1 kHz in a laboratory test tank. At low frequencies, amplitude and phase measurements for the calibration of hydrophones and projectors in the test tank are difficult to perform, since the echo-free time of the tank is not long enough owing to transducer initial transients and tank wall boundary reflections. To overcome these difficulties, the 2E-2U method is applied to the received (windowed) signals obtained during calibration measurements. Thus, calibration measurements become possible at frequencies down to 250 Hz. These measurements in the test tank are performed for a hydrophone and a newly developed flextensional projector. First, the receive sensitivities of the hydrophone are calculated and validated by comparison with a pressure calibration in a closed chamber. Good agreement is obtained between the two measurement platforms, with a maximum difference of 0.5 dB and an uncertainty of 1.3 dB. Then, the transmitting voltage response (TVR) of the flextensional projector is calculated and compared with the calibration data obtained from the method defined in the relevant standards. Good agreement is obtained between the two TVR datasets, with a maximum difference of 1.1 dB and an uncertainty of 1.7 dB.
S Soman et al 2024 Meas. Sci. Technol. 35 075905
Inspection of surface and nanostructure imperfections plays an important role in high-throughput manufacturing across various industries. This paper introduces a novel, parallelised version of the metrology and inspection technique coherent Fourier scatterometry (CFS). The proposed strategy employs parallelisation with multiple probes, facilitated by a diffraction grating that generates multiple optical beams, with detection using an array of split detectors. The article details the optical setup and design considerations, and presents results including independent detection verification, calibration curves for different beams, and a data stitching process for composite scans. The study concludes with discussions of the system's limitations and potential avenues for future development, emphasizing the significance of enhancing scanning speed for the widespread adoption of CFS as a commercial metrology tool.
Zelin Zhou et al 2024 Meas. Sci. Technol. 35 076304
Global navigation satellite system (GNSS) positioning performance in dense urban environments deteriorates significantly due to frequent non-line-of-sight (NLOS) and multipath errors. An accurate weighting scheme is critical for positioning, especially in urban environments. Traditional methods for determining the weights of observations typically rely on the carrier-to-noise density ratio (C/N0) and the elevations from satellites to receivers. Nevertheless, the performance of these methods degrades in dense urban settings, as C/N0 and elevation measurements fail to fully capture the intricacies of NLOS and multipath errors. In this paper, a novel GNSS observation weighting scheme based on a Hopular GNSS signal classifier, which can accurately identify LOS/NLOS signals using a medium-sized training dataset, is proposed to improve the urban kinematic navigation solution in real-time kinematic positioning mode. Four GNSS features, C/N0, time-differenced code-minus-carrier, loss-of-lock indicator, and satellite elevation, are employed in the training of the Hopular-based signal classifier. The performance of the new method is validated using two urban kinematic datasets collected by a U-blox F9P receiver with a low-cost antenna in downtown Calgary. For the first testing dataset, the results show that the Hopular-based weighting scheme outperforms the three most commonly used GNSS observation weighting schemes: C/N0, elevation, and a combined C/N0-elevation approach. Approximately 10.089 m of horizontal root-mean-squared (RMS) positioning error and 12.592 m of vertical RMS error are achieved using the proposed method, with improvements of 78.83%, 46.82% and 43.27% in horizontal positioning accuracy and 54.00%, 47.51% and 49.69% in vertical positioning accuracy compared to the C/N0, elevation and combined C/N0-elevation weighting schemes, respectively. For the second testing dataset, a similar performance is achieved, with nearly 11.631 m of horizontal RMS error and 10.158 m of vertical RMS error; improvements of 64.58%, 32.90% and 22.40% in horizontal positioning accuracy and 71.99%, 65.24% and 55.88% in vertical positioning accuracy are achieved compared to the C/N0, elevation and combined C/N0-elevation weighting schemes, respectively.
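A hedged sketch of the kind of combined C/N0-elevation stochastic model the paper benchmarks against can be written in a few lines; the coefficients a, b, c below are invented for illustration, not taken from the paper or any standard receiver model. Each observation's variance grows at low elevation and low C/N0, and the inverse variances form the weight matrix of a weighted least-squares solution.

```python
import numpy as np

def obs_sigma(elev_deg, cn0_dbhz, a=0.3, b=0.5, c=30.0):
    """Illustrative composite C/N0-elevation noise model (coefficients assumed):
    sigma^2 = a^2 + b^2 / sin^2(elev) + c * 10**(-cn0/10)."""
    el = np.radians(elev_deg)
    return np.sqrt(a**2 + b**2 / np.sin(el)**2 + c * 10.0 ** (-cn0_dbhz / 10.0))

elev = np.array([15.0, 35.0, 60.0, 85.0])   # satellite elevations (deg)
cn0 = np.array([30.0, 38.0, 45.0, 50.0])    # carrier-to-noise densities (dB-Hz)
sigma = obs_sigma(elev, cn0)
W = np.diag(1.0 / sigma**2)                 # weight matrix for weighted LS
print(np.round(sigma, 3))
```

The Hopular classifier in the paper replaces this fixed analytic mapping with a learned LOS/NLOS decision over the four listed features.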
Jakub Svatos and Jan Holub 2024 Meas. Sci. Technol. 35 076122
This paper analyses the efficiency of various frequency cepstral coefficients (FCC) in a non-speech application, specifically in classifying acoustic impulse events: gunshots. Various methods for such event identification are available, the majority based on time- or frequency-domain algorithms; however, both domains have their limitations and disadvantages. In this article, FCC features, which combine the advantages of both the frequency and time domains, are presented and analyzed. These features, originally developed for speech, have shown potential not only in speech-related applications but also in other acoustic applications. A comparison of the classification efficiency based on features obtained using four different FCC, namely mel-frequency cepstral coefficients (MFCC), inverse mel-frequency cepstral coefficients (IMFCC), linear-frequency cepstral coefficients (LFCC), and gammatone-frequency cepstral coefficients (GTCC), is presented. An optimal frame length for the FCC calculation is also explored. Various gunshots from short guns and rifles of different calibers are used, along with multiple acoustic impulse events similar to gunshots that represent false alarms. More than 600 acoustic event records have been acquired and used for training and validation of two designed classifiers, a support vector machine and a neural network. Accuracy, recall and the Matthews correlation coefficient measure the classification success rate. The results reveal the superiority of GTCC over the other analyzed methods.
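Of the four variants, the linear-frequency one (LFCC) is the simplest to sketch: power spectrum, triangular filterbank on a linear frequency axis, log, then a DCT-II. The frame length, filter count, and coefficient count below are arbitrary choices, not the paper's configuration.

```python
import numpy as np

def lfcc(frame, fs=48000, n_filters=20, n_coeffs=12):
    """Sketch of linear-frequency cepstral coefficients for a single frame:
    power spectrum -> linear triangular filterbank -> log -> DCT-II."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(frame.size))) ** 2
    freqs = np.fft.rfftfreq(frame.size, 1.0 / fs)
    edges = np.linspace(0.0, fs / 2, n_filters + 2)   # linearly spaced bands
    fbank = np.zeros((n_filters, freqs.size))
    for i in range(n_filters):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        fbank[i] = np.clip(np.minimum((freqs - lo) / (mid - lo),
                                      (hi - freqs) / (hi - mid)), 0.0, None)
    log_energy = np.log(fbank @ spec + 1e-12)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), n + 0.5) / n_filters)
    return dct @ log_energy

rng = np.random.default_rng(0)
# Stand-in impulse event: exponentially decaying noise burst
impulse = rng.normal(size=1024) * np.exp(-np.arange(1024) / 100.0)
coeffs = lfcc(impulse)
print(coeffs.shape)
```

MFCC, IMFCC and GTCC differ only in how the filterbank centre frequencies and shapes are placed (mel, inverted mel, or gammatone-like).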
Geoffrey de Villiers et al 2024 Meas. Sci. Technol.
Gravity measurements have uses in a wide range of fields, including geological mapping and mine-shaft inspection. The specific application in question sets limits on the survey and the amount of information that can be obtained. For example, in a conventional gravity survey at the Earth's surface, a gravimeter is translated on a two-dimensional planar grid, taking measurements of the vertical component of gravity. If, however, the survey points cannot be chosen so freely, for example if the gravimeter is constrained to operate in a tunnel where only a one-dimensional line of data can be taken, less information will be obtained. To address this situation, we investigate an alternative approach in the form of an instrument which rotates around a central point, measuring the gravitational potential on the boundary of a sphere around the centre of the instrument. The ability to record additional components of gravity by rotating the gravimeter will give more information than is obtained with the single measurement traditionally taken at each point of a survey, consequently reducing ambiguities in interpretation. We term a device which measures the potential, or its radial derivatives, around the surface of a sphere a gravitational eye. In this article we explore ideas of resolution and propose a thought experiment for comparing the performance of diverse types of gravitational eye. We also discuss radial analytic continuation towards sources of gravity and the resulting resolution enhancement, before finally discussing the possibility of using cold-atom gravimetry and gradiometry to construct a gravitational eye. If realised, the gravitational eye will offer revolutionary capability, enabling the maximum information to be obtained about features in all directions around it.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol.
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown suitable for such super-resolution tasks. However, a high number of high-resolution examples is needed, which may not be available for many cases. Moreover, the obtained predictions may lack in complying with the physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
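The physics-consistency idea behind a PINN loss can be illustrated without any network: evaluate the Burgers residual u_t + u u_x - nu u_xx on a candidate field and check how close it is to zero. Finite differences stand in for the automatic differentiation a PINN would use, and both fields below are illustrative (u = x/(1+t) happens to satisfy the equation exactly).

```python
import numpy as np

nu = 0.01
x = np.linspace(-1.0, 1.0, 101)
t = np.linspace(0.0, 1.0, 101)
X, T = np.meshgrid(x, t, indexing="ij")

u_exact = X / (1.0 + T)                    # analytic Burgers solution
u_wrong = np.sin(np.pi * X) * np.cos(T)    # arbitrary field, not a solution

def burgers_residual(u):
    """PDE residual u_t + u*u_x - nu*u_xx via central finite differences."""
    u_t = np.gradient(u, t, axis=1)
    u_x = np.gradient(u, x, axis=0)
    u_xx = np.gradient(u_x, x, axis=0)
    return u_t + u * u_x - nu * u_xx

r_exact = np.abs(burgers_residual(u_exact)).mean()
r_wrong = np.abs(burgers_residual(u_wrong)).mean()
print(f"residual (solution): {r_exact:.2e}, residual (arbitrary): {r_wrong:.2e}")
```

A PINN minimizes exactly this kind of residual, summed with a data-misfit term at the sparse measurement points, which is what lets it super-resolve without high-resolution references.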
Isaac Spotts et al 2024 Meas. Sci. Technol.
To improve the temporal resolution in an optical delay system that uses a conventional mechanical delay stage, we integrate an in-line liquid crystal (LC) wave retarder. Previous implementations of LC optical delay methods are limited due to the small temporal window provided. Using a conventional mechanical delay stage system in series with the LC wave retarder, the temporal window is lengthened. Additionally, the limitation on temporal resolution resulting from the minimum optical path alteration (resolution of 400 nm) of the conventionally used mechanical delay stage is reduced via the in-line wave retarder (resolution of 50 nm). Interferometric autocorrelation measurements are conducted at multiple laser emission frequencies (349, 357, 375, 393, and 405 THz) using the in-line LC and conventional mechanical delay stage systems. The in-line LC system is compared to the conventional mechanical delay stage system to determine the improvements in temporal resolution relating to maximum resolvable frequency. This work demonstrates that the integration of the in-line LC system can extend the maximum resolvable frequency from 375 to 3000 THz. The in-line LC system is also applied for measurement of terahertz pulses.
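The reported jump from 375 to 3000 THz is consistent with a simple Nyquist argument on the optical path step, assuming the path step maps to the temporal step as dt = dx/c (a single-pass assumption made here for illustration):

```python
c = 299_792_458.0            # speed of light (m/s)

def f_max_THz(step_m):
    """Nyquist-limited maximum resolvable frequency for a delay line sampled
    with optical path steps of step_m (two samples per optical period)."""
    dt = step_m / c          # temporal step implied by the path step
    return 1.0 / (2.0 * dt) / 1e12

print(f"{f_max_THz(400e-9):.0f} THz")   # mechanical stage, 400 nm steps
print(f"{f_max_THz(50e-9):.0f} THz")    # in-line LC retarder, 50 nm steps
```

The 400 nm mechanical step gives roughly 375 THz and the 50 nm LC step roughly 3000 THz, matching the figures in the abstract.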
Simon Burkhard and Alain Küng 2024 Meas. Sci. Technol. 35 075008
A method is presented for fitting the projected centres of spheres in cone beam x-ray imaging. By using a suitable coordinate system, the method allows direct and exact calculation of the sphere centre without fitting the projection shape with an ellipse and correcting from the ellipse centre to the sphere centre. Advantages in numerical implementation result from the number of unknown variables being reduced compared to ellipse fits. Additionally, the orientation of the detector relative to the x-ray source can be obtained from fitting the shapes of projections of multiple spheres without knowledge of the positions or dimensions of the spheres. The accuracy of the method is compared to other techniques using simulated x-ray projections.
Bartosz Czesław Pruchnik et al 2024 Meas. Sci. Technol.
Scanning probe microscopy (SPM) is a broad family of diagnostic methods. A common constraint of SPM is that it interacts only with the specimen surface, which is especially troublesome for complex volumetric systems, e.g. microbial or microelectronic ones. Scanning thermal microscopy (SThM) overcomes that constraint, since thermal information is collected from a broader volume. We present a transformer-bridge-based setup for resistive-nanoprobe microscopy. With a low-frequency (approx. 1 kHz) detection signal, the bridge resolution becomes independent of parasitic capacitances present in the measurement setup. We present a characterization and metrological description of the setup, with a system resolution of 2 mK at a sensitivity of 5 mV/K. The transformer bridge setup provides galvanic separation, enabling measurements in various environments, as pursued for the purposes of molecular biology. We present SThM measurement results for a high-thermal-contrast sample of carbon fibers in an epoxy resin. Finally, we analyze the influence of thermal imaging on topography imaging in terms of information channel capacity (ICC). We conclude that the transformer-bridge-based SThM system is a fully functional design operating with low driving frequencies and resistive thermal nanoprobes by Kelvin Nanotechnology.
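As a quick consistency check on the quoted figures, a 2 mK temperature resolution at a 5 mV/K sensitivity implies that the bridge electronics must resolve voltage changes on the order of 10 uV:

```python
sensitivity_V_per_K = 5e-3       # reported sensitivity: 5 mV/K
temp_resolution_K = 2e-3         # reported resolution: 2 mK

# Implied voltage resolution the transformer bridge must achieve
voltage_res = sensitivity_V_per_K * temp_resolution_K
print(f"{voltage_res * 1e6:.0f} uV")
```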