Automated analysis of "big data" nanoindentation maps
Thursday, March 16, 2023: 2:30 PM
202C (Fort Worth Convention Center)
Dr. Eric Hintsala, Ph.D., Bruker, Eden Prairie, MN
Mr. Mike Berg, Bruker, Eden Prairie, MN
Machine learning techniques are useful not only for predicting microstructures and processing routes, but also for enabling high-throughput evaluation of characterization data, such as nanoindentation. High-speed nanoindentation mapping can operate at rates of multiple tests per second to generate highly localized elasticity and plasticity data. This enables maps of thousands to even millions of data points that span millimeter length scales with nano-to-micro scale resolution. As such, samples with complex heterogeneous microstructures, such as additively manufactured multiphase alloys, weld zones, composites and more, can be evaluated. Nanoindentation allows these microstructural regions of the material to be distinguished based upon their mechanical properties, which can further be correlated with other structural and chemical analyses. These techniques can also be employed while controlling the sample environment, enabling mechanical mapping at temperature extremes from -120°C up to 1000°C. Operating at extreme temperature conditions is important not only for determining reliability under operando conditions but can also be useful for optimizing materials processing routes.
The caveat of working with datasets at this scale is that identifying similar microstructural regions and extracting useful statistics would require too much time with a manual selection approach. Clustering is a family of machine learning techniques that can employ a variety of algorithms to group similar data together. However, there are several important considerations with regard to optimizing the clustering approach, including the number of clusters into which to sort the data and the choice of sorting algorithm. To evaluate this, one can utilize a variety of statistical techniques to compare clustering algorithms based upon their bias and uncertainty, as well as explore the effects of the number of gathered data points on the overall measurement uncertainties.
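The grouping step described above can be sketched with a minimal k-means clustering of (modulus, hardness) pairs. This is an illustrative example, not the authors' implementation: the synthetic two-phase data, the property values, and the pure-NumPy k-means routine are all assumptions for demonstration. The within-cluster sum-of-squares sweep at the end is one common (elbow-style) heuristic for choosing the number of clusters.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points, and repeat."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Distance of every point to every centroid: shape (n_points, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties out
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical two-phase indentation map: (modulus GPa, hardness GPa) points
rng = np.random.default_rng(1)
phase_a = rng.normal([200.0, 8.0], [5.0, 0.5], size=(500, 2))  # stiff, hard phase
phase_b = rng.normal([100.0, 2.0], [5.0, 0.3], size=(500, 2))  # compliant matrix
X = np.vstack([phase_a, phase_b])

labels, centroids = kmeans(X, k=2)
for j, c in enumerate(centroids):
    frac = (labels == j).mean()
    print(f"cluster {j}: modulus~{c[0]:.0f} GPa, "
          f"hardness~{c[1]:.1f} GPa, area fraction {frac:.2f}")

# Elbow-style check on the number of clusters: the within-cluster
# sum of squares stops dropping sharply past the true phase count.
for k in range(1, 5):
    lab, cent = kmeans(X, k)
    wss = ((X - cent[lab]) ** 2).sum()
    print(f"k={k}: within-cluster SS = {wss:.0f}")
```

In practice, modulus and hardness sit on different numeric scales, so standardizing each feature before clustering (or using a scale-aware algorithm) is usually advisable; the synthetic phases here are separated enough that the raw values suffice.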