Combine Multiple Load Cases into a Block Cycle Schedule that Executes as a Single Endurica Job

Our most recent Users Survey garnered two surprising requests:

  • “Very interested in ability to run a single model with increasing load and combine with “Duty cycle” definition to predict/calculate expected lifetime.”
  • “Would like to see more on how to use duty cycles (loads) within one analysis rather than running at one load.”

Endurica already does this! Allow me to break down the process and show how easy it is.

Multiple loading cases for a specific duty cycle are often part of a fatigue analysis. With Endurica DT you can piece together a schedule of varying loads, displacements, temperatures, ozone exposure, and more.

I focus on load variability in this example. This duty cycle contains three unique loading conditions for a Simple Tension Strip: (A) 10mm displacement, (B) 20mm displacement, and (C) 35mm displacement.

Each load case is a separate FEA simulation. The strains are all exported separately for use with Endurica DT. Each FEA job is a single cycle of the desired loading.

Figure 1.  Contours of maximum principal engineering strain for each of load cases A, B and C. 

Here is a breakdown of the duty cycle for this analysis. One cycle, or “life,” is equivalent to 300 repeats of 10mm, 200 repeats of 20mm, and 100 repeats of 35mm.

Figure 2.  Block cycle schedule consisting of 300 repeats of load case A (displacement of 10mm), followed by 200 repeats of load case B (displacement of 20mm), and by 100 repeats of load case C (displacement of 35mm).

When setting up the Endurica input file, we specify the “schedule” under the “history” header. The number of “block_repeats” is then specified for each of the loading conditions. Once they are specified, you submit the Endurica DT job just as you would a single-load Endurica CL job. The resulting life is the total number of repeats of the full schedule until failure.

Figure 3.  Endurica input file json syntax defining the block cycle schedule. 
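As a rough illustration of how such a schedule could be assembled programmatically (only the “history”, “schedule”, and “block_repeats” names come from this post; the surrounding structure and file names are assumptions, not the verified Endurica DT syntax shown in Figure 3), a few lines of Python can build and write the input:

```python
import json

# Hypothetical sketch of a block cycle schedule definition.
# Only "history", "schedule", and "block_repeats" come from the post;
# the nesting and file names are illustrative, not the official syntax.
duty_cycle = {
    "history": {
        "schedule": [
            {"history_file": "case_A_10mm.csv", "block_repeats": 300},
            {"history_file": "case_B_20mm.csv", "block_repeats": 200},
            {"history_file": "case_C_35mm.csv", "block_repeats": 100},
        ]
    }
}

with open("duty_cycle.json", "w") as f:
    json.dump(duty_cycle, f, indent=2)
```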

Once submitted, Endurica provides a minimum life prediction of 2,944 cycles of the full schedule. That is 883,200 cycles at 10mm, 588,800 cycles at 20mm, and 294,400 cycles at 35mm.
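Those per-block totals follow directly from the predicted number of schedule repeats; a quick check:

```python
# Cycles accumulated at each displacement level over the predicted life
schedule_repeats = 2944
block_repeats = {"10mm": 300, "20mm": 200, "35mm": 100}
totals = {level: schedule_repeats * n for level, n in block_repeats.items()}
print(totals)  # {'10mm': 883200, '20mm': 588800, '35mm': 294400}
```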

Figure 4.  Contours of fatigue life, reported as repeats of the total block cycle schedule. 

Want more information? Check out more details of Endurica DT’s capabilities.

For tutorials, visit Endurica Academy.


2023 – a Year of Magnitude and Direction

2023 marked year 15 for Endurica.  If I had to pick one word to describe the past year, that word would be “vector”.  Because magnitude and direction.  😊

We updated our core value statement this year.  The first one I ever wrote as part of Endurica’s original business plan listed 3 values: technical leadership, customer focus, and trustworthiness.  Those values served us well for many years and in many ways shaped who we have become.  But it was important this year to take stock again.  We’ve grown 8-fold since I wrote those down!  So our team spent many hours revisiting our shared values and deliberating over which will best define our culture and steer us right going forward.  In the end, we decided to keep the first 3, and we added 3 more:  embrace the grit, make an impact, and better every day.

We also completed an exercise to articulate what makes Endurica truly unique in the CAE / durability simulation space.  The 3 words we chose are… Accurate, Complete, and Scalable.

  • Accurate refers to the accurate material models that capture rubber’s many “special effects”, the accurate critical plane analysis method for analyzing multiaxial history, the accurate handling of nonlinear relationships between global input load channels and local crack experiences, and the extensive set of validation cases that have demonstrated our accuracy over the years. Nobody offers a more accurate solution for rubber durability.
  • Complete refers to our complete coverage of infinite life, safe life and damage tolerant approaches to testing and simulation. It refers to feature completeness that enables users to account for nearly any material behavior under nearly any service conditions.  Finally, it refers to the documentation, the materials database, and the examples we distribute with the software and with our webinar series.  Nobody offers a more complete solution for rubber durability.
  • Scalable refers to our capacity to apply our solutions efficiently in all circumstances. Scalability is the training we provide so that users can learn our tools quickly.  Scalability is access to powerful, ready-to-use workflows right when you need them.  Scalability is the modular approach we take to material testing and modeling so that simple problems can be solved cheaply and complex problems can be solved accurately in the same framework.  Scalability is our multi-threading that allows job execution time to be accelerated to complete impactful analysis on tough deadlines.  Nobody offers a more scalable solution for rubber durability.

2023 was not all navel-gazing and new marketing.  We also had magnitude and direction in other areas.

Top 10 Code Developments:

  1. New Endurica Architecture: After several years of development and a soft launch under the Katana project name, we finally completed our migration to the new architecture.  The new architecture provides a huge speed advantage for single thread and now for multithread execution. It uses a new input file format (.json). The json format makes it easier than ever for users to build customized and automated workflows via Python scripting.
  2. Sequence Effects: Sometimes the order of events matters to durability, and sometimes it doesn’t. We introduced Steps and Blocks to our input file, giving users complete control over the specification of multi-block, multi-step scheduling of load cases.  There is also a new output request that came out of this work: residual strength.
  3. EIE: 6 channels and support for RPC: Support for 6 channels of load input was one of our most highly requested new features.  Fast growing use of this feature led to further enhancements of the workflow (support for rpc file format, studies of map building techniques), and new recommendations on how to implement boundary conditions for specified rotation histories in explicit and implicit finite element models.
  4. Queuing: Design optimization studies need efficient management and execution of multiple jobs. Endurica’s software license manager now supports queueing for licenses. Queuing allows a submitted job to automatically wait to start until a license is available, instead of the prior behavior of exiting with a license error. Now you can submit many jobs without worrying about license availability.
  5. Haigh Diagram Improvements: We implemented an improved discretization of the Haigh diagram, and parallelized its evaluation. Now you get much nicer looking results in a fraction of the time. For details, check out our blog post on Haigh diagrams and also read about other improvements like axis limit setting and smoother contour plots.
  6. Viewer image copy: There is now a button! It’s easier than ever to get your images into reports.
  7. Documentation Updates: We have been focusing on improving documentation this year. There are many new sections in the theory manual and user guide, as well as a getting started guide and more examples.  Stay tuned for many more examples coming in 2024!
  8. User Defined Planes: It is now possible to define your own set of planes for the critical plane search. One example where you might want to do this would be the situation where you would like to refine the critical plane search on a limited domain of the life sphere.
  9. New Database Materials: We added 7 new carbon black and silica filled EPDM compounds to the database. We are now up to 42 unique rubber compounds in the database.
  10. Uhyper Support: The new architecture now supports user-defined hyperelasticity. If you have a Uhyper subroutine for your finite element analysis, you can use it directly with Endurica.

 

Testing Hardware

We completed the acquisition and installation at ACE labs of a Coesfeld Instrumented Cut and Chip Analyser (ICCA).  The ICCA provides unmatched measurement and control of impact conditions and a way to evaluate rubber compounds for their resistance to cutting and chipping.

 

Applications, Case Studies, Webinars

Never underestimate the students! We were blown away by the work of undergraduates at the University of Calgary with our tools and Ansys.  The students designed an airless tire, completing durability simulations using Endurica software within the scope of a senior design project. They were able to Get Durability Right on a short timeline and a student budget. Check out their multi-objective, high-performance design project here.

Analyzing what happens to tires as they take on the most celebrated testing track in the world might have been the funnest project Endurica’s engineers tackled in 2023. We presented the technical details at The Tire Society annual meeting and more in a followup webinar. An extensive Q&A session followed, and I loved the final question: “So, how long before we have a dashboard display of ‘miles to tire failure’ in our cars?”  Bring it.  We are ready!

Our Winning on Durability webinar series hit a nerve with the Metal Fatigue DOES NOT EQUAL Rubber Fatigue episodes on mean strain (the tendency of larger mean strains to significantly INCREASE the fatigue life of some rubbers!) and linear superposition (for converting applied load inputs to corresponding stress/strain responses). The great response has led to a third installment on the differences between rubber and metal fatigue, with an upcoming presentation on temperature effects.


The New Endurica Architecture – It’s Time to Migrate

Our transition to a new software architecture is a vital move in navigating the dynamic technological landscape. In a recent webinar, we discussed the aspects of this transition, providing insights into the why and how of adopting a new architectural approach despite having a functional existing one. This post will highlight the motivations behind the shift, the present status of feature migration, alterations in the latest software release, and an overview of projects within this new framework.

The Rationale and Benefits

Why Overhaul?

The complete rewrite of our software’s architecture was not a decision made lightly. It was driven by pivotal motivations, primarily the need for speed and efficiency in executing computing processes. Speed is invariably tied to productivity and operational fluency in software and technology. The plot below tells a compelling story: the old architecture (the blue line) exhibited a static runtime regardless of the number of threads engaged, revealing its inability to utilize parallel processing. In contrast, the new architecture demonstrates a significant speed-up even with a single thread, and scales to many multiples of that speed, depending on thread capacity.

Solving Larger Problems

The pursuit of faster execution isn’t arbitrary; it is intrinsically linked to our objective of solving larger problems. With larger tasks and projects on the horizon, scaling up and utilizing more CPU threads became essential. In a job run on a virtual machine with 96 available CPU threads, runtime decreased linearly with increasing thread count (until certain hardware limitations were met), demonstrating the new architecture’s adept handling of larger jobs (see plot below). The capability to scale and manage tasks of escalating complexity and size was a crucial driver for our transition.

Enhancing Integrations and Streamlining Workflows

We also turned our attention toward improving the user experience in interfacing with our software. Our prior use of the HFI and HFO file formats, while functional, presented numerous challenges regarding modification and integration, particularly when scripted modifications were necessary. The new architecture employs the JSON file format, widely recognized for its robustness and versatility across industries and applications. With JSON, modifying job inputs and managing data becomes much simpler, as a Python script example shows, wherein the entirety of job modifications, inputs, and submissions can be handled with a handful of lines of code.
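As a sketch of that kind of scripted workflow (the file names, the “description” key edit, and the exact command-line form are assumptions for illustration, not the webinar’s example), a short Python script might read a template input, modify it, and submit each job:

```python
import json
import subprocess

# Hypothetical template input file; any JSON job definition would work here.
with open("template.json") as f:
    template = json.load(f)

for displacement in (10.0, 20.0, 35.0):
    job = dict(template)
    # Stand-in edit: in a real study you would change whichever inputs you
    # are varying ("description" is an assumed key used only for illustration).
    job["description"] = f"strip displaced {displacement} mm"

    input_name = f"strip_{int(displacement)}mm.json"
    with open(input_name, "w") as f:
        json.dump(job, f, indent=2)

    # Submit to the solver; the executable name and argument form may differ
    # slightly depending on your installation.
    subprocess.run(["endurica", input_name], check=True)
```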

Improved Usability and Real-Time Error Checking

In an effort to enhance usability and mitigate the common issue of erroneous entries and syntax use, the new architecture, especially when utilized with a text editor like VS Code, offers real-time checking and syntax suggestions. This not only makes job submission more precise but also substantially reduces the trial-and-error cycle, saving valuable time. Additionally, upon job submission, the new architecture performs rigorous error and syntax checks, ensuring smooth execution and user experience.

Comprehensive Feature Migration: A Successful Transition

Reflecting on the past two years, we have accomplished a near-complete feature migration to the new software architecture, with 99% of features now successfully transitioned. This includes all outlined output requests, material models, history types, and various procedures.
Our commitment to supporting multiple interfaces remains, with support for Abaqus, Ansys, and Marc using the new architecture. Furthermore, Endurica Viewer is fully compatible, providing enhanced visualization capabilities under the new system.
The comprehensive migration and the incorporation of new functionalities mark the new architecture as fully operational and ready for use across all undertakings.

Implementation of Directory and Execution Changes in Endurica Software

Refined Directory Structure

To provide a seamless transition and user experience with the upgraded Endurica software, modifications have been made to the directory structure. The new architecture, labeled “Katana” during its development phase, has now been integrated into the top-level Endurica directory. With the most recent software installation, users will observe that the top-level CL and DT directories contain the new architecture and that the Katana directory has been removed.

Consequently, when we refer to Endurica CL and Endurica DT moving forward, it denotes reference to the new architecture.

Accommodating Transition: The Legacy Folder

Acknowledging that the transition to the new architecture may not be instantaneous for all users, the old architecture remains available in a “Legacy” folder. Though it requires navigating into subfolders, it stays accessible for users who need more time to transition fully to the new structure.

Executable Naming Conventions

In tandem with the directory adjustments, executable naming conventions have been revised to be more intuitive. Previously, “endurica” was employed to submit fatigue analyses in the old architecture, while “katana” pertained to the new. To streamline, “katana” has been rebranded as “endurica” for submitting the JSON input file, with the legacy version adopting the name “endurica-legacy.” It is crucial to note that users accustomed to utilizing “katana” may continue to do so — “endurica” and “katana” will run the same executable. However, usage of the old architecture requires invoking a new “endurica-legacy” command.

Delivering the Unattainable with Endurica’s New Software Architecture

Embarking upon two recent projects with our new computational architecture, we explored the realms of virtual simulation and data management in tire durability and elastomeric mount durability performance.

Project 1: Tire Durability with Dassault Systèmes

In collaboration with Dassault Systèmes, a multi-body dynamic simulation was conducted to compute tire durability at the Nürburgring circuit. SIMPACK was used to generate virtual road load data, and Endurica EIE and Abaqus were used to establish a workspace map of driving conditions. The resulting data were processed through 176,000 time steps to evaluate the tire’s fatigue life. The analysis predicted a fatigue life of 214 laps and pinpointed the most critical location around the tire bead edge.

Project 2: Durability of an Elastomeric Mount with Ford

Undertaken with Ford, the second project examined the durability performance of an elastomeric mount, involving a behemoth of data: 144 load history files, each containing tens or hundreds of thousands of time points, for more than 15 million time points in total. Using a similar approach to the Nürburgring project, Endurica EIE and Abaqus were used together to generate the strain history data. The analysis focused on membrane elements on the mount’s free surfaces to precisely gauge surface strains. The project succeeded in qualifying the part with a fatigue life of 9.4 repeats of the entire schedule, against a requirement of just one repeat.

These projects underscored the capabilities of our new architecture, navigating through large data sets and providing tangible insights in significantly reduced timeframes compared to the old architecture. In essence, the implementation of the new architecture has not only streamlined our processes but also expanded our horizons in handling large data and achieving nuanced analyses in our projects.

Summary

The new Endurica CL and Endurica DT architectures have now fully replaced our old system, maintaining the accuracy our users expect while introducing an easier, more powerful, and scalable solution. Everything has been successfully migrated over to this complete solution. With its enhanced capabilities, it addresses problems that were previously too large or took too long to solve, enabling our customers to tackle challenges they might not have considered before. The ability to solve unprecedented problems is just one more example of our steadfast commitment to providing accurate, complete, and scalable solutions.


License Queueing

Design optimization studies are driving a need to support the efficient management and execution of many jobs.  This is why we are announcing that Endurica’s software license manager now supports queueing for licenses. This allows a submitted job to automatically wait to start until enough licenses are available, instead of the prior behavior of exiting with a license error. Now you can submit many jobs without worrying about license availability.

License queueing is only available for network licenses (not node-locked). It is currently supported for Katana CL/DT jobs and EIE jobs submitted from a command prompt.

To enable queueing, set the environment variable RLM_QUEUE to any value. This environment variable must be set on the client machine (not the license server).
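If you launch jobs from a script rather than typing commands by hand, the variable just needs to exist in the submitting process’s environment. A minimal sketch (the job file name is a placeholder, and the solver command depends on your installation):

```python
import os
import subprocess

# Enable license queueing for jobs launched from this script.
# RLM_QUEUE only needs to be defined; its value does not matter.
env = dict(os.environ, RLM_QUEUE="1")

# "my_job.json" is a placeholder input file name.
subprocess.run(["endurica", "my_job.json"], check=True, env=env)
```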

To learn more about license queueing, search for “How to Queue for Licenses” in the RLM License Administration documentation here: https://www.reprisesoftware.com/RLM_License_Administration.pdf

 


The View on ‘22 – The Top 10 Happenings for Endurica in 2022

  1. Expanded our team! We welcomed 35-year Goodyear veteran Tom Ebbott to our team as Vice President, and at one point we had 3 interns working with us this year.  It wasn’t all hard work – we enjoyed our first company canoe trip / picnic in July.
  2. Solved much bigger problems. We set a record this summer for the largest rubber fatigue analysis ever. Ford Motor Company gave us multi-channel recorded road load histories from the full schedule of 144 distinct test track events that they use to qualify a motor mount for durability. We used Endurica EIE to map the load space and generate 3.2 Terabytes of stress-strain history for fatigue analysis. The new Katana multi-threading architecture of our Endurica CL fatigue solver enabled us to process 152k elements through all 15,693,824 timesteps of the schedule.  Check out our presentation at RubberCon 23 in Edinburgh, UK.
  3. Made analysis of block cycles easier. The Endurica CL and DT solvers’ Katana architecture now enables multiple blocks of load history to be specified in a single analysis.
  4. Added a Haigh diagram visualization to the Endurica Viewer. Use it to quickly understand your material’s dependence of fatigue life on mean strain and strain amplitude.
  5. Implemented a channel reduction algorithm to Endurica EIE. It will analyze your multi-channel loading history to check for opportunities to reduce the dimensionality of your analysis through a change of coordinate basis.  Often, a 6-channel signal can be reduced to 3, 4 or 5 channels, greatly reducing computational requirements for building the map for EIE’s interpolation process.
  6. Expanded our licensing model to offer local, regional and global options. If your organization uses Endurica at multiple sites around the world, ask us about the advantage of regional or global licenses. These licenses allow any number of users to share a pool of solver threads for maximum flexibility and compute power.
  7. Added an experimental characterization for ozone cracking. Ozone is a trace gas that strongly reacts with some rubbers to produce surface cracking. It limits useful product life, even for loads below the fatigue threshold. Our testing method quantifies the critical energy and rate of ozone attack, giving you the parameters you need to set up the ozone attack model in your Endurica CL / DT analyses. Perfect for analysis of tire sidewall endurance.
  8. Were honored when our founder and president, Will Mars, received the Herzlich Medal – the highest award in the tire industry – at the International Tire Exhibition and Conference. This honor is bestowed every other year to recognize an individual whose career and accomplishments have changed the tire industry for the better and left a lasting impact on tire design, development and manufacturing.
  9. Strengthened our documentation. New and experienced users alike will find it easier than ever to find the theory, procedures and examples that will yield rapid success in applying our software workflows. Check out the new sections on Mullins Effect, Ageing, Safety Factor, and Block Cycle analysis.
  10. Celebrated our clients’ success. Technetics Group (Maestral® R&D Sealing Laboratory, Pierrelatte, France) and Delkor Rail (New South Wales, Australia) shared their Winning on Durability success in case studies.

Use This One Simple Trick to Ensure Rubber Part Durability

We’ve just added a new output to the Endurica fatigue solver: Safety Factor.  This feature makes it simple to focus your analysis on whether cracks have the minimum energy required to grow. Safety Factor is a quick and inexpensive way to identify potential failure locations.  It minimizes the number of assumptions you need to defend, and it is backed by hard science.  You don’t need to measure or explain the many influences that together determine how fast cracks grow.  You don’t need lengthy materials characterization experiments that take days or weeks.  You do need to know your material’s Intrinsic Strength T0 (ie Fatigue Threshold) and its crack precursor size c0. The test takes about an hour using the Coesfeld Intrinsic Strength Analyser.

The Safety Factor S is computed as the ratio of T0 to the driving force T on a potential crack precursor.  If the value of the Safety Factor S = T0/T is greater than 1, it indicates the margin by which crack growth is avoided.  If S is less than 1, it indicates that crack growth is inevitable. The calculation of the Safety Factor includes a search for the most critical plane, as we do for our full fatigue life computations.
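Conceptually, the reported value is the worst case over the candidate planes. The sketch below is a simplified stand-in, not Endurica’s implementation; in practice the driving force on each plane comes from the solver’s critical plane analysis, and the numbers here are purely illustrative:

```python
def safety_factor(T0, driving_forces):
    """Worst-case safety factor S = T0 / T over candidate crack planes.

    T0             -- intrinsic strength (fatigue threshold) of the material
    driving_forces -- peak energy release rate T on each candidate plane
    """
    T_critical = max(driving_forces)  # the most damaging plane governs
    return T0 / T_critical

# Illustrative numbers only
print(safety_factor(T0=0.05, driving_forces=[0.010, 0.019, 0.006]))  # about 2.6
```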

Although the Safety Factor can’t tell you how long a part will endure, it nevertheless offers great utility.  You can make a contour plot showing the locations in your part where the Safety Factor is lowest.  This is a quick and inexpensive way to identify potential failure locations.  You can make statements about the reserve capacity of your design that are easy to communicate and understand with a wide audience.

A vibration isolation grommet operating under small displacement (left) and under large displacement (right).

The images above show a vibration isolation grommet operating under small (Safety Factor 2.6) and large displacements (Safety Factor 0.83).  Color contours indicate the Endurica-computed Safety Factor, and use the same scale for both images.  Large Safety Factors are shown in blue.  Safety Factors approaching 1 are shown in red.  Safety Factors smaller than 1 are indicated in black.  These results show that the grommet can be expected to operate indefinitely under the small displacements, but that large displacements will produce cracks at some point, in the regions colored black.


Durability by Design on Any Budget


So, you’ve got a tricky durability problem to solve, a budget, and a deadline.  Let’s look at a helpful framework for sorting which Endurica workflows you need.  In the grid below, each row represents a potential approach you can take.  The approaches are, in order of increasing complexity and cost, the Infinite Life approach, the Safe Life approach, the Damage Tolerant approach, and the Fail Safe approach.

Endurica Durability Workflows

The Infinite Life approach is by far the simplest approach.  Here, we say that damage will not be allowed at all.  All locations in the part must operate, at all times, below the fatigue limit (ie intrinsic strength) of the rubber.  The required material testing is minimal: we need only know the fatigue limit T0 and the crack precursor size c0.  We avoid the question of how long the part may last, and we focus on whether or not we can expect indefinite life.  We report a safety factor S indicating the relative margin (ie S = T0 / T) by which each potential failure location avoids crack development.  When S>1, we predict infinite life.  For S<=1, failure occurs in finite time and we must then go on to the next approach…

In the Safe Life approach, the chief concern is whether or not the part’s estimated finite life is adequate relative to the target life.  The material characterization now becomes more sophisticated.  We must quantify the various “special effects” that govern the crack growth rate law (strain crystallization, temperature, frequency, etc.).  We consider the specific load case(s), then compute and report the number of repeats that the part can endure.  If the estimated worst-case life is greater than the target life then we may say that the design is safe under the assumptions considered.  If not, then we may need to increase the part’s load capacity, or alternatively to decrease the applied loading to a safe level.  In critical situations, we may also consider implementing the next level…

The Damage Tolerant approach acknowledges that, whatever the reasons for damage, the risk of failure always exists and therefore should be actively monitored.  This approach monitors damage development via inspection and via tracking of accrued damage under actual loading history.  A standard nominal load case may be assumed for the purpose of computing a remaining residual life, given the actual loading history to date.  Changes in material properties due to cyclic softening or ageing may also be tracked and considered in computing forecasts of remaining life.

The Fail Safe approach takes for granted that failure is going to occur, and obliges the designer to implement measures that allow for this to happen safely.  This can take the form of a secondary / redundant load path that carries the load once the primary load path has failed.  It can take the form of a sacrificial weak link / “mechanical fuse” that prevents operation beyond safe limits.  It can take the form of a Digital Twin that monitors structural health, senses damage, and requests maintenance when critical damage occurs.

The last three columns of the grid show which Endurica fatigue solver workflows align with each design approach.  The Endurica solvers give you complete coverage of all approaches.  Whether you need a quick Infinite Life analysis of safety factors for a simple part, or deep analysis of Damage Tolerance or Fail Safety, or anything in-between, our solvers have just what you need to get durability right.


Road Loads to Block Cycle Schedule

Road load signals are notoriously difficult to work with. They contain so many time increments that modeling them directly in FEA is impractical. It is difficult to tell which portions of the loading do the most damage, and experimental fatigue testing of the full, complex road load signal would be too time-consuming and costly. For these reasons, simplifying road loads into block cycle schedules has become the gold standard for working with road load signals. Experimental testing and FEA modeling are far more manageable with a block cycle schedule than with the full road load signal. Traditional methods of converting a road load signal to a block cycle schedule often fall short. Endurica recently added a built-in method to the Endurica CL software that uses the power of critical plane analysis and rainflow counting to automate block cycle creation.

Let us dive into the process of block cycle creation using the example of a bushing and a road load history. The road load history shown below contains loadings in 3 axes over time.

 Road Load Time History Graph

The first step in creating the block cycle schedule is solving for the strain history over the entire road load history. Fortunately, Endurica EIE comes to the rescue for solving the long strain history. The road load time history does not need to be modeled directly in FEA. Instead, a map is run in FEA to solve for strains within the bounds of the road loading, and Endurica EIE quickly interpolates the strains from this map to create the full loading strain history. In the animation below, the map points solved in FEA are shown as black dots and the bushing traces out the path of the map.

Endurica EIE quickly interpolating the strains from this map to create the full loading strain history

After the full road load strain history has been solved in EIE, the fatigue life for the road load signal is ready to be analyzed in CL. The fatigue analysis of the entire road load signal gives valuable insight for finding the critical location, developing the block cycle, and validating the fatigue life of the block schedule against the fatigue life of the road load. The critical location of the bushing is shown in the image below:

The fatigue analysis of the entire load signal shows the critical location along with an estimated fatigue life

At the bushing critical location, all damaging events on the critical plane are taken into account when creating the block cycle schedule. The events are grouped into bins categorized by two parameters: peak CED and R ratio. The analyst remains in control by selecting the number of bins. Each bin contains events with similar peak CED and R ratio that fall within the bounds of the bin. Within each bin, a representative cycle is identified that, when repeated in the block schedule, will contribute at least as much damage as all the various events in the bin. This selection process produces a conservative result, ensuring that the block cycle will be at least as damaging as the road load.

 Grouping Damaging events into Bins

The bin results from the original history show the number of times each bin is repeated and the total damage from each bin. At this point, bins that contribute insignificant damage can safely be eliminated from the block cycle schedule to save testing time and complexity without changing the results.

Comparison of Original history to Block Schedule
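The grouping-and-pruning logic described above can be sketched generically (this is not Endurica CL’s internal code; the event list, bin counts, and damage values below are made-up placeholders):

```python
import math
from collections import defaultdict

# Hypothetical rainflow-counted events on the critical plane:
# (peak_ced, r_ratio, damage_per_occurrence, occurrences)
events = [
    (0.80,  0.0, 2.1e-5,  40),
    (0.75,  0.1, 1.8e-5,  65),
    (0.20, -0.2, 1.0e-6, 900),
]

N_CED_BINS, N_R_BINS = 4, 4
ced_max = max(e[0] for e in events)
r_min, r_max = min(e[1] for e in events), max(e[1] for e in events)

# Group events into (peak CED, R ratio) bins
bins = defaultdict(list)
for peak_ced, r, damage, occurrences in events:
    i = min(int(N_CED_BINS * peak_ced / ced_max), N_CED_BINS - 1)
    j = min(int(N_R_BINS * (r - r_min) / (r_max - r_min)), N_R_BINS - 1)
    bins[(i, j)].append((peak_ced, r, damage, occurrences))

# One representative cycle per bin, repeated enough times to contribute at
# least the bin's total damage (the conservative choice described above).
block_schedule = []
for members in bins.values():
    rep = max(members, key=lambda m: m[2])
    bin_damage = sum(d * n for _, _, d, n in members)
    block_schedule.append(
        {"peak_ced": rep[0], "r_ratio": rep[1], "repeats": math.ceil(bin_damage / rep[2])}
    )

# Bins that contribute insignificant damage could be dropped here before testing.
print(block_schedule)
```

Because each representative cycle carries at least its bin’s total damage, the assembled block schedule errs on the conservative side, mirroring the behavior described above.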

 

The simplified block schedule is then modeled to check its fatigue life against the full road load signal. The results show that the critical location and fatigue life have been accurately maintained in the block schedule.

 Road Load vs. Block Cycle Fatigue + Damage Spheres

This automated block cycle creation procedure succeeded in producing a block cycle with the same critical location and very similar fatigue life. The block cycle selection was able to re-create the full road load signal using only three different loading blocks.

Endurica CL automated block cycle creation lets you take the guesswork out of block cycle creation and harness the proven power of Endurica fatigue analysis technology to get durability right.


Endurica 2019 Updates Released

Endurica CL

Endurica CL received many improvements over the past year.  These improvements cover a wide variety of different aspects of the software:

Reducing Run-time

Our investments in code benchmarking and performance are paying off! We’ve been able to make internal optimizations to the code that reduce analysis run-times by approximately 30%. 

HFM and HFO Formatting

To make our output cleaner and more meaningful, small changes have been made to the number formatting in the HFM and HFO files.

All results reported in scientific notation are now formatted in standard form where the leading digit before the decimal point is non-zero (previously the leading digit was always zero).  This gives one more significant figure to all the results without increasing the output file size.

The shortest fatigue life for the analysis is now printed to the console and HFM file with six significant figures.  Previously, the life was reported with only two significant figures.  This change makes it easier to quickly compare two different analyses, especially when the analyses have similar fatigue lives.

Signal Compression

New features have been added to Endurica CL to make it easier to process and analyze histories.  Using the new COMPRESS_HISTORY output request, you can generate new HFI files containing compressed versions of your original history.  The generated history is composed of the rainflow counted cycles from your original history.  An optional output parameter allows you to further compress the signal by specifying the minimum percentage of the original damage that should be retained in the new history.  When keeping a percentage of the damage, the cycles are sorted from most to least damaging so that the generated history always contains the most damaging cycles and discards the least damaging cycles.

This output request is useful when you want to reduce a long complex history while keeping the important damaging cycles.  This can reduce file sizes and simplify experimental testing setups as well as give you a deeper insight into your duty cycle. 
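The damage-retention idea behind that optional parameter can be pictured with a small generic sketch (not Endurica’s code; just the keep-the-most-damaging-cycles selection described above):

```python
def keep_most_damaging(cycles, retain_fraction):
    """Keep the most damaging rainflow cycles until the requested share of
    the original total damage is retained.

    cycles          -- list of (cycle_id, damage) pairs from rainflow counting
    retain_fraction -- e.g. 0.99 to keep 99% of the original damage
    """
    total = sum(d for _, d in cycles)
    kept, running = [], 0.0
    for cycle_id, damage in sorted(cycles, key=lambda c: c[1], reverse=True):
        kept.append(cycle_id)
        running += damage
        if running >= retain_fraction * total:
            break
    return kept

print(keep_most_damaging([("A", 5.0), ("B", 3.0), ("C", 0.1)], 0.95))  # ['A', 'B']
```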

Endurica DT

Endurica DT is our incremental fatigue solver.  With Endurica CL, your analysis starts at time zero and integrates the given strain history until end-of-life.  With Endurica DT, you can start and end at a series of times that you specify.  This lets you accumulate many different histories and loading conditions repeatedly until end-of-life.


Endurica DT gives you new ways to control your analyses, and we have been using it over the past year in many applications.  For example, fatigue results for laboratory test procedures that involve multiple loading stages (such as FMVSS No. 139 for light vehicle tires, or block cycle schedules for automotive component applications) can be fully simulated using Endurica DT. You can also compute residual life following some scheduled set of load cases. 

Endurica DT can also be used to accumulate the actual loads measured on a part in situ.  This allows you to create a digital twin that keeps a near real-time record of the part’s current simulated damage state and the part’s remaining fatigue life. 
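As a deliberately simplified stand-in for that bookkeeping (Endurica DT integrates the damage state incrementally from the actual strain history, whereas this toy uses a linear accumulation rule, and all names and numbers are hypothetical):

```python
class FatigueTwin:
    """Toy damage accumulator for a digital twin.

    A linear damage rule is used purely for illustration; Endurica DT's
    incremental solution integrates the material's crack growth behavior
    rather than summing life fractions like this.
    """

    def __init__(self):
        self.damage = 0.0  # 0 = pristine, 1 = end of life

    def add_block(self, repeats, life_under_this_block):
        """Accumulate a measured block of service loading."""
        self.damage += repeats / life_under_this_block

    def remaining_repeats(self, life_under_this_block):
        """Remaining repeats of the given block before end of life."""
        return max(0.0, 1.0 - self.damage) * life_under_this_block

# Hypothetical measured service blocks
twin = FatigueTwin()
twin.add_block(repeats=5000, life_under_this_block=2.0e5)
twin.add_block(repeats=1200, life_under_this_block=3.0e4)
print(twin.damage, twin.remaining_repeats(2.0e5))
```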

Stiffness Loss Co-Simulation

Endurica DT now includes a stiffness loss co-simulation workflow that allows you to iteratively update the stiffness of your part over a series of time steps, based on the amount of damage occurring in the part.  The stiffness loss is computed per element so you will have a gradient where the more damaged regions become softer.  Endurica DT computes the current fraction h of stiffness loss based on the stress and strain, and the finite element solver computes the stress and strain based on the current fractions of stiffness loss. The capability accurately predicts the effects of changing mode of control during a fatigue test.  For example, stress controlled fatigue tests show shorter life than strain controlled fatigue tests. 

Endurica DT now includes a stiffness loss co-simulation workflow
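The back-and-forth of the co-simulation loop can be pictured roughly as follows. Everything here is a placeholder: the real coupling runs through the finite element solver and Endurica DT, not these toy functions.

```python
import random

def run_fe_step(h):
    """Placeholder for the finite element solve: returns a per-element
    'strain severity' that grows as elements soften (illustrative only)."""
    return [random.uniform(0.0, 1.0) * (1.0 + hi) for hi in h]

def update_stiffness_loss(h, strains, rate=0.01):
    """Placeholder for the fatigue solver's damage-based update: elements
    seeing more severe strain accumulate more stiffness loss."""
    return [min(1.0, hi + rate * s) for hi, s in zip(h, strains)]

n_elements, n_steps = 10, 5
h = [0.0] * n_elements                      # stiffness loss fraction per element
for step in range(n_steps):
    strains = run_fe_step(h)                # FE solve using the softened stiffnesses
    h = update_stiffness_loss(h, strains)   # fatigue solver updates h from the response
print(h)
```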

Endurica EIE

Endurica EIE, our efficient interpolation engine, quickly generates long, complex histories using a set of precomputed finite element results (i.e. the ‘nonlinear map’).  We first launched EIE last year with the ability to interpolate 1-channel and 2-channel problems.  We have recently added the ability to interpolate 3-channel problems. 

In the example below, EIE was benchmarked with three channels.  Three separate road load signals were computed from a single nonlinear map.  With EIE, you don’t need to rerun the finite element model for each history.  Instead, EIE interpolates from the nonlinear map, providing the equivalent results with a 60x speed-up in compute time.

Endurica EIE interpolates from the nonlinear map, providing the equivalent results

EIE – Effect of Map Discretization on Interpolation Accuracy

Overview

The accuracy of the interpolated results performed by EIE is dependent on the discretization of the map. Specifically, the results will become more accurate as the map’s point density increases. This study uses a simple 2D model to quantify the accuracy of results interpolated from maps with different densities.

Model

A 1 mm x 1 mm rubber 2D plane strain model with two channels is used. The square’s bottom edge is fixed and the top edge is displaced in the x and y directions as shown below. The x displacement corresponds to channel 1 and the y displacement corresponds to channel 2. The working space of the model is defined by the x displacement ranging from 0 mm to 0.8 mm and the y displacement ranging from -0.08 mm to 0.8 mm.

Plane strain model with two channels

The model is meshed with 100 8-node, quadrilateral, plane strain, hybrid, reduced integration elements (shown below).

100 element mesh

History

As the benchmark reference solution, we define a history that covers the model’s entire working space with a high density of points. An evenly spaced grid of 128×128 points, for a total of 16,384 points, is used as the history (shown below). It is important that this history is more refined than the maps that we will create, to ensure that we are testing all regions of our maps.

128×128 history points

These points are used to drive the finite element model and the results are recorded. For this study, we record the three non-zero strain components and the hydrostatic pressure (NE11, NE22, NE12, and HP) for each element at each time point. In summary, there are 4 result components, 100 elements, and 16384 time increments. This set of results is the reference solution since it is solved directly by the finite element model. We will compare this solution to our interpolated results to measure our interpolation accuracy.

Maps

Six maps with different levels of refinement are used to compute interpolated results for our history points. All of the maps structure their points as an evenly spaced grid. The first map starts with two points along each edge. With each additional map, the number of points along each edge is doubled so that the sixth and final map has 64 edge points. The map points for the six maps are shown below.

Six maps with increasing levels of refinement

The map points for these six maps are used to drive the finite element model’s two channels. The strain and hydrostatic pressure results from the FEA solutions are recorded at each map point in a similar way to how the results were recorded for the FEA solution that was driven by the history points. Next, EIE is used six times to interpolate the map point results at each resolution onto the high resolution reference history points.

We now have seven sets of history results: the true set of results and six interpolated sets of results.
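For readers who want to reproduce the flavor of this comparison outside of EIE, the sketch below uses a synthetic response function in place of the FEA results and SciPy’s regular-grid linear interpolator as a stand-in for EIE’s local linear interpolation:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic stand-in for one result component of one element, defined on the
# model's working space (x: 0..0.8 mm, y: -0.08..0.8 mm).
def response(x, y):
    return np.sin(3 * x) * np.cos(2 * y) + 0.5 * x * y

# Coarse "map" grid (8 edge points here) and dense "history" grid (128 x 128).
xm = np.linspace(0.0, 0.8, 8)
ym = np.linspace(-0.08, 0.8, 8)
map_values = response(*np.meshgrid(xm, ym, indexing="ij"))

xh = np.linspace(0.0, 0.8, 128)
yh = np.linspace(-0.08, 0.8, 128)
XH, YH = np.meshgrid(xh, yh, indexing="ij")

# Piecewise-linear interpolation of the map results onto the history points
interp = RegularGridInterpolator((xm, ym), map_values)
history_pts = np.column_stack([XH.ravel(), YH.ravel()])
interpolated = interp(history_pts).reshape(XH.shape)

# "Direct FEA" stand-in evaluated at the history points
reference = response(XH, YH)
print("max abs error:", np.abs(interpolated - reference).max())
```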

Results

To compare our results, we look at the absolute difference between the sets of results. The absolute error is used, as opposed to a relative error, since some regions of the model’s working space give near-zero strain and hydrostatic pressure, and division by these near-zero values would cause the relative error to spike in those regions.

Since we have 100 elements and 4 components per element, there are many results that could be compared. To focus our investigation, we look at the element and component that gave the maximum error. The figure below shows contour plots for each of the six maps for this worst-case element and component. The component that gave the maximum error was NE12. The title of each contour plot also shows the maximum error found for that plot.

Error contours for the worst-case element and component. Titles report the maximum log10 error.

You can see that the error decreases as the map density increases. Also, you can identify the grid pattern in the contour plots since the error gets smaller near the map points.

The maximum error for each of the maps is plotted against the number of map points on a log scale below. The slope of this line is approximately equal to 1 in magnitude, which is expected since a local linear interpolation was used to compute the results.

Maximum error vs the number of points for each of the six maps
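As a back-of-the-envelope check (assuming the response fields are smooth, with bounded second derivatives over the working space), piecewise-linear interpolation error scales with the square of the grid spacing h, and for a 2D grid of N points h scales as N^(-1/2):

```latex
\text{error} \approx C\,h^{2}, \qquad h \propto N^{-1/2}
\quad\Longrightarrow\quad \text{error} \propto N^{-1}
```

which corresponds to a slope of magnitude one on the log-log plot, consistent with the trend reported above.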
