Behind the Scenes Tour of Endurica Software Development and QA Practices

Ever wonder what it takes to consistently deliver quality and reliability in our software releases?  Here’s a brief overview of the systems and disciplines we use to ensure that our users receive timely, trouble-free updates of Endurica software.

Automation:

Throughout the life of our software, changes are made to our source code for a variety of reasons.  Most commonly, we are adding new features and capabilities.  We also update the code to improve performance and to squash the occasional, inevitable bug.

With each change committed to the code repository, the software needs to be built, tested, and released.  Endurica’s workflow automates these steps so that any change to the source repository triggers a clean build of the software.  A successful build is automatically followed by a testing phase in which our suite of benchmarks is executed and compared to known results.  Finally, the build is automatically packaged and stored so that it is ready to be delivered.  At each step along the way, a build error or failed test cancels the workflow and sends an alert that the release has been rejected, so that the issue can be addressed and the workflow restarted.

Figure 1: Endurica’s build and testing process ensures that high quality standards are met for every new release. Black arrow: normal flow, Red arrow: on error or failed test.
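As a sketch of the gating logic described above, the workflow might look like the following Python fragment.  The step names and stand-in functions are illustrative only, not Endurica’s actual build scripts:

```python
from typing import Callable

def run_pipeline(steps: list[tuple[str, Callable[[], bool]]]) -> tuple[bool, list[str]]:
    """Run the steps in order; cancel the workflow at the first failure."""
    log = []
    for name, step in steps:
        if not step():
            # a build error or failed benchmark rejects the release
            log.append(f"REJECTED at {name}")
            return False, log
        log.append(name)
    return True, log

# hypothetical stand-ins for the real build, test, and packaging commands
ok, log = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),    # a failing benchmark rejects the release
    ("package", lambda: True),
])
```

In the real pipeline, each step would invoke the compiler, the benchmark suite, or the packaging tools, and the failure log would drive the alert.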

Reliability:

The automated testing phase that every release goes through helps ensure the reliability of our software.  For example, every Endurica CL release must pass all 70 benchmarks.  Each benchmark is a separate Endurica CL analysis made up of different materials, histories, and output requests.  Results from a new build are compared to known results from the previous successful build.  If results do not agree, or if there are any errors, the benchmark does not pass and the build is rejected.
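The pass/fail comparison can be sketched as follows.  The result names and tolerance here are hypothetical, chosen only to illustrate comparing a new build’s outputs against the previous successful build’s:

```python
import math

def benchmark_passes(new: dict[str, float], reference: dict[str, float],
                     rel_tol: float = 1e-6) -> bool:
    """Compare a new build's results to the previous successful build's."""
    if new.keys() != reference.keys():
        return False  # a missing or extra output request is a failure
    return all(math.isclose(new[k], reference[k], rel_tol=rel_tol)
               for k in reference)

# illustrative reference results from the last passing build
reference = {"fatigue_life": 1.31e5, "dissipation": 0.042}
```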

The testing phase prevents “would-be” bugs from making it into a release and makes sure that any issues get resolved.

Repeatability:

The automated nature of our development workflow naturally helps with repeatability in our releases.  Each build flows through the same pipeline, creating consistent releases every time.  There is less risk, for example, that a component will be left out of a release.  Automation also allows us to recreate previous versions whenever comparisons need to be made.

Traceability:

Our version control system enables us to easily pinpoint where and when prior changes were introduced into the software.  Each release is tied to a commit in the repository. This allows any future issues to be easily traced back and isolated to a small set of changes in the source for quick resolution.

Responsiveness:

Automating the build and release pipelines greatly increases our responsiveness.  If an issue is discovered in a release, the problem can be resolved, and a fully corrected and tested release can be made available the same day.  We can also quickly respond to user feedback and suggestions by making small and frequent updates.

The systems and disciplines we use in our development process make us very efficient, and they protect against many errors. This means we can spend more of our time on what matters: delivering and improving software that meets high standards and helps you to get durability right.


Will Mars on the Rubber Industry: A Look Back 10 Years, Where We Are Now, A Look Ahead 10 Years

Q: With regards to fatigue life prediction methods, where was the rubber industry 10 years ago?

Will: There was plenty of great academic work and good understanding of fundamentals, but the methods were deployed, if at all, via “homebuilt” solutions that could never support a broad enough audience to really impact daily product design decisions.  Simulation methods and experimental methods shared theoretical foundations, but they were poorly integrated.  They suffered from operational problems, noisy data, and open-ended test durations.  It was possible to analyze a crack if you could mesh it, but the added bookkeeping and convergence burdens were usually not sustainable in a production engineering context.  Mostly, analysts relied on tradition-based crack nucleation approaches that looked at quantities like strain, stress, or strain energy density.  These were not very accurate, and they were limiting in many ways, even though they were widely used.  They left companies very dependent on build and break iterations.

Q: Where is the industry today?

Will: The early adopters of our solutions have been off and running now for a number of years.  Our critical plane method has gained recognition for its high accuracy in multiaxial cases, cases involving crack closure, and cases involving strain crystallization.  Our testing methods have gained recognition for high reliability and throughput.  Our users are doing production engineering with our tools.  They are consistently winning on durability issues.  They are handling durability issues right up front when they bid for new business.  They are expanding their in-house labs to increase testing capacity, and they are winning innovation awards from OEMs.  They are using actual road-load cases from their customers to design light-weight, just-right parts that meet durability requirements.  The automotive industry has led adoption, but aerospace, tires, energy, and consumer products are also coming up.  We have users across the entire supply chain: raw material suppliers, component producers, and OEMs.  The huge value that was locked up because durability was previously so difficult to manage is now unlocked in new ways for the first time.  This has been the wind in Endurica’s sails for the last 10 years.

Q: Where do you see the industry in 10 years?

Will: In 10 years, OEMs will expect durability from all component producers on day 1, even for radical projects.  They will expect designs already optimized for cost and weight.  They will push more warranty responsibility to the supplier.  They will monitor durability requirements via shared testing and simulation workflows.  Suppliers will pitch solutions using characterization and simulation to show their product working well in your product.  The design and selection of rubber compounds to match applications will enter a golden age as real-world customer usage conditions will finally be taken fully into account.  Where design and selection were previously limited by the budget for a few build and break iterations, and by low visibility of design options, they will soon be informed by an almost unlimited evaluation of all possibilities.  Where simulation methods have traditionally had the greatest impact on product design functions, we will also start to see rubber part Digital Twins that track damage accumulation and create value in the operational functions of a business.  Durability is definitely set to become a strong arena for competition in the next 10 years.

 


Durability Simulation and the Value of Product Development Resources

What value does your company gain by deploying product development resources one way vs. another when it comes to durability?

R&D organizations are built around what it takes to get the product into production.  The costs of the organization include wages for the engineers and technicians, the costs of the capital equipment used in development and testing, and the overhead from administrative functions.  These are all fixed costs, and in the rubber industry it is typical to see R&D budgets that amount to somewhere between 1% and 5% of sales.

The R&D program lifecycle is iterative.  It goes something like this: design, build, test, qualify for production, launch product.  A quick way to understand product development costs is to look at how long it takes for one design-build-test-launch iteration.  If it takes your tech center one year per iteration, then the cost of one pass through the cycle is something like (company annual sales) x (R&D rate per annual sales)/(number of parallel development programs executing at a given time in your tech center).  For a $2B company with a 2.5% research budget and 10 development programs in the works, this works out to $5M/iteration.

How much of this cost is burned on durability issues?  Potentially all of it, at least within any one given iteration.  At worst, a non-qualifying test result leads to a “back to the drawing board” restart of the iteration.  The durability tests required for qualification can only be made after the prototype is in hand, so a restart means the whole team must revisit and rework the design to correct the failed iteration.  Over the long run, if your iteration failure rate is 1 in 5 (20%), you are burning $5M x 20% = $1M per product.
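The arithmetic above can be checked in a few lines of Python:

```python
annual_sales = 2_000_000_000       # $2B company
rd_rate = 0.025                    # R&D budget at 2.5% of sales
parallel_programs = 10             # development programs in the works
iteration_failure_rate = 0.20      # 1 in 5 iterations fails

cost_per_iteration = annual_sales * rd_rate / parallel_programs
burn_per_product = cost_per_iteration * iteration_failure_rate

print(f"${cost_per_iteration / 1e6:.0f}M per iteration")     # $5M per iteration
print(f"${burn_per_product / 1e6:.0f}M burned per product")  # $1M burned per product
```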

How much of this cost can realistically be avoided?  The big opportunity lies in the fact that the old “build and break” paradigm does not immediately hold accountable the design decisions that lead to poor durability, and it does not have enough bandwidth to allow for much optimization.  A “build and break” only plan is a plan for business failure.  Poor decisions are only tested and caught after big investments in the iteration have become sunk costs.  The advent of simulation has fueled a new “right the first time” movement that empowers the engineer to very rapidly investigate and understand how alternative materials, alternative geometries, or alternative duty cycles impact durability.  The number of alternatives that can be evaluated and optimized by an analyst before committing other resources is many times greater.  “Right the first time” via simulation is a model that is increasingly favored by OEMs and suppliers because it works.  Expect to halve your iteration failure rate.


Tire Society 2017 – Best Question

Every year, the top minds from academia, government and industry gather in Akron to share their work at the Tire Society annual meeting, and to enjoy a few moments of professional camaraderie.  Then we all return to fight for another year in the trenches of the technology wars of our employers.

This year, the meeting offered the latest on perennial themes: modal analysis, traction, materials science, noise, simulation, wear, experimental techniques for material characterization and for model validation.  Too much to summarize with any depth in a blog post.  If you are interested, you should definitely resolve to go next year.  Endurica presented two papers this year.

I presented a demonstration of how the Endurica CL fatigue solver can account for the effects of self-heating on durability in a rolling tire.  Endurica CL computes dissipation using a simple microsphere model that is compatible, in terms of discretization of the shared microsphere search/integration domain, with the critical plane search used for fatigue analysis.  In addition to defining dissipative properties of the rubber, the user defines the temperature sensitivity of the fatigue crack growth rate law when setting up the tire analysis.  In the case considered, a 57 degC temperature rise was estimated, which decreased the fatigue life of the belt edge by a factor of nearly two, relative to the life at 23 degC.  The failure mode was predicted at the belt edges.  For 100% rated load, straight ahead rolling, the tire was computed to have a life of 131000 km.

The best audience question was theoretical in nature: are the dissipation rates and fatigue lives computed by Endurica objective under a coordinate system change?  And how do we know?  The short answer is that the microsphere / critical plane algorithm, properly implemented, guarantees objectivity.  It is a simple matter to test: we can compute the dissipation and fatigue life for the same strain history reported in two different coordinate systems.  The dissipation rate and the fatigue life should not depend on which coordinate system is used to give the strain history.

For the record, I give here the full Endurica input (PCO.hfi) and output (PCO.hfo) files for our objectivity benchmark.  In this benchmark, histories 11 and 12 give the same simple tension loading history in two different coordinate systems.  Likewise, 21 and 22 give a planar tension history in two coordinate systems.  Finally, 31 and 32 give a biaxial tension history in two coordinate systems.  Note that all of the strain histories are defined in the **HISTORY section of the .hfi file.  In all cases, the strains are given as 6 components of the nominal strain tensor, in the order 11, 22, 33, 12, 23, 31.  The shear strains are given as engineering shear components, not tensor (2*tensor shear = engineering shear).
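The principle behind the benchmark can also be illustrated independently of Endurica: any frame-indifferent quantity computed from the strain tensor (principal strains, invariants, and hence dissipation and fatigue life) must be unchanged when the same history is expressed in rotated coordinates.  Here is a minimal NumPy sketch using the same 6-component ordering and engineering-shear convention described above; the strain values and rotation angle are made up for illustration:

```python
import numpy as np

def voigt_to_tensor(v):
    # v = [e11, e22, e33, g12, g23, g31]; engineering shear = 2 * tensor shear
    e11, e22, e33, g12, g23, g31 = v
    return np.array([[e11,     g12 / 2, g31 / 2],
                     [g12 / 2, e22,     g23 / 2],
                     [g31 / 2, g23 / 2, e33    ]])

def rotate(E, theta_deg):
    # express the same strain state in a frame rotated about the 3-axis
    t = np.radians(theta_deg)
    Q = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    return Q @ E @ Q.T

# an illustrative simple-tension strain state, and the same state rotated 30 deg
E = voigt_to_tensor([0.10, -0.05, -0.05, 0.0, 0.0, 0.0])
E_rot = rotate(E, 30.0)

# principal strains (frame-indifferent) must match in both coordinate systems
p1 = np.linalg.eigvalsh(E)
p2 = np.linalg.eigvalsh(E_rot)
assert np.allclose(p1, p2)
```

Histories 11/12, 21/22, and 31/32 in the benchmark play the same role as E and E_rot here: the same physical state, reported in two different coordinate frames.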

The objectivity test is successful in all cases: as shown in the output file PCO.hfo, both the fatigue life and the hysteresis show the same values under a coordinate system change.  Quod Erat Demonstrandum.

Table: Objectivity benchmark results from PCO.hfo (fatigue life and hysteresis for each coordinate-system pair).

 
