Simulation-Driven Development of Combustion Engines: Theory and Examples

This paper describes the simulation-driven design process used in engine technology. The research question is "how can research in the structural analysis and dynamics field be used to ensure world-class product development?" The paper examines research on simulation methodologies from the design process perspective, demonstrating the need for research in the various steps of product development. Each section of the paper includes one or two practical examples in which research was needed to increase product design quality. In the product definition section, the Digital Design Platform (DDP) shows the coupling between product requirements and simulation tasks. At the concept design stage, it is shown that computational methods can optimize the placement of material, in the case of the main bearing cap topology. The second example is JuliaFEM, an open-source finite element method (FEM) platform suitable for heavy-duty method development, where access to the internals of the FE solver is needed to make new calculation methodologies available. The next section addresses detailed design, where an example of oil sump weld fatigue analysis illustrates the continuous improvement of the simulation methodology. The second example is the connecting rod fretting calculation, which illustrates the full complexity of structural analysis and dynamics simulations. The second-to-last process step is virtual validation, where first the cylinder head simulation methodology shows the internal connections between different disciplines' simulations. Another example here is the crankshaft virtual validation process, which describes the complexity of a "simple" component calculation and illustrates the number of competencies needed. Finally, in the validation process step, Big Data analyses describe the internals and complexity of the methodologies.
Lastly, counterweight measurement device development illustrates that validation of simulation models and methods sometimes leads to a measurement device development project. In conclusion, all the preceding methodologies were used to build the Wärtsilä 31 engine, the most efficient four-stroke engine in the world. That is, of course, a performance achievement, but a great deal of research in simulation methodologies, as explained, was needed to make a reliable product with such a high peak cylinder pressure.


Introduction
Traditional engine design builds on top of prototype engines, which are thoroughly tested in the engine laboratory. The development cycle of the traditional approach is slow because every change needs a detailed design, including drawings and manufacturing details such as CAM programs and molds for castings. Shortening the development time was the motivation for this work. This paper highlights the research needed in the engineering mechanics domain of four-stroke medium-speed engines to guarantee world-class product development. The need for higher fuel efficiency has been the driver of engine research and development (R&D), which leads to increased cylinder pressure and overall higher stresses in the essential engine components.
An example engine used in this paper is available in eight- to sixteen-cylinder arrangements and has a power output varying from 4.2 to 9.8 MW at engine speeds of 720 and 750 rpm. The sixteen-cylinder version's length, width, and height are 9.0, 3.5, and 4.2 m, respectively, and its weight is 89.0 t [1]. The example engine is also the most efficient four-stroke diesel engine in the world. In addition, its modular design enables notable reductions in maintenance time and costs, thereby increasing engine power availability and reducing the need for spare parts [1].
The maintenance interval of medium-speed marine engines is comparable to the designed lifetime of a car engine. According to [1], the maintenance interval of the engine is 8000 hours. As an example, for a passenger car driven at an average speed of 25 km/h for the same duration of 8000 hours, the lifetime amounts to a total of 200,000 kilometers, which would be respectable for any car. The medium-speed engine is designed to run at 100% load continuously, which is another difference compared to car engines, which typically use only around 20% of their power when driving at 100 kilometers per hour.
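The lifetime comparison above is simple enough to verify with a line of arithmetic; a minimal sketch:

```python
# Sanity check of the lifetime comparison in the text: a passenger car
# driven for one maintenance interval of a medium-speed engine [1].
avg_speed_kmh = 25             # assumed average speed of the passenger car
maintenance_interval_h = 8000  # maintenance interval from [1]

distance_km = avg_speed_kmh * maintenance_interval_h
print(distance_km)  # 200000, i.e., 200,000 kilometers
```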
All of the above has been a driver for simulation method development, in other words, the need to keep producing reliable engine products. That is where this paper gives the big picture and looks at the topic from the product development process perspective. The design process of engines, presented in Figure 1, follows the process of Eppinger and Ulrich's [2] machine design book quite closely. The process itself is very well known; machine design books such as Hubka and Eder [3], as well as Ullman [4], all define it.
The paper structure follows the phases of the design process gate model: product definition, concept design, detailed design, virtual validation, and validation. Figure 1 illustrates the structure and lists the examples used in this paper. These simulation examples, each of which required some R&D, answer the research question. The discussion is embedded in these sections and opens up each case individually, creating a big picture and finding the common denominator for all of these cases.
Before diving into the details, let us look at how other companies and researchers have been tackling the issue, in the form of a literature review. Zimmermann et al. [5] describe a systematic system design methodology, especially from the uncertainty point of view, and Erlandsson's [6] focus is on modal analysis. Both of these studies show the same main design phases. After these full-process models, let us look into some details.
Quite a significant feature in the design process is systematic failure model simulation, like Löfstrand et al.'s [7] Partitioned Multi-objective Risk Method (PMRM). Heikkinen and Müller's [8] design research methodology is applied to further develop the Engineering Workbench into a versatile design support system and to expand its functionality to include producibility assessment. Both of these analysis tools operate at a different scale than, for example, detailed FEM simulations. The next paragraph will focus on more detailed simulations.
Seifert et al. [9,10] studied the thermomechanical fatigue of the cylinder head, taking into account the nonlinear effects of plasticity. Their selected methodology is quite close to the one briefly discussed in the section on cylinder head thermomechanical fatigue analysis. Kuribara et al. [11] took it one step further and added high cycle vibration loading simulated in multibody dynamics software. Ince [12] proposed an alternative methodology for nonlinear plasticity calculations for vehicle components by using Glinka's model, an evolution of Neuber's rule. Munson et al. [13] highlighted the importance of damping in dynamic analysis for obtaining the correct loading for fatigue analysis. Critical plane algorithms such as those of Findley and Dang Van have taken a firm position in steel application fatigue analysis. Gaier et al. [14] used the algorithms for fatigue analysis of carbon composite laminates. Imperfection modeling has also become more important for accurately estimating the fatigue lifetime and the safety factor against an infinite lifetime. Bleicher et al. [15] developed a model that uses the phased array ultrasonic inspection method to obtain local material properties.
Fretting fatigue phenomena have been researched extensively but are not yet thoroughly understood. Sato et al. [16] studied engine main bearing cap fretting; their model predicts fretting fracture correctly. Cai et al. [17] studied coated tool steel fretting wear and fatigue.
The correct material models are essential to succeed in the simulations. Möller et al. [18] presented the usage of cyclic material properties in the welded structure fatigue analysis. Jia et al. [19] developed the decoupled hardening models to be used in the plasticity modeling.
Conjugate Heat Transfer Analysis is the basis of the engine thermal distribution and therefore a crucial step to defining thermomechanical fatigue loading. Cicalese et al. [20], as well as Bovo [21], studied the full engine model and its temperature distribution.
Big data analysis is becoming more and more common. Michlberger and Sutton [22] studied low-speed preignition and high knocking. Yanarocak and Boz [23] used these measurement data to design safe engine controls to protect the rocker arm. Yerra and Pilla [24] showed how data could be used in manufacturing to make it more efficient.
Karlberg et al. [25] did a comprehensive literature review of the state of the art in simulation-driven design in 2013, followed by Motte et al.'s [26] computer-based design analysis literature review in 2014, which was again followed by an industrial survey of computer-based design analysis from Petersson et al. [27]. Sandberg et al. [28] stated: "Investigation and evaluations show that supporting tools and relevant information must be made readily available, intuitive, integrated into the environment where they are needed and, ultimately, be perceived as a natural part of daily development for them to be accepted and used." What does all of this mean in practice?
Three examples of the state-of-the-art simulation-driven design processes from the literature are now presented: first, Larsson's [29] use of multibody simulation (MBS) in the product development process; second, Ahmad et al.'s [30] simulation-driven methodology for the design of haptic devices; and third, Sravan et al.'s [31] simulation-driven methodology of a blank holder. Moreover, Gouyou et al. [32] present tolerance analysis and use FEM distinctively in the flange example. The main improvement this paper brings to the state-of-the-art simulation-driven design process is the link between the high-level business requirement and the low-level validation requirements. Finally, simulation frameworks will conclude this literature study.
Saavedra et al. [33] suggest a knowledge-based framework, including a toolkit of simulation methods as its principal element, to combine simulations in the product development process. Petersson's [34] method is the use of templates to enable designers to make simulations. Further, Pavasson et al.'s [35] methodology requires an increased amount of multidisciplinary interaction to combine deterministic simulation and probabilistic simulation. Caridi et al. [36] remind us to take into account how much information to share when outsourcing. Furthermore, Ahmed-Kristensen and Vianello [37] investigated the mechanisms involved in the transfer of knowledge between service and design. Their findings showed an imbalance in the transfer of knowledge between engineering designers and service engineers; e.g., more than 50% of instances of knowledge from service were pushed (hence made available) to the engineering designers without them actively requesting this knowledge. Lastly, Shao et al. [38] proposed the development of a uniform intermediate model that supports high-fidelity and efficient visualization of multidisciplinary heterogeneous simulation data. In conclusion, Könnö et al.'s [39] digital design platform (DDP) provides answers to all these product development process needs. Further explanation can be found in the next section, as well as in [40,41].

Product Definition
In the product definition phase of an engine development project, handling the customer product requirements together with internal experts' technological requirements is a must. The core team also defines other important things in the product definition phase; however, those are not within the scope of this paper. An internal combustion engine is a very demanding product, whose development requires a considerable number of different competencies. Even within the structural analysis and dynamics domain, the number of needed competencies is huge. As an introduction, some of these are listed here to give readers a picture of the real size of the problem engine development teams are tackling on a daily basis. All engine behavior is dynamic by nature, and engine dynamics is a critical competence area. For example, Burdiel et al. [42] describe how to fine-tune a dynamic model according to measurements. Virtual engine modeling is the cornerstone of engine simulations; for example, Könnö et al. [43] and Frondelius et al. [44] present its usage. It is also used for gear wheel dynamics, as in Kuivaniemi et al. [45], for the crankshaft mechanism in Väisänen et al. [46], and for connecting rod analysis in Göös et al. [47] and Mäntylä et al. [48], with the elastohydrodynamic analysis in Bai et al. [49]. Quick concept calculations are crucial for customer delivery projects, as Heilala et al. [50] presented. Natural frequency calculations, as in Korpela et al. [51], contribute substantially to engine dynamics, especially when combined with topology optimization as in Korpela et al. [52]. The rest of this section is a summary of the research paper [39].
To handle the complexity created by the excess of software platforms, we have chosen a data-centered path as the backbone of our simulation process and data management (SPDM) framework. The fundamental concept is that whatever data collection the users are accessing, they will see different views of that specific information regardless of the system they are working with. It also means that the setup strives to end document-based simulation reporting entirely, since once one transfers the data into a document, one also disconnects it from the source of truth, the simulation object. Next, the focus will be on three pivotal points for making the system successful.
The first essential aspect, albeit not traditionally seen as belonging to the SPDM domain, is having all requirements visible and synchronized in the validation data management system. Design failure mode and effects analysis (DFMEA) provides a systematic tool for collecting low-level requirements or simulation tasks and is thus an essential part of the DDP. Although it makes sense to separate requirements into different levels for managing them efficiently, we should never break the link between the high-level business requirement and the low-level validation requirements.
As an example of this, we can take the thermal load on combustion engine components; see the section on cylinder head thermomechanical fatigue analysis, which effectively indicates the lifetime of a given component, and Leppänen et al. [53] for more details. Väntänen et al. [54] present that lifetime prediction as a chain of simulations, going from fatigue and FE analysis to a computational fluid dynamics combustion simulation, which, in turn, takes its input from a one-dimensional performance simulation.
When looking at the inputs of that performance simulation, analysts are already looking at the high-level product requirements, such as fuel consumption targets. Similarly, our solution for, e.g., increasing the lifetime through a different choice of material (Kumpula et al. [55]) has a direct link to the product cost, yet another high-level requirement. Figure 2 shows a snapshot of the system structure. Here, one can see how test cases linked to the validation requirements state how a given target must be demonstrated in order to be validated. The example states that in order to have a satisfactory safety factor in the cylinder head, it has to be over the limit at five different points. The automatically updated parameterization of these points with the latest simulation results provides an always up-to-date validation status. It is an example of a data-centered way of working.
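The automatically updated validation status described above can be sketched as a small check; the point names and the limit value below are illustrative assumptions, not the actual system configuration:

```python
# Illustrative sketch of an automatically updated validation status:
# the cylinder head test case passes only if the fatigue safety factor
# exceeds the required limit at every monitored point. Point names and
# the limit are hypothetical, not actual engine program values.

SAFETY_FACTOR_LIMIT = 1.5  # assumed requirement

def validation_status(latest_results):
    """Map the latest simulation results {point: safety factor} to a status."""
    failing = {p: sf for p, sf in latest_results.items()
               if sf < SAFETY_FACTOR_LIMIT}
    return "VALIDATED" if not failing else f"OPEN: {sorted(failing)}"

# Five monitored points, parameterized by the latest simulation run.
results = {"P1": 1.8, "P2": 1.6, "P3": 2.1, "P4": 1.7, "P5": 1.9}
print(validation_status(results))                 # VALIDATED
print(validation_status({**results, "P3": 1.2}))  # OPEN: ['P3']
```

Because the status is recomputed from the data itself, no report ever goes stale, which is the point of the data-centered way of working.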
The second cornerstone of our approach is easy access to simulation data, not only for the analyst but also for the primary consumers of the data, be they engine experts or CAD designers. The target is to completely get rid of static reports, such as Word or PowerPoint documents, in the long run. The motivation came from a study made during the acquisition of the SPDM platform, which showed reporting to be one of the significant time consumers in the simulation process. Also, static reports hardly answer the digitalization requirements omnipresent in today's industry, where validation results play a crucial role, for example, in condition-based maintenance concepts.
To reach this goal, we utilize two different features of the 3DEXPERIENCE platform. The first is the Result Analytics toolset, which lends itself to quickly comparing designs based on possibly highly varying types of datasets. One of the main benefits of the Result Analytics tool compared to traditional parameter comparison is the possibility to work on the data interactively with very little expertise in the internals of the system. For example, changing the design targets for ranking the designs is extremely simple, and non-simulation experts can easily handle data produced by complex simulation workflows. Also, since it is possible to include 3D results directly in the tool running in the Web browser, the end user has much more than merely a list of numbers for figuring out why a particular design performs in a given way, and this makes it possible to reveal flaws in 3D geometry, for example. A view of the system highlighting the visual aspects of design comparison with interactive 3D simulation content is available to anyone with just a Web browser, as shown in Figure 3. Having interactive access to simulation data without the need to install specialized simulation tools is one of the most important aspects of simulation democratization as well.
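The kind of interactive re-ranking described above can be illustrated in a few lines; the metric names, numbers, and scoring rule are assumptions for illustration only, not the internals of the Result Analytics toolset:

```python
# Sketch of interactive design ranking in the spirit of the Result
# Analytics toolset: each design carries several result metrics, and the
# end user re-ranks simply by changing the target weights. Metric names
# and numbers are illustrative assumptions, not real engine data.

def rank_designs(designs, weights):
    """Order design names best-first by a weighted sum of their metrics."""
    def score(name):
        return sum(weights.get(k, 0.0) * v for k, v in designs[name].items())
    return sorted(designs, key=score, reverse=True)

designs = {
    "A": {"stiffness": 0.9, "mass_kg": 120.0},
    "B": {"stiffness": 0.7, "mass_kg": 100.0},
}

# First rank purely on stiffness ...
print(rank_designs(designs, {"stiffness": 1.0}))                    # ['A', 'B']
# ... then change the target to also penalize mass.
print(rank_designs(designs, {"stiffness": 1.0, "mass_kg": -0.02}))  # ['B', 'A']
```

Changing the weights instantly reorders the candidates, which is what lets non-simulation experts explore trade-offs without touching the underlying workflows.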
For other purposes, such as final design validation, we replace the currently used static PowerPoint report with a similar dashboard view, which can contain 3D data as well as other customized widgets, such as 2D plotting tools and tabular results directly linked to the data source. All these activities run entirely in a Web browser. Additionally, we aim to store the decisions made together with the simulation and validation data used to make each call, instead of spreading that knowledge across, e.g., minutes of meetings stored in varying document management systems in the company. This way, we also create the essential data to feed the product lifecycle management (PLM) system, since the SPDM system ensures the connection to the correct product.
The third key ingredient is simulation democratization, which is a crucial selling point for many SPDM solutions nowadays and can be readily implemented in most of them. However, successful adoption of democratization depends more on the people involved: on the one hand, hard work on the simulation side to guarantee robustness and functionality; on the other hand, on the user side, understanding the limitations of the preconfigured workflows. To this end, we propose an organic workflow in which the analyst kicks off the activity. Later, if a parametric model is involved in the development cycle, the designer or engine expert can take over the work of running the parameter configurations based on their expertise, instead of being left with a black box tool.
Finally, to retain the clarity between the different platforms, we have made a clear decision to keep all CAD-related data in Teamcenter to simplify the connection between the two systems. The PDM-SPDM integration will be carried out with a simple change notice process, which uniquely connects the design and simulation. The target is that this simple process would replace the now commonplace email or messenger communication and also store that information into the PLM vault.
In summary, the DDP has proven its worth. It increases internal communication and reduces mistakes caused by outdated data, since everyone accesses the live data in the database. It also guarantees that everyone has access to all product data. The way customer requirements are connected all the way through to simulation results ensures that if someone has changed them, or needs to change them, project management understands the whole effort required and the interconnectivity of the individual requirements and design tasks with their validation.

Concept Design and Simulation
The concept phase of the project starts with a set of customer needs and target specifications and ends with multiple product concepts from which the core team can select the best ones [2]. Another approach is to start with simulation: most of the concepts are then simulated, and the simulations provide numerical values for comparing the concept designs.
This section shows two examples: main bearing cap topology optimization and the JuliaFEM platform. According to Bendsoe and Sigmund [56], topology optimization of solid structures involves the determination of features such as the number, location, and shape of holes and the connectivity of the domain. See www.juliafem.org, Frondelius and Aho [57], and Rapo et al. [58] for more information about JuliaFEM. Numerical methods are here to stay, and in the future, topology optimization toolboxes will gain more and more intelligence and become more independent automatic design creation tools. The other example, the JuliaFEM platform, goes deep into the academic world. There is a need for a full-featured FEM code in a modern scripting language environment that makes the solver internals available for use. Users with industrial-size models want to benefit from open-source platform development, where one can write the missing piece of code oneself instead of being left with a black box solver, which is the problem they face with commercial FEM codes.

Example 1: Topology Optimization of the Main Bearing Cap
For concept design, topology optimization gives an automated tool to place the material where it is needed. Topology optimization starts with an empty design space, which depends on the project at hand: for new development starting from scratch, the design space is just a big box. However, in the case of a facelift project, such as an incremental power increase to an existing engine model, the available design space for an individual component is largely limited by the clearances imposed by other components. Kiseleva et al. [52] give an example of the topology optimization of a turbocharger bracket without such space restrictions.
The rest of this subsection is a summary of an internal Wärtsilä calculation report by Kuivaniemi et al. Figure 4 shows the topology optimization of a main bearing wall using Tosca Structure. The optimization target was to minimize the deformation of the main bearing, with the constraint that the mass of the design elements be less than a certain fraction of the original mass, i.e., 30% of the original mass remains. The load case is full load, and the load is given as three different pressure distributions, from three different crank angles, on the bearing surface. The deformation of the main bearing was captured utilizing the strain energy. As can be seen from Figure 4, the optimized design proposed by the software is not ready for production as such. Currently, topology optimization programs are a good help for designers in showing the force paths in the structures. However, these tools develop rapidly, and it is likely that, once matured, topology optimization software will produce ready-to-use solutions.
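The underlying idea, distributing a fixed amount of material where it carries the most load, can be illustrated with a toy one-dimensional sketch. This is a hypothetical minimal example of a SIMP-style optimality-criteria update, not the actual Tosca Structure setup or the bearing cap model:

```python
import numpy as np

# Toy 1D sketch of SIMP-style topology optimization (illustrative only):
# a cantilevered bar of n unit elements with a unit load at every node,
# so element i carries internal force N[i]. Element stiffness scales as
# x**p, and the optimality-criteria update concentrates material in the
# highly loaded elements near the support while keeping total mass fixed.

n, p, volfrac = 8, 3.0, 0.5
N = np.arange(n, 0, -1.0)          # internal force carried by element i
x = np.full(n, volfrac)            # start from a uniform density field

for _ in range(50):
    dc = p * N**2 * x**(-p - 1)    # -dC/dx_i (compliance sensitivity), > 0
    lo, hi = 1e-9, 1e9             # bisection on the Lagrange multiplier
    while hi - lo > 1e-10 * hi:
        lam = 0.5 * (lo + hi)
        # Damped update; exponent 1/(p+1) gives fast convergence here.
        x_new = np.clip(x * (dc / lam) ** 0.25, 0.01, 1.0)
        if x_new.sum() > volfrac * n:
            lo = lam               # too much material: raise the multiplier
        else:
            hi = lam
    x = x_new

print(np.round(x, 3))  # densest at the support, lightest at the free end
```

In the sketch the material concentrates where the internal force is highest, mirroring how the real optimization reveals the force paths in the structure.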
How does this reflect the product design process? Topology optimization is one of the cornerstone tools of computer aided engineering (CAE) democratization. CAE democratization means that simulation work moves from dedicated simulation people, i.e., structural analysts, to those without formal engineering mechanics education, who will start performing advanced simulations themselves. CAE democratization enables much faster concept iterations when the same person does both the design and its analysis. In addition, it increases the quality of the design, and eventually, in the virtual validation step of the product design process, the structural analyst can conclude that this is a first-time-right design where no more design iterations are needed. As a reflection of the research question, the first-time-right scenario can only happen if CAE democratization works flawlessly, meaning that the simulations of designers and others are correct and comprehensive.
Example 2: JuliaFEM Open-Source FEM Development Platform [57]
The JuliaFEM project started as a hobby in the summer of 2015. The founders of the project are Jukka Aho, Tero Frondelius, and Olli Väinölä. By the next year, 2016, the hobby became work, and serious development started.
The JuliaFEM software library is a framework that allows for the distributed processing of massive models across clusters of computers using easy-to-write Julia programming language [59] models, which enables a fast development time for the library. It is designed to scale up from a single computer to thousands of machines, each offering local computation and storage. At the moment, shared memory parallelism is available, and the implementation of a domain decomposition model is in progress at the time of writing; check juliafem.org for updates. The fundamental design principle is that everything is nonlinear: all physics are nonlinear by nature, and linearizations are merely special cases of the physics models. See Frondelius and Aho [57] and Rapo et al. [58] for details. JuliaFEM is still a work in progress and needs more contributors.
Currently, users can perform the following analyses with JuliaFEM: elasticity, thermal, eigenvalue, contact mechanics, and quasi-static solutions. Typically, second-order tetrahedron elements are used in engine simulations. Most of the typical solid elements are already implemented, and plate and shell elements are under development at the time of writing. For visualization, JuliaFEM uses ParaView, presented in Ayachit et al. [60], which prefers the XDMF [61] file format; XDMF uses XML to store light data and HDF [62] to store large datasets and is more or less the open-source standard.
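As a conceptual illustration of the simplest analysis type listed above, a steady-state thermal problem can be solved with a few lines of finite element code. The Python sketch below is only illustrative, since JuliaFEM itself is written in Julia:

```python
import numpy as np

# Minimal 1D steady-state heat conduction FEM (a conceptual sketch of the
# thermal analyses mentioned above, not JuliaFEM code). Solves
# -k u'' = q on [0, 1] with u(0) = u(1) = 0 using linear elements.

n_el, k, q = 10, 1.0, 1.0
n_nodes = n_el + 1
h = 1.0 / n_el

K = np.zeros((n_nodes, n_nodes))
f = np.zeros(n_nodes)
ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element conductivity
fe = q * h / 2.0 * np.ones(2)                        # consistent nodal load

for e in range(n_el):                 # assemble the global system
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += ke
    f[dofs] += fe

free = np.arange(1, n_nodes - 1)      # Dirichlet BCs: u = 0 at both ends
u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])

print(u[n_el // 2])  # exact solution q/(8k) = 0.125 at the midpoint
```

With linear elements in 1D, the nodal values coincide with the exact solution, which makes this sketch easy to verify.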
On the one hand, the vision of JuliaFEM includes the opportunity for massive parallelization using multiple computers with the Message Passing Interface (MPI) and threading, as well as cloud computing resources in Amazon, Azure, and Google cloud services together with a company's internal servers. On the other hand, handling real application complexity, including simulation model complexity and geometric complexity, not forgetting the reuse of existing material models as well as whole simulation models, is considered a crucial feature of the JuliaFEM package. Reinventing the wheel is not the goal; the idea is to embrace good practices and formats as much as possible. The package implements the Abaqus/CalculiX input file format and will in the future extend to other FEM solver formats as needed. Modern development environments give the user fast development time and high productivity. For developing and sharing new ideas and tutorials, the package uses Jupyter notebooks [63] to make easy-to-use handouts. The user interface for JuliaFEM is the Jupyter notebook, and the Julia language itself is a full general-purpose programming language. This enables using JuliaFEM as part of a larger solution cycle, including, for example, data mining, automatic geometry modifications, mesh generation, solution, and post-processing, enabling efficient optimization loops.
Open-source tools are part of Wärtsilä's CAE democratization strategy as a way to save license costs. Simulation software licenses are typically expensive to acquire and maintain. Another attractive feature of the open-source software in the CAE democratization sense is the source code availability. For example, the integration of open-source FEM code to the automatic or semiautomatic simulation process in the DDP or the designer's CAD software becomes easier.

Detailed Design
The detail design phase covers the complete specification of the geometry, materials, and tolerances of all of the unique parts in the product and the identification of all of the standard parts to be purchased from suppliers. In addition, a process plan is established with tooling design for each part the production system produces. The output of this phase is the drawings describing the geometry of each part and its production tooling, the specifications of the purchased parts, and the process plans for the manufacturing and assembly of the product. Three critical matters finalized in the detail design phase are materials selection, production cost, and robust performance [2].
This section presents two examples: fatigue simulation of oil sump welds, and connecting rod fatigue and fretting calculations. Optimization of the welds' details, although manual in this case, is significant for a reliable oil sump design. Here the devil is in the details, as a small modification in geometry can lead to a considerable increase in life expectancy.
The connecting rod simulation showed the importance of basic research on fretting fatigue. Without understanding the fundamentals of fretting fatigue, it is nearly impossible to design reliable medium-speed connecting rods. Fatigue [54,64,65,66] and fretting [67,68] still demand basic research before the latest methodology can be utilized in engine design.

Example 1: Oil Sump Welds Fatigue Analysis
The designer, together with the structural analyst, can make many improvements in the detailed design phase of the project. Depending on the component and its loading, optimization of the details can improve component reliability by a large margin. As an example, Figure 5 shows the simulation of oil sump welds. The original work is from a Wärtsilä internal report by Korpela et al.
The FE model of the oil sump uses shell elements. The loading of the full oil sump comes from a virtual engine model; see the explanation in the section on virtual validation of the crankshaft.
The engine block is much stiffer than the oil sump, and thus the oil sump can be modeled separately using engine block deformations as a boundary condition.
In the next step, all welds are calculated using the FEMFat software. Figure 5 shows one critical weld detail that is too complicated to model by just using the shell elements and FEMFat. In this case, the location was modeled in detail separately, using the submodeling technique in which the structure and welds were modeled fully in 3D.
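The submodeling idea, taking displacements from the coarse global model and applying them as boundary conditions on the cut boundary of the detailed model, can be sketched as follows. The inverse-distance interpolation here is a simplified stand-in for the shape-function-based mapping a commercial solver performs, and all coordinates and displacements are made up for illustration:

```python
import numpy as np

# Conceptual sketch of the submodeling technique: displacements from the
# coarse global (shell) model are interpolated onto the cut-boundary
# nodes of the detailed 3D submodel and applied there as boundary
# conditions. Inverse-distance weighting stands in for the
# shape-function-based mapping used by commercial solvers.

def interpolate_bc(global_nodes, global_disp, submodel_boundary, power=2.0):
    """Inverse-distance weighting of global displacements onto submodel nodes."""
    bc = []
    for p in submodel_boundary:
        d = np.linalg.norm(global_nodes - p, axis=1)
        if d.min() < 1e-12:              # coincident node: copy directly
            bc.append(global_disp[d.argmin()])
            continue
        w = 1.0 / d**power
        bc.append((w[:, None] * global_disp).sum(axis=0) / w.sum())
    return np.array(bc)

# Coarse global mesh nodes (2D for brevity) and their displacements.
global_nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
global_disp = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])

# Submodel cut-boundary nodes fall between the coarse global nodes.
boundary = np.array([[0.5, 0.0], [0.5, 1.0]])
print(interpolate_bc(global_nodes, global_disp, boundary))
```

The detailed 3D weld model is then solved with these interpolated displacements as its driving boundary conditions, so it feels the same deformation field as the global model.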
How does this reflect the product design process? The challenge here is that, typically, the designer designs the welds in the detail design phase, making it difficult for the structural analyst to dimension the welds. In practice, this leads to an iterative and cumbersome process. The designer and the structural analyst need close cooperation during the detailed design phase. Still, the same principle applies as in the previous sections: the simulations need to happen as early as possible, but not before they are meaningful. The team leader should make sure that all tasks are in the right order in the product design process to ensure the efficient utilization of available resources.
The method development needed in this case was the submodeling technique. The previous methodology included just a shell mesh and the FEMFat process for calculating each weld. It worked until there was a critical weld too complicated to model using the simplified shell element model. Overall, it was not a big research task, but it was an essential one for making a reliable end product, which makes it a perfect example. The old way would have been to change this weld detail to something that could be calculated with the old methodology, in such a way that the safety factor remained high enough. Now, some cost savings were achieved by keeping this easily weldable design.

Example 2: Connecting Rod Simulations [48]
The multibody simulation model of the connecting rod in AVL Excite Power Unit [48] consists of three flexible bodies: a crank pin predefining the circular motion, the connecting rod itself, and a piston pin. Elastohydrodynamic bearing models [49,69] couple these three bodies together, with the oil bores modeled as boundary conditions for the Reynolds equation solution. The cylinder liner is modeled as a rigid body coupled to the gudgeon pin with a compressible nonlinear spring, and the piston inertia is modeled by adding it to the mass of the piston pin. Running conditions, such as the cylinder pressure curve and engine speed, are also defined. Figures 6 and 7 show the stresses of the big end housing in a running engine. The simulated stresses correspond well with the measurements over the whole engine cycle: the average difference between the measured and simulated curves is only about 0.8%, while the maximum error at the peak is about 5%.
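Error figures of this kind can be computed with a simple metric. The sketch below, using synthetic data in place of the real stress curves, shows one hedged way such an average difference and peak error could be defined; the normalization by the measured peak value is an assumption, as the exact definition is not stated here.

```python
import numpy as np

def curve_errors(measured, simulated):
    """Average relative difference over the cycle and relative error
    at the measured peak, both in percent. The normalization choice
    (measured peak value) is an illustrative assumption."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    scale = np.max(np.abs(measured))
    avg = np.mean(np.abs(simulated - measured)) / scale * 100.0
    k = np.argmax(np.abs(measured))  # sample index of the measured peak
    peak = abs(simulated[k] - measured[k]) / abs(measured[k]) * 100.0
    return avg, peak

# Synthetic example: one engine cycle where the simulation
# underestimates the stress curve by 5% everywhere
theta = np.linspace(0.0, 2.0 * np.pi, 720)
measured = 100.0 * np.sin(theta) ** 2
simulated = 0.95 * measured
avg, peak = curve_errors(measured, simulated)
print(f"average {avg:.1f} %, peak {peak:.1f} %")
```

With real strain gauge data, the measured and simulated curves would first be aligned to the same crank angle grid before applying such a metric.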
The nonlinear stress analysis with MBS loading has proven to be a trustworthy way to achieve a reliable stress state for the large bore connecting rod. The structural analyst made a comparison between the simulation model and strain gauge measurements in a running engine, and there was a good correspondence between the simulated and measured stresses of the connecting rod housing. By using the workflow mentioned earlier, it is possible to tune the model to correspond to all kinds of manufacturing effects and loading cases and, later on, make design changes to achieve better component lifetime and fatigue safety. Simplifying the FE model improves performance without significant degradation of the accuracy of the overall solution. The performance improvement enables rapid concept designs and, if the user wishes, a complete exploration and subsequent optimization of available designs.
The connecting rod is the second most important component after the crankshaft. There are several challenges in making a perfect connecting rod. However, the connecting rod can be seen as an easy component to optimize because it can largely be done separately from the rest of the engine. Naturally, this task cannot start before the crank pin diameter and piston pin diameters are fixed or at least semifixed. That is why it is an example of the detailed design calculation task. In some engine development projects, the connecting rod is the first component that gets optimized, and many times its design can be locked very early. Nevertheless, it is certainly one of the most interesting components to simulate, due to the challenging loading, especially assembly loading cases.
Next, let us focus on the research needed in the connecting rod case. One of the biggest efforts has been the treatment of the assembly loads, but the main focus has been the fretting research, which started fifteen years ago. Frondelius et al. [70] give a nice review of the history of the numerical simulations in Wärtsilä.

Virtual Validation
Virtual validation is like testing and validation but done virtually. In virtual validation, the most accurate methodology available is used to calculate values as close to the true ones as possible. These simulation models are slow to build and run, but they give excellent estimates of safety and reliability.
In the virtual validation section, two examples are also shown, namely, the cylinder head and crankshaft simulation processes. These are the high-end simulation cases, in which a lot of method development and research were needed to polish the processes in these conditions. The crankshaft case shows a sound correlation to the verification measurements. Both processes are the backbone of the Wärtsilä Structural Analysis and Dynamics team's work.
Cylinder head simulations, as described in Leppänen et al. [53] and Cattarinussi et al. [71], are quite challenging due to combined low cycle-high cycle loading, as Kumpula et al. [55] explain; injectors, for example, have their own dedicated analysis, as in Vuotikka et al. [72]. Additionally, Nyberg et al. [73] present a cast iron component study concerning component explosion.
All of the above would make good examples, but the cylinder head and crankshaft are selected to demonstrate the complexity of the models. A lot of effort is put into making these methodologies as good as they are at the moment.
Example 1: Cylinder Head Thermomechanical Fatigue Analysis [53]

Thermomechanical fatigue analysis is a method for estimating the component lifetime by considering the combined thermal and mechanical loading. In this paper, the method applied to the cast iron cylinder head component is presented, considering the varying material properties caused by the casting process. This assessment is done by first simulating the stress and strain caused by thermal and mechanical loading with an Abaqus FE simulation, and then performing the fatigue postprocessing of the results for the component lifetime estimation. The result is the lifetime estimate for the cylinder head and the location of critical points.
The simulation model of a cylinder head is an assembly that includes all the main components connected to the cylinder head. The relevant loading conditions are the assembly, thermal load, and cylinder pressure loads. The engine running profile is taken into account, primarily the number of starts and stops and the number of engine cycles per day. The analyst simulates component temperatures with Star-CCM+ software, where both the thermal load from combustion and the cooling water flow are solved simultaneously [74]. The structural analyst uses frictional contacts for the contact interfaces between the components, considering the shrink fits and clearances in the same way as in Vuotikka et al. [72].
The simulated cylinder head material is nodular cast iron, which has spatially varying material properties caused by the casting process. For that reason, the analysis uses the local material microstructure properties for both the mechanical response of the material and the fatigue analysis. The material parameters for each microstructure have been defined based on cyclic material tests, performed with various loading conditions.
The structural analyst uses Abaqus Standard solver together with the Z-mat material library for nonlinear material behavior to simulate the component stress and strain. The Z-mat model is working as the Abaqus subroutine, calculating the mechanical response during simulation. The analyst defines the unified Chaboche viscoplasticity constitutive model for modeling the material inelastic behavior [75]. The material model includes the effects of thermal dependency and cyclic plasticity, including kinematic and isotropic hardening, creep, and stress relaxation [76,77].
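As a rough illustration of the kinematic-hardening ingredient of such a constitutive model, the following is a minimal 1D rate-independent sketch with Armstrong-Frederick backstress evolution, the basic building block of the Chaboche family. All parameter values are invented, and the actual Z-mat model is far richer: it additionally covers viscoplasticity, isotropic hardening, creep, stress relaxation, and temperature dependence.

```python
import numpy as np

def af_kinematic_1d(strain_path, E=210e3, sigma_y=300.0, C=60e3, gamma=200.0):
    """Strain-driven 1D plasticity with Armstrong-Frederick kinematic
    hardening: backstress evolution dX = C*dep - gamma*X*|dep|.
    Units: MPa for stresses, dimensionless strain. Parameters invented."""
    sigma, X, eps_p = 0.0, 0.0, 0.0
    history = []
    for eps in strain_path:
        sigma_trial = E * (eps - eps_p)     # elastic predictor
        f = abs(sigma_trial - X) - sigma_y  # yield function
        if f > 0.0:
            n = np.sign(sigma_trial - X)    # flow direction
            # plastic multiplier; recall term gamma*X evaluated
            # at the old backstress (explicit recall)
            dp = f / (E + C - gamma * X * n)
            eps_p += dp * n
            X += C * dp * n - gamma * X * dp
            sigma = E * (eps - eps_p)
        else:
            sigma = sigma_trial
        history.append(sigma)
    return np.array(history)

# Monotonic tension: stress hardens from sigma_y toward sigma_y + C/gamma
stresses = af_kinematic_1d(np.linspace(0.0, 0.01, 200))
print(stresses[-1])
```

Under monotonic loading the backstress saturates toward C/gamma, so the stress approaches sigma_y + C/gamma; under cyclic loading the same recall term produces the characteristic hysteresis loops that the fatigue postprocessing then evaluates.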
The structural analyst uses the Z-post software with the ONERA fatigue damage model [54,78] to post-process the FE results to get the lifetime estimation. The fatigue model considers the nonlinear accumulation of damage, taking into account the full stress-strain history. The temperature effect, local material microstructure, mean stress effect, stress multiaxiality, and engine operating profile are included in the analysis. Kumpula et al. [55] present the ONERA fatigue model and the used material parameters in more detail. The result of the fatigue analysis is a lifetime estimation for each point in the cylinder head. Figure 8 shows the corresponding lifetime estimation as a color map, where the analyst can pick the most critical locations easily.
In the virtual validation phase of the product development process, the structural analysts use the big assembly FEM models with a great number of degrees of freedom and long simulation times. The models have one single purpose: they try to model the physics as accurately as possible in order to get the absolute safety factors or lifetime of a critical point calculated as accurately as possible. These methods are under constant development, making virtual validation even more reliable.

Example 2: Virtual Validation of a Crankshaft [46]
A crankshaft is a highly loaded component in a reciprocating internal combustion engine. To ensure the reliable functioning of this critical component, a structural analyst has to carry out multiple types of analyses. The most common types are the static concept analysis, the dynamic stress and fatigue analysis, and the closely related bearing [49], connecting rod [47], and fretting analyses [48]. Frondelius et al. [44] show how the crankshaft simulation methodology is part of the virtual engine concept. Karmann et al. [79] studied a split crankshaft by means of simulation.
Crankshaft dynamics calculation splits into a concept phase and advanced flexible multibody dynamic (MBD) simulations. Advanced flexible MBD simulations can capture transient and nonlinear phenomena. Crankshaft dynamics' central areas of interest are torsion and bending deformation, as well as axial, main bearing, and big end bearing forces. The structural analyst models the complete powertrain with a crankshaft, connecting rods, intermediate gears, camshafts, torsional vibration dampers, and couplings with MBD software AVL Excite power unit.
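Conceptually, the concept-phase torsional analysis reduces the powertrain to a chain of rigid inertias coupled by torsional springs, and the natural frequencies follow from a small generalized eigenvalue problem. The sketch below uses invented inertia and stiffness values and is, of course, far simpler than the flexible MBD models built in AVL Excite Power Unit.

```python
import numpy as np

# Minimal concept-phase torsional model: a chain of rigid inertias
# (e.g., damper, crank throws, flywheel) joined by torsional springs.
# All numbers are illustrative, not from any real engine.
J = np.array([50.0, 10.0, 10.0, 10.0, 300.0])  # inertias [kg m^2]
k = np.array([5e6, 8e6, 8e6, 6e6])             # shaft stiffnesses [Nm/rad]

n = len(J)
K = np.zeros((n, n))
for i, ki in enumerate(k):
    # each spring couples two neighboring inertias
    K[i:i + 2, i:i + 2] += ki * np.array([[1.0, -1.0], [-1.0, 1.0]])
M = np.diag(J)

# Generalized eigenvalue problem K q = w^2 M q, solved via M^-1 K
w2 = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)
freqs = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)  # natural freqs [Hz]
print(freqs)  # first entry ~0 Hz: the rigid-body rotation mode
```

The lowest nonzero torsional natural frequencies computed this way are compared against the engine firing orders in the concept phase; the advanced flexible MBD simulations then refine the picture with transient and nonlinear effects.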
The crankshaft analysis process described in Figure 9 begins with the geometry of the powertrain components. The geometry is modified for FE meshing using appropriate CAD software, in this case, Siemens NX. The structural analyst uses preprocessing software such as SimLab and HyperMesh for FE model building, the SimLab being the preferred tool because of the straightforward automation of meshing processes and built-in tools for multibody model creation.
The crankshaft simulation case here illustrates the possibilities of automating the simulation process. Additionally, Figure 9 shows the complexity of a "single" simulation. Some very advanced nonlinear simulations can be automated in such a way that any designer can start the process in the DDP, which is one example of CAE democratization. With CAE democratization, the role of the structural analyst will shift toward tool development rather than serving as a product development project resource.
Reflecting this paper's research question, how does the above research help in the development of the most efficient four-stroke engine? Crankshaft dimensioning is the most crucial part of the engine development. New methodologies and R&D of the new methodologies were needed to make sure that the crankshaft will be able to handle the high cylinder peak pressure.

Validation and Product Testing
Validation and product testing is the phase of the project where the company tests its physical product to determine whether it meets the key customer needs. Naturally, this phase also verifies that the product is reliable and works as expected. In the engine laboratory, much work also remains in fine-tuning the engine parameters, for example, matching the turbocharger performance maps with those of the engine.
Finally, this section on validation and product testing ends the paper with two examples: big data analysis and counterweight fretting measurement device development, respectively. Both of these emphasize how vital close cooperation between the virtual validation and validation teams is. Deploying virtual engine models, the analyst can design the best possible measurement points to make the model verification as easy and as accurate as possible. Due to digitalization, big data analysis [80,81] is becoming more and more important. First, some peculiar features of big data analysis are explained.

Example 1: Big Data Analysis
The rapidly growing amount of data creates the need to develop new methods for processing large data volumes automatically. Companies should see data analysis as important feedback: data acquisition from the field can be used in product development to make better products in the future. Collecting field data makes it possible to study and learn from the product in real-life conditions that are usually totally different from the ones in an R&D laboratory. Thus, the data is essential for product development. By collecting and examining data, a company can achieve remarkable competitive advantages over other companies in the same industry.
If already delivered products start to malfunction, the only way to solve the problem and learn from it is to acquire measurement data from the field, do data analysis, and provide options for problem-solving based on the analysis results. The importance of data analysis is nowadays well understood [82], and companies have a rapidly growing need to find data analysts and software engineers or partners to improve the company data analysis capabilities [39,44,46,81].
Companies have collected data for a long time, but its systematic utilization in product design is often lacking. Many times, the amount of data is a problem: it does not have to be very large before its analysis with, e.g., a spreadsheet program becomes impossible. It is also often the case that easy-to-use, ready-made programs for the specific needs of a company's data do not exist. The company may need to find partners or internal resources capable of delivering customized solutions for data analysis. Wärtsilä, together with its suppliers, has years of experience in delivering a wide range of data analysis solutions, ranging from small desktop applications to extended browser-based analysis services capable of analyzing hundreds of terabytes of data in computational clusters with thousands of calculation cores [80].
The virtual model's verification depends on obtaining reliable data from the measurements. Usually, structural analysts together with measurement experts plan the subsequent measurements to get the maximum information from the measurements. However, much improvement in the statistical analysis of the measurement data is still needed. The potential to increase the accuracy of the simulation methodologies relies on big data analysis.

Example 2: Counterweight Measurements [81]
The target was to develop a measurement device based on the latest technology, suitable for operation in extremely demanding environments. For this reason, the project was divided into smaller phases. All test phases included environment testing. The first phase was the determination and testing of radio technology in the engine environment. High-voltage regulators and metal structures are known to cause transfer errors in analog radio transmission. For this reason, digital radio was selected for data transfer. Digital radios offer a large variety of operating bands and protocols. The team selected a 2.4 GHz band radio for its data rate, reliability, and power consumption. Data transfer tests were carried out to confirm that the radio performs as required.
Wireless live data transmission with embedded radio communication is typically a compromise between power consumption and data transfer rate: a higher data transfer rate usually requires substantially more energy. Vibration measurements demand high accuracy and a high sampling rate. The requirements included the ability to provide synchronized data from two different measurement spots, each with three sensors. Because these requirements resulted in such a high data transfer rate, the team implemented a custom radio communication protocol, created to maximize the data payload of the radio packets with reasonable power consumption. Tests inside an engine proved that the selected radio with this protocol could perform synchronized measurements at the desired places on a rotating shaft. However, shaft rotation caused a tremendous number of data transfer errors, and therefore the communication protocol had to be improved for live data.
The NMAS has built-in secure remote connection possibilities. The connection can be used to monitor measurement status and signals remotely. The user views data remotely as soon as it is measured. The user can create a remote connection with any regular mobile phone or 3G modem, and the connection is activated automatically as soon as the public network is accessible.
A core feature of the device is the data postprocessing performed inside the device itself. The purpose of the analysis is to get the displacement amplitudes of the counterweight for each specific eigenmode. Halla-aho et al. [81] have developed an intelligent algorithm that can perform all the needed calculations on the fly. The algorithm can be triggered automatically according to specified limit values, avoiding unnecessary calculations when nothing unusual is happening. The mathematical principle of the algorithm builds on the Fast Fourier Transform (FFT) and double integration. The frequency bands for each selected eigenmode are defined remotely. The device performs the FFT of the acceleration signal and integrates it twice to obtain the displacement for each frequency band. The inverse FFT then gives the displacement time signals. Finally, a time window and the running amplitude principle are used to get the modal displacement amplitudes. Figure 10 shows the time signal and FFT of the measurement data. The team validated the algorithm by comparing results from the device with traditional measurement data analyzed manually in the office. The calculation inside the box reduces the amount of data from a high-frequency time signal to a few essential numbers per second. Results are available in real time via a network. Further manual data analysis is not needed but remains possible. The algorithm can also be set to record raw data from unusual events if needed.
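The described FFT double-integration principle can be sketched in a few lines: the acceleration spectrum is limited to one eigenmode's frequency band, divided by -(2*pi*f)^2 to integrate twice, and transformed back to a displacement time signal whose amplitude is extracted. This is a simplified reconstruction from the description above, not the actual on-device algorithm of Halla-aho et al. [81], which additionally applies time windowing and the running amplitude principle.

```python
import numpy as np

def band_displacement_amplitude(acc, fs, f_lo, f_hi):
    """Displacement amplitude in one frequency band of an acceleration
    signal: FFT, band-limit, double integration via division by
    -(2*pi*f)^2, inverse FFT, peak of the displacement time signal.
    Simplified sketch; f_lo must be > 0 to avoid the DC bin."""
    n = len(acc)
    A = np.fft.rfft(acc)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)   # keep only the selected band
    D = np.zeros_like(A)
    D[band] = A[band] / (-(2.0 * np.pi * f[band]) ** 2)
    disp = np.fft.irfft(D, n)          # displacement time signal
    return np.max(np.abs(disp))

# Check against a pure tone: a(t) = -w^2 * d0 * sin(w t)
# corresponds to a displacement amplitude of d0
fs, d0, f0 = 2000.0, 1e-3, 50.0
t = np.arange(0.0, 1.0, 1.0 / fs)
acc = -(2.0 * np.pi * f0) ** 2 * d0 * np.sin(2.0 * np.pi * f0 * t)
print(band_displacement_amplitude(acc, fs, 40.0, 60.0))
```

Running such a routine per configured eigenmode band is what reduces the high-frequency time signal to a few essential numbers per second, as described above.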
Like in the previous section, good measurement data is the key to the verification of the virtual models. This device is designed for long-lasting measurements inside a rotating engine. It uses a smart algorithm to store just enough information for efficient data analysis. This way, the device saves energy and can keep measuring for a long period.

Result of the Simulation-Driven Design Process
The Wärtsilä 31 engine (Figure 11) was selected as the most efficient four-stroke diesel engine in 2015 [83]. It is the first Wärtsilä engine product to fully utilize the simulation-driven design process described in this paper. Of course, the methodology is under constant development, and it is much more advanced today than it was five years ago.
Even though efficiency is a performance achievement, it requires a high peak cylinder pressure, which means massive loading on the power system components, which in turn are optimized using the methodologies described in this paper.
The simulation-driven design process helped to create a first-time-right design. The four W31 test engines have had significantly fewer teething problems than earlier product releases. The W31 has avoided all significant hiccups, and its production ramp-up is proceeding nicely at the time of writing. Figure 11 shows part of the development team involved in the W31 engine development project. By December 2015, a total of 433 internal people had reported hours to the W31 engine development project. The project also used a great deal of external personnel during the different project phases. The size of the team illustrates the complexity of the product and the number of competencies needed to make it.

Summary
This paper describes the simulation-driven design process used in Engines Technology. The research question is "how to use research in the structural analysis and dynamics field to ensure world-class product development?" The paper looks at the research of the simulation methodologies from the design process perspective, essentially answering the question of why this research was needed. In each process step, examples illustrate the research needed. Thus, each section of this paper goes through one to two practical examples, in which research was needed to increase the product design quality.
In the Product Definition section, the DDP was introduced. The DDP increases internal communication and reduces stale-data mistakes, since everyone accesses the live data in the database. It also guarantees that everyone has access to all product data. Connecting customer requirements all the way through to the simulation results ensures that whenever someone has changed them, or needs to change them, the project management understands the whole required effort and the interconnectivity of the individual requirements and design tasks with their validation. When FMEAs connect the simulation tasks and the product requirements, we are at the center of simulation-driven design management.
In the Concept Design and Concept Simulation section, the two examples were main bearing cap topology optimization and JuliaFEM platform, respectively. All numerics have come here to stay, and, as already mentioned, in the future, topology optimization toolboxes will have more and more intelligence, and they will be more independent automatic design creation tools. The other example, the JuliaFEM platform, goes deep into the academic world. There is a need for a full-featured FEM code in the modern scripting language environment because the users with industrial-size models want to benefit from open-source platform development where one writes the missing piece of code.
The Detailed Design section presents two examples, fatigue simulation of oil sump welds and connecting rod fatigue and fretting calculations, respectively. Optimization of the weld details, although manual in this case, is needed for a reliable oil sump design. The devil is in the details, and thus a small modification in geometry leads to a considerable increase in life expectancy. The connecting rod simulation showed a nice correlation to the dynamic measurement, with an average difference of only 0.8%. The importance of basic research on fretting fatigue was also highlighted: without understanding the fundamentals of fretting fatigue, it is nearly impossible to design reliable medium-speed connecting rods.
Also in the Virtual Validation section, two examples are shown, namely, cylinder head and crankshaft simulation processes. These are the high-end simulation cases, where a lot of method development and research were needed to polish the processes in these conditions. The crankshaft case shows a sound correlation to the verification measurements. Both processes are the backbone of the Structural Analysis and Dynamics team's work.
Finally, the Validation and Product Testing section ends the paper with two examples: big data analysis and counterweight fretting measurements. Both of these emphasize how vital close cooperation between virtual validation and validation teams is. Deploying virtual engine models, an analyst can design the best possible measurement points to make the model verification as easy and as accurate as possible.
Overall, this paper explains the importance of engineering mechanics research for a company that aims to maintain a technology leadership position. This paper deals with the different product development project phases in light of real-life examples. These examples offer an excellent foundation for the rest of the industry to follow, catch up, and try to overtake.
As a conclusion, designing the most efficient four-stroke engine in the world demands very high cylinder peak pressure, which acts as the main load and excitation to all engine components. This paper described through examples what kind of research was needed to take this increased loading into account. Even though structural analysis and dynamics method development played a major role in this product development project, a lot of other competencies were needed too. The internal team size of 433 people speaks for itself. Efficiently managing such a high number of people is a challenge, and communication of the changes during the development project also played a major part in the success story.