Bharat Rakshak Forum Announcement

Hello Everyone,

A warm welcome back to the Bharat Rakshak Forum.

Important Notice: Due to corruption in the BR forum database, we regret to announce that data records relating to some of our registered users have been lost. We estimate that approximately 500 users' details have been deleted.

To ease the process of recreating user IDs, we request members who have previously posted on the BR forums to identify their posts. Once the posts are identified, please contact the BRF moderator team by emailing BRF Mod Team with your post details.

The mod team will be able to update your username, email address, etc., so that your user history can be maintained.

Unfortunately, for members who have never posted or have had all their posts deleted (i.e. users with 0 posts), we will be unable to recreate your account; we request that you register again.

We apologise for any inconvenience caused and thank you for your understanding.


Project BRF: India's Kaveri Engine Saga

The Military Issues & History Forum is a venue to discuss issues relating to the military aspects of the Indian Armed Forces, whether past, present or future. We request members to kindly stay within the mandate of this forum and keep their exchanges of views on a civilised level, however vehemently any disagreement may be felt. All feedback regarding forum usage may be sent to the moderators using the Feedback Form or by clicking the Report Post Icon in any objectionable post for proper action. Please note that the views expressed by the Members and Moderators on these discussion boards are those of the individuals only and do not reflect the official policy or view of the Website. Copyright violation is strictly prohibited and may result in revocation of your posting rights - please read the FAQ for full details. Users must also abide by the Forum Guidelines at all times.
BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Project BRF: India's Kaveri Engine Saga

Postby maitya » 17 Jan 2014 21:47

This thread will serve as a repository of technical knowledge on Indian efforts to build aero engines, APUs, jet starters and other engine accessories. It will serve as a reference for the future.

Posts and counter-posts should be based on published data and state-of-the-art technical knowledge. All other posts, including personal opinions, bare links and quotes, will be deleted on sight.

This thread will be heavily moderated. The moderators' judgement on whether a post is retained will be final. Users who repeatedly make unworthy posts will attract warnings.
Last edited by Rahul M on 11 Feb 2014 16:25, edited 3 times in total.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 17 Jan 2014 23:02

Kaveri is an ab initio afterburning-turbofan development programme of GTRE, intended to be the powerplant for the LCA.
As a follow-up to the August 1983 sanction of the development of a multi-role Light Combat Aircraft (LCA), the need for an indigenous turbofan to power it was felt. The ASR for the LCA specified broad (and sometimes subjective) dimensional and performance requirements for its powerplant.


The genesis of the Kaveri is quite aptly captured in CAG Report No. 16 of 2010-11 (Air Force and Navy):
Accordingly, there was a corresponding demand for a suitable engine for powering the LCA. Feasibility studies carried out in India and abroad revealed that there was no suitable engine available anywhere in the world, though the Rolls-Royce RB-199 Stage D and GE F404-F2J engines, by and large, met the requirement, provided certain concessions were granted in the Air Staff Requirements (ASR).

At this point in time, the Gas Turbine Research Establishment had already been working on an aero-engine project, the GTX-37 engine, since 1982. In August 1986, a feasibility study was carried out jointly by the Aeronautical Development Agency (ADA), Hindustan Aeronautics Limited (HAL) and the Gas Turbine Research Establishment (GTRE) to evaluate the GTX-37 engine. The feasibility study indicated that the GTX-37 engine would, after certain rescheduling, meet the requirements of the LCA. GTRE accordingly, in December 1986, submitted a project proposal for the development of the Kaveri engine.

GTRE further proposed that it would be desirable to prove the newly designed airframe of the LCA with a proven engine first. Subsequently, the prototypes would be flown with the GTX-35 engine as soon as this engine was type-certified and cleared for flight.

Based on the above proposal, Government sanctioned a project in March 1989 at a cost of Rs 382.81 crore with the probable date of completion (PDC) as December 1996, for the design and development of Kaveri engine.

The Kaveri Engine Project was sanctioned with the following basic objectives:
1) To design and develop the GTX-35 engine to meet the specific needs of the LCA.
2) To create a full-fledged indigenous base to design and develop any advanced-technology engine for future military aviation programmes.
3) To establish the engine's performance integrity in the various categories of tests prescribed by the aero-engine industry the world over.

Reams and reams have been written on the current state of the Kaveri, and there's no point in trying to reproduce it all in detail - however, the following conclusion from the same CAG report quoted above sums it up quite appropriately:
Despite almost two decades of development effort with an expenditure of Rs 1,892 crore, GTRE is yet to fully develop an aeroengine which meets the specific needs of the LCA. The successful culmination of the project to develop an aero-engine through indigenous efforts is now dependent upon a Joint Venture with a foreign vendor.
Last edited by maitya on 18 Jan 2014 21:36, edited 4 times in total.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 17 Jan 2014 23:14

Guys, I think we'll go nowhere if we try to debate whether Kaveri/Kabini is state-of-the-art or not, whether it is good or bad, or whether it was built from scratch or not.

As in most things in life, all of these attributes are relative, and it is thus pointless to label them as absolutes and argue on that basis.
So here's my take on how this ab initio military turbojet/turbofan engine development programme should be viewed/evaluated. Apologies, it became a bit long-winded, so I have broken it up into 3 parts for ease of page navigation.

Anyway here goes …

[Part 1]
Technology-wise, Kaveri was contemporary till the late 90s - OK, maybe even the early 2000s. Directionally solidified cast blades, the flat-rating concept, a TET of nearly 1400 deg C, an OPR of 21-22, etc. are all hallmarks of late-1990s military engines, either just unveiled or in mass use at the time.
So should we label it a contemporary engine, carte blanche? Hell no!!
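To get a feel for why OPR matters so much, here's a minimal sketch (ideal Brayton cycle, losses ignored; the numbers are illustrative only, not Kaveri test data) comparing a Kaveri-class pressure ratio with a 2000s-contemporary one:

```python
# Ideal Brayton-cycle thermal efficiency: eta = 1 - OPR**(-(gamma-1)/gamma)
# Illustrative only -- real engines have component losses on top of this.
gamma = 1.4  # ratio of specific heats for air

def ideal_thermal_efficiency(opr):
    """Ideal-cycle thermal efficiency for a given overall pressure ratio."""
    return 1.0 - opr ** (-(gamma - 1.0) / gamma)

eta_kaveri = ideal_thermal_efficiency(21.5)  # Kaveri-class OPR (~21-22)
eta_modern = ideal_thermal_efficiency(30.0)  # 2000s-contemporary OPR (~28-30)

print(f"OPR 21.5 -> ideal eta ~ {eta_kaveri:.1%}")
print(f"OPR 30.0 -> ideal eta ~ {eta_modern:.1%}")
```

Going from OPR ~21.5 to ~30 is worth roughly 4 points of ideal cycle efficiency, before even counting what a higher TET buys you.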

Contemporary (1990s) engine R&D and its impact on Kaveri: While we were busy developing Kabini/Kaveri, the established engine developers were all deep into R&D and prototyping of the next-gen technologies, broadly in the following areas:
1) Turbine blade material technology - TET increase is one of the most important factors affecting the efficiency of a turbojet/turbofan. Here the challenge is material and casting technology able to withstand an operating environment some 300 deg C hotter - SCBs partly answer that problem, but TBCs, higher-temperature oxidation resistance, and multi-flow air paths within the blades are areas where huge progress was made.

2) Compressor design - This dormant R&D area saw something of a rebirth in the 1990s, with a surge in R&D (a back-to-basics phenomenon?) resulting in huge (some say game-changing) advancements in the 2000s. Compressor design gains (and, equally important, the industrial manufacturing capability to translate those design gains into actual manufactured products) have a direct bearing on pressure ratio, the other most critical parameter in a turbojet/turbofan.

The advent of supersonic compressor blade speeds (Mach 1.6, while Kaveri is stuck at a transonic Mach 1.2), multi-circular-arc blade designs (Kaveri, I think, is at best a double-circular-arc design), low-aspect-ratio blades and, more importantly, the manufacturing-engineering R&D (and subsequent manufacturing capability) needed to translate these designs into high-strength, relatively high-temperature compressor blades and disks, ensured the achievement of 2000s-contemporary PR levels of 28-30.

So yes, the technology levels we see in Kaveri/Kabini are what was already available in the military engines of the 1980s and 1990s - i.e. technologies researched and developed in the 1970s and 1980s. We started our R&D and development in the 1990s, and two decades later (slightly longer than what the western engine design houses took in the 1960s-70s, maybe) we have something close to a working engine.
But the engines available today (the F414, for example) have already incorporated the technologies that were developed in the 1990s and 2000s.

One pertinent question comes up, though (partially brought out by Pentaih-ji):
All of the above reasoning is fine and maybe even acceptable - but couldn't we have shortened this two-decades-plus development and engineering period and somehow played catch-up?

The problem I see in the previous set of posts is the argument that, without any prior experience in turbojet/turbofan engine design, this could not possibly have been achieved. Well, experience is a huge factor, but not the only one - one of the major reasons for where we are today, IMVHO and I daresay, is the set of design choices (both core engine design and material/manufacturing choices) made back in the 1980s.

But to understand this aspect we need to go back a bit and examine the history of Kaveri engine design/development.

[contd ...]


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 17 Jan 2014 23:28

[Part 2]

Kaveri History: Well, first of all, it's erroneous to assume Kaveri (or GTX-35 VS) is the absolute first turbojet/turbofan to be designed and developed from the ground up (from scratch) by GTRE - it's not. In fact, Kaveri is not a "from scratch" development in the first place - it is more of an upgrade.

Its predecessors were the GTX-37U (turbojet) -> GTX-37UB (its turbofan version) -> GTX-35 (enhanced turbojet based on 37U tech). And Kaveri (or GTX-35 VS) is more of an upgrade of the GTX-35 (same core, etc.). The following schematic depicts the Kaveri lineage:


Note how the reduction of HPC stages was carried out to reduce weight while simultaneously increasing turbine efficiency by raising TET (and OPR) - all of which required advances in materials tech as well. Also note the mass-flow drop in graduating from a turbojet to a turbofan, necessitating further efficiency gains in turbine and compressor technology (or an increase in the number of corresponding stages).
Pls note the 37 series is from the 70s and early 80s, while the 35 series is from the late 80s to the early-mid 90s.

But all of this still doesn't make the above "lack-of-experience" argument completely void - none of these predecessors actually flew, and they were more laboratory products (or tech demos).
Back in 1983, right after the LCA project was sanctioned, a concurrent engine evaluation study was conducted by GTRE. In two years' time, by 1985, it was completed, and the summary finding was: "No contemporary engine is available world-wide that meets the LCA engine specifications".

The F404 etc. were an afterthought and, more importantly, risk-mitigation steps - which, due to non-delivery of the actual engine, have now become the default engine. :roll:

Anyway, similarly, a "Materials Committee" of GTRE in 1989, after a comprehensive study of the materials used in contemporary turbojet/turbofan engines, and after taking into account the infrastructure facilities available within the country in general and the production capability of MIDHANI and DMRL in particular, recommended the development of materials, batch production, type-certification processes, etc.

The Kaveri development programme was then launched in 1989.

The Kaveri itself (actually only the core, Kabini) first ran in March 1995; two Kabini prototypes (C1 and C2) and three full engine prototypes (K1, K2, K3) ran between 1995 and 1998.
(contd ...)


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 17 Jan 2014 23:31

[Part 3]
Kaveri Design Choice Rationale: With this historical background in place, IMVHO I'd speculate that what really happened around 1990, while the performance design (and materials roadmap) for Kaveri was being finalised, is that the designers and technologists of GTRE were faced with a major dilemma:

    1) On the design front,

      a. Is it sensible to aim for the core design parameters (e.g. OPR, TET, BPR, combustor efficiency, supersonic compressor regimes, ultra-low-aspect-ratio blades, blisk manufacturing, etc.) of the various modern engine development programs then in R&D, or
      b. to stick to the already-understood basic design layouts of the GTX-37U and 37UB, introduce a medium level of improvement on these design parameters, and still meet the Kaveri specs?

    2) Similarly, on the materials front,

      a. Aim for the materials technology being worked on at the various materials design houses (e.g. 2nd- and 3rd-gen SCBs, DS-based later-stage compressor blades, 1st-gen SCB-based components, ceramic- and polymer-matrix-based combustors and static engine parts, etc.) to provide the quantum jump in performance parameters being asked of the Kaveri specs, or
      b. provide a more conservative, incremental advancement in materials tech (e.g. introduce directionally solidified blades for the HPT, Ti- and Ni-based equiaxed-cast compressor blades, contemporary "bolted" disk-blade interfaces, an annular combustor, etc.) and still achieve the Kaveri specs?

It may be fashionable to label the GTRE folks as failures/worthless/losers etc., but the fact is they had a fair enough idea of the contemporary advancements being carried out worldwide to have made the design choices that were made, then.

So the decision matrix then may have looked like:

1(a)2(b) ---|--- 1(a)2(a)
---------- Risk ------------>
1(b)2(a) ---|--- 1(b)2(b)

The GTRE technologists and designers chose the 4th quadrant, i.e. 1(b)2(b) - of course with a hidden/inner ambition of getting to the 1st-quadrant stuff concurrently, as the general technological level of the country advanced over the next two decades.

Overall Design Goals Met/Not Met: Please, one word of caution against over-emphasising the success of the dry-thrust and 90%-wet-thrust achievements (SFC, well, not sure) - yes, those values were achieved, but at what weight (and maybe SFC) penalty?

If you look at the chart above, the previous-gen GTX-37UB would also have met these figures, wouldn't it (with an even bigger weight and SFC penalty)?

So IMO the right way of labelling Kaveri is to call it a qualified success. For the first time, if pursued with no let-up through the flight-test programme, it will validate a flying turbofan engine - in technological terms it would:
1) validate (and provide invaluable empirical data for) the CFD and basic mechanical design of a twin-spool 80 kN turbofan (90s level)
2) give enough design and manufacturing confidence in 80s-level material tech

Without these there’s no hope of leapfrogging technological gens etc (refer to epilogue section for a glimpse of that), and we're doomed to play catch-up forever.

Inference: But therein lies the problem.
First, recall the findings of the 1985 engine evaluation study, which basically stated that the LCA engine specs were set too high to be met by any contemporary engine of the time. Now contrast that with the constraining (aka conservative-conservative) technological choices made for Kaveri to achieve those specs.

This essentially means there was a wafer-thin margin of error in meeting both the core engine-design parameters and the enabling material/manufacturing design/technology. A shortfall in even one parameter could spell doom - and that's precisely what happened with Kaveri, albeit with shortfalls in almost all design parameters (each admittedly small, but together big enough to compound into the overall shortfall we see today).
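To put some illustrative numbers on that compounding effect (purely hypothetical figures, not actual Kaveri data): if overall performance scales roughly as the product of several design parameters, a small miss on each compounds quickly:

```python
# Hypothetical illustration: four design parameters, each achieved at only
# 97% of target. If overall performance scales as their product, the
# individual 3% misses compound into a much larger overall shortfall.
achieved_fractions = [0.97, 0.97, 0.97, 0.97]

overall = 1.0
for f in achieved_fractions:
    overall *= f

shortfall = 1.0 - overall
print(f"Overall achievement: {overall:.1%}, i.e. a {shortfall:.1%} shortfall")
```

Four "small" 3% misses multiply out to an 11-12% overall shortfall - which is why a spec with zero margin is so unforgiving.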

But wait - before we start dishing out advice from our hindsight-is-20/20 vantage point, let's try to think through why the GTRE folks would not consider the high-high-risk 1(a)2(a) approach.

Well, if you look at our national psyche of extreme navel-gazing, if-it's-made-in-India-it-must-be-useless, pricing-of-tech-dev-in-terms-of-social-upliftment-missed-cost, 3-legged-cheetah-labelling-user-attitude etc. (Shivji will have a longer list), the GTRE folks would have been mortally scared of failure arising from such a high-risk endeavour.
Frankly, I'm not very sure it mattered to the GTRE folks whether the LCA flew or not, as long as they met the Kaveri design parameters. So when the larger programme, due to scope creep, necessitated requirement growth towards a next-gen powerplant, Kaveri in its present technological form was not even close to it.

Plus, all this talk of a new imported core etc. means exactly that - a fully imported engine in terms of jet-engine tech, nothing more, nothing less. :roll:

That’s the price to be paid for a pessimistic/stifling national outlook towards technological advancement with zero-tolerance towards failures and import at all cost attitude. :(

Epilogue: While we constantly continue to berate the GTRE folks for technological failure etc., a small snippet needs understanding.

In the mid-2000s, desperate to reduce the overweight Kaveri (it's still overweight by 150 kg or thereabouts), the GTRE folks went ahead and experimented with the absolute cutting edge of material tech, i.e. Ceramic Matrix Composites (CMC) and Polymer Matrix Composites (PMC), on some of the non-rotating, non-critical components. CMC was targeted at a few hot components like the nozzle divergent petals, exhaust cone, etc., while PMC (high-temperature, PMR-15 class) went towards the bypass duct, the CD nozzle cowls at the back, etc.

The aim was to reduce weight by 30Kgs (i.e. approx. 20-25%).

In contrast, please google around for CMC- and PMC-related R&D and, more importantly, its usage on various aero-engines by the established western players (hint: some links are a couple of pages back on this very thread).

This confidence and attitude are the true by-products of the Kaveri engine development program.
[The End]
Last edited by maitya on 17 Jan 2014 23:42, edited 1 time in total.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 00:09

So where exactly are we wrt Kaveri... the following info may be a bit dated (from 2009), but it still accurately reflects the overall state of what has been achieved in the programme so far and, more importantly, the various shortfalls requiring more work and hand-holding.

Posting in full, k prasad's entire post (pure gold, must say) from AI09 ...
k prasad wrote:Ok.... this is the GTRE story - (someone come up with sad music plz).... from the Aeroseminar.

An overview of the Kaveri situation was provided by the GTRE director, T. Mohan Rao, who was accompanied by his senior scientists. The hall was packed, and the language and tone of his speech were sadly self-deprecating and pleading. Almost as if DRDO has also started losing faith - he had to explain what's going on and why it's happening. Sad to see, but there are clear silver linings in the story.

1. He pointed out that the change in IAF requirements and the increase in all-up weight by 2 tons killed the Kaveri as they knew it, simply because it could not in any way achieve the new requirements... he was quite angry that they had been blamed for what was obviously not their fault, i.e. a low-performing Kaveri against the updated reqs. Bypass ratio is 0.16 to 0.18... he pointed out that if it had to meet the new standards, the bypass would have to be at least 0.35 to 0.45.

2. 4 Cores and 8 Kaveris built, 1800 hrs testing done.

Thrust demonstrated: 4774 kgf dry (design value reached), 7000 kgf reheat (2.5-3% shortfall).

3. Pressure ratio - 21.5 overall.

Fan - 3 stage, 3.4 pressure ratio, Surge margin>20.
Compressor 6.4 pressure,Surge>23.
Combustor - efficiency >99%, high intensity annular combustor. Pattern factor of 0.35 and 0.14

Note: These are ACHIEVED values.

4. The present Kaveri will not power combat LCAs, although it will be fitted to an LCA within 9 months. The new programme, i.e. the Kaveri with the Snecma eco core of 90 kN, will be used instead. The preliminary design studies and configuration have been completed.

5. Bird-hit requirements of 85% thrust after a hit at Mach 0.4-0.5 have been demonstrated and achieved.

6. He pointed out that the major factor in the delays was not being given enough infrastructure and testing facilities - the Govt has not given funds; babus have sat on them. Instead, they have had to go to CIAM in Russia and Anecom in Germany for tests.

He mentioned that this was the biggest problem - one of the issues they had was with engine strain and blade throws. They tried to isolate all the causes for 3 years, but only when they took it to CIAM for the Non-Intrusive Strain Measurement (NSMS) tests did they realise that excess vibrations of the 3rd order of engine frequency were developing... imagine if the facility were there in India.

Then, in the compressor tests too, it was only at Anecom that they could see that the first 2 stages were surged by 20% while the rest were "as dead as government servants" (his quote - shows how low on confidence they are, I guess). He pointed out that a lot of time and money would have been saved if that facility were in India. They have since fixed the issue.

Then the afterburner tests (the much-highlighted high-altitude failure) at CIAM - the requirement is for a 50% thrust boost over dry thrust at 88% efficiency. The K5 prototype failed in 2003, after working perfectly at GTRE. They realised that they could not achieve light-up at high altitudes (dry thrust worked OK).

They took another new engine block and the afterburner worked perfectly; it has been certified to 15 km.

7. The good news..... they will conduct complete engine trials in CIAM in March. If these trials are successful (and they are highly confident), the Kaveri will be integrated on the LCA within 9 months.

The KADECU FADEC system with manual backup has also been fully certified.

8. The bad news again - The present requirements would need the core to pump out 15-20% more power, which is impossible... hence the eco. Not that there is anything wrong with the core.

He mentioned that otherwise, the Kaveri has met the original requirements, or will meet within the next month, and is good for all other uses except a "combat LCA" - ie, CAT, LIFT, LCA Trainer, etc.

9. When asked where we lack, he mentioned 4 key areas

a. BLISK - integrated single Blade and Disk
b. Single Crystal blades - he categorically said - We do not have that tech at all.
c. Thermal Barrier Coatings (TBC) - very critical for high-temperature engine operation. A talk on this by an Indian-American prof attracted a house-full audience. He mentioned that this is highly critical and export-controlled, so they don't have it.

The last two points were mentioned by Dir, DMRL as one of their areas of research, but I was not able to quiz him on it. PLEASE QUIZ ANY DMRL GUYS U MEET ON THIS.

Mohan Rao appealed that people should realise that this tech takes time, money and, more importantly, willpower and support... it's not being given by foreign nations, so if we have to develop it, it needs support. This stance found strong support from Saraswat, Sundaram and Selvamurthy in the closing ceremony.

They are not looking at TVC just yet, and it is in the hands of other labs at the moment.

However, the ADE presentation on UCAVs showed a future Indian UCAV (2015) with no tail (MCA design), a non-conventional wingform, and a 3 axis TVC.

10. OK, some nos....

Fan - Successful tests at CIAM
Compressor: (nos in brackets are design values)

6 stage axial flow, 3 stage variable vanes with IGVs.
Corr. tip speed ~370 m/s
Inlet diam: 590 mm

Mass flow: 24.13 kg/s (24.3)
Pressure: 6.42 (6.38)
Efficiency: 85.4% (85%)
Surge %: 21.6 (20% designed)

Has undergone aero testing at CIAM
K8 V4 combustor is close to design.

Pressure = 3.6
Mass flow function= 1.1
Isentropic eff = 85%
Max. TET = 1700K

Is a success, has met design.

11. Future uses:

Navy - KMGT - a 1 MW version for small ships is being developed; the 5-6 MW KMGT is a success and runs on diesel instead of the usual kerosene aviation fuel.

The railways also want a 7-8 MW CNG-run engine, which will be a challenge in terms of fuel supply rather than the combustion itself, which shouldn't be a problem.

Any qns???

maitya wrote:Pls note the thrust reported in 2009 in the above post by GTRE director T. Mohan Rao, viz. dry: 4774 kgf = 46.82 kN and wet: 7000 kgf = 68.65 kN.
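(The kgf-to-kN conversion used above is just a multiplication by standard gravity; a quick sanity check:)

```python
# kgf -> kN conversion: 1 kgf = 9.80665 N exactly (standard gravity)
G0 = 9.80665  # m/s^2

def kgf_to_kn(kgf):
    """Convert kilogram-force to kilonewtons."""
    return kgf * G0 / 1000.0

dry_kn = kgf_to_kn(4774)   # matches the ~46.82 kN dry figure quoted above
wet_kn = kgf_to_kn(7000)   # matches the ~68.65 kN wet figure quoted above
print(f"Dry: {dry_kn:.2f} kN, Wet: {wet_kn:.2f} kN")
```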

Now fast-forward to 2010-2011, and we have this:
From the hindu ... 127075.ece

“In recent times, the engine has been able to produce thrust of 70-75 Kilo Newton but what the IAF and other stake-holders desire is power between 90—95 KN.

“I think with the JV with Snecma in place now, we would be able to achieve these parameters in near future,” they said.

So around 8% growth in wet thrust has already been achieved (along with around 10% growth in dry thrust) - this can't be achieved without improvements in compressor efficiency, turbine efficiency, or both.
So justifiably a very good effort and something that GTRE needs to pride itself with. :)

But to get to the 81 kN stage will require another ~7% growth in thrust, which will have to involve further tweaking of the compressor stages.
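The growth figures quoted above can be sanity-checked against the 2009 baseline (a rough check; the press quote gives only a 70-75 kN band, so I take its upper end, and the post's percentages are clearly rounded):

```python
# Rough growth check against the 2009 baseline (68.65 kN wet thrust).
# Assumption: the "70-75 KN" band from the press quote is taken at its
# upper end; the "around 8%" and "~7%" figures above are rounded.
wet_2009 = 68.65             # kN, from the 2009 AI09 figures (7000 kgf)
wet_now_band = (70.0, 75.0)  # kN, per the 2010-11 press quote
target = 81.0                # kN, the next milestone mentioned above

growth_low = wet_now_band[0] / wet_2009 - 1.0   # lower end of the band
growth_high = wet_now_band[1] / wet_2009 - 1.0  # upper end of the band
further = target / wet_now_band[1] - 1.0        # further growth to 81 kN

print(f"Wet-thrust growth since 2009: {growth_low:.0%} to {growth_high:.0%}")
print(f"Further growth needed for 81 kN: {further:.0%}")
```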

IMO the mass flow rate *may* continue to be the issue (no published figure so far - the above indicates only the compressor mass flow), which can be addressed by increasing the compressor stage pressure ratios, i.e.:
1) either by making the compressor lighter (improved materials etc., so it rotates faster on the same power from the turbine),
2) or by enhancing the aerodynamic efficiency of the compressor (which was compromised anyway, as nobody wanted to manufacture the originally designed fan blades due to the increased manufacturing complexity at the low volumes involved),
3) or by increasing the turbine efficiency (going for a higher TET - but DS blades wouldn't work there and would require SCB tech, or improved blade-cooling tech).
None of the above are easy paths to pursue, and they are in fact the technology generation of the contemporary engines.

Talking about 90-95 kN etc. (interestingly, no mention of dry thrust) is futile without going through the above-mentioned hoops, and nobody will part with their know-how before they themselves have moved on to the next-generation technology.

sivab wrote:^^^ It was a known fact from 2008. ... pment.html

Mohana Rao: We have a functional engine, but there is a slight shortfall in performance. It has achieved a dry thrust of 4,600 kg and reheat thrust of 7,000 kg in Bangalore, which is around 3,000 ft above sea level. So it would be around 5,000 kg dry thrust and 7,500 kg reheat thrust at sea level. The engine is short of thrust by 400 kg and overweight by around 150 kg. Also, we still have to perform long-endurance tests of the engine, running it for many hours.
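The Bangalore-to-sea-level correction quoted above is roughly consistent with a simple density scaling (a back-of-envelope sketch: ISA atmosphere, and the crude first approximation that static thrust scales linearly with ambient density):

```python
# Back-of-envelope: scale the Bangalore (~3000 ft) thrust figures to sea
# level, assuming static thrust varies roughly linearly with air density.
# ISA troposphere model; this is a crude first approximation only.
T0, P0, L, R, G0 = 288.15, 101325.0, 0.0065, 287.053, 9.80665

def isa_density(h_m):
    """ISA air density (kg/m^3) at altitude h_m (troposphere only)."""
    T = T0 - L * h_m                          # linear temperature lapse
    p = P0 * (T / T0) ** (G0 / (L * R))       # hydrostatic pressure
    return p / (R * T)                        # ideal-gas density

h = 3000 * 0.3048                             # ~914 m test-bed altitude
sigma = isa_density(h) / isa_density(0.0)     # density ratio, ~0.915

dry_sl = 4600 / sigma   # ~5030 kgf, close to the quoted ~5,000 kgf
wet_sl = 7000 / sigma   # ~7650 kgf, in the ballpark of the quoted 7,500 kgf
print(f"sigma = {sigma:.3f}, dry ~ {dry_sl:.0f} kgf, wet ~ {wet_sl:.0f} kgf")
```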


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 00:52

One important aspect that is almost always overlooked (being less glamorous) is the set of dimensional specifications of Kaveri.

All performance parameters are required to be achieved within the prescribed dimensional parameters - there's no point in creating an engine which meets the performance parameters but exceeds dimensional parameters like weight, diameter and length. Breaching them can make the engine impossible to fit in the airframe (or may require airframe changes, which normally set the actual programme back by years), making it unusable. So what are the dimensional parameters of Kaveri?
Length: 137.4 in (3490 mm)
Diameter: 35.8 in (910 mm)
Dry weight: 2,724 lb (1,235 kg)
[Goal: 2,100-2450 lb (950-1100 kg)]

Compressor: two-spool, with low-pressure (LP) and high-pressure (HP) axial compressors:
- LP compressor with 3 fan stages and transonic blading
- HP compressor with 6 stages, including variable inlet guide vanes and first two stators
Combustors: annular, with dump diffuser and air-blast fuel atomisers
Turbine: 1 LP stage and 1 HP stage
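As a quick consistency check, the imperial and metric figures quoted above agree with each other (straight unit conversion, nothing more):

```python
# Unit-conversion check of the published dimensional figures.
IN_TO_MM = 25.4
LB_TO_KG = 0.45359237

length_mm = 137.4 * IN_TO_MM    # ~3490 mm, matching the quoted figure
diam_mm = 35.8 * IN_TO_MM       # ~909 mm, i.e. the quoted 910 mm
weight_kg = 2724 * LB_TO_KG     # ~1236 kg, i.e. the quoted 1,235 kg
goal_kg = (2100 * LB_TO_KG, 2450 * LB_TO_KG)  # ~952-1111 kg goal band

print(f"L = {length_mm:.0f} mm, D = {diam_mm:.0f} mm, Wt = {weight_kg:.0f} kg")
print(f"Weight goal: {goal_kg[0]:.0f}-{goal_kg[1]:.0f} kg")
```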

So for Kaveri how are these dimensional parameters impacting/constraining the performance shortfall of Kaveri?
... not exactly sure what you are trying to say.
Nowhere in the world are billions of dollars spent developing a science project, as you are alluding to, without clear-cut and exact specifications in place. And for turbojet/turbofan development programmes, the dimensional and performance parameters are absolutely vital in deciding which way the programme will go.

So, just like any other turbojet/turbofan programme, Kaveri had its performance parameters (thrust 52/80 kN dry/wet, SFC 80/207 kg/kN.h, TWR 76 N/kg) defined to be achieved within the specified dimensional constraints (L 3.5 m, D 0.9 m, Wt 950 kg). It is around those performance and dimensional parameters that the LCA airframe dimensions, strength and overall design (e.g. the air-intake design) were themselves specified.
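Those spec numbers hang together, too: the 76 N/kg TWR and 80 kN wet thrust imply an engine weight near the middle of the 950-1100 kg goal band quoted earlier (a quick check):

```python
# Consistency check: the thrust-to-weight spec vs the thrust and weight
# goals. TWR here is in N of wet thrust per kg of engine dry mass.
wet_thrust_n = 80e3   # 80 kN wet-thrust spec
twr_n_per_kg = 76.0   # specified thrust-to-weight ratio

implied_weight_kg = wet_thrust_n / twr_n_per_kg   # ~1053 kg
print(f"Implied engine weight: {implied_weight_kg:.0f} kg")
# ~1053 kg sits inside the 950-1100 kg weight goal quoted earlier, so the
# TWR, thrust and weight targets are mutually consistent.
```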

PS: As I've pointed out in my previous post, these parameters were not only contemporary but actually world-beating in those days - recall how, when the committee specifying Kaveri's design/performance parameters looked for another existing (or about-to-enter-service) military turbofan to cross-validate and baseline Kaveri's parametric model against actual achievements (as opposed to copy-pasting from shiny brochures), they couldn't find any.
This should also put to rest any notion of modesty/reticence about the sheer technological scale this programme intended to leapfrog - some may rightly say we aimed too high.

So anyway, had Kaveri met these specified parameters, it would have been good enough for the LCA.

And it almost did - except for the weight and wet-thrust parts, where it fell short by 10-15% - for which it can be labelled a failure etc., and the GTRE folks need to redouble their efforts to recover those shortfalls.
After all, if it were argued (for the sake of argument) that thrust is the be-all and end-all and dimensional constraints are unimportant, then the Kaveri programme itself was hardly required - those thrust levels were achieved almost a decade earlier by its predecessors (GTX-37UB etc.).

On the contrary, dimensional parameters are extremely important, as they limit the mass-flow rate, the number of compressor (and even turbine) stages that can be squeezed in, the BPR itself, etc. - all having a direct impact on performance parameters like thrust, weight and SFC.
(PS: A couple of pages back I posted a very, very simplified Excel-based turbojet "designing" tool 8) - you may try playing around with those three simple parameters and do some cause-effect analysis.)
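For those who don't want to dig up that Excel sheet, the same cause-effect play is possible in a few lines of code. This is my own minimal sketch of an ideal (loss-free, static, sea-level) turbojet cycle - the input values are roughly Kaveri-class but purely illustrative, and an ideal cycle will naturally come out optimistic versus the real engine:

```python
# Minimal ideal-turbojet cycle: vary OPR, TET and mass flow and watch
# thrust and SFC respond. Ideal components, static sea-level conditions,
# no afterburner -- an illustration, not a model of the actual Kaveri.
GAMMA, CP, LHV = 1.4, 1005.0, 43e6  # air gamma, cp (J/kg.K), fuel LHV (J/kg)
T_AMB = 288.15                      # sea-level static temperature, K

def ideal_turbojet(opr, tet_k, mass_flow):
    """Return (thrust_kN, sfc_kg_per_kN_h) for an ideal turbojet cycle."""
    ex = (GAMMA - 1.0) / GAMMA
    t3 = T_AMB * opr ** ex                        # compressor exit temp
    t5 = tet_k - (t3 - T_AMB)                     # turbine exit (work balance)
    p5_ratio = opr * (t5 / tet_k) ** (1.0 / ex)   # nozzle pressure ratio
    ve = (2.0 * CP * t5 * (1.0 - p5_ratio ** -ex)) ** 0.5  # exhaust velocity
    thrust_kn = mass_flow * ve / 1000.0
    fuel_flow = mass_flow * CP * (tet_k - t3) / LHV        # kg/s
    sfc = fuel_flow * 3600.0 / thrust_kn                   # kg/(kN.h)
    return thrust_kn, sfc

# Roughly Kaveri-class inputs: OPR ~21.5, TET ~1700 K, ~78 kg/s mass flow.
thrust, sfc = ideal_turbojet(21.5, 1700.0, 78.0)
print(f"Ideal thrust ~ {thrust:.1f} kN, ideal SFC ~ {sfc:.1f} kg/(kN.h)")
```

Playing with the three knobs shows the trade-offs discussed above: raising mass flow scales thrust linearly without touching SFC, while raising OPR at a fixed TET improves the ideal SFC.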

But that's hardly the point.

The point I was trying to make above concerned the design choices/technological roadmap chosen (due to the various factors discussed above) in developing Kaveri. That conservative approach meant that while the programme (and those design choices) was sufficient to meet the Kaveri performance specs, it provided almost no cushion against any scope creep in the performance parameters.
So when the LCA got overweight by 1.2 tons or so (it was originally envisaged as a 5.5-ton-class fighter), the Kaveri core couldn't offer any easy scaling-up of the performance parameters required to address this scope creep.
But then, how can the GTRE folks be directly faulted for being unable to address this scope creep - they contributed nothing to the LCA's weight gain, after all.

Even then, mind you, theoretically even now, if the mass flow can be increased, this Kaveri core can have an enhanced thrust rating - but all the other performance parameters like SFC would suffer badly, due to a further reduction in thermal efficiency (refer to that Excel, a couple of pages back). So such stunts are always avoided.

So are there any +ves to this program? Yes - too many to count, actually - but all of these +ves can very easily be brought to naught if the Kaveri program is abandoned at this point and not taken to its logical conclusion of completing a comprehensive flight-test program.
With this program, we have for the first time understood - and will continue to understand, as the next stages of the program are undertaken - the fluid dynamics, mechanical design, and material/manufacturing technological/engineering aspects of an 80 kN-class twin-spool turbofan engine (this is as cutting-edge as it ever gets) - something that has taken the industrialized world more than half a century to master, and knowledge that no nation will ever pass on, however friendly they are to us.

For example (just to make a point), the SPR of an HPC stage can be dramatically improved by carefully "introducing well-shaped" intra-stage shock waves (where is N^3 when you need him most?).
And given that the OPR achieved so far in Kaveri is about 21 or thereabouts (against a design goal of 27), this can be a very tempting option. But who is going to tell us the blade strength, blade geometry, aspect ratio, solidity etc. that help in initiating, sustaining and optimizing it? And even if they do, who is going to tell us the material composition and physical characteristics of the HPC-stage blades required to achieve it? Nobody, I repeat, absolutely nobody. We will have to try, fail, try again, fail again, try again, and learn it the hard way.

There’s absolutely no other way.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 01:19

The other important aspect is to ascertain how much of an impact the Kaveri R&D subprogram's failures are having on the parent program - i.e. the R&D, productionising and operationalizing of the LCA.

Any failure or slippage of a vital subprogram like Kaveri (described as the "Achilles heel" right at inception) can delay the parent LCA programme so much that it loses its relevance by the time it fructifies. So it's worthwhile to understand the rough timeline of the Kaveri development plan, its touch-points with the LCA program, and any impact (if at all) on the parent program.

So the question arises: is it true that because of the non-delivery of Kaveri (with the required TWR, if I may add) the LCA program has been delayed - i.e. could IOC/FOC have happened earlier had the Kaveri engine (with the required TWR) been made available a couple of years back, maybe around 2005-06?

Kaveri engine development is part of the overall LCA program - true - however, it was well understood that this aspect of the program would be singularly the riskiest, and it was thus well mitigated for (hint: the decision timeline for getting the GE F404, and the fact that absolutely no redesign/analysis was required to match the F404's fan with the air-inlet design - it was as if the whole airframe was built around the F404).
So while it is unfortunate that the Kaveri program itself didn't deliver, the overall program (from a platform-availability-to-the-end-user perspective) was not delayed by this unavailability.

Sid wrote:^^
Philip ji, the first TD flew in 2001, the PVs in 2003, all with the same 404 engine. Also, for the LSPs (as per your link) it was decided in 2003 to use the 404. They first flew in 2007. All indicating high compatibility with the aircraft design.

I agree with Maitya's deduction that from the beginning LCA was designed around 404 not Kaveri. Until or unless Kaveri is 404 clone and easily swappable.

Sidji, thanks. Actually the TDs got rolled out with the F404 much earlier than their first flights - details are in the following brief timeline:

1. LCA was always intended to fly with F404-GE-F2J3 for the FSED-I phase i.e TD 1/2 + PV 1/2

No sane program management team would risk flying an unproven airframe with an unproven engine.

The dimensional similarity between Kaveri and the F404 (e.g. diameter: F404 35 in vs Kaveri 35.8 in; weight: F404 1036 kg vs Kaveri 950-1000 kg) is no mere happenstance. The "desired" dimensions of the LCA power-plant were carefully chosen (in 1987-88) after proper due diligence on what would realistically be available in 1995-99 or thereabouts.
In layman's terms, the dimensions of the intended LCA power-plant (Kaveri) were based on those of forecast contemporary turbofan engines (that would be available in the late 90s) achieving similar thrust, SFC and a host of other parameters.
Of course, as is customary in such ab initio development programs, the intrinsic design parameters like SFC, OPR and TeT were set a notch above those available in contemporary engines during the developmental phase - more in sync with the forecast parameters of engines that would be contemporary during at least the first half of the intended platform's operational life (of two decades plus).

So not only did the rolled-out LCA TDs (TD-1 in 1995 and TD-2 in 1998) have the F404-GE-F2J3 installed, it was also on both PV-1 and PV-2 in the early 2000s.

All as planned.

2. Also, as per the original plan (of 1987-88), if the Kaveri sub-program succeeded, it was to be fitted onto the LCA for the FSED-II phase (PV-3 as the production-prototype variant, PV-4 as the naval variant, and PV-5 as the trainer variant) - and of course then roll on to the LSPs and SPs as well.

3. So while the first phase of FSED-I flight testing was ongoing (it was itself delayed by the 1998 sanctions and the resultant delays in developing the FBW system), the Kaveri program was progressing in parallel.
And the Kaveri program actually started off well - the core (Kabini) first ran in 1995, the first full prototype engine (Kaveri) began testing in 1996, and by 1998 all five prototypes (K1-K5/K6) were in testing.
But in 2003, while the LCA was merely two years into FSED-I flight testing, the indigenous HPT DS blades started giving up. So, as a last-ditch attempt to keep the engine program on track, DS blades were imported from Snecma (this import bit is purely IIRC and I need to cross-check it again).

4. But in mid-2004, Kaveri failed its high-altitude tests in Russia. It was then decided that Kaveri would not be ready for the FSED-II phases, and GE was awarded a US$105 million contract (in 2004) for 17 F404-IN20 engines for the LSPs and NPs (deliveries began in 2006).

5. The IAF ASR change (justified, in my opinion) happened in 2004-05 - this led to further delays in re-configuring the basic airframe (and thus further increased weight) to cater to it, so all FSED-II platforms were delayed to incorporate these changes.

6. PV-1/2/3 and LSP-1 (in 2007) all flew with the 2J3 version, and LSP-2 with the IN20 (in 2008), followed by the other LSPs.
All as per their revised schedule, dictated singularly by the program's flight-testing schedule - plus the delays from re-configuring the platform for the ASR change (and of course the prototype-building pace at HAL - aka "hand-built" platforms).

7. And also, due to pt. 5 above, the resultant weight increase made Kaveri completely unsuitable for the LCA Mk1. So, in 2007, an additional 24 F404-IN20 afterburning engines were ordered to power the first operational squadron of Tejas fighters - and of course, the Kaveri program was then officially delinked from the main LCA programme.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 01:23

But the above still leaves confusion about the sanctions and the availability of GE F404 engines. Well, it so happens that right after the LCA program sanction (1983), the engine feasibility study (1986) and the Kaveri program sanction (1989), GTRE (actually HAL, then) went ahead and bought the GE F404 engines required for the FSED-I phases (and some more).

This is corroborated in this 1989 NY Times article (by Sanjoy Hazarika, Special to the New York Times, published February 5, 1989): India Plans to Increase Arms Imports and Exports.

Officials also are discussing purchase of technology from France and the United States for components of a futuristic light combat aircraft planned by 1996. New Delhi already has bought several General Electric 404 engines for use on prototypes of this aircraft, opening the door to greater military cooperation more than 20 years after Washington ended arms sales to India.

Plus, the 1998 sanctions didn't have much of an impact either, as these engines were already integrated into TD-1 and TD-2 - and the LCA's internal design was a perfect match for the F404's dimensions (refer to my previous post on the dimensional similarity between Kaveri and the F404), so future integrations were also not much of an issue.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby UlanBatori » 18 Jan 2014 05:52

There’s absolutely no other way.

Quite true. So they need to either have a small team work very hard for 20 years, or have 30 teams work very hard for 2 years - sort of in Archimedes mode, as in: you don't deliver, your head is delivered.

Given that this is so crucial, why is the Indian defense establishment unable to articulate the need for this sort of effort? Or is the truth that you can't find 3 teams, let alone 30, that will actually do the work?

I have to say that in 30 years of trying, I have come to the conclusion that if there are really dedicated university-based aerospace research teams in India, I have not found them. Seems like there are a few people in DRDO and ISRO and BARC who care, but the rest seem to be just, well... I don't want to be insulting.

A couple of years ago, IISc was establishing a Pratt&Whitney Chair - did they find someone and is s(he) moving ahead to solve the engine problems?
Last edited by Indranil on 18 Jan 2014 10:26, edited 2 times in total.
Reason: No rants. No opinions. Shudh gyan please. Technical shortcomings matched to particular academic excellences would make an excellent post. In that respect are there specifics where the newly formed NACG, NMCC and DDMB teams be helpful?

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 13:25

So it's high time we try to decode where exactly Kaveri is lacking - not only in reaching its dry- and wet-thrust (actually TWR) levels, but also why scaling up to GE F414-level thrust appears to be such an insurmountable challenge.

But before doing that, we need to understand the current layout of the Kaveri wrt temperature and pressure distribution.

NRao wrote:Much water has flowed under the bridge and we are where we were when it all started?

Livefist :: August 2010 :: Kaveri's Compressor Blades + The Indian Single Crystal Effort

1) That presentation (to be clear, from : India's Defence Metallurgical Research Lab (DMRL) in Hyderabad) is from 2010 (there could be newer versions out there)
2) It does not claim that the Kaveri has SCB, it very well could have if that were a fact
3) It states, like a few posters here have, that India does have access to SC technology
But, here is an interesting diagram, we now have a basic idea of temp/pressure/alloys in a Kaveri (it may have changed):

[A Simple Excel based Turbofan analysing tool]
Another important aspect of discussing highly technical subjects (like a turbofan) from a layman's point of view is the inevitable "what-if-then" scenarios that prop up from time to time - e.g. it's natural to wonder: to increase the dry thrust by 10%, by how much should the mass-flow rate be increased (all other performance aspects being equal)?
These, though strictly theoretical exercises, do increase the understanding of a lot of aspects of the programme as a whole. The following is a humble attempt towards that:

srin wrote:Thank you maitya saar. One more question ... ?

srin, on a slightly different but relevant note, please appreciate that the problem with trying to analyze/explain turbojet workings strictly from a layman's PoV is this dependence on rudimentary mathematics to explain away complicated, multi-disciplinary, high-funda stuff that includes exotic CeeFDee concepts like 3D Navier-Stokes, Boltzmann equations, supersonic shock propagation, boundary conditions of the flow, etc.

But being a certified DOO, the method of overcoming this that appealed to me most was to create a simple Excel sheet that takes the basic gas-turbine parameters, derives the other dependent parameters from first principles, and then uses these derivations and results for analysing/explaining generic gas-turbine queries/concepts/issues.
Agreed, it's over-simplified (e.g. Wcmpr = Wtrbn :oops: ) and way too generic (e.g. both compressor and turbine stages are isentropic :shock: ), to the point of making some of the explanations even invalid, but it still helps in analysing/explaining.

Plus, of course, this allows people to play around with various permutations and combinations of the 3 basic turbojet parameters (viz. OPR, TET and mass flow) and start deriving and designing their own "paper turbojets". :mrgreen:

So here goes:
PS: Some Pir-review will be extremely welcome. :((

Method used to create this tool:
1. First take only 3 very basic input variables (viz. OPR, TeT and mass-flow values).
2. Plonk them in along with other constants, viz. engine entry temperature, specific-heat ratio (for a constant gas mass and volume), atmospheric pressure of the engine operating scenario, air-intake velocity, and specific heat capacity.
3. Using these, derive the other basic gas-turbine parameters (depicted as 1st-, 2nd- and 3rd-level derivations).
4. Use the calculation/derivation process to explain, from a layman's PoV, the interplay between the various turbojet concepts.

Note: Here I'd also like to mention that:
1. The green column was mainly for validating the concepts/formulas used herein, by comparing against published calculation results for a certain set of input values.
2. The lighter-blue column then uses these baselined calcs (and assumptions) to play around with the open-source (read: Wiki) Kaveri parameters.
3. The darker-blue column does the same as the lighter-blue one, but with the aspirational parameters of Kaveri, to see where the calcs go.
So with this tool in place, I can now attempt to analyse/explain some turbojet idiosyncrasies and hopefully arrive at some conclusions - but that's for another day.
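For readers without the Excel handy, the same simplified cycle can be sketched in a few lines of Python. To be clear, this is my own re-derivation under the tool's stated simplifications (isentropic compression/expansion, Wcmpr = Wtrbn, constant cp, sea-level static conditions, fuel mass ignored); the function and variable names are mine and the numbers are illustrative only, not Kaveri figures:

```python
import math

def paper_turbojet(opr, tet_k, mdot_kg_s,
                   t_in_k=288.15, p_in_pa=101_325.0,
                   v_in_m_s=0.0, gamma=1.4, cp_j_kg_k=1005.0):
    """Toy single-spool turbojet: thrust from just OPR, TeT and mass flow."""
    ex = (gamma - 1.0) / gamma
    # 1st level: isentropic compressor exit temperature and specific work
    t_comp_out = t_in_k * opr ** ex
    w_comp = cp_j_kg_k * (t_comp_out - t_in_k)          # J/kg
    # 2nd level: turbine drives the compressor (the tool's Wcmpr = Wtrbn)
    t_turb_out = tet_k - w_comp / cp_j_kg_k
    p_turb_out = (p_in_pa * opr) * (t_turb_out / tet_k) ** (1.0 / ex)
    # 3rd level: isentropic nozzle expansion back to ambient pressure
    t_exit = t_turb_out * (p_in_pa / p_turb_out) ** ex
    v_exit = math.sqrt(max(0.0, 2.0 * cp_j_kg_k * (t_turb_out - t_exit)))
    thrust_n = mdot_kg_s * (v_exit - v_in_m_s)          # momentum thrust only
    return thrust_n, v_exit

# Cause-effect play: thrust scales linearly with mass flow in this toy model,
# while raising OPR at a fixed TeT helps at first and then plateaus.
for opr in (10, 21, 27):
    thrust, _ = paper_turbojet(opr, 1700.0, 78.0)
    print(f"OPR {opr:2d}: ~{thrust / 1000:.0f} kN")
```

Playing with the three inputs reproduces the qualitative behaviour discussed in this thread: more mass flow buys thrust directly, while OPR at fixed TeT shows the rise-then-plateau shape.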

Disclaimer: This is a much more rudimentary and humble attempt than the legendary missile-performance-predictor tool that ArunSji created years back - so please avoid bringing that in for comparison (if you do, this stands no chance at all and will bite the dust in the first few seconds).

PS: Can somebody help this HTML-challenged foggy with how to type superscript/subscript and symbol-laden formulae in the forum posting software? :oops:
Last edited by maitya on 19 Jan 2014 09:38, edited 3 times in total.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 13:26

Now let's first look at the most difficult and strategic aspect of a turbofan - namely the turbines (more precisely the HPT, to be followed by the LPT).

There is quite a bit of information scattered across various threads that needs consolidating in one place in this thread (for archival and ready reference).

First things first: Shivji messed up :cry: his invaluable uploads from AI13 on Kaveri by moving the infoboards and images.
They need to be linked from here - so here they are (credit and copyright go completely to Shivji and Shivji alone).


In layman's terms, what do these images signify vis-à-vis GTRE's Kaveri/Kabini development project? Well, from a 60,000-ft level, these images are the first proof of single-crystal blade manufacturing (or should I just leave it at development only) in India.
But at a slightly lower level, we still need to understand, purely from a layman's PoV, why this SCB development/design capability is so important for us (for any indigenous military jet-engine development capability) - and, more importantly, where we stand today (from open-source info only) in achieving it.

So the remainder of this post is an attempt (must admit a bit audacious one) to detail this aspect from a purely lay-man pov.
So here goes:

First, the problem statement:
A typical turbine blade (an HPT blade, to be precise) in a turbofan rotates at around 10,000 rpm in, say, a 1600°C operating environment - which means the tip is moving at roughly 1200 km/h.
So for a blade of, say, 10 cm length at a radius of 0.5 m, that means on the order of 160 MPa of stress on the blade.
The blade material (in the HP and LP turbines) thus needs to handle that kind of mechanical stress, along with 1600°C thermal stress, in order to extract work from the gas stream and convert it into mechanical energy in the form of a rotating shaft that turns the upstream compressors and fans. Plus it needs adequate oxidation resistance and hot-corrosion resistance at those operating temperatures.
A very tall order.
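Those round numbers can be sanity-checked in a couple of lines. This is a rough order-of-magnitude sketch: the superalloy density, the hub/tip radii and the uniform (untapered) blade section are my assumptions, and real tapered, cooled blades see considerably lower root stress - the point is only that the loads sit in the hundreds-of-MPa range while the metal is glowing:

```python
import math

RPM = 10_000.0                        # rotational speed from the post
OMEGA = RPM * 2.0 * math.pi / 60.0    # angular speed, rad/s

# Tip speed for a blade whose tip sits at 0.5 m radius
r_tip, r_hub = 0.5, 0.4               # m: a 10 cm blade ending at 0.5 m (assumed)
v_tip = OMEGA * r_tip                 # m/s
print(f"tip speed ~ {v_tip * 3.6:.0f} km/h")

# Centrifugal stress at the root of a uniform-section blade:
#   sigma = rho * omega^2 * (r_tip^2 - r_hub^2) / 2
rho = 8500.0                          # kg/m^3, typical Ni superalloy (assumed)
sigma = rho * OMEGA**2 * (r_tip**2 - r_hub**2) / 2.0
print(f"root stress ~ {sigma / 1e6:.0f} MPa")
```

With these particular assumptions the tip speed and stress come out somewhat above the post's figures (the post's ~1200 km/h corresponds to a mean radius nearer 0.3 m), but either way: hundreds of MPa of sustained pull, at 1600°C gas temperature, for thousands of hours.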

Blade Metallurgy Fundamentals:
The blade metallurgy comes into picture there-in – pls refer to the images posted by Shivji from AI13.

In an equiaxed (aka conventional) blade, the grain boundaries run along both axes, longitudinal and transverse (i.e. both along the length and across it), so under thermal and mechanical stress, creep can propagate in any direction - but failures mostly happen radially, along the traditional weak link in the microstructure.

With directional solidification (DS), the transverse grain boundaries are removed and columnar microstructures are formed - basically, by carefully controlling the temperature gradient, a planar solidification front (across the cross-section of the foil) is first formed, and then the whole blade is solidified by moving this planar front longitudinally along the length.
This results in multiple oriented grain structures running parallel to the major axis (i.e. parallel to the length of the blade) with no transverse grain boundaries. But since these long grain boundaries are weak, boron and carbon (and hafnium and zirconium) must be added to strengthen them sufficiently against creep propagation.
This longitudinal (length-wise) alignment of the grain boundaries confers a substantial increase in creep life. DS provides as much as 10x the strain control/thermal-fatigue life of equiaxed blades - plus the impact strength is also higher (approx. 33%) compared to equiaxed blades.
<<Insert Image from hdd>>
This schematic shows the DS and single-crystal (SC) casting processes. In each case, molten superalloy is poured into a ceramic mold seated on a water-cooled copper chill. Grains nucleate on the chill surface and are grown in a columnar manner parallel to the unidirectional temperature gradient - achieved by slowly moving the mold away from the furnace.

Please notice that the SC blade-creation process is almost identical to the DS process, but with one very important difference: the presence of a grain selector (that helical structure at the bottom - a kind of filter which allows only one columnar microstructure to propagate longitudinally along the blade length).
As solidification proceeds, two to six grains enter the helix (or grain selector); some grains are physically blocked from entering, and the one or few that survive have their horizontal dendrites most favorably positioned to enter the helix. After one or two turns of the helix only one crystal survives - this single grain emerges from the top of the selector and fills the entire mold cavity.

The helix wire diameter typically varies from 0.3 to 0.5 cm (0.1 to 0.2 in). The single-crystal production process uses the same vacuum furnaces as columnar-grain (aka DS) castings, but the temperature gradients in the furnaces have been increased significantly, from about 36 to 72°C/cm.

Just to underline the importance of these blade-metallurgy technologies: a 25°C improvement in metal temperature capability corresponds to a three-fold improvement in blade life.

The SC myth:
But the SCB all by itself is not the be-all for achieving high TeT - in fact, 1st-gen SCBs fall short of the last-gen DS blades' approx. 1450°C TeT by about 100°C.
In fact, the initial work on an SC version of MAR-M200 was abandoned in the mid-60s, as it was found to provide no additional benefit over the DS version. Only in the late 70s did it catch on again, when SC blades could overshadow DS blades by removing the artificially added grain-boundary-strengthening elements. The DS version of MAR-M200 already provided very high levels of creep and thermal-fatigue strength, and the SC versions didn't really improve much on those two parameters (marginally, though) - what the SC version did was improve on all four vital parameters, viz. oxidation resistance and hot-corrosion resistance in addition to creep and thermal-fatigue strength.

Various SC gens:
1st-Gen SCBs: The above SCB research led to what is known as 1st-gen SC technology - characterized not by a great increase in creep and thermal-fatigue strength over the last DS blades (of the late 80s), but by improvement across all 4 parameters (oxidation resistance, hot-corrosion resistance, creep and thermal-fatigue strength).
So, from a layman's PoV, having 1st-gen SC blades will not help much in achieving a quantum jump beyond the 1450°C TeT range.

2nd- and 3rd-Gen SCBs:
This is where open-source information starts becoming scanty - but these generations are characterized by the proportion of rhenium (one of the rarest elements, with the third-highest melting point and the highest boiling point) in the alloy itself: 2nd gen (CMSX-4) about 3% (w/w), and 3rd gen (CMSX-10) approx. 6% or so.

However, extrapolating SCB generation vs TeT in the engine-development world of the advanced Western countries, it can safely be assumed that 3rd-gen SCBs are required for >1600°C TeT (an approx. 80-100°C jump for each SCB generation).

Current status - Kabini blades:
Now, where do we stand as far as DS and SCB technology is concerned?
What we already knew is that the investment casting for DS blades is in place, and the current Kaveri core (Kabini) achieves its dry thrust (100%) and wet thrust (approx. 75%) using them.
In concrete terms, this means a TeT of 1476°C with an OPR of 20-22. The compressor SPR is around 1.6, whereas the aim should have been more than 2 - what needs to be achieved is approx. 27 OPR and 1600°C TeT.

That would require developing:
1) SCBs for the HPT - the above picture (from Shivji at AI13) is not definitive proof, as we don't know which SCB generation it belongs to.
Please note that 3rd-gen SCBs are required for that level of TeT - and if the displayed SCB is 1st gen, it's of very little use, as the achieved TeT would be lower than what the DS blades in Kabini currently achieve.

2) The OPR increase in Kabini will necessitate some level of SCB for the later/last stages of the HP compressor as well.
The current Ti-based compressor alloys, though adequate for the temperatures in the first few HPC stages, will not suffice for the later/last stages (Ti oxidation resistance beyond 700°C is very poor) if the OPR is to go towards 27 or more. And to keep the weight of these later compressor stages in check, compressor-level blisk manufacturing needs to be developed as well.
3) TBCs for the HPT - no info on this from AI13.
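Point 2 can be roughly quantified: compressor delivery temperature climbs quickly with OPR, which is why the rear HPC stages outgrow Ti alloys. A sketch under my own assumptions (a polytropic efficiency of 0.88, borrowed from the isentropic-efficiency aim quoted later in this thread, and two illustrative inlet temperatures - sea-level static, and a rough 400 K to stand in for ram heating at speed):

```python
def hpc_exit_temp_c(opr, t_in_k, gamma=1.4, eta_poly=0.88):
    """Compressor delivery temperature (deg C) for a given overall PR.
    Polytropic (small-stage) relation: T_out/T_in = OPR^((g-1)/(g*eta))."""
    return t_in_k * opr ** ((gamma - 1.0) / (gamma * eta_poly)) - 273.15

for label, t_in in (("sea-level static", 288.15), ("with ram heating", 400.0)):
    for opr in (21, 27):
        print(f"{label}, OPR {opr}: ~{hpc_exit_temp_c(opr, t_in):.0f} deg C")
```

At sea-level static, 27 OPR already puts the delivery air near 570°C; add any realistic ram temperature rise at speed and the last stages sail well past the ~700°C comfort zone of Ti alloys - hence the need for better rear-stage materials (and blisks, to claw the weight back).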

Rhenium and current SCB tech development worldwide:
Continuing the discussion on the various SCB generations, please note that the world didn't really stop at Gen 3 (circa 2000) or thereabouts. Today the talk is of Gen 4 and 5 SCBs, where the Re composition (w/w %) is in the realm of 8-9%.
But please note: in graduating from 2nd-gen (3% Re) to 3rd-gen (5-6% Re) SCB metallurgy, a major hurdle needed overcoming. At Re levels around 5% (w/w) or more, the solid-solubility limit (wrt the matrix of the Re crystal structure) was exceeded. This meant that if such a blade were operated in high-temperature environments for prolonged durations, the excess Re could combine with other elements, reducing creep strength (note that all this effort of increasing the Re content was to increase creep strength in the first place).

Note: This is called the topologically close-packed (TCP) phase, wherein you have close-packed layers of atoms separated by relatively large interatomic distances, caused by larger atoms sandwiched between these layers. These plate-like structures negatively affect mechanical properties (ductility and creep rupture) and are damaging for two reasons:
i) they tie up gamma and gamma-prime strengthening elements in a non-useful form, thus reducing creep strength;
ii) they can act as crack initiators because of their brittle nature.

This was resolved by introducing Ru (which controls the TCP phase) into the Ni-based single-crystal superalloy and adjusting the composition ratios of the other component elements to optimal ranges, so as to provide the optimal lattice constant of the matrix (γ phase) and of the precipitate (γ' phase).
The 4th- and 5th-gen SCBs are thus characterized not only by higher Re levels of 8-9% (w/w) but also by a gradually increasing percentage of Ru (4th gen with 3% Ru and 5th gen with 4% Ru, w/w).

But that's not all.

This continuous addition of a heavy metal like Re (worsened by the presence of another heavy element, W) meant that the specific gravity (g/cm3) of the overall alloy went up, increasing the weight of the blade (and thus of the engine as well). This was overcome by reducing the W composition percentage to 1.5-2% (from a level of about 5-6%). So basically, the increase in Re % in the composition was balanced (not linearly, though) by a corresponding reduction in W.
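The direction of that Re-for-W trade can be illustrated with an inverse rule-of-mixtures density estimate. To be clear, the compositions below and the lumping of the entire balance as Ni are my own simplifications, purely to show the trend, not real alloy chemistries:

```python
# Elemental densities, g/cm^3 (balance of the alloy lumped as Ni)
RHO = {"Re": 21.02, "W": 19.25, "Ni": 8.90}

def alloy_density(wt_frac):
    """Inverse rule of mixtures: 1/rho = sum(w_i / rho_i), weight fractions."""
    return 1.0 / sum(w / RHO[el] for el, w in wt_frac.items())

gen2     = alloy_density({"Re": 0.03, "W": 0.06, "Ni": 0.91})  # ~3% Re, W untouched
gen4_raw = alloy_density({"Re": 0.09, "W": 0.06, "Ni": 0.85})  # more Re, W untouched
gen4     = alloy_density({"Re": 0.09, "W": 0.02, "Ni": 0.89})  # more Re, W cut to ~2%

print(f"{gen2:.2f} -> {gen4_raw:.2f} g/cm3 if W were kept; {gen4:.2f} with W reduced")
```

Tripling the Re still makes the alloy denser, but cutting W claws back a good chunk of the penalty - exactly the (non-linear, in reality) balancing act described above.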

And the research continues... :eek:
But I guess the bigger question is: where do we stand wrt indigenous mass manufacture of turbine blades (with such complex geometry etc.) in these various generations of SCB technology? :-?

Next will be another layman's attempt :P to look at the impact of the OPR shortfall on Kaveri.

The Superalloys: Fundamentals and Applications By Roger C. Reed
Last edited by maitya on 18 Jan 2014 23:33, edited 2 times in total.


Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 15:14

Now let's look at the compressor aspects of the Kaveri (in a two-part series)...

[Part 1]
Continuing from my previous post on the SCB and turbine-blade metallurgy issues wrt Kaveri, here is another very humble attempt (in 2 parts) of moi to explain, in layman's terms, what seems to be the weakness in the compressor part of the Kaveri/Kabini - which, I dare say, gets suppressed in the hullaballoo of SCB, TeT, BPR and what not.

Preamble: The hadiths of the grand-mullah Enqyoobuddin Gas-turbini, scattered as they were across various threads of BR, need to be completely internalized before any attempt at understanding the current Kaveri/Kabini imbroglio - so as per his (Piss BUH) sermon, the only things that really matter to injiin performance are:
1. The highest possible pressure before heat addition in the combustor (aka the OPR impact)
2. The highest possible temperature at the end of heat addition, post-combustor (aka the TeT impact)

Some such hadiths can be found here and here - and Vinaji's response to them here. Plus some more can be found here and here (and many other places - please use BR's search feature).

Before rambling on, let me just quote the grand-mullah Gas-turbini again, sermonizing that the two are completely, intrinsically linked to each other, and any attempt to focus on one while ignoring the other simply doesn't work.

The following schematic explains it succinctly:

As can be seen in the graph, thrust grows almost exponentially with increasing OPR, but then plateaus and starts falling off - and this plateauing and falling-off can be postponed (continuing to higher thrust values) by suitably increasing the TeT.
(Disclaimer: Please note that TR is actually the ratio of TeT to atmospheric temperature, but since atmospheric temperature can be assumed constant for a particular altitude, the graph would look similar if drawn against pure TeT as well - and for simplicity's sake, tempting though it is, let's not bring flat-rating etc. into this discussion.)

Now let's examine the OPR impact in a little more detail.
For a given core mass flow (aka the energy available from the upstream turbines), the pressure ratio achieved before entering the combustor (where the heat gets added) is directly linked to (a) the overall compressor design and (b) the design/manufacturing finesse of the compressor blades.

(a) Overall compressor design: There are 3 LP and 6 HP stages in Kabini - and it achieves 21-22 OPR with these 9 stages, which translates to an average stage pressure ratio (the pressure ratio achieved by each of these stages) of about 1.4. Contrast that against a contemporary design goal of 30 OPR with 6-7 stages (say the 27 atm of an F414 or 35 atm of an F135), which can be achieved with an average stage pressure ratio of 2.0.

So if the Kabini compressor stages were magically (by "jiiinn" tech © grand-mullah Gas-turbini-enqyoobuddin) made efficient enough to nudge performance towards 27-30 OPR (i.e. SPR values trending towards 1.9-2.0), it would either help increase the thrust levels (both dry and wet) or, if the designers chose to maintain the current thrust levels, allow a reduction in compressor stages and hence weight (Kaveri is 150 kg overweight, IIRC).

IOW, a contemporary design achieving 30 OPR with a 6-7-stage compressor would have meant a lighter Kaveri, still meeting the design thrust-to-weight ratio quite handsomely at the current design thrust levels.
Conversely, keeping the same 9-stage (3 LP + 6 HP) core design, if these stages were somehow made efficient enough to attain a contemporary SPR in the region of 1.9-2.0, the resultant thrust would be higher than the design level and, even at the current excess weight, would again attain the ballpark thrust-to-weight ratio.

Disclaimer 1: Both the above calc are very rudimentary, assuming a similar PR achievement for each of these compressor stages, which in practice wouldn’t be the case, and would be non-linear – so while a straight calc would have given a result of 5 stages, but given the non-linearity etc, I safely choose 2 more stages to mitigate it.
Disclaimer 2: This “theoretical” 2 stage reduction is a bit interesting. Since, say instead of trying to reduce the HPC stages alone, if an attempt is made for reducing 1 from LPC and 1 from HPC (as LPC stages are generally more heavier than the HPC ones), it will create another issue as there are a certain minm number of LPC stages required to “shape” the flow before it hits the HPC stages to make it perform optimally to generate the required OPR – need to think this thru.
Disclaimer 3: Increasing the OPR blindly, without a similarly calibrated increase in TeT and combustor design, may actually downgrade the work extraction (aka thrust levels; refer to the graph above) – but more on that point a little later.
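The rudimentary equal-SPR calc from the disclaimers above can be sketched as follows (strictly a back-of-envelope sketch under the same equal-pressure-ratio-per-stage assumption that Disclaimer 1 flags as unrealistic):

```python
import math

def avg_stage_pr(opr, n_stages):
    """Average stage pressure ratio if the overall pressure ratio (OPR)
    were split equally across n_stages, i.e. OPR = SPR ** n_stages."""
    return opr ** (1.0 / n_stages)

def stages_needed(opr, spr):
    """Stage count needed to reach a target OPR at a uniform per-stage SPR."""
    return math.ceil(math.log(opr) / math.log(spr))

# Kabini: 9 stages for ~22 OPR -> average SPR of about 1.4
print(round(avg_stage_pr(22, 9), 2))   # ~1.41
# The "straight calc" of Disclaimer 1: 30 OPR at SPR 2.0 needs only 5 stages
print(stages_needed(30, 2.0))          # 5
```

So the straight calc indeed gives 5 stages for a 30-OPR/2.0-SPR design, before padding with the 2 extra stages to cover the non-linearity.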

But none of this happened in Kaveri, and it continues to struggle along with a ~1.4 SPR – not because the GTRE folks didn't aim for it (they actually aimed for an SPR of 1.6 with 88% isentropic efficiency), but because there were deficiencies in the compressor blade design and in the manufacturing finesse (detailed in Part 2).


BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 15:16

[Part 2]
(b) Compressor Blade Design/Manufacturing aspects: This brings out the other elephant in the room – design and manufacturing aspects of the compressor stages.

Now, if we look back at the history of compressor stage evolution, there are mainly 3 generations:
1st Gen, Subsonic Level – Achieved in the 1960s: multi-stage compressors with tip Mach numbers < 0.8, with achievable compression ratios limited to about 17.

2nd Gen, Basic Transonic Level – Achieved towards the end of the 70s (and early 80s), with tip Mach numbers between 1.0 and 1.1, achieving SPRs of around 1.4-1.5. These were also characterised by slightly improved blade aspect ratios and double-circular-arc blade profiles (more on this aspect a little later). The average temperature rise per stage also doubled (from 21K to 42K per stage) due to this increase in blade speed, which in turn required improvements in compressor blade metallurgy. Most probably the Kaveri compressors are of this gen.

3rd Gen, Adv Transonic Level with wide-chord blade design – Achieved towards the end of the 80s (and early 90s), with tip Mach numbers trending towards 1.6, achieving SPRs of around 2.0. But the definitive characteristic of this generation is the wide-chord blade design (blade aspect ratios around 1) and multi-circular-arc blade profiles. Here again further improvement in compressor blade metallurgy was required, as the average temperature rise per stage went up to 65-75K. It's claimed that the compressors of the EJ200, F119 etc. (with OPR of 30 achieved in only 6-7 stages) are of this gen.
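As a cross-check on the per-stage temperature-rise figures quoted above, the standard isentropic stage relation reproduces them reasonably well (a sketch only; the 288 K inlet temperature is an illustrative sea-level assumption, and the 88% isentropic efficiency is the GTRE design aim mentioned earlier in this thread):

```python
GAMMA = 1.4  # ratio of specific heats for air

def stage_temp_rise(spr, t_inlet=288.0, eta=0.88):
    """Stagnation temperature rise (K) a single compressor stage needs
    to deliver pressure ratio `spr` at isentropic efficiency `eta`:
    dT = T0 * (spr**((g-1)/g) - 1) / eta."""
    return t_inlet * (spr ** ((GAMMA - 1.0) / GAMMA) - 1.0) / eta

print(round(stage_temp_rise(1.45)))  # ~37 K, near the 2nd-gen ~42 K figure
print(round(stage_temp_rise(2.0)))   # ~72 K, inside the 3rd-gen 65-75 K band
```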

So, as can be seen from the above classification, improving the efficiency (and the SPR) through compressor blade design (and manufacturing) hinges on the following 4 basic dimensions:
1) Compressor Blade Speed
2) Basic Blade Design towards low aspect ratio (wider chord design) – helps with superior aerodynamics in handling the flow between the blades and the side walls, and also reduces the axial pressure gradient along the side walls
3) Blade Geometry – from conventional subsonic aerofoil designs to double-circular-arc to multi-circular-arc profiles
4) Compressor Blade Strength and Blade Loading – to handle higher speeds and the rise in operating temperature

So let's look more closely at how each of these factors impacts the SPR (and so, eventually, the OPR as well). Pls refer to the following schematic:

1 and 2) Effect of Blade Speed and Blade Aspect Ratio: Not only does the above schematic amply demonstrate that the stage pressure ratio can be increased by increasing the blade speed, it also depicts that the rate of SPR increase (from the gradient comparison of the graphs) goes up quite significantly with an increase in another parameter, called the work coefficient. Furthermore, it depicts that a blade speed of around 400 m/s (approx. Mach 1.3) combined with a work coefficient of 0.8-1.0 takes the SPR value up to the 1.9-2.0 mark.
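The 400 m/s plus work-coefficient-0.8-1.0 combination can be roughly reproduced from the stage-work relation (a sketch; I'm assuming here the convention where stage work is dh0 = psi * U^2 / 2 – work-coefficient definitions vary between textbooks – plus the same illustrative 288 K inlet and 88% isentropic efficiency as before):

```python
GAMMA = 1.4
CP = 1005.0  # J/(kg K), specific heat of air at constant pressure

def stage_pr(u_tip, psi, t_inlet=288.0, eta=0.88):
    """Stage pressure ratio from blade speed u_tip (m/s) and work
    coefficient psi, taking stage work as dh0 = psi * u_tip**2 / 2."""
    dt0 = psi * u_tip ** 2 / 2.0 / CP          # stagnation temp rise (K)
    return (1.0 + eta * dt0 / t_inlet) ** (GAMMA / (GAMMA - 1.0))

print(round(stage_pr(400.0, 0.9), 2))  # ~2.0, matching the schematic
```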

But this work coefficient of the blades is dependent on the aspect ratio of the compressor blades – pls refer to the following schematic again:

As can be clearly seen, with the reduction of the blade aspect ratio the work coefficient increases, which, for a given blade speed, helps increase the SPR (and thus the OPR as well; refer to the previous graph too).

So far so good – i.e. the SPR (and thus the OPR as well) of a compressor can be increased by having low-aspect-ratio (aka wide-chord) blades and by somehow moving towards higher blade speeds (i.e. from subsonic to transonic to high-transonic levels).

3) Effect of Blade Geometry: But an interesting problem arises while trying to increase the blade speed to transonic levels – the blade geometry starts playing spoilsport. A conventional, highly cambered subsonic aerofoil blade design with high suction-surface curvature starts producing unacceptably high shock losses and a rapid fall-off in SPR.
Furthermore, this gets compounded with more widely spaced blade stages (captured by a term called solidity – the ratio of the blade chord length to the spacing between adjacent blades in the same row).

So this was resolved with different blade aerofoil designs – first the double-circular-arc profile and then the multi-circular-arc profile – plus, by spacing the compressor blades optimally, the solidity was also increased in conjunction. Pls refer to the schematic below:

As can be seen, with the decrease in suction-surface curvature brought about by the introduction of double-circular-arc (yellow graph) and multi-circular-arc (violet graph) blade profile designs, the efficiency fall can be arrested quite dramatically up to certain transonic blade speeds (before the fall becomes too steep) – e.g. notice the difference in efficiency levels at M1.3 between the subsonic aerofoil, double-circular-arc and multi-circular-arc profile designs.
It is reported that there are certain multi-circular-arc blade aerofoil designs with zero curvature on the suction surface (further limiting the supersonic expansion ahead of the shock, and hence the shock intensity and its inherent losses), allowing blade speeds to go up to M1.6.

4) Compressor Blade Strength and Blade Loading: Increasing blade tip speeds and lowering blade aspect ratios comes with a resultant increase in centrifugal force, implying (quite a bit of) mechanical stress on the blade root and blade-disc fixtures. Furthermore, increasing the blade tip speed will raise the stage temperature as well (as shown in the graph at the top). Also, low-aspect-ratio blades have additional issues of plate vibrations, which can not only create critical blade resonances but also potentially couple vibrational excitations across several stages.
(ps: IIRC, Kaveri had to deal with 3rd-order resonance issues which got identified only when it was tested in Germany at a very late stage, I think – not sure.)
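To get a feel for why higher tip speeds demand so much more blade strength: the centrifugal stress at the root of a simplified, untapered blade scales with the square of the tip speed. A rough sketch – the nickel-alloy density and hub-to-tip ratio below are illustrative assumptions, and real blades are tapered precisely to bring this figure down:

```python
def root_stress_mpa(u_tip, hub_tip_ratio=0.5, rho=8200.0):
    """Centrifugal stress (MPa) at the root of a uniform (untapered)
    rotating blade: sigma = rho * u_tip**2 * (1 - hub_tip_ratio**2) / 2."""
    return rho * u_tip ** 2 * (1.0 - hub_tip_ratio ** 2) / 2.0 / 1e6

# Going from ~340 m/s (~M1.0 tip) to ~400 m/s (~M1.3 tip):
print(round(root_stress_mpa(340.0)))  # ~355 MPa
print(round(root_stress_mpa(400.0)))  # ~492 MPa
```

A ~18% increase in tip speed raises the root stress by ~38% – on top of the higher stage temperature, which erodes the material's allowable stress at the same time.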

All of this requires additional blade strength and enhanced temperature tolerance – pls refer to the following schematic for the impact on compressor blade loading, blade speed, OPR and number of compressor stages – which basically means the higher the blade strength, the higher the blade speed it can accommodate, increasing the OPR through fewer stages.

This then brings us firmly into the territory of blisk manufacturing, higher-thermal-loading metallurgy, high-speed milling, electro-chemical machining, linear friction welding etc.

Epilogue: So what does all this mean wrt Kaveri/Kabini?
To increase the OPR from 21-22 to a more contemporary 27-30 would require progress on several cutting-edge compressor technologies, as well as improvements in TeT etc. – and, more importantly, the compressor shortcomings shouldn't get overshadowed by the incessant wailing about BPR, SCB and low TeT (though those are equally, if not more, important and inherently linked aspects, no doubt).

The compressor level improvement in Kaveri/Kabini required, IMVVHO, are as follows:
1) Graduating to a high transonic blade speed regime of say 1.5-1.6M

2) Low aspect ratio (aka wide chord) blade design and manufacturing

3) Manufacturing capability (at mass level) for multi-circular-arc profile compressor blades (just drawing a design of it on paper won't do)

4) To cater to the above three, developing/acquiring manufacturing capability for increased blade strength and loading – via blisk manufacturing, higher-thermal-loading metallurgy, high-speed milling, electro-chemical machining, linear friction welding etc.

And last but not least, have proper CeeFDee capability on 3-D NS flows and other such good and exotic stuff – to test and analyse compressors down to detailed inter-stage data out of the rotating system, in order to understand the aerodynamic and vibrational behaviours.
Essentially, as the grand-mullah Enqyoobuddin Gas-turbini had sermoned many moons back, get hordes of DOO and PIGS onto this with freedom of destroying a couple of cores, with harsh timelines and supervision and see the results – PissBUH!!

BRF Oldie
Posts: 6380
Joined: 11 Aug 2016 06:14

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby UlanBatori » 18 Jan 2014 20:36

AoA! I deleted my prior post because I felt it had nothing specifically technical to contribute to the stated purpose of this thread. Didn't see that there is a parallel thread on the Kaveri, where much of what I stated, is already stated. Thanks, and will be happy to discuss any specifics.

I will repeat that I see no alternative to actually developing engine technology inside India - I mean through original R&D which means many engine-years' worth of hard failure experience. It is one thing to expect Indian scientists/engineers to "leap-frog" because we are all so agile on our hindlegs and SDRE, but you cannot "leap-frog" literally millions of engine-years of accumulated experience of the competition.

So of those 30-odd teams, some at least should focus SERIOUSLY on the NextGen engine. The specs should be (assuming a prototype inside 3 years): T/W > 13; turbine inlet temp > 2100K; stage pressure ratio > 2.2(?); counter-rotating stages (no stators) for all compressor, fan and turbine spools; and the ability to make wide-chord, small-hub blades for transonic conditions. Let the academics tell us what magic is needed, and go and do the magic.

I also said to check the Sikorsky X-2 (a Kamov helo did this years ago) to see counter-rotating rotors. Lots of tough dynamics and aerodynamics issues to be tackled. Find Indian profs to find the answers to these inside India.

Will ask the question about the P&W Chair elsewhere.
Last edited by UlanBatori on 18 Jan 2014 22:04, edited 1 time in total.

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 21:31

Now that we have briefly seen the turbine and compressor shortcomings of the Kaveri, one pertinent question arises: why did GTRE choose a conservative approach (refer to the Kaveri design-choice rationale post above) towards its development – something brought out precisely by Avarachan-ji.

Avarachan wrote:Maitya, thank you for these very informative posts.

I have a question for you. Given that the Kaveri is to be used in IUSAV (the unmanned strike platform), do you think that the conservative design choice was the right one? After all, because GTRE went with the conservative choice, at least India will soon have an engine it can use for other purposes, apart from the Tejas.

Avarachanji, reg the conservative engine technology roadmap for Kaveri selected by the GTRE folks – well, let me put it this way.

There was simply no other choice – the 2nd (1a-2b) and 3rd (1b-2a) quadrant choices couldn't have been taken, given the late-80s status (when this decision was being taken) of:

1) the prevalent materials R&D and, more importantly, the indigenously available manufacturing and engineering base to translate these designs into manufactured parts/products etc., and
2) the low level of experience with the mechanical and CFD aspects of aero-engine design (from non-flying testbeds like the GTX-37U and UB etc.)

[Turbine Blade vs Disc Characteristics]
To give a short example of 1):
How many times here on BR have we heard that if only we could have mastered blisk-manufacturing technology, most (if not all) issues of Kaveri would be sorted. What never gets discussed or thought through is what it really means in terms of the constraints one is trying to overcome.
I'll not go into too much detail (will reserve that for the materials write-up, if it ever gets finished :(( – most likely it won't, just like my engine-design write-ups, all lying around at the 50-60% completion level :oops:); let me just try to bring out a small dichotomy (in the context of blisk manufacturing).

Let's look at an HPT stage of a turbine – the blades will be required to withstand 1600-1700 deg C temperatures and tip speeds of about M1.5. But what about the disc? The temperature there will seldom reach beyond 800-900 deg C, and the speed maybe M0.9.

Big difference, isn't it? But that's not all.
Look at the picture of a military turbojet/turbofan HPT, for example that of an F-110, as shown below:


Now if you compare the mass of the disc with that of the blades, it's obvious that the disc is an order of magnitude heavier than the blades.
So between the disc and the blades of the same HPT stage, you have the following:

    Factor ------------------ Blade ---------- Disc
    Operating Temp ---------- 1700 deg C ----- 850 deg C
    Speed ------------------- M1.5 ----------- M0.9
    Mass -------------------- (baseline) ----- easily 10-12x the blades (cumulative)
    Cycle Fatigue ----------- High ----------- Low
(will explain this in a later post)

So the mechanical stress due to good old centrifugal force on the disc is many times more than on the blades – but the operating temperature regime is also very different.

So for the disc you would ideally be looking for a material with very good tensile ductility and high tensile yield and ultimate strength (and LCF life too – more on this some other day) – while the operating temperature environment gives you a lot of leeway (compared to blades), so much so that you can get away with even equiaxed-cast materials.
But for the blades the requirements are high thermal-mechanical fatigue (TMF) resistance (aka higher melting points), creep-rupture strength and HCF life – with a lot of leeway on mechanical-strength aspects like tensile yield/strength and ductility.

[Usage of Casted vs Wrought Alloys vis-a-vis Turbine blisks]
So for the blades you are constrained to cast materials that have directionally oriented grains parallel to the airfoil axis (aka DS or SC) – but that's not all: due to the high-temperature operating environment, they need a good ability to accept TBC as well (plus higher oxidation-resistance properties – again, more on this another day). But the operative word is "cast" alloys.

And there-in lies the problem.

Cast alloys generally have lower tensile properties, worse ductility and less homogeneity, making them unsuitable for a disc application – to get those kinds of mechanical properties you need wrought alloys.

So, coming back to the topic of blisk manufacturing – you are basically asking for an "integral" manufacturing process where the disc part is made of a wrought superalloy while the blade part is made from a cast superalloy.

This should give us an idea of how challenging the material/manufacturing R&D and engineering aspects can be.

I'll later bring out a suitable example from 2 as well.

But coming back to your original point about design choices – the above example should be sufficient to demonstrate how few options the GTRE folks would have had then, given our experience with integral casting etc., other than to reject the blisked-turbine-stage development route. Instead, it was less risky to accept the then-prevalent bolted-blades-on-disc philosophy and live with the resultant compromise on the TeT itself. And that's what the GTRE folks did.

Pls note the above example is not a "virtual" one – something quite similar actually happened with Kaveri, where the in-house-developed HPT disc had to be rejected (when the TeT was increased, as the blades were found able to accept a few tens of degrees C more TeT) and the HPT discs imported (from the USA), while the HPT blades remained the indigenous DS ones. But that story is for another day. :mrgreen:

[Kaveri LPT Disc Saga]
merlin wrote:maitya, are you sure about the LPT disk from the US? My understanding is that hot section parts for the Kaveri come from Snecma.

merlinji, IIRC the Kaveri/Kabini LPT discs were forged from cast-wrought nickel-base superalloy UDIMET-720Li billets and bars imported from Special Metals Corp, USA (and before anybody gets "concerned" about sanctions and imports etc., note they have a subsidiary in B'lore – plus subsidiaries in France, Germany, UK, HKG, China, S'pore etc.).

Related Factoid 1: Actually, the earlier versions of the Kaveri engine used indigenous INCO-718-based LPT discs ... but it was found during testing that a thermal gradient of approx. 300 deg C resulted in a doubling of the thermal stress and a 40-45% reduction in LCF life. GTRE/DMRL could partially resolve the LCF-life reduction, IIRC limiting it to around 30% or thereabouts, but the increased thermal-stress issue remained. So they chose to import the UDIMET-720Li billets and bars and forge the LPT discs at MIDHANI using powder-metallurgy-based hot isothermal processing.
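For a sense of scale on that thermal-stress point: in a fully constrained section, thermal stress goes roughly as sigma = E*alpha*dT/(1-nu), i.e. linearly with the thermal gradient – so a doubled gradient means doubled stress, consistent with what was observed. A very rough sketch; the property values below are generic nickel-base-alloy assumptions, and a real disc is only partially constrained:

```python
def thermal_stress_mpa(dt, e_gpa=200.0, alpha=13e-6, nu=0.3):
    """Thermal stress (MPa) in a fully constrained section subjected to a
    temperature difference dt (K): sigma = E * alpha * dt / (1 - nu)."""
    return e_gpa * 1e3 * alpha * dt / (1.0 - nu)

# Doubling the gradient doubles the stress:
print(round(thermal_stress_mpa(150.0)))  # ~557 MPa
print(round(thermal_stress_mpa(300.0)))  # ~1114 MPa
```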

Related Factoid 2: Alloy 720Li (Li = 'low interstitial') evolved from alloy 720, originally developed for blade applications in land-based gas turbines, and is thus freely available all over the world. However, to achieve the strength (because of the high rotational speeds and associated centrifugal stresses, disc alloys must possess good elevated-temperature tensile strength) and LCF resistance desired for disc applications, the processing and heat treatment of alloy 720Li differ from those adopted for alloy 720 in creep-resistant blade applications.
Last edited by maitya on 19 Jan 2014 15:53, edited 3 times in total.

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 18 Jan 2014 22:25

It is completely justifiable to get frustrated at the non-delivery of the 2-decade+ old Kaveri program – but while criticising GTRE for this failure, we need to contrast it with what they have achieved so far, and against what kind of constraints. Here goes ...

[Turbofan Gens vis-a-vis Kaveri HPT technology choice]
JTull wrote:...
As far as engines are concerned, we've all known how inadequate efforts towards Kaveri have been and but I was sick of watching everyone after AI-13, being busy back-slapping over some single-crystal blades that aren't of requisite caliber to be used. More years are being wasted at GTRE in coming up products from yesteryears (and being completely clueless in the process), while the world is pursuing and achieving next generation technologies. We're only good for a begging bowl, hoping one of the other engine makers will just give us the required tech by way of consulting contract.

JTullji, while this angst is somewhat justified vis-à-vis Kaveri (and GTRE), the bolded part above (about them being "products from yesteryears" and "completely clueless" etc) is not true.

IMO a better and more balanced perspective is required while dissecting the Kaveri development process (there is a series of posts in that thread, both before and after that post-series, which delves into this aspect).
And to start the ball rolling, let me make a very bold stmt – it doesn't matter what gen of technology is being pursued as long as the desired objective is met – and one can only aim for a tech gen provided a baseline level of the required capability and experience is available.

But before we go there, let's try and understand the various turbofan gens wrt turbine (specifically HPT) blade material (alloy) and manufacturing (casting) technology evolution. The following schematic brings that out quite succinctly:

As you may have noticed, TeT values gradually increased with improvements in blade alloy composition and casting technologies (equiaxed -> DS -> SC). This increase has directly driven not only the TWR levels of various turbofans but also their intrinsic efficiency (better SFC etc.).
Also notice that in the mid-to-late 80s and early 90s the contemporary casting technology was DS (e.g. in the M53, RD-33 etc.) – the late 90s and the first decade of the 2000s saw SC blades getting perfected, productionised and finally becoming operational (e.g. GE F414, M88-2 etc.).

So for Kaveri, conceptualised in the mid-80s, the choice of DS blades for the HPT was all but natural given the contemporary turbine technology in use worldwide – with an aim of technological progression to SC blades as the natural evolution path in the 2000s etc.

In fact, a contrarian view can be that we aimed at too high a tech gen with Kaveri (given the technological baseline and engineering capability, both in gas-turbine CFD and materials tech) and failed to achieve it by a whisker. An ab-initio F-404-gen engine (so 3rd-gen turbojet – maybe 1st-gen military turbofan) is just too high to aim for, given the indigenous turbojet/turbofan technological and material-engineering capability that we possessed.

But we had to start somewhere, isn't it? And any ab-initio tech development is inherently as risky as it ever gets – Kaveri is no exception.
And it's always better to aim higher and take on the long and painful grind than to just go for what is available (or import) and maybe make good press out of it – a rhetorical question: would the IAF ever have accepted a GTX-37UB-level engine (required thrust achieved, but very poor SFC and weight) for the LCA?

Moreover, like all technological advancements, the next gen of technology is achieved mostly by incremental advances over the previous gen's technological levels. For example, you will notice that the 4th-gen F414 level (aka F414-EPE or F414-GE-INS6 for LCA Mk-II) is based on 3rd-gen F404 tech. Similarly, on the material-technology front, notice the difference between 3rd-gen (CMSX-4) and 4th-gen (CMSX-10) SCBs (except for an increase in Re content, and the corresponding rebalancing of the alloy's other heavy elements, there is not much difference between the two gens) – some details of which can be found here.

So, IMVHO, if we pursue the Kaveri development program to its logical conclusion of the full range of FTB tests etc., we'd have mastered 3rd-gen turbojet (maybe 1st-gen military turbofan) technology end-to-end.
This would enable us to baseline the CFD design aspects of the core, its thermodynamic design and interplay, and the mechanical and manufacturing technology of the compressor/combustor/turbine components.
Based on that, 4th-gen engine development etc. can be taken up.

And regarding your point about "back-slapping over some single-crystal blades that aren't of requisite caliber to be used" – well, that's a huge improvement on its own, as basically until this year's AI we were not even sure if we were able to manufacture any SCBs at all (forget about 2nd, 3rd, 4th-gen SCBs etc.).

Pls refer to this talk by GTRE director, T. Mohan Rao, in AI09 here – notice, how he points out the basic future thrust areas that need working on … and I quote,
a. BLISK - integrated single Blade and Disk
b. Single Crystal blades - he categorically said - We do not have that tech at all.
c. Thermal Barrier Coatings - TBC - very critical for high temp engine operation.

That was back in 2009, and in 2013 we have a photo of an indigenously developed SC blade – yes, we don't yet know which gen it is, or what kind of thermal fatigue, creep resistance and mechanical stress it can withstand – but isn't that huge progress, even if we conservatively assume it to be a 2nd-gen SCB?
Last edited by maitya on 19 Jan 2014 15:53, edited 2 times in total.

BRF Oldie
Posts: 6380
Joined: 11 Aug 2016 06:14

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby UlanBatori » 18 Jan 2014 22:45

Maitya: What is the point of having the massive disks to hold the HP turbine blades? Is it only strength or is it the flywheel effect? IOW, why hasn't someone replaced that TFTA metal disc with a much lighter SDRE structure based on, say, concentric rings with spokes? Seems like they could cut the engine mass a lot?

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 19 Jan 2014 16:21

Having understood somewhat where we stand on the Kaveri program, it may be worthwhile to look into which direction the program needs to take henceforth. But first, the role of various foreign engine design houses and consultants needs a brief examination.

[Role of foreign engine design houses and Consultancy]
Karan M wrote:Maitya,
Can you point out whether a collaboration with MTU & other independent designers/manufacturer consortiums is possible? Snecma seems out (a deal could not be agreed on after so many days). The Russians are also not #1 because despite their political plus points, the AL55I case shows how unreliable they can be from time to time, besides which all their tier 1 resources are already on AL31 derivatives & the PAKFA engine programs.

The only approach available seems to be the go it alone but with multiple partners sort of thing. Not ideal, but perhaps, if only we can coordinate things, might get us better tech & knowledge versus taking somebody else's design for local assembly.

KaranMji, first the disclaimer: What I'm posting below, is more-or-less from memory, so may have got some of the data-points incorrect. Too lazy to cross-check my half-finished Kaveri write-up (and the references, there-in) as well. But the general contour of the point that I'm trying to make stands. So pls take it strictly FWIW.

Not sure how much approaching MTU would help at the current stage of maturity Kaveri is in – unless MTU (or, for that matter, any other established turbofan design house) agrees to part with its crown jewels wrt:
1. Compressor-stage manufacturing technology vis-à-vis 30+ OPR achievement in a 6-stage layout
2. 1650-1700 deg C TeT turbine blade/disc manufacturing capability at industrial scale

In fact, since you mention MTU, pls note that the GTRE folks already did so and reached out to MTU right at the start of the program. And they did so with almost all established engine houses across the globe – for consultancy, peer validation and, many times, to import critical components so that the engine itself could be progressed (with these components later replaced by indigenous ones, a standard practice for this kind of ab-initio major programme anywhere in the world).
Plus, this being an ab-initio programme, multiple agencies were sometimes approached for the same thing.

The following list is maybe a bit dated (around 2002-04), but to get a fair idea of the spread of help requested by GTRE, here's a list of agencies approached for various consultative help:

MTU, Germany
1. Over speed & Burst Margin Test on K6HP Turbine Rotor Assembly
2. Over speed & Burst Margin Test on K6LP Turbine Rotor Assembly

Via "Rosoboronexport"
1. Exploratory Altitude Testing of Kaveri Engine
2. Exploratory Altitude Testing of Kaveri Engine
3. Fan Casing Containment Test for Kaveri Engine
4. Testing for Main Combustor at Sea level and Altitude conditions

Via Gromov Flight Research Institute
1. Technical services for Kaveri Engine

Test Devices INC, USA
1. Over speed & Burst margin test on K6HPC Rotor assembly
2. Over speed & burst margin test on K6 Fan Rotor assembly
3. Design, Analysis, Testing & Optimization of Damper for the LP Turbine Rotor Blade


Applied Technology Consultants Ltd., UK
1. Dynamic Analysis under Blade off condition of Kaveri engine
2. Consultancy for Reheat System Design Review/Audit
3. Consultancy for HP Turbine Risk Analysis/Review
4. Consultancy for Weight Reduction Study
5. Consultancy for Thermal and Hydraulic Modelling of Kaveri Lubrication System
6. Consultancy for Kaveri Fan Aerodynamic and Mechanical Design Review/Audit and enhancement.
7. Consultancy for Critical Design Review of the Kaveri Engine Project
8. Consultancy for Accelerated Simulated Mission Endurance Test (ASMET) Cycle and test schedule definition and development programme integration.
9. Consultancy for Kaveri Integrated Test, Development and Procurement programmes
10. Consultancy for Kaveri PFRT Fan Aerodynamic Design 3D Blade-to-Blade and Viscous Analysis
11. Consultancy for Kaveri K4 Build 06 HP Compressor Blade Stage 1 Failure Investigation and Follow-up
12. Consultancy for Review and Proposal for the Resolution of Vibration Problems in the Kaveri Engine
13. Consultancy for Design Review and Audit of High Temperature High Pressure Heat Transfer Rig
14. Consultancy for Kaveri K5 & K8 Compressor Blade Stage I vibration & Rub Investigation Problems in the Kaveri Engine

But the point I'm trying to make is: if you look closely at this list, most of this help was in the form of consultative support – design validation, issue/failure confirmation and resolution approaches – plus, of course, help in testing various aspects of a turbofan.

Rarely will you find help in the form of a major turbofan component/system being supplied.

In fact, it's not that GTRE was unaware of the challenges in compressor and turbine design and manufacturing in an ab-initio programme like this. But they were also aware that items like the turbine (both HPT and LPT) blades and discs – the very heart (and thus the most difficult and riskiest aspect) of a turbofan – would not be available from anybody. So the whole focus was on those, while they wanted the relatively non-strategic compressor stages (at least the fan stages) to be imported from Germany (IIRC from MTU) – i.e. they wanted to design the fan blades and have MTU manufacture them.

The end result was that this was first denied with the reasoning that GTRE's design was just too complicated for them to manufacture – and when GTRE simplified the blade design (to a less efficient one), the reasoning became that the volumes being asked for were commercially non-viable to manufacture. GTRE was then forced to manufacture it itself, and most probably had to settle for a further sub-optimal compressor design, as the design had to match the indigenous manufacturing capability then available with MIDHANI et al.

So yes, the moral of the story is: collaborate we must, but when it comes to cutting-edge turbofan-core design and manufacturing technology capability building, we were basically on our own. The irony is, had GTRE aimed for a lesser-gen core, maybe, just maybe, this collaboration story would have been different.

And the more ironic aspect is that when the (currently achieved) dry thrust was in danger due to the indigenous LPT blade and LPT disc integration challenges (the disc was simply giving up, while the blades were able to cope with the RPM at around 1400 deg C TeT), it was the USA who supplied the LPT disc and saved the day.

[Future Program roadmap]
Sagar G wrote:...
Plan B is to reduce the weight of Tejas on a fast track basis and get Kaveri meet it's damn design values. They are trying to create new alloys to reduce weight but there is no public indication of the same so I guess it's not in the priority list which should have been the case.

SagarG-ji, what is required urgently is the reduction of Kaveri's weight to its design goal (950 kg – it's currently 120-150 kg, or 12-15%, overweight) and the achievement of the 76 N/kg TWR as originally envisioned. LCA airframe weight reduction etc., if it happens, is well and good, but Kaveri in its present form will not make it to the LCA – maybe, optimistically, a Kaveri Mk-II for later stages of LCA Mk-II, but we will cross that bridge once we reach it.

Future Path: Currently, one path is to pursue this Kaveri engine weight reduction, which in itself is an extremely challenging task - such large-scale weight reduction means playing around with the Core, both material-wise and design-wise.
For example, the heaviest part of a turbofan is its compressor section, specifically the Fan stages (LPCs). Removing a stage there would surely bring the weight down drastically, but the basic engine performance would also take a nose-dive, as the Overall Compressor Pressure Ratio (the combination of the Stage PRs of all the stages) would also reduce significantly.
(Pls note, the combining of Stage Pressure Ratios across the various compressor stages is not a linear sum - the overall PR is the product of the stage PRs, so it compounds like a geometric progression.)

So the trick is reducing a stage without reducing the Overall Compressor PR - which essentially means increasing the SPR of the remaining Stages from the current level. And there are 3 ways of achieving this:
1) Higher blade speed - maybe in the realm of 1.5-1.6M
2) Low aspect ratio (aka wide chord) blade design
3) Multi-circular arc profile compressor blades
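The stage-count arithmetic above can be made concrete. A rough sketch with purely illustrative numbers (the per-stage PR of 1.45 and the 6-stage count are hypothetical, not Kaveri's actual stage data): since OPR is the product of the SPRs, dropping a stage while holding OPR forces every remaining stage to deliver a higher SPR.

```python
# OPR is the PRODUCT of stage pressure ratios, not their sum - so removing
# a stage at constant OPR raises the SPR demanded of the survivors.
from math import prod

def overall_pr(stage_prs):
    """Overall pressure ratio = product of individual stage PRs."""
    return prod(stage_prs)

def required_spr(target_opr, n_stages):
    """Uniform SPR each of n_stages must deliver to hit target_opr."""
    return target_opr ** (1.0 / n_stages)

# Hypothetical 6-stage compressor at SPR ~1.45 per stage:
opr = overall_pr([1.45] * 6)      # ~9.29 overall

# Drop one stage but keep the same OPR -> each remaining stage works harder:
spr_5 = required_spr(opr, 5)      # ~1.56 per stage

print(f"OPR with 6 stages: {opr:.2f}")
print(f"SPR needed from 5 stages for the same OPR: {spr_5:.3f}")
```

That jump from ~1.45 to ~1.56 per stage is exactly what the three techniques listed above (higher blade speed, low aspect ratio, multi-circular-arc profiles) are meant to buy.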

Pls refer to these earlier posts on Compressor Blade Design and Manufacturing aspects for further details.

But the above three would mean developing/acquiring manufacturing capability for increased blade strength and loading - by use of blisk manufacturing, higher thermal-loading metallurgy, high-speed milling, electro-chemical machining, linear friction welding etc. etc.
Absolute cutting-edge material and manufacturing technology (technology which nobody will part with), and we are simply not there.

And before berating GTRE et al., pls note that the current Russian progression path from the 117-series to the T-50 second-stage engine is just that (for at least the last 10-odd years, and they are still 5-6 years from achieving it). Take that!!

Alternate Path: The other option (iterative) is,
1) to keep the engine core as-is and first flight-qualify it - this will baseline the basic engine Core aerodynamic and thermodynamic design (and its other performance parameters).

2) Then go for a slightly larger core inlet dia (so increased mass flow), larger compressor and turbine stages (so increased weight), and brute-force increase the thrust levels (both dry and wet). At this stage vital parameters like OPR, SFC, TeT etc. may not have improved significantly (though the thrust and weight levels would have increased).

3) Then try and improve the Compressor and turbine efficiencies (maybe drop a compressor stage etc) by incorporating higher blade speed, Low-aspect-ratio blades, Multi-circular-arc blade profiles, 4th Gen SCB etc.

4) And finally go for Ceramic Matrix Composites (CMC - for the hot sections like Turbines etc) and Polymer Matrix Composites (PMC for the cold sections like the later-compressor stages) based improvements.

Pls note these phases are not strictly sequential, as achieving each phase's goals would require realizing some aspects of Pt 3 and Pt 4 above.

A variable-cycle engine can come thereafter.

A good 1-2 decades of solid R&D with almost unlimited funding - or be dependent on 100h-TBO hand-me-down stuff. :evil:

BRF Oldie
Posts: 6380
Joined: 11 Aug 2016 06:14

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby UlanBatori » 21 Jan 2014 01:20

maitya, all good stuff but.. a few points:

1. I think the new F-135 engine uses counter-rotating compressor and turbine stages. What this does is to (a) eliminate the stator blade weight (gyan in basic courses: stator does no work), (b) correct the swirl and recover the swirl momentum, (c) reduce the rpm needed for a given relative velocity, and (d) in net effect, increase the SPR substantially. The price paid is steep: there must be larger secondary losses, vortex-interactions etc. Which is where hard, hard R&D comes in. So pushing for higher SPR by going ever higher in tip Mach and RPM is not the answer, the radical option is the counter-rotating stage.

2. I see that the Unducted Turbofan (aka PropFan) is making a comeback. This may not be relevant for fighter planes, but is for transports. Also uses exotic flowfield solvers and structural wizardry, i.e., hard R&D.

3. T/W of 76N/kg means only 7.75. Long way from the 12 claimed for the above, and you can see why those radical decisions will lead to huge weight savings.

4. The GE version of the F-135 (which got killed) is sitting out there, ready for tech transfer/export orders. Wonder if there is any move to get that to India as a string tied to the F404/414s? Or at least as a sample/demo exhibition/tour like the 5 Russian GSLV engines...

What I am saying is that the parameter playground for the Modern TF may be far away from where the Kaveri is playing today. BTW, I have not paid attention to this whole field except for seeing a few news clips (and doing some in-depth soul-searching to justify T/W of 12 for some other reason) for many saal, so don't worry about my level of inside gyan.

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 21 Jan 2014 23:53

[Increasing the SPR Level s of the Kaveri Compressor Stages]
UlanBatori wrote:maitya, all good stuff but.. a few points:

1. I think the new F-135 engine uses counter-rotating compressor and turbine stages. What this does is to (a) eliminate the stator blade weight (gyan in basic courses: stator does no work), (b) correct the swirl and recover the swirl momentum, (c) reduce the rpm needed for a given relative velocity, and (d) in net effect, increase the SPR substantially. The price paid is steep: there must be larger secondary losses, vortex-interactions etc. Which is where hard, hard R&D comes in. So pushing for higher SPR by going ever higher in tip Mach and RPM is not the answer, the radical option is the counter-rotating stage.

Saar, what you say above are all fundamental facts and can't be argued against - but what we are trying below is (in the absence of any direct confirmation from GTRE et al.) to make an educated second-guess and come out with a few (a tiny list of) reasons as to where we are today and what needs to be done to progress this program.

But for the benefit of the layman (you and other learned maulanas may pls skip the following sub-section), the gas dynamics within the rotor-stator system of a typical compressor needs understanding first.

[Compressor Stator and Rotor Dynamics]
Yes, stators (or "stationary fan") are basically there to build up pressure by converting the KE from the preceding paired-rotor-blade of the compressor stage (and the preceding rotor increases the relative KE of the flow-stream by adding "swirl" to the flow).

Note: Addition of energy to the flow-stream by the rotors happens because:
1) The total energy carried in the flow (called Stagnation Pressure) is sum-total of the internal energy (called static pressure) and the kinetic energy associated with the velocity of the air-stream (Hint: good old Bernoulli Equation)
2) As usual, there are three perpendicular components of flow-stream velocity (radial, tangential and axial) - and the KE component addition by the rotor is sum total of kinetic energy associated with each component of velocity (squared).
3) The rotor adds "swirl" to the flow increasing its angular momentum i.e. the KE addition happens due to the increase in tangential velocity component, mentioned above

Once the KE addition has been done by the rotor, the paired stator (which is static, and is "hung" from the core wall) extracts energy from the preceding rotor's swirl and converts it into pressure (thus the relative flow-stream velocity decreases) i.e. it converts the kinetic energy of the swirl into internal energy, raising the static pressure of the flow. Do note, no work (or any energy addition) can be done by stators as they are static - they are pure converters.

So the thumb-rule to remember is that the moving rotors increase the velocity (and thus KE) of the flow-stream while the static stators slow it down and increase the pressure. So, the velocity-pressure profile in a multi-stage compressor looks something like the following schematic.
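The rotor/stator split above can be put into numbers with the Euler work equation for an axial stage (a minimal sketch - the blade speed, swirl velocities and cp below are illustrative textbook-style values, not Kaveri data):

```python
# Euler work for an axial compressor rotor: w = U * (c_theta_out - c_theta_in).
# The rotor adds work by raising the tangential (swirl) velocity c_theta;
# the stator adds NO work - it only diffuses that swirl into static pressure.

def rotor_work_per_kg(U, c_theta_in, c_theta_out):
    """Euler work (J/kg) added by a rotor at blade speed U (m/s)."""
    return U * (c_theta_out - c_theta_in)

def stagnation_temp_rise(w, cp=1005.0):
    """dT0 = w / cp for a calorically perfect gas (cp in J/kg/K)."""
    return w / cp

# Illustrative: blade speed 350 m/s, swirl raised from 0 to 120 m/s
w = rotor_work_per_kg(350.0, 0.0, 120.0)   # 42,000 J/kg added by the rotor
dT0 = stagnation_temp_rise(w)              # ~41.8 K rise across the stage

print(f"Rotor work: {w/1000:.1f} kJ/kg, stagnation temp rise: {dT0:.1f} K")
# The paired stator then trades the swirl KE (0.5 * 120^2 = 7.2 kJ/kg) for
# static pressure rise - zero work added, exactly as the thumb-rule says.
```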

[Back to the original discussion]
What UlanBatori-ji is saying is "why-phor do you need the stators, hain jee?". All you need is to increase the static pressure of the flow, isn't it - so, just get another rotor stage (maybe the next-stage one) and make it rotate in the opposite direction of the "primary rotor" and recover the swirl momentum, na. This counter-rotating rotor stage will also add its KE component and increase the velocity of the flow-stream (with a 90-deg phase off the post-1st-rotor-stage flow etc.) - two in one, ek ke sath ek free.
Remove the dead-wood stators and save weight (and thereby stop scratching that head of yours and mumbling "135Kg overweight engine" etc.)

And mind you, this actually got implemented in an operational engine - so pls, no excuses and hand-waving of "just a theoretical concept" etc.

Problem is, for a certified DOO like moi, there's absolutely no theoretical counter-logic to this, that I can think of (and as they taught us in the good-old-madrassa, no theoretical counter-logic is sure-shot sign of laziness etc!!).

[Potential drawbacks of Compressor Contra-rotating Rotor Stages]
Couple of small fly-in-the-ointment points though:
1) How do we actually implement it - by introducing a pair of counter-rotating concentric shafts is it? So one counter-rotating concentric shaft corresponding to the HPT-driven shaft and another similar one for the LPT-driven shaft, is it?
Do we possess the level of manufacturing maturity required to come up with such a complex gearing mechanism etc - we are no US after all!!
Plus what about the weight-penalty of introducing new shafts etc (maybe not of the same scale as that saved by removing the stators, but still there'll be considerable weight penalty).

2) What about the secondary losses (UlanBatori-ji mentions that as well) - the losses within a compressor stage are mainly of the following types:

a) Disc friction loss (loss is from skin friction on the discs that house the blades of the compressors. This loss varies with different types of discs)
b) Incidence loss (loss is caused by the angle of the air and the blade angle not being coincident. The loss is at a minimum to about an angle of ± 4deg, after which the loss increases rapidly)
c) Blade loading and profile loss (loss is due to the negative velocity gradients in the boundary layer, which gives rise to flow separation)
d) Skin friction loss (loss is from skin friction on the blade surfaces and on the annular walls)
e) Clearance loss (loss is due to the clearance between the blade tips and the casing)
f) Wake loss (loss is from the wake produced at the exit of the rotor)
g) Stator profile and skin friction loss (loss is from skin friction and the attack angle of the flow entering the stator)

We will most probably dwell on these in further detail at a later stage, but for the current topic in hand, pls note that except for (a) and (g) above, all of the others need very careful optimization via blade design (wrt intra-blade CFD etc.). Especially (e) Clearance loss (with no stators as a "protective" boundary to prevent spilling of vortices etc. to the next rotor stage) can be a major problem to master.
Indranil and other aerodynamics and CFD experts can comment more on this aspect.

3) And then there is this solidity issue (between two consecutive rotors with complex shapes) to deal with - this becomes more complex not only due to the permissible attachment of blades closer to each other (when stators are not there), but also due to the blade-shape-induced eddies and vortices (pls note, generally speaking, vortices can be very beneficial for energizing a flow and thus preventing departure etc. - but will the effort required to turn them by a 90-deg phase at the next rotor be worth it, wrt the counter-rotating arrangement being discussed here?).
So the short answer is I don't know, and I doubt that without solid and extensive R&D, modeling and studies we will ever know - it's not for nothing that it took approx a decade+ for GE and PW to come up with this concept and operationalize it.

[Contra-Rotating Rotors and the Kaveri perspective]
But the bigger (and more pertinent) question is: if this is a better, next-gen path to reducing compressor and overall engine weight and improving efficiency, why isn't GTRE pursuing it? If we armchair designers (except maybe UlanBatoriji) can think of this, surely career turbojet designers and engineers in GTRE must have considered it long back.
A simple google search on contra-rotating rotors in an axial turbofan yields a plethora of papers and patent-related publications - couldn't find anything from GTRE folks (of course, I'd admit I didn't spend enough time looking or searching etc.).

None of us would know and can only speculate:
As I'd tried to speculate in the "Kaveri Design Choice Rationale" post earlier, I'd speculate it's purely a question of risk-appetite (I'd not subscribe to the "laziness" and "institutional-inertia" theories just yet - "lack of initiative", yes, maybe, evident from the lack of even theoretical papers or studies on it from Indian entities etc.)
Had GTRE charted that path (e.g. like charting the 3rd Gen SCB instead of the DS blade path that they chose etc.) right from the start, and in the event they failed and had to redesign the Fan and HPC stages to more contemporary rotor-stator-paired stages, we (the inherently argumentative and navel-gazing Indians that we are) would have bayed for their blood and labeled them as "failures", "incompetent", "parasites" etc. etc. (shivji has a more comprehensive list).

Not that they have fared any better today in our so-called public "scrutiny", but in the 1980s-90s they simply couldn't have chosen such a risky path.

But, at the same time, I'm in the process of getting appalled (thanks again UlanBatoriji for being the margdarshak - as you have always been, if I may add) at this lack of initiative to forward-think, technology-forecast, and at least commission theoretical studies etc. And more than that, if they could do something similar wrt material tech via the CMC and PMC stuff, why not in the rotary-CFD area?

Instead the whole focus of GTRE seems to be the old sledge-hammer approach of:
1) Increase TeT with better HPT (and LPT) blade (and disk) material (and casting tech), extract more rotational energy, raise the blade-tip speed and thus increase the SPR somewhat. This path has very little head-room though, as beyond 1.6M etc. it's unknown-unknown territory.

2) Try the composite route and get the FAN weight reduced (the FAN stages are the heaviest part of the engine, IIRC). The difficulty and constraints of contemporary CNC machines in complex-geometry shaping of composites would be the limiting factor.

Pls note that I'm not trying to propose that we shouldn't do the above 2 - by all means we should. But as parallel risk-mitigation, and also as a future tech-development strategy, we should seriously look at this "contra-rotating compressor blade-based stages" approach as well (c'mon, at least a feasibility study is needed, isn't it?).

[Kaveri – Attempts towards SPR increase via Compressor Rotor Blade-Tip Speed increase]
But before we wrap up, I think we should analyse this increased compressor blade-tip speed path (pt. 1 above) a bit (because there's an increased mass-flow aspect inherent in it as well).
Fig 1 represents a typical compressor rotor stage and how KE is built up across it - and Fig 2 represents the velocity triangles across such a compressor stage comprising a rotor and a stator (there's an entry-vane aspect as well, but not all compressor stages will have an entry vane - they are typical of the LP stages of a compressor). These velocity vectors are used for deducing the various formulas.

Fig 3 is the corresponding Euler turbine equation and is very important, as it depicts the relationship between the stagnation temperature rise across a compressor stage and the corresponding rotor tip speed and mass flow through the stage. As can be clearly seen, the stagnation temperature rise across the stage increases with the tip Mach number squared, and, for fixed positive blade angles, decreases with increasing mass flow. Fig 4 is the corresponding schematic representation.
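Fig 3 itself is not reproduced here, but the relationship described has a standard textbook form; a sketch, assuming constant axial velocity c_a, blade speed U, rotor-exit relative blade angle beta_2 and a calorically perfect gas (the exact symbols in Fig 3 may differ):

```latex
\frac{\Delta T_0}{T_{01}} \;=\; (\gamma - 1)\, M_T^{2} \left( 1 - \frac{c_a}{U}\,\tan\beta_2 \right),
\qquad M_T = \frac{U}{\sqrt{\gamma R T_{01}}}
```

Both trends in the text fall out directly: the rise grows with the tip Mach number squared, and since the axial velocity c_a scales with mass flow, a higher mass flow (for a fixed positive beta_2) reduces the stagnation temperature rise.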

So what does the above representation mean from a Kaveri perspective? Well, as I've mentioned above, there'll be an attempt to better the current compressor SPRs of Kaveri via TeT improvements. This will essentially mean increasing the compressor rotor blade-tip speed which, as the above schematic shows, will result in an increase in temperature across the compressor stages.

Increasing temperature across the compressor stages will mean breaching the 650-700 deg C (or so) maximum operating temperature of the Ti-based compressor blades. So we should see an attempt towards junking the current Ti-based blades and switching to equiaxed-cast Ni-alloy-based compressor blades, at least for the later HPC stages (where the peak temperature will be reached).

However, there'll also be another parallel attempt towards increasing the mass flow by slightly increasing the dia (mainly to inflate the thrust levels) - this will have the indirect effect of (relatively) cooling these compressor stages (as depicted in the equation and graph above).

Interesting times ahead!!

PS: The rotor-stator dynamics in a turbine are exactly the opposite of the compressor discussion above.

PPS: UlanBatoriji, sorry I didn’t get enough time to think-thru your other points as of now.

BRF Oldie
Posts: 6380
Joined: 11 Aug 2016 06:14

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby UlanBatori » 22 Jan 2014 00:20

Now u r getting into the reasons why in war there is no "you were close second". I think in WW2 the ingineurs had 2 choices:
a) solve dem equations and invert the matrices and make the dang thing work perfectly. By subah tomorrow.
b) At subah tomorrow, eeph not done, pick up rifle and get on truck headed to phrontline onlee.
And if you made a chalta hai solution, and it breaks down on the trench line, WELL!!! go out and fix it there.
As James Bond and Mullah Archimedes said: "You only live do bar. Phirsht the din when u were born. And doosri bar when Houristan stares u in the musharraf."

So this is why there is a "Working Injun" using counter-rotating etc. People with Aag in the Belly. All the points u mention are baad issues:
1) How do we actually implement it - I think there are some planetary gear designs (e.g. on Propfan designs of the late 1980s) and maybe just concentric shafts, I don't know. Remember, we want counter-rotating STAGES, not entire machines. Stage ek goes left, stage do goes right. Stage teen goes left...
Weight-penalty of introducing new shafts etc. Sure, and gears are esp. troublesome.
2) What about the secondary losses
a) Disc friction loss: hire Mullah Archimedes-bin-Tribology
b) Incidence loss : aerodynamics/ adjustment by experiment
c) Blade loading and profile loss: actually blades may be thinner because centrifugal stress is lower because rpm is lower.
d) Skin friction loss: yes, but much smaller because rpm is half so dynamic pressure is lower.
e) Clearance loss: yes, but all have it, they just let the machine rub itself in and make a thin wear-slot, right? sharp file, use EyeEyeTee Phitting Shop Gulag experience. :)
f) Wake loss: think interaction of wake with succeeding rotor stage. More interesting. Meaning Research $$$. :mrgreen: But hey, if you manage to make the leading edge vortex from a curved low-AR blade come over the surface of a succeeding blade, you may manage to get a much higher stage pressure ratio! Many saal worth of people staring at See Eff Dee plots, and then testing in lab.
g) Stator profile and skin friction loss. Ha! No stator! Actually it may turn out that profile drag comes down because the flow may self-adjust on counter-rotating stages, vs. having to separate at high incidence on a fixed-geometry stator. Just a wild guess, but based on empirical evidence: Eeph say A.May or Dr. Shrilleen stand in a wind, shawl will stream straight back and flutter in separated flow around Bluff Body. But if M. Bedi walks in a breeze like a counter-rotating blade heading through the flow over the prior stage blade, the dynamics of the rapid swaying motion make the sari cling to the blade design. 8) I think this is a version of Co-Anda Ephekt.

Again: I am just making wild guesses. Noooo idea how F-135 or GE version are done.

P.S. Biggest cited superstition about counter-rotating (reason cited when they cruelly dissed my precious experimental proposal in the late 80s/early 90s, result of many many moons of sitting up with smoke coming thru ears) is "instability". I flunked all the rotor dynamics classes (if I ever took any), but as I hear it, there will be many many resonant frequencies and you can't hope to damp them all. But the proof is that the F-135 is working, and the X-2 is flying, so I think they have somehow circumvented this superstition. And that's why the Unducted Turbofan aka Propfan seems to be making a comeback. Subjecting such an engine to the impact of a 9-G landing, and high-AOA operation... well, they are doing it.

BRF Oldie
Posts: 6380
Joined: 11 Aug 2016 06:14

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby UlanBatori » 22 Jan 2014 03:11

Don't know if halal - if not will move to the academic thread.
Google Scholar search for counter-rotating:

Counter rotating fans — An aircraft propulsion for the future?
Also (no url)
Schnell, R, Wallscheid, L. Unsteady Blade Pressure Distributions on a Counter Rotating Propfan at On- and Off-Design Conditions. 15th International Symposium on Air Breathing Engines, ISABE, Bangalore, India, AIAA, 2001. 1–10


Aircraft engine with inter-turbine engine frame supported counter rotating ...

Counter rotating turbofan engine

Counter rotating fan aircraft gas turbine engine with aft booster
Analysis of Technical Challenges in Vaneless Counter-Rotating Turbomachinery

BR Mainsite Crew
Posts: 12442
Joined: 27 Jul 2006 17:51
Location: Trying to mellow down :)

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby negi » 22 Jan 2014 03:48

On the contra-rotating topic, I think GTRE took the path of least resistance - for someone building their first military TF engine for a single-engine fighter AC, it is obvious to have chosen a path well trodden. By the way, despite lakhs of hours of live flight-test data and testing, even the likes of Kamov have not been able to rule out rotor blade collision in their designs.

Also, the F-35 is a different use case - it requires contra-rotating fans for low-speed handling and hovering capabilities; conceptually it is like the RR Pegasus engine onboard the Harrier, only much more advanced. The LCA and other CTOL ACs do not need such control over the gyroscopic effects caused by the direction of air flow.

BRF Oldie
Posts: 5741
Joined: 11 May 2005 06:56
Location: Doing Nijikaran, Udharikaran and Baazarikaran to Commies and Assorted Leftists

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby vina » 22 Jan 2014 09:18

[Potential drawbacks of Compressor Contra-rotating Rotor Stages]
Couple of small fly-in-the-ointment points though:
1) How do we actually implement it - by introducing a pair of counter-rotating concentric shafts is it? So one counter-rotating concentric shaft corresponding to the HPT-driven shaft and another similar one for the LPT-driven shaft, is it?
Do we possess the level of manufacturing maturity required to come up with such a complex gearing mechanism etc - we are no US after all!!
Plus what about the weight-penalty of introducing new shafts etc (maybe not of the same scale as that saved by removing the stators, but still there'll be considerable weight penalty).

Zimble onlee.. Cut & Paste, Beg & Borrow, and rebersh yin jin ear what you have. I mean, we already have the Pigasses oops Pegauses engine from the Harrier, which uses a contra rotating spool. Cut paste and copy that yin-jin-earring from that !

But that is too late for the Kaveri. Maybe for the next YinJin. The priority should be to get the current Kaveri up to design specs; for that the problem is materials. Get that in, the rest will fall in place.

Posts: 107
Joined: 15 May 2007 20:53

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby Kartman » 22 Jan 2014 23:27

Maulana UlanBatori,
Along the lines of the (GE) unducted propfan, is the (P&W) geared turbofan that is being pitched as the "next best thing since sliced double-roti aka CFM56". Doesn't have contra-rotating spools IIRC, but planetary gearbox for speed reduction like a turboprop. Might have similarities to the F-135, since they're both from karkhana-e-P&W.

BRF Oldie
Posts: 6380
Joined: 11 Aug 2016 06:14

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby UlanBatori » 23 Jan 2014 18:57

I was noting the evolving noise about the 5th Gen Fighter. Clearly there is going to be a spate of articles from completely unbiased ppl who just happen to work in think-tanks funded by US, Oiropean entities saying how the IAF just HATES the Russian design. But I want to point out something: If you and your 6th coujin competitor, both are able to develop a fighter engine with T/W of 12 (as on the F-35) then would you not try to do at least as well on the 5th Gen or 6th Gen? What chance will a fighter with engines of T/W 8 or 9 have against you? This is a strong reason why there HAS to be at least a tech demonstrator engine of Indiangenius design that comes somewhere in the vicinity of 12 T/W. It can be half vaporware, but hopefully beyond the Photoshop stage. It will at least drive the design of the 5th Gen towards something that has a hope when it eventually comes out. So someone needs to be looking at designs that can get to those levels. And I think counter-rotating spools are pretty-much a must if you are going with axial turbomachines designs at all (as opposed to what? Djinn-Relativistic Compression?). Plus one has to build manufacturing facilities that can at least turn out 1 prototype engine every 6 months? So in 5 years (while they do yada-yada-yada and develop the RFQ) 10 iterations will have been done and 9 engines tested to destruction with data and computations done. This worked on GSLV, I don't see why it can't work on aircraft engines. Think about it - if the rpm is low enough because of counter-rotation, maybe you don't need Single Crystal or Blisk? Maybe all of the LP compressor and fan can be composite? Wonder what it takes to do a 3-D printed complete stage.. at least at 25% scale.

Posts: 207
Joined: 30 Jan 2006 14:16

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby Shankk » 26 Jan 2014 22:12

Last edited by Indranil on 27 Jan 2014 01:01, edited 1 time in total.
Reason: This is a technical-only discussion thread.

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 27 Jan 2014 16:27

Having looked into the design aspects of the turbofan/Kaveri, it's high time we explore/analyse the other very crucial aspect of engine development - the Material aspect.
(Disclaimer: Pls note the following sections are solely aimed at building up the technological context of the Kaveri Program from a pure layman perspective. These posts/sections do not attempt to detail Material physics/chemistry/mechanics etc., and neither are they an exhaustive ready-reference on the core subject.)

As we have seen in the preceding posts, the thrust and efficiency improvements in turbojet/turbofan engines have moved in lock-step with core material design and the corresponding manufacturing technology evolution. A key indicator of this is the thrust, SFC etc. improvement of overall engines vis-à-vis first the turbine blade material evolution (from Fe-based to Ni-Fe-based to Ni-based alloys) and then the generational changes in manufacturing methodology (Wrought -> Conventionally Cast -> Directionally Solidified -> Single Crystal) using these alloys. So in the following series of posts I'd attempt to build up the necessary context of the material properties and manufacturing technology aspects of the various turbofan components.

[Material Properties at High Temperature]
The material needs of a turbojet/turbofan are vastly different from those of other types of engines, mainly due to the mechanical forces it is required to withstand in a very high ambient-temperature regime. The mechanical behavior and strength characteristics of most metals (we'll come to the why part later) vary wildly beyond a certain temperature regime - and the reason is that, for most metals,
1) at high temperatures, the "time duration" of load application becomes a very significant differentiator of the material's strength (whereas for the same material, strength at low temperature is usually not a function of time)
2) oxidation (i.e. conversion of metal atoms to their oxides) happens much more rapidly at high temperature

[Creep and Creep-Rupture]
So, apart from the obvious stuff like metals weakening at higher temperature etc., there is this added dimension of the combined impact of the (prolonged) "time duration" of the load and the higher temperature, which needs careful consideration. The following schematic brings it out quite vividly:

Thus if a metal is subjected, for a long duration and at high temperature, to a load that is considerably less than what would have broken it at room temperature, the metal will begin to extend with time. This time-dependent extension is called creep - and if the load application is continued long enough, the metal will eventually fracture (or rupture, as it's technically called). This creep strength and the creep-rupture (aka stress-rupture) strength are vital parameters that come into play while selecting materials for applications where a prolonged physical load will be applied at high ambient temperature.

The impact is so great that, under a normal mechanical load, deformation (creep, fracture etc.) can start at as low as 50% of the melting point of the metal (in absolute temperature) - in contrast to a situation where the metal would have withstood the same load perfectly well at room (or lower) temperature.
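The "50% of the melting point" rule above is usually expressed via the homologous temperature T/Tm, with both temperatures in Kelvin. A small sketch (the 0.5 threshold is the common textbook rule of thumb, and the alloy temperatures below are illustrative, not Kaveri data):

```python
# Homologous temperature: operating temperature over melting temperature,
# both in Kelvin. Creep becomes a design driver roughly above ~0.5.

def homologous_temp(T_c, Tm_c):
    """T/Tm with both inputs in deg C, converted to Kelvin."""
    return (T_c + 273.15) / (Tm_c + 273.15)

def creep_is_a_concern(T_c, Tm_c, threshold=0.5):
    return homologous_temp(T_c, Tm_c) >= threshold

# Illustrative: a Ni-base superalloy melting around 1350 C, blade metal at 1000 C
h = homologous_temp(1000.0, 1350.0)   # ~0.78 -> deep in the creep regime
print(f"Homologous temperature: {h:.2f}, "
      f"creep concern: {creep_is_a_concern(1000.0, 1350.0)}")
```

This is why turbine blades, which run far above half their (absolute) melting point, are selected on creep-rupture strength first.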

And in the turbines and other "hot" areas of a turbojet/turbofan that is exactly what happens - i.e. the combination of the mechanical forces of a high-speed spinning turbine (and other "mechanical" forces as well) along with the thermal loads of a very high ambient temperature is what makes choosing the appropriate material for such applications so unique.

[Cyclic Load – LCF/HCF]
But that's not all - there's also the problem of the behavior of metals under cyclic loads at elevated temperatures. Any metal that is subjected to a certain level of cyclic load will fail after a certain number of cycles, irrespective of ambient temperature - however, with increased temperature it will fail sooner, i.e. after a much smaller number of cycles. This is schematically represented in the following diagram:

Low Cycle Fatigue (LCF) is caused by loading that typically causes failure in fewer than 10^4 cycles - and LCF can be induced either by pure (thus very high) mechanical loads (and of course, high temperature) or by a combination of moderately high mechanical loads and thermal loads (i.e. an even higher temperature regime).

The type of LCF that is induced by this combination of thermal load and mechanical loads is called Thermal-Mechanical Fatigue (TMF) – where failure occurs in a relatively low number of cycles.

In a HPT and LPT, for the highly mechanically loaded turbine disks, mechanically-induced LCF is major concern/criteria for selection. While for the HPT (and LPT) blades where the highest temperature loads and the slightly relatively-lower mechanical loads are applicable, TMF is a major concern. This dichotomy has a major impact/constraint towards designing Turbine blisks etc.
High Cycle Fatigue (HCF) is associated with lower-stress repeated mechanical loads leading to fatigue failure after a high number of cycles (about 10^4 to 10^8). Normally HCF is not a problem with superalloys unless a design error subjects a component to a high-frequency vibration that forces rapid accumulation of fatigue cycles.
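The cycle-count bands above (LCF below ~10^4 cycles, HCF from ~10^4 to ~10^8) can be captured in a trivial classifier, plus a textbook Basquin-type stress-life relation for the HCF side. Note: the fatigue-strength coefficient and exponent below are placeholder values for illustration, not superalloy data:

```python
def fatigue_regime(cycles_to_failure: float) -> str:
    """Classify a fatigue failure by cycle count, using the bands quoted in the text."""
    if cycles_to_failure < 1e4:
        return "LCF"
    if cycles_to_failure <= 1e8:
        return "HCF"
    return "effectively infinite life"

def basquin_cycles(stress_amp_mpa: float,
                   sigma_f_mpa: float = 1000.0, b: float = -0.1) -> float:
    """Invert Basquin's relation sigma_a = sigma_f' * (2N)^b for N.
    sigma_f' and b here are made-up illustrative values, not alloy data."""
    return 0.5 * (stress_amp_mpa / sigma_f_mpa) ** (1.0 / b)

# Halving the stress amplitude buys orders of magnitude more cycles:
# basquin_cycles(500.0) ~ 512 cycles (LCF territory),
# basquin_cycles(250.0) ~ 524288 cycles (HCF territory).
```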

[Basic Parameters towards Material Selection]
So when selecting materials for a turbofan (hot sections like the turbine components, as well as not-so-hot components like the fan and compressor), the following material properties need considering:
a) Resistance to the creep-rupture process
b) Very good high-temperature short-time strength (yield, ultimate)
c) Very good fatigue properties (including fatigue-crack-propagation resistance)
d) Exceptional oxidation resistance

In addition to these properties,
i) Related mechanical properties like dynamic modulus, crack-growth rates and fracture toughness
ii) Related physical properties like thermal expansion coefficient and density
are also taken into consideration when selecting appropriate materials for the various components of a turbofan/turbojet.
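Net-net, material selection here is a multi-criteria screening exercise. Purely as an illustration (the property flags below are invented placeholders, not measured data), the four primary criteria above can be mimicked as a crude pass/fail filter:

```python
REQUIRED = ("creep_rupture_resistance", "high_temp_strength",
            "fatigue_resistance", "oxidation_resistance")

def screen(candidates: dict) -> list:
    """Keep only candidates whose flags satisfy all four primary criteria."""
    return [name for name, props in candidates.items()
            if all(props.get(p, False) for p in REQUIRED)]

# Illustrative-only flags (not real materials data):
demo = {
    "pure Ni": {"creep_rupture_resistance": False,
                "high_temp_strength": False,
                "fatigue_resistance": True,
                "oxidation_resistance": False},
    "Ni-base superalloy": {p: True for p in REQUIRED},
}
# screen(demo) -> ["Ni-base superalloy"]
```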

[Nickel (Ni) as the basic material]
Having understood the properties that a material needs to possess to be selected for a turbojet/turbofan – especially for the higher-temperature components like the HPT and LPT – there are two important factors that need considering, viz.
1) Obviously higher melting points
2) Crystalline Structure which controls the toughness and ductility of the metal

But first the Crystalline Structure factor:
For crystalline solids (like metals), the most common types of unit cells are the face-centered cubic (FCC), the body-centered cubic (BCC) and the hexagonal close-packed (HCP).

In the FCC crystalline structure, the bonding contributed by the outer d electrons provides more cohesive energy, making these metals tougher and more ductile.
Moreover, we need to understand how each of these crystalline structures influences the most important factor, i.e. resistance to the creep-rupture process. The relationship between the creep shear strain rate, the normalised activation energy and the diffusivity (at the melting point) is pretty clearly depicted in the following schematic:

So it’s pretty clear that the FCC crystalline structure, with the relatively highest normalised activation energy and relatively lowest diffusivity (at the melting point), is the most conducive to creep-rupture resistance. So when selecting materials for turbofan components where the mechanical and thermal loads are highest (and which are thus most prone to creep and creep rupture), the choice would normally fall on metals with an FCC crystalline structure.

Melting Point comparison: Now let’s examine the melting points of various metals wrt their atomic number.

Comparing side-by-side with the periodic table, it’s pretty evident that the maximum melting points are achieved by the VB and VIB column elements (refer to the peaks and near-peaks in the graph) in each row – and that moving down the VB and VIB columns, the melting point increases. Also, in each row, after peaking at the VB and VIB elements, the melting points start falling as we move rightwards across the row (i.e. towards VIIB, VIIIB etc.).

Now, combining this with the crystalline structure of these metals – and the fact that FCC-structured elements are the most suitable for creep-rupture resistance etc. – it’s pretty clear that the choice of metals would be limited to the VIIIB and IB metals.
But closer inspection reveals that quite a few of these VIIIB metals are platinum-group metals (PGMs), which are dense, rare and very expensive, ruling them out for large-scale usage.

So basically we are left with Ni, Fe and Co as quite suitable for use as base materials in the high-temperature components of a turbojet/turbofan. A superalloy's base alloying element is usually nickel, cobalt, or nickel-iron – and as the following illustration (of material strength vs temperature) shows, the type of superalloy to use depends upon the temperature regime in which the corresponding turbo-machinery (made of these superalloys) is meant to operate.

Do note, however, that as we have discussed before, superalloys are required to have the following physical properties:
1) excellent mechanical strength and resistance to creep (the tendency to slowly deform under stress) at high temperatures
2) good surface stability
3) corrosion resistance
4) oxidation resistance

But the problem is that Ni in its pure form (melting point 1455deg C) doesn’t provide any of the above properties sufficiently (except perhaps corrosion resistance) to be used in turbine blades and withstand the operating conditions (temperatures reaching 1600deg C etc.).

So alloys are formed by adding small quantities of various other elements so that all four of the above-mentioned properties can be met – in modern Ni-based superalloys as many as 12-14 elements are added, with very tight tolerances on the quantity added:
1) For Strength (Molybdenum, Tantalum, Tungsten and Rhenium)
2) Oxidation Resistance (Chromium and Aluminium)
3) Hot Corrosion Resistance (Titanium)
4) Phase Stability (by Nickel itself)

So, here are the major alloying elements normally added to Ni-based superalloys – and a list of representative Ni-based superalloys (Note: the superalloys depicted below are not just random samples – we will use these details in later sections. Hint: refer to the comment column).

Thus a superalloy, or a high-performance alloy, can be defined as an alloy that exhibits excellent mechanical strength and creep resistance at high temperatures, good surface stability, and corrosion and oxidation resistance.

So it’s necessary to always keep in mind the type of temperature and pressure regime the different parts of a turbofan are subjected to before carefully deciding the type of superalloy material to be used therein – the following schematic brings that out quite succinctly.
For example, the physical and thermal stresses are of completely different nature between the disk and the blades of a turbine.

The following schematic (of a civil turbofan), though quite dated, adequately depicts the usage of various type of Superalloys in various parts of an engine.
So while a titanium-based alloy would be more appropriate for the LP compressor blades (temperature not going beyond 700deg C), nickel-based superalloys would be more appropriate for the turbine blades (operating temperature zone 1200-1700deg C), and also for some of the HP compressor blades (at least the last stages of an HPC).
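The rule of thumb in the paragraph above can be sketched as a simple lookup – the ~700deg C breakpoint is just the indicative figure from the text, nothing more:

```python
def base_alloy_for(local_temp_c: float) -> str:
    """Very rough alloy-family choice by local operating temperature,
    using the indicative breakpoints quoted in the text."""
    if local_temp_c <= 700:
        return "titanium alloy"      # e.g. fan / LP compressor blades
    return "nickel-base superalloy"  # rear HPC stages, turbine blades

# base_alloy_for(650)  -> "titanium alloy"
# base_alloy_for(1400) -> "nickel-base superalloy"
```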
And it needs to be further noted that superalloys come in various compositions and thus exhibit various types of mechanical- and thermal-stress resistance, with particular types relevant to particular engine parts – i.e. there is no one type of superalloy (with a specific composition) that can be bulk-fitted to all parts of an engine.


BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 05 Feb 2014 10:41

(... now comes the boring, verbose but the essential bits ...) :P

As we have seen in the previous series of posts, the melting point of the base element (for the time being we are focusing only on Ni, whose pure melting point is 1455deg C) is just not enough to withstand the TeT (operating temperature zone 1200-1700deg C) of the turbine stages of modern turbofans/turbojets.

Note: Talking about the theoretical melting point of Ni in its pure form – or of any metal, for that matter – is of very little value, as in practical life the base metal one normally gets will have enough impurities to have a much lower melting point.
Plus, when various other elements are added to Ni to form superalloys, the melting point no longer remains a discrete value – it becomes a range, and quite a large range at that. This constrains turbine designers to appropriately tailor the “operating environment” (read TeT) of the turbine so that the temperature therein remains well below the lower limit of this melting-point range. We will touch upon this aspect in somewhat more detail a little later.

[High Temperature Strengthening]
So the solution to this is developing “high temperature strength” of Ni by various methods – and for lesser mortals around the globe, those with no access to “jiiinn-injiin” tech (© grand-mullah Gas-turbini-enqyoobuddin), this is exactly where alloys (and superalloys) come in.
We also saw in the previous posts that, apart from this high-temperature creep resistance, the other important superalloy material properties are fatigue life, phase stability, and oxidation and corrosion resistance. The oxidation and corrosion resistance is provided by a protective oxide layer which forms when the metal is exposed to oxygen; it encapsulates the material and thus protects the rest of the component. In a superalloy, oxidation or corrosion resistance is provided by alloying (or mixing) elements such as aluminium and chromium with the base element Ni.

But the all-important high-temperature strength property of a superalloy is achieved “somewhat” through a concept called solid-solution strengthening and “majorly” via the formation of secondary-phase precipitates such as gamma-prime (written as γ') through a phenomenon called precipitation strengthening.
Note: There are various other precipitate phases like γ'', metallic carbides (denoted MxCy), metallic nitrides etc. – we will not discuss them, as they are out of scope in the current context of this thread/discussion.

But before we do that, this important point needs to be kept in mind at all times (while discussing/analyzing superalloy material tech), viz.
Superalloy strength properties are directly related not only to the chemistry of the alloy (i.e. the base and alloying elements and their interplay at the crystal level) but also to a host of “manufacturing or engineering” processes like melting procedures, forging and working processes, casting techniques and, above all, the heat treatment after forming, forging and casting.

Now let’s examine in some detail the two crucial “high temperature strength” enablers viz. Solid-solution-strengthening and Precipitation-strengthening.

[Dislocation of atoms]
But before that, we need to understand what exactly this material strength aims to prevent, i.e. the nature of dislocation movement at the crystal-lattice level that these strengthening processes address. Pls refer to the following schematic.

As is evident above, atomic dislocations in a crystal lattice can move easily through an unalloyed metal – also pls note that it’s easier for these dislocations to propagate within the crystalline lattice of a grain (Note: this is why reducing the number of grains in directionally solidified and single-crystal blades helps prevent creep rupture etc. – more on this later).

The whole effort of “strengthening” alloys is to prevent these dislocations from propagating as much as possible – and the next two sections deal with the various alloying methods of doing so.

[Methods of High-Temp Strengthening in superalloys]
But before that, do note that the energy required for the dislocation movement of these atoms at the crystal-lattice level comes from the abundantly available heat energy. So, net-net, the whole effort is towards raising the “heat energy” required for dislocation movement – by introducing “blockers” in the form of alloying atoms in the crystalline lattice – to a level where such movement can be prevented (up to a point, of course, beyond which everything deforms, melts etc.). And that is what the “temperature” part of “high-temperature strengthening of superalloys” refers to.

Anyway, strengthening in superalloys is generally by either
1) Solid-solution hardening where-in substituted atoms interfere with deformation
2) Precipitation hardening where-in precipitates interfere with deformation

Note: there are other types of hardening as well – viz. work hardening (where energy is stored by deformation) and carbide strengthening (where, during carbide production, a favorable distribution of secondary phases interferes with deformation). However, these two types are ignored here as they are not very relevant to Ni-based superalloys.

[Solid Solution Strengthening of Superalloys]
1) Solid Solution Strengthening – Solid-solution hardening is simply the act of dissolving one metal into another, done during casting, when all the metals involved are in liquid form.
Since we are talking about nickel-based superalloys here, we will talk about certain other molten metals (the solutes) that can be dissolved into the main metal Ni (the solvent) to help impede the movement of dislocations, imparting extra strength to the resultant Ni alloy.

However, there are two types of solid solution strengthening, depending upon the atomic size of the solute wrt the solvent (Ni, in this case).

a) Substitutional Solution Strengthening – wherein atoms of the solute material replace atoms of the solvent material, Ni, in the crystal lattice. And since the solute and solvent atoms are of different sizes, they interrupt the regularity of the crystal lattice, thus preventing dislocations from easily propagating around the interruption.

The energy (from higher stress levels or temperature, or both) required for these dislocations to move around the substitutional atoms is significantly higher, resulting in higher creep resistance and general strength of the material.

b) Interstitial Solution Strengthening - The second type of solid solution strengthening is where the solute atoms are small enough to fit into spaces (interstices) between the solvent atoms in the crystal lattice. Again here also, the alloying element catches the dislocation and prevents it from moving further, as shown in the schematics below.
And similarly, it then requires greater stress or thermal energy for the dislocation to move around the interstitial atom resulting in higher creep resistance and general strength of the material.
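For a feel of how both flavours of solid-solution strengthening scale, one classical textbook estimate (a Fleischer-type relation) says the strengthening grows with the size/modulus misfit of the solute and with the square root of its concentration. The formula and numbers below are a generic teaching approximation, not data for any actual Ni superalloy:

```python
def solid_solution_strengthening(shear_modulus_mpa: float,
                                 misfit: float,
                                 solute_fraction: float) -> float:
    """Fleischer-type scaling: delta_tau ~ G * eps^(3/2) * sqrt(c) / 700.
    Used here only to show the trends (bigger misfit, more solute ->
    more strengthening), not to predict real alloy numbers."""
    return shear_modulus_mpa * misfit ** 1.5 * solute_fraction ** 0.5 / 700.0

# Trends (taking G ~ 76 GPa, roughly that of Ni):
# doubling the misfit raises delta_tau by 2^1.5 ~ 2.8x;
# doubling the solute content raises it by sqrt(2) ~ 1.4x.
```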

Note wrt Kaveri: the UDIMET-700LI superalloy used in the LPT disc – the “LI” stands for Low Interstitial (more on this point later).

Last edited by maitya on 06 Feb 2014 00:16, edited 1 time in total.

BRF Oldie
Posts: 2415
Joined: 19 May 2010 10:00

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby vic » 05 Feb 2014 19:48

Steel and Industrial Forgings Ltd (SIFL), a Public sector firm – fully owned by Govt of Kerala, is a leading
manufacturer of forging in the country since 1984. Apart from DRDO, SIFL supplies forgings to HAL,
ISRO, Railways, Defence, BEML, BHEL, L&T, and Caterpillar, etc. SIFL's product mix consists of
forgings with carbon steel, alloy steel, SS, Aluminium alloy, Super alloys, Maraging steel, Titanium alloys,
etc. SIFL has so far developed and supplied nine different types of forgings to DRDO, for their prestigious
LCA-Kaveri Engine Project.

Forgings – aero engine:
Housing No 2 Bearing (GTM-Ti 64 Titanium Alloy – 25.5 kg)
Stub Shaft-II LPC (GTM-Ti 64 Titanium Alloy – 37.5 kg)
Shaft Stage-II LPC (GTM-Ti 64 Titanium Alloy – 46 kg)
Inlet Casing (GTM-Ti 64 Titanium Alloy – 37.5 kg)
Shaft II Stage Fan (GTM-Ti 64 Titanium Alloy – 30 kg)
Housing No 2 Bearing (GTM-SU-718 Super Alloy – 43.5 kg)
Housing No 5 Bearing (GTM-SU-718 Super Alloy – 62.5 kg)
Rear Mount (GTM-SU-718 Super Alloy – 58 kg)
Rear Guide Mount (GTM-SU-718 Super Alloy – 58 kg)

BRF Oldie
Posts: 2062
Joined: 11 Aug 2016 06:14

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby member_20292 » 10 Feb 2014 14:28

Maitya ji; would like to get in touch with you offline. I yam materials science grad from the university on the banks of the ganges, home of tulsidas, kalidas and robert pirsig. P Ramachandra Rao , P R Rao, CNR Rao, Kamanio Chattopadhyay are all alumni.

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: The Kaveri Saga - India's attempt to build a modern Turb

Postby maitya » 11 Feb 2014 14:55

A famous poet once said - "... to be or not to be ..." ityadi.

And I was caught in the exact same dilemma for more than a week now, as to how to introduce, in pure layman terms, an otherwise bland and highly technical topic like "Precipitation Strengthening" and somehow maintain interest.
Normally, the moment turbofan/turbojet etc. are uttered, one only expects to hear stuff like TeT, OPR, HPT/LPT, Vanes, Boltzmann, Rayleigh, Euler and what not. But the equally important aspect of material technology (and more importantly, from the Kaveri perspective, the Engineering Process and Tech part) is not much mentioned, or totally ignored – and even if it does get mentioned in passing, it remains strictly confined to SCB and Blisks.

Problem is, to reach those topics and understand them with some depth, the essential-base-concept-building is required - and the aim of these current series of 3-4 posts on pure material tech, is to somehow do that building.

So the dilemma was to either to post something like the following and be done with it:
Superalloys consist of an austenitic face-centered-cubic (fcc) crystal structure matrix phase, γ, plus a variety of secondary precipitate phases that form in the γ-matrix. These, along with other eta and delta phases, help ...
... that a set of superstructures {A-A2, A3B-D03, AB-B2, AB-B32, AB3-D03, and B-A2} was selected to be representative of a series of bcc-based ordered phases. The total energies were calculated using the WIEN2k software package, based on the Full Potential Linearized Augmented Plane Wave (FLAPW) method within the generalized gradient approximation (GGA). Muffin-tin radii of 2.0 au (0.106 nm) for Co, Ni, and Al were assumed, and RKmax was fixed at 9.0, which almost corresponds to the 20 Ry (270 eV) cut-off energy.

Short, correct and sweet - but for most (especially those without any formal madrassa-based Material Sc/Engg degrees like moi), it would have passed harmlessly, without any resistance, many miles above the head. :P

... Or to find some less-than-accurate-but-easier analogy and carry on with detailed discussion/analysis in pure layman terms. :((

After much soul-searching (and successfully withstanding VHF nags of the SHQ, for not being attentive enough ... :wink: ... :wink: ... to daughter's impending exam-related studies :mrgreen: ), I've decided to choose the second route.

Now, the exact nature of the dilemma on the topic at hand is as follows:

Disclaimer/Note: Any contemporary discussion of precipitation hardening/strengthening in Ni-based superalloys for turbofan applications needs to be centred on Ni-Al alloys, as the AlNi3 (gamma-prime) phase is the main precipitation-strengthening phase in HPT-blade-type superalloys. However, to discuss the precipitation-hardening process in layman terms, we first require a simpler phase diagram – which the Ni-Al one obviously is not.
Moreover, due to the complicated nature of the various factors at play in Ni-Al alloys (e.g. the disordered A1 lattice structure vs the ordered L12 precipitate phase), it would be very difficult to maintain a reasonable amount of readability and ease.

So, only for illustration of the precipitation-hardening process, I’m choosing a far simpler phase-diagram – that of Ni-Cr.

But once the concept of precipitation hardening has been established, we will return to the Ni-Al phase diagram and analyse further, at the crystal-lattice level, what really happens in Ni-Al precipitation strengthening.

... the actual post will be here in a day or two. :mrgreen:

BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: Project BRF: India's Kaveri Engine Saga

Postby maitya » 14 Feb 2014 13:53

Sorry for the long gap ... anyway here's the next tranche (on generic Precipitation Hardening) - pls note I'll follow this one up with a specific write-up on Ni-Al Precipitation Hardening aspects which is more relevant to the discussion on hand. This generic write-up creates the necessary preamble for those specific aspects.

[Precipitation Strengthening]
As we have seen in the previous posts, creep resistance depends on slowing dislocation motion within a crystal structure – and one way of doing so is to introduce blockers (like the solid-solution-strengthening particles discussed above) in the form of different metallic particles that are readily soluble in the crystalline structure. Another, special kind of “blocker particle” that goes a long way towards preventing these dislocations is introduced in the form of certain "special" metallic particles or "impurities" under carefully controlled heat-treatment conditions. Broadly, the following four characteristics are required of these “blocker particles” (referred to from now on as precipitated particles) for them to impede the movement of dislocations:

a) They need to be hard (or as non-deformable as possible, to prevent dislocations from “cutting through” them) and discontinuous, i.e. as isolated from each other as possible (though they can “group” with the base particles)
b) Smaller and more numerous (as opposed to bigger but more widely spaced)
c) As spherical in shape as possible (to prevent stress build-up issues)
d) And obviously, the more the merrier (though the ductility of the alloy as a whole will go for a toss if way too many non-deformable particles find their way into the lattice)

Now, taking advantage of the change in solid solubility with temperature, these fine "impurity" particles are produced by a carefully calibrated heat-treatment-over-time process; they impede the movement of dislocations significantly, thus increasing both the yield and the creep-rupture strength of the overall alloy.
This method of heat treatment for increasing the yield and creep-rupture strength of alloys is called precipitation hardening (also called age hardening).

[Yield Strength Anomaly]
This happens due to a phenomenon called the Yield Strength Anomaly – simply put, contrary to the common principle of metal strength decreasing with rising temperature, in many alloys (and thus in superalloys as well) there exists a range of temperatures over which the strength of the alloy increases with increasing temperature. For some alloys (with an ordered lattice structure – more on this a little later), the range can extend up to 50-60% of the absolute melting temperature. So for Ni-based alloys this range would approach roughly 1040 K (i.e. about 760deg C), provided proper care is taken with the temperature tolerances, the time of exposure and the precise composition of the alloying solutes during heat treatment.
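Quick arithmetic check on that "50-60% of the absolute melting temperature" figure – note the fraction applies on the kelvin scale, not deg C:

```python
def ysa_range_k(t_melt_c: float, lo: float = 0.5, hi: float = 0.6):
    """Yield-strength-anomaly range, taken as fractions of the
    ABSOLUTE melting temperature (i.e. in kelvin)."""
    t_melt_k = t_melt_c + 273.15
    return lo * t_melt_k, hi * t_melt_k

# For pure Ni (Tm = 1455 deg C = 1728.15 K):
# ysa_range_k(1455) -> (864.075, 1036.89), i.e. roughly 860-1040 K,
# which is roughly 590-765 deg C.
```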

To understand the reasoning behind this, we need to look carefully at the following binary phase diagrams of Ni-Cr and Ni-Al.
Pls note that in real life, precipitation hardening (and the resultant heat treatment, ageing etc.) happens on a cocktail of 6-7 or more alloying elements – their phase diagrams, though relevant, are way too complex to be discussed here and are thus avoided.

[Phase Diagrams]

First of all let’s study briefly the Ni-Cr Phase diagram above concentrating mostly around the red-dotted line area that I’ve drawn on it. Pls note the following observations:

1) The curved line at the top separates the liquid and solid states of the alloy – so at a temperature of 1600 K this alloy would be in liquid form, i.e. a melt. Do note how, with decreasing concentration of Cr, the melting point decreases (and vice versa), tending towards the melting point of Ni (approx. 1400deg C)

2) Above 80% composition of Ni in the alloy, the crystal structure obviously mimics that of Ni (A1 – FCC structure)

3) However, beyond an 80-20% split between Ni and Cr, any further increase in Cr content shows the gradual introduction of the BCC crystal structure into the lattice. It is now a mixture of both FCC_A1 and BCC_A2 structures.

4) But the interesting bit is that with increasing temperature, even at increasing concentrations of Cr, more (if not all) of the particles are found in the FCC structure. IOW, with increasing temperature the solvent metal (Ni in this case) can absorb much more of the solute metal (Cr in this case), and vice versa.
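One more tool for reading such binary diagrams: inside a two-phase field, the fraction of each phase at a given temperature follows the lever rule – if the overall composition C0 sits between the two phase-boundary compositions Cα and Cβ on the tie-line, then fraction(α) = (Cβ − C0)/(Cβ − Cα). The compositions in the example below are hypothetical round numbers, not values read off the actual Ni-Cr diagram:

```python
def lever_rule(c0: float, c_alpha: float, c_beta: float):
    """Phase fractions in a two-phase field of a binary phase diagram.
    Compositions in consistent units (e.g. wt% Cr); returns the
    tuple (fraction_alpha, fraction_beta)."""
    if not (c_alpha < c0 < c_beta):
        raise ValueError("c0 must lie inside the two-phase field")
    f_alpha = (c_beta - c0) / (c_beta - c_alpha)
    return f_alpha, 1.0 - f_alpha

# Hypothetical tie-line: overall 20 wt% Cr, phase boundaries at 15 and 35:
# lever_rule(20, 15, 35) -> (0.75, 0.25)
```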

[Basic process of Precipitation Hardening]
Disclaimer/Note: As flagged in the previous post, any contemporary discussion of precipitation hardening/strengthening in Ni-based superalloys needs to be centred on Ni-Al alloys (the AlNi3 gamma-prime phase being the main strengthening precipitate in HPT-blade-type superalloys), but the Ni-Al phase diagram is too complex for a layman illustration – so, only for illustrating the precipitation-hardening process, the far simpler Ni-Cr phase diagram is used here.
Do note that Cr in any Ni superalloy is used more from an oxidation-resistance point of view – so pls take this illustration and detailing from that pov only.
But once the concept of precipitation hardening has been established, we will return to the Ni-Al phase diagram and analyse further, at the crystal-lattice level, what really happens in Ni-Al precipitation strengthening.

As mentioned above, the strength and hardness of some metal alloys are enhanced by the formation of extremely small, uniformly dispersed particles of a second phase within the original phase matrix – i.e. the careful introduction of new metal particles (the second phase) into the continuum of the original (base) metal's crystal lattice. This is accomplished by phase transformations induced by appropriate heat treatments over varying periods of time. The process is called precipitation hardening because the small particles of the new phase are termed ‘precipitates’ – the term “age hardening” is also used because the strength develops with time, as the alloy ages.

Let’s consider a particular composition of the Ni-Cr alloy – say roughly an 80-20% Ni-Cr split, shown as the vertical red line in the phase diagram. Obviously, this particular split is deliberately chosen so that a mixture of FCC- and BCC-structured particles is available.

Now, pls note that the Precipitation Strengthening is normally done as a two-step heat-treatment process viz.

1) Solution Heat Treatment: If the above alloy is heated to a temperature that lies within the A1_FCC phase field (indicated as T0 in the diagram – approx. 1000 K), the BCC-phase particles will start dissolving into (or transforming into) the FCC phase. If that elevated temperature is maintained over a period of time, all particles of the 80-20% Ni-Cr composition will eventually be of the FCC type, i.e. more atoms of the solute metal (Cr) are absorbed within the solvent metal (Ni) lattice.
The alloy is then cooled rapidly to a much lower temperature (say T1 in the diagram – for many alloys, room temperature), quickly enough that any re-formation of BCC-phase particles (via diffusion) is prevented. So we now have a non-equilibrium, supersaturated solution wherein many would-be BCC-phase atoms are present in the FCC phase at temperature T1, i.e. atoms of the solute metal (Cr) are trapped as a supersaturated solid solution in the solvent metal (Ni). Pls note that the resultant alloy at this stage is soft and weak.

2) Precipitation Heat Treatment: The resultant alloy is then re-heated to an intermediate temperature (say T2) that remains within the “boundary” of the theoretical two-phase (FCC_A1 and BCC_A2) field. This temperature is carefully chosen so that the diffusion rate is high enough to kick-start the so-far-stalled process. This diffusion allows the formation of BCC-phase particles of the Ni-Cr composition in a finely dispersed state. After the alloy has been held at this elevated temperature T2 for an appropriate amount of “ageing” time, it is cooled down to room temperature (the rate of this cooling is not a factor; do note that for some alloys, ageing continues to occur spontaneously at room temperature over extended periods).
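The two-step schedule above can be written down as data plus a tiny sanity check: solution-treat above the solvus, quench rapidly, then age inside the two-phase field. The T0 ≈ 1000 K figure is the one quoted in the text; the solvus and ageing temperatures below are placeholders for illustration only:

```python
from dataclasses import dataclass

@dataclass
class HeatStep:
    name: str
    temp_k: float
    rapid_cool_after: bool  # cooled fast enough to suppress diffusion?

def valid_precipitation_schedule(steps, solvus_k: float) -> bool:
    """Check the schedule has the shape described in the text:
    1) solution treatment ABOVE the solvus, followed by a rapid quench;
    2) ageing BELOW the solvus (inside the two-phase field)."""
    if len(steps) != 2:
        return False
    solution, ageing = steps
    return (solution.temp_k > solvus_k and solution.rapid_cool_after
            and ageing.temp_k < solvus_k)

# T0 ~ 1000 K solution treat is from the text; the 950 K solvus and
# 800 K ageing temperature are made-up illustrative values.
demo = [HeatStep("solution treat (T0)", 1000.0, True),
        HeatStep("ageing (T2)", 800.0, False)]
# valid_precipitation_schedule(demo, solvus_k=950.0) -> True
```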

[Pitfalls of Precipitation Hardening]
Pls refer to the following schematics to understand the various limiting factors of precipitation hardening.

The precipitation heat-treatment temperature and time (or age) are extremely crucial in determining the level of hardening achieved. Ageing at too low a temperature (T1) means full precipitation will not happen (the particles remain too small to block dislocations) and the desired strength will not be achieved – a phenomenon called under-ageing. But if ageing is carried out at too high a temperature (T4), the precipitated particles will not be fine enough (too large and too widely dispersed to interact with dislocations), again resulting in sub-optimal alloy strength.
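The under-/over-ageing behaviour has a classic textbook picture: small precipitates get sheared by dislocations (resistance growing roughly as the square root of particle radius), while large, widely spaced ones get bypassed by Orowan bowing (resistance falling roughly as 1/radius) – and the dislocation always takes the easier path, so alloy strength peaks where the two curves cross. A toy model with made-up coefficients, just to show the shape of the curve:

```python
def cutting_stress(r: float, k_cut: float = 1.0) -> float:
    """Particle-shearing resistance: grows ~ sqrt(r) (arbitrary units)."""
    return k_cut * r ** 0.5

def orowan_stress(r: float, k_bow: float = 8.0) -> float:
    """Orowan bowing/looping resistance: falls ~ 1/r (arbitrary units)."""
    return k_bow / r

def alloy_strength(r: float) -> float:
    """Dislocations take the weaker (easier) mechanism, so min() wins."""
    return min(cutting_stress(r), orowan_stress(r))

# With these coefficients the curves cross at r = 4 (where sqrt(r) = 8/r):
# strength rises with particle size (under-aged), peaks at the crossover
# (peak-aged), then falls as particles coarsen (over-aged).
```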

The point that needs to be kept in mind at all times is that precipitation hardening is a temporary phenomenon (well, temporary but long enough for many practical applications) and thus all precipitation-hardened alloys are metastable. Attainment of the equilibrium state is going to happen sooner or later, whereupon they will eventually soften – and this attainment of equilibrium happens either by heating to high enough temperatures or by exposure to relatively lower temperatures for long periods of time.

There's effectively a competition for the solute atoms between the metastable transition-particle clusters and the equilibrium particle clusters – and the equilibrium clusters will always win (so the question is how to defer that "winning" as much as possible and maintain the alloy strength for as long as possible).

[Factors impacting precipitation strengthening]

Apart from process factors like heat-treatment temperature and time, the following chemical properties (or shall I say, the atomic structure) of the metals chosen play a crucial role in the degree of strengthening achieved:

1) Degree of Mismatch of Crystal Lattice Size and Crystal Structure: A closer match of crystal-lattice size and crystal structure between the solute and solvent particles helps in several ways.

Coherency (or atomic matching) between the solute and solvent particles (i.e. between the transition phase and the matrix) helps create a local strain field within the matrix. It's this combination of a fine precipitate size and localized strain fields (due to the small lattice misfit) that impedes dislocation movement. With over-ageing (i.e. with time) the precipitate particles gradually grow (and also become more widely dispersed), and this local strain field reduces, becoming more and more favorable to dislocations moving through the particles (or bowing/looping over them).

Another way of looking at this phenomenon is via the lattice distortion achieved by the precipitated particles (refer to the above schematic) – and for this lattice distortion to happen, there needs to be a high degree of coherency (or atomic matching) between the transition phase and the matrix. As long as this lattice distortion is maintained, dislocation movements will require much higher energy to overcome it – of course, this lattice distortion vanishes once equilibrium is reached, making it easier (read: requiring much less energy) for dislocations to pass through.

However, there are exceptions to this - some precipitation-hardened nickel-base superalloys have phases (such as the ordered Ni3(Al,Ti) precipitate phase) that are extremely stable at elevated temperatures. This we will examine in further detail in the next post (a short one) dedicated to the specific case of precipitation hardening of Ni-Al alloys.

2) Precipitate Order: If the precipitate atoms can somehow occupy "preferred" positions (called ordering), the amount of energy required for deformation to pass through increases. Ordered precipitates possess an extra energy, due to these preferred atomic positions (compared to normal disordered or random atomic positions), called the APB (anti-phase boundary) energy, which the deformation force needs to overcome to pass through.

3) Precipitate Particle Shape: The shape of the precipitate particle also plays an important role in conferring the required strength on the alloy. For example, in earlier versions of Ni superalloys, spherical (or spheroidal) precipitate particles were found (due to a 0 to +/- 0.2% mismatch between solute and solvent lattices). Later, with a slightly larger mismatch of +/- 0.5 to 1%, the precipitate particle shape became cubic (or cuboidal). Nowadays, with an even larger mismatch of +/- 1.25%, plate-like precipitate particles are observed.

4) Precipitate Particle Size: In addition to the normal optimum precipitate size requirement (viz. too small, dislocations will slice through; too big, dislocations will bow/loop over), the property demanded by the alloy's end-use (e.g. creep and stress rupture, tensile strength, fatigue resistance etc.) may sometimes dictate the precipitate size to aim for. The normal rule of thumb is: higher ageing temperatures produce coarse gamma-prime precipitates that are desirable for creep and stress rupture applications, while lower ageing temperatures produce finer gamma-prime precipitates desirable for applications requiring tensile strength and fatigue resistance.
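The "too small, dislocations slice through; too big, dislocations bow over" trade-off can be illustrated with the classic Orowan bowing estimate, where the stress needed to bow a dislocation between particles scales inversely with particle spacing. A minimal sketch – the shear modulus and Burgers vector are assumed typical figures for a Ni matrix, not values from this post:

```python
# Orowan bowing stress: tau ~ G * b / L, where L is the spacing between
# precipitate particles. A finer dispersion (smaller L) demands a higher
# stress before dislocations can bow/loop between particles.
# G and b below are assumed illustrative values for a Ni matrix.

G = 76e9        # shear modulus, Pa (assumed)
b = 0.25e-9     # Burgers vector, m (assumed)

def orowan_stress(spacing_nm):
    """Approximate Orowan bowing stress in MPa for a particle spacing in nm."""
    L = spacing_nm * 1e-9
    return G * b / L / 1e6

# Over-aged, widely spaced particles are much easier to bypass:
for spacing in (50, 100, 200, 400):
    print(f"spacing {spacing:4d} nm -> Orowan stress ~{orowan_stress(spacing):5.0f} MPa")
```

This is only an order-of-magnitude picture; the real cutting-vs-bowing crossover also depends on the coherency strain and APB energy discussed above.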

For example, to defend against creep-rupture progression, single-sized precipitate particles are more desirable (achieved in DS and SC cast alloys for turbine blades). This is accomplished by choosing a single, higher ageing temperature that produces coarse gamma-prime precipitates which block creep and stress rupture propagation more effectively (e.g. in turbine blades).

However, to defend against notch-sensitivity (in wrought superalloys, used normally in turbine disks), a two-sized precipitate particle arrangement is more desirable. This is normally achieved by a two-cycle precipitation hardening (called double ageing) at two different temperatures with two correspondingly different ageing times: an initial ageing at a relatively higher temperature to get suitable particle coarsening for a moderate amount of creep and stress rupture resistance, and then a "final" ageing at a lower temperature to achieve the finer precipitate particles needed for tensile strength and fatigue resistance.
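The double-ageing idea above can be sketched as a simple two-step schedule. The helper class, temperatures and hold times here are hypothetical placeholders chosen purely for illustration, not values from this post:

```python
# A double-ageing schedule: a first, hotter cycle coarsens some gamma-prime
# (creep/stress-rupture resistance), then a cooler cycle precipitates a
# finer population (tensile strength, fatigue resistance).
# All numbers below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class AgeingStep:
    temp_c: float   # ageing temperature, deg C
    hours: float    # hold time at that temperature
    purpose: str

double_ageing = [
    AgeingStep(850.0, 24.0, "coarse gamma-prime for creep/stress-rupture"),
    AgeingStep(700.0, 16.0, "fine gamma-prime for tensile strength/fatigue"),
]

for step in double_ageing:
    print(f"{step.temp_c:.0f} deg C for {step.hours:.0f} h -> {step.purpose}")
```

The only structural constraint the post implies is that the first cycle is the hotter one, producing the coarse population before the fine one is precipitated.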
Last edited by maitya on 14 Feb 2014 22:17, edited 2 times in total.

Karan M
BR Mainsite Crew
Posts: 13542
Joined: 19 Mar 2010 00:58

Re: Project BRF: India's Kaveri Engine Saga

Postby Karan M » 14 Feb 2014 18:19

Kaveri engine blades at Defexpo-14.


BR Mainsite Crew
Posts: 434
Joined: 02 Feb 2001 12:31

Re: Project BRF: India's Kaveri Engine Saga

Postby maitya » 18 Mar 2014 22:10

[Intermetallic Nickel Aluminide (Ni3Al) and Precipitation Strengthening in Ni-based Superalloys]
As we have seen in the previous few posts, the degree of superalloy precipitation strengthening is primarily due to a combination of process factors (like heat-treatment temperature and time) and chemical properties like the atomic structure of the metals chosen for the alloy and their orientation in the lattice. While these various factors have been discussed in some detail (with the help of simpler, representative phase diagrams) to bring out a basic understanding, further elaboration of the Ni-Al phases is absolutely essential to complete the picture of superalloy precipitation strengthening. More so since, in all contemporary Ni-based superalloys (like the ones used in Kaveri), the unique intermetallic Ni3Al (and/or Ni3Ti) phase is the primary strengthening factor conferring on these superalloys such fantastic mechanical and thermal properties.

It should also be noted here that, in real life, a multitude of constituent alloying elements interact at the atomic level (sometimes deleteriously) with each other to produce the resultant temperature and mechanical strengthening properties – it is beyond the scope of this post to try and examine all of these interplays – so we will instead stay focussed on the "main" ones like the Ni3Al phases etc.

[Al-Ni Phase Diagram]
First things first, let's focus on the big and main aspect of precipitation strengthening in Ni-based superalloys – namely the intermetallic Ni3Al phase (the gamma-prime phase) in the Al-Ni phase diagram as depicted below.

The schematic above is the same Ni-Al phase diagram from the previous post, but with quite a bit of annotation to help point out the salient points – some of which are as follows:
1) There exists quite a narrow band (71-78% Ni) of Ni-Al composition where-in, over a major part of the solid temperature regime, an intermetallic Ni3Al phase is found.

2) This intermetallic phase, though narrow in composition, exists over a very large temperature range (more than 560-1300 deg C) – the implication is that once the narrow composition range of Al-Ni in a superalloy is met, the resultant precipitation strengthening remains available over a very large range of temperature, extending almost up to the melting point of the superalloy.

3) The gamma matrix phase exhibits a "disordered" FCC structure, i.e. the atoms occupy the lattice sites at random (in the phase diagram, this is the region outside the intermetallic phase area, where the Al% in the composition is even lower). By contrast, a closer inspection of the crystal structure of an intermetallic phase like Ni3Al shows an ordered FCC structure:
i) There are Al atoms at the cube corners and Ni atoms at the centres of the faces.

ii) Each Ni atom has four Al and eight Ni atoms as nearest neighbours, while each Al atom has twelve Ni atoms as its nearest neighbours.

This strong degree of chemical consistency is referred to as "ordering", where-in the Ni and Al atoms have quite distinct positions to occupy relative to each other. Furthermore, these structures exhibit the following characteristics:
a) In each unit cell, a significant degree of directional, covalent bonding exists between the Ni and Al atoms

b) In these crystal structures, Ni–Al bonds are preferred over Ni–Ni or Al–Al bonds.

This "ordering" plays a significant role in the precipitation strengthening of Ni-based superalloys (i.e. the increase of strength with temperature), which is opposite/anomalous to normal metallic behaviour, where strength decreases as temperature increases. The enthalpy of formation, or ordering energy with respect to the disordered FCC phase, is about 3 kJ/mol – and the strengthening arises because dislocations travelling through the gamma matrix phase can't enter the gamma-prime precipitate phase without forming an anti-phase boundary (APB), so a second dislocation is required to restore the order behind the first. The cutting stress required to overcome this APB is of the order of approx. 400 MPa, which is substantial.
The detail of this strengthening dynamics follows a little later in this post.
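The approx. 400 MPa cutting stress quoted above can be sanity-checked with the standard estimate tau ~ gamma_APB / (2b); the APB energy and Burgers vector below are assumed typical values for Ni3Al, not figures from the post:

```python
# Order-of-magnitude check of the APB cutting stress:
# tau ~ gamma_APB / (2 * b), where gamma_APB is the anti-phase boundary
# energy and b the Burgers vector. Both inputs are assumed typical values.

gamma_apb = 0.2   # J/m^2, assumed typical APB energy for Ni3Al
b = 0.25e-9       # m, assumed Burgers vector

tau_mpa = gamma_apb / (2 * b) / 1e6
print(f"APB cutting stress ~{tau_mpa:.0f} MPa")  # ~400 MPa
```

With these assumed inputs the estimate lands right at the ~400 MPa figure quoted in the post, which is reassuring for the back-of-envelope model.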

4) Beyond approx. 1300 deg C, for a given chemical composition, this intermetallic gamma-prime phase field starts narrowing - aka the fraction of gamma-prime decreases as the temperature is increased beyond ~1300 deg C.
This property is used in the solution-precipitation heat treatment pairing (as discussed in the previous post) – where-in at a sufficiently high temperature (the solution treatment) the Gamma-prime particles are dissolved into Gamma matrix particles and then aged (the precipitation treatment) at a lower temperature to produce uniform and fine dispersion of strengthening precipitates.

5) Similarly, this gamma-prime phase extends all the way up to the liquid (aka melting) phase (marked in green), though over an even stricter/narrower range of composition - this has a profound effect on the DS and SC casting processes, which we will take up while discussing them in further detail. It should further be noted that the intermetallic Ni3Al is now known to maintain its ordering up to roughly its melting temperature of about 1375 deg C.

[Ni3Al and Precipitation Hardening of Superalloys]
It's pertinent to note here that the face-centred cubic unit cell of Ni has a lattice parameter of 0.352 nm, an atomic radius of 0.124 nm and a Van der Waals radius of 0.163 nm (while an FCC cell of Al has a lattice parameter of 0.405 nm, an atomic radius of 0.118 nm and a Van der Waals radius of 0.184 nm).
The resultant intermetallic gamma-prime (Ni3Al) lattice parameter is 0.357 nm.

Similar Lattice Parameter Impact: The very similar lattice parameters of the gamma phase (0.352 nm – for native Ni FCC) and the gamma-prime phase (0.357 nm – for Ni3Al FCC) mean the intermetallic gamma-prime precipitates in a cube-cube orientation with the matrix (the gamma phase) – aka the cell edges of their cubic lattice structures are exactly parallel to the corresponding edges of the matrix/gamma phase.
Plus, since their lattice parameters are similar, for small precipitate particle sizes there is coherency (or atomic matching) between solute and solvent particles (i.e. between the gamma-prime and the matrix phases, with the lattice planes carrying straight across the interface). This keeps the interfacial energy low while creating a local strain field within the matrix, which impedes dislocation movement between phases.
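The closeness of the two lattice parameters quoted above can be expressed with the standard lattice-misfit parameter; a quick check using only the numbers from this post:

```python
# Standard lattice misfit between matrix (gamma) and precipitate (gamma-prime):
#   delta = 2 * (a_ppt - a_mat) / (a_ppt + a_mat)
# Lattice parameters are the ones quoted in the post.

a_gamma = 0.352        # nm, native Ni FCC
a_gamma_prime = 0.357  # nm, Ni3Al FCC

delta = 2 * (a_gamma_prime - a_gamma) / (a_gamma_prime + a_gamma)
print(f"lattice misfit delta = {delta * 100:.2f}%")  # ~1.41%
```

Note this ~1.4% figure is for the pure Ni/Ni3Al pair; as discussed later in the post, in engineered superalloys the other alloying additions (particularly the Al-to-Ti ratio) are used to pull the misfit down to small, even slightly negative values.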

Pls note that normally a spherical particle, having about 1.24 times less surface area than a cube of the same volume, is the preferred shape for minimising surface energy. However, with coherent precipitates, the above-mentioned cube-cube orientation keeps the crystallographic planes of the cubic matrix and precipitate continuous, and thus still minimises the interfacial energy.
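The 1.24 figure is easy to verify: for equal volume, the cube-to-sphere surface-area ratio is the volume-independent constant 6 / (36*pi)^(1/3):

```python
# For a given volume V: sphere area = (36*pi)**(1/3) * V**(2/3),
# cube area = 6 * V**(2/3). Their ratio is independent of V.
import math

ratio = 6 / (36 * math.pi) ** (1 / 3)  # cube area / sphere area, equal volume
print(f"cube/sphere surface-area ratio = {ratio:.3f}")  # ~1.241
```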

This is the primary reason why, unlike in other metals, in a precipitation-hardened Ni3Al-based superalloy the tensile strength increases with increasing temperature (up to approx. 650 deg C, beyond which the available thermal energy is normally enough to allow dislocations to shear through the gamma-prime particles etc.).

Moreover, it should also be noted that the intermetallic gamma-prime phase particles are themselves atomically ordered (aka within the gamma-prime lattice itself each Ni atom has Al atoms in specific neighbouring positions etc.) – this again creates a local strain field which prevents dislocations from penetrating through them.
Plus, the Ni3Al gamma-prime phase is quite ductile and thus imparts strength to the matrix without lowering the fracture toughness of the alloy.

This advantage, however, gets squandered with increasing particle size (and the consequent reduction of the local strain field) when over-ageing happens.

Pls refer to the following schematic to understand the yield stress behaviour of superalloys with temperature, and the role that composition, heat treatment and the volume fraction of the intermetallic Ni3Al phase play in it.

Ni3Al-based Precipitation Strengthening and heat treatment: The impact of the ageing heat treatment on the resulting precipitation-strengthening properties is exhibited by the above left chart - where-in,
i) The graph for the Ni-8%Al alloy, which doesn't have enough volume fraction of the gamma-prime phase for precipitation strengthening, exhibits strength gain purely due to solid-solution strengthening.

ii) The Ni-14%Al alloy graphs do have enough volume fraction of the gamma-prime phase for precipitation strengthening, but exhibit quite different degrees of strengthening (dependent upon the fraction, size and distribution of the gamma-prime particles), solely due to the various heat treatments they were subjected to:

a) When quenched (aka very rapidly cooled) from 1000 deg C, the result is a very fine particle size, giving sub-optimal strengthening (but still more than that from pure solid-solutioning)
b) When aged at 850 deg C, the gamma-prime precipitate particles were too coarse (200-250 nm), again resulting in suboptimal low-temperature (up to 650 deg C) tensile strength
c) But when aged at the "right" temperature of 700 deg C, the result is much finer (40-60 nm) gamma-prime precipitate particles providing the desired low-temperature tensile strength.

Also pls note the relatively flat nature of the "right-aged" (aged at 700 deg C) graph, which also demonstrates the precipitation-strengthening anomaly of a Ni3Al precipitation-strengthened superalloy, where-in the alloy "holds on" to its tensile strength up to a certain temperature (of approx. 650 deg C) - by contrast, any normal metal would have its tensile strength decrease with increasing temperature (like the pure Ni tensile strength graph at the bottom of the chart).

Gamma-prime Volume Fraction Impact: Another factor that impacts precipitation strengthening via the Ni3Al gamma-prime phase is the amount of gamma-prime in the alloy, i.e. the higher the volume fraction of gamma-prime, the greater the flow stress. However, this relationship does not hold at lower temperatures, where the flow stress peaks at about 25% gamma-prime volume fraction.

The above schematic (the right chart) brings out this hardening at high-temperature dependence on volume fraction of the Gamma-prime phase.

This phenomenon is due to the fact that, beyond approx. 650 deg C, upon deformation, cross-slip of dislocation segments within the gamma-prime phase happens between planes ({111} to {001} etc.) – these cross-slipped segments resist deformation, as the trailing APB must be overcome. These cross-slipped configurations are called Kear–Wilsdorf locks, and further details can be found in any materials science book.

So, for applications (like turbine disks) where tensile strength properties are important up to a high temperature (though not as high as that experienced by the turbine blades), the precipitation strengthening due to finer Ni3Al gamma-prime phase particles is utilised. The Ni3Al gamma-prime volume fraction becomes a key factor, decided by the operating temperature regime of the turbine disc – aka for an HPT disc application, where the operating temperature would be around 800-1000 deg C, a higher volume fraction of the Ni3Al gamma-prime phase would be preferred (say approx. 60-70% volume fraction – as in, say, Mar-M-200, a variant of which gets used in Kaveri).

But for an LPT disc application, where-in a temperature drop of approx. 300-400 deg C across the HPT stage(s) would mean a disc operating temperature regime of approx. 600-700 deg C, a lower volume fraction of the Ni3Al gamma-prime phase would be preferred (say approx. 20-30% volume fraction – as in, say, Udimet-720Li, which gets used in the Kaveri LPT).

It needs to be noted here that, looking theoretically at the precipitation strengthening factor alone, the tensile strength at lower temperatures of HPT discs would remain lower than that of LPT discs, due to the volume-fraction effect discussed above (refer to the schematic). However, in practice, through processing techniques like Powder Metallurgy etc., these tensile strengths are normally enhanced to a higher level.

[Slight Lattice Parameter Misfit Impact]:
As we have seen above, the lattice parameter of the gamma-prime phase (0.357 nm – for Ni3Al FCC), though very similar, is slightly more than that of the gamma phase (0.352 nm – for native Ni FCC).
This slightly larger value also allows the misfit to be made slightly negative with respect to the matrix gamma phase, by slightly altering the chemical composition (particularly the aluminium-to-titanium ratio in the overall superalloy mix). This negative misfit helps the formation of rafts of the gamma-prime phase (actually layers of it) in a direction normal to the applied stress. These precipitation rafts reduce the creep rate by impeding dislocation climb across them.

The following schematic brings out the creep-rate impact due to precipitation hardening by the Ni3Al phase (along with the influence of gamma-prime grain size).

So, for turbine blade applications, most susceptible to creep-rupture stress, the gamma-prime phase with an appropriate volume fraction and "coarse" gamma-prime grains plays an important strengthening role. However, do note that creep-rupture resistance is more fundamentally influenced by casting processes like Single-Crystal and Directional Solidification than by the precipitation-strengthening composition etc., which we will take up in some detail in subsequent posts.

BRFite -Trainee
Posts: 47
Joined: 01 Sep 2008 08:02

Re: Project BRF: India's Kaveri Engine Saga

Postby kittigadu » 30 Mar 2014 10:05

Maitya: Do you have a cross-section for the Kaveri engine ?

BRF Oldie
Posts: 31040
Joined: 01 Jan 1970 05:30
Location: Pindliyon ka Gooda

Re: Project BRF: India's Kaveri Engine Saga

Postby shiv » 21 Jun 2014 06:12

Very interesting article by one of my favorite out-of-box thinkers in the latest issue of "Vayu" - Prof Prodyut Das about Kaveri.

Need to scan or obtain the article for all to read.

He says
1. India did not have the essential test rigs for engine development and still does not have them. Even Egypt in 1964 had a modified An-12 as a flying test bed.

2. We should stop worrying about blisks/single crystal etc. and look at little details like improving airflow around individual blades etc.

I have not yet re read the article in detail - will post later. Maybe will post scans.

Overall he praises the Kaveri, if it has indeed reached 90% of its thrust, and suggests ways forward

Added later - here is the article on Prof Das' blog
The Kaveri Turbofan Project- an “open source” assessment

If reports that the Kaveri has reached 90% of its full military power are true, it represents a considerable achievement for the engineers concerned. It also indicates no foreign collaboration is required to complete this project. The above numerator is unfortunately tarnished by the denominator of several decades of development with no engine flight-cleared and a realistic date of completion uncertain. Jet engine development presupposes certain facilities as sine qua non: a) test rigs for combustion chamber development, b) test rigs for testing the compressor spools together at rated conditions, c) test rigs for testing the turbine blading for cooling, thermal and mechanical loads simultaneously, and finally d) a flying test bed to test the engine in the air. Item d) is still not available in the country, and there are reasons to believe that items a), b) and c) were not available at the time of taking up the project and may not in fact be satisfactorily available even now. Recall that Egypt, developing the E300 engine under the guidance of Ferdinand Brandner, with much poorer traditions and resources, had a flying test bed, a modified An-12, in 1964.

The lack of these basic test rigs and their exploitation would have had a significant effect on the programme. The present "problems" with the engine – lack of performance, unreliability and overweight – can be traced directly to the lack of the above test rigs, and indicate a lack of top leadership at the front line of problems. It was "disconnected thinking", in 1987, to so confidently say that our engine would be "flat rated": the basic tools needed for the job were nowhere there. The relatively low total running hours (< 2000 hrs for the entire programme, spread over about ten engines) would mean that the infantile "measles and mumps" kind of problems have not yet been exposed. If the engine hours are correct, it was surely premature to have air-tested the engine in 2003, when it quite expectedly failed. "A part of the learning process" is not an adequate explanation for this kind of repeated self-induced "failure". The failure delayed the project and should not have been risked at that point of time. I recall a former Director, in discussing the Kaveri pressure ratio, admitting privately, "Yes. We did overreach ourselves." He was being modest! The fault, dear Brutus, is in our stars! Pratt & Whitney (P&W) was not allowed to do jet engine work because of the War; GE was clearly ahead. Immediately after, in 1945 itself, P&W set up the Willgoos Turbine Laboratory (WTL). Note that they named this critical survival asset after their Chief Engineer Andrew Willgoos, and not after Rentschler, their Founder & Chairman! WTL was fully integrated into P&W's mission to be a prime player rivalling GE, and had the skill and resources of P&W on tap. We set up GTRE, but it was a completely different entity vis-a-vis HAL in terms of aims, service conditions and critical performance parameters – yet GTRE was supposed to depend on HAL. We do things right but don't, or cannot, do the right things! Even given the best of intentions, results would be what they are.

The good news is that an engine that is giving 90% of its cold thrust cannot be all that bad, and the engineers who can achieve that also cannot be bad. What has been lacking has been the leadership over several "generations" of higher management. We will come to this point later. The Kaveri does not want more technology; it needs more care and analysis. Jet engines, though inherently simple, are extremely sensitive to detail, as the following examples will illustrate. "Point one millimetre" ('four thou', if you are that old!) is the general unspecified tolerance in aerospace machinery; it is the average thickness of a human hair. If the gap between the rotating blades and the casing varies by this "point one" millimetre in a Kaveri-sized engine, it means a difference in the turbine tip/casing flow area of about the size of a 20 mm hole. Imagine the differences in flows if you are dealing with pressures of around 20 bar! If the clearance is that amount too little, you will soon get very expensive sounds, blades being shed and possibly an engine fire. The tip clearance is a decider for TBOs. The current technique is to remotely sense the tip clearance and heat or cool the casing locally to keep the clearance constant. No wonder the grudgingly respected Chinese engineers still manage to stir-fry their new engines with some regularity! The same "thickness" (or thinness, if you will!) in the engine casing will vary the weight of the engine by approximately 5-8 kg, and an increase or decrease in engine length of about ten millimetres will affect engine weight by about 4 to 5 kg through casing and shaft weights. Of course, a 0.1 mm variation in blade profile is unthinkable. I cite these figures to show the "gearing" between cause and effect in jet engine development, and the need to go over details, components and results with a fine-tooth comb - and an engineering Sherlock Holmes by your side!
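The tip-clearance arithmetic above can be checked directly: a uniform clearance change of Δt opens an annular leakage area of roughly pi x D x Δt. The casing diameter below is an assumed illustrative value, since the article does not state one:

```python
# Annular leakage area from a uniform tip-clearance change, compared with
# the article's "20 mm hole" figure. D_mm is an assumption, not from the text.
import math

D_mm = 600.0   # assumed turbine casing diameter, mm
dt_mm = 0.1    # clearance change ("point one millimetre")

annulus_area = math.pi * D_mm * dt_mm              # mm^2
equiv_hole_d = 2 * math.sqrt(annulus_area / math.pi)  # round hole of same area, mm

print(f"leakage area ~{annulus_area:.0f} mm^2 "
      f"(equivalent to a ~{equiv_hole_d:.1f} mm diameter hole)")
```

For this assumed 600 mm diameter the equivalent hole works out to ~15.5 mm, the same order as the article's "20 mm hole" figure (which would correspond to a somewhat larger casing diameter).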

However creditable the performance of the "troops", the present situation reflects on the higher direction of the programme. There are two management issues involved. The first was to undertake the project without having the physical resources ready. Everything is always wanted yesterday. It appears the then leaders (assuming they knew clearly what was involved) either wanted to "make someone happy" or wanted the project "at any cost". Honesty about the situation - so disdained by the "clever" - is an essential requirement, and a mark of leadership. In 1962, Lt. Gen. Kaul, by acceding to political pressure, gave us the Himalayan Blunder. Nine years later, Sam Manekshaw, by stubbornly (but charmingly!) refusing to move until he was ready, delivered Bangladesh! The second area of failure of leadership was a failure of knowledge. There was perhaps a lack of a holistic view of what the engine was supposed to do. They apparently wanted an engine "just like the F404" rather than thinking more systemically about an adequate engine which would do the job. By these two fatal lacunae - one physical and the other mental - GTRE fell into "mission impossible" mode.

Rebooting our mindset
Let us look at the above in a bit more detail. Modern Western military engines are, perhaps surprisingly, strongly injected with technologies developed for competing in the civilian markets. It makes sense for the West to use these thoroughly proven technologies in their military programmes - it helps to amortize costs! An opposite corollary was the USSR, where technology development was always led by military requirements, and USSR civil engines were the dregs in terms of sfc and TBO! For a civilian engine a TBO of 4000 hrs is "essential". The plane flies fourteen hours per day; one cannot yank the engine off the pylon every six weeks, as an R29B-style 550-hour TBO would entail. Every gram of fuel saved per hour is of consequence, given the huge number of hours flown per year. This entails engines having compression ratios of 20:1 to 30:1, with current research exploring 70:1. (Want to play catch-up with the technology, anyone?) One could go on, but the drift is that before we follow someone's lead we have to stop and think of our task and the cloth we have for our coat. What are these?

a) Slash the engine's 'to begin with' TBO to around 400-500 hours. Insist the Air Force declare what their attrition rate is for single-engine close support fighters. I know we lost about 30 Hunters out of the 96 active in the six squadrons in the nine years between induction and just before the '65 war. Very few, if any, of these could have approached 1000 hours. It would be interesting to have a histogram of the number of engine hours of all the MiG-21s at the time of their write-off. If this figure is pretty low, as I suspect it to be, there is no need to make a 2000-hour TBO or 4000-hour technical life an immediate target. A 250-hour TBO (Incoming! Incoming! Duck! Duck!) would last a couple of years on a fighter airframe. Reduction of TBO time will significantly reduce the development task without affecting operational efficiency. The 'problem' of low TBO - replacing engines - can be ameliorated by designing for easy installation and removal. In the MiG-15, two men could do it in one hour! Engines are more 'plumbed' nowadays, but that is where the challenge of good engineering comes in! Incidentally, an Indian engine built with Indian materials in Indian factories would be formidably competitive against all comers even with these low TBOs and TTLs.

b) Do we really need 20:1 CRs (compression ratios), given that the engine becomes heavier and more surge-prone as we jack up the CR? Higher CRs mean more stages, and the compressor and combustor casings, being open-ended pressure vessels mostly in heavy alloys, add much to the weight. Remember that a 0.1 mm thicker casing will add 8 kg to the weight! We know the benefits of high compression ratios are subject to diminishing returns. The Orpheus with a CR of 6:1 had an sfc of 1.03, the R-25 had a CR of 12:1 and an sfc of 0.9, and an engine with a 20:1 CR will have an sfc of around 0.8. This high-compression-ratio-led improvement in sfc does not pay in our typical low-duration sorties. For an IAF standard fighter sortie, the weight of engine plus fuel required (for the same level of technology in other areas) disfavours the high-compression engine. Also, because the compressor passage areas are fixed, the resistance to compressor flows at part throttle (where the wretched engine will be spending most of its life anyway!) and the proneness to surging will cause problems. Finally, remember that high CRs are in themselves only a partial contributor to the sfc figures - burner, combustor and turbine blade technology being the others.
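The diminishing returns being argued here fall straight out of the three data points the article quotes (Orpheus 6:1 / sfc 1.03, R-25 12:1 / sfc 0.9, a 20:1 engine / sfc ~0.8):

```python
# sfc improvement per unit of compression ratio, using only the figures
# quoted in the article. The gain per unit CR shrinks as CR rises.
points = [(6, 1.03), (12, 0.90), (20, 0.80)]   # (CR, sfc)

gains = []
for (cr1, s1), (cr2, s2) in zip(points, points[1:]):
    gain = (s1 - s2) / (cr2 - cr1)
    gains.append(gain)
    print(f"CR {cr1}:1 -> {cr2}:1 : sfc improvement {gain:.4f} per unit CR")
```

Going from 6:1 to 12:1 buys roughly 0.022 sfc per unit CR, while going from 12:1 to 20:1 buys only about 0.013 — each extra stage of compression pays back less.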

c) Are we worrying too much about smoke and NOx? Western “standards” are again derived from already existing and already proven and available low risk “Civilian” technology which we do not have. A short combustor means a lighter engine because the shaft and casing becomes shorter. Shorter combustors will require focused research on getting the spray pattern “tighter” in the spread of droplet size. How much work has been done in this area before we set our targets?

d) Western aircraft design philosophy believes that VG intakes don’t make sense below M1.3. Our designers follow the same track. This, I believe, is a “frozen” thought from the ‘60s and the days of electromechanical sensors and actuators. Given developments in sensor technology and computer controls we should look at new variable geometry intake configurations to maximize pressure recovery. Even if we can save the equivalent of one or two stages on the compressor it would help in reducing the length of compressor, ergo a lighter engine.

e) Also to be examined is the total thrust/fuel-flow requirement profile, to optimize the engine's weight and fuel consumption in relation to the task. A typical LCA-type engine will have the following profile: A/B thrust approx. 2.5-3 minutes, full military 6 minutes, 60% thrust 20 minutes, 45% thrust 25 minutes, and flight idle about 5 minutes. The figures are illustrative, but the idea is that we must reduce the total fuel burn per sortie rather than optimize for a rarely used "best" figure. The intake, the engine and the afterburner together have to be seen as a system which will give optimum performance at 0.6-0.8M at low level, with all other conditions being seen as "special" cases for the system.
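The total-fuel-burn-per-sortie idea can be put into numbers with the illustrative profile above. Only the segment structure comes from the article; the absolute thrust and sfc figures in this sketch are assumptions for the sake of the arithmetic:

```python
# Fuel burned in a segment = sfc * thrust * time, summed over the sortie.
# Durations and thrust fractions follow the article's illustrative profile;
# the absolute thrust and sfc values below are assumptions, not from the text.
# Units: thrust in kgf, sfc in kg/(kgf*h), matching the article's sfc style.

dry_kgf = 5200.0   # assumed full-military thrust
ab_kgf = 8200.0    # assumed afterburning thrust

segments = [
    # (segment, thrust in kgf, minutes, assumed sfc)
    ("A/B",         ab_kgf,          3.0, 2.00),
    ("full mil",    dry_kgf,         6.0, 0.85),
    ("60% thrust",  0.60 * dry_kgf, 20.0, 0.80),
    ("45% thrust",  0.45 * dry_kgf, 25.0, 0.80),
    ("flight idle", 0.05 * dry_kgf,  5.0, 1.50),
]

total = 0.0
for name, thrust, minutes, sfc in segments:
    burn = sfc * thrust * minutes / 60.0   # kg in this segment
    total += burn
    print(f"{name:11s}: ~{burn:5.0f} kg")
print(f"total fuel per sortie: ~{total:.0f} kg")
```

Even with these rough assumed numbers, the short afterburner segment burns a disproportionate share of the total, which is exactly the system-level trade-off the article is asking designers to optimize.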

f) A consequent question to the point made above: given the relatively small duration of operation at max installed thrust, how much of the thrust should come from the engine and how much from the A/B? The Tyumanskii/Gavrilov R-25 of the MiG-21bis is an example of alternative thinking. The dry thrust is 59 kN, with A/B it is 69 kN, but with a "boosted" A/B it gives 97 kN (from Russian sources!), which thrust-wise would be ample even for the LCA! The use of the boosted A/B reduced engine life at the rate of one hour per three minutes, but it works! Anyway, as said before, a "totally Indian engine" will be cheaper.

There are several more such issues, but the point I am trying to make is that we have to see the task as it is, not as an engine "just like" something else, as I suspect had been done. Let us move from mere Information to Knowledge and, hopefully, from Knowledge to Wisdom! GTRE hamstrung itself by trying a "drop fit" replacement for the F404. The saner approach would have been to have a dialogue with ADA, so that ADA would be prepared to "rebore" (not, please, literally, as one irate reader seemed to think!) the LCA airframe to accept a slightly different engine. We must therefore come to a state of mind where we read the book and then throw it away to chart our own course. So what needs to be done?

If you have ten hours to chop a tree…
Spend nine sharpening your axe! Build up and "sophisticate" our test rigs so that the key problems can be solved in detail. For example, the test rig for the turbine blade should not only be able to handle a mass flow of around 5 kg/sec at 1400 deg C for a cascade of four or five blades, but should also be able to simulate the creep loads on the blade whilst a separate air source supplies cooling air through the internal passages. Similarly, for the compressor it is necessary to have rigs powerful enough to test the two spools together, irrespective of what may be the practice in other countries. A short combustion chamber will need research on droplet uniformity, spray pattern, burner types and configurations. Turbulence and uniformity of temperature at turbine entry are other areas to study. The test rigs help to break down the problem before synthesizing the solution. These test rigs are the axes for the problem, and in future we must emphasize test rigs and their roles in any project. Normally the evolution, design, fabrication and operation of productive test rigs will require the same quality of ingenuity and good engineering as the engine itself.

The obvious thing to do-don’t!
Perhaps there is a need to review the jet engine programme as a "national" programme rather than a DRDO baby. No single organization can do the job alone. In England, Bristol Engines started jet development from scratch but let Lucas focus on the critical fuel systems and combustion. Teamwork has to be enforced by getting GTRE back to what it was really set up to do, and HAL has to be made to pick up "GTRE's baby" and bring it up to some state of civil behaviour. One option would be to transfer a team of HAL's best designers and fitters from Koraput and the Engine Division to lead the Kaveri programme. Unfortunately, whilst such action is administratively possible, it won't work in peacetime. Internal priorities would change; the organizations concerned would become "creative". We would see tribal warfare the Pathans would relish! As things stand, GTRE must find a way out of the difficulties it has created for itself!

What ails thee Knight?
Over the decades our betters have replaced, in our engineering colleges, "practice-based" engineering with "science-based" engineering, even at the undergraduate level! Consequently GTRE, like other scientific research establishments in India, has unquestioningly adopted the rather large assumption that possession of an engineering degree confers on the holder the abilities of an engineer. The natural consequence is that the higher the degree, the greater the "qualification" of the person to take engineering decisions; never mind that one of the most esteemed and successful engineering leaders in the country, who has unfailingly delivered, Mr. E. Sreedharan of Pamban Bridge, Delhi Metro et al. (the list is long!), is a "mere" B.E. The reality is that engineering is a practitioner's art, and the "qualification", whatever its degree, is merely a licence to enter the field. Possibly, as in education, possession of qualifications has outweighed other parameters in selecting "leaders". The result is a lack of engineering leaders who enjoy being "at the front". I could cite several examples (looking back, quite amusing!) of the effect of the lack of senior engineering leadership at the front line; those will have to wait. However, I will give an "unrelated" example. Rommel won his battles, often with inferior forces, because he had much more direct, real-time knowledge of the tactical situation and judged it personally with his great experience and technical skill (apparently he was an IC-engine "nut") rather than relying on what some inexperienced Feldwebel thought of the situation. This undistorted, experienced assessment of realities came from being right at the front while his opponents were at their HQs, well back from the action. How many "top" scientists work side by side with the fitters?
The administrative problem is that passionate engineers often tend to be the "enfants terribles" of the organization and are often ACR'd (quite validly, depending on your priorities!) as "not quite mature" or "good but simple-minded"! The net result is that GTRE probably has excellent administrators, and they are also needed, but it does not have excellent practical engineers who can calmly think things through and yet have the authority to get things done.

There be hope yet…
Despite the clouds above, the situation is ripe for rapid rectification, which should enable us to have, without foreign collaboration, a flight-cleared engine within a predictable and short time scale. Foreign collaboration, if available, may not hurt, but I believe the demand for collaboration here is a bureaucratic "failsafe" decision; no one can be blamed. It is the lack of "the right stuff", people who will work on the engine rather than eat their dinner, that explains why we are where we are at present. Instead of commercial collaboration, what we can do is bring retired engine designers over as teachers or guides. The Chinese not only regularly had Hooker over as an honoured guest; they also had Ferdinand Brandner over as a professor in their top university. I don't think Brandner simply taught the prescribed course! The other reason for rejecting foreign collaboration for the Kaveri is the nature of the present need. The answer to the Kaveri's performance problems cannot be yet more technology (there is no magic in technology) but more care and thought, and listening to what the engine is trying to tell us; yes, it talks! Assuming the basic design (barring, apparently, the A/B) was sound, what is needed is a hundred small improvements: improving the surface finish of the compressor casing bore or the blades, cleaning up flows near the blade roots, stressing the components down to closer margins, tightening process technology, and so on, rather than introducing the "blisks" or "shrouded blading" or SCBs which everyone seems to talk about. We put in certain technology; it was put in to do a job. Why is it not doing it? It is here that GTRE is, by its charter, subtly handicapped. Being an R&D set-up, it does not have those seasoned, practised people whose hands can "read" the engine even with their eyes closed. An R&D organization anywhere on the globe will not have the skills common in a production unit.

Cutting your coat
We need to:

i) Enter into a dialogue with the customer about TBO, engine change procedures, TTL et al.
ii) Back off from trying to build something “same as the GE F XYZ”. It is not necessary or even the best solution. The Airframe boys should be ready to rebore their fuselage. Everyone does it all the time.
iii) Flog the engines on the test beds even if they are developing no more thrust than a kerosene stove. If 550 hours TBO is the technical target, one would expect 5500 hours on a batch of ten engines anyway. That way at least the infant-mortality mechanical problems are exposed and can be corrected.
iv) Prioritize the acquisition of more than one flying test bed. Did you know Harry Folland's last design was a large test bed for the 2000 hp class of Bristol engines that were supposed to be coming up in the 1940s? A large, simple multi-engine aircraft, say an enlarged Canberra using the AL-31, would be a lovely project for "people building".
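The fleet-hours arithmetic of item (iii) is simple enough to sketch. The 8 hours of running per day is an assumed utilisation rate, not a figure from the text:

```python
def fleet_test_hours(tbo_target_hr, n_engines):
    """Bench hours if each engine in the batch demonstrates one full TBO."""
    return tbo_target_hr * n_engines

def calendar_days(total_hours, run_hr_per_day=8.0):
    """Elapsed campaign length at an assumed daily running rate."""
    return total_hours / run_hr_per_day

total = fleet_test_hours(550, 10)   # 5500 hr, as in item (iii)
days = calendar_days(total)         # ~690 days of bench running at 8 hr/day
```

Nearly two years of continuous bench running on ten engines, which is exactly why the engines must be flogged early, thrust or no thrust.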

If we were to do it again
In future the task has to be bifurcated, with GTRE contributing experimental data and HAL's engine plant doing all the nitty-gritty mechanical detail design, which HAL is arguably far and away better placed to do. Let me illustrate with one example: the Kaveri accessories-drive gearbox. HAL Helicopter Division has years of experience designing and making lightweight gearboxes for helicopters. Possibly out of "unease with HAL", ADA gave the contract to CVRDE, a sister organization but one with no aerospace background and no direct access to the technology. My bet is that HAL Helicopter Division would have delivered a better gearbox in a shorter time, simply because HAL's supply chains of know-how, information, machinery, process technology and human resources were shorter than CVRDE's. With every licence-manufacture agreement comes a wealth of information: materials, processes, heat treatments, machining methods, testing methods and parameters, even how and where to mark the part number and how to store the part. Over the years, at HAL this "know-how" has been subconsciously processed into "know-why". To CVRDE it would be new territory. The difference may or may not have been much, but "look after the days and the years will look after themselves". This is why the RAE and its cousins at Ames or Zhukovsky do not design engines and aircraft.

What are the areas on which DRDO/GTRE is particularly well equipped to focus on and what will be necessary for us to develop for a future Indian Engine programme?

1) Carbon fibre fan casings: TETs, engine efficiencies and thrust are in symbiosis. Given modern TETs, a pure jet is no longer efficient and some degree of bypass is inescapable. The fan shroud, operating at relatively low pressures and temperatures, is an ideal case for carbon composites. DRDO's appropriate unit should develop expertise in fabricating and proving fan shrouds of approximately 900 mm diameter, capable of handling pressures of 2-5 bar.
2) Short Length combustors: Excellence in combustion is a key to fuel efficiencies and light weights. GTRE must focus on a target of the shortest combustor length. Dual spray nozzles optimized for cruise and max thrust as used in modern civilian engines may be explored if found imperative.
3) Compressor aerofoils: The R11 achieved a 9:1 compression ratio using just six stages, with consequent savings in weight. Could this be "the starting block" for a new development programme aimed at high pressure rise per stage with stable operation?
4) Carbon fibre fans capable of sustaining bird hits.
5) Turbine cooling technology: GTRE must further improve its capability to simulate actual working conditions faced by turbine blades.
6) Production technology for precision cast “ready to use” turbine blades.
7) Expansible thermal coatings to minimize “heat losses” through compressor casings.
8) Technology for “milling” combustor surfaces to very close limits.
9) Fan gearing systems. The future engines will all be geared so that the fan drive turbine can run at its happiest speed. This will give us useful freedom in fan design.
10) Blisks. The centrifugal compressor, carved from an aluminium "cheese", was an early form of blisk. If HAL still has the Goblin compressor's process sheets, these could be the starting point for our "blisk" programme. Why not give HAL the contract?
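Two of the items above invite a quick back-of-envelope check. For item (3), the R11's per-stage pressure ratio follows directly from the overall ratio; for item (1), a thin-wall hoop-stress estimate sizes the casing. The 5 mm wall thickness is an assumed value for illustration:

```python
def per_stage_pressure_ratio(overall_pr, n_stages):
    """Mean stage pressure ratio assuming an equal split across stages."""
    return overall_pr ** (1.0 / n_stages)

r11_stage_pr = per_stage_pressure_ratio(9.0, 6)   # ~1.44 per stage

def hoop_stress_mpa(p_bar, dia_mm, wall_mm):
    """Thin-wall hoop stress sigma = p * r / t, in MPa."""
    return (p_bar * 0.1) * (dia_mm / 2.0) / wall_mm

# The 900 mm casing of item (1) at the 5 bar upper bound, assumed 5 mm wall:
sigma = hoop_stress_mpa(5.0, 900.0, 5.0)          # 45 MPa, modest for a carbon composite
```

A mean stage ratio of about 1.44 was aggressive for the R11's era, which is exactly why it is worth studying; and the modest hoop stress is what makes the fan casing such a natural first composite part.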
It is tempting to suggest that the actual bench testing should be done by a different and independent group. Honda used to test all their engines at a different and independent test site. This is merely good Industrial practice and should be worth replicating here.
The flying test bed is of course an imperative. "Outsourcing" this function is simply not on. Apart from the problem of logistics, there is also the subtle question of the security of the engine itself when abroad. Countries adapt or build their own special aircraft to act as flying test beds. It is a pipedream, but if a few airworthy C-119G airframes were available today one could toy with the idea of an interim test bed for the Kaveri! The old thing was configurationally ideal for a test bed. Of course an "enlarged" Canberra (ref. "The Haft of the Spear", Vayu) would be another option. These would be simple aeroplanes capable of being designed, built and maintained by simple people, and would need a simple budget!
With such a list of activities GTRE would be busy and happy. I am reminded that TsAGI "discovered" that the tailed delta was the best layout for the supersonic combat role, and such was the quality and reliability of its findings that both the Mikoyan and Sukhoi OKBs were not too proud to rely on TsAGI data for the MiG-21 and Su-9 planforms. Perhaps that proud tradition of high-quality fundamental research continues to this day; the similarity of the aerodynamic layouts of the Su-27 and the MiG-29 is no coincidence.

Give unto Caesar….
We must give unto HAL that the logical house for the development of actual engines is HAL Engine Division. The reason is that they are organized, experienced, and their supply chains are shorter. What then will they do? For my money they should develop three "core" engines using not tomorrow's technology, not today's technology, but yesterday's technology. By yesterday's technology I mean technology that has been in production at HAL Bangalore or Koraput for at least the last five years and to which our people are thoroughly exposed. The three cores would be of 10 kN, 25 kN and 60 kN sizes. They should all be single-shaft turbojets, and the stress will be on timeliness, reliability and technology security above all other aims. It is a sedulous myth that advanced features "teach". If advanced features cause such delay that the proposer(s) can retire without delivering, then "advanced features" are de facto an accessory to a swindle. Maximum stress will be put on using Midhani materials. The Orpheus, the work of a master, has a few interesting features which could be replicated. The shaft is a thin-walled, large-diameter tube; it easily permits the insertion of a second spool's shaft, as was done in the case of the Pegasus, which gave three times the thrust even in its earliest version (despite the VTOL configuration!). We can expect more. The second Orpheus feature I find desirable is its limited number of stages (7+1) on a short shaft, which allows just two bearings, avoiding the third bearing and jointed shaft with their attendant proneness to whirling vibrations and, who knows, blade shedding. In fact, the starting point for the 25 kN core could well be the Orpheus, since large quantities of partly used engines must be with ED Sulur (?) following the retirement of the Kiran!
The purpose of these core engines is that, over time, they will form a family of fan and "leaky" engines for a variety of military and civil applications ranging from 10 kN to 250 kN. They will incorporate the certificated advanced technology that GTRE will no doubt develop; in fact, GTRE's contribution will be essential to the success of the programme. A side effect of developing these small cores is that any of the Embraers could be rigged up as a three-engine flying test bed, either DC-10 or Lockheed TriStar style, or with the engine in place of the AEW pack and the tail changed to a twin-fin arrangement: a good project with "bite" for our young engineers, in collaboration with Embraer.
In a lighter vein, GTRE should quietly examine its press statements carefully before clutching in the "tongue". Talk of a marine Kaveri is allowable, but to talk of a Kaveri-powered locomotive is to betray "ivory tower" disconnection. Not only would the engine choke in the Indian dust, but the power would be so enormous that the train length would exceed the loop-line (siding!) length used by the Railways. Marine and power-generation versions are completely different animals, using different materials and operating in different ambient conditions. These derivatives will in no way help the aircraft engine programme, and recall Northcote Parkinson's story about a big government-funded project to make a hyper rocket fuel that failed miserably; said the chief of the project at a press conference, "I am afraid we have failed to produce a useful rocket fuel, but fortunately we find it is an excellent paint remover!" The UCAV Kaveri idea is much better and on the right track.

Nil desperandum!
The Kaveri is in no way worse off than the LCA programme. What is needed, as with the LCA, is not more technology but more care and attention to detail. That will transform both projects, if not into the stupor mundi (wonder of the world) products so tiresomely claimed, at least into serviceable and affordable equipment. I take this opportunity to thank Shri Ashok Baweja (Chairman, HAL, 2004-2009) for suggesting during a casual conversation that I do a piece on the Kaveri. This piece had its genesis in his suggestion and is by way of thanks for the same.

Posts: 267
Joined: 24 Oct 2004 07:17
Location: Brisbane, Oz

Re: Project BRF: India's Kaveri Engine Saga

Postby Rien » 31 Jul 2014 17:45

Titanium for Kaveri

I've been reading some articles about the use of titanium to help cut the weight of jet engines. The mass of the Kaveri is a weighty issue, and the primary reason it doesn't meet the IAF requirements.

Titanium offers a way to cut the weight of an engine and substitute for nickel alloys. Bharat has strong expertise in titanium and its alloys, while there is no domestic source of nickel, nor local expertise in creating nickel alloys. The Kaveri already consists of 20% titanium alloys.

Increasing that fraction to 33%, matching the amount used in other engines, would make the Kaveri project a success. The expertise in working with titanium from sponge form has been developed by ISRO and Midhani, so the skills are available locally; ISRO's launch vehicles and satellites already depend on titanium alloys.
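A crude mass model gives a feel for what the 20% to 33% shift might buy. The densities and the 1000 kg engine mass are assumed illustrative values, and the model naively assumes swapped parts keep the same volume:

```python
RHO_NI = 8.2   # g/cm3, typical nickel superalloy (assumed value)
RHO_TI = 4.5   # g/cm3, typical titanium alloy (assumed value)

def mass_after_swap(engine_mass_kg, mass_fraction_swapped):
    """Engine mass after remaking a mass fraction of nickel-alloy parts in
    titanium at unchanged part volume (a deliberately crude model)."""
    swapped = engine_mass_kg * mass_fraction_swapped
    return engine_mass_kg - swapped + swapped * (RHO_TI / RHO_NI)

# Shifting 13% of a notional 1000 kg engine (20% -> 33% titanium by mass):
new_mass = mass_after_swap(1000.0, 0.13)   # ~941 kg, i.e. roughly 59 kg saved
```

Even this crude model shows the saving is a few percent of engine mass, worthwhile but not transformative on its own, which is why the material choice has to go hand in hand with design changes.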

Real World Experience with Titanium alloys in turbines

Both GE and Snecma have built production engines with extensive use of titanium.

GE ... se-207148/

The GEnx's breakthrough lies in its use of an intermetallic compound called titanium aluminide (TiAl) in the LPT blades. "After 20-30 years of research it is a key breakthrough to use this material in this way," says GE Aircraft Engines materials and process engineering general manager Robert Schafrik. TiAl is being used in the sixth and seventh stages of the LPT, after proving its robustness over more than 1,500 cycles in a CF6. "For us to step up and bet the engine on this technology is a big deal," says Schafrik, who adds: "If it has only 50% of the weight of nickel alloys you've got to believe this is here to stay."

SNECMA ... ml?lang=en

Titanium Alloys for Turbines

Titanium aluminide (TiAl), an alloy of titanium and aluminum, is a new-generation material with outstanding qualities. Standing up to very high temperatures (750°C), it will cut the weight of a blade in half compared with the nickel-based alloys traditionally used in low-pressure turbines. As part of the new LEAP engine, this alloy will be used for the first time in the world on a single-aisle commercial jet. It will contribute to the excellent performance of this new engine, which offers 15% lower fuel consumption than the best engines now in service. ... inide.html

Blades made of titanium aluminide offer enormous technical potential. Showing high resistance to high temperature, these components can withstand temperatures of up to 850°C, a value 250°C greater than currently achievable with other materials. A further plus is that these blades, with a density of 4 g/cm³, provide a weight saving of around 10% over currently available titanium alloy blades. At the same time, the new titanium aluminide blades are only half as heavy as comparable blades made of special steel alloys. The bottom line is that the new titanium aluminide blades enable savings in weight and fuel consumption, thus achieving major reductions in CO2 emissions. ... ing/47894/

TiAl has roughly 50% of the density of current nickel alloys, but it is rarely used in engines because it is very difficult and expensive to cast. Avio, however, has shown that blades of different configurations and sizes can be manufactured in TiAl by using EBM.

Manufacturing method

EBM is a type of additive manufacturing for metal parts. It is often classified as a rapid manufacturing method. The technology manufactures parts by melting metal powder layer by layer with an electron beam in a high vacuum. Unlike some metal sintering techniques, the parts are fully dense, void-free, and extremely strong.

This solid freeform fabrication method produces fully dense metal parts directly from metal powder with characteristics of
the target material. The EBM machine reads data from a 3D CAD model and lays down successive layers of powdered material.

These layers are melted together utilizing a computer controlled electron beam. In this way it builds up the parts. The
process takes place under vacuum, which makes it suited to manufacture parts in reactive materials with a high affinity
for oxygen, e.g. titanium.
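As a feel for the layer-by-layer process described above, the layer count and build time scale directly with part height. The layer thickness and per-layer cycle time below are assumed typical values, not figures from the articles:

```python
import math

def ebm_layer_count(part_height_mm, layer_thickness_um=50.0):
    """Powder layers needed to build a part of the given height.
    50 um is an assumed typical EBM layer thickness."""
    return math.ceil(part_height_mm * 1000.0 / layer_thickness_um)

def build_time_hr(n_layers, s_per_layer=30.0):
    """Rough build time at an assumed recoat-plus-melt cycle per layer."""
    return n_layers * s_per_layer / 3600.0

layers = ebm_layer_count(80.0)    # a notional 80 mm LPT blade: 1600 layers
hours = build_time_hr(layers)     # ~13 hr under these assumptions
```

Build times of this order, per batch of blades, are why EBM suits hard-to-cast materials like TiAl rather than high-volume parts that cast easily.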

Unfortunately, EBM is not a technology available locally, so the machines to do it will have to be imported or developed.

Example of Titanium blade ... -4003.aspx

Titanium alloys and ISRO ... 8-3#page-1

In conclusion, titanium alloys are an exciting development. Until now GTRE has been working on technology that is already obsolete, with Snecma and GE already moving to titanium aluminide to replace nickel alloys. So even if GTRE is 100% successful with its development effort, it will have created something that is already obsolete.

"More imports!" the IAF will cry. The only way to head them off is to develop a titanium engine instead, and the expertise and skills to do it are available locally. Titanium is preferable to nickel not merely for better performance, but also because more local expertise is available.
