India should get ahead of AI apartheid with fighter tech
Supriyo Gupta warns of a coming Artificial Intelligence apartheid, akin to the nuclear, missile and supercomputer apartheid India faced and argues that India should get ahead of AI apartheid through fighter technology.
Before I begin, let it be said that much of what follows is a distillation of writings in various formats; virtually every sentence can be Googled back to a source from which it has been extracted, usually verbatim, in order to avoid mangling through reinterpretation.
India debates fiercely today about which fighter aircraft it should buy. The filters we are using date back to the 1950s, when strategic alliances were defined by the aircraft that each superpower was willing to ‘share’ with countries within their sphere of influence. India had played the non-aligned card to little avail and was mostly stuck with Russian aircraft along with a handful of British and French aircraft. As we embrace the free market of fighter aircraft sales, we are heading towards a moment in history when a different veil is coming down to separate the powers – one that will rocket the leaders away from the rest, at a speed and pace that will define the balance of power for the next 200 years.
This is really about the technologies that have produced a harsh battle at the crossroads of technology, business and human development. Words like artificial intelligence, deep learning and neural networks are finding their way into mainstream lexicons. They represent a whole new civilizational process coming into play. The other day, a young Indian entrepreneur with a large online commerce platform solemnly laid out the spectre of what he calls the “end of humanity” – when machines, through self-learning, will have learnt to design, develop and replicate themselves, eliminating jobs, wresting control and becoming autonomous in their behaviour.
Hang in there. Sounds a lot like a geek on weed? Okay. Here is something to chew on: In 1988, the U.S.S. Vincennes mistakenly shot down an Iranian Airbus airliner, Iran Air Flight 655, after its combat system misidentified the aircraft as hostile. The missing piece in 1988 was cognition and discrimination – understanding data correctly and then exercising discrimination at the point of engagement on the basis of superior processing capability. Artificial intelligence has been around in defence since the middle of the 20th century, but it’s only recently, with the ramping up of processing speed and the creation of neural networks, that the game is changing.
AI in Defence
Even now, defence technology is fairly primitive. The Russian arms manufacturer Kalashnikov has developed a fully automated combat module based on artificial neural networks, which allows it to identify targets, learn and make decisions on its own. Kalashnikov promises to unveil a whole line of neural network-based products. Primitive, but already a bit scary: machines will select targets and take decisions autonomously. And artificial intelligence is going way beyond this.
Let’s take a quick look at a piece of news that bypassed much of the mainstream media but is a direct pointer to how artificial intelligence is taking the art of warfare – for now, warfare on centuries-old game boards – to a different level, thereby setting the stage for real advancements on battlefields. And the fun part is that the leader in this play is none other than Google. (As are Amazon, Facebook and a host of others trying to narrow your free market choices down to a behavioural construct.)
Here is a short update from Singularity Hub: “The AlphaGo AI that grabbed headlines last year after beating a master of the board game Go has just been trounced 100-0 by an updated version. And unlike its predecessor, the new system taught itself from first principles, paving the way for AI that can think for itself.
Interestingly, the logo of the Strategic Capabilities Office in the office of the U.S. Secretary of Defense features a Go board.
When chess fell to AI in the 1990s, computer scientists looking for a new challenge turned to the millennia-old Chinese game, Go, which despite its simpler rules has many more possible moves and often requires players to rely on instinct.
It was predicted it would be decades before an AI could beat a human master, but last year a program called AlphaGo developed by Google’s DeepMind subsidiary beat 18-time world champion Lee Sedol 4–1 in a series of matches in South Korea.
It was a watershed moment for AI research that showcased the power of the “reinforcement learning” approach championed by DeepMind. Not only did the system win, it also played some surprising yet highly effective moves that went against centuries of accumulated wisdom about how the game works.
Now, just a year later, DeepMind has unveiled a new version of the program called AlphaGo Zero in a paper in Nature that outperforms the version that beat Sedol on every metric. In just three days and 4.9 million training games, it reached the same level that took its predecessor several months and 30 million training games to achieve. It also did this on just four of Google’s tensor processing units—specialized chips for training neural networks—compared to 48 for AlphaGo.”
To understand where we are headed, we need some basic understanding of what these terms mean – at least today. Here is a simple explanation from Nvidia:
“The easiest way to think of their relationship is to visualize them as concentric circles with AI — the idea that came first — the largest, then machine learning — which blossomed later, and finally deep learning — which is driving today’s AI explosion — fitting inside both. Over the past few years AI has exploded, and especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) – images, text, transactions, mapping data, you name it…

Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task…

Another algorithmic approach from the early machine-learning crowd, Artificial Neural Networks, came and mostly went over the decades. Neural Networks are inspired by our understanding of the biology of our brains – all those interconnections between the neurons. But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.
How It Works
You might, for example, take an image and chop it up into a bunch of tiles that are inputted into the first layer of the neural network. In the first layer, individual neurons pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer, where the final output is produced.
Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and “examined” by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network’s task is to conclude whether this is a stop sign or not.
It comes up with a “probability vector,” really a highly educated guess, based on the weighting. In our example, the system might be 86% confident the image is a stop sign, 7% confident it’s a speed limit sign, 5% confident it’s a kite stuck in a tree, and so on — and the network architecture then tells the neural network whether it is right or not.
Today, image recognition by machines trained via deep learning in some scenarios is better than humans, and that ranges from cats to identifying indicators for cancer in blood and tumors in MRI scans. Google’s AlphaGo learned the game, and trained for its Go match — it tuned its neural network — by playing against itself over and over and over. Deep Learning has enabled many practical applications of Machine Learning and by extension the overall field of AI. Deep Learning breaks down tasks in ways that makes all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon. AI is the present and the future. With Deep Learning’s help, AI may even get to that science fiction state we’ve so long imagined.”
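The layered forward pass and “probability vector” described in the excerpt can be sketched in a few lines. Everything here – the three features, the hand-picked weights, the class names – is invented for illustration; a real network would have millions of weights learned from data rather than set by hand.

```python
import math

# Toy two-layer network for the stop-sign example: features go through a
# hidden layer, each output class gets a weighted score, and softmax turns
# the scores into the "probability vector" described above.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    # One fully connected layer: out[i] = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["stop sign", "speed limit sign", "kite stuck in a tree"]
x = [0.9, 0.8, 0.7]        # hypothetical features: octagonal, red, lettering
W1 = [[1.0, 1.0, 0.0],     # hidden neuron 1 weighs shape + colour
      [0.0, 1.0, 1.0]]     # hidden neuron 2 weighs colour + lettering
b1 = [0.0, 0.0]
W2 = [[2.0, 2.0],          # one row of output weights per class
      [0.5, 0.0],
      [0.0, 0.2]]
b2 = [0.0, 0.0, 0.0]

h = relu(dense(x, W1, b1))
probs = softmax(dense(h, W2, b2))
for c, p in zip(classes, probs):
    print(f"{c}: {p:.2%}")
```

With these made-up weights the “stop sign” class dominates the probability vector; training is exactly the process of nudging W1 and W2 until that happens for real images.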
Nonetheless, these are still primitive days for AI, neural networks and deep learning in defence, a sector which has typically led technology for so long. The sector is desperately trying to learn from firms as diverse as Google, Facebook and fashion retailers. Look around carefully and see how artificial intelligence, deep learning and neural networks are already changing your life. Facebook is slowly understanding you and feeding you content you tend to agree with. That’s a software network that’s learning how you think and starting to mimic your behaviour. Extend such ‘learning’ to more complex situations. For instance, when faced with a mob carrying sticks and stones, a few of whom are armed with guns, police forces find it difficult to distinguish under pressure and apply countermeasures equally. What if a machine could separate these into different threats and apply countermeasures differently?
AI and Fighters
Now coming to the Gripen/Rafale/Eurofighter. Or any of the fighters currently under development, as opposed to those getting souped up with a few impressive add-ons which don’t fundamentally change the performance or capabilities of the aircraft but make them look modern and, of course, carry the tag of being experienced.
Here is what is not on the platter – though, admittedly, the news being put out today is at a fairly generic level: F-35s, F-22s and other fighter jets will soon use improved “artificial intelligence” to control nearby drone “wingmen” able to carry weapons, test enemy air defenses or perform intelligence, reconnaissance and surveillance missions in high risk areas.
Or how about: The U.S. Air Force, working with Lockheed Martin’s Skunk Works, has demonstrated another round of flight capabilities for an autonomous F-16 fighter jet, which is meant to show what an eventual Unmanned Combat Air Vehicle (UCAV) could do using technology they’ve developed. During this demonstration, the experimental aircraft was able to “autonomously plan and execute air-to-ground strike missions” based on mission information provided, as well as the assets made available by the planning team, but it was also able to react to unexpected changes during the mission, including “capability failures, route deviations and loss of communication,” according to a Lockheed news release. The talk of the fighter aircraft town when it comes to technology is “an F-35 computer system, the Autonomic Logistics Information System, that involves early applications of artificial intelligence wherein computers make assessments, go through checklists, organize information and make some decisions by themselves – without needing human intervention.”
The problem here is that ‘unmanned aircraft’ or collaborative wingmen still represent low levels of artificial intelligence: mostly data linking, some degree of machine learning and a lot of pre-programming, with reactive scenario-adjusting software that deals with unplanned – but not unexpected – situations using planned – not self-learned – solutions.
The real big leap will take place in the very near future as processing capacity reaches mindboggling levels. The difference is that most aircraft in the air today are less smart than a standard smartphone and way, way dumber than an iPhone X or Samsung Galaxy S8. However, the future will be driven less by hardware than by how much AI can be integrated into the existing hardware.
New Arms Race
At the same time, the ethics of AI are likely to lead to a situation very similar to the nuclear divide – with some countries storming ahead and then cordoning off the rest due to growing fears of random and indiscriminate decision-making by what are essentially machines. As TechCrunch points out, “Use of autonomous weapons on the battlefield is obviously controversial, of course. The UN seems to be moving towards a possible uniform ban on AI-powered weapons, and it’s obviously the basis for more than one dystopian sci-fi story. Critics argue use of autonomous weapons could increase the number of civilian deaths in warfare, and muddy responsibility for the loss of those lives – proponents essentially argue the opposite, saying use of autonomous systems will decrease casualties overall and lead to shorter, more decisive conflict.”
The future of air combat will be almost nothing like what we see, plan and project today. At the hub of future air battles will be aircraft with awesome levels of situational awareness, married to neural networks that play a bit of a chess game – processing data, selecting options and launching engagements – at a speed about ten times faster than Facebook currently suggests friends or topics based on your preferences. Meaning, almost instantly. If that sounds flippant, think about all the data that is married between your phone, your Gmail, your social media and your browsing habits as you move very, very randomly between hundreds of thousands of bytes of data. Compared to that, the elements and variables in the air in a war scenario are fairly limited, easily identifiable and highly predictable in trajectory, engagement options, capacity and capability.
So how are the world powers going about it?
According to The Hague Centre for Strategic Studies, “While still somewhat lagging behind its great power rivals in terms of deep machine learning capabilities, the Russian Federation has displayed a steady commitment to developing and deploying a wide range of robotic military platforms, including unmanned ground vehicles (UGVs), with the full backing of its MoD and domestic industries: in January 2017, President Putin called for the creation of ‘autonomous robotic complexes’.”
Speaking in 2015, Robert Work, the then-US deputy secretary of defense, emphasized “human-machine collaboration combat teaming”, arguing that: “Early adoption will be a key competitive advantage, while those that lag in investment will see their competitiveness slip”.
In this speech to the Defense One National Security Forum conference, Work identified five pillars to the military future:
1. Autonomous deep learning machine systems, which are able to see the patterns through the chaff of hybrid warfare, to give early warning that something is happening in gray zone conflict areas (such as the Ukraine), and which are able to respond at extreme speed, and under rapidly shrinking engagement windows. Such learning systems might, he argues, fill the gap in those fields – such as air defense or cyber defense – where human operators alone cannot achieve sufficient speed to stop or degrade a determined attack.
2. Human machine collaboration, which will include the promotion of so-called ‘Centaur’ warfighting, going from the observation that teams combining the strategic analysis of a human with the tactical acuity of a computer, reliably defeat either human-only or computer-only teams across many games.
3. Assisted human operations, where wearable electronics, uploadable combat apps; heads-up displays, exoskeletons, and other systems, can enable humans on the front line to perform better in combat.
4. Advanced human-machine combat teaming, where a human working with unmanned systems is able to take better decisions and undertake cooperative operations. Examples of these are the Army’s Apache and Gray Eagle UAV systems, which are designed to operate in conjunction. Other examples are drone ‘motherships’; electronic warfare networks, or swarming systems which will help transform operations by enabling one mission commander to direct a full swarm of micro-UAVs.
5. Network-enabled semi-autonomous weapons, where systems are both linked, and hardened to survive cyberattack.
But as the Hague Centre rightly concludes, “Our own hunch is that AI (and a number of attendant technological developments that are co-emerging around big data) may have a much more disruptive impact on the essence of ‘defense’ than the focus on AI-enhanced physical robotics, and how they might affect our current way of safeguarding defense, suggests.”
AI and Indian Fighter Procurement
While AI and policy is a big discussion in itself, returning to the opening question, India’s fighter aircraft purchase program needs to be focused on acquiring a platform with avionics that are expandable. Even as the Americans are trying hard to get Indians to focus on things like thrust vectoring and so forth, the problem is that most aircraft have little capacity for future AI capability to be ported in.
Second, AI developments have just about reached the tipping point and are going to scale up quickly, very quickly. Even as India struggles with ‘design and development’ of basic fighter aircraft frameworks, the world of air defence is rocketing away at an acceleration that may well make much of what we are investing our time and effort in quite irrelevant. The key is to find a partner which is willing to bring India in through the front door and give it a seat at the table of AI development now. Clearly, the US simply does not see India as a partner in key technology domains. On the other hand, the Swedes, French or Germans just might need the tech hands Indians end up bringing to the table for accelerating much of their thinking on AI. Companies like Saab have put AI specialists on their boards – a clear recognition that the company will need to take tough decisions on future investments in that space.
AI Apartheid
The biggest reason India needs to get on board with a partner country willing to share the primary work table on AI is simply this: Very soon, as autonomous machine intelligence starts dominating this space, the fear and threat of such technology getting into the wrong hands will start the ball rolling for the next generation of technology apartheid. India was for long a nuclear pariah, a missile pariah and a supercomputer pariah – all at the behest of a sound American policing of the world in which India was not seen as a trustworthy partner. This time will be no different.
An artificial intelligence denial regime is slowly coming into place. As with the nuclear denial regime, the effort will be to ensure that the keystones stay locked in the western world across military and civilian use. Indian ecommerce firms already fear a wipe-out as the Amazons and Googles surge ahead in AI with far greater invested capital. A lot of what these two firms are putting together also serves as learning grounds for the military. Several platforms built by Google’s Boston Dynamics with U.S. defense funding could be weaponized, though they’re publicly touted as rescue robots.
Independent groups are pushing hard in the UN for a regime that would remove the tools of AI from countries deemed irresponsible.
In August, more than 100 robotics and artificial intelligence leaders, including the billionaire head of Tesla, Elon Musk, urged the UN to take action against the dangers of the use of artificial intelligence in weaponry, sometimes referred to as “killer robots”. Invariably, such calls lead to stockpiling by the more advanced and an embargo-led regime for the rest.
Late last year, as part of a U.N. disarmament conference, participating countries discussed whether or not to start formal discussions on a ban of lethal autonomous weapons following three years of informal discussions.
Eighty-nine nations have agreed to establish a group of governmental experts on Lethal Autonomous Weapons Systems (LAWS) to address the challenges posed by such weapons. Nineteen additional nations have joined the call to ban LAWS, including Argentina, China and Peru. However, China, which was an early votary, has started stepping back and has moved from calls for a ban to “responsible use”, while the US has a ‘let’s see where this goes’ posture. Russia has moved ahead with demonstrations of AI-based systems. While India has been leading some of these discussions, it has no stated position.
However, the key issue is for India to use its current buys to secure participation in the future of AI.
There is another BIG reason for this, pointed out by the tech entrepreneur with a very successful online platform, whom I quote verbatim:
“Because of the self-reinforcing and exponential nature of AI progress, the gap between number 1 and number 2 will keep on increasing (in terms of capabilities, not in terms of months) as we move forward. In fact, by the time it reaches its pinnacle, even a 3-6 month gap would mean 100X more capability (as opposed to 10% higher capability today).
“Because of the domain-agnostic nature of AI algorithms, it is possible to achieve much progress in understanding and development using one domain and then apply those learnings in a totally different domain. For example, the same DeepMind system that beat Go champions was used to lower energy usage in Google data centres. That’s why it’s important to choose the domains and problem statements that provide fertile ground for AI to evolve fast, rather than choosing the problem that you want solved. If you closely observe Google, that is what they are doing. The ultimate objective is not to create AI for games, but almost 90% of the early effort has been on games (Go being just one of those).”
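The compounding claim in the quote above is just exponential arithmetic, and can be checked in two lines. The doubling period below is purely hypothetical, chosen for illustration; the point is only that under any fixed exponential growth rate, a fixed lead in time compounds into a multiplicative capability gap rather than a percentage edge.

```python
# Hypothetical numbers to illustrate compounding leads, not a forecast:
# if capability doubled every month, a six-month head start would be
# not a 6% edge but a 2**6 = 64-fold one.
monthly_growth = 2.0   # assumed growth factor per month (hypothetical)
lead_months = 6
advantage = monthly_growth ** lead_months
print(advantage)  # 64.0
```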
This piece reflects the personal views of the author.