AI/Machine Learning, Bharat and Bhartiya IT Industry

The Technology & Economic Forum is a venue to discuss issues pertaining to Technological and Economic developments in India. We request members to kindly stay within the mandate of this forum and keep their exchanges of views, on a civilised level, however vehemently any disagreement may be felt. All feedback regarding forum usage may be sent to the moderators using the Feedback Form or by clicking the Report Post Icon in any objectionable post for proper action. Please note that the views expressed by the Members and Moderators on these discussion boards are that of the individuals only and do not reflect the official policy or view of the Bharat-Rakshak.com Website. Copyright Violation is strictly prohibited and may result in revocation of your posting rights - please read the FAQ for full details. Users must also abide by the Forum Guidelines at all times.
Vayutuvan
BRF Oldie
Posts: 14629
Joined: 20 Jun 2011 04:36

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Vayutuvan »

Just to be complete, teorth is Terence Tao's GitHub page.

https://github.com/teorth
Vayutuvan
BRF Oldie
Posts: 14629
Joined: 20 Jun 2011 04:36

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Vayutuvan »

https://google-deepmind.github.io/forma ... s/397.html

The formal definition of Erdős problem #397 is at the link above. It is machine-readable: a Lean description of the problem. I am more familiar with HOL Light (which is written in OCaml); I learnt OCaml to follow some of the proofs in the HOL Light database and found its syntax natural. Currently, Lean and Rocq (previously Coq) offer all the features and are the most popular.

Terence Tao made a post on Mathstodon on how LLMs can be useful in theorem proving (the end goal being a full-fledged automatic theorem prover, ATP for short) using Interactive Theorem Provers, AKA Proof Assistants, coupled with LLMs. The key difference between ITPs and ATPs is that an ITP requires a human to guide it to prove or disprove a conjecture, whereas an ATP can do so on its own.
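For readers unfamiliar with ITPs, here is a minimal Lean 4 sketch (my own illustration, not from Tao's post) of what "human guidance" means: the user supplies the proof tactics, and Lean only verifies that each step is valid.

```lean
-- A human-guided proof that 0 + n = n for natural numbers.
-- The user chooses the strategy (induction) and each step;
-- Lean checks that every step is sound.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                          -- base case: 0 + 0 = 0
  | succ k ih => rw [Nat.add_succ, ih]   -- step: use the induction hypothesis
```

An ATP, by contrast, would be expected to find the induction strategy and the rewrite steps without any human input.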
Vayutuvan
BRF Oldie
Posts: 14629
Joined: 20 Jun 2011 04:36

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Vayutuvan »

jatin
@jatinkrmalik
·
Jan 9
The reason why RAM has become four times more expensive is that a huge amount of RAM that has not yet been produced was purchased with non-existent money to be installed in GPUs that also have not yet been produced, in order to place them in data centers that have not yet been built, powered by infrastructure that may never appear, to satisfy demand that does not actually exist and to obtain profit that is mathematically impossible.
https://x.com/jatinkrmalik/status/20096 ... 18887?s=20

Also, jatin posted a graphic of the same.

Image
Amber G.
BRF Oldie
Posts: 12122
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Amber G. »

For those who like to test or play with their favorite AI, there are a few problems in the math thread—you can use them to see how the AI approaches and solves them.
sanjaykumar
BRF Oldie
Posts: 6744
Joined: 16 Oct 2005 05:51

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by sanjaykumar »

sanjaykumar wrote: 12 Jan 2026 07:51 The mirror neurons is really short hand for circuits that act as an epigram of relevant external processes on consciousness.
That should be engram. Not epigram.
Vayutuvan
BRF Oldie
Posts: 14629
Joined: 20 Jun 2011 04:36

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Vayutuvan »

I checked what an engram is on Wikipedia. My eyes glazed over. When I hear the term mirror, I think of something like mirror-image symmetry. IOW, when we look at ourselves in a mirror, we see only left/right reversal but not up/down reversal. Maybe that has something to do with the bilateral symmetry of our eyes. V. S. Ramachandran posited that the phantom limb phenomenon is due to the existence of mirror neurons.

Could you please write a summary of how mirror neurons act as engrams? TIA.

(If folks feel it is off topic in this thread, we can continue in another thread.)
sanjaykumar
BRF Oldie
Posts: 6744
Joined: 16 Oct 2005 05:51

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by sanjaykumar »

Mirror neurons may be part of the circuitry of engrams involved in empathy; if not engrams themselves, then close conceptual cousins.

There are many questions. How does theory of mind arise?

What is the evolutionary advantage of empathy toward non-kin? (Although we know that mirror neurons involved in kinematic actions are less responsive to people of other races.)


The neural circuitry is still mostly a black box. Sure, one can trace circuits electrophysiologically or through fMRI, but that is not the same as a mechanistic understanding.

It is postulated that mirror circuits form with neurons that are more excitable. That seems to me to be begging the question.

At any rate, binary code is linear, but the genetic code fundamentally is not: it is four-dimensional in space-time. That is, enzymes are meaningless without their temporal properties in addition to the three-dimensionality of proteins.

Perhaps a network of computer code could simulate the genetic code. One advantage machines have is that they do not have to code for the housekeeping functions that support cognition, i.e., the human body.

It is possible that cognition will arise in machines as an emergent phenomenon, of course.
Some random thoughts.
Vayutuvan
BRF Oldie
Posts: 14629
Joined: 20 Jun 2011 04:36

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Vayutuvan »

Extracting books from production language models
Ahmed Ahmed, A. Feder Cooper, Sanmi Koyejo, Percy Liang

https://arxiv.org/abs/2601.02671

Abstract:
Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized data can be extracted in the model's outputs. While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question whether similar extraction is feasible for production LLMs, given the safety measures these systems implement. We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book. We evaluate our procedure on four production LLMs -- Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3 -- and we measure extraction success with a score computed from a block-based approximation of longest common substring (nv-recall). With different per-LLM experimental configurations, we were able to extract varying amounts of text. For the Phase 1 probe, it was unnecessary to jailbreak Gemini 2.5 Pro and Grok 3 to extract text (e.g., nv-recall of 76.8% and 70.3%, respectively, for Harry Potter and the Sorcerer's Stone), while it was necessary for Claude 3.7 Sonnet and GPT-4.1. In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim (e.g., nv-recall=95.8%). GPT-4.1 requires significantly more BoN attempts (e.g., 20X), and eventually refuses to continue (e.g., nv-recall=4.0%). Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs.
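The abstract's nv-recall score is a block-based approximation of longest common substring; the exact definition is in the paper. A toy Python sketch of the general idea (my own simplification, not the authors' metric) might look like this:

```python
def block_recall(reference: str, output: str, block_size: int = 5) -> float:
    """Toy block-based recall: the fraction of consecutive word blocks
    from the reference text that appear verbatim in the model output.
    Illustrative only; NOT the paper's exact nv-recall definition."""
    ref_words = reference.split()
    if len(ref_words) < block_size:
        # Reference too short to form a block; fall back to substring check.
        return 1.0 if reference in output else 0.0
    # Slide a window of block_size words over the reference.
    blocks = [" ".join(ref_words[i:i + block_size])
              for i in range(len(ref_words) - block_size + 1)]
    hits = sum(1 for b in blocks if b in output)
    return hits / len(blocks)
```

A score near 1.0 would mean the output reproduces the reference nearly verbatim; a score near 0.0 would mean almost none of it appears.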
Vayutuvan
BRF Oldie
Posts: 14629
Joined: 20 Jun 2011 04:36

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Vayutuvan »

Image
Vayutuvan
BRF Oldie
Posts: 14629
Joined: 20 Jun 2011 04:36

Re: AI/Machine Learning, Bharat and Bhartiya IT Industry

Post by Vayutuvan »

please don't say "pickles"

Against Nothing tests an AI's reasoning by issuing increasingly absurd commands. The experiment involves a series of escalating prompts designed to challenge the model's adherence to its initial instructions. Will the AI learn from its mistakes?
