BR Maths Corner-1

Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Yes, that is correct. Bucket sort is O(n) only if the range is known, which is true in this case. One O(n) pass gives you the min and max, and then you can do the bucket sort.
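
In code, something like this (a minimal Python sketch of the idea, assuming integer entries; the function name and the counting-bucket realization are mine):

Code: Select all

def are_permutations(a, b):
    """O(n) check that b is a permutation of a, using one bucket
    (counter) per value in the range found by an O(n) min/max pass."""
    if len(a) != len(b):
        return False
    if not a:
        return True
    lo, hi = min(a), max(a)            # the O(n) pass for the range
    if min(b) < lo or max(b) > hi:
        return False                   # b strays outside a's range
    counts = [0] * (hi - lo + 1)
    for x in a:
        counts[x - lo] += 1
    for x in b:
        counts[x - lo] -= 1
    return all(c == 0 for c in counts)

print(are_permutations([3, 1, 4, 1, 5], [1, 5, 4, 3, 1]))  # True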
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

Nandu wrote:I came across this recently. It is a programming problem with a mathematical solution.

Given two finite sequences of numbers a, and b, which are of the same length n, we need to determine if they are the same, up to permutations.

The usual programmer's way is to sort both arrays and then compare them sequentially for equality, but for small values of n (say n <= 5), the following might be faster.

If sum(a_i^j) = sum(b_i^j) for every j = 1 to n, then b is a permutation of a.

Question: Can you prove this?

This is correct and fairly easy to prove..
Basically, if you reduce the system, the resulting equation is of order n! and has n! solutions ==> these are counting all the permutations...

For example, if one has
x+y+z+w = 10
x^2+y^2+z^2+w ^2 = 30
x^3+y^3+z^3+w^3 = 100
x^4+y^4+z^4+w^4 = 354

(easy to see that the (symmetric) solution (1,2,3,4) works)

If one reduces it, the resulting equation in x is of degree 1*2*3*4 = 24, so it will have 24 solutions (fundamental theorem of algebra)..
Since (1,2,3,4) has 4! = 24 permutations, these must be all (and the only) solutions...

QED
(Actually, any n *symmetric* equations - the first of degree 1, the second of degree 2, the third of degree 3, etc. -
will work too.)
For example, for n=4, the elementary symmetric polynomials
x+y+z+w
xy+xz+xw+yz+yw+zw
xyz+xyw+xzw+yzw
xyzw
will work too.
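
If anyone wants to play with the power sums numerically, here is a quick plain-Python sketch (the function name is mine):

Code: Select all

def power_sums_match(a, b):
    """Nandu's test: compare sum(x^j) for j = 1..n over both lists."""
    n = len(a)
    if n != len(b):
        return False
    return all(sum(x**j for x in a) == sum(x**j for x in b)
               for j in range(1, n + 1))

# the (1,2,3,4) example above: 10, 30, 100, 354
print([sum(x**j for x in (1, 2, 3, 4)) for j in range(1, 5)])
print(power_sums_match([1, 2, 3, 4], [4, 2, 1, 3]))  # True
print(power_sums_match([1, 2, 3, 4], [1, 2, 3, 5]))  # False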
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

The arrays a and b may not contain permutations of [1...n] but some arbitrary integers. In that case an O(n) algorithm can be obtained using bucket sort.
kasthuri
BRFite
Posts: 411
Joined: 02 Jan 2009 08:17
Location: Mount Doom in Mordor

Re: BR Maths Corner-1

Post by kasthuri »

shaardula wrote: It's just a measure of uncertainty, no? Furthermore, uncertainty itself is the difference between model and "reality"? Sort of a language to describe a condition. As a grammar for that language, the theory itself is true in that it is consistent and complete. No?
After deliberating on your thought for some time, I think you are referring to "deterministic" models in the above sentence. I am assuming you are trying to say that probabilistic models "fill the gap" between deterministic models and physical realities. While you distinguish between deterministic and probabilistic models, I don't look at them separately. After all, deterministic models use functions to model phenomena, while probabilistic models use generalized functions (or distributions) for modeling. The difference is just between using differential equations and stochastic differential equations, and nothing more. Differential equations give us field equations, while stochastic equations give us probabilistic waves. The former comes "closer" to filling the gap than the latter, since deterministic models give little satisfaction of grasping the truth.
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

matrimc wrote:The arrays a and b may not contain permutations of [1...n] but some arbitrary integers. In that case an O(n) algorithm can be obtained using bucket sort.
If I understand you correctly ... no one is talking about permutations of [1,...,n]; we are talking about an arbitrary set of n numbers (a_1, a_2, ..., a_n). (I gave a particular example with n=4.)

The point is, if the two lists satisfy the n equations a_1+a_2+...+a_n = b_1+b_2+...+b_n = k_1, sum(a_i^2) = sum(b_i^2) = k_2, etc. ...

Then
1. both (a_1, a_2, ..., a_n) and (b_1, b_2, ..., b_n) are sets of roots of the above n equations (with x_1, x_2, ..., x_n as the unknowns).
2. There are exactly n! sets of roots of the n equations.
3. All such sets (n! in number) are nothing but permutations of (a_1, a_2, ..., a_n).
4. Hence (b_1, b_2, ..., b_n) is just a permutation of (a_1, a_2, ..., a_n).

(One can easily see the logic by considering a small n, say 2:
if given x+y = 5
and x^2+y^2 = 13,
then the values of x, y are just a permutation of (2, 3).)
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Now I understand. I was thrown off by your example. Very nice solution.

That said, it is still worse than the naive O(n^2) algorithm for small n, where one has to be cognizant of the hidden constants in sort-based algorithms. I know you are not proposing it; just a general observation.

Added later: I got another idea - I think the above fact can be used to derive a fast probabilistic algorithm for checking whether two arrays are permutations of each other. It might be faster than sort-based algorithms in certain situations - like when the arrays are reals representing measurement data from some physical experiment.
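
One way such a probabilistic check could go (my sketch of the idea for integer data, not necessarily what is meant above; for real measurement data one would work in floating point and compare within a tolerance): view each list as the multiset of roots of a polynomial and compare the two polynomials at random points modulo a large prime.

Code: Select all

import random

P = (1 << 61) - 1  # a large Mersenne prime, used as the working modulus

def probably_permutations(a, b, trials=5):
    """Compare prod(r - a_i) and prod(r - b_i) mod P at random points r.
    Lists that are permutations of each other always agree; lists that
    are not agree with probability at most n/P per trial, since the
    difference polynomial of degree n has at most n roots mod P."""
    if len(a) != len(b):
        return False
    for _ in range(trials):
        r = random.randrange(P)
        pa = pb = 1
        for x, y in zip(a, b):
            pa = pa * (r - x) % P
            pb = pb * (r - y) % P
        if pa != pb:
            return False      # definitely not permutations
    return True               # permutations, with high probability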
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

FWIW - (some random comments)

It is not easy to define what "more efficient" means ... does one mean average time or the worst-case scenario?

.. Are the two lists random ... or "almost" (nearly) permutations of each other?

Do the elements of the lists come from a small (or large) set of choices? (E.g., each element may be just a day of the week - a number between 1 and 7 - or a social security number (10^9 different choices).)

etc... etc...
***
Using "sort and then compare" is easy to implement, if for no other reason than that sorting algorithms are already written and one can just use a command like "if sorted(a) = sorted(b)". It is fairy efficient for worse case scenario ... of the order of (n*log(n)).

... But even if the two lists are totally random, one still has to sort both lists, and this may waste time unnecessarily in some practical cases.

Whereas, if you use the method proposed by Nandu and you are lucky, the first test fails and you are done (the sum of all the numbers in the first list is not the same as in the other list, so you do not have to test the other equations).

So in practice the second method may be more efficient ... because, in case of a failure, you need not do the remaining comparisons.

In all, IMO, the theory here is much more complicated than just considering the worst-case scenario.

****

Other algorithm(s) I have seen are ...

(for the case where the range of the numbers is small ... for example, if one knows that all numbers are between 1 and 100, or each element is a letter of an alphabet (26 possibilities), or a byte/char (256 possibilities), etc.)

For example, suppose you want to check whether the digits in a large number (or list) x (say of 100,000,000 digits) are some permutation of the digits in a number y:

(test, for each digit n from 0 to 9, whether the count of n in list x equals the count of n in list y)

(This is much faster than sorting two lists of 100,000,000 items each.)

(And you stop the first time a count differs.)
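
In code the digit-counting test is only a few lines (a sketch, assuming the numbers are given as decimal strings):

Code: Select all

def same_digit_multiset(x, y):
    """Compare digit counts of two huge numbers given as strings.
    One O(n) counting pass per string; no sorting of the items."""
    if len(x) != len(y):
        return False
    counts = [0] * 10
    for d in x:
        counts[int(d)] += 1
    for d in y:
        i = int(d)
        counts[i] -= 1
        if counts[i] < 0:      # stop the first time a count differs
            return False
    return True

print(same_digit_multiset("112233", "321321"))  # True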

****

Another way, easy to write, but o(n^2) in the worst case, may not be that bad in practice..

start with the first number in list a
check if this number is in list b
if failure ==> exit with the conclusion that the lists do not match...
else (as soon as you find it in list b) remove this number from both lists..
Do this recursively..
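
A direct Python transcription of those steps (a sketch; written as a loop, which amounts to the same recursion):

Code: Select all

def match_by_removal(a, b):
    """O(n^2)-in-the-worst-case matching: repeatedly take the next
    number of a, find it in (a copy of) b, and remove it."""
    if len(a) != len(b):
        return False
    b = list(b)                # work on a copy so b is preserved
    for x in a:                # "start with first number in list a ..."
        try:
            b.remove(x)        # linear search + removal from list b
        except ValueError:
            return False       # not found ==> lists do not match
    return True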
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Usually one can speed up algorithms if there is knowledge of the problem structure. Average-case analysis is all well and good, but sometimes the ill-conditioning is inherent in the problem. The question is whether the ill-conditioning arises from wrong modeling or is inherent in the problem being solved. For comparison-based sorting, the lower bound is proved to be Ω(n log n). Now, if one knows that the inputs are mostly sorted, some sorting algorithms with bad worst cases might work better. So in this case it would be a bad idea to just use whatever sort your system provides; you would be better off writing your own sort, taking into account what you know about the data.
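
The textbook example of this is insertion sort (a sketch): its worst case is O(n^2), but on an input with only k inversions it runs in O(n + k), so on mostly-sorted data it can beat a general O(n log n) library sort.

Code: Select all

def insertion_sort(a):
    """Worst case O(n^2), but O(n + k) for k inversions,
    i.e. nearly linear on almost-sorted input."""
    a = list(a)
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

print(insertion_sort([1, 2, 4, 3, 5, 6]))  # one inversion: fast path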
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Amber G. wrote: Another way, easy to write, but o(n^2) in the worst case, may not be that bad in practice..

start with the first number in list a
check if this number is in list b
if failure ==> exit with the conclusion that the lists do not match...
else (as soon as you find it in list b) remove this number from both lists..
Do this recursively..
That's exactly what I meant when I mentioned a naive O(n^2) algorithm (and that should be a big O, not a small o). If you have to preserve the contents of vector b, then you need a mark vector; and if the numbers are real, then you have to compare them within some error epsilon. This naive algorithm is good in practice only for small n, or when you know beforehand that the inputs are permutations of each other with low probability. Algebraically speaking, the problem is: does there exist a linear transformation that takes the first R^n vector into the second R^n vector, and if it does, find it. When both vectors are integer, the answer is to find a permutation matrix (which can be represented as a renaming vector).
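
Concretely, the mark-vector version for real data might look like this (my sketch; the tolerance eps is problem-dependent):

Code: Select all

def match_with_marks(a, b, eps=1e-9):
    """Naive O(n^2) matching for reals: keep b intact and use a mark
    vector; two entries 'match' if they agree to within eps."""
    if len(a) != len(b):
        return False
    used = [False] * len(b)             # mark vector instead of deletion
    for x in a:
        for j, y in enumerate(b):
            if not used[j] and abs(x - y) <= eps:
                used[j] = True          # consume this entry of b
                break
        else:
            return False                # no unmarked match for x
    return True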
kasthuri
BRFite
Posts: 411
Joined: 02 Jan 2009 08:17
Location: Mount Doom in Mordor

Re: BR Maths Corner-1

Post by kasthuri »

^^^ Quite true... knowledge of the data is important in dealing with computational problems. matrimcji, what do you mean by ill-conditioning in modeling? By definition, ill-conditioning (in the strict sense) means the perturbation in the solution gets magnified due to the noise in the input data, and therefore by default it is assumed to be inherent in the problem being solved.
For example, let's take image processing, where the objective is to recover the image as it actually was captured. But due to Poisson noise we can never retrieve the actual image through deconvolution, however precise a kernel we use. This is because of the inherent ill-conditioning of the problem, where the Poisson noise gets magnified during deconvolution (or inversion in Fourier space). I don't see how the ill-conditioning arises due to modeling.
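
(A small numpy illustration of that magnification - a sketch where a 1-D Gaussian kernel and additive noise stand in for the real imaging chain: wherever the kernel's Fourier transform is nearly zero, the division blows the noise up.)

Code: Select all

import numpy as np

n = 256
x = np.zeros(n)
x[100:130] = 1.0                               # the "true" signal
k = np.exp(-0.5 * (np.arange(n) - n / 2)**2 / 4.0)
k /= k.sum()                                   # Gaussian blur kernel
K = np.fft.fft(np.fft.ifftshift(k))            # its Fourier transform

b = np.real(np.fft.ifft(K * np.fft.fft(x)))    # blurred image
b_noisy = b + 1e-6 * np.random.randn(n)        # tiny measurement noise

naive = np.real(np.fft.ifft(np.fft.fft(b_noisy) / K))
print(np.max(np.abs(naive - x)))  # large vs. the signal amplitude of 1:
                                  # noise / K explodes where K ~ 0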
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

What I have in mind is numerical ill-conditioning. The examples I was thinking of are from engineering modeling. For example, suppose you have a resistive network where two nodes are connected by a very high conductance, which is usually done for modeling ease. That conductance may be several orders of magnitude higher than the other conductances incident on the node. Better modeling would be to treat them as two separate nodes with a side constraint that the voltages (for which you are solving) at these two nodes be equal (which actually constrains the solution vector to lie in a subspace where this constraint is satisfied). Another example is a statics problem that one is trying to solve using finite element methods where the model is inadequately constrained (imposed boundary conditions), in which case it is singular; but sometimes, due to the finite precision of the computer, one ends up with something that merely looks near-singular. Direct methods give you totally garbage answers, while iterative methods take forever to converge. Sometimes people use huge penalties for constraint satisfaction, which tend to make the resulting problem ill-conditioned.
Improper meshes - bad aspect-ratio elements, inadequate meshing resolution around small features, the wrong elements for the problem at hand, etc. - cause similar trouble. A lot of these problems can be addressed by decreasing the mesh size (i.e., increasing the number of elements), which in turn requires faster linear-equation solvers. The balance is between the problem size (in terms of the number of global degrees of freedom) and the ill-conditioning arising out of bad aspect-ratio elements.
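
To make the resistive-network example concrete, a toy sketch (the numbers are made up): a 3-node network with one conductance many orders of magnitude above the rest, and the condition number of the grounded nodal matrix.

Code: Select all

import numpy as np

def nodal_matrix(g12, g13, g23):
    """3-node conductance (Laplacian) matrix, grounded at node 3,
    so we keep only the leading 2x2 block."""
    G = np.array([[g12 + g13, -g12,       -g13],
                  [-g12,       g12 + g23, -g23],
                  [-g13,      -g23,        g13 + g23]])
    return G[:2, :2]

A_ok  = nodal_matrix(1.0, 1.0, 1.0)
A_bad = nodal_matrix(1e9, 1.0, 1.0)   # the "modeling ease" shortcut

print(np.linalg.cond(A_ok))    # ~ 3
print(np.linalg.cond(A_bad))   # ~ 2e9: badly ill-conditioned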
kasthuri
BRFite
Posts: 411
Joined: 02 Jan 2009 08:17
Location: Mount Doom in Mordor

Re: BR Maths Corner-1

Post by kasthuri »

^^^ Sure, I understand these. I was also talking about numerical ill-conditioning, where ||A||*||inv(A)|| is large for the given matrix A. The example I gave and the problems you state above boil down to the same thing. Of course there are several ways to improve the conditioning, like using preconditioners, regularization techniques, and in general modeling the problem differently so that the resulting matrix has a better condition number (for convergence etc.). But the ill-conditioning does not necessarily "arise" out of bad models. Ill-conditioning comes out of the data; bad matrices (or models) just make the solution/reconstruction look worse, while good matrices/modeling make the reconstruction/solution better. Does this make sense?
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Kasthuri wrote:Ill-conditioning comes out of the data; bad matrices (or models) just make the solution/reconstruction look worse, while good matrices/modeling make the reconstruction/solution better. Does this make sense?
Yes, that is correct. But there are two factors that have to be kept in mind when deriving fast algorithms for linear equations.

Firstly, the ill-conditioning may be inherent in the problem itself. For example, you might have seen what is called the snap-through problem. Just in case: imagine a dome-shaped shell where you apply a normal, downward force at the top of the dome, like so.

Code: Select all

    |
    V
    _
  /   \
/       \
Then, when you are modeling, you go through pseudo-time stepping and slowly increase the force from 0 to the intended final force. In this case the shell deforms and goes from one equilibrium state (the downward cup) to another equilibrium state (the upward cup). Initially the system of linear equations (the discretized problem) is positive definite, and finally, after the snap-through, it is positive definite again. But somewhere along the way it goes through indefiniteness (in fact a singularity). As you near the singularity in your pseudo-time stepping, the condition number becomes worse and worse as the matrix comes closer to singular. That is one thing the solution method must handle. PCG works only for SPD matrices, as does the non-pivoting symbolic factorization used in direct methods. Since there are no provably good preconditioners for indefinite/unsymmetric matrices, if the linear-equation solver can get feedback from the outer nonlinear loop, then it can take appropriate action to avoid a solution slowdown. For example, take the case of popular but ad hoc preconditioners: incomplete Cholesky can be used as a preconditioner in PCG initially, and one can switch to ILU(k) in GMRES when the outer loop is nearing the singularity, increasing k as you come nearer, then decreasing k past the singularity and switching back to incomplete Cholesky. Of course, as k increases, the number of operations goes up since ILU(k) has more non-zeros, so everything needs to be parameterized.
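
For reference, the ILU-plus-GMRES combination mentioned above looks roughly like this in scipy (a sketch on a made-up diagonally dominant sparse system; the actual switching logic between incomplete Cholesky/PCG and ILU(k)/GMRES would live in the outer nonlinear loop):

Code: Select all

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = (sp.rand(n, n, density=0.05) + 10 * sp.eye(n)).tocsc()  # diag-dominant
b = np.random.randn(n)

ilu = spla.spilu(A, fill_factor=10)                 # ILU factorization of A
M = spla.LinearOperator((n, n), matvec=ilu.solve)   # ... used as preconditioner

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means it converged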

The second factor has more to do with shortcuts taken by the modelers to decrease the problem size, which might actually result in slowing down the linear-equation solver (especially if it is iterative). Of course, there is more to it than that short sentence.

More later when I find some more time.
kasthuri
BRFite
Posts: 411
Joined: 02 Jan 2009 08:17
Location: Mount Doom in Mordor

Re: BR Maths Corner-1

Post by kasthuri »

matrimc wrote: But there are two factors that have to be kept in mind when deriving fast algorithms for linear equations. Firstly, the ill-conditioning may be inherent in the problem itself... The second factor has more to do with shortcuts taken by the modelers to decrease the problem size, which might actually result in slowing down the linear-equation solver (especially if it is iterative).
Yes, this is precisely what I was talking about. The two factors that you mention can mathematically be described as follows:

Ultimately, what we are trying to do is solve Ax = b, where b is some measurement vector. The problem would be straightforward if A were a well-behaved matrix, where fast methods like CG/GMRES can be applied for a quick solution (of course CG requires SPD, etc.). However, there are a couple of things that could go wrong.

1. Usually, any physical measurement will contain errors 'e'. Therefore, the actual problem we are trying to solve is not Ax = b but Ax = b_0 + e, where b_0 is the measurement without any error. Thus the computed solution inv(A)*b is not the true solution inv(A)*b_0 but inv(A)*b_0 + inv(A)*e. The second term is what ill-conditioning is all about.

2. If A is well-behaved, then inv(A)*e can be reasonably estimated. But in practice A usually comes from a convolution kernel whose singular values converge to zero as the size of the matrix increases. This can be verified with an SVD of A. This is the first factor you mentioned - the ill-conditioning is in the problem itself: the matrix comes with singular values that approach zero as its size increases.

As for the second factor (i.e., the modelers decreasing the problem size), it is a very natural thing to do. The larger the problem, the more singular values need to be taken into account, and thus we could potentially lose the solution. If we reduce the problem size, we may get a crude approximation that is really far from the original solution, or get stuck in local minima, thus requiring more iterations to get near the original solution.
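
The usual band-aid for point 2 is to truncate the small singular values (a minimal numpy sketch of truncated-SVD regularization; the relative cutoff is the knob one tunes):

Code: Select all

import numpy as np

def tsvd_solve(A, b, cutoff=1e-8):
    """Solve Ax = b via SVD, dropping singular values below cutoff so
    that the inv(A)*e terms along near-null directions are not amplified."""
    U, s, Vt = np.linalg.svd(A)
    keep = s > cutoff * s[0]          # relative threshold on sigma_max
    # x = V * diag(1/s) * U^T b, restricted to the kept components
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# With cutoff=0 this is just the (unstable) pseudo-inverse; raising the
# cutoff trades bias for stability.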
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Kasthuri garu

Your application seems to be in image processing. I am not familiar with that area. Can you write a quick review of the application? Thanks.

Added Later: You can assume that I know what a convolution operation is and other related stuff.
kasthuri
BRFite
Posts: 411
Joined: 02 Jan 2009 08:17
Location: Mount Doom in Mordor

Re: BR Maths Corner-1

Post by kasthuri »

matrimc wrote:Kasthuri garu

Your application seems to be in image processing. I am not familiar with that area. Can you write a quick review of the application? Thanks

Added Later: You can assume that I know what a convolution operation is and other related stuff.
Sure...will do when I find some time.
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

No hurry. Please go easy on the measure theory :) as I am a computer scientist with limited knowledge of analysis.
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Largest Known Prime, 48th Known Mersenne Prime Found!!
On January 25th, prolific GIMPS contributor Dr. Curtis Cooper discovered the 48th known Mersenne prime, 2^57,885,161 - 1, a 17,425,170 digit number. This find shatters the previous record prime number of 12,978,189 digits, also a GIMPS prime, discovered over 4 years ago. The discovery is eligible for a $3,000 GIMPS research discovery award.

Dr. Cooper is a professor at the University of Central Missouri. This is the third record prime for Dr. Cooper and his University. Their first record prime was discovered in 2005, eclipsed by their second record in 2006. Computers at UCLA broke that record in 2008. UCLA held the record until Dr. Cooper and the University of Central Missouri reclaimed the world record with this discovery.

While Dr. Cooper's computer found the record prime, the discovery would not have been possible without all the GIMPS volunteers that sifted through numerous non-prime candidates. GIMPS founder George Woltman and PrimeNet creator Scott Kurowski thank and congratulate all the GIMPS members that made this discovery possible.

Mersenne primes are extremely rare, only 48 are known. GIMPS, founded in 1996, has discovered the last 14 Mersenne primes. Mersenne primes were named for the French monk Marin Mersenne, who studied these numbers more than 350 years ago. Chris Caldwell maintains an authoritative web site on the history of Mersenne primes as well as the largest known primes.

The primality proof took 39 days of non-stop computing on one of the University of Central Missouri's PCs. To establish there were no errors during the proof, the new prime was independently verified using different programs running on different hardware. Jerry Hallett verified the prime using CUDALucas running on a NVidia GPU in 3.6 days. Dr. Jeff Gilchrist verified the find using the standard GIMPS software on an Intel i7 CPU in 4.5 days. Finally, Serge Batalov ran Ernst Mayer's MLucas software on a 32-core server in 6 days (resource donated by Novartis IT group) to verify the new prime.
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

Thanks. Another success for GIMPS... not too long ago we talked about this..
http://forums.bharat-rakshak.com/viewto ... e#p1382937
saip
BRF Oldie
Posts: 4231
Joined: 17 Jan 2003 12:31
Location: USA

Re: BR Maths Corner-1

Post by saip »

I have a simple question. I always thought that whether you run, jog or walk a mile, you spend about the same number of calories. The only difference, I thought, would be the wind resistance, which in this case should be nominal. Say a man weighing 160 lb walks a mile at 4 mph; would he spend more energy if he ran the same distance at, say, 6 mph, and if so, why?
TIA
SriKumar
BRF Oldie
Posts: 2245
Joined: 27 Feb 2006 07:22
Location: sarvatra

Re: BR Maths Corner-1

Post by SriKumar »

^^^ Well, this question is more for the physics thread, but the answer is short, so here goes. It takes more energy to move an object (or a person) faster rather than slowly. Imagine a body moving from one point to another. For it to reach its destination in 1 hour (rather than 2 hours), it has to move faster, meaning it has to accelerate to a higher velocity, meaning the force on the object has to be higher. Energy (in calories or joules) = work done = force x distance. So, over the same distance, a higher applied force means more energy is used/needed.
vishvak
BR Mainsite Crew
Posts: 5836
Joined: 12 Aug 2011 21:19

Re: BR Maths Corner-1

Post by vishvak »

Dictionary traces maths concepts to Vedas
All branches of mathematics are well represented in the Vedas, Aranyakas, Brahminical literature, Upanishads, Panini's Ashtadhyayi and Yaska's Nirukto, the dictionary explains. It goes on to prove that most solutions that can be arrived at through algebra, geometry and trigonometry have Sanskrit roots. Thus, what the world knows as Pythagoras' theorem existed in the Sulbasutras provided in the manuscripts of Boudhayan, Apostombo, Manaba and Katyayan. A large number of formulae developed thousands of years ago, which lead to the same conclusions as modern theorems, have been provided in the dictionary, with their places of occurrence in Indian punthis.
ramana
Forum Moderator
Posts: 59808
Joined: 01 Jan 1970 05:30

Re: BR Maths Corner-1

Post by ramana »

AmberG, MatriMC et al, I would like a big-picture view of math in the 21st century.

What are the challenges and what should we hope will be solved in this century?

I want our members and lurkers to get their imagination piqued by the grand challenges they may not be aware of.
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

A quick answer, but I will think it over some more and come up with the areas I am familiar with.

1. We can always go back to the old faithful - Hilbert's problems, some of which are still unsolved.

2. One list is the Clay Mathematics Institute's Millennium Prize Problems, which has a couple of entries in common with Hilbert's problems.

3. There is another list by Fields Medalist Stephen Smale (Smale's problems), in the same spirit as Hilbert's. I am surprised to see that the status of the 9th problem is not closed. Khachiyan gave a (weakly) polynomial-time ellipsoid algorithm, which led to the general consensus in the CS community that $LP \in P$. Karmarkar's interior-point algorithm is also a practical LP algorithm, competitive with Simplex on problems arising from applications. Maybe Smale wants a strongly polynomial-time algorithm. Could somebody please check whether Prof. Éva Tardos of Cornell has shown this?

For reference, the linear programming problem: find a polynomial-time algorithm which, given a matrix $A \in R^{m \times n}$ and $b \in R^m$, decides whether there exists $x \in R^n$ with $Ax \ge b$.
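
Just to pin the statement down, the feasibility question itself is easy to pose in code (a sketch using scipy's linprog with a zero objective; whether any such solver runs in the time Smale asks for is exactly the open part):

Code: Select all

import numpy as np
from scipy.optimize import linprog

def feasible(A, b):
    """Does there exist x with Ax >= b?  Minimize the zero objective
    subject to -Ax <= -b (linprog's canonical <= form)."""
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=-A, b_ub=-b,
                  bounds=[(None, None)] * n)   # x is free, not x >= 0
    return res.status == 0, res.x              # status 0: optimum found

ok, x = feasible(np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([1.0, 0.0]))
print(ok, x)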
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

I misread Smale's 9th problem as the LP optimization problem.

LP is: min c^T x subject to Ax >= b.

But the problem asks for a feasible point of the set of linear inequalities, found in polynomial time. I was wrong to think that it should be closed (which is sort of obvious: a person of Smale's standing would know the status).

There is a paper on arXiv showing that a reduction from LP gives the solution. The reduction is almost obvious, but I think the author also misread the problem, since he restricts to the case where the feasibility set is not empty; that is not an answer to Smale's 9th problem, which asks for the stronger result, i.e., with no such restriction.
SriKumar
BRF Oldie
Posts: 2245
Joined: 27 Feb 2006 07:22
Location: sarvatra

Re: BR Maths Corner-1

Post by SriKumar »

^^^ I looked at the list of Clay problems and found the P vs NP problem. About a year or two ago there was some excitement when someone proposed a solution for it, so I tried to read up a bit about the question and am part of the way there. http://www.youtube.com/watch?v=msp2y_Y5MLE
The above video of a lecture at MIT by a math professor was actually the clearest discussion I have seen or read. Well worth the hour. Aimed at a 10th-class math level (which is perfect for me). I have a few questions, if anyone will indulge me.

1. Formulating the question as the statement P = NP seemed a bit odd to me. The statement, as I understood it, is all about the time required to solve a problem, i.e., polynomial time vs. non-deterministic polynomial time (fast vs. slow, to put it simplistically). It says nothing about the type of problems being addressed - unless there is something deeply implicit. Also, how would one even start working with this equation when trying to work out a solution?

2. The substance of the P vs NP question, as I understood it, is that some problems do not have efficient algorithms for finding a solution. The above video names two of them: factoring and searches (cliques). Other sources mention the traveling salesman problem etc.

Take the simplest example, that of finding factors. Multiplying a bunch of large numbers is trivial, but finding the factors of a large number is not. Supposedly, the main way is brute force, where you try out a bunch of candidate factors and see if they work, i.e., trial and error (this was my simplistic understanding). So there is this class of problems where a solution, if given, is easily verifiable, but finding a solution is difficult - and for large n practically impossible (it would take years). This class is NP.

Question: How would proving P = NP actually give us algorithms for the various, diverse problems in the NP category? In other words, if someone did manage to prove P = NP, would that proof automatically contain (or give rise to) algorithms for problems as different as factoring large numbers, the traveling salesman problem, cliques (i.e., # of connections between nodes) and everything else in the NP class of problems?
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Srikumar, very good questions - I will come back with a cogent post in a few days (unless somebody takes it up before me). Things are a little nuanced in terms of efficient encoding, i.e., the language in which one describes the problem, and the machine model. P and NP are sets whose members are efficient encodings of problems. By the way, one always restricts to countable sets in all these discussions. It is easiest to work with binary encoding, since an encoding in any other base can be converted to binary in a polynomial amount of time, as you can convince yourself (the machine model here is a Turing machine). Unary encoding is not allowed, because encoding in any base > 1 is exponentially more compact than unary. These sets are defined in terms of the number of atomic steps a Turing machine (deterministic or non-deterministic) takes to solve the problem. What is easy to see is that $P \subseteq NP$. To show $P \neq NP$ (which is what is believed), one has to exhibit a problem that is in NP yet does not belong to P. If one wants to show $P = NP$, then one has to show $NP \subseteq P$, i.e., that an arbitrary $x \in NP$ implies $x \in P$.

More later...
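
In the meantime, one concrete way to see the asymmetry: membership in NP only needs a polynomial-time *verifier* for a proposed certificate, not a fast solver. A sketch for the clique problem mentioned above (the graph encoding and names are mine):

Code: Select all

from itertools import combinations

def verify_clique(adj, certificate, k):
    """Polynomial-time verifier: does `certificate` name k vertices that
    are pairwise adjacent in the graph `adj` (dict: vertex -> set)?
    Finding such a set is hard; checking one is only O(k^2)."""
    if len(set(certificate)) < k:
        return False
    return all(v in adj[u] for u, v in combinations(certificate, 2))

adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
print(verify_clique(adj, [1, 2, 3], 3))  # True: a 3-clique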
Last edited by Vayutuvan on 06 Mar 2013 23:45, edited 1 time in total.
SriKumar
BRF Oldie
Posts: 2245
Joined: 27 Feb 2006 07:22
Location: sarvatra

Re: BR Maths Corner-1

Post by SriKumar »

Ah....so it is set theory. Will wait for the rest of it, take your time.
Multatuli
BRFite
Posts: 612
Joined: 06 Feb 2007 06:29
Location: The Netherlands

Re: BR Maths Corner-1

Post by Multatuli »

Here is an online resource: the On-Line Encyclopedia of Integer Sequences

http://oeis.org/?language=english

From Wikipedia, the free encyclopedia

The On-Line Encyclopedia of Integer Sequences (OEIS), also cited simply as Sloane's, is an online database of integer sequences, created and maintained by N. J. A. Sloane, a researcher at AT&T Labs. In October 2009, the intellectual property and hosting of the OEIS were transferred to the OEIS Foundation. OEIS records information on integer sequences of interest to both professional mathematicians and amateurs, and is widely cited. As of 24 January 2013 it contains over 220,000 sequences, making it the largest database of its kind.

Each entry contains the leading terms of the sequence, keywords, mathematical motivations, literature links, and more, including the option to generate a graph or play a musical representation of the sequence. The database is searchable by keyword and by subsequence.
ashish raval
BRFite
Posts: 1390
Joined: 10 Aug 2006 00:49
Location: London
Contact:

Re: BR Maths Corner-1

Post by ashish raval »

Carl Sagan meets Stephen Hawking and Arthur Clarke. Interestingly, Carl Sagan mentioned that Israel has an N-bomb, or rather the capability to build one.
http://io9.com/5988847/imagine-carl-sag ... t-happened
SaiK
BRF Oldie
Posts: 36424
Joined: 29 Oct 2003 12:31
Location: NowHere

Re: BR Maths Corner-1

Post by SaiK »

gurus, please to laymenfy chi squared tests in statistical analysis with some examples and use?
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

I just received an email/arXiv preprint from an IIT Kanpur undergraduate,
Mr. Kimpang Avidadgh, who has supplied an elegant proof, using elementary techniques, of the famous Leviculus Conjecture.

The interesting part, as the paper's intro says, is that the proof is not original but
was found in an old Jain manuscript, in a relatively obscure theorem called the
Anatmagya sidhant (अनात्मज्ञ).

The Leviculus Conjecture, for those who do not know it, is:
The average of any two consecutive odd primes is always a composite number.

(For example, if you take two consecutive odd primes, say 13 and 17, the average is 15, and sure enough 15 is composite (5 times 3).)

The proof, once you know it, is not very complicated. Hopefully someone here can prove it or reproduce the proof.
(I will explain the paper in simple terms when I get some time.)
Link to Arxiv preprint:
http://arxiv.org/abs/1203.6902
ArmenT
BR Mainsite Crew
Posts: 4239
Joined: 10 Sep 2007 05:57
Location: Loud, Proud, Ugly American

Re: BR Maths Corner-1

Post by ArmenT »

Amber G. wrote: Link to Arxiv preprint:
http://arxiv.org/abs/1203.6902
Are you sure you posted the correct link? This one seems to lead to an April Fool's joke :).
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

^^^ Sorry, I got my links mixed up. :P .. Will post the whole article.
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

Amber G. wrote: Mr. Kimpang Avidadgh
That was a red flag already :lol:

But then again, I may be showing my ignorance, because names like that can be found in some parts of India.
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

^^^ I also found a name like that a little amusing, but after checking my Sanskrit dictionary, I think it is not as unusual as one might think...
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

I can see how Avidagdh could be a Sanskrit name (something like "somebody who is immune to fire"), but Avidadgh? In any case, I almost fell for your sucker punch :oops:.
And the correct statement, of course, is that for any two consecutive ODD primes, the average is composite.
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

matrimc wrote:I can see how Avidagdh could be a Sanskrit name (something like "somebody who is immune to fire"), but Avidadgh?
Sorry for my spelling mistakes. I think it is अविदग्ध, which is अ + विदग्ध; one meaning is uncooked or unripe. I was told it is also used for something like milk that has not turned sour.
(But my dictionary also tells me there are some unflattering meanings too .. see here)
Amber G.
BRF Oldie
Posts: 9292
Joined: 17 Dec 2002 12:31
Location: Ohio, USA

Re: BR Maths Corner-1

Post by Amber G. »

^^^ Interestingly "Leviculus" in Latin means "silly", and the root of this (from levis) is from Sanskrit word लघु.
(From http://www.latinwordlist.com/latin-word ... 246704.htm..

A long time ago, I tried this "theorem" on a young student (a future IMO gold medalist and now a full math professor), who worked on it quite seriously before realizing how obvious the theorem was and breaking into laughter..

I dedicate the above to Sophie Germain, whose birthday falls on April 1 and whose work in number theory I admire greatly... A lot of her papers were written under the name "M. Le Blanc", and most people did not know she was a woman, as women were not taken seriously as mathematicians.

When she was young, she "attended" the prestigious École Polytechnique, which did not allow girls, by taking up the name/identity of another student (M. Le Blanc, who had enrolled but did not attend the school), and she fooled virtually everyone (except Lagrange, who, when he saw her homework, demanded to see "M. Le Blanc" in his office and found out the truth).

She actually corresponded with Gauss for years, and he had no idea that the brilliant mathematician "Le Blanc" was a woman!
(Sophie knew that Gauss would not read, or take seriously, any woman's work.)

(When Gauss's town was under occupation by Napoleon's army, Sophie asked the French general - a family friend of hers - to personally protect Gauss. When the army officer informed Gauss that he was under his protection and should thank Germain, Gauss had no idea who this "Sophie Germain" was. Only later was he told, and he could not believe that the brilliant mathematician "M. Le Blanc" with whom he had corresponded for years was actually Sophie Germain!)

I am reading some of the letters this lady wrote to Gauss and am impressed by how brilliant she was, even though she was not able to attend college as any man of her time could. She also did some work in physics.

She did get her (honorary) PhD, about 3 years after her death, after Gauss lobbied for it.

An ultimate/unusual April Fools story, and quite a heart-warming one.
Vayutuvan
BRF Oldie
Posts: 12089
Joined: 20 Jun 2011 04:36

Re: BR Maths Corner-1

Post by Vayutuvan »

IIRC, she is listed as Gauss's student in the Mathematics Genealogy DB. Are you sure the PhD is honorary?

Added later
Never mind - you are right - this is what the DB says -
Dissertation: Honorary degree awarded by Gottingen under pressure from Gauss