It has been a while since I posted here, though I was regularly posting to my complex variables class blog last semester.

And for those who like complex variables and numerical analysis, this is an exciting, interesting development.

But as to the title of my post: I was working to finish a proof that one kind of wild knot is not “equivalent” to a different kind of wild knot. I had developed a proof (or so I think) that the complement of one knot contains an infinite collection of inequivalent tori (whose solid tori contain the knot non-trivially), whereas the other kind of knot admits only finitely many such tori. I still like the proof.

But it turns out that there is already an invariant that does the trick nicely; hence I can shorten and simplify the paper.

But dang it… I liked my (now irrelevant to my intended result) result!


These can be used for the Descartes and Osgood theorems, as observed by prof dr mircea orasanu and prof drd horia orasanu, along with the main results.

Comment by tusangu — January 2, 2019 @ 3:51 am

In many situations, results connected with Legendre and Descartes are considered, as presented by prof dr mircea orasanu and prof drd horia orasanu, especially at COLLEGE LYCEUM MAGNA and Louis university, but not at Colleg virgil magearu or Colleg traian Buc., where the LAGRANGIAN DEFINING is considered with the following considerations.

Consider the discrete optimization problem (which we refer to as Problem A)

,

where – is a non-decreasing -order-convex function on a partially ordered set .

Let be an optimal solution of Problem A, and let be the point obtained by the following iterative procedure [4]:

which halts on the step if either or is the maximal element of the set (the set contains the zero , as we have stipulated). This point is called the gradient maximum of the function on the set [4].
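The formulas in the comment are missing, but the procedure it describes is a generic greedy ("gradient") ascent on a finite partially ordered set: start at the zero element, repeatedly move to an immediate successor with the largest objective value, and halt when no successor improves the objective or the current point is maximal. The following sketch is purely illustrative; the poset encoding, the objective `f`, and all names are hypothetical assumptions, not taken from the original.

```python
# Illustrative sketch only: a generic greedy ("gradient") ascent on a finite
# poset, in the spirit of the procedure described above. The encoding, the
# objective f, and all names here are hypothetical assumptions.

def gradient_maximum(f, successors, start):
    """Greedy ascent: from `start`, repeatedly move to the immediate
    successor with the largest f-value, halting when no successor
    improves f or the current point is maximal (has no successors)."""
    x = start
    while True:
        succ = successors(x)
        if not succ:           # x is a maximal element of the poset
            return x
        best = max(succ, key=f)
        if f(best) <= f(x):    # no successor improves f: halt
            return x
        x = best

# Toy example: subsets of {0, 1, 2} ordered by inclusion, encoded as
# bitmasks; f is non-decreasing along the order (nonnegative weights).
def successors(mask, n=3):
    return [mask | (1 << i) for i in range(n) if not mask & (1 << i)]

weights = [5, 0, 2]
f = lambda mask: sum(w for i, w in enumerate(weights) if mask & (1 << i))

x_star = gradient_maximum(f, successors, 0)  # the "gradient maximum"
```

In this toy run the greedy ascent stops at the subset {0, 2} (bitmask 5), since adding the remaining element does not strictly improve `f`; here the gradient maximum happens to attain the same value as the global maximum, but in general the two can differ, which is exactly what the error estimate below measures.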

By a guaranteed error estimate for the gradient algorithm in Problem A we mean a number

.

We perturb Problem A by means of Problem B

,

where is a non-decreasing -order-convex function on a partially ordered set and .

Let be a guaranteed error estimate for the gradient algorithm in some unperturbed (perturbed) discrete optimization problem. As usual (see [3]), we say that the gradient algorithm is stable if , where as .

Theorem. Let and be guaranteed error estimates for the gradient algorithm in Problems A and B, respectively. Then .

To prove the Theorem, we need the following lemma.

Lemma. The gradient maximum and the global maximum of any -order-convex non-decreasing function on are connected by the following relations:

, (1)

where

– is the set of all maximal elements of the partially ordered set .

Proof of Lemma. By virtue of item of Theorem 4 [4], we have for

Comment by ciodedu — January 2, 2019 @ 11:39 am