College Math Teaching

August 27, 2012

Why most “positive (preliminary) results” in medical research are wrong…

Filed under: editorial, pedagogy, probability, research, statistics — collegemathteaching @ 12:53 am

Suppose there is a search for a cure for (or relief from) a certain disease.  Most of the time, cures are difficult to find (the second law of thermodynamics at work here).  So the ratio of “stuff that works” to “stuff that doesn’t work” is pretty small.  For our case, say it is 1 to 1000.

Now when a proposed “remedy” is tested in a clinical trial, there is always the possibility of two types of error: type I, the “false positive” (e.g., the remedy appears to work beyond placebo but really doesn’t), and type II, the “false negative” (we miss a valid remedy).

Because there is so much variation in humans, setting the threshold for accepting a remedy too stringently would mean we’d almost never accept anything, cures included.  Hence a standard threshold is .05: roughly, “if the remedy really did nothing, there would be only a 5 percent chance of seeing results this good.”

So, suppose 1001 different remedies are tried and it turns out that only 1 of them is a real remedy (and we’ll assume that we don’t suffer a type II error, i.e., the real remedy always tests positive).  That leaves 1000 remedies that are not actually real remedies, of which about 5 percent, or about 50, will show up as “positive” (e.g., appear to bring relief beyond placebo).  Let’s just say that there are 49 “false positives.”
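
In Bayes’ theorem form (using the made-up numbers above and assuming perfect power, i.e., the real remedy always tests positive), the chance that a remedy which tests positive actually works is

P(\text{works} \mid \text{positive}) = \frac{(1)\left(\frac{1}{1001}\right)}{(1)\left(\frac{1}{1001}\right) + (0.05)\left(\frac{1000}{1001}\right)} = \frac{1}{51} \approx 2\%.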

Now saying “we tried X and it didn’t work” isn’t really exciting news for anyone other than the people searching for the remedy, so those results receive little publicity.  But “positive” results ARE considered newsworthy.  Hence the public sees 50 results being announced: 49 of these are false positives and 1 is true.  So the public sees 50 “this remedy works! (we think; we still need replication)” announcements, and often the media leaves off the “still needs replication” part…at least out of the headline.

And of those 50 announcements, only ONE (2 percent) pans out.

The vast majority of results you see announced are…wrong. 🙂

Now, I just made up these numbers for the sake of argument, but they show how the arithmetic works, even when the scientists are completely honest and competent.

August 14, 2012

A quick thought on the “Interdisciplinary” focus on the undergraduate level

Filed under: academia, calculus, editorial, integrals, mathematics education, pedagogy, quantum mechanics — collegemathteaching @ 9:15 pm

A couple of weeks ago, I attended “Mathfest” in Madison, WI. It was time well spent. The main speaker talked about the connections between algebraic geometry and applied mathematics; there were also good talks about surface approximations and about the applications of topology (even the abstract stuff from algebraic topology).

I just got back from a university conference; there the idea of “interdisciplinary education” came up.

This can be somewhat problematic in mathematics; here is why: I found that one of the toughest things about teaching upper division mathematics to undergraduates is reshaping their intuitions. Here is a quick example: suppose you were told that \int^{\infty}_1 f(x) dx is finite and that, say, f is everywhere non-negative and continuous. Then \lim_{x \rightarrow \infty} f(x) = ? Answer: either zero, or the limit might fail to exist; in fact, there is no guarantee that f is even bounded!
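
A standard counterexample (my addition; it makes the claim concrete): let f be zero except for a triangular spike of height n and base width 2/n^3 centered at each integer n \geq 2. Then f is continuous, non-negative, and unbounded, yet

\int^{\infty}_1 f(x) dx = \sum^{\infty}_{n=2} \frac{1}{2} \cdot \frac{2}{n^3} \cdot n = \sum^{\infty}_{n=2} \frac{1}{n^2} < \infty

so the integral converges even though \lim_{x \rightarrow \infty} f(x) fails to exist.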

This, of course, violates the intuition developed in calculus, and it is certainly at odds with the intuition developed in science and engineering courses. Example: just look at the “proofs,” found in many quantum mechanics textbooks, that the derivative (or second derivative) operator is Hermitian provided f is square integrable.

Developing the proper “mathematics attitude” takes time and it doesn’t help if the mathematics student is too immersed in other disciplines…at least it doesn’t help if an intellectually immature math student is getting bad intuition reinforced from other disciplines.

August 5, 2012

Mathfest Day III

I only attended the major talks; the first one was by Richard Kenyon. The material, while interesting, flew by a little quickly (though it wouldn’t have for someone who does research in this area full time). The main idea: piecewise linear approximation of smooth objects is extremely useful, not only topologically but also geometrically.

Something especially interesting to me: when trying to approximate certain smooth surfaces, the starting approximation doesn’t matter that much; there are many different piecewise linear sequences that converge to the same surface (not a surprise). There is much more there; this is a lecture I’d like to see again (if it gets posted).

The next one was the third talk by Bernd Sturmfels; this was a continuation of his “algebraic geometry’s usefulness in optimization” series. One big idea: we know how to optimize a linear function over a polytope (e.g., the simplex method). It turns out that we can sometimes speed up the process by the “central curve” method; the idea is to use algebraic geometry to study the optimization problem obtained by adding a term involving logs to the cost: form c^T\vec{x} + \lambda \sum^{n}_{i=1} \log(x_i), where c^T\vec{x} is the cost function. There is much more there.
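
Here is a minimal numerical sketch of that idea (a toy example of mine, not Sturmfels’ code, and assuming scipy is available): solve the log-barrier problem for a tiny linear program with decreasing \lambda and watch the optimizers trace a path toward the LP optimum.

```python
import numpy as np
from scipy.optimize import minimize

# Toy LP: maximize c.x subject to x1 + x2 + x3 = 1, x >= 0
# (x3 acts as a slack variable); the optimum is the vertex (0, 1, 0).
c = np.array([1.0, 2.0, 0.0])

def central_point(lam):
    # maximize c.x + lam * sum(log x_i)  <=>  minimize the negative
    obj = lambda x: -(c @ x + lam * np.sum(np.log(x)))
    cons = [{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}]
    return minimize(obj, x0=np.full(3, 1/3), constraints=cons,
                    bounds=[(1e-9, None)] * 3).x

for lam in [1.0, 0.1, 0.01, 0.001]:
    print(lam, central_point(lam))   # points approach (0, 1, 0) as lam -> 0
```

The curve swept out by these optimizers as \lambda varies is (roughly) the “central curve” that the algebraic geometry studies.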

The last talk was by an Ivy League professor; it was called “Putting Topology to Work.” On one hand, it was great in the sense that there were many interesting applications. He then asked a sensible question: how do we teach the essentials of this topology to engineers?

His solution: revise the undergraduate curriculum so that…well…undergraduates get algebraic topology (or at least homological algebra) in their…linear algebra course. 🙂 It must be nice to teach Ivy League caliber undergraduates. 🙂

The elephant in the room: NO ONE seemed to ask the question: “do the students in our classrooms have the ability to learn this stuff to begin with?”

Do you really think that a class full of students with ACT scores in the 22-26 range will EVER be able to handle the advanced stuff, no matter how well it is taught?

August 4, 2012

Day 2, Madison MAA Mathfest

The day started with a talk by Karen King from the National Council of Teachers of Mathematics.
I usually find math education talks to be dreadful, but this one was pretty good.

The talk was about the importance of future math teachers (K-12) actually having some math background. However, she pointed out that merely having passed math courses didn’t imply that prospective teachers understood the mathematical issues they would be teaching…and it didn’t imply that their students would do better.

She gave an example: about half of those seeking to teach high school math couldn’t explain why “division by zero” is undefined! They knew that it was undefined but couldn’t explain why (if 1/0 were some number c, then 1 = 0 \cdot c = 0, a contradiction). I found that astonishing since I knew that in high school.

Later, she pointed out that potential teachers with a math degree didn’t understand what the issues were in defining a number like 2^{\pi}. Of course, a proper definition of this concept requires limits, or at least a rigorous definition of the log and exponential functions, and she was well aware that the vast majority of high school students aren’t ready for such things. Still, the instructor should be; as she said, “we all wave our hands from time to time, but WE should know when we are waving our hands.”
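
For the record, one standard way to make 2^{\pi} rigorous (my summary, not hers): define

2^{\pi} := e^{\pi \ln 2}, \qquad \text{or equivalently} \qquad 2^{\pi} := \lim_{n \rightarrow \infty} 2^{q_n} \text{ for rationals } q_n \rightarrow \pi

where 2^{q} for rational q is defined by integer powers and roots; one then has to check that the limit exists and doesn’t depend on the chosen sequence.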

She stressed that we need to get future math teachers to get into the habit (she stressed the word: “habit”) of always asking themselves “why is this true” or “why is it defined in this manner”; too many of our math major courses are rule bound, and at times we write our exams in ways that reward memorization only.

Next, Bernd Sturmfels gave the second talk in his series; this was called Convex Algebraic Geometry.

You can see some of the material here. He also led this into the concept of “semidefinite programming.”

The best I can tell: one looks at the objects studied by algebraic geometers (zero sets of polynomials in several variables) and then takes an “affine slice” of these objects.

One example: the “n-ellipse” is the set of points in the plane that satisfy \sum^m_{k=1} \sqrt{(x-u_k)^2 + (y-v_k)^2} = d where the (u_k, v_k) are fixed points in the plane.

Questions: what is the degree of the polynomial that describes the n-ellipse? What happens if we let d tend to zero? What is the smallest d for which the ellipse is non-empty (the answer is attained at the Fermat–Weber point)? Note: the 1-ellipse is a circle, the 2-ellipse is what we usually think of as an ellipse, and the 3-ellipse is described by a polynomial of degree 8.
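
That smallest d can be computed numerically. Here is a minimal sketch of Weiszfeld’s algorithm (a standard iteration; the code and example foci are mine) for locating the Fermat–Weber point:

```python
import numpy as np

def fermat_weber(points, iters=200):
    """Weiszfeld's algorithm: iteratively re-weighted averaging converges
    to the point minimizing the sum of distances to the given points."""
    x = points.mean(axis=0)                      # start at the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)
        w = 1.0 / d                              # inverse-distance weights
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

foci = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x_star = fermat_weber(foci)
d_min = np.linalg.norm(foci - x_star, axis=1).sum()
print(x_star, d_min)   # d_min is the smallest d with a non-empty 3-ellipse
```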

Note: these types of curves can be realized as the zero set of the determinant of a symmetric matrix of linear forms; such matrices have real eigenvalues. We can plot the curves along which an eigenvalue goes to zero and then changes sign. This process leads to what is known as a spectrahedron; this is a type of shape in space. A polyhedron can be thought of as the spectrahedron of a diagonal matrix.

Then one can seek to optimize a linear function over a spectrahedron; this leads to semidefinite programming, which, in general, is roughly as difficult as linear programming.

One use: some global optimization problems can be reduced to a semidefinite programming problem (not all).
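
For the curious, here is a minimal example of optimizing a linear function over a spectrahedron, written with the cvxpy modeling library (assuming it is installed; the matrices are my toy example, for which the spectrahedron happens to be the unit disk):

```python
import numpy as np
import cvxpy as cp

# The set {(x, y) : x*A1 + y*A2 + I is positive semidefinite} is a
# spectrahedron; for these A1, A2 it works out to x^2 + y^2 <= 1.
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[1.0, 0.0], [0.0, -1.0]])

x, y = cp.Variable(), cp.Variable()
constraints = [x * A1 + y * A2 + np.eye(2) >> 0]   # PSD constraint
problem = cp.Problem(cp.Maximize(x + 2 * y), constraints)
problem.solve()
print(float(x.value), float(y.value))   # approx (1, 2)/sqrt(5)
```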

Shorter Talks
There was a talk by Bob Palais which discussed the role of Rodrigues in the discovery of the quaternions. The idea is that Rodrigues discovered the quaternions before Hamilton did, but he phrased them in terms of rotations in space.

There were a few talks about geometry and how to introduce concepts to students; of particular interest was the concept of a geodesic. Ruth Berger talked about the “fish swimming in jello” model: basically, suppose you had a sea of jello whose density was determined by depth, with the most dense jello (tending to infinite density) at the bottom, and suppose it took less energy for the fish to swim in the less dense regions. If a fish wanted to swim between two points, what path would it take? The geometry induced by these geodesics yields the upper half plane model of hyperbolic space.
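
One way to make the jello model precise (my formalization, not necessarily the speaker’s): let y be the height above the bottom and charge the fish a swimming cost of 1/y per unit of Euclidean length, so that the cost of a path is

\int \frac{\sqrt{dx^2 + dy^2}}{y}

which is exactly arc length in the upper half plane model; the geodesics are vertical lines and semicircles meeting the boundary at right angles.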

Nick Scoville gave a talk about discrete Morse theory. Here is a user’s guide. The idea: take a simplicial complex and assign numbers (integers) to the points, segments, triangles, etc. The assignment has to follow rules; basically the boundary of a cell has to have a lower number than what it bounds (with one exception…), and such an assignment is called a discrete Morse function. Critical cells can then be defined, and the various Betti numbers can be calculated.
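
A tiny example (mine, not from the talk): take the complex consisting of vertices a and b and the edge ab, and set f(a) = 0, f(b) = 1, f(ab) = 1. The value on the edge fails to exceed the value on its face b; that is the allowed “one exception,” and it pairs b with ab. The only critical cell is a, and a single critical cell of index 0 matches the Betti numbers of a point, as it should: the complex is contractible.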

Christopher Frayer then talked about the geometry of cubic polynomials. This is more interesting than it sounds.
Think about this: remember Rolle’s Theorem from calculus? There is an analogue of this in complex variables called the Gauss–Lucas Theorem. Basically, the roots of the derivative lie in the convex hull of the roots of the polynomial. Then there is Marden’s Theorem for polynomials of degree 3. One can talk about polynomials that have a root at z = 1 and two other roots in the unit circle; then one can study where the roots of the derivative lie. For a certain class of these polynomials, there is a “dead circle” tangent to the unit circle at 1 which encloses no roots of the derivative.
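
A quick numerical illustration of Gauss–Lucas (my example, assuming numpy): build a cubic from its roots and look at where the roots of the derivative land.

```python
import numpy as np

# A cubic with a root at z = 1 and two other chosen roots (my example).
roots = np.array([1.0 + 0.0j, 0.5 * np.exp(2j * np.pi / 3), -0.3 - 0.2j])
p = np.poly(roots)               # coefficients of the monic cubic
crit = np.roots(np.polyder(p))   # roots of p'

print("roots of p :", np.round(roots, 3))
print("roots of p':", np.round(crit, 3))
# Gauss-Lucas: each root of p' lies in the convex hull of the roots of p
# (eyeball it from the printed values, or plot them).
```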

August 2, 2012

MAA Mathfest Madison Day 1, 2 August 2012

I am sitting in the main ballroom waiting for the large public talks to start. I should be busy most of the day; it looks as if there will be interesting talks all day long.

I like this conference not only for the variety but also for the timing; it gives me some momentum going into the academic year.

I regret not taking my camera; downtown Madison is scenic and we are close to the water. The conference venue is just a short walk away from the hotel; I see some possibilities for tomorrow’s run. Today: just weights and maybe a bit of treadmill in the afternoon.

The Talks
The opening lecture was the MAA-AMS joint talk by David Mumford of Brown University. This guy’s credentials are beyond stellar: Fields Medal, member of the National Academy of Sciences, etc.

His talk was about applied and pure mathematics and how there really shouldn’t be that much of a separation between the two, though there is. For one thing: prestige in pure mathematics is measured by the depth of the result, while prestige in applied mathematics is mostly measured by the utility of the produced model. Pure mathematicians tend to see applied mathematics as shallow and simple, and they resent the fact that applied math…gets a lot more funding.

He talked a bit about education and how the educational establishment ought to solicit input from pure areas; he also talked about computer science education (in secondary schools) and mentioned that there should be more emphasis on coding (I agree).

He mentioned that he tended to learn better when he had a concrete example to start from (I am the same way).

What amused me: his FIRST example was a PDE (partial differential equations) model of neutron flux through the nuclear reactors used in submarines; note that these were light water, thermal reactors, in which the fission reaction becomes self-sustaining via the absorption of neutrons whose energy levels have been lowered by a moderator (the neutrons lose energy when they collide with atoms that aren’t too much heavier than they are).

Of course, in nuclear power school, we studied the PDEs of the situation after the design had been developed; these people had to come up with an optimal geometry to begin with.

Note that they didn’t have modern digital computers; they used analogue computers modeled after simple voltage drops across resistors!

About the PDE: you have two neutron populations: “fast” neutrons (at high energy levels) and “slow” thermal neutrons (at lower energy levels). The fast neutrons are slowed down to become thermal neutrons, but thermal neutrons in turn cause more fissions, thereby increasing the fast neutron flux; hence you have two linked PDEs. Of course there is leakage, absorption by control rods, etc., and the resulting PDEs can’t be solved in closed form.
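
For concreteness, a standard two-group diffusion form of these linked equations (textbook notation, not the speaker’s slides), with \phi_1 the fast flux and \phi_2 the thermal flux:

-D_1 \nabla^2 \phi_1 + \Sigma_{R,1} \phi_1 = \nu\Sigma_{f,1} \phi_1 + \nu\Sigma_{f,2} \phi_2, \qquad -D_2 \nabla^2 \phi_2 + \Sigma_{a,2} \phi_2 = \Sigma_{s,1\rightarrow 2} \phi_1

Fissions (driven mostly by the thermal flux) feed the fast group, and scattering out of the fast group (part of the removal term \Sigma_{R,1}) feeds the thermal group; that is exactly the two-way coupling described above.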

Another thing I didn’t know: Clairaut (of “symmetry of mixed partial derivatives” fame) actually came up with the idea of the Fourier series before Fourier did; he did this in an applied setting.

Next talk: Amie Wilkinson of Northwestern (soon to be of the University of Chicago) gave a talk about dynamical systems. She is one of those mathematicians with publications in the finest journals that mathematics has to offer (stellar).

The whole talk was pretty good. Highlights: she mentioned Henri Poincaré and how he worked on a restricted 3-body problem (one massive body, one medium body, and one tiny body that didn’t exert gravitational force on the other bodies). This creates a system whose dynamics live in a 3-dimensional space (the full phase space, of course, has much higher dimension). Now consider a closed 2-dimensional manifold in that space and a point on that manifold, and study the orbit of that point under the dynamical system. Eventually, that orbit intersects the 2-dimensional manifold again. The map sending the starting point to the first intersection point actually describes a motion ON THE TWO MANIFOLD (the first-return map), and if we look at ALL intersections, we get the orbit of that point, considered as an action on the two-dimensional manifold.

So, in some sense, this two manifold has an “inherited” action on it. Now if we look at, say, a square on that 2-dimensional manifold, it was proved that this square comes back in a “folded” fashion: this is the famed “Smale horseshoe map.”

Other things: she mentioned that there are dynamical systems that are stable with respect to perturbations of the system itself yet have orbits that are unstable with respect to initial conditions, and that these instabilities cannot be perturbed away; they are inherent to the system. There are other dynamical systems (with less stability) that have this property as well.

There is, of course, much more. I’ll link to the lecture materials when I find them.

Last Morning Talk
Bernd Sturmfels on Tropical Mathematics
Ok, quickly: if you have a semi-ring (no additive inverses) with the operations x \oplus y = \min(x,y) and x \otimes y = x + y (check that \otimes distributes over \oplus), what good would it be? Why would you care about such a beast?

Answer: many reasons. This sort of object lends itself well to matrix operations and is used for “least path” problems (dynamic programming) and “tree metrics” in biology.

Think of it this way: in numerical analysis, if two quantities carry errors of order h^a and h^b, then their product carries an error of order h^{a+b} (the exponents add, like \otimes), while their sum carries an error of order h^{\min(a,b)} (the smaller exponent dominates, like \oplus).
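
A minimal sketch of the “least path” application (my code, plain Python): with \oplus = \min and \otimes = +, tropical matrix powers of a graph’s edge-weight matrix compute shortest path distances.

```python
INF = float("inf")

def tropical_matmul(A, B):
    """Min-plus product: (A (x) B)[i][j] = min over k of A[i][k] + B[k][j]."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Edge-weight matrix of a small directed graph: INF means "no edge".
W = [[0,   3, INF],
     [INF, 0,   1],
     [2, INF,   0]]

W2 = tropical_matmul(W, W)    # shortest paths using at most 2 edges
W3 = tropical_matmul(W2, W)   # at most 3 edges: all-pairs shortest paths here
print(W3)   # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```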

The PDF of the slides in today’s lecture can be found here.
