25 January 2007

Chem + CS: A Close Relationship?

So, the past few days have been incredible. The dinner that I had to emcee on Tuesday turned out extremely well, and a number of people came up to me afterward and told me that I'd done a great job. Plus, I got to steal one of the huge flower arrangements and now it's busy making the apartment all cheerful and girly. I was able to resolve something that I'd been worried about for a long time, classes were amazing (more below), and I got to walk around in the snow with Tom for a while between classes.

The walking thing is something I haven't really done before...just wandering around with someone, sometimes talking, sometimes not. The silences are always really comfortable silences, and I'm finding that it gives me a lot more time to get my thoughts together for class, or whatever else I've got on my mind. Plus the snow is so pretty and calm.

Anyway. I decided to be cute with the title of the post and give it a quasi-double meaning. For those wondering, here's meaning #1*, courtesy of my Artificial Intelligence class from today:

We were discussing various search algorithms for solving problems (specifically problems that can be framed in terms of beginning from a "start" state and ending up at the optimal "finish" state). On Tuesday the lecture had been about various graph algorithms that could also optimize either time or space; today the focus was on problems that would still be relatively inefficient to solve using the previously described algorithms, and whose "path to solution" doesn't really matter. (Traveling Salesperson is probably the most famous, although there are quite a few others...)

So, the idea is that instead of taking time to find the best "next" step from the current state, you just consider a randomly chosen neighbor at each step. If the neighbor can be shown to be a better choice than the current location, that neighbor gets visited. The interesting part, though, is that a certain percentage of the time (so, with a given probability) the "worse" neighbor gets chosen anyway. This is to ensure that enough of the problem space gets explored that the algorithm doesn't end up settling on a false "optimum" state that just happens to be better than its immediate neighbors. Over time, the probability of moving to a worse neighbor is exponentially decreased; given enough iterations the algorithm ends up at (or very near) the optimal solution.
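In code, the loop described above might look something like this. (This is just my minimal sketch of the idea, not anything from lecture: the toy problem, the neighbor function, and all of the numbers are made up, and I'm using the common rule where the chance of taking a worse step shrinks with both how much worse it is and a parameter t that decays each iteration.)

```python
import math
import random

# A made-up toy problem: find the integer x that minimizes f(x) = (x - 42)^2.
# The "states" are integers, and the "neighbors" of x are x - 1 and x + 1.
def score(x):
    return (x - 42) ** 2

def random_neighbor(x):
    return x + random.choice([-1, 1])

def search(start, steps=10000):
    current = start
    t = 100.0  # controls how often a worse neighbor is accepted
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = score(candidate) - score(current)
        if delta < 0:
            # The neighbor is better: always move there.
            current = candidate
        elif random.random() < math.exp(-delta / t):
            # The neighbor is worse: move there anyway, sometimes.
            current = candidate
        t *= 0.999  # decrease the chance of accepting worse moves over time
    return current
```

Calling search(-100) wanders its way over to (or right next to) 42, even though it never looks more than one step ahead.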

So, what's that got to do with chemistry?

Well...the name of the technique is "Simulated Annealing", because the equation used to calculate the probability of visiting a worse neighbor is the same thing as the equation for a Boltzmann distribution (excepting k, since it's just a constant). Instead of delta(Energy), it uses delta("optimality" of state), and "T" in this context decreases exponentially as the iterations proceed.
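So, modulo renaming, the acceptance rule is just this (my own sketch, with the difference in "optimality" standing in for delta-E and k folded away since it's constant):

```python
import math

# Boltzmann-style acceptance rule: delta is how much *worse* the neighbor is
# (better neighbors are always taken), and t plays the role of temperature.
def accept_probability(delta, t):
    return math.exp(-delta / t)
```

When delta is zero the move is always taken, and the bigger delta is relative to t, the closer the probability gets to zero.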

The idea is that the original equation gives the probability of moving between two states of energy in a solid at a given temperature T. As the temperature is decreased, the solid approaches an equilibrium where the probability of being in a given state falls off exponentially with that state's energy relative to T. Looking at the equation, this means that states of low energy (or a small difference in optimality) relative to T are more likely to occur, so as the simulation proceeds only states closer and closer to the optimum are accepted. The professor included a citation of the original proof in his lecture slides:

N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller,
and E. Teller, J. Chem. Phys. 21 (1953) 1087-1092

Ha. Tell me that's not one of the coolest things ever. (Okay, actually, I'm probably more excited than is absolutely necessary or warranted. Oh, well...)

* It doesn't get to be a double meaning because I'm not a CS major. But it can be a "one plus epsilon" meaning since I'm taking a CS concentration.
