How important are supercomputers in unraveling protein structures?
For the first time ever, the chemical structure of the HIV-1 capsid – a jigsaw puzzle of approximately 64 million atoms – has been exposed in detail. In an article published in Nature last week, American scientists describe exactly how its 216 hexamers and 12 pentamers fit together. It’s a spectacular find that could further the development of new medication against the virus that causes AIDS. Still, most media sources seemed even more interested in the performance of the supercomputer Blue Waters. Is its computing power really that special, and will its services be used to unravel all complex protein structures from now on?
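As a quick sanity check on the numbers above, the following minimal sketch tallies the protein copies in the shell. It assumes the standard composition – six copies of the CA capsid protein per hexamer and five per pentamer – which is general structural-biology knowledge, not stated explicitly in the article:

```python
# Tally of CA protein copies in the closed HIV-1 capsid shell,
# assuming 6 proteins per hexamer and 5 per pentamer.
hexamers, pentamers = 216, 12
proteins = hexamers * 6 + pentamers * 5
print(proteins)  # 1356 protein copies in the full shell
```

Each of those roughly 1,356 proteins contributes thousands of atoms, which – together with the surrounding water and ions in the simulation – is how the model reaches the 64-million-atom scale.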
“The use of supercomputers in this area of expertise isn’t entirely new”, says Heiner Friedrich, associate professor of Materials and Interface Chemistry in the Department of Chemical Engineering & Chemistry. As one of the researchers of the Soft Matter Cryo-TEM Unit, he already uses high-tech electron microscopy regularly. “We’ve been making simulations based on high-resolution cryo-electron microscopic images for a while now. It’s the huge dataset that makes this study special. More than 60 million atoms were modeled; you simply can’t calculate all that without a supercomputer. And supercomputers will only be used more often in the future, because we want to be able to resolve ever bigger and more complex structures. Computers are often the limiting factor in that respect, but luckily developments are rapid. After all, the computer that’s on your desk today was the supercomputer of the 80s…”
“Of course the fact that this involves the AIDS virus doesn’t go unnoticed. But from a purely technical viewpoint it’s no mean feat either, especially as far as the improved resolution is concerned. Previously, this type of structure could only be imaged at a resolution of 1.6 nanometers, but now they’re able to display images with a resolution of 0.8 nanometers. And a reduction by a factor of two makes the reconstruction roughly ten times as complex. Reading about the reconstruction definitely makes your scientific heart skip a beat - it’s brilliant.”
“In the world of materials science we also use simulations, albeit on a different level. Cells have the advantage of being able to synthesize proteins and protein complexes, which can subsequently be studied in detail. Polymers, however, can’t be created with atomic precision, and besides, we’re usually interested in the larger aggregates anyway. So although we do need quite a lot of computing power, we can easily do without supercomputers. Our challenge is of a different nature. Most materials scientists put ‘dry’ material under their electron microscope; it’s fairly resistant to the electron beam, so it can be studied for hours on end. The nanoparticles we work with, however, are in solution, which makes them extremely vulnerable, and on top of that their shapes can vary. That makes it all the more challenging to build proper models of them. Because that’s what it’s all about, eventually, as the Nature article stated as well: you’re producing a structural model that serves as a starting point for hypothesis-driven research. It provides an image of what you’re studying, and from there you can start asking more in-depth questions. And that’s an important step forward for science, surely.”