Today I watched on YouTube a very interesting lecture by Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, about the multicore revolution — or, as the presenter wisely put it, what the hardware has done to the software. Sooner rather than later we’ll be able to embed hundreds of processors on a single chip. This is like MPSoCs in embedded systems, the main subject of the Multicube project on which I’m working.
The processor core becomes the new transistor, and this poses the challenge of how to build compelling new applications that use the new processing power efficiently. Recently the scientific community, with its exascale computations, has taken the lead, and it has a real need for that computation in the e-sciences. The issue is that the availability of multicore processors in desktop and mobile computers on one hand, and of practically unlimited computation power in datacenters (computing clouds, as they call them) on the other, has raised concerns in industry and academia about whether this computation power for the masses is useful at all.
This is the synopsis:
July 22, 2008 Berkeley Lab lecture: Parallel computing used to be reserved for big science and engineering projects, but in two years that’s all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn’t kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community’s efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.
Kathy Yelick offers some hints about the sort of applications involved (related mostly to professional multimedia editors, musicians, etc.) and about techniques for automatic parallelization of application tasks that go beyond the manual threading model.