In a design-space exploration scenario, each point in the design space is a HW/SW platform parameter configuration that must be evaluated so that the heuristic search algorithm can assess the fitness of that platform while searching for the optimum configuration. Fitness is measured against a series of design-value targets (e.g. performance estimates: execution time, area, power consumption).
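The loop above can be sketched as follows. This is a toy illustration, not the real flow: the parameter names, targets, and the analytic `simulate` stub are hypothetical stand-ins for the actual event-driven simulation, and random search stands in for whatever heuristic is eventually used.

```python
import random

# Hypothetical parameter space: one value per HW/SW parameter per design point.
PARAM_SPACE = {
    "cpu_freq_mhz": [100, 200, 400],
    "cache_kb":     [8, 16, 32, 64],
    "bus_width":    [32, 64],
}

TARGETS = {"time": 1.0, "area": 5.0, "power": 0.5}  # design-value targets

def simulate(config):
    # Stand-in for the event-driven simulation of one configuration.
    # A toy analytic model here; in practice this is the expensive step.
    time = 400 / config["cpu_freq_mhz"] * (64 / config["cache_kb"])
    area = config["cache_kb"] / 16 + config["bus_width"] / 32
    power = config["cpu_freq_mhz"] / 400 + config["bus_width"] / 64
    return {"time": time, "area": area, "power": power}

def fitness(estimates):
    # Lower is better: summed relative deviation from the targets.
    return sum(abs(estimates[k] - TARGETS[k]) / TARGETS[k] for k in TARGETS)

def random_search(iterations=200, seed=0):
    best_cfg, best_fit = None, float("inf")
    rng = random.Random(seed)
    for _ in range(iterations):
        cfg = {p: rng.choice(vals) for p, vals in PARAM_SPACE.items()}
        f = fitness(simulate(cfg))     # one full simulation per design point
        if f < best_fit:
            best_cfg, best_fit = cfg, f
    return best_cfg, best_fit
```

The key cost to notice is that every candidate point pays for one full simulation, which is what the ideas below try to avoid.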
If the design-space exploration landscape is smooth, then the different HW/SW platforms generated for evaluation with event-driven simulation can be expected to share some similarity and/or follow patterns that can be exploited to reduce the amount of simulation work carried out at each evaluation point, without increasing the intrinsic estimation error (as happens, for instance, when we move to a higher level of abstraction in the model).
Some in-depth knowledge of the SystemC event-driven simulation kernel mechanisms would be useful to gain insight into the simulation work performed when evaluating each point in the design space.
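As a mental model of what such a kernel does, the following is a minimal event-driven scheduler sketch: a time-ordered event queue where executing a callback may schedule further events. It is only in the spirit of the SystemC scheduler (no delta cycles, channels, or processes); all names are illustrative.

```python
import heapq

class EventKernel:
    # Toy event-driven kernel: events are (time, seq, callback, label)
    # tuples ordered by timestamp, with seq breaking ties FIFO-style.
    def __init__(self):
        self._queue = []
        self._seq = 0
        self.now = 0.0
        self.trace = []        # record of executed events, for inspection

    def schedule(self, delay, callback, label=""):
        self._seq += 1
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback, label))

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            t, _, cb, label = heapq.heappop(self._queue)
            self.now = t
            self.trace.append((t, label))
            cb(self)           # a callback may schedule further events

# Example: a periodic "clock" process that re-schedules itself 3 times.
def make_clock(period, reschedules):
    state = {"left": reschedules}
    def tick(kernel):
        if state["left"] > 0:
            state["left"] -= 1
            kernel.schedule(period, tick, "tick")
    return tick

k = EventKernel()
k.schedule(0, make_clock(10, 3), "tick")
k.run()
# k.trace holds the executed events at t = 0, 10, 20, 30
```

The relevant observation for this project is that the kernel's work is proportional to the number of events processed, so any reuse strategy ultimately amounts to processing fewer events per evaluation.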
Some in-depth knowledge of the SCoPE simulation mechanisms would also be useful to understand how the SW part is simulated and how it interacts with the hardware event-driven simulation (i.e. the SystemC part).
The topics suggested for starting the search are the following:
- event-driven simulation,
- SystemC TLM,
- context-based learning,
- machine learning (neural and evolutionary approaches),
- Evolutionary Fuzzy Estimation.
Some search phrases that should be useful here:
- fast systemc simulation for design space exploration,
- fast system level simulation for design space exploration,
- system level simulation acceleration for design space exploration,
- system level multiple simulation optimization in design space exploration.
Some broad ideas that have come to mind: machine-learning-based estimators used in place of the real simulation, or embedded event-simulation strategies based on the different stages of simulation, i.e. detecting the simulation runs that are identical to previous ones and controlling the simulation steps with a Genetic Fuzzy Controller that knows how to proceed.
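The "estimator instead of simulation" idea could look like the sketch below. To keep it self-contained, a 1-nearest-neighbour lookup stands in for the learned model; a real setup would train a neural or evolutionary-fuzzy estimator, and the `threshold` and all names are hypothetical.

```python
import math

class SurrogateEstimator:
    # Toy surrogate: 1-nearest-neighbour over previously simulated design
    # points. The interface, not the model, is the point of the sketch.
    def __init__(self):
        self.samples = []                    # (feature_vector, estimates)

    def add(self, features, estimates):
        self.samples.append((features, estimates))

    def nearest(self, features):
        return min(self.samples, key=lambda s: math.dist(s[0], features))

def evaluate(features, surrogate, simulate, threshold):
    # Use the surrogate when a close-enough known point exists; fall back
    # to the expensive event-driven simulation otherwise and learn from it.
    if surrogate.samples:
        feats, est = surrogate.nearest(features)
        if math.dist(feats, features) <= threshold:
            return est                       # cheap estimate, no simulation
    est = simulate(features)                 # expensive simulation
    surrogate.add(features, est)
    return est
```

The open question this leaves is exactly the one above: how to decide, for a given landscape, when the surrogate's answer is trustworthy enough to skip simulating.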
One basic idea that seems straightforward is to perform only the simulation work corresponding to the changes in the underlying HW/SW architecture, skipping the rest and reusing the previous simulation results instead. But I must find the concrete strategies that implement that reasoning. The early proposal outlined last week by Hector is a good blueprint to consider.
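One possible concrete shape for that reasoning, sketched under strong assumptions: suppose the simulation can be split into phases, each depending only on a known subset of the configuration parameters. Then phase results can be cached by the values of just those parameters, and a new design point re-runs only the phases whose relevant parameters changed. The phase names, dependencies, and `run_phase` body below are all hypothetical.

```python
# Assumed phase -> parameter dependencies (illustrative only).
PHASE_DEPS = {
    "cpu_phase":   ("cpu_freq_mhz",),
    "cache_phase": ("cpu_freq_mhz", "cache_kb"),
    "bus_phase":   ("bus_width",),
}

_cache = {}
sim_calls = []   # records which phases actually ran, for inspection

def run_phase(name, config):
    # Stand-in for the real per-phase simulation work.
    sim_calls.append(name)
    return sum(config[p] for p in PHASE_DEPS[name])

def simulate_incremental(config):
    results = {}
    for name, deps in PHASE_DEPS.items():
        key = (name, tuple(config[p] for p in deps))
        if key not in _cache:            # only simulate changed phases
            _cache[key] = run_phase(name, config)
        results[name] = _cache[key]
    return results
```

Whether a real event-driven simulation decomposes this cleanly is exactly the question the concrete strategies must answer; in the worst case the phases are entangled and nothing can be skipped.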