Tag archive: evolutionary computation

Multiple-choice Knapsack Problem

In my new job at the University of Cantabria, where I work on an FP7 European research project called MULTICUBE, whose objective is to build an innovative design space exploration (DSE) tool for MPSoCs, I've found an unexpected application of genetic algorithms.

When you co-design a hardware/software application on an MPSoC architecture, you must first choose a suitable architecture (performing an architecture exploration by tuning parameters of the aforementioned architecture in order to optimize power, performance, area, etc.), and then you must map the functionality implemented by the application software optimally (run-time exploration) onto the different hardware resources (e.g. microprocessor cores), so that all the available resources are fully used within the constraints imposed by the application.

The process of mapping optimally can be seen as a classical Knapsack Problem (see a Google search on the subject), which is a problem in combinatorial optimization. The thing is that instead of the 0-1 knapsack problem, we've got a Multidimensional Multiple-choice Knapsack Problem (MMKP), one of the family of knapsack problems, with n items (e.g. application tasks) and m knapsacks (e.g. hardware resources) with capacities Wi.
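To make the formulation concrete, here is a minimal sketch of the usual MMKP data model (the class and method names are my own, purely illustrative, and have nothing to do with the MULTICUBE tools): items are organised into groups, exactly one item must be chosen per group, every item has a value and an m-dimensional weight vector, and a selection is feasible when the accumulated weights stay within the capacities Wi. One way to read this in the mapping setting is that each group corresponds to an application task, each item to one possible placement of that task, and each dimension to one hardware resource.

import java.util.List;

/** Minimal MMKP sketch: pick exactly one item per group so that the summed
 *  m-dimensional weights respect the capacities and the total value is maximal. */
public class Mmkp {

    /** An item with a scalar value and an m-dimensional resource requirement. */
    public record Item(double value, double[] weights) {}

    private final List<List<Item>> groups;   // n groups, one choice per group
    private final double[] capacities;       // capacities W1..Wm

    public Mmkp(List<List<Item>> groups, double[] capacities) {
        this.groups = groups;
        this.capacities = capacities;
    }

    /** selection[g] is the index of the chosen item in group g. */
    public boolean isFeasible(int[] selection) {
        double[] used = new double[capacities.length];
        for (int g = 0; g < groups.size(); g++) {
            double[] w = groups.get(g).get(selection[g]).weights();
            for (int d = 0; d < used.length; d++) {
                used[d] += w[d];
                if (used[d] > capacities[d]) return false;
            }
        }
        return true;
    }

    /** Total value of a selection (regardless of feasibility). */
    public double value(int[] selection) {
        double total = 0.0;
        for (int g = 0; g < groups.size(); g++) {
            total += groups.get(g).get(selection[g]).value();
        }
        return total;
    }
}

An exact solver has to consider combinations of one item per group, which is what makes the heuristic approaches discussed below attractive.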

There are several approaches to solving this (NP-complete) problem, including greedy ones, dynamic programming ones, heuristic ones, etc. But the ones that I'm interested in are the evolutionary computation approaches. It's not easy to find open-access papers which present such solution approaches. I've found a paper that explains a heuristic approach (constructing convex hulls), illustrated with the quality of service (QoS) management problem of Multiple Resource Multiple Dimension (MRMD) systems.

Solving the Multidimensional Multiple-choice Knapsack Problem by constructing convex hulls
Md Mostofa Akbar (a),(b), M. Sohel Rahman (b), M. Kaykobad (b), E.G. Manning (a), G.C. Shoja (a)

(a) Department of CSC, PANDA Lab, University of Victoria, Victoria, BC, Canada
(b) Department of CSE, BUET, Dhaka, Bangladesh

In the paper the authors state that:

Various other techniques like tabu search [14], simulated annealing [15] and genetic algorithms [16] can also be applied to solve the variants of Knapsack Problem. The genetic algorithm has the exponential worst-case complexity—it can explore all the items. Tabu search and simulated annealing are based on looking at the neighbours. These are costlier than the greedy approach used in HEU (heuristic). HEU uses a two-way interchange approach and searches candidates in the neighbourhood which yield better revenue, and changes one selection to another. But in tabu search, simulated annealing and genetic algorithm approach, current solution is moved to another solution by upgrading some and downgrading some. This upgrade and downgrade at the same step requires more time because we have to search all neighbouring combinations of current solution. In the next section, we present our main contribution by presenting a new heuristic to solve MMKP by constructing convex hulls.

The aforementioned references are:

[14] Dammeyer F, Voss S. Dynamic tabu list management using the reverse elimination method. Annals of Operations Research 1991.

[15] Drexl A. A simulated annealing approach to the multiconstraint zero-one knapsack problem. Computing 1988;40:1–8.

[16] Khuri S, Bäck T, Heitkötter J. The zero/one multiple knapsack problem and genetic algorithms. Proceedings of the ACM Symposium on Applied Computing; 1994.
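Just to make the genetic-algorithm idea behind approaches like [16] concrete, here is a minimal toy sketch (my own illustration with made-up problem data, not the authors' implementation): a 0/1 bit-string genome saying which items are packed, a penalty for exceeding any capacity, tournament selection, one-point crossover and bit-flip mutation.

import java.util.Random;

/** Toy GA for a zero/one multiconstraint knapsack: bit i says whether item i is packed.
 *  Infeasible genomes are discouraged with a simple penalty on capacity overshoot. */
public class KnapsackGa {

    static final Random RNG = new Random(42);

    // Illustrative problem data: profits[i], weights[j][i], capacities[j].
    static final double[] PROFITS    = {10, 7, 12, 8, 6, 11, 9, 5};
    static final double[][] WEIGHTS  = {
        {3, 2, 5, 4, 2, 6, 3, 1},   // resource 0 usage per item
        {4, 1, 6, 3, 2, 5, 4, 2}    // resource 1 usage per item
    };
    static final double[] CAPACITIES = {12, 11};

    /** Profit minus a penalty proportional to how far each capacity is exceeded. */
    static double fitness(boolean[] genome) {
        double profit = 0, penalty = 0;
        for (int j = 0; j < CAPACITIES.length; j++) {
            double load = 0;
            for (int i = 0; i < genome.length; i++) {
                if (genome[i]) load += WEIGHTS[j][i];
            }
            penalty += Math.max(0, load - CAPACITIES[j]);
        }
        for (int i = 0; i < genome.length; i++) {
            if (genome[i]) profit += PROFITS[i];
        }
        return profit - 10.0 * penalty;   // penalty weight chosen arbitrarily
    }

    static boolean[] tournament(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(pop.length)], b = pop[RNG.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        int popSize = 40, generations = 200, n = PROFITS.length;
        boolean[][] pop = new boolean[popSize][n];
        for (boolean[] g : pop)
            for (int i = 0; i < n; i++) g[i] = RNG.nextBoolean();

        for (int gen = 0; gen < generations; gen++) {
            boolean[][] next = new boolean[popSize][n];
            for (int k = 0; k < popSize; k++) {
                boolean[] p1 = tournament(pop), p2 = tournament(pop);
                int cut = RNG.nextInt(n);                   // one-point crossover
                boolean[] child = new boolean[n];
                for (int i = 0; i < n; i++) child[i] = i < cut ? p1[i] : p2[i];
                for (int i = 0; i < n; i++)                 // bit-flip mutation
                    if (RNG.nextDouble() < 1.0 / n) child[i] = !child[i];
                next[k] = child;
            }
            pop = next;
        }

        boolean[] best = pop[0];
        for (boolean[] g : pop) if (fitness(g) > fitness(best)) best = g;
        System.out.println("best fitness = " + fitness(best));
    }
}

The penalty weight is arbitrary here; how infeasible genomes are handled is one of the main design decisions in this kind of approach.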

So it seems this time genetic algorithms are not the best option at hand to solve the problem … :)

more biological inspiration for evolutionary algorithms

The fact is that the bulk of EA techniques relies on a very simplified model of biological genetics, using haploid crossover and point mutation. While a few researchers have investigated diploid crossover:

– Smith, R. E. (1988). An investigation of diploid genetic algorithms for adaptive search of nonstationary functions. TCGA Report No. 88001, University of Alabama, The Clearinghouse for Genetic Algorithms, Tuscaloosa,

there is still minimal application of the current knowledge of biological genetic processes. In particular, the understanding of how genes regulate each other's activity needs significant study with respect to artificial evolution.

Another fruitful area would be a comparative study of the transcription and translation mechanisms in an artificial evolution context. Similarly, the role that introns play in preserving useful schemata is an active research domain (although this has been the subject of greater study in the genetic programming community, where junk genes lead to bloat of the tree-representation genomes).

the diploid chromosome mechanism for Evolutionary Algorithms

(from the book Applied Evolutionary Algorithms in Java)

One interesting difference between biological evolution mechanisms and those used in most evolutionary algorithms is that organisms normally use a diploid chromosome structure, while EAs use a haploid design. First, however, we need a basic definition of the term “diploid”:

Pertaining to homologous chromosomes, where each chromosome number has a pair of chromosomes, such as the 23 pairs in humans totaling our 46-chromosome complement. (biology online)

A diploid chromosome structure is therefore composed of a pair of chromosomes. One advantage this offers is that memory can be incorporated into the individual's chromosome structure. In an EA context, the choice between the two values at each locus requires some form of dominance function. Lewis et al. (1998) give a useful comparison of the relative merits of haploid and diploid designs:

A Comparison of Dominance Mechanisms and Simple Mutation on Non-Stationary Problems. Jonathan Lewis, Emma Hart, Graeme Ritchie. Lecture Notes in Computer Science (1998)

The main proposed application area for a diploid EA is when the fitness function is time-varying, since the additional information stored in the chromosome can provide a secondary source of diversity within the population. In addition, the diploid structure may enable the retention of long-term memory of past solutions, which is useful if the fitness landscape is time-dependent. However, very little experimental work has yet been performed in this area.
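To make the dominance-function idea a bit more concrete, here is a minimal sketch of what a diploid individual could look like in a bit-string EA (my own illustration, not the mechanism from the book or from Lewis et al.): two chromosomes plus a dominance mask that decides, locus by locus, which allele is expressed; the unexpressed alleles are the latent memory that can resurface when the fitness landscape changes.

import java.util.Random;

/** Minimal diploid genotype for a bit-string EA: two chromosomes plus a dominance
 *  mask that decides, locus by locus, which allele is expressed in the phenotype.
 *  The unexpressed chromosome is carried along as latent memory of past solutions. */
public class DiploidIndividual {

    private static final Random RNG = new Random();

    private final boolean[] chromosomeA;
    private final boolean[] chromosomeB;
    private final boolean[] dominanceMask;   // true -> express A at this locus, false -> B

    public DiploidIndividual(int length) {
        chromosomeA = randomBits(length);
        chromosomeB = randomBits(length);
        dominanceMask = randomBits(length);
    }

    /** The haploid phenotype actually handed to the fitness function. */
    public boolean[] express() {
        boolean[] phenotype = new boolean[chromosomeA.length];
        for (int i = 0; i < phenotype.length; i++) {
            phenotype[i] = dominanceMask[i] ? chromosomeA[i] : chromosomeB[i];
        }
        return phenotype;
    }

    /** Mutating the dominance mask lets previously hidden alleles resurface,
     *  which is the usual argument for diploidy on time-varying fitness functions. */
    public void mutateDominance(double rate) {
        for (int i = 0; i < dominanceMask.length; i++) {
            if (RNG.nextDouble() < rate) dominanceMask[i] = !dominanceMask[i];
        }
    }

    private static boolean[] randomBits(int length) {
        boolean[] bits = new boolean[length];
        for (int i = 0; i < bits.length; i++) bits[i] = RNG.nextBoolean();
        return bits;
    }
}

Most of the published variations seem to differ mainly in how the dominance function itself is defined and whether it can adapt over time, which is exactly what Lewis et al. compare.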

On this point about time-varying (changing) optimization problems, I've searched for additional papers, since this is clearly an active line of research in the field. These are my findings:

Memory Enhanced Evolutionary Algorithms for Changing Optimization Problems. Jürgen Branke. Proceedings of the Congress on Evolutionary Computation (1999)

Evolutionary Approaches to Dynamic Optimization Problems - A Survey. Jürgen Branke. Evolutionary Algorithms for Dynamic Optimization Problems (1999)

Evolutionary Approaches to Dynamic Optimization Problems - Updated Survey. Jürgen Branke. Evolutionary Algorithms for Dynamic Optimization Problems (2001)

Evolutionary Algorithms for Non-Stationary Environments. Krzysztof Trojanowski, Zbigniew Michalewicz. Proc. of 8th Workshop: Intelligent Information systems (1999)

Non-stationary Function Optimization using Evolutionary Algorithms with a Case-based Memory. J. Eggermont, T. Lenaerts.

evolvable hardware, FPGAs and evolutionary algorithms

(from the book Applied Evolutionary Algorithms in Java)

Evolvable Hardware

One of the most exciting developments in EA systems has been the development of hardware that directly supports evolution-based search algorithms. In particular, the availability of Field Programmable Gate Array (FPGA) chips allows real-time configuration of logic circuits that can evaluate a specified function (Thompson, 1996). Work at the University of Sussex by Adrian Thompson demonstrated that an FPGA device could be automatically configured using a genetic algorithm to solve a signal processing problem. This work opened an entirely new domain for the application of evolutionary algorithms in two important ways: first, by showing how reconfigurable hardware could allow improved processing of an EA problem; and second, by showing that evolution could exploit the actual physics of a synthetic environment.

Thompson draws some fascinating conclusions regarding the power of such techniques:

A surprising hypothesis is suggested: Even within robust digital design, unconstrained evolution can produce circuits beyond the scope of conventional design rules. Previously it had been assumed that the domain of robust digital design was fully covered by conventional design rules. Given the undoubted utility of robust digital designs, it is an exciting possibility that evolution could explore novel regions of design space, containing circuits that may be better for some applications.

The main difficulty in utilising an EA technique, however, is the sensitivity of the evolved design to the exact environmental conditions under which it was evolved. For example, the resulting circuit is normally sensitive to temperature. Further work by Thompson has aimed to enhance the robustness of this process by evolving circuits across a range of temperature values.

The emerging field of evolvable hardware, in FPGA devices, is particularly exciting as it enables the possibility of real-time learning by EA systems. It also opens an entirely new domain of physical evolution in synthetic substrates, which takes advantage of the actual physics of electronic systems. Provided such systems can achieve robust solutions, this could lead to entirely new classes of hardware which fully exploit the capabilities of the silicon substrate, including their application to evolving analogue electronic devices. (NASA has been particularly interested in the possibilities offered by this technology and has arranged a series of workshops on Evolvable Hardware.)

multiobjective optimisation as an advanced Evolutionary Algorithms technique

(from the book Applied Evolutionary Algorithms in Java)

It is a fact of life that we rarely, if ever, achieve all our current goals, since the multiple goals of any agent are frequently in conflict with each other (e.g. a robot needing to take the shortest route to a target while also safely avoiding obstacles en route). Most real-world applications of EA methods contain conflicting sets of optimisation criteria; hence we require some means of achieving multiobjective optimisation.

Given some set of competing performance criteria, such as resource efficiency, cost, and speed, the system is said to be Pareto-optimal when no further improvement can be made in one criterion without degrading the quality of another.

The approaches to multiobjective optimisation can be divided into three categories:

(1) Plain aggregating approaches. Objectives are numerically combined into a single objective function to be optimised (e.g. the weighted-sum approach).

(2) Population-based non-Pareto approaches. Different objectives affect the selection or deselection of different parts of the population in turn (sort of like tabu search?).

(3) Pareto-based approaches. The population is ranked, making direct use of the definition of Pareto dominance (a minimal dominance check is sketched below).
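For the Pareto-based category, the key building block is the dominance test: one solution dominates another if it is no worse in every objective and strictly better in at least one. A minimal sketch, assuming all objectives are to be minimised (the class and the example values are mine):

import java.util.ArrayList;
import java.util.List;

/** Minimal Pareto utilities: a dominance test and a filter that keeps the
 *  non-dominated front of a set of objective vectors (all objectives minimised). */
public class Pareto {

    /** True if a dominates b: no worse in every objective, strictly better in at least one. */
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    /** Keeps only the solutions that no other solution dominates (the Pareto front). */
    static List<double[]> nonDominated(List<double[]> population) {
        List<double[]> front = new ArrayList<>();
        for (double[] candidate : population) {
            boolean dominated = false;
            for (double[] other : population) {
                if (other != candidate && dominates(other, candidate)) {
                    dominated = true;
                    break;
                }
            }
            if (!dominated) front.add(candidate);
        }
        return front;
    }

    public static void main(String[] args) {
        List<double[]> objectives = List.of(
            new double[]{1.0, 5.0},   // e.g. {cost, execution time}
            new double[]{2.0, 3.0},
            new double[]{3.0, 4.0},   // dominated by {2.0, 3.0}
            new double[]{4.0, 1.0}
        );
        for (double[] point : Pareto.nonDominated(objectives)) {
            System.out.println(point[0] + ", " + point[1]);
        }
    }
}

Pareto-based EAs such as Fonseca and Fleming's MOGA go one step further and rank each individual by the number of population members that dominate it, rather than just filtering the front as above.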

The categorization above is due to Carlos M. Fonseca (see his publications). A Google search has yielded the following interesting paper hits:

Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization. Carlos M. Fonseca, Peter J. Fleming. Genetic Algorithms: Proceedings of the Fifth International Conference (1993)

An Overview of Evolutionary Algorithms in Multiobjective Optimization. Carlos M. Fonseca, Peter J. Fleming. Evolutionary Computation (1995)