Title: Discrete Optimization using Mathematica
Author: Daniel Lichtblau
Description: Discrete Optimization using Mathematica
Category: Academic Articles & Supplements
URL: http://www.notebookarchive.org/2018-07-6z2565c/
DOI: https://notebookarchive.org/2018-07-6z2565c
Date Added: 2018-07-15
Date Last Modified: 2018-07-15
File Size: 103.45 kilobytes
Rights: Redistribution rights reserved



Discrete Optimization using Mathematica
Daniel Lichtblau
Wolfram Research, Inc., 100 Trade Center Dr., Champaign, Illinois 61820 (USA)
danl@wolfram.com
In SCI2002, Proceedings of the World Conference on Systemics, Cybernetics, and Informatics. Volume 16, N. Callaos, T. Ebisuzaki, B. Starr, J. M. Abe, D. Lichtblau, eds., pages 169-174. International Institute of Informatics and Systemics. July 2002.
Abstract
Many important classes of optimization problems are discrete in nature. Examples are the standard problems of integer programming (including the ever important "knapsack problems"), permutation assignment problems (e.g., the notorious "traveling salesman problem"), set coverings, set partitioning, and so on. Problems of practical interest are frequently too large to be solved by brute force enumeration techniques. Hence there is a need for more refined methods that use various tactics for sampling and searching the domain in pursuit of optimal solutions.
Version 4.2 of Mathematica has a flexible package for performing global optimization. It includes powerful functionality from the category of evolutionary programming. Ways to apply this technology to various problems in discrete optimization will be discussed. We will present details of how to code problems so that built-in Mathematica functions can digest them and will illustrate with a variety of examples. We will also discuss some practical tuning considerations.
Keywords: evolutionary algorithms, discrete optimization, set partitioning, subset covering, quadratic assignment problem.
Introduction
Optimization problems that involve discrete parameters are quite common. Examples include task assignments, knapsack problems, and so forth. In Mathematica we now have some powerful global optimization technology, described in [1], and contained in a function called NMinimize. Among its features is a method from the family of evolutionary algorithms (see [2] for a broad discussion of these) based on the "differential evolution" method presented in [6]. That method was designed primarily for continuous optimization problems. It differs from standard genetic methods in that vector elements take on values in a continuum; this was motivated by a need to represent continuous variables in a way that discrete-valued genes cannot readily do. In other respects, e.g. mutation and mating, this method resembles a genetic algorithm.
It turns out that differential evolution is a powerful tool for handling constrained optimization, stochastic problems (see [1] for an example), and, of all things, discrete optimization. In this paper we demonstrate with several examples how one might attack discrete problems. It must be stated that these methods are heuristic. We generally do not know that the results returned are optimal (unless shown to be so by other means). What we claim is that the methods to be presented are practical ways to get "near" optimal results without resorting to problem-specific software.
Acknowledgements
Much of the code on which this paper rests was developed by Sergei Shebalov (summer intern at Wolfram Research, Inc., 1999, 2000) and Brett Champion of Wolfram Research, Inc.
There is a discussion of the technology behind NMinimize in Brett Champion's paper for this conference [1]. The primary focus there is to describe the general technology and provide examples of continuous optimization problems.
All timings indicated below are for a 1.4 GHz machine running a development Mathematica kernel under the Linux operating system (Mathematica (TM) is a registered trademark of Wolfram Research, Incorporated [7]). Many of these examples were also presented in [3].
Initialization
We begin by loading the add-on package that has the NMinimize function.
In[]:=
Needs["NumericalMath`NMinimize`"];
Off[General::spell, General::spell1];
In order to do discrete optimizations effectively we will disable various mechanisms that are in place to determine "convergence". This will force the program to work for the full specified maximum number of iterations.
In[]:=
SetOptions[NMinimize, AccuracyGoal -> Infinity, PrecisionGoal -> Infinity, Compiled -> False];
A simple example
We start with a basic coin problem. We are given 143267 coins in pennies, nickels, dimes, and quarters, of total value $12563.29, and we are to determine how many coins might be of each type. There are several ways one might set up such a problem in NMinimize. We will try to minimize the sum of squares of differences between actual values and desired values of the two linear expressions implied by the information above. For our search space we will impose obvious range constraints on the various coin types. We will want to alter the seeding of the random number generator (this changes the random initial parameters used to seed the optimization code) so we specify the method with this option added. We will do 10 runs of this.
In[]:=
Timing[Table[{min, sol} = NMinimize[{(p + 5 n + 10 d + 25 q - 1256329)^2 + (p + n + d + q - 143267)^2, {p, n, d, q} ∈ Integers, 0 <= p <= 1256329, 0 <= n <= 1256329/5, 0 <= d <= 1256329/10, 0 <= q <= 1256329/25}, {p, n, d, q}, MaxIterations -> 1000, Method -> {DifferentialEvolution, RandomSeed -> Random[Integer, 1000]}], {10}]]
Out[]=
{223.53 Second, {{0., {d -> 6362, n -> 50489, p -> 50839, q -> 35577}}, {0., {d -> 43218, n -> 57917, p -> 21614, q -> 20518}}, {0., {d -> 99194, n -> 38267, p -> 3004, q -> 2802}}, {0., {d -> 45794, n -> 43001, p -> 32434, q -> 22038}}, {0., {d -> 40522, n -> 67331, p -> 15454, q -> 19960}}, {0., {d -> 51018, n -> 40919, p -> 30904, q -> 20426}}, {0., {d -> 34674, n -> 62009, p -> 23544, q -> 23040}}, {0., {d -> 78822, n -> 22550, p -> 28834, q -> 13061}}, {0., {d -> 65434, n -> 22865, p -> 36939, q -> 18029}}, {0., {d -> 54378, n -> 38981, p -> 30419, q -> 19489}}}}
We obtained valid solutions each time. Using only, say, 400 iterations we tend to get solutions about half the time and "near" solutions the other half (wherein either the number of coins and/or total value is off by a very small amount). Note that this type of problem is one of constraint satisfaction. An advantage in these is that we can discern from the proposed solution whether it is valid; those are exactly the cases for which we get an objective value of zero, with all constraints satisfied.
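One can verify independently that a reported solution really does satisfy both linear conditions; the Python sketch below (ours, not part of the original notebook) checks the first solution from the run above.

```python
# Independent check that a proposed coin count is a valid solution: the
# total face value must be 1256329 cents ($12563.29) and the total number
# of coins must be 143267.
def coin_residual(p, n, d, q):
    """Sum of squared violations of the two linear conditions; 0 means valid."""
    value = p + 5 * n + 10 * d + 25 * q - 1256329   # total value in cents
    count = p + n + d + q - 143267                  # total number of coins
    return value**2 + count**2

# The first solution returned by NMinimize above:
print(coin_residual(p=50839, n=50489, d=6362, q=35577))  # → 0
```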
Partitioning a set
We illustrate this sort of problem with an old example from computational folklore. We are to partition the integers from 1 to 100 into two sets of 50, such that the sums of the square roots in each set are as close to equal as possible. There are various ways to set this up as a problem for NMinimize, and we illustrate one such below.
We will utilize a simple way to choose 50 elements from the set of 100. This is an approach we often use for adapting permutation problems for NMinimize. We work with 100 real values from 0 to 1 and their ordering (from the Mathematica Ordering function) determines which is to be regarded as first, which as second, and so on. Note that this is different from the last example in that we now work with continuous variables even though the problem itself involves a discrete set.
In[]:=
splitRange[vec_] := With[{newvec = Ordering[vec], halflen = Floor[Length[vec]/2]}, {Take[newvec, halflen], Drop[newvec, halflen]}]
Once we have a way to associate a pair of subsets to a given set of 100 values in the range from 0 to 1, we form our objective function. A convenient choice is simply the absolute value of a difference; this is often the case in optimization problems.
In[]:=
obfun[vec:{__Real}] := With[{vals = splitRange[vec]}, Abs[Apply[Plus, Sqrt[N[First[vals]]]] - Apply[Plus, Sqrt[N[Last[vals]]]]]]
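For readers following along outside Mathematica, here is a rough Python analogue of splitRange and obfun (the Python names are ours):

```python
import math

def split_range(vec):
    """Mimic splitRange: Ordering[vec] gives the (1-based) positions of the
    elements of vec in increasing order; the first half of that ordering is
    one subset of 1..n, the remainder is the other."""
    order = sorted(range(1, len(vec) + 1), key=lambda i: vec[i - 1])
    half = len(vec) // 2
    return order[:half], order[half:]

def obfun(vec):
    """Mimic obfun: absolute difference of the square-root sums of the
    two subsets induced by vec."""
    first, second = split_range(vec)
    return abs(sum(math.sqrt(i) for i in first) -
               sum(math.sqrt(i) for i in second))

# A vector of 100 random reals in [0,1] thus encodes a partition of 1..100.
```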
We now put these components together into a function that provides our set partition.
In[]:=
getHalfSet[n_, opts___Rule] := Module[{vars, xx, ranges, nmin, vals}, vars = Array[xx, n]; ranges = Map[{#, 0, 1} &, vars]; {nmin, vals} = NMinimize[obfun[vars], ranges, opts]; {nmin, Map[Sort, splitRange[vars /. vals]]}]
If we do not specify otherwise, the default behavior will be to use the DifferentialEvolution method; this is a sign of sound automatic behavior of NMinimize [1]. All the same we explicitly set that method so that we can more readily pass it nondefault method-specific options. Finally, we set this to run many iterations with a lot of search points.
In[]:=
Timing[{min, {s1, s2}} = getHalfSet[100, Method -> {DifferentialEvolution, CrossProbability -> .9, SearchPoints -> 100, PostProcess -> False}, MaxIterations -> 10000]]
Out[]=
{916.523 Second, {5.30582×10^-6, {{1,4,6,8,9,13,17,20,23,24,28,29,30,33,35,36,37,38,40,41,42,46,47,48,50,51,52,54,55,56,57,58,61,63,66,67,69,71,72,73,74,75,77,78,81,83,92,94,97,99},{2,3,5,7,10,11,12,14,15,16,18,19,21,22,25,26,27,31,32,34,39,43,44,45,49,53,59,60,62,64,65,68,70,76,79,80,82,84,85,86,87,88,89,90,91,93,95,96,98,100}}}}
We obtain a fairly small value for our objective function.
A subset covering problem
The problem below was posed in the news group comp.soft-sys.math.mathematica. We are given a set of sets, each containing integers between 1 and 64. Their union is the set of all integers in that range, and we want to find a set of 12 subsets that covers that entire range. For those interested in trying this problem, the input in electronic form may be found in [3].
In[]:=
subsets={{1,2,4,8,16,32,64},{2,1,3,7,15,31,63},{3,4,2,6,14,30,62},{4,3,1,5,13,29,61},{5,6,8,4,12,28,60},{6,5,7,3,11,27,59},{7,8,6,2,10,26,58},{8,7,5,1,9,25,57},{9,10,12,16,8,24,56},{10,9,11,15,7,23,55},{11,12,10,14,6,22,54},{12,11,9,13,5,21,53},{13,14,16,12,4,20,52},{14,13,15,11,3,19,51},{15,16,14,10,2,18,50},{16,15,13,9,1,17,49},{17,18,20,24,32,16,48},{18,17,19,23,31,15,47},{19,20,18,22,30,14,46},{20,19,17,21,29,13,45},{21,22,24,20,28,12,44},{22,21,23,19,27,11,43},{23,24,22,18,26,10,42},{24,23,21,17,25,9,41},{25,26,28,32,24,8,40},{26,25,27,31,23,7,39},{27,28,26,30,22,6,38},{28,27,25,29,21,5,37},{29,30,32,28,20,4,36},{30,29,31,27,19,3,35},{31,32,30,26,18,2,34},{32,31,29,25,17,1,33},{33,34,36,40,48,64,32},{34,33,35,39,47,63,31},{35,36,34,38,46,62,30},{36,35,33,37,45,61,29},{37,38,40,36,44,60,28},{38,37,39,35,43,59,27},{39,40,38,34,42,58,26},{40,39,37,33,41,57,25},{41,42,44,48,40,56,24},{42,41,43,47,39,55,23},{43,44,42,46,38,54,22},{44,43,41,45,37,53,21},{45,46,48,44,36,52,20},{46,45,47,43,35,51,19},{47,48,46,42,34,50,18},{48,47,45,41,33,49,17},{49,50,52,56,64,48,16},{50,49,51,55,63,47,15},{51,52,50,54,62,46,14},{52,51,49,53,61,45,13},{53,54,56,52,60,44,12},{54,53,55,51,59,43,11},{55,56,54,50,58,42,10},{56,55,53,49,57,41,9},{57,58,60,64,56,40,8},{58,57,59,63,55,39,7},{59,60,58,62,54,38,6},{60,59,57,61,53,37,5},{61,62,64,60,52,36,4},{62,61,63,59,51,35,3},{63,64,62,58,50,34,2},{64,63,61,57,49,33,1}};
Union[Flatten[subsets]] == Range[64]
Out[]=
True
Method 1
We set up our objective function as follows. We represent a set of 12 subsets of this master set by a set of 12 integers in the range from 1 to the number of subsets (which in this example is also 64). This set is allowed to contain repetitions. Our objective function to minimize will be based on how many elements from 1 through 64 are "covered". Specifically it will be 2 raised to the #(elements not covered) power. The code below does this. As it is a problem that may require many iterations and a relatively large search space, we allow for nondefault option values for these parameters.
In[]:=
f[n:{___Integer}, set_, mx_Integer] := 2^Length[Complement[Range[mx], Union[Flatten[set[[n]]]]]]
In[]:=
spanningSets[set_, nsets_, iter_, sp_, cp_] := Module[{vars, rnges, max = Length[set], nmin, vals}, vars = Array[xx, nsets]; rnges = Map[(1 ≤ # ≤ max) &, vars]; {nmin, vals} = NMinimize[{f[vars, set, max], Append[rnges, Element[vars, Integers]]}, vars, MaxIterations -> iter, Method -> {DifferentialEvolution, SearchPoints -> sp, CrossProbability -> cp}]; vals = Union[vars /. vals]; {nmin, vals}]
In[]:=
Timing[{min, sets} = spanningSets[subsets, 12, 700, 200, .94]]
Length[Union[Flatten[subsets[[sets]]]]]
Out[]=
{196.7 Second, {1., {1,15,16,21,26,30,34,41,45,54,59,60}}}
64
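The penalty function f is simple enough to restate in Python (our sketch of the same idea, not the original code):

```python
def coverage_penalty(choice, subsets, mx):
    """Mimic f: 2 raised to the number of elements of 1..mx left uncovered
    by the chosen subsets.  choice holds 1-based indices into subsets and
    may contain repeats.  A full cover gives the minimal value 2^0 == 1."""
    covered = set()
    for i in choice:
        covered.update(subsets[i - 1])
    uncovered = set(range(1, mx + 1)) - covered
    return 2 ** len(uncovered)

# Tiny illustration with three subsets of 1..4:
small = [[1, 2], [2, 3], [3, 4]]
print(coverage_penalty([1, 3], small, 4))  # covers everything → 1
print(coverage_penalty([1, 1], small, 4))  # leaves {3, 4} uncovered → 4
```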
This example has an unfortunate drawback. It requires a nondefault value for the CrossProbability option in the DifferentialEvolution method of NMinimize. It is not entirely trivial to find useful values for such options, or even to know which such options to alter from default values. We hope to ameliorate this need in future work by providing an adaptive evolution that modifies its own parameters as it proceeds.
Method 2
This problem may also be tackled in other ways. One is to cast it as a standard knapsack problem. We will show how to do this. First we transform our set of subsets into a "bit vector" representation; each subset is represented by a positional list of zeros and ones.
In[]:=
densevec[spvec_, len_] := Module[{vec = Table[0, {len}]}, Do[vec[[spvec[[j]]]] = 1, {j, Length[spvec]}]; vec]
mat = Map[densevec[#, 64] &, subsets];
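The same bit-vector conversion can be sketched in Python (our illustration), together with the coverage condition expressed as an inner product:

```python
def densevec(spvec, length):
    """Mimic densevec: positional 0/1 vector for a subset of 1..length."""
    vec = [0] * length
    for pos in spvec:
        vec[pos - 1] = 1
    return vec

# A 0-1 selection vector x covers everything exactly when every component
# of the inner product x . mat is strictly positive.
mat = [densevec(s, 4) for s in [[1, 2], [2, 3], [3, 4]]]
x = [1, 0, 1]  # choose the first and third subsets
totals = [sum(xi * row[j] for xi, row in zip(x, mat)) for j in range(4)]
print(totals)                        # → [1, 1, 1, 1]
print(all(t >= 1 for t in totals))   # → True: a valid cover
```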
We now work with 0-1 variables and minimize their sum, subject to the constraint that an appropriate inner product be strictly positive.
In[]:=
spanningSets2[set_, iter_, sp_, seed_, cp_:.5] := Module[{vars, rnges, max = Length[set], nmin, vals}, vars = Array[xx, max]; rnges = Map[(0 ≤ # ≤ 1) &, vars]; {nmin, vals} = NMinimize[{Apply[Plus, vars], Join[rnges, {Element[vars, Integers]}, Thread[vars.set ≥ Table[1, {max}]]]}, vars, MaxIterations -> iter, Method -> {DifferentialEvolution, CrossProbability -> cp, SearchPoints -> sp, RandomSeed -> seed}]; vals = vars /. vals; {nmin, vals}]
In[]:=
Timing[{min,sets}=spanningSets2[mat,2000,100,0,.9]]
Out[]=
{1930.4Second,{12.,{0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,1}}}
We have again obtained a result that uses 12 subsets. We check that it covers the entire range.
In[]:=
Apply[Plus,Map[Min[#,1]&,sets.mat]]
Out[]=
64
We see that this method was much slower. Experience indicates that it needs a lot of iterations and careful setting of the CrossProbability option. So at present NMinimize has difficulties with this formulation. All the same it is encouraging to realize that one may readily set this up as a standard knapsack problem; over time NMinimize internals may improve to the point where it can more efficiently handle the problem in this way.
An assignment problem
Our next example is a benchmark from the literature of discrete optimization. We are given two square matrices. We want a permutation such that, when it is applied to the rows and columns of the second matrix, that matrix is multiplied element-wise with corresponding elements of the first, and all elements are summed, we get a minimum. The matrices we use have 25 rows; they may be found in [4]. This is known as the "NUG25" example and it is an example of a quadratic assignment problem (QAP). The optimal result is known and was verified by a large parallel computation. The actual matrices are in a hidden cell below. One should recognize that the methods of handling this problem can, with minor modification, be applied to related ones such as the traveling salesman problem. In general, problems that require an optimal permutation may be amenable to the methods described below.
Method 1
Our first problem is to decide how one can make a set of values into a permutation. One approach, as with the set partition problem, is to take their Ordering. In order that this avoid collisions we thus work with real values.
In[]:=
permuteMatrix[mat_, perm_] := mat[[perm, perm]]
QAP[mat1_, mat2_, cp_, it_, sp_, sc_] := Module[{len = Length[mat1], obfunc, vars, vv, nmin, vals, rnges}, vars = Array[vv, len]; rnges = Map[{#, 0, 1} &, vars]; obfunc[vec:{__Real}] := Apply[Plus, Flatten[mat1*permuteMatrix[mat2, Ordering[vec]]]]; {nmin, vals} = NMinimize[obfunc[vars], rnges, MaxIterations -> it, Method -> {DifferentialEvolution, SearchPoints -> sp, CrossProbability -> cp, ScalingFactor -> sc, PostProcess -> False}]; {nmin, Ordering[vars /. vals]}]
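The Ordering-based objective can be mirrored in Python (our illustrative port; the two small matrices here merely stand in for the NUG25 data):

```python
def qap_objective(mat1, mat2, vec):
    """Sum of mat1[i][j] * mat2[p[i]][p[j]], where the permutation p is the
    ordering (argsort) of the real vector vec, as in permuteMatrix/Ordering."""
    perm = sorted(range(len(vec)), key=lambda i: vec[i])  # 0-based Ordering
    n = len(mat1)
    return sum(mat1[i][j] * mat2[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

# 2x2 example: vec = [0.9, 0.1] orders index 1 before index 0,
# i.e. it swaps the rows and columns of mat2.
a = [[1, 2], [3, 4]]
b = [[10, 20], [30, 40]]
print(qap_objective(a, b, [0.9, 0.1]))  # → 200
```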
Again we face the issue that this problem requires nonstandard values for options to the DifferentialEvolution method, in order to achieve a reasonable result. While this is regrettable it is clearly better than having no recourse at all. The specific values we use were found by Brett Champion by running many short trials and noticing which parameter values seemed to work best (it was also his idea to try nondefault values in the first place).
The idea behind having a relatively small CrossProbability is that we do not want many crossovers in mating a pair of vectors. This in turn is because of the way we define a permutation. In particular it is not just values but relative values across the entire vector that give us the permutation. Thus disrupting more than a few, even when mating a pair of good vectors, is likely to give a bad vector.
Below is the optimal permutation and the objective function value we obtain therefrom. We also show as baseline the result from applying no permutation, and also the result of applying several random permutations. This gives some idea of how to gauge the results below.
In[]:=
p = {5,11,20,15,22,2,25,8,9,1,18,16,3,6,19,24,21,14,7,10,17,12,4,23,13};
best = Apply[Plus, Flatten[aa*permuteMatrix[bb, p]]]
baseline = Apply[Plus, Flatten[aa*bb]]
randomvals = Table[perm = Ordering[Table[Random[], {25}]]; Apply[Plus, Flatten[aa*permuteMatrix[bb, perm]]], {10}]
Out[]=
3744
4838
{5250,5076,5050,4628,4858,5140,5058,5250,4962,4870}
A much more strenuous run over random permutations may be indicative of how hard it is to get good results by randomized search.
In[]:=
SeedRandom[1111];
Timing[randomvals = Table[perm = Ordering[Table[Random[], {25}]]; Apply[Plus, Flatten[aa*permuteMatrix[bb, perm]]], {1000000}];]
Min[randomvals]
Out[]=
{1130.2 Second, Null}
4284
We see that the baseline permutation (do nothing) and random permutations tend to be far from optimal, and even a large sampling will get us only about half way from baseline to optimal. A relatively brief run with good values for the algorithm parameters, on the other hand, yields something better still.
In[]:=
SeedRandom[11111];Timing[{min,perm}=QAP[aa,bb,.06,200,40,.6]]
Out[]=
{17.33Second,{4066,{5,3,18,11,2,17,8,9,23,1,22,10,6,19,13,12,20,14,7,25,24,4,21,16,15}}}
We now try a longer run.
In[]:=
SeedRandom[11111];Timing[{min,perm}=QAP[aa,bb,.06,5000,100,.6]]
Out[]=
{1349.22Second,{3836,{5,2,25,11,24,12,17,4,18,21,20,3,8,14,16,15,23,9,6,7,13,22,10,19,1}}}
Method 2
Once again we have a very different approach to this problem. The idea is to generate a permutation as a "shuffle" of a set of integers. We have for vector a set of integers from 1 to len, the length of the set in question. The range restriction is the only stipulation and in particular it may contain repeats. We associate to it a unique permutation as follows. We initialize a "deck" to contain len zeros. The first element in deck is then set to the first element in vector. We also have a "marker" set telling us that that first element is now "used". We iterate over subsequent elements in deck, setting them to the corresponding values in vector provided those values are not yet used. Once done with this iteration we go through the elements that have no values, assigning them in sequence the values that have not yet been assigned.
This notion of associating a list with repeats to a distinct shuffle has a clear drawback insofar as earlier elements are more likely to be assigned corresponding values in vector. All the same this provides a reasonable way to make a chromosome vector containing possible repeats correspond to a permutation. Moreover, one can see that any sensible mating process of two chromosomes will less drastically alter the objective function than would be the case in method 1, as the corresponding permutation now depends far less on overall ordering in the chromosomes. The advantage is thus that this method will be less in need of intricate crossover-probability parameter tuning.
In[]:=
getPerm = Compile[{{vec, _Integer, 1}}, Module[{p1, p2, len = Length[vec], k}, p1 = p2 = Table[0, {len}]; Do[k = vec[[j]]; If[p2[[k]] == 0, p2[[k]] = j; p1[[j]] = k;], {j, len}]; k = 1; Do[If[p1[[j]] == 0, While[p2[[k]] ≠ 0, k++]; p1[[j]] = k; p2[[k]] = j], {j, len}]; p1]];
In[]:=
QAP2[mat1_, mat2_, cp_, it_, sp_] := Module[{len = Length[mat1], obfunc, vars, vv, nmin, vals, constraints}, vars = Array[vv, len]; constraints = Prepend[Map[(1 ≤ # ≤ len) &, vars], Element[vars, Integers]]; obfunc[vec:{__Integer}] := Apply[Plus, Flatten[mat1*permuteMatrix[mat2, getPerm[vec]]]]; {nmin, vals} = NMinimize[{obfunc[vars], constraints}, vars, Method -> {DifferentialEvolution, SearchPoints -> sp, CrossProbability -> cp, PostProcess -> False}, MaxIterations -> it]; {nmin, getPerm[vars /. vals]}]
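The shuffle decoding in getPerm translates directly to Python; this port (ours, for illustration) uses 1-based bookkeeping arrays to stay close to the Mathematica code.

```python
def get_perm(vec):
    """Mimic getPerm: decode an integer vector (entries in 1..len, repeats
    allowed) into a permutation.  First pass: each position takes its own
    value if that value is still unused.  Second pass: unfilled positions
    receive the smallest values not yet assigned, in order."""
    n = len(vec)
    p1 = [0] * (n + 1)  # p1[j] = value assigned to position j (1-based)
    p2 = [0] * (n + 1)  # p2[k] = position that claimed value k
    for j in range(1, n + 1):
        k = vec[j - 1]
        if p2[k] == 0:
            p2[k] = j
            p1[j] = k
    k = 1
    for j in range(1, n + 1):
        if p1[j] == 0:
            while p2[k] != 0:
                k += 1
            p1[j] = k
            p2[k] = j
    return p1[1:]

print(get_perm([2, 2, 1]))  # → [2, 3, 1]
```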
Though we suspect that tuning is less important for this method, we nevertheless show details of a tuning run. We take several short runs with a given crossover probability parameter value, average them, and see which values seem to work best. We then try a longer run using one such value. This method is generally useful for problems where default settings may not be optimal.
In[]:=
SeedRandom[111111];
Timing[vals = Table[{j, Table[{min, perm} = QAP2[aa, bb, j/100, 200, 10]; min, {5}]}, {j, 5, 95, 5}];]
v2 = Map[{#[[1]], Apply[Plus, #[[2]]]/5.} &, vals]
Out[]=
{465.17 Second, Null}
{{5,4320.4},{10,4362.8},{15,4320.8},{20,4292.},{25,4333.6},{30,4328.8},{35,4353.2},{40,4304.8},{45,4351.2},{50,4359.2},{55,4342.4},{60,4329.2},{65,4312.8},{70,4321.2},{75,4325.6},{80,4262.8},{85,4278.},{90,4238.4},{95,4225.6}}
From this we see that large values seem to work at least slightly better than other values. A similar test run over values in the range 0.90 through 0.98 indicates that 0.96 might be best for our purposes.
In[]:=
SeedRandom[111111];{min,perm}=QAP2[aa,bb,.96,20000,200]
Out[]=
{3788,{6,3,21,23,16,12,25,9,10,2,19,17,15,7,20,22,1,4,5,8,11,13,18,24,14}}
This gets us quite close to the global minimum with a scant two dozen lines of code. While it is mildly more complicated than method 1 above, it has one clear advantage. Both intuition and practical experience indicate that it is less in need of careful tuning of relatively obscure algorithm parameters. Also note that much of the code complexity is in associating a unique permutation to a given vector in our search space. This part is likely to be problematic for any algorithm. The fact that we employ a genetic one makes it all the more so because we then need it to behave well with respect to a mating process, the internals of which are a black box. Hence a method that is relatively stable with respect to tuning parameters is all the more desirable.
Using method 1 above, Brett Champion has achieved even better results for this problem. He has found a permutation that gives a function value of 3752. But the important point is that we have obtained a quite practical heuristic improvement in the objective function relative to random values, and this idea may apply even in cases where the absolute minimizer is not already known.
What lies beyond
This example comes from a family, the hardest of which is NUG30. The salient features are that the baseline and "typical" random permutations give values around 8000, and the minimizing permutation gives 6124. With some tuning of DifferentialEvolution parameters and reasonably long runs we have obtained values within 100 of that minimum.
Other approaches for knapsack and related problems
It should be noted that knapsack problems that have exclusively equality constraints can often be done much more readily using lattice reduction methods. See [4] for a simple example of how one might tackle, for example, a "subset-sum" problem using Mathematica. The set cover problem above requires inequalities and hence is not amenable to the lattice approach.
A related application of lattice reduction is to find "small" solutions to a system of linear integer equations. Details may be found in [5].
Summary and future work
We have seen several examples of discrete optimization problems and how one might tackle them in Mathematica. The underlying method we utilize is a member of the evolutionary family of optimization algorithms and we have seen that it is reasonably well adapted for these problems. We have discussed various ways to cast our examples so that simple Mathematica code using NMinimize may be applied. Indeed, we noticed that there are often very different ways to approach the same combinatorial optimization problem.
This uses technology presently under development and so it is to be expected that it might improve over time. For example, we hope to make it adaptive so that one need not tune the parameters (in essence, they become part of the chromosomes and thus themselves evolve). We also might allow an automated multilevel approach wherein several short runs (similar to tuning runs) will be used to initialize values for a longer run. This said, it is clear from the examples that NMinimize is already a powerful tool for handling nontrivial combinatorial optimization problems.
References
[1] B. Champion (2002). Numerical optimization in Mathematica: an insider's view of NMinimize. These proceedings.
[2] C. Jacob (2001). Illustrating Evolutionary Computation with Mathematica. Morgan Kaufmann.
[3] D. Lichtblau (2001). NMinimize in Mathematica 4.2. 2001 Mathematica Developer's Conference notebook. An electronic version may be found at: http://library.wolfram.com/conferences/devconf2001/lichtblau/lichtblau.nb
[4] D. Lichtblau (2002). Usenet news group comp.soft-sys.math.mathematica communication. Archived at: http://library.wolfram.com/mathgroup/archive/2002/Feb/msg00410.html
[5] D. Lichtblau (2002). Revisiting strong Gröbner bases over Euclidean domains. Submitted.
[6] K. Price and R. Storn (1997). Differential evolution. Dr. Dobb's Journal, April 1997. pp. 18–24, 78.
[7] S. Wolfram (1999). The Mathematica Book (4th edition). Wolfram Media/Cambridge University Press.


Cite this as: Daniel Lichtblau, "Discrete Optimization using Mathematica" from the Notebook Archive (1999), https://notebookarchive.org/2018-07-6z2565c
