This LNCS volume contains the papers presented at the 8th Simulated Evolution and Learning (SEAL 2010) Conference held during December 1-4, 2010 at the Indian Institute of Technology Kanpur in India. SEAL is a prestigious international conference series in evolutionary optimization and machine learning. This biennial event started in Seoul, South Korea in 1996 and was thereafter held in Canberra, Australia in 1998, Nagoya, Japan in 2000, Singapore in 2002, Busan, South Korea in 2004, Hefei, China in 2006 and Melbourne, Australia in 2008.

SEAL 2010 received 141 paper submissions in total from 30 countries. After a rigorous peer-review process involving 431 reviews in total (averaging a little more than 3 reviews per paper), 60 full-length and 19 short papers were accepted for presentation (both oral and poster) at the conference. The full-length papers alone correspond to a 42.6% acceptance rate and short papers add another 13.5%.

The papers included in this LNCS volume cover a wide range of topics in simulated evolution and learning. The accepted papers have been classified into the following main categories: (a) theoretical developments, (b) evolutionary algorithms and applications, (c) learning methodologies, (d) multi-objective evolutionary algorithms and applications, (e) hybrid algorithms and (f) industrial applications.
The conference featured three distinguished keynote speakers. Narendra Karmarkar's talk on "Beyond Convexity: New Perspectives in Computational Optimization" focused on providing new theoretical concepts for non-convex optimization, indicated a rich connection between optimization and mathematical physics, and also showed a deep significance of advanced geometry to optimization. The advancement of optimization theory for non-convex problems is beneficial for meta-heuristic optimization algorithms such as evolutionary algorithms. Manindra Agrawal's talk on "PRIMES is in P" provided a much-improved version of his celebrated and ground-breaking 2002 work on a polynomial-time algorithm for testing prime numbers. The theoretical computation work presented in this keynote lecture should be motivating for the evolutionary optimization and machine learning community at large.