TUTORIALS

The OpenMOLE platform for model exploration and validation

organized by Juste Raimbault, Romain Reuillon, and Mathieu Leclaire

The OpenMOLE platform is free and open-source software, developed for and by researchers for more than 10 years at the Complex Systems Institute in Paris. The main goal of the platform is to help researchers from any discipline who are interested in simulation models to explore and validate those models. It is based on three complementary axes: (i) model embedding for any programming language; (ii) transparent access to high-performance computing infrastructures; and (iii) state-of-the-art methods in sensitivity analysis, design of experiments, exploration, and validation (see https://openmole.org/). A high-level workflow language based on Scala facilitates model integration and numerical experiments. The platform can be run as a server on any computer, but can also be set up as a web service to easily monitor heavy computational experiments. This platform and its associated model validation techniques and practices have value for the Artificial Life community, since simulation models are a key component of ALife research. The aim of this tutorial is to provide first hands-on, basic knowledge of how to set up numerical experiments, starting with simple model embedding and standard designs of experiments for parameter space exploration. It will then illustrate more advanced techniques, including model calibration using a multi-objective genetic algorithm and diversity search applied to model phase space. Finally, new methods for spatial sensitivity analysis will be briefly presented and illustrated.
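
To give a flavor of what such an experiment involves: OpenMOLE workflows are written in the platform's own Scala-based DSL, so the plain-Python sketch below is not OpenMOLE code. It only illustrates the kind of full-factorial design of experiments that the platform expresses in a few lines and dispatches transparently to computing infrastructures; the toy model and its parameters are invented for the example.

```python
import itertools

# Hypothetical toy model: two parameters in, two outputs out.
def model(alpha, beta):
    return {"fitness": (alpha - 0.3) ** 2 + beta, "cost": alpha * beta}

# Full-factorial design of experiments over a small parameter grid --
# the kind of sampling OpenMOLE describes declaratively and can farm
# out to clusters or grids instead of running in a local loop.
alphas = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0
betas = [0.5, 1.0, 2.0]

results = [
    {"alpha": a, "beta": b, **model(a, b)}
    for a, b in itertools.product(alphas, betas)
]

for row in results[:3]:
    print(row)
```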

Website

TBA

Phylogenies: how and why to track them in artificial life

organized by Emily Dolson, Matthew Andres Moreno, and Alexander Lalejini

Phylogenies (i.e., ancestry trees) group extant organisms by ancestral relatedness to render the history of hierarchical lineage branching events within an evolving system. These relationships reveal the evolutionary trajectories of populations through a genotypic or phenotypic space. As such, phylogenies open a direct window through which to observe ecology, differential selection, genetic potentiation, emergence of complex traits, and other evolutionary dynamics in artificial life (ALife) systems. In evolutionary biology, phylogenies are often estimated from the fossil record, phenotypic traits, and extant genetic information. Although substantially limited in precision, such phylogenies have profoundly advanced our understanding of the evolution of life on Earth. In digital systems, we often have the ability to create perfect (or near perfect) phylogenies that reveal the step-by-step process by which evolution unfolds. However, phylogeny tracking and phylogeny-based analyses are not yet commonplace in ALife. Fortunately, a number of software tools have recently become available to facilitate such analyses, such as Phylotrackpy, DEAP, Empirical, MABE, and hstrat.
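
To make "perfect phylogeny tracking" concrete, here is a minimal, self-contained Python sketch of the underlying bookkeeping: every organism records its parent taxon at birth, so exact lineages and common ancestors can be recovered later. All class and function names are illustrative; tools such as Phylotrackpy automate this bookkeeping (plus pruning, serialization, and analysis metrics) at scale.

```python
import itertools

class Taxon:
    """One node in the phylogeny; records its parent at birth."""
    _ids = itertools.count()

    def __init__(self, genotype, parent=None):
        self.id = next(Taxon._ids)
        self.genotype = genotype
        self.parent = parent

    def lineage(self):
        """Walk from this taxon back to the root."""
        node = self
        while node is not None:
            yield node
            node = node.parent

def mrca(a, b):
    """Most recent common ancestor of two taxa."""
    ancestors_of_a = {t.id for t in a.lineage()}
    for t in b.lineage():
        if t.id in ancestors_of_a:
            return t
    return None

# Toy usage: a root genotype splits into two lineages.
root = Taxon("AAAA")
left = Taxon("AAAT", parent=root)
right = Taxon("AATA", parent=root)
grandchild = Taxon("AATT", parent=right)
print(mrca(left, grandchild).genotype)  # -> "AAAA"
```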

Biologists have developed many sophisticated and powerful phylogeny-based analysis techniques. For example, existing work uses properties of tree topology to infer characteristics of the evolutionary processes acting on a population. With an understanding of the differences between biology and artificial life, these approaches can be imported into ALife systems. For example, phylodiversity metrics can be used to detect diversity-maintaining ecological interactions and ongoing generation of significant evolutionary innovations.

This tutorial will provide an introduction to phylogenies, how to record them in digital systems, and use cases for phylogenetic analyses in an artificial life context. We will open with a quick discussion of prior research enabled by and based on phylogenies in digital evolution systems. We will then survey existing phylogeny software tools and lead interactive tutorials on tracking phylogenies in both traditional and distributed computing environments. Next, we will demonstrate measurements and data visualizations that phylogenetic data enables, including Muller plots, phylogenetic topology metrics, and annotated phylogeny visualizations. Lastly, we will discuss open questions and future directions related to phylogenies in artificial life.

Evolving Robot Bodies and Brains in Unity

organized by Frank Veenstra, Emma Stensby Norstein, and Kyrre Glette

The evolution of robot bodies and brains allows researchers to investigate which building blocks are interesting for evolving artificial life. Regardless of the evolutionary approach used, the supplied building blocks influence how artificial organisms will behave. What should these building blocks look like? How should we associate control units with these building blocks? How should we represent the genomes of these robots? In this tutorial we (1) discuss previous approaches to evolving robots and virtual creatures, (2) outline how Unity simulations and Unity’s ML-Agents package can be used as an interface, and (3) present our approach to evolving bodies and brains using Unity.

There are many existing solutions tailored to experimenting with body-brain co-optimization, and we have been using several simulation approaches to evolve modular robots that are represented by directed trees (directed acyclic graphs). Since evolving bodies can be relatively complex, we give participants an overview of existing methods and invite them to get some guided hands-on experience using Unity ML-Agents for evolving robots. The Unity ML-Agents toolkit is an open-source toolkit for game developers, AI researchers, and hobbyists that can be used to train agents using various AI methods. Similar to OpenAI Gym, it supplies a Python API through which one can optimize agents in a variety of environments. The Unity ML-Agents toolkit provides an easy-to-use interface that is flexible enough to allow for quick design iterations when evolving robot bodies and brains.
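
As a rough illustration of the directed-tree representation mentioned above, here is a small, hypothetical Python sketch of a modular-robot genome and a mutation operator over it. The module types, attachment-slot count, and mutation rate are invented for the example and are not the encoding used in the tutorial.

```python
import random
from dataclasses import dataclass, field

# Hypothetical module vocabulary for a modular robot.
MODULE_TYPES = ["brick", "joint", "sensor"]

@dataclass
class ModuleNode:
    """One body module; edges to children are attachments."""
    module_type: str
    children: list = field(default_factory=list)

def random_tree(depth=3, branch_prob=0.5):
    """Sample a random directed-tree body plan."""
    node = ModuleNode(random.choice(MODULE_TYPES))
    if depth > 0:
        for _ in range(2):  # up to two attachment slots per module
            if random.random() < branch_prob:
                node.children.append(random_tree(depth - 1, branch_prob))
    return node

def mutate(node, rate=0.1):
    """Randomly re-type modules anywhere in the tree."""
    if random.random() < rate:
        node.module_type = random.choice(MODULE_TYPES)
    for child in node.children:
        mutate(child, rate)

genome = random_tree()
mutate(genome)
```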

This tutorial is aimed at researchers who are interested in simulating the evolution of the bodies and brains of robots. The tutorial will provide an overview of existing approaches to evolving robot bodies and brains, and demonstrate how to design and incorporate control units, morphological components, environments, and objectives. Participants will learn how to use Unity ML-Agents as a tool with evolutionary algorithms, and how to create and incorporate their own robotic modules for evolving robots.

For an example of a master’s student’s work with this approach, see: https://www.youtube.com/watch?v=qaAJ8SJDAIs.

Cellular Automata, Self-Reproduction & Complexity

organized by Prof. Chrystopher L. Nehaniv (University of Waterloo, Canada)

Cellular automata (CAs) are a widely applied model of massively parallel computation based on local neighborhoods and updates, introduced by John von Neumann and Stanislaw Ulam. The tutorial introduces the concept of cellular automata through examples and overviews basic results, such as for the Game of Life (which is computationally universal and shows emergent properties), as well as variations of the cellular automata concept, including random Boolean networks, synchronous and asynchronous automata networks, and discrete dynamical systems with external inputs. Von Neumann, one of the grandfathers of Artificial Life, also used CAs as a formal tool to study the logic of life and complexity, asking in particular: How is self-reproduction possible? How is it possible for a mechanistic system to reproduce itself? How is it possible for something to produce something as complex as or more complex than itself? Prior to the discovery of the structure of DNA and its relation to these questions, von Neumann gave several different solutions. It turns out that some correspond to life as we now know it, and others perhaps to life as it could be. We survey his solutions to these problems and discuss progress since then on self-reproducing systems. We also survey open problems for Artificial Life research that go beyond the state of the art in the synthesis of self-reproducing systems, and offer challenges for researchers entering the field, including those related to the concepts of individuality, robustness, evolution, and self-production (autopoiesis).
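
Since the Game of Life serves as the tutorial's running example of emergence from purely local rules, here is a minimal NumPy sketch of its synchronous update rule on a toroidal grid, seeded with a glider:

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbors of every cell via wrap-around shifts.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A glider -- the canonical emergent pattern -- drifting across a 10x10 grid.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```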

Website

TBA

How to build Research Software: Python

organized by Penn Faulkner Rainford

Research runs in cycles. These may be PhDs or grants, but eventually blocks of research come to an end. When they do, there are many things we can consider outputs. As well as papers, qualifications, and outreach, we have software. While software is listed as an output, it is more often abandoned. Code is often not reused and rarely maintained. This is because the code, written for a purpose and normally by a single author, isn’t usable by others. It can be hard to port to a different computer or operating system. It can be hard to tell how to use its functions or algorithms. It can even be incorrect.

These problems are challenging to correct after authors leave projects. It is much easier to create reusable code in the first place: code with clear definitions and descriptions, stored accessibly, and tested for distribution and use. This is achievable. There are many tools and methods that assist development towards these goals. By improving our software output, we make the next cycle easier, as software can be reused rather than re-implemented.

In the tutorial we will look at general methods for structuring, documenting, storing, and testing code. We will introduce general and Python-specific tools for supporting developers.

Methods: Versioning and Version Control, Documenting in Code, and Test-Driven Development.

General Tools: Git, GitHub and Sphinx.

Python tools: IDEs, unittest, and Poetry (a minimal unittest example is sketched below).
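
As a taste of the test-driven style the tutorial covers, here is a minimal, self-contained example using unittest from the standard library; the function under test is invented for illustration:

```python
import unittest

def normalize(values):
    """Scale a list of numbers so they sum to 1."""
    total = sum(values)
    if total == 0:
        raise ValueError("values must not sum to zero")
    return [v / total for v in values]

class TestNormalize(unittest.TestCase):
    def test_sums_to_one(self):
        self.assertAlmostEqual(sum(normalize([1, 2, 3])), 1.0)

    def test_rejects_zero_total(self):
        # Failure cases deserve tests too.
        with self.assertRaises(ValueError):
            normalize([0, 0])

if __name__ == "__main__":
    unittest.main()
```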

We will work through examples, but attendees are encouraged to bring their own projects to work on. The final part of the session (20-30 minutes) will be set aside for questions and to help people with their own code.

Untangling Cognition: How Information Theory can demystify brains

organized by Clifford Bohm

Information Theory (IT) defines mathematical methods to analyze similarities and dissimilarities within data. A number of researchers have developed techniques using IT to study cognition. With IT, we can measure: what an agent knows about its environment (Representation), where information is stored (Fragmentation), how distributed it is (Smearedness), how information flows through an agent (Data Flow and Transfer Entropy), and the complexity of an agent (Φ). These methods can help explain behavior and otherwise intractable structures.

This tutorial will provide an introduction to IT and cover ways that IT can be used to understand simple cognitive structures (ANNs, GRNs, Markov Brains, and other “digital brains”), whether evolved or trained. The tutorial will focus on discrete entropy (although continuous entropy will be introduced). The tutorial is designed to develop concepts and intuitions, so, while some mathematical details will be necessary, they will be kept to a minimum. After this tutorial, attendees should have a sense of what can and cannot be asked with IT, and of how they might employ IT in their own research.
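
For a sense of how little machinery the discrete case needs, here is a self-contained Python sketch of Shannon entropy and mutual information estimated from observed state sequences, e.g., measuring how much a sensor node "knows" about an environment variable. The data are invented for illustration.

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy H(X) in bits, from a sequence of discrete states."""
    counts = Counter(xs)
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from paired observations."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Toy example: a sensor that copies the environment most of the time.
env    = [0, 1, 0, 1, 0, 1, 0, 1]
sensor = [0, 1, 0, 1, 1, 0, 0, 1]
print(entropy(env))                     # 1.0 bit
print(mutual_information(env, sensor))  # how much the sensor "knows"
```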

This tutorial is primarily designed for individuals who work with some form of artificial cognition (i.e., digital brains, Gene Regulatory Networks, artificial/spiking networks) and are interested in analyzing the behavior of these systems, but should also serve as a broad introduction to IT for those with other interests.

Website

TBA

Self-Organizing Systems with Machine Learning

organized by Bert Chan & Alexander Mordvintsev

In this tutorial, we are going to demonstrate how we can use machine learning as a practical tool to design self-organizing systems, training emergent patterns to perform desired tasks or achieve predefined goals. These systems are composed of large numbers of locally interacting “microscopic” agents (e.g., grid cells, particles) that work together towards a shared common goal (e.g., matching a target pattern, or surviving in a virtual environment) and form dynamical “macroscopic” patterns that are believed to be performing morphological computation. Such systems are often described as demonstrating self-organization of collective intelligence.

We are going to put emphasis on cases of hierarchical organization of virtual matter, in which higher-level structures demonstrate the characteristics of agent-like behavior. Examples include: Neural Cellular Automata (NCA), where self-organizing patterns can be trained using gradient descent and back-propagation through time to reproduce a texture or auto-classify symbols, with capabilities of spontaneous regeneration and noise resistance; the complex adaptive system Lenia, where agent-like localized patterns (or “virtual creatures”) are trained for agent-agent and agent-environment interactions inside a virtual environment; Flow Lenia, where a mass-conservation law is incorporated into Lenia such that energy constraints and species-species interactions become feasible; and Particle Lenia, where the concept of energy minimization is introduced into a particle-system variant of Lenia.
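
As a rough sketch of the NCA mechanics described above, the NumPy code below implements a single forward update: a fixed perception stage (identity plus Sobel filters per channel), a small per-cell network, and a stochastic residual update. The weights here are untrained placeholders; in actual NCA work they are learned by gradient descent with back-propagation through time, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C, HID = 32, 32, 8, 32   # grid size, state channels, hidden units

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
sobel_y = sobel_x.T
identity = np.zeros((3, 3)); identity[1, 1] = 1.0

def conv2d(channel, kernel):
    """3x3 convolution with wrap-around, via shifted copies."""
    out = np.zeros_like(channel)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * np.roll(channel, (dy, dx), (0, 1))
    return out

# Placeholder per-cell update network (randomly initialized, untrained).
W1 = rng.normal(0, 0.1, (3 * C, HID))
W2 = np.zeros((HID, C))  # zero-init so the very first update is a no-op

def nca_step(state, fire_rate=0.5):
    # Perception: each cell sees identity + x/y gradients of every channel.
    percep = np.concatenate(
        [np.stack([conv2d(state[..., c], k) for c in range(C)], axis=-1)
         for k in (identity, sobel_x, sobel_y)], axis=-1)
    update = np.maximum(percep @ W1, 0) @ W2    # per-cell MLP with ReLU
    mask = rng.random((H, W, 1)) < fire_rate    # stochastic, async updates
    return state + update * mask                # residual update

state = np.zeros((H, W, C)); state[H // 2, W // 2, :] = 1.0  # single seed cell
state = nca_step(state)
```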

Writing research software well and collaboratively in Python: best practices around software sustainability, collaborative work, and open and reproducible science

organized by Nadine Spychala

Frequently, code used for generating scientific results is either not available, or not readily implementable or sufficiently understandable for reproducing results and/or building knowledge incrementally on top of them. This results in redundant work and, in the grand scheme of things, slows down scientific progress tremendously. Moreover, code that is not designed to be re-used – and thus scrutinized by others – runs the risk of being flawed and thus of producing, in turn, flawed results. Finally, it hampers collaboration – something that becomes increasingly important as people from all over the world become more inter-connected, more diversified and specialized knowledge is produced, and the sheer number of people working in science increases. To manage those developments well and avoid working in silos, it is important to have structures in place that enable people to join forces, and to respond to and integrate each other’s work well.

Science at large is still operating within the general doctrine of doing work alone rather than collaboratively. This may hold for Artificial Life (ALife) in particular, as, for reasons of cultural and scientific practice, special value may be placed on individual rather than collaborative research outputs. At the same time, there is a growing need within the ALife community to foster collaborative patterns and meet reproducibility and open science standards. This includes

  • ensuring that results are correct (by using, e.g., unit testing) and reproducible (e.g., by using configuration files; see the sketch after this list), and that documentation is sufficient,
  • ensuring the correctness of software at scale (by, e.g., scaling up unit testing),
  • ensuring software can be run in various environments (by, e.g., using containers), and that it is improved and managed over its lifetime,
  • applying knowledge of software architecture and design, which helps avoid major code refactoring following minor changes to the code base,
  • open-sourcing a project on, e.g., GitHub.
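
A minimal sketch of the configuration-file pattern from the first bullet: every experiment parameter, including the random seed, lives in version-controlled configuration rather than in the code, so a run can be reproduced exactly. The file contents and parameter names here are invented for illustration.

```python
import json
import random

# Hypothetical experiment configuration; in practice this would live in a
# version-controlled file such as config.json next to the code.
CONFIG_TEXT = '{"seed": 42, "population_size": 100, "mutation_rate": 0.01}'

def load_config(text):
    return json.loads(text)

def run_experiment(cfg):
    random.seed(cfg["seed"])  # fixing the seed makes the run reproducible
    population = [random.random() for _ in range(cfg["population_size"])]
    return sum(population) / len(population)

cfg = load_config(CONFIG_TEXT)
print(run_experiment(cfg))  # same config -> same result, on any machine
```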

This Carpentries-style tutorial will address a selection of the major points mentioned above. As the tutorial is still under construction, the foci and exact contents are yet to be determined.

For folks looking for educational resources beyond the 90-minute tutorial time, I recommend having a look at this course from the Carpentries Incubator – from which the tutorial described here certainly takes inspiration – involving an estimated time commitment of 2-4 hours spread over 5 days.

This tutorial is targeted at anyone who has basic programming skills in Python and aims to learn more about best practices and new ways to tackle research software development in ALife. It is suitable for all career levels – from students to (very) senior researchers – for whom writing code is part of the job, and who either are eager to up-skill and learn things anew, or would love a proper refresher and/or new perspectives on research software development.

Overall, I aim to contribute to better software and collaborative practices – and therefore better research – in the field of ALife.

I get support for this tutorial from my fellowship at the Software Sustainability Institute.

Dynamical Consciousness: Filling the explanatory gap

organized by Antoine Pasquali

Almost three decades have passed since Chalmers introduced the hard problem of consciousness in 1995 – the question of why cognitive systems sometimes have phenomenal experience when engaged in information processing – and yet we are still struggling to find an exact answer. Indeed, whether we approach this problem philosophically, neurologically, psychologically, or computationally, we only seem to evince, respectively, what consciousness is, and where, when, and how it emerges in the brain. The fifth question – why? – remains mostly unanswered. Indeed, the main theories of consciousness – i.e., the Global Workspace theory, the Higher-Order Thought theory, the Bayesian theory, the Integrated Information theory, and the Radical Plasticity theory – tackle the mystery of consciousness from their respective perspectives, each with a different strength, but unfortunately each susceptible to a different blind spot. Consequently, they lack the explanatory power to address the fifth question – which they either take for granted or ignore entirely. Hence, they tend to focus only on the (yet-not-so) easy problems. What, then, would it take for us to set aside our differences and try to combine these theories as pieces of a larger puzzle? Once the full picture is revealed, it becomes possible to decipher consciousness as a combined set of mechanisms, rather than as a list of seemingly unrelated properties. This way we may finally see that consciousness is in fact subtended by processes that are extended both in space and in time, and that are inherently dynamical. This way we may elaborate a meta-theory, a combination of the others, that agrees with them in every aspect but does not lack the explanatory power to solve Chalmers’s hard problem.

Should you wish to embark on such a journey with me, please attend this tutorial. It is intended for beginners and intermediates in notions relating to consciousness. It might also interest advanced attendees to some extent, should they wish to consolidate the foundations of their knowledge of the theoretical landscape or learn more about the ability of these theories to address different aspects of human cognition. I will provide the audience with a solid explanation of what the theories entail and what their strengths and shortcomings are. Starting from founding experiments and interpretations of their results, I will help attendees build a strong comprehension of the basic concepts underlying these theories, resulting in the acquisition of an advanced level of knowledge about the field. In addition, I will challenge their ability to evaluate the theories in terms of their explanatory power, and to combine them to finally bridge the gap between the seemingly independent properties that characterize consciousness. Finally, I will show how the new theory of consciousness that I have developed builds on this literature to address all questions about consciousness at once rather than separately, and how it can eventually be used to crack open the hard problem and ultimately fill the explanatory gap of consciousness.