Ruprecht-Karls-Universität Heidelberg

Hosted by the Internationales Wissenschaftsforum Heidelberg, sponsored by MathComp

International Symposium
"Scientific Computing for the Cognitive Sciences"

Poster Session

One of the main goals of the symposium is to foster scientific discussion among the participants. As the number of slots for talks and presentations is naturally limited, we encourage the preparation of posters. Speakers are also encouraged to prepare posters: discussions after a talk are often easier when backup material is at hand.
The choice of poster content is free: a poster may give an overview of a research area, present specific results, or discuss challenges and open problems.

The poster session will take place on Thursday in the early afternoon and will be combined with an extended on-site lunch break. The posters will, however, remain on display during the whole symposium.

Poster Abstracts:

Carola Barth, Holger Diedam, Michael Engelhart, Joachim Funke, and Sebastian Sager

Short bio

Holger Diedam received his diploma in mathematics in 2009 from the University of Heidelberg, spending research periods at the Ho Chi Minh City University of Technology, Vietnam, and at INRIA Grenoble, France. He currently holds a doctoral stipend from the Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences. His research interests include nonlinear mixed-integer dynamic optimization and automatic model generation with applications in economics.

See the short bios of Carola Barth, Joachim Funke and Sebastian Sager in the invited speaker section, and of Michael Engelhart in his poster contribution abstract.

Abstract

Optimization to Measure Performance - Details on the TOBAGO Software Framework

We present details on TOBAGO, the Tailorshop Optimization-Based Analysis and data Generation tOol. The computational tool has been developed to provide an automatic and user-friendly interface between experimental data from the Tailorshop scenario on the one hand, and a mathematical model and modern optimization tools on the other.

This approach allows us to analyze participants' performance automatically even in a microworld as complex as the Tailorshop. Furthermore, we can design an objective indicator function and hence refute the assumption that the ``fruit fly of complex problem solving'', the Tailorshop scenario that has been used in dozens of published studies, is not accessible to mathematical analysis.

TOBAGO is a publicly available software framework that allows automatic processing of a large number of test cases and supports the user with a wide range of analysis and plotting functions. As an example, we solve optimization problems that differ in size and initial conditions, based on real-world experimental data from 12 rounds played by 174 participants. The goals are twofold: first, the solutions give additional insight into a complex system, which facilitates the analysis of a participant's performance in the test. Second, we propose a methodology to automate this process by providing a new criterion based on the solution of a series of optimization problems.

The underlying optimization problems are nonconvex mixed-integer nonlinear programs (MINLPs) and pose challenges to the optimization community.
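
The structure of these problems can be illustrated with a toy model. The following sketch is our illustration only, not the actual Tailorshop equations; it assumes Python with Pyomo and an MINLP solver such as Bonmin installed:

    # Toy mixed-integer dynamic optimization problem over 12 rounds
    # (hypothetical variables; the real Tailorshop model is far richer).
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, Binary, maximize, SolverFactory)

    m = ConcreteModel()
    T = range(12)                                  # 12 rounds, as in the data
    m.stock = Var(T, within=NonNegativeReals)      # continuous state
    m.buy = Var(T, within=Binary)                  # integer decision per round

    def dynamics(m, t):                            # simple state transition
        if t == 0:
            return m.stock[0] == 100.0
        # the bilinear term stock*buy makes the problem nonconvex
        return m.stock[t] == 0.9 * m.stock[t-1] + 0.2 * m.stock[t-1] * m.buy[t-1]
    m.dyn = Constraint(T, rule=dynamics)

    m.obj = Objective(expr=sum(m.stock[t] - 30.0 * m.buy[t] for t in T),
                      sense=maximize)
    SolverFactory('bonmin').solve(m)               # any MINLP solver will do

A participant's decisions can then be compared against the optimal objective value, which is the idea behind the indicator function mentioned above.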


Michael Engelhart, Joachim Funke, and Sebastian Sager

Short bio

Michael Engelhart studied mathematics with minors in physics and economics at the University of Heidelberg. He received his diploma in mathematics in 2009. Currently he is a PhD student in the junior research group ``Mathematical and Computational Optimization'' of the Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences. His PhD project is about optimization-based analysis and training of human decision making, and his research interests include mixed-integer nonlinear programming, mixed-integer dynamic optimization, and the application of these methods in the context of complex problem solving.

See the short bios of Joachim Funke and Sebastian Sager in the speakers section.

Abstract

Challenges and Perspectives in Optimization-based Analysis of Human Decision Making

While computer-based test scenarios have been widely used in complex problem solving research over the years, the application of optimization methods to them is fairly new. Although optimal solutions may yield an ideal performance indicator, such an application was typically not anticipated when these test scenarios were designed. The mathematical models explicitly or implicitly contained in these scenarios may therefore lack properties that optimization methods require. Nevertheless, the use of optimization opens up new possibilities for the analysis of human decision making.
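
As a schematic illustration (our rendering of the general setting, not a formula from the poster), the analysis requires solving problems of the form

\[
  \max_{x,\,u,\,w} \; \Phi(x_N)
  \quad \text{s.t.} \quad
  x_{k+1} = f(x_k, u_k, w_k), \quad x_0 = \bar{x}_0, \quad w_k \in \{0,1\}^{n_w},
\]

where $x_k$ are the states of the scenario in round $k$, $u_k$ continuous and $w_k$ integer decisions, and $\Phi$ a performance measure. Derivative-based solvers need $f$ and $\Phi$ to be sufficiently smooth in $x$ and $u$, which is exactly the kind of property that scenarios designed without optimization in mind may violate, e.g., through tabulated or discontinuous transition functions.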

Based on one of the most famous test scenarios in complex problem solving, the Tailorshop, we show which challenges may arise when using optimization methods in the context of complex problem solving and how they can be addressed. Furthermore, we give an outlook on possible new aspects of human decision making fostered by these methods.


Ido Erev and Iris Nevo

Short bio

Ido Erev (PhD in Psychology from UNC in 1990) is the ATS Women's Division Professor of Industrial Engineering and Management and the head of the Max Wertheimer Minerva Center for Cognitive Research at the Technion. His research focuses on exploring, modeling, and deriving the implications of human adaptation to economic incentives.

Iris Nevo is a graduate student at the Max Wertheimer Minerva Center for Cognitive Research at the Technion.

Abstract

On Surprise, Change, and the Effect of Recent Outcomes

Leading models of learning processes rest on the assumption that learners tend to select the alternatives that led to the best recent outcomes. The current research highlights three boundaries of this "recency assumption." Analyses of the stock market and of simple laboratory experiments suggest that positively surprising obtained payoffs and negatively surprising forgone payoffs reduce the rate of repeated choice. In addition, all previous outcomes but the most recent have a similar effect on future choices. We show that these results, and the other robust properties of learning processes, can be captured with a simple refinement of the leading models.

The present investigation starts with an apparent inconsistency between the recency effects documented in the stock market and in basic learning research. Analyses of financial markets reveal that the volume of trade tends to increase after a sharp price increase, and also after a sharp price decline (see the review in Karpoff, 1987). A higher volume of trade implies that owners are more likely to sell and potential buyers are more likely to buy. Thus, the data suggest a four-fold response pattern to recent outcomes: owners appear to exhibit negative recency after obtained gains (they behave as if they expect a price decrease after a large price increase), but positive recency after a loss (they expect another decrease after a large decrease). Potential buyers appear to exhibit the opposite pattern: positive recency after a large forgone gain (they expect another increase after a large gain that they missed), and negative recency after a forgone loss.

Basic learning research appears to reflect a simpler effect of recent outcomes. Most studies document a robust positive recency effect (see Estes, 1976; Biele et al., 2009; Barron & Yechiam, 2009): people tend to select the alternative that led to the best outcome in the previous trials. This pattern is consistent with the law of effect (Thorndike, 1898) and is assumed by most learning models (see Bush & Mosteller, 1956; Erev & Roth, 1998; Sutton & Barto, 1998; Camerer & Ho, 1999; Fudenberg & Levine, 1999; Marchiori & Warglien, 2008; Dayan & Niv, 2008; and a review in Erev & Haruvy, in press).

The most natural explanation for this apparent inconsistency is that the basic properties of human learning are only one of many factors that affect behavior in the stock market. The current analysis focuses on a less natural explanation. It considers the possibility that the financial data reflect an important behavioral regularity that has been ignored by basic learning research. The attempt to capture this regularity led us to focus on the role of surprising outcomes. Specifically, we hypothesize that "surprise triggers change." This hypothesis is consistent with the stock market data: large price changes are surprising, and for that reason they increase trade (change implies trade). In addition, the surprise-triggers-change hypothesis can explain the fact that most learning studies document positive recency: the indications of a positive recency effect were obtained in analyses that focus on the aggregated recency effect (and do not examine this effect contingent on the level of surprise).

The first part of the current paper tests the surprise-triggers-change hypothesis in simple binary choice experiments. The analysis continues with an exploration of the implications of the current results for the modeling of learning.
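
To make the hypothesis concrete, the following sketch shows one way a surprise term could refine a standard repeat-your-last-choice learning rule. It is our illustration under stated assumptions, not the authors' model:

    # Sketch: an agent mostly repeats its last choice, but a payoff that
    # deviates strongly from its running estimate (a surprise) raises the
    # probability of switching to another alternative.
    import random

    def simulate(payoffs, n_trials=100, base_switch=0.05, surprise_weight=0.3):
        arms = list(payoffs)
        choice = random.choice(arms)
        estimate = {a: 0.0 for a in arms}      # running mean payoff per arm
        count = {a: 0 for a in arms}
        history = []
        for _ in range(n_trials):
            r = payoffs[choice]()              # obtained payoff
            count[choice] += 1
            surprise = abs(r - estimate[choice])
            estimate[choice] += (r - estimate[choice]) / count[choice]
            history.append((choice, r))
            p_switch = min(1.0, base_switch + surprise_weight * surprise)
            if random.random() < p_switch:     # surprise triggers change
                choice = random.choice([a for a in arms if a != choice])
        return history

    # Usage: a safe arm versus a risky arm with surprising payoffs
    history = simulate({"safe": lambda: 1.0,
                        "risky": lambda: random.choice([0.0, 2.5])})

Under such a rule the aggregated behavior still shows positive recency, while surprising outcomes locally increase switching.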


Martin Felis, Katja Mombaur, Hideki Kadone, and Alain Berthoz

Short bio

Martin Felis received his diploma in mathematics with a specialization in optimal control from the University of Heidelberg in 2009. Since then he has been a Ph.D. student of the Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences and a member of the research group Optimization in Robotics and Biomechanics. His research interests include multi-body dynamics, screw theory, animation, and optimal control methods.

Hideki Kadone received his diploma in information science and technology with a specialization in mechano-informatics from the University of Tokyo in 2008. Since then he has been a postdoctoral researcher at the Laboratory of Physiology of Perception and Action at the Collège de France. His research interests include emotion and gaze in locomotion and their application to robotic systems.

Alain Berthoz is currently Professor at the Collège de France, a member of the French Academy of Sciences, the Academia Europaea, the American Academy of Arts and Sciences, and other academies (Belgium, Bulgaria). He is the Director of the Laboratory of Physiology of Perception and Action of the CNRS. He is an engineer, psychologist, and neurophysiologist and has spent his career at the CNRS as a scientist in the field of cognitive neuroscience. He is the author of more than 250 papers in international journals and of several books.

See the short bio of Katja Mombaur in the speaker section.

Abstract

Modeling and Identification of Emotional Aspects of Locomotion

The study of emotional facial expressions and of emotional body language is currently receiving a lot of attention in the cognitive sciences. In this project, we are not studying particular emotional gestures, but rather focus on the implicit bodily expression of emotions during standard motions such as walking forwards. An underlying assumption of our work is that all human motion is optimal in some sense and that different emotions induce different objective functions, which result in different ``deformations'' of normal motion. Our analysis is based on whole-body dynamic models of the walking subjects as well as our previous work on generating complex locomotion by means of optimal control techniques and on the identification of human objectives by means of inverse optimal control.

In this case study we acquired motion capture data of two subjects performing various acted emotional walking gaits. We created a multi-body model for the 3D dynamics simulation that includes segments for legs, trunk, arms, and head. For the optimization we formulated an optimal control problem that approximates the motion capture data. This problem was then solved with the highly efficient software package MUSCOD-II, developed at the IWR in Heidelberg, which uses a direct multiple-shooting discretization scheme. The resulting approximated motions give us insight into emotional human motion not only on the kinematic level but also in terms of the acceleration and force profiles of the joints, which differ greatly between the recorded emotions.
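
Schematically (our rendering of the approach, not the exact MUSCOD-II formulation), the fitting problem has the form of a least-squares optimal control problem

\[
  \min_{x(\cdot),\,u(\cdot)} \int_0^T \bigl\| h(x(t)) - \eta(t) \bigr\|_2^2 \, dt
  \quad \text{s.t.} \quad \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \quad x(0) = x_0,
\]

where $\eta(t)$ denotes the motion capture measurements, $h$ maps the model state to the measured quantities, and $f$ is the whole-body multi-body dynamics. Direct multiple shooting discretizes $u$ on a time grid, integrates the dynamics on each subinterval, and enforces continuity between subintervals as constraints of the resulting nonlinear program.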


Ranan D. Kuperman

Short bio

Ranan D. Kuperman is Senior Lecturer of International Relations at the University of Haifa.
    Education: BS Chemistry, Hebrew University; MA Political Science, CUNY Graduate School; PhD Political Science, Tel Aviv University.
    Research: dynamic decision making, strategic interaction, network analysis, international political economy.
    Recent book: Cycles of violence: The evolution of the Israeli decision regime governing the use of limited military force. Lexington Books, 2005 (hard), 2007 (paper).
    Visiting Professor: Rice University; San Diego State University; University of Canterbury.
    Fellowships: Minerva (University of Tübingen); DAAD (Peace Research Institute Frankfurt).
    Awards: Chechik award for national security research; Bar Lev award for national security research.

Abstract

An On-Line Simulator Module for the Study of Decision Making in Complex Environments

This poster describes an online software module that provides researchers with the capability to design interactive microworld simulators on an internet server in order to study how people engage with complex environments. The module is structured in such a manner that the designer can create, by default, a very simple action-reaction simulator. However, the module includes a toolbox of applications that can increase the complexity of the decision making process in three different ways. First, there are a number of procedures for creating the structure of the environment (objects, variables, and functions) and how it changes. Second, the module allows the designer to determine what environmental data will be presented to subjects operating the simulator, when it can be observed by the subjects, and in what formats (text, tables, graphs). Third, it is possible to determine which policies will be available to the subjects operating the microworld and when they will be available.

The simulator module will also include special features for researchers to monitor decision making practices. Thus, it is possible to record the type of information that is perceived by subjects operating the simulator and when this information is observed. Another important advantage of this module is that it provides a high degree of control over the differences between alternative simulations and thereby allows more systematic comparisons between them. This is because all microworld simulators produced with the aid of this module are constructed from a standardized set of computational components.
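
A minimal sketch of such an action-reaction core, in Python with hypothetical names (the abstract does not publish the module's actual interface), might look as follows:

    # Sketch of a microworld core: a state, a transition function supplied by
    # the designer, and a log of every action and observation.
    class Microworld:
        def __init__(self, variables, transition):
            self.state = dict(variables)    # environment variables
            self.transition = transition    # (state, action) -> new state
            self.log = []                   # record for later analysis

        def observe(self, keys):
            view = {k: self.state[k] for k in keys}   # designer-controlled view
            self.log.append(("observe", view))
            return view

        def act(self, action):
            self.state = self.transition(self.state, action)
            self.log.append(("act", action))

    # Usage: a one-variable action-reaction world
    def transition(state, action):
        change = 5 if action == "advertise" else -2
        return {"demand": state["demand"] + change}

    world = Microworld({"demand": 50}, transition)
    world.act("advertise")
    print(world.observe(["demand"]))        # {'demand': 55}

The toolbox described above would then correspond to richer transition functions, observation filters, and policy menus layered on such a core.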


Michael L. Raschke, Thomas Schlegel and Thomas Ertl

Short bio

Dipl.-Phys. Michael Raschke studied physics at the Universität Stuttgart and the Ruprecht-Karls-Universität Heidelberg with a specialization in robotics and computer science. After completing his diploma thesis at the Interdisciplinary Center for Scientific Computing in Heidelberg, he is now working on his PhD at the Institute for Visualisation and Interactive Systems at the Universität Stuttgart. His research interests are artificial intelligence, cognition, simulation, and visualization.

Dr. Thomas Schlegel leads the Interactive Systems Research Team and has held a postdoctoral position at VIS since 2008. He received his Dipl.-Inf. and his doctorate from the University of Stuttgart. From 2002 he worked as a European and national research project leader at the Fraunhofer Society and led the research cluster for Production Organisation and Management. Dr. Schlegel was a member of the Executive Board of the I*PROMS Network of Excellence and scientific coordinator of INT-MANUS, as well as coordinator of the research projects IWARD, KOMPASS, and LIKE. His research field is interactive systems and human-computer interaction, with a focus on model-based user interface generation, multimodal user interfaces, and semantic models in interaction.

Prof. Thomas Ertl is the head of the Visualisation and Interactive Systems Institute (VIS) as well as of the Visualisation Institute of the Universität Stuttgart (VISUS) and one of the leading R&D specialists in the field of visualization. He received a master's degree in computer science from the University of Colorado at Boulder and a PhD in theoretical astrophysics from the University of Tübingen. Currently, Dr. Ertl is a full professor of computer science at the Universität Stuttgart, Germany. Prior to that he was a professor of computer graphics and visualization at the University of Erlangen, where he led the scientific visualization group. Besides that, he is a cofounder and a member of the board of science+computing ag, a Tübingen-based IT company. His research interests include visualization, computer graphics, and human-computer interaction in general, with a focus on volume rendering, flow visualization, multiresolution analysis, parallel and hardware-accelerated graphics, large datasets, and interactive steering. Dr. Ertl is coauthor of more than 300 scientific publications and has served as a reviewer for most of the conferences and journals in the field. Since 2007 he has been Editor-in-Chief of the IEEE Transactions on Visualization and Computer Graphics and Vice President of the Eurographics Association.

Abstract

An Interdisciplinary Approach for the Study of Cognitive Aspects in Visualization

A few years ago most visualization techniques required high-performance computers. Due to the rapid growth of processing power in PCs, PDAs, smartphones, and cell phones, these techniques are now available on such devices. This progress raises many new questions about the usability and user experience of computer interfaces and data visualization. Closely linked to usability are questions of cognitive processes, e.g. how human beings derive the principal idea of the information content of a presented diagram that only shows lines, symbols, and other geometrical elements. If this “transportation process” of information content is designed optimally, the user will correctly understand the meaning of the presented data in a very short period of time. Interesting issues arise: Do state-of-the-art presentation techniques for information optimally support the understanding of the graphical output? What are the main processes when human beings see diagrams, figures, and graphical outputs of software systems in general? Which algorithms can be used to model these processes? To study the interaction process between users and graphical output systems, we will combine results and techniques from three disciplines: cognitive science, artificial intelligence, and visualization.

The first step of this interdisciplinary approach is an evaluation of state-of-the-art visualization techniques. This will lead to a first understanding of which type of visualization is useful in a specific context, for specific data, and for a specific user group. Based on these studies we will develop a first version of a cognitive model that represents the feature analysis of a visualization by a human being. Interesting questions are: Which elements of a visualization are important for the user to get an idea of the presented data? In which order are these elements perceived, and to which mental description does this order lead? How can interactive exploration support the user in grasping the idea of the visualization, and how can this process be described [7]? In which notation can the cognitive models be formulated? Next, the cognitive model is implemented in software using artificial intelligence algorithms. An important general question is to what degree of completeness this mapping is possible. A prototype could be a so-called “virtual observer” that supports developers in optimizing their visualizations under constraints of the context and the knowledge of the user. Finally, this interdisciplinary combination of cognitive science, artificial intelligence, and visualization opens new perspectives in all three disciplines.

References

[1] C. Ware: Information Visualization: Perception for Design. Elsevier/Morgan Kaufmann, 2004.
[2] C. Ware: Visual Thinking for Design. Morgan Kaufmann Series in Interactive Technologies, 2008.
[3] Interview with John McCarthy: "What is Artificial Intelligence?". http://www-formal.stanford.edu/jmc/whatisai/whatisai.html, July 2010.
[4] S. Russell, P. Norvig: Artificial Intelligence: A Modern Approach. Pearson Education.
[5] M. Raschke, K. Mombaur, A. Schubert: "JacksonBot - Design, Simulation and Optimal Control of an Action Painting Robot". In F. Huang and R.-C. Wang (Eds.): ArtsIT 2009, LNICST 30, pp. 120-127, 2010.
[6] C. D. Hansen, C. R. Johnson (Eds.): The Visualization Handbook. Elsevier, 2005.
[7] T. Schlegel, M. Raschke: "Interaction-Cases: Model-based Description of Complex Interactions in Use Cases". To appear in the proceedings of the IADIS Interfaces and Human Computer Interaction Conference 2010.

Jenny Koppelt, Ronny Scherer, and Rüdiger Tiemann

Short bio

Jenny Koppelt studied applied natural science at the TU Bergakademie Freiberg, where she received her diploma and was awarded the Georg Agricola Medal in 2006. She is a Ph.D. student in chemistry education and a research associate at the Interdisciplinary Centre for Educational Research in Berlin. Furthermore, she is a member of the Association of Chemistry and Physics Education. Her research focuses on the assessment and modeling of complex problem solving competences in chemistry.

Ronny Scherer studied chemistry and mathematics at the Humboldt-Universität zu Berlin and holds the first state examination for teaching these two subjects at secondary schools. He started his Ph.D. studies in chemistry education in September 2008. Furthermore, he is a member of the Association of Chemistry and Physics Education and of the Psychometric Society. His research focuses on the assessment of competence development in the field of complex problem solving within the domain of chemistry.

Prof. Rüdiger Tiemann studied chemistry and physics and holds both state examinations for teaching these subjects at secondary schools. He also holds a Ph.D. in chemistry education and has postdoctoral experience in chemistry and physics education. He is a member of the European Association for Research on Learning and Instruction, the National Association for Research in Science Teaching, and the Association of Chemistry and Physics Education, where he has been a board member since 2005. Prof. Tiemann is a reviewer for international and national conferences and journals. He is the head of the department of chemistry education at the Humboldt-Universität zu Berlin, newly established in 2005, and holds a full research university professorship. His research is based on empirical classroom research using video studies and computer-based assessment, both in the field of problem solving.

Abstract

Assessing Students' Problem Solving Abilities in Chemistry Using a Virtual Laboratory

Problem solving is defined as ``[...] an individual's capacity to use cognitive processes to confront and resolve real, cross-disciplinary situations where the solution path is not immediately obvious and where the literacy domains or curricular areas that might be applicable are not within a single domain of mathematics, science or reading'' (OECD, 2003, p. 156). However, different studies indicate that the ability to solve problems is associated with domain-specific knowledge (e.g. Funke & Frensch, 2007; Hambrick, 2004). To solve chemical problems it is usually necessary to identify and manipulate variables in chemical systems and to monitor the resulting changes. Thus, such problems can be described as complex. With a large sample it is impossible to analyze problem solving abilities by observing students carrying out real experiments; the only alternative is a computer-based assessment simulating an appropriate chemical experiment. Hence, in this project, complex problem solving competences in the field of chemistry are investigated using a virtual laboratory.

We assume that a problem solving process can be characterized by four steps: understanding and characterizing the problem (PUC), representing the problem (PR), solving the problem (PS), and reflecting on and communicating the solution (SRC). Each step can be described by three levels (Koppelt & Tiemann, 2009). The steps and levels are operationalized for chemical contents. To evaluate this model, a complex and domain-specific problem has been developed. The task belongs to the field of organic chemistry and was realized in a computer-based test adapted for 15-year-old students (grade 10). The students were asked to synthesize an ester meeting several demands and to maximize its yield (displacement of the chemical equilibrium). The maximization of the yield in particular shows the complex character of the problem: this subproblem can only be solved by systematic variation of the initial system. To analyze the students' problem solving abilities, all activities were logged and interpreted using a coding manual.

The complex problem consists of different subtasks comprising several items that can be assigned to the problem solving steps. Each item has been rated using partial credits related to the different levels. Furthermore, the problem was designed in such a way that the tasks for each step can be handled at each level (Tiemann & Koppelt, 2009).

Based on the evaluated model of problem solving competences and the virtual laboratory developed for students of grade 10, further virtual laboratories have been designed to assess the development of students' problem solving abilities in chemistry. Another complex problem has been developed to validate the competence model for grade 10 within a different chemical context. The computer-based tests contain different tasks specified for grades 8, 10, and 12 (Scherer & Tiemann, 2010).

We chose a common-item design, which can be used to transform person ability parameters onto a common scale. The estimation of person and item parameters follows an item response theory approach such as the Rasch model or the partial credit model. To adequately link these parameters, different methods of vertical scaling (Kolen & Brennan, 2004) will be carried out.
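
For reference, the Rasch model specifies the probability that a person with ability $\theta$ solves an item of difficulty $b$ as

\[
  P(X = 1 \mid \theta, b) = \frac{\exp(\theta - b)}{1 + \exp(\theta - b)},
\]

and the partial credit model generalizes this to items with ordered score categories, as used for the partial-credit ratings described above. Because $\theta$ and $b$ live on the same scale, common items allow abilities from different grades to be linked.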

Based on a grade-to-grade growth definition, the computer-based assessments contain grade-specific as well as common items, which form an anchor test between grades 8 and 10 as well as between grades 10 and 12. Grade-specific items cover students' performances in complex problem solving within a grade. Common items are used to link students' performances with the objective of comparing complex problem solving abilities across grades. The computer-based assessments are vertically scaled with grade 10 as the baseline and grades 8 and 12 as the linkage groups (Tiemann, Koppelt & Scherer, 2010).

The poster shows the development of the virtual laboratory based on the theoretical model and the analysis of students' problem solving abilities based on the log files. Furthermore, the other virtual laboratories assessing the development of students' problem solving competences are presented.

References

    Funke, J. & Frensch, P. A. (2007). Complex Problem Solving - The European Perspective: 10 Years After. In D. H. Jonassen (Ed.), Learning to solve complex scientific problems (pp. 25--47). New York/London: Lawrence Erlbaum Associates.
    Hambrick, D. Z. (2004). The role of domain knowledge in higher-level cognition. In O. Walter & R. Engle (Eds.), Handbook of understanding and measuring intelligence (pp. 361--372). Thousand Oaks, California: Sage Publications.
    Kolen, M. J. & Brennan, R. L. (2004). Test equating, scaling, and linking. New York: Springer Science+Business Media.
    Koppelt, J. & Tiemann, R. (2009). Computerbasierte Erfassung dynamischer Problemlösekompetenz [Computer-based assessment of complex problem solving competence]. In D. Höttecke (Ed.), Chemie- und Physikdidaktik für die Lehramtsausbildung [Chemistry and physics education for teacher training] (pp. 265--267). Berlin: LIT.
    OECD (2003). The PISA 2003 assessment framework - Mathematics, reading, science and problem solving knowledge and skills. Paris: OECD Publications.
    Scherer, R. & Tiemann, R. (2010). Die Entwicklung komplexer Problemlösekompetenz im Chemieunterricht [The development of complex problem solving competences in chemistry]. In D. Höttecke (Ed.), Entwicklung naturwissenschaftlichen Denkens zwischen Phänomen und Systematik [Development of scientific thinking between phenomena and taxonomy] (pp. 428--430). Berlin: LIT.
    Tiemann, R. & Koppelt, J. (2009). Problem solving competencies in chemistry. Paper presented at the National Association for Research in Science Teaching (NARST) Annual International Conference, Garden Grove, CA, USA, April 17--21.
    Tiemann, R., Koppelt, J. & Scherer, R. (2010). Computer based assessment of complex problem solving in chemistry. Paper presented at The European Conference on Educational Research (ECER), Helsinki, Finland. [accepted]

Alexander Schubert, Katja Mombaur, and Michael L. Raschke

Short bio

Alexander Schubert studied mathematics and physics in Heidelberg, where he graduated with state board examinations in 2009 and 2010. After a period as a research student at the IWR, he became a member of the HGS MathComp in February 2010, where he is currently working on his PhD thesis in the research group Optimization in Robotics and Biomechanics. He is interested in the structures of cognitive motion and their applications in humanoid robots, as well as in mathematical models for cognitive processes in general and their potential for robotics.

See the short bio of Katja Mombaur in the invited speaker section.

Abstract

Art Robots and Cognitive Models

We are interested in the relationship between the movements performed by artists and the resulting artworks, and we aim to evaluate and model how the characteristics of these motions and the underlying emotions of the artist are reflected in the painting. Additionally, we are studying the psychological and cognitive nature of art perception in order to correlate image properties resulting from different motion types with the aesthetic experiences of viewers. We are especially investigating modern artworks inspired by the Action Painting style of Jackson Pollock.

This interdisciplinary research touches not only aspects of arts and cognitive sciences but also of robotics and scientific computing.

As a first approach, we developed JacksonBot, a small robot arm that splashes paint from a container at the end effector onto the canvas. The paintings produced by this platform rely on a combination of algorithmically generated robot arm motions with the random effects of the splashing paint. We have evaluated the effect of different shapes of input motions on the resulting painting. To compute the robot joint trajectories necessary to move along a desired end effector path, we used an optimal control based approach to solve the inverse kinematics problem. In this context, the robot platform acted purely as a tool of the human artist or programmer.
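
The inverse kinematics step can be illustrated with a planar two-link arm. The following Python sketch uses illustrative link lengths and a simple least-squares objective rather than the full optimal control formulation:

    # Inverse kinematics as an optimization problem: find joint angles q so
    # that the end effector reaches a target point, with a small penalty on
    # deviating from the previous configuration q0.
    import numpy as np
    from scipy.optimize import minimize

    L1, L2 = 0.3, 0.25          # link lengths in metres (assumed values)

    def end_effector(q):
        """Forward kinematics of a planar two-link arm."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    def inverse_kinematics(target, q0=np.zeros(2)):
        objective = lambda q: (np.sum((end_effector(q) - target) ** 2)
                               + 1e-3 * np.sum((q - q0) ** 2))
        return minimize(objective, q0).x

    q = inverse_kinematics(np.array([0.4, 0.2]))
    print(q, end_effector(q))   # joint angles and the reached position

Solving such a problem along the whole end effector path, with the arm dynamics as constraints, yields the joint trajectories mentioned above.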

Currently we are moving to a more sophisticated robotics platform based on the small humanoid robot Nao by Aldebaran Robotics. From a mechanical perspective, the advantages of this platform are its ability to generate truly dynamic arm motions more precisely and to mimic the actions of a human artist moving around the painting. Based on our previous work on motion optimization, we will pursue different optimization strategies for the robot arm movements and study their effect on the painting. In addition, the goal is to implement feedback of the current status of the painting and, based on our cognitive studies of human art perception, to equip the robot with models that allow it to evaluate the artwork autonomously.

This ongoing work is based on a collaboration with psychologists (Prof. Joachim Funke, Marieke Bechtold) and artists (Nicole Suska) as well as art historians.


Christiane Schwieren

Short bio

Christiane Schwieren is Professor of Behavioral Economics at the Alfred Weber Institute of Economics at Heidelberg University, Germany. She received her PhD in economics from Maastricht University, the Netherlands, and holds a Diploma in Psychology and a Master in Political Science from Heidelberg University, Germany. Before coming back to Heidelberg, she had a position as Assistant Professor at the Department of Economics and Business of Universitat Pompeu Fabra, Barcelona, Spain. Her current research focuses on age, personality, and gender differences, group decision-making, effects of different incentive schemes, and neuroeconomic foundations of age differences. She mainly uses experimental methods in her research.

Abstract

Demographic Change and Heterogeneity in Economic Behavior

Demographic change and the aging society are buzzwords in current debates on many economic themes. Microeconomic research has started to study whether old and young subjects behave differently with respect to decision making in economic and social settings. The main approach so far has been to use classical paradigms, well tested with younger participants, for the study of older participants. In these studies, few differences have been found (see, e.g., Kovalchik et al., 2004; Charness & Villeval, 2007). Cognitive psychological research has shown that elderly people pursue goals in decision making other than those of younger individuals (emotional and social aims rather than informational aims; e.g., Carstensen et al., 1999; Mather, 2006) and are more motivated to maintain a positive affective state (e.g., Mather, 2006). With respect to risk attitudes, results using standard tasks are so far inconclusive (see, e.g., Charness & Villeval, 2007; Carstensen & Hartel, 2006): some find that older people are more risk averse than younger people, while others do not replicate this. It is assumed that older people are rather trusting towards other people and less risk averse with respect to social risk (see Mather, 2006). In our own research we have so far studied differences in the reaction to uncertainty and to competitive incentives, and found some differences and many similarities.

Nearly all of the research on age (and gender!) differences so far stops at the point where differences between young and old people in a certain domain can be shown. With respect to the current debate on the consequences of demographic change, however, it is important to understand the consequences of a change in the age composition of our workforce and society, given the specific behavioral tendencies of old and young people. To do this, standard methods of economic research might not be sufficient, and new modeling techniques will be necessary to estimate the consequences of demographic change for society and the workplace.


Dimitri Volchenkov and Bettina Bläsing

Short bio

Bettina Bläsing:

    since 2007: Responsible investigator (Motion Intelligence) and Scientific Board member of the Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University
    since 2007: Scientific coordinator (Graduate School topic area Skill Webs) at the Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University
    since 2006: Research scientist at the Neurocognition and Action Research Group (Prof. Dr. Thomas Schack), Faculty of Psychology and Sport Science, Bielefeld University
    2005--2006: Postdoctoral project at the Wolfgang Köhler Primate Research Center, Max-Planck-Institute for Evolutionary Anthropology, and Scientific coordinator at the Institutes of Psychology, University of Leipzig
    2004--2005: Science journalist and scientific editor (Bertelsmann Foundation, Deutscher Ärzteverlag, Die Welt, Berliner Zeitung, GEO)
    1999--2004: PhD at Bielefeld University, Department of Biological Cybernetics and Theoretical Biology (Prof. Dr. Holk Cruse)
    1997--1998: MSc Applied Animal Behaviour and Animal Welfare at the University of Edinburgh and Roslin Institute, Scotland
    1990--1997: Biology (Diplom) at Bielefeld University

Dimitri Volchenkov obtained his Ph.D. in theoretical physics at Saint Petersburg State University (Russia) and habilitated at the CNRS Centre de Physique Théorique (Marseille, France). He has worked at Texas A&M University (USA), the Zentrum für Interdisziplinäre Forschung (Bielefeld, Germany), the Centre de Physique Théorique (Marseille, France), and the Bielefeld-Bonn Stochastic Research Center (Germany). He is a researcher at the Center of Excellence Cognitive Interaction Technology (Bielefeld, Germany). His research interests are stochastic nonlinear dynamics, plasma turbulence, urban spatial networks and their impact on poverty and environments, stochastic analysis of complex networks, and the physics of dance.

Abstract

Spatio-temporal Analysis of Full-body Movement in Classical Dance

Classical dance is a highly technical art with its own specialized movement vocabulary, developed over more than three centuries. Excellence in dance, characterized by maximum perfection and expressiveness of movement at minimum energetic input and visible effort, arises from the optimized coordination of all body parts, resulting from the advantageous cooperation of the cognitive and sensorimotor control systems. This exquisite balance in spatio-temporal coordination can be ruined by even a slight imperfection in movement planning and execution. Our aim in this study was to analyze the full-body movement profiles over time of dancers performing classical pirouette turns.

To identify the spatio-temporal characteristics of dance figures, we collected 3D kinematic data with a VICON motion capture system with 12 infrared cameras for ballet movements performed by 11 students (aged 13-17 years, 9 of them girls) aspiring to a career as professional dancers. The relative dynamics of 42 retro-reflective markers fixed at defined positions on the dancers' bodies was investigated by Procrustes analysis, a method widely used in shape matching and shape recognition. The specific kinetic energy of motor actions (described by the markers' trajectories in space) was subjected to a spatio-temporal analysis based on the biorthogonal decomposition of the multivariate signal.
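
The Procrustes comparison of two marker configurations can be sketched in a few lines of Python (random data for illustration; the study used 42 markers in 3D):

    # Procrustes analysis removes translation, scaling, and rotation, then
    # reports the residual shape difference ("disparity") between two
    # marker configurations.
    import numpy as np
    from scipy.spatial import procrustes

    frame_a = np.random.rand(42, 3)   # marker positions at one time step
    frame_b = np.random.rand(42, 3)   # marker positions at another time step

    mtx_a, mtx_b, disparity = procrustes(frame_a, frame_b)
    print(disparity)                  # 0 would mean identical shapes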

The spatio-temporal movement characteristics of the full-body kinematic model reveal a complex hierarchical structure of movements in classical dance. This structure is characterized by a strong coupling between instantaneous movement curvature and velocity and by a strong hierarchy of spatio-temporal configurations, amassing the specific kinetic energy of motor actions and providing a natural ground for modularity and a compact representation of human movements. The hierarchy of spatio-temporal configurations, measured by the normalized entropy of human movements, might explain how a single correction given verbally by an expert can change the quality of movement performance dramatically. Entropy, quantifying the space-time complexity of the dancers' movements, relates classical dance to the informational aspects of modern communication theory, together with other interactions between humans via speech, gesture, and music. We have found that for all trials the magnitudes of entropy fluctuated between 0.7 and 0.98, fitting well with the entropy range of natural human languages.
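
A normalized entropy of this kind can be sketched as follows; this is our reading of the biorthogonal decomposition as a singular value decomposition of the space-time signal matrix, and the study's exact normalization is not given in the abstract:

    # Normalized entropy of the energy distribution over spatio-temporal
    # modes: 0 if one mode carries all energy, 1 if energy is spread evenly.
    import numpy as np

    signal = np.random.rand(500, 126)        # time steps x (42 markers * 3 coords)
    s = np.linalg.svd(signal - signal.mean(axis=0), compute_uv=False)
    energy = s**2 / np.sum(s**2)             # energy fraction per mode
    entropy = -np.sum(energy * np.log(energy + 1e-12)) / np.log(len(energy))
    print(entropy)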

Our approach to analyzing and visualizing complex full-body movement data provides detailed insight into the coordination of body parts over time. Based on a specifically adapted visualization technique, it additionally enables scientists and practitioners to gain an intuitive understanding of the quality of movement performance.

